Continuous Threat Exposure Management (CTEM) is the shiny new gizmo in today’s cybersecurity toolbox. Anointed by Gartner as an innovation trigger, CTEM promises to deliver a security posture based on detecting actual exploitability and continuous remediation. At the heart of real-world CTEM practice lies the concept of a purple team, with red and blue teams in constant engagement.
At the same time, the potential of AI in cybersecurity remains largely untapped. AI can enhance the effectiveness, efficiency and responsiveness of many cybersecurity measures – it can help protect cloud-based systems and assets as part of network monitoring, vulnerability management, threat detection, incident response and threat validation. It’s a perfect tool for supporting CTEM in the cloud.
In the legacy intrusion detection workflow, intruders trip alarms that generate alerts to a security operations center (SOC), where intrusions are detected, triaged, cataloged and addressed.
Problems with this sequence include: 1) it is too slow for today’s threatscape; 2) it lacks the scalability to keep pace with the cloud; and 3) the SOC is too noisy, receiving up to 10,000 alerts per day (Ponemon), overwhelming resources and giving intruders ample opportunity to inflict damage, modify configurations, install malware and exfiltrate sensitive information.
A preemptive approach combines the expertise of security practitioners with technology (including AI) to probe networks and validate defenses, continuously and dynamically. Don’t wait for the black hats to find holes in your cloud security – find them first, comprehensively and continuously.
One approach to preemptive cybersecurity pits an intruding “red team” against a defending “blue team.” Human and digital actors collaborate, resulting in a net “purple” security approach.
As with on-premises exercises, a cloud-based red team performs reconnaissance and scanning/enumeration, then attempts exploitation. Using the discovered topology, the reds conduct penetration testing, pursuing attacks informed by the intelligence they have gleaned. For its part, the blue team works to close off those avenues, reduce attack surfaces and note exploits that cannot be addressed directly. Rinse and repeat until red progress is halted. Such exercises inform and train both security practitioners and AIs, and they add realism to an organization’s cybersecurity mix, especially in support of CTEM.
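To make the scanning/enumeration step concrete, here is a minimal sketch of a TCP reachability check a red team tool might run. The target address and port list are placeholders, not part of any particular toolkit, and this should only ever be pointed at assets you are authorized to test.

```python
# Minimal sketch of the scanning/enumeration step of a red team exercise.
# The target and port list are illustrative placeholders.
import socket

COMMON_PORTS = [22, 80, 443, 3389, 8080]  # illustrative subset, not exhaustive

def scan_host(host: str, ports=COMMON_PORTS, timeout=1.0):
    """Return the ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # 127.0.0.1 stands in for an in-scope, authorized target.
    print(scan_host("127.0.0.1"))
```

In a real exercise the output of a step like this feeds the next phase: discovered services become candidates for exploitation attempts, and the blue team’s job is to make that list shrink on each pass.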
Red vs. blue team exercises are an important component of preemptive security, but they also have shortcomings.
AI is a powerful ally for purple team security, helping overcome these challenges and shortcomings while integrating new capabilities – all at a reasonable cost. The following are areas where AI offers an edge to a CTEM framework:
In CTEM implementation, AI provides a nexus for analyzing vulnerabilities, exposures and threats. AI can also automatically integrate discovered threats into red team tactics and, conversely, into blue team responses, with greater accuracy, lower risk and lower cost than manual implementation.
AI can digest intelligence from diverse sources to provide insights into emerging threats, furnishing red and blue teams with intel on new attack techniques and defense measures. AI can also analyze far larger volumes of network traffic and system logs than human analysts can, identifying anomalies and speeding threat identification and response, both during exercises and under actual attack.
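As a rough illustration of that log and traffic analysis, the sketch below uses an unsupervised outlier model (scikit-learn’s IsolationForest) on synthetic flow features. The feature set and synthetic data are assumptions standing in for whatever a real pipeline would extract from NetFlow or SIEM exports.

```python
# Minimal sketch of AI-assisted anomaly detection over network flow records,
# assuming each record has been reduced to numeric features
# (bytes sent, bytes received, duration, distinct destination ports).
import numpy as np
from sklearn.ensemble import IsolationForest  # assumes scikit-learn is installed

rng = np.random.default_rng(0)

# Synthetic stand-in for features parsed from traffic logs.
normal = rng.normal(loc=[5_000, 8_000, 30, 3], scale=[1_000, 1_500, 10, 1], size=(1_000, 4))
suspicious = np.array([[500_000, 1_000, 600, 40]])  # large upload, long session, many ports
flows = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
labels = model.predict(flows)     # -1 marks an outlier, 1 marks an inlier
print(np.where(labels == -1)[0])  # indices of flows flagged for analyst review
```

The point is not the particular model but the workflow: the machine sifts the bulk of the telemetry so that human defenders, or downstream response automation, only see the handful of flows worth a closer look.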
AI bridges the gap between merely listing vulnerabilities and identifying exploitable ones. A focus on exploitability is key to prioritizing remediation by probable impact, helping blue teams concentrate on critical issues.
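A toy sketch of that prioritization follows. The fields and weights are assumptions for illustration, not a standard scoring scheme; the idea is simply that evidence of exploitability and exposure should outweigh raw severity.

```python
# Illustrative sketch of exploitability-driven prioritization; fields, weights
# and CVE identifiers are hypothetical and would be tuned to your own estate.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss_base: float        # 0-10 severity from the advisory
    exploit_observed: bool  # e.g., a public PoC or in-the-wild exploitation
    internet_facing: bool   # exposure discovered during red team recon

def priority(f: Finding) -> float:
    """Weight raw severity by evidence of exploitability and exposure."""
    score = f.cvss_base
    if f.exploit_observed:
        score *= 1.5   # exploitable beats merely severe
    if f.internet_facing:
        score *= 1.3
    return score

findings = [
    Finding("CVE-A", 9.8, exploit_observed=False, internet_facing=False),
    Finding("CVE-B", 7.5, exploit_observed=True, internet_facing=True),
]
for f in sorted(findings, key=priority, reverse=True):
    print(f.cve_id, round(priority(f), 1))
```

Here the lower-CVSS finding with a working exploit on an exposed asset outranks the higher-CVSS finding that nobody can currently reach – exactly the ordering a CTEM program wants.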
AI can enhance the realism of simulated attacks and assess the effectiveness of defenses, streamlining penetration testing. Conversely, AI can automate intrusion response, isolating affected systems and blocking malicious traffic, minimizing response times and reducing damage.
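On the response side, the sketch below shows the shape of automated containment. The `quarantine_host` and `block_ip` functions are placeholders standing in for whatever EDR, firewall or cloud provider API an environment actually exposes, and the risk threshold is an illustrative assumption.

```python
# Minimal sketch of automated containment driven by a model-scored alert.
# The containment functions are placeholders, not calls to a real product API.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("responder")

def quarantine_host(hostname: str) -> None:
    # Placeholder: in practice, call an EDR or cloud API to move the
    # instance into an isolated security group.
    log.info("Quarantined %s", hostname)

def block_ip(address: str) -> None:
    # Placeholder: in practice, push a deny rule to a firewall or WAF.
    log.info("Blocked %s", address)

def respond(alert: dict, risk_threshold: float = 0.9) -> None:
    """Contain automatically only when model confidence is high; otherwise escalate."""
    if alert["risk_score"] >= risk_threshold:
        quarantine_host(alert["host"])
        block_ip(alert["source_ip"])
    else:
        log.info("Escalating %s to the SOC for triage", alert["id"])

respond({"id": "alrt-001", "host": "web-03", "source_ip": "203.0.113.7", "risk_score": 0.95})
```

Keeping a confidence threshold in the loop matters: low-confidence detections still go to the SOC, while high-confidence ones are contained in seconds instead of hours.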
Artificial intelligence can monitor user and system behavior to detect aberrations that indicate incidents in progress. AI can also generate detailed reports and analyses of compromises, offering both teams insights into exploitation strategies and defensive tactics.
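Behavioral monitoring often reduces to comparing current activity against a subject’s own baseline. The sketch below is a minimal, assumed example using a simple z-score over hourly activity counts; real deployments use richer features and models, but the principle is the same.

```python
# Minimal sketch of behavioral baselining: flag activity that deviates sharply
# from a user's own history. The counts and the z-score cutoff are illustrative.
from statistics import mean, stdev

def is_aberrant(history: list[int], current: int, cutoff: float = 3.0) -> bool:
    """Flag `current` if it sits more than `cutoff` standard deviations above baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return (current - mu) / sigma > cutoff

# Hypothetical user who normally downloads a handful of files per hour
baseline = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]
print(is_aberrant(baseline, 60))  # True: 60 downloads in an hour warrants a look
print(is_aberrant(baseline, 5))   # False: within normal variation
```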
AI facilitates red-blue collaboration by sharing insights, findings and recommendations. It can keep both teams “honest” through comprehensive exchanges of intelligence and tactics, supporting continuous improvement.
Despite the many ways that AI can enhance preemptive security, it is no magic bullet. To support CTEM and purple security exercises, AI must be adequately trained, must learn from the back-and-forth of purple team war games, and must be able to gauge the exploitation potential of discovered vulns. Together, AI and purple security offer actionable input and ongoing orientation for a CTEM framework.