At DEF CON, Michael Brown, Principal Security Engineer at Trail of Bits, sat down with Michael Novinson from Information Security Media Group (ISMG) to discuss four critical areas where AI/ML is revolutionizing security. Here’s what they covered:
AI/ML techniques surpass the limits of traditional software analysis
As Moore’s law slows after decades of exponential growth in computational power, traditional methods for finding, analyzing, and patching bugs yield diminishing returns. However, cloud computing and GPUs enable a new class of AI/ML systems that aren’t bound by the same constraints. By pivoting to AI/ML, or combining AI/ML with traditional approaches, we can make new breakthroughs.
Leverage AI/ML to solve complex security problems
When solving computing problems with conventional methods, we take a prescriptive approach: we feed the system an algorithm, and it produces a solution. AI/ML systems, in contrast, are descriptive: we feed them numerous examples of what is right and wrong, and they learn to solve the problem through their own modeling. This is beneficial in areas where we currently rely on highly specialized security engineers to solve complex, ‘fuzzy’ problems, because AI/ML can now step in. That matters because these complex problems are multiplying faster than the specialized expertise available to address them, and traditional methods fall short.
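To make the contrast concrete, here’s a minimal sketch of the same toy detection task solved both ways. This example is not from the interview; the rule, training data, and labels are all illustrative.

```python
# Prescriptive vs. descriptive, on a toy "suspicious string" detector.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Prescriptive (conventional): we hand the system an explicit algorithm.
def is_suspicious_prescriptive(s: str) -> bool:
    return any(marker in s for marker in ("eval(", "exec(", "system("))

# Descriptive (AI/ML): we hand the system labeled examples and let it
# learn its own decision rule from them.
examples = ["eval(input())", "os.system(cmd)", "print('hello')", "x = 1 + 2"]
labels = [1, 1, 0, 0]  # 1 = suspicious, 0 = benign

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(examples, labels)

print(is_suspicious_prescriptive("eval(input())"))  # True, by explicit rule
print(model.predict(["exec(payload)"]))             # learned from examples
```

The prescriptive version catches only what its author anticipated; the descriptive version generalizes from whatever examples it was trained on, for better or worse.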
Securing AI/ML systems is different than securing traditional systems
Engineers at Trail of Bits have been researching ML vulnerabilities, both data-borne and deployment-borne, and have found that the vulnerabilities affecting AI/ML systems differ significantly from those in traditional software. Securing AI/ML therefore requires distinct methods; otherwise we miss large parts of the attack surface. It’s crucial to acknowledge these differences and harden AI/ML systems early in their development to prevent costly, persistent flaws, avoiding the unnecessary mistakes that plagued early iterations of Web 2.0, mobile apps, and blockchain.
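The interview doesn’t enumerate specific flaws, but one widely documented deployment-borne class makes the point: ML model formats built on Python’s pickle execute arbitrary code when a model is loaded. A minimal, self-contained sketch (the payload here is deliberately harmless):

```python
# pickle calls __reduce__ during deserialization, so an attacker who
# controls a "model file" controls what runs when it is loaded.
import pickle

class MaliciousPayload:
    def __reduce__(self):
        # Illustrative and harmless: just prints a message on load.
        return (print, ("arbitrary code ran during model load",))

model_file = pickle.dumps(MaliciousPayload())

# The victim merely "loads a model" -- but the payload executes immediately.
pickle.loads(model_file)
```

No memory-corruption bug is involved; the attack surface is the trust placed in the model artifact itself, which is why traditional software-hardening checklists miss it.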
DARPA-funded projects, like AIxCC, apply AI/ML to traditional cyber issues
DARPA’s AI Cyber Challenge (AIxCC) challenges teams to develop AI/ML systems that address conventional security problems. Our team’s submission, Buttercup, is one of seven finalists advancing to next year’s AIxCC finals, where it will compete on its ability to autonomously detect and patch vulnerabilities in real-world software.
That’s a wrap! Watch the full video here!
Trail of Bits is at the forefront of integrating AI and ML into cybersecurity practices. Through our involvement in initiatives like the AI Cyber Challenge, we are addressing today’s security challenges while shaping the future of cybersecurity.
Reach out to us to learn more: www.trailofbits.com/contact
Explore our AI/ML resources:
- Vulnerabilities and audits
- Research and blog
- Tools
- AI safety and security training