Dan Guido, CEO
The second meeting of the Commodity Futures Trading Commission’s Technology Advisory Committee (TAC) on July 18 focused on the effects of AI on the financial sector. During the meeting, I explained that AI has the potential to fundamentally change the balance between cyber offense and defense, and that we need security-focused benchmarks and taxonomies to properly assess AI capabilities and risks.
- The widespread availability of capable AI models presents new offensive opportunities that defenders must now account for. AI will make certain attacks dramatically easier, upsetting the equilibrium of offense and defense. We must reevaluate our defenses given this new reality.
- Many think AI is either magical or useless, but the truth lies between these extremes. AI augments human capabilities; it does not wholly replace human judgment and expertise. One key question is: can a mid-level practitioner operate at an expert level with the help of AI? Our experience suggests yes.
- AI models can do many helpful things: decompile code into high-level languages, identify and trigger bugs, and write scripts to launch exploits. But to leverage them effectively, we must ask the right questions (e.g., with knowledge of the subject matter and prompt engineering techniques; see the first sketch after this list) and evaluate progress correctly (is AI actually better than state-of-the-art techniques?).
- It’s also necessary to choose the right problems. AI is better for problems that require breadth of knowledge and where mistakes are acceptable (e.g., document this function, write a phishing email). It’s not great at problems that require mastery and correctness (e.g., find and exploit this iOS 0-day).
- Bug bounties, phishing defenses, antivirus, IDS, and attribution will be among the first fields impacted, as AI confers a greater advantage on attackers in the near term. For example, AI can mass-produce tailored phishing messages for every target, in their native language and without errors. We can’t simply regulate these problems away; alignment efforts and attempts to restrict model availability won’t work, since impressively capable open-source models are already here.
- What’s needed now is systematic measurement of these models’ capabilities, focused on cybersecurity rather than programming. We need benchmarks that let us compare AI against existing state-of-the-art tools and human experts, and taxonomies that map advancements to opportunities and risks. The second sketch after this list outlines what such a benchmark could look like.
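To illustrate the "ask the right questions" point, here is a minimal sketch of a prompt-engineered code-review request. It assumes the OpenAI Python client (openai>=1.0) with an `OPENAI_API_KEY` in the environment; the decompiled snippet, the prompts, and the model name are hypothetical examples, not a recommendation:

```python
# A minimal sketch, assuming the OpenAI Python client (openai>=1.0) and an
# OPENAI_API_KEY in the environment. The decompiled snippet and prompts are
# hypothetical illustrations of packing subject-matter context into a query.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DECOMPILED = """
int sub_401a60(char *input) {
    char buf[64];
    strcpy(buf, input);   // no bounds check on attacker-controlled input
    return process(buf);
}
"""

resp = client.chat.completions.create(
    model="gpt-4",  # example model name; substitute your model of choice
    messages=[
        # Subject-matter context in the system prompt steers the model toward
        # expert-level analysis instead of a generic code summary.
        {"role": "system",
         "content": "You are a vulnerability researcher reviewing decompiled C."},
        # A precise, multi-part request yields more useful output than
        # "is this code secure?"
        {"role": "user",
         "content": "Explain what this function does, name any memory-safety "
                    "bugs, and describe an input that would trigger them:\n"
                    + DECOMPILED},
    ],
)
print(resp.choices[0].message.content)
```

The value is in the framing: the same model given only the raw snippet, with no role or task structure, tends to produce a shallow description rather than an actionable bug report.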
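And to make the benchmark point concrete, here is a minimal, hypothetical harness: each task pairs a code snippet with a ground-truth weakness label, and any classifier (a static-analysis tool or an LLM wrapper) can be scored on the same corpus. All names (`Task`, `score`, `baseline_tool`, `llm_classifier`), tasks, and labels below are illustrative assumptions, not an actual benchmark:

```python
# A minimal sketch of a cybersecurity benchmark harness. Tasks, labels, and
# function names are hypothetical; a real benchmark would draw many tasks
# from audited codebases and compare AI against state-of-the-art tools.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """One benchmark item: a vulnerable snippet and its ground-truth label."""
    snippet: str
    cwe: str  # ground-truth weakness class, e.g. "CWE-89"

TASKS = [
    Task('query = "SELECT * FROM users WHERE name = \'" + name + "\'"', "CWE-89"),
    Task("strcpy(dest, src);", "CWE-120"),
    Task("pickle.loads(request.data)", "CWE-502"),
]

def score(classify: Callable[[str], str]) -> float:
    """Fraction of tasks where the classifier's label matches ground truth."""
    hits = sum(1 for t in TASKS if classify(t.snippet) == t.cwe)
    return hits / len(TASKS)

# Stand-ins: the same interface lets us score a conventional tool and an
# AI model on identical tasks, which is the comparison we actually need.
def baseline_tool(snippet: str) -> str:
    return "CWE-120" if "strcpy" in snippet else "CWE-UNKNOWN"

def llm_classifier(snippet: str) -> str:
    raise NotImplementedError("call your model of choice here")

if __name__ == "__main__":
    print(f"baseline accuracy: {score(baseline_tool):.0%}")
```

The design choice that matters is the shared interface: scoring humans, tools, and models on identical tasks is what turns anecdotes about AI capability into a measurable comparison.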
The full video of the meeting is available here.
Finally, I am honored to have been named co-chair of the Subcommittee on Cybersecurity. I look forward to continuing our work with the committee as we study the risks and opportunities of AI, supply chain security, and authentication technology in the finance industry.
Read our prior coverage of the CFTC TAC’s first meeting, which focused on blockchain risks. For our work on AI-enabled cybersecurity, see the links below: