The AI executive order: What AppSec teams need to know
2023-11-30 23:39:44 Author: securityboulevard.com(查看原文) 阅读量:5 收藏

robot-eo-ai-security-appsec

The White House’s sweeping executive order (EO) on artificial intelligence has put the onus on software vendors to implement proactive measures for assessing and mitigating potential security risks and biases in products that use AI.

The full implications of the EO will likely vary depending on the extent to which an organization’s products and services might incorporate AI mechanisms or on whether they are dependent on them. 

But at a high level, the EO puts pressure on organizations that produce AI-enabled, AI-generated, or AI-dependent products to adopt new application security (AppSec) practices for assessing these systems for safety, security, and privacy. They will need to account for risks, such as those from cyberattacks, adversarial manipulation of AI models, and potential theft or replication of proprietary algorithms and other sensitive data. Required security measures include penetration testing and red-team procedures to identify potential vulnerabilities and other security defects in finished products.

The EO also imposes other requirements, including the need for developers of AI systems to guard against the potential for bias to creep into their models, as well the need to maintain data that allows regulators and other stakeholders to audit their development techniques.

Some of the requirements in the new EO on AI are likely to be more relevant for builders of foundational AI systems — as the EO describes them — rather than others. But security teams reviewing AI initiatives are still in the hot seat. Here’s what teams responsible for AppSec need to know.

[ See related: OWASP Top 10 for LLM bridges the gap between AppSec and AI | See Webinar: Secure by Design: Why Trust Matters for Risk Management ]

A push toward AI security standards 

Darren Guccione, co-founder and CEO at Keeper Security, said developers will be required to show that their public-facing AI systems are safe, secure, and trustworthy. And all of this will need to be done before an AI or AI-enabled system becomes available to the public. 

“This EO provides clarity to the subject of accountability in how AI is developed and deployed across organizations.”
Darren Guccione

Standardized tools and tests will be developed and implemented to provide governance over new and existing AI systems, Guccione said. With the widespread adoption of AI systems, this means every organization will need to consider the EO.

“Given the range of recommendations and actions included, organizations will likely feel the effects of this EO across all sectors, regardless of where they are in their AI journey or what type of AI system is being used.”
—Darren Guccione

Implementing the EO remains a work in progress

Many of the details describing how software developers will implement the EO’s requirements remains unclear. However, the National Institute of Standards and Technology (NIST) will now develop standards and best practices for developing safe and trustworthy AI systems. This will include standards for red-team testing of AI systems before they are publicly released. The Department of Homeland Security will ensure that organizations in critical-infrastructure sectors apply these standards when using internally developed or externally sourced AI systems.

Ashlee Benge, director of threat intelligence at ReversingLabs, said software development organizations and publishers will need to follow NIST’s AI standards and guidelines in order to secure government contracts.

“By laying out clear use guidelines and requiring transparency when it comes to security testing, this EO will likely force a deeper consideration of safety measures than may have originally been taken.”
Ashlee Benge

Benge said that for consumers of AI systems, data privacy — with regard to the use of personally identifiable information (PII) used to train AI models — is a serious concern.

“This is a potentially major issue for any developer of software with AI capabilities.”
—Ashlee Benge

Marcus Fowler, CEO of Darktrace Federal, said the EO is a reminder that it is not possible to achieve AI safety without cybersecurity. The edict highlights the need for action on data security, control, and trust on the part of those developing AI systems. 

“It’s promising to see some specific actions in the executive order that start to address these challenges.”
Marcus Fowler

Organizations will need to implement systems and safeguards to ensure that red teaming exercises are useful, he said. They will need to implement a continuous process for testing AI security and safety through a product’s life cycle. Fowler said the EO’s emphasis on red teaming and penetration testing is relevant to any discussion about AI and security. 

“Red teaming exercises aim to test all the layers of an organization’s security posture. In the case of AI systems, that means testing for security problems, user failures, and other unintended questions.” 
—Marcus Fowler

The EO also pushes developers of AI systems to implement Secure by Design, introduced this year by the Cybersecurity and Infrastructure Security Agency (CISA) to shift ownership of security of software from consumers to producers, for every step of an AI system’s creation and deployment. “Security is a challenge for the here and now, as well as a necessity for tackling longer term risks,” Fowler said.

A road map and guidelines for AI security 

Resources are now available for organizations looking to develop their own road map to meeting the EO’s objectives. The U.K.’s National Cyber Security Center (NCSC) and the U.S. CISA have released guidelines for secure AI system development. The document provides guidelines that organizations can use to implement Secure by Design principles, as well as the secure development, deployment, operation, and maintenance of AI systems. Each of the four sections in the road map drills down into specific measures that developers of AI systems can take.

The section on Secure by Design highlights the importance of threat modeling and staff awareness, while the one on secure development focuses on the need to secure the software supply chain, perform asset identification, and track and maintain detailed documentation. The document notes:

“Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorized parties.”

CISA has separately released its own road map for AI, which could serve as a model for developers looking for hints on how U.S. agencies plan to implement the requirements of the new EO on AI.

An SBOM for AI?

Matt Rose, field CISO at ReversingLabs, said the effective requirements of the EO are similar to that for a software bill of materials (SBOM), given that the EO requires developers to document everything that goes into an AI system, including whether it was created by an AI or a large language model (LLM).

“If the data that an AI system is pulling from is tainted in any way, it can basically create problems and potentially major security breaches.”
Matt Rose

The EO requires being “diligent and granular” about AI safety and security requirements, to ensure that users of AI systems and all other stakeholders have clear visibility into the technology that will house things like government and military secrets, as well as other information of critical national importance.

“It goes hand in hand with being fully transparent and self-attesting with an SBOM.”
—Matt Rose

Legacy AST is not up to the job of securing AI development

However, the EO goes beyond attestation, Rose said. Developers will need to consider the security and safety of the AI tools on which their products are built, as well as the potential for hackers and other malicious actors to poison or attack these systems using that AI technology.

One challenge will be to understand where a generative AI system is getting its information. “They say the Internet is full of fake news. If your AI system is using data scraped from the Internet to come up with directed actions, then the information is only as good as the data it was sourced from,” Rose said.

Addressing AI-related security challenges is ultimately about the ability to look for and understand the behavior and the source of AI-generated code in applications, Rose said. That requires more than the code scanning in traditional application security testing (AST). Rose said software composition analysis (SCA), SBOMs, and complex binary analysis of software packages are essential to securing AI systems.

The EO on AI is an important first step

The EO is important step in getting industry to pay attention to security and safety issues as they roll out AI systems, said Darktrace’s Fowler. But additional guidance will be needed to help organizations get ahead of risk.

“But as the government moves forward with regulations for AI safety, it’s also important to ensure that it is enabling organizations to build and use AI to remain innovative and competitive globally and stay ahead of the bad actors.”
—Marcus Fowler

See ReversingLabs Field CISO Matt Rose’s explainer covering the EO on AI:

*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by Jai Vijayan. Read the original post at: https://www.reversinglabs.com/blog/the-ai-executive-order-what-appsec-teams-need-to-know


文章来源: https://securityboulevard.com/2023/11/the-ai-executive-order-what-appsec-teams-need-to-know/
如有侵权请联系:admin#unsafe.sh