Google Expands Bug Bounty Program to Find Generative AI Flaws
2023-10-28 01:46:39 Author: securityboulevard.com(查看原文) 阅读量:7 收藏

Google, a big player in the rapidly expanding world of Ai, is now offer rewards to researchers who find vulnerabilities in its generative AI software.

Like Microsoft, Amazon, and other rivals, Google is integrating AI capabilities in a widening swatch of its products and services, most recently unveiling new AI-powered features in its Maps software, enhancing driving directions and search results and introducing immersive navigation.

Now Google is taking another step to ensure that AI-based software that it’s rolling into its offerings is more secure. The company is expanding its Vulnerability Rewards Program (VRP) to include attack scenarios specific to generative AI.

At the same time, Google also is broadening its open source security work to make its AI supply chain security more transparent.

Security an Ongoing Concern

This comes at a time when the excitement around generative AI that kicked off almost a year ago when OpenAI released its ChatGPT chatbot is being met with equal levels of concern about security and privacy.

“Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretations of data (hallucinations),” Laurie Richardson, vice president of trust and safety, and Royal Hansen, vice president of privacy, safety, and security engineering at Google, wrote in a blog post this week. “As we continue to integrate generative AI into more products and features, our Trust and Safety teams are leveraging decades of experience and taking a comprehensive approach to better anticipate and test for these potential risks.”

DevOps Unbound Podcast

That said, Richardson and Hansen also noted that outside security researchers are another way to find and fix flaws in the generative AI products, thus the expansion of VRP and the release of guidelines for the security researchers.

“We expect this will spur security researchers to submit more bugs and accelerate the goal of a safer and more secure generative AI,” they wrote.

Looking for Attacks on LLMs

The AI bug bounty program touches on a range of categories for outside researchers to focus on, from attacks designed to use adversarial prompts to influence the behavior of a large-language model (LLM) and manipulate models to those aimed at stealing models and covertly changing the behavior of a model.

“Our scope aims to facilitate testing for traditional security vulnerabilities as well as risks specific to AI systems,” Eduardo Vela, Jan Keller, and Ryan Rinaldi, members of Google Engineering, wrote in a note.

They added that the amount of money rewarded to researchers depends on the severity of an attack scenario and the type of target affected.

Last year, the VRP program paid out more than $12 million in bug bounty rewards.

Google isn’t the first to turn to outside researchers to find vulnerabilities in its AI offerings. Earlier this month, Microsoft unveiled a similar program that will pay researchers $2,000 to $15,000 for flaws found in its AI-powered Bing search engine offerings, including Bing Chat, Bing Chat for Enterprise, and Bing Image Creator.

In April, OpenAI announced a bug bounty program in conjunction with Bugcrowd, which offers crowdsourced programs.

Multi-Pronged Approach to AI Security

The bug bounty follows a number of other steps Google has taken to secure generative AI products, which include the Bard chatbot and Lens image recognition technology. Like Microsoft, Google also is integrating AI throughout its portfolio, from Gmail and Search to Docs and – as mentioned – Maps.

Earlier this year, Google introduced an AI red teaming function to grow implementation of its Secure AI Framework (SAIF).

“The first principle of SAIF is to ensure that the AI ecosystem has strong security foundations, and that means securing the critical supply chain components that enable machine learning (ML) against threats like model tampering, data poisoning, and the production of harmful content,” Richardson and Hansen wrote.

Along with the AI bug bounty offering, Google this week also said it is growing the work it’s doing with the Open Source Security Foundation and that its own Open Source Security Team is enhancing the security of the AI supply chain by leveraging open source initiatives Supply Chain Levels for Software Artifacts (SLSA) and Sigstore.

SLSA includes standards and controls for resiliency in supply chains and Sigstore is used to verify that software in the supply chain is what it claims to be, they wrote. Google is making the first prototypes for model signing with Sigstore and attestation verification with SLSA available.

Recent Articles By Author


文章来源: https://securityboulevard.com/2023/10/google-expands-bug-bounty-program-to-find-generative-ai-flaws/
如有侵权请联系:admin#unsafe.sh