5 Steps to Assess the Cyber and Privacy Risk of Generative AI
2024-10-30 12:10:20 Author: securityboulevard.com(查看原文) 阅读量:3 收藏

Generative AI, often abbreviated as GenAI, is undoubtedly here to stay. The power to generate humanlike text, images, code, and more at computer speed in response to simple prompts is a turning point in human-machine collaboration. Now, the GenAI genie can’t be put back into the bottle. In fact, more sophisticated models like GPT-4o are being released every month. These models are already augmenting human capabilities in industries ranging from content creation and marketing to software development and scientific research. The business potential is immense, with GenAI poised to supercharge productivity, automate mundane tasks, and spark innovation across sectors. 

Like other new technologies, GenAI adoption introduces new risks to organizations. However, trying to keep GenAI contained to avoid these risks is a fruitless exercise and can quickly become a risk on its own. GenAI will be used by your competitors and cybercriminals. 

Organizations that aren’t actively figuring out how to securely adopt Gen AI will inevitably begin to lag behind their competitors over time. Worse, cybercriminals are also increasingly maliciously using GenAI to improve the effectiveness of their attacks. In other words, the impact of generative AI on their enterprise risk profile is unavoidable and should be assessed and actively managed, rather than ignored.  

In this blog, I lay out a simplified view of the impact of generative AI on enterprise risks, and propose a framework for assessing these risks effectively. 

The Impact of Generative AI on Enterprise Risk Profiles 

Much has been written about the risks of Generative AI. New adversarial threat models and vulnerabilities seem to be discovered weekly. This is inevitable with new technology, but the high-level risk exposure from GenAI can be categorized into four areas, where I believe they will have the most impact. 

AWS

AWS Hub

Amplify External Cyberthreats

GenAI is already amplifying existing cyber threats by providing powerful tools for attackers to automate and scale their malicious activities. Cybercriminals can rapidly produce large volumes of convincing phishing emails, malware code, or disinformation content to fuel social engineering and cyberattacks. This increases the frequency and probability of success for established threats like malware, fraud, and disinformation campaigns. 

AI-Empower Insider Threats

GenAI amplifies external cyber threats. However, it also empowers insider threats within organizations. With the deployment of personal GenAI copilots, which train GenAI agents on all the data a user has access to, the potential for insider misuse of AI capabilities becomes a pressing concern.

Insiders can utilize these copilots’ AI to extract sensitive information from corporate databases or communication channels with unprecedented ease and efficiency. Often the audit trail behind the copilot’s prompts and responses is unmonitored and the initial training on data is obfuscated by the share volume. By generating targeted queries, they can sift through vast amounts of data and identify valuable assets without conspicuous traces.

Expand Data Leakage Vectors

We have all used GenAI models like ChatGPT, but now, GenAI is often embedded in existing tools like our web browsers. The growing use of public generative AI services by employees outside of sanctioned internal GenAI deployments introduces new privacy and data risks. Employees may inadvertently disclose sensitive enterprise data when interacting with these models, leading to potential data breaches and regulatory violations. There are also risks of intellectual property theft if proprietary information is used in prompts to public AI models, which effectively trains the models.

Introduce AI Model Risks 

As enterprises increasingly integrate genAI models into business-critical processes like content creation, customer-facing chatbots, software development, and decision support systems, they face new classes of risk specific to AI models. These include risks of models producing unsafe or biased outputs, being exploited through adversarial attacks or poisoning, lacking transparency and auditability, and propagating or amplifying societal biases and harms. Mitigating these “AI model risks” requires robust governance, testing, and monitoring controls.

Evaluating and managing these multifaceted genAI risks is crucial for enterprises to safely harness the transformative potential of these technologies while safeguarding against unintended negative consequences.

5 Steps to Assess the Risk Impact of Generative AI

The risk impact of Generative AI is best codified based on key differences in the following risk drivers and an evaluation of: 

  • What data is used to train the models
  • The user of the model
  • The identities who have access to the model and training data
  • The purpose behind the use of the model

This all starts with having the right governance structure and processes in place to ensure the relevant business, technology, security, and privacy stakeholders are involved in decisions around genAI adoption. This process will allow the risks of individual GenAI deployments to be assessed.

1. Completing a Data Risk Assessment

The data used to train generative AI models can introduce significant risks:

  • Privacy Violations: If the training data contains personal information or sensitive content, the model could leak or expose this data through its outputs.
  • Intellectual Property Theft: Proprietary data like code, designs, or confidential documents used for training could be reconstructed from the model.
  • Bias Propagation: Societal biases and skews present in the training data can be amplified and baked into the model’s outputs.
  • Representativeness: Lack of diverse, representative data can lead to models that are inaccurate or unsafe for certain demographics.

It’s critical to evaluate the training data for such risks, assess its lineage and potential for misuse, and determine the sensitivity level. This includes who has access to the data to manipulate the outputs from the input. 

2. Threat Model the Users

The risks also depend heavily on who can use or interact with the gen AI. For example, public GenAI models can be used equally by your employees, and cybercriminals, while personal Copilots should only be able to be used by licensed employees. 

Organizations must identify different user roles, assess risks based on their intent and training, and implement robust access controls and monitoring.

3. Understand the Purpose/Use Case Risk Assessment

The intended use case for a generative AI model also determines its risk profile. Organizations should evaluate the intended purpose or use of each GenAI model, and pay particular attention to use of the Gen AI use in:

  • Business-Critical Processes: For high-stakes use cases like automated code development or decision support, unsafe or biased model outputs could severely impact operations.
  • Externally Visible Outputs: Producing externally visible outputs with GenAI brings its own set of concerns around the bias of the training datasets that become immediately apparent at scale. 

Identifying critical processes where AI will be used requires organizations to further assess risks like unsafe outputs, adversarial vulnerabilities, and lack of transparency.

4. Conduct a Scenario Analysis

With an understanding of the data, user, and purpose risks, enterprises can construct concrete risk scenarios by combining these elements. For example:

  • Scenario 1: Biased training data used by a developer to automate the coding of a customer service system, risking discrimination based on gender-specific language.
  • Scenario 2: Sensitive corporate data is exposed when an untrained employee uses a public generative AI service.
  • Scenario 3: An insider threat leverages adversarial inputs to training data in a pricing model to unlock free products.

Such scenarios should be scored or preferably quantified based on factors like likelihood/frequency and potential impact – considering the data at risk, purpose, and external visibility.

5. Assessing Control Effectiveness and the Residual Risks

For prioritized risk scenarios, relevant controls can be mapped across the data, access management, and model domains. At a minimum, this should include a focus on access to sensitive data and control of the model. The specific controls will depend on the risk scenario, with objectives like preventing data leaks, securing access, and ensuring model reliability. After evaluating existing controls and implementing identified controls to mitigate high-risk scenarios, organizations must evaluate the residual risk that remains.

Based on this analysis, the appropriate governance bodies must determine whether the residual risk level is acceptable given the organization’s defined risk appetite and tolerances. Factors like compliance obligations, brand reputation, and the severity of potential impacts should be weighed.

If the residual risk is deemed too high, the organization will need to iterate further by identifying and implementing additional controls from the data, access, and model spheres to drive the risk down to an acceptable level.

It Doesn’t Stop There

Generative AI is still a rapidly evolving space. The capabilities of the technology and the understanding of its associated risks will continue to progress quickly. As such, organizations must continuously monitor their use of GenAI and their generative AI risk posture. Make sure to capture any loss events or near misses from GenAI. It’s also recommended that the overall generative AI risk assessment be periodically re-evaluated, perhaps quarterly or bi-annually, to account for changes in the organization’s risk landscape, rogue AI implementations, and latest best practices.

It’s critical to frame GenAI risk assessment through the key drivers of data, user identities, and model purposes while leveraging approaches like scenario analysis, control mapping, and continuous monitoring. Then, organizations can establish a comprehensive framework to identify, mitigate, and manage the multifaceted risks introduced by generative AI technologies.

This blog originally appeared on Auditboard on June 14, 2024


文章来源: https://securityboulevard.com/2024/10/5-steps-to-assess-the-cyber-and-privacy-risk-of-generative-ai/
如有侵权请联系:admin#unsafe.sh