Why and How to Secure GenAI Investments From Day Zero
2024-9-6 15:50:4 Author: securityboulevard.com(查看原文) 阅读量:7 收藏

Generative AI opens many opportunities for businesses to improve productivity and gain efficiencies across virtually all dimensions of their operations. At the same time, however, it introduces new risks – not least in security and data privacy. Yet, because GenAI remains a relatively novel concept that many companies are officially using only in limited contexts, it can be tempting for business decision-makers to ignore or downplay the security stakes of GenAI for the time being. They assume there will be time to figure how to secure large language models (LLMs) and mitigate data privacy risks later, once they’ve established basic GenAI use cases and strategies.

Unfortunately, this attitude toward GenAI is a huge mistake, to put it mildly. It’s like learning to pilot a ship without thinking about what you’ll do if the ship sinks, or taking up a high-intensity sport without figuring out how to protect yourself from injury until you’ve already broken a bone.

A healthier approach to GenAI is one in which organizations build security protections from the start. Here’s why, along with tips on how to integrate security into your organization’s GenAI strategy from day zero.

The State of Generative AI Security in Business

As of spring 2024, 65% of organizations reported that they were already using generative AI in at least one business function. This means that when it comes to GenAI, a majority of businesses have moved beyond the talking phase and are now in the implementation phase.

Nonetheless, some organizations are still prone to underestimating their exposure to GenAI security risks, for two main reasons.

Claroty

One is that, in many cases, companies that have adopted GenAI in certain areas – such as software engineering, a domain where 75% of developers will be using AI by 2028, according to Gartner – are still not making widespread use of the technology. They may be using GenAI tools like GitHub Copilot to help generate code, or deploying basic AI-powered chatbots to support customers, but they haven’t deeply integrated GenAI across multiple business domains and functions. As a result, they believe their Generative AI security exposure is limited in scope.

The second reason is that GenAI is in use within some organizations in ways that business leaders don’t even know about, due to what’s known as “Shadow LLMs.” I’m referring to the use of publicly available Generative AI services, like ChatGPT and Gemini, by employees without the company’s knowledge or approval. Because business leaders often aren’t even aware that Shadow LLM use is happening, they haven’t considered the full scope of the security risks that generative AI presents to their organizations.

The bottom line here is that within many companies, there is likely to be misalignment between the extent to which decision-makers believe their organizations are exposed to GenAI security challenges and the reality. As a result, many are at risk of underinvesting in generative AI security protections because they falsely believe that they don’t yet need them.

Why to Secure GenAI From Day 1

That’s a serious mistake given the diversity and complexity of GenAI security risks, such as:

  • Prompt injection vulnerabilities, which attackers could use to exfiltrate sensitive data from LLMs or, in some cases, compromise other systems with which LLMs integrate.
  • Package hallucination risks, which could lead to software supply chain breaches.
  • The risk is that even when interacting with well-intentioned users, LLMs will share data or perform actions that should be disallowed, simply because proper safeguards are not in place to control data output.
  • Challenges related to the growing array of generative AI compliance regulations, such as the European Union AI Act and the White House’s Executive Order on AI.

These GenAI security and data privacy challenges exist regardless of the extent to which an organization has adopted GenAI or which types of use cases it’s targeting. It’s not as if they only matter for companies making heavy use of AI or using AI in domains where special security, privacy or compliance risks apply.

This is why every organization that is using AI today or might use it in the future – a category that describes virtually every company, given that, again, employees may use AI services without official approval by their employers – should be taking steps starting now to ensure that it mitigates AI security risks.

Otherwise, they might find themselves facing issues like deploying chatbots that harm their brands, or failing to detect instances where employees share sensitive business information with external AI services like ChatGPT – and, by extension, potentially expose that data to third parties that use the same AI services.

Basic Steps for Securing Generative AI

A complete discussion of how to manage GenAI security risks is beyond the scope of this article. But basic best practices include:

  • Wrap AI tools and services in custom runtimes to track which data users enter into AI models.
  • Block malicious model interactions, which is also possible using runtime guardrails.
  • Establish clear governance policies for when and how employees can use GenAI technology and monitor for shadow LLM instances that violate those policies.

In short, whenever and wherever GenAI is in use within the business, safeguards should be in place to monitor AI interactions and mitigate security and privacy risks in real-time.

Again, these are not the types of protections that businesses should simply tack onto AI technologies after they’re already in use. They need to be present from the very start, lest serious security risks undercut the value that GenAI delivers as organizations begin adopting this powerful new technology.

Recent Articles By Author


文章来源: https://securityboulevard.com/2024/09/why-and-how-to-secure-genai-investments-from-day-zero/
如有侵权请联系:admin#unsafe.sh