The Necessity of Security Best Practices When Implementing Generative AI
November 28, 2023 | securityboulevard.com

It is no surprise that so many companies are eager to implement generative AI. Already, this technology has enabled users to improve productivity, streamline tedious processes and elevate the customer experience. Although businesses must capitalize on this surge of innovation to maintain a competitive advantage, they should also be mindful of the security and data privacy challenges of a new generative AI deployment. By leveraging security best practices, companies can protect sensitive data while avoiding costly legal fees.

Protecting Against Internal Data Leaks

One of the primary risks of any generative AI implementation is the leakage of proprietary or private data. Recently, Samsung banned its employees from using generative AI tools such as OpenAI's ChatGPT and Google Bard on company-owned devices, including computers, tablets, phones and internal networks. The policy came after Samsung engineers uploaded internal source code to ChatGPT, resulting in an accidental leak. Samsung is not alone in its decision; notable Wall Street banks such as Citigroup, Goldman Sachs and JP Morgan have also restricted employees from using ChatGPT.

The threat of data leakage shouldn't lead businesses to neglect generative AI altogether, even though a leak can significantly damage brand reputation and revenue. Companies can prevent such incidents by understanding the characteristics of their large language models (LLMs). For starters, if a company's LLM is open to the public, employees should be trained not to enter proprietary or sensitive information into the model. Businesses can also establish policies limiting which personnel may access these tools.
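
To illustrate how such a policy could be reinforced technically, the following is a minimal sketch of a pre-submission check that flags obviously sensitive content before a prompt is sent to a public LLM. The patterns and the check_prompt function are hypothetical examples for illustration, not a production-grade data loss prevention tool.

```python
import re

# Hypothetical patterns for content that should never leave the company.
# A real deployment would rely on a proper DLP or data classification service.
SENSITIVE_PATTERNS = [
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # private key material
    re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),           # AWS-style access key IDs
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                   # US SSN-like numbers
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),         # internal document markings
]

def check_prompt(prompt: str) -> list[str]:
    """Return the patterns a prompt matches (empty list means it passed the screen)."""
    return [p.pattern for p in SENSITIVE_PATTERNS if p.search(prompt)]

if __name__ == "__main__":
    prompt = "Please review this file marked CONFIDENTIAL before I send it to a vendor."
    findings = check_prompt(prompt)
    if findings:
        print("Blocked: prompt matches sensitive patterns:", findings)
    else:
        print("Prompt passed the basic screen; forwarding to the LLM.")
```

A screen like this is only a backstop for employee training and access policies, but it catches the most obvious mistakes before data leaves the organization.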

Another option for enterprises is to use a generative AI solution that operates inside a private environment. Many are unaware that, in addition to standard ChatGPT, OpenAI provides another offering, ChatGPT Enterprise, which functions within an isolated space. OpenAI does not train ChatGPT Enterprise models on proprietary customer data, prompts or employee usage, which adds a layer of safeguarding against data leaks.
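
As a sketch of what routing traffic through a private, isolated endpoint might look like, the snippet below uses the OpenAI Python SDK's ability to target a custom base URL. The internal-llm.example.com endpoint and the environment variable names are placeholders; the right offering (ChatGPT Enterprise, an API tier with data controls or a self-hosted model) depends on the organization.

```python
import os
from openai import OpenAI  # pip install openai

# Point the client at a privately hosted, OpenAI-compatible endpoint instead of
# the public service. The URL and environment variable names are placeholders.
client = OpenAI(
    base_url=os.environ.get("PRIVATE_LLM_BASE_URL", "https://internal-llm.example.com/v1"),
    api_key=os.environ["PRIVATE_LLM_API_KEY"],
)

response = client.chat.completions.create(
    model="gpt-4o",  # or whichever model the private deployment exposes
    messages=[{"role": "user", "content": "Summarize our incident-response runbook."}],
)

print(response.choices[0].message.content)
```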

Data Breaches

Like any new technology, generative AI models carry a risk of data breaches. By gaining access to the data sets used to train these models, bad actors can obtain sensitive information. There are various ways cybercriminals can infiltrate systems and execute data breaches; Forrester predicts that AI-generated code containing security flaws will be one such vulnerability. While data breaches are nothing new in the history of cybercrime, companies shouldn't understate the importance of following security best practices when integrating generative AI into their business processes.

Those building their own models must ensure that all of the data used to train their generative AI solutions is classified and anonymized. That way, if a breach does occur, the attackers won't obtain sensitive customer or enterprise data. Likewise, enterprises must understand how they capture data, where they store it and whether it is encrypted in transit and at rest. Organizations can also employ risk assessments, routine audits and regular penetration tests to address product vulnerabilities and protect against data breaches. It is also helpful to risk-assess third-party vendors to ensure their technology is secure.
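
As a minimal illustration of anonymizing records before they enter a training pipeline, the sketch below redacts a few common personally identifiable fields. The field names and regular expressions are hypothetical; real pipelines would typically use a dedicated anonymization or tokenization service and a formal data classification scheme.

```python
import re

# Hypothetical redaction rules; order matters because SSN-like strings could
# otherwise be swallowed by the broader phone-number pattern.
REDACTIONS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def anonymize(text: str) -> str:
    """Replace PII-like substrings with typed placeholders before the text is stored or used for training."""
    for label, pattern in REDACTIONS.items():
        text = pattern.sub(f"[{label.upper()}_REDACTED]", text)
    return text

if __name__ == "__main__":
    record = "Customer Jane Doe, jane.doe@example.com, +1 (555) 010-9999, SSN 123-45-6789."
    print(anonymize(record))
    # -> Customer Jane Doe, [EMAIL_REDACTED], [PHONE_REDACTED], SSN [SSN_REDACTED].
```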

Copyright Infringement and Accuracy

Another risk of leveraging generative AI is unintentionally infringing intellectual property and copyright laws. If built improperly or haphazardly, generative AI models may be unable to accurately provide specific sources or citations for their responses, potentially exposing the business to legal repercussions. Businesses can reduce copyright risk by using observability and monitoring tools to understand their models' real-time performance.

Having these monitoring tools in place also enables companies to track the accuracy of their generative AI models. Customers expect generated responses to be accurate every time, but no generative AI model is perfect. In the worst cases, an improperly built and maintained model can produce inaccurate responses that degrade the customer experience. With monitoring and measurement tools, enterprises can keep their models consistent across platforms. Likewise, businesses need the ability to retrieve relevant artifacts or evidence should a customer dispute the model's performance.
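
To show one way such an audit trail might be kept, the sketch below appends each prompt, response and any retrieved source citations to a JSON Lines log along with a content hash. The log_interaction function and file path are illustrative; commercial LLM observability platforms provide far richer capabilities.

```python
import hashlib
import json
import time
from pathlib import Path

LOG_PATH = Path("genai_audit_log.jsonl")  # illustrative location

def log_interaction(prompt: str, response: str, sources: list[str]) -> None:
    """Append one model interaction, plus a content hash, to a JSON Lines audit log."""
    entry = {
        "timestamp": time.time(),
        "prompt": prompt,
        "response": response,
        "sources": sources,  # citations/artifacts retrieved alongside the answer
    }
    entry["sha256"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode("utf-8")
    ).hexdigest()
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    log_interaction(
        prompt="What is our standard warranty period?",
        response="The standard warranty period is 12 months.",
        sources=["warranty_policy_v3.pdf"],
    )
    print(f"Logged {LOG_PATH.stat().st_size} bytes of audit evidence.")
```

Entries like these give support and legal teams something concrete to retrieve if a customer disputes a generated answer.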

The Newness of GenAI Shouldn't Hinder Adoption

Generative AI is still a relatively new technology, and experts have yet to grasp every security threat it poses. Nevertheless, the murky waters shouldn't stop companies from embracing generative AI. Waiting until the technology matures could put a company at a severe competitive disadvantage and even frustrate employees, who are becoming accustomed to using generative AI in their personal lives and when interacting with brands. Instead, businesses should abide by responsible AI and governance practices and utilize resources from industry leaders, such as the National Institute of Standards and Technology's AI Risk Management Framework.


Source: https://securityboulevard.com/2023/11/the-necessity-of-security-best-practices-when-implementing-generative-ai/