IT, Security Leaders Play Catch-Up With Generative AI Threats
October 27, 2023 | Source: securityboulevard.com

Nearly three-quarters of IT and security leaders admit their employees are using generative AI tools at work, but leaders lack a plan to counter the security risks posed by generative AI and the large language models (LLMs) powering the technology.

These were among the key findings of an ExtraHop survey of more than 1,200 security and IT leaders, which highlighted a significant disparity between organizations’ concerns about generative AI risks and their effectiveness in addressing them.

Security and IT leaders are more concerned about inaccurate or nonsensical answers from generative AI apps (40%) than about security-centric issues, such as exposure of customer and employee personally identifiable information (PII) or exposure of trade secrets.

Generative AI Bans

Approximately one-third of organizations banned generative AI tools, yet only 5% of respondents claimed that their employees never use them, suggesting that these bans may not be as effective as hoped.

Additionally, while nearly 82% of respondents expressed confidence in their ability to defend against AI threats, half lacked technology for monitoring AI tool usage, only 42% provided user training and only 46% had governance policies in place.

This raises questions about whether the bans have inadvertently fostered an inflated sense of confidence within organizations.


Shawn Surber, senior director of technical account management at Tanium, said bans on the use of generative AI are difficult to enforce and often don’t make sense to the employee.

“Therefore, those employees who are unlikely to follow rules they don’t agree with may decide that the value of AI outweighs the risk of the slap on the wrist they expect to get if they’re caught,” he said.

To address the challenge of enforcing these bans, organizations must combine education with technical controls.

“Blocking access to AI sites from corporate devices is the first step, and it needs to be enforced whether the device is on campus or not, which greatly increases the difficulty and complexity of that control,” Surber explained.
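Device-level enforcement of that kind can be sketched in code. The snippet below is a minimal illustration only: it sinkholes a handful of well-known generative AI domains through the local hosts file, so the block travels with the device whether it is on campus or not. The domain list is an assumption for demonstration; a production control would rely on managed DNS or an endpoint-management agent rather than a standalone script.

```python
# Minimal sketch of a device-level block for generative AI sites.
# The domain list is illustrative only; a real control would be pushed
# by an endpoint-management agent and cover far more hostnames.
import platform

BLOCKED_DOMAINS = [
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
]

HOSTS_PATH = (
    r"C:\Windows\System32\drivers\etc\hosts"
    if platform.system() == "Windows"
    else "/etc/hosts"
)

def block_domains() -> None:
    """Sinkhole each blocked domain by pointing it at 0.0.0.0."""
    with open(HOSTS_PATH, "r+", encoding="utf-8") as hosts:
        existing = hosts.read()
        for domain in BLOCKED_DOMAINS:
            entry = f"0.0.0.0 {domain}"
            if entry not in existing:
                hosts.write(f"\n{entry}  # blocked by corporate policy")

if __name__ == "__main__":
    block_domains()  # requires admin/root privileges on the device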

Then, organizations need to monitor outbound personal VPN connections from those devices as well, as that is a common method for getting around such controls.

Since the content of those VPN connections is encrypted, he said, the best approach might be to prevent those tools from being used on employee devices at all.
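Even when the VPN payload is encrypted, connection metadata such as destination ports remains visible. The sketch below, which assumes a hypothetical CSV connection log with src, dst, dst_port, and proto columns, flags outbound traffic on ports commonly associated with personal VPNs; real detection would draw on firewall or network flow telemetry instead.

```python
# Sketch of flagging outbound connections on common personal-VPN ports.
# The log format (CSV of src, dst, dst_port, proto) is hypothetical;
# real environments would read firewall or flow telemetry instead.
import csv

# Ports commonly associated with personal VPN protocols.
VPN_PORTS = {
    1194: "OpenVPN",
    51820: "WireGuard",
    1723: "PPTP",
    500: "IKEv2/IPsec",
}

def flag_vpn_connections(log_path: str) -> list[dict]:
    """Return log rows whose destination port matches a known VPN port."""
    flagged = []
    with open(log_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            port = int(row["dst_port"])
            if port in VPN_PORTS:
                row["vpn_hint"] = VPN_PORTS[port]
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    for hit in flag_vpn_connections("outbound_connections.csv"):
        print(f'{hit["src"]} -> {hit["dst"]}:{hit["dst_port"]} ({hit["vpn_hint"]})')
```

Port-based matching like this is coarse and will miss VPNs running over 443, which is why Surber suggests blocking the tools at the device rather than relying on network inspection alone.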

“Lastly, and most importantly, organizations need to clearly communicate not just the ban on using AI, but how and why they consider it dangerous to the organization,” he added.

Responsible Use

Mike Apigian, senior director of product marketing at ExtraHop, said the challenge IT leaders face is a lack of knowledge and resources to inform policies and training, despite the risks of misuse and a potentially expanded attack surface.

“Given its relative novelty, businesses find themselves uncertain about how to oversee employee utilization,” he said.

He added that clear directives could give business leaders greater assurance when establishing governance and policies for the use of these tools, especially considering the fast rate of adoption.

“We’ve already seen governments begin the early stages of setting up what an AI framework may look like, and it’s clear from our research that businesses are keen on what’s to come,” Apigian said.

John Allen, vice president of cyber risk and compliance at Darktrace, noted that the use of generative AI tools is still in its infancy.

“There are still many questions that need to be addressed to help ensure data privacy is respected and organizations can remain compliant,” he said.

He argued that the IT security community at large has a role to play in better understanding the potential risks and ensuring that the right guardrails and policies are put in place to protect privacy and keep data secure.

“As an industry, in order to realize the anticipated value from AI, we need to work alongside governing bodies to help ensure a level of consistency and sensibility are present in potential laws and regulations,” Allen added.

From his perspective, CISOs and CIOs must balance the need to restrict sensitive data from generative AI tools with the need for businesses to use these tools to improve their processes and increase productivity.
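One common way to strike that balance is to route prompts through an internal gateway that redacts obvious sensitive data before anything reaches an external model. The sketch below is a minimal illustration of that idea; the regex patterns are assumptions for demonstration only and are no substitute for a production DLP pipeline.

```python
# Minimal sketch of a redaction step applied before a prompt leaves
# the organization. The patterns are illustrative; production DLP
# would use far more robust detection than these regexes.
import re

# Illustrative PII patterns: email addresses, US SSNs, 16-digit card numbers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){16}\b"),
}

def redact(prompt: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Contact jane.doe@example.com, SSN 123-45-6789, card 4111 1111 1111 1111."
    print(redact(raw))
```

A gateway like this lets employees keep using the tools for productivity while keeping the most obvious sensitive data out of third-party hands.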

“Many of the new generative AI tools have subscription levels that have enhanced privacy protection so that the data submitted is kept private and not used in tuning or further developing the AI models,” he explained.

In fact, many vendors will also enter into Data Processing Agreements (DPAs) and Business Associate Agreements (BAAs) to meet specific compliance requirements for handling sensitive data.

“This can open the door for covered organizations to leverage generative AI tools in a more privacy-conscious way. However, they still need to ensure that the use of protected data meets the relevant compliance and notification requirements specific to their business,” Allen said.
