In this special edition, we’ve selected the most-read Cybersecurity Snapshot items about AI security this year. ICYMI the first time around, check out this roundup of data points, tips and trends about secure AI deployment; shadow AI; AI threat detection; AI risks; AI governance; AI cybersecurity uses — and more.
ICYMI, here are six things that’ll help you better understand AI security.
Looking for tips on how to roll out AI systems securely and responsibly? The guide “Deploying AI Systems Securely” has concrete recommendations for organizations setting up and operating AI systems on-premises or in private cloud environments.
“Deploying AI systems securely requires careful setup and configuration that depends on the complexity of the AI system, the resources required (e.g., funding, technical expertise), and the infrastructure used (i.e., on premises, cloud, or hybrid),” reads the 11-page document, jointly published by cybersecurity agencies from the Five Eyes Alliance countries: Australia, Canada, New Zealand, the U.K. and the U.S.
The agencies recommend that organizations developing and deploying AI systems incorporate the following:
For more information about deploying AI systems securely:
As organizations scale up their AI adoption, they must closely monitor the usage of unapproved AI tools by employees — an issue known as “shadow AI.”
So how do you identify, manage and prevent shadow AI? The Cloud Security Alliance’s “AI Organizational Responsibilities: Governance, Risk Management, Compliance and Cultural Aspects” white paper offers recommendations to tackle shadow AI, including:
“By focusing on these key areas, organizations can significantly reduce the risks associated with shadow AI, ensuring that all AI systems align with organizational policies, security standards, and regulatory requirements,” the white paper reads.
For example, to create an inventory that offers the required visibility into AI assets, the document explains different elements each record should have, such as:
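The white paper's own list of record elements isn't reproduced here, but to make the idea concrete, here is a purely illustrative sketch of what such an inventory record might look like in code. All field names are assumptions for demonstration, not the CSA's required elements.

```python
from dataclasses import dataclass, field

# Illustrative AI asset inventory record. Field names are assumptions
# for demonstration -- the CSA white paper defines its own elements.
@dataclass
class AIAssetRecord:
    asset_id: str
    name: str
    owner: str                # accountable team or individual
    model_type: str           # e.g., "LLM", "classifier"
    data_sensitivity: str     # e.g., "public", "internal", "restricted"
    deployment_env: str       # e.g., "on-prem", "cloud", "SaaS"
    approved: bool = False    # has the tool passed governance review?
    tags: list = field(default_factory=list)

inventory = [
    AIAssetRecord("ai-001", "support-chatbot", "cx-team", "LLM",
                  "internal", "cloud", approved=True),
    AIAssetRecord("ai-002", "browser-llm-plugin", "unknown", "LLM",
                  "restricted", "SaaS"),  # unapproved: shadow AI candidate
]

# Surface unapproved assets for governance review
unapproved = [a.asset_id for a in inventory if not a.approved]
print(unapproved)
```

A structured record like this is what makes the "required visibility" possible: governance tooling can filter on `approved` or `data_sensitivity` rather than chasing spreadsheets.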
Meanwhile, the report “Oh, Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2024-2025” from the National Cybersecurity Alliance (NCA) adds insight to the issue of employee AI use, with its finding that almost 40% of employees have fed sensitive work information to AI tools without their employers’ knowledge.
These findings, according to the NCA, highlight why organizations must urgently adopt AI usage policies and offer AI security training so employees understand the risks of using this technology.
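One common starting point for enforcing such a policy is scanning egress or web proxy logs for traffic to known AI services that haven't been sanctioned. The sketch below is illustrative only: the domain list, log schema and approval set are invented assumptions, not a vetted blocklist or any specific proxy product's format.

```python
# Hedged sketch: flagging possible shadow-AI use from web proxy logs.
# The domain lists and log format are illustrative assumptions.
KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
APPROVED_AI_DOMAINS = {"chat.openai.com"}  # sanctioned by policy

proxy_log = [
    {"user": "alice", "domain": "chat.openai.com"},
    {"user": "bob",   "domain": "claude.ai"},
    {"user": "bob",   "domain": "intranet.example.com"},
]

def shadow_ai_hits(log):
    """Return (user, domain) pairs hitting unapproved AI services."""
    unsanctioned = KNOWN_AI_DOMAINS - APPROVED_AI_DOMAINS
    return [(e["user"], e["domain"]) for e in log
            if e["domain"] in unsanctioned]

print(shadow_ai_hits(proxy_log))
```

In practice the output would feed an awareness conversation or a training follow-up rather than a punitive block, in keeping with the policy-plus-training approach the NCA recommends.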
Have you ever shared sensitive work information without your employer’s knowledge?
(Source: “Oh, Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2024-2025” study by the National Cybersecurity Alliance, September 2024)
For more information about AI risks to cybersecurity, check out these Tenable blogs:
AI has greatly impacted real-time threat detection by analyzing large datasets at unmatched speeds and identifying subtle, often-overlooked changes in network traffic or user behavior. For example, AI can detect when a system atypically accesses sensitive data. Traditional tools may miss these nuanced anomalies, but AI systems are adept at spotting them.
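At its simplest, the "atypical access" check described above is a baseline-versus-deviation comparison. The toy example below flags a user whose daily sensitive-data accesses sit far outside their historical norm; the data and the z-score threshold are illustrative assumptions, and real detection systems use far richer features and models.

```python
import statistics

# Toy baseline-vs-anomaly check: flag a daily access count that deviates
# sharply from the user's historical baseline. Threshold is illustrative.
def is_anomalous(history, today, z_threshold=3.0):
    """True if today's count is more than z_threshold standard
    deviations above the historical mean."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return (today - mean) / stdev > z_threshold

baseline = [3, 4, 2, 5, 3, 4, 3, 4]   # typical daily sensitive-file reads
print(is_anomalous(baseline, 4))       # ordinary day: not flagged
print(is_anomalous(baseline, 40))      # sharp spike: flagged
```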
“For security, GenAI can revolutionize the field if applied correctly, especially when it comes to threat detection and response. It enhances efficiency and productivity by swiftly processing and delivering critical information when it matters most,” Nicholas Weeks, a Tenable senior product marketing manager, wrote in a blog post.
One of AI's significant advantages in threat detection is its ability to be proactive. AI-powered systems continuously refine their algorithms as new malware strains and attack techniques emerge, learning from each event and integrating new insights into their threat detection mechanisms. This allows them to respond to both known and unknown threats more effectively than traditional, static, signature-based tools.
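The "learning from each event" idea contrasts with static signatures: the detector's baseline itself shifts as new observations arrive. A minimal sketch of that behavior, assuming a simple running mean/variance (Welford's algorithm) rather than any particular vendor's model:

```python
import math

# Sketch of an online baseline that updates incrementally with each
# event (Welford's algorithm), instead of a fixed signature set.
class OnlineBaseline:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        """Fold a new observation into the running mean/variance."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def zscore(self, x):
        if self.n < 2:
            return 0.0
        stdev = math.sqrt(self.m2 / (self.n - 1))
        return 0.0 if stdev == 0 else (x - self.mean) / stdev

b = OnlineBaseline()
for v in [10, 12, 11, 13, 12, 11]:
    b.update(v)                  # baseline adapts as events arrive
print(round(b.zscore(30), 1))    # a far-off new event scores high
```

The same update-as-you-go principle is what lets learning-based detectors respond to previously unseen attack patterns instead of waiting for a signature release.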
"There has been automation in threat detection for a number of years, but we're also seeing more AI in general. We're seeing the large models and machine learning being applied at scale," Josh Schmidt, partner in charge of the cybersecurity assessment services team at BPM, a professional services firm, told TechTarget.
In addition to monitoring internal network behavior, AI systems can more comprehensively analyze external sources of intelligence like RSS feeds, cybersecurity forums and global threat data. This wide-reaching capability helps AI gather actionable insights and recommend defense strategies that are tailored to current attack trends. For example, AI can flag a spike in phishing attacks targeting specific industries and suggest measures to counter these emerging threats.
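The phishing-spike example above can be sketched as a simple aggregation over intelligence items. The feed contents and threshold below are invented for illustration; production pipelines would normalize real feed formats (STIX, RSS, vendor APIs) and use statistical baselines rather than a fixed cutoff.

```python
from collections import Counter

# Hedged sketch: count phishing reports per industry across intel items
# and flag sectors whose volume exceeds a simple threshold.
intel_items = [
    {"type": "phishing",   "industry": "healthcare"},
    {"type": "phishing",   "industry": "healthcare"},
    {"type": "phishing",   "industry": "healthcare"},
    {"type": "ransomware", "industry": "retail"},
    {"type": "phishing",   "industry": "finance"},
]

def flag_phishing_spikes(items, threshold=2):
    """Return industries with more than `threshold` phishing reports."""
    counts = Counter(i["industry"] for i in items if i["type"] == "phishing")
    return [ind for ind, n in counts.items() if n > threshold]

print(flag_phishing_spikes(intel_items))
```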
Additionally, as AI-generated phishing lures become nearly impossible for humans to detect, researchers and defenders are turning to AI-based systems that assess whether an email was AI-generated by looking for subtle telltale signs that distinguish it from legitimate human-written email.
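One frequently cited telltale is "burstiness": human prose tends to vary sentence length more than generated text. The toy score below illustrates only the shape of that idea; it is emphatically not a reliable detector, and real systems combine many trained statistical signals.

```python
import re
import statistics

# Deliberately simple illustration of one "telltale": sentence-length
# variance ("burstiness"). NOT a real detector -- production systems
# use trained models over many such signals.
def burstiness(text):
    """Standard deviation of sentence lengths (in words)."""
    lengths = [len(s.split())
               for s in re.split(r"[.!?]+", text) if s.strip()]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The report is ready. The data is clean. The plan is set."
varied = ("Done. After weeks of digging through logs we finally "
          "isolated it. Victory.")
print(burstiness(uniform) < burstiness(varied))
```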
For more information about ways in which AI can help boost cybersecurity programs:
Finding it hard to track all the cyber risks impacting AI systems? Check out the Massachusetts Institute of Technology’s AI Risk Repository, which aims to consolidate in a single place all risks associated with the use of artificial intelligence.
To compile the database’s initial set of 700-plus risks, MIT analyzed 43 existing AI risk frameworks, and found that even the most comprehensive framework overlooked about 30% of all risks currently listed in the database.
“Since the AI risk literature is scattered across peer-reviewed journals, preprints, and industry reports, and quite varied, I worry that decision-makers may unwittingly consult incomplete overviews, miss important concerns, and develop collective blind spots,” project leader and MIT postdoctoral researcher Peter Slattery said in a statement.
The AI Risk Repository’s risk domains include:
The risk domains are further subdivided into 23 subdomains. The AI Risk Repository is a “living database” that’ll be expanded and updated, according to MIT.
Meanwhile, the January publication from the U.S. National Institute of Standards and Technology (NIST) “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2)” aims to help AI developers and users understand the types of attacks their AI systems can be vulnerable to, as well as ways to mitigate these threats.
Specifically, the publication zeroes in on four attack types:
Taxonomy of attacks on generative AI systems
(Source: NIST’s “Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2)” document, January 2024)
For more information about protecting AI systems from cyberattacks:
A majority of cybersecurity professionals feel cautiously hopeful about artificial intelligence’s potential for strengthening their organizations’ cyber defenses, while also recognizing AI’s risks and adoption obstacles.
That’s according to a global survey of almost 2,500 IT and security professionals conducted by the Cloud Security Alliance (CSA).
“While there’s optimism about AI’s role in enhancing security, there’s also a clear recognition of its potential misuse and the challenges it brings,” reads the “State of AI and Security Survey Report,” which was commissioned by Google.
Specifically, 63% of respondents said AI can potentially boost their organizations’ cybersecurity processes. Only 12% felt the opposite way. The rest had no opinion.
Already, 22% of polled organizations use generative AI for security. More than half (55%) plan to use it within the next year, with the top use cases being rule creation, attack simulation and compliance monitoring. C-level and board support is driving generative AI adoption.
Furthermore, 67% have tested AI for security purposes, and 48% feel either “very” or “reasonably” confident in their organizations’ ability to use AI for security successfully.
What are your desired outcomes when it comes to implementing AI in your security team?
(Source: Cloud Security Alliance’s “State of AI and Security Survey Report,” April 2024)
Meanwhile, in a commissioned study conducted by Forrester Consulting on behalf of Tenable in October 2023, 44% of IT and security leaders polled said they were either “extremely confident” or “very confident” about their ability to use generative AI to enhance their organization’s cybersecurity strategy.
In addition, 68% of respondents showed some level of interest in using GenAI to align IT/security goals with business goals; and a similar number — 67% — showed interest in using it to increase or improve the way their organization practices preventive cybersecurity.
To get more details, check out the CSA report’s announcement “More Than Half of Organizations Plan to Adopt Artificial Intelligence (AI) Solutions in Coming Year” and the full 33-page report “State of AI and Security Survey Report.”
For more information about how AI can help cybersecurity teams:
Here’s a guide that might interest business and tech chiefs eager to ensure their organizations develop and deploy generative AI securely and responsibly.
The Open Worldwide Application Security Project (OWASP) guide “LLM AI Cybersecurity & Governance Checklist” is aimed at business, privacy, compliance, legal and cybersecurity leaders, among others, tasked with setting guardrails for their organization’s generative AI use.
The goal: Help them stay abreast of AI developments so that their organizations will reap business success from their generative AI use while avoiding legal, security and regulatory pitfalls.
“These leaders and teams must create tactics to grab opportunities, combat challenges, and mitigate risks,” reads the document, which was created by the same OWASP team in charge of the group’s “OWASP Top 10 for LLM Applications” list.
Areas covered by the checklist include:
For more information about using generative AI responsibly and securely:
Juan has been writing about IT since the mid-1990s, first as a reporter and editor, and now as a content marketer. He spent the bulk of his journalism career at International Data Group’s IDG News Service, a tech news wire service where he held various positions over the years, including Senior Editor and News Editor. His content marketing journey began at Qualys, with stops at Moogsoft and JFrog. As a content marketer, he's helped plan, write and edit the whole gamut of content assets, including blog posts, case studies, e-books, product briefs and white papers, while supporting a wide variety of teams, including product marketing, demand generation, corporate communications, and events.