Orca Security Taps Amazon for Generative AI Expertise
November 2, 2023 | securityboulevard.com

Orca Security today revealed it is adding large language models (LLMs) hosted on the Amazon Web Services (AWS) cloud to those it already uses from Microsoft and OpenAI, providing additional generative artificial intelligence (AI) capabilities to cybersecurity teams.

The Amazon Bedrock service provides access to a set of foundation LLMs that organizations can extend with vector databases, enabling the models to draw on external data when generating summaries and recommendations. In other instances, it is possible to create a custom LLM trained on that same data.
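As a rough illustration only, the sketch below shows how an application might call a Bedrock-hosted foundation model through the AWS SDK for Python while folding in documents retrieved from a vector database. This is not Orca's implementation; the model ID, prompt format and helper names are assumptions based on the 2023-era Anthropic Claude text-completion API on Bedrock.

```python
import json

import boto3

# Minimal retrieval-augmented sketch (assumption, not Orca's implementation):
# documents pulled from a vector database are folded into a prompt sent to a
# Bedrock-hosted foundation model via the bedrock-runtime API.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def summarize_with_context(question: str, retrieved_docs: list[str]) -> str:
    # In a real deployment, retrieved_docs would come from a similarity search
    # over the organization's own data stored in a vector database.
    context = "\n".join(retrieved_docs)
    prompt = f"\n\nHuman: Use this context:\n{context}\n\n{question}\n\nAssistant:"
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # illustrative model ID
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 512}),
    )
    return json.loads(response["body"].read())["completion"]
```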

The Orca Cloud Security Platform can now leverage Amazon Bedrock to automatically generate remediation instructions for risks detected in cloud environments. Security teams or developers can copy the generated code into a command-line interface or an infrastructure-as-code (IaC) provisioning tool to remediate vulnerabilities.
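In practice, that flow might look something like the following sketch, in which a detected risk is described to a Bedrock-hosted model that returns a fix an operator could paste into a CLI or IaC pipeline. The prompt, model ID and example finding are illustrative assumptions, not Orca's actual product code or prompts.

```python
import json

import boto3

# Hypothetical remediation flow (assumption, not Orca's product code): a
# detected risk is described to a Bedrock-hosted model, which returns a fix
# the operator can paste into a CLI or IaC pipeline.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def generate_remediation(finding: str) -> str:
    prompt = (
        "\n\nHuman: A cloud security scan reported this risk:\n"
        f"{finding}\n"
        "Reply with an AWS CLI command or Terraform snippet that remediates it."
        "\n\nAssistant:"
    )
    response = bedrock.invoke_model(
        modelId="anthropic.claude-v2",  # illustrative model ID
        body=json.dumps({"prompt": prompt, "max_tokens_to_sample": 512}),
    )
    return json.loads(response["body"].read())["completion"]


# Illustrative finding, not real data.
print(generate_remediation("S3 bucket 'example-logs' allows public read access."))
```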

Orca Security CEO Gil Geron said that as generative AI continues to evolve, it will become increasingly easy for cybersecurity teams to respond to attacks: an LLM can explain the nature of an attack and then generate any code required to mitigate it. In effect, cybersecurity teams will be able to respond much faster to both potential threats and attacks in progress, noted Geron.

Over time, the company will mix and match LLMs to address specific tasks and functions as required, he added.

It’s not clear what impact generative AI will have on cybersecurity, but the overall level of stress experienced might decline as it becomes easier to respond to threats and cyberattacks. That should reduce burnout and help narrow the gap between the demand for cybersecurity expertise and the limited pool of available talent.

Cybersecurity teams will still need to work with application developers to apply fixes, but it should become simpler to collaborate as a common understanding of the nature of a threat is more easily achieved and shared across a DevSecOps workflow, noted Geron.

Of course, cybercriminals will have access to similar generative AI capabilities to help them identify potential attack vectors. As a result, organizations are now locked in an AI arms race with entities that have the resources needed to experiment with a wide range of emerging technologies. The AI genie is out of the bottle, and there is no going back.

Ultimately, cybersecurity teams may benefit more as AI helps level what today is a decidedly uneven playing field. Cybercriminal gangs have operationalized cyberattacks that continue to increase in volume and sophistication. If AI can be relied on to automate responses to low-level attacks, cybersecurity teams should have more time to investigate more nuanced threats.

Each cybersecurity team will need to decide for itself how to combine LLMs with its data to advance cybersecurity, but as far as the LLMs themselves are concerned, cybersecurity vendors appear more than willing to do the heavy lifting. It’s now more a question of ensuring that whatever data is exposed to an LLM doesn’t wind up being used to train a model that is open to anyone who cares to ask a question.

Source: https://securityboulevard.com/2023/11/orca-security-taps-amazon-for-generative-ai-expertise/