Get details on CoSAI and why Legit chose to be a part of this forum.
Surging adoption of generative artificial intelligence (GenAI) and large language models (LLMs) is both revolutionizing software development and exposing organizations to risk at an unprecedented pace. This is why, as CTO and Co-Founder of Legit Security, I am proud to announce that we are the first application security posture management (ASPM) vendor to join the newly formed Coalition for Secure AI (CoSAI) — an independent industry forum founded by Google that’s dedicated to advancing comprehensive security measures for AI.
We’re at the dawn of an AI revolution that will profoundly transform the business landscape — especially the way we build and deliver software. According to Gartner, Inc., by 2025, 80% of the product development lifecycle will make use of GenAI code generation, with developers acting as validators and orchestrators of back-end and front-end components and integrations. Nearly all of our enterprise customers, including many of the world’s largest development shops, have embraced AI and are leveraging it in various capacities throughout the software development lifecycle (SDLC) to build applications faster, more efficiently, and at unprecedented scale.
But as significant as AI’s positive effects are, so too are its risks.
The reality is that the vast majority of organizations aren’t prepared to secure or mitigate the risks of rapid AI adoption. In a recent survey of technology leaders, IBM found that only 24% of new AI projects actually included a security component. The implications could be dire unless organizations urgently prioritize security controls and best practices.
We’re already seeing just how vulnerable organizations are when AI models aren’t properly monitored, secured, or managed. Developers are unwittingly pulling malicious AI models from open-source registries (e.g., Hugging Face) into their own software projects. LLMs and AI models themselves contain bugs and vulnerabilities that can enable AI supply chain attacks, like the AI Jacking vulnerability Legit discovered earlier this year. Every day brings new reports of AI security issues, from prompt injection to inadvertent data disclosure to poor implementations and misconfigurations of LLMs in applications.
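For teams pulling open-source models today, a few lightweight controls go a long way. The Python sketch below is purely illustrative (it is not part of any Legit product) and assumes the transformers and huggingface_hub packages; the repository name is a hypothetical placeholder. It shows two defensive habits: pinning the revision you vetted, and refusing legacy pickle weight files, which can execute arbitrary code when deserialized, in favor of safetensors files, which contain only tensor data.

```python
from huggingface_hub import model_info
from transformers import AutoModel

REPO_ID = "example-org/example-model"  # hypothetical repo; vet the publisher first
REVISION = "main"  # better: pin the exact commit hash you reviewed

# Inspect repository metadata before downloading any weights.
info = model_info(REPO_ID, revision=REVISION)
print(f"author={info.author} last_modified={info.last_modified}")

# Refuse legacy pickle weights, which can run code on load;
# safetensors files are plain tensor data and carry no payload.
filenames = [sibling.rfilename for sibling in info.siblings]
if not any(name.endswith(".safetensors") for name in filenames):
    raise RuntimeError(f"{REPO_ID} ships no safetensors weights; refusing to load")

# Load only the pinned revision, and only from safetensors.
model = AutoModel.from_pretrained(REPO_ID, revision=REVISION, use_safetensors=True)
```

Pinning an exact, reviewed commit hash instead of a moving branch also means a later, malicious change to the repository fails to resolve rather than silently flowing into your build.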
AI security risks go well beyond open-source AI. Leading providers of commercial and proprietary AI products have experienced their fair share of security setbacks. For instance, OpenAI disclosed a vulnerability last year in ChatGPT’s information-collection capabilities that attackers could exploit to obtain customers’ secret keys and root passwords.
At Legit Security, we’re on a mission to secure the world’s software. By joining CoSAI, we’re aligning ourselves with a trailblazing coalition of industry-leading organizations that share our commitment to robust AI security. CoSAI’s focus on the following three key areas of AI security resonates deeply with our own strategic objectives:

- Software supply chain security for AI systems
- Preparing defenders for a changing cybersecurity landscape
- AI security governance
As AI continues to evolve, it’s imperative that the industry’s risk mitigation strategies advance together with it. Legit is committed to contributing to CoSAI’s mission and collaborating with industry peers to ensure the secure implementation, training, and use of AI. This way, all organizations, including our customers, are equipped with the latest guidance and tooling to safeguard their environments today and far into the future.
This is also why Legit is prioritizing the development of AI security capabilities throughout our ASPM platform, including the Legit AI Command Center we announced earlier today.
Together, we can build a safer, more secure future for AI in software development. We invite you to join us in this pivotal endeavor.
This is a Security Bloggers Network syndicated blog from Legit Security Blog authored by Liav Caspi. Read the original post at: https://www.legitsecurity.com/blog/why-legit-joined-coalition-for-secure-ai-cosai