The traditional CIA Triad (Confidentiality, Integrity, and Availability) has long been a cornerstone of information security, providing a solid framework to protect data and systems. However, the rising presence of AI in our lives introduces new challenges that extend beyond the current scope of the CIA Triad. In this AI mini-series, we will analyse the adequacy of the CIA Triad in addressing AI-specific challenges and propose potential additions to this framework.
Artificial intelligence (AI) has been the hottest topic in Information Technology (IT) over the past two years. However, AI goes much further back than the recent media attention suggests. Alan Turing described its conceptual foundations decades ago, and the first AI-inspired programs followed just a few years later. They proved to be interesting concepts, but never saw broad adoption because of limitations in computing power and data availability. The recent generative AI race has shown that these limitations have disappeared: computing power and available data have grown exponentially, enabling decade-old AI concepts to power household applications.
While the conceptual foundations of AI date back to the 1950s, recent developments, particularly in the field of Machine Learning (ML), have put AI firmly into the spotlight. AI that leverages ML processes large amounts of data to provide solutions that were unimaginable just a few years ago.
The AI Act, first proposed in 2021, has recently entered into force, with the final version giving extensive attention to a type of AI called “general-purpose AI”. This is a form of AI capable of performing a wide range of tasks that can be integrated into various other systems and applications, and it leverages ML to accomplish this. The added focus on general-purpose AI largely came about due to the impact of ChatGPT.
Using AI clearly creates significant value, but it also introduces new risks. Consequently, our security solutions should evolve too. Because ML-based AI processes large amounts of data on its own, it deserves specific attention from a Confidentiality perspective, as data privacy concerns in society have grown, and from an Integrity perspective, as the impact of data poisoning can be tremendous.
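To make the Integrity concern concrete, below is a minimal, hypothetical sketch (using scikit-learn on synthetic data; the dataset, fractions and numbers are purely illustrative, not drawn from any real incident) of how flipping the labels of a small fraction of training samples, a simple form of data poisoning, can measurably degrade a model:

```python
# Minimal data-poisoning illustration: flip a fraction of training labels
# and compare model accuracy. Purely illustrative, synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

def train_and_score(labels):
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline
print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# Poisoned run: flip 20% of the training labels at random
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```

Real-world poisoning attacks are typically far more subtle than random label flipping, which is precisely why Integrity controls over training data matter.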
These risks may call for changes to existing concepts such as the CIA Triad, which may no longer be sufficient as a framework to cover the security of AI implementations. AI based on ML requires models to be trained, which introduces a new source of uncertainty. Traditionally, computers were predictable processors of data, and the CIA Triad covered the risks associated with that predictable compute paradigm. With ML, computers learn by assimilating huge volumes of data, and both the quality of the data and the quality of the learning process are more difficult to establish. The information security pillars therefore need expansion to cope with the challenges of a less predictable compute model. The European AI Act reflects this need for change.
In response to these new challenges, which essentially mark the transition from a predictable computing model to a more powerful but less predictable one, we advise treating three additional pillars of information security, Transparency, Traceability and Non-Repudiation, as equals to the historic CIA Triad of Confidentiality, Integrity and Availability. This results in an Information Security Hexad.
While Transparency and Traceability are closely related, they are fundamentally different and are therefore considered separate pillars. Transparency involves making an AI system’s processes, the data it uses and its decisions clear to end users. This is especially relevant for verifying whether models operate within ethical and legal boundaries. Traceability ensures that every aspect of the AI system, including data sources, model development, and decision pathways, can be tracked and documented. This is crucial for auditing, understanding, and verifying an AI system’s operations throughout its lifecycle. Non-repudiation, on the other hand, ensures that actions taken by a system cannot be denied after they occur, which is essential to hold entities accountable for the actions taken by their AI.
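As a hypothetical illustration of how Traceability and Non-Repudiation could be supported in practice, the sketch below logs each model decision together with its input, model version, timestamp and an HMAC signature over the record. All function and field names here are our own assumptions, not part of any standard or of the AI Act itself:

```python
# Hypothetical audit-logging sketch for Traceability and Non-Repudiation.
# Each decision is recorded with its input, model version, timestamp and
# an HMAC signature so the record is tamper-evident and cannot be denied.
import hmac, hashlib, json
from datetime import datetime, timezone

SIGNING_KEY = b"replace-with-a-managed-secret"  # e.g. sourced from an HSM or KMS

def sign(record: dict) -> str:
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def log_decision(model_version: str, features: dict, decision: str) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # traceability: which model produced this
        "features": features,             # traceability: what input it saw
        "decision": decision,             # the output to be accounted for
    }
    record["signature"] = sign(record)    # non-repudiation: tamper-evident record
    return record

def verify(record: dict) -> bool:
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    return hmac.compare_digest(sign(unsigned), record["signature"])

entry = log_decision("credit-model-v1.3", {"income": 42000, "age": 35}, "approved")
print(entry)
print("signature valid:", verify(entry))
```

In a real deployment, an asymmetric signature (so the logging party cannot forge its own records) and append-only storage would be preferable; the HMAC here only keeps the sketch self-contained.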
Following this introductory post, subsequent posts will introduce each of these pillars and the risk implications that drive their relevance in a cybersecurity framework fit to deal with the challenges of ML-driven AI. Please join us on this journey as we address these emerging challenges and provide insights on how to safeguard an AI-filled future with appropriate conceptual frameworks.
At NVISO, we are well aware of the security challenges in the development and use of general-purpose and high-risk AI systems. As a pure-play cybersecurity company, we can advise your organisation on how to comply with the AI Act and on extending your current governance and risk management practices in line with security standards such as ISO/IEC 42001. We also offer services to define security controls tailored to your specific environment, including your AI systems. In addition, we continuously research and innovate in leveraging AI capabilities in our offensive and defensive cybersecurity services.
Maxou Van Lauwe is a Cybersecurity Architect within NVISO’s Cyber Strategy and Architecture team. As a recent graduate in industrial engineering specialising in smart applications, he is well aware of the AI landscape. Furthermore, through client work and internal research, he has acquired a solid understanding of how to secure AI implementations while keeping the relevant regulations in mind.