Exploring AI: Regulations and Threat Mitigation

It’s something we’ve all heard repeatedly, but it’s a point worth hammering home: AI will shape the future of humanity. This fact is not lost on policymakers, and they are reacting accordingly.

In October 2022, the US released its Blueprint for an AI Bill of Rights. While the Blueprint is still just that, a blueprint with no legal backing, the fact that the US chose to call this framework a “Bill of Rights” reflects how seriously the US Government takes AI.

Similarly, in May 2024, the European Council approved the Artificial Intelligence Act, the first-ever comprehensive legal framework on AI. The Act was published in the EU Official Journal on 12 July 2024 and entered into force on 1 August 2024. It imposes requirements on companies designing and/or using AI in the European Union and provides a way of assessing risk levels.

But let’s look at them both in a little more detail.

Why We Need an AI Regulation

These regulations have been introduced partly in response to several controversial AI incidents that have taken place over the past few years, and they target the areas of vulnerability those incidents exposed. In February 2024, for example, Air Canada was ordered to pay damages to a passenger after its virtual assistant gave him incorrect information about flight fares; the Canadian tribunal hearing the case found that the airline had not taken “reasonable care to ensure its chatbot was accurate.” The US Blueprint aims to ensure that AI-based chatbots (and other AI systems) are reliable and that businesses understand they are accountable for decisions made by these systems.

Similarly, the “safe and effective systems” section of the Blueprint aims to prevent, in part, ineffective systems such as DPD’s ill-fated customer service chatbot. In early 2024, X user Ashley Beauchamp found that the chatbot could not answer even the simplest customer service queries but could write a rudimentary poem about how terrible it was.

The EU AI Act, for its part, emphasizes that AI is still in its relative infancy and should be treated with appropriate caution (in terms of both inputs and outputs) based on usage and deployment risk.

Blueprint for an AI Bill of Rights

The US Blueprint for an AI Bill of Rights outlines five principles and practices for ensuring the safe and equitable use of automated systems:

  • Safe and Effective Systems – Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use.
  • Algorithmic Discrimination Protections – Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.
  • Data Privacy – Individuals should be protected from abusive data practices via built-in protections and should have agency over how their personal data is used. Designers, developers, and deployers should seek individuals’ permission with a brief and accessible consent request and respect their decisions about data collection, use, access, transfer, and deletion.
  • Notice and Explanation – Individuals should be notified when an automated system is used and given a clear explanation of how and why it contributes to outcomes that impact them.
  • Human Alternatives, Consideration, and Fallback – Individuals should be able to opt-out where appropriate and have access to someone who can quickly consider and remediate any problems.

Artificial Intelligence Act (EU AI Act)

The Artificial Intelligence Act (EU AI Act) takes a risk-based approach to regulating AI, defining four levels of risk for AI systems (an illustrative sketch of the taxonomy follows the list):

  • Unacceptable risk – AI systems considered a clear threat to people’s safety, livelihoods, and rights will be banned. This covers a wide range of scenarios, from social scoring by governments to toys using voice assistance that encourage dangerous behavior.
  • High risk – This includes but is not limited to AI technology used in critical infrastructures, essential private and public services, and migration, asylum, and border control management. These AI systems are subject to the following obligations before they can be put on the market:
    • adequate risk assessment and mitigation systems;
    • high quality of the datasets feeding the system to minimize risks and discriminatory outcomes;
    • logging of activity to ensure traceability of results;
    • detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
    • clear and adequate information to the deployer;
    • appropriate human oversight measures to minimize risk;
    • high level of robustness, security, and accuracy.
  • Limited risk – Relates primarily to transparency in AI usage; for example, ensuring that humans are informed when interacting with a chatbot or that AI-generated content is identifiable.
  • Minimal risk – Minimal-risk AI systems, such as AI-enabled video games or spam filters, are allowed freely under the Act.
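
To make the taxonomy concrete, here is a small, non-normative sketch that maps example use cases onto the four tiers. The example systems and the lookup logic are illustrative assumptions only; real classification under the Act requires legal assessment:

```python
# Illustrative (non-normative) mapping of example use cases onto the EU AI
# Act's four risk tiers. Entries are assumptions for demonstration only.
RISK_TIERS = {
    "unacceptable": {"government social scoring", "dangerous voice-assisted toy"},
    "high": {"critical infrastructure control", "border control management"},
    "limited": {"customer-facing chatbot", "AI-generated content labeling"},
    "minimal": {"AI-enabled video game", "spam filter"},
}

def risk_tier(use_case: str) -> str:
    """Return the tier for a known example, else defer to human review."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified: requires legal assessment"

print(risk_tier("spam filter"))          # -> minimal
print(risk_tier("emotion recognition"))  # -> unclassified: requires legal assessment
```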

The common denominator in both frameworks is the concept of security and protection. Let’s examine the most common threats to AI platforms and how businesses can mitigate them.

Threats to AI Platforms

Emerging threats to AI platforms loom large over cybersecurity teams, leaving their enterprise environments vulnerable to attack and data loss. The two most common threats to AI systems are:

  • Model stealing – When an attacker attempts to steal an AI platform’s machine learning model. Common techniques include querying the target model and using its responses to train a replica (a minimal sketch of this approach follows the list), or stealing the training data and using it to train a new model that reproduces the target model’s behavior. This would allow unauthorized parties to use a machine learning model without a license, potentially resulting in lost revenue.
  • Data poisoning – Introducing malicious data to corrupt an AI model’s training data so that the model learns biased or incorrect information. The resulting flaws create vulnerabilities that attackers can exploit to launch future attacks on the intentionally corrupted model or to extract sensitive data without authorization.
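
The query-based variant of model stealing is often called model extraction. Below is a minimal sketch, assuming only black-box query access to the victim; the target_predict function is a local stand-in for what would, in practice, be a remote inference API:

```python
# Model-extraction sketch: the attacker queries a black-box target and
# trains a local surrogate on the harvested input/label pairs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for the victim model behind an API (internals unknown to the attacker).
_victim = LogisticRegression().fit(rng.normal(size=(200, 4)),
                                   rng.integers(0, 2, 200))

def target_predict(x):
    return _victim.predict(x)  # in a real attack: a metered API call

queries = rng.normal(size=(5000, 4))   # 1. sample plausible inputs
labels = target_predict(queries)       # 2. harvest the target's answers
surrogate = DecisionTreeClassifier(max_depth=6).fit(queries, labels)  # 3. clone

test = rng.normal(size=(1000, 4))
agreement = (surrogate.predict(test) == target_predict(test)).mean()
print(f"Surrogate agrees with the target on {agreement:.1%} of fresh inputs")
```

Rate limiting and query monitoring make the harvesting step far more expensive, which is one reason the mitigations below focus on controlling access.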

Mitigating Threats to AI Platforms

To protect against model theft, organizations must control access to their machine learning models. The best way to achieve this is through encryption and the safeguarding of the associated encryption keys. Encrypting a model and rigorously controlling access to the keys, based on strong authentication, roles, and policies, ensures that only authorized users can access it, meaning that attackers cannot analyze the model’s structure and, ultimately, replicate it. Similarly, a robust licensing system will prevent unauthorized users from accessing the model.
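
A minimal sketch of this encrypt-and-gate pattern, assuming a hypothetical fetch_key_for check that stands in for a real key management service enforcing authentication, roles, and policy (Fernet, from the Python cryptography package, stands in for whatever envelope-encryption scheme a production platform would use):

```python
# Sketch: model encrypted at rest; the key is released only to authorized roles.
import pickle
from cryptography.fernet import Fernet

KEY = Fernet.generate_key()        # in practice: generated and held by a KMS
model = {"weights": [0.1, 0.2]}    # placeholder for a trained model object

ciphertext = Fernet(KEY).encrypt(pickle.dumps(model))  # what lands on disk

def fetch_key_for(role: str) -> bytes:
    # Hypothetical KMS gate: authentication, roles, and policy decide
    # whether the wrapping key is ever released to the caller.
    if role != "model-service":
        raise PermissionError("role not authorized to unwrap the model key")
    return KEY

# Only an authorized role can decrypt and load the model.
restored = pickle.loads(Fernet(fetch_key_for("model-service")).decrypt(ciphertext))
```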

To mitigate data poisoning, organizations must monitor, evaluate, and debug AI software. This involves carefully selecting, verifying, and cleaning data before using it for training or testing AI models, and refraining from using untrusted or uncontrolled data sources such as crowdsourcing or web scraping. Strong data governance is also essential for preventing data poisoning attacks.
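
One simple hygiene check of this kind flags training samples that are statistical outliers relative to the rest of the batch. The z-score filter below is an illustrative heuristic only, not a complete poisoning defense:

```python
# Flag rows whose largest per-feature z-score is extreme, then train on the rest.
import numpy as np

def flag_outliers(X: np.ndarray, z_thresh: float = 4.0) -> np.ndarray:
    mu, sigma = X.mean(axis=0), X.std(axis=0) + 1e-9  # avoid divide-by-zero
    z = np.abs((X - mu) / sigma)
    return z.max(axis=1) > z_thresh    # True = suspicious sample

rng = np.random.default_rng(1)
batch = rng.normal(size=(1000, 8))
batch[:5] += 50.0                      # crude injected poison for the demo

mask = flag_outliers(batch)
print(f"Flagged {mask.sum()} suspicious samples for human review")
X_train = batch[~mask]                 # train only on vetted data
```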

Organizations should consider building Confidential AI models, which would allow them to run AI processes within a trusted confidential computing environment with complete confidence. In this scheme, the security and integrity of the hardware execution environment, as well as the data and applications running inside, are independently attested by a third party to confirm that they have not been compromised.
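
Conceptually, the AI workload proceeds only if a signed statement about the execution environment verifies against the attestation authority’s public key. The sketch below illustrates that gate with a generic Ed25519 signature; the claim format and all names are invented for illustration and do not reflect the Intel Trust Authority or Thales APIs:

```python
# Attestation-gate sketch: run the AI workload only if an independent
# authority's signed claim about the environment checks out.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for the attestation authority issuing a signed claim.
authority = Ed25519PrivateKey.generate()
claim = json.dumps({"tee": "TDX", "measurements_ok": True}).encode()
signature = authority.sign(claim)

def environment_is_trusted(claim: bytes, sig: bytes, pub) -> bool:
    try:
        pub.verify(sig, claim)    # is the statement authentic and untampered?
    except InvalidSignature:
        return False
    return json.loads(claim).get("measurements_ok", False)

if environment_is_trusted(claim, signature, authority.public_key()):
    print("Attestation verified: release keys and start the AI workload")
```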

Thales is enabling its CipherTrust Data Security Platform (CDSP) to support End-To-End Data Protection (E2EDP) on Intel TDX chips that support the Confidential Computing (CC) services offered by Google Cloud and Microsoft Azure. In this architecture, cloud-independent attestation is provided by Intel Trust Authority and subsequently verified by Thales.

Similarly, Thales and Imperva have joined forces to provide security for a world powered by the cloud, data, and software. Check out our Cloud Protection and Licensing solutions to protect your AI system today.



*** This is a Security Bloggers Network syndicated blog from the Thales CPL Blog Feed, authored by Tim Phipps. Read the original post at: https://cpl.thalesgroup.com/blog/data-security/exploring-ai-regulations-threat-mitigation

