The President’s EO on AI – What it Does and Why it Won’t Work
2023-11-3 21:0:47 Author: securityboulevard.com(查看原文) 阅读量:6 收藏

On October 30, 2023, president Biden issued an executive order on a topic that seems to scare a lot of people – Artificial intelligence. Like most executive orders, there’s a lot of fluff in this one – aspirations about doing good and not doing evil, but the EO touches on some — but not all — of the concerns that people might have about our future robotic overlords. For the most part, the EO lacks the effect of law, does not mandate much of anything and overlooks some of the more difficult issues involving AI — including many of the legal issues associated with bias, prejudice, hallucinations, derivative works, copyright infringement, ownership of AI-generated intellectual property and liability when AI-created materials fail. And that’s just scratching the surface.

For companies developing or using AI-based products or services, the EO and the related AI “Bill of Rights” set out a framework for identifying and resolving issues that scare people about the nascent technology. In particular, they address issues related to transparency, bias, privacy, security, notice and technical and other standards for the use or deployment of AI-based technologies. In essence, companies and government agencies using AI will be required to provide certain assurances and meet certain yet-to-be-developed standards—particularly if the AI is to be deployed in the medical or national security fields.

The EO attempts to address five issues broadly related to the application of artificial intelligence in society. These are: Safety and security, privacy, civil rights and AI bias, job security “responsible” use of AI. Each of these present challenges which are only partly addressed by the order.

The executive order is a good idea, but the proposals are more focused on studying the problem than attempting to solve it. It’s also all carrot and very little stick. That’s fine for now—since we cannot completely agree on the nature of the “problem” with AI (or, for that matter, exactly what AI is). Nevertheless, the EO should also address questions of liability, responsibility and duties with respect to AI developers, software engineers and those who simply use or deploy AI-based products or services—an area of law which has not yet been developed. The EO does little to move the needle in this regard.

“Secure” AI?

In the area of “security,” the EO would work with industry leaders, universities and the National Institute of Standards and Technology (NIST) to “develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy” including standards for “red teaming” AI programs and specifically addressing the application of AI programs to assess “chemical, biological, radiological, nuclear, and cybersecurity risks.” Cool. Cool. Cool. Except that “AI,” like “cyber” is not a single technology or a single application. It’s like trying to assess the risks associated with writing, or communicating by telephone or having people meet with other people. There are many kinds of AI presently and new kinds being developed and assisting the appropriate “standard” for all AI may be a task worthy of Sisyphus. The EO also addressed setting standards to protect against using AI for nefarious things like bioengineering (biological synthesis) and requiring life science companies to adopt the NIST standards as a condition of federal funding.

AI-Generated Fraud

The presidential document also proposes that the U.S. Department of Commerce (of which NIST is a part) develop standards and best practices for detecting AI-generated content and authenticating official content. The government proposes not only to develop and implement technologies for authentication and developing the provenance for materials generated by AI but also proposes a requirement of a kind of “Scarlet AI”—a requirement to “clearly label AI-generated content,” although it is not clear how this will work in real life. For example, how much of a piece of work must be generated by artificial intelligence to require labeling? If an author uses it to help write a book, an email, a tweet, a screenplay or song lyrics, must the work now be labeled? The EO also proposes that federal agencies deploy these tools to “know that the communications they receive from their government are authentic,” but this might limit the ability of the government to send out routine communications and notices using AI tools.

DevOps Unbound Podcast

AI and Cybersecurity

AI has a tremendous capacity to be used both offensively and defensively in areas related to national security. It can be used to identify cybersecurity threats and vulnerabilities, to develop exploits (including those directed at social engineering) and to select appropriate targets for attacks. On the defensive side, artificial intelligence programs can develop fixes for vulnerabilities, create targeted training programs and tabletop exercises and automate some of the more mundane cybersecurity tasks. That is, of course, if the program is working properly.

The EO proposes to “[e]stablish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software…” In other words, relying on AI to fix human-developed (well, cobbled together) buggy code since there’s no better fix for buggy code than more buggy code that we don’t understand. The EO touts the robotics programs as an effort to “harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.” Sure. It will do that. I’ve still got the greatest enthusiasm and confidence in the mission and I want to help you.

Protecting Americans’ Privacy

The executive order also calls on Congress to enact comprehensive data privacy laws and directs agencies to use AI in a manner that will not scoop up and digest mountains of personal data (like regulating data brokers and their massive databases) and to strengthen what the order calls “privacy preserving techniques” (including more advanced encryption) by funding a research coordination network and funding new initiatives within the NSF to advance rapid breakthroughs and development. However, there is no attached legislation or legislative proposals to actually do these things. Moreover, powerful encryption is a double-edged sword, protecting personal data from brokers and hackers, but also protecting attackers and terrorists (and common criminals) from the prying eyes of law enforcement and intelligence agencies. I’m just sayin’.

Advancing Equity and Civil Rights

The EO also vaguely proposes to regulate the discriminatory use or impact of poorly trained AI programs—for example, to guide federal aid recipients and contractors on how to avoid biased and discriminatory AI, and to have DOJ’s civil rights office provide training, technical assistance, and coordination with other civil rights organizations.

Significantly, the order calls out law enforcement agencies themselves, who have a long history of using these programs to determine things like how and where to deploy police resources, who to arrest, where to concentrate law enforcement, how to predict “future violations” and to have sentences, detentions, bail and bond determinations made by AI-based algorithms. The DOJ also resisted efforts by prisoners discriminated against by these AI programs to see the data or code upon which the computers were making their determination. The new AI guidance does not specifically address this transparency issue, but suggests that law enforcement agencies will be forced to accept “best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.”

Consumers and the New AI Workplace

The executive order also recognizes that AI will be used for a wide variety of activities including health care and health care administration, education, employment and a host of other areas. To that end, the order directs the federal government to “[d]evelop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection.” Presumably, after these “best practices” are developed, someone will implement them, right?

The government also calls for the hiring of new AI professionals and training federal workers about the nature and use of AI—all great ideas, but this assumes that we have people with that skill set in the pipeline. That’s a dubious assumption since ChatGPT and generative AI are extremely recent developments.

All told, there are some good principles here. For companies deploying AI-related tools, the mantra “Doveryai, no proveryai,” applies. Trust, but verify. One problem, though, is that it is often impossible to “verify” what the application is doing since the application itself may not know how it knows what it thinks it knows. All of these proposals are necessary, but it’s not clear that they will be sufficient. Until then, we continue to study the problem.

Recent Articles By Author


文章来源: https://securityboulevard.com/2023/11/the-presidents-eo-on-ai-what-it-does-and-why-it-wont-work/
如有侵权请联系:admin#unsafe.sh