Deepfake Nudes – Can I Sue?
2023-11-10 21:00:57 | Author: securityboulevard.com

Recent articles in the Washington Post and the Wall Street Journal explored the problem of “fake nudes” — that is, AI-generated or AI-assisted sexually explicit images of real people. In a previous discussion, we explored the nature of the problem of virtual revenge porn — the limitations of various revenge porn statutes with respect to images that are wholly or partly generated by a computer. Here, we look at remedies — what can the victims of the creation and distribution of AI-generated pornographic images do, legally and practically, to stop their dissemination, and what is the likely outcome of such litigation?

For these purposes, we assume that the AI porn images depict an identifiable individual in a sexually explicit situation or image that they neither participated in nor created. This is different from the generic “revenge porn” image, where the person depicted may have created or participated in the creation of an image but objects to its dissemination, or where the image, while sexually explicit, was created without their effective knowledge or consent. Those situations typically involve sexual partners sharing explicit pictures or videos, with one of them then disseminating the images to friends, posting them on social media or uploading them to websites like OnlyFans without the other’s consent — or pictures of sexually explicit conduct taken at a party or in a bedroom without the effective consent of one or more of the participants, often due to the influence of drugs or alcohol.

The AI-generated “fake nudes” do not rely on any action or participation by the victim — who typically first learns of their existence from third parties (if they learn of them at all). The victim is often a celebrity: an obsessed or merely curious fan will use an AI program either to take a “normal” image of that celebrity and create a version of that person sans clothing, or to generate a wholly synthetic image of that celebrity not only naked, but engaged in whatever activity the AI-assisted creator desires.

Websites built around the term “fappening” (named for the August 31, 2014, leak of actual nude celebrity photos obtained by hacking their cloud accounts) carry thousands of supposed naked images of celebrities. Other fake nudes target former lovers. In the cases cited by the Washington Post and Wall Street Journal, teens and preteens are using AI tools to generate images of their classmates and others and disseminate them over the internet.

How AI Works

Just a quick primer on AI as it relates to generative images. AI — or, here, technically, machine learning (ML) — is a type of computer programming that trains an algorithm to recognize patterns and generate similar patterns based on a command. “Draw me a picture of a dog playing poker” would require the program to know what a “dog” is and what a poker game looks like. Indeed, it would have to know what an image is, how to generate an image, what style, context, etc., to use, and a host of other things. To understand these things, the computer has to use a “training set” of images — often billions of images. Based on the content of those images (some have dogs in them) and their descriptions (my dog Fido), the AI algorithm “learns” not only what a “dog” looks like but what different breeds of dogs look like. That training set may be (and often is) any available image on the public internet — including images from Facebook, LinkedIn, Instagram, X or others, which may be copyrighted. A recent federal court case in San Francisco preliminarily determined that using publicly posted images from the internet to train an AI model likely does not violate the copyright of those who posted the images, but that case is limited to its specific facts.

AI-generated porn images can be derivative or synthetic. For a derivative image, the actor takes a picture of, say, Jennifer Lawrence in a bikini downloaded from the internet, runs it through an AI website tool like “nudify me” (or dozens of others which I won’t promote here) and generates the same picture, but without the bikini. In fully synthetic mode, a program like Midjourney or DALL-E can be used to generate an image from a prompt like “photorealistic image of Jennifer Lawrence naked on a beach at sunset in 4k…,” and the AI program will generate it based on its knowledge of the terms “beach,” “sunset,” “naked” and, of course, Jennifer Lawrence. In both cases, however, the AI program “knows” what Jennifer Lawrence looks like the same way you or I do — from pictures of her in movies, TV, magazines and across the web. In each case, the resulting nude image did not exist until the AI program created it.
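
To make the “dog playing poker” example concrete, the sketch below shows what a text-to-image request looks like in code. It is a minimal illustration only, assuming the OpenAI Python SDK and an API key in the environment; the model name and prompt are illustrative, not a recommendation of any particular tool.

```python
# Minimal text-to-image sketch (assumes: `pip install openai` and an
# OPENAI_API_KEY environment variable; the model name is illustrative).
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

response = client.images.generate(
    model="dall-e-3",  # an example text-to-image model
    prompt="a photorealistic image of a dog playing poker",
    size="1024x1024",
    n=1,
)

# The service returns a URL (or base64 data) for the generated image.
print(response.data[0].url)
```

The point is simply that the model fills in everything the prompt does not specify (style, lighting, composition) from patterns learned during training.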

You Can’t Sue What You Can’t See

The first problem with AI-generated porn — and indeed any harmful content — is that the internet is a big place. The “victim” — or the victim’s representative — must know that the offending images exist and are being disseminated in order to take action. In the context of an eighth-grade classroom, rumors of the images’ existence will ultimately reach the victim. Indeed, much of the actionable harm may occur when he, she or they (usually but not exclusively she) learn about the dissemination.

Thus, in order to take action against deepfake porn, you have to know it exists, where it is and who is responsible for its creation, posting, hosting and dissemination. That is no small task. Most image detection programs are built around copyright protocols — I took a picture of Cleveland’s West Side Market and someone else is using that specific image (not one they took themselves). They use things like MD5 hash matching or digital watermarking to protect and match images — but those protocols search for and find “exact” or “nearly exact” matches to the original image. They won’t find new images “based on” the original image. Other programs use fuzzy matching or AI/ML-based similarity search to find images that “look like” the original. For example, if you put that Jennifer Lawrence bikini picture into Google image search, it may find other copies of that picture, but it may also find other pictures of Jennifer Lawrence or other pictures of women in bikinis. Commercial services like ZeroFox and others can scan the web for “infringing” content, but we are not yet at the point where we can effectively search for images that merely resemble an identifiable person.

Thus, if you were representing Jennifer Lawrence and wanted to scrub the internet of deepfake porn images of her, you would typically rely on the same types of tools used by those who want to see naked images of the actress — the various dark web sites, or publicly accessible porn sites (again, the unnamed “fappening” sites) with tags publicizing “Jennifer Lawrence nude…” This approach is hit or miss, but you will likely find the images that are the most public and the most accessible. These are the ones most likely to cause reputational damage.
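
The gap between exact matching and perceptual similarity can be seen in a short sketch. This is a hedged illustration, assuming the Pillow and ImageHash Python packages; the filenames are hypothetical placeholders.

```python
# Exact vs. perceptual image matching (assumes: `pip install Pillow imagehash`;
# the filenames are hypothetical placeholders).
import hashlib

import imagehash
from PIL import Image


def exact_fingerprint(path: str) -> str:
    """MD5 of the raw file bytes: matches only byte-identical copies."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()


def perceptual_fingerprint(path: str) -> imagehash.ImageHash:
    """Perceptual hash: visually similar images yield nearby hashes."""
    return imagehash.phash(Image.open(path))


original = perceptual_fingerprint("original_photo.jpg")
candidate = perceptual_fingerprint("suspected_repost.jpg")

# Subtracting two perceptual hashes gives a Hamming distance; a small
# distance suggests a re-encoded or lightly edited copy. A newly generated
# image merely "based on" the original usually will not register at all.
if original - candidate <= 8:
    print("Likely a copy or near-copy of the original image")
else:
    print("Not a near-duplicate; tools like this would miss it")
```

Which is exactly why detecting wholly new AI-generated likenesses remains hard: there is no original file to fingerprint.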

Unfortunately, this approach will not work for deepfake porn images of “regular” people. The posted images may or may not include the name of the person depicted. They may be disseminated person to person through direct messages (DMs), texts or encrypted messaging protocols like Signal. They may be shared through messaging on Slack. They may be posted on the tens of thousands of websites dedicated to “amateur” porn — including PornHub sites. The victim or their representative usually finds out about their existence (or location) through rumor or innuendo.

For the lawyer representing the victim of such revenge porn, the goals are (1) removal; (2) prevention of dissemination; (3) investigation and attribution; (4) compensation and/or punishment. These goals are often in conflict, however, as litigation surrounding the deepfake porn may raise the profile of the images themselves and exacerbate the harm to the victim.

Removal

Success at removing (or delisting or delinking) an AI-generated deepfake porn image is complicated. First, you have to understand what “removal” means. If the image is posted to a standalone website (e.g., PornHub), then you have to identify the party or parties responsible for that website. Many websites that operate in the U.S. have “DMCA agents” — that is, agents responsible for enforcing the provisions of the Digital Millennium Copyright Act, which generally requires removal of materials that are posted or hosted in violation of copyright law. However, it is not clear whether a fully deepfake image that is based on a training model that includes a copyrighted image (particularly a non-registered copyrighted image) comes under the DMCA — and the language of the statute (and the recent California federal AI case) suggests that it does not. The DMCA requires the person requesting the takedown to certify under penalty of perjury that they are the owner, or the agent of the owner, of a copyrighted work that is being infringed and to provide a copy of (or a link to) the allegedly infringed work. A digital doppelganger may or may not constitute an infringing derivative work, and therefore the DMCA may or may not apply.

Nevertheless, a takedown request may be directed to the DMCA agent — even if it is not based on the DMCA. It provides the victim’s counsel with at least an email address or webform to contact the hosting site. Other ways to contact the hosting site may include emails to abuse@offendingsite or similar addresses — there’s a lot of legwork that has to be done here. The use of protocols like “whois” may give you more information about the “owner” or registrant of the site.
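
As a rough illustration of the “whois” legwork, the sketch below pulls registrant and contact details for a hosting domain. It assumes the third-party python-whois package, and the domain shown is a placeholder, not a real target.

```python
# Registrant lookup sketch (assumes: `pip install python-whois`;
# the domain is a placeholder).
import whois

record = whois.whois("example.com")

# Registrar, registrant organization and listed contact emails can point
# you toward where to send a takedown or abuse request.
print("Registrar:      ", record.registrar)
print("Registrant org: ", record.get("org"))
print("Contact emails: ", record.emails)
```

In practice, privacy-proxy registrations often hide the registrant, which is part of why this step is legwork rather than a lookup.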

In addition to the DMCA, removal may be requested/demanded based on (1) general copyright law; (2) state or local criminal laws, including revenge porn statutes, deepfake statutes, harassment or threat statutes, etc.; (3) the website’s own terms of service or terms of use; (4) general tort law; (5) privacy-related tort law; (6) general privacy law; (7) specific privacy statutes; or (8) anything else you can think of. Of course, this presupposes that you can identify an individual or entity responsible for hosting/posting the content. Social media sites like X, Facebook and Instagram and “reputable” porn sites (if such a thing exists) like PornHub are usually very good at removing such offending content — often based simply on their terms of service or terms of use. In making such a demand/request, it’s often helpful to cite specific provisions of the terms of use/terms of service that the content violates, as well as statutory provisions in both the jurisdiction where the victim resides and the jurisdiction where the entity hosting the content operates. That jurisdiction may not be domestic, and you may have to research the pornography or copyright laws of Indonesia, Singapore or Belarus.

Delinking or Delisting

In addition to “removal,” you may also seek to have the offending data “delisted” or “delinked.” In this way, a general search for the content will not find it — at least in theory. Google and other search engines have new privacy tools that permit users to request the removal/delinking of personal information (doxxing) as well as of “imagery [which] shows you (or the individual you’re representing) nude, in a sexual act, or in an intimate state” and for which “[y]ou (or the individual you’re representing) didn’t consent to the imagery or the act and it was made publicly available or the imagery was made available online without your consent” and where “[y]ou aren’t currently being paid to commercialize this content online or elsewhere.” While the policy does not explicitly cover AI-generated images, it is likely that Google would honor a delinking request for such virtual images as well as real ones. Organizations like the Cyber Civil Rights Initiative provide links for requesting the delisting of revenge porn, which should (in theory) work for virtual revenge porn as well, and the Federal Trade Commission provides guidance on steps you can take to mitigate revenge porn.

Whack-a-Mole

AI-generated deepfake nudes, like other images, are often posted on various websites at the same time. Often, takedown orders – aimed at a specific website or hosting site — are like squeezing a plastic bag — the same images then appear somewhere else for an endless game of whack-a-mole. Indeed, the problem is worse for deepfake nudes, as they can be consistently generated and regenerated by AI programs. It’s not like you have to find one online — using any AI-generating tool, a user can create one out of whole cloth.

An interesting but untested legal question arises over whether the AI program itself (or, more accurately, the developer of that program) has liability either for contributory copyright infringement or, more generally, for aiding and abetting one of the privacy-related torts or criminal statutes. The theory would be that the developer of the AI knew — or reasonably should have known — that the program would be (or could be) used to further not only the creation of a derivative work but for the intentional infliction of emotional distress, intrusion into seclusion or portraying someone in a false light. Imagine going to a photographer, painter or other artist and asking them to create a realistic portrait of some person engaged in explicit sexual activity, with instructions that the resulting image be virtually indistinguishable from a real picture. Would the photographer, painter or artist have any liability to the person portrayed in the image when that image is used to the detriment of the person portrayed? For now, there are no cases holding an AI program or its developers liable under this theory.

DeepFake or Revenge Porn Statutes

Illinois, Virginia, California, Hawaii and Florida have each passed legislation that either redefines the nature of sexual harassment to include the use of deepfake naked images or creates a standalone cause of action or criminal statute prohibiting the creation or dissemination of such deepfake porn images. In other states, like Ohio, the criminal statute (with a private right to sue) is limited to the non-consensual dissemination of an “image” of a person who is “in a state of nudity or is engaged in a sexual act.” The statute defines an “image” as “a photograph, film, videotape, digital recording, or other depiction or portrayal of a person.” While this would likely (but not inevitably) include an AI-generated image, it’s not clear what the limit of this interpretation would be. Would it include, for example, a painting? A drawing? A cartoon? A stick figure with the caption “Mrs. Robinson”? The term “depiction or portrayal” suggests that it would, but because the statute imposes both civil and criminal penalties (triggering the “rule of lenity,” under which ambiguous criminal statutes are read narrowly) and because the dissemination of images is expressive conduct under the First Amendment, it’s not clear how a court would interpret this provision in the realm of AI-generated images.

Both deepfake and revenge porn statutes typically permit a private cause of action against the distributor or disseminator of offending images but typically do not create liability for the “mere carrier” or host of the image. Indeed, Section 230 of the Communications Decency Act provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” As such, a court would likely find that a site that hosts or permits the dissemination of revenge porn or AI-generated porn of an adult is not liable even if it does not take down the content. Indeed, last term, the Supreme Court held that sites like Google and Facebook were not liable even for hosting and recommendation practices alleged to promote terrorism.

As a practical matter, this may mean that you will have to initiate a “John Doe” lawsuit (if your state permits it) against the typically unknown person who posted the AI-generated porn and then use discovery and subpoenas to obtain the IP address from which the content was posted. An additional subpoena to the internet service provider (ISP) would then be needed to obtain subscriber, payment or other information about the account associated with the IP address — assuming that the actor was not taking steps (like the use of Tor or similar obfuscation techniques) to conceal their identity. Such lawsuits may be time-consuming and expensive and may never reveal the true identity of the person posting. However, they can be used to get an injunction ordering a carrier, ISP or other entity to remove, block or otherwise take down offending or actionable content.
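
As an illustration of that attribution step, the sketch below maps an IP address to the network operator whose records you would subpoena next. It assumes the third-party ipwhois package; the address shown is a well-known public resolver used purely as a stand-in, not any actual subject.

```python
# IP-to-network-operator sketch (assumes: `pip install ipwhois`;
# 8.8.8.8 is a well-known public resolver, used only as a stand-in).
import socket

from ipwhois import IPWhois

ip_address = "8.8.8.8"

# Reverse DNS sometimes reveals the operator's hostname naming scheme.
try:
    hostname, _, _ = socket.gethostbyaddr(ip_address)
    print("Reverse DNS:", hostname)
except socket.herror:
    print("No reverse DNS record")

# RDAP (the modern replacement for IP whois) identifies the registered
# network holder, i.e., the entity to subpoena for subscriber records.
result = IPWhois(ip_address).lookup_rdap(depth=1)
print("Network name:   ", result.get("network", {}).get("name"))
print("ASN description:", result.get("asn_description"))
```

Of course, this only tells you which provider holds the subscriber records; it does not identify the person, which is why the subpoena step follows.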

Other Crimes

In addition to violating deepfake or revenge porn statutes, the posting or dissemination of AI-generated porn could violate other criminal statutes, including the federal extortion statute or its state equivalent, the federal threat or harassment statutes or their state equivalents, or a menacing or stalking law if the facts support such an action. In recent years, however, the Supreme Court, in an effort to balance free speech protections against threat laws, has severely limited the kinds of online conduct that can be prosecuted to those that represent a “true threat,” where the speaker was at least reckless about whether the communication would be viewed as threatening.

The advantage of criminal cases is that the FBI or local law enforcement officials are responsible for the investigation, and tools like a “2703 order,” a grand jury subpoena, a search warrant or even a FISA Section 702 order can be used by law enforcement to obtain evidence in their investigation. However, different law enforcement entities may have different tolerances for prosecuting AI-based pornographic images, especially in light of the ambiguity of the current state of the law. Moreover, a criminal prosecution may result in the defendant’s incarceration or payment of a fine but often does not result in restitution to the victim, despite statutes that require such restitution.

The Special Problem of AI-Generated Child Porn

Special considerations apply to AI-generated pornographic images of either generic persons under the age of 18 or, more specifically, images of identifiable children in which an AI tool renders them to appear naked or in a sexually explicit manner.

In 2002, the Supreme Court struck down as unconstitutional a 1996 federal law that made it a crime to disseminate images that were not made with actual children but were “virtually indistinguishable” from actual child pornography — or, as the term is now used, child sexual abuse material (CSAM). The Court reasoned that child pornography may be banned despite the First Amendment (unlike adult pornography) only because an actual child is abused in the creation or dissemination of the image, and therefore prosecution requires proof that the image is of an actual minor and not a virtual one. However, the state of AI has changed dramatically since 1996, and virtual child porn can now depict actual minors in a way that causes true harm to them even if — and sometimes especially because — they were never actually photographed in the nude.

In the wake of that decision, Ashcroft v. Free Speech Coalition, Congress passed a new law that “makes it illegal for any person to knowingly produce, distribute, receive, or possess with intent to transfer or distribute visual representations, such as drawings, cartoons, or paintings that appear to depict minors engaged in sexually explicit conduct and are deemed obscene,” and another that defines CSAM as “any visual depiction of sexually explicit conduct involving a minor,” including “computer-generated images indistinguishable from an actual minor.” While the Supreme Court has not weighed in on the enforceability of either provision as applied to AI-generated images, the former statute focuses not on pornography (which, in the absence of a minor, is protected speech) but on obscenity, which can be lawfully prohibited. The latter definitional statute reflects the view that there is a difference between images that are indistinguishable from those of some generic minor and those that reflect an actual, identifiable minor — and that the latter may still be prohibited under the Ashcroft ruling. In September, state attorneys general urged Congress to address the problem.

The Ohio Supreme Court, in the wake of the Ashcroft ruling, upheld the constitutionality of Ohio’s child pornography law and rejected the argument that the government had to prove that the persons depicted in the images were real children. The Ohio court distinguished between images of wholly synthetic children and what the U.S. Supreme Court called “morphed images,” which “alter innocent pictures of real children so that the children appear to be engaged in sexual activity.” So AI-generated pornographic images involving identifiable children can likely be prohibited under Ashcroft because real children are harmed even if they never posed for the pictures.

Privacy Torts

There are various tort actions that might be invoked to prevent the dissemination of AI-generated nudes. These include (1) False light, (2) Defamation, (3) Intrusion into seclusion, (4) Intentional infliction of emotional distress, (5) Negligent infliction of emotional distress and (6) General negligence/Recklessness.

False Light:

In the world of AI-generated content, the tort of false light can come into play when AI-generated nude images are disseminated with a false or highly misleading narrative. For instance, imagine an AI program that generates explicit images of a public figure and falsely portrays them as engaging in inappropriate behavior. The operator of the AI program, who knows the portrayal is false, shares these images widely on social media. In such a scenario, a false light claim might be pursued.

To establish a false light claim, plaintiffs typically need to demonstrate that the defendant made a false or highly misleading statement or representation, that this statement was made publicly, that it placed the plaintiff in a false light and that the defendant acted with some level of fault, often involving negligence or recklessness. Here, the intent or recklessness of the AI program operator in disseminating the false content becomes a key element in a potential false light claim. Moreover, you would have to show that the AI-generated image itself was a “statement or representation” – that is, that the image implicitly asserts the person portrayed actually posed for it or did what it depicts.

Defamation:

Defamation involves the publication of false statements that harm a person’s reputation. In the context of AI-generated content, defamation claims could arise when false statements accompany the AI-generated nude images. For instance, if an AI program generates explicit images and includes false statements about criminal conduct or moral wrongdoing alongside the images, the operator’s intent or recklessness in making false statements could be pivotal in a defamation claim. A classic example of defamation per se is a statement that implies that someone has engaged in sexual misconduct or was unchaste. An AI image that falsely creates these impressions may constitute defamation. Defamation claims require the plaintiff to prove that a false statement was made about them, that the statement was published to a third party, that it was negligently or intentionally false and that the plaintiff suffered harm as a result.

Intrusion into Seclusion:

Intrusion into seclusion deals with the invasion of an individual’s private affairs without consent. In the realm of AI-generated content, this tort claim can arise when an individual intentionally directs an AI program to create nude images of another person without their consent and then shares these images online. This constitutes an intentional intrusion into the victim’s private affairs. To succeed in an intrusion into seclusion claim, plaintiffs typically need to demonstrate an intentional intrusion by the defendant, that the intrusion was into a private matter, and that the intrusion would be highly offensive to a reasonable person. However, in the area of AI-generated porn or naked images, the images give the illusion of intrusion – but that illusion itself may be (and likely is) invasive of the victim’s privacy.

Intentional/Negligent Infliction of Emotional Distress (IIED/NIED):

IIED involves intentionally or recklessly causing severe emotional distress. In the context of AI-generated nude images, a potential scenario could involve an individual who uses an AI program to create explicit images of a coworker and then shares them widely within the workplace. If the conduct is extreme and outrageous, leading to severe emotional distress for the coworker, the person directing the AI program might be liable for IIED. Beyond showing that the conduct was extreme and outrageous, the tort of IIED requires proof that the party intended to cause emotional distress (or acted recklessly as to that risk). A related tort, not recognized in all jurisdictions, applies where the person acts without regard to whether the creation and posting will cause distress – negligent infliction of emotional distress (NIED). In either case, the plaintiff would have to show specific damages resulting from the AI-generated nudes.

General Negligence/Recklessness:

General negligence or recklessness claims may apply in AI-related cases when careless or reckless behavior leads to harm. For example, if an AI program creator/operator fails to implement adequate safeguards to prevent the misuse of their technology for creating and disseminating non-consensual nude images, they might be held liable for negligence or recklessness. Similarly, if the person directing the AI program engages in reckless behavior in generating and disseminating nude images without consent, they may be found liable for general negligence or recklessness. These claims often require establishing a duty of care owed by the defendant, a breach of that duty through negligence or recklessness, a causal connection between the breach and the harm suffered and the actual harm or damages incurred by the plaintiff. The specific facts and circumstances of each case would determine the viability of such claims.

“Product” Liability

As noted, it is possible that the developer or distributor of the AI product used to create the AI porn could itself be held liable for negligence. In addition, there are various categories of “product” liability that might create such liability — assuming that AI software is a “product” and that the AI software developer owes some duty of due care to the person depicted. The AI generative program itself may be considered a product for purposes of product liability law, especially if it is sold or distributed to users. In that case, it might be subject to the same legal principles that govern physical products.

If the AI generative program has a flaw in its programming or functioning that leads to unintended and harmful results, such as the creation and dissemination of non-consensual explicit content, it could potentially be subject to liability under manufacturing defect theories. Even if this is not strictly a “defect,” a plaintiff could argue that the AI program could have (and therefore should have) had a process to detect and prevent the creation of such images.

If the AI generative program’s design inherently leads to harmful outcomes, such as the generation of explicit content without consent, it might be subject to liability under design defect principles. This could involve claims that the AI program was not reasonably safe when used as intended. Additionally, if the AI program does not provide adequate warnings or safeguards against its misuse for creating explicit content without consent, it may be subject to liability under failure-to-warn principles. Users could argue that they were not adequately informed of the potential harm associated with its use.

A key factor in evaluating product liability claims related to AI generative programs is whether the harm results from the intended use of the product or from misuse by the user. If the AI program is being used in a manner that the manufacturer or developer did not intend, and this misuse leads to harm, the liability analysis may differ.
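
To make the “adequate safeguards” point concrete, here is a deliberately simplified sketch of the kind of pre-generation check the design-defect and failure-to-warn arguments contemplate. The function names, word list and refusal policy are illustrative assumptions, not any vendor’s actual implementation.

```python
# Simplified pre-generation safeguard sketch: refuse prompts that combine
# an identifiable real person with a sexually explicit request. All names,
# lists and thresholds here are illustrative assumptions.
EXPLICIT_TERMS = {"nude", "naked", "explicit", "nsfw", "undress"}


def names_identifiable_person(prompt: str) -> bool:
    """Placeholder for a real named-entity or face-reference check."""
    # A production system would use NER, a known-person list, or checks on
    # uploaded reference photos; this stub just flags capitalized word pairs.
    words = prompt.split()
    return any(a[:1].isupper() and b[:1].isupper() for a, b in zip(words, words[1:]))


def is_prohibited(prompt: str) -> bool:
    lowered = prompt.lower()
    return any(term in lowered for term in EXPLICIT_TERMS) and names_identifiable_person(prompt)


def generate_image(prompt: str) -> str:
    if is_prohibited(prompt):
        raise ValueError("Refused: explicit depiction of an identifiable person")
    # ... hand off to the underlying image-generation model ...
    return f"[image generated for: {prompt}]"
```

Whether the absence of even this kind of filtering amounts to a design defect or a failure to warn is, as noted above, an open question.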

There is little doubt that AI-generated images can create great harm — from falsely depicting persons in sexual situations to falsely generated videos, speeches, etc. Who is liable when these programs are misused or go rogue has not yet been determined.


Source: https://securityboulevard.com/2023/11/deepfake-nudes-can-i-sue/