In recent years, advancements in artificial intelligence (AI) have given rise to powerful tools like StyleGAN and sophisticated language models such as ChatGPT. These technologies can create hyper-realistic images and conversations, blurring the line between authentic human presence and synthetic creations. While this progress opens new possibilities for creativity and automation, it also introduces profound ethical and moral dilemmas, especially when these capabilities are harnessed by entities like the nation state actors for strategic operations.
According to a recent Intercept article, the DoD has explored the use of StyleGAN to create artificial online personas. You can read the original request here.
These personas are designed to be indistinguishable from real individuals, passing both human scrutiny and machine learning models that detect fakes. Here’s the initial list of criteria:
The desire for this isn’t exactly new. What is interesting is that the level of detail required to pass scrutiny implies that these will primarily be used for social media “influencer” type accounts, where the location specific audio and background are just as convincing as the smile and lighting.
As advancements in tech, particularly AI, make attackers more dangerous, organizations and security vendors are trying to keep up by also leveraging AI.
However, the weak point remains human. It always has and always will be people who are often the gateway to that first lateral movement. Opening one email attachment puts all the other layered defences a few seconds behind, and these days those few seconds are making all the difference.
This scenario, where social engineering leads to compromise and breaches, is going to become a lot more common with fake personas, who may be used to establish long term relationships (some people are already voluntarily engaging in relationships with AI) with their targets.
When we consider this situation, where the DoD essentially has a wish list to be full filled by a vendor, three things are going to happen:
The use of such technology certainly raises significant moral and ethical concerns, however the erosion of public trust may actually be an even greater risk.
The development of chatbots like ChatGPT that can pass the Turing Test—a measure of a machine’s ability to exhibit intelligent behavior indistinguishable from a human—further complicates the issue. If a bot can convincingly simulate human interaction, and if AI-generated personas can appear visually authentic, the line between human and machine becomes increasingly blurred. This convergence of technologies could lead to a digital environment where it is nearly impossible to trust any interaction as genuine.
What happens to society when people no longer believe anything they see?
The DoD’s rationale for using such technology is this—it offers a strategic advantage in intelligence gathering and information operations. It allows for non-intrusive intelligence collection, reducing risks to personnel and enabling operations in hostile environments without physical presence. However, this strategy involves significant trade-offs.
The use of fake personas to influence public discourse raises concerns about the effects of deception on a populace. Using AI-generated personas to steer conversations or gather information covertly can be seen as a significant infringement on free speech and digital autonomy. In some scenarios it could also lead to entrapment.
There is also the potential psychological impact on individuals who discover that they have been interacting with synthetic personas. Trust in digital platforms and online communities may diminish, leading to skepticism about interactions on social media. Beyond that, trust in real people will also diminish.
If government agencies deploy such technology, it may set a precedent that encourages other state and non-state actors to do the same. This could result in an arms race of AI-generated content, contributing to a misinformation crisis where distinguishing truth from fiction becomes increasingly difficult.
As these technologies advance, there is a pressing need for transparent governance and ethical guidelines. Policymakers must address questions such as:
Clear regulations and international agreements will also be necessary to establish norms around the use of AI in digital manipulation, ensuring that such tools are not used to undermine democratic processes or human rights.
The rise of technologies like StyleGAN and ChatGPT represents a pivotal moment in the relationship between humans and AI. The potential misuse of AI-generated personas by entities like the DoD creates a moral and ethical labyrinth that society must navigate carefully. While the strategic benefits are undeniable, the risks to public trust and digital integrity are equally critical.
The challenge lies in balancing national security interests with the principles of transparency and ethical AI usage. Failing to do so could lead us into a future where truth itself becomes malleable, and where our interactions—online and off—are shadowed by uncertainty.
*** This is a Security Bloggers Network syndicated blog from Berry Networks authored by David Michael Berry. Read the original post at: https://berry-networks.com/2024/10/18/ai-generated-personas-trust-and-deception/