DOJ Shutters Massive Russian Bot Farm Spreading Disinformation
2024-07-11

A Russian bot farm operation run with the Kremlin’s support used AI software to create almost 1,000 fake social media profiles to spread disinformation in the United States and other countries about Russia’s war with Ukraine and other political and social issues.

The U.S. Justice Department (DOJ) and FBI, along with law enforcement agencies from Canada and the Netherlands, seized two domain names and 968 social media accounts on X (formerly Twitter) that were used to widely spread pro-Russian messages. The accounts were made to look like they belonged to Americans who were posting the Russian-friendly content, such as videos of Russian President Vladimir Putin justifying the country’s illegal invasion of its smaller neighbor in 2022.

In 2022, officials with RT – a Russian state media outlet – were looking for new ways to distribute their information. That year, an RT editor launched the operation, which included developing bespoke AI software used to create and run the bot farm, complete with false online personas for social media accounts, according to the DOJ.

In early 2023, an officer with Russia’s Federal Security Service (FSB) – with the approval and financial support of Russia’s government – created a private intelligence organization (PIO) to leverage the bot farm, creating social media accounts that would serve as conduits for spreading disinformation. According to an affidavit filed in support of the DOJ’s actions, members of the organization included a deputy editor of RT and other employees.

Advancing Russia’s Message

“Russia intended to use this bot farm to disseminate AI-generated foreign disinformation, scaling their work with the assistance of AI to undermine our partners in Ukraine and influence geopolitical narratives favorable to the Russian government,” FBI Director Christopher Wray said in a statement.

“The true purpose of the P.I.O. was to advance the mission of the FSB and the Russian government,” an FBI special agent wrote in the affidavit. “One manner in which the P.I.O. accomplished this mission was by attempting to sow discord in the United States by spreading misinformation through the social media accounts created by the bot farm.”

The RT employee setting up the bot farm bought two domain names – “mlrtr[.]com” and “otanmail[.]com” – from an Arizona company called Namecheap. Those domain names were used to create two email servers “that ultimately allowed them to create fictitious social media accounts using the bot farm software,” the DOJ wrote.

Enter AI

According to a joint cybersecurity advisory from the law enforcement agencies, RT has used an AI-enabled bot farm generation and management software package called Meliorator since 2022 to spread disinformation in and about the United States, Israel, and several European countries, including Spain, Germany, Poland, the Netherlands, and Ukraine.

“Meliorator was designed to be used on social media networks to create ‘authentic’ appearing personas en masse, allowing for the propagation of disinformation, which could assist Russia in exacerbating discord and trying to alter public opinion as part of information operations,” the agencies wrote.

As of last month, Meliorator only worked on X, but analysts investigating the software said its functionality could likely be extended to other social media platforms. Meliorator includes an administrator panel called “Brigadir” and a seeding tool called “Taras.” Users connected to the software via a virtual network computing (VNC) connection.

Users of the bot farm created bot identities – or “souls” – by selecting specific archetypes. Those archetypes were then used to build ideologically aligned groups of bots via an algorithm that determined each persona’s location, political leanings, and biographical data.
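The archetype-driven flow described above can be pictured as a small template-expansion step. The sketch below is purely illustrative – the archetype names, fields, and bio templates are assumptions for the example, not Meliorator’s actual code or data:

```python
import random

# Hypothetical archetype definitions; real archetypes, fields, and wording
# in Meliorator are not public in this form.
ARCHETYPES = {
    "us_patriot": {
        "locations": ["Texas", "Florida", "Ohio"],
        "leaning": "conservative",
        "bio_templates": [
            "Proud American. {hobby} lover from {location}.",
            "{hobby} fan. Born and raised in {location}.",
        ],
        "hobbies": ["BBQ", "fishing", "football"],
    },
}

def build_persona(archetype_name: str, rng: random.Random) -> dict:
    """Derive a fake persona's location, leaning, and bio from its archetype."""
    arch = ARCHETYPES[archetype_name]
    location = rng.choice(arch["locations"])
    hobby = rng.choice(arch["hobbies"])
    bio = rng.choice(arch["bio_templates"]).format(hobby=hobby, location=location)
    return {
        "archetype": archetype_name,
        "location": location,
        "political_leaning": arch["leaning"],
        "bio": bio,
    }

persona = build_persona("us_patriot", random.Random(0))
```

Generating thousands of such records from a handful of archetypes is what lets an operation of this kind produce “ideologically aligned” groups of accounts at scale.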

“Once Taras creates the identity, it is registered on the social media platform,” the agencies wrote. “The identities are stored using a MongoDB, which can allow for ad hoc queries, indexing, load-balancing, aggregation, and server-side JavaScript execution.”

There also were “thoughts”: automated scenarios that helped direct the actions of a soul or a group of souls, such as liking, sharing, reposting, or commenting on the posts of others using videos or links.
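The souls-and-thoughts split amounts to a simple scheduler that applies one scripted scenario to many accounts. The sketch below is an illustration of that pattern only – the class names and the recorded-action stand-in for real platform API calls are assumptions, not the tool’s actual design:

```python
from dataclasses import dataclass, field

@dataclass
class Soul:
    """A fake persona account controlled by the bot farm."""
    handle: str
    actions: list = field(default_factory=list)

    def perform(self, action: str, target: str) -> None:
        # A real system would call the social media platform's API here;
        # this sketch just records what the soul was told to do.
        self.actions.append((action, target))

@dataclass
class Thought:
    """An automated scenario applied to a group of souls."""
    action: str       # e.g. "like", "repost", "comment"
    target_post: str

    def run(self, souls: list) -> None:
        for soul in souls:
            soul.perform(self.action, self.target_post)

group = [Soul("@fake_user_1"), Soul("@fake_user_2")]
Thought(action="repost", target_post="post/12345").run(group)
```

One “thought” driving a whole group is what makes coordinated amplification cheap: a single scenario definition fans out into hundreds of near-simultaneous interactions.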

The operators also used Meliorator to evade detection by obfuscating their IP addresses, bypassing multifactor authentication, and changing the user agent string. Back-end code also was used to automatically assign a proxy IP address to each persona created via the AI software, based on the persona’s assumed location.
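Matching a proxy exit point to a persona’s claimed location is a straightforward lookup. The sketch below shows the general idea under assumed names – the proxy pools, addresses (documentation ranges), and pinning scheme are invented for illustration and are not taken from the advisory:

```python
# Illustrative proxy pools keyed by country; the IPs are from the
# reserved documentation ranges (RFC 5737), not real infrastructure.
PROXY_POOLS = {
    "US": ["198.51.100.10", "198.51.100.11"],
    "DE": ["203.0.113.20"],
}

def assign_proxy(persona: dict) -> str:
    """Pick a proxy IP from the pool matching the persona's assumed country."""
    pool = PROXY_POOLS.get(persona["country"], PROXY_POOLS["US"])
    # A deterministic pick keeps each persona pinned to one exit IP,
    # so the account's apparent location stays consistent over time.
    return pool[sum(map(ord, persona["handle"])) % len(pool)]

proxy = assign_proxy({"handle": "@fake_user_1", "country": "DE"})
```

Consistency matters to the operators: an “American” persona that suddenly posts from a foreign IP range is an easy detection signal, which is why location-matched proxies pair with the user agent spoofing described above.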

The DOJ, which said the investigation is ongoing, said the FSB’s use of U.S.-based domain names to register the bots violated the International Emergency Economic Powers Act and that the accompanying payments for the operation’s infrastructure broke federal money laundering laws.

Worries About Disinformation and Elections

The raid on the Russian bot farm comes amid growing concerns in the United States and other countries about outside disinformation campaigns – particularly using generative AI technology – being used to interfere with elections, including the upcoming U.S. presidential election.

Stephen Kowski, field CTO at SlashNext Email Security+, said that such an extensive bot farm operation shouldn’t be surprising.

“The 2024 U.S. election and ongoing global conflicts will likely lead to increased nation-state cyber activity and disinformation campaigns,” Kowski said. “We may see more attacks on election infrastructure, political organizations, and media outlets. Protecting against these threats will require a combination of user education, advanced threat intelligence, and robust email [and] web security measures.”

Zendata CEO Narayana Pappu said actions like those of the DOJ and FBI will be embraced by many in the United States, noting that 55% of Americans support the government restricting false information online, even if doing so limits people’s ability to freely publish or access information. In 2018, that figure was 38%.

In addition, 53% of Americans say they regularly run into false or misleading information online, Pappu said.
