'Iran vs. Trump, and Russia vs. Harris': Congress presses Silicon Valley giants on disinformation
September 19, 2024 · therecord.media

Alphabet, Meta and Microsoft had few answers for Congress during a hearing on Wednesday about how to stop foreign disinformation campaigns, with each company warning that limited threat visibility and First Amendment concerns are stifling efforts to protect U.S. populations from dubious content.

The Senate Select Committee on Intelligence peppered the presidents of each tech giant with nearly two hours of questions about what they are doing to address efforts by Russia, Iran and China to sow discord through fake news.

Committee Chairman Mark Warner (D-VA) said that with less than two months until Election Day, American policymakers are still grappling with what to do about a new world of text, image, audio and video generation capabilities now at the fingertips of a wider variety of actors.

“I fear that Congress’s inability to establish new guardrails in the last 18 months leaves U.S. elections vulnerable to widespread, AI-enabled mischief … the first hints of which we saw conducted by domestic actors in this year’s primary season,” Warner said. 

“While Congress hasn’t accomplished anything on this front, we’ve seen states take the lead – across the ideological spectrum – to pass some of the first guardrails around use of AI in elections, including Alabama, Texas, Michigan, Florida and California. Unfortunately, none of these guardrails are likely robust enough to impact foreign influence actors.”

The hearing was prompted by recent Justice Department revelations of a years-long Russian effort to create and spread disinformation targeting specific populations in election swing states, as well as more pernicious plans to funnel misinformation through U.S. content creators.

Read More: US agencies say Iran offered hacked Trump docs to Democrats but was ignored

Alphabet President Kent Walker, Microsoft President Brad Smith and Meta’s president for global affairs, Nick Clegg, outlined their processes for taking down malicious content, disrupting networks of accounts run by foreign governments, and more.

“It's become Iran versus [former President Donald] Trump, and Russia versus [Vice President Kamala] Harris,” Smith said. 

“It is an election where Russia, Iran and China are united with the common interest in discrediting democracy in the eyes of our own voters, and even more so in the eyes of the world.”

‘Take down the amplifiers’

Meta and Alphabet, through their various social networks, have troves of behavioral data they use to try to identify foreign disinformation networks, which include bots, recently created accounts and more.

Walker said Alphabet has more than 500 analysts and researchers working on foreign disinformation, alongside teams within its threat intelligence division that are tracking between 270 and 300 different foreign state-affiliated cyberattack groups.

But when pressed, Walker admitted that it is likely the company does not know the true scale of the foreign disinformation effort.

All three executives were frank about the limitations they face in addressing the problem as Republican leaders of the committee questioned them about past instances of conservative-leaning accounts being swept up in the disinformation dragnet. 

“I think there are two kinds of things we're trying to address. The first is generated disinformation. That is some foreign adversary — Iran, China, Russia — creating or making something up, and then they amplify it. They push it out there, and they hope people believe it,” said the committee’s vice chairman, Marco Rubio (R-FL). 

But the second issue, according to Rubio, is thornier. Using the example of opposition to U.S. support for Ukraine, Rubio said people with legitimate, pre-existing views on the topic are being lumped in with disinformation networks discovered by federal authorities and companies. 

“And now some Russian bot decides to amplify the views of an American citizen who happens to hold those views. And the question becomes, is that disinformation, or is that misinformation? Is that an influence operation, because an existing view is being amplified?” he said. 

“It's easy to say, ‘just take down the amplifiers.’ But the problem is, it stigmatizes the person who holds that view. The accusation is that that person isn't simply holding a view. As a result, they themselves must be an asset, and that's problematic, and it's complicated.”

Senators and witnesses floated multiple ideas, including watermarking fake content, labeling the country of origin of certain information, and using machine learning to automatically take down fake content.

Alphabet, Microsoft and Meta described taking down thousands of accounts, videos and posts identified as run by state-backed actors.

But Warner slammed the companies, particularly Alphabet and Meta, for not limiting advertisements for fake news purchased by foreign actors. Several senators also questioned the numbers provided by the tech presidents and were skeptical of whether they were making a dent in the efforts of highly motivated actors from Russia, China and Iran.

“The one thing we do know, most all of us would agree,” Warner said, “is that in the next 49, 48 days, it's only going to get worse.”


Jonathan Greig is a Breaking News Reporter at Recorded Future News. Jonathan has worked across the globe as a journalist since 2014. Before moving back to New York City, he worked for news outlets in South Africa, Jordan and Cambodia. He previously covered cybersecurity at ZDNet and TechRepublic.


Source: https://therecord.media/2024-election-disinformation-hearing-senate-alphabet-meta-microsoft