EU Wants Details from Meta, TikTok About Disinformation Measures
2023-10-24 | Source: securityboulevard.com

The European Union is putting more pressure on social media companies to crack down on disinformation that has been spreading rapidly on their platforms since the start of fighting between Israel and Hamas.

The European Commission – the EU’s regulatory and enforcement arm – earlier this month announced an investigation into Elon Musk’s X (formerly Twitter) following allegations that the company was allowing terrorist content, violent content, and hate speech onto its platform.

Late last week, the EC expanded its focus to include Meta and TikTok, asking both companies to explain the steps they have taken to stem the dissemination of illegal content and disinformation on their platforms.

The agency gave both companies until October 25 to comply with the request for information, adding that the next steps it will take will be based on their replies. Those could include a formal investigation, fines for incorrect, incomplete, or misleading information, or – should they not comply – demanding the information via a formal decision.

As part of its request, the EC also wants Meta – parent company of Facebook – to detail how it is protecting the integrity of elections, and TikTok to explain how it is protecting both election integrity and minors online. Both have until November 8 to provide that information.


EU’s New DSA

The EC’s actions fall under the Digital Services Act (DSA), a policy introduced by the EC in 2020 and adopted by the EU last year in an effort to create safer online environments. It went into effect in August. More than a dozen online giants – including Amazon, Google, X, Meta, and TikTok – were designated “very large online platforms,” making them legally responsible for the content that appears on their platforms.

The DSA gives European regulators substantially more enforcement power than their counterparts in the United States, where Congress can call company executives to testify at hearings and urge greater responsibility but has few legal tools to leverage, leaving it to rely on the hope that these companies will do the right thing.

The war between Israel and Hamas is putting much of that goodwill to the test. A staggering amount of disinformation has been streaming out of the conflict, with myriad incidents documented by outlets such as Media Matters, the Associated Press, Wired, NBC, and Reuters, ranging from false news stories and fake images to misrepresentations of what videos show. There also are reports of active campaigns underway to disseminate disinformation.

Disinformation can collapse trust in the information coming out of the region and in the organizations distributing it, deepening divisions among populations. Reports of a missile hitting a hospital in Gaza put that into sharp focus: outlets like The New York Times quickly ran stories calling the incident an Israeli missile attack, but soon after, Israeli intelligence experts – and later their counterparts in the United States – said the explosion was caused by a wayward rocket fired by a small terrorist organization in Gaza.

X in the Crosshairs

Much of the regulators’ focus and general ire about disinformation has centered on X, which Musk bought last year for $44 billion and has since changed significantly, including axing its election integrity team in late September.

The EU has been critical of the amount of disinformation on X, and the EC this month said it was “investigating X’s compliance with the DSA, including with regard to its policies and actions regarding notices on illegal content, complaint handling, risk assessment and measures to mitigate the risks identified.”

In an October 10 letter to Musk shared on X, EC member Thierry Breton wrote that “following the terrorist attacks carried out by Hamas against Israel, we have indications that your platform is being used to disseminate illegal content and disinformation in the EU. Let me remind you that the Digital Services Act sets very precise obligations regarding content monitoring.”

Musk answered the same day, writing that “our policy is that everything is open source and transparent, an approach that I know the EU supports.”

Meta, TikTok Take Steps

For their part, both Meta and TikTok this month – around the same time as the EC’s statement about its investigation into X – outlined measures they were taking to fight disinformation. On October 13, Meta wrote that it had created a “special operations center” that includes experts fluent in Arabic and Hebrew “to closely monitor and respond to this rapidly evolving situation in real time. This allows us to remove content that violates our Community Standards or Community Guidelines faster, and serves as another line of defense against misinformation.”

Meta also noted that in the three days after Hamas’ attack on Israel, the company removed or marked as “disturbing” more than 795,000 pieces of content.

Meta later expanded its efforts to shield people in the region around Israel and Gaza from unwelcome comments, making several temporary policy changes: it changed the default for who can comment on new public Facebook posts to friends and established followers only, made it easier for people to bulk-delete comments on their posts, and disabled the feature that typically displays the first couple of comments under posts in Feed.

Comments praising Hamas – which Meta deems a terrorist organization – are still not allowed on its platforms.

Following Hamas’ attack and Israel’s response, “our teams introduced a series of measures to address the spike in harmful and potentially harmful content spreading on our platforms,” the company wrote. “Our policies are designed to keep people safe on our apps while giving everyone a voice.”

Similarly, TikTok created a command center to respond more quickly to disinformation and other issues arising from the conflict and added moderators who speak Arabic and Hebrew to review content, noting that “as we continue to focus on moderator care, we’re deploying additional well-being resources for frontline moderators through this time.”

TikTok also is updating its proactive automated detection systems to identify and remove graphic and violent content, adding opt-in screens over content that may be shocking or graphic to some viewers, and cooperating with law enforcement agencies worldwide when necessary.
