“The big lie” of a stolen 2020 US presidential election fuels democracy-threatening misinformation — and analysts say social media platforms are doing too little to respond.

Twitter, TikTok, and YouTube have enacted new and reinforced policies to combat mis- and disinformation. Facebook leads the industry in transparency on content moderation, welcoming scrutiny of its actions to remove lies about voting, according to a report from NYU’s Stern School of Business.

But the measures are insufficient. Facebook continues to exempt politicians from its platform-wide fact-checking program. And enforcement and transparency remain inadequate across all major social media platforms, according to Paul Barrett, the author of the NYU report, and other scholars.

Although TikTok has reiterated its ban on paid political advertising and made unsubstantiated content “ineligible for recommendation” while it is being fact-checked, the platform still allows users to share unverified content after it has been flagged. In September, YouTube, owned by Google, touted measures that refer users to authoritative news sources and enforce its moderation rules on politicians. But YouTube remains slow to police itself, facilitates echo chambers with its recommendation algorithm, and refuses to remove content it admits contains “demonstrably false information.”

A major uncertainty is how Elon Musk might alter Twitter’s Civic Integrity Policy. In August, Twitter announced it is “activating enforcement” of the policy to combat misinformation around elections. But Twitter pauses enforcement during non-election years, allowing misinformation to spread in “off” cycles. Musk has announced his intention to convene a “content moderation council,” while stating the company has “not yet” altered the platform’s content policies. He has assured the European Commission that Twitter will abide by tough European rules on illegal online content.

It is not only European lawmakers who are outraged by online disinformation. So is NATO. In 2019, NATO’s Riga-based Strategic Communications Centre of Excellence purchased 54,000 fake social media interactions (followers, likes, comments, and views) for a mere 300 euros. The same year, Facebook’s “operations center” in Dublin buckled under the demands of policing political ads in the EU’s 24 official languages. European intelligence officials are concerned about the “destabilizing” effect of conspiracy theories such as QAnon gaining a foothold on the continent.

In the lead-up to the US midterms, disinformation in Chinese and Spanish is exploding. After top US national security officials warned of foreign actors employing “information manipulation tactics,” Google-owned cybersecurity firm Mandiant exposed a widespread foreign influence campaign, dubbed “DRAGONBRIDGE,” intended to discredit the US elections.

If fighting disinformation is challenging in the US, it is even harder in non-English-speaking countries, where funding and expertise are scarce. Facebook’s leader for civic integrity has conceded the “painful reality” that the social media network “simply can’t cover the entire world with the same level of support.” An analysis by the Alethea Group found that Twitter could not make “accurate and consistent decision[s] on what is misinformation” because it lacked employees with regional expertise and language capabilities.

NYU’s Barrett argues that platforms devoted insufficient resources to content moderation during Brazil’s recent presidential election. Brazil’s federal electoral court intervened, granting the country’s elections chief unilateral authority to deem online content misinformation and order its removal.

To bolster the fight against disinformation, Barrett recommends a range of measures: companies should increase transparency around the algorithms that recommend and promote content; election-related policies should apply year-round; fact-checking should extend to politicians; and independent audits should be conducted regularly.

Social media companies acknowledge the pervasiveness of mis- and disinformation on their platforms and the threat it poses to democracy. They now need to step up their investments to combat the scourge.

Matthew Eitel is a Program Assistant for CEPA’s Digital Innovation Initiative.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
