Information Warfare in Russia’s War in Ukraine

The Role of Social Media and Artificial Intelligence in Shaping Global Narratives

In the lead-up to Russia’s invasion of Ukraine, and throughout the ongoing conflict, social media has served as a battleground for states and non-state actors to spread competing narratives about the war and portray the conflict on their own terms. As the war drags on, these digital ecosystems have become inundated with disinformation. Strategic propaganda campaigns, including those peddling disinformation, are by no means new to warfare, but the shift toward social media as the primary distribution channel is transforming how information warfare is waged, as well as who can participate in ongoing conversations to shape emerging narratives.

Examining the underlying dynamics of how information and disinformation are impacting the war in Ukraine is crucial to making sense of the current conflict and working toward solutions. To that end, this FP Analytics brief uncovers three critical components:

  • How social media platforms are being leveraged to spread competing national narratives and disinformation;
  • The role of artificial intelligence (AI) in promoting, and potentially combating, disinformation; and,
  • The role of social media companies and government policies in limiting disinformation.

The Role of Social Media and National Disinformation Campaigns

Russia and Ukraine both use social media extensively to portray their versions of unfolding events and to amplify contrasting narratives about the war, including its causes, consequences, and continuation. Government officials, individual citizens, and state agencies have all turned to an array of platforms, including Facebook, Twitter, TikTok, YouTube, and Telegram, to post information. It is difficult to pinpoint the exact amount of content uploaded by these various actors, but the scale of information being posted about the war is immense. For instance, in just the first week of the war, TikTok videos from a range of sources tagged #Russia and #Ukraine had amassed 37.2 billion and 8.5 billion views, respectively.

At their core, the narratives presented by Russia and Ukraine are diametrically opposed. Russia frames the war in Ukraine, which Putin insists is a “special military operation,” as a necessary defensive measure in response to the North Atlantic Treaty Organization’s (NATO) expansion into Eastern Europe. Putin also frames the military campaign as necessary to “de-nazify” Ukraine and end a purported genocide conducted by the Ukrainian government against Russian speakers. In contrast, Ukraine insists the war is one of aggression, emphasizes its history as a sovereign nation distinct from Russia, and portrays its citizens and armed forces as heroes defending themselves against an unjustified invasion.

Ukraine and Russia are not the only state actors engaged in portraying the war on their own terms. Countries such as China and Belarus have launched coordinated disinformation campaigns on social media platforms that broadly downplay Russia’s responsibility for the war and promote anti-U.S. and anti-NATO posts. The mix of narratives, both true and false, originating from different state actors as well as millions of individual users has enlarged tech platforms’ roles in shaping the dynamics of the war and could influence its outcomes.

Graphic 1

Russia and Ukraine Used Social Media Heavily Pre-War

Before Russia’s most recent internet crackdown, U.S.-based social media platforms were widely used for communication and accessing information.

Data sources: DataReportal, WIRED, TIME

The scale of information uploaded to social media and the speed with which it proliferates create novel and complex challenges for combating disinformation campaigns. It is often hard to identify a campaign’s origin or reach, complicating efforts to remove false content in bulk or to identify false posts before they reach mass audiences. For example, the “Ghostwriter” disinformation campaign, attributed to the Belarusian government, used a sophisticated network of proxy servers and virtual private networks (VPNs) that enabled it to avoid detection for years. Before the operation was uncovered in July 2021, it hacked the social media accounts of European political figures and news outlets and spread fabricated content critical of NATO across Eastern Europe. The sophistication of these modern state-backed disinformation campaigns makes them exceedingly difficult to detect early and counter effectively. Russia, in particular, has spent decades developing a propaganda ecosystem of official and proxy communication channels, which it uses to launch wide-reaching disinformation campaigns. For instance, “Operation Secondary Infektion,” one of Russia’s longest-running campaigns, has spread disinformation about issues such as the COVID-19 pandemic across more than 300 social media platforms since 2014.

Graphic 2

Social Media Platforms Supporting Russia’s Information Ecosphere

With most U.S.-based social media platforms now restricted, these domestic platforms are facilitating online communication within Russia.

Data sources: WIRED, New York Times, Intellinews, The Economist, Coda Story

Graphic 3

Russians Are Using VPNs to Access Restricted Websites

Russia has banned over 2,384 websites since the start of the war, propelling a rapid increase in VPN downloads.

Data source: Apptopia

The range of social media platforms in use, and the variation in their availability across countries, hinders coordinated efforts to combat disinformation and creates distinct information ecosystems across geographies. The narratives about the war emerging on social media take different forms depending on the platform and the region, including within Russia and Ukraine. Facebook and Twitter are both banned within Russia’s borders, but Russian propaganda and disinformation aimed at external audiences still flourish on those platforms. Within Russia, YouTube and TikTok remain accessible to everyday citizens, but with heavy censorship. The most popular social media platform within Russia is VKontakte (VK), which reaches 90 percent of Russian internet users, according to the company’s self-reported statistics. VK was widely used in Ukraine until 2017, when the Ukrainian government blocked it and other Russian internet services, such as Yandex, in an effort to combat Russian propaganda online. In 2020, Ukrainian President Volodymyr Zelenskyy extended the ban on VK until 2023, so the platform has not facilitated communication between Russians and Ukrainians during the war.

The government-imposed restrictions on these major platforms leave Telegram as the main social media service currently accessible to both Russians and Ukrainians. Telegram, an encrypted messaging service created and owned by Russian-born tech billionaire Pavel Durov, is being used in the war for everything from connecting Ukrainian refugees with opportunities for safe passage to providing near-real-time videos of events on the battlefield. Critically for the fight against disinformation, Telegram has no official policies in place to censor or remove content of any nature. While some Telegram channels have been shut down, the company does not release official statements explaining why, and it generally allows the majority of user content to keep circulating, regardless of its nature. This lets Telegram serve as a mostly unfiltered source of disinformation within Russia and Ukraine, reaching audiences from which Western social media platforms have been cut off. While Telegram does not filter content the way many other platforms do, it also does not use an algorithm to boost certain posts, relying instead on direct messaging between users and subscribed channels. This design makes it difficult for AI tools to effectively amplify disinformation. In contrast, on platforms such as Twitter and Facebook, AI is further enabling the rapid spread of disinformation about the war.


The Impact of Artificial Intelligence in Online Disinformation Campaigns

AI techniques, including the machine-learning algorithms that curate content feeds, are serving as powerful tools for generating and amplifying disinformation about the Russia-Ukraine war, particularly on social media channels. The algorithms that social media platforms use to determine which content is allowed, and which posts become the most viewed, are driving differences in users’ perceptions of the events unfolding. Before the war, there was significant controversy over how social media platforms prioritized and policed content on all kinds of political and social issues. In recent years, both Facebook and YouTube have come under scrutiny from regulators in the U.S. and EU, who are concerned that the platforms’ algorithms prioritize extremist content and that, despite some improvements to automated and human-led procedures, the platforms fail to adequately remove disinformation.
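
Why engagement-driven ranking tends to surface inflammatory material can be illustrated with a toy model. The sketch below is a minimal, hypothetical illustration, not any platform’s actual ranking system: it scores posts by engagement, weighting shares and comments most heavily and decaying scores by age, which is enough to push an outrage-provoking false claim above a measured, accurate one.

```python
# Toy illustration of engagement-weighted feed ranking -- NOT any platform's
# real algorithm; the weights and post fields here are entirely hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    hours_old: float

def engagement_score(post: Post) -> float:
    """Score posts by raw engagement, decayed by age.

    Because shares and comments are weighted most heavily, content that
    provokes strong reactions (including false or inflammatory claims)
    tends to rise to the top of the feed.
    """
    raw = 1.0 * post.likes + 3.0 * post.shares + 2.0 * post.comments
    return raw / (1.0 + post.hours_old) ** 1.5

feed = [
    Post("Measured, sourced update", likes=120, shares=5, comments=10, hours_old=2),
    Post("Outrage-bait false claim", likes=90, shares=60, comments=80, hours_old=2),
]
for p in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(p):8.2f}  {p.text}")
```

Running this toy ranks the false claim first, despite its fewer likes, because shares and comments dominate the score.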

Graphic 4

Social Media Platforms Struggle to Remove False Content

The amount of false content removed about the war represents only a small fraction of the total.

Data sources: Twitter, The Disinformation Situation Center, Atlantic Council, Facebook, TikTok, The Washington Post

Throughout the Russia-Ukraine war, similar concerns have arisen across a range of platforms. For example, researchers found that TikTok directed users to false information about the war within 40 minutes of signing up. New users on TikTok were shown videos claiming that a press conference given by Vladimir Putin in March 2022 was “Photoshopped” and that clips from a video game were real footage of the war. Likewise, Facebook’s algorithm routinely promoted disinformation about the war, including the conspiracy theory that the U.S. is funding bioweapons in Ukraine. A study by the Center for Countering Digital Hate (CCDH) found that Facebook failed to label 80 percent of the posts spreading this bioweapons conspiracy theory as disinformation.

Social media platforms also host popular AI-driven tools for spreading disinformation, such as chatbots and deepfakes. Bots—AI-enabled computer programs that mimic user accounts on social media networks—are one of the most effective vehicles for spreading disinformation about the war. Russia has extensive experience using bots to spread disinformation: Russian government agencies and their affiliates previously deployed them during the 2016 U.S. presidential election and throughout the COVID-19 pandemic. Russia continues to use bots, and since the start of the war in Ukraine earlier this year, Twitter has reported removing at least 75,000 suspected fake accounts linked to Russian bot networks for spreading disinformation about Ukraine. However, the scale and speed at which disinformation can be produced and spread using bots make it nearly impossible to monitor or remove all false accounts and posts.
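
Platforms and researchers typically screen for bots by starting from simple behavioral signals. The sketch below illustrates that kind of first-pass heuristic scoring; the features, weights, and thresholds are illustrative assumptions, not Twitter’s or any other platform’s actual detection criteria.

```python
# Minimal sketch of heuristic bot scoring, of the kind disinformation
# researchers use as a first-pass filter. Features, weights, and thresholds
# are illustrative assumptions, not any platform's real detection logic.
from dataclasses import dataclass

@dataclass
class Account:
    days_old: int
    posts_per_day: float
    followers: int
    following: int
    default_profile_image: bool

def bot_score(a: Account) -> float:
    """Return a 0-1 suspicion score from simple behavioral signals."""
    score = 0.0
    if a.days_old < 30:                          # very new account
        score += 0.25
    if a.posts_per_day > 50:                     # superhuman posting rate
        score += 0.30
    if a.following > 10 * max(a.followers, 1):   # follows far more than followed
        score += 0.25
    if a.default_profile_image:                  # no effort to personalize
        score += 0.20
    return min(score, 1.0)

suspect = Account(days_old=5, posts_per_day=140, followers=3,
                  following=900, default_profile_image=True)
print(f"suspicion: {bot_score(suspect):.2f}")  # -> 1.00
```

Real detection systems combine many more signals with machine-learned models, but the cat-and-mouse problem the paragraph above describes is exactly that bot operators can tune their accounts to slip under whatever thresholds are in use.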

Graphic 5

The Role of Bots and Deepfakes in Spreading Disinformation

Bot networks are a primary driver of pro-Russian disinformation campaigns, especially on Twitter.

Data sources: Security Service of Ukraine, Brookings Institution, The Guardian, Euronews

Graphic 6

Notable Disinformation Campaigns

Global disinformation campaigns related to Russia and Ukraine span numerous languages and continents.

Data sources: Brookings Institution, NewsGuard

In addition to bots, deepfakes—videos that use AI to fabricate images and audio of real people—have circulated online throughout the conflict. Beginning in March 2022, deepfakes portraying both Vladimir Putin and Volodymyr Zelenskyy giving fabricated statements about the war have repeatedly appeared on social media. A deepfake of Putin declaring peace circulated widely on Twitter before being removed, while a deepfake of Zelenskyy circulated on YouTube and Facebook. Beyond deepfakes, experts have expressed concern that AI could be leveraged for more sophisticated disinformation techniques, including better identifying targets for disinformation campaigns and using natural language processing (NLP) to produce fake social media posts, articles, and documents that are nearly indistinguishable from those written by humans.
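
One detection approach explored in research, rather than any platform’s confirmed production system, exploits the fact that machine-generated text tends to look unusually predictable to a reference language model. Below is a minimal sketch using the open-source Hugging Face transformers library, with GPT-2 as the reference model and an assumed flagging threshold.

```python
# Sketch of perplexity-based screening for machine-generated text, a common
# research technique -- not a deployed platform system. The 50.0 threshold
# is an illustrative assumption; real classifiers are far more involved.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy
    return torch.exp(loss).item()

def looks_machine_generated(text: str, threshold: float = 50.0) -> bool:
    # Lower perplexity means the text is more predictable to the model,
    # which is weak evidence it was itself produced by a language model.
    return perplexity(text) < threshold

print(looks_machine_generated("The quick brown fox jumps over the lazy dog."))
```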

While AI is contributing to the spread of disinformation across social media, AI tools also show promise for combating it. The sheer volume of information uploaded to social media daily makes developing AI tools that can accurately identify and remove disinformation essential. For example, Twitter users upload over 500,000 posts per minute, well beyond what human moderators can monitor. Social media platforms are beginning to combine human moderators with AI to monitor false information more effectively. Facebook developed an AI tool called SimSearchNet at the start of the COVID-19 pandemic to identify and remove false posts. SimSearchNet relies on human monitors to first identify false posts and then uses AI to find similar posts across the platform. AI tools are significantly more effective than human content moderators alone. According to Facebook, 99.5 percent of terrorist-related content removals and 98.5 percent of fake-account removals are driven primarily by AI trained with data from its content-moderation teams. Still, AI aimed at combating disinformation on social media relies on both human and machine elements. Because humans must flag novel examples of mis- and disinformation first, false posts routinely reach large audiences before AI can identify and remove them. These technical limits on proactively identifying false information, combined with the scale of information uploaded online, pose a continuing challenge for limiting disinformation on social media in the Russia-Ukraine war and beyond.
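
SimSearchNet’s internals are not public, and it matches images rather than text, so the following is only a hedged analogue of the “find posts similar to a known-false example” step the paragraph above describes, using character shingles and Jaccard similarity over post text.

```python
# Illustrative analogue of "find posts similar to a known-false example":
# shingle-based Jaccard similarity over text. SimSearchNet itself matches
# images and its internals are not public; this is a hedged stand-in.

def shingles(text: str, k: int = 5) -> set[str]:
    """Set of overlapping k-character shingles, case/space-normalized."""
    t = " ".join(text.lower().split())
    return {t[i:i + k] for i in range(max(len(t) - k + 1, 1))}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two posts' shingle sets (0..1)."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

known_false = "Secret US bioweapons labs discovered across Ukraine"
candidates = [
    "BREAKING: secret US bioweapon labs discovered across Ukraine!!",
    "Grain shipments resume from Odesa under UN-brokered deal",
]
for post in candidates:
    flag = "FLAG" if similarity(known_false, post) > 0.5 else "ok"
    print(f"{similarity(known_false, post):.2f} {flag}  {post}")
```

Production systems do this matching over learned embeddings at vastly larger scale, but the basic pattern is the same: a human-verified false example seeds an automated search for near-duplicates.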


Government and Social Media Disinformation Policies

Social media companies and governments have enacted a range of policies to limit the spread of disinformation, but their application has been fragmented, depending on the platform and geography, with varying effect. The different policies that social media platforms apply, the extent of their efforts to combat disinformation, and their availability within countries all help shape the way the public understands the Russia-Ukraine war. Critically, social media companies are privately controlled, and their interests may or may not align with those of various states, including the states where the companies are registered and headquartered.

In the Russia-Ukraine war, social media companies have taken a range of measures. Facebook is deploying a network of fact-checkers in Ukraine in an attempt to stem disinformation, and YouTube has blocked channels associated with Russian state media globally. Both platforms enacted restrictions beyond the legal requirements of U.S. and EU sanctions on Russia. In contrast, Telegram and TikTok have taken less significant steps to limit disinformation on their platforms, beyond complying with EU sanctions on Russian state media within the EU. The differences in responses reflect the government and public pressures to which the various platforms are subject. In general, platforms based in the U.S. have taken stricter stances on limiting Russian disinformation than their international counterparts, such as Telegram and TikTok. Differences in platforms’ policies, in their efforts to limit disinformation, and in their geographic availability are becoming powerful drivers not only of how individuals around the world consume news about the Russia-Ukraine war, but also of the narratives—information, misinformation, and disinformation alike—that they are exposed to and, thereby, the views they may adopt.

The growing role of social media channels in shaping narratives on geopolitical issues, including conflicts, is generating pushback from governments, both democratic and autocratic. This, in turn, has contributed to a trend of governments restricting the public’s use of social media and the internet more generally. Russia, for example, has restricted internet activity since 2012 but intensified its crackdown on dissidents, online dissent, and independent media coverage in the lead-up to, and since, the invasion of Ukraine. Russia recently passed new laws targeting foreign internet companies, such as the 2019 Sovereign Internet Law and the federal “Landing Law” signed in June 2021, which grant the Russian state extensive online surveillance powers and require foreign internet companies operating in Russia to open offices within the country. Additionally, Russia has banned Facebook, Twitter, and Instagram outright within its borders. Ukraine, for its part, cracked down on online expression in late 2021 in response to fears that Russia was sponsoring Ukrainian media outlets and preparing to invade. Since the invasion began, however, Ukraine has openly embraced social media as a means to broadcast messages beyond its borders and garner public support for its resistance efforts.

Globally, these moves follow a number of existing trends, including numerous countries’ regulatory efforts to enforce digital and data sovereignty. A range of countries are now attempting to regulate social media outlets and restrict online speech domestically, while using the same platforms to shape narratives internationally. For instance, China, Iran, and India have all enacted restrictive legislation on internet and social media use domestically while simultaneously using social media channels to spread targeted disinformation campaigns globally.

Graphic 7

Social Media Platforms’ Policies for Combating Disinformation

The varied timing and nature of the policies enacted create a fragmented information ecosystem.

Data sources: CNET, Coda Story, Twitter, Meta, Vice, Google, TIME

The effectiveness of governments’ efforts to curb access to social media and prevent disinformation, both in the Russia-Ukraine war and overall, has so far been limited. Regulatory efforts have neither curbed disinformation in robust and systematic ways nor reined in social media platforms’ role as arenas of political polarization and vitriolic social interaction. Governments have been more effective at curbing access to information within their domestic jurisdictions, but many individuals can still circumvent restrictions through VPNs, which hide the origin of a user’s internet connection and offer access to websites blocked within a specific country’s borders. After Russia’s invasion of Ukraine, VPN downloads within Russia spiked to over 400,000 per day, illustrating the difficulty of completely blocking access to online spaces.


Looking Ahead

Technical and regulatory strategies for combating disinformation are evolving rapidly but remain in their early stages. In modern conflicts, social media platforms control some of the main channels of information, and their policies can have an outsized effect on public sentiment. In the Russia-Ukraine conflict, the largest global platforms have broadly agreed to limit Russian propaganda, while placing far fewer restrictions on official content from the Ukrainian government. The broad power that social media companies exercise by choosing which voices are amplified during conflicts is driving governments to push for greater control over these channels of information. China, Russia, and Iran, among others, all impose onerous restrictions on what content can be posted online and have banned most U.S.-based social media companies. Further, both Russia and China are taking measures to move their populations onto domestic social media channels, such as WeChat in China and VKontakte in Russia, which can be heavily censored and subjected to intensive government oversight and interference. The EU and India have also placed regulatory restrictions on U.S.-based social media platforms, with the intent of developing their own domestic platforms. These developments create challenges for existing international social media platforms and continue to complicate efforts to fight disinformation. As social media channels become more fragmented, and users are subject to differing policies restricting content and disinformation, coordinating efforts to fight disinformation coherently across platforms will become increasingly difficult.