The National-Security Case for Fixing Social Media

Mark Zuckerberg calling in on a video screen to a Senate hearing.
We’ve entered a world in which our national well-being depends not just on the government but also on Facebook and the other companies through which we lead our digital lives. Photograph by Michael Reynolds / Getty

On Wednesday, July 15th, shortly after 3 P.M., the Twitter accounts of Barack Obama, Joe Biden, Jeff Bezos, Bill Gates, Elon Musk, Warren Buffett, Michael Bloomberg, Kanye West, and other politicians and celebrities began behaving strangely. More or less simultaneously, they advised their followers—around two hundred and fifty million people, in total—to send Bitcoin contributions to mysterious addresses. Twitter’s engineers were surprised and baffled; there was no indication that the company’s network had been breached, and yet the tweets were clearly unauthorized. They had no choice but to switch off around a hundred and fifty thousand verified accounts, held by notable people and institutions, until the problem could be identified and fixed. Many government agencies have come to rely on Twitter for public-service messages; among the disabled accounts was the National Weather Service, which found that it couldn’t send tweets to warn of a tornado in central Illinois. A few days later, a seventeen-year-old hacker from Florida, who enjoyed breaking into social-media accounts for fun and occasional profit, was arrested as the mastermind of the hack. The F.B.I. is currently investigating his sixteen-year-old sidekick.

In its narrowest sense, this immense security breach, orchestrated by teen-agers, underscores the vulnerability of Twitter and other social-media platforms. More broadly, it’s a telling sign of the times. We’ve entered a world in which our national well-being depends not just on the government but also on the private companies through which we lead our digital lives. It’s easy to imagine what big-time criminals, foreign adversaries, or power-grabbing politicians could have done with the access the teen-agers secured. In 2013, the stock market briefly plunged after a tweet sent from the hacked account of the Associated Press reported that President Barack Obama had been injured in an explosion at the White House; earlier this year, hundreds of armed, self-proclaimed militiamen converged on Gettysburg, Pennsylvania, after a single Facebook page promoted the fake story that Antifa protesters planned to burn American flags there.

A group called the Syrian Electronic Army claimed responsibility for the A.P. hack; the Gettysburg hoax was perpetrated by a left-wing prankster. A more determined and capable adversary could think bigger. In the run-up to this year’s Presidential election, e-mails and videos that most analysts attributed to the Iranian government were sent to voters in Arizona, Florida, and Alaska, purporting to be from the Proud Boys, a neo-Fascist, pro-Trump organization: “Vote for Trump,” they warned, “or we will come after you.” Calls to voters in swing states warned them against voting, and text messages pushed a fake video about Joe Biden supporting sex changes for second graders. But a truly ambitious disinformation attack would be cleverly timed and coördinated across multiple platforms. If what appeared to be a governor’s Twitter account reported that thousands of ballots had gone missing on Election Day, and the same message were echoed by multiple Facebook posts—some written by fake users or media outlets, others by real users who had been deceived—many people might assume the story to be true and forward it on. False information need not aim at changing actual events; often the goal is chaos, and sowing doubt about election results is a perfect way to achieve it.

When we think of national security, we imagine concrete threats—Iranian gunboats, say, or North Korean missiles. We spend a lot of money preparing to meet those kinds of dangers. And yet it’s online disinformation that, right now, poses an ongoing threat to our country; it’s already damaging our political system and undermining our public health. For the most part, we stand defenseless. We worry that regulating the flow of online information might violate the principle of free speech. Because foreign disinformation played a role in the election of our current President, it has become a partisan issue, and so our politicians are paralyzed. We enjoy the products made by the tech companies, and so are reluctant to regulate their industry; we’re also uncertain whether there’s anything we can do about the problem—maybe the price of being online is fake news. The result is a peculiar mixture of apprehension and inaction. We live with the constant threat of disinformation and foreign meddling. In the uneasy days after a divisive Presidential election, we feel electricity in the air and wait for lightning to strike.

In recent years, we’ve learned a lot about what makes a disinformation campaign effective. Disinformation works best when it’s consistent with an audience’s preconceptions; a fake story that’s dismissed as incredible by one person can appear quite plausible to another who’s predisposed to believe in it. It’s for this reason that, while foreign governments may be capable of more concerted campaigns, American disinformers are especially dangerous: they have their fingers on the pulse of our social and political divisions. At the moment, disinformation seems to be finding a more receptive audience on the political right. Perhaps, as some researchers have suggested, an outlook rooted in aggrievement and a distrust of institutions makes it easier to believe in wrongdoing by élites. Breitbart columnists and some Fox News commentators are also happy to corroborate and amplify fringe ideas. In any event, during this year’s Presidential election, our social-media platforms have been awash in corrosive disinformation, much of it generated by Americans, ranging from lurid conspiracy-mongering—Antifa protesters starting wildfires in Oregon; Democrats arranging child-sex rings—to the faux-legalistic questioning of voting procedures.

For the most part, this disinformation has been scattershot. What would a more organized effort look like? The cyber-disinformation campaign conducted by Russia in 2016, largely on Facebook, gave us a glimpse of what’s possible. The five-volume bipartisan Senate report on Russia’s efforts, produced by the Select Committee on Intelligence, reveals an operation of startling scale. Russia conducts disinformation operations at home, in bordering countries, and across the world. It works through several arms at once: the sophisticated, Kremlin-directed S.V.R. (the equivalent of the C.I.A.); the clumsier, military-run G.R.U.; and the savvier Internet Research Agency in St. Petersburg. In general, Russia seeks to push disinformation in a comprehensive, integrated way, so as to give its content an aura of authenticity. Using so-called sockpuppets—inauthentic personas on Facebook and elsewhere—its campaigns inflame existing political tensions with calls to action, online petitions, forged evidence, and false news. This specious material is then cited by seemingly legitimate news sites established by Russia for the purpose of spreading and corroborating disinformation. Facebook and Twitter have built automated systems that look for inauthentic accounts with manufactured followings. But Russian cyber actors have become increasingly sophisticated, using an integrated array of what spy agencies call T.T.P.s—tactics, techniques, and procedures—to avoid detection.

Like a musical that tries out in smaller markets before it hits Broadway, Russian T.T.P.s are tested first in border states—Lithuania, Estonia, Ukraine, Poland—before being deployed against America. In the past year, Russian trolls working in those countries have adopted a new strategy: impersonating actual organizations or people, or claiming to be affiliated with them—a muddying of the waters that makes detection harder. According to experts, they’ve also begun corrupting legitimate Eastern European news sites: hackers manipulate real content, sometimes laying the groundwork for future disinformation and at other times inserting fake articles for immediate dissemination.

China, meanwhile, already adept at intellectual-property cyber theft, has begun shifting toward active disinformation of the Russian sort. Most of its efforts are focussed on propaganda portraying China as a peace-loving nation with a superior form of government. But earlier this year, a pro-China operation, nicknamed Spamouflage Dragon by cybersecurity firms, deployed an array of Facebook, YouTube, and Twitter accounts with profile pictures generated by artificial intelligence to attack President Trump and spread falsehoods about the George Floyd killing, the Black Lives Matter movement, and Hong Kong’s pro-democracy protests. Compared with Russia’s, China’s disinformation efforts are less immediately alarming, because its government is more concerned about how it’s perceived around the world. But it seems possible that, in the longer term, the country will pose a more significant threat. If China harnessed the vast intelligence resources of its Ministry of State Security and its People’s Liberation Army to mount a coördinated disinformation campaign against the United States, its reach could be significant. Foreign powers could get better at pushing our buttons; domestic disinformers could get better organized. In either case, we could face a more acute version of the disinformation crisis we’re struggling with now.

There’s a sense in which it doesn’t matter who our disinformers are, since they all use the same social-media technology, which has transformed our societies quickly and pervasively, outpacing our ability to anticipate its risks. We’ve taken a relatively minimal and reactive approach to regulating our new digital world. The result is that we lag behind in security: the malicious use of new platforms begins before security experts, in industry or government, can weigh in. Because new vulnerabilities are revealed individually, we tend to perceive them as one-offs—a hack here, a hack there.

As cyber wrongdoing has piled up, however, it has shifted the balance of responsibility between government and the private sector. The federal government used to be solely responsible for what the Constitution calls our “common defense.” Yet as private companies amass more data about us, and serve increasingly as the main forum for civic and business life, their weaknesses become more consequential. Even in the heyday of General Motors, a mishap at that company was unlikely to affect our national well-being. Today, a hack at Google, Facebook, Microsoft, Visa, or any of a number of tech companies could derail everyday life, or even compromise public safety, in fundamental ways.

Because of the very structure of the Internet, no Western nation has yet found a way to stop, or even deter, malicious foreign cyber activity. It’s almost always impossible to know quickly and with certainty if a foreign government is behind a disinformation campaign, ransomware implant, or data theft; with attribution uncertain, the government’s hands are tied. China and other authoritarian governments have solved this problem by monitoring every online user and blocking content they dislike; that approach is unthinkable here. In fact, any regulation meant to thwart online disinformation risks seeming like a step down the road to authoritarianism or a threat to freedom of speech. For good reason, we don’t like the idea of anyone in the private sector controlling what we read, see, and hear. But allowing companies to profit from manipulating what we view online, without regard for its truthfulness or the consequences of its viral dissemination, is also problematic. It seems as though we are hemmed in on all sides, by our enemies, our technologies, our principles, and the law—that we have no choice but to learn to live with disinformation, and with the slow erosion of our public life.

We might have more maneuvering room than we think. The very fact that the disinformation crisis has so many elements—legal, technological, and social—means that we have multiple tools with which to address it. We can tackle the problem in parts, and make progress. An improvement here, an improvement there. We can’t cure this chronic disease, but we can manage it.

On the legal side, there are common-sense steps we could take without impinging on our freedom of speech. Congress could pass laws to curtail disinformation in political campaigns, not necessarily by outlawing false statements—which would run afoul of the First Amendment—but by requiring more disclosure, and by making certain knowing falsehoods illegal, including false information about polling places. Today, political ads that appear online aren’t subject to the same disclosure and approval rules that apply to ads on radio and television; that anachronism could be corrected. Lawmakers could explore prohibiting online political ads that micro-target voters based on race, age, political affiliation, or other demographic categories; that sort of targeting allows divisive ads and disinformation to be aimed straight at amenable audiences, and to skirt broader public scrutiny. Criminal laws could also be tightened to outlaw, at least to some extent, the intentional and knowing spread of misinformation about elections and political candidates.

Online, the regulation of speech is governed by Section 230 of the Communications Decency Act—a law, enacted in 1996, that was designed to allow the nascent Internet to flourish without legal entanglements. The statute shields every Internet provider or user from liability for posting or transmitting wrongful content generated by someone else. As Anna Wiener wrote earlier this year, Section 230 was well-intentioned at the time of its adoption, when all Internet companies were underdogs. But today that is no longer true, and analysts and politicians on both the right and the left are beginning to think, for different reasons, that the law could be usefully amended. Republicans tend to believe that the statute allows liberal social-media companies to squelch conservative voices with impunity; Democrats argue that freewheeling social-media platforms, which make money off virality, are doing too little to curtail online hate speech. Amending Section 230 to impose some liability on social-media platforms, in a manner that neither cripples them nor allows them to remain unaccountable, is a necessary step in curbing disinformation. It seems plausible that the next Congress will amend the statute.

Other legal steps might flow from the recognition that the very ubiquity of social-media companies has created vulnerabilities for the millions of Americans who rely on them. Antitrust action to break up the biggest platforms and companies is one way to address this aspect of the problem. The Senate has asked the C.E.O.s of Facebook and Twitter to appear at a hearing on November 17th, intended to examine the platforms’ “handling of the 2020 election.” Last month, a House hearing on the same topic degenerated into an argument between Republicans, who claimed that social media was censoring the President, and Democrats, who argued that the hearing was a campaign gimmick. It remains to be seen whether Congress can separate politics from substance and seriously consider reform proposals, like the one put forth recently by the New York State Department of Financial Services, which would designate social-media platforms as “systemically important” and subject to oversight. It will be difficult to regulate such complicated and dynamic technology. Still, the broader trend is inescapable: the private sector must bear an ever-increasing legal responsibility for our digital lives.

Technological progress is possible, too, and there are signs that, after years of resistance, social-media platforms are finally taking meaningful action. In recent months, Facebook, Twitter, and other platforms have become more aggressive about removing accounts that appear inauthentic, or that promote violence or lawbreaking; they have also moved faster to block accounts that spread disinformation about the coronavirus or voting, or that advance abhorrent political views, such as Holocaust denial. The next logical step is to decrease the power of virality. In 2019, after a series of lynchings in India was organized through the chat program WhatsApp, Facebook limited the mass forwarding of texts on that platform; a couple of months ago, it implemented similar changes in the Messenger app embedded in Facebook itself. As false reports of ballot fraud became increasingly elaborate in the days before and after Election Day, the major social-media platforms did what would have been unthinkable a year ago: they labelled messages from the President of the United States as misleading. Twitter made it slightly more difficult to forward tweets containing disinformation; an alert now warns the user about retweeting content that’s been flagged as untruthful. Additional changes of this kind, combined with more transparency about the algorithms they use to curate content, could make a meaningful difference in how disinformation spreads online. Congress is considering requiring such transparency.

Finally, there are steps we could take that have nothing to do with regulation or technology. Many national-security experts have argued for an international agreement that outlaws disinformation, and for coördinated moves by Western democracies to bring cybercriminals to justice. The President could choose to make combating foreign disinformation a national-security priority, by asking the intelligence community to focus on it in a cohesive way. (We have an integrated national counterterrorism center, but not one focussed on disinformation.) Our national-security agencies could share more with the public about the T.T.P.s used by foreign disinformation campaigns. And the teaching of digital literacy—perhaps furthered by legislation that promotes civic education—could make it harder for disinformation, foreign or domestic, to take hold.

We will soon no longer have a President who himself creates a storm of falsehoods. But the electricity will remain in the air, regardless of who occupies the Oval Office. Perhaps because the disinformation crisis has descended upon us so suddenly, and because it reinforces our increasing political polarization, we’ve tended to regard it as inevitable and unavoidable—a fact of digital life. But we do have options, and if we come together to exercise them, we could make a meaningful difference. In this case, it might be possible to change the weather.