Opinion | Congress wants to regulate AI. Here’s where to start.

May 26, 2023 at 12:59 p.m. EDT

The conversation about artificial intelligence tends to devolve into panic over humanity’s eventual extinction, or at the very least subjugation: Will robot overlords one day rule the world? But machine learning is more than a hypothetical, and it presents plenty of immediate problems that deserve attention, from the mass production of misinformation to discrimination to the expansion of the surveillance state. These harms — many of which have been with us for years — ought to be the focus of AI regulation today.

The good news is that Congress is on guard, holding hearings and drafting bills that attempt to grapple with these new systems that can absorb and process information in a manner that has typically required human input. Bipartisan legislation is under discussion, spearheaded by Senate Majority Leader Charles E. Schumer (D-N.Y.).

The bad news is that nothing so far is close to comprehensive — and piecing these ideas together with steps the White House and federal agencies have already taken entails some conflict and confusion. Before the country can even start to agree on a single, clear set of rules for these rapidly evolving tools, regulators need to agree on some basic principles.

AI systems should be safe and effective

This one is pretty basic. Anyone designing these tools should conduct a thorough evaluation of any harm they might cause, take steps to prevent it and measure the rate at which that harm occurs. Guarding against misuse or abuse could be trickiest of all. Already, con artists are using AI apps to simulate the voices of victims’ loved ones to persuade them to fork over cash; deepfake videos of celebrities and political candidates could threaten reputations or even democracy.

More generally, systems should be able to demonstrate some baseline accuracy. They should, in short, do what they say they’re going to do. But what exactly the threshold ought to be depends on the tool’s impact: A false negative in an initial test for cancer, for instance, is far more damaging than a false positive likely to be reassessed after further evaluation.
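
To make that tradeoff concrete, here is a minimal sketch, in Python, of what an impact-weighted evaluation might look like. The error costs are invented for illustration — they come from no clinical or regulatory standard — but they show how two tools with the same number of mistakes can deserve very different verdicts.

```python
# Illustrative only: the costs below are assumptions chosen for the example,
# not clinical or regulatory figures.

def weighted_error(y_true, y_pred, fn_cost=10.0, fp_cost=1.0):
    """Average cost per case, penalizing missed positives far more than false alarms."""
    total = 0.0
    for truth, pred in zip(y_true, y_pred):
        if truth == 1 and pred == 0:      # missed cancer: false negative
            total += fn_cost
        elif truth == 0 and pred == 1:    # false alarm: false positive
            total += fp_cost
    return total / len(y_true)

# Two hypothetical screening tools that each make two mistakes:
labels      = [1, 1, 0, 0, 0, 0, 0, 0]
misses      = [0, 0, 0, 0, 0, 0, 0, 0]   # errs by missing both cancers
false_alarm = [1, 1, 1, 1, 0, 0, 0, 0]   # errs by flagging two healthy cases

print(weighted_error(labels, misses))       # 2.5  -- same error count, far worse harm
print(weighted_error(labels, false_alarm))  # 0.25 -- tolerable, pending follow-up tests
```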

A chatbot such as ChatGPT “hallucinating,” or fabricating facts and figures out of nothing, can produce manifold inaccuracies — including, on the more worrying end of things, false claims of sexual assault against a law professor and plagiarism against an author. Maybe we’re willing to tolerate these flaws to some degree in a general-purpose assistant. But what about in a specialized one designed to offer legal advice or diagnose an ailment?

Then there are dangers that perhaps shouldn’t be tolerated, full stop. Think of a gun that determines when to discharge a bullet.

A final way to evaluate an AI’s performance is to weigh it against the alternative: the status quo, usually involving human input. What benefits does it provide, what problems could it cause and how do those stack up against each other? In other words, is it worth it?

AI systems shouldn’t discriminate

This principle nicely ties in with the safety and effectiveness guarantee — impact assessments, for instance, can help guard against discrimination if they measure effects by demographic group.

But to root out bias, it will also be essential to examine the data used to train these algorithms. Consider data drawn from criminal justice databases where higher arrest rates of minorities are baked in. Reusing those numbers to, for example, predict a convict’s chances of recidivism could end up reinforcing racist policing and punishment.

Data should be representative of the community in which a system will be deployed — for instance, facial recognition models crafted mostly from troves of photos of White men are likely to flop when it comes to identifying Black women.

Data should also be reviewed with an eye toward its historical context. For instance, technology companies that have tended to promote men to higher positions should realize that relying on those past statistics to measure the potential of would-be hires could disadvantage female applicants. With that knowledge, tool designers can correct for disparities in how a system favors or disfavors members of a protected class.
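
As one illustration of what such an impact assessment could actually measure, here is a minimal sketch, in Python, that computes a system’s false positive rate separately for each demographic group it serves. The group names and records are hypothetical; the point is simply that a gap between groups is something an auditor can count.

```python
# Illustrative only: group labels and records are invented for the example.
from collections import defaultdict

def false_positive_rates(records):
    """records: (group, true_label, predicted_label) triples."""
    fp = defaultdict(int)         # negatives wrongly flagged, per group
    negatives = defaultdict(int)  # total negatives seen, per group
    for group, truth, pred in records:
        if truth == 0:
            negatives[group] += 1
            if pred == 1:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

sample = [
    ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]
print(false_positive_rates(sample))
# {'group_a': 0.33..., 'group_b': 0.66...} -- a disparity worth auditing
```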


AI systems should respect civil liberties

As always when personal data is involved, privacy is key. Essentially, what companies can and can’t do should depend on what consumers would reasonably expect in that particular context. For example, it makes sense that Netflix is vacuuming up viewer preferences to finely tune its recommendation algorithms; it would make a lot less sense if Netflix collected the precise locations of those viewers to build a tool unrelated to streaming. It would make even less sense if Netflix sold its viewers’ preferences to a Cambridge Analytica-style political consulting firm.

Then there’s the question of privacy in how these systems are used. The Chinese Communist Party has notoriously installed more than 500 million cameras around the country; it’s impossible to hire 500 million people to monitor them, so AI does the job. President Xi Jinping’s regime is pushing to sell these tools to other governments around the world. The United States can’t allow these violations of privacy to happen here — but the lack of oversight that has already allowed firms such as Clearview AI to scrape more than 30 billion images from social media sites and sell them to law enforcement agencies across the country suggests we’re not as far off as we might like to think.

AI systems should be transparent and explainable

People also need to know when they’re interacting with an AI system, period — not only so that no one falls in love with their search engine but also so that anyone these tools injure has an avenue to seek recourse. That’s why it’s important for AI systems to explain both that they’re AI and how they work.

This second part is far more difficult. Making accuracy and impact assessments accessible or disclosing the sources of training data is one thing. But detailing the causes of an AI tool’s behavior is another altogether — because, in many cases, those who build and run these tools have no way to peer into the black boxes they’re overseeing. The matter of “explainability” is front of mind for researchers today, but that does little good in a world where these systems are already making all sorts of decisions about our lives.

The extent to which a given tool must be able to explain itself should vary based on what it’s doing for or to us. Think, for example, of a toaster: Do consumers really need to know why this appliance determines a piece of bread has reached optimal crispness? Then think of a lending algorithm. The individual whose ability to rent an apartment depends on the algorithm’s answer has a right to understand the reasons they have been turned away. What about the person whose medical claim has been rejected by an insurance company without anyone even looking at the patient file? And somewhere in between lie the systems that social media sites rely on to feed posts and other content to their users.
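
For a sense of why simple models are easier to account for, here is a minimal sketch, in Python, of a toy screening score that can name the factor that weighed most heavily against an applicant. The features, weights and threshold are invented for illustration; the contrast is with a black box, which cannot produce even this crude breakdown.

```python
# Illustrative only: features, weights and threshold are assumptions for the example.
WEIGHTS = {"income_to_rent_ratio": 2.0, "on_time_payments": 1.5, "years_at_job": 0.5}
THRESHOLD = 6.0

def score_with_reason(applicant):
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    total = sum(contributions.values())
    approved = total >= THRESHOLD
    # The factor contributing least to the score becomes the stated reason for denial.
    weakest = min(contributions, key=contributions.get)
    return approved, total, weakest

applicant = {"income_to_rent_ratio": 1.2, "on_time_payments": 2.0, "years_at_job": 1.0}
approved, total, weakest = score_with_reason(applicant)
print(approved, round(total, 1), "weakest factor:", weakest)
# False 5.9 weakest factor: years_at_job
```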

Just as the possibility of serious harm posed by some AI systems might be too great for them to be deployed without meeting stringent standards for safety and effectiveness, the inability of the same sort of systems to explain themselves might mean that, for now, they should remain undeployed.

Putting principles to work

AI isn’t one thing — it’s a tool that allows for new ways of doing many things. Applying a single set of requirements to all machine-learning models wouldn’t make much sense. But to figure out what those requirements should be, case by case, the country does need a single set of goals.

Putting these principles — versions of which appear in the White House’s nonbinding Blueprint for an AI Bill of Rights as well as a framework from the National Institute of Standards and Technology — into practice won’t be easy. Even the question of who should be responsible for enforcing whatever regulations emerge is uncertain: a new federal agency? Existing agencies using their existing authorities? Or perhaps there’s a middle ground, a sort of coordinating body that reviews agency standards, reconciles authorities where they overlap and fills in any gaps.

There’s the matter of applying today’s laws to AI systems, and the matter of asking whether those systems create the need for new laws in new areas. What do we do about legal liability for speech that’s algorithmically generated? What about copyright? None of this is even to mention the potential for job loss at a massive scale in some areas (many experts point to accounting as an easy example) and perhaps equally dramatic job creation in others (humans will need to train and maintain these models, after all). That subject will likely need addressing outside the scope of safety regulations. And finally, there’s the matter of ensuring that any barriers to AI innovation don’t lock in the market power of today’s tech titans that can most easily afford compliance and computing costs.

The argument against stringent AI regulation is that these technologies are going to exist regardless of whether the United States allows them to; if this country steps back, it will be countries such as China that build them, without the commitment to democratic values that our nation could ensure. Certainly, it’s better for the United States to be involved and influential than to bow out and sacrifice its ability to point this powerful technology in a less terrifying direction. But that’s exactly why these principles are the essential place to begin: Without them, there’s no direction at all.

The Post’s View | About the Editorial Board

Editorials represent the views of The Post as an institution, as determined through discussion among members of the Editorial Board, based in the Opinions section and separate from the newsroom.

Members of the Editorial Board: Opinion Editor David Shipley, Deputy Opinion Editor Charles Lane and Deputy Opinion Editor Stephen Stromberg, as well as writers Mary Duenwald, Shadi Hamid, David E. Hoffman, James Hohmann, Heather Long, Mili Mitra, Eduardo Porter, Keith B. Richburg and Molly Roberts.