Trump supporters protest ballot counts outside the TCF Center in Detroit on Nov. 5. (Salwan Georges/The Washington Post)

David Hill is director of Hill Research Consultants and a 2020 fellow at the University of Southern California’s Dornsife Center for the Political Future.

There’s a dirty little secret that we pollsters need to own up to: People don’t talk to us anymore, and it’s making polling less reliable.

When I first undertook telephone polling in the early 1980s, I could start with a cluster of five demographically similar voters — say, Republican moms in their 40s in a Midwestern suburb — and expect to complete at least one interview from that group of five. I’d build a sample of 500 such clusters, or 2,500 voters total, and could be reasonably assured that 500 people would talk to us. The 500 clusters were designed to represent a diverse cross-section of the electorate.

As the years drifted by, it took more and more voters per cluster to get a single one to agree to an interview. Between 1984 and 1989, when caller ID was rolled out, more voters began to ignore our calls. The advent of answering machines and then voicemail further reduced responses. Voters now screen their calls aggressively, and cooperation with pollsters has steadily declined year by year. Whereas once I could extract one complete interview from five voters, it can now take calls to as many as 100 voters to complete a single interview, and even more in some segments of the electorate.
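For the quantitatively minded, a back-of-the-envelope sketch makes the scale of that collapse concrete. Only the 1-in-5 and 1-in-100 cooperation rates come from my experience above; the rest is illustrative arithmetic, not a description of any firm’s actual dialing operation.

```python
# Back-of-the-envelope: expected number of voters we must dial to
# complete a fixed number of interviews at a given cooperation rate.
# The 1-in-5 and 1-in-100 rates are from the text; all else is illustrative.

def dials_needed(target_interviews: int, cooperation_rate: float) -> int:
    """Expected dials required to reach the target number of completes."""
    return round(target_interviews / cooperation_rate)

for era, rate in [("early 1980s", 1 / 5), ("today", 1 / 100)]:
    print(f"{era}: ~{dials_needed(500, rate):,} voters dialed for 500 interviews")

# early 1980s: ~2,500 voters dialed for 500 interviews
# today: ~50,000 voters dialed for 500 interviews
```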

And here’s the killer detail: That single cooperative soul who speaks with an interviewer cannot be assumed to hold the same opinions as the 99 other voters who refused.

In short, we no longer have truly random samples that support claims that poll results accurately represent opinions of the electorate.

Instead, we have samples of “the willing,” what researchers call a “convenience sample”: those consenting to give us their time and opinions. Despite knowing this, pollsters (myself included) have glossed over this reality, dressing up our results with claims of a “margin of error” of three or four percentage points when we knew, or should have known, that the true error is incalculable for a non-random sample. Most pollsters turned to weighting results to “fix” variations in cooperation, but weighting can inadvertently amplify the very errors that noncooperation introduces.
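Where do those three- or four-point claims come from? From the textbook margin-of-error formula for a simple random sample, which holds only if every voter had a known, nonzero chance of being selected and responding. A minimal sketch of that standard formula (not any pollster’s proprietary method) shows the arithmetic:

```python
import math

# Textbook margin of error for a simple random sample at 95 percent
# confidence: z * sqrt(p * (1 - p) / n), maximized at p = 0.5.
# Valid ONLY under true random sampling -- the assumption this column
# argues no longer holds for phone polls.

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000):
    print(f"n={n}: +/- {100 * margin_of_error(n):.1f} points")

# n=500: +/- 4.4 points
# n=1000: +/- 3.1 points
```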

For a while, most polls conducted most of the time in most places seemed reasonably accurate, so we kept at it, claiming random sample surveys with low margins of error. Weighting became a Band-Aid for noncooperation. And polling still seemed better than hoisting a wet finger to the political winds. Then came the past two presidential elections, exposing deeper wounds.

I offer my own experience from Florida in the 2020 election to illustrate the problem. I conducted tracking polls in the weeks leading up to the presidential election. To complete 1,510 interviews over several weeks, we had to call 136,688 voters. In hard-to-interview Florida, only 1 in 90-odd voters would speak with our interviewers. Most calls went unanswered or rolled over to answering machines or voicemail; despite multiple attempts, those voters were never interviewed.

The final wave of polling, conducted Oct. 25-27 to complete 500 interviews, was the worst for cooperation. We could finish interviews with only four-tenths of one percent of our pool of potential respondents. As a result, this supposed “random sample survey” seemingly yielded, as did nearly all Florida polls, lower support for President Trump than he earned on Election Day.
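Readers can check the arithmetic themselves. In the short sketch below, the final-wave pool size is inferred from the stated 0.4 percent rate; it is not a figure reported above.

```python
# Overall Florida tracking: 1,510 completes from 136,688 voters dialed.
completes, dialed = 1_510, 136_688
print(f"overall cooperation: {completes / dialed:.2%} (~1 in {dialed // completes})")
# overall cooperation: 1.10% (~1 in 90)

# Final wave: 500 completes at roughly 0.4 percent cooperation implies a
# pool of about 125,000 potential respondents (inferred, not reported).
print(f"implied final-wave pool: ~{round(500 / 0.004):,} voters")
# implied final-wave pool: ~125,000 voters
```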

After the election, I noted wide variations in completion rates across different categories of voters, but nearly all were still too low for any actual randomness to be assumed or implied.

Many voters who fit the “Likely Trump Supporter” profile were not willing to do an interview. It was especially hard to interview older men. Similarly, we were less likely to complete interviews with Trump households in Miami’s media market. Whatever the motivation, this behavior almost certainly introduced bias into poll results, dampening apparent support for Trump.

Pollsters and poll readers should expect low and variable cooperation rates to persist, further undermining randomness. Given that, cooperation rates need to be published with every poll, to add a dash of real-world sobriety to our weighing of poll results. Presently, this is very rarely done for public or private political polling. If you don’t believe me, ask your pollster for a “disposition of sample” report and get ready for some cagey equivocation.

Some say online polling will help, and it may. But most online polling uses non-random samples drawn from pre-recruited “panels” of voters who have signed up to be interviewed, typically for some incentive. And online surveys have serious data-quality and integrity issues: Most respondents rush through them too rapidly for real thought, and we cannot verify that they are indeed registered to vote or have the vote history they may claim.

One promising approach to making online samples more verifiable and random is to text interview requests to a genuinely random sample of those on the voter rolls. But standing pat on the old ways, or denying the non-randomness of today’s polls, won’t make polls great again.
