PollWatch 2024, Part II: The Track Record of National Preference Polls

Yesterday, we noted that we're going to undertake a series looking at the polling of the 2024 election, as best we can. We have some questions we want to look at. Readers added a bunch of good suggestions, which we're still poring over. It's going to take some time to do the necessary planning and research, and to get all our ducks in a row.

So, for today, we're going to start with something fairly simple. A couple of weeks ago, Rick Perlstein wrote a piece with the headline "The Polling Imperilment: Presidential polls are no more reliable than they were a century ago. So why do they consume our political lives?" That pretty much tells you what the argument is. And Perlstein raises some good points, although his evidence is very much cherry-picked, and not at all systematic.

We thought we would devise a very simple test, just to see how well Perlstein's thesis holds up. If there is any circumstance in which pollsters should be accurate, it's national polls (fewer of the sampling quirks that plague state-level polling) taken just before an election (fewer undecided/wavering voters). So, we compiled the average of all national polls taken in the final week before each presidential election since 1980. Then, we compared that to the actual result (all numbers rounded to the nearest whole number). The final column shows how much the pollsters over- or underestimated the popular vote winner (so, for example, if they guessed a candidate would win the popular vote by 5%, and the candidate actually won by 8%, then they underestimated them, which we record as -3%).

Year  Prediction        Actual            Difference
1980  Reagan +3%        Reagan +10%       -7%
1984  Reagan +18%       Reagan +18%       Even
1988  G.H.W. Bush +12%  G.H.W. Bush +8%   +4%
1992  B. Clinton +12%   B. Clinton +6%    +6%
1996  B. Clinton +11%   B. Clinton +8%    +3%
2000  G.W. Bush +2%     Gore +1%          -3%
2004  Even              G.W. Bush +2%     -2%
2008  Obama +11%        Obama +7%         +4%
2012  Obama +1%         Obama +4%         -3%
2016  H. Clinton +3%    H. Clinton +2%    +1%
2020  Biden +7%         Biden +4%         +3%
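
For readers who want to check our arithmetic, the Difference column is easy to reproduce. Here is a minimal Python sketch (our own illustration, nothing fancy); margins are signed from the popular-vote winner's side, so a poll that had the eventual winner trailing gets a negative prediction:

    # Reproduce the Difference column: positive means the polls
    # overestimated the eventual popular-vote winner, negative means
    # they underestimated them.
    margins = {
        # year: (final-week polling average, actual result), in points,
        # signed from the popular-vote winner's side
        1980: (3, 10),   # Reagan
        1984: (18, 18),  # Reagan
        1988: (12, 8),   # G.H.W. Bush
        1992: (12, 6),   # B. Clinton
        1996: (11, 8),   # B. Clinton
        2000: (-2, 1),   # Gore won the popular vote; polls had Bush +2
        2004: (0, 2),    # G.W. Bush
        2008: (11, 7),   # Obama
        2012: (1, 4),    # Obama
        2016: (3, 2),    # H. Clinton
        2020: (7, 4),    # Biden
    }

    for year, (predicted, actual) in margins.items():
        diff = predicted - actual
        label = "Even" if diff == 0 else f"{diff:+d}%"
        print(f"{year}: {label}")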

Again, this is a very basic look at the question, although two things immediately stand out to us. The first is that Perlstein's basic assertion, that polling is no better today than it was 100 years ago, just does not stand up. Our insta-response was that the claim is ridiculous; everyone knows about the Literary Digest disaster of 1936, the "Dewey Defeats Truman" election of 1948, etc.—surely pollsters are doing better than that. And the numbers above bear that out. The worst misses were back in the 20th century. In the 21st century, the pollsters have consistently been relatively near the bullseye.

With that said, "relatively near the bullseye" is not "the bullseye." That leads us to the second thing that stands out. Pollsters make no secret of the fact that their methods have a margin of error of 3-4%. And whaddya know, they are right. Even under ideal circumstances, including averaging polls together to minimize the effect of outliers, the pollsters of this century have missed by 1%-4%. And there's no particular direction to the error, either; sometimes they give a popular-vote-winning candidate 1%-4% too much credit, sometimes 1%-4% too little.

A big problem with polls is that most people interpret them incorrectly. If a poll with a 4-point margin of error says that candidate X will get 52% and win, and candidate X then gets 49% and loses, the poll was right, not wrong. No poll predicts who will win. What the pollster was actually saying is that there is a 95% chance candidate X will get between 48% and 56%, and 49% falls within that predicted range. Also note that the margin of error rests on an arbitrary convention. As usually reported, it means the true value has a 95% chance (two standard deviations, or sigma) of falling in the stated range. Pollsters could just as easily report the MoE as one sigma and say the probability of the true value being in the range is 68%, or as three sigma and say the probability is 99.7%. Two standard deviations is merely a widely used (but arbitrary) convention.
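
To make the sigma talk concrete, here is a quick back-of-the-envelope Python sketch. The sample size of 1,000 is our assumption (typical for a national poll), and the 50% proportion is the worst case for this formula:

    import math

    def margin_of_error(n, p=0.5, sigmas=2):
        # Half-width of the interval for a simple random sample of
        # size n, where p is the observed proportion and sigmas is
        # the number of standard errors (2 is the usual convention).
        return sigmas * math.sqrt(p * (1 - p) / n)

    n = 1000  # assumed sample size
    for sigmas, coverage in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
        points = 100 * margin_of_error(n, sigmas=sigmas)
        print(f"{sigmas} sigma ({coverage}): +/- {points:.1f} points")

    # With n = 1,000, two sigma works out to about +/- 3.2 points,
    # which is the familiar 3-4% that pollsters report.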

Put another way, close elections are pollsters' kryptonite. If the pollsters tell you that a candidate has a national lead of 7%, or is ahead in Pennsylvania by 5%, then that candidate is almost certainly going to win the popular vote and the state of Pennsylvania, and the only question is how close the pollsters got on the margin. On the other hand, if a candidate has a national lead of 2%, or is ahead in Pennsylvania by 1%, then it really could go either way. The candidate who is ahead is certainly in a stronger position, but that strength is feeble enough that it could be wiped out by a rainstorm in the wrong part of a state, or by a five-cent increase in the price of gas, or a slightly wrong guess about the level of Black voter turnout, or any of a hundred other X-factors.
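
To put rough numbers on "could go either way," here is another back-of-the-envelope sketch. The assumption that polling error is roughly normal with a standard deviation of 2 points (consistent with the 3-4% two-sigma margin of error above) is ours, purely for illustration:

    import math

    def prob_lead_holds(margin, sigma=2.0):
        # Probability the true margin is positive, if the polling
        # error is normal with the given standard deviation (points).
        # This is the normal CDF, Phi(margin / sigma), via erf.
        return 0.5 * (1 + math.erf(margin / (sigma * math.sqrt(2))))

    for lead in (1, 2, 5, 7):
        print(f"Polled lead of {lead}: {prob_lead_holds(lead):.0%} chance it is real")

    # A 5-7 point lead is near-certain (99%+); a 1-2 point lead is
    # much closer to a coin flip (69-84%).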

So, if the 2024 election really is as close as it appears to be, then we just won't know until we know, once the ballots are all counted. The question is whether there might be something wrong with the polls this year, such that one candidate or the other is being underestimated, and has a lead greater than it seems (and, potentially, outside the normal variance of 1%-4%). The thing we are worried about is methodological errors in the polls. One possibility is that the electorate is substantially different from what the pollsters expect (e.g., more young people or more women vote than last time). Another is that some important group is undersampled. In 2016, that was the case with non-college voters. This time it could be... and we won't know until the exit polls have been analyzed. Most of the rest of this series (but not all) will focus on that question.
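
To see why a wrong guess about the electorate matters so much, consider a toy example (the numbers here are entirely made up for illustration). Pollsters weight their raw responses to match the electorate they expect; if a group that favors one candidate turns out at a higher rate than assumed, the topline is off even though every individual group was polled accurately:

    def topline(groups):
        # Candidate X's overall share, given each group's share of
        # the electorate and X's support within that group.
        return sum(share * support for share, support in groups.values())

    # Hypothetical two-group electorate: (share of electorate, support for X)
    assumed = {"group_a": (0.25, 0.65), "group_b": (0.75, 0.45)}
    actual  = {"group_a": (0.35, 0.65), "group_b": (0.65, 0.45)}

    print(f"Poll topline:  {topline(assumed):.1%}")  # 50.0%
    print(f"Actual result: {topline(actual):.1%}")   # 52.0%

    # A 10-point miss on one group's turnout share moves the topline
    # by 2 points, enough to flip a close race.

(Z)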



This item appeared on www.electoral-vote.com. Read it Monday through Friday for political and election news, Saturday for answers to readers' questions, and Sunday for letters from readers.
