PollWatch 2024, Part V: How Has the Polling Changed Since 2020?
Will the polls nail it this time? Ask us, say, Nov. 10. In a sense, they probably won't get as bad a rap this time as
in 2016. Then the national polls predicted that Hillary Clinton's popular-vote margin would be 3%. It was actually 2%,
so in fact the polls did very well. But because they "predicted" a Clinton win and she didn't win, many people thought
they were way off. They weren't, really: the polls were never trying to predict a winner, just what percentage of the
national popular vote each candidate would get.
This time, the polls are so close that the pollsters aren't predicting anything except that it will be close. One day
Donald Trump is ahead, the next day Kamala Harris is ahead. No matter the outcome, it will be hard for people to say
"The polls said X would win and X lost," because there is no clear favorite. The only way the polls could really be way
off is if the election is not close. If either candidate wins in a landslide, then we can fairly say the polls were way
off.
Needless to say, the pollsters are trying to make sure they do better this time, so they have adjusted their methods to
try to avoid the errors of 2016 and 2020. In addition, there are other differences between this time and last time.
Some of the main ones are:
- Data Collection: In the past—say, 15 or 20 years ago—all polling was done by random-digit dialing. The area code and
exchange (the first three digits after the area code) used to indicate where a (landline) phone was located. The
computer picked a random four-digit number, prefixed it with the area code and exchange, and dialed it (see the sketch
at the end of this item). The pollster then knew where the person who answered the phone was. For example, the number
(212) 602-xxxx was likely to be in lower Manhattan. With cell phones, even the area code doesn't tell you where the
caller is. One of us has a (212) cell phone number despite never having lived in Manhattan. In addition, many people
now don't answer the phone when the call is from an unknown number. That used not to be the case nearly as much.
Pollsters have thus been forced to try new methods. One improvement is sending text messages to random phone numbers
asking the recipient to go online at a certain URL and fill out a survey. That works better than calling.
Another new method is—get this—snail mail. It is possible to buy lists of valid mailing addresses. The pollsters then
pick random addresses and mail letters to them, asking the recipients to go online and fill out a survey, usually for a
small payment for each completed survey. It turns out that people who would never answer a phone call from an unknown
caller are willing to open letters from unknown senders. Weird! When the letter offers a gift card from one of a list
of well-known companies (Amazon, Apple, Visa, etc.) in return for spending 10 minutes filling out an online survey,
many people do it. This gives much better response rates than random-digit dialing.
Some pollsters recruit a very large number of people using random addresses and tell them that they can sign up to be on
a panel and will be contacted by e-mail from time to time with a survey request (in return for a gift card). Then a
random selection is made from the panel members, many of whom are hoping to be picked so they can get the gift card.
This changes their mindset from "I don't want to be bothered" to "I hope I get picked." Of course, "people who are
motivated by a gift card" is probably not a perfect representation of the general populace, even if the sample
is adjusted to match the (expected) demographics of the electorate. One can imagine, for example, that someone who
places great value on a relatively low-denomination incentive would also be particularly concerned about, say,
higher taxes.
Another change since last time is the use of the benchmark survey. One of the big problems in 2016 (and since) is that
the pollster's model of the electorate may be off. With this scheme, people are contacted either by text or snail mail
and asked only (or mostly) demographic questions in return for a larger gift card. Due to the bigger financial
incentive, response rates on these surveys are fairly good, as much as 30%. This gives the pollster a much better idea
of what the electorate is like. Questions can be included to get an idea of whether the respondent is a likely voter
(e.g., "How certain are you to vote this year, on a scale of 1 to 5?"). Pew runs such a benchmark, the NPORS survey,
every year and makes the results available to interested pollsters.
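
For the curious, here is a minimal sketch of the random-digit-dialing scheme described at the top of this item. The
function name is ours, and the area code and exchange are just the Manhattan example from above:

```python
import random

def rdd_number(area_code: str, exchange: str) -> str:
    """Build one random-digit-dial phone number: a known area code and
    exchange (which used to pin down a landline's location) plus four
    digits chosen uniformly at random."""
    line = random.randint(0, 9999)
    return f"({area_code}) {exchange}-{line:04d}"

# Numbers of the form (212) 602-xxxx, once likely to ring in lower Manhattan.
for _ in range(3):
    print(rdd_number("212", "602"))
```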
- Weighting: No survey exactly matches the expected electorate, so all pollsters weight each respondent along many
dimensions. If, for example, the pollster expects 52% of the voters to be women and the actual sample has only 48%
women, each woman will count as 1.083 (52/48) women. If the pollster expects 23% of the voters to be Catholics and 25%
of the sample is Catholic, each Catholic gets a factor of 0.920 (23/25), so a female Catholic will be weighted as
1.083 x 0.920 = 0.996, and so on. Most respondents end up with multiple weighting factors (the sketch at the end of
this item makes the arithmetic concrete). Pollsters work with statisticians to try to correct for sampling errors this
way.
Given all the partisanship nowadays, trying to figure out what fraction of the electorate will be Democrats and what
fraction will be Republicans (including people who call themselves independents but really aren't) is crucial. One
method being used this year is the "recalled vote." Many pollsters are now asking respondents: "If you voted in 2020,
who did you vote for?" They then weight on that answer to make sure the weighted sample matches the 2020 results. It is
believed this measure will counteract a well-known problem: samples with too many highly engaged voters and not enough
unengaged ones. Of course, if the ratio of Democrats to Republicans this time is different from 2020, this weighting
will ruin the poll, even if the raw sample was correct.
In 2016, few pollsters weighted for educational level, and it turned out that was very important: voting behavior
correlated strongly with education. They know that now. Will they miss something else this year? Some people think
gender may be this year's defining issue, maybe even more important than the economy, even though it is not discussed
much. Kamala Harris is going out of her way to refuse to answer any questions about the "first woman president"
thingie, but many women (and some men) are keenly aware of it. In likely-voter screens, pollsters often ask "Are you
planning to vote?" but rarely ask "Is having a woman president important to you?" Nor do they ask: "Do you think women
should go back to the kitchen where they belong?" Maybe they should. People who study voting patterns say that gender,
including people's attitudes toward the role of women in society, could be the defining issue of this election, even
more than education was in 2016, in part due to Dobbs.
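
To make the weighting arithmetic above concrete, here is a minimal sketch in Python. The gender and religion targets
are the ones from the example above; the recalled-vote sample share is invented (51.3% was Biden's actual 2020 popular
vote share). Real pollsters derive targets from a benchmark survey such as NPORS and typically use iterative raking so
the factors stay mutually consistent, rather than this one-pass product:

```python
# Each target maps a (dimension, category) pair to
# (expected share of the electorate, share actually in the sample).
TARGETS = {
    ("gender", "woman"):        (0.52, 0.48),
    ("gender", "man"):          (0.48, 0.52),
    ("religion", "catholic"):   (0.23, 0.25),
    # Recalled 2020 vote is weighted the same way; the 0.55 sample
    # share here is hypothetical.
    ("recalled_2020", "biden"): (0.513, 0.55),
}

def weight(respondent: dict[str, str]) -> float:
    """Multiply one correction factor (expected share / sample share)
    per category the respondent falls into."""
    w = 1.0
    for category in respondent.items():
        if category in TARGETS:
            expected, in_sample = TARGETS[category]
            w *= expected / in_sample
    return w

# A woman counts as 52/48 = 1.083 women, as in the text.
print(round(weight({"gender": "woman"}), 3))  # 1.083
# A female Catholic: (52/48) x (23/25) = 0.997; the text's 0.996
# comes from rounding each factor before multiplying.
print(round(weight({"gender": "woman", "religion": "catholic"}), 3))
```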
- New Data Available: We now have data on two elections with Donald Trump as a candidate,
one of which he won and one of which he lost. This data is invaluable. For example, we now know that Democrats tended
to vote by mail, but the Democrats who voted in person disproportionately supported Trump. We also know that
Republicans tended to vote in person on Election Day, but the Republicans who voted by mail disproportionately
supported Biden. This information can help a pollster build a sample that is not only weighted by party but also
contains the right kinds of Democrats and Republicans.
Another important piece of data: for a long time, many Trumpy voters were registered as Democrats and simply hadn't
bothered to change their registration to Republican. In the past, that meant that if, say, 38% of the registered voters
in a state were registered as Democrats, it didn't follow that they would all vote for Democrats. The Trumpy Democrats
were never going to vote for a Democrat, despite the (D). Registrations are more indicative of true allegiance now, and
pollsters study the change in partisan registrations and what it means.
- Different Pollster Mix: Not all the pollsters operating in 2016 and 2020 are still running polls, and there are
plenty of new ones. SurveyMonkey has dropped out altogether. PPP is nearly gone. In contrast, we have over 35 new
pollsters this time that did not poll in 2020. Steve Bannon once said he could manipulate reality by flooding the zone.
Indeed, we have tried hard to eliminate suspicious "pollsters" who are just making up numbers to help Trump, but we may
not have succeeded. Other aggregators don't always do this, because it would mean adding their subjective judgments to
the mix, and they don't like to do that. As academics, we trust all the small colleges not to cheat (although we
sometimes wonder whether they have enough experience with getting good samples), but when new pollsters pop up and
their websites do not indicate that they are actually in the polling business, we tend not to use them.
Anyway, the point is that pollsters are trying very hard not to repeat the mistakes of the past
when there are so many new mistakes they can make. (V)
This item appeared on www.electoral-vote.com. Read it Monday through Friday for political and election news, Saturday
for answers to readers' questions, and Sunday for letters from readers.