Georgia, we've got a problem. So do all the other people who follow polls, study them, and dissect them like that frog in high-school biology. The poll is from Landmark Communications, a firm in Alpharetta, Georgia, that is a new kid on the polling block. It shows Michelle Nunn 7 points ahead of David Perdue in Georgia, 47% to 40%. However, a poll 6 days earlier from SurveyUSA put Perdue 9 points ahead, 50% to 41%. Our software will average these two and put Perdue just ahead. Don't you believe it.
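To see where "Perdue just ahead" comes from, here is a minimal sketch of the two-poll average. This is not the site's actual software; the poll numbers are from the article, and everything else is illustrative.

```python
# The two contradictory Georgia polls discussed above.
polls = [
    {"pollster": "Landmark Communications", "Nunn": 47, "Perdue": 40},
    {"pollster": "SurveyUSA",               "Nunn": 41, "Perdue": 50},
]

# A simple unweighted average of each candidate's support.
avg = {name: sum(p[name] for p in polls) / len(polls)
       for name in ("Nunn", "Perdue")}

print(avg)  # Nunn 44.0, Perdue 45.0 -- Perdue "just ahead" by a point
```

Averaging two polls that are 16 points apart produces a tidy-looking one-point race, which is exactly the problem the rest of this piece is about.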
Obviously, a spread of 16 points is completely impossible. One of the polls is simply wrong, and there is no way to tell which one. SurveyUSA is a long-standing and fairly reliable pollster. Landmark is new, and it is hard to tell whether it is biased, incompetent, or right while SurveyUSA is wrong. This is a critical problem for this site, as well as for others that do similar aggregation. The bottom line is that there are relatively few public polls available. Some of them come from openly partisan pollsters that advertise on their Websites: "We have helped elect 23 Republicans to office" or "We have worked successfully with the DSCC on many campaigns." They probably are good pollsters, but their real business model is helping candidates get elected. These hired guns almost always care which side they are on; they are not like mercenaries, who will fight on any side of a war if the pay is good enough. This makes good business sense if you see them as campaign consultants rather than independent pollsters. After all, if you were a Republican candidate running for office, would you want to hire a firm that helped elect a bunch of Democrats in the previous cycle?
These firms no doubt do good work and tell their clients the truth. After all, if a candidate is in big trouble with 30- to 39-year-old high-income single female Democratic vegetarians, he probably wants to know that so he can adjust his tactics if need be. The problem is that if a firm's goal is to help elect its clients, who knows whether the numbers it publishes have been massaged, altered, or just plain made up? Two firms in past years, Strategic Vision and Research 2000, have been accused of simply making up the numbers they published. Maybe they decided that a good way to help their clients was just to make up and publish fake polls. They got caught because their numbers didn't follow Benford's law. Very briefly: if you measure the lengths of all the world's rivers or the populations of its cities, a lot more of the numbers will begin with a "1" than with a "9" (see the link to Benford's law for the mathematics of this). Polling data also has well-known statistical properties that can be measured. For example, if half the poll numbers end in a "0" or a "5," you can begin to smell dead fish. Statisticians can run tests on sets of polls to look for more subtle irregularities.
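The two checks mentioned above can be sketched in a few lines. Under Benford's law, the leading digit d appears with probability log10(1 + 1/d), so about 30% of values lead with "1" and under 5% with "9". This is a minimal illustration, not the statisticians' actual test suite; the sample values are invented.

```python
import math
from collections import Counter

def benford_expected():
    # P(first digit = d) = log10(1 + 1/d), for d = 1..9
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit_freqs(values):
    # Observed leading-digit frequencies for positive numbers.
    counts = Counter(int(str(abs(int(v)))[0]) for v in values)
    n = len(values)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

def round_number_share(values):
    # Crude "dead fish" test: fraction of values ending in 0 or 5.
    # Humans inventing numbers tend to overuse round endings.
    return sum(1 for v in values if int(v) % 5 == 0) / len(values)

print(benford_expected()[1])            # about 0.301: "1" leads ~30% of the time
print(round_number_share([50, 45, 47]))  # two of three end in 0 or 5
```

A real forensic analysis would compare observed and expected distributions with a chi-squared test over many polls, but the idea is the same: fabricated numbers rarely have the right digit statistics.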
Now back to the two Georgia polls. The Landmark Communications poll was done for WSB-TV in Atlanta. The station is an ABC affiliate and presumably did at least some due diligence before hiring Landmark Communications. Landmark's Website gives no indication of a partisan bias. So while we simply exclude all partisan pollsters from further consideration, there is no a priori reason to exclude Landmark, even if the results strongly contradict a poll a week earlier from a known reputable pollster.
A related problem we have (and everybody else has) is the growing number of universities that are getting into the polling business. For automated polls, like those of PPP, Rasmussen, and SurveyUSA, all you need is a computer, polling analysis software (which you can buy commercially), and some phone lines. For a live-interviewer poll you need, well, live interviewers. And you have to pay them--unless the work is part of a university political science course on elections and the students do the calling as part of their lab work. Now some of the universities in the polling business, like Quinnipiac University and Marist College, have been doing this so long and so well that major media outlets hire them to do quality work. But there also are universities that do one poll and are never heard from again. Do they even understand what they are doing? To get a proper sample, you have to normalize the data. Suppose you call 1000 people and by chance 600 are women (women answer the phone more than men). Women also favor the Democrats more than men do. So your poll shows the Democratic candidate doing well. A good pollster will normalize the data, counting each woman as only 53/60 of a respondent, because women made up 60% of the sample but are (for example) only 53% of the electorate in the area of interest. Pollsters also normalize for age, race, income, partisan affiliation, etc. This requires (1) knowing how to do this and (2) having the true data for the electorate. Then there is the whole business of filtering out unlikely voters.
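The normalization step above can be sketched as follows. All numbers here (response counts, support shares, electorate composition) are the invented figures from the example, not real data.

```python
# Hypothetical raw sample: 600 women and 400 men out of 1000 calls.
# Suppose 55% of the women and 40% of the men back the Democrat.
raw = {
    "women": {"respondents": 600, "dem_share": 0.55},
    "men":   {"respondents": 400, "dem_share": 0.40},
}
# Assumed true composition of the electorate in the area of interest.
electorate = {"women": 0.53, "men": 0.47}

n = sum(g["respondents"] for g in raw.values())

# Unweighted: women are 60% of the sample, inflating the Democrat.
unweighted = sum(g["respondents"] * g["dem_share"] for g in raw.values()) / n

# Weight = (true share) / (sample share): each woman counts as 53/60
# of a respondent, and each man as 47/40 of one.
weights = {k: electorate[k] / (raw[k]["respondents"] / n) for k in raw}
weighted = sum(raw[k]["respondents"] * weights[k] * raw[k]["dem_share"]
               for k in raw) / n

print(f"unweighted: {unweighted:.4f}")  # 0.4900
print(f"weighted:   {weighted:.4f}")
```

Here the raw sample overstates Democratic support by about a point; with more skewed samples or more demographic dimensions, the correction matters far more, which is why a one-off university poll that skips it can be badly off.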
As a result, it is very hard for anyone to know which polls to count and which not to count. Getting rid of the proud partisans is easy. Getting rid of the stealth partisans or incompetents is much harder. Just to be on the safe side, we won't accept a poll from an established legitimate pollster if the poll was paid for by a candidate or party. If there were tons of data, we could just stick with the half dozen or so known reputable pollsters, but then we would go for weeks with no data on many races. Different sites tackle this problem differently, which is why even serious, reliable sites may get different results. In any event, we are counting Landmark since there is some reason to believe the poll (an ABC TV station hired them) and there is no reason to suspect they are doing something wrong. The hope here is that by averaging many polls, some of the inherent noise will cancel out.
State | Democrat | D % | Republican | R % | I | I % | Start | End | Pollster |
Georgia | Michelle Nunn | 47% | David Perdue | 40% | | | Aug 20 | Aug 21 | Landmark Communications |