POLL WATCHING IN THE HOME STRETCH
by Ron Faucheux
Election Day is almost here. Poll numbers are flying. Just when you think there is a trend, a new poll comes along showing something different. It’s hard to know what’s really happening.
The best advice: Don’t take any one poll to heart. Don’t assume any one poll is always right or wrong. Look at polls in context, look for trends. That’s why averaging polls is useful.
You ask: Why do survey results differ so much? There are a variety of reasons.
First, there is always the possibility of sampling error, which is inherent in the scientific method. That explains why multiple polls taken at the same time can differ by a few points without any of them being wrong.
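To put a number on it: by the standard textbook formula, a random sample of 1,000 voters carries a margin of error of roughly plus or minus three points at 95 percent confidence (about 0.98 divided by the square root of the sample size). So two honest polls of that size could show, say, a 48-48 tie and a 51-45 race while surveying the very same electorate.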
Second, there is a quality factor. How survey questions are worded, how samples are selected and how interviews are conducted can all affect results. High-quality polls cost more: a live interview conducted by cell phone is twice as costly as one conducted by landline, and far more expensive than an online interview.
Also look out for pollsters who cut corners throughout an entire election cycle and then conduct their final poll properly. If that final poll comes close to the actual results, they will be praised for “correctly calling” the election, even though their earlier work was shoddy and frequently off the mark.
Third, timing matters. Polls are snapshots, not crystal balls. They tell you where things stood at the time they were taken; they don’t predict the future. A good example was Wisconsin in the 2016 presidential race. The last two polls had Hillary Clinton ahead by an average of seven points. But those two polls were completed about a week before the election, before late-deciders broke for Trump, who won the state by a tiny margin.
Fourth, some polls are just flat-out wrong. There’s no sugarcoating flawed survey research, especially when questionnaires are biased and samples are out of whack.
While legitimate pollsters always want their work to be accurate, sometimes there are rewards for substandard work. Bad polls are frequently outliers, and an outlier will often get more media attention than a poll that’s consistent with other surveys. When outliers are released, they hit like bombshells. The media use them to heighten the drama (“Has Reagan Lost His Lead?”) and partisans use them to prop up optimism when the results favor their side (“New Poll Shows McGovern Has a Chance”).
Of course, just because a poll is an outlier doesn’t mean it’s wrong; it could be the canary in the coal mine.
After the 2016 election, there was a common misperception that the election polls were terribly wrong. In truth, they weren’t.
Of the 13 final polls that measured the national popular vote four years ago, 12 had Hillary Clinton ahead, by an average of 3.1 points. When the votes were counted, she won the national popular vote by 2.1 points. She lost the presidency because of Trump’s edge in key states that gave him a majority of the electoral votes.
Polls in those key states also came close. In Florida, the average of the final three polls had Trump ahead by three-tenths of a point. He won it by 1.2 points. Ohio’s last poll gave Trump a seven-point lead, and he carried it by 8.1 points. The final polls in Pennsylvania and Michigan showed them to be one- or two-point races, and they were.
Another reason polls got a bad rap in 2016 had nothing to do with actual polling, but with predictive modeling that was often confused with polling.
Well-known modelers, including Nate Silver’s FiveThirtyEight, the Upshot at the New York Times and the Princeton Election Consortium, incorrectly predicted Clinton would win. They were also far off in key states. Some observers erroneously equated these “black box” predictions with poll results.
Predictive modeling and election betting markets are based on educated guesswork, not representative samples. They may be fun to discuss, but they should not be taken too seriously.
Election-watchers, especially my friends who read this newsletter, are transfixed by every new poll number. But after we review the numbers, let’s all take a breath and relax. The actual results will come soon enough, and when they do, we will all be prepared to say, “I told you so.”
A version of this piece was published today in the Times-Picayune.