Many people are keenly focused on polling predictions for the U.S. presidential election. After 2016, a common reaction is to say, “How can you trust the polls? They were wrong last time.”
The instinct toward skepticism is healthy, but dismissing all polls is wrong. Trust the polls, but understand their claims, their uncertainty and their limitations.
Let’s think about a simple poll. Ask 100 random people buying ice cream next week what flavor they will choose. People may have a sense of their favorite flavor, but some may be undecided, and some may change their minds next week. In 2016, “about 13% of voters in Wisconsin, Florida and Pennsylvania decided on their presidential vote choice in the final week,” according to the American Association for Public Opinion Research. A change in opinion after a poll is conducted does not make the poll incorrect.
Next, let’s assume we had 136 million ice cream buyers next week. Could we use the results of our tiny survey as an estimate? Surprisingly, yes. Flashing back to high school statistics, the accuracy of a poll increases with the square root of the number of responses. More intuitively, you get the most information from the first hundreds or thousands of answers. Beyond that, you’d only be refining your results slightly.
Pollsters rely on this statistical law to draw their conclusions. For example, a recent ABC/Washington Post survey of 879 people drew conclusions about how roughly 136 million people will vote.
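The square-root law above can be sketched with the standard 95 percent margin-of-error formula for a proportion. This is a simplified illustration assuming simple random sampling, which real pollsters adjust in ways described below; note that the size of the full population (136 million) never enters the formula.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p from a
    simple random sample of n people (p=0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# Diminishing returns: 10x more respondents only shrinks the error ~3x.
for n in [100, 879, 10_000, 1_000_000]:
    print(f"n={n:>9,}: about ±{margin_of_error(n):.1%}")
```

For the ABC/Washington Post sample of 879, this works out to roughly plus or minus 3.3 percentage points, which is why headline poll numbers are reported with a few points of slack.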
However, in any randomly selected group, you’ll have a mix of gender, ethnic, age and geographic characteristics. Pollsters need to adjust the data from their survey respondents to ensure the results represent the electorate as a whole. For example, if men made up 48 percent of survey respondents but were projected to be 52 percent of voters, a pollster would need to overweight the survey results of men.
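The reweighting step can be sketched with hypothetical numbers extending the example above: suppose (purely for illustration) that men and women in the sample support a candidate at different rates. Replacing the sample's gender shares with the projected electorate's shares shifts the estimate.

```python
# Post-stratification weighting sketch (support rates are invented for illustration).
sample_share = {"men": 0.48, "women": 0.52}   # share of survey respondents
target_share = {"men": 0.52, "women": 0.48}   # projected share of the electorate
support      = {"men": 0.40, "women": 0.60}   # hypothetical candidate support by group

# Unweighted estimate uses the sample's own composition; the weighted
# estimate swaps in the electorate's projected composition.
raw      = sum(sample_share[g] * support[g] for g in support)
weighted = sum(target_share[g] * support[g] for g in support)
print(f"raw estimate:      {raw:.1%}")       # 50.4%
print(f"weighted estimate: {weighted:.1%}")  # 49.6%
```

A shift of less than a point may not sound like much, but in a close race it can flip which candidate a poll shows ahead, which is one reason reasonable pollsters disagree.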
In addition, pollsters must analyze and adjust for new factors such as “shy” Trump voters, the impact of early mail-in ballots, undersampling of non-college-educated voters, decreased landline usage and, of course, the ongoing pandemic.
Different pollsters will adjust for these factors differently, explaining how pollsters can and do come to different results. In 2016, for example, the New York Times gave raw data to four different pollsters: Their assessments of the popular vote percentage difference ranged from Clinton +4 to Trump +1.
Given all this, pollsters know they are not going to be exact, so they use margins of error to convey their level of certainty. What’s a margin of error? Imagine you were asked to guess the height of someone down the street, and you guessed him to be 6 feet tall. How confident would you be? You might be sure he’s not 5 feet or 7 feet, but could you do better? If you said, “I’m 95 percent confident he’s between 5′8″ and 6′4″,” your margin of error would be plus or minus four inches.
Lastly, we should understand what the polls actually measure. Generally, polls make predictions about national popular vote totals, not who will win the presidency. The presidency goes to the person who wins the most Electoral College votes.
A separate set of organizations referred to as “poll aggregators” or “predictors” take polling data from multiple sources, put it through their own electoral college models and generate predictions for who will win the White House. Here, though, instead of expressing their results as a popular vote percentage with a margin of error, they express a probability of victory.
Which brings us to the interpretation of probability and what should be considered a “wrong” prediction. Poll aggregator FiveThirtyEight put the probability of a Trump victory in the late days of the 2016 campaign at approximately one in three. A Trump victory was not the expected outcome, but neither was it particularly unexpected. If someone told you there was a one in three chance of rolling a 1 or 2 on a standard die, would you then tell them they were “wrong” if a 1 was rolled?
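The die analogy is easy to check for yourself with a quick simulation (a sketch, not anything FiveThirtyEight runs): roll a fair six-sided die many times and count how often a 1 or 2 comes up. The "unlikely" one-in-three event happens constantly.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
trials = 100_000

# Stand-in for a one-in-three forecast: rolling a 1 or 2 on a fair die.
hits = sum(random.randint(1, 6) <= 2 for _ in range(trials))
print(f"A 1 or 2 came up in {hits / trials:.1%} of rolls")  # close to 33.3%
```

A forecast of one in three is not a forecast of "won't happen"; over many such forecasts, the event should occur about a third of the time.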
Admittedly, some predictors did put Clinton’s chances much higher. The Princeton Election Consortium put Clinton’s odds at 99 percent. On the face of it, the underlying models and assumptions that produced that near-certainty likely had serious flaws.
FiveThirtyEight currently puts Trump’s odds at about one in eight. Once again, a Trump loss is the expected outcome, but a Trump win would not be entirely unexpected. Should Trump win, it would again be difficult to state the predictions were wrong.
Overall, trust what the polls are telling you but understand their claims, uncertainty and limitations. That advice is as true today as it was in 2016.
Chetan Raina is a quantitative finance executive with 18 years' experience in Toronto, New York and Connecticut. He splits his time between St. Petersburg and Toronto.