Over the past year, political professionals have been picking over the pre-election polling data to figure out whether the polls failed to predict Donald Trump's upset victory over Hillary Clinton.
In many people's minds, the polls were flat wrong in 2016. Actually, it's more complicated than that.
When the American Association for Public Opinion Research asked a blue-ribbon panel of polling specialists to critique the industry's performance in 2016, the panel found some important oversights and misfires. But it also emphasized that, contrary to popular belief, the national polls weren't that far off.
We interviewed almost a dozen pollsters and experts; here are the key points we learned about how the election influenced polling, and what to look for in the future.
National polls vs. state polls
The panel concluded that the national polls, which had Clinton up by about 3 percentage points, were "basically correct," as she won the popular vote by 2 percentage points.
The problem is that the popular vote is not the one that determines the winner; that falls to the Electoral College. And Trump won the presidency by narrowly carrying the battleground states of Michigan, Pennsylvania and Wisconsin, where the polling was less accurate.
"Polls showed Hillary Clinton leading, if narrowly, in Pennsylvania, Michigan and Wisconsin, which had voted Democratic for president six elections running," the report said. "Those leads fed predictions that the Democratic 'blue wall' would hold. Come Election Day, however, Trump edged out victories in all three."
The association pointed to several issues that led polls to underestimate support for Trump, such as voters who made up their minds late breaking for Trump, too late to be captured in the polling.
The idea that polls in individual states were more problematic than national polls in 2016 is now widely accepted.
"The most obvious lesson of 2016 is that the national polls did well," said Steven S. Smith, a political scientist and polling specialist at Washington University in St. Louis.
"The problem, as usual, was in the smaller electorates (states and districts), where there are small samples, infrequent surveys, and variation in turnout from election to election."
Traditional polls vs. nontraditional polls
Earlier this year, Jon A. Krosnick, a professor of communication and political science at Stanford University, analyzed every publicly available poll in the closing days of the campaign for a paper he presented at the American Association for Public Opinion Research national conference.
Krosnick found that polls that used the traditional method (having live persons call telephone numbers chosen at random, calling back multiple times if necessary) worked well in 2016, even for polls at the state level.
This included polls run by the major networks and newspapers, as well as Quinnipiac University, Marist College and several others, Krosnick said. Other types of nontraditional polls, such as those using recorded questioners or online surveys, did less well.
One polling outlet that uses traditional methods is the Arkansas Poll at the University of Arkansas.
Its researchers are happy with using the traditional phone method, even if it is more expensive.
"I'm not doing anything different" as a result of the 2016 election "other than continuing to incrementally boost the proportion of interviews conducted by folks reached by cellphone; we're at 40 percent," said Janine A. Parry, the poll's director. "These are practices that have been widely used and have worked well."
Weighting for education
One of the most troublesome problems in 2016 was the decision by most polls not to "weight" their respondents for educational attainment.
"Weighting" means adjusting the results so that the demographics of the sample approximate the demographics of the state being tested or, for a national survey, the country as a whole. This helps straighten out the results of a poll that happened to have an unrepresentative sample. Weighting is common for some basic factors such as race and ethnicity. But it was not commonly done for education.
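The mechanics of this adjustment can be sketched in a few lines. In the toy example below, the sample shares, population shares, and support rates are all hypothetical numbers chosen purely for illustration: a sample that over-represents college graduates overstates support for the candidate they favor until each group is re-weighted to match its share of the population.

```python
# A minimal sketch of post-stratification weighting by education.
# All shares and support rates below are hypothetical, for illustration only.

# Share of each education group in the (hypothetical) poll sample
sample_share = {"college_grad": 0.50, "non_college": 0.50}

# Share of each group in the target population (e.g., from census data)
population_share = {"college_grad": 0.35, "non_college": 0.65}

# Hypothetical share of each group supporting Candidate A
support = {"college_grad": 0.60, "non_college": 0.40}

# Weight for each group = population share / sample share
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Unweighted estimate: every respondent counts equally
unweighted = sum(sample_share[g] * support[g] for g in sample_share)

# Weighted estimate: each respondent counts in proportion to their weight
weighted = sum(sample_share[g] * weights[g] * support[g] for g in sample_share)

print(f"unweighted: {unweighted:.3f}")  # 0.500
print(f"weighted:   {weighted:.3f}")    # 0.470
```

With college graduates making up half the sample but only 35 percent of the population, the raw poll overstates Candidate A's support by 3 points, which is roughly the kind of pro-Clinton tilt the report describes in unweighted state polls.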
One of the signature aspects of the 2016 presidential race was the degree to which voters with lower educational attainment voted for Trump and voters with higher educational attainment voted for Clinton. But "many polls, especially at the state level, did not adjust their weights to correct for the over-representation of college graduates in their surveys, and the result was overestimation of support for Clinton," the association's report found.
A case in point was the University of New Hampshire Survey Centerís 2016 polling, which its director called "the worst we ever had."
"We did not weight by the level of education," said director Andrew E. Smith. "It had never been an issue."
After the fact, Smith applied the proper weighting, and "everything snapped back to being accurate," he said. "So, going forward, we will have to include education in our weighting."
Mark Blumenthal, the head of election polling with SurveyMonkey, agreed that the industry has taken the educational weighting issue seriously. He said that his company is digging even deeper.
"We've always weighted by education, but our review of nearly a million interviews we conducted last year demonstrated that we should have been even more granular in our approach," said Blumenthal, who also co-founded the website now known as HuffPost Pollster. "In 2016, there was an abnormal gap between the vote preferences of those with bachelor's degrees and those with postgraduate degrees. Had we broken out these two different groups of 'college graduates,' our estimates would have been even closer."
Words of wisdom
The experts offered some parting advice for reading polls after 2016.
• Don't cherry-pick the results you prefer. "I'm using the same strategy today that I've used since I started in this business," said Amy Walter, national editor at the nonpartisan Cook Political Report. "Take the highest and lowest polls, throw them out, and the result will be somewhere in the middle."
• When you have a series of polls over time, "pay attention to the trend, not the margin," Walter said. In other words, if a candidate is getting stronger or weaker over time in a series of polls, that's a pattern worth watching.
• "If you see a poll from a pollster you've never heard of, be skeptical," Walter said.
• Consider various voter turnout scenarios. This became especially important in the recent Alabama Senate race between Republican Roy Moore and Democrat Doug Jones. With Moore an unusually polarizing figure accused of sexual misconduct, and with Jones running as a Democrat in a state that hadn't elected one statewide in years, the dynamics of who might turn up at the polls were unclear right up through Election Day.
"The 2018 midterms are still far away," said Margie Omero, a Democratic pollster who serves as executive vice president of public affairs at the firm PSB Research. Given that, it's important to consider "different assumptions about the composition and size of the electorate."
• "Pay attention to undecided voters," Walter said. "Undecideds almost always break toward the challenger. It happened in 2016 to Trump. In midterms they break away from the party holding the White House." In the recent Virginia gubernatorial election, she noted, Republican Ed Gillespie was at 44 percent. He ultimately took 45 percent, with Democrat Ralph Northam taking most of the undecided voters.
• Don't just look at "horse-race" polls, those showing head-to-head matchups between candidates. "Campaign pollsters have long argued there's far more to assessing a race than just the ballot question," Omero said. "Awareness of the candidates, engagement and enthusiasm, candidate image, and external news events can all fluctuate and change a race." Focus groups, she added, can also be effective in gauging voters' feelings, especially in less populated areas, where they are not done as frequently.
The professional forecasters said they are keeping their alarm systems on alert in today's confusing polling universe.
"I've always been skeptical of surveys (taken by machines rather than live callers), and I'm not sure that Internet polls at the state and local level have been perfected," said Jennifer Duffy, senior editor at the Cook Political Report. "Some public pollsters have figured out how to do it. Others haven't. I still have great faith in polls conducted for campaigns, campaign committees and super PACs, but like most of these pollsters will admit, their job gets harder by the cycle."