"Now I know why they call this the Granite State," said candidate Bob Dole eight years ago after losing a Republican primary in the state for the second time. "It's so hard to crack."
Dole's assessment is as true for pollsters as for the candidates in New Hampshire, where bad methodology, bad timing or simply bad luck has produced some of the most memorable miscues in polling. Consider these flubs:
In 2000, the headline on an Associated Press day-before-the-primary story was "Nearing the N.H. finish line; Polls declare GOP dead heat." John McCain then went on to beat George W. Bush by 18 percentage points.
The New Hampshire-based American Research Group's tracking poll ended up buried deepest in the snowbank: It had Bush winning by two the day before the primary, merely 20 points off the mark. On the Democratic side, the losing pollster at least got the winner right: The Quinnipiac poll predicted Gore would win by 17 percentage points, but he won by four.
It was the second debacle for ARG in as many New Hampshire Republican primaries. The day before the 1996 contest, ARG's Dick Bennett told the Manchester (N.H.) Union Leader, "It looks like Dole's going to win," based on the Kansan's seven-point advantage in their tracking poll. He didn't, losing to Pat Buchanan by a single percentage point.
Exit pollsters aren't immune. In 1992, Voter Research and Surveys' exit poll showed the first George Bush beating Buchanan by a relatively narrow 6 percentage points, only to have Bush finish 16 points ahead on election night.
In 1988, it was the Gallup poll that fell victim. Gallup's final pre-election survey had Dole up by 8 percentage points. He ended up losing to Bush senior by 9.
So is New Hampshire just jinxed, or what?
Not necessarily. The dirty little secret in New Hampshire and elsewhere is that too many of the widely reported pre-election polls cut corners or otherwise use methods that are less than gold standard.
Perhaps the best-known of the bunch, Zogby International, does all kinds of controversial things to produce its headline-grabbing tracking poll. Surveys taken by students for Franklin Pierce College use samples based on lists of registered voters that have proven to be incomplete or outdated. Suffolk University, which is polling for a Boston television station, asks a curiously convoluted candidate preference question that ends: "toward whom would you vote or lean?"
Many professionals consider student interviewers unreliable, especially when unsupervised. Franklin Pierce and Suffolk use student interviewers, as does the University of New Hampshire. Polling directors at the schools insist the kids are all right: "Their quality is tremendous," said Richard Killion, who oversees Franklin Pierce polls, later adding: "It really improved when I started paying them."
Research 2000, which does polls for the Concord Monitor, doesn't randomly select respondents within the households it contacts. Instead, it interviews whoever answers the telephone, provided that person qualifies as a likely Democratic primary voter on the basis of answers to subsequent questions. The data are then adjusted so that the proportions of men and women, Democrats and independents, and other key groups match the proportions that voted in the 2000 New Hampshire primary, reports Del Ali, president of the firm. That could improve the accuracy of the results if Tuesday's electorate is a carbon copy of 2000's, but could be a problem if it is not.
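The adjustment Ali describes is what survey researchers call post-stratification weighting. A minimal sketch of the idea, with purely illustrative group shares (not Research 2000's actual figures):

```python
# Post-stratification weighting: scale each demographic group so its
# weighted share of the sample matches a benchmark electorate.
# The numbers below are hypothetical stand-ins, not real polling targets.

def poststratify(sample_counts, benchmark_shares):
    """Return a weight for each group so weighted shares match the benchmark."""
    total = sum(sample_counts.values())
    weights = {}
    for group, count in sample_counts.items():
        sample_share = count / total
        weights[group] = benchmark_shares[group] / sample_share
    return weights

# Suppose the raw sample came back 60% women / 40% men, but the benchmark
# electorate (say, the 2000 primary) was 52% women / 48% men.
sample = {"women": 300, "men": 200}
benchmark = {"women": 0.52, "men": 0.48}

w = poststratify(sample, benchmark)
# Each woman's answer now counts 0.52/0.60 ~= 0.867; each man's 0.48/0.40 = 1.2.
```

The catch, as the article notes, is baked into the benchmark: the weights force the sample to look like the 2000 electorate, so if Tuesday's voters differ from 2000's, the "correction" moves the numbers in the wrong direction.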
Likewise, student interviewers at Franklin Pierce don't randomly select people once they reach a household. They talk to whoever answers the telephone unless their sample is skewing more male or female; in that case, if they need more women, they ask in subsequent calls to speak to a woman.
The problem with not randomly selecting within households is that the resulting sample is more likely to be biased in ways that are not readily apparent.
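A toy simulation makes the mechanism concrete. The assumptions here are purely illustrative: in a two-adult household, suppose one adult picks up the phone 70 percent of the time. If answering habits correlate with anything that predicts candidate preference, "interview whoever answers" over-represents that person's views:

```python
import random

# Toy simulation of the within-household selection problem.
# Illustrative assumption: in each two-adult household, adult A answers
# the phone 70% of the time, adult B the other 30%.

random.seed(0)

def simulate(n_households, answerer_rate=0.7):
    """Return the share of interviews that land on the frequent answerer."""
    picked_a = 0
    for _ in range(n_households):
        if random.random() < answerer_rate:
            picked_a += 1
    return picked_a / n_households

share_a = simulate(100_000)
# Random within-household selection would yield ~0.5; "whoever answers"
# yields ~0.7, so any opinion tied to phone-answering habits is inflated.
```

The bias is invisible in the topline numbers, which is exactly why it is dangerous: nothing in the sample itself flags that frequent answerers are over-counted.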