by D. Cupples | Yesterday, Zogby seemed worried because its polls had wrongly indicated that Barack Obama would win the New Hampshire Democratic primary. What about the Iowa Caucus?
A Zogby poll released on December 30 indicated that of "934 likely caucus goers," 31% supported Hillary Clinton, 27% supported Barack Obama, and 24% supported John Edwards (3.3% margin of error). It was closer than Zogby's New Hampshire predictions, but it was still wrong.
Relying on polls requires strong faith that the answers of, say, 934 survey participants truly reflect the answers of tens of thousands of people (or more) who haven't been surveyed. Incidentally, Zogby's polls weren't the only ones out of sync with the actual Iowa Caucus results.
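To put a number on that leap of faith: the margin of error pollsters report is just textbook sampling math. Here's a quick sketch in Python (my own illustration, not anything Zogby published) that reproduces the roughly 3.3% figure for a 934-person sample and shows how much even a perfectly random poll bounces around:

```python
import math
import random

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Zogby's reported sample: 934 likely caucus-goers.
print(f"n=934 -> +/- {margin_of_error(934):.1%}")  # roughly +/- 3.2%

# Simulate repeatedly polling a population where a candidate's true
# support is 28%: even flawless random sampling produces scatter.
random.seed(1)
true_support = 0.28
for trial in range(5):
    sample = sum(random.random() < true_support for _ in range(934))
    print(f"poll {trial + 1}: {sample / 934:.1%}")
```

And that's the best case: the formula assumes a truly random sample, which (as discussed below) real polls rarely achieve.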
A CNN/Opinion Research Corp poll put Clinton two points above Obama and 11 points above Edwards in Iowa (BN-Politics). On January 1, a Des Moines Register poll put Obama at 32%, Clinton at 25%, and Edwards at 24%.
In other words, two polls taken within days of each other showed different results among the top three candidates. That, alone, should have compelled pundits and journalists to view polls with caution.
Polls about Republicans in Iowa also conflicted. Released on December 30, a poll by MSNBC/McClatchy/Mason-Dixon showed Mitt Romney at 27% and Mike Huckabee at 23%, with a 5% margin of error wide enough to swallow the entire gap between them. A CNN poll around the same time put Romney at 31% and Huckabee at 28%. Meanwhile, a poll by the Des Moines Register put Huckabee at 32% and Romney at 26%.
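A back-of-the-envelope check makes the Romney/Huckabee numbers even less decisive. Assuming (as a simplification) that the reported 5-point margin applies to each candidate's share, the two candidates' confidence intervals overlap outright:

```python
def intervals_overlap(p1, p2, moe):
    """Crude check: do two candidates' confidence intervals overlap?"""
    lo1, hi1 = p1 - moe, p1 + moe
    lo2, hi2 = p2 - moe, p2 + moe
    return lo1 <= hi2 and lo2 <= hi1

# MSNBC/McClatchy/Mason-Dixon: Romney 27%, Huckabee 23%, 5-point margin.
print(intervals_overlap(0.27, 0.23, 0.05))  # True -> a statistical toss-up
```

A rigorous comparison of two shares from the same poll is more involved than this, but the overlap alone shows the poll couldn't name a leader.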
Which polls should we have believed? The better question is: why did pundits and journalists feel obligated to embrace any poll or venture any prediction at all?
The only way to know which polls were right is hindsight -- or, perhaps, a properly blessed deck of Tarot cards.
A New York Times op-ed explains (in the last six paragraphs) that pre-New Hampshire polls might have been off because of sampling problems. Reportedly, less wealthy and less educated people were less likely to even answer survey questions. In other words, the opinions of a significant number of people who would end up voting may not have been factored into the polling data.
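Here's a toy simulation of that effect (mine, not the Times'): even with perfectly random dialing, if one candidate's supporters answer the phone less often, the poll skews.

```python
import random

random.seed(2)

# Toy electorate: 52% support candidate A, 48% support candidate B.
# Suppose B's supporters answer the pollster only 60% as often as A's.
def run_poll(n=1000, p_a=0.52, response_a=1.0, response_b=0.6):
    responses = []
    while len(responses) < n:
        voter_is_a = random.random() < p_a
        answers = random.random() < (response_a if voter_is_a else response_b)
        if answers:  # only responders show up in the data
            responses.append(voter_is_a)
    return sum(responses) / n

print(f"A's true support: 52.0%, polled support: {run_poll():.1%}")  # ~64%
```

The numbers are invented, but the mechanism is the one the op-ed describes: the sample is random, yet the result is badly off.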
Yet another problem is that polls rely on self-reporting. This is nobody's fault: it can't be any other way, but it does create limitations for people interpreting polling data. Some people answering survey questions may sound more definite than they really feel. Some may change their minds in short order. Some might give inaccurate answers just for the heck of it (you never know).
Then there's the issue of how questions are phrased and whether phrasing influences participants' answers.
Until we have a fully functioning mind-reading machine, we can't know whether survey participants' answers truly reflect how they do or will feel -- which is partly why reliance on polls requires a leap of faith.
Maybe the fault for mis-predicting the New Hampshire Democratic Primary lies more with the reporting of polling data. Numbers create the impression of concreteness, which may be why pundits and journalists enjoy citing them. How many reporters and pundits truly understand the subtleties and limitations of polling data?
More importantly, how many of them have the time or column-inch space to adequately explain those subtleties and limitations to their audiences?
Note that polling data can fail to reflect reality on issues beyond elections, too. For examples, see the posts linked below.
See Memeorandum for other bloggers' commentary.
Other BN-Politics Posts:
* Again, Polls Don't Mean Much: Hillary Wins New Hampshire
* Primary 2008: Polls Don't Seem to Mean Much
* Something's Amiss at Gallup: Approval Ratings & Mid-East Peace
* Polling Data Inadequately Reported
* CNN Poll: Bad News for Dems, No News for Republicans?