Oct. 19, 2004 – In a full-page advertisement in the September 26 New York Times, the liberal advocacy organization MoveOn.org lambasted a Gallup poll from earlier that month that placed George W. Bush ahead of John Kerry by a 13-point margin. Critical of what they called Gallup's pattern of "pro-Bush findings," MoveOn hurled every charge from political bias to flawed methodology at the organization. They even implied that former Gallup head George Gallup Jr. manipulated his polls in service of his religious beliefs.
White flag in hand, Times reporter Jim Rutenberg surrendered a few days later with an article that brandished a familiar adage: please, don't shoot the messenger if you don't like the message.
As November fast approaches, however, the proliferation of pre-election public opinion polls has rekindled an old debate -- do polls only measure opinion or can they be used to shape what people believe? Pollsters deny that their "disinterested" number crunching can sway a vote and paint themselves as the hapless targets of partisan bickering, instead blaming the media for the improper use of polling data.
But political groups disagree. Critics from both sides of the party line cite serious problems with polling logistics that they say influence the outcome of a poll and can translate into skewed news coverage and impact voters at the booth.
"The polling community is in disarray over this question right now," said Marc Sapir, executive director of the grassroots polling organization Retro Poll and a member of the American Association of Public Opinion Research. Retro Poll designs and performs opinion polls that examine the relationship between public knowledge and public opinion. The organization's mission is to reveal how "government and corporate media distort information in order to manipulate, confuse and disorganize the public's will."
Sapir is part of a small but growing group of scholars and polling professionals who consider polls a "negative factor in terms of having a democratic election." In an interview with The NewStandard, Retro Poll Director Mickey Huff explained that polls are like a "snapshot of a particular place at a particular time." He believes that when selectively reported or extrapolated to paint a larger picture, poll data can be manipulated to say almost anything.
Their organization, by contrast, uses a two-tiered methodology that asks both factual and opinion questions. They maintain that this generates a benchmark of knowledge against which opinion can be measured. Without such a benchmark, Huff contends that polls "must show a trend over time in order to have any kind of scientific value."
Though many Americans still trust these snapshots of public opinion, Huff and Sapir have some harsh criticism for the methodology that much of their field relies on. "If you take apart the way polls are done, you find that all of the polls are bull," Sapir said.
Statistical Friend or Statistical Foe?
While the MoveOn advertisement bashes Gallup's "likely voters" poll by comparing it with fourteen other polls claiming to sample the same group, Sapir doubts the predictive power of the entire likely voter model. Such polls purport to predict candidate ratings among those who will actually vote come election time -- traditionally a disproportionately older, whiter and more Republican demographic than the rest of the population.
But pollsters can be slow to catch on to trends in voter demographics. During a 2002 speech, top pollster John Zogby cited difficulty in predicting voter turnout as a severe impediment to accurate polling. He admitted that his congressional election forecasts that year were off the mark because he concentrated on changing voting patterns in black and Latino communities and severely underestimated the Republican get-out-the-vote effort. This year, as millions of new voter registrations overwhelm elections offices across the country, it may be more difficult than ever to define the likely voter demographic.
Just as the fluid definition of the likely voter can bias a poll's sample, Sapir sees a similar pitfall in the use of margins of error.
Though margin of error figures often make polls seem highly accurate, they are standardized numbers determined only by a poll's sample size and can obscure the existence of large numbers of non-responders. In his own practice, Sapir finds that he reaches fewer and fewer people while conducting a poll. Of those reached, as many as 70 percent refuse to participate in his surveys. He theorizes that poll respondents self-select -- those who choose to participate are the ones most likely to agree with the poll in the first place.
"Even if 99 percent refused to participate and we had to speak to 100,000 people to find 1,000 who would talk with us, the margin of error statistic would still be reported as the same 3%," explained Sapir in a 2003 Z Magazine article.
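Sapir's point can be checked with simple arithmetic: the margin of error conventionally reported for a poll is computed from the number of completed interviews alone, so the number of people who refused or were never reached does not enter the figure at all. A minimal sketch (the function name and the worst-case p = 0.5 convention are illustrative assumptions, not any pollster's published code):

```python
import math

def reported_margin_of_error(completed_interviews, z=1.96):
    """Conventional margin of error for a proportion at 95% confidence,
    using the worst case p = 0.5. Note the formula takes only the
    number of completed interviews -- refusals never appear."""
    return z * math.sqrt(0.25 / completed_interviews)

# 1,000 respondents out of 1,000 contacted (100% response rate)
print(round(100 * reported_margin_of_error(1000), 1))  # 3.1 points

# 1,000 respondents out of 100,000 contacted (1% response rate):
# the reported figure is identical, because the 99,000
# non-responders are invisible to the calculation.
print(round(100 * reported_margin_of_error(1000), 1))  # 3.1 points
```

Both scenarios yield the familiar "plus or minus 3 points," which is exactly the gap Sapir highlights: the statistic quantifies random sampling variation, not the self-selection bias introduced by mass non-response.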
Mainstream polling organizations, too, have identified non-responders as a major obstacle to their work, and some pollsters even anticipate that the growing popularity of cell phones and caller-ID services may soon make telephone-based polling obsolete. Yet according to a study released last April by the Pew Research Center, these technologies do not cut equally across all sectors of society. Pew found that a greater percentage of African-Americans than white Americans have caller-ID and use it to screen calls. And 41 percent of young people ages 18-29 say that they screen their calls, compared to only 12 percent of Americans 65 or older.
Huff believes that because non-responders are more likely to be younger and African-American, these groups are often underrepresented in polls.
"We hear that youth are apathetic," said Huff, "but we donâ€™t know, because they have placed themselves outside of pollingâ€™s reach."
Though he finds many flaws with mainstream polling methodology, Sapir believes that polls' biggest potential for influence lies in the "time, place and purpose that the polls happen." Polling organizations are independent and not inherently biased, but they will rarely refuse to conduct a poll, says Sapir. Because polls are prohibitively expensive for all but the wealthiest companies and news conglomerates, those organizations wield a disproportionate influence over which issues are introduced into public discourse.
How does the ability to commission a poll translate into influence?
"It's not exactly that Gallup's cooking the books," read the MoveOn ad. But "poll results profoundly affect a campaign's news coverage as well as the public's perception of the candidates."
That polls can influence news coverage and impact voters is hardly a revolutionary idea. Both Canada and the European Union have blackout dates on the public release of exit and opinion poll data during an election week. And in April, Indian political parties unanimously sought a total ban on the airing of polls as early as the date of notification of an election. Though Indian Attorney General Soli J. Sorabjee declared that an outright polling ban would be a violation of constitutional free speech guarantees, he instead recommended regulatory measures to restrict the publication of poll data.
The Blame Game
Veteran pollsters know that in an election season, theirs can be an unpopular business, and many, like Roger Tourangeau of the American Association of Public Opinion Research, often write off castigations as mere partisanship. "When their particular candidate is being gored there is a tendency to discredit the poll," said Tourangeau in an interview with The NewStandard.
Tourangeau, like most pollsters, insists that there is "no evidence to date" to suggest that polls impact voters. Instead many believe that misuse and misreporting of their nonpartisan data by the media is to blame for any of polling's negative influence.
Most recently, the National Council on Public Polls released a statement questioning the Commission on Presidential Debatesâ€™ decision to use "combined survey results" to determine which candidates could participate in the widely televised presidential debates.
Lawrence Jacobs, director of the University of Minnesota's 2004 Elections Project, encouraged critics of opinion polls to "think of polls as a tool like a rake. Is it the fault of the rake that the grass seeds are being pulled up?" he asked. "We're confusing the operators with the tool -- it's not the polls that are at fault here."
"That's rubbish," responds Retro Poll's Huff. "The polls have this bizarre symbiotic relationship with the media. They are not divorced from the media. These groups both have a responsibility to ensure polls are used properly."
In his latest article in the September 27 edition of the Polling Report, Jacobs details the triple alliance that has caused polling to blossom into a central feature of American elections in recent years. Voters use polls to stay informed about the horse race, candidates use polls to determine their platforms and the media use polls to shape the news.
The press spends millions of dollars per year on polls and uses them in a number of ways, says Jacobs. Though the media are themselves often critical of polls, having the latest figures in the presidential race is an easy way to sell papers. Polls help editors frame their coverage, defining, for example, a perceived narrowing gap between candidates or whether one nominee appears to have a strong lead over the other. In the same vein, editors use polls to determine who is deserving of precious column inches or airtime.
In an interview with The NewStandard, Jacobs conceded that politicians and editors rely heavily on poll data, paying almost slavish attention to these snapshots of public opinion. "Candidates are convinced that polls are like oxygen, they think if they don't pay attention, they'll pass out," he said.