Opinion poll

An opinion poll is a survey of public opinion from a particular sample. Opinion polls are usually designed to represent the opinions of a population by conducting a series of questions and then extrapolating generalities in ratio or within confidence intervals.

History

The first known example of an opinion poll was a local straw vote conducted by The Harrisburg Pennsylvanian in 1824, showing Andrew Jackson leading John Quincy Adams by 335 votes to 169 in the contest for the United States Presidency. Such straw votes—unweighted and unscientific—gradually became more popular, but they remained local, usually city-wide, phenomena. In 1916, the Literary Digest embarked on a national survey (partly as a circulation-raising exercise) and correctly predicted Woodrow Wilson's election as president. Mailing out millions of postcards and simply counting the returns, the Digest correctly called the following four presidential elections.

In 1936, however, the Digest came unstuck. Its 2.3 million "voters" constituted a huge sample, but they were generally more affluent Americans who tended to have Republican sympathies. The Literary Digest did nothing to offset this bias. The week before election day, it reported that Alf Landon was far more popular than Franklin D. Roosevelt. At the same time, George Gallup conducted a far smaller but more scientifically based survey, in which he polled a demographically representative sample. Gallup correctly predicted Roosevelt's landslide victory. The Literary Digest soon went out of business, while polling started to take off.

Elmo Roper was another American pioneer in political forecasting using scientific polls.[1] He predicted the reelection of President Franklin D. Roosevelt three times, in 1936, 1940, and 1944. Louis Harris had been in the field of public opinion since 1947, when he joined the Elmo Roper firm, later becoming a partner.

Gallup launched a subsidiary in the United Kingdom, where it correctly predicted Labour's victory in the 1945 general election, in contrast with virtually all other commentators, who expected a victory for the Conservative Party, led by Winston Churchill.

By the 1950s, various types of polling had spread to most democracies, and opinion surveys have since been put to use well beyond election forecasting. In Iraq, surveys conducted soon after the 2003 war helped to measure the true feelings of Iraqi citizens towards Saddam Hussein, post-war conditions, and the presence of US forces.

Sampling and polling methods

For many years, opinion polls were conducted mainly by telephone or through person-to-person contact, and while methods and techniques vary, they are widely accepted in most areas. Opinion polling has grown into a wide range of popular applications, although response rates for some surveys have declined; changes in methodology have also led to differing results.[1] Some polling organizations, such as YouGov and Zogby, use Internet surveys, where a sample is drawn from a large panel of volunteers and the results are weighted to reflect the demographics of the population of interest. This is in contrast to popular web polls, which draw on whoever wishes to participate rather than a scientific sample of the population, and which are therefore not generally considered accurate.
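
As an illustration of this kind of demographic weighting, the short Python sketch below re-weights respondents by the ratio of each group's population share to its sample share; all group names, shares, and responses are invented for the example.

```python
# A minimal sketch of demographic weighting: each respondent counts in
# proportion to (population share / sample share) of their group.
# The age groups, shares, and responses below are purely illustrative.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}  # panel skews older

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

def weighted_support(responses):
    """responses: list of (age_group, supports_candidate) pairs."""
    total = sum(weights[group] for group, _ in responses)
    in_favor = sum(weights[group] for group, yes in responses if yes)
    return in_favor / total

sample = [("18-34", True), ("35-54", True), ("55+", False), ("55+", False)]
print(f"Weighted support: {weighted_support(sample):.1%}")  # 68.2%
# Unweighted, the same four responses would show 50% support.
```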

The wording of a poll can also introduce bias. For instance, the public is more likely to indicate support for a person who is described by the operator as one of the "leading candidates". This description conveys a subtle bias for that candidate, as does lumping some candidates into an "other" category, or vice versa.[2]

Potential for inaccuracy

Polls based on samples of populations are subject to sampling error, which reflects the effects of chance and uncertainty in the sampling process. The uncertainty is often expressed as a margin of error. The margin of error is usually defined as the radius of a confidence interval for a particular statistic from a survey, for example the percentage of people who prefer product A over product B. When a single, global margin of error is reported for a survey, it refers to the maximum margin of error for all reported percentages using the full sample from the survey. If the statistic is a percentage, this maximum margin of error can be calculated as the radius of the confidence interval for a reported percentage of 50%. For example, a poll with a random sample of 1,000 people has a margin of sampling error of 3% for the estimated percentage of the whole population. A 3% margin of error means that 95% of the time the procedure used would give an estimate within 3% of the percentage to be estimated. The margin of error can be reduced by using a larger sample; however, if a pollster wishes to reduce the margin of error to 1%, they would need a sample of around 10,000 people. In practice, pollsters must balance the cost of a large sample against the reduction in sampling error, and a sample size of around 500-1,000 is a typical compromise for political polls. (Note that to get complete responses it may be necessary to include thousands of additional participants.)[2]
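
The arithmetic behind these figures is simple. Below is a minimal Python sketch, assuming a simple random sample and a 95% confidence level (z = 1.96); the function name and sample sizes are illustrative only.

```python
# Margin of error for a reported proportion from a simple random sample,
# at a 95% confidence level (z = 1.96).
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Radius of the confidence interval for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# The maximum margin of error occurs at p = 0.5:
for n in (500, 1_000, 10_000):
    print(f"n = {n:>6}: +/- {margin_of_error(n):.1%}")
# n =    500: +/- 4.4%
# n =   1000: +/- 3.1%
# n =  10000: +/- 1.0%
```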

Nonresponse bias

Since some people do not answer calls from strangers, or refuse to answer the poll, poll samples may not be representative samples from a population. Because of this selection bias, the characteristics of those who agree to be interviewed may be markedly different from those who decline. That is, the actual sample is a biased version of the universe the pollster wants to analyze. In these cases, bias introduces new errors, one way or the other, that are in addition to errors caused by sample size. Error due to bias does not become smaller with larger sample sizes. If the people who refuse to answer, or are never reached, have the same characteristics as the people who do answer, then the final results should be unbiased. If the people who do not answer have different opinions, then there is bias in the results. In terms of election polls, studies suggest that bias effects are small, but each polling firm has its own formula for adjusting weights to minimize selection bias.[3]

Response bias

Survey results may be affected by response bias, where the answers given by respondents do not reflect their true beliefs. This may be deliberately engineered by unscrupulous pollsters in order to generate a certain result or please their clients, but more often is a result of the detailed wording or ordering of questions (see below). Respondents may deliberately try to manipulate the outcome of a poll, for example by advocating a more extreme position than they actually hold in order to boost their side of the argument, or by giving rapid and ill-considered answers in order to hasten the end of their questioning. Respondents may also feel under social pressure not to give an unpopular answer. For example, respondents might be unwilling to admit to unpopular attitudes like racism or sexism, and thus polls might not reflect the true incidence of these attitudes in the population. In American political parlance, this phenomenon is often referred to as the Bradley Effect. If the results of surveys are widely publicized, this effect may be magnified, a phenomenon known as the spiral of silence.

Wording of questions

It is well established that the wording of the questions, the order in which they are asked, and the number and form of alternative answers offered can influence the results of polls. Thus comparisons between polls often boil down to the wording of the question. On some issues, question wording can result in quite pronounced differences between surveys.[3][4][5] This can also, however, be a result of legitimately conflicted feelings or evolving attitudes, rather than a poorly constructed survey.[6] One way in which pollsters attempt to minimize this effect is to ask the same set of questions over time, in order to track changes in opinion. Another common technique is to rotate the order in which questions are asked. Many pollsters also split-sample: this involves having two different versions of a question, with each version presented to half the respondents.
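
As an illustration of split-sampling, the Python sketch below randomly assigns half of the respondents to each of two question wordings, so any systematic difference between the halves estimates the wording effect; both wordings and the assignment scheme are invented for the example.

```python
# A minimal sketch of split-sampling: each respondent is randomly
# assigned one of two wordings of the same question.
import random

WORDING_A = "Do you support the proposed policy?"
WORDING_B = "Do you oppose the proposed policy?"

def split_sample(respondent_ids, seed=42):
    """Randomly assign half of the respondents to each wording."""
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible
    ids = list(respondent_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    assignment = {rid: WORDING_A for rid in ids[:half]}
    assignment.update({rid: WORDING_B for rid in ids[half:]})
    return assignment

for rid, wording in sorted(split_sample(range(6)).items()):
    print(rid, wording)
```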

The most effective controls, used by attitude researchers, are:

  • asking enough questions to allow all aspects of an issue to be covered and to control effects due to the form of the question (such as positive or negative wording), the adequacy of the number being established quantitatively with psychometric measures such as reliability coefficients, and
  • analyzing the results with psychometric techniques which synthesize the answers into a few reliable scores and detect ineffective questions.

These controls are not widely used in the polling industry.
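
As a concrete illustration of one such psychometric measure, the Python sketch below computes Cronbach's alpha, a widely used reliability coefficient, for an invented battery of three related questions answered by five respondents.

```python
# Cronbach's alpha: a common reliability coefficient for a battery of
# related questions. The 1-5 answers below are invented for illustration.
def cronbach_alpha(item_scores):
    """item_scores: one inner list of scores per question, aligned by
    respondent. alpha = k/(k-1) * (1 - sum(item variances)/total variance)."""
    k = len(item_scores)
    n = len(item_scores[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(items[r] for items in item_scores) for r in range(n)]
    return k / (k - 1) * (1 - sum(var(items) for items in item_scores) / var(totals))

# Three related questions answered by five respondents:
answers = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
print(f"alpha = {cronbach_alpha(answers):.2f}")  # ~0.87; values near 1 = reliable
```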

Coverage bias

Another source of error is the use of samples that are not representative of the population as a consequence of the methodology used, as was the experience of the Literary Digest in 1936. For example, telephone sampling has a built-in error because in many times and places, those with telephones have generally been richer than those without. Alternatively, in some places, many people have only mobile telephones. Because pollsters cannot call mobile phones (it is unlawful in the United States to make unsolicited calls to phones where the phone's owner may be charged simply for taking a call), these individuals will never be included in the polling sample. If the subset of the population without cell phones differs markedly from the rest of the population, these differences can skew the results of the poll. Polling organizations have developed many weighting techniques to help overcome these deficiencies, with varying degrees of success. Several studies of mobile phone users by the Pew Research Center in the U.S. concluded that the absence of mobile users was not unduly skewing results, at least not yet.[4]

An oft-quoted example of opinion polls succumbing to errors was the UK general election of 1992. Despite the polling organizations using different methodologies, virtually all the polls in the lead-up to the vote (and the exit polls taken on voting day) showed a lead for the opposition Labour party, but the actual vote gave a clear victory to the ruling Conservative party.

In their deliberations after this embarrassment the pollsters advanced several ideas to account for their errors, including:

Late swing
For example, the Conservatives gained from people who switched to them at the last minute, so the error was not as great as it first appeared.
Nonresponse bias
For example, Conservative voters were less likely to participate in the survey than in the past and were thus underrepresented.
The spiral of silence
For example, the Conservatives had suffered a sustained period of unpopularity as a result of economic stagnation and a series of minor unpopular actions. Some Conservative supporters felt under pressure to give a more popular answer.

The relative importance of these factors was, and remains, a matter of controversy, but since then the polling organizations have adjusted their methodologies and have achieved more accurate surveys and analysis in subsequent elections.

Polling organizations

There are many polling organizations. The most famous is the Gallup poll run by The Gallup Organization.

Other major polling organizations in the United States include:

In the United Kingdom, the most notable pollsters are:

In Australia the most notable companies are:

In Canada the most notable companies are:

In New Zealand the most notable polling organization is:

In Nigeria the most notable polling organization is:

In India the major polling organizations are:

  • C - fore
  • A.C Nielsen - Org
  • TNS

All the major television networks, alone or in conjunction with the largest newspapers or magazines, in virtually every country with elections, operate their own versions of polling operations, in collaboration or independently through various applications.

Several organizations try to monitor the behavior of pollsters and the use of polling and statistical data, including the Pew Research Center and, in Canada, the Laurier Institute for the Study of Public Opinion and Policy.[5]

The best-known failure of opinion polling to date in the United States was the prediction that Thomas Dewey would defeat Harry S. Truman in the 1948 U.S. presidential election. Major polling organizations, including Gallup and Roper, indicated a landslide victory for Dewey.

In the United Kingdom, most polls failed to predict the Conservative election victories of 1970 and 1992, and Labour's victory in 1974. However, their figures at other elections have been generally accurate.

Influence

By providing information about voting intentions, opinion polls can sometimes influence the behavior of electors. The various theories about how this happens can be split into two groups: bandwagon/underdog effects, and strategic ('tactical') voting.

A bandwagon effect occurs when the poll prompts voters to back the candidate shown to be winning in the poll. The idea that voters are susceptible to such effects is old, stemming at least from 1884; Safire (1993: 43) reported that the term was first used in a political cartoon in the magazine Puck in that year. The idea remained persistent in spite of a lack of empirical corroboration until the late 20th century; George Gallup spent much effort, in vain, trying to discredit it by presenting empirical research. A recent meta-study of scientific research on this topic indicates that from the 1980s onward the bandwagon effect is found more often by researchers (Irwin & van Holsteyn 2000).

The opposite of the bandwagon effect is the underdog effect, which is often mentioned in the media. This occurs when people vote, out of sympathy, for the party perceived to be 'losing' the election. There is less empirical evidence for the existence of this effect than there is for the existence of the bandwagon effect (Irwin & van Holsteyn 2000).

The second category of theories on how polls directly affect voting is called strategic or tactical voting. This theory is based on the idea that voters view the act of voting as a means of selecting a government. Thus they will sometimes not choose the candidate they prefer on grounds of ideology or sympathy, but another, less-preferred candidate, for strategic reasons. An example can be found in the United Kingdom general election of 1997. The constituency of Enfield, held by the then Cabinet Minister Michael Portillo, was believed to be a safe seat, but opinion polls showed the Labour candidate Stephen Twigg steadily gaining support, which may have prompted undecided voters or supporters of other parties to support Twigg in order to remove Portillo. Another example is the boomerang effect, where the likely supporters of the candidate shown to be winning feel that victory is assured and that their vote is not required, thus allowing another candidate to win.

These effects indicate how opinion polls can directly affect the political choices of the electorate. Opinion polls can also affect parties and voters indirectly, for example through media framing and shifts in party ideology, which must also be taken into consideration. In some instances, opinion polling is itself a measure of cognitive bias, which must be considered and handled appropriately in its various applications.

References

Additional Sources

Walden, Graham R. Survey Research Methodology, 1990-1999: An Annotated Bibliography. Bibliographies and Indexes in Law and Political Science Series. Westport, CT: Greenwood Press, Greenwood Publishing Group, Inc., 2002. xx, 432p.

Walden, Graham R. Public Opinion Polls and Survey Research: A Selective Annotated Bibliography of U.S. Guides and Studies from the 1980s. Public Affairs and Administrative Series, edited by James S. Bowman, vol. 24. New York, NY: Garland Publishing Inc., 1990. xxix, 360p.

Walden, Graham R. Polling and Survey Research Methods 1935-1979: An Annotated Bibliography. Bibliographies and Indexes in Law and Political Science Series, vol. 25. Westport, CT: Greenwood Publishing Group, Inc., 1996. xxx, 581p.

See also