Response rate (survey)
Response rate (also known as completion rate or return rate) in survey research refers to the number of people who answered the survey divided by the number of people in the sample. It is usually expressed in the form of a percentage.
Example: if 1,000 surveys were sent by mail, and 257 were successfully completed and returned, then the response rate would be 25.7%.
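Expressed as code, the definition is a single division. The following minimal Python sketch (the function name and interface are illustrative, not from any standard survey library) reproduces the mail-survey example above:

```python
def response_rate(completed: int, sample_size: int) -> float:
    """Survey response rate: completed responses divided by sample size, as a percentage."""
    if sample_size <= 0:
        raise ValueError("sample size must be positive")
    return 100.0 * completed / sample_size

# The mail-survey example above: 257 completed questionnaires out of 1,000 sent
print(response_rate(257, 1000))  # 25.7
```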
In direct marketing, the response rate refers to the number of people who responded to the offer.
In oncology, response rate (RR) is a figure representing the percentage of patients whose cancer shrinks after treatment (termed a partial response, PR) or disappears (termed a complete response, CR). In simpler terms, RR = PR + CR.
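As a worked illustration of RR = PR + CR (a hypothetical sketch; the counts are invented for the example):

```python
def oncology_response_rate(partial: int, complete: int, patients: int) -> float:
    """Overall response rate RR = PR + CR, as a percentage of treated patients."""
    return 100.0 * (partial + complete) / patients

# Hypothetical trial: 30 partial and 10 complete responses among 100 patients
print(oncology_response_rate(30, 10, 100))  # 40.0
```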
There may be a non-response bias if the response rate is low.
Importance
A survey’s response rate is the result of dividing the number of people who were interviewed by the total number of people in the sample who were eligible to participate and should have been interviewed.[1]
A low response rate can give rise to sampling bias if the non-response is unequal among participants with regard to exposure and/or outcome.
For many years, a survey's response rate was viewed as an important indicator of survey quality. Many observers presumed that higher response rates ensure more accurate survey results (Aday 1996; Babbie 1990; Backstrom and Hursh 1963; Rea and Parker 1997). But because measuring the relation between non-response and the accuracy of a survey statistic is complex and expensive, until recently few rigorously designed studies provided empirical evidence on the consequences of lower response rates.
Such studies have now been conducted, and several conclude that the expense of increasing the response rate is often not justified by the resulting difference in survey accuracy.
An early example was reported by Visser, Krosnick, Marquette, and Curtin (1996), who showed that surveys with lower response rates (near 20%) yielded more accurate measurements than surveys with higher response rates (near 60% or 70%).[2] In another study, Keeter et al. (2006) compared the results of a 5-day survey employing the Pew Research Center's usual methodology (with a 25% response rate) with results from a more rigorous survey conducted over a much longer field period that achieved a 50% response rate. In 77 of 84 comparisons, the two surveys yielded results that were statistically indistinguishable. Among the items that did show significant differences, the differences in the proportions giving a particular answer ranged from 4 to 8 percentage points.[3]
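Comparisons of this kind can be formalized with a standard two-proportion z-test. The sketch below is a generic illustration of testing whether two surveys' estimates of the same proportion are statistically distinguishable; it is an assumption-laden stand-in, not a reconstruction of the authors' actual analysis.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(p1: float, n1: int, p2: float, n2: int) -> float:
    """Two-sided p-value for the null hypothesis that two survey proportions are equal."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion under the null
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error of the difference
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical item: 52% vs 48% agreement in two surveys of 1,000 respondents each
print(two_proportion_z_test(0.52, 1000, 0.48, 1000))  # ~0.074, not significant at the 5% level
```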
A study by Curtin et al. (2000) tested the effect of lower response rates on estimates of the Index of Consumer Sentiment (ICS). They assessed the impact of excluding respondents who initially refused to cooperate (which reduces the response rate by 5–10 percentage points), respondents who required more than five calls to complete the interview (a reduction of about 25 percentage points), and those who required more than two calls (a reduction of about 50 percentage points). They found no effect of excluding these respondent groups on estimates of the ICS using monthly samples of hundreds of respondents. For yearly estimates, based on thousands of respondents, excluding the people who required more calls (though not the initial refusers) had a very small effect.[4]
Holbrook et al. (2007) assessed whether lower response rates are associated with less unweighted demographic representativeness of a sample. Examining the results of 81 national surveys with response rates varying from 5% to 54%, they found that surveys with much lower response rates were less demographically representative within the range examined, but only slightly so.[5]
Choung et al. (2013) examined the community response rate to a mailed questionnaire on functional gastrointestinal disorders. The response rate to their community survey was 52%. They then took a random sample of 428 responders and 295 non-responders for medical record abstraction and compared the two groups. They found that respondents had a significantly higher body mass index and more health-care-seeking behavior for non-GI problems. However, with the exception of diverticulosis and skin diseases, there was no significant difference between responders and non-responders in any gastrointestinal symptom or specific medical diagnosis.[6]
Nevertheless, in spite of these studies, a higher response rate is preferable because the missing data are not random.[7] There is no satisfactory statistical solution for missing data that may not be missing at random. Assuming an extreme bias among the responders is one suggested method of dealing with low survey response rates. A high response rate (over 80%) from a small random sample is considered preferable to a low response rate from a large sample.[8]
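The worst-case reasoning behind that advice can be made concrete. In the sketch below (illustrative Python, not a standard survey routine; the names are hypothetical), the true proportion is bounded by assuming all non-respondents would have answered at one extreme or the other. The bounds widen as the response rate falls and do not narrow as the sample grows, which is why a high response rate from a small random sample can beat a low response rate from a large one.

```python
def worst_case_bounds(p_observed: float, response_rate: float) -> tuple:
    """Bounds on the true proportion when nothing is known about non-respondents:
    the lower bound assumes every non-respondent would have said 'no',
    the upper bound assumes every non-respondent would have said 'yes'."""
    answered = p_observed * response_rate
    return (answered, answered + (1 - response_rate))

# 50% of respondents say 'yes'
print(worst_case_bounds(0.5, 0.8))  # (0.4, 0.6) -- 80% response rate
print(worst_case_bounds(0.5, 0.2))  # (0.1, 0.9) -- 20% response rate
```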
References
- ^ “Response Rates – An Overview.” American Association for Public Opinion Research (AAPOR). 29 Sept 2008. http://www.aapor.org/responseratesanoverview
- ^ Visser, Penny S., Jon A. Krosnick, Jesse Marquette, and Michael Curtin. 1996. “Mail Surveys for Election Forecasting? An Evaluation of the Columbus Dispatch Poll.” Public Opinion Quarterly 60: 181–227.
- ^ Keeter, Scott, Courtney Kennedy, Michael Dimock, Jonathan Best, and Peyton Craighill. 2006. “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey.” Public Opinion Quarterly 70(5): 759–779.
- ^ Curtin, Richard, Stanley Presser and Eleanor Singer. 2000. "The Effects of Response Rate Changes on the Index of Consumer Sentiment." Public Opinion Quarterly 64(4): 413–428.
- ^ Holbrook, Allyson, Jon Krosnick, and Alison Pfent. 2007. “The Causes and Consequences of Response Rates in Surveys by the News Media and Government Contractor Survey Research Firms.” In Advances in telephone survey methodology, ed. James M. Lepkowski, N. Clyde Tucker, J. Michael Brick, Edith D. De Leeuw, Lilli Japec, Paul J. Lavrakas, Michael W. Link, and Roberta L. Sangster. New York: Wiley. https://pprg.stanford.edu/wp-content/uploads/2007-TSMII-chapter-proof.pdf
- ^ Choung, Rok Seon, G. Richard Locke III, Cathy D. Schleck, Jeanette Y. Ziegenfuss, Timothy J. Beebe, Alan R. Zinsmeister, and Nicholas J. Talley. 2013. “A Low Response Rate Does Not Necessarily Indicate Non-Response Bias in Gastroenterology Survey Research: A Population-Based Study.” Journal of Public Health 21(1): 87–95. http://link.springer.com/article/10.1007/s10389-012-0513-z
- ^ Altman, Douglas G., and J. Martin Bland. 2007. “Missing Data.” BMJ 334(7590): 424. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1804157/
- ^ Evans, S. J. 1991. “Good Surveys Guide.” BMJ 302(6772): 302–303. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1669002/pdf/bmj00112-0008.pdf