For decades, survey research has provided trusted data about political attitudes and voting behavior, the economy, health, education, demography and many other topics. But political and media surveys now face significant challenges as a consequence of societal and technological changes.
A new study by the Pew Research Center for the People & the Press shows that it has become increasingly difficult to contact potential respondents and to persuade them to participate. The percentage of households in a sample that are successfully interviewed (the response rate) has fallen dramatically. At Pew Research, the response rate of a typical telephone survey was 36% in 1997 and is just 9% today.
**Reaching and Persuading Potential Respondents**

| Rate | 1997 | 2000 | 2003 | 2006 | 2009 | 2012 |
| --- | --- | --- | --- | --- | --- | --- |
| Contact rate (% of households in which an adult was reached) | 90% | 77% | 79% | 73% | 72% | 62% |
| Cooperation rate (% of households contacted yielding an interview) | 43% | 40% | 34% | 31% | 21% | 14% |
| Response rate (% of households sampled yielding an interview) | 36% | 28% | 25% | 21% | 15% | 9% |

Source: Pew Research Center, May 2012
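The three rates in the table are related: to a first approximation, the response rate is the product of the contact rate (reaching a household) and the cooperation rate (completing an interview once a household is reached). The short sketch below makes the arithmetic concrete; Pew's exact rate definitions differ slightly, so the products only roughly match the published response rates.

```python
# Rough decomposition: response rate ~= contact rate x cooperation rate.
# Figures come from the table above; the product is approximate because
# Pew's precise rate definitions differ slightly from this simple model.
rates = {  # year: (contact_rate, cooperation_rate, published_response_rate)
    1997: (0.90, 0.43, 0.36),
    2012: (0.62, 0.14, 0.09),
}

for year, (contact, coop, published) in rates.items():
    approx = contact * coop
    print(f"{year}: {contact:.0%} x {coop:.0%} ~= {approx:.1%} "
          f"(published: {published:.0%})")
```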
The general decline in response rates is evident across nearly all types of surveys, in the United States and abroad. At the same time, greater effort and expense are required to achieve even the diminished response rates of today.
Response rates have decreased in landline surveys, and the inclusion of cell phones, necessitated by the rapid rise of households with cell phones but no landline, has pushed overall response rates for telephone surveys down further.
These challenges have led many to question whether surveys are still providing accurate and unbiased information.
Despite declining response rates, the report says, telephone surveys that include both landlines and cell phones, and that are weighted to match the demographic composition of the population, continue to provide accurate data on most political, social and economic measures.
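The report does not detail the weighting procedure here, but demographic weighting of this kind is commonly done by raking (iterative proportional fitting), which adjusts respondent weights until the weighted sample matches population targets on each dimension. The sketch below is a minimal illustration with made-up respondents and targets, not Pew's actual weighting scheme.

```python
# A minimal sketch of raking (iterative proportional fitting).
# Respondents and population targets are hypothetical, for illustration.
respondents = [
    {"sex": "F", "age": "18-49"},
    {"sex": "F", "age": "50+"},
    {"sex": "M", "age": "50+"},
    {"sex": "M", "age": "50+"},
]

# Hypothetical population targets: share of adults in each category.
targets = {
    "sex": {"F": 0.52, "M": 0.48},
    "age": {"18-49": 0.55, "50+": 0.45},
}

weights = [1.0] * len(respondents)

for _ in range(50):  # iterate until the weighted margins converge
    for var, target in targets.items():
        total = sum(weights)
        for category, share in target.items():
            # Current weighted share of this category in the sample.
            current = sum(w for w, r in zip(weights, respondents)
                          if r[var] == category) / total
            factor = share / current
            weights = [w * factor if r[var] == category else w
                       for w, r in zip(weights, respondents)]

# Under-represented groups (here, the lone 18-49 respondent) end up
# with larger weights.
print([round(w, 3) for w in weights])
```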
This is not to say that declining response rates are without consequence. People who volunteer are more likely to agree to take part in surveys than those who do not, which has serious implications for a survey's ability to accurately gauge behaviors related to volunteerism and civic activity.
Telephone surveys may overestimate behaviors such as church attendance, contacting elected officials, or attending campaign events. But the tendency to volunteer is not strongly related to political preferences, including partisanship, ideology and views on a variety of issues.
The study's analysis draws on three types of comparisons.
The first compares a range of survey questions with similar questions asked by the federal government in its large national demographic, health and economic studies. These comparisons show Pew Research's standard survey to be generally representative of the population on most items, though there are exceptions.
The exceptions follow the pattern noted above: it appears that the same motivation that leads people to do volunteer work may also make them more willing to agree to take a survey.
**Comparing the Pew Survey With U.S. Government Surveys** (% of survey respondents)

| Characteristic | Pew Standard Survey | U.S. Gov't Survey |
| --- | --- | --- |
| U.S. citizen | 95% | 92% |
| Homeowner | 63% | 62% |
| Current address 5 years | 56% | 59% |
| Married | 50% | 54% |
| Children in household | 37% | 37% |
| Internet user | 80% | 74% |
| Current smoker | 22% | 19% |
| Received unemployment benefits in prior year | 11% | 11% |
| Received Social Security payments in prior year | 32% | 27% |
| Received food stamps or nutrition assistance in prior year | 17% | 10% |
| Registered to vote | 75% | 75% |
| Contacted a public official in past year | 31% | 10% |
| Volunteered for an organization in past year | 55% | 27% |
| Talked with neighbors in past week | 58% | 41% |

Source: Pew Research Center, May 2012
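To make this first comparison concrete, the sketch below computes the gap between the Pew estimate and the government benchmark for a few items from the table and flags the large ones. The figures come from the table above; the five-point threshold is an illustrative cutoff, not one Pew uses.

```python
# Gap between Pew estimates and government benchmarks, in percentage
# points. Figures are from the table above; the 5-point flag threshold
# is illustrative only.
items = {  # characteristic: (pew_pct, govt_pct)
    "U.S. citizen": (95, 92),
    "Homeowner": (63, 62),
    "Registered to vote": (75, 75),
    "Contacted a public official, past year": (31, 10),
    "Volunteered for an organization, past year": (55, 27),
}

for name, (pew, govt) in sorted(items.items(),
                                key=lambda kv: kv[1][0] - kv[1][1],
                                reverse=True):
    gap = pew - govt
    flag = "  <-- large gap" if abs(gap) >= 5 else ""
    print(f"{name}: {pew}% vs {govt}% (gap {gap:+d} pts){flag}")
```

Run as written, the large gaps all fall on the civic-engagement items, which is exactly the volunteering bias the report describes.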
The second type of comparison used to evaluate the potential for non-response bias is between estimates from the standard survey and from a high-effort survey on identical questions included in both. The high-effort survey employed a range of techniques to obtain a higher response rate: an extended field period, monetary incentives for respondents, letters to households that initially declined to be interviewed, and the deployment of interviewers with a proven record of persuading reluctant respondents to participate.
Consistent with two previous Pew studies of this kind, the vast majority of results did not differ between the survey conducted with the standard methodology and the survey with the higher response rate; only a few questions yielded significant differences. In general, the additional effort and expense of the high-effort study appear to provide little benefit in terms of the quality of estimates.
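The report does not spell out its significance tests, but a standard way to check whether an estimate differs between two independent surveys is a two-proportion z-test on the same question asked in both. The counts below are hypothetical, chosen only to illustrate the computation.

```python
# A minimal sketch of a two-proportion z-test, the standard check for
# whether the same question yields different estimates in two surveys.
# Counts below are hypothetical.
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Return the z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5*(1+erf(x/sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 52% of 1,500 standard-survey respondents vs. 49% of
# 1,200 high-effort respondents giving the same answer.
z, p = two_proportion_z_test(780, 1500, 588, 1200)
print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05 -> no significant difference
```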
A third way of evaluating the possibility of non-response bias is by comparing the survey’s respondents and non-respondents using two large national databases provided by commercial vendors that include information on nearly every U.S. household, drawn from both public and private sources.
An attempt was made to match all survey respondents and non-respondents to records in both the voter and consumer databases so they could be compared on characteristics available in the databases. Very few telephone numbers in the cell phone frame could be matched in either database, especially for non-respondents, so the analysis is limited to the landline frame.
The analysis indicates that surveyed households do not significantly over-represent registered voters, as the comparison of the survey’s voter registration estimate with the Current Population Survey estimate shows.
However, significantly more responding than non-responding households are listed in the database as having voted in the 2010 congressional elections (54% vs. 44%). This pattern, which has been observed in election polls for decades, has led pollsters to adopt methods to correct for the possible over-representation of voters in their samples.
The takeaway is that Pew, and presumably other well-known and respected pollsters, use methodologies that compensate for the known difficulties in obtaining enough high-quality responses, and check their samples against external databases to confirm that the responses remain representative.
For more from Pew about the study, and additional data, see the full report on the Pew Research Center's website.
I expect to see a lot more stories like this between now and Election Day, with a coordinated effort in the mainstream media to downplay Obama's dismal poll numbers. If people seem disenchanted, then methodology must be to blame.