Understanding Survey Bias
Welcome to the mad, mad world of survey research! It can get a bit complicated, but there’s no need to panic. In my last post about understanding survey results, I described survey sampling error and what it means to say that a survey has a certain “margin of error.” Sampling error is one of the most important types of error to which surveys are subject, but it is by no means the only one. Even surveys that aim to accurately represent their target populations are subject to other errors and biases that can be introduced throughout the research design and implementation process.
According to the Roper Center for Public Opinion Research, apart from sampling error, there are three types of error that are most important to consider when interpreting survey results: Measurement Error, Non-Response Error, and Coverage Error. For details, see Roper’s Introduction to Polling Fundamentals. The descriptions below are excerpted from this article.
Measurement Error is error or bias that occurs when surveys do not measure what they are intended to measure. This type of error results from flaws in the instrument, question wording, question order, interviewer error, timing, question response options, etc. This is perhaps the most common and most problematic collection of errors faced by the polling industry.
Coverage Error is the error associated with the inability to contact portions of the population. Telephone surveys usually exclude people who do not have land-line phones in their household, the homeless, and institutionalized populations. This error includes people who are not home at the time of attempted contact because they are on vacation, in the military overseas, etc. It also affects those who only use a cell phone, since Random Digit Dialing (RDD) samples do not include cell phone exchanges.
Non-response Error results from not being able to interview people who would be eligible to take the survey. Many households now have answering machines and caller ID that prevent easy contact; others simply do not want to respond to calls, sometimes because the endless stream of telemarketing appeals makes them wary of answering. Non-response bias is the difference in responses between the people who complete the survey and those who refuse to for any reason.
In our experience, non-response error is a regular problem for animal advocates attempting to do research on the cheap. The problem is worsened by surveys that allow self-selection, which, simply due to lack of resources, describes most of the surveys conducted by advocates. But if you post a survey to your website or send a mail survey to your supporters, who do you think is most likely to respond? Generally speaking, it will be people with the strongest opinions; in most cases, those opinions will also be biased in favor of the organization, because supporters are typically more likely than opponents to respond to your survey.
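To make that effect concrete, here is a minimal sketch (in Python, using purely hypothetical numbers rather than real survey data) of how self-selection can inflate an estimate when supporters respond at a much higher rate than everyone else:

```python
# Hypothetical illustration of self-selection bias: the numbers below are
# made up for demonstration and do not come from any real survey.

def biased_estimate(pop_support_rate, response_rate_supporters, response_rate_others):
    """Return the share of *respondents* who are supporters."""
    supporters_responding = pop_support_rate * response_rate_supporters
    others_responding = (1 - pop_support_rate) * response_rate_others
    return supporters_responding / (supporters_responding + others_responding)

# Suppose 30% of the audience actually supports the organization,
# supporters respond at 40%, and everyone else responds at 5%.
estimate = biased_estimate(0.30, 0.40, 0.05)
print(f"True support: 30%, survey estimate: {estimate:.0%}")  # roughly 77%
```

In this made-up example, an audience in which only 30% of people support the organization produces a survey in which roughly three-quarters of respondents are supporters, simply because of who chose to answer. The survey itself can look perfectly clean while still badly misrepresenting the larger group.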
It is nearly impossible to know how respondents to your survey differ from non-respondents. And because non-response bias is extremely difficult to measure, it must be addressed as early in the research process as possible. The best way to handle it is to maximize the survey response rate, minimizing the number of non-respondents you have to worry about. Funds permitting, this can be accomplished with multiple follow-up contacts, cash incentives, and a variety of other tools. Survey design also matters: short, easily understood surveys generally receive much higher response rates than long, complex ones.
Likewise, the other types of error mentioned by Roper can be partly addressed early in the process. Measurement error can be reduced with careful planning and some degree of survey design expertise, so that questions are clear and worded without bias. Coverage error can be offset by ensuring that everyone in the target respondent pool has a good (preferably equal) chance of responding to the survey. This might involve offering multiple response methods (e.g., phone, web, or mail) or allowing longer response times for people who may be temporarily absent. Of course, these are just a few of the options available for mitigating survey bias, but we hope they provide a useful introduction.
Faunalytics suggests that advocates start by becoming more educated survey consumers, including understanding how to read and interpret public opinion survey results. This will also give you a great start toward understanding some of the intricacies of survey design and implementation to prepare for planning your own research projects. Of course, for important projects where the results will be used to make critical decisions, it might make sense to bring in some outside expertise.
Questions, comments, or other resources? Weigh in below.
