Understanding Survey Results
The results of opinion surveys are easily misunderstood and sometimes intentionally misleading. Learn to identify the good, the bad, and the just plain ugly surveys.
Surveys are an abused art form. In the wrong hands, they can be meaningless or malicious. A company may tout the results of its web survey, but unless the respondents are selected in a controlled fashion, those results are nearly meaningless. Conversely, an intentionally biased survey (a “push poll”) may use skewed language yet be conducted in an otherwise legitimate fashion. Of course, in the right hands surveys are an incredibly valuable tool. Animal advocates can help themselves by learning to recognize “bad” surveys when they turn up and by treating the results with an appropriate amount of caution.
What Isn’t a Survey?
We all get them… emails from some of our colleagues and favorite animal groups asking us to “please vote in this web poll to make sure animals win!” These surveys might be worth responding to for whatever impact they have on persuading others, but don’t take the results seriously. One thing is certain: it ain’t a real survey if you’re getting a plea from your friend to help skew the results. Even web surveys that allow only one response per visitor are essentially meaningless because anyone can respond. Unless the target population is actually website visitors, these kinds of online surveys will never achieve a representative sample.
What is representation? In a nutshell, it means that your survey sample mirrors the population you’re surveying in all of the ways that matter. Achieving this requires a probability sampling method, which is harder than it sounds, though there are several approaches: random, systematic, and stratified sampling are all probability-based methods. In each case, every potential respondent in the target population has a known (and non-zero) chance of being selected. Non-probability samples, by contrast, are those where respondents have an unknown chance of being selected. The sketch below illustrates the difference.
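Here’s a minimal illustration in Python, using a hypothetical membership list as the sampling frame (the list, sample sizes, and interval are made up for the example):

```python
import random

# Hypothetical sampling frame: 10,000 member IDs.
population = list(range(10_000))

# Simple random sample: every member has a known, non-zero
# chance of selection (here, exactly 500/10,000 = 5%).
random_sample = random.sample(population, k=500)

# Systematic sample: every 20th member after a random start.
# Selection probabilities are still known in advance.
start = random.randrange(20)
systematic_sample = population[start::20]  # also 500 members

# A self-selected web poll, by contrast, includes whoever shows
# up. Each person's chance of inclusion is unknown, so it is a
# non-probability sample.
```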
Without getting any more technical, the difference is a matter of confidence. Probability surveys, if done correctly, are robust and valid and produce results that can be trusted. Non-probability samples can still be very useful in certain situations, but the results must be qualified and cannot be fully verified. In these cases, where researchers use “convenience,” judgment-based, or quota sampling methods, sampling error cannot be calculated. In other words, one must use a probability-based sampling method to be able to say that the results are accurate within a certain “margin of error.” This is one of the first things to look for when reviewing survey results.
Margin of What?
You’ve probably heard of “margin of error,” but you might not know what it is. Don’t feel bad, though, because you’re not alone. According to a recent poll by Harris Interactive, a two-thirds majority of US adults believes (incorrectly) that “margin of error” includes any error or bias caused by the wording of the questions. Just over half believe that “margin of error” describes all types of error that apply to a survey. Sizable minorities endorsed other incorrect statements about margin of error as well. In reality, it covers only error related specifically to sampling.
Margins of error are somewhat complex and beyond the scope of this blog post, but for a fairly quick primer, check out this article by research expert Pamela Hunter. For now, it may be enough to underscore that not all error margins are created equal. The error margin depends on the “confidence level,” the sample size, and even the response distribution for a specific question. A quick example shows how this can be a special challenge for animal advocates, who often deal with very low-incidence groups, such as vegetarians.
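For the curious, here is the standard formula for the margin of error of a proportion from a simple random sample, sketched in Python (the 1.96 multiplier corresponds to the conventional 95% confidence level; swap in a different z-value for other confidence levels):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion p observed in a simple
    random sample of size n; z = 1.96 gives ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# The often-quoted "+/- 3%" is the worst case (p = 0.5):
print(f"{margin_of_error(0.5, 1000):.1%}")    # 3.1%
# Lopsided responses have narrower margins:
print(f"{margin_of_error(0.023, 1000):.2%}")  # 0.93%
```

Note how the margin shrinks as the response moves away from 50%; that effect is exactly what the following example illustrates.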
The Vegetarian Resource Group (VRG) has conducted a series of adult surveys every three years (most recently in 2006). In these surveys, VRG asks respondents, “Which of the following foods do you NEVER eat?” The surveys are based on probability sampling methods, with sample sizes of about 1,000 adults each. VRG found the following percentages of vegetarians among US adults: 2.3% in 2006; 2.8% in 2003; and 2.5% in 2000. For each of the three surveys, the overall error margin is about +/- 3%, but that figure is the worst case, which assumes responses split 50/50; the margin narrows as responses become more lopsided. For “low incidence” responses such as this one, the results are actually more precise; in this case, the margin of error is about +/- 1%.
Here’s what the VRG results look like with the correct margin of error applied:
Year | Response | Error Margin | Response Range (95% confidence)
2006 | 2.3% | +/- 0.93% | 1.37% to 3.23%
2003 | 2.8% | +/- 1.02% | 1.78% to 3.82%
2000 | 2.5% | +/- 0.97% | 1.53% to 3.47%
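These figures can be reproduced with the formula sketched above, again assuming simple random samples of 1,000 and a 95% confidence level:

```python
import math

Z = 1.96  # ~95% confidence level
N = 1000  # approximate sample size per survey

for year, p in [(2006, 0.023), (2003, 0.028), (2000, 0.025)]:
    moe = Z * math.sqrt(p * (1 - p) / N)
    print(f"{year}: {p:.1%} +/- {moe:.2%} "
          f"(range {p - moe:.2%} to {p + moe:.2%})")
# 2006: 2.3% +/- 0.93% (range 1.37% to 3.23%)
# 2003: 2.8% +/- 1.02% (range 1.78% to 3.82%)
# 2000: 2.5% +/- 0.97% (range 1.53% to 3.47%)
```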
This example demonstrates two things about margin of error. First, it is variable: the error margin narrows for response distributions that are heavily skewed toward one end (e.g., 2% or 98% of respondents giving a certain answer). Second, despite this narrower margin, even probability-based survey results should be thought of as ranges rather than fixed percentages. As shown above, VRG has conducted the most valid research available, but because the three ranges overlap substantially, even their surveys have not demonstrated any meaningful change in vegetarianism over time.
Who Are You Calling Biased?
As mentioned, the margin of error is a measure of sampling error only; it does not account for the many other types of error that can affect survey results. For a quick overview, the Harris Interactive link above has a good summary of types of survey error other than sampling error. In my next post, I’ll cover some of these potential sources of error in more detail, with the goal of helping animal advocates become more informed survey “consumers,” and perhaps better survey researchers as well. To that end, if there are any questions or topics you’d like us to cover in the future, please add your comments below!
