If a simple program evaluation survey is all you’re looking for, you can find one possible template here.
There are many validated measures assessing different animal-related attitudes, beliefs, and more. If the options listed here don’t suit what you’re looking for, you have a few more options: You can look through Faunalytics’ past surveys on the Open Science Framework, have a look in the Faunalytics library for studies on a similar topic, or reach out to us for help using the contact form.
Animal Attitudes Scale (Herzog et al., 1991, 2015)
- Measures general attitudes toward animal protection
- 5- or 10-item versions (see Appendix of linked article for full scale)
- Example: “I sometimes get upset when I see wild animals in cages at zoos.”
Solidarity with Animals (Amiot & Bastian, 2017)
- Measures solidarity (belongingness, closeness, attachment) with animals
- 5 agreement items (see Study 1 Method section of linked article for scale details)
- Example: “I feel close to other animals.”
Speciesism Scale (Caviola, Everett, & Faber, 2018)
- Measures speciesism, the prejudice that humans are superior to other animals
- 6 agreement items (see linked summary for full scale)
- Example: “It is morally acceptable to trade animals like possessions.”
Individual Attitude Items (Farmed Animals):
- “Animals used for food have approximately the same ability to feel pain and discomfort as humans” (used in Faunalytics’ study of attitudes in BRIC countries)
- “Eating meat directly contributes to the suffering of animals” (used in Faunalytics’ study of attitudes in BRIC countries)
- “Low meat prices are more important than the well-being of animals used for food” (used in Faunalytics’ study of attitudes in BRIC countries)
- “It is important that animals used for food are well cared for” (used in Faunalytics’ study of attitudes in BRIC countries)
If you want to know what people actually eat, we don’t recommend directly asking respondents whether they are vegetarian or vegan, due to very high rates of misreporting. Studies have found that up to half of people who call themselves vegetarian also reported having eaten meat in the past two days (Juan et al., 2015). For this reason, you should ask about being vegetarian or vegan only if you are interested in respondents’ use of that label rather than as an indication of dietary behavior.
Self-report diet measures like you’d be using in a survey ask people about their food consumption directly. Food Frequency Questionnaires (FFQs) are a common way of doing so. For example:
FFQs are easy to use, as they can be included in a survey. They are also flexible and easy to modify (Cade et al., 2002). A few adjustments you might want to consider to suit your needs include:
- Changing the time scale from 3 months to a longer or shorter period. For instance, Faunalytics successfully adapted the pork scale above to a one-month time frame for our study of Animal Equality’s video outreach, which spanned a one-month period.
- Changing the frequency options if you have reason to believe they won’t capture the frequencies very well for your population of interest.
- Combining similar categories to make them easier to complete if you are short on space in your study. E.g., “Chicken and Turkey” could be combined, or “Dairy and Eggs.”
- Including plant-based products in the food list. A food list that asks many more questions about animal products than about plant-based foods may cue participants about the purpose of the study, which could affect their answers. A longer list that also includes plant-based “distractor” items may be preferable if respondents will not already know who is surveying them. Similarly, including other categories of food that have social or medical significance, like caffeinated beverages, may provide respondents with alternative explanations for the survey, reducing the likelihood that they will figure out the goal of the study (Hebert et al., 1997).
- Choosing culturally-relevant foods if the FFQ is intended for use in another country or with a specific cultural group in the U.S. (Vergnaud, et al., 2010).
Social desirability bias is a serious concern with studies conducted by animal advocacy organizations. Survey results regularly come back with unreasonably high rates of success, due to a combination of respondents incorrectly identifying as vegetarian, higher rates of engagement and response from the participants most moved by an activity (also known as response bias), and responses intended to make oneself look good to others (social desirability bias).
The most reliable way to control for social desirability bias is to avoid giving the respondent clues about which answers the surveying organization would prefer. However, if respondents will unavoidably know which answers the surveying organization would prefer, social desirability bias can be addressed by including a set of questions called the Marlowe-Crowne Scale (Reynolds’ Form C), shown below.
These questions are designed to measure respondents’ tendency to answer in ways that make them look good rather than truthfully. Higher scores on this measure indicate greater tendencies toward socially desirable responding. In turn, a high correlation between scores on the social desirability measure and responses to other questions on the survey would suggest that answers to those questions may be driven in part by socially desirable responding, rather than by respondents’ true beliefs and behaviors.
During statistical analysis, respondents’ scores on the social desirability measure can be “controlled for.” Doing so lets you determine whether the associations between your key variables are real or merely a byproduct of socially desirable responding.
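As a minimal sketch of what “controlling for” social desirability can look like (this is an illustration, not Faunalytics’ analysis procedure): one common approach is to regress the social desirability score out of both variables and correlate the residuals, i.e., compute a partial correlation. The data below are simulated so that the two outcomes are related only through social desirability.

```python
import numpy as np

# Simulated data: both the attitude measure and the reported diet measure
# are inflated by social desirability (SDS), with no real link between them.
rng = np.random.default_rng(0)
n = 200
sds = rng.normal(size=n)                       # social desirability scores
attitude = 0.6 * sds + rng.normal(size=n)
reported_diet = 0.6 * sds + rng.normal(size=n)

def residualize(y, x):
    """Remove the linear effect of x from y via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

raw_r = np.corrcoef(attitude, reported_diet)[0, 1]
partial_r = np.corrcoef(residualize(attitude, sds),
                        residualize(reported_diet, sds))[0, 1]
print(f"raw r = {raw_r:.2f}, controlling for SDS r = {partial_r:.2f}")
```

Because the only shared influence here is social desirability, the raw correlation is clearly positive while the partial correlation falls to roughly zero, which is exactly the pattern that would warn you an apparent effect is driven by socially desirable responding.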
Listed below are a number of statements concerning personal attitudes and traits. Read each item and decide whether the statement is True or False as it pertains to you personally.
Note: Response options of True and False should be provided for each statement.
Assign each respondent a social desirability score based on their answers to the questions on the scale:
Each respondent should now have a social desirability score between 0 and 13. These scores are intended to measure how likely the respondent is to give answers that sound good instead of answers that are true. While most people will answer in the socially desirable way to some questions, and some people really will have more of the “good” traits than others, those respondents with especially high scores may have obtained them by answering in ways that exaggerate their good qualities and minimize their bad ones. This is the same behavior we would be worried about leading to under-reporting of animal product consumption if the respondents know the survey is being carried out by an animal advocacy group.
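The scoring described above can be sketched in a few lines: a respondent earns one point for each of the 13 items answered in the socially desirable direction. Note that the answer key below is a placeholder, not the published Reynolds Form C key; substitute the actual keyed responses from the scale documentation before use.

```python
# PLACEHOLDER key: all items keyed True for illustration only. The real
# Reynolds Form C key mixes True- and False-keyed items -- look it up.
PLACEHOLDER_KEY = [True] * 13

def score_social_desirability(responses, key=PLACEHOLDER_KEY):
    """Count items answered in the keyed (socially desirable) direction."""
    if len(responses) != len(key):
        raise ValueError("expected one True/False response per item")
    return sum(r == k for r, k in zip(responses, key))

# A respondent matching the key on 9 of 13 items scores 9.
responses = [True] * 9 + [False] * 4
print(score_social_desirability(responses))  # 9
```

The resulting score falls between 0 and 13, as described above, with higher scores flagging respondents more prone to socially desirable responding.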
With demographic and personal data, it is particularly important to be cautious about what you collect and how the data will be stored. If you don’t need the information, don’t ask these questions, as they are potentially sensitive. If you do ask them, give participants the option not to answer if you can.
What is your gender?
- I do not see myself represented in the above options. My gender is ____.
What is your age? ___
Where do you live?
- [dropdown list of states]
- U.S. territory (e.g., Puerto Rico)
- I do not reside in the United States
Usage note: You should not look at your data at the state level unless you have several thousand participants—there will be too few per state for the results to be reliable. Instead, re-categorize the state-level information into regions: Northeast, Midwest, South, and West. We recommend asking participants for their state rather than their region directly because some may be unsure of the regions.
What is your race/ethnicity?
- Hispanic or Latino/Latina
- White, non-Hispanic
- Black or African-American
- American Indian or Alaska Native
- Asian
- Native Hawaiian or Other Pacific Islander
- Two or more races
Usage note: This item is adapted from the U.S. census format to combine race and ethnicity into one question for simplicity.
What is your annual household income before taxes?
- Less than $20,000
- $20,000 to $39,999
- $40,000 to $59,999
- $60,000 to $79,999
- $80,000 to $99,999
- $100,000 or more
Why use a validated scale instead of your own items?
If you’ve combed through the sections above and don’t see anything that will work for you, you may want to write your own questions. Here are a few suggestions:
- Keep your question or statement short and simple. Many adults in the U.S. have low literacy: if a student in grade 7 or 8 would have trouble with your question, as many as half of your participants will too, which lowers data quality because they can’t answer accurately.
- Use common response options. This is good practice even if you’re writing your own question or statement, to ensure that you don’t inadvertently use options that are confusing to participants or produce results that are hard to interpret (examples below). Symmetrical scales are easier for participants and researchers, so use them whenever possible. With a symmetrical scale, the difference between “Negative” and “Somewhat Negative” is the same as the difference between “Positive” and “Somewhat Positive.” With an asymmetrical scale, the differences between the options is more subjective and hard to interpret.
- Five-point, symmetrical scale with a midpoint: Strongly Disagree / Disagree / Neither Agree Nor Disagree / Agree / Strongly Agree
- Six-point, symmetrical scale with no midpoint: Completely Dissatisfied / Dissatisfied / Somewhat Dissatisfied / Somewhat Satisfied / Satisfied / Completely Satisfied
- Five-point, asymmetrical scale: Not At All Likely / Somewhat Likely / Moderately Likely / Very Likely / Extremely Likely
- You can replace the words used with your own, but keep the format (e.g., Accurate/Inaccurate, Positive/Negative, Important/Unimportant).
- Avoid double-barreled questions (questions that ask about two things at once). For instance, imagine asking: “How satisfied were you with how knowledgeable and interesting the tour guide was?” Participants who thought the tour guide was boring but knowledgeable or interesting but inexperienced will have a hard time answering, and you’ll have a hard time interpreting the answers.
- Use negatives sparingly. Small negation words like ‘not’ or ‘don’t’ are easily missed by participants (e.g., “I often spend time with people who don’t care about animal rights”). Even more importantly, avoid confusing participants with multiple negations in one sentence (e.g., “I never spend time with people who don’t care about animal rights”).
- Consider the whole range of people who might participate. Think through how you’re going to recruit participants, and who will end up completing your study as a result. For instance, if your survey about your website pops up when someone visits the site, you will get some participants who are visiting for the first time. Do you need a “don’t know” or “not applicable” option on any of your questions? Can they skip some entirely?
- Don’t use Yes/No for a subjective question. Unless you’re asking something extremely straightforward, you can probably get more information by providing a wider range of response options. For example, if you’re asking “Would you recommend this product to a friend?”, rather than just choosing Yes or No, you could give five choices: Definitely Yes, Probably Yes, Uncertain, Probably No, Definitely No. (Sometimes it’s nice to report a simple percentage of people who said yes, but you can just combine the percentages who said probably or definitely yes.)
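The top-two-box calculation mentioned in the last bullet (reporting a simple “% yes” from a five-point item) can be sketched like this, using made-up responses:

```python
from collections import Counter

# Collapse a five-point recommendation item into a single "% who said yes"
# figure by combining the top two options. Responses are illustrative.
responses = ["Definitely Yes", "Probably Yes", "Uncertain",
             "Probably No", "Definitely Yes", "Probably Yes"]
counts = Counter(responses)
yes = counts["Definitely Yes"] + counts["Probably Yes"]
pct_yes = 100 * yes / len(responses)
print(f"{pct_yes:.0f}% would recommend")  # 4 of 6 -> 67%
```

This way you keep the richer five-point data for analysis while still being able to report one headline number.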
- Test your study. To make sure your questions are short and simple enough, conduct an informal pilot test by having 5 to 10 people complete your study as though they were participants. Ask them to tell you about their thought process and any problems they encountered. Watch for points of confusion, ambiguity, or difficulty finding a response option that fits.
To Cite This Page:
Faunalytics (2019). Questions to Use in Survey Research and Experiments. Retrieved from https://faunalytics.org/survey-questions