Evaluating Our Effectiveness At Faunalytics
At Faunalytics, impact evaluation is a huge part of what we do. Whether it’s the impact of a particular advocacy method, a donation request, or a strategy to help someone stay vegan, most of our original research program revolves around one type of impact evaluation or another. Similarly, the pro bono support we provide to other organizations via office hours and email largely focuses on improving those organizations’ self-evaluations. Yet in many ways, the most difficult type of impact to evaluate is our own.
Our mission is to support and build capacity for other animal advocates, so our impact is indirect. This is a problem that many organizations struggle with: When the value they provide is intangible (e.g., training, consulting, long-term campaigns), it can be much more difficult to quantify. It seems fitting that a research-focused organization like ours faces these same struggles: they push us to convey the difficulties and to help identify solutions.
This blog is intended to do both: It outlines our process, shares a few results, and provides an example of imperfect impact evaluation when your impact is indirect. Some of these initiatives are relatively new, so we don’t have enough data to share yet. The goal isn’t to provide a comprehensive list of our results, but rather to explain the process.
First Steps
First, we need to be clear on what we’re trying to achieve. This is how we start all of our research projects: by defining the research questions—or in business terms, our key performance indicators.
As it says on our About page, Faunalytics’ mission is to empower animal advocates with access to research, analysis, strategies, and messages that maximize their effectiveness to reduce animal suffering. That tells us what (in general terms) we need to measure. Our key performance indicators need to reflect that mission.
Next, we turn to the more difficult question of how to do so. My advice to anyone attempting indirect impact evaluation is the same: multiple imperfect measures are better than one imperfect measure. And the more different they are, the better, so that the strengths of one method make up for the weaknesses of another. Just as with physical measurement, thinking about impact from multiple angles lets you triangulate on the correct answer.
In keeping with this approach, multiple measurement tools currently feed into our ongoing impact evaluation, as shown in this conceptual diagram.
More specifically, our impact measurement methodology currently involves:
- A long list of analytics from our website, social media platforms, infographics, and newsletter, as well as custom tracking of PDF download numbers and academic citations for our research reports;
- An annual survey of our community members assessing the value, quality, and usage of our resources;
- Surveys specifically targeting stakeholders and people who use our pro bono office hours; and
- Observed data capturing the number of pro bono support requests we handle, webinar attendance, a rough rating of the potential impact of the organizations on our stakeholder list, and even spontaneous comments on the value of our work that come in via email or social media—all of which we can roughly categorize as positive or negative.
I go into more detail about each of these measures below and provide some sample results. The sections follow our mission statement, to make clear why these are important questions for us.
Empowerment
Faunalytics exists to build capacity in the movement, to give animal advocates the power to make impactful decisions. So the first part of our research question is: Is our community empowered to make better evidence-based decisions?
Currently, we are evaluating empowerment with measures like our annual community survey and ongoing stakeholder surveys. Our goal is to understand how our resources are being used to inform decisions about advocacy strategies, to strengthen advocacy materials with facts, and to improve measurement of program effectiveness.
For the future, we are exploring the feasibility of an observational method to determine our impact on other organizations’ decision-making. For instance, this might entail a comparison of how organizations who are (versus are not) part of the Faunalytics community use data and evidence. However, such an effort would be substantially more resource-intensive and still correlational, so it is not something we would undertake lightly.
Reaching Animal Advocates
Animal advocates comprise most of Faunalytics’ community and are our target audience: our goal is to support all of you in your work. To provide impactful support, we need to reach as many advocates as possible. Reach—in general and with individual publications—is one of the easier things to measure objectively.
Our indicators of reach include a range of default and custom analytics, as well as observed data. For example:
- Unique and total page views for the site and specific resources
- Newsletter subscribers and open rate
- Citations counted by Google Scholar
- Research report downloads
- Advocates assisted via office hours or email support
- Social media views and engagement (comments, retweets, etc.)
In the future, we would like to move toward a better understanding of the scope of the animal advocacy population so that we can examine reach as a proportion of the maximum possible. For now, though, as we mentioned in our advocate retention survey, this is a difficult problem and one that we will approach in stages.
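To make the idea of reach as a proportion concrete, here is a minimal Python sketch of how a reach rate might be bounded when the size of the advocate population is itself uncertain. All figures are invented for illustration; they are not Faunalytics’ actual numbers.

```python
def reach_rate_bounds(advocates_reached, population_low, population_high):
    """Return (low, high) bounds on reach as a proportion of the total
    advocate population, given an uncertain population size.

    The largest population estimate yields the lowest reach rate,
    and vice versa.
    """
    if not (0 < population_low <= population_high):
        raise ValueError("population bounds must be positive and ordered")
    return (advocates_reached / population_high,
            advocates_reached / population_low)

# Hypothetical figures: 20,000 advocates reached, with the total advocate
# population estimated somewhere between 100,000 and 400,000.
low, high = reach_rate_bounds(20_000, 100_000, 400_000)
print(f"Estimated reach: {low:.0%} to {high:.0%}")  # Estimated reach: 5% to 20%
```

Even a wide interval like this can be informative for tracking whether reach is growing relative to the likely size of the movement.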
Access to Resources That Help Maximize Effectiveness
There is no other organization that does what Faunalytics does—we have provided animal advocates with access to research and resources for more than two decades. Most studies and resources are published and shared through our Research Library. It not only makes academic research accessible to everyone and applicable to advocates, but also helps us share our original research and evidence-based blogs.
Quantifying our output, as shown above, is just part of evaluating this goal of providing access to helpful resources. More importantly, we track resource quality (as rated by our community members and stakeholders) and usefulness. This includes measures like:
- Community survey feedback (see examples below)
- Feedback from stakeholders and community members about the quality and usefulness of specific resources, including individual reports and pro bono support
- The content and overall positivity of spontaneous, unsolicited feedback pertaining to this topic. This is not a collection of testimonials—while useful in their own right, testimonials are not suitable for impact evaluation, as they are selected for their positivity. For evaluation purposes, we keep a record of all email and social media comments pertaining to the value of our work, which are categorized as positive or negative to provide a rough overview of this type of feedback.
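As a rough illustration of how such a record of unsolicited feedback might be summarized, here is a minimal Python sketch. The comments and labels are invented, and in practice each comment would be categorized by a person; the code only tallies labels that have already been assigned.

```python
from collections import Counter

# Hypothetical log: each entry is a comment source plus a manually
# assigned positive/negative label.
feedback_log = [
    {"source": "email",  "label": "positive"},
    {"source": "social", "label": "positive"},
    {"source": "social", "label": "negative"},
    {"source": "email",  "label": "positive"},
]

def summarize_feedback(log):
    """Tally positive vs. negative comments and return the share positive."""
    counts = Counter(entry["label"] for entry in log)
    total = sum(counts.values())
    share_positive = counts["positive"] / total if total else None
    return counts, share_positive

counts, share_positive = summarize_feedback(feedback_log)
print(counts)          # Counter({'positive': 3, 'negative': 1})
print(share_positive)  # 0.75
```

A summary like this says nothing about the magnitude of impact, but tracked over time it gives a crude check that the balance of unsolicited feedback is not deteriorating.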
The above indicators are from the 2020 community survey, as we have not yet collected enough data from the targeted surveys to share. Our immediate goal for this aspect of our impact evaluation is to increase the completion rates on the targeted surveys or identify an alternative method of collecting data from stakeholders.
Reducing Animal Suffering
The ultimate goal of our work at Faunalytics is to reduce suffering and lives lost. In an ideal world, we would have an objective way of measuring or estimating this key outcome, but only a subset of organizations that work directly with animals have that option, and even then it can be very difficult. (For instance, estimating the impact of a change from battery cages to cage-free systems means not just estimating the number of hens affected but also the difference in subjective wellbeing for hens in each system.)
In our case, because we help animals by helping other advocates, our hypothesized impact looks something like this:
Currently, we’re measuring every step in this model with varying degrees of accuracy and success, but we are a long way from the ideal of being able to produce a reasonable estimate of the number of animal lives we have impacted. In some cases, making impact estimates is useful, even when it entails a number of assumptions or uncertainties (for instance, our animal product impact scales). The difference here is that the best-case outcome would be an estimate of how much “credit” we can take for other advocates’ work, and that isn’t an outcome we want, especially with such low certainty.
For that reason, our primary measures of impact for animals are currently limited to what we believe can be reasonably estimated, while we continue to improve on all parts of our impact estimation model. These sources are:
- Community survey feedback (see example below)
- Feedback from targeted surveys about whether Faunalytics’ resources and support have helped stakeholders and community members save animal lives or reduce suffering
- A rough count of potentially high-impact organizations on our stakeholder list, determined by the size of the animal and/or advocate population they serve. For the reasons stated above, this is not intended to provide a specific estimate of how many animals Faunalytics has indirectly helped. However, to the extent that we serve organizations that reach a large number of animals or advocates, and members of those organizations tell us that our resources have improved their advocacy, we can be reasonably confident that our work has an impact—and that the more high-impact organizations use our work, the more impact we are likely to have.
Limitations & Future Directions
As I said at the beginning, this impact evaluation is imperfect. Of note:
- It’s largely reliant on self-report survey data, and concerns about selection bias in the community survey data are reasonable despite our efforts to counter it with anonymity, wide distribution, and a lottery incentive.
- Thus far, we haven’t had a strong response on our stakeholder surveys, so that methodology may need to be revisited.
- The preferred goal of any impact evaluation is to understand the absolute amount of impact one has (number of animals saved, for instance), but with our current reliance on self-reported perceptions, beliefs, and attitudes, ours is more useful for looking at relative impact (change over time).
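The relative-impact framing in the last point can be sketched as a simple year-over-year comparison. This is a minimal Python example with invented survey ratings, not Faunalytics’ actual analysis:

```python
from statistics import mean

# Hypothetical mean "usefulness" ratings (1-5 scale) from an annual survey.
ratings_by_year = {
    2019: [4, 5, 3, 4, 4],
    2020: [5, 4, 4, 5, 4],
}

def yearly_change(ratings):
    """Return the mean rating per year and the rounded change between
    consecutive years."""
    means = {year: mean(vals) for year, vals in sorted(ratings.items())}
    years = sorted(means)
    deltas = {years[i]: round(means[years[i]] - means[years[i - 1]], 2)
              for i in range(1, len(years))}
    return means, deltas

means, deltas = yearly_change(ratings_by_year)
print(means)   # {2019: 4.0, 2020: 4.4}
print(deltas)  # {2020: 0.4}
```

Tracking a change like this says nothing about absolute impact, but it can show whether perceived usefulness is moving in the right direction from one survey wave to the next.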
There are several aspects of this impact evaluation that we would like to improve. But we always recommend a policy of measuring what you can in the best way you can, while continuing to reflect and refine, and I’m proud to say we practice what we preach! We have revised or added many evaluation methods over the past few years, but as noted throughout, we continue to think about ways to do better.
I hope that other organizations will join us in evaluating their impact and making that process public, imperfections and all. We all know that there are many factors to balance: cost, accuracy, our community’s time, our funders’ and reviewers’ needs for information, and our own needs for information. You probably won’t find the perfect balance on the first try—difficult research tends to be an iterative process. But the only way to start is to start, and we’re here to help!
Please reach out with all your questions, concerns, ideas, and suggestions. Faunalytics exists to empower animal advocates—and even with our own impact evaluation, we hope to do just that.
