AI In Animal Advocacy: Promise, Peril, Or Patience?
For people interested in technology and cutting-edge computing, the release of ChatGPT in late 2022 felt like a game-changer: all of a sudden you could meaningfully chat with a ‘bot’ in normal human language, in all of its wonderful messiness. The bot seemed not only to understand you, but to give you correct-sounding answers to questions and feedback on ideas. It was fascinating; it was buggy; and it was a turning point.
Since then, there has been a veritable explosion of generative AI across the world — in terms of investment, adoption, news, policy, tool proliferation, and more. Animal advocates, for our part, have displayed a full spectrum of responses: from tech-enthusiasts going “all in” on the potential of AI to transform our work, to those who find it problematic and refuse to engage with it on principle. In between the extremes, there is a nuanced gradient of thought, opinion, and practice.
Below, we look at broad trends in the animal advocacy movement around AI, noting where we feel the movement is doing well, and where more critical thought might be warranted.
The Influence Of EA & The Tech Industry
Since the early 2010s, Effective Altruism (EA) has become more and more intertwined with the animal advocacy movement in a variety of ways. For decades, animal advocates were generally content to engage in tactics that felt good but didn’t necessarily deliver results. The EA movement encouraged animal advocates to think seriously about impact in measurable, more quantifiable terms. While this hasn’t been a detriment per se, it has meant that our movement has shifted markedly towards farmed animal campaigns of scale, and towards interventions with a more quantifiable angle.
EA’s origins are firmly rooted in — and continue to grow around — the U.S. tech sector. The tech industry’s EA mindset (and high salaries) have even inspired many people to “earn to give,” and to make sure their giving is guided towards having the greatest impact possible. This has been a funding boon to the movement, and those funding flows have shaped the issues that animal advocates focus on. It has helped the movement professionalize, and develop in ways that often mirror tech sector companies and startups.
Many proponents of AI in animal advocacy come from the EA side of the movement. This makes intuitive sense, since the EA movement has long had a keen interest in AI, and since AI has the potential to help us increase our impact and become a force multiplier for the movement’s relatively small amount of resources. It’s important to note, however, that EA’s interest in AI goes far beyond its application to advocacy, or animal-related topics.
Considering how much animal advocacy and EA have become intertwined, EA’s relationship to AI is one we should keep in mind as our movement pushes towards greater AI adoption. EA-minded animal advocates may be interested not only in the immediate, practical applications of AI for animals, but also in AI issues far beyond that scope.
Knowing When To Speak, Knowing When To Listen
For decades, farmed animal advocates spent a good deal of time torn between two polarized approaches: telling people emphatically to go vegan, or working on more “welfarist” approaches and meat reduction campaigns. We also spent a lot of time debating the relative merits of both approaches and dealing with our fair share of infighting along the way.
Throughout the 2000s and 2010s, however, organizations and individual advocates began to shift their perspective. Rather than seeing veganism and animal rights as a morally superior position that we can simply bestow upon the general public, we started to wonder: Is there something we’re doing wrong? How can we make our own messages more effective? How can we meet people where they’re at? What does the public want, and want to hear?
Inspired by the power of data, groups like Faunalytics (formerly the Humane Research Council, established over 25 years ago) brought market research techniques to advocacy, and started to look at advocacy as a matter of understanding the audience rather than assuming we know what will work. It was a decidedly EA approach to solving a longstanding problem.
Which brings us to a vital question as we consider our movement’s relationship to AI: what does the general public think about AI — from usage to generated content to overall sentiment?
The current outlook is not a positive one.
Two recent, comprehensive studies of U.S. adults from Pew Research and a broad-ranging meta-review from the Brookings Institution show that sentiment generally skews negative. The Brookings meta-review found that people were more likely to feel “cautious” (54%) or “concerned” (49%) than “curious” (29%), “excited” (19%), or “hopeful” (19%) about AI. What’s more, feelings of skepticism and overwhelm had increased markedly between December 2024 and March 2025 (by 8% and 6% respectively), while feelings of excitement decreased by 5%.
Meanwhile, the Pew study did some interesting segmentation, delineating AI “experts” from the public. What they found was that far more “experts” believe AI will benefit (76%) rather than harm (15%) them personally. Meanwhile, the public is far more likely to think AI will harm them (43%) than benefit them (24%). In their study, 73% of AI experts surveyed said AI will have a very or somewhat positive impact on how people do their jobs over the next 20 years — but only 23% of the general public feels the same way.
Of course, these dynamics are not limited to the United States. A massive, global study by KPMG finds a mixed bag of results, varying widely by sector and region. There is optimism — but still, over half of the respondents are wary of trusting AI on a variety of levels. Interestingly, this study looked deeply at the use of AI in education and found that four in five students regularly use AI in their studies (at least partially borne out in ChatGPT usage data, which drops significantly during summer vacation time), reporting benefits such as efficiency and “reduced workload and stress.” However, the study also found that “inappropriate, complacent and non-transparent use of AI by students is widespread, raising concerns about over-reliance and diminished critical thinking, [and] collaboration.” It’s worth considering how student use of these tools (for summarizing text, and for help with writing) might mirror what we hope for ourselves.
What all of this means is that our audience, the people we’re actually trying to reach with our message, is clearly skeptical and concerned, and at best cautiously positive about AI. They may also be using AI in ways that are reckless and that diminish their own critical thinking. Adding to the above are concerns about AI’s environmental impact (both individual and at scale, whether founded or unfounded), the further entrenchment of surveillance (including, and especially, against political activists), the abusive working conditions of AI data labellers in Kenya and beyond, and the degradation of the overall information landscape due to the widespread publication of AI slop (both in the wild and in the workplace).
This is a potent stew of factors through which our movement’s use or promotion of AI could backfire, and it is brewing at the same time that an increasing number of advocates are using AI to generate advocacy content, to research and strategize, and much more.
Where Is AI Going, And What Can We Expect?
A common refrain among tech enthusiasts is that, despite its current flaws, “this is the worst that AI is ever going to be” — the implication being that these technologies will continue to improve on a predictable upward trajectory.
This is a fallacy: there is no guarantee that AI products and software will improve indefinitely. In the coming years, it is very possible — and, we believe, probable — that AI tools will get better on a variety of metrics. It is also probable that AI products will predictably degrade in user experience as profit motives create significant pressure and the investors pouring hundreds of billions of dollars into these projects seek returns. This degradation could happen in several ways:
- Increased subscription prices — There is no guarantee that our use of AI tools will remain as cost-effective as it currently is. Currently, AI companies are providing access to very advanced software at rock-bottom prices, while they simultaneously take many billions of dollars of investment. As with Netflix, Spotify, and many others before them, we believe it is very likely that subscription costs will rise, along with API call costs. The pace and extent of such a rise remains uncertain.
- Advertising-supported AI — Related to the above, a key way that companies like Facebook and Google became massively profitable is advertising, to the point where it’s arguable that they are advertising companies first rather than social media or search tools. Generative Engine Optimization is already overtaking Search Engine Optimization as a marketing concern, and we are already in the era of sponsored content being embedded into AI chats. Whether such content will be clearly marked remains unclear and currently unregulated, and the effect this might have on advocates is uncertain.
- Selling data — Related to advertising, generative AI represents another frontier of wholesale data gathering on billions of users, similar to social media but supercharged by the granular and highly personal data that users might share with their therapist LLMs, for example. While there are currently legal and political reasons for AI companies to not retain user data, profit motives could move that needle.
Meanwhile, the broad-scale productivity boosts promised by AI companies are largely failing to materialize. Or are they? Two reputable surveys paint a picture of the full spectrum. One major study, a survey of over 500 CFOs across sectors, found that the vast majority reported “no change” attributable to AI over the last 12 months on everything from labor productivity, to decision-making speed, to customer satisfaction, to time spent on high value-add tasks. In the same study, a small number of CFOs reported a 1–5% positive increase, roughly even with the number of CFOs who were “not sure” of any increase or decrease. Meanwhile, another major study found that 39% of enterprises are seeing “moderately positive ROI (e.g., measurable benefits but limited or in early stages),” while 35% claim “significantly positive ROI (e.g., clear financial returns or major operational improvements).”
Such wide-ranging, contradictory results may indicate that productivity benefits from AI adoption are unevenly distributed, hard to capture, or simply not there. While large-scale productivity boosts may still be a question mark, this doesn’t preclude them from becoming clearer in the future. There are obviously many individuals who are seeing productivity boosts, but this may not yet be translating at a higher scale. The uncertainty and contradictory data suggest that ignoring AI’s potential is a gamble — as is going “all in” or completely changing course to prioritize AI above all other tools and strategic approaches.
For our part, Faunalytics has been mindful and methodical in our adoption and use of AI tools, recognizing that the intersection of AI and research is still very much a fraught topic. To that end, working with Kyle Behrend and with the support of Stray Dog Institute, we’ve been developing an LLM-enhanced synthesis tool that would draw specifically from Faunalytics’ extensive Library of human-summarized, verified, and vetted research. Using this tool, you’ll be able to get the trustworthy, reliable data that you’re used to from us, in an instantly synthesized form based on your advocacy questions. We feel this is a natural extension and evolution of our longstanding commitment to curation, accuracy, and usefulness, and we’re excited to be working towards a public launch in the coming months. Our tool is currently in a beta testing phase — if you’re interested in trying it out, get in touch.
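For readers curious about the general shape of a tool like this, here is a minimal, purely illustrative sketch of the underlying idea (often called retrieval-augmented generation): retrieve relevant entries from a vetted library first, then constrain the model’s answer to only those entries. Everything below, including the tiny sample library, the keyword-overlap retriever, and the prompt builder, is a hypothetical stand-in, not Faunalytics’ actual implementation; a production tool would use proper semantic search and a real LLM call.

```python
# A conceptual sketch of retrieval-augmented synthesis: retrieve vetted
# summaries first, then ground the LLM's answer in only those summaries.
# All names and data below are illustrative stand-ins, not Faunalytics' code.

# Stand-in for a library of human-summarized, vetted research.
LIBRARY = [
    {"title": "Meat reduction messaging",
     "summary": "Incremental asks outperform absolutist ones in several trials."},
    {"title": "Corporate welfare campaigns",
     "summary": "Cage-free pledges show follow-through when paired with public tracking."},
    {"title": "Plant-based defaults",
     "summary": "Default plant-based menus raise uptake without lowering diner satisfaction."},
]

def tokenize(text):
    """Lowercase word tokens with surrounding punctuation stripped."""
    return [word.strip(".,:;!?\"'").lower() for word in text.split()]

def retrieve(question, library, k=2):
    """Rank entries by naive keyword overlap with the question; keep top k."""
    q_tokens = set(tokenize(question))
    scored = []
    for entry in library:
        entry_tokens = set(tokenize(entry["title"] + " " + entry["summary"]))
        scored.append((len(q_tokens & entry_tokens), entry))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for score, entry in scored[:k] if score > 0]

def build_prompt(question, retrieved):
    """Compose an LLM prompt constrained to the retrieved, vetted summaries."""
    sources = "\n".join(f"- {e['title']}: {e['summary']}" for e in retrieved)
    return ("Answer using ONLY the vetted summaries below, citing titles.\n"
            f"Question: {question}\n"
            f"Summaries:\n{sources}")

question = "What messaging works best for meat reduction campaigns?"
hits = retrieve(question, LIBRARY)
prompt = build_prompt(question, hits)
```

The key design choice this illustrates is that the model never answers from its own training data alone; anything it says must trace back to a human-verified summary, which is what makes the output trustworthy in a way that a bare chatbot answer is not.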
What About AGI, ASI, and AI Sentience?
Beyond the practical applications and concerns around AI’s current capabilities are speculations (and hype) around Artificial General Intelligence (AGI), Artificial Superintelligence (ASI), and AI Sentience. These terms have varying and contested definitions (ranging from contextual to hopeful to skeptical to financial), and as such are the subject of much discussion and debate.
Discussing any one of these concepts in-depth would merit an essay (or series of essays) on its own, and their larger implications are beyond the scope of this blog. With that said, it is worth noticing that, much like animal ethicists’ discussions around animal sentience and intelligence, these are concepts beset by murky definitions and moving goalposts.
However, that has not stopped some animal advocates from sounding the alarm about AGI and its potential positive and negative impacts on our advocacy. Across this topic, speculation abounds. One of the most popular and comprehensive resources on a scenario of runaway AGI by 2027 launched in April of 2025 and was shared widely in the animal advocacy community. By the end of 2025, however, its timelines had already been revised and spoken of with much less certainty. In the skeptical camp, some leading experts assert not only that such timelines won’t be met, but that LLMs categorically cannot lead to AGI. Indeed, experts in the AI field disagree broadly on AGI timelines, and even more so on how AI will impact the world in the short and long term.
The field of AI sentience, meanwhile, is a somewhat separate sphere that has potential strategic overlaps with animal advocacy, and is asking qualitatively different questions: Could AI gain moral personhood? If so, what might that mean for broader social understanding of personhood and moral status? People working in this field are concerned both about the welfare of potentially sentient AI systems and about the ripple effects such a determination might have for animal welfare.
All of this leads to rather existential questions: Is there anything animal advocates can truly do to prepare in the face of superintelligent AI transforming all of society? Is there anything anyone can do? If AI is determined to be sentient, what might that mean for all of our ongoing work for non-human animals?
The answers to these questions could portend positive outcomes: AI that actively promotes the sentience and welfare of animals; a shift in the economic order that makes advocacy more possible for the average person; new avenues of science that advance alternative proteins in ways we can’t yet anticipate; and more. There could also be a variety of negative outcomes on the horizon: massive unemployment and economic disruption that greatly reduces potential advocacy funding; further entrenchment of factory farming using AI-enhanced technologies; even AI that actively devalues organic life, including humans.
All of these scenarios, however, remain highly speculative. While we don’t discourage anyone from speculating about higher-level issues, and indeed are encouraged by thoughtful approaches to these questions, we remain focused on day-to-day practical applications, and on how animal advocates can use AI most effectively and mindfully. Whether or not AGI arrives in the near term (or at all), AI is already becoming a normal technology that is part of our daily lives to varying degrees, and that is something animal advocates must factor into their strategy.
Using The Slingshot Carefully
Around the time I started working for Faunalytics, Founder Che Green asked me to create a promotional graphic with text that read “Sometimes fighting animal abuse feels like David vs. Goliath.” The accompanying image showed a person with a sling drawn back, aiming at something out of frame. The following line read “We’re the slingshot” — referring to Faunalytics specifically, but data-driven advocacy more broadly as well.
Animal advocates have always been creative and crafty, making an outsized impact with a small amount of resources. For decades before effective altruism entered the lexicon, we got by on our convictions and our moxie, making gains that might seem inconceivable without the resources the movement has today. And since the 2000s and 2010s, research and data have become one of the key slingshots in our David vs. Goliath battle. The hope is that AI tools will become an even better slingshot for us to use — but just like a slingshot, using it recklessly can backfire.
Of course, no tool comes without tradeoffs, and this is not necessarily a perspective that the biggest proponents of AI want to hear. But in an information landscape that is becoming increasingly fragmented, diffuse, and untrustworthy, we feel it is vital to pursue the opposite: trust and reliability are becoming a greater currency for our movement than ever before. While we appreciate the movement’s desire to embrace new technologies, and share in this desire, it is vitally important that we recognize what belongs to us (as a movement) and what doesn’t.
When Facebook began its “pivot to video” in 2015, news companies, in a desperate attempt to stay relevant and in the good graces of the platforms and their algorithms, gutted departments and shifted strategy, only to find later that the viewer numbers were inflated, and that what they burned down in the process could not be easily rebuilt. It’s arguable that the journalism industry has never recovered.
AI can do amazing things, but the underlying companies and infrastructure do not belong to animal advocates, do not necessarily care about our cause, and do not have a vested interest in improving the lives of animals. What does belong to us — our resourcefulness, our relationships, and our critical thinking — are aspects of our movement that warrant our protectiveness and vigilance. We should not dismantle some of our best tactics, resources, and strategies only to face our own “pivot to video” and find it impossible to recover from.
As Faunalytics incorporates more AI into our work in ways that are thoughtful and considered, we remain steadfast in self-awareness, knowing our strengths and what we see as the key values of our organization and our movement. But using AI doesn’t preclude us from having a critical perspective on the tools and their industry — and having a critical perspective doesn’t preclude us from using these tools when it serves us well. Going forward into a future where the proliferation of AI is, in many ways, an inevitability, we hope you’ll continue to join us in this critical work and in these critical conversations.

