Last Updated: August 2025
Since 2000, Faunalytics has built a reputation of trust with animal advocates and the advocacy movement at large. We’ve done so through years of diligent work, data-driven strategies, thorough research, and careful editing — and our efforts have resulted in animal advocates holding us in high esteem for our accuracy and reliability.
Over the past several years, industries around the world have begun to think more seriously about how they use and interact with Artificial Intelligence (AI) tools. Animal advocates are no different, and since late 2022, Faunalytics has been monitoring the proliferation and promotion of AI tools for use by animal advocates, noting both the positive and negative potentials as we develop and update our own policy framework.
As with all of our work, our approach to this policy closely tracks the current understanding and data around AI tools — how they work, what they do well, and what they do badly. Our approach is also informed by a desire to uphold high organizational standards, and to maintain our reputation as a reliable source of factual information for advocates.
What follows is Faunalytics’ framework for the use of different categories of AI tools, both internally and externally. We recognize that the landscape of AI tools is evolving rapidly — therefore, in addition to a straightforward explanation of our policies, below we outline some of our thought processes and principles around these tools. We will continue to adapt and update the specifics of this policy on an as-needed basis, and it will be reviewed at least annually.
Policies
Large Language Models / Chatbots / Text Generators
Context
Large Language Models (LLMs) and AI chatbots include proprietary tools such as Google’s Gemini, OpenAI’s ChatGPT, and Anthropic’s Claude, as well as open-source alternatives such as Meta’s LLaMA. These tools can generate text in a variety of styles, based on prompts by users, typically in a conversational manner. LLMs can generate a large volume of text on virtually any topic, revise already existing text that is input as a prompt, and connect to the live internet to retrieve information, rather than relying strictly on training data.
LLMs have (somewhat reductively) been characterized as “advanced auto-complete” tools, not unlike the kinds found in most cell phone messaging apps. While there are similarities, LLMs are significantly more advanced. LLMs and chatbots function by using a combination of extremely large-scale datasets (i.e., giant swaths of text scraped from the Internet) and complex probability calculations to determine the next most likely word in a sentence. However, the next word is sampled from those probabilities with an element of controlled randomness, which means that the same prompt can generate many different responses.
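To make the idea concrete, here is a minimal sketch in Python of weighted next-word sampling. The probability table is invented for illustration; real LLMs compute such distributions with neural networks over vocabularies of tens of thousands of tokens, but the sampling step works on the same principle.

```python
import random

# Toy illustration (not a real LLM): a model assigns probabilities to
# candidate next words, then samples one of them at random.
next_word_probs = {
    "dog": 0.5,  # hypothetical probabilities for some prompt
    "fox": 0.3,
    "cat": 0.2,
}

def sample_next_word(probs, seed=None):
    """Pick the next word by weighted random sampling."""
    rng = random.Random(seed)
    words = list(probs)
    weights = [probs[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# The same "prompt" (probability table) can yield different words
# on different runs, which is why LLM output is not reproducible
# unless the randomness is pinned down:
print({sample_next_word(next_word_probs, seed=s) for s in range(10)})
```

Because the sampling is probabilistic, asking an LLM the same question twice can produce noticeably different answers, which is one reason their output requires careful verification.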
LLMs are highly capable of generating text that is grammatically correct. They are also capable of generating text that, while “confidently” presented, is highly inaccurate. This is commonly referred to as a “hallucination” on the part of the LLM. Commercial models also contain certain “guardrails”: internal rules that govern what subject matter they will avoid and what kind of language they can and cannot use. LLM guardrails can be circumvented, however, by using creative questions and various “prompt injection” techniques.
The actual code and functioning of the most popular LLMs are closed and proprietary, and the public has no ability to verify what is happening with them. Open-source models, while more “verifiable” in theory, are still extremely complex and require advanced programming knowledge to examine. In either case, a lot of what LLMs are “doing” is not fully understood. LLMs may exhibit phenomena, malfunctions, and errors in their output for reasons that are unclear, even to the programmers who work on them.
In Faunalytics’ view, LLMs and chatbots have a variety of positive use cases, mostly as internal productivity tools that could help us save time and work more efficiently. With well-composed prompts, LLMs can be used to quickly generate first-draft text for things like email responses, first drafts of low-stakes internal documents, and social media copy. They can also be used for a variety of other low-stakes tasks such as brainstorming blog post ideas (but not directly generating their contents), plotting out travel and accommodations for conferences or events, and other straightforward logistics. Faunalytics permits staff to use LLMs for such tasks. However, all staff are prohibited from sharing or inputting any sensitive information as part of their prompts. Examples of sensitive information specific to Faunalytics are outlined in an internal policy document for staff.
Faunalytics regularly experiments with using LLMs for summarizing research, largely for the purposes of benchmarking and understanding their capabilities. LLMs and chatbots are not widely used to directly generate text for our Research Library summaries or Original Research projects, whether internal or public-facing. In our testing, the current capabilities of LLMs preclude them from being widely useful for us in such areas; indeed, their potential for inaccuracy requires even closer editing than text written directly by our team. Faunalytics is currently using LLMs to provide first-pass summaries of very long reports (50+ pages) that are not appropriate to assign to volunteers, as well as to summarize studies that contain graphic descriptions of animal cruelty or animal industries. In both of these cases, AI-generated summaries are closely reviewed and checked in our editing process. Summaries whose first draft was generated by AI are clearly marked as such.
Text-based content is the basis of what Faunalytics does. Virtually all of our resources begin and end with text of some kind, most of our communications are in written format, and the vast majority of our public-facing materials are text-based. As such, Faunalytics takes our editing and verification process extremely seriously, and will continue to do so as we evaluate LLMs’ potential to enhance our Research Library.
Image / Artwork Generators
Context
AI image and artwork generators include tools such as Midjourney, DALL-E, and Stable Diffusion, and many LLMs now have some kind of image-generating ability presented alongside their text outputs. As with LLMs, these likewise range from proprietary tools to open-source models. They can generate illustrations, artwork, and photo-realistic images, from text-to-image prompts as well as image-to-image prompts. There are also a growing number of tools (including built-in features in popular products like Photoshop) that can take existing photos and “extend” or “clean” them, and video-generating tools are becoming more powerful.
For the most part, image generation tools have strong guardrails and cannot be prompted to make images that are graphic or violent in nature. As with any guardrails, however, there are worrisome workarounds.
It is our view that image generation in animal advocacy should be approached with mindfulness and caution: animal advocates have spent many years trying to build trust with the general public, and these efforts could be jeopardized by the reckless use of generated content. Using AI-generated images poses a particularly significant risk, especially if used in an attempt to depict the general living conditions on farms or specific instances of cruelty or suffering. It is not hard to imagine the damage that could be done if animal advocates were found to be using fake / AI-”enhanced” investigation photos, or even fake / AI-”enhanced” photos of animals living peaceful lives. There are other use cases, however, which are less problematic: generating images of “vegan food” or a depiction of an advocate handing out a leaflet are two such examples.
Both internally and externally, Faunalytics tends to use images and videos in a limited capacity: generally, our use of photos is limited to header images for study summaries, blog posts, and Original Research, while video clips tend to be used in our Faunalytics Explains videos and some other limited ways.
Based on the reputational risks posed by AI image generation, Faunalytics will avoid the use of image-generation tools for photorealistic images in our public-facing work. We likewise caution individual advocates and groups about using image generation tools, as they have the potential to seriously erode public trust in our organizations, and our movement more broadly. There are many groups actively working in the field to gather real-life investigative materials to support animal advocacy and legislative campaigns. There are also a variety of repositories of real-life stock photography that animal advocates can use freely in their materials and campaigns. Simply put, we don’t need to generate images of animal exploitation or suffering using AI because vast amounts of factual, real-world material already exists, and many investigators are still doing this work — and deserve our support. Additionally, Faunalytics will create a safeguard procedure to avoid using images that are AI-generated but not clearly labeled as such.
For other use cases — such as AI-generated illustrations, drafts of infographics, concept drawings for internal documents, and more — we encourage individuals and groups to use their best judgment, and to clearly label AI-generated images to maintain transparency and good relations with the general public. Faunalytics is open to the potential of non-photorealistic uses, and will clearly label such images as AI-generated.
Machine Learning
Context
Machine learning (ML) models have largely been excluded from recent discussions of AI tools, though they have been around significantly longer than prompt-based text and image generators. ML models take training data (in varying quantities) and “learn” to perform different tasks and solve problems through intensive repetition, which then generates solution algorithms used for automation. Such tools are much more varied and customizable, and generally require experienced users who can program them and oversee their training, processes, and outputs.
Many machine learning models are already used for the benefit of animals in various forms of wildlife conservation — for example, camera traps and ML models are used in combination to help identify, estimate, and track wild animal populations, at a scale and with an accuracy that humans could not hope to achieve in the same amount of time. Likewise, ML is widely used in other “AI” applications, such as audio editing or transcription tools, and more.
ML models are not foolproof. Like all of the AI models above, they are limited by their training data, and, depending on the quality of that data, they can range from very useful to totally useless, or anywhere in between. Given biased or limited data, poor instructions, or a lack of training feedback, ML models cannot and will not produce good results. With the right data, the right instructions, and the right oversight, however, they can do much more than a human can, and much more quickly.
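A minimal sketch can illustrate the basic pattern described above: a model “learns” from labeled training data and then classifies new examples. This toy nearest-centroid classifier is not any specific conservation model, and the measurements below are invented for illustration; real ML systems use far richer data and far more sophisticated algorithms, but the dependence on training data quality is the same.

```python
# Toy supervised learning: "learn" the average (centroid) of each
# labeled class from training data, then classify new points by
# which centroid they are closest to.

def train(examples):
    """Compute the mean (centroid) of each labeled class."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the label of the closest learned centroid."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2
                             + (centroids[lbl][1] - py) ** 2)

# Hypothetical measurements (e.g., body length, weight) for two species:
training_data = [((1.0, 1.2), "A"), ((0.8, 1.0), "A"),
                 ((3.0, 3.1), "B"), ((3.2, 2.9), "B")]
model = train(training_data)
print(predict(model, (0.9, 1.1)))  # prints "A"
```

If the training data were biased or mislabeled, the learned centroids — and therefore every prediction — would be skewed accordingly, which is why data quality and human oversight matter so much.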
It is our view that ML models could be particularly useful to Faunalytics as an organization, and to the animal advocacy movement more broadly. Internally at Faunalytics, we are considering various ways that machine learning may be employed to analyze trusted datasets such as those from public sources like the UN Food and Agriculture Organization (FAO), data from our own Original Research, and aggregated data from our Research Library.
While Faunalytics is not currently engaged in any projects that involve machine learning, there is a strong possibility that we will explore this avenue in the future. Any projects that use machine learning as a technique will be clearly labeled as such. Meanwhile, our Research Library will continue to consider and summarize studies that use machine learning to examine animal issues, as long as those studies are relevant to advocates.
Other Considerations
AI Bots / Crawlers
Faunalytics’ mission is to provide reliable and relevant data to animal advocates. We do so through a variety of avenues: our website, social media platforms, newsletters, and videos, as well as in-person talks, webinars, and more.
In the current context, we recognize that some advocates are using AI tools such as LLMs to conduct research and find information that they may have previously retrieved by visiting a website, reading a journal article, or attending a webinar. Faunalytics is pleased to see that our materials often rank highly in LLM responses to questions about animal advocacy, and we aim to continue to make our work available for such uses.
With generative engine optimization (GEO) in mind, Faunalytics has made the intentional decision not to block AI bots from indexing / “crawling” our website, and we hope that they continue to do so — we now consider AI, and LLMs in particular, to be a new avenue through which our reliable and relevant data can be disseminated to animal advocates.
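For context, sites that do wish to opt out of AI crawling typically do so with directives in their robots.txt file, naming the user agents of common AI crawlers. The fragment below is an illustrative example of what opting out looks like, not a configuration Faunalytics uses:

```
# Example robots.txt directives a site could use to opt out of
# some well-known AI crawlers (Faunalytics does NOT use these):
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

By omitting such directives, a site signals that these crawlers remain welcome, which is the choice Faunalytics has made.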
Overarching Principles
The above policies are a living document and subject to revision. Guiding Faunalytics in this and all subsequent AI policy updates will be three key principles:
- Understanding: Faunalytics will continue to learn about and understand how new AI tools function, in order to have a clear sense of their pros and cons. While some of the most common AI tools that currently exist have been made user-friendly to a fault, as professionals we must go further than FAQs and make an effort to understand how they generate their content. With this understanding, we will be able to better evaluate their usefulness, and what sort of caution we may need to take in their use.
- Verification: No AI model currently in existence can produce content that is exempt from the need for verification and fact-checking. Faunalytics has earned a reputation as a solid source of data about animal advocacy, and we will not incorporate new tools or the content they produce without engaging in our usual best practices to ensure accuracy.
- Transparency: Users should know when they are engaging with AI-generated content. As such, any public-facing use of AI tools will be clearly marked, while internal use in research projects will be clearly documented.
Faunalytics remains committed to providing animal advocates with quality, trustworthy data and insights to help the animal protection movement be more effective. We will continue to approach all of our research publications, study summaries, and visual resources with diligence and rigor, as we monitor the development of new tools that may help us serve you better.
Read more about Faunalytics’ perspective on AI as a research and summarization tool, and the broader landscape of AI use in the animal advocacy movement.
If you have any questions or concerns about Faunalytics’ AI Usage Policy, please contact Resource Director karol orzechowski: [email protected].