AI Usage Policy

Approved: Nov. 29, 2023
Last Updated: Nov. 29, 2023
Scheduled Review: Annually, in September

Since 2000, Faunalytics has built a reputation of trust with animal advocates and the advocacy movement at large. We’ve done so through years of diligent work, data-driven strategies, thorough research, and careful editing — and our efforts have resulted in animal advocates holding us in high esteem for our accuracy and reliability.

Over the past several years, industries around the world have begun to think more seriously about how they use and interact with Artificial Intelligence (AI) tools. Animal advocates are no different, and in recent months we’ve been monitoring the proliferation and promotion of AI tools for use by animal advocates, noting both the positive and negative potentials as we develop an initial policy framework.

As with all of our work, our approach to developing this policy closely tracks the current facts and data around AI tools — how they work, what they do well, and what they do badly. Our approach is also informed by a desire to uphold high organizational standards, and to maintain our reputation as a reliable source of factual information for advocates. 

What follows is a framework for the use of different categories of AI tools at Faunalytics, both internally and externally. We recognize that the landscape of AI tools is shifting rapidly, on an almost daily basis. Therefore, in addition to a straightforward explanation of our policies, below we outline some of our thought processes and principles around these tools. Of course, we will adapt and update the specifics of this policy on an as-needed basis, and it will be reviewed at least annually.

Models

Large Language Models / AI Chatbots / Text Generators

Large Language Models (LLMs) and AI chatbots include proprietary tools such as Google’s Bard and OpenAI’s ChatGPT, as well as open-source alternatives such as Meta’s LLaMA (Large Language Model Meta AI). These tools can generate text in a variety of styles, based on prompts from users, typically in a conversational manner. LLMs can generate a large volume of text on virtually any topic, revise existing text that is provided as a prompt, and in certain instances (using plugins) some chatbots can “browse the Internet” and retrieve materials.

LLMs have (somewhat reductively) been characterized as “advanced auto-complete” tools, not unlike the kinds found in most cell phone messaging apps. While there are similarities, LLMs are significantly more advanced. LLMs and chatbots function by combining extremely large-scale datasets (i.e., giant swaths of text scraped from the Internet) with complex probability calculations to determine the next most likely word in a sentence. Rather than always choosing the single most likely word, however, these models sample from those probabilities, which means the same prompt can generate many different responses. LLMs are highly capable of generating text that is grammatically correct. They are also capable of generating text that, while “confidently” presented, is highly inaccurate. These models also contain certain “guardrails,” internal rules that govern what kind of subject matter they will generate text about and what kind of language they can and cannot use. However, LLM guardrails can be circumvented with relative ease by using creative questions and various “prompt injection” techniques.
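
To make the “next most likely word” idea concrete, the toy sketch below shows how sampling from a probability distribution produces different continuations of the same prompt unless the random seed is fixed. The word probabilities are invented for illustration and are not output from any real language model.

```python
# Illustrative sketch only: a toy "next word" sampler showing why the same
# prompt can yield different outputs. The probabilities below are made up
# for demonstration and do not come from any real language model.
import random

# Hypothetical probability distribution over the next word, given the prompt
# "Animal advocacy is" (values are invented for illustration).
next_word_probs = {
    "important": 0.40,
    "growing": 0.25,
    "effective": 0.20,
    "complicated": 0.15,
}

def sample_next_word(probs, rng):
    """Pick one word at random, weighted by its probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return rng.choices(words, weights=weights, k=1)[0]

prompt = "Animal advocacy is"

# Without a fixed seed, repeated runs can produce different continuations.
rng = random.Random()
print([f"{prompt} {sample_next_word(next_word_probs, rng)}" for _ in range(3)])

# With a fixed seed, the same "random" choices repeat exactly.
seeded = random.Random(42)
print([f"{prompt} {sample_next_word(next_word_probs, seeded)}" for _ in range(3)])
```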

The actual code and functioning of the most popular LLMs are closed and proprietary, and the public has no ability to verify what is happening with them. Open-source models, while more “verifiable” in theory, are still extremely complex and require advanced programming knowledge to examine. In either case, a lot of what LLMs are “doing” is not fully understood. LLMs may exhibit phenomena, malfunctions, and errors in their output for reasons that are unclear, even to the programmers who work on them.

In Faunalytics’ view, LLMs and chatbots have a variety of positive use cases, mostly as internal productivity tools that could help us save time and work more efficiently. With well-composed prompts, LLMs could be used to quickly generate first-draft text for things like email responses, low-stakes internal documents, and social media copy. They can also be used for a variety of other low-stakes tasks such as brainstorming blog post ideas (but not their contents), plotting out travel and accommodations for conferences or events, and other straightforward logistics. Faunalytics permits staff to use LLMs for such tasks. However, all staff are prohibited from sharing or inputting any sensitive information as part of their prompts, since such information may be incorporated into training data and could be exposed to others. Examples of sensitive information specific to Faunalytics will be outlined in an internal policy document for staff.

LLMs and chatbots will not be used to directly generate text for our Research Library summaries or Original Research projects, whether internal or public-facing. It is our view that the current capabilities of LLMs preclude them from being particularly useful for us in such areas, and indeed, their potential for inaccuracy could create more problems and require even closer editing than text written by us directly. If this policy is revised in the future, it will go hand-in-hand with a policy of attribution, so that readers know transparently when they are reading AI-generated content.

Text-based content is the basis of what Faunalytics does. Virtually all of our resources begin and end with text of some kind, most of our communications are in written format, and the vast majority of our public-facing materials are text-based. Nothing gets published on the Faunalytics website until the content has been reviewed by at least two editors. In sum, while we look forward to the ways that LLMs could make some of our work processes more efficient, we do not plan to outsource our most vital work to such tools.

Image / Artwork Generators

AI image and artwork generators include tools such as DALL-E and Stable Diffusion, and likewise range from proprietary tools to open-source models. These tools can generate illustrations, artwork, and photo-realistic images, with text-to-image prompts as well as image-to-image prompts. There are also a growing number of tools that can take existing photos and “extend” them, and video-related tools are becoming more powerful. For the most part, image generation tools have high guardrails and cannot be prompted to make images that are graphic or violent in nature. As with any guardrails, however, there are worrisome workarounds.

Both internally and externally, Faunalytics tends to use images and videos in a limited capacity: generally, our use of photos is limited to header images for study summaries, blog posts, and Original Research, while video clips tend to be used in our Faunalytics Explains videos and some other limited ways.

It is our view that image generation in animal advocacy should be approached with mindfulness and caution: animal advocates have spent many years trying to build trust with the general public, and these efforts could be jeopardized by the reckless use of generated content. Using AI-generated images poses a particular risk, especially if used in an attempt to depict the general living conditions on farms or specific instances of cruelty or suffering. It is not hard to imagine the damage that could be done if animal advocates were found to be using fake or AI-“enhanced” investigation photos, or even fake or AI-“enhanced” photos of animals living peaceful lives. There are other use cases, however, that are less problematic: generating images of “vegan food” or a depiction of an advocate handing out a leaflet are two such examples.

Based on the reputational risks posed by AI image generation, Faunalytics will avoid using image-generation tools to create photorealistic images in our public-facing work. We likewise caution individual advocates and groups about using image generation tools, as they have the potential to seriously erode public trust in our individual organizations and our movement more broadly. There are many groups actively working in the field to gather real-life investigative materials to support animal advocacy and legislative campaigns, and there are also numerous repositories of real-life stock photography that animal advocates can use freely in their materials and campaigns. Simply put, we don’t need to generate images of animal exploitation or suffering using AI, because vast amounts of factual, real-world material already exist, and many investigators are still doing this work and deserve our support. Additionally, Faunalytics will develop a procedure to safeguard against using images that are AI-generated but not clearly labeled as such.

For other use cases — such as AI-generated illustrations, drafts of infographics, concept drawings for internal documents, and more — we encourage individuals and groups to use their best judgment on whether image generation is truly necessary, and to clearly label AI-generated images to maintain transparency and good relations with the general public. Faunalytics is open to the potential of non-photorealistic uses, and will clearly label such images as AI-generated.

Machine Learning

Machine learning (ML) models have been largely absent from recent discussions of AI tools, though they have been around significantly longer than prompt-based text and image generators. ML models take training data (in varying quantities) and “learn” to perform different tasks and solve problems through intensive repetition, producing algorithms that can then be used for automation. Such tools are much more varied and customizable, and they generally require experienced users who can program them and oversee their training, processes, and outputs. Many machine learning models are already used for the benefit of animals in various forms of wildlife conservation. For example, camera traps and ML models are used in combination to help identify, estimate, and track wild animal populations, at a scale and with an accuracy that humans could not hope to achieve in the same amount of time.

ML models are not foolproof. Like all of the AI models above, ML models are limited by their training data, and, depending on the quality of this data, they can range from very useful to totally useless, and any gradation in between. Given biased or limited data, poor instructions, or lack of training, ML models cannot and will not produce good results. With the right data, the right instructions, and the right oversight, however, they can do much more than a human can, and much more quickly.
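
As a rough illustration of how training data shapes what an ML model can do, the sketch below fits a simple classifier on a tiny, made-up dataset. The “camera-trap measurements,” species labels, and use of the scikit-learn library are all assumptions made purely for demonstration, not a description of any Faunalytics project.

```python
# Illustrative sketch only: a model is only as good as its training data.
# The measurements and species labels below are invented for demonstration;
# scikit-learn is assumed to be installed (pip install scikit-learn).
from sklearn.linear_model import LogisticRegression

# Hypothetical camera-trap measurements: [body length (m), shoulder height (m)]
X_train = [
    [0.9, 0.5], [1.0, 0.55], [0.95, 0.52],   # labeled "fox"
    [2.0, 1.2], [2.1, 1.3], [1.9, 1.25],     # labeled "deer"
]
y_train = ["fox", "fox", "fox", "deer", "deer", "deer"]

# "Learning" here means repeatedly adjusting internal weights until the model
# separates the labeled examples as well as it can.
model = LogisticRegression()
model.fit(X_train, y_train)

# Predictions are only as good as the training data: an animal unlike anything
# in the training set (e.g., a small rodent) will still be forced into one of
# the two known labels.
print(model.predict([[1.95, 1.28], [0.2, 0.1]]))
```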

It is our view that ML models could be particularly useful to Faunalytics as an organization, and to the animal advocacy movement more broadly. Internally at Faunalytics, we are considering the various ways that machine learning may be employed to analyze trusted datasets such as those from public sources like the UN Food and Agriculture Organization (FAO), data from our own Original Research, and aggregated data from our Research Library. 

While Faunalytics is not currently engaged in any projects that involve machine learning, there is a strong possibility that we will explore this avenue in the future. Any projects that use machine learning as a technique will be clearly labeled as such. Meanwhile, our Research Library will continue to consider and summarize studies that use machine learning to examine animal issues, as long as those studies are relevant to advocates.

Overarching Principles

The above policies constitute a living document and are subject to revision. Guiding Faunalytics in this and all subsequent AI policies will be three key principles:

  1. Understanding: Faunalytics will continue to learn about and understand how new AI tools function, in order to have a clear sense of their pros and cons. While some of the most common AI tools that currently exist have been made user-friendly to a fault, as professionals we must go further than FAQs and make an effort to understand how they generate their content. With this understanding, we will be better able to evaluate their usefulness and what sort of caution we may need to exercise when using them.
  2. Verification: No AI model currently exists that can produce content exempt from the need for verification and fact-checking. Faunalytics has earned a reputation as a solid source of data about animal advocacy, and we will not incorporate new tools or the content they produce without engaging in our usual best practices to ensure accuracy.
  3. Transparency: Users should know when they are engaging with AI-generated content. As such, any public-facing use of AI tools will be clearly marked as such, while internal usage in research projects will be clearly documented.

Faunalytics remains committed to providing animal advocates with quality, trustworthy data and insights to help the animal protection movement be more effective. We will continue to approach all of our research publications, study summaries, and visual resources with diligence and rigor, as we monitor the development of new tools that may help us serve you better. 

If you have any questions or concerns about Faunalytics’ AI Usage Policy, please contact Content Director Karol Orzechowski: [email protected].

Faunalytics’ AI Usage Policy will be further elaborated upon in a forthcoming blog post.
