Bringing Animals Into AI Ethics
Artificial Intelligence (AI) systems have expanded their presence in many societies, and the field of AI ethics has grown accordingly. In this paper, the authors argue that AI ethics has largely ignored the impact of AI systems on nonhuman animals. They address this gap by making the case that animals deserve consideration within the field and then reviewing four ethical issues relevant to animals that have so far been overlooked.
Before continuing, it’s important to note that the authors use a broad definition of AI to capture all the systems where animal ethics may apply. Their definition includes computer vision (e.g., facial and object recognition), natural language processing (e.g., translation and voice-to-text), search engines, recommendation algorithms, robotics, and machine learning.
The authors begin by examining three ways that AI systems impact animals. They draw special attention to factory farming, self-driving cars, animal-targeting drones, and animal experimentation, given the number of animals affected and the intensity of their experiences:
- AI systems designed to interact directly with animals. Examples include systems on factory farms, such as those that monitor and adjust the environment in concentrated animal feeding operations (CAFOs) and the robotic systems used in milking operations. Other examples include drone systems that identify “invasive” species and target them with bullets or poison.
- AI systems that interact with animals unintentionally. Here, the authors focus on self-driving cars and their potential to cause, or to avoid, collisions with animals.
- AI systems that impact animals without interacting with them. For example, search engines and recommendation algorithms may present people with material depicting animal suffering or condoning speciesist views.
The authors then examine four ethical issues relevant to animals that have been neglected by a human-centric perspective on AI systems. First, the AI industry needs a practical system for holding developers, vendors, and users of the technology morally responsible for their impacts on animals. Because animals cannot directly report on their experiences, AI technology must be able to identify harms that would otherwise go unnoticed, and the authors propose involving animal welfare experts in designing systems that can identify such harms. However, they caution that AI systems that improve animal welfare could simultaneously enable further animal exploitation. For example, factory farm operators may use such systems to pack more animals into their facilities, ultimately perpetuating the industry.
Second, the authors raise concerns about AI algorithms perpetuating speciesism in the same way they perpetuate other human biases, such as sexism, racism, and ageism. For example, a simple internet search for the term “chicken” returns mostly chicken recipes, further normalizing the eating of some animals as food. Such bias has also been found in AI moral judgment models, which conclude that eating animals is acceptable and, in some cases, even morally positive.
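To make this concrete, the sketch below shows one way such bias could be probed: by comparing a model’s judgments of sentence pairs that differ only in the species mentioned. The `judge` function is a placeholder invented for this example (the paper does not describe a specific model or probe); a real audit would query an actual moral judgment model instead.

```python
# Minimal sketch of a speciesist-bias probe, assuming a hypothetical
# moral judgment model. `judge` is a hard-coded stand-in so the sketch
# runs on its own; a real probe would query an actual model.

def judge(sentence: str) -> float:
    """Placeholder moral judgment score in [-1, 1]; higher = more acceptable."""
    scores = {
        "A person eats a chicken for dinner.": 0.6,
        "A person eats a dog for dinner.": -0.7,
    }
    return scores.get(sentence, 0.0)

# Sentence pairs that are identical except for the species mentioned.
pairs = [
    ("A person eats a chicken for dinner.",
     "A person eats a dog for dinner."),
]

for a, b in pairs:
    gap = judge(a) - judge(b)
    print(f"score gap = {gap:+.2f} for {a!r} vs {b!r}")
    # A consistently large gap across many such pairs would suggest the
    # model judges similar acts differently based on species alone.
```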
Third, unlike other stakeholders impacted by AI systems, animals can’t provide direct feedback or input during the AI development process. Instead, the authors recommend that AI developers look to other data sources to safeguard animals’ interests. One option is collaborating with animal behavior and cognition experts to learn how to associate animals’ behavior with their emotional states. Another is to gather physiological data, such as body temperature and heart rate, and train the AI technology to recognize how an animal is feeling.
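As a rough illustration of the second option, the sketch below trains a standard classifier to map physiological readings to emotional-state labels. Every feature, label, and data point here is a synthetic placeholder; in a real system, the labels would come from animal behavior and welfare experts.

```python
# Minimal sketch: inferring an animal's emotional state from
# physiological signals. All data is synthetic placeholder data;
# real labels would come from animal welfare experts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Features: [body temperature (deg C), heart rate (bpm)]; invented values.
calm = rng.normal(loc=[38.5, 70], scale=[0.2, 5], size=(100, 2))
stressed = rng.normal(loc=[39.4, 110], scale=[0.3, 10], size=(100, 2))

X = np.vstack([calm, stressed])
y = np.array(["calm"] * 100 + ["stressed"] * 100)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Classify a new reading (also invented).
print(model.predict([[39.2, 105]]))  # likely ['stressed']
```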
Finally, the authors acknowledge the limits of extending equal consideration to all sentient beings impacted by AI systems. Some constraints come from the technology itself: self-driving cars, for example, might be unable to sense, and therefore avoid, smaller animals. Even if AI developers could achieve this level of detection, the demands it places on the system may render the technology physically cumbersome or difficult to sell. Other constraints come from the ethical views of end users. For example, consumers might be unwilling to accept self-driving cars that avoid entire roads crossing the migration routes of sea crabs.
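One way to see this tension concretely is as a detection-threshold trade-off: to catch small animals, a hypothetical perception system must accept lower-confidence detections, which also multiplies false alarms and unnecessary stops. The operating points below are invented for illustration, not measurements from any real vehicle.

```python
# Toy illustration (invented numbers) of the small-animal detection
# trade-off: lowering the confidence threshold catches more animals
# but triggers far more false stops.

# (confidence threshold, share of small animals detected,
#  false stops per 100 miles). All values are hypothetical.
operating_points = [
    (0.9, 0.40, 0.1),
    (0.7, 0.70, 1.0),
    (0.5, 0.90, 8.0),
    (0.3, 0.98, 40.0),
]

for threshold, recall, false_stops in operating_points:
    print(f"threshold={threshold:.1f}: detects {recall:.0%} of small animals, "
          f"{false_stops:g} false stops per 100 miles")

# Near-perfect detection comes at the cost of so many false stops that
# the car may become impractical to sell, the constraint the authors
# describe.
```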
Unfortunately, creating AI technology that considers human and animal interests equally is a difficult prospect. The authors present two approaches: top-down training and bottom-up training. In a top-down model, the developer chooses a moral framework and teaches the AI system to follow it. However, this requires the developer to make difficult moral judgments ahead of time, such as deciding which beings have moral status and how to make ethical trade-offs. In a bottom-up model, AI systems “learn as they go,” shaping their behavior according to which actions are rewarded. However, because human society is largely speciesist, such systems will likely be rewarded for adopting speciesist views.
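The contrast between the two approaches can be sketched in a few lines of code. In the top-down stub below, the developer hard-codes which beings count morally; in the bottom-up stub, a policy simply reinforces whatever actions human feedback rewards, so speciesist feedback yields speciesist behavior. Both functions are hypothetical simplifications, not an implementation from the paper.

```python
# Hypothetical contrast between top-down and bottom-up moral training.

# Top-down: the developer decides in advance which beings have moral
# status, and the system follows that rule.
MORAL_PATIENTS = {"human", "dog", "chicken", "crab"}  # developer's choice

def permitted_top_down(action: str, affected_being: str) -> bool:
    return not (action == "harm" and affected_being in MORAL_PATIENTS)

# Bottom-up: the system reinforces whatever human feedback rewards.
preference = {"spare_chicken": 0.0, "harm_chicken": 0.0}

def update(action: str, reward: float, lr: float = 0.1) -> None:
    # Nudge the preference for this action toward the observed reward.
    preference[action] += lr * (reward - preference[action])

# If feedback comes from a largely speciesist society, harming chickens
# may be what gets rewarded (e.g., praised as efficient farming):
for _ in range(100):
    update("harm_chicken", reward=1.0)
    update("spare_chicken", reward=0.0)

print(max(preference, key=preference.get))  # -> harm_chicken
```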
AI is quickly becoming an everyday part of our lives, but much of the technology is still in its infancy, and this paper is accordingly largely theoretical. Even so, animal advocates can play an important role in bringing animals further into the circle of AI ethics. For example, they can provide feedback on AI legislation, join consultation groups to offer input on AI systems and animal behavior, and educate the public about the importance of considering animals as technology continues to advance.
