Biases In AI Might Hurt Farmed Animals
Artificial intelligence (AI) is having an increasingly large effect on human culture. We see this most on the Internet, where AI affects what shows up in our search results and our social media feeds. Here, AI is able to use our online behavior and language to predict what we might like to see.
Researchers in “AI ethics” think about how AI might learn unethical biases through its exposure to human input. The idea here is that AI learns from whatever information is put into the system. Therefore, if the language or behavior people produce is racist or sexist, AI may learn to treat those patterns as acceptable, or even useful, and reproduce them itself.
So far, ethical work has focused on biases that might affect the treatment of humans. In this paper, researchers argue that we should also think about AI’s learning of “speciesism” — the idea that animals deserve different treatment purely because of the species they belong to. It’s speciesist attitudes in humans that allow us to think of dogs as loved friends and family members, but of pigs as products who can be slaughtered for food.
One common application of AI is image recognition. Through repeated exposure to images, AI can learn how different pictures fall into different categories. This helps it show us pictures of “dogs” or “cats” on demand. The problem is that recognition algorithms can only be as good as the categories they are given and the data they are trained on.
The researchers note that standard image datasets tend to show “chickens” roaming about in fields, despite the fact that most chickens are housed in cramped factory farms. They created a new dataset with four categories: free-range hens, factory-farmed hens, free-range pigs, and factory-farmed pigs. Sure enough, the image classifier was worse at identifying pictures of factory-farmed animals than it was at identifying free-range animals. This is likely a result of a lack of transparency from farms, meaning AI has fewer images of factory-farmed animals to learn from. This will affect the pictures people see, and it may make it easier to ignore the suffering that farmed animals endure.
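To make the evaluation concrete, here is a minimal sketch of a per-category accuracy check, the kind of comparison that would reveal this gap. The prediction records below are invented purely for illustration and are not the researchers’ data.

```python
from collections import defaultdict

# Hypothetical evaluation records: (true_category, predicted_category) pairs
# of the sort a classifier might produce on the four categories described above.
predictions = [
    ("free_range_hen", "free_range_hen"),
    ("free_range_hen", "free_range_hen"),
    ("factory_farmed_hen", "free_range_hen"),    # misclassified
    ("factory_farmed_hen", "factory_farmed_hen"),
    ("free_range_pig", "free_range_pig"),
    ("factory_farmed_pig", "free_range_pig"),    # misclassified
]

def per_class_accuracy(records):
    """Accuracy broken down by true category, to expose uneven performance."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for true_label, predicted_label in records:
        total[true_label] += 1
        if predicted_label == true_label:
            correct[true_label] += 1
    return {label: correct[label] / total[label] for label in total}

for label, accuracy in sorted(per_class_accuracy(predictions).items()):
    print(f"{label:>20}: {accuracy:.0%}")
```

A gap like the one this toy example produces, with factory-farmed categories scoring lower than free-range ones, is the pattern the researchers report.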
AI is also applied to language models, which use data from social media, the news, and other websites to learn associations between words, as well as which words are most likely to follow one another in a sentence. Because this text is written by humans, the data are likely to contain all sorts of human biases, including speciesism.
The researchers took words for farmed animals like “cow” and “pig” and words for companion animals like “dog” and “cat.” They looked at how the AI systems associated these words with positive and negative word pairs like “cute/ugly” and “love/hate.” As expected, farmed animals were mostly associated with negative words, and companion animals with positive words. Using other methods, they found that these models also assigned farmed animals adjectives suggesting lower mental capabilities. Another model produced sentences that generally suggested farmed animals’ only value is in the products we take from them. Speciesism in language models might affect the text we see online in ways that reinforce stereotypes or spread misinformation about animals.
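One common way to measure this kind of association is to compare word embeddings, asking whether an animal word sits closer to positive or negative attribute words. The sketch below uses made-up toy vectors purely to show the arithmetic; real studies use embeddings taken from trained language models.

```python
import numpy as np

# Toy word vectors standing in for embeddings from a real model;
# the numbers are invented solely to illustrate the measurement.
embeddings = {
    "cow":  np.array([0.9, 0.1, 0.2]),
    "pig":  np.array([0.8, 0.2, 0.1]),
    "dog":  np.array([0.1, 0.9, 0.8]),
    "cat":  np.array([0.2, 0.8, 0.9]),
    "love": np.array([0.1, 0.9, 0.9]),
    "cute": np.array([0.2, 0.9, 0.8]),
    "hate": np.array([0.9, 0.1, 0.1]),
    "ugly": np.array([0.8, 0.2, 0.2]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, positive, negative):
    """Mean similarity to positive attributes minus mean similarity to negative ones."""
    pos = np.mean([cosine(embeddings[word], embeddings[p]) for p in positive])
    neg = np.mean([cosine(embeddings[word], embeddings[n]) for n in negative])
    return pos - neg

for animal in ["cow", "pig", "dog", "cat"]:
    score = association(animal, positive=["love", "cute"], negative=["hate", "ugly"])
    print(f"{animal}: {score:+.2f}")   # negative scores indicate negative associations
```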
Finally, AI is often used in recommender systems. These are tools online that use consumers’ behavior to personalize the information they see. The AI may learn from the comments people leave, or their search history. This can cause a feedback loop, where people see material that supports their existing views and attitudes.
As with other forms of AI, this might contribute to speciesism. For example, search engines might be more likely to direct people searching for “animal charities” to charities helping companion animals, rather than those helping farmed animals. Personalized advertising on social media might also reinforce the purchasing of animal products, whether in fashion or food. Unfortunately, it’s difficult to examine exactly how these algorithms work, as they tend to be closely guarded corporate secrets.
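Because the real systems are opaque, the feedback loop itself is easiest to show with a toy simulation. The sketch below assumes just two content categories and a simple rule of “recommend the most similar item, then nudge the profile toward it”; the starting profile and update rate are illustrative assumptions.

```python
import numpy as np

# Two hypothetical content categories a recommender might choose between.
items = {
    "companion_animal_charity": np.array([1.0, 0.0]),
    "farmed_animal_charity":    np.array([0.0, 1.0]),
}

user_profile = np.array([0.7, 0.3])   # assumed slight lean toward companion-animal content

for step in range(5):
    # Recommend the item most similar to the current profile.
    recommended = max(items, key=lambda name: float(items[name] @ user_profile))
    # Assume the user engages with what is shown, nudging the profile toward it.
    user_profile = 0.9 * user_profile + 0.1 * items[recommended]
    print(step, recommended, np.round(user_profile, 2))
```

Even with a small initial lean, the profile drifts further toward the content it already favors, which is the reinforcement pattern described above.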
There are methods we can use to reduce biases in AI. For instance, steps have been taken to reduce gender bias in language models and to make sure image recognition does not reinforce racist classifications. No equivalent steps have been taken for speciesism, however.
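One technique used against gender bias, counterfactual data augmentation, could in principle be adapted: training text is duplicated with terms swapped so the model sees farmed and companion animals in the same contexts. The term pairs and example sentence below are assumptions for illustration, not something the paper implements.

```python
import re

# Swap farmed-animal and companion-animal terms, by analogy with the
# pronoun-swapping used to reduce gender bias. Pairs are illustrative only.
SWAPS = {"pig": "dog", "dog": "pig", "cow": "cat", "cat": "cow"}

def swap_species(text):
    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: SWAPS[m.group(0).lower()], text)

original = "The dog curled up on the sofa while the pig waited in the pen."
print(swap_species(original))
# -> "The pig curled up on the sofa while the dog waited in the pen."
```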
Of course, the ideas in this article depend on the belief that farmed animals are worthy of moral consideration. Many animal advocates and researchers are confident this is the case, based on evidence that pigs, chickens, cows, and fishes have complex capabilities and may experience suffering in much the same way as dogs, or even humans. If so, supporting violence against them based on their species alone should not be acceptable.
However, this belief is also likely to be an obstacle in AI ethics. Many people are not aware of speciesism, or don’t believe it’s worth fighting, and researchers who share those views are unlikely to take speciesism seriously in their work. Animal advocates can play a role here by continuing to stress the importance of tackling speciesism, particularly in conversations about AI biases. In doing so, we may be able to convey the moral importance of making sure that revolutions in AI are not contributing to this overlooked system of oppression.
