The Overlaps Between Artificial Intelligence & Animal Welfare
Ethics in the twenty-first century faces two immense challenges with the potential either to protect trillions of sentient beings from whole lives of suffering or to entrench them further in those lives. Specifically, factory farming and the broader question of killing non-human animals for human desires (such as food, clothing, and research) loom large, as does the question of what to do if we create a general artificial intelligence (AI). The suffering of animals is documented and immediate, while the possibility that AIs could either experience pain and misery themselves or create it for humans remains a futuristic concern.
Nevertheless, we should neither discount the suffering of animals in the future, such as those in factory farms, nor fail to think preemptively about how to avoid similar suffering for the computers that will drive society. While comparisons have previously been drawn between these two seemingly disconnected dilemmas, little research or thought has sought to determine how taking steps to resolve one could help to resolve the other.
In a recent paper, a researcher at AI Policy Labs argues that we ought to extend the alignment of AI systems with human values to include the values of nonhuman animals, since excluding them would amount to speciesism. The article focuses on two subproblems of AI value alignment, value extraction and value aggregation, and reviews how AI systems could address the challenges of integrating the values of nonhuman animals.
Briefly, the “AI value alignment problem” the author addresses is the challenge of ensuring that AI systems act in ways aligned with human goals and values. A classic example describes an AI tasked with manufacturing paper clips. In single-mindedly pursuing this goal, it adopts other, seemingly unrelated goals as part of its mission: rather than relying on traditional iron-ore supply chains, the AI might attempt to take over a country with rich iron deposits, enslave the local population, and force everyone to mine as much iron as possible.
While extreme, this case makes an important point: unless we develop AI in a way that it can effectively understand and reasonably do what we ask of it, we may end up with outcomes we didn’t (and wouldn’t have) wished for. As mentioned, the two subproblems of value alignment the author addresses with respect to the interests of nonhuman animals are value extraction and value aggregation.
Taking each of these in turn, value extraction boils down to determining what it means to behave well or badly. The author presents six ways of understanding good or bad behavior for an AI:
- Instructions: the AI does what it’s asked.
- Expressed intentions: the AI does what was intended.
- Revealed preferences: the AI does what people truly prefer.
- Informed preferences or desires: the AI does what people with good information would want.
- Interest or wellbeing: the AI does what is best for people.
- Values: the AI does what it morally should.
For simplicity, the author discusses parts two, three, and five with respect to both the challenges and the promise of incorporating nonhuman animals’ values into AI systems. Part two would require that the AI do what nonhuman animals would intend it to do. Unless the AI were able to communicate with nonhuman animals, however, it isn’t clear how this would be possible. The issue might be more tractable under part three: research on behavioral indicators of animal welfare tells us what animals might prefer (for example, the ability to forage and roam). This is complicated by the fact that what individual animals, or even whole species, prefer differs from what the collective of all nonhuman animals would prefer. Finally, part five would require that the AI do what is in nonhuman animals’ best interest. Again, this is hard to measure. Certainly, the rich, complex emotional lives of nonhuman animals give us a starting point, but no clear roadmap from there. Ultimately, it seems the safest assumption is to take the self-preservation of all species as a given and to use how captive animals might behave in the wild as a reference.
While there are no immediate answers here, the author urges that the ethical interests of nonhuman animals be considered early and often, rather than after the fact.
Moving on to value aggregation, the issue is that the values and interests of humans and nonhuman animals often conflict. For instance, consumption of nonhuman animals (for food, clothing, cosmetics, and so on) is common and unlikely to disappear fully before we develop AI. Habitat destruction, rare cases of nonhuman animals killing humans, and the complex interplay of natural food webs among nonhuman animals also factor in.
The author admits that it is well beyond the scope of their article to flesh out a robust, empirical understanding of how to optimize these dynamics. Rather, they argue that, given the chance and appropriate inputs, AI may be valuable as a sort of advisor, provided that it does in fact take nonhuman animal values into account. Relying on the idea that nonhuman animals, like us, might act one way rather than another if they had “good information,” the author argues that an AI system, owing to its intelligence, could plausibly infer the long-term values of nonhuman animals by analyzing their behavioral data.
In summary, the author makes a strong case that we urgently need to consider the interests of nonhuman animals as we develop AI. Animal welfare advocates would be right to point out that the article is thin on immediate solutions and generally discusses rather grand ideas. The important point, however, is that while this may be a relatively distant issue, it is one advocates need to stay aware of and begin strategizing interventions around from a long-termist perspective. This does not mean we should direct attention away from the over one trillion animals killed every year for human interests; rather, it means we ought to double down on efforts to understand animal welfare so as to protect the orders of magnitude more nonhuman animals that will exist in the future. Doing so will allow us to use technology to better protect and advocate for the wellbeing of nonhuman animals, and perhaps even to eliminate their consumption altogether.
https://doi.org/10.3390/philosophies6020031
