Automated Pain Detection In Farmed Animals: Promises And Problems
Artificial intelligence (AI) has moved from speculation to reality, rapidly transforming industries from healthcare to energy. Yet as we prepare for its potential to revolutionize human life, current discussions of AI ethics overwhelmingly focus on humans and ignore the consequences for animals. This oversight risks widening the distance between humans and animals, diminishing empathy and reinforcing the belief that animals are resources rather than beings of moral significance.
Automated pain detection (APD) is a stark example of AI’s potential impact on animal lives. Using tools like facial recognition and wearable devices, APD identifies signs of pain in animals by analyzing measurable changes in physiology and behavior.
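To make this concrete, the kind of indicator-based scoring such a system might perform can be sketched roughly as follows. Every field name and threshold here is hypothetical, chosen only for illustration; this is a minimal sketch of the general approach, not any real APD implementation:

```python
# Illustrative sketch only: combines a few measurable indicators of the
# sort APD systems monitor (heart rate, facial "grimace" scores, activity)
# into a single number. All names and thresholds are hypothetical.

def pain_score(reading: dict) -> float:
    """Combine measurable indicators into a rough 0-1 pain score.

    `reading` is assumed to hold a baseline-normalized heart rate ratio,
    a grimace-scale score in [0, 2], and an activity level in [0, 1].
    """
    score = 0.0
    if reading["heart_rate_ratio"] > 1.2:    # elevated vs. resting baseline
        score += 0.4
    score += 0.3 * (reading["grimace"] / 2)  # facial action units
    if reading["activity"] < 0.2:            # lethargy / reduced movement
        score += 0.3
    return round(score, 2)

# Example: elevated heart rate, moderate grimace, low activity
print(pain_score({"heart_rate_ratio": 1.3, "grimace": 1, "activity": 0.1}))
```

Even this toy version shows the reduction the authors worry about: whatever the animal's emotional or social state, the system only sees the handful of signals it was built to measure.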
While touted as a way to improve welfare, APD raises serious concerns. In this paper, the authors draw on existing research to critically examine APD’s use in agriculture. They explore its ethical and practical challenges, questioning whether APD can genuinely advance animal welfare or merely optimize ongoing exploitation.
The authors argue that neglecting sentient animals as stakeholders leads to incomplete assessments of AI innovations and their impacts. Some suggest that respecting animals’ welfare requires removing them entirely from agriculture, rendering discussions of APD irrelevant. However, the authors contend that because APD is already used in farming, its ethical challenges must be evaluated in its current context.
Can We Accurately Measure Pain?
Central to the paper is the concept of the normativity of pain — the idea that pain is a deeply subjective experience. On its face, APD sounds promising as a means to better understand and reduce animal suffering. However, the authors question whether pain, because it is inherently tied to individual experience, can ever be fully captured by an objective measurement.
To illustrate this, they reference philosopher Thomas Nagel’s 1974 essay What Is It Like to Be a Bat?. Consciousness, Nagel argued, is innately an individual experience, making it nearly impossible to objectively understand another being’s mental state. This poses a fundamental problem for APD: if humans struggle to understand each other’s pain, how can we code machines to reliably detect pain in animals?
The authors emphasize that pain is more than a mechanical response. It’s a complex interplay of emotional, social, and environmental factors that can’t be fully captured by measurable data. Oversimplifying these experiences misses key components of well-being and undermines APD’s potential to truly improve animal welfare.
Ethical Challenges Of Automated Pain Detection
The authors identify two types of ethical challenges with APD: extrinsic issues (external and solvable) and intrinsic issues (fundamental to the technology itself).
Extrinsic issues include:
- Industry ties: APD is often developed within the animal agriculture industry, where it’s marketed as a welfare tool. In countries with little transparency in farming practices, this in-house development and deployment invites well-founded skepticism. The authors question whether APD’s primary aim is to reduce suffering or to maximize productivity under the guise of care.
- Data quality: Reliable APD systems depend on high-quality, diverse datasets. Yet current datasets frequently fail to account for variations in pain expression across species, life stages, and individuals.
Intrinsic issues include:
- Interspecies comparison: Biological and behavioral differences between species complicate detection. Some animals, for example, have evolved to mask their pain to avoid being targeted as prey, making it difficult for algorithms to identify signs of suffering.
- Reductionism: APD simplifies welfare to quantifiable physical indicators, like heart rate or facial expressions, missing the social and emotional dimensions of pain.
- Bias in science: Scientific studies, often perceived as neutral, can reflect cultural and ideological biases that sustain outdated views of animals. In the context of APD, for example, researchers’ view of animals can influence the questions they ask, the methods they use, and the conclusions they draw.
The authors stop short of rejecting APD outright, concluding that while it holds potential for improving animal welfare, it must be rigorously evaluated for its validity and ethical implications. This includes addressing biases in data, species-specific differences, and the broader consequences of its use. While APD could help monitor and reduce pain, it risks further automating an already exploitative system and worsening the disconnect between humans and the animals they consume.
APD, like other AI technologies, must not become a distraction from the urgent need to reform or dismantle unsustainable agricultural practices. It’s not a replacement for addressing deeper systemic problems in modern agriculture, including its ethical and environmental failings.
As this technology grows more widespread, animal advocates must push for transparency in its development and insist on guidelines that ensure productivity never overshadows animal welfare. The authors stress that true progress hinges on AI benefiting all sentient beings — not just humans. By asking critical questions and demanding better outcomes, advocates can help create a future where compassion and animal welfare can advance alongside technology.
https://doi.org/10.1163/9789004715509_043