
AI Outperforms Humans in Scanning Animal Faces For Stress And Pain, Potentially Improving Animal Welfare
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
AI is advancing the ability to interpret animal facial expressions, offering new tools for monitoring welfare. AI systems can detect signs of stress, pain, and other emotional states in various species, potentially improving care standards, as first reported by Science.
In a Rush? Here are the Quick Facts!
- AI systems monitor animal facial expressions to detect stress, pain, and emotions.
- Intellipig, developed by UWE and SRUC, analyzes pigs’ faces for signs of distress.
- AI could help smart farms ensure animals live stress-free, happy lives with continuous monitoring.
One such system, Intellipig, developed by researchers at the University of the West of England Bristol (UWE) and Scotland’s Rural College (SRUC), is undergoing testing on U.K. farms.
Cameras capture pigs’ faces as they approach feeding stalls each morning. In under a second, AI identifies individual pigs and assesses their expressions for indications of distress or discomfort, notifying farmers when intervention is needed.
“These tools could usher in a new era of caring for animals that gives higher priority to their health, welfare, and protection,” says Melvyn Smith, a machine vision engineer at UWE, as reported by Science.
The system relies on deep learning, an AI technique that enables pattern recognition. It has achieved a 97% accuracy rate in identifying pigs and is effective at detecting stress through facial features alone.
The paper explains that AI training begins with “landmarking,” where researchers mark key facial points—such as the edges of eyes or nostrils—on thousands of images. These points form a facial map, which AI learns to recognize even in partially obscured faces.
AI then analyzes distances between landmarks to identify expressions. For example, a cat in pain may widen its muzzle, increasing the distance between mouth edges. By comparing these changes with established grimace scales, AI can detect pain with high accuracy.
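The distance comparison described above can be sketched in a few lines. This is a toy illustration only: the landmark names, pixel coordinates, and the 10% threshold are hypothetical and not taken from any of the systems mentioned in the article; a real pipeline would get coordinates from a trained landmark-detection model and score against a validated grimace scale.

```python
import math

# Hypothetical (x, y) pixel coordinates for two mouth-edge landmarks,
# at baseline and in a later frame of the same animal.
baseline = {"mouth_left": (120, 200), "mouth_right": (180, 200)}
current = {"mouth_left": (110, 202), "mouth_right": (192, 201)}

def dist(points, a, b):
    """Euclidean distance between two named landmarks."""
    (x1, y1), (x2, y2) = points[a], points[b]
    return math.hypot(x2 - x1, y2 - y1)

base_width = dist(baseline, "mouth_left", "mouth_right")
curr_width = dist(current, "mouth_left", "mouth_right")
change = (curr_width - base_width) / base_width  # relative widening

# A widened muzzle (greater mouth-edge distance) can score toward "pain"
# on a grimace scale; the 10% threshold here is purely illustrative.
if change > 0.10:
    print(f"muzzle widened {change:.0%} -- flag for grimace-scale review")
```

In practice a classifier would combine many such landmark distances rather than thresholding one, but the core idea — comparing geometric changes against a grimace scale — is the same.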
In 2023, Anna Zamansky, a computer scientist at the University of Haifa, and her team achieved 77% accuracy in detecting pain in cats. Similarly, Peter Robinson at the University of Cambridge developed an AI tool that identified sheep suffering from footrot or mastitis within a herd.
However, Science notes that AI applications face limitations, including insufficient high-quality training data. “There aren’t that many pictures of dogs and cats and sheep on the internet,” Robinson notes, particularly those with clear emotional indicators. AI can also misinterpret expressions by focusing on irrelevant features.
To address concerns about AI’s “black box” nature, Zamansky’s team uses tools like GradCAM to visualize which facial areas AI prioritizes. Their findings suggest the eye region is most informative across species, as noted by Science.
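Grad-CAM, the technique the article refers to, reduces to a simple computation: average a target class's gradients over each convolutional activation map to get channel weights, then take a ReLU of the weighted sum of those maps to produce a heatmap of the regions the network relied on. A minimal NumPy sketch of that arithmetic, with toy arrays standing in for a real network's activations and gradients:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Toy Grad-CAM over one image.

    activations, gradients: arrays of shape (K, H, W) from a conv
    layer, for a single image and a single target class.
    """
    # Channel weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))  # shape (K,)
    # Weighted sum of activation maps, then ReLU to keep only
    # regions that push the class score up.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)
    # Normalize to [0, 1] for display as a heatmap.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

Overlaying such a heatmap on the input image is what lets researchers check, as Zamansky's team did, whether the model is attending to meaningful regions like the eyes rather than irrelevant background features.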
Researchers are now pushing AI to interpret more nuanced emotions, such as happiness, frustration, or fear. Brittany Florkiewicz, an evolutionary psychologist at Lyon College, has cataloged 276 distinct facial expressions in cats, as previously reported by Science.
Collaborating with Zamansky’s team, she is using AI to analyze these expressions, revealing subtle mimicry among cats that often signals bonding or playfulness.
Zamansky’s team has also trained AI to differentiate between “happy” and “frustrated” dogs and horses. In one experiment, AI correctly identified a dog’s emotional state 89% of the time. While success rates for more complex emotions, like disappointment, are lower, they still outperform chance.
More broadly, labs and pet shelters could leverage AI to track pain and emotional states in animals, says Florkiewicz, as reported by Science. Meanwhile, “smart farms”—such as those being tested in the English countryside—aim to deliver personalized care to animals through continuous monitoring.
Ultimately, Smith explains to Science that AI systems could assist farmers in ensuring pigs not only live stress-free but also experience happiness.

Image by Vladimir Fedotov, from Unsplash
Should AI Influence End-of-Life Medical Choices? Experts Discuss
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
AI is already revolutionizing healthcare in areas like imaging and diagnostics. However, as AI technology continues to advance, there are growing discussions about its potential role in end-of-life medical decision-making.
In a Rush? Here are the Quick Facts!
- End-of-life decisions should reflect patients’ wishes and be medically appropriate.
- AI can help provide prognostic information and assist in decision-making processes.
- AI may help with incapacitated patients who lack advance directives but has limitations.
Rebecca Weintraub Brendel, director of the Harvard Medical School Center for Bioethics, recently discussed the ethical implications of using AI in critical decision-making in a Harvard Medical School press release.
Weintraub Brendel emphasized that end-of-life choices ultimately reflect patients’ wishes, provided they are competent to make those decisions and the choices are medically appropriate.
But complications arise when a patient is unable to express their desires due to illness. In these cases, understanding both the cognitive and emotional implications of the decision becomes essential.
For instance, patients with progressive neurological conditions, such as ALS, may eventually reach a point where they are prepared to make end-of-life decisions. Conversely, individuals with cancer often experience significant shifts in their mindset once symptoms are addressed, leading them to reconsider their choices.
“People sometimes say, ‘I would never want to live that way,’ but they wouldn’t make the same decision in all circumstances,” noted Weintraub Brendel.
The conversation then turned to younger patients facing life-altering injuries. “When we’re faced with something that alters our sense of bodily integrity, our sense of ourselves as fully functional human beings, it’s natural, even expected, that our capacity to cope can be overwhelmed,” said Weintraub Brendel.
However, many individuals, even those suffering from severe injuries, report an improved quality of life over time, highlighting the importance of resilience and hope.
Weintraub Brendel also discussed the potential role of AI in helping patients navigate these tough decisions. AI systems could offer valuable insights into what might be expected during the progression of a chronic illness or how a person may cope with pain.
With its ability to process vast amounts of data, AI could provide prognostic information, helping clinicians and patients better understand potential outcomes and make informed decisions. “AI could give us a picture that could be helpful,” she explained.
One of the more contentious issues, however, is the use of AI when patients are incapacitated and lack advance directives. In such cases, medical teams often rely on assumptions about what the patient would have wanted.
“I’m less optimistic about the use of large-language models for making capacity decisions or figuring out what somebody would have wanted. To me it’s about respect. We respect our patients and try to make our best guesses, and realize that we all are complicated, sometimes tortured, sometimes lovable, and, ideally, loved,” Weintraub Brendel argues.
Weintraub Brendel stresses that “having a better prognostic sense of what might happen is really important,” but cautions against over-relying on AI without acknowledging the complexity of human values.
Despite its potential, Weintraub Brendel is wary of AI’s role in making ethical decisions. “We can’t abdicate our responsibility to center human meaning in our decisions, even when based on data,” she stated. While AI can assist in diagnostics and provide valuable insights, the final decision-making should remain a human responsibility.
Ultimately, the integration of AI into healthcare, particularly in end-of-life decisions, requires careful ethical consideration.
As Weintraub Brendel put it, “We have to ask, ‘How do we do that and follow our values of justice, care, respect for persons?’” As technology advances, the balance between human judgment and AI’s capabilities will continue to shape the future of medical care.