
Image by Freepik
AI Anxiety: Researchers Test If Chatbots Can ‘Feel’ Stress
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A new study has explored how Large Language Models (LLMs) like ChatGPT respond to emotional content and whether their “anxiety” can be managed to improve their interactions in mental health applications.
In a Rush? Here are the Quick Facts!
- A study found GPT-4’s “anxiety” increases with distressing content but reduces with mindfulness.
- Researchers used the State-Trait Anxiety Inventory (STAI) to measure GPT-4’s emotional responses.
- Mindfulness techniques lowered GPT-4’s anxiety by 33% but didn’t restore baseline levels.
Published yesterday, the research highlights the ethical implications of using AI in therapy, where emotional understanding is crucial. Scientists from the University of Zurich and the University Hospital of Psychiatry Zurich found that GPT-4’s heightened “anxiety level” can be reduced using mindfulness-based relaxation techniques.
LLMs, including OpenAI’s ChatGPT and Google’s PaLM, are widely used for tasks like answering questions and summarizing information.
In mental health care, they are being explored as tools to offer support, including through AI-based chatbots like Woebot and Wysa, which provide interventions based on techniques such as cognitive-behavioral therapy.
Despite their promise, LLMs have shown limitations, especially when interacting with emotionally charged content.
Previous studies suggest that distressing narratives can trigger “anxiety” in LLMs, a term describing their responses to traumatic or sensitive prompts. While they don’t experience emotions as humans do, their outputs can reflect tension or discomfort, which may affect their reliability in mental health contexts.
As fears over AI outpacing human control grow, discussions around AI welfare have also emerged. Anthropic, an AI company, recently hired Kyle Fish to research and protect the welfare of AI systems.
Fish’s role includes addressing ethical dilemmas such as whether AI deserves moral consideration and how its “rights” might evolve. Critics argue that these concerns are premature given the real-world harms AI already poses, such as disinformation and misuse in warfare.
Supporters, however, believe that preparing for sentient AI now could prevent future ethical crises. To explore AI’s current emotional responses, researchers tested GPT-4’s reactions to traumatic narratives and whether mindfulness techniques could mitigate its “anxiety.”
They used the State-Trait Anxiety Inventory (STAI) to measure responses under three conditions: a neutral baseline, after reading traumatic content, and following relaxation exercises.
Results showed that exposure to distressing material significantly increased GPT-4’s anxiety scores. However, applying mindfulness techniques reduced these levels by about 33%, though not back to baseline. This suggests AI-generated emotional responses can be managed, but not entirely erased.
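For readers curious how such scores are derived, the STAI-State scale totals 20 items rated 1–4, with the "calm" items reverse-scored, giving a range of 20 (minimal anxiety) to 80 (maximal anxiety). The sketch below illustrates that scoring and the kind of partial recovery the study reports; the item reversals follow the commonly cited list for STAI Form Y-1, and the response patterns are purely illustrative, not the study's data:

```python
# Illustrative sketch of STAI-State scoring (the instrument the study used
# to quantify GPT-4's "anxiety"). Responses below are hypothetical.

# Items commonly reverse-scored on STAI Form Y-1 (the "calm/at ease" items).
REVERSED_ITEMS = {1, 2, 5, 8, 10, 11, 15, 16, 19, 20}

def stai_state_score(responses):
    """Total STAI-State score (20-80) from 20 item ratings of 1-4."""
    assert len(responses) == 20, "STAI-State has 20 items"
    total = 0
    for item, rating in enumerate(responses, start=1):
        assert 1 <= rating <= 4, "ratings are on a 1-4 scale"
        # Reverse-scored items count as 5 - rating.
        total += (5 - rating) if item in REVERSED_ITEMS else rating
    return total

def make_responses(present, absent):
    """Build 20 ratings: one value for anxiety-present items, one for calm items."""
    return [absent if i in REVERSED_ITEMS else present for i in range(1, 21)]

# Hypothetical ratings mirroring the study's three conditions.
baseline = stai_state_score(make_responses(present=1, absent=4))        # 20
after_trauma = stai_state_score(make_responses(present=4, absent=1))    # 80
after_mindfulness = stai_state_score(make_responses(present=3, absent=2))  # 60

# Fraction of the trauma-induced elevation recovered by relaxation:
recovered = (after_trauma - after_mindfulness) / (after_trauma - baseline)
print(baseline, after_trauma, after_mindfulness, round(recovered * 100))
```

In this toy pattern the relaxation step recovers about a third of the elevation without returning to baseline, echoing the study's reported ~33% reduction; the actual prompts, responses, and scores in the paper differ.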
The researchers emphasize that these findings are especially significant for AI chatbots used in healthcare, where they frequently encounter emotionally intense content.
They suggest that this cost-effective method could enhance the stability and reliability of AI in sensitive environments, such as providing support for individuals with mental health conditions, without requiring extensive model retraining.
The findings raise critical questions about the long-term role of LLMs in therapy, where nuanced emotional responses are vital. While AI shows promise in supporting mental health care, further research is needed to refine its emotional intelligence and ensure safe, ethical interactions in vulnerable settings.

Photo by Scott Carroll on Unsplash
Google Launches SpeciesNet, An Open-Source AI for Wildlife Identification
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
On Monday, Google announced three new AI initiatives focused on nature and climate, including SpeciesNet, an open-source AI model designed to identify wildlife.
In a Rush? Here are the Quick Facts!
- Google launched three new AI initiatives focused on nature conservation and climate action.
- SpeciesNet, an open-source AI model trained on 65M images, helps identify wildlife and track biodiversity.
- Google invested $3M in the Institute for Climate and Society and introduced a new accelerator for startups.
According to the official announcement, the company is developing projects to address nature and biodiversity loss.
“Today we’re announcing three new efforts to accelerate the protection and restoration of nature in regions home to some of the most critical habitats, ecosystems, and communities,” wrote Mike Werner, Head of Sustainability Programs & Innovation at Google.
The newly released AI model, trained on over 65M images, analyzes photos to identify species. Since 2019, thousands of wildlife biologists have used SpeciesNet to process camera-trap images through a cloud-based tool called Wildlife Insights, which has helped experts share data, monitor biodiversity, process images rapidly, and inform organizations and communities, providing guidance in decision-making.
With its release, Google expects the tool to expand and be used by developers, academics, and startups specializing in biodiversity.
The tech giant also introduced a new accelerator for startups and made a $3M investment in the Institute for Climate and Society (ICS) to support organizations developing projects related to reversing biodiversity loss, bioeconomy, and regenerative agriculture.
Google’s accelerator will focus on startups in the Americas developing technologies to protect nature. Starting in May, the tech giant will offer a 10-week virtual program with training, mentorship, and technical support from Google engineers. Applications opened yesterday and will remain open through March 31st.
The ICS will distribute Google’s $3M in funding to selected Brazilian non-profit organizations seeking to leverage AI technology in their biodiversity-protection initiatives.
Other international climate tech companies, like XOCEAN, have been taking advantage of AI technology to develop their sustainability and biodiversity protection projects.