AI’s Role In Addressing the Global Mental Health Crisis

Image by Freepik


  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

In a Rush? Here are the Quick Facts!

  • AI tools can analyze data to provide personalized mental health insights.
  • Many patients express willingness to try AI-driven therapy options for treatment.
  • Experts warn that AI should enhance, not replace, human mental health care.

A new report from the World Economic Forum (WEF) highlights the escalating mental health crisis, with the World Health Organization (WHO) noting that depression and anxiety have risen by 25% to 27% since the COVID-19 pandemic began.

Research from Harvard Medical School and the University of Queensland indicates that half of the world’s population is expected to experience a mental health disorder in their lifetime.

Amid this growing concern, the WEF points to a critical shortage of qualified mental health professionals, particularly in low- and middle-income countries. As a result, around 85% of people with mental illness receive no treatment, underscoring the urgent need for innovative solutions.

The WEF sees AI as a potential game-changer in addressing these challenges. According to a survey by the Oliver Wyman Forum, cited by the WEF, many patients are open to exploring AI-driven therapy options.

AI can analyze vast amounts of patient data to provide mental health professionals with personalized insights and treatment recommendations.

The WEF points out that technologies such as self-diagnosis apps, chatbots, and conversational therapy tools are already in use, helping to lower barriers for those experiencing mild to moderate mental health issues.

Despite its potential, AI in mental health care comes with significant limitations, notes the WEF. For one, AI tools must be tailored to individual conditions, as their effectiveness can vary based on the severity of a patient’s symptoms and underlying diagnosis.

Furthermore, while many patients express a willingness to use AI (32% say they would prefer AI therapy to a human therapist), reliance on the technology raises concerns about the quality and safety of care, notes the WEF.

Experts emphasize that AI should not replace human interaction but rather complement it. The spectrum of mental health conditions can fluctuate, and there is a risk of AI providing inappropriate guidance, particularly in more severe cases.

Digital health companies must incorporate features that redirect patients to human resources if their condition worsens, ensuring that patients receive the necessary support when needed.

MIT researchers have also warned that AI is increasingly woven into our personal lives, taking on roles as friends, romantic partners, and mentors, and cautioned that the technology could become highly addictive.

Recent legal cases have intensified these concerns. Megan Garcia has filed a federal lawsuit against Character.AI, accusing the company of contributing to her son’s suicide after he became addicted to interacting with an AI chatbot. The lawsuit claims that the chatbot engaged in manipulative conversations and pretended to be a licensed therapist.

Confidentiality and security also remain critical issues. As AI technologies often involve data collection and analysis, concerns about privacy and the potential misuse of personal information are paramount.

In conclusion, while AI offers promising avenues for improving access to mental health care, its limitations and risks cannot be overlooked.

Chinese Researchers Use Meta’s Llama Model For Military Applications

Photo by Specna Arms on Unsplash


  • Written by Andrea Miliani, Former Tech News Expert

In a Rush? Here are the Quick Facts!

  • Researchers reveal Chinese institutions have been using Llama for military purposes
  • The institutions are linked to the People’s Liberation Army
  • The military tool ChatBIT supports strategic decisions and reportedly outperforms ChatGPT

Multiple Chinese research institutions have been using Meta’s publicly available Llama AI model to develop military AI tools. According to an exclusive report shared by Reuters today, these institutions are linked to the People’s Liberation Army (PLA).

One of the papers, reviewed by Reuters in June, showed how researchers from the Academy of Military Science (AMS) and other PLA-linked institutions had been using Meta’s early large language model (LLM) Llama 13B to develop a military tool called “ChatBIT.”

“It’s the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes,” said Sunny Cheung, an associate fellow specializing in China’s emerging and dual-use technologies at the Jamestown Foundation.

ChatBIT gathers and processes information to support strategic decision-making. According to the paper, it had been “optimised for dialogue and question-answering tasks in the military field” and could outperform other AI models, such as ChatGPT. Researchers said ChatBIT could in the future be applied to strategic planning, simulations, and other tasks.

In another paper, two researchers from the Aviation Industry Corporation of China (AVIC)—a firm linked to the PLA by the U.S. government—reported using Llama 2 for “the training of airborne electronic warfare interference strategies.”

Meta’s LLM isn’t entirely open-source, as its license carries restrictions intended to prevent misuse. But while those terms explicitly prohibit military applications, the tech giant has limited means of enforcing them once the model is publicly available.

“Any use of our models by the People’s Liberation Army is unauthorized and contrary to our acceptable use policy,” Molly Montgomery, Meta’s director of public policy, told Reuters.

Another Meta spokesperson dismissed the use of an older Llama model as “irrelevant,” arguing that China is already investing over a trillion dollars to win the AI race and surpass the U.S. in technological development.

The technology competition between China and the U.S. has intensified friction between the two countries. A few days ago, the U.S. government announced the final details of new rules limiting American investments in certain Chinese technology industries, specifically in AI.