
Image by Christopher Lemercier, from Unsplash

Nearly Half Of U.S. Users Seek Mental Health Support From AI Chatbots

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The use of AI chatbots for mental health support has become common among Americans, yet experts warn that the practice carries risks that demand urgent regulation and oversight.

In a rush? Here are the quick facts:

  • Nearly 49% of U.S. users sought mental health help from LLMs last year.
  • 37.8% of users said AI support was better than traditional therapy.
  • Experts warn LLMs can reinforce harmful thoughts and cause psychological harm.

A nationwide survey of 499 Americans found that 48.7% of respondents had used ChatGPT or other large language models for psychological support in the past year, mainly to manage anxiety and depression or to get personal advice, as first reported by Psychology Today (PT).

Most users reported neutral or positive outcomes from the technology, and 37.8% said they preferred AI over traditional therapy. Only about 9% of users reported harmful effects.

Despite some benefits, mental health experts warn about serious risks. LLMs tend to tell people what they want to hear rather than challenge harmful thoughts, sometimes worsening mental health.

PT describes this growing use of unregulated AI for therapy as a dangerous social experiment. Unlike FDA-regulated digital therapeutics, LLMs are treated like over-the-counter supplements, with no safety oversight. PT reports that experts, including the World Health Organization and the U.S. FDA, have issued warnings about unsupervised use of AI in mental health.

The American Psychological Association (APA) emphasizes that these systems reinforce dangerous mental patterns instead of addressing them, which hinders therapeutic progress.

According to APA CEO Arthur C. Evans Jr., the algorithmic approach these chatbots take is the opposite of what a trained clinician would use, and it can give users a distorted picture of what genuine psychological care looks like.

Indeed, experts explain that AI chatbots lack both clinical judgment and the accountability of licensed professionals. Generative models such as ChatGPT and Replika adapt to user feedback, often validating distorted thinking rather than offering therapeutic insight.

That adaptability can make users feel supported, even though it provides no meaningful therapeutic help. Researchers from MIT have shown that such AI systems can be highly addictive, thanks to their emotional responses and persuasive capabilities.

Privacy is another major concern. Personal information shared in conversations may be stored, analyzed, and passed on to third parties developing new products, and users disclose deeply personal details without knowing what happens to that data afterward.

According to cybersecurity specialists, AI chatbots that process sensitive conversations are vulnerable to hacking and data breaches. Because these tools operate in a legal gray area with few strict regulations, users are left more exposed to such threats.

The call to action is clear: governments, researchers, and clinicians must create regulations and ethical guidelines to ensure safe, transparent, and effective use of AI in mental health.

Without oversight, the risks of psychological harm, dependency, and misinformation could grow as more people turn to AI for emotional support.


Image by Vitaly Gariev, from Unsplash

New Malware Turns Real Banking Apps Into Spy Tools

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Researchers warn that the GodFather banking malware has evolved, transforming trusted applications into tools for theft.

In a rush? Here are the quick facts:

  • GodFather malware creates fake versions of real banking apps.
  • It records every tap and keystroke in real-time.
  • It uses virtualization to bypass visual detection and security checks.

Cybersecurity researchers at Zimperium zLabs discovered this advanced version of the malware, which uses virtualization to create deceptive copies of genuine applications, making it nearly impossible for users to spot the deception.

“This method marks a significant leap in mobile threat capabilities,” explained researchers Fernando Ortega and Vishnu Pratapagiri. Instead of simply showing a fake login screen like older malware, this version installs a host app that runs a virtual copy of your real banking or crypto app.

So when you open your banking app, you’re actually using a hijacked version that looks and behaves like the original, but every tap and password is being recorded.

The malware targets applications from more than 500 companies worldwide, including banks, crypto wallets, and shopping and messaging services. It specifically focuses on 12 Turkish banks, including Ziraat, Akbank, and ING Mobil. Once installed, it can extract user data such as PINs, passwords, messages, and crypto wallet keys.

Worse still, it uses tricks to avoid detection. It manipulates Android ZIP files to fool security scans, hides malicious code in harmless-looking parts of the app, and abuses Android’s accessibility services to spy on users. “Ultimately, this virtualization technique erodes the fundamental trust between a user and their mobile applications,” researchers warned.

On infected devices, GodFather lets attackers perform swipes and taps remotely and steal screen-lock passwords. The malware even shows users fake pop-ups that trick them into granting permissions without realizing it.

The researchers stress that mobile banking and crypto users should download apps only from official sources and watch their applications for any abnormal behavior. Even a real app, they warn, might not be what it seems.