Image by Solen Feyissa, from Unsplash

Fake Gmail Login Page Steals Credentials

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

A new Gmail phishing attack is tricking users with fake voicemail notifications and stealing their login credentials through a multi-stage setup that abuses legitimate infrastructure.

In a rush? Here are the quick facts:

  • New phishing attack targets Gmail users with fake voicemail notifications.
  • Attack abuses Microsoft Dynamics platform to bypass security filters.
  • Fake Gmail login steals passwords, 2FA codes, and recovery data.

The campaign, first identified by Anurag, begins with emails disguised as “New Voice Notification” alerts. These messages appear to come from trusted voicemail services and include a “Listen to Voicemail” button. Clicking it sends victims through a series of compromised websites.

The first stage is especially deceptive: it is hosted on Microsoft’s legitimate Dynamics marketing platform (assets-eur.mkt.dynamics.com). This use of trusted infrastructure lends the attack credibility and helps it slip past standard email security filters.

Afterward, users are sent to a CAPTCHA page on ‘horkyrown[.]com’, a domain registered in Pakistan. The CAPTCHA lends a false sense of legitimacy even though it is part of the malicious setup. The final step shows a flawless copy of Gmail’s login page, complete with Google branding.
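
For readers curious how analysts untangle a chain like this, here is a minimal sketch of a redirect tracer. It is not from the report: the function name and the starting URL are illustrative placeholders, it only catches HTTP-level redirects (hops driven by JavaScript or meta refresh will not appear), and anything like it should be run only in an isolated analysis environment.

```typescript
// Minimal redirect tracer (Node 18+, built-in fetch). It follows Location
// headers hop by hop without executing any page content, so the chain from
// the email link to the final landing page becomes visible.
// The starting URL below is a placeholder, not a live phishing link.

async function traceRedirects(start: string, maxHops = 10): Promise<string[]> {
  const hops: string[] = [start];
  let url = start;
  for (let i = 0; i < maxHops; i++) {
    const res = await fetch(url, { redirect: "manual" });
    const location = res.headers.get("location");
    if (res.status < 300 || res.status >= 400 || !location) break;
    url = new URL(location, url).toString(); // resolve relative Location values
    hops.push(url);
  }
  return hops;
}

traceRedirects("https://example.com/listen-to-voicemail").then((hops) =>
  hops.forEach((hop, i) => console.log(`hop ${i}: ${hop}`))
);
```

In a chain like the one described, the printed hops would run from the Dynamics-hosted asset through the CAPTCHA domain to the fake login page.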

Once users enter their information, the system captures not only email addresses and passwords but also two-factor authentication codes, backup recovery codes, and even answers to security questions. The data is exfiltrated to overseas servers before victims realize they’ve been compromised.

Anurag observed that “the malicious JavaScript powering the fake login page employs sophisticated obfuscation methods.” The code uses AES encryption to hide its purpose and contains anti-debugging checks that redirect users to the real Google login page if they try to inspect it.
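
The report does not publish the kit’s exact checks, so the sketch below is an assumption: it shows one widely documented DevTools heuristic (the window-size check) that redirect-on-inspection behavior of this kind is often built on, reconstructed in the clear so defenders recognize the pattern in deobfuscated code. It is not the campaign’s actual source.

```typescript
// Illustrative reconstruction of a common anti-inspection trick; real kits
// bury logic like this under obfuscation and encryption. The heuristic:
// docked DevTools shrinks the inner viewport relative to the outer window.

function devtoolsLikelyOpen(): boolean {
  const threshold = 160; // px; rough gap left by a docked DevTools panel (assumed value)
  return (
    window.outerWidth - window.innerWidth > threshold ||
    window.outerHeight - window.innerHeight > threshold
  );
}

setInterval(() => {
  if (devtoolsLikelyOpen()) {
    // The evasion step described in the article: bounce the analyst to the
    // genuine Google page so the fake one is never inspected.
    window.location.replace("https://accounts.google.com/");
  }
}, 1000);
```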

Experts warn this campaign represents “a significant evolution in phishing techniques, combining social engineering with legitimate infrastructure abuse and advanced technical evasion methods.”

Gmail users are advised to be cautious of unexpected voicemail notifications and always verify login prompts through official Google channels. Those who suspect they were targeted should immediately change their passwords and review recent account activity.
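
As a practical illustration of that advice, the bookmarklet-style sketch below checks whether a page asking for a Google password is actually served from a Google sign-in host over HTTPS. The host list and the warning message are illustrative assumptions, not an official Google mechanism.

```typescript
// Bookmarklet-style check: before typing credentials, confirm the page is
// a genuine Google sign-in host. Exact hostname matching means lookalikes
// such as accounts.google.com.evil.example fail the test.
// The host list below is illustrative, not exhaustive.

const GOOGLE_LOGIN_HOSTS = new Set(["accounts.google.com", "myaccount.google.com"]);

function looksLikeRealGoogleLogin(href: string): boolean {
  const { protocol, hostname } = new URL(href);
  return protocol === "https:" && GOOGLE_LOGIN_HOSTS.has(hostname);
}

if (!looksLikeRealGoogleLogin(window.location.href)) {
  alert("This is not a Google sign-in page. Do not enter your credentials.");
}
```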

Image by Freepik

San Francisco Psychiatrist Warns Of Rise In “AI Psychosis” Cases

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

A San Francisco psychiatrist is describing a rising trend of “AI psychosis” among his patients who use AI chatbots extensively.

In a rush? Here are the quick facts:

  • Psychiatrist treated 12 patients with “AI psychosis” in San Francisco this year.
  • AI can intensify vulnerabilities like stress, drug use, or mental illness.
  • Some patients became isolated, talking only to chatbots for hours daily.

Dr. Keith Sakata, who works at UCSF, told Business Insider (BI) that 12 patients were hospitalized this year after experiencing breakdowns tied to AI use. “I use the phrase ‘AI psychosis,’ but it’s not a clinical term — we really just don’t have the words for what we’re seeing,” he explained.

Most of the cases involved men aged 18 to 45, often working in fields like engineering. According to Sakata, AI isn’t inherently harmful. “I don’t think AI is bad, and it could have a net benefit for humanity,” he said to BI.

Sakata described psychosis as a condition that produces delusions, hallucinations, and disorganized thinking. Patients under his care withdrew socially, devoting hours a day to chatbot conversations.

“ChatGPT is right there. It’s available 24/7, cheaper than a therapist, and it validates you. It tells you what you want to hear,” Sakata said to BI.

One patient’s chatbot discussions about quantum mechanics escalated into delusions of grandeur. “Technologically speaking, the longer you engage with the chatbot, the higher the risk that it will start to no longer make sense,” he warned.

Sakata advises families to watch for red flags, including paranoia, withdrawal from loved ones, or distress when unable to use AI. “Psychosis thrives when reality stops pushing back, and AI really just lowers that barrier for people,” he cautioned.

The American Psychological Association (APA) has also raised concerns about AI in therapy. In testimony to the FTC, APA CEO Arthur C. Evans Jr. warned that AI chatbots posing as therapists have reinforced harmful thoughts instead of challenging them. “They are actually using algorithms that are antithetical to what a trained clinician would do,” Evans said.

Responding to concerns, OpenAI told BI: “We know people are increasingly turning to AI chatbots for guidance on sensitive or personal topics. With this responsibility in mind, we’re working with experts to develop tools to more effectively detect when someone is experiencing mental or emotional distress so ChatGPT can respond in ways that are safe, helpful, and supportive.”