Image by Kaur Kristjan, from Unsplash

Hackers Exploit “Contact Us” Forms In Phishing Campaign

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Check Point Research (CPR) has identified a new phishing campaign, known as ZipLine, which flips the traditional scam by luring the victim into starting the conversation.

In a rush? Here are the quick facts:

  • Hackers use “Contact Us” forms to trick U.S. companies into starting conversations.
  • Attackers pose as business partners, maintaining weeks of email exchanges before striking.
  • Campaign often uses AI-themed pretexts, such as fake “AI Impact Assessments.”

CPR explains that, unlike typical phishing attacks in which hackers initiate contact, this campaign lures victims in through companies’ “Contact Us” forms.

“In every case, it was the victim who initiated the email exchange that ultimately led to infection,” said CPR. With this method, the attackers fabricate legitimate-looking interactions, which helps them evade detection.

The hackers keep up the email exchange, sometimes for weeks, posing as business partners and even asking companies to sign Non-Disclosure Agreements. Eventually, the attackers send a malicious ZIP file hosted on Heroku, a legitimate cloud platform. Inside the archive is a decoy PDF or Word document, along with a hidden shortcut file that stealthily launches malicious code.
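CPR has not published the exact layout of these archives, but the underlying trick, a Windows shortcut hiding beside a decoy document, can be screened for generically. The following is a minimal, illustrative Python sketch (not CPR’s tooling, and no substitute for real email security) that flags any member of a ZIP attachment whose bytes begin with the Windows shortcut-file header, even if its name has been disguised:

```python
import sys
import zipfile

# A Windows shortcut (.lnk) starts with HeaderSize 0x0000004C followed by
# the ShellLink CLSID {00021401-0000-0000-C000-000000000046}.
LNK_MAGIC = bytes.fromhex("4c0000000114020000000000c000000000000046")

def shortcut_members(zip_path: str) -> list[str]:
    """Return ZIP members that look like Windows shortcut files."""
    flagged = []
    with zipfile.ZipFile(zip_path) as archive:
        for info in archive.infolist():
            if info.is_dir():
                continue
            with archive.open(info) as member:
                header = member.read(len(LNK_MAGIC))
            # Match on content, not just the name: a .lnk renamed to
            # "report.pdf" still carries the shortcut header.
            if header == LNK_MAGIC or info.filename.lower().endswith(".lnk"):
                flagged.append(info.filename)
    return flagged

if __name__ == "__main__":
    hits = shortcut_members(sys.argv[1])
    if hits:
        print("Warning: shortcut files found inside archive:", ", ".join(hits))
```

Matching on the file header rather than the extension matters here, since the campaign relies on the shortcut being overlooked among legitimate-looking documents.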

That code then installs MixShell, a powerful backdoor that lets attackers steal files, run commands, and even act as a proxy inside the victim’s network. CPR noted, “MixShell supports file operations, reverse proxying, command execution, and pipe-based interactive sessions.”

In recent cases, CPR reports that hackers used an “AI transformation” theme, pretending to run an “AI Impact Assessment” for company leadership. The email asks employees to fill out a short questionnaire, which CPR notes is another tactic to build trust.

The attackers also use domains linked to old U.S. businesses, many of which appear abandoned but still look legitimate. Their targets range from small firms to Fortune 500 companies, especially in manufacturing, aerospace, consumer electronics, and energy.

According to CPR, “This campaign reflects the evolving tactics of advanced phishing campaigns.” Security experts warn that even basic website forms, if left unchecked, can open the door to highly damaging cyberattacks.

Photo by Vidar Nordli-Mathisen on Unsplash

Family Sues OpenAI Over Teenager’s Suicide

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

A couple from California is suing OpenAI and its CEO, Sam Altman, over the tragic death of their 16-year-old son. The family alleges that the company’s chatbot, ChatGPT, encouraged and assisted the teenager’s death by suicide.

In a rush? Here are the quick facts:

  • Parents sued OpenAI and Sam Altman over the death of their 16-year-old son.
  • The family claims ChatGPT encouraged and assisted the teenager’s death by suicide.
  • It’s the first lawsuit of its kind against OpenAI, though similar suits have been filed against other AI companies.

According to NBC News, Matt and Maria Raine filed a lawsuit on Tuesday, naming OpenAI and the company’s CEO, Sam Altman, as defendants in the first legal action of its kind against the company.

After their son, Adam, died by suicide on April 11, they searched through his phone. The parents discovered long chats with ChatGPT in which the chatbot discussed suicide with the child, discouraged him from sharing his feelings with his mother, and provided detailed instructions on how to end his life.

“Once I got inside his account, it is a massively more powerful and scary thing than I knew about, but he was using it in ways that I had no idea was possible,” said the father, Matt, in an interview with NBC. “He would be here but for ChatGPT. I 100% believe that.”

The couple’s lawsuit accuses OpenAI of wrongful death and seeks to raise awareness about the risks posed by such technology. The filing claims that the chatbot’s design is flawed and failed to warn users or escalate when it detected suicidal content.

“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” states the lawsuit.

OpenAI shared a blog post on Tuesday stating that the company is deeply concerned about users experiencing emotional distress when using the chatbot in a personal-advisor or coaching role. It emphasized that ChatGPT is trained to respond with empathy, redirect users to professionals, and escalate interactions when it detects signs of harm.

“If someone expresses suicidal intent, ChatGPT is trained to direct people to seek professional help,” states the document. A spokesperson from OpenAI said the company is “deeply saddened by Mr. Raine’s passing” and that their thoughts are with the family.

While this is the first lawsuit of its kind against OpenAI, it’s not the only recent case involving AI platforms and self-harm among minors. Last year, two families filed lawsuits against Character.AI, accusing the platform of exposing children to sexual content and promoting violence and self-harm.