Image by Denise Chan, from Unsplash

Man Poisoned Himself After Following ChatGPT’s Advice

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

A 60-year-old man gave himself bromism, a rare poisoning syndrome most common in the 19th century, after following advice from ChatGPT.

In a rush? Here are the quick facts:

  • He replaced table salt with toxic sodium bromide for three months.
  • Hospitalized with hallucinations, paranoia, and electrolyte imbalances due to poisoning.
  • ChatGPT suggested bromide as a chloride substitute without health warnings.

A case study published in the Annals of Internal Medicine describes a man suffering from bromism, a condition caused by sodium bromide poisoning.

Apparently, this was caused by his attempt to replace table salt with a dangerous chemical, which ChatGPT suggested he use. The man reportedly arrived at the emergency room experiencing paranoia and auditory and visual hallucinations, and accusing his neighbor of poisoning him.

Subsequent medical tests revealed abnormal chloride levels, along with other indicators confirming bromide poisoning. The man revealed that he had followed a restrictive diet and used sodium bromide in place of salt. He did so after asking ChatGPT how to eliminate chloride from his diet.

“For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT,” the study reads. The researchers explain that sodium bromide is typically used as an anticonvulsant for dogs or as a pool cleaner, but is toxic to humans in large amounts.

The man spent three weeks in the hospital, where his symptoms gradually improved with treatment.

The study highlights how AI tools may provide incomplete and dangerous guidance to users. When the researchers ran their own test and asked ChatGPT to suggest chloride alternatives, it also returned sodium bromide, with no warning about its toxicity and no request for context about why the question was being asked.

The research warns that AI can also spread misinformation and lacks the critical judgment of a healthcare professional.

404 Media notes that OpenAI recently announced improvements in ChatGPT 5, aiming to provide safer, more accurate health information. This case underscores the importance of using AI cautiously and consulting qualified medical experts for health decisions.

Image by Tingey Injury Law Firm, from Unsplash

AI Industry Faces Largest Copyright Class Action Threat

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The AI industry faces its largest legal challenge yet, after a court certified what might become the largest copyright class action suit in history.

In a rush? Here are the quick facts:

  • Court certified a massive copyright class action against AI company Anthropic.
  • Up to seven million claimants could join, risking huge damages.
  • Anthropic warns damages could total hundreds of billions of dollars.

The lawsuit, initially brought by just three authors against Anthropic, now threatens to expand into a massive case that could include seven million potential claimants, as first pointed out by ArsTechnica. If the case moves forward, the potentially massive damages could financially devastate the entire AI sector.

Anthropic petitioned the appeals court to reverse the class certification decision, arguing that District Judge William Alsup performed an insufficient review. ArsTechnica notes that the company faces potential damages of “hundreds of billions of dollars” if the certification stands, as each claimant’s work could carry penalties of up to $150,000.

Industry groups like the Consumer Technology Association, along with the Computer and Communications Industry Association, have joined Anthropic and argued that this type of ruling would produce negative consequences for the entire AI sector.

They fear the ruling could scare off investment, slow down AI innovation in the U.S., and threaten the country’s global position.

The groups say one key problem is that copyright lawsuits rarely fit well into class actions, since each author must prove ownership individually. Many authors may never even hear about the suit, and the court’s proposed notification system puts the burden on claimants themselves. There are also issues around “orphan works,” whose rights holders cannot be identified or located, as well as co-owned books and the estates of deceased authors.

ArsTechnica reports that both author and library advocates support this stance, arguing that the court failed to consider decades of established copyright research and legislation. They contend this rushed ruling could prevent fair resolution of important legal questions around AI’s use of copyrighted material.

“This case is of exceptional importance,” they said, as reported by ArsTechnica, warning that the decision risks sounding a “death knell” for properly addressing authors’ rights in the era of AI.