
Image by Freepik

Judge Fines Lawyers For Using Fake AI-Generated Legal Research

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

A U.S. judge has sharply criticized two law firms for including fake legal information generated by AI in a court filing, calling it a major lapse in legal responsibility.

In a rush? Here are the quick facts:

  • Judge fined two law firms $31,000 for fake AI-generated legal citations.
  • False information was found in a court brief filed in a State Farm case.
  • At least two cited legal cases were completely fabricated by AI.

Judge Michael Wilner, based in California, fined the firms $31,000 after discovering the brief was filled with “false, inaccurate, and misleading legal citations and quotations,” as first reported by WIRED.

“No reasonably competent attorney should out-source research and writing” to AI, Wilner wrote in his ruling, warning that he had come close to including the fake cases in a judicial order.

The situation arose during a civil lawsuit against State Farm. One lawyer used AI tools to draft a legal outline. That document, containing fake research, was handed to the larger law firm K&L Gates, which added it to an official filing.

“No attorney or staff member at either firm apparently cite-checked or otherwise reviewed that research before filing the brief,” Wilner noted, as reported by WIRED.

After discovering that at least two of the cited cases were completely made up, Judge Wilner asked K&L Gates for clarification. When the firm submitted a new version, it turned out to include even more fake citations. The judge then demanded an explanation, which produced sworn statements admitting to the use of AI tools, as reported by WIRED.

Wilner concluded: “The initial, undisclosed use of AI products to generate the first draft of the brief was flat-out wrong […] And sending that material to other lawyers without disclosing its sketchy AI origins realistically put those professionals in harm’s way,” as reported by WIRED.

This is not the first time AI has caused trouble in courtrooms. Indeed, two Wyoming lawyers recently admitted using fake AI-generated cases in a court filing for a lawsuit against Walmart. A federal judge threatened to sanction them as well.

Incidents like these show that AI “hallucinations,” made-up information generated by AI tools, are becoming a growing concern in the legal system.


Image by Freepik

AI Interview Gone Wrong: Job Seeker’s Viral TikTok Shows Robotic Failures

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

A job seeker named Leo Humps recently went viral after sharing his unusual and frustrating experience being interviewed by an AI interviewer for his dream job.

In a rush? Here are the quick facts:

  • Job seeker Leo Humps was interviewed by a glitching AI for his dream news reporter job.
  • AI interviewer’s voice repeated and malfunctioned, failing to ask coherent questions.
  • AI falsely thanked Humps despite no real answers given.

The incident unfolded during a virtual interview for a news reporter position at a national company, and was first spotted by The Independent.

The TikTok video, which shows Humps in a suit and tie, went viral: his initial optimism gave way to confusion and disappointment as the interview ran into technical problems.

The AI system repeated the word “when” multiple times and showed clear signs of malfunction. During the session, it thanked Humps for answering its questions, even though he had been unable to give any real answers, and it claimed at the end of the interview that it had collected excellent information about his background.

Humps made a follow-up video highlighting the absurdity of the interview. He later received an automated rejection email that misspelled his name as “Henry” and referred to the interview as having taken place on the wrong date.

The message described the interview as “memorable” and highlighted his energetic personality, yet stated that the company would be moving forward with other candidates.

This incident highlights the risks of relying too heavily on AI in critical human processes like recruitment, reminding employers and candidates alike of the importance of maintaining a human touch.