AI-Generated Errors in Court Papers Lead to Legal Trouble for Lawyers

Photo by Saúl Bucio on Unsplash


  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

A report shared by Reuters yesterday reveals that AI hallucinations—errors and fabricated information produced by generative AI models—are causing legal problems in U.S. courts.

In a Rush? Here are the Quick Facts!

  • Morgan & Morgan sent an email to more than 1,000 lawyers warning about the risks of AI.
  • The recent case of Walmart lawyers admitting to using AI for their cases has raised alarms in the legal community.
  • The use of chatbot hallucinations in court statements has become a recurring issue in recent years.

This month, the law firm Morgan & Morgan emailed a warning to more than 1,000 of its lawyers about the risks of relying on chatbots and citing fake cases generated by artificial intelligence.

A few days ago, two lawyers in Wyoming admitted to including fake AI-generated cases in a court filing for a lawsuit against Walmart, and a federal judge threatened to sanction them.

In December, Stanford professor and misinformation expert Jeff Hancock was accused of using AI to fabricate citations in a court declaration he submitted in defense of Minnesota’s 2023 law criminalizing the use of deepfakes to influence elections.

Multiple cases like these over the past few years have generated legal friction and added burdens for judges and litigants. Morgan & Morgan and Walmart declined to comment on the issue.

Generative AI has helped lawyers cut research time, but its hallucinations can carry significant costs. Last year, a Thomson Reuters survey revealed that 63% of lawyers had used AI for work and 12% used it regularly.

Last year, the American Bar Association reminded its 400,000 members of the attorney ethics rules, which require lawyers to stand behind all the information in their court filings, and noted that this extends to AI-generated information, even when it is included unintentionally—as in Hancock’s case.

“When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that’s incompetence, just pure and simple,” Andrew Perlman, dean of Suffolk University’s law school, told Reuters.

A few days ago, the BBC also published a report warning about AI-generated fake quotes and the problems AI tools pose for journalism.

Former OpenAI Chief Technology Officer Mira Murati Launches AI Startup

Photo by Kaleidico on Unsplash


  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

Mira Murati, former OpenAI Chief Technology Officer, launched an AI startup called Thinking Machines Lab this Tuesday.

In a Rush? Here are the Quick Facts!

  • Mira Murati launched an AI startup called Thinking Machines Lab this Tuesday.
  • The company will focus on reducing the gap between the rapid pace of AI and people’s and scientists’ understanding and adoption of it.
  • Murati gathered experts from OpenAI, Google DeepMind, Mistral, and other major AI companies.

As had been rumored in recent months, Murati assembled a talented team of experts from OpenAI, Google DeepMind, CharacterAI, Mistral, and other AI companies to develop the project.

Murati and her team have now publicly announced the new company, explaining more about its vision and mission.

“Thinking Machines Lab is an artificial intelligence research and product company,” states the official website. “We’re building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.”

The company acknowledges the rapid pace at which generative AI has been developing and says its mission is to narrow the gap between that pace and both the scientific community’s understanding of AI and people’s ability to use it and make the most of it.

“To bridge the gaps, we’re building Thinking Machines Lab to make AI systems more widely understood, customizable, and generally capable,” states the post.

On the social media platform X, Murati explained that they will focus on helping people use AI and adapt models to meet their needs, developing strong and capable AI systems, and nurturing an open science culture.

“Our goal is simple, advance AI by making it broadly useful and understandable through solid foundations, open science, and practical applications,” wrote Murati in her post on X.

Murati left OpenAI in September last year—along with researchers Bob McGrew and Barret Zoph—and said that one of her reasons was to pursue her own exploration. She joined a long list of key talent that has departed the company in recent years.