
Photo by Sasun Bughdaryan on Unsplash
Anthropic’s Lawyers Take Responsibility For AI Hallucination In Copyright Lawsuit
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
On Thursday, a lawyer for Anthropic acknowledged using an incorrect citation generated by the company’s AI chatbot, Claude, in an ongoing copyright case in Northern California. The AI company is in a legal battle with Universal Music Group (UMG), which sued Anthropic over the use of copyrighted song lyrics to train its AI models.
In a rush? Here are the quick facts:
- A lawyer defending Anthropic in court acknowledged using an incorrect citation generated by Claude.
- UMG’s lawyers flagged the error during the ongoing music copyright case in Northern California.
- Anthropic’s attorney described the situation as an “embarrassing and unintentional mistake.”
The lawyer, Ivana Dukanovic, an associate at Latham & Watkins, stated in a court filing that Claude made the citation error, listing the wrong title and authors for an article cited in the case. However, she noted that the article’s publication, link, year, and content were correct.
“Our investigation of the matter confirms that this was an honest citation mistake and not a fabrication of authority,” the filing states. “We apologize for the inaccuracy and any confusion this error caused.”
According to Reuters, Dukanovic’s explanation came after UMG’s lawyers accused Anthropic data scientist Olivia Chen of citing AI-fabricated sources in the company’s defense.
Dukanovic explained that Chen had cited the correct article from the journal The American Statistician, but that lawyers at Latham & Watkins added an incorrect footnote generated by Claude:
After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article. Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error.
Dukanovic described the situation as an “embarrassing and unintentional mistake” and said the firm has implemented new measures to ensure it doesn’t happen again.
A few weeks ago, at an early stage of the case, the judge had ruled in favor of Anthropic, finding UMG’s requests too broad. This new misstep could jeopardize Anthropic’s advantage in the case.
In the past few months, multiple lawyers in the United States have submitted court filings containing incorrect AI-generated citations, raising concerns and creating legal problems. This week, a judge fined two law firms $31,000 for submitting fake AI-generated legal citations.

Photo by Choong Deng Xiang on Unsplash
OpenAI Launches Codex: An AI Assistant For Developers
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
OpenAI has just introduced Codex, a new cloud-based AI assistant designed to help developers with software tasks.
In a rush? Here are the quick facts:
- Codex is OpenAI’s new cloud-based AI for software development tasks.
- Available to Pro, Team, and Enterprise users; Plus and education coming soon.
- It runs tasks in a secure environment with real-time logs and test outputs.
It’s now available to ChatGPT Pro, Team, and Enterprise users, with support for Plus and education users coming soon.
Codex runs on codex-1, a model specially trained for software engineering. It can write features, fix bugs, answer questions about a codebase, and propose pull requests, all in parallel. Each task runs inside its own secure environment preloaded with the user’s code, which keeps the work isolated and easy to audit.
Codex is easy to use: just type a prompt in ChatGPT, click “Code” or “Ask,” and it starts working. It edits files, runs tests, and even shows live progress. Tasks can take from 1 to 30 minutes, depending on how complex they are.
“Codex provides verifiable evidence of its actions through citations of terminal logs and test outputs,” OpenAI explained. Users can review these results, ask for changes, or directly merge them into their code.
Codex can also read special instructions from AGENTS.md files inside your codebase. These files help it understand your testing process and coding standards. Still, “codex-1 shows strong performance even without AGENTS.md files,” OpenAI stated.
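OpenAI hasn’t published a fixed schema for these files, but as a rough sketch, an AGENTS.md might look something like this (the commands and conventions below are hypothetical examples for one project, not part of OpenAI’s announcement):

```markdown
# AGENTS.md (illustrative example)

## Testing
- Run the full suite with `pytest -q` before proposing any change.
- Every bug fix needs an accompanying regression test.

## Code style
- Format Python code with `black`; lint with `ruff`.
- Prefer descriptive names over abbreviations.

## Pull requests
- Keep each change focused on a single fix or feature.
- Summarize what changed and how it was verified.
```

The idea is that Codex reads these conventions before it starts a task, so its patches and pull requests match how the team already works.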
Codex also recently fixed a bug in the Astropy library’s separability_matrix function. A user had reported: “Suddenly the inputs and outputs are no longer separable? This feels like a bug to me.” Codex identified the issue and proposed a patch that was accepted, demonstrating its practical value.
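That quote matches a widely discussed Astropy bug report about nested compound models. As a minimal sketch of the behavior described there (assuming a pre-fix version of astropy is installed):

```python
from astropy.modeling import models
from astropy.modeling.separable import separability_matrix

# Two independent 1-D models: the separability matrix is diagonal,
# meaning each output depends only on its own input.
cm = models.Linear1D(10) & models.Linear1D(5)
print(separability_matrix(cm))

# Before the fix, nesting that same compound model inside a larger
# one reported the Linear1D outputs as no longer separable, which
# is the surprising behavior the user flagged as a bug.
print(separability_matrix(models.Pix2Sky_TAN() & cm))
```

In affected versions, the second matrix’s bottom-right block was no longer diagonal, wrongly suggesting the two linear models’ inputs and outputs had become entangled.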
OpenAI calls this a “research preview,” meaning it’s still being improved. The company stresses the importance of safety, transparency, and human oversight: “It still remains essential for users to manually review and validate all agent-generated code before integration and execution.”