
Image by Levart_Photographer, from Unsplash
OpenAI Appeals NYT’s Request to Store User Chats Indefinitely
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
OpenAI is fighting a legal demand from The New York Times that would require the company to retain all ChatGPT user data indefinitely, including deleted chats and API content.
In a rush? Here are the quick facts:
- New York Times demands OpenAI keep all ChatGPT user data indefinitely.
- OpenAI calls the demand an overreach risking user privacy.
- Order affects Free, Plus, Pro, Team, and some API users.
The New York Times sued OpenAI and Microsoft for copyright infringement in 2023, claiming the companies used millions of its articles to train their AI systems. Along with other plaintiffs, the Times is now asking the court to compel OpenAI to retain all user conversations permanently.
The Verge reports that the Times argues retaining user data is necessary to preserve evidence for its legal case.
OpenAI opposes the demand, saying it violates the company's privacy commitments and standard industry practice while doing nothing to move the lawsuit toward resolution.
“We strongly believe this is an overreach,” said Brad Lightcap, OpenAI’s Chief Operating Officer. “We’re continuing to appeal this order so we can keep putting your trust and privacy first.”
OpenAI states that its standard practice is to erase deleted user chats and API data within 30 days. Under the court order, the company must instead retain all data, including deleted information. OpenAI explains that the retained data is held in secure systems accessible only to a small legal and security team, and is not automatically shared with the plaintiffs.
OpenAI continues to fight the order. The company stated that the legal dispute does not affect its AI training practices, because business data is exempt from training by default and users control whether their chats are used to improve ChatGPT.

Image by Sue Winston, from Unsplash
UK Court Warns Lawyers: Fake AI Citations May Lead to Criminal Charges
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The High Court of England and Wales has issued a stern warning to lawyers: submitting false legal information produced by artificial intelligence tools like ChatGPT could expose them to criminal penalties.
In a rush? Here are the quick facts:
- UK court warns lawyers of criminal charges over fake AI-generated citations.
- AI-created case references used in £89 million lawsuit lacked factual basis.
- Lawyer self-reported after submitting 18 false cases mixed with genuine citations.
The warning follows two recent legal proceedings. In the first, a man pursuing an £89 million lawsuit against two banks submitted legal citations generated by artificial intelligence, as reported by The New York Times.
Of the 45 case references presented, 18 had no factual basis at all. The Guardian reported that while the lawyer also provided real citations, neither the genuine nor the fake references actually supported the case.
The lawyer accepted responsibility for failing to check the citations and reported himself to the regulatory body.
In the second case, Haringey Law Centre sued the London borough council over a housing dispute. Its lawyer cited five entirely fabricated precedents to the court, and the court found her responsible for generating unnecessary legal costs, according to The Guardian.
Though she denied using AI directly, she admitted she may have “carried out searches on Google or Safari” and unknowingly relied on AI-generated summaries, as reported by The Times.
Judge Victoria Sharp, president of the King’s Bench Division, said, “There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused,” as reported by The Times. She warned that lawyers risk criminal charges or being barred from practicing law.
The judgment added that AI tools “can produce apparently coherent and plausible responses… [but] may make confident assertions that are simply untrue,” as reported by The Times. Ian Jeffery from the Law Society backed the ruling, saying it “lays bare the dangers of using AI in legal work,” as reported by The Guardian.
The court called on legal leaders to urgently train staff on how to responsibly use AI, which is known to “hallucinate” information.
Unchecked AI use has already caused similar problems in the United States, Australia, and Europe, generating widespread concern among legal professionals.
A U.S. judge fined two law firms $31,000 for submitting AI-generated court briefs containing fabricated legal citations. The filing, in a case involving State Farm, included false judicial references that attorneys at both firms failed to verify.
Judge Michael Wilner expressed disappointment that the lawyers had not verified the AI-generated content, which nearly resulted in misleading information being incorporated into a court order.