
Image by Tingey Injury Law Firm, from Unsplash
Woman Wins Eviction Case Using ChatGPT Instead of a Lawyer
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A California woman overturned her eviction using ChatGPT, joining a growing wave of people relying on AI to fight legal battles without lawyers.
In a rush? Here are the quick facts:
- AI helped her overturn $55,000 in penalties and $18,000 in rent.
- More litigants are using AI instead of lawyers in U.S. courts.
- Some users are fined for citing fake AI-generated cases.
Facing eviction from her mobile home in Long Beach, California, Lynn White had no money for a lawyer. After losing with a court-appointed attorney, she decided to appeal, this time with the help of ChatGPT, as first reported by NBC News .
“It was like having God up there responding to my questions,” White said.
By feeding the chatbot her legal documents, White said ChatGPT helped her identify errors, research laws, and draft responses.
After months of litigation, she overturned her eviction notice, avoiding $55,000 in penalties and $18,000 in rent. “I never, ever, ever, ever could have won this appeal without AI,” she said.
With more generative AI tools available, many litigants are skipping lawyers and using chatbots as their legal guides. “I’ve seen more and more pro se litigants in the last year than I have in probably my entire career,” said Meagan Holmes, a paralegal in Phoenix.
But results vary. Some users succeed, while others face fines for filing false citations invented by AI. “They take it very, very seriously and don’t let you off the hook because you’re a pro se litigant,” said Earl Takefman, who once cited a nonexistent case.
Perplexity spokesperson Jesse Dwyer said, “We don’t claim to be 100% accurate, but we do claim to be the only company who works on it relentlessly.”
Despite the risks, lawyers see promise. “Going forward in the legal profession, all attorneys will have to use AI in some way or another,” said attorney Andrew Montez.
White, who calls AI her “virtual law clerk,” added, “It felt like David and Goliath, except my slingshot was AI.”

Photo by Ladislav Sh on Unsplash
Anthropic Launches Petri, An Open-Source Tool To Analyze AI Behavior
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
Anthropic released on Monday a new open-source AI tool called Petri to help researchers study and analyze AI models’ behaviors. The new AI-powered agent is part of the company’s broader effort to mitigate emerging cyber threats associated with the rapid development of advanced AI technologies.
In a rush? Here are the quick facts:
- Anthropic released Petri, an open-source AI tool designed to assist researchers in studying and analyzing AI models’ behaviors.
- Petri can audit AI systems autonomously, simulating realistic environments and applying different benchmarks.
- The new feature has been built as part of Anthropic’s efforts to mitigate emerging cyber threats powered by AI models.
According to Anthropic , Petri—an acronym for Parallel Exploration Tool for Risky Interactions—enables researchers to test hypotheses and automatically run multiple experiments to study AI systems’ behavior.
The tool simulates realistic environments, provides performance scores, and generates summaries of model behavior. The process is fully automated and designed to streamline testing while giving human researchers greater leverage as AI-powered threats continue to grow.
Hackers have been using AI models to attack organizations, employing sophisticated strategies such as “ vibe hacking ,” in which malicious actors use chatbots and AI-powered platforms to create harmful software and tools with minimal technical expertise.
“As AI becomes more capable and is deployed across more domains and with wide-ranging affordances, we need to evaluate a broader range of behaviors,” wrote Anthropic. “This makes it increasingly difficult for humans to properly audit each model—the sheer volume and complexity of potential behaviors far exceeds what researchers can manually test.”
Petri has been trained to apply different benchmarks during its audits and assist developers in evaluation processes. Anthropic shared a demonstration in which Petri tested 14 frontier models across multiple behavioral dimensions, including deception, self-preservation, sycophancy, reward hacking, power-seeking, encouragement of user delusion, and cooperation with harmful requests.
One of Anthropic’s latest models , Claude Sonnet 4.5, showed some of the strongest results, outperforming competitors’ frontier models such as OpenAI’s GPT-5.
The company shared more technical documents for researchers and developers interested in learning more about Petri.