
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Xbow’s AI bot claimed the number one position on HackerOne’s US leaderboard thanks to its automated penetration testing capabilities.
In a rush? Here are the quick facts:
- Xbow automates penetration testing, saving time and reducing costs for companies.
- The startup raised $75 million in funding led by Altimeter Capital and Sequoia Capital.
- AI found bugs in major firms like Amazon, Disney, PayPal, and Sony.
The hacker Xbow took first place on HackerOne’s US leaderboard for discovering software security vulnerabilities, as first reported by Bloomberg. However, Xbow is not a person but an AI tool created by a startup of the same name.
Xbow’s AI automates penetration testing, in which hackers probe corporate software for weak spots before criminals can exploit them. Founded in January 2024 by GitHub veteran Oege de Moor, the company just raised $75 million to grow its technology, as reported by Bloomberg.
“By automating this we can completely change the equation,” de Moor told Bloomberg.
Manual penetration testing currently costs around $18,000 per system and takes weeks to complete. Xbow aims to let businesses run continuous and more frequent testing, so security issues are detected before new products launch.
However, the technology still has limits. While Xbow excels at detecting coding errors and common security flaws, it struggles with more complex product design issues, such as determining which sensitive information should remain private, as reported by Bloomberg.
To address this, the startup plans to develop features that not only identify problems but also offer suggestions for fixes and improvements in the code.
Altimeter partner Apoorv Agrawal told Bloomberg, “Cybersecurity is going through a credibility crisis […] What chief information security officers want is less, not more [alerts]. AI can help.” But he added that adopting AI tools like Xbow will require companies to change long-standing workflows and behaviors.
As cyberattacks grow more automated, tools like Xbow mark a new era where machines defend against machines.

Anthropic Wins Key Ruling in Copyright Lawsuit, But Faces Trial Over Pirated Books
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
A federal judge ruled in favor of Anthropic on Monday in a copyright case in the United States. District Judge William Alsup of San Francisco found that the AI company did not break the law by using millions of copyrighted books to train its chatbot Claude. However, the company must still face trial over the pirated books it stored.
In a rush? Here are the quick facts:
- San Francisco judge rules in favor of Anthropic in copyright case, finding the company made “fair use” of books to train its AI chatbot, Claude.
- The judge describes Claude’s use of the material as “quintessentially transformative.”
- The AI startup must still face trial over its alleged use of 7 million pirated books.
According to the official ruling, Anthropic purchased and downloaded millions of copyrighted books, many of them from pirate sites, for its “central library,” from which it uses various sets and subsets to train its large language models (LLMs).
Some of the authors whose works were included—Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson—sued Anthropic for copyright infringement. Judge Alsup, however, determined that Anthropic made “fair use” of the collected material.
“Claude’s customers wanted Claude to write as accurately and as compellingly as Authors,” states the document, referring to the plaintiff writers as “Authors.” “So, it was best to train the LLMs underlying Claude on works just like the ones Authors had written, with well-curated facts, well-organized analyses, and captivating fictional narratives — above all with ‘good writing’ of the kind ‘an editor would approve of.’”
The judge also noted that the books used were part of the training process only. Claude’s public-facing version is controlled by software that filters outputs and prevents the generation of exact copies or traceable reproductions of the original texts.
“Like any reader aspiring to be a writer, Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them—but to turn a hard corner and create something different,” said Alsup.
The judge considered Anthropic’s use of the material to train Claude “quintessentially transformative,” but raised concerns over the use of pirated copies. Anthropic reportedly downloaded over 7 million copies of books from pirate libraries. “Anthropic had no entitlement to use pirated copies for its central library,” stated the judge, adding that a separate trial will address this issue.
Anthropic is not the only AI company involved in legal proceedings of this kind; the BBC recently threatened Perplexity with legal action for scraping its content. Anthropic itself has already faced multiple cases related to the use of content created by creative professionals. A few weeks ago, a federal judge in California also ruled in favor of Anthropic in a music AI copyright lawsuit, and the AI company also took responsibility for an AI hallucination in a copyright lawsuit.