
Photo by Patrick Tomasso on Unsplash
Microsoft Partners With HarperCollins For Nonfiction AI Training
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
The tech giant Microsoft and HarperCollins, one of the world's largest English-language publishers, have signed a deal to train artificial intelligence models on data from the publisher's nonfiction titles.
In a Rush? Here are the Quick Facts!
- Bloomberg revealed Microsoft is the AI company behind HarperCollins’s AI training process
- A heated debate emerged on social media and in news publications over the authors' work and the publisher's new AI deal
- Microsoft's AI model will not generate new books, but it will be trained on the publisher's nonfiction titles
According to Bloomberg, the book publisher will allow Microsoft's software to learn from its books to train an AI model, which has not yet been disclosed, but not to generate books. An anonymous source with knowledge of the deal shared the information with Bloomberg.
Bloomberg revealed Microsoft's role in the deal yesterday, following a heated dispute between authors and HarperCollins and after 404 Media reported that an unnamed tech company was behind the AI training. Microsoft declined to comment when Bloomberg reached out.
American writer and comedian Daniel Kibblesmith shared on Bluesky that HarperCollins had offered him a flat fee of $2,500 to opt in and allow AI to train on his book Santa's Husband, a fictional children's title, suggesting the deal could extend beyond nonfiction. The author was outraged, and many users joined the debate.
It's not the first time this year that a big tech company has partnered with a large publisher. In May, OpenAI signed a multiyear agreement with News Corp, HarperCollins's parent company, to access publications such as the New York Post, The Wall Street Journal, and The Daily Telegraph, with a focus on journalism.
In June, it was revealed that Microsoft had been training a new large language model (LLM), known as MAI-1, to compete with Google and OpenAI. The tech giant has been working on multiple AI developments and expanding its chatbot capabilities. In September, Microsoft announced multiple enhancements to its AI assistant Copilot, including new features like Copilot Pages and integrations with programs such as PowerPoint, Excel, and Outlook.

Image by Freepik
IACP Conference in Boston Highlighted AI's Growing Role in Modern Policing
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
At the IACP conference, police chiefs explored AI technologies like VR training, generative reports, and data systems, raising privacy concerns and highlighting regulatory gaps in policing.
In a Rush? Here are the Quick Facts!
- Over 600 vendors showcased technologies, including VR training systems and AI tools.
- VR training promises engagement but lacks realism for complex police-public interactions.
- Generative AI tools like Axon’s Draft One raise concerns over report accuracy and bias.
The International Association of Chiefs of Police (IACP) conference, one of the most exclusive gatherings in law enforcement, offered a rare glimpse into the evolving landscape of policing technology last month in Boston, according to MIT Technology Review.
The event, often closed to the press, brought together leaders from across the U.S. and abroad to discuss innovations shaping the future of policing.
MIT reports that vendors and companies showcased cutting-edge tools aimed at revolutionizing policing practices, particularly in training, data analysis, and administrative tasks.
One of the most attention-grabbing demonstrations came from V-Armed, a company specializing in virtual reality (VR) training systems. In its booth, complete with VR goggles and sensors, attendees could simulate active shooter scenarios.
VR training, touted as an engaging and cost-effective alternative to traditional methods, has drawn interest from police departments, including the Los Angeles Police Department.
However, critics argue that while VR systems offer immersive experiences, they cannot replicate the nuanced human interactions that officers encounter in real-world situations.
Beyond training, AI’s role in data collection and analysis took center stage. Companies like Axon and Flock unveiled integrated systems combining cameras, license plate readers, and drones to gather and interpret data, reports MIT.
These tools promise efficiency but have sparked privacy concerns. Civil liberties advocates warn that such systems could lead to over-surveillance with limited accountability or public benefit, MIT reported.
Administrative efficiency was another key focus. Axon introduced “Draft One,” a generative AI tool that creates initial drafts of police reports by analyzing body camera footage.
While this technology could save officers significant time, legal experts like Andrew Ferguson caution against the risk of inaccuracies in these critical documents. Errors or biases in AI-generated reports could influence case outcomes, from bail decisions to trial verdicts, says MIT.
MIT notes that the absence of federal regulations governing AI use in policing adds to the complexity. With over 18,000 largely autonomous police departments in the U.S., decisions about adopting AI tools rest with individual agencies.
This fragmented approach raises concerns about inconsistent standards for ethics, privacy, and accuracy. As AI becomes a cornerstone of policing, its unregulated expansion highlights the need for oversight.
Without clear boundaries, critics warn the industry risks prioritizing profit over public accountability—a challenge set to intensify amid shifting political priorities and advancements in policing technologies.