
Image by Nathana Rebouças, from Unsplash
Google Faces First Major U.S. Publisher Lawsuit Over AI Search
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The company that owns Rolling Stone, Billboard, Variety, and other media outlets has filed a lawsuit against Google for using its news content to create AI-generated summaries without permission.
In a rush? Here are the quick facts:
- CEO Jay Penske said publishers must protect journalism from Google’s misuse.
- Google defends AI Overviews as making search “more helpful.”
- PMC is first major U.S. publisher directly suing Google over AI.
Penske Media Corporation (PMC) says Google’s “AI Overviews” are significantly reducing traffic to its websites and cutting into revenue.
“As a leading global publisher, we have a duty to protect PMC’s best-in-class journalists and award-winning journalism as a source of truth,” said CEO Jay Penske, as reported by Reuters.
“Furthermore, we have a responsibility to proactively fight for the future of digital media and preserve its integrity — all of which is threatened by Google’s current actions,” Penske added.
Recently, Nicholas Thompson, CEO of The Atlantic, warned staff that Google traffic could eventually drop to zero, adding, “Google is shifting from being a search engine to an answer engine. We have to develop new strategies.”
Google Search remains the leading search platform in the United States, controlling more than 90% of the market. Google has rejected the accusations against it.
Spokesperson José Castañeda said AI Overviews make search “more helpful” and create “new opportunities for content to be discovered,” as reported by Reuters.
He added, “Every day, Google sends billions of clicks to sites across the web, and AI Overviews send traffic to a greater diversity of sites. We will defend against these meritless claims.”
Penske is the first major U.S. publisher to sue Google directly over AI Overviews, though others have raised similar complaints. The Verge notes that earlier this year, Chegg and several European publishers filed lawsuits, while U.S. outlets including The New York Times sued Microsoft and OpenAI over AI training practices.
Critics warn Google’s dominance leaves publishers little leverage. “That is the problem,” said Danielle Coffey, CEO of the News/Media Alliance, as reported by TechCrunch.
The lawsuit comes after news that Google is testing a new “AI Mode” for its search engine, which lets users interact with a chatbot interface instead of entering traditional search queries.
Publishers warn this change could worsen the damage already caused by Google’s AI Overviews, which have slashed traffic by more than 50% for outlets like HuffPost and Washington Post. Critics argue Google is turning into an “answer engine,” keeping users on its platform while starving publishers of clicks and revenue.
While Google says “blue links” will remain accessible under a Web tab, experts predict AI Mode will take over, threatening the long-term survival of online news outlets.

Photo by ThisisEngineering on Unsplash
Developers Are Spending More Time Fixing AI-Generated Code
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Senior software developers are spending more time fixing AI-generated code as trends like “vibe coding” rise. Seasoned programmers have described the work as “worse than babysitting.”
In a rush? Here are the quick facts:
- Senior software developers are spending more time fixing AI-generated code.
- Trends such as “vibe coding” add work for seasoned professionals, who must fix the AI-generated output.
- New roles such as “vibe code cleanup specialist” have emerged in the industry.
A recent survey conducted by Fastly, which included nearly 800 participants, confirmed that senior professionals spend significant time fixing and editing AI output, addressing issues such as security risks, hallucinations, and missing information.
TechCrunch also noted that the problem has grown so widespread that it has even given rise to a new role in the industry: “vibe code cleanup specialist.”
“Using a coding co-pilot is kind of like giving a coffee pot to a smart six-year-old and saying, ‘Please take this into the dining room and pour coffee for the family,’” said Carla Rover, a senior web developer who has been using AI to build software for her startup. She explained that while AI can generate code, the results are rarely clean or correct, calling the task of fixing AI output “worse than babysitting.”
Another developer interviewed by TechCrunch, Feridoon Malekzadeh, agreed that generative AI often behaves like a child, describing it as “hiring your stubborn, insolent teenager to help you do something.”
Malekzadeh said he spends 30% to 40% of his time fixing AI-written code. “You have to ask them 15 times to do something,” he told TechCrunch. “In the end, they do some of what you asked, some stuff you didn’t ask for, and they break a bunch of things along the way.”
While professionals criticize AI-generated code for inaccuracies, hallucinations, and errors, cybersecurity experts warn of broader consequences. A few days ago, researchers reported that a security flaw in Cursor, one of the most popular AI code editors among developers, allowed hackers to execute malicious code.