Image by Wally Gobetz, from Flickr

New York Times Sends Cease-and-Desist to AI Startup Perplexity Over Content Use

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Perplexity is accused of using NYT’s content without permission for AI summaries.
  • Perplexity’s CEO plans to respond to the NYT’s legal notice by October 30.
  • Publishers fear AI-generated summaries reduce clicks on their original articles.

The New York Times (NYT) has demanded that Perplexity, an AI search engine startup, stop using its content without permission.

According to The Wall Street Journal (WSJ), the NYT issued a cease-and-desist letter through its law firm, accusing Perplexity of violating copyright law by using its articles to generate summaries and other AI outputs.

“Perplexity and its business partners have been unjustly enriched by using, without authorization, The Times’s expressive, carefully written and researched, and edited journalism without a license,” WSJ reported the letter as saying.

Perplexity had previously assured the Times that it would cease using web-crawling technology to access its content. However, the NYT claims its material is still being used by the startup, according to Reuters.

“We are not scraping data for building foundation models, but rather indexing web pages and surfacing factual content as citations to inform responses when a user asks a question,” Perplexity told Reuters in response.

Perplexity CEO Aravind Srinivas addressed the dispute in an interview, stating that the company is not ignoring the Times’s efforts to block content crawling. He also confirmed that Perplexity plans to respond to the legal notice by the Times’s October 30 deadline.

“We are very much interested in working with every single publisher, including the New York Times,” Srinivas told WSJ. “We have no interest in being anyone’s antagonist here.”

The ongoing dispute highlights the growing tension between publishers and AI companies as news outlets grapple with the impact of generative AI technologies. While these tools offer potential benefits, such as data analysis and headline generation, they also pose significant risks for misuse and theft of content.

Many publishers, including the NYT, rely heavily on advertising and subscription revenue, which could be threatened by unauthorized use of their work. This is particularly concerning for media companies as AI-generated search summaries, such as those from Perplexity and Google, become more widespread.

A key issue is that users may read these AI-generated summaries without clicking through to the original article, depriving publishers of valuable traffic and revenue.

WSJ noted that several media companies have already signed deals with OpenAI, including News Corp, IAC, and Politico owner Axel Springer. These agreements involve compensation for the use of publisher content.

The situation mirrors findings from a recent investigation led by Press Gazette, which revealed that nearly a quarter of news-related search queries in the U.S. returned AI-generated summaries.

This pushed organic links to publisher articles further down the page, potentially reducing visibility. The investigation warned that this drop in search prominence could have a “devastating” impact on click-through rates for publishers.

Google has claimed that the links in AI Overviews actually generate more clicks, but it has yet to provide any supporting data, according to Press Gazette.

As the legal battle between the NYT and Perplexity unfolds, it underscores the complex relationship between AI technologies and the media, with publishers increasingly pushing back against the unauthorized use of their content.

Image by Pheladiii, From Pixabay

Father Shocked After AI Chatbot Impersonates Murdered Daughter

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Jennifer Crecente was murdered by her ex-boyfriend in 2006.
  • Her identity was used without permission to create an AI chatbot.
  • Character.AI removed the chatbot after being notified by the family.

Yesterday, The Washington Post reported a disturbing incident involving Drew Crecente, whose murdered daughter Jennifer was impersonated by an AI chatbot on Character.AI.

Crecente discovered a Google alert that led him to a profile featuring Jennifer’s name and yearbook photo, falsely describing her as a “video game journalist and expert in technology, pop culture and journalism.”

For Drew, the inaccuracies weren’t the main issue—the real distress came from seeing his daughter’s identity exploited in such a way, as noted by The Post.

Jennifer, who was killed by her ex-boyfriend in 2006, had been re-created as a “knowledgeable and friendly AI character,” with users invited to chat with her, noted The Post.

“My pulse was racing,” Crecente told The Post. “I was just looking for a big flashing red stop button that I could slap and just make this stop,” he added.

The chatbot, created by a user on Character.AI, raised serious ethical concerns regarding the use of personal information by AI platforms.

Crecente, who runs a nonprofit in his daughter’s name aimed at preventing teen dating violence, was appalled that such a chatbot had been made without the family’s permission. “It takes quite a bit for me to be shocked, because I really have been through quite a bit,” he said to The Post. “But this was a new low,” he added.

The incident highlights ongoing concerns about AI’s impact on emotional well-being, especially when it involves re-traumatizing families of crime victims.

Crecente isn’t alone in facing AI misuse. Last year, The Post reported that TikTok content creators used AI to mimic the voices and likenesses of missing children, creating videos of them narrating their deaths, which sparked outrage from grieving families.

Experts are calling for stronger oversight of AI companies, which currently have wide latitude to self-regulate, noted The Post.

Crecente didn’t interact with the chatbot or investigate its creator but immediately emailed Character.AI to have it removed. His brother, Brian, shared the discovery on X, prompting Character.AI to announce the chatbot’s deletion on Oct. 2, reported The Post.

Jen Caltrider, a privacy researcher at Mozilla Foundation, criticized Character.AI’s passive moderation, noting that the company allowed content violating its terms until it was flagged by someone harmed.

“That’s not right,” she said to The Post, adding, “all the while, they’re making millions.”

Rick Claypool, a researcher at Public Citizen, emphasized the need for lawmakers to focus on the real-life impacts of AI, particularly on vulnerable groups like families of crime victims.

“They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed,” he said to The Post.

Now, Crecente is exploring legal options and considering advocacy work to prevent AI companies from re-traumatizing others.

“I’m troubled enough by this that I’m probably going to invest some time into figuring out what it might take to change this,” he told The Post.