
Cloudflare Researchers Claim Perplexity Is Scraping Websites Despite AI Bot Block
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Researchers from internet infrastructure provider Cloudflare claim that the AI system Perplexity has been scraping content from websites without permission, even when publishers have implemented AI bot blocks.
In a rush? Here are the quick facts:
- Cloudflare claims that Perplexity has been scraping content from websites without permission.
- Researchers confirmed Perplexity’s “stealth crawling” behavior even when publishers implement AI bot blocks.
- A spokesperson from Perplexity called Cloudflare’s report a “publicity stunt.”
According to the report Cloudflare published on Monday, Perplexity first crawls websites using its default user agent and then switches its identity to bypass these blocks. Cloudflare's experts confirmed this “stealth crawling” behavior.
“We see continued evidence that Perplexity is repeatedly modifying their user agent and changing their source ASNs to hide their crawling activity, as well as ignoring — or sometimes failing to even fetch — robots.txt files,” wrote the researchers.
Crawlers are expected to be transparent, state their purpose clearly, and respect websites’ preferences, but researchers claim Perplexity has not been following these trust principles. This conclusion was reached following an investigation prompted by customer complaints.
“We received complaints from customers who had both disallowed Perplexity crawling activity in their robots.txt files and also created WAF rules to specifically block both of Perplexity’s declared crawlers: PerplexityBot and Perplexity-User,” wrote the researchers. “These customers told us that Perplexity was still able to access their content even when they saw its bots successfully blocked.”
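The robots.txt side of the blocks those customers describe can be reproduced with Python's standard `urllib.robotparser`, which is roughly how a compliant crawler is expected to consult a site's preferences before fetching. The file contents below are an illustrative sketch using the two declared crawler names from the report, not any customer's actual rules (WAF rules are a separate, server-side layer not shown here).

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt disallowing Perplexity's two declared crawlers
# sitewide while leaving the site open to all other user agents.
robots_txt = """\
User-agent: PerplexityBot
Disallow: /

User-agent: Perplexity-User
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

# A compliant crawler checks before fetching and backs off when disallowed.
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))    # False
print(parser.can_fetch("Perplexity-User", "https://example.com/article"))  # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))     # True
```

The crux of Cloudflare's claim is that this mechanism is purely voluntary: a crawler that ignores robots.txt, or presents a different user-agent string, sails past it, which is why the customers layered WAF rules on top.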
Cloudflare researchers said they verified these claims by replicating the blocks and conducting multiple tests to observe the crawler’s behavior. In one test, they created new domains that had not yet been indexed and included robots.txt files to block “respectful bots.” Later, they queried Perplexity for specific information about the restricted domains and found that the AI-powered answer engine still provided details and accurate information about the website.
“This response was unexpected, as we had taken all necessary precautions to prevent this data from being retrievable by their crawlers,” added the researchers.
A spokesperson from Perplexity, Jesse Dwyer, called the research a “publicity stunt” in a statement to The Verge. Dwyer added that there are “misunderstandings” in Cloudflare’s report.
Cloudflare has been developing multiple tools to help publishers prevent unauthorized AI crawling. In March, Cloudflare released “AI Labyrinth,” a tool that redirects unauthorized crawlers into AI-generated content mazes. Last month, it launched “Pay Per Crawl,” a system to charge AI bots for accessing publishers’ content.

OpenAI Will Add Mental Health Guardrails For ChatGPT
OpenAI announced new changes to its AI chatbot, ChatGPT, on Monday, including the introduction of mental health guardrails aimed at supporting vulnerable users. The updates are designed to prevent ChatGPT from making decisions for users and will include reminders to take breaks after extended interactions with the AI system.
In a rush? Here are the quick facts:
- OpenAI announced changes to ChatGPT, including mental health guardrails.
- The chatbot will send reminders encouraging users to take breaks during extended interactions.
- The updates come after multiple reports of users becoming overly attached to ChatGPT, experiencing worsening mental health, and cases of AI psychosis.
According to the official announcement, OpenAI plans to make ChatGPT more useful and to correct and optimize its behavior. The company acknowledges that previous updates made the AI model “too agreeable” and that many people use it as a therapist during difficult times.
“AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress,” wrote OpenAI. “To us, helping you thrive means being there when you’re struggling, helping you stay in control of your time, and guiding—not deciding—when you face personal challenges.”
ChatGPT has now been trained to better recognize when a user is emotionally or mentally struggling and to provide more appropriate responses. These changes follow several reports of users becoming overly attached to ChatGPT, experiencing worsening mental health, and the emergence of cases described as “AI psychosis.”
“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” explained OpenAI, adding that new tools are being implemented to better detect potential mental health concerns.
The AI system will also include periodic reminders for users to take breaks from interactions, and will focus on helping users reach their own conclusions by asking more guiding questions instead of offering direct answers. This approach is similar to ChatGPT’s new Study Mode, in which it supports students by encouraging critical thinking and posing Socratic questions instead of simply providing solutions.
OpenAI also noted that it has been working closely with a range of experts, including general practitioners, psychiatrists, and pediatricians, to ensure ChatGPT offers meaningful assistance during critical moments. The company has additionally created an advisory group of mental health professionals to guide the development of these features.