
Image by Michał Jakubowski, from Unsplash
China Bans Forced Facial Recognition
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
China’s cyberspace regulator has released new regulations on facial recognition technology, emphasizing that individuals should not be forced to use it.
In a rush? Here are the quick facts:
- China introduces regulations banning forced use of facial recognition technology.
- Regulations require companies to obtain consent before collecting facial data.
- Facial recognition use is banned in private spaces like hotel rooms and public bathrooms.
This comes as the country grapples with growing concerns over privacy and the widespread use of facial recognition in daily life.
Reuters reports that the Cyberspace Administration of China (CAC) announced that individuals should have alternative options if they refuse to verify their identity using facial recognition. This applies to various practices, such as using the technology for hotel check-ins or entering gated communities.
“Individuals who do not agree to identity verification through facial information should be provided with other reasonable and convenient options,” said the CAC in a statement, as reported by Reuters.
The new regulations are set to take effect in June and have been introduced in response to concerns over privacy, says Reuters. The regulations highlight that companies using facial recognition must obtain explicit consent before collecting biometric data.
However, the rules don’t apply to public spaces, and facial recognition will still be common in Chinese cities, where signs must notify the public of its use, as reported by Reuters.
These rules come after a 2021 survey revealed that 75% of Chinese respondents expressed concern about the technology, and 87% opposed its use in business places, says Reuters.
In response, China’s Supreme Court banned the use of facial recognition in places like shopping malls and hotels, allowing residents to request alternative methods of identification, as reported by Reuters.
Additionally, the Personal Information Protection Law, which came into force in November 2021, mandates user consent for data collection and imposes penalties on non-compliant companies, noted Reuters.
While the new rules aim to protect individual privacy, they still allow for the use of facial recognition for AI training activities. However, the regulations ban its use in private spaces such as hotel rooms, public bathrooms, and dressing rooms, where privacy could be compromised.
Experts and companies involved in facial recognition technology, like SenseTime and Megvii, are now expected to follow stricter data security measures, including encryption and audits to ensure data protection, as noted by Reuters.
Despite these regulations, questions remain about whether government entities will be subject to the same rules, as the Chinese government has previously used the technology for surveillance and control, including monitoring ethnic minorities, as noted by The Register.
This move is part of a broader global conversation about the balance between technological innovation and privacy rights. China’s push for more robust facial recognition laws signals a shift towards greater protection for citizens amid concerns about surveillance and data security.

Image by Marco Verch, from Ccnull
AI Labyrinth: Cloudflare’s New Tool Tricks AI Crawlers With Fake Web Pages
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Cloudflare has announced “AI Labyrinth,” a tool designed to combat AI-driven web scrapers that extract data from websites without permission.
In a rush? Here are the quick facts:
- The tool generates realistic but useless AI-created content to waste scrapers’ time.
- AI Labyrinth targets bots ignoring robots.txt, including those from Anthropic and Perplexity AI.
- It functions as a next-gen honeypot, detecting and fingerprinting unauthorized crawlers.
Instead of outright blocking these bots, AI Labyrinth misleads them into an endless maze of AI-generated pages, wasting their time and computing power.
“When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them,” Cloudflare explained in a blog post.
“But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources,” Cloudflare added.
Ars Technica notes that AI scrapers are a problem because they harvest vast amounts of data from websites, often without permission, to train AI models. This creates several issues: it can infringe intellectual property rights and bypass the controls that website owners use to regulate access.
Additionally, scraping can lead to the misuse of sensitive or proprietary data. The volume of scraping has increased dramatically, with Cloudflare reporting over 50 billion crawler requests daily.
This large-scale data extraction depletes website resources, affecting site performance and privacy while contributing to the growing concerns about data exploitation in AI development.
While website owners traditionally rely on the robots.txt file to tell bots what they can and cannot access, many AI companies—including major players like Anthropic and Perplexity AI—have been accused of ignoring these directives, as reported by The Verge.
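For context, robots.txt is a plain-text file served at a site’s root that names crawlers by their user-agent token and states which paths they may fetch. A minimal illustrative example is below; the crawler tokens shown (such as ClaudeBot and PerplexityBot) are the publicly documented ones for the companies the article mentions, but honoring any of these directives is entirely voluntary on the crawler’s part, which is exactly the loophole AI Labyrinth responds to:

```text
# robots.txt — served at https://example.com/robots.txt
User-agent: ClaudeBot        # Anthropic's crawler
Disallow: /

User-agent: PerplexityBot    # Perplexity AI's crawler
Disallow: /

User-agent: *                # all other crawlers
Allow: /
```

Because there is no enforcement mechanism in the protocol itself, a non-compliant bot can simply ignore the file and crawl anyway.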
Cloudflare’s AI Labyrinth offers a more aggressive approach to dealing with these unwanted bots. The tool functions as a “next-generation honeypot,” drawing bots deeper into an artificial web of content that appears real but is ultimately useless for AI training.
Unlike traditional honeypots, which bots have learned to identify, AI Labyrinth crafts realistic-looking yet irrelevant information using Cloudflare’s Workers AI platform.
“No real human would go four links deep into a maze of AI-generated nonsense,” Cloudflare noted. “Any visitor that does is very likely to be a bot, so this gives us a brand-new tool to identify and fingerprint bad bots.”
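The depth-based fingerprinting idea can be sketched in a few lines. The following Python is not Cloudflare’s implementation — the `/maze/` URL prefix, the function names, and the depth threshold of four (taken from the quote above) are all invented for illustration. It shows the two core pieces: deterministically generating onward links for each fake page, and flagging any visitor who follows more maze links than a human plausibly would:

```python
import hashlib

MAZE_PREFIX = "/maze/"
BOT_DEPTH_THRESHOLD = 4  # per Cloudflare: no real human goes four links deep


def maze_links(path: str, fanout: int = 3) -> list[str]:
    """Deterministically derive onward links for a fake maze page.

    Hashing the current path keeps link generation stateless: every
    visit to the same page yields the same child links, so the maze
    looks internally consistent to a crawler without storing anything.
    """
    digest = hashlib.sha256(path.encode()).hexdigest()
    return [f"{MAZE_PREFIX}{digest[i * 8:(i + 1) * 8]}" for i in range(fanout)]


def maze_depth(visited: list[str]) -> int:
    """Count how many maze pages a single visitor has followed."""
    return sum(1 for p in visited if p.startswith(MAZE_PREFIX))


def is_likely_bot(visited: list[str]) -> bool:
    """Flag visitors that traverse deeper into the maze than any human would."""
    return maze_depth(visited) >= BOT_DEPTH_THRESHOLD
```

A human who clicks one decoy link and leaves stays below the threshold; a crawler that mechanically follows every link it sees crosses it quickly and can then be fingerprinted.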
The AI-generated content is designed to be scientifically factual but unrelated to the actual website being protected.
This ensures that the tool does not contribute to misinformation while still confusing AI scrapers. The misleading pages are invisible to human visitors and do not affect search engine rankings.
AI Labyrinth is available as a free, opt-in feature for all Cloudflare users. Website administrators can activate it through their Cloudflare dashboard under Bot Management settings.
The company describes this as only the beginning of AI-driven countermeasures, with future plans to make the fake pages even more deceptive.
The cat-and-mouse game between websites and AI scrapers continues, with Cloudflare taking an innovative approach to protecting online content. However, questions remain about how quickly AI companies will adapt to these traps and whether this strategy could lead to an escalation in the battle over web data.