
Image by Good Faces Agency, from Unsplash
Tinder Now Requires Facial Recognition For California Users
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Tinder now requires new users in California to complete a facial recognition scan, aiming to stop fake accounts and improve user safety.
In a rush? Here are the quick facts:
- Tinder now requires facial recognition for new users in California.
- Face Check uses video selfies and biometric tech for verification.
- Verified users receive a badge; the selfie video is then deleted.
Starting this week, Tinder requires all new users in California to complete facial recognition verification, as first reported by Axios. Match Group executives have confirmed that the new “Face Check” tool is designed to fight fake accounts, online impersonation, and bots.
The Face Check process requires users to record a brief video selfie when they register an account. The system uses FaceTec biometric technology to confirm that the person is a real, live human, that their profile pictures match their appearance, and that the face is tied to a single account.
After verification, users receive a verified badge. The system deletes the video itself but retains an encrypted, unalterable face map, which helps identify duplicate profiles.
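Tinder has not published how its duplicate matching works, but the general idea of comparing stored face maps can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the 128-dimensional embeddings, the cosine-similarity measure, and the 0.85 threshold are hypothetical, not FaceTec’s actual method.

```python
# Hypothetical sketch of duplicate detection via stored face embeddings.
# Not Tinder's or FaceTec's actual pipeline.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_duplicate(new_embedding, stored_embeddings, threshold=0.85):
    """Flag a new account whose face map is near-identical to a stored one.

    The 0.85 threshold is illustrative; a real system would tune it to
    trade off false matches against missed duplicates.
    """
    return any(cosine_similarity(new_embedding, e) >= threshold
               for e in stored_embeddings)

# Example: a repeat sign-up with an almost-identical face map is flagged.
rng = np.random.default_rng(0)
face = rng.normal(size=128)
known = [face + rng.normal(scale=0.01, size=128)]  # embedding from an earlier account
print(is_duplicate(face, known))  # True
```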
“This person is a real, live person and not a bot or a spoofed account,” said Yoel Roth, head of trust and safety at Match Group, as reported by Axios. “We see this as one part of a set of identity assurance options that are available to users,” he added.
Face Check is separate from Tinder’s ID Check, which verifies a user’s age and identity using government-issued identification documents. Photo verification via video selfie used to be optional, but it is now mandatory for new users in California.
The feature was previously tested in Colombia and Canada. According to Roth, those pilots showed “promising” results: fewer reports of fake profiles and improved trust in the platform. California was chosen as the first U.S. market due to its size, diversity, and strict digital privacy laws, as reported by Axios.
The rollout comes at a critical time, as artificial intelligence fuels a rise in deepfake-enabled sextortion scams, particularly on dating platforms. Scammers increasingly use AI to build sophisticated fake romantic profiles that combine deepfake images with chatbot-generated dialogue.
In some cases, criminals even superimpose victims’ faces onto explicit content using deepfake technology, eliminating the need for actual shared content before launching blackmail threats.
Law enforcement and safety experts warn that these scams are especially hard to detect, as AI-generated content often evades reverse image searches and traditional red flags like awkward phrasing or inconsistent visuals.
Tinder plans to assess how users in California respond before deciding whether to expand Face Check across the U.S.

Image by Freepik
Scientists Train AI To Think Like A Human Using Psychology Studies
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The new AI system, Centaur, demonstrates human-like thinking across multiple experiments, producing new findings while sparking debate over what true understanding means.
In a rush? Here are the quick facts:
- Centaur learned from 160 studies and 10 million responses.
- Centaur generalizes strategies to new situations, much as humans do.
- Some experts say it outperforms classical cognitive models.
An international team of scientists has developed a new AI system called Centaur, which performs like a human being in psychological tests.
In their study, the team built Centaur on Meta’s open-source LLaMA model, training it on the results of 160 studies involving more than 60,000 volunteers. The goal? To determine whether an AI system could replicate many different types of human thinking.
“Ultimately, we want to understand the human mind as a whole and see how these things are all connected,” said Marcel Binz, lead author of the study, in an interview with The New York Times.
Modern AI, like ChatGPT, can produce responses that seem human, but such systems still make basic mistakes: a chess bot can’t drive a car, and a chatbot might let pawns move sideways. General intelligence that works the way the human mind does remains out of reach, and Centaur’s approach brings scientists a step closer to that goal.
The AI was trained to copy human choices in tasks like steering a spaceship toward treasure or learning patterns in games. “We essentially taught it to mimic the choices that were made by the human participants,” Binz explained to The Times.
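For the curious, here is a minimal sketch of that “mimic the choices” training setup. This is not the authors’ actual pipeline: the prompt format, the toy slot-machine record, and the use of GPT-2 as a runnable stand-in for LLaMA are all assumptions for illustration.

```python
# Hypothetical sketch: fine-tune a causal language model so that, given a
# transcript of an experiment, it predicts the human participant's choice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is a stand-in so the sketch runs anywhere; the study fine-tuned
# Meta's much larger LLaMA model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# One toy behavioral record (hypothetical format): the prompt transcribes
# the task, the target is the option the participant actually chose.
prompt = ("You see two slot machines. Machine A paid out last round, "
          "machine B did not. You choose machine ")
target = "A"

prompt_ids = tokenizer(prompt, return_tensors="pt")["input_ids"]
target_ids = tokenizer(target, add_special_tokens=False,
                       return_tensors="pt")["input_ids"]
input_ids = torch.cat([prompt_ids, target_ids], dim=1)

# Standard causal-LM loss, masked so the model is only graded on
# reproducing the human's choice, not on the prompt text.
labels = input_ids.clone()
labels[:, :prompt_ids.shape[1]] = -100
loss = model(input_ids=input_ids, labels=labels).loss
loss.backward()  # one gradient step toward mimicking the human choice
```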
Centaur not only learned like a human, it generalized like one, too. When the spaceship task was swapped for a flying carpet version, Centaur reused the same successful strategy, just like people did.
Experts were impressed. “This is really the first model that can do all these types of tasks in a way that’s just like a human subject,” said Stanford’s Russ Poldrack.
Still, some critics say mimicking behavior isn’t the same as understanding the mind. “The goal is not prediction. The goal is understanding,” said Indiana University’s Gary Lupyan in an interview with The Times.
Even Binz agrees. “Centaur doesn’t really do that yet,” he said. But with five times more data coming, the team hopes Centaur will grow into something even more powerful, and possibly even help unlock the mysteries of the human mind.