
Photo by Annie Spratt on Unsplash
Google Introduces New Machine Learning Model To Estimate User Age
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
Google announced new digital protection tools on Wednesday, including a new AI tool deployed in the United States to estimate ages and detect if a user is under 18 years old.
In a Rush? Here are the Quick Facts!
- The new machine-learning tool is expected to estimate users’ age and detect those under 18.
- Google said it will help provide more “age-appropriate” experiences.
- The AI tool is being tested in the United States, but the company expects to expand it to more countries soon.
The tech giant explained that the new machine learning technology will be tested to prevent underage users from accessing inappropriate content and to address one of the biggest challenges of the company which is guessing netizens’ ages.
“This year we’ll begin testing a machine learning-based age estimation model in the U.S.,” wrote Jen Fitzpatrick, Senior Vice President at Google, in a document shared . “This model helps us estimate whether a user is over or under 18 so that we can apply protections to help provide more age-appropriate experiences. We’ll bring this technology to more countries over time.”
The new AI model could help provide more personalized experiences, as part of the company’s efforts to build a safe environment for users of all ages. Fitzpatrick also highlighted previous initiatives such as the restriction of sensitive ad content, a SafeSearch Filter for children, and its “teen wellbeing” protection on YouTube.
According to The Verge , Neal Mohan, YouTube CEO, revealed on Tuesday that the new AI tool will use existing data about users—such as videos watched, how long they’ve had an account, and sites visited—and notify about settings changes when it detects the user could be under 18. If the user is an adult, they must verify their age with official documentation such as a government ID.
Besides the AI model, Google also announced a new update for the Google Family Link feature—a parental control service—including tools to help kids focus, and families to set School Time, and include contacts approved by parents.
A few days ago, Google also made changes to its AI safety policies in a different area. The tech giant updated its AI ethics guidelines, allowing the use of AI for developing weapons or surveillance tools.

Image by Matheus Bertelli, from Pexels
AI Model DeepSeek-R1 Raises Security Concerns In New Study
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
A cybersecurity firm has raised concerns about the AI model DeepSeek-R1, warning that it presents significant security risks for enterprise use.
In a Rush? Here are the Quick Facts!
- The model failed 91% of jailbreak tests, bypassing safety mechanisms.
- DeepSeek-R1 was highly vulnerable to prompt injection.
- The AI frequently produced toxic content and factually incorrect information.
In a report released on February 11, researchers at AppSOC detailed a series of vulnerabilities uncovered through extensive testing, which they described as a serious threat to organizations relying on artificial intelligence.
According to the findings, DeepSeek-R1 exhibited a high failure rate in multiple security areas. The model was found to be highly susceptible to jailbreak attempts, frequently bypassing safety mechanisms intended to prevent the generation of harmful or restricted content.
It also proved vulnerable to prompt injection attacks, which allowed adversarial prompts to manipulate its outputs in ways that violated policies and, in some cases, compromised system integrity.
Additionally, the research indicated that DeepSeek-R1 was capable of generating malicious code at a concerning rate, raising fears about its potential misuse .
Other issues identified in the report included a lack of transparency regarding the model’s dataset origins and dependencies, increasing the likelihood of security flaws in its supply chain.
Researchers also observed that the model occasionally produced responses containing harmful or offensive language, suggesting inadequate safeguards against toxic outputs. Furthermore, DeepSeek-R1 was found to generate factually incorrect or entirely fabricated information at a significant frequency.
AppSOC assigned the model an overall risk score of 8.3 out of 10, citing particularly high risks related to security and compliance.
The firm emphasized that organizations should exercise caution before integrating AI models into critical operations, particularly those handling sensitive data or intellectual property.
The findings highlight broader concerns within the AI industry, where rapid development often prioritizes performance over security. As artificial intelligence continues to be adopted across sectors such as finance, healthcare, and defense, experts stress the need for rigorous testing and ongoing monitoring to mitigate risks.
AppSOC recommended that companies deploying AI implement regular security assessments, maintain strict oversight of AI-generated outputs, and establish clear protocols for managing vulnerabilities as models evolve.
While DeepSeek-R1 has gained attention for its capabilities , the research underscores the importance of evaluating security risks before widespread adoption. The vulnerabilities identified in this case serve as a reminder that AI technologies require careful scrutiny to prevent unintended consequences.