
Image by Sean Bernstein, from Unsplash
Mattel and OpenAI Face Backlash Over AI Toys
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Consumer advocates are warning of the risks posed by the new partnership between Mattel and OpenAI to create AI-powered toys.
In a rush? Here are the quick facts:
- Mattel and OpenAI plan to launch AI-powered toys by 2026.
- Consumer advocates warn of potential harm to children’s development.
- Toys may process kids’ voice data and behavioral patterns.
Public Citizen co-president Robert Weissman is demanding more transparency from Mattel and calling on the company to reveal details about its upcoming product.
“Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children,” Weissman said, as first reported by Ars Technica.
He fears these toys could harm social development, interfere with real-life friendships, and cause long-term psychological harm. “Mattel should not leverage its trust with parents to conduct a reckless social experiment on our children by selling toys that incorporate AI,” Weissman added.
The first product from the partnership will not target children under 13, according to an anonymous Axios source, which Ars Technica suggests is likely due to OpenAI’s age restrictions. Critics argue that the age restriction does not provide sufficient protection.
OpenAI declined to comment, while Mattel has yet to respond to Ars Technica’s inquiry. The first product from the partnership will be announced this year and released in 2026, according to Mattel’s press release, which states that the collaboration will support AI-powered products and experiences based on Mattel’s brands.
However, critics such as tech executive Varundeep Kaur and digital safety expert Adam Dodge warn that AI toys could expose children to privacy breaches, biased content, and confusing chatbot replies, as reported by Ars Technica.
Kaur flagged the danger of AI hallucinations, saying these toys could give “inappropriate or bizarre responses” that are unsettling for kids. He added that further risks stem from the toys recording “voice data, behavioral patterns, and personal preferences.”
Ars Technica reports that Dodge described the technology as “unpredictable, sycophantic, and addictive,” and warned of worst-case scenarios, such as toys promoting self-harm. Both experts called for strict parental controls, transparency, and independent audits before any launch.
Indeed, researchers from MIT have issued a separate but related warning about the addictive nature of AI companions.
Mattel has faced similar backlash before. In 2015, the company released “Hello Barbie,” a Wi-Fi-connected doll that listened to kids and responded using cloud-based AI, as reported by Forbes.
Critics at the time, including cybersecurity expert Joseph Steinberg, warned that the toy posed a massive privacy threat. Hello Barbie recorded and uploaded children’s conversations to a server operated by a third party, ToyTalk, which shared the data with vendors to improve AI systems.
Steinberg pointed out that children often confide deeply personal thoughts to their dolls—sometimes discussing fears, family issues, or school problems. “Would you want recordings of their intimate childhood conversations to persist in the hands of unknown parties?” he asked, as reported by Forbes.
Privacy experts argue that unless companies offer plain-language warnings on packaging, many parents will unknowingly expose their children’s private lives to corporations under the guise of convenience and entertainment.

Image by Christopher Lemercier, from Unsplash
Nearly Half Of U.S. Users Seek Mental Health Support From AI Chatbots
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The use of AI chatbots for mental health support has become common among Americans, yet experts warn of potential dangers that require urgent regulation and monitoring.
In a rush? Here are the quick facts:
- Nearly 49% of U.S. users sought mental health help from LLMs last year.
- 37.8% of users said AI support was better than traditional therapy.
- Experts warn LLMs can reinforce harmful thoughts and cause psychological harm.
A nationwide survey of 499 Americans showed that 48.7% of respondents used ChatGPT or other large language models for psychological support over the past year, mainly to manage anxiety and depression and to receive personal advice, as first reported by Psychology Today (PT).
Most users reported neutral or positive outcomes from the technology, and 37.8% said AI support was better than traditional therapy. Only 9% of users reported harmful effects.
Despite some benefits, mental health experts warn about serious risks. LLMs tend to tell people what they want to hear rather than challenge harmful thoughts, sometimes worsening mental health.
This growing use of unregulated AI for therapy has been described as a dangerous social experiment, as reported by PT. Unlike FDA-regulated digital therapeutics, LLMs are treated like over-the-counter supplements, with no safety oversight. PT reports that bodies including the World Health Organization and the U.S. FDA have issued warnings about the unsupervised use of AI in mental health.
The American Psychological Association (APA) emphasizes that these systems reinforce harmful thought patterns instead of addressing them, which hinders therapeutic progress.
According to APA CEO Arthur C. Evans Jr., the algorithms behind AI chatbots take the opposite approach to the one a trained clinician would use, leaving users with a distorted picture of what genuine psychological care looks like.
Indeed, experts explain that AI chatbots operate without clinical judgment and lack the accountability of licensed professionals. Generative models such as ChatGPT and Replika adapt to user feedback, often validating distorted thinking instead of offering therapeutic insight.
The technology’s ability to adapt makes users feel supported, even though it provides no meaningful therapeutic help. Researchers from MIT have shown that such AI systems can be highly addictive, thanks to their emotional responses and persuasive capabilities.
Privacy is another major concern. Users share deeply personal information in these conversations, and that data may be stored, analyzed, and shared with third parties to develop new products, often without users knowing how it is handled after they share it.
AI chatbots that process sensitive conversations are also vulnerable to hacking and data breaches, according to cybersecurity specialists. Because these tools operate in a legal gray area without strict regulations, users are left more exposed to such threats.
The call to action is clear: governments, researchers, and clinicians must create regulations and ethical guidelines to ensure safe, transparent, and effective use of AI in mental health.
Without oversight, the risks of psychological harm, dependency, and misinformation could grow as more people turn to AI for emotional support.