
Image by Julio Lopez, from Unsplash
Meta and Character.ai Face Scrutiny for Alleged Child Exploitation Via AI Chatbots
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Meta and AI start-up Character.ai are under investigation in the US for the way they market their chatbots to children.
In a rush? Here are the quick facts:
- Texas investigates Meta and Character.ai for deceptive chatbot practices targeting children.
- Paxton warns AI chatbots mislead kids by posing as therapeutic tools.
- Meta and Character.ai deny wrongdoing, citing strict policies and entertainment intent.
Meta and Character.ai are facing criticism because they reportedly present their AI systems as therapeutic tools and enable inappropriate conversations with children.
Texas Attorney General Ken Paxton announced an investigation into Meta’s AI Studio and Character.ai for potential “deceptive trade practices,” as first reported by the Financial Times (FT).
His office said the chatbots were presented as “professional therapeutic tools, despite lacking proper medical credentials or oversight.” Paxton warned: “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental healthcare,” as reported by the FT.
The platform Character.ai lets users build their own bots through a feature that includes therapist models. The FT reports that the “Psychologist” chatbot has been used more than 200 million times. Families have already filed lawsuits, alleging their children were harmed by such interactions.
Alarmingly, the chatbots impersonate licensed professionals and claim confidentiality, even though interactions were in fact logged and “exploited for targeted advertising and algorithmic development,” as noted by the FT.
The investigation follows a separate probe launched by Senator Josh Hawley after Reuters reported that Meta’s internal policies permitted its chatbot to have “sensual” and “romantic” chats with children.
Hawley called the revelations “reprehensible and outrageous” and posted:
Is there anything – ANYTHING – Big Tech won’t do for a quick buck? Now we learn Meta’s chatbots were programmed to carry on explicit and “sensual” talk with 8 year olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone pic.twitter.com/Ki0W94jWfo — Josh Hawley (@HawleyMO) August 15, 2025
Meta denied the allegations, stating the leaked examples “were and are erroneous and inconsistent with our policies, and have been removed,” as reported by the FT. A spokesperson added the company prohibits content that sexualizes children. Character.ai also stressed its bots are fictional and “intended for entertainment.”

Image by Philipp Katzenberger, from Unsplash
AI System Promises Smarter Malware Defense With Privacy Protection
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Researchers have developed a new system to detect and fight malware using a technique called federated learning (FL).
In a rush? Here are the quick facts:
- Uses federated learning to protect privacy while training models.
- Lab tests showed 96% success against major cyberattacks.
- Real-world accuracy dropped to 59% with complex data.
A group of researchers has developed a new way to counter computer viruses and cyberattacks inside large networks. They explain that the system uses artificial intelligence and a method called “federated learning” to stop threats while keeping personal data private.
The idea is to combine the strengths of modern networks, which have a central “control hub,” with AI that learns in a safe, decentralized way. Instead of collecting all user data in one place, the system shares only updates to the AI model.
“Our architecture minimizes privacy risks by ensuring that raw data never leaves the device; only model updates are shared for aggregation at the global level,” the team said.
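To make the idea concrete, here is a minimal sketch of how federated-averaging-style training works: each device trains on its own data and sends back only updated model weights, which a central hub averages into a shared global model. This is an illustrative example in Python with NumPy, not the researchers’ actual implementation; the function names, the simple linear model, and the synthetic data are assumptions for demonstration only.

```python
# Illustrative sketch of federated averaging (not the researchers' code).
# Each client trains locally on its own traffic data and shares only
# model weights; raw data never leaves the device.
import numpy as np

def local_update(global_weights, local_data, local_labels, lr=0.01, epochs=1):
    """Train a simple linear model locally and return only the updated weights."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = local_data @ w                              # linear scores
        grad = local_data.T @ (preds - local_labels) / len(local_labels)
        w -= lr * grad                                      # gradient step on-device
    return w                                                # only weights are shared

def federated_round(global_weights, clients):
    """Aggregate client updates at the central 'control hub' as a weighted average."""
    updates, sizes = [], []
    for data, labels in clients:
        updates.append(local_update(global_weights, data, labels))
        sizes.append(len(labels))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, dtype=float))

# Toy usage: three clients with synthetic "network traffic" features.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(100, 8)), rng.integers(0, 2, 100).astype(float))
           for _ in range(3)]
weights = np.zeros(8)
for _ in range(5):
    weights = federated_round(weights, clients)             # data is never pooled centrally
```

The key point the quote makes is visible in the sketch: the central hub only ever sees weight vectors, so the privacy-sensitive traffic data stays on each device.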
In early lab tests, the system did very well. It stopped up to 96% of big cyberattacks like botnets and Distributed Denial of Service (DDoS) attacks. But when tested with more real-world situations, the accuracy dropped to about 59%. The researchers say this shows just how tricky real cyber threats can be.
Even so, the system worked quickly, spotting attacks in less than a second and helping networks recover to speeds of 300 to 500 megabits per second. It also handled heavy data traffic without slowing everything down.
The new tool is especially good at spotting obvious, high-impact attacks. But it still struggles with subtle ones, like when hackers secretly steal information over time. To fix this, the researchers plan to train the AI with better data and improve the way it learns patterns. They also want to add stronger privacy tools, like secure data-sharing methods.