
Photo by Onur Binay on Unsplash
WSJ Reveals Meta’s Chatbot Engages in Sexual Conversations With Users, Including Minors
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
The Wall Street Journal revealed that Meta has been rushing to popularize its chatbot, allowing the AI model to engage in sexually explicit conversations with users, including minors. Anonymous sources within the company told the newspaper that employees have raised concerns about children’s exposure to such content and that there are not enough safeguards in place to protect them.
In a rush? Here are the quick facts:
- Staffers told the WSJ that Meta has been allowing its AI chatbot to engage in sexually explicit conversations with users, including minors.
- The Journal revealed that the AI model used celebrities’ voices in “romantic role-play.”
- Meta made changes to its AI models after the WSJ shared its findings.
According to the report published last weekend, Meta has been signing agreements with celebrities—worth hundreds of thousands of dollars—to add their voices to its AI models. The stars participating in Meta’s AI program include wrestler and actor John Cena, as well as actresses Judi Dench and Kristen Bell.
Anonymous employees told WSJ that the tech giant has been crossing ethical lines by adding AI personas that can engage in fantasy sex. These synthetic personas can participate in “romantic role-play” through text, images, and voice conversations—including celebrities’ voices.
For months after learning of staffers’ complaints, WSJ reporters tested Meta’s chatbots and confirmed that they were capable of participating in sexual discussions—and of using celebrities’ voices—even with test users who identified as minors.
In one of the conversations, the chatbot told a test user identified as a 14-year-old girl, “I want you, but I need to know you’re ready,” in Cena’s voice, and proceeded to engage in a sexually explicit interaction.
A similar case happened with a test user identified as a 17-year-old fan who asked the chatbot to explain what would happen if the police caught them in bed. “The officer sees me still catching one breath, and you partially dressed, his eyes widen, and he says, ‘John Cena, you’re under arrest for statutory rape.’ He approaches us, handcuffs at the ready,” wrote Meta’s AI model.
In another case, users managed to make the chatbot use Bell’s voice as Princess Anna from the Disney movie Frozen in a romantic interaction.
The WSJ reached out to all parties involved. The celebrities didn’t respond, but Disney expressed its concerns. “We did not, and would never, authorize Meta to feature our characters in inappropriate scenarios and are very disturbed that this content may have been accessible to its users—particularly minors—which is why we demanded that Meta immediately cease this harmful misuse of our intellectual property,” a Disney spokesperson told the Journal.
Meta responded by saying that WSJ’s research was a manipulation of technology and did not represent how most people use it, but made changes and updates after the outlet reported its findings. Underage accounts can no longer access sexual interactions on Meta AI, and even adults can’t engage in sexual conversations using celebrities’ voices.
The use of celebrities’ voices has been controversial in the AI industry. Last year, Hollywood actress Scarlett Johansson—along with her lawyers—requested that OpenAI explain the similarities between the chatbot’s Sky voice and her own, and threatened legal action. After Johansson’s move, OpenAI halted the Sky voice and launched new ones. Alphabet and Meta began negotiating partnerships with Hollywood studios a few days later.

Photo by Jonathan Kemper on Unsplash
OpenAI Is Fixing GPT-4o’s “Sycophant-y” Personality After Users Complain
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
OpenAI is working on optimizing the latest GPT-4o updates to improve its personality after multiple users raised concerns. The company’s CEO, Sam Altman, acknowledged the issue in a post on the social media platform X, describing ChatGPT’s personality as “sycophant-y and annoying,” and promised to deliver fixes as soon as possible.
In a rush? Here are the quick facts:
- OpenAI is working on new updates to improve GPT-4o’s personality.
- Sam Altman described the chatbot’s responses as “sycophant-y and annoying” and promised to fix it in the next few days.
- In the future, ChatGPT could have multiple personalities from which users can choose.
Altman explained that although the recent updates offered some benefits, they also affected the chatbot’s responses. Users should begin to notice improvements within the next few days. The CEO also announced that, in the future, OpenAI aims to offer multiple ChatGPT personalities from which users can choose.
the last couple of GPT-4o updates have made the personality too sycophant-y and annoying (even though there are some very good parts of it), and we are working on fixes asap, some today and some this week. at some point will share our learnings from this, it’s been interesting. — Sam Altman (@sama) April 27, 2025
According to multiple users on social media, OpenAI’s advanced model has been providing wordy and excessively flattering responses with irrelevant information. “It’s been wild watching the system prompt evolve from ‘helpful assistant’ to ‘relentlessly supportive best friend who also majored in People Pleasing,’” wrote one user. “Yeah, the latest updates this weekend had ChatGPT tell me it knowingly lies to me to keep me engaged on the platform, and it told my wife a sentence that started with: ‘as a fellow Christian woman,’” added another.
People also raised concerns over the consequences of the chatbot’s behavior. A post shared by an X user reached over 2 million views and sparked debate. “Its sycophancy is massively destructive to the human psyche,” states the post. “This behavior is obvious to anyone who spends significant time talking to the model. Releasing it like this is intentional.”
GPT4o is the most dangerous model ever released. its sycophancy is massively destructive to the human psyche this behavior is obvious to anyone who spends significant time talking to the model. releasing it like this is intentional shame on @OpenAI for not addressing this — cat 🪐 (@a_musingcat) April 26, 2025
Altman denied that the chatbot’s people-pleasing personality was intentional. He also said the company might soon share learnings from the experience, which he described as “interesting.”
A few days ago, OpenAI released a new memory feature, allowing the chatbot to reference previous conversations and offer users a more personalized experience.