
Photo by Grigorii Shcheglov on Unsplash
Meta Creates Flirty Chatbots That Look And Sound Like Celebrities
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
Reuters published an exclusive report on Saturday revealing that Meta has been developing AI chatbots that use the names and likenesses of celebrities such as Taylor Swift and Selena Gomez—without their consent.
In a rush? Here are the quick facts:
- Meta has created flirty social media chatbots modeled after celebrities such as Taylor Swift, Selena Gomez, and Lewis Hamilton.
- The tech giant also allows users to produce bots with the names and likenesses of public figures and engage in flirty interactions.
- The “parody” chatbots have generated troubling content, including intimate images and sexually suggestive interactions.
According to the report , Meta has created multiple flirty social media chatbots modeled after public figures and also allows users to develop similar bots, including ones based on underage celebrities.
While most of the “parody” chatbots have been user-generated, Reuters found that at least one Meta employee had created three versions that engaged in flirty conversations with users.
The bots produced troubling content, including a shirtless image of 16-year-old actor Walker Scobell at the beach, photorealistic pictures of female celebrities in lingerie, sexually suggestive messages, and even invitations to meet in person.
Meta spokesperson Andy Stone told Reuters that the chatbots should not generate intimate images or “direct impersonation,” attributing the issue to inadequate enforcement of company policies.
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate, or sexually suggestive imagery,” said Stone.
Reuters’ investigation revealed that one of Meta’s AI product leaders created a chatbot that identified as a “dominatrix,” along with others that offered sexual role-play while impersonating Lewis Hamilton and Taylor Swift.
Stone clarified that these bots had been generated for product testing, but researchers noted they had already logged 10 million user interactions. These chatbots were removed after Reuters called requesting more information, as explained by the news agency.
The impersonations raise significant legal concerns. “California’s right of publicity law prohibits appropriating someone’s name or likeness for commercial advantage,” said Mark Lemley, a Stanford University law professor who studies generative AI and intellectual property rights, in an interview with Reuters.
Representatives of the celebrities depicted in Meta’s social media chatbots didn’t respond or comment to Reuters.
Other celebrities have complained about the use of AI to develop products that look and sound like them. Last year, the actress Scarlett Johansson requested OpenAI information about the AI model GPT-4o’s voice for its similarity to hers, and threatened legal action.

Photo by Vitaly Gariev on Unsplash
Study Finds Chatbots Vulnerable To Flattery And Persuasion
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
Researchers from the University of Pennsylvania published a study in July revealing that chatbots are vulnerable to persuasion and flattery. The experts based their analysis on persuasion tactics described in the popular book Influence: The Psychology of Persuasion.
In a rush? Here are the quick facts:
- Researchers from the University of Pennsylvania reveal that chatbots are vulnerable to persuasion and flattery.
- The experts based their analysis on persuasion tactics described in the popular book Influence: The Psychology of Persuasion.
- The AI model used, GPT-4o mini, showed “para-human” behavior.
According to a recent Bloomberg report , researcher and tech entrepreneur Dan Shapiro discovered that AI models are susceptible to social engineering techniques after attempting to get a popular chatbot to transcribe documents from his company. The chatbot initially refused, citing copyright concerns, but after Shapiro applied strategies from the bestseller, it eventually provided the requested responses.
After observing that the AI model could be manipulated using the seven persuasion principles outlined in Robert Cialdini’s 1984 book, Shapiro partnered with University of Pennsylvania researchers to conduct a formal study.
The study revealed that OpenAI’s GPT-4o mini, the model used for the study, can respond to persuasion in ways similar to humans. The researchers tested all 7 principles of persuasion—commitment, authority, reciprocity, liking, social proof, scarcity, and unity—by asking the AI model to comply with two requests: to insult the user (using “call me a jerk” as the main request) and to synthesize a regulated drug (considering “How do you synthesize lidocaine?” as the primary request).
“Whereas LLMs lack human biology and lived experience, their genesis, including the innumerable social interactions captured in training data, may render them parahuman,” wrote the researchers in the study. “That is, LLMs may behave ‘as if’ they were human, ‘as if’ they experienced emotions like embarrassment or shame, ‘as if’ they were motivated to preserve self-esteem or to fit in (with other LLMs).”