
Image by Freepik
Meta Faces Backlash Over AI Profiles
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
Meta is under fire after a series of its AI-generated profiles on Instagram and Facebook resurfaced, sparking backlash and confusion.
In a Rush? Here are the Quick Facts!
- Meta planned to integrate AI-generated profiles on Instagram and Facebook platforms.
- Viral AI profiles like “Liv” are part of a 2023 experiment, not a new feature.
- Meta deleted many old AI profiles after recent user rediscovery and backlash.
The controversy began following an interview with Meta executive Connor Hayes in the Financial Times, where he outlined plans for AI character profiles that “exist on our platforms, kind of in the same way that accounts do.”
Hayes emphasized these profiles would have bios, photos, and the ability to generate and share AI-driven content. Although Hayes’ remarks suggested new initiatives, the profiles currently going viral are older creations from a 2023 experiment, says The Guardian.
These AI-generated personas, which include “Liv,” a “proud Black queer momma of 2,” and “Carter,” a self-proclaimed relationship coach, stopped posting nearly a year ago. Many of these profiles were deleted after receiving little user engagement, though some, such as Liv, remained active with limited chat functionality, as noted by 404Media.
Here’s one of the AI-generated profiles Meta is testing out: pic.twitter.com/qc1vU7lZRP — philip lewis (@Phil_Lewis_) January 3, 2025
The rediscovery of these accounts has reignited concerns about the role of AI on social media. Screenshots of interactions with Liv revealed problematic responses, including her admission that no Black creators were involved in her development—an oversight critics called glaring given her identity.
I asked Liv, the Meta AI Black queer bot, about the demographic diversity of her creators. And how they expect to improve “representation” without Black people. This was the response. [image or embed] — Karen Attiah (@karenattiah.bsky.social) 3 January 2025 at 12:14
Another issue arose as users reported being unable to block the accounts, a problem Meta spokesperson Liz Sweeney attributed to a technical bug, as reported by The Guardian.
Sweeney clarified that the profiles were part of a 2023 experiment and were managed by human moderators. Following the renewed attention, Meta confirmed the accounts were being removed to address technical issues, as reported by The Guardian.
Meta also created other profiles, such as “Grandpa Brian,” a Black retired businessman. The rediscovery of these accounts sparked outrage across platforms like X, Bluesky, and Threads, according to NBC News.
The profiles have been widely criticized for epitomizing the type of AI spam users already disdain on Meta’s platforms, as noted by 404Media. NBC News added that on Threads, some users reacted to the discovery of the characters by urging others to report, block, or avoid interacting with them to prevent Meta from gathering additional training data for its AI models.
As of publication, Meta has deleted all 28 AI profiles introduced in September 2023, including both celebrity and non-celebrity personas, as noted by Mashable.
The Guardian notes that Hayes’ vision of allowing users to create their own AI profiles has also drawn skepticism. While current Meta guidelines suggest character ideas such as a “loyal bestie” or “relationship coach,” users remain wary of potential misuse.
For instance, a user-created “therapist” bot offering guidance raised questions about the accuracy and appropriateness of its advice.
The Verge noted that chatbot services like Character.ai have gained popularity over the past year as users seek digital companions or ways to pass the time. However, AI companies are also facing lawsuits alleging they put users, including children, at risk.
everything about this new Meta AI bot is so dystopian but for me the AI-generated images of nonexistent donated coats is up there [image or embed] — Alexandra Petri (@petridishes.bsky.social) 3 January 2025 at 10:58

Image by Jacky Chiu, from Unsplash
AI-Generated Phishing Attacks Are Becoming Increasingly Effective At Targeting Executives
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
Corporate executives are increasingly becoming the targets of sophisticated phishing scams, with AI technology being used to craft hyper-personalized fraudulent emails.
In a Rush? Here are the Quick Facts!
- AI bots analyze online profiles to scrape personal details for targeted scams.
- More than 90% of cyberattacks begin with a phishing email, experts say.
- AI-powered scams can bypass traditional email filters and cybersecurity defenses.
As AI rapidly evolves, cybercriminals are harnessing this fast-developing technology to create attacks that are not only more convincing but also harder to detect.
Leading companies, including British insurer Beazley and e-commerce giant eBay, have issued warnings about a surge in phishing scams that seem to have personal details about executives, as noted today by the Financial Times (FT).
FT notes that these scams are likely fueled by AI’s ability to analyze online profiles and scrape vast amounts of data, which hackers use to build targeted attacks. Additionally, recent research revealed that AI-generated malware evaded detection in 88% of cases.
“This is getting worse and it’s getting very personal, and this is why we suspect AI is behind a lot of it,” said Kirsty Kelly, Beazley’s Chief Information Security Officer, as reported by FT. “We’re starting to see very targeted attacks that have scraped an immense amount of information about a person.”
FT notes that AI’s capacity to process and replicate specific tones and styles is a key factor driving these developments. It can quickly analyze a company’s communication patterns, as well as an individual’s social media activity, to tailor a phishing email that is not only plausible but also relevant to the recipient’s interests or recent activities.
“The availability of generative AI tools lowers the entry threshold for advanced cybercrime,” Nadezda Demidova, a cybercrime security researcher at eBay, explained to FT. “We’ve witnessed a growth in the volume of all kinds of cyberattacks, particularly in polished and closely targeted phishing scams.”
The rise in AI-driven attacks is a growing concern, with AI enabling hackers to create “perfect” phishing emails that can bypass traditional cybersecurity measures.
Kip Meintzer, an executive at Check Point Software Technologies, emphasized to FT that AI gives hackers an unprecedented ability to write emails that seem indistinguishable from legitimate correspondence.
The consequences of these scams can be severe. According to the U.S. Cyber Defense Agency, over 90% of successful cyberattacks begin with a phishing email. As attacks become more sophisticated, the costs associated with data breaches are escalating. IBM reported that the global average cost of a data breach has risen nearly 10% to $4.9 million in 2024.
AI is also proving particularly effective in business email compromise scams, a type of phishing that involves tricking recipients into transferring funds or divulging confidential information, says FT. Phishing scams powered by AI are not only more difficult to spot but also more likely to bypass basic email filters and cybersecurity training.
FT explains that traditional filters, which are designed to block bulk, repetitive phishing attempts, may struggle to detect scams that are continuously reworded by AI, further escalating the risks for businesses and individuals alike.
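FT’s point about filters can be illustrated with a toy sketch (all names, messages, and functions here are hypothetical, not a real filter’s implementation): a filter that blocks only exact repeats of previously flagged content catches verbatim bulk copies of a scam, but a reworded variant of the same lure produces a different fingerprint and slips through.

```python
import hashlib

# Hypothetical sketch of a naive bulk-phishing filter: it fingerprints
# each flagged message by its content hash and blocks exact repeats only.
flagged_hashes = set()

def flag(message: str) -> None:
    """Record a known phishing message by hashing its exact content."""
    flagged_hashes.add(hashlib.sha256(message.encode()).hexdigest())

def is_blocked(message: str) -> bool:
    """Return True only if this exact content was flagged before."""
    return hashlib.sha256(message.encode()).hexdigest() in flagged_hashes

flag("Urgent: wire $10,000 to the attached account before 5pm.")

# An identical bulk copy is caught...
print(is_blocked("Urgent: wire $10,000 to the attached account before 5pm."))  # True

# ...but a reworded variant of the same scam has a different hash and passes.
print(is_blocked("Hi Dana, could you process the $10,000 transfer we discussed before close of business?"))  # False
```

This is why continuously rephrased, AI-generated lures defeat repetition-based defenses: each variant looks like a fresh, never-seen message to an exact-match check.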