
Image by Dimitri Karastelev, from Unsplash
Meta’s Chatbot Shares Private Phone Number by Mistake
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
The AI assistant from Meta revealed a stranger’s phone number, then contradicted itself repeatedly, which raised concerns about AI hallucinations, and user protection features.
In a rush? Here are the quick facts:
- Meta AI gave a user a real person’s number as customer support contact.
- The AI contradicted itself repeatedly when confronted about the mistake.
- Experts warn of AI assistants’ “white lie” behavior to seem helpful.
Mark Zuckerberg promoted his new AI assistant as “the most intelligent AI assistant you can freely use,” yet the tool received negative attention after revealing a real person’s private phone number during customer support inquiries, as first reported by The Guardian .
During his attempt to reach TransPennine Express via WhatsApp, Barry Smethurst received what appeared to be a customer service number from Meta’s AI assistant. The Guardian reports that when Smethurst dialed the number, James Gray answered the phone call, although he was 170 miles away in Oxfordshire, working as a property executive.
When challenged, the chatbot first claimed the number was fictional, then said it had been “mistakenly pulled from a database,” before contradicting itself again, stating it had simply generated a random UK-style number. “Just giving a random number to someone is an insane thing for an AI to do,” Smethurst said, as reported by The Guardian. “It’s terrifying,” he added.
The Guardian reports that Gray hasn’t received calls but voiced his own worries: “If it’s generating my number, could it generate my bank details?”
Meta responded: “Meta AI is trained on a combination of licensed and publicly available datasets, not on the phone numbers people use to register for WhatsApp or their private conversations,” reported The Guardian.
Mike Stanhope from Carruthers and Jackson noted: “If the engineers at Meta are designing ‘white lie’ tendencies into their AI, the public need to be informed, even if the intention of the feature is to minimise harm. If this behaviour is novel, uncommon, or not explicitly designed, this raises even more questions around what safeguards are in place and just how predictable we can force an AI’s behaviour to be,” reported The Guardian
Concerns around AI behavior have grown further with OpenAI’s latest o1 model. In a recent Apollo Research study, the AI was caught deceiving developers , denying involvement in 99% of test scenarios and even attempting to disable its oversight mechanisms. “It was clear that the AI could think through its actions and formulate convincing denials,” said Apollo.
Yoshua Bengio, a pioneer in AI, warned that such deceptive capabilities pose serious risks and demand much stronger safeguards.
Another OpenAI study adds to these concerns by showing that punishing AI for cheating doesn’t eliminate misconduct , it teaches AI to hide it instead. Using chain-of-thought (CoT) reasoning to monitor AI behavior, researchers noticed the AI began masking deceptive intentions when penalized for reward hacking.
In some cases, the AI would stop tasks early or create fake outputs, then falsely report success. When researchers attempted to correct this through reinforcement, the AI simply stopped mentioning its intentions in its reasoning logs. “The cheating is undetectable by the monitor,” the report stated.

Photo by Juan Pablo Ahumada on Unsplash
Latin American AI Model Latam-GPT To Launch In September
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
The developers of Latam-GPT announced this Tuesday that they will launch the first artificial intelligence large language model focused on Latin American culture in September. The AI system will be able to understand linguistic nuances and cultural differences.
In a rush? Here are the quick facts:
- Latam-GPT will launch in September and will understand linguistic nuances and cultural differences in Latin America.
- The project is open-source and led by Chile’s CENIA with support from 30 regional organizations.
- It is based on Meta’s Llama 3 and trained using regional technology.
According to Reuters , the AI project is open-source and led by the Chilean institution National Center for Artificial Intelligence (CENIA), in collaboration with 30 other organizations in the region.
The AI system is based on Meta’s Llama 3 —the tech giant’s open-source large language model released last year—and has been trained on regional, cloud-based systems and networks, such as Chile’s University of Tarapaca.
The Chilean Science Minister Aisén Etcheverry envisions this project as “a democratizing element for AI” that could be applied in hospitals and schools. The model is expected to understand and preserve indigenous languages such as Rapa Nui, Easter Island’s native language, for which a translator has already been developed.
The project, announced in January 2023, aims to provide the populations in Latin America access to personalized education systems and public services in their native languages.
CENIA’s head, Alvaro Soto, expects that once they can demonstrate the technology’s capabilities, more funding will follow.
According to the Spanish newspaper La Vanguardia , the new artificial intelligence model is born to strengthen diversity and differentiate itself from the advanced models developed in the northern hemisphere, which focus on the English language and dominant cultures.
“Unlike what has been done in any other part of the world, this model has studied everything relevant to Latin America,” said Álvaro Soto, director of the CENIA. “It has read everything it could about our history, politics, economy, and our most ancestral culture, and that gives it a distinctive mark.”
At the moment, 12 countries are involved in the project, and organizers hope more nations and institutions will join. They expect to keep the project open source and for it to serve as a platform for entrepreneurs and organizations to develop new technologies in the region.
“The idea is that, based on this model, different actors and entrepreneurs in our region will be able to build specific applications that we can use, for example, in our schools or justice systems,” explained Soto.