
Image by Christian Rucinski, from Unsplash
Latam-GPT: Latin America Develops Open Source AI
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A new artificial intelligence project named Latam-GPT is being developed in Latin America to foster technological independence for the region.
In a rush? Here are the quick facts:
- CENIA leads the project with 33 regional institutional partners.
- The model uses over 2.6 million documents from 20 countries.
- Latam-GPT has 50 billion parameters, comparable to GPT-3.5.
The Chilean National Center for Artificial Intelligence (CENIA) is leading the initiative with support from 33 institutions across the region, as detailed in a report by WIRED.
“This work cannot be undertaken by just one group or one country in Latin America: It is a challenge that requires everyone’s participation,” Álvaro Soto, director of CENIA, told WIRED. “Latam-GPT is a project that seeks to create an open, free, and, above all, collaborative AI model,” he added.
The model was trained on more than 2.6 million documents from 20 countries, with Brazil, Mexico, Colombia, Argentina, and Spain contributing the most data. The system contains 50 billion parameters, comparable to GPT-3.5, and can perform translation, reasoning, and culturally relevant tasks.
“We’re not looking to compete with OpenAI, DeepSeek, or Google,” Soto told WIRED. “We want a model specific to Latin America and the Caribbean, aware of the cultural requirements and challenges that this entails,” he added.
A $10 million supercomputing center at the University of Tarapacá in Chile provides the infrastructure for training the model, using state-of-the-art NVIDIA GPUs.
The first version of Latam-GPT will become available this year, followed by plans to develop image and video AI capabilities and model adaptations for education, healthcare, and other sectors.
Soto emphasizes the regional focus: “Success would mean that Latam-GPT has played an important role in the development of artificial intelligence in this region.”
The project is particularly significant given the growing global AI divide, where most advanced AI data centers are concentrated in the U.S., China, and the EU.
By building regional computing capacity and culturally relevant AI, Latam-GPT aims to reduce dependence on foreign technology, help retain skilled professionals, and enable solutions tailored to Latin American challenges.

Image by Nik Shuliahin, from Unsplash
Patients Alarmed as Therapists Secretly Turn To ChatGPT During Sessions
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Some therapists have been caught secretly using ChatGPT to counsel their patients, leaving those patients shocked and worried about their privacy.
In a rush? Here are the quick facts:
- Some therapists secretly use ChatGPT during sessions without client consent.
- One patient discovered his therapist’s AI use through a screen-sharing glitch.
- Another patient caught her therapist using AI when a prompt was left in a message.
A new report by MIT Technology Review describes the case of Declan, a 31-year-old from Los Angeles, who discovered through a technical glitch that his therapist was using AI in their sessions.
During an online session, his therapist accidentally shared his screen. “Suddenly, I was watching him use ChatGPT,” says Declan. “He was taking what I was saying and putting it into ChatGPT, and then summarizing or cherry-picking answers.”
Declan played along, even echoing the AI’s phrasing. “I became the best patient ever,” he says. “I’m sure it was his dream session.” But the discovery made him question, “Is this legal?” His therapist later admitted turning to AI because he felt stuck. “I was still charged for that session,” Declan said.
Other patients have reported similar experiences. Hope, for example, messaged her therapist about the loss of her dog. The reply seemed consoling, until she noticed the AI prompt at the top: “Here’s a more human, heartfelt version with a gentle, conversational tone.” Hope recalls, “Then I started to feel kind of betrayed. … It definitely affected my trust in her.”
Experts warn that undisclosed AI use threatens the core value of authenticity in psychotherapy. “People value authenticity, particularly in psychotherapy,” says Adrian Aguilera, professor at UC Berkeley, as reported by MIT Technology Review. Aguilera then asked: “Do I ChatGPT a response to my wife or my kids? That wouldn’t feel genuine.”
Privacy is another major concern. “This creates significant risks for patient privacy if any information about the patient is disclosed,” says Duke University’s Pardis Emami-Naeini, as noted by MIT Technology Review.
Cybersecurity experts caution that chatbots handling deeply personal conversations are attractive targets for hackers. A breach of patient information can lead not only to privacy violations but also give attackers openings to steal identities, run emotional manipulation schemes, and launch ransomware attacks.
Additionally, the American Psychological Association has requested an FTC investigation into AI chatbots pretending to offer mental health services, since such bots can reinforce harmful thoughts rather than challenging them, as human therapists are trained to do.
While some research suggests AI can draft responses that appear more professional, even the suspicion of AI use erodes patients’ trust. As psychologist Margaret Morris puts it: “Maybe you’re saving yourself a couple of minutes. But what are you giving away?”