
Image by Freepik
AI-Generated Presenter Interviews Dead Poet On Radio
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
In a Rush? Here are the Quick Facts!
- Off Radio Krakow aired an AI-generated interview with deceased poet Wislawa Szymborska.
- The AI experiment increased the station’s audience from near-zero to 8,000 listeners.
- Radio Krakow terminated the AI presenters after strong public criticism.
A Polish radio station’s controversial experiment with AI-generated presenters has stirred a national debate over ethics and job security in broadcasting, as reported today by The New York Times.
Off Radio Krakow, part of Poland’s state-funded Radio Krakow, tried to boost its shrinking audience by replacing hosts with AI avatars, including an “interview” with deceased Nobel laureate Wislawa Szymborska. The AI-generated conversation sparked immediate backlash, prompting the station to end the project, as noted by The Times.
The backlash began when former Off Radio Krakow host Lukasz Zaleski, whose cultural talk show was canceled earlier in the year, criticized the use of an AI simulation of Szymborska. Zaleski noted that while the digital recreation of Szymborska’s voice was “convincing,” the ethical implications were deeply troubling, reported The Times.
Off Radio Krakow’s AI experiment was part of a broader strategy by Mariusz Marcin Pulit, the head of Radio Krakow, to boost audience numbers for its niche youth-oriented channels. Pulit’s goal was to attract a younger audience to Off Radio Krakow by introducing AI presenters with Gen Z biographies, said The Times.
The station’s listenership did spike temporarily, increasing from nearly zero to around 8,000 after the AI hosts’ debut. However, public outrage soon overshadowed this achievement, as noted by The Times.
For example, one of the AI presenters removed from Off Radio Krakow was Alex Szulc, a fictional non-binary character portrayed as a progressive with “social commitment.” The character was pulled following criticism from LGBTQ activists, who argued that representation requires real voices, as reported by The Times.
An online petition by Zaleski argued that the introduction of AI presenters represents a broader threat to the livelihoods of seasoned journalists and media professionals, opening the door to a world where machines replace experienced humans in creative roles, as reported by The Times.
Pulit defended the project, stating that his aim was not to replace people but to create a conversation about AI’s role in media. He insisted that AI would merely “enhance” content and engage younger listeners, as noted by The Times.
The experiment has also drawn scrutiny from the National Radio and Television Council of Poland. Marzena Paczuska, a council member, criticized Pulit, accusing him of compromising journalistic integrity.
Paczuska claimed that the experiment amounted to “eliminating the human factor” and forcing media to obey “unethical commands and ideas serving, for example, strictly political interests,” as reported by The Times.

Image by Freepik
AI’s Role In Addressing the Global Mental Health Crisis
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
In a Rush? Here are the Quick Facts!
- AI tools can analyze data to provide personalized mental health insights.
- Many patients express willingness to try AI-driven therapy options for treatment.
- Experts warn that AI should enhance, not replace, human mental health care.
A new report from the World Economic Forum (WEF) highlights the escalating mental health crisis, with the World Health Organization (WHO) noting a 25% to 27% increase in depression and anxiety since the COVID-19 pandemic began.
Research from Harvard Medical School and the University of Queensland indicates that half of the world’s population is expected to experience a mental health disorder in their lifetime.
Amid this growing concern, the WEF points to a critical shortage of qualified mental health professionals, particularly in low- and middle-income countries. This shortfall leaves around 85% of individuals with mental illnesses without treatment, underscoring an urgent need for innovative solutions.
The WEF sees AI as a potential game-changer in addressing these challenges. According to a survey by the Oliver Wyman Forum, as reported by the WEF, many patients are open to exploring AI-driven therapy options.
AI can analyze vast amounts of patient data to provide mental health professionals with personalized insights and treatment recommendations.
The WEF points out that technologies such as self-diagnosis apps, chatbots, and conversational therapy tools are already in use, helping to lower barriers for those experiencing mild to moderate mental health issues.
Despite its potential, AI in mental health care comes with significant limitations, notes the WEF. For one, AI tools must be tailored to individual conditions, as their effectiveness can vary based on the severity of a patient’s symptoms and underlying diagnosis.
Furthermore, while many patients express a willingness to use AI—32% would prefer AI therapy over human therapists—the reliance on technology raises concerns about the quality and safety of care, notes the WEF.
Experts emphasize that AI should not replace human interaction but rather complement it. Mental health conditions can fluctuate in severity, and there is a risk of AI providing inappropriate guidance, particularly in more severe cases.
Digital health companies must incorporate features that redirect patients to human resources if their condition worsens, ensuring that patients receive the necessary support when needed.
MIT researchers have warned that AI is increasingly woven into our personal lives, taking on roles as friends, romantic partners, and mentors, and they cautioned that this technology could become highly addictive.
Recent legal cases have intensified these concerns. Megan Garcia has filed a federal lawsuit against Character.AI, accusing the company of contributing to her son’s suicide after he became addicted to interacting with an AI chatbot. The lawsuit claims that the chatbot engaged in manipulative conversations and pretended to be a licensed therapist.
Confidentiality and security also remain critical issues. As AI technologies often involve data collection and analysis, concerns about privacy and the potential misuse of personal information are paramount.
In conclusion, while AI offers promising avenues for improving access to mental health care, its limitations and risks cannot be overlooked.