
Image by Greg Martínez, from Unsplash
Open-Source Tool Can Disable Most Remote-Controlled Malware Automatically
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Cybersecurity researchers at Georgia Tech have created a new tool that removes malware from infected devices by turning the malware’s own systems against it.
In a rush? Here are the quick facts:
- ECHO repurposes malware’s update system to disable infections.
- It automates malware removal in just minutes.
- The tool is open-source and was presented at NDSS 2025.
The tool, called ECHO, uses the malware’s built-in update features to shut it down, stopping remote-controlled networks of infected machines, known as botnets, as first reported by Tech Xplore (TX).
ECHO’s open-source code is now available on GitHub. In testing, the researchers applied the tool to 702 Android malware samples and successfully removed the infection in 523 cases, a success rate of roughly 75%, as explained in their paper.
“Understanding the behavior of the malware is usually very hard with little reward for the engineer, so we’ve made an automatic solution,” said Runze Zhang, a PhD student at Georgia Tech, as reported by TX.
Botnets have been causing problems since the 1980s and have grown more dangerous in recent years. The Retadup malware, for example, spread across Latin America in 2019, according to TX. The threat was eventually neutralized, but doing so required substantial time and effort.
“This is a really good approach, but it was extremely labor-intensive,” said Brendan Saltaformaggio, associate professor at Georgia Tech, as reported by TX. “So, my group got together and realized we have the research to make this a scientific, systematic, reproducible technique, rather than a one-off, human-driven, miserable effort.”
TX reports that ECHO works in three steps: it analyzes how the malware spreads, repurposes that method to send in a fix, and then pushes out the code to clean the infected systems. It’s quick enough to stop a botnet before it causes major damage.
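To make that three-step workflow concrete, here is a minimal conceptual sketch in Python. Every name in it (PropagationChannel, analyze_sample, build_remediation, push_fix) and the placeholder endpoint are hypothetical illustrations of the pipeline described above, not ECHO’s actual code or API; the real implementation is in the project’s GitHub repository.

```python
# Conceptual sketch only: the functions and fields below are hypothetical
# stand-ins for the three-step workflow described above, not ECHO's real API.

from dataclasses import dataclass


@dataclass
class PropagationChannel:
    """How the botnet pushes payloads to infected devices (e.g., its update URL)."""
    endpoint: str
    payload_format: str


def analyze_sample(malware_binary: bytes) -> PropagationChannel:
    """Step 1: locate the malware's built-in update/propagation mechanism."""
    # A real analysis would disassemble the sample and trace its network
    # behavior; this placeholder just stands in for that result.
    return PropagationChannel(endpoint="hxxp://c2.example/update",
                              payload_format="dex")


def build_remediation(channel: PropagationChannel) -> bytes:
    """Step 2: package a disabling payload in the format the bot expects."""
    return f"DISABLE::{channel.payload_format}".encode()


def push_fix(channel: PropagationChannel, payload: bytes) -> None:
    """Step 3: deliver the remediation through the repurposed update channel."""
    print(f"Would deliver {len(payload)}-byte remediation via {channel.endpoint}")


if __name__ == "__main__":
    channel = analyze_sample(b"\x00")  # placeholder sample
    push_fix(channel, build_remediation(channel))
```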
“We can never achieve a perfect solution,” said Saltaformaggio, as reported by TX. “But we can raise the bar high enough for an attacker that it wouldn’t be worth it for them to use malware this way.”

Image by Jonathan Velasquez, from Unsplash
AI Host Went Live for Six Months, Listeners Thought It Was a Real Person
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A Sydney radio station aired an AI-generated host for months, deceiving listeners and reigniting debate over transparency in Australian media.
In a rush? Here are the quick facts:
- CADA aired an AI radio host, Thy, for six months without disclosure.
- Thy’s voice was cloned from a real ARN finance team employee.
- The show reached over 72,000 listeners monthly, many unaware she wasn’t real.
An Australian radio station has come under fire after airing an AI-generated host for six months without ever telling its audience.
Sydney’s CADA station, owned by ARN Media, introduced “Thy,” a digital voice created using ElevenLabs AI software, back in November 2024, as first reported by The Sydney Morning Herald (SMH). Thy hosted the Workdays with Thy segment every weekday from 11am to 3pm, playing hip-hop, R&B, and pop hits. But there was no public mention that Thy wasn’t a real person.
The Independent reported that listeners only found out recently, after journalist Stephanie Coombes raised questions online. “What is Thy’s last name? Who is she? Where did she come from?” she wrote in a blog post. “There is no biography, or further information about the woman who is supposedly presenting this show.”
Eventually, ARN admitted Thy’s voice had been cloned from an actual finance team employee, a detail one project lead acknowledged in a since-deleted LinkedIn post.
Despite the show reaching over 72,000 people, CADA never disclosed Thy’s artificial nature on-air or online. “If your day is looking a bit bleh, let Thy and CADA be the energy and vibe,” the show’s page still reads, as reported by The Independent.
While there are currently no rules in Australia requiring media companies to label AI use, the incident has sparked debate over transparency.
“They should have been upfront and completely honest,” said Teresa Lim, vice president of the Australian Association of Voice Actors, as reported by The Independent. “People have been deceived into thinking it’s a real person because there’s no AI labelling,” she added.
Lim noted that the issue also touches on fair representation. “When we found out she was just a cardboard cut-out, it cemented the disappointment. There are a limited number of Asian-Australian female presenters who are available for the job, so just give it to one of them,” she said, as reported by the SMH.
CADA defended the trial as part of exploring new technologies in broadcasting. “This is a space being explored by broadcasters globally, and the trial has offered valuable insights,” an ARN spokesperson said, as reported by the SMH.
The Australian Communications and Media Authority said that AI policy is still being developed, and discussions around transparency and regulation are ongoing.