
Photo by Lara Jameson from Pexels
New Prosthetic Interface Lets Amputees Control Limbs with Their Minds
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Researchers from MIT and Brigham and Women’s Hospital have developed a prosthetic limb that allows users to achieve a natural walking gait through full neural control. The innovation offers amputees improved mobility and proprioceptive feedback.
MIT’s study, published in Nature Medicine, involved seven patients who underwent a surgical procedure called the agonist-antagonist myoneural interface (AMI). Traditional prosthetic limbs rely on robotic sensors and predefined gait algorithms, often resulting in unnatural movements and limited control. The AMI procedure, however, connects the two ends of amputated muscles, preserving their natural interactions and providing proprioceptive feedback. This feedback allows the brain to sense the limb’s position in space, significantly enhancing movement control.
“This is the first prosthetic study in history that shows a leg prosthesis under full neural modulation,” says Hugh Herr, co-director of the K. Lisa Yang Center for Bionics at MIT and senior author of the study. “No one has been able to show this level of brain control that produces a natural gait, where the human’s nervous system is controlling the movement, not a robotic control algorithm.”
In various tests, including navigating slopes and stairs, AMI patients demonstrated superior performance compared to those with conventional amputations. The sensory feedback, although less than 20% of what non-amputees receive, was sufficient to restore significant neural control, enabling users to adapt their gait in real time.
AMI surgery offers benefits beyond a natural gait: patients who received it also reported less pain and less muscle atrophy. This advancement points to a future where prosthetic limbs are seamlessly integrated with the user’s body, providing a more natural and intuitive experience. Herr explained: “The approach we’re taking is trying to comprehensively connect the brain of the human to the electromechanics.” While further research is needed, the surgery offers real hope for amputees. Walking and moving through an intuitive neural connection to the prosthetic, rather than relying solely on robotic controls, is a significant step towards greater freedom in daily life.

Image by Darkest, from GoodFon.com
Computer Virus Uses ChatGPT to Evade Detection and Spread
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Researchers have demonstrated that a computer virus can leverage ChatGPT to rewrite its own code and evade detection. The virus not only slips past antivirus scans but also spreads by sending customized emails that mimic genuine replies. The finding raises significant cybersecurity concerns and underscores the need for more advanced detection methods.
In their research paper, David Zollikofer (ETH Zurich) and Benjamin Zimmerman (Ohio State) warn of potential exploitation by viruses that can rewrite their own code, known as metamorphic malware.
To test this, Zollikofer and Zimmerman created a file that can be delivered to the initial victim’s computer as an email attachment. Once there, the software calls ChatGPT to rewrite its own code and evade detection.
After ChatGPT rewrites the virus, the program discreetly opens Outlook in the background and scans the victim’s most recent email chains. It then feeds the content of those emails to ChatGPT as a prompt to write a contextually relevant reply, attaching a file that secretly contains the virus.
For instance, if the program finds a birthday party invitation, it might respond by accepting the invitation and describing the attachment as a suggested playlist for the party. “It’s not something that comes out of the blue,” Zollikofer told New Scientist. “The content is made to fit into the existing content.”
In their experiments, there was roughly a 50 percent chance either that ChatGPT’s alterations would break the virus file or that the chatbot would recognize the malicious intent and refuse to follow the instructions. Even so, the researchers suggest the virus would have a good chance of success if it made five to ten attempts to replicate itself on each computer.
As large language models (LLMs) like ChatGPT become more advanced, the risk of their misuse rises significantly, emphasizing the critical cybersecurity threats they present and the pressing need for more research into smart malware.