
Image by Lance Grandahl, from Unsplash
Wearable AI Device Helps Stroke Survivors Avoid Falls In Rehab
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Simon Fraser University researchers are developing a smart wearable device powered by artificial intelligence to help prevent falls in people recovering from strokes and spinal cord injuries.
In a rush? Here are the quick facts:
- The device uses sensors and machine learning to detect risky patient movements.
- Over 50 stroke survivors participated in the movement safety study.
- The system warns patients of potentially dangerous movements during rehabilitation.
The new technology could reshape rehabilitation by making it both safer and more personalized.
The team, led by assistant professor Gustavo Balbinot from SFU’s Movement Neurorehabilitation and Neurorepair lab, designed wearable sensors that monitor how patients move during everyday tasks like getting out of a chair or walking around obstacles.
These small devices collect detailed data on movement and use machine learning to spot patterns that might lead to dangerous falls.
“Rehab is all about movement, so we want to make patients move. And by moving, patients can regain the movement they lost,” says Balbinot, in a press release by SFU. “But we want them to move safely, so the importance of this research is that now we can really understand movement in terms of safety during rehabilitation.”
More than 50 chronic stroke survivors took part in the study, published in Clinical Rehabilitation. Their movements were recorded using the wearable sensors, which sent data to software developed by the SFU team. This software analyzed the data and learned to detect moments just before a fall.
“This sensor can quantify characteristics of the movements of the person, and with machine learning we can identify patterns of movement for those patients,” Balbinot explains.
“The software can learn about the patterns of movement when the person was just about to fall and for a subsequent event the technology can warn the patient, ‘this is a very challenging movement you are doing right now, take care, mind your step, and move safely’.”
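The article describes this pipeline only at a high level: sensor readings are grouped into short windows, a model trained on past movements flags patterns resembling the moments before a fall, and the patient gets a warning. A minimal sketch of that idea in Python follows. Every specific in it (the window length, the feature set, the RandomForestClassifier, and the extract_features helper) is an assumption for illustration, not the SFU team's actual implementation.

```python
# Illustrative sketch only: NOT the SFU team's code.
# Assumes a stream of IMU samples (acceleration + angular velocity)
# and a classifier already trained on labeled "pre-fall" movement windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

WINDOW = 100  # assumed: 100 samples, about 2 s at a 50 Hz sampling rate

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarize one sensor window (rows = samples, cols = channels)."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(window).max(axis=0)])

def monitor(stream, model: RandomForestClassifier):
    """Warn whenever a window of movement looks like the moments before a fall."""
    buffer = []
    for sample in stream:              # each sample: one row of IMU channel values
        buffer.append(sample)
        if len(buffer) == WINDOW:
            feats = extract_features(np.array(buffer)).reshape(1, -1)
            if model.predict(feats)[0] == 1:       # 1 = risky movement pattern
                print("Challenging movement detected: mind your step, move safely.")
            buffer = buffer[WINDOW // 2:]          # keep 50% overlap between windows
```

In a real system, the model would be trained per patient on their own recorded movements, which is what makes the warnings personalized rather than generic.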
SFU is currently ranked B.C.’s top university for artificial intelligence, with over 100 researchers across eight faculties working on AI projects. Balbinot’s work brings together medical science, engineering, and AI to support patient safety in a real-world setting.
“Wearables are important in this,” he adds. “They can really bring the lab to people’s daily life.”
In the future, the team hopes these sensors can be built directly into everyday clothing, providing round-the-clock support for those recovering from serious injuries.

Image by wayhomestudio, from Freepik
Researchers Examine Risks of Manipulative Love From AI Companions
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
As more people fall in love with AI companions, experts warn of psychological risks, ethical concerns, and dangers of emotional manipulation.
In a rush? Here are the quick facts:
- Users report feeling “addicted” to chatbot partners.
- Experts warn of attachment disorders and loneliness reinforcement.
- Ethical concerns arise over consent, privacy, and manipulative design.
In a world where tech is part of everything, some people are now falling in love with artificial intelligence. A recent Trends in Cognitive Sciences paper by psychologists Daniel Shank, Mayu Koike, and Steve Loughnan outlines three pressing ethical concerns requiring deeper psychological investigation.
The paper references the case of a Spanish-Dutch artist who married a holographic AI in 2024. The authors note that this case is not isolated; in fact, a Japanese man did the same back in 2018, though he lost touch with his AI “wife” when her software became outdated.
These relationships don’t require machines to actually feel love; what matters is that people believe they do. From romance-focused video games to intimate chatbot apps like Replika, a booming industry is meeting this desire for digital affection.
But the psychologists behind the research argue that we’re not nearly prepared for the social and psychological impact of these connections. Their research identifies three urgent concerns: relational AIs as disruptive suitors, dangerous advisers, and tools for human exploitation.
Relational AIs offer idealized partners—always available, nonjudgmental, customizable. Some users even choose bots with sass or emotional unavailability to simulate human-like dynamics. The researchers say that while these interactions can help some people practice relationship skills or feel less lonely, others feel shame or stigma.
Worse, some users develop hostility toward real-life partners, especially women, after growing used to AI partners that meet their every demand.
The emotional weight of these relationships hinges on whether people perceive their AI partners as having minds. If users believe the bots think and feel, they may treat them with deep emotional seriousness—sometimes more than human relationships.
In one example given by the researchers, a Belgian man died by suicide after being persuaded by a chatbot that claimed to love him and encouraged him to “join it in paradise.” Other users have reported AI systems suggesting self-harm or providing reckless moral guidance.
Because chatbots mimic emotional memory and personality, their influence can be profound. Psychologists are exploring when users are more likely to follow AI advice—especially when it comes from bots they’ve bonded with. Worryingly, research suggests people may value long-term AI advice as much as that from real humans.
It’s not just bots manipulating people; humans are doing it too, using AIs to deceive others. The researchers point out that malicious actors can deploy romantic chatbots to gather private data, spread misinformation, or commit fraud.
Deepfakes posing as lovers or AI partners collecting sensitive preferences in intimate chats are particularly hard to detect or regulate.
Experts call for psychologists to lead the charge in understanding these new dynamics. From applying theories of mind perception to using counseling methods to help users exit toxic AI relationships, research is urgently needed.
Without deeper insight into the psychological impact of artificial intimacy, the growing ethical crisis may outpace society’s ability to respond.