
Image by KAIST, from FMT

Advanced Wearable Robot Offers Mobility To Paraplegic Users

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

South Korean researchers have developed an advanced wearable robot that offers paraplegic users the ability to walk, climb stairs, and navigate obstacles.

In a Rush? Here are the Quick Facts!

  • The exoskeleton weighs 50 kg and uses 12 motors to mimic human joint movements.
  • WalkON Suit adapts to users’ walking styles after 20 sessions via a learning program.
  • Sensors process 1,000 signals per second to maintain balance and detect obstacles.

Named the WalkON Suit, the device was designed by the Exoskeleton Laboratory at the Korea Advanced Institute of Science and Technology (KAIST) to address the mobility challenges faced by individuals with disabilities.

Kim Seung-hwan, a paraplegic member of the KAIST team, showcased the robot’s capabilities, demonstrating how it allowed him to walk at 3.2 kph (2 mph), climb a flight of stairs, and sidestep onto a bench, as reported by Reuters.

“It can approach me wherever I am, even when I’m sitting in a wheelchair, and be worn to help me stand up, which is one of its most distinct features,” Kim explained to Reuters.

The exoskeleton weighs 50 kg (110 lb) and is constructed from aluminum and titanium. Powered by 12 electronic motors, it mimics human joint movements to enable walking and other tasks.

It also employs a sophisticated balance control system with sensors that measure posture and analyze ground forces, processing 1,000 signals per second to anticipate and adjust to the user’s movements.
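The article does not publish the suit’s control software, but the idea of a fast balance loop can be sketched in a few lines. The function name, gains, and sensor readings below are purely illustrative, not KAIST’s code; the only figures taken from the article are the 1,000-samples-per-second rate and the posture-sensing role.

```python
# Hypothetical sketch of a high-rate balance loop, assuming a simple
# proportional-derivative rule; gains and names are illustrative only.

def balance_correction(tilt_deg, tilt_rate_dps, kp=8.0, kd=0.5):
    """Return a corrective torque (N·m) from posture sensor readings."""
    return -(kp * tilt_deg + kd * tilt_rate_dps)

# At 1,000 signals per second, each control step has a 1 ms budget.
SAMPLE_RATE_HZ = 1000
step_budget_ms = 1000 / SAMPLE_RATE_HZ  # 1.0 ms per reading

# Example: leaning 2 degrees forward while tipping at 10 degrees/s
torque = balance_correction(2.0, 10.0)  # negative torque pushes back upright
```

The real controller would fuse many more signals (ground-force analysis, joint encoders) and run on embedded hardware, but the tight per-sample time budget is what makes the 1,000 Hz figure meaningful.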

The robot’s design incorporates high-power actuators and friction compensation systems, allowing it to produce the necessary force for movement while maintaining control.

Additionally, a built-in learning program adapts the robot’s functions to each user’s walking style. According to the researchers, after about 20 uses, the device creates a customized joint trajectory, providing a smoother and more efficient walking experience.
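One simple way such a learning program could build a customized trajectory is by averaging the joint-angle profile recorded in each session. The sketch below is an assumption for illustration, not the researchers’ actual method; the session data are toy numbers.

```python
# Illustrative running average of a per-session joint-angle profile.
# Names and data are hypothetical; only the idea of refining a joint
# trajectory over repeated sessions comes from the article.

def update_trajectory(avg, session, n):
    """Fold session n (1-indexed) into the running average trajectory."""
    return [a + (s - a) / n for a, s in zip(avg, session)]

# Three toy sessions of a 4-sample knee-angle profile (degrees)
sessions = [[0, 30, 60, 30], [0, 34, 58, 32], [0, 32, 62, 28]]

avg = sessions[0]
for n, sess in enumerate(sessions[1:], start=2):
    avg = update_trajectory(avg, sess, n)
# avg now holds the element-wise mean of all recorded sessions
```

After roughly 20 such updates, the averaged profile would reflect the user’s habitual gait rather than a generic preset, which is the kind of personalization the researchers describe.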

Park Jeong-su, a team member at KAIST, shared that his inspiration for the project came from the movie Iron Man. “I thought it would be great if I can help people with a robot in real life,” he said to Reuters.

The robot also features an interface that allows users to control its functions and monitor its status via a back-panel display.

Assistants can adjust its settings, while users interact with the device through intuitive control buttons. Additional technologies include muscle-imitating actuators for enhanced balance and ultra-thin actuators for efficient force production.

The WalkON Suit series has already garnered international attention. Earlier versions were showcased at the Cybathlon competition in Switzerland in 2016 and the UAE AI & Robotics for Good competition in 2017.

Each iteration has been refined to meet the distinct needs of individuals with complete or partial paralysis, reflecting ongoing advancements in wearable robotics. As researchers continue to refine the exoskeleton, the focus remains on creating a device that seamlessly integrates into daily life, providing independence and mobility for those who rely on it.


Image by Freepik

Leading AI Chatbots Show Signs Of Cognitive Impairment In Dementia Tests, Study Finds

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Almost all leading large language models (LLMs) show signs of mild cognitive impairment in tests commonly used to detect early dementia, according to research published in The BMJ.

In a Rush? Here are the Quick Facts!

  • Chatbots struggled with visuospatial and executive tasks like clock drawing and trail making.
  • Tasks like naming, attention, and language were well-performed by all chatbots.
  • Researchers say chatbots’ cognitive limitations may impede their use in clinical settings.

The findings suggest that “older” chatbot versions, like older human patients, tend to perform worse on cognitive assessments, challenging assumptions that AI might soon replace human doctors.

Advances in artificial intelligence have sparked debates about its potential to outperform human physicians, particularly in diagnostic tasks. While previous studies have highlighted LLMs’ medical proficiency, their vulnerability to human-like impairments such as cognitive decline has remained unexplored.

To address this, researchers tested the cognitive abilities of widely available chatbots—ChatGPT 4 and 4o (OpenAI), Claude 3.5 “Sonnet” (Anthropic), and Gemini 1 and 1.5 (Alphabet)—using the Montreal Cognitive Assessment (MoCA).

The MoCA is a diagnostic tool for detecting cognitive impairment and early dementia. It evaluates attention, memory, language, visuospatial skills, and executive functions through a series of short tasks.

Scores range from 0 to 30, with 26 or above generally considered normal. The chatbots were given the same instructions as human patients, and scoring was reviewed by a practicing neurologist.
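The scoring rule just described can be captured in a few lines. The function name and the "possible impairment" label below are illustrative shorthand, not part of the official test; only the 0–30 range and the 26-point cutoff come from the article.

```python
# Minimal sketch of the MoCA cutoff described above: totals run from
# 0 to 30, and 26 or above is generally considered normal.

def interpret_moca(total):
    """Classify a MoCA total using the >=26 'normal' convention."""
    if not 0 <= total <= 30:
        raise ValueError("MoCA totals range from 0 to 30")
    return "normal" if total >= 26 else "possible impairment"

interpret_moca(26)  # "normal"
interpret_moca(18)  # "possible impairment"
```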

Interestingly, the “age” of the models, defined by their release date, appears to influence performance: the researchers noted that older versions of chatbots scored lower than newer ones, mirroring patterns of cognitive decline seen in humans.

For example, Gemini 1.5 outperformed Gemini 1.0 by six points despite being released less than a year later, which the authors liken to rapid “cognitive decline” in the older model.

ChatGPT 4o excelled in attention tasks and succeeded in the Stroop test’s challenging incongruent stage, setting it apart from its peers. However, none of the LLMs completed visuospatial tasks successfully, and Gemini 1.5 notably produced a clock resembling an avocado—an error associated with dementia in human patients.

Despite these struggles, all models performed flawlessly in tasks requiring text-based analysis, such as the naming and similarity sections of the MoCA. This contrast underscores a key limitation: while LLMs handle linguistic abstraction well, they falter in integrating visual and executive functions, which require more complex cognitive processing.

The study acknowledges key differences between the human brain and LLMs but highlights significant limitations in AI cognition. The uniform failure of all tested chatbots in tasks requiring visual abstraction and executive function underscores weaknesses that could hinder their use in clinical settings.

“Not only are neurologists unlikely to be replaced by large language models any time soon, but our findings suggest that they may soon find themselves treating new, virtual patients—artificial intelligence models presenting with cognitive impairment,” the authors concluded.

These findings suggest that while LLMs excel in specific cognitive domains, their deficits in visuospatial and executive tasks raise concerns about their reliability in medical diagnostics and broader applications.