
Image by tirachardz, from Freepik

Navigation Tech By Haptic Uses Vibrations To Guide Blind Users

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Haptic’s navigation tech offers non-visual guidance for blind and sighted users alike.
  • The system uses vibrations to guide users in real time through a “haptic corridor.”
  • Compatible with smartwatches, it leverages existing tech advancements for broader accessibility.

Navigation for people with visual impairments could soon rely on wearable technology that uses touch rather than sight or sound.

Haptic, a technology company focused on developing non-visual navigation, presented its approach yesterday at TechCrunch Disrupt 2024, featuring a “haptic corridor” that uses vibrations to guide users in real time.

Founded in 2017, Haptic was inspired by a friend of the founders who lost their sight, leading the team to explore tactile-based navigation, as reported by TechCrunch (TC).

The company’s solution relies on a sequence of vibrations, with a steady pulse signaling the correct path and increased intensity indicating when the user veers off course.

Initially designed for Haptic’s own wearable devices, the technology has since been adapted for use with existing smartwatches and smartphones, aiming to leverage advancements in consumer technology rather than competing directly with major tech firms.

“Technology advances while you’re advancing — and smartwatches got better. So, do you want to be in competition with the Googles and Apples out there… or do you want to have them as allies?” said Enzo Caruso, Haptic’s co-founder.

Kevin Yoo, Haptic’s CEO and co-founder, explained that the company’s focus has shifted from product development to expanding its user base. He suggested that partnering with large firms like Google or Uber could bring the technology to a wider audience, as reported by TC.

Yoo envisions the system being used by anyone, not just those with vision impairments, with potential applications in crowded spaces where visual or auditory cues are challenging.

“Google and Apple, telecoms, Uber, governments… all of this is coming together into a common ground,” Yoo noted to TC, as Haptic continues to develop its “hyper-accurate location” technology, with plans to incorporate indoor navigation features in future versions.

Currently, Haptic partners with companies such as Waymap, Cooley, WID, and Infinite Access. Recently, the company signed a contract with Aira, an app that connects visually impaired users with sighted helpers. The integration aims to reduce the need for constant guidance by providing tactile navigation support, noted TC.

Haptic’s business model is based on licensing its software, rather than monetizing its app directly. “We have a free app available to the world, live in 31 countries right now… and we have the licensing and integration model — that’s the business,” Yoo stated, as reported by TC.

Haptic is currently in the process of raising additional funds, which it hopes to use to secure further partnerships with companies such as Uber and T-Mobile, as it continues to scale the technology.


Image by master1305, from Freepik

Researchers Discover Security Flaws In Open-Source AI And ML Models

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Over 30 security flaws found in open-source AI and ML tools.
  • Severe vulnerabilities impact tools like Lunary, ChuanhuChatGPT, and LocalAI.
  • LocalAI flaw allows attackers to infer API keys through timing analysis.

A recent investigation has uncovered over 30 security flaws in open-source AI and machine learning (ML) models, raising concerns about potential data theft and unauthorized code execution, as reported by The Hacker News (THN).

These vulnerabilities were found in widely used tools, including ChuanhuChatGPT, Lunary, and LocalAI, and were reported via Protect AI’s Huntr bug bounty platform, which incentivizes developers to identify and disclose security issues.

Among the most severe vulnerabilities identified, two major flaws impact Lunary, a toolkit designed to manage large language models (LLMs) in production environments.

The first flaw, CVE-2024-7474, is categorized as an Insecure Direct Object Reference (IDOR) vulnerability. It allows a user with access privileges to view or delete other users’ data without authorization, potentially leading to data breaches and unauthorized data loss.
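This is not Lunary’s actual code, but the general shape of an IDOR flaw can be sketched in a few lines of Python: the vulnerable handler trusts a client-supplied record ID instead of checking it against the authenticated session, so any logged-in user can read anyone else’s data.

```python
# Hypothetical in-memory user store for illustration only.
USERS = {
    1: {"name": "alice", "notes": "alice's private notes"},
    2: {"name": "bob", "notes": "bob's private notes"},
}

def get_notes_vulnerable(session_user_id: int, requested_user_id: int) -> str:
    # BUG (IDOR): the requested ID is never compared with the session
    # identity, so user 1 can fetch user 2's private data.
    return USERS[requested_user_id]["notes"]

def get_notes_fixed(session_user_id: int, requested_user_id: int) -> str:
    # FIX: enforce that a user may only access their own record.
    if session_user_id != requested_user_id:
        raise PermissionError("access denied")
    return USERS[requested_user_id]["notes"]
```

The fix is an authorization check at the point of access, not just authentication at login, which is exactly the distinction an IDOR vulnerability exploits.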

The second critical issue, CVE-2024-7475, is an improper access control vulnerability that lets an attacker update the system’s SAML (Security Assertion Markup Language) configuration.

By exploiting this flaw, attackers can bypass login security to gain unauthorized access to personal data, raising significant risks for any organization relying on Lunary for managing LLMs.

Another security weakness identified in Lunary, CVE-2024-7473, also involves an IDOR vulnerability that allows attackers to update prompts submitted by other users. This is achieved by manipulating a user-controlled parameter, making it possible to interfere with others’ interactions in the system.

In ChuanhuChatGPT, a critical vulnerability (CVE-2024-5982) allows an attacker to exploit a path traversal flaw in the user upload feature, as noted by THN.
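Again, this is not ChuanhuChatGPT’s actual upload code, but a minimal sketch of how a path traversal bug in an upload handler typically works: a client-supplied filename containing `../` segments is joined directly onto the upload directory, letting the attacker write outside it.

```python
import os

# Hypothetical upload directory, for illustration only.
UPLOAD_DIR = "/srv/app/uploads"

def save_upload_vulnerable(filename: str) -> str:
    # BUG: the client-controlled filename is joined as-is, so a name
    # like "../../../etc/passwd" resolves to a path outside UPLOAD_DIR.
    return os.path.join(UPLOAD_DIR, filename)

def save_upload_fixed(filename: str) -> str:
    # FIX: normalize the final path and verify it stays inside
    # UPLOAD_DIR before writing anything to it.
    path = os.path.normpath(os.path.join(UPLOAD_DIR, filename))
    if not path.startswith(UPLOAD_DIR + os.sep):
        raise ValueError("path traversal attempt blocked")
    return path
```

Real hardening would also resolve symlinks (e.g. with `os.path.realpath`) and reject unexpected characters; the sketch shows only the core check.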

This flaw can lead to arbitrary code execution, directory creation, and exposure of sensitive data, presenting a high risk for systems relying on this tool.

LocalAI, another open-source platform that enables users to run self-hosted LLMs, has two major flaws that pose similar security risks, said THN.

The first flaw, CVE-2024-6983, enables malicious code execution by allowing attackers to upload a harmful configuration file. The second, CVE-2024-7010, lets hackers infer API keys by measuring server response times, using a timing attack method to deduce each character of the key gradually, noted THN.
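The timing side channel behind the second flaw comes from comparing secrets character by character and returning at the first mismatch. This sketch (not LocalAI’s actual code) shows the vulnerable pattern and the standard constant-time fix:

```python
import hmac

def check_key_vulnerable(submitted: str, secret: str) -> bool:
    # BUG: returning on the first mismatch means the comparison takes
    # slightly longer the more leading characters are correct. By
    # measuring response times, an attacker can recover the key one
    # character at a time.
    if len(submitted) != len(secret):
        return False
    for a, b in zip(submitted, secret):
        if a != b:
            return False
    return True

def check_key_fixed(submitted: str, secret: str) -> bool:
    # FIX: hmac.compare_digest examines every byte regardless of where
    # the first mismatch occurs, so timing reveals nothing about the key.
    return hmac.compare_digest(submitted.encode(), secret.encode())
```

In practice the timing differences are tiny, but averaged over many requests they are measurable over a network, which is why constant-time comparison is standard for API keys and tokens.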

In response to these findings, Protect AI introduced a new tool called Vulnhuntr, an open-source Python static code analyzer that uses large language models to detect vulnerabilities in Python codebases, said THN.

Vulnhuntr breaks down code into smaller chunks to identify security flaws within the constraints of a language model’s context window. It scans project files to detect and trace potential weaknesses from user input to server output, enhancing security for developers working with AI code.
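Vulnhuntr’s real chunking logic is more sophisticated, but the basic idea of splitting a source file into pieces that fit a model’s context window can be sketched as follows (the character budget is a made-up stand-in for a token limit):

```python
def chunk_source(source: str, max_chars: int = 4000) -> list[str]:
    # Naive sketch: split a file on line boundaries into chunks no
    # larger than max_chars, so each chunk fits the model's context
    # window. Flushing before a line would overflow the budget keeps
    # every chunk within the limit.
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for line in source.splitlines(keepends=True):
        if size + len(line) > max_chars and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("".join(current))
    return chunks
```

Splitting on line boundaries keeps statements intact; a real analyzer would split on function or class boundaries so each chunk is semantically self-contained.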

These discoveries highlight the critical importance of ongoing vulnerability assessment and security updates in AI and ML systems to protect against emerging threats in the evolving landscape of AI technology.