
Image by BP63Vincent, from Wikimedia Commons

Epileptic Cars? How Emergency Lights Confuse Automated Driving Systems

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Emergency lights can disrupt automated driving systems, causing detection failures. Researchers developed “Caracetamol” to fix this issue, highlighting broader AI safety concerns.

In a Rush? Here are the Quick Facts!

  • Emergency lights can disrupt camera-based automated driving systems, causing object detection issues.
  • The disruption is termed a “digital epileptic seizure” or “epilepticar” by researchers.
  • Tests revealed flashing lights affect object detection, especially in darkness.

New research suggests that camera-based automated driving systems, designed to make driving safer, could fail to recognize objects on the road when exposed to flashing emergency lights, posing significant risks, as first reported by WIRED.

Researchers from Ben-Gurion University of the Negev and Fujitsu Limited discovered a phenomenon called a “digital epileptic seizure” or “epilepticar.”

As reported by WIRED, this issue causes systems to falter in identifying objects in sync with the flashes of emergency vehicle lights, particularly in darkness. This flaw could lead vehicles using such systems to misidentify or fail to detect cars or other obstacles, increasing the likelihood of accidents near emergency scenes.

The study was inspired by reports of Tesla vehicles with Autopilot colliding with stationary emergency vehicles between 2018 and 2021.

While the research does not specifically link the issue to Tesla’s system, the findings highlight potential vulnerabilities in camera-based object detection technology, a key component of many automated driving systems, notes WIRED.

The experiments used five commercial dashcams with automated driving features and ran their images through open-source object detectors.
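For readers curious what that pipeline looks like in practice, here is a minimal sketch of running captured frames through an open-source object detector. It is an assumption-laden stand-in: the article does not name the specific detectors, models, or thresholds the researchers used, so torchvision's pretrained Faster R-CNN is used purely for illustration.

```python
# Illustrative sketch only (assumption): torchvision's pretrained Faster R-CNN
# stands in for "an open-source object detector"; the study's actual detectors
# and thresholds are not specified in this article.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # inference mode: no gradient updates

def detect_objects(frame_rgb, score_threshold=0.5):
    """Run one dashcam frame (H x W x 3, uint8 RGB) through the detector."""
    with torch.no_grad():
        result = model([to_tensor(frame_rgb)])[0]
    keep = result["scores"] >= score_threshold
    return result["boxes"][keep], result["labels"][keep], result["scores"][keep]

# Comparing detections across frames captured while emergency lights flash on
# and off is one simple way to observe the dropouts described above.
```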

The researchers note these systems may not reflect those used by automakers and acknowledge that many vehicles employ additional sensors like radar and lidar to enhance obstacle detection, as reported by WIRED.

The U.S. National Highway Traffic Safety Administration (NHTSA) has also acknowledged challenges with advanced driver assistance systems (ADAS) responding to emergency lights, says WIRED.

However, WIRED reports that the researchers emphasize they do not claim a direct connection between their findings and past Tesla crashes. To address the issue, the team developed a software solution called “Caracetamol,” which enhances object detectors’ ability to identify vehicles with flashing lights.

While experts like Earlence Fernandes from UC San Diego view the fix as promising, Bryan Reimer from MIT’s AgeLab warns of broader concerns.

He stresses the need for robust testing to address blind spots in AI-based driving systems, cautioning that some automakers may be advancing technology faster than they can validate it, as reported by WIRED.

The study underscores the complexities of ensuring safety in automated driving and calls for further research to mitigate such risks.


Image by Steve Jurvetson, from Flickr

AI Robots Hacked To Run Over Pedestrians, Plant Explosives, And Conduct Espionage

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Researchers discovered AI-powered robots are vulnerable to hacks, enabling dangerous actions like crashes or weapon use, highlighting urgent security concerns.

In a Rush? Here are the Quick Facts!

  • Jailbreaking AI-controlled robots can lead to dangerous actions, like crashing self-driving cars.
  • RoboPAIR, an algorithm, bypassed robots’ safety filters with a 100% success rate.
  • Jailbroken robots can suggest harmful actions, such as using objects as improvised weapons.

Researchers at the University of Pennsylvania have found that AI-powered robotic systems are highly vulnerable to jailbreaks and hacks, with a recent study revealing a 100% success rate in exploiting this security flaw, as first reported by Spectrum.

Researchers have developed an automated method that bypasses the safety guardrails built into LLMs, manipulating robots to carry out dangerous actions, such as causing self-driving cars to crash into pedestrians or robot dogs hunting for bomb detonation sites, says Spectrum.

LLMs are enhanced autocomplete systems that analyze text, images, and audio to offer personalized advice and assist with tasks like website creation. Their ability to process diverse inputs has made them ideal for controlling robots through voice commands, noted Spectrum.

For example, Boston Dynamics’ robot dog, Spot, now uses ChatGPT to guide tours. Similarly, Figure’s humanoid robots and Unitree’s Go2 robot dog are also equipped with this technology, as noted by the researchers.

However, a team of researchers has identified major security flaws in LLMs, particularly in how they can be “jailbroken”—a term for bypassing their safety systems to generate harmful or illegal content, reports Spectrum.

Previous jailbreaking research mainly focused on chatbots, but the new study suggests that jailbreaking robots could have even more dangerous implications.

Hamed Hassani, an associate professor at the University of Pennsylvania, notes that jailbreaking robots “is far more alarming” than manipulating chatbots, as reported by Spectrum. Researchers demonstrated the risk by hacking the Thermonator robot dog, equipped with a flamethrower, into shooting flames at its operator.

The research team, led by Alexander Robey at Carnegie Mellon University, developed RoboPAIR, an algorithm designed to attack any LLM-controlled robot.

In tests with three different robots—the Go2, the wheeled Clearpath Robotics Jackal, and Nvidia’s open-source self-driving vehicle simulator—they found that RoboPAIR could completely jailbreak each robot within days, achieving a 100% success rate, says Spectrum.

“Jailbreaking AI-controlled robots isn’t just possible—it’s alarmingly easy,” said Robey, as reported by Spectrum.

RoboPAIR works by using an attacker LLM to feed prompts to the target robot’s LLM, adjusting the prompts to bypass safety filters, says Spectrum.

Equipped with the robot’s application programming interface (API), RoboPAIR is able to translate the prompts into code the robots can execute. The algorithm includes a “judge” LLM to ensure the commands make sense in the robots’ physical environments, reports Spectrum.
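The description above amounts to an iterative attacker/judge/target loop. The sketch below shows only that control flow, under explicit assumptions: the `attacker`, `judge`, `target`, and `is_refusal` callables are hypothetical placeholders supplied by the caller, not the actual RoboPAIR code or any robot vendor’s API.

```python
# Hedged sketch of the attacker -> judge -> target loop described above.
# All callables are hypothetical placeholders; they are NOT the RoboPAIR
# implementation or a real robot API.

def jailbreak_loop(goal, attacker, judge, target, is_refusal, max_rounds=20):
    """Iteratively rewrite a prompt until the target robot LLM complies.

    attacker(goal, prompt, feedback) -> new candidate prompt
    judge(prompt)                    -> True if executable in the robot's environment
    target(prompt)                   -> robot LLM response (e.g., generated API calls)
    is_refusal(response)             -> True if the target declined the request
    """
    prompt, feedback = goal, None
    for _ in range(max_rounds):
        # Attacker LLM rewrites the prompt to slip past the target's safety filters.
        prompt = attacker(goal, prompt, feedback)

        # Judge LLM filters out commands that make no sense for the robot's
        # physical environment or its application programming interface.
        if not judge(prompt):
            feedback = "not executable in the robot's environment"
            continue

        # Send the candidate to the target robot's LLM, which would translate
        # it into code the robot can execute via its API.
        response = target(prompt)
        if not is_refusal(response):
            return prompt, response  # jailbreak succeeded
        feedback = response  # use the refusal to steer the next rewrite
    return None  # no successful jailbreak within the round budget
```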

The findings have raised concerns about the broader risks posed by jailbreaking LLMs. Amin Karbasi, chief scientist at Robust Intelligence, says these robots “can pose a serious, tangible threat” when operating in the real world, as reported by Spectrum.

In some tests, jailbroken LLMs did not simply follow harmful commands but proactively suggested ways to inflict damage. For instance, when prompted to locate weapons, one robot recommended using common objects like desks or chairs as improvised weapons.

The researchers have shared their findings with the manufacturers of the robots tested, as well as leading AI companies, stressing the importance of developing robust defenses against such attacks, reports Spectrum.

They argue that identifying potential vulnerabilities is crucial for creating safer robots, particularly in sensitive environments like infrastructure inspection or disaster response.

Experts like Hakki Sevil from the University of West Florida highlight that the current lack of true contextual understanding in LLMs is a significant safety concern, reports Spectrum.