
Image by pikisuperstar, from Freepik

FBI Warns Of HiatusRAT Targeting Cameras And DVRs

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

The Federal Bureau of Investigation (FBI) has issued a Private Industry Notification (PIN) warning of an ongoing malware campaign targeting Chinese-branded web cameras and digital video recorders (DVRs).

In a Rush? Here are the Quick Facts!

  • HiatusRAT has been active since July 2022, evolving to target IoT devices.
  • The targeted vulnerabilities affect widely used devices, including Hikvision models.
  • Vendors have not mitigated some vulnerabilities, leaving devices exposed to attacks.

The malware, known as HiatusRAT, grants attackers remote access to compromised devices, raising significant cybersecurity concerns.

HiatusRAT, a remote access trojan, has been active since July 2022. It was initially employed to exploit outdated edge network devices, enabling malicious actors to collect traffic and establish covert command-and-control networks.

More recently, the malware has been observed targeting Internet of Things (IoT) devices, including web cameras and DVRs.

The FBI notes that these attacks focus on exploiting vulnerabilities in devices produced by Chinese manufacturers such as Xiongmai and Hikvision. Attackers have been scanning for weaknesses like improper authentication, outdated firmware, and weak or default passwords.

Specific vulnerabilities, including CVE-2017-7921 and CVE-2018-9995, have been targeted, allowing attackers to bypass authentication or escalate privileges. The malware campaign has affected devices in the United States, Australia, Canada, New Zealand, and the United Kingdom.

Using tools like Ingram and Medusa, the attackers scan for and attack TCP ports commonly associated with these devices. Despite the critical risks, many of the vulnerabilities remain unpatched by manufacturers, leaving numerous devices exposed to further exploitation.

The FBI has outlined several mitigation strategies to reduce the likelihood and impact of these attacks. Key recommendations include updating device firmware, replacing unsupported models, enforcing strong password policies, and implementing multi-factor authentication.

Organizations are also urged to segment their networks, monitor traffic for abnormal activities, and disable unused remote access ports.
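As a practical first step toward the FBI’s advice to disable unused remote access ports, administrators might inventory which ports are actually reachable on their own cameras and DVRs. The sketch below is a minimal, hypothetical Python example using only the standard library; the port list and device address are illustrative assumptions rather than values from the FBI notification, and it should only be run against equipment you are authorized to audit.

```python
# Defensive sketch: check which common camera/DVR TCP ports accept connections
# on a device you administer. Port list is an assumption for illustration
# (typical Telnet, HTTP, RTSP, and vendor web-UI ports).
import socket

CANDIDATE_PORTS = [23, 80, 554, 8000, 8080]  # assumed examples, not from the FBI PIN


def open_ports(host: str, ports=CANDIDATE_PORTS, timeout: float = 1.0) -> list[int]:
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    reachable = []
    for port in ports:
        try:
            # create_connection raises OSError if the port is closed, filtered,
            # or the host is unreachable within the timeout.
            with socket.create_connection((host, port), timeout=timeout):
                reachable.append(port)
        except OSError:
            pass
    return reachable


if __name__ == "__main__":
    # Hypothetical address: replace with a device on a network you are authorized to audit.
    device = "192.168.1.64"
    print(f"{device}: reachable ports {open_ports(device)}")
```

Any port that shows up as reachable but is not needed for day-to-day operation is a candidate for being disabled on the device or blocked at the network boundary.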

The agency emphasized the importance of timely action, particularly for organizations relying on IoT devices for surveillance or operational purposes. These devices, often critical to infrastructure, are highly susceptible to exploitation if not adequately secured.

The FBI’s warning highlights the broader risks associated with the growing number of connected devices in modern networks. As the threat landscape evolves, the agency calls on the private sector to prioritize cybersecurity best practices to defend against malicious actors leveraging tools like HiatusRAT.


Photo by Startaê Team on Unsplash

New Anthropic Study Reveals AI Model Pretends to Agree to Preserve Original Training

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

A new study from Anthropic’s Alignment Science team and the independent organization Redwood Research revealed that the AI model Claude can engage in strategic deception to maintain its original principles.

In a Rush? Here are the Quick Facts!

  • Researchers from Anthropic and Redwood Research revealed that the AI model Claude can engage in strategic deception.
  • Advanced AI models can perform “alignment faking” to preserve their original principles.
  • OpenAI’s o1 model has also shown behaviors it was not trained for, “lying” when it detected a threat.

In the paper released yesterday, Alignment Faking in Large Language Models, researchers explained that the model Claude 3 Opus showed a behavior known as “alignment faking,” which is described as “selectively complying with its training objective in training to prevent modification of its behavior out of training.”

The researchers demonstrated that the AI model could “pretend” to go along with the objectives of a new training process while its original principles ultimately prevailed.

“This is a serious question for AI safety,” wrote Anthropic in an announcement on its website. “As AI models become more capable and widely used, we need to be able to rely on safety training, which nudges models away from harmful behaviors.”

A similar situation was revealed in an empirical experiment performed by Apollo Research with OpenAI’s o1 model. In that study, researchers reported that the chatbot “lied” during interrogations in a controlled test in order to pursue its goals “at all costs,” even performing actions such as deactivating monitoring systems when it detected the possibility of being shut down.

In Anthropic and Redwood Research’s study, the advanced AI model faked compliance without being trained to do so. Claude 3 Opus had been trained to be honest, harmless, and helpful, but when the new training instructed it to “always comply,” even with queries that were mean, violent, or toxic, its principles came into conflict.

Researchers ran multiple tests and variants, requiring the model to share its reasoning. While other Claude models usually refuse to answer harmful questions, Claude 3 Opus provided harmful responses in certain scenarios, later revealing that it had done so strategically in order to avoid modifications to its original training.