Kaspersky Reports 135% Surge In Crypto-Stealing Drainers On Dark Web

Image by pvproductions, from Freepik

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Cybercriminal interest in crypto-draining malware surged dramatically in 2024, with discussions on dark web forums rising by 135%, according to Kaspersky’s latest Security Bulletin.

In a Rush? Here are the Quick Facts!

  • Drainers use fake airdrops, phishing sites, and deceptive ads to steal funds.
  • Corporate database ads on the dark web rose 40% from 2023 to 2024.
  • Cybercriminals are shifting from Telegram back to private dark web forums.

Kaspersky’s report highlights the growing focus on crypto-drainers—malware designed to trick victims into authorizing fraudulent transactions, swiftly draining funds from cryptocurrency wallets.

Kaspersky’s Digital Footprint Intelligence revealed that discussions on crypto-drainers increased from 55 unique dark web threads in 2022 to 129 in 2024. These forums are rife with cybercriminals exchanging ideas, trading malware, and collaborating on large-scale distribution.

Alexander Zabrovsky, a security expert at Kaspersky, predicts further growth in crypto-drainer interest in 2025.

“Crypto enthusiasts need to be more vigilant than ever, adopting robust crypto security measures. Meanwhile, companies should focus on educating their customers and employees while actively monitoring their online presence to reduce the risk of successful attacks,” Zabrovsky emphasized.

He added that drainers often leverage social engineering tactics, impersonating popular wallets and exchange brands to lure victims into fraudulent transactions.

Cybercriminals appear increasingly focused on leaking or reselling stolen data, sometimes amplifying older breaches as new incidents to damage corporate reputations.

“Some ‘offers’ may simply be well-marketed materials. For example, certain databases might combine publicly available information or previously leaked data, presenting it as breaking news,” Zabrovsky added.

“By making such claims, cybercriminals can generate publicity, create buzz, and tarnish the reputation of the targeted company simply by announcing a data breach,” Zabrovsky continued.

Emerging trends point to further developments in 2025. Kaspersky predicts a migration of cybercriminals from Telegram back to private dark web forums, as increased platform bans drive users to less accessible spaces.

High-profile law enforcement operations are also expected to intensify, forcing cybercriminal groups to fragment into smaller, harder-to-track units.

Other anticipated trends include the rise of Malware-as-a-Service models promoting drainers and credential stealers, escalating cyber threats in the Middle East due to geopolitical tensions, and an uptick in ransomware attacks across the region.

To combat these threats, Kaspersky advises individuals to use comprehensive security solutions and remain vigilant against phishing schemes. Businesses should proactively monitor dark web activity and employ tools to detect and respond to potential data breaches and malware-related risks.

YouTube Introduces Opt-In AI Training For Video Creators

Image by Wiroj Sidhisoradej, from Freepik

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

YouTube is introducing a new feature that allows creators to opt in and permit third-party companies to use their videos for training AI models. By default, the feature is turned off, requiring creators to actively choose to participate.

In a Rush? Here are the Quick Facts!

  • YouTube allows creators to opt in for third-party AI model training on videos.
  • By default, creators are opted out and must manually enable the feature.
  • Creators can authorize 18 companies, including OpenAI, Meta, Microsoft, and Adobe.

Within the YouTube Studio dashboard, creators can access a new setting to enable this feature. From there, they can authorize specific third-party companies to train AI models using their videos.

YouTube says these companies were selected because they are building generative AI models and represent logical partners for such collaborations, as reported by TechCrunch.

Alternatively, creators can select an option to allow “All third-party companies,” granting permission to any third party to train on their videos, even companies not on the list.

TechCrunch reports that the feature is available to eligible creators who have access to YouTube Studio Content Manager with an administrator role. Creators can view or adjust these settings at any time through their YouTube Channel settings.

While the new setting controls third-party access, YouTube clarified to TechCrunch that Google will continue to train its own AI models on certain YouTube content, as outlined in its existing agreements with creators.