Image by DC Studio, from Unsplash

Hackers Use Deepfake Zoom Call To Breach Crypto Firm

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Hackers used a fake Zoom call with deepfakes to breach a crypto firm’s Mac system and steal crypto wallet data.

In a rush? Here are the quick facts:

  • Hackers used deepfakes in a fake Zoom meeting.
  • Victim was tricked via Telegram and a fake Calendly link.
  • Malware targeted macOS with AppleScript and process injection.

The attack began weeks earlier, when a staff member received an unexpected Telegram message that led them to what appeared to be a Google Meet link. The link redirected them to a fake Zoom website, where they later joined a meeting populated with deepfaked participants. The fake site blocked their microphone, so they were prompted to download a malicious Zoom extension to fix the issue. The AppleScript file, ‘zoom_sdk_support.scpt’, looked harmless, but it quietly installed malware in the background.

The malware disabled history logging, installed Rosetta 2 for software compatibility, and then downloaded additional tools, including backdoors, keyloggers, and cryptocurrency stealers. Huntress researchers identified eight distinct malicious files that targeted macOS users through advanced process injection techniques, which are unusual on Apple systems.

Key components included “Telegram 2,” a persistent implant that enabled remote access; “Root Troy V4,” a full-featured backdoor; and “CryptoBot,” designed to search for and steal crypto wallet data from browsers. The hackers also used deepfake avatars to build trust and gather passwords.

Huntress advises organizations to be cautious of urgent meeting invites, last-minute platform changes, and requests to install unfamiliar extensions, especially when they come from unknown contacts.
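
For teams that want to act on the report, the indicator named above can be folded into a simple self-check. The following is a minimal, hypothetical Python sketch, not Huntress's detection tooling: it walks a user's home folder on macOS and flags any file whose name matches the ‘zoom_sdk_support.scpt’ indicator from this attack; the filename set and search root are illustrative assumptions.

    # Minimal sketch: look for the AppleScript filename reported in this
    # attack under the current user's home directory. Illustrative only;
    # the indicator list and search root are assumptions, not a complete
    # or official detection rule.
    import os
    from pathlib import Path

    SUSPICIOUS_NAMES = {"zoom_sdk_support.scpt"}  # indicator named in the report

    def find_indicators(root: Path):
        """Yield paths under `root` whose filename matches a known indicator."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name in SUSPICIOUS_NAMES:
                    yield Path(dirpath) / name

    if __name__ == "__main__":
        home = Path.home()
        hits = list(find_indicators(home))
        if hits:
            print("Possible indicators found:")
            for path in hits:
                print(f"  {path}")
        else:
            print(f"No matching filenames found under {home}")

A filename match alone is weak evidence, so any hit should be treated as a prompt for closer investigation rather than as confirmation of compromise.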

Image by Szabo Viktor, from Unsplash

YouTube Creators Unknowingly Fuel Google’s AI Models

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Google has confirmed that it uses a subset of YouTube videos to train its artificial intelligence models, including Gemini and the advanced Veo 3 video generator.

In a rush? Here are the quick facts:

  • Creators were not informed that their videos train AI tools.
  • YouTube terms allow Google to license uploaded content globally and royalty-free.
  • Experts warn AI could compete with creators without consent or compensation.

The news, first reported by CNBC, has sparked criticism from content creators and intellectual property specialists, who worry about their content being used to develop tools that could eventually replace them.

“We’ve always used YouTube content to make our products better, and this hasn’t changed with the advent of AI,” a YouTube spokesperson said to CNBC.

“We also recognize the need for guardrails, which is why we’ve invested in robust protections that allow creators to protect their image and likeness in the AI era,” the spokesperson added.

CNBC reports that YouTube hosts over 20 billion videos. Google, however, has not revealed the specific number of videos it uses for AI training. The article notes that even a 1% selection from YouTube’s vast catalog would still result in billions of minutes of content, which exceeds the training data of most competing AI platforms.

CNBC spoke with several creators and intellectual property professionals who were unaware their content might be used to train AI. “It’s plausible that they’re taking data from a lot of creators that have spent a lot of time and energy and their own thought to put into these videos,” said Luke Arrigoni, CEO of digital identity company Loti. “That’s not necessarily fair to them,” he added.

“We’ve seen a growing number of creators discover fake versions of themselves,” Neely told CNBC.

Further fueling the debate, an investigation revealed that several major AI firms, including Apple, Nvidia, Anthropic, and Salesforce, have used transcripts from over 173,000 YouTube videos to train AI models, despite the platform’s policies.

These videos came from more than 48,000 channels, including top creators like MrBeast, PewDiePie, and Marques Brownlee, as well as academic and news institutions such as MIT, Khan Academy, NPR, and the BBC.

The lack of a clear opt-out option, or any warning when AI systems scrape their content, has prompted creators to demand greater transparency and stronger protections around how their work is used for AI training.