
BBC To Take Legal Action Against Perplexity For Scraping Its Content
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
The BBC is threatening the AI company Perplexity with legal action if it continues to use its content without permission. The British broadcaster has accused the startup of scraping content from its website to train its “default AI model.”
In a rush? Here are the quick facts:
- The BBC threatened Perplexity with legal action if it continues to use its content without permission.
- The broadcaster accused the AI company of scraping its content to train its AI model.
- The BBC requested that Perplexity delete the copyrighted material and submit “a proposal for financial compensation.”
According to the Financial Times (FT), the BBC sent a letter to Aravind Srinivas, Perplexity’s CEO, saying it has evidence that BBC content was used to train the company’s AI model.
The BBC has asked the American startup to stop scraping its content, delete the material it has already used, and submit “a proposal for financial compensation.” If Perplexity does not comply, the broadcaster has threatened to seek an injunction.
The BBC is not the only media organization threatening Perplexity with legal action over the unauthorized use of copyrighted content. Last October, the New York Times sent a cease-and-desist letter to Perplexity, while the Wall Street Journal and the New York Post filed a lawsuit against the company.
This marks the first time the BBC has taken legal steps against an AI company for using its content without permission, although it has previously voiced concerns about AI. A few months ago, the BBC raised issues about AI chatbots struggling to provide accurate news information, naming Perplexity among the models analyzed.
Perplexity, for its part, called the BBC’s latest threat “manipulative and opportunistic,” claiming the broadcaster does not understand how the internet and technology work.
“[The claims] also show how far the BBC is willing to go to preserve Google’s illegal monopoly for its own self-interest,” said Perplexity to the FT.

Hackers Use Deepfake Zoom Call To Breach Crypto Firm
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Hackers used a fake Zoom call with deepfakes to breach a crypto firm’s Mac system and steal crypto wallet data.
In a rush? Here are the quick facts:
- Hackers used deepfakes in a fake Zoom meeting.
- The victim was tricked via Telegram and a fake Calendly link.
- Malware targeted macOS with AppleScript and process injection.
The attack began weeks earlier, when a staff member received an unexpected Telegram message containing a Google Meet link. The link redirected them to a fake Zoom website, where they later joined a deepfake-filled meeting. When their microphone appeared not to work, they were prompted to download a malicious Zoom extension. The AppleScript file ‘zoom_sdk_support.scpt’ looked harmless, but it secretly installed malware in the background.
The malware disabled history logging while it installed Rosetta 2 for software compatibility, and then downloaded additional tools, including backdoors, keyloggers, and cryptocurrency stealers. Researchers at the security firm Huntress detected eight different malicious files that specifically targeted macOS users through advanced process injection techniques, which are unusual on Apple systems.
Key components included “Telegram 2,” a persistent implant that enabled remote access; “Root Troy V4,” a full-featured backdoor; and “CryptoBot,” designed to search for and steal crypto wallet data from browsers. The hackers also used deepfake avatars to build trust and gather passwords.
Huntress advises organizations to be cautious of urgent meeting invites, last-minute platform changes, and requests to install unfamiliar extensions, especially from unknown contacts.
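Persistent implants like the one described above commonly survive reboots via macOS launch-agent property lists. As a basic defensive check (a minimal sketch, not taken from the Huntress report; the directory path, file pattern, and seven-day threshold are all assumptions for illustration), one can flag recently modified `.plist` files in a user's `LaunchAgents` folder:

```python
# Sketch: list launch-agent .plist files modified within the last N days.
# This is a hygiene check, not a malware detector; any hit still needs
# manual review to decide whether it is legitimate software.
import os
import time
from pathlib import Path


def recent_plists(directory: str, max_age_days: float = 7.0) -> list[str]:
    """Return sorted paths of .plist files modified within max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    return sorted(
        str(p)
        for p in Path(directory).glob("*.plist")
        if p.stat().st_mtime >= cutoff
    )


if __name__ == "__main__":
    # Typical per-user persistence location on macOS (assumed default path).
    for path in recent_plists(os.path.expanduser("~/Library/LaunchAgents")):
        print(path)
```

An empty result simply means nothing changed recently in that one location; system-wide `LaunchDaemons` directories and login items are separate persistence spots not covered by this sketch.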