
Photo by Clint Patterson on Unsplash

Google Reports Over 250 Complaints Of AI-Generated Deepfake Terrorism Content

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

Google shared a report with Australian authorities revealing that its artificial intelligence tool, Gemini, received over 250 complaints globally alleging it was used to generate deepfake terrorist content, and more than 80 concerning child abuse material.

In a Rush? Here are the Quick Facts!

  • Google reported over 250 global complaints over the use of Gemini AI to generate deepfake terrorism and more than 80 on child abuse content.
  • The data was submitted to Australia’s eSafety Commission under the country’s Online Safety Act.
  • Australian authorities warn that AI safety measures must improve as platforms struggle to detect harmful content.

The information was handed to Australia’s online safety watchdog, the eSafety Commission, after tech companies—including Meta, Telegram, Reddit, X, and Google—were required to comply with local law and respond to reporting notices issued under Australia’s Online Safety Act in March 2024.

eSafety shared an official document this Thursday raising concerns about AI safety. Google’s report covered user reports from April 1, 2023, to February 29, 2024.

“It (Google) received 258 user reports about suspected AI-generated deepfake terrorist or violent extremist material or activity generated by Gemini, the company’s own generative AI, and 86 user reports of suspected AI-generated child sexual exploitation and abuse material,” said eSafety Commissioner Julie Inman Grant. “This underscores how critical it is for companies developing AI products to build in and test the efficacy of safeguards to prevent this type of material from being generated.”

According to Reuters, a Google spokesperson said the company does not allow illegal activities. “The number of Gemini user reports we provided to eSafety represent the total global volume of user reports, not confirmed policy violations,” the spokesperson told Reuters via email.

Inman Grant described the report as providing “world-first insights” into how tech companies are managing the “online proliferation of terrorist and violent extremist material.”

The Commissioner also highlighted how platforms like Facebook, WhatsApp, and Telegram have failed to detect live-streamed terrorist content. Inman Grant pointed to the March 2019 Christchurch attack—a mass shooting in which a white supremacist gunman attacked two mosques in Christchurch, New Zealand, killing 51 people—as an example of how extremist and deadly attacks have been live-streamed without the content being detected or removed.

A few days ago, another report shared by eSafety revealed that children can easily bypass the age verification systems currently used by the most popular social media platforms.


Image by NordWood Themes, from Unsplash

Hackers Blackmail YouTubers Into Spreading Malware

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Hackers are forcing YouTubers to unknowingly spread malware by blackmailing them into sharing malicious links in their video descriptions.

In a Rush? Here are the Quick Facts!

  • Cybercriminals file false complaints to pressure creators into posting malicious links.
  • The malware, SilentCryptoMiner, secretly mines cryptocurrency on infected devices.
  • One YouTuber’s videos led to 40,000 malware downloads before the link was removed.

The scheme, uncovered by Kaspersky, mainly targets content creators who post videos about bypassing internet restrictions, a popular topic in Russia.

The criminals begin by filing false complaints against these videos, pretending to be the original developers of the restriction-bypassing software. Once YouTube removes the video, the hackers contact the creator, claiming they have the “official” new download link.

They then pressure the YouTuber to include this link in a new video; the creator does not realize it leads to malware. If the YouTuber refuses, the hackers threaten to file multiple complaints, which can get the channel permanently deleted.

The malware being spread is a type of “miner” that secretly uses infected computers to mine cryptocurrency. Victims unknowingly install it, believing they are downloading legitimate software.

The malware, known as SilentCryptoMiner, is a stealthy program designed to evade detection. It is based on XMRig, a widely used open-source mining tool.

It can mine various cryptocurrencies, including Ethereum (ETH), Monero (XMR), and others. SilentCryptoMiner is programmed to stop its activity when it detects certain security processes running, making it difficult to spot without strong cybersecurity protections.

The hackers don’t stop at YouTube. They also spread their malware through Telegram and other video-sharing platforms. Many of these accounts are eventually deleted, but new ones quickly appear.

To avoid infection, cybersecurity experts advise users to be cautious when downloading software, especially from YouTube links or unknown sources. Kaspersky notes that even reputable content creators can unknowingly share dangerous links if they are being blackmailed.

If a program asks users to disable antivirus protection before installation, that’s a major red flag. Keeping security software active and updated is crucial to blocking such threats.

As cybercriminals find new ways to manipulate content creators and their audiences, internet users must stay vigilant. Always verify download links and avoid clicking on files from unknown sources, no matter how trustworthy the person sharing them seems.