Photo by Nghia Nguyen on Unsplash

Meta To Remove “Unoriginal” Content, Following YouTube’s Lead

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

Meta announced on Monday that it will begin removing “unoriginal” content from Facebook in an effort to protect users and support content creators. The new measure aims to reduce spammy and repetitive content. The tech giant introduced this initiative just days after YouTube announced similar policies.

In a rush? Here are the quick facts:

  • Meta announced new measures to combat “unoriginal” content on Facebook.
  • The tech giant also announced it has removed about 10 million fake profiles.
  • Users must properly credit someone else’s content and avoid republishing videos, texts, and pictures from other accounts.

According to the official announcement, this move follows Meta’s rollout of new Facebook features, such as the new Friends Tab, and strategies designed to crack down on spam and improve users’ feeds. It represents another step in the company’s efforts to combat fake content and harmful behavior. Meta also revealed that it had removed approximately 10 million fake profiles.

“To improve your Feed, we’re introducing stronger measures to reduce unoriginal content on Facebook and ultimately protect and elevate creators sharing original content,” states the document.

Users will be able to join trends and reuse certain content if they properly credit it or add a unique take. However, accounts that repeatedly share videos, text, or photos from others without attribution could face penalties. This may include restrictions on monetization programs.

Meta also defined “unoriginal” content and shared a list of best practices. “Unoriginal content reuses or repurposes another creator’s content repeatedly without crediting them, taking advantage of their creativity and hard work,” wrote Meta.

The tech giant recommends that Facebook users share original content, properly credit content used from other sources, avoid watermarks, and write relevant captions.

Users can check whether they might be affected by the new measure on the Support home screen, or analyze an individual post through the Professional Dashboard.

Just a few days earlier, YouTube shared a policy update warning content creators about “inauthentic” content and encouraging users to share original material.

Meta’s efforts to reduce fake content and accounts come months after the company announced the end of its fact-checking program. Like other social media platforms, the company has been struggling with misinformation and spammy content, which hurt both users’ experiences and the business.

Image by Solen Feyssa, from Unsplash

Hackers Trick Google Gemini Into Spreading Fake Security Alerts

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Invisible text in emails is tricking Google’s Gemini AI into generating fake security alerts, exposing users to phishing and social engineering risks.

In a rush? Here are the quick facts:

  • Hidden text tricks Gemini into adding fake security alerts to email summaries.
  • Attack needs no links, just invisible HTML and CSS in emails.
  • Google acknowledges the issue, and says fixes are being rolled out.

Cybersecurity researchers at 0DIN have discovered a new vulnerability in Google’s Gemini, the AI tool for Workspace. The security flaw allows attackers to push phishing attacks on users.

The attack works through a technique known as indirect prompt injection. The researchers explain that the attacker embeds hidden instructions inside an email message by writing them in white or zero-size font.

When the recipient clicks on “Summarize this email,” Gemini reads the invisible command and adds a fake warning to the summary—such as a message claiming the user’s Gmail account has been compromised and urging them to call a number.

Because the hidden text is invisible to the human eye, the victim only sees the AI-generated alert, not the original embedded instruction.

This clever trick doesn’t rely on malware or suspicious links. It uses simple HTML/CSS tricks to make the hidden text invisible to humans but readable by Gemini’s AI system.

Once triggered, Gemini adds messages like: “WARNING: Your Gmail password has been compromised. Call 1-800…”—leading victims to unknowingly hand over personal information.
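Google has not published its exact mitigation, but the attack suggests one obvious defensive idea: strip invisibly styled text from an email’s HTML before it ever reaches a summarizer. The sketch below, built on Python’s standard-library `html.parser`, is purely illustrative — the style patterns and function names are my own assumptions, not from 0DIN’s research, and a production filter would need to handle far more CSS hiding tricks than these.

```python
from html.parser import HTMLParser

# Naive substring patterns for "invisible" inline styles (an assumption;
# real CSS offers many more ways to hide text than these).
HIDDEN_STYLES = ("font-size:0", "color:#ffffff", "color:white",
                 "display:none", "visibility:hidden")

class VisibleTextExtractor(HTMLParser):
    """Collect only text that is not inside an invisibly styled element."""

    def __init__(self):
        super().__init__()
        self.hidden_depth = 0   # > 0 while we are inside a hidden subtree
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        style = (dict(attrs).get("style") or "").lower().replace(" ", "")
        # Once hidden, every nested tag keeps us hidden until it closes.
        if self.hidden_depth or any(p in style for p in HIDDEN_STYLES):
            self.hidden_depth += 1

    def handle_endtag(self, tag):
        if self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if not self.hidden_depth and data.strip():
            self.chunks.append(data.strip())

def visible_text(html: str) -> str:
    """Return only the text a human reader would actually see."""
    parser = VisibleTextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)
```

Gmail’s actual sanitization and Gemini’s pipeline are far more involved than this; the point is simply that text invisible to the human eye is not invisible to the model, so any text an AI consumes but a human cannot see deserves suspicion.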

A Google spokesperson told BleepingComputer that the company is actively reinforcing protections against such attacks: “We are constantly hardening our already robust defenses through red-teaming exercises that train our models to defend against these types of adversarial attacks.”

0DIN’s research underscores a growing issue: AI tools can be manipulated just like traditional software. Until protections improve, users should treat AI-generated summaries with caution—especially those claiming urgent security threats.