
How A YouTuber Is Tricking AI Bots That Steal Her Content
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A YouTuber is taking a creative stand against AI models that scrape and repurpose online content without permission.
In a Rush? Here are the Quick Facts!
- A YouTuber hides junk text in subtitles to confuse AI content scrapers.
- AI summarizers struggle to filter the junk, generating inaccurate video summaries.
- The technique disrupts AI scraping but isn’t foolproof against advanced transcription tools.
F4mi, a creator known for in-depth videos on obscure technology, has developed a method to disrupt AI summarizers by filling her transcripts with misleading, machine-confounding text while keeping them readable for human viewers, as first reported by Ars Technica.
The rise of AI-generated “faceless” YouTube channels has been a growing concern for many content creators, as noted by Medium. These channels often use AI tools to generate scripts, voiceovers, and visuals, frequently pulling material from existing videos to produce near-instant knock-offs.
Many YouTubers have reported seeing their work copied and repurposed, with AI models pulling directly from their video transcripts.
To counter this, F4mi turned to a decades-old subtitle format called .ass (Advanced SubStation Alpha), originally developed for anime fansubbing. Unlike standard subtitle files, .ass supports advanced formatting options like custom fonts, colors, and positioning.
By leveraging these features, F4mi embeds additional text into her subtitles, invisible to human viewers but highly disruptive to AI scraping tools.
Her method involves inserting extra text outside the visible bounds of the screen, using formatting tricks to make the words transparent and unreadable to humans. The inserted text includes public domain passages with minor word replacements, as well as AI-generated nonsense designed to overwhelm summarization tools.
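The general shape of the trick can be sketched in a few lines of Python. This is an illustration, not F4mi's actual script: the caption text, decoy passage, timings, and file name are placeholders, though the {\alpha&HFF&} (fully transparent) and {\pos(...)} (off-screen placement) override tags are standard .ass features.

```python
# Sketch: write an .ass file where each visible caption is paired with a
# decoy line that is fully transparent and positioned far off-screen, so
# players never render it but naive scrapers reading the raw text do.
# Illustrative only; captions and decoy passages here are placeholders.

ASS_HEADER = """[Script Info]
ScriptType: v4.00+
PlayResX: 1280
PlayResY: 720

[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
Style: Default,Arial,48,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,2,1,2,10,10,10,1

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
"""

def dialogue(start: str, end: str, text: str) -> str:
    return f"Dialogue: 0,{start},{end},Default,,0,0,0,,{text}\n"

captions = [  # (start, end, visible line, decoy line)
    ("0:00:01.00", "0:00:04.00",
     "Welcome back to the channel.",
     "The quick onyx goblin jumps over the lazy dwarf."),
]

with open("poisoned.ass", "w", encoding="utf-8") as f:
    f.write(ASS_HEADER)
    for start, end, visible, decoy in captions:
        f.write(dialogue(start, end, visible))
        # \alpha&HFF& sets all alpha channels to fully transparent;
        # \pos(-2000,-2000) places the event far outside the 1280x720 frame.
        f.write(dialogue(start, end, r"{\alpha&HFF&\pos(-2000,-2000)}" + decoy))
```

Any player that honors the override tags draws only the visible line; a tool that strips the tags and reads the remaining text ingests both.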
When AI attempts to extract and summarize these subtitles, it ends up with a garbled, inaccurate version of the original content.
F4mi found that while basic AI tools struggled with her approach, more advanced models like OpenAI’s Whisper were still able to extract meaningful information.
To counteract this, she experimented with further scrambling the text at the file level while keeping it readable in playback, adding another layer of complexity for AI attempting to parse it.
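She has not published the details of that scrambling step, so the following is only a plausible reconstruction of the general idea: render each character as its own positioned caption event, then shuffle the order in which those events appear in the file. A player reassembles the line on screen from the \pos coordinates, while a scraper reading the file top to bottom sees letter salad.

```python
import random

# Plausible sketch of file-level scrambling, not F4mi's implementation:
# each character becomes a separate Dialogue event pinned to its own
# x position, and the events are shuffled so the file's text order no
# longer matches the reading order shown during playback.

def scrambled_events(text: str, start: str, end: str,
                     x0: int = 300, y: int = 650, step: int = 22) -> list[str]:
    events = []
    for i, ch in enumerate(text):
        if ch == " ":
            continue  # spacing is implied by the gap in x positions
        tag = rf"{{\pos({x0 + i * step},{y})}}"
        events.append(f"Dialogue: 0,{start},{end},Default,,0,0,0,,{tag}{ch}\n")
    random.shuffle(events)  # file order diverges from on-screen order
    return events
```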
Ars Technica notes that YouTube does not natively support .ass subtitles, so F4mi had to convert her captions to YouTube’s .ytt format. However, this workaround came with drawbacks, particularly on mobile devices, where the altered subtitles sometimes appeared as black boxes.
To address this, she developed a Python script that hides her misleading text as black-on-black captions, visible only when the screen fades to black.
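Her script is not public, but the black-on-black approach can be sketched against YouTube's .ytt (also known as srv3) caption format. The format is undocumented, so the element and attribute names used below (pen, fc, bc, t, d, p) come from community reverse engineering and should be read as assumptions.

```python
# Sketch of the black-on-black idea for YouTube's .ytt (srv3) captions:
# decoy lines get a pen whose text and background colors are both black,
# so they blend in whenever the frame behind them is black. Schema names
# are assumptions based on community reverse engineering of srv3.
import xml.etree.ElementTree as ET

def build_ytt(captions):
    """captions: list of (start_ms, duration_ms, text, is_decoy) tuples."""
    root = ET.Element("timedtext", {"format": "3"})
    head = ET.SubElement(root, "head")
    # Pen 1: ordinary white caption text for human viewers.
    ET.SubElement(head, "pen", {"id": "1", "fc": "#FFFFFF"})
    # Pen 2: black text on a black background for the decoy lines.
    ET.SubElement(head, "pen", {"id": "2", "fc": "#000000", "bc": "#000000"})
    body = ET.SubElement(root, "body")
    for start_ms, dur_ms, text, is_decoy in captions:
        p = ET.SubElement(body, "p", {
            "t": str(start_ms), "d": str(dur_ms),
            "p": "2" if is_decoy else "1",
        })
        p.text = text
    return ET.tostring(root, encoding="unicode")

print(build_ytt([
    (1000, 3000, "Welcome back to the channel.", False),
    (1000, 3000, "Decoy passage for scrapers only.", True),
]))
```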
Despite these efforts, F4mi acknowledges that her method is not foolproof. AI can still generate transcripts directly from the audio track, and advanced screen readers can extract visible text from videos.
Still, her experiment highlights the growing resistance among content creators against AI models scraping online material without consent. As AI-generated content continues to proliferate, innovative countermeasures like F4mi’s may become increasingly common.

More Teens Are Being Misled By AI-Generated Content, Study Reveals
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A growing number of teenagers are struggling to distinguish between authentic and manipulated online content, with AI-generated media adding to the confusion.
In a Rush? Here are the Quick Facts!
- Many teens have shared content they later discovered was fake.
- Many teens don’t trust tech companies to prioritize their mental health.
- Most teens support watermarking AI-generated content for transparency.
A recent report highlights that 35% of teens have been misled by fake content, while 22% admitted to sharing content they later discovered was false, and 28% have questioned whether they were conversing with a human or a chatbot.
These experiences have significantly reshaped teens’ trust in online information. The report found that 72% of teenagers have changed how they evaluate digital content after encountering deceptive material.
Additionally, more than a third (35%) believe generative AI will further erode trust in online information. Those who have been misled by false content are even more skeptical, with 40% saying AI will make it harder to verify accuracy, compared to 27% of those who haven’t had similar experiences.
Generative AI faces serious credibility issues among teens, particularly in academic settings. Nearly two in five (39%) students who have used AI for schoolwork reported finding inaccuracies in AI-generated content. Meanwhile, 36% did not notice any problems, and 25% were unsure.
This raises concerns about AI’s reliability in educational contexts, highlighting the need for better tools and critical thinking skills to help teens assess AI-generated content.
Beyond AI, trust in major tech companies remains low. About 64% of teens believe big tech firms do not prioritize their mental health, and 62% think these companies would not protect users’ safety if it harmed their profits.
More than half also doubt that tech giants make ethical design decisions (53%), safeguard personal data (52%), or fairly consider different users’ needs (51%). Regarding AI, 47% have little confidence in tech companies making responsible decisions about its use.
Despite these concerns, teens strongly support protective measures for generative AI. Nearly three in four (74%) advocate for privacy safeguards and transparency, while 73% believe AI-generated content should be labeled or watermarked. Additionally, 61% want content creators to be compensated when AI models use their work for training.
CNN notes that teenagers’ distrust of Big Tech reflects a broader dissatisfaction with major U.S. tech companies. American adults also face rising levels of misleading or fake content, worsened by weakening digital safeguards.
As generative AI reshapes the digital landscape, addressing misinformation and restoring trust requires collaboration between tech companies, educators, policymakers, and teens themselves.
Strengthening digital literacy and implementing clear AI governance standards will be essential to ensuring a more transparent and trustworthy online environment.