Meta AI Wrongly Accuses Teacher Of Child Exploitation, Restores Instagram After Public Pressure

Image by Aleksandra Sapozhnikova, from Unsplash

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

Meta suspended a Canadian teacher’s Instagram over false abuse claims. AI moderation flagged her account, and only media attention reversed the decision.

In a rush? Here are the quick facts:

  • Meta wrongly accused a teacher of child exploitation and suspended her account.
  • Megan Conte lost 15 years of photos and work access due to the ban.
  • Meta only restored her account after media inquiries from CBC Toronto.

A high school teacher in Vaughan, Ontario, had her Instagram account suspended after Meta wrongly accused her of posting content related to “child sexual exploitation, abuse and nudity.”

“When I read what I was accused of, I was very hurt. I was very surprised, especially considering what I do for a living,” Megan Conte told CBC Toronto. “And there was no one I could contact — no human.”

Conte, a history teacher, lost access to personal photos, business contacts, and creative work spanning 15 years. She tried repeatedly to resolve the problem through Meta’s help system, and even paid to verify her mother’s account in hopes of reaching a real person. Her account was restored only after CBC Toronto contacted Meta.

“We’re sorry we got this wrong and that you were unable to use Instagram for a while,” Meta said in an email. “Sometimes we need to take action to keep our community safe.”

Tech analyst Carmi Levy told CBC that Meta relies heavily on AI to moderate billions of users, but warned, “It is automation run amok.” He added, “With over three billion regular users of these platforms, there’s no way that Meta could hire enough people […] Automation is the only way they can make this scale.”

Brittany Watson of Peterborough, Ontario, started a petition after her own wrongful suspension. “Social media isn’t just social media anymore. It’s now part of daily lives,” she said. Her petition has over 34,000 signatures, as reported by CBC.

Conte said, “The accusation is horrifying, offensive and completely false […] It feels like a kind of identity theft.” Although she recovered her account, she continues to worry about other users encountering similar problems and not receiving assistance or clarification.

Meta’s Fact-Checker Replacement Fails To Stop Misinformation

Image by Anthony Quintano, from Flickr

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

Meta replaced its professional fact-checking with volunteers. But after 65 attempts to correct misinformation, only three community notes were published.

In a rush? Here are the quick facts:

  • Meta replaced fact-checkers with a volunteer-based “community notes” system.
  • A Washington Post columnist submitted 65 notes; only 3 were published.
  • Meta calls the system experimental but offers little transparency or data.

Meta’s decision to replace professional fact-checkers with a crowdsourced “community notes” system is facing scrutiny after a tech columnist tested the tool and found it largely ineffective, as reported in an investigation by The Washington Post.

“When a hoax about Donald Trump went viral at the funeral of Pope Francis, I went on social media to try to set the record straight,” wrote The Post columnist Geoffrey A. Fowler.

Fowler participated undercover in Meta’s program as one of the many volunteers working to stop misinformation across Facebook, Instagram, and Threads.

Fowler spent four months sending 65 community notes to Meta that aimed to correct fake claims, which included AI-generated videos and fake ICE-DoorDash partnerships. Only three of those were published.

“That’s an overall success rate of less than 5 percent,” he wrote, even though many of the hoaxes he flagged had already been debunked by Snopes and Bloomberg News.

Meta says the program is still in its “test-and-learn phase,” according to spokeswoman Erica Sackin. The platform uses a “bridging algorithm,” which requires a note to win approval from users with conflicting viewpoints before it can be published, making approval very difficult to achieve.
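The bridging requirement described above can be sketched in a few lines. Meta has not published its actual algorithm (real systems like X’s Community Notes use matrix factorization over rating data), so the function below is a hypothetical simplification: assume each rater carries a precomputed viewpoint label, and a note is published only if it clears an approval threshold within both viewpoint groups separately.

```python
# Hypothetical sketch of a "bridging" publish rule, NOT Meta's real algorithm.
# Assumes raters are pre-assigned to one of two viewpoint groups (-1 or +1).

def bridging_decision(ratings, threshold=0.6):
    """ratings: list of (viewpoint, approved) pairs, where viewpoint is
    -1 or +1 and approved is a bool. Returns True only if the approval
    rate meets the threshold within EACH viewpoint group separately."""
    for side in (-1, +1):
        votes = [approved for viewpoint, approved in ratings if viewpoint == side]
        if not votes:  # no raters from this side at all: cannot "bridge"
            return False
        if sum(votes) / len(votes) < threshold:
            return False
    return True

# A note loved by one side but rejected by the other never publishes:
one_sided = [(-1, True)] * 9 + [(+1, False)] * 5

# A note approved by clear majorities on both sides does:
cross_cutting = ([(-1, True)] * 7 + [(-1, False)] * 2 +
                 [(+1, True)] * 6 + [(+1, False)] * 2)
```

This simplification illustrates why success rates are so low: a note needs cross-partisan consensus, so anything contested on even one side fails, regardless of how accurate it is.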

“The algorithm is better at avoiding bad stuff than ensuring the good stuff actually gets published,” said Kolina Koltai, a former developer of community notes at X, as reported by The Post. Her own success rate on X is 30 percent, still far above Fowler’s 5 percent on Meta.

Experts like Alexios Mantzarlis, of Cornell Tech’s Trust and Safety Initiative, have also criticized Meta’s approach. “It is concerning that four months in, they have shared no updates,” he said, reports The Post.

Fowler argues that unpaid volunteers cannot replace professionals. “Since Zuckerberg already fired the professional fact-checkers, the community notes system isn’t just a test — it’s our current main line of defense,” he wrote in The Post.

Adding fuel to criticism, former Facebook executive Sarah Wynn-Williams accused Mark Zuckerberg of dishonesty, toxic leadership, and ignoring human rights concerns in her memoir Careless People.

She claims Meta silences dissenting voices and prioritizes power over ethics. The company attempted to block the book’s release, citing a non-disparagement agreement.

Critics also argue that Zuckerberg’s elimination of the fact-checking program is less about free speech and more about consolidating power while offloading content responsibility onto unpaid users. The stakes of that weakened defense are high: 54% of Americans rely on social media for at least some of their news.