
Image by Anthony Quintano, from Flickr
Meta’s Fact-Checker Replacement Fails To Stop Misinformation
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Meta replaced its professional fact-checkers with volunteers. But after 65 attempts to correct misinformation, only three community notes were published.
In a rush? Here are the quick facts:
- Meta replaced fact-checkers with a volunteer-based “community notes” system.
- A Washington Post columnist submitted 65 notes; only 3 were published.
- Meta calls the system experimental but offers little transparency or data.
Meta’s decision to replace professional fact-checkers with a crowdsourced “community notes” system is facing scrutiny after a tech columnist tested the tool and found it largely ineffective, as reported in an investigation by The Washington Post.
“When a hoax about Donald Trump went viral at the funeral of Pope Francis, I went on social media to try to set the record straight,” wrote The Post columnist Geoffrey A. Fowler.
Fowler participated undercover in Meta’s program as one of the many volunteers working to stop misinformation across Facebook, Instagram, and Threads.
Over four months, he submitted 65 community notes aimed at correcting false claims, including AI-generated videos and fake ICE-DoorDash partnerships. Only three were published.
“That’s an overall success rate of less than 5 percent,” he wrote, even though many of the hoaxes he flagged had already been debunked by Snopes and Bloomberg News.
Meta says the program is still in its “test-and-learn phase,” according to spokeswoman Erica Sackin. The platform uses a “bridging algorithm,” which requires approval from users with conflicting viewpoints before a note can be published, making approval very hard to achieve.
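Meta has not published how its bridging algorithm works, but the general idea, known from X’s open-sourced Community Notes scorer, is that a note goes live only when raters who usually disagree both find it helpful. Below is a minimal sketch of that gating logic in Python; the viewpoint scores, thresholds, and function names are illustrative assumptions, not Meta’s actual implementation:

```python
# Illustrative sketch of a bridging check; NOT Meta's actual algorithm.
# Assumption: each rater carries a viewpoint score in [-1, 1], estimated
# from past rating history, and rates a note helpful (True) or not (False).

def note_is_published(ratings, min_per_side=5, min_helpful_rate=0.8):
    """Publish only if raters on BOTH sides of the viewpoint spectrum
    independently rate the note as helpful (the 'bridging' requirement)."""
    left = [helpful for viewpoint, helpful in ratings if viewpoint < 0]
    right = [helpful for viewpoint, helpful in ratings if viewpoint >= 0]

    # Not enough raters on each side: the note stalls unpublished.
    if len(left) < min_per_side or len(right) < min_per_side:
        return False

    # Both sides must independently clear the helpfulness threshold.
    return (sum(left) / len(left) >= min_helpful_rate
            and sum(right) / len(right) >= min_helpful_rate)


# Cross-viewpoint agreement -> published.
print(note_is_published([(-0.8, True)] * 6 + [(0.7, True)] * 6))   # True
# One side disagrees -> never shown, however accurate the note is.
print(note_is_published([(-0.8, True)] * 6 + [(0.7, False)] * 6))  # False
```

Because agreement across opposing viewpoints is rare, most submitted notes never clear such a gate, which is consistent with the sub-5-percent publish rate Fowler observed.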
“The algorithm is better at avoiding bad stuff than ensuring the good stuff actually gets published,” said Kolina Koltai, a former developer of community notes at X, as reported by The Post. Her own success rate on X is 30%, still far above Fowler’s 5% on Meta.
Experts like Alexios Mantzarlis, of Cornell Tech’s Trust and Safety Initiative, have also criticized Meta’s approach. “It is concerning that four months in, they have shared no updates,” he said, reports The Post.
Fowler argues that unpaid volunteers cannot replace professionals. “Since Zuckerberg already fired the professional fact-checkers, the community notes system isn’t just a test — it’s our current main line of defense,” he wrote in The Post.
Adding fuel to criticism, former Facebook executive Sarah Wynn-Williams accused Mark Zuckerberg of dishonesty, toxic leadership, and ignoring human rights concerns in her memoir Careless People.
She claims Meta silences dissenting voices and prioritizes power over ethics. The company attempted to block the book’s release, citing a non-disparagement agreement.
Critics also argue that Zuckerberg’s elimination of the fact-checking program is less about free speech and more about consolidating power while offloading content responsibility onto unpaid users. The weakness of this defense matters: 54% of Americans rely on social media for news.

Image by Dr. Frank Gaeth, from Wikimedia Commons
Swedish PM Criticized For Using ChatGPT In Government Decisions
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Swedish Prime Minister Ulf Kristersson faced criticism after he admitted using ChatGPT to generate ideas for government decisions.
In a rush? Here are the quick facts:
- Swedish PM admits using ChatGPT for political decision-making.
- His spokesperson claims no sensitive data is shared with AI tools.
- Critics say AI use in government is dangerous and undemocratic.
Swedish Prime Minister Ulf Kristersson faces growing public backlash after revealing that he uses ChatGPT and Le Chat to assist his official decision-making.
“I use it myself quite often. If for nothing else than for a second opinion,” Kristersson said, as reported by The Guardian. “What have others done? And should we think the complete opposite? Those types of questions.”
His statement sparked backlash, with Aftonbladet accusing him of falling for “the oligarchs’ AI psychosis,” as reported by The Guardian. Critics argue that relying on AI for political judgment is both reckless and undemocratic.
“We must demand that reliability can be guaranteed. We didn’t vote for ChatGPT,” said Virginia Dignum, professor of responsible AI at Umeå University.
Kristersson’s spokesperson, Tom Samuelsson, downplayed the controversy, saying: “Naturally it is not security sensitive information that ends up there. It is used more as a ballpark,” as reported by The Guardian.
But tech experts say the risks go beyond data sensitivity. Karlstad University professor Simone Fischer-Hübner advises against using ChatGPT and similar tools for official work tasks, as noted by The Guardian.
AI researcher David Bau has warned that AI models can be manipulated: “They showed a way for people to sneak their own hidden agendas into training data that would be very hard to detect.” Research shows a 95% success rate in misleading AI systems using memory injection or “Rules File Backdoor” attacks, raising fears about invisible interference in political decision-making.
Further risks come from AI’s potential to erode democracy. A recent study warns that AI systems in law enforcement concentrate power, reduce oversight, and may promote authoritarianism.