
Image generated with DALL·E through ChatGPT
Opinion: Is Meta’s Safety Regulation System Broken?
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Almost any content creator or active social media manager has faced the same issue when posting on Facebook or Instagram recently: a post or an account is banned, probably for the wrong reasons.
This frustrating situation is just one piece of a larger problem with Meta’s content regulation system. While Meta appears to have plenty of control measures, some of them absurd, the root of the problem remains unaddressed.
Over the past few months, Meta has introduced numerous updates to its content guidelines and implemented stricter rules aimed at building a healthier online environment. One of the consequences has been that many businesses, publishers, and user accounts have been banned, leading to hundreds of complaints across forums, chat platforms, and social media channels. Multiple news publishers and brands were removed from Meta’s platforms in certain regions this year, raising concerns among business owners, journalists, and content creators.
Despite Meta’s recent updates, which give the impression of stricter moderation and closer scrutiny of the content shared on its platforms, posts related to drugs, suicide, sexual harassment, bullying, hate speech, abuse, and fake news continue to slip past its algorithms and reach vulnerable communities.
I can’t help but wonder: What is happening to Meta’s safety regulation system?
Accounts Banned For the Wrong Reasons
It all starts with a similar message: “Your Meta account doesn’t follow our rules.” Many Facebook and Instagram users have been banned or temporarily kicked out of their accounts “for not complying” with Meta’s rules, even when they believe they have.
It’s a situation we have experienced at Wizcase. Meta’s system flagged legitimate posts as inappropriate, forcing the community manager to go through an appeal process and provide government-issued ID.
Hundreds of users, community managers, and account managers have complained about similar situations on Reddit and other forums, chats, and social media channels. In many unfortunate cases, users lose their accounts, receive no explanation, and can do nothing about it.
“The support team at Facebook is terrible. No matter how many times we tell them everything or explain everything to them, they just simply don’t understand,” said one user on Reddit in a thread about banned accounts. “One thing I can say right away is that you’re not going to get your account reinstated unless you were spending hundreds of thousands per day,” added another.
This problem may seem to affect only content creators, but it is just a small part of a bigger challenge.
Meta Against Lawsuits
For years, Meta has been investing in content moderation and new strategies to make its platforms safer for users and to shield itself from further lawsuits. The most recent one was filed by Kenyan content moderators, who are seeking $1.6 billion in compensation for mass layoffs and for the distressing material they were exposed to while reviewing content for Facebook.
The tech company relies on third parties to help with content moderation and develops tools to detect when content violates its platform rules. However, these measures have not been enough, and the situation has gotten out of hand, especially among underage users.
In 2021, Meta introduced new protection features, but that didn’t stop the Senate from listing Instagram and Facebook among the platforms considered harmful to children last year. The tech giant also faced a joint lawsuit filed by 33 U.S. states over its manipulative and harmful algorithms.
Just a few days ago, the company announced new Teen Accounts to protect teenagers and “reassure parents that teens are having safe experiences.” Will that be enough to protect children? And what about older users?
An Unsafe Ecosystem
Facebook, Instagram, and WhatsApp are still plagued with harmful content that affects users of all ages, professions, and social groups, and the problem is unlikely to be solved any time soon.
The multiple lawsuits against Meta show that the company is struggling to protect users from damaging content, while its unfair filters and poorly implemented safety tools keep hurting creators, publishers, and brands.
Of course, this is a complex issue that deserves to be addressed in depth, but maybe it is time to accept that Meta has not been able to handle the situation. Band-Aid fixes won’t repair a system that’s fundamentally broken. Now the real question is: How many more users need to be unfairly banned, misled, or manipulated before real change happens?

Image by AppsHunter, from Unsplash
Epic Games Files Lawsuit Against Google And Samsung
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- The lawsuit alleges anti-competitive practices involving the Auto Blocker feature.
- Tim Sweeney criticized Google for misleading users about app safety.
- Samsung maintains that its security features are designed to protect users.
Epic Games, the creator of Fortnite, has announced that it has filed a lawsuit against Google and Samsung, accusing both companies of coordinating to block competition in app distribution on Samsung devices through the default-on “Auto Blocker” feature.
According to Epic, Auto Blocker is the latest in a series of agreements where Google and Samsung have conspired to protect Google’s monopoly power.
This feature, pre-enabled on new Samsung phones, prevents users from installing apps from any source other than the Google Play Store or Samsung’s app store.
Epic claims the feature solidifies the Google Play Store as the only viable app source on Samsung devices, preventing other app stores from competing on equal terms.
This lawsuit follows Epic’s April proposal for an injunction aimed at increasing competition in Google’s Play Store by allowing third-party app stores and billing systems equal access, challenging Google’s control over Android’s app ecosystem.
Epic argues that there is no system in place for rival stores to gain “authorized” status, according to The Verge. As reported by Reuters, Epic claims Samsung and Google are violating U.S. antitrust laws by restricting consumer options and stifling competition, which could lead to higher app prices.
Epic’s CEO, Tim Sweeney, criticized Google’s approach to app security, stating that it falsely portrays third-party apps as unsafe. “Google is pretending to protect users while knowing full well that Fortnite is safe, as they have previously distributed it,” Sweeney told Reuters.
The BBC also notes that Google Play and Samsung have previously collaborated with Epic on Fortnite-related events, including digital skin promotions.
Meanwhile, Samsung has rejected Epic’s claims, maintaining that its security features are designed to protect users from harmful apps.
“We are committed to ensuring users’ privacy and security,” Samsung said in a statement to Reuters, arguing that users can turn off the Auto Blocker feature if they wish.
“Contrary to Epic Game’s assertions, Samsung actively fosters market competition, enhances consumer choice, and conducts its operations fairly,” a spokesperson for Samsung said in a statement reported by CNET.
Multiple news outlets have reached out to Google, but the company has not yet provided a response to the request for comment.