
Image by Brett Jordan from Unsplash
Reddit Increases Admin Control Over Community Settings Amidst Protests
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Reddit now requires moderators to request community type changes from admins.
- The policy responds to last year’s protests against API pricing changes.
- Thousands of subreddits went private, disrupting the site.
Reddit announced today that it is tightening its grip on community management by strengthening its staff’s control over subreddit settings.
Starting today, moderators will need to submit a request to Reddit administrators whenever they want to change a community’s type—whether it’s Public, Restricted, Private, SFW, or NSFW.
This new policy comes in response to the protests against Reddit’s controversial API pricing changes last year, as noted by The Verge.
Last year, thousands of subreddits switched to private, effectively restricting access to their content and contributing to the shutdown of several apps and communities.
The Verge points out that going private was an effective way for communities to make a statement and raise awareness during the protests. It restricted access to content that Reddit users expected to remain public, which also hurt the site’s search visibility on platforms like Google.
During the protests, Reddit communicated with moderators of these communities, warning that posts would be removed unless the subreddits were reopened, as noted by The Verge.
The platform also stated that designating a subreddit as NSFW (Not Safe For Work)—a tactic used by some moderators to limit access and exclude it from advertising—was deemed “not acceptable,” reported The Verge.
Reddit’s VP of Community, Laura Nestler, known as u/Go_JasonWaterfalls, wrote in the announcement, “The ability to instantly change Community Type settings has been used to break the platform and violate our rules.”
Reddit stated that the purpose of this update is to minimize disruptions on the platform and ensure adherence to its rules, stressing that the intent is not to prevent users from expressing protest.
Reddit stated that it will review requests to change community types within 24 hours, every day of the year. For communities with fewer than 5,000 members or those less than 30 days old, requests will be automatically approved.
In cases where a mod team decides to step down, Reddit’s admins will assist in finding new moderators while temporarily restricting the community.
TechCrunch reports that Reddit shared this update in advance with the Mod Council, which includes over 100 moderators from various subreddits, to gather their feedback and advice.

Image generated with DALL·E through ChatGPT
Opinion: Is Meta’s Safety Regulation System Broken?
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Almost any content creator or active social media manager has faced the same issue when posting on Facebook or Instagram recently: a post or an account is banned, probably for the wrong reasons.
This frustrating situation is just one piece of a larger problem with Meta’s content regulation system. While Meta has put many control measures in place—some of them absurd—the root of the problem does not appear to have been solved.
Over the past few months, Meta has introduced numerous updates to its content guidelines and implemented stricter rules aimed at building a healthier online environment. One of the consequences has been that many businesses, publishers, and user accounts have been banned, leading to hundreds of complaints across forums, chat platforms, and social media channels. Multiple news publishers and brands were removed from Meta’s platforms in certain regions this year, raising concerns among business owners, journalists, and content creators.
Despite Meta’s recent updates, which give the impression of stricter moderation and closer scrutiny of content shared on its platforms, posts related to drugs, suicide, sexual harassment, bullying, hate speech, abuse, and fake news continue to slip through algorithms, reaching vulnerable communities.
I can’t help but wonder: What is happening to Meta’s safety regulation system?
Accounts Banned For the Wrong Reasons
It all starts with a similar message: “Your Meta account doesn’t follow our rules.” Many Facebook and Instagram users have been banned or temporarily kicked out of their accounts “for not complying” with Meta’s rules, even when they believe they have.
It’s a situation we have experienced at Wizcase. Meta’s system flagged legitimate content as inappropriate, forcing the community manager to go through an appeal process and provide government-issued IDs.
Hundreds of users, community managers, and account managers have complained on Reddit and other forums, chats, and social media channels about similar situations. In many unfortunate cases, users lose their accounts with no explanation and nothing they can do about it.
“The support team at Facebook is terrible. No matter how many times we tell them everything or explain everything to them, they just simply don’t understand,” said one user on Reddit in a thread about banned accounts. “One thing I can say right away is that you’re not going to get your account reinstated unless you were spending hundreds of thousands per day,” added another.
Although this problem may seem to affect only content creators, it is just a small part of a bigger challenge.
Meta Against Lawsuits
For years, Meta has been investing in content moderation and in new strategies to make its platforms safer for users and to shield itself from further lawsuits. The most recent, filed by Kenyan content moderators, seeks $1.6 billion in compensation for mass layoffs and for the distressing material they were exposed to while reviewing content for Facebook.
The tech company relies on third parties to help with content regulation and develops tools to recognize when content violates the platform’s rules. However, these measures have not been enough, and the situation has gotten out of hand, especially among underage users.
In 2021, Meta introduced new protection features, but that didn’t stop the Senate from including Instagram and Facebook among the platforms considered harmful to children last year. The tech giant also faced a joint lawsuit filed by 33 US states over its manipulative and harmful algorithms.
Just a few days ago, the company announced new Teen Accounts to protect teenagers and “reassure parents that teens are having safe experiences.” Will it be enough to protect children? And what about older users?
An Unsafe Ecosystem
Facebook, Instagram, and WhatsApp are still plagued with harmful content that affects users of all ages, professions, and social groups, and the problem is unlikely to be solved any time soon.
The multiple lawsuits against Meta show that the company is struggling to protect users from damaging content, while it has also been hurting creators, publishers, and brands with unfair filters and poorly implemented safety tools.
Of course, this is a complex issue that deserves deeper analysis, but maybe it is time to accept that Meta has not been able to handle the situation. Band-Aid fixes won’t repair a system that’s fundamentally broken. Now the real question is: How many more users need to be unfairly banned, misled, or manipulated before real change happens?