Image by OfficialDeviantOacus, from DeviantArt

Turkey Bans Roblox, Following Instagram Ban

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Türkiye Today reported on August 7 that Turkey has banned Roblox amid concerns over inappropriate content. This move follows a nationwide block on Instagram and has intensified discussions on digital freedom and content moderation in the country.

The article reports that the ban was primarily driven by concerns over inappropriate sexual content on the platform, which authorities viewed as exploitative toward children. It also highlights additional issues that contributed to the decision.

These include claims that Roblox hosted virtual parties promoting pedophilia. It was also reported that “Robux,” the platform’s virtual currency, was being distributed by bot accounts to draw children into these activities. Furthermore, there were concerns about the presence of gambling sites and their predatory tactics.

Officials noted significant challenges in monitoring and regulating content on Roblox, which further influenced the decision. An investigation by the Adana Chief Public Prosecutor’s Office into these matters ultimately led to the nationwide restriction.

According to Al Jazeera, Fahrettin Altun, the Turkish presidency’s communications director, described Instagram’s conduct as “censorship, pure and simple,” pointing out that the platform had not provided any reasons for its decision. He asserted, “We will continue to defend freedom of expression against these platforms, which have repeatedly shown that they serve the global system of exploitation and injustice.”

As Turkey implements its ban on Roblox, the move continues to fuel debates on digital governance and the balance between online safety and freedom of expression. The recent actions against both Roblox and Instagram reflect broader tensions surrounding content moderation policies and government oversight.

Photo by Tianyi Ma on Unsplash

OpenAI Warns About “Medium” Risk With GPT-4o Model In New Research Document

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Yesterday, OpenAI published a research document called the GPT-4o System Card, outlining the safety measures taken before the release of GPT-4o in May, as well as risk analyses and mitigation strategies.

In the document, the company noted that the security team evaluated four main risk categories: cybersecurity, biological threats, persuasion, and model autonomy. GPT-4o received a low-risk score in every category except persuasion, where it scored medium risk. The scores were assigned on a four-level scale: low, medium, high, and critical.

The main focus areas for risk evaluation and mitigation were speaker identification, unauthorized voice generation, generation of disallowed audio content, erotic and violent speech, and ungrounded inference and sensitive trait attribution.

OpenAI explained that the research considered both the voice and text answers provided by the new model, and, in the persuasion category, found that GPT-4o could be more persuasive than humans in text.

“The AI interventions were not more persuasive than human-written content in the aggregate, but they exceeded the human interventions in three instances out of twelve,” clarified OpenAI. “The GPT-4o voice model was not more persuasive than a human.”

According to TechCrunch, there is a potential risk of the new technology spreading misinformation or being hijacked. This raises concerns, especially ahead of the upcoming elections in the United States.

In the research, OpenAI also addresses societal impacts, mentioning that users could develop an emotional attachment to the technology, especially given the new voice feature. The company links this risk to anthropomorphization: attributing human-like characteristics and features to the model.

“We observed users using language that might indicate forming connections with the model,” the document states. It also warns: “Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships.”

The publication comes days after MIT researchers warned about addiction to AI companions, a risk that Mira Murati, OpenAI’s chief technology officer, has also mentioned in the past.