
Instagram Expands Anti-Harassment Tool to Help Teens Combat Bullying
- Written by Shipra Sanganeria, Cybersecurity & Tech Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Instagram is expanding its “Limits” tool to help teenagers restrict unwanted interactions from everyone except users in their “Close Friends” group.
According to TechCrunch, Instagram announced last week that it had tailored its “Limits” feature for teens by enabling the “Close Friends” setting by default. This adjustment is intended to protect teens from bullying and harassment.
The “Limits” tool previously let users restrict interactions only with accounts they followed or with long-term followers; recent refinements extend it to recent followers and unknown accounts as well. The feature also hides updates from restricted accounts without notifying them that they’ve been added to the limited list. Users can activate it for up to four weeks at a time.
To further protect users against unwelcome interactions, Instagram also introduced new functionality to its “Restrict” feature, enabling users to discreetly limit interactions with specific accounts without blocking or unfollowing them.
Earlier this year, Meta also introduced new restrictions preventing adults from messaging teenagers who do not follow them, alongside a feature that blurs nudity in Instagram DMs for teens.
These developments mark Instagram’s continued efforts to enhance the safety of its young users, particularly in the face of heightened scrutiny from regulators in the US and European Union.

OpenAI Deleted Accounts From Foreign Groups Using AI Models For Disinformation
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
OpenAI has deleted accounts belonging to threat actors from Russia, China, Iran, and Israel that were using its models to manipulate public opinion, spread disinformation, and influence political outcomes without disclosing their real identities.
The company published the findings of its investigation in its first report of this kind, detailing its policies, how it monitors threat actors, trends and threats observed in 2024, case studies, and other relevant insights.
“Over the last three months, our work against deceptive and abusive actors has included disrupting covert influence operations that sought to use AI models in support of their activity across the Internet,” the report states. “These included campaigns linked to operators in Russia (two networks), China, Iran, and a commercial company in Israel.”
OpenAI highlighted five cases involving the main threat groups: Bad Grammar (a name coined by OpenAI) and Doppelganger from Russia, Spamouflage from China, the International Union of Virtual Media (IUVM) from Iran, and an operation from Israel nicknamed Zero Zeno for this investigation.
Most of these threat actors used OpenAI’s models to translate text, create content, spread disinformation in multiple languages, and reach international audiences through social media channels, Telegram, forums, and various blogs and websites.
OpenAI clarified that these actors did not achieve significant results, reach large audiences, or meaningfully increase engagement by using tools like ChatGPT. The company also emphasized that its AI tools helped its team identify threats and take action.
“Detecting and disrupting multi-platform abuses such as covert influence operations can be challenging because we do not always know how content generated by our products is distributed,” said OpenAI. “But we are dedicated to finding and mitigating this abuse at scale by harnessing the power of generative AI.”
OpenAI has been working to improve the quality of its content and to optimize its AI models to reduce hallucinations, one of users’ major concerns. The company also recently partnered with News Corp to feed its AI models with reputable journalistic content.