
Photo by Shutter Speed on Unsplash
Instagram’s Head Warns About Sharing AI-Generated Content
- Written by Andrea Miliani, Former Tech News Expert
Instagram head Adam Mosseri published a series of Threads posts last Sunday warning users to be careful about the content they share on social media, as generative AI can now create images that look real. Mosseri admitted that Meta has limitations in labeling all AI-generated content.
In a Rush? Here are the Quick Facts!
- Adam Mosseri, Head of Instagram, warned that AI can generate realistic images that may be perceived as real.
- Mosseri encouraged users to be more critical when consuming content on social media platforms.
- Meta is working on labeling AI content, but the company faces challenges and limitations.
Mosseri, who has worked at Meta since 2008, explained that realistic AI creations are now easy to make and that users should consider the credibility of the account providing the information before sharing it with others.
The executive compared massive film productions like Jurassic Park with today's technologies, acknowledging the quality and speed of advanced AI tools.
“Generative AI is clearly producing content that is difficult to discern from recordings of reality, and improving rapidly,” wrote Mosseri in one of the posts.
Mosseri also acknowledged that the company is not able to filter and label all the AI-generated images and content that users share.
“Our role as internet platforms is to label content generated as AI as best we can,” he added. “But some content will inevitably slip through the cracks, and not all misrepresentations will be generated with AI, so we must also provide context about who is sharing so you can assess for yourself how much you want to trust their content.”
The Head of Instagram emphasized that users should be more critical when consuming content, question whether statements could be real, and “always consider who it is that is speaking.”
Many users replied, complaining that the company was not taking more responsibility for the situation. “I haven’t seen a single meta platform label a single post as AI outside of the -voluntary- labels on IG by the poster,” wrote one user. “You’ve had years and years to be ahead of this… But instead, put your head in the sand and ignored every one of your users who have been reporting this for you,” wrote another.
Users have also reported this year that Meta’s content moderation system is not working properly, as it has been banning accounts for the wrong reasons.

Image by Emiliano Vittoriosi, from Unsplash
Ex-OpenAI Researcher And Whistleblower Found Dead
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A former OpenAI researcher turned whistleblower, Suchir Balaji, 26, was found dead in a San Francisco apartment, authorities confirmed, as first reported by The Mercury News.
In a Rush? Here are the Quick Facts!
- Former OpenAI researcher Suchir Balaji was found dead in a San Francisco apartment.
- Balaji’s death on November 26 was ruled a suicide with no signs of foul play.
- Balaji publicly criticized OpenAI’s practices, including its data-gathering methods, before his death.
Police discovered Balaji’s body on November 26 after receiving a welfare check request. The San Francisco medical examiner’s office ruled the death a suicide, and investigators found no signs of foul play, the BBC reported.
In the months leading up to his death, Balaji had publicly criticized OpenAI’s practices. The company is currently facing multiple lawsuits over its data-gathering methods.
I recently participated in a NYT story about fair use and generative AI, and why I’m skeptical “fair use” would be a plausible defense for a lot of generative AI products. I also wrote a blog post (https://t.co/xhiVyCk2Vk) about the nitty-gritty details of fair use and why I… — Suchir Balaji (@suchirbalaji) October 23, 2024
In a recent interview with the New York Times, Mr. Balaji said he saw the threats posed by AI as immediate and significant. He argued that ChatGPT and similar chatbots are undermining the commercial viability of the individuals, businesses, and internet services that originally created the digital data used to train these systems.
OpenAI, Microsoft, and other companies maintain that training their AI systems on internet data falls under the “fair use” doctrine.
This doctrine weighs four factors: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original. These companies assert they meet the criteria, including significantly transforming copyrighted works and not directly competing in the same market as those works.
Mr. Balaji disagreed. He contended that systems like GPT-4 make complete copies of their training data. While companies like OpenAI can program these systems to either replicate that data or produce entirely new outputs, the reality, he said, lies somewhere in between, as reported by The Times.
Mr. Balaji published an essay on his personal website offering what he described as a mathematical analysis to support this claim. “If you believe what I believe, you have to just leave the company,” he said, as reported by The Times.
According to Mr. Balaji, the technology violates copyright law because it often directly competes with the works it was trained on. Generative models, designed to mimic online data, can substitute for nearly anything on the internet, from news articles to online forums, reported The Times.
Balaji’s death occurred just one day after a court filing identified him as a person whose professional files OpenAI would review in connection with a lawsuit filed by several authors against the startup, noted Forbes.
Beyond legal concerns, Mr. Balaji warned that AI technologies are degrading the internet. As these tools replace existing services, they often generate false or entirely fabricated information, a phenomenon researchers call “hallucinations.” He believed this shift is changing the internet for the worse, reported The Times.
Bradley J. Hulbert, an intellectual property lawyer, noted that current copyright laws were established long before the advent of AI and that no court has yet ruled on whether technologies like ChatGPT violate these laws, as reported by The Times.
He emphasized the need for legislative action. “Given that A.I. is evolving so quickly,” he said, “it is time for Congress to step in.” Mr. Balaji concurred, stating, “The only way out of all this is regulation,” reported The Times.