
Image by Jernej Furman from Wikimedia Commons
OpenAI Delays ChatGPT Watermarking System Amid Internal Disagreements
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
According to the Wall Street Journal, OpenAI is debating whether to release a watermarking system designed to detect text generated by its ChatGPT AI. Although the technology has been ready for about a year, internal disagreements have delayed its rollout.
In its recently updated post, OpenAI states that its text watermarking method is accurate against localized tampering but vulnerable to global tampering. Global tampering includes methods such as running the text through a translation system, rewording it with another generative model, or inserting and then removing a special character between each word.
Another major concern for OpenAI is that its research suggests the text watermarking method could affect some groups more than others. For example, it might stigmatize the use of AI as a writing tool among non-native English speakers.
OpenAI has prioritized launching provenance solutions for audiovisual content because it poses higher risks with current AI models, especially during the U.S. election year, as reported by the Wall Street Journal. The company has also extensively researched text provenance, exploring solutions such as classifiers, watermarking, and metadata.
An OpenAI spokesperson told TechCrunch, “The text watermarking method we’re developing is technically promising but has important risks we’re weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers.”
TechCrunch also reports that OpenAI’s text watermarking would specifically target text generated by ChatGPT, rather than content from other models. It works by making subtle adjustments to how ChatGPT selects words, creating an invisible watermark that a separate tool can detect.
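OpenAI has not disclosed how its scheme works, but the general idea behind this family of text watermarks is public: a secret key and the preceding word pseudo-randomly split the vocabulary into a favored “green” subset at each step, the generator nudges its word choices toward that subset, and a detector holding the same key checks whether the text contains improbably many green words. The toy Python sketch below illustrates only that general idea; the vocabulary, key, bias, and statistic are all illustrative assumptions, not OpenAI’s implementation.

```python
import hashlib
import random

# Toy vocabulary and key; a real system operates over a language
# model's full token vocabulary. Both are illustrative assumptions.
VOCAB = ["the", "a", "model", "text", "word", "tool", "writes",
         "selects", "output", "system", "detects", "subtle", "marks"]
KEY = "watermark-demo-key"

def green_list(prev_token: str) -> set:
    """Keyed pseudo-random half of the vocabulary, derived from the
    previous token so generator and detector stay in sync."""
    seed = int(hashlib.sha256((KEY + prev_token).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: len(shuffled) // 2])

def generate(length: int, bias: float = 0.9) -> list[str]:
    """Stand-in for a language model: at each step, pick from the
    'green' subset with probability `bias`, otherwise sample freely."""
    tokens = [random.choice(VOCAB)]
    for _ in range(length - 1):
        pool = sorted(green_list(tokens[-1])) if random.random() < bias else VOCAB
        tokens.append(random.choice(pool))
    return tokens

def green_fraction(tokens: list[str]) -> float:
    """Detection statistic: the share of tokens that fall in their
    predecessor's green list. Unwatermarked text hovers near 0.5."""
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

watermarked = generate(300)
plain = [random.choice(VOCAB) for _ in range(300)]
print(f"watermarked: {green_fraction(watermarked):.2f}")  # well above 0.5
print(f"plain:       {green_fraction(plain):.2f}")        # close to 0.5
```

This sketch also makes the “global tampering” weakness above concrete: translating or rewording the text replaces words wholesale, so the green-word statistic collapses back toward chance and the detector finds nothing.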
A company survey, reported by the Wall Street Journal, found that nearly one-third of loyal ChatGPT users would be discouraged by anti-cheating technology. This puts OpenAI in a difficult position: employees are torn between maintaining transparency and attracting and retaining users.
In its blog post, OpenAI emphasizes that as the volume of generated text continues to expand, reliable provenance methods for text-based content become increasingly critical.

Photo by Borna Hržina on Unsplash
MIT AI Researchers Warn About Addiction To Artificial Intelligence
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Two AI researchers from MIT, Pat Pataranutaporn, a researcher at the MIT Media Lab who studies human-AI interaction with a focus on cyborg psychology, and Robert Mahari, a joint JD-PhD candidate at the MIT Media Lab and Harvard Law School who focuses on computational law, published a joint article in MIT Technology Review warning about addictive AI companions.
In the piece, the researchers explain that they analyzed a million ChatGPT interaction logs and discovered that sexual role-playing is the second most popular use for AI chatbots, after creative composition. They shared their results in a paper published just a few days ago.
“We are already starting to invite AIs into our lives as friends, lovers, mentors, therapists, and teachers,” wrote the researchers. “AI wields the collective charm of all human history and culture with infinite seductive mimicry.”
Pataranutaporn and Mahari are concerned that this massive experiment is rolling out in real time without true knowledge of its consequences for society or for individuals.
“As AI researchers working closely with policymakers, we are struck by the lack of interest lawmakers have shown in the harms arising from this future,” wrote the experts, highlighting the urgent need to combine research in law, psychology, and technology for AI regulation.
The researchers explained that AI companions become addictive because the technology can identify people’s desires, and its submissive nature lets it serve users exactly as they wish. Combined with already addictive social media algorithms and the integration of new generative AI technologies, the experts believe this can easily grow into an extremely addictive technology.
Mira Murati, OpenAI’s chief technology officer, has previously acknowledged the addictive qualities of the company’s technology. In an interview last year, Murati said that ChatGPT models could become “extremely addictive” and that users could become “enslaved” to this technology.
This warning comes only days after OpenAI rolled out its voice feature to users of the paid version of its product, and days after a new startup went viral for launching Friend, an AI necklace that constantly listens to and interacts with its users.