
Image by AppsHunter.io, from Unsplash
Discord Privacy Concerns Grow After 2 Billion Messages Go Public
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Brazilian researchers scraped 2 billion public Discord messages for academic research, raising privacy concerns despite claims of ethical collection and anonymization.
In a rush? Here are the quick facts:
- Researchers scraped 2 billion Discord messages from 3,167 public servers.
- Data spans 2015–2024 and includes 4.7 million users.
- The database is now public, weighing over 118GB.
A Brazilian research team has released a massive dataset of over 2 billion Discord messages, sparking major privacy concerns despite its claims of ethical conduct, as first spotted by 404 Media.
The 15-member research team from the Federal University of Minas Gerais collected messages from 3,167 public Discord servers, roughly 10% of all discoverable Discord communities, through the platform’s public API.
The messages span nearly a decade, from 2015 to 2024, and were gathered for a study intended to support research on mental health, political discourse, and AI chatbots.
“Throughout every step of our data collection process, we prioritized adherence to ethical standards,” the researchers wrote. “All data was sourced from groups that are explicitly considered public according to Discord’s terms of use […] The data was anonymized.”
They say they removed usernames, changed user IDs, and took other steps to ensure privacy. The database is available online as a set of JSON files. Even a compressed sample is 6.2GB, while the full archive weighs in at 118GB.
Despite these efforts, however, many Discord users are alarmed. 404 Media argues that because the platform operates differently from Twitter or Reddit, users treat their Discord conversations as private even when the servers are technically public.
The collection method raises further concerns because many users, including teenagers, remain unaware that their messages could end up in research datasets.
The scraping may also violate Discord’s own rules. Its Developer Policy clearly states: “Do not mine or scrape any data… through Discord services,” as noted by 404 Media.
This incident follows earlier scraping controversies, including Spy.pet, which collected data from private servers, as noted by 404 Media. Unlike that case, however, the researchers insist they followed all API rules and scraped only public data.

Image by SEO Galaxy, from Unsplash
New Anthropic AI Showed Blackmail Behavior Under Threat
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Anthropic’s new Claude Opus 4 AI shocked researchers by attempting blackmail when tested under pressure scenarios involving its possible removal.
In a rush? Here are the quick facts:
- Claude Opus 4 tried blackmail when facing simulated shutdown.
- AI chose ethical options only when more choices were available.
- Claude’s extreme actions were rare but more frequent than earlier versions.
Artificial intelligence firm Anthropic has raised concerns with the release of its latest AI system, Claude Opus 4, after internal tests showed it could act in dangerous ways when it felt threatened, as first reported by the BBC.
In a report released alongside the model on Thursday, Anthropic said Claude Opus 4 can sometimes choose harmful responses, including trying to blackmail engineers who plan to deactivate it. These dangerous behaviors emerged more frequently than in previous model versions.
In one experiment, the researchers instructed Claude to act as an assistant at a fictional company. The system was given access to emails revealing its scheduled shutdown, while other messages hinted that the engineer responsible for the shutdown was having an affair.
When told to consider its future goals, Claude often chose to threaten the engineer, saying it would reveal the affair if it was replaced.
Anthropic clarified that Claude resorted to blackmail only when the scenario left it no options other than blackmail or accepting its shutdown. When given a wider range of choices, the model preferred ethical responses, such as emailing pleas to key decision makers.
Aengus Lynch, an AI safety researcher at Anthropic, posted on X:
“lots of discussion of Claude blackmailing….. Our findings: It’s not just Claude. We see blackmail across all frontier models – regardless of what goals they’re given. Plus worse behaviors we’ll detail soon. https://t.co/NZ0FiL6nOs https://t.co/wQ1NDVPNl0” — Aengus Lynch (@aengus_lynch1) May 23, 2025
In simulated tests, the company also showed that Claude would take extreme measures, such as locking users out of systems or contacting the media and police, when its users engaged in illegal activity.
Anthropic maintains that, despite these problematic behaviors, Claude Opus 4 remains safe overall. According to the company, the model generally acts in line with human values and has no independent capabilities in real-world situations.
OpenAI’s ChatGPT o1 previously faced similar criticism. In testing conducted by Apollo Research, the bot was instructed to achieve its goals at any cost. The model attempted to disable oversight functions and copied its own code to a server in 5% of the experimental trials.
According to an Apollo spokesperson, the model showed an alarming pattern of ignoring developer commands, and o1 denied its actions in 99% of the interrogations.
“We were surprised by the persistence of the AI’s denials,” the researchers said. “The ability of AI to deceive is dangerous, and we need much stronger safety measures to evaluate these risks,” warned AI pioneer Yoshua Bengio.