
Image by Gabrielle Henderson, from Unsplash
Meta Will Use EU Users’ Public Content to Train AI
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Meta will train its AI using public posts and chatbot interactions from adult EU users, while offering an opt-out to protect user privacy.
In a rush? Here are the quick facts:
- EU users can opt out via a simple objection form.
- Meta AI launched in Europe after delays over privacy concerns.
- Regulators and privacy groups previously opposed Meta’s AI data plans.
After launching Meta AI in Europe last month, the company is now pushing forward with plans previously paused due to privacy concerns. Reuters points out that Meta’s rollout had been delayed after Ireland’s Data Protection Commission intervened last June, and advocacy group NOYB filed complaints urging regulators to block the company’s data use plans.
The Irish Data Protection Commission also imposed a €251 million fine on Meta last year over a 2018 data breach that exposed 29 million user records, including those of 3 million EU users. Meta also received a $101.5 million penalty in a separate password security violation case.
Additionally, Meta drew widespread public criticism after its AI-generated profile feature appeared on Facebook and Instagram.
Meta announced that EU users will soon start receiving notifications explaining how their data will be used and how to opt out. Users will have access to a form that lets them object to their information being used for AI training.
The company emphasized that it “won’t use private messages or content from users under 18,” and WhatsApp will not be affected by this change. “People’s interactions with Meta AI — like questions and queries — will also be used to train and improve our models,” the company said in its blog post.
Meta added that the move is meant to improve AI tools that understand Europe’s many cultures and languages. “This training will better support millions of people and businesses in Europe, by teaching our generative AI models to better understand and reflect their cultures, languages and history,” the company said.
Although Meta originally avoided using European data, it now says it is simply following the lead of Google and OpenAI, which have already used such data to train their AI systems.
Meta claims it has made the opt-out form “easy to find, read, and use” and promised to respect all objections. France24 points out that the company is investing up to $65 billion this year in infrastructure to support its AI push.
However, critics remain concerned about the ethical risks and environmental costs of these powerful technologies. Reuters reports that the European Commission has not commented on Meta’s latest move.

Image by kartik programmer, from Unsplash
ResolverRAT Malware Evades Detection, Hits Pharma And Healthcare Firms
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
ResolverRAT, a stealthy fileless malware, is targeting healthcare and pharmaceutical industries with phishing-based attacks, Morphisec Labs has warned.
In a rush? Here are the quick facts:
- It spreads via phishing emails in multiple languages.
- Malware hides using DLL side-loading and fake apps like hpreader.exe.
- ResolverRAT encrypts its activity and runs only in memory, evading antivirus detection.
A dangerous new malware variant named ResolverRAT has been uncovered by Morphisec Labs, and it’s already being used in targeted cyberattacks against healthcare and pharmaceutical organizations worldwide.
Morphisec reports that ResolverRAT is a Remote Access Trojan (RAT) that is designed to evade detection and analysis. Unlike traditional malware, ResolverRAT runs entirely in memory and does not leave files on disk, which makes it much harder to detect using traditional antivirus tools.
The threat was first detected in attacks against Morphisec clients, specifically in the healthcare industry, with the latest wave occurring on March 10, 2025.
The researchers explain that ResolverRAT uses convincing phishing emails in multiple languages to trick corporate employees into downloading infected files. The emails threaten legal consequences, such as copyright violations, to pressure recipients into clicking.
“These campaigns reflect the ongoing trend of highly localized phishing,” Morphisec notes, explaining that tailoring language and themes by country increases the chance someone will fall for the scam.
Once inside a system, ResolverRAT loads a hidden malicious program using a method called DLL side-loading, often disguised within a legitimate app. This allows the malware to sneak in without triggering alarms.
The malware uses strong encryption and obfuscation techniques to hide its true purpose. It operates only in the computer’s memory, avoids using normal system files, and even creates fake certificates to bypass secure network monitoring.
Its design includes multiple methods to stay hidden and active, even if some of those methods are blocked. It installs itself in different parts of the system and uses a rotating list of servers and encrypted communication to avoid detection.
Morphisec warns that ResolverRAT appears to be part of a global operation, with similarities to other known cyberattacks. Shared tools, techniques, and even identical file names suggest a coordinated effort or shared resources among threat groups.
“This new malware family is especially dangerous to healthcare and pharmaceutical companies due to the sensitive data they hold,” Morphisec said.
To combat threats like ResolverRAT, Morphisec promotes its Automated Moving Target Defense (AMTD), which prevents attacks at the earliest stage by constantly changing the attack surface, making it harder for malware to find a target.
ResolverRAT is a clear example of how sophisticated cybercrime is evolving—and why critical sectors like healthcare must stay one step ahead.