
Image by Dr. Frank Gaeth, from Wikimedia Commons
Swedish PM Criticized For Using ChatGPT In Government Decisions
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Swedish Prime Minister Ulf Kristersson faced criticism after he admitted using ChatGPT to generate ideas for government decisions.
In a rush? Here are the quick facts:
- Swedish PM admits using ChatGPT for political decision-making.
- His spokesperson claims no sensitive data is shared with AI tools.
- Critics say AI use in government is dangerous and undemocratic.
Swedish Prime Minister Ulf Kristersson faces increasing public backlash after he revealed his practice of using ChatGPT and LeChat to assist his official decision-making process.
“I use it myself quite often. If for nothing else than for a second opinion,” Kristersson said, as reported by The Guardian. “What have others done? And should we think the complete opposite? Those types of questions.”
His statement sparked backlash, with Aftonbladet accusing him of falling for “the oligarchs’ AI psychosis,” as reported by The Guardian. Critics argue that relying on AI for political judgment is both reckless and undemocratic.
“We must demand that reliability can be guaranteed. We didn’t vote for ChatGPT,” said Virginia Dignum, professor of responsible AI at Umeå University.
Kristersson’s spokesperson, Tom Samuelsson, downplayed the controversy, saying: “Naturally it is not security sensitive information that ends up there. It is used more as a ballpark,” as reported by The Guardian.
But tech experts say the risks go beyond data sensitivity. Karlstad University professor Simone Fischer-Hübner advises against using ChatGPT and similar tools for official work tasks, as noted by The Guardian.
AI researcher David Bau has warned that AI models can be manipulated. “They showed a way for people to sneak their own hidden agendas into training data that would be very hard to detect.” Research shows a 95% success rate in misleading AI systems using memory injection or “Rules File Backdoor” attacks, raising fears about invisible interference in political decision-making.
Further risks come from AI’s potential to erode democracy. A recent study warns that AI systems in law enforcement concentrate power, reduce oversight, and may promote authoritarianism.

Image by Christian Wiediger, from Unsplash
4,000+ Victims Targeted By Telegram-Based Infostealer Operation
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The Python-based malware PXA Stealer enables hackers to steal data from thousands of users without being detected, and later sell it through Telegram.
In a rush? Here are the quick facts:
- Over 4,000 victims across 62 countries hit by PXA Stealer malware.
- Hackers stole 200,000+ passwords and 4 million browser cookies.
- Malware spreads via fake PDF and Word files with hidden code.
Researchers at SentinelLabs report that the Python-based PXA Stealer malware has launched a powerful new cyberattack that has infected thousands of computers in at least 62 countries, stealing more than 200,000 passwords, credit card information, and millions of browser cookies.
The operation, which first appeared in late 2024, has grown increasingly sophisticated in 2025. It uses fake downloads, such as Haihaisoft PDF Reader or Microsoft Word 2013, to trick users into opening malicious files.
These files then install malware that steals sensitive information, such as cryptocurrency wallet details, saved passwords, and browser history, and sends it to private Telegram channels via automated bots.
Researchers say “the threat actors behind these campaigns are linked to Vietnamese-speaking cybercriminal circles” that profit from selling the stolen data using Telegram’s API.
The malware, PXA Stealer, uses sophisticated methods to hide its presence. For example, it conceals its files behind innocuous names such as “images.png” and “Document.pdf” and employs signed programs to evade detection. Once installed, it exfiltrates data through Telegram, which, the researchers say, enables it to remain undetected by most antivirus software.
Victims include users in South Korea, the U.S., the Netherlands, Hungary, and Austria. Telegram is used not only to transmit data but also to organize and manage the stolen information. One bot, called ‘Logs_Data_bot’, connects to multiple channels, such as ‘James – New Logs’ and ‘Adonis – Reset Logs’, which categorize the stolen data and send automated updates to the hackers.
“Each bot is tied to as many as 3 Telegram channels,” said the researchers, and the data is neatly sorted and packaged for quick resale on services like Sherlock.
The investigation shows how cybercriminals are now using platforms like Telegram and Cloudflare to run operations quickly, cheaply, and at scale, turning information theft into a highly efficient business.