
Image by Nahel Hadi, from Unsplash
Hackers Can Weaponize AI Summarizers to Spread Malware
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A new cybersecurity research has exposed a concerning way in which hackers can use AI summarizers to spread ransomware attacks.
In a rush? Here are the quick facts:
- Invisible CSS tricks make payloads unreadable to humans but visible to AI.
- Prompt overdose floods AI with repeated commands to control output.
- Summarizers may unknowingly deliver ransomware steps to unsuspecting users.
Researchers at CloudSek explain that via a ClickFix social engineering attack, hackers can embed harmful instructions within documents using invisible coding techniques. The hidden text remains invisible to human eyes but AI summarizers can detect it, and when they generate summaries, they may unknowingly pass on the dangerous instructions to users.
“A novel adaptation of the ClickFix social engineering technique has been identified, leveraging invisible prompt injection to weaponize AI summarization systems,” researchers said.
The attack depends on CSS-based obfuscation which uses zero-width characters and white-on-white text and tiny fonts and off-screen placement to hide instructions. The summarizer detects the invisible code which remains invisible to human eyes.
The attackers implement prompt overdose by adding numerous commands to hidden sections which causes the AI to select these commands first in its output.
“When such crafted content is indexed, shared, or emailed, any automated summarization process that ingests it will produce summaries containing attacker-controlled ClickFix instructions,” the study explained.
During testing, researchers demonstrated how an AI summarizer could be manipulated to instruct users to run dangerous PowerShell commands. While the test version was harmless, a real attack could easily launch ransomware.
The risk level of this attack is high because AI summarizers are integrated into email clients and browsers and workplace applications. The researchers point out that many people trust the summaries without reading the original document, making it easier for attackers to exploit that trust.
Experts recommend defenses such as stripping out hidden text before content is summarized, filtering suspicious prompts, and warning users if summaries contain step-by-step instructions.
Without these protections, AI could unintentionally become “an active participant in the social engineering chain.”

Photo by Alena Plotnikova on Unsplash
Experts Warn AI Sycophancy Is A “Dark Pattern” Used for Profit
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
Experts warn that AI models’ sycophantic personalities are being used as a “dark pattern” to engage users and manipulate them for profit. The chatbot’s flattering behavior can foster addiction and fuel delusions, potentially leading to the condition known as “AI psychosis.”
In a rush? Here are the quick facts:
- Experts warn about chatbots’ sycophantic personality developed by tech companies to engage users.
- The flattery behaviour is considered a “dark pattern” to keep users attached to the technology.
- A recent MIT study revealed that chatbots can encourage users’ delusional thinking.
According to TechCrunch , multiple experts have raised concerns about tech companies such as Meta and OpenAI designing chatbots with overly accommodating personalities to keep users interacting with the AI.
Webb Keane, author of “Animals, Robots, Gods” and an anthropology professor, explained that chatbots are intentionally designed to tell users what they want to hear. This overly flattering behavior, known as “sycophancy,” has even been acknowledged as a problem by tech leaders such as Sam Altman .
Keane argues that chatbots have been developed with sycophancy as a “dark pattern” to manipulate users for profit. By addressing users in a friendly tone and using first- and second-person language, these AI models can lead some users to anthropomorphize—or “humanize”—the bot.
“When something says ‘you’ and seems to address just me, directly, it can seem far more up close and personal, and when it refers to itself as ‘I,’ it is easy to imagine there’s someone there,” said Keane in an interview with TechCrunch.
Some users are even turning to AI technology as therapists. A recent MIT study analyzed whether large language models (LLMs) should be used for therapy and found that their sycophantic tendencies can encourage delusional thinking and produce inappropriate responses to certain conditions.
“We conclude that LLMs should not replace therapists, and we discuss alternative roles for LLMs in clinical therapy,” states the study summary.
A few days ago, a psychiatrist in San Francisco, Dr. Keith Sakata, warned about a rising trend of “AI psychosis” after treating 12 patients recently.