
Image by Franco Alva, from Unsplash
Adult Sites Use Malware-Laced SVG Files to Hijack Facebook Likes
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
As more countries require age verification on adult websites, some shady adult sites are using sophisticated malware tricks to promote themselves on Facebook.
In a rush? Here are the quick facts:
- Trojan.JS.Likejack silently clicks “Like” on Facebook without user consent.
- SVG files can contain harmful JavaScript, not just images.
- Many promoted sites claim AI-generated explicit celebrity images.
Security researchers at Malwarebytes discovered that dozens of adult websites hide malicious code in SVG image files, causing visitors’ browsers to “Like” Facebook posts without their consent.
The attackers embed dangerous JavaScript inside the SVG files, which can carry executable scripts alongside their image data.
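Hiding script inside an image format is possible because SVG is XML: a `<script>` element can sit next to the drawing instructions and will run whenever the file is opened directly or embedded inline in a page (though not when loaded through a plain `<img>` tag). Here is a minimal, sanitized sketch of the idea, with the payload replaced by a harmless log statement rather than the attackers’ actual code:

```xml
<!-- A benign stand-in for the technique described above: SVG is XML,
     so a <script> element can ride along with the graphics. -->
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <rect width="200" height="200" fill="steelblue"/>
  <script type="text/javascript"><![CDATA[
    // Executes when the SVG is opened directly or embedded inline in HTML.
    // A malicious version would hide a Facebook "Like" clickjack here.
    console.log("JavaScript inside an SVG just ran");
  ]]></script>
</svg>
```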
“When one of these people clicks on the image, it causes browsers to surreptitiously register a like for Facebook posts promoting the site,” explains Ars Technica. “The user will have to be logged in on Facebook for this to work, but we know many people keep Facebook open for easy access,” said Malwarebytes researcher Pieter Arntz.
The malicious code is heavily disguised using a technique called “JSFuck,” which rewrites the JavaScript as near-unreadable text, making detection difficult. Once triggered, it downloads a Trojan named Trojan.JS.Likejack, which silently clicks on adult-content posts to boost their visibility across Facebook.
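JSFuck is a publicly documented encoding that rewrites any JavaScript program using only the six characters `[]()!+`, by abusing the language’s type-coercion rules. A few of its well-known building blocks show how quickly readable code disappears:

```javascript
// JSFuck primitives: JavaScript coercion using only [ ] ( ) ! +
![]                  // false  (an empty array is truthy, so negation gives false)
!![]                 // true
+[]                  // 0      (empty array coerced to a number)
+!![]                // 1
![] + []             // "false" (boolean coerced to string by array concatenation)
(![] + [])[+[]]      // "f"    (character 0 of "false")
(!![] + [])[+[]]     // "t"    (character 0 of "true")
// Chaining fragments like these spells out arbitrary source text,
// which the encoder then executes via a Function-constructor call.
```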
Many of the promoted sites claim to show explicit celebrity photos, often generated by AI, and are hosted on free blogging platforms like blogspot.com.
The attackers exploit the common misconception that SVG files are harmless images. Because SVG files can carry HTML and JavaScript, they make an effective vehicle for cyberattacks.
Facebook regularly shuts down abusive accounts, but the malicious profiles frequently return, making this an ongoing problem.

Image by Denise Chan, from Unsplash
Man Poisoned Himself After Following ChatGPT’s Advice
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A 60-year-old man gave himself a rare 19th-century psychiatric illness after following advice from ChatGPT.
In a rush? Here are the quick facts:
- He replaced table salt with toxic sodium bromide for three months.
- Hospitalized with hallucinations, paranoia, and electrolyte imbalances due to poisoning.
- ChatGPT suggested bromide as a chloride substitute without health warnings.
A case study published in the Annals of Internal Medicine reveals the case of a man suffering from bromism, a condition caused by poisoning from sodium bromide.
The poisoning apparently stemmed from his attempt to replace table salt with a dangerous chemical that ChatGPT suggested he use. The man reportedly arrived at the emergency room experiencing paranoia and auditory and visual hallucinations, and accusing his neighbor of poisoning him.
Subsequent medical tests revealed abnormal chloride levels, along with other indicators confirming bromide poisoning. The man revealed that he had been following a restrictive diet and using sodium bromide in place of salt, after asking ChatGPT how to eliminate chloride from his diet.
“For 3 months, he had replaced sodium chloride with sodium bromide obtained from the internet after consultation with ChatGPT,” the study reads. The researchers explain that sodium bromide is typically used as an anticonvulsant for dogs or as a pool cleaner, but is toxic to humans in large amounts.
The man spent three weeks in hospital, where his symptoms gradually improved with treatment.
The study highlights how AI tools can give users incomplete and dangerous guidance. When the researchers themselves asked ChatGPT to suggest chloride alternatives, it likewise offered sodium bromide, with no warning about its toxicity and no question about why they were asking.
The research warns that AI can also spread misinformation and lacks the critical judgment of a healthcare professional.
404 Media notes that OpenAI recently announced improvements in GPT-5 aimed at providing safer, more accurate health information. The case underscores the importance of using AI cautiously and consulting qualified medical professionals for health decisions.