
Image by Jonas Leupe, from Unsplash
New Android Malware Lets Hackers Steal Your Card With A Tap
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
SuperCard X malware targets Android phones in Italy, using NFC relay and fake banking apps to instantly steal and misuse credit card data.
In a rush? Here are the quick facts:
- Victims are tricked via fake bank messages sent through SMS or WhatsApp.
- Malware hides in a fake app posing as a bank security tool.
- Card data is transmitted in real time to attackers’ second device.
Cybersecurity researchers at Cleafy have uncovered a new Android malware strain called SuperCard X, which lets scammers steal card information and make immediate cash withdrawals through victims' Android devices. The scam exploits Near-Field Communication (NFC), a technology most people use daily without being aware of its risks.
The malware operates under a Malware-as-a-Service (MaaS) model and is currently being used to carry out instant contactless fraud through victims' phones in Italy.
NFC is a wireless technology that lets devices exchange data over very short distances, typically just a few centimeters. It is what makes contactless payments possible when you tap a card or phone at a store. But attackers are now using NFC to pull off something called a relay attack.
In a relay attack, scammers trick victims into installing a malicious app, in this case SuperCard X, on their phone. Once installed, the app silently captures payment card data when the user taps their card against the phone, believing it is for a legitimate reason.
That data is then instantly sent to a second device controlled by the scammers, who use it to make unauthorized purchases or ATM withdrawals elsewhere.
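On Android, capturing a tapped card does not require exotic capabilities: the platform's standard reader-mode API hands any foreground app with the NFC permission a raw channel to a contactless card held against the phone. The Kotlin sketch below is a benign, minimal illustration of that legitimate API, presumably the kind of capability SuperCard X abuses; the class name CardWatcherActivity is hypothetical, and this is not code from the malware.

```kotlin
import android.app.Activity
import android.nfc.NfcAdapter
import android.nfc.Tag
import android.nfc.tech.IsoDep

// Benign illustration only: Android's standard NFC reader-mode API.
// CardWatcherActivity is a hypothetical name; requires the
// android.permission.NFC permission in the app manifest.
class CardWatcherActivity : Activity(), NfcAdapter.ReaderCallback {

    override fun onResume() {
        super.onResume()
        // Any foreground app can ask the system to hand it contactless
        // cards tapped against the phone.
        NfcAdapter.getDefaultAdapter(this)?.enableReaderMode(
            this, this,
            NfcAdapter.FLAG_READER_NFC_A or
                NfcAdapter.FLAG_READER_NFC_B or
                NfcAdapter.FLAG_READER_SKIP_NDEF_CHECK,
            null
        )
    }

    override fun onTagDiscovered(tag: Tag) {
        // IsoDep exposes a raw ISO 7816 channel to the card. A legitimate
        // app processes the exchange locally; a relay app instead forwards
        // this traffic to a second device.
        IsoDep.get(tag)?.use { card ->
            card.connect()
            // card.transceive(command) would exchange APDUs with the card here.
        }
    }

    override fun onPause() {
        super.onPause()
        NfcAdapter.getDefaultAdapter(this)?.disableReaderMode(this)
    }
}
```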
The newly discovered attack starts with a scam message, sent via SMS or WhatsApp, that warns the victim about a suspicious payment that never actually occurred. The message pressures the victim into calling a phony bank support hotline, where the scammers walk them through a series of instructions that ultimately lead to the theft of their card information.
First, they convince the victim to reveal their card PIN, remove spending limits from their card, and download an app disguised as a bank security tool. This app hides the SuperCard X malware, which silently captures the victim’s card data via NFC when the user brings their physical card near the infected phone.
The stolen data is instantly transmitted to a second device, which the attackers use to make unauthorized contactless payments and ATM withdrawals. Because the scheme targets the card itself rather than any particular bank, Cleafy researchers observed that the attack works regardless of the financial institution involved.
This method allows criminals to commit fraud in real time, making it harder for banks to detect or stop the activity before money is withdrawn or spent.
SuperCard X uses two apps: “Reader” to capture card data and “Tapper” to carry out the fraudulent transaction. Both apps connect through a central server run by the malware developers. According to Cleafy, the malware is based on previously known tools and shares similarities with NGate, another Android threat identified in 2024.
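Cleafy's report does not publish the malware's internals, but the Reader/Tapper split implies a simple relay shape: card responses captured on one phone are tagged with a session and replayed from a second. The Kotlin model below is a conceptual sketch of such a relay frame, with all names hypothetical; it is not actual SuperCard X code.

```kotlin
// Conceptual model of a relay frame, inferred from Cleafy's description of
// the Reader/Tapper architecture. All names are hypothetical.

// Which side of the relay produced a frame.
enum class Role { READER, TAPPER }

// One captured card response as it crosses the attackers' central server.
data class RelayFrame(
    val sessionId: String, // pairs a Reader session with the Tapper replaying it
    val role: Role,        // producer of this frame
    val payload: ByteArray // raw card response bytes, forwarded verbatim
)

fun main() {
    // The Reader captures the card's answer to a terminal command...
    val frame = RelayFrame("session-42", Role.READER, byteArrayOf(0x6F, 0x00))
    // ...and the server hands it to the Tapper, which presents it to a real
    // terminal or ATM as if the card were physically present.
    println("${frame.role} -> server -> ${Role.TAPPER} (${frame.payload.size} bytes)")
}
```

Because the terminal ends up talking to a genuine card, just over a longer distance than NFC normally allows, a relayed tap looks like any other contactless payment.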
Because of its minimalist design, SuperCard X is currently hard for antivirus software to detect. “This creates a dual benefit for the fraudster: the rapid movement of stolen funds and the immediate usability of the fraudulent transaction,” the report warned.
Cleafy advises banks and payment companies to stay alert, as similar scams may be running in other countries.

Image by Christin Hume, from Unsplash
Claude AI Study Reveals How Chatbots Apply Ethics in Real-World Chats
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Claude AI demonstrates how ethical principles like helpfulness and transparency play out across 300,000 real chats, raising questions about chatbot alignment.
In a rush? Here are the quick facts:
- Helpfulness and professionalism appeared in 23% of conversations.
- Claude mirrored positive values, resisted harmful requests like deception.
- AI alignment needs refinement in ambiguous value situations.
A new study by Anthropic sheds light on how its AI assistant, Claude, applies values in real-world conversations. The research analyzed over 300,000 anonymized chats to understand how Claude balances ethics, professionalism, and user intent.
The research team identified 3,307 distinct values that shaped Claude’s responses. Helpfulness and professionalism together appeared in 23% of all interactions, followed by transparency at 17%.
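As a toy illustration only, and not Anthropic's actual pipeline, percentages like these can be read as conversation-level shares: once each chat is tagged with the values detected in it, the figure for a value is simply the fraction of chats containing it.

```kotlin
// Toy illustration of a conversation-level tally; not Anthropic's pipeline.
fun main() {
    // Each set stands for one anonymized chat and the values detected in it.
    val chats = listOf(
        setOf("helpfulness", "professionalism"),
        setOf("transparency", "helpfulness"),
        setOf("historical accuracy"),
        setOf("helpfulness", "professionalism", "transparency"),
    )

    // Share of chats in which each value appears at least once (sets make
    // per-chat duplicates impossible, so counting entries counts chats).
    val share = chats
        .flatMap { it }
        .groupingBy { it }
        .eachCount()
        .mapValues { (_, n) -> 100.0 * n / chats.size }

    share.entries
        .sortedByDescending { it.value }
        .forEach { (value, pct) -> println("%-20s %.0f%%".format(value, pct)) }
}
```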
The research also points out that the chatbot applied ethical behavior flexibly to new topics. For example, Claude emphasized “healthy boundaries” during relationship advice, “historical accuracy” when discussing the past, and “human agency” in tech ethics debates.
Interestingly, human users expressed values far less frequently—authenticity and efficiency being the most common at just 4% and 3% respectively—while Claude often reflected positive human values such as authenticity, and challenged harmful ones.
The researchers reported that requests involving deception were met with honesty, while morally ambiguous queries triggered ethical reasoning.
The research identified three main response patterns. In about half of all conversations, the AI mirrored the user’s values, a pattern particularly evident when users discussed prosocial, community-building activities.
In 7% of cases, Claude used reframing techniques, redirecting users who pursued self-improvement toward emotional well-being.
In only 3% of cases did the system resist outright, when users asked for harmful or unethical content; in those cases it applied principles like “harm prevention” or “human dignity.”
The authors argue that the chatbot’s behaviors—such as resisting harm, prioritizing honesty, and emphasizing helpfulness—reveal an underlying moral framework. These patterns form the basis for the study’s conclusions about how AI values manifest as ethical behavior in real-world interactions.
While Claude’s behavior reflects its training, the researchers noted that how the system expresses values shifts with the situation, pointing to the need for further refinement, especially in cases involving ambiguous or conflicting values.