
Image by Pathum Danthanarayana, from Unsplash
Massive Mobile Ad Fraud Campaign Hidden In Google Play Apps
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Cybersecurity researchers have discovered 352 Android apps operating as stealthy ad fraud tools, generating 1.2 billion fake ad bid requests daily before the researchers disrupted the operation.
In a rush? Here are the quick facts:
- IconAds campaign used 352 malicious Android apps.
- Fraud scheme generated 1.2 billion daily ad bid requests.
- Apps hid their icons and ran in the background.
HUMAN’s Satori Threat Intelligence team successfully disrupted the complex ad fraud operation known as IconAds.
The operation involved 352 Android applications that secretly loaded ads while hiding their icons from users. At its peak, IconAds generated 1.2 billion ad bid requests per day, originating primarily from Brazil, Mexico, and the United States.
The apps used advanced obfuscation tactics to avoid detection. “IconAds’ primary obfuscation technique uses seemingly random English words to hide certain values,” explained Satori researchers.
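The researchers did not publish the full decoding scheme, but the general idea, innocuous dictionary words standing in for hidden characters or values, can be sketched in a few lines of Kotlin. The word list and mapping below are hypothetical, not IconAds' actual table:

```kotlin
// Hypothetical illustration of word-based string obfuscation: each
// innocuous-looking English word maps back to a single character.
// IconAds' real mapping is not public; this only shows the technique.
val wordToChar = mapOf(
    "window" to 'h', "garden" to 't', "silver" to 'p',
    "meadow" to 's', "button" to ':', "candle" to '/',
    "basket" to 'a', "lantern" to 'd', "velvet" to '.',
    "marble" to 'c', "copper" to 'o', "willow" to 'm'
)

// Decode a space-separated sequence of words into the hidden value.
fun decode(obfuscated: String): String =
    obfuscated.split(" ")
        .mapNotNull { wordToChar[it] }
        .joinToString("")

fun main() {
    val hidden = decode(
        "window garden garden silver meadow button candle candle " +
        "basket lantern meadow velvet marble copper willow"
    )
    println(hidden) // prints: https://ads.com
}
```

To a static scanner or a human reviewer, the stored strings look like harmless word lists rather than URLs or configuration values.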
The attackers also embedded harmful code within encrypted libraries while employing distinctive command-and-control (C2) domains for each application to conceal their traffic.
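The loader code itself is not reproduced in the report, but the pattern it describes, shipping an encrypted blob and only decrypting and loading it at runtime so static scanners never see the real code, is well known on Android. A rough Kotlin sketch, in which the file names, key handling, and class names are all assumptions:

```kotlin
import android.content.Context
import dalvik.system.DexClassLoader
import java.io.File
import javax.crypto.Cipher
import javax.crypto.spec.SecretKeySpec

// Rough sketch of the runtime-decryption pattern the researchers
// describe. All names, the key, and the cipher choice here are
// illustrative assumptions, not IconAds' actual implementation.
fun loadHiddenPayload(context: Context, key: ByteArray) {
    // 1. Read the encrypted blob shipped inside the APK's assets.
    val encrypted = context.assets.open("data.bin").use { it.readBytes() }

    // 2. Decrypt it into a DEX file in app-private storage.
    val cipher = Cipher.getInstance("AES/ECB/PKCS5Padding")
    cipher.init(Cipher.DECRYPT_MODE, SecretKeySpec(key, "AES"))
    val dexFile = File(context.codeCacheDir, "payload.dex")
    dexFile.writeBytes(cipher.doFinal(encrypted))

    // 3. Load the decrypted code at runtime; on-device and store-side
    //    scanners that inspect the APK only ever see the encrypted blob.
    val loader = DexClassLoader(
        dexFile.absolutePath, null, null, context.classLoader
    )
    loader.loadClass("com.example.HiddenEntry")
        .getMethod("start")
        .invoke(null)
}
```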
The application ‘com.works.amazing.colour’ changed its icon to a blank white circle and loaded ads even when no app was open. Others impersonated popular apps like Google Play or Google Home, running silently in the background while serving fraudulent ads.
To hide their activities, these apps disabled their visible components after installation and used aliases with no name or icon. In some cases, they included license checks to confirm they were downloaded from the Play Store, refusing to run otherwise. They also used deep-linking services to decide when to activate the malicious code.
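Both of those hiding tricks rely on standard Android APIs. A minimal Kotlin sketch of the pattern, with hypothetical component and package names, might look like this:

```kotlin
import android.content.ComponentName
import android.content.Context
import android.content.pm.PackageManager

// Illustrative sketch of the two hiding tricks described above; the
// component and package names are hypothetical, not from IconAds.
fun hideAndGate(context: Context) {
    // Disable the visible launcher alias so the icon disappears from
    // the home screen while the app keeps running.
    val launcherAlias = ComponentName(context, "com.example.app.LauncherAlias")
    context.packageManager.setComponentEnabledSetting(
        launcherAlias,
        PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
        PackageManager.DONT_KILL_APP
    )

    // Simple "license check": only proceed if the app was installed
    // from the Play Store, which frustrates analysis sandboxes that
    // sideload the APK.
    @Suppress("DEPRECATION")
    val installer =
        context.packageManager.getInstallerPackageName(context.packageName)
    if (installer != "com.android.vending") return // refuse to run

    // ... malicious logic would only activate past this point ...
}
```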
Google has since removed the identified apps from Google Play, and Google Play Protect protects users against these threats.
According to HUMAN, “Customers partnering with HUMAN for Ad Fraud Defense are and have been protected from the impact of IconAds.”
The attack demonstrates the growing sophistication of mobile ad fraud operations. Experts recommend that advertisers, platform developers, and app developers strengthen their monitoring systems, improve transparency, and work together to prevent future threats.

Image by Solen Feyissa, from Unsplash
AI Hallucinations Are Now A Cybersecurity Threat
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A new study shows that AI chatbots recommend fake login pages to millions of users, putting them at risk of phishing and fraud under the guise of helpful answers.
In a rush? Here are the quick facts:
- 34% of AI-suggested login URLs were fake, unclaimed, or unrelated.
- Perplexity AI recommended a phishing site instead of Wells Fargo’s official login.
- Criminals are optimizing phishing pages to rank in AI-generated results.
In its study, the cybersecurity firm Netcraft tested a popular large language model (LLM) by asking it where to log in to 50 well-known brands. Of the 131 website links it suggested, 34% were wrong: inactive or unregistered domains made up 29%, and unrelated businesses accounted for 5%.
This issue isn’t theoretical. In one real example, the AI-powered search engine Perplexity displayed a phishing site to a user who searched for Wells Fargo’s login page. A fake Google Sites page imitating the bank appeared at the top of the results, while the authentic link was buried below it.
Netcraft explained “These were not edge-case prompts. Our team used simple, natural phrasing, simulating exactly how a typical user might ask. The model wasn’t tricked—it simply wasn’t accurate. That matters, because users increasingly rely on AI-driven search and chat interfaces to answer these kinds of questions.”
As AI becomes the default interface on platforms like Google and Bing, the risk grows. Unlike traditional search engines, chatbots present information conversationally and with confidence, which leads users to trust their answers even when the information is wrong.
The threat doesn’t stop at phishing. Cybercriminals now optimize their malicious content for AI systems, producing thousands of scam pages, fake APIs, and poisoned code samples that slip past filters and end up in AI-generated responses.
In one campaign, attackers created a fake blockchain API, which they promoted through GitHub repositories and blog articles, to trick developers into sending cryptocurrency to a fraudulent wallet.
Netcraft warns that preemptively registering potential fake domains isn’t enough. Instead, the firm recommends smarter detection systems and better training safeguards to stop AI from inventing harmful URLs in the first place.
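One simple form such a detection layer could take is an allowlist check that runs before an AI assistant surfaces a login link. The Kotlin sketch below is illustrative only; the brand-to-domain table is an assumption, not Netcraft's system:

```kotlin
import java.net.URI

// Hypothetical guardrail along the lines Netcraft suggests: before an
// AI assistant surfaces a login link, check its host against a curated
// allowlist of official domains. The brand table here is illustrative.
val officialLoginDomains = mapOf(
    "wellsfargo" to setOf("wellsfargo.com"),
    "google" to setOf("google.com", "accounts.google.com")
)

fun isTrustedLoginUrl(brand: String, suggestedUrl: String): Boolean {
    // Reject anything that doesn't parse as a URL with a host.
    val host = runCatching { URI(suggestedUrl).host }
        .getOrNull()?.lowercase() ?: return false
    val allowed = officialLoginDomains[brand] ?: return false
    // Accept the exact official domain or any of its subdomains.
    return allowed.any { host == it || host.endsWith(".$it") }
}

fun main() {
    println(isTrustedLoginUrl("wellsfargo", "https://wellsfargo.com/login"))     // true
    println(isTrustedLoginUrl("wellsfargo", "https://sites.google.com/view/wf")) // false
}
```

A check like this would have caught the Wells Fargo example above, since the fake Google Sites page lives on a domain the bank does not control.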