
Image by Pathum Danthanarayana, from Unsplash
Over 50,000 Infected By Banking Trojan Posing as PDF Tool
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Researchers say that Anatsa malware is back, this time targeting North American users by hiding in fake Google Play apps to steal banking credentials and funds.
In a rush? Here are the quick facts:
- It hides in fake apps uploaded to the Google Play Store.
- Over 50,000 users downloaded a malicious “PDF Update” app.
- Malware performs fraud via fake overlays on banking apps.
A dangerous Android banking malware known as Anatsa has launched a new wave of attacks on users across the United States and Canada, according to ThreatFabric researchers.
The researchers say that this marks at least the third time the malware has shifted its focus to North American mobile banking customers, and it’s doing so using familiar and successful techniques.
Anatsa is a sophisticated device takeover trojan that lets cybercriminals steal banking credentials, log keystrokes, and perform remote fraudulent transactions from infected phones. The malware hides inside applications that seem harmless at first, such as file managers and PDF readers, which are uploaded to the official Google Play Store.
The researchers explain that the application initially functions like any other useful tool, building user trust through downloads, over 50,000 in the most recent case. Then, weeks later, an update quietly installs the Anatsa malware. From there, the infected phone becomes a weapon.
The malware communicates with remote servers to select banking apps to target. When a user tries to log into their bank, a fake maintenance message appears: “We are currently enhancing our services and will have everything back up and running shortly. Thank you for your patience.”
This message keeps users from realizing they are being attacked while the malware carries out unauthorized transactions or captures login credentials.
In the latest campaign, a fake “PDF Update” application reached the third position in the “Top Free Tools” list before Google removed it from the Play Store on June 30. Although the application was short-lived, it caused significant damage to users.
Cybersecurity experts say Anatsa’s increasing focus on U.S. banks and its success through cyclical attacks and app store manipulation make it a growing threat. Financial institutions are urged to stay alert and inform users about this evolving tactic.

Image by Aerps.com, from Unsplash
Language Models Are Doubling In Power Every 7 Months
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Large Language Models (LLMs) are getting smarter fast, and experts predict they could be doing in hours what takes humans a whole month by 2030.
In a rush? Here are the quick facts:
- By 2030, LLMs may complete a month’s work in just hours.
- METR created a new benchmark called “task-completion time horizon.”
- Researchers warn progress may outpace our ability to control it.
A study by Model Evaluation & Threat Research (METR), a group based in Berkeley, California, found that LLM capabilities are doubling every seven months.
The group developed a way to measure this progress with a concept called the “task-completion time horizon,” which estimates how long a human would take to do something an AI can now handle with 50% reliability.
The researchers estimate that by 2030, the most advanced models could complete, with 50 percent reliability, software-based tasks that currently take humans a full month to finish.
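To make that extrapolation concrete, here is a minimal back-of-the-envelope sketch in Python that projects a task-completion time horizon forward under the reported seven-month doubling time. The starting horizon of one hour and the start/target dates are illustrative assumptions, not figures from the METR study.

```python
from datetime import date

# Illustrative assumptions (not figures from the METR study):
# suppose the 50%-reliability time horizon is about 1 hour of human work today.
DOUBLING_MONTHS = 7          # doubling time reported by METR
start_horizon_hours = 1.0    # assumed current horizon, for illustration only
start = date(2025, 1, 1)     # assumed starting point
target = date(2030, 1, 1)

# Count how many seven-month doublings fit between the two dates.
months_elapsed = (target.year - start.year) * 12 + (target.month - start.month)
doublings = months_elapsed / DOUBLING_MONTHS

# Exponential growth: each doubling multiplies the horizon by two.
projected_hours = start_horizon_hours * 2 ** doublings
working_months = projected_hours / (40 * 4.3)  # ~40-hour weeks, ~4.3 weeks/month

print(f"{doublings:.1f} doublings -> horizon of roughly {projected_hours:.0f} hours "
      f"(about {working_months:.1f} working months)")
```

Under these assumed numbers, roughly eight to nine doublings fit between 2025 and 2030, which is how a horizon measured in hours today could plausibly reach tasks that take humans weeks or months of work.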
That level of ability could have huge upsides and serious risks. The availability of LLMs with that kind of capability “would come with enormous stakes, both in terms of potential benefits and potential risks,” said AI researcher Zach Stein-Perlman, as reported by Spectrum.
However, the researchers note that LLMs still struggle with “messy” tasks, such as jobs that are open-ended, poorly defined, or similar to real-world challenges.
“Even if it were the case that we had very, very clever AIs, this pace of progress could still end up bottlenecked on things like hardware and robotics,” said METR researcher Megan Kinniment, as reported by Spectrum.
She also warned about the risks of rapid acceleration: “You could get acceleration that is quite intense and does make things meaningfully more difficult to control.”