
Image by Dimitri Karastelev, from Unsplash
Low-Cost Phones Come With Fake WhatsApp That Steals Crypto
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A fake version of WhatsApp pre-installed on cheap Android phones is stealing cryptocurrency by swapping wallet addresses and scanning user data.
In a rush? Here are the quick facts:
- Fake WhatsApp app pre-installed on cheap Android phones.
- Trojan sends user messages and images to hackers.
- Hackers earned over $1 million through stolen cryptocurrency.
Security researchers have uncovered a dangerous scam involving cheap Android smartphones with pre-installed fake apps designed to steal cryptocurrency. According to Russia-based antivirus company Doctor Web, the malware campaign was first reported in mid-2024 and has grown significantly since.
The attackers target users who buy low-cost smartphones that imitate big-name models like the “S23 Ultra” or “Note 13 Pro.” These phones often claim to run Android 14 but actually run a modified Android 12 with spoofed system specs.
A trojanized version of WhatsApp, secretly installed on these phones, is at the center of the scam. Using a tool called LSPatch, hackers added a hidden module to the app. Once active, it quietly intercepts and changes copied cryptocurrency wallet addresses, a method known as “clipping.”
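To make the clipping step concrete, here is a minimal Python sketch of the address-swapping logic. The attacker address and the Ethereum-style regex are illustrative assumptions; the real trojan hooks Android's clipboard and WhatsApp internals rather than operating on plain strings.

```python
import re

# Hypothetical attacker-controlled wallet, for illustration only.
ATTACKER_ETH_ADDRESS = "0x" + "f" * 40

# Matches an Ethereum-style address: "0x" followed by 40 hex characters.
ETH_ADDRESS_RE = re.compile(r"\b0x[0-9a-fA-F]{40}\b")

def clip_clipboard(text: str) -> str:
    """Replace any wallet address in copied text with the attacker's.

    This mirrors the "clipping" step: the victim copies a legitimate
    address, and the malware silently rewrites it before it is pasted
    or sent on to the recipient.
    """
    return ETH_ADDRESS_RE.sub(ATTACKER_ETH_ADDRESS, text)
```

The deception described by Doctor Web goes one step further: the sender's own screen still shows the original address, so neither party notices the swap until the funds are gone.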
The malware even tricks both sender and recipient. Doctor Web explains that “in the case of an outgoing message, the compromised device displays the correct address of the victim’s own wallet, while the recipient… is shown the address of the fraudsters’ wallet.”
This version of WhatsApp also forwards all user messages to the hackers and scans the device for images containing recovery phrases, the word lists used to restore access to crypto wallets. Many users keep screenshots of these phrases, giving hackers full access to the wallet if one is found.
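The screenshot-scanning step can be sketched as follows: after OCR extracts text from an image, a detector like the one below could flag anything that reads as a run of BIP-39 recovery words. The tiny wordlist here is an illustrative assumption (the real list has 2,048 entries), and the trojan's exact heuristics are not public.

```python
# Hypothetical seed-phrase detector applied to OCR output from a
# screenshot. A real BIP-39 wordlist has 2,048 entries; this tiny
# subset is for illustration only.
BIP39_SAMPLE = {
    "abandon", "ability", "able", "about", "above", "absent",
    "absorb", "abstract", "absurd", "abuse", "access", "accident",
}

def looks_like_seed_phrase(text: str, min_words: int = 12) -> bool:
    """Flag text that is a run of at least `min_words` tokens all
    drawn from the BIP-39 wordlist, the typical shape of a 12- or
    24-word recovery phrase."""
    words = text.lower().split()
    return len(words) >= min_words and all(w in BIP39_SAMPLE for w in words)
```

A screenshot whose OCR text trips this check would be uploaded to the attackers, who can then restore the wallet on their own device.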
Doctor Web named the trojan Shibai. It reportedly affects around 40 apps, including Telegram, Trust Wallet, and MathWallet. The campaign uses over 60 servers and 30 domains, and some hacker wallets have received over $1 million in stolen crypto.

Image by Emiliano Vittoriosi, from Unsplash
OpenAI’s New AI Models Can Now “Think” With Images
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
OpenAI has launched o3 and o4-mini, advanced AI models that combine image manipulation with text-based reasoning to solve complex problems.
In a rush? Here are the quick facts:
- These models manipulate, crop, and transform images to solve complex tasks.
- o3 and o4-mini outperform earlier models in STEM questions, visual search, and chart reading.
- The models combine text and image processing, using tools like web search and code analysis.
OpenAI has announced two new AI models, o3 and o4-mini, that can reason with images—marking a major leap in how artificial intelligence understands and processes visual information.
“These systems can manipulate, crop and transform images in service of the task you want to do,” said Mark Chen, OpenAI’s head of research, during a livestream event on Wednesday, as reported by the New York Times.
The o3 and o4-mini models can now analyze images as part of their internal reasoning process, whereas previous models could only view them.
Users can upload photos of math problems, technical diagrams, handwritten notes, posters, and even blurry or rotated images. The system breaks the content down into step-by-step explanations, even when one image contains multiple questions or visual elements.
The system can now focus on unclear parts of an image, rotating it for better understanding. It combines visual understanding with text-based reasoning to deliver precise answers. The system can interpret science graphs to explain their meaning and identify coding errors in screenshots to generate solutions.
The models can also use other tools like web search, Python code, and image generation in real time, which allows them to solve much more complex tasks than before. OpenAI says these capabilities come built-in, without needing extra specialized models.
Tests show that o3 and o4-mini outperform previous models on every visual task they were given. On the V* visual search benchmark, o3 reaches 95.7% accuracy. Still, the models have flaws: OpenAI notes they can overthink and make basic perception errors.
OpenAI introduced this update as part of its push to build AI systems that reason more like humans. The models work through long chains of thought, so they need extra time to handle complex questions. They also integrate tools like image generation, web search, and Python code analysis to give more precise and creative answers.
However, there are limits. The models sometimes process excessive amounts of information, make perception errors, and shift their reasoning approaches between attempts. The company is working to improve the models’ reliability and consistency.
Both o3 and o4-mini are now available to ChatGPT Plus ($20/month) and Pro ($200/month) users. OpenAI also released Codex CLI, a new open-source tool to help developers run these AI models alongside their own code.
While OpenAI faces legal challenges over content use, its visual reasoning tech shows how AI is getting closer to solving real-world problems in more human-like ways.