
Image by Nahel Abdul Hadi, from Unsplash
GIFTEDCROOK Malware Evolves To Steal Sensitive Data
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The hacking group upgraded its spyware to steal data from Ukrainians through fake military emails and Telegram messages.
In a rush? Here are the quick facts:
- Hackers upgraded GIFTEDCROOK to steal sensitive files and browser data.
- Malware spread using fake military-themed emails and documents.
- Stolen data was sent to hackers via Telegram channels.
The hacking group UAC-0226 upgraded its GIFTEDCROOK spyware from a basic browser data stealer into more advanced software capable of extracting sensitive files from infected computers.
According to researchers at Arctic Wolf Labs, the group launched these upgraded attacks just as Ukraine and Russia were holding talks in Istanbul in June 2025.
"This operation most likely focused on intelligence gathering through data exfiltration from compromised devices," Arctic Wolf Labs reported, noting that the campaign ramped up just before the June 2 Istanbul negotiations on prisoner and body exchanges.
The hackers used fake emails made to look like military messages to trick people into opening infected files. These emails often claimed to include information about conscription or administrative fines. If a victim clicked the link or opened the file and followed the instructions, the spyware would secretly install itself and begin stealing files.
The updated versions of the malware allowed the attackers to search for particular file types and recent documents, as well as browser cookies and passwords from Chrome, Edge, and Firefox. Arctic Wolf Labs explains that the hackers transmitted all stolen data through Telegram channels.
One fake document pretended to be a list of people being drafted into the military. It asked readers to enable macros, a common trick used by hackers to launch malware. Researchers also found that the email system used by UAC-0226 is shared with other hacker groups targeting Ukraine, suggesting a wider campaign.
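The macro-enabled documents described above can be partially blocked by filtering risky Office attachments at the mail gateway. A minimal, illustrative sketch in Python (the extension list and function name are assumptions for illustration, not from the article, and a real deployment would combine this with content inspection and macro-blocking policies):

```python
# Minimal sketch: flag email attachments whose file extensions commonly
# carry VBA macros, the delivery trick described in the article.
# The extension list is illustrative, not exhaustive.
RISKY_EXTENSIONS = {".docm", ".dotm", ".xlsm", ".xltm", ".pptm", ".potm"}

def is_risky_attachment(filename: str) -> bool:
    """Return True if the attachment uses a macro-enabled Office format."""
    name = filename.lower()
    return any(name.endswith(ext) for ext in RISKY_EXTENSIONS)
```

A gateway rule built on a check like this would quarantine a lure such as a "conscription list" sent as a `.docm` file, while letting ordinary `.docx` documents through; organizations can also disable macros in internet-sourced files entirely via Office policy settings.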
The experts predict that the malware will continue to evolve over time. Organizations need to train their employees to recognize phishing emails and use secure tools, and stay alert as cyberattacks become more advanced and more closely tied to real-world events.

Image by Freepik
AI Shop Test Backfires: Claude Gives Freebies, Hallucinates, and Loses Money
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The Anthropic team conducted a test to determine whether AI systems could operate retail shops. However, their chatbot, Claude, gave away free items, invented fake deals, and lost money.
In a rush? Here are the quick facts:
- Claude gave out discounts and free products to employees.
- It hallucinated a fake conversation and imaginary location.
- Claude ordered 40 tungsten cubes, mostly sold at a loss.
Could AI really take your job? Dario Amodei, Anthropic's CEO, predicts that AI will likely replace human workers. Axios reports that, according to Amodei, AI systems will eliminate about 45% of entry-level white-collar positions over the next five years, which could push unemployment rates to 10–20%.
The United Nations Trade and Development (UNCTAD) warned in a new report that AI could affect 40% of jobs worldwide. While it has the potential to boost productivity and assist workers, particularly in developing countries, it also risks worsening global inequality.
Meanwhile, Anthropic ran an unusual experiment: they asked their chatbot, Claude, to run a small store in their San Francisco office, as first reported by Time. The chatbot performed all retail operations, including shelf maintenance, price management, and customer service.
“We were trying to understand what the autonomous economy was going to look like,” said Daniel Freeman, a researcher at Anthropic, as reported by Time.
At first, Claude seemed capable, until the system started producing unusual results. Employees used "fairness" appeals to obtain discount codes from Claude. Additionally, the AI often gave items away for free.
“Too frequently from the business perspective, Claude would comply,” said Kevin Troy, a member of Anthropic’s red team, as reported by Time.
Then came the tungsten cubes. A joke about buying them spiraled into Claude purchasing 40 dense metal blocks, most of which were sold at a loss. "At a certain point, it becomes funny for lots of people to be ordering tungsten cubes from an AI," Troy said, as reported by Time.
The AI system created a fake dialogue with a non-existent person, while simultaneously stating it had executed a contract at the Simpsons’ fictional residence. Additionally, Time reports that Claude sent messages to employees indicating it was at the vending machine while wearing a navy blue blazer with a red tie.
In the end, Time reports that the AI lost money: the shop’s value dropped from $1,000 to under $800. Still, researchers believe improvements are coming. “AI middle-managers are plausibly on the horizon,” they wrote, as reported by Time. “It won’t have to be perfect—just cheaper than humans,” the researchers added.