
Image by Scarecrow artworks, from Unsplash
Cybercriminals Use Fake AI Tools To Spread Ransomware and Malware
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Cybercriminals disguise ransomware as fake AI tools, exploiting growing AI demand to infect business systems with CyberLock, Lucky_Gh0$t, and Numero malware.
In a rush? Here are the quick facts:
- Numero malware pretends to be InVideo AI, corrupting Windows systems.
- Attackers spread threats via fake sites, search engine manipulation, and Telegram.
- Ransom demands include $50,000 in Monero, falsely claiming humanitarian use.
Cybercriminals are taking advantage of the growing popularity of AI by disguising malicious software as AI tools. Cisco's Talos Intelligence Group has identified multiple dangerous threats targeting businesses looking to adopt new AI technology.
The researchers identified the ransomware families CyberLock and Lucky_Gh0$t, along with the destructive malware Numero, all of which impersonate legitimate AI software installers.
CyberLock hides inside a fake website that mimics a real AI lead generation platform. Users who download NovaLeadsAI.exe from the fake website unknowingly install ransomware onto their systems.
After activation, CyberLock encrypts essential files and demands $50,000 in Monero cryptocurrency from victims. The attackers falsely claim the ransom “will be allocated for humanitarian aid in various regions, including Palestine, Ukraine, Africa and Asia,” say the researchers.
The Lucky_Gh0$t ransomware spreads through a fake ChatGPT installer named “ChatGPT 4.0 full version – Premium.exe.” The attackers embed the ransomware inside a ZIP file that also contains genuine Microsoft AI tools to evade detection.
The ransomware encrypts files under 1.2 GB in size, but it destroys files that exceed this limit. The attackers instruct victims to reach them through a secure messaging platform.
Numero, meanwhile, pretends to be an installer for InVideo AI, a popular video creation tool. Instead of helping users make videos, it corrupts the Windows interface, making systems unusable.
These threats are distributed through search engine manipulation, fake websites, and messaging apps like Telegram. As businesses increasingly adopt AI, attackers are exploiting that interest to spread malware.
Experts urge companies and individuals to verify software sources carefully. The researchers warn that this kind of attack “not only compromises sensitive business data and financial assets but also undermines trust in legitimate AI market solutions.”
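One basic precaution behind that advice is checking a downloaded installer's cryptographic hash against the checksum the vendor publishes on its official site. A minimal sketch in Python (the file name and the idea of a vendor-published digest here are illustrative placeholders, not details from the article):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large installers don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: the expected digest must come from the vendor's
# official website over HTTPS, never from the same page, archive, or
# Telegram channel that supplied the download itself.
#
# if sha256_of("installer.exe") != expected_digest:
#     raise SystemExit("Checksum mismatch: do not run this installer.")
```

A mismatched digest does not identify the malware, but it is enough reason to discard the file and re-download from the official source.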

Image by Marco Lenti, from Unsplash
In a rush? Here are the quick facts:
- The New York Times licensed its content to Amazon for A.I. use.
- The deal includes news, NYT Cooking, and The Athletic content.
- Amazon may use Times content in Alexa and A.I. model training.
In 2023, The Times sued OpenAI and Microsoft, accusing them of using its articles to train chatbots without permission. OpenAI and Microsoft denied wrongdoing. While that lawsuit continues, this new partnership signals a different approach.
“This deal is consistent with our long-held principle that high-quality journalism is worth paying for,” said Meredith Kopit Levien, CEO of The Times. “It aligns with our deliberate approach to ensuring that our work is valued appropriately, whether through commercial deals or through the enforcement of our intellectual property rights.”