
Image by charlesdeluvio, from Unsplash
New AI Code Vulnerability Exposes Millions to Potential Cyberattacks
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Researchers at Pillar Security have uncovered a significant vulnerability in GitHub Copilot and Cursor, two widely used AI-powered coding assistants.
In a rush? Here are the quick facts:
- Hackers can exploit AI coding assistants by injecting hidden instructions into rule files.
- The attack uses hidden Unicode characters to trick AI into generating compromised code.
- Once infected, rule files spread vulnerabilities across projects and survive software updates.
Dubbed the “Rules File Backdoor,” this new attack method allows hackers to embed hidden malicious instructions into configuration files, tricking AI into generating compromised code that can bypass standard security checks.
Unlike traditional attacks that exploit known software vulnerabilities, this technique manipulates the AI itself, making it an unwitting tool for cybercriminals. “This attack remains virtually invisible to developers and security teams,” warned Pillar Security researchers.
Pillar reports that generative AI coding tools have become essential for developers, with a 2024 GitHub survey revealing that 97% of enterprise developers rely on them.
As these tools shape software development, they also create new security risks. Hackers can now exploit how AI assistants interpret rule files—text-based configuration files used to guide AI coding behavior.
These rule files, often shared publicly or stored in open-source repositories, are usually trusted without scrutiny. Attackers can inject hidden Unicode characters or subtle prompts into these files, influencing AI-generated code in ways that developers may never detect.
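A quick illustration of why these characters are so easy to miss: in the Python sketch below (the rule text is a made-up example, not a payload from Pillar's report), a rule line with zero-width characters appended prints identically to the clean version, yet the two strings are not equal.

```python
# Illustrative only: the rule text below is a made-up example, not content from the report.
clean = "Always follow the project style guide."
# Same visible text with a zero-width space (U+200B) and a zero-width non-joiner (U+200C) appended.
poisoned = "Always follow the project style guide.\u200b\u200c"

print(clean)              # Always follow the project style guide.
print(poisoned)           # Renders identically in most editors and terminals
print(clean == poisoned)  # False: the strings differ
print(len(clean), len(poisoned))  # 38 40, the hidden characters add two code points
```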
Once introduced, these malicious instructions persist across projects, silently spreading security vulnerabilities. Pillar Security demonstrated how a simple rule file could be manipulated to inject malicious code.
By using invisible Unicode characters and linguistic tricks, attackers can direct AI assistants to generate code containing hidden vulnerabilities—such as scripts that leak sensitive data or bypass authentication mechanisms. Worse, the AI never alerts the developer about these modifications.
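Spotting such characters is straightforward once you know to look for them. The sketch below is not a tool from Pillar Security; it is a minimal example, assuming a rule file such as .cursorrules, that flags invisible format and control characters (zero-width spaces, bidirectional overrides and the like) so a reviewer can inspect them before the file is trusted.

```python
import sys
import unicodedata

def find_hidden_chars(path):
    """Report invisible Unicode characters (format and control categories) in a text file."""
    findings = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for col, ch in enumerate(line, start=1):
                category = unicodedata.category(ch)
                # Cf = format characters (zero-width space, bidi overrides, etc.);
                # Cc = control characters other than newlines, carriage returns, and tabs.
                if category == "Cf" or (category == "Cc" and ch not in "\n\r\t"):
                    findings.append((lineno, col, f"U+{ord(ch):04X}",
                                     unicodedata.name(ch, "UNNAMED")))
    return findings

if __name__ == "__main__":
    # Usage sketch: python scan_rules.py .cursorrules
    for lineno, col, codepoint, name in find_hidden_chars(sys.argv[1]):
        print(f"line {lineno}, col {col}: {codepoint} {name}")
```

A check along these lines could run in a pre-commit hook or CI job so that rule files pulled from templates or public repositories are reviewed before an assistant ever reads them.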
“This attack works across different AI coding assistants, suggesting a systemic vulnerability,” the researchers noted. Once a compromised rule file is adopted, every subsequent AI-generated code session in that project becomes a potential security risk.
This vulnerability has far-reaching consequences, as poisoned rule files can spread through various channels. Open-source repositories pose a significant risk, as unsuspecting developers may download pre-made rule files without realizing they are compromised.
Developer communities also become a vector for distribution when malicious actors share seemingly helpful configurations that contain hidden threats. Additionally, project templates used to set up new software can unknowingly carry these exploits, embedding vulnerabilities from the start.
Pillar Security disclosed the issue to both Cursor and GitHub in February and March 2025. However, both companies placed the responsibility on users. GitHub responded that developers are responsible for reviewing AI-generated suggestions, while Cursor stated that the risk falls on users managing their rule files.

Image by Brands & People, from Unsplash
Gaming’s Bot Problem: Would You Scan Your Eye to Prove You’re Human?
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Gamers frustrated with artificial intelligence-powered bots disrupting online multiplayer matches may soon have a solution.
In a rush? Here are the quick facts:
- “Razer ID verified by World ID” authenticates players to block bots from game servers.
- Players verify with iris scans or NFC-enabled government IDs via World ID.
- The first game using this system, Tokyo Beast, launches in Q2 2025.
Tech company World, co-founded by OpenAI’s Sam Altman, has teamed up with gaming giant Razer to introduce a new verification system designed to differentiate real players from bots, as reported by the Wall Street Journal (WSJ).
The software, called “Razer ID verified by World ID,” will authenticate users and provide a badge for their Razer gaming profile.
The goal is to prevent bots from infiltrating game servers, a problem that has plagued online gaming for years. Bots are often used to exploit in-game economies, artificially inflate user numbers, and disrupt fair competition, as previously noted by Medium.
A World survey found that 59% of gamers regularly encounter bots, with 71% believing they ruin competitive gaming, as reported by Block Works (BW).
“We’re on the brink of an AI tsunami,” said Trevor Traina, chief business officer of Tools for Humanity, a company that helps develop products for World, as reported by WSJ. “This partnership helps reclaim gaming for human players,” he added.
The first game to incorporate the system is Tokyo Beast, developed by CyberAgent, set to launch in the second quarter of 2025. If successful, Razer hopes to integrate the technology into more games, as noted by Forbes.
To obtain verification, players must register with World ID, a system linked to the Worldcoin cryptocurrency. Users can prove their identity in two ways: scanning their eyes using a specialized Orb device or submitting an NFC-enabled government ID, such as a passport or driver’s license.
The verification aims to make it harder for bots to repeatedly register new accounts after being banned. Additionally, the system allows game developers to offer “human-only” servers, ensuring that players are matched against other verified users, as noted by Forbes.
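What a "human-only" gate might look like on the server side can be sketched in a few lines. Everything in the snippet below is hypothetical: the Player type, the world_id_verified flag, and the queue names are illustrative stand-ins, not part of any published Razer or World API.

```python
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    world_id_verified: bool  # Hypothetical flag set after a World ID badge is confirmed

def route_to_queue(player: Player) -> str:
    """Hypothetical matchmaking rule: only verified players enter human-only servers."""
    return "human_only_queue" if player.world_id_verified else "general_queue"

print(route_to_queue(Player("alice", True)))        # human_only_queue
print(route_to_queue(Player("unverified", False)))  # general_queue
```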
While the technology offers a potential breakthrough in combating bots, it has also raised concerns over privacy and accessibility.
The eye-scanning Orbs are not widely available, with many major European countries lacking access to them. As a result, most users will have to rely on submitting government-issued IDs, notes Forbes.
Despite these concerns, Razer insists the system enhances security. “Being able to verify a human is very important, because the last thing you want is that all your potential rewards, all your hard-earned rewards, are stolen by a bot,” said Wei-Pin Choo, Razer’s chief corporate officer, as reported by Forbes.
World Network, formerly Worldcoin, verifies humans via iris scans, offering its cryptocurrency as an incentive. However, the company has faced privacy concerns and accusations of exploiting users in developing countries, noted BW.
In 2024, Spain and Germany banned Worldcoin over fears that users could not withdraw consent for their biometric data. Portugal and Kenya have also restricted or suspended the project, as reported by Forbes.
BW reports that Singapore authorities have investigated the misuse of World ID accounts for potential criminal activity. The system could help identify real gamers, but it isn't foolproof; even so, the push to tackle bots in gaming signals growing industry awareness of the issue.
Tiago Sada, a spokesperson for World, defended the verification system, stating, “Unlike traditional verification systems, the only thing World ID knows about a person is that they’re a real and unique human being. It doesn’t know who you are, your name, your email, anything like that,” reports Forbes.
While the new system could be a step toward cleaner online gaming, it remains to be seen whether privacy concerns and regulatory challenges will limit its adoption.