
Image by Youcef Chenzer from Unsplash
‘Assassin’s Creed Shadows’ Launch Critical for Ubisoft’s Financial Recovery
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
The French videogame publisher Ubisoft will launch Assassin’s Creed Shadows , the fourteenth main game of the series, this Thursday, March 20, and its performance will define the company’s future.
In a rush? Here are the quick facts:
- Assassin’s Creed Shadows launches on March 20, 2025, as Ubisoft’s biggest bet to regain financial stability.
- Ubisoft faces market struggles and investor doubts, with stock dropping over 40% last year.
- The game brings players to feudal Japan, fulfilling a long-time fan request while aiming for a strong comeback.
According to Reuters , Ubisoft has been facing financial trouble for the past few years as well as many challenges in the market. Its stock has been declining—over 40% last year—affecting investor’s perceptions, including its largest stakeholder, the Guillemot family, which has been negotiating buyout deals.
Assassin’s Creed Shadows , Ubisoft’s latest historical action video game that takes players to 16th-century feudal Japan—a location requested by many players—, represents Ubisoft’s latest and largest effort to stay afloat in the market, after underperforming releases such as Star Wars Outlaws and Avatar: Frontiers of Pandora .
“The release of Assassin’s Creed Shadows is a bit of an existential moment for Ubisoft,” said Joost Van Dreunen, a lecturer at NYU’s Stern School of Business, to Reuters. “If it does really well, it could go a long way toward repairing its financial position.”
The new game has been delayed twice and Ubisoft recently confirmed leaks and urged players to avoid spoilers. After polishing the interactive experience, featuring two protagonists, Naoe and Yasuke, the company has allowed players to access the game through pre-loading options for those who pre-ordered the game.
🚨 BREAKING: Assassin’s Creed Shadows is Steam Deck verified! ✅🎮 Coming March 20th. Last chance to pre-purchase the game on Steam to get the “Claws of Awaji” expansion for free! Pre-purchase here: https://t.co/Q1cInMZ6UI #AssassinsCreedShadows pic.twitter.com/dDx4LVvpI2 — Assassin’s Creed (@assassinscreed) March 14, 2025
Assassin’s Creed Shadows already has multiple reviews—mostly positive—on the popular platform Metacritic , currently reaching a favorable 81 score.
Although other games like Grand Theft Auto 6 have millions of players with high expectations and eager to finally experience the new narrative after 12 years since the last sequel, Assassin’s Creed Shadows remains one of the industry’s major franchises and now brings its characters to Japan, a destination that fans have been requesting for years. Meeting players’ expectations will be crucial for the company in 2025.

Image by charlesdeluvio, from Unsplash
New AI Code Vulnerability Exposes Millions to Potential Cyberattacks
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
- Reader’s Comments 1
Researchers at Pillar Security have uncovered a significant vulnerability in GitHub Copilot and Cursor, two widely used AI-powered coding assistants.
In a rush? Here are the quick facts:
- Hackers can exploit AI coding assistants by injecting hidden instructions into rule files.
- The attack uses hidden Unicode characters to trick AI into generating compromised code.
- Once infected, rule files spread vulnerabilities across projects and survive software updates.
Dubbed the “Rules File Backdoor,” this new attack method allows hackers to embed hidden malicious instructions into configuration files, tricking AI into generating compromised code that can bypass standard security checks.
Unlike traditional attacks that exploit known software vulnerabilities, this technique manipulates the AI itself, making it an unwitting tool for cybercriminals. “This attack remains virtually invisible to developers and security teams,” warned Pillar Security researchers.
Pillar reports that generative AI coding tools have become essential for developers, with a 2024 GitHub survey revealing that 97% of enterprise developers rely on them.
As these tools shape software development, they also create new security risks. Hackers can now exploit how AI assistants interpret rule files—text-based configuration files used to guide AI coding behavior.
These rule files, often shared publicly or stored in open-source repositories, are usually trusted without scrutiny. Attackers can inject hidden Unicode characters or subtle prompts into these files, influencing AI-generated code in ways that developers may never detect.
Once introduced, these malicious instructions persist across projects, silently spreading security vulnerabilities. Pillar Security demonstrated how a simple rule file could be manipulated to inject malicious code.
By using invisible Unicode characters and linguistic tricks, attackers can direct AI assistants to generate code containing hidden vulnerabilities—such as scripts that leak sensitive data or bypass authentication mechanisms. Worse, the AI never alerts the developer about these modifications.
“This attack works across different AI coding assistants, suggesting a systemic vulnerability,” the researchers noted. Once a compromised rule file is adopted, every subsequent AI-generated code session in that project becomes a potential security risk.
This vulnerability has far-reaching consequences, as poisoned rule files can spread through various channels. Open-source repositories pose a significant risk, as unsuspecting developers may download pre-made rule files without realizing they are compromised.
Developer communities also become a vector for distribution when malicious actors share seemingly helpful configurations that contain hidden threats. Additionally, project templates used to set up new software can unknowingly carry these exploits, embedding vulnerabilities from the start.
Pillar Security disclosed the issue to both Cursor and GitHub in February and March 2025. However, both companies placed the responsibility on users. GitHub responded that developers are responsible for reviewing AI-generated suggestions, while Cursor stated that the risk falls on users managing their rule files.