
Image by Freepik

Shadow AI Threatens Enterprise Security

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

As AI technology evolves rapidly, organizations are facing an emerging security threat: shadow AI apps. These unauthorized applications, developed by employees without IT or security oversight, are spreading across companies and often go unnoticed, as highlighted in a recent VentureBeat article.

In a Rush? Here are the Quick Facts!

  • Shadow AI apps are created by employees without IT or security approval.
  • Employees create shadow AI to increase productivity, often without malicious intent.
  • Public models can expose sensitive data, creating compliance risks for organizations.

VentureBeat explains that while many of these apps are not intentionally malicious, they pose significant risks to corporate networks, ranging from data breaches to compliance violations.

Shadow AI apps are often built by employees seeking to automate routine tasks or streamline operations, using AI models trained on proprietary company data.

These apps, which frequently rely on generative AI tools such as OpenAI’s ChatGPT or Google Gemini, lack essential safeguards, making them highly vulnerable to security threats.

According to Itamar Golan, CEO of Prompt Security, “Around 40% of these default to training on any data you feed them, meaning your intellectual property can become part of their models,” as reported by VentureBeat.

The appeal of shadow AI is clear. Employees, under increasing pressure to meet tight deadlines and handle complex workloads, are turning to these tools to boost productivity.

Vineet Arora, CTO at WinWire, tells VentureBeat, “Departments jump on unsanctioned AI solutions because the immediate benefits are too tempting to ignore.” However, the risks these tools introduce are profound.

Golan compares shadow AI to performance-enhancing drugs in sports, saying, “It’s like doping in the Tour de France. People want an edge without realizing the long-term consequences,” as reported by VentureBeat.

Despite their advantages, shadow AI apps expose organizations to a range of vulnerabilities, including accidental data leaks and prompt injection attacks that traditional security measures cannot detect.

The scale of the problem is staggering. Golan tells VentureBeat that his company catalogs 50 new AI apps daily, with over 12,000 currently in use. “You can’t stop a tsunami, but you can build a boat,” Golan advises, noting that many organizations are blindsided by the scope of shadow AI usage within their networks.

One financial firm, for instance, discovered 65 unauthorized AI tools during a 10-day audit, when its security team had expected fewer than 10, as reported by VentureBeat.

The dangers of shadow AI are particularly acute for regulated sectors. Once proprietary data is fed into a public AI model, it becomes difficult to control, leading to potential compliance issues.

To tackle the growing issue of shadow AI, experts recommend a multi-faceted approach. Arora suggests organizations create centralized AI governance structures, conduct regular audits, and deploy AI-aware security controls that can detect AI-driven exploits.

Additionally, businesses should provide employees with pre-approved AI tools and clear usage policies to reduce the temptation to use unapproved solutions.
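
One concrete form such an AI-aware control can take is scanning outbound traffic logs for requests to known AI services that are not on an approved list. The sketch below is illustrative only: the log format, the approved list, and the domain set are assumptions, not details from the article or from any vendor’s tooling.

```python
# Minimal sketch: flag outbound requests to AI services that are not on an
# approved list. Log format and domain lists are illustrative assumptions.

APPROVED_AI_DOMAINS = {"api.openai.com"}  # hypothetical sanctioned service
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "generativelanguage.googleapis.com",  # Gemini API endpoint
    "api.anthropic.com",
}

def flag_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs for AI traffic to unapproved services."""
    for line in proxy_log_lines:
        # Assumed log format: "<timestamp> <user> <destination-domain>"
        parts = line.split()
        if len(parts) < 3:
            continue
        user, domain = parts[1], parts[2]
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            yield user, domain

if __name__ == "__main__":
    sample = [
        "2025-01-10T09:14:02 alice api.openai.com",
        "2025-01-10T09:15:40 bob generativelanguage.googleapis.com",
    ]
    for user, domain in flag_shadow_ai(sample):
        print(f"Unapproved AI traffic: {user} -> {domain}")
```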


Image by Drazen Zigic, from Freepik

Hackers Are Targeting Apple Devs With This Tricky New Malware

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Microsoft has warned of a new version of the XCSSET malware, the first update to this Mac-based threat since 2022.

In a Rush? Here are the Quick Facts!

  • The malware spreads through infected Xcode projects used by Apple developers.
  • It can steal digital wallet data, notes, and system files.
  • The malware now hides better and stays active using new persistence tricks.

The malware, which spreads through infected Xcode projects, has improved ways to hide and stay on an infected system, making it harder to detect and remove.

XCSSET mainly targets Apple developers by sneaking into Xcode, the software used to build Mac and iPhone apps. If a developer unknowingly downloads an infected project, the malware can steal sensitive information like digital wallet data, notes, and system files. It can also allow attackers to spy on the system and potentially take control.

The latest version has three major upgrades: better hiding techniques, stronger persistence, and new infection methods.

To avoid detection, the malware scrambles its code in random ways so security programs have a harder time identifying it. It now also uses multiple encoding techniques, making it even more difficult to spot.

To ensure it stays on a device, XCSSET has new tricks. One method alters .zshrc, the startup file for the user’s shell, which makes the malware run automatically whenever a new Terminal session is opened.
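
Developers who want to spot-check that startup file themselves can scan it for lines worth a closer look. The sketch below is a generic review aid; the keyword list is an assumption for illustration, not a set of indicators published by Microsoft.

```python
# Minimal sketch: surface .zshrc lines that deserve a manual review.
# The keyword hints are illustrative assumptions, not XCSSET signatures.

from pathlib import Path

SUSPICIOUS_HINTS = ("curl ", "base64 ", "osascript", "/tmp/", "eval ")

def audit_zshrc(path=Path.home() / ".zshrc"):
    if not path.exists():
        print("No .zshrc found.")
        return
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blanks and comments
        if any(hint in stripped for hint in SUSPICIOUS_HINTS):
            print(f"{path}:{lineno}: review manually -> {stripped}")

if __name__ == "__main__":
    audit_zshrc()
```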

Another method involves manipulating the Mac dock by creating a fake version of the Launchpad app. When users click on it, the real app still opens, but the malware secretly runs in the background.
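
A related manual check is to list Dock entries labelled “Launchpad” and confirm where they actually point. The sketch below reads the standard Dock preferences plist; the expected system path is an assumption and may differ across macOS versions.

```python
# Minimal sketch: flag Dock entries labelled "Launchpad" that do not point to
# the expected system location. Plist keys follow the standard Dock format;
# the expected path is an assumption that can vary by macOS version.

import plistlib
from pathlib import Path

DOCK_PLIST = Path.home() / "Library/Preferences/com.apple.dock.plist"
EXPECTED_PREFIX = "file:///System/Applications/"  # assumed location of the real app

def audit_dock_launchpad(plist_path=DOCK_PLIST):
    if not plist_path.exists():
        print("No Dock plist found (not macOS?).")
        return
    with open(plist_path, "rb") as fh:
        dock = plistlib.load(fh)
    for tile in dock.get("persistent-apps", []):
        data = tile.get("tile-data", {})
        label = data.get("file-label", "")
        url = data.get("file-data", {}).get("_CFURLString", "")
        if label.lower() == "launchpad" and not url.startswith(EXPECTED_PREFIX):
            print(f"Suspicious Dock entry: {label} -> {url}")

if __name__ == "__main__":
    audit_dock_launchpad()
```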

The malware has also improved how it infects Xcode projects, using different strategies to hide its payload. This makes it harder for developers to notice something is wrong.
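
Because run-script build phases execute automatically when a project is built, one low-tech precaution is to list any such scripts in a downloaded project before opening it in Xcode. The sketch below is a generic review aid, not a detector tuned to XCSSET’s payload; the project path is a placeholder.

```python
# Minimal sketch: print the shell-script build phases found in an Xcode
# project so a developer can review them before building. This is a generic
# heuristic, not a detector for XCSSET's specific payload.

import re
from pathlib import Path

def list_build_scripts(project_root):
    for pbxproj in Path(project_root).rglob("project.pbxproj"):
        text = pbxproj.read_text(errors="ignore")
        # Shell-script build phases store their command in a shellScript = "..."; entry.
        for match in re.finditer(r'shellScript\s*=\s*"(.*?)";', text, re.DOTALL):
            script = match.group(1)
            print(f"{pbxproj}:\n  {script[:200]}\n")

if __name__ == "__main__":
    list_build_scripts("path/to/downloaded/project")  # placeholder path
```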

Microsoft urges Mac users—especially developers—to be cautious when downloading Xcode projects from the internet, as this is the primary way the malware spreads. They also recommend only installing apps from trusted sources, such as the Mac App Store or official developer websites.