A Disney Worker Downloaded An AI Tool That Led To A Costly Cyberattack

Image by Joe Penniston, from Flickr

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

A simple software download turned into a nightmare for Matthew Van Andel, a former Disney employee. A cyberattack stole his personal information and ultimately cost him his job, as first reported by The Wall Street Journal (WSJ).

In a Rush? Here are the Quick Facts!

  • Van Andel downloaded AI software with hidden malware, compromising his passwords and credentials.
  • The hacker used stolen session cookies to access and leak 44 million Disney messages.
  • Sensitive Disney data, including employee and customer details, was exposed in the breach.

Van Andel unknowingly invited the hacker into his system last February when he downloaded AI image-generation software from GitHub. Hidden within the program was an infostealer, malware designed to extract login credentials and other sensitive data, as reported by WSJ.

GitHub, a widely used platform for sharing code, has also recently been exploited by cybercriminals through fake repositories that spread malware worldwide. Attackers use AI-generated documentation and frequent updates to make these malicious projects appear legitimate, tricking developers into downloading harmful software.

Over the following months, the attacker gained access to Van Andel’s password manager, 1Password, as well as his session cookies, the digital tokens that keep users signed in to online accounts and let anyone who holds them skip the login step.

This granted the hacker unauthorized entry into Disney’s Slack workspace, where millions of messages, internal documents, and even private employee and customer information were stored.
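
A small, hypothetical Python sketch of that mechanism is below; the URL, cookie name, and value are invented for illustration and do not come from the WSJ reporting or reflect Slack’s internals. The point is only that many web services accept a valid session cookie as proof that a login has already happened.

    import requests

    # Hypothetical illustration: a session cookie copied from a victim's machine.
    # Many web services accept a valid session cookie as proof of identity, so
    # reusing it triggers no password prompt and no multi-factor challenge.
    stolen_cookie = {"session_id": "e4b1f0redacted"}  # made-up name and value

    # The attacker attaches the stolen cookie to requests sent from their own machine.
    response = requests.get(
        "https://workspace.example.com/api/messages",  # placeholder URL, not a real endpoint
        cookies=stolen_cookie,
        timeout=10,
    )

    # If the cookie is still valid, the server answers as though the victim were
    # logged in, returning data only the victim should be able to see.
    print(response.status_code)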

WSJ reports that Van Andel remained unaware of the breach until July 11, when he received a cryptic message on Discord referencing a conversation he had in Disney’s Slack channel. Soon after, his credentials were used to leak 44 million messages online, exposing Disney’s internal communications and financial data.

The fallout was swift. Disney launched a cybersecurity investigation, confirming the exposure of confidential customer details, employee passport numbers, and revenue figures from its streaming and theme park divisions. The company later announced plans to phase out Slack as a collaboration tool, reports WSJ.

For Van Andel, the attack didn’t stop at work. The hacker stole his credit card information, leaked his Social Security number, and even published credentials that could be used to access security cameras in his home. “It’s impossible to convey the sense of violation,” said Van Andel, as reported by WSJ.

His digital accounts were hijacked, his children’s Roblox profiles were compromised, and strangers flooded his social media with offensive messages. The hacker, initially claiming to be from a Russia-based hacktivist group, later turned out to be an individual operating under the alias “Nullbulge.”

Days after the breach, Disney terminated Van Andel’s employment, citing forensic evidence of inappropriate material on his company-issued laptop—an allegation he denies.

“Mr. Van Andel’s claim that he did not engage in the misconduct that led to his termination is firmly refuted by the company’s review of his company-issued device,” a Disney spokesperson stated.

Van Andel has since filed a legal claim against Disney, seeking compensation for lost wages and damages. Meanwhile, he continues to battle the lingering effects of the cyberattack, as stolen credentials linked to his accounts remain active in underground markets.

Amazon Launches Alexa+: AI Assistant With Advanced Task Automation

Image by Stock Catalog, from Flickr

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

In a Rush? Here are the Quick Facts!

  • Alexa+ features “agentic capabilities,” allowing it to complete tasks autonomously online.
  • Personalized recommendations are based on user preferences, like dietary needs and interests.
  • Alexa+ costs $19.99/month but is free for Amazon Prime members.

The upgrades behind Alexa+ allow it to understand half-formed thoughts, colloquial language, and intricate topics, making interactions feel more like a conversation with a knowledgeable assistant than with a machine.

A key innovation behind Alexa+ is its “expert” system, which organizes tasks into specialized modules. This allows it to control devices like smart lights and cameras, make reservations, order groceries, and track event tickets. It connects to a variety of services and devices, helping everything work together more efficiently.

One of its most advanced features is its “agentic capabilities,” which enable Alexa+ to complete tasks online autonomously. For example, if a user needs an appliance repaired, Alexa+ can browse the web, find a service provider through Thumbtack, book an appointment, and confirm the details, all without user intervention.
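
Amazon has not published how these agentic flows are wired internally, so the following Python sketch is only a hypothetical illustration of the general pattern: the assistant decomposes a request into steps and executes them itself. The function names and hard-coded logic are assumptions, not Amazon’s or Thumbtack’s APIs.

    from dataclasses import dataclass

    # Hypothetical sketch of an "agentic" task flow; it does not reflect
    # Alexa+'s actual implementation or any real booking API.

    @dataclass
    class Appointment:
        provider: str
        time_slot: str

    def find_repair_provider(appliance: str) -> str:
        # Placeholder for a web search or marketplace lookup step.
        return f"Acme {appliance.title()} Repair"

    def book_appointment(provider: str, preferred_time: str) -> Appointment:
        # Placeholder for a booking call made on the user's behalf.
        return Appointment(provider=provider, time_slot=preferred_time)

    def handle_request(appliance: str, preferred_time: str) -> str:
        # The assistant chains the steps itself and reports only the outcome.
        provider = find_repair_provider(appliance)
        appointment = book_appointment(provider, preferred_time)
        return f"Booked {appointment.provider} for {appointment.time_slot}."

    print(handle_request("dishwasher", "Saturday 10:00"))

Whatever the real plumbing looks like, the user-facing idea is the same: only the final confirmation is surfaced, while the intermediate searching and booking happen without further prompts.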

This represents a shift toward AI assistants that actively handle responsibilities rather than just providing information. Alexa+ also offers deep personalization. It can remember user preferences, such as dietary restrictions or favorite music, to make tailored recommendations.

Users can further enhance Alexa’s knowledge by sharing documents, photos, or emails, allowing the assistant to organize schedules, summarize study materials, or extract relevant details from messages.

As noted by Medium, the AI-enhanced Alexa introduces several concerns. Privacy remains a major issue, as the assistant collects more user data, raising questions about how it’s stored and used.

Additionally, ethical concerns arise as AI assistants become more human-like, since they can subtly influence user behavior and decision-making. This sophistication also makes them more vulnerable to cyberattacks, as bad actors could exploit AI-generated interactions to manipulate users or extract sensitive data.

Recently, OpenAI demonstrated that its AI models surpass 82% of Reddit users in persuasive writing, raising concerns about their potential for political manipulation and misinformation.

If AI can influence opinions at this level, it could also be weaponized for phishing attacks, scams, or social engineering tactics, making transparency, security, and responsible development crucial for maintaining trust.