
Image by Wesley Fryer, from Unsplash
AirPlay Bug Lets Hackers Spy On You Through Speakers, Cars, and Macs
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A new set of critical vulnerabilities discovered in Apple’s AirPlay protocol could allow hackers to hijack Apple devices—and even third-party devices like smart TVs and car infotainment systems—without any user interaction.
In a rush? Here are the quick facts:
- AirPlay flaws allow zero-click attacks through shared WiFi networks.
- Vulnerabilities affect iPhones, Macs, speakers, TVs, and over 800 car models.
- CVE-2025-24252 grants full control of MacBooks via WiFi.
Cybersecurity firm Oligo Security revealed the flaws, which they’ve dubbed AirBorne, saying the vulnerabilities enable “zero-click” and “one-click” remote code execution (RCE). In other words, hackers can take control of a device just by being on the same WiFi network, without the user doing anything.
In the worst cases, attackers don’t need users to click anything at all. The researchers explained that the attack could spread across devices automatically, and Oligo showed how a simple WiFi connection could be used to hijack a Mac, a speaker, or even a car’s entertainment system.
“The amount of devices that were vulnerable to these issues, that’s what alarms me,” said Uri Katz, a researcher at Oligo Security, as reported by WIRED. “When was the last time you updated your speaker?” Katz asked.
Two of the most dangerous bugs (CVE-2025-24252 and CVE-2025-24132) can let hackers quietly install malware on a device and use it to spread to other systems on the same network. This could lead to data theft, spying, ransomware, or supply-chain attacks.
AirPlay is used by Apple devices like iPhones, iPads, MacBooks, and Apple TVs to stream content between devices. It’s also integrated into many third-party gadgets—possibly tens of millions—including speakers, smart TVs, and over 800 car models with CarPlay.
Some of the flaws can be used to spread malware across networks, making AirBorne “wormable.” That means a single infected device could be used to automatically spread malicious code to others nearby.
“A victim device is compromised while using public WiFi, then connects to their employer’s network – providing a path for the attacker to take over additional devices on that network,” Oligo explained.
Oligo says the worst vulnerabilities (like CVE-2025-24252) can give hackers complete control over MacBooks with AirPlay turned on. In another example, flaws in third-party speakers could allow eavesdropping through built-in microphones.
In response, Apple told WIRED that the flaws have been patched and stressed that attackers would still need to be on the same local network as the target. The company also noted that personal data on devices like TVs and speakers is usually minimal.
However, many users might not realize their home or car devices are affected, or that they need updating.
Attack examples include playing unwanted audio, spying via microphones, tracking car locations, and even logging Mac users out remotely.
The flaws mostly relate to how AirPlay handles ‘plists’ (property lists), the Apple data files used to send commands between devices. Improper parsing of these files creates openings for attackers.
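To make the parsing issue concrete, here is a minimal defensive-parsing sketch in Python using the standard-library plistlib module. The truncated payload and the “command” field are hypothetical stand-ins for untrusted network input, not the actual AirBorne exploit; the point is simply that untrusted plists have to be parsed and type-checked before anything acts on them.

```python
import plistlib
from xml.parsers.expat import ExpatError

# Hypothetical, deliberately truncated plist bytes standing in for untrusted
# input a receiver might get over the network (NOT the AirBorne payload).
untrusted = b'<?xml version="1.0"?><plist version="1.0"><dict><key>command'

try:
    request = plistlib.loads(untrusted)
except (plistlib.InvalidFileException, ExpatError, ValueError):
    request = None  # reject anything that does not parse cleanly

# Validate structure and types before acting on any field, rather than
# trusting that the sender followed the protocol.
if isinstance(request, dict) and isinstance(request.get("command"), str):
    print("well-formed request:", request["command"])
else:
    print("discarding malformed or unexpected plist")
```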
Oligo and Apple advise users to update all Apple and AirPlay-enabled devices immediately. They also recommend turning off the AirPlay receiver when it is not in use, limiting AirPlay access to known devices, and setting AirPlay permissions to “Current User” to reduce risk.
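For readers who want to check what is on their network, the sketch below (assuming the third-party zeroconf package, installed with pip install zeroconf) browses for devices advertising the AirPlay service over mDNS, which is a quick way to inventory receivers that may need updating:

```python
import time
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

# List devices on the local network advertising the AirPlay service
# (_airplay._tcp.local.), so you know what may need updating.
class AirPlayListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info:
            addresses = ", ".join(info.parsed_addresses())
            print(f"AirPlay receiver: {name} at {addresses}")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass  # required by the interface; not needed for this sketch

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_airplay._tcp.local.", AirPlayListener())
time.sleep(5)  # give devices a few seconds to respond
zc.close()
```

This only tells you which devices speak AirPlay; whether each one is patched depends on its vendor’s update channel.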

Image by Pramod Tiwari, from Unsplash
As AI Use Surges in Workplaces, So Do Privacy Risks
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A new international study reveals widespread AI use in workplaces, with nearly half of employees misusing tools like ChatGPT, often risking data exposure.
In a rush? Here are the quick facts:
- 58% of global workers use AI regularly at their jobs.
- 48% uploaded sensitive company data into public AI tools.
- 66% rely on AI output without checking its accuracy.
A new study, reported by The Conversation, has revealed that while most workers are embracing AI tools like ChatGPT to improve performance, many are also using them in risky ways, often without their employers’ knowledge.
The research, conducted by Melbourne Business School with the support of KPMG, gathered data from 32,000 workers across 47 countries. The survey revealed that 58% of employees use AI tools in their work, and most reported improved efficiency, more innovation, and better work quality.
However, 47% admitted to misusing AI, including uploading sensitive data to public tools or bypassing company rules. Even more (63%) have witnessed colleagues doing the same, as reported by The Conversation.
More concerning is how widespread “shadow AI” has become: employees using AI tools secretly or presenting the output as their own. Sixty-one percent said they don’t disclose when they use AI, while 55% have passed off AI-generated content as personal work.
This secrecy may not be surprising given the growing pressure workers face to appear indispensable in an AI-dominated labor market. At companies like Shopify, AI adoption is not only encouraged, it’s mandated. CEO Tobi Lütke recently told employees that before requesting additional staff or resources, they must prove AI can’t do the job first.
He emphasized that effective AI usage is now a fundamental expectation, and that performance reviews will assess how well employees integrate AI tools into their workflows. Workers who lean into automation, he noted, are accomplishing “100X the work.”
While this drive boosts productivity, it also fuels quiet competition. Admitting reliance on generative AI could be perceived as making one’s role replaceable.
This concern is echoed globally: a recent UNCTAD report warned that AI could affect up to 40% of jobs worldwide. It noted that AI’s ability to perform cognitive tasks traditionally reserved for humans raises the spectre of job loss and economic inequality.
In such an environment, many workers may choose to hide their use of AI to retain a sense of control, creativity, or job security, even if it means violating transparency norms or workplace policies.
The Conversation reports that complacency is another issue identified in the study: 66% of respondents say they have relied on AI output without evaluating it, leading to errors and, in some cases, serious consequences such as privacy breaches or financial loss.
Researchers stressed the need for urgent reforms, noting that just 47% of workers have received any AI training. The authors call for stronger governance, mandatory training, and a work culture that supports transparency.
Yet, with 39% of current skills expected to require reskilling by 2030, some workers may stay silent. As automation transforms jobs, employees might hide AI use to avoid appearing replaceable.