
Image by Kevin Ku, from Unsplash
AI Widens Cybersecurity Gap Between Attackers And Defenders
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Generative AI (GenAI) is reshaping the cybersecurity landscape, driving advancements for defenders while empowering attackers to operate with greater efficiency and creativity, according to the latest Splunk CISO Report.
In a Rush? Here are the Quick Facts!
- 64% of CISOs blame inadequate support for the rise in successful cyberattacks.
- AI-powered attacks top CISOs’ concerns, followed by cyber extortion and data breaches.
- 86% of cybersecurity professionals believe AI can address the cybersecurity skills gap.
The report reveals how artificial intelligence is being harnessed on both sides of the cyber conflict, creating opportunities and challenges in equal measure.
The study highlights a growing disparity in priorities between board members and Chief Information Security Officers (CISOs). While 52% of CISOs are focused on adopting emerging technologies like GenAI, only 33% of board members share their enthusiasm.
This disconnect extends to budget allocation: just 29% of CISOs believe they have sufficient funding to secure their organizations, while 41% of board members consider those security budgets adequate.
This lack of alignment is raising red flags, as nearly two-thirds (64%) of CISOs report that insufficient resources have contributed to cyberattacks on their organizations.
For cybercriminals, GenAI is proving to be a game-changer. It enables them to make existing attacks more effective (32%), increase attack volumes (28%), and create entirely new types of threats (23%).
On the defensive side, security teams are leveraging AI for tasks such as identifying risks (39%), analyzing threat intelligence (39%), and prioritizing threat detection (35%).
However, concerns about AI-powered attacks dominate among CISOs, with 36% identifying them as their primary worry, followed by cyber extortion (24%) and data breaches (23%).
Greg Clark, Director of Product Management at OpenText Cybersecurity, emphasized the need for comprehensive training alongside AI-powered solutions.
“Phishing scams and insider threats are only getting more sophisticated. Whether a large enterprise or a small business, education and awareness across all departments need to be layered on top of AI-powered technologies that detect threats,” Clark said, as reported by TechRadar.
The cybersecurity skills gap remains a pressing issue, but AI is seen as a potential solution. A significant majority of respondents—86%—believe AI can assist in onboarding entry-level talent, while 65% see it enhancing the productivity of experienced professionals.
To address these challenges, organizations are ramping up security training for compliance and legal teams, with over 90% prioritizing cross-disciplinary education.
The report underscores the importance of maintaining robust cyber-hygiene practices, such as enforcing strong passwords, implementing multi-factor authentication, and assessing third-party vendors for vulnerabilities.

Image by standret, from Freepik
Critical Security Flaw Discovered In Meta’s AI Framework
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A severe security vulnerability, CVE-2024-50050, has been identified in Meta’s open-source framework for generative AI, known as Llama Stack.
In a Rush? Here are the Quick Facts!
- The vulnerability, CVE-2024-50050, allows remote code execution via deserialization of untrusted data.
- Meta patched the issue in version 0.0.41 with a safer Pydantic JSON implementation.
- The vulnerability scored 9.3 (critical) on CVSS 4.0 due to its exploitability.
The flaw, disclosed by the Oligo Research team, could allow attackers to remotely execute malicious code on servers using the framework. The vulnerability, caused by unsafe handling of serialized data, highlights the ongoing challenges of securing AI development tools.
Llama Stack, introduced by Meta in July 2024, supports the development and deployment of AI applications built on Meta’s Llama models. The research team explains that the flaw lies in its default server, which uses Python’s pyzmq library to handle data.
A specific method, recv_pyobj, automatically deserializes incoming data with Python’s pickle module, which is unsafe for untrusted input. This makes it possible for attackers to send crafted data that runs unauthorized code. The researchers say that when exposed over a network, servers running the default configuration become vulnerable to remote code execution (RCE).
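The danger of pickle here is general, not specific to Llama Stack. The minimal sketch below (an illustration only, not Oligo’s proof of concept) shows why: pickle lets a serialized object specify a callable to invoke during loading, so any server that unpickles attacker-controlled bytes, which is effectively what pyzmq’s recv_pyobj does internally, can be made to run arbitrary commands.

```python
import os
import pickle

# Illustration of why unpickling untrusted bytes is dangerous.
# pickle allows an object to define __reduce__, which tells the
# unpickler which callable to invoke while deserializing.
class Malicious:
    def __reduce__(self):
        # The unpickler will call os.system("echo pwned") on load.
        return (os.system, ("echo pwned",))

payload = pickle.dumps(Malicious())

# Anything equivalent to pickle.loads() on attacker-controlled bytes
# (which is what recv_pyobj performs under the hood) executes the
# embedded callable. This is the general pickle-RCE pattern, not the
# specific Llama Stack exploit.
pickle.loads(payload)  # runs "echo pwned"
```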
Such attacks could result in resource theft, data breaches, or unauthorized control over AI systems. The vulnerability was assigned a critical CVSS score of 9.3 (out of 10) by security firm Snyk, although Meta rated it as medium severity at 6.3, as reported by Oligo.
Oligo researchers uncovered the flaw during their analysis of open-source AI frameworks. Even as Llama Stack surged in popularity, growing from 200 GitHub stars to over 6,000 within months, the team flagged the risky use of pickle for deserialization, a common cause of RCE vulnerabilities.
To exploit the flaw, attackers could scan for open ports, send malicious objects to the server, and trigger code execution during deserialization. Meta’s default implementation for Llama Stack’s inference server proved particularly susceptible.
Meta quickly addressed the issue after Oligo’s disclosure in September 2024. By October, a patch was released, replacing the insecure pickle-based deserialization with a safer, type-validated JSON implementation using the Pydantic library. Users are urged to upgrade to Llama Stack version 0.0.41 or higher to secure their systems.
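As a rough sketch of what type-validated JSON deserialization looks like in practice, here is a minimal Pydantic example; the InferenceRequest schema is hypothetical and stands in for whatever message types Llama Stack actually defines. Input that does not match the declared fields and types is rejected instead of being executed.

```python
from pydantic import BaseModel, ValidationError

# Illustrative sketch (not Meta's actual patch): incoming JSON is
# validated against a declared schema rather than unpickled.
class InferenceRequest(BaseModel):  # hypothetical message schema
    model: str
    prompt: str
    max_tokens: int = 256

raw = b'{"model": "llama-3", "prompt": "Hello", "max_tokens": 64}'

try:
    # model_validate_json parses the JSON and enforces the declared
    # types; unexpected structures fail validation.
    request = InferenceRequest.model_validate_json(raw)
    print(request)
except ValidationError as err:
    # Malformed or unexpected input is rejected instead of triggering
    # code execution, unlike pickle.loads().
    print("Rejected:", err)
```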
The maintainers of pyzmq, the library used in Llama Stack, also updated their documentation to warn against using recv_pyobj with untrusted data.
This incident underscores the risks of using insecure serialization methods in software. Developers are encouraged to rely on safer alternatives and regularly update libraries to mitigate vulnerabilities. For AI tools like Llama Stack, robust security measures remain vital as these frameworks continue to power critical enterprise applications.