Researchers Discover Security Flaws In Open-Source AI And ML Models

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Over 30 security flaws found in open-source AI and ML tools.
  • Severe vulnerabilities impact tools like Lunary, ChuanhuChatGPT, and LocalAI.
  • LocalAI flaw allows attackers to infer API keys through timing analysis.

A recent investigation has uncovered over 30 security flaws in open-source AI and machine learning (ML) models, raising concerns about potential data theft and unauthorized code execution, as reported by The Hacker News (THN).

These vulnerabilities were found in widely used tools, including ChuanhuChatGPT, Lunary, and LocalAI, and were reported via Protect AI’s Huntr bug bounty platform, which incentivizes developers to identify and disclose security issues.

Among the most severe vulnerabilities identified, two major flaws impact Lunary, a toolkit designed to manage large language models (LLMs) in production environments.

The first flaw, CVE-2024-7474, is categorized as an Insecure Direct Object Reference (IDOR) vulnerability. It allows a user with access privileges to view or delete other users’ data without authorization, potentially leading to data breaches and unauthorized data loss.
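
IDOR flaws follow a recognizable pattern: an endpoint trusts an object ID supplied by the client instead of checking it against the authenticated session. Below is a minimal sketch of that pattern and its fix, using a hypothetical Flask endpoint and a made-up user store, not Lunary’s actual code:

```python
# Hypothetical sketch of an IDOR flaw; illustrative only, not Lunary's code.
from flask import Flask, session, abort

app = Flask(__name__)
app.secret_key = "dev-only"      # placeholder for the sketch
users = {1: "alice", 2: "bob"}   # stand-in user store

@app.route("/users/<int:user_id>", methods=["DELETE"])
def delete_user(user_id):
    # VULNERABLE: the handler trusts the ID in the URL, so any
    # authenticated user can delete any other user's account.
    users.pop(user_id, None)
    return "", 204

@app.route("/v2/users/<int:user_id>", methods=["DELETE"])
def delete_user_fixed(user_id):
    # FIXED: the target ID must match the identity bound to the session.
    if session.get("user_id") != user_id:
        abort(403)
    users.pop(user_id, None)
    return "", 204
```

The same missing ownership check explains the prompt-update flaw covered below: any endpoint that accepts a user-controlled identifier has to verify, server-side, that the caller actually owns the object it names.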

The second critical issue, CVE-2024-7475, is an improper access control vulnerability that lets an attacker update the system’s SAML (Security Assertion Markup Language) configuration.

By exploiting this flaw, attackers can bypass login security to gain unauthorized access to personal data, raising significant risks for any organization relying on Lunary for managing LLMs.
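
Improper access control differs from IDOR in that the missing check is on the operation itself rather than on object ownership: a sensitive, admin-only action is exposed to ordinary callers. A minimal sketch of that pattern, again using a hypothetical Flask endpoint rather than Lunary’s real code:

```python
# Hypothetical sketch of improper access control on a SAML settings
# endpoint; illustrative only, not Lunary's code.
from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = "dev-only"  # placeholder for the sketch
saml_config = {}             # stand-in identity-provider settings store

@app.route("/saml/config", methods=["POST"])
def update_saml_config():
    # VULNERABLE: no role check, so any caller can replace the SAML
    # settings and point logins at an attacker-controlled identity provider.
    saml_config.update(request.get_json())
    return "", 204

@app.route("/v2/saml/config", methods=["POST"])
def update_saml_config_fixed():
    # FIXED: only administrators may change identity-provider settings.
    if session.get("role") != "admin":
        abort(403)
    saml_config.update(request.get_json())
    return "", 204
```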

Another security weakness identified in Lunary, CVE-2024-7473, also involves an IDOR vulnerability that allows attackers to update prompts submitted by other users. This is achieved by manipulating a user-controlled parameter, making it possible to interfere with others’ interactions in the system.

In ChuanhuChatGPT, a critical vulnerability (CVE-2024-5982) allows an attacker to exploit a path traversal flaw in the user upload feature, as noted by THN.
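
Path traversal bugs in upload handlers typically come from joining a client-supplied filename onto a storage directory without normalizing the result. The sketch below uses made-up paths and function names to show the pattern and one common fix; it is not ChuanhuChatGPT’s actual code:

```python
# Hypothetical sketch of a path traversal flaw in an upload handler;
# illustrative only, not ChuanhuChatGPT's code.
import os

UPLOAD_DIR = "/srv/app/uploads"  # made-up storage directory

def save_upload_vulnerable(filename: str, data: bytes) -> None:
    # VULNERABLE: a name like "../../app/config.py" escapes UPLOAD_DIR,
    # letting an attacker write files anywhere the process can.
    path = os.path.join(UPLOAD_DIR, filename)
    with open(path, "wb") as f:
        f.write(data)

def save_upload_fixed(filename: str, data: bytes) -> None:
    # FIXED: resolve the final path and reject anything that lands
    # outside the upload directory.
    path = os.path.realpath(os.path.join(UPLOAD_DIR, filename))
    if not path.startswith(os.path.realpath(UPLOAD_DIR) + os.sep):
        raise ValueError("path traversal attempt blocked")
    with open(path, "wb") as f:
        f.write(data)
```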

Exploited this way, the flaw can lead to arbitrary code execution, directory creation, and exposure of sensitive data, presenting a high risk for systems that rely on the tool.

LocalAI, another open-source platform that enables users to run self-hosted LLMs, has two major flaws that pose similar security risks, said THN.

The first flaw, CVE-2024-6983, enables malicious code execution by allowing attackers to upload a harmful configuration file. The second, CVE-2024-7010, lets hackers infer API keys by measuring server response times, using a timing attack method to deduce each character of the key gradually, noted THN.
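
Timing attacks of this kind work because a naive comparison returns as soon as it hits the first wrong character, so responses get measurably slower as more leading characters are guessed correctly. The sketch below shows the vulnerable pattern and the constant-time fix; the key value and function names are made up, not LocalAI’s actual code:

```python
# Hypothetical sketch of a timing side channel in an API key check;
# illustrative only, not LocalAI's code.
import hmac

SECRET_KEY = "sk-made-up-example-key"

def check_key_vulnerable(candidate: str) -> bool:
    # VULNERABLE: bails out at the first mismatch, so the response time
    # reveals how many leading characters of the guess are correct.
    if len(candidate) != len(SECRET_KEY):
        return False
    for a, b in zip(candidate, SECRET_KEY):
        if a != b:
            return False
    return True

def check_key_fixed(candidate: str) -> bool:
    # FIXED: hmac.compare_digest takes the same time regardless of where
    # the first mismatch occurs.
    return hmac.compare_digest(candidate.encode(), SECRET_KEY.encode())
```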

In response to these findings, Protect AI introduced a new tool called Vulnhuntr, an open-source Python static code analyzer that uses large language models to detect vulnerabilities in Python codebases, said THN.

Vulnhuntr breaks down code into smaller chunks to identify security flaws within the constraints of a language model’s context window. It scans project files to detect and trace potential weaknesses from user input to server output, enhancing security for developers working with AI code.
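
Protect AI’s announcement doesn’t spell out Vulnhuntr’s internals, but the chunking idea itself is straightforward: split a source file into function-sized pieces so that each piece fits within the model’s context window. A hypothetical illustration of that step:

```python
# Hypothetical sketch of function-level chunking for LLM analysis;
# illustrative only, not Vulnhuntr's actual implementation.
import ast

def chunk_by_function(source: str, max_chars: int = 4000) -> list[str]:
    """Return one chunk per top-level function or class, truncated to fit."""
    tree = ast.parse(source)
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            segment = ast.get_source_segment(source, node) or ""
            chunks.append(segment[:max_chars])
    return chunks
```

Each chunk, along with the code it calls into, could then be handed to the model with a prompt asking whether user input can reach a dangerous sink, which matches the input-to-output tracing described above.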

These discoveries highlight the critical importance of ongoing vulnerability assessments and timely security updates to protect AI and ML systems against emerging threats.

OpenAI To Use AMD Chips And Develop Its Own Hardware By 2026

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • OpenAI raised funds to build its own in-house AI chips, with the first model expected by 2026.
  • Broadcom and TSMC are working with OpenAI to develop the new technology.
  • The startup projects a $5 billion loss and $3.7 billion in revenue for 2024.

OpenAI is reportedly adding AMD chips alongside Nvidia chips to meet its infrastructure demands, and is working with Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC) to build its own chips to support its AI systems.

According to an exclusive Reuters report, anonymous sources shared details of these developments with the news agency. ChatGPT’s owner has been considering multiple strategies to reduce costs and diversify its chip supply.

OpenAI has been raising funds—the company recently raised $6.6 billion and is now valued at $157 billion—to build in-house products.

The startup previously considered building “foundries”—a network of factories—to manufacture AI chips but, given the time and costs involved, decided to postpone that plan and focus on chip design and development.

Shares of both Broadcom—the American semiconductor and infrastructure software developer—and TSMC jumped after the report revealed the companies’ new project with OpenAI.

OpenAI needs AI chips both for training AI models and for running inference, and it is already one of Nvidia’s largest buyers. The chips from AMD, Nvidia’s competitor, will be used through Microsoft’s Azure platform.

As explained by Reuters’ sources, OpenAI expects to work with Broadcom and TSMC to develop specialized AI chips for inference, since analysts predict that, as more AI apps are deployed, demand for inference chips could surpass the current high demand for training chips.

The startup has assembled a team of 20 employees, including former Google employees, to develop its in-house chip project, and the first custom-designed chip is expected in 2026.

The company’s efforts to reduce costs stem from its current high expenses for electricity, hardware, and cloud services. For this year, OpenAI projects a $5 billion loss against $3.7 billion in revenue.