
Photo by BoliviaInteligente on Unsplash

OpenAI To Use AMD Chips And Develop Its Own Hardware By 2026

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • OpenAI raised funds to build its own in-house AI chips, with the first chip expected in 2026
  • Broadcom and TSMC are working with OpenAI to develop the new technology
  • The startup projects a $5 billion loss and $3.7 billion in revenue for 2024

OpenAI is reportedly adding AMD chips alongside Nvidia chips to meet infrastructure demands, and is working with Broadcom and Taiwan Semiconductor Manufacturing Company (TSMC) to build its own chips to support its AI systems.

According to an exclusive Reuters report, anonymous sources shared details about these developments with the news agency. ChatGPT’s maker has been considering multiple strategies to reduce costs and diversify its chip supply.

OpenAI has been raising funds—the company recently raised $6.6 billion and is now valued at $157 billion—to build in-house products.

The startup previously considered building “foundries”—a network of factories—to manufacture AI chips, but due to time and costs, decided to postpone that plan and focus on chip design and development.

Shares of both Broadcom—the American semiconductor and infrastructure software developer—and TSMC jumped after the report revealed the companies’ new project with OpenAI.

OpenAI needs AI chips both for training AI models and for inference, and is already one of Nvidia’s largest purchasers. The chips from AMD—Nvidia’s competitor—will be used through Microsoft’s Azure.

According to Reuters’ sources, OpenAI, together with Broadcom and TSMC, expects to develop specialized AI chips for inference; analysts predict that as more AI applications are deployed, demand for inference chips could surpass the current high demand for training chips.

The startup has assembled a team of 20 employees, including former Google employees, to develop its in-house chip project, and the first custom-designed chip is expected in 2026.

The company’s efforts to reduce costs are driven by its high current expenses—electricity, hardware, and cloud services. For this year, OpenAI projects a $5 billion loss on $3.7 billion in revenue.


Image by vectorjuice, from Freepik

New OSI Definition Of Open AI, Challenging Meta’s Model Standards

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Open Source Initiative defines “open” AI
  • Open Source AI allows unrestricted use, modification, and sharing of technology.
  • AI models must disclose training data and source code to be open source.

The Open Source Initiative (OSI) has announced its official definition of “open” AI, which could lead to a confrontation with tech giants like Meta, whose models reportedly do not meet these new standards, as reported by The Verge.

According to The Verge, the OSI has long been the go-to authority for defining open-source software. However, AI systems involve aspects not covered by traditional licenses, such as the data used for training models.

According to the new definition, for an AI system to be considered open source, it must let users use it for any purpose without needing permission. Users must also be able to examine how the system works and inspect its components.

This includes the ability to change the system for any reason, including its outputs, and to share it with others, whether in its original or modified form.

To make modifications, users need access to what’s called the “preferred form” of the system. This means providing detailed information about the training data, including where it came from, its characteristics, how it was selected, and how it was processed.

This information is necessary for users to replicate the system. Additionally, the complete source code that runs the AI system must be shared, covering everything from data processing to model architecture and testing. Model parameters, such as weights and configuration settings, should also be available for users to fully understand and modify the system.

The Verge also notes that this new definition directly challenges Meta’s Llama, which is often advertised as the largest open-source AI model. Although Llama is publicly available for download, it has restrictions on commercial use, especially for applications with over 700 million users, and it doesn’t provide access to its training data.

This means it does not meet OSI’s standards for unrestricted use, modification, and sharing.

Meta spokesperson Faith Eischen told The Verge that while “we agree with our partner OSI on many things,” the company disagrees with this definition:

“There is no single open source AI definition, and defining it is a challenge because previous open source definitions do not encompass the complexities of today’s rapidly advancing AI models.”

Eischen added, “We will continue working with OSI and other industry groups to make AI more accessible and free responsibly, regardless of technical definitions.”

Simon Willison, an independent researcher and creator of the open-source multi-tool Datasette, expressed optimism about the new definition: “Now that we have a robust definition in place, maybe we can push back more aggressively against companies who are ‘open washing’ and declaring their work open source when it actually isn’t,” he told The Verge.