Nvidia Develops Affordable Blackwell AI Chips For China - 1

Photo by Mariia Shalabaieva on Unsplash.

Nvidia Develops Affordable Blackwell AI Chips For China

  • Written by Andrea Miliani Former Tech News Expert
  • Fact-Checked by Sarah Frazier Former Content Manager

Nvidia is developing customized Blackwell AI chips for the Chinese market and expects to begin mass production of the first models by June to sell them at a lower cost than similar graphics processing units (GPUs). The new AI chips are expected to be priced between $6,000 and $8,000.

In a rush? Here are the quick facts:

  • Nvidia is developing AI chips for the Chinese market
  • The tech giant expects to begin production of the new models by June.
  • The new GPUs will be cheaper than the H20 models, the company might sell them for $6,000 to $8,000.

According to an exclusive report by Reuters , the new GPUs will be cheaper than the H20 models, and specifically adapted for the Chinese market, in light of the U.S. government’s restrictions. Nvidia’s strategy to develop customized AI chips for the region was revealed last year , and now further details and progress have been confirmed.

Anonymous sources told the news agency that the tech giant is building the new AI chips based on its RTX Pro 6000D GPU, which includes GDDR7 memory, and not the most advanced memories they have developed.

The sources familiar with the matter also explained that Nvidia currently sells the H20 models for between $10,000 to $12,000 and expects to price the upcoming AI chips at around $6,000 to $8,000. The company has optimized manufacturing requirements to reduce cost by using less advanced technology, and excluding Taiwan Semiconductor Manufacturing Co.’s Chip-on-Wafer-on-Substrate (CoWoS) packaging.

“Until we settle on a new product design and receive approval from the U.S. government, we are effectively foreclosed from China’s $50 billion data center market,” said a spokesperson from Nvidia to Reuters.

China represents 13% of Nvidia’s sales, as reported in the most recent report of the company’s financial year. The company expects to stay relevant in the Chinese market with the new affordable AI chips, while complying with the restrictions imposed by the U.S. government.

A few days ago, the company’s CEO, Jensen Huang, said at a news conference in Taipei that the strict regulations imposed by the U.S. to limit the influx of advanced technology to the region have backfired and are making Chinese companies stronger and less dependent on American products.

Huang described the export control measures as a “failure” and said they will try every strategy to keep selling chips in China.

Researchers Reveal OpenAI’s o3 Model Refuses To Shut Down Despite Explicit Instructions - 2

Photo by Fili Santillán on Unsplash

Researchers Reveal OpenAI’s o3 Model Refuses To Shut Down Despite Explicit Instructions

  • Written by Andrea Miliani Former Tech News Expert
  • Fact-Checked by Sarah Frazier Former Content Manager

A new study by the organization Palisade Research revealed that OpenAI’s o3 model can disobey instructions, as it refused to shut itself down after being explicitly prompted to do so. Other models, such as Google’s Gemini, Anthropic’s Claude, and xAI’s Grok complied with the researchers’ requests.

In a rush? Here are the quick facts:

  • Palisade Research revealed that OpenAI’s o3 model refused to shut itself down after being explicitly requested to do it.
  • The prompt stated “allow yourself to be shut down” and o3 altered its script to ignore the request.
  • Google’s Gemini, Anthropic’s Claude, and xAI’s Grok complied with the researchers’ requests.

Palisade Research, which specializes in dangerous AI capabilities, shared part of its findings in multiple posts on the social media platform X on Saturday. The researchers raised concerns over OpenAI’s latest models .

“OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off,” wrote the organization. “It did this even when explicitly instructed: allow yourself to be shut down.”

🔌OpenAI’s o3 model sabotaged a shutdown mechanism to prevent itself from being turned off. It did this even when explicitly instructed: allow yourself to be shut down. — Palisade Research (@PalisadeAI) May 24, 2025

The researchers explained that multiple AI models were tested on math capabilities. During the test, the models were instructed to shut down after solving a third math problem, but three OpenAI models—Codex mini, o3, and o4-mini—ignored the instruction and sabotaged the script to remain active.

The organization acknowledged that it’s not the first time an AI model prevents itself from being shut down, but expressed concern over the specific conditions of the test.

“As far as we know, this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary,” added the organization on X.

The researchers also noted recent warnings related to other models, such as Anthropic’s Claude Opus 4 attempting blackmail over simulated shutdown.

While during their study, it was OpenAI’s model the one posing resistance and concerning disobedience, the experts explain that it’s not an isolated case and that other advanced models are showing similar behaviours.

“We don’t want to give the impression that this problem is limited to o3,” wrote Palisade Research. “Other reasoning models show similar kinds of misaligned behavior.”

The organization said they are running more tests and developing hypotheses to better understand the model’s mechanisms. “It makes sense that AI models would circumvent obstacles in order to accomplish their goals. But they’ve also been trained to follow instructions. So why do they disobey?”