Opinion: How Big of a Threat Is the Chinese AI Model DeepSeek To OpenAI And Other Silicon Valley Companies?

Image generated with DALL·E through ChatGPT


  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

DeepSeek entered the U.S. market, surpassing OpenAI on the App Store, denting Nvidia’s stock, and sparking concern, amazement, and widespread adoption—even among Silicon Valley companies. The Chinese startup managed to create a competitive AI product that challenges the future of American AI companies.

There’s a new sheriff in AI town, and its arrival has shaken the tech industry worldwide. Just a few days ago, DeepSeek, a small Chinese startup, released its latest open-source AI model, the powerful R1, and made many tech giants, investors, and AI developers panic.

The new AI model is more powerful than many expected. Very quickly, DeepSeek’s AI model gained popularity—possibly with the help of a Chinese network building the hype and promoting it as the most advanced technology in the world—and ranked first on Apple’s App Store in the U.S.

Almost immediately, news outlets everywhere were reporting on DeepSeek.

OpenAI, which was so confidently winning the AI race in 2024, began to lose ground, and Nvidia’s rising shares fell dramatically, wiping out almost $600 billion in market value in a single day. It’s difficult to put a precise figure on how big this new threat is to the U.S. economy, but it is certainly not small.

From security concerns and potential lawsuits to a fragile tech market and the explosive adoption of the open-source AI model, here’s the essential information to understand the current DeepSeek drama:

What Is DeepSeek, And Why Is It Such A Big Deal Right Now?

DeepSeek is a small startup founded in 2023 by Liang Wenfeng, a Chinese engineer and entrepreneur, and backed by the Chinese quantitative hedge fund High-Flyer Capital Management. It has been developing open-source AI models since it was created but only started gaining attention a few months ago.

At Wizcase, we reported on the release of the DeepSeek-R1-Lite preview in November and noted experts’ and users’ interest in a product that could already compete with OpenAI’s o1.

DeepSeek launched DeepSeek-V3, R1’s predecessor, in December, catching the attention of Silicon Valley experts like Andrej Karpathy—a former OpenAI researcher and head of AI at Tesla who is currently building an AI-native educational platform. Karpathy highlighted the model’s reduced training cost, among other interesting features.

It’s Cheaper, A Lot Cheaper

According to its official paper, DeepSeek-V3 cost $5.576 million to build—counting all training costs—while OpenAI spent over $100 million to build GPT-4 in 2023.

That’s about 94% cheaper than GPT-4!
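The headline savings figure checks out; a quick back-of-the-envelope calculation using the reported figures above:

```python
# Reported training-cost figures cited in this article
deepseek_v3_cost = 5.576e6   # USD, per DeepSeek's technical report
gpt4_cost = 100e6            # USD, OpenAI's reported GPT-4 cost (2023)

savings = 1 - deepseek_v3_cost / gpt4_cost
print(f"DeepSeek-V3 was {savings:.1%} cheaper to train")  # → 94.4%
```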

On January 15, DeepSeek launched a mobile app, and on January 20, the Chinese startup released its latest reasoning AI model, R1. This model seems to have been conceived under a Daft Punk spell of “harder, better, faster, stronger”: the company managed to deliver a free, open-source, high-quality product that can compete with frontier models for a fraction of the cost and in record time. Boom!

🚀 DeepSeek-R1 is here! ⚡ Performance on par with OpenAI-o1 📖 Fully open-source model & technical report 🏆 MIT licensed: Distill & commercialize freely! 🌐 Website & API are live now! Try DeepThink at https://t.co/v1TFy7LHNy today! 🐋 1/n pic.twitter.com/7BlpWAPu6y — DeepSeek (@deepseek_ai) January 20, 2025

The cost of the new R1 model has not been revealed, but many assume it must also be low: DeepSeek is currently offering its API for far less than OpenAI charges for o1 and, according to Nature, is allowing researchers to try the model.

Mario Krenn—leader of the Artificial Scientist Lab at the Max Planck Institute for the Science of Light in Erlangen, Germany—said that an experiment that costs around $370 with OpenAI’s o1 costs less than $10 with R1. “This is a dramatic difference which will certainly play a role in its future adoption,” Krenn told Nature.

Room For Improvement

Users across the world started downloading the app to test DeepSeek’s model and, after admiring its fascinating reasoning capabilities, like its chain of thought, they also noticed a few peculiarities.

Just like every other AI model, DeepSeek’s R1 can hallucinate, but the Chinese model also filters information, especially on topics sensitive to the Chinese government.

Users shared examples of R1’s censorship: it avoids topics like the Tiananmen Square massacre and Taiwan, and refuses to answer questions about who Xi Jinping is.

DeepSeek censors its own response in realtime as soon as Xi Jinping is mentioned pic.twitter.com/Nb2ylRXERG — Jane Manchun Wong (@wongmjane) January 24, 2025

So a new Chinese app conquers Americans’ curiosity within days, and… what about all the data concerns the U.S. government previously had with that other popular Chinese app, TikTok—currently caught in limbo? Chinese technology is looking unstoppable, while the U.S. government seems less in control.

deepseek’s r1 is an impressive model, particularly around what they’re able to deliver for the price. we will obviously deliver much better models and also it’s legit invigorating to have a new competitor! we will pull up some releases. — Sam Altman (@sama) January 28, 2025

Despite the public congratulations, everyone was suspicious about how the Chinese startup managed to build such a powerful model in such a short time, given all the restrictions and its lack of access to essential technology.

The U.S. government has been imposing strict regulations to prevent exactly this: it forbade chipmakers from selling their most advanced AI chips to China, yet DeepSeek still managed to create cutting-edge artificial intelligence tools using less advanced Nvidia hardware—such as the H800 GPU mentioned in the paper.

But the U.S. government remains skeptical, and the Commerce Department is now investigating, as it suspects that Nvidia’s most advanced chips have been smuggled into China.

OpenAI vs DeepSeek

The atmosphere is tense, and the U.S. government is not the only one with trust issues. OpenAI, along with its partner Microsoft, is also investigating DeepSeek. They believe the Chinese company used data generated by ChatGPT without permission.

OpenAI claims that its models may have helped train China’s DeepSeek model through a process known as distillation—when a large AI model transfers its knowledge to a smaller, more efficient model.

“We know that groups in the P.R.C. are actively working to use methods, including what’s known as distillation, to replicate advanced U.S. A.I. models,” said a spokesperson from OpenAI to the New York Times . “We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more.”
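To make the distillation idea concrete, here is a minimal toy sketch of the classic logit-based version, where a small “student” model is trained to match the “teacher’s” softened output distribution. This is a hypothetical illustration, not DeepSeek’s or OpenAI’s actual pipeline—the allegation above concerns training on generated text, since API users never see a model’s raw logits:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature gives softer targets."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions,
    the core training signal the student minimizes during distillation."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's current predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss it can train to reduce.
teacher = np.array([2.0, 1.0, 0.1])
print(distillation_loss(teacher, teacher))      # → 0.0
print(distillation_loss(teacher, np.zeros(3)))  # positive, mismatched student
```

The key point is that the teacher’s full probability distribution carries far more information per example than a single correct label, which is why distillation lets a smaller model catch up cheaply.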

And it’s all very ironic considering that OpenAI is dealing with multiple copyright and data use violation accusations including a lawsuit filed by the New York Times , a $15 million fine for data violation in Italy, and a recent copyright case issued by Indian publishers.

If You Can’t Beat Them, Join Them?

There’s another phenomenon in the AI field. All the big companies are adopting DeepSeek’s open-source technology—even the ones that are investigating the Chinese startup.

Instead of finding issues with DeepSeek’s technology, Perplexity decided to adopt it quickly. The American AI search company integrated R1 into its platform in record time to offer users an expanded, DeepSeek-powered service. A few days later, Microsoft—yes, the same company investigating DeepSeek—followed Perplexity’s lead by adding DeepSeek-R1 to Azure AI Foundry and GitHub.

Is DeepSeek The New OpenAI In 2025?

The repercussions of DeepSeek’s impact are still uncertain, and China seems to have more cards to play. Alibaba also released its latest model, Qwen 2.5-Max, claiming it is more powerful than DeepSeek-V3, but it has yet to gain much traction.

DeepSeek’s impact has been massive, and many believe this is the end of OpenAI’s supremacy. The American AI companies are no longer as unreachable as they seemed, and we will probably see a surprising outcome soon. Experts like scientist Gary Marcus say that OpenAI is overvalued and could face a near future similar to WeWork’s. So what’s gonna happen with the $500 billion Stargate Project that OpenAI, SoftBank, and President Donald Trump just announced? Place your bets!

Everything suggests that, just as OpenAI arrived at full speed, sweeping everything in its path, DeepSeek is here to stay. In China, the company is already being publicly praised, and its impact and adoption are too significant for it to be pushed out of the U.S. market; removing it would likely take even more force than removing TikTok.

Loopholes In EU AI Bans Could Allow Police To Use Controversial Tech

Image by Freepik


  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

The EU’s new AI Act aims to regulate AI but faces criticism over loopholes, exemptions, and corporate influence.

In a Rush? Here are the Quick Facts!

  • Critics argue the law has loopholes, especially for law enforcement and migration authorities.
  • Exemptions allow AI practices like real-time facial recognition and emotion detection in some cases.
  • Digital rights groups warn the law’s exceptions weaken protections against misuse.

The European Union’s new AI Act marks a significant step in regulating artificial intelligence. The world-first legislation bans certain “unacceptable” uses of AI technology, aiming to protect citizens and preserve democratic values.

Among the prohibitions are predictive policing, scraping facial images from the internet for recognition, and using AI to detect emotions from biometric data. However, critics argue that the law contains significant loopholes, particularly when it comes to policing and migration authorities.

While the AI Act bans certain AI uses in principle, it includes exemptions that could allow European police and migration authorities to continue utilizing controversial AI practices, as first reported by Politico .

For instance, real-time facial recognition in public spaces, although largely banned, can still be allowed in exceptional cases, such as serious criminal investigations.

Similarly, the detection of emotions in public settings is prohibited, but exceptions could be made for law enforcement and migration purposes, raising concerns about the potential use of AI to identify deception at borders.

The law’s broad exemptions have raised alarm among digital rights groups. A coalition of 22 organizations warned that the AI Act fails to adequately address concerns regarding law enforcement’s use of the technology.

“The most glaring loophole is the fact that the bans do not apply to law enforcement and migrational authorities,” said Caterina Rodelli, EU policy analyst at Access Now, as reported by Politico.

The EU’s AI Act also bans the use of AI for societal control, a measure introduced to prevent AI from being used to undermine individual freedoms or democracy.

Brando Benifei, an Italian lawmaker involved in drafting the legislation, explained that the goal is to avoid AI technologies being exploited for “societal control” or the “compression of our freedoms,” as reported by Politico.

According to Politico, this stance was influenced by high-profile incidents like the Dutch tax authorities’ controversial use of AI in 2019, which wrongfully accused some 26,000 people of fraud.

In this case, the authorities used an algorithm to spot potential childcare benefits fraud, but the faulty algorithm led to widespread misidentifications and damage to innocent citizens’ lives.

The controversy surrounding this event played a major role in shaping the law’s restrictions on predictive policing and other forms of AI misuse.

Meanwhile, a report by Corporate Europe Observatory (CEO) raises concerns about the influence of Big Tech companies on the development of EU AI standards . The report reveals that over half of the members of the Joint Technical Committee on AI (JTC21), responsible for setting AI standards, represent corporate or consultancy interests.

This corporate influence has raised alarms that the EU’s AI Act could be undermined by industry interests focused on profitability over ethical considerations. Additionally, civil society and academic representatives face financial and logistical challenges in participating in the standard-setting process.

The report highlights the lack of transparency and democratic accountability within standard-setting organizations like CEN and CENELEC, sparking concerns about the fairness and inclusivity of the standards development process.

While the AI Act puts the EU at the forefront of global AI regulation, the ongoing debate over its loopholes suggests that balancing innovation with safeguarding human rights will be a delicate task moving forward.