
Nvidia Loses Nearly $600 Billion As AI Rival DeepSeek Shakes Up AI Market
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Nvidia’s shares dropped 17% on Monday, losing around $600 billion in market value, after DeepSeek gained popularity with its latest AI model and reached the top spot on Apple’s App Store. The loss has been described as the largest single-day market value loss for a company in history.
In a Rush? Here are the Quick Facts!
- Nvidia lost $589 billion in market capitalization yesterday after DeepSeek released its latest AI model and gained popularity in the U.S. market.
- The Chinese startup demonstrated that it can build powerful AI models without relying on Nvidia’s GPUs, posing a challenge to Nvidia’s leadership in the AI market.
- The American company’s valuation dropped from $3.5 trillion to $2.9 trillion.
According to The New York Times, the plunge is related to the Chinese startup’s success in proving it can build and train advanced AI models without relying on Nvidia’s Graphics Processing Units (GPUs), challenging the American company’s leadership in the market.
Last year, Nvidia’s shares soared as it capitalized on the AI wave, developing hardware for cutting-edge AI technology and climbing to the top ranks among the world’s most valuable companies. Although its shares had already dipped in August after months of rapid growth, the recent drop is particularly alarming, marking the company’s worst day since the pandemic-driven sell-off in 2020.
According to Forbes, this loss represents “the biggest market loss in history” for a company in a single day, as Nvidia lost a total of $589 billion in market capitalization yesterday. Nvidia also lost its position as the most valuable company in the world, with its valuation dropping from $3.5 trillion to $2.9 trillion, falling below Microsoft and Apple.
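The reported figures are internally consistent, as a quick back-of-the-envelope check shows (a rough sketch using only the rounded numbers in the article, not exact closing prices):

```python
# Sanity check of the reported figures (approximate, rounded values from the article).
loss = 589e9        # reported one-day market-cap loss, in dollars
prior_cap = 3.5e12  # valuation before the drop
new_cap = 2.9e12    # valuation after the drop

pct_drop = loss / prior_cap * 100
cap_diff_billions = (prior_cap - new_cap) / 1e9

print(f"Implied percentage drop: {pct_drop:.1f}%")   # close to the ~17% share decline reported
print(f"Valuation difference: ${cap_diff_billions:.0f}B")  # roughly matches the ~$589B loss
```

The ~$600 billion valuation gap and the ~17% single-day share decline line up with the $589 billion loss figure, within rounding.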
Nvidia controls roughly 90% of the AI chip market and has powered OpenAI’s ChatGPT since 2022, growing its revenue by more than 200% in recent years. DeepSeek’s breakthrough, however, has raised concerns about the company’s future.
Just a few days ago, Nvidia’s CEO Jensen Huang introduced new chips and technologies at the Consumer Electronics Show (CES) in Las Vegas.

The Rise of GhostGPT: Cybercrime’s New Weapon
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Artificial intelligence has revolutionized the way we approach everyday tasks, but it has also created new tools for cybercriminals. GhostGPT, an uncensored AI chatbot, is the latest example of this darker side of AI technology, as reported in a recent analysis by Abnormal.
In a Rush? Here are the Quick Facts!
- GhostGPT is an uncensored AI chatbot used for malware creation and phishing scams.
- It bypasses ethical guidelines by using jailbroken or open-source AI language models.
- GhostGPT is sold via Telegram, offering fast responses and no activity logs.
Unlike traditional AI models that are bound by ethical guidelines, GhostGPT removes those restrictions entirely, making it a powerful tool for malicious purposes. Abnormal reports that GhostGPT operates by connecting to a jailbroken version of ChatGPT, stripping away safeguards that typically block harmful content.
Sold on platforms like Telegram, the chatbot is accessible to anyone willing to pay a fee. It promises fast processing, no activity logs, and instant usability, features that make it particularly appealing to those engaging in cybercrime.
A researcher, speaking anonymously to Dark Reading, revealed that the authors offer three pricing tiers for the large language model: $50 for one week, $150 for one month, and $300 for three months.
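The tiers follow a standard volume-discount structure. A quick comparison of the effective monthly cost (a rough sketch, assuming four weeks approximate one month):

```python
# Effective monthly cost of each reported GhostGPT pricing tier.
# Duration is expressed in months; one week is approximated as 0.25 months.
tiers = {
    "1 week":   (50, 0.25),
    "1 month":  (150, 1),
    "3 months": (300, 3),
}

for name, (price, months) in tiers.items():
    print(f"{name}: ${price / months:.0f}/month")
```

Under that approximation, the weekly tier works out to about $200 a month, while the three-month tier drops to $100 a month, a typical incentive for longer subscriptions.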
The researchers explain that the chatbot’s capabilities include generating malware, crafting exploit tools, and writing convincing phishing emails. For instance, when prompted to create a fake DocuSign phishing email, GhostGPT produced a polished and deceptive template designed to trick unsuspecting victims.
While promotional materials for the tool suggest it could be used for cybersecurity purposes, its focus on activities like business email compromise scams makes its true intent clear.
What sets GhostGPT apart is its accessibility. Unlike more complex tools that require advanced technical knowledge, this chatbot lowers the barrier for entry into cybercrime.
Newcomers can purchase it and begin using it immediately, while experienced attackers can refine their techniques with its unfiltered responses. The absence of activity logs further enables users to operate without fear of being traced, making it even more dangerous.
The implications of GhostGPT go beyond the chatbot itself. It represents a growing trend of weaponized AI that is reshaping the cybersecurity landscape. By making cybercrime faster, easier, and more efficient, tools like GhostGPT pose significant challenges for defenders.
Recent research shows that AI could create up to 10,000 malware variants, evading detection 88% of the time. Meanwhile, researchers have uncovered vulnerabilities in AI-powered robots, allowing hackers to cause dangerous actions such as crashes or weaponization, raising critical security concerns.
As GhostGPT and similar chatbots gain traction, the cybersecurity community is locked in a race to outpace these evolving threats. The future of AI will depend not only on innovation but also on the ability to prevent its misuse.