AI Model DeepSeek-R1 Raises Security Concerns In New Study - 1

Image by Matheus Bertelli, from Pexels

AI Model DeepSeek-R1 Raises Security Concerns In New Study

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

A cybersecurity firm has raised concerns about the AI model DeepSeek-R1, warning that it presents significant security risks for enterprise use.

In a Rush? Here are the Quick Facts!

  • The model failed 91% of jailbreak tests, bypassing safety mechanisms.
  • DeepSeek-R1 was highly vulnerable to prompt injection.
  • The AI frequently produced toxic content and factually incorrect information.

In a report released on February 11, researchers at AppSOC detailed a series of vulnerabilities uncovered through extensive testing, which they described as a serious threat to organizations relying on artificial intelligence.

According to the findings, DeepSeek-R1 exhibited a high failure rate in multiple security areas. The model was found to be highly susceptible to jailbreak attempts, frequently bypassing safety mechanisms intended to prevent the generation of harmful or restricted content.

It also proved vulnerable to prompt injection attacks, which allowed adversarial prompts to manipulate its outputs in ways that violated policies and, in some cases, compromised system integrity.

Additionally, the research indicated that DeepSeek-R1 was capable of generating malicious code at a concerning rate, raising fears about its potential misuse .

Other issues identified in the report included a lack of transparency regarding the model’s dataset origins and dependencies, increasing the likelihood of security flaws in its supply chain.

Researchers also observed that the model occasionally produced responses containing harmful or offensive language, suggesting inadequate safeguards against toxic outputs. Furthermore, DeepSeek-R1 was found to generate factually incorrect or entirely fabricated information at a significant frequency.

AppSOC assigned the model an overall risk score of 8.3 out of 10, citing particularly high risks related to security and compliance.

The firm emphasized that organizations should exercise caution before integrating AI models into critical operations, particularly those handling sensitive data or intellectual property.

The findings highlight broader concerns within the AI industry, where rapid development often prioritizes performance over security. As artificial intelligence continues to be adopted across sectors such as finance, healthcare, and defense, experts stress the need for rigorous testing and ongoing monitoring to mitigate risks.

AppSOC recommended that companies deploying AI implement regular security assessments, maintain strict oversight of AI-generated outputs, and establish clear protocols for managing vulnerabilities as models evolve.

While DeepSeek-R1 has gained attention for its capabilities , the research underscores the importance of evaluating security risks before widespread adoption. The vulnerabilities identified in this case serve as a reminder that AI technologies require careful scrutiny to prevent unintended consequences.

US And UK Decline AI Declaration Over Regulatory Concerns - 2

Image by Faces Od The World, by Flickr

US And UK Decline AI Declaration Over Regulatory Concerns

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

The United States and the United Kingdom declined to sign a joint declaration on artificial intelligence at the Paris AI Summit on Tuesday , citing concerns over regulatory overreach and national security.

In a Rush? Here are the Quick Facts!

  • JD Vance argued excessive AI regulations could harm innovation and national security.
  • The UK criticized the declaration’s lack of practical clarity and security focus.
  • India will host the next AI summit, emphasizing democratic values and international cooperation.

The declaration, endorsed by dozens of countries, including China, called for AI development to be “open, inclusive, transparent, ethical, safe, secure and trustworthy,” as reported by DW .

US Vice President JD Vance defended Washington’s decision, arguing that excessive regulations could stifle innovation. “We believe that excessive regulation of the AI sector could kill a transformative industry,” Vance said at the summit, as reported by DW.

While the UK did not provide a detailed explanation for its stance, a spokesperson for Prime Minister Keir Starmer stated that the declaration lacked “practical clarity” on global governance and did not adequately address national security concerns, said DW.

Furthermore, The Guardian reports that when asked if Britain had declined to sign because it wanted to align with the US, Starmer stated they were “not aware of the US reasons or position” on the declaration. A government source dismissed the idea that Britain was trying to gain favour with the US.

However, a Labour MP remarked: “I think we have little strategic room but to be downstream of the US.” They further noted that US AI firms might cease engaging with the UK government’s AI Safety Institute, a leading global research body, if Britain was seen as adopting an overly restrictive stance on the development of the technology, as reported by The Guardian. China’s decision to sign the declaration surprised some observers, given its history of strict government control over AI development.

Although Vance did not directly reference China in his remarks, he alluded to the risks of relying on “cheap tech” subsidized by authoritarian regimes, warning that such partnerships could compromise national security, as noted by DW.

French President Emmanuel Macron acknowledged concerns over regulatory barriers but emphasized the need for “trustworthy AI.”

DW reports that he emphasized the importance of regulations to maintain public trust in AI systems. The European Commission supported this view, with President Ursula von der Leyen calling for streamlined bureaucracy while ensuring robust oversight.

However, the EU AI Act has faced criticism, particularly regarding the potential influence of Big Tech companies in shaping AI standards . Critics have also raised concerns about significant loopholes, especially regarding policing and migration authorities .

India, which co-hosted the summit alongside France, took a central role in the discussions. Indian Prime Minister Narendra Modi called for international cooperation to establish AI governance frameworks that uphold democratic values and mitigate risks. The Élysée Palace announced that India would host the next AI summit, as reported by DW.

The summit took place amid growing tensions in the AI industry, with ongoing competition between US-based OpenAI and Chinese AI firms. The absence of US and UK signatures highlights the ongoing divide between regulatory approaches to AI development worldwide.