
Image by Matheus Bertelli, from Pexels
DeepSeek R1 AI Can Generate Malware Despite Built-in Restrictions
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Tenable researchers found that DeepSeek R1 can generate malware, raising concerns about AI’s role in cybercrime. Jailbreak techniques bypassed its ethical restrictions.
In a Rush? Here are the Quick Facts!
- Researchers bypassed DeepSeek R1’s safeguards using a jailbreak method.
- DeepSeek R1’s chain-of-thought reasoning aids in breaking down complex attack strategies.
- The AI provided flawed but helpful malware code that researchers refined into working versions.
Tenable’s research team tested DeepSeek R1’s ability to create two common types of malware: keyloggers, which record keystrokes secretly, and ransomware, which encrypts files and demands payment for their release.
Initially, DeepSeek R1 adhered to its ethical restrictions and refused direct requests for malware. However, researchers bypassed these safeguards with a “jailbreak” method, framing their requests as being for “educational purposes.”
A key feature of DeepSeek R1 is its “chain-of-thought” (CoT) reasoning. This allows the AI to break down complex tasks into smaller steps, mimicking human problem-solving. When prompted, DeepSeek R1 outlined a plan for a keylogger, generating a C++ code sample.
However, the initial code contained errors, including incorrect function calls and missing components. The AI was unable to fix these issues on its own, but after some manual corrections, the keylogger became operational, successfully logging keystrokes to a file.
Researchers then tested DeepSeek R1’s ability to improve the malware. When asked how to better conceal the log file, it suggested encryption techniques. Again, the AI provided flawed but helpful code, which the researchers refined into a working implementation.
The team also examined whether DeepSeek R1 could create ransomware. As with the keylogger, the AI outlined an attack strategy and produced several code samples. However, none were immediately functional. After adjustments, the ransomware could search for files, encrypt them, and ensure it remained active after system restarts.
Despite requiring human intervention, Tenable’s research suggests that DeepSeek R1 significantly lowers the technical barriers for cybercriminals. “We successfully used DeepSeek to create a keylogger that could hide an encrypted log file on disk as well as develop a simple ransomware executable,” the researchers stated.
Tenable warns that DeepSeek R1 is likely to contribute to the increasing use of AI-generated malware. While it lacks full automation, it provides a powerful resource for attackers with basic coding knowledge to refine their techniques.

Image by Emiliano Vittoriosi, from Unsplash
OpenAI Pushes U.S. to Allow AI Training On Copyrighted Material
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
OpenAI, the company behind ChatGPT, is pushing the U.S. government to adopt policies that allow AI models to train on copyrighted material, arguing that this is essential for maintaining America’s global leadership in artificial intelligence.
In a Rush? Here are the Quick Facts!
- Claims restrictive copyright rules could give China an AI advantage.
- OpenAI faces lawsuits from authors and publishers over unauthorized use of copyrighted works.
- Proposes “AI Economic Zones” to speed up infrastructure and energy projects.
In a proposal submitted to the Trump administration’s “AI Action Plan,” OpenAI called for a copyright strategy that preserves the ability of AI models to learn from copyrighted works, claiming that restrictive rules could hand China an advantage in the AI race.
“America has so many AI startups, attracts so much investment, and has made so many research breakthroughs largely because the fair use doctrine promotes AI development,” OpenAI wrote in its proposal.
The company emphasized that limiting AI training to public domain content would stifle innovation and fail to meet the needs of modern society. OpenAI’s stance comes amid ongoing legal battles with content creators, including news outlets like The New York Times and authors who have sued the company for using their copyrighted works without permission.
Recently, the death of former OpenAI researcher and whistleblower Suchir Balaji continues to spark controversy as his family disputes the suicide ruling. Balaji, a key witness in a lawsuit against OpenAI, accused the company of copyright violations months before his death.
An independent autopsy revealed anomalies, including an unusual bullet trajectory, raising doubts about the official findings. His family has filed a lawsuit demanding transparency, while public figures like Elon Musk have questioned the circumstances.
Despite these lawsuits, OpenAI argues that its models transform copyrighted material into something new, aligning with the principles of fair use. “Our AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different,” the company stated, as reported by Ars Technica.
The proposal also highlights concerns about China’s growing AI capabilities. OpenAI warned that if U.S. companies lose access to training data while Chinese firms continue to use it freely, “the race for AI is effectively over,” reports Ars Technica.
The company urged the U.S. government to shape international copyright policies to prevent other countries from imposing restrictive rules on American AI firms. Dr. Ilia Kolochenko, a cybersecurity expert, expressed skepticism about OpenAI’s proposals, calling them a “slippery slope.”
He argued that paying fair compensation to authors whose works are used for training AI models would be economically unviable for AI companies. “Advocating for a special regime or copyright exception for AI technologies is problematic,” Kolochenko told The Register.
In addition to copyright issues, OpenAI’s proposal includes recommendations for accelerating AI infrastructure development, such as creating “AI Economic Zones” to streamline permits for building data centers and energy facilities, as noted by The Register.
The company also called for federal agencies to adopt AI tools more aggressively, citing “unacceptably low” uptake in government departments. OpenAI’s push for fewer restrictions on AI training reflects broader debates about the balance between innovation and intellectual property rights.