
Image by Darkest, from GoodFon.com

Computer Virus Uses ChatGPT to Evade Detection and Spread

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Researchers have demonstrated that a computer virus can leverage ChatGPT to rewrite its code and evade detection. The virus not only avoids antivirus scans but also spreads by sending customized email templates that mimic genuine replies. This discovery raises significant cybersecurity concerns, prompting the need for advanced detection methods.

In their research paper, David Zollikofer (ETH Zurich) and Benjamin Zimmerman (Ohio State) warn of potential exploitation by viruses that can rewrite their own code, known as metamorphic malware.

To test this, Zollikofer and Zimmerman created a file that can be delivered to the initial victim’s computer via an email attachment. Once there, the software accesses ChatGPT to rewrite its own code and evade detection.

After ChatGPT rewrites the virus, the program discreetly opens Outlook in the background and scans the most recent email chains. It then uses the content of those emails to prompt ChatGPT to write a contextually relevant reply that innocuously references an attachment, which secretly contains the virus.

For instance, if the program finds a birthday party invitation, it might respond by accepting the invitation and describing the attachment as a suggested playlist for the party. “It’s not something that comes out of the blue,” Zollikofer told New Scientist. “The content is made to fit into the existing content.”

In their experiments, roughly half the time ChatGPT’s alterations either caused the virus file to stop working or led the chatbot to recognize it was being used maliciously and refuse to follow the instructions. However, the researchers suggest that the virus would have a good chance of success if it made five to ten attempts to replicate itself on each computer.
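
The arithmetic behind that five-to-ten-attempts estimate is easy to check. As a back-of-the-envelope sketch (my own arithmetic, assuming each attempt independently succeeds about half the time, per the 50 percent figure above), the probability that at least one of n attempts works is 1 - 0.5^n:

```python
# Back-of-the-envelope check (my own arithmetic, not from the paper):
# if each rewrite attempt independently succeeds with probability ~0.5,
# then P(at least one success in n attempts) = 1 - 0.5**n.
for n in (1, 5, 10):
    print(f"{n} attempts -> {1 - 0.5**n:.1%} chance of success")
# 1 attempts -> 50.0% chance of success
# 5 attempts -> 96.9% chance of success
# 10 attempts -> 99.9% chance of success
```

At five attempts the odds of at least one working copy already exceed 96 percent, which is consistent with the researchers’ suggestion.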

As large language models (LLMs) like ChatGPT become more advanced, the risk of their misuse rises significantly, emphasizing the critical cybersecurity threats they present and the pressing need for more research into smart malware.


Image by Stanford University, from Flickr

AI exam submissions can go undetected

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

A University of Reading study raises concerns about the use of Artificial Intelligence (AI) in online assessments, highlighting its potential to undermine academic integrity.

The study investigated the ability of AI-generated responses to evade detection in university examinations. Researchers injected 100% AI-written submissions into online exams across various modules in a psychology degree program. The results were staggering: 94% of these AI-generated submissions went undetected.

These AI submissions not only bypassed detection but also outperformed real students, consistently achieving top grades (2:1 to 1st class). This means that the AI wasn’t simply mimicking student responses; it demonstrably excelled at answering exam questions.

Existing AI detection tools proved unreliable, with a high rate of both false positives and false negatives. This means they might flag genuine student work as AI-generated while simultaneously allowing AI submissions to pass unnoticed.

Across all modules, only 6% of AI submissions were flagged as potentially non-student work, and some modules flagged none at all. “On average, the AI responses gained higher grades than our real student submissions,” lead author Peter Scarfe noted, although the results varied among modules. The study showed an 83.4% likelihood that AI submissions would outperform those of real students.

A separate study predicts a significant increase in the adoption of AI educational technology, fundamentally changing how we teach and learn. As reported by Reuters, ChatGPT became the fastest-growing consumer application in history, reaching 100 million active users within two months of its launch in late 2022. Hayder Albayati’s research highlights ChatGPT’s potential to personalize learning through question answering, assignment feedback, and even generating educational content.

Studies have shown that personalized learning can significantly improve student outcomes. When students engage with content relevant to their interests and abilities, they are more likely to develop a deeper understanding of the subject. For example, when a student submits a response, the model analyzes it and provides feedback tailored to their level of understanding, helping to identify areas that need additional support or that demonstrate mastery. These models can also generate personalized learning plans based on performance and feedback. On-demand support is crucial for effective learning, especially for independent or online learners, and this flexibility accommodates busy schedules, ensuring students receive the help they need to succeed.
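
As a concrete illustration of the feedback loop described above, here is a minimal sketch of how a tutoring tool might request personalized feedback on a student’s answer. It assumes OpenAI’s Python SDK, an OPENAI_API_KEY set in the environment, and the gpt-4o-mini model; the function, prompt wording, and example answer are my own illustration, not drawn from the cited research:

```python
# Minimal sketch of LLM-generated feedback on a student answer.
# Assumptions (not from the cited studies): OpenAI's Python SDK,
# OPENAI_API_KEY in the environment, and the "gpt-4o-mini" model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def feedback(question: str, student_answer: str) -> str:
    """Ask the model for short, personalized feedback on one answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You are a tutor. Note what the student got right, "
                        "one misconception, and one concrete next step."},
            {"role": "user",
             "content": f"Question: {question}\n"
                        f"Student answer: {student_answer}"},
        ],
    )
    return response.choices[0].message.content

print(feedback("Why does the moon show phases?",
               "Because Earth's shadow covers part of it."))
```

A real deployment would add a grading rubric, log feedback for instructors, and track each student’s history to build the personalized learning plans mentioned above.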

Another study concludes that ChatGPT can significantly enhance student productivity. This language model aids students by offering valuable information and resources, improving language skills, facilitating collaboration, increasing time efficiency and effectiveness, and providing support and motivation.

However, this very functionality makes it a tempting tool for students seeking an unfair advantage, especially in unsupervised online exams.

AI in education is a double-edged sword. While it can personalize learning and boost engagement, the University of Reading study highlights a critical issue that demands attention. As AI in education continues to evolve, robust safeguards are needed to ensure the integrity of online assessments and protect the value of genuine student achievement.