
Image by Stanford University, from Flickr

AI exam submissions can go undetected

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

A University of Reading study raises concerns about the use of Artificial Intelligence (AI) in online assessments, highlighting its potential to undermine academic integrity.

The study investigated the ability of AI-generated responses to evade detection in university examinations. Researchers injected entirely AI-written submissions into online exams across various modules in a psychology degree program. The results were striking: 94% of these AI-generated submissions went undetected.

These AI submissions not only bypassed detection but also outperformed real students, consistently achieving top grades (2:1 to 1st class). This means that the AI wasn’t simply mimicking student responses; it demonstrably excelled at answering exam questions.

Existing AI detection tools proved unreliable, producing high rates of both false positives and false negatives. This means they might flag genuine student work as AI-generated while simultaneously allowing AI submissions to pass unnoticed.

Across all modules, only 6% of AI submissions were flagged as potentially non-student work, and some modules identified no suspicious submissions at all. “On average, the AI responses gained higher grades than our real student submissions,” noted researcher Peter Scarfe, although the results varied among modules. The study showed an 83.4% likelihood that AI submissions would outperform those of students.
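
To make the detection-error terminology above concrete, here is a minimal illustrative Python sketch (not from the study; the function and all numbers are hypothetical) showing how a detector's false positive and false negative rates are computed:

```python
# Hypothetical illustration of detector error rates (not the study's data).
# labels:  True = AI-written submission, False = genuine student work
# flagged: True = the detector flagged the submission as AI-generated

def error_rates(labels, flagged):
    fp = sum(1 for ai, f in zip(labels, flagged) if f and not ai)  # student work wrongly flagged
    fn = sum(1 for ai, f in zip(labels, flagged) if ai and not f)  # AI work that slipped through
    fpr = fp / labels.count(False)  # false positive rate
    fnr = fn / labels.count(True)   # false negative rate
    return fpr, fnr

# Toy example: 10 submissions, 5 of them AI-written; the detector catches only 1
labels  = [True] * 5 + [False] * 5
flagged = [True, False, False, False, False, True, False, False, False, False]
fpr, fnr = error_rates(labels, flagged)
print(f"False positive rate: {fpr:.0%}, false negative rate: {fnr:.0%}")
# -> False positive rate: 20%, false negative rate: 80%
```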

A separate study predicts a significant increase in the adoption of AI educational technology, fundamentally changing how we teach and learn. As reported by Reuters, ChatGPT became the fastest-growing consumer application in history, reaching 100 million active users within a few months of its launch in late 2022. Hayder Albayati’s research highlights ChatGPT’s potential to personalize learning through question answering, assignment feedback, and even generating educational content.

Studies have shown that personalized learning can significantly improve student outcomes. When students engage with content relevant to their interests and abilities, they are more likely to develop a deeper understanding of the subject. For example, when a student submits a response, the model analyzes it and provides feedback tailored to their level of understanding, helping to identify areas that need additional support or that demonstrate mastery. These models can also generate personalized learning plans based on performance and feedback. On-demand support is crucial for effective learning, especially for independent or online learners: this flexibility accommodates busy schedules and ensures students receive the help they need to succeed.

Another study concludes that ChatGPT can significantly enhance student productivity. This language model aids students by offering valuable information and resources, improving language skills, facilitating collaboration, increasing time efficiency and effectiveness, and providing support and motivation.

However, this very functionality makes it a tempting tool for students seeking an unfair advantage, especially in unsupervised online exams.

AI in education is a double-edged sword. While it can personalize learning and boost engagement, the University of Reading study highlights a critical issue that demands attention. As AI in education continues to evolve, robust safeguards are needed to ensure the integrity of online assessments and protect the value of genuine student achievement.


Image by Greg Bulla on Unsplash

Google to Automatically Generate Disclosures for AI-Generated Political Ads

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Google shared an update last week announcing that political ads containing AI-generated content will now require a disclosure tag.

“Advertisers are now required to disclose election ads that contain synthetic or digitally altered content that inauthentically depicts real or realistic-looking people or events by selecting the checkbox in the ‘Altered or synthetic content’ section in their campaign settings,” states the document.

Google will automatically generate an in-ad disclosure, according to the type of content being promoted, for feeds and Shorts on mobile phones, as well as for in-stream content on computers, phones, mobile web, and TV screens.

Advertisers must select the checkbox option that best suits the type of AI-generated political content they want to promote. The four main tags are: “This audio was computer generated”, “Altered or synthetic content”, “This video content was synthetically generated”, and “This image does not depict real events.”

The new measure arrives during a contentious election year in the United States, amid rising concerns about AI-generated political ads.

Google’s update was also announced just a few weeks after a political consultant was fined $6 million for making deepfake robocalls cloning President Joe Biden’s voice back in January.