
Photo by Glenn Carstens-Peters on Unsplash

Researchers Reveal Students Who Use AI Models To Write Essays Face Cognitive Challenges

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

A recent MIT study on the cognitive cost of using AI models to write essays revealed that students who rely heavily on large language models (LLMs) may face harmful consequences and cognitive challenges.

In a rush? Here are the quick facts:

  • MIT study revealed that students who use AI models to write essays face harmful consequences and cognitive challenges.
  • The group of participants who used ChatGPT showed weaker neural connectivity and difficulties in remembering their work.
  • Experts conclude that AI models can significantly affect students and their learning processes, including what the researchers call a “cognitive cost.”

The study, titled Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, found that the use of AI models can significantly affect students and their learning processes, including what the researchers call a “cognitive cost.”

The research involved 54 participants and revealed that the group using ChatGPT to write essays showed weaker neural connectivity and had difficulty remembering and quoting their own essay just minutes after finishing the task.

While the research team acknowledged the limitations of their small sample size, they hope the findings will serve as “a preliminary guide to understanding the cognitive and practical impacts of AI on learning environments.”

For the study, the researchers divided the participants into three groups: one that could use LLMs such as ChatGPT, another that could access traditional search engines like Google, and a third that could rely only on their own knowledge, called the Brain-only group.

The participants completed four essay writing and analysis sessions—three with the original group setup, and a final session in which access to tools was changed, requiring the LLM group to write using only their brains.

As measurement instruments, the scientists used electroencephalography (EEG) to record brain activity, assessing engagement and cognitive load. The study also included NLP analysis, participant interviews, and essay scoring by both human teachers and an AI tool.
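The study's connectivity comparisons rest on this kind of EEG signal analysis. As a rough illustration (not the researchers' actual pipeline), spectral coherence between electrode pairs is one common way such coupling is quantified; the channel names, sampling rate, and synthetic signals below are assumptions for demonstration only.

```python
# Illustrative sketch: quantifying "neural connectivity" between two EEG
# channels via magnitude-squared coherence. This is NOT the MIT study's
# pipeline; the sampling rate, channel roles, and signals are assumed.
import numpy as np
from scipy.signal import coherence

fs = 256  # assumed EEG sampling rate in Hz
rng = np.random.default_rng(0)

# Synthetic stand-ins for two electrode sites sharing a 10 Hz (alpha) rhythm
t = np.arange(0, 10, 1 / fs)
shared = np.sin(2 * np.pi * 10 * t)
ch_frontal = shared + 0.5 * rng.standard_normal(t.size)
ch_parietal = shared + 0.5 * rng.standard_normal(t.size)

# Coherence near 1 at a frequency means the channels are strongly coupled there
freqs, coh = coherence(ch_frontal, ch_parietal, fs=fs, nperseg=512)
alpha = (freqs >= 8) & (freqs <= 12)
print(f"Mean alpha-band coherence: {coh[alpha].mean():.2f}")
```

Stronger coupling of this kind across many electrode pairs is, roughly speaking, what would register as the higher neural connectivity the Brain-only group exhibited.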

The experts revealed a strong correlation between brain connectivity and the use of external tools. The Brain-only group had the highest levels of neural connectivity, while those who used AI showed the weakest.

Memory retention was also negatively affected. The group that used AI models had more difficulty quoting their own essays and reported the lowest levels of “ownership” over their work.

“As the educational impact of LLM use only begins to settle with the general population, in this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study,” wrote the researchers. “The LLM group’s participants performed worse than their counterparts in the Brain-only group at all levels: neural, linguistic, and scoring.”


Photo by Jefferson Santos on Unsplash

Researchers Discover Massive Data Leak Exposing 16 Billion Login Credentials

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

Cybersecurity researchers have discovered one of the largest breaches in history, involving several collections that expose over 16 billion login credentials from multiple online platforms, including Facebook, Apple, GitHub, and Google.

In a rush? Here are the quick facts:

  • Cybersecurity researchers discovered an enormous breach exposing over 16 billion login credentials.
  • It’s considered one of the largest data breaches in history.
  • Data includes credentials from Apple, Google, GitHub, and Facebook.

According to a report by Cybernews, its team of experts suspects that some of these collections—which include over 30 datasets with an average of 550 million records each—belong to cybercriminals. The datasets vary significantly in size and language, with some in Portuguese and Russian. Researchers determined that most of the data originates from various infostealers—malicious software used to harvest sensitive information.

Cybernews’ research team explained that none of the discovered collections had been previously disclosed, except for one: the massive unsecured database that leaked 184 million login credentials, reported a few days ago. However, the newly discovered collections include even larger datasets, such as one with more than 3.5 billion records.

The cybersecurity experts shared their thoughts and concerns about this colossal discovery and its implications.

“This is not just a leak – it’s a blueprint for mass exploitation. With over 16 billion login records exposed, cybercriminals now have unprecedented access to personal credentials that can be used for account takeover, identity theft, and highly targeted phishing,” the team wrote.

The experts also explained that massive datasets continue to emerge every week, highlighting just how powerful modern infostealers have become. Fortunately, many of the exposed credentials appear to have been only temporarily accessible.

Although it’s impossible to determine exactly how many people were affected—as the different datasets could not be compared—most of them had a similar structure: URL, login information, and password. That order suggests that actors gathering the collections used modern infostealers.
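To make that record structure concrete, here is a minimal, hypothetical sketch of splitting a URL-login-password entry into its fields. The colon separator and sample values are assumptions; real infostealer logs vary widely in format.

```python
# Hypothetical sketch: parsing a "URL:login:password" style record like the
# ones the researchers describe. Separator and sample data are assumed.
from urllib.parse import urlparse

def parse_record(line: str) -> dict:
    # Split from the right so the colon in "https://" stays inside the URL
    url, login, password = line.rsplit(":", 2)
    return {"domain": urlparse(url).netloc, "login": login, "password": password}

sample = "https://example.com/signin:alice@example.com:hunter2"
print(parse_record(sample))
# {'domain': 'example.com', 'login': 'alice@example.com', 'password': 'hunter2'}
```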

Researchers warned that this large collection of login credentials could be used for multiple attacks, including phishing campaigns, ransomware intrusions, account takeovers, and business email compromise.

“The inclusion of both old and recent infostealer logs – often with tokens, cookies, and metadata – makes this data particularly dangerous for organizations lacking multi-factor authentication or credential hygiene practices,” added the team.
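As a practical note on the credential hygiene the team mentions, one widely used check is testing passwords against known breach corpora through the Have I Been Pwned “Pwned Passwords” range API, which uses k-anonymity so only the first five characters of a password's SHA-1 hash ever leave the machine. The sketch below assumes the third-party requests package and is an illustration, not a complete defense.

```python
# Hedged sketch: check whether a password appears in known breach corpora
# via the Have I Been Pwned range API (k-anonymity: only the first five
# SHA-1 hex characters are sent to the service).
import hashlib
import requests

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches (0 if none)."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    # Response lines look like "SUFFIX:COUNT"
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("hunter2"))  # a famously leaked password; expect a large count
```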