
Does the Internet and AI Really Harm Memory? Scientists Weigh In
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A recent report by Nature explores the impact of digital technology and AI on human memory, challenging fears that the internet is eroding cognitive abilities.
In a Rush? Here are the Quick Facts!
- Studies show AI and GPS alter how people remember information.
- AI-generated summaries may inflate users’ confidence in their knowledge.
- Harvard’s Daniel Schacter says no strong evidence links technology to overall memory decline.
While search engines, GPS, and AI-driven tools shape how people learn and remember, researchers argue that sweeping claims about memory decline are overstated.
Nature reports how psychologist Adrian Ward from the University of Texas at Austin experienced firsthand how dependent he had become on digital navigation. After a malfunction left him without Apple Maps, he found himself lost in familiar parts of Austin. “I just instinctively put on the map and do what it says,” he said.
This reliance on technology has led to concerns about ‘digital amnesia,’ a term coined by a software firm to describe forgetting information because it is stored on a device. Oxford University Press even named ‘brain rot’—a term for mental decline due to consuming trivial online content—as its 2024 word of the year.
However, studies paint a nuanced picture. Some research suggests technology alters memory tasks: for instance, GPS users recall routes less effectively. Nature reports that Ward’s own study found that Googling information inflates people’s sense of knowledge.
But memory expert Elizabeth Marsh of Duke University rejects such extreme claims, calling them “overstatements,” as reported by Nature.
With AI now integrated into search engines, its impact on memory could be profound. Marsh notes, “This whole ChatGPT thing is another level of technology that’s really different from just typing into a Google browser, ‘What’s the capital of Madagascar?’,” as reported by Nature.
Concerns include AI fostering cognitive laziness or even implanting false memories. Digital avatars of deceased individuals, so-called ‘deadbots,’ could also reshape personal recollections. “It’s kind of reassembling a past that we never experienced,” says Andrew Hoskins from the University of Edinburgh, as reported by Nature.
The idea that the internet weakens memory gained traction after a 2011 study by Columbia University psychologist Betsy Sparrow. Participants in her experiments were more likely to remember where they stored facts online than the facts themselves, a phenomenon dubbed the ‘Google effect,’ as reported by Nature.
Yet, later replication attempts produced mixed results, prompting debate over the study’s conclusions.
Ward sees this as part of ‘cognitive offloading,’ in which people delegate memory tasks to external aids. This can be beneficial, freeing up cognitive resources. However, AI-generated summaries in search results might cause users to confuse online knowledge with their own, creating misplaced confidence, Nature reports.
While studies confirm that technology affects memory for specific tasks, Harvard’s Daniel Schacter asserts, “There is very little evidence that these technologies are causing a broader decline in memory,” as reported by Nature.
Instead, researchers suggest that growing information overload—and natural aging—might be contributing to memory concerns more than the internet itself.

AI Adoption In Science Rising, But Challenges Remain
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A new survey published by Nature reveals that researchers worldwide see AI as a transformative force in scientific research and publishing.
In a Rush? Here are the Quick Facts!
- A Wiley survey of 5,000 researchers found AI adoption in science is increasing rapidly.
- Over half believe AI already outperforms humans in tasks like summarizing and plagiarism checks.
- 72% want to use AI for manuscript preparation within the next two years.
Conducted by Wiley, the survey gathered responses from nearly 5,000 researchers across 70 countries, highlighting both the enthusiasm and challenges surrounding AI adoption in academia.
Nature reports that the findings suggest that generative AI tools, such as ChatGPT and DeepSeek, are expected to become widely accepted for tasks like manuscript preparation, grant writing, and peer review within the next two years.
More than half of the respondents believe AI already surpasses humans in over 20 research-related tasks, including summarizing findings, detecting errors in writing, checking for plagiarism, and organizing citations.
Additionally, 34 out of 43 surveyed AI use cases are expected to become mainstream in research within the next two years.
“What really stands out is the imminence of this,” said Sebastian Porsdam Mann, an expert in AI ethics at the University of Copenhagen, as reported by Nature.
“People that are in positions that will be affected by this — which is everyone, but to varying degrees — need to start” addressing this now, he added.
Despite the growing optimism, the survey also highlights limited current use of AI in research. Among the first 1,043 respondents, only 45% reported actively using AI in their work, primarily for translation, proofreading, and manuscript editing.
While 81% had used ChatGPT for personal or professional purposes, fewer were familiar with alternative AI tools like Google’s Gemini or Microsoft’s Copilot. Researchers in China and Germany, along with those working in computer science, were found to be the most active AI users.
While 72% of respondents expressed interest in using AI for manuscript preparation in the next two years, they were less confident in AI’s ability to handle complex tasks such as identifying gaps in literature, selecting journals, or recommending peer reviewers.
Though 64% remain open to using AI for these functions, most still believe humans outperform AI in such areas.
One major obstacle to AI adoption is the lack of guidance and training. Nearly two-thirds of respondents cited inadequate training as a barrier, while 81% voiced concerns over AI’s accuracy, bias, and privacy risks.
“We think there’s a big obligation of publishers and others to help educate,” said Josh Jarrett, senior vice-president of Wiley’s AI growth team, as reported by Nature.
Wiley plans to release updated AI guidelines in the coming months to provide clearer recommendations on safe and ethical AI use in research. As AI continues to evolve, researchers hope for more structured training and clearer guidelines to navigate this rapidly changing landscape.