
AI Adoption In Science Rising, But Challenges Remain
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A new survey reported by Nature reveals that researchers worldwide see AI as a transformative force in scientific research and publishing.
In a Rush? Here are the Quick Facts!
- A Wiley survey of 5,000 researchers found AI adoption in science is increasing rapidly.
- Over half believe AI already outperforms humans in tasks like summarizing and plagiarism checks.
- 72% want to use AI for manuscript preparation within the next two years.
Conducted by Wiley, the survey gathered responses from nearly 5,000 researchers across 70 countries, highlighting both the enthusiasm and challenges surrounding AI adoption in academia.
According to Nature, the findings suggest that generative AI tools such as ChatGPT and DeepSeek will become widely accepted for tasks like manuscript preparation, grant writing, and peer review within the next two years.
More than half of the respondents believe AI already surpasses humans in over 20 research-related tasks, including summarizing findings, detecting errors in writing, checking for plagiarism, and organizing citations.
Additionally, 34 out of 43 surveyed AI use cases are expected to become mainstream in research within the next two years.
“What really stands out is the imminence of this,” said Sebastian Porsdam Mann, an expert in AI ethics at the University of Copenhagen, as reported by Nature.
“People that are in positions that will be affected by this — which is everyone, but to varying degrees — need to start” addressing this now, he added.
Despite the growing optimism, the survey also highlights limited current use of AI in research. Among the first 1,043 respondents, only 45% reported actively using AI in their work, primarily for translation, proofreading, and manuscript editing.
While 81% had used ChatGPT for personal or professional purposes, fewer were familiar with alternative AI tools like Google’s Gemini or Microsoft’s Copilot. Researchers in China and Germany, along with those working in computer science, were found to be the most active AI users.
While 72% of respondents expressed interest in using AI for manuscript preparation in the next two years, they were less confident in AI’s ability to handle complex tasks such as identifying gaps in literature, selecting journals, or recommending peer reviewers.
Though 64% remain open to using AI for these functions, they still believe humans outperform AI in these areas.
One major obstacle to AI adoption is the lack of guidance and training. Nearly two-thirds of respondents cited inadequate training as a barrier, while 81% voiced concerns over AI’s accuracy, bias, and privacy risks.
“We think there’s a big obligation of publishers and others to help educate,” said Josh Jarrett, senior vice-president of Wiley’s AI growth team, as reported by Nature.
Wiley plans to release updated AI guidelines in the coming months to provide clearer recommendations on safe and ethical AI use in research. As AI continues to evolve, researchers hope for more structured training and clearer guidelines to navigate this rapidly changing landscape.

Google Lifts Ban On AI Use For Weapons And Surveillance Technologies
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Alphabet, Google’s parent company, has reversed its promise not to use AI for developing weapons or surveillance tools.
In a Rush? Here are the Quick Facts!
- Google updated its AI ethics guidelines, removing harm-related restrictions, just before its earnings report.
- AI head Demis Hassabis emphasized national security and global AI competition as key factors.
- Experts warn that Google’s updated guidelines could lead to more autonomous weapons development.
On Tuesday, just before reporting lower-than-expected earnings, the company updated its AI ethics guidelines, removing references to avoiding technologies that could cause harm, as reported by The Guardian.
Google’s AI head, Demis Hassabis, explained that the guidelines were being revised to adapt to a changing world, with AI now being seen as crucial to protecting “national security.”
In a blog post, Hassabis and senior vice-president James Manyika emphasized that as global AI competition intensifies, the company believes “democracies should lead in AI development,” guided by principles of “freedom, equality, and respect for human rights.”
WIRED highlighted that Google shared updates to its AI principles in a note added to the top of a 2018 blog post introducing the guidelines. “We’ve made updates to our AI Principles. Visit AI.Google for the latest,” the note reads.
Al Jazeera reported that Google first introduced its AI principles in 2018 following employee protests over the company’s involvement in the U.S. Department of Defense’s Project Maven, which explored using AI to help the military identify targets for drone strikes.
In response to the backlash, which led to employee resignations and thousands of petitions, Google decided not to renew its Pentagon contract. Later that year, Google also chose not to compete for a $10 billion cloud computing contract with the Pentagon, citing concerns that the project might not align with its AI principles, as noted by Al Jazeera.
However, in Tuesday’s announcement, Google revised its AI commitments. The updated webpage no longer lists specific prohibited uses for its AI projects, instead giving the company more flexibility to explore sensitive applications.
The revised document now emphasizes that Google will maintain “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.” Additionally, the company states its intention to “mitigate unintended or harmful outcomes.”
Experts warn that AI could soon be widely deployed on the battlefield, and concerns are rising over its use, particularly in autonomous weapons systems.
“For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever,” said Anna Bacciarelli, senior AI researcher at Human Rights Watch, as reported by the BBC.
Bacciarelli also noted that the “unilateral” decision highlights “why voluntary principles are not an adequate substitute for regulation and binding law.”