
Image by National Cancer Institute, from Unsplash
Google Launches ‘AI Co-Scientist’ To Accelerate Discovery And Innovation
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Researchers at Google have introduced a new AI system, known as the AI co-scientist, built on the Gemini 2.0 platform.
In a Rush? Here are the Quick Facts!
- The AI system features specialized agents for generating, ranking, and refining research ideas.
- Built on Gemini 2.0, the system is available to research organizations through a Trusted Tester Program.
- It demonstrated promising results, such as suggesting potential drug treatments for leukemia.
This system aims to enhance scientific and biomedical research by functioning as a virtual collaborator for scientists.
The AI co-scientist is designed to generate novel hypotheses, propose research directions, and support long-term scientific planning, helping to accelerate discovery processes in a variety of fields, including drug repurposing, treatment target identification, and antimicrobial resistance.
The system’s core innovation lies in its multi-agent architecture. Rather than relying on a single AI model, the AI co-scientist utilizes a coalition of specialized agents, each tasked with a specific function.
These agents are inspired by the scientific method and work together to generate, refine, and evaluate hypotheses. For example, the “Generation” agent proposes new research ideas, while the “Ranking” agent compares and ranks these ideas based on their potential impact.
The system’s “Evolution” and “Reflection” agents iteratively improve the quality of hypotheses by analyzing feedback, while the “Meta-review” agent oversees the overall process, ensuring alignment with the research goal.
This collaborative approach allows the system to continuously refine its outputs. By parsing a given research goal into manageable tasks, the Supervisor agent manages the system’s workflow, allocating resources and ensuring that each specialized agent performs its role.
As a result, the AI co-scientist adapts its approach over time, improving the quality and novelty of its suggestions.
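Google has not published the co-scientist's implementation, but the loop described above can be sketched in miniature. In the snippet below, every name (supervisor, generate, rank, evolve, reflect) is a hypothetical stand-in for an LLM-backed agent, not Google's actual API, and the ranking is a random stub rather than a real pairwise tournament.

```python
# A minimal, purely illustrative sketch of the coalition-of-agents loop.
# Google has not released this code; all names and logic here are assumptions.
import random

def generate(goal, n=4):
    # "Generation" agent: in the real system, an LLM proposes hypotheses.
    return [f"Hypothesis {i} for: {goal}" for i in range(n)]

def rank(hypotheses):
    # "Ranking" agent: a random stub standing in for pairwise LLM comparisons.
    return sorted(hypotheses, key=lambda _: random.random())

def reflect(hypothesis):
    # "Reflection" agent: critiques a hypothesis and returns feedback.
    return "check novelty against prior literature"

def evolve(hypothesis, feedback):
    # "Evolution" agent: refines a hypothesis using the reviewer's feedback.
    return f"{hypothesis} [refined: {feedback}]"

def supervisor(goal, rounds=3):
    # Supervisor agent: decomposes the goal, schedules the specialists,
    # keeps the strongest candidates, and injects fresh ideas each round.
    pool = generate(goal)
    for _ in range(rounds):
        pool = rank(pool)
        pool = [evolve(h, reflect(h)) for h in pool[:2]]  # refine the top two
        pool += generate(goal, n=2)                       # add new candidates
    return rank(pool)[0]

print(supervisor("repurpose approved drugs for AML"))
```

In the real system, each stand-in would be a Gemini 2.0-backed agent, and the Meta-review agent would audit the whole loop for alignment with the research goal.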
This self-improvement is driven by an Elo auto-evaluation metric, which monitors the quality of the generated hypotheses and assesses whether more computational time improves the system’s performance.
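The exact evaluation formula has not been made public, but an Elo update of the standard form conveys how such a metric could score hypotheses through automated pairwise comparisons. Everything below, including the K-factor of 32, is an assumption for illustration.

```python
# Hedged sketch of an Elo-style auto-evaluation; the co-scientist's actual
# formula and parameters are not public, so standard chess Elo is assumed.
def expected_score(r_a, r_b):
    # Probability that hypothesis A beats hypothesis B in a comparison.
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a, r_b, a_won, k=32.0):
    # Adjust both ratings after one automated pairwise comparison.
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

r1, r2 = 1200.0, 1200.0              # both hypotheses start at the same rating
r1, r2 = update(r1, r2, a_won=True)  # hypothesis 1 wins a comparison
print(round(r1), round(r2))          # 1216 1184
```

Tracking whether these ratings keep climbing as more compute is spent is one way the system could tell if extra "thinking time" is actually paying off.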
In tests, the AI co-scientist demonstrated a strong capacity for producing novel and impactful research ideas. For instance, in the field of drug repurposing, it suggested candidates for treating acute myeloid leukemia (AML).
These suggestions were subsequently validated through experimental studies, confirming the potential efficacy of the proposed drugs.
Similarly, in the area of liver fibrosis, the AI co-scientist identified epigenetic targets with significant therapeutic potential, supporting experimental validation in human liver organoids.
Alongside these potential benefits, however, a recent survey reveals several challenges surrounding AI adoption in research.
Despite the growing interest in AI tools, only 45% of the nearly 5,000 researchers surveyed are currently using AI in their work, primarily for tasks like translation and proofreading.
Concerns about AI’s accuracy, bias, and privacy risks are widespread, with 81% of respondents expressing unease. Furthermore, nearly two-thirds of participants cited inadequate training as a significant barrier to effective AI adoption.
Researchers also remain cautious about AI’s ability to handle more complex tasks, such as identifying gaps in literature or recommending peer reviewers.
As AI tools like ChatGPT become more integrated into research workflows, challenges surrounding their use, particularly in citation accuracy, are emerging.
For example, a recent study underscores the risks posed by generative AI tools, which frequently misattribute or fabricate citations. Of the 200 articles tested, 153 contained incorrect or partial citations.
This issue raises concerns for researchers relying on AI for manuscript preparation and peer review, as inaccurate sourcing can diminish the trust placed in these tools. Publishers are particularly vulnerable, as misattributions may harm their reputations and undermine the credibility of their work.
These challenges underscore the need for clearer guidelines and structured training to ensure the responsible use of AI in academia, as researchers seek to balance enthusiasm with caution in adopting this technology.
Nevertheless, the AI co-scientist represents a significant step forward in augmenting scientific discovery, leveraging AI to assist researchers in exploring new hypotheses, validating them, and accelerating progress across diverse fields.
The system is currently available for evaluation through a Trusted Tester Program, inviting research organizations to assess its applicability and effectiveness in real-world settings.

Photo by dole777 on Unsplash
Report Reveals Australian Children Easily Bypass Social Media Age Restrictions
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
The Australian online safety regulator eSafety published a report on Thursday revealing that children can easily bypass the age-verification systems imposed by social media platforms.
In a Rush? Here are the Quick Facts!
- eSafety’s report revealed that children can easily bypass the age-verification systems used by most social media platforms.
- 80% of children aged 8 to 15 used at least one social media platform in 2024.
- The study considered the most popular social media platforms and suggested that the number of children using these services is higher than reported by the companies.
Australia approved a social media ban for children under 16 in November 2024, championed by Prime Minister Anthony Albanese, forcing tech companies to comply or face fines of up to $32 million. The ban takes effect by the end of this year, and social media platforms have been in a trial period since January.
The new report revealed that most platforms relied only on a self-declaration of age, which children can easily fake.
In 2024, researchers interviewed young Australians aged 8 to 15 across the country; 80% used one or more social media services and are likely to continue doing so.
The watchdog examined eight of the most popular platforms: Facebook, Instagram, TikTok, Snapchat, Twitch, Discord, Reddit, and YouTube, the only one of the eight approved to show content to children.
Most platforms, with the exception of Reddit, imposed simple minimum-age requirements for creating new accounts, enforced only through self-declaration. The report suggests that the true number of young users could be even higher than the figures the companies report.
eSafety said it is in discussions with stakeholders about next steps to develop more effective age-verification systems. “This report shows that there will be a tremendous amount of work to be done between now and December,” said eSafety Commissioner Julie Inman Grant.
Australia’s decision to ban social media for under-16s has sparked debate and concern around the world. While the government has said it is working on age-verification systems, the companies providing social media services will be the ones held liable for breaches of the law.