
Photo by Markus Spiske on Unsplash

DeepSeek to Open Source AI Model Code, Affirming Commitment to Transparency

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

The Chinese AI company DeepSeek announced on Friday that it will make its AI model code public as part of its commitment to an open-source approach to artificial intelligence.

In a Rush? Here are the Quick Facts!

  • Starting next week, DeepSeek will begin sharing the code for its AI models as part of its #OpenSourceWeek initiative.
  • DeepSeek explained that the initiative is part of its commitment to the open-source community and its principle of “full transparency.”
  • Many users on X shared their support and excitement for the startup’s announcement.

The startup explained in a post on the social media platform X that next week it will launch a new initiative, #OpenSourceWeek, in which it will share its AI achievements with “full transparency.”

Starting Monday, DeepSeek will begin sharing five code repositories containing parts of its software for anyone to view and collaborate on. “These humble building blocks in our online service have been documented, deployed, and battle-tested in production,” states the post.

Since the release of its open-source model DeepSeek-V3, the company has gained popularity in the U.S. and worldwide, competing against frontier AI models in the industry, including ChatGPT.

“As part of the open-source community, we believe that every line shared becomes collective momentum that accelerates the journey,” wrote the company in a post.

🚀 Day 0: Warming up for #OpenSourceWeek ! We’re a tiny team @deepseek_ai exploring AGI. Starting next week, we’ll be open-sourcing 5 repos, sharing our small but sincere progress with full transparency. These humble building blocks in our online service have been documented,… — DeepSeek (@deepseek_ai) February 21, 2025

Many users on the social media platform shared their support and excitement for the news. “So cool to see you all build in public and share artefacts as you continue pushing the frontier forward!” wrote one user. “Real builders don’t hoard, they share. This is how breakthroughs happen!” added another.

The announcement comes as the company is under investigation by the United States and tech giants Microsoft and OpenAI, and faces blocks in South Korea and Italy over privacy concerns. A separate study recently revealed that the AI model DeepSeek-R1 presents significant security risks for enterprise use.


Image by National Cancer Institute, from Unsplash

Google Launches ‘AI Co-Scientist’ To Accelerate Discovery And Innovation

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Researchers at Google have introduced a new AI system, known as the AI co-scientist, built on the Gemini 2.0 platform.

In a Rush? Here are the Quick Facts!

  • The AI system features specialized agents for generating, ranking, and refining research ideas.
  • The AI co-scientist uses a coalition of specialized agents for different research functions.
  • It demonstrated promising results, such as suggesting potential drug treatments for leukemia.

This system aims to enhance scientific and biomedical research by functioning as a virtual collaborator for scientists.

The AI co-scientist is designed to generate novel hypotheses, propose research directions, and support long-term scientific planning, helping to accelerate discovery processes in a variety of fields, including drug repurposing, treatment target identification, and antimicrobial resistance.

The system’s core innovation lies in its multi-agent architecture. Rather than relying on a single AI model, the AI co-scientist utilizes a coalition of specialized agents, each tasked with a specific function.

These agents are inspired by the scientific method and work together to generate, refine, and evaluate hypotheses. For example, the “Generation” agent proposes new research ideas, while the “Ranking” agent compares and ranks these ideas based on their potential impact.

The system’s “Evolution” and “Reflection” agents iteratively improve the quality of hypotheses by analyzing feedback, while the “Meta-review” agent oversees the overall process, ensuring alignment with the research goal.

This collaborative approach allows the system to continuously refine its outputs. By parsing a given research goal into manageable tasks, the Supervisor agent manages the system’s workflow, allocating resources and ensuring that each specialized agent performs its role.

As a result, the AI co-scientist adapts its approach over time, improving the quality and novelty of its suggestions.
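Google has not released the AI co-scientist’s source code, but the division of labour described above can be pictured with a minimal Python sketch. Every class, function, and agent name below is a hypothetical stand-in for the roles the researchers describe; a real system would call a language model where this sketch uses simple stubs.

```python
# Illustrative sketch only: all agent functions are hypothetical stand-ins for
# the roles described in the article (Generation, Reflection, Ranking,
# Evolution, Meta-review, coordinated by a Supervisor).
import random

def generation_agent(goal):
    # Propose candidate hypotheses for the research goal (stubbed here).
    return [f"{goal}: hypothesis {i}" for i in range(4)]

def reflection_agent(hypothesis):
    # Critique a hypothesis and return a revised version.
    return hypothesis + " (revised after critique)"

def ranking_agent(hypotheses):
    # Order hypotheses by estimated impact; a real system would run
    # tournament-style comparisons, here we shuffle with random scores.
    return sorted(hypotheses, key=lambda _: random.random(), reverse=True)

def evolution_agent(top_hypotheses):
    # Combine and refine the strongest ideas into new variants.
    return [h + " + refinement" for h in top_hypotheses]

def meta_review_agent(goal, hypotheses):
    # Check that surviving hypotheses still address the research goal.
    return [h for h in hypotheses if goal in h]

def supervisor(goal, rounds=2):
    # The Supervisor parses the goal into tasks and runs the agents in turn,
    # feeding each round's output back into the next iteration.
    hypotheses = generation_agent(goal)
    for _ in range(rounds):
        hypotheses = [reflection_agent(h) for h in hypotheses]
        hypotheses = ranking_agent(hypotheses)
        hypotheses = evolution_agent(hypotheses[:2])  # keep the top two
        hypotheses = meta_review_agent(goal, hypotheses)
    return hypotheses

print(supervisor("AML drug repurposing"))
```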

This self-improvement is driven by an Elo auto-evaluation metric, which monitors the quality of the generated hypotheses and assesses whether more computational time improves the system’s performance.
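Google has not published the exact evaluation rule, but a standard Elo update over pairwise hypothesis comparisons gives a sense of how such an auto-evaluation metric could work. The function below is the generic Elo formula, not the co-scientist’s published method.

```python
# Generic Elo-style update for a pairwise hypothesis comparison; the specific
# rule used by the AI co-scientist is not public, so this is an assumption.
def elo_update(rating_a, rating_b, a_wins, k=32):
    """Return updated ratings for hypotheses A and B after one comparison."""
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    score_a = 1.0 if a_wins else 0.0
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Example: hypothesis A (rated 1200) wins a comparison against B (rated 1000);
# A gains a small amount and B loses the same amount.
print(elo_update(1200, 1000, a_wins=True))
```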

In tests, the AI co-scientist demonstrated a strong capacity for producing novel and impactful research ideas. For instance, in the field of drug repurposing, it suggested candidates for treating acute myeloid leukemia (AML).

These suggestions were subsequently validated through experimental studies, confirming the potential efficacy of the proposed drugs.

Similarly, in the area of liver fibrosis, the AI co-scientist identified epigenetic targets with significant therapeutic potential, supporting experimental validation in human liver organoids.

Alongside these potential benefits, however, a recent survey reveals several challenges surrounding AI adoption in research.

Despite the growing interest in AI tools, only 45% of the nearly 5,000 researchers surveyed are currently using AI in their work, primarily for tasks like translation and proofreading.

Concerns about AI’s accuracy, bias, and privacy risks are widespread, with 81% of respondents expressing unease. Furthermore, nearly two-thirds of participants cited inadequate training as a significant barrier to effective AI adoption.

Researchers also remain cautious about AI’s ability to handle more complex tasks, such as identifying gaps in literature or recommending peer reviewers.

As AI tools like ChatGPT become more integrated into research workflows, challenges surrounding their use, particularly in citation accuracy, are emerging.

For example, a recent study underscores the risks posed by generative AI tools, which frequently misattribute or fabricate citations. Of the 200 articles tested, 153 contained incorrect or partial citations.

This issue raises concerns for researchers relying on AI for manuscript preparation and peer review, as inaccurate sourcing can diminish the trust placed in these tools. Publishers are particularly vulnerable, as misattributions may harm their reputations and undermine the credibility of their work.

These challenges underscore the need for clearer guidelines and structured training to ensure the responsible use of AI in academia, as researchers seek to balance enthusiasm with caution in adopting this technology.

Nevertheless, the AI co-scientist represents a significant step forward in augmenting scientific discovery, leveraging AI to assist researchers in exploring new hypotheses, validating them, and accelerating progress across diverse fields.

The system is currently available for evaluation through a Trusted Tester Program, inviting research organizations to assess its applicability and effectiveness in real-world settings.