Alibaba Launches Qwen3-Coder, An Open-Source AI Coding Model

Photo by Mohammad Rahmani on Unsplash

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

Alibaba Group launched its latest open-source AI model, Qwen3-Coder, on Wednesday. The system is designed for software development and is promoted as the company’s “most powerful open agentic code model to date.”

In a rush? Here are the quick facts:

  • Alibaba Group introduced its latest open-source AI model: Qwen3-Coder.
  • The AI model has been designed for software development and promoted as its “most powerful open agentic code model to date.”
  • The Chinese giant said the model’s features and agentic capabilities outperform similar models from competitors such as DeepSeek, OpenAI, and Anthropic.

According to the official announcement, Qwen3-Coder features agentic capabilities and comes in various parameter sizes to suit different needs. The model is also compatible with other advanced AI systems, such as Anthropic’s chatbot Claude.

“Today, we’re announcing Qwen3-Coder, our most agentic code model to date,” states the document. “Qwen3-Coder is available in multiple sizes, but we’re excited to introduce its most powerful variant first: Qwen3-Coder-480B-A35B-Instruct — a 480B-parameter Mixture-of-Experts model with 35B active parameters which supports the context length of 256K tokens natively and 1M tokens with extrapolation methods, offering exceptional performance in both coding and agentic tasks.”

Alibaba emphasized that this open-source model is “comparable to Claude Sonnet 4,” one of Anthropic’s latest AI models launched in May.

The Chinese giant has also announced Qwen Code, a new command-line tool for agentic coding forked from Gemini Code. The new tool includes function-calling protocols and customized prompts to “unleash” Qwen3-Coder’s agentic capabilities.

Qwen3-Coder reportedly outperforms similar models from competitors, including DeepSeek-R1-0528, Devstral-Small-2507, Gemini-2.5-Pro-Preview-0506, and GPT-4.1.

“Performance of Qwen3-Coder-480B-A35B-Instruct on SWE-bench Verified!” — Qwen (@Alibaba_Qwen) July 22, 2025

In a series of posts on the social media platform X, Alibaba demonstrated the AI model’s ability to create digital products such as a “Flappy Bird” game, a 3D Earth terrain visualization, and animated weather cards.

The coding model arrives at a moment when competitors are offering similar products at higher prices. Qwen3-Coder’s open-source nature and compatibility position it as a strong, affordable contender in the AI coding space.

Earlier this year, Alibaba released its advanced reasoning model Qwen 2.5-Max, shortly after DeepSeek’s reasoning model reached first place on Apple’s App Store in the U.S.

FDA’s Elsa Tool Faces Criticism For Hallucinating Scientific Data

Image by Myriam Zilles, from Unsplash

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The FDA’s new AI tool Elsa promises faster drug approvals, but medical experts warn that the tool generates fabricated research, creating additional safety risks.

In a rush? Here are the quick facts:

  • The FDA launched an AI tool named Elsa to aid drug approvals.
  • Elsa sometimes invents studies or misstates existing research.
  • Staff say Elsa wastes time due to fact-checking and hallucinations.

In June, the FDA launched Elsa, its new artificial intelligence tool, to accelerate drug approval procedures. FDA Commissioner Dr. Marty Makary declared that the system had been completed ahead of schedule and under budget.

However, FDA staff members recently told CNN that Elsa requires further development before it can be used in practical applications.

Elsa is supposed to help FDA scientists by summarizing data and streamlining the review process. However, CNN notes that current and former FDA employees report that Elsa hallucinates and generates false information. Indeed, the tool reportedly fabricates new studies or distorts existing ones, which makes it risky to use in serious scientific work.

“Anything that you don’t have time to double-check is unreliable. It hallucinates confidently,” said one FDA employee to CNN. Another added, “AI is supposed to save our time, but I guarantee you that I waste a lot of extra time just due to the heightened vigilance that I have to have.”

CNN notes that currently, Elsa isn’t used for drug or device reviews because it can’t access important documents like company submissions. The FDA’s head of AI, Jeremy Walsh, acknowledged the issue: “Elsa is no different from lots of [large language models] and generative AI […] They could potentially hallucinate,” as reported by CNN.

FDA officials say Elsa is mostly being used for organizing tasks, like summarizing meeting notes. It has a simple interface that invites users to “Ask Elsa anything.”

Staff are not required to use the tool. “They don’t have to use Elsa if they don’t find it to have value,” said Makary to CNN.

Still, with no federal regulations in place for AI in medicine, experts warn it’s a risky path. “It’s really kind of the Wild West right now,” said Dr. Jonathan Chen of Stanford University to CNN.

Adoption of AI in science is growing rapidly, with over half of researchers saying AI already outperforms humans in tasks like summarizing and plagiarism checks.

However, significant challenges remain. A survey of 5,000 researchers found 81% worry about AI’s accuracy, bias, and privacy risks. Many see the lack of guidance and training as a major barrier to safe AI use.

Experts emphasize the urgent need for clearer AI ethics and education to avoid misuse. While AI shows promise, researchers agree that human oversight is still crucial to maintain scientific integrity.