
Photo by John Schnobrich on Unsplash
Alibaba Releases New Qwen AI Model And Claims It Outperforms DeepSeek-V3
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
The Chinese tech giant Alibaba released the latest version of its flagship AI model, Qwen, on Wednesday, claiming it performs better than the popular DeepSeek-V3.
In a Rush? Here are the Quick Facts!
- Alibaba released its latest model, Qwen2.5-Max, on Wednesday.
- The Chinese giant claims it outperforms popular models like DeepSeek-V3, GPT-4o, and Llama-3.1-405B.
- The company also launched Qwen2.5-VL this week, an AI model capable of processing images and acting as an AI agent, using computers and mobile devices to perform tasks.
According to Reuters, Alibaba launched the new model, which it has named Qwen2.5-Max, during China's Lunar New Year holiday, joining the wave of AI releases of the past few days and adding domestic competition.
On Monday, DeepSeek reached first place on Apple's App Store in the United States, surpassing ChatGPT, unsettling other companies in the AI industry and alarming investors: Nvidia shares dropped 17% in a single day.
Now, Alibaba has announced the latest versions of its Qwen model (it released 100 open-source AI models in the Qwen suite in September last year), promising better results than popular frontier models.
“Qwen 2.5-Max outperforms (…) almost across the board GPT-4o, DeepSeek-V3 and Llama-3.1-405B,” wrote the company on its official WeChat account.
Qwen2.5-Max's API is available through Alibaba Cloud, and users can also test the model on its chat page.
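Alibaba Cloud documents an OpenAI-compatible interface for Qwen models, so calling Qwen2.5-Max looks much like calling any chat-completions API. The sketch below only builds the JSON body such a request would carry; the base URL and the model identifier `qwen-max` are assumptions to verify against the current Alibaba Cloud documentation before use.

```python
import json

# Assumed OpenAI-compatible endpoint from Alibaba Cloud Model Studio docs;
# check the current documentation, as URLs and model names may change.
BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"

def build_request(prompt: str, model: str = "qwen-max") -> dict:
    """Build the JSON body for a POST to {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

body = build_request("Summarize the Qwen2.5-Max announcement in one sentence.")
print(json.dumps(body, indent=2))
```

An API key issued by Alibaba Cloud would be sent in the `Authorization` header of the actual HTTP request.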
“We are developing Qwen2.5-Max, a large-scale MoE model that has been pretrained on over 20 trillion tokens and further post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) methodologies,” wrote the Qwen team on GitHub.
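The Mixture-of-Experts (MoE) design the Qwen team describes can be illustrated with a toy sketch: a small gating network scores each expert for a given token, and only the top-k experts are run and mixed. All dimensions and weights below are arbitrary random stand-ins, not anything from Qwen2.5-Max itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy MoE routing: tiny, arbitrary dimensions. A production model has
# thousands of dimensions, many experts, and learned (not random) weights.
d_model, n_experts, top_k = 8, 4, 2

gate_w = rng.standard_normal((d_model, n_experts))            # gating network
experts = rng.standard_normal((n_experts, d_model, d_model))  # one linear "expert" each

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Route a token vector x to its top-k experts and mix their outputs."""
    logits = x @ gate_w
    top = np.argsort(logits)[-top_k:]                          # indices of top-k experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over top-k
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
out = moe_layer(token)
assert out.shape == (d_model,)
```

The appeal of the design is that each token activates only a fraction of the model's parameters, which is one way labs scale total parameter counts without scaling per-token compute proportionally.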
The Chinese giant also released Qwen2.5-VL on Monday, a series of multimodal AI models that can process images and operate computers and mobile devices to perform tasks. OpenAI recently announced a similar feature, Operator, which allows ChatGPT to perform tasks autonomously by taking control of the user's computer.
According to Alibaba’s team, all Qwen models outperform similar versions from OpenAI, Microsoft, Google, Meta, and DeepSeek.

Image by Lorenzo Herrera, from Unsplash
Dancing With AI: MIT Students Experiment With Interactive Intelligence
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Students from MIT’s Interaction Intelligence course (4.043/4.044) presented a series of projects at the 38th annual Neural Information Processing Systems (NeurIPS) conference in December 2024, exploring new ways AI can shape creativity, education, and human interaction.
In a Rush? Here are the Quick Facts!
- Be the Beat generates music from dance movements using AI and PoseNet.
- A Mystery for You teaches fact-checking through an AI-powered, cartridge-based game.
- Memorscope creates AI-generated shared memories from face-to-face interactions.
The conference, one of the most recognized in artificial intelligence and machine learning research, brought together over 16,000 attendees in Vancouver, as reported in MIT's press release.
Under the guidance of Professor Marcelo Coelho from MIT’s Department of Architecture, the students developed interactive AI-driven projects that examine the evolving role of AI in everyday experiences.
One of the projects, Be the Beat, developed by Ethan Chang and Zhixing Chen, integrates AI into dance by generating music that adapts to a dancer’s movements.
Using PoseNet to analyze motion and a language model to interpret style and tempo, the system shifts the relationship between dance and music, allowing movement to shape sound rather than the other way around. Participants described it as an alternative approach to choreographing and discovering new dance styles.
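PoseNet outputs 17 (x, y) body keypoints per video frame. As a hypothetical sketch of one step a system like Be the Beat might perform, the code below turns a sequence of such keypoints into a movement-energy signal and a rough tempo estimate; the function names, frame rate, and peak-counting heuristic are illustrative assumptions, not details from the actual project.

```python
import numpy as np

FPS = 30  # assumed camera frame rate

def movement_energy(frames: np.ndarray) -> np.ndarray:
    """Mean per-frame keypoint displacement; frames has shape (T, 17, 2)."""
    deltas = np.diff(frames, axis=0)                    # (T-1, 17, 2)
    return np.linalg.norm(deltas, axis=2).mean(axis=1)  # (T-1,)

def estimate_tempo_bpm(energy: np.ndarray) -> float:
    """Count rising edges of above-average energy and convert to BPM."""
    above = energy > energy.mean()
    beats = np.count_nonzero(above[1:] & ~above[:-1])   # rising edges only
    seconds = len(energy) / FPS
    return 60.0 * beats / seconds

# Simulated dancer: all keypoints oscillating vertically at 2 Hz
t = np.arange(0, 4, 1 / FPS)
frames = np.zeros((len(t), 17, 2))
frames[:, :, 1] = np.sin(2 * np.pi * 2 * t)[:, None]
print(f"estimated tempo: {estimate_tempo_bpm(movement_energy(frames)):.0f} BPM")
```

In the actual installation, a signal along these lines would be only the first stage; per the article, a language model then interprets style and tempo before music is generated.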
Another project, A Mystery for You, by Mrinalini Singha and Haoheng Tang, is an educational game designed to develop fact-checking skills in young learners. The game presents AI-generated news alerts that players must investigate using a tangible cartridge-based interface. By eliminating touchscreen interactions, the design encourages a slower, more deliberate engagement with information, contrasting with the rapid consumption of digital news.
Keunwook Kim's Memorscope examines memory and human interaction through AI. The device allows two people to look at each other through opposite ends of a tube-like structure, with AI generating a collective memory based on their shared perspective.
By incorporating models from OpenAI and Midjourney, the system produces evolving interpretations of these interactions, reframing how memories are recorded and experienced.
Narratron, by Xiying (Aria) Bao and Yubo Zhao, introduces AI into traditional storytelling through an interactive projector. The system interprets hand shadows as characters and generates a real-time narrative, combining visual and auditory elements to engage users in an AI-assisted form of shadow play.
Karyn Nakamura’s Perfect Syntax explores AI’s role in video editing and motion analysis. The project uses machine learning to manipulate and reconstruct video fragments, questioning how technology interprets movement and time.
By examining the relationship between computational processes and human perception, the work reflects on the ways AI reshapes visual media.
Together, these projects examine AI’s potential beyond automation, focusing on its role in shaping artistic expression, critical thinking, and shared experiences.