
Image by leighklotz on Flickr
Alibaba Releases Over 100 New Open-Source AI Models and an AI Video Tool
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Alibaba released its new Qwen 2.5 open-source models this Thursday
- The new models can be used in the gaming, scientific research, and automotive sectors
- The tech giant also announced a new AI video tool as part of its Tongyi Wanxiang AI image-generation program
The Chinese tech giant Alibaba released over 100 new open-source AI models and a text-to-video AI tool this Thursday. According to Reuters, the company is stepping up its work on new AI technologies to compete in the AI market.
The new AI models belong to the Qwen family, the main line of open-source AI models developed by Alibaba’s research division and launched in May this year. The latest versions, Qwen 2.5, range from 0.5 billion to 72 billion parameters and support more than 29 languages.
These new models can power AI applications across sectors including gaming, automotive, and scientific research, and are openly accessible to developers around the globe. According to CNBC, the LLMs have advanced math and coding capabilities and can generate text and images from prompts.
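Because the Qwen 2.5 weights are released as open-source models, they can be tried with standard open-source tooling. The snippet below is a minimal sketch using the Hugging Face transformers library; the repository id Qwen/Qwen2.5-0.5B-Instruct is an assumption about how the smallest instruction-tuned variant is published, and the larger variants would follow the same pattern.

```python
# Minimal sketch: loading an open-source Qwen 2.5 model with Hugging Face transformers.
# The repo id below is an assumption about how the smallest instruct variant is published.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed Hub id; larger variants follow the same pattern
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Build a chat-style prompt and generate a short completion.
messages = [{"role": "user", "content": "Summarize what an open-source LLM is in one sentence."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```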
The Chinese company also said that the newest version of its flagship model, Qwen-Max (which is not open source), outperforms models from Meta and OpenAI in language comprehension, reasoning, and other areas.
As part of its Tongyi Wanxiang image-generation lineup, Alibaba also launched a new text-to-video model, an AI tool that creates videos from prompts, much like OpenAI’s Sora. More companies have recently entered the AI video space: just a few days ago, Adobe announced the development of a similar tool, the Firefly Video Model, and TikTok’s parent company ByteDance launched its AI video tool, Jimeng AI, last month.
“Alibaba Cloud is investing, with unprecedented intensity, in the research and development of AI technology and the building of its global infrastructure,” said Alibaba’s CEO Eddie Wu.

Image by Tumisu, from Pixabay
Researchers Introduce Blockchain Framework To Democratize Deep Reinforcement Learning
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Crowdsourced DRL framework enhances accessibility and training.
- Blockchain ensures transparency, security, and traceability.
- Decentralization lowers costs and democratizes DRL.
A team of researchers led by Concordia University yesterday announced a novel blockchain-based framework designed to make deep reinforcement learning (DRL) more accessible.
DRL, a branch of AI that combines deep learning and reinforcement learning, has proven valuable in industries such as gaming, robotics, healthcare, and finance. However, due to its complexity, it remains out of reach for many small businesses and individuals.
To bridge this gap, the researchers developed a crowdsourced DRL as a Service (DRLaaS) framework, which allows users to access DRL-related services, including model training and sharing.
This new framework enables users to tap into the expertise and computational capabilities of workers who can train DRL models on their behalf. Furthermore, users can benefit from pre-trained models shared by workers, which can then be customized through knowledge transfer methods.
Built on a Consortium Blockchain, this framework ensures transparency and traceability in the execution of tasks. The system employs smart contracts to manage task allocation, and models are stored using the InterPlanetary File System (IPFS) to maintain data integrity.
By using blockchain technology, the framework addresses concerns related to server failures and data tampering.
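To make the storage pattern concrete, here is a minimal, hypothetical Python sketch of how a worker might pin a trained model to IPFS and record its content hash through a smart contract. It is not the authors' code: the submitModel contract function, its address, and its ABI are placeholders assumed purely for illustration.

```python
# Hypothetical sketch of the IPFS + smart-contract storage pattern described above.
# NOT the authors' implementation; contract address, ABI, and submitModel() are placeholders.
import ipfshttpclient
from web3 import Web3

CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder consortium-chain address
CONTRACT_ABI = [...]  # placeholder ABI for the assumed DRLaaS task contract

# 1. Pin the trained DRL model artifact to IPFS; the returned CID is a content-addressed hash.
ipfs = ipfshttpclient.connect()              # requires a local IPFS daemon
cid = ipfs.add("trained_policy.pt")["Hash"]

# 2. Record the CID on the consortium blockchain so the task result stays traceable.
w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))   # assumed node endpoint
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)
tx_hash = contract.functions.submitModel(cid).transact({"from": w3.eth.accounts[0]})
print(f"Model pinned at ipfs://{cid}, recorded in transaction {tx_hash.hex()}")
```

Because IPFS is content-addressed, any change to a stored model changes its CID, which is what makes tampering with shared models detectable in a setup like this.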
According to lead author Ahmed Alagha, the crowdsourcing aspect enhances accessibility, allowing more people to participate in developing DRL solutions.
The authors claim that the system’s decentralization also reduces the risk of catastrophic failures and lowers the costs associated with training DRL models.
The authors state that by distributing computational efforts across multiple machines, the system offers resilience against server crashes or cyberattacks, a significant improvement over traditional centralized systems.
Co-author Jamal Bentahar, Alagha’s thesis supervisor, emphasized that this service democratizes access to DRL solutions.
“To train a DRL model, you need computational resources that are not available to everyone. You also need expertise. This framework offers both,” Bentahar noted.
The full details of this framework can be found in their research paper, published in Information Sciences, which highlights the framework’s design and its potential applications.