
Meta Launches Language Technology Partner Program to Advance AI Translation
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Meta announced today its Language Technology Partner Program in collaboration with UNESCO, aimed at gathering audio and text from a variety of languages to enhance and develop AI translation models.
In a Rush? Here are the Quick Facts!
- Meta, in collaboration with UNESCO, launched the Language Technology Partner Program to enhance AI translation models.
- The initiative focuses on underserved languages, and organizations worldwide are invited to join.
- Partners are expected to provide over 10 hours of transcribed audio recordings and long written texts in the language to train AI models.
The Language Technology Partner Program focuses on under-resourced languages in support of UNESCO’s global initiative, the International Decade of Indigenous Languages, created to raise awareness of and protect Indigenous languages worldwide.
The tech giant also shared an open invitation for more people and organizations to join the program. Meta expects new partners to contribute speech recordings with transcriptions totaling more than 10 hours, along with large written texts of more than 200 sentences.
“Our work with UNESCO to expand the support of underserved languages in AI models is an essential part of this effort,” states the announcement shared by Meta. “Developing models that are able to work on multilingual problems and in underserved languages not only promotes linguistic diversity and inclusivity in the digital world, but also helps us create intelligent systems that can adapt to new situations and learn from experience.”
In exchange, partners who join the program will receive technical workshops from Meta’s research teams. The government of Nunavut, a territory in northern Canada, has already agreed to collaborate by providing data in the Inuit languages Inuktitut and Inuinnaqtun.
Meta also announced an open-source machine translation benchmark, allowing other companies and AI developers to test their translation models.
Spain recently announced an open-source AI model called Alia, which has been trained on Castilian and the country’s co-official languages: Catalan, Basque, Galician, and Valencian.

U.S. Researchers Build Advanced Reasoning Model For Less Than $50
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
AI researchers from the University of Washington and Stanford trained an AI reasoning model, called s1, for less than $50 in cloud computing credits. The team released a paper, titled s1: Simple test-time scaling, with more details of their methodology this Monday.
In a Rush? Here are the Quick Facts!
- AI researchers from the University of Washington and Stanford trained an AI reasoning model for less than $50 and shared their research this Monday.
- They used distillation, a test-time scaling approach, and supervised fine-tuning with a 1,000-question dataset.
- The model s1 performs similarly to DeepSeek R1 and OpenAI o1.
According to TechCrunch, the new model performs similarly to advanced models like DeepSeek’s R1 and OpenAI’s o1, and is available on GitHub.
To develop the AI model, the researchers applied a process known as distillation, in which a larger AI model provides training data to a smaller one, drawing reasoning capabilities from Google’s Gemini 2.0 Flash Thinking Experimental.
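As a rough illustration of the idea, the sketch below shows how distillation data might be gathered: a stronger "teacher" model is prompted with questions, and its reasoning traces are saved as training examples for a smaller "student" model. The teacher_answer function and the record format are hypothetical placeholders, not the pipeline the s1 authors actually used.

```python
# Minimal sketch of collecting distillation data from a teacher model.
# teacher_answer() is a hypothetical stand-in for calling a stronger,
# API-served reasoning model; it is not the s1 authors' code.
import json

def teacher_answer(question: str) -> dict:
    # Placeholder: in practice this would call the teacher model's API
    # and return its chain-of-thought plus final answer.
    return {
        "reasoning": f"Step-by-step reasoning for: {question}",
        "answer": "42",
    }

def build_distillation_set(questions: list[str], path: str) -> None:
    """Save (question, reasoning, answer) triples for later fine-tuning."""
    with open(path, "w", encoding="utf-8") as f:
        for q in questions:
            record = {"question": q, **teacher_answer(q)}
            f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    build_distillation_set(["What is 6 * 7?"], "distill_data.jsonl")
```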
This process is gaining popularity in the AI industry, as OpenAI claims that DeepSeek used it, without authorization, to develop its advanced reasoning model. Researchers from UC Berkeley’s Sky Computing Lab also recently managed to train a reasoning model for less than $450 with this technique, which is sparking debate in Silicon Valley and anger among large AI companies.
The researchers developing the s1 model also applied a “test-time scaling” approach, forcing the model to stop and reason more before providing an answer, and performed supervised fine-tuning on a pre-trained model to build their AI reasoning model.
“We develop budget forcing to control test-time compute by forcefully terminating the model’s thinking process or lengthening it by appending ‘Wait’ multiple times to the model’s generation when it tries to end,” states the paper. “This can lead the model to double-check its answer, often fixing incorrect reasoning.”
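A minimal sketch of what such budget forcing could look like at decoding time is shown below. The generate_step function and the END_OF_THINKING marker are hypothetical placeholders standing in for the model’s decoder and its end-of-thinking tokens; they are assumptions for illustration, not the paper’s implementation.

```python
# Minimal sketch of budget forcing: when the model tries to close its
# thinking section, append "Wait" and let it keep reasoning, up to a cap.
# generate_step() and END_OF_THINKING are hypothetical placeholders.
END_OF_THINKING = "</think>"

def generate_step(prompt: str) -> str:
    # Placeholder for one decoding pass of the underlying model.
    return prompt + " ...some reasoning... " + END_OF_THINKING

def budget_forced_generate(prompt: str, max_extensions: int = 2) -> str:
    text = generate_step(prompt)
    extensions = 0
    # Each time the model tries to stop thinking early, strip the end
    # marker, append "Wait", and let it continue generating.
    while text.endswith(END_OF_THINKING) and extensions < max_extensions:
        text = text[: -len(END_OF_THINKING)] + " Wait"
        text = generate_step(text)
        extensions += 1
    return text

if __name__ == "__main__":
    print(budget_forced_generate("Q: What is 17 * 24? Think step by step."))
```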
The experts used a dataset of 1,000 curated questions and answers to train their model in less than 30 minutes using Nvidia H100 GPUs, demonstrating that it’s possible to achieve advanced results with a small dataset by taking advantage of other technologies and AI models.
“Recent advances in reasoning, such as OpenAI’s o1 and DeepSeek’s R1, lack transparency, limiting broader research progress,” wrote the researchers. “Our work aims to push the frontier of reasoning in a fully open manner, fostering innovation and collaboration to accelerate advancements that ultimately benefit society.”