
Non-Profit Organization Ai2 Releases New LLM Competitive With Meta’s Llama
- Written by Andrea Miliani, Former Tech News Expert
The Allen Institute for Artificial Intelligence (Ai2), a nonprofit research organization, launched OLMo 2, the second generation of its open language model family, with capabilities competitive with leading models on the market such as Meta's Llama 3.1.
In a Rush? Here are the Quick Facts!
- Ai2 yesterday launched OLMo 2, an advanced open-source language model
- The organization describes it as “the best fully open language model to date”
- OLMo 2 competes with other open-source models like Meta’s Llama 3.1
Ai2, founded in 2014 by Microsoft co-founder Paul Allen, described the model as "the best fully open language model to date."
“We introduce OLMo 2, a new family of 7B and 13B models trained on up to 5T tokens,” wrote the organization in an announcement on its website. “These models are on par with or better than equivalently sized fully open models, and competitive with open-weight models such as Llama 3.1 on English academic benchmarks.”
OLMo 2 upgrades the models Ai2 released earlier in the year (the organization announced its first model, OLMo, in February), with a focus on improving training stability, pretraining, state-of-the-art post-training, and performance tracking through an evaluation framework.
The new model is currently available only in English, and a public online demo lets anyone test OLMo 2.
According to TechCrunch, OLMo 2 meets the criteria to be considered open-source AI, as its data and tools are publicly available and ready to be tested.
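As a rough sketch of what that openness means in practice, the released weights can be loaded with standard tooling. The snippet below is a minimal illustration, assuming the checkpoints are published on Hugging Face under a repository id like allenai/OLMo-2-1124-7B and that an installed transformers version supports the OLMo 2 architecture; both are assumptions, not details confirmed by this article.

```python
# Minimal sketch: loading the 7B weights with Hugging Face transformers.
# The model id is an assumption (Ai2 publishes checkpoints on Hugging Face);
# a recent transformers release with OLMo 2 support is also assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-2-1124-7B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation to confirm the model runs end to end.
inputs = tokenizer("Open language models are", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```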
Ai2 shared benchmark data showing that the new model can outperform other popular models of comparable size.
“We find that OLMo 2 7B and 13B are the best fully-open models to-date, often outperforming open-weight models of equivalent size,” states the document shared by the organization. “Not only do we observe a dramatic improvement in performance across all tasks compared to our earlier OLMo 0424 model but, notably, OLMo 2 7B outperforms Llama-3.1 8B and OLMo 2 13B outperforms Qwen 2.5 7B despite its lower total training FLOPs.”
Alibaba released the Qwen 2.5 models, which Ai2 used for comparison, in September.

Epileptic Cars? How Emergency Lights Confuse Automated Driving Systems
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Emergency lights can disrupt automated driving systems, causing detection failures. Researchers developed “Caracetamol” to fix this issue, highlighting broader AI safety concerns.
In a Rush? Here are the Quick Facts!
- Emergency lights can disrupt camera-based automated driving systems, causing object detection issues.
- The disruption is termed a “digital epileptic seizure” or “epilepticar” by researchers.
- Tests revealed flashing lights affect object detection, especially in darkness.
New research suggests that camera-based automated driving systems, designed to make driving safer, could fail to recognize objects on the road when exposed to flashing emergency lights, posing significant risks, as first reported by WIRED.
Researchers from Ben-Gurion University of the Negev and Fujitsu Limited discovered a phenomenon called a “digital epileptic seizure” or “epilepticar.”
As reported by WIRED, this issue causes systems to falter in identifying objects in sync with the flashes of emergency vehicle lights, particularly in darkness. This flaw could lead vehicles using such systems to misidentify or fail to detect cars or other obstacles, increasing the likelihood of accidents near emergency scenes.
The study was inspired by reports of Tesla vehicles with Autopilot colliding with stationary emergency vehicles between 2018 and 2021.
While the research does not specifically link the issue to Tesla’s system, the findings highlight potential vulnerabilities in camera-based object detection technology, a key component of many automated driving systems, notes WIRED.
The experiments used five commercial dashcams with automated driving features and ran their images through open-source object detectors.
The researchers note these systems may not reflect those used by automakers and acknowledge that many vehicles employ additional sensors like radar and lidar to enhance obstacle detection, as reported by WIRED.
The U.S. National Highway Traffic Safety Administration (NHTSA) has also acknowledged challenges with advanced driver assistance systems (ADAS) responding to emergency lights, says WIRED.
However, WIRED reports that the researchers emphasize they do not claim a direct connection between their findings and past Tesla crashes. To address the issue, the team developed a software solution called “Caracetamol,” which enhances object detectors’ ability to identify vehicles with flashing lights.
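Caracetamol's actual mechanism is not described here, but one generic countermeasure in this class is temporal smoothing of detection confidence, so an object whose per-frame score oscillates with a strobe is not repeatedly lost and re-acquired. The Python sketch below is a purely hypothetical illustration of that idea; the class, window size, and threshold are invented for the example and are not the researchers' code.

```python
# Hypothetical illustration only: NOT the researchers' "Caracetamol"
# implementation. It sketches one generic countermeasure to flicker-induced
# detection dropouts: averaging per-frame detection confidence over a short
# temporal window to damp strobe-like oscillations.
from collections import deque

class TemporalConfidenceSmoother:
    """Rolling average of raw per-frame confidences for one tracked object."""

    def __init__(self, window: int = 8):
        self.scores = deque(maxlen=window)

    def update(self, raw_confidence: float) -> float:
        self.scores.append(raw_confidence)
        return sum(self.scores) / len(self.scores)

# A detection flickering between 0.9 and 0.2 in sync with emergency lights
# stays above a 0.5 "object present" threshold once smoothed.
smoother = TemporalConfidenceSmoother(window=8)
flickering = [0.9, 0.2, 0.9, 0.2, 0.9, 0.2, 0.9, 0.2]
smoothed = [smoother.update(s) for s in flickering]
print([round(s, 2) for s in smoothed])  # trends toward ~0.55, never 0.2
```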
While experts like Earlence Fernandes from UC San Diego view the fix as promising, Bryan Reimer from MIT’s AgeLab warns of broader concerns.
He stresses the need for robust testing to address blind spots in AI-based driving systems, cautioning that some automakers may be advancing technology faster than they can validate it, as reported by WIRED.
The study underscores the complexities of ensuring safety in automated driving and calls for further research to mitigate such risks.