
Study Finds No Evidence Of Dangerous Emergent Abilities In Large Language Models
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A study announced yesterday by the University of Bath claims that large language models (LLMs) do not pose existential threats to humanity. The research asserts that these models cannot learn or acquire new skills independently, which keeps them controllable and safe.
The research team, led by Professor Iryna Gurevych, conducted over 1,000 experiments to test LLMs’ capacity for emergent abilities—tasks and knowledge not explicitly programmed into them. Their findings show that what are perceived as emergent abilities actually result from LLMs’ use of in-context learning, rather than any form of independent learning or reasoning.
The study indicates that while LLMs are proficient at processing language and following instructions, they lack the ability to master new skills without explicit guidance. This fundamental limitation means these models remain controllable, predictable, and inherently safe. Despite their growing sophistication, the researchers argue that LLMs are unlikely to develop complex reasoning abilities or undertake unexpected actions.
Dr. Harish Tayyar Madabushi, a co-author of the study, stated in the University of Bath announcement, “The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus.”
Dr. Tayyar Madabushi recommends focusing on actual risks, such as the potential misuse of LLMs for generating fake news or committing fraud. He cautions against enacting regulations based on speculative threats and urges users to clearly specify tasks for LLMs and provide detailed examples to ensure effective outcomes.
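That advice about providing detailed examples points at the same mechanism the study identifies: in-context learning, often called few-shot prompting. As a rough, hypothetical sketch (not from the study; the task, examples, and function name are illustrative assumptions), the snippet below shows what this looks like in practice: the "skill" is conveyed entirely by labeled examples inside the prompt, and the model's weights never change.

```python
# A minimal sketch of in-context (few-shot) learning: the model is never
# retrained; the task is demonstrated through worked examples in the prompt.
# (Illustrative only: the task and examples are assumptions, not from the study.)

def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a prompt that specifies a task through examples alone."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for review, label in examples:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # The unlabeled query goes last; the model completes the demonstrated pattern.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

examples = [
    ("The battery lasts all day and the screen is gorgeous.", "Positive"),
    ("It overheats constantly and support never replied.", "Negative"),
]
print(build_few_shot_prompt(examples, "Setup took two minutes and it just works."))
```

The resulting string could be sent to any LLM; the researchers' point is that the apparent "new ability" lives in the prompt's examples, not in anything the model has learned independently.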
Professor Gurevych noted in the announcement, “Our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence and that we can control the learning process of LLMs very well after all. Future research should therefore focus on other risks posed by the models, such as their potential to be used to generate fake news.”
The researchers acknowledge several limitations in their study. They tested a range of models, including T5, GPT, Falcon, and LLaMA, but could not match parameter counts exactly because the models were released at different sizes. They also considered the risk of data leakage, in which information from the training data unintentionally influences results; while they assume any leakage does not exceed what has been reported for the specific models, it could still affect performance.

Waymo Deploys Driverless Robotaxis On San Francisco Freeways
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
The American autonomous driving company Waymo will begin testing its self-driving vehicles in the San Francisco Bay Area this week.
According to TechCrunch, Waymo, formerly the Google self-driving car project, received approval from California regulators back in March.
The company is now allowed to deploy its robotaxis in the area and charge users for rides on Los Angeles and San Francisco freeways, though the first rides are typically reserved for testing by the company’s own employees.
Just days after the debut, people have already begun complaining about honking. According to The Verge, Sophia Tung, a software engineer who has been livestreaming a San Francisco parking lot that Waymo uses for its vehicles, caught the self-driving cars honking at 4:00 a.m. as they parked during downtime.
In other situations, the honking appears necessary, and users are rather impressed. One user shared on Twitter that a self-driving vehicle noticed another car parallel parking and honked while it backed up. Saswat Panigrahi, the company’s chief product officer (CPO), explained that it is a feature users notice often.
“The Waymo Driver does indeed honk when necessary! Here’s an example where a garbage truck in SF began reversing towards our vehicle. The Driver automatically honked and reversed to make way for the truck before moving on,” Panigrahi wrote in an August 7 post, sharing a video of the maneuver.
Waymo currently provides robotaxi services in Phoenix, San Francisco, Los Angeles, and Austin. According to the San Francisco Chronicle, the company expanded its presence in San Francisco and Los Angeles in May, reaching more than 200,000 people, its largest expansion since it began offering the service there in the fall of last year.
The news comes just days after drivers in China expressed concerns about the country’s robotaxi boom. China’s deployment of self-driving vehicles has been more aggressive than in the U.S. and is growing faster.