
Photo by Mariia Shalabaieva on Unsplash
Lyft And Baidu To Launch Robotaxi Service In Europe
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
The American ridesharing company Lyft and the Chinese company Baidu announced on Monday a new partnership to launch a robotaxi service across Europe. The firms plan to deploy Baidu’s autonomous vehicles, Apollo Go, in 2026, beginning in Germany and the United Kingdom.
In a rush? Here are the quick facts:
- Lyft and Baidu announced a new partnership to launch robotaxi services in Europe.
- The new service will begin to roll out in the UK and Germany next year, pending regulatory approval.
- Baidu will provide Apollo Go’s fully-electric RT6 vehicles, and Lyft will manage operations and technical support.
According to the press release, the initiative is still subject to regulatory approval. However, Lyft and Baidu expect to begin deploying Baidu Apollo Go’s sixth-generation vehicles next year, aiming to scale to “thousands” of vehicles in the following years.
“By integrating Baidu’s cutting-edge autonomous driving technology with Lyft’s platform reach and operational expertise, we’re excited to deliver safer, greener, and more efficient mobility solutions to more users,” said Robin Li, Co-founder, Chairman, and CEO of Baidu.
Apollo Go’s fully electric RT6 vehicles will be accessible to users through Lyft’s digital platform. The robotaxis will be powered by Baidu’s AI technology, the Autonomous Driving Foundation Model (ADFM), which has already been tested in Wuhan across more than 3,000 square kilometers and through over 11 million rides.
As explained in the press release, Baidu will provide the vehicles, and Lyft will manage market operations, technology validation, and technical support.
“Our partnership with Baidu is all about creating a great customer experience,” said David Risher, Lyft CEO. “Their extensive track record operating the world’s largest autonomous ride-hailing service means we can bring all the benefits of AVs — safety, reliability, and privacy — to millions of Europeans.”
Risher also noted that Lyft wants to implement a hybrid approach in which human drivers and AVs work together to provide the best options for the customers.
Lyft will also leverage its recent acquisition of FREENOW, a European taxi service operating across 180 cities in the region.
The robotaxi market continues to expand across the globe. Zoox recently launched its first “serial production facility” in the United States, Waymo recently began testing autonomous vehicles in Japan, and Tesla began to roll out robotaxi services in Texas.

Image by Freepik
Anthropic Trains “Evil AI” to Make Chatbots Safer
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Anthropic researchers say they have discovered an unexpected way to make AI models more helpful and less harmful: deliberately training them for “evil” behavior.
In a rush? Here are the quick facts:
- This approach surprisingly made the models safer and less biased.
- Researchers identified “persona vectors” linked to harmful traits.
- Giving “evil traits” during training helped remove them later.
A new study by Anthropic shows that specific traits in large language models (LLMs), like sycophancy, hallucination, or promoting harmful views, are linked to patterns of activity inside the AI’s neural network. Researchers refer to these patterns as “persona vectors.”
Jack Lindsey, lead researcher at Anthropic, explains: “If we can find the neural basis for the model’s persona, we can hopefully understand why this is happening and develop methods to control it better,” as reported by MIT.
These persona vectors are like mood markers in the brain. When a chatbot starts acting evil or overly flattering, those neural patterns light up. The team found a way to track these patterns and even influence them.
Their big idea? Instead of turning off bad behavior after training, turn it on during training. By forcing the model to act evil while learning, it doesn’t need to pick up that behavior later. “If you give the model the evil part for free, it doesn’t have to learn that anymore,” Lindsey told MIT.
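The general idea behind persona vectors can be illustrated with a toy sketch. This is not Anthropic’s actual code or data; it is a minimal, assumed example of the broader “steering vector” technique: compute the difference between average activations under trait-eliciting prompts and neutral prompts, then use that direction to detect or suppress the trait.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a model's hidden activations: rows are samples,
# columns are "neurons". We assume "evil"-prompted runs shift
# activations along one hidden direction (hypothetical, for illustration).
hidden_dim = 8
true_direction = np.zeros(hidden_dim)
true_direction[0] = 1.0  # the made-up "evil" axis in this toy model

neutral_acts = rng.normal(size=(100, hidden_dim))
evil_acts = rng.normal(size=(100, hidden_dim)) + 3.0 * true_direction

# A "persona vector" in this simplified sense: the difference of mean
# activations between trait-eliciting and neutral prompts, normalized.
persona_vector = evil_acts.mean(axis=0) - neutral_acts.mean(axis=0)
persona_vector /= np.linalg.norm(persona_vector)

def trait_score(activation):
    """Detection: project an activation onto the persona vector.
    A large score means the trait's pattern is 'lighting up'."""
    return float(activation @ persona_vector)

def suppress(activation):
    """Steering: remove the component along the persona vector to damp
    the trait (adding it instead corresponds to the training-time
    'turn it on' approach described in the article)."""
    return activation - trait_score(activation) * persona_vector

sample = evil_acts[0]
print(trait_score(sample), trait_score(suppress(sample)))
```

In this toy setting the recovered vector closely matches the planted direction, and projecting it out drives the trait score to zero; in a real LLM the activations would come from a specific layer of the network rather than random draws.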
Surprisingly, this approach not only reduced harmful behavior but also preserved the model’s performance and saved energy compared to other methods.
Still, experts say we’re far from full control. “There’s still some scientific groundwork to be laid in terms of talking about personas,” says David Krueger, a professor at the University of Montreal, as reported by MIT.
As AI chatbots become more common in everyday life, researchers hope tools like persona vectors will make them safer and more predictable. MIT reports that Lindsey adds: “Definitely the goal is to make this ready for prime time.”