
Image by Freepik
AI-Powered Robotic Restaurant Debuts At Barcelona Airport
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Barcelona Airport (BCN) has launched SELF, billed as the world’s first AI-powered robotic restaurant at an airport.
In a Rush? Here are the Quick Facts!
- The restaurant covers 137m² and serves coffee, sandwiches, salads, and desserts.
- An advanced robotic arm handles up to six orders simultaneously.
- Customers place and pay for orders via touchscreen and receive QR code notifications.
Developed in partnership with Areas, a Spanish multinational specializing in food and beverage, the 137m² restaurant offers a variety of meals, including coffee, sandwiches, pastries, salads, poke bowls, and desserts. Everything is served through automated systems designed to increase efficiency and improve the customer experience, as first reported by International Airport Review (IAR).
The restaurant features a robotic arm capable of handling up to six orders simultaneously. Customers place and pay for their orders via a touchscreen, receiving a QR code for their meal.
AI technology, including machine vision, allows the system to optimize service in real time, ensuring faster and more accurate food preparation, according to IAR.
SELF is the result of collaboration between Areas, Aena, and various strategic technology partners, such as MasterCard, Kuka, ICG, and brands like Coca-Cola and Lavazza. The system operates autonomously but works alongside human staff to manage inventory, optimize procurement, and enhance customer service, as reported by IAR.
However, as automated systems like SELF become more widespread, there are growing concerns about the implications for job losses. Many roles traditionally occupied by airport food service employees could be displaced by robotics and AI, especially in tasks like food preparation and order fulfillment.
As the industry embraces these technologies, it will be essential to address the potential impact on workers and explore retraining opportunities or alternative job placements to mitigate displacement. Balancing innovation with job security will be a key challenge in the evolving landscape of airport services.

Photo by Towfiqu barbhuiya on Unsplash
UK To Become First Country To Criminalize AI-Generated Child Abuse Tools
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
The United Kingdom announced on Saturday four new laws that will make it illegal to create, possess, or distribute AI tools developed to generate child sexual abuse material (CSAM). Britain will become the first country in the world to ban and punish this kind of AI-generated sexual content.
In a Rush? Here are the Quick Facts!
- The U.K. announced on Saturday new laws that will make it illegal to create, possess, or distribute AI tools used for child sexual abuse material.
- Offences involving AI tools used for child abuse will carry penalties of up to 5 years in prison.
- The Home Secretary said AI has helped increase and expand child sexual abuse content.
According to the BBC, the new legal measures include punishments of up to 5 years in prison for the use of AI tools for child abuse purposes. Other proposed laws would impose sentences of up to 10 years on those who create or manage websites that enable pedophiles to share CSAM or advice on child grooming.
“What we’re seeing is that AI is now putting the online child abuse on steroids,” Yvette Cooper, the UK Home Secretary, told the BBC in an interview on Sunday.
Cooper explained that the government may have to go even further with the new laws, as AI has been scaling up and expanding sexual abuse against children.
“You have perpetrators who are using AI to help them better groom or blackmail teenagers and children, distorting images and using those to draw young people into further abuse, just the most horrific things taking place and also becoming more sadistic,” said Cooper.
The CSAM covered by the new laws includes images generated partially or entirely by AI, as well as the use of software to create realistic abuse images, “nudeify” existing images, or replace faces with a child’s face.
According to data from the National Crime Agency (NCA), there are around 840,000 adults in the U.K. who pose a potential threat to children, around 1.6% of the population, and roughly 800 arrests are made for this kind of crime every month.
Other countries, like Australia, have taken measures such as banning social media for children and teenagers to reduce their exposure to predators and other risks.