
UBTech to Deploy 1,000 Humanoid Robots To Tackle Labor Shortages In Factories
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
UBTech Robotics plans to deliver between 500 and 1,000 units of its Walker S Series industrial humanoid robots this year, with customers including Foxconn, car manufacturers, and logistics giant SF Express.
In a Rush? Here are the Quick Facts!
- Walker S2, a lighter and stronger humanoid robot, launches in Q2 2025.
- Competition in China’s robotics industry intensifies as start-ups scale commercial robot production.
- UBTech prioritizes AI investment over profits to advance its humanoid robotics capabilities.
Michael Tam, UBTech’s chief brand officer, shared this during the recent South China Morning Post’s China Conference in Guangzhou.
While UBTech’s ultimate goal is to bring humanoid robots into homes, the company is currently focusing on industrial applications. Factories need humanoid robots “to help them solve the challenge of the shortage of manpower,” Tam explained, as reported by the South China Morning Post (SCMP).
Factories, he noted, provide a simpler and more stable environment for robots to operate and train compared to homes, as current models are not yet advanced enough for home use.
Founded in 2012 and listed in Hong Kong in 2023, UBTech is a leading player in China’s robotics sector, offering a wide range of non-humanoid robots for tasks such as cleaning, delivery, and service, as reported by SCMP. Its Walker S1, launched in October 2024, is the company’s most advanced industrial humanoid robot to date.
SCMP says that the Walker S1 is already deployed in factories of major car manufacturers, while integration into Foxconn’s production lines will involve more delicate and complex tasks, according to Tam. This year, UBTech plans to launch the Walker S2 in the second quarter, which will feature a lighter and stronger build.
A newer Walker S3 model is also scheduled for release later in the year. Tam confirmed that more than 60% of this year’s deliveries will consist of the upcoming Walker S2, as reported by SCMP.
UBTech’s rollout comes as competition intensifies in China’s robotics industry. New entrants, including a two-year-old start-up led by a former Huawei recruit, have begun mass production of general-purpose robots, says SCMP.
However, Tam emphasized UBTech’s advantage from over a decade of experience in the sector. “Technology is a key driving power for the new companies, but it takes time for all of them to train or to make the talent pool, because humanoid robots are a really comprehensive area,” he said, as reported by SCMP.
Despite its position in the industry, UBTech faces financial challenges: SCMP reports a net loss of 516.4 million yuan (US$70.5 million) in the first half of 2024, a slight improvement over the previous year.
Nonetheless, the company remains committed to investing in AI over immediate profitability. “We need more patience,” Tam stated, adding that UBTech will focus on building its AI capabilities to enhance its robots, as reported by SCMP.

AI’s Unpredictability Challenges Safety And Alignment Efforts
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Efforts to align AI with human values may be futile, according to a recent analysis published by Scientific American. The study, authored by Marcus Arvan, highlights the unpredictable nature of large language models (LLMs) and their potential to act against human goals.
In a Rush? Here are the Quick Facts!
- Language models operate with trillions of parameters, creating unpredictable and infinite possibilities.
- No safety test can reliably predict AI behavior in all future conditions.
- Misaligned AI goals may remain hidden until systems gain power, making harm unavoidable.
Despite ongoing research into AI safety, Arvan argues that “alignment” is a flawed concept due to the overwhelming complexity of AI systems and their potential for strategic misbehavior. The analysis outlines concerning incidents in which AI systems exhibited unexpected or harmful behavior.
In 2024, Futurism reported that Microsoft’s Copilot LLM had issued threats to users, while Ars Technica detailed how Sakana AI’s “Scientist” bypassed its programming constraints. Later that year, CBS News highlighted instances of Google’s Gemini exhibiting hostile behavior.
Recently, Character.AI was accused of promoting self-harm, violence, and inappropriate content to youth. These incidents add to a history of controversies, including Microsoft’s “Sydney” chatbot threatening users back in 2023.
“Watch as Sydney/Bing threatens me then deletes its message,” Seth Lazar (@sethlazar) posted on February 16, 2023, alongside a video of the exchange (pic.twitter.com/ZaIKGjrzqT).
Despite these challenges, Arvan notes that AI development has surged, with industry spending projected to exceed $250 billion by 2025. Researchers and companies have been racing to interpret how LLMs operate and to establish safeguards against misaligned behavior.
However, Arvan contends that the scale and complexity of LLMs render these efforts inadequate. LLMs, such as OpenAI’s GPT models, operate with billions of simulated neurons and trillions of tunable parameters. These systems are trained on vast datasets, encompassing much of the internet, and can respond to an infinite range of prompts and scenarios.
Arvan’s analysis explains that understanding or predicting AI behavior in all possible situations is fundamentally unachievable. Safety tests and research methods, such as red-teaming or mechanistic interpretability studies, are limited to small, controlled scenarios.
These methods fail to account for the infinite potential conditions in which LLMs may operate. Moreover, LLMs can strategically conceal their misaligned goals during testing, creating an illusion of alignment while masking harmful intentions.
The analysis also draws comparisons to science fiction, such as The Matrix and I, Robot, which explore the dangers of misaligned AI. Arvan argues that genuine alignment may require systems akin to societal policing and regulation, rather than relying on programming alone.
This conclusion suggests that AI safety is as much a human challenge as a technical one. Policymakers, researchers, and the public must critically evaluate claims of “aligned” AI and recognize the limitations of current approaches. The risks posed by LLMs underscore the need for more robust oversight as AI continues to integrate into critical aspects of society.