
Image created with Openart.ai

MIT Reveals That LLMs May Develop Their Own Understanding of Reality

  • Written by Kiara Fabbri, Former Tech News Writer

Researchers at MIT have found that large language models (LLMs) can create their own internal representations of reality. Training an LLM on puzzles revealed that the model developed an understanding of the puzzle’s environment on its own, without explicit instruction. MIT News reported the findings yesterday.

To test this, the researchers used Karel puzzles, tasks in which a model must generate instructions for a robot in a simulated environment in order to solve them. After training the model on over 1 million such puzzles, they found that the LLM not only improved at generating correct instructions but also appeared to develop an internal simulation of the puzzle environment.
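
Karel is a classic educational setting in which a robot on a grid obeys a handful of primitive commands. The exact domain-specific language and environment used in the MIT study may differ; the sketch below is only a hypothetical illustration of what such a puzzle world can look like, with made-up class and method names.

```python
# Minimal sketch of a Karel-style grid world (illustrative only; the
# instruction set in the MIT study may differ).

class KarelWorld:
    """A robot on a grid that understands a few primitive commands."""

    DIRECTIONS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # north, east, south, west

    def __init__(self, width, height, x=0, y=0, facing=0):
        self.width, self.height = width, height
        self.x, self.y, self.facing = x, y, facing
        self.markers = set()  # grid cells holding a marker

    def move(self):
        dx, dy = self.DIRECTIONS[self.facing]
        nx, ny = self.x + dx, self.y + dy
        if 0 <= nx < self.width and 0 <= ny < self.height:
            self.x, self.y = nx, ny  # ignore moves that would leave the grid

    def turn_left(self):
        self.facing = (self.facing - 1) % 4

    def put_marker(self):
        self.markers.add((self.x, self.y))

# A "puzzle" asks the model to emit a program that reaches a target state:
world = KarelWorld(width=4, height=4)
for command in ["move", "turn_left", "move", "put_marker"]:
    getattr(world, command)()           # execute one generated instruction
print(world.x, world.y, world.markers)  # the resulting robot state
```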

Charles Jin, the lead author of the study, explained, “At the start of these experiments, the language model generated random instructions that didn’t work. By the time we completed training, our language model generated correct instructions at a rate of 92.4 percent.”

This internal representation, uncovered using a machine learning technique called “probing,” captured how the robot responded to instructions, suggesting a form of understanding that goes beyond syntax.

The probe was designed merely to “look inside the brain of an LLM,” as Jin puts it, but there was a chance that the probe itself was doing some of the interpretive work rather than the model.
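
In practice, a probe is typically a small classifier trained on the LLM’s hidden activations to predict some property of the world, here the robot’s state. The following is a minimal sketch of that idea under assumed shapes and variable names; it is not the study’s actual code.

```python
# Minimal sketch of a linear probe (hypothetical data; not the study's code).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Suppose we cached hidden states from the LLM while it generated
# instructions, along with the robot's true facing direction at each step.
n_examples, hidden_dim = 5000, 512
hidden_states = np.random.randn(n_examples, hidden_dim)   # stand-in activations
robot_facing = np.random.randint(0, 4, size=n_examples)   # stand-in labels (N/E/S/W)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, robot_facing, test_size=0.2, random_state=0)

# The probe is deliberately simple: if a linear readout of the activations
# can recover the robot's state, that state is plausibly encoded in the
# model itself rather than computed by the probe.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probe accuracy:", probe.score(X_test, y_test))
```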

Jin explains, “The probe is like a forensics analyst: You hand this pile of data to the analyst and say, ‘Here’s how the robot moves; now try and find the robot’s movements in the pile of data.’ The analyst later tells you that they know what’s going on with the robot in the pile of data.”

Jin adds, “But what if the pile of data actually just encodes the raw instructions, and the analyst has figured out some clever way to extract the instructions and follow them accordingly? Then the language model hasn’t really learned what the instructions mean at all.”

To test this, the researchers carried out a “Bizarro World” experiment in which the meanings of instructions were reversed. In this scenario, the probe had difficulty interpreting the altered instructions, suggesting that the LLM had developed its own semantic understanding of the original instructions.
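
Conceptually, the reversal amounts to remapping each instruction to an opposite meaning before it is interpreted. The toy mapping below is only an illustration of that idea, with made-up command names, not the study’s actual setup.

```python
# Toy illustration of the "Bizarro World" reversal (hypothetical commands):
# each instruction's meaning is flipped, so a probe that merely latched onto
# the original semantics should now struggle.
REVERSED_MEANINGS = {
    "move": "move_backward",
    "turn_left": "turn_right",
    "put_marker": "pick_marker",
}

def reinterpret(program):
    """Map each original instruction to its reversed meaning."""
    return [REVERSED_MEANINGS.get(cmd, cmd) for cmd in program]

print(reinterpret(["move", "turn_left", "put_marker"]))
# ['move_backward', 'turn_right', 'pick_marker']
```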

These results challenge the prevailing view that LLMs are merely sophisticated pattern-matching machines. Instead, they suggest that these models may be developing a deeper, more nuanced comprehension of language and the world it represents.

A study from the University of Bath earlier this week indicated that LLMs excel at language processing but struggle with independent skill acquisition, reinforcing the idea that their behavior remains predictable. The MIT research, however, offers a contrasting perspective.

Even though the MIT results seem promising, the researchers point out some limitations. Specifically, Jin acknowledges that they used a very simple programming language and a relatively small model to obtain their insights.


Image courtesy of Serve Robotics

Shake Shack And Serve Robotics Partner To Deliver Food With Robots In Los Angeles

  • Written by Andrea Miliani, Former Tech News Expert

Fast food chain Shake Shack and AI-powered delivery robot company Serve Robotics announced a new partnership this Wednesday to deliver food in Los Angeles through the Uber Eats platform. Serve’s autonomous delivery robots will begin handling customers’ orders from select Shake Shack locations in the city in the coming days.

“We are excited to add another national merchant like Shake Shack to our platform, a partnership made possible through the relationship we have built with Uber Eats across tens of thousands of successful deliveries,” said Touraj Parang, President and COO of Serve Robotics.

Parang also announced that Serve expects to deploy 2,000 robots across the United States by 2025. Los Angeles is not the first city where Uber has offered autonomous deliveries: in April, Uber and Waymo announced their first robot delivery service in Phoenix, Arizona.

In the press release, the companies promised customers that the robots have been optimized for fast deliveries, avoiding obstacles along their GPS-planned routes, and that orders should arrive hot (or cold) and fresh. “And there’s no need to tip the robot!” the document adds.

Serve Robotics shared a short video on X promoting the new service in Los Angeles, featuring the four-wheeled robot with its modern cart design.

Serve is now on delivery for @shakeshack ! Select Shake Shack customers in the Los Angeles area may receive their next @UberEats order via @ServeRobotics 🤖🍔🍟 👉 Learn more: https://t.co/OYECUwmcAz pic.twitter.com/o44nTpZacP — Serve Robotics (@ServeRobotics) August 14, 2024

Los Angeles residents will be seeing multiple AI-powered autonomous technologies around the city, as Waymo has also deployed self-driving vehicles there and recently got approval from regulators to operate robotaxis on local freeways.