
Image by Oberon Copeland, from Unsplash
Google’s Dreamer AI Learns How to Play Minecraft Without Training
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A new AI system from Google DeepMind has figured out how to collect diamonds in Minecraft — one of the game’s toughest challenges — without any human instructions.
In a rush? Here are the quick facts:
- Dreamer AI mastered Minecraft diamond quest without human guidance.
- AI used imagination to predict actions’ outcomes.
- Dreamer achieved expert level in nine days.
The AI, named Dreamer, taught itself to play Minecraft and reached expert level in just nine days. It did so by simply imagining the future outcomes of its own actions, as reported in a study published in Nature.
“Dreamer marks a significant step towards general AI systems,” said Danijar Hafner, a computer scientist at Google DeepMind, as reported by Tech Xplore. “It allows AI to understand its physical environment and also to self-improve over time, without a human having to tell it exactly what to do,” he added.
Minecraft, played by more than 100 million monthly users, generates a random 3D world for each playthrough. To find diamonds, players must complete a sequence of steps: collecting wood, crafting tools, building a furnace, extracting iron, and finally excavating underground.
The process typically takes most players several hours of gameplay. Dreamer, however, used ‘reinforcement learning’ to discover effective actions, retaining successful attempts and discarding unsuccessful ones. The team provided small rewards for each milestone, such as crafting a plank or mining iron, and reset the game every thirty minutes to prevent the system from memorizing patterns specific to one world.
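The milestone-reward scheme described above can be sketched in a few lines. This is a simplified illustration only: the milestone names and reward values are hypothetical stand-ins, not the actual quantities used by the Dreamer team.

```python
# Illustrative sketch of milestone-based reward shaping: the agent earns a
# small one-time bonus the first time it reaches each milestone on the
# road to a diamond. Milestones and values here are made up for clarity.

MILESTONES = ["collect_wood", "craft_plank", "build_furnace",
              "mine_iron", "mine_diamond"]

def milestone_reward(reached, already_rewarded):
    """Return a small fixed bonus for each newly reached milestone."""
    reward = 0.0
    for m in reached:
        if m in MILESTONES and m not in already_rewarded:
            reward += 1.0              # one-time bonus per milestone
            already_rewarded.add(m)    # never reward the same milestone twice
    return reward

# Example episode: repeating a milestone earns nothing the second time.
rewarded = set()
r1 = milestone_reward({"collect_wood"}, rewarded)
r2 = milestone_reward({"collect_wood", "craft_plank"}, rewarded)
print(r1, r2)  # 1.0 1.0 — only the new milestone counts in the second call
```

The one-time bonus is the key design choice: it nudges the agent along the crafting chain without letting it farm the same easy step for endless reward.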
Unlike older AI systems that ‘watched’ humans play in order to learn, Dreamer operated autonomously, requiring no human demonstrations or step-by-step guidance. The system built an internal “world model” that allowed it to predict the results of actions before taking them.
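The idea of planning with a world model can be shown with a toy example: rather than trying each action in the real game, the agent asks a predictive model what would happen and picks the best imagined outcome. The hand-written model below is a stand-in for illustration only; Dreamer’s actual world model is a learned neural network.

```python
# Toy illustration of "imagining the future" with a world model: the agent
# queries a predictive model for (next_state, reward) instead of acting in
# the real environment, then chooses the action with the best imagined payoff.

def world_model(state, action):
    """Predict (next_state, reward) for an action without executing it."""
    predicted = {
        "dig":  (state + 1, 0.1),   # imagined outcome: progress, small reward
        "wait": (state, 0.0),       # imagined outcome: nothing changes
    }
    return predicted[action]

def plan(state, actions):
    """Pick the action whose imagined outcome yields the highest reward."""
    return max(actions, key=lambda a: world_model(state, a)[1])

best = plan(0, ["dig", "wait"])
print(best)  # dig
```

The real system imagines multi-step rollouts rather than single steps, but the principle is the same: evaluate actions in the model, act in the world only once a promising plan is found.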
“The world model really equips the AI system with the ability to imagine the future,” Hafner said, as reported by Tech Xplore. Jeff Clune, an AI expert from the University of British Columbia, called the achievement a “major step forward for the field,” reported Tech Xplore.
While humans can locate a diamond in approximately 20–30 minutes, Dreamer needed nine days to do the same. However, the researchers believe this work has far-reaching implications beyond video games.
“This could help robots teach themselves how to achieve goals in the real world,” Hafner added, as reported on Tech Xplore.

Image by Beyond My Ken, from Wikimedia Commons
Self-Represented Litigant Uses AI Avatar, Sparks Courtroom Backlash
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A man who appeared on a video screen to defend his case in a New York court turned out to be an AI-generated avatar instead of a real person.
In a rush? Here are the quick facts:
- A man used an AI avatar to argue his case in court.
- Judges quickly stopped the video after realizing it wasn’t a real person.
- The court was upset he hadn’t disclosed the use of an avatar.
As first reported by the AP, Jerome Dewald appeared before the New York State Supreme Court Appellate Division’s First Judicial Department to argue his employment dispute through a video submission, while allegedly representing himself.
“The appellant has submitted a video for his argument,” said Justice Sallie Manzanet-Daniels, as reported by the AP. “Ok. We will hear that video now,” she added.
The AP reported that on screen appeared a well-dressed, youthful figure who greeted the judges by saying, “May it please the court. I come here today a humble pro se before a panel of five distinguished justices.”
But the judge quickly paused. “Ok, hold on. Is that counsel for the case?” she asked. Dewald admitted: “I generated that. That’s not a real person.”
According to the AP, the video was immediately stopped. “It would have been nice to know that when you made your application. You did not tell me that sir,” Manzanet-Daniels said. “I don’t appreciate being misled,” she added.
Dewald later apologized, saying he meant no harm. He said that since he does not have a lawyer and has a problem with public speaking, he used an AI avatar to present his case more clearly. He made the avatar with a tool from a San Francisco tech company, but could not make it look like him in time.
“The court was really upset about it,” Dewald said, as reported by the AP. “They chewed me up pretty good,” he added.
The AP notes that Dewald is not the first to get in trouble over AI in the courtroom. For example, the AP cites the case of two lawyers who were fined in 2023 after using an AI chatbot that fabricated fake case citations.
The AP notes that even high-profile lawyers, like those for Michael Cohen, faced similar issues when they unknowingly cited AI-created rulings.
Moreover, recent reports show that AI’s hallucinations—errors and made-up information created by generative AI models—are causing legal problems in courts across the United States.
Legal technology expert Daniel Shin noted to the AP that while it’s rare for lawyers to use AI in court, it’s not surprising that a non-lawyer like Dewald would try such an approach.
Dewald’s case is still under review by the court.