
Image by Freepik

Scientists Train AI To Think Like A Human Using Psychology Studies

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The new AI system, Centaur, demonstrates human-like thinking across multiple experiments, producing new findings while sparking debate over what true understanding really means.

In a rush? Here are the quick facts:

  • Centaur learned from 160 studies and 10 million responses.
  • Centaur generalizes strategies like humans in new situations.
  • Some experts say it outperforms classical cognitive models.

An international team of scientists has developed a new AI system called Centaur, which performs like a human being in psychological tests.

In their study, the development team used Meta’s open-source LLaMA model to create Centaur, training it on results from 160 studies involving more than 60,000 volunteers. The goal was to determine whether an AI system could replicate a wide range of human thinking processes.

“Ultimately, we want to understand the human mind as a whole and see how these things are all connected,” said Marcel Binz, lead author of the study, in an interview with The New York Times.

Modern AI, like ChatGPT, can produce responses that seem human, but such systems still make basic mistakes. A chess bot can’t drive a car, and a chatbot might let pawns move sideways. General intelligence, which would function much like human mental processes, remains out of reach. Centaur’s approach brings scientists a step closer to that goal.

The AI was trained to copy human choices in tasks like steering a spaceship toward treasure or learning patterns in games. “We essentially taught it to mimic the choices that were made by the human participants,” Binz explained to The Times.
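
In practice, fine-tuning a language model to mimic such choices usually means rewriting each experimental trial as a short text prompt whose target is the participant’s response. The sketch below illustrates only that data-formatting step, as an assumption about the general technique rather than the study’s actual format; the field names, wording, and example trials are invented for illustration.

```python
# Minimal sketch, assuming trials are rewritten as (prompt, target) pairs for
# supervised fine-tuning. Field names and example data are hypothetical.

trials = [
    {"options": ["planet A", "planet B"], "past_rewards": [1, 0], "human_choice": "planet A"},
    {"options": ["planet A", "planet B"], "past_rewards": [0, 1], "human_choice": "planet B"},
]

def trial_to_example(trial):
    """Turn one behavioral trial into a (prompt, target) training pair."""
    prompt = (
        f"Options: {', '.join(trial['options'])}. "
        f"Rewards so far: {trial['past_rewards']}. "
        "The participant chose:"
    )
    return prompt, trial["human_choice"]

if __name__ == "__main__":
    for trial in trials:
        prompt, target = trial_to_example(trial)
        print(prompt, "->", target)
```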

Centaur not only learned like a human, it generalized like one, too. When the spaceship task was swapped for a flying carpet version, Centaur reused the same successful strategy, just like people did.

Experts were impressed. “This is really the first model that can do all these types of tasks in a way that’s just like a human subject,” said Stanford’s Russ Poldrack.

Still, some critics say mimicking behavior isn’t the same as understanding the mind. “The goal is not prediction. The goal is understanding,” said Indiana University’s Gary Lupyan in an interview with the Times.

Even Binz agrees. “Centaur doesn’t really do that yet,” he said. But with five times more data coming, the team hopes Centaur will grow into something even more powerful, and possibly even help unlock the mysteries of the human mind.


Image by Marco Verch, from CCnull

Cloudflare Lets Sites Charge AI Bots With ‘Pay Per Crawl’

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Cloudflare is giving publishers a new option: charge AI bots for crawling their sites using a revived HTTP 402 payment system.

In a rush? Here are the quick facts:

  • Publishers can now allow, block, or charge crawlers per visit.
  • System uses HTTP 402 code to request payments from bots.
  • Cloudflare manages payments and infrastructure for the service.

Through its new “Pay Per Crawl” system, website owners can charge AI bots for access to their content. The system gives publishers an alternative to their current binary choice of either allowing AI data scraping or blocking AI access entirely.

“Many publishers, content creators and website owners currently feel like they have a binary choice — either leave the front door wide open for AI to consume everything they create, or create their own walled garden,” Cloudflare said. “But what if there was another way?”

With Pay Per Crawl, content creators can now decide who gets in and at what price. They can let some AI crawlers in for free, block others entirely, or charge for access. “We wanted content creators to have control over who accesses their work,” Cloudflare said. “Creators should be in the driver’s seat.”

The move comes amid growing backlash over AI companies using web content without consent. For example, YouTube has been criticised for allowing Google to scrape videos without notifying creators. Additionally, Google’s AI Overviews feature has reduced traffic to news sites like HuffPost and The Washington Post by more than 50%.

“Google just takes content by force and uses it with no return — the definition of theft,” said Danielle Coffey, president of the News/Media Alliance.

The tool works using an old web feature: HTTP response code 402, which stands for “Payment Required.” If an AI bot tries to access a page, the server can now reply with a 402 and include a price tag. If the bot agrees to pay, the server delivers the content.
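
As an illustration of that handshake, here is a minimal sketch of a server that replies to known AI crawlers with HTTP 402 and advertises a price. It is not Cloudflare’s implementation: the header names, user-agent list, and price are assumptions made for the example.

```python
# Minimal sketch of the 402 "Payment Required" flow, not Cloudflare's
# implementation. Header names, user-agent list, and price are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer

AI_CRAWLERS = ("GPTBot", "ClaudeBot", "CCBot")  # illustrative user-agent substrings
PRICE_PER_CRAWL = "0.01"                        # hypothetical price per request, in USD

class PayPerCrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        user_agent = self.headers.get("User-Agent", "")
        is_ai_bot = any(bot in user_agent for bot in AI_CRAWLERS)
        has_paid = self.headers.get("X-Crawl-Payment") is not None  # hypothetical header

        if is_ai_bot and not has_paid:
            # No payment offered: answer 402 and advertise the price tag.
            self.send_response(402)                             # Payment Required
            self.send_header("X-Crawl-Price", PRICE_PER_CRAWL)  # hypothetical header
            self.end_headers()
            return

        # Human visitors, or bots that agreed to pay, get the content.
        body = b"<html><body>Article content</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8402), PayPerCrawlHandler).serve_forever()
```

In Cloudflare’s hosted version, the price negotiation and settlement happen on Cloudflare’s infrastructure rather than on the publisher’s own server, as described below.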

Cloudflare provides the technical infrastructure and handles payments. Publishers can set a fixed price per crawl request and even apply different rules for different bots. Even if a bot isn’t registered with Cloudflare, it can still be “charged” — essentially blocking access, but leaving room for future deals.

This system could lead to more flexible licensing and dynamic pricing in the future. As Cloudflare puts it: “By providing creators with a robust, programmatic mechanism for valuing and controlling their digital assets, we empower them to continue creating the rich, diverse content that makes the Internet invaluable.”