American Baseball League To Use Robot Umpires In 2026

Photo by Bo Lane on Unsplash


  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

Major League Baseball’s 11-man competition committee announced on Tuesday that the Automated Ball/Strike System (ABS) has been approved for the 2026 season.

In a rush? Here are the quick facts:

  • Major League Baseball’s 11-man competition committee approved the Automated Ball/Strike System (ABS) for the 2026 season.
  • MLB will rely more on robot umpires starting next year.
  • ABS is expected to reduce ejections, as more than 60% are tied to arguments over disputed ball and strike calls.

According to AP, Major League Baseball (MLB) will rely more on robot umpires starting next year. While the AI-powered system will call all balls and strikes, players will still be able to request reviews, which will be displayed on outfield videoboards.

ABS has been tested over the past few years in multiple games and leagues across the United States. This decision marks a major transformation in the sport and the most significant rule change since the adjustments made in 2024. While not everyone welcomes it, the adoption of AI in professional baseball seems inevitable.

“You can like it, dislike it, it doesn’t matter,” Guardians manager Stephen Vogt told AP. “It’s coming. It’s going to change the game. It’s going to change the game forever.”

“I love it. I loved it in spring training,” Phillies manager Rob Thomson told AP. “Not all of the players, but most of the players, if you ask them, they really liked it too. I think it keeps everybody accountable. It keeps everybody on their toes.”

The technology has also been designed to preserve certain human elements of the game, such as “pitch framing,” a strategy in which catchers use their positioning to make borderline pitches appear as strikes.

AI systems and humanoid robots have been gaining prominence in sports this year. A few weeks ago, China hosted the world’s first humanoid robot games, showcasing local robotic developments with tournaments including soccer, boxing, and running competitions.

Studies Show ChatGPT and Other AI Tools Cite Retracted Research

Image by Ryunosuke Kikuno, from Unsplash


  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Some artificial intelligence chatbots are giving answers based on flawed research from retracted scientific papers, recent studies show.

In a rush? Here are the quick facts:

  • AI chatbots sometimes cite retracted scientific papers without warning users.
  • ChatGPT running GPT-4o referenced retracted papers five times, warning users in only three instances.
  • Experts warn retraction data is inconsistent and often hard for AI to track.

The research findings, which MIT Technology Review confirmed, raise doubts about AI reliability when it comes to answering scientific questions for researchers, students, and the general public.

AI chatbots are already known to sometimes fabricate references. But experts warn that even when the sources are real, problems arise if the papers themselves have been pulled from the scientific record.

The chatbot is “using a real paper, real material, to tell you something,” says Weikuan Gu, a medical researcher at the University of Tennessee, as reported by MIT. But, he adds, if people only look at the content of the answer and do not click through to the paper and see that it has been retracted, “that’s really a problem.”

MIT reports that Gu’s team tested ChatGPT running on OpenAI’s GPT-4o model with 21 retracted medical imaging papers. The chatbot referenced retracted sources five times, yet it warned users about the retractions in only three of those instances. Another study found similar issues with GPT-4o mini, which failed to mention retractions at all.

The problem extends beyond ChatGPT. MIT evaluated research-oriented AI tools by testing Elicit, Ai2 ScholarQA, Perplexity, and Consensus. Each cited retracted studies without flagging them, a pattern the researchers observed repeatedly across dozens of cases. Some companies say they are now improving detection.

“Until recently, we didn’t have great retraction data in our search engine,” said Christian Salem, cofounder of Consensus, which has since added new sources to reduce errors.

Experts argue that retraction data is patchy and inconsistent. “Where things are retracted, they can be marked as such in very different ways,” says Caitlin Bakker from the University of Regina.

Researchers warn users to stay cautious. “We are at the very, very early stages, and essentially you have to be skeptical,” says Aaron Tay of Singapore Management University.