
Image generated with OpenAI

Opinion: AI Models Are Mysterious “Creatures,” and Even Their Creators Don’t Fully Understand Them

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

Anthropic’s recent study on how its Claude 3.5 Haiku model works promises groundbreaking revelations and a spark of insight into how advanced AI technologies operate. But what do the researchers mean when they say that LLMs are “living organisms” that “think”?

A few days ago, Anthropic released two papers with groundbreaking research on how Large Language Models (LLMs) work. While the technical developments were interesting and relevant, what caught my attention the most was the vocabulary used by the AI experts.

In the study On the Biology of a Large Language Model, the researchers compared themselves to biologists studying complex “living organisms” that have evolved over billions of years.

“Likewise, while language models are generated by simple, human-designed training algorithms, the mechanisms born of these algorithms appear to be quite complex,” wrote the scientists.

In the past few years, AI models have evolved significantly, and we’ve been witnessing their rapid progress over the past few months. We’ve seen ChatGPT go from a text-only model to a speaking companion, and now to a multidimensional agent that can also generate stunning Studio Ghibli-styled images.

But, what if the current frontier AI models are reaching that sci-fi level of developing such advanced reasoning that not even their creators can understand their processes and systems? There are multiple mysteries surrounding AI technologies that might be relevant to revisit—or dive into—in 2025.

The Spooky Black-Box Paradox of AI Models

There are multiple discussions on AI adoption and AI literacy, and how those who understand how generative AI models work are less likely to consider chatbots their “friends” or “magical” apps. However, there’s another debate—among experts and people more familiar with the technology—on whether LLMs should be compared to, or even considered, independent creations. In the latter discussion, a special ingredient, a mystery known as “the AI black-box paradox,” plays a crucial role.

Deep learning systems are trained to recognize elements and trends in similar ways as humans do. Just like we teach children to recognize patterns and assign specific words to different objects, LLMs have been trained to make unique connections and build networks that get more and more complex as they “grow.”

Samir Rawashdeh, an Associate Professor of Electrical and Computer Engineering who specializes in artificial intelligence, explains that, just as when we study human intelligence, it’s almost impossible to actually see how deep learning systems make decisions and reach conclusions. This is what experts call the “black box problem.”
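A toy example can make the black-box idea concrete. The sketch below is purely illustrative—it is not from Anthropic’s research or Rawashdeh’s work. It trains a minimal single-neuron classifier on the logical AND function: the model learns the task perfectly, yet everything it “knows” is stored as a few opaque numbers that say nothing about *why* it answers the way it does. Frontier LLMs face the same problem scaled up to billions of parameters.

```python
import random

# Illustrative "black box" sketch: a single-neuron perceptron that
# learns logical AND. After training, its knowledge is just raw numbers.
random.seed(0)

# Training data: inputs and the AND of the two bits.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Randomly initialized parameters (two weights and a bias).
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = random.uniform(-1, 1)

def predict(x):
    # Fire (output 1) if the weighted sum crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Classic perceptron update rule, repeated over the dataset.
for _ in range(50):
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

# The model now answers correctly, but the learned parameters are
# just uninterpreted floats -- inspecting them explains nothing.
print("learned weights:", w, "bias:", b)
print("predictions:", [predict(x) for x, _ in data])
```

Even here, reading the final weights tells you little about the model’s “reasoning”; interpretability research like Anthropic’s tries to recover that missing explanation at vastly larger scale.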

AI Models Challenge Human Understanding

Anthropic’s recent study sheds light on the black-box problem by explaining how its model “thinks” in scenarios where our understanding was previously blurry or even completely wrong. Even though the study is based on Claude 3.5 Haiku, it allows experts to develop tools and analyze similar characteristics in other AI models.

“Understanding the nature of this intelligence is a profound scientific challenge, which has the potential to reshape our conception of what it means to ‘think,’” states the paper shared by Anthropic’s researchers.

However, the term “think,” applied to AI technologies, upsets certain experts in the industry and fuels criticism of the study. A Reddit user explained why it annoys some: “There is a lot of anthropomorphizing throughout the article that obfuscates the work. For example, it keeps using the word ‘think’ when it should say ‘compute’. We are talking about computer software, not a biological brain.”

While the “humanized” terms help non-technical people understand AI models better and raise debate in the community, the truth is that, whether we say “compute” or “think,” the same challenge remains: we don’t have a full understanding or complete transparency on how LLMs operate.

What To Expect From Advanced AI Models in The Near Future

Can you imagine ignoring advanced AI technologies like ChatGPT, DeepSeek, Perplexity, or Claude—now or in the near future? All signs point to one fact: there’s no turning back. Generative and reasoning AI have already transformed our daily lives, and they will only continue to evolve.

Almost every day at WizCase we report a new development in the industry—a new AI model, a new AI tool, a new AI company—that has the potential to make a big impact on our society. The idea of taking a break to first gain a better understanding of these advanced models and how they operate—or even slightly slowing down—seems impossible, given the rapid pace of the AI race and the involvement of governments and the world’s most powerful companies.

“[As] AI models exert increasing influence on how we live and work, we must understand them well enough to ensure their impact is positive,” states Anthropic’s paper. Even if it sounds a bit unrealistic, the researchers remain optimistic: “We believe that our results here, and the trajectory of progress they are built on, are exciting evidence that we can rise to meet this challenge.”

But how fast can these discoveries really move? The paper also notes that its results cover only a few areas and specific cases, and that broader conclusions cannot yet be drawn. So, probably not fast enough.

While regulators introduce measures like the EU AI Act to demand more transparency, drawing accusations from major tech companies that claim such rules slow down progress, powerful AI models continue to advance.

As a society, we must strive to find a balance between deepening our understanding of how these technologies operate and adopting them in ways that bring meaningful benefits and progress to our communities. Is this possible? The idea of just praying or hoping that these “creatures” remain “ethical” and “good” doesn’t seem so far-fetched right now.


Image by Gage Skidmore, from Wikimedia Commons

Retirees Panic As Social Security Site Fails Under DOGE Oversight

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The Social Security Administration’s website has been crashing repeatedly, leaving millions of people, especially retirees and disabled individuals, unable to access their accounts, file claims, or check their benefits.

In a rush? Here are the quick facts:

  • The Social Security website experienced crashes which prevented retirees and disabled users from accessing their benefits.
  • The DOGE team led by Elon Musk requires the agency to reduce its technical staff by half.
  • The combination of staff reductions and system breakdowns disrupted both office locations and telephone services.

The Washington Post notes that the site outages coincide with Elon Musk’s cost-cutting team, DOGE, pushing for thousands more job cuts across the agency—including in the technology department that manages the site.

The site has been down several times in the last few weeks, sometimes for almost a full day. Even when it is working, many users are unable to log in, or they find that information is missing or incorrect. Officials said the problems are partly due to a new fraud-check system that was introduced without being tested for high traffic, as reported by The Post.

The most alarming breakdown affected the Supplemental Security Income (SSI) system. For nearly two days, 7.4 million people saw a message falsely stating they weren’t receiving payments. The checks were still deposited, but the scare led to panic, as reported by The Post.

“Social Security’s response has been, ‘Oops,’” said Darcy Milburn from The Arc, an advocacy group for people with disabilities, as reported by The Post. “It’s woefully insufficient when we’re talking about a government agency that’s holding someone’s lifeline in their hands,” she added.

Meanwhile, the AP reports that a federal appeals court on Monday restored DOGE’s access to sensitive data at several agencies, including the Treasury Department and the Office of Personnel Management. The court decision lifted the restriction on DOGE data access while the lawsuit from teachers’ unions and veterans groups continues.

The current restriction on Social Security data access remains in place, but opponents worry that the restored access could lead to further breaches of privacy and expanded oversight by Musk’s team. The court majority sided with the Trump administration, arguing that IT upgrades may justify administrator-level access, as reported by the AP.

The Post reports that the agency has laid off 7,000 staff members and has announced that additional job cuts are coming. A senior official revealed that the agency plans to cut 800 employees from its current 3,000-person technology staff. The new CIO, Scott Coulter, an analyst who shares Musk’s views, has ordered IT to reduce its workforce by half.

At field offices, workers are overwhelmed. Systems to book and track appointments have crashed three times in 10 days. “We’re just spiking like crazy,” said a senior official, as reported by The Post. “It’s the sheer massive volume of freaked-out people,” he added.

The Post argues that the situation may get worse after April 14, when new ID checks are introduced for people applying for benefits online. Officials admitted the outages are under investigation but described them as “brief disruptions.”

Meanwhile, beneficiaries across the country are struggling. In California, 72-year-old Kathy Stecher couldn’t book an appointment online for days. In New York, 67-year-old Robert Raniolo tried for five days to update his emergency contact but kept getting error messages. “That’s why it’s so frustrating to me I can’t make a simple transaction,” he said, as reported by The Post.

In Massachusetts, Chris Hubbard panicked when she logged into her autistic son’s account and saw no record of benefits. “My mind was racing,” she said, as reported by The Post. “The whole thing was very alarming,” she added.