
UN Says AI Will Impact 40% Of Jobs Worldwide
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
UN Trade and Development (UNCTAD) released a new report this Monday, the Technology and Innovation Report 2025, revealing that AI could affect 40% of jobs across the globe and urging nations to take action. The agency also estimates the AI market will reach $4.8 trillion by 2033.
In a rush? Here are the quick facts:
- UNCTAD published the Technology and Innovation Report 2025, revealing that AI could impact 40% of jobs globally.
- The agency estimates that the AI market will reach $4.8 trillion by 2033.
- Experts urge governments and policymakers to study the impact of AI, prioritizing human workers and including developing countries.
According to the official document shared by the agency, the use of AI can bring multiple benefits to workers, but it also carries risks, including job losses. UNCTAD also acknowledges that AI shows different patterns and behaviours compared to previous technological waves.
“AI can perform cognitive tasks and impact a far wider range of activities, conceivably affecting 40 per cent of global employment, transforming production processes and business operations,” states the document. “AI can bring productivity gains and increase the income of some workers, but also cause others to lose their jobs, reshaping workplace dynamics and labour demand.”
The organization explains that AI is already impacting jobs in multiple ways. It can enhance jobs, particularly in developing countries, by boosting productivity and creating new roles. At the same time, AI models—and combinations of emerging technologies—are expected to replace some human jobs. For instance, the report notes that AI can monitor financial transactions in the banking sector to detect fraud or anomalies more efficiently. In healthcare, AI can assist doctors in diagnosing cancer by analyzing radiographs and electrocardiograms.
Considering the global economy and development landscape, UNCTAD warns that power over frontier technologies is concentrated in a handful of locations and businesses. According to the study, most developing countries lag behind in research and development—except for China.
The United States and China hold 60% of AI patents and 33% of AI publications. In infrastructure, the U.S. takes the lead with around 50% of the world’s computing power and a third of the top supercomputers.
“There is a significant AI-related divide between developed and developing countries,” states the report. “This could widen existing inequalities and hinder efforts by developing countries to catch up.”
UNCTAD urges governments and policymakers to understand the complex dynamics of AI in order to ensure an equitable impact, support job transitions, and create AI solutions for developing countries—always prioritizing human workers.
The European Union has already implemented the first AI Act and recently published guidelines to prevent misuse of the technology. However, other nations still have a long way to go.

Opinion: AI Models Are Mysterious “Creatures,” and Even Their Creators Don’t Fully Understand Them
Anthropic’s recent study on how its Claude 3.5 Haiku model works promises groundbreaking revelations and a spark of insight into understanding how advanced AI technologies operate. But what do they mean when they say that LLMs are “living organisms” that “think”?
A few days ago, Anthropic released two papers with groundbreaking research on how Large Language Models (LLMs) work. While the technical developments were interesting and relevant, what caught my attention the most was the vocabulary used by the AI experts.
In the study On the Biology of a Large Language Model, the researchers compared themselves to biologists who study complex “living organisms” that have evolved over billions of years.
“Likewise, while language models are generated by simple, human-designed training algorithms, the mechanisms born of these algorithms appear to be quite complex,” wrote the scientists.
In the past few years, AI models have evolved significantly, and their pace has only accelerated in recent months. We’ve seen ChatGPT go from a text-only model to a speaking companion, to now a multidimensional agent that can also generate stunning Studio Ghibli-styled images.
But what if the current frontier AI models are reaching that sci-fi level of development, with reasoning so advanced that not even their creators can understand their processes and systems? There are multiple mysteries surrounding AI technologies that might be relevant to revisit—or dive into—in 2025.
The Spooky Black-Box Paradox of AI Models
There are multiple discussions on AI adoption and AI literacy, and how those who understand how generative AI models work are less likely to consider chatbots their “friends” or “magical” apps. However, there’s another debate—among experts and people more familiar with the technology—on whether LLMs should be compared to, or even considered, independent creations. In the latter discussion, a special ingredient plays a crucial role: a mystery known as “the AI black-box paradox.”
Deep learning systems are trained to recognize elements and trends in similar ways as humans do. Just like we teach children to recognize patterns and assign specific words to different objects, LLMs have been trained to make unique connections and build networks that get more and more complex as they “grow.”
Samir Rawashdeh, an Associate Professor of Electrical and Computer Engineering who specializes in artificial intelligence, explains that, just as with the study of human intelligence, it’s almost impossible to actually see how deep learning systems make decisions and reach conclusions. This is what experts call the “black box problem.”
AI Models Challenge Human Understanding
Anthropic’s recent study sheds light on the AI black-box situation by explaining how its model “thinks” in certain scenarios where our previous understanding was blurry or even completely wrong. Even though the study is based on the Claude 3.5 Haiku model, it allows experts to develop tools and analyze similar characteristics in other AI models.
“Understanding the nature of this intelligence is a profound scientific challenge, which has the potential to reshape our conception of what it means to ‘think,’” states the paper shared by Anthropic’s researchers.
However, applying the term “think” to AI technologies upsets certain experts in the industry and is part of the criticism of the research. A Reddit user explained why it annoys a group of people: “There is a lot of anthropomorphizing throughout the article that obfuscates the work. For example, it keeps using the word ‘think’ when it should say ‘compute’. We are talking about computer software, not a biological brain.”
While the “humanized” terms help non-technical people understand AI models better and raise debate in the community, the truth is that, whether we say “compute” or “think,” the same challenge remains: we don’t have a full understanding or complete transparency on how LLMs operate.
What To Expect From Advanced AI Models in the Near Future
Can you imagine ignoring the existence of advanced AI technologies like ChatGPT, DeepSeek, Perplexity, or Claude—now or in the near future? All signs point to the fact that there’s no turning back. Generative and reasoning AI have already transformed our daily lives, and they will only continue to evolve.
Almost every day at WizCase we report a new development in the industry—a new AI model, a new AI tool, a new AI company—that has the potential to make a big impact in our society. The idea of taking a break to first gain a better understanding of these advanced models and how they operate—or even slightly slowing down—seems impossible, given the rapid pace of the AI race and the involvement of governments and the world’s most powerful companies.
“[As] AI models exert increasing influence on how we live and work, we must understand them well enough to ensure their impact is positive,” states Anthropic’s paper. Even if it sounds a bit unrealistic, the researchers remain positive: “We believe that our results here, and the trajectory of progress they are built on, are exciting evidence that we can rise to meet this challenge.”
But how fast can these discoveries really move? The paper also notes that the results cover just a few areas and specific cases, and that it is not possible to draw more general conclusions from them. So, probably not fast enough.
While regulators introduce measures like the EU AI Act to demand more transparency, drawing accusations and rants from major tech companies for allegedly slowing down progress, powerful AI models continue to advance.
As a society, we must strive to find a balance between deepening our understanding of how these technologies operate and adopting them in ways that bring meaningful benefits and progress to our communities. Is this possible? The idea of just praying or hoping that these “creatures” remain “ethical” and “good” doesn’t seem so far-fetched right now.