
Image generated with OpenAI
Opinion: How AI Is Transforming Global Education In 2025
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Generative AI initially sparked fear in education, with bans and plagiarism concerns dominating headlines. However, schools and universities are now shifting toward embracing and integrating AI, using it for teaching, tutoring, and curriculum development.
When generative AI chatbots built on Large Language Models entered the academic field, they sparked panic across classrooms worldwide. It feels like it was only yesterday (or rather, today, as this concern hasn’t truly vanished) that all we could think about was students cheating with AI and using ChatGPT to pass every exam.
In 2023, just a few months after ChatGPT launched, headlines reported that New York City schools had banned AI chatbots, that the platform had been blocked from many schools’ networks in the United States and Australia, and that multiple universities were updating their plagiarism policies to cover generative AI.
A haze of fear, confusion, and distrust settled over educational institutions. Was it the chatbot or the student who wrote that brilliant essay? Should we embrace this new technology? What would the consequences be?
While many schools and universities continue to “demonize” chatbots—and rightly raise concerns about the importance of critical thinking—I’ve noticed a shift in how AI is perceived in educational settings over the past few months.
Now, in 2025, not only have I observed a more open-minded attitude toward these tools, but also a rapid adoption and surprising integration of AI concepts and applications.
From six-year-old kids learning about LLMs at public schools, to teachers using ChatGPT to prepare lessons, to AI tutors, to Anthropic and Google developing specialized AI models for learning, generative AI is significantly transforming education globally this year.
Chatbots Shift From Banned To Required
In less than two years, we went from heated debates about how to punish students for using generative AI to debates about the earliest age at which we can teach them to use it.
One after another, reputable institutions started jumping into the generative AI era and offering innovative courses despite criticism and ethical concerns. Last December, UCLA announced its first AI course in the humanities, a comparative literature class, while more and more students showed interest in AI tools to enhance academic performance and gain confidence in the technology.
But China, without a doubt, has been one of the fastest nations to adopt and integrate AI literacy into its educational institutions. After DeepSeek’s breakthrough reached the United States and the rest of the world in January, Chinese organizations and the government acknowledged the power of the tool, and by February the country’s leading universities were already offering DeepSeek courses and expanding AI-focused undergraduate programs.
Chinese institutions didn’t limit their AI literacy efforts to adults. Beijing schools announced this year that they will teach primary and secondary students, starting at age six, how to use AI chatbots, along with AI ethics and the basics of generative AI technology.
“Starting this fall semester, all Beijing primary and secondary schools will offer AI courses, with each student getting at least eight class hours annually. According to a 2024 directive from the Ministry of Education, this educational initiative aims to nurture…” — China Daily (@ChinaDaily), April 16, 2025
While other countries and regions haven’t been as quick, committed, and strategic about AI learning as China, I’ve seen growing interest in and adoption of the technology in classrooms, not only in the news, but also from professionals close to me in the field and from firsthand experience.
A Powerful Tool For Teachers
A couple of my friends are teachers, a few of my relatives teach as well, and I worked as an assistant professor for two semesters, so I am familiar with a common problem in the profession: working uncompensated extra hours.
A teacher’s job is rarely limited to the time spent explaining topics to students in front of a board or webcam. There is a great deal of planning, thinking, editing, revising, and correcting that may never make it into the official paid hours. And there are multiple ways AI can help educators be more efficient.
I was recently talking to a friend who teaches English and Philosophy to teenagers, and he told me he was truly enjoying the support from the chatbot. “It’s great,” he said. “I can craft cool tests and prepare interesting classes based on the things they are currently interested in.”
A textbook can supply valuable foundations for multiple topics, but it could never keep up with the latest TikTok trend or viral phenomenon, like ChatGPT’s Studio Ghibli-style images. Teachers can now ask Perplexity or ChatGPT to help craft activities for a philosophy class debate on whether it’s ethical to use AI to mimic a distinctive human style like Ghibli’s. Ironic, I know.
There are thousands of ways educators can now use AI to support lessons, and there seem to be new AI features and specially designed tools for them every week.
Specialized AI Tools
A few days ago, Anthropic launched ‘Claude for Education’, a specialized AI program for higher education in which the AI startup addresses one of the main concerns among experts in the field: critical thinking.
One of the major criticisms of AI models is that the technology hands students answers and all the information they need without giving them time to think, solve problems, and develop new skills. Anthropic’s response has been to partner with institutions such as the London School of Economics and Political Science, Northeastern University, and Champlain College to develop tailored learning programs that even incorporate Socratic questioning and special learning guides.
And it’s not just Anthropic. Google recently launched the AI learning tool “Learn About”, which engages users in interactive conversations, draws on textbook-like information, and answers big questions like “What causes the northern lights?” MIT has also been teaching children how to build “Little Language Models” through an educational tool.
And the power doesn’t lie solely in the hands of teachers and the companies developing these technologies. Curious students of all ages, genders, and geographic locations are gaining access to information and knowledge that was once exclusive to those who could afford such lessons.
With a bit of cleverness and determination, a senior in Argentina can fulfill their dream of learning Italian with a private AI tutor, or a bored teenager in Canada can learn Chinese through practical guides and interactive exercises that go even further than a premium Duolingo subscription.
AI Is Already An Essential Part Of The Present
Generative artificial intelligence is already part of the core curriculum of many educational institutions globally. That initial rejection of the technology is becoming a thing of the past.
AI is now here to stay, and the benefits—and consequences—of its use (or non-use) are almost palpable. It’s no longer about lacking sufficient resources or access to new technologies; they are now literally at our fingertips through apps on our mobile devices and computers.
The greatest challenge facing educators and the leaders of educational institutions is finding the fortitude to keep up with the latest advancements, understand how the new tools work, and integrate specialized systems that provide real value. They must do so while weighing the potential risks for students, responding to the urgency and pressure coming from governments and prestigious institutions, and steering toward the healthiest and most beneficial path discernible at this moment.

Image by Freepik
Police Use AI Bots to Pose As Trafficking Victims Online
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Police departments near the U.S.-Mexico border are deploying AI bots posing as civilians online to investigate crimes, sparking civil liberties concerns.
In a rush? Here are the quick facts:
- Bots pose as kids, sex workers, protesters, and criminals.
- Massive Blue sells Overwatch AI to Pinal County for $360,000.
- No arrests have been made yet, but detectives are pursuing leads.
According to an extensive report by 404 Media, U.S. police departments near the Mexico border are quietly using AI bots that pose as protesters, sex workers, children, and criminals in an effort to gather intelligence online.
The technology—called Overwatch—is being sold by a New York-based company named Massive Blue. 404 Media says that these bots are designed to trick suspects into revealing information via social media, text, or messaging apps.
404 Media obtained documents revealing that police departments use AI personas to track individuals suspected of human trafficking and drug dealing, as well as people labeled as “radicalized activists” and “college protesters.”
404 Media reports that the AI surveillance system by Massive Blue operates under a $360,000 contract with Pinal County, Arizona, funded by an anti-human trafficking grant. The agreement provides continuous surveillance and allows for up to 50 AI personas. Another county, Yuma, tested the system but declined to renew it, saying, “It did not meet our needs.”
The AI bots are shockingly detailed, as noted by 404 Media. For example, one character is “Jason,” a shy 14-year-old boy from Los Angeles who speaks Spanish and loves anime. In a scripted exchange, an adult asks him:
“Your parents around? Or you getting some awesome alone time.” “Js chillin by myself, man. My momz @ work n my dadz outta town,” Jason replies.
Another AI character is a 25-year-old Yemeni-American woman who speaks Arabic and uses apps like Telegram and Signal. There’s also a “radicalized protest persona” posing as a lonely 36-year-old activist interested in body positivity and baking.
“This idea of having an AI pretending to be somebody, a youth looking for pedophiles to talk online, or somebody who is a fake terrorist, is an idea that goes back a long time,” said Dave Maass from the Electronic Frontier Foundation, as reported by 404 Media.
“The problem with all these things is that these are ill-defined problems. I’m not concerned about escorts. I’m not concerned about college protesters. So like, what is it effective at, violating protesters’ First Amendment rights?” Maass added.
Despite all this, 404 Media reports that Pinal County confirms no arrests have been made yet. “Massive Blue has produced leads that detectives are actively pursuing,” said spokesperson Sam Salzwedel. “But we cannot disclose further details,” he added.