
Image by Kenny Eliason on Unsplash
Teachers Admit Defeat in Race Against ChatGPT in Classrooms
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
University assessment methods will inevitably have to change, as artificial intelligence creates obstacles that teachers and educational institutions have yet to solve.
In a rush? Here are the quick facts:
- Teachers struggle to design coursework resistant to ChatGPT and generative AI.
- Oral exams help, but workloads make them unrealistic for large classes.
- AI detection tools often fail as technology rapidly evolves.
A new study by Australian researchers demonstrates that the challenge is not simply preventing cheating but something far broader; the authors call it a “wicked problem” with no simple solution.
It is now common knowledge that AI tools can produce articulate essays in seconds, undermining traditional exam and coursework formats. Universities have responded by introducing new, stricter exams and AI detection software.
However, as the researchers note, this technology is rapidly evolving, and teachers struggle to keep up. Indeed, a recent study found that AI models are improving exponentially, doubling their power every 7 months.
One teacher admitted: “Every time I think I’ve adjusted the assessments to make them AI-resistant, AI improves.”
The study interviewed 20 Australian university teachers who had redesigned their assessments. Many described impossible trade-offs. One noted to The Conversation: “We can make assessments more AI-proof, but if we make them too rigid, we just test compliance rather than creativity.” Another added: “Have I struck the right balance? I don’t know.”
The result is a heavier workload for teachers. Oral exams are more AI-resistant, but they are also far more time-consuming.
As one explained: “250 students by […] 10 min […] it’s like 2,500 min, and then that’s how many days of work is it just to administer one assessment?”
Others said AI made their years of course design feel suddenly obsolete: “I’ve spent so much […] time on developing this stuff. They’re really good as units, things that I’m proud of. Now I’m looking at what AI can do, and I’m like, what […] do I do? I’m really at a loss, to be honest.”
The researchers argue that instead of chasing perfect fixes, universities should allow teachers “permission to compromise,” recognizing that all solutions involve trade-offs. Without this support, the weight of responsibility risks crushing educators already stretched thin.

Photo by Towfiqu barbhuiya on Unsplash
Anthropic’s Economic Index Report Shows Uneven AI Adoption
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
Anthropic published its Economic Index report on Monday, revealing uneven AI adoption across the world and highlighting the faster uptake of AI compared to previous technological innovations.
In a rush? Here are the quick facts:
- Anthropic published its Economic Index Report on Monday.
- 40% of U.S. employees reported using AI at work this year, up from 20% in 2023.
- Countries with higher incomes use the technology more, possibly increasing global economic inequality.
According to Anthropic’s analysis, the latest report on the use of its chatbot Claude reveals different behaviors and purposes among users and businesses. One of the main findings is the speed at which generative AI is being adopted.
In the United States, just 20% of employees reported using the chatbot at work in 2023, but that number doubled to 40% this year. The report compares this pace with earlier technologies: while electricity and computers took decades to become widespread, and even the Internet required several years to diffuse, generative AI adoption has doubled within only two years.
“Such rapid adoption reflects how useful this technology already is for a wide range of applications, its deployability on existing digital infrastructure, and its ease of use—by just typing or speaking—without specialized training,” states the document.
The report also shows how usage patterns have shifted. While coding remains the primary purpose for Claude—at 36% of users—educational tasks now account for 12.4%, about three percentage points higher than in the previous report. Scientific tasks also rose, from 6.3% to 7.2%.
Claude is now available across 150 countries, and this report marks the first time Anthropic has compared usage across regions. Using the Anthropic AI Usage Index (AUI), the analysis found that higher-income countries tend to use the technology more intensively and for augmentation, rather than mere automation.
“If the productivity gains are larger for high-adoption economies, current usage patterns suggest that the benefits of AI may concentrate in already-rich regions—possibly increasing global economic inequality and reversing growth convergence seen in recent decades,” states the document.
On the business side, Anthropic noted that companies accessing Claude via API rely more heavily on it for coding, while those using the web platform lean toward writing and educational purposes.
OpenAI also released a study on ChatGPT usage this week, offering insights into how its 700 million weekly active users interact with the tool.