
Image by Good Free Photos, from Unsplash
Class Of 2025: The First Fully AI-Educated College Graduates
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
College seniors graduating this year are the first to complete all four years in the age of AI, where ChatGPT redefined learning.
In a rush? Here are the quick facts:
- ChatGPT launched in 2022, reshaping college education within three years.
- 92% of UK students have used generative AI tools.
- Harvard found two-thirds of undergraduates use AI weekly.
Students returning to campus this fall have spent most of their education during the era of generative AI, as noted in a report by The Atlantic.
ChatGPT launched in late 2022, just as the current graduating class was beginning its freshman year. With the new technology, higher education underwent a transformation faster than anyone expected.
By 2024, nearly two-thirds of Harvard undergraduates were using AI weekly, and a UK survey found 92 percent of students had tried it, as reported by The Atlantic.
“I cannot think that in this day and age that there is a student who is not using it,” said Vasilis Theoharakis, a professor at the Cranfield School of Management, as reported by The Atlantic.
For many students, AI is less about cheating and more about survival. “It can pretty much do everything,” said WashU senior Harrison Lieber. He admitted to The Atlantic that if seven assignments are due in five days, AI can help “for the cost of a large pizza.”
Other students see it as a way to balance busy lives. One recalled, “Sometimes I want to play basketball. Sometimes I want to work out.” Senior Da’Juantay Wynter, who juggles leadership roles on campus, said he prefers writing his own essays but sometimes relies on AI for summaries: “It’s always in the back of my mind: Well, AI can get this done in five seconds,” as reported by The Atlantic.
Professors are scrambling to respond with handwritten assignments, in-class essays, or moral appeals. Some even use AI themselves to save time, as noted by The Atlantic. But as Lieber pointed out, students want project-based work that “emulate[s] the real world.”
Three years after ChatGPT’s debut, higher education has been permanently reshaped. However, this widespread reliance on AI may be problematic.
A recent study shows how generative AI tools can impair critical thinking by making analytical tasks easier. The researchers reported that users often focus on verifying AI outputs rather than gathering and synthesizing information independently, and found that this behavior significantly diminishes problem-solving skills.
Additionally, a systematic review of educational research shows that over-reliance on AI often leads students to accept AI-generated recommendations without question, making them less able to spot AI errors.
Without careful oversight or training, students who rely heavily on AI risk cognitive offloading, where the mental effort of learning and reasoning is outsourced to technology rather than exercised independently.

Photo by Adrian González on Unsplash
Anthropic Says Its AI Models Can End Conversations With Users to Protect Themselves
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Anthropic said on Friday that it has given its AI models, Claude Opus 4 and 4.1, the ability to end conversations with users. The startup explained that the new feature would be used in rare cases where it is necessary to prevent harm directed toward the AI model itself.
In a rush? Here are the quick facts:
- Anthropic gave Claude Opus 4 and 4.1 the ability to end conversations with users to protect themselves.
- The new feature will be used as a last resort only when users insist on engaging in harmful interactions.
- The ability is part of Anthropic’s AI welfare program.
According to the article published by Anthropic, the company released this update as part of its AI welfare program, a new area of AI research that considers an AI system’s “interests” or well-being. It clarified that while the potential moral status of AI systems is “uncertain,” it is researching ways to mitigate risks to its AI models’ welfare.
“We recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces,” wrote the company. “This ability is intended for use in rare, extreme cases of persistently harmful or abusive user interactions.”
Anthropic explained that Claude Opus 4, the company’s most advanced model, released with safety warnings, showed a preference during testing for avoiding harm, such as creating sexual content involving children or providing information that could lead to acts of terror or violence.
In cases where users repeatedly requested that Claude engage in harmful conversations, the chatbot refused to comply and tried to redirect the discussion. Now, the chatbot can refuse to answer and end the chat so users cannot continue that conversation, except in cases of imminent risk.
The company clarified that the conversation-ending ability will be used only as a last resort, that most users will not be affected by this update, and that users can immediately start a new conversation in another chat.
“We’re treating this feature as an ongoing experiment and will continue refining our approach,” wrote Anthropic. “If users encounter a surprising use of the conversation-ending ability, we encourage them to submit feedback by reacting to Claude’s message with the thumbs reaction or using the dedicated ‘Give feedback’ button.”
The startup has previously worked on other projects related to AI welfare. Last year, Anthropic hired researcher Kyle Fish to study and protect the “interests” of AI models.