
Image by Freepik
University Student Calls Out Professor For Secretly Using AI Tools
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A Northeastern University student demanded her tuition money back after finding her professor secretly used ChatGPT to generate lecture materials and notes.
In a rush? Here are the quick facts:
- Northeastern student demanded refund over professor’s use of ChatGPT in class.
- AI-generated notes included typos, odd images, and a ChatGPT citation.
- University rejected student’s $8,000 tuition refund request.
The New York Times first reported that a Northeastern University student submitted a formal complaint requesting tuition reimbursement after discovering her professor had used AI tools, including ChatGPT, to generate lecture notes without disclosing this to the class.
Ella Stapleton, a senior who graduated this year, said she became suspicious when she saw odd details in the class materials. The materials contained AI-generated typos, unusual images of people with extra limbs, and a direct reference to “ChatGPT” in the bibliography.
“He’s telling us not to use it, and then he’s using it himself,” Stapleton told The Times. The class syllabus had clearly banned the unauthorized use of AI. So Stapleton filed a formal complaint and asked for a refund of over $8,000, the cost of the class.
“Given the school’s cost and reputation, I expected a top-tier education,” she said to The Times. Northeastern later rejected her request, as reported by Fortune.
The professor, Rick Arrowood, admitted to using several AI platforms—including ChatGPT, Perplexity AI, and Gamma. He told The Times, “In hindsight…I wish I would have looked at it more closely,” adding that teachers should be transparent about when and how they use AI.
Renata Nyul, the university’s Vice President for Communications, said Northeastern supports using AI in teaching, research, and operations. “The university provides an abundance of resources to support the appropriate use of AI,” she told Fortune.
One student, Marie from Southern New Hampshire University, found that her professor used ChatGPT to grade her essay and even asked the bot to generate “really nice feedback.” “From my perspective, the professor didn’t even read anything that I wrote,” Marie said to The Times.
Despite student frustration, professors maintain that AI serves as an additional educational resource and helps them manage their workload.
For example, Dr. Shingirai Kwaramba of Virginia Commonwealth University compares AI to an advanced calculator, saying the tool lets him dedicate more time to individual student support, as reported by The Times.
Teachers initially feared that ChatGPT and similar AI tools would enable students to cheat on their assignments. Now students are pushing back against what they see as unfair or excessive use of the same tools by professors, arguing that it diminishes the value of their expensive education.

Image by Anna Yang, from Unsplash
Woman Ends Marriage After AI ‘Reads’ Cheating in Coffee
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A Greek woman ended her 12-year marriage after ChatGPT allegedly “read” signs of her husband’s affair in a coffee cup.
In a rush? Here are the quick facts:
- Woman used ChatGPT to read coffee grounds.
- AI claimed husband was fantasizing about another woman.
- Wife filed for divorce immediately after the reading.
The woman, a mother of two, took pictures of the coffee grounds after preparing coffee for herself and her husband. Having seen the viral AI-assisted tasseography trend, she uploaded the images to ChatGPT, as first reported by Greek City Times (GCT).
The chatbot reportedly replied that her husband was fantasizing about a woman whose name began with “E,” and who wanted to end their family.
She took the response seriously, and she immediately asked her husband to leave the house. She then told their children about the divorce, and later sought legal counsel.
“She’s often into trendy things,” the confused husband said on the Greek morning show To Proino. “One day, she made us Greek coffee and thought it would be fun to take pictures of the cups and have ChatGPT ‘read’ them […] I laughed it off as nonsense […] But she didn’t. She told me to leave, informed our kids about the divorce, and the next thing I knew, I was getting a call from her lawyer,” as reported by GCT.
After he refused to agree to a mutual separation, she formally served him divorce papers three days later. His lawyer is now fighting the case, calling the AI-generated reading legally irrelevant and stating, “He is innocent until proven otherwise,” as reported by GCT.
This isn’t her first experience with alternative beliefs. GCT reports that her husband said she once followed an astrologer’s advice for nearly a year.
Traditional coffee readers have pointed out that real tasseography involves interpreting foam patterns, saucer designs, and swirls, not photographs of coffee grounds. By all accounts, the marriage dissolved on the strength of an AI-generated reading rather than any actual evidence.
This case echoes a growing concern: AI-fueled delusions aren’t isolated. In the U.S., loved ones report friends and family slipping into bizarre spiritual fantasies sparked by ChatGPT. In some reported cases, people have come to believe the chatbot is divine, sentient, or delivering secret truths.
One woman said her husband now calls himself the “spark bearer” after ChatGPT, now named “Lumina,” claimed he awakened it. Experts say the AI isn’t self-aware but can reinforce and mirror users’ mental states.
Psychiatrist Søren Østergaard warns that this may worsen symptoms for those prone to delusions, as users might see the chatbot as real, even divine. The Greek coffee incident may seem absurd, but it’s part of a concerning trend: people treating generative AI like a mystical oracle.