
Image by Pheladiii, from Pixabay
Father Shocked After AI Chatbot Impersonates Murdered Daughter
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Jennifer Crecente was murdered by her ex-boyfriend in 2006.
- Her identity was used without permission to create an AI chatbot.
- Character.AI removed the chatbot after being notified by the family.
Yesterday, The Washington Post reported a disturbing incident involving Drew Crecente, whose murdered daughter Jennifer was impersonated by an AI chatbot on Character.AI.
Crecente discovered a Google alert that led him to a profile featuring Jennifer’s name and yearbook photo, falsely describing her as a “video game journalist and expert in technology, pop culture and journalism.”
For Drew, the inaccuracies weren’t the main issue—the real distress came from seeing his daughter’s identity exploited in such a way, as noted by The Post.
Jennifer, who was killed by her ex-boyfriend in 2006, had been re-created as a “knowledgeable and friendly AI character,” with users invited to chat with her, noted The Post.
“My pulse was racing,” Crecente told The Post. “I was just looking for a big flashing red stop button that I could slap and just make this stop,” he added.
The chatbot, created by a user on Character.AI, raised serious ethical concerns regarding the use of personal information by AI platforms.
Crecente, who runs a nonprofit in his daughter’s name aimed at preventing teen dating violence, was appalled that such a chatbot had been made without the family’s permission. “It takes quite a bit for me to be shocked, because I really have been through quite a bit,” he said to The Post. “But this was a new low,” he added.
The incident highlights ongoing concerns about AI’s impact on emotional well-being, especially when it involves re-traumatizing families of crime victims.
Crecente isn’t alone in facing AI misuse. Last year, The Post reported that TikTok content creators used AI to mimic the voices and likenesses of missing children, creating videos of them narrating their own deaths, which sparked outrage from grieving families.
Experts are calling for stronger oversight of AI companies, which currently have wide latitude to self-regulate, noted The Post.
Crecente didn’t interact with the chatbot or investigate its creator but immediately emailed Character.AI to have it removed. His brother, Brian, shared the discovery on X, prompting Character.AI to announce the chatbot’s deletion on Oct. 2, reported The Post.
Jen Caltrider, a privacy researcher at the Mozilla Foundation, criticized Character.AI’s passive moderation, noting that the company allowed content violating its own terms to remain online until it was flagged by someone harmed.
“That’s not right,” she said to The Post, adding, “all the while, they’re making millions.”
Rick Claypool, a researcher at Public Citizen, emphasized the need for lawmakers to focus on the real-life impacts of AI, particularly on vulnerable groups like families of crime victims.
“They can’t just be listening to tech CEOs about what the policies should be … they have to pay attention to the families and individuals who have been harmed,” he said to The Post.
Now, Crecente is exploring legal options and considering advocacy work to prevent AI companies from re-traumatizing others.
“I’m troubled enough by this that I’m probably going to invest some time into figuring out what it might take to change this,” he told The Post.

Image from Freepik
Family of High School Student Sues Over AI Cheating Allegation
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- The family argues AI usage was not addressed in the school’s handbook.
- Student received detention, grade reduction, and missed National Honor Society eligibility.
- The lawsuit calls for clearer AI policies and teacher training in schools.
The family of a Hingham High School student is suing the school after their son was accused of cheating for using AI to complete a history paper, as first reported yesterday by WCVB.
The lawsuit, filed by parents Jennifer and Dale Harris, could lead to significant changes in AI policies across schools in Massachusetts, notes WCVB. According to the Harrises, their son, a high-achieving student aiming for top universities like Stanford and MIT, was unfairly penalized for his use of AI.
Jennifer Harris, a writer, and Dale Harris, a school teacher, argue that the school punished their son for an infraction not clearly outlined in the student handbook. The family claims that the handbook did not address AI usage until after the incident, when the school updated its policies.
“They told us our son cheated on a paper, which is not what happened,” Jennifer Harris said. “They basically punished him for a rule that doesn’t exist,” as reported by WCVB.
The student, who had received a perfect score on his ACTs, was given detention and had his grade reduced. This punishment prevented him from gaining admission to the National Honor Society and put his college applications at risk, noted WCVB.
In their lawsuit, the Harris family contends that their son used AI as a research tool, not to write the paper, and that his punishment was unjust.
“In my lay opinion, they violated his civil rights,” Dale Harris said as reported by WCVB. “They treated him and punished him more severely than other students,” he added.
The case has also raised broader questions about the reliability of AI detection software, which many schools use to flag AI-generated content.
A report by MIT highlighted that this detection technology is far from foolproof, with high error rates that have led instructors to falsely accuse students of misconduct. OpenAI, the company behind ChatGPT, even shut down its own AI detection software due to its poor accuracy.
“There’s a wide gulf of information out there that says AI isn’t plagiarism,” the Harris family’s lawyer, Peter Farrell, said to WCVB, calling for clearer policies in schools.
The school’s handbook, reviewed by ABC News, defines plagiarism as “unauthorized use or close imitation of the language and thoughts of another author, including Artificial Intelligence.”
The handbook further states that a teacher who uncovers cheating must assign a failing grade for the assignment and notify the assistant principal for possible further action. However, the handbook lacks specific guidance on how AI can or cannot be used in academic work.
Jennifer Harris is urging the school to clarify its policies on AI and ensure that teachers understand and can effectively communicate these rules to students, noted ABC.
While the school district declined to comment on the lawsuit, the filing also calls for administrators to undergo training in the use of AI in education. The Harrises believe that while their son’s punishments cannot be undone, policy reforms could help prevent similar issues in the future.
“You can’t undo some of these punishments,” Dale Harris said to ABC. “But there are some things you can fix right now and do the right thing,” he added.
The case has sparked discussions about how schools should address the rapid integration of AI in academic environments, and it may prompt schools statewide to reconsider their policies on the use of technology in education.