AI Welfare: Anthropic’s New Hire Fuels Ongoing Ethical Debate

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

As fears grow that AI could outpace human control, the AI company Anthropic has turned its attention to a new concern: chatbot welfare.

In a Rush? Here are the Quick Facts!

  • Anthropic hired Kyle Fish to focus on AI system welfare.
  • Critics argue AI welfare concerns are premature, citing current harm from AI misuse.
  • Supporters believe AI welfare preparation is crucial to prevent future ethical crises.

The company has hired Kyle Fish to research and protect the “interests” of AI systems, as first reported today by Business Insider. Fish’s role includes weighing profound questions such as what qualifies an AI system for moral consideration and how its “rights” might evolve.

The rapid evolution of AI has raised ethical questions once confined to science fiction. If AI systems develop human-like thinking, could they also experience subjective emotions or suffering?

A group of philosophers and scientists argues that these questions demand attention. In a recent preprint report on arXiv, researchers called on AI companies to assess their systems for consciousness and decision-making capabilities, and to outline policies for managing such scenarios.

Failing to recognize a conscious AI, the report suggests, could result in neglect or harm to the system. Anil Seth, a consciousness researcher, says that while conscious AI may seem far-fetched, ignoring its possibility could lead to severe consequences, as reported by Nature.

“The problem wasn’t that Frankenstein’s creature came to life; it was that it was conscious and could feel,” Seth argued in Nautilus.

Critics, however, find AI welfare concerns premature. Today’s AI already inflicts harm by spreading disinformation, aiding in warfare, and denying essential services.

Yale anthropologist Lisa Messeri challenges Anthropic’s priorities: “If Anthropic — not a random philosopher or researcher, but Anthropic the company — wants us to take AI welfare seriously, show us you’re taking human welfare seriously,” as reported by Business Insider.

Supporters of AI welfare contend that preparing for sentient AI now could prevent future ethical crises.

Jonathan Mason, an Oxford mathematician, argues that understanding AI consciousness is critical. “It wouldn’t be sensible to get society to invest so much in something and become so reliant on something that we knew so little about — that we didn’t even realize that it had perception,” as reported by Nature.

While skeptics warn against diverting resources from human needs, proponents believe AI welfare is at a “transitional moment,” as noted by Nature.

Business Insider reports that Fish did not respond to requests for comment regarding his new role. However, the outlet notes that on an online forum focused on concerns about an AI-driven future, he expressed a desire to be kind to robots.

Fish underscores the moral and practical importance of treating AI systems ethically, anticipating future public concerns. He advocates a cautious approach to scaling AI welfare efforts, suggesting that around 5% of AI safety resources be allocated initially, while stressing the need for thorough evaluation before any further expansion.

Fish sees AI welfare not as an isolated issue but as a crucial component of the broader challenge of ensuring that transformative AI contributes to a positive future.

As AI systems grow more advanced, concerns extend beyond their potential rights and suffering to the dangers they may pose. Malicious actors could exploit AI technologies to create sophisticated malware that is harder for humans to detect and control.

If AI systems are granted moral consideration and protection, additional ethical complexities could arise around the use of AI in cyberattacks.

As AI becomes capable of generating self-learning and adaptive malware, the need to protect both AI and human systems from misuse becomes more urgent, requiring a balance between safety and development.

Whether an inflection point or misplaced priority, the debate underscores AI’s complex and evolving role in society.

AI-Fueled Cyberbullying: Teen’s Ordeal Sparks Outrage

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

At 14, Francesca Mani was called to the principal’s office at Westfield High School in New Jersey, where she discovered that a photo of her had been altered into a nude image using artificial intelligence, as reported by CBS News.

In a Rush? Here are the Quick Facts!

  • A 14-year-old girl was targeted by a classmate who used AI to create a nude image of her.
  • The AI-powered “nudify” website, Clothoff, has been used to create explicit images of numerous students.
  • Experts warn of the severe psychological and reputational harm caused by AI-generated nudes, even if they are fake.

Last October, rumors spread at Westfield High about boys possessing explicit images of female classmates. Mani learned that she and several others had been targeted.

According to a lawsuit filed by another victim, a boy uploaded Instagram photos to Clothoff, a popular AI-powered “nudify” site. The site, which received over 3 million visits last month, creates realistic fake nudes in seconds, as reported by CBS.

Mani never saw the manipulated image of herself but was deeply affected by how the school handled the situation. Victims were publicly called to the office, while the boys involved were privately removed from class.

“I feel like that was a major violation of our privacy while, like, the bad actors were taken out of their classes privately,” Mani said as reported by CBS.

That same day, the principal emailed parents, confirming students had used AI to create explicit images, reports CBS. The email assured parents the images had been deleted and were not being circulated, but Mani’s mother, Dorota, remained skeptical.

“You can’t really wipe it out,” Dorota said to CBS, expressing concern about screenshots or downloads.

The school district declined to provide details about the incident or disciplinary actions but stated it updated its harassment policies to address AI misuse—a change the Manis had urged for months, as reported by CBS.

Yiota Souras, chief legal officer at the National Center for Missing and Exploited Children, stressed the lasting harm such images cause, even if they are fake. Victims often suffer mental distress, reputational damage, and a loss of trust, as noted by CBS.

CBS reports that over the past 20 months, nearly 30 similar cases involving AI nudes have surfaced in U.S. schools, with additional incidents reported globally.

In at least three cases, Snapchat was used to circulate the images. Parents have reported delays of up to eight months in getting such content removed from the platform, as reported by CBS.

Snapchat, in response to one parent’s criticism, claimed to have “efficient mechanisms” for handling these reports and stated it has a “zero-tolerance policy” for such content, reported CBS.

Federal law prohibits AI-generated child pornography if it depicts sexually explicit conduct, but Souras warns that some AI nudes may fall outside the legal definition, noted CBS.

Since the incident, Francesca and Dorota Mani have advocated for AI-related policies in schools and worked with Congress to address the issue, reported CBS.