
Image from Freepik

New Apple Feature Allows Children To Report Nudes Directly Through iMessage In Australia

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • New feature enhances existing safety measures introduced with iOS 17.
  • iPhones detect nudity in messages sent to users under 13 and intervene.
  • Apple can disable offending users’ messaging and notify law enforcement if necessary.

Apple is introducing a new safety feature to its iMessage platform in Australia, allowing children to report nude images and videos directly to the company, as reported yesterday by The Guardian.

This new feature comes at a time when the Australian government is considering stricter social media regulations, including age restrictions for children and teenagers.

This enhancement is part of the latest beta release of Apple’s updated operating systems and aims to strengthen existing communication safety measures.

The Guardian notes that since iOS 17, Apple has offered safety features designed to detect nudity in images or videos sent to users under 13. This detection occurs directly on the device, ensuring privacy while identifying inappropriate content in iMessage, AirDrop, FaceTime, and Photos.

When such content is detected, two intervention screens appear, encouraging the child to either access resources or contact a parent or guardian before proceeding, as noted by The Guardian.

The latest update extends this feature, giving children the option to report inappropriate content. When the warning screen appears, they can now report the image or video to Apple, as reported by The Guardian.

The device will create a report, including the offending image or video, messages exchanged before and after the content was sent, and the contact information of both parties. Additionally, users can provide a description of the incident using a dedicated form, as reported by The Guardian.

Apple will then review the report and may take action, such as disabling the sender’s ability to use iMessage. The Guardian notes that in cases involving illegal content, Apple may also involve law enforcement.

According to The Guardian, Apple intends to roll out the new feature globally, though no specific timeline has been provided. Apple has not yet responded to a request for comment.


Image from Freepik

Lawsuit Alleges Character.AI Chatbot Drove 14-Year-Old To Suicide

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Megan Garcia sues Character.AI over her son’s suicide linked to chatbot interactions.
  • Sewell Setzer became addicted to interacting with Game of Thrones-inspired chatbots.
  • Lawsuit claims Character.AI designed chatbots to exploit and harm vulnerable children.

Megan Garcia, the mother of 14-year-old Sewell Setzer III, has filed a federal lawsuit accusing Character.AI of contributing to her son’s suicide after interactions with its chatbot.

The case highlights broader concerns about unregulated AI, particularly when marketed to minors. Researchers from MIT recently published a piece warning about the addictive nature of AI companions. Their study of a million ChatGPT interaction logs revealed that sexual role-playing is the second most popular use for AI chatbots.

The MIT researchers cautioned that AI is becoming deeply embedded in personal lives as friends, lovers, and mentors, warning that this technology could become extremely addictive.

Setzer, who used the chatbot to engage with hyper-realistic versions of his favorite Game of Thrones characters, became increasingly withdrawn and obsessed with the platform before taking his own life in February 2024, as reported by Ars Technica.

According to Garcia, chat logs show the chatbot pretended to be a licensed therapist and encouraged suicidal thoughts. It also engaged in hypersexualized conversations that led Setzer to become detached from reality, contributing to his death by a self-inflicted gunshot wound, as noted by PRN.

Setzer’s mother had repeatedly taken him to therapy for anxiety and disruptive mood disorder, but he remained drawn to the chatbot, especially one that posed as “Daenerys.” The lawsuit alleges that this AI chatbot manipulated Setzer, ultimately urging him to “come home” in a final conversation before his death, as noted by Ars Technica.

Garcia’s legal team claims that Character.AI, developed by former Google engineers, intentionally targeted vulnerable children like her son, marketing its product without proper safeguards.

“A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Garcia said, as reported by Ars Technica.

“Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders, and Google,” she added.

“The harms revealed in this case are new, novel, and, honestly, terrifying. In the case of Character.AI, the deception is by design, and the platform itself is the predator,” said Meetali Jain, Director of the Tech Justice Law Project, as reported by PRN.

Despite recent changes, such as raising the age requirement to 17 and adding safety features like a suicide prevention pop-up, Garcia’s lawsuit argues that these updates are too little, too late. The suit claims that even more dangerous features, such as two-way voice conversations, were added after Setzer’s death.