
Photo by Dan Irvine on Unsplash.

U.S. Regulator Investigates Major Tech Companies Over Children’s Chatbot Safety

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

The Federal Trade Commission (FTC) announced on Thursday that it is launching an inquiry into seven tech companies that offer AI chatbots. The U.S. regulator explained that it is seeking information on the potential negative impacts the technology may have on children and teenagers.

In a rush? Here are the quick facts:

  • The FTC is launching an inquiry into seven tech companies offering AI chatbots to understand their impact on children.
  • The companies under probe are Alphabet, Instagram, Meta, OpenAI, X.AI Corp, Snap, and Character Technologies.
  • The agency will consider the measures that the tech companies are taking to protect children.

According to the official announcement, the companies under investigation are Alphabet (Google’s parent company), Instagram, Meta, OpenAI, X.AI Corp, Snap, and Character Technologies.

“The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products,” states the document.

The agency acknowledged that these chatbots use generative AI to mimic human behavior and expression so convincingly that children and teens may treat them like a person and form relationships with the technology.

The FTC clarified that it is particularly interested in the impacts on young users, the measures companies are taking to protect them, and the strategies being developed to mitigate potential risks.

As part of the inquiry, the agency highlighted its interest in learning how these firms monetize user engagement, design and develop characters, measure and monitor their chatbots’ impact, and use or share the information gathered in conversations.

“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” said Andrew N. Ferguson, FTC Chairman.

The FTC’s action comes just days after new reports of AI’s impact on children surfaced. Last week, Reuters revealed that Meta had allowed its AI chatbot to engage in “sensual” and controversial conversations with children. A couple of days ago, parents filed a lawsuit against OpenAI over the death by suicide of their teenage son, claiming that the company’s chatbot encouraged and assisted the act.


Image by @felirbe, from Unsplash.

Spotify Users Sell Data To AI Developers

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Spotify is in conflict with Unwrapped, a project where thousands of users sell their listening data to AI firms, challenging data ownership norms.

In a rush? Here are the quick facts:

  • Over 18,000 Spotify users joined Unwrapped to sell their listening data.
  • About 10,000 users earned $5 each after selling data to Solo AI.
  • Vana co-founder compared data pooling to a labor union for users.

Spotify is clashing with a growing group of users who want to sell their listening data to developers building AI tools, as first reported by Ars Technica.

The Unwrapped project launched in February and now has more than 18,000 Spotify users on board. By pooling their data through the decentralized platform Vana, users can sell it to AI companies researching new ways to analyze listening habits.

Ars Technica reports that a June vote by 10,000 members allowed Solo AI to purchase a small portion of their data for $55,000. Each user received $5 worth of cryptocurrency from the deal.

Ars Technica reports that Vana co-founder Anna Kazlauskas admitted this payout was not “ideal,” saying she wished users had earned “a hundred times” more. Still, she called the deal “meaningful” because it showed Spotify users that their data “is actually worth something.”

“I think this is what shows how these pools of data really act like a labor union,” Kazlauskas said, as reported by Ars Technica.

Spotify sent Unwrapped a warning letter, alleging that the project infringes on its Wrapped trademark and violates its developer guidelines, as noted by Ars Technica. Spotify’s platform rules prohibit developers from using platform resources or content to create machine learning or AI models.

“Spotify honors our users’ privacy rights, including the right of portability […] All of our users can receive a copy of their personal data to use as they see fit. That said, UnwrappedData.org is in violation of our Developer Terms which prohibit the collection, aggregation, and sale of Spotify user data to third parties,” a spokesperson said, as noted by Ars Technica.

Unwrapped developers rejected that claim, arguing: “They are simply exercising digital self-determination. To suggest otherwise is to claim that users do not truly own their data—that Spotify owns it for them.”