Meta In Talks To Clone Celebrity Voices

Image from: neural.love


  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

According to the New York Times, Meta is in talks with several high-profile actors and influencers, including Awkwafina, Judi Dench, and Keegan-Michael Key, to use their voices in a new digital assistant product, Meta AI. The company, which owns Facebook and Instagram, is pushing to expand its range of artificial intelligence products.

The New York Times reports that negotiations are still ongoing and involve all major Hollywood talent agencies. The terms of the agreements are not yet finalized, and it is uncertain which celebrities will participate. If successful, Meta could pay millions of dollars for the rights to use these voices. A Meta spokesperson has declined to comment on the discussions.

Bloomberg reports that Meta is working to complete these deals before its Connect 2024 event in September. While the specific applications of the voices are not confirmed, Meta is considering using them for a chatbot similar to Apple’s Siri, allowing users to interact with a digital assistant featuring voices like Awkwafina’s.

The rapid advancement of AI technology has ignited concerns within the entertainment industry, with many fearing job displacement. The recent SAG-AFTRA strikes highlight growing anxiety among actors and other creatives about AI's potential impact. As Meta and other tech companies race to integrate AI into their products, balancing innovation against the protection of creative rights is becoming increasingly difficult.

OpenAI Delays ChatGPT Watermarking System Amid Internal Disagreements

Image by Jernej Furman from: Wikimedia Commons


  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

According to the Wall Street Journal, OpenAI is debating whether to release a watermarking system designed to detect text generated by its ChatGPT AI. Although the technology has been ready for about a year, internal disagreements have delayed its rollout.

In its recently updated post, OpenAI states that its text watermarking method is accurate and holds up against localized tampering, but is vulnerable to global tampering: running the text through a translation system, rewording it with another generative model, or inserting and then removing a special character between each word.

Another major concern for OpenAI is that its research suggests the text watermarking method could affect some groups more than others. For example, it might stigmatize the use of AI as a writing tool for non-native English speakers.

OpenAI has prioritized launching audiovisual content provenance solutions due to the higher risk levels associated with current AI models, especially during the U.S. election year, as reported by the Wall Street Journal. The company has extensively researched text provenance, exploring solutions like classifiers, watermarking, and metadata.

An OpenAI spokesperson told TechCrunch, “The text watermarking method we’re developing is technically promising but has important risks we’re weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers.”

TechCrunch also reports that OpenAI’s text watermarking would specifically target text generated by ChatGPT, rather than content from other models. This involves making subtle adjustments in how ChatGPT selects words, creating an invisible watermark that can be detected by a separate tool.
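OpenAI has not published the details of its method, but the general idea of biasing a model's word choices toward a detectable statistical pattern can be sketched with a generic "green-list" scheme of the kind proposed in academic watermarking research: a keyed hash of the previous token splits the vocabulary into a "green" half and a "red" half, the generator prefers green tokens, and a detector holding the same key measures how often the text landed on green. Everything here — the key, the hashing scheme, the function names — is an illustrative assumption, not OpenAI's implementation.

```python
import hashlib

def is_green(prev: str, tok: str, key: str = "demo-key") -> bool:
    """Keyed hash of (previous token, candidate token) decides whether
    the candidate falls in the 'green' half of the vocabulary."""
    digest = hashlib.sha256(f"{key}:{prev}:{tok}".encode()).digest()
    return digest[0] % 2 == 0

def pick_token(prev: str, candidates: list[str], key: str = "demo-key") -> str:
    """Stand-in for a watermarking sampler: among plausible next tokens,
    prefer one from the green half; fall back if none qualifies."""
    for tok in candidates:
        if is_green(prev, tok, key):
            return tok
    return candidates[0]

def green_fraction(tokens: list[str], key: str = "demo-key") -> float:
    """Detector: fraction of tokens that are green given their context.
    Watermarked text should score near 1.0; natural text near 0.5."""
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(p, t, key) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

A production system would bias the model's token probabilities rather than hard-select, and detection would apply a statistical test to the green count. The sketch also suggests why the tampering attacks in OpenAI's post work: translating, rewording, or inserting and removing characters changes the token sequence, so the keyed pattern no longer lines up.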

A company survey, reported by the Wall Street Journal, found that nearly one-third of loyal ChatGPT users would be discouraged by anti-cheating technology. This puts OpenAI in a difficult position: employees are torn between a commitment to transparency and the desire to attract and retain users.

In its blog post, OpenAI emphasizes that as the volume of AI-generated text continues to grow, reliable provenance methods for text-based content become increasingly critical.