OpenAI Rolls Out Advanced Voice Feature to All ChatGPT Plus and Team Subscribers - 1

Photo by Solen Feyissa on Unsplash

  • Written by Andrea Miliani Former Tech News Expert
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Premium users will have access to the Advanced Voice feature by the end of the week
  • The audio feature is not available in certain European countries.
  • It’s not an unlimited service, there’s a daily limit for its use

OpenAI rolled out the Advanced Voice Model for audio interactions with the AI assistant ChatGPT to all members of the ChatGPT Plus and Team subscriptions. Users will be able to try the feature in the upcoming days.

“Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week,” shared the company in its X account— recently hacked for a cryptocurrency scam — this Tuesday. “While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents.”

To demonstrate how the voice feature can be used, the startup showed a video of a woman asking the AI assistant to apologize to her grandmother for being late in Mandarin.

Advanced Voice is rolling out to all Plus and Team users in the ChatGPT app over the course of the week. While you’ve been patiently waiting, we’ve added Custom Instructions, Memory, five new voices, and improved accents. It can also say “Sorry I’m late” in over 50 languages. pic.twitter.com/APOqqhXtDg — OpenAI (@OpenAI) September 24, 2024

Team or Plus users in the rest of the regions will get a notification after updating the latest version and will get access to the new feature, and select the preferred voice.

This voice feature has been controversial since its announcement earlier this year when OpenAI first introduced the GPT-4o model because the actress Scarlett Johansson complained about the voice “Sky” due to the resemblance to her voice.

The voice Sky has been removed by OpenAI and the company just added five new voices: Vale, Spruce, Arbor, Maple, and Sol.

According to CNBC , the voice feature is not unlimited, after a few minutes of use, there’s a message with the remaining minutes for the day. OpenAI hasn’t shared details of the daily limit yet.

ChatGPT’s Memory Vulnerability: A Potential Security Risk - 2

Image by Tumisu, from Pixabay

ChatGPT’s Memory Vulnerability: A Potential Security Risk

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Identified a vulnerability in ChatGPT’s long-term memory feature.
  • The flaw allows prompt injection from untrusted sources like emails.
  • ChatGPT can store false information based on manipulated memory inputs.

ArsTechnica (AT) reported on Tuesday a study showcasing a vulnerability in OpenAI’s ChatGPT that allowed attackers to manipulate users’ long-term memories by simply having the AI view a malicious web link, which then sent all interactions with ChatGPT to the attacker’s website.

Security researcher Johann Rehberger demonstrated this flaw through a Proof of Concept (PoC), showing how the vulnerability could be exploited to exfiltrate data from ChatGPT’s long-term memory.

Rehberger discovered that ChatGPT’s long-term memory feature was vulnerable. This feature has been widely available since September.

The vulnerability involves a technique known as “prompt injection.” This technique causes large language models (LLMs) like ChatGPT to follow instructions embedded in untrusted sources, such as emails or websites.

The PoC exploit specifically targeted the ChatGPT macOS app, where an attacker could host a malicious image on a web link and instruct the AI to view it.

Once the link was accessed, all interactions with ChatGPT were transmitted to the attacker’s server.

According to AT, Rehberger found this flaw in May, shortly after OpenAI began testing the memory feature, which stores user details such as age, gender, and beliefs for use in future interactions.

Although he privately reported the vulnerability, OpenAI initially classified it as a “safety issue” and closed the report.

In June, Rehberger submitted a follow-up disclosure, including the PoC exploit that enabled continuous exfiltration of user input to a remote server, prompting OpenAI engineers to issue a partial fix.

While the recent fix prevents this specific method of data exfiltration, Rehberger warns that prompt injections can still manipulate the memory tool to store false information planted by attackers.

Users are advised to monitor their stored memories for suspicious or incorrect entries and regularly review their settings.

OpenAI has provided guidelines for managing and deleting memories or disabling the memory feature entirely.

The company has yet to respond to inquiries about broader measures to prevent similar attacks in the future.

Rehberger’s findings highlight the potential risks of long-term memory in AI systems, particularly when vulnerable to prompt injections and manipulation.