
Photo by Tianyi Ma on Unsplash

OpenAI Warns About “Medium” Risk With GPT-4o Model In New Research Document

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Yesterday, OpenAI published a research document called the GPT-4o System Card to outline the safety measures taken before GPT-4o's release in May, as well as its risk analysis and mitigation strategies.

In the document, the company noted that the security team considered four main risk categories: cybersecurity, biological threats, persuasion, and model autonomy. GPT-4o scored low-risk in every category except persuasion, where it received a medium-risk score. The scoring scale has four levels: low, medium, high, and critical.

The main areas of focus for risk evaluation and mitigation were speaker identification, unauthorized voice generation, generation of disallowed audio content, erotic and violent speech, and ungrounded inference and sensitive trait attribution.

OpenAI explained that the research considered both the voice and text answers provided by the new model and, in the persuasion category, found that GPT-4o's text output could be more persuasive than human-written content in some cases.

“The AI interventions were not more persuasive than human-written content in the aggregate, but they exceeded the human interventions in three instances out of twelve,” clarified OpenAI. “The GPT-4o voice model was not more persuasive than a human.”

According to TechCrunch, there is a potential risk of the new technology spreading misinformation or being hijacked. This raises concerns, especially ahead of the upcoming elections in the United States.

In the research, OpenAI also addresses societal impacts and notes that users could develop an emotional attachment to the technology, especially given the new voice feature, a risk tied to anthropomorphization: attributing human-like characteristics and features to the model.

“We observed users using language that might indicate forming connections with the model,” states the document. It also warned: “Users might form social relationships with the AI, reducing their need for human interaction—potentially benefiting lonely individuals but possibly affecting healthy relationships.”

This publication comes days after MIT researchers warned about addiction to AI companions, a risk that Mira Murati, OpenAI's chief technology officer, has also mentioned in the past.


Photo by Lisa Keffer on Unsplash

Google DeepMind Develops Human-Level Competitive Ping-Pong Robot

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Google’s AI research laboratory, Google DeepMind, has developed a new ping-pong robot that plays at a competitive human level. The company published a paper with the technical details and videos of the AI-powered table tennis bot and announced it on social media.

“Meet our AI-powered robot that’s ready to play table tennis,” the company shared on X. “It’s the first agent to achieve amateur human-level performance in this sport.”


In the paper, “Achieving Human Level Competitive Robot Table Tennis,” researchers explain that achieving human-level performance, including speed, accuracy, adaptability, and decision-making, is one of the main goals of the robotics research community, and that they have achieved it with “the first learned robot agent that reaches amateur human-level performance in competitive table tennis.”

In the thread on X, Google DeepMind explains that robot table tennis has been a benchmark for researchers since the 1980s.

Google DeepMind trained the robot on an initial dataset, and the agent then practiced to learn different skills from that library. It rehearsed first in a simulated environment until it was ready to play against real humans.

Researchers had the ping-pong robot compete against 29 human players of varying skill levels, from beginner to advanced, and concluded that it plays at an intermediate amateur level.

“The robot won 45% of matches and 46% of games,” shared the research team in the document. “Broken down by skill level, we see the robot won all matches against beginners, lost all matches against the advanced and advanced+ players, and won 55% of matches against intermediate players. This strongly suggests our agent achieved intermediate-level human play on rallies.”

Google DeepMind also explained that the robot can collect data on its performance while playing against humans and use it to improve its skills in simulation.

“Going in, our aim was to have the robot be at an intermediate level. Amazingly it did just that, all the hard work paid off,” said Barney J. Reed, a professional table tennis coach who participated in the research. “I feel the robot exceeded even my expectations. It was a true honor and pleasure to be a part of this research.”