
Photo by May Gauthier on Unsplash

OpenAI Introduces Operator, An AI Agent That Can Perform Tasks Autonomously

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

OpenAI announced on Thursday a new feature for its AI chatbot called Operator, an AI agent that can perform tasks autonomously, including taking control of a user’s computer.

In a Rush? Here are the Quick Facts!

  • Operator is an AI agent that can browse the web and perform tasks on its own.
  • The tool is initially available as a research preview for Pro users in the United States.
  • OpenAI expects to expand the feature to Plus, Team, and Enterprise users worldwide soon.

The research preview version of the tool has started rolling out to Pro users in the United States, and the startup expects to expand Operator to Plus, Team, and Enterprise users soon. The company clarified that this initial version has limitations but expects it to evolve based on user feedback.

“Today we’re going to launch our first agent AI. Agents are systems that can do work for you independently when you give them a task,” said Sam Altman in a video shared by OpenAI to introduce the new product. “We think this is going to be a big trend in AI and will really impact the work people can do, how productive they can be, how creative they can be, what they can accomplish.”

During the live video presentation of Operator, OpenAI’s team demonstrated how the new tool can book a restaurant reservation or purchase groceries by conducting its own online searches, browsing suggested websites, and applying filters to tailor the results to the user’s needs. While Operator performs these tasks, it displays a window showing its progress, allowing the user to focus on other activities while it gathers the results.

According to the details in the press release, Operator is powered by Computer-Using Agent (CUA), which pairs GPT-4o’s vision capabilities with an interface for interacting with the text fields, buttons, and menus users typically find on websites. It doesn’t need API integrations to see or interact, it can self-correct, and it checks in with the user to confirm details or request more information, as well as before making critical decisions, like buying a ticket.
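
Conceptually, an agent like the one described runs a see-decide-act loop: capture the screen, let a vision model pick the next on-screen action, and hand control back to the user before anything consequential. The Python sketch below illustrates that loop under those assumptions only; it is a hypothetical mock-up, not Operator’s actual code or OpenAI’s API, and every function and name in it is an illustrative stand-in.

```python
from dataclasses import dataclass
import random

# Hypothetical sketch of a screenshot-driven agent loop like the one the
# press release describes. These stubs are illustrative placeholders,
# NOT OpenAI's Operator code or any real OpenAI API.

@dataclass
class Action:
    kind: str          # e.g. "click", "type", "purchase", "done"
    target: str = ""   # the on-screen element the model chose

def capture_screenshot() -> bytes:
    """Stand-in for grabbing the page as pixels; no per-site API needed."""
    return b"<raw screenshot bytes>"

def choose_next_action(task: str, screenshot: bytes) -> Action:
    """Stand-in for a vision model reading the screenshot and picking a
    text field, button, or menu to act on (random here, for illustration)."""
    return random.choice([Action("click", "Search"), Action("done")])

def ask_user_to_confirm(action: Action) -> bool:
    """Stand-in for handing control back before a critical decision."""
    reply = input(f"Approve '{action.kind}' on '{action.target}'? [y/N] ")
    return reply.strip().lower() == "y"

def perform_ui_action(action: Action) -> None:
    # In a real agent this would click or type in a browser; here we log it.
    print(f"{action.kind} -> {action.target}")

def run_agent(task: str, max_steps: int = 50) -> None:
    critical = {"purchase", "submit_payment"}  # decisions that need sign-off
    for _ in range(max_steps):
        action = choose_next_action(task, capture_screenshot())
        if action.kind == "done":
            return
        if action.kind in critical and not ask_user_to_confirm(action):
            continue  # user declined; the loop lets the model self-correct
        perform_ui_action(action)

run_agent("book a table for two")
```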

OpenAI clarified that it has trained the agent to avoid harmful tasks, like buying guns, and to reduce misalignment, such as performing the wrong task.

OpenAI also recently announced its $500 billion Stargate Project in collaboration with SoftBank, Oracle, Microsoft, and the U.S. government to build AI infrastructure.


Image by Stock Snap, from Unsplash

LinkedIn Faces Lawsuit For Allegedly Sharing User Messages To Train AI Models

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

LinkedIn is facing a class-action lawsuit from Premium users who allege the platform shared their private messages with third parties to train generative AI models without proper consent, as reported by Reuters.

In a Rush? Here are the Quick Facts!

  • Plaintiffs accuse LinkedIn of quietly updating its privacy policy in September.
  • The lawsuit seeks $1,000 per user for federal data privacy violations.
  • LinkedIn denies all allegations, calling them “false claims with no merit.”

The lawsuit, filed in federal court in San Jose, California, claims LinkedIn introduced a privacy setting in August allowing users to opt in or out of data sharing.

The complaint accuses LinkedIn of deliberately violating its promise to use user data solely to enhance the platform, suggesting the company sought to minimize public and legal scrutiny, as reported by Reuters.

The suit was filed on behalf of Premium users who sent or received InMail messages and had their data shared before the September policy update.

The lawsuit alleges that LinkedIn breached its contractual promises by sharing Premium customers’ private messages with third parties to train generative AI models, as reported by The Register.

These messages could contain sensitive information about employment, intellectual property, compensation, and personal matters, raising serious privacy concerns.

The lawsuit focuses particularly on Premium customers—those subscribing to tiers like Premium Career, Premium Business, Sales Navigator, and Recruiter Lite—who are subject to the LinkedIn Subscription Agreement (LSA), noted The Register.

This agreement makes specific privacy commitments, including a clause in Section 3.2 promising not to disclose Premium customers’ confidential information to third parties, as noted by The Register.

The lawsuit claims LinkedIn violated this clause, breaching the US Stored Communications Act, contract terms, and California’s unfair competition laws.

However, The Register notes that the plaintiffs do not present evidence that InMail contents were shared. Instead, the complaint speculates that LinkedIn included these messages in AI training data.

It bases this assumption on LinkedIn’s alleged unannounced policy changes and its failure to publicly deny accessing InMail messages for training purposes, as reported by The Register.

Plaintiffs are seeking damages for breach of contract, violations of California’s unfair competition law, and $1,000 per user under the federal Stored Communications Act, as reported by Reuters.