OpenAI Launches O3-mini And Deep Research In ChatGPT

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

OpenAI launched its latest AI model, o3-mini, and deep research for ChatGPT over the weekend. The o3-mini model is already available to ChatGPT Plus, Pro, and Team users, while the deep research feature is only available to Pro users for now. The AI startup promised to expand access to both products soon.

In a Rush? Here are the Quick Facts!

  • OpenAI released its latest AI model, o3-mini, and its research agent “deep research” over the weekend.
  • o3-mini is OpenAI’s most cost-efficient model and is available for Plus, Pro, and Team members. Free users can test it too.
  • Deep Research is available exclusively to Pro users and can complete complex tasks in minutes that would take a human hours to finish.

OpenAI considers o3-mini its most cost-efficient model and, for the first time, will also allow free users to test the new reasoning model, though the message limit wasn’t clarified. o3-mini will also be available to Enterprise members later this month. Users with access to the latest model must explicitly select it to make the most of its broader knowledge and technical capabilities.

“OpenAI o3-mini is our first small reasoning model that supports highly requested developer features including function calling, Structured Outputs, and developer messages, making it production-ready out of the gate,” wrote OpenAI’s team on its website.
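For developers curious what those features look like in practice, here is a minimal sketch using OpenAI’s Python SDK. It is illustrative only: it assumes API access to o3-mini, and the prompt and JSON schema are invented for the example rather than taken from OpenAI’s announcement.

```python
# Minimal sketch (illustrative, not from OpenAI's announcement): a developer
# message plus a Structured Outputs schema in a single o3-mini request.
# Assumes the `openai` Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    messages=[
        # Developer messages steer the model, similar to system prompts.
        {"role": "developer", "content": "Answer concisely."},
        {"role": "user", "content": "Summarize what a reasoning model is."},
    ],
    # Structured Outputs: constrain the reply to this example JSON schema.
    # Function calling is exposed separately through a `tools` parameter.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {"summary": {"type": "string"}},
                "required": ["summary"],
                "additionalProperties": False,
            },
        },
    },
)

print(response.choices[0].message.content)  # JSON matching the schema above
```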

On Sunday, OpenAI also announced its AI research agent, a feature known as deep research, which can perform complex tasks including multi-step online research. The startup says the agent can complete in just a few minutes work that would take a human hours. It’s only available to Pro users for now, but the company expects to expand it to Plus and Team members soon.

“Deep research is OpenAI’s next agent that can do work for you independently—you give it a prompt, and ChatGPT will find, analyze, and synthesize hundreds of online sources to create a comprehensive report at the level of a research analyst,” states the company’s official announcement.

The feature is designed for people who need extensive knowledge and research in areas such as finance, engineering, and science, as well as for shoppers looking for detailed, personalized product research. Deep research’s reports include citations and references so users can verify the information.

Unlike o3-mini, which provides fast answers, deep research takes 5 to 30 minutes to return a result for complex tasks.

These new releases come just days after the Chinese startup DeepSeek launched its latest AI models, including free models that compete with OpenAI’s.

First Known Case Of AI Chatbots Used For Stalking

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

James Florence, 36, has agreed to plead guilty to a seven-year cyberstalking campaign that involved using AI chatbots to impersonate a university professor and invite strangers to her home for sexual encounters.

In a Rush? Here are the Quick Facts!

  • James Florence used AI chatbots to impersonate a professor and cyberstalk her.
  • Florence stole the victim’s underwear and used it to harass her online.
  • He fed personal info to chatbots, including the professor’s home address and intimate details.

Florence, from Massachusetts, used platforms such as CrushOn.ai and JanitorAI to create personalized chatbots that mimicked the professor’s responses and led users to believe they were communicating with her, as reported by The Guardian.

According to court documents reviewed by The Guardian, Florence used the professor’s personal and professional information—such as her home address, date of birth, and family details—to instruct the chatbots to engage in sexually explicit conversations.

The AI bots were programmed to confirm sexual preferences and even provide intimate details about the victim. Florence, who had stolen underwear from the professor’s home, fed the bots information about her clothing choices and directed them to encourage users to visit her house.

The case, filed in Massachusetts federal court, marks a significant legal precedent as the first known instance of a stalker using AI to impersonate their victim to facilitate harassment, said The Guardian. Florence is set to plead guilty to seven counts of cyberstalking and one count of possession of child pornography.

Stefan Turkheimer, vice-president for public policy at RAINN, an anti-sexual-violence nonprofit, described the case as highlighting a disturbing new trend in the misuse of AI.

“This is a question of singling out someone for the goal of potential sexual abuse,” he said, reported The Guardian. “The tools that he’s been able to use here really made the damage so much worse,” he added.

Florence, who was once a friend of the professor, went beyond creating chatbots. He made fake social media accounts and websites to impersonate the victim, distributing explicit, manipulated images of her along with personal details, reported The Guardian.

Platforms like Craigslist, Reddit, X, and Linktree were used to humiliate the professor and distribute the fabricated content. One website, ladies.exposed, featured photo collages of the professor alongside her home address and phone number, as reported by The Guardian.

These actions extended over several years, beginning in 2017. The professor received dozens of disturbing messages and calls, including a voicemail falsely claiming that her father had died in a car accident, as reported by The Guardian.

The victim and her husband became increasingly concerned for their safety, eventually installing surveillance equipment in their home and taking other precautions.

The harassment did not stop with the professor. Florence targeted several other women and a 17-year-old girl, digitally altering their images to create sexually suggestive content.

The use of AI for harassment is a growing issue, with reports showing that minors are also being exploited in this way.