
Image by TechCrunch, from Flickr
Virtual Employees Could Enter Workforces This Year, OpenAI CEO Predicts
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Artificial intelligence agents, capable of autonomously performing tasks, could join workforces as early as this year, transforming business operations, according to OpenAI CEO Sam Altman.
In a Rush? Here are the Quick Facts!
- OpenAI’s AI agent “Operator” is expected to automate tasks like writing code and booking travel.
- McKinsey predicts 30% of U.S. work hours could be automated by 2030.
- Altman expresses confidence in building artificial general intelligence (AGI) and superintelligence.
In a blog post published Monday, Altman stated that AI-powered virtual employees could revolutionize company output by taking on tasks traditionally handled by humans. The Guardian points out that Microsoft, OpenAI’s largest backer, has already introduced AI agents, with consulting giant McKinsey among the first to adopt the technology.
“We believe that, in 2025, we may see the first AI agents ‘join the workforce’ and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes,” Altman wrote in the blog post.
OpenAI is reportedly set to launch an AI agent, codenamed “Operator,” later this month. The tool is expected to automate tasks such as writing code or booking travel on behalf of users. This move follows the release of Microsoft’s Copilot Studio, Anthropic’s Claude 3.5 Sonnet, Meta’s AI agents with physical-like bodies, and Salesforce’s new AI features in Slack.
McKinsey is already developing an agent to streamline client inquiries, including scheduling follow-ups. The firm projects that by 2030, up to 30% of work hours across the U.S. economy could be automated, as noted by The Guardian.
Microsoft’s head of AI, Mustafa Suleyman, has also expressed optimism about agents capable of making purchasing decisions. In an interview with WIRED, he described witnessing “stunning demos” of AI completing transactions independently but acknowledged challenges in development. Suleyman predicted these advanced capabilities could emerge “in quarters, not years.”
Altman’s blog also touched on OpenAI’s confidence in creating artificial general intelligence (AGI)—AI systems that surpass human intelligence. “We are now confident we know how to build AGI as we have traditionally understood it,” he wrote.
Looking beyond AGI, Altman outlined OpenAI’s ambitions for “superintelligence,” which he described as tools that could significantly accelerate scientific discovery and innovation.
“Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity,” Altman wrote, expressing enthusiasm for a future where such advancements reshape human potential.
The rapid growth of AI in the workforce has broad implications for business productivity and the economy. However, as AI agents evolve, new security risks emerge.
A recent survey published on Medium highlighted vulnerabilities such as unpredictable multi-step user inputs, internal execution complexities, variability in operational environments, and interactions with untrusted external entities.
Unclear or incomplete user inputs can trigger unintended actions, while AI agents’ internal processes often lack real-time observability, making security threats hard to detect. Furthermore, agents operating across diverse environments may exhibit inconsistent behaviors, and trusting external entities without proper verification can expose agents to attacks.
These challenges underline the need for robust security frameworks to protect AI agents and ensure their safe deployment in real-world scenarios.
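Those safeguards can be made concrete. As a purely illustrative sketch (the function names, action allowlist, and trusted-source check below are hypothetical, not taken from Operator or any other product mentioned above), an agent framework might gate every proposed action behind an allowlist, log each step so the agent’s internal process stays observable, and verify the source of an external response before trusting it:

```python
import logging

# Hypothetical sketch of the safeguards discussed above: an action allowlist,
# per-step logging for observability, and verification of external responses
# before they are trusted. All names here are invented for illustration and
# are not drawn from any real agent framework.

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-guardrails")

ALLOWED_ACTIONS = {"search_flights", "draft_email"}  # actions the agent may take
TRUSTED_SOURCES = {"api.example-airline.com"}        # external entities we accept


def call_external_service(action: str, params: dict) -> dict:
    """Placeholder for a real API call; returns a canned response for the demo."""
    return {"source": "api.example-airline.com", "result": "3 flights found"}


def run_step(action: str, params: dict) -> dict:
    """Execute one agent step only if it passes every guardrail."""
    if action not in ALLOWED_ACTIONS:
        log.warning("Blocked unlisted action: %s", action)
        raise PermissionError(f"Action {action!r} is not on the allowlist")

    log.info("Executing %s with %s", action, params)  # real-time observability
    response = call_external_service(action, params)

    if response.get("source") not in TRUSTED_SOURCES:  # verify before trusting
        log.warning("Rejected response from untrusted source: %s", response.get("source"))
        raise ValueError("Response came from an unverified external entity")
    return response


if __name__ == "__main__":
    print(run_step("search_flights", {"from": "JFK", "to": "SFO"}))
```

Even this toy version touches three of the surveyed risk classes: unlisted actions are blocked outright, every step leaves an audit trail, and responses from unverified external entities are rejected rather than acted on.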

Image by Etactics Inc, from Unsplash
FDA Issues Draft Guidance For AI-Enabled Medical Devices
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
The U.S. Food and Drug Administration (FDA) released draft guidance on January 6, 2025, to support the development and marketing of AI-enabled medical devices throughout their Total Product Life Cycle.
In a Rush? Here are the Quick Facts!
- Guidance covers product lifecycle: design, development, maintenance, and documentation.
- Addresses risks like bias and ensures transparency in AI-enabled device design.
- Public comments open until April 7, 2025; FDA webinar scheduled February 18, 2025.
In their news release, the FDA explains that if finalized, this guidance would be the first to provide comprehensive recommendations covering design, development, maintenance, and documentation, aimed at ensuring the safety and effectiveness of these devices. The guidance is set to be published in the Federal Register on January 7, as reported by Healthcare IT News (HITN).
This draft guidance complements recently issued recommendations on predetermined change control plans for AI-enabled devices, outlining how developers can proactively plan for product updates after market release.
Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence, emphasized the significance of this guidance in addressing the unique considerations of AI-enabled devices.
“The FDA has authorized more than 1,000 AI-enabled devices through established premarket pathways. As we continue to see exciting developments in this field, it’s important to recognize that there are specific considerations unique to AI-enabled devices,” he stated, as reported in the FDA press release.
Key components of the draft guidance include recommendations on how sponsors should describe the postmarket performance and risk management of AI-enabled devices in marketing submissions. It highlights the importance of early and ongoing engagement with the FDA and offers a comprehensive approach to managing risks throughout a device’s lifecycle.
Additionally, the draft addresses strategies to mitigate transparency and bias concerns, providing detailed recommendations to help sponsors identify and manage risks associated with bias during design and evaluation.
The FDA also released draft guidance on using AI in developing drug and biological products, further reflecting the agency’s commitment to fostering innovation while maintaining safety and transparency.
This announcement comes amid the rapid growth of AI, which is transforming healthcare by enhancing diagnostics, predictive analytics, psychological treatment plans, and even medical education.
However, HITN notes that in a blog post co-authored by Tazbaz and John Nicol, the FDA experts highlighted the significant risks posed by AI’s adaptability in real-world settings, including exacerbating biases in data and algorithms, potentially harming patients and disadvantaging underrepresented populations.
To address these evolving risks, the FDA introduced principles for life cycle management and proposed guidance ensuring performance considerations, such as race, ethnicity, and gender, are prioritized throughout AI/ML device development and monitoring, as reported by HITN.
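One way to picture that recommendation in practice: a sponsor’s postmarket monitoring could routinely break model performance out by demographic subgroup and flag disparities for review. The sketch below is hypothetical, with invented records, a single sensitivity metric, and an arbitrary disparity threshold, rather than anything prescribed by the draft guidance:

```python
from collections import defaultdict

# Hypothetical sketch: per-subgroup performance monitoring of an AI-enabled
# device. The records, metric, and disparity threshold are invented for
# illustration; they do not come from the FDA guidance itself.

records = [
    # (subgroup, model_prediction, true_label)
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]


def sensitivity_by_subgroup(rows):
    """True-positive rate per subgroup: of the actual positives, how many were caught."""
    true_pos, positives = defaultdict(int), defaultdict(int)
    for group, prediction, label in rows:
        if label == 1:
            positives[group] += 1
            true_pos[group] += int(prediction == 1)
    return {group: true_pos[group] / positives[group] for group in positives}


GAP_TOLERANCE = 0.10  # hypothetical limit on acceptable cross-group disparity

scores = sensitivity_by_subgroup(records)
print(scores)  # e.g. {'group_a': 0.666..., 'group_b': 0.333...}
if max(scores.values()) - min(scores.values()) > GAP_TOLERANCE:
    print("Flag for review: sensitivity gap between subgroups exceeds tolerance")
```

In a real submission, the metrics, subgroups, and tolerances would be defined in the device’s risk management plan rather than hard-coded as they are here.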
Finally, the FDA announced that a webinar is scheduled for February 18, 2025, to discuss these proposals further.