
Image by Surface, from Unsplash
Microsoft AI Vice President Sebastien Bubeck To Join OpenAI
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Sebastien Bubeck leaves Microsoft after 10 years to join OpenAI.
- Bubeck led Microsoft’s efforts on small, efficient language and vision AI models.
- OpenAI has faced recent staff turnover, including CTO Mira Murati’s departure.
Microsoft Corp. has announced that one of its AI vice presidents, Sebastien Bubeck, is leaving the company to join OpenAI, a move making headlines across the tech industry.
Bubeck, a 10-year veteran at Microsoft, had been leading efforts on small language models aimed at rivaling larger AI systems in terms of efficiency and effectiveness, notes Bloomberg.
Bubeck was instrumental in developing Microsoft’s Phi models—extra-small language and vision models designed to optimize AI applications on edge devices.
As markets shift towards on-device AI models that offer faster, more private, and offline functionality, Bubeck’s expertise is increasingly valuable, as noted by TechCrunch.
Microsoft confirmed Bubeck’s new role at OpenAI, where he will work toward advancing the company’s goal of achieving artificial general intelligence (AGI), Bloomberg reported.
Details about Bubeck’s specific role at OpenAI remain unclear, as noted by The Information, which first reported the story. Reuters and other outlets reported that Bubeck has yet to respond to requests for comment.
A Microsoft spokesperson stated that Bubeck “has decided to leave Microsoft to further his work toward developing AGI,” and expressed gratitude for his contributions, adding that the company looks “forward to continuing our relationship through his work with OpenAI,” as cited by The Information.
This move follows a wave of high-profile departures from OpenAI, including the exit of longtime chief technology officer Mira Murati in September, Reuters noted.

Image by Ron Lach, from Pexels
Firefly Video Model: Adobe’s New AI Tool
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- The model features Text-to-Video and Image-to-Video capabilities for efficient editing.
- It includes tools for creating B-roll and controlling camera angles.
- Adobe emphasizes responsible AI, with models trained on licensed and public domain content.
Adobe announced today that it has expanded its Firefly family of generative AI models to include video, introducing the Firefly Video Model. This new feature enables creative professionals to generate high-quality video clips from text prompts, images, or existing footage.
The Firefly Video Model is available in limited public beta, allowing a select group of users to provide initial feedback and shape the model’s development.
The Adobe Firefly Video Model introduces several features aimed at streamlining video production for professionals. One key function is Text-to-Video, which allows users to generate video clips from text descriptions.
It also offers tools for controlling camera angles and creating B-roll to fill gaps in video projects. This feature is particularly useful for teams with limited budgets or tight schedules, as it helps plan complex or costly shots without immediate filming.
Another feature, Image-to-Video, converts still images into video clips, allowing editors to create new footage from existing photos. This can be used to generate additional shots, such as close-ups, by drawing from a single frame of video.
The tool also provides the option to adjust the motion or purpose of a shot, which can assist editors in demonstrating possible changes to directors or clients. The model is also capable of generating visual effects like fire, water, and smoke, which can be layered onto footage using Adobe’s editing software.
Adobe has emphasized that the Firefly Video Model was developed to be commercially safe, with its AI trained on licensed and public domain content. It also includes Content Credentials, which document how content is created and whether AI was used, ensuring transparency for users.
Additionally, Adobe stated that it has added new tools to its Firefly Services to help companies speed up their production processes.
One of these new features, currently in testing, is a Dubbing and Lip Sync tool that uses AI to translate spoken dialogue into different languages while keeping the original voice and matching lip movements. This feature is similar to the voice cloning and translation tool recently launched by D-iD.
Another tool, called “Bulk Create,” is also in testing and helps users quickly edit large numbers of images by making tasks like resizing or removing backgrounds easier.
Adobe states that interested users can join a waitlist to access the limited public beta. Throughout the beta phase, video generations are free. Adobe will provide additional details regarding Firefly video generation options and pricing once the model transitions out of limited public beta.