
YouTube Requires Creators to Label AI-Generated Content
- Written by Shipra Sanganeria, Cybersecurity & Tech Writer
YouTube announced the rollout of its new labeling policy for AI-generated content on March 18. A new tool in Creator Studio requires uploaders to disclose “altered or synthetic” content that might be mistaken for a real person, place, or event. The labels will appear first in the mobile app, followed by desktop and TV in the coming weeks.
In its blog post, the platform cited examples of content it considers “realistic” and therefore subject to AI labeling: altering footage of real events and places, generating realistic-looking scenes, and digitally recreating a real person’s face or voice (for example, with a deepfake) to show them saying or doing something they didn’t actually do.
On the other hand, YouTube said that creators using AI in production or post-production processes, such as generating scripts, visual enhancements, and special effects, are exempt from the disclosure policy. Animation and clearly unrealistic (fantasy) content are also exempt.
For most AI-generated videos, the label will appear in the expanded description beneath the video player, but for videos touching on sensitive real-world topics, such as elections, health, finance, and news, YouTube will display a label or watermark on the video itself. Additionally, if a creator fails to label content that could “confuse or mislead” viewers, YouTube reserves the right to add the label itself.
The new AI-labeling requirements follow an announcement last November outlining how YouTube intends to update its Community Guidelines to protect users and the wider community from false, manipulated content.
That announcement also introduced a new “privacy request” process, under which anyone whose face or voice has been digitally recreated to misrepresent them or promote content can request that the content be removed from the platform.
In last week’s policy post, YouTube said it also plans to penalize creators who repeatedly fail to make these disclosures.
YouTube follows in other social media platforms’ footsteps with the introduction of AI labels on content. But putting all the trust in creators to responsibly label their own content might not be enough. It remains to be seen how successful YouTube will be at identifying AI-generated content and enforcing penalties against those who don’t comply.

ChatGPT’s GPT-5 Might Arrive This Summer
- Written by Deep Shikha, Content Writer
OpenAI plans to introduce GPT-5 in mid-2024, according to a report from Business Insider. The next major upgrade of the AI language model that powers ChatGPT has reportedly been demonstrated to select enterprise clients, according to two anonymous sources close to OpenAI.
One of the CEOs who received the demonstration described the new GPT model as “really good” and “materially better,” noting that it generated outputs using the company’s own use cases and data. The CEO also hinted that the model has additional, as-yet-undisclosed capabilities, such as the ability to launch OpenAI-built AI agents that can complete tasks independently.
A snippet from Sam Altman’s March 18 appearance on the Lex Fridman podcast also hints at this big release.
When Fridman asked when GPT-5 is coming out, Altman initially responded that he honestly didn’t know. After further questioning, Altman revealed that OpenAI plans to release many different things in the coming year.
“I think before we talk about a GPT-5-like model called that, or not called that, or a little bit worse or a little bit better than what you’d expect from a GPT-5, I think we have a lot of other important things to release first,” Altman said.
In this discussion with Lex Fridman, Altman does suggest that the company plans to release a big AI model this year, but that’s about it. It could be GPT-5, but it could also just be a big upgrade to the GPT models that are already in use. The upcoming model is expected to remain a Large Language Model (LLM) that can accept text or encoded visual input (aka prompts).
As per the Business Insider report, OpenAI is currently training GPT-5. Once training is complete, the model will undergo further safety checks for potential problems prior to its public launch. The timing of the release may be pushed back based on how long the evaluation takes.
Ars Technica notes that the cited sources may not have accurate information, and that the release of GPT-5 could be postponed for reasons other than testing. Still, if GPT-5 is already being demonstrated and is ready for testing, its main training is likely done, with additional tweaks and improvements expected to follow.