
YouTube Will Now Require Users to Disclose if Content Was Created Using AI

YouTube said the policy is especially important when a video discusses sensitive topics, such as elections, ongoing conflicts, politicians, and public health crises.

YouTube will now require creators to disclose when their content was created using artificial intelligence.

The video-sharing platform, which is owned by Google, announced the rule change on Tuesday.

Google had already announced in September that verified election advertisers must prominently disclose any use of artificial intelligence or digitally altered content in their ads, a rule that also applied to ads on YouTube.

In Tuesday’s announcement, YouTube said, “Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators on our platform. But just as important, these opportunities must be balanced with our responsibility to protect the YouTube community. All content uploaded to YouTube is subject to our Community Guidelines—regardless of how it’s generated—but we also know that AI will introduce new risks and will require new approaches.”

The tech giant continued, “We believe it’s in everyone’s interest to maintain a healthy ecosystem of information on YouTube. We have long-standing policies that prohibit technically manipulated content that misleads viewers and may pose a serious risk of egregious harm. However, AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they’re unaware that the video has been altered or is synthetically created.”

During the upload process, YouTube will now give creators an option to indicate whether their video contains “realistic altered or synthetic material.”

“To address this concern, over the coming months, we’ll introduce updates that inform viewers when the content they’re seeing is synthetic,” YouTube said. “Specifically, we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including using AI tools. When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.”

YouTube said the policy is especially important for videos that discuss sensitive topics, such as elections, ongoing conflicts, politicians, and public health crises.

Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties, YouTube explained.

The platform said it will work with creators before the rules take effect to make sure they understand the new requirements.

“We’ll inform viewers that content may be altered or synthetic in two ways,” the company said. “A new label will be added to the description panel indicating that some of the content was altered or synthetic. And for certain types of content about sensitive topics, we’ll apply a more prominent label to the video player.”

“There are also some areas where a label alone may not be enough to mitigate the risk of harm, and some synthetic media, regardless of whether it’s labeled, will be removed from our platform if it violates our Community Guidelines,” YouTube added. “For example, a synthetically created video that shows realistic violence may still be removed if its goal is to shock or disgust viewers.”

The platform is also adding a feature that lets someone request, through the company’s privacy request process, the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice. Public officials and well-known individuals will face a higher bar for removal requests. The company also said it will take into account whether the depiction is meant as satire or parody.

In September, Google announced that beginning in November, political campaigns must “prominently disclose when their ads contain synthetic content that’s been digitally altered or generated and depicts real or realistic-looking people or events… inclusive of AI tools.”

“This update builds on our existing transparency efforts — it’ll help further support responsible political advertising and provide voters with the information they need to make informed decisions,” Michael Aciman, a Google spokesperson, said in a statement, according to a report from Axios.

Google specified that the disclosure must be “clear and conspicuous” and placed where viewers will notice it. Disclosures must also explain which parts are artificial, for example, “this audio was computer generated” or “this video content was synthetically generated.”

The new policy contains an exemption for content that is “inconsequential to the claims made in the ad.” This includes editing techniques like cropping or background edits that do not create realistic misrepresentations.

“Ads that depict someone saying or doing something they never said or did, or that alter footage of a real event, will fall under the new policy,” Axios reported.

Campaigns or advertisers who repeatedly violate the rules will be suspended from using Google Ads.
