YouTube To Require Disclosure When Videos Include Generative AI

Creators who repeatedly fail to disclose AI use will face penalties

YouTube, the video platform owned by Alphabet Inc.’s Google, will soon require video makers to disclose when they’ve uploaded manipulated or synthetic content that looks realistic — including video that has been created using artificial intelligence tools.

The policy update, which will go into effect sometime in the new year, could apply to videos that use generative AI tools to realistically depict events that never happened, or show people saying or doing something they didn’t actually do. “This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” Jennifer Flannery O’Connor and Emily Moxley, YouTube vice presidents of product management, said in a company blog post Tuesday.

Creators who repeatedly choose not to disclose when they’ve posted synthetic content may be subject to content removal, suspension from the program that allows them to earn ad revenue, or other penalties, the company said.