YouTube has recently expressed its support for the NO FAKES Act, a legislative effort to regulate the use of AI-generated likenesses of individuals online. Originally introduced in 2023 and revisited throughout 2024, the act continues to gain momentum as concerns over AI misuse grow. The surge in artificial intelligence technology has made creating deepfakes easier than ever.
These manipulated videos can depict individuals saying or doing things they never actually did, raising serious concerns about fraud and the spread of misinformation. Instances of harmful deepfakes have already surfaced, prompting action from lawmakers like Senators Chris Coons and Marsha Blackburn, who crafted the *Nurture Originals, Foster Art, and Keep Entertainment Safe* (NO FAKES) Act. The legislation aims to protect artistic integrity and creativity in the AI landscape.
Despite facing various challenges and opposition, the NO FAKES Act has garnered support from notable organizations, including SAG-AFTRA and the Recording Industry Association of America (RIAA). YouTube, the world’s largest long-form video-sharing platform, has now joined this coalition. Acknowledging its influential role, the company emphasized the responsibility that comes with its reach.
In its announcement, YouTube reaffirmed its commitment to responsible AI use, particularly in safeguarding creators and viewers against misuse. Under the revised Act, platforms that host deepfaked content, such as YouTube, face reduced liability as long as they comply with removal requests for unauthorized material. For instance, if a fake video of someone surfaces on the platform, YouTube would not be held liable so long as it honors a request to take the video down.
Additionally, YouTube is broadening its pilot program for “likeness management technology,” an initiative that helps creators identify and request the removal of unauthorized deepfakes of themselves. New participants include prominent creators such as MrBeast and Marques Brownlee.