YouTube vows to protect creators from AI fakes
If you watch as much YouTube as I do, you’ve no doubt been inundated with AI in the last year or so. AI-generated thumbnails, AI-generated voiceovers, and even full-on AI-generated video are all in the cards now.
Well, YouTube has taken notice and has officially promised to protect the creators on its platform with new tools.
YouTube’s infamous Content ID system — the thing that makes YouTubers freak out whenever someone starts humming a song because they don’t want their video demonetized — is being augmented with new AI-hunting tools. Content ID will be able to detect AI-generated singing voices that imitate existing artists. This tool is apparently being refined “with [YouTube’s] partners,” with a plan to implement it beginning in 2025.
What about the kind of AI generation that can create images or videos? YouTube says it’s working on that too, “actively developing” technology that can detect and manage (read: take down) videos featuring AI-generated faces that mimic real people. There’s no timeframe for when this will reach the hands of users or partners.
YouTube also says it’s working against systems that are scraping its content to train AI models, which has been a hot-button topic lately. Nvidia has been known to collect publicly accessible videos from YouTube to train its models, which may violate YouTube’s terms of service.
Training ever-larger models for video generation is a point of competition in the increasingly crowded AI industry, one in which YouTube and Google are themselves active participants. But individual users and artists are likely more worried about targeted scraping designed to steal and replicate their likeness. Various tools that claim to train themselves on YouTube data are easy to find and set up, even on relatively low-power consumer hardware.
How exactly will YouTube prevent this? Is it even possible? So far, the company hasn’t spelled it out beyond a general statement: “We’ll continue to employ measures to ensure that third parties respect [the terms of service], including ongoing investments in the systems that detect and prevent unauthorized access, up to and including blocking access from those who scrape.”
Notably, YouTube’s terms of service do not prevent YouTube itself or owner Google from processing videos on the platform for its own AI tools. And though newer rules require YouTube creators to disclose the use of AI for synthetic images, videos, and voices, Google has allowed OpenAI to scrape YouTube content without legal challenge… because it didn’t want to set a precedent that might constrain the AI tools it was developing itself, according to a New York Times report from April.