Celebrities will be able to find and request removal of AI deepfakes on YouTube
YouTube is expanding its AI deepfake detection tool to celebrities, talent agencies, and management companies, allowing enrolled public figures to find and request removal of AI-generated videos featuring their likenesses. This is a meaningful step toward platform accountability.
Mia Sato, reporting for The Verge on April 21, 2026, broke the story that YouTube is opening its likeness detection program to Hollywood. The tool, which automatically scans the platform for AI-generated content featuring enrolled individuals, was first tested with content creators last fall, then extended to politicians and journalists in March 2026. Celebrities are next in line, and so are the talent agencies and management companies that handle the legal affairs and removal requests on their behalf.
Why This Matters
YouTube is the world's largest video platform, and it just built the most concrete enforcement mechanism any major platform has deployed against AI deepfakes. No comparable opt-in detection and removal system exists at TikTok, Instagram, or X as of this writing. The rollout sequence (creators in fall 2025, politicians and journalists in March 2026, celebrities in April 2026) tells you this is a deliberate, tested product rather than a rushed PR response. With deepfake creation costs collapsing as AI tools improve, the window for platforms to act proactively is narrowing fast, and YouTube is at least moving.
The Full Story
YouTube's likeness detection feature is not a passive report button. It is an automated scanning system that uses computer vision and audio analysis to proactively search YouTube's entire video library for content featuring the enrolled individual's replicated face or voice. Once flagged content surfaces, the enrolled person or their representative can review it through a dashboard and submit a formal removal request. YouTube then evaluates each request against its existing privacy policy. Not every request results in a takedown.
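To make the workflow described above concrete, here is a minimal, purely illustrative sketch of the flag-review-evaluate loop: automated scan, dashboard review, then a case-by-case policy decision with a parody/satire exemption. None of these names correspond to a real YouTube API; `scan_library`, `SCAN_THRESHOLD`, and the policy check are hypothetical stand-ins for internal systems that are not public.

```python
from dataclasses import dataclass, field

# Hypothetical model of the flow the article describes:
# automated scan -> dashboard review -> removal request -> policy evaluation.
# Nothing here reflects YouTube's actual API, thresholds, or classifiers.

@dataclass
class Video:
    video_id: str
    likeness_score: float      # similarity to the enrolled face/voice (0..1)
    is_parody_or_satire: bool  # as classified during policy review

@dataclass
class Dashboard:
    flagged: list = field(default_factory=list)

SCAN_THRESHOLD = 0.85  # invented cutoff for surfacing a likely match

def scan_library(videos, dashboard):
    """Automated pass: surface likely likeness matches for human review."""
    for v in videos:
        if v.likeness_score >= SCAN_THRESHOLD:
            dashboard.flagged.append(v)

def evaluate_removal_request(video):
    """Policy step: parody/satire is exempt, so not every request succeeds."""
    if video.is_parody_or_satire:
        return "kept"     # protected creative speech stays up
    return "removed"      # privacy-policy violation comes down

videos = [
    Video("a1", 0.97, is_parody_or_satire=False),  # unauthorized deepfake
    Video("b2", 0.91, is_parody_or_satire=True),   # clear satire
    Video("c3", 0.40, is_parody_or_satire=False),  # not a likeness match
]
dash = Dashboard()
scan_library(videos, dash)

# The enrolled person's representative reviews the dashboard and files
# requests; each is evaluated individually rather than auto-approved.
decisions = {v.video_id: evaluate_removal_request(v) for v in dash.flagged}
print(decisions)  # {'a1': 'removed', 'b2': 'kept'}
```

The point of the sketch is the two-stage structure: detection is automated and broad, but removal remains a human-initiated, policy-gated decision, which is why some flagged videos stay up.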
The program began as a limited test with YouTube content creators in the fall of 2025. That pilot gave YouTube real-world data on how the detection system performed across a massive and varied video catalog. Based on what the company learned, it expanded access to politicians and journalists in March 2026, two groups that face distinct and serious risks. Politicians are targets for AI-generated disinformation, and journalists face harassment through fabricated videos designed to damage their credibility.
The celebrity expansion announced in April 2026 is the third phase of this rollout, and it is also the broadest in scope. By including talent agencies and management companies, YouTube is acknowledging a practical reality: most A-list celebrities do not personally file content removal requests. Their lawyers and managers do. Giving institutional access to these intermediaries makes the system usable at scale for the entertainment industry.
One critical policy detail shapes the entire program. YouTube explicitly exempts content classified as parody or satire from removal, even when that content features an AI-generated celebrity likeness. This is not a loophole. It is an intentional design choice that reflects the platform's commitment to protecting creative speech. The problem, of course, is that the line between a harmful deepfake and protected satire is contested territory, and YouTube evaluates that distinction case by case during the removal review process.
The urgency behind this kind of tool became very public in May 2024, when actress Scarlett Johansson threatened legal action against OpenAI after a ChatGPT voice called Sky, released alongside the GPT-4o model, bore a striking resemblance to her voice. OpenAI's Sam Altman initially denied the similarity, but the episode made clear that even the most prominent AI companies lacked robust safeguards against unauthorized voice and likeness replication. YouTube's detection tool is a direct industry response to that kind of vulnerability.
Key Details
- YouTube first tested the likeness detection feature with content creators in fall 2025.
- The program expanded to politicians and journalists in March 2026.
- The celebrity and talent agency expansion was reported on April 21, 2026, by Mia Sato at The Verge.
- Removal requests are evaluated against YouTube's existing privacy policy, not automatically approved.
- Content classified as parody or satire is explicitly exempt from removal under the program's rules.
- The Scarlett Johansson and OpenAI dispute in May 2024 was a key catalyst for industry focus on likeness protection.
- As of July 2024, YouTube was already the first major platform to offer a formal mechanism for requesting removal of AI-generated likeness content.
What's Next
The most important thing to watch is whether YouTube expands the program beyond public figures to ordinary users, who face the same deepfake risks but have far fewer resources to pursue removal on their own. Watch also for how YouTube handles edge cases where parody claims are used to shield content that is clearly harassment. The European Union's AI Act already addresses synthetic media at the regulatory level, and U.S. legislation targeting non-consensual deepfakes is moving through Congress, so platforms that build voluntary tools now are getting ahead of what may soon become legal requirements.
How This Compares
Compare YouTube's approach to what TikTok, Instagram, and X have done, which is to say very little that is concrete. Those platforms have published AI content policies and added disclosure labels for synthetic media, but none has built a proactive scanning tool with an opt-in dashboard for public figures to manage their own likeness protection. YouTube is not just ahead by a small margin here. It is operating in a different category entirely when it comes to enforcement infrastructure.
The contrast with OpenAI is also instructive. OpenAI built some of the most powerful voice and image generation tools in existence, and the Johansson incident in May 2024 revealed that the company had no reliable system for preventing those tools from replicating a specific person's voice without consent. YouTube's detection approach runs downstream of that problem, catching synthetic content after it is created and uploaded. That is a meaningful limitation. Ideally, the AI tools generating the deepfakes would have consent safeguards built in at the source, not just at the distribution layer.
Google's broader investment in AI safety research matters here too. YouTube does not exist in isolation. It sits inside a parent company that has published synthetic media guidance, contributed to industry standards, and implemented AI safeguards across products like Search and Gemini. The likeness detection feature is the practical, user-facing expression of those commitments. Other platforms claim similar values, but YouTube is the one that has shipped a working product and expanded it through three rollout phases in under a year.
FAQ
Q: How does YouTube's deepfake detection tool actually work? A: The tool uses automated computer vision and audio analysis to scan YouTube's video library for content that replicates the face or voice of an enrolled individual. When it finds a match, it surfaces the flagged content in a dashboard where the enrolled person or their representative can review it and submit a removal request for YouTube to evaluate.
Q: Can anyone sign up for YouTube's likeness detection program? A: No. The program is currently limited to specific groups of public figures, including content creators, politicians, journalists, and now celebrities along with their talent agencies and management companies. YouTube has not announced a timeline for extending the feature to general users.
Q: Will YouTube remove every deepfake video that a celebrity flags? A: Not automatically. YouTube evaluates each removal request against its privacy policy on a case-by-case basis. Content that qualifies as parody or satire is explicitly protected and will not be removed even if it features an AI-generated likeness, which means some flagged videos will stay up after review.
YouTube's methodical rollout of this tool suggests the company is serious about getting it right rather than just getting credit for trying. As AI-generated media becomes harder to distinguish from real footage, platforms and the tools built around them will face increasing pressure to move faster than regulators do. For the latest coverage of how AI platforms are handling synthetic media, read the latest AI news at AI Agents Daily. Subscribe to the AI Agents Daily newsletter for daily updates on AI agents, tools, and automation.
Get stories like this daily
Free briefing. Curated from 50+ sources. 5-minute read every morning.




