Hollywood stars launched a standard to control AI use of faces and creative work
Hollywood stars George Clooney, Tom Hanks and Meryl Streep backed a new licensing standard, the Human Consent Standard. It lets people control how AI systems use their faces, voices, and creative works.

Hollywood stars and directors are supporting a new licensing standard for artificial intelligence systems. The Human Consent Standard allows people to set rules for how their face, voice, characters, and creative works are used by AI systems.
How the Human Consent Standard Works
The Human Consent Standard gives people three options for controlling their content and image. Each option works as a signal for AI systems: add one special file to your website or profile, and systems will know your preferences. It's similar to how robots.txt tells search engines which pages to index, except this signal is aimed specifically at generative AI and the models trained on content. Ideally, large AI developers will check this file before training their models.
- Full permission — AI can use content freely without restrictions
- Conditional access — a license and payment are required for commercial use
- Complete prohibition — AI cannot use content for any purpose
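To make the robots.txt analogy concrete, here is a minimal sketch of how an AI crawler might check such a consent signal before using a site's content for training. The file layout, the key name `ai-training`, and the three values are illustrative assumptions for this example, not the actual Human Consent Standard format.

```python
# Hypothetical sketch of a consent check, modeled on robots.txt.
# The "ai-training:" key and its values are assumptions, not the
# real Human Consent Standard wire format.

def parse_consent(text: str) -> str:
    """Return the declared policy: 'allow', 'conditional', or 'deny'."""
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("ai-training:"):
            return line.split(":", 1)[1].strip()
    # Be conservative when no signal is present.
    return "deny"

def may_train_on(consent_file: str) -> bool:
    """Decide whether a crawler may use the content for training."""
    policy = parse_consent(consent_file)
    if policy == "allow":
        return True
    # 'conditional' means a license and payment must be negotiated
    # first, so an automated crawler cannot proceed on its own.
    return False

example = "ai-training: conditional\ncontact: rights@example.com"
print(may_train_on(example))  # False: a license is required first
```

The conservative default (treating a missing signal as "deny") is a design choice for the sketch; in practice, a standard would have to specify what crawlers should assume when no file is published.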
Who Launched the Standard
The Human Consent Standard was developed by RSL Media, a nonprofit organization that created the underlying Really Simple Licensing (RSL) Standard last year. It is backed not by obscure names but by world-class stars: George Clooney, Tom Hanks, and Meryl Streep. Their support is a strong signal from the Hollywood establishment about the need to control how creative works are used in the age of AI.
The original RSL Standard was launched as a way for websites and authors to signal to AI systems how they want their content used. Now the standard is expanding to a new level: protecting a person's likeness and creative output themselves. This is especially important for actors, who can be synthesized in videos without their consent, and for musicians, whose voices can be cloned.
Why This Is Relevant Right Now
Without such a standard, creative people remain defenseless against companies that train AI on everything: without asking, without payment, without attribution. This has already affected many actors and musicians whose faces and voices appear in synthesized videos. Legal action is expensive and time-consuming, and results often come too late, after the damage is done.
The Human Consent Standard offers a different path: declare your licensing terms once through the signal file, and the rest happens automatically. Platforms and AI developers will see your choice and respect it, if they want to stay on good terms with the industry and avoid lawsuits. If large platforms and generative AI systems begin to honor it, it could become a de facto standard for the entire industry.
This would reduce conflicts between content creators and companies developing AI. For now, it is unclear whether companies like OpenAI or Meta will voluntarily follow such a standard.
What This Means
The new standard is an attempt to set the rules of the game at the technology level, without government involvement. It shows that the industry is ready to seek compromise and establish its own rules instead of waiting for court decisions and regulation. For authors, actors, and musicians, it is a tool to protect their rights. For AI companies, it is a way to be on the right side of history.