OpenAI unveils teen protection plan in Europe and €500,000 in EMEA grants
OpenAI introduced the European Youth Safety Blueprint and named 12 recipients of the €500,000 EMEA Youth & Wellbeing Grant. The company proposes five pillars around which future rules for teen-facing AI services could be built.

OpenAI has announced two initiatives for the EMEA region: the European Youth Safety Blueprint and the first recipients of the EMEA Youth & Wellbeing Grant program. The company is proposing a framework for regulating AI for adolescents in Europe while directing €500,000 toward practical projects for families, schools, and youth organizations.
Five pillars of the plan
In a document published on May 5, 2026, OpenAI describes an approach to protecting young users that is meant to be practical and grounded in how adolescents actually behave, rather than simply restrictive. The idea is to preserve access to tools useful for education and creativity while building protective mechanisms into them. The company addresses European policymakers and regulators first and foremost, offering not abstract principles but a set of specific directions around which future rules for AI-based services can be built.
- Responsible implementation of AI in education
- Age-appropriate usage scenarios with protective measures and age verification without excessive data collection
- Safety policies for users under 18 with risk assessment and mitigation
- Protection against manipulative or misleading AI responses
- Common standards for clear and accessible parental controls
OpenAI emphasizes that this is not a finished legal code but a working framework for discussion. According to Ann O'Leary, Vice President of Global Policy, today's teenagers will be the first generation for whom AI is part of everyday life, directly influencing learning, creativity, and preparation for the future. The document therefore centers on balance: protecting children from harmful scenarios without cutting them off from tools that are already becoming part of the digital infrastructure of education and will shape expectations of platforms in the coming years.
"Today's youth will be the first generation to grow up with AI as part of everyday life."
Who gets the grants
The second part of the announcement names the first 12 recipients of the EMEA Youth & Wellbeing Grant program. The program was launched in January 2026 with a total fund of €500,000. The money will go to NGOs and research organizations in Europe, the Middle East, and Africa working at the intersection of youth safety, wellbeing, and AI. Notably, OpenAI is funding not only policy research but also practical services that interact with teenagers, parents, teachers, and vulnerable groups right now.
The list of recipients shows that the program is backing very different scenarios. Among them are the Centre for Information Policy Leadership, researching age verification systems and AI-based age assessment; the Ukrainian East Europe Foundation, studying how teenagers in conflict-affected countries use AI for learning and mental health support; Germany's FSM, with AI literacy tools for parents and educators; Kenya's Luma, with an AI tutor for remote communities; and Italy's Telefono Azzurro, with the AzzurroChat platform for adolescent digital wellbeing. Other funded directions include evaluating chatbots as a channel for directing teenagers to crisis services, resources for families, helping girls from disadvantaged groups master AI skills, and systems supporting victims of human trafficking and gender-based violence.
In other words, this is not about a single "kids mode" for models, but rather a set of infrastructural and social solutions where AI is viewed simultaneously as both a risk and a helping tool.
Broader course
OpenAI separately connects the new initiatives to its broader policy on protecting young users. The company points to its existing behavioral principles for models serving users under 18, an age-prediction model, parental controls, and materials for families. The approach, it says, is being shaped with input from external experts, including the Expert Council on Well-Being and AI and the Global Physician Network. This matters because the discussion about teenagers and AI quickly goes beyond interface limitations and comes down to psychology, pedagogy, privacy, and digital rights.
In Europe, OpenAI already works with governments and institutions through the Education for Countries program and supports research by the University of Tartu in Estonia on measuring educational outcomes from AI use. The company was also among the founders of the Beneficial AI for Children coalition and joined the Vatican's declaration on the rights and dignity of children in the age of AI. Seen in that context, the current announcement is not an isolated grant drop but part of an effort to establish a position in the conversation about what safe AI for minors should look like.
What this means
OpenAI is trying to occupy two positions at once: offering European regulators its own regulatory framework while investing in local projects that test these ideas in practice. For the market, this signals that adolescent safety is becoming a distinct direction in AI policy, with grants, standards, age verification, and stricter expectations for products that enter schools and families. For EdTech services and product teams, it means growing requirements for verifiable protective mechanisms, not just general safety promises.