Relational Positioning, Dependency & Exclusivity Policy (#RPD)

Statement

One of the core challenges in youth AI safety is structural: experts in child and adolescent development are often not the ones drafting the technical rules that govern AI systems, while the teams building those systems do not always have sufficient developmental or psychosocial expertise to define those risks on their own.

At everyone.ai, our work is designed to help close that gap. Through our research, our operationalisation tools, and the iRAISE coalition, we bring together expertise across child development, psychology, education, neuroscience, behavior, and AI so that youth-relevant risks can inform product design and policy in more direct and operational ways, including through AI tools. Our aim is to move the field from broad conceptual principles toward guidance that can be implemented and monitored in real systems, while anticipating emerging risks and equipping stakeholders to address them.

OpenAI’s recent release of prompt-based teen safety policies for gpt-oss-safeguard, their open-weight safety model, is a meaningful step in this direction, and one that points to a broader need. One of the longstanding weaknesses in this field has been the difficulty of translating research and expert judgment into standards that technical teams can actually use. Policy formats of this kind are valuable because they create a more practical bridge between multidisciplinary expertise and technical implementation.

At the same time, content-focused policy is a critical foundation but only one layer of youth safety. Some youth-relevant risks arise not from generated content alone, but from the way the system behaves within the interaction itself. Model behavior can shape user expectations, patterns of engagement, perceived reciprocity, and forms of reliance over time. This matters especially for youth, given developmental factors that influence how relational framing, anthropomorphic cues, and repeated interaction patterns affect children's and adolescents' development and wellbeing.

In light of our recent findings on adolescents and anthropomorphic AI, everyone.ai drafted an initial behavioral policy focused on relational exclusivity and overreliance-related risk. This policy was informed by the first iRAISE Lab convening, where researchers and industry experts identified this as a priority lever for youth safety.

A related gap concerns threshold-setting. We need to move past risk assessment toward better methods for determining where healthy interaction gives way to concern and where stronger safeguards are justified. This is part of the work we are continuing to develop through research aimed at building expert consensus around acceptable, concerning, and unhealthy patterns of interaction at an operational level.

Looking ahead, we see a major next step in the need to map youth-relevant risks and benefits more systematically across products, use cases, and developmental stages. We see this as an opportunity for a broader joint effort with OpenAI, so that technical teams and domain experts can operate from a clearer, more developmentally informed understanding of where benefits are likely, where risks emerge, and what kinds of safeguards are warranted in response. That kind of mapping is essential if the field wants to move beyond general principles toward more consistent implementation.

Taken together, this points to what the field now needs: more iteration, more transdisciplinary collaboration, and more evidence-informed consensus to support beneficial uses of AI for young people while identifying, constraining, and preventing youth-relevant risks through clearer, developmentally informed behavioral standards for AI systems.
