Today, the Tech Coalition held its third multi-stakeholder briefing on generative AI and its impacts on online child sexual exploitation and abuse (OCSEA) at Microsoft's office in Brussels, Belgium. This event focused specifically on the EU context, following previous briefings in Washington, D.C., and London.
By gathering child safety experts, policy-makers, EU officials, and industry, the Tech Coalition aims to foster a collaborative approach to understanding and mitigating risks associated with generative AI in online spaces where children are present.
The briefing in Brussels outlined the robust ways in which Tech Coalition members have incorporated safety-by-design mechanisms into their generative AI products and tools, while also providing a platform for all stakeholders to discuss shared challenges, recent insights on the misuse of generative AI to create and distribute child sexual abuse material (CSAM), and opportunities for future engagement.
Among the participants were representatives from the European Commission, European Parliament, Council of Europe, Europol, Interpol, Coimisiún na Meán, INHOPE, Center for Democracy & Technology Europe, and Missing Children Europe. They were joined by 12 Tech Coalition member companies: Adobe, Amazon, Bumble, Google, Meta, Microsoft, OpenAI, Public Interest Registry, Snap Inc., Spotify, TikTok, and Zoom.
Driving Collaboration for a Safer Future
These briefings are part of the Tech Coalition’s ongoing commitment to creating a safer digital environment for children as generative AI technologies evolve. By facilitating targeted conversations in key regions, we aim to build a collective understanding of how industry, policy-makers, and civil society can work together to address the pressing risks of OCSEA in the age of AI.
Our series of briefings has already led to several new multi-stakeholder efforts, including:
- Developing a member resource outlining considerations for companies exploring ways to test for and mitigate generative AI OCSEA risks, with input from the US Department of Justice.
- Reviewing the Industry Classification System to consider whether any updates are required to address the impact of AI-generated OCSEA.
- Exploring how our Lantern program may be used by industry to securely share signals related to AI-generated OCSEA.
- Funding research specifically on generative AI and OCSEA, to fill an identified gap (more on this below).
- Developing a reporting template for members to use when referring cybertip reports of AI-generated OCSEA to the National Center for Missing and Exploited Children in the US, with input from NCMEC itself. The template has also received input from the UK National Crime Agency, and following this meeting we will continue to iterate on it with further input from INTERPOL and Europol.
As we continue to learn from and adapt to these advancements, the Tech Coalition remains dedicated to leading initiatives that promote safety by design and encourage proactive, innovative solutions to protect children online.
Announcing New Generative AI Research to Advance Child Safety
The Tech Coalition Safe Online Fund is now entering its fourth year, with the latest funding round taking a more targeted approach to amplify impact by homing in on a single topic: generative AI.
At the Brussels event, the Tech Coalition announced two new research projects to be awarded under the Fund, aimed at deepening understanding of the complex dynamics between generative AI and online child sexual exploitation and abuse.
Along with a third project previously announced, these projects will together address the issue from diverse angles, from young people’s engagement with AI to the misuse of generative AI to produce and distribute CSAM. The projects reflect the Coalition’s commitment to rigorous, data-driven approaches to emerging child safety challenges:
- University of Kent: Initially introduced at the London briefing earlier this year, this project will explore the proliferation of AI-generated CSAM and its impact on attitudes and behaviors among those who engage with CSAM. The research will also examine potential implications for both prevention and perpetration dynamics within the evolving AI landscape.
- Western Sydney University’s Young and Resilient Research Centre: This project, titled “Youth Voices on AI: Shaping a Safer Digital Future,” will focus on directly involving young people in AI safety and OCSEA prevention. It aims to align AI policies and development with the values, expectations, and safety concerns of young users.
- SaferNet Brasil: Addressing the rising misuse of generative AI by young people, this project will collect insights from adolescents in Brazil on their experiences and perceptions. By engaging young participants through interviews and workshops, the research will create a nuanced understanding of these emerging practices and inform the development of child-centered safety policies.
For further details on these projects and the broader Safe Online Research Fund, please visit Safe Online’s announcement.