Addressing AI-generated child sexual exploitation and abuse

The rise of generative artificial intelligence (AI) is introducing new avenues for the creation and spread of child sexual abuse material (CSAM), posing significant challenges for tech companies.


The diversity of tools, from image generators to AI companions, and the constant evolution of the technology make it harder to mitigate risk, enforce policies, and detect emerging threats.

Through the Tech Coalition, industry leaders are coming together to reduce risk, improve response, and design safer tools and platforms to better protect children from AI-generated abuse and exploitation.


A growing threat to child safety

Generative AI is being exploited in many ways:

  • Image, video, and audio generators can create new CSAM and modify existing CSAM and sexualized imagery of minors.
  • Deepfake websites and apps that digitally remove clothing from photos of real people can sexualize existing images of children and create CSAM.
  • Text generators can write abuse scenarios, create instruction guides for exploiting children, roleplay child sexual abuse, or facilitate the distribution of CSAM at scale.
  • Bad actors build and sell custom CSAM generators on online marketplaces and other platforms.

As a result, AI-generated content is becoming more widespread, realistic, and severe, including on the surface web. These rapidly evolving threats demand urgent, collective action.

A model for child safety by design in generative AI

Developed through cross-industry collaboration, this model reflects how some Tech Coalition members are working to embed child safety into generative AI systems — from early risk assessment to post-deployment safeguards.

The Tech Coalition supports industry members to turn these principles into practice through shared resources, knowledge exchange, and joint innovation.

The tech industry’s approach to risk mitigation

Understand risks

  • Research
  • Testing
  • Internal subject matter expertise
  • Multi-stakeholder collaboration

Embed protections in model design

  • Policies
  • Governance & AI principles
  • Remove harmful training data
  • Test models rigorously

Deploy safeguards

  • Input filters
  • Output filters
  • Supporting industry-wide initiatives

Iterate

  • Commitment to continuously iterate
  • Partnerships with external expert stakeholders

Cross-industry collaboration

  • Knowledge Sharing Working Groups
  • Signal Sharing 
  • Briefings
  • Cross-Industry Commitments

Supporting the industry response to AI-generated online child sexual exploitation and abuse (OCSEA)

Through the Tech Coalition, industry is collaborating to improve the detection and prevention of AI-generated CSAM. Together, we’re creating an ecosystem where safety by design, threat intelligence, and cross-sector collaboration drive real impact. 

Sharing and testing solutions

  • Our member-exclusive resources help companies develop effective strategies, policies, and tools to identify and prevent AI-generated CSAM on their platforms. 
  • Tech Coalition members meet regularly to discuss emerging trends, challenges, and solutions through our AI-focused knowledge sharing groups.
  • Lantern, our cross-platform signal-sharing program, enables companies to better detect AI-generated OCSEA material for escalation and action.

Convening stakeholders across sectors 

  • The Tech Coalition has hosted a series of briefings on generative AI for key stakeholders from governments, law enforcement, and civil society to build shared understanding and coordinated responses.  
  • Our member-exclusive webinars have featured global experts discussing new threats and promising online safety solutions.

Advancing research and resources   

Our Safe Online Research Fund is supporting independent, actionable research on topics from youth engagement with AI to the misuse of generative AI to produce CSAM.

Resources and support

For those experiencing or witnessing AI-facilitated abuse, support is available across the world. The following resources provide additional information and tools for responding to and reporting AI-generated OCSEA:
