The diversity of tools, from image generators to AI companions, combined with constantly evolving technology, makes it more challenging to mitigate risk, enforce policies, and detect emerging threats.
Through the Tech Coalition, industry leaders are coming together to reduce risk, improve response, and design safer tools and platforms to better protect children from AI-generated abuse and exploitation.

A growing threat to child safety
Generative AI is being exploited in many ways:
- Image, video, and audio generators can create new CSAM and modify existing CSAM and sexualized imagery of minors.
- Deepfake websites and apps that digitally remove clothing from photos of real people, making victims appear “nude,” can sexualize existing images of children and create CSAM.
- Text generators can write abuse scenarios, create instruction guides for exploiting children, roleplay child sexual abuse, or facilitate the distribution of CSAM at scale.
- Bad actors offer to build and sell custom CSAM generators to buyers on online marketplaces and other platforms.
As a result, AI-generated content is becoming more widespread, realistic, and severe, including on the surface web. These rapidly evolving threats demand urgent, collective action.
A model for child safety by design in generative AI
Developed through cross-industry collaboration, this model reflects how some Tech Coalition members are working to embed child safety into generative AI systems — from early risk assessment to post-deployment safeguards.
The Tech Coalition supports industry members in turning these principles into practice through shared resources, knowledge exchange, and joint innovation.
Tech Industry’s Approach to Risk Mitigation
Understand Risks
- Research
- Testing
- Internal subject matter expertise
- Multi-stakeholder collaboration
Embed Protections in Model Design
- Policies
- Governance & AI principles
- Removal of harmful training data
- Rigorous model testing
Deploy Safeguards
- Input filters
- Output filters
- Support for industry-wide initiatives
Iterate
- Commitment to continuous iteration
- Partnerships with external expert stakeholders
Cross-Industry Collaboration
- Knowledge-sharing working groups
- Signal sharing
- Briefings
- Cross-industry commitments
Supporting the industry response to AI-generated OCSEA
Through the Tech Coalition, industry is collaborating to improve the detection and prevention of AI-generated CSAM. Together, we’re creating an ecosystem where safety by design, threat intelligence, and cross-sector collaboration drive real impact.
Sharing and testing solutions
- Our member-exclusive resources help companies develop effective strategies, policies, and tools to identify and prevent AI-generated CSAM on their platforms.
- Tech Coalition members meet regularly to discuss emerging trends, challenges, and solutions through our AI-focused knowledge sharing groups.
- Lantern, our cross-platform signal-sharing program, enables companies to better detect AI-generated OCSEA for escalation and action.
Convening stakeholders across sectors
- The Tech Coalition has hosted a series of briefings on generative AI for key stakeholders from governments, law enforcement, and civil society to build shared understanding and coordinated responses.
- Our member-exclusive webinars have featured global experts to discuss new threats and promising online safety solutions.
Advancing research and resources
Our Safe Online Research Fund is supporting independent, actionable research on topics ranging from youth engagement with AI to the misuse of generative AI to produce CSAM.
Resources and Support
For those experiencing or witnessing AI-facilitated abuse, support is available across the world. The following resources provide additional information and tools for responding to and reporting AI-generated OCSEA:
- Know2Protect is a U.S. Department of Homeland Security initiative that provides children, teens, and parents with information about recognizing and reporting online enticement, including AI-facilitated enticement.
- INHOPE provides global information about AI-facilitated abuse, as well as access to hotlines across the world where incidents can be reported.
- The National Center for Missing & Exploited Children’s Take It Down service helps remove nude, partially nude, or sexually explicit images or videos of minors from the internet.
- The Internet Watch Foundation provides detailed information about AI-generated child abuse material, as well as a platform for reporting such abuse.
- Thorn has a guide for parents about protecting children from deepfake and AI-generated abuse.
Why join the Tech Coalition?
No company should have to face these threats alone. Our members benefit from:
- Access to industry insights and shared learnings
- Opportunities to collaborate through working groups
- First access to new child safety tools and resources
- Collective influence on safety standards
Join our global community of child safety professionals from tech companies, big and small, working together to disrupt AI-generated child sexual abuse material and protect children.
Explore membership