Initiate 2026: a child safety tooling hackathon

Initiate brings together engineers, technical specialists, and child safety experts from across the tech ecosystem to move beyond conversation and into implementation.


Overview

Last week, the Tech Coalition held Initiate 2026, our fourth annual hackathon, to accelerate industry-wide action against online child sexual exploitation and abuse (OCSEA).

Hosted by OpenAI at its San Francisco headquarters, Initiate 2026 demonstrated how leading technology companies are investing their time, expertise, and proven safety tools to raise the bar for the entire industry. Over two days, 56 people from 16 Tech Coalition member companies came together to test, strengthen, and expand access to safety technologies already being used at scale, with the shared goal of making them more effective and more widely adopted.

Tech tracks


This year’s hackathon focused on three technical tracks, each centered on leading tools that companies are contributing to help others build stronger child safety capabilities: open source classifier tools with OpenAI, Google’s Content Safety API, and Meta’s Lantern signal-sharing program.

Across all tracks, companies brought senior engineers on site to support real-world testing, integration, and improvement of tools, accelerating pathways from experimentation to deployment. This reflected a core objective of Initiate: lowering barriers to adoption of effective child safety tooling so more companies can detect, disrupt, and respond to OCSEA at scale.

Open source classifier tools


OpenAI supported participants in testing and applying two complementary tools, ModAPI and gpt-oss-safeguard, to strengthen text-based safety interventions, including the detection of grooming and other harmful behaviors. The track explored how combining fast automated classification with deeper policy reasoning can improve detection accuracy while reducing reliance on human review.

ModAPI provides fast, hosted classification signals for text and images that can be used for real-time gating, logging, and thresholding. It returns category flags and confidence scores that support scalable moderation workflows. gpt-oss-safeguard is an open-weight safety model that applies user-provided policies at inference time. As a reasoning model, it enables deeper evaluation and allows teams to balance latency and depth through custom policy baselines.

Used together, these tools allow companies to quickly filter content with ModAPI and escalate higher-risk material to gpt-oss-safeguard for more nuanced analysis. This layered approach mirrors how many platforms operate in practice and supports more accurate, consistent moderation decisions.
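The layered flow described above can be sketched as follows. This is a minimal illustration, not real integration code: `fast_classify` stands in for a hosted classifier like ModAPI, and `deep_policy_review` stands in for a policy-reasoning model like gpt-oss-safeguard; both names, the scoring logic, and the threshold are assumptions for the sketch.

```python
# Sketch of layered moderation: a cheap, fast score gates content, and
# only higher-risk items are escalated for slower, policy-aware review.
# Both classifier functions are hypothetical stubs, not real APIs.

ESCALATION_THRESHOLD = 0.5  # tuned per platform in practice

def fast_classify(text: str) -> float:
    """Stub for a fast hosted classifier: return a risk score in [0, 1]."""
    risky_terms = {"meet alone", "keep it secret", "don't tell"}
    hits = sum(term in text.lower() for term in risky_terms)
    return min(1.0, hits / 2)

def deep_policy_review(text: str, policy: str) -> str:
    """Stub for a reasoning model applying a policy at inference time."""
    # Placeholder decision logic standing in for model output.
    return "escalate_to_human" if "secret" in text.lower() else "allow"

def moderate(text: str, policy: str) -> str:
    score = fast_classify(text)
    if score < ESCALATION_THRESHOLD:
        return "allow"  # fast path: most traffic never needs deep review
    return deep_policy_review(text, policy)  # escalate higher-risk content
```

The design point mirrored here is the cost split: the fast path handles the bulk of traffic, so the expensive reasoning step only runs on the small fraction of content that crosses the threshold.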

Eleven companies participated in this track, with most focusing on gpt-oss-safeguard. One team experimented with applying safeguards to chat moderation scenarios, exploring how contextual signals and policy interpretation can inform assessments over time. A key takeaway was the model’s ability to translate lengthy internal policy documents into enforceable moderation prompts, reducing manual prompt engineering and accelerating safety policy iteration.
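The policy-to-prompt step might look like the sketch below: an internal policy document is reduced to its enforceable clauses and embedded in a prompt a policy-reasoning model could apply. The template, function name, and clause-filtering rules are all assumptions for illustration, not gpt-oss-safeguard’s actual prompt format.

```python
# Hypothetical sketch: turn an internal policy document into a
# moderation prompt for a policy-reasoning model. The template and
# filtering rules are illustrative assumptions.

PROMPT_TEMPLATE = """You are a content policy reviewer.

Policy:
{policy}

Classify the following content as ALLOW, FLAG, or ESCALATE,
and cite the policy clause that applies.

Content:
{content}
"""

def build_moderation_prompt(policy_doc: str, content: str) -> str:
    # Drop blank lines and '#' comments so only enforceable clauses remain.
    clauses = [line.strip() for line in policy_doc.splitlines()
               if line.strip() and not line.strip().startswith("#")]
    policy = "\n".join(f"- {c}" for c in clauses)
    return PROMPT_TEMPLATE.format(policy=policy, content=content)
```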

While additional work is needed to fully interpret interleaved context, some participants expressed optimism about moving these tools toward production use, signaling near-term impact on child safety operations.

Participants surfaced actionable feedback on rate limits, contextual inference, and scalability, directly informing improvements to OpenAI’s safety tooling and integration pathways. Several teams left with clear next steps toward production pilots.

Advancing established tooling

Google’s Content Safety API track focused on its embeddings-based model, which processes numerical representations of images rather than raw image data. This approach offers stronger privacy protections, lower latency, and higher throughput, making it well suited for high-volume safety workflows.

During the hackathon, participating companies wrote code to generate embeddings and used an API simulator to test integrations before deployment. Six companies took part in this track, and all successfully tested the tool. Several participants began onboarding the API, while existing users expanded their use cases. 
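An embeddings-first client along the lines described above might look like this sketch. The random-projection “model” and payload shape are stand-ins for illustration only; real integrations would use Google’s own embedding model and the Content Safety API’s actual request format.

```python
import math
import random

# Sketch of an embeddings-first client: raw pixels are reduced to a
# fixed-length numeric vector locally, and only that vector would be
# sent to the hosted classifier. The random projection below is a
# stand-in for a trained embedding model, not Google's actual model.

EMBED_DIM = 8  # real models use hundreds of dimensions

def embed_image(pixels: list[float], dim: int = EMBED_DIM) -> list[float]:
    """Project pixel values to a fixed-size, L2-normalized embedding."""
    rng = random.Random(42)  # fixed seed stands in for trained weights
    vec = []
    for _ in range(dim):
        weights = [rng.uniform(-1, 1) for _ in pixels]
        vec.append(sum(w * p for w, p in zip(weights, pixels)))
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def make_request(pixels: list[float]) -> dict:
    """Build the payload a client would send: the embedding, never pixels."""
    return {"embedding": embed_image(pixels)}
```

The privacy and throughput benefits follow from the same property: the raw image never leaves the client, and a short numeric vector is far cheaper to transmit and score than full image data.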

One member company also explored emerging use cases, including moderation of videos and GIFs using the Content Safety API’s video endpoint, and identified opportunities to refine implementation approaches.

While experimentation will continue, participants left with clear, concrete next steps. Google track leads also identified ways to further develop the API based on participant feedback, reinforcing a feedback loop between tool builders and implementers that directly strengthens child safety outcomes.

Increasing Lantern’s reach


The Lantern track, run by Meta, focused on expanding participation in industry-wide signal sharing. The track had two goals: onboarding companies already participating in Lantern to the API, and improving the ThreatExchange user interface for companies not yet integrated. 

Nine companies participated. Three successfully used the API for the first time, establishing proofs of concept and laying the groundwork for follow-up integration. Five additional members worked through key onboarding steps, positioning them to potentially join and contribute to Lantern this year.
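The core mechanic of signal sharing can be sketched as below: one company shares a hash plus a tag, and another matches its own content against the shared set. The record fields, function names, and matching logic are assumptions for illustration, not Lantern’s or ThreatExchange’s actual schema or API.

```python
import hashlib

# Sketch of hash-based signal sharing in the spirit of Lantern: only a
# content hash and a tag are shared, never the content itself. Record
# shape and matching logic are illustrative assumptions.

def signal_for(content: bytes, tag: str) -> dict:
    """Produce a shareable signal: a SHA-256 digest plus a label."""
    return {"sha256": hashlib.sha256(content).hexdigest(), "tag": tag}

def match_local_content(shared_signals: list[dict],
                        local_items: list[bytes]) -> list[dict]:
    """Return shared signals that match locally observed content."""
    known = {s["sha256"]: s for s in shared_signals}
    hits = []
    for item in local_items:
        digest = hashlib.sha256(item).hexdigest()
        if digest in known:
            hits.append(known[digest])
    return hits
```

The design choice worth noting is that matching works entirely on digests, so participants can act on each other’s signals without ever exchanging the underlying material.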

One team shared their experience exploring Lantern integration and highlighted how testing shared signals helped clarify internal workflows and next steps toward broader adoption.

Participants also provided detailed feedback on the ThreatExchange UI, validating ongoing improvements and identifying opportunities to enhance usability and open source integration. These efforts support Lantern’s long-term role as shared safety infrastructure, enabling faster, more coordinated responses to OCSEA across the industry.

Connecting beyond the code

Initiate is designed not only to advance tooling, but to strengthen the community responsible for implementing it. Throughout the hackathon, child safety professionals connected with peers to share candid insights and practical lessons.

Lunch discussions surfaced shared challenges around generative AI, legal and privacy considerations, and voice chat. Non-technical attendees also engaged in structured conversations on tooling effectiveness, safety by design, and moderator wellness.

Discussions highlighted the value of designing safety systems that support frontline teams and the importance of close collaboration between trust and safety staff, product managers, and investigators to fully understand risks and impacts.

Looking ahead

Initiate 2026 reinforced a clear message: when companies invest their expertise and proven tools through the Tech Coalition, the entire industry becomes better equipped to protect children online. The progress made during the hackathon translates directly into stronger detection, faster response, and more consistent safety standards across platforms.

This work does not end with the event. In the months ahead, the Tech Coalition will continue supporting members as they move from testing to implementation, refine shared tools, and expand participation in collective safety efforts. Together, we are building the technical foundations and developing the professional community needed to combat online child sexual exploitation and abuse at scale.