
Advancing child safety through signal sharing

Every day, the world’s leading tech companies share critical intelligence on threats to child safety through Lantern, using that information to make their platforms safer by removing abusive material and bad actors.

Why Lantern was created

Bad actors can use multiple platforms in their attempts to distribute abusive imagery and exploit children online. Through Lantern, tech companies securely and responsibly share intelligence and threat indicators related to online child sexual exploitation and abuse (OCSEA), helping them detect and address harm that might otherwise have gone unnoticed on their platforms.

Before Lantern, no reliable framework existed to coordinate industry efforts against predators exploiting multiple platforms to evade detection. Lantern now bridges that gap, strengthening collective defense across the sector.

How Lantern works


When a company detects OCSEA on its platform, it takes appropriate action to uphold its child safety standards. Through Lantern, the company can also securely share a signal—information that may help other companies identify OCSEA occurring on their own platforms. 

Companies that receive signals carefully assess the information and investigate whether there has been a policy violation on their own platform before taking action. These signals can help companies identify new forms of abuse or provide additional context to better understand activity already under review.  

Lantern signals can provide a crucial piece of the puzzle to uncover active threats to children.

What are signals?

Signals are threat indicators, intelligence, or other information that may help other companies identify OCSEA activity on their platforms. Common forms of signals include hashes, URLs, and usernames. Participating companies choose which signals to share and ingest based on their policies, legal obligations, use cases, and assessments of potential relevance.

Lantern supports two primary categories of signals: content-based and incident-based.
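
To make the shape of a signal concrete, here is a minimal sketch of how a participant might represent an ingested signal internally. The field names are hypothetical and are not Lantern’s actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical internal representation of an ingested signal.
# Field names are illustrative only; they are not Lantern's actual schema.
@dataclass
class Signal:
    kind: str                 # e.g. "hash", "url", "username", "keyword"
    category: str             # "content-based" or "incident-based"
    value: str                # the indicator itself
    tags: list[str] = field(default_factory=list)  # context for triage

# A content-based URL signal as it might look after ingestion.
example = Signal(
    kind="url",
    category="content-based",
    value="https://example.com/path/to/abusive-content",
    tags=["csam", "hosting"],
)
print(example.kind, example.category)
```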

Content-based signals

Content-based signals relate to material being shared or discussed—including known child sexual abuse material (CSAM), grooming manuals, or other illegal images, video, audio, or text.

These signal types may include: 

  • Hashes of known CSAM that can be used to detect and prevent redistribution
  • URLs linking to webpages that host OCSEA content
  • Keywords or codewords used by offenders to evade detection when sharing or engaging with CSAM

Content-based signals play a crucial role in preventing the rapid dissemination of harmful content, particularly CSAM, across multiple platforms.
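
As a rough illustration of hash-based detection, the sketch below checks an uploaded file against a hypothetical local store of hash signals. Production systems typically use perceptual hashes such as PhotoDNA or PDQ, which also match visually similar media; the SHA-256 digest and sample value here are placeholders for illustration only:

```python
import hashlib

# Hypothetical local store of hash signals ingested from a program like Lantern.
# The sample value below is a placeholder, not a real signal.
known_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def matches_known_hash(file_bytes: bytes) -> bool:
    """Return True if the file's digest matches an ingested hash signal."""
    return hashlib.sha256(file_bytes).hexdigest() in known_hashes

upload = b"example upload bytes"  # placeholder for uploaded file contents
if matches_known_hash(upload):
    print("Match against a shared signal: quarantine and escalate for review.")
else:
    print("No match; normal processing continues.")
```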

For instance, predatory actors may store CSAM with hosting providers and share URLs to that content on social media apps.

When a platform identifies a URL hosting CSAM, it reports the content to the relevant authorities and can then alert other participating companies through Lantern. This allows hosting providers and other platforms to act quickly and remove the content, helping stop the spread of abuse across the internet.
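
A receiving platform might screen URLs against ingested signals along the lines of the sketch below. The names are hypothetical, and, as described above, a match would trigger investigation and human review rather than automatic enforcement:

```python
from urllib.parse import urlsplit

# Hypothetical set of URL signals ingested from other participants.
ingested_url_signals = {"https://example.com/path/to/abusive-content"}

def normalize(url: str) -> str:
    """Lowercase the scheme and host so trivial variants still match."""
    parts = urlsplit(url.strip())
    return f"{parts.scheme.lower()}://{parts.netloc.lower()}{parts.path}"

def flag_for_review(url: str) -> bool:
    """Flag a URL for investigation if it matches an ingested signal.

    A match triggers human review, not automatic enforcement: the receiving
    company first confirms a policy violation on its own platform.
    """
    normalized_signals = {normalize(u) for u in ingested_url_signals}
    return normalize(url) in normalized_signals

print(flag_for_review("HTTPS://EXAMPLE.COM/path/to/abusive-content"))  # True
```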

Incident-based signals

Incident-based signals relate to identified events or exchanges that violate child safety policies. By sharing these signals, companies help one another detect evolving threats and respond more effectively—closing the gaps between platforms that bad actors aim to exploit. Examples include: 

  • Accounts engaged in grooming or solicitation of minors to create explicit content
  • Indicators of financial sextortion
  • Patterns suggesting coordinated abuse across services

Lantern’s impact

Signals shared in Lantern are producing tangible outcomes and helping protect children from cross-platform abuse.

In 2024:

  • 296,336 new signals uploaded into Lantern
  • 1,064,380 cumulative signals uploaded
  • 102,082 accounts received enforcement actions as a result of signals shared in Lantern
  • 135,077 CSEA URLs blocked or removed
  • 81 instances of contact offenses flagged
  • 45 trafficking instances flagged

FAQs

Who is eligible to apply to Lantern?

Any tech company or U.S.-based financial institution that meets the specified participation criteria and demonstrates a firm commitment to combating OCSEA is eligible to apply. Companies that already have established detection capabilities on their platforms are best positioned to join.

Technology vendors, NGOs, researchers, law enforcement, governments, or other entities are not eligible to become participants in Lantern. This decision aligns with the program’s primary objective of aiding industry in voluntary efforts to help keep their platforms and users safe.

How do I apply to Lantern?

Please fill out our interest form with your information, and we will be in contact after reviewing your eligibility.

How much does it cost to join Lantern?

Lantern is free for all participants. The program is fully funded by the Tech Coalition and its member companies.

Where is the information stored, and who hosts the technology for Lantern?

Lantern is hosted on the ThreatExchange platform, which Meta developed to let organizations share information securely and in a privacy-compliant manner. Meta has implemented comprehensive security measures to protect the confidentiality, integrity, and availability of all data stored on the ThreatExchange platform.

ThreatExchange was selected for use in the Lantern program after thorough review by various working groups within the Tech Coalition, as well as assessments related to security and privacy.

Lantern participants

30 companies currently participate in our Lantern program, including:

Participant logos shown include Block, Discord, Dropbox, Nintendo, OpenAI, Quora, and Reddit.

Get in touch

“Lantern has improved the way we combat child exploitation by enabling us to see the bigger picture across platforms. Through this coordinated effort, we’ve been able to take decisive action against harmful accounts and behavior more efficiently and at a larger scale. By sharing threat intelligence with other platforms, we’re not just protecting Discord, we’re part of a cross-platform collaboration that’s creating a safer internet as a whole.”


Jud Hoffman

Vice President of Trust & Safety

Discord