
February 7, 2025

Telegram App Says It Prioritizes Child Safety. Its Bots Tell a Different Story

Image: Zeina Saleem + AIxDESIGN & Archival Images of AI / Better Images of AI / Distortion Series / CC-BY 4.0.

Investigating the scope and impact of AI-generated child sexual abuse material

Image by Rodolfo Almeida/Núcleo.

An investigation by Núcleo found at least 23 active Telegram bots that can create AI-generated child sexual abuse material, challenging the company's recent promises to crack down on such criminal content

Read in Portuguese.


Using nothing more than the messaging app Telegram, users can easily create dozens, even hundreds, of artificially generated images of child sexual abuse material (CSAM), according to an investigation by Núcleo conducted with support from the Pulitzer Center's AI Accountability Network.

The investigation identified 83 Telegram bots through keywords associated with deepnudes, the practice of using AI to generate sexual or pornographic images of a person without their consent.

Of those, 33 were active and functional, and 23 of them (70%) were capable of creating child sexual exploitation material: 21 with both children and adolescents, and two with adolescents only.




Fifty of the bots were inactive (with no updates from their developers), while another 10 refused to generate sexual images of minors, although they still allowed the creation of non-consensual adult deepfakes, a practice criminalized in countries such as Brazil, South Korea, Canada, and the United Kingdom.

Telegram says its bots are accessed by more than 400 million users worldwide.

Telegram, a messaging service created by a Russian national in 2013 and currently headquartered in Dubai, has for years been the target of criticism and investigations over its notably lenient moderation, including of cybercrime.

In August 2024, French authorities formally accused the company of failing to cooperate with investigations into child exploitation and other crimes. Telegram's founder and CEO, Pavel Durov, was arrested and later released.

At the time, Telegram promised to take online child sexual abuse cases more seriously, including collaborating with authorities to provide user data.

Contacted by Núcleo through its "Press Bot" channel, a Telegram spokesperson stated that the platform has a "zero-tolerance policy for illegal pornography" and uses "a combination of human moderation, AI tools, machine learning, user reports, and trusted standards" to detect and combat abuse.

The company also said that "all media uploaded to Telegram's public platform are analyzed against a database of CSAM content hashes removed by Telegram moderators since the app's launch."
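That description amounts to hash matching: each uploaded file is fingerprinted and compared against a list of fingerprints of previously removed material. The sketch below is a minimal, generic illustration of that technique, not Telegram's actual system; production deployments rely on perceptual hashes (such as the lists distributed by the IWF) rather than plain cryptographic hashes, and the hash list here is hypothetical.

# Generic sketch of hash-based matching of uploaded media against a list of
# known-bad fingerprints. Hypothetical example only: Telegram's real pipeline
# is not public, and deployed systems use perceptual hashes (e.g., IWF hash
# lists), not plain SHA-256.
import hashlib

# Hypothetical set of hashes of previously removed files (placeholder values).
KNOWN_BAD_HASHES = {
    "placeholder-hash-1",
    "placeholder-hash-2",
}


def sha256_of_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def should_block(path: str) -> bool:
    """Return True if the uploaded file's hash matches a known-bad entry."""
    return sha256_of_file(path) in KNOWN_BAD_HASHES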

Telegram says that, as of Feb. 7, it had removed more than 2.8 million groups and channels in 2025.

Telegram's full statement

Telegram has a zero-tolerance policy for illegal pornography. Telegram uses a combination of human moderation, AI and machine learning tools, and reports from trusted users and organizations to combat illegal pornography and other abuses on the platform.

All media uploaded to Telegram's public platform is analyzed against a database of CSAM content hashes removed by Telegram moderators since the app's launch. Telegram is able to strengthen this system with additional data sets provided by the IWF (Internet Watch Foundation) to ensure that Telegram remains a safe space for its users.

Telegram is expanding the list of trusted organizations it works with. For example, Telegram is in negotiations to join NCMEC (the National Center for Missing and Exploited Children).

Daily reports on Telegram's efforts to remove CSAM are published here: Stop Child Abuse. So far in January, more than 44,000 groups and channels have been removed from Telegram for involvement with child abuse material.

If you've found any such content on Telegram, I'd really appreciate it if you could share the links so that the moderators can investigate them.

More information about Telegram moderation and statistics is available here: Telegram Moderation Overview.

The bot dynamics

Our investigation revealed that some of these tools use AI to simulate a person's nudity from real images, and can even create artificial sexual content involving minors. All a user needs to do is upload an image to the bot, which spits out an artificially generated result.

Other tools follow the more traditional chatbot model, generating content from text descriptions, known in generative AI jargon as prompts.

There is no definitive explanation for why AI companies have not built effective safeguards to block the production of this sort of illegal content from the start, but the most plausible hypothesis is that these models were trained on adult pornography and refined with real images of children, often used without consent.

In many cases, the 23 tools swapped children's heads onto adult bodies, often sourced from explicit material. Some bots, however, generated content that preserved childlike features, which may indicate that real images of children were present in the datasets used to train the underlying models.

This was the case with a version of the open-source model Stable Diffusion, developed by Stability AI. As the Stanford Internet Observatory discovered, the LAION-5B dataset used by the startup contained more than 1,600 explicit images of child sexual exploitation material among its billions of images.

Although the organizations behind these open-source releases temporarily took certain models and datasets offline to clean them, the model had already been irreversibly downloaded and distributed thousands of times. Criminals are now not only sharing the models but also cloning the tools built on them on social platforms like Telegram.


Clone for credits


In one striking example from our investigation, users were encouraged to clone a deepfake bot capable of generating AI CSAM in exchange for free credits. Each bot was tied to an individual account, meaning cloned bots could operate in multiple languages (Mandarin, English, Portuguese) while relying on the same hosting service.

During the cloning process, the tool allowed users to upload their own data, creating a personalized AI engine for illicit content — users could, for example, send 300 pictures of someone they know and ask the bot to replicate that person. Once cloned, the bot’s link could be shared with contacts. Every interaction earned the user a credit, redeemable for a deepfake image, making the system resistant to shutdown efforts.

Telegram's virtual currency

Almost all of the bots offered two or three free trials per user, a limit that posed little obstacle, since users can create any number of Telegram accounts.

After the free trials, users were prompted to buy credits using credit and debit cards, Brazil's instant payment system PIX, crypto wallets, PayPal, or even Telegram's virtual currency, "Stars."

Launched in June 2024, just two months before Durov's arrest, the virtual currency can be purchased through Apple's and Google's payment systems or directly via Telegram's "Premium Bot," which also accepts credit and debit cards.

Less than a year after being put on the market, "Stars" are being used to buy AI-generated CSAM, as observed in at least eight of the 23 bots analyzed.

When asked for comment, Apple's Brazilian press office did not respond to texts, calls, or emails.

Google stated that its "billing system should not be used for content in any product category considered unacceptable by the Google Payment Policies, including content harmful to children or child abuse."

Google's full statement

Google products have protections to ban and limit access to content that depicts child sexual abuse and exploitation, reflecting our commitment to combating this type of content.

On Google Play, all developers are subject to the developer program policies. If a violation is found, the app may be removed and the developer's account and all associated apps may be removed from the store. Anyone can report an app when there is a violation of our policies. Apps that contain or display user-generated content must adhere to a number of rules that include defining inappropriate content and behavior, robust, effective and ongoing moderation, the existence of reporting systems and means of removing or blocking users who violate their terms of use.

The Google Play billing system should not be used for content in any product category considered unacceptable by the Google Payment Policies, including content harmful to children or child abuse (e.g., content that sexualizes minors or content that may be perceived as depicting or encouraging sexual attraction to minors).

[CONTEXT] - Transparency issues

According to Telegram, the company has a "zero-tolerance policy for illegal pornography."

A similar statement was made to CNN Brasil in Oct. 2024, after SaferNet, a Brazilian NGO focused on online child safety, reported that more than 1.2 million users were participating in child pornography groups hosted on Telegram.

The NGO criticized the platform's lack of transparency, pointing out that its moderation claims cannot be independently verified due to the absence of transparency reports with verifiable data.

This problem extends beyond Telegram. Major AI companies also fail to provide full transparency about their moderation practices regarding CSAM and other forms of child abuse. Part of the challenge is that the industry still lacks foolproof methods to prevent the creation of this type of criminal content.

In an attempt to fill this gap, Thorn, an international non-profit organization focused on combating child sexual abuse and trafficking, introduced the "Safety By Design" protocol in May 2024. The guide emphasizes the need to integrate protections throughout the entire AI development cycle, from design to implementation, prioritizing child protection from the start.

Although Thorn's initiative has been adopted by major AI companies — including OpenAI, Google, Amazon, Microsoft, Stability AI, Meta, Mistral and Anthropic — its effectiveness remains uncertain. Thorn has faced criticism from European media for blurring the lines between activism and alignment with the technology industry.

All progress by companies adopting the protocol is self-reported, as is the data they share with the NGO. Thorn acknowledges this limitation and stated in a Sep. 2024 report: "This report documents the self-reported data from companies through surveys and follow-up interviews. Thorn has not independently confirmed, investigated, or audited the information provided in these self-reports."

Art by Rodolfo Almeida

Editing by Alexandre Orrico

This story was translated with the help of artificial intelligence and carefully reviewed by humans, according to Nucleo's AI use policy.
