An investigation by Núcleo found at least 23 active Telegram bots that can create AI-generated child sexual abuse material, challenging the company's recent promises to crack down on such criminal content.
Using nothing but the messaging app Telegram, users can easily create dozens, even hundreds, of AI-generated images of child sexual abuse material (CSAM), according to an investigation led by Núcleo with support from the Pulitzer Center's AI Accountability Network.
The investigation identified 83 bots on Telegram using keywords associated with deepnudes, the practice of generating sexual or pornographic images of real people with AI, without their consent. Of those 83 bots, 33 were active and functional, and 23 of them (70%) were capable of creating child sexual exploitation material: 21 generated images of both children and adolescents, and two only of adolescents.

Fifty of the bots were inactive (without updates from developers), while another 10 refused to generate sexual images of minors — although they still allowed the creation of non-consensual adult deepfakes, a practice criminalized in countries such as Brazil, South Korea, Canada and the United Kingdom.
Telegram says its bots are accessed by more than 400 million users worldwide.
Telegram, a messaging service created by a Russian national in 2013 and currently headquartered in Dubai, has for years been the target of criticism and investigations over its notably lenient moderation, including of cybercrime.
In August 2024, the company faced formal accusations from French authorities, who said the platform had failed to cooperate with investigations into child exploitation and other crimes. Telegram's founder and CEO, Pavel Durov, was arrested and later released.
At the time, Telegram promised to take online child sexual abuse cases more seriously, including collaborating with authorities to provide user data.
Contacted by Núcleo through its "Press Bot" channel, a Telegram spokesperson stated that the platform has a "zero-tolerance policy for illegal pornography" and uses "a combination of human moderation, AI tools, machine learning, user reports, and trusted standards" to detect and combat abuse.
The company also said that "all media uploaded to Telegram's public platform are analyzed against a database of CSAM content hashes removed by Telegram moderators since the app's launch."
Telegram says it had removed over 2.8 million groups and channels in 2025 as of Feb. 7.
The bot dynamics
Our investigation revealed that some of these tools use AI to simulate nudity from real images of people, and can even create artificial sexual content, including of minors. All the user needs to do is upload an image to the bot, which spits out an artificially generated result.
Other tools follow the more traditional chatbot model, generating content from text descriptions, known in generative AI jargon as prompts.
There is no definitive explanation for why AI companies have not created effective ways to block the production of this sort of illegal content from the very start, but the most plausible hypothesis is that these models were trained on adult pornography and refined with real images of children, often used without consent.
In many cases, the 23 tools would swap children's heads onto adult bodies, often sourced from explicit material. However, some of the bots generated content that preserved childlike characteristics, raising concerns that real images of children, or even CSAM, were present in the datasets used to train the underlying models.
This was the case with a version of the open-source model Stable Diffusion, developed by Stability AI. As the Stanford Internet Observatory discovered, the LAION-5B dataset used by the startup contained more than 1,600 explicit images of child sexual exploitation among its billions of images.
Although the organizations that distribute these open-source models and datasets temporarily pulled them offline to clean them, the model had already been downloaded and distributed thousands of times, irreversibly. Criminals are now not only sharing the models but also cloning these tools on social platforms like Telegram.
Clone for credits
In one striking example from our investigation, users were incentivized to clone a deepfake bot capable of generating AI-generated CSAM in exchange for free credits. Each bot was tied to an individual account, meaning cloned bots could operate across languages (Mandarin, English, Portuguese) while using the same hosting service.
During the cloning process, the tool allowed users to upload their own data, creating a personalized AI engine for illicit content — users could, for example, send 300 pictures of someone they know and ask the bot to replicate that person. Once cloned, the bot’s link could be shared with contacts. Every interaction earned the user a credit, redeemable for a deepfake image, making the system resistant to shutdown efforts.
Telegram's virtual currency
Almost all of the bots offered two or three free trials per user, a limit that posed little obstacle, since users could create multiple Telegram accounts without restriction.
After the free trials, users were prompted to buy credits using credit and debit cards, Brazilian instant payments (known as PIX), crypto wallets, PayPal, or even Telegram's virtual currency, "Stars."
Launched in June 2024, just two months before Durov's arrest, the virtual currency can be purchased through Apple and Google payment systems or directly via Telegram's "Premium Bot," which also accepts credit and debit cards.
Less than a year after being put on the market, "Stars" are being used to buy AI-generated CSAM, as observed in at least eight of the 23 bots analyzed.
When asked for comment, Apple's Brazilian press office did not respond to texts, calls, or emails.
Google stated that its "billing system should not be used for content in any product category considered unacceptable by the Google Payment Policies, including content harmful to children or child abuse."
Art by Rodolfo Almeida
Editing by Alexandre Orrico
This story was translated with the help of artificial intelligence and carefully reviewed by humans, according to Núcleo's AI use policy.