
May 9, 2025

With AI, Illegal Forums Are Turning Photos of Children Into Abusive Content


Investigating the scope and impact of AI-generated child sexual abuse material


Based on images from the website This Person Does Not Exist. Image by Rodolfo Almeida/Núcleo.



Images of young girls skating, playing soccer, and practicing archery are being pulled from social media and repurposed by criminal groups to create AI-generated child sexual abuse material (CSAM) in forums operating on the so-called dark web.

In several cases, the victims are child actresses, models, or social media influencers, according to an investigation by Núcleo in collaboration with the Pulitzer Center’s AI Accountability Network. As of publication, the total number of victims remains unconfirmed, though roughly 10 individuals have been identified.

The “dark web” refers to areas of the internet where sites and servers are deliberately hidden from the public web. Specialized software—such as the Tor browser—or passwords are required to access these spaces. While using these tools is not inherently illegal, they are often exploited by criminals to conceal illicit activities.




To date, few countries have passed specific laws criminalizing AI-generated CSAM, but legislative efforts are underway in several places, with a growing consensus that this activity should indeed be outlawed. In the United Kingdom, for example, lawmakers are working to criminalize any tools designed for this purpose, with penalties of up to five years in prison.

In Brazil, the exploitation, abuse, and sexualization of minors under 18 years old—including through artificial media—is prohibited under the country’s Child and Adolescent Statute.

Nearly 400 pages of material obtained by Núcleo from anonymous forums show that AI-generated content was being created featuring both fictional minors and real children and teenagers. These images were produced using AI models fine-tuned on the faces of specific victims, built with images freely available online.

The investigation confirmed that at least 18 children and teenagers had their images, including facial features, used to create AI-generated child sexual abuse imagery.

The creations ranged from suggestive to overtly explicit, always focused on minors. Some images portrayed children with X-ray effects that rendered them nude, others created whimsical or fairy-tale scenarios, and some were graphic, showing children kissing adults or wearing collars.

To produce this kind of disturbingly realistic, criminal content, offenders use open-source artificial intelligence programs: software that is freely available on the internet and distributed under licenses that allow anyone to modify and redistribute it.

By altering these tools and incorporating smaller, specialized models known as LoRAs (Low-Rank Adaptation), criminals active in dark web forums have managed to create hundreds of AI-generated child abuse images featuring real children—many of them with alarming photorealism.

These specialized resources are widely available on model-sharing platforms and AI technology marketplaces—the names of which are being withheld in this report. This accessibility enables users to easily download and share models specifically designed for generating abusive AI content.

Text-to-image models, such as Stable Diffusion and Flux, operate like recipes: The user inputs a prompt, describing in natural language what image they want to generate. LoRAs act like custom toppings or enhancements to that recipe, adding specific details, colors, or textures to refine the final image.

LoRAs are typically used to fine-tune AI models and achieve highly specific outputs. Some are trained to mimic celebrities, politicians, fictional characters, or other recognizable figures. With these tools, it’s no longer necessary to retrain an entire AI model using images of a particular person; LoRAs allow users to precisely control results by making lightweight adjustments.


Image courtesy of Núcleo.

Training Data

While technical barriers still exist on some platforms for generating illegal content, users are rapidly finding workarounds as AI tools become more accessible—and, consequently, cheaper.

Previously, the main obstacle was the high cost of using advanced models. Today, the primary challenge for abusers is obtaining high-quality training images to create more convincing AI reproductions of victims. This issue was frequently acknowledged by forum users examined in the investigation.

“Every girl is different. Each one has a different quality and quantity of source images. The most important thing is the quality of the training images and selecting the right ones,” explained one user who claimed to have created hundreds of AI-generated child abuse images in one of the forums monitored.

This same user went on to say that developing specialized models is highly imprecise: “Nobody has the perfect answer—you have to experiment. The LoRA for [name of real child omitted by Núcleo] was trained with 392 images and 20 repetitions. That’s the highest number I’ve ever used on Flux.”

Other users discussed how one “gets used to” the slow process of training these models: “I can guarantee it’s possible to get realistic images of children once you get used to how this works,” one person posted in a conversation captured by Núcleo.

Open-Source Models

The AI programs most frequently mentioned in these criminal forums include Stable Diffusion, developed by Stability AI; PonyXL, a community fine-tune built on Stability AI's Stable Diffusion XL; and Flux, a lesser-known system released in 2024 by Black Forest Labs, a company founded by former Stability AI employees.

It’s important to note that these systems were not designed for criminal purposes, but—as with much open-source software—it’s difficult to predict how these tools might be misused.

Flux emerged as the most commonly cited system for creating AI-generated child abuse material in the posts analyzed by this investigation. Though not as well known as Stable Diffusion, the Black Forest Labs model has been gaining market traction: It was downloaded more than 2 million times on Hugging Face—one of the largest AI model-sharing platforms—in April 2025, and is also used by xAI, Elon Musk’s company, in its chatbot Grok.

Open-source software refers to computer programs whose source code is publicly available on platforms like GitHub, GitLab, or SourceForge. These tools typically carry licenses that allow users to modify, redistribute, and apply them beyond their original purpose.

To understand how these AI systems proliferate, it's crucial to recognize their infrastructure. Websites and applications that offer AI-generated content typically connect to the developers' APIs: technical bridges that distribute access to AI models and enable payment systems.

For example, Flux 1.1 by Black Forest Labs charges $0.04 per image. At that rate, $1 would generate 25 AI child abuse images. Meanwhile, Stable Diffusion generally uses 2 credits (around $0.02) per image, allowing 50 images for $1. This combination of low cost and accessibility makes large-scale abuse both affordable and increasingly available alongside legitimate uses.

Stability AI has faced scrutiny before. In 2023, a Stanford University Internet Observatory report revealed hundreds of child sexual abuse images in the LAION-5B dataset used to train open-source models, including Stable Diffusion.


Image courtesy of Núcleo.

Accountability

On image-sharing forums, also known as imageboards, users hide behind pseudonyms, posts vanish without notice, and content moderation is minimal, if it exists at all.

Conversely, the developers behind these AI models are well-known companies with legal, commercial, and social responsibilities, particularly to regulators, clients, and industry partners.

Both Stability AI and Black Forest Labs, named in this investigation, are official partners of the Internet Watch Foundation (IWF), a UK-based nonprofit that combats child sexual exploitation online.

This partnership grants them access to a restricted catalogue known as the Digital Fingerprint List, a database of nearly 3 million unique identifiers that can be used to help developers prevent their tools from being exploited to create AI-generated child abuse material.

In a statement to Núcleo, Dan Sexton, IWF’s chief technology officer, disclosed that the organization received 245 reports of AI-generated CSAM in 2024 alone—a 380% increase over 2023.

“Children who have already suffered abuse are being victimized again, with their images traded to train AI models or manipulated into even more extreme forms,” Sexton said.

Company Responses

Núcleo reached out to Stability AI and Black Forest Labs to ask what safeguards they have in place to detect and prevent the misuse of their AI models—particularly in the case of open-source, locally operated systems where real-time moderation is more difficult.

Stability AI declined to answer specific questions about its technologies and moderation practices, issuing a statement saying it is “deeply committed to preventing the misuse of AI technology, especially in the creation and distribution of harmful content.”

The company also stated that it “takes its ethical responsibilities seriously” and implements “robust safeguards to enhance security standards and protect against malicious actors.”

Black Forest Labs did not respond to Núcleo’s inquiries.


What we asked BFL and Stability AI

  • During our research in environments rife with child sexual abuse and exploitation material, we noticed that many of the creators of this criminal content are using [Stable Diffusion or Flux] as their base text-to-image model. How does [BFL or Stability] currently monitor this misuse, and what is the company’s position regarding this type of activity?
  • At a technical level, how does prompt moderation work at [BFL or Stability]? We would like to understand in greater detail how moderation processes are implemented to prevent the generation of child sexual abuse material.
  • How does [BFL or Stability] moderate the creation of LoRAs that enable the development of AI-generated child sexual abuse material?
  • In our investigation, we also identified the use and creation of LoRAs based on real individuals, including minors, through [Stable Diffusion or Flux]. What measures are in place to moderate and prevent the creation of fine-tuned content targeting real people, especially children and teenagers?

How We Conducted This Investigation

The reporting team gained access to the forums and captured PDF snapshots of pages containing posts with AI-generated content. While reviewing the material, reporters identified one post mentioning an Italian preteen model and discovered a LoRA trained on her images, along with several others. From there, the investigation expanded, with journalists reaching out to companies and websites hosting the tools mentioned in forum conversations.
