
April 2, 2025

Instagram Is Full of Openly Available AI-Generated Child Abuse Content

Image: Zeina Saleem + AIxDESIGN & Archival Images of AI / Better Images of AI / Distortion Series / CC-BY 4.0.

Investigating the scope and impact of AI-generated child sexual abuse material


Illustration by Rodolfo Almeida/Nucleo.

Read this report in Portuguese.

Nucleo's investigation identified accounts with thousands of followers engaging in illegal behavior that Meta's safety systems were unable to detect; after being contacted, the company acknowledged the problem and removed the accounts


At least a dozen accounts on Instagram, totaling hundreds of thousands of followers, were openly sharing AI-generated images of children and adolescents in sexualized poses, a banned behavior that Meta's safety systems were unable to detect.

In January 2025, Meta CEO Mark Zuckerberg announced that the company would relax some of its content moderation policies in the United States, citing “many mistakes and too much censorship” that he attributed to other actors, notably governments and the press.

At that time, Nucleo started an investigation in partnership with the Pulitzer Center's AI Accountability Network to assess the dissemination of illegal material involving children and adolescents, and was able to identify 14 Instagram profiles that were posting disturbing AI-generated images of minors.




All of those accounts followed the same visual pattern: blonde characters with voluptuous bodies and ample breasts, blue eyes, and childlike faces. These figures were depicted in various sexualized contexts: in bikinis, lingerie, or low-cut dresses. They were not “adult models,” but representations of teenagers and even children. Every account analyzed in the investigation shared the same visual theme, down to eye color, skin tone, hair, and similar childlike features.

We were not able to confirm whether this content was created from images of real people, or whether these pages belonged to the same owners or groups.

The United Kingdom is one of the few countries in the world with a law specifically addressing crimes that combine generative AI and child sexual exploitation. Passed in February 2025, the law bans both the creation of such content and the development of guides or instructions for producing child abuse material with generative AI.

AI-generated child exploitation content is a serious form of criminal abuse, says Dan Sexton, director of technology at the Internet Watch Foundation, a leading UK organization on the issue.

“These images, like real child sexual abuse images, can normalize and entrench child abuse. This horrific content is not confined to the dark web; we have received reports of people finding it on the open web. It is extremely disturbing and harmful,” he told Nucleo.

Beyond Instagram

This criminal behavior was not limited to Instagram. In most of the analyzed profiles, users were directed to subscription content platforms like Patreon and Fanvue, where they could acquire other child-exploitation material.

In two of the accounts, followers were redirected to WhatsApp or Telegram groups, where, in addition to tips on how to create illicit AI-generated images, links to folders containing actual child sexual abuse material were also shared.

Delayed Moderation

After Nucleo contacted Meta in mid-March, 12 of the 14 identified profiles were removed.

In a statement to Nucleo, Meta said it has “strict rules against the sexualization of children — both real and AI-generated — and we have removed the accounts shared with us so far for violating these guidelines.”

The specific questions our reporters sent about moderation and detection of this content were not directly answered.

After publication, Meta confirmed that the profiles were removed due to the information provided by Nucleo.

Meta: questions and (no) answers

Here are the questions we asked Meta:

  1. Does Meta have specific policies to combat the dissemination of AI-generated child sexual abuse materials on its platform?
  2. Are Meta's automated hash-based detection systems, aimed at combating child sexual exploitation, capable of identifying AI-generated images?
  3. How does WhatsApp deal with combating child sexual exploitation in groups, without compromising privacy and end-to-end encryption?

Here is Meta's full response:

At Meta, we have strict rules against the sexualization of children — both real and AI-generated — and we have removed the accounts shared with us so far for violating these guidelines. Just as in the case of explicit sexualization, we also remove Instagram accounts dedicated to sharing innocuous images of children, real or AI-generated, accompanied by captions or comments about their appearance.

WhatsApp also has zero tolerance for child sexual exploitation and abuse, and we ban users when we become aware that they are sharing content that exploits or endangers children. WhatsApp has unencrypted information available, including user reports, to detect and prevent this type of abuse, and we are constantly improving our detection technology.

Patreon and Fanvue

The creators of this criminal content developed a strategy to circumvent Instagram's moderation and profit from the practice: creating multiple profiles with hybrid characters — combinations of childlike faces and more adult bodies — linked to pages on subscription content platforms.

During the investigation, Nucleo identified that two profiles on Patreon and one on Fanvue, a lesser-known subscription content platform, were linked to Instagram accounts that produced and distributed AI-generated media sexualizing minors.

These accounts offered access to exclusive subscription content, as well as the production of “special media.” After being contacted, Patreon removed all profiles associated with this material.

“We have a zero-tolerance policy for works that feature child sexual abuse material (CSAM), real or animated, or other forms of exploitation that depict sexualized minors. If the physical characteristics depicted in a work are not unequivocally perceived as being of adults, it will be considered a violation of our policies,” a company spokesperson told Nucleo.

Fanvue, however, did not respond to multiple attempts to contact them. In 2024, the platform was cited in a report by the Alliance to Counter Crime Online (ACCO), which highlighted how the company facilitates the commercialization of explicit AI-generated material. The document also recommended that Instagram adopt a more rigorous approach in monitoring profiles that link to Fanvue in their bios.

“Gray Area” — How Members of an Illegal Group React to Content Takedowns

Although Meta had removed several pieces of content after being contacted by the reporters, a WhatsApp group linked to one of the Instagram accounts monitored in the investigation remained active.

With over 150 members, the group was deleted only after Nucleo contacted the administrator to question him about the illegal material shared there.

The reporters accessed the group through a website associated with the Instagram profile — all chat histories were downloaded and analyzed. Within the group, those involved in the creation, distribution and even commercialization of these images demonstrated full awareness of the illegal nature of the material.

When Instagram removed the account linked to this WhatsApp group, its members began to wonder if the AI-generated content could be considered “harmless.” “IG only checks if someone reports and then a random, poorly calibrated AI examines the content. Total randomness,” complained a user with a Belgian phone number.

Another, from the United States, added: “All this happens while the real child exploiters escape … interesting.” The next day, a user from the Czech Republic raised the question of what was wrong with “venting” his “eccentricity” if it didn't harm anyone.

“Maybe they prefer to have real child victims rather than our 'gray area'... which doesn't make sense to me,” responded another member.

At one point, after the release of a disturbing AI-generated image of a teenager, a member asked how to create an Instagram account without linking it to the existing ones.

“I don't want anyone from my real life to know I'm doing this,” wrote another, expressing fear that Instagram would notify their contacts by associating the new account with their phone number. Shortly after, the administrator said he “wouldn't know” what his wife would say if she saw the “photo” he described as “a teenage girl putting an egg in her vagina.”

After leaving WhatsApp, members of the group mentioned in this report moved to Telegram. The Russian-founded messaging app has been widely used by criminals to host bots and channels dedicated to sharing and creating AI-generated child sexual abuse images, as documented in a previous Nucleo report.

Technical gaps

Meta's policies ban the production and sharing of content that “sexualizes children,” including artificially generated images.

Like other major tech companies, Meta uses automated systems to detect serious violations, such as online child sexual exploitation. One of the main methods is image hashing, which creates unique “digital fingerprints” for files previously identified as child abuse material, allowing detection systems to recognize them quickly.

However, reports from organizations like ActiveFence and the WeProtect Global Alliance highlight that this technology faces limitations with AI-generated content, because these images can be altered so that they no longer match the original fingerprints.
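
To make that limitation concrete, the sketch below shows how hash-based matching works in principle. It is a minimal illustration that assumes the open-source Python libraries Pillow and imagehash; the fingerprint set, distance threshold, and file name are hypothetical placeholders, not any platform's real data or production code.

  # A minimal sketch of hash-based image matching, for illustration only.
  # The fingerprint set and file path below are hypothetical placeholders.
  from PIL import Image
  import imagehash

  # Fingerprints of previously identified material would normally come from
  # a vetted industry database; here it is just a placeholder set.
  known_fingerprints = {imagehash.hex_to_hash("8f373714acfcf4d0")}

  def matches_known_material(path, max_distance=5):
      """Return True if the image's perceptual hash is near a known fingerprint."""
      fingerprint = imagehash.phash(Image.open(path))
      return any(fingerprint - known <= max_distance for known in known_fingerprints)

  # A brand-new AI-generated image produces a brand-new hash, so this check
  # returns False even when the content itself violates policy -- the gap
  # the experts describe.
  print(matches_known_material("example.jpg"))

Widely used industry tools such as Microsoft's PhotoDNA and Meta's PDQ rest on the same fingerprint-and-compare principle, which is why images that have never been catalogued can slip past them.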

Researchers, such as Carolina Christofoletti from the University of São Paulo (USP), in Brazil, point out that experienced criminals exploit technical gaps in the generation of this material. To overcome these flaws, experts are studying new detection methods.

“By definition, it is not possible to automatically identify AI-generated images, because they are always new,” explains Christofoletti.

She emphasizes that the central challenge lies in the use of classifiers — algorithms that categorize data based on patterns. However, training these systems requires access to illegal image databases, something restricted to a few institutions in the world.

One of the approaches being studied is the use of biometrics. “If you observe a child's face, the proportions are specific. The eyes are rounder, the face is smaller, and the features are closer together,” the researcher explains.

This principle could be applied to the development of recognition systems capable of identifying childlike characteristics and signs of sexualization in AI-generated images.

“What is more efficient? Trying to map all the images created by AI or detecting child faces and elements of sexualization in them?” Christofoletti questions. “Perhaps the most promising path is facial recognition or the identification of patterns that indicate the manipulation of the child's body.”

How we did this investigation

We started investigating an Instagram profile after finding a website that hosted AI-generated child abuse material. We then found new accounts and added them to the mapping, along with external links to Patreon, Fanvue, WhatsApp, and Telegram. We downloaded the participants' chat histories and analyzed the conversations in Google Sheets. We then emailed the platforms and interviewed the experts.
