
Read this report in Portuguese.
Nucleo's investigation identified accounts with thousands of followers engaged in illegal behavior that Meta's security systems failed to detect; after being contacted, the company acknowledged the problem and removed the accounts
At least a dozen accounts on Instagram, totaling hundreds of thousands of followers, were openly sharing AI-generated images of children and adolescents in sexualized poses, a banned behavior that Meta's safety systems were unable to detect.
In January 2025, Meta CEO Mark Zuckerberg announced that the company would relax some of its content moderation policies in the United States, citing “many mistakes and too much censorship” that he attributed to other actors, notably governments and the press.
At that time, Nucleo began an investigation in partnership with the Pulitzer Center's AI Accountability Network to assess the dissemination of illegal material involving children and adolescents, and identified 14 Instagram profiles posting disturbing AI-generated images of minors.

All of these accounts followed the same visual pattern: blonde characters with voluptuous bodies and ample breasts, blue eyes, and childlike faces. The figures were depicted in various sexualized contexts: in bikinis, lingerie, or low-cut dresses. These were not “adult models,” but representations of teenagers and even children. Every account analyzed in the investigation shared the same visual theme, including eye color, skin tone, hair, and similar childlike features.
We could not confirm whether this content was created from images of real people, or whether the pages belonged to the same owners or groups.
The United Kingdom is one of the few countries in the world with a law specifically targeting this type of crime involving generative AI and child sexual exploitation. Passed in February 2025, the law bans both the creation of such content and the development of guides or manuals for producing child abuse material with generative AI.
AI-generated child exploitation content constitutes serious criminal abuse, says Dan Sexton, director of technology at the Internet Watch Foundation, a leading UK organization on the issue.
“These images, like real child sexual abuse images, can normalize and entrench child abuse. This horrific content is not confined to the dark web; we have received reports of people finding it on the open web. It is extremely disturbing and harmful,” he told Nucleo.
Beyond Instagram
This criminal behavior was not limited to Instagram. In most of the analyzed profiles, users were directed to subscription content platforms like Patreon and Fanvue, where they could acquire other child-exploitation material.
In two of the accounts, followers were redirected to WhatsApp or Telegram groups, where, in addition to tips on how to create illicit AI-generated images, links to folders containing actual child sexual abuse material were also shared.
Delayed Moderation
After Nucleo contacted Meta in mid-March, 12 of the 14 identified profiles were removed.
In a statement to Nucleo, Meta said it has “strict rules against the sexualization of children — both real and AI-generated — and we have removed the accounts shared with us so far for violating these guidelines.”
Specific questions sent by our reporters about the moderation and detection of this kind of content were not answered.
After publication, Meta confirmed that the profiles had been removed based on the information provided by Nucleo.
Patreon and Fanvue
The creators of this criminal content developed a strategy to circumvent Instagram's moderation and profit from the practice: creating multiple profiles with hybrid characters (combinations of childlike faces and more adult bodies) linked to pages on subscription content platforms.
During the investigation, Nucleo identified two profiles on Patreon and one on Fanvue, a lesser-known subscription content platform, that were linked to Instagram accounts producing and distributing AI-generated media sexualizing minors.
These accounts offered access to exclusive subscription content, as well as the production of “special media.” After being contacted, Patreon removed all profiles associated with this material.
“We have a zero-tolerance policy for works that feature child sexual abuse material (CSAM), real or animated, or other forms of exploitation that depict sexualized minors. If the physical characteristics depicted in a work are not unequivocally perceived as being of adults, it will be considered a violation of our policies,” a company spokesperson told Nucleo.
Fanvue, however, did not respond to multiple attempts to contact them. In 2024, the platform was cited in a report by the Alliance to Counter Crime Online (ACCO), which highlighted how the company facilitates the commercialization of explicit AI-generated material. The document also recommended that Instagram adopt a more rigorous approach in monitoring profiles that link to Fanvue in their bios.
Technical gaps
Meta's policies ban the production and sharing of content that “sexualizes children,” including artificially generated images.
Like other major tech companies, the company uses automated systems to detect serious violations, such as virtual child sexual exploitation. One of the main methods is image hashing, which creates unique “digital fingerprints” for files previously identified as child abuse material, allowing automated tools to recognize them quickly.
However, reports from organizations like Active Fence and the WeProtect Global Alliance highlight that this technology faces limitations with AI-generated content: because each newly generated or altered image produces a different fingerprint, it will not match those already catalogued.
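As a rough illustration of that limitation, the sketch below (in Python) checks a file's fingerprint against a hypothetical catalogue of known hashes; the sample data is invented for the example, and production systems use perceptual hashes such as PhotoDNA or PDQ that tolerate small edits rather than the plain cryptographic hash shown here. The matching principle is the same: a brand-new AI-generated image matches nothing in the catalogue.

```python
# Minimal sketch of hash-based matching. The "catalogued" fingerprints
# below are hypothetical stand-ins for a real hash database; production
# systems use perceptual hashes (e.g. PhotoDNA, PDQ), not plain SHA-256.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Compute a 'digital fingerprint' of an image's raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

# Fingerprints of files previously identified and catalogued (hypothetical).
known_hashes = {fingerprint(b"previously catalogued image")}

def is_known(image_bytes: bytes, catalogue: set[str]) -> bool:
    """Flag an image only if its fingerprint is already in the catalogue."""
    return fingerprint(image_bytes) in catalogue

# A freshly AI-generated (or even slightly altered) image has a fingerprint
# the catalogue has never seen, so the check comes back negative.
print(is_known(b"previously catalogued image", known_hashes))  # True
print(is_known(b"newly generated image", known_hashes))        # False
```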
Researchers such as Carolina Christofoletti, from the University of São Paulo (USP) in Brazil, point out that experienced criminals exploit technical gaps in the generation of this material. To close these gaps, experts are studying new detection methods.
“By definition, it is not possible to automatically identify AI-generated images, because they are always new,” explains Christofoletti.
She emphasizes that the central challenge lies in the use of classifiers — algorithms that categorize data based on patterns. However, training these systems requires access to illegal image databases, something restricted to a few institutions in the world.
One of the approaches being studied is the use of biometrics. “If you observe a child's face, the proportions are specific. The eyes are rounder, the face is smaller, and the features are closer together,” the researcher explains.
This principle could be applied to the development of recognition systems capable of identifying childlike characteristics and signs of sexualization in AI-generated images.
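As a purely illustrative sketch of that principle, the code below computes simple proportion ratios from facial landmarks assumed to have been extracted by a separate face-detection step; the specific measurements, ratios, and threshold are hypothetical assumptions, not values from the research.

```python
# Illustrative sketch of the proportion idea described above, assuming
# facial landmarks have already been extracted by some face-detection step.
# The ratios and threshold are hypothetical, chosen only for demonstration.
from dataclasses import dataclass

@dataclass
class Landmarks:
    eye_width: float     # average width of one eye, in pixels
    eye_distance: float  # distance between eye centers, in pixels
    face_height: float   # chin to top of forehead, in pixels
    face_width: float    # cheek to cheek, in pixels

def childlike_score(lm: Landmarks) -> float:
    """Childlike faces tend to have proportionally larger eyes and features
    set closer together, so higher ratios suggest a more childlike face."""
    eye_to_face = lm.eye_width / lm.face_width
    compactness = lm.eye_distance / lm.face_height
    return (eye_to_face + compactness) / 2

sample = Landmarks(eye_width=42, eye_distance=95, face_height=260, face_width=210)
print(childlike_score(sample))  # flag for human review above a tuned threshold
```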
“What is more efficient? Trying to map all the images created by AI or detecting child faces and elements of sexualization in them?” Christofoletti questions. “Perhaps the most promising path is facial recognition or the identification of patterns that indicate the manipulation of the child's body.”