April 29, 2025

Brazil Consumer Agency Demands Action After Pulitzer Center Report on Child Abuse AI Content

Image: A grid of four depictions of a baby, each overlaid with digital distortions and glitches symbolizing the fragility of data and privacy. Zeina Saleem + AIxDESIGN & Archival Images of AI / Better Images of AI / Distortion Series / CC-BY 4.0.

Investigating the scope and impact of AI-generated child sexual abuse material


Read in Portuguese.

Secretary Wadih Damous forwards a complaint to officials.


Following a Pulitzer Center-supported investigation for Núcleo, which revealed Instagram profiles featuring child sexual exploitation content generated by artificial intelligence, Brazil's National Consumer Secretariat has initiated a monitoring procedure against social media giant Meta, which owns Instagram and other platforms.

The monitoring procedure is intended to oversee and evaluate companies' practices concerning consumer rights. In a formal communication, Wadih Damous, Brazil's consumer secretary, forwarded the complaint to the Federal Police and the Digital Law Secretariat, demanding action.


The National Consumer Secretariat, or Senacon, is a government agency linked to the Ministry of Justice and Public Security. Its primary role is to protect and promote consumer rights in Brazil. Senacon implements consumer protection policies, oversees compliance with consumer laws, and addresses consumer complaints and disputes.

In Brazil, child sexual abuse material is classified as a severe violation of the Statute of the Child and Adolescent, even if generated by AI. The country has yet to establish a comprehensive AI regulatory framework, although a bill is currently under consideration in the House of Representatives after being approved by the Senate last year.

The investigation uncovered profiles that directed users to paid content platforms, such as Patreon and Fanvue, as well as to groups containing actual child abuse material. Meta has since removed all identified profiles and credited the report with prompting its moderation efforts.


RELATED TOPICS

AI Accountability
Children and Youth
Technology and Society

RELATED INITIATIVES

AI Accountability Network
