
Read this story in Portuguese here.
Corporate lobbying has reshaped Brazil's AI legislation efforts, removing previously proposed safeguards against high-risk AI models that could generate exploitative content involving children, sources revealed to Nucleo.
Corporate interests have quietly reshaped Brazil's most comprehensive proposed artificial intelligence regulation to date, leading its latest version in the Senate to abandon crucial safeguards against high-risk AI systems capable of generating child sexual abuse content, Nucleo's reporting has found.
The Senate bill – which sets rules for the creation, use, and management of AI in Brazil – initially mirrored the European Union's AI Act, with strict classifications for high-risk AI models. But the current version shifts toward self-regulation, a change championed by tech companies arguing that tighter controls would stifle innovation in Brazil, according to multiple sources.
The shift follows multiple delays in the Senate's Commission on Artificial Intelligence, which began discussing regulation in August 2023. After five postponed votes since May 2024, the commission decided to revise its original proposal on high-risk AI systems last Tuesday (December 3, 2024).
The earlier draft completely banned several high-risk AI models, including those that could “enable, produce, disseminate, or facilitate” child sexual abuse material (CSAM). It also classified all AI applications on the internet as high-risk. Both provisions have now been removed.
The revised bill now only prohibits AI systems specifically designed to generate child exploitation content, rather than banning or severely limiting all systems capable of such misuse – such as tools trained on child sexual abuse material.
Under the new proposal, companies could avoid liability if their AI systems generate illegal content, as long as it wasn't the technology's intended purpose – even if they fail to prevent such misuse.
Critics argue that this, in association with other changes that downplay the importance of high-risk AI models, weakens accountability by AI companies and makes it harder to hold developers responsible for negligence.
Sudden changes
It is unclear why Eduardo Gomes, the bill's rapporteur within the commission, made the changes just a few days after the commission's vote was postponed again. The commission's report fails to identify which amendment triggered the crucial revisions on high-risk AI models and child exploitation content.
However, Nucleo has found that an amendment by the conservative Senator Marcos Rogério, around freedom of expression, may have influenced several revisions that would benefit Big Tech and AI companies.
Rogério's proposed amendment suggested that regulating “aspects related to the circulation of online content that may affect freedom of expression, including the use of AI for content moderation and recommendation, can only be done through specific legislation”.
While Gomes rejected Rogério's amendment, he acknowledged its influence, claiming it led to “improvements”. He defended the softer approach by arguing that allowing “contextual use” of high-risk systems would preserve their “dynamism” and prevent outdated regulations.
Rogério, one of former president Jair Bolsonaro's most prominent defenders in the Senate, has also been described by sources as a “spokesperson for big tech” at the AI commission.
Multiple sources involved in the deliberations told Nucleo that tech giants and their allies fought these restrictions, arguing they could impair things like social media recommendation algorithms and limit free expression in Brazil.
The draft changes also echo an earlier case in which federal authorities investigated Google and Telegram for allegedly interfering with Brazil's Fake News Bill, legislation that sought to regulate social media platforms and hold them accountable for misinformation. That bill would have required platforms to create verification mechanisms for content and potentially take responsibility for damages caused by false information.
The Fake News Bill was eventually discarded after nearly being brought to a vote in April 2024.
Reporting by Sofia Schurig and Julianna Granjeia
Illustration by Rodolfo Almeida
Editing by Alexandre Orrico and Sergio Spagnuolo