
December 11, 2024

How Conflicts Between the Executive and Big Techs Shaped Brazil’s AI Regulation Proposal


Illustration by Rodolfo Almeida/Núcleo Jornalismo.

Read this story in Portuguese.

Conflicts among Big Tech companies, the government, and civil society are shaping the debate around Bill 2,338/23, a legislative effort focused on information integrity and high-risk AI systems. Private-sector lobbying has driven several revisions to the text, with freedom of expression becoming a contentious topic.


For over a year, a tug-of-war between government ministries, private sectors, and civil society groups has shaped Brazil’s key legislative proposal to regulate artificial intelligence (AI). These efforts have also revealed the behind-the-scenes lobbying by Big Tech companies.

The bill, officially known as PL 2,338/23, is Brazil's most comprehensive proposal to regulate AI. It was approved by the Senate on the night of December 10, 2024, and is now headed to the House of Representatives in 2025. The intense disputes among stakeholders led to last-minute changes and delays in the voting process.

At the core of the debates are three main issues:

  1. Integrity of information
  2. High-risk AI systems
  3. Copyright protections

1. Information Integrity

One of the most debated topics is the concept of information integrity, a key issue for both government supporters and the opposition. The concept is relatively new to Brazilian legislation and unfamiliar even to some experts.

In its current form, the bill defines information integrity as an “ecosystem that promotes reliable, diverse, and accurate information while fostering freedom of expression.” In simpler terms, it demands that AI systems reflect reality.

Private sector lobbying played a significant role in shaping this part of the bill. Mentions of “information integrity” in the text are frequently accompanied by references to “freedom of expression,” largely due to opposition senators' fears—amplified by Big Tech lobbying—that this principle could lead to “censorship” of content moderation and recommendation algorithms on social platforms.

Bia Barbosa, advocacy coordinator for Reporters Without Borders (RSF) in Latin America, emphasized that information integrity should not be seen as a tool to force platforms to moderate individual content. According to her, it is about “adjusting for greater reliability in the systems,” establishing foundations for the ethical and trustworthy development of AI systems.

Rafael Zanatta, director of the Data Privacy Brazil Research Association, said that negotiations with companies to finalize the text were prolonged due to the government's insistence on preserving the concept of information integrity — in at least some sections of the bill. “This concept [information integrity] is new, and specialists in the field do not have a consensus or uniform understanding of what it means”, he said.

“But as the text still needs to pass through the House, and the deputies have not yet expressed much on this issue, it is still not possible to know what we will have at the end of this process,” he said, adding that the National Confederation of Industry “really has significant influence on this project.”

The National Confederation of Industry (CNI) is a powerful organization in Brazil that represents the interests of the industrial sector, influencing public policies and legislation to promote economic growth and competitiveness in industries ranging from manufacturing to technology.

2. High-Risk AI Systems

Before the focus shifted to the concept of information integrity, the private sector’s main concern was with so-called high-risk systems. The initial dispute was over how the list of such systems should be defined: as a closed list fixed in the law itself, or as one that regulators could expand over time.

For organizations such as CNI and technology companies, regulation should focus on the final use of AI systems based on their risks, rather than on the technology's development. In this way, oversight would act only on risks explicitly mentioned in the law.

However, the official position of the Lula government is that any regulatory authority should be able to include new AI systems as needed, as technology develops and new uses arise.

Some branches of the Executive, however, such as the Ministry of Development, Innovation, and Services (MDIC, in Portuguese), argue that to attract investments and boost AI development in Brazil, it is crucial to ensure greater legal certainty in regulation. MDIC’s press office told Núcleo that it had met with various entities to discuss Bill 2,338/23.

“The CNI presented its suggestions to the MDIC and directly to the Senate, defending proposals that were not fully accepted. There was no change regarding what was discussed in the meeting with CNI representatives. In any case, all of MDIC’s participation was aimed at improving the bill and adequately serving the public interest,” the statement says.

The bill's rapporteur, Senator Eduardo Gomes (PL-TO), rejected proposals from MDIC, CNI, and the private sector but accepted an amendment from Senator Marcos Rogério that removed from the high-risk category the systems for curation, diffusion, recommendation, and automated distribution of content—essentially, the algorithms used by companies like YouTube, Google, X, TikTok, and Meta.

Differences in classifications

If an AI system is classified as “high risk,” it must follow a series of governance measures, such as ensuring transparency in data management and preventing discriminatory biases. Systems classified as “excessive risk”—like autonomous weapons systems—cannot be used.

3. Copyright Protections

Finally, the issue of copyright protections also generated significant discussion. Many of the amendments proposed by senators would allow AI companies to use online content to train their models without paying copyright fees—an argument championed by Big Techs.

In a note released last week, representative entities from various sectors—including technology companies like the Brazilian Software Companies Association (ABES)—stated that the copyright requirements in the bill “could make AI development in Brazil unfeasible.”

Senate President Rodrigo Pacheco (PSD-MG), who authored the bill, rejected all amendments, as well as attempts to postpone the bill’s vote in the plenary.

Lobbying Groups

Unlike Bill 2630/20, the “Fake News Bill,” where Big Techs and technology companies voiced their opposition individually or collectively, in the AI bill, the sector is mobilizing through associations, law firms, and well-known lobbying groups.

Even though sources in the Senate, whether working in-house or following the work of the Committee on Technology and Artificial Intelligence (CTIA), stated to Núcleo that it is “common knowledge” that technology companies are drafting amendments for senators, mapping and understanding this activity through public information is challenging.

Last week, Reporters Without Borders (RSF) described Big Tech interference as “concerning,” stating that “pressure from technology companies and far-right parties resulted in the exclusion of AI systems used to moderate and recommend content on digital platforms from the scope of the law, leaving these regulatory issues for potential future legislation.”

Núcleo was denied access to non-parliamentary attendance lists for the Senate sessions from December 3–5, 2024, during the final CTIA meetings. The Senate’s security department justified the decision by citing an internal directive that classifies visitor names as “personal information” protected under the General Data Protection Law (LGPD).

In theory, Brazil’s Access to Information Law guarantees public, open, and free access to documentation related to the functioning of the National Congress and other public bodies, such as the Supreme Federal Court. The Chamber of Deputies, for example, does respond to similar access-to-information requests.
