Regulatory framework for artificial intelligence passes in Brazil’s Senate
Bill establishes governance measures, upholds the rights of affected individuals and groups, and classifies high-risk and excessive-risk AI systems
On December 10, 2024, the Brazilian Senate approved Bill No. 2,338/2023, which would establish a national regulatory framework covering the development, use, and governance of AI systems in Brazil. The text reflects a commitment to the centrality of the human person, responsible innovation, AI market competitiveness, and the implementation of safe and reliable systems. Having passed in the Senate, the current version of the Bill still requires analysis by the House of Representatives and presidential assent before it can be signed into law and take effect.
The main topics of the Senate-approved version of the Bill are outlined below:
Rights of affected individuals or groups
The regulatory framework defines a set of rights designed to protect individuals or groups affected by AI systems, such as:
- The right to clear, accessible information about the use of AI in their interactions with such systems;
- The right to request human review of automated decisions in certain circumstances;
- The right to non-discrimination (illicit or abusive), as well as the right to have direct or indirect discriminatory bias corrected.
Risk categorization
The Bill classifies AI systems according to their degree of risk, based on a preliminary assessment conducted by AI agents – the system developers, distributors, and application developers that operate within AI system value chains and their internal governance structures.
AI systems can be classified into two categories:
- Excessive-risk systems – AI systems used to manipulate behavior in ways that harm the health, safety, or fundamental rights of others; to evaluate personality traits, characteristics, or past behavior in order to assess the risk of committing crimes or infractions, or of recidivism; to enable the production or dissemination of material depicting the abuse or sexual exploitation of minors; in autonomous weapons systems; or to recapture escaped convicts. As a rule, the current version of the framework prohibits the development, implementation, and use of excessive-risk AI systems;
- High-risk systems – AI systems used to manage and operate critical infrastructure (such as traffic control and water and electric power supplies); in the administration of justice; in autonomous vehicles; in the recruitment, screening, filtering, and evaluation of job candidates; in the health sector; and in biometric identification and authentication systems, including emotion recognition, among others. The regulatory framework would permit high-risk systems to be developed, implemented, and used in Brazil, provided that AI agents comply with specific obligations. As a rule, these systems will be subject to algorithmic impact assessments, human supervision, and transparency measures designed to minimize risks to the health, safety, and fundamental rights of affected individuals or groups.
Responsible governance
Bill No. 2,338/2023 establishes rules to ensure responsible governance of AI systems, requiring all AI agents to ensure that the systems are safe and that affected individuals or groups can exercise their rights.
These rules extend to all AI agents, with specific responsibilities assigned to system developers, distributors, and application developers. Governance measures may include documenting safety tests, controlling bias, defining the degree of human supervision required, and implementing transparency measures.
The Bill also encourages self-regulation and the creation of additional governance rules through codes of good practice and collaboration among AI agents.
Copyright
The Bill reinforces the need for AI agents to comply with Brazil's Copyright Law (Law No. 9,610/1998). It establishes specific obligations for agents that use works protected under this law, such as ensuring that holders of copyright and related rights may opt out of having their works used within AI systems.
Civil liability
The Bill also provides for two distinct civil liability frameworks. Liability for damages caused by AI systems within consumer relations remains subject to the rules of the Consumer Protection Code (Law No. 8,078/1990), while liability for damages caused by AI systems exploited, employed, or used by AI agents outside the context of consumer relations remains subject to the rules of the Civil Code (Law No. 10,406/2002).
For more information on this topic, please contact Mattos Filho’s Technology practice area.