Senate Commission approves draft of Brazil’s Legal Framework for Artificial Intelligence
Approved on December 6, 2022, by the Brazilian Senate’s Commission of Jurists, a new version of the legal framework is now set to be analyzed and voted on at a plenary session
Brazil’s proposed Legal Framework for Artificial Intelligence (AI Framework) defines the principles of AI systems and the rights of people affected by them, as well as risk classification standards, and governance and transparency measures that organizations involved at any stage of an AI system’s life cycle must observe.
The AI Framework also establishes sanctions in the event its provisions are not complied with, and assigns the Executive Branch the responsibility to designate an authority to implement and enforce the AI Framework. If approved, the framework will take effect one year after being signed into law.
The commission’s objectives
The Commission of Jurists was appointed on March 30, 2022, to assist in preparing a preliminary draft establishing principles, rules, guidelines, and grounds for regulating the development and use of AI in Brazil. Chaired by Justice Ricardo Villas Bôas Cueva, the commission finalized its work on December 7, 2022.
The commission analyzed several bills, including Bill No. 5,051/2019, which would define principles for using AI in Brazil, and Bill No. 872/2021 and Bill No. 21/2020 (approved by the House of Representatives), both of which would regulate the use of AI.
The commission’s main findings for the AI Framework are summarized below:
AI agent definitions and liability
During the public hearings that preceded the presentation of the commission’s draft, experts noted that, given the dynamism of AI and its potential applications, there were risks in delimiting AI and adopting specific legal definitions. Nevertheless, the AI Framework establishes certain definitions, including:
- AI system supplier: an individual or public or private legal entity that develops an AI system directly or upon request, with the aim of placing it on the market or applying the system to a service it provides, under its own name or brand, either for a charge or free of charge;
- AI system operator: an individual or public or private legal entity that deploys or uses an AI system for its own benefit, unless on a personal (non-professional) basis.
According to the draft, both AI system suppliers and operators are considered to be ‘AI agents’.
The AI Framework provides that AI agents that cause property, moral, individual or collective damage are fully liable for remedying it, regardless of the extent of the system’s autonomy. The AI Framework also establishes situations where AI agents would not be held liable for damages caused.
AI principles
After intense debates among different sectors regarding the principles on which the AI Framework should be based, the commission consolidated a series of principles for developing and using AI systems in Brazil.
The following principles particularly stand out:
- Human participation in the AI lifecycle and effective human supervision;
- Non-discrimination;
- Intelligibility, transparency, explainability, and auditability;
- The reliability and robustness of AI systems;
- The traceability of decisions made throughout the AI lifecycle, with an individual or legal entity responsible for such decisions;
- Accountability, responsibility, and full remediation for damages;
- Precaution, prevention, and mitigation of systemic risks.
Rights of people affected by AI systems
Unlike the bills already under debate in the Senate, the AI Framework includes a list of rights for people affected by AI systems. The main rights the AI Framework guarantees are:
- The right to be informed about interactions with AI systems before they take place;
- The right to receive an explanation regarding decisions, recommendations or predictions made by an AI system within 15 days via a free and simple procedure;
- The right to contest decisions made by AI;
- The right to non-discrimination and to have discriminatory bias corrected;
- The right to human participation in certain decisions;
- The right to privacy and data protection.
These rights can be enforced before the competent judicial and administrative authorities and apply regardless of the AI system’s risk classification under the AI Framework, provided the person is affected by the system.
Risk classification for AI systems
Similar to the European Union’s forthcoming AI regulatory framework (EU AI Act), the new version of the AI Framework establishes risk classification standards for AI systems. The risks posed by AI systems are divided into two categories:
- Excessive risk: these AI systems would not be permitted to operate in Brazil. They include:
- Systems that employ subliminal techniques to induce individuals to behave in a way that is harmful to their health and safety;
- Systems that exploit the vulnerabilities of specific groups of people (associated with age and specific disabilities) in order to induce them to behave in a way that is harmful to their health and safety;
- Systems employed by the government to evaluate, classify and rank individuals based on their social behavior and attributes through universal scoring, such as social credit systems.
- High-risk: these systems involve AI functions within certain types of applications, including:
- Critical infrastructure management and operation;
- Professional evaluation;
- Credit rating and evaluation systems;
- Autonomous vehicles (when their use may create risks to people’s health and safety);
- Health systems intended to aid medical procedures and diagnosis;
- Biometric identification systems.
To the extent of their involvement, suppliers and operators of high-risk AI systems are strictly liable for any damage these systems cause. When an AI system is not classified as high risk, the AI agent that caused the damage is presumed to be at fault, reversing the burden of proof in favor of the victim(s).
Governance measures applicable to AI agents
The AI Framework provides that AI agents must establish governance structures and internal processes capable of ensuring that their AI systems are secure and that the rights of the people they affect are respected.
At a minimum, these governance structures should include:
- Transparency with the public regarding how AI systems are employed;
- Transparency regarding the governance measures adopted for developing and implementing the AI system;
- Adequate data management measures to mitigate and prevent potential discriminatory bias;
- Privacy-by-design and privacy-by-default measures;
- Adequate parameters for separating and organizing data for training, testing and validating results.
Competent authority and sanctions
The AI Framework establishes that the Executive Branch must appoint a competent authority to implement and enforce the law. It also provides for sanctions, including a simple fine of up to BRL 50 million per violation (for private legal entities, up to 2% of the revenue obtained by its group or conglomerate in Brazil in the previous fiscal year, excluding taxes), as well as the temporary or permanent suspension, whether partial or total, of the development, supply or operation of AI systems, and a prohibition on processing related databases.
For further information, please contact Mattos Filho’s Technology, Innovation & Digital Business practice area.