The regulation of artificial intelligence and its impacts on innovation
The path toward regulating artificial intelligence in Brazil requires consideration of many factors that foster a more innovative environment
With the rapid spread of artificial intelligence (AI) systems, numerous discussions have emerged regarding how such systems should function (given that they must collect, process, and store various types of data), as well as the potential outcomes of their use.
Countries around the world – including Brazil – are implementing strategies to regulate AI systems in an effort to balance the benefits of innovation with the individual rights and guarantees of their citizens.
What is innovation?
The Austrian economist and political scientist Joseph Schumpeter theorized that innovation leads to economic change through a process of creative destruction, rendering previously developed innovations obsolete.
According to Schumpeter, innovation and its impacts on society should be understood as a systemic, imperative phenomenon, one that causes both the breakdown of economic cycles and a reduction in the time between each of them.
A typical innovation system is based on three factors:
- Universities – through education and research;
- Government – through policies encouraging innovation; and
- Industry – through financial investment and commercial interests.
These three actors often work together to foment innovation throughout the entire production chain.
In Brazil, the government’s involvement in innovation principally takes the form of legislation. Examples include Law No. 11,196/2005 (Lei do Bem), Supplementary Law No. 182/2021 (the Legal Framework for Startups), and in particular, Law No. 10,973/2004 (amended by Law No. 13,243/2016) – known as the Innovation Law.
Innovation Law: definitions
Brazil’s Innovation Law provides incentives for scientific development, research, scientific and technological capacity building, and innovation, and aims to simplify the relationship between companies and research institutions.
Moreover, the law defines innovation as the “introduction of a novelty or improvement in the productive and social environment that results in new products, services, or processes, or that provides existing products, services or processes with new functionalities or characteristics that may result in improved quality or performance.”
With this, a favorable environment can be established for partnerships, the participation of science and technology institutions, incentives for researchers and creators, private sector innovation, and new technologies, such as AI systems.
The Legal Framework for AI
AI can be defined as a computational system of algorithms developed to carry out activities more efficiently and quickly than humans could. AI is found in numerous everyday applications and economic sectors, and is already used for everything from generating text-based and image-based works to facial recognition.
Given the global scenario and existing provisions on innovation, bills for regulating AI systems have been debated in Brazil’s congress in recent years, resulting in a proposed Legal Framework for AI (Bill No. 21/2020). The framework sets out principles, the rights of individuals affected by AI systems, risk classification processes, and governance and transparency measures that organizations must observe throughout the system’s life cycle, as explained in a previous article from Mattos Filho’s specialists.
Innovation is an important theme within the proposed framework, as it is directly associated with efforts for economic and technological development, as well as the rights of individual citizens.
In a specific section on the supervision and monitoring of AI systems, the Legal Framework for AI provides measures to promote innovation, highlighting the use of regulatory sandboxes. The goal of a sandbox is to create an experimental regulatory environment with specific requirements for AI innovation.
The innovations tested in regulatory sandboxes do not necessarily have to be new – rather, they should improve the use of AI systems, explore alternative uses, and ensure or enhance the operational viability of the project so that it can be used in line with the other principles outlined in the proposed framework.
Basic principles and rules
The regulatory sandboxes will be monitored by the competent authority to ensure that AI systems are developed without infringing upon fundamental rights or consumer rights, while also ensuring personal data security and protection.
This oversight is also supported by principles outlined in the Legal Framework for AI that must be observed when developing, implementing, and using AI systems in good faith, namely:
- Inclusive growth, sustainable development, and well-being;
- Self-determination, freedom in decision-making, and freedom of choice;
- Human participation in the AI cycle and proper human supervision;
- Non-discrimination;
- Justice, equity, and inclusion;
- Transparency, explainability, intelligibility, and auditability;
- Reliable and robust AI and information security systems;
- Due process, contestability, and the right to adversarial proceedings;
- Traceability of decisions during the life cycle of AI systems as a means of accountability and assigning responsibility to a natural person or legal entity;
- Accountability, liability, and full reparation for any damage or harm;
- Prevention, precaution, and mitigation of systemic risks arising from intentional or unintentional uses and unforeseen effects of AI systems;
- Non-maleficence and ensuring the methods employed are in proportion with the legitimate purpose of the AI systems.
Given that innovation is part of a system involving various actors and legal bases, the development, implementation, and use of AI systems must comply with these principles to ensure the diverse objectives such systems aim to attain are met in a balanced way.
It is important to note that agents who develop AI systems will be liable for failing to comply with the AI regulations in effect. As such, ensuring all obligations are met is essential.
Possible effects of regulation
Although the Legal Framework for AI seeks to establish measures to foster AI-related innovation, opinions vary widely on how it will impact the innovation ecosystem.
While regulation is often seen as important for assigning responsibility, there are also concerns that excessive regulation, going beyond existing obligations, could be detrimental to the industry and hinder AI development in Brazil.
Thus, though regulation may ensure the innovation system complies with rules and principles, it can harm technological development if innovation agents feel exposed to risks and potential sanctions. It is worth noting that the principles in Brazil’s proposed Legal Framework for AI stem from international ethical guidelines that are generally prioritized and respected by market actors and their entire development chain.
Also, it is important to highlight that Brazil’s Legal Framework for AI aims to balance the interests of both innovation actors and individuals, who are expected to benefit from the development of AI systems without prejudice to their fundamental rights and guarantees.
For further information on artificial intelligence and intellectual property, please contact Mattos Filho’s Intellectual Property practice area.
*With the collaboration of Ana Flávia Marques.