

Artificial intelligence in software and medical devices
AI technology offers the Brazilian healthcare sector various opportunities – but what are the challenges for ensuring responsible use?
The use of artificial intelligence (AI) tools and systems has intensified significantly in recent years. Given the fast, constant pace of innovation in the field, government authorities have faced challenges in establishing rules and regulations that ensure AI is used safely and responsibly, yet without creating obstacles to progress.
In the healthcare sector, the numerous potential benefits that AI represents have seen it become the subject of increased focus and financial investment. According to Stanford University, healthcare saw the most private investment in AI of any sector in 2022, with a total of USD 6.1 billion (the data management and fintech sectors followed in second and third place, with USD 5.9 billion and USD 5.5 billion, respectively).
Against this backdrop, this article addresses one of the principal ways AI is employed in the healthcare sector – in software and medical devices. We analyze the opportunities that AI development offers this sector, as well as the governance and regulatory challenges related to data protection and Brazil’s proposed AI Act.
This article is the first in a series Mattos Filho is releasing on the use of AI in the Life Sciences, Healthcare and Agribusiness industries. Upcoming articles will also be published on the firm’s Único news portal.
Wearables, SaMD and other medical devices
In 2022, the Brazilian Health Regulatory Agency (Anvisa) established specific requirements for approving software as a medical device (SaMD) with Board Resolutions No. 657/2022 and No. 751/2022.
Brazilian legislation defines SaMD as a product or application that meets the definition of a medical device, is intended for one or more medical indications (e.g., diagnosis, prevention, monitoring, treatment), and functions without being part of a medical device’s hardware.
Although Anvisa’s regulations do not specifically refer to AI systems, the definition of SaMD covers software that uses AI tools and systems for medical purposes. Therefore, AI-based SaMD manufacturers must comply with the rules applicable to medical devices, including those covering risk classifications, notifications, and registrations, as well as product labeling and user instructions.
When applying to approve their products with Anvisa, manufacturers and developers must also submit a description of the databases the AI uses for learning, training, and verification, among other activities. This description must be accompanied by a report justifying the AI technique applied to the device and the size of the databases used, together with an account of the training history. If any of this information is missing, Anvisa may reject the application.
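By way of illustration only, the sketch below shows one way a developer might organize this information internally before drafting an application. The field names and structure are our own assumptions – Anvisa’s resolutions prescribe the content to be submitted, not a file format.

```python
# Illustrative sketch only: hypothetical structure for the database and
# training information Anvisa requires; field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class TrainingDatabase:
    name: str       # identifier of the database described to Anvisa
    purpose: str    # e.g., "learning", "training" or "verification"
    size: int       # number of records, justified in the accompanying report
    source: str     # provenance of the data

@dataclass
class SaMDDossier:
    ai_technique: str                       # technique justified in the report
    databases: list[TrainingDatabase] = field(default_factory=list)
    training_history: list[str] = field(default_factory=list)

dossier = SaMDDossier(
    ai_technique="convolutional neural network for image classification",
    databases=[
        TrainingDatabase("retina-train-v2", "training", 48_000, "partner hospitals"),
        TrainingDatabase("retina-holdout", "verification", 6_000, "partner hospitals"),
    ],
    training_history=[
        "2023-03: initial training run",
        "2023-07: retrained after dataset audit",
    ],
)
```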
It is worth noting that Anvisa’s rules for medical devices are closely aligned with those of foreign health authorities and the International Medical Device Regulators Forum (IMDRF). In recent years, an IMDRF-established Working Group has participated in a number of initiatives to establish a consistent approach to managing AI-based medical devices – Anvisa counts itself among the members of the group. In 2021, the Working Group published a document establishing common concepts and definitions in relation to the topic, including supervised, semi-supervised and unsupervised machine learning.
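To make these three paradigms concrete, the non-authoritative sketch below illustrates them in Python with scikit-learn on synthetic data. The IMDRF document defines the terminology, not this code, and a real SaMD would be trained on the clinical databases described to Anvisa.

```python
# A minimal, illustrative sketch (Python / scikit-learn) of the three learning
# paradigms named above. The data here is entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))              # synthetic feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # synthetic binary labels

# Supervised learning: every training example carries a label.
supervised = LogisticRegression().fit(X, y)

# Unsupervised learning: no labels; the model looks for structure on its own.
unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)

# Semi-supervised learning: only part of the data is labeled (-1 marks
# unlabeled examples); the model propagates labels during training.
y_partial = y.copy()
y_partial[50:] = -1
semi_supervised = SelfTrainingClassifier(LogisticRegression()).fit(X, y_partial)
```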
Also in 2021, discussion of AI-based medical devices intensified at the United States Food and Drug Administration (FDA) after it received more than 100 product registration applications. To date, the FDA has authorized several products of this type, with a particular concentration in ophthalmology, cardiology, and radiology devices.
Anvisa has followed suit in Brazil, approving a range of products that employ AI, such as software used for predicting renal failure, assisting in cancer diagnosis, and planning radiotherapy. The authority has even gone a step further, using AI for its own regulatory oversight – for example, to identify online sales of unapproved products that are harmful to patients’ health.
Some of AI’s main applications within the industry include:
- Disease diagnosis: AI is already used to predict renal failure, assist in diagnosing cancer, and detect rare hereditary diseases of the retina, among other conditions. It is also used in diagnostic imaging;
- Patient management: AI tools and systems are present in patient management routines, assisting in processes such as radiotherapy planning and dose distribution, as well as in menstrual cycle monitoring applications; and
- Research: AI is used to develop real-world evidence – information collected outside of typical clinical research contexts (e.g., via mobile phone apps or smartwatches) – and to improve the effectiveness of medications and even public health policies.
AI is also used to indirectly assist with health issues that fall outside the scope of software or medical devices – one example is using AI to analyze geographic data to improve public policies.
Impacts of Brazil’s proposed AI Act
Beyond Anvisa’s regulations, the Brazilian Congress has also been making efforts to regulate AI in the country. To date, the leading legislative initiative is Bill No. 2,338/2023 (AI Act), the result of work and public hearings led by a Commission of Jurists that the Federal Senate established in 2022.
The AI Act seeks to establish general rules for developing, implementing, and responsibly using AI systems in Brazil in order to protect fundamental rights and guarantee safe, reliable systems that benefit human beings, democracy and scientific progress.
To this end, it looks to establish rights and regulate risks linked to AI systems. Chapter III of Bill No. 2,338/2023 provides that developers must conduct a preliminary assessment of any AI system they produce in order to classify the degree of risk it poses to society.
Applications in the area of health – including those designed to assist medical procedures – are classified as high risk in the bill.
In scenarios where an AI system provider or operator (known collectively as ‘AI agents’) develops or uses high-risk systems, they will be required to establish governance structures and internal procedures that can sufficiently guarantee the systems are secure and comply with the rights of affected users. These structures and procedures include:
- Preparing an Algorithmic Impact Assessment (AIA);
- Conducting tests to check that systems demonstrate sufficient levels of reliability (depending on the sector and the AI system’s application), including tests of a system’s robustness, accuracy, precision and coverage (see the sketch after this list);
- Adopting data management measures to mitigate and prevent discriminatory bias. In practice, this includes ensuring an inclusive, diverse team is responsible for designing and developing the system, as well as evaluating the data using adequate measures to control human cognitive biases;
- Adopting technical measures to enable the possibility of explaining results obtained via AI systems; and
- Ensuring human supervision of high-risk AI systems to prevent or minimize risks to individual rights and freedoms, including enabling human intervention in AI system operations and the interpretation of results.
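To make the reliability-testing and human-supervision measures more concrete, below is a minimal sketch in Python, assuming a hypothetical binary diagnostic classifier that outputs a probability per case. The metric choices, thresholds and function names are illustrative assumptions, not requirements prescribed by the bill.

```python
# Minimal sketch of two governance measures: reliability testing and human
# supervision. All names and thresholds are illustrative assumptions.
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate_reliability(y_true, y_prob, threshold=0.5):
    """Reliability testing: accuracy, precision and recall on held-out data."""
    y_pred = (y_prob >= threshold).astype(int)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
    }

def needs_human_review(y_prob, low=0.3, high=0.7):
    """Human supervision: flag low-confidence predictions so a clinician
    reviews them instead of acting on the model output automatically."""
    return (y_prob > low) & (y_prob < high)

# Illustrative usage with synthetic data
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)
y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=200), 0.0, 1.0)
print(evaluate_reliability(y_true, y_prob))
print("cases flagged for human review:", int(needs_human_review(y_prob).sum()))
```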
Other governance measures include implementing procedures to ensure:
- Individuals may interact transparently with AI systems;
- Transparency in how the AI agent adopts governance measures;
- Adequate data management to mitigate and prevent possible discriminatory bias; and
- Information security in relation to both the AI system’s design and operation.
According to the AI Act, AI system governance measures must apply to the system’s entire life cycle – from the initial conception of the system to the end of its activities and discontinuation.
An initial analysis of the current version of Bill No. 2,338/2023 allows for the interpretation that wearables and medical devices that use AI will be classified as high-risk AI systems. As such, monitoring this matter is of utmost importance to companies that develop or use these devices to conduct business.
A substitute version of the AI Act was released on November 28, 2023, which removed the high-risk classification for applications in the healthcare sector. Instead, Annex I of the substitute text mentions that health impacts are among the factors to be included in the AI impact assessment.
Data protection challenges
The concern that originally placed health-focused AI applications on the AI Act’s list of high-risk systems was not unfounded.
In October 2023, the World Health Organization (WHO) reiterated AI’s potential to transform the healthcare sector, with improved medical diagnoses and treatments, more robust clinical research and healthcare professionals with greater knowledge, competencies and skills.
At the same time, the WHO warned that it was important to establish adequate measures to guarantee the privacy, security, and integrity of health information. For all its promise, AI poses a series of challenges – unethical data collection, cybersecurity threats, misinformation and systemic discriminatory bias are but a few examples.
In the Brazilian context, the General Data Protection Law (Law No. 13,709/2018 – LGPD) provides a number of guidelines regarding the risks stemming from using AI systems in healthcare. Notably, these AI systems may only process personal data if the processing is grounded in one of the legal bases in Article 7 of the LGPD (or, for sensitive personal data, Article 11 of the LGPD).
These legal bases include allowing data processing agents to justify processing as necessary “for a research body to carry out studies, guaranteeing the anonymization of personal data whenever possible” (LGPD, Article 7, item IV, and Article 11, item II, ‘c’). Given the growing number of AI research centers, how this legal basis is interpreted is extremely important to the development of AI systems.
Aware of this issue, the Brazilian Data Protection Authority (ANPD) has published a guide on personal data processing for academic purposes and studies conducted by research bodies. The guide makes the following indications:
- A ‘research body’ is any public or private non-profit entity or body headquartered in Brazil and constituted under Brazilian law, whose institutional mission is to conduct basic or applied research of a historical, scientific, technological, or statistical nature. Examples include the Brazilian Institute of Geography and Statistics (IBGE) and the Institute of Applied Economic Research (IPEA);
- Based on the concept above, for-profit legal entities governed by private law cannot be considered research bodies. This means that it is not possible for such institutions to use this specific legal basis, even if they were established precisely to carry out research; and
- For-profit entities can instead base their research on other legal grounds, such as the data subject’s consent or legitimate interest.
Regardless, under the LGPD, AI agents that train and develop AI systems must be prepared to clarify questions regarding the security of their personal data processing and the technical and administrative measures they employ.
In the absence of specific legislation and given the growing (in some cases, virtually inevitable) use of AI solutions in business, it is important that the providers, operators and end users involved in the service chain are aware of the existing risks and opportunities stemming from the use of these tools. This is made easier by understanding legislative debates on the subject, knowing and observing the general principles that guide the best use of these applications, and taking care when drafting terms of use and contractual clauses.
For more information on this topic, please contact Mattos Filho’s Technology and Life Sciences & Healthcare practice areas.
The next article in this series will address the use of AI within the services provided by Brazil’s National Agency of Supplementary Health (ANS).