Report: Latin American Conference on Artificial Intelligence and Data Protection

On October 1st, the 1st Latin American Conference on Artificial Intelligence and Data Protection took place. In two sessions, the conference covered key issues surrounding the implementation of artificial intelligence systems in the region, such as applying privacy by design, experimenting through regulatory sandboxes and policy prototyping, and striking a balance between innovation and personal data protection. Here are some key takeaways from the webinar, based on the speakers’ presentations and their subsequent input.

Session 1

Jackline Conca

Undersecretary of Innovation and Digital Transformation, Ministry of Economy of Brazil

Mme. Undersecretary Jackline Conca highlighted that Artificial Intelligence (AI) is currently a crucial matter, especially given its potentially beneficial and harmful ramifications. AI systems are already embedded in the decision-making processes of various sectors, and while there are risks, there are also opportunities for economic, social and environmental gains.

Brazil adhered to the OECD declaration, a reference framework of principles and values for the development of AI. It then created the Brazilian Strategy for Artificial Intelligence, currently managed by a governance structure composed of more than 30 entities that work to implement concrete actions, such as the elaboration of ethical principles, the promotion of investment in research and development infrastructure, and professional training. In addition, a bill related to AI in Brazil was recently approved by the Chamber of Deputies.

In this context, Mme. Conca stressed the importance of considering the role of public authorities. Regulating emerging technologies is difficult: it is necessary to balance innovation and rights, ethics and security. AI still has a lot of room to develop and, in her view, extensive regulation could end up harming its development in the country. The positive and negative impacts of AI are still not satisfactorily understood, so excessive regulation could strangle innovation. This, in turn, would call for principle-based regulation that privileges sectoral self-regulation.

Mme. Undersecretary then focused on the AI bill currently under discussion in the Brazilian Congress. Among the principles set out in the bill are safeguards such as respect for fundamental rights, non-discrimination, transparency and the implementation of risk mitigation measures. This last point connects to the concept of risk-based regulation, as in the European Union’s proposed AI regulation. Unlike the proposed European regulation, however, the Brazilian bill does not define technological fields that are prohibited due to high risk (the European proposal contains a few ex-ante prohibitions and more stringent obligations regarding a few specific uses of AI – see Article 5). This would be aligned with the view that regulation must be proportional to the risk measured on a case-by-case basis.

The idea, then, would be to privilege intervention only when absolutely necessary, considering the risk and context of each AI system. This notion of sectoral regulation is similar to the approach in the USA, where the FDA has the prerogative of regulating and approving algorithms applied in healthcare.

The Brazilian Undersecretariat of Innovation has been working closely with the World Economic Forum’s Centre for the Fourth Industrial Revolution in São Paulo, carrying out pilot projects to understand ways to mitigate the risks of AI applications. One pilot, which will begin shortly, involves the use of AI to monitor São Paulo’s subway stations, with a view to understanding the risks involved and ways to mitigate them.

Regarding the protection of personal data, Mme. Conca expressed her view that the Brazilian General Data Protection Law (LGPD) already mitigates several of the risks related to AI applications, providing one more balancing element in the regulation of AI. The law provides for the right to review of automated decisions, establishes the Brazilian Data Protection Authority’s (ANPD) audit powers, requires data protection impact assessments and incorporates the concept of privacy by design – all elements of the LGPD that already protect the data subject in the face of AI applications.

Ana Brian

UN Special Rapporteur on the Right to Privacy

Mme. Special Rapporteur argued that we are currently at a time of great technological development, especially based on the use of data, and of personal data in particular – notably in AI applications. A few years ago, there was much talk of the “end of privacy”: technology applications were everywhere, and personal data were being collected and processed all around.

However, the picture nowadays is starkly different: the protection of personal data shows a clear evolution, with 142 countries having some kind of personal data protection regulation. The European GDPR is one such regulation, and it represents a stricter stance on the protection of personal data than the trends seen up until its creation.

142 countries with a general personal data protection regulation. 94 data protection authorities. 5.051 billion people with internet access (65%). @NelsonRemolina

The AI regulation currently under debate in Europe relies on a classification that differentiates between applications of these systems based on their potential harm to users: applications posing unacceptable risk; high-risk systems affecting citizens’ rights (related to transport, health, biometrics, education or vocational training, safety, employment, essential services, law enforcement, migration, the administration of justice or democratic processes); limited-risk systems (subject to specific transparency obligations); and minimal-risk systems (such as spam filters). AI embedded in children’s toys, for example, falls into a strongly regulated category, while chatbots fall into a less strictly regulated one, with obligations corresponding to the degree of risk.
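To make the tiered, risk-based logic concrete, here is a minimal illustrative sketch in Python. The tier names follow the European proposal as summarized above, but the mapping and examples are a simplification for illustration, not the legal text.

```python
# Illustrative sketch of the risk-based classification in the proposed EU AI
# regulation. Tier names follow the proposal; the examples are simplified.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited ex ante"
    HIGH = "stringent obligations before deployment"
    LIMITED = "specific transparency obligations"
    MINIMAL = "no additional obligations"

# Simplified example mapping of applications to tiers, per the summary above.
EXAMPLES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "AI used in employment decisions": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for application, tier in EXAMPLES.items():
    print(f"{application}: {tier.name} -> {tier.value}")
```

The design point is simply that obligations attach to the tier, not to the technology as such: the same underlying model can face different requirements depending on the context of use.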

Regarding facial recognition, the tendency of this European regulation is to limit its use for surveillance and control of public spaces. On this point, Michelle Bachelet, UN High Commissioner for Human Rights, has issued a statement proposing a moratorium on the sale and use of AI systems that pose human rights risks until the necessary safeguards are implemented.

In recent developments in the region, it is noteworthy that Uruguay adopted an AI strategy for digital government in 2019, and that Colombia has been implementing a regulatory sandbox strategy to test AI implementations based on privacy by design. This seems like an interesting tool for testing possible implementations and deriving relevant insights.

A final reflection presented by Mme. Special Rapporteur concerned the meaning and essence of artificial intelligence: “If artificial intelligence is to emulate human intelligence, and empathy and morality are aspects that permeate human intelligence, why would we overlook them when dealing with artificial intelligence?”

Farzana Dudhwala

Privacy Policy Manager, AI Policy and Governance, Facebook

AI presents new challenges for regulation, which can take many forms. It is important to allow new technologies to develop and flourish. Regulatory sandboxes can be interesting tools in this sense, as they allow for testing initiatives in controlled scenarios. Another option is experimental governance and, in particular, the prototyping of public policies, which are means of multisectoral collaboration to understand the regulatory challenges of an area.

While sandboxes are often used to reform existing regulations under the supervision of a regulatory body, policy prototyping serves to create new regulations when none are in place. Therefore, prototyping is less formalistic and provides greater flexibility to think of means of regulation that adapt to the reality of new technologies.

Policy prototyping has been applied in the Open Loop project, which seeks to create a positive feedback loop between regulators and those implementing regulation. In Latin America, the project has focused on AI transparency, explainability, risk analysis and equity. The loop has an alpha phase (research and testing) and a beta phase (iteration and tuning). Concretely, this translates into four steps:

  • a consortium of actors from multiple backgrounds and profiles develops a basic text on a topic, for example the explainability of AI systems;
  • this prototype is tested and applied in real scenarios, under observation to identify the points where it must be adjusted to achieve greater efficiency;
  • lessons learned are shared among all actors so that new regulatory iterations are developed; and
  • finally, the experiences of all involved actors are collated in a final public policy recommendation document.

The project was applied in Mexico with a focus on transparency and explainability over a period of six months. Great emphasis was placed on building partnerships between the various actors involved, with input from experts from the region. The final project report is currently being completed and will be published shortly.

Karen Duque

Google Government Affairs and Public Policy Manager

AI uses have great beneficial potential, but they also involve serious risks. However, AI and privacy do not cancel each other out, or at least they need not. An internal Google example is the development of Google Assistant.

Every project developed in the company is based on three principles: information security, responsible handling of data and always putting the user in control. This translates, in the case of Google Assistant, into a number of options, capabilities and design choices for the assistant.

These include the fact that, by default, the device does not keep audio recordings; the ability to erase all data with just a voice command, to adjust automatic data deletion periods, and to specify the assistant’s sensitivity to voice commands; and guest mode, which limits access to personal data without preventing use of the tool. This and other Google projects involving AI follow the company’s Artificial Intelligence Principles, which include privacy by design, safety and accountability.
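As a purely illustrative sketch of what privacy-by-design defaults of this kind could look like in code, consider the following. Every name here is invented for illustration; this is not Google’s actual API or configuration.

```python
# Hypothetical illustration of privacy-by-design defaults like those described
# above. All names are invented; this is not Google's actual API.

from dataclasses import dataclass

@dataclass
class AssistantPrivacySettings:
    keep_audio_recordings: bool = False    # privacy-protective default: off
    auto_delete_after_months: int = 3      # automatic data deletion period
    wake_word_sensitivity: str = "medium"  # how readily voice commands trigger
    guest_mode: bool = False               # limits access to personal data

    def erase_all_data(self) -> None:
        """Stand-in for the 'erase everything with one command' capability."""
        print("All stored data erased.")

settings = AssistantPrivacySettings()  # protective defaults apply automatically
settings.guest_mode = True             # use the tool without exposing personal data
settings.erase_all_data()
```

The design choice worth noticing is that the protective options are the defaults, so the user must opt in to more data retention rather than opt out of it.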

Veridiana Alimonti

EFF Senior Policy Analyst

Technological innovation and users’ rights need to work in tandem. The LGPD provides some important principles, such as transparency, necessity and good faith, and is an important framework for social participation and guaranteeing rights. The AI bill should aim to play a similar part, but it has been marked by hasty legislative debates and a superficial regulatory approach.

The use of AI technologies by governments must be based on an understanding of data protection as a fundamental right. In Brazil, this follows from the recent case involving IBGE, decided by the Supreme Court, but it may become even more concrete with the approval of the Constitutional Amendment Bill (PEC) 17/2019. Thus, the adoption of technologies that can affect data protection must pass a rights-balancing test.

The LGPD provides a series of guarantees, embodied in the rights of data subjects, which will apply to AI implementations. The right to review of automated decisions must be approached from the point of view of due process, so that the data subject is able not only to request the review but also to understand it. At this juncture, it is important to point out that even decisions that are not fully automated, but that contain some automated step, must also ensure access to the information that supports the decisions being made. Observing personal data protection principles, in this sense, allows for a better balancing of decisions and should serve as a basis for public policy implementations regarding AI systems.

Nina da Hora

Researcher at CTS-FGV, Member of the Safety Advisory Council of TikTok Brasil

Technical perspectives and the user’s perspective should overlap and connect. Digital security necessarily involves understanding how the user will relate to a new technology. Personal assistants, for example, have helped people with disabilities in unforeseen ways. Designing AI tools means understanding users’ ability to use and understand them.

Technology will continue to be increasingly present and essentially inescapable. Thinking about AI regulation means actually dealing with technologies and facing the challenges they pose, not simply pushing them away or vetoing them. It is important for civil society to get closer to the regulatory process in order to create norms that effectively deal with the challenges of technology. It is also important to bring industry and academia closer together to provide education on the use of technologies. It is crucial, too, to train people to be capable not only of building technology but also of thinking about it: identifying risks and shortcomings before they become real problems.

Session 2

Alejandro Pisanty

Member of the Global Partnership on AI (GPAI), hosted by the OECD

The development of technology and regulation in Latin America is very uneven. Algorithms present several levels of bias – some comparable to the biases that emerge in human action itself, others more severe.

Latin America has a rich ecosystem of creative young people deeply involved in the development of technological tools, and it is necessary to encourage the ethical development of this ecosystem. GPAI brings together experts from diverse backgrounds around responsible AI development. Some highlights among the projects developed at GPAI are (i) the creation of a consent intermediation entity for the use of data in AI; and (ii) the application of AI to the discovery of new drugs. Regarding regulation, it is important to recognize that there is a discourse resistant to regulation and oriented toward self-regulation, and it is important that multisectoral mechanisms be used to establish regulations that are neither too severe nor too permissive.

Fabrizio Scrollini

Director of ILDA

ILDA sought to compile what is known about AI policies and practices in Latin America. In general, it can be said that the region is poorly prepared for AI. A profound challenge that even projects developed in Latin America face is the use of databases generated in other regions, referring to foreign populations, which produces large discrepancies and defects in the final AI application. A particularly relevant case is that of a chronic disease risk calculator: by adjusting the data to the reality of the relevant population (Mexican, in this case), the effectiveness of the tool was greatly improved.

One can observe some common public policy challenges and insights in the region. First, there is a wide variety of AI strategies in Latin America, but these are often not backed by the necessary investments and often translate into non-specific documents, more appropriately characterized as letters of intent than as strategies. Second, there is little clarity regarding data governance and a need to develop data policy leadership. Third, there is great inequality in the implementation of the strategies, with some countries pursuing specific focuses that differ from those defined in the strategies. Fourth, regulation is less of an impediment in relation to public data than in relation to private data, where there may be real barriers to development.

Sacha Alanoca

Senior Public Policy Researcher for AI at The Future Society

It is interesting to analyze the use of AI to fight the pandemic, as this reveals many of the opportunities and challenges of this technology in relation to data protection, as well as some points regarding public policies. In an analysis of more than 100 AI tools applied to combat the pandemic, three categories of applications were identified: applications focused on the molecular level (identification and understanding of the virus); the clinical field (improving and accelerating diagnosis); and the societal field (infodemiology, epidemiology and evidence-informed decision-making).

Some shared difficulties identified were ethical and legal barriers (mainly the use of sensitive data); access to trustworthy data; and public adoption and trust. This last point is of special importance, especially in the context of a health crisis. To build trust, it is important that decisions be made through participatory processes; that implementation plans be developed with clear responsibilities; and that an active effort be made to identify the potential risks of AI applications.

Arturo Muente-Kunigami

Senior Specialist at the Inter-American Development Bank

It is important to find a balance between the interests involved in AI development, and it is necessary to find spaces for coordination between industry, government and civil society. In this effort, the priority must be, above all, to understand and learn about AI and its risks, including reflecting on possible mitigation measures, and to raise awareness and inform the population about the uses of personal data.

Generally speaking, people understand that data protection is important, but they do not know exactly why. The idea of data trusts may promote more citizen control, but it must go hand in hand with these information and awareness efforts in order to be effective. A highlight at the IDB is the fAIr LAC initiative, focused on promoting the responsible and ethical use of data in AI development. The idea is not to curb innovation, but to promote awareness of the risks involved.

Another important area of attention is the creation of specific regulations regarding AI in the region. Even AI applications that do not involve personal data, in which case personal data protection laws do not apply, can have critical effects on people’s lives. Therefore, balanced regulation is essential. A crucial point is the application of algorithms in public sector processes, given the impact and risks involved – which raises the question of whether the public sector should work exclusively with open algorithms in order to promote transparency.

Constanza Gómez Mont

Founder and President of C Minds; Chair of the World Economic Forum Global Future Council on AI for Humanity

How can the challenge of trust in AI tools be overcome? How can explainability be implemented and biases eliminated? These are issues of critical importance in the face of the increasing digitization and adoption of AI, even by small companies. What is at stake, in the end, is access to rights: services, products and the realization of equality.

There is a great challenge in the global south, where only a limited number of people and companies are aware of the importance of AI governance, of its impact on their lives and fundamental rights, and of good AI practices. Small businesses, in particular, often rely on third-party development without an understanding of how these tools are built and what safeguards are in place. Understanding the complexity of AI challenges should be a transversal concern in these companies, not restricted to technical areas, and with real involvement of the highest levels of management.

In general, European companies are already better adapted to addressing AI challenges in their applications, due to a longstanding relationship with regulations already in place. In Latin America, however, many companies prioritize other investments and concerns, mainly due to a lack of awareness of the subject and a shortage of financial and human resources, as we learned in various projects we led.

However, both consumers and companies need to become aware that AI challenges in technology applications have real effects on people’s lives as well as on market performance: in terms of efficiency, as well-built AI systems are better adapted to the application’s target audience and are therefore more efficient; in terms of reputational gains or losses, as the construction of ethical AI systems can make the difference between one and the other; and, just as importantly, in terms of the impact on the lives of their stakeholders and broader communities, as AI can have a negative impact on people if it is not centered on human rights and ethics.

In short, it is necessary to create spaces for open, multisectoral, participatory discussion to promote meaningful change. Some particularly prominent points that must be addressed are AI biases, impacts on the environment, and promoting the transparency and explainability of AI applications. Ongoing initiatives in Latin America addressing these issues include the Eon Resilience Lab, fAIr LAC led by the IDB, and the OECD’s and UNESCO’s recommendations for ethical AI, among others.

Moreover, the Open Loop Mexico project – the first policy prototype in the region to test governance mechanisms for AI transparency and explainability, led by Facebook and C Minds in collaboration with the Inter-American Development Bank, with the support of the National Institute for Transparency, Access to Information and Personal Data Protection (INAI) and the participation of companies and a group of experts – is an experimental governance project that yields key lessons for the regulation of this field.