Tag: Artificial Intelligence

CNIL Plan for AI

19. May 2023

On May 16, 2023, the French Data Protection Authority (CNIL) released a statement outlining its artificial intelligence policy, known as the AI Action Plan. This strategy builds on the CNIL’s prior work in the field of AI and contains a number of projects aimed at encouraging the adoption of AI systems that respect people’s right to privacy.

The four key goals of the AI Action Plan are as follows:

Increasing awareness of AI systems and how they affect people: The newly created artificial intelligence service at the CNIL will give high priority to addressing critical data protection issues related to the development and use of AI applications. These issues include preventing the unlawful scraping of publicly accessible online data, securing the data users transmit to AI systems, and guaranteeing users’ rights over their data with regard to AI training datasets and generated outputs.

Directing the creation of AI that respects privacy: The CNIL will publish guidelines and best practices on a variety of AI subjects in order to support organizations engaged in AI innovation and to prepare for the eventual adoption of the EU AI Act. Along with guidance on the development of generative AI systems, this will include a thorough manual on the rules governing data sharing and reuse.

Supporting innovative actors in the French and European AI ecosystem: The CNIL prioritizes the protection of fundamental rights and freedoms in France and Europe while seeking to promote innovation within the AI ecosystem. As part of this endeavour, the CNIL intends to issue a call for projects inviting participation in its 2023 regulatory sandbox. It also aims to foster communication among academic groups, R&D facilities, and businesses engaged in the development of AI systems.

Auditing and controlling AI systems: The CNIL will develop an auditing tool specifically designed for assessing AI systems. It will also continue to investigate AI-related complaints brought to its attention, especially those involving generative AI.

Artificial Intelligence and Personal Data: a hard co-existence. A new perspective for the EU

7. July 2022

In recent decades, AI has developed impressively in various fields. At the same time, with each step forward, the new machines and the new processes they are programmed to perform need to collect far more data than before in order to function properly.

One of the first questions that comes to mind is how the rise of AI can be reconciled with the principle of data minimization contained in Art. 5 para. 1 lit. c) GDPR. At first glance, it seems contradictory that there could be a way: after all, the GDPR clearly states that the amount of personal data collected should be as small as possible. A study carried out by the Panel for the Future of Science and Technology of the European Union suggests that, given the wide scope conceded by the norm (referring to the exceptions contained in the article), this issue could be addressed by measures such as pseudonymization. This means that the data collected by the AI is stripped of any information that could link it to a specific individual without additional information, thus lowering the risks for individuals.
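To make the idea concrete, the following is a minimal sketch of pseudonymization applied before data enters an AI training set. The field names, the HMAC-based tokenization, and the separately stored key are illustrative assumptions, not details taken from the study.

```python
# Minimal pseudonymization sketch: the direct identifier is replaced with a
# keyed token, and the key is kept apart from the dataset. Field names and
# the keying scheme are illustrative assumptions.
import hashlib
import hmac

SECRET_KEY = b"kept-separately-from-the-training-data"  # hypothetical key store

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a keyed token; coarsen quasi-identifiers."""
    token = hmac.new(SECRET_KEY, record["email"].encode(), hashlib.sha256).hexdigest()
    return {
        "user_token": token,                   # re-linkable only with the separate key
        "age_band": record["age"] // 10 * 10,  # coarsened quasi-identifier
        "usage_stats": record["usage_stats"],  # analytic payload remains usable
    }

print(pseudonymize({"email": "jane@example.com", "age": 34, "usage_stats": [3, 7, 2]}))
```

Because the token can only be linked back to a person with the separately held key, the data remains personal data under the GDPR, but the risk to individuals is reduced.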

The main issue with the European Union’s current legal framework for personal data protection is that certain parts have been left vague, which also creates uncertainty in the regulation of artificial intelligence. To address this problem, the EU has put forward a proposal for a new Artificial Intelligence Act (“AIA”), aiming to create a common and more “approachable” legal framework.

One of the main features of this Act is that it divides applications of artificial intelligence into three main risk categories:

  1. AIs creating an unacceptable risk, which are prohibited (e.g. systems that violate fundamental rights).
  2. AIs creating a high risk, which are subject to specific regulation.
  3. AIs creating a low or minimal risk, which are subject to no further regulation.

Regarding high-risk AIs, the AIA foresees the creation of post-market monitoring obligations. If the AI in question violates any part of the AIA, it can then be forcibly withdrawn from the market by the regulator.

This approach has been welcomed by the EDPB and the EDPS in their Joint Opinion, although the two bodies stated that the draft still needs to be better aligned with the GDPR.

Although the Commission’s draft contains a precise description of the first two categories, these will likely change over the coming years as the proposal undergoes the EU’s legislative processes.

The draft was published by the European Commission in April 2021 and must still undergo scrutiny by the European Parliament and the Council of the European Union. Currently, some amendments have been formulated and the draft is still under review by the Parliament. Once the Act has passed scrutiny, it will be subject to a two-year implementation period.

Finally, a question remains to be answered: who shall oversee and control the Act’s implementation? It is foreseen that national supervisory authorities will be established in each EU Member State. Furthermore, the AIA aims to establish a special European AI Board made up of representatives of both the Member States and the European Commission, which will also chair it. Similar to the EDPB, this Board shall have the power to issue opinions and recommendations and to ensure the consistent application of the regulation throughout the EU.

Record GDPR fine by the Hungarian Data Protection Authority for the unlawful use of AI

22. April 2022

The Hungarian Data Protection Authority (Nemzeti Adatvédelmi és Információszabadság Hatóság, NAIH) has recently published its annual report, in which it presented a case where the Authority imposed its highest fine to date of approximately €670,000 (HUF 250 million).

This case involved the processing of personal data by a bank that acted as a data controller. The bank automatically analyzed recorded audio of customer calls using artificial intelligence-based speech signal processing software, which evaluated each call against a list of keywords and assessed the emotional state of the caller. The software then ranked the calls, serving as a recommendation as to which callers should be called back as a priority.
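As a rough illustration only, a ranking of this kind could look like the following sketch; the keyword list, emotion labels, and scoring weights are purely hypothetical assumptions and are not taken from the NAIH decision.

```python
# Purely hypothetical sketch of a keyword-and-emotion call ranking; the
# keywords, emotion labels and weights are illustrative assumptions, not
# details from the NAIH case.
from dataclasses import dataclass

NEGATIVE_KEYWORDS = {"complaint", "cancel", "terminate"}            # assumed keyword list
EMOTION_WEIGHT = {"angry": 3.0, "frustrated": 2.0, "neutral": 0.0}  # assumed labels

@dataclass
class Call:
    caller_id: str
    transcript: str
    emotion: str  # label assigned by the speech-analysis model

def priority_score(call: Call) -> float:
    """Combine keyword hits and the detected emotion into a single score."""
    keyword_hits = sum(word in call.transcript.lower() for word in NEGATIVE_KEYWORDS)
    return keyword_hits + EMOTION_WEIGHT.get(call.emotion, 0.0)

def rank_calls(calls: list[Call]) -> list[Call]:
    """Order calls so the highest-priority customers are called back first."""
    return sorted(calls, key=priority_score, reverse=True)
```

It is this kind of emotion-based profiling that the Authority ultimately ordered the bank to stop.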

The bank justified the processing on the basis of its legitimate interests in retaining its customers and improving the efficiency of its internal operations.

According to the bank, this procedure was aimed at quality control, in particular at the prevention of customer complaints. However, the Authority held that the bank’s privacy notice referred to these processing activities only in general terms, and that no material information was made available about the voice analysis itself. Furthermore, the privacy notice indicated only quality control and complaint prevention as purposes of the data processing.

In addition, the Authority highlighted that while the bank had conducted a data protection impact assessment and found that the processing posed a high risk to data subjects due to the profiling and assessments it enabled, the assessment did not provide substantive solutions to address these risks. The Authority also emphasized that legitimate interest cannot serve as a “last resort” legal basis when all other legal bases are inapplicable, and that data controllers therefore cannot rely on it at any time and for any reason. Consequently, the Authority not only imposed a record fine but also required the bank to stop analyzing emotions in the context of speech analysis.

Artificial intelligence in business operations poses problems in terms of GDPR compliance

25. February 2022

With the introduction of the General Data Protection Regulation, the intention was to protect personal data and to limit the processing of such data to what is strictly necessary. Processing should only be possible for a specific, well-defined purpose.

In the age of technology, it is particularly convenient to draw on artificial intelligence, especially in everyday business, and to use it to optimize business processes. More and more companies are looking for solutions based on artificial intelligence, which generally involves processing significant amounts of personal data.

For artificial intelligence to be implementable at all, the system must first be fed large amounts of data so that it can learn from it and thus make its own decisions.

When using so-called “machine learning”, a subset of artificial intelligence, care must be taken as to whether and which data are processed, so that the processing complies with the General Data Protection Regulation.

If a company receives data for further processing and analysis, or if it shares data for this purpose, there must be mutual clarity regarding this processing.

The use of artificial intelligence faces significant challenges in complying with the General Data Protection Regulation, above all with the principles of transparency, purpose limitation and data minimization.

In addition, the data protection impact assessment required by the General Data Protection Regulation poses problems with regard to artificial intelligence: since artificial intelligence is a self-learning system that can make its own decisions, some of those decisions may be neither understandable nor predictable.

In summary, there is a strong tension between artificial intelligence and data privacy.

Many companies are trying to get around this problem with so-called “crowd sourcing” solutions. These involve producing anonymized data to which additional fuzziness (statistical noise) is applied, so that it can no longer be traced back to an individual.
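As a minimal sketch of what such “fuzziness” can mean in practice, the following adds noise to an anonymized aggregate in the spirit of differential privacy; the epsilon value and the reported count are illustrative assumptions, not a vetted privacy mechanism.

```python
# Minimal sketch of adding "fuzziness" to an anonymized aggregate, in the
# spirit of differential privacy. Epsilon and the example count are
# illustrative assumptions.
import numpy as np

def noisy_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    """Return the count plus Laplace noise with scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

# E.g. reporting how many customers used a feature, without an exact tally:
print(noisy_count(1342))
```

Smaller epsilon values add more noise and thus stronger protection, at the cost of less accurate statistics.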

EDPS and the EDPB call for a tightening of the EU draft legislation on the regulation of Artificial Intelligence (AI)

26. July 2021

In a joint statement, the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) call for a general ban on the use of artificial intelligence for the automated recognition of human characteristics in publicly accessible spaces. This refers to surveillance technologies that recognise faces, human gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioral signals. In addition to the AI-supported recognition of human characteristics in public spaces, the EDPS and EDPB also call for a ban on AI systems using biometrics to categorize individuals into clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights. With the exception of individual applications in the medical field, the EDPS and the EDPB are also calling for a ban on AI for sentiment recognition.

In April, the EU Commission presented a first draft law on the regulation of AI applications. The draft explicitly excluded the area of international law enforcement cooperation from its scope, an exclusion about which the EDPS and EDPB expressed “concern”. The draft is based on a categorisation of AI applications into different types of risk, which are to be regulated to different degrees depending on the level of risk they pose to fundamental rights. In principle, the EDPS and EDPB support this approach and the fact that the EU is addressing the issue at all. However, they call for this concept of fundamental rights risk to be adapted to the EU data protection framework.

Andrea Jelinek, EDPB Chair, and Wojciech Wiewiórowski, the European Data Protection Supervisor, are quoted:

Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms.

The EDPS and EDPB explicitly support the provision that national data protection authorities become the competent supervisory authorities for the application of the new regulation, and explicitly welcome that the EDPS is intended to be the competent authority and market surveillance authority for the supervision of the Union institutions, agencies and bodies. The idea that the Commission also gives itself a predominant role in the “European Artificial Intelligence Board” is questioned by the EU data protection authorities: “This contradicts the need for a European AI Board that is independent of political influence”. They call for the board to be given more autonomy, to ensure its independence.

Worldwide, there is strong resistance to the use of biometric surveillance systems in public spaces. A large global alliance of 175 civil society organisations, academics and activists is calling for a ban on biometric surveillance in public spaces, concerned that the potential for abuse of these technologies is too great and the consequences too severe. For example, the BBC reports that China is testing a camera system on Uighurs in Xinjiang that uses AI and facial recognition to detect emotional states. This system is supposed to serve as a kind of modern lie detector and be used, for example, in criminal proceedings.

EU offers new alliance with the USA on data protection

4. December 2020

The European Commission and the High Representative of the Union for Foreign Affairs and Security Policy outlined a new EU-US agenda for global change, which was published on December 2nd, 2020. It constitutes a proposal for a new, forward-looking transatlantic cooperation covering a variety of matters, including data protection.

The draft plan states the following guiding principles:

  • Advancing global common goods, providing a solid base for stronger multilateral action and institutions that all like-minded partners are encouraged to join.
  • Pursuing common interests and leveraging collective strength to deliver results on strategic priorities.
  • Looking for solutions that respect common values of fairness, openness and competition – including where there are bilateral differences.

According to the draft plan, this is a “once-in-a-generation” opportunity to forge a new global alliance. It includes an appeal for the EU and US to bury the hatchet on persistent sources of transatlantic tension and join forces to shape the digital regulatory environment. The proposal aims to create a shared approach to enforcing data protection law and combatting cybersecurity threats, which could also include possible restrictive measures against attributed attackers from third countries. Moreover, a transatlantic agreement on Artificial Intelligence forms part of the recommendation, the purpose being to set a blueprint for regional and global standards. The EU also wants to openly discuss diverging views on data governance and to facilitate free data flows with trust on the basis of high safeguards. Furthermore, the proposal includes the creation of a specific dialogue with the US on the responsibility of online platforms and Big Tech, as well as the development of a common approach to protecting critical technologies.

The draft plan is expected to be submitted for endorsement by the European Council at a meeting on December 10-11th, 2020. It suggests an EU-US Summit in the first half of 2021 as the moment to launch the new transatlantic agenda.

The global competition for Artificial Intelligence – Is it Time to Regulate Now?

21. May 2019

This year’s edition of the European Identity & Cloud Conference took place last week.
In the context of this event, various questions relevant from a data protection perspective arose. Dr. Karsten Kinast, Managing Director of KINAST Attorneys at Law and Fellow Analyst of the organizer KuppingerCole, gave a keynote speech on the question of whether internationally uniform regulations should be created in the context of the global competition for artificial intelligence (AI). Dr. Kinast outlined the controversial debate about the danger of future AI on the one hand and the resulting legal problems and possible solutions on the other. At present, there is no form of AI that understands the concrete content of its processing. Nor has AI yet been able to draw independent conclusions from a processing operation, let alone base autonomous decisions on it. Furthermore, from today’s perspective it is not even known how such a synthetic intelligence could be created.
For this reason, the primary task is not to develop a code of ethics within which AIs could unfold as independent subjects. Rather, from today’s perspective, it is a matter of a far more mundane view of responsibilities.

The entire lecture can be found here.