Tag: AI

CNIL Plan for AI

19. May 2023

On May 16, 2023, the French Data Protection Authority (CNIL) released a statement, known as the AI Action Plan, outlining its artificial intelligence policy. This strategy builds on the CNIL’s prior work in the field of AI and contains a number of projects aimed at encouraging the adoption of AI systems that respect people’s right to privacy.

The four key goals of the AI Action Plan are as follows:

Increasing awareness of AI systems and how they affect people: The newly created artificial intelligence service at the CNIL will prioritize addressing critical data protection issues related to the development and use of AI applications. These issues include preventing the illegitimate scraping of publicly accessible online data, securing user-transmitted data within AI systems, and guaranteeing users’ rights over their data with regard to AI training datasets and generated outputs.

Directing the creation of AI that respects privacy: The CNIL will publish guidelines and best practices on a variety of AI subjects in order to support organizations engaged in AI innovation and to prepare for the eventual adoption of the EU AI Act. Along with guidance on the development of generative AI systems, this will include a thorough manual on the rules governing data sharing and reuse.

Supporting creative actors in the French and European AI ecosystem: The CNIL prioritizes the defense of fundamental rights and freedoms in France and Europe while attempting to promote innovation within the AI ecosystem. The CNIL intends to issue a call for projects inviting participation in its 2023 regulatory sandbox as part of this endeavour. It also aims to promote more communication among academic groups, R&D facilities, and businesses engaged in the creation of AI systems.

Auditing and controlling AI systems: The CNIL will develop an auditing tool specifically designed for assessing AI systems. It will also continue to investigate AI-related complaints brought to its attention, particularly those involving generative AI.

Record GDPR fine by the Hungarian Data Protection Authority for the unlawful use of AI

22. April 2022

The Hungarian Data Protection Authority (Nemzeti Adatvédelmi és Információszabadság Hatóság, NAIH) has recently published its annual report in which it presented a case where the Authority imposed the highest fine to date of ca. €670,000 (HUF 250 million).

This case involved the processing of personal data by a bank acting as a data controller. The bank automatically analyzed recorded audio of customer calls using AI-based speech signal processing software that evaluated each call against a list of keywords and assessed the emotional state of the caller. The software then ranked the calls, serving as a recommendation as to which customers should be called back as a priority.

The bank justified the processing on the basis of its legitimate interests in retaining its customers and improving the efficiency of its internal operations.

According to the bank, this procedure was aimed at quality control, in particular the prevention of customer complaints. However, the Authority held that the bank’s privacy notice referred to these processing activities in general terms only, and that no material information was made available regarding the voice analysis itself. Furthermore, the privacy notice indicated only quality control and complaint prevention as purposes of the data processing.

In addition, the Authority highlighted that while the bank had conducted a data protection impact assessment and found that the processing posed a high risk to data subjects due to its ability to profile and evaluate them, the assessment did not provide substantive solutions to address these risks. The Authority also emphasized that the legal basis of legitimate interest cannot serve as a “last resort” when all other legal bases are inapplicable; data controllers therefore cannot rely on it at any time and for any reason. Consequently, the Authority not only imposed a record fine, but also required the bank to stop analyzing emotions in the context of speech analysis.


EDPS and the EDPB call for a tightening of the EU draft legislation on the regulation of Artificial Intelligence (AI)

26. July 2021

In a joint statement, the European Data Protection Supervisor (EDPS) and the European Data Protection Board (EDPB) call for a general ban on the use of artificial intelligence for the automated recognition of human characteristics in publicly accessible spaces. This refers to surveillance technologies that recognise faces, human gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioral signals. In addition to the AI-supported recognition of human characteristics in public spaces, the EDPS and EDPB also call for a ban on AI systems that use biometrics to categorize individuals into clusters based on ethnicity, gender, political or sexual orientation, or other grounds on which discrimination is prohibited under Article 21 of the Charter of Fundamental Rights. With the exception of individual applications in the medical field, the EDPS and the EDPB are also calling for a ban on AI for emotion recognition.

In April, the EU Commission presented a first draft law on the regulation of AI applications. The draft explicitly excluded the area of international law enforcement cooperation. The EDPS and EDPB expressed “concern” about the exclusion of international law enforcement cooperation from the scope of the draft. The draft is based on a categorisation of different AI applications into different types of risk, which are to be regulated to different degrees depending on the level of risk to the fundamental rights. In principle, the EDPS and EDPB support this approach and the fact that the EU is addressing the issue in general. However, they call for this concept of fundamental rights risk to be adapted to the EU data protection framework.

Andrea Jelinek, EDPB Chair, and Wojciech Wiewiórowski, the European Data Protection Supervisor, are quoted:

Deploying remote biometric identification in publicly accessible spaces means the end of anonymity in those places. Applications such as live facial recognition interfere with fundamental rights and freedoms to such an extent that they may call into question the essence of these rights and freedoms.

The EDPS and EDPB explicitly support the draft’s provision that national data protection authorities become the competent supervisory authorities for the application of the new regulation, and they explicitly welcome that the EDPS is intended to be the competent authority and the market surveillance authority for the supervision of the Union institutions, agencies and bodies. The EU data protection authorities question, however, the Commission’s intention to give itself a predominant role in the “European Artificial Intelligence Board”: “This contradicts the need for a European AI Board that is independent of political influence”. They call for the board to be given more autonomy to ensure its independence.

Worldwide, there is considerable resistance to the use of biometric surveillance systems in public spaces. A large global alliance of 175 civil society organisations, academics and activists is calling for a ban on biometric surveillance in public spaces. The concern is that the potential for abuse of these technologies is too great and the consequences too severe. For example, the BBC reports that China is testing a camera system on Uighurs in Xinjiang that uses AI and facial recognition to detect emotional states. This system is supposed to serve as a kind of modern lie detector and to be used, for example, in criminal proceedings.

The global competition for Artificial Intelligence – Is it Time to Regulate Now?

21. May 2019

The European Identity & Cloud Conference 2019 took place last week. In the context of this event, various questions relevant from a data protection perspective arose. Dr. Karsten Kinast, Managing Director of KINAST Attorneys at Law and Fellow Analyst of the organizer KuppingerCole, gave a keynote speech on the question of whether internationally uniform regulations should be created amid the global competition for artificial intelligence (AI). Dr. Kinast outlined the controversial debate about the danger of future AI on the one hand and the resulting legal problems and possible solutions on the other. At present, no form of AI understands the concrete content of what it processes, nor has any AI yet been able to draw independent conclusions from a processing operation, let alone base autonomous decisions on them. Moreover, from today’s perspective, it is not even known how such a synthetic intelligence could be created.
For this reason, the primary task is not to develop a code of ethics under which AIs could act as independent subjects. Rather, from today’s perspective, it is the far more mundane matter of allocating responsibilities.

The entire lecture can be found here.