Category: European Data Protection

CNIL Plan for AI

19. May 2023

The French Data Protection Authority (CNIL) released a statement on May 16, 2023, outlining its artificial intelligence strategy, known as the AI Action Plan. This strategy expands on the CNIL’s prior work in the field of AI and contains a number of projects aimed at encouraging the adoption of AI systems that respect people’s right to privacy.

The four key goals of the AI Action Plan are as follows:

Increasing awareness of AI systems and how they affect people: The newly created artificial intelligence service at the CNIL will place a high priority on addressing critical data protection issues related to the creation and use of AI applications. These problems include preventing illegitimate scraping of publicly accessible online data, securing user-transmitted data within AI systems and guaranteeing users’ rights over their data with regard to AI training datasets and created outputs.

Directing the creation of AI that respects privacy: The CNIL will publish guidelines and best practices on a variety of AI subjects in order to support organizations engaged in AI innovation and to prepare for the eventual adoption of the EU AI Act. Along with advice for the creation of generative AI systems, this will include a thorough manual on the rules governing data sharing and reuse.

Supporting creative actors in the French and European AI ecosystem: The CNIL prioritizes the defense of fundamental rights and freedoms in France and Europe while attempting to promote innovation within the AI ecosystem. The CNIL intends to issue a call for projects inviting participation in its 2023 regulatory sandbox as part of this endeavour. It also aims to promote more communication among academic groups, R&D facilities, and businesses engaged in the creation of AI systems.

In order to conduct audits and maintain oversight of AI systems, the CNIL will create an auditing tool specifically designed for evaluating them. It will also continue to look into AI-related complaints brought to its attention, especially those involving generative AI.

265 million euro fine for Meta

29. November 2022

The Irish Data Protection Commission (DPC) imposed an administrative fine of 265 million euros on Facebook’s parent company Meta as a result of the unlawful publication of personal data.

Investigation proceedings

After the personal data of up to 533 million Facebook and Instagram users from over 100 countries became available online in April 2021, the DPC launched investigations. As part of the investigation process, it cooperated with the other European data protection authorities and examined the Facebook Search, Facebook Messenger Contact Importer and Instagram Contact Importer tools. With the help of these tools, contacts stored on a smartphone can be imported into the Instagram or Facebook app in order to find friends or acquaintances.

Lack of technical and organisational measures to protect data

As part of its investigation, the DPC dealt with the so-called technical and organisational measures according to Article 25 GDPR. According to data protection law, data controllers must use such measures to ensure that the rights of data subjects are extensively protected. These include, for example, pseudonymisation and encryption of personal data, but also physical protection measures or the existence of reliable backups.

The DPC did not consider Meta’s technical and organisational measures to be sufficient. Therefore, in addition to the aforementioned fine of 265 million euros, it issued a reprimand as well as an order to bring the processing operations into compliance with data protection law within a certain period of time and to implement a number of specific remedial measures to this end.

Not the first fine for Meta

Meta is by now familiar with fines from European data protection authorities. In total, the company has already been fined almost one billion euros, most recently in September in the amount of 405 million euros for serious data protection violations involving underage Instagram users. The reason for the considerable amount of the individual sanctions is Article 83 GDPR, according to which fines can amount to up to four percent of a company’s total worldwide annual turnover. Meta has appealed against each of the previous decisions, so it can also be assumed in this case that Meta will not accept the fine without a judicial review, either.
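The Article 83 cap mentioned above is simple arithmetic: for an undertaking, the maximum fine is 20 million euros or four percent of total worldwide annual turnover, whichever is higher. A minimal sketch, with a purely hypothetical turnover figure:

```python
def gdpr_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound for a GDPR fine under Art. 83(5): 20 million EUR or
    4% of total worldwide annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

# Illustrative only: with a hypothetical turnover of 100 billion EUR,
# the cap would be 4 billion EUR.
print(gdpr_fine_cap(100_000_000_000))  # 4000000000.0
```

This is why a single sanction against a company of Meta’s size can run into the hundreds of millions: the cap scales with turnover rather than being a fixed amount.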

TikTok faces huge fine from Britain’s ICO

12. October 2022

Recently, the Chinese social media platform TikTok has been the subject of an investigation by the British data protection watchdog, the Information Commissioner’s Office (ICO). The investigation has so far concluded that the social media network may have breached the United Kingdom’s data protection laws, in particular the rules concerning children’s personal data. The Authority therefore issued a notice of intent, which is a potential precursor to a fine of up to a staggering 27 million pounds.

In particular, the Authority found that the platform may have processed the personal data of children under the age of 13 without obtaining parental consent. These data allegedly also include special category data, which enjoy special protection under Art. 9 GDPR.

Furthermore, in the ICO’s opinion the platform did not respect the principle of transparency, as it failed to provide complete or transparent information on how the data were collected and processed.

The ICO’s investigation is still ongoing, as the Commissioner’s Office is still deciding whether there has been a breach of data protection law and whether to impose the fine.

The protection of teenagers and children is the top priority of the ICO according to the current Information Commissioner, John Edwards. Under his guidance, the ICO has several ongoing investigations targeting various tech companies that may be breaking the UK’s data protection laws.

This is not the first time TikTok has been under observation by data protection watchdogs. In July, a US-Australian cybersecurity firm found that TikTok gathers excessive amounts of information from its users, and voiced its concern over these findings. Based on these precedents, it is possible that local data protection authorities will increase their efforts to monitor TikTok’s compliance with local laws and, in Europe, with the GDPR.

EDPS takes legal action against Europol’s new regulation

27. September 2022

On June 28, 2022, two new provisions of the amended Europol regulation came into force. These changes are considered worrying by the European Data Protection Supervisor (EDPS), as they have a direct impact on the data processing of individuals in the European Union: based on these provisions, the new regulation allows Europol to retroactively process large volumes of data, even of individuals with no links to criminal activity.

Specifically, before these new provisions were passed, individuals could expect that if their data was gathered by Europol, it would be processed within six months in order to establish whether they were involved in illicit activities, and that if they were not, the data relating to them would be deleted. With these modifications, Europol is allowed to store and process these data even if the individual was found not to be involved in any wrongdoing.

In an effort to stop these changes from effectively coming into force, the EDPS issued an order on January 3, 2022 to amend the new provisions by including a precisely determined deletion period for data related to individuals not connected to unlawful activities. Since the order was ignored by Europol, on September 16 the EDPS requested that the European Court of Justice (ECJ) annul these two provisions. The authority stated that this conduct by Europol is a clear violation of individuals’ fundamental rights.

Furthermore, by overriding a direct order of the European data protection watchdog and by introducing such amendments, the independent supervisory power of the authority is undermined. This could set a dangerous precedent in which authorities in the European Union would have to anticipate possible counter-reactions by the legislative power, overriding their supervisory activities depending on political will. This would result in a clear violation of the European Charter of Fundamental Rights, since there would be a concrete risk of undermining the independence of a supervisory authority by making it subject to undue political pressure or interference.

EDPB and EDPS criticise the Commission’s Proposal to combat child sexual abuse online

15. August 2022

In May 2022, the European Commission published its proposal on combating child sexual abuse material. The Commission justified the need for this proposal with the alleged insufficiency of voluntary detection carried out by companies. Recently, the European Data Protection Board (EDPB) and European Data Protection Supervisor (EDPS) have issued a joint statement criticizing the proposal on privacy grounds.

According to the proposal, hosting services and communication services would be obliged to identify, remove and report child sexual abuse material online. This, in turn, requires that encrypted messages can be screened; in other words, the actual text messages are to be read in order to detect grooming.

In their joint criticism, the EDPB and EDPS highlight that such an AI-based system will most likely result in errors and false positives.

EDPS Supervisor, Wojciech Wiewiórowski, said: “Measures allowing public authorities to have access to the content of communications, on a generalised basis, affect the essence of the right to private life. Even if the technology used is limited to the use of indicators, the negative impact of monitoring the text and audio communications of individuals on a generalised basis is so severe that it cannot be justified under the EU Charter of Fundamental Rights. The proposed measures related to the detection of solicitation of children in interpersonal communication services are extremely concerning.”

Artificial Intelligence and Personal Data: a hard co-existence. A new perspective for the EU

7. July 2022

In recent decades AI has developed impressively in various fields. At the same time, with each step forward, the new machines and the new processes they are programmed to perform need to collect far more data than before in order to function properly.

One of the first questions that comes to mind is: how can the rise of AI be reconciled with the principle of data minimization contained in Art. 5 para. 1 lit. c) GDPR? At first glance it seems contradictory that there may be a way: after all, the GDPR clearly states that the amount of personal data collected should be as small as possible. A study carried out by the Panel for the Future of Science and Technology of the European Union suggests that, given the wide scope conceded by the norm (referring to the exceptions contained in the article), this issue could be addressed by measures such as pseudonymization. This means that the data collected by the AI is stripped of any information that could link it to a specific individual without additional information, thus lowering the risks for individuals.
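The pseudonymization approach described above can be sketched in a few lines. The keyed-hash scheme and the key handling below are illustrative assumptions, not a method prescribed by the study or the GDPR:

```python
import hashlib
import hmac

# Illustrative only: the secret key is the "additional information" that,
# under Art. 4(5) GDPR, must be kept separately (e.g. under strict access
# control) so that the pseudonym cannot be attributed to an individual
# without it. The key value here is a placeholder.
SECRET_KEY = b"stored-separately-under-access-control"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable pseudonym before the
    record enters an AI training dataset."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user": "alice@example.com", "clicks": 42}
# The same user always maps to the same token, so the dataset remains
# useful for training, but the token alone no longer identifies anyone.
record["user"] = pseudonymise(record["user"])
```

A keyed hash (rather than a plain hash) is used so that someone without the key cannot simply recompute pseudonyms from a list of known e-mail addresses.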

The main issue with the current legal framework of the European Union regarding personal data protection is the fact that certain parts have been left vague, which causes uncertainty also in the regulation of artificial intelligence. To address this problem, the EU has put forward a proposal for a new Artificial Intelligence Act (“AIA”), aiming to create a common and more “approachable” legal framework.

One of the main features of this Act is that it divides applications of artificial intelligence into three main risk categories:

  1. Unacceptable risk: prohibited AI systems (e.g. systems that violate fundamental rights).
  2. High risk: subject to specific regulation.
  3. Low or minimal risk: no further regulation.

Regarding high-risk AIs, the AIA foresees the creation of post-market monitoring obligations. If the AI in question violates any part of the AIA, it can then be forcibly withdrawn from the market by the regulator.
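The three-tier structure and the post-market consequences described above can be summarised schematically. This is a hypothetical sketch of the logic; the labels and examples are illustrative, not the legal definitions in the draft:

```python
from enum import Enum

class RiskLevel(Enum):
    # Illustrative tiers mirroring the AIA draft's three categories.
    UNACCEPTABLE = "prohibited"   # e.g. systems violating fundamental rights
    HIGH = "specifically regulated"
    LOW = "unregulated"

def market_action(level: RiskLevel) -> str:
    """Hypothetical mapping from risk tier to market consequence."""
    if level is RiskLevel.UNACCEPTABLE:
        return "may not be placed on the market"
    if level is RiskLevel.HIGH:
        return "monitored after placement; may be withdrawn on violation"
    return "no additional obligations"

print(market_action(RiskLevel.HIGH))
```

The point of the tiered design is that obligations scale with risk: most everyday AI systems would fall into the lowest tier and face no new duties at all.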

This approach has been welcomed by the Joint Opinion of the EDPB – EDPS, although the two bodies stated that the draft still needs to be more aligned with the GDPR.

Although the Commission’s draft contains a precise description of the first two categories, these will likely change over the course of the next years as the proposal is undergoing the legislative processes of the EU.

The draft was published by the European Commission in April 2021 and must still undergo scrutiny from the European Parliament and the Council of the European Union. Currently, some amendments have been formulated and the draft is still under review by the Parliament. After the Act has passed this scrutiny, it will be subject to a two-year implementation period.

Finally, a question remains to be answered: who shall oversee and control the Act’s implementation? It is foreseen that national supervisory authorities shall be established in each EU member state. Furthermore, the AIA aims at establishing a special European AI Board made up of representatives of both the member states and the European Commission, which will also chair it. Similar to the EDPB, this Board shall have the power to issue opinions and recommendations, and to ensure the consistent application of the regulation throughout the EU.

UK announces Data Reform Bill

31. May 2022

In 2021 the Department for Culture, Media and Sport (DCMS) published a consultation document entitled “Data: a new direction”, requesting opinions on proposals that could bring changes to the UK’s data protection regime. On May 10, 2022, as part of the Queen’s Speech, Prince Charles confirmed that the government of the United Kingdom (UK) is in the process of reforming its data privacy rules, raising questions about whether the country could still be in compliance with the General Data Protection Regulation (GDPR).

Other than the statement itself, not much information was provided regarding the specific details; the accompanying briefing notes provided more. They set out the main purposes of the Bill, namely to:

  • Establish a new pro-growth and trusted data protection framework
  • Reduce the burdens on business
  • Create a world-class data rights regime
  • Support innovation
  • Drive industry participation in schemes which give citizens and small businesses more control of their data, particularly in relation to health and social care
  • Modernize the Information Commissioner’s Office (ICO), including strengthening its enforcement powers and increasing its accountability

Nevertheless, the defined goals are rather superficial. Another concern is that the new bill could deviate too far from the GDPR: the new regime might then not be able to retain its adequacy status with the EU, which currently allows personal data to be exchanged between UK and EU organizations. Prime Minister Johnson said that the Data Reform Bill would “improve the burdensome GDPR, allowing information to be shared more effectively and securely between public bodies.” So far, no time frame for the adoption of the new law has been published.

EU: Commission publishes Q&A on SCCs

30. May 2022

On 25 May 2022, the European Commission published guidance outlining questions and answers (‘Q&A’) on the two sets of Standard Contractual Clauses (‘SCCs’), on controllers and processors (‘the Controller-Processor SCCs’) and third-country data transfers (‘the Data Transfer SCCs’) respectively, as adopted by the European Commission on 4 June 2021. The Q&A are intended to provide practical guidance on the use of the SCCs. They are based on feedback from various stakeholders on their experiences using the new SCCs in the months following their adoption. 

Specifically, 44 questions are addressed, including those related to contracting, amendments, the relationship to other contract clauses, and the operation of the so-called docking clause.  In addition, the Q&A contains a specific section dedicated to each set of SCCs. Notably, in the section on the Data Transfer SCCs, the Commission addresses the scope of data transfers for which the Data Transfer SCCs may be used, highlighting that they may not be used for data transfers to controllers or processors whose processing operations are directly subject to the General Data Protection Regulation (Regulation (EU) 2016/679) (‘GDPR’) by virtue of Article 3 of the GDPR. Further to this point, the Q&A highlights that the Commission is in the process of developing an additional set of SCCs for this scenario, which will consider the requirements that already apply directly to those controllers and processors under the GDPR. 

In addition, the Q&A includes a section with questions on the obligations of data importers and exporters, specifically addressing the SCC liability scheme. The Q&A states that other provisions in the broader (commercial) contract (e.g., specific rules for allocation of liability, caps on liability between the parties) may not contradict or undermine the liability scheme of the SCCs.

Additionally, with respect to the Court of Justice of the European Union’s judgment in Data Protection Commissioner v. Facebook Ireland Limited, Maximillian Schrems (C-311/18) (‘the Schrems II Case’), the Q&A includes a set of questions on local laws and government access aimed at clarifying contracting parties’ obligations under Clause 14 of the Data Transfer SCCs. 

In this regard, the Q&A highlights that Clause 14 of the Data Transfer SCCs should not be read in isolation but used together with the European Data Protection Board’s Recommendations 01/2020 on measures that supplement transfer tools. 

CJEU considers representative actions admissible

29. April 2022

According to a press release of the European Court of Justice (CJEU), consumer protection associations can bring legal proceedings against companies over data protection infringements.

This is the conclusion reached by the Court in a decision on the proceedings of the Federation of German Consumer Organisations (vzbv), which challenged Facebook’s data protection practices. Accordingly, a consumer protection association may bring legal proceedings, in the absence of a mandate conferred on it for that purpose and independently of the infringement of specific rights of the data subjects, against the person allegedly responsible for an infringement of the laws protecting personal data. The vzbv is an institution that is entitled to bring legal proceedings under the GDPR because it pursues an objective in the public interest.

Specifically, the case concerns third-party games on Facebook, in which users must agree to the use of their data in order to be able to play these games on Facebook. According to the association, Facebook did not inform the data subjects in a precise, transparent and understandable form about the use of the data, as is actually prescribed by the General Data Protection Regulation (GDPR). The Federal Court of Justice in Germany (BGH) already came to this conclusion in May 2020; however, it was not considered sufficiently clarified whether the association could bring legal proceedings in this case.

The EU Advocate General had also previously concluded, in a legally non-binding opinion, that the association can bring legal proceedings.

Thus, the CJEU confirmed this view, so the BGH must now finally decide on the case of vzbv v. Facebook. It is also important that this decision opens the door to similar collective actions against other companies.

Record GDPR fine by the Hungarian Data Protection Authority for the unlawful use of AI

22. April 2022

The Hungarian Data Protection Authority (Nemzeti Adatvédelmi és Információszabadság Hatóság, NAIH) has recently published its annual report in which it presented a case where the Authority imposed the highest fine to date of ca. €670,000 (HUF 250 million).

This case involved the processing of personal data by a bank that acted as a data controller. The bank automatically analyzed recorded audio of customer calls using artificial intelligence-based speech signal processing software, which evaluated each call based on a list of keywords and the emotional state of the caller. The software then established a ranking of the calls, serving as a recommendation as to which callers should be called back as a priority.
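The kind of keyword-plus-emotion ranking described in the decision can be sketched as follows. The keywords, weights and scoring rule are invented for illustration; the bank’s actual software and its emotion model are not public:

```python
# Hypothetical weights for words suggesting an unhappy customer.
NEGATIVE_KEYWORDS = {"complaint": 3.0, "cancel": 2.5, "angry": 2.0}

def score_call(transcript: str, emotion_score: float) -> float:
    """Combine keyword hits with an emotion score (assumed 0.0 = calm,
    1.0 = distressed, produced by a separate model) into one priority."""
    keyword_score = sum(
        weight for kw, weight in NEGATIVE_KEYWORDS.items()
        if kw in transcript.lower()
    )
    return keyword_score + 5.0 * emotion_score

calls = [
    ("call-1", "I want to cancel my account", 0.8),
    ("call-2", "Thanks for the help", 0.1),
]
# Rank calls so the most at-risk customer would be called back first.
ranked = sorted(calls, key=lambda c: score_call(c[1], c[2]), reverse=True)
print([call_id for call_id, _, _ in ranked])  # ['call-1', 'call-2']
```

Even in this toy form, the privacy issue the NAIH identified is visible: the ranking only works because every caller’s speech and emotional state is analyzed, including callers who never complained.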

The bank justified the processing on the basis of its legitimate interests in retaining its customers and improving the efficiency of its internal operations.

According to the bank this procedure aimed at quality control, in particular at the prevention of customer complaints. However, the Authority held that the bank’s privacy notice referred to these processing activities in general terms only, and no material information was made available regarding the voice analysis itself. Furthermore, the privacy notice only indicated quality control and complaint prevention as purposes of the data processing.

In addition, the Authority highlighted that while the Bank had conducted a data protection impact assessment and found that the processing posed a high risk to data subjects due to its ability to profile and perform assessments, the data protection impact assessment did not provide substantive solutions to address these risks. The Authority also emphasized that the legal basis of legitimate interest cannot serve as a “last resort” when all other legal bases are inapplicable, and therefore data controllers cannot rely on this legal basis at any time and for any reason. Consequently, the Authority not only imposed a record fine, but also required the bank to stop analyzing emotions in the context of speech analysis.

 
