
Artificial Intelligence and Personal Data: a hard co-existence. A new perspective for the EU

7. July 2022

In recent decades, AI has seen impressive development in various fields. At the same time, with each step forward, the new machines and the processes they are programmed to perform need to collect far more data than before in order to function properly.

One of the first questions that comes to mind is how the rise of AI can be reconciled with the principle of data minimization contained in Art. 5 para. 1 lit. c) GDPR. At first glance it seems contradictory that there may be a way: after all, the GDPR clearly states that the amount of personal data collected should be as small as possible. A study carried out by the Panel for the Future of Science and Technology of the European Union suggests that, given the wide scope conceded by the norm (referring to the exceptions contained in the article), this issue could be addressed by measures like pseudonymization. This means that the data collected by the AI is stripped of all information that could link personal data to a specific individual without additional information, thus lowering the risks for individuals.
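Pseudonymization of this kind can be sketched in a few lines of code. The following is a minimal illustration only, assuming a keyed hash (HMAC-SHA256) as the pseudonymization technique and hypothetical field names; the study itself does not prescribe a specific method, and real-world implementations must store the key separately from the dataset:

```python
import hashlib
import hmac

# The secret key is the "additional information": without it, the
# pseudonyms below cannot be linked back to a specific individual.
# In practice it must be stored separately and access-controlled.
SECRET_KEY = b"stored-separately-from-the-dataset"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed-hash pseudonym."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record: direct identifiers are replaced before the data
# is fed to an AI system, while the useful attribute is kept.
record = {"name": "Jane Doe", "email": "jane@example.com", "purchase_total": 49.90}
pseudonymized = {
    "user_id": pseudonymize(record["email"]),  # stable pseudonym per user
    "purchase_total": record["purchase_total"],
}
print(pseudonymized)
```

Because the same input always yields the same pseudonym, records belonging to one individual can still be linked for analysis, yet re-identification requires the separately held key.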

The main issue with the current legal framework of the European Union regarding personal data protection is the fact that certain parts have been left vague, which causes uncertainty also in the regulation of artificial intelligence. To address this problem, the EU has put forward a proposal for a new Artificial Intelligence Act (“AIA”), aiming to create a common and more “approachable” legal framework.

One of the main features of the Act is that it divides applications of artificial intelligence into three main risk categories:

  1. Creating an unacceptable risk, thus prohibited AIs (e.g. systems that violate fundamental rights).
  2. Creating a high risk, subject to specific regulation.
  3. Creating a low or minimum risk, with no further regulation.

Regarding high-risk AIs, the AIA foresees post-market monitoring obligations. If an AI system violates any part of the AIA, the regulator can forcibly withdraw it from the market.

This approach has been welcomed by the Joint Opinion of the EDPB – EDPS, although the two bodies stated that the draft still needs to be more aligned with the GDPR.

Although the Commission’s draft contains a precise description of the first two categories, these will likely change over the coming years as the proposal undergoes the EU legislative process.

The draft was published by the European Commission in April 2021 and must still undergo scrutiny from the European Parliament and the Council of the European Union. Currently, some amendments have been formulated and the draft is still under review by the Parliament. After the Act has passed scrutiny, it will be subject to a two-year implementation period.

Finally, a question remains to be answered: who shall oversee and control the Act’s implementation? It is foreseen that national supervisory authorities shall be established in each EU member state. Furthermore, the AIA aims at establishing a special European AI Board, made up of representatives of both the member states and the European Commission, with the latter acting as chair. Similar to the EDPB, this Board shall have the power to issue opinions and recommendations and to ensure the consistent application of the regulation throughout the EU.

UK announces Data Reform Bill

31. May 2022

In 2021 the Department for Culture, Media and Sport (DCMS) published a consultation document entitled “Data: a new direction”, requesting opinions on proposals that could bring changes to the UK’s data protection regime. On May 10, 2022, as part of the Queen’s Speech, Prince Charles confirmed that the government of the United Kingdom (UK) is in the process of reforming its data privacy rules, raising questions about whether the country could still be in compliance with the General Data Protection Regulation (GDPR).

The statement itself provided little detail, but the accompanying briefing notes set out the main purposes of the Bill, namely:

  • Establishing a new pro-growth and trusted data protection framework
  • Reducing the burdens on businesses
  • Creating a world-class data rights regime
  • Supporting innovation
  • Driving industry participation in schemes which give citizens and small businesses more control of their data, particularly in relation to health and social care
  • Modernizing the Information Commissioner’s Office (ICO), including strengthening its enforcement powers and increasing its accountability

Nevertheless, the stated goals remain rather superficial. Another concern is that the new bill could deviate too far from the GDPR: the new regime might then fail to retain the adequacy status with the EU that currently allows personal data to be exchanged between UK and EU organizations. Prime Minister Johnson said that the Data Reform Bill would “improve the burdensome GDPR, allowing information to be shared more effectively and securely between public bodies.” So far, no time frame for the adoption of the new law has been published.

EU: Commission publishes Q&A on SCCs

30. May 2022

On 25 May 2022, the European Commission published guidance outlining questions and answers (‘Q&A’) on the two sets of Standard Contractual Clauses (‘SCCs’) adopted by the European Commission on 4 June 2021: one for contracts between controllers and processors (‘the Controller-Processor SCCs’) and one for third-country data transfers (‘the Data Transfer SCCs’). The Q&A are intended to provide practical guidance on the use of the SCCs. They are based on feedback from various stakeholders on their experiences using the new SCCs in the months following their adoption.

Specifically, 44 questions are addressed, including those related to contracting, amendments, the relationship to other contract clauses, and the operation of the so-called docking clause.  In addition, the Q&A contains a specific section dedicated to each set of SCCs. Notably, in the section on the Data Transfer SCCs, the Commission addresses the scope of data transfers for which the Data Transfer SCCs may be used, highlighting that they may not be used for data transfers to controllers or processors whose processing operations are directly subject to the General Data Protection Regulation (Regulation (EU) 2016/679) (‘GDPR’) by virtue of Article 3 of the GDPR. Further to this point, the Q&A highlights that the Commission is in the process of developing an additional set of SCCs for this scenario, which will consider the requirements that already apply directly to those controllers and processors under the GDPR. 

In addition, the Q&A includes a section on the obligations of data importers and exporters, specifically addressing the SCC liability scheme. It states that other provisions in the broader (commercial) contract (e.g., specific rules for the allocation of liability, caps on liability between the parties) may not contradict or undermine the liability scheme of the SCCs.

Additionally, with respect to the Court of Justice of the European Union’s judgment in Data Protection Commissioner v. Facebook Ireland Limited, Maximillian Schrems (C-311/18) (‘the Schrems II Case’), the Q&A includes a set of questions on local laws and government access aimed at clarifying contracting parties’ obligations under Clause 14 of the Data Transfer SCCs. 

In this regard, the Q&A highlights that Clause 14 of the Data Transfer SCCs should not be read in isolation but used together with the European Data Protection Board’s Recommendations 01/2020 on measures that supplement transfer tools. 

Twitter fined $150m for handing users’ contact details to advertisers

Twitter has been fined $150 million by U.S. authorities after the company collected users’ email addresses and phone numbers for security reasons and then used the data for targeted advertising. 

According to a settlement with the U.S. Department of Justice and the Federal Trade Commission, the social media platform had told users that the information would be used to keep their accounts secure. “While Twitter represented to users that it collected their telephone numbers and email addresses to secure their accounts, Twitter failed to disclose that it also used user contact information to aid advertisers in reaching their preferred audiences,” said a court complaint filed by the DoJ. 

As stated in the court documents, the breaches occurred between May 2013 and September 2019, and the information was apparently used for purposes such as two-factor authentication. However, in addition to the above-mentioned purposes, Twitter used that data to allow advertisers to target specific groups of users by matching phone numbers and email addresses with advertisers’ own lists.

In addition to financial compensation, the settlement requires Twitter to improve its compliance practices. According to the complaint, the false disclosures violated FTC law and a 2011 settlement with the agency. 

Twitter’s chief privacy officer, Damien Kieran, said in a statement that the company has “cooperated with the FTC at every step of the way.” 

“In reaching this settlement, we have paid a $150m penalty, and we have aligned with the agency on operational updates and program enhancements to ensure that people’s personal data remains secure, and their privacy protected,” he added. 

Twitter generates 90 percent of its $5 billion (£3.8 billion) in annual revenue from advertising.  

The complaint also alleges that Twitter falsely claimed to comply with the EU-U.S. and Swiss-U.S. privacy frameworks, which prohibit companies from using data in ways that consumers have not approved of.

The settlement with Twitter follows years of controversy over tech companies’ privacy practices. Revelations in 2018 that Facebook, the world’s largest social network, used phone numbers provided for two-factor authentication for advertising purposes enraged privacy advocates. Facebook, now Meta, also settled the matter with the FTC as part of a $5 billion settlement in 2019. 

 

CJEU considers representative actions admissible

29. April 2022

According to a press release of the European Court of Justice (CJEU), consumer protection associations can bring legal proceedings against companies for data protection infringements.

This is the conclusion the Court reached in proceedings brought by the Federation of German Consumer Organisations (vzbv), which challenged Facebook’s data protection practices. Accordingly, a consumer protection association may bring legal proceedings, in the absence of a mandate conferred on it for that purpose and independently of the infringement of specific rights of the data subjects, against the person allegedly responsible for an infringement of the laws protecting personal data. The vzbv is an institution entitled to bring such proceedings under the GDPR because it pursues an objective in the public interest.

Specifically, the case is about third-party games on Facebook, in which users must agree to the use of their data in order to be able to play these games on Facebook. According to the association, Facebook did not inform the data subjects in a precise, transparent and understandable form about the use of the data, as is actually prescribed by the General Data Protection Regulation (GDPR). The Federal Court of Justice in Germany (BGH) already came to this conclusion in May 2020; however, it considered it insufficiently clarified whether the association could bring legal proceedings in this case.

The EU Advocate General had previously concluded, in a legally non-binding opinion, that the association can bring such legal proceedings.

Thus, the CJEU confirmed this view, so the BGH must now finally decide on the case of vzbv vs. Facebook. Importantly, this decision also opens the door to similar collective actions against other companies.

Record GDPR fine by the Hungarian Data Protection Authority for the unlawful use of AI

22. April 2022

The Hungarian Data Protection Authority (Nemzeti Adatvédelmi és Információszabadság Hatóság, NAIH) has recently published its annual report in which it presented a case where the Authority imposed the highest fine to date of ca. €670,000 (HUF 250 million).

This case involved the processing of personal data by a bank that acted as a data controller. The controller automatically analyzed recorded audio of customer calls using artificial intelligence-based speech signal processing software, which evaluated each call based on a list of keywords and the emotional state of the caller. The software then ranked the calls, serving as a recommendation as to which customers should be called back as a priority.

The bank justified the processing on the basis of its legitimate interests in retaining its customers and improving the efficiency of its internal operations.

According to the bank this procedure aimed at quality control, in particular at the prevention of customer complaints. However, the Authority held that the bank’s privacy notice referred to these processing activities in general terms only, and no material information was made available regarding the voice analysis itself. Furthermore, the privacy notice only indicated quality control and complaint prevention as purposes of the data processing.

In addition, the Authority highlighted that while the bank had conducted a data protection impact assessment and found that the processing posed a high risk to data subjects due to its ability to profile and perform assessments, the impact assessment did not provide substantive solutions to address these risks. The Authority also emphasized that the legal basis of legitimate interest cannot serve as a “last resort” when all other legal bases are inapplicable, and therefore data controllers cannot rely on this legal basis at any time and for any reason. Consequently, the Authority not only imposed a record fine, but also required the bank to stop analyzing emotions in the context of speech analysis.

 

Google launches “Reject All” button on cookie banners

After being hit with a €150 million fine by France’s data protection agency CNIL earlier in the year for making the process of rejecting cookies unnecessarily confusing and convoluted for users, Google has added a new “Reject All” button to the cookie consent banners that have become ubiquitous on websites in Europe. Users visiting Search and YouTube in Europe while signed out or in incognito mode will soon see an updated cookie dialogue with reject all and accept all buttons.

Previously, users only had two options: “I accept” and “personalize.” While this allowed users to accept all cookies with a single click, they had to navigate through various menus and options if they wanted to reject all cookies. “This update, which began rolling out earlier this month on YouTube, will provide you with equal ‘Reject All’ and ‘Accept All’ buttons on the first screen in your preferred language,” wrote Google product manager Sammit Adhya in a blog post.

According to Google they have kicked off the rollout of the new cookie banner in France and will be extending the change to all Google users in Europe, the U.K., and Switzerland soon.

Google’s plan to include a “Reject All” button on cookie banners after its existing policy violated EU law was also welcomed by Hamburg’s Commissioner for Data Protection and Freedom of Information Thomas Fuchs during a presentation of his 2021 activity report.

But the introduction of the “Reject All” button is likely to be only an interim solution, because the US giant already presented far-reaching plans at the end of January to remove third-party cookies altogether by 2023.

Instead of cookies, the internet giant wants to rely on in-house tracking technology developed under its Privacy Sandbox project.

ECJ against data retention without any reason or limit

6. April 2022

In its judgment of 5 April 2022, announced by press release, the ECJ once again ruled that the collection of private communications data without any reason or limit is unlawful. This reinforces the rulings of 2014, 2016 and 2020, according to which changes are necessary at EU and national level.

In this judgment, the ECJ states that it is for the national court in Ireland to decide whether retained data may be admitted as evidence in a long-standing murder case.

Questions regarding this issue were submitted in 2020 by Germany, France and Ireland. The EU Advocate General confirmed, in a legally non-binding manner, the incompatibility of national laws with EU fundamental rights.

However, a first exception to data retention resulted from the 2020 judgment, according to which, in the event of a serious threat to national security, storage for a limited period and subject to judicial review was recognized as permissible.

Subsequently, a judgment in 2021 stated that national law must provide clear and precise rules with minimum conditions for the purpose of preventing abuse.

According to the ECJ, storage without cause, subject to restrictions, should be allowed in the following cases:

  • When storage is limited to specific individuals or locations;
  • When no concrete evidence of crime is available but the local crime rate suffices;
  • At frequently visited locations such as airports and train stations;
  • When national laws require the identity of prepaid cardholders to be stored;
  • As a “quick freeze”: the immediate backup and temporary storage of data where a crime is suspected.

All of these are to be used only to combat serious crime or prevent threats to national security.

In Germany, Justice Minister Marco Buschmann is in favor of a quick freeze solution as an alternative that preserves fundamental rights. However, the EU states are to work on a legally compliant option for data retention despite the ECJ’s criticism of this principle.

UK’s new data protection clauses now in force

31. March 2022

After the British government announced reforms to UK’s data protection system last year, the Secretary of State submitted on February 2nd, 2022, a framework to the Parliament to regulate international data transfers and replace the EU Standard Contractual Clauses (SCC). As no objections were raised and the Parliament approved the documents, they entered into force on March 21st, 2022.

The set of rules consists of the International Data Transfer Agreement (IDTA), the International Data Transfer Addendum to the European Commission’s SCC for international data transfers (Addendum) and a Transitional Provisions document. The transfer rules are issued under Section 119A of the Data Protection Act 2018 and take into account the binding judgment of the European Court of Justice in the case commonly referred to as “Schrems II”.

The documents serve as a new tool for compliance with Art. 46 UK GDPR for data transfers to third countries and broadly mirror the rules of the EU GDPR. The UK government also retained the ability to issue its own adequacy decisions regarding data transfers to other third countries and international organizations.

The transfer rules are of immediate benefit to organizations transferring personal data outside the UK. In addition, the transitional provisions allow organizations to rely on the EU SCC until March 21st, 2024, for contracts entered into up to and including September 21st, 2022. However, this is subject to the condition that the data processing activities remain unchanged and that the clauses ensure adequate safeguards.

Italian DPA imposes a 20 Mio Euro Fine on Clearview AI

29. March 2022

The Italian data protection authority “Garante” has fined Clearview AI 20 million Euros for data protection violations regarding its facial recognition technology. Clearview AI’s facial recognition system uses over 10 billion images from the internet, and the company prides itself on having the largest biometric image database in the world. The data protection authority found Clearview AI to be in breach of numerous GDPR requirements. For example, fair and lawful processing was not carried out within the data protection framework, and there was no lawful basis for the collection of information and no appropriate transparency and data retention policies.

Last November, the UK ICO warned of a potential 17 million pound fine against Clearview and, in this context, also ordered Clearview to stop processing data.

Then, in December, the French CNIL ordered Clearview to stop processing citizens’ data and gave it two months to delete all the data it had stored, but did not mention any explicit financial sanction.

In Italy, in addition to paying the 20 million Euro fine, Clearview AI must now not only delete all images of Italian citizens from its database, but also delete the biometric information needed to search for a specific face. Furthermore, the company must designate an EU representative as a point of contact for EU data subjects and the supervisory authority.
