Category: EU

EDPS takes legal action against Europol’s new regulation

27. September 2022

On June 28th 2022, two new provisions of the amended Europol Regulation came into force. These changes are considered worrying by the European Data Protection Supervisor (EDPS), as they have a direct impact on the data processing of individuals in the European Union: based on these provisions, the new regulation allows Europol to retroactively process large volumes of data, even of individuals with no links to criminal activity.

Specifically, before these new provisions were passed, individuals could expect that if their data was gathered by Europol it would be processed within six months in order to establish whether or not they were involved in illicit activities, and that, if they were not, the data related to them would be deleted. With these modifications, Europol is allowed to store and process these data even if the individual was found not to be part of any wrongdoing.

In an effort to stop these changes from effectively coming into force, the EDPS issued an order on January 3rd 2022 to amend the new provisions by including a precisely determined deletion period for data related to individuals not connected to unlawful activities. Since the order was ignored by Europol, on September 16th the EDPS requested that the European Court of Justice (ECJ) annul these two provisions. The authority stated that Europol's conduct is a clear violation of individuals' fundamental rights.

Furthermore, overriding a direct order of the European data protection watchdog by introducing such amendments undermines the independent controlling power of the supervisory authority: this could set a dangerous precedent whereby authorities in the European Union would have to anticipate counter-reactions of the legislative power overriding their supervisory activities depending on political will. This would result in a clear violation of the European Charter of Fundamental Rights, since there would be a concrete risk of undermining the independence of a controlling authority by making it subject to undue political pressure or interference.

Danish watchdogs ban Google Chromebooks and Google Workspace in municipality

26. August 2022

In July 2022, after an investigation related to a data breach carried out by the Danish Data Protection Authority (Datatilsynet), Google Chromebooks and Google Workspace were banned in schools in the municipality of Helsingør. The DPA ruled that the risk assessment carried out by city officials shows that the processing of personal data by Google does not meet GDPR requirements. In particular, data transfers were targeted by the Authority: the Data Processing Agreement allows data transfers to third countries for analytical and statistical support, even though the data are primarily stored in Google's European facilities.

This decision comes at a moment of tension between Europe and the United States in the field of personal data: other high-profile cases (some still ongoing) include the Irish Data Protection Authority vs. Facebook (now part of Meta Inc.) and the German Federal Cartel Office vs. Facebook. European watchdogs have found that in many cases the American tech giants' policies do not meet the requirements established by the GDPR. This can be traced back to the lack of a comprehensive legal framework for privacy and personal data protection in the United States, where these companies are based.

This decision was taken in the aftermath of the Schrems II ruling by the European Court of Justice, which stated that the pre-existing agreement on data transfers between Europe and the US (the so-called Privacy Shield) was not compatible with the GDPR. A new deal is on the table, but it has not yet been approved and is not yet effective.

Google has become the target of various investigations by European data watchdogs, above all because of its tool Google Analytics. In January 2022 the Austrian Data Protection Authority published a decision stating that companies using Google Analytics inadvertently transferred customers' personal data, such as IP addresses, to the United States, in breach of the GDPR. Italy's Garante per la Protezione dei Dati Personali published a similar opinion a few months later, stating that "the current methods adopted by Google do not guarantee an adequate level of protection of personal data".

Privacy issues in the antitrust legal framework: “the Facebook case”

21. July 2022

European countries were among the first to introduce privacy considerations into the antitrust and competition law framework. As a result, in 2019 the German Federal Cartel Office took action to stop Facebook (now part of Meta Inc.) from further processing personal data acquired through third-party installations (above all, cookies). The proceedings on the matter are still ongoing. Recently the Irish Data Protection Authority also took a position against Facebook, preventing the American tech giant from transferring user data to the United States due to data protection concerns. In this matter, too, the parties are still in dispute.

In 2014 Facebook notoriously purchased the messaging company WhatsApp for almost $22 billion. At the time, Europe did not give much thought to the potential consequences of this merger. The operation was the object of an opinion of the European Commission: in the Commission's view the two companies' privacy policies were markedly different, and the thought that Facebook now had control over all of the data collected by WhatsApp did not sit well with the European authorities. Another key argument brought forward by the Commission was the lack of effective competition between the two companies. However, no further action was taken at the time.

A few years later, academic research highlighted the mistake made by the European Commission in not considering the enormous significance personal data have for these tech companies: because personal data are considered a form of so-called "non-price competition", they play a key role in the strategies and decision-making of big data-driven business models. In particular, when a company depends on collecting and using personal data, it tends to lower its privacy protection standards and increase the amount of data collected. This argument was brought forward by the UK's competition authority, which stated that, given the enormous importance personal data have gained in the digital market, companies such as Facebook do not face strong competition in their business.

These arguments and the growing unrest among various DPAs around the globe led in 2020 to the well-known investigation of Facebook by the Federal Trade Commission (FTC) of the United States. In particular, the FTC accused Meta Inc. (specifically Facebook) of stifling competition in order to retain its monopoly of the digital market. An American court dismissed the claims, but the high risks connected with enormous data collection were nevertheless highlighted. In particular, under Section 2 of the Sherman Act, the State has:

  • To prove that a company is in fact a monopoly, and
  • To prove that this harms consumers

This does not apply directly to the case, but the FTC argued that the harm to consumers is to be seen in Meta Inc.'s lowering of privacy standards. The case is still pending as of July 2022.

This merger showed how much privacy and antitrust issues overlap in the digitalized market.

In the following months, policymakers and enforcers in both the United States and the European Union have been struggling to establish new sets of rules to better regulate mergers between companies whose business model relies on the collection of personal data, and above all they have called for more cooperation between privacy and antitrust agencies.

European Parliament adopts Digital Services Act and Digital Markets Act

7. July 2022

On July 5, 2022, the EU Parliament voted in favor of the long-awaited Digital Markets Act (DMA) and Digital Services Act (DSA) following trilogue talks and agreements held between Parliament, Council, and European Commission earlier this year.

While the DSA, which amends the e-Commerce Directive, strictly prohibits specific forms of targeted advertising and misleading practices, the DMA can be viewed as the competition-law component of the Commission's Digital Services Package, setting out stricter obligations for large online platforms.

Upon entry into force, advertisements targeting children, advertisements based on sensitive data, and dark patterns will no longer be permitted. Further, online platforms must provide their users with the choice not to receive recommendations based on profiling. The DSA also seeks to strengthen platforms' accountability and transparency. This means that these platforms have to provide authorities and vetted researchers with access to information on the content moderation rules the respective platform uses, as well as information on the algorithms used by recommender systems.

The spread of illegal content, such as hate speech, is also addressed by this legislation, which obliges large platforms to respond quickly, with due regard to the other fundamental rights implicated.

Online platforms and other service providers that do not respect the new obligations may be fined up to 10% of their total annual turnover for violations of the DMA, and up to 6% for violations of the DSA.

Artificial Intelligence and Personal Data: a hard co-existence. A new perspective for the EU

In recent decades AI has developed impressively in various fields. At the same time, with each step forward, the new machines and the new processes they are programmed to perform need to collect far more data than before in order to function properly.

One of the first questions that comes to mind is: how can the rise of AI be reconciled with the principle of data minimization contained in Art. 5 para. 1 lit. c) GDPR? At first glance it seems contradictory that there could be a way: after all, the GDPR clearly states that the amount of personal data collected should be as small as possible. A study carried out by the Panel for the Future of Science and Technology of the European Parliament suggests that, given the wide scope conceded by the norm (referring to the exceptions contained in the article), this issue could be addressed by measures like pseudonymization. This means that the data collected by the AI is stripped of any information that could link it to a specific individual without additional information, thus lowering the risks for individuals.
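As a minimal illustration (not taken from the study), the following Python sketch shows keyed pseudonymization of a record: the direct identifier is replaced with a deterministic pseudonym that cannot be reversed without a separately held key. The field names, the key handling, and the choice of HMAC-SHA-256 are assumptions made for the example.

```python
import hmac
import hashlib

# Secret key held separately from the pseudonymized dataset; without it,
# pseudonyms cannot be traced back to individuals (hypothetical key).
SECRET_KEY = b"store-me-in-a-separate-key-vault"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, deterministic pseudonym."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record with one direct identifier and two non-identifying fields.
record = {"email": "jane.doe@example.com", "age_band": "30-39", "consented": True}

# Keep the non-identifying attributes, replace the direct identifier.
pseudonymized_record = {
    "user_pseudonym": pseudonymize(record["email"]),
    "age_band": record["age_band"],
    "consented": record["consented"],
}
print(pseudonymized_record)
```

Because the pseudonym is deterministic, the same individual can still be tracked consistently across a training dataset, while re-identification requires the separately stored key.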

The main issue with the current legal framework of the European Union regarding personal data protection is the fact that certain parts have been left vague, which also causes uncertainty in the regulation of artificial intelligence. To address this problem, the EU has put forward a proposal for a new Artificial Intelligence Act ("AIA"), aiming to create a common and more "approachable" legal framework.

One of the main features of this Act is that it divides applications of artificial intelligence into three main risk categories:

  1. Unacceptable risk: such AI systems are prohibited (e.g. systems that violate fundamental rights).
  2. High risk: subject to specific regulation.
  3. Low or minimal risk: no further regulation.

Regarding high-risk AIs, the AIA foresees the creation of post-market monitoring obligations. If the AI in question violates any part of the AIA, it can then be forcibly withdrawn from the market by the regulator.

This approach has been welcomed in the Joint Opinion of the EDPB and the EDPS, although the two bodies stated that the draft still needs to be better aligned with the GDPR.

Although the Commission's draft contains a precise description of the first two categories, these will likely change over the next few years as the proposal undergoes the EU's legislative process.

The draft was published by the European Commission in April 2021 and must still undergo scrutiny by the European Parliament and the Council of the European Union. Some amendments have already been formulated, and the draft is still under review by the Parliament. Once the Act has passed this scrutiny, it will be subject to a two-year implementation period.

Finally, a question remains to be answered: who shall oversee and control the Act's implementation? It is foreseen that national supervisory authorities shall be established in each EU member state. Furthermore, the AIA aims to establish a special European AI Board made up of representatives of both the member states and the European Commission, which will also chair it. Similar to the EDPB, this Board shall have the power to issue opinions and recommendations and to ensure the consistent application of the regulation throughout the EU.

UK announces Data Reform Bill

31. May 2022

In 2021 the Department for Digital, Culture, Media and Sport (DCMS) published a consultation document entitled "Data: a new direction", requesting opinions on proposals that could bring changes to the UK's data protection regime. On May 10, 2022, as part of the Queen's Speech, Prince Charles confirmed that the government of the United Kingdom (UK) is in the process of reforming its data privacy rules, raising questions about whether the country will remain in compliance with the General Data Protection Regulation (GDPR).

Other than the statement itself, not much information was provided regarding the specific details. The accompanying briefing notes, however, set out the main purposes of the Bill, namely to:

  • Establish a new pro-growth and trusted data protection framework
  • Reduce the burdens on business
  • Create a world-class data rights regime
  • Support innovation
  • Drive industry participation in schemes which give citizens and small businesses more control of their data, particularly in relation to health and social care
  • Modernize the Information Commissioner's Office (ICO), including strengthening its enforcement powers and increasing its accountability

Nevertheless, the stated goals remain rather superficial. Another concern is that the new bill could deviate too far from the GDPR: the new regime might then fail to retain the adequacy status with the EU that allows personal data to be exchanged between UK and EU organizations. Prime Minister Johnson said that the Data Reform Bill would "improve the burdensome GDPR, allowing information to be shared more effectively and securely between public bodies." So far, no time frame for the adoption of the new law has been published.

EU: Commission publishes Q&A on SCCs

30. May 2022

On 25 May 2022, the European Commission published guidance outlining questions and answers (‘Q&A’) on the two sets of Standard Contractual Clauses (‘SCCs’), on controllers and processors (‘the Controller-Processor SCCs’) and third-country data transfers (‘the Data Transfer SCCs’) respectively, as adopted by the European Commission on 4 June 2021. The Q&A are intended to provide practical guidance on the use of the SCCs. They are based on feedback from various stakeholders on their experiences using the new SCCs in the months following their adoption. 

Specifically, 44 questions are addressed, including those related to contracting, amendments, the relationship to other contract clauses, and the operation of the so-called docking clause.  In addition, the Q&A contains a specific section dedicated to each set of SCCs. Notably, in the section on the Data Transfer SCCs, the Commission addresses the scope of data transfers for which the Data Transfer SCCs may be used, highlighting that they may not be used for data transfers to controllers or processors whose processing operations are directly subject to the General Data Protection Regulation (Regulation (EU) 2016/679) (‘GDPR’) by virtue of Article 3 of the GDPR. Further to this point, the Q&A highlights that the Commission is in the process of developing an additional set of SCCs for this scenario, which will consider the requirements that already apply directly to those controllers and processors under the GDPR. 

In addition, the Q&A includes a section with questions on the obligations of data importers and exporters, specifically addressing the SCC liability scheme. Specifically, the Q&A states that other provisions in the broader (commercial) contract (e.g., specific rules for allocation of liability, caps on liability between the parties) may not contradict or undermine liability schemes of the SCCs. 

Additionally, with respect to the Court of Justice of the European Union’s judgment in Data Protection Commissioner v. Facebook Ireland Limited, Maximillian Schrems (C-311/18) (‘the Schrems II Case’), the Q&A includes a set of questions on local laws and government access aimed at clarifying contracting parties’ obligations under Clause 14 of the Data Transfer SCCs. 

In this regard, the Q&A highlights that Clause 14 of the Data Transfer SCCs should not be read in isolation but used together with the European Data Protection Board’s Recommendations 01/2020 on measures that supplement transfer tools. 

Twitter fined $150m for handing users’ contact details to advertisers

Twitter has been fined $150 million by U.S. authorities after the company collected users’ email addresses and phone numbers for security reasons and then used the data for targeted advertising. 

According to a settlement with the U.S. Department of Justice and the Federal Trade Commission, the social media platform had told users that the information would be used to keep their accounts secure. “While Twitter represented to users that it collected their telephone numbers and email addresses to secure their accounts, Twitter failed to disclose that it also used user contact information to aid advertisers in reaching their preferred audiences,” said a court complaint filed by the DoJ. 

As stated in the court documents, the breaches occurred between May 2013 and September 2019, and the information was collected for purposes such as two-factor authentication. However, in addition to the above-mentioned purposes, Twitter used that data to allow advertisers to target specific groups of users by matching phone numbers and email addresses with advertisers' own lists.
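For illustration, the sketch below shows hashed contact-list matching, a technique commonly used by ad platforms for this kind of audience matching. It is not a description of Twitter's actual system; the normalization rules, hashing scheme, and sample values are assumptions made for the example.

```python
import hashlib

def normalize_and_hash(contact: str) -> str:
    """Normalize an email/phone string and hash it so raw contacts are not exchanged."""
    return hashlib.sha256(contact.strip().lower().encode("utf-8")).hexdigest()

# Contacts users provided to the platform (e.g. for account security).
platform_contacts = {"jane.doe@example.com", "+15551234567"}

# Contact list uploaded by an advertiser (hypothetical values).
advertiser_list = {"JANE.DOE@example.com ", "john@example.org"}

platform_hashes = {normalize_and_hash(c) for c in platform_contacts}
advertiser_hashes = {normalize_and_hash(c) for c in advertiser_list}

# The overlap identifies users the advertiser can target with ads.
matched = platform_hashes & advertiser_hashes
print(f"{len(matched)} matched audience member(s)")
```

The key point of the FTC's complaint is not the matching technique itself but the purpose: contact details collected under a security justification were repurposed for this kind of advertising match without disclosure.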

In addition to the financial penalty, the settlement requires Twitter to improve its compliance practices. According to the complaint, the false disclosures violated the FTC Act and a 2011 settlement with the agency.

Twitter’s chief privacy officer, Damien Kieran, said in a statement that the company has “cooperated with the FTC at every step of the way.” 

“In reaching this settlement, we have paid a $150m penalty, and we have aligned with the agency on operational updates and program enhancements to ensure that people’s personal data remains secure, and their privacy protected,” he added. 

Twitter generates 90 percent of its $5 billion (£3.8 billion) in annual revenue from advertising.  

The complaint also alleges that Twitter falsely claimed to comply with the EU-U.S. and Swiss-U.S. Privacy Shield frameworks, which prohibit companies from using data in ways that consumers have not approved of.

The settlement with Twitter follows years of controversy over tech companies’ privacy practices. Revelations in 2018 that Facebook, the world’s largest social network, used phone numbers provided for two-factor authentication for advertising purposes enraged privacy advocates. Facebook, now Meta, also settled the matter with the FTC as part of a $5 billion settlement in 2019. 

 

CJEU considers representative actions admissible

29. April 2022

According to a press release of the European Court of Justice (CJEU), consumer protection associations can bring legal proceedings against companies for infringements of data protection law.

This is the conclusion reached by the Court in a decision on proceedings brought by the Federation of German Consumer Organisations (vzbv), which challenged Facebook's privacy policy. Accordingly, a consumer protection association may bring legal proceedings against the person allegedly responsible for an infringement of the laws protecting personal data, even in the absence of a mandate conferred on it for that purpose and independently of the infringement of specific rights of individual data subjects. The vzbv is an institution entitled to bring such proceedings under the GDPR because it pursues an objective in the public interest.

Specifically, the case concerns third-party games on Facebook, for which users must agree to the use of their data in order to play. According to the association, Facebook did not inform the data subjects in a precise, transparent and understandable form about the use of their data, as prescribed by the General Data Protection Regulation (GDPR). The Federal Court of Justice in Germany (BGH) had already come to this conclusion in May 2020; however, it was not considered sufficiently clarified whether the association could bring legal proceedings in this case.

The Advocate General at the CJEU had previously reached the same conclusion in a legally non-binding opinion.

The CJEU has now confirmed this view, so the BGH must make a final decision in vzbv vs. Facebook. Importantly, this decision also opens the door to similar collective actions against other companies.

Record GDPR fine by the Hungarian Data Protection Authority for the unlawful use of AI

22. April 2022

The Hungarian Data Protection Authority (Nemzeti Adatvédelmi és Információszabadság Hatóság, NAIH) has recently published its annual report, in which it presented a case where the Authority imposed its highest fine to date of ca. €670,000 (HUF 250 million).

This case involved the processing of personal data by a bank acting as a data controller. The bank automatically analyzed recorded audio of customer calls using artificial intelligence-based speech signal processing software, which evaluated each call against a list of keywords and the emotional state of the caller. The software then ranked the calls, serving as a recommendation as to which callers should be called back as a priority.
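Purely as an illustration of this type of processing (not the bank's actual software), the sketch below ranks calls by combining keyword hits with an emotion score. The keyword list, the weighting, and the scoring function are invented for the example; a real system would derive the emotion score from a speech model.

```python
from dataclasses import dataclass

# Hypothetical keywords signalling dissatisfaction; the real list is not public.
COMPLAINT_KEYWORDS = {"cancel", "complaint", "unacceptable", "refund"}

@dataclass
class CallAnalysis:
    call_id: str
    transcript: str
    negative_emotion_score: float  # 0.0-1.0, e.g. output of a speech emotion model

def priority_score(call: CallAnalysis) -> float:
    """Combine keyword hits and emotional state into a callback priority."""
    words = call.transcript.lower().split()
    keyword_hits = sum(1 for w in words if w in COMPLAINT_KEYWORDS)
    return keyword_hits + 2.0 * call.negative_emotion_score

calls = [
    CallAnalysis("c1", "I want a refund this is unacceptable", 0.9),
    CallAnalysis("c2", "Thanks for the quick help", 0.1),
]

# Rank calls so the most dissatisfied customers are called back first.
for call in sorted(calls, key=priority_score, reverse=True):
    print(call.call_id, round(priority_score(call), 2))
```

Even a simple pipeline like this profiles data subjects by inferred emotional state, which is exactly the kind of high-risk processing the Authority found inadequately disclosed and justified.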

The bank justified the processing on the basis of its legitimate interests in retaining its customers and improving the efficiency of its internal operations.

According to the bank, this procedure was aimed at quality control, in particular at the prevention of customer complaints. However, the Authority held that the bank's privacy notice referred to these processing activities only in general terms, and no material information was made available regarding the voice analysis itself. Furthermore, the privacy notice indicated only quality control and complaint prevention as purposes of the data processing.

In addition, the Authority highlighted that while the bank had conducted a data protection impact assessment and found that the processing posed a high risk to data subjects due to its ability to profile and assess them, the impact assessment did not provide substantive solutions to address these risks. The Authority also emphasized that the legal basis of legitimate interest cannot serve as a "last resort" when all other legal bases are inapplicable; data controllers therefore cannot rely on this legal basis at any time and for any reason. Consequently, the Authority not only imposed a record fine but also required the bank to stop analyzing emotions in the context of speech analysis.

 
