Category: EU

French DPA fines phone operator for various violations of the GDPR

10. January 2023

After receiving several complaints, the French Data Protection Authority (CNIL) decided in November 2022 to impose a fine of 300,000 euros on the French phone operator FREE for several violations of the GDPR.

In particular, findings included violations of:

  • Articles 12 and 21 GDPR, regarding transparent communication on how data subjects can exercise their rights, in particular the right to erasure.

  • Article 15 GDPR, regarding the data subject's right of access.

  • Article 32 GDPR, regarding the security of personal data.

  • Article 33 GDPR, as FREE did not comply with the obligation to document a personal data breach.
In addition to the fine, the CNIL ordered FREE to comply with the GDPR's rules on the handling of access and erasure requests and to demonstrate this compliance within three months of the decision, subject to a penalty of 500 euros for each day of delay.


265 million euro fine for Meta

29. November 2022

The Irish Data Protection Commission (DPC) imposed an administrative fine of 265 million euros on Facebook's parent company Meta as a result of the unlawful publication of personal data.

Investigation proceedings

After the personal data of up to 533 million Facebook and Instagram users from over 100 countries became available online in April 2021, the DPC launched an investigation. As part of the investigation process, it cooperated with the other European data protection authorities and examined the Facebook Search, Facebook Messenger Contact Importer and Instagram Contact Importer tools. With these tools, contacts stored on a smartphone can be imported into the Instagram or Facebook app in order to find friends or acquaintances.

Lack of technical and organisational measures to protect data

As part of its investigation, the DPC examined the so-called technical and organisational measures required under Article 25 GDPR. Under data protection law, data controllers must use such measures to ensure that the rights of data subjects are comprehensively protected. These include, for example, pseudonymisation and encryption of personal data, but also physical protection measures and reliable backups.

The DPC did not consider Meta’s technical and organisational measures to be sufficient. Therefore, in addition to the aforementioned fine of 265 million euros, it issued a reprimand as well as an order to bring the processing operations into compliance with data protection law within a certain period of time and to implement a number of specific remedial measures to this end.

Not the first fine for Meta

Meta is by now familiar with fines from European data protection authorities. In total, the company has already been fined almost one billion euros, most recently in September in the amount of 405 million euros for serious data protection violations involving underage Instagram users. The reason for the considerable size of the individual sanctions is Article 83 GDPR, under which fines can amount to up to four percent of a company's total worldwide annual turnover. Meta has appealed each of the previous decisions, so it can be assumed that in this case, too, it will not accept the fine without judicial review.
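To put that ceiling into perspective, here is a minimal sketch (an illustration, not part of the DPC's decision) computing the upper bound of a fine under Article 83(5) GDPR, which permits fines of up to 20 million euros or four percent of total worldwide annual turnover of the preceding financial year, whichever is higher; the turnover figure used below is hypothetical:

```python
def gdpr_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of a fine under Article 83(5) GDPR: up to EUR 20
    million or 4% of total worldwide annual turnover of the preceding
    financial year, whichever is higher."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

# Hypothetical example: at EUR 100 billion turnover, the 4% branch
# applies and the ceiling is EUR 4 billion.
print(gdpr_fine_ceiling(100_000_000_000.0))  # 4000000000.0
```

Measured against a ceiling of this magnitude, even a 265 million euro fine remains well below the maximum the GDPR permits.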

Another 20 million Euro fine for Clearview AI

28. October 2022

The French data protection authority CNIL imposed a fine of 20 million euros on Clearview AI, making it the latest in a line of authorities to deem the biometrics company's processing activities unlawful under data protection law.

Clearview AI is a US company that extracts photographs and videos that are directly accessible online, including on social media, in order to feed its biometric image database, which it prides itself on being the biggest in the world. Access to the search engine based on this database is offered to law enforcement authorities.

The case

The decision followed several complaints from data subjects in 2020, which led to the CNIL’s investigations and a formal notice to Clearview AI in November 2021 to “cease the collection and use of data of persons on French territory in the absence of a legal basis” and “facilitate the exercise of individuals’ rights and to comply with requests for erasure.” However, the company did not react to this notice within the two-month deadline imposed by the CNIL. Therefore, the authority imposed not only the fine but also an order to Clearview AI “to stop collecting and processing data of individuals residing in France without a legal basis and to delete the data of these persons that it had already collected, within a period of two months.” In addition, it set a “penalty of 100,000 euros per day of delay beyond these two months.”

CNIL based its decision on three breaches. First, Clearview AI had processed the data without a legal basis. Given the “intrusive and massive nature of the process which makes it possible to retrieve the images present on Internet of the millions of internet users in France”, Clearview AI had no legitimate interest in the data processing. Second, the CNIL sanctioned Clearview AI’s inadequate handling of data subjects’ requests. Lastly, it penalized the company’s failure to cooperate with the CNIL.

The impact of the decision

For over two years, Clearview AI has been under the scrutiny of data protection authorities (“DPAs”) all over the world. So far, it has been fined more than 68 million euros in total. Apart from CNIL's fine, there have been fines of 20 million euros by the Hellenic DPA in Greece in July 2022, over 7.5 million pounds by the UK Information Commissioner's Office in May 2022 and 20 million euros by the Italian Garante in March 2022.

CNIL's decision is likely not the last, considering the all-encompassing nature of Clearview AI's collection of personal data, which, given the company's business model, inevitably concerns EU data subjects. Whether the company will comply within the two-month period remains to be seen.

Italian DPA launches investigation into cookie walls and paywalls

27. October 2022

On October 21st, 2022, the Italian Data Protection Authority launched an investigation into the use of cookie walls by several online newspapers. Although the GDPR allows the implementation of cookie walls and paywalls (which withhold the content of a website unless cookies have been accepted or a certain amount of money has been paid), the Italian watchdog will take a closer look at whether these have been implemented correctly and do not violate the European regulation.

Further information is yet to be released by the authorities.

American company ordered to pay 75,000 euros to wrongfully terminated employee

12. October 2022

A few days ago, a Dutch court ordered a Florida-based company to pay 75,000 euros in compensation to an employee. The employee had been fired because he refused to keep his work computer's camera on all day, as required by the company, out of concern that this was an invasion of his privacy.

After he was fired, he took his former employer to court, suing for wrongful termination. The judges agreed, holding that the American company's policy violated the employee's privacy and data protection laws. The worker had previously raised the complaint with his employer, pointing out that it could already see his shared screen while he was working and that it was therefore not necessary for him to keep the camera on.

Rather than a matter of personal data protection, this was a matter of the employee's right to privacy, as enshrined in Article 8 of the European Convention on Human Rights: the court argued that the company's request was disproportionate and an intrusion into the worker's privacy.

Under Dutch law, the company may appeal within three months of the ruling. In the aftermath of the ruling, the company shut down its offices in Rijswijk, Netherlands, where the plaintiff worked.

EDPS takes legal action against Europol’s new regulation

27. September 2022

On June 28th, 2022, two new provisions of the amended Europol Regulation came into force. The European Data Protection Supervisor (EDPS) considers these changes worrying, as they have a direct impact on the processing of individuals' data in the European Union: the new provisions allow Europol to retroactively process large volumes of data, even of individuals with no links to criminal activity.

Specifically, before the new provisions were passed, individuals could expect that if their data was gathered by Europol, it would be processed within six months in order to establish whether they were involved in illicit activities, and that if they were not, the data relating to them would be deleted. Under the amendments, Europol is allowed to store and process these data even if the individual is found not to be involved in any wrongdoing.

In an effort to stop these changes from effectively coming into force, the EDPS issued an order on January 3rd, 2022 to amend the new provisions by including a precisely determined deletion period for data relating to individuals not connected to unlawful activities. Since Europol ignored the order, on September 16th the EDPS requested that the European Court of Justice (ECJ) annul the two provisions. The authority stated that Europol's course of action is a clear violation of individuals' fundamental rights.

Furthermore, by overriding a direct order of the European data protection watchdog through such amendments, the independent controlling power of the supervisory authority is undermined. This could set a dangerous precedent in which EU supervisory authorities would have to anticipate that the legislature might, depending on political will, override their supervisory activities. It would amount to a clear violation of the European Charter of Fundamental Rights, since there would be a concrete risk of undermining the independence of a supervisory authority by subjecting it to undue political pressure or interference.

Danish watchdog bans Google Chromebooks and Google Workspace in municipality

26. August 2022

In July 2022, after an investigation related to a data breach carried out by the Danish Data Protection Authority (Datatilsynet), Google Chromebooks and Google Workspace were banned in schools in the municipality of Helsingør. The DPA ruled that the risk assessment carried out by city officials shows that Google's processing of personal data does not meet GDPR requirements. In particular, the Authority targeted data transfers: the Data Processing Agreement allows data to be transferred to third countries for analytical and statistical support, even though the data are primarily stored in Google's European facilities.

This decision comes at a moment of tension between Europe and the United States over personal data: other prominent cases (some still ongoing) include the Irish Data Protection Authority vs. Facebook (now part of Meta Inc.) and the German Federal Cartel Office vs. Facebook. European watchdogs have found that in many cases the American tech giants' policies do not meet the requirements established by the GDPR. This can be traced back to the lack of a comprehensive legal framework for privacy and personal data protection in the United States, where these companies are based.

This decision was taken in the aftermath of the Schrems II ruling by the European Court of Justice, which held that the pre-existing agreement on data transfers between Europe and the US (the so-called Privacy Shield) was not compatible with the GDPR. A new deal is on the table, but it has not yet been approved and is not yet effective.

Google has become the target of various investigations by European data watchdogs, above all because of its Google Analytics tool. In January, the Austrian Data Protection Authority published an opinion stating that companies using Google Analytics inadvertently transferred customers' personal data, such as IP addresses, to the United States in breach of the GDPR. Italy's Garante per la Protezione dei Dati Personali published a similar opinion a few weeks later, stating that “the current methods adopted by Google do not guarantee an adequate level of protection of personal data”.

Privacy issues in the antitrust legal framework: “the Facebook case”

21. July 2022

European countries were among the first to introduce privacy considerations into the antitrust and competition law framework. As a result, in 2019 the German Federal Cartel Office took action to stop Facebook (now part of Meta Inc.) from further processing personal data acquired through third-party installations (above all, cookies). The proceedings on the matter are still ongoing. Recently, the Irish Data Protection Authority also took a position against Facebook by preventing the American tech giant from transferring user data to the United States due to data safety concerns. This matter, too, is still in dispute.

In 2014, Facebook famously purchased the messaging company WhatsApp for almost 22 billion dollars. At the time, Europe did not give much thought to the potential consequences of this merger. The transaction was the subject of an opinion of the European Commission: in the Commission's view, the two companies' privacy policies differed considerably, and the fact that Facebook now had control over all of the data collected by WhatsApp did not sit well with the European authorities. Another key argument brought forward by the Commission was the lack of effective competition between the two companies. However, no further action was taken at the time.

A few years later, academic research highlighted the European Commission's mistake in not considering how much personal data mean to these tech companies: because personal data are regarded as a dimension of so-called “non-price competition”, they play a key role in the strategies and decision-making of big data-driven business models. In particular, when a company depends on collecting and using personal data, it tends to lower its privacy protection standards and increase the amount of data collected. This argument was also brought forward by the UK's competition authority, which stated that, given the enormous importance personal data have gained in the digital market, companies such as Facebook do not face strong competition in their business.

These arguments, together with growing unrest among DPAs around the globe, led in 2020 to the well-known investigation of Facebook by the United States Federal Trade Commission (FTC). The FTC accused Meta Inc. (in particular Facebook) of stifling competition in order to retain its monopoly over the digital market. An American court dismissed the claims, but the high risks connected with data collection on this scale were nonetheless highlighted. Under Section 2 of the Sherman Act, the State has:

  • to prove that the company is in fact a monopoly, and
  • to prove that the monopoly harms consumers.

These requirements do not map neatly onto the case, but the FTC argued that the harm to consumers lies in Meta Inc.'s lowering of privacy standards. The case is still pending as of July 2022.

This merger showed how much privacy and antitrust issues overlap in the digital market.

In the following months, policymakers and enforcers in both the United States and the European Union struggled to establish new rules to better regulate mergers between companies whose business models rely on the collection of personal data, and above all called for more cooperation between privacy and antitrust agencies.

European Parliament adopts Digital Services Act and Digital Markets Act

7. July 2022

On July 5, 2022, the EU Parliament voted in favor of the long-awaited Digital Markets Act (DMA) and Digital Services Act (DSA) following trilogue talks and agreements held between Parliament, Council, and European Commission earlier this year.

While the DSA, amending the e-Commerce Directive, strictly prohibits specific forms of targeted advertising and misleading practices, the DMA can be viewed as the competition law component of the Commission's Digital Services Package, setting out stricter obligations for large online platforms.

Upon entry into force, advertisements targeting children, advertisements based on sensitive data, and dark patterns will no longer be permitted. Further, online platforms must provide their users with the choice not to receive recommendations based on profiling. The DSA also seeks to strengthen platforms' accountability and transparency: platforms have to provide authorities and vetted researchers with access to information on their content moderation rules as well as on the algorithms used by their recommender systems.

The spread of illegal content, such as hate speech, is also addressed by this legislation, which obliges large platforms to respond quickly while giving due regard to the other fundamental rights implicated.

Online platforms and other service providers that do not respect the new obligations may be fined up to 10% of their total annual turnover for violations of the DMA and up to 6% for violations of the DSA.

Artificial Intelligence and Personal Data: a hard co-existence. A new perspective for the EU

In recent decades, AI has developed impressively in various fields. At the same time, with each step forward, the new systems and the processes they are programmed to perform need to collect far more data than before in order to function properly.

One of the first questions that comes to mind is how the rise of AI can be reconciled with the principle of data minimization contained in Art. 5 para. 1 lit. c) GDPR. At first glance, the two seem contradictory: after all, the GDPR clearly states that the amount of personal data collected should be as small as possible. A study carried out by the European Parliament's Panel for the Future of Science and Technology suggests that, given the wide scope the norm concedes (referring to the exceptions contained in the article), this issue could be addressed by measures such as pseudonymization. This means that the data collected by the AI is stripped of any information that could link it to a specific individual without additional information, thus lowering the risks for individuals.
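As an illustration of this idea, here is a minimal sketch of one common pseudonymization technique: replacing a direct identifier with a keyed hash. The identifiers and record fields below are hypothetical, and the approach is a generic example rather than the specific measure the study proposes:

```python
import hashlib
import hmac
import secrets

# Secret key stored separately from the data set; without it, the
# pseudonyms cannot be traced back to the original identifiers.
PSEUDONYMIZATION_KEY = secrets.token_bytes(32)

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. an e-mail address) with a
    keyed SHA-256 hash (HMAC). Records remain linkable to each other,
    but re-identification requires the separately stored key."""
    return hmac.new(PSEUDONYMIZATION_KEY,
                    identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical training record: the direct identifier is replaced
# before the data enters the AI pipeline.
record = {"user": pseudonymize("jane.doe@example.com"), "clicks": 42}
print(record)
```

Note that under the GDPR, pseudonymized data remain personal data as long as re-identification is possible with additional information; pseudonymization lowers the risk for individuals but does not take the data out of the GDPR's scope.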

The main issue with the European Union's current legal framework for personal data protection is that certain parts have been left vague, which also causes uncertainty in the regulation of artificial intelligence. To address this problem, the EU has put forward a proposal for a new Artificial Intelligence Act (“AIA”), aiming to create a common and more “approachable” legal framework.

One of the main features of the Act is that it divides applications of artificial intelligence into three main risk categories:

  1. Unacceptable risk: prohibited AI systems (e.g. systems that violate fundamental rights).
  2. High risk: subject to specific regulation.
  3. Low or minimal risk: no further regulation.

For high-risk AI systems, the AIA provides for post-market monitoring obligations. If the system in question violates any part of the AIA, the regulator can force its withdrawal from the market.

This approach was welcomed in the Joint Opinion of the EDPB and the EDPS, although the two bodies stated that the draft still needs to be better aligned with the GDPR.

Although the Commission's draft contains a precise description of the first two categories, these will likely change over the coming years as the proposal undergoes the EU's legislative process.

The draft was published by the European Commission in April 2021 and must still undergo scrutiny by the European Parliament and the Council of the European Union. Some amendments have already been formulated, and the draft is still under review by the Parliament. Once the Act has passed this scrutiny, it will be subject to a two-year implementation period.

Finally, one question remains to be answered: who shall oversee and control the Act's implementation? It is foreseen that national supervisory authorities will be established in each EU member state. Furthermore, the AIA aims to establish a special European AI Board made up of representatives of both the member states and the European Commission, which will also chair it. Similar to the EDPB, this Board shall have the power to issue opinions and recommendations and to ensure the consistent application of the regulation throughout the EU.
