6. April 2021
On March 30th, 2021, EU Justice Commissioner Didier Reynders and Yoon Jong In, Chairperson of the Personal Information Protection Commission of the Republic of Korea, announced the successful conclusion of adequacy talks between the EU and the Republic of Korea (“South Korea”). These adequacy discussions began in 2017, and from the start there was a high level of convergence between the EU and the Republic of Korea on data protection issues, which has since been enhanced by additional safeguards that further strengthen the level of protection in South Korea. Recently, amendments to South Korea’s Personal Information Protection Act (“PIPA”) took effect, strengthening the investigative and enforcement powers of South Korea’s data protection authority, the Personal Information Protection Commission (“PIPC”).
This adequacy decision is based on Art. 45 GDPR. Article 45(3) GDPR empowers the EU Commission to adopt an implementing act determining that a non-EU country ensures an “adequate level of protection”, meaning a level of protection for personal data that is essentially equivalent to the level of protection within the EU. Once a non-EU country has been found to provide an “adequate level of protection”, transfers of personal data from the EU to that country can take place without further requirements. South Korea will be the 13th country to which personal data may be transferred on the basis of an adequacy decision. The decision will cover both commercial providers and the public sector, enabling free and secure data flows between the EU and the Republic of Korea and complementing the EU-Republic of Korea Free Trade Agreement.
Before data can flow freely, the EU Commission must complete the procedure for adopting its adequacy finding: the European Data Protection Board will issue an opinion, and a committee composed of representatives of the EU member states must give its approval. Only then may the EU Commission adopt the adequacy decision.
31. March 2021
The ICO is planning to update its anonymisation and pseudonymisation guidance, as announced in a blog post by Ali Shah, the ICO’s Head of Technology Policy, on March 19th, 2021. He emphasizes the important role of sharing personal data in a digital economy, citing the healthcare and financial sectors as examples: in healthcare, data sharing could improve patient care, and in the financial sector, it could help prevent money laundering and protect individuals from fraud.
Last year, the ICO published its Data Sharing Code of Practice. The intention of the Code, according to Information Commissioner Elizabeth Denham CBE, is “to give individuals, businesses and organisations the confidence to share data in a fair, safe and transparent way (…)”. Shah calls the Data Sharing Code a milestone, not a conclusion, stating that the ICO’s ongoing work shall lead to more clarity and advice regarding lawful data sharing.
He names several key topics that the ICO intends to explore when updating the anonymisation and pseudonymisation guidance. Among others, these include:
- “Anonymisation and the legal framework – legal, policy and governance issues around the application of anonymisation in the context of data protection law”
- “Guidance on pseudonymisation techniques and best practices”
- “Accountability and governance requirements in the context of anonymisation and pseudonymisation, including data protection by design and DPIAs”
- “Guidance on privacy enhancing technologies (PETs) and their role in safe data sharing”
- “Technological solutions – exploring possible options and best practices for implementation”
It is to be welcomed that apparently not only the legal side will be explored, but also technical aspects: designing and implementing systems with privacy enhancing technologies (PETs) and data protection by design in mind can contribute to compliance with data protection laws at the technical level, and thus at an early stage of processing.
The ICO plans to publish the guidance chapter by chapter, asking industry, academia and other key stakeholders to present their views and encouraging them to give insights and feedback, so that the ICO gains a better understanding of where the guidance can be targeted most effectively.
In recent years, Virtual Voice Assistants (VVA) have enjoyed increasing popularity among technophile consumers. VVAs are integrated into modern smartphones, like Siri on Apple or Google Assistant on Android mobile devices, but can also be found in separate terminal devices, like Alexa on the Amazon Echo. With Smart Homes trending, VVAs are finding their way into many homes.
However, in light of their general mode of operation and their specific usage, VVAs potentially have access to a large amount of personal data. They furthermore use new technologies such as machine learning and artificial intelligence in order to improve their services.
As both private households and corporate businesses increasingly use VVAs and questions on data protection arise, the European Data Protection Board (EDPB) sought to provide guidance to the relevant data controllers. To this end, the EDPB published guidelines on Virtual Voice Assistants earlier this month.
In its guidelines, the EDPB specifically addresses VVA providers and VVA application developers. It encourages them to take data protection considerations into account when designing their VVA service, as laid out by the principle of data protection by design and by default under Art. 25 GDPR. The EDPB suggests, for example, that controllers could fulfil their information obligations pursuant to Art. 13/14 GDPR using voice-based notifications if the VVA works with a screenless terminal device. VVA designers could also enable users to initiate a data subject request through easy-to-follow voice commands.
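To make the latter suggestion concrete, here is a minimal sketch of how a VVA backend might map recognised voice intents to data subject requests. The intent names and the handler are hypothetical illustrations of the design principle, not part of the EDPB guidelines:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical mapping of recognised voice intents to GDPR data subject rights.
INTENT_TO_RIGHT = {
    "tell_me_my_data": "Access (Art. 15 GDPR)",
    "delete_my_data": "Erasure (Art. 17 GDPR)",
    "export_my_data": "Portability (Art. 20 GDPR)",
}

@dataclass
class DataSubjectRequest:
    user_id: str
    right: str
    received_at: datetime

def handle_voice_intent(user_id: str, intent: str) -> Optional[DataSubjectRequest]:
    """Turn an easy-to-follow voice command into a logged data subject request."""
    right = INTENT_TO_RIGHT.get(intent)
    if right is None:
        return None  # not a data protection intent; handled elsewhere
    return DataSubjectRequest(user_id, right, datetime.now(timezone.utc))

print(handle_voice_intent("user-42", "delete_my_data"))
```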
Moreover, the EDPB states that, in its opinion, providing VVA services will require a Data Protection Impact Assessment according to Art. 35 GDPR. The guidelines also give further advice on complying with general data protection principles and are open for public consultation until April 23rd, 2021.
29. March 2021
Microsoft’s Exchange Servers are exposed to an ever-increasing number of attacks. This is the second major cyberattack on Microsoft in recent months, following the so-called SolarWinds hack (please see our blog post). The new attacks are based on vulnerabilities that have been in the code for some time but have only recently been discovered.
In a blog post published on March 2nd, 2021, Microsoft explains the hack and the four vulnerabilities that were found. The first vulnerability allows attackers to gain access to a Microsoft Exchange Server, the second allows them to execute their own code on the system, and the third and fourth give them write access to arbitrary files on the server. Microsoft Exchange Server versions 2019, 2016, 2013 and 2010 are affected, and Microsoft released a security update for all of them on March 2nd, even though support for Microsoft Exchange Server 2010 ended in October 2020.
Reportedly, Microsoft was informed about the vulnerabilities in January. Since then, a growing number of hacker groups have started to use the exploit. The initial campaign is attributed to HAFNIUM, a group believed to be state-sponsored and operating out of China. According to Microsoft, the vulnerabilities had been in the code for many years without being discovered; only recently did Microsoft become aware of them and begin working on fixes. Microsoft shared information on the vulnerabilities through the Microsoft Active Protections Program (MAPP), in which it shares information with a group of around 80 security companies. The attacks began shortly after Microsoft started working to resolve the vulnerabilities, and there are many similarities between the code Microsoft shared through MAPP and the code the attackers are using.
In an article about the recently published one-click Exchange On-premises Mitigation Tool (EOMT), Microsoft developers describe how admins can secure Exchange servers against the current attacks within a very short time. The tool only serves as an initial protective measure: for comprehensive protection, the available security updates must be installed. In addition, it must be checked whether the attackers have already exploited the gaps to leave behind backdoors and malware, because the updates close the gaps but do not remove an infection that has already occurred. Hackers often do not use such gaps for an immediate attack, but to gain access later, for example for large-scale blackmail.
Under the General Data Protection Regulation (GDPR), organizations affected by an attack on personal data must, in certain circumstances, report such an incident to the competent supervisory authority and possibly notify the affected individuals. Even after a successful patch, it should be kept in mind that affected organizations were vulnerable in the meantime. Pursuant to Art. 33 GDPR, system compromises that may affect personal data and result in a risk to data subjects must be notified to the competent supervisory authority. For such a notification, the time of discovery of the security breach, its origin, the possible scope of the personal data affected, and the first measures taken must be documented.
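As a rough illustration of the documentation points just listed, here is a minimal sketch of an internal incident record. The field names and the example values are our own, not prescribed by Art. 33 GDPR:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class BreachRecord:
    """Internal documentation for an Art. 33 GDPR notification (illustrative field names)."""
    discovered_at: datetime       # time the security breach was discovered
    origin: str                   # origin of the breach
    affected_data: str            # possible scope of the personal data affected
    first_measures: List[str] = field(default_factory=list)  # initial containment steps

# Hypothetical example entry:
record = BreachRecord(
    discovered_at=datetime(2021, 3, 5, 14, 30),
    origin="compromised on-premises Exchange server",
    affected_data="mailbox contents of approx. 1,200 employees",
    first_measures=["installed security updates", "ran EOMT", "scanned for web shells"],
)
```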
18. March 2021
Personal health data are considered a special category of personal data under Art. 9 GDPR and are therefore given special protection. A group of IT experts, including members of the German Chaos Computer Club (CCC), has now revealed security gaps in software for test centres, through which more than 136,000 COVID-19 test results of more than 80,000 data subjects were apparently left unprotected on the internet for weeks.
The IT security experts’ findings concern the software “SafePlay” by the Austrian company Medicus AI. Many test centres use this software to allocate appointments and to make test results digitally available to those tested. In fact, more than 100 test centres and mobile test teams in Germany and Austria are affected by the recent data breach. These include public facilities in Munich, Berlin and Mannheim as well as fixed and temporary testing stations in companies, schools and daycare centres.
To view other people’s test results unlawfully, one only needed to create an account for a COVID-19 test. The URL for the test result contained the sequential number of the test; by simply counting this number up or down, the “test certificates” of other people became freely accessible. In addition to the test result, a certificate also contained the name, date of birth, private address, nationality and ID number of the person concerned.
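The flaw described above is a classic insecure direct object reference: a sequential, guessable identifier in the URL. The sketch below contrasts the vulnerable pattern with a safer one based on unguessable random tokens; it is an illustration, not Medicus AI’s actual code, and the domain is made up:

```python
import secrets

BASE = "https://testcentre.example/results"

def result_url_insecure(test_number: int) -> str:
    # Vulnerable: anyone can count the number up or down to reach
    # other people's test certificates.
    return f"{BASE}/{test_number}"

def result_url_secure() -> str:
    # Safer: a 256-bit random token that cannot practically be guessed.
    # The token must be stored server-side and linked to exactly one result.
    token = secrets.token_urlsafe(32)
    return f"{BASE}/{token}"

print(result_url_insecure(136001))  # .../results/136001 -> 136002 is one guess away
print(result_url_secure())          # .../results/<unguessable token>
```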
It remains unresolved whether the vulnerabilities were exploited prior to their discovery by the CCC. The CCC notified both Medicus AI and the data protection authorities about the leak, which led to a quick response by the company. However, IT experts and privacy-focused NGOs commented that Medicus AI acted irresponsibly and with gross negligence with respect to its security measures, leading to the potential disclosure of an enormous amount of sensitive personal health data.
17. March 2021
On March 2nd, 2021, Virginia’s Governor, Ralph Northam, signed the Consumer Data Protection Act into law without any further amendments.
This makes Virginia the second US state to enact a major privacy law, after California, whose CCPA was enacted in 2018. When the bill passed to the Senate, there was debate that it was flawed because it does not include a private right of action and leaves all enforcement to the Office of the Attorney General. This caused some senators to oppose the bill; however, it was ultimately passed by a vote of 32 to 7. The Consumer Data Protection Act will take effect on January 1st, 2023.
The bill establishes a comprehensive framework for controlling and processing the personal data of Virginia residents. It provides Virginia residents with certain rights with respect to their personal data, including rights of access, correction, deletion and portability, the right to opt out of certain processing operations, and the right to appeal a controller’s decision regarding a rights request. The bill further sets out requirements relating to the principles of data minimization, processing limitations, data security, non-discrimination, third-party contracting and data protection assessments, and imposes certain requirements directly on entities that act as processors of data on behalf of a controller.
However, the law also includes a number of entity-level exemptions, such as for financial institutions subject to the Gramm-Leach-Bliley Act, as well as some data- or context-specific exemptions, such as an exemption for HR-related data processing.
The Attorney General’s office, as the enforcing entity, has to provide 30 days’ notice of any violation and give the controller an opportunity to cure it. If a controller does not comply and leaves the violation uncured, the Attorney General can file an action seeking $7,500 per violation.
16. March 2021
The Information Commissioner’s Office (ICO) has sanctioned two firms for sending unlawful nuisance text messages to their customers. The ICO took notice after receiving several complaints from people affected; about one of the companies alone, it received a total of 10,000 complaints.
The two companies sent the unwanted text messages during the Corona pandemic and have now been fined a total of £330,000 by the UK data protection authority.
Leads Works Ltd.
One of the companies, West Sussex-based Leads Works Ltd, sent more than 2.6 million text messages to consumers without obtaining valid consent. Between 26 May and 26 June, the authorities received more than 10,000 complaints.
In addition, Leads Works Ltd has received an enforcement notice from the ICO requiring it to stop sending unlawful direct marketing messages.
Valca Vehicle Ltd
Valca Vehicle Ltd, a company based in Manchester, has been fined £80,000. Between June and July 2020, the company sent over 95,000 text messages, likewise without the appropriate consent of those affected. The company has been ordered to stop sending further text messages without consent and has also been criticised for using the pandemic as an excuse for its actions.
15. March 2021
Google has announced that it will stop the use of third-party cookies in its Chrome browser and proclaims that it will not implement other similar technologies that could track individuals as they surf the web.
Cookies are small pieces of data used on almost every website. They are automatically stored by the browser when a user visits a website and from then on are sent back to the website operator with every request. From this data, companies can create profiles of the user and personalize advertising based on the data collected. Originally, cookies were intended to give web browsers a “memory”: with cookies, online shops save shopping carts and users can stay logged in to websites.
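As a brief illustration of the mechanism: the server sets a cookie via an HTTP response header, and the browser returns it with every subsequent request to that site. A minimal sketch using Python’s standard library; the cookie name and domain are made up:

```python
from http import cookies

# Server side: build the Set-Cookie header for an HTTP response.
jar = cookies.SimpleCookie()
jar["session_id"] = "abc123"                  # e.g. keeps a cart or login alive
jar["session_id"]["domain"] = "shop.example"  # first-party: the site being visited
print(jar.output())
# -> Set-Cookie: session_id=abc123; Domain=shop.example

# A "third-party" cookie is one set by a different domain than the site in the
# address bar, e.g. by an embedded ad server; this is the kind Chrome will
# stop supporting.
```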
In a blog post published on March 3rd, 2021, David Temkin, Director of Product Management, Ads Privacy and Trust at Google, announced that the next Google Chrome update in April will allow cookie tracking to be turned off completely. In Google Chrome, only so-called “first-party cookies” of the respective website operator will remain permitted. The decision will have lasting consequences, as Google Chrome has been the most widely used browser since 2012. The move comes after Google’s competitors Apple and Mozilla announced similar mechanisms for their Safari and Firefox browsers (please see our blog post). Temkin writes:
Keeping the internet open and accessible for everyone requires all of us to do more to protect privacy — and that means an end to not only third-party cookies, but also any technology used for tracking individual people as they browse the web.
Since personalized, data-driven advertising, and thus the tracking behind it, is Google’s core business, Google will stop neither the data collection nor the personalization of advertising. Instead of individual profiles, Google will form cohorts of people with similar interests, to which advertising will be tailored. These cohorts are said to be broad enough to preserve the anonymity of individual users. The concept is called “Federated Learning of Cohorts” (FLoC). FLoC-based advertising in Google Ads is said to start in the second quarter of 2021.
Data will then be collected and stored locally by the browser rather than via cookies. Every URL visited and every piece of content accessed can then be evaluated by Google’s targeting algorithm. Algorithms on the end device calculate hash values from the browser history, for example, which enable assignment to such a cohort. Google sends a selection of ads to the browser, which picks the ads matching the cohort and shows them to the user.
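The following sketch illustrates the underlying idea: a locality-sensitive hash (the FLoC proposal experimented with SimHash) computed over the browsing history, so that users with similar histories receive similar cohort IDs. This is a strong simplification for illustration, not Google’s actual implementation, and the domains are made up:

```python
import hashlib
from typing import List

BITS = 16  # size of the cohort id in this toy example

def _domain_hash(domain: str) -> int:
    """Stable 16-bit hash of a single visited domain."""
    return int.from_bytes(hashlib.sha256(domain.encode()).digest()[:2], "big")

def simhash_cohort(history: List[str]) -> int:
    """SimHash: similar browsing histories yield similar cohort ids."""
    counts = [0] * BITS
    for domain in history:
        h = _domain_hash(domain)
        for i in range(BITS):
            counts[i] += 1 if (h >> i) & 1 else -1
    # Each bit of the cohort id is the majority bit across the whole history.
    return sum(1 << i for i in range(BITS) if counts[i] > 0)

# Users with overlapping histories receive similar cohort ids:
print(simhash_cohort(["news.example", "shop.example", "sports.example"]))
print(simhash_cohort(["news.example", "shop.example", "recipes.example"]))
```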
While third-party cookies are gradually becoming obsolete, Google is replacing them with a system it can completely control itself. This will make things more difficult for competitors such as Facebook Ads, which will have to rely primarily on first-party data and on data obtained from cookies in smaller browsers.
9. March 2021
On March 3rd, POLITICO reported that the French government is seeking to bypass the ruling of the Court of Justice of the European Union (CJEU) that limits member states’ surveillance of phone and internet data, holding that governments may only retain mass amounts of data when facing a “serious threat to national security”.
According to POLITICO, the French government has requested the country’s highest administrative court, the Council of State, not to follow the CJEU’s ruling in the matter.
In October of last year, the CJEU ruled that several national data retention rules were not compliant with EU law. This ruling covered the retention periods set forth by the French government in matters of national security.
The French case in question pits the government against the digital rights NGOs La Quadrature du Net and Privacy International. After the CJEU’s ruling, the matter is now in the hands of the Council of State in France, which will have to decide on it.
A hearing date has not yet been set; however, POLITICO’s sources state that the French government is trying to bypass the CJEU’s ruling by arguing that it goes against the country’s “constitutional identity”. This argument, first used back in 2006, is seldom invoked, but it can be relied on to avoid applying EU law at the national level.
In addition, the French government accuses the CJEU of having ruled outside its competence, as matters of national security remain solely a national competence.
The French government did not want to comment on the ongoing proceedings; however, it has a history of refusing to implement EU court rulings in national law.
25. February 2021
The business model of the US company Clearview AI is coming under increasing pressure worldwide. The company has collected billions of facial photos from publicly available sources, especially from social networks such as Facebook, Instagram, YouTube and similar services. Data subjects were not informed of the collection and use of their facial photos. Using the photos, Clearview AI created a comprehensive database and used it to develop an automated facial recognition system. Customers of this system are primarily law enforcement agencies and prosecutors in the US, but companies can also make use of it. In total, Clearview AI has around 2,000 customers worldwide and a database of around 3 billion images.
After a comprehensive investigation by the New York Times drew attention to the company in January 2020, opposition to its business practice is now also being voiced by the data protection authorities of various countries.
The Hamburg Data Protection Commissioner had already issued an order against Clearview AI in January 2021, requiring the company to delete the biometric data of a Hamburg citizen who had complained to the authority about the storage. The decision was based on the grounds that there was no legal basis for processing the sensitive data and that the company was engaging in profiling by collecting photos over a longer period of time.
Now, several Canadian data protection authorities have also deemed Clearview AI’s actions illegal. In a statement, the Canadian Privacy Commissioner describes the activities as mass surveillance and an affront to the privacy rights of data subjects. The Canadian federal authority published a final report on its investigation into the Clearview AI case, finding that the company had violated several provisions of Canadian federal privacy law.
Notably, the Canadian authorities consider the data collection unlawful even if Clearview AI were to obtain consent from the data subjects, arguing that the very purpose of the data processing is unlawful. They demand that Clearview AI cease its service in Canada and delete the data already collected from Canadian citizens.
The pressure on Clearview AI is also growing because the companies from whose platforms the data was collected oppose the practice. In addition, the association “noyb”, founded by data protection activist Max Schrems, is looking into Clearview AI, and various European data protection authorities have announced that they will take action against the facial recognition system.