Category: UK

EDPS publishes opinion on future EU-UK partnership

3. March 2020

On 24 February 2020, the European Data Protection Supervisor (EDPS) published an opinion on the opening of negotiations for the future partnership between the EU and the UK with regard to the protection of personal data.

In his opinion, the EDPS stresses the importance of commitments to fully respect fundamental rights in the envisaged comprehensive partnership. With regard to the protection of personal data in particular, the partnership should uphold the high level of protection provided by the EU’s data protection rules.

With respect to the transfer of personal data, the EDPS further expresses support for the EU Commission’s recommendation to work towards the adoption of adequacy decisions for the UK if the relevant conditions are met. However, the Commission must ensure that the UK does not lower its data protection standards below the EU level after the Brexit transition period. Lastly, the EDPS recommends that the EU institutions also prepare for a scenario in which no adequacy decisions are in place by the end of the transition period on 31 December 2020.

UK: Betting companies had access to data of millions of children

28. January 2020

In the UK, betting companies gained access to data on 28 million children under 14 and adolescents. The data was stored in a government database intended for learning purposes; access to the platform is granted by the government. A company that had been given access is said to have unlawfully passed it on to another company, which in turn allowed the betting companies access. The betting providers used the access, among other things, to verify age information online. The company accused of passing on the access denies the allegations but has not yet made any more specific statements.

The British Department for Education has called the situation unacceptable. All access points have been closed and the cooperation has been terminated.

Category: Data breach · General · UK

National Retailer fined £500,000 by ICO

10. January 2020

The Information Commissioner’s Office (ICO) – the UK’s data protection authority – has fined the national retailer ‘DSG Retail Limited’ £500,000 for failing to secure the information of at least 14 million people after a computer system was compromised as a result of a cyberattack.

An investigation by the ICO concluded that malware had been installed and was collecting personal data between July 2017 and April 2018, when the attack was detected. Due to DSG’s failures, the attacker had access to 5.6 million payment card details and further personal data, including full names, postcodes and email addresses.

The fine was imposed for poor security arrangements and a failure to take adequate steps to protect personal data; it is based on the Data Protection Act 1998.

The ICO’s Director of Investigations, Steve Eckersley, said:

“Our investigation found systemic failures in the way DSG Retail Limited safeguarded personal data. It is very concerning that these failures related to basic, commonplace security measures, showing a complete disregard for the customers whose personal information was stolen. The contraventions in this case were so serious that we imposed the maximum penalty under the previous legislation, but the fine would inevitably have been much higher under the GDPR.”

The ICO considered the personal freedom of DSG’s customers to be at risk, as affected customers could face financial theft and identity fraud.

Category: Cyber security · Data breach · UK

NIST examines the effect of demographic differences on face recognition

31. December 2019

As part of its Face Recognition Vendor Test (FRVT) program, the U.S. National Institute of Standards and Technology (NIST) conducted a study that evaluated face recognition algorithms submitted by industry and academic developers for their ability to perform various tasks. The study evaluated 189 software algorithms submitted by 99 developers and focused on how well each algorithm performs one of two tasks that are among the most common applications of face recognition.

The first task is “one-to-one” matching, i.e. confirming that a photo matches a different photo of the same person in a database; this is used, for example, when unlocking a smartphone or checking a passport. The second task is “one-to-many” matching, i.e. determining whether the person in a photo matches any photo in a database; this is used to identify a person of interest.
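As a rough illustration of the difference between the two tasks, here is a minimal sketch in Python. It assumes faces have already been converted into fixed-length embedding vectors by some face recognition model; the similarity measure, threshold value and gallery structure are illustrative assumptions, not details of any algorithm tested by NIST.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one(probe, reference, threshold=0.6):
    """Verification: does the probe photo show the same person as the reference photo?"""
    return cosine_similarity(probe, reference) >= threshold

def one_to_many(probe, gallery, threshold=0.6):
    """Identification: return the best-matching identity in the gallery, or None."""
    best_id, best_score = None, threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id

# Example with made-up 4-dimensional "embeddings":
passport_photo = np.array([0.10, 0.90, 0.20, 0.40])
gate_photo = np.array([0.15, 0.85, 0.25, 0.35])
print(one_to_one(gate_photo, passport_photo))                    # verification
print(one_to_many(gate_photo, {"person_42": passport_photo}))    # identification
```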

A special focus of the study was the performance of the individual algorithms with respect to demographic factors. Only a few previous studies had examined demographic effects for one-to-one matching; for one-to-many matching, there had been none.

To evaluate the algorithms, the NIST team used four photo collections containing 18.27 million images of 8.49 million people. All were taken from operational databases of the State Department, the Department of Homeland Security and the FBI. The team did not use images taken directly from Internet sources such as social media or from video surveillance. The photos in the databases contained metadata indicating the age, gender, and either race or country of birth of each person.

The study found that the results ultimately depend on the algorithm at the heart of the system, the application that uses it, and the data it is fed with. Still, the majority of face recognition algorithms exhibited demographic differences. In one-to-one matching, the algorithms more often rated photos of two different people as the same person if they were Asian or African-American than if they were white. In algorithms developed in the United States, the same error also occurred more frequently when the person was a Native American. In contrast, algorithms developed in Asia did not show such a significant difference in one-to-one matching results between Asian and Caucasian faces. These results suggest that algorithms can be trained to produce accurate face recognition results across demographic groups by using a sufficiently wide range of training data.
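One way to make the notion of a “demographic difference” concrete is to compare false match rates per group, i.e. the share of comparisons between two different people that an algorithm nevertheless accepts as a match. The following sketch uses entirely hypothetical scores and an arbitrary threshold; it only illustrates the calculation, not NIST’s actual methodology.

```python
import numpy as np

def false_match_rate(impostor_scores, threshold):
    """Share of comparisons between *different* people that are
    nevertheless accepted as a match at the given threshold."""
    return float(np.mean(np.asarray(impostor_scores) >= threshold))

# Hypothetical impostor similarity scores, grouped by demographic group.
impostor_scores_by_group = {
    "group_a": [0.31, 0.62, 0.45, 0.70, 0.28],
    "group_b": [0.22, 0.35, 0.41, 0.30, 0.58],
}

for group, scores in impostor_scores_by_group.items():
    print(group, false_match_rate(scores, threshold=0.6))
```

A large gap between the per-group rates at the same threshold is what the study describes as a demographic difference.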

Health data transferred to Google, Amazon and Facebook

18. November 2019

Websites specialising in health topics transfer information about their users to Google, Amazon and Facebook, as the Financial Times reports.

The transferred information is obtained through cookies and includes users’ medical symptoms and conditions.

According to the Financial Times report, the transfer takes place without the express consent of the data subjects, contrary to data protection law in the UK. Beyond the legal obligations in the UK, the website operators’ use of these cookies also contradicts the requirements of the GDPR.

Under the GDPR, the processing of health data falls within Art. 9 GDPR and is subject to a general prohibition: the processing of health data is forbidden unless the data subject has given his or her explicit consent (or another exception applies).

The report is also interesting in light of the CJEU’s cookie judgment (we reported). According to that judgment, consent must be obtained for the use of each cookie.

Accordingly, the practice of the website operators will (hopefully) change in order to comply with the new case law.

 

USA and UK sign Cross Border Data Access Agreement for Criminal Electronic Data

10. October 2019

The United States and the United Kingdom have entered into the first-of-its-kind CLOUD Act Data Access Agreement, which will allow both countries’ law enforcement authorities to demand authorized access to electronic data relating to serious crime. Under the Agreement, the authorities of each country are permitted to ask tech companies based in the other country for electronic data directly and without legal barriers.

The bilateral Agreement is based on the U.S. Clarifying Lawful Overseas Use of Data Act (CLOUD Act), which came into effect in March 2018 and aims to improve procedures for U.S. and foreign investigators to obtain electronic information held by service providers in the other country. In light of the growing number of mutual legal assistance requests for electronic data from U.S. service providers, the current access process may take up to two years. The Data Access Agreement can reduce that time considerably by allowing more efficient and effective access to the data needed, while protecting the privacy and civil liberties of the data subjects.

The CLOUD Act focuses on updating legal frameworks to respond to the evolving technology of electronic communications and service systems. It further enables the U.S. and other countries to enter into mutual executive Agreements in order to use their own legal authorities to access electronic evidence in the other respective country. An Agreement of this kind can only be signed with rights-respecting countries, after the U.S. Attorney General has certified to the U.S. Congress that their laws have robust substantive and procedural protections for privacy and civil liberties.

The Agreement between the U.K. and the U.S.A. further assures providers that the requested disclosures are compatible with data protection laws in both respective countries.

In addition to the Agreement with the United Kingdom, talks between the United States and Australia were reported on Monday regarding negotiations for such an Agreement between the two countries. Negotiations have also been held between the U.S. and the European Commission, representing the European Union, with regard to a Data Access Agreement.

Category: General · UK · USA

CNIL updates its FAQs for the case of a no-deal Brexit

24. September 2019

The French data protection authority “CNIL” updated its existing catalogue of questions and answers (“FAQs”) to inform about the impact of a no-deal Brexit and how controllers should prepare for the transfer of data from the EU to the UK.

As things stand, the United Kingdom will leave the European Union on 1 November 2019. The UK will then be considered a third country for the purposes of the European General Data Protection Regulation (“GDPR”). For this reason, after the exit, data transfer mechanisms will become necessary to transfer personal data from the EU to the UK.

The FAQs recommend five steps that entities should take when transferring data to a controller or processor in the UK to ensure compliance with GDPR:

1. Identify processing activities that involve the transfer of personal data to the United Kingdom.
2. Determine the most appropriate transfer mechanism to implement for these processing activities.
3. Implement the chosen transfer mechanism so that it is applicable and effective as of November 1, 2019.
4. Update your internal documents to include transfers to the United Kingdom as of November 1, 2019.
5. If necessary, update relevant privacy notices to indicate the existence of transfers of data outside the EU and EEA where the United Kingdom is concerned.

CNIL also discusses the GDPR-compliant data transfer mechanisms (e.g., standard contractual clauses, binding corporate rules, codes of conduct) and points out that, whichever one is chosen, it must take effect on 1 November. If controllers choose a derogation permitted under the GDPR, CNIL stresses that it must strictly comply with the requirements of Art. 49 GDPR.

High Court dismisses challenge regarding Automated Facial Recognition

12. September 2019

On 4 September, the High Court of England and Wales dismissed a challenge to the police’s use of Automated Facial Recognition Technology (“AFR”). The court ruled that the use of AFR was proportionate and necessary to meet the legal obligations of the police.

The pilot project AFR Locate was deployed at certain events and in public places where the commission of crimes was considered likely. It can detect up to 50 faces per second; the detected faces are then compared, by means of biometric data analysis, with wanted persons registered in police databases. If no match is found, the images are deleted immediately and automatically.
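A minimal sketch of the kind of watchlist-matching pipeline described above (the function names, similarity measure, threshold and data structures are illustrative assumptions, not details of the actual AFR Locate system):

```python
import numpy as np

def match_against_watchlist(face_embedding, watchlist, threshold=0.6):
    """Compare one detected face embedding against a watchlist of biometric templates.

    Returns the watchlist identifier of the best match above the threshold,
    or None if there is no match."""
    best_id, best_score = None, threshold
    for person_id, template in watchlist.items():
        score = float(np.dot(face_embedding, template) /
                      (np.linalg.norm(face_embedding) * np.linalg.norm(template)))
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id

def process_frame(detected_faces, watchlist):
    """Process all face embeddings detected in one video frame."""
    alerts = []
    for face in detected_faces:
        person_id = match_against_watchlist(face, watchlist)
        if person_id is not None:
            alerts.append(person_id)  # flag the match for review by an officer
        # no match: the captured image is not retained (deleted immediately)
    return alerts
```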

An individual initiated judicial review proceedings after he had not been identified as a wanted person but was likely to have been captured by AFR Locate. He considered this to be unlawful, in particular because of a violation of the right to respect for private and family life under Article 8 of the European Convention on Human Rights (“ECHR”) and of data protection law in the United Kingdom. In his view, the police did not respect the data protection principles. In particular, he argued that this approach violated Section 35 of the Data Protection Act 2018 (“DPA 2018”), which requires the processing of personal data for law enforcement purposes to be lawful and fair. He also argued that the police had failed to carry out an adequate data protection impact assessment (“DPIA”).

The Court stated that the use of AFR affected a person’s rights under Article 8 of the ECHR, as this type of biometric data has a private character in itself. Even though the images were erased immediately, the procedure constituted an interference with Article 8 of the ECHR, since it suffices that the data is stored temporarily.

Nevertheless, the Court found that the police’s action was in accordance with the law, as it falls within the police’s public law powers to prevent and detect criminal offences. The Court also found that the use of the AFR system was proportionate and that the technology was used openly, transparently and with considerable public engagement, thus fulfilling all existing criteria. It was only used for a limited period, for a specific purpose, and its deployment was publicised in advance (e.g. on Facebook and Twitter).

With regard to data protection law, the Court considered that the images of the individuals captured constitute personal data, even if they do not correspond to the lists of persons sought, because the technology has singled them out and distinguished them from others. Nevertheless, the Court held that there was no violation of data protection principles, for the same reasons for which it denied a violation of Article 8 ECHR. It found that the processing fulfilled the conditions of lawfulness and fairness and was necessary for the legitimate interest of the police in the prevention and detection of criminal offences, as required by their public service obligations. The requirement of Sec. 35 (5) DPA 2018 that the processing be strictly necessary was fulfilled, as was the requirement that the processing be necessary for the exercise of the functions of the police.

The last requirement under Sec. 35 (5) DPA 2018 is that a suitable policy document be in place to govern the processing. The Court considered the relevant policy document in this case to be short and incomplete. Nevertheless, it declined to rule on whether the document was adequate and stated that it would leave that assessment to the Information Commissioner’s Office (“ICO”), which is due to publish more detailed guidance.

Finally, the Court found that the impact assessment carried out by the police was sufficient to meet the requirements of Sec. 64 of DPA 2018.

The ICO stated that it would take into account the High Court ruling when finalising its recommendations and guidelines for the use of live face recognition systems.

London’s King’s Cross station facial recognition technology under investigation by the ICO

11. September 2019

As first reported by the Financial Times, London’s King’s Cross station is under fire for making use of a live face-scanning system across its 67-acre site. The site’s developer, Argent, confirmed that the system has been used to ensure public safety, as part of a number of detection and tracking methods used for surveillance at the famous train station. While the site is privately owned, it is widely used by the public and houses various shops, cafes, restaurants, as well as office spaces with tenants such as Google.

The controversy surrounding the technology and its legality stems from the fact that it records everyone within its range without their consent, analyzing their faces and comparing them to a database of wanted criminals, suspects and persons of interest. While the developer Argent defended the technology, it has not yet explained what exactly the system is, how it is used and how long it has been in place.

A day before the ICO launched its investigation, a letter from King’s Cross Chief Executive Robert Evans reached the Mayor of London, Sadiq Khan, explaining that the technology matches faces against a watchlist of flagged individuals. If footage is unmatched, it is blurred out and deleted; in case of a match, it is shared only with law enforcement. The Metropolitan Police Service has stated that it supplied images for a database used by the system to carry out facial scans, though it claims not to have done so since March 2018.

Despite the explanation and the explicit statements that the software complies with England’s data protection laws, the Information Commissioner’s Office (ICO) has launched an investigation into the technology and its use in the private sector. Businesses would need to demonstrate explicitly that the use of such surveillance technology is strictly necessary and proportionate for their legitimate interests and public safety. In her statement, Information Commissioner Elizabeth Denham said that she is deeply concerned, since “scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all,” especially if it is being done without their knowledge.

The controversy has sparked demands for legislation on facial recognition, igniting a dialogue about new technologies and the need to future-proof against the as yet unknown privacy issues they may cause.

Category: GDPR · General · UK

ICO releases a draft Code of Practice to consult on the Use of Personal Data in Political Campaigning

14. August 2019

The United Kingdom’s Information Commissioner’s Office (ICO) plans to consult on a new framework code of practice regarding the use of personal data in political campaigns.

The ICO states that in any democratic society it is vital for political parties, candidates and campaigners to be able to communicate effectively with voters. Equally vital, though, is that all organisations involved in political campaigning use personal data in a transparent and lawful way that is understood by the people.

With the rise of the internet, political campaigning has become increasingly sophisticated and innovative. Campaigns now use new technologies and techniques to understand and target their voters, drawing on social media, the electoral register or the screening of names for ethnicity and age. In a statement from June, the ICO addressed the risk that comes with this innovation, which, intended or not, can undermine the democratic process through hidden manipulation based on the processing of personal data that people do not understand.

In this light, the ICO considers its current guidance outdated, as it has not been revised since the introduction of the General Data Protection Regulation (GDPR) and does not reflect modern campaigning practices. However, the framework does not establish new requirements for campaigners; instead, it aims to explain and clarify data protection and electronic marketing laws as they already stand.

Before drafting the framework, the Information Commissioner launched a call for views in October 2018 to gather input from a wide range of people and organisations. The draft framework is intended to take into account the responses the ICO received in that process.

The draft framework code of practice, which is intended to form the basis of a statutory code of practice if the relevant legislation is introduced, is now out for public consultation and will remain open until October 4th.
