High Court dismisses challenge regarding Automated Facial Recognition

12. September 2019

On 4 September, the High Court of England and Wales dismissed a challenge to the police’s use of Automated Facial Recognition Technology (“AFR”). The court ruled that the use of AFR was proportionate and necessary to meet the legal obligations of the police.

The pilot project AFR Locate was deployed at certain events and in public places where the commission of crimes was considered likely. The system can detect up to 50 faces per second; the captured faces are then compared, using biometric analysis, against watchlists of wanted persons held in police databases. If no match is found, the images are deleted immediately and automatically.
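In outline, such a screen-and-discard pipeline might look like the following minimal sketch (the function names, the embedding step and the threshold are hypothetical illustrations, not the actual AFR Locate implementation):

```python
import numpy as np

MATCH_THRESHOLD = 0.8  # hypothetical similarity threshold, not AFR Locate's

def embed_face(face_image: np.ndarray) -> np.ndarray:
    """Stand-in for a biometric template extractor (a real system would use
    a trained face-recognition model producing a fixed-length template)."""
    v = face_image.astype(float).ravel()
    return v / (np.linalg.norm(v) + 1e-9)

def screen_frame(detected_faces: list, watchlist: np.ndarray) -> list:
    """Compare each detected face against watchlist templates; return alerts
    for matches and retain nothing else, mirroring the immediate, automatic
    deletion of non-matching images described above."""
    alerts = []
    for face in detected_faces:
        template = embed_face(face)
        scores = watchlist @ template  # cosine similarity (rows are unit templates)
        best = int(np.argmax(scores))
        if scores[best] >= MATCH_THRESHOLD:
            alerts.append((best, float(scores[best])))
        # non-matching faces simply go out of scope here and are never stored
    return alerts
```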

An individual initiated judicial review proceedings after he was not identified as a wanted person but was likely to have been captured by AFR Locate. He considered this to be unlawful, in particular as a violation of the right to respect for private and family life under Article 8 of the European Convention on Human Rights (“ECHR”) and of data protection law in the United Kingdom. In his view, the police had not respected the data protection principles; in particular, the practice violated Section 35 of the Data Protection Act 2018 (“DPA 2018”), which requires the processing of personal data for law enforcement purposes to be lawful and fair. He also argued that the police had failed to carry out an adequate data protection impact assessment (“DPIA”).

The Court held that the use of AFR engaged the claimant’s rights under Article 8 of the ECHR, since biometric data of this kind is inherently private. Even though the images were erased immediately, the procedure constituted an interference with Article 8 of the ECHR, as it suffices that the data is stored temporarily.

Nevertheless, the Court found that the police’s actions were in accordance with the law, as they fall within the police’s public law powers to prevent and detect criminal offences. The Court also found that the use of the AFR system was proportionate and that the technology was used openly, transparently and with considerable public engagement, thus fulfilling all existing criteria: it was deployed only for a limited period and a specific purpose, and each deployment was publicised in advance (e.g. on Facebook and Twitter).

With regard to data protection law, the Court considered that the captured images of individuals constitute personal data even where they do not correspond to the watchlists, because the technology has singled those individuals out and distinguished them from others. Nevertheless, the Court held that there was no violation of the data protection principles, for the same reasons for which it denied a violation of Art. 8 ECHR. The Court found that the processing fulfilled the conditions of lawfulness and fairness and was necessary for the legitimate interest of the police in the prevention and detection of criminal offences, as required by their public service obligations. The requirement of Sec. 35 (5) DPA 2018 that the processing be strictly necessary was fulfilled, as was the requirement that the processing be necessary for the exercise of the functions of the police.

The last requirement under Sec. 35 (5) of the DPA 2018 is that an appropriate policy document be in place to govern the processing. The Court considered the relevant policy document in this case to be short and incomplete. Nevertheless, it declined to rule on whether the document was adequate, stating that it would leave that judgment to the Information Commissioner’s Office (“ICO”), which intends to publish more detailed guidance.

Finally, the Court found that the impact assessment carried out by the police was sufficient to meet the requirements of Sec. 64 of DPA 2018.

The ICO stated that it would take into account the High Court ruling when finalising its recommendations and guidelines for the use of live face recognition systems.

London’s King’s Cross station facial recognition technology under investigation by the ICO

11. September 2019

First reported by the Financial Times, London’s King’s Cross station has come under fire for using a live face-scanning system across its 67-acre site. Site developer Argent confirmed that the system has been used to ensure public safety, as one of a number of detection and tracking methods employed for surveillance at the famous train station. While the site is privately owned, it is widely used by the public and houses various shops, cafes and restaurants, as well as office space for tenants such as Google.

The controversy over the technology and its legality stems from the fact that it records everyone within its range without their consent, analyzing their faces and comparing them to a database of wanted criminals, suspects and persons of interest. While Argent has defended the technology, it has not yet explained what exactly the system is, how it is used, or how long it has been in place.

A day before the ICO launched its investigation, a letter from King’s Cross Chief Executive Robert Evans reached London Mayor Sadiq Khan, explaining that the technology matches faces against a watchlist of flagged individuals. In effect, unmatched footage is blurred out and deleted; in case of a match, the footage is shared only with law enforcement. The Metropolitan Police Service has stated that it supplied images to the system’s database for facial scans, though it claims not to have done so since March 2018.

Despite the explanation and the repeated assurances that the software complies with England’s data protection laws, the Information Commissioner’s Office (ICO) has launched an investigation into the technology and its use in the private sector. Businesses would need to demonstrate explicitly that the use of such surveillance technology is strictly necessary and proportionate for their legitimate interests and for public safety. In her statement, Information Commissioner Elizabeth Denham said that she is deeply concerned, since “scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all,” especially if it is being done without their knowledge.

The controversy has fuelled demands for legislation on facial recognition, igniting a dialogue about new technologies and about future-proofing against the as yet unknown privacy issues they may cause.

Category: GDPR · General · UK

Google strives to reconcile advertising and privacy

27. August 2019

While other browser developers take a critical stance on tracking, Google wants to introduce new standards that keep personalized advertising possible. With the implementation of the “Privacy Sandbox” and the introduction of a new identity management system, the developer of the Chrome browser wants to bring browsers to a uniform level in the processing of user data and protect users’ privacy more effectively.

The suggestions are the first steps of the privacy initiative Google announced in May; so far, Google has published five ideas. For example, browsers are to manage a “Privacy Budget” that gives websites only limited access to user data, so that users can be sorted into an advertising target group without being personally identified. Google also plans to set up central identity service providers that offer limited access to user data via an application programming interface (API) and inform users about the information they have passed on.
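Google has not published a concrete API for these ideas, but the budget concept can be illustrated with a minimal sketch (all names, costs and the limit below are hypothetical): each origin may consume only a fixed amount of identifying information before further requests are refused.

```python
class PrivacyBudget:
    """Per-origin budget capping how much identifying information a site may read."""

    # Hypothetical 'identifiability' costs per data surface.
    COSTS = {"user_agent": 1, "screen_size": 2, "installed_fonts": 5}

    def __init__(self, limit: int = 6):
        self.limit = limit
        self.spent = {}  # origin -> units already consumed

    def request(self, origin: str, surface: str) -> bool:
        """Grant access only while the origin stays within its budget."""
        cost = self.COSTS[surface]
        if self.spent.get(origin, 0) + cost > self.limit:
            return False  # budget exhausted: the site learns nothing more
        self.spent[origin] = self.spent.get(origin, 0) + cost
        return True

budget = PrivacyBudget()
print(budget.request("ads.example", "installed_fonts"))  # True  (5 of 6 used)
print(budget.request("ads.example", "screen_size"))      # False (would exceed 6)
```

The point of the cap is that a site can learn enough to place a user in a broad advertising cohort, but not enough to single that user out.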

Measures like Apple’s Intelligent Tracking Prevention are not in Google’s interest, as Google generates much of its revenue from personalized advertising. In a blog post, Google also argued that blocking cookies encourages opaque techniques such as fingerprinting. Moreover, without the ability to display personalized advertising, the future of publishers, whose costs are covered by advertising, would be jeopardized. Recent studies have shown that publishers’ financing decreases by an average of 52% if advertising loses relevance due to the removal of cookies.

Based on these ideas, the discussion among developers about the future of web browsers and how to deal with users’ privacy should now begin. Google’s long-term goal is a standardization process to which all major browser developers should adhere. So far, Google has had only limited success with similar initiatives.

ICO releases a draft Code of Practice to consult on the Use of Personal Data in Political Campaigning

14. August 2019

The United Kingdom’s Information Commissioner’s Office (ICO) plans to consult on a new framework code of practice regarding the use of personal data in political campaigns.

The ICO states that in any democratic society it is vital for political parties, candidates and campaigners to be able to communicate effectively with voters. Equally vital, though, is that all organisations involved in political campaigning use personal data in a transparent and lawful way that people understand.

With the rise of the internet, political campaigning has become increasingly sophisticated and innovative. Campaigns now use new technologies and techniques to understand and target their voters, drawing on social media, the electoral register, or the screening of names for ethnicity and age. In a statement from June, the ICO addressed the risk that comes with this innovation: intended or not, it can undermine the democratic process through hidden manipulation based on the processing of personal data that people do not understand.

In this light, the ICO acknowledges that its current guidance is outdated, since it has not been updated since the introduction of the General Data Protection Regulation (GDPR) and does not reflect modern campaigning practices. However, the framework does not establish new requirements for campaigners; instead, it aims to explain and clarify data protection and electronic marketing laws as they already stand.

Before drafting the framework, the Information Commissioner launched a call for views in October 2018 to gather input from various people and organisations. The framework is intended to take into account the responses the ICO received in that process.

Intended to form the basis of a statutory code of practice if the relevant legislation is introduced, the draft framework code of practice is now out for public consultation and will remain open until October 4th.

CNIL and ICO publish revised cookie guidelines

6. August 2019

The French data protection authority CNIL as well as the British data protection authority ICO have revised and published their guidelines on cookies.

The two sets of guidelines share several similarities, but also differ in some respects.

Both France and the UK consider the rules that apply to cookies to be equally applicable to any technology that stores or accesses information on a user’s device. In addition, both authorities stress that users must give specific, free and unambiguous consent before cookies are placed. Merely continuing to scroll a website cannot be considered consent. Likewise, obtaining consent through T&Cs is not lawful: this practice violates Art. 7 (2) of the General Data Protection Regulation (GDPR), according to which a request for consent must be presented in a manner clearly distinguishable from other matters, in an intelligible and easily accessible form, using clear and plain language. In addition, all parties who place cookies must be named so that informed consent can be obtained. Finally, both authorities point out that browser settings alone are not a sufficient basis for valid consent.
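As a rough illustration (a purely hypothetical data model, not taken from either guideline), these criteria translate into checks a site operator could run before setting any non-essential cookie:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Hypothetical record of a user's cookie consent."""
    affirmative_action: bool      # e.g. clicked "accept"; scrolling is not enough
    bundled_with_terms: bool      # consent buried in T&Cs is invalid
    from_browser_defaults: bool   # browser settings alone are insufficient
    named_parties: list = field(default_factory=list)

def consent_is_valid(record: ConsentRecord) -> bool:
    """Apply the CNIL/ICO criteria summarised above."""
    return (
        record.affirmative_action
        and not record.bundled_with_terms
        and not record.from_browser_defaults
        and len(record.named_parties) > 0  # every cookie-setting party disclosed
    )

# Scrolling past a banner, with no party named, does not count as consent:
print(consent_is_valid(ConsentRecord(False, False, True)))  # False
```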

With regard to territorial scope, CNIL clarifies that its cookie rules apply only to the processing of cookies within the activities of an establishment of a controller or processor in France, regardless of whether the processing itself takes place in France. The ICO guideline does not address this point.

Cookie walls are considered non-compliant with the GDPR by the French data protection authority, owing to the negative consequences for users who refuse. The ICO, on the other hand, takes the view that consent forced by a cookie wall is probably not valid, but that the GDPR must be balanced against other rights; to that extent, the ICO has not yet taken a clear position.

Regarding analytics cookies, CNIL explains that consent is not always necessary, namely where the cookies satisfy a cumulative list of requirements drawn up by CNIL. The ICO, by contrast, does not exempt even analytics cookies from the consent requirement.

Finally, CNIL notes that companies have six months to comply with the rules. However, this period will only start to run upon publication of a further CNIL statement, which is still pending; CNIL expects it to be finalised during the first quarter of 2020. The ICO does not provide for such a grace period.

Settlement of $13 Million for Google in Street View Privacy Case

30. July 2019

In an attempt to settle long-running litigation over a class action begun in 2010, Google has agreed to pay $13 million over claims that it violated U.S. wiretapping laws. The issue arose from the vehicles used for its Street View mapping project, which captured and collected personal data from private wifi networks along the way.

Street View is a feature that lets users interact with panoramic and detailed images of locations all around the world. The legal action began when several people whose data was collected sued Google after it admitted the cars photographing neighborhoods for Street View had also gathered emails, passwords and other private information from wifi networks in more than 30 countries.

While the company was quick to call this collection of data a mistake, investigators found that the capture of personal data had been intentionally built into the vehicles’ software by Google engineers in order to collect personal data from the networks the cars accessed.

Under the new agreement, Google is required to destroy all data collected via Street View, to refrain from using Street View to collect personal data from wifi networks without consent, and to create web pages and instructions explaining to people how to secure their wireless networks.

Google had already been required to refrain from using and collecting personal data from wifi networks in an earlier settlement in 2013, which raises the question of why it was necessary to include the obligation in the current settlement as well.

Category: Cyber security · General · USA

Hearing on the legal challenge to the SCCs and the EU-US Privacy Shield before the CJEU

17. July 2019

On Tuesday last week, the European Court of Justice (CJEU) held the hearing in Case C-311/18, commonly known as “Schrems II”, following a complaint to the Irish Data Protection Commission (DPC) by Maximilian Schrems about the transfer of his personal data from Facebook Ireland to Facebook in the U.S. The case deals with two consecutive questions. The first is whether U.S. law, namely the Foreign Intelligence Surveillance Act (FISA), which provides a legal basis for national security agencies to access the personal data of citizens of the European Union (EU), violates EU data protection law. If so, the second question arises: whether the current legal data transfer mechanisms could be invalid (we already reported on the background).

If both the EU-US Privacy Shield and the EU Standard Contractual Clauses (SCCs), currently the primary transfer mechanisms, were ruled invalid, businesses would probably face a complex and difficult scenario. As Gabriela Zanfir-Fortuna, senior counsel at the Future of Privacy Forum, said, the hearing could have a far greater impact than the first Schrems/EU-US Safe Harbor case, because this time it could affect not only data transfers from the EU to the U.S. but transfers from the EU to every country in the world where international data transfers are based on the SCCs.

Facebook’s lawyer, Paul Gallagher, argued along the same lines. He told the CJEU that if the SCCs were held invalid, “the effect on trade would be immense.” He added that not all U.S. companies are covered by FISA, the law that would oblige them to provide law enforcement agencies with EU personal data. In particular, Facebook could not be held responsible for unduly handing personal data over to national security agencies, as there was no evidence that it had done so.

Eileen Barrington, counsel for the U.S. government, sought to give assurances: referring to a “hypothetical scenario” in which the U.S. would tap data streams from a cable in the Atlantic, she argued that this was not “undirected” mass surveillance but “targeted” collection of data – a lesson, she said, learned from the Snowden revelations, after which the U.S. wanted to regain the trust of Europeans. Only suspicious material would be filtered out using particular selectors. She also had a message for Europeans’ sense of security: “It has been proven that there is an essential benefit to the signals intelligence of the USA – for the security of American as well as EU citizens.”

The crucial factor for the outcome of the proceedings is likely to be how effective the CJEU considers the legal remedies available to EU data subjects to be. Throughout the hearing, there were serious doubts on this point. The monitoring of non-U.S. citizens’ data is essentially based on a presidential directive and an executive order, i.e. on government orders rather than formal laws. EU citizens, however, are none the wiser: as many critics have concluded, they do not know whether they are actually under surveillance or not. There also remains the issue of the independence of the ombudsperson whom the U.S. committed to establishing in the Privacy Shield agreement: he or she may be independent of the intelligence agencies, but most likely not of the government.

Henrik Saugmandsgaard Øe, the Advocate General responsible for the case, intends to present his opinion, which is not binding on the judges, on December 12th. The court’s decision is then expected in early 2020. According to Thomas von Danwitz, CJEU judge and judge-rapporteur in the case, digital services and networking would in any event be considerably compromised if the CJEU were to declare the current content of the SCCs ineffective.


Privacy incidents cost Facebook 5 billion dollars

15. July 2019

According to a report in the Washington Post, the Federal Trade Commission (FTC) has approved a $5 billion (approx. €4.4 billion) settlement with Facebook. The settlement between the FTC and Facebook resolves various data protection incidents, in particular the Cambridge Analytica scandal.

The settlement was approved by a three-to-two vote – the FTC’s three Republicans supported the fine, the two Democrats opposed it – and terminates the procedure investigating Facebook’s privacy violations against users’ personal information. The $5 billion fine is the highest ever assessed against a tech company, but even though it sounds very high, it corresponds to roughly one month of Facebook’s turnover and is therefore not very high in relative terms. Until now, the highest such fine had been the $22.5 million imposed on Google in 2012.

The decision of the FTC needs to be approved by the Justice Department. As a rule, however, this is a formality.

This is not the first fine Facebook has had to accept in connection with data protection incidents, and it will certainly not be the last. Investigations against Facebook are still ongoing in Spain as well as in Germany, and the company has been criticized for privacy incidents for quite some time.

Record fine by ICO for British Airways data breach

11. July 2019

After a data breach in 2018 that affected 500,000 customers, British Airways (BA) has now been fined a record £183m by the UK’s Information Commissioner’s Office (ICO). According to the BBC, Alex Cruz, chairman and CEO of British Airways, said he was “surprised and disappointed” by the ICO’s initial findings.

The breach resulted from a hacking attack that managed to place a script on the BA website. Unsuspecting users trying to access the BA website were diverted to a fraudulent site, which collected their information, including e-mail addresses, names and credit card details. While BA stated that it would reimburse every customer affected, its owner IAG declared through its chief executive that it would take “all appropriate steps to defend the airline’s position”.

The ICO said it was the biggest penalty it had ever handed out and made public under the new rules of the GDPR. “When an organization fails to protect personal data from loss, damage or theft, it is more than an inconvenience,” Information Commissioner Elizabeth Denham told the press.

In fact, the GDPR allows companies to be fined up to 4% of their annual turnover for data protection infringements. The £183m fine British Airways received equals roughly 1.5% of its worldwide turnover for 2017 (implying a turnover of about £12 billion, of which the 4% maximum would have been around £490m), so the fine lies well below the possible maximum.

BA can still appeal against both the findings and the scale of the fine before the ICO makes its final decision.

Texas amends Data Breach Notification Law

2. July 2019

The Governor of Texas, Greg Abbott, recently signed House Bill 4390 (HB 4390), which amends the state’s Data Breach Notification law and establishes an advisory council (the “Texas Privacy Protection Advisory Council”) charged with studying data privacy laws in Texas, other states and relevant other jurisdictions.

Prior to the amendment, businesses had to disclose data breaches to the data subjects “as quickly as possible”. The bill now sets a concrete time period for notifying individuals whose sensitive personal information has been acquired by an unauthorized person: individual notice must be provided within 60 days of discovering the breach.

If more than 250 residents of Texas are affected by a data breach, the Texas Attorney General must also be notified within 60 days. Such a notification must include:
– A detailed description of the nature and circumstances of the data breach;
– The number of affected residents at that time;
– The measures taken regarding the breach and any measures the responsible person intends to take after the notification;
– Information on whether law enforcement is engaged in investigating the breach.
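The new deadlines are mechanical enough to express in code; the following is a minimal sketch under the assumption that the 60-day window runs from the date of discovery (all names are illustrative, not from the bill):

```python
from dataclasses import dataclass
from datetime import date, timedelta

AG_THRESHOLD = 250            # residents affected before the Attorney General must be told
NOTICE_WINDOW = timedelta(days=60)

@dataclass
class Breach:
    discovered: date
    affected_texas_residents: int

def notification_deadline(breach: Breach) -> date:
    """Individual notice is due within 60 days of discovering the breach."""
    return breach.discovered + NOTICE_WINDOW

def must_notify_attorney_general(breach: Breach) -> bool:
    """The Texas AG must be notified if more than 250 residents are affected."""
    return breach.affected_texas_residents > AG_THRESHOLD

breach = Breach(discovered=date(2020, 2, 1), affected_texas_residents=300)
print(notification_deadline(breach))         # 2020-04-01
print(must_notify_attorney_general(breach))  # True
```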

The amendments take effect on January 1, 2020.

Category: General · USA