Category: Personal Data

High Court dismisses challenge regarding Automated Facial Recognition

12. September 2019

On 4 September, the High Court of England and Wales dismissed a challenge to the police’s use of Automated Facial Recognition Technology (“AFR”). The court ruled that the use of AFR was proportionate and necessary to meet the legal obligations of the police.

The pilot project, AFR Locate, was deployed at certain events and in public places where crimes were likely to be committed. The system can detect up to 50 faces per second; the detected faces are then compared, by means of biometric analysis, against watchlists of wanted persons registered in police databases. If no match is found, the images are deleted immediately and automatically.
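The judgment turns on exactly this detect, compare and delete flow. Purely as an illustration of that data flow (the actual AFR Locate implementation is not public, and every name below is hypothetical), a minimal sketch could look like this:

```python
# Illustrative sketch of the detect/compare/delete flow described above.
# The real AFR Locate system and its biometric matching are not public;
# all names here are hypothetical.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class FaceCapture:
    frame_id: int
    template: bytes  # biometric template derived from the detected face

def matches_watchlist(template: bytes, watchlist: Set[bytes]) -> bool:
    # Placeholder: a real system scores similarity against enrolled
    # templates rather than testing exact byte equality.
    return template in watchlist

def process_frame(captures: List[FaceCapture], watchlist: Set[bytes]) -> List[FaceCapture]:
    """Keep only captures that match the watchlist; discard the rest."""
    alerts = []
    for capture in captures:
        if matches_watchlist(capture.template, watchlist):
            alerts.append(capture)  # flagged for review by an officer
        # Non-matching captures are simply not retained: the image and the
        # derived template are dropped as soon as the comparison is done.
    return alerts
```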

An individual initiated judicial review proceedings after he was likely captured by AFR Locate even though he was not identified as a wanted person. He considered this to be unlawful, in particular as a violation of the right to respect for private and family life under Article 8 of the European Convention on Human Rights (“ECHR”) and of data protection law in the United Kingdom. In his view, the police did not respect the data protection principles, in particular the principle in Section 35 of the Data Protection Act 2018 (“DPA 2018”), which requires the processing of personal data for law enforcement purposes to be lawful and fair. He also argued that the police had failed to carry out an adequate data protection impact assessment (“DPIA”).

The Court stated that the use of AFR affected a person’s rights under Article 8 of the ECHR and that this type of biometric data is private in itself. Even though the images were erased immediately, the procedure constituted an interference with Article 8 of the ECHR, since it suffices that the data is stored temporarily.

Nevertheless, the Court found that the police’s actions were in accordance with the law, as they fall within the police’s public law powers to prevent and detect criminal offences. The Court also found that the use of the AFR system was proportionate and that the technology was used openly, transparently and with considerable public engagement, thus fulfilling all existing criteria. It was only used for a limited period, for a specific purpose, and its deployment was announced in advance (e.g. on Facebook and Twitter).

With regard to data protection law, the Court considered that the images of individuals captured constitute personal data even if they do not match the watchlists of persons sought, because the technology has singled them out and distinguished them from others. Nevertheless, the Court held that there was no violation of the data protection principles, for the same reasons for which it denied a violation of Art. 8 ECHR. The Court found that the processing fulfilled the conditions of lawfulness and fairness and was necessary for the legitimate interest of the police in the prevention and detection of criminal offences, as required by their public service obligations. The requirement of Sec. 35 (5) DPA 2018 that the processing be strictly necessary was fulfilled, as was the requirement that the processing be necessary for the exercise of the functions of the police.

The last requirement under Sec. 35 (5) of the DPA 2018 is that an appropriate policy document is in place to govern the processing. The Court considered the relevant policy document in this case to be short and incomplete. Nevertheless, it declined to rule on whether the document was adequate and stated that it would leave that assessment to the Information Commissioner’s Office (“ICO”), which is expected to publish more detailed guidance.

Finally, the Court found that the impact assessment carried out by the police was sufficient to meet the requirements of Sec. 64 of DPA 2018.

The ICO stated that it would take the High Court ruling into account when finalising its recommendations and guidance on the use of live facial recognition systems.

Irish DPC releases guide on Data Breach Notifications

15. August 2019

On Monday, the Irish Data Protection Commission (IDPC) released a quick guide on data breach notifications. It is intended to help controllers understand their notification and communication obligations, both to the DPC and to the data subjects concerned.

The guide, which is intended as a quick overview of the requirements and obligations that fall on data controllers, refers to the far more detailed guidance on data breach notifications issued by the Article 29 Working Party (now the European Data Protection Board, or EDPB).

In summary, the IDPC categorizes a data breach as a “security incident that negatively impacts the confidentiality, integrity or availability of personal data; meaning that the controller is unable to ensure compliance with the principles relating to the processing of personal data as outlined in Art. 5 GDPR”. In such a case, the controller has two primary obligations: (1) to notify the DPC of the data breach, unless it is unlikely to result in a risk to data subjects, and (2) to communicate the data breach to the affected data subjects where it is likely to result in a high risk.
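These two obligations boil down to a simple decision rule keyed to the assessed risk. The following is a minimal sketch of that rule only; the risk assessment itself is a substantive legal judgement, not a function call, and all names below are purely illustrative:

```python
from enum import Enum

class Risk(Enum):
    UNLIKELY = "unlikely to result in a risk"
    RISK = "likely to result in a risk"
    HIGH_RISK = "likely to result in a high risk"

def breach_obligations(assessed_risk: Risk) -> dict:
    """Map the assessed risk of a personal data breach to the controller's
    duties as summarised in the IDPC quick guide (Art. 33/34 GDPR)."""
    return {
        # Notify the DPC within 72 hours of becoming aware of the breach,
        # unless it is unlikely to result in a risk to data subjects.
        "notify_dpc": assessed_risk is not Risk.UNLIKELY,
        # Communicate the breach to affected data subjects only where it
        # is likely to result in a high risk to them.
        "notify_data_subjects": assessed_risk is Risk.HIGH_RISK,
    }

# Example: a breach assessed as "likely to result in a risk" must be
# notified to the DPC but not necessarily communicated to data subjects:
# breach_obligations(Risk.RISK)
# -> {"notify_dpc": True, "notify_data_subjects": False}
```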

The IDPC seeks to help controllers by providing a list of requirements for notifications to the DPC and communications to data subjects, especially given the tight timeframe: notifications must be filed within 72 hours of becoming aware of the breach. The guide is meant to eliminate confusion arising in the process, as well as the problems companies have had in the past when filing a data breach notification.

EDPB adopts Guidelines on processing of personal data through video devices

13. August 2019

The EDPB recently adopted its Guidelines on processing of personal data through video devices (“the guidelines”). The guidelines provide assistance on how to apply the GDPR to processing through video devices and include several examples, which are not exhaustive but are applicable to all areas in which video devices are used.

In a first step, the guidelines set out the scope of application. The GDPR only applies to the use of video devices if

  • personal data is collected through the video device (e.g. a person is identifiable on the basis of their looks or other specific elements),
  • the processing is not carried out by competent authorities for the purposes of prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and
  • the so-called “household exemption” does not apply (processing by a natural person in the course of a purely personal or household activity).

Before processing personal data through video devices, controllers must specify their legal basis for doing so. According to the guidelines, every legal ground under Article 6 (1) GDPR can provide a legal basis. The purposes of using video devices to process personal data should be documented in writing and specified for every camera in use.

Another subject of the guidelines is the transparency of the processing. The controllers have to inform data subjects about the video surveillance. The EDPB recommends a layered approach and combining several methods to ensure transparency. The most important information should be written on the warning sign itself (first layer) and the other mandatory details may be provided by other means (second layer). The second layer must also be easily accessible for data subjects.

The guidelines also deal with storage periods and technical and organizational measures (TOMs). Some member states may have specific provisions on storing video surveillance footage, but it is recommended to delete the personal data after a few days, ideally automatically. As with any kind of data processing, the controller must adequately secure the data and must therefore have implemented technical and organizational measures. Examples provided include masking or scrambling areas that are not relevant to the surveillance, or editing out images of third persons when providing video footage to data subjects.
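The masking and blurring measures mentioned above can be made concrete with a short sketch. It assumes OpenCV and hypothetical pixel coordinates; it is not taken from the guidelines, which remain technology-neutral:

```python
import cv2  # OpenCV, assumed to be installed (pip install opencv-python)

def mask_region(frame, region):
    """Black out a rectangular region (x, y, width, height) of a frame,
    e.g. an area outside the surveillance purpose such as a neighbouring
    property or a public pavement."""
    x, y, w, h = region
    frame[y:y + h, x:x + w] = 0  # pixels are overwritten, nothing to recover
    return frame

def blur_region(frame, region):
    """Blur a rectangular region, e.g. the face of an uninvolved third
    person, before handing footage to a data subject."""
    x, y, w, h = region
    roi = frame[y:y + h, x:x + w]
    frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame

# Hypothetical usage on a single exported frame:
# frame = cv2.imread("frame_0001.png")
# frame = mask_region(frame, (0, 0, 200, 150))
# frame = blur_region(frame, (400, 220, 80, 80))
# cv2.imwrite("frame_0001_redacted.png", frame)
```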

The guidelines are open for public consultation until September 9th 2019, and a final, revised version is planned for the end of 2019.

Amazon has Alexa recordings evaluated by temporary workers working from home

5. August 2019

According to a report by the German newspaper “Welt am Sonntag”, Amazon has Alexa voice recordings listened to not only by its own employees, but also by Polish temporary workers.

For some time now, Amazon has been criticised because recordings made by the Alexa voice assistant are listened to and transcribed by employees in order to improve speech recognition. For a long time, however, users were unaware of this practice.

It has now become known that temporary workers listen to and evaluate the recordings from home using remote-working software. Until recently, a Polish recruitment agency advertised “teleworking all over the country”, although Amazon had previously assured that the voice recordings would only be evaluated in specially protected offices. One of the Polish temporary workers stated that many of them work from home and that the recordings include personal data such as names or places that allow conclusions to be drawn about the person concerned.

When asked, Amazon confirmed the findings. A spokesman said that some employees were allowed to work from locations other than Amazon offices, but that particularly strict rules had to be observed; in particular, working in public places is not allowed.

On the same day, the online job advertisements were deleted and Amazon introduced a new data protection option. Users can now explicitly object and block their recordings from being reviewed by Amazon employees.

Other providers of voice assistants have also suspended, or plan to suspend, human review of recordings, at least for European users. According to Google, around 0.2 % of recordings are subsequently listened to, while Apple and Amazon say it is less than 1 %. Google already deactivated the function three months ago, and Apple also intends to suspend the reviews and later ask its users explicitly whether they may be resumed.

FaceApp reacts to privacy concerns

22. July 2019

The photo-editing app FaceApp, which has become increasingly popular on social media, has been confronted with various privacy concerns.

Created in Russia by a four-person start-up, the app applies a newly developed technology that uses neural networks to modify a face in any photo while keeping it photorealistic. No filters are placed over the photo; instead, the image itself is modified with the help of deep learning technology.

However, the app is accused of not explaining that the images are uploaded to a cloud for editing. In addition, the app is accused of uploading not only the image selected by the user, but also the entire camera roll in the background. The latter in particular raises serious security concerns, given the large number of screenshots people nowadays take of sensitive information such as login credentials or bank details.

While there is no evidence for the latter accusation, and FaceApp emphasizes in its statement that no image other than the one chosen by the user is uploaded, the company does confirm that the selected image is uploaded to a cloud.

FaceApp justifies the cloud upload on grounds of performance and traffic: the developers want to ensure that the user does not have to upload the photo again for each editing step.

Finally, FaceApp declares that no user data is sold or passed on to third parties. It also states that in 99 % of cases it is unable to identify a person, because the app can be, and by a large number of users actually is, used without registration.

Google data breach notification sent to IDPC

18. July 2019

Google may face further investigations under the General Data Protection Regulation (GDPR) after unauthorized audio recordings were forwarded to subcontractors. The Irish Data Protection Commission (IDPC) confirmed through a spokesperson that it received a data breach notification concerning the issue last week.

The recordings were exposed by the Belgian broadcaster VRT and are said to comprise 1,000 clips of conversations from Belgium and the Netherlands. Logged by Google Assistant, the recordings were sent to Google’s subcontractors for review. At least 153 of those recordings were not triggered by Google’s wake phrase “Ok/Hey, Google” and were never meant to be recorded in the first place. They contained personal data ranging from family conversations and bedroom chatter to business calls with confidential information.

Google addressed this violation of its data security policies in a blog post. It said that the audio recordings were sent to language experts who understand nuances and accents in order to refine Google Home’s linguistic abilities, a critical part of the process of building speech technology. Google stresses that the storing of recorded audio on its services is turned off by default and that devices only send audio data to Google once the wake phrase is said. The recordings in question were most likely initiated by users saying a phrase that sounded similar to “Ok/Hey, Google”, thereby confusing Google Assistant and turning it on.

According to Google’s statement, its security and privacy teams are working on the issue and will fully review its safeguards to prevent this sort of incident from happening again. If, however, subsequent investigations by the IDPC uncover a GDPR violation, it could result in a significant financial penalty for the tech giant.

China: Tourists’ mobile phones are scanned by an app

8. July 2019

Foreign tourists who want to enter the Chinese province of Xinjiang by land are being spied on via an app.
For the first time, journalists of the Süddeutsche Zeitung, Motherboard Vice and various other media outlets, in cooperation with IT experts from the Ruhr University Bochum, have succeeded in decrypting Chinese surveillance software that also targets foreigners.

It has been known for some time that the Chinese authorities use apps to monitor the Uighur residents of Xinjiang province (we reported). What is new is that foreign tourists and business travellers entering Xinjiang from Kyrgyzstan by land have to hand in their mobile phones at the border, where the Android app “Fengcai” (“collecting bees”) is installed on them. They are not explicitly informed about this.

The app accesses the contacts, the calendar, text messages, location data and call lists and transmits them to a computer of the border police. In addition, the app scans the phone for more than 70,000 files that are considered suspicious from the Chinese government’s point of view. Many of the listed files relate to extremist content connected to the Islamic State, but harmless religious content and files related to Taiwan, Tibet or the Dalai Lama are also on the list. If the app finds a match, it emits a warning tone and thereby alerts the border police.

The app also scans the phone to determine which apps the user has installed and even extracts usernames. Several antivirus firms have meanwhile updated their products to identify the app as malware.
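According to the reporting, the file check compares material on the phone against a list of more than 70,000 targeted items. A common way to build such a scan, sketched here purely for illustration (the decrypted Fengcai internals are only partially described in the article, and nothing below is taken from them), is to hash each file and look the hash up in a blocklist:

```python
import hashlib
from pathlib import Path
from typing import List, Set

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so that large files need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: Path, blocklist: Set[str]) -> List[Path]:
    """Return every file under `root` whose hash appears in the blocklist."""
    hits = []
    for path in root.rglob("*"):
        if path.is_file() and sha256_of(path) in blocklist:
            hits.append(path)  # a real scanner would raise an alert here
    return hits
```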

Neither the Chinese authorities nor the company that developed the app reacted to a request for comment.


EU-US Privacy Shield and SCCs facing legal challenges before the European courts

3. July 2019

Privacy Shield, established between the European Union (EU) and the United States of America (US) as a replacement for the invalidated Safe Harbor agreement, has been under scrutiny from the moment it entered into effect. Based on the original claims brought by Max Schrems regarding Safe Harbor (C-362/14), the EU-US data transfer agreement has been challenged in two cases, one of which will be heard by the Court of Justice of the European Union (CJEU) in early July.

In this case, as in 2015, Mr. Schrems bases his claims essentially on the same principles. The contention is the unrestricted access of US agencies to Europeans’ personal data. Following hearings in 2017, the Irish High Court referred 11 questions regarding the adequacy of the level of protection to the CJEU. The hearing before the CJEU is scheduled for July 9th. The second case, originally planned to be heard on July 1st and 2nd, was brought before the General Court of the European Union by the French digital rights group La Quadrature du Net in conjunction with French Data Network and the Fédération FDN. Their concerns revolve around the inadequacy of the level of protection provided by the Privacy Shield and its mechanisms.
This hearing, however, was cancelled by the General Court of the EU only days before its scheduled date, as La Quadrature du Net announced in a tweet.

Despite the criticism of the agreement, the European Commission noted improvements to the level of protection provided by the Privacy Shield in its second review of the agreement in December 2018. On June 20th 2019, the US Senate confirmed Keith Krach as Under Secretary for Economic Growth, Energy, and the Environment, whose duties include serving as the permanent ombudsperson for Privacy Shield and EU data protection matters.

As it stands, both cases are apt to worry companies that rely on Privacy Shield certification or on the use of SCCs. Given the uncertainty these questions create, DPOs will be looking for new ways to safeguard data flows between Europe and the US. The European Commission has stated that it wants to make it easier for companies to comply with the GDPR’s requirements for data transfers in the future. It plans to update the SCCs in line with the GDPR, providing a contractual mechanism for international transfers. Nonetheless, it is unclear when those updates will happen, and they may themselves be subject to legal challenge depending on the forthcoming Schrems ruling.

Consumers should know how much their data is worth

27. June 2019

US Senators Mark R. Warner (Democrat) and Josh Hawley (Republican) want Facebook, Google and other large platforms to disclose exactly how much their users’ data, measured in dollars and cents, is worth to them.

Last Sunday, the two senators announced their plans for the first time on a US talk show: every three months, each user is to receive an overview of which data has been collected and stored and how the respective provider values it. In addition, the aggregated value of all user data is to be reported annually to the US Securities and Exchange Commission. In this report, the companies are to disclose how they store, process and protect data and how, and with which partner companies, they generate revenue from the data. All companies with more than 100 million users per month would be affected.

The value of user data has risen enormously in recent years; so far, companies have protected their internal calculations as trade secrets. In addition, there is no recognized method for quantifying the value of user data; only when a company is sold or valued in an initial public offering (IPO) does a figure become visible. In the case of the WhatsApp takeover it was $55 per user; in the case of Skype it was $200.

The significance of these figures can be doubted, however. A further indication is advertising revenue, which companies disclose quarterly. At the end of 2018, Facebook earned around $6 per user worldwide, while Amazon earned $752 per user. These figures are likely to rise in the future. “For years, social media companies have told consumers that their products are free to the user. But that’s not true – you are paying with your data instead of your wallet,” said Senator Warner. “But the overall lack of transparency and disclosure in this market have made it impossible for users to know what they’re giving up, who else their data is being shared with, or what it’s worth to the platform. […]” Experts believe it is important for consumers to know the value of their data, because only those who know the value of a good are able to appraise it.

On Monday, Warner and Hawley plan to introduce the Designing Accounting Safeguards to Help Broaden Oversight And Regulations on Data (DASHBOARD) Act in the Senate. It remains to be seen whether their plans will meet with the approval of the other senators.

Spanish DPA imposes fine on Spanish football league

13. June 2019

The Spanish data protection authority, the Agencia Española de Protección de Datos (AEPD), has imposed a fine of 250,000 EUR on the organiser of the two Spanish professional football leagues for data protection infringements.

The organiser, the Liga Nacional de Fútbol Profesional (LFP), operates an app called “La Liga”, which aims to uncover unlicensed screenings of matches broadcast on pay-TV. For this purpose, the app records samples of ambient sound during match times to detect any live match broadcasts and combines this with location data. Privacy-ticker already reported on this.

The AEPD criticized that the intended purpose of the collected data had not been made sufficiently transparent, as required by Art. 5 (1) GDPR. Users must explicitly approve the use, and the authorization for microphone access can also be revoked in the Android settings. However, the AEPD is of the opinion that La Liga has to warn the user again of each instance of data processing via the microphone. In its resolution, the AEPD points out that the nature of mobile devices makes it impossible for users to remember, each time they use the La Liga application, what they agreed to and what they did not agree to.

Furthermore, the AEPD is of the opinion that La Liga has violated Art. 7 (3) GDPR, according to which the user must be able to withdraw his or her consent to the use of personal data at any time.

La Liga rejects the sanction as unjust and will appeal against it. It argues that the AEPD has not made the necessary effort to understand how the technology works. The company explains that the technology is designed to produce only a particular acoustic fingerprint, which contains just 0.75% of the information; the remaining 99.25% is discarded, making it technically impossible to interpret human voices or conversations. The fingerprint is also converted into an alphanumeric code (hash) that cannot be reversed to the original sound. Nevertheless, the operators of the app have announced that they will remove the controversial feature as of June 30.
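La Liga’s description, a drastically reduced acoustic fingerprint that is then hashed, can be made more concrete with a simplified sketch. The feature extraction below (a coarse per-window energy contour) is a stand-in chosen for brevity and is not La Liga’s actual fingerprinting algorithm:

```python
import hashlib
import numpy as np  # assumed available

def fingerprint(samples: np.ndarray, window: int = 4096) -> str:
    """Reduce an audio buffer to a tiny feature vector and hash it.

    Only one coarse energy value per window is kept, so the overwhelming
    majority of the signal is discarded and speech cannot be reconstructed;
    the SHA-256 hash is then not reversible to the original sound."""
    n_windows = len(samples) // window
    energies = [
        float(np.mean(samples[i * window:(i + 1) * window] ** 2))
        for i in range(n_windows)
    ]
    # Quantise coarsely so that only a rough loudness contour survives.
    quantised = bytes(min(int(e * 255), 255) for e in energies)
    return hashlib.sha256(quantised).hexdigest()

# Hypothetical usage on one second of 16 kHz audio (normalised to [-1, 1]):
# pcm = np.zeros(16000, dtype=np.float32)
# code = fingerprint(pcm)
```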
