Category: General

Germany: Data of smart home devices as evidence in court?!

11. June 2019

According to a draft resolution for the upcoming conference of interior ministers of the 16 German federal states, data from smart home devices are to be admitted as evidence in court. The ministers of the federal states believe that the digital traces could help to solve crimes in the future, especially capital crimes and terrorist threats.

The interior ministers want to dispel constitutional concerns, as the data in question is of great interest to the security authorities. According to the draft resolution, judicial approval would suffice in the future. However, domestic politicians expect criticism and resistance from the data protection commissioners of both the federal states and the federal government.

Smart home devices are technical devices such as televisions, refrigerators or voice assistants that are connected to the Internet. They are also grouped under the term Internet of Things (IoT), can be controlled via smartphone and make daily life easier for the user. In the process, large amounts of data are stored and processed.

We have already reported several times about smart home devices, including the fact that in the USA data from smart home devices have already helped to solve crimes (in German).

It cannot be denied that data from smart home devices can, under certain circumstances, help to solve crimes, but it must not be overlooked that, due to the technical design, a 100% reliable statement cannot be made. A simple example: whether the resident was actually at home at the time in question, still on his way home, or merely wanted to give the impression of being at home while in fact being on the other side of the world, cannot be determined on the basis of smart home data alone. The ability to control lighting and heating via smartphone, for example, allows the user to operate these devices from anywhere at any time.

In addition, it should be taken into consideration that such interventions, or the mere possibility of intervention, may violate a person’s right to informational self-determination, and it is precisely the protection of this constitutionally protected right that data protection is committed to.

Update: The 210th Conference of Interior Ministers has since come to an end, and the admission of smart home data as evidence in court was rejected. The resolutions of the conference can be found here (in German).

Royal family uses GDPR to protect their privacy

22. May 2019

Last week Prince Harry and Meghan Markle claimed another victory in the royal family’s never-ending struggle with paparazzi photographers, securing “a substantial sum” in damages from an agency that had released to the media intimate photos of the Oxfordshire home the Duke and Duchess of Sussex rented. In a statement, Splash News apologized and acknowledged that the situation represented “an error of judgement”.

The paparazzi agency Splash News took photos and footage of the couple’s former Cotswolds home, including their living room, dining area and bedroom, using a helicopter, and promptly sold them to various news outlets. Prince Harry’s lawyers argued that this constituted a breach of his right to privacy under Art. 7 and 8 ECHR as well as a breach of the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA).

Considering the strategy of the Duke’s lawyers, it looks like the royal family has found a potentially attractive alternative to claims of defamation or invasion of privacy. In contrast to such claims, a claimant relying on data protection law needs to prove neither that a statement is defamatory and meets the threshold of serious harm to reputation, nor that the information is private.

However, the (new) European data protection legislation grants all data subjects, regardless of their position and/or fame, a right to respect for their private and family lives and to protection of their personal data. In particular, Article 5 GDPR requires organisations to handle personal data (such as names, pictures and stories relating to individuals) fairly and in a transparent manner, and to use it only for legitimate purposes.

Moreover, an organization using pictures and footage of an individual’s private or even intimate sphere needs a specific legal basis, such as a contract, the individual’s consent, or the ability to argue that using the photos and footage was “in the public interest” or served a “legitimate interest”. As a contract and consent can be ruled out here, the only bases that might be considered are a public interest or a legitimate interest of the organization itself. Taking into account the means and the way in which these photos and footage of the Duke and Duchess were created, neither of these interests can outweigh the interest in protecting the rights and freedoms of individuals’ private and intimate sphere.

In light of this case, it seems likely that the European data protection regime has changed the way celebrities and the courts handle the heavily contested question of whether the public is allowed to see and be informed about certain parts and aspects of famous people’s lives.


The global competition for Artificial Intelligence – Is it Time to Regulate Now?

21. May 2019

This year’s European Identity & Cloud Conference took place last week.
In the context of this event, various questions relevant from a data protection perspective arose. Dr. Karsten Kinast, Managing Director of KINAST Attorneys at Law and Fellow Analyst of the organizer KuppingerCole, gave a keynote speech on the question of whether internationally uniform regulations should be created in the context of a global competition for artificial intelligence (AI). Dr. Kinast outlined the controversial debate about the dangers of future AI on the one hand and the resulting legal problems and possible solutions on the other. At present, there is no form of AI that understands the concrete content of its processing. Nor has AI so far been able to draw independent conclusions from a processing operation, let alone base autonomous decisions on it. Furthermore, from today’s perspective it is not even known how such a synthetic intelligence could be created.
For this reason, the primary task is not to develop a code of ethics under which AIs could unfold as independent subjects. Rather, from today’s perspective, it is a matter of the far more mundane question of allocating responsibilities.

The entire lecture can be found here.

San Francisco took a stand against use of facial recognition technology

15. May 2019

San Francisco is the first major city in the US to ban the use of facial recognition software by the authorities. The Board of Supervisors decided on 14 May that the risk of violating civil rights by using such technology far outweighs the claimed benefits. According to the vote, the municipal police and other municipal authorities may not acquire, hold or use any facial recognition technology in the future.

The proposal stems from concerns that facial recognition software threatens to exacerbate racial injustice and endanger “the ability to live free from constant monitoring by the government”. Civil rights advocates and researchers warn that, should governmental oversight fail, the technology could easily be misused to monitor immigrants or to unjustly target African-Americans or low-income neighborhoods.

According to Aaron Peskin, the city supervisor who sponsored the bill, the decision sends a particularly strong message to the nation, coming from a city transformed by tech. The ban is part of broader legislation aiming to restrict the use of surveillance technologies. However, airports, ports and other facilities operated by federal authorities, as well as businesses and private users, are explicitly excluded from the ban.

Twitter shared location data on iOS devices

Twitter recently published a statement admitting that the app shared location data on iOS devices even if the user had not turned on the “precise location” feature.

The problem occurred when a user used more than one Twitter account on the same iOS device. If he or she had opted into the “precise location” feature for one account, it was also active when using another account, even if the user had not opted in for that account. The real-time location was then passed on to trusted partners of Twitter. However, due to technical measures, only the postcode or an area of five square kilometres was shared with the partners. Twitter handles or other unique account IDs that could reveal the identity of the user were allegedly not transmitted.
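Twitter has not published how this coarsening was implemented; the following sketch merely illustrates the general technique of snapping a precise coordinate to a coarse grid cell before sharing it. All names and the grid approach are illustrative assumptions, not Twitter’s actual code.

```python
import math

# Hypothetical sketch of location coarsening: snap a precise GPS fix to the
# south-west corner of a roughly 5 km x 5 km grid cell before sharing it.
# This illustrates the general technique only; the grid scheme and constants
# are made up for the example.

KM_PER_DEGREE_LAT = 111.0  # approximate length of one degree of latitude

def coarsen_location(lat: float, lon: float, cell_km: float = 5.0):
    """Return the south-west corner of the grid cell containing (lat, lon)."""
    lat_step = cell_km / KM_PER_DEGREE_LAT
    # One degree of longitude shrinks with latitude; guard against the poles.
    lon_km = KM_PER_DEGREE_LAT * max(math.cos(math.radians(lat)), 1e-6)
    lon_step = cell_km / lon_km
    return (math.floor(lat / lat_step) * lat_step,
            math.floor(lon / lon_step) * lon_step)
```

Because every point in a cell maps to the same corner, a recipient learns only the coarse area, not the exact position, which is the effect Twitter describes.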

According to Twitter’s statement, they have fixed the problem and informed the affected users: “We’re very sorry this happened. We recognize and appreciate the trust you place in us and are committed to earning that trust every day”.

Mass monitoring in Xinjiang

3. May 2019

According to research by Human Rights Watch, China’s state and party leaders have had an app developed with which the security authorities in Xinjiang can monitor the province’s inhabitants on a massive scale.

When police officers log into the app, they can see which “conspicuous” behaviours of individual residents have been recorded. According to the published report, the authorities are using the app for illegal mass surveillance and arbitrary arrest of the Uighur Muslim minority living in Xinjiang Province. Up to one million Uighurs are currently said to be imprisoned in “re-education camps”.

Users of the app are asked to enter a variety of information about citizens and to explain the circumstances under which it was collected. This includes data such as name or identity card number, but also religious beliefs, blood group or the absence of a smartphone. According to Human Rights Watch, the app is also said to be connected to other databases and to alert users if a citizen consumes too much electricity or if a mobile phone does not log on to the network for a long time. Citizens are also deemed “suspicious” if they have little contact with neighbours or do not often enter buildings through the front door.

Human Rights Watch is convinced that this procedure is illegal even under Chinese law and that the collected data must be deleted. It remains to be seen whether the Chinese or other governments will react to the disclosures.

Category: General · Personal Data

Dutch DPA publishes recommendations for privacy policies

26. April 2019

Recently, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) published six recommendations for companies when outlining their privacy policies for the purpose of Art. 24 para 2 of the General Data Protection Regulation (the “GDPR”).

The authorities’ recommendations are a result of their investigation of companies’ privacy policies, which focused on companies that mainly process special categories of personal data, e.g. health data or data relating to individuals’ political beliefs.

The Dutch DPA reviewed the privacy policies of several organizations, such as blood banks and local political parties, focusing on three main points: 1) the description of the categories of personal data, 2) the description of the purposes of the processing and 3) the information about data subjects’ rights. It found that the descriptions of the data categories and purposes were often incomplete or too superficial, and therefore released six recommendations that companies should take into consideration when outlining privacy policies.

Those are the six recommendations:

  • Companies should evaluate whether they have to implement privacy policies (taking into account the nature, scope, context and purposes of the processing, as well as the risks for the rights and freedoms of natural persons)
  • Companies should consult internal and/or external expertise such as data protection officers when implementing privacy policies
  • The policy should be outlined in a single document to avoid fragmentation of information
  • The policy should be concrete and specific, not merely repeat the provisions of the GDPR
  • The DPA recommends publishing the privacy policy so that data subjects are aware of how the company handles personal data
  • The DPA also suggests drafting a privacy policy even where it is not mandatory, to demonstrate that the company is committed to protecting personal data

EDPS investigates into contractual agreements between EU institutions and Microsoft

10. April 2019

The European Data Protection Supervisor (EDPS) is the supervisory authority for all EU institutions and therefore responsible for their compliance with data protection laws. It is currently investigating whether the contractual agreements between the EU institutions and Microsoft are compliant, as the various institutions use Microsoft products and services to conduct their day-to-day business, including the processing of huge amounts of personal data.

The EDPS refers to a Data Protection Impact Assessment carried out last November by the Dutch Ministry of Justice and Security (we reported), which concluded that Microsoft collects and stores personal data of Office users on a large scale without informing them.

Wojciech Wiewiórowski, Assistant EDPS, said: “New data protection rules for the EU institutions and bodies came into force on 11 December 2018. Regulation 2018/1725 introduced significant changes to the rules governing outsourcing. Contractors now have direct responsibilities when it comes to ensuring compliance. However, when relying on third parties to provide services, the EU institutions remain accountable for any data processing carried out on their behalf. They also have a duty to ensure that any contractual arrangements respect the new rules and to identify and mitigate any risks. It is with this in mind that the contractual relationship between the EU institutions and Microsoft is now under EDPS scrutiny.”

The investigation is intended to reveal which products and systems are currently in use and whether the existing contractual agreements comply with current data protection law, especially the GDPR.

Category: EU · GDPR · General

German Court’s Decision on the Right of Access

9. April 2019

Just recently, a German Labour Court (LAG Baden-Württemberg) decided on the scope of Article 15 of the European General Data Protection Regulation (GDPR) with regard to the information that must be handed out to the data subject when such a claim is made.

The decision literally reflects the wording of Art. 15 (1) GDPR which, amongst other things, requires information on

  • the purposes of data processing,
  • the categories of personal data concerned,
  • the recipients or categories of recipient to whom the personal data have been or will be disclosed,
  • where possible, the envisaged period for which the personal data will be stored, or, if not possible, the criteria used to determine that period,
  • where the personal data are not collected from the data subject, any available information as to their source.

In contrast to the previous views of the local data protection authorities, which – in the context of information about recipients of personal data – deem sufficient that the data controller discloses recipient categories, the LAG Baden-Württemberg also obliged the data controller to provide the data subject with information about each individual recipient.

In addition, the LAG Baden-Württemberg ordered the data controller to make available to the data subject a copy of all his personal performance data. However, the court did not comment on the extent of copies that are to be made. It is therefore questionable whether, in addition to information from the systems used in the company, copies of all e-mails containing personal data of the person concerned must also be made available to the data subject.

Since the court has admitted the appeal to the Federal Labour Court (BAG) regarding this issue, it remains to be seen whether such an approach will still be valid after a Federal Labour Court decision.

Dutch DPA published update on policy on administrative fines

The Dutch Data Protection Authority, Autoriteit Persoonsgegevens (Dutch DPA), announced an update on its policy regarding administrative fines.

In addition to the Dutch GDPR implementation law, the published policy provides insights into how the Dutch DPA will use its fining powers. According to the policy, the DPA differentiates between three or four categories of infringement. Each infringement is assigned a basic fine and a specific penalty bandwidth.

The DPA calculates the fine in two steps. First, the basic fine of the applicable category is applied; second, the basic fine is increased or decreased within that category’s bandwidth. Various aspects are included in the calculation of the fine, such as:

  • the nature, the seriousness and duration of the violation,
  • the number of data subjects affected,
  • the extent of the damage and of the data compromised,
  • the intentional or negligent nature of the violation,
  • the measures adopted to mitigate the damages,
  • the measures that were implemented to ensure compliance with the GDPR, including information security measures,
  • prior violations,
  • the level of cooperation with the DPA,
  • the types of data involved,
  • how the DPA became aware of the violation, including whether (and if so, to what extent) the data controller or processor reported the violation,
  • adherence to approved codes of conduct and certification mechanisms,
  • any other applicable aggravating or mitigating factors.

The maximum amount is generally €1,000,000, but the fine can be higher where the Dutch DPA decides that the calculated maximum amount is inappropriate in a particular case.
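The two-step calculation described above can be sketched as follows. Note that the category boundaries and basic fines in this example are illustrative placeholders, not the figures from the Dutch DPA’s actual policy, and the linear adjustment factor is an assumption made for the sake of the example.

```python
# Hypothetical sketch of the two-step fine calculation: start from the basic
# fine of the applicable category, then move it up or down within that
# category's bandwidth according to aggravating or mitigating factors.
# All numbers below are illustrative placeholders, not the Dutch DPA's figures.

CATEGORIES = {
    # category: (bandwidth_min, bandwidth_max, basic_fine) in euros
    "I":   (0,       200_000, 100_000),
    "II":  (120_000, 500_000, 310_000),
    "III": (300_000, 750_000, 525_000),
}

def calculate_fine(category: str, adjustment: float) -> int:
    """adjustment in [-1.0, 1.0]: negative mitigates, positive aggravates."""
    low, high, basic = CATEGORIES[category]
    if adjustment >= 0:
        fine = basic + adjustment * (high - basic)   # move toward the cap
    else:
        fine = basic + adjustment * (basic - low)    # move toward the floor
    return int(round(fine))
```

With an adjustment of 0 the basic fine applies unchanged; an adjustment of 1.0 or -1.0 reaches the top or bottom of the bandwidth, mirroring the “increase or decrease within the bandwidth” step of the policy.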
