Germany: Data of smart home devices as evidence in court?!

11. June 2019

According to a draft resolution for the upcoming conference of interior ministers of the 16 German federal states, data from smart home devices are to be admitted as evidence in court. The ministers of the federal states believe that the digital traces could help to solve crimes in the future, especially capital crimes and terrorist threats.

The interior ministers want to dispel constitutional concerns, because the data in question is of great interest to the security authorities. According to the draft resolution, judicial approval would be sufficient in the future. However, domestic policy experts expect criticism and resistance from the data protection commissioners of both the federal states and the federal government.

Smart home devices are technical devices such as televisions, refrigerators or voice assistants that are connected to the Internet. They are also grouped under the term Internet of Things (IoT), can be controlled via smartphone and make daily life easier for their users. In the process, large amounts of data are stored and processed.

We have already reported on smart home devices several times, including the fact that data from such devices has already helped to solve crimes in the USA (in German).

It cannot be denied that data from smart home devices can, under certain circumstances, help to solve crimes, but it must not be overlooked that, due to the technical design of these devices, a 100% reliable statement cannot be made. A simple example: whether the resident was actually at home at the time in question, was still on the way home, or merely wanted to give the impression of being at home while in fact being on the other side of the world cannot be determined on the basis of smart home data alone. The ability to control lighting or heating via smartphone, for example, allows the user to do so from anywhere at any time.

In addition, it should be taken into consideration that such interventions, or the mere possibility of intervention, may violate a person’s right to informational self-determination, and it is precisely the protection of this constitutionally protected right that data protection is committed to.

Update: The 210th Conference of Interior Ministers has since come to an end, and the admission of smart home data as evidence in court was rejected. The resolutions of the conference can be found here (in German).

US Border Control – traveler photos and license plate images stolen in a data breach

U.S. Customs and Border Protection (CBP) announced on Monday, 10th June 2019, that photos of travelers, their cars and their license plates were stolen in a data breach.

CBP’s own network was not affected by the breach; rather, the photos had been transferred to a subcontractor and were stolen in a hack of the subcontractor’s network. CBP did not name the subcontractor. According to US media reports, it is Perceptics, which was hacked in May 2019.

CBP announced: “CBP learned that a subcontractor, in violation of CBP policies and without CBP’s authorization or knowledge, had transferred copies of license plate images and traveler images collected by CBP to the subcontractor’s company network.” Despite these violations of data protection and security rules, CBP has not terminated its cooperation with the hacked subcontractor.

CBP was informed about the breach on 31st May 2019. The breach affects nearly 100.000 people who travelled to the USA. Apart from the photos of travelers, their cars and their license plates, no passports or other travel documents and no images of airline passengers were involved. The photos show travellers crossing the US border to either Canada or Mexico.

So far, the stolen data has been found neither on the Internet nor on the dark net.

CNIL fines French real estate company for violating the GDPR

7. June 2019

The French Data Protection Authority “Commission Nationale de l’Informatique et des Libertés” (CNIL) issued a € 400.000 fine against the French real estate company “Sergic” for violating the GDPR.
Sergic specializes in real estate development, purchase, sale, rental and property management and operates the website www.sergic.com, which allows rental candidates to upload the documents required for their application file.

In August 2018, a Sergic user contacted the CNIL, reporting that, from his personal space on the website, he could access other users’ uploaded files simply by slightly changing the URL. On September 7, 2018, an online check confirmed that rental candidates’ uploaded documents were indeed freely accessible to others without prior authentication. Among the documents were copies of identity cards, health cards, tax notices and divorce judgements. The CNIL informed Sergic of this security incident and the resulting personal data breach on the same day. It emerged that Sergic had been aware of the flaw since March 2018 and, even though it had initiated IT developments to correct it, the final fix was not deployed until September 17, 2018.

Based on the investigation, the responsible CNIL body found two violations of the GDPR. Firstly, Sergic had failed to fulfil its obligations under Art. 32 GDPR, which obliges controllers to implement appropriate technical and organizational measures to ensure a level of security appropriate to the risk. This includes, for example, a procedure ensuring that personal documents cannot be accessed without prior authentication of the user. The length of time the company took to correct the flaw also weighed against it.
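For illustration, here is a minimal sketch of the kind of check whose absence was at issue: the document route verifies both authentication and ownership before serving a file, so that merely changing the URL can no longer expose someone else's documents. This is a generic Flask-style example with hypothetical route, session handling and ownership lookup, not Sergic's actual stack.

```python
# Minimal sketch (hypothetical names throughout): serve an uploaded document
# only after checking authentication and ownership, so that manipulating the
# document ID in the URL cannot expose another user's files.
from flask import Flask, abort, send_file, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for session support

# Hypothetical ownership lookup: which user uploaded which document.
DOCUMENT_OWNERS = {"doc-1001": "alice", "doc-1002": "bob"}

@app.route("/documents/<doc_id>")
def get_document(doc_id):
    user = session.get("user_id")
    if user is None:
        abort(401)  # not authenticated: no access at all
    if DOCUMENT_OWNERS.get(doc_id) != user:
        abort(403)  # authenticated but not the owner: a changed URL must not help
    return send_file(f"/uploads/{doc_id}.pdf")
```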

Secondly, the CNIL found that Sergic kept all documents submitted by candidates in an active database, even where the candidates had not been allocated rental accommodation and the time required for allocating housing had long passed. Under the GDPR, the controller is obliged to delete data as soon as it is no longer necessary for the purposes for which it was collected or otherwise processed and no other purpose justifies keeping it in an active database.

The CNIL imposed a fine of € 400.000 and decided to make its sanction public due, inter alia, to the seriousness of the breach, the lack of due diligence by the company and the fact that the documents revealed intimate aspects of people’s lives.


Royal family uses GDPR to protect their privacy

22. May 2019

Last week, Prince Harry and Meghan Markle claimed another victory in the royal family’s never-ending struggle with paparazzi photographers, securing “a substantial sum” in damages from an agency that had released intimate photos of the Oxfordshire home the Duke and Duchess of Sussex rented to the media. In a statement, Splash News apologized and acknowledged that the situation represented “an error of judgement”.

The paparazzi agency “Splash News” took photos and footage of the couple’s former Cotswolds home — including their living room, dining area, and bedroom — using a helicopter, and promptly sold them to various news outlets. Prince Harry’s lawyers argued that this constituted a breach of his right to privacy under Art. 7 and 8 of the EU Charter of Fundamental Rights as well as a breach of the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA).

Considering the strategy of the Duke’s lawyers, it looks like the royal family has found a potentially attractive alternative to claims of defamation or invasion of privacy: in contrast to such claims, a claimant relying on data protection law needs to prove neither that a statement is defamatory and meets the threshold of serious harm to reputation, nor that the information in question is private.

However, the (new) European data protection legislation grants all data subjects, regardless of their position and/or fame, a right to respect for their private and family lives and to protection of their personal data. In particular, Article 5 GDPR requires organisations to handle personal data (such as names, pictures and stories relating to a person) fairly and in a transparent manner, and to use it only for legitimate purposes.

Moreover, an organization using pictures and footage of an individual’s private or even intimate sphere needs a specific legal basis, such as a contract, the individual’s consent, or the ability to argue that using the material is “in the public interest” or serves a “legitimate interest”. As a contract and consent can be ruled out here, the only bases that might be considered are a public interest or a legitimate interest of the organization itself. Taking into account the means by which these photos and footage of the Duke and Duchess were created, neither of these interests can outweigh the interest in protecting the rights and freedoms of individuals’ private and intimate spheres.

In light of this case, it seems quite likely that the European data protection regime has changed the way in which celebrities and the courts handle the heavily contested question of which parts and aspects of famous people’s lives the public is allowed to see and be informed about.

Public availability of house images on Google Street View raises legal concerns

21. May 2019

In recent years, the science of data analytics has dramatically improved the ability to analyse raw data and to draw conclusions from it. Data analytics techniques can reveal trends and patterns that can be used to optimize processes and increase the overall efficiency of a business or system. However, there is an obvious tension between the widespread use of big data and its security and privacy.
Google Street View is a popular Google service used by millions of people every day to plan trips, explore tourist destinations and more.
In 2017, two university researchers, Łukasz Kidziński of Stanford University in California and Kinga Kita-Wojciechowska of the University of Warsaw in Poland, used Street View images of people’s houses to determine how likely the residents are to be involved in a car accident.
The researchers worked with an undisclosed insurance company and analysed 20.000 random addresses of clients who had taken out car insurance. They collected information from the insurance company’s database, such as age, sex, zip code and claim history, and linked it with Street View images of the policyholders’ residences. It turned out that a policyholder’s residence is a surprisingly good predictor of the likelihood that he or she will be involved in a car accident.
Subsequently, the researchers fed those image-derived features into a data analytics algorithm, which improved its predictive power by 2%. They also noted that the accuracy of the algorithm could be further improved using larger data sets.
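In spirit, the setup can be reproduced with a few lines of standard tooling. The sketch below compares a claim-risk model with and without image-derived features; all column names, the feature set and the model choice are hypothetical, as the researchers’ actual pipeline is not described in that level of detail.

```python
# Sketch of the study's approach: augment a claim-risk model with features
# derived from Street View images of the insured address. Column names and
# features are hypothetical; the researchers' real pipeline is not public.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

policies = pd.read_csv("policies.csv")      # age, sex, zip code, claim history, ...
house_features = pd.read_csv("houses.csv")  # per-address labels annotated from Street View

# Link each policy with the image-derived features of the insured address.
data = policies.merge(house_features, on="address_id")

baseline_cols = ["age", "sex_male", "zip_risk_score"]
image_cols = ["house_age_estimate", "house_condition_score", "is_detached"]
y = data["had_claim"]

# Baseline model: traditional underwriting variables only.
baseline_auc = cross_val_score(
    LogisticRegression(max_iter=1000), data[baseline_cols], y,
    cv=5, scoring="roc_auc").mean()

# Augmented model: the same variables plus the Street View features.
augmented_auc = cross_val_score(
    LogisticRegression(max_iter=1000), data[baseline_cols + image_cols], y,
    cv=5, scoring="roc_auc").mean()

print(f"baseline AUC {baseline_auc:.3f} -> augmented AUC {augmented_auc:.3f}")
```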
Insurance companies rely on data to predict risk, and from this perspective the results of the research are impressive, but they are also disturbing. This new use of the technology is an important step towards improving risk prediction models. However, bearing the results in mind, some interesting data protection questions come up: Did the policyholders consent to this activity? Could the insurance company use individuals’ data this way given Europe’s strict privacy legislation? “The consent given by the clients to the company to store their addresses does not necessarily mean a consent to store information about the appearance of their houses,” said Kidziński and Kita-Wojciechowska.
Studies such as these raise data protection questions about the power of data analysis and about how information is collected and shared.

The global competition for Artificial Intelligence – Is it Time to Regulate Now?

This year’s edition of the European Identity & Cloud Conference 2019 took place last week.
In the context of this event, various questions relevant from a data protection perspective arose. Dr. Karsten Kinast, Managing Director of KINAST Attorneys at Law and Fellow Analyst of the organizer KuppingerCole, gave a keynote speech on the question of whether internationally uniform regulations should be created in the context of a global competition for artificial intelligence (AI). Dr. Kinast outlined the controversial debate about the danger of future AI on the one hand and the resulting legal problems and solutions on the other. At present, there is no form of AI that understands the concrete content of its processing. Moreover, AI has not yet been able to draw any independent conclusions from a processing operation or even base autonomous decisions on it. Furthermore, from today’s perspective it is not even known how such a synthetic intelligence could be created.
For this reason, the primary issue (for now) is not developing a code of ethics within which AIs could act as independent subjects. Rather, from today’s perspective, it is a far more mundane question of assigning responsibilities.

The entire lecture can be found here.

EDPB: One year – 90.000 Data Breach Notifications

20. May 2019

On the occasion of the GDPR’s first anniversary, the EDPB published a new report that looks back on the GDPR’s first year.

Among other findings, the EDPB states that the national supervisory authorities received a total of 281.088 cases: 89.271 data breach notifications, 144.376 GDPR-related complaints and 47.441 other cases. Three months ago, the total stood at 206.326 cases: 64.484 data breach notifications, 94.622 GDPR-related complaints from data subjects and 47.020 others. These figures show that the caseload has risen considerably over the last three months.
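The growth implied by these figures can be made explicit with a quick calculation (the numbers are taken directly from the two reports quoted above):

```python
# Three-month growth implied by the EDPB figures quoted above
# (one-year report vs. the nine-month report).
current = {"breach notifications": 89_271, "complaints": 144_376, "other": 47_441}
previous = {"breach notifications": 64_484, "complaints": 94_622, "other": 47_020}

for category, now in current.items():
    delta = now - previous[category]
    print(f"{category}: +{delta:,} ({delta / previous[category]:.0%} in three months)")

print(f"total: {sum(previous.values()):,} -> {sum(current.values()):,}")
```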

At the time of the EDPB report, 37% of the cases were ongoing and in 0,1% the fined companies had appealed against the decision of the supervisory authority; the remaining 62,9% had already been closed. In other words, in contrast to the nine-month report, roughly two thirds of the cases have now been processed. Three months ago, only 52% were closed.

According to the EDPB report from three months ago, fines totalling € 55.955.871 had been imposed by 11 authorities for the detected violations. Regarding this high sum, however, it must be noted that € 50 million of it was imposed on Google alone. The current EDPB report does not include a section on fines.

All in all, the increase in queries and complaints compared to previous years confirms the heightened awareness of data protection. According to the Eurobarometer, 67% of EU citizens have heard of the GDPR, 36% indicated that they are aware of what the GDPR entails, and 57% know of the existence of a public data protection authority.

New Jersey changes data breach law to extend it to online account information

On May 10, 2019, Phil Murphy, Governor of New Jersey, signed a bill amending the law regarding notification of data breaches in New Jersey. The purpose of the amendment is to extend the definition of personal data to include online account information.

The amendment requires companies subject to the law to notify New Jersey residents of security breaches concerning the user name, e-mail address or other account holder identifying information.

The amendment states that companies must notify customers affected by breaches of such information electronically or otherwise, and instruct them to promptly change any password and security question or answer, or to take other appropriate measures to protect the online account they hold with the company. The same applies to all other online accounts for which the customer uses the same username or e-mail address and password, or the same security question and answer.

In addition, the amended law prohibits a company from sending the notification to the very e-mail account affected by the security breach. Instead, notification must be provided in another legally permitted manner or by a clear and conspicuous notice delivered online while the customer is connected to the online account, from an IP address or online location from which the company knows the customer regularly accesses the account.

The amendment will take effect on 1 September 2019.

San Francisco took a stand against the use of facial recognition technology

15. May 2019

San Francisco is the first major city in the US to have banned the use of facial recognition software by the authorities. The Board of Supervisors decided on 14th May that the risk of violating civil rights through such technology far outweighs its claimed benefits. According to the vote, the municipal police and other municipal authorities may not acquire, possess or use any facial recognition technology in the future.

The proposal rests on the concern that facial recognition software threatens to exacerbate racial injustice and to undermine “the ability to live free from constant monitoring by the government”. Civil rights advocates and researchers warn that the technology could easily be misused to monitor immigrants or to unjustly target African-Americans or low-income neighborhoods should governmental oversight fail.

According to Aaron Peskin, the city supervisor who sponsored the bill, the decision sends a particularly strong message to the nation, coming from a city transformed by tech. The ban is part of broader legislation aimed at restricting the use of surveillance technologies. However, airports, ports and other facilities operated by federal authorities, as well as businesses and private users, are explicitly excluded from the ban.

Twitter shared location data on iOS devices

Twitter recently published a statement admitting that the app shared location data on iOS devices even if the user had not turned on the “precise location” feature.

The problem occurred when a user used more than one Twitter account on the same iOS device. If he or she had opted into the “precise location” feature for one account, it was also turned on when using another account, even if the user had not opted in for that account. The real-time location information was then passed on to trusted partners of Twitter. However, due to technical measures, only the postcode or an area of five square kilometres was passed on to the partners. Twitter handles or other “Unique Account IDs” that would reveal the identity of the user were allegedly not transmitted.
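Twitter has not published the exact mechanism behind this coarsening. One common approach, shown below as a minimal sketch, is to snap precise coordinates to a coarse grid so that only an area of a few square kilometres is revealed; the function name and grid size are illustrative.

```python
# Minimal sketch of location coarsening: snap precise coordinates to a
# coarse grid so only an area of a few square kilometres is revealed.
# Twitter has not published its actual mechanism; this is illustrative only.
def coarsen(lat: float, lon: float, cell_deg: float = 0.02) -> tuple:
    """Snap coordinates to the nearest point on a coarse grid.

    0.02 degrees of latitude is roughly 2.2 km, so each grid cell covers
    an area on the order of the five square kilometres mentioned in
    Twitter's statement (longitude cells shrink towards the poles).
    """
    snap = lambda v: round(round(v / cell_deg) * cell_deg, 4)
    return (snap(lat), snap(lon))

# Example: a precise device location is reduced before being shared.
print(coarsen(37.77695, -122.41662))  # -> (37.78, -122.42)
```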

According to Twitter’s statement, they have fixed the problem and informed the affected users: “We’re very sorry this happened. We recognize and appreciate the trust you place in us and are committed to earning that trust every day”.
