Tag: USA

New Jersey changes data breach law to extend it to online account information

20. May 2019

On May 10, 2019, Phil Murphy, Governor of New Jersey, signed a bill amending New Jersey's data breach notification law. The amendment extends the definition of personal information to include online account information.

The amendment requires companies subject to the law to notify New Jersey residents of security breaches involving a user name, e-mail address, or other account-holder identifying information.

The amendment states that companies should notify customers affected by breaches of such information electronically or otherwise, and instruct them to promptly change any password and security question or answer, or to take other appropriate steps to protect their online account with the company. The same applies to all other online accounts for which the customer uses the same user name or e-mail address and password, or the same security question and answer.

In addition, the amended law prohibits companies from sending the notification to the e-mail account affected by the security breach. Instead, notification must be provided in another legally permitted manner, or by a clear and conspicuous notice delivered online when the customer is connected to the account from an IP address or online location from which the company knows the customer regularly accesses the account.

The amendment will take effect on September 1, 2019.

San Francisco took a stand against use of facial recognition technology

15. May 2019

San Francisco is the first major city in the US to ban the use of facial recognition software by the authorities. The Board of Supervisors decided on May 14 that the risk of civil rights violations posed by such technology far outweighs its claimed benefits. Under the vote, the municipal police and other municipal agencies may not acquire, hold, or use any facial recognition technology in the future.

The proposal stems from concerns that facial recognition software threatens to exacerbate racial injustice and to erode "the ability to live free from constant monitoring by the government". Civil rights advocates and researchers warn that the technology could easily be misused to monitor immigrants or to unjustly target African-Americans or low-income neighborhoods should government oversight fail.

Aaron Peskin, the city supervisor who sponsored the bill, said it sent a particularly strong message to the nation, coming from a city transformed by tech. The ban is part of broader legislation aiming to restrict the use of surveillance technologies. However, airports, ports, and other facilities operated by federal authorities, as well as businesses and private users, are explicitly excluded from the ban.

Aetna to pay fine for HIV privacy breach

31. January 2019

Healthcare insurer Aetna will have to pay a $935,000 fine after letters sent to nearly 12,000 patients in 2017 disclosed highly sensitive information through the windows of the envelopes.

The information revealed that the recipients were taking HIV-related medications.

In addition, the insurance company will have to complete privacy risk assessments annually for three years.

The patients have received compensation through a private class action settlement.

 

Yahoo agreed to pay US$ 85 million after data breaches in 2013 and 2014

24. October 2018

As part of a court settlement filed Monday, Yahoo agreed to pay $50 million in damages and to provide two years of free credit-monitoring services to 200 million people.

Around 3 billion Yahoo accounts were hacked in 2013 and 2014, but the company, which is now owned by Verizon, did not disclose the breaches until 2016. Affected are U.S. and Israeli residents and small businesses that held Yahoo accounts at any time from January 1, 2012 to December 31, 2016. Apart from user names and email addresses, millions of birth dates and security questions and answers were stolen. Passwords, credit card numbers, and bank account information were not among the stolen data.

According to the settlement, the fund will compensate account holders who paid for email services, who had out-of-pocket losses, or who already have credit-monitoring services. Compensation of $25 per hour will be paid for time spent handling issues caused by the breach: those with documented losses can claim up to 15 hours of lost time ($375), whereas those who cannot document losses can claim up to 5 hours ($125).
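The lost-time compensation described above reduces to a simple capped formula. The sketch below illustrates it; the $25 hourly rate and the 15-hour/5-hour caps come from the settlement terms as reported, while the function name and structure are purely illustrative and not part of the settlement itself.

```python
# Illustrative sketch of the Yahoo settlement's lost-time compensation:
# $25 per hour, capped at 15 hours with documented losses ($375 max)
# or 5 hours without documentation ($125 max).
# The function name and structure are hypothetical.

HOURLY_RATE = 25
DOCUMENTED_CAP_HOURS = 15
UNDOCUMENTED_CAP_HOURS = 5

def lost_time_compensation(hours_claimed: float, documented: bool) -> float:
    """Return the maximum lost-time payout for a single claim."""
    cap = DOCUMENTED_CAP_HOURS if documented else UNDOCUMENTED_CAP_HOURS
    return min(hours_claimed, cap) * HOURLY_RATE

print(lost_time_compensation(20, documented=True))   # capped at 15 h -> 375
print(lost_time_compensation(3, documented=False))   # 3 h -> 75
```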

A hearing to approve the preliminary settlement is scheduled for November 29.

Nationwide: multistate data breach investigation settled by paying $ 5.5 million

11. August 2017

According to Hunton & Williams, on August 9 Nationwide Mutual Insurance Company (“Nationwide”) agreed to pay $5.5 million to settle a data breach investigation by attorneys general from 32 states, concerning a breach that exposed the personal data of about 1.2 million individuals. Hunton & Williams also published the settlement.

In October 2012, Nationwide and its wholly-owned subsidiary Allied Property & Casualty Insurance Company (“Allied”) experienced a data breach that led to unauthorized access to and exfiltration of certain personal data of their customers, as well as other consumers. In order to provide customers with insurance quotes, Nationwide and Allied collect, inter alia, the following personal data: full name, Social Security number, date of birth, and credit-related score.

The attorneys general alleged that the data breach occurred when hackers exploited a vulnerability in the companies’ web application hosting software. It is further alleged that, only after the data was exfiltrated, Nationwide and Allied applied a previously available but unapplied software patch to address the vulnerability.

In addition to the $5.5 million payment, Nationwide and Allied agreed to implement a series of steps to update their security practices. Among other measures listed in the settlement, a technology officer is to be appointed to manage and monitor security and software updates, ensuring that future patches and other security updates are applied.

Ten relevant practical consequences of the upcoming General Data Protection Regulation

22. January 2016

After several rounds of negotiations, the European Parliament, the Council of the European Union and the European Commission finally reached a consensus in December 2015 on the final version of the General Data Protection Regulation (GDPR), which is expected to be approved by the European Parliament in April 2016. The consolidated text of the GDPR involves the following practical consequences:

1) Age of data subject’s consent: although specific, freely-given, informed and unambiguous consent was also required under the Data Protection Directive (95/46/EC), the GDPR sets the minimum age for providing legal consent to the processing of personal data at 16 years. Nevertheless, each EU Member State can determine a different age of consent for the processing of personal data, which must not be below 13 years (Arts. 7 and 8 GDPR).

2) Appointment of a Data Protection Officer (DPO): the appointment of a DPO will be mandatory for public authorities and for data controllers whose main activity involves regular monitoring of data subjects on a large scale or the processing of sensitive personal data (religion, health matters, origin, race, etc.). The DPO should have expert knowledge of data protection in order to ensure compliance, to give advice and to cooperate with the DPA. A group of subsidiaries may appoint a single DPO, provided he/she is accessible from each establishment (Art. 35 ff. GDPR).

3) Cross-border data transfers: personal data transfers outside the EU may only take place if a Commission adequacy decision is in place, if adequate guarantees regarding the protection of personal data are provided (for example by signing Standard Contractual Clauses), or if binding corporate rules have been approved by the competent Data Protection Authority (Art. 41 ff. GDPR).

4) Data security: the data controller should identify any existing risks regarding the processing of personal data and implement adequate technical and organizational security measures accordingly (Art. 23 GDPR). The GDPR imposes strict standards on data security and on the responsibility of both data controller and data processor. Security measures should be implemented taking into account the state of the art and the costs involved (Art. 30 GDPR). Examples of security measures include pseudonymization and encryption, and ensuring confidentiality, data access control, data availability and data integrity.

5) Notification of personal data breaches: data breaches are defined and regulated for the first time in the GDPR (Arts. 31 and 32). If a data breach occurs, data controllers are obliged to notify the breach to the competent Data Protection Authority within 72 hours of becoming aware of it. In some cases, an additional notification to the affected data subjects may be mandatory, for example where sensitive data is involved.

6) One-stop-shop: if a company has several establishments across the EU, the competent Data Protection Authority will be the one where the controller’s or processor’s main establishment is located. If an issue affects only a certain establishment, the competent DPA is the one where that establishment is located.

7) Risk-based approach: several compliance obligations are only applicable to data processing activities that involve a risk for data subjects.

8) The role of the Data Protection Authorities (DPAs): the role of the DPAs will be strengthened. They will be empowered to impose fines for non-compliance. The cooperation between the DPAs of the different Member States will also be reinforced.

9) Right to be forgotten: following the judgment of the ECJ from May 2014, the right to be forgotten has been consolidated in Art. 17 GDPR. The data subject has the right to request from the data controller the erasure of his/her personal data if certain requirements are fulfilled.

10) Data Protection Impact Assessment (PIA): this assessment should be conducted by the organization with the support of the DPO. Such an assessment should be part of every organization’s strategy. A PIA should be carried out before starting any data processing operations (Art. 33 GDPR).