
Series on COVID-19 Contact Tracing Apps Part 3: Data Protection Issues

28. May 2020

In today’s blog post, we will finish the miniseries on COVID-19 contact tracing apps with a final part on the issues they create with regard to data protection and users’ privacy. As we presented in the first part of this series, different approaches to contact tracing apps are in use or under development in different countries. These approaches raise different data protection issues, some of which can, in the European Union, be mitigated by following data protection regulations and the guidelines published by the European Data Protection Board, which we presented in the second part of this series.

The data protection issues that arise with COVID-19 contact tracing apps, and their impact, depend heavily on the design of the apps and the APIs they build on. However, due to the sensitivity of the data processed, there are common points that can cause privacy problems in all contact tracing apps.

The biggest risks of contact tracing apps

While contact tracing apps have the potential to pose risks to data protection and their users’ privacy across all aspects of data protection, the following are the risks politicians, scientists and users are most worried about:

  • The risk of loss of trust
  • The risk of unauthorized access
  • The risk of processing too much data
  • The risk of abuse of the personal data collected

The risk of loss of trust: In order to work properly and reach the effectiveness necessary to contain the spread of the virus and break the chain of transmission, at least 60% of a country’s population has to use the contact tracing apps properly, as scientists and researchers have pointed out. For this to happen, user satisfaction and trust in the app and its use of their personal data have to remain high. Much of the research on the issue shares the concern that a lack of transparency in the development of the apps, as well as in regard to the data they collect and process, might make the population sceptical and distrustful of the technologies being developed. The European Data Protection Board (EDPB) as well as the European Parliament have stated that in order for contact tracing apps to be data protection compliant, their development as well as their processing of data need to be transparent throughout the entire use of the apps.

The risk of unauthorized access: While the risk that the apps and the data they process can be hacked is relatively low, there is the concern that in some cases unauthorized access may result in a major privacy issue. Especially in contact tracing apps that use GPS location data, as well as apps that use a centralized approach to storing the data processed, the risk of unauthorized access is higher because the information is readily available. In the case of GPS data, it is easy to track users’ movements, allowing their behaviour to be analysed in great detail. Centralized storage keeps all the collected data in one cloud space, which in the case of a hacking incident may give easy access not only to information about social behaviour and health details but, if combined with GPS tracking data, also to easily identifiable user behaviour profiles. It has therefore been recommended to conduct a Data Protection Impact Assessment before launching the apps and to ensure that the encryption standards are high. The Bluetooth method, in which phones ping each other anonymized IDs that change every 15 minutes whenever they come closer than 10 feet, has been recommended as the ideal technology to minimize the collection of location data. Furthermore, most scientists and researchers recommend a decentralized storage method as better suited to protect the users’ data, as this method stores the information only on the users’ devices instead of in a central cloud.
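To illustrate the approach described above, here is a minimal sketch, in Python, of how such a decentralized rotating-ID scheme could work in principle. This is only an illustration of the general idea, not the actual Apple/Google API or any national implementation; the hash-based ID derivation, the key handling and all names are our own assumptions.

```python
import os
import time
import hashlib

# Illustrative sketch of a decentralized contact tracing scheme: each
# phone broadcasts short-lived anonymized IDs derived from a local
# secret key and stores only the IDs it observes nearby.

ROTATION_INTERVAL = 15 * 60  # anonymized IDs change every 15 minutes

def rolling_id(secret_key: bytes, now: float) -> bytes:
    """Derive the anonymized ID for the current 15-minute window."""
    window = int(now // ROTATION_INTERVAL)
    return hashlib.sha256(secret_key + window.to_bytes(8, "big")).digest()[:16]

class Device:
    def __init__(self) -> None:
        self.secret_key = os.urandom(32)   # never leaves the device
        self.observed: set[bytes] = set()  # IDs seen via Bluetooth contact

    def broadcast(self) -> bytes:
        # What nearby phones receive: an ID that reveals nothing about
        # the user and goes stale after 15 minutes.
        return rolling_id(self.secret_key, time.time())

    def on_contact(self, other_id: bytes) -> None:
        # Decentralized storage: only the anonymized ID is logged,
        # locally, and no location data is recorded.
        self.observed.add(other_id)

    def check_exposure(self, infected_keys: list[bytes],
                       start: float, end: float) -> bool:
        # When an infected user consents, only their secret keys are
        # published; every phone re-derives the corresponding IDs
        # locally and compares them against its own contact log.
        t = start
        while t <= end:
            if any(rolling_id(key, t) in self.observed
                   for key in infected_keys):
                return True
            t += ROTATION_INTERVAL
        return False
```

In this sketch, a server never sees who met whom: if device A logged device B’s broadcasts and B later publishes its key, `A.check_exposure([B.secret_key], start, end)` finds the match entirely on A’s device, which is exactly why researchers consider the decentralized model less exposed to unauthorized access.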

The risk of processing too much data: One of the big risks of contact tracing apps is the processing of too much data. This issue arises with apps that use GPS location tracking, or that collect sensitive health data beyond the COVID-19 infection status, transactional information, contact lists, and so on. In general, contact tracing apps should not require much information beyond the user’s contact details, since it is only necessary to log the other devices their device has come into contact with. However, some countries use contact tracing apps based on GPS location tracking instead of a Bluetooth exchange of IDs, in which case the location data and movements of the user are automatically recorded. Other countries, for example India, have launched apps that process additional health data as well as other information unnecessary for following up on the contact tracing. Contact tracing apps should follow the principle of data minimization to ensure that only the personal data necessary for the purpose of contact tracing is processed; that is also one of the important ground rules the EDPB has set out in its guidelines on the subject. However, different countries have different data protection laws, which makes a unified approach to handling personal data difficult in cases like these.
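To make the minimization principle concrete, here is a small, purely hypothetical sketch contrasting a contact log entry limited to what exposure follow-up actually requires with one that over-collects. All field names are our own illustrations and do not reflect any real app’s data schema.

```python
from dataclasses import dataclass

# Hypothetical illustration of data minimization in a contact tracing
# app. Field names are illustrative only; no real app's schema is implied.

@dataclass
class MinimalContactEvent:
    # Sufficient to follow up on an exposure: which anonymized ID was
    # seen, when, and how long the contact lasted.
    observed_id: bytes       # rotating anonymized ID, not a real identity
    timestamp: int           # Unix time of the contact
    duration_seconds: int    # how long the devices stayed in range

@dataclass
class ExcessiveContactEvent(MinimalContactEvent):
    # Fields like these exceed the purpose of contact tracing and would
    # conflict with the minimization principle described above.
    gps_latitude: float
    gps_longitude: float
    full_name: str
    health_history: str
```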

The risk of abuse of the personal data collected: One of the biggest fears of scientists and users regarding contact tracing apps is the risk that the personal data collected will be abused once the pandemic is over. Especially with centralized storage, there are already apps that give the government access to the data, as in India, Hong Kong and Singapore. Many critics are demanding regulation to ensure that the data cannot be used after the pandemic is over and the need for the apps has ceased. This risk is particularly high for tracing apps that locate the user through GPS location tracking rather than through Bluetooth technology, since the recorded movements of the devices allow very detailed and easily analysable tracking of the users. This potential risk is one of the most prominent ones surrounding the joint Apple and Google contact tracing API, as both companies have been known to face severe data protection issues in the past. However, both companies have stated that they plan on completely discontinuing the API once the pandemic is over, which would disable the apps working with it. Since the Bluetooth approach they are using stores the data on users’ devices, the data will be locked and inaccessible once the API can no longer read it. But there are still many other countries with their own APIs and apps, which may lead to a risk of government surveillance and even abuse by foreign powers. For Europe, the EDPB and the European Parliament have clearly stated that the data must be deleted and the apps dismantled once they are no longer necessary, as the purpose and legal basis for the processing will no longer apply once the pandemic is under control.

The bottom line

Needless to say, the pandemic has driven the need for new technologies and approaches to handle the spread of viruses. In a modern world, however, this brings risks to the personal data used to contain the pandemic and break the chain of transmission, especially because it is not only a nationwide but also an international effort. It is important for users to keep in mind that their right to privacy is not entirely overridden by the public interest in containing the virus. To keep this balance, it is important that contact tracing apps face criticism and are developed in a way that complies with data protection regulations, in order to minimize the potential risks that come with the new technology. This is the only way to ensure that people’s personal freedom and private lives do not take a heavy toll from the potential attacks that could result from these risks. Transparency is the bottom line of these projects: it ensures that regulations are met and that people’s trust is maintained, which is what the tracing apps need to reach the effectiveness required to fulfil their purpose.

easyJet Data Breach: 9 million customers affected

22. May 2020

The British airline ‘easyJet’ has been hacked. The hackers were able to access the personal data of approximately 9 million customers.

easyJet published a statement on the hacker attack and announced that e-mail addresses and travel details were among the affected personal data of customers. Exactly which personal data count as ‘travel details’ was not disclosed. In some cases, the hackers were also able to access credit card data. easyJet stated that there is no proof that the accessed personal data has been abused. easyJet now warns about fake e-mails sent in its name as well as in the name of ‘easyJet Holidays’.

The hack was noticed by easyJet in January but was only made public this week. Upon becoming aware of the attack, easyJet took several measures and has in the meantime blocked the unauthorized access. easyJet is also in contact with the British Data Protection Authority ‘ICO’ and the National Cyber Security Centre.

At this time, easyJet has not yet been able to determine how the attack occurred, but easyJet explained that it was not an ordinary hacker attack, as it was very sophisticated compared to other attacks. It is suspected that the attack originated from a group that has already hacked other airlines, such as British Airways in 2018.

easyJet announced that it will contact affected data subjects by May 26th to inform them about the breach and to explain further measures that should be taken in order to reduce the risk. easyJet customers who have not received a notification by then are not affected by the breach.

In connection with hacker attacks like these, the risk of subsequent phishing attacks is highest. In phishing attacks, criminals use fake e-mails, for example on behalf of well-known companies or authorities, to try to persuade users to hand over personal data or to click on prepared e-mail attachments containing malware.

Zoom agrees on security and privacy measures with NY Attorney General

13. May 2020

Due to the COVID-19 pandemic, Zoom has seen an exponential surge in new users over the past two months. As we have mentioned in a previous blog post, this increase in activity highlighted a range of different issues and concerns both on the security and on the privacy side of the teleconference platform.

In light of these issues, which induced a wave of caution around the use of Zoom, with many companies, schools, religious institutions and governmental departments urging a stop to the use of the platform, Zoom has agreed to enhance its security measures and privacy standards.

In the Agreement struck on May 7th with New York Attorney General Letitia James, Zoom has committed to several new measures it will put in place over the coming weeks. Most of these enhancements, however, had already been announced in CEO Eric Yuan’s “90-day plan” published on April 1st and have gradually been put into effect.

These measures include:

  • a new data security program,
  • the conduct of risk assessment reviews,
  • enhanced encryption protocols,
  • a default password for every meeting,
  • a halt to sharing user data with Facebook.

In response to the Agreement being struck, Attorney General James stated: “Our lives have inexorably changed over the past two months, and while Zoom has provided an invaluable service, it unacceptably did so without critical security protections. This agreement puts protections in place so that Zoom users have control over their privacy and security, and so that workplaces, schools, religious institutions, and consumers don’t have to worry while participating in a video call.”

A day prior, Zoom was also reinstated for use in online classes by the New York City Department of Education. In order to ensure the privacy of students and counteract “Zoombombing”, Zoom has agreed to enhanced privacy controls for free accounts as well as for kindergarten through 12th grade education accounts. Hosts, even those with free accounts, will by default be able to control access to their video conferences by requiring a password or by placing users in a digital waiting room before a meeting can be accessed.

This is not the only new addition to the controls that hosts will be able to access: they will also be able to control access to private messages in a Zoom chat, control access to email domains in a Zoom directory, decide who can share screens, and more.

Overall, Zoom stated that it was happy to have been able to reach a resolution with the Attorney General quickly. It remains to be seen how the measures it is implementing will hold up to the still-growing audience, and how quickly they can be rolled out for worldwide use.

EDPS publishes opinion on future EU-UK partnership

3. March 2020

On 24 February 2020, the European Data Protection Supervisor (EDPS) published an opinion on the opening of negotiations for the future partnership between the EU and the UK with regards to personal data protection.

In his opinion, the EDPS points out the importance of commitments to fully respect fundamental rights in the envisaged comprehensive future partnership. Especially with regard to the protection of personal data, the partnership shall uphold the high level of protection of the EU’s personal data rules.

With respect to the transfer of personal data, the EDPS further expresses support for the EU Commission’s recommendation to work towards the adoption of adequacy decisions for the UK if the relevant conditions are met. However, the Commission must ensure that the UK does not lower its data protection standards below the EU standard after the Brexit transition period. Lastly, the EDPS recommends that the EU institutions also prepare for a potential scenario in which no adequacy decisions exist by the end of the transition period on 31 December 2020.

US Lawmakers to introduce bill that restricts Government Surveillance

3. February 2020

On Thursday, January 23rd, a bipartisan group of US lawmakers revealed legislation that would reduce the scope of the National Security Agency’s (NSA) warrantless internet and telephone surveillance program.

The bill aims to reform Section 215 of the PATRIOT Act, which expires on March 15, and to prevent abuses of the Foreign Intelligence Surveillance Act. Under the PATRIOT Act, the NSA can run a secret mass surveillance program that taps into the internet data and telephone records of American residents. Furthermore, the Foreign Intelligence Surveillance Act allows U.S. intelligence agencies to eavesdrop on and store vast amounts of digital communications from foreign suspects living outside the United States, with American citizens often caught in the crosshairs.

The newly introduced bill contains a number of reforms, such as prohibiting the warrantless collection of cell site location data, GPS information, browsing history and internet search history; ending the authority for the NSA’s massive phone record program, which was disclosed by Edward Snowden; establishing a three-year limit on the retention of information that is neither foreign intelligence nor evidence of a crime; and more.

This new legislation is seen favorably by national civil rights groups and Democrats, who hope the bill will stop the continuous infringement of the Fourth Amendment of the American Constitution in the name of national security.

UK: Betting companies had access to millions of data of children

28. January 2020

In the UK, betting companies have gained access to the data of 28 million children under 14 and adolescents. The data was stored in a government database and was meant to be used for learning purposes. Access to the platform is granted by the government. A company that was given access is said to have illegally passed it on to another company, which in turn allowed the betting companies access. The betting providers used the access, among other things, to check age information online. The company accused of passing on the access denies the allegations but has not yet made any more specific statements.

The British Department for Education speaks of an unacceptable situation. All access points have been closed and the cooperation has been terminated.


Washington State Lawmakers Propose new Privacy Bill

23. January 2020

In January 2020, Washington lawmakers introduced a bill that would give state residents new privacy rights: the “Washington Privacy Act” (WPA).

If passed, the Privacy Act would enact a comprehensive data protection framework for Washington that includes individual rights which are very similar to, and in part go beyond, the rights in the California Consumer Privacy Act (CCPA), as well as a range of other obligations on businesses that do not yet exist in any U.S. privacy law.

Furthermore, the new draft bill contains strong provisions that largely align with the EU’s General Data Protection Regulation (GDPR), and commercial facial recognition provisions that start with a legal default of affirmative consent. Nonetheless, legislators must work within a remarkably short time-frame to pass a law that can be embraced by both House and Senate within the next six weeks of Washington’s legislative session. If passed, the bill would go into effect on July 31, 2021.

The current draft provides data protection to all Washington State residents and would apply to entities that conduct business in Washington or produce products or services targeted to Washington residents. Such entities must control or process the data of at least 100,000 consumers, or derive 50% of their gross revenue from the sale of personal data and process or control the personal data of at least 25,000 consumers (with “consumers” defined as natural persons who are Washington residents acting in an individual or household context). The draft bill would not apply to state and local governments or municipal corporations. The new bill would further provide all state residents, among other rights, the ability to opt out of targeted advertising.

The new draft bill would regulate companies that process “personal data,” defined broadly as “any information that is linked or reasonably linkable to an identified or identifiable natural person” (excluding de-identified data and publicly available information, i.e. “information that is lawfully made available from federal, state, or local government records”), with specific provisions for pseudonymous data.


National Retailer fined £500,000 by ICO

10. January 2020

The Information Commissioner’s Office (ICO) – the UK’s Data Protection Authority – has fined the national retailer ‘DSG Retail Limited’ £500,000 for failing to secure the information of at least 14 million people after a computer system was compromised as a result of a cyberattack.

An investigation by the ICO came to the conclusion that malware had been installed between July 2017 and April 2018 and collected personal data until the attack was detected. Due to DSG’s failures, the attacker had access to 5.6 million payment card details and further personal data, inter alia full names, postcodes and email addresses.

The fine was imposed for poor security arrangements and a failure to take adequate steps to protect personal data. It is based on the Data Protection Act 1998.

The ICO’s Director of Investigations, Steve Eckersley, said:

“Our investigation found systemic failures in the way DSG Retail Limited safeguarded personal data. It is very concerning that these failures related to basic, commonplace security measures, showing a complete disregard for the customers whose personal information was stolen. The contraventions in this case were so serious that we imposed the maximum penalty under the previous legislation, but the fine would inevitably have been much higher under the GDPR.”

The ICO considered the personal freedom of DSG’s customers to be at risk, as affected customers have reason to fear financial theft and identity fraud.


More US States are pushing on with new Privacy Legislation

3. January 2020

The California Consumer Privacy Act (CCPA) came into effect on January 1, 2020 and is the first step in the United States towards regulating data privacy on the Internet. Currently, the US does not have a federal-level general consumer data privacy law comparable to the privacy laws in EU countries or the supranational European GDPR.

But now, several other US States have taken inspiration from the CCPA and are in the process of bringing forth their own state legislation on consumer privacy protections on the Internet, including

  • The Massachusetts Data Privacy Law “S-120“,
  • The New York Privacy Act “S5642“,
  • The Hawaii Consumer Privacy Protection Act “SB 418“,
  • The Maryland Online Consumer Protection Act “SB 613“, and
  • The North Dakota Bill “HB 1485“.

Like the CCPA, most of these new privacy laws have a broad definition of the term “Personal Information” and are aimed at protecting consumer data by strengthening consumer rights.

However, the various law proposals differ in the scope of the consumer rights. All of them grant consumers the ‘right to access’ their data held by businesses. There will also be a ‘right to delete’ in most of these states, but only some give consumers a private ‘right of action’ for violations.

There are other differences with regard to the businesses that will be covered by the privacy laws. In some states, the proposed laws will apply to all businesses, while in other states they will only apply to businesses with yearly revenues of over 10 or 25 million US dollars.

As more US states begin to introduce privacy laws, there is an increasing possibility of a federal US privacy law in the near future. Proposals from several members of Congress already exist (Congresswomen Eshoo and Lofgren’s Proposal, Senators Cantwell/Schatz/Klobuchar/Markey’s Proposal, and Senator Wicker’s Proposal).

NIST examines the effect of demographic differences on face recognition

31. December 2019

As part of its Face Recognition Vendor Test (FRVT) program, the U.S. National Institute of Standards and Technology (NIST) conducted a study that evaluated face recognition algorithms submitted by industry and academic developers for their ability to perform various tasks. The study evaluated 189 software algorithms submitted by 99 developers, focusing on how well each algorithm performs one of two different tasks that are among the most common applications of face recognition.

The first task is “one-to-one” matching, i.e. confirming that a photo matches another photo of the same person in a database. This is used, for example, when unlocking a smartphone or checking a passport. The second task is “one-to-many” matching, i.e. determining whether the person in the photo matches any photo in a database. This is used to identify a person of interest.
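For readers less familiar with face recognition, the following is a minimal, hypothetical sketch of the two tasks, assuming that each photo has already been converted into a numerical embedding vector by some recognition model. The threshold value and all function names are illustrative assumptions, not anything taken from NIST’s FRVT.

```python
import numpy as np

# Illustrative sketch of the two matching tasks. Assumes face photos
# have already been turned into embedding vectors; names and the
# threshold are hypothetical, not taken from NIST's FRVT.

MATCH_THRESHOLD = 0.6  # similarity above this counts as a match (illustrative)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one(probe: np.ndarray, reference: np.ndarray) -> bool:
    """Verification: does the probe photo show the same person as the reference?"""
    return similarity(probe, reference) >= MATCH_THRESHOLD

def one_to_many(probe: np.ndarray, database: list[np.ndarray]) -> list[int]:
    """Identification: which database entries, if any, match the probe?"""
    return [i for i, ref in enumerate(database)
            if similarity(probe, ref) >= MATCH_THRESHOLD]
```

In this picture, a false match occurs when two different people’s embeddings exceed the threshold, which is precisely the kind of error the study found happening at different rates for different demographic groups.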

A special focus of this study was that it also looked at the performance of the individual algorithms taking demographic factors into account. For one-to-one matching, only a few previous studies examined demographic effects; for one-to-many matching, there were none.

To evaluate the algorithms, the NIST team used four photo collections containing 18.27 million images of 8.49 million people. All were taken from operational databases of the State Department, Department of Homeland Security and the FBI. The team did not use images taken directly from Internet sources such as social media or from video surveillance. The photos in the databases contained metadata information that indicated the age, gender, and either race or country of birth of the person.

The study found that the results ultimately depend on the algorithm at the heart of the system, the application that uses it, and the data it is fed. But the majority of face recognition algorithms exhibit demographic differences. In one-to-one matching, the algorithms rated photos of two different people as the same person more often when they were Asian or African-American than when they were white. In algorithms developed in the United States, the same error also occurred more often when the person was a Native American. In contrast, algorithms developed in Asia did not show such a significant difference in one-to-one matching results between Asian and Caucasian faces. These results indicate that algorithms can be trained to achieve accurate face recognition results by using a wide range of training data.
