SMS flaw lets hackers take control of individuals’ phones for $16

24. March 2021

Hackers have discovered a new method of gaining access to individuals’ text messages via rerouting, Vice reports. Apparently, all it takes is $16 paid to a third-party provider to reroute a person’s messages and then take over the phone number and, with it, various associated accounts.

All of that is possible due to a text messaging service called Sakari that allows businesses to send SMS reminders, alerts, confirmations and marketing campaigns. The company lets business users import their own phone numbers so that messages are sent from a number their customers recognize. However, the service has a significant security vulnerability. To use it, one merely purchases Sakari’s $16-per-month plan and fills out a document stating that the signer has the authority to switch the phone number. Although the document points out that the user must not engage in any unlawful, harassing or inappropriate behavior, there is no subsequent call or text notification from Sakari asking the number’s owner to confirm consent to the transfer. As a result, it is largely effortless to simply sign up with another person’s phone number and receive their text messages instead. From that moment on, it can be trivial to hack into other accounts associated with that phone number by sending login requests, as many of them rely on SMS codes.

This overlooked security flaw shows how frighteningly easy it is to gain access to the tools necessary to seize phone numbers. It requires less technical skill or knowledge than, for instance, SIM jacking. It demonstrates not only the insufficient regulation of commercial SMS tools but also gaping holes in the telecommunications infrastructure, since a hacker only needs to pretend to have the user’s consent.

The attack method has implications for cybercrime and poses an enormous threat to safety and security. It enables criminals to harass people, drain their bank accounts, tear through their digital lives or intercept sensitive information and personal secrets. At this time, it is not clear to what extent this attack method has actually been used against mobile numbers.

CTIA, a trade association representing the wireless industry, stated that it immediately launched an investigation into the matter and took precautionary measures. Adam Horsman, co-founder of Sakari, responded to the criticism of insufficient customer verification by saying that Sakari has added a security feature whereby a number to be imported receives an automated call asking its owner to confirm consent. Moreover, Sakari will verify all existing text-enabled numbers. But Sakari is just one company, and there are plenty of others in this industry. As this method raises serious concerns, it is important for mobile carriers to do more to protect their customers’ privacy and security, for example by notifying customers when their number is registered with a new service or by requiring two-factor authentication.
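To illustrate the kind of check Sakari reportedly added, here is a minimal sketch of a consent-confirmation flow. It is purely illustrative: the function names and the `place_automated_call` helper are invented for this sketch, not Sakari’s actual implementation.

```python
# Hypothetical sketch of confirming a number owner's consent
# before a transfer is enabled -- not Sakari's actual code.
import secrets

def place_automated_call(phone_number: str, message: str) -> None:
    # Stand-in for a telephony API (e.g. a text-to-speech call).
    print(f"Calling {phone_number}: {message}")

def request_number_import(phone_number: str) -> str:
    # Generate an unguessable one-time code and read it out to the
    # number's real owner, so only they can approve the transfer.
    code = f"{secrets.randbelow(10**6):06d}"
    place_automated_call(phone_number, f"Your transfer code is {code}")
    return code

def confirm_number_import(expected_code: str, entered_code: str) -> bool:
    # Rerouting is only activated if the owner relayed the code.
    return secrets.compare_digest(expected_code, entered_code)

expected = request_number_import("+15551234567")
print(confirm_number_import(expected, expected))  # True -> import allowed
```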

Data Breach made 136,000 COVID-19 test results publicly accessible

18. March 2021

Personal health data are considered a special category of personal data under Art. 9 of the GDPR and are therefore given special protection. A group of IT experts, including members of the German Chaos Computer Club (CCC), has now revealed security gaps in software for test centres, through which more than 136,000 COVID-19 test results of more than 80,000 data subjects were apparently left unprotected on the internet for weeks.

The IT security experts’ findings concern the software “SafePlay” of the Austrian company Medicus AI. Many test centres use this software to allocate appointments and to make test results digitally available to those tested. In fact, more than 100 test centres and mobile test teams in Germany and Austria are affected by the data breach. These include public facilities in Munich, Berlin and Mannheim as well as fixed and temporary testing stations in companies, schools and daycare centres.

In order to view the test results unlawfully, one only needed to create an account for a COVID-19 test. The URL for the test result contained the sequential number of the test. If this number was simply incremented or decremented, the “test certificates” of other people became freely accessible. In addition to the test result, the test certificate also contained the name, date of birth, private address, nationality and ID number of the person concerned.
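This is a textbook insecure direct object reference: the identifier in the URL is predictable. Below is a minimal sketch (not Medicus AI’s actual code; the domain and function names are invented) of the vulnerable pattern and the standard fix of using unguessable random tokens:

```python
# Hypothetical sketch: sequential test numbers in URLs can be
# enumerated by counting up or down; random tokens cannot.
import secrets

results: dict[str, dict] = {}  # token -> test result record

def result_url_insecure(test_number: int) -> str:
    # Vulnerable pattern: changing 123456 to 123457 in the URL
    # yields someone else's certificate.
    return f"https://testcentre.example/result/{test_number}"

def publish_result_secure(record: dict) -> str:
    # Safer pattern: a long random token is infeasible to guess
    # and cannot be enumerated.
    token = secrets.token_urlsafe(32)
    results[token] = record
    return f"https://testcentre.example/result/{token}"

print(result_url_insecure(123456))
print(publish_result_secure({"name": "Jane Doe", "result": "negative"}))
```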

It remains unclear whether the vulnerabilities were exploited prior to their discovery by the CCC. The CCC notified both Medicus AI and the data protection authorities about the leak, which led to a quick response by the company. However, IT experts and privacy-focused NGOs commented that Medicus AI had been irresponsible and grossly negligent with respect to its security measures, leading to the potential disclosure of an enormous amount of sensitive personal health data.

The state of Virginia is the second state in the USA to enact major Data Protection Legislation

17. March 2021

On March 2nd, 2021, Virginia’s Governor, Ralph Northam, signed the Consumer Data Protection Act into law without any further amendments.

This makes the state of Virginia the second US state to enact a major privacy law, after California’s CCPA, enacted in 2018. When the bill passed to the Senate, there was debate over whether it was flawed, as it does not include a private right of action and leaves all enforcement to the Office of the Attorney General. This caused some senators to oppose the bill; however, it was ultimately passed by a vote of 32 to 7. The Consumer Data Protection Act will take effect on January 1st, 2023.

The bill establishes a comprehensive framework for controlling and processing personal data of Virginia residents. In addition, it provides Virginia residents with certain rights with respect to their personal data, including rights of access, correction, deletion and portability, the right to opt out of certain processing operations, as well as the right to appeal a controller’s decision regarding a rights request. The bill further sets out requirements relating to the principles of data minimization, processing limitations, data security, non-discrimination, third-party contracting and data protection assessments, and imposes certain requirements directly on entities that act as processors of data on behalf of a controller.

However, the law also includes a number of entity-level exemptions, such as exemptions for financial institutions subject to the Gramm-Leach-Bliley Act, and some data- or context-specific exemptions, such as an exemption for HR-related data processing.

The Attorney General’s office, as the enforcing entity, has to provide 30 days’ notice of any violation and allow the controller an opportunity to cure it. If a controller does not comply and leaves the violation uncured, the Attorney General can file an action seeking up to $7,500 per violation.

ICO fines companies a total of £330,000 for sending more than 2.7 million spam text messages

16. March 2021

The Information Commissioner’s Office (ICO) has sanctioned two firms for sending unlawful, nuisance text messages to their customers. The ICO took notice after receiving numerous complaints from the people affected; one of the companies alone was the subject of more than 10,000 complaints.

The two companies sent the unwanted text messages during the coronavirus pandemic and have now been fined a total of £330,000 by the UK data protection authority.

Leads Works Ltd.

One of the companies, the West Sussex-based Leads Works Ltd, was fined £250,000 for sending more than 2.6 million text messages to its customers without obtaining valid consent. Between 26 May and 26 June, the authorities received more than 10,000 complaints about these messages.

In addition, Leads Works Ltd has received an enforcement notice from the ICO requiring it to stop sending unlawful direct marketing messages.

Valca Vehicle Ltd

Valca Vehicle Ltd, a company based in Manchester, has been fined £80,000. Between June and July 2020, the company sent over 95,000 text messages, likewise without the appropriate consent of those affected. The company has been ordered to stop sending further text messages without consent. Valca Vehicle Ltd has also been criticised for using the pandemic as an excuse for its actions.


Google plans to stop the use of cookie tracking

15. March 2021

Google has announced that it will stop the use of third-party cookies in its Chrome browser and will not implement similar technologies that could track individuals while they surf the web.

Cookies are small pieces of data used on almost every website. They are stored by the browser when a user visits a website and are sent back to the website operator with every subsequent request. From this data, companies can create profiles of the user and personalize advertising based on the data collected. Originally, cookies were intended to give web browsers a “memory”: with cookies, online shops save shopping carts and users can stay logged in to online sites.
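As a brief illustration of this mechanism (the cookie name and value below are invented), a server sets a cookie once and the browser then returns it with every request, which is what gives sites their “memory” and trackers their identifier:

```python
# Illustrative sketch of the cookie round trip, using only the
# Python standard library; "session_id" and its value are made up.
from http.cookies import SimpleCookie

# Server side: the website operator sets a cookie on the first visit.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 86400  # remember the browser for a day
print(cookie.output())  # Set-Cookie: session_id=abc123; Max-Age=86400

# Browser side: every later request to that operator carries the
# cookie back, so the site (or an embedded third party) can
# recognise the same user again.
request_header = "Cookie: session_id=abc123"
print(request_header)
```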

In a blog post published on March 3rd, 2021, David Temkin, Director of Product Management, Ads Privacy and Trust at Google, announced that the next update of Google Chrome in April will allow cookie tracking to be turned off completely. In Google Chrome, only so-called “first-party cookies” of the respective website operator will remain permitted. The decision will have lasting consequences, as Google Chrome has been the most widely used browser since 2012. The move comes after Google’s competitors Apple and Mozilla announced similar mechanisms for their Safari and Firefox browsers (please see our blog post). Temkin writes:

Keeping the internet open and accessible for everyone requires all of us to do more to protect privacy — and that means an end to not only third-party cookies, but also any technology used for tracking individual people as they browse the web.

Since personalized advertising based on user data, and thus tracking, is Google’s core business, Google will stop neither the data collection nor the personalization of advertising. Instead of individual profiles, Google will form cohorts of people with similar interests, to which advertising will be tailored. These cohorts are said to be broad enough to preserve the anonymity of individual users. This concept is called “Federated Learning of Cohorts” (FLoC). FLoC-based advertising in Google Ads is expected to start in the second quarter of 2021.

Data will then be collected by the browser and stored locally rather than in cookies. Every URL visited and every piece of content accessed can then be evaluated by Google’s targeting algorithm. Algorithms on the end device calculate hash values from the browsing history, for example, which enable the assignment to such a cohort. Google sends a selection of ads to the browser, which picks out the ads that match the cohort and shows them to the user.
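A rough sketch of what such local cohort assignment could look like follows. Google’s production algorithm is not fully public (the origin trial used a SimHash-based approach); the toy version below only shows the idea of hashing a browsing history into a single cohort number:

```python
# Toy illustration of cohort assignment from local browsing
# history, loosely modelled on SimHash -- not Google's actual
# FLoC algorithm.
import hashlib

def domain_signature(domain: str, n_bits: int = 16) -> list[int]:
    # Hash a visited domain and map each bit to +1 or -1.
    h = int.from_bytes(hashlib.sha256(domain.encode()).digest()[:4], "big")
    return [1 if (h >> i) & 1 else -1 for i in range(n_bits)]

def cohort_id(history: list[str], n_bits: int = 16) -> int:
    # Sum the per-domain signatures; the sign pattern of the sums
    # becomes the cohort number. Similar histories produce similar
    # sums and therefore tend to land in the same cohort.
    totals = [0] * n_bits
    for domain in history:
        for i, bit in enumerate(domain_signature(domain, n_bits)):
            totals[i] += bit
    return sum(1 << i for i, total in enumerate(totals) if total > 0)

# Only this small cohort number, shared with many other users,
# would be exposed to ad systems instead of a full profile.
print(cohort_id(["news.example", "cooking.example", "travel.example"]))
```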

While third-party cookies are gradually becoming obsolete, Google is replacing them with a system that it can completely control itself. This will make business more difficult for competitors such as Facebook Ads in the future, as they will have to rely primarily on first-party data and on cookie data from smaller browsers.

Firefox introduces new tool to prevent cookie-based tracking

12. March 2021

Mozilla has announced the introduction of a new privacy tool for its Firefox browser, “Total Cookie Protection”, aimed at blocking cookie-based tracking by ad-tech companies. The new feature prevents cross-site tracking by confining cookies to the website where they were created and placing them into a so-called “cookie jar”.

Mozilla refers to cookies as “a useful technology, but also a serious privacy vulnerability”, because they can be shared between websites, which enables tracking of users’ browsing behavior. This allows advertising companies, in particular, to gather information about users, their browsing habits and interests, and to create detailed personal profiles.

Total Cookie Protection works by maintaining a separate “cookie jar” for each website visited. This prevents a deposited cookie from being shared with any other website. A limited exception applies only to cross-site cookies needed for non-tracking purposes.
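Conceptually, the partitioning keys cookie storage by the top-level site as well as by the cookie’s own host. The sketch below is a simplification, not Firefox’s implementation, and the site names are invented:

```python
# Conceptual sketch of per-site cookie partitioning: the same
# tracker gets a different, unrelated jar on every site it is
# embedded in.
cookie_jars: dict[str, dict[str, str]] = {}

def _jar_key(top_level_site: str, cookie_host: str) -> str:
    # The jar is keyed by the site the user is visiting AND the
    # cookie's own host.
    return f"{top_level_site}|{cookie_host}"

def set_cookie(top_level_site: str, cookie_host: str, name: str, value: str) -> None:
    cookie_jars.setdefault(_jar_key(top_level_site, cookie_host), {})[name] = value

def get_cookies(top_level_site: str, cookie_host: str) -> dict[str, str]:
    return cookie_jars.get(_jar_key(top_level_site, cookie_host), {})

# tracker.example tags the user while they visit site-a ...
set_cookie("site-a.example", "tracker.example", "uid", "42")
# ... but sees an empty jar when the same user visits site-b.
print(get_cookies("site-b.example", "tracker.example"))  # {}
```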

For years, Firefox has blocked some of the cookies used by ad-tech companies in an effort to fight cookie abuse and web tracking. To this end, “Enhanced Tracking Protection” (ETP) was introduced in 2019. It blocks many of the companies identified as trackers by Mozilla’s partners at Disconnect. Despite being an effective strategy to stop tracking, this form of cookie blocking has its limitations, as Johann Hofmann and Tim Huang remark on the developer blog Mozilla Hacks:

ETP protects users from the 3000 most common and pervasive identified trackers, but its protection relies on the fact that the list is complete and always up-to-date. Ensuring completeness is difficult, and trackers can try to circumvent the list by registering new domain names. Additionally, identifying trackers is a time-consuming task and commonly adds a delay on a scale of months before a new tracking domain is added to the list.

With this in view, Total Cookie Protection has been built into ETP as a new privacy advance. The feature is intended to address the limitations of ETP and provide more comprehensive protection. It is complemented by the Supercookie Protections rolled out last month, which are meant to eliminate the use of non-traditional storage mechanisms (“supercookies”) as a tracking vector.

In conclusion, Mozilla stated:

Together these features prevent websites from being able to “tag” your browser, thereby eliminating the most pervasive cross-site tracking technique.

French Government seeks to disregard CJEU ruling on the retention of surveillance data

9. March 2021

On March 3rd, POLITICO reported that the French government seeks to bypass the Court of Justice of the European Union’s (CJEU) ruling limiting member states’ surveillance of phone and internet data, which states that governments may only retain data in bulk when facing a “serious threat to national security”.

According to POLITICO, the French government has requested that the country’s highest administrative court, the Council of State, not follow the CJEU’s ruling in the matter.

In October last year, the CJEU ruled that several national data retention rules were not compliant with EU law. This ruling covered retention periods set forth by the French government in matters of national security.

The French case in question pits the government against the digital rights NGOs La Quadrature du Net and Privacy International. After the CJEU’s ruling, the matter is now in the hands of the Council of State in France, which will have to decide on it.

A hearing date has not yet been set; however, POLITICO’s sources state that the French government is trying to bypass the CJEU’s ruling by arguing that it goes against the country’s “constitutional identity”. This argument, first used back in 2006, is seldom invoked, but it can be used to avoid applying EU law at the national level.

In addition, the French government accuses the CJEU of having ruled outside its competence, as matters of national security remain solely a national competence.

The French government declined to comment on the ongoing proceedings; however, it has a history of refusing to implement EU court rulings in national law.

AEPD issues its highest fine yet for GDPR violations

5. March 2021

The Spanish Data Protection Authority, the Agencia Española de Protección de Datos (AEPD), imposed a fine of EUR 6,000,000 on CaixaBank, Spain’s leading retail bank, for unlawfully processing customers’ personal data and for not providing sufficient information about that processing. It is the largest financial penalty ever issued by the AEPD under the GDPR, surpassing the EUR 5,000,000 fine imposed on BBVA in December 2020 for information and consent failures.

In the opinion of the AEPD, CaixaBank violated Art. 6 GDPR in several respects. The bank had not provided sufficient justification of the legal basis for its processing activities, in particular with regard to those based on the company’s legitimate interest. Furthermore, deficiencies had been identified in the processes for obtaining customers’ consent to the processing of their personal data. The bank had also failed to comply with the requirements for obtaining valid consent as a specific, unequivocal and informed expression of intention. Moreover, the AEPD stated that the transfer of personal data to companies within the CaixaBank Group constituted an unauthorized disclosure. Under Art. 83 (5) lit. a GDPR, an administrative fine of EUR 4,000,000 was issued.

Additionally, the AEPD found that CaixaBank violated Art. 13 and 14 GDPR. The bank had not complied with its information obligations: the information on the categories of personal data concerned had been insufficient, and the information on the purposes of and legal basis for the processing had been missing entirely. What is more, the information provided in different documents and channels had not been consistent. The discrepancies concerned data subjects’ rights, the possibility of lodging a complaint with the AEPD, the existence of a data protection officer and his contact details, and data retention periods. The AEPD also criticised the use of imprecise terminology in the privacy policy. Under Art. 83 (5) lit. b GDPR, a fine of EUR 2,000,000 was imposed.

Finally, the AEPD ordered CaixaBank to bring its data processing operations into compliance with the aforementioned legal requirements within six months.

Data protection authorities around the world are taking action against the facial recognition software Clearview AI

25. February 2021

The business model of the US company Clearview AI is coming under increasing pressure worldwide. The company has collected billions of facial photos from publicly available sources, especially from social networks such as Facebook, Instagram, YouTube and similar services. Data subjects were not informed of the collection and use of their facial photos. Using the photos, Clearview AI created a comprehensive database and used it to develop an automated facial recognition system. The system’s customers are primarily law enforcement agencies and other prosecuting authorities in the US, but companies can also make use of it. In total, Clearview AI has around 2,000 customers worldwide and a database of around 3 billion images.

After a comprehensive investigation by the New York Times drew attention to the company in January 2020, opposition to its business practice is now also being voiced by the data protection authorities of various countries.

The Hamburg Data Protection Commissioner had already issued an order against Clearview AI in January 2021. According to the order, the company was to delete the biometric data of a Hamburg citizen who had complained to the authority about the storage. The reasons given for the decision were that there was no legal basis for processing such sensitive data and that the company was engaging in profiling by collecting photos over an extended period of time.

Now, several Canadian data protection authorities have also deemed Clearview AI’s actions illegal. In a statement, the Canadian Privacy Commissioner describes the activities as mass surveillance and an affront to the privacy rights of data subjects. The Canadian federal authority published a final report on its investigation into the Clearview AI case, in which the company was found to have violated several Canadian privacy laws.

Notably, the Canadian authorities consider the data collection unlawful even if Clearview AI were to obtain consent from the data subjects. They argue that the very purpose of the data processing is unlawful. They demand that Clearview AI cease its service in Canada and delete the data already collected from Canadian citizens.

The pressure on Clearview AI is also growing because the platforms from which the data was collected are opposing the practice as well. In addition, the association “noyb”, led by data protection activist Max Schrems, is taking on Clearview AI, and various European data protection authorities have announced that they will take action against the facial recognition system.

European Commission publishes draft UK adequacy decisions

On February 19th, 2021, the European Commission (EC) published drafts of two adequacy decisions for the transfer of personal data to the United Kingdom (UK), one under the General Data Protection Regulation (GDPR) and one under the Law Enforcement Directive. If approved, the decisions would confer adequacy status on the UK and ensure that personal data can continue to flow freely from the EU to the UK. In the EC’s announcement launching the process to adopt the draft adequacy decisions, Didier Reynders, Commissioner for Justice, is quoted:

We have thoroughly checked the privacy system that applies in the UK after it has left the EU. Now European Data Protection Authorities will thoroughly examine the draft texts. EU citizens’ fundamental right to data protection must never be compromised when personal data travel across the Channel. The adequacy decisions, once adopted, would ensure just that.

Under the GDPR, this adequacy decision is based on Art. 45 GDPR. Article 45(3) GDPR empowers the EU Commission to adopt an implementing act determining that a non-EU country ensures an “adequate level of protection”, meaning a level of protection for personal data that is essentially equivalent to the level of protection within the EU. Once it has been determined that a non-EU country provides an “adequate level of protection”, transfers of personal data from the EU to that country can take place without further requirements. In the UK, the processing of personal data is governed by the “UK GDPR” and the Data Protection Act 2018, which are based on the EU GDPR. The UK is, and has committed to remain, part of the European Convention on Human Rights and of “Convention 108” of the Council of Europe. “Convention 108” is a binding treaty under international law that protects individuals from abuses in the electronic processing of personal data and, in particular, provides for restrictions on cross-border data flows where data is to be transferred to states without comparable protection.

The draft GDPR adequacy decision addresses several areas of concern. One of these is the powers of the intelligence services in the UK. In this respect, the draft focuses on the legal bases, restrictions and safeguards for the collection of information for national security purposes. It also details the oversight structure over the intelligence services and the remedies available to those affected. Another aspect discussed is the limitation of data subjects’ rights in the context of UK immigration law. The EC concludes that interference with individuals’ fundamental rights is limited to what is strictly necessary to achieve a legitimate purpose and that effective legal protection against such interference exists. As the UK GDPR is based on the GDPR, UK privacy law should provide an adequate level of protection for data subjects; the main risks for EU data subjects therefore lie not in the current status of these laws but in possible future changes to them. For this reason, the EU Commission has built a fixed period of validity into the draft adequacy decision. If adopted, the decision would be valid for a period of four years, and the adequacy finding could be extended for a further four years if the level of protection in the UK remains adequate. This extension would not be automatic, however, but subject to a thorough review. The draft marks the first time that the EU has imposed a time limit on an adequacy decision. Other adequacy decisions are subject to monitoring and regular review but are not time-limited by default.

The UK government welcomed the EC’s drafts in a statement, while also calling on the EU to “swiftly complete” the process for adopting and formalizing the adequacy decisions, as the “bridging mechanism” will only remain in force until June 30th. Under the EU-UK Trade and Cooperation Agreement, the EU and the UK agreed on a transition period of up to six months from January 1st, 2021, during which the UK is treated as an adequate jurisdiction (please see our blog post). The draft adequacy decisions address the flow of data from the EU to the UK. The flow of data from the UK to the EU is governed by UK legislation that has applied since January 1st, 2021: the UK has decided that the EU ensures an adequate level of protection, so data can flow freely from the UK to the EU.

Next, the non-binding opinion of the European Data Protection Board will be sought (Art. 70 GDPR). After hearing that opinion, the representatives of the member states must confirm the draft in the so-called comitology procedure, which is used when the EC is given the power to adopt implementing acts that lay down conditions for the uniform application of a law. A series of procedures ensures that EU countries have a say in the implementing act. After the comitology procedure, the EC is free to adopt the drafts.
