Tag: Social Media

Facebook data leak affects more than 500 million users

7. April 2021

Confidential data of 533 million Facebook users has surfaced in a forum for cybercriminals. A Facebook spokesperson told Business Insider that the data came from a leak in 2019.

The leaked data includes Facebook usernames and full names, dates of birth, phone numbers, locations and biographical information, and in some cases the email addresses of the affected users. Business Insider has verified the leaked data through random sampling. Even though some of the data may be outdated, the leak poses risks if, for example, email addresses or phone numbers are used for hacking. The leak was made public by the IT security firm Hudson Rock, whose employees noticed that the data sets were being offered for money by a bot in a hacking forum. The data set was later offered publicly for free and thus made accessible to everyone.

The US magazine Wired points out that Facebook is doing more to confuse than to clarify the matter. First, Facebook referred to an earlier security vulnerability from 2019, which we already reported on. This vulnerability was patched in August of last year. Later, a blog post by a Facebook product manager confirmed that it was a major security breach. However, the data had not been obtained through hacking, but rather through the exploitation of a legitimate Facebook feature. In addition, he claimed, the affected data was so old that the GDPR and U.S. privacy laws did not apply. In the summer of 2019, Facebook reached an agreement with the U.S. Federal Trade Commission (FTC) to pay a $5 billion fine covering all data breaches before June 12, 2019. According to Wired, however, the current database does not match the one at issue at the time, as the most recent Facebook ID in it dates from late May 2019.

Users can check whether they are affected by the data leak via the website HaveIBeenPwned.
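Such a check can also be automated via HaveIBeenPwned's public API. The sketch below is purely illustrative and not part of the original report; it assumes an API key obtained from the service, which the v3 breach-lookup endpoint requires.

```python
"""Minimal sketch: look up an account in HaveIBeenPwned's v3 API.
Assumes an API key from the service (required for breach lookups)."""
import json
import urllib.error
import urllib.request
from urllib.parse import quote

API_BASE = "https://haveibeenpwned.com/api/v3/breachedaccount/"


def breach_lookup_url(account: str) -> str:
    """Build the lookup URL; the account must be URL-encoded."""
    return API_BASE + quote(account)


def check_account(account: str, api_key: str) -> list:
    """Return the list of breaches for the account (empty if none).
    Performs a real network request -- hedged example only."""
    req = urllib.request.Request(
        breach_lookup_url(account),
        headers={"hibp-api-key": api_key, "user-agent": "breach-check-demo"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as e:
        if e.code == 404:  # 404 means: account not found in any breach
            return []
        raise


if __name__ == "__main__":
    print(breach_lookup_url("user@example.com"))
```

The manual check on the website remains the simpler option for individual users; the API is useful when, for example, a company wants to screen many addresses at once.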

Data protection authorities around the world are taking action against the facial recognition software Clearview AI

25. February 2021

The business model of the US company Clearview AI is coming under increasing pressure worldwide. The company collected billions of facial photos from publicly available sources, especially from social networks such as Facebook, Instagram, YouTube and similar services. Data subjects were not informed of the collection and use of their facial photos. Using the photos, Clearview AI created a comprehensive database and used it to develop an automated facial recognition system. Customers of this system include, in particular, law enforcement agencies and other public authorities in the US, but companies can also make use of it. In total, Clearview AI has around 2,000 customers worldwide and a database of around 3 billion images.

After a comprehensive investigation by the New York Times in January 2020 drew attention to the company, opposition to the business practice is now also being voiced by the data protection authorities of various countries.

The Hamburg Data Protection Commissioner had already issued an order against Clearview AI in January 2021. According to the order, the company was to delete the biometric data of a Hamburg citizen who had complained to the authority about the storage. The decision was based on the reasoning that there was no legal basis for processing the sensitive data and that the company was carrying out profiling by collecting photos over a longer period of time.

Now, several Canadian data protection authorities have also deemed Clearview AI’s actions illegal. In a statement, the Canadian Privacy Commissioner describes the activities as mass surveillance and an affront to the privacy rights of data subjects. The Canadian federal authority published a final report on its investigation into the Clearview AI case. In it, the company was found to have violated several Canadian federal privacy laws.

Interestingly, the Canadian authorities consider the data collection unlawful even if Clearview AI were to obtain consent from the data subjects, arguing that the very purpose of the data processing is unlawful. They demand that Clearview AI cease its service in Canada and delete the data already collected from Canadian citizens.

The pressure on Clearview AI is also growing because the companies from whose platforms the data was collected are opposing the practice as well. In addition, the association “noyb” around the data protection activist Max Schrems is looking into Clearview AI, and various European data protection authorities have announced that they will take action against the facial recognition system.

Update: The Council of the European Union publishes recommendations on encryption

8. December 2020

In November, the Austrian broadcasting network “Österreichischer Rundfunk” sparked a controversial discussion by publishing leaked drafts of the Council of the European Union (“EU Council”) on encryption (please see our blog post). After these drafts had been criticized by several politicians, journalists and NGOs, the EU Council published “Recommendations for a way forward on the topic of encryption” on December 1st, in which it stresses the importance of carefully balancing the protection of fundamental rights with ensuring law enforcement’s investigative powers.

The EU Council sees a dilemma between the need for strong encryption in order to protect privacy on the one hand, and the misuse of encryption by criminal actors such as terrorists and organized crime on the other. They further note:

“We acknowledge this dilemma and are determined to find ways that will not compromise either one, upholding the principle of security through encryption and security despite encryption.”

The paper lists several intentions that are supposed to help find solutions to this dilemma.

First, it directly addresses EU institutions, agencies, and member states, asking them to coordinate their efforts in developing technical, legal and operational solutions. Part of this cooperation is supposed to be the joint implementation of standardized, high-quality training programs for law enforcement officers, tailored to an increasingly skilled criminal environment. International cooperation, particularly with the initiators of the “International Statement: End-to-End Encryption and Public Safety”, is proclaimed as a further intention.

Next, the technology industry, civil society and the academic world are acknowledged as important partners with whom the EU institutions shall establish a permanent dialogue. The recommendations address internet service providers and social media platforms directly, noting that only with their involvement can the full potential of technical expertise be realized. Europol’s EU Innovation Hub and national research and development teams are named as key EU institutions for maintaining this dialogue.

The EU Council concludes that the continuous development of encryption requires regular evaluation and review of technical, operational, and legal solutions.

These recommendations can be seen as a direct response to the discussion that arose in November. The EU Council is attempting to appease critics by emphasizing the value of encryption, while still reiterating the importance of law enforcement efficiency. It remains to be seen how willing the private sector will be to cooperate with the EU institutions and exactly what measures the EU Council intends to implement. The list of intentions lacks clear guidelines, recommendations or even a clearly formulated goal. Instead, the parties are asked to work together to find solutions that offer the highest level of security while maximizing law enforcement efficiency. In summary, these “recommendations” are more a statement of intent than implementable recommendations on encryption.

ICO passed Children’s Code

8. September 2020

The UK Information Commissioner’s Office (ICO) passed the Age Appropriate Design Code, also called the Children’s Code, which applies in particular to social media and online services likely to be accessed by minors under the age of 18 in the UK.

The Children’s Code contains 15 standards for designers of online services and products. The aim is to ensure a minimum level of data protection. The Code therefore requires that apps, games, websites etc. be designed in a way that already provides a baseline of data protection. The following default settings are worth mentioning:

  • Geolocation disabled by default,
  • Profiling disabled by default,
  • Newly created profiles private and not public by default.
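These defaults can be pictured as a minimal settings object. The sketch below is purely illustrative; the field names are our own invention, not taken from the Code or any real platform.

```python
from dataclasses import dataclass


@dataclass
class MinorAccountSettings:
    """Illustrative privacy-by-default settings for a minor's account.
    Field names are hypothetical; the defaults mirror the Code's baseline."""
    geolocation_enabled: bool = False  # geolocation off by default
    profiling_enabled: bool = False    # profiling off by default
    profile_public: bool = False       # new profiles private, not public


# A newly created account starts with the protective defaults;
# any weaker setting would have to be an explicit opt-in.
settings = MinorAccountSettings()
```

The point of "by default" is precisely this: the protective state requires no action from the minor, while weakening it requires a deliberate choice.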

The Children’s Code is based on the UK Data Protection Act 2018, the local implementation law of the GDPR. Thus, the standards also include the GDPR data protection principles of transparency and data minimisation.

The requirements also and especially apply to the major social media and online services used by minors in the UK, e.g. TikTok, Instagram and Facebook.

The Code is designed to be risk-based, meaning that not all organizations have to fulfil the same obligations. The more a company uses, analyses and profiles minors’ data, the more it must do to comply with the Code.

FaceApp reacts to privacy concerns

22. July 2019

The picture editing app FaceApp, which has become increasingly popular on social media, has been confronted with various concerns about its privacy practices.

Created in Russia by a four-person start-up, the app applies a newly developed technology that uses neural networks to modify a face in any photo while keeping it photorealistic. In this process, no filters are placed over the photo; instead, the image itself is modified with the help of deep learning technology.

However, the app is accused of not explaining that the images are uploaded to a cloud for editing. In addition, the app is accused of uploading not only the image selected by the user, but also the entire camera roll in the background. The latter in particular raises serious security concerns, given the large number of screenshots that people nowadays take of sensitive information such as access data or bank details.

While there is no evidence for the latter accusation, and FaceApp emphasizes in its statement that no image other than the one chosen by the user is uploaded, the company does confirm the upload to a cloud.

FaceApp justifies the cloud upload with performance and traffic considerations: the developers want to ensure that the user does not have to upload the photo repeatedly during each editing step.

Finally, FaceApp declares that no user data is sold or passed on to third parties. It also states that in 99% of cases it is unable to identify a person, because the app can be used, and by a large number of users actually is used, without registration.

Twitter shared location data on iOS devices

15. May 2019

Twitter recently published a statement admitting that the app shared location data on iOS devices even if the user had not turned on the “precise location” feature.

The problem appeared in cases where a user used more than one Twitter account on the same iOS device. If he or she had opted into the “precise location” feature for one account, it was also turned on when using another account, even if the user had not opted in on that account. The real-time location information was then passed on to trusted partners of Twitter. However, through technical measures, only the postcode or an area of five square kilometres was shared with the partners. Twitter account names or other unique account IDs, which would reveal the identity of the user, were allegedly not transmitted.
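Coarsening a location to an area of a few square kilometres can be pictured as snapping coordinates to a fixed grid. The sketch below is a generic illustration of that idea, not Twitter's actual implementation.

```python
import math


def coarsen(lat: float, lon: float, cell_deg: float = 0.02):
    """Snap a coordinate to the south-west corner of its grid cell.
    With cell_deg = 0.02, a cell spans roughly 2.2 km in latitude
    (about 5 km^2 near the equator), so the precise position is lost."""
    snap = lambda v: round(math.floor(v / cell_deg) * cell_deg, 6)
    return snap(lat), snap(lon)


# Example: a precise position is reduced to its grid-cell corner
print(coarsen(52.520008, 13.404954))  # -> (52.52, 13.4)
```

Everyone inside the same cell maps to the same coarse coordinate, which is what limits the re-identification risk compared to sharing the precise position.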

According to Twitter’s statement, they have fixed the problem and informed the affected users: “We’re very sorry this happened. We recognize and appreciate the trust you place in us and are committed to earning that trust every day”.

Data protection risks with regard to WhatsApp and Snapchat on business phones

6. June 2018

The use of the chat services WhatsApp and Snapchat on smartphones used for business purposes will in future be forbidden for employees of the automotive supplier Continental: For data protection reasons, the employer prohibits its employees from downloading the apps. This ban affects approximately 36,000 mobile phones worldwide.

The ban is based on the fact that social media services access users’ address books and thus personal (and possibly confidential) data. Since the messenger apps’ settings do not allow this access to personal data to be restricted, Continental decided to ban the apps from company mobile phones in order to protect business partners and its own employees.

Under the current terms of use, users of WhatsApp agree to provide contact information “in accordance with applicable laws”. WhatsApp thereby shifts its data protection responsibility to its users, who in effect confirm that they have obtained a corresponding declaration of consent to data processing from every person in their address book. The social media service is surely aware that this is practically impossible to guarantee.

In order to ensure an adequate level of data protection, WhatsApp would therefore be obliged to design its default settings in conformity with data protection requirements. Such a change could also benefit the company itself, considering that it would remove the grounds for the ban: WhatsApp could then be used on countless additional smartphones.

Will Visa Applicants for the USA have to reveal their Social Media Identities in future?

11. January 2018

The U.S. Department of State wants visa applicants to answer supplemental questions, including information about social media. A 30-day notice was published in November in order to gather opinions from all interested individuals and organizations. The goal is to establish a legal basis for the “proper collection of all information necessary to rigorously evaluate all grounds of inadmissibility or deportability, or grounds for the denial of other immigration benefits”.

In concrete terms, applicants are supposed to reveal their social media identifiers used during the last five years. The State Department stresses the fact that “the collection of social media platforms and identifiers will not be used to deny visas based on applicants’ race, religion, ethnicity, national origin, political views, gender, or sexual orientation.”

Meanwhile, the Electronic Privacy Information Center (EPIC) has submitted its comments asking for withdrawal of the proposal to collect social media identifiers and for review of the appropriateness of using social media to make visa determinations.

EPIC not only criticizes the lack of transparency, as it is “not clear how the State Department intends to use the social media identifiers”, but also argues that “the benefits for national security” remain vague. The organization further expresses concerns that the collection of these data enables enhanced profiling and tracking of individuals as well as large-scale surveillance of innocent people, possibly even leading to secret profiles.

It remains to be seen how the situation develops and how the public opinion influences the outcome.