Tag: China

TikTok faces huge fine from Britain’s ICO

12. October 2022

The Chinese social media company has recently been the subject of an investigation by the British data protection watchdog, the Information Commissioner’s Office (ICO). The investigation has so far concluded that the social media network breached the United Kingdom’s data protection laws, in particular the rules concerning children’s personal data. The ICO therefore issued a notice of intent, a potential precursor to a fine of up to a staggering 27 million pounds.

In particular, the Authority found that the platform may have processed personal data of children under the age of 13 without obtaining parental consent for the processing. These data allegedly also include special category data, which enjoy special protection under Art. 9 GDPR.

Furthermore, in the ICO’s opinion, the platform failed to respect the principle of transparency by not providing complete or transparent information about its collection and processing of data.

The ICO’s investigation is still ongoing, as the Commissioner’s Office has yet to make a final determination on whether there has been a breach of data protection law and whether to impose the fine.

The protection of teenagers and children is the top priority of the ICO, according to the current Information Commissioner, John Edwards. Under his guidance, the ICO has several ongoing investigations targeting various tech companies that may be breaking the UK’s data protection laws.

This is not the first time TikTok has come under the scrutiny of data protection watchdogs. In July, a US-Australian cybersecurity firm found that TikTok gathers excessive amounts of information from its users and voiced concern over these findings. Given these precedents, local data protection authorities may well step up their efforts to monitor TikTok’s compliance with local laws and, in Europe, with the GDPR.

China publishes Draft Measures on Security Assessment of Cross-border Data Transfer for public consultation

8. November 2021

On October 29th, 2021, the Cyberspace Administration of China (CAC) announced a public consultation on its “Draft Measures on Security Assessment of Cross-border Data Transfer”. This is the CAC’s third legislative attempt to build a cross-border data transfer mechanism in China, and it came only days before the effective date of the Personal Information Protection Law (PIPL) on November 1st, 2021.

The CAC said its proposed data transfer assessment aims to comply with China’s PIPL and Data Security Law, specifically focusing on efforts to “regulate data export activities, protect the rights and interests of personal information, safeguard national security and social public interests, and promote the safe and free flow of data across borders”. If made final, the Draft Measures would apply to cross-border transfers of personal information and “important data” collected and generated in China under certain circumstances.

Data controllers, or data handlers according to the PIPL, would be subject to mandatory security assessments by the CAC in the following circumstances:

  • transfer of personal information and important data collected and generated by critical information infrastructure operators as defined under China’s Cybersecurity Law;
  • transfer of important data;
  • transfer of personal information by data handlers who process over 1 million individuals’ personal information;
  • cumulatively transferring personal information of more than 100,000 individuals or “sensitive” personal information of more than 10,000 individuals; or
  • other conditions to be specified by the CAC.
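The trigger conditions above amount to a simple decision rule. As a minimal illustrative sketch (not an official tool — the class and function names are assumptions, and whether data counts as “important data” or an operator as a CIIO remains a legal question), the check could look like this:

```python
from dataclasses import dataclass

@dataclass
class Transfer:
    """Hypothetical summary of a planned cross-border data transfer."""
    is_ciio: bool                          # critical information infrastructure operator
    involves_important_data: bool          # "important data" as a legal category
    individuals_processed: int             # individuals whose data the handler processes
    individuals_transferred: int           # cumulative individuals transferred abroad
    sensitive_individuals_transferred: int # cumulative individuals whose sensitive data is transferred

def requires_cac_assessment(t: Transfer) -> bool:
    """True if any trigger from the Draft Measures applies."""
    return (
        t.is_ciio
        or t.involves_important_data
        or t.individuals_processed > 1_000_000
        or t.individuals_transferred > 100_000
        or t.sensitive_individuals_transferred > 10_000
    )
```

For example, a non-CIIO handler processing 500,000 individuals’ data would still trigger the assessment once it cumulatively transfers data of more than 100,000 individuals abroad.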

According to the Draft Measures, data handlers that require a mandatory security assessment would need to submit certain materials in connection with it, which include an application form, the data handler’s self-security assessment, and the relevant data transfer agreement.

Upon receiving the data handler’s application, the CAC would confirm within seven business days whether it accepts the application. The CAC would then have 45 business days to complete the assessment after issuing the notice of acceptance. This period could be extended in complex cases or where the CAC requires supplementary documents; however, according to the Draft Measures, the total timeline should not exceed 60 business days.
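The deadlines in the Draft Measures are expressed in business days, so tracking them requires business-day arithmetic. A minimal sketch, assuming business days mean Monday to Friday (a real calculation would also have to account for Chinese public holidays, which are ignored here):

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance `days` business days (Mon-Fri) past `start`; holidays not handled."""
    current = start
    added = 0
    while added < days:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            added += 1
    return current

# Hypothetical filing date, purely for illustration
application = date(2021, 11, 29)
acceptance_deadline = add_business_days(application, 7)          # confirm acceptance
assessment_deadline = add_business_days(acceptance_deadline, 45) # standard period
outer_limit = add_business_days(acceptance_deadline, 60)         # extended maximum
```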

In evaluating a data handler’s mandatory security assessment, the CAC would aim to focus on:

  • the legality, propriety and necessity of the cross-border transfer;
  • the data protection laws and regulations of the data recipient’s jurisdiction, the security of the data being transferred, and whether the protections provided by the data recipient satisfy Chinese laws and regulations and mandatory national standards;
  • the volume, scope, type and sensitivity of the data being transferred and the risk of a leak, damage, corruption, loss and misuse;
  • whether the data transfer agreement adequately allocates responsibilities for data protection;
  • compliance with Chinese laws, administrative regulations and departmental regulations; and
  • other matters that are deemed necessary by the CAC.

The CAC’s mandatory security assessment result would be valid for two years, after which a new assessment would be required. Under certain circumstances, a re-evaluation would have to take place earlier, e.g. in case of changes to the purpose, means, scope or type of the cross-border transfer or of the processing of personal information and/or important data by the data recipient, an extension of the retention period for the personal information and/or important data, or other circumstances that might affect the security of the transferred data.

The public consultation period extends until November 28th, 2021, after which the CAC will review the public comments and recommendations.

China passes new data security law

15. June 2021

China’s “National People’s Congress”, the Chinese legislative body, approved the new “Data Security Law 2021” on June 10th, 2021 (unofficial English translation here). The new law gives President Xi Jinping the power to shut down or fine tech companies. The law will go into effect on September 1st, 2021.

The law applies to data processing activities and security surveillance within China’s territory. Data processing activities outside China’s territory that threaten China’s national security and public interests are also covered by the law. For international companies, the law means they must localize data in China. For example, data generated in factories in China must be kept in China and be subject to cyber data oversight.

Companies that leak sensitive data abroad or are found “mishandling core state data” can be forced to cease operations, have their licenses revoked or be fined up to US$ 1.6 million; companies that provide electronic information to foreign law enforcement authorities can be fined up to approx. US$ 150,000 or forced to suspend their business.

While the Chinese government is increasing its financial involvement in tech companies, it is also producing new legislation to tighten its grip on them. The new data law is expected to provide a broad framework for future rules on Internet services and to ease the tracking of valuable data in the interest of national security. This may include directives that certain types of data must be stored and handled locally, as well as requirements for companies to track and report the information they hold.

A personal information protection law is still under review in China.

China issued new Draft for Personal Information Protection Law

23. November 2020

At the end of October 2020, China issued a draft for a new “Personal Information Protection Law” (PIPL). The draft introduces a comprehensive data protection regime that appears to have taken inspiration from the European General Data Protection Regulation (GDPR).

With the new draft, China’s data protection framework will consist of China’s Cybersecurity Law, the Data Security Law (draft) and the draft PIPL. The draft legislation contains provisions addressing issues raised by new technologies and applications, all in around 70 articles. The fines set out in the draft for non-compliance are quite high and will have a significant impact on companies with operations in China or targeting China as a market.

The data protection principles set out in the draft PIPL include transparency, fairness, purpose limitation, data minimization, limited retention, data accuracy and accountability. The topics covered include the processing of personal information, cross-border transfers of personal information, the rights of data subjects in relation to data processing, the obligations of data processors, the authority in charge of personal information, as well as legal liabilities.

Unlike China’s Cybersecurity Law, which provides limited extraterritorial application, the draft PIPL proposes clear and specific extraterritorial application to overseas entities and individuals that process the personal data of data subjects in China.

Further, the definitions of “personal data” and “processing” under the draft PIPL are very similar to their equivalents under the GDPR. Organizations or individuals outside China that fall within the scope of the draft PIPL are also required to set up a dedicated organization or appoint a representative in China, and to report the relevant information of that organization or representative to the Chinese regulators.

In comparison to the GDPR, the draft PIPL extends the notion of “sensitive data” to also include nationality, financial accounts and personal whereabouts. However, sensitive personal information is defined as information that, once leaked or abused, may cause damage to personal reputation or seriously endanger personal and property safety, which leaves room for further interpretation.

The draft legislation also regulates cross-border transfers of personal information, which shall be possible if it is certified by recognized institutions, or the data processor executes a cross-border transfer agreement with the recipient located outside of China, to ensure that the processing meets the protection standard provided under the draft PIPL. Where the data processor is categorized as a critical information infrastructure operator or the volume of data processed by the data processor exceeds the level stipulated by the Cyberspace Administration of China (CAC), the cross-border transfer of personal information must pass a security assessment conducted by the CAC.

It should further be kept in mind that the draft PIPL expands the range of penalties beyond those provided in the Cybersecurity Law, which will put significantly higher liability pressure on controllers operating in China.

Currently, the period for public comments on the draft legislation has ended, but the next steps have not yet been announced, and it is not yet clear when the draft legislation will come into force.

German Officials warn Travellers to China of Espionage

17. January 2020

The German Federal Office for the Protection of the Constitution (BfV) sees a significant risk for the security of personal data when accessing local WiFi networks and the mobile network in China. A request from the German newspaper “Handelsblatt” to the BfV revealed that the authority warns travellers to China of an increasing risk of espionage.

For a stay in China, the BfV discourages travellers from using laptops and smartphones that contain personal data, especially contact information. Instead, the BfV recommends acquiring a travel laptop and a prepaid mobile phone that can be reset or even disposed of after leaving China.

According to Handelsblatt, the warning stems from cases in which the Chinese border police checked mobile phones at the border of the Xinjiang region and installed a surveillance app on tourists’ smartphones.

In 2016, the BfV already cautioned of potential espionage by Chinese secret services targeting students and researchers.

NIST examines the effect of demographic differences on face recognition

31. December 2019

As part of its Face Recognition Vendor Test (FRVT) program, the U.S. National Institute of Standards and Technology (NIST) conducted a study that evaluated face recognition algorithms submitted by industry and academic developers for their ability to perform various tasks. The study evaluated 189 software algorithms submitted by 99 developers, focusing on how well each algorithm performs one of two tasks that are among the most common applications of face recognition.

The first task is “one-to-one” matching, i.e. confirming that a photo matches another photo of the same person in a database. This is used, for example, when unlocking a smartphone or checking a passport. The second task is “one-to-many” matching, i.e. determining whether the person in the photo matches any photo in a database. This is used to identify a person of interest.
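The difference between the two tasks can be sketched in a few lines. This is an illustrative toy, not NIST’s methodology: it assumes a face recognition model has already mapped each photo to an embedding vector, and the vectors and threshold below are made up for demonstration.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def one_to_one(probe, reference, threshold=0.8):
    """Verification: does the probe photo match this one reference photo?"""
    return cosine_similarity(probe, reference) >= threshold

def one_to_many(probe, database, threshold=0.8):
    """Identification: return IDs of all database entries the probe matches."""
    return [person_id for person_id, embedding in database.items()
            if cosine_similarity(probe, embedding) >= threshold]
```

The demographic differentials NIST measured show up precisely in these comparisons: a “false match” is a one-to-one comparison of two different people that nevertheless exceeds the threshold.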

A special focus of this study was that it also looked at the performance of the individual algorithms taking demographic factors into account. For one-to-one matching, only a few previous studies examined demographic effects; for one-to-many matching, there were none.

To evaluate the algorithms, the NIST team used four photo collections containing 18.27 million images of 8.49 million people. All were taken from operational databases of the State Department, Department of Homeland Security and the FBI. The team did not use images taken directly from Internet sources such as social media or from video surveillance. The photos in the databases contained metadata information that indicated the age, gender, and either race or country of birth of the person.

The study found that the results ultimately depend on the algorithm at the heart of the system, the application that uses it, and the data it is fed with. But the majority of face recognition algorithms exhibit demographic differences. In one-to-one matching, algorithms more often falsely rated photos of two different people as the same person when they were Asian or African-American than when they were white. In algorithms developed in the United States, the same error also occurred more often for Native Americans. In contrast, algorithms developed in Asia did not show such a significant difference in one-to-one matching results between Asian and Caucasian faces. These results suggest that algorithms can be trained to achieve accurate face recognition across demographic groups by using a sufficiently wide range of training data.

China publishes provisions on the protection of personal data of children

10. October 2019

On 23 August 2019, the Cyberspace Administration of China published regulations on the cyber protection of personal data of children, which came into force on 1 October 2019. China thus enacted the first rules focusing exclusively on the protection of children’s personal data.

In the regulations, “children” refers to minors under the age of 14. This corresponds to the definition in the national “Information Security Technology – Personal Information Security Specification”.

The provisions regulate activities related to the collection, storage, use, transfer and disclosure of personal data of children through networks located on the territory of China. However, the provisions do not apply to activities conducted outside of China or to similar activities conducted offline.

The provisions provide a higher standard of consent than the Cybersecurity Law of China. To obtain the consent of a guardian, a network operator has to provide the possibility of refusal and expressly inform the guardian of the following:

  • Purpose, means and scope of collection, storage, use, transfer and disclosure of children’s personal information;
  • Storage location of children’s personal information, retention period and how the relevant information will be handled after expiration of the retention period;
  • Safeguard measures protecting children’s personal information;
  • Consequences of rejection by a guardian;
  • The channels and means of filing or reporting complaints; and
  • How to correct and delete children’s personal information.

The network operator also has to restrict internal access to children’s personal information. In particular, before accessing the information, personnel must obtain consent of the person responsible for the protection of children’s personal data or an authorised administrator.

If children’s personal data are processed by a third-party processor, the network operator is obliged to carry out a security assessment of the data processor commissioned to process the children’s personal data. It also has to conclude an entrustment agreement with the data processor. The data processor is obliged to support the network operator in fulfilling a guardian’s request to delete a child’s data after termination of the service. Subletting or subcontracting by the data processor is prohibited.

If personal data of children is transferred to a third party, the network operator shall carry out a security assessment of the commissioned person or commission a third party to carry out such an assessment.

Children or their legal guardians have the right to demand the deletion of children’s personal data under certain circumstances. In any case, they have the right to demand the correction of children’s personal data that a network operator collects, stores, uses or discloses. In addition, legal guardians have the right to withdraw their consent in its entirety.

In the event of actual or potential data breaches, the network operator is obliged to immediately initiate its emergency plan and take remedial action. If the violation has or may have serious consequences, the network operator must immediately report the violation to the competent authorities and inform the affected children and their legal guardians by e-mail, letter, telephone or push notification. Where it is challenging to send the notification to each data subject individually, the network operator shall take appropriate and effective measures to make the notification public. However, the rules do not contain a precise definition of “serious consequences”.

In the event that the data breach is caused or observed by a data processor, the data processor is obliged to inform the network operator in good time.

Chinese police uses gait recognition for identification

30. July 2019

The police in Beijing and Shanghai have begun to use a new form of surveillance. The gait recognition technology analyzes the body shapes and ways people walk to identify them, even if their faces are hidden from the camera.

The gait recognition software is part of an advance in China towards the development of artificial intelligence and data-driven surveillance.

On its website, the Chinese technology startup Watrix explains that, compared with other biometrics such as the face, iris, palm print and fingerprint, gait features can be captured and recognized remotely, even from low-resolution video. With contactless, long-range recognition that is difficult to disguise, gait recognition closes a gap in the market for remote identification in the public security industry. “You don’t need people’s cooperation for us to be able to recognize their identity,” Huang Yongzhen, the CEO of Watrix, said in an interview. “Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”

Watrix’s software extracts a person’s silhouette from the video and analyzes their movements to create a model of the person’s gait. However, it is not yet able to identify people in real time. Users must upload videos to the program. Yet no special cameras are needed. The software can use footage from regular surveillance cameras to analyze the gait.

The technology is not new. Scientists in Japan, the UK and the U.S. Defense Information Systems Agency have been researching gait detection for over a decade. Professors from the University of Osaka have been working with the Japanese National Police Agency since 2013 to pilot the gait recognition software.


China: Tourist mobile phones are scanned by an app

8. July 2019

Foreign tourists who want to enter the Chinese province of Xinjiang by land are being spied on via an app. For the first time, employees of the Süddeutsche Zeitung, Motherboard Vice and various other media outlets, in cooperation with IT experts of the Ruhr University Bochum, have succeeded in decrypting Chinese surveillance software that also targets foreigners.

It has been known for some time that the Chinese authorities use apps to monitor the Uighur residents of Xinjiang province (we reported). What is new is that foreign tourists and business travellers coming from Kyrgyzstan to Xinjiang by land have to hand in their mobile phones at the border, where the Android app “Fengcai” (“collecting bees”) is installed on them. They are not explicitly informed about this.

The app gets access to contacts, the calendar, text messages, the location and call lists and transmits them to a computer of the border police. In addition, the app scans the phone for over 70,000 files that are considered suspicious from the Chinese government’s point of view. Many of the flagged files refer to extremist content related to the Islamic State, but harmless religious content and files related to Taiwan, Tibet or the Dalai Lama are also on the list. If the app discovers anything, it emits a warning tone to alert the border police.

The app also scans the phone to determine which apps the user has installed and even extracts usernames. Several antivirus firms have since updated their products to identify the app as malware.

Neither the Chinese authorities nor the company that developed the app reacted to a request for comment.


Mass monitoring in Xinjiang

3. May 2019

According to research by Human Rights Watch, China’s state and party leaders have had an app developed with which the security authorities in Xinjiang can monitor their inhabitants on a massive scale.

When police officers log into the app, they can see which “conspicuous” behaviours of individual residents have been recorded. According to the published report, the authorities are using the app for illegal mass surveillance and arbitrary arrest of the Uighur Muslim minority living in Xinjiang Province. Up to one million Uighurs are currently said to be imprisoned in “re-education camps”.

Users of the app are asked to enter a variety of information about citizens and to explain the circumstances under which it was collected. This includes data such as the name or identity card number, but also information such as religious beliefs, blood group or the absence of smartphones. According to Human Rights Watch, the app is reportedly also connected to other databases and alerts users if a citizen consumes too much electricity or if a mobile phone does not log on to the network for a long time. Citizens are also flagged as “suspicious” if they have little contact with neighbours or rarely enter their buildings through the front door.

Human Rights Watch is convinced that this procedure is illegal even under Chinese law and that the collected data must be deleted. It remains to be seen whether the Chinese or other governments will react to the disclosures.
