Tag: Facial recognition

Another 20 million Euro fine for Clearview AI

28. October 2022

The French data protection authority CNIL has imposed a fine of 20 million Euros on Clearview AI, making it the latest in a line of authorities to deem the processing activities of the biometrics company unlawful under data protection law.

Clearview AI is a US company that extracts photographs and videos that are directly accessible online, including on social media, in order to feed its biometric image database, which it touts as the biggest in the world. Access to a search engine built on this database is offered to law enforcement authorities.

The case

The decision followed several complaints from data subjects in 2020, which led to the CNIL’s investigations and a formal notice to Clearview AI in November 2021 to “cease the collection and use of data of persons on French territory in the absence of a legal basis” and “facilitate the exercise of individuals’ rights and to comply with requests for erasure.” However, the company did not react to this notice within the two-month deadline imposed by the CNIL. Therefore, the authority not only imposed the fine but also ordered Clearview AI “to stop collecting and processing data of individuals residing in France without a legal basis and to delete the data of these persons that it had already collected, within a period of two months.” In addition, it set a “penalty of 100,000 euros per day of delay beyond these two months.”

CNIL based its decision on three breaches. First, Clearview AI had processed the data without a legal basis. Given the “intrusive and massive nature of the process which makes it possible to retrieve the images present on Internet of the millions of internet users in France”, Clearview AI had no legitimate interest in the data processing. Second, the CNIL sanctioned Clearview AI’s inadequate handling of data subjects’ requests. Lastly, it penalized the company’s failure to cooperate with the CNIL.

The impact of the decision

For over two years, Clearview AI has been under the scrutiny of data protection authorities (“DPAs”) all over the world. So far, it has been fined more than 68 million Euros in total. Apart from CNIL’s fine, there have been fines of 20 million Euros by Greece’s Hellenic DPA in July 2022, over 7.5 million pounds by the UK Information Commissioner’s Office in May 2022 and 20 million Euros by the Italian Garante in March 2022.

CNIL’s decision was likely not the last one, considering the all-encompassing nature of Clearview AI’s collection of personal data, which, given the company’s business model, inevitably concerns EU data subjects. Whether the company will comply within the two-month period remains to be seen.

Italian DPA imposes a 20 Million Euro Fine on Clearview AI

29. March 2022

The Italian data protection authority “Garante” has fined Clearview AI 20 million Euros for data protection violations regarding its facial recognition technology. Clearview AI’s facial recognition system draws on over 10 billion images from the internet, and the company prides itself on having the largest biometric image database in the world. The data protection authority found Clearview AI to be in breach of numerous GDPR requirements: among other things, the processing was neither fair nor lawful, there was no legal basis for the collection of the information, and there were no appropriate transparency and data retention policies.

Last November, the UK ICO warned of a potential 17 million pound fine against Clearview and, in this context, also ordered Clearview to stop processing data.

Then, in December, the French CNIL ordered Clearview to stop processing citizens’ data and gave it two months to delete all the data it had stored, but did not mention any explicit financial sanction.

In Italy, Clearview AI must now, in addition to paying the 20 million Euro fine, not only delete all images of Italian citizens from its database but also delete the biometric information needed to search for a specific face. Furthermore, the company must designate an EU representative as a point of contact for EU data subjects and the supervisory authority.

The Government of India plans one of the largest Facial Recognition Systems in the World

14. February 2020

The Indian Government released a Request for Proposal to procure a national Automated Facial Recognition System (AFRS). Bidding companies had until the end of January 2020 to submit their proposals. The plans for an AFRS in India are a new political development that comes amidst the intention to pass the country’s first national Data Protection Bill in Parliament.

The new system is supposed to centrally integrate the image databases of public authorities as well as incorporate photographs from newspapers, raids, mugshots and sketches. Recordings from surveillance cameras and public or private video feeds will then be compared against the centralised databases to help identify criminals, missing persons and dead bodies.

Human rights and privacy groups are pointing to various risks that may come with implementing a nationwide AFRS in India, including violations of privacy, arbitrariness, misidentifications, discriminatory profiling, a lack of technical safeguards, and even the creation of an Orwellian 1984 dystopia through mass surveillance.

However, many people in India are receiving the news about the Government’s plans with acceptance and approval. They hope that the AFRS will lead to better law enforcement and more security in their everyday lives, as India has a comparatively high crime rate and only 144 police officers for every 100.000 citizens, compared to 318 per 100.000 citizens in the EU.

NIST examines the effect of demographic differences on face recognition

31. December 2019

As part of its Face Recognition Vendor Test (FRVT) program, the U.S. National Institute of Standards and Technology (NIST) conducted a study that evaluated face recognition algorithms submitted by industry and academic developers for their ability to perform various tasks. The study evaluated 189 software algorithms submitted by 99 developers, focusing on how well each algorithm performs one of two tasks that are among the most common applications of face recognition.

The first task is “one-to-one” matching, i.e. confirming that a photo matches a different photo of the same person in a database. This is used, for example, when unlocking a smartphone or checking a passport. The second task is “one-to-many” matching, i.e. determining whether the person in the photo has any match in a database. This is used to identify a person of interest.
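
Both tasks can be thought of as comparisons between face embeddings; they differ only in whether a probe image is scored against a single enrolled reference or against an entire gallery. The following Python sketch is purely illustrative: the cosine-similarity measure and the 0.6 threshold are assumptions, not details of any algorithm NIST tested.

```python
import numpy as np

THRESHOLD = 0.6  # hypothetical decision threshold; real systems tune this value

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (higher = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one(probe: np.ndarray, reference: np.ndarray) -> bool:
    """Verification: does the probe match this single enrolled reference?"""
    return cosine_similarity(probe, reference) >= THRESHOLD

def one_to_many(probe: np.ndarray, gallery: dict):
    """Identification: return the best-scoring gallery identity above the
    threshold, or None if nobody in the gallery is similar enough."""
    best_id, best_score = None, THRESHOLD
    for identity, reference in gallery.items():
        score = cosine_similarity(probe, reference)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```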

A special focus of this study was the performance of the individual algorithms when demographic factors are taken into account. For one-to-one matching, only a few previous studies had examined demographic effects; for one-to-many matching, there were none.

To evaluate the algorithms, the NIST team used four photo collections containing 18.27 million images of 8.49 million people. All were taken from operational databases of the State Department, the Department of Homeland Security and the FBI. The team did not use images taken directly from internet sources such as social media or from video surveillance. The photos in the databases contained metadata indicating the age, gender, and either race or country of birth of each person.

The study found that the results ultimately depend on the algorithm at the heart of the system, the application that uses it, and the data it is fed with. Still, the majority of face recognition algorithms exhibited demographic differentials. In one-to-one matching, the algorithms produced false matches (rating photos of two different people as the same person) more often for Asian and African-American faces than for white faces. In algorithms developed in the United States, the same error also occurred more often for Native Americans. In contrast, algorithms developed in Asia did not show such a significant difference in one-to-one matching results between Asian and Caucasian faces. These results suggest that algorithms can be trained to achieve accurate face recognition results across groups by using a sufficiently wide range of training data.
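
Such demographic differentials are typically reported as false match rates computed separately per group. As a rough illustration only (the record format and group labels below are hypothetical, not NIST’s data), a per-group false match rate could be computed like this:

```python
from collections import defaultdict

# Hypothetical comparison records: (group, is_genuine_pair, predicted_match).
# An impostor pair (is_genuine_pair=False) predicted as a match is a false match.
trials = [
    ("group_a", False, True),
    ("group_a", False, False),
    ("group_b", False, False),
    ("group_b", True, True),
]

def false_match_rate_by_group(records):
    """Share of impostor pairs wrongly accepted as matches, per group."""
    impostors = defaultdict(int)
    false_matches = defaultdict(int)
    for group, is_genuine, predicted in records:
        if not is_genuine:
            impostors[group] += 1
            if predicted:
                false_matches[group] += 1
    return {g: false_matches[g] / impostors[g] for g in impostors}

print(false_match_rate_by_group(trials))  # e.g. {'group_a': 0.5, 'group_b': 0.0}
```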

CNIL publishes report on facial recognition

21. November 2019

The French Data Protection Authority, Commission Nationale de l’Informatique et des Libertés (CNIL), has released guidelines concerning the experimental use of facial recognition software by French public authorities.

Especially concerned with the risks of using such a technology in the public sector, the CNIL made it clear that the use of facial recognition carries vast political and societal implications and risks. In its report, the CNIL explicitly stated that the software can yield very biased results, since the algorithms are not 100% reliable and the rate of false positives can vary depending on the gender and ethnicity of the individuals being recorded.

To minimize the chances of an unlawful use of the technology, the CNIL set out three main requirements in its report and recommended that public authorities using facial recognition in an experimental phase comply with them in order to keep the risks to a minimum.

The three requirements put forth in the report are as follows:

  • Facial recognition should only be put to experimental use if there is an established need to implement an authentication mechanism with a high level of reliability, and there should be no less intrusive method applicable to the situation.
  • The controller must under all circumstances respect the rights of the individuals being recorded. That extends to the necessity of consent for each device used, data subjects’ control over their own data, information obligations, and transparency about the use and purpose, etc.
  • The experimental use must follow a precise timeline and be based on a rigorous methodology in order to minimize the risks.

The CNIL also states that it is important to evaluate each use of the technology on a case-by-case basis, as the risks can vary between controllers depending on the way the software is used.

While the CNIL wishes to draw red lines for the use of facial recognition in the future, it has also made clear that it will fulfill its role by providing support for issues that may arise, giving counsel on the legal and methodological aspects of using facial recognition in an experimental stage.

High Court dismisses challenge regarding Automated Facial Recognition

12. September 2019

On 4 September, the High Court of England and Wales dismissed a challenge to the police’s use of Automated Facial Recognition Technology (“AFR”). The court ruled that the use of AFR was proportionate and necessary to meet the legal obligations of the police.

The pilot project AFR Locate was deployed at certain events and in public places where the commission of crimes was considered likely. The system can detect up to 50 faces per second. Detected faces are then compared, by way of biometric data analysis, against wanted persons registered in police databases. If no match is found, the images are deleted immediately and automatically.
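
The internals of AFR Locate are not public, but the behaviour described above (detect faces, compare against a watchlist, keep only matches) can be sketched as follows. This is a minimal sketch under stated assumptions: detect_faces, embed and similarity are hypothetical helper functions, and the 0.6 threshold is an assumption.

```python
MATCH_THRESHOLD = 0.6  # assumed similarity cutoff, not the real system's value

def process_frame(frame, watchlist, detect_faces, embed, similarity):
    """Compare every face in a video frame against a watchlist of wanted persons.

    detect_faces, embed and similarity are assumed helpers: a face detector,
    an embedding model, and a similarity measure such as cosine similarity.
    Faces that match no watchlist entry are simply never stored, mirroring
    the immediate and automatic deletion described above.
    """
    alerts = []
    for face_crop in detect_faces(frame):
        probe = embed(face_crop)
        for person_id, reference in watchlist.items():
            if similarity(probe, reference) >= MATCH_THRESHOLD:
                alerts.append((person_id, face_crop))  # retained for review
                break
    return alerts  # everything not in alerts is discarded with the frame
```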

An individual initiated judicial review proceedings after he had not been identified as a wanted person but was likely to have been captured by AFR Locate. He considered this to be unlawful, in particular due to a violation of the right to respect for private and family life under Article 8 of the European Convention on Human Rights (“ECHR”) and of data protection law in the United Kingdom. In his view, the police did not respect the data protection principles. In particular, their approach would violate Section 35 of the Data Protection Act 2018 (“DPA 2018”), which requires the processing of personal data for law enforcement purposes to be lawful and fair. He also argued that the police had failed to carry out an adequate data protection impact assessment (“DPIA”).

The Court stated that the use of AFR affected a person’s rights under Article 8 of the ECHR and that this type of biometric data has a private character in itself. Even though the images were erased immediately, the procedure constituted an interference with Article 8 of the ECHR, since it suffices that the data is stored temporarily.

Nevertheless, the Court found that the police’s action was in accordance with the law, as it falls within the police’s public law powers to prevent and detect criminal offences. The Court also found that the use of the AFR system was proportionate and that the technology was used openly, transparently and with considerable public engagement, thus fulfilling all existing criteria: it was only used for a limited period, for a specific purpose, and its deployment was announced in advance (e.g. on Facebook and Twitter).

With regard to data protection law, the Court considered that the captured images of individuals constitute personal data even if they do not correspond to the watchlists of persons sought, because the technology has singled them out and distinguished them from others. Nevertheless, the Court held that there was no violation of data protection principles, for the same reasons for which it denied a violation of Art. 8 ECHR. The Court found that the processing fulfilled the conditions of lawfulness and fairness and was necessary for the legitimate interest of the police in the prevention and detection of criminal offences, as required by their public service obligations. The requirement of Sec. 35 (5) DPA 2018 that the processing be strictly necessary was fulfilled, as was the requirement that the processing be necessary for the exercise of the functions of the police.

The last requirement under Sec. 35 (5) of the DPA 2018 is that a suitable policy document be in place to govern the processing. The Court considered the relevant policy document in this case to be short and incomplete. Nevertheless, it declined to rule on whether the document was adequate and stated that it would leave that judgment to the Information Commissioner’s Office (“ICO”), which would be publishing more detailed guidance.

Finally, the Court found that the impact assessment carried out by the police was sufficient to meet the requirements of Sec. 64 of DPA 2018.

The ICO stated that it would take the High Court ruling into account when finalising its recommendations and guidelines for the use of live facial recognition systems.

Swedish DPA imposes its first GDPR fine

23. August 2019

The Swedish Data Protection Authority “datainspektionen” has imposed its first fine since the General Data Protection Regulation (GDPR) entered into force.

The fine concerns a high school in Skelleftea in the north of Sweden, where 22 pupils were part of a pilot programme to monitor attendance times using facial recognition.

In January 2019, the IT company Tieto announced that it was testing the registration of students’ attendance at the school using tags, smartphone apps and facial recognition software. In Sweden, it is mandatory for teachers to report the presence of all students in each lesson to the supervisors. According to Tieto, teachers at the school in Skelleftea spend around 18,000 hours a year on this registration. Therefore, a class was selected to test registration by facial recognition for eight weeks, and parents and students were asked to give their consent.

However, the Swedish data protection authority has now found that the way in which consent was obtained violates the GDPR because of the clear imbalance between controller and data subjects. Additionally, the school failed to conduct a data protection impact assessment, which would have included seeking prior consultation with datainspektionen.

Therefore, the DPA imposed a fine of SEK 200.000 (approximately EUR 20.000). In Sweden, public authorities can be fined up to SEK 10.000.000 (approximately EUR 1.000.000).

Facial recognition data may become purchasable for private companies in Australia

5. December 2017

The Australian government is considering making facial recognition data available for private companies.

By paying a fee, companies would gain access to data originally collected for national security purposes.

However, companies are to be restricted to cases in which the person concerned has given his or her consent.

In an interview with The Guardian, Monique Mann, a director of the Australian Privacy Foundation and a lecturer at the faculty of law at the Queensland University of Technology, said that requiring companies to ask for consent may not be enough to protect consumers’ rights or mitigate the risks involved with biometric data, and that the scheme would encourage firms to store more data.

As also reported by The Guardian, the government struck a deal with states and territories over the controversial national facial recognition database last month. According to documents that predate the agreement, about 50% of the population was already included in the database at that time.

With the help of state and territory governments, the federal Attorney General’s Department planned to expand that number to cover 85% of Australians.

Moscow adds facial recognition to its network of surveillance cameras

2. October 2017

Moscow is adding facial recognition to its network of 170.000 surveillance cameras across the city in order to identify criminals and boost security, Bloomberg reports. The camera surveillance started in 2012. Recordings from the system are held for five days after capture, with some 20 million hours of video material stored at any one time. “We soon found it impossible to process such volumes of data by police officers alone,” said Artem Ermolaev, Head of the Department of Information Technology in Moscow, according to Bloomberg. “We needed an artificial intelligence to help find what we are looking for,” he added.

A Russian start-up named N-Tech.Lab Ltd designed the facial recognition technology. The start-up is known for its mobile app FindFace, released last year, which makes it possible to search for users of the Russian social network VKontakte by taking a picture of a person’s face and matching it against VKontakte user profiles.

However, due to high costs, the facial recognition technology will not be deployed to every camera but will instead be installed selectively in the districts where it is needed most. The Moscow government reportedly already spends about $86 million a year to maintain the camera surveillance, and this amount would triple if every camera used the new facial recognition technology.

The new technology is used to cross-reference images captured by the cameras with those from the Interior Ministry’s database.