Tag: facial recognition technology
29. March 2022
The Italian data protection authority “Garante” has fined Clearview AI 20 million Euros for data protection violations in connection with its facial recognition technology. Clearview AI’s facial recognition system draws on over 10 billion images collected from the internet, and the company prides itself on having the largest biometric image database in the world. The data protection authority found Clearview AI to be in breach of numerous GDPR requirements: among other things, the processing was neither fair nor lawful, there was no legal basis for the collection of the data, and the company lacked adequate transparency and data retention policies.
Last November, the UK ICO warned of a potential 17 million pound fine against Clearview and, in this context, also ordered the company to stop processing data.
Then, in December, the French CNIL ordered Clearview to stop processing citizens’ data and gave it two months to delete all the data it had stored, but did not impose any explicit financial sanction.
In Italy, in addition to paying the 20 million Euro fine, Clearview AI must now delete all images of Italian citizens from its database, along with the biometric information needed to search for a specific face. Furthermore, the company must designate an EU representative to act as a point of contact for EU data subjects and the supervisory authority.
25. February 2021
The business model of the US company Clearview AI is coming under increasing pressure worldwide. The company has collected billions of facial photos from publicly available sources, in particular from social networks such as Facebook, Instagram, YouTube and similar services. Data subjects were not informed of the collection and use of their facial photos. Using the photos, Clearview AI created a comprehensive database and used it to develop an automated facial recognition system. The system’s customers are primarily law enforcement agencies and other prosecuting authorities in the US, but companies can also make use of it. In total, Clearview AI has around 2,000 customers worldwide and a database of around 3 billion images.
After a comprehensive investigation by the New York Times drew attention to the company in January 2020, data protection authorities in various countries are now also voicing opposition to its business practices.
The Hamburg Data Protection Commissioner had already issued an order against Clearview AI in January 2021, requiring the company to delete the biometric data of a Hamburg citizen who had complained to the authority about the storage. The decision was based on the grounds that there was no legal basis for processing the sensitive data and that the company was creating profiles by collecting photos over an extended period of time.
Now, several Canadian data protection authorities have also deemed Clearview AI’s actions illegal. In a statement, the Canadian Privacy Commissioner describes the activities as mass surveillance and an affront to the privacy rights of data subjects. The Canadian federal authority published a final report on its investigation into the Clearview AI case, finding that the company had violated Canadian federal privacy law.
Notably, the Canadian authorities consider the data collection unlawful even if Clearview AI were to obtain the data subjects’ consent, arguing that the very purpose of the data processing is unlawful. They demand that Clearview AI cease offering its service in Canada and delete the data already collected from Canadian citizens.
The pressure on Clearview AI is also growing because the companies from whose platforms the data was collected oppose the practice. In addition, the association “noyb”, led by data protection activist Max Schrems, has taken up the Clearview AI case, and various European data protection authorities have announced that they will take action against the facial recognition system.
12. September 2019
On 4 September, the High Court of England and Wales dismissed a challenge to the police’s use of Automated Facial Recognition Technology (“AFR”). The court ruled that the use of AFR was proportionate and necessary to meet the legal obligations of the police.
The pilot project, AFR Locate, was deployed at certain events and in public places where the commission of crimes was considered likely. The system can detect up to 50 faces per second; the detected faces are then compared, by way of biometric data analysis, with wanted persons registered in police databases. If no match is found, the images are deleted immediately and automatically.
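The technical details of AFR Locate are not public. Purely as an illustration of the match-or-delete logic described above, the following minimal Python sketch compares face embeddings against a watchlist and keeps only matching captures; the embedding function, the similarity threshold and the toy data are hypothetical stand-ins, not the actual system.

    import numpy as np

    MATCH_THRESHOLD = 0.8  # hypothetical similarity cutoff, not the real system's value

    def embed(face_image):
        # Stand-in for a trained face-embedding model: flatten and
        # L2-normalise so that a dot product equals cosine similarity.
        vec = np.asarray(face_image, dtype=np.float64).ravel()
        return vec / (np.linalg.norm(vec) + 1e-12)

    def process_frame(detected_faces, watchlist):
        # Compare each detected face against the enrolled templates.
        # Non-matching captures are simply never stored, mirroring the
        # immediate, automatic deletion described in the ruling.
        alerts = []
        for face in detected_faces:
            probe = embed(face)
            best_id = max(watchlist, key=lambda pid: float(probe @ watchlist[pid]))
            best_score = float(probe @ watchlist[best_id])
            if best_score >= MATCH_THRESHOLD:
                alerts.append((best_id, best_score))
        return alerts

    # Toy demonstration: one enrolled template, two probe "faces".
    rng = np.random.default_rng(0)
    enrolled = rng.normal(size=(32, 32))
    watchlist = {"person_001": embed(enrolled)}
    probes = [enrolled + rng.normal(0, 0.05, (32, 32)), rng.normal(size=(32, 32))]
    print(process_frame(probes, watchlist))  # only the first, near-identical probe matches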
An individual initiated judicial review proceedings after he was likely captured by AFR Locate despite not being identified as a wanted person. He considered this to be unlawful, in particular because of a violation of the right to respect for private and family life under Article 8 of the European Convention on Human Rights (“ECHR”) and of data protection law in the United Kingdom. In his view, the police had not respected the data protection principles; in particular, their approach violated Section 35 of the Data Protection Act 2018 (“DPA 2018”), which requires the processing of personal data for law enforcement purposes to be lawful and fair. He also argued that the police had failed to carry out an adequate data protection impact assessment (“DPIA”).
The Court stated that the use of AFR affected the claimant’s rights under Article 8 of the ECHR, as this type of biometric data has an intrinsically private character. Even though the images were erased immediately, the procedure constituted an interference with Article 8 of the ECHR, since it suffices that the data was stored temporarily.
Nevertheless, the Court found that the police’s actions were in accordance with the law, as they fell within the police’s public law powers to prevent and detect criminal offences. The Court also found the use of the AFR system to be proportionate: the technology was used openly, transparently and with considerable public engagement, for a limited period and a specific purpose, and its deployment was announced in advance (e.g. on Facebook and Twitter).
With regard to data protection law, the Court held that the captured images of individuals constitute personal data even where they do not match the watchlists of wanted persons, because the technology singles those individuals out and distinguishes them from others. Nevertheless, the Court found no violation of the data protection principles, for the same reasons for which it denied a violation of Art. 8 ECHR. The processing satisfied the conditions of lawfulness and fairness and was necessary for the legitimate interest of the police in the prevention and detection of criminal offences, as required by their public service obligations. The requirement of Sec. 35 (5) DPA 2018 that the processing be strictly necessary was fulfilled, as was the requirement that the processing be necessary for the exercise of the functions of the police.
The final requirement under Sec. 35 (5) DPA 2018 is that a suitable policy document be in place to govern the processing. The Court considered the relevant policy document in this case to be brief and incomplete. Nevertheless, it declined to rule on whether the document was adequate, leaving that assessment to the Information Commissioner’s Office (“ICO”), which intends to publish more detailed guidance.
Finally, the Court found that the impact assessment carried out by the police was sufficient to meet the requirements of Sec. 64 DPA 2018.
The ICO stated that it would take the High Court ruling into account when finalising its recommendations and guidance on the use of live facial recognition systems.
15. May 2019
San Francisco is the first major city in the US to ban the use of facial recognition software by its authorities. On 14 May, the Board of Supervisors decided that the risk of civil rights violations from the use of such technology far outweighs its claimed benefits. Under the decision, the municipal police and other municipal authorities may not acquire, hold or use any facial recognition technology in the future.
The proposal rests on the concern that facial recognition software threatens to exacerbate racial injustice and to erode “the ability to live free from constant monitoring by the government”. Civil rights advocates and researchers warn that, should governmental oversight fail, the technology could easily be misused to monitor immigrants or to unjustly target African-Americans or low-income neighborhoods.
Aaron Peskin, the city supervisor who sponsored the bill, said that the ban sends a particularly strong message to the nation, coming from a city transformed by tech. The ban is part of broader legislation aimed at restricting the use of surveillance technologies. However, airports, ports and other facilities operated by federal authorities, as well as businesses and private users, are explicitly exempt from the ban.