Tag: biometric data

Italian DPA imposes a 20 Mio Euro Fine on Clearview AI

29. March 2022

The Italian data protection authority “Garante” has fined Clearview AI 20 million Euros for data protection violations related to its facial recognition technology. Clearview AI’s facial recognition system draws on over 10 billion images collected from the internet, and the company prides itself on operating the largest biometric image database in the world. The authority found Clearview AI in breach of numerous GDPR requirements: processing was neither fair nor lawful, there was no legal basis for the collection of the information, and the company lacked appropriate transparency and data retention policies.

Last November, the UK ICO warned of a potential 17 million pound fine against Clearview and, in the same context, ordered the company to stop processing data.

Then, in December, the French CNIL ordered Clearview to stop processing citizens’ data and gave it two months to delete all the data it had stored, but did not mention any explicit financial sanction.

In Italy, Clearview AI must now, in addition to paying the 20 million Euro fine, not only delete all images of Italian citizens from its database but also delete the biometric information needed to search for a specific face. Furthermore, the company must designate an EU representative as a point of contact for EU data subjects and the supervisory authority.

London’s King’s Cross station facial recognition technology under investigation by the ICO

11. September 2019

Initially reported by the Financial Times, London’s King’s Cross station has come under fire for using a live face-scanning system across its 67-acre site. Argent, the site’s developer, confirmed that the system has been used to ensure public safety as one of a number of detection and tracking methods deployed for surveillance at the famous train station. While the site is privately owned, it is widely used by the public and houses various shops, cafes, restaurants and office spaces, with tenants such as Google.

The controversy over the technology and its legality stems from the fact that it records everyone within its range without their consent, analyzing their faces and comparing them to a database of wanted criminals, suspects and persons of interest. While the developer Argent has defended the technology, it has not yet explained what the system is, how it is used or how long it has been in place.

A day before the ICO launched its investigation, a letter from King’s Cross Chief Executive Robert Evans reached Mayor of London Sadiq Khan, explaining that the technology matches faces against a watchlist of flagged individuals. If footage is unmatched, it is blurred out and deleted; in case of a match, it is shared only with law enforcement. The Metropolitan Police Service has stated that it supplied images for a database used by the system to carry out facial scans, though it claims not to have done so since March 2018.

Despite the explanation and the distinct statements that the software abides by England’s data protection laws, the Information Commissioner’s Office (ICO) has launched an investigation into the technology and its use in the private sector. Businesses would need to explicitly demonstrate that the use of such surveillance technology is strictly necessary and proportionate for their legitimate interests and public safety. In her statement, Information Commissioner Elizabeth Denham further said that she is deeply concerned, since “scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all,” especially if it is being done without their knowledge.

The controversy has sparked a demand for a law about facial recognition, igniting a dialogue about new technologies and future-proofing against the yet unknown privacy issues they may cause.


Millions of unencrypted biometric data discovered on the internet

19. August 2019

The Israeli security researchers Noam Rotem and Ran Locar discovered the unprotected and mostly unencrypted database of Biostar 2 during an Internet search.

Biostar 2 is a web-based biometric locking system that provides centralized control of access to secure facilities such as warehouses and office buildings. The researchers were given access to over 27.8 million records and 23 gigabytes of data, including fingerprint data, facial recognition data, facial photos of users, user names and passwords, and protocols for accessing facilities. Among others, the system is used by the British Metropolitan Police, insurance companies and banks.

Rotem told the Guardian: “The access allows first of all seeing millions of users are using this system to access different locations and see in real time which user enters which facility or which room in each facility, even.”
He also stated that they were able to change data and add new users. They could therefore have attached their own photo and fingerprint to an existing user account to gain access to the buildings that user was authorised for, or created an entirely new user with their own photo and fingerprints.

The severity of this data breach is particularly high because Biostar 2 is used in 1.5 million locations around the world and fingerprints, unlike passwords, cannot be changed.
Before Rotem and Locar turned to the Guardian, they made several attempts to contact Suprema, the security company responsible for Biostar 2. The vulnerability has since been closed.

To the Guardian, Suprema’s marketing director said they had conducted an “in-depth evaluation” of the information provided: “If there has been any definite threat on our products and/or services, we will take immediate actions and make appropriate announcements to protect our customers’ valuable businesses and assets.”

Rotem said that such problems occur not only at Suprema; he contacts three or four companies a week about similar issues.

CNIL publishes model regulation on access control through biometric authentication at the workplace

9. April 2019

The French data protection authority CNIL has published a model regulation setting out the conditions under which devices for access control through biometric authentication may be introduced at the workplace.

Pursuant to Article 4 paragraph 14 of the General Data Protection Regulation (GDPR), biometric data are personal data relating to the physical, physiological or behavioural characteristics of a natural person, obtained by means of specific technical processes, which enable or confirm the unambiguous identification of that natural person. According to Article 9 paragraph 4 GDPR, the member states of the European Union may introduce or maintain additional conditions, including restrictions, as far as the processing of biometric data is concerned.

The basic requirement under the model regulation is that the controller proves that biometric data processing is necessary. To this end, the controller must explain why the use of other means of identification or organisational and technical safeguards is not appropriate to achieve the required level of security.

Moreover, the choice of biometric types must be specifically explained and documented by the employer. This also includes the justification for the choice of one biometric feature over another. Processing must be carried out for the purpose of controlling access to premises classified by the company as restricted or of controlling access to computer devices and applications.

Furthermore, the model regulation of the CNIL describes which types of personal data may be collected, which storage periods and conditions apply and which specific technical and organisational measures must be taken to guarantee the security of personal data. In addition, CNIL states that before implementing data processing, the controller must always carry out an impact assessment and a risk assessment of the rights and freedoms of the individual. This risk assessment must be repeated every three years for updating purposes.

The data protection authority also points out that the model regulation does not exempt from compliance with the regulations of the GDPR, since it is not intended to replace its regulations, but to supplement or specify them.

India’s Supreme Court rules that privacy is a fundamental right

29. August 2017

In the past few years, India’s government has aimed to build the world’s largest biometric database, named Aadhaar. So far, more than a billion citizens have been registered with the identity programme, which collects eye scans and fingerprints. To ensure that all citizens enrolled in the Aadhaar database, the government restricted access to government services for those not registered.

Critics expressed concerns about the implications of possible future data breaches, jeopardising the privacy of more than a billion Indians. It was also feared that the Indian government could use the database for surveillance purposes.

Last week, a nine-member panel of India’s Supreme Court ruled that a right to privacy is part of article 21 of the Constitution of India. This historic ruling could result in the abrogation of mandatory enrolment in the Aadhaar database. Furthermore, any future laws aiming to restrict privacy will now “have to be tested on the touchstone of article 21”. It remains to be seen whether the ruling will also have lasting effects on the civil liberties and daily life of Indians.

Facial recognition on the rise

4. August 2017

At Australian airports, new technology will be rolled out to help process passengers by means of facial recognition. Peter Dutton, Minister for Immigration and Border Protection, said that 105 smart gates will be provided for this purpose as part of an AU$22.5 million contract with Vision-Box Australia. Vision-Box has already implemented a facial recognition system at New York’s JFK airport.

The Australian government’s goal is to automate 90% of air traveller processing by 2020. After implementation, passengers will no longer have to show their passports but will be processed by biometric recognition of their faces, irises and/or fingerprints.

Meanwhile, at Berlin’s Südkreuz station the testing of a facial recognition system began. The software can recognise known suspects and alert the police. Currently, the software is only scanning the faces of 250 volunteers. Thomas de Maizière, the German interior minister, aims at improving security in Germany after several terrorist attacks.

However, privacy activists as well as well-respected lawyers have raised concerns over this technology. They fear that Germany could be heading towards a surveillance state and argue that there is no constitutional basis for the use of these methods.

INTERPOL suggests that governments share terrorists’ biometric data

11. November 2016

The IAPP has published an article reporting that INTERPOL is calling on governments around the world to share terrorists’ biometric data in order to increase global security.

The statement was issued by INTERPOL’s General Assembly, which said that the organisation currently holds information on 9,000 terrorists. However, only 10 percent of these files include biometric information. INTERPOL’s Secretary General, Jürgen Stock, explains that this can be seen as “a weak link” in the prevention of terrorism.

On the one hand, some countries – among them several ASEAN states – have taken big steps with regard to data sharing, having recently agreed to share biometric data for counter-terrorism purposes. On the other hand, many governments are still debating how to handle biometric data domestically, so sharing such data internationally would be a step further still.

However, governments worldwide are becoming more and more interested in biometric security, which might help to fight terrorism. INTERPOL’s suggestion might also increase this kind of cooperation.