12. December 2019
The new update of the Indian Personal Data Protection Bill is part of India’s broader efforts to tightly control the flow of personal data.
The bill’s latest version empowers the government to ask companies for anonymized personal data, as well as other non-personal data, to help it deliver governmental services and formulate privacy policies. The draft defines “personal data” as information that can help identify a person, including characteristics, traits and any other features of a person’s identity. “Sensitive personal data” additionally includes financial and biometric data. According to the draft, such “sensitive” data can be transferred outside India for processing, but must be stored locally.
Furthermore, social media platforms will be required to offer a mechanism for users to prove their identities and display a verification sign publicly. Such requirements would raise a host of technical issues for companies such as Facebook and WhatsApp.
As a result, the new bill could change the way companies process, store and transfer Indian consumers’ data, and could therefore create compliance difficulties for top technology companies.
28. November 2019
In August 2018, the National Congress of Brazil passed a new General Data Protection Law (“Lei Geral de Proteção de Dados” or “LGPD”), slated to come into effect in August 2020. Prior to the LGPD, data protection in Brazil was primarily enforced through a patchwork of legal frameworks, including the country’s Civil Rights Framework for the Internet (Internet Act) and its Consumer Protection Code.
The new legislation creates a completely new general framework for the processing of personal data of individuals in Brazil, regardless of where the data processor is located. Brazil has also established its own Data Protection Authority (DPA) to enforce the law. Although the DPA will initially be tied to the Presidency of the Federative Republic of Brazil, it is slated to become autonomous in the long term, within about two years.
Like the GDPR, the new framework applies extraterritorially: the law covers any individual or organization, private or public, that processes or collects personal data in Brazil, regardless of where the processor is based. The LGPD does not apply to data processing for strictly personal, academic, artistic or journalistic purposes.
Although the LGPD is largely influenced by the GDPR, the two frameworks differ in several respects. For instance, they define personal data differently: the LGPD’s definition is broad and covers any information relating to an identified or identifiable natural person. Furthermore, the LGPD does not permit cross-border transfers based on the controller’s legitimate interest. And while the GDPR sets a 72-hour deadline for data breach notification, the LGPD defines the deadline only loosely, to name just a few differences.
21. November 2019
The French Data Protection Authority, the Commission Nationale de l’Informatique et des Libertés (CNIL), has released guidelines concerning the experimental use of facial recognition software by French public authorities.
Especially concerned with the risks of using such technology in the public sector, the CNIL made clear that the use of facial recognition has significant political and societal implications and risks. In its report, the CNIL explicitly stated that the software can yield biased results, since the algorithms are not fully reliable and the false-positive rate can vary with the gender and ethnicity of the individuals recorded.
To minimize the chances of unlawful use of the technology, the CNIL set out three main requirements in its report, recommending that public authorities experimenting with facial recognition comply with them in order to keep the risks to a minimum.
The three requirements put forth in the report are as follows:
- Facial recognition should only be put to experimental use if there is an established need for an authentication mechanism with a high level of reliability, and no less intrusive method is applicable to the situation.
- The controller must under all circumstances respect the rights of the individuals being recorded. This extends to obtaining consent for each device used, ensuring data subjects’ control over their own data, and meeting information and transparency obligations regarding use and purpose.
- The experimental use must follow a precise timeline and be based on a rigorous methodology in order to minimize the risks.
The CNIL also states that it is important to evaluate each use of the technology on a case-by-case basis, as the risks vary with the way the software is used and between controllers.
While the CNIL wishes to draw red lines around the use of facial recognition in the future, it has also made clear that it will fulfill its role by offering support and counsel on the legal and methodological questions that may arise from the experimental use of facial recognition.
14. October 2019
Apple wants to resume evaluating Siri recordings in the future. After it became public that Apple automatically saved the audio recordings of Siri queries and had some of them evaluated by employees of external companies, the company stopped this procedure. Although Apple stated that fewer than 0.2% of queries were actually evaluated, the system received around 10 billion queries per month (as of 2018), so even that fraction amounts to roughly 20 million queries a month.
In the future, audio recordings from the Siri voice assistant will again be stored and evaluated, this time, however, only after the user has consented. The procedure is being tested with the latest beta versions of Apple’s iOS software for iPhone and iPad.
Apple itself hopes that many users will agree and thus contribute to the improvement of Siri. Opting out later is possible at any time, though only for each device individually. In addition, only Apple’s own employees, who according to Apple are subject to strict confidentiality obligations, will evaluate the recordings. Recordings generated by an unintentional activation of Siri will be deleted entirely.
In addition, a delete function for Siri recordings is to be introduced. Users will then be able to choose in their settings to delete all data recorded by Siri. If this deletion is requested within 24 hours of a Siri request, the respective recordings and transcripts will not be released for evaluation.
However, even if the user does not opt in to the evaluation of his Siri recordings, a computer-generated transcript will continue to be created and kept by Apple for a certain period of time. Although these transcripts are to be anonymized and linked to a random ID, according to Apple they could still be evaluated.
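The pattern Apple describes, keeping transcripts under a random identifier instead of the user’s identity, is a common pseudonymization technique. Below is a minimal, purely illustrative Python sketch of that general pattern; it is not Apple’s actual implementation, and all names in it are hypothetical.

```python
import uuid

# Purely illustrative pseudonymization sketch (not Apple's implementation):
# transcripts are keyed by a random ID rather than the user's identity.
_pseudonyms: dict[str, str] = {}

def pseudonym_for(device_id: str) -> str:
    """Return a random ID for this device, creating one on first use."""
    if device_id not in _pseudonyms:
        _pseudonyms[device_id] = uuid.uuid4().hex
    return _pseudonyms[device_id]

def store_transcript(store: dict[str, list[str]], device_id: str, text: str) -> None:
    # Only the random ID is stored next to the transcript; the device-to-ID
    # mapping would be held separately (or discarded after a retention
    # period), so transcripts cannot be directly traced back to a person.
    store.setdefault(pseudonym_for(device_id), []).append(text)
```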
10. October 2019
The United States and the United Kingdom have entered into a first-of-its-kind CLOUD Act Data Access Agreement, which will allow both countries’ law enforcement authorities to demand authorized access to electronic data relating to serious crime. In both cases, the respective authorities are permitted to ask tech companies based in the other country for electronic data directly, without legal barriers.
The bilateral Agreement is based on the U.S. Clarifying Lawful Overseas Use of Data Act (CLOUD Act), which came into effect in March 2018 and aims to improve the procedures by which U.S. and foreign investigators obtain electronic information held by service providers in the other country. In light of the growing number of mutual legal assistance requests for electronic data from U.S. service providers, the current process for access may take up to two years. The Data Access Agreement can reduce that time considerably by allowing more efficient and effective access to the data needed, while protecting the privacy and civil liberties of the data subjects.
The CLOUD Act focuses on updating legal frameworks to keep pace with evolving electronic communications and service technologies. It further enables the U.S. and other countries to enter into mutual executive Agreements allowing each to use its own legal authorities to access electronic evidence in the other country. An Agreement of this form can only be signed with rights-respecting countries, after the U.S. Attorney General has certified to the U.S. Congress that their laws provide robust substantive and procedural protections for privacy and civil liberties.
The Agreement between the U.K. and the U.S.A. further assures providers that the requested disclosures are compatible with data protection laws in both respective countries.
In addition to the Agreement with the United Kingdom, the United States and Australia announced on Monday that they are negotiating such an Agreement between their two countries. Negotiations on a Data Access Agreement have also been held between the U.S. and the European Commission, representing the European Union.
7. October 2019
The Belgian data protection authority (Gegevensbeschermingsautoriteit) has recently imposed a fine of €10,000 for a violation of the General Data Protection Regulation (GDPR). The case concerns a Belgian shop that offered data subjects only one way to obtain a customer card: the electronic identity card (eID). The eID is a national identification card containing various pieces of information about the cardholder, and the authority considered the use of this information without the customer’s valid consent to be disproportionate to the service offered.
The Authority had learnt of the case following a complaint from a customer who was denied a customer card because he did not want to provide his electronic identity card and had instead offered to send the shop his data in writing.
According to the Belgian data protection authority, this practice violates the GDPR in several respects. First, the principle of data minimisation is not respected. This principle requires the controller to limit the amount of data processed, and the duration of processing, to what is strictly necessary for the pursued purpose.
In order to create the customer card, the controller has access to all the data stored on the eID, including name, address, a photograph and the barcode associated with the national registration number. The Authority therefore believes that the use of all eID data is disproportionate to the creation of a customer card.
The DPA also considers that there is no valid consent to serve as a legal basis. According to the GDPR, consent must be freely given, specific and informed. Consent was not freely given in this case, since the customer was offered no alternative: a customer who refuses to use his electronic ID card does not receive a customer card and therefore cannot benefit from the shop’s discounts and advantages.
In view of these violations, the authority has imposed a fine of €10,000.
27. September 2019
In a landmark case on Tuesday, the Court of Justice of the European Union (CJEU) ruled that Google does not have to apply the General Data Protection Regulation’s (GDPR) “Right to be Forgotten” to its search engines outside of the European Union. The ruling is a victory for Google in a case against a fine imposed by the French Commission nationale de l’informatique et des libertés (CNIL) in 2015 in an effort to force the company and other search engines to take down links globally.
As the internet has grown into a borderless worldwide medium, this case is viewed as a test of whether people can demand a blanket removal of information about themselves from searches without overriding the principles of free speech and public interest. Around the world, it has also been perceived as a trial of whether the European Union can extend its laws beyond its own borders.
“The balance between right to privacy and protection of personal data, on the one hand, and the freedom of information of internet users, on the other, is likely to vary significantly around the world,” the court stated in its decision. The Court also noted in the judgement that the protection of personal data is not an absolute right.
While companies are thus not forced to delete sensitive information from their search engines outside the EU upon request, they must take precautions to seriously discourage internet users from switching to non-EU versions of their pages. Furthermore, companies operating search engines within the EU will have to weigh freedom of speech closely against the protection of privacy, keeping the currently common case-by-case approach to deletion requests.
Since the CJEU first established the Right to be Forgotten in 2014, Google has received over 3.3 million deletion requests and has complied with the delisting of links from its search engine in 45% of cases. As it stands, even where deletion requests are honored, links delisted from EU versions of the search engine can still be reached by using a VPN to access non-EU versions, circumventing the geoblocking. No solution to this issue has yet been found.
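To see why a VPN defeats this arrangement, consider how geoblocked delisting works in principle: the filter is applied only when a request appears to originate inside the EU. A simplified, hypothetical Python sketch follows; it is not Google’s actual implementation, and the country list and URLs are placeholders.

```python
# Hypothetical sketch of geoblocking-based delisting. The filter keys off the
# apparent country of the client, so a VPN exit node outside the EU receives
# the unfiltered result list, which is exactly the loophole described above.
EU_COUNTRIES = {"AT", "BE", "DE", "ES", "FR", "IT", "NL"}  # abbreviated list
DELISTED_URLS = {"https://example.com/old-article"}  # placeholder entry

def filter_results(results: list[str], client_country: str) -> list[str]:
    """Apply Right-to-be-Forgotten delisting only to EU-originating requests."""
    if client_country in EU_COUNTRIES:
        return [url for url in results if url not in DELISTED_URLS]
    return results  # non-EU (or VPN-masked) requests see the full list
```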
20. September 2019
As reported by the US investigative journalism platform ProPublica and the German broadcaster Bayerischer Rundfunk, millions of highly sensitive patient records were discovered freely accessible on the Internet.
Among the data sets are high-resolution X-ray images, breast cancer screenings, CT scans and other medical images. Most are accompanied by personal data such as birth dates, names, and information about the patient’s doctor and medical treatment. The data had been available for years on unprotected servers.
In Germany, around 13,000 data records are affected, and more than 16 million worldwide, including more than 5 million patients in the USA.
When X-ray or MRI images of patients are taken, they are stored on “Picture Archiving and Communication System” (PACS) servers. If these servers are not sufficiently secured, it is easy to access the data. In 2016, Oleg Pianykh, Professor of Radiology at Harvard Medical School, published a study on unsecured PACS servers. He was able to locate more than 2,700 open systems, but the study did not prompt anyone in the industry to act.
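PACS servers communicate via the DICOM protocol (commonly on ports 104 or 11112), and an unsecured server is simply one that accepts network associations without any authentication. As a rough illustration, and not the methodology of the study cited above, here is a minimal sketch using the open-source pynetdicom library; the host and port are placeholders, and such checks should only be run against systems you are authorized to test.

```python
from pynetdicom import AE
from pynetdicom.sop_class import Verification  # the DICOM C-ECHO "ping" service

# Placeholder endpoint; only probe systems you are authorized to test.
HOST, PORT = "pacs.example.org", 104

ae = AE(ae_title="AUDIT")
ae.add_requested_context(Verification)

# An unsecured PACS accepts the association without credentials and
# answers the C-ECHO, which is the kind of exposure the study found.
assoc = ae.associate(HOST, PORT)
if assoc.is_established:
    status = assoc.send_c_echo()
    if status:
        print(f"Open DICOM endpoint, C-ECHO status: 0x{status.Status:04X}")
    assoc.release()
else:
    print("Association rejected, aborted or host unreachable.")
```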
The German Federal Office for Information Security (BSI) has now informed authorities in 46 countries. It remains to be seen how they will react to the incident.
19. September 2019
On Monday, 16 September, it was revealed that detailed information on potentially every citizen of Ecuador had been freely available online as part of a massive data breach resulting from an incorrectly configured database. The leak, detected by security researchers at vpnMentor during a routine large-scale web mapping project, exposed more than 20 million individuals, including close to 7 million children, giving access to 18 GB of data.
Ecuador has close to 17 million citizens, making it possible that almost every citizen has had some data compromised. Those affected include government officials and high-profile persons such as Julian Assange and the Ecuadorian President.
In its report, vpnMentor states that it was able to trace the server back to its owner, an Ecuadorian company named Novaestrat, a consulting firm providing services in data analytics, strategic marketing and software development.
The report also lists several examples of the entries found in the database and the types of data that were leaked: full names, gender and birth information, home and e-mail addresses, telephone numbers, financial information, and family and employment information.
Access to the data has been cut off by the Ecuadorian Computer Emergency Response Team, but the highly private and sensitive nature of the leaked information could create long-lasting privacy issues for the citizens of the country.
In a Twitter post, Telecommunications Minister Andres Michelena announced that the data protection bill, which had been in the works for months, would be submitted to the National Assembly within 72 hours. On top of that, an investigation into a possible violation of personal privacy by Novaestrat has been opened.
11. September 2019
As first reported by the Financial Times, London’s King’s Cross station has come under fire for using a live face-scanning system across its 67-acre site. The site’s developer, Argent, confirmed that the system has been used to ensure public safety, as part of a number of detection and tracking methods employed for surveillance at the famous train station. While the site is privately owned, it is widely used by the public and houses various shops, cafes and restaurants, as well as office space with tenants such as Google.
The controversy over the technology and its legality stems from the fact that it records everyone within its range without their consent, analyzing their faces and comparing them to a database of wanted criminals, suspects and persons of interest. While Argent defended the technology, it has not yet explained what the system is, how it is used or how long it has been in place.
A day before the ICO launched its investigation, a letter from King’s Cross Chief Executive Robert Evans reached Mayor of London Sadiq Khan, explaining that the technology matches faces against a watchlist of flagged individuals. If footage is unmatched, it is blurred out and deleted; in the case of a match, it is shared only with law enforcement. The Metropolitan Police Service has stated that it supplied images to a database used for the system’s facial scans, though it claims not to have done so since March 2018.
Despite this explanation and the clear statements that the software complies with England’s data protection laws, the Information Commissioner’s Office (ICO) has launched an investigation into the technology and its use in the private sector. Businesses would need to explicitly demonstrate that the use of such surveillance technology is strictly necessary and proportionate for their legitimate interests and public safety. In her statement, Information Commissioner Elizabeth Denham said she is deeply concerned, since “scanning people’s faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all,” especially if it is being done without their knowledge.
The controversy has sparked demand for a law on facial recognition, igniting a dialogue about new technologies and how to future-proof against the still unknown privacy issues they may cause.