Sep 8, 2020 - 3 mins read
Smart devices are constantly amassing personal data about you – your location, your health metrics, your spending habits… Most recently, smart devices and public cameras have started collecting data about what you look like. Facial recognition is now widely used in tech to improve the consumer experience. But in a February 2020 Civic Science study, the majority of people surveyed said they were uncomfortable with tech companies collecting data through facial recognition. When you consider the privacy implications of facial recognition data, it is no wonder that consumers are worried…
When it comes to data identifying a person’s face, the privacy concerns are real. Could that data be stolen and used to access your phone through Apple’s Face ID? Could hackers use your facial images to facilitate identity theft? There are many ways this data can be turned against the very people it is supposed to help.
Most importantly, this isn’t just a hypothetical – it is a real problem. In February 2020, various news outlets reported that a company called Clearview AI had been hacked. The company had amassed a database of more than 3 billion facial images scraped from Facebook, YouTube, Twitter, and other publicly available sources – images that remained in its database even after users deleted them from their social media accounts. Clearview AI’s business was providing these facial photos to law enforcement on request. Although the company denied that its servers had been breached, the hack exposed its client list. Regardless of whether the images themselves were accessed this time, the incident shows that breaches are real and illustrates the danger of holding such a trove of facial images.
The main concern is consent. Where facial recognition is used in secret, without consent, people can’t take action against it. Luckily, under the GDPR, sensitive personal data – including facial images – are protected. In 2019, we saw the first such fine, issued to a Swedish school that had secretly run a facial recognition trial on a group of students, with the intention of tracking when each student entered the classroom each day. The Swedish Data Protection Authority delivered a fine under the GDPR on the basis of a lack of consent and a failure to “do an adequate impact assessment including seeking prior consultation with the Swedish DPA”. So the GDPR covers at least some facets of facial recognition – but that protection only applies to European consumers.
What about other areas of the world? Well, when new technology is created, the laws and policies regulating it often lag behind, and facial recognition is no different. In Canada, there are no policies governing the collection of facial images, given how new the technology is. This permits not only private companies but also government agencies and law enforcement to use a person’s facial image. Privacy laws in the U.S. vary state by state, with no consistent federal law, despite activists fighting for one. In 2019, a group of activists opposed to facial recognition staged a demonstration in Washington in which they scanned nearly 14,000 people’s faces without consent, to show just how badly U.S. federal law is failing to protect people’s privacy.
An article by Sam Dupont in Nextgov sums it up perfectly: “Without legal safeguards, this technology will undermine democratic value.” Facial recognition can be used to dampen activism when deployed at protests. It can be used nefariously by agencies such as Immigration and Customs Enforcement. Hackers can exploit this information to cause harm, for financial or other gain. If we are to continue developing technology this powerful, we need to ensure we do so in a way that is secure, consensual, and accurate, to best protect everyone.