Our Blog

How Privacy Laws are Failing to Protect your Face

Sep 8, 2020 - 3 mins read

Smart devices are constantly amassing personal data about you – from your location and health metrics to your spending habits. Most recently, your smart devices and public cameras have started collecting data about what you look like. Facial recognition has become widely used in tech to improve the consumer experience. But in a February 2020 CivicScience study, the majority of people surveyed said that they were uncomfortable with tech companies collecting data through facial recognition. When you consider the privacy implications of facial recognition data, it is no wonder that consumers are worried.


When it comes to data identifying a person’s face, the privacy concerns are real. Could that data be stolen and used to access your phone through Apple’s Face ID? Could hackers use your facial images to facilitate identity theft? There are many ways that this data can be used to cause harm to those it is supposed to be helping. 


Most importantly, this isn’t just a hypothetical – it is a real problem. In February 2020, various news outlets reported that a company called Clearview AI had been hacked. The company had built a database of more than 3 billion facial images by scraping Facebook, YouTube, Twitter, and other publicly available sources; these images remained in its database even after users deleted them from their social media accounts. Clearview AI was in the business of providing these facial photos to law enforcement on request. Although the company denied that its servers had been breached, the hack exposed its client list. Regardless of whether the images themselves were accessed this time, the incident shows that breaches are real and illustrates the danger of holding such a trove of facial images.


The main concern is consent. Where facial recognition is used in secret, without consent, people can’t take action against it. Luckily, under the GDPR, sensitive personal data are protected, and this extends to facial images. In 2019, we saw the first such fine in Sweden, issued to a school that had been secretly running a facial recognition trial to track a group of students. The intention was to record when each student entered the classroom each day. The Swedish Data Protection Authority (the Swedish DPA) delivered a fine under the GDPR on the basis of a lack of consent and a failure to “do an adequate impact assessment including seeking prior consultation with the Swedish DPA”. It seems that the GDPR covers at least some facets of facial recognition, but that protection only applies to European consumers.


What about other areas of the world? When new technology is created, the laws and policies regulating it often lag behind, and facial recognition is no different. In Canada, there are no specific policies on the collection of facial images, given how new the technology is. Not only does this permit private companies to use a person’s facial image, it also allows government agencies and law enforcement to do so. Privacy laws in the U.S. vary state by state, with no consistent federal law, despite activists fighting for one. In 2019, a group of activists opposed to facial recognition conducted a demonstration in Washington in which they scanned nearly 14,000 people’s faces without consent, to prove just how badly U.S. federal law is failing to protect people’s privacy.


An article by Sam Dupont in Nextgov sums it up perfectly: “Without legal safeguards, this technology will undermine democratic value.” Facial recognition can dampen activism when it is deployed at protests. It can be misused by agencies such as Immigration and Customs Enforcement. Hackers can exploit this information for financial or other gain. If we are to continue developing technology this powerful, we need to ensure we are doing so in a way that is secure, consensual, and accurate, so that it protects everyone.




Sources:


[1] https://www.deidentification.co/the-eu-general-data-protection-regulation-gdpr-and-facial-recognition/ 


[2] https://civicscience.com/apple-snapchat-users-face-the-music-of-facial-recognition-tech/ 


[3] https://www.bbc.com/news/technology-51658111 


[4] https://www.cbc.ca/news/canada/nova-scotia/facial-recognition-police-privacy-laws-1.5452749 


[5] https://www.vox.com/future-perfect/2019/11/15/20965325/facial-recognition-ban-congress-activism 


[6] https://www.scancongress.com 


[7] https://www.bbc.com/news/technology-49489154 


[8] https://edpb.europa.eu/news/national-news/2019/facial-recognition-school-renders-swedens-first-gdpr-fine_en

Read more

Racial Bias in AI: Is Facial Recognition Truly Benefitting All Consumers?

Sep 8, 2020 - 4 mins read

Machine learning and artificial intelligence promise to be the future of tech. Facial detection and recognition are among the machine learning advances that have been widely introduced into many programs. Many companies have developed facial recognition software to benefit their consumers in a variety of ways.


Most people know facial detection and recognition best from social apps like Snapchat, Instagram, and Facebook, and from Apple’s devices. Snapchat, with the help of software developed at Looksery, launched “filters” that could recognize a person’s face and superimpose an interactive modification such as adding makeup, slimming the face, or giving users dog ears and flower crowns. Facebook and Instagram followed suit, providing the same types of filters but also using facial recognition to identify their users in photographs. Apple went so far as to create Face ID, facial recognition software that lets users unlock their phones using only camera detection of their faces. Clearly, facial recognition is becoming ubiquitous and provides an enhanced experience. But does it enhance the experience for all users?


In a research study titled Gender Shades, researchers Joy Buolamwini and Timnit Gebru set out to analyze the accuracy of various facial recognition programs with respect to racial and gender bias. The study found that major commercial systems from Microsoft and IBM fell short in accuracy when it came to people of color, especially women of color. The programs sometimes misgendered women of color or were unable even to detect faces in certain images. These findings are troubling, especially considering that these are commercial products designed to cater to their consumers – people of all genders and skin tones.


The bias in machine learning seems to be a reflection of deeper social biases present in our society. Facial recognition software must be developed through machine learning on a data set of images. Understandably, the programs will only work as well as they are trained: if they aren’t properly trained to recognize a wide variety of skin tones, they may fall short. The data set needs to be representative and diverse if we wish to resolve this racial bias in facial recognition.
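
To make that concrete, the kind of disaggregated evaluation Gender Shades performed can be sketched in a few lines of Python. This is only an illustrative sketch: the records and group labels below are hypothetical, and a real audit would use a properly annotated benchmark like the one the researchers built.

```python
from collections import defaultdict

# Hypothetical evaluation records: each entry pairs the model's predicted
# gender with the ground-truth label and the subject's demographic group.
results = [
    {"predicted": "female", "actual": "female", "group": "darker-skinned women"},
    {"predicted": "male",   "actual": "female", "group": "darker-skinned women"},
    {"predicted": "male",   "actual": "male",   "group": "lighter-skinned men"},
    {"predicted": "male",   "actual": "male",   "group": "lighter-skinned men"},
]

# Tally correct and total predictions per group, rather than reporting a
# single overall accuracy that can hide large gaps between groups.
correct = defaultdict(int)
total = defaultdict(int)
for r in results:
    total[r["group"]] += 1
    if r["predicted"] == r["actual"]:
        correct[r["group"]] += 1

for group in total:
    rate = correct[group] / total[group]
    print(f"{group}: {rate:.0%} accuracy ({correct[group]}/{total[group]})")
```

Reporting accuracy per group, rather than one overall number, is exactly what made the disparities in the study visible.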


One method of training these systems has been to use celebrity databases such as the University of Massachusetts Amherst’s “Labeled Faces in the Wild”. A quick glance through this database reveals predominantly lighter skin tones. The disclaimer on the database acknowledges this shortcoming, stating that “many ethnicities have very minor representation or none at all”, and warns programmers not to use the data set to conclude that a product is “suitable for any commercial purpose”. This approach therefore requires each developer to actively search out images representative of the people of color who are underrepresented in the data set. If the programmers are not themselves aware of this racial bias, that active step may never take place, and the final product may continue to perpetuate the racial bias in facial recognition.
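
A practical first step for developers is simply to measure how a candidate training set is distributed before building on it. The sketch below is illustrative only; the labels are hypothetical, since datasets like Labeled Faces in the Wild do not ship with demographic annotations and these would have to be gathered separately.

```python
from collections import Counter

# Hypothetical per-image demographic labels for a candidate training set.
image_labels = ["lighter", "lighter", "lighter", "lighter", "darker", "darker"]

counts = Counter(image_labels)
total = sum(counts.values())

# Report each group's share so an unbalanced data set is visible
# before any model is trained on it.
for group, n in counts.most_common():
    print(f"{group}: {n} images ({n / total:.0%})")
```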


Clearly, this is a serious issue that tech companies must fix. If facial recognition software is inaccurate and misgenders or misidentifies people of color, this creates a demeaning experience for the consumer. It is made even more serious when law enforcement becomes involved. If law enforcement is to rely on facial recognition that misidentifies a person of color as a perpetrator of a crime, this serves to further exacerbate the institutional racism in today’s world. It is a serious problem that requires an immediate solution.

A major step forward came in California, where the Body Camera Accountability Act (AB 1215) was signed into law, banning police from integrating facial recognition software into their body cameras for the next three years. It is a positive step toward mitigating the racial injustices present in the U.S. and globally.


In June 2020, several tech giants also took accountability for this issue. IBM was the first to announce that it would no longer offer facial recognition software, writing to the U.S. Congress to condemn the use of facial recognition in any way that interfered with “basic human rights and freedoms” or contributed to “racial profiling”. Amazon followed by announcing a one-year ban on police use of its facial recognition software. These moves come at an important time of racial reform and offer hope of eradicating racial bias in facial recognition software. Everyone in tech must take a step back, recognize how their software might be contributing to racial injustice, and do something to change it.



Sources:


[1] Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, (2018), Buolamwini, J., Gebru, T., Proceedings of Machine Learning Research 81:1-15; http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf 


[2] http://gendershades.org 


[3] Labeled Faces in the Wild, University of Massachusetts Amherst http://vis-www.cs.umass.edu/lfw/ 


[4] https://www.ibm.com/blogs/policy/facial-recognition-sunset-racial-justice-reforms/ 


[5] https://www.theverge.com/2020/6/10/21287101/amazon-rekognition-facial-recognition-police-ban-one-year-ai-racial-bias 


[6] https://www.eff.org/deeplinks/2019/10/victory-california-governor-signs-ab-1215 

Read more

Wearable Technology and Socioeconomics: Is the Data Truly Representative?

Aug 7, 2020 - 3 mins read

Many data scientists are excited at the prospect of using health trackers and wearable technology to perform health research. The hope is that this data will shed light on risk factors and predictors of disease in a manner that was never possible before. However, there are valid concerns that the sample isn’t representative of the entire population.

Read more

Is Big Data the Future of Healthcare?

Jul 23, 2020 - 3 mins read

You’re probably familiar with the term “data”, which refers to information or statistics gathered in the form of numbers or other measures...but what is Big Data? Some describe it using the Four V model: volume, velocity, variety, and veracity. Some add a fifth V for value. 

Volume refers to the scale of the data or the sheer amount of information that is being collected. For example, Statista states that there were 722 million wearable devices worldwide in 2019. Each of these devices is transmitting data each day to its servers. It’s estimated that approximately 6 billion people worldwide have cell phones. Again, each of these devices is transmitting its own data every single day. This gives you an idea of the volume aspect of Big Data - it is massive. 
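
A rough back-of-envelope calculation shows how quickly those device counts turn into data. The only figure below taken from the post is the 722 million wearables; the per-device numbers are assumptions chosen purely for illustration.

```python
# Illustrative back-of-envelope estimate; the per-device figures are assumptions.
wearable_devices = 722_000_000        # Statista's 2019 estimate cited above
readings_per_device_per_day = 1_000   # assumed: heart rate, steps, sleep, etc.
bytes_per_reading = 100               # assumed average payload size

daily_readings = wearable_devices * readings_per_device_per_day
daily_terabytes = daily_readings * bytes_per_reading / 1e12

print(f"Readings per day: {daily_readings:,}")    # ~722 billion
print(f"Data per day: {daily_terabytes:.0f} TB")  # ~72 TB from wearables alone
```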

Read more

COVID-19 and the future of healthcare

Jul 21, 2020 - 3 mins read

COVID left the world, and the healthcare industry in particular, in a unique situation. It created an urgent need for healthcare to be delivered remotely, necessitating the adoption of digital and remote solutions. While much of the groundwork for these technologies existed prior to COVID, there was hesitation to accept them, and their effectiveness was often discounted by patients and doctors alike. Telehealth and remote healthcare delivery have been areas of research for some time now. In 2015, for instance, studies addressed the effectiveness of video-delivered cognitive behavioral therapy, yet even then many doctors remained reluctant to accept the idea.

Read more

Why you should read Terms & Conditions

Jul 16, 2020 - 4 mins read

Let’s face it: none of us are reading the Software License Agreement for the latest Apple iOS update. Why is that? Perhaps the language is inaccessible. Maybe people find the content too boring. What is clear is that most people appear to trust that the digital service provider is not asking them to agree to anything unfair. But is that trust warranted?

Read more