IVPN – August 2019
The iPhone X brought real-time facial recognition technology into mainstream consciousness. Since its launch, Face ID has proved vulnerable to twins, coworkers who look like you, family members, and paper and stone masks. These vulnerabilities aside, it represents the pinnacle of consumer-grade facial recognition, and with the device's climbing sales, the idea of facial recognition has become an everyday one.
The market for facial recognition technology is projected to reach $8 billion (USD) by 2025. It is not surprising that the rapid deployment of a new technology brings incredible potential for privacy and security mishandling, and for outright abuse. From the business and consumer side, the privacy concerns broadly fall into three categories: intentional abuse of power, mishandling of data, and lack of understanding of the technology by the public or the user.
Solid data security hygiene, understanding of the technology and the laws pertaining to it, and active participation in the systems that determine policy can help create better privacy expectations today, and in the future.
Most importantly, addressing the problem of the threat to personal privacy, autonomy, and liberty must start with the human component. Those with the capabilities to do so should shed light on the problem, and deliver clear, informed, and practical counters to attempts at abuse. The technological answers to privacy concerns have to come with societal ones. Though each society is different, technology is bringing us inevitably closer together. The norms and expectations we establish now, as consumers, professionals, and citizens, are the norms that we will have to face ourselves, for better or worse.
Understanding the Technology
Broadly defined, facial recognition works through a series of steps: detecting faces, encoding the salient features of the image into a database entry, and comparing that entry against the database to find the best match. In practice, there are multiple technologies and approaches, and the specific configuration varies widely with the budget, accuracy, and security needs of the user.
Facial recognition work dates back to the 1960s, when researchers would encode features like the distance between the eyes or the width of the mouth by hand, via an early graphic tablet. By the beginning of the 1990s, a method of compressing photos into much smaller sets of numbers, the eigenface method, had become the algorithm of choice for facial recognition. Using machine learning techniques, a computer could turn a set of photos into a "face space": a set of the most characteristic facial variations in the training set, with each individual face encoded as a vector of intensities of those variations. This technique eventually led to automated face detection, a way for the computer to find the presence of a face on its own. If you've used a camera or app that auto-focuses on a face, you have come in contact with this technology.
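The eigenface idea can be sketched in a few lines of numpy. This is a toy illustration, not a production system: random vectors stand in for flattened, aligned grayscale face photos, and the "recognition" step is a simple nearest-neighbor search in face space.

```python
# Minimal eigenface sketch: mean face, principal components via SVD,
# projection into "face space", and nearest-neighbor matching.
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((20, 64 * 64))        # 20 stand-in "face images", 64x64 pixels

# 1. Center the data around the mean face.
mean_face = faces.mean(axis=0)
centered = faces - mean_face

# 2. The principal components of the centered set are the "eigenfaces".
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]                     # keep the 10 strongest components

# 3. Encode each face as a small vector of weights in face space.
weights = centered @ eigenfaces.T        # shape (20, 10): 4096 numbers -> 10

# 4. Recognize a new image by projecting it and finding the nearest match.
probe = faces[3] + rng.normal(0, 0.01, faces.shape[1])   # noisy copy of face 3
probe_w = (probe - mean_face) @ eigenfaces.T
match = int(np.argmin(np.linalg.norm(weights - probe_w, axis=1)))
print(match)  # 3: the noisy probe still lands closest to its original
```

Note the compression at step 3: each 4096-pixel image is reduced to 10 numbers, which is what made the method attractive on early-1990s hardware.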
Once a face is detected, the 2D image is normalized and processed for the geometric features of the face, or the statistical relationships among the pixels in the image. The process arrives at a set of salient features, determined via machine learning algorithms, and stores this processed data for comparison.
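The final comparison step is conceptually simple: each stored face is a feature vector, and matching is a nearest-neighbor search with a decision threshold. The vectors, names, and threshold below are purely illustrative, not drawn from any real system.

```python
# Hypothetical sketch of the matching step: find the closest stored
# feature vector, and accept only if it is within a distance threshold.
import numpy as np

database = {
    "alice": np.array([0.9, 0.1, 0.4]),   # illustrative feature vectors
    "bob":   np.array([0.2, 0.8, 0.5]),
}

def identify(probe, db, threshold=0.3):
    """Return the closest identity, or None if nothing is close enough."""
    name, dist = min(
        ((n, np.linalg.norm(probe - v)) for n, v in db.items()),
        key=lambda item: item[1],
    )
    return name if dist <= threshold else None

print(identify(np.array([0.85, 0.15, 0.42]), database))  # alice
print(identify(np.array([0.0, 0.0, 0.0]), database))     # None
```

The threshold is where the accuracy/security trade-off lives: raise it and you admit look-alikes (false positives); lower it and you lock out legitimate users (false negatives).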
While machine vision has traditionally relied on two-dimensional images, by using multiple cameras, or even infrared projection, a machine can map a face to create a 3D model. Using a 3D model helps defeat easy spoofs like holding up a photo of a person, and allows a face to be evaluated from multiple angles.
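The leap from two cameras to a 3D model rests on stereo geometry: a point's depth follows from how far it shifts between the two views (its disparity), via depth = focal length × baseline / disparity. The numbers below are illustrative only.

```python
# Toy stereo-depth calculation: how two 2D views yield the third dimension.
# depth = focal_length (pixels) * baseline (meters) / disparity (pixels)
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance to a point seen by both cameras of a calibrated stereo pair."""
    return focal_px * baseline_m / disparity_px

# A facial feature shifted 40 px between views, cameras 6 cm apart, f = 800 px:
print(depth_from_disparity(800, 0.06, 40))  # 1.2 (meters)
```

Repeating this per feature point gives a depth map of the face, which is why a flat photograph, with near-uniform depth, fails the check.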
Infrared imaging and projection are useful as well: they can verify that the person in front of the camera is flesh and blood rather than a mask, "illuminate" the face for recognition in the dark, and map a user's face more precisely using invisible IR dot projection.
These technologies are often used in combination with one another, and with other biometrics like iris scanning and skin texture analysis, each compensating for the potential vulnerabilities and drawbacks of the others.
Simple incompetence provides plenty of potential for data misuse. SenseNets, a firm that provides facial recognition technology (FRT) services to the Chinese government, was running an unsecured database containing real-time tracking data on Chinese Uighurs. Seemingly not to be outdone in the race to the bottom, the Indian national ID database, which holds biometric information on 1.1 billion Indian citizens, was found to have a gaping vulnerability: a hardcoded API access token on an unsecured domain, giving access to pretty much anyone. The commercial sector is not without fault either, as the number of high-profile leaks from and hacks of major corporate data stores continues to rack up.
Standard data security protocols are useful whether in a personal or business context. Luckily, this level of basic security hygiene can be practiced with a checklist:
- don’t hoard unnecessary data
- destroy before disposal
- use strong, unique passwords and change them regularly
- use encryption for storage and communication
- keep procedures up to date
- train your people to follow them well
- keep security software patched and up to date
- don’t insert strange portable media
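The "strong, unique passwords" item, at least, needs no special tooling: Python's standard secrets module is designed for exactly this kind of cryptographically secure generation. A minimal sketch:

```python
# Generating a strong, unique password with Python's stdlib `secrets`
# module (designed for security-sensitive randomness, unlike `random`).
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """A cryptographically random password; one per account, never reused."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

password = generate_password()
print(len(password))  # 20
```

In practice a password manager does this for you, which also takes care of the "unique" half of the checklist item.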
Security practices specific to biometric data storage can build on top of a strong baseline security protocol. Visual cryptography schemes are a strong way to store biometric image information, like faces, split across multiple files, preventing cross-database matching or identification of an individual from any single record.
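The core storage property can be sketched with a simplified two-share XOR secret-sharing scheme. Classical visual cryptography uses pixel-expansion patterns rather than XOR, so treat this as an illustration of the principle, not the scheme itself: each share alone is indistinguishable from noise, and only combining them reconstructs the image.

```python
# Simplified XOR secret sharing in the spirit of visual cryptography:
# split a biometric image into two shares, each individually meaningless.
import numpy as np

rng = np.random.default_rng(42)
face_image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in image

share1 = rng.integers(0, 256, size=face_image.shape, dtype=np.uint8)  # pure noise
share2 = face_image ^ share1     # also looks like noise on its own

# Stored in separate databases, neither share reveals the face;
# XOR-ing the two recovers it exactly:
recovered = share1 ^ share2
print(np.array_equal(recovered, face_image))  # True
```

An attacker who breaches one database learns nothing, which is exactly the property that defeats the single-point-of-failure leaks described above.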
Auditing the products your company buys or subscribes to is an important factor in ensuring user privacy as well. On-device processing is significantly more secure than cloud-based solutions, and is increasingly available on mobile and autonomous devices. Technologies are also being developed to be more inherently secure and autonomous, like the US Navy's facial recognition software that doesn't query a central database and needs no photos of anyone beyond the people being looked for.
There are more direct approaches to avoiding facial recognition technology. Dazzle makeup and counter-surveillance fashion projects are trying to create a more personal way to avoid being recognized by machines. These solutions range from scarves that try to trigger false positives, to clothing designed to defeat infrared detection by reflecting body heat, to hats that obscure or even falsify the face a computer sees. There is an arms race between those who seek to improve facial recognition and those who seek to circumvent it. It's hard to imagine one's fashion choices beating back the tide of heavy, government-supported investment, but taking any precaution at all is better than giving up.
Regulatory and Self-Regulatory Solutions
There are attempts to earn the trust of the public and inject some order into this unregulated space. In the US, the proposed Commercial Facial Recognition Privacy Act would place harsh restrictions on the sharing of facial recognition data by commercial enterprises, stipulating "affirmative consent". The EU's GDPR includes strong protections for personal data handling that extend to biometrics as well. Even India's Supreme Court has ruled that individuals have a constitutional right to privacy.
In terms of self-regulation, there are some attempts to establish a baseline of ethical practice as well. Microsoft, no stranger to privacy concerns itself, is trying to establish a standard of ethical practice, advocating for a code of ethics and responsible implementation of facial recognition. Creating industry standards and getting ahead of likely inevitable privacy disasters is a reasonable strategy, not just for the benefit of the company, but for the benefit of the entire industry. Still, as Microsoft President Brad Smith said in an interview with The Brookings Institution, “once the data is available somewhere, the data is available everywhere”.
On the individual level, protection against facial recognition misuse takes a multi-tiered approach. Facial recognition is a passive-participation technology: if you're in public, you're likely to be photographed. The UK, China, the US, and Australia all have varying laws on public surveillance, but give their own governments and law enforcement a pass. There are, and have been, attempts to map out video surveillance locations to help people plan routes with the least surveillance. But with every device, or even person, being a potential camera platform, trying to stay out of sight is a Sisyphean task. Where there is video surveillance, there will probably be facial recognition, especially if police are involved.
Knowing and using the tools that exist is a start to controlling your privacy. According to Brad Smith, since Microsoft adopted the GDPR provisions for the consumer right to access, change, and remove personal data, over 2 million consumers in the EU have exercised these options, and more than 3 million have done so in the US over the same period. Microsoft's "Privacy Dashboard" gives you access to your account data and the option to delete it, and the company publishes a report on "right to be forgotten" requests.
Holding the firms and governments using this technology to a high standard is necessary. It's the public's duty to hold these institutions accountable, to know the tools for institutional change and advocacy, and to use them to force our civil servants to address at least some of the issues. Holding civil servants accountable is a personal and moral duty, but relying on them to protect us is naive.
Abuses of Power
For a large swath of the global population, lobbying, protesting, or voting is a useless and potentially dangerous option, for businesses and individuals alike. Protecting your data, and the data you handle, must be taken up with a sense of personal responsibility.
In places like China, where privacy concerns evaporate when opposed by the interests of the state and commerce, FRT is already ubiquitous. From catching jaywalkers, paying at KFC, and dispensing toilet paper, to tracking the movements of Uighur minority individuals, FRT is a key technology. In many ways, China serves as a dystopian looking-glass, previewing curtailments of rights and invasions of privacy years before they make landfall in less authoritarian countries. The digital surveillance state isn't worried about being taken to task for invasions of privacy, largely due to the close relationship and mutual dependence between private firms and the state.
While fighting a totalitarian regime is difficult on any front, abuse of power can come in other forms, even unintentionally. Handling new technology is not something politicians tend to do well, and without a high bar of understanding from legislators, the vested interests of technology providers become the de facto rule of law. But fighting back against corporations is not impossible. You can make a personal choice not to feed the beast: vote with your attention, your time, and your currency of choice to support companies that take your privacy and security seriously. There's a lot you can do: educate yourself, protect yourself, stay calm, #deletefacebook. And if you absolutely have to go outside, maybe wear a hat.