Reevaluating the future of facial recognition technology in security

by Martin Cronin

This month, in the wake of global protests for racial equality in the United States, three of the world’s biggest technology companies—IBM, Amazon, and Microsoft—announced they would suspend the sale and marketing of facial recognition products to police departments. All three tied their decision to facial recognition technology’s failure to properly recognize and identify people of color and women.

That failure is hardly news. But the tech sector is optimistic, confident in its own abilities, and accustomed to incremental improvement. This month, those companies decided incremental improvement was simply not enough, and so they acted—and rightly so.

At Patriot One, we have long held privacy and civil liberties paramount in the design and execution of our technology. That’s why we have chosen to exclude facial recognition technology from our platform. Instead, after a weapon, chemical, or explosive is detected, our system instantly takes what is essentially a high-resolution photograph of the potential bad actor. That information is then sent to an on-site security team for intervention. Facial recognition software is never activated.
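As a purely illustrative sketch of that workflow (the event fields and handler below are hypothetical placeholders, not Patriot One's actual interfaces), the point is that the pipeline captures and forwards evidence to people without ever invoking a face-matching step:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ThreatDetection:
    """Hypothetical detection event: a weapon, chemical, or explosive signature."""
    threat_type: str      # e.g. "weapon", "chemical", "explosive"
    location: str         # sensor or camera identifier
    snapshot_jpeg: bytes  # high-resolution still of the potential bad actor


def handle_detection(event: ThreatDetection, notify_security_team) -> None:
    """Forward the snapshot and context to on-site security for human intervention.

    Note what is absent: no face embedding, no identity lookup, no watchlist
    search. The photograph goes to people, not to a recognition model.
    """
    notify_security_team({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "threat_type": event.threat_type,
        "location": event.location,
        "image": event.snapshot_jpeg,
    })
```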

This is not to say that facial recognition can never be used successfully. Police departments have used the software for 20 years, mainly in a forensic capacity—matching video footage or other images of suspects to driver’s license databases, for example. And many citizens would trade a modicum of privacy if they were assured facial recognition technology was activated only in extreme circumstances, such as after a threat is detected or after a mass shooting or other terrorist attack. This is why localities are taking it upon themselves to create explicit privacy policies and best practices for police.

So, where do we go from here? Here are some of my thoughts on the matter.

Work with communities to craft privacy policy and institute guardrails: The unfettered use of facial recognition technology is unlikely to win fans in any corner of civilian life. In particular, the advent of video networks and the suggestion that the software could be used to track individuals across a geographic area is alarming to civil liberties advocates, who see faulty technology being applied in ways that multiply the opportunities for misidentification, particularly for people of color. Seek legal counsel and understand that the use of facial recognition technology could represent unacceptable levels of risk, whether real-time errors in the field or Fourth Amendment challenges to individual cases.

Facial recognition is best used as a whitelisting tool: When deployed for perimeter security, facial recognition could be used only to recognize people who are authorized to enter a facility. Those persons already show their faces to security personnel regularly, and, if the property chooses to install facial recognition for access control, employees would have to opt in and submit to controlled photography of their faces from a variety of angles.
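To make the whitelisting (allowlist, default-deny) idea concrete, here is a minimal sketch under assumed names: only opted-in employees' face templates are stored, and anyone who does not match is simply not admitted rather than identified. The embedding source and threshold are placeholders, not a specific vendor's API.

```python
import numpy as np

# Allowlist of templates from employees who opted in to controlled enrollment
# photography. No other faces are ever stored or searched.
enrolled: dict[str, np.ndarray] = {}  # badge_id -> unit-length face embedding

MATCH_THRESHOLD = 0.6  # illustrative cosine-similarity cutoff; tuning is deployment-specific


def enroll(badge_id: str, embedding: np.ndarray) -> None:
    """Store a normalized template only for an employee who has opted in."""
    enrolled[badge_id] = embedding / np.linalg.norm(embedding)


def may_enter(probe_embedding: np.ndarray) -> bool:
    """Default deny: admit only if the probe matches an enrolled template.

    A non-match yields no identity and no record of who the person might be;
    the visitor simply falls back to ordinary sign-in with security staff.
    """
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    return any(float(np.dot(probe, tpl)) >= MATCH_THRESHOLD
               for tpl in enrolled.values())
```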

Police and security personnel must understand, and correct for, automation bias: People put quite a bit of trust in computers and other machines, often against their better judgment. Think about how often you’ve followed circuitous GPS directions, only to realize your own shortcut was better all along. When police and security officers give in to automation bias, the consequences can be dire, and here the racial dimension is explicit. Any department or facility that enlists technological help must train its employees to understand that facial recognition technologies simply do not work as well for people of color and women and that, in general, machines sometimes make mistakes. Officers must be trained to understand that false negatives do occur and to know how to respond when they do.

We must have an active and ongoing debate about tech’s role in security: As is too often the case in public security, policy tends to drift along with the status quo until some major event jolts us into action. This summer, the murder of George Floyd became the catalyst for conversations about racial equity, justice, and how to reform American policing so it can deliver both. But our industry should not wait for such events to jolt it into action.

Patriot One has faith in technology’s potential to improve security and defend our citizenry from the threat of violence, but not at all costs. We developed our PATSCAN platform as a low-profile solution that keeps public spaces from looking like fortresses. An ongoing debate on the balance between privacy, security, and, yes, racial equity is long overdue. Patriot One is here to participate in that conversation.
