People are freaking out about in-video facial recognition AI. Here’s why.

Imagine this. You’re a highway patrol officer. It’s dusk, and you’re making your last stop of the day—a pickup truck with two young men in the front seat. You approach the vehicle, knock on the window and lean in.

Before you can ask for license and registration, your body cam pings. Its real-time facial recognition software has recognized the passenger, and he’s wanted for armed robbery. Instead of going it alone, you call for backup.

An hour later, you’re home safe and enjoying pizza night with the kids.

Real-time, in-video facial recognition software promises law enforcement officers a new level of omniscience, efficiency and personal security, but it’s also raising hard questions for civil rights watchdogs, legal experts, and the very departments eager to try out the new tech. The rules governing its use have yet to be written, and last month the American Civil Liberties Union pointed to Amazon’s video analysis system, Rekognition, warning that unregulated use could mean identifying protesters, tracking immigrants and generally undermining civil liberties.

“This is just one of many areas where AI will get ahead of our thinking,” said Timothy Williams, vice chairman of corporate risk services at Pinkerton. “We’ll have to be mindful.”

Security and law enforcement chiefs who want to deploy the tech while staying on the right side of Joe Public—and possibly federal law—should understand how facial recognition in video works and why it’s making some people nervous.

So how does this work?

Image recognition software uses artificial intelligence technologies like computer vision, pattern recognition, and machine learning to analyze pictures and video.

Amazon is far from the only company to offer the technology, but the Amazon Web Services site says Rekognition works by drawing a bounding box around objects of interest, then applying neural networks to detect and label those objects. When applied to faces, image recognition software can identify gender, facial features, facial hair and, in some cases, emotions.
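For a sense of what that looks like in practice, here is a minimal sketch using the AWS SDK for Python (boto3) to detect faces in a single image. It assumes valid AWS credentials with Rekognition permissions, and the image file name is hypothetical.

```python
import boto3

# Rekognition client; assumes AWS credentials are already configured locally.
rekognition = boto3.client("rekognition", region_name="us-east-1")

# Hypothetical frame pulled from a camera; any JPEG or PNG bytes work here.
with open("traffic_stop_frame.jpg", "rb") as f:
    image_bytes = f.read()

# Request the full set of facial attributes, not just the default subset.
response = rekognition.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    box = face["BoundingBox"]   # bounding box in coordinates relative to the frame
    gender = face["Gender"]     # e.g. {"Value": "Male", "Confidence": 99.1}
    emotions = sorted(face["Emotions"], key=lambda e: e["Confidence"], reverse=True)
    print(f"Face at left={box['Left']:.2f}, top={box['Top']:.2f}: "
          f"{gender['Value']} ({gender['Confidence']:.0f}%), "
          f"top emotion: {emotions[0]['Type']}")
```

Note that this call only detects and describes faces; it does not, by itself, say who anyone is.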

Marketers have used this technology for years to identify their logos during sports broadcasts and in social media feeds. Consumers use it to sort personal photographs based on who is in them. In the security sector, it’s routinely used to identify license plate numbers on speeding vehicles and to match suspects’ pictures against databases of mugshots and, sometimes, state-issued IDs.
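Matching a probe photo against a database works the same way at the API level. A hedged sketch, assuming a Rekognition face collection (the "mugshots" collection name below is hypothetical) has already been created and indexed with known faces:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Hypothetical probe image, e.g. a still pulled from surveillance footage.
with open("suspect_probe.jpg", "rb") as f:
    probe_bytes = f.read()

# Search a pre-built face collection for faces similar to the largest
# face found in the probe image.
response = rekognition.search_faces_by_image(
    CollectionId="mugshots",   # placeholder collection name
    Image={"Bytes": probe_bytes},
    FaceMatchThreshold=90,     # only return matches of 90% similarity or higher
    MaxFaces=5,
)

for match in response["FaceMatches"]:
    face = match["Face"]
    print(f"Candidate {face.get('ExternalImageId', face['FaceId'])}: "
          f"{match['Similarity']:.1f}% similarity")
```

The similarity threshold matters: set it too low and the system returns lookalikes as candidates, which is exactly the misidentification risk critics raise.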

Why the uproar now?

To be fair, civil liberties watchdogs have never been thrilled with facial recognition software. In 2016, the Georgetown Center for Privacy and Technology published a lengthy report outlining the risks associated with law enforcement’s use of the technology. Its biggest concern—and probably the reason why Amazon has reignited this debate—is that police departments often deploy facial recognition and other surveillance tech without telling anyone about it.

The Orlando Police Department’s Rekognition pilot program was limited to just three downtown surveillance cameras and five in police headquarters, but even that stunned the public. “I don’t share the ACLU’s heebie-jeebies about big government,” Orlando Sentinel columnist Scott Maxwell said in a video op-ed. “What does give me pause is we don’t know what’s going on.”

The immediacy of in-video facial recognition also scares people. Hypothetically, widely deployed systems could identify private citizens at a protest in real time and then monitor their movements.
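For context on what “real time” means mechanically, Rekognition Video can attach a face-search stream processor to a live Kinesis Video Stream and push any matches to a Kinesis Data Stream for downstream consumers. A rough sketch; every ARN, name and collection ID below is a placeholder, not a real resource:

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# All ARNs, names and IDs here are placeholders.
rekognition.create_stream_processor(
    Name="demo-face-search",
    Input={"KinesisVideoStream": {
        "Arn": "arn:aws:kinesisvideo:us-east-1:111122223333:stream/camera-feed/123"}},
    Output={"KinesisDataStream": {
        "Arn": "arn:aws:kinesis:us-east-1:111122223333:stream/face-matches"}},
    Settings={"FaceSearch": {
        "CollectionId": "watchlist",      # pre-indexed collection of known faces
        "FaceMatchThreshold": 85.0}},
    RoleArn="arn:aws:iam::111122223333:role/RekognitionStreamRole",
)

# Start processing frames from the live video feed; match records then
# appear on the output data stream as they are detected.
rekognition.start_stream_processor(Name="demo-face-search")
```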

That’s a little 1984, don’t you think?

Okay, fine. However, companies are actively marketing facial recognition tech for inclusion in police body cameras, which are already being used by thousands of police officers. Real-time use there could ignite a tinderbox for officers already on their guard, according to Robyn Greene, Policy Counsel and Government Affairs Lead at New America’s Open Technology Institute.

“If the facial recognition tech winds up pinging and says that someone is wanted on an arrest warrant [even if they are out on bail], your response may be very different” than to an otherwise routine stop, said Greene.

Some cities are catching up with tech procurement by requiring police departments to create privacy policies, Greene wrote in Slate magazine. When the public is asked how it would prefer to spend its tax dollars, Greene said, “folks want clean water and fewer potholes in the streets and less real-time spying equipment.”

So they want us to give up our element of surprise? And then we’re good?

Absolutely. And absolutely not. There are more concerns, like real-time use and racial bias.

Wait, is facial recognition racist?

Not exactly. To get any artificial intelligence network up and running, computer scientists train it on massive datasets. Facial recognition AI has been trained mainly on images of white men, so systems are better at recognizing male, Caucasian facial features. Coders are working to eliminate that bias, but departments should know that these systems are roughly three times as likely to misidentify a woman or a person of color.

“Law enforcement has a track record of disproportionately policing communities of color,” Greene said. “Marijuana arrest rates in New York are all the evidence you need of that. If left uncontrolled and unaccountable and not overseen by local, state and federal governments, then this kind of tech will supercharge those biases.”

Are police departments that use this kind of technology legally vulnerable?

It depends on how they use it. According to the Georgetown report, dragnet searches wherein the police run images against a database of private citizens could be a violation of the Fourth Amendment, which is intended to prevent generalized, warrantless searches.

According to that report’s risk framework, searching for a particular person is more acceptable than attempting to identify and track a number of individuals who fit a profile. Likewise, using facial recognition in the investigation of a crime is a legitimate use of the tech. Real-time use, however—like putting facial recognition tech in police body cams—could lead to legal trouble.

So how do we avoid those problems?

Consulting risk frameworks and drawing up department privacy guidelines is one way to get ahead of the curve, as is making sure municipal and department lawyers are on top of related cases. In 2013, an appellate court ruled that stingrays—devices that mimic cell towers to trick nearby phones into revealing their location—require a warrant. While it’s not an apples-to-apples comparison, it shows that courts lean toward requiring warrants in cases where tech can fly under the radar.

While legislators and the courts are likely to write the rules around public use of the technology, the industry would serve itself well to get out in front, said Williams. The ACLU, associations of chiefs of police and industry groups like SIA need to get around the table.

“We have to have a serious dialogue about how we want to apply this to our society,” he said. “Unfortunately, we end up taking sides quickly and with a lot of hyperbole.”
