
5 ways AI is transforming video surveillance

by Jared Shelly

Gone are the days of security guards staring at monitors for hours on end, and unwatched footage that’s deleted when there is no incident to investigate. Video surveillance has evolved.

Artificial intelligence is pushing the field forward at warp speed, empowering security officials to take real-time action based on real-time analytics. Video surveillance software powered by AI automates weapons detection, analyzes behavior, and identifies potential wrongdoing.

These software applications are often overlaid on top of existing video surveillance systems, boosting their power by applying computer vision and object recognition (AI trained to analyze images and identify objects) and machine learning (predictive algorithms that help AI improve its accuracy over time).

As high-tech cameras and AI software become less expensive, they become more ubiquitous. That’s leading to improvements in crime prevention, as well as complaints about privacy infringement.

It’s a complex and ever-evolving industry. Here are five ways AI is transforming video surveillance.

1. It enables security operations to search hours of video in seconds. AI-powered surveillance technology takes humans — and human error — out of the analysis process, leading to easier crime detection and better analysis of behavior patterns. AI-generated surveillance assessments are based on behavior, said security technologist Bruce Schneier, a fellow and lecturer at Harvard’s Kennedy School and a board member of the Electronic Frontier Foundation.

“Surveillance is no longer restricted to humans and human limitations. If you want to process a million hours of footage, that’s easy because it scales,” said Schneier.

2. AI identifies and tracks identifying characteristics across locations. In crime prevention, police use AI to analyze surveillance footage and identify potential suspects. If the target is wearing a red hat, for example, systems can identify all people wearing red hats to narrow the search. In an airport, the technology can identify unattended luggage and suspicious behavior. In a retail setting, it identifies potential shoplifting behavior.

“With behavioral biometrics or gait recognition, it can recognize if a person is acting strangely,” said Danielle VanZandt, security research analyst at Frost & Sullivan. “There are analytics that can be programmed to detect whether a person or object has been in a certain area for either too long or where they’re not supposed to be.”

3. Affordable AI-powered cameras are compatible with existing surveillance networks. If you enter a public space, you’re probably under surveillance. That’s because the price of cameras has come down significantly, and many have embedded AI technology. Organizations can implement the technology even if they were priced out a few years ago.

“Cameras are all about sensors. They’re getting cheaper, more accurate, and have higher resolution,” said Schneier.

VanZandt adds that many vendors integrate AI-powered tech into legacy surveillance networks.

“They don’t need to do an entire overhaul, which is a huge selling point to mid-tier and smaller businesses,” she said.

4. AI enables a wide variety of public-sector implementations, from schools to transit systems. At least 75 out of 176 countries are actively using AI technologies for surveillance purposes, according to the Carnegie Endowment for International Peace. Smart city platforms are deployed in 56 countries, facial recognition systems in 64, and smart policing in 52.

In the United States, plenty of public-sector entities are deploying AI surveillance. New York's Domain Awareness System, developed by the NYPD and Microsoft, uses 9,000 surveillance cameras to identify potential crime. In the wake of the Parkland, Florida, school shooting that left 17 dead, the Broward County school system announced plans to install a 145-camera analytics-enabled video monitoring system that alerts school police to behavior that's out of the ordinary. Meanwhile, New York City's Penn Station, Washington, D.C.'s Union Station, and other hubs are experimenting with body scanning systems that look for guns or explosives under people's clothing as they walk through a station.

5. Privacy and bias concerns loom large, even as accuracy quickly improves. As AI surveillance becomes more ubiquitous, it grows more polarizing. Some say it's unnecessary to monitor people in public places like a grocery store, restaurant or shopping mall — especially if nothing happens. Others say it is necessary to stop crime.

“It’s leading to people under constant surveillance. It’s not even subtle,” said Schneier. “The privacy implications are that you’re being watched all the time by the AI. That’s the issue.”

VanZandt argued that many of the complaints center on older statistics regarding a subset of AI surveillance — facial recognition. She said that police departments began embracing facial recognition around 2015 and initially got inaccurate results. In 2018, the ACLU released a widely cited study finding that a facial recognition system developed by Amazon mistakenly matched 28 members of Congress — many of them people of color — with criminal mugshots. But facial recognition companies and the wider surveillance industry have advanced significantly since then, she said, because technologists are more careful about testing algorithms on diverse sets of people. Furthermore, they've already improved privacy protections to comply with Europe's strict General Data Protection Regulation (GDPR).

“The bias argument does have merit,” said VanZandt. “However, it is not as bad as it’s reported in the media sometimes. I deal with a lot of the top vendors on the research side. I’ve seen accuracy as high as 97% to 99% — and that’s not just white men. That’s across gender boundaries, racial boundaries and social boundaries.”

Jared Shelly is a freelance writer who writes about business and emerging technology. The opinions and positions expressed in this article do not necessarily reflect the opinions and positions held by Patriot One Technologies and inclusion of persons, companies, or methods herein should not be interpreted as an endorsement.
