How computer vision is being used to spot weapons

By Benjamin O. Powers

On March 15, 2019, an anti-Islamic extremist live-streamed himself on Facebook as he opened fire at the Al-Noor Mosque in Christchurch, New Zealand. Fifty people were killed, and roughly 50 more were wounded. The attack drew condemnation from countries around the world and has led to international efforts to address the spread of violent content and hate speech online. 

Now, Al-Noor Mosque will receive AI-powered active-shooter detection technology through the Keep Mosques Safe (KMS) initiative, a partnership between Athena Security and Al-Ameri International Trading. Athena’s technology uses a computer vision algorithm that monitors cameras and alerts law enforcement when it spots a gun. KMS wants to install Athena’s technology at mosques across the world, financed by Islamic charities and foundations. This could give mosques a better chance of alerting law enforcement or on-site security quickly if something like Christchurch happens again. 

Companies across the US are adopting technology that proactively identifies weapons in live video and images. Virtual eForce is using it at schools and TrueFace is using it at offices to detect weapons faster via surveillance footage. PatriotOne developed Patscan VRS, which uses computer vision to automatically analyze video footage.

But how does computer vision, the technology underlying all of these systems, actually work? Essentially, it’s all about creating labels for images and the pixels that make them up. 

Computer vision is a kind of artificial intelligence that trains computers to “see” the world around them. Programmers begin by labeling and sorting millions of images, then teaching a computer to recognize and sort those images (and images like them) on a pixel-by-pixel basis. Once the computer learns the initial dataset, it can compare new images against it and begin sorting them on its own, using machine learning models that incorporate statistical probability. In this way, a machine can be trained to differentiate between a dog and a cat, a car and a truck, or, in the security industry, a long gun and a broom handle.
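The label-then-compare idea described above can be sketched in a few lines. This is a toy nearest-centroid classifier over raw pixel values, not anything resembling the production models mentioned in this article; the four-pixel “images” and the `gun`/`broom` labels are purely hypothetical:

```python
# Toy sketch of pixel-based classification: each "image" is a flat list
# of grayscale pixel values, and a new image gets the label of whichever
# class's average (centroid) image it sits closest to.

def centroid(images):
    """Average the pixel values of a list of same-sized images."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

def distance(a, b):
    """Summed squared pixel-by-pixel difference between two images."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(image, labeled_sets):
    """Pick the label whose class centroid is closest to the image."""
    centroids = {label: centroid(imgs) for label, imgs in labeled_sets.items()}
    return min(centroids, key=lambda label: distance(image, centroids[label]))

# Hypothetical 4-pixel training "images": dark shapes vs. bright shapes.
training = {
    "gun":   [[10, 12, 11, 9], [8, 14, 10, 12]],
    "broom": [[200, 190, 210, 205], [195, 205, 198, 202]],
}
print(classify([11, 10, 13, 9], training))  # prints "gun"
```

Real systems use deep neural networks over millions of high-resolution images rather than averaged pixels, but the underlying logic is the same: labeled examples define what each class “looks like,” and new images are sorted by statistical similarity.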

The technology is then incorporated into systems that can recognize objects within images or video feeds. 

The system analyzes an image by first establishing the object’s perimeter. This “bounding box” cordons off part of the image as an area of interest; the system then analyzes and labels the pixels within that box. Once it identifies what it thinks is a gun, it can be given feedback about whether it was accurate, and use that feedback to hone its findings, a process known as machine learning. For example, if the system mislabels a rake carried over someone’s shoulder as a gun, an operator can mark that detection as incorrect, and the system will then use other patterns in the data to differentiate between the two. 
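One standard way such systems score how well a predicted bounding box matches a reviewer-corrected one is intersection over union (IoU). This is a generic measure from the object-detection literature, not something the article attributes to any particular vendor, and the coordinates below are hypothetical:

```python
# Intersection over union (IoU): ratio of the overlap area between two
# bounding boxes to the area they jointly cover. Boxes are
# (x_min, y_min, x_max, y_max) tuples in pixel coordinates.

def iou(box_a, box_b):
    # Corners of the overlap rectangle (empty if the boxes don't touch).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

predicted = (10, 10, 30, 30)   # where the system thinks the gun is
reference = (15, 10, 35, 30)   # where a human reviewer says it is
print(round(iou(predicted, reference), 3))  # prints 0.6
```

An IoU of 1.0 means a perfect match and 0.0 means no overlap, which gives the feedback loop a concrete number to improve against.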

“That’s what distinguishes a modern computer vision system: it’s very much based upon a machine-learning artificial intelligence approach based upon tons of data,” says George Berg, a computer scientist at the College of Emergency Preparedness, Homeland Security and Cybersecurity at the University at Albany. 

There are numerous use cases for this technology, from identifying cancerous cells faster than doctors can, to facial recognition (whether for security or social media), to, as discussed, weapons detection. 

The security industry, in particular, has interconnected use cases that show the value of computer vision across multiple systems. Computer vision can track the make and model of a suspicious car across a campus or city using a network of surveillance cameras, or breeze through hundreds of hours of forensic footage looking for a suspect via facial recognition. PatriotOne’s Patscan VRS can flag guns or other weapons and alert teams that something wicked this way comes.

It shouldn’t come as a surprise, then, that analysts expect the computer vision market to reach $48 billion by 2023. 

Wael Abd-Almageed, of the USC Information Sciences Institute, says that even with recent advances, computer vision still faces challenges.

“The challenges when it comes to things using computer vision, like facial recognition, are that people change,” says Abd-Almageed. “That can be through aging, or the person in this image or video might be wearing a scarf or grow a beard, or something along those lines.”

Guns and other weapons present a similar kind of challenge, simply because of their sheer variety: there are hundreds of makes and models. Consider just the differences among a submachine gun, a handgun, and a semiautomatic rifle. 

“Computer vision can recognize both a rifle and a pistol as a weapon, even though they look quite different,” says Berg. “The modern systems make the learning process a flexible ongoing one that can account for the wide variety of weapons out there.”

It’s important to recognize that these systems aren’t in and of themselves perfect—they’re only as good as the data fed to them. Larger amounts of data give systems a better sample size to look at, and make them more reliable. But even if you don’t have access to vast data repositories, you could train a computer vision system to recognize, say, the ten models of guns most used in mass shootings. 

There is also the risk of false positives. Even after extensive training, these systems can still misidentify an object as a weapon if it shares characteristics that the computer associates with one. Take a broom slung over someone’s shoulder: it may be dark in color, narrow, and long, all characteristics that a computer might associate with a rifle. The system might alert a security officer that someone is entering the building with a weapon, even though that’s not the case. Security practitioners need to be aware of these limits and recognize that the system cannot entirely substitute for human judgment. When trained and implemented correctly, though, these systems hold huge promise. 
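In practice, one common way operators manage this trade-off is with a confidence threshold on the model's detections: raise it to suppress weak matches like the broom, at the cost of possibly missing a real weapon seen at a bad angle. A minimal sketch, with entirely hypothetical detections and scores:

```python
# Confidence-threshold filtering: only detections scoring at or above
# the threshold generate an alert for the security team.

def filter_alerts(detections, threshold):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

detections = [
    {"object": "rifle", "confidence": 0.94},  # clear sighting
    {"object": "rifle", "confidence": 0.41},  # likely the broom
]
print(filter_alerts(detections, 0.8))  # only the 0.94 detection remains
```

Choosing the threshold is a policy decision as much as a technical one, which is part of why human review remains in the loop.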

“Because of the power of these systems, this is a burgeoning and exploding field,” says Berg. “Pretty much anything where you can gather data, you’re starting to see computer vision systems developed around them, both by research groups as well as commercial enterprises.”

Photo by Stefan Cosma on Unsplash
