In July, Shagaf Khan, a longtime member of the Al Noor Mosque in Christchurch, New Zealand, and president of the region’s Muslim association, froze as one gunman after another entered the house of worship. Just four months prior, the mosque had been ground zero for that country’s deadliest massacre, in which the attacker live-streamed on Facebook his shooting rampage that killed 51 and wounded 49.
This time, the guns were part of a drill. Police officers were simulating a siege on the mosque, brandishing a variety of firearms to test a new high-tech security system developed by Athena Security.
With the help of artificial intelligence, surveillance cameras mounted inside and outside the mosque recognized lethal threats within seconds. They set in motion a rapid emergency response, alerting authorities and, ultimately, the congregants inside to imminent danger. “We were impressed,” Khan says of the technology’s performance that day. “We saw all kinds of arms—whether it be a pistol or a larger gun. All of them were detected.”
Mass shootings like the one in New Zealand, and more recent ones in Dayton, El Paso, and Odessa, Texas, are prompting some businesses and schools to install A.I.-powered security systems. The hope is that the emerging technology will help save lives in a mass shooting, which, according to Mass Shooting Tracker, a crowdsourced database, occurs on average at least once a day somewhere in the land of the free, home of the brave.
Still, there’s only so much cameras and computers can do to stop a determined gunman armed with an AR-15-style weapon. Furthermore, the technology is prone to occasional false positives, while critics worry about the privacy implications of monitoring for firearms anyone and everyone who happens to be walking by.
Whatever the case, investors are pouring money into intelligent security. Athena Security, a year-old startup based in Austin, has raised $5.5 million. The Israeli firm AnyVision, meanwhile, closed a $74 million funding round in June, and Canada’s Patriot One Technologies, listed on the Toronto Stock Exchange, has raised $87 million Canadian ($65 million U.S.).
The recent rise of A.I.-based security systems is tied to improvements in image recognition, a technology that tries to identify what’s in photographs or video stills. In this case, the goal is to zero in on what’s often easily overlooked—deadly weapons and suspicious behavior that signal an impending violent act.
Athena’s technology works by analyzing as little as three seconds of surveillance footage, or 90 individual frames of video. Its algorithms are trained to look for both dangerous objects and the menacing movements of individuals—say, a person brandishing a Glock pistol while approaching a school.
The tech will lock in on that scene, pulling in more frames to analyze before notifying the on-duty Athena technician or a security officer on-site to verify the threat. If the danger is real, the security staff can sound the alarm, locking down the school, office complex, or place of worship and preventing an armed attacker from entering.
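The pipeline described above—scan a rolling window of footage, flag a possible weapon, pull in more frames, then hand the call to a human—can be sketched in a few lines of Python. This is purely illustrative, not Athena’s actual code; `detect_weapon` stands in for the trained image-recognition model, and the confirmation threshold is an assumption.

```python
# Illustrative sketch of the detection flow the article describes
# (not Athena's implementation). Three seconds of footage at 30 fps
# gives the 90-frame window mentioned above.
from collections import deque

FPS = 30
WINDOW = 3 * FPS  # 90 frames

def detect_weapon(frame):
    # Placeholder for the model's per-frame verdict; a real system
    # would run a neural network over the image here.
    return frame.get("weapon_score", 0.0) > 0.8

def monitor(frames, confirm):
    """Scan frames; on a hit, analyze a follow-up window, then ask a
    human operator (the `confirm` callback) before sounding the alarm."""
    buffer = deque(maxlen=WINDOW)
    for i, frame in enumerate(frames):
        buffer.append(frame)
        if detect_weapon(frame):
            followup = frames[i : i + WINDOW]  # pull more frames to analyze
            hits = sum(detect_weapon(f) for f in followup)
            # Hypothetical rule: escalate only if most follow-up frames agree.
            if hits >= len(followup) // 2 and confirm(frame):
                return "alarm"  # lock down the site, alert authorities
    return "clear"
```

The `confirm` callback models the on-duty technician or security officer who verifies the threat before any lockdown is triggered.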
In testing earlier this year, the company found just one weak spot: Its algorithms failed to spot a handgun or assault rifle carried by a person 30 feet away with the weapon pointed straight at the camera. In every other scenario tested—the gun angled to the right or left, the gunman in motion with the gun, the weapon held at any angle within 25 feet of the camera—the detection rate was 100%.
Spotting guns is the feature that clients are most interested in, explains Chris Ciabarra, Athena’s cofounder and chief technology officer. But they also ask for harder-to-spot threats, which is why in September the company introduced updated software that’s supposed to home in on knives (over six inches) and fights (punching, kicking, pushing). The technology costs $100 monthly per camera.
What Athena tries to avoid is making its system so overzealous that it mistakes any shiny black object—an iPhone, say—for a handgun. Ciabarra says that newly installed technology may flag two or three false positives per camera per day that on-duty security staff must vet. “The alert pops up on a screen, and they click yes or no,” Ciabarra says. “The last thing we want is for police to be called to a scene when they’re not supposed to be.”
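The human-in-the-loop step Ciabarra describes—every automated alert lands on a screen where staff click yes or no before police are ever contacted—amounts to a simple routing rule. The sketch below is hypothetical; the function names are illustrative, not Athena’s API.

```python
# Hypothetical sketch of the vetting workflow described above:
# an operator confirms or rejects each automated alert, and only
# confirmed alerts reach law enforcement.
def vet_alert(alert, operator_confirms):
    """Route one automated alert through a human reviewer."""
    if operator_confirms(alert):
        return "dispatch_police"
    return "logged_false_positive"  # e.g., an iPhone mistaken for a handgun

def false_positive_rate(alerts, operator_confirms):
    """Fraction of alerts the operator rejects. Per the article, a newly
    installed camera may generate two or three false positives a day."""
    if not alerts:
        return 0.0
    rejected = sum(
        1 for a in alerts
        if vet_alert(a, operator_confirms) == "logged_false_positive"
    )
    return rejected / len(alerts)
```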
Patriot One, the Canadian company, uses a combination of machine learning and microwave-radar technology to spot hidden threats. Its sensors act as a kind of long-range metal detector that identifies concealed weapons, including guns, knives, and bombs.
Signals that bounce off solid objects are instantly analyzed for a match in the company’s weapons database. The system’s machine learning distinguishes between the highly suspect (say, an assault rifle smuggled in a suitcase) and the benign (a mobile phone in a jacket pocket), and then, if necessary, alerts security personnel. “It’s all about smart, distributed, low-cost networked security,” says Martin Cronin, the company’s CEO.
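Matching a returned radar signal against a database of known weapon signatures is, at its core, a nearest-match problem with a threshold: close enough to a stored weapon profile triggers an alert, while anything else—a phone in a jacket pocket—stays benign. The sketch below is a loose illustration under that assumption, not Patriot One’s method; the signatures and threshold are invented.

```python
# Illustrative nearest-match sketch (not Patriot One's code): compare a
# radar return against stored weapon signatures and alert only on a
# sufficiently close match. Feature vectors here are made up.
import math

WEAPON_DB = {  # hypothetical stored signatures
    "assault rifle": [0.9, 0.8, 0.7],
    "handgun": [0.6, 0.5, 0.4],
    "knife": [0.3, 0.4, 0.2],
}

def closest_match(signature, threshold=0.15):
    """Return (label, distance) for the nearest database entry, or None
    if nothing is close enough -- e.g., a mobile phone's return signal."""
    best = min(
        ((label, math.dist(signature, ref)) for label, ref in WEAPON_DB.items()),
        key=lambda pair: pair[1],
    )
    return best if best[1] <= threshold else None
```

In practice the “distinguishing” step Cronin describes would be learned from data rather than a fixed distance threshold, but the shape of the decision—match, score, alert or ignore—is the same.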
Patriot One’s hardware is installed in schools, offices, and public venues including the Westgate Las Vegas Resort & Casino and the University of North Dakota at Grand Forks. It costs just under $50,000 per installation, plus an annual $10,000 fee.
Cronin, a former diplomat for the British Foreign Office, says interest in his company’s technology picked up significantly after the 2017 Mandalay Bay shooting in Las Vegas in which a lone gunman killed 58 and wounded 422. Cronin won’t get drawn into hypotheticals about whether Patriot One’s system, called PatScan, could have prevented it.
But, he says, that type of incident—weapons in bags brought into a hotel—is precisely what the technology is designed to prevent. “Yes, in principle, we will detect those and generate an alert so that security could respond before an incident could happen,” Cronin says.
A growing number of companies are selling A.I.-powered security systems that detect guns and other weapons.
Athena Security
Using image-recognition technology, this Austin firm’s security system analyzes surveillance video for firearms and knives.
Patriot One Technologies
With the help of microwave radar and machine learning, this Canadian firm’s tech can spot and identify visible or hidden threats like guns, knives, and bombs.
AnyVision
This Israeli firm has developed a “computer vision” platform that works with most networked security cameras to recognize faces and body types along with objects that resemble a security threat.
Using A.I. for security is fast becoming a hot-button issue. Digital-rights advocacy groups and politicians, from San Francisco’s board of supervisors to Democratic presidential candidate Bernie Sanders, have called for banning facial-recognition algorithms for policing. In March, the Commercial Facial Recognition Privacy Act was introduced in the U.S. Senate, a bipartisan bill that could codify privacy rights, potentially limiting what A.I.-powered security systems can do.
“We need guardrails to ensure that as this technology continues to develop, it is implemented responsibly,” says Sen. Roy Blunt (R-Mo.), who cosponsored the bill with Sen. Brian Schatz (D-Hawaii).
Patriot One and Athena Security say that one of the biggest misconceptions about the use of their security systems is a subsequent loss of privacy. Both companies say their technologies don’t record, store, or share an individual’s biometric data.
But Cronin acknowledges that he gets a lot of questions about how Patriot One’s technology differs—if at all—from how, say, China uses facial-recognition technologies to track political foes. “To that I say, ‘This isn’t what this technology is about. It’s to keep people safe, in a public or a private environment,’ ” Cronin contends.
In Christchurch, Khan has other concerns. In the aftermath of the massacre, his role as one of the congregation’s administrative leaders has expanded to include caretaker, counselor, and security chief. He’s just trying to restore a bit of normalcy.
Even with the new security system and a beefed-up police presence, some congregants are too scared to return. Khan doesn’t know if they’ll come back, but he’s beginning to feel better about the safety and well-being of those who have remained. No technology is foolproof, he says: “But at least we have these technologies in place that can help prevent this kind of thing happening.”