While you were working: MIT’s AI psycho and other tech tales

Here’s a roundup of news stories you may have missed while you were working.

Researchers at MIT trained an AI exclusively on ultraviolent Reddit content, and the results were creepy. The MIT team asked "Norman," named after the killer in Alfred Hitchcock's Psycho, to interpret Rorschach inkblots, and boy, howdy, did he fail hard. The experiment was meant to highlight how significantly biased training data can skew artificial intelligence.

It's not all killer robots. Tech is also playing a role in recovery. Survivors of the school shooting in Parkland, Fla., have created a text messaging group to ease their minds in the wake of the violence. That's only part of the story in The New York Times' harrowing account of what it was like inside the walls of Marjory Stoneman Douglas High School.

As school shootings continue to grip the nation, districts are investing in security measures of all kinds. A post-Parkland analysis by IHS Markit suggests the school security industry will grow to $2.8 billion by 2021. Schools nationwide are investing in everything from security guards and panic buttons to gunshot detection systems and AI-enhanced facial recognition software.

That last one is raising eyebrows among civil liberties advocates, who are generally wary of unregulated surveillance technology. Facial recognition software, they say, may amplify existing human biases, since such systems have repeatedly been shown to misidentify people of color and women.

Meanwhile, automated surveillance tools like drones that scan crowds for fights and other flare-ups may violate Fourth Amendment and privacy rights. In staged tests, scientists said their drones successfully picked out violent body poses 94 percent of the time. Take that figure with a grain of salt, though: there's a big difference between a staged test and real-world use.

Players on all sides admit that tech is simply advancing too quickly for legislators, regulators, and industry associations to set guardrails. Robyn Greene, policy counsel for the Open Technology Institute at New America, wrote in Slate about how some cities are creating policies to ensure their police departments don’t violate people’s privacy.

Google, for its part, created its own guidelines for what it will develop using AI. The company released the new rules after some engineers quit over its work with the U.S. military. Google dropped its "Don't be evil" mantra this year, but said in its new rules that it wouldn't help build weapons or violate human rights.
