By Caroline Bottger
U.S. Chief Technology Officer Michael Kratsios announced at CES last week that the Trump administration will take a laissez-faire approach to AI regulation. The administration outlined 10 ‘principles’ for U.S. federal agencies to follow when regulating AI, but ultimately the budding industry will self-regulate. The principles are non-binding and, notably, do not apply to AI development and application by the federal government itself.
The principles themselves range from building public trust in AI to the importance of flexibility in adapting to technology changes. But the memo’s wording is careful when it comes to privacy in particular: “AI applications could pose risks to privacy, individual rights, autonomy, and civil liberties” [emphasis ours].
AI has proved a useful tool for automating processes and identifying trends in large amounts of data. Cybersecurity departments are investing more and more in AI for network security, from detecting threats to predicting them. Law enforcement uses AI to identify suspects and flag deviations from behavioral norms. The FBI currently uses AI-powered facial recognition to scan DMV databases in several states.
The soft touch isn’t surprising: in February 2019, Trump signed an executive order prioritizing AI research, with the underlying aim of keeping pace with the country’s main AI competitor, China. Called the “American AI Initiative,” the order did not include any mention of federal funding — a stark contrast to the $15 billion invested by individual Chinese cities. (Federal funding would require bipartisan support.) The February order does include a section about regulation, but according to Vox, the order “doesn’t address these concerns directly.” Almost one year on, the lack of specificity in the order seems to be borne out in last week’s CES announcement.
But some U.S. politicians want something more than suggestions. In the case of drones, Rep. Jan Schakowsky of Illinois called for “meaningful rules, not just guidelines” in the name of public safety. Public wariness extends to other AI applications as well: according to a survey by Deloitte, over half of Americans believed that driverless cars were unsafe.
It’s early days, but commentators within the AI industry have described the guidelines as “wooly.” A former tech advisor to the Obama administration said that the laissez-faire approach could actually harm innovation and prevent the U.S. from being seen as a world leader in AI.
Terah Lyons, who shaped AI policy at the Office of Science and Technology Policy under former U.S. president Barack Obama, says that the principles don’t stray much from those of the previous administration, but that the idea that regulation stifles innovation is a “false dichotomy.” R. David Edelman, head of policy at MIT’s Internet Policy Research Initiative, said that in the interest of easing tensions, talks between the U.S. and China on AI “need to begin.”
While the United States wants to stay on the frontier of AI innovation, Europe hopes to set the global standard for AI regulation, especially when it comes to individuals. Ursula von der Leyen, the newly elected president of the European Commission, pledged to act on AI legislation in her first 100 days in office. Now, the EU’s digital policy department must determine how much of current AI technology is covered by the General Data Protection Regulation (GDPR), which came into effect in May 2018.
Photo by Stefan Cosma via Unsplash