Like millions of people, I watched the viral video of dancing Boston Dynamics robots that made its way around Twitter this week. But unlike many of those millions, I did not think, “Wow, the future is so cool.” I thought, “We gotta keep these away from the cops.”
I admit that some of my aversion is a gut reaction to the uncanny valley. The dog-shaped ones creep me out the most. A predator, often headless, unfazed by rain or heat, without need for food or water or rest—that’s the stuff of science fiction nightmares. I know, objectively, these robots are an incredible technological achievement, yet I can’t erase that instinctive unease.
Still, my worry about misuse of these and similar robots by law enforcement is not merely an emotional reaction. Nor do I think there’s zero place for robots in policing. The problem I foresee is the introduction of robotics without a strong and specific legal framework dictating how they may and may not be used.
The risk here is escalation, and the history of SWAT teams provides an excellent case study. These units were introduced to American policing a little over half a century ago, designed for a limited set of very dangerous circumstances, chiefly hostage and barricade situations or violent rioting.
As time went on, however, police departments realized they could use SWAT teams in more routine contexts, too. Now, fewer than one in 10 SWAT raids involve those high-danger situations. “Today in America SWAT teams violently smash into private homes more than 100 times per day,” writes Reason alum Radley Balko in Rise of the Warrior Cop. “The vast majority of these raids are to enforce laws against consensual crimes,” he adds, particularly drug use. American SWAT teams have raided homes and businesses for alleged offenses including unlicensed barbering, copyright violation, and parodying a local politician on Twitter. Some police departments use SWAT teams to execute every search warrant.
There’s a difference, however, between how SWAT use started and how robotics is being introduced to policing. When SWAT teams were created, state lawmakers passed legislation giving police new leeway in their work and deadlier tools with which to do it. By contrast, the residents of New York City learned their police department had obtained a Boston Dynamics dog when it was photographed in action at a crime scene. The acquisition does not appear to have been directed by any elected officials, though it’s possible a law enforcement transparency measure passed by the city council this past summer will compel the NYPD to report on the robot’s current use and devise policies for it going forward. So far, the NYPD has characterized the dog exactly as SWAT teams were described in their early days: a tool to keep officers safe in unusual emergencies, especially hostage and barricade crises.
Likewise, when the Massachusetts State Police borrowed a Boston Dynamics robot for “mobile remote observation,” a records request by the state branch of the American Civil Liberties Union (ACLU) turned up no departmental use policy. “We just really don’t know enough about how the state police are using this,” said the organization’s director of the Technology for Liberty Project, Kade Crockford. “And the technology that can be used in concert with a robotic system like this is almost limitless in terms of what kinds of surveillance and potentially even weaponization operations,” Crockford continued. “We really need some law…to establish a floor of protection to ensure that these systems can’t be misused or abused in the government’s hands.”
We do. Robots in policing are not inherently dystopian. It is a good thing that a robot can be sent to defuse a bomb instead of putting a human officer at risk. There are some appropriate uses here. But we need laws delineating those uses and, I believe, prohibiting machine use of force against human beings (just as we must legislate police use of drones).
The move from defensive (e.g., bomb disposal) to offensive (e.g., restraining or even killing a suspect) use will happen if it isn't prohibited. In fact, it has already happened: In Dallas in 2016, police jury-rigged a bomb disposal robot with explosives and used it to kill a sniper who had shot 12 officers, murdering five. Other departments have similarly turned robots built to protect people into tools for delivering nonlethal attacks.
Legislators—not unelected police department administrators—should be pre-emptively determining what kinds of robots police departments can acquire and how they may be used. Police robot acquisition, like any major new weapon or equipment procurement, should never catch the public by surprise. Law enforcement officers are supposed to be public servants, not masters.
In the bigger picture, there are three questions our lawmakers should be answering with legislation. First, do robots make police use of force more likely and/or more severe? This technology won't be a neutral influence on officers' decision making, just as SWAT teams haven't been neutral. If it is substantially easier (by virtue of being safer for officers) to use force with a robot than without, that will change police behavior. It may sometimes change behavior for the better, of course, since removing the question of officer safety could make escalation to violence less likely in some circumstances. It may also change the behavior of the person being policed, making them more or less fearful and therefore more or less likely to fight or flee.
Second, what happens when robots progress away from significant human control? Bomb disposal robots don't have artificial intelligence (A.I.) to make their own decisions; they are remote-controlled by human operators. The Boston Dynamics machines are more sophisticated—they have "an element of autonomy"—but these too are mostly human-controlled. As A.I. progresses, however, we'll run into moral dilemmas like those posed by self-driving cars: What ethics do you give the robot? This will be a matter of sincere disagreement, and it should be subject to public debate, not secretive departmental decision making.
Finally, when we do reach that point with A.I., the case will be made that robots should be entrusted with use of force (perhaps even more than humans) because they cannot operate outside their programmed rules and wouldn’t react out of fear for their own lives as a human officer might. A robot cop approaching Philando Castile’s car, for example, would not have shot him as Jeronimo Yanez did unless programmed to do so.
The problem, as with autonomous weapons of war, is threefold. First, programming can be flawed, and the entire "robots would be police without the burden of human frailty" argument rests on an unjustified assumption of perfect ethics and execution. Second, complex ethical decisions require more than rules: human compassion and conscience can't be programmed, and subjecting human life to the decision of a robot is an affront to human dignity. Third, every problem we have with holding law enforcement accountable for misconduct today will be exacerbated ad infinitum if "the robot did it." You can't try or punish a robot, and it is inevitable that the manufacturers, programmers, and operators of police robots will be indemnified against liability in brutality cases if we permit the escalation I foresee to proceed unchecked.
We should take the lesson of SWAT teams to heart and build a strong fence of law around police robots before they get loose.