Driver Using “Autopilot”-Technology Criminally-Charged After Collision

Tyler Durden

Tue, 07/07/2020 – 17:30

Authored by Darren Smith via JonathanTurley.org,

A driver reportedly relying on the “Autopilot” function of a vehicle was cited for driving with criminal negligence after his passenger car struck a lawfully stopped police patrol car. Though anecdotal, I believe this incident demonstrates a fatal legal flaw in the foundational concept of vehicles equipped with autonomous navigation and driving technology: they can expose either the “driver” or the vehicle owner to criminal liability for the essentially passive act of ceding control of the journey to the car.

Ars Technica reported on this most recent collision, in which a Massachusetts driver was cited for driving with criminal negligence after his autonomously operating vehicle crashed into the rear end of a patrol car during a traffic stop. Though the officer was outside his SUV at the time of the collision, he suffered minor injuries when his patrol car was pushed forward into the stopped vehicle. State Troopers said the driver of the colliding vehicle was “not paying attention.” The mechanics of the collision showed the officer was lucky to have escaped death.

According to author Timothy B. Lee, a technical problem manifest in the present iteration of a lesser technology known as “Adaptive Cruise Control” is that the system is prone to failure under certain conditions:

“Often these systems use radar to match the speed of moving vehicles ahead. This is fairly easy to do with radar, which can directly measure another vehicle’s velocity. However, such systems may completely ignore stationary vehicles since radar has poor angular resolution and can’t distinguish stationary objects near the road (like a concrete lane divider) from an obstacle in the vehicle’s travel lane. Adaptive cruise control works well enough most of the time, but it can lead to the rare case where, if a car is parked in the travel lane and a car with adaptive cruise control is being driven by a driver who isn’t paying attention, the two can crash.”
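
To make the quoted failure mode concrete, here is a minimal, purely illustrative Python sketch, assuming a hypothetical radar filter of the kind the passage describes; none of the names or thresholds come from any real product. The point is simply that once near-stationary radar returns are discarded as roadside clutter, a car parked in the travel lane never reaches the braking logic at all.

```python
# Illustrative sketch only, not any manufacturer's actual code: a naive
# radar filter that discards near-stationary returns as roadside clutter,
# which is why a car parked in the travel lane can go unseen.

from dataclasses import dataclass


@dataclass
class RadarReturn:
    range_m: float            # distance to the detected object, in meters
    ground_speed_mps: float   # object's estimated speed over the ground, m/s


# Hypothetical threshold: returns moving slower than this are assumed to be
# fixed roadside objects (signs, barriers), because radar alone has poor
# angular resolution and cannot tell a lane divider from a stopped car.
STATIONARY_THRESHOLD_MPS = 1.0


def targets_to_track(returns):
    """Keep only 'moving' returns for the adaptive-cruise logic."""
    return [r for r in returns if abs(r.ground_speed_mps) > STATIONARY_THRESHOLD_MPS]


if __name__ == "__main__":
    scene = [
        RadarReturn(range_m=120.0, ground_speed_mps=0.2),   # car parked in the lane
        RadarReturn(range_m=80.0, ground_speed_mps=25.0),   # moving traffic ahead
    ]
    tracked = targets_to_track(scene)
    # The parked car is filtered out, so the cruise control never slows for it.
    print(f"Tracked {len(tracked)} of {len(scene)} radar returns")
```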

To my knowledge, the precise technical failure of the car in the Massachusetts collision has not been fully determined, but the underlying problem is often the same: the technology does not anticipate a danger, the driver is inattentive to the situation, and a collision occurs. In essence, one has to consider whether the benefit of the autonomous technology outweighs the risk of deferring responsibility for, and control of, a two-ton vehicle traveling at up to highway speeds, along with the injury and legal jeopardy a collision can bring.

The legal principles of liability presently codified in the various states’ laws are poorly suited to autonomous vehicles. As a result, the present default will be to cast blame on the driver, at a minimum under a charge grounded in Criminal Negligence. Why does this happen?

First, I will demonstrate this in descending order of culpability.

Intent: Using the Revised Code of Washington as a model, under Chapter 9A.08 RCW, Intent means: “A person acts with intent or intentionally when he or she acts with the objective or purpose to accomplish a result which constitutes a crime.” Clearly, in the case of traffic collisions, this principle does not hold: a person does not use an autonomous car with the intention of committing a crime, such as, in this case, causing a collision.

Recklessness: “A person is reckless or acts recklessly when he or she knows of and disregards a substantial risk that a wrongful act may occur and his or her disregard of such substantial risk is a gross deviation from conduct that a reasonable person would exercise in the same situation.” It could be argued that a person was not acting recklessly through the ordinary, by-design act of setting an autonomous car into “autopilot” mode and relying on the manufacturer’s proffered assurance that the car will take over and drive them to their intended destination safely. But that is not a complete defense. What of the case of a person who takes such comfort in the technology that they fall asleep, intentionally or not, and the vehicle’s navigation system makes an error in judgment, causing a fatal collision? Or worse, what if it interprets a collision with a small child as hitting a pothole and keeps driving, thereby forcing felony hit-and-run and vehicular homicide charges onto the driver? Could it, at worst, be argued that in such a situation the driver exhibited an “extreme indifference to human life,” inducing a prosecutor to elevate the culpability to fit a First Degree Murder charge?

Criminal Negligence: “A person is criminally negligent or acts with criminal negligence when he or she fails to be aware of a substantial risk that a wrongful act may occur and his or her failure to be aware of such substantial risk constitutes a gross deviation from the standard of care that a reasonable person would exercise in the same situation.” I suspect this will meet the standard in most collisions involving criminal traffic charges, and for those collisions not involving a criminal level of negligence, such as an ordinary traffic infraction, simple negligence will be the dominant factor. But who or what is legally responsible for the collision?

Presently, responsibility for the navigation and control of a motor vehicle lies with the operator/driver, the human in the driver’s seat. But is that absolute? What of the vehicle’s manufacturer or the developers who designed the navigation system? Can they be held criminally liable for allowing a foreseeable blind spot in the collision-avoidance system, one that enters a state which causes an unhandled exception in the software and leads to a predictable failure of the system? Or is it the vehicle owner who permitted the vehicle on the roadway but might not have maintained it as instructed? Perhaps the sensors were blinded by debris, or the wiring had corroded, or any of a myriad of other forms of neglect intervened. There is precedent in both statutory and case law for charging a vehicle owner for violations, as in the case of traffic-camera or parking violations. Whoever it might be, it is so often the case that “someone is going to pay” for a collision, whether in the traffic or the civil courts. And the answer to this question has not been fully established.

The existential question of whether autonomous vehicles are safer is something the technology will have to bear out, but assuming it will eventually be bulletproof is rather foolhardy thinking. Given the capability of drivers generally, it is evident that humans are not always the better drivers, and it might be argued that the probability of a collision under ideal driving conditions is statistically lower with automation than with people. Still, the technology will never be foolproof; it can fail at unpredictable times and without warning to the driver.

How is a person to know when the software is unaware of a danger? Not only is the human operator now required to continually observe their surroundings, they must also be fully attuned to how the technology is “thinking” and predict with absolute certainty what the software is seeing, in order to protect themselves and others not only from physical harm but from legal liability as well. That is a clairvoyance nobody truly possesses. So in a sense, the attention demanded of a driver responsible for an autonomous car is double that of someone operating a standard vehicle. In that light, the fully autonomous vehicle would seem to lose its advantage in more ways than one.

There is also the matter that fully autonomous vehicles have taken only baby steps into real-world driving. The cars now on the road are new and few in number; they have not endured years of neglect and inconsistent maintenance, nor have they operated extensively under especially hazardous road or weather conditions. It harkens back to decades of wishful thinking and promises of flying automobiles for the masses. The Ockham’s razor for flying cars is simply to look at all the broken-down vehicles seen along highways everywhere. Why is that? It is usually because people do not maintain their cars to the standards required of aircraft. Had these been flying cars, they would not be on the sides of roadways; they would be in the roofs of houses or crashed into buildings. That same fate is destined for autonomous cars. The average person is not going to continually test, calibrate, or maintain a system to the standard required for an autonomous vehicle to operate independently on a highway for a decade or more. The system is going to wear out, and eventually it is going to fail. And when it does, is it the owner’s fault, or that of the teenage daughter who relied on the autonomous system when the car she was driving crashed into another and killed the occupant? Should she go to jail for a software failure, or for her father’s failure to maintain the equipment? Of course the owner/driver could argue that the collision was caused by a mechanical failure in the autonavigation. But the immediate counter to that claim would be: “If you saw the autopilot fail, why did you not retake control of the car to avoid the collision?” That is a formidable position to rebut.

If we allow our thinking to travel to the next step of vehicle automation, the question of legal liability becomes more opaque: Fully Integrated Autonomous Vehicles. Here it is not just the single, stand-alone autonomous vehicle driving by its own devices; it is the network of all vehicles on a roadway communicating with each other and acting as a complete system. One strategy for such a system is for each vehicle to communicate with the others as to its direction, intended movement, speed, et cetera. Purportedly there will be few collisions, since each vehicle is aware of its neighbors’ positions and travel. Probably the most praiseworthy aspect of this technology is that traffic queues could be drastically reduced, since the cars might all move forward at once on a green light rather than oozing forward like molasses as human-driven cars do. But returning to legal liability: does the owner/driver of a vehicle face jeopardy because the autonomous vehicle they drive failed to communicate its intention, or gave an improper signal, misleading another vehicle into changing lanes and causing a collision? Were they negligent in maintaining their software, or radar, or transmitter? They might not even have been aware of such a fault in the system, or of their causal role in the accident. But if someone is to blame, or someone else seeks to pass responsibility on to another, a system that continually broadcasts and records every vehicle’s identity provides a tempting mechanism to find that particular “someone” to take to court.
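
To illustrate that last point, here is a purely hypothetical Python sketch of the kind of broadcast record such a networked vehicle might emit; the message fields and names are my own invention and do not follow any real V2V message standard. A permanently logged stream of records like this is exactly the sort of evidence that makes it easy to locate a “someone” to take to court.

```python
# Hypothetical sketch of a vehicle-to-vehicle (V2V) intent broadcast.
# Field names are invented for illustration and do not follow any real
# V2V message standard.

import json
import time
from dataclasses import dataclass, asdict


@dataclass
class V2VIntentMessage:
    vehicle_id: str      # persistent identity: the eventual liability hook
    timestamp: float     # when the intent was broadcast
    lane: int            # current lane index
    speed_mps: float     # current speed, meters per second
    heading_deg: float   # direction of travel
    intent: str          # e.g. "keep_lane", "change_lane_left", "brake"

    def to_wire(self) -> str:
        """Serialize for broadcast; a real system would presumably also sign
        the record so it can later be attributed to a specific vehicle."""
        return json.dumps(asdict(self))


if __name__ == "__main__":
    msg = V2VIntentMessage(
        vehicle_id="VIN-EXAMPLE-123",
        timestamp=time.time(),
        lane=2,
        speed_mps=29.0,
        heading_deg=90.0,
        intent="change_lane_left",
    )
    # If this broadcast is wrong or never sent (faulty transmitter, stale
    # software) and a neighboring car maneuvers on it, the logged record
    # becomes the evidence used to decide who "caused" the collision.
    print(msg.to_wire())
```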

I suppose it would be wise to consider these aspects before a person chooses to delegate their driving to a black box while retaining all the legal risk and jeopardy of doing so. For me, this is a liability I am not willing to assume.

