Who is responsible if an automated vehicle crashes?

Aug 02, 2017

Cars are already capable of getting us from A to B without human intervention.

Here, just look at the nifty video Tesla released showing one of its cars doing exactly that.

Here’s the rub: These cars are beginning to be involved in accidents. That raises some serious legal and ethical questions. Will the cars be programmed to save the maximum number of lives, or the life of the owner? If something goes wrong, is the occupant of the car considered to be in control of the vehicle? Who is responsible?

Obviously, we’re interested in these questions because we’re going to operate our motorcycles cheek-by-jowl with these vehicles. The uncertainty boils down to two questions that have not yet been answered clearly.

Are self-driving cars safer than humans?

“Safety” is an interesting concept.

“The autonomous car doesn’t drink, doesn’t do drugs, doesn’t text while driving, doesn’t get road rage,” former GM Vice Chairman Bob Lutz told CNBC. “Young, autonomous cars don’t want to race other autonomous cars, and they don’t go to sleep.”

In that respect, cars are more consistent than humans, which does contribute to safety.

In 2012, The Santa Clara Law Review published an abstract on nascent autonomous technology. The authors did not directly compare autonomous crash data with crash data from human-piloted vehicles, but they did note that “driver error was the most common factor implicated in vehicle accidents,” cited in roughly 95 percent of crashes.

In response to a fatal crash involving one of its cars being operated on Autopilot, Tesla published a post on its corporate blog in June 2016 stating, “This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles.”
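For readers who want those two figures in the same units, here’s a minimal sketch that simply restates them as fatalities per 100 million miles. It uses only the two numbers Tesla quoted; the variable and function names are mine.

```python
# Restates Tesla's two quoted figures in a common unit:
# fatalities per 100 million miles. No new data involved.

AUTOPILOT_MILES_PER_FATALITY = 130e6   # "first known fatality in just over 130 million miles"
US_FLEET_MILES_PER_FATALITY = 94e6     # "a fatality every 94 million miles" (all US vehicles)

def fatalities_per_100m_miles(miles_per_fatality: float) -> float:
    """Convert 'miles per fatality' into 'fatalities per 100 million miles'."""
    return 100e6 / miles_per_fatality

print(f"Autopilot: {fatalities_per_100m_miles(AUTOPILOT_MILES_PER_FATALITY):.2f} fatalities per 100M miles")
print(f"US fleet:  {fatalities_per_100m_miles(US_FLEET_MILES_PER_FATALITY):.2f} fatalities per 100M miles")
```

Note that the Autopilot figure rests on a single known fatality, a sample-size caveat that comes up again below.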

Automotive journalist David Noland calculates safety a bit differently. He asserts that Tesla’s safety comparison isn’t valid because the U.S. traffic fatality rate used for comparison includes bicyclists, pedestrians, buses, and 18-wheelers.

“This is not just apples-to-oranges. This is apples-to-aardvarks,” Noland wrote. His belief is that the best comparative benchmark is a Tesla that’s not operated on Autopilot. Using figures supplied by Tesla, Noland’s findings show an autonomous Tesla to have a crash rate roughly double that of a human-piloted Tesla. (Noland acknowledges that his data may be imperfect due to the small sample size of autonomous vehicle crashes.)

Unsurprisingly, MIT research of 2,000 U.S. citizens shows that people want autonomous cars to put the greater good first: They want the car to minimize the loss of life, even if that includes killing the car’s operator. If you’re an economist at heart, and believe people are rational in terms of economic utility, it should come as no surprise that those same people do not want to buy a car that works like that. Respondents were only one-third as likely to buy a car that would potentially prioritize the lives of others over the life of the car owner.

At this point in time, the question of safety does not appear to have a clear-cut answer. Autonomous cars seem to have the potential to be safer than human drivers, but whether they have already crossed that threshold is unclear. Morbidly, the keys to clarifying this research are more miles logged and more accidents recorded; a larger sample size will make studies of autonomous vehicle safety far more conclusive.
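To make the sample-size point concrete, here is a small illustrative sketch, not part of any study cited above, that computes an exact Poisson confidence interval for a fatality rate estimated from a single event in roughly 130 million miles. The only inputs are the figures quoted earlier; the choice of method (a Garwood exact interval) and the use of SciPy are my own assumptions.

```python
# Illustrative only: how wide the uncertainty is when a rate estimate
# rests on a single event. Exact (Garwood) Poisson interval; my own
# illustration of the sample-size problem, not anyone's cited analysis.
from scipy.stats import chi2

def poisson_rate_ci(events: int, exposure_miles: float, conf: float = 0.95):
    """Exact CI for a rate, returned as events per 100 million miles."""
    alpha = 1 - conf
    lower = 0.0 if events == 0 else chi2.ppf(alpha / 2, 2 * events) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = 100e6 / exposure_miles
    return lower * scale, upper * scale

# One known fatality in ~130 million Autopilot miles (Tesla's figure):
low, high = poisson_rate_ci(events=1, exposure_miles=130e6)
print(f"Plausible range: {low:.2f} to {high:.2f} fatalities per 100M miles")
# The interval spans from well below to well above the US fleet average
# of ~1.06 per 100M miles, which is why more miles (and, morbidly, more
# accidents) are needed before the comparison means much.
```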

Who is responsible for injuries caused by a driverless vehicle?

Another point that has yet to be hashed out is liability. In the event of an accident involving a self-driving car, determining the responsible party becomes fairly difficult. First, it helps to remember that in our legal system, the rules of the roadways are set by state and local authorities. The issue is not a federal one, so the answer may well vary from state to state.

Additionally, remember that in America we have two types of cases: civil and criminal. It’s perfectly plausible to be acquitted of a crime yet still be held civilly liable for damages.

With those thoughts in mind, addressing responsibility probably begins with the driver. Most state laws use a legal standard known as “actual physical control” to describe the thing people do in cars. This is important, because case law interprets that phrase far more broadly than “driving.” For instance, in Cloyd v. State, the Florida District Court of Appeal defined the term thusly: “‘Actual physical control’ means the defendant must be physically in or on the vehicle and have the capability to operate the vehicle, regardless of whether he/she is actually operating the vehicle at the time.”

To a reasonable person, that’s not “driving,” but under that expansive legal definition, someone being ferried about by a self-driving car could very well be legally culpable for a mistake. Most people aren’t familiar with this legal concept, but it’s probably going to be argued in court in the near future.

Note that actual physical control typically comes up in DUI/DWI cases, so the question of driver responsibility for injuries or deaths may actually be answered first in the more likely scenario of a driver having an autonomous vehicle navigate the way home after a few beers. Doesn’t it seem likely that a judge could be swayed by a defendant who’s following the spirit of the law, even if he’s violating the letter of it? Will that judge be aware of the potential ramifications of the precedent such a case could set?

Another likely source of debate will be customer expectations of the capabilities of the car. The dream and the reality are not the same thing. For instance, a Tesla blog post touting the self-driving capabilities promises, “...all Tesla vehicles produced in our factory – including Model 3 – will have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.”

That sounds excellent. Self-driving, in this context, certainly sounds like you could move over to the passenger seat and take a nap, right? You do that with other humans who are driving, and this car is supposed to be even safer.

However, Tesla also states: “It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot ‘is an assist feature that requires you to keep your hands on the steering wheel at all times,’ and that ‘you need to maintain control and responsibility for your vehicle’ while using it.”

Every time Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.” The system also makes frequent checks to ensure that the driver's hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.
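Tesla doesn’t publish the actual control logic, but the behavior described above (periodic hands-on checks, escalating alerts, then a gradual slowdown) maps onto a simple monitoring loop. The sketch below is purely illustrative; every threshold, timing, and callback name is an assumption of mine, not Tesla’s implementation.

```python
import time

# Illustrative sketch only: a toy version of the hands-on monitoring
# behavior described above. Thresholds, timings, and callback names are
# invented for the example; Tesla's real implementation is not public.

ALERT_AFTER_S = 15   # hypothetical: show a visual alert after this long hands-off
CHIME_AFTER_S = 30   # hypothetical: add an audible alert after this long
SLOW_AFTER_S  = 45   # hypothetical: begin gradually slowing after this long

def monitor_hands(hands_detected, visual_alert, audible_alert, reduce_speed):
    """Poll a hands-on signal once a second and escalate while it stays absent."""
    hands_off_since = None
    while True:
        if hands_detected():                 # e.g. torque sensed on the wheel
            hands_off_since = None           # reset the escalation clock
        else:
            if hands_off_since is None:
                hands_off_since = time.monotonic()
            elapsed = time.monotonic() - hands_off_since
            if elapsed > SLOW_AFTER_S:
                reduce_speed()               # keep slowing until hands return
            elif elapsed > CHIME_AFTER_S:
                audible_alert()
            elif elapsed > ALERT_AFTER_S:
                visual_alert()
        time.sleep(1.0)
```

The legally interesting part is the escalation path: the whole design assumes a human remains the fallback, which is exactly the “maintain control and responsibility” language quoted above.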

Says Tesla: “We do this to ensure that every time the feature is used, it is used as safely as possible. As more real-world miles accumulate and the software logic accounts for increasingly rare events, the probability of injury will keep decreasing. Autopilot is getting better all the time, but it is not perfect and still requires the driver to remain alert.”

This certainly sounds far more careful and guarded than the optimistic post quoted earlier, even though this more cautious statement was actually published first. Notably, the post that snippet appeared in is entitled “A Tragic Loss,” and its subject was the death of Joshua Brown, who was killed when his Tesla collided with another vehicle while operating in Autopilot mode.

In a significantly less guarded move, Volvo president and CEO Håkan Samuelsson has pledged that his company will “accept full liability whenever one of its cars is in autonomous mode.” According to a Volvo press release, that makes Volvo one of the first carmakers in the world to make such a promise.

Another layer in the post-crash blame game will involve owners who modify their cars, or pay to have them modified. As discussed earlier, owners have a strong incentive to have their autonomous driving software rewritten to prioritize the driver’s life rather than minimizing the overall loss of life. In the same press release discussed above, Samuelsson specifically states that Volvo considers “hacking” the car’s software a criminal offense.

How would such a law be enforced? It would be draconian to place too high a fine on software hackers, but the ramifications of the hack could be extraordinary if a modified vehicle were involved in a crash. Thus, owners who place a high utility value on their own lives would have a great incentive to cheat the software: a small penalty if caught, but a large payoff if the modification ever saved the driver from a lethal crash.
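To see the shape of that incentive, here is a back-of-the-envelope expected-value sketch. Every number in it is invented purely for illustration; the point is only that a modest fine multiplied by a modest chance of getting caught is easily dwarfed by even a tiny chance of the modification saving the owner’s life.

```python
# Back-of-the-envelope expected-value sketch of the "cheat the software"
# incentive. Every number here is invented purely for illustration.

FINE_IF_CAUGHT = 5_000            # hypothetical fine for an illegal modification
PROB_CAUGHT = 0.10                # hypothetical chance of being caught
VALUE_OF_OWN_LIFE = 10_000_000    # hypothetical value the owner places on their own life
PROB_MOD_SAVES_OWNER = 0.001      # hypothetical chance the mod is ever decisive

expected_cost = PROB_CAUGHT * FINE_IF_CAUGHT
expected_benefit = PROB_MOD_SAVES_OWNER * VALUE_OF_OWN_LIFE

print(f"Expected cost of cheating:    ${expected_cost:,.0f}")
print(f"Expected benefit of cheating: ${expected_benefit:,.0f}")
# With these (made-up) numbers the benefit dwarfs the cost,
# which is exactly the enforcement problem described above.
```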

Complicating matters further is the issue of protecting the software that guides the car’s logic. In September 2016, Tencent’s Keen Security Lab revealed that it was able to hijack a Tesla Model S and control it remotely in a white-hat hack. Tesla responded with a patch to close the vulnerability. To its credit, Tesla maintains a standing bug bounty program that rewards researchers who find and report flaws in its systems. However, legal questions may be raised about how vigorously, and for how long, a manufacturer can reasonably be expected to defend its digital product from attacks.

So what’s gonna happen?

Here’s what I think we’ll see. Things are going to trundle along much as they are now, and autonomous vehicle manufacturers are going to quietly, yet furiously, continue making these cars safer, with more attention paid to the road users who have so far been an afterthought: motorcyclists, pedestrians, and cyclists.

This is going to be critical, because the failure rate (the number of injuries and deaths per mile driven) is going to be almost the sole deciding factor in answering the first question this article posed. For autonomous vehicles to be accepted in America or anywhere else, they will have to be demonstrably at least as safe as human drivers, and ideally far safer. I have no doubt that will happen. Perhaps the question I should have asked is “Are self-driving cars safer than humans right now?” Given how explosively autonomous vehicle technology is growing, it’s simply a matter of time before driverless transit is the safer option.

The law will slowly develop as incidents occur. Once we pass the societal tipping point, the law surrounding this method of travel will take full shape. My suspicion is that the legality of allowing your car to operate for you will hinge almost exclusively on the failure rate. The lower the ratio of deaths per mile traveled, the more freedom “drivers” will be given to abdicate control of the vehicle.

For now, though? I think I’ll treat any Teslas I see on the highway like a teenager texting.