Last month, General Motors added its name to the growing list of automakers pursuing a novel type of partially automated technology called “eyes-off driving.” What the company didn’t do, though, was provide a thorough description of how it will take responsibility when something goes wrong.
Not to be confused with the type of “eyes-off” distracted driving that many drivers seem to be practicing these days, GM’s system would be a step toward the automaker’s ultimate goal of selling privately owned, fully autonomous cars. Some GM-produced cars already include the company’s Super Cruise system, which allows drivers to take their hands off the steering wheel but uses gaze-tracking technology to make sure they keep their eyes on the road. The new system, at Level 3 of the six-level scale of autonomy, would allow drivers to take their hands off the steering wheel and their eyes off the road on some US highways.
GM says it aims to bring its Level 3 system to market by 2028, starting with the Cadillac Escalade IQ. From there, the technology will likely spread to the automaker’s other brands, like Chevy, Buick, and GMC. Soon, drivers will be able to look at their phones without shame or risk of a traffic violation. In some cases, drivers are even encouraged to play video games or watch YouTube while their vehicle handles the driving.
But only sometimes. Crucially, in a Level 3 system, drivers still need to stand ready to take over control of the vehicle if prompted. And if they fail to do so quickly, they could be held responsible when something goes wrong. And when it comes to driving in the world today, something always goes wrong.
“With conditional automation, Level 3 automation, things get messier,” said Dr. Alexandra Mueller, a senior research scientist at the Insurance Institute for Highway Safety. “And that’s where I think a lot of concerns are coming from, because there’s a lot we just simply don’t know when it comes to Level 3 driving automation.”
The uncertainty is even more concerning when you go down the list of automakers actively pursuing the technology. In addition to GM, Ford, Jeep parent Stellantis, and Honda have all thrown in their chips with Level 3. Mercedes-Benz already has a Level 3 system, which it is calling Drive Pilot — but it’s only legal to use on specific highways in California and Nevada.
And that’s the catch. The industry is actively planning for the release of a new technology that is still prohibited in most places. Germany and Japan both have some temporary allowances for BMW and Honda, respectively. But to date, Level 3 driving is highly restricted and is likely to remain so until lawmakers can figure it out.
It’s an incredibly tricky problem for many regulators. How do you assign liability in a system that can bounce back and forth between an automated driving system and a human driver? In Drive Pilot’s case, Mercedes says it will accept liability for crashes caused by its technology when the system is active. But this is inherently conditional, and the driver is still responsible if they fail to take control when prompted or misuse the system.
Tesla already uses this ambiguity to its benefit with its Level 2 systems, Autopilot and Full Self-Driving. An investigation into dozens of Tesla-involved crashes found that Autopilot would disengage “less than one second” before impact. Investigators didn’t find any evidence suggesting Tesla was trying to shirk its responsibility — but it certainly looks bad for the company.
The sensors guiding these systems, like cameras, infrared trackers, and torque sensors, can also be used by the companies to present evidence, in the event of a crash, of who was in control and when. At the event announcing its new “eyes-off” system, GM CEO Mary Barra pointed to the increasing number of sensors as potentially exculpatory for the company in these cases. “We’re going to have so much more sensing that we’re going to know pretty much exactly what happened,” she said when asked about the liability concerns with Level 3 automation. “And I think you’ve seen General Motors, you know, always take responsibility for what we need to.”
The very definition of Level 3 presents a contradiction: Drivers are told they can disengage, yet must also remain available for rapid reengagement. When the transitions are planned, such as when a driver is entering or exiting a mapped zone, the handoff should be seamless. But unexpected events, like sudden weather or road changes, could make these systems unreliable. Research has shown that humans generally struggle with this type of “out-of-the-loop” task recovery.
When people have been disconnected from driving for an extended period, they may overreact when suddenly forced to take control in an emergency. They may overcorrect the steering, brake too hard, or fail to respond correctly because they haven’t been paying attention. And those actions can create a domino effect that can be dangerous or even fatal.
“The mixed fleet scenario, which is going to exist probably well beyond our lifetime, offers a highly uncontrolled environment which a lot of highly automated systems and even partially and conditionally automated systems will struggle with,” Mueller said. “And they’ll struggle with [it] indefinitely because frankly, we live in a very chaotic and dynamic environment where stuff is constantly changing.”
We’re already starting to see the emergence of case law that puts the onus on the human driver over the automated system.
In Arizona, the safety driver in an Uber robotaxi pleaded guilty in response to a charge of negligent homicide for a fatal 2017 crash that occurred while the autonomous system was engaged. Before that, a Tesla driver pleaded no contest to negligent homicide for two fatalities stemming from a crash that occurred while the company’s Autopilot system was in use. In both cases, prosecutors pursued criminal charges against the human behind the wheel, reasoning that despite the presence of an automated system, the driver was the one ultimately responsible for the vehicle.
Automakers are likely thrilled with the outcomes of these cases. But other cases have found that the carmaker can share liability when something goes wrong. Take, for example, the recent jury verdict in Florida, where Tesla was held partially responsible for a crash that killed two people. In that case, the owner of the Model S who was using Autopilot was also found liable — but it was Elon Musk’s company that was ordered to pay $243 million to the victims’ families.
Mike Nelson, a trial attorney who specializes in mobility, notes that legal precedent for automation-related crashes is still embryonic. Cases about Level 2 systems will be used to inform rulings on Level 3 and beyond. But judges, lawyers, and jurors tend to lack technical expertise, making for a legal landscape that will remain largely unpredictable.
As we move into this chaotic middle period, when human drivers find themselves sharing the road with more and more robots, automakers would be well advised to be as transparent as possible, Nelson said. The reason? Juries tend to like it when companies don’t try to cover up their misdeeds.
“I’m not happy about the chaos, but this is not unforeseen,” Nelson said. “This has happened every time we’ve had an industrial revolution.”