In a dramatic turn of events, robotaxis, self-driving vehicles that pick up fares with no human operator, were recently unleashed in San Francisco. After a contentious 7-hour public hearing, the decision was handed down by the California Public Utilities Commission. Despite protests, there’s a sense of inevitability in the air: California has been gradually loosening restrictions since early 2022. The new rules allow the two companies with permits – Alphabet’s Waymo and GM’s Cruise – to send these taxis anywhere within the 49-square-mile city except highways, and to charge fares to riders.
The idea of self-driving taxis tends to evoke two conflicting emotions: excitement (“taxis at a much lower cost!”) and fear (“will they hit me or my kids?”). Regulators have thus often required that the cars be tested with human operators aboard who can intervene and take the controls before an accident occurs. Unfortunately, keeping humans on alert, ready to override the system in real time, may not be the best way to ensure safety.
In fact, all 18 deaths in the U.S. associated with self-driving car crashes (as of February of this year) involved some form of human control, either in the car or remotely. This includes one of the most notorious, which occurred late at night on a wide suburban road in Tempe, Arizona, in 2018. An automated Uber test vehicle killed a 49-year-old woman named Elaine Herzberg as she walked her bike across the road. The human operator in the driver’s seat was looking down, and the car didn’t alert them until less than a second before impact. They grabbed the wheel too late. The accident led Uber to suspend its testing of self-driving cars; it ultimately sold its automated vehicles division, which had been a key part of its business strategy.
The operator ended up facing criminal charges, a consequence of automation complacency, a phenomenon first identified in the earliest days of pilot flight training. Overconfidence is a frequent dynamic with AI systems: the more autonomous the system, the more human operators tend to trust it and let their attention drift. We get bored watching over these technologies, so when an accident is actually about to happen, we don’t expect it and we don’t react in time.
Humans are naturals at what risk expert Ron Dembo calls “risk thinking” – a way of thinking that even the most sophisticated machine learning cannot yet emulate: the ability to recognize, when the answer isn’t obvious, that we should slow down or stop. Risk thinking is critical for automated systems, and that creates a dilemma. Humans want to be in the loop, but putting us in control when we rely so complacently on automated systems may actually make things worse.
How, then, can the developers of automated systems resolve this dilemma, so that experiments like the one taking place in San Francisco end well? The answer is extra diligence not just in the moments before impact, but in the early stages of design and development. All AI systems carry risks when left unchecked. Self-driving cars will not be free of risk, even if they turn out to be safer, on average, than human-driven cars.
The Uber accident shows what happens when we don’t risk-think with intentionality. To do so, we need creative friction: bringing multiple human perspectives into play long before these systems are released. In other words, thinking through the implications of AI systems, rather than just their applications, requires the perspective of the communities that will be directly affected by the technology.
Waymo and Cruise have both defended the safety records of their vehicles on the grounds of statistical probability. Nonetheless, this decision turns San Francisco into a living experiment. When the outcomes are tallied, it will be extremely important to capture the right data, to share the successes and the failures, and to let the affected communities weigh in along with the specialists, the politicians, and the business people. In other words: keep all the humans in the loop. Otherwise, we risk automation complacency – the willingness to delegate decision-making to AI systems – at a very large scale.
Juliette Powell and Art Kleiner are co-authors of the new book The AI Dilemma: 7 Principles for Responsible Technology.