Driverless Cars and the Myths of Autonomy


We hear a lot these days about the goal of “full autonomy.” In a dawning robotic era, we’re told by Google’s automobile engineers and others, machines will act completely on their own, collecting data about the world, making informed, rapid decisions and learning from their experiences. On a commonly used scale of levels of autonomy, level 1 is fully manual control and level 10 is full autonomy. With its familiar rising numbers, the scale itself seems to imply full autonomy as the inevitable, ultimate goal of a natural trajectory of progress in robotics and automation.

History and experience show, however, that the most difficult, challenging and worthwhile problem is not full autonomy but the perfect five — a mix of human and machine and the optimal amount of automation to offer trusted, transparent collaboration, situated within human environments.

Consider the Apollo lunar landings. In the early 1960s, engineers began to build the computer that would guide NASA’s astronauts to the moon. At first they thought that the computer’s interface should have only two buttons. One would say “go to moon,” and the other, “take me home” — nearly level 10 on the autonomy scale.

As the Apollo program progressed, with human lives and national prestige on the line, the engineers gradually added human approvals and inputs. The machine that eventually landed Neil Armstrong and Buzz Aldrin on the moon (see picture above) did indeed include an advanced digital computer running complex software (the first digital fly-by-wire system). But it worked as a perfect five. With a rich set of controls (and real-time collaboration from Houston), the astronauts could vary the level of autonomy, adding or disengaging aspects of the computer in response to alarms and distractions. All six attempts at landing succeeded.
