We Need To Do More Than Just Point to Ethical Questions About Artificial Intelligence
Hundreds of artificial intelligence experts recently signed an open letter organized by the Future of Life Institute, a letter that prompted Elon Musk to donate $10 million to the organization. “We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our A.I. systems must do what we want them to do,” the letter read.
The problem is that both the letter and the corresponding report allow anyone to read any meaning he or she wants into “beneficial,” and the same vagueness applies to defining who “we” are and what exactly “we” want A.I. systems to do. Of course, there already exists a “we” who think it is beneficial to design robust A.I. systems that will do what “we” want them to do when, for example, fighting wars.
But the “we” the institute had in mind is something different. “The potential benefits [of A.I.] are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools A.I. may provide, but the eradication of disease and poverty are not unfathomable.” But notice that these are presented as possibilities, not as goals. They are benefits that could happen, not benefits that should happen. Nowhere in the research priorities document are these eventualities actually called research priorities.
One might think that such vagueness is just the result of a desire to draft a letter that a large number of people might be willing to sign on to. Yet in fact, the combination of gesturing towards what are usually called “important ethical issues,” while steadfastly putting off serious discussion of them, is pretty typical in our technology debates.