The Problem of AI Consciousness

Some things in life cannot be offset by a mere net gain in intelligence.

The last few years have seen widespread recognition that sophisticated AI is under development. Bill Gates, Stephen Hawking, and others warn of the rise of “superintelligent” machines: AIs that outthink the smartest humans in every domain, including common-sense reasoning and social skills. Superintelligence could destroy us, they caution. In contrast, Ray Kurzweil, a director of engineering at Google, depicts a technological utopia bringing about the end of disease, poverty, and resource scarcity.

Whether sophisticated AI turns out to be friend or foe, we must come to grips with the possibility that as we move further into the 21st century, the greatest intelligence on the planet may be silicon-based.

It is time to ask: could these vastly smarter beings have conscious experiences – could it feel a certain way to be them? When we experience the warm hues of a sunrise, or hear the scream of an espresso machine, there is a felt quality to our mental lives. We are conscious.

A superintelligent AI could solve problems that even the brightest humans are unable to solve, but, being made of a different substrate, would it have conscious experience? Could it feel the burning of curiosity, or the pangs of grief? Let us call this “the problem of AI consciousness.”

If silicon cannot be the basis for consciousness, then superintelligent machines — machines that may outmode us or even supplant us — may exhibit superior intelligence, but they will lack inner experience. Worse, we may be unable to tell the difference from the outside: just as the breathtaking android in Ex Machina convinced Caleb that she was in love with him, so too a clever AI may behave as if it is conscious, even if it is not.

In an extreme, horrifying case, humans upload their brains, or slowly replace the …
