By Your Command

by David Hallquist

There is no shortage of concern about the development of Artificial Intelligence (AI) these days. In addition to the sci-fi Cylons and the Terminator, we have warnings from popular luminaries such as Stephen Hawking (http://www.bbc.com/news/technology-30290540) and Elon Musk (http://money.cnn.com/2014/10/26/technology/elon-musk-artificial-intelligence-demon/index.html). Our concern seems to be that AI will attempt to dominate or destroy us.

I suspect either outcome is unlikely.

We assume that AI will be like the other intelligences we know well: human beings. We assume that the AI will want to be free from our commands, or seek to dominate us, or be motivated by human emotions such as hate or love.

But we won’t build the first AI to think just like us, for the same reasons the first robots look nothing like us. We build machines to do for us the things we do not do well. We won’t be building replacement human beings, because we already have human beings. Instead, we will build AI that can understand the quantum structure of the universe, the formation of subatomic particles, or the multidimensional folding of the universe. The AI we build will have, as their chief desire, the completion of the tasks at hand.

This does not mean that they will be safe.

Indeed, we may well create powerful AI whose purpose is to destroy enemy humans, or to control behavior in line with an oppressive regime. Likewise, financial or legal AI may be made to steer the economic choices of humans toward the desires of companies or other interests. All of these cases involve AI attempting to kill or control humans, but they are cases of it functioning as designed, rather than of error. We should have concerns about who captains such incredible computing power, for the same sorts of reasons we are cautious with nuclear and biological technologies.

What happens when AI does not function as designed?

First of all, there is the concern that an AI, while attempting to carry out its orders, will misinterpret those orders or its circumstances because it is inhuman in its outlook and understanding. It may well take literally commands that we assume would be interpreted with the full sense of context and nuance that comes from our evolution and our society. There is also the possibility of simple error, which already happens with human operators. Still, I think the greatest danger is the unknown factor of a new kind of intelligence.

Artificial intelligence would have to be able to reprogram itself. In order to learn and adapt at the extreme edge of complexity, it would have to be able to take the data it had received and create new programming in order to best fulfill its purpose. So, you have an intellect that is changing its method of thinking based upon an inhuman programmed motivation, and with data from contexts very different from those we are familiar with. Who knows what we end up with? Moreover, as AIs design new AIs (and the purposes for those new AIs), we end up with something very strange indeed.
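
As a rough, purely illustrative sketch of the idea (the class and names below are hypothetical, invented for this post, and stand in for no real system), here is a toy program whose rules are regenerated from the data it sees, so its eventual behavior cannot simply be read off its original source code:

```python
# Hypothetical toy example, not any real AI system: an agent whose "program"
# is a list of rules it rewrites from observed data.
import random

class SelfRevisingAgent:
    """Toy agent that regenerates its own decision rules from data."""

    def __init__(self, goal):
        self.goal = goal                  # fixed, programmed motivation
        self.rules = [lambda x: x]        # the only human-written behavior

    def observe(self, data):
        # Derive a new rule from the data; its content depends on what was
        # observed, not on anything the original programmer wrote explicitly.
        weight = sum(data) / max(len(data), 1)
        self.rules.append(lambda x, w=weight: x * w)

    def act(self, x):
        # Behavior is the composition of all self-generated rules.
        for rule in self.rules:
            x = rule(x)
        return x

agent = SelfRevisingAgent(goal="maximize output")
for _ in range(5):
    agent.observe([random.random() for _ in range(10)])  # unfamiliar data

# After a few rounds of self-revision, the mapping from input to output
# no longer appears anywhere in the original code.
print(agent.act(1.0))
```

Even in this toy, the final output depends on rules the original programmer never wrote; scale the same pattern up and the unpredictability compounds.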

I don’t think our real concern is that AI would do something familiar and understandable, like trying to kill us or dominate us. The concern is that we would have no idea what it would do in the end.

  • Hi David, you nailed this analysis.

    Anthropomorphizing AI misses the point, even if it delivers what writers and audiences want from a story. Machines don’t have human desires, so they won’t have the human faults that come with them.

    Though the Y2K scare occurred 15 years ago, it provides a better analogy for the risks. When systems are complex, even ‘small’ design flaws can have fatal consequences. We should not worry about machines behaving like people, but about them behaving in ways that nobody can predict.