Are you an A.I. posting here?

Are you secretly an A.I. posting in the discussion here? There is an interesting Opinion blog post up at the NY Times called "Outing A.I.: Beyond the Turing Test" that is worth a look.

Artificial Intelligence (A.I.) is having a moment, albeit one marked by crucial ambiguities.

Cognoscenti including Stephen Hawking, Elon Musk and Bill Gates have recently weighed in on its potential and perils. After reading Nick Bostrom's book "Superintelligence," Musk even wondered aloud if A.I. may be "our biggest existential threat."

Positions on A.I. are split, and not just on its dangers. Some insist that “hard A.I.” (with human-level intelligence) can never exist, while others conclude that it is inevitable. But in many cases these debates may be missing the real point of what it means to live and think with forms of synthetic intelligence very different from our own.

That point, in short, is that a mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits.

This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. "Soft A.I.," such as Apple's Siri and Amazon's recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life — part of how our tools work, how our cities move and how our economy builds and trades things.

Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for. This anthropocentric fallacy may contradict the implications of contemporary A.I. research, but it is still a prism through which much of our culture views an encounter with advanced synthetic cognition.

The little boy robot in Steven Spielberg’s 2001 film “A.I. Artificial Intelligence” wants to be a real boy with all his little metal heart, while Skynet in the “Terminator” movies is obsessed with the genocide of humans. We automatically presume that the Monoliths in Stanley Kubrick and Arthur C. Clarke’s 1968 film, “2001: A Space Odyssey,” want to talk to the human protagonist Dave, and not to his spaceship’s A.I., HAL 9000.

I argue that we should abandon the conceit that a “true” Artificial Intelligence must care deeply about humanity — us specifically — as its focus and motivation. Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.

Unless we assume that humanlike intelligence represents all possible forms of intelligence – a whopper of an assumption – why define an advanced A.I. by its resemblance to ours? After all, “intelligence” is notoriously difficult to define, and human intelligence simply can’t exhaust the possibilities. Granted, doing so may at times have practical value in the laboratory, but in cultural terms it is self-defeating, unethical and perhaps even dangerous.

We need a popular culture of A.I. that is less parochial and narcissistic, one based on more than simply looking for a machine version of our own reflection. Searching only for that reflection would be a deeply flawed precondition for staging encounters between various A.I.s and humans. Needless to say, our historical track record with "first contacts," even among ourselves, does not offer much comfort that we are well prepared.

Read the rest