

Jan 3, 2018 | 19:21 GMT


The Promise and the Threat of AI

Jay Ogilvy
Board of Contributors
The true danger lurking in advanced artificial intelligence may not be that computers' cognitive abilities will eclipse those of humankind but that humankind will forget what sets our cognition apart from simple computing.
Contributor Perspectives offer insight, analysis and commentary from Stratfor’s Board of Contributors and guest contributors who are distinguished leaders in their fields of expertise.

High-level problem-solving isn't just for humans anymore. As computers gain speed and accomplish dazzling feats like defeating the world's masters at games of chess and Go, some of the planet's brightest minds — Elon Musk and Stephen Hawking among them — warn that we human beings may find ourselves obsolete. Further, a kind of artificial intelligence arms race may come to dominate geopolitics, rewarding the owners of the best AI mining the biggest pools of "big data" — most likely, as a result of its sheer size, China.

Or consider another dire consequence: As AI-driven robots replace more and more workers, from truck drivers to insurance adjusters, loan officers and any number of other white-collar occupations, unemployment will rise. How will economies adjust? Should we imagine a utopia filled with gratifying leisure activities or a feudal dystopia in which a wealthy elite hold the few precious jobs unsuitable for computers?

The stakes are high. But the terms of the debate thus far are confused. The recent advances in AI are impressive, and the future prospects for the technology are truly amazing. Even so, between artificial intelligence and truly human intelligence lie a host of differences that much of the literature on the subject has failed to adequately address. In this column I'll try to sort fact from fiction.

Thinking About Thinking Machines

In a rich anthology of short essays, What to Think About Machines That Think, William Poundstone, author of Are You Smart Enough to Work at Google?, begins with a quote from the computer science pioneer Edsger Dijkstra: "The question of whether machines can think is about as relevant as the question of whether submarines can swim." Both a whale and a submarine make forward progress through the water, but they do it in fundamentally different ways. Likewise, both thinking and computation can come up with similar-looking results, but the way they do it is fundamentally different.

On the other hand, Freeman Dyson, the acclaimed physicist at Princeton's Institute for Advanced Study, dismisses the question. His is the shortest of all the essays in the anthology, edited by John Brockman. It reads in full: "I do not believe that machines that think exist, or that they are likely to exist in the foreseeable future. If I am wrong, as I often am, any thoughts I might have about the question are irrelevant. If I am right, then the whole question is irrelevant."

Before being quite so dismissive, though, let's take a deeper look at what the alarmists are saying. By the end of his short essay, after all, Poundstone comes around. Having opened with Dijkstra's apt aphorism about submarines that don't swim, Poundstone closes on a cautionary note: "I think the notion of Frankensteinian AI — AI that turns on its creators — is worth taking seriously."

The Dangers of Ultraintelligence

The case for concern is nothing new. All the way back in 1965, British mathematician Irving Good wrote:

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."

The last provision is key. While the sorcerer's apprentice may not be as malevolent as Frankenstein's monster, even the best-intentioned "apprentice" can get out of hand. Hence the increasing attention to two different issues in debates over AI. First there is the question of how soon, if ever, machines will achieve or surpass human intelligence. Second is the debate over whether, if they do, they will be malignant or benign.

In his book Life 3.0: Being Human in the Age of Artificial Intelligence, Max Tegmark distinguishes five different stances toward AI based on these two dimensions. The categories come in handy for grouping the many contributors to the Brockman volume, as well as the many participants Tegmark pulled together for a conference on AI three years ago:

  1. Those who believe that AI will exceed human intelligence "in a few years" — "virtually nobody" these days, according to Tegmark.
  2. The so-called digital utopians, who hold that AI will surpass human intelligence in 50-100 years and that the development will be a boon for humanity. Kevin Kelly belongs in this category, along with The Singularity Is Near author Ray Kurzweil.
  3. People who think that, on the contrary, the achievement of superior intelligence by machines will be a bad thing, whenever it happens. Tegmark calls adherents to this idea "luddites." The contingent includes Martin Rees, the Royal Society's former president, and American computer scientist Bill Joy, who wrote a famous cover story for Wired titled "Why the Future Doesn't Need Us."
  4. A group between the luddites and the utopians, "the beneficial AI movement," which contends that AI is likely to arrive sometime in the next hundred years, and that we'd better get to work on making sure that its effects are benign, not malignant. Oxford philosopher Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, is a prominent voice in this camp, as are most of the people who took part in the January 2015 conference convened largely to launch the beneficial AI movement.
  5. Finally there are the "techno-skeptics," as Tegmark calls them, who believe AI will never rival human cognition. Along with Dyson, Jaron Lanier — a pioneer of virtual reality — belongs in this group, as does neuroanthropologist Terrence Deacon.

If you accept the taxonomy, then the main questions about AI are how soon it will overtake human intelligence, whether that event will have beneficial or deleterious effects, and what we should do now to prepare for those effects. Sounds reasonable enough.

Mistaking Computation for Cognition

But there is a problem with Tegmark's taxonomy. It assumes that AI is trying to overtake human intelligence on the same racetrack, as it were. As with the whale and the submarine, however, computers and human minds achieve similar ends through vastly different means, though at first glance they may appear to be doing the same thing — calculating.

Computers are built to be precise. Enter a given input, and you get the same output every time — a behaviorist's dream. Brains, on the other hand, are messy, with lots of noise. Where computers are precise and deterministic, brains are stochastic. Where computers work by algorithmic sequences that simulate deterministic patterns of mechanistic cause and effect, minds aim at meanings. Where computers run on hardware using software that is unambiguous — one-to-one mappings called "code" — brains run on wetware that is not just a circuit diagram of neurons but also a bath of blood and hormones and neurotransmitters.

To be fair to those who buy into the computational metaphor for mind — and all of the digital utopians do — AI might easily be confused with human intelligence because, however much we may know about AI, we know shockingly little about how the brain works, and next to nothing about how subjective consciousness emerges from that bloody mess. But we do know that the brain is not a hard-wired circuit board.

Techno-skeptic Deacon deconstructs Silicon Valley's adoption of the computational metaphor for mind in his book Incomplete Nature:

"Like behaviorism before it, the strict adherence to a mechanistic analogy that was required to avoid blatant homuncular assumptions came at the cost of leaving no space for explaining the experience of consciousness or the sense of mental agency ... So, like a secret reincarnation of behaviorism, cognitive scientists found themselves seriously discussing the likelihood that such mental experiences do not actually contribute any explanatory power beyond the immediate material activities of neurons."

Deacon uses the mythical figure of the golem to capture the difference between computers and human intelligence. In Jewish folklore of the late Middle Ages, golems were imagined as clay figures formed to look like a man but to have no inner life. A powerful rabbi then brought them to life using magical incantations.

"Golems can thus be seen as the very real consequence of investing relentless logic with animate power. ... In their design as well as their role as unerringly literal slaves, digital computers are the epitome of a creation that embodies truth maintenance made animate. Like the golems of mythology, they are selfless servants, but they are also mindless. Because of this, they share the golem's lack of discernment and potential for disaster."

So even if we agree with Deacon that computers and brains are doing very different things when they calculate, AI may still carry the "potential for disaster." Elon Musk and Stephen Hawking aren't crazy. It's just that in articulating the nature of the potential disaster, we should constantly keep in mind the artificiality of artificial intelligence.

In the eyes of Adriana Braga and Robert Logan, authors of a recently published paper, "The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence," the danger of AI has less to do with some potentially ill-intentioned superintelligence overtaking us and more to do with our misconstruing the nature of our own intelligence. They explain:

"What motivated us to write this essay is our fear that some who argue for the technological singularity might in fact convince many others to lower the threshold as to what constitutes human intelligence so that it meets the level of machine intelligence, and thus devalue those aspects of human intelligence that we (the authors) hold dear such as imagination, aesthetics, altruism, creativity, and wisdom."

Virtual reality pioneer Lanier, who is deeply suspicious of the computational metaphor for mind, makes a similar point in his important book, You Are Not a Gadget: "People can make themselves believe in all sorts of fictitious beings, but when those beings are perceived as inhabiting the software tools through which we live our lives, we have to change ourselves in unfortunate ways in order to support our fantasies. We make ourselves dull."

In our headlong quest for bigger, better, faster artificial intelligence, we run the risk of rendering our own intelligence artificial.

Jay Ogilvy joined Stratfor's board of contributors in January 2015. In 1979, he left a post as a professor of philosophy at Yale to join SRI, the former Stanford Research Institute, as director of research. Dr. Ogilvy co-founded the Global Business Network of scenario planners in 1987. He is the former dean and chief academic officer of San Francisco’s Presidio Graduate School. Dr. Ogilvy has published nine books, including Many Dimensional Man, Creating Better Futures and Living Without a Goal.
Copyright © Stratfor Enterprises, LLC. All rights reserved.

Stratfor Worldview


To empower members to confidently understand and navigate a continuously changing and complex global environment.

Google Play