Superintelligence: Paths, Dangers, Strategies - Nick Bostrom (2014)

PREFACE

Inside your cranium is the thing that does the reading. This thing, the human brain, has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that we owe our dominant position on the planet. Other animals have stronger muscles and sharper claws, but we have cleverer brains. Our modest advantage in general intelligence has led us to develop language, technology, and complex social organization. The advantage has compounded over time, as each generation has built on the achievements of its predecessors.

If some day we build machine brains that surpass human brains in general intelligence, then this new superintelligence could become very powerful. And, as the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species would depend on the actions of the machine superintelligence.

We do have one advantage: we get to build the stuff. In principle, we could build a kind of superintelligence that would protect human values. We would certainly have strong reason to do so. In practice, the control problem—the problem of how to control what the superintelligence would do—looks quite difficult. It also looks like we will only get one chance. Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.

In this book, I try to understand the challenge presented by the prospect of superintelligence, and how we might best respond. This is quite possibly the most important and most daunting challenge humanity has ever faced. And—whether we succeed or fail—it is probably the last challenge we will ever face.

It is no part of the argument in this book that we are on the threshold of a big breakthrough in artificial intelligence, or that we can predict with any precision when such a development might occur. It seems somewhat likely that it will happen sometime in this century, but we don’t know for sure. The first couple of chapters do discuss possible pathways and say something about the question of timing. The bulk of the book, however, is about what happens after. We study the kinetics of an intelligence explosion, the forms and powers of superintelligence, and the strategic choices available to a superintelligent agent that attains a decisive advantage. We then shift our focus to the control problem and ask what we could do to shape the initial conditions so as to achieve a survivable and beneficial outcome. Toward the end of the book, we zoom out and contemplate the larger picture that emerges from our investigations. Some suggestions are offered on what ought to be done now to increase our chances of avoiding an existential catastrophe later.

This has not been an easy book to write. I hope the path that has been cleared will enable other investigators to reach the new frontier more swiftly and conveniently, so that they can arrive there fresh and ready to join the work to further expand the reach of our comprehension. (And if the way that has been made is a little bumpy and bendy, I hope that reviewers, in judging the result, will not underestimate the hostility of the terrain ex ante!)

This has not been an easy book to write: I have tried to make it an easy book to read, but I don’t think I have quite succeeded. When writing, I had in mind as the target audience an earlier time-slice of myself, and I tried to produce a kind of book that I would have enjoyed reading. This could prove a narrow demographic. Nevertheless, I think that the content should be accessible to many people, if they put some thought into it and resist the temptation to instantaneously misunderstand each new idea by assimilating it with the most similar-sounding cliché available in their cultural larders. Non-technical readers should not be discouraged by the occasional bit of mathematics or specialized vocabulary, for it is always possible to glean the main point from the surrounding explanations. (Conversely, for those readers who want more of the nitty-gritty, there is quite a lot to be found among the endnotes.1)

Many of the points made in this book are probably wrong.2 It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions. I have gone to some length to indicate nuances and degrees of uncertainty throughout the text—encumbering it with an unsightly smudge of “possibly,” “might,” “may,” “could well,” “it seems,” “probably,” “very likely,” “almost certainly.” Each qualifier has been placed where it is carefully and deliberately. Yet these topical applications of epistemic modesty are not enough; they must be supplemented here by a systemic admission of uncertainty and fallibility. This is not false modesty: for while I believe that my book is likely to be seriously wrong and misleading, I think that the alternative views that have been presented in the literature are substantially worse—including the default view, or “null hypothesis,” according to which we can for the time being safely or reasonably ignore the prospect of superintelligence.