Book, Print in English

Superintelligence : paths, dangers, strategies

Nick Bostrom, Director, Future of Humanity Institute, Professor, Faculty of Philosophy & Oxford Martin School, University of Oxford.
  • Oxford, United Kingdom : Oxford University Press, 2014.
  • First edition.
  • xvi, 328 pages : illustrations, graphs, tables ; 25 cm
Summary
  • The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.
Contents
  • 1. Past developments and present capabilities
  • Growth modes and big history
  • Great expectations
  • Seasons of hope and despair
  • State of the art
  • Opinions about the future of machine intelligence
  • 2. Paths to superintelligence
  • Artificial intelligence
  • Whole brain emulation
  • Biological cognition
  • Brain-computer interfaces
  • Networks and organizations
  • Summary
  • 3. Forms of superintelligence
  • Speed superintelligence
  • Collective superintelligence
  • Quality superintelligence
  • Direct and indirect reach
  • Sources of advantage for digital intelligence
  • 4. Kinetics of an intelligence explosion
  • Timing and speed of the takeoff
  • Recalcitrance
  • Non-machine intelligence paths
  • Emulation and AI paths
  • Optimization power and explosivity
  • 5. Decisive strategic advantage
  • Will the frontrunner get a decisive strategic advantage?
  • How large will the successful project be?
  • Monitoring
  • International collaboration
  • From decisive strategic advantage to singleton
  • 6. Cognitive superpowers
  • Functionalities and superpowers
  • An AI takeover scenario
  • Power over nature and agents
  • 7. The superintelligent will
  • The relation between intelligence and motivation
  • Instrumental convergence
  • Self-preservation
  • Goal-content integrity
  • Cognitive enhancement
  • Technological perfection
  • Resource acquisition
  • 8. Is the default outcome doom?
  • Existential catastrophe as the default outcome of an intelligence explosion?
  • The treacherous turn
  • Malignant failure modes
  • Perverse instantiation
  • Infrastructure profusion
  • Mind crime
  • 9. The control problem
  • Two agency problems
  • Capability control methods
  • Boxing methods
  • Incentive methods
  • Stunting
  • Tripwires
  • Motivation selection methods
  • Direct specification
  • Domesticity
  • Indirect normativity
  • Augmentation
  • Synopsis
  • 10. Oracles, genies, sovereigns, tools
  • Oracles
  • Genies and sovereigns
  • Tool-AIs
  • Comparison
  • 11. Multipolar scenarios
  • Of horses and men
  • Wages and unemployment
  • Capital and welfare
  • The Malthusian principle in a historical perspective
  • Population growth and investment
  • Life in an algorithmic economy
  • Voluntary slavery, casual death
  • Would maximally efficient work be fun?
  • Unconscious outsourcers?
  • Evolution is not necessarily up
  • Post-transition formation of a singleton?
  • A second transition
  • Superorganisms and scale economies
  • Unification by treaty
  • 12. Acquiring values
  • The value-loading problem
  • Evolutionary selection
  • Reinforcement learning
  • Associative value accretion
  • Motivational scaffolding
  • Value learning
  • Emulation modulation
  • Institution design
  • Synopsis
  • 13. Choosing the criteria for choosing
  • The need for indirect normativity
  • Coherent extrapolated volition
  • Some explications
  • Rationales for CEV
  • Further remarks
  • Morality models
  • Do What I Mean
  • Component list
  • Goal content
  • Decision theory
  • Epistemology
  • Ratification
  • Getting close enough
  • 14. The strategic picture
  • Science and technology strategy
  • Differential technological development
  • Preferred order of arrival
  • Rates of change and cognitive enhancement
  • Technology couplings
  • Second-guessing
  • Pathways and enablers
  • Effects of hardware progress
  • Should whole brain emulation research be promoted?
  • The person-affecting perspective favors speed
  • Collaboration
  • The race dynamic and its perils
  • On the benefits of collaboration
  • Working together
  • 15. Crunch time
  • Philosophy with a deadline
  • What is to be done?
  • Seeking the strategic light
  • Building good capacity
  • Particular measures
  • Will the best in human nature please stand up
Other information
  • Includes bibliographical references (pages 305-324) and index.
ISBN
  • 9780199678112
  • 0199678111
Identifying numbers
  • LCCN: 2013955152
  • OCLC: 857786110