Nick Bostrom says artificial intelligence poses an existential threat to humanity
Every morning Nick Bostrom wakes up, brushes his teeth, and gets to work thinking about how the human species may be wiped off the face of the earth. Bostrom, director of the Future of Humanity Institute at the University of Oxford, is an expert on existential threats to humanity. Of all the perils that make his list, though, he’s most concerned with the threat posed by artificial intelligence.
Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies (Oxford University Press), maps out scenarios in which humans create a “seed AI” that is smart enough to improve its own intelligence and skills, and that goes on to take over the world. Bostrom discusses what might motivate such a machine and explains why its goals could be incompatible with the continued existence of human beings. (In one example, a factory AI is given the task of maximizing the production of paper clips. Once it becomes superintelligent, it proceeds to convert all available resources, including human bodies, into paper clips.) Bostrom’s book also runs through potential control strategies for an AI—and the reasons they might not work.