Is artificial intelligence likely to make humans extinct, or are we smart enough to control it?
Stephen Cass: Hi, I’m Stephen Cass for IEEE Spectrum’s “Techwise Conversations.” We’ve all seen science fiction movies like 2001: A Space Odyssey and The Matrix, in which the villain is an artificial intelligence program that has gone rogue. These killer-AI scenarios have provided entertainment at the cinema for decades, but some scientists are now warning that we need to take the AI threat very seriously.
There’s a new book out by the Oxford University philosopher Nick Bostrom that explores this threat in great detail. Bostrom directs Oxford’s Future of Humanity Institute, and he studies all the ways the human species could be wiped off the planet. In his book, called Superintelligence, he explains how a supersmart AI could arise and destroy us. Now, the book’s a bit dense—Bostrom is a philosopher, after all. So I’m here with IEEE Spectrum Associate Editor Eliza Strickland, who’s read Superintelligence and talked to Bostrom, and we’re going to figure out if there’s any hope for humanity.