Nick Bostrom, director of the Future of Humanity Institute at Oxford University, lays out the best predictions of the artificial intelligence (AI) research community in his new book, Superintelligence: Paths, Dangers, Strategies. The book combines the results of four surveys of AI researchers, including a poll of the most-cited scientists in the field, on when human-level machine intelligence will arrive.
Human-level machine intelligence is defined here as "one that can carry out most human professions at least as well as a typical human".
Maybe we shouldn't be so surprised about these predictions. Robots and algorithms are already squeezing the edges of our global workforce. Jobs with routine tasks are getting digitised.
Replicating routine tasks isn't the kind of intelligence Bostrom is interested in. He's talking about an intelligence with intuition and logic, one that can learn and sense the world around it.
The most interesting thing about reaching human-level intelligence isn't the achievement itself, says Bostrom; it's what comes next.
Computing power is growing at an exponential rate. In many areas - chess, for example - machine skill is already superhuman. In others - reasoning, emotional intelligence - there's still a long way to go.
Whether human-level general intelligence is reached in 15 years or 150, it's likely to be a little-observed marker on the road towards superintelligence that greatly exceeds the cognitive performance of humans.
Inventor and Tesla chief executive Elon Musk warns that superintelligent machines are possibly the greatest existential threat to humanity. He says the investments he's made in artificial-intelligence companies are primarily to keep an eye on where the field is headed.
"Hope we're not just the biological boot loader for digital superintelligence," he tweeted in August. "Unfortunately, that is increasingly probable."
There are plenty of caveats to consider before we hand the keys to our earthly kingdom over to our robot offspring.
First, humans have a terrible track record of predicting the future. Second, people are notoriously optimistic about the prospects of their own industries. Third, it's not a given that technology will continue to advance along its current trajectory, or even with its current aims.
Still, the brightest minds devoted to this evolving technology are predicting the end of human intellectual supremacy by midcentury.
The direction of technology may be inevitable, but the care with which we approach it is not.
"Success in creating AI would be the biggest event in human history," wrote theoretical physicist Stephen Hawking, in an Independent column in May. "It might also be the last."
- Bloomberg