The singularity is a term coined by the science-fiction writer Vernor Vinge in 1993 to describe the moment when human beings cease to be the most intelligent creatures on the planet. The threat, in his view, came not from very clever dolphins but from hyper-intelligent machines. But would they really be a threat?
We have a foundation for almost everything these days, and now we have one to worry about exactly that question. It is the Cambridge Project for Existential Risks, set up by none other than Martin Rees, Britain's Astronomer Royal, and Huw Price, holder of the Bertrand Russell Chair in Philosophy at Cambridge University. The money comes from Jaan Tallinn, co-founder of Skype, the internet telephone company now owned by Microsoft.
It is quite likely that we will one day create a machine - a robot, if you like - that can "think" faster than we do. Moore's Law, the observation by Intel co-founder Gordon Moore that computing power doubles roughly every two years, still holds 47 years after he first stated it. Since the data-processing power of the human brain, although hard to measure, is obviously not doubling every two years, this is a race we are bound to lose in the end.
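To put rough numbers on that race (a back-of-the-envelope illustration only, taking the two-year doubling figure at face value and writing P_0 for computing power at the time of Moore's original observation):

\[
P(t) = P_0 \cdot 2^{t/2}
\]

Over the 47 years mentioned above, that compounds to a factor of about 2^{23.5}, roughly twelve million, while any capacity that is fixed, or growing only slowly, falls further behind with every doubling.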
But that is only the start of the argument. Why should we believe that creating a machine that can process more data than we can is a bigger deal than building a machine that can move faster than we do, or lift more than we can? The "singularity" hypothesis assumes (though it does not prove) that raw data-processing capacity is synonymous with self-conscious intelligence.