For decades, Silicon Valley anticipated the moment when a new technology would come along and change everything. It would unite human and machine, probably for the better but possibly for the worse, and split history into before and after.
The name for this milestone: the Singularity.
It could happen in several ways. One possibility is that people would add a computer’s processing power to their own innate intelligence, becoming supercharged versions of themselves. Or maybe computers would grow so complex that they could truly think, creating a global brain.
In either case, the resulting changes would be drastic, exponential and irreversible. A self-aware superhuman machine could design its own improvements faster than any group of scientists, setting off an explosion in intelligence. Centuries of progress could happen in years or even months. The Singularity is a slingshot into the future.
Artificial intelligence is roiling tech, business and politics like nothing in recent memory. Listen to the extravagant claims and wild assertions issuing from Silicon Valley, and it seems the long-promised virtual paradise is finally at hand.
Sundar Pichai, Google’s usually low-key CEO, calls artificial intelligence “more profound than fire or electricity or anything we have done in the past”. Reid Hoffman, a billionaire investor, says, “The power to make positive change in the world is about to get the biggest boost it’s ever had.” And Microsoft’s co-founder Bill Gates proclaims AI “will change the way people work, learn, travel, get healthcare and communicate with each other”.
AI is Silicon Valley’s ultimate new product rollout: transcendence on demand.
But there’s a dark twist. It’s as if tech companies introduced self-driving cars with the caveat that they could blow up before you got to Walmart.
“The advent of artificial general intelligence is called the Singularity because it is so hard to predict what will happen after that,” Elon Musk, who runs Twitter and Tesla, told CNBC last month. He said he thought “an age of abundance” would result but there was “some chance” that it “destroys humanity”.
The biggest cheerleader for AI in the tech community is Sam Altman, CEO of OpenAI, the startup that prompted the current frenzy with its ChatGPT chatbot. He says AI will be “the greatest force for economic empowerment and a lot of people getting rich we have ever seen”.
But he also says Musk, a critic of AI who also started a company to develop brain-computer interfaces, might be right.
Altman signed an open letter last month released by the Center for AI Safety, a nonprofit organisation, saying that “mitigating the risk of extinction from AI should be a global priority” that is right up there with “pandemics and nuclear war”. Other signatories included Altman’s colleagues from OpenAI and computer scientists from Microsoft and Google.
Apocalypse is familiar, even beloved territory for Silicon Valley. A few years ago, it seemed every tech executive had a fully stocked apocalypse bunker somewhere remote but reachable. In 2016, Altman said he was amassing “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to”. The coronavirus pandemic made tech preppers feel vindicated, for a while.
Now, they are prepping for the Singularity.
“They like to think they’re sensible people making sage comments, but they sound more like monks in the year 1000 talking about the Rapture,” said Baldur Bjarnason, author of The Intelligence Illusion, a critical examination of AI. “It’s a bit frightening,” he said.
The roots of transcendence
The Singularity’s intellectual roots go back to John von Neumann, a pioneering computer scientist who in the 1950s talked about how “the ever-accelerating progress of technology” would yield “some essential singularity in the history of the race”.
Irving John Good, a British mathematician who helped decode the German Enigma device at Bletchley Park during World War II, was also an influential proponent. “The survival of man depends on the early construction of an ultra-intelligent machine,” he wrote in 1964. Director Stanley Kubrick consulted Good on HAL, the benign-turned-malevolent computer in 2001: A Space Odyssey — an early example of the porous borders between computer science and science fiction.
Hans Moravec, an adjunct professor at the Robotics Institute at Carnegie Mellon University, thought AI would be a boon not just for the living: the dead, too, would be reclaimed in the Singularity. “We would have the opportunity to re-create the past and to interact with it in a real and direct fashion,” he wrote in Mind Children: The Future of Robot and Human Intelligence.
In recent years, entrepreneur and inventor Ray Kurzweil has been the biggest champion of the Singularity. Kurzweil wrote The Age of Intelligent Machines in 1990 and The Singularity Is Near in 2005, and is now writing The Singularity Is Nearer.
By the end of the decade, he expects computers to pass the Turing Test and be indistinguishable from humans. Fifteen years after that, he calculates, the true transcendence will come: the moment when “computation will be part of ourselves, and we will increase our intelligence a millionfold”.
By then, Kurzweil will be 97. With the help of vitamins and supplements, he plans to live to see it.
For some critics of the Singularity, it is an intellectually dubious attempt to replicate the belief system of organised religion in the kingdom of software.
“They all want eternal life without the inconvenience of having to believe in God,” said Rodney Brooks, the former director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.
The innovation that feeds today’s Singularity debate is the large language model, the type of AI system that powers chatbots. Start a conversation with one of these LLMs and it can spit back answers speedily, coherently and often with a fair degree of illumination.
“When you ask a question, these models interpret what it means, determine what its response should mean, then translate that back into words — if that’s not a definition of general intelligence, what is?” said Jerry Kaplan, a longtime AI entrepreneur and the author of Artificial Intelligence: What Everyone Needs to Know.
Kaplan said he was sceptical about such highly heralded wonders as self-driving cars and cryptocurrency. He approached the latest AI boom with the same doubts but said he had been won over.
“If this isn’t ‘the Singularity,’ it’s certainly a singularity: a transformative technological step that is going to broadly accelerate a whole bunch of art, science and human knowledge — and create some problems,” he said.
Critics counter that even the impressive results of LLMs are a far cry from the enormous, global intelligence long promised by the Singularity. Part of the problem in separating hype from reality is that the engines driving this technology are increasingly hidden. OpenAI, which began as a nonprofit using open-source code, is now a for-profit venture that critics say is effectively a black box. Google and Microsoft also offer limited visibility.
Much of the AI research is being done by the companies with much to gain from the results. Researchers at Microsoft, which invested US$13 billion (NZ$21.2b) in OpenAI, published a paper in April concluding that a preliminary version of the latest OpenAI model “exhibits many traits of intelligence” including “abstraction, comprehension, vision, coding” and “understanding of human motives and emotions”.
Rylan Schaeffer, a doctoral student in computer science at Stanford University, said some AI researchers had painted an inaccurate picture of how these large language models exhibit “emergent abilities” — unexplained capabilities that were not evident in smaller versions.
Along with two Stanford colleagues, Brando Miranda and Sanmi Koyejo, Schaeffer examined the question in a research paper published last month and concluded that emergent properties were “a mirage” caused by errors in measurement. In effect, researchers are seeing what they want to see.
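The intuition behind that argument can be illustrated with a toy calculation (a hypothetical sketch, not taken from the Stanford paper itself): suppose a model's per-token accuracy improves smoothly as it scales, but researchers score it with an all-or-nothing metric such as exact match on a multi-token answer. Because every token must be correct at once, the smooth gains compound into what looks like an abrupt threshold.

```python
# Illustrative sketch: a discontinuous metric can make smooth improvement
# look like a sudden "emergent" jump. The numbers below are invented.

def per_token_accuracy(scale):
    # Hypothetical smooth improvement: 50% at scale 1, approaching 100%.
    return 1 - 0.5 / scale

def exact_match_accuracy(scale, answer_length=10):
    # Exact match requires every one of answer_length tokens to be
    # correct, so per-token accuracy is raised to that power.
    return per_token_accuracy(scale) ** answer_length

for scale in [1, 2, 4, 8, 16, 32]:
    print(f"scale {scale:>2}: per-token {per_token_accuracy(scale):.2f}, "
          f"exact-match {exact_match_accuracy(scale):.3f}")
```

Under the smooth per-token metric the model improves steadily at every scale; under exact match it scores near zero until mid-range scales, then appears to "switch on" — the kind of mirage the researchers describe.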
Eternal life, eternal profits
In Washington, London and Brussels, lawmakers are stirring to the opportunities and problems of AI and starting to talk about regulation. Altman is on a road show, seeking to deflect early criticism and to promote OpenAI as the shepherd of the Singularity.
This includes an openness to regulation, but exactly what that would look like is fuzzy. Silicon Valley has generally held the view that government is too slow and stupid to oversee fast-breaking technological developments.
“There’s no one in the government who can get it right,” Eric Schmidt, Google’s former CEO, said in an interview on Meet the Press last month, arguing the case for AI self-regulation. “But the industry can roughly get it right.”
AI, just like the Singularity, is already being described as irreversible. “Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work,” Altman and some of his colleagues wrote last month. If Silicon Valley doesn’t make it, they added, others will.
Less discussed are the vast profits to be made from uploading the world. Despite all the talk of AI being an unlimited wealth-generating machine, the people getting rich are pretty much the ones who are already rich.
Microsoft has seen its market capitalisation soar by half a trillion dollars this year. Nvidia, a maker of chips that run AI systems, recently became one of the most valuable public US companies when it said demand for those chips had skyrocketed.
“AI is the tech the world has always wanted,” Altman tweeted.
It certainly is the tech that the tech world has always wanted, arriving at the absolute best possible time. Last year, Silicon Valley was reeling from layoffs and rising interest rates. Crypto, the previous boom, was enmeshed in fraud and disappointment.
Follow the money, said Charles Stross, a co-author of the novel The Rapture of the Nerds, a comedic take on the Singularity, as well as the author of Accelerando, a more serious attempt to describe what life could soon be like.
“The real promise here is that corporations will be able to replace many of their flawed, expensive, slow, human information-processing sub-units with bits of software, thereby speeding things up and reducing their overheads,” he said.
The Singularity has long been imagined as a cosmic event, literally mind-blowing. And it still may be.
But it might manifest first and foremost — thanks, in part, to the bottom-line obsession of today’s Silicon Valley — as a tool to slash corporate America’s head count. When you’re sprinting to add trillions to your market cap, heaven can wait.
This article originally appeared in The New York Times.
Written by: David Streitfeld
Images by: Zach Meyer and Haiyun Jiang
©2023 THE NEW YORK TIMES