The statement comes at a time of growing concern about the potential harms of AI. Recent advancements in so-called large language models — the type of AI system used by ChatGPT and other chatbots — have raised fears that AI could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.
Some believe that, if nothing is done to slow its development, AI could become powerful enough to create societal-scale disruptions within a few years, although researchers sometimes stop short of explaining how that would happen.
These fears are shared by numerous industry leaders, putting them in the unusual position of arguing that a technology they are building — and, in many cases, are furiously racing to build faster than their competitors — poses grave risks and should be regulated more tightly.
This month, Altman, Hassabis and Amodei met with President Joe Biden and Vice President Kamala Harris to talk about AI regulation. In Senate testimony after the meeting, Altman warned that the risks of advanced AI systems were serious enough to warrant government intervention and called for regulation to address those harms.
Dan Hendrycks, executive director of the Center for AI Safety, said in an interview that the open letter represented a “coming-out” for some industry leaders who had expressed concerns — but only in private — about the risks of the technology they were developing.
“There’s a very common misconception, even in the AI community, that there only are a handful of doomers,” Hendrycks said. “But, in fact, many people privately would express concerns about these things.”
Some sceptics argue that AI technology is still too immature to pose an existential threat. When it comes to today’s AI systems, they worry more about short-term problems, such as biased and incorrect responses, than longer-term dangers.
But others have argued that AI is improving so rapidly that it has already surpassed human-level performance in some areas, and that it will soon surpass it in others. They say the technology has shown signs of advanced abilities and understanding, giving rise to fears that “artificial general intelligence,” or AGI, a type of AI that can match or exceed human-level performance at a wide variety of tasks, may not be far off.
In a blog post last week, Altman and two other OpenAI executives proposed several ways that powerful AI systems could be responsibly managed. They called for cooperation among the leading AI makers, more technical research into large language models and the formation of an international AI safety organisation, similar to the International Atomic Energy Agency, which works to prevent nuclear technology from being diverted to weapons.
Altman has also expressed support for rules that would require makers of large, cutting-edge AI models to register for a government-issued license.
In March, more than 1,000 technologists and researchers signed another open letter calling for a six-month pause on the development of the largest AI models, citing concerns about “an out-of-control race to develop and deploy ever more powerful digital minds.”
That letter, which was organised by another AI-focused nonprofit, the Future of Life Institute, was signed by Elon Musk and other well-known tech leaders, but it did not have many signatures from the leading AI labs.
The brevity of the new statement from the Center for AI Safety — just 22 words — was meant to unite AI experts who might disagree about the nature of specific risks or steps to prevent those risks from occurring but who share general concerns about powerful AI systems, Hendrycks said.
“We didn’t want to push for a very large menu of 30 potential interventions,” he said. “When that happens, it dilutes the message.”
The statement was initially shared with a few high-profile AI experts, including Hinton, who quit his job at Google this month so that he could speak more freely, he said, about the potential harms of AI. From there, it made its way to several of the major AI labs, where some employees then signed on.
The urgency of AI leaders’ warnings has increased as millions of people have turned to AI chatbots for entertainment, companionship and increased productivity, and as the underlying technology improves at a rapid clip.
“I think if this technology goes wrong, it can go quite wrong,” Altman told the Senate subcommittee. “We want to work with the government to prevent that from happening.”
This article originally appeared in The New York Times.
Written by: Kevin Roose
©2023 THE NEW YORK TIMES