Now, in an about-face, the tech industry is recoiling at an attempt to do exactly that in California. Because they are based in the state or do business in the state, many of the leading AI companies, including Google, Meta, Anthropic and OpenAI, would be bound by the proposed law, which could set a precedent for other states and national governments.
SB 1047 arrives at a precarious time for the San Francisco Bay Area, where much of the AI start-up community, as well as many of the industry’s biggest companies, is based. The bill, its harshest critics argue, could push AI development into other states, just as the region is rebounding from a pandemic-induced slump.
Some notable AI researchers have supported the bill, including Geoffrey Hinton, the former Google researcher, and Yoshua Bengio, a professor at the University of Montreal. The two have spent the past 18 months warning of the dangers of the technology. Other AI pioneers have come out against the bill, including Meta’s chief AI scientist, Yann LeCun, and former Google executives and Stanford professors Andrew Ng and Fei-Fei Li.
Newsom’s office declined to comment. Google, Meta and Anthropic also declined to comment. An OpenAI spokesperson said the bill could slow innovation by creating an uncertain legal landscape for building AI. The company said it had expressed its concerns in meetings with the office of California state senator Scott Wiener, who created the bill, and that serious AI risks were national security issues that should be regulated by the federal government, not by states.
The bill has its roots in a series of “AI salons” held in San Francisco last year, where Wiener joined young researchers, entrepreneurs, activists and amateur philosophers to discuss the future of artificial intelligence.
After sitting in on those discussions, Wiener said he created SB 1047 with input from the lobbying arm of the Center for AI Safety, a think tank with ties to effective altruism, a movement that has long been concerned with preventing existential threats from AI.
The bill would require safety tests for systems that have development costs exceeding $100 million and that are trained using a certain amount of raw computing power. It would also create a new state agency that defines and monitors those tests. Dan Hendrycks, a founder of the Center for AI Safety, said the bill would push the largest tech companies to identify and remove harmful behavior from their most expensive technologies.
“Complex systems will have unexpected behavior. You can count on it,” Hendrycks said in an interview with the New York Times. “The bill is a call to make sure that these systems don’t have hazards or, if the hazards do exist, that the systems have the appropriate safeguards.”
Today’s AI technologies can help spread disinformation online in the form of text, still images and videos. They are also beginning to take away some jobs. But studies by OpenAI and others over the past year showed that today’s AI technologies were not significantly more dangerous than search engines.
Still, some AI experts argue that serious dangers are on the horizon. In one example, Dario Amodei, CEO of the high-profile AI start-up Anthropic, told Congress last year that new AI technology could soon help unskilled people create large-scale biological attacks.
Wiener said he was trying to head off those scary scenarios.
“Historically, we have waited for bad things to happen and then wrung our hands and dealt with it later, sometimes when the horse was out of the barn and it was too late,” Wiener said in an interview. “So my view is, let’s try to, in a very light-touch way, get ahead of the risks and anticipate the risks.”
Google and Meta sent letters to Wiener expressing concerns about the bill. Anthropic, Amodei’s company, surprised many observers when it also opposed the bill in its current form and suggested changes that would allow companies to control their own safety testing. The company said the government should only become involved if real harms were caused.
Wiener said the opposition by tech giants sent mixed messages. The companies have already promised the Biden administration and global regulators that they would test their systems for safety.
“The CEOs of Meta, of Google, of OpenAI - all of them - have volunteered to do testing, and that’s what this bill asks them to do,” he said.
The bill’s critics say they are worried that the safety rules will add new liability to AI development, since companies will have to make a legal promise that their models are safe before they release them. They also argue the threat of legal action from the state attorney-general will discourage tech giants from sharing their technology’s underlying software code with other businesses and software developers - a practice known as open source.
Open source is common in the AI world. It allows small companies and individuals to build on the work of larger organizations, and critics of SB 1047 argue the bill could severely limit the options of start-ups that do not have the resources of tech giants like Google, Microsoft and Meta.
“It could stifle innovation,” said Lauren Wagner, an investor and researcher who has worked for both Google and Meta.
Open-source backers believe that sharing code allows engineers and researchers across the industry to quickly identify and fix problems and improve technologies.
Jeremy Howard, an entrepreneur and AI researcher who helped create the technologies that drive the leading AI systems, said the new California bill would ensure the most powerful AI technologies belonged solely to the biggest tech companies. And if these systems were to eventually exceed the power of the human brain, as some AI researchers believe they will, the bill would consolidate power in the hands of a few corporations.
“These organisations would have more power than any country - any entity of any kind. They would be in control of an artificial superintelligence,” Howard said. “That is a recipe for disaster.”
Others argue that if open source development is not allowed to flourish in the United States, it will flow to other countries, including China. The solution, they argue, is to regulate how people use AI rather than regulating the creation of the core technology.
“AI is like a kitchen knife, which can be used for good things, like cutting an onion, and bad things, like stabbing a person,” said Sebastian Thrun, an AI researcher and serial entrepreneur who founded the self-driving car project at Google.
“We shouldn’t try to put an off switch on a kitchen knife. We should try to prevent people from misusing it.”
Written by: Cade Metz and Cecilia Kang.
© 2024 THE NEW YORK TIMES