And although Musk is pushing back against OpenAI and plans to compete with it, he helped found the AI lab in 2015 as a nonprofit. He has since said he has grown disillusioned with OpenAI because it no longer operates as a nonprofit and is building technology that, in his view, takes sides in political and social debates.
What Musk’s AI approach boils down to is doing it himself. The 51-year-old billionaire, who also runs the electric carmaker Tesla and the rocket company SpaceX, has long seen his own AI efforts as offering better, safer alternatives to those of his competitors, according to people who have discussed these matters with him.
“He believes that AI is going to be a major turning point and that if it is poorly managed, it is going to be disastrous,” said Anthony Aguirre, a theoretical cosmologist at the University of California, Santa Cruz, and a founder of the Future of Life Institute, the organization behind the open letter. “Like many others, he wonders: What are we going to do about that?”
Musk and Ba, who is known for creating a popular algorithm used to train AI systems, did not respond to requests for comment. Their discussions are continuing, the three people familiar with the matter said.
A spokesperson for OpenAI, Hannah Wong, said that although it now generated profits for investors, it was still governed by a nonprofit and its profits were capped.
Musk’s roots in AI date to 2011. At the time, he was an early investor in DeepMind, a London startup that set out in 2010 to build artificial general intelligence, or AGI, a machine that can do anything the human brain can. Less than four years later, Google acquired the 50-person company for US$650 million.
At a 2014 aerospace event at the Massachusetts Institute of Technology, Musk indicated that he was hesitant to build AI himself.
“I think we need to be very careful about artificial intelligence,” he said while answering audience questions. “With artificial intelligence, we are summoning the demon.”
That winter, the Future of Life Institute, which explores existential risks to humanity, organized a private conference in Puerto Rico focused on the future of AI. Musk gave a speech there, arguing that AI could cross into dangerous territory without anyone realizing it, and announced that he would help fund the institute. He gave US$10 million.
In the summer of 2015, Musk met privately with several AI researchers and entrepreneurs during a dinner at the Rosewood, a hotel in Menlo Park, California, famous for Silicon Valley deal-making. By the end of that year, he and several others who attended the dinner — including Sam Altman, then president of the startup incubator Y Combinator, and Ilya Sutskever, a top AI researcher — had founded OpenAI.
OpenAI was set up as a nonprofit, with Musk and others pledging US$1 billion in donations. The lab vowed to “open source” all its research, meaning it would share its underlying software code with the world. Musk and Altman argued that the threat of harmful AI would be mitigated if everyone, rather than just tech giants like Google and Facebook, had access to the technology.
But as OpenAI began building the technology that would result in ChatGPT, many at the lab realized that openly sharing its software could be dangerous. Using AI, individuals and organizations can potentially generate and distribute false information more quickly and efficiently than they otherwise could. Many OpenAI employees said the lab should keep some of its ideas and code from the public.
In 2018, Musk resigned from OpenAI’s board, partly because of his growing conflict of interest with the organization, two people familiar with the matter said. By then, he was building his own AI project at Tesla — Autopilot, the driver-assistance technology that automatically steers, accelerates and brakes cars on highways. To do so, he poached a key employee from OpenAI.
In a recent interview, Altman declined to discuss Musk specifically, but said Musk’s breakup with OpenAI was one of many splits at the company over the years.
“There is disagreement, mistrust, egos,” Altman said. “The closer people are to being pointed in the same direction, the more contentious the disagreements are. You see this in sects and religious orders. There are bitter fights between the closest people.”
After ChatGPT debuted in November, Musk grew increasingly critical of OpenAI. “We don’t want this to be sort of a profit-maximizing demon from hell, you know,” he said during an interview last week with Tucker Carlson, the former Fox News host.
Musk renewed his complaints that AI was dangerous and accelerated his own efforts to build it. At a Tesla investor event last month, he called for regulators to protect society from AI, even though his car company has used AI systems to push the boundaries of self-driving technologies that have been involved in fatal crashes.
That same day, Musk suggested in a tweet that Twitter would use its own data to train technology along the lines of ChatGPT. Twitter has hired two researchers from DeepMind, two people familiar with the hiring said. The Information and Insider earlier reported details of the hires and Twitter’s AI efforts.
During the interview last week with Carlson, Musk said OpenAI was no longer serving as a check on the power of tech giants. He wanted to build TruthGPT, he said, “a maximum-truth-seeking AI that tries to understand the nature of the universe.”
Last month, Musk registered X.AI. The startup is incorporated in Nevada, according to the registration documents, which also list the company’s officers as Musk and his financial manager, Jared Birchall. The documents were earlier reported by The Wall Street Journal.
Experts who have discussed AI with Musk believe he is sincere in his worries about the technology’s dangers, even as he builds it himself. Others said his stance was shaped by different motives, most notably his efforts to promote and profit from his companies.
“He says the robots are going to kill us?” said Ryan Calo, a professor at the University of Washington School of Law, who has attended AI events alongside Musk. “A car that his company made has already killed somebody.”
Written by: Cade Metz, Ryan Mac and Kate Conger
© 2023 THE NEW YORK TIMES