Dynamic Business: Artificial Intelligence at Apec CEO Summit

OpenAI chief executive Sam Altman had this year’s Apec CEO Summit in thrall as he predicted AI would prove to be “the greatest leap forward of any of the big technological revolutions we’ve had so far” and promoted a “trust us” model in the face of an Executive Order from the US Government.
The existential threat AI poses to humanity is one of the reasons Musk spent part of his estimated US$240 billion fortune to launch a start-up called xAI.
In a later power play, Microsoft chief executive Satya Nadella first pocketed Altman to head AI research at Microsoft, then shoehorned him back to lead OpenAI under a new board of directors including Bret Taylor, Larry Summers, and Adam D’Angelo.
Earlier, Altman acknowledged the need for guard rails to protect humanity from the existential threat posed by the quantum leaps being taken by computers. “I really think the world is going to rise to the occasion and everybody wants to do the right thing.”
Altman said: “The real concern of the industry right now, to paraphrase, is how do we make sure we get thoughtful guardrails on the real frontier models without us all turning it into regulatory capture and stopping open-source models”.
“I think open source is awesome; not everybody agrees with that. I’m thrilled you all are doing it. I hope we see more of it.”
Where Altman departs from current thinking is with his view on just where regulation needs to step in.
“We don’t need heavy regulation here, probably not even for the next couple of generations. But at some point, when the model can do the equivalent output of a whole company and then a whole country and then the whole world, like maybe we do want some collective global supervision of that and some collective decision-making.
“But to land that message and not say it’s like, ‘Hey, we’re not telling you; you have to totally ignore present harms. We’re not saying you have to, like, you should go after small companies and open-source models.
“We are saying, you know, trust us; this is going to get really powerful and really scary, and you got to regulate it later. Very difficult needle to thread through all of that.”
The United States Government thinks differently. In mid-October, President Joe Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of AI.
The order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.
Critically, it requires that developers of the most powerful AI systems share their safety test results and other critical information with the Government. The Departments of Energy and Homeland Security are required to address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks.
At the summit, Google chief executive Sundar Pichai was bullish: “I think we have to work hard to harness it. But that is true of every other technological advance we’ve had before. It was true for the industrial revolution. I think we can learn from those things.”
Altman said: “I think we’re on a path to self-destruction as a species right now. We need new ideas; we need new technology if we want to flourish for tens and hundreds of thousands and millions of years more.
“But the technological change happening now is going to so change the constraints of the way we live and the sort of economy and social structures, and what’s possible.
“So, I’m super-excited. I can’t imagine anything more exciting to work on. And on a personal note, like four times now in the history of OpenAI, I’ve gotten to be in the room when we sort of push the veil of ignorance back and the frontier of discovery forward.
“And getting to do that is like the professional honour of a lifetime.”
Two other leading AI thinkers and developers also took part: Chris Cox, Meta’s chief product officer, and James Manyika, Google’s senior vice-president of research, technology and society.
Each had examples of where AI was contributing.
Cox talked about protein research at DeepMind, whose AlphaFold team was working to predict the protein structure of all 200 million proteins known to science and “then make that available to everybody”.
Manyika pointed to some pressing challenges like access to maternal health in low-income countries and communities.
“Think about climate change. You spend a lot of time thinking about the effects of climate change. Think about all the things we see in California wildfires.
“AI gives us the possibility of actually addressing and enhancing how we tackle all of this. This is what motivates me and excites me.”
The global titans of tech themselves had plenty of airtime, with Biden and China’s President Xi Jinping focusing on artificial intelligence and other defining technologies.
“The world is at an inflection point — this is not a hyperbole,” Biden told the summit.
For all the prior words about the US-China standoff, it was notable that Xi hosted some of those tech titans, including Apple’s Tim Cook and Tesla’s Musk, at a private dinner.
They aren’t about to leave China anytime soon.
For host city San Francisco, AI is a boon.
Mayor London Breed boasted that the city has more AI job openings than any other city in the country, with eight of the world’s top AI companies based in San Francisco.
“The conversations happening in this city and the conversations happening here today, these are the ideas that are going to transform our world in the decades to come.
“Future generations will look back on these discussions as the start of something entirely new, and it’s happening all right here in San Francisco. Economies, industry, and society change rapidly,” she said.
“Google was started out of a garage down Highway 101 Freeway. OpenAI was virtually unheard of last year at this time; now, ChatGPT has 100 million users.”
Out-takes from Condoleezza Rice
Former US secretary of state Dr Condoleezza Rice has some cautionary words on artificial intelligence.
“Everyone has learned to spell AI, but they don’t really know quite what to do about it,” said Rice, who is now director of the Hoover Institution at Stanford University.
“They have enormous benefit written all over them.
“They also have a lot of cautionary tales about how technology can be misused.”
Rice predicted the US would continue to decouple from China, particularly on technology where China has emerged as a major competitor and threat to American business. But not elsewhere.
She made no distinction between “decoupling” and “de-risking”; it’s the same thing, she said.
When it comes to US policies towards China, Rice was adamant the US does not want a “hot conflict” with China. “That means military-to-military talks and de-escalation of conflict.”
Her three key issues:
● “There is a technological bow wave that is coming at us with technologies that are so transformative and so powerful. So, I would hope, given the countries that are at these frontiers, like China and the US, that there can be some understanding of how we want to approach some of these transformative technologies. So, that’s one thing.”
● “A second thing is about the big challenges of climate and food security. There are countries that are in large part fast-growing. These are countries that in large part do have technological capabilities. One of the things coming out of the Global South is that the developing countries are the ones that are on the front line. How do you feel, if you are sitting in the Caribbean at this point, about the potential if we’re too late on climate change?”
“I know people like to go to Cop 28 and talk about 1.5 degrees and they like to talk about 2050. But it’s not helpful.”
● “What every country has is three Es: Economic growth, energy mix, and environmental sustainability. How are we going to harmonise those to get better outcomes on sustainability?”