ANALYSIS: Sam Altman went before the US Congress last week and gave American lawmakers a clear message – regulate artificial intelligence.
“I think if this technology goes wrong, it can go quite wrong,” said the co-founder of OpenAI, which created ChatGPT, the artificial intelligence chatbot that has taken the world by storm.
Altman may have a hit product on his hands, but he is a contradictory figure in tech. He has turned OpenAI from an open-source, non-profit organisation created with the aim of countering the power of Google and other Big Tech players into a for-profit enterprise propped up by a US$10 billion investment from Microsoft.
ChatGPT stands to become one of the biggest cash cows since Google invented its ubiquitous search engine. Even Elon Musk, the billionaire who had a hand in OpenAI’s creation before parting ways with Altman, sees a problem with that shift.
“Let’s say you funded an organisation to save the Amazon Rainforest and instead they became a lumber company and chopped down the forest and sold it for money,” was how he put it last week.
Altman’s call for regulation is self-serving – the genie is out of the bottle with the generative AI models that can spit out university essays, computer code and impressive artworks. Regulation will give OpenAI, as the market leader, an advantage over the wave of competitors coming up behind it.
But we should still regulate AI. Unfortunately, the deeply polarised political landscape in the US means that won’t be a straightforward process there. Efforts to introduce stronger data access and privacy regulations in the US and anti-trust action against the likes of Meta and Google to reduce their market power are taking years to advance.
Across the Atlantic, the Europeans are moving at pace to address the threats posed by artificial intelligence, with the EU AI Act. The legislation was first drafted in 2021 but there’s now a greater impetus to get it passed by the European Parliament by the end of the year.
“With the arrival of ChatGPT, we believe we’ve got to accelerate it,” Nina Obermaier, the EU’s ambassador to New Zealand, said at an AI panel discussion in the Beehive last week.
The legislation proposes a four-tiered system for regulating AI products and services. Developers of AI considered to pose minimal risks, such as AI-powered video games or spam-filtering systems, will be subject only to a code of conduct.
AI that poses limited risks, such as chatbots and emotion-recognition systems, will need to be transparent and explainable to regulators, while high-risk AI used in areas such as employment, immigration and the justice system will be subject to a “conformity assessment”.
AI deemed to pose an unacceptable risk will be banned. Those uses include social scoring – the type of system China has set up to monitor and influence the behaviour of its citizens – and real-time collection of biometric information from facial recognition systems operating in public areas.
There will be a register for developers of AI products and ongoing safety assessments looking at an AI system’s data quality, traceability and documentation. Systems need to be trained with representative data sets to reduce the risk of bias. Companies breaching the act will be liable for fines of up to $50 million.
It sounds like a lot of red tape, but Obermaier is unfazed by it. “We are good at that, we do it all the time for all types of goods,” she said.
We can learn a lot from the AI Act, which will have a significant impact not just on the 450 million citizens of EU countries, but globally, too.
As they did when the EU tightened its data rules with the General Data Protection Regulation in 2018, Big Tech companies will likely adopt some of the AI Act’s policies on a global basis, given the borderless nature of the internet.
Altman says we are witnessing a “printing press moment” with artificial intelligence. We now have to ensure we use it as a force for good.