There have been a number of recent inflection points in the information age when a mere product has become a movement: the debuts of the iPhone and Amazon’s Kindle, and the rise of Facebook and Netflix, are among them.
But the debut of ChatGPT in November 2022 was something else entirely. Within weeks, the generative artificial intelligence application from San Francisco-based startup OpenAI became the most rapidly adopted web application in history, used by hundreds of millions of people worldwide. It spawned a headlong race to exploit a field of AI – large language models (LLMs) trained on masses of digital information – that only emerged in 2018.
AI systems have been in use for decades, deciding the order of posts in your Facebook newsfeed and enabling Smartgate machines in airports to match your face with your passport photo. But the versatility of ChatGPT, which was able to assemble coherent responses to a wide range of questions, has meant that everyone from students to business leaders and politicians has finally been able to grasp AI’s power.
Hollywood actors and scriptwriters took to the picket lines in protest over its use to automatically generate new works based on their likenesses, words and ideas. University and high school teachers had to wade through a deluge of AI-generated essays. Software developers gained access to tools that could generate swathes of computer code in seconds.
The flurry of activity culminated in late November, almost on the anniversary of ChatGPT’s debut, when the chief executive of OpenAI, Sam Altman, was ousted from the wildly successful company he co-founded with Tesla and SpaceX chief executive Elon Musk in 2015. The move backfired badly. Within a few days, Altman was back at OpenAI and all but one of its board members had resigned to be replaced by new, more business-friendly directors.
But OpenAI’s boardroom ructions can’t be easily forgotten as generative AI development continues apace. The nonprofit remit of the world’s hottest AI company is looking shaky and the company recently signed a deal with the Pentagon, quietly softening its self-imposed ban on use of its AI for military purposes.
Are tech companies moving too quickly, overlooking safety concerns in the race to be first with new features? The potential prize, after all, is vast. The Wall Street Journal reported in September that OpenAI was discussing a potential share sale that valued the company at US$80-$90 billion, which would make it one of the most valuable privately held tech companies in the world.
What’s next?
If 2023 was a year of rapid experimentation in generative AI, this year “copilot” chatbots will be adopted across the business world. Powered by the technology underpinning ChatGPT, Microsoft Copilot has already been built into Windows, the Bing search engine and the Microsoft 365 productivity suite of Word, Excel, Outlook and PowerPoint.
A similar service from Google, Duet AI, is available to Gmail and Google Drive users. It has radically changed how I, a technology writer, can find information in the thousands of articles stored in my Google Drive folder. When legions of office workers have access to copilots, which are being offered as software add-ons or standalone apps at a cost of $30-$40 a month, basic office admin, report writing, composing emails and the like could largely be automated.
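To make the “copilot” idea concrete, here is a minimal, illustrative sketch of how such a tool can answer questions from a folder of documents: crude keyword retrieval pulls out likely-relevant passages, then a large language model is asked to answer using only those passages. This is not how Microsoft or Google actually build their products; the folder path, the model name and the use of the openai Python client are assumptions for the example.

```python
# A minimal sketch of a document "copilot", not any vendor's real implementation:
# gather likely-relevant snippets from local text files, then ask an LLM to
# answer a question using only those snippets. Assumes the openai Python
# package (v1+) and an OPENAI_API_KEY in the environment; paths and the model
# name are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def find_snippets(folder: str, query: str, limit: int = 3) -> list[str]:
    """Crude keyword retrieval: return passages that mention words from the query."""
    words = query.lower().split()
    hits = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(errors="ignore")
        for para in text.split("\n\n"):
            if any(w in para.lower() for w in words):
                hits.append(f"[{path.name}] {para.strip()}")
    return hits[:limit]

def ask_copilot(folder: str, question: str) -> str:
    context = "\n\n".join(find_snippets(folder, question))
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer using only the supplied documents."},
            {"role": "user",
             "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(ask_copilot("./articles", "What have I written about AI regulation?"))
```

Real products replace the crude keyword matching with semantic search over embeddings, but the basic shape – retrieve, then generate – is the same.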
The major push to develop LLMs will probably lead to sophisticated “multimodal” AI chatbots that can handle every type of content: internet search queries, text, images, audio, video and computer code. Fine-tuning of the models and algorithms will produce more nuanced, human-like responses to queries.
We are still a world away from human-level thinking, the artificial general intelligence (AGI) some experts fear could create a superintelligence that spells the end for humanity. But the answers ChatGPT spat out in early 2023 will soon appear laughably clunky in comparison to the new generation of chatbots.
To date, we have mainly been drawing on the likes of ChatGPT to answer questions and generate impressive pictures. But AI developers are now creating supercharged chatbots, known as AI agents, that can use software applications and websites to complete tasks. AI agents could work on your behalf to complete mundane admin work, a potential productivity booster.
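The “agent” idea boils down to a loop: the model picks an action, software carries it out, and the result is fed back in until the goal is met. The sketch below is a deliberately toy version of that loop; the decide() stub stands in for a real model call, and the pretend tools stand in for real flight-search or email integrations.

```python
# A highly simplified sketch of the AI-agent pattern: a model repeatedly
# chooses a tool, the program runs it, and the result is fed back until the
# task is done. decide() is a stub standing in for an LLM; the tools are toys.
import json

def search_flights(destination: str) -> str:
    return json.dumps([{"flight": "NZ123", "to": destination, "price": 420}])

def send_email(to: str, body: str) -> str:
    return f"(pretend) email sent to {to}"

TOOLS = {"search_flights": search_flights, "send_email": send_email}

def decide(goal: str, history: list[dict]) -> dict:
    """Placeholder for an LLM call that picks the next tool and its arguments."""
    if not history:
        return {"tool": "search_flights", "args": {"destination": "Wellington"}}
    if len(history) == 1:
        return {"tool": "send_email",
                "args": {"to": "boss@example.com",
                         "body": f"Options found: {history[0]['result']}"}}
    return {"tool": None, "args": {}}  # signal that the task is finished

def run_agent(goal: str) -> list[dict]:
    history: list[dict] = []
    while True:
        step = decide(goal, history)
        if step["tool"] is None:
            return history
        result = TOOLS[step["tool"]](**step["args"])
        history.append({"tool": step["tool"], "result": result})

for step in run_agent("Book me a cheap flight and email the options"):
    print(step["tool"], "->", step["result"])
```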
We are also moving from LLMs that focus on one or two tasks, such as answering questions and creating images, to ones that will generate video, audio, diagrams and other media.
News reports in December suggested OpenAI had developed a new model called Q* (Q Star) that is capable of answering primary-school-level mathematics questions. That may not sound impressive, but experts see it as a major step towards AGI. It has big implications for science and education and the workers in those sectors.
Some experts suggest that with OpenAI’s GPT-4 technology, released last March, we have reached a temporary “algorithmic ceiling” that will require some major breakthroughs to overcome. OpenAI and its chief rival, Google, are pouring resources into the next generation of the technology.
An unlikely rival has emerged in the form of Facebook co-founder Mark Zuckerberg, who wants to develop an open-source AGI that anyone can use. His company, Meta, is reportedly committing $20 billion worth of computing capacity to the task.
Jobs and productivity
The relatively low barrier to entry for generative AI in business has led to some bullish predictions about a looming productivity bump – measured in real gross domestic product per hour worked – if it is widely deployed. Goldman Sachs last year increased its 10-year outlook for global GDP growth, estimating AI could add 1.5 percentage points to annual productivity growth over the next decade.
Consulting firm McKinsey suggests globally “generative AI could enable labour productivity growth of 0.1-0.6% annually through [to] 2040, depending on the rate of technology adoption and redeployment of worker time into other activities”.
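As a rough, back-of-the-envelope illustration (my arithmetic, not McKinsey’s), compounding those annual rates steadily from 2024 to 2040 gives a sense of the cumulative effect:

```python
# Back-of-the-envelope compounding of 0.1-0.6% extra labour-productivity
# growth per year through to 2040, assuming it applies steadily from 2024.
years = 2040 - 2024  # 16 years
for annual in (0.001, 0.006):
    cumulative = (1 + annual) ** years - 1
    print(f"{annual:.1%} a year -> about {cumulative:.1%} higher output per hour by 2040")
```

Run as written, that works out to roughly 1.6% more output per hour at the low end and about 10% at the high end by 2040.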
Experts suggest up to 40% of the tasks workers perform could be taken over by AI, especially in white-collar office environments. Legal and financial services, business administration, software development and marketing roles are ripe for automation.
Here, AI is touted as part of the answer to New Zealand’s chronically low productivity. In the 2022 OECD rankings, we languished well behind the small advanced nations we prefer to compare ourselves with, Denmark and Ireland among them.
Kiwi businesses have, however, generally been slower to invest in and deploy AI services than their overseas counterparts, which Matt Ensor, founder of Auckland startup FranklyAI (see “Stampede ahead”, below), puts down to a lack of understanding of the technology, how to use it responsibly and the ways it can add value to businesses.
In contrast to countries such as the US, the UK and Australia, which have invested heavily in AI initiatives, there has been no government push to stimulate AI uptake here. Australia has a national AI action plan and a National Artificial Intelligence Centre, based at government research agency CSIRO, which is responsible for spurring AI adoption across industries.
The National-led coalition is disbanding the Productivity Commission, whose job was to find ways to boost productivity. Wider use of AI and automation technologies was on the commission’s radar, though such technologies are, of themselves, no silver bullet for sluggish productivity.
Deep fakes and bias
We face the prospect of falling further behind other OECD countries on productivity measures if we continue to be laggards in AI adoption. But hasty deployment of AI could be a recipe for disaster, says Frith Tweedie, a responsible AI consultant at Wellington-registered company Simply Privacy.

There was ample evidence of the dark side of generative AI last year. Chatbots made up facts and regurgitated errors and biased viewpoints. Deepfakes made with AI image and video generation tools began to proliferate: just last week, X (formerly Twitter) attempted to block the spread of explicit fake images of pop star Taylor Swift. Numerous data breaches occurred when information was inappropriately fed into large language models, exposing it to other users in the process.
“I think you ignore those issues at your peril,” says Tweedie, who has worked on privacy and technology legal issues for more than 20 years. “There’s potential for real harm to happen to your customers, potentially your staff and ultimately that turns into a real risk of reputation damage.”
The European Union is leading the way as governments scramble to regulate AI. Its parliament and council in December reached provisional agreement on an AI law to regulate uses of AI based on potential threats to society.
Australia, after public consultation last year, signalled last month it will look to regulate AI in high-risk settings through testing, transparency and accountability measures. Existing laws will also be strengthened to take account of AI.
In the US, the Biden administration in October issued an executive order establishing new standards for AI safety and security and requiring developers of the “most powerful AI systems” to share their safety test results with the government.
The UK government is using AI to vet welfare claims, despite concerns about algorithm bias. Last year, the country’s information commissioner, former New Zealand privacy commissioner John Edwards, warned it risked contempt of court if it wasn’t more transparent about how AI was used to make decisions.
UK Prime Minister Rishi Sunak in November gathered representatives from 28 nations at Bletchley Park, home of the World War II effort to crack the Enigma codes Nazi Germany used to encrypt its communications, to sign an agreement to work collectively on AI safety efforts.
New Zealand is not party to the Bletchley Declaration and official moves concerning AI have been limited to issuing guidelines for how government departments should use generative AI systems.
“I don’t think we should be racing off to legislate but [instead] looking at whether the Privacy Act is sufficiently robust, and similarly the Human Rights Act, in terms of discrimination,” says Tweedie.
Approaches to the use of AI vary widely in our public sector, though the 2020 Algorithm Charter signed by nearly 30 state agencies, including health, social and justice authorities, outlines principles ministries and other bodies need to follow. But the most powerful AI systems are essentially delivered as black boxes from the likes of OpenAI, revealing little about how they generate answers. That’s alarming when they are used to make decisions that affect citizens’ eligibility for public services.
Performing an AI impact assessment before deploying an AI system is considered best practice internationally and Tweedie says state agencies and businesses need to scrutinise “third-party supplier risk” from using AI systems they don’t fully understand.
Technology Minister Judith Collins is an AI enthusiast and last year created a cross-party working group to bring MPs up to speed on the technology. But she has said sweeping regulatory change, such as in Europe, is unlikely here. As the minister for “digitising government”, she sees major scope for AI to improve the delivery of state services.
An increasingly hot issue for generative AI is the implications of system sellers “scraping” vast amounts of data from the internet to train their LLMs. “We are one of the few countries that recognises computer-generated works can exist and there are ownership rights for those,” Tweedie says. “But it’s not that clear how that would apply in a practical context.”
A flurry of lawsuits has already been launched in the US, most recently by the publisher of the New York Times against OpenAI; the company has argued that its models won’t work without wholesale access to such information. Here, Stuff has moved to stop OpenAI “scraping” its content.
A gulf between the hype of 2023 and what is truly possible from generative AI, exacerbated by fights over intellectual property rights, could stymie developments this year, according to Rodney Brooks, the AI and robotics scientist who co-founded iRobot, maker of the Roomba robotic vacuum cleaner. “There may be yet another AI winter, and perhaps even a full-scale tech winter, just around the corner. And it is going to be cold,” he wrote last month in his annual set of technology predictions.
Despite having worked in the field of AI since 1976, Brooks says we are “still in the uncertain baby-step stages” of the technology. He summed up the state of AI in 2024 with a line from Computing Machinery and Intelligence, the 1950 paper by Alan Turing, one of the pioneers of AI and a key player among the Bletchley code breakers: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”
Stampede ahead
NZ companies aren’t completely on the sidelines of AI development but they could easily get trampled in the rush.
A growing cluster of New Zealand AI companies are harnessing the power of large language models (LLMs) and a number emerged from stealth mode last year.
Auckland-based FranklyAI started life within engineering consultancy Beca as an intelligent chatbot to allow public consultation and feedback on the company’s civil-engineering projects. But it has morphed into a powerful office assistant that works with Microsoft’s Teams messaging platform and is used by about 3000 customers.
Founder Matt Ensor says Frankly is a “broker” between an organisation and numerous different LLMs, OpenAI included. It allows employees to feed company information into chatbots for analysis and to produce detailed answers without fear of sensitive data leaking to third parties.
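The broker pattern Ensor describes can be pictured as a thin layer that scrubs sensitive details out of a prompt before it leaves the organisation, routes the cleaned-up request to a chosen model, then restores the details in the answer locally. The sketch below is purely illustrative of that idea, not FranklyAI’s actual code; the redaction rules and the stand-in provider call are assumptions made for the example.

```python
# Illustrative sketch of an LLM "broker": redact obviously sensitive values
# from a prompt, send the safe version to a provider, then restore the values
# in the answer locally. Not FranklyAI's implementation; rules are examples.
import re

REDACTIONS = {
    r"\b\d{2}-\d{4}-\d{7}-\d{2,3}\b": "[BANK-ACCOUNT]",  # NZ-style account numbers
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",
}

def redact(text: str) -> tuple[str, dict[str, str]]:
    found: dict[str, str] = {}
    for pattern, label in REDACTIONS.items():
        for match in re.findall(pattern, text):
            found[label] = match
            text = text.replace(match, label)
    return text, found

def call_provider(provider: str, prompt: str) -> str:
    """Stand-in for a call to OpenAI or any other model sitting behind the broker."""
    return f"[{provider}] summary of: {prompt[:60]}..."

def broker(prompt: str, provider: str = "openai") -> str:
    safe_prompt, found = redact(prompt)
    answer = call_provider(provider, safe_prompt)
    for label, original in found.items():
        answer = answer.replace(label, original)  # restored locally, never sent out
    return answer

print(broker("Summarise this invoice for jane@acme.co.nz, account 12-3456-7890123-00"))
```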
BeingAI, a collection of AI-related businesses, could become our first NZX-listed AI company this year as part of a proposed reverse listing on the stock exchange involving the listed shell company Ascension Capital Ltd. Last year was tough for tech start-ups attempting to raise money, but anything with “AI” in the name has stood a better chance of finding investors.
Ensor says the challenge for New Zealand AI startups is building AI business models that won’t be undermined by Big Tech. “Every time OpenAI or someone with a big foundation model releases an update, it wipes out 10,000 start-ups,” he says.
It’s like the flurry of flashlight apps that appeared in the early days of Apple’s App Store, allowing the iPhone screen to glow white as a stand-in for a torch. Apple built the Flashlight function into the iPhone operating system in 2013, effectively killing the market for standalone apps. The lesson for the fast-paced AI age is to build a unique business rather than provide a mere function that can be easily incorporated and commoditised by deep-pocketed companies.
“We’ve spent the past two years trying to avoid competing with Google, Microsoft or OpenAI,” says Ensor.
As for what this year will bring, he is predicting “a year of consolidation in an Amazon sort of way”. (The US e-tailing giant is the master of scaling up technologies to allow rapid, global adoption.)
“You’ll get the big players with much more mature products coming out. But we’re going to lose a lot of those dynamic little companies that were doing really cool things.”