In the spring of 2020, as Covid-19 swept across the world, artificial intelligence (AI) systems working for one of the world’s biggest credit scoring companies noticed an unexpected surge in online shopping.
The AI bots concluded that this could only mean one thing – a huge digital crime wave. They instructed the company to block millions of transactions, most of which were from ordinary shoppers scrambling for essential goods.
In the event, human analysts at Fico noticed the errors and reversed most of them, allowing shoppers to get their toilet paper. But the incident is a vivid foretaste of what could go wrong in the next 25 years as more and more financial transactions come under the influence of AI.
“I expect AI to be deployed extensively in the financial industry and beyond, not least since no one wants to be left behind by competitors,” says Anselm Küsters, an expert in the digital economy at the Berlin-based Centre for European Policy (Cep) and the author of a recent report on how AI could exacerbate future financial crises.
With big banks and other institutions already using it for everything from detecting fraud to making high-speed trading decisions, future financial markets could become the playground of duelling AI systems “making complex investment decisions based on patterns that may not be apparent to humans”.
At the more intimate level, Omar Green, founder and chief executive of the San Francisco finance app Wallet.AI, predicts a world where personalised AI helpers are as omnipresent and as closely tied to our sense of self as our smartphones. “That’s not thirty years out,” he says. “That’s sooner than that, at the pace that we’re going.”
So what will that mean for your money? To answer that question, it’s wise to understand what makes AI tick. Today’s cutting-edge systems such as Google Bard and OpenAI’s ChatGPT – which have stunned the world with their ability to engage in realistic, flowing conversation – are as much grown as they are built, through a process known as machine learning.
Rather than being programmed with detailed rules about what to do, they are let loose on large amounts of data in a process similar to biological evolution, figuring out through repeated trial and error what results are considered “correct” and adapting themselves accordingly.
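To make that abstract process concrete, here is a deliberately simplified sketch – the transactions, numbers and fraud threshold are all invented for illustration, not drawn from any real system. The toy model “learns” to flag suspicious transactions purely by nudging two numbers whenever it guesses wrong, with no rules written down in advance:

```python
# A deliberately simplified, hypothetical sketch of learning by
# trial and error: no fraud rules are written down; the model
# starts knowing nothing and adjusts two numbers whenever its
# guess about a transaction turns out to be wrong.
import random

# Toy training data: (transaction amount, label) where 1 = fraud.
data = [(20, 0), (35, 0), (50, 0), (900, 1), (1200, 1), (1500, 1)]

weight, bias = 0.0, 0.0
learning_rate = 0.0001

for _ in range(10_000):                       # repeated trials...
    amount, label = random.choice(data)
    guess = 1 if weight * amount + bias > 0.5 else 0
    error = label - guess                     # ...and errors
    weight += learning_rate * error * amount  # adapt accordingly
    bias += learning_rate * error

print(f"learned rule: flag fraud if {weight:.4f} * amount + {bias:.4f} > 0.5")
```

Scaled up from two adjustable numbers to billions, the same error-driven nudging is what produces systems such as ChatGPT – along with the same uncritical trust in whatever patterns the training data happens to contain.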
The resulting “model” – the technical name for an AI’s internal view of the world – can not only rival humans at many tasks but spot patterns humans never would.
In an optimistic scenario, such systems would make financial markets more efficient and more stable, spotting irrational behaviour and helping to predict future crashes.
“If you ask me, robust use of large data models would have revealed the anomaly in [mortgage] lending models in 2008 quicker than we identified it as humans,” says Rob Rooney, chief executive of the British personal finance company HyperJar, who previously spent more than three decades at Morgan Stanley.
Cathy O’Neil, a data scientist whose 2016 book Weapons of Math Destruction explored how algorithms often reinforce inequality and discrimination, points out that AI could close the gap between under-resourced regulators and the giant corporations they try to police.
“Their biggest problem is that they don’t have the technical expertise to do the analysis they need,” she says. “They can’t keep up with the kind of frauds that are perpetrated. If [they] can use the chatbots to do the analysis for them, that will sort of democratise regulation, if you will.”
Küsters, however, fears that AI could just as easily spark a crisis as spot one.
Machine learning models are always limited by the data they are trained on, and because financial crises are relatively rare, our data about them is patchy.
That means an AI adapted to “normal times” could go haywire in the face of unprecedented “black swan” events, amplifying the chaos – especially if multiple institutions use similar AIs that fail in a similar way. Last month the chairman of the US Securities and Exchange Commission warned that AI posed a “systemic risk” and could catalyse the next financial crash.
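A toy example – again with invented figures – shows how such a failure can happen: a detector calibrated on pre-crisis shopping habits has no way to tell an economy-wide stockpiling spree from a crime wave, which is essentially the mistake Fico’s systems made in 2020:

```python
# An illustrative anomaly detector; all the figures are invented.
# It learns what "normal" looks like from pre-crisis data, then
# flags anything far outside that range - including legitimate
# behaviour the training data simply never contained.
import statistics

# Daily online purchases per customer, observed in normal times.
normal_times = [2, 3, 2, 4, 3, 2, 3, 4, 2, 3]
mean = statistics.mean(normal_times)
std = statistics.stdev(normal_times)

def looks_fraudulent(purchases_today: int) -> bool:
    # Flag anything more than 3 standard deviations above average.
    return (purchases_today - mean) / std > 3

# During lockdown, an ordinary shopper stockpiles essentials...
print(looks_fraudulent(12))  # True: blocked, though legitimate
```

Nothing in the model’s “normal times” data prepared it for a world in which everyone’s behaviour shifted at once.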
Some current AI chatbots have also shown a penchant for “hallucination” – that is, inventing facts, apparently because they are trained to sound plausible rather than to tell the truth.
The phenomenon exposes a key problem with machine learning models: their self-generated understanding of the world is mysterious and alien even to their own creators, and there is often no way to tell why or how they reached a certain decision.
There is also no reason why the benefits of AI would be limited to legitimate industries. Hackers and hoaxers have already proved adept at using the technology to replicate the voices of company executives, or generate spurious images that can move markets – as apparently happened last month with “photos” of an explosion at the Pentagon.
“Maybe the worst case scenario would be that it’s just so easy to create and share disinformation that no one really trusts what they’re seeing in digital format anymore,” says Drew Popson, head of technology and innovation in financial services at the World Economic Forum (WEF). “So cybercriminals have taken over, and we end up going back to analogue and bank branches and seeing people in person.”
Most dangerous of all, albeit perhaps also the most speculative, is the possibility that giving AI direct control of money turns out to be exactly the type of Terminator scenario that prophets of machine doom have been warning about for years.
Philosophers and researchers such as Eliezer Yudkowsky have described the difficulty of ensuring that an AI truly has the same values and motives as the humans it is meant to serve.
The best-known example is the “paperclip maximiser”: a theoretical super-intelligent AI, employed by a stationery company, that is ordered to make as many paperclips as possible and, taking that instruction to its logical end, eventually takes over the planet and uses humans as raw material, resisting all attempts to turn it off. One way for such an AI to escape constraints designed to prevent this would be simply to bribe people to do its bidding.
What about personal finance? Here, too, Rooney sees massive opportunity: AI could effectively replicate the services of a financial adviser or wealth manager at a fraction of the cost.
“It goes to everything from buying the right car, to getting the right car insurance, to getting the right car loan. There’s a lot of competition there, there’s a lot of complexity there, and sifting through all of that is not easy for anybody,” he says. “Big data models are a really powerful way for the consumer to get a better deal.”
Omar Green has been trying to realise a similar vision since 2012, when he founded Wallet.AI. He is cagey about his product, and declines to discuss the company’s finances or investors, saying the latter have asked to remain anonymous.
But he is open and thoughtful about his principles, describing how he hopes to build personalised AI that can coach and coax people through the difficult long-term grind of reforming their spending habits and achieving financial independence.
“It turns out that’s a really hard problem, because it is about... bringing a certain degree of reality into the sort of delusional set of reality that we’d like to live in,” he says.
That could mean an AI that nudges users, via a smartphone screen or a synthesised voice in their ear, with insights and suggestions drawn from their own behaviour and that of others like them. But it could also mean an AI that is capable of listening with simulated empathy to a dire situation – your daughter needs private medical treatment, and you can’t afford it – and then frankly discussing your options, and what you might have to do to make it happen.
O’Neil points out that it could also be deliberately “gamed” by unscrupulous financiers who find hidden ways of tricking the AI into favouring their products, just as unscrupulous web designers today trick Google into putting useless nonsense at the top of its search results.
Küsters fears that large numbers of people getting personal advice from the same set of AIs could have wider effects, creating “a new type of herding behaviour” that amplifies market volatility.
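The arithmetic behind that worry can be sketched in a few lines of toy code, with every number invented for illustration: thousands of traders acting on independent advice largely cancel one another out, while the same traders following a single AI’s signal all push the market the same way at once:

```python
# A toy simulation of the herding worry; all figures are invented.
# Each trader buys (+1) or sells (-1); the net order flow is what
# moves the price.
import random

def net_order_flow(n_traders: int, shared_advice: bool) -> int:
    if shared_advice:
        # One AI, one signal: every trader lands on the same side.
        signal = random.choice([-1, 1])
        return n_traders * signal
    # Independent advisers: individual signals mostly cancel out.
    return sum(random.choice([-1, 1]) for _ in range(n_traders))

print(abs(net_order_flow(10_000, shared_advice=True)))   # always 10000
print(abs(net_order_flow(10_000, shared_advice=False)))  # typically ~100
```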
Green is far from ignorant of such dangers. He explains how, in the late 2010s, he hoped to work with big banks to make Wallet.AI’s insights part of their mainstream products. But these partnerships mostly collapsed because, he claims, banks actually profit from their customers making bad financial decisions – such as getting into debt they can’t afford – and do not want them to make better ones.
“They couldn’t figure out how to turn Wallet.AI into a growth mechanic without doing things that would be predatory,” he says. One credit executive asked him: “You do understand that sometimes we produce programmes for our customers that we don’t want them to take advantage of? That we can’t afford for them to take advantage of?”
That experience underscores the reality that AI does not spring into being from nowhere, free of earthly bonds. It is trained by, shaped by, and ultimately serves the interests of the institutions that create it. Whether you trust AI to manage your money will depend on how far you trust existing corporations, financial markets, and capitalism itself.
A 2021 McKinsey survey of 1,843 firms does not inspire confidence: it found that most respondents were not regularly monitoring their AI-based programmes after launching them.
Green is deeply concerned about what will happen if Big Tech incumbents such as Meta, with its historic culture of “moving fast and breaking things”, or Apple, with its controversial dominion over the iPhone app ecosystem, define the shape of future AI finance.
Popson and Rooney argue that the financial industry is highly regulated and will not get away with behaving like Big Tech, while Küsters says we need more specific regulations, similar to but more robust than the European Union’s proposed AI Act.
That does not mean the AI industry can simply leave it to the politicians and wash its hands of the problem.
“I am a cautious optimist,” says Green. “I think that if [AI makers] can be taught to believe that there is an incentive to building systems that are helpful, that avoid harm, that represent the angels of our better natures, then they’ll build them... Let’s show some discipline as makers, and try to build the world we want to exist.”