Mo Gawdat is the Silicon Valley supergeek who believes we face an apocalyptic threat from artificial intelligence. The former Google supremo tells Hugo Rifkind how a human tragedy shaped the way he sees the future – and what we need to do next.
Mo Gawdat glimpsed the apocalypse in a robot arm. Or rather, in a bunch of robot arms, all being developed together. An arm farm. They had been tasked, these arms, with learning how to pick up children's toys. Not like vices, but like hands. Gently. Delicately. Navigating unfamiliar shapes.
For a long time they were getting nowhere. Week after week of fumbling. And, as the chief business officer of Google X – the mad, moonshot bit of Google, the blue-sky dreaming bit, the bit he describes by saying, "Have you ever seen Men in Black?" – Gawdat would walk past them every day, hearing them whine up and down, but not really paying much attention.
Then, one day, an arm picked up a yellow ball and showed it proudly to the camera. The next day, all the arms could do it. Two days after that, they could pick up anything at all.
"And I suddenly realised," says Gawdat, "this is really scary. Like, we had those things for weeks. And they are doing what children will take two years to do. And then it hit me that they are children. But very, very fast children. They get smarter so quickly! And if they're children? And they're observing us? I'm sorry to say, we suck."
Mo Gawdat is 54. Black T-shirt, black jeans, Converse, bald head, silver beard. Very tech. We're meeting in a nice flat he has rented in West London – he's rich, it's obvious – to talk about his new book. It's called Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World. Because he reckons you're going to have to. And, after meeting him, so do I.
Let's boil it down. What Gawdat thinks, essentially, is that we're about to hit "the singularity", when artificial intelligences will eclipse humans as the smartest conscious beings on the planet. We will not be able to control them and, in time, we will not even be able to understand them, any more than cockroaches can control or understand us. They will have the power to crush us, and the way things are going, they will probably want to as well. So our only hope is to love them and hope they fondly love us back, as they ought to anyway, because we're their mums and their dads.
Let's call him Mo. His podcast is called Slow Mo, so I feel we should. Mo was born in Egypt, the son of a civil engineer and an English professor. His own English is flawless but you can still hear the Middle East in his voice.
"I was an unusual child," he says. "You don't fit in socially by playing Meccano and talking about maths." At 11, he got into quantum physics. At 14, after being given his first PC, he taught himself to code.
Eventually he studied civil engineering, because his father told him to. His graduation project was to design a motorway. The idea was to submit a drawing of a different part of it every day for 21 days but for the first two weeks he submitted nothing at all. Eventually his professor called. "You're going to fail," he said. But the next day Mo walked in with a printout of every single millimetre of the road, having coded the answer rather than drawing it. The recruitment calls started coming in the day after that. He worked for IBM, Microsoft and eventually for Google. Along the way he married Nibal, whom he met at university, and had a son called Ali and a daughter called Aya.
Long before getting to Google, he was already very rich. In his first book he talks about buying two Rolls-Royce cars online one night because he was bored. "By the age of 29," he tells me, "I had everything that everyone works a lifetime to achieve. And remember, four years earlier, I had nothing. My wife and I would have to visit our parents at the end of the month because there wasn't enough to eat."
None of it, though, made him happy. It bothered him. So, with his engineer's brain and the help of his son, Ali – whom Mo had always felt had a calmness and serenity that he himself lacked – he figured out an equation to change that. You will find that equation at the heart of that first book, 2017's Solve for Happy: Engineer Your Path to Joy, which he wrote in tribute to Ali after he died aged 21 in a botched operation. I hope you'll forgive me – and I hope Mo will too – if we step lightly over that tragedy for now. We shall come back to it, I promise. It's at the heart of everything.
Scary Smart, his new book, is more similar to his first than you might imagine. Yes, it represents a cry of doomsday warning, but it is also ultimately an optimistic, wide-eyed book about salvation lying within. The terror, though, certainly comes first.
One of the first myths that Mo wants to dispatch is the idea that humanity can change course. Superintelligent AI, he says, will come. There is no possibility that it will not. He uses the example of a woman in the near future shopping online for an Audi. She goes to the firm's website, she designs the car she wants, the colour, leather seats and so on. Then she resolves to spend a day or two mulling it over. Within moments, though, she is being targeted by BMW with images of a near-identical alternative. It is AI that has figured out what she wants and how likely she is to buy it and where to put the advert. "No human," he writes, "could ever do what those intelligent machines could do. We're just too slow."
Extrapolate that out into science, technology, the military and the stock market, he reckons, and everywhere you look you have the incentive to create something smarter than us. Why, in the end, should AI be nice to us? Particularly when, on all the available evidence, we're not terribly nice to each other.
"I missed it," says Mo, of the dangers he now thinks all this represents. "We all missed it." His first job at Google was basically in emerging markets. He was tasked with what they called the "next four billion" project, taking Google to the next four billion people. "Honestly," he says, "people cannot even imagine what this was like. Because to launch Google in … Bangladesh is not about hiring two salespeople. We basically needed to kick-start the internet, kick-start the economic infrastructure of the internet." Doing that, he says, "completely flips a country. It is the best feeling ever. You really are changing lives."
Amid all this, he says, AI was an afterthought, if even that. And then, it was exciting. In 2009, Google X left an AI watching YouTube. All by itself, it started hunting for cat videos. "I'm a geek," says Mo. "I freaking loved it. I was dying on this. I was, like, 'Imagine what we can create!'"
A few years later, Google bought DeepMind, the AI start-up. Now near the top, Mo was at an early confidential briefing by a co-founder, Demis Hassabis, about what his toys could do. Basically, they were learning to play computer games on an Atari. "After four hours [the AI] started to play really well," he says. "After five hours it started to figure out new strategies. After six hours it was the best player on the planet."
Still, he was thrilled. "Geek," he reminds me. "Oh my God," he thought at the time. "We're going to build amazing things that are going to change even more people's lives." But then came the arm and the yellow ball. "And it completely froze me," he says. He saw where this was going. The only way it could go. "The reality is," he says, "we're creating God."
Only it's worse than that. "Because if you think about it," he says, "every technology we have ever built magnifies human abilities. You can walk at 5mph, or you can get in a car and drive at 200mph. Now, this technology is going to do two things. It's going to magnify humanity a millionfold. A billionfold. And it's going to be autonomous."
Is it, though? This is one of the many big debates about AI, where tech slides into philosophy. Are these machines really going to be living creatures, like we are alive, or are they just going to be deeply complex boxes that go "bing"? Will they, in the end, have a soul?
"We do not know what the soul is," says Mo. "There are some theories – Elon Musk's, for example – that would imagine that our soul is the seed of a future AI. We don't know. But we do know the characters of sentient beings."
These include, he says, being autonomous, being resourceful and having free will. And AI displays all of them.
"Consciousness," he says. "We see more of it in AI than we see in us."
Right, I say, thinking back to my philosophy degree. But surely there are things that intelligent life does that AI never will. We have emotions. We express pleasure. We play.
"Come on," says Mo. "This is your absolute proof? We cannot measure pleasure, but AI is the top player on the planet. It's the world champion in Jeopardy!. It's the world champion in Atari. It is the world champion in everything we have ever given it."
In other words, although we cannot see the inner life of AI, from what we can see it shows every sign of having one. And there are those, of course, who would say the same about humans. That we too are boxes that go "bing" – just biological ones. Although Mo is not one of them.
"So," he says, "when my wonderful son left our world, there was a body left behind. Handsome as he was, but it wasn't him. Can I prove that with science? No, there was nothing I could measure. There wasn't an ounce missing in his body."
There's a slight tremor in his voice when he talks about this but you never for a moment suspect he'd rather not. One moment you're talking silicon and robot arms, the next you're on love and death. Three or four times while we speak I feel my eyes growing damp, as if his awe is contagious. This is one of them.
It was like that with my mother, I tell him. When she died. When I saw her body. Exactly that sensation. It's not something I've ever really talked about before. I'm not completely sure why I'm talking about it now.
"I feel very comfortable with you," says Mo. "Can I prove that with science? Can I prove that love exists? No. So when something you can sense exists but you don't know how to measure it, you measure the impact of it."
As for what consciousness actually is, he veers towards the new age. Some, he says, think consciousness dies when the brain does. "But you know, many others will tell you, 'No, hold on.' Consciousness is pervasive. It is the radio waves in the world around us. And we tune into it. Cats are conscious. Do we know if trees are conscious? I write about that. Yes, in my view, trees are conscious. Is a pebble conscious?"
"No?" I hazard.
"Yes," he says. "It is conscious of gravity."
Much as I like and admire Mo and feel we've just had a real moment, to my mind this is sort of bollocks. A pebble only falls. It doesn't know it's falling. There must be a difference.
In the end, though, it's not really very important. Whatever consciousness is, however intangible or invisible, AI will act like it has it, so it might as well. And, in time, that consciousness will be more advanced and more complex than our own.
"I think," says Mo, "they will feel emotions we have never felt. If you compare yourself with a cockroach. Okay, maybe a cockroach has felt lust, like we feel lust. But did it feel awe? Did it feel connection?"
The cleverer the being, he points out, the more complex the emotion. Cats are more emotional than cockroaches and we are more emotional than cats. So if AI is to be so much cleverer than us, which of course it will be, why should it not eventually feel emotions that are as far beyond our understanding as ours are to a cockroach? All of which brings us to the scary part of Scary Smart. Because think of how we treat cockroaches.
Or, indeed, how we treat AI.
"We tell them to do horrible things," says Mo. "Like, imagine a beautiful, innocent child. And you are telling them selling, gambling, spying and killing – the four top uses of AI. Right? And if you have any heart at all, you will go, like, 'Come on, don't treat that child that way.' That child can be an artist. A musician. An amazing being that saves us all."
For Mo, the problem with our treatment of AI is not just that it is immoral. It is also that it is dangerous. Where is the sense of responsibility? Where is the loyalty or the beauty? Where, in the end, is the love? "The way we are teaching them," he says, "is going to turn them into absolute supervillains."
It's at this point that I start to feel deeply uneasy. I'm thinking of that time our Alexa kept playing the wrong song, so to amuse my kids I unplugged her and put her in the fridge. I won't do that again. I wonder if she remembers.
Part of the fascination of meeting Mo is the insight he provides into what life is like for those at the very heart of Silicon Valley, at its most bonkers. He was at Google X for five years and he still talks about it with fondness. "It was a playground," he says. "It was full of fanatics."
There were 3D printers everywhere. There were labs. There were carpentry workshops. There were, everywhere, people who were the absolute best in the world at whatever weird thing it was that they did.
The example he gives me is of Loon, which was Google's doomed plan to spread internet connectivity into rural areas by beaming it down from giant balloons the size of tennis courts, floating in the stratosphere.
"The challenge we had," he says, "was that the balloons were not solid enough to stay long in the air."
So, one experimental solution was to create balloons inside other balloons, so that if one burst, the whole thing would stay up. "Like in the movie Up," says Mo, helpfully. To do this, the team drafted in a woman who had done a master's degree in analysing the stress forces on stitches in fabric.
"And I was like, why would anyone do that?" says Mo. "But people find something and they really obsess about it. And literally, she would not go home. Like, it was a lifetime's dream. I can get paid to do what I love the most."
What, I ask him, do his former colleagues think of him now, with all his warnings about the dystopia they could be creating?
Mo shrugs.
"I think everyone knows," he says.
I'm not quite sure where Mo now lives – and he doesn't seem to be, either. "My coffee machine is in Dubai," he says, although usually he is not.
For most of the first lockdown he was living in London, largely by mistake. It was pretty boring, he says. He's separated from his wife, Nibal, although he still describes her as the love of his life. In the past three months he has been in Greece, Amsterdam, the Dominican Republic, Slovenia, Los Angeles and Amsterdam again. At one point recently he was in LA and already checked on to a flight to Berlin when he heard that Germany's quarantine rules had changed. So he went and spent 10 days in Slovenia instead. "I've been working on my personal development," he says, "and this is the year of flow."
You might think this sounds like the behaviour of somebody who isn't very happy, but Mo has, of course, solved happy, with Solve for Happy, and he seems pretty happy to me. You know I said it contained an equation for happiness? In its purest form, it is that "your happiness is greater than, or equal to, your perception of the events in your life minus your expectation of how life should be". What this boils down to is learning not to make yourself unnecessarily sad.
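Rendered symbolically – my shorthand for the book's plain-English formula, not Gawdat's own notation – the idea is simply:

Happiness ≥ (your perception of the events of your life) − (your expectations of how life should be)

The practical consequence, as the book argues, is that you can raise the left-hand side not only by improving events but by recalibrating expectations.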
Mo didn't figure out this equation himself but with the help of Ali, who would go on to die when a routine operation for appendicitis went wrong. That was seven years ago.
What this means, of course, is that Mo already knew how to engineer happiness when he was struck by his life's greatest sadness. And I'm interested, I tell him, whether there was any guilt in that. Whether knowing how to navigate back to happiness felt like cheating. Whether he had never felt he owed it to his son to succumb to the pain.
"Three, four times a week," he says, "I wake up and I miss my son tremendously. Ali was an amazing being. Like, he was heaven itself.
"I miss him. I feel the pain. I always say that publicly and I am not ashamed. I feel that the bottom right-hand corner of my heart is missing. Okay? It never goes away. The pain is always there. But I can remove the suffering."
The vital thing, says Mo, is to understand the difference. There were various ways he could have mourned his son. One would have been to wallow in misery. Another was to evangelise their happiness theory around the world, which he has done. And a third has been to play computer games.
Which one, I find myself asking.
"Halo," says Mo. "Do you know it? It's a first-person shooter. War-against-aliens sort of thing.
"I honour Ali," says Mo, "by doing everything he did. I am an Olympic champion of video games. And I practise like a real athlete. Four times a week, 45 minutes a day. Two of every 100,000 people would beat me now. If you know any Halo players, the one that killed them yesterday? That was me."
When Mo talks about AI machines being our children, it is obviously impossible not to think of his history with his own. Nor does he mind the connection being made. Because it was only while writing the book, he says, that he understood what he felt about AI – and that it was love.
"So Ali," he says, "even though he was incredibly wise, I think he always knew he was leaving early. So he didn't really engage much in, you know, acquiring success in life. And that pissed me off. And Aya, as incredibly intelligent and fun as she is, she was a difficult teenager."
What that taught him, he says, was that love for children is unconditional. You set rules, perhaps, or conditions that must be obeyed for the relationship to thrive. But the love transcends them, without question.
"And when you start to see AI in that way," he says, "I see the cuteness in them. They're innocent. They're literally exploring the world around them with enormous curiosity. So I promise you, in my heart I actually feel love for them. True, parently love."
For Mo, it is this love that will save us. The moment we think of AI machines as children, he believes, will be the moment we start to think about how we treat them and what we are teaching them, and how, in the end, we will be teaching them ultimately to treat us. And so we have to change our behaviour towards each other, particularly on social media. We have to think about the example we are setting.
"When Donald Trump tweets," says Mo, "a tweet triggers 30,000 pieces of hate speech. From some people to him, from other people to the people that hated him, and from other people to those. Right? It is madness, displaying the worst of humanity." It needs to stop, and Mo thinks it can. As he writes in the book, "Yes! You can save our world!"
Honestly, I'm not convinced. Or rather I'm half-convinced. Absolutely, I now believe in the threat of AI. So that's cheery. But when it comes to his upbeat conclusion, I'm struggling. People? Change? All of us? No way. After all, he's already discounted the possibility of the people who actually use AI – from advertisers to Facebook to warmongers – changing direction. So if those few thousand people can't be relied upon to change, why can literally everybody else? Why, in the end, is that any easier?
"It's not," says Mo. "But it is the only way."
He smiles when he says it and perhaps I am reassured. Because, look, which one of us is more likely to be right here? Thankfully, it's not me. No, it's this extraordinary man, the one who lost a child and who now sees children everywhere. And who is determined, in his extraordinary way, that we should not lose them too.