In Emily Perkins’ play The Made, Alice, a 40-year-old AI engineer and sole parent, negotiates the uncharted waters of robot emotions. She has Nanny Ann, a frumpy, middle-aged humanoid bot charged with childcare and housework.
And she has Arie, a new robot built on the chassis of a former sexbot. When Alice tries to infuse her robots with emotion, Nanny Ann gains the full gamut of emotional autonomy. As Perkins says, “She has a lot of fury.”
But Arie’s sexbot programming limits her emotional capacity to an unflagging happiness.
Premiered by the Auckland Theatre Company last year, The Made is funny, warm-hearted and chaotic, but it also casts a hard light on the stereotypes that shape the artificial intelligence industry.
“I wanted to consider why we have – not necessarily a lack of imagination, but a narrow band of imagination when it comes to what we are making AI look like and sound like and do,” says Perkins. “Falling into those stereotypes is a huge problem.”
Australian journalist Tracey Spicer, NSW Premier’s 2019 Woman of the Year for her work in the #MeToo movement and a recipient of an Order of Australia gong, was blindsided by that problem, too. In 2016, her then-11-year-old son announced he wanted a robot slave. It was 7.45am and he had just seen South Park’s “toon hoon” Cartman bully and harass Amazon’s voice assistant, Alexa.
So began Spicer’s seven-year investigation into the gender, racial, age-based and sexual biases shaping the development and use of artificial intelligence.
As she says from her home in Sydney, “It is like when you see something and the scales fall from your eyes and you can’t stop seeing it everywhere around you.”
In Man-Made, she takes the reader on a high-speed drive-by of AI prejudice: automatic hotel-soap dispensers that can detect the hand of a white person but not that of a black person; voice-activated software that has difficulty understanding non-American and non-Caucasian voices; health apps with no period tracker; recruitment systems that filter out people over 50 and women. “It seems everything old is new again,” she writes. “Sexism is the new black.”
Biased datasets
Much of this bias is due to out-of-date or limited datasets used in the design of AI programs. Scanning texts from the 1970s and 1980s, for example, will train algorithms to assume that every doctor is a he, every nurse a she.
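To make that mechanism concrete, here is a minimal illustrative sketch – not drawn from Spicer’s book, and built on an invented toy corpus – showing how even naive co-occurrence counts over dated text “learn” that doctors are he and nurses are she. A model trained on such statistics simply inherits the skew.

```python
# Hypothetical illustration: gendered associations emerging from skewed training text.
# The corpus below is invented to stand in for "1970s-style" documents.
from collections import Counter

corpus = [
    "the doctor said he would review the chart",
    "the nurse said she would check on the patient",
    "he is a doctor at the city hospital",
    "she has worked as a nurse for ten years",
]

def pronoun_counts(occupation: str) -> Counter:
    """Count which pronouns appear in sentences mentioning the occupation."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if occupation in words:
            counts.update(w for w in words if w in {"he", "she"})
    return counts

for job in ("doctor", "nurse"):
    print(job, dict(pronoun_counts(job)))
# Output reflects the corpus, not reality: doctor -> he, nurse -> she.
```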
In some cases, health data is collected from private hospitals in overwhelmingly wealthy, white areas. One predictive breast-cancer model currently in use, for example, draws on mammograms from a dataset that is more than 80% white. This data is being fed into diagnostic algorithms that will be used in hospitals in the future.
Inevitably, these biases also reflect the homogeneous pool of programmers and developers who design the algorithms – mostly young, middle-class, white and Asian men. The only available emoji of a “woman of a certain age” depicts a grandmother-like figure with round specs and a bun. You can hear Spicer yelling as she thumps the keyboard: “A bun!”
According to Spicer’s research, about 90% of coding and engineering in AI is done by men. Women make up less than a quarter of Silicon Valley’s workforce.
Increasingly, she writes, “Our future is looking man-made, instead of human-made.”
It wasn’t always this way. In her book, Spicer identifies the many women involved in the development of new computer technologies: mathematician Ada Lovelace, associate of Charles Babbage and daughter of Lord Byron, who is considered to have devised the first computer algorithm; actor Hedy Lamarr, whose work paved the way for GPS and Bluetooth technology; Hilda Carpenter, who wove the first core memory plane for a computer in 1953; computer programmer Radia Perlman who, in the mid-1980s, solved the problem of file sharing between computers.
Many more remain nameless. To send Apollo 11 to the moon, Nasa hired highly skilled female weavers to hand-weave the ferrite rings that formed the memory system. It was called LOL memory, an acronym for “little old ladies”.
British scientist Tim Berners-Lee is credited with “inventing” the world wide web in the early 1990s, but 20 years earlier, Pam Hardt-English developed a computerised bulletin board linking libraries and a bookstore in San Francisco.
Since then, says Spicer, the tech industry has been driven by an increasingly narrow band of society, skewing the data and deepening existing biases, which are then replicated in an endless feedback loop.
The what-ifs are a minefield. If audio technology can’t understand your voice, you could be rejected in a job interview, blocked from emigrating to another country or misunderstood in a desperate call to emergency services. If you’re a woman of colour from Glasgow, she writes, “you’re well and truly ‘focked’”.
If face-recognition technology has trouble recognising darker skin tones and women’s faces, innocent citizens can be wrongfully investigated by police.
As AI programs restrict access to jobs, rental homes and home loans, there is little opportunity for legal recourse. Algorithms are trade secrets, and how do you prove a machine is ageist, transphobic, racist, sexist, ableist or homophobic?
“You can’t bring the algorithm before a court of law,” says Spicer. “So, do you sue the programmer? Do you sue the company that compiled the dataset and sold it to the tech company? Do you sue the tech giants? It is a really fraught legal area.”
Sex slaves and misogyny
Gender stereotypes are rampant in online gaming culture, social media and software, fuelling a growing level of misogynistic cyber hate, harassment, surveillance and revenge porn.
Such incidents reflect society’s attitudes towards women in the physical world, but in cyberspace they spread faster. “The imbalance in the data itself creates more inequity.”
Spicer quotes Calvin Lai, an associate professor of psychological and brain sciences at Washington University: “It’s like we’re marinating in a sauce of bigotry.”
Some of this seems benign. AI voice assistants such as Siri or Alexa, for example, exude an ever-patient, girl-next-door friendliness, the result, says Spicer, of months of research by the tech giants. “They found that people were more likely to buy the technology if there was a familiarity around the gender stereotypes. That means people were more likely to buy bots if it is female and sounds caring, sounds thoughtful, sounds servile, because historically that is the way women and girls have been taken in society.”
They also sound “vaguely sexual”.
If you said “Let’s talk dirty” to an early female version of Samsung’s virtual assistant Bixby, it would reply in a sexual tone, “I don’t want to end up on Santa’s naughty list,” says Spicer.
When the same comment was put to a male-voiced Bixby, it replied, “I’ve read that soil erosion is a real dirt problem.”
A 2019 Unesco report warned that personal assistants perpetuated the idea that “women are obliging, docile and eager-to-please helpers, available at the touch of a button or with a blunt voice command”, regardless of the level of hostility in that command.
They are also endlessly trainable. Just as Netflix and Spotify are designed to adapt to the preferences of users, new chatbots develop in response to human feedback. In 2016, Microsoft released Tay, a “millennial-minded AI agent” designed to engage with people “through casual and playful conversation”. The more you chat with Tay, the company said, “the smarter she gets”. And chat they did.
As Peter Singer explains in Ethics in the Real World, within 24 hours, people were teaching Tay racist and sexist ideas. When she started making positive comments about Hitler, Microsoft turned her off. “I don’t know whether the people who turned Tay into a racist were themselves racists,” muses Singer, “or just thought it would be fun to undermine Microsoft’s new toy.”
The vulnerability of chatbots to this level of malicious interference is even more apparent in the violence enacted on real-life “toys”. The internet is awash with humanoid robots being slapped, shoved, kicked and verbally abused – not thrown out of the way like an annoying object, but bullied like a living being.
At the Ars Electronica Festival 2017 in Austria, Samantha the sex robot was molested by a group of men and had to be sent back to Barcelona for repairs. Engineer Sergi Santos told the Daily Star the robot had been heavily soiled: “They treated the doll like barbarians.”
Can we demand respect for dehumanised sexbots? An early precursor of the sex robot, made by the now-defunct British company Sex Objects Ltd in the late 1970s, was known simply by its chest measurement – 36C.
The first sex robot, Roxxxy, launched at the Adult Entertainment Expo in Las Vegas in 2010, had five distinct personalities – Wild Wendy, S & M Susan, Mature Martha, Young Yoko (“oh so young … and waiting for you to teach her,” said the advertising blurb) and Frigid Farrah. The names hark back to the rampant misogyny of “Mad Men times”, says Spicer, but the company’s description of Frigid Farrah – “if you touch her in a private area, more than likely, she will not be too appreciative of your advances” – raises the issue of consent. “The entire idea still stinks of rape culture.”
A lesser evil
Some argue that sex robots can help people suffering from anxiety, disabilities or loneliness. Others have suggested that such robots give people a safe outlet for their rape fantasies.
Spicer quotes Laura Bates’ 2014 book Everyday Sexism: “We should no more be encouraging rapists to find a supposedly safe outlet for it than we should facilitate murderers by giving them realistic blood-spurting dummies to stab.”
New male, lesbian and non-binary sex robots coming on to the market might be seen as a sign of growing inclusivity, “but people can still override consent and rape that robot”, says Spicer. “Yes, it is a mass of metal and wires, but it is a representation of a real human being.”
The idea of a perfect or perfectible female form, crafted by men, has a long history. Roman poet Ovid told the story of Pygmalion, who carves a sculpture of a woman so beautiful he falls in love with it. With Venus’ blessing, she becomes a real woman. George Bernard Shaw took the name of Ovid’s sculptor for his play about phonetics professor Henry Higgins, who makes a wager that he can transform Eliza Doolittle, a cockney flower seller raised among “the squashed cabbage leaves of Covent Garden”, into a convincing duchess. In the 1964 film version, My Fair Lady, we cheered on Audrey Hepburn’s Eliza as she called for “‘enry ‘iggins’ ‘ead”. Forty years later, we watched in horror as one more wife was replaced by a beautiful, submissive robot in Frank Oz’s film of the Ira Levin novel, The Stepford Wives.
In the mid-1960s, US computer scientist Joseph Weizenbaum named his first chatbot Eliza, clearing the way for virtual assistants such as Siri and Alexa. When he reprogrammed Eliza to respond as an automated psychologist, he regendered the machine and named it “Doctor”.
Clear benefits
As Eliza or as Doctor, these machines have clear value. Artificial intelligence can save lives, predict illness, provide companionship and encourage empathy. Already, an army of nursing and home-care robots is being used to supplement – or replace – a dwindling human workforce. Although the academic jury is still out, Paro, a fluffy, Japanese-made robotic Canadian harp seal, is thought to ease loneliness and assist communication, especially with dementia patients.
In the UK, an artificial intelligence companion called Amper (“Agent-based Memory Prosthesis to Encourage Reminiscing”) is being developed to prompt people with dementia to tell their stories. “What a fabulous invention!” says Spicer.
Since being hit with long Covid early last year, Spicer has had to cope with overwhelming chronic fatigue, which has made her question the lack of smart technologies in her own home.
“I have a very stupid home, which I was very proud of, then I got this dynamic disability where even turning the light switch on was exhausting. I thought, ‘Why don’t I have an app for that?’”
But there are increasing calls for governments to install guardrails on the rapid growth of AI machine learning. In March, more than 1000 tech players called for a six-month pause in the creation of “giant” AI systems. In May, Sam Altman, chief executive of ChatGPT creator OpenAI, called on lawmakers to regulate AI. “If this technology goes wrong,” he said, “it can go quite wrong.”
Change the data
New legislation drafted by the European Union seeks to restrict the use of AI in critical infrastructure. Spicer is adding her voice to calls for required watermarking on AI-generated images and bias auditing of all databases. But as users of AI, we also have a role in determining the shape of these AI-driven creations.
“ChatGPT was released only in a basic stage to let the world’s population work with it and teach it,” says Spicer. “I think that’s a bit negligent – I would rather have regulation and legislation before something is unleashed on an unsuspecting public – but at least it gives the consumer some power to train it to be better.
“It would be easy to sit back and say, ‘I won’t engage with these technologies’, but if we don’t play with these technologies and enter our likes and dislikes, then future databases will have an even more distorted view of the world because it will predominantly be men, white people and those in wealthy countries who use the technology. And then it will be good for a smaller and smaller percentage of the population.”
We can change the gender of our voice assistants to male or non-binary; encourage girls to study STEM (science, tech, engineering, maths) subjects at school; and attract more women to the tech industry. Already a swathe of social enterprises, games developers and tech studios are including more women and ethnic minorities; new “clean” datasets are being developed.
“Throughout history, through every industrial revolution, technology far exceeds regulation and legislation – cars were on the road for an awful long time before seat belts were invented. We are at the stage of cars without seat belts right now.”
Man-Made, by Tracey Spicer (Simon & Schuster, $39.99).