Researchers at Google said the AI could not carry out general chit-chat but had been trained for the "natural conversations" of specific tasks, such as scheduling appointments over the phone. "The system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine," engineers wrote on the company's AI blog.
Google calls Duplex an "experiment" that a limited number of Google Assistant users will be able to try this summer. When, or whether, it will debut more widely remains an open question. Google has yet to show a live demo.
"We want to be clear about the intent of the call so businesses understand the context," Google engineers said. "We'll be experimenting with the right approach over the coming months."
The company showed several examples, including Duplex calling a restaurant to book a table for four, and in each instance, the listener seemed to have no idea it was a machine; in one call, the listener asked the AI, "What's up, man?" and referred to it as "sir."
Google's AI engineers trained Duplex on in-call practices that are typically simple for humans but challenging for machines, including "elaborations" ("for when?"), "syncs" ("can you hear me?"), "interruptions" ("can you start over?") and "pauses" ("can you hold?").
To prevent it from sounding too stilted or robotic, the system was also taught a number of so-called "speech disfluencies": The "hmms," "uhs" and other noises people make in casual conversation. Like humans, the AI makes those sounds to convey that it's still gathering its thoughts, the engineers said.
Duplex will make its call from an outside number when its user asks it to complete the task; the user won't be able to listen in or intervene. In cases where the task is too complex or the call goes awry, Google says, the AI will pass the call to a human operator.
Automated voice assistants, such as Amazon's Alexa and Apple's Siri, have quickly become a key part of how people interact with the computers in their lives, and many callers today are familiar with the automated voices of modern-day telemarketers, customer service lines and robocalls.
But Duplex would inject that AI into a new kind of arena, with listeners who have not consented to the conversation or don't realize they're talking to a machine. Google representatives did not respond to questions about how Duplex would operate in conversation, including whether it would disclose that it is not human. Yossi Matias, Google's vice president of engineering, told CNET that the software would "likely" tell the person on the other end that he or she is talking to an AI.
From the charming Samantha of "Her" to the coldly murderous HAL 9000 of "2001: A Space Odyssey," lifelike AI assistants have long been a hallmark of science fiction, and Duplex's convincing fakery left some listeners unnerved by how far the technology had come. Some said the Duplex calls appeared able to pass a simple "Turing test," the famous yardstick for whether a machine can act or speak so convincingly that it's hard to distinguish from a real person.
"A lot of folks have drawn attention to the risks of AIs masquerading as humans, which Duplex seems to normalize," said Miles Brundage, a research fellow at the University of Oxford's Future of Humanity Institute. "At the very least Google should seriously consider some sort of notification that people are interacting with an AI."
That kind of notification, Brundage said, would help educate people about the advanced state of AI. It would also potentially prevent the kinds of havoc that could result when a machine mimics a human being. In a recent report on "malicious AI," Brundage and his co-authors laid out a series of unnerving examples, including how an AI could copy someone's voice to fool a listener or seek information as part of an automated "social engineering attack." A Google official said the company takes the issue of synthetic content used to spread misinformation very seriously.
It's also unclear how Google would navigate legal concerns such as the Federal Communications Commission's telemarketing and robocall rules. Those rules ban companies from using an "artificial or prerecorded voice" to call certain phone lines and set guidelines for how such voice systems should operate, including requiring that each call clearly identify the "business, individual or other entity initiating the call."
A Google official said the service was different from those calls because it's not for solicitation or telemarketing. The official added that the automated assistant will only call companies on phone numbers offered to the public for booking appointments or doing business.
Madeline Lamo, a University of Washington graduate student researching robotic harms and free speech, said the Google AI could also effectively flip the robocall dynamic on its head. "Instead of vendors and scammers using AI to contact potential consumers/scam victims en masse," she said, "the consumers are now empowered to make robocalls themselves."
She cited a scene from the TV show "The Office," in which a scheming assistant to the regional manager, Dwight Schrute, makes 50 restaurant reservations and then sells them off to desperate callers -- what he calls his "perfect Valentine's Day." "People with AI-powered assistants who can easily make those 50 restaurant reservations would harm both businesses and consumers," she said.
AI experts have in recent years called for legal or ethical guidelines that could help curb that kind of mischief. Columbia University law professor Tim Wu called in 2017 for "Blade Runner" laws that would prevent companies from deploying human-impersonating machines that hide their true identity.
There's a natural tension in those kinds of rules: Google wants its AI to be as convincing -- and, yes, lifelike -- as possible, to ensure the listener responds naturally and, ideally, doesn't hang up.
But MIT economist Erik Brynjolfsson thinks there should be a middle ground to ensure humans aren't left wondering who, or what, they're talking to. Regulation, he said, may be necessary to require bots to identify themselves.
"At a bare minimum, a bot should answer truthfully if a human asks whether it's a bot," Brynjolfsson said. "Or perhaps more radically, bots should be required to have a recognizable voice style and or text style and/or appearance. I don't think this would harm their efficiency. In fact, it would likely improve it."