May 10, 2018 at 3:03pm
via Financial Review
Drew Harwell
Google's artificial-intelligence assistant sounds almost exactly like a human when it calls the salon to book a woman's hair appointment. It responds to questions, negotiates timing and thanks the receptionist for her help. It even says "um" and "mm-hmm."
What it doesn't say, however, is that it's a machine - and the receptionist doesn't show any sign that she can tell.
Google's unveiling of Duplex - an automated voice assistant that can book restaurant reservations, check opening hours and accomplish other tasks over the phone - has thrown a spotlight on how advanced AI can now carry on conversations that are so lifelike that even a human listener can be fooled.
The technology, unveiled at Google's I/O developer conference, could be a huge convenience for anyone who hates picking up the phone. But it is also raising thorny questions about the ethics of using a machine to copy a person's voice, carry out commands -- and potentially deceive the unsuspecting listener on the other end.
"This technology is amazing, and [a] big step forward, but I don't think the main goal of AI should be to mimic humans," said Erik Brynjolfsson, a Massachusetts Institute of Technology professor and director of its Initiative on the Digital Economy. "Instead, AI researchers should make it as easy as possible for humans to tell whether they are interacting with another human or with a machine."
Google calls Duplex an "experiment" that a limited number of Google Assistant users will be able to try this US summer. When, or whether, it will debut more widely remains an open question. Google has yet to show a live demo.
"We want to be clear about the intent of the call so businesses understand the context," Google engineers said. "We'll be experimenting with the right approach over the coming months."
The company showed several examples, including Duplex calling a restaurant to book a table for four, and in each instance the listener seemed to have no idea it was a machine; in one call, the listener asked the AI, "What's up, man?" and referred to it as "sir."
Google's AI engineers trained Duplex on in-call practices that are typically simple for humans but challenging for machines, including "elaborations" ("for when?"), "syncs" ("can you hear me?"), "interruptions" ("can you start over?") and "pauses" ("can you hold?").
To prevent it from sounding too stilted or robotic, the system was also taught a number of so-called "speech disfluencies": The "hmms," "uhs" and other noises people make in casual conversation. Like humans, the AI makes those sounds to convey that it's still gathering its thoughts, the engineers said.
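Google has not published Duplex's internals, but a minimal Python sketch can illustrate the pattern the engineers describe: classify the receptionist's conversational move, pick a reply, and prepend a filler word so the response sounds less robotic. All names, patterns and canned replies below are illustrative assumptions, not Google's implementation.

```python
import random
import re

# Hypothetical sketch: classify the receptionist's utterance into the
# conversational moves named in the article, then answer with a reply
# prefixed by a "speech disfluency" to buy thinking time.
PATTERNS = {
    "elaboration": re.compile(r"\bfor when\b|\bwhat time\b", re.I),
    "sync": re.compile(r"\bcan you hear me\b", re.I),
    "interruption": re.compile(r"\bstart over\b|\bsay that again\b", re.I),
    "pause": re.compile(r"\bcan you hold\b|\bone moment\b", re.I),
}

DISFLUENCIES = ["um", "uh", "mm-hmm"]

def classify(utterance: str) -> str:
    """Label the caller's move so the dialogue policy can react."""
    for move, pattern in PATTERNS.items():
        if pattern.search(utterance):
            return move
    return "statement"

def respond(utterance: str, requested_time: str = "12 pm") -> str:
    """Pick a canned reply and prepend a filler word."""
    replies = {
        "elaboration": f"for {requested_time}",
        "sync": "yes, I can hear you",
        "interruption": f"sure, a table for four at {requested_time}",
        "pause": "okay, I'll hold",
        "statement": "mm-hmm",
    }
    return f"{random.choice(DISFLUENCIES)}, {replies[classify(utterance)]}"

if __name__ == "__main__":
    for line in ["For when?", "Can you hear me?", "Can you hold?"]:
        print(f"Receptionist: {line}")
        print(f"Assistant:    {respond(line)}")
```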
'Talking to an AI' concerns
Duplex will place its call from an outside number when its user asks it to complete a task; the user won't be able to listen in or intervene. In cases where the task is too complex or the call goes awry, Google says, the AI will pass the call to a human operator.
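Google describes that handoff only at a high level. One plausible shape for the decision is sketched below; the thresholds and field names are assumptions for illustration, not Google's published design.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.6   # assumed threshold, not a published figure
MAX_REPAIR_TURNS = 2     # how many "can you start over?" turns to tolerate

@dataclass
class TurnState:
    understanding_confidence: float  # speech-understanding confidence, 0..1
    repair_turns: int                # consecutive clarification requests
    task_supported: bool             # is this task within the system's scope?

def should_escalate(state: TurnState) -> bool:
    """Decide whether to pass the live call to a human operator."""
    if not state.task_supported:
        return True
    if state.understanding_confidence < CONFIDENCE_FLOOR:
        return True
    if state.repair_turns > MAX_REPAIR_TURNS:
        return True
    return False

if __name__ == "__main__":
    smooth = TurnState(0.92, 0, True)
    confused = TurnState(0.41, 3, True)
    print(should_escalate(smooth))    # False: the bot keeps handling the call
    print(should_escalate(confused))  # True: hand off to a human operator
```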
Automated voice assistants, such as Amazon's Alexa and Apple's Siri, have quickly become a key part of how people interact with the computers in their lives, and many callers today are familiar with the automated voices of modern-day telemarketers, customer service lines and robocalls.
But Duplex would inject that AI into a new kind of arena, with listeners who have not consented or don't realise they're talking to a machine. Google representatives did not respond to questions about how Duplex would operate in conversation, including whether it would announce that it is not human. Yossi Matias, Google's vice president of engineering, told CNET that the software would "likely" tell the person on the other end that he or she is talking to an AI.
From the charming Samantha of "Her" to the coldly murderous HAL 9000 of "2001: A Space Odyssey," lifelike AI assistants have long served as a hallmark of science fiction, and Duplex's convincing fakery had some listeners unnerved about how far the technology had come. Some listeners said the Duplex calls appeared able to pass a simple "Turing test," the famous yardstick for whether a machine can act or speak so convincingly that it'd be hard to distinguish it from a real person.
"A lot of folks have drawn attention to the risks of AIs masquerading as humans, which Duplex seems to normalise," said Miles Brundage, a research fellow at the University of Oxford's Future of Humanity Institute. "At the very least Google should seriously consider some sort of notification that people are interacting with an AI."
That kind of notification, Brundage said, would help educate people about the advanced state of AI. It would also potentially prevent the kinds of havoc that could result when a machine mimics a human being. In a recent report on "malicious AI," Brundage and his co-authors posited a series of unnerving examples, including how an AI could copy someone's voice to fool a listener or seek information as part of an automated "social engineering attack." A Google official said the company takes the issue of synthetic content being used to spread misinformation very seriously.
Ethical concerns
It's also unclear how Google would navigate legal concerns such as the US Federal Communications Commission's telemarketing and robocall laws. Those rules ban companies from using an "artificial or prerecorded voice" to make calls to certain establishments and set guidelines for how similar voice systems should operate, including requiring that each call clearly identify the "business, individual or other entity initiating the call."
A Google official said the service was different from those calls because it's not for solicitation or telemarketing. The official added that the automated assistant will only call companies on phone numbers offered to the public for booking appointments or doing business.
She cited a scene from the TV show "The Office," in which a scheming assistant to the regional manager, Dwight Schrute, makes 50 restaurant reservations and then sells them off to desperate callers -- what he calls his "perfect Valentine's Day." "People with AI-powered assistants who can easily make those 50 restaurant reservations would harm both businesses and consumers," she said.
AI experts have in recent years called for legal or ethical guidelines that could help curb that kind of mischief. Columbia University professor Tim Wu called in 2017 for "Blade Runner" laws that would prevent companies from deploying human-mimicking machines that hide their true identity.
There's a natural tension in those kinds of rules: Google wants its AI to sound as convincing - and, yes, lifelike - as possible, so that the listener keeps responding and, hopefully, doesn't hang up.
But Brynjolfsson thinks there should be a middle ground to ensure humans aren't left wondering who, or what, they're talking to. Regulation, he said, may be necessary to require bots to self-identify.
"At a bare minimum, a bot should answer truthfully if a human asks whether it's a bot," Brynjolfsson said. "Or perhaps more radically, bots should be required to have a recognizable voice style and or text style and/or appearance. I don't think this would harm their efficiency. ... In fact, it would likely improve it."