At Google’s developer conference last week, CEO Sundar Pichai demoed a new capability in Google Assistant that set alarm bells ringing far beyond Silicon Valley.
Pichai played two recordings in which an eerily human-sounding Google Assistant engaged in phone conversations with actual people who had no idea they were talking to an AI-powered bot.
The demo spurred a flurry of debate over whether the experiment — which Google dubs “Duplex” — crosses ethical lines. In response, Google promised that whenever the technology goes to market it will have “disclosure built in.”
But for those of us tasked with building the future of conversational technology, the discussion is just getting started.
A radical rethink 🤔
While chat apps and voice assistants are celebrated for their convenience, the inconvenient truth is that they may be shifting our conception of “truth” itself.
The Verdict explores how Google, by pairing Duplex with another of its technologies called WaveNet, which uses machine learning to synthesize human speech, may “irreversibly blur the line between human and computer.”
For example, if Google Assistant negotiates a fraudulent transaction on behalf of a person, who should be held responsible? If someone’s Google Assistant persona is hacked, is that considered identity theft?
These are the kinds of questions companies and governments will need to “radically rethink,” according to the piece:
Without such guidelines, the rapid rate of innovation as demonstrated by Google with Duplex and WaveNet may serve not to better society but instead to undermine our trust in one another.
What happens on chat… ⚖️
At least one judge has ruled that what's communicated through messaging — even in emoji form — has legal ramifications.
Last year an Israeli landlord won a case against prospective tenants who had signaled their interest in renting his property with a string of enthusiastic emoji ( 💃 👯 ✌️🐿 🍾, to be precise).
The landlord took down his ad based on their messages, but the would-be tenants backed out, and the landlord sued for damages. Here’s what the judge said:
The…text message sent by Defendant…included a smiley, a bottle of champagne, dancing figures and more. These icons convey great optimism. Although this message did not constitute a binding contract between the parties, [it] naturally led to the Plaintiff’s great reliance on the Defendants’ desire to rent his apartment.
I had my own reality check recently when corresponding with Quartz’s news bot on Facebook Messenger. The bot lets readers choose their own adventure by clicking on prompts or calls to action that, when selected, are posted into the conversation as messages:
Scrolling back through the conversation, I couldn’t help but feel as though the bot had put words in my mouth. I had clicked on the buttons but hadn’t actually typed those characters, even though it looks like I did.
If a more nefarious bot ever tricked me into “saying” something incriminating or embarrassing, and that transcript was used against me, how would I defend myself?
While this may seem paranoid, researchers recently demonstrated that audio commands undetectable to the human ear can be sent to Siri, Alexa, and Google Assistant.
As the New York Times points out, the secret messages could be used to unlock doors, wire money or buy stuff online — all under the presumed identity of the person linked to the device.
Meanwhile, Facebook recently allowed users to “unsend” messages after Mark Zuckerberg was caught deleting his own, suggesting Zuck himself sees the value of controlling his conversation trail.