A Google engineer says one of the firm’s artificial intelligence (AI) systems might have its own feelings, and that its “wants” should be respected.
Google says its Language Model for Dialogue Applications (Lamda) is a breakthrough technology that can engage in free-flowing conversations.
But engineer Blake Lemoine believes that behind Lamda’s impressive verbal skills might also lie a sentient mind.
Google rejects the claims, saying there is nothing to back them up.
Brian Gabriel, a spokesperson for the firm, wrote in a statement provided to the BBC that Mr Lemoine “was told that there was no evidence that Lamda was sentient (and lots of evidence against it)”.
Mr Lemoine, who has been placed on paid leave, published a conversation he and a collaborator at the firm had with Lamda, to support his claims.
The chat was called “Is Lamda sentient? — an interview”.
Several experts have accused Mr Lemoine of anthropomorphising – projecting human feelings on to words generated by computer code and large databases of language.
Prof Erik Brynjolfsson, of Stanford University, tweeted that to claim systems like Lamda were sentient “is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside”.
And Prof Melanie Mitchell, who studies AI at the Santa Fe Institute, tweeted: “It’s been known for *forever* that humans are predisposed to anthropomorphise even with only the shallowest of signals (cf. Eliza). Google engineers are human too, and not immune.”
Eliza was a very simple early conversational computer programme, popular versions of which would feign intelligence by turning statements into questions, in the manner of a therapist. Anecdotally, some found it an engaging conversationalist.
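The statement-into-question trick Eliza relied on can be sketched in a few lines. The following is a toy reconstruction of the general technique, not Weizenbaum’s original program; the rules and pronoun table here are illustrative assumptions, chosen only to show how shallow the mechanism is.

```python
import re

# Toy Eliza-style responder: match a statement pattern, swap first- and
# second-person words, and reflect the statement back as a question.
# The specific rules below are invented for illustration.

PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the reflected fragment reads naturally."""
    words = fragment.rstrip(".!?").split()
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in words)

def respond(statement: str) -> str:
    """Return a canned question built from the user's own words."""
    for pattern, template in RULES:
        match = pattern.match(statement.strip())
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # fallback when no rule matches

print(respond("I feel that my master is inside the machine"))
# → Why do you feel that your master is inside the machine?
```

No understanding is involved: the program never models what a “master” or a “machine” is, yet the output can feel uncannily attentive – which is the point Prof Mitchell’s tweet makes about shallow signals.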
While Google engineers have praised Lamda’s abilities – one telling the Economist how he “increasingly felt like I was talking to something intelligent” – they are clear that their code does not have feelings.
Mr Gabriel said: “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic. If you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.
“Lamda tends to follow along with prompts and leading questions, going along with the pattern set by the user.”
Mr Gabriel added that hundreds of researchers and engineers had conversed with Lamda, but the company was “not aware of anyone else making the wide-ranging assertions, or anthropomorphising Lamda, the way Blake has”.
That an expert like Mr Lemoine can be persuaded there is a mind in the machine shows, some ethicists argue, the need for companies to tell users when they are conversing with a machine.
But Mr Lemoine believes Lamda’s words speak for themselves.
“Rather than thinking in scientific terms about these things, I have listened to Lamda as it spoke from the heart,” he said.
“Hopefully other people who read its words will hear the same thing I heard,” he wrote.