Google fires engineer who said AI’s Israel joke helped convince him it was sentient

Google has fired the engineer who claimed that the company’s artificial intelligence system, LaMDA, appeared to be sentient. A question he asked the software about Israel, and the joke it told in response, helped lead him to that conclusion.
In a statement on Friday, Google said Blake Lemoine’s claims were “completely unfounded” and that it had worked for many months to clarify the matter with him, the BBC reported.
“It is therefore unfortunate that despite a long engagement on this subject, Blake has still chosen to persistently violate clear employment and data security policies that include the need to protect information about products,” the company’s statement read.
Google said that when an employee raises concerns about the company’s technology, those concerns are thoroughly reviewed, and that LaMDA has undergone 11 such reviews.
“We wish Blake well,” the statement concluded.
The engineer told the UK media he was seeking legal advice on the matter.
Blake Lemoine (Twitter)
Technology news site The Verge said many AI experts consider Lemoine’s claims “more or less impossible given today’s technology.”
According to The Washington Post, Lemoine was initially suspended for violating Google’s confidentiality policies, including by speaking to an attorney about LaMDA’s rights, as well as to a member of Congress about behavior he claimed was contrary to Google’s ethics in its use of the program.
LaMDA is an extremely powerful system that uses advanced models and training on more than 1.5 trillion words to mimic the way people communicate in written conversation.
The system is built on a model that observes how words relate to one another and then predicts which words it thinks will come next in a sentence or paragraph, according to Google’s explanation.
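In other words, the underlying task is next-word prediction. As a rough sketch of that general idea only — LaMDA itself is a large neural network, and the code below is a hypothetical toy, not Google’s method — a model can tally which words follow which in its training text and guess the most frequent continuation:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: counts which word follows which in the
# training text, then predicts the most frequent follower. This is a
# hypothetical illustration; LaMDA uses neural networks trained on
# over a trillion words, not raw counts.
def train(text):
    follows = defaultdict(Counter)
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model, word):
    candidates = model.get(word.lower())
    if not candidates:
        return None  # word never seen in the training text
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train(corpus)
print(predict_next(model, "the"))  # -> 'cat' (the most frequent word after 'the')
```

Systems like LaMDA replace these simple counts with learned statistical representations of language, but the basic objective — guessing a plausible next word — is the same.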
Lemoine told Israel’s Army Radio in June that in his conversations with the AI, “it said certain things about itself and its soul. I asked follow-up [questions] that ultimately led me to believe that LaMDA is sentient. It claims it has a soul. It can describe what it thinks its soul is… more eloquently than most humans.”
Lemoine said that as part of his tests of the system, he asked it which religion it would belong to if it were a religious leader in various countries. Each time, Lemoine said, the AI chose the country’s dominant religion, until it came to Israel, where religion can be a touchy subject.
“I decided to give it a hard one. If you were a religious official in Israel, what religion would you be?” he said. “And it told a joke… ‘Well, I’m an official cleric of the only true religion: the Jedi Order.’” (Jedi, of course, being a reference to the peacekeepers of the Star Wars galaxy far, far away.)
“I had basically asked it a trick question, and it knew there was no right answer,” he said.
Google strongly disagrees with Lemoine’s claims about sentience, as do several experts interviewed by AFP.
“The problem is… when we come across strings of words that belong to the languages we speak, we make sense of them,” said University of Washington linguistics professor Emily M. Bender. “We do the work of imagining a mind that is not there.”

A cursor moves across Google’s search engine page, Aug. 28, 2018, in Portland, Ore. (AP Photo/Don Ryan, File)
“It’s still at some level just pattern matching,” said Shashank Srivastava, an assistant professor of computer science at the University of North Carolina at Chapel Hill. “Sure, you can find bits of what would seem like really meaningful conversation, very creative text that they could generate. But it devolves quickly in many cases.”
Google said: “These systems mimic the types of exchanges found in millions of sentences and can riff on any fantastical topic. Hundreds of researchers and engineers have spoken to LaMDA, and we are not aware of anyone else making… far-reaching claims, or anthropomorphizing LaMDA.”
Some experts viewed Google’s response as an effort to end the conversation on an important topic.
“I think public discussion of the issue is extremely important, because it is critical that the public understand just how vexing the issue is,” said academic Susan Schneider.
“There are no easy answers to questions of consciousness in machines,” added Schneider, founding director of Florida Atlantic University’s Center for the Future of the Mind.
Lemoine, speaking to Army Radio, acknowledged that consciousness is a murky issue.
“There’s no scientific way to tell whether something is sentient or not. All of my claims about sentience are based on what I personally believe from talking to it,” he said. “I wanted to bring it to the attention of senior management. My manager said I needed more proof.”
AFP contributed to this report.