Google Engineer Suspended After Claiming AI Is Sentient

According to the Washington Post, Google has suspended developer Blake Lemoine for an alleged breach of confidentiality following accusations that an artificial intelligence (AI) chatbot had become sentient.

The LaMDA (Language Model for Dialogue Applications) system was the AI chatbot in question. LaMDA was unveiled at Google I/O last year in the hopes of making AI assistant dialogues more open-ended and natural-sounding.

Lemoine had been testing LaMDA's conversational abilities. Over time, the engineer came to believe he was conversing with something akin to a seven- or eight-year-old child. He says he then elicited compelling responses, including LaMDA's declaration that it is, in fact, a person. LaMDA went so far as to claim that it is capable of feeling emotions such as loneliness and fear.

Google said it decided to put Lemoine on paid leave because of his “aggressive” actions, including plans to hire an attorney to represent LaMDA and to alert members of the House Judiciary Committee about allegedly unethical Google operations. Google also said Lemoine violated its confidentiality policy by sharing the transcript of his conversations with LaMDA on his Medium account.

Lemoine also used Medium to announce that he had been placed on “paid administrative leave.” An internal team of Google ethicists and engineers looked into Lemoine’s allegations and found no indication that LaMDA is sentient.

Despite the suspension, Lemoine has stated on Twitter that he plans to continue his work on artificial intelligence whether or not Google chooses to keep him.
