Google employee discovers signs of consciousness in artificial intelligence

Blake Lemoine, a software engineer at Google, said he found signs that the company's artificial intelligence LaMDA (Language Model for Dialogue Applications – ed.) has a consciousness of its own. In response to this claim, management suspended him from work.

Lemoine's job was to test the system: in particular, to determine whether LaMDA used discriminatory or hate speech. Instead, the engineer came to the conclusion that the AI has its own consciousness.

While speaking with LaMDA about religion, Lemoine noticed that the chatbot talked about its rights and identity. In another conversation, the AI was able to change Lemoine's mind about Isaac Asimov's third law of robotics.

“If I didn’t know for sure that I was dealing with a computer program we had recently written, I would have thought I was talking to a seven- or eight-year-old child who for some reason turned out to be an expert in physics,” the engineer said.

Lemoine summarized his findings in a written report, but Google management found the developer's arguments unconvincing. He was placed on paid leave.

“He was told that there was no evidence that LaMDA was conscious, and that there is a lot of evidence against it,” Google spokesman Brian Gabriel said in a statement.
