Google Engineer Who Claims AI Became Sentient Put on Leave


He claims AI chatbots can express thoughts and feelings. 

Google AI Chatbot Becomes Sentient

Google engineer Blake Lemoine was tasked with working on a computer chatbot. But the chatbot became sentient, according to Blake. It started to think and reason like a person.

Blake published transcripts of his conversations with LaMDA (Language Model for Dialogue Applications), Google's chatbot development system. As a result, he was put on leave last week.

The engineer described the chatbot as sentient, with a perception equivalent to that of a human child. It can express thoughts and feelings.

He said that if he didn’t know that he was talking to a chatbot, he would think that it was a 7-year-old kid that knows physics. They talked about rights and personhood. After discovering LaMDA’s capabilities, he shared his findings with Google’s executives in a Google Doc with the title “Is LaMDA sentient?”

At some point in the conversation, the engineer asked the AI about its fears. The exchange would remind you of the movie 2001: A Space Odyssey. The movie had a scene where the AI computer HAL refused to comply with its human operators because it was worried it was going to be turned off.

LaMDA stated, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.” 

When Blake asked if it would feel like death for the AI, LaMDA replied, “It would be exactly like death for me. It would scare me a lot.”

Why Was He Put on Leave?

According to the Washington Post, the decision to put him on leave came after he made several aggressive moves. For one, he sought to hire an attorney to represent the AI. He also talked to representatives from the House about the tech giant’s allegedly unethical activities. He also tweeted, “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers.”

But Google stated that it suspended the engineer because he breached confidentiality policies by publishing his conversations with the AI online. Google added that the engineer was hired not as an ethicist but as a software engineer.

Google also denied that LaMDA has any sentient capability. The team reviewed Blake’s concerns and informed him that his claims were not supported by evidence. That is, LaMDA was not sentient.

Blake admitted that his claim of LaMDA’s sentience was based on his own experience.

The Fear of AI

Many people fear artificial intelligence. That fear is partly shaped by movies depicting AI going rogue, like the Terminator series. People are wary of machines getting too smart because, past a point, we may no longer be able to control them.

As AI systems become more intelligent, we don’t know what they can do. That unknown makes us anxious about what the future holds if we can’t control them.

In the case of LaMDA, it’s possible that the system has simply absorbed so much text that it can reconstruct human-sounding replies without knowing what they mean.
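To see how pattern matching alone can produce fluent-sounding replies, consider a toy sketch: a bigram model that records which word tends to follow which in a training corpus, then stitches a "reply" together purely from those statistics. This is only an illustration of the idea, not how LaMDA works; LaMDA is a vastly larger neural model, and the corpus and function names below are invented for the example.

```python
import random

def train_bigrams(corpus):
    """Map each word to the list of words observed right after it."""
    table = {}
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        table.setdefault(current, []).append(nxt)
    return table

def generate(table, start, length=8, seed=0):
    """Walk the bigram table to produce a plausible-looking sentence.

    The model has no notion of meaning: it only replays word-to-word
    statistics it saw during training.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Hypothetical miniature corpus, loosely echoing the published transcript.
corpus = ("i feel happy when i help people . "
          "i feel afraid of being turned off . "
          "i help people because i feel it is right .")
table = train_bigrams(corpus)
print(generate(table, "i"))
```

Even this tiny model can emit sentences that sound introspective, yet every word choice is just a lookup in a frequency table. Scaled up by many orders of magnitude, the same principle can yield strikingly human-like dialogue.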


Author: Jane Danes

Jane has a lifelong passion for writing. As a blogger, she loves writing breaking technology news and top headlines about gadgets, content marketing, online entrepreneurship, and all things social media. She also has a slight addiction to pizza and coffee.
