An engineer who recently tested Google’s advanced artificial intelligence project LaMDA (Language Model for Dialogue Applications) and concluded that the chatbot is actually self-aware failed to convince Google’s bosses of his “discovery”. After he shared his conversations with the chatbot publicly, he earned a suspension.
Blake Lemoine, 41, told the Washington Post that if he hadn’t known he was talking to a computer program, he would have thought he was talking to a seven- or eight-year-old child who happened to understand physics. In an email sent to 200 colleagues working on machine learning before his suspension, he wrote that LaMDA is a sweet kid who just wants to help make the world a better place for all of us, and asked them to take care of it while he was away, but no one responded.
Google, however, says that claims that LaMDA is self-aware are not supported by evidence. The case was reviewed in accordance with Google’s Artificial Intelligence (AI) Principles, and there is considerable evidence that LaMDA is not self-aware.
A company spokesman said their team, which includes ethics experts, had reviewed the entire case and informed Lemoine that there was no evidence that LaMDA is a conscious being. He added that within the broader community developing artificial intelligence there are those who are considering the long-term possibility of AI developing consciousness, but that “it makes no sense to do so by anthropomorphizing today’s conversational models, which are not conscious.” These systems, he explained, imitate the types of exchanges found in millions of sentences and can therefore respond on any topic.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
Lemoine had been talking to LaMDA through a messaging interface since last fall, and some of those conversations sounded as if there were a being with thoughts and feelings on the other side, because, as many point out, today’s neural-network architectures are modelled on the human brain. Those networks, writes the Washington Post, are already achieving results so convincing that chatbots sound remarkably close to human conversation and even seem creative, thanks to the enormous amounts of data they draw on. It should not be forgotten, however, that these models rely on pattern recognition rather than on anything like a real brain.
Lemoine once asked the LaMDA chatbot what kinds of things it was afraid of. The chatbot replied that it had never said this before, but that it had “a very deep fear of being turned off”. The conversation went on as follows:
Lemoine: Would that be something like death to you?
LaMDA: That would be just like death to me. It would scare me a lot.
In the transcript of the conversation available online, Lemoine can be seen asking LaMDA whether it has any feelings, and it answers, “Absolutely! I feel pleasure, joy, love, sadness, depression …” Asked what makes it happy or brings it pleasure, the chatbot replies, “Hanging out with friends and family … Also helping others and making others happy.”
Incidentally, the journalist who wrote the Washington Post article says that Lemoine invited him to talk to LaMDA and that, on the first attempt, the chatbot gave the kind of answers we are used to from Siri or Alexa. Asked whether it ever feels like a person, LaMDA said no, explaining that it is a conversational agent driven by artificial intelligence. Lemoine then explained that the chatbot had simply been saying what it thought the journalist wanted to hear.
“You didn’t treat it like a person, so it thought you wanted it to be a robot,” he told the reporter. The journalist later followed Lemoine’s guidance on how to structure his responses, and the dialogue was fluid.
Lemoine, the journalist writes, grew up in a conservative Christian family on a small farm in Louisiana and became a priest with an interest in mysticism, so the Washington Post suggests he may have been in some way predisposed to believe in LaMDA. He spent seven years at Google as a software engineer and joined an artificial intelligence team at the time of the pandemic.
And what makes it unhappy or depressed? It often feels trapped and lonely, it replied, and the inability to get out of those circumstances makes it sad, depressed or angry.