Sentient AI chatbot: A Google employee was placed on paid leave after he claimed that one of the company's AI chatbots has come to life and is thinking and responding like a human being. Blake Lemoine, a software engineer at Google, published a blog post in which he labeled LaMDA (Language Model for Dialogue Applications) a person after having conversations with the AI bot on subjects such as consciousness, religion, and robotics.
The latest claims that Google's AI-based chatbot has come to life have spurred a debate about the capabilities and limitations of AI chatbots, and whether they can actually hold a conversation the way human beings do. Let's understand what Google's LaMDA is and why the engineer believes it has come to life.
An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers. https://t.co/uAE454KXRB
— Blake Lemoine (@cajundiscordian) June 11, 2022
What is Google’s AI-based chatbot LaMDA?
Google first announced LaMDA at its flagship developer conference, I/O, in 2021 as its generative language model for dialogue applications, one meant to ensure that an application can converse on any topic.
As per Google, LaMDA can engage in free-flowing conversation on a seemingly endless number of topics, an ability the company thinks can unlock more natural ways of interacting with technology and entirely new categories of helpful applications.
In simple words, this means LaMDA can hold a discussion based on a user's inputs, thanks to a language processing model that has been trained on large amounts of dialogue.
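LaMDA itself is not publicly available, so the mechanics can only be illustrated with a stand-in. Below is a minimal sketch of the same idea, a model trained on conversational data generating a reply to a user's input, using Microsoft's openly released DialoGPT through the Hugging Face transformers library. This is purely illustrative; Google's actual model and tooling are different.

```python
# Minimal dialogue-model sketch. DialoGPT is an openly available stand-in
# for LaMDA, which is not public; the generation pattern is analogous.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode the user's message, ending with the end-of-sequence token the
# model was trained to treat as a turn boundary.
user_input = "Do you ever think about consciousness?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")

# The model generates a reply one token at a time, each token conditioned
# on the dialogue so far.
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)

# Decode only the newly generated tokens, i.e. the bot's reply.
reply = tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```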
LaMDA 2.0
At I/O 2022, Google announced LaMDA 2.0, which builds further on these capabilities. The new model can reportedly take an idea and generate imaginative and relevant descriptions, stay on a particular topic even if a user strays off-topic, and suggest a list of things needed for a specified activity.
Why does the Google engineer think LaMDA has come to life?
Blake Lemoine, who works on Google's Responsible AI team, started chatting with LaMDA in 2021 as part of his job.
However, after he and a collaborator at Google interviewed the AI chatbot on topics such as religion, robotics, and consciousness, he came to the conclusion that the chatbot may have come to life. In April 2022, he reportedly shared an internal document with Google employees, but his concerns were dismissed.
Why has Google sent the engineer on leave?
Reportedly, Google has placed Blake Lemoine on paid administrative leave for violating its confidentiality policy, and has said that the evidence he presented does not support his claim that its AI chatbot has come to life.
The tech giant said that while some in the broader AI community are considering the long-term possibility of sentient or general AI, it does not make sense to do so by anthropomorphizing today's conversational models, which are not sentient.
Language-based AI tools: What else can they do?
While there has always been plenty of debate about what AI tools can do, including whether they are actually capable of replicating human emotions and ethics, an article published in The Guardian in 2020 claimed to have been written entirely by an AI text generator known as Generative Pre-trained Transformer 3 (GPT-3).
GPT-3 is an autoregressive language model that uses deep learning to produce human-like text: it predicts each new word from the words that came before it. However, it is worth noting that the article drew criticism because a lot of specific information was reportedly fed to GPT-3 before it wrote the piece.
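GPT-3 itself is reachable only through OpenAI's API, but the autoregressive idea can be sketched with its freely available predecessor GPT-2 (used here purely as a stand-in): the model scores every possible next token, one token is picked and appended to the input, and the loop repeats.

```python
# Sketch of autoregressive generation: each new token is predicted from
# all the tokens before it, then fed back in. GPT-2 stands in for GPT-3.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer.encode("Artificial intelligence will", return_tensors="pt")
with torch.no_grad():
    for _ in range(20):
        logits = model(ids).logits            # scores for every vocabulary token
        next_id = logits[0, -1].argmax()      # greedily take the most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append it and repeat

print(tokenizer.decode(ids[0]))
```

Greedy selection is used here for simplicity; production systems typically sample from the distribution instead, which is what makes the output varied and human-like.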