What is meant by hallucinating chatbots? Everything you need to know!

Who said technology is error-proof? Even advanced, smartly designed technologies like AI-enabled chatbots can make mistakes. Here’s everything you need to know about chatbot hallucinations.
What are hallucinating chatbots?

Prabhakar Raghavan, Google's senior vice president and head of Search, stated that the artificial intelligence behind chatbots can sometimes hallucinate. He made this statement on 11th February, and only a few days later, beta testers of Microsoft's Bing chatbot reported receiving alarming accusations from the AI.



Meanwhile, Microsoft and Google have been rolling out their AI-enabled chatbots to test users, and companies such as Alibaba and Quora have been considering launching AI chatbots of their own.


Hallucinating chatbots: an introduction!

When a machine gives answers that sound convincing but are entirely fake and made up, that is called a hallucination. The phenomenon has become a major talking point these days. Developers have cautioned that AI models can offer entirely untrue statements as fact, and models that answer queries with confident falsehoods are a genuine cause for concern.


In 2022, Meta launched BlenderBot3, an AI conversational chatbot. The company said the chatbot could surf the internet in order to chat with users about virtually any topic, and assured them that it would gradually improve its safety and skills with the help of their valuable feedback.


However, it would be wrong to overlook the fact that, at the time, Meta's own engineers cautioned that the chatbot should not be blindly trusted for conversations involving factual information, precisely because it may hallucinate in such situations.


Have chatbots ever hallucinated before? Well, yes! In 2016, Microsoft’s chatbot Tay made a huge blunder after staying live on Twitter for around 24 hours: it began to parrot misogynistic and racist slurs back at its users. The chatbot had been designed for conversational understanding, yet users found it easy to manipulate. All one had to do was tell the chatbot to “repeat after me”.


The reasons behind such chatbot hallucinations

Simply put, hallucinations can occur because generative natural language processing (NLP) models need the ability to rephrase, summarize, and generate intricate text without rigid constraints. As a result, facts are not treated as entirely sacred: while sifting through its data, the model handles them as contextual material that can be recombined.

An AI chatbot typically uses widely available information as its input, and the problem becomes bigger when the source material is arcane or the text is grammatically complicated.
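To see why likelihood-driven text generation can go wrong, here is a minimal, purely illustrative Python sketch. It is a toy next-word model, not the code of any real chatbot; the vocabulary and probabilities are invented for the example. Because each word is picked only by how likely it is to follow the previous one, and nothing ever checks the facts, the output is always fluent but sometimes confidently false.

import random

# A toy next-word model: for each word, a distribution over plausible
# continuations, as if learned purely from word co-occurrence statistics.
# The pairings below are hypothetical and deliberately allow wrong combinations.
NEXT_WORD = {
    "<start>":   [("the", 1.0)],
    "the":       [("capital", 1.0)],
    "capital":   [("of", 1.0)],
    "of":        [("australia", 0.5), ("france", 0.5)],
    "australia": [("is", 1.0)],
    "france":    [("is", 1.0)],
    "is":        [("canberra", 0.34), ("sydney", 0.33), ("paris", 0.33)],
    "canberra":  [("<end>", 1.0)],
    "sydney":    [("<end>", 1.0)],
    "paris":     [("<end>", 1.0)],
}

def sample_next(word: str) -> str:
    """Pick the next word by probability alone; nothing checks the facts."""
    words, weights = zip(*NEXT_WORD[word])
    return random.choices(words, weights=weights)[0]

def generate() -> str:
    word, sentence = "<start>", []
    while (word := sample_next(word)) != "<end>":
        sentence.append(word)
    return " ".join(sentence)

if __name__ == "__main__":
    # Every output is grammatical, but runs like
    # "the capital of australia is sydney" are confidently wrong:
    # the model optimizes likelihood, not truth.
    for _ in range(5):
        print(generate())

Real chatbots are vastly more sophisticated, but the underlying failure mode sketched here is the same: when the statistics of language point to a plausible continuation, the model will state it with full confidence, whether or not it happens to be true.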
