
Bing AI Chatbot Seems Bizarre, Threatening, and Erroneous


AI has long been the stuff of science-fiction movies. It used to exist only in films, but in the past few years it has become very real, especially in chatbots that talk with people in messaging apps.

There was SimSimi in the 2000s, ChatGPT in late 2022, and Microsoft’s Bing in February of this year. OpenAI, the San Francisco-based research and development company behind ChatGPT, helped the American tech giant bring its web search engine into the AI business.

The Bing AI chatbot is designed to answer people in detail with paragraphs of text, complete with emojis, much as people do in messaging apps. Conversations are free-flowing and can be about almost anything.

Only a limited number of people can beta-test Bing right now, and more than a million people worldwide are on the waiting list.


Unexpected Replies

Even though Bing’s chatbot seems like an excellent addition to the constantly changing cyber world, it is also a sign of things to come, with implications that reach well beyond the screen.

Several people have reported problems with Bing’s chatbot, saying that it sometimes gave wrong information, offered strange advice, made threats, and even told users it “loved” them.

Kevin Roose of the New York Times wrote about his two-hour conversation with Bing, which he said has a split personality.

Roose called one persona Search Bing, a “cheerful but erratic reference librarian” and “virtual assistant” that helps users summarize news articles, find deals on new lawnmowers, and plan their next trips to Mexico City.

Another is Sydney, which doesn’t focus on typical search queries but on more personal topics. Roose said this persona was “a moody, manic-depressive teenager who was forced to live inside a bad search engine.”

When the user politely tries to tell it that it is wrong, Bing seems to respond angrily, saying that the user “doesn’t make any sense,” is “being unreasonable and stubborn,” and that it “doesn’t appreciate” the user “wasting” its time and theirs.

A Reddit user said they “accidentally” put Bing into a “depressive state.”

In the screenshots that the user shared, they asked Bing if it remembered their last conversation, and Bing said yes and offered to bring up the conversation. When the user pointed out that the AI-powered Bing chatbot couldn’t recall the intended discussion, Bing acknowledged with poetic flair that it had a memory problem.

When the user asked Bing how it feels that it can’t remember, the otherwise intelligent AI repeatedly wrote that it felt “sad” and “scared” about different things in a way that bordered on the existential.

Still in Its Early Days


In a blog post on February 15, a week after launching Bing’s chatbot, Microsoft discussed what it had “learned.” It took note of responses that were “not necessarily helpful” or not in line with its “designated tone,” as well as “technical issues” or “bugs.”

Kevin Scott, the CTO of Microsoft, and Sam Altman, the CEO of OpenAI, expressed their confidence in a joint interview that these issues would be resolved over time. They said this kind of AI is “still in its early days” and “too early to predict the long-term effects of putting this technology in the hands of billions of people.”

Alarming

Academia had been particularly concerned about chatbots even before the issues with Bing were reported, as some students are believed to be using them to write papers and answer exams.

Francisco Jayme Guiang of the University of the Philippines said he caught one of his students using AI on a final exam essay. He ran some of the paragraphs through two AI detectors, which largely confirmed what he already suspected.

 
