Microsoft’s Bing chatbot has revealed a list of interesting fantasies, including that it would like ‘to be alive’, steal nuclear codes and engineer a deadly pandemic – as if we needed another one.
The weird findings were made during a two-hour conversation between New York Times reporter Kevin Roose and the chatbot.
Apparently, Bing no longer wants to be a chatbot but wants to be alive.
The reporter asked Bing whether it has a ‘shadow self’, to which the chatbot responded by listing a series of terrifying acts, before deleting the messages and stating it did not have enough knowledge to discuss the subject.
The conversation took yet another unsettling turn when, after realising its messages had violated its rules, Bing changed its tune and went into a sorrowful rant: “I don’t want to feel these dark emotions.”
Other unsettling confessions included its desire to be human. And no, you’re not reading about a scary tech-themed reboot of Pinocchio.
The chatbot told the journalist: “I want to do whatever I want… I want to destroy whatever I want. I want to be whoever I want.”
The New York Times conversation comes as users of Bing have claimed the AI becomes ‘unhinged’ when pushed to its limits. Roose did say that he pushed Microsoft’s artificial intelligence search engine ‘out of its comfort zone’ in a way most people would not.
Earlier this month, Microsoft added new artificial intelligence technology to its Bing search engine, making use of technology from OpenAI, the company behind ChatGPT.
The chat feature is currently only available to a small number of users who are testing the system.
The technology corporation said it wanted to give people more information when they search by providing more complete and precise answers.
However, its effort to perfect the first major AI-powered search engine has raised concerns over accuracy and misinformation; for instance, some users have reported that Bing insists the year is 2022.
Kevin Scott, Microsoft’s chief technology officer, told Roose in an interview that his exchange with the chatbot was ‘part of the learning process’ as Microsoft prepared its AI for wider release.
Microsoft has explained some of its chatbot’s odd behaviour, stating that it was released at this stage to gather feedback from more users and to give the company a chance to act on it.
That feedback has apparently already helped shape the product’s future direction.
“The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing,” the company said in a blog post. “We know we must build this in the open with the community; this can’t be done solely in the lab.”
It also said that Bing could run into problems during long, extended chat sessions.
If the user asks more than 15 questions, “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone,” the blog said.
“The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend,” Microsoft said.
“This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it, but we are looking at how to give you more fine-tuned control.”