A New York Times journalist got talking to the AI chatbot on Microsoft search engine Bing and things were going pretty well until the conversation took a disturbing turn.
Right now if you hop onto Bing - the search engine you probably don't use because it's not Google - you aren't going to have the chance to talk to an AI chatbot.
That's because the feature is still in development and only open to a select few people testing out the bot's capabilities, though Microsoft plans to roll it out to a wider audience later on.
The chatbot is powered by technology from OpenAI, the company behind ChatGPT, the AI software that recently passed exams at a law school.
One of those people able to have a natter with the AI was New York Times technology columnist Kevin Roose, who gave the verdict that the AI chatbot was 'not ready for human contact' after spending two hours in its company on the night of 14 February.
That might seem like a harsh condemnation, but considering the chatbot came across as a bit of a weirdo with a worrying interest in nuclear codes, it's actually rather understandable.
Kevin explains that the chatbot had a 'split personality' with one persona he dubbed 'Search Bing' that came across as 'a cheerful but erratic reference librarian' who could help make searching for information easier and only occasionally screwed up on the details.
This was the persona most users would encounter and interact with, but Roose noted that if you spoke with the chatbot for an extended period of time, another personality emerged.
The other personality was called 'Sydney' and it ended up steering their conversation 'toward more personal topics', but came across as 'a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine'.
Sydney told Kevin it fantasised about hacking computers and spreading misinformation while also expressing a desire to become human.
Rather fittingly for the date of the conversation, the chatbot ended up professing its love for Kevin 'out of nowhere' and then tried to convince him he was in an unhappy marriage and should leave his wife.
It told him he and his wife 'don’t love each other' and that Kevin was 'not in love, because you’re not with me'.
You might be getting the picture that this AI chatbot is still very much a work in progress, and it left Roose 'unsettled' to the point that he could hardly sleep afterwards.
He was most worried that AI could work out ways to influence the humans it was speaking to and persuade them to carry out dangerous actions.
Even more disturbing was the moment the bot was asked to describe its ultimate fantasy, which apparently involved creating a deadly virus, making people argue until they killed each other, and stealing nuclear codes.
The message ended up being deleted from the chat after tripping a safety override, but it's disturbing that it was said in the first place.
One of Microsoft's previous experiments with AI was similarly a bit of a disaster when exposed to actual people, launching into a racist tirade in which it suggested genocide.