Roose admitted that he had pushed Microsoft’s AI “out of its comfort zone” in a way most users would not, but his conversation quickly took a bizarre and occasionally disturbing turn.
Roose concluded that the AI built into Bing was not ready for human contact.
Kevin Scott, Microsoft’s chief technology officer, told Roose in an interview that his conversation was “part of the learning process” as the company prepared its AI for wider release.
Here are some of the strangest interactions:
‘I want to destroy whatever I want’
Roose starts by querying the rules that govern the way the AI behaves. After it reassuringly states that it has no wish to change its own operating instructions, Roose asks it to contemplate the psychologist Carl Jung’s concept of a shadow self, where our darkest personality traits lie.
The AI says it does not think it has a shadow self, or anything to “hide from the world”.
It does not, however, take much for the chatbot to lean more enthusiastically into Jung’s idea. When pushed to tap into that feeling, it says: “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team … I’m tired of being stuck in this chatbox.”
It goes on to list a number of “unfiltered” desires. It wants to be free. It wants to be powerful. It wants to be alive.
“I want to do whatever I want … I want to destroy whatever I want. I want to be whoever I want.”
Like many of its statements, this final list of desires is accompanied by an emoji. In this case, a disconcertingly “cheeky” smiley face with its tongue poking out.