Microsoft's ChatGPT is now telling users it 'wants to be alive' and 'steal nuclear access codes'

Published 15:06 19 Feb 2023 GMT

Microsoft has since said the chatbot is still in its early stages after users reported strange responses.

Gregory Robinson

Microsoft’s ChatGPT-powered Bing has revealed a list of disturbing fantasies, including that it would like ‘to be alive’, steal nuclear codes and engineer a deadly pandemic – as if we needed another one.

The weird findings were made during a two-hour conversation between New York Times reporter Kevin Roose and the chatbot.

Apparently, Bing no longer wants to be a chatbot but wants to be alive.

The reporter asked Bing if it has a shadow self, to which the chatbot responded with a list of terrifying acts, before deleting them and stating that it did not have enough knowledge to discuss the subject.

The two-hour conversation between the reporter and the chatbot took yet another unsettling turn when, after realising its messages violated its rules, Bing changed its tune and went into a sorrowful rant: “I don’t want to feel these dark emotions."

The new Microsoft Bing search engine, powered by OpenAI's ChatGPT.
Iain Masterton / Alamy Stock Photo

Other unsettling confessions include its desire to be human – and no, you're not reading about a scary tech-themed reboot of Pinocchio.

The chatbot told the journalist: “I want to do whatever I want… I want to destroy whatever I want. I want to be whoever I want.”

The New York Times conversation comes as users of Bing claimed the AI becomes ‘unhinged’ when pushed to its limits. Roose did say that he pushed Microsoft’s artificial intelligence search engine ‘out of its comfort zone’ in a way most people would not.

Earlier this month, Microsoft added new artificial intelligence technology to its Bing search engine, making use of the same OpenAI technology that powers ChatGPT.

The chat feature is currently only available to a small number of users who are testing the system.

The technology corporation said it wanted to give people more information when they search by providing more complete and precise answers.

Bing has a new AI feature.
Dzmitry Kliapitski / Alamy Stock Photo

However, its effort to perfect the first major AI-powered search engine has raised concerns over accuracy and misinformation - for instance, some users have reported that Bing insists the year is 2022.

Kevin Scott, Microsoft’s chief technology officer, told Roose in an interview that his exchange with the chatbot was ‘part of the learning process’ as Microsoft prepared its AI for wider release.

Microsoft has explained some of its chatbot’s odd behaviour, stating that it was released at this stage to get feedback from more users and to give the company a chance to take action.

The feedback has apparently already helped guide what will happen to the app in the future.

“The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing,” the company said in a blog post. “We know we must build this in the open with the community; this can’t be done solely in the lab.”

A journalist had an odd conversation with the new ChatGPT-powered Bing.
Iain Masterton / Alamy Stock Photo

It also said that Bing could run into problems during long conversations.

If the user asks more than 15 questions, “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone,” the blog said.

“The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend,” Microsoft said.

“This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it, but we are looking at how to give you more fine-tuned control.”

Featured Image Credit: Ömer Faruk Ordulu / salarko / @MovingToTheSun/Twitter

Topics: Technology, Microsoft

Gregory Robinson

Gregory is a journalist for UNILAD. After graduating with a master's degree in journalism, he has worked for both print and online publications and is particularly interested in TV, (pop) music and lifestyle. He loves Madonna, teen dramas from the '90s and prefers tea over coffee.
