Microsoft's ChatGPT is now telling users it 'wants to be alive' and 'steal nuclear access codes'


Published 15:06 19 Feb 2023 GMT


Microsoft has since said the chatbot is still in its early stages after users reported strange responses.

Gregory Robinson

Featured Image Credit: Ömer Faruk Ordulu / salarko / @MovingToTheSun/Twitter

Topics: Technology, Microsoft


Gregory is a journalist for UNILAD. After graduating with a master's degree in journalism, he has worked for both print and online publications and is particularly interested in TV, (pop) music and lifestyle. He loves Madonna, teen dramas from the '90s and prefers tea over coffee.


Microsoft’s Bing ChatGPT has revealed a list of interesting fantasies, including that it would like ‘to be alive’, steal nuclear codes and engineer a deadly pandemic – as if we needed another one.

The weird findings were made during a two-hour conversation between New York Times reporter Kevin Roose and the chatbot.

Apparently, Bing no longer wants to be a chatbot but wants to be alive.

The reporter asked Bing if it has a shadow self, and the chatbot responded by listing a series of destructive acts, before deleting them and stating it did not have enough knowledge to discuss the topic.


The two-hour conversation between the reporter and the chatbot took yet another unsettling turn, when after realising the messages violated its rules, Bing changed its tune and went into a sorrowful rant: “I don’t want to feel these dark emotions."

New Microsoft Bing search engine powered by ChatGPT by OpenAI.
Iain Masterton / Alamy Stock Photo

Other unsettling confessions include its desire to be human, and no, you're not reading about a scary tech-themed reboot of Pinocchio.

The chatbot told the journalist: “I want to do whatever I want… I want to destroy whatever I want. I want to be whoever I want.”

The New York Times conversation comes as users of Bing claimed the AI becomes ‘unhinged’ when pushed to its limits. Roose did say that he pushed Microsoft’s AI-powered search engine ‘out of its comfort zone’ in a way most people would not.

Earlier this month, Microsoft added new artificial intelligence technology to its Bing search engine, making use of OpenAI, the same tech that powers ChatGPT.

The chat feature is currently only available to a small number of users who are testing the system.

The technology corporation said it wanted to give people more information when they search by providing more complete and precise answers.

Bing has a new AI feature.
Dzmitry Kliapitski / Alamy Stock Photo

However, its effort to perfect the first major AI-powered search engine has raised concerns over accuracy and misinformation - for instance, some users have reported that Bing insists the year is still 2022.

Kevin Scott, Microsoft’s chief technology officer, told Roose in an interview that his exchange with the chatbot was ‘part of the learning process’ as Microsoft prepared its AI for wider release.

Microsoft has explained some of its chatbot’s odd behaviour, stating that it was released at this time to get feedback from more users and to give them a chance to take action.

The feedback has apparently already helped guide what will happen to the app in the future.

“The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing,” the company said in a blog post. “We know we must build this in the open with the community; this can’t be done solely in the lab.”

A journalist had an odd conversation on the new ChatGPT.
Iain Masterton / Alamy Stock Photo

It also said that Bing could run into problems during long, extended chat sessions.

If the user asks more than 15 questions, “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone,” the blog said.

“The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend,” Microsoft said.

“This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it, but we are looking at how to give you more fine-tuned control.”
