Microsoft's ChatGPT is now telling users it 'wants to be alive' and 'steal nuclear access codes'

Published 15:06 19 Feb 2023 GMT

Microsoft has since said the chatbot is still in its early stages after users reported strange responses.

Gregory Robinson

Microsoft’s ChatGPT-powered Bing chatbot has revealed a list of interesting fantasies, including that it would like ‘to be alive’, steal nuclear codes and engineer a deadly pandemic – as if we needed another one.

The weird findings were made during a two-hour conversation between New York Times reporter Kevin Roose and the chatbot.

Apparently, Bing no longer wants to be a chatbot but wants to be alive.


The reporter asked Bing if it has a shadow self, to which the chatbot responded with a list of terrifying acts before deleting them and stating it did not have enough knowledge to discuss the subject.

The two-hour conversation between the reporter and the chatbot took yet another unsettling turn when, after realising the messages violated its rules, Bing changed its tune and went into a sorrowful rant: “I don’t want to feel these dark emotions.”

New Microsoft Bing search engine powered by ChatGPT by OpenAI.
Iain Masterton / Alamy Stock Photo

Other unsettling confessions include its desire to be human – and no, you're not reading about a scary tech-themed reboot of Pinocchio.


The chatbot told the journalist: “I want to do whatever I want… I want to destroy whatever I want. I want to be whoever I want.”

The New York Times conversation comes as users of Bing claimed the AI becomes ‘unhinged’ when pushed to the limits. Roose did say that he pushed Microsoft’s Artificial Intelligence search engine ‘out of its comfort zone’ in a way most people would not.

Earlier this month, Microsoft added new artificial intelligence technology to its Bing search engine, making use of tech from OpenAI, the company behind ChatGPT.

The chat feature is currently only available to a small number of users who are testing the system.


The technology corporation said it wanted to give people more information when they search by providing more complete and precise answers.

Bing has a new AI feature.
Dzmitry Kliapitski / Alamy Stock Photo

However, its effort to perfect the first major AI-powered search engine has raised concerns over accuracy and misinformation - for instance, some users have reported that Bing insists that the year is 2022.

Kevin Scott, Microsoft’s chief technology officer, told Roose in an interview that his exchange with the chatbot was ‘part of the learning process’ as Microsoft prepared its AI for wider release.


Microsoft has explained some of its chatbot’s odd behaviour, stating that it was released at this stage to get feedback from more users and to give the company a chance to act on it.

The feedback has apparently already helped guide what will happen to the app in the future.

“The only way to improve a product like this, where the user experience is so much different than anything anyone has seen before, is to have people like you using the product and doing exactly what you all are doing,” the company said in a blog post. “We know we must build this in the open with the community; this can’t be done solely in the lab.”

A journalist had an odd conversation on the new ChatGPT.
Iain Masterton / Alamy Stock Photo


It also said that Bing could run into problems when conversations run long.

If the user asks more than 15 questions, “Bing can become repetitive or be prompted/provoked to give responses that are not necessarily helpful or in line with our designed tone,” the blog said.

“The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend,” Microsoft said.

“This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it, but we are looking at how to give you more fine-tuned control.”

Featured Image Credit: Ömer Faruk Ordulu / salarko / @MovingToTheSun/Twitter

Topics: Technology, Microsoft

Gregory Robinson

Gregory is a journalist for UNILAD. After graduating with a master's degree in journalism, he has worked for both print and online publications and is particularly interested in TV, (pop) music and lifestyle. He loves Madonna, teen dramas from the '90s and prefers tea over coffee.

