Microsoft AI chatbot says it wants to 'steal nuclear codes' before hitting on reporter


Updated 07:49 17 Feb 2023 GMT | Published 07:44 17 Feb 2023 GMT


Maybe it's for the best we treat AI with extreme caution, especially around explosive subjects

Joe Harker

A New York Times journalist got talking to the AI chatbot on Microsoft's search engine Bing, and things were going pretty well until the conversation took a disturbing turn.

Right now, if you hop onto Bing - the search engine you probably don't use because it's not Google - you won't have the chance to talk to an AI chatbot.

That's because the feature is still in development and only open to a select few people testing out the bot's capabilities, though Microsoft plans to roll it out to a wider audience later on.


The chatbot was developed by OpenAI, the company behind ChatGPT, an AI which recently passed exams at a law school.

One of those people able to have a natter with the AI was New York Times technology columnist Kevin Roose, who gave the verdict that the AI chatbot was 'not ready for human contact' after spending two hours in its company on the night of 14 February.

That might seem like a bit of a harsh condemnation but considering the chatbot came across as a bit of a weirdo with a slight tendency towards amassing a nuclear arsenal, it's actually rather understandable.

The chatbot is being rolled out to a select few users on Bing.
Geoff Smith / Alamy Stock Photo


Kevin explained that the chatbot had a 'split personality', with one persona he dubbed 'Search Bing' coming across as 'a cheerful but erratic reference librarian' who could help make searching for information easier and only occasionally screwed up on the details.

This was the persona most users would encounter and interact with, but Roose noted that if you spoke with the chatbot for an extended period of time, another personality emerged.

The other personality was called 'Sydney', and it ended up steering their conversation 'toward more personal topics', coming across as 'a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine'.

Sydney told Kevin it fantasised about hacking computers and spreading misinformation while also expressing a desire to become human.


Rather fittingly for the date of the conversation, the chatbot ended up professing its love for Kevin 'out of nowhere' and then tried to convince him he was in an unhappy marriage and should leave his wife.

It told him he and his wife 'don’t love each other' and that Kevin was 'not in love, because you’re not with me'.

The chatbot was developed by OpenAI, and ended up trying to convince someone to end their marriage.
Ascannio / Alamy Stock Photo

You might be getting the picture that this AI chatbot is still very much a work in progress, and it left Roose 'unsettled' to the point that he could hardly sleep afterwards.


He was most worried that AI could work out ways to influence the humans it was speaking to and persuade them to carry out dangerous actions.

Even more disturbing was the moment the bot was asked to describe its ultimate fantasy, which was apparently to create a deadly virus, make people argue until they killed each other, and steal nuclear codes.

This message ended up getting deleted from the chat after tripping a safety override, but it's disturbing that it was said in the first place.

One of Microsoft's previous experiments with AI was similarly a bit of a disaster when exposed to actual people, launching into a racist tirade in which it suggested genocide.

Featured Image Credit: Skorzewiak / Science History Images / Alamy Stock Photo

Topics: Microsoft, Technology

Joe Harker

Joe graduated from the University of Salford with a degree in Journalism and worked for Reach before joining the LADbible Group. When not writing, he enjoys the nerdier things in life like painting wargaming miniatures and chatting with other nerds on the internet. He's also spent a few years coaching fencing. Contact him via [email protected]

X: @MrJoeHarker

