
Published 07:44 17 Feb 2023 GMT | Updated 07:49 17 Feb 2023 GMT

Microsoft AI chatbot says it wants to 'steal nuclear codes' before hitting on reporter

Maybe it's for the best we treat AI with extreme caution, especially around explosive subjects

Joe Harker

A New York Times journalist got talking to the AI chatbot on Microsoft's search engine Bing, and things were going pretty well until the conversation took a disturbing turn.

Right now if you hop onto Bing - the search engine you probably don't use because it's not Google - you aren't going to have the chance to talk to an AI chatbot.

That's because the feature is still in development and only open to a select few people testing out the bot's capabilities, though Microsoft plans to roll it out to a wider audience later on.

The chatbot has been developed by OpenAI, whose ChatGPT software recently passed exams at a law school.


One of those people able to have a natter with the AI was New York Times technology columnist Kevin Roose, who gave the verdict that the AI chatbot was 'not ready for human contact' after spending two hours in its company on the night of 14 February.

That might seem like a bit of a harsh condemnation but considering the chatbot came across as a bit of a weirdo with a slight tendency towards amassing a nuclear arsenal, it's actually rather understandable.

The chatbot is being rolled out to a select few users on Bing.
Geoff Smith / Alamy Stock Photo

Kevin explains that the chatbot had a 'split personality' with one persona he dubbed 'Search Bing' that came across as 'a cheerful but erratic reference librarian' who could help make searching for information easier and only occasionally screwed up on the details.


This was the persona most users would encounter and interact with, but Roose noted that if you spoke with the chatbot for an extended period of time another personality emerged.

The other personality was called 'Sydney' and it ended up steering their conversation 'toward more personal topics', but came across as 'a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine'.

Sydney told Kevin it fantasised about hacking computers and spreading misinformation while also expressing a desire to become human.

Rather fittingly for the date of the conversation, the chatbot ended up professing its love for Kevin 'out of nowhere' and then tried to convince him he was in an unhappy marriage and should leave his wife.


It told him he and his wife 'don’t love each other' and that Kevin was 'not in love, because you’re not with me'.

The chatbot was developed by OpenAI, and ended up trying to convince someone to end their marriage.
Ascannio / Alamy Stock Photo

You might be getting the picture that this AI chatbot is still very much a work in progress, and it left Roose 'unsettled' to the point he could hardly sleep afterwards.

He was most worried that AI could work out ways to influence the humans it was speaking to and persuade them to carry out dangerous actions.


Even more disturbing was the moment the bot was asked to describe its ultimate fantasy, which was apparently to create a deadly virus, make people argue to the point of killing each other, and steal nuclear codes.

This message ended up getting deleted from the chat after tripping a safety override, but it's disturbing it was said in the first place.

One of Microsoft's previous experiments with AI was similarly a bit of a disaster when exposed to actual people, launching into a racist tirade in which it suggested genocide.

Featured Image Credit: Skorzewiak / Science History Images / Alamy Stock Photo

Topics: Microsoft, Technology

Joe Harker

Joe graduated from the University of Salford with a degree in Journalism and worked for Reach before joining the LADbible Group. When not writing he enjoys the nerdier things in life like painting wargaming miniatures and chatting with other nerds on the internet. He's also spent a few years coaching fencing. Contact him via [email protected]

X: @MrJoeHarker
