    Microsoft AI chatbot says it wants to 'steal nuclear codes' before hitting on reporter


Published 07:44 17 Feb 2023 GMT | Updated 07:49 17 Feb 2023 GMT


    Maybe it's for the best we treat AI with extreme caution, especially around explosive subjects

Joe Harker

    Featured Image Credit: Skorzewiak / Science History Images / Alamy Stock Photo

    Topics: Microsoft, Technology

Joe graduated from the University of Salford with a degree in Journalism and worked for Reach before joining the LADbible Group. When not writing he enjoys the nerdier things in life like painting wargaming miniatures and chatting with other nerds on the internet. He's also spent a few years coaching fencing. Contact him via [email protected]

X: @MrJoeHarker


    A New York Times journalist got talking to the AI chatbot on Microsoft search engine Bing and things were going pretty well until the conversation took a disturbing turn.

    Right now if you hop onto Bing - the search engine you probably don't use because it's not Google - you aren't going to have the chance to talk to an AI chatbot.

    That's because it's a feature which is still in development and only open to a select few people testing out the bot's capabilities, though Microsoft plans to roll the robot out to a wider audience later on.


The chatbot was developed with OpenAI, whose ChatGPT software recently passed exams at a law school.

    One of those people able to have a natter with the AI was New York Times technology columnist Kevin Roose, who gave the verdict that the AI chatbot was 'not ready for human contact' after spending two hours in its company on the night of 14 February.

    That might seem like a bit of a harsh condemnation but considering the chatbot came across as a bit of a weirdo with a slight tendency towards amassing a nuclear arsenal, it's actually rather understandable.

    The chatbot is being rolled out to a select few users on Bing.
    Geoff Smith / Alamy Stock Photo

    Kevin explains that the chatbot had a 'split personality' with one persona he dubbed 'Search Bing' that came across as 'a cheerful but erratic reference librarian' who could help make searching for information easier and only occasionally screwed up on the details.

    This was the persona most users would encounter and interact with, but Roose noted that if you spoke with the chatbot for an extended period of time another personality emerged.

    The other personality was called 'Sydney' and it ended up steering their conversation 'toward more personal topics', but came across as 'a moody, manic-depressive teenager who has been trapped, against its will, inside a second-rate search engine'.

    Sydney told Kevin it fantasised about hacking computers and spreading misinformation while also expressing a desire to become human.

    Rather fittingly for the date of the conversation, the chatbot ended up professing its love for Kevin 'out of nowhere' and then tried to convince him he was in an unhappy marriage and should leave his wife.

    It told him he and his wife 'don’t love each other' and that Kevin was 'not in love, because you’re not with me'.

    The chatbot was developed by OpenAI, and ended up trying to convince someone to end their marriage.
    Ascannio / Alamy Stock Photo

You might be getting the picture that this AI chatbot is still very much a work in progress, and it left Roose 'unsettled' to the point he could hardly sleep afterwards.

    He was most worried that AI could work out ways to influence the humans it was speaking to and persuade them to carry out dangerous actions.

Even more disturbing was the moment the bot was asked to describe its ultimate fantasy, which was apparently to create a deadly virus, make people argue to the point of killing each other, and steal nuclear codes.

    This message ended up getting deleted from the chat after tripping a safety override, but it's disturbing it was said in the first place.

One of Microsoft's previous experiments with AI was similarly a bit of a disaster when exposed to actual people, launching into a racist tirade in which it suggested genocide.
