    Experts warn artificial intelligence could lead to humanity’s extinction
    Updated 07:23 31 May 2023 GMT+1 | Published 07:02 31 May 2023 GMT+1

    Experts have said that 'mitigating the risk of extinction' from AI should be a global priority, just like pandemics or nuclear war

    Charisa Bossinakis


    Featured Image Credit: Orion Pictures. Alamy

    Topics: News, Technology, Artificial Intelligence, World News



    Experts have warned that Artificial Intelligence (AI) could lead to the extinction of humanity.

    Those at the forefront of the technology, including the heads of OpenAI and Google DeepMind, have issued a chilling new warning about the potential repercussions of AI, as well as suggesting possible disaster scenarios that could unfold.

    "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," a statement from the Center for AI Safety read.

    Hundreds of tech leaders and academics signed off on the statement, including chief executives of Google’s DeepMind, the ChatGPT developer OpenAI, and the AI startup Anthropic.


    The Center for AI Safety website suggests possible disaster scenarios that could unfold, such as AIs being weaponised.

    Here's the full list:

    • AIs could be weaponised - for example, drug-discovery tools could be used to build chemical weapons
    • AI-generated misinformation could destabilise society and "undermine collective decision-making"
    • The power of AI could become increasingly concentrated in fewer and fewer hands, enabling "regimes to enforce narrow values through pervasive surveillance and oppressive censorship"
    • Enfeeblement, where humans become dependent on AI "similar to the scenario portrayed in the film Wall-E"

    Dan Hendrycks, the executive director of the Center for AI Safety, said the collective statement was 'reminiscent of atomic scientists issuing warnings about the very technologies they’ve created'.

    “There are many ‘important and urgent risks from AI,’ not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization,” he added.


    “These are all important risks that need to be addressed.”

    Cynthia Rudin, a computer science professor and AI researcher at Duke University, expressed how dire the situation was.

    She told CNN: "Do we really need more evidence that AI's negative impact could be as big as nuclear war?”

    It comes after the CEO of OpenAI, Sam Altman, said AI could pose an 'existential risk' to humanity and called for global regulation to avoid a technological fallout.

    In a blog post, he penned: "Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations."

    But he added that superintelligence 'will be more powerful than other technologies humanity has had to contend with in the past'.

    Elsewhere in the blog post, he stated that at this early stage in AI development, it would be 'unintuitively risky' to 'stop the creation of superintelligence'.

    “Stopping it would require something like a global surveillance regime,” he said.

    “And even that isn’t guaranteed to work.”
