Experts warn artificial intelligence could lead to humanity’s extinction

Updated 07:23 31 May 2023 GMT+1Published 07:02 31 May 2023 GMT+1

Experts have said that 'mitigating extinction' from AI should be a global priority, just like pandemics or nuclear war

Charisa Bossinakis

Experts have warned that Artificial Intelligence (AI) could lead to the extinction of humanity.

Those at the forefront of the technology, including the heads of OpenAI and Google DeepMind, have issued a chilling new warning about the potential repercussions of AI, as well as suggesting possible disaster scenarios that could unfold.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," a statement from the Center of AI Safety read.

Hundreds of tech leaders and academics signed off on the statement, including chief executives of Google’s DeepMind, the ChatGPT developer OpenAI, and the AI startup Anthropic.


The Center for AI Safety website suggests possible disaster scenarios that could unfold, such as AIs being weaponized.

Here's the full list:
  • AIs could be weaponised - for example, drug-discovery tools could be used to build chemical weapons
  • AI-generated misinformation could destabilise society and "undermine collective decision-making"
  • The power of AI could become increasingly concentrated in fewer and fewer hands, enabling "regimes to enforce narrow values through pervasive surveillance and oppressive censorship"
  • Enfeeblement, where humans become dependent on AI, "similar to the scenario portrayed in the film Wall-E"
Dan Hendrycks, the executive director of the Center for AI Safety, said the collective statement was 'reminiscent of atomic scientists issuing warnings about the very technologies they’ve created'.

“There are many ‘important and urgent risks from AI,’ not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization,” he added.

“These are all important risks that need to be addressed.”

Cynthia Rudin, a computer science professor and AI researcher at Duke University, expressed how dire the situation was.

She told CNN: "Do we really need more evidence that AI's negative impact could be as big as nuclear war?”

It comes after the CEO of OpenAI, Sam Altman, said AI could pose an 'existential risk' to humanity and stressed the need for global regulation to avoid a technological fallout.

In a blog post, he penned: "Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations."

But he added that superintelligence 'will be more powerful than other technologies humanity has had to contend with in the past'.

Elsewhere in the blog post, he stated that at this early stage in AI development, it would be 'unintuitively risky' to 'stop the creation of superintelligence'.

“Stopping it would require something like a global surveillance regime,” he said.

“And even that isn’t guaranteed to work.”

Featured Image Credit: Orion Pictures. Alamy

Topics: News, Technology, Artificial Intelligence, World News
