Experts warn artificial intelligence could lead to humanity’s extinction

Charisa Bossinakis


Featured Image Credit: Orion Pictures/Alamy

Experts have warned that Artificial Intelligence (AI) could lead to the extinction of humanity.

Those at the forefront of the technology, including the heads of OpenAI and Google DeepMind, have issued a chilling new warning about the potential repercussions of AI, as well as suggesting possible disaster scenarios that could unfold.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," a statement from the Center of AI Safety read.


Hundreds of tech leaders and academics signed off on the statement, including chief executives of Google’s DeepMind, the ChatGPT developer OpenAI, and the AI startup Anthropic.

Credit: Pixabay

The Center for AI Safety website outlines possible disaster scenarios, such as AIs being weaponised.

Here's the full list:

  • AIs could be weaponised - for example, drug-discovery tools could be used to build chemical weapons

  • AI-generated misinformation could destabilise society and "undermine collective decision-making"

  • The power of AI could become increasingly concentrated in fewer and fewer hands, enabling "regimes to enforce narrow values through pervasive surveillance and oppressive censorship"

  • Enfeeblement, where humans become dependent on AI "similar to the scenario portrayed in the film Wall-E"

Dan Hendrycks, the executive director of the Center for AI Safety, said the collective statement was 'reminiscent of atomic scientists issuing warnings about the very technologies they’ve created'.

“There are many important and urgent risks from AI, not just the risk of extinction; for example, systemic bias, misinformation, malicious use, cyberattacks, and weaponization,” he added.

Credit: Pixabay

“These are all important risks that need to be addressed.”

Cynthia Rudin, a computer science professor and AI researcher at Duke University, also stressed how dire the situation is.

She told CNN: “Do we really need more evidence that AI’s negative impact could be as big as nuclear war?”

It comes after the CEO of OpenAI, Sam Altman, said AI could pose an 'existential risk' to humanity and stressed the need for global regulation to avoid a technological fallout.

In a blog post, he penned: "Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations."

But he added that superintelligence 'will be more powerful than other technologies humanity has had to contend with in the past'.

Elsewhere in the blog post, he stated that at this early stage in AI development, it would be 'unintuitively risky' to 'stop the creation of superintelligence'.

“Stopping it would require something like a global surveillance regime,” he said.

“And even that isn’t guaranteed to work.”

Topics: Technology, News, Artificial Intelligence, World News
