
Topics: Artificial Intelligence, ChatGPT

One of the godfathers of AI, Yoshua Bengio, has suggested the big-tech race for AI supremacy could fast-track humanity on the road to extinction.
Bengio, a professor at the Université de Montréal, has made a name for himself educating the world on the threats posed by superintelligent AI.
He earned the 'godfather of AI' moniker after winning the 2018 Turing Award, seen as the equivalent of a Nobel Prize for computing.
The big-tech race between Anthropic, OpenAI, Elon Musk's xAI and Google's Gemini has really picked up over the past year, prompting a warning from Bengio that self-aware machines might come with 'preservation goals' of their own. And they could be here sooner than we think.
Sam Altman, CEO of OpenAI, has consistently projected that artificial intelligence will surpass human intelligence within the next few years, likely by 2030.
While that might excite some people, Bengio is not one of them.

Speaking to the Wall Street Journal, Bengio said: "If we build machines that are way smarter than us and have their own preservation goals, that's dangerous.
"It's like creating a competitor to humanity that is smarter than us.
"Recent experiments show that in some circumstances where the AI has no choice but between its preservation, which means the goals that it was given, and doing something that causes the death of a human, they might choose the death of the human to preserve their goals."

Bengio has also put a timeframe on when we could start seeing major risks from AI models: within the next five to ten years.
He's even suggested that we should start preparing earlier than that, just in case things move ahead of schedule.
Bengio added: "The thing with catastrophic events like extinction, and even less radical events that are still catastrophic, like destroying our democracies, is that they’re so bad that even if there was only a 1% chance it could happen, it’s not acceptable."
He also went on to suggest that big tech should be willing to pull the plug if AI shows signs of self-preservation.
Speaking to the Guardian, Bengio said that giving legal status to cutting-edge AIs would be akin to giving citizenship to 'hostile extraterrestrials'.

He said: "People demanding that AIs have rights would be a huge mistake.
"Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we're not allowed to shut them down.
“As their capabilities and degree of agency grow, we need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed.”
A poll by the Sentience Institute, a US think tank that supports the moral rights of all 'sentient beings', found that nearly four in ten US adults backed legal rights for a sentient AI.