The rise of artificial intelligence is exciting for a lot of folks, though it's worrying plenty of others as well... and quite understandably.
While a new robot recently debuted by Elon Musk promises to save us the effort of completing a lot of household chores, many have been left worried by the fast-paced development of AI.
In fact, AI has already caused plenty of problems, including deepfakes of people doing the rounds on social media.
And to add further fuel to the fire, the notoriously pessimistic AI expert Eliezer Yudkowsky has suggested that artificial intelligence may destroy humankind in just two years.
Speaking to The Guardian in a fascinating new interview, Yudkowsky said: "If you put me to a wall and forced me to put probabilities on things, I have a sense that our current remaining timeline looks more like five years than 50 years.
"Could be two years, could be 10."
He added: "The difficulty is, people do not realise, we have a shred of a chance that humanity survives."
California-based Yudkowsky is a researcher with an established history of speaking out against the rise of AI, sometimes controversially so.
Many might remember that last year the expert called for bombing data centers to halt the rise of AI - something that certainly raised a few eyebrows.
He somewhat softened his stance in the interview with The Guardian, standing by the idea of targeting data centers but no longer thinking nuclear weapons should be used to do it.
"I would pick more careful phrasing now," he told the outlet.
Yudkowsky now seems to think we're barrelling towards some risks that have famously been shown in film and TV.
He pointed to the idea of a point of no return, where AI becomes self-sufficient and decides that humanity is no longer needed, essentially wiping us out.
I think the main concern for a lot of folks when it comes to the rise of AI is potential job losses, as computers become able to do the jobs humans currently hold.
The experts involved in The Guardian's piece are simply asking whether people - and even businesses - could choose not to pursue AI.
However, Yudkowsky acknowledges that would rely on businesses making ethical choices.
He said: "You could say that nobody’s allowed to train something more powerful than GPT-4. Humanity could decide not to die and it would not be that hard."
But considering the rapid rise of AI, we really can't see this happening.
Topics: Artificial Intelligence, Technology