Turns out the race to develop smarter and faster AIs could mean we're heading for disaster, as an expert in the field has warned there's a 50 percent chance it ends in 'catastrophe' with 'most' humans dead.
Paul Christiano, a former key researcher at OpenAI, believes there are pretty good odds that artificial intelligence could take control of humanity and destroy it.
Having formerly headed up the language model alignment team at the AI company, he probably knows what he's talking about.
Christiano now heads up the Alignment Research Center, a non-profit aimed at aligning machine learning systems with 'human interests'.
Talking on the 'Bankless Podcast', he said: "I think maybe there's something like a 10-20 percent chance of AI takeover, [with] many [or] most humans dead.
"I take it quite seriously."
He continued: "Overall, maybe we're talking about a 50/50 chance of catastrophe shortly after we have systems at the human level."
And he's not alone.
Earlier this year, scientists from around the globe signed an open letter urging that the AI race be put on pause until we humans have had time to strategise.
Bill Gates has also voiced his concerns, comparing AI to 'nuclear weapons' back in 2019.
So, gripping sci-fi subplots aside, how could AI actually turn on the humans that created it?
It turns out that it's all down to its life experience.
Just like a human baby, AI is trained by being bombarded with data without knowing what to do with it.
And just as a newborn cries and learns that a parent will come to pick it up, AI learns by attempting certain goals through random actions, zeroing in on 'correct' results.
By immersing AI in internet data, machine learning has enabled it to produce well-structured, coherent responses to human queries.
As the computer processing that powers machine learning becomes increasingly specialised, many in the field believe that, within a decade, processing power combined with artificial intelligence will make machines sentient - which is where we might have a problem.
That's why many researchers have urged us to learn how to control AI behaviour now - before it gets out of hand.
And Christiano isn't the only one with concerns about the future of AI.
Elon Musk said in March this year that he is 'worried about AI stuff'.
"It's quite dangerous technology. I fear I may have done some things to accelerate it," he said.