In the latest AI news, Elon Musk has warned that artificial intelligence “poses more of a risk than a potential nuclear conflict between the US and North Korea”.

The CEO of Tesla issued the warning after an AI built by OpenAI, a company founded by Mr Musk, defeated the world’s best Dota 2 players after just two weeks of training.

“If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea,” he tweeted shortly after the bot’s victory, along with a picture of a poster bearing the slogan: “In the end, the machines will win”.


The poster, incidentally, is about gambling.

“Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that’s a danger to the public is regulated. AI should be too,” he added later.

“Biggest impediment to recognizing AI danger are those so convinced of their own intelligence they can’t imagine anyone doing what they can’t.”

Questions about AI safety

A recent University of Oxford study concluded that AI will be better than humans at all tasks within 45 years, and many people, including Stephen Hawking, believe humans will be in trouble in the future if our goals don’t align with those of machines.

However, following the exchange of increasingly heated words between Donald Trump and Kim Jong-un, some Twitter users pointed out that nuclear war might wipe humans out before AI even gets the chance to.

Mr Musk has spoken out about the potential dangers of AI on numerous occasions, and recently engaged in a war of words with Mark Zuckerberg, whose outlook differs sharply from his own.

After Mr Musk called AI “a fundamental existential risk for human civilisation”, the Facebook founder branded his views as “negative” and “pretty irresponsible”.

Mr Musk hit back by saying Mr Zuckerberg’s understanding of the subject was “limited”.
He wants the companies working on AI to slow down to ensure they don’t unintentionally build something unsafe, and says it needs to be regulated.

“I think we should be really concerned about AI and I think we should… AI’s a rare case where I think we need to be proactive in regulation instead of reactive,” he said last month.

“Because I think by the time we are reactive in AI regulation, it’s too late.”

For more AI news, keep reading iTMunch.