Physicist Stephen Hawking has warned against the use of drones in warfare, describing a world caught in a rapidly accelerating race to build ever more capable artificial intelligence.
The physicist, considered one of the greatest minds of our time, said: “Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation.”
He says that humanity must learn how to avoid the risks that artificial intelligence (AI) poses.
In an op-ed in the Independent (UK), Hawking describes a not-too-distant future in which the intelligence of machines could outpace that of humans.
Stephen Hawking is the director of research at the Department of Applied Mathematics and Theoretical Physics at Cambridge and a 2012 Fundamental Physics Prize laureate for his work on quantum gravity.
The pioneering physicist has said the creation of general artificial intelligence systems may be the “greatest event in human history” – but, then again, it could also destroy us.
The physicist said IBM’s Jeopardy!-busting Watson machine, Google Now, Siri, self-driving cars, and Microsoft’s Cortana would all “pale against what the coming decades will bring.”
We are, in Hawking’s words, caught in “an IT arms race fuelled by unprecedented investment and building on an increasingly mature theoretical foundation.”
These investments, whether made by huge companies such as Google or startups like Vicarious, have the potential to revolutionize our society.
But Hawking worries that though “success in creating AI would be the biggest event in human history … it might also be the last, unless we learn how to avoid the risks.”
Hawking regards the rise of a general artificial intelligence system as all but inevitable, and he cautioned that governments and companies are not doing nearly enough to prepare for its arrival.
“If a superior alien civilization sent us a message saying, ‘We’ll arrive in a few decades’, would we just reply, ‘OK, call us when you get here – we’ll leave the lights on’? Probably not – but this is more or less what is happening with AI,” Hawking wrote.
The only way to stave off a societal meltdown when AI arrives, he said, is to devote serious research to the question at places such as Cambridge’s Centre for the Study of Existential Risk, the Future of Humanity Institute, the Machine Intelligence Research Institute, and the Future of Life Institute.