Artificial Intelligence Could Be Disastrous, Experts Say

Written by prodigitalweb

Artificial Intelligence – Part of our Daily Life

Artificial Intelligence (AI) is becoming part of our daily lives, from smartphone assistants such as Siri to features like facial recognition in photos. Experts say that humanity should take more precautions in developing AI than it has with other technologies.

Science and tech heavyweights Elon Musk, Stephen Hawking and Bill Gates have warned that intelligent machines could be one of humanity's biggest existential threats. Throughout history, however, human inventions such as fire have also posed dangers.

Max Tegmark, a physicist at the Massachusetts Institute of Technology, said on the radio show "Science Friday" on April 10 that with fire, it was okay that we screwed up a bunch of times, but in developing artificial intelligence, as with nuclear weapons, we want to get it right the first time, because it might be the only chance we have.

On the other hand, experts say that artificial intelligence is capable of achieving enormous good for society. Eric Horvitz, managing director of the Microsoft Research lab in Seattle, said on the show that this technology could also save thousands of lives. The downside is the possibility of creating a computer program capable of continually improving itself, to the point where we might lose control of it.

Race between the Growing Power of Technology and Humanity's Wisdom

Stuart Russell, a computer scientist at the University of California, Berkeley, commented on the show that society has long believed that anything smarter must be better.

But just as in the Greek myth of King Midas, who turned everything he touched into gold, ever-smarter machines may not turn out to be what society wished for. In fact, the goal of making machines smarter may not be aligned with the goals of the human race.

For instance, he noted, nuclear physics gave us access to the almost unlimited energy stored in the atom, and the first thing we did with it was build an atomic bomb. Today, 99% of fusion research is about containment, and artificial intelligence will have to go the same way.

According to Tegmark, the development of artificial intelligence is a race between the growing power of technology and humanity's growing wisdom in handling that technology. Instead of trying to slow down the former, humanity should invest more in the latter.

Technology Learns to Think for Itself

Stephen Hawking has warned that humanity could face an uncertain future as technology learns to think for itself and to adapt to its environment.

In an article written for The Independent, discussing Johnny Depp's film "Transcendence", which imagines a world in which computers surpass the abilities of humans, he argues that digital personal assistants such as Siri, Google Now and Cortana are merely symptoms of an IT arms race that pale against what the future will bring.

He also notes the potential benefits of this technology, which holds out the prospect of eradicating war, poverty and disease. "Success in creating artificial intelligence would be the biggest event in human history," he writes. "Unfortunately, it might also be the last, unless we learn how to avoid the risks."

In the short and medium term, militaries around the world are focusing on the development of autonomous weapon systems, while the UN is simultaneously working to ban them. Looking further ahead, he adds, there are no fundamental limits to what can be achieved: "There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains."

Smart Chips – Paving the Way for Sensor Networks

IBM, in fact, has already created smart chips that could pave the way for sensor networks that mimic the brain's capabilities of perception, thought and action.

Professor Hawking adds that experts are not ready for this change. By way of comparison, he notes that if aliens told us they would be arriving in a few decades, scientists would not simply sit and wait for their arrival; yet although we may be facing the best or worst thing to happen to humanity in history, little serious research is devoted to these issues.

All of us, he writes, should ask ourselves what we can do to improve the chances of reaping the benefits and avoiding the risks. In January, at a conference in Puerto Rico organized by the non-profit Future of Life Institute, which Tegmark co-founded, AI leaders from industry and academia, including Elon Musk, agreed that it was time to redefine the goal of simply making machines as smart and fast as possible.

The aim now is to make machines that are beneficial to society. Musk donated $10 million to the institute to advance that goal. Hundreds of scientists, including Musk, then signed an open letter describing the potential benefits of AI while cautioning against its risks.
