
The end of humanity? The Prodigitalweb Guide to Artificial Intelligence

Written by Andy Prosper

Artificial Intelligence (AI) is overhyped, yet it remains extremely important for the future. Prodigitalweb explains Machine Learning, Deep Learning and Neural Networks – and what Animoji has to do with it.

Superintelligence is not about to destroy all jobs, let alone humanity. But algorithms and software have become decidedly smarter in the past few years, as even small tricks show: the Animoji on the iPhone X, for instance, or voice assistants that let you order new paper towels by spoken command.

The big Silicon Valley companies are pouring money into AI, and it has already changed the lives of many people and the gadgets they use. Google, Amazon, Facebook and others are laying the foundations of a future built around AI.

Machine Learning was the catalyst for the boom in the Artificial Intelligence industry: it is a kind of training in which computers effectively teach themselves from examples, rather than having every step programmed by a person. A newer technique called Deep Learning has made this method particularly effective. Lee Sedol came to feel its power firsthand: one of the best Go players in the world, with 18 international titles to his name, he lost in 2016 to the software AlphaGo.
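To make that idea concrete, here is a minimal sketch in Python using scikit-learn. Everything in it – the features, the labels, the numbers – is invented purely for illustration; the point is that no decision rule is written by hand, the model fits one from labeled examples:

```python
# A minimal sketch of machine learning: the program is given labeled
# examples and fits its own decision rule, instead of a person
# programming each step. All data here is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Made-up examples: [hours of daylight, temperature in deg C],
# labeled 1 for "summer day" and 0 for "winter day".
X = [[16, 28], [15, 31], [14, 24], [8, 2], [9, -1], [7, 4]]
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X, y)                  # the "training" step

# A short, cold day should come out as winter-like: expect [0].
print(model.predict([[10, 3]]))
```

The same pattern – collect examples, fit a model, predict – scales from this toy up to the systems described below.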

In the everyday life of most people, this leap in AI evolution has so far meant new gadgets such as smart speakers, or the ability to unlock a smartphone with Face ID. However, AI is about to change much more important areas, such as healthcare. Hospitals in India are currently testing software that examines photographs of the eye for signs of diabetic retinopathy, a condition that is usually recognized too late. Machine Learning can also be applied to autonomous driving, where a car learns to react to its environment.

There are early signs that Artificial Intelligence may indeed make us happier or healthier. But the risks must not be glossed over. The most prominent problem right now is algorithms that absorb human prejudices against foreigners and women and then imitate them. A future with AI is not automatically a better one.

The beginnings of AI

Modern Artificial Intelligence began as a vacation project: John McCarthy, then a professor at Dartmouth College, coined the term in the summer of 1956, when he invited a small group of colleagues to spend a few weeks thinking about how machines could handle complex tasks. He had high hopes for machines that could think on a human level. “We think that significant progress can be made if a group of scientists works on it together for a summer,” McCarthy and his colleagues wrote at the time.

McCarthy’s hopes were not fulfilled – he later admitted he had been too optimistic. But the workshop helped the researchers establish their dream of artificial intelligence as a new academic discipline.

Initially, many researchers worked on fairly abstract problems in mathematics and logic, but it was not long before they turned to practical tasks. In the late 1950s, Arthur Samuel wrote software that taught itself to play checkers, and in 1962 it managed to beat a checkers master. Finally, in 1967, a program called Dendral learned to interpret the data from a mass spectrometer, taking over work that had previously required trained chemists.

As Artificial Intelligence research evolved, so did ideas about how to build smart machines. Some experts tried to encode knowledge directly or to write explicit rules for tasks such as understanding human language. Others were inspired by the way animals and humans learn, and built systems that improve themselves from sample data. With each step, one more task that until then only humans could do fell to machines.

Deep Learning and Neural Networks

The fuel of the AI boom is Deep Learning – actually one of the industry’s oldest ideas. Data is fed into a web of mathematics loosely reminiscent of a simplified brain, called an Artificial Neural Network, and the system is trained on that data until it can make the right decisions on its own.
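The simplest such network is a single artificial neuron, the classic perceptron. The following sketch in plain Python (with made-up 3x3 “images” of a vertical and a horizontal bar) shows the training idea in miniature: whenever the network decides wrongly, its weights are nudged toward the right answer, until the decisions come out correct on their own:

```python
# A toy perceptron: one artificial neuron that learns to tell a
# vertical bar from a horizontal bar. The 3x3 pixel patterns are
# invented for illustration.
vertical   = [0, 1, 0,  0, 1, 0,  0, 1, 0]   # label +1
horizontal = [0, 0, 0,  1, 1, 1,  0, 0, 0]   # label -1
examples = [(vertical, 1), (horizontal, -1)]

w = [0.0] * 9   # one weight per pixel
b = 0.0         # bias term

for _ in range(10):                          # a few passes over the data
    for pixels, label in examples:
        s = sum(wi * xi for wi, xi in zip(w, pixels)) + b
        guess = 1 if s > 0 else -1
        if guess != label:                   # wrong decision: nudge weights
            for i, xi in enumerate(pixels):
                w[i] += label * xi
            b += label

# After training, both patterns are classified correctly: [1, -1].
print([1 if sum(wi * xi for wi, xi in zip(w, p)) + b > 0 else -1
       for p, _ in examples])
```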

The idea for the first Artificial Neural Network was born shortly after the Dartmouth workshop. The Mark 1 – a computer that filled an entire room in 1958 – learned to distinguish geometric shapes with such a system. The New York Times described it as an “embryo” of a computer that would be able to read and grow wiser. In 1969, however, opinion turned against neural networks when MIT researcher Marvin Minsky explained in a book why the power of these programs is fundamentally limited. Some researchers disagreed and kept developing the technique over the following decades. The vindication came in 2012, with experiments showing that, given enough data and processing power, a neural network can learn to see.
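Minsky’s argument rested on problems that a single-layer perceptron provably cannot solve; the XOR function is the textbook case. Adding a hidden layer and training it with backpropagation – the technique at the heart of today’s Deep Learning – removes that limit, as this small illustrative sketch in Python with numpy shows (a toy, not any production system):

```python
# A tiny neural network (numpy only) learns XOR, which no single-layer
# perceptron can represent. Weights start random and are adjusted by
# backpropagation until the network's decisions are correct.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)      # XOR targets

# One hidden layer with 4 units; sigmoid activations throughout.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: feed the data through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight against the error gradient.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(axis=0)

# With enough steps the outputs typically round to [[0],[1],[1],[0]].
print(out.round())
```

Scaled up to millions of weights, many layers and vast amounts of data, this same training loop is what made the 2012 results possible.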

With this innovation, researchers from the University of Toronto won a major image-recognition competition in 2012 with software that could classify images. Researchers at IBM, Microsoft and Google soon followed, showing how Deep Learning could dramatically improve the accuracy of speech recognition. Shortly afterwards, Artificial Intelligence experts were the most sought-after employees in Silicon Valley. That Artificial Intelligence will change the world is beyond question. Google, Microsoft and Amazon have gathered the most important experts and the most powerful computers to strengthen their business models, such as tailor-made advertising. These companies also make money by renting out their AI infrastructure to other companies for their own projects.

The main areas of work are currently healthcare and national security. As machine-learning techniques continue to improve, hardware becomes cheaper and open-source tools get better, Artificial Intelligence will be used in more and more areas of the industry. Consumers will soon find features in many new gadgets and apps that were made possible by Artificial Intelligence. Google and Amazon in particular are counting on progress in Machine Learning to make voice assistants and smart speakers a success. The next step for Amazon: cameras that watch their owners in order to optimize their everyday lives.

It’s a great time to be an Artificial Intelligence researcher. Countless institutes are looking for new ways to make smart machines even smarter, and they are better funded than ever before. And there is plenty of work left to do: despite the rapid progress, there are still many things machines cannot manage – grasping the nuances of language, for example, or learning something new out of sheer curiosity. Only when machines master such tasks will they come close to human intelligence, with all its complexity, adaptability and creativity. Geoff Hinton, a Deep Learning pioneer at Google, says that to make progress here, AI needs to be rethought on a completely new basis.

Facebook, too, has been confronted with the downsides of its own algorithms: they promoted hate and fake news. Even more powerful AI could lead to even worse problems, especially by reinforcing prejudices and stereotypes against Black people and women. Not only activists but also the tech industry itself are working on guidelines to keep algorithms ethical. For people to enjoy the advantages of smart machines, they will have to handle them more smartly themselves.

About the author

Andy Prosper