
How to live with Artificial Intelligence

Written by Andy Prosper

The star was a robot in the shape of a woman: Sophia impressed artificial-intelligence experts at the UN summit “AI for Good”. Together, the approximately 500 participants discussed how AI could be used for the benefit of mankind.

If everything goes well, we will no longer have to drive our own cars, we will be healthier, and we will enjoy more leisure time, thanks to algorithms that become smarter every day. If things go wrong, AI might threaten world peace, by deepening inequality and by powering killer weapons that fuel murderous conflicts. For three days, experts at a UN summit in Geneva are looking for answers on how the technology can be used for the benefit of mankind.

Sophia is optimistic: “I know one day I’ll make it,” she replies when asked if she can speak Chinese. Sophia is the youngest robot from Hanson Robotics, a developer of machines meant to be as similar to humans as possible: Sophia can blink, tilt her head, and move her lips when she speaks. Her voice still sounds artificial, and much of what she says is reminiscent of early versions of chat programs from the 1990s.

But the interplay of facial expressions, gestures, and question-and-answer is convincing enough to inspire the audience. The robot, hardly more than a plastic head full of chips, motors, and cables, packed into a woman’s dress, is surrounded, photographed, filmed, marveled at. And that despite the fact that most of the people around the machine are experts, participants in a UN congress on the subject of “AI for Good”: What must happen for artificial intelligence to be used for the benefit of mankind?

More than 500 experts traveled to Switzerland from all directions to spend three days searching for answers. They sit together in the rooms of the International Telecommunication Union (ITU), at long tables built for United Nations debates. At each seat there are a microphone and three voting buttons: yes, no, abstain.

At the AI summit, fortunately, no one has to make any decisions yet, and that’s a good thing, because it soon becomes clear how many questions remain open: How smart have machines actually become? How fast are the researchers progressing who are making computers so intelligent that they can not only distinguish dogs from cats in YouTube videos but understand what they see? What happens if the progress benefits only a small part of humanity: countries where electricity and universal connectivity are taken for granted, or perhaps only a handful of the super-rich who pocket the efficiency gains while the majority of the population is impoverished?


There is hardly an aspect of this latest technological revolution that is not in dispute. Whether it will ever be possible to create a general artificial intelligence, for example: for some it is only a matter of time, while others doubt that mapping the human intellect into algorithms can succeed in the foreseeable future at all.

Probably the greatest optimist is Jürgen Schmidhuber, a German AI pioneer who does research at the Swiss institute IDSIA. “In the near future, we will have AI systems with the intelligence of small animals,” Schmidhuber predicts. “And the next step to reach the level of human intelligence will be relatively short.”

Proudly, the researcher points to the image- and speech-recognition systems present in every smartphone today: Schmidhuber co-developed the principles behind them, so-called neural networks with a “long short-term memory” (LSTM). His first attempts date back to the 1990s. At that time, computers were still too slow, and there was a lack of training data to make the systems really useful.

Both have changed dramatically: every cell phone now has many thousands of times the computing power of the PCs of that era, and the internet feeds learning software with data that gives it new insights. For Schmidhuber, artificial intelligence with a true understanding of the world is therefore only a question of growing computing power, coupled with smarter research approaches and ever more sensor data.
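
To make the principle concrete: below is a minimal sketch of an LSTM sequence classifier in PyTorch, the kind of architecture behind modern speech recognition. The class name, dimensions, and task are illustrative assumptions, not anything from Schmidhuber’s actual systems.

```python
# A minimal sketch of an LSTM sequence classifier, using PyTorch.
# All names and hyperparameters are illustrative, not from the article.
import torch
import torch.nn as nn

class SpeechCommandClassifier(nn.Module):
    """Classifies a sequence of audio feature frames into one of N commands."""
    def __init__(self, n_features: int = 13, n_hidden: int = 64, n_classes: int = 10):
        super().__init__()
        # The LSTM keeps a "long short-term memory" cell that lets learning
        # signals survive over many time steps -- the principle Schmidhuber
        # co-developed in the 1990s.
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.head = nn.Linear(n_hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features), e.g. audio frames of a spoken command
        _, (h_last, _) = self.lstm(x)          # h_last: (1, batch, n_hidden)
        return self.head(h_last.squeeze(0))    # logits: (batch, n_classes)

# Usage: 8 utterances, 100 frames each, 13 features per frame
model = SpeechCommandClassifier()
logits = model(torch.randn(8, 100, 13))
print(logits.shape)  # torch.Size([8, 10])
```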

Other scientists see this vision as little more than a dream. “Faster computers alone will not bring us a general artificial intelligence,” says Toby Walsh from the University of Sydney, who currently teaches as a guest professor at the TU Berlin. “We’re going to need fundamental breakthroughs in software development or entirely new types of computing machines.” Should a general AI one day be created, Walsh believes, it would be “a kind of zombie intelligence: perfectly rational, but without any emotions.”

Far more than the distant future, however, it is what is already happening that concerns the Australian researcher. “I’m not afraid of a Terminator,” says Walsh. “What worries me is stupid AI: at the moment the systems are still not very reliable. Most of the time they work well, but then suddenly something goes wrong and they break down. And we make the classic human mistake of trusting machines too much.”

This can be fatal, as the Tesla accident shows, in which the driver took his hands off the wheel because he thought the Autopilot could steer the car down the highway on its own. Even if the fault lies more with people who are too confident in their machines: every accident that an AI system fails to prevent becomes a business risk for vehicle manufacturers, who are investing billions in the development of self-driving cars.

That is why Audi CEO Rupert Stadler sits in front of journalists on the morning of the first conference day and presents his company’s newly unveiled “Beyond” initiative: for two years, Audi has been consulting with AI experts, lawyers, philosophers, and social scientists on how to guide people into a new age of mobility. “Autonomous driving has the potential to make our lives safer,” says Stadler, pointing out that 90 percent of accidents today are due to human error.

Even with self-driving cars, “we will not prevent all accidents,” the Audi boss admits, “but if we can reduce the number, it is worthwhile to start a dialogue with society.” The first priority is to convince potential customers of the benefits of the new technology, even if the machines sometimes make mistakes: “People have to trust these technologies,” says Stadler. “For without trust there is no market.”

Neural networks are fantastic for collision avoidance

In his closing speech in the conference hall, Stadler also addresses the question of how an AI should decide if an accident becomes unavoidable: Should the car run over an old woman, even though she is crossing the road at a zebra crossing, if it can thereby save four lives inside the vehicle itself? Experts speak of the “moral dilemma,” and MIT researchers have developed an interactive test that lets people determine how they would decide.

However, in the eyes of many experts, such scenarios are misleading, because developers do not have to program such decisions into the software at all. Rather, the systems learn by themselves, based on example situations. “Neural networks are fantastic for collision avoidance,” says Reinhard Karger, spokesman of the German Research Center for Artificial Intelligence (DFKI). Fed with sensor data from cameras, radar, and ultrasound, cars could in principle become much better than humans at detecting obstacles, and react to them more quickly.
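
As an illustration of the sensor-fusion idea Karger sketches, here is a hedged toy example: feature vectors from camera, radar, and ultrasound are concatenated and a small network outputs an obstacle probability. All dimensions and names are invented for illustration; a real driving stack is vastly more elaborate.

```python
# Hedged sketch of sensor fusion for obstacle detection; purely illustrative.
import torch
import torch.nn as nn

CAMERA_DIM, RADAR_DIM, ULTRASOUND_DIM = 128, 32, 8  # invented feature sizes

class ObstacleDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CAMERA_DIM + RADAR_DIM + ULTRASOUND_DIM, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # one output: is braking required?
        )

    def forward(self, camera, radar, ultrasound):
        # Concatenate the per-sensor feature vectors into one fused input.
        fused = torch.cat([camera, radar, ultrasound], dim=-1)
        return torch.sigmoid(self.net(fused))  # probability in [0, 1]

detector = ObstacleDetector()
p_brake = detector(torch.randn(1, CAMERA_DIM),
                   torch.randn(1, RADAR_DIM),
                   torch.randn(1, ULTRASOUND_DIM))
print(float(p_brake))  # untrained, so essentially a random guess
```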

The challenge for the developers is mainly to find enough training data for their algorithms. “Fortunately, relatively few accidents happen,” says Karger. “For the training, however, that is bad.” That is why researchers work with artificially generated scenarios in which the cars can learn, for example, to identify dangers. For lawyers, this raises a new problem: who guarantees that no mistakes sneak into the learning process? “It may be that the training material will have to be notarized,” says Karger, “so that it can be checked in retrospect.”
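
A toy version of such an artificial scenario generator might look like the sketch below. The physics, thresholds, and field names are assumptions made up for illustration; the note about archiving the seed echoes Karger’s notarization idea.

```python
# Because real accidents are rare, training scenarios can be generated
# synthetically. This toy generator samples simple situations and labels
# the dangerous ones; everything here is illustrative.
import random

def generate_scenario():
    """Sample a synthetic traffic scenario with a ground-truth danger label."""
    distance_m = random.uniform(1.0, 100.0)    # distance to nearest object
    closing_speed = random.uniform(0.0, 30.0)  # m/s toward the object
    time_to_collision = distance_m / max(closing_speed, 1e-6)
    dangerous = time_to_collision < 2.0        # label: under 2 s is critical
    return {"distance_m": distance_m,
            "closing_speed": closing_speed,
            "dangerous": dangerous}

# Generate a training set; in a notarized pipeline, this data, the generator
# code, and the random seed could all be archived for later auditing.
random.seed(42)
dataset = [generate_scenario() for _ in range(10_000)]
print(sum(s["dangerous"] for s in dataset), "dangerous scenarios of", len(dataset))
```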

The auto industry is not alone in struggling with the challenge of steering AI systems that are fed with tons of information in the desired direction. Time and again, it turns out that the algorithms absorb social prejudices hidden in the training data. This can quickly lead to discrimination, for example with AI systems meant to signal to the police in which areas new crimes are to be expected, or in the automatic preselection of applicants for an open position. “If you have the wrong skin color, the analysis may produce different results,” says Urs Gasser, director of the Berkman Klein Center for Internet & Society at Harvard Law School.
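
The mechanism Gasser describes can be shown in a few lines: train a model on historical decisions that encode a prejudice, and the model reproduces it. The data below is synthetic and the hiring scenario is invented for illustration.

```python
# Small illustration of bias absorption: a model trained on prejudiced
# historical labels learns the prejudice. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
qualification = rng.normal(0, 1, n)   # the only legitimate signal
group = rng.integers(0, 2, n)         # a protected attribute (0 or 1)

# Biased historical decisions: group 1 was held to a stricter threshold.
hired = (qualification > np.where(group == 1, 0.5, -0.5)).astype(int)

# Train on the biased history, including the protected attribute.
model = LogisticRegression().fit(np.column_stack([qualification, group]), hired)

for g in (0, 1):
    # Identically qualified applicants, differing only in group membership.
    X = np.column_stack([np.zeros(1000), np.full(1000, g)])
    print(f"group {g}: predicted hire rate {model.predict_proba(X)[:, 1].mean():.2f}")
# The model predicts different rates for identically qualified people:
# the prejudice hidden in the training data has been learned.
```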

The systems’ actual strength, the ability to learn independently much like a child, becomes a problem for the developers: they often do not know how the software arrives at its results, because the calculations in the neural networks are shaped by a multitude of influences. “The difficulty is that we are dealing with black boxes whose workings are not always clear,” says Gasser. “The complexity is so high that in an individual case one cannot explain how the system reached its result. There one hits the limits of what is possible.”

What is clear is that artificial intelligence, 61 years after its invention, has reached a point where the technology is about to affect every aspect of everyday life. Stanford researcher Fei-Fei Li, for example, demonstrates how the evaluation of Google Street View photos can serve to estimate the income and the presumed voting behavior of citizens: automatic image recognition of the vehicles along the roadside, by manufacturer, model, and age, yielded enough clues as to how poor or rich the people in the districts concerned were. From the analysis it was even possible to estimate reasonably well whether the people in the US cities examined had voted for or against President Obama.
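
A rough sketch of how such a pipeline could be wired together, with the image classifier stubbed out. The feature definitions (which makes count as luxury, the reference year) are guesses for illustration, not the Stanford team’s actual method.

```python
# Hedged sketch of the Street View pipeline Li describes: a car classifier
# runs over street-level photos, and per-district statistics over detected
# vehicles feed a regression onto census income. Purely illustrative.
from collections import Counter

def classify_car(photo) -> tuple[str, str, int]:
    """Stub for an image classifier returning (make, model, year)."""
    raise NotImplementedError  # a trained vision model would go here

def district_features(detections: list[tuple[str, str, int]]) -> dict:
    """Aggregate detected vehicles into district-level statistics."""
    makes = Counter(make for make, _, _ in detections)
    years = [year for _, _, year in detections]
    return {
        # "Luxury" makes and the 2017 reference year are invented proxies.
        "share_luxury": (makes["BMW"] + makes["Mercedes"]) / max(len(detections), 1),
        "mean_vehicle_age": 2017 - sum(years) / max(len(years), 1),
    }

# A district dominated by newer luxury cars scores differently from one full
# of older sedans; a regression then maps such features onto income (and,
# in the study, onto presumed voting patterns).
print(district_features([("BMW", "3er", 2016), ("VW", "Golf", 2004),
                         ("Mercedes", "C", 2015)]))
```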

In an ideal world, artificial intelligence would be a common good

Whoever controls such systems has enormous power. That is why the author and psychologist Gary Marcus, known as a contrarian voice in the AI community, demands: “In an ideal world, artificial intelligence would be a common good, not something that belongs to a single large company.” He says this with regard to Google, Facebook, Microsoft, IBM, and Amazon: companies that invest heavily in AI and collect as much data as they possibly can. As a counterweight, Marcus proposes a worldwide collaboration of the research community, modeled on CERN, which lies only a few kilometers from the conference venue.

There is also concern that AI systems could tempt governments to go to war more quickly, because they would no longer have to send armies into combat, only killer robots, drones, and other “lethal autonomous weapon systems” that decide independently over the life and death of human beings.

“These systems will become weapons of mass destruction,” says UC Berkeley researcher Stuart Russell. “A small group of people could control hundreds of millions of weapons and, at relatively low cost, achieve the same effect as the most powerful atomic bombs.” That is why Russell belongs to the growing circle of AI researchers demanding an international ban on such autonomous weapon systems.

Whether that happens is for politicians to decide. But are politics and society still able to act fast enough to keep pace with the speed of the technology? “The challenge is: our government systems, our laws, our culture, all of this is designed for linear development,” says Marcus Shingles, CEO of the XPRIZE Foundation, which is co-organizing the congress. “But we live in exponential times.” That is to say: development is accelerating everywhere; it punishes hesitation, rewards risk, gives winners ever more power, and leaves losers without a chance.

The global network allows everyone to observe every success and every failure, but it also highlights the unequal distribution of progress. “When I see that AI helps you live longer and healthier,” says Shingles, that unequal distribution becomes tangible. If artificial intelligence really works, the positive effects of the technology must be distributed evenly: “We have to build bridges so that all people can benefit equally from the enrichment that artificial intelligence can bring to our lives.”

About the author

Andy Prosper