SIEMENS

Research & Development
Technology Press and Innovation Communications

Dr. Ulrich Eberl
  • Wittelsbacherplatz 2
  • 80333 Munich
  • Germany

Florian Martini
  • Wittelsbacherplatz 2
  • 80333 Munich
  • Germany
Thriving on Mountains of Data

Prof. Bernhard Schölkopf (43) is the Director of the new Max Planck Institute for Intelligent Systems in Tübingen and Stuttgart, as well as one of the world's leading experts in machine intelligence. A physicist and mathematician, Schölkopf develops new learning techniques that are designed to uncover regularities in complex data sets. He has conducted research at Bell Laboratories and Microsoft Research, among other places, and was presented with the Max Planck Research Award in 2011.

Photo: Robots and children learn by trying things out and from examples presented to them.

What does learning actually mean in a scientific sense?

Schölkopf: That depends on who you ask. A psychologist would say that learning can be defined as a change in behavior that results from experience. That’s only part of it, however. If someone injures their foot, they’re going to limp, not because they learned to but simply because it hurts. As a physicist, on the other hand, I search for certain types of regularities that lead from a specific input to an output. Scientists refer to this process of drawing conclusions about cause and effect from observations as “empirical inference.” My institute attempts to convert the associated mechanisms into algorithms in order to solve problems that humans are unable to solve on their own.

Can you provide an example?

Schölkopf: You’ll always find problems like that wherever tremendous amounts of data are involved. Take bioinformatics, for example. Geneticists want to find out where genes on a DNA strand begin and end. You can do this by conducting an experiment in a lab, which generates millions of data points linked to one another in a high-dimensional way. No human being is going to discover regularities here that would allow you to predict where the right boundaries lie. But if you use the data to train software, things work out pretty well. The great thing is that the regularities converge, as we say, which essentially means that the results become more precise as you feed in more data. That’s the big benefit of machine learning. Machines find the kinds of structures in large amounts of data that a human would never find. That’s not surprising, given that our brains are optimized for perception and action, not for scientific analysis. Another advantage of machine learning can be found in applications where we observe the environment with sensors that humans simply don’t possess. After all, we’re not equipped with built-in laser scanners to measure distances, for example.
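
To illustrate the convergence he describes, here is a minimal sketch in Python with scikit-learn on synthetic data (the dataset, model, and numbers are illustrative assumptions, not taken from Schölkopf's work): a classifier trained on more and more labeled examples becomes more accurate on data it has never seen.

```python
# Illustrative sketch: on a synthetic high-dimensional problem, test accuracy
# rises as more labeled examples are fed in, i.e. the results "converge".
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=20000, n_features=200, n_informative=20,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5,
                                                    random_state=0)

for n in (100, 1000, 10000):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, "training examples -> test accuracy",
          round(model.score(X_test, y_test), 3))
```

Run as is, the printed accuracy typically climbs with each larger training set, which is exactly the behavior described above.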

Where does the human brain have an advantage?

Schölkopf: The brain is a very complex organ that can carry out some tasks very precisely and efficiently through learning. This is especially true when the brain faces problems that were important to us throughout evolution, like recognizing visual patterns. That’s why we can recognize numbers and letters in fractions of a second, whereas computers have problems with that. On the other hand, if you convert the symbols into barcodes, we can’t read them, but computers can. This is because our brains have been trained our whole lives to extract regularities out of numbers and letters. Neuroscientist Horace Barlow once referred to the brain as a statistical decision-making organ. Still, we have to keep in mind that only certain statistical tasks can be handled very effectively — the ones that have had the greatest significance throughout evolution.

In your opinion, what role do feelings play in learning?

Schölkopf: Feelings definitely play a role in human learning — for example, when assessing what’s important to do, or what makes sense to do, or in situations that involve motivation. Evolution seems to indicate that everything “implemented” in human beings is also useful. That’s why I believe psychology issues will sooner or later become relevant and helpful in the design of intelligent systems. My own feeling, however, tells me that we’re still quite far from being able to understand and implement such artificial intelligence in a functional manner.

Forty years ago scientists thought they would soon be able to build robots with artificial intelligence. What went wrong?

Schölkopf: Those machines were built by engineers, which is why people could understand them: when a sensor in such a robot registered a certain measurement, a motor in the robot would begin to move. Artificial intelligence, however, isn’t something engineers have traditionally been able to construct in this way. The only truly intelligent systems are biological ones, and that is precisely why they are so hard for people to understand. Homespun programs like those of the past won’t work here in any case.

Are you saying machines need to learn how to learn?

Schölkopf: Learning-enabled systems do offer certain benefits, but they’re also designed by engineers. The most progress here has been made with supervised learning, in which humans first have to evaluate the measured data, or label it, as we say. You can train facial recognition software, for example, by telling a program when a certain person appears in an image. If you do that often enough, the program will be able to extrapolate to a limited extent, even if the person in question looks a little different each time.
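
A minimal sketch of supervised learning in Python, using scikit-learn's small digits dataset as a stand-in for labeled face images (the dataset and model choice are illustrative assumptions, not part of the interview):

```python
# Supervised learning in miniature: a human first attaches a label to each
# training image, and the trained classifier then extrapolates to images it
# has never seen. The digits dataset stands in for labeled face images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)            # images plus human-provided labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("accuracy on unseen images:", round(model.score(X_test, y_test), 3))
```

The essential ingredient is the human-provided label for every training example; the model then generalizes, within limits, to examples it was never shown.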

In other words, human and animal learning probably can’t be considered supervised learning?

Schölkopf: Right. In most cases it isn’t; but it is supervised learning, for example, when parents show their child a picture of a cat and tell them it’s a cat. Gripping an object, on the other hand, is something children learn by themselves. Machines still can’t do this. That’s why we’re increasingly using something called “reinforcement learning,” which is a kind of middle way. Here, a robot designer no longer tells the machine which path its gripper arm needs to take. He or she only reports whether or not the robot successfully gripped the object. The machine then learns which movements lead to success, and determines the best way to move the arm.
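
The following toy sketch in Python (the action names and success probabilities are invented for illustration) captures that middle way: the only feedback the learner receives is success or failure, yet it settles on the most reliable grasp.

```python
# Reinforcement-learning sketch: the designer only reports success or failure
# of each grasp attempt; the agent learns which approach works best.
import random

success_prob = {"from_above": 0.8, "from_side": 0.4, "from_below": 0.1}  # hidden from the agent
actions = list(success_prob)
value = {a: 0.0 for a in actions}       # estimated success rate per action
counts = {a: 0 for a in actions}

for trial in range(2000):
    # explore occasionally, otherwise pick the action that looks best so far
    a = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)
    reward = 1.0 if random.random() < success_prob[a] else 0.0   # only success/failure
    counts[a] += 1
    value[a] += (reward - value[a]) / counts[a]                  # incremental average

print("best action:", max(value, key=value.get), value)
```

After a couple of thousand trials the estimated success rates approach the hidden true ones, and the agent favors the approach that works best, without ever being told the correct trajectory.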

What happens when you link biological systems and machines, as you did with your brain-computer interface, which translates brainwaves into muscle movements?

Schölkopf: The brain-computer interface is designed to help paralyzed individuals move their arms by having them imagine the movements while we simultaneously measure their brainwaves. The work our brains do can’t be modeled mathematically from first principles, which is why we need to use supervised learning here as well. During the training phase, a researcher records not only the patient’s brainwaves but also the imagined movements. If we put in enough data, we can achieve a recognition rate of between 80 and 90 percent. Nevertheless, the degree of generalization — by which I mean the ability to apply what was learned to similar problems — is very low. For example, knowing what brainwaves for hand movements look like doesn’t mean you can figure out how to move the legs. We humans are the masters of that: we learn to write with our hands on paper but can still use our arms to write the same letters on a blackboard, more or less in the same handwriting, only bigger.
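
A small synthetic sketch in Python of that limited generalization (the "brainwave" features are simulated Gaussian data, so the numbers are illustrative only): a classifier trained on imagined hand movements scores well on new hand data but fails on imagined leg movements, which follow a different pattern.

```python
# Train on one task, test on another: the classifier does not transfer.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def signals(mean_a, mean_b, n=500, dim=30):
    # two classes of simulated feature vectors with different means
    X = np.vstack([rng.normal(mean_a, 1.0, (n, dim)),
                   rng.normal(mean_b, 1.0, (n, dim))])
    y = np.array([0] * n + [1] * n)
    return X, y

X_hand, y_hand = signals(0.3, -0.3)      # imagined left/right hand movements
X_leg,  y_leg  = signals(-0.2, 0.4)      # imagined leg movements: different pattern

clf = LogisticRegression(max_iter=1000).fit(X_hand, y_hand)
X_hand2, y_hand2 = signals(0.3, -0.3)    # fresh hand-movement data
print("same task:", clf.score(X_hand2, y_hand2))  # high accuracy
print("new task :", clf.score(X_leg, y_leg))      # collapses, pattern is reversed
```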

What is machine learning mainly used for today?

Schölkopf: It’s being used in things we don’t see but nevertheless use every day — search engines. Many of the people Google hires are experts in machine learning. Banks also use machine learning to predict share price movements, for example. And there’s an interesting medical application as well: in clinical practice, positron emission tomography (PET) is usually combined with a computed tomography (CT) unit, whose images are used to correct the intensity values of the PET data. Doctors, however, prefer magnetic resonance imaging (MRI) devices, because they also provide physiological information, and Siemens recently presented such a combined MR-PET system. Our institute has developed a method that predicts synthetic CT images on the basis of MRI images. It was made possible by training on pairs of MRI and CT images. As a result, we can correct the PET images as if they had been recorded together with a computed tomography device.
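
The core idea can be sketched as a regression problem in Python (the data is randomly generated, and the patch size, model, and numbers are illustrative assumptions, not the institute's actual method): learn a mapping from MR intensities to CT intensities on paired images, then apply it to a new MR scan to produce a synthetic CT.

```python
# Learn MR -> CT from co-registered training pairs, then predict a pseudo-CT
# for a new MR scan (all values below are synthetic stand-ins).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
mr_patches = rng.random((5000, 27))                   # 3x3x3 MR patches, flattened
ct_values  = 1000 * mr_patches.mean(axis=1) + rng.normal(0, 20, 5000)  # paired CT voxels

model = RandomForestRegressor(n_estimators=50).fit(mr_patches, ct_values)

new_mr_patches = rng.random((10, 27))                 # patches from a new MR scan
pseudo_ct = model.predict(new_mr_patches)             # synthetic CT values
print(pseudo_ct.round(1))
```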

What advances can we expect in machine learning over the next ten or 20 years?

Schölkopf: Progress will certainly be made in processing large amounts of data with increasingly powerful computers. But it’s difficult to say whether fundamentally new methods will also be developed. I hope that advances will be made in causal learning. At the moment, our methods identify statistical regularities, but not the causal laws behind them. Consider the following example: countries with large stork populations also have higher birth rates. So does this mean the storks bring the babies? Of course not — but the methods we use today can’t tell the difference between such a coincidental correlation and a genuine cause, which is why we need methods that uncover causal laws.
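
The stork example is easy to reproduce in a few lines of Python with made-up numbers: a hidden common cause (here, how rural a region is) drives both variables, so they correlate strongly even though neither causes the other.

```python
# Correlation without causation: a confounder drives both variables.
import numpy as np

rng = np.random.default_rng(0)
rural = rng.random(1000)                              # hidden common cause
storks = 50 * rural + rng.normal(0, 2, 1000)          # more storks in rural regions
births = 12 * rural + rng.normal(0, 1, 1000)          # higher birth rates there too

print("correlation:", round(np.corrcoef(storks, births)[0, 1], 2))  # high, yet no causal link
```

The correlation comes out well above 0.9, so a purely statistical learner would happily "predict" birth rates from stork counts; closing that gap between correlation and cause is what causal learning is about.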

What about the age-old dream of robots that are capable of learning?

Schölkopf: I believe there will in fact be a greater number of physically autonomous systems in the future. Researchers 40 years ago thought robots would be omnipresent today. That hasn’t turned out to be the case, and I also don’t believe we’ll be seeing robot nurses in hospitals, for example. After all, humans are better at taking care of other humans than machines are. What we are more likely to see will be micro-robots with artificial intelligence that can go into action where people can’t, and do things like treat and destroy a tumor inside the body.

Interview conducted by Bernd Müller.