What does learning actually mean in a scientific sense?
Schölkopf: That depends on whom you ask. A psychologist would define learning as a change in behavior that results from experience. But that is only part of the picture: someone who injures their foot will limp, not because they have learned to but simply because it hurts. As a physicist, I instead look for regularities that lead from a specific input to a specific output. Scientists call this drawing of conclusions about cause and effect on the basis of observations "empirical inference." At my institute, we try to turn the underlying mechanisms into algorithms in order to solve problems that humans cannot solve on their own.
Can you provide an example?
Schölkopf: You find such problems wherever enormous amounts of data are involved. Take bioinformatics, for example. Geneticists want to find out where genes on a DNA strand begin and end. A lab experiment can generate data with millions of data points linked to one another by a high-dimensional relationship. No human being is going to discover regularities in that mass of data that predict where the right boundaries lie. But if you use the data to train software, things work out quite well. The great thing is that the estimates converge, as we say, which essentially means that the results become more precise as you feed in more data. That is the big benefit of machine learning: machines find the kinds of structures in large amounts of data that a human would never find. This is not surprising, given that our brains are optimized for perception and action, not for scientific analysis. Machine learning has another advantage in applications where we observe the environment with sensors that humans simply don't possess. After all, we're not equipped with built-in laser scanners to measure distances, for example.
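The convergence Schölkopf describes can be illustrated with a toy sketch. This is not the actual gene-finding method (that work used far more sophisticated learning machines on real genomic data); it is a minimal, hypothetical example in plain NumPy with synthetic data: a nearest-centroid classifier on two overlapping Gaussian classes, whose test accuracy tends to improve as the training set grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, rng):
    """n samples per class: class 0 centered at -2, class 1 at +2 (1-D)."""
    x = np.concatenate([rng.normal(-2.0, 1.0, n), rng.normal(+2.0, 1.0, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# A fixed, held-out test set shared by all runs.
x_test, y_test = make_data(500, rng)

def nearest_centroid_accuracy(n_train):
    """'Train' by estimating each class mean, then classify test points
    by whichever estimated centroid is closer."""
    x, y = make_data(n_train, rng)
    c0, c1 = x[y == 0].mean(), x[y == 1].mean()
    pred = (np.abs(x_test - c1) < np.abs(x_test - c0)).astype(float)
    return float((pred == y_test).mean())

# More training data -> less noisy centroid estimates -> (on average)
# more accurate predictions, mirroring the convergence described above.
for n in (2, 10, 100, 1000):
    print(f"n_train per class = {n:4d}  accuracy = {nearest_centroid_accuracy(n):.3f}")
```

The individual numbers fluctuate run to run, but the trend, accuracy approaching the best achievable rate as the sample size grows, is the statistical point being made in the answer.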