Our new companions are called Aibo, Asimo, Cog and Kismet. Their bodies are made not of flesh and blood, but of metal, circuits and sensors. Asimo looks like an astronaut. He can stiffly walk up a staircase on two legs. Aibo, the pooch, barks and likes it when his master pets him. Cog, the torso, is supposed to learn how to behave through interaction with his environment. And Kismet, the metallic head with the goggle-eyes and huge lips, can smile or show fear or anger (see article Say It with Feeling). Could we say that Aibo, Asimo, Cog and Kismet are intelligent?
At MIT, researchers are trying to imbue machines like Cog with the capacity for common-sense reasoning
The dream of building machines that resemble animals or people is an old one, reaching back at least to the 18th century. In 1738, French inventor Jacques de Vaucanson presented to an amazed public a flute player that moved its tongue and lips like a person and pressed its fingers over the holes of the instrument to play various songs. Vaucanson built more automatons, and a veritable machine boom followed. The novel Frankenstein, published in 1818, reinforced the idea that one could build a copy of a person. But only with the advent of the computer did the dream of intelligent machines take on a tangible form. Now, says Professor Marvin Minsky, artificial intelligence (AI) pioneer and founder of the famous AI Lab at the Massachusetts Institute of Technology (MIT), it's only a matter of time before there are robots that measure up to people.
Other scientists offer similar predictions. Professor Hans Moravec of Carnegie Mellon University in Pittsburgh, Pennsylvania, believes that by 2010 robots will be able to move with the intelligence of small lizards. By 2020, says Moravec, machines will be as adaptive as mice; by 2030 as smart as apes; and by 2040 they will rival the full cognitive ability of human beings, having the power of imagination, and the ability to learn and modify their behavior. Eventually, says Moravec, they will be so perfect that people will implant their minds in them. Thus, by the end of the 21st century, human and artificial intelligence will merge, creating a new life form (see essay).
Simulating the Brain. Many bold visions of this kind are based on the assumption that intelligence can be created through sheer computing power. "We are looking for the gold of the Incas, but we haven't even discovered America yet," cautions Professor Christoph von der Malsburg, who designs software that recognizes faces at Bochum University in Germany and the University of Southern California in Los Angeles. On the one hand, he says, modern computing is trying to make models of the brain, despite the fact that researchers do not yet understand how it works. On the other hand, AI researchers are slowly realizing how difficult it is to generate reliable behavior in a natural environment. To put it differently: In the laboratory it may be possible to build a machine that recognizes faces, navigates a path or takes hold of objects. But the real world is incomparably more complicated—and yet people manage to find their way around in it.
But if neither Cog nor Kismet, both of which are being developed at the AI Lab at MIT, nor Aibo (Sony) nor Asimo (Honda) can reliably find its way around in the real world, the questions then become: Do machines even have the potential for intelligence? How does one implant common sense into them? And how can they obtain knowledge of the natural world?
Researchers are only slowly feeling their way around the question of what intelligence is. "There is no comprehensive theory of intelligence," says Professor Helge Ritter, a neuro-computing specialist at Bielefeld University in Germany, who, together with his colleagues, is currently teaching a robot to recognize language and gestures. One thing, at least, is clear: Human intelligence can be traced back to the large number of specialized functions in our brains. We can identify objects of all kinds. We can move around without bumping into things. We can recognize the feelings of others and express our own emotions. We learn from experience. We plan our future. All of this is based on the complicated interactions that take place between numerous parts of our brain. But because researchers are still a long way from understanding how these parts of the brain act in concert, and because each part is itself extremely complex, the builders of intelligent robots still have to limit themselves to small units of intelligence. Some therefore take up visual intelligence and make computers recognize images; others replicate spatial intelligence and train machines to find their way around a room.
Two Research Groups. In Munich, two teams from Siemens Corporate Technology are working on modeling and imitating intelligence. Their approaches are very different, but they nevertheless complement one another.
Researchers working with Professor Bernd Schürmann in the Neural Computation group chose biology as their source of ideas. They did so because Schürmann is convinced that, as an optimized machine for reasoning, the brain knows best how to process signals from the outside world. Thus, networks of artificial nerve cells should operate in the same way real neurons do, preferably down to the level of biochemistry.
A network of this kind with millions of electronic nerve cells has been designed by award-winning Siemens researcher Dr. Gustavo Deco as a software solution. The network has, for instance, been presented with the task of identifying a doorknob. "Performing this kind of activity requires a spark of intelligence, if you like," says Schürmann. The software must know what a doorknob generally looks like. It can accomplish this because certain cells have been trained to recognize doorknob-shaped objects, having learned this in advance from a large number of examples. But before the network can signal that it has identified a doorknob, over one million differential equations per second must be solved.
The software can identify not only doorknobs and other objects, but also items of a specific color, such as red or blue doorknobs. As in the brain, color and shape are processed by different networks of nerves, which then combine their information. The neurons that specialize in certain patterns—so-called grandmother cells—represent the optical common-sense knowledge of a robot. Thanks to the collective capacity of these cells, Deco's artificial brain recognizes naturally occurring patterns.
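The principle described above—specialized "grandmother cells" that respond to learned patterns, with shape and color handled by separate channels whose verdicts are then combined—can be illustrated with a deliberately simplified sketch. This is not Siemens' actual network (which simulates spiking neurons via differential equations); all feature vectors, prototypes and thresholds below are invented toy values.

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

class GrandmotherCell:
    """A unit that 'fires' when the input is close enough to its learned prototype."""
    def __init__(self, prototype, threshold=0.9):
        self.prototype = prototype
        self.threshold = threshold

    def respond(self, features):
        return cosine(features, self.prototype) >= self.threshold

# Toy prototypes standing in for what the cells learned from many examples.
doorknob_shape = GrandmotherCell([0.9, 0.1, 0.8])   # invented "shape code"
red_color      = GrandmotherCell([1.0, 0.0, 0.0])   # invented "color code"

def is_red_doorknob(shape_features, color_features):
    # As in the article: shape and color channels judge separately,
    # then their information is combined.
    return doorknob_shape.respond(shape_features) and red_color.respond(color_features)
```

A matching input such as `is_red_doorknob([0.85, 0.15, 0.75], [0.95, 0.05, 0.02])` triggers both cells, while a mismatched shape vector leaves the shape cell silent and the combined answer negative.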
Deco, who recently became one of a handful of Inventors of the Year at Siemens, is currently engaged in a Europe-wide project designed to mold the software into special hardware. If, for instance, it becomes possible to capture something like 1,000 neurons on a chip, the network would be rapid enough to identify patterns in real time. At the moment, however, Deco's network of neurons is still being simulated by software, which makes calculations relatively slow.
Learning by Doing. The other Siemens AI group is taking a more pragmatic approach. "We want to develop a product that we will gradually instill with intelligence," says Rudolf Kober, head of Siemens' Intelligent Autonomous Systems center in Munich. Together with the Automation and Drives Group, Kober's team developed a cleaning robot that has already been on the market for two years. Following a training phase, the machine can react flexibly to obstacles as it glides independently along the aisles in a supermarket while cleaning the floors. Not even its designers can predict where it will go next, since this process is based on an internal map built on the basis of experience (see article Mr. Clean Reports for Service).
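The idea of an internal map built from experience can be sketched in miniature. The following is an invented illustration, not Siemens' algorithm: the robot's map is a grid in which it records cells it has already cleaned and cells where it bumped into an obstacle, and it always heads for the nearest cell it has not yet cleaned—so its route emerges from its history rather than from a fixed plan.

```python
from collections import deque

FREE, CLEANED, OBSTACLE = ".", "c", "#"  # cell states in the robot's internal map

def next_target(grid, pos):
    """Breadth-first search from pos to the nearest not-yet-cleaned cell.

    Returns that cell's (row, col), or None once everything reachable is cleaned.
    """
    rows, cols = len(grid), len(grid[0])
    seen, queue = {pos}, deque([pos])
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == FREE and (r, c) != pos:
            return (r, c)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            # Obstacles learned from earlier collisions are never entered.
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen \
               and grid[nr][nc] != OBSTACLE:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None
```

Because the map grows out of the robot's own collisions and coverage, two runs over the same floor can produce different routes, which is why even the designers cannot predict where the machine will go next.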
Tough to Beat. Despite all of the advances they have made in recent years, the two teams of Siemens researchers admit that imbuing robots with intelligence is an arduous task. The main snag is that the environment in which we live is more diverse than one first realizes, says Dr. Gisbert Lawitzky, who has been involved in developing the cleaning robot. Every chair leg that we can easily avoid is an obstacle for a machine. "To a certain extent, you start to feel humbled by human intelligence," says Lawitzky. Even though they do not see any theoretical impediments to developing a truly intelligent robot, the researchers are far from piecing together the components of intelligence into a machine that senses, plans and acts perfectly, communicates with people, and carries around a sort of picture of the surrounding world in its head.
All things considered, researchers tend to shy away from specifying when such a mechanical marvel will appear. "Not 50 but more likely 100 years from now," predicts Deco. "But maybe by 2015," says Lawitzky, "we will see an assistant that performs a number of useful functions in the household" (see article The Electronic Home). Such forecasts are being taken seriously. Ethicists at the European Academy in Bad Neuenahr, Germany, are already pondering the rights of our future metallic companions.
Jeanne Rubner
Whereas the first generation of neural networks in the 1980s used very simple artificial nerve cells, temporal dynamics played an important role in the second generation (1990s). Neurons were no longer static, but instead operated with pulsed signals like their natural counterparts. This time dependence allowed them to process input patterns far more complex than those that could be handled by static neurons. The third generation, which has evolved in the last few years, is referred to as neurocognitive because it takes into account knowledge concerning the organization of brain functions. Thus, in addition to the input of a certain visual pattern, the neurons in systems developed by Siemens researchers also receive data from other parts of the brain, such as the inferotemporal cortex. This area ensures that objects are recognized independently of their orientation in space. All of this, together with sophisticated sensor technology, lends a certain intelligence to machines.
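The contrast between the first two generations can be made concrete with a toy sketch (illustrative values only, not any research group's model): a "static" first-generation unit computes a fixed function of its current input, while a pulsed neuron accumulates input over time, leaks potential between steps, and emits a spike only when a threshold is crossed—so its output depends on the timing and history of its inputs.

```python
def static_neuron(x, threshold=1.0):
    """First generation: a memoryless threshold unit."""
    return 1 if x >= threshold else 0

class PulsedNeuron:
    """Second generation: a leaky integrate-and-fire unit with internal state."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0              # membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each step

    def step(self, x):
        self.v = self.v * self.leak + x
        if self.v >= self.threshold:
            self.v = 0.0          # reset after firing
            return 1              # spike
        return 0

# The same weak input that never triggers the static unit can, repeated
# over time, drive the pulsed neuron across its threshold.
n = PulsedNeuron()
spikes = [n.step(x) for x in [0.4, 0.4, 0.4, 0.0, 0.4]]
```

Here the static neuron outputs 0 for every individual input of 0.4, whereas the pulsed neuron integrates the stream and fires on the third step—a minimal example of the temporal dynamics the second generation introduced.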