Artificial creatures that display feelings, such as the robot Mexi shown here, cute characters for chat rooms (top left) and Japanese creations (bottom left), can make interaction with machines much easier
"Hello there, I'm Mexi." Like a curious rabbit, the little fellow with the tinny voice perks up his ears and beams. He stares at the unfamiliar guest for a while, but then his attention seems to wane, and he gazes at the floor apathetically, then sets his sights on two colored balls that lie in front of him on a tableuntil that gets too boring for him as well. "Mexi has needs just like a person," says 43-year-old Dr. Bernd Kleinjohann, Deputy Director of C-LAB in Paderborn, Germany, a research institute run jointly by Siemens and the University of Paderborn. "Once a need has been satisfied, he loses interestlike an infant."
Kleinjohann is one of the fathers, or rather creators, of Mexi, the Machine with Emotionally Extended Intelligence. Mexi is not a living thing, but rather a somewhat bizarre concoction made of Plexiglas, microchips, motors, cameras and small lights. Nevertheless, with its protruding eyes and its lips of red cord, the robot looks somehow human, almost endearing. You automatically smile back and catch yourself wanting to talk to this artificial head. And indeed, Mexi can already speak short sentences and express emotions. For instance, he raises his voice when he's happy, and lowers it when he's in a bad mood.
Role Models. Does Mexi have feelings? "No, he just appears to," says Kleinjohann, shaking his head as he opens the Emotion Engine on a PC, from which Mexi can be programmed. Three slide controls appear on the screen. They represent the alternating desires Mexi tries to satisfy: communication (looking at people), play (watching colored balls) and greeting the Linux mascot, a penguin doll. With the help of another control, Mexi can display a wider range of emotions. He can show fear, for example, by cringing when someone waves a hand in front of his camera eyes or comes too close for comfort.
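To picture how such a controller might behave, here is a minimal sketch in Python of a need-driven behavior loop: each need builds up over time, the robot attends to whichever need is currently most pressing, and satisfying it makes the interest fade, just as Kleinjohann describes. The class, growth rates and satisfaction amount are illustrative assumptions, not C-LAB's actual Emotion Engine code.

import random

# Illustrative sketch of a need-driven controller like the one described
# for Mexi; names and numbers are invented for this example.
class Need:
    def __init__(self, name, growth):
        self.name = name          # e.g. "communication", "play", "greeting"
        self.growth = growth      # how fast the urge builds up per tick
        self.level = random.random()

    def tick(self):
        self.level = min(1.0, self.level + self.growth)

    def satisfy(self, amount=0.8):
        # Once satisfied, the need drops and the robot loses interest.
        self.level = max(0.0, self.level - amount)

def step(needs):
    for n in needs:
        n.tick()
    # Attend to whichever need is currently most pressing.
    current = max(needs, key=lambda n: n.level)
    current.satisfy()
    return current.name

needs = [Need("communication", 0.15),   # looking at people
         Need("play", 0.10),            # watching colored balls
         Need("greeting", 0.05)]        # greeting the penguin doll
for _ in range(5):
    print("Mexi attends to:", step(needs))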
Mexi has two famous role models: Cog and Kismet, both of which were built at the Massachusetts Institute of Technology (MIT) just outside of Boston. Robot pioneer Professor Rodney Brooks proceeded from the hypothesis that a robot can acquire attributes similar to those of a human being only if it is allowed to explore its surroundings in the same way a small child does. Cog is now able to distinguish the faces of his handlers from strangers' faces, and he can tell whether or not a person is looking directly at him. Like Mexi, Kismet, the successor to Cog, has feelings. If no one pays attention to him, he looks sad. "Kismet was designed to emotionally blackmail people," says his creator, Cynthia Breazeal.
The Need for Emotions. Neglected by cognitive researchers until recently, emotions now seem to be essential to the success of artificial intelligence, a field that has disappointed many since its promising birth in the 1960s and '70s. The Affective Computing research group at MIT is proceeding from the assumption that emotions are important for the ability of intelligent machines to make flexible and rational decisions.
The researchers draw this inference in part from studies conducted by Antonio Damasio, a neurologist at the University of Iowa. In the course of his research, Damasio discovered that emotionally disturbed patients make their decisions much as computers do: inflexibly and according to simple if-then patterns.
All of this is speculation for Bernd Kleinjohann. He's not particularly concerned whether Mexi's emotions are genuine or pretend. What's important, he believes, are the feelings that the artificial head triggers in people. "People project their emotions onto technical devices they interact with," he observes. Although robots or artificial characters on a screen, so-called avatars, do not have real feelings themselves, they trigger emotional reactions in people. And this can be used, for example, to design better user interfaces. This new discipline is called robotic user interfaces, and the objective behind it is not to build robots and avatars that resemble people, but to develop synthetic creations that can bridge the gap between human needs and the information present in the computer world. Kleinjohann imagines future information kiosks or cash machines that enter into spoken dialog with users through a device that might be similar to Mexi.
Christoph Bartneck of the Eindhoven University of Technology in the Netherlands imagines robot interfaces above all in the entertainment and educational sectors. He believes that robots could also take over the job of controlling an electronic home.
In Japan, where the subject of humanoid robots is viewed with far fewer inhibitions than almost anywhere else, domestic helpers of this kind are already highly popular. One example is the R100 from NEC, which looks like a miniature version of R2D2 from Star Wars. This monitor-based creation can read e-mails out loud for its owner and control the television and video recorder. But Kleinjohann believes that a physical implementation such as Mexi is more credible and therefore superior to a computer-screen avatar. "The PC with a screen isn't the terminal of the future," he says.
Kismet, Mexi's brother, was created at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, just outside of Boston
Friendly Faces. But ease of use can be enhanced with the personal computer as well. Dr. Stefan Schoen, head of the User Interface Design department at Siemens Corporate Technology (CT) in Munich, Germany, emphasizes that finding the right mixture of user friendliness and attractiveness is crucial to the acceptance of user interfaces in PCs and PDAs. The attractiveness factor has a big influence on subjective first impressions, and therefore on sales.
The same is true for voice-response systems. A friendly computer voice is interpreted as more helpful than a neutral or unfriendly one. Feelings can therefore be deliberately manipulated on a subliminal level. However, it is important that the interaction remain controllable, Schoen says. Although a machine may arouse emotions in people, it should not become unpredictable and irritate users with its own moods, like the depressive robot Marvin in Douglas Adams' novel The Hitchhiker's Guide to the Galaxy. Things can also backfire if an avatar acts too much like a person, since that might create expectations that the artificial creature cannot possibly fulfill. "Abstract gestures like pointing or head scratching when the computer is looking for an answer are very effective," notes Dr. Bernhard Kämmerer, who is responsible for interaction technologies at Siemens CT. In principle, that also applies to Clippit, which pops up to offer help texts in Microsoft programs. Nevertheless, the virtual paperclip doesn't go over well, because it appears without being invoked, says Stefan Schoen. "Many users feel controlled by Clippit," he says.
Schoen's colleague Heinz Bergmeier is currently developing avatars for a future UMTS chat application. The new generation of figures resembles amusing cartoon characters and can depict the feelings of chat participants on a cell phone display. These figures work in much the same way as the emoticons that many cell phone users like to include in SMS messages.
Instead of faces produced from symbols, such as :-) or :-(, a mobile chat can use humorous characters such as penguins or tortoises that act as avatars and smile or pout in a virtual chatroom. Using a slide control, the user can select from up to 13 different emotional states to transfer to his or her chat partner's phone. If the participants like each other, they can even go into a private room and have the penguin and the tortoise kiss by pressing a button.
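As a rough illustration of how typed emoticons might map onto such avatar states, here is a small Python sketch; the state names and the message format are assumptions for illustration, not the actual Siemens design.

# Illustrative mapping from typed emoticons to avatar emotion states,
# as in the UMTS chat application described above. The states and the
# message format are invented for this sketch.
EMOTICON_TO_STATE = {
    ":-)": "happy",
    ":-(": "sad",
    ";-)": "winking",
    ":-D": "laughing",
    ":-O": "surprised",
}

def encode_emotion(sender_id, emoticon):
    """Build a small message telling the partner's phone which
    expression its avatar (penguin, tortoise, ...) should show."""
    state = EMOTICON_TO_STATE.get(emoticon, "neutral")
    return {"from": sender_id, "avatar_state": state}

print(encode_emotion("alice", ":-)"))   # {'from': 'alice', 'avatar_state': 'happy'}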
The Sound of Anger. Obviously, it is easy to call forth emotional reactions in people. A yapping plastic dog like Sony's toy robot Aibo or a cartoon character on a cell phone display is all it takes. But could the reverse be true? Would it be possible for machines to recognize and use emotions? To date, achievements in this area have been modest. Researchers at the University of Munich have used a computer to correctly interpret 80 percent of human gestures, but the number of gestures was small and they were performed by actors.
For Professor Harald Höge of the Interaction Technologies Department at Siemens CT in Munich, what is particularly interesting is how emotions are expressed in speech. In the future, says Höge, the voice response systems frequently used by call centers for preliminary customer guidance will put a caller through to a flesh-and-blood staff member immediately if they determine that the caller is angry. But this will not happen in the near future.
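The routing logic Höge describes could look something like the following Python sketch: estimate the caller's emotional state from prosodic features such as pitch and speaking rate, and hand the call to a human agent if anger is detected. The features and thresholds here are invented placeholders; a real system would use a classifier trained on labeled speech.

# Illustrative sketch of anger-based call routing. Raised pitch, larger
# pitch swings and faster speech are common (though unreliable)
# correlates of anger; the thresholds below are made up for this example.
def sounds_angry(mean_pitch_hz, pitch_variability, speech_rate_wps):
    return (mean_pitch_hz > 220
            and pitch_variability > 40
            and speech_rate_wps > 3.5)

def route_call(features):
    if sounds_angry(**features):
        return "transfer_to_human_agent"
    return "continue_automated_dialog"

print(route_call({"mean_pitch_hz": 250,
                  "pitch_variability": 55,
                  "speech_rate_wps": 4.0}))   # transfer_to_human_agent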
"The most effective thing would be to tap right into the brain to determine a person's feelings," says Dr. Martin Stetter of the Neurocomputing Center at Siemens CT. Stetter is developing a brain-computer interface that senses simple emotional states by means of electroencephalograms.
Indeed, certain brainwaves are very reliable indicators of fright, relaxation or tiredness. This means that a cap fitted with sensors could be used to measure the effect user interfaces have on test subjects. The technology could, for instance, be adapted to automotive safety systems that would sound an alarm if the driver started dozing off.
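A toy version of such a drowsiness monitor can be sketched in a few lines of Python: compare EEG power in the slow theta band, which rises with drowsiness, against the faster beta band, which accompanies alertness, and raise an alarm when slow-wave activity dominates. The band choice and the threshold are assumptions for illustration, not Siemens' actual method.

import numpy as np

def band_power(signal, fs, low, high):
    # Power in a frequency band, computed from the FFT spectrum.
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return float(spectrum[mask].sum())

def driver_dozing(eeg, fs=256.0, threshold=2.0):
    theta = band_power(eeg, fs, 4.0, 8.0)    # rises with drowsiness
    beta = band_power(eeg, fs, 13.0, 30.0)   # rises with alertness
    return theta / (beta + 1e-9) > threshold

# One second of synthetic "drowsy" EEG dominated by a 6 Hz component:
t = np.arange(0, 1.0, 1.0 / 256.0)
eeg = np.sin(2 * np.pi * 6 * t) + 0.2 * np.random.randn(t.size)
print("Alarm!" if driver_dozing(eeg) else "Driver alert.")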
Brain Piercing. In animal experiments, monkeys have learned to control a robotic arm with the power of their thoughts. To do so, however, a team led by Johan Wessberg from Duke University in Durham, North Carolina, had to implant electrodes in the brains of the test animals. Taking these thoughts a step further, Stetter believes that it may eventually be possible to transmit simple feelings from one individual's brain to another or even to a robot by means of what he calls brain piercing. Nevertheless, even he doesn't believe that the entire gamut of a person's thoughts and emotions will ever be transmitted in this manner: "That's pure science fiction," he says.
Bernd Müller