
SIEMENS

Research & Development
Technology Press and Innovation Communications

Dr. Ulrich Eberl
  • Wittelsbacherplatz 2
  • 80333 Munich
  • Germany

Florian Martini
  • Wittelsbacherplatz 2
  • 80333 Munich
  • Germany
When Users and Computers Perceive Each Other Naturally

Professor Patrick Baudisch, 44, heads the Human Computer Interaction group at the Hasso Plattner Institute in Potsdam, which develops the interactive devices of the future. The researchers focus on new mobile devices such as cell phones, but also on very large display systems such as electronic desks, walls, and floors. Baudisch, a computer scientist who received his doctorate in Darmstadt, previously worked at Microsoft Research in Redmond, Washington, and at Xerox PARC in Palo Alto, California.

Baudisch works on unusual applications — like touch-floors that respond to shoe sole imprints and can call for help in emergencies, and “RidgePads” that are twice as precise as capacitive touch screens.

In one of your courses, you tell your students to design applications that can be operated by foot. What’s the point of that?

Baudisch: We initially viewed this as a kind of thought experiment, a possible answer to the question of how computer users might be able to interact directly with large amounts of data. The size of today’s multi-touch systems generally ranges from the dimensions of a cell phone to those of a coffee table; the reason they’re not any bigger has to do with the length of the human arm. So the question is: How can you implement the principle of “direct touch” with tens of thousands of objects? Why not use a pressure-sensitive touch-floor, for example? On such a floor, users can move among large volumes of data, while the floor recognizes them by their shoe sole imprints and can also tell whether they’re sitting, standing, or lying down. We have since developed this idea further into the concept of intelligent rooms.
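
To make the recognition step concrete: a pressure-sensing floor sees each footstep as a small gray-scale image, so identifying the person is essentially image matching. The sketch below shows one plausible approach in Python, using thresholding, cropping, and nearest-neighbor comparison; the function names, the fixed descriptor grid, and the matching method are illustrative assumptions, not the group’s actual implementation.

    import numpy as np

    def sole_features(frame, threshold=0.2, grid=(16, 8)):
        """Reduce one pressure frame (a 2-D array) to a fixed-size
        sole-imprint descriptor: threshold, crop, downsample, normalize."""
        ys, xs = np.nonzero(frame > threshold)
        if xs.size == 0:
            return None  # no contact in this frame
        crop = frame[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
        rows = np.linspace(0, crop.shape[0] - 1, grid[0]).astype(int)
        cols = np.linspace(0, crop.shape[1] - 1, grid[1]).astype(int)
        desc = crop[np.ix_(rows, cols)]
        return (desc / desc.max()).ravel()  # normalize out body weight

    def identify(frame, templates):
        """Nearest-neighbor match of a live imprint against enrolled users,
        e.g. templates = {"resident": sole_features(enrolled_frame)}."""
        desc = sole_features(frame)
        if desc is None:
            return None
        return min(templates, key=lambda u: np.linalg.norm(desc - templates[u]))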

Could such a technology make life easier for senior citizens?

Baudisch: That’s exactly what we’re thinking. Imagine that you’re a senior citizen with a system that lets you keep living at home: it supports your daily activities and checks whether you’re moving around enough or sitting in an ergonomic position. Such a system would also call for help if you fall, for example. Our touch-floor collects a huge amount of data: 13 million pixels, analyzed 30 times a second. It’s conceivable that in five to ten years you’ll be able to roll out a touch-sensitive carpet that will detect and localize things in the room it’s in.
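
Those figures imply a substantial raw data stream. A quick back-of-the-envelope calculation (the one byte per pixel is an assumption; the interview does not give the sensor’s bit depth):

    PIXELS_PER_FRAME = 13_000_000  # "13 million pixels", per the interview
    FRAMES_PER_SECOND = 30
    BYTES_PER_PIXEL = 1            # assumed 8-bit pressure values

    samples = PIXELS_PER_FRAME * FRAMES_PER_SECOND  # 390,000,000 per second
    print(f"{samples / 1e6:.0f} million samples/s, "
          f"~{samples * BYTES_PER_PIXEL / 1e6:.0f} MB/s of raw data")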

What other electronic wonders do you envision for our aging societies?

Baudisch: A student of mine named Christian Holz is working on implantable interfaces; here we’re talking about 25 years into the future. Many people today have implanted devices such as insulin pumps, pacemakers, and hearing aids, but the wearers cannot operate them themselves; instead, they have to go to a doctor to have the units configured. Christian is studying how people might interact with these devices directly, right through the skin. It’s an unusual perspective, but a very important one for our lives in the future.

What else might be possible in the next 10 or 20 years?

Baudisch: One new thing we’ll be seeing is ultra-mobile devices. These might be thumb-sized and could be worn as watches, pendants, or rings; in other words, as part of normal attire. The devices will give the wearer access to digital information anywhere and at any time. The most extreme examples are the “imaginary” devices we are researching now, which have no display and can be as small as the user desires. One of our prototypes, the Imaginary Phone, works like a cell phone. The user interacts with it via a small camera worn on the body, perhaps in the form of a brooch. The camera tracks the user’s hands, with the left hand serving as the phone’s interactive interface. The user then “types” with his or her right hand on the left one as if that hand were a touch screen.
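
The software core of such a prototype is the mapping from where a fingertip lands on the palm to the control it would hit on the remembered phone layout. Here is a minimal sketch, assuming the body-worn camera’s hand tracker reports normalized palm coordinates and assuming a hypothetical 4×3 icon grid; both are placeholders, not the actual Imaginary Phone design:

    def palm_to_icon(x, y, layout, cols=3, rows=4):
        """Map a fingertip position in normalized palm coordinates
        (0..1 on each axis, as a hand tracker might report) to the
        app icon occupying the corresponding phone grid cell."""
        col = min(int(x * cols), cols - 1)
        row = min(int(y * rows), rows - 1)
        return layout[row][col]

    LAYOUT = [  # hypothetical home-screen layout
        ["phone",    "mail",     "maps"],
        ["music",    "camera",   "clock"],
        ["notes",    "photos",   "weather"],
        ["settings", "calendar", "browser"],
    ]

    print(palm_to_icon(0.1, 0.05, LAYOUT))  # -> "phone" (top-left of the palm)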

Devices like the iPhone have revolutionized the market and the way we think. What type of device will we be using to make calls in 2017?

Baudisch: I can’t tell you that for sure, but there’s no doubt that desktop computers and mobile devices will merge until, at some point in the future, we will only be carrying a single computer around with us. It will have the shape and the functionality of a cell phone. When you get to the office, you’ll connect this unit to a keyboard and a display, and together they will serve as your PC. Apple, for example, is now working intensively on uniting the functions of PCs and mobile devices to create units with standardized applications and uniform interaction principles.

Smartphones and tablet computers respond to gestures. Does this mean we’re on our way to a multi-touch world?

Baudisch: We already have one. Multi-touch has been successful because it enables us to make devices smaller by overlaying the display and the input interface. Multi-touch supports people who are frequently mobile or who have to deal with lots of different people as part of their job — for example, doctors who need to go to many individual patients in a hospital and therefore carry their tablet PCs around with them.

Will we at some point be operating devices solely with gestures?

Baudisch: Definitely not. Certain types of interaction can be carried out very well by means of gestures — like conducting an orchestra or guiding an airplane on a runway. But what kind of gesture should you make if you want to ask an object for help? Intuitive gestures are limited to the applications they’re associated with in the physical world.

Which technologies do you believe will ultimately succeed?

Baudisch: Interestingly enough, no specific technologies will dominate. Instead, you’ll see an alternation between diversification and unification. An example of unification is Apple’s approach of merging PC and mobile lines. Nevertheless, we will still have specialized devices. Fifty years ago writers used typewriters, but today they write on the keyboards of their PCs. This will remain the case, because a PC keyboard is the perfect component for writing. If you want to play the piano, you’ll use a piano keyboard. However, despite the great variety of interface devices, you’ll probably still have only one computer, which will not only reduce costs but also keep all of your data consistent. At the same time, there’ll be different input and output devices, and people who usually work in one place will continue to use the keyboard-mouse setup for a longer period of time than you might think.

Will it be easier or more difficult for us to use technology 30 years from now?

Baudisch: It will be easier. Today’s children are already growing up with devices that they understand better than they grasp the physical world around them. You may have seen the YouTube video of a small child holding something printed on paper and trying to swipe a finger across it to turn the pages, as on an iPad. We used to use metaphors from the physical world, like those related to an office, to explain computers. Your e-mail program has an “inbox” because that’s where interoffice and outside mail was placed in the days before the Internet. Today’s young computer users have never worked in an office, which means they learn such things from computers before they experience them in the physical world. The logical consequence is that we now more frequently explain the physical world in terms of computers, like light switches marked with “0” and “1.”

So you’re saying we don’t need intuitive systems because our way of thinking has adapted to the computer world.

Baudisch: Yes and no. On the one hand, as an application developer I can assume a lot more knowledge on the part of users. On the other hand, I want to make operating a device simpler and more intuitive with the help of natural user interfaces, or NUIs. NUI systems are based on “new” interaction technologies such as multi-touch surfaces, cameras, and microphones. The term NUI actually describes a fundamental development: whereas the desktop computers of the 20th century perceived users merely as coordinates (the position of the mouse cursor), NUIs enable computers and users to perceive one another in a much more natural manner. Users see the computer interface in graphic detail, and the computer sees them in just as much detail through their gestures, facial expressions, and voice. NUI input, however, requires far more interpretation than input from a conventional device, where pressing the A key simply produces an A. If a computer’s camera sees a group of users with raised hands, that image can be interpreted in many different ways. The new challenge lies in eliminating the resulting ambiguity.
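
That contrast can be made concrete in a few lines: a key press maps to exactly one event, while camera input is better modeled as a ranked list of hypotheses that the application must disambiguate. In this toy sketch, the gesture labels and confidence scores are invented stand-ins for a real recognizer:

    def handle_keypress(key):
        """Conventional input: one event, one unambiguous meaning."""
        return key  # pressing 'A' yields exactly 'A'

    def interpret_camera_frame(frame):
        """NUI input: the same image supports several readings, so a
        recognizer returns ranked hypotheses instead of a single event."""
        return [("raise_hand_to_vote", 0.55),  # hard-coded stand-in scores
                ("wave_hello", 0.30),
                ("reach_for_object", 0.15)]

    best, confidence = interpret_camera_frame(frame=None)[0]
    if confidence < 0.7:  # resolving the ambiguity is left to the application
        print(f"Low confidence; asking the user to confirm: {best}")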

Researchers say that microchip performance will increase another thousandfold over the next 20 to 30 years. Do you see any technical limits regarding human-machine relationships?

Baudisch: The ability to make rapid calculations is definitely useful, but it long ago ceased to be an obstacle to progress in this field. For a certain period of time, there will still be limits regarding the miniaturization of batteries or the levels of brightness produced by very small projectors, and the heat emitted by microchips is also often a problem. Nonetheless, the processor in your cell phone today is 350 times faster than the best computer from the 1980s. For quite some time now, the real limits to development have been the limits of human beings themselves: aspects such as the resolving power of the human eye and the size of our fingers. In other words, interfaces have to accommodate human senses, which is why the issue of user interfaces is just as important today as it was in the 1970s, when Xerox developed the first user-friendly machines, the Alto and, somewhat later, the Star.

Interview conducted by Silke Weber