The human eye is hard to beat. To equal its power, about 125 million photo sensors would have to be concentrated in a few square millimeters, and that's without considering image processing. Although today's sensors still can't match that, when it comes to some things, like resolution and speed, they are already superior to the human eye.
Siemens' 3D SISCAN scanner system examines silicon chip surfaces for production errors with a height resolution of approximately 100 nm
The human eye is getting some competition from sensors: Automotive industry engineers are developing optical assistance systems that can recognize road signs and other traffic. And the systems may be able to guide drivers automatically through traffic in the future. Intelligent cameras monitor highways and tunnels and control access to various locations through biometric procedures (see Pictures of the Future, Spring 2002, Transportation Safety, and Spring 2003, Smart Cameras). Modern medicine would be unthinkable without imaging technology (see Ceramic Detectors), while optical measuring technology also helps monitor pollutant emissions. And high-tech eyes in industrial production and quality assurance scan micro and nano structures. Dr. Günter Doemens, head of the Sensor Solutions Center at Siemens Corporate Technology (CT) in Munich, describes one of the key trends here: "The development of optical sensors is currently moving from the second into the third dimension, in other words, toward three-dimensional vision, because recognition processes are more robust in 3D."
Measuring Height on a Nanometer Scale. Dr. Anton Schick, head of Development at Siemens Logistics and Assembly Systems in Munich, and his team developed and launched the SISCAN sensors for 3D analyses a few years ago. On SISCAN's screen, Schick can view large-scale relief images of micrometer-size components or tiny laser welds. SISCAN precisely measures surfaces (wafers, for example) in nanometers and examines them for flaws. The system works on the confocal microscope principle: Laser light is beamed vertically onto the object to be measured, and a detector captures the reflected light. To measure height and depth profiles to within 100 nm, the focused laser beam oscillates back and forth along the beam axis 4,000 times per second. The detector receives the strongest signal precisely when the beam focus hits the surface. The associated height value is calculated in real time.
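The confocal measurement idea can be sketched in a few lines: sweep the focus along the beam axis, find where the detector signal peaks, and refine that peak position by interpolation. This is an illustrative model only; the signal shape, sweep range and function names here are assumptions, not the actual SISCAN implementation.

```python
import numpy as np

def confocal_height(z_surface_nm, sweep_nm, response_width_nm=500.0):
    """Estimate the surface height from one focus sweep (illustrative model)."""
    # Model the detector signal as a peak that is strongest when the
    # focus position coincides with the true surface height.
    signal = np.exp(-((sweep_nm - z_surface_nm) / response_width_nm) ** 2)
    i = int(np.argmax(signal))
    # Parabolic interpolation around the peak gives sub-step precision,
    # which is how a coarse sweep can still resolve heights finely.
    if 0 < i < len(sweep_nm) - 1:
        y0, y1, y2 = np.log(signal[i - 1 : i + 2])
        offset = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
        step = sweep_nm[1] - sweep_nm[0]
        return sweep_nm[i] + offset * step
    return sweep_nm[i]

sweep = np.arange(-5000.0, 5000.0, 100.0)   # focus positions, 100 nm steps
print(confocal_height(1234.0, sweep))        # recovers a value close to 1234 nm
```

The interpolation step illustrates why the sweep itself does not need nanometer-fine steps: the peak location between samples carries the extra precision.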
A sensor recognizes the shape of up to 20 parts per second. Faulty items are removed, while the correct ones are picked up by a robot
Schick also splits the laser beam in his sensor into 64 parallel beams that together measure more than half a million pixels (height values) per second. To obtain a 3D surface image from this, the measured object is simultaneously shifted horizontally at a speed of 80 mm/s. Researchers are aiming to reduce the size of the sensor head, which weighs 4 kg, so it can easily be guided by a robot arm. The beam from the semiconductor laser also isn't the ideal light source: Its cross-section is generally both elliptical and astigmatic, which creates unwanted signal spread. A glass fiber-optical device would be the ideal solution. Schick's team recently developed such a sensor. With a scanning rate of 8,000 pixels per second, it's considered the world's fastest single-channel, fiber-optical measurement sensor.
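The figures above can be cross-checked with simple arithmetic. One plausible reading (an interpretation, not a stated specification) is that each of the 4,000 focus oscillations per second yields two height values, one per sweep direction, across all 64 beams:

```python
# Back-of-the-envelope consistency check on the article's figures.
# "measurements_per_oscillation = 2" is an assumption: one height value
# per sweep direction of the oscillating focus.
beams = 64
oscillations_per_s = 4_000
measurements_per_oscillation = 2

rate = beams * oscillations_per_s * measurements_per_oscillation
print(f"{rate:,} height values per second")   # 512,000, more than half a million
```

Under this reading, the 64-beam sensor's throughput lines up with the "more than half a million pixels per second" quoted above.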
Another scanner that Siemens developed for detecting defects at nanometer scales (the smallest particles on what should be an ultra-precise surface, for example) also works with a laser beam. "It's as if you were moving along a mirrored surface the size of a soccer field at a speed of 100 km/h looking for a grain of dust," says Dieter Spriegel, Project Manager at CT's Sensor Solutions Center. The laser beam simply scans the object line by line. The beam is focused down to just a few micrometers, and if it hits a defect, it gets scattered. A special set of optics guides the scattered light to the system's highly sensitive detectors. The developers can currently spot particles measuring approximately 80 nm using the system. "But that's not small enough for us," says Spriegel. In a project sponsored by the German Ministry of Education and Research, he and his team are therefore looking to spot particles of 60 nm in an initial phase and then those measuring 30 nm beginning in 2007. Such detection power would be suitable for ensuring defect-free lithography masks in microchip production. But it will require the laser beam to be more sharply focused. The problem here is that the smaller the scanning beam, the faster the signal recognition and processing systems have to be for the application to be economically feasible: Three detectors have to register 8,000 scan lines of 3,000 pixels each per second and then process them at a speed of approximately 1 Gbit/s. To examine an area of approximately 15 × 15 cm² with the desired sensitivity of 60 nm, the system would need to process 2,400 Gbit, which would take nearly 30 minutes with three high-performance computers, just about acceptable for semiconductor production. A sensitivity of 30 nm would increase the data volume fourfold, presenting a Herculean challenge for developers.
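Two of the numbers above can be checked directly: the raw pixel rate across the three detectors, and why halving the target particle size quadruples the data volume. This is pure back-of-the-envelope arithmetic; bits per pixel and scan pitch are not specified in the article.

```python
# Raw measurement rate: three detectors, each registering
# 8,000 scan lines of 3,000 pixels per second.
lines_per_s = 8_000
pixels_per_line = 3_000
detectors = 3

pixel_rate = detectors * lines_per_s * pixels_per_line
print(f"{pixel_rate:,} pixel values per second")   # 72,000,000

# Halving the detectable particle size (60 nm -> 30 nm) roughly doubles
# the required sampling density along each of the two scan axes:
scale = (60 / 30) ** 2
print(f"data volume grows {scale:.0f}x")           # 4x, the fourfold increase
```

The quadratic scaling is the key point: sensitivity improvements cost data volume in both scan directions at once.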
Laser Flash for 3D Cameras. Dr. Peter Mengel, Project Manager at CT's Sensor Solutions Center, and his team are developing a CMOS sensor that can register three-dimensional objects by using laser flashes. Plans call for the unit to be used to register 3D profiles of persons in entryways, for automatic baggage check-in at airports and to recognize unusual positions of vehicle drivers and passengers, to ensure airbags inflate properly (see Pictures of the Future, Fall 2003, Researchers and Patents). The unit's sensor sends laser pulses (each lasting less than 30 ns) to the object to be measured, and a semiconductor array, typically containing some 1,000 pixels, analyzes the reflected light pulses. A high-speed electronic shutter ensures that the light intensity a pixel is exposed to depends on the distance to the associated point on the object. The software uses this data to calculate and process the 3D image. A reference measurement, taken by leaving the shutter open somewhat longer, compensates for possible differences in object surface brightness.
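The shutter-gating principle can be sketched with a simple model: the later a pulse returns, the smaller the fraction of its energy that arrives before the gate closes, and dividing by the long-shutter reference exposure cancels surface brightness. All timing values and function names below are illustrative assumptions, not the actual sensor's parameters.

```python
C = 299_792_458.0  # speed of light, m/s

def gated_fraction(distance_m, gate_ns, pulse_ns):
    """Fraction of a rectangular return pulse captured before the gate closes."""
    t_return_ns = 2 * distance_m / C * 1e9          # round-trip time of flight
    f = (gate_ns - t_return_ns) / pulse_ns
    return min(1.0, max(0.0, f))                    # clamp to a physical fraction

def distance_from_ratio(ratio, gate_ns, pulse_ns):
    """Invert the gated/reference intensity ratio back to a distance."""
    t_return_ns = gate_ns - ratio * pulse_ns
    return t_return_ns * 1e-9 * C / 2

gate, pulse = 60.0, 30.0                      # ns; pulse under 30 ns, as above
r = gated_fraction(5.0, gate, pulse)          # simulate a pixel viewing 5 m
print(distance_from_ratio(r, gate, pulse))    # recovers 5.0 m
```

Because the ratio is dimensionless, a dark and a bright surface at the same distance yield the same value, which is exactly what the reference measurement is for.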
Researchers are also using laser light to inspect overhead power lines for rail systems. The faster the trains travel, the sooner the overhead lines and their supports wear out. If the damage isn't detected in time, the overhead traction line can tear, blocking the route and causing substantial delays. To detect such wear and tear, developers in a joint project by CT and Siemens Transportation Systems have installed diode line cameras with infrared lasers on the roof of a measuring rail vehicle that travels at up to 80 km/h, even at night. The diode lasers illuminate the overhead traction line and its suspension in any light conditions. The cameras record 22,000 image lines per second, which, when laid together, result in an "infinitely" long picture. At a resolution of 0.2 to 2 mm, the image processing system recognizes in real time how severely the line has been worn by the pantograph. The researchers' goal is to be able to conduct the tests at higher speeds, which would let them check more routes in a given period of time. To do this, though, they will have to increase the cameras' shutter speed and improve the image processing system. "We're developing a sensor system that will allow the measuring train to travel at up to 120 km/h," says Dr. Richard Schneider from CT.
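A quick calculation shows why higher vehicle speed demands a faster camera: at a fixed 22,000 lines per second, the distance the vehicle covers between two image lines grows with speed. This is back-of-the-envelope arithmetic on the article's figures; the real system's line rate at higher speeds may differ.

```python
def line_pitch_mm(speed_kmh, lines_per_s=22_000):
    """Distance traveled between two consecutive image lines, in mm."""
    speed_m_per_s = speed_kmh / 3.6
    return speed_m_per_s / lines_per_s * 1000.0

print(f"{line_pitch_mm(80):.2f} mm per line at 80 km/h")    # about 1.01 mm
print(f"{line_pitch_mm(120):.2f} mm per line at 120 km/h")  # about 1.52 mm
```

At 120 km/h the line spacing grows by half, pushing toward the coarse end of the 0.2 to 2 mm resolution range, hence the need for a faster shutter and image processing.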
The human eye is getting some competition from the CS10 sensor, says developer Michael Staudt from the New Sensor Technologies unit at Siemens Automation and Drives (A&D) in Amberg, Germany. By observing color distribution, the CS10 sensor recognizes within 30 ms if packages are lying properly on a production line or if bottle labels are correctly affixed. In a bottling plant, a conveyor belt moves bottles past a color sensor in a matter of milliseconds. In this short time, four white-light LEDs flash on the passing labels. A camera chip, similar to the CMOS chip in a camera phone, records the images. The image processing system in the sensor is only interested in the number of color pixels and how they are distributed (see Trends). The software compares this color pattern with a stored sample and emits a warning signal if there is a deviation in the color values. "This color sensor makes Siemens the leader in this field," says Staudt. "And to ensure this remains the case, I want to build as small and compact a sensor as possible in the future. The camera chip and image processor, which are still separate, should someday be installed together on one chip." Such a design should also make it possible to cut the recognition time from 30 to 10 ms.
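The comparison step described above, counting color values rather than analyzing the whole image, can be sketched as follows. The coarse binning, the 5% tolerance and all names are illustrative assumptions, not the CS10's actual algorithm.

```python
from collections import Counter

def color_pattern(pixels, levels=4):
    """Reduce an image to a histogram of coarsely quantized RGB colors."""
    step = 256 // levels
    return Counter((r // step, g // step, b // step) for r, g, b in pixels)

def deviates(pattern, reference, tolerance=0.05):
    """True if any color bin differs from the stored sample by > tolerance."""
    total = sum(reference.values())
    bins = set(pattern) | set(reference)   # Counter returns 0 for missing bins
    return any(abs(pattern[b] - reference[b]) / total > tolerance for b in bins)

# Stored sample: a mostly red label with a white stripe (synthetic pixels).
reference = color_pattern([(200, 30, 30)] * 90 + [(250, 250, 250)] * 10)
good      = color_pattern([(210, 40, 25)] * 88 + [(245, 255, 250)] * 12)
bad       = color_pattern([(30, 30, 200)] * 90 + [(250, 250, 250)] * 10)
print(deviates(good, reference), deviates(bad, reference))  # False True
```

Comparing histograms instead of full images is what makes a millisecond-scale decision plausible: the data shrinks from thousands of pixels to a handful of bin counts.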
Examples of sensors that see in 3D. Left: A CT image of a hearing-aid component only a few millimeters long. Center: The shape of a vehicle occupant as seen by an extremely fast CMOS sensor. Right: A laser scanner as it scans a surface's nanometer-scale defects
Ernst Lüthe's team from the A&D Factory Automation Sensors development department is facing similar challenges. Their high-tech eye can read data matrix codes, a sort of pixel image printed on production parts that provides graphically encoded information on product type or serial number, much like a bar code. The sensor currently consists of three components: the sensor head, the illumination unit and the controller. "In the next step, we want to reduce the size of the sensor and pack the components into a single device," says Lüthe. The team has already accomplished much in terms of sensor performance. Until recently, the data matrix sensor could perform five evaluations per second, but now it's capable of reading the codes of 20 parts in just one second. Such high scanning speed is very useful in applications such as letter sorting. The technology behind it involves an LED lamp that illuminates the object to be examined, whether a letter or a gray cast-iron housing. The image is then recorded by a CCD camera and analyzed by a digital signal processor. The real expertise here is contained in the image-processing algorithm, which must first find the code in the picture before it can evaluate it, under extremely difficult conditions in some cases. For example, the codes are lased, stamped or printed, and the surfaces can be smooth, rough, dirty or reflective. "But our sensor can handle it," says Lüthe with pride.
3D Flight through a Hearing Aid. Imaging sensors can do even more, like looking directly into a component, something the human eye will never be able to do. Jürgen Stephan, who is responsible for X-ray Technology at CT's Sensor Solutions Center, uses a computer tomograph to peek into computer chips, in-the-ear hearing aids and cell phones. The image-processing system then merges up to 2,000 individual X-ray images into a single 3D picture. Stephan can even fly through the component on his screen, in a manner similar to a virtual endoscopy in the medical sector. This allows him to detect hidden cracks or other material flaws at the micrometer scale without damaging the component. To do this, he uses microfocus and nanofocus X-ray tubes with focal points measuring around 600 nm, much smaller than those of the X-ray tubes used for medical applications and capable of much higher resolution.
The largest flat detector measures 24 × 24 cm² and achieves a resolution of one-thousandth of a millimeter. "Today's microsystem technology is taking us to the limits of all components," Stephan says. He explains that he could move the detector to the left and right to achieve a virtual detection width of 6,000 pixels, which would yield a threefold improvement in resolution. However, this would also result in a data volume of 100 Gbyte. With that amount of data, Stephan points out, no standard commercial computer today could create a 3D image that you could virtually travel through. But perhaps in the future…
Rolf Sterbak