Using sophisticated technology, the latest hearing aids adapt automatically to almost any situation. They now allow users to hear nearly as well as their grandchildren.
A researcher tests a hearing aid's response to feedback in an anechoic room, as loudspeakers simulate sound scenarios, an essential step in the development of high-end hearing instruments.
Our lives are becoming louder: traffic noise, machinery, disco music, MP3 players. In fact, we're hearing more and more, and yet less and less all the time. Britain's MRC Institute of Hearing Research estimates that by 2015 about 700 million people worldwide will suffer from hearing impairment, and that by 2025 the figure could be as high as 900 million. In 1995, fewer than 450 million people suffered from hearing loss. The reason for this growing problem is not just our lifestyle but also the fact that people are living longer. Over time, influences such as exposure to loud noises, nutritional deficiencies, and illnesses add up. Initially, it becomes more difficult to perceive quiet or high-pitched sounds, fatigue sets in more quickly, and those affected avoid situations that require communication. Broken down numerically, approximately one out of every four persons over 65 years of age suffers some hearing loss. That increases to one out of every two persons over 75 and four out of every five persons over age 85.
Most types of hearing loss can't be reversed, but the effects can be mitigated. In a U.S. study, for instance, nine out of ten hearing aid users said they enjoy a higher quality of life thanks to the devices. Nevertheless, many potential users won't wear them—partly because the onset of hearing loss is usually gradual, and partly due to feelings of self-consciousness. Mention hearing aids and many people think of an unsightly protuberance behind the ear and grandpa turning the volume dial to get the annoying squealing sound under control. One result is that although 14 to 16 million Germans suffer from hearing loss, only about 3.5 million wear a hearing aid.
Hearing aids must be particularly good at one thing: making speech intelligible, even in a noisy environment. That's why directional microphone technology is important. Conventional systems are based on the assumption that speech comes from the listener's front. Frontal voice signals are thus reinforced, and ambient noises arriving from the side or from behind are muted. But if the speaker is walking alongside the listener or, even worse, sitting behind him in a car, most hearing aids reach their limits. This is remedied by the new SpeechFocus function, which uses ingenious algorithms to scan the sound coming from any direction for specific patterns that indicate speech. In particular, it looks for a modulation frequency of four hertz, because the volume of typical speech oscillates four times per second. If speech from behind is detected, the device automatically focuses on the rear and suppresses ambient noise from the front and sides.
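The core of such speech detection can be sketched in a few lines. The following Python fragment is only an illustration of the principle described above: it measures how strongly a signal's volume envelope fluctuates near four hertz. The function names, window sizes, and threshold are assumptions made for the sketch, not Siemens' actual SpeechFocus algorithm.

```python
import numpy as np

def speech_modulation_score(signal, sample_rate, target_hz=4.0, band_hz=2.0):
    """Score how strongly the volume envelope of `signal` oscillates near ~4 Hz,
    the modulation rate typical of running speech."""
    # Amplitude envelope: rectify, then smooth with a ~20 ms moving average.
    window = max(1, int(0.02 * sample_rate))
    envelope = np.convolve(np.abs(signal), np.ones(window) / window, mode="same")

    # Modulation spectrum of the envelope (remove the DC offset first).
    envelope = envelope - envelope.mean()
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sample_rate)

    # Energy near 4 Hz relative to all modulation energy.
    near_target = (freqs > target_hz - band_hz / 2) & (freqs < target_hz + band_hz / 2)
    return spectrum[near_target].sum() / (spectrum[1:].sum() + 1e-12)

def pick_focus_direction(front, rear, sample_rate, threshold=0.15):
    """Steer the directional focus toward whichever channel looks more speech-like."""
    if speech_modulation_score(rear, sample_rate) > max(
        speech_modulation_score(front, sample_rate), threshold
    ):
        return "rear"
    return "front"
```

In a real hearing instrument this decision would run continuously on short blocks of audio from several directional microphone channels, not just a single front/rear pair.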
No More Squeal. A standard feature of modern hearing aids is the ability to prevent feedback. This phenomenon, which almost everyone is familiar with from live concerts, occurs when a microphone is too close to a loudspeaker. The tones from the loudspeaker reach the microphone, at which point they are fed to the loudspeaker again—an endless loop that builds up in a fraction of a second and creates a loud squealing noise. Wearers of older hearing instruments experience this effect too, when amplified signals travel outward from the auditory canal and reach the device's microphone. Manually adjusting the hearing aid often helps, but not always. In modern hearing aids, therefore, the signals at the loudspeaker and microphone are compared. If feedback appears to be imminent, the hearing instrument generates a signal that is opposite in phase and thus cancels out the squeal.
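The comparison of loudspeaker and microphone signals is commonly done with an adaptive filter. The sketch below uses a textbook LMS-style canceller rather than Siemens' own method: it estimates how the loudspeaker output leaks back into the microphone and subtracts that estimate, which amounts to adding an opposite-phase correction. The parameter names and values are illustrative assumptions.

```python
import numpy as np

def cancel_feedback(mic, speaker, taps=32, step=0.01):
    """Estimate how the loudspeaker signal leaks back into the microphone and
    subtract that estimate, i.e. add an opposite-phase correction sample by sample."""
    weights = np.zeros(taps)                   # current estimate of the feedback path
    cleaned = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        recent_out = speaker[n - taps:n][::-1]     # latest loudspeaker samples, newest first
        leak = weights @ recent_out                # predicted feedback component
        cleaned[n] = mic[n] - leak                 # cancel it before further amplification
        weights += step * cleaned[n] * recent_out  # adapt toward any remaining correlation
    return cleaned
```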
This process has drawbacks, however. The sound of a high-pitched flute, for example, is similar to feedback noise. So at a concert there is a risk that flute sounds might be eliminated, thus distorting tonal balance. Siemens' new FeedbackStopper technology circumvents this problem. Hearing instruments equipped with this function add an inaudible—but technologically detectable—phase modulation to the audio signal, a sort of fingerprint. If this fingerprint reaches the input, the hearing aid recognizes that feedback is present and immediately reacts. FeedbackStopper instantly triggers a brief, minor frequency shift, preventing a build-up of the sound. Practiced listeners will object that if flute notes are shifted lower even slightly they will be noticeably out of tune. “But the FeedbackStopper intervenes for only split seconds,” says André Steinbuss, head of Audiological Research and Product Development at Siemens Audiologische Technik GmbH in Erlangen. Once the risk of feedback is gone, the FeedbackStopper switches off again—so quickly that it's not noticed.
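The remedy itself, a momentary shift of all frequencies by a few hertz, can be illustrated with a standard single-sideband trick. The snippet below is a hedged sketch: the shift amount and the simple per-block switch are assumptions, and the fingerprint detection that would set `feedback_risk` is only hinted at, not implemented.

```python
import numpy as np
from scipy.signal import hilbert

def shift_frequency(block, sample_rate, shift_hz=10.0):
    """Shift every frequency component of an audio block up by a few hertz
    (single-sideband modulation), breaking the exact frequency match that
    feedback needs in order to build up."""
    analytic = hilbert(block)                          # complex analytic signal
    t = np.arange(len(block)) / sample_rate
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

def process_block(block, sample_rate, feedback_risk):
    """Apply the brief shift only while feedback is flagged (e.g. because the
    hypothetical fingerprint was found at the input); otherwise pass audio through."""
    return shift_frequency(block, sample_rate) if feedback_risk else block
```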
Fine Tuning Via Remote Control. Another problem with older hearing aids is sometimes satirized in comedy shows: hapless seniors fiddling with volume controls because they either can't hear anything or the amplification is excessive. Some viewers may find that amusing, but it's no fun for those affected. Modern hearing aids do adjust volume automatically, but individual adjustment is still needed and takes time. The rough adjustment, to a comfortable volume level, is performed by a specialist when the device is delivered. When the customer reports his experience after a few weeks, the specialist does some fine-tuning and sets the correct balance between high- and low-pitched sounds. It's a time-consuming process, and one that Siemens has now made far more convenient thanks to its SoundLearning technology, which it introduced in 2004 as the first manufacturer to do so. It allows the wearer to do some fine-tuning himself, either on the device or via remote control, by adjusting the volume and tone when the auditory experience isn't ideal in a certain situation, such as when watching TV or listening to a concert. Thanks to advanced software, the device gradually learns when it is too loud or too quiet for the wearer, or when a sound is too high-pitched or too deep. After a few weeks, the hearing aid has adapted so that it finds the right setting automatically.
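Conceptually, this kind of learning can be as simple as nudging stored defaults toward each manual correction. The class below is a minimal Python illustration of that idea; the learning rate and field names are assumptions, not the SoundLearning implementation.

```python
class GainToneLearner:
    """Each manual correction nudges the stored defaults, so settings the wearer
    keeps choosing gradually become the automatic ones."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.volume_offset_db = 0.0   # learned deviation from the fitted volume
        self.tone_tilt_db = 0.0       # learned balance between high and low frequencies

    def record_adjustment(self, volume_change_db, tone_change_db):
        """Called whenever the wearer turns the volume or tone control."""
        self.volume_offset_db += self.learning_rate * volume_change_db
        self.tone_tilt_db += self.learning_rate * tone_change_db

    def current_defaults(self):
        return self.volume_offset_db, self.tone_tilt_db
```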
With SoundLearning 2.0, Siemens has now augmented this technology with a situational-awareness feature that distinguishes between various sound scenarios, such as speech, ambient noise, and music. SoundLearning 2.0 does this using classifiers, i.e., certain properties of the sound, such as the typical modulation frequency of speech. If, for example, the wearer uses the remote control to make the sound higher-pitched while the hearing aid identifies the situation as “music,” the device will from then on apply that adjustment automatically whenever it detects music. The more often the wearer makes adjustments, the faster the hearing aid learns to deliver the right sound by itself. According to Steinbuss, “90 % of the time, the situation is correctly identified.”
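Extending the sketch above, situational learning simply keys the stored corrections to whatever scenario the classifier reports. Again, the class below is an assumed structure for illustration only, not Siemens' code.

```python
from collections import defaultdict

class SituationalLearner:
    """Corrections are stored per classified scenario, so an adjustment made while
    listening to music no longer changes the setting used for speech."""

    def __init__(self, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.offsets = defaultdict(lambda: {"volume_db": 0.0, "tone_db": 0.0})

    def record_adjustment(self, situation, volume_change_db, tone_change_db):
        """`situation` is whatever the classifier reports, e.g. 'speech', 'noise', 'music'."""
        entry = self.offsets[situation]
        entry["volume_db"] += self.learning_rate * volume_change_db
        entry["tone_db"] += self.learning_rate * tone_change_db

    def preset_for(self, situation):
        """Automatic offsets applied when the classifier identifies this situation."""
        return dict(self.offsets[situation])
```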
The idea for SoundLearning comes from Sydney, Australia, or, more precisely, from the National Acoustic Laboratories there, one of the world's leading research centers for hearing and hearing technology. In 2003, Research Director Harvey Dillon had an idea for a self-adapting hearing aid and realized it with Siemens as SoundLearning 1.0. “It used to be hard for audiologists to fine-tune hearing aids because users often can't describe their perception of certain situations afterwards,” says Dillon. Now, users can perform the adjustment immediately themselves. The technology has been improved continuously: at first, the self-learning software adjusted only the volume, then the tonal balance as well, and now it also identifies ambient situations. “SoundLearning makes audiologists' work easier and gives patients more control,” says Dillon.