Subscribe to Issue Stories
Hearing Review - March 2010
Issue Stories
Hearing Instrument Technology
Digital Wireless Hearing Aids, Part 1: A Primer
by Francis Kuk, PhD; Bryan Crose; Petri Korhonen, MSc; Thomas Kyhn; Martin Mørkebjerg, MSc; Mike Lind Rank, PhD; Preben Kidmose, PhD; Morten Holm Jensen, PhD; Søren Møllskov Larsen, MSc; and Michael Ungstrup, MSc
Taking an audio signal and transmitting/receiving it digitally is a multi-stage process, with each step influencing the quality of the transmitted sounds. This article provides a primer about the steps involved in the process for both near- and far-field transmission of signals.
Digital signal processing has opened up innovative ways in which an audio signal can be manipulated. This flexibility allows the development of algorithms to improve the sound quality of the audio signal and opens up new ways in which audio signals can be stored and transmitted. Whereas FM has been the standard of analog wireless transmission used in the hearing aid world, digital is fast becoming the new norm for wireless transmission. This paper takes a behind-the-scenes look at some of the basic components of a wireless digital hearing aid that transmits audio data so that readers may appreciate the complexity of such a system.
All wireless digital hearing aids share the same functional stages shown in Figure 1. All analog audio signals must be digitized first through a process called analog-to-digital conversion (ADC). The sampled data is then coded in a specific way (audio codec) for wireless transmission. An antenna (or transmitter) using radio waves (a form of electromagnetic (EM) waves) is used to transmit these signals, and a receiving antenna (or receiver) paired to the transmitter detects the transmitted signal. The signal is then decoded (audio codec) and sent to the digital hearing aid for processing. The processed signal then goes through a digital-to-analog conversion (DAC) process before it is output through the hearing aid receiver.
FIGURE 1. Functional stages of a wireless digital hearing aid.
4/26/2010 Digital Wireless Hearing Aids, Part 1: …
hearingreview.com/…/2010-03_09.asp… 1/14
Each one of these steps can have significant impact on the final power consumption of the hearing aids, the delay of the transmitted sounds, and the overall sound quality of the signal (to be discussed in Part 2). Thus, to understand wireless digital hearing aids, it is necessary that one understands some principles of digital sampling, audio codec (coding and decoding), and transceiver (transmitter and receiver) technology.
Digital Sampling
Francis Kuk, PhD, is director of audiology, and Bryan Crose, BS, and Petri Korhonen, MSc, are research engineers at the Widex Office of Research in Clinical Amplification (ORCA), Lisle, Ill, a division of Widex Hearing Aid Co, Long Island City, NY. Thomas Kyhn, BS, Martin Mørkebjerg, MSc, Mike Lind Rank, PhD, Preben Kidmose, PhD, Morten Holm Jensen, PhD, Søren Møllskov Larsen, MSc, and Michael Ungstrup, MSc, are research engineers at Widex A/S in Lynge, Denmark.
The process in which a digital system takes a continuous signal (ie, analog), samples it, and quantizes the amplitude so that the signal is discrete in amplitude (ie, no longer continuous) is known as analog-to-digital conversion (ADC). The digitized signal is a sequence of data samples (strings of “1” and “0”) which represent the finite amplitudes of the audio signal over time.
Sampling frequency. The number of times at which we measure the amplitude of an analog signal in one second is the sampling frequency or sampling rate. To capture all the frequencies within a signal, the sampling frequency must be at least twice the highest frequency in that signal. For example, if an audio signal has frequencies up to 8000 Hz, a sampling frequency of 16,000 Hz or higher must be used to sample the audio. Figure 2 shows an example of a 1000 Hz sine wave that is sampled at two different frequencies: 1333 Hz and 2000 Hz. As can be seen, the sampling frequency of 1333 Hz incorrectly sampled the 1000 Hz sinusoid as a 333 Hz sinusoid (Figure 2a, below left). When the same signal is sampled at 2000 Hz, the original waveform is accurately reconstructed as a 1000 Hz sine wave (Figure 2b, below right).
FIGURE 2. The effect of sampling frequency on a 1000 Hz waveform. The sample on the left (A) was reconstructed using a sampling frequency of 1333 Hz, causing distortion, whereas the 2000 Hz sampling frequency produced an accurate rendering of the signal.
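The aliasing shown in Figure 2 can be predicted with a few lines of arithmetic. The short Python sketch below (our own illustration; the article itself contains no code) folds an undersampled tone back into the representable band:

```python
def alias_frequency(f_signal, f_sample):
    """Apparent frequency of a tone at f_signal Hz when sampled at
    f_sample Hz; tones above the Nyquist rate (f_sample / 2) fold
    back into the 0..f_sample/2 band."""
    f = f_signal % f_sample
    if f > f_sample / 2:
        f = f_sample - f
    return f

# The article's example: a 1000 Hz sinusoid sampled at 1333 Hz is
# misread as a 333 Hz sinusoid, while 2000 Hz sampling preserves it.
print(alias_frequency(1000, 1333))  # 333
print(alias_frequency(1000, 2000))  # 1000
```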
Bit depth (or bit resolution). Digital systems use binary digits (0, 1) or bits to represent the amplitude of the sampled signal. The precision at which the amplitude variations within the audio signal can be reflected is determined by the bit resolution (or bit depth) of the digital processor. As the number of bits in a processor (or bit resolution) increases, finer amplitude differentiation becomes possible.
Figure 3 shows the difference in resolution when a sinusoid is sampled at 1 bit, 3 bits, and 5 bits. The blue line is the analog signal while the red line is the digital representation of the signal. The space between the blue and red lines (in yellow) is the quantization noise. Note that, as the number of bits increases, the resolution of the signal increases (becomes smoother) and the quantization noise decreases. In other words, the dynamic range (range of possible values between the most intense sound and the least intense sound) increases.
FIGURE 3. The effect of bit resolution on the output waveform (the blue line is the original sinusoid). The red line represents the digitized sinusoid. The difference between the red and blue lines (in yellow) is the quantization noise.
Perceptually, a signal that is processed with a high bit resolution will sound clearer, sharper, and cleaner than the same signal processed with a lower bit resolution. One shouldn’t think that more bits are needed to represent a more intense signal (or fewer bits for a soft sound); rather, more bits are needed when loud and soft sounds are presented together (ie, fluctuations in level) and one is interested in preserving the relative amplitudes of these sounds (ie, dynamic range).
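The relation between bit depth and quantization noise can be illustrated with a toy mid-tread quantizer in Python. This is a simplified sketch, not the ADC of any actual hearing aid:

```python
import math

def quantize(x, bits):
    """Round a sample x in [-1, 1] to the nearest of 2**bits levels."""
    step = 2.0 / (2 ** bits)
    q = round(x / step) * step
    # Clamp to the top representable level (mid-tread quantizer).
    return max(-1.0, min(1.0 - step, q))

# More bits -> a smaller gap between the red and blue lines of Figure 3.
signal = [math.sin(2 * math.pi * k / 64) for k in range(64)]
for bits in (1, 3, 5):
    noise = max(abs(s - quantize(s, bits)) for s in signal)
    print(bits, "bits -> max quantization error", round(noise, 4))
```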
Sampling trade-offs: current drain. When an analog signal is converted into a digital form, the amount of information (number of bits) or size of the digital signal is a product of the sampling frequency, the bit resolution, and the duration of the sampling. A digital processor that uses a high bit resolution sampling at a high frequency results in more bits than one that uses a lower bit resolution and/or a lower sampling frequency. This means that more of the nuances of the input signal are available. Perceptually, this corresponds to a less noisy signal with a better sound quality. Unfortunately, more bits also mean more computations, larger memory, and a longer time to transmit. Ultimately, this demands a higher current drain. Thus, a constant challenge for engineers is to seek the highest sampling frequency and the greatest bit resolution without significantly increasing the current drain.
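The size calculation above is a straightforward product; a small Python helper (again, our own illustration) makes the trade-off concrete:

```python
def audio_size_bits(sampling_hz, bit_depth, seconds, channels=1):
    """Uncompressed PCM size: sampling frequency x bit resolution x
    duration x number of channels."""
    return sampling_hz * bit_depth * seconds * channels

# Ten seconds of 16 kHz / 16-bit mono audio:
print(audio_size_bits(16000, 16, 10))  # 2560000 bits
# Halving the bit depth halves the data (and shrinks the dynamic range):
print(audio_size_bits(16000, 8, 10))   # 1280000 bits
```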
Digital representation. Digital signals are represented as a string of 1’s and 0’s. To ensure that the data can be used correctly, other information is added to the beginning of the data string. This is called a “header” or the “command data.” This includes information such as the sampling rate, the number of bits per sample, and the number of audio channels present.
Figure 4 shows an example of what an audio header may look like (along with the digital audio). In this case, the 12-bit header consists of three 4-bit words—indicating how many channels it contains (mono or stereo), the sampling rate, and the number of bits per sample. The hearing aid processor reads the header first before it processes the data string.
FIGURE 4. Digital audio with header information.
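The header of Figure 4 can be mimicked with simple bit operations. The field layout below is hypothetical (the article does not specify the order of the three 4-bit words); it is only meant to show how a processor can recover the settings before reading the audio data:

```python
def pack_header(channels_code, rate_code, bits_code):
    """Pack three 4-bit fields into one 12-bit header value."""
    for field in (channels_code, rate_code, bits_code):
        if not 0 <= field < 16:
            raise ValueError("each field must fit in 4 bits")
    return (channels_code << 8) | (rate_code << 4) | bits_code

def unpack_header(header):
    """Recover the three 4-bit fields a decoder would read first."""
    return (header >> 8) & 0xF, (header >> 4) & 0xF, header & 0xF

header = pack_header(2, 5, 1)  # eg, stereo, rate code 5, bit-depth code 1
print(unpack_header(header))   # (2, 5, 1)
```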
Digital-to-analog conversion. To convert the processed digital string back into an analog signal (such as after processing by the hearing aid processor), a digital-to-analog converter (DAC) is needed. The DAC reads the instructions on the header and decodes the data at the same rate at which the audio was originally sampled. The output is low-pass filtered to smooth the transitions between voltages (the yellow shaded area in Figure 3). The signal is finally sent to an audio speaker (or receiver).
Audio Data Compression or Audio Codec
Rationale for data compression. When audio is converted from an analog to a digital format, the resulting size of the digital audio data can be quite large. For example, one minute of stereo audio recorded at a sampling frequency of 44,100 Hz (or samples per second) at a 16-bit resolution results in over 84 Mbits of information. This requires 10.5 Mbytes (MB) of storage (1 byte = 8 bits). That’s why an audio CD with a capacity of 783 MB can hold only 74 minutes of songs.
To increase the number of songs that can be stored on the CD, one can either digitize the songs with a lower bit resolution, or sample them at a lower sampling frequency. Unfortunately, a lower bit resolution will decrease the amplitude resolution of the audio signal and increase the quantization noise. Decreasing the sampling frequency will limit the range of frequencies that are captured and lose some details of the songs. Thus, neither approach offers an acceptable solution to reduce the size of the data file and yet maintain the sound quality of the music.
Data compression (or data codec, short for “data coding and decoding”) allows digital data to be stored more efficiently, thus reducing the amount of physical memory required to store the data. Authors’ Note: Data compression should not be confused with amplitude compression, which is the compression or reduction of the dynamic range of an audio signal. Unless specifically intended, data compression generally does not reduce or alter the amplitude of the audio signal, but it does reduce the physical size (number of bits) that the audio signal occupies.
The transmission bit rate—or how much data (in number of bits) a transmitter is capable of sending in unit time—is a property of the transmitting channel. It depends on the available power supply, the criterion for acceptable sound quality of the transmitted signal, and also the integrity of the codec that is used to code and decode the transmitted signal. So, for example, while a higher bit rate usually means more data can be transmitted (and a better sound quality by inference), it does not guarantee sound quality because sound quality also depends on how well the codec system works.
How quickly an audio sample is transmitted (or downloaded) is important in the music world. The amount of downloading time is related to the size of the file and the bit rate of the transmitting channel. For example, a 4-minute song of 35 MB takes over 9 minutes to download using an average high-speed Internet connection (bit rate of 512 kbit/s). If the same song is compressed using the mp3 encoding technique, it is approximately 4 MB in size and takes approximately 1 minute to download. Thus, another reason for data compression (or codec) is to reduce the size of the “load” (or file) so the same data can be transmitted faster within the limits of the transmission channel without losing its quality.
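The download figures quoted above follow directly from file size divided by bit rate. A quick Python check, assuming 1 MB = 8,000,000 bits and a 512 kbit/s link for simplicity:

```python
def download_minutes(file_mb, bitrate_kbit_s):
    """Transfer time in minutes for a file of file_mb megabytes over
    a channel of bitrate_kbit_s kilobits per second."""
    return (file_mb * 8_000_000) / (bitrate_kbit_s * 1_000) / 60

# The 35 MB uncompressed song vs its ~4 MB mp3 at 512 kbit/s:
print(round(download_minutes(35, 512), 1))  # 9.1 minutes
print(round(download_minutes(4, 512), 1))   # 1.0 minute
```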
A digital wireless hearing aid that transmits audio from one hearing aid to the other, or from a TV/cell phone, etc, to the hearing aid, has the same (or more) constraints as a music download. Because of the need for acceptable current consumption, the bit rate of current wireless digital hearing aids is typically lower than that of a high-speed Internet connection. In order to transmit the online digital audio without any noticeable delays or artifacts, some intelligent means for reducing the size of the audio data file is critical. (Note: this is not a necessary consideration for transmission of parametric data, such as hearing aid gain settings, because of the relatively small size and non-redundant nature of such data.)
Audio coding. The various algorithms that are used to code and decode an audio signal are called audio codec. The choice of a codec is based on several factors, such as the maximum available transmission bit rate, the desired audio quality of the transmitted signal, the complexity of the wireless platform, and the ingenuity of the design engineers. These decisions affect the effectiveness of the codec.
One can code a signal intelligently so it has good sound quality but fewer bits (thus requiring a lower transmission bit rate). Conversely, if the codec is not “intelligent” or if the original signal does not have a good sound quality, no transmission system at any bit rate can improve the sound quality.
There are two components in the audio encoding process: 1) audio coding, which involves “packaging” the audio signals into a smaller size; and 2) channel coding, which involves adding error-correction codes to handle potentially corrupted data during the transmission. Protocol data, such as header information for data exchange, is also included prior to transmission.
Approaches to audio coding: lossless vs lossy. The objective for audio coding is to reduce the size of the audio file without removing pertinent information. Luckily, audio signals have large amounts of redundant information. These redundancies may be eliminated without affecting the identity and quality of the signal. Audio coding takes advantage of this property to reduce the size of the audio files. The two common approaches—lossless and lossy—may be used alone or in combination (these approaches may be used with other proprietary approaches as well).
Lossless codec. The systems that take advantage of the informational redundancy in audio signals are called lossless systems. These systems use “redundancy prediction algorithms” to compile all the redundant or repeated information in the audio signal. They then store the audio more efficiently with fewer bits, but no information is lost. For example, the number 454545454545 can be coded as a 12-digit number by the computer. But the same number can also be coded as 6(45), to be read as 45 repeated 6 times.
This is the process used when computers compress files into a ZIP file. It is used in applications where exact data retention—such as for computer programs, spreadsheets, computer text, etc—is necessary.
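The 6(45) example above is a run-length code. A toy Python version follows (illustrative only; real lossless codecs use far more sophisticated redundancy prediction):

```python
def count_repeats(data, block):
    """How many times `block` repeats back-to-back from the start of `data`."""
    n = 0
    while data.startswith(block, n * len(block)):
        n += 1
    return n

digits = "454545454545"      # 12 symbols when stored naively
n = count_repeats(digits, "45")
print(f"{n}(45)")             # the same number stored as 6(45)
assert "45" * n == digits     # decoding restores the data exactly
```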
Lossy codec. The systems that take advantage of perceptual redundancy in audio coding are called lossy systems. They use “irrelevance algorithms,” which apply existing knowledge of psychoacoustics to aid in eliminating sounds that are outside the normal perceptual limits of the human auditory system. For example, it is known that, when two sounds are presented simultaneously, the louder sound will exert a masking effect on the softer sound. The amount of masking depends on the closeness of the spectra of the two sounds. Because of masking effects, it is inconsequential perceptually if one does not code the softer sound while a louder one is present. Lossy audio coding algorithms are capable of very high data reduction, yet in these systems the output signal is not an exact replica of the input signal (even though they may be perceptually identical).
This type of codec is commonly used in mp3 technology. JPEG (Joint Photographic Experts Group) compression is another example of lossy data compression used in the visual domain.
Channel coding. One important consideration when sending any type of data (analog or digital) is the potential introduction of errors into the signal from electromagnetic interference during the transmission process. This is especially pertinent for wireless systems. Consequently, efforts must be made to ensure that the transmitted data are received correctly.
Channel coding algorithms provide a method to handle transmission errors. To achieve that objective, channel coding algorithms specify ways to check the accuracy of the received data. They also include additional codes that specify how errors can be handled.
Because there are no required standards on how these errors must be handled, channel coding algorithms vary widely among manufacturers. Some devices simply ignore and drop the data that are in error; some wait for the correct data to be sent; and others can correct the data that are in error. The various approaches can affect the robustness of the transmission and the sound quality of the transmitted signal.
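As a minimal illustration of channel coding (far simpler than the proprietary schemes used in actual devices), a single even-parity bit lets the receiver detect, though not correct, one flipped bit:

```python
def add_parity(bits):
    """Append an even-parity bit to a list of 0/1 data bits."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """A received word with even parity passes the check."""
    return sum(word) % 2 == 0

sent = add_parity([1, 0, 1, 1])
print(parity_ok(sent))        # True: received intact

corrupted = list(sent)
corrupted[1] ^= 1             # interference flips one bit in transit
print(parity_ok(corrupted))   # False: the error is detected
```

Real channel codecs go further and add enough redundancy to correct errors, not just detect them, which is one reason implementations differ so widely.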
Before sending the encoded digital audio (and the error-correction codes), the encoder adds a header to the data following the protocol for wireless transmission. In this case, the header includes the address of the receiver, command data, and a data-type identification code that specifies which data are instructions, which are audio data, and which are error-correction codes. In addition, it also includes information on how to make sure that the transmitted data are correct, and how to handle “errors” if and when they are encountered.
Audio decoding. When a coded audio signal is received, it needs to be decoded so the original information can be retrieved. The receiver first examines the header information from the received coded signals so it knows how the received data should be handled. The received data then go through the channel decoder to ensure that the transmitted data are correct. Any transmission errors are handled at this channel decoding stage according to the error-correction codes of the channel codec. The channel-decoded signal then feeds through the audio decoder, which unpacks the compressed digital audio data to restore the “original” digital audio.
“Bit-true” vs “non bit-true” decoding. There are two approaches to audio codec: bit-true and non bit-true. A bit-true codec means the decoder knows the encoder, so it can decode the audio faithfully with the least current drain. Because it knows how the data are coded, it is prepared to handle any “errors” that it encounters during the transmission. A bit-true system is a dedicated system.
A non bit-true codec is an open system that allows multiple manufacturers to produce files that can be decoded by the same decoder. An example is the codec used in mp3 players. The advantage of a non bit-true system is its flexibility, adaptability, and ease of implementation by various manufacturers; it can save development time and resources. A potential problem is that the quality is not always ensured because different implementations are allowed. And because the decoder does not know the encoder, errors that are introduced during the transmission may not be corrected effectively and/or efficiently. This leads to dropouts and increased noise, and it may degrade the quality of the transmitted audio.
Wireless Transmission
Why wireless? Wireless allows the transfer of information (or audio data) over distance (from less than a meter to over thousands of miles) without the use of any wires or cables. Although wireless opens up the transmitted data to potential interference by other signals, the convenience that it offers and the possibility that data can be transferred over a long distance (such as via satellite) make it a desirable tool for data transmission.
The challenge for engineers is to minimize the potential for transmission errors (from interference) while keeping reasonable power consumption. Today, wireless transmission technology is also applied to hearing aids to bring about improvements in communication performance never before possible.
Vehicles for transmission: Electromagnetic (EM) waves. Wireless transmission is achieved through the use of electromagnetic (EM) waves. This is a type of transverse wave which has both an electric component and a magnetic component. EM waves by themselves are not audible unless they are converted to a sound wave (a longitudinal wave). One property of an EM wave is its ease of being modified by another signal. This makes EM waves excellent carriers of data.
Electromagnetic waves cover a wide range of frequencies. The choice of carrier frequency depends on how much information needs to be sent, how much power is available, the transmission distance, how many other devices are using that frequency, local laws and regulations, and terrestrial factors such as mountains or buildings that may be in the path of the transmission. Higher carrier frequencies can carry more information than lower-frequency carriers. On the other hand, lower frequencies require less power for transmission.
The spectra of electromagnetic waves that are used today can be divided into different categories. Visible light is one form of electromagnetic wave, and it is marked in the center of Figure 5. On the left side of the spectrum are the frequencies for radio transmission (or radio waves). These waves have a longer wavelength (and thus lower frequencies) than light and are commonly used for most types of wireless communication. One can see that most AM and FM radios use frequencies in the 10⁶ to 10⁸ Hz region.
FIGURE 5. The electromagnetic (EM) spectrum, with visible light near the center and most of our transmission carrier frequencies in the lower/longer frequency regions.
Far-field vs near-field transmission. Traditional wireless transmission systems use an antenna to transmit an EM wave through the air. The farther the wave travels from the transmitter, the weaker its strength. However, the rate at which the EM wave's amplitude decreases depends on how far the signal propagates.
An intended distance that is much greater than the wavelength of the carrier is classified as a far field; in contrast, a distance much shorter than the wavelength is called a near field. Thus, the distinction between a far- and a near-field not only depends on the physical distance, but also on the frequency of the carrier. In a far field, both the electric and magnetic (or inductive) field strengths decrease linearly with distance at a rate of 1/r. On the other hand, in a near field, the magnetic field strength is dominated by a component which decreases at a rate of 1/r³, as shown in Figure 6.
FIGURE 6. Difference between far-field and near-field attenuation of the magnetic field.
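The two decay laws in Figure 6 are easy to compare numerically. In this Python sketch, both fields are normalized to 1 at r = 1; real antennas add constant factors that we omit:

```python
import math

def far_field(r):
    """Relative far-field strength: falls off as 1/r."""
    return 1.0 / r

def near_field(r):
    """Relative near-field (inductive) strength: falls off as 1/r**3."""
    return 1.0 / r ** 3

# Doubling the distance costs about 6 dB in the far field
# but about 18 dB in the near field:
print(round(20 * math.log10(far_field(2))))   # -6
print(round(20 * math.log10(near_field(2))))  # -18
```

This steep near-field roll-off is exactly why inductive links stay private and interference-free beyond about a meter.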
The difference in the rate of decrease between the two components suggests that they may be utilized for different applications. Most wireless technologies today use both the electric and magnetic fields of EM waves for far-field transmission. In the area of hearing aids and assistive devices, this usually suggests a distance of 10 to 50 m. Because of the greater distance of far-field transmission, interference from and on other transmitted signals is likely to occur depending on the relative levels of the transmitted signals. For transmission over a short distance (less than 1 m, or near-field), the magnetic or inductive component is used instead because it retains its signal strength over the short distance. In addition to a lower current consumption, the shorter distance would mean less interference from and on other transmitted signals. This results in a greater security of the transmitted signals and immunity from other transmitted signals.
Bluetooth: A common far-field communication protocol. Bluetooth is a commonly used radio frequency (RF) wireless standard in many communication devices today. It is a wireless protocol for exchanging data over distances up to 100 meters (thus, far-field) and uses the EM wave to carry data at a carrier frequency of 2.4 GHz with a bandwidth of 1 MHz (79 different channels).
Bluetooth is described as a protocol because it offers a predefined method of exchanging data between multiple devices. This means that two devices connected with a Bluetooth connection (ie, Bluetooth compatible) must meet certain requirements before they can exchange data. This qualifies it as an open or non bit-true system. The openness and connectivity are major reasons for its proliferated use in consumer electronics today.
Historically, Bluetooth was developed when computer wireless networks (Wi-Fi) became available. Wireless networks also use a 2.4 GHz carrier frequency band, but have a channel bandwidth of 22 MHz. This allows wireless networks to send more information over a farther distance, but at the expense of high power consumption. By restricting the range of the transmission, engineers are able to reduce the power consumption of Bluetooth. This enables devices smaller than notebook computers (eg, cell phones, PDAs, etc) to also utilize Bluetooth.
However, the power consumption of Bluetooth is still not low enough to permit its integration into a hearing aid. A typical Bluetooth chip requires a current drain from 45 milliamps (mA) to as high as 80 mA for operation. If a Bluetooth chip were embedded in a hearing aid that uses a #10 battery (with a capacity of 80 mAh), the battery would only last 1 to 2 hours before it expires!
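That battery-life estimate is a simple division of capacity by current drain (ignoring real-world factors such as voltage sag and duty cycling):

```python
def battery_hours(capacity_mah, drain_ma):
    """Rough runtime: battery capacity (mAh) divided by current drain (mA)."""
    return capacity_mah / drain_ma

# A #10 hearing aid battery (~80 mAh) driving a Bluetooth chip:
print(round(battery_hours(80, 45), 1))  # 1.8 hours at 45 mA
print(round(battery_hours(80, 80), 1))  # 1.0 hour at 80 mA
```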
Another problem with Bluetooth is the audio delay inherent in the standard Bluetooth audio profile. In creating a standard that is adaptable to many different devices, Bluetooth has to satisfy many procedures to ensure a proper communication link between devices. This delays the immediate transmission of signals. For example, a delay of up to 150 ms may be noted between the direct sound and the transmitted sound from a TV using Bluetooth. When a delayed audio signal is mixed with the direct signal, a poorer sound quality—ranging from a “metallic” sound to an “echo”—may be perceived depending on the amount of delay. Excessive delay, such as 150 ms, could lead to a dis-synchrony between the visual and audio signals. Figure 7 shows the perceptual artifacts that may result from mixing direct sounds with transmitted sounds at various delays.
FIGURE 7. The consequences of direct and delayed transmitted signals on the perception of sound. Delays in excess of 10 ms become problematic.
Near-field magnetic induction (NFMI). The limited capacity of today’s hearing aid batteries makes it impractical to use Bluetooth exclusively for far-field transmission to the hearing aids.
The rapid rate of attenuation of the magnetic field (shown in Figure 6) would suggest high signal strength within close proximity and low signal strength beyond. This ensures accurate transmission of data between intended devices (such as hearing aids). The rapid decay characteristics mean that its signal strength will not be sufficient to interfere with other near-field devices in the environment, nor will it be interfered with by other unintended near-field devices. A shorter range of transmission will also require a lower carrier frequency, reducing the power consumption.
This makes the magnetic or inductive EM wave an ideal technology to be integrated within hearing aids for near-field or short-range communication. On the other hand, the orientation of the antennae (between the transmitter and the receiver) may affect the sensitivity of the reception. A remote control and wireless CROS hearing aids are prime examples of this form of technology.
Streamers and relay: A solution that incorporates inductive and Bluetooth. Using an inductive signal for wireless communication between hearing aids makes sense because of the security and low power requirement.
However, connecting to external electronic devices (such as a cell phone or TV) would become impossible. A solution which takes advantage of both inductive technology and Bluetooth connectivity (or other far-field protocols) is needed.
This can be achieved using an external device (outside the hearing aid) which houses and uses both forms of wireless technology. This device, which includes Bluetooth (and other far-field protocols) technology, can be larger than a hearing aid and accommodate a larger battery than standard hearing aid batteries. Thus, it connects with external devices (such as cell phones, etc) that are Bluetooth compatible.
The device should also have near-field magnetic (inductive) technology to communicate with the wearer’s hearing aids when it is placed close to the hearing aids. Thus, a Bluetooth signal could be received by this device and then re-transmitted from this device to the hearing aid. This is the basis of the “streamers” used in many wireless hearing aids today.
FIGURE 8. A relay device that receives a Bluetooth signal and re-transmits it to the hearing aid on the other end.
Signal Transmission
Analog transmission. EM waves are used to carry the audio information so they may be transmitted wirelessly over a distance. This is accomplished by a process called modulation—where the EM wave (the carrier) is altered in a specific way (ie, modulated) to carry the desired signal.
There are two common analog modulation schemes: amplitude modulation (AM) and frequency modulation (FM). The signal that modulates the carrier is an audio signal (eg, speech or music). The same mechanism of modulation may be used in both far-field and near-field transmissions.
For amplitude modulation (AM), the amplitude of the carrier frequency is altered (or modulated) according to the amplitude of the signal that it is carrying. In Figure 9, observe how the amplitude-modulated signal shows the same amplitude change over time as the sine wave that is used to modulate the carrier. The valleys of the sine wave reduce the amplitude of the carrier waveform, and the peaks of the signal increase the amplitude of the carrier waveform.
For frequency modulation (FM), the frequency of the carrier is modulated according to the amplitude of the signal that is sent. Figure 9 displays how the frequency-modulated signal tracks the amplitude change of the sine wave by altering the closeness (or frequency) of the carrier waveform. Waveforms that are more spaced apart (lower frequency) represent the valleys of the sine wave, and waveforms that are closer together (higher frequency) represent the peaks of the sine wave. Both AM and FM receivers demodulate the received signal and reconstruct the audio signal based on how the AM or FM signal is modulated.
FIGURE 9. Analog modulation schemes—amplitude modulation (AM) and frequency modulation (FM).
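The two schemes can be sketched numerically. The snippet below is purely illustrative—the message, carrier, and deviation frequencies are arbitrary values chosen for a small simulation, not figures from any hearing aid system—and builds an AM and an FM waveform from the same low-frequency "message" sine wave:

```python
import numpy as np

FS = 48_000                       # sample rate (Hz) for this toy illustration
t = np.arange(FS) / FS            # 1 second of time
message = np.sin(2 * np.pi * 5 * t)   # a 5 Hz "audio" sine wave
fc = 1_000                            # carrier frequency (Hz)

# AM: the carrier's amplitude follows the message amplitude.
am = (1 + 0.5 * message) * np.sin(2 * np.pi * fc * t)

# FM: the carrier's instantaneous frequency follows the message amplitude,
# so the phase is the running integral of the instantaneous frequency.
freq_dev = 100                        # peak frequency deviation (Hz)
phase = 2 * np.pi * np.cumsum(fc + freq_dev * message) / FS
fm = np.sin(phase)
```

Plotting `am` shows the envelope rising and falling with the message (as in the top panel of Figure 9), while `fm` keeps a constant amplitude but its cycles bunch together and spread apart.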
The Federal Communications Commission (FCC) regulates the use of the radio portion of the EM spectrum in the United States. In the field of amplification, the three frequency bands that are commonly used for FM systems include: 169-176 MHz (H Band), 180-187 MHz (J Band), and 216-217 MHz (N Band). The frequency band that is used in near-field transmission (and in remote controls) is typically around 10-15 MHz (although earlier systems still use a lower carrier frequency). The frequency band used for Bluetooth is the 2.4-2.5 GHz band, which is classified as one of several "Industrial, Scientific, and Medical" (ISM) bands.
Digital transmission. The previous discussion described the use of an analog audio signal to modulate a high-frequency EM carrier. In the process, the analog signal is transmitted. When the signal that needs to be transmitted is digital, an analog modulation scheme is not appropriate. In addition to the fact that the signal itself is digital (thus requiring digital transmission), there are other benefits of digital transmission.
Any form of signal transmission can be affected or contaminated by EM interference or noise. This is especially the case when the transmitted signal is farther away from the source because of the decrease in signal level (see Figure 6) and the constant noise level from other EM interference (ie, the "signal-to-noise" level decreases). Thus sound quality (and even speech intelligibility) decreases as the distance increases.
On the other hand, a digital signal ("1" and "0") is not as easily affected by the interfering EM noise. As long as the magnitude of the interfering noise does not change the value of the bit (from "1" to "0" and vice versa), the signal keeps its identity. Thus, digital transmission is more resistant to EM interference than analog transmission.
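A small simulation illustrates this robustness. In the sketch below (hypothetical voltages and noise levels, chosen only for illustration), each bit is sent as +1 V or -1 V; even with substantial added noise, a simple threshold recovers almost every bit:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 10_000)          # random "1"s and "0"s to send
tx = np.where(bits == 1, 1.0, -1.0)        # transmit "1" as +1 V, "0" as -1 V
rx = tx + rng.normal(0, 0.4, tx.shape)     # add moderate EM interference
recovered = (rx > 0).astype(int)           # threshold detector at 0 V
ber = np.mean(recovered != bits)           # bit error rate stays well below 1%
```

An analog voltage subjected to the same noise would carry the full disturbance into the output; the digital bit is only lost when the noise happens to push the voltage past the decision point.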
This suggests that the sound quality of a digitally transmitted signal may remain more natural (and less noisy) than that of an analog signal until a much higher level of EM interference is reached. Figure 10 shows the hypothetical sound quality difference between an analog transmission and a digital transmission as a function of distance and/or interference.
FIGURE 10. Hypothetical sound quality as a function of interference between analog and digital transmissions.
How is digital transmission accomplished? In digital transmission, a technique called "Frequency Shift Keying" (FSK) is used. This modulation scheme uses two different frequencies around the carrier frequency to represent the "1" and "0" used in the binary representation. For example, a "1" may be assigned the frequency 10.65 MHz and a "0" the frequency 10.55 MHz for a carrier at 10.6 MHz. Each time a "1" needs to be sent, the transmitter will send out a 10.65 MHz signal; each time a "0" needs to be sent, a signal at 10.55 MHz will be sent.
Like analog modulation, when the transmitted signal (or pulse train) is received by the receiver, it needs to be demodulated into "1" and "0" to recreate the digital sequence. This is done by the demodulator at the receiver end. Frequencies around 10.55 MHz will be identified as a "0," and those around 10.65 MHz as a "1." Typically, two points per bit are sampled to estimate the bit identity.
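The FSK modulate/demodulate cycle described above can be sketched in a few lines. The toy example below scales the 10.55/10.65 MHz pair down to arbitrary audio-range frequencies so the arrays stay small; it is a conceptual sketch, not any manufacturer's implementation:

```python
import numpy as np

# Toy FSK illustration (frequencies scaled down from the article's
# 10.55/10.65 MHz example so the simulation stays small).
FS = 100_000                   # sample rate, Hz
BIT_RATE = 1_000               # bits per second
F_ZERO, F_ONE = 5_500, 6_500   # Hz: tones representing "0" and "1"

def fsk_modulate(bits):
    """Emit a sinusoid at F_ONE for each '1' bit and F_ZERO for each '0'."""
    samples_per_bit = FS // BIT_RATE
    t = np.arange(samples_per_bit) / FS
    return np.concatenate(
        [np.sin(2 * np.pi * (F_ONE if b else F_ZERO) * t) for b in bits])

def fsk_demodulate(signal, num_bits):
    """Recover each bit by comparing the energy near F_ZERO vs F_ONE."""
    samples_per_bit = FS // BIT_RATE
    t = np.arange(samples_per_bit) / FS
    ref0 = np.exp(-2j * np.pi * F_ZERO * t)   # correlator for "0"
    ref1 = np.exp(-2j * np.pi * F_ONE * t)    # correlator for "1"
    bits = []
    for i in range(num_bits):
        chunk = signal[i * samples_per_bit:(i + 1) * samples_per_bit]
        bits.append(1 if abs(chunk @ ref1) > abs(chunk @ ref0) else 0)
    return bits

tx_bits = [1, 0, 1, 1, 0, 0, 1, 0]
rx_bits = fsk_demodulate(fsk_modulate(tx_bits), len(tx_bits))
```

In a clean channel `rx_bits` matches `tx_bits` exactly; the two tones here are spaced so that one bit period contains a whole number of cycles of each, which keeps the two correlators independent of one another.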
While this approach is sufficient for typical operations, errors (identification of a "1" as "0" and vice versa) could still occur under adverse conditions (such as intense EM interference from another source). Thus, an important consideration in a wireless antenna or receiver design is how to handle the corrupted transmitted signal so the retrieved signal is as close as possible to the original signal.
Summary
The process of taking an audio signal and transmitting/receiving it digitally involves multiple stages, each of which can affect the quality of the transmitted sounds. The following sequence summarizes all the steps involved in the process (for both near- and far-field transmissions):
1) The audio signal (eg, from a TV) is digitized through an analog-to-digital conversion process into a digital form (ADC).
2) The digital signal goes through an audio encoding process to reduce its size (audio coding).
3) The encoded signal goes through channel coding to include error correction codes (channel coding).
4) Header information is included.
5) The coded signal is modulated through FSK (or other techniques) and prepared for broadcast (modulation).
6) The modulated signal is broadcast through the antenna (transmission by antenna).
7) The modulated signal is received by the antenna (reception by antenna).
8) The signal is demodulated to retrieve the digital codes (demodulation).
9) The header information is read.
10) The digital codes go through channel decoding to correct for errors (channel decoding).
11) The signals go through audio decoding to "decompress" or return to as much of their original form as possible (audio decoding).
12) The decoded digital signal can be processed by the hearing aid processor (DSP processing).
13) The processed signal leaves the hearing aid through a digital-to-analog converter to return to its analog form (DAC).
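The steps above can be sketched as a chain of function calls. Every stage below is a deliberately trivial stand-in (the "channel coding" is a crude repetition check, the "audio coding" simply decimates, modulation is elided); the point is the order of operations, not the signal processing:

```python
# Schematic sketch of the transmit/receive chain as function composition.
def adc(analog):            return [round(x * 127) for x in analog]  # 1) digitize
def audio_encode(pcm):      return pcm[::2]                          # 2) toy "compression"
def channel_encode(data):   return [(b, b) for b in data]            # 3) duplicate each word as a crude error check
def add_header(frames):     return [("HDR", len(frames))] + frames   # 4) header
def modulate(packet):       return packet                            # 5-7) FSK + antennas, elided
def demodulate(packet):     return packet                            # 8)
def read_header(packet):    return packet[1:]                        # 9)
def channel_decode(frames): return [a for a, b in frames if a == b]  # 10) keep words whose copies agree
def audio_decode(data):     return [x for b in data for x in (b, b)] # 11) toy "decompression"

signal = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
rx = audio_decode(channel_decode(read_header(
        demodulate(modulate(add_header(channel_encode(
            audio_encode(adc(signal)))))))))
```

Steps 12-13 (DSP processing and the DAC) would then act on `rx` inside the hearing aid; they are omitted here.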
Correspondence can be addressed to HR or Francis Kuk, PhD, at [email protected].
Citation for this article:
Kuk F, Crose B, Korhonen P, Kyhn T, Mørkebjerg M, Rank ML, Kidmose P, Jensen MH, Larsen SM, Ungstrup M. Digital wireless hearing aids, Part 1: A primer. Hearing Review. 2010;17(3):54-67.
This article was submitted to HR by Francis Kuk, PhD,
Digital Wireless Hearing Aids, Part 2: Considerations in Developing a New Wireless Platform
by Francis Kuk, PhD; Petri Korhonen, MSc; Bryan Crose, BSc; Thomas Kyhn; Martin Mørkebjerg, MS;
Mike Lind Rank, PhD; Preben Kidmose, PhD; Morten Holm Jensen, PhD; Søren Møllskov Larsen, MS; and
Michael Ungstrup, MS
The factors involved in assessing wireless hearing aid platforms, and how these factors impact the performance and use of wireless
hearing aids.
In a previous paper,1 we provided a tutorial on the principles of digital audio codec and outlined some of the components
in a wireless hearing aid. In this paper, we will discuss the important factors to consider in a digital wireless hearing aid and
how those considerations are integrated into the design of the digital wireless platform used by WidexLink, a new
technology that enables the C-Integrated Signal Processing (C-ISP) used in the CLEAR440 hearing aids.
Three Important Factors in the Design of a Digital Wireless System
Because WidexLink is designed for hearing aid
applications, the description here is applicable for both
audio and data exchange between the two partner hearing
aids and external devices. In particular, a technical
description on how WidexLink considers the important
design criteria of 1) excellent sound quality, 2) robust
communication, and 3) efficient battery life is provided. The
audiological applications of such coordinated data
exchange in a binaural fitting (InterEar) will be described in
Part 3.
#1) Sound Quality
Hearing aid. In previous papers, we have considered the factors that affect the sound quality of a hearing aid.2,3 A digital wireless hearing aid that possesses excellent sound quality obviously needs to have good sound quality from both the
hearing aid and the wireless codec system. As a quick review, a low noise floor, a high input dynamic range, a high
sampling rate, a high bit resolution, a broad bandwidth, an adequate MPO, and advanced compression algorithms at the
various stages of the signal flow are key components to ensure a good sound quality. The use of slow-acting compression
as the primary mode of signal processing also preserves the natural dynamics of the input sounds and retains all the nuances
of the inputs.
Codec. The sound quality provided by the wireless codec is highly dependent on the method that is used to compress the data, how the data are transmitted, and the tolerance of the audio codec toward errors that may occur during the
transmission process. As indicated in the previous paper, a codec compresses the audio data to reduce the size of the data
set so that the same information can be transmitted more efficiently without appreciable delays and artifacts. Thus, an
effective codec algorithm would more likely retain the full range of sounds without any loss of perceptually significant
information. This ensures superior sound quality of the transmitted sounds.
Bit rate. The channel capacity of a wireless system also plays an important role in the sound quality of the transmitted
sounds. Typically, a system that has a higher capacity or bit rate would have a better sound quality because more
information can be transmitted per unit time.
However, a higher bit rate does not guarantee a good sound quality. This is because the original uncompressed signal may
be of poor quality or the codec algorithm may not have preserved the important cues critical for good sound quality (eg,
introducing more noise and missing important details). In both cases, a high transmission bit rate cannot compensate for the
poor quality of the original signals.
Coding delay. It is inevitable that, during transmission, electromagnetic interference occurs from other wireless sources, corrupting the integrity of the transmitted signal. How the codec handles errors from interference during transmission could
delay the transmission and affect sound quality.
Coding delay may be inconsequential when the listener simply listens to transmitted sounds (such as playback from an MP3
player). On the other hand, when the listener listens to the transmitted sounds along with the direct sound (or amplified
direct sound, such as in an open or vented fitting), or listens to the transmitted sound while watching its source (such as a
TV), the effects of the delay can be consequential. A small delay that is less than 10 ms may not be perceptible; a delay of
30-50 ms could lead to a "metallic" sound quality and the perception of an "echo."4 A delay of over 100 ms could lead to a
dys-synchrony between the visual and auditory information.
A low delay is a prerequisite to ensure good sound quality in a digital wireless communication system that is integrated into
a hearing aid.
#2) Robust Communication
Long-range versus short-range. Protecting the audio data from the effects of interfering electromagnetic noise in the environment is a real and serious design issue. A system that is robust ensures continuity of the data transmission and
goodness of the audio data; a non-robust system can experience drop-outs of the signals and/or poor sound quality from
interference.
Factors that affect the robustness of the communication system include the distance of the transmission, the method that is
used for the transmission, and how the system handles errors. As indicated earlier, audio data can be transmitted over a
short range or over a long range. The closer the transmitter is to the receiver, the stronger the received signal and the lower
the likelihood of transmission errors.
Typically, short-range transmissions are less likely to interfere with and less susceptible to interference from other short-
range transmissions. Conversely, long-range transmissions are more susceptible to interference.
Codec. A codec has several ways to ensure the accuracy and robustness of the data transmission. This is done in the encoding stage through the channel coding process where details of the data set, along with error correction codes, are
provided. A codec that has a robust way of encoding is less susceptible to errors.
Transceiver. A transceiver that has a reliable way of detecting and identifying the transmitted bits makes fewer errors when decoding the transmitted signals. Indeed, minimizing the potential sources of errors during the transmission stage and
correcting them during the decoding stage is the most critical step in ensuring a robust system.
Previously, we indicated that digital wireless transmission is less susceptible to interference than analog wireless
transmission. Most power-limited wireless digital transmission systems use Frequency Shift Keying (FSK) to transmit
digital signals. How this radio technology identifies the coded audio in the presence of noise is important to ensure the
integrity of the signal. And, just as important, how it handles errors affects the outcome.
For example, many Bluetooth systems handle errors by requiring the encoder to re-send the data. This means the decoder
will not use the incorrect data but instead waits for the correct data to be sent. This is one reason why a Bluetooth system
can experience a delay as long as 150 ms. Other systems, like the earlier version of the Digital Audio Broadcast (DAB)
system, do not permit data re-send and have limited ability to correct errors. This leads to drop-outs of sounds and periods
of silence until the correct data are received.
It is desirable to have a system that does not encounter any errors during the transmission process. But, since that is not
possible, a system that can handle errors without drop-outs, severe artifacts, and/or delays is desirable.
#3) Efficient Current Drain
Many of the issues with sound quality and robustness are non-issues if the power supply to the wireless hearing aid system
is unlimited. This is obviously neither practical nor possible in hearing aid applications because of size constraints. In real
life, a design that uses the least amount of power from the hearing aid while achieving an acceptable level of robustness and
good sound quality is adopted. Several factors that affect the current drain include the complexity of the codec, the
transmission bit rate, the transmission power, and the complexity of the transceiver.
A codec that does minimal data compression (ie, no data reduction) requires a lower current drain than one that does a lot
of data compression. But little data reduction also means a larger data set to be transmitted—which ultimately increases
current drain. One way to have complex processing while keeping an acceptable current drain on the hearing aid is to keep the complex, sophisticated processing in the audio encoder and minimal complexity in the decoder. This
is because the encoder (for external sources) is typically outside of the hearing aid where a larger battery or direct power
from the line supply is possible. By keeping most of the processes that require very high computing power outside the
hearing aid, it is possible to optimize the life of the hearing aid battery while maintaining the complexity of the processing.
In real life, a compromise between the amount of data compression (to reduce data size), the transmission bit rate and/or
bandwidth, sound quality, and power consumption needs to be made. The degree and form of compromise varies among
designers. Thus, no two wireless platforms are identical in their effectiveness. The quality of each platform must be
evaluated based on the "trade-offs" that the system makes to achieve wireless connectivity and how those trade-offs may
(or may not) affect performance.
WidexLink Digital Wireless Platform
The WidexLink digital wireless platform was developed to provide customized wireless communication solutions with a
design imperative to preserve the high sound quality provided by CLEAR440 hearing aids.
WidexLink covers both short-range and long-range communications, and allows coordination and synchronization of
hearing aid parameters between the left and right hearing aids. It also allows optional user control of selected hearing aid
features through the use of the new versatile remote control. Most notably, it allows high-fidelity transmission of audio
signals between hearing aids and from external sound sources (eg, TV, cell phone, MP3, telecoil, etc) to the hearing aids.
The system is developed on a proprietary technology platform based on Widex's vision of future needs. The criteria for this
platform include a very robust audio codec, a superior sound quality with a low delay, and a reasonable current drain.
Audio codec. Figure 1 shows a schematic block diagram of the components of the WidexLink as it is integrated within
the CLEAR hearing aid. Although all wireless systems include an encoding (or coding) stage, a decoding stage, and a
transceiver that transmits and receives the signals, the encoding stage is not included in Figure 1 because it is external to the
hearing aid. The general utility and a description of each stage have been described in the previous paper.1
FIGURE 1. Hearing aid with digital wireless communication technology and C-ISP processor.
The core of the proprietary WidexLink audio codec is inspired by an adaptive signal model called Analysis by Synthesis.
This is a highly complex and sophisticated method of coding and decoding audio signals within a closed (or bit-true)
system. In short, instead of transmitting the audio signal in its original form, this approach analyzes the content of the audio
signal at a very fast rate, then sends the result of the analysis to the receiver so it may recreate or synthesize the original
signal.
An analogy to the Analysis by Synthesis model may be made to shipping cakes. Imagine that you are the owner of a
bakery and your cakes are very popular. People all over the world order cakes from you. For the average customers who
order a limited number of cakes on an infrequent basis, you ship the cakes directly to them. In the process, you pack the
cakes in a sturdy container so they may not be dented (or worse yet, squashed). You ship it by express mail so they arrive
fresh. In some situations, accidents happen to the cakes so they may be delayed or damaged during the shipping process.
Obviously, you will have to add the costs of shipping, packing, and potential damages to the cost of the cakes.
Imagine now you have a contract to supply cakes to a large specialty grocery store, which also has an in-house bakery. In
this case, shipping finished cakes to the grocery store may not be the most efficient use of resources. Because the specialty
store also has a bakery, a more efficient way to ship cakes to the stores is to ship your recipes with detailed instructions
and ready-made pre-mix cakes to the specialty store. When the store receives the cake mix, its bakers can bake the cakes
themselves using your ingredients while following your instructions. This way, the cost involved in shipping the cakes is
significantly reduced. The number of bags of cake mix sent can be increased. Fewer special precautions are necessary
when packing the cake mix. The number of accidents that can happen to damage the cakes is reduced. And, most
importantly, the cakes will be available at the specialty store fresh and according to your high quality standard. Obviously,
this approach is only possible when the receiver (ie, specialty grocery store) also has bakers who know how to bake the
cakes with the special cake mix that you sent.
The act of packing and shipping your cakes is analogous to the task faced by engineers when designing wireless
transmission systems. In this case, engineers have to consider how to transmit the most audio signals in the most efficient
and error-free manner.
In the Analysis by Synthesis model, audio inputs are analyzed first to understand the component sounds that make up the
audio signal. Because there are only a finite number of sound components, each component can be matched to a special
code. This code, rather than the actual sound or sound components, is transmitted wirelessly. Because both the transmitter
and the receiver (transceiver) are designed by Widex, the special code that is received can be easily translated into the
appropriate sound components. Afterward, the sound components are synthesized to recover the original audio signal. This
is only possible in a bit-true system such as the one used in WidexLink.
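A highly simplified sketch of this "send the code, not the sound" idea is vector quantization against a shared codebook. The codebook below is random and purely illustrative; the actual WidexLink signal model is proprietary and far more sophisticated than a single lookup table:

```python
import numpy as np

# Toy "analysis by synthesis" sketch: the encoder matches each frame of
# audio against a codebook known to both sides and transmits only the
# winning index; the decoder synthesizes the frame from the same codebook.
rng = np.random.default_rng(1)
CODEBOOK = rng.normal(size=(16, 8))   # 16 known frame shapes, 8 samples each

def encode(frame):
    """Analysis: pick the codebook entry closest to the input frame."""
    return int(np.argmin(np.sum((CODEBOOK - frame) ** 2, axis=1)))

def decode(index):
    """Synthesis: recreate the frame from the shared codebook."""
    return CODEBOOK[index]

frame = CODEBOOK[5] + rng.normal(scale=0.05, size=8)  # noisy version of entry 5
idx = encode(frame)        # only this 4-bit index needs to be transmitted
recovered = decode(idx)    # the receiver rebuilds the 8-sample frame
```

Note the asymmetry this buys: eight samples shrink to a 4-bit index, and an eavesdropper who intercepts the index but lacks the codebook recovers nothing meaningful.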
There are several important benefits from using this approach. First, it dramatically reduces the number of transmitted bits
and saves on the current consumption. Second, the same number of bits carried by the system will have more details or
information of the audio data than the same number of bits carried by another codec that uses a simpler method of coding
and decoding. And third, since only the special codes are transmitted, its interception by an unintended receiver will not be
meaningful because it cannot be decoded back. Thus, data security is enhanced. Kuk et al1 described some simpler
codec methods in the primer paper.
In WidexLink, the transmitted signals are sampled at 25 kHz to result in a realizable transmission bandwidth from 100 Hz
to 11.2 kHz—which is even broader than the broad bandwidth (10 kHz) of the CLEAR440 ClearBand model (CL4-m-
CB) when it is in the typical mode of operation (ie, microphone mode).
As is common in digitization, an audio codec introduces quantization noise at the output. In many codecs, the level of the noise
is independent of the level of the signal (ie, it is fixed). As a result, the signal-to-noise ratio (SNR) decreases when signal
level decreases. In the WidexLink codec, the quantization noise level adapts to the level of the input signal. As the signal
level decreases, the quantization noise decreases in order to maintain a fixed SNR. This SNR level is set to be inaudible to
the user, while still allowing for a significant amount of digital audio compression.
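The idea can be illustrated with a quantizer whose step size adapts to the frame level. The sketch below is a generic illustration (the actual WidexLink scheme is proprietary): halving the signal level halves the quantization step, so the signal-to-quantization-noise ratio stays roughly constant instead of collapsing for soft inputs:

```python
import numpy as np

BITS = 8   # quantizer resolution for this illustration

def quantize_adaptive(frame):
    """Quantize with a step size scaled to the frame's own level."""
    scale = np.max(np.abs(frame)) or 1.0      # adapt the step to the level
    step = 2 * scale / (2 ** BITS)
    codes = np.round(frame / step).astype(int)  # what would be transmitted
    return codes * step                         # reconstructed frame

def snr_db(clean, quantized):
    err = clean - quantized
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(err ** 2))

t = np.linspace(0, 1, 1000)
loud = np.sin(2 * np.pi * 5 * t)
soft = 0.01 * loud   # the same signal, 40 dB quieter
# With a fixed step, the soft frame's SNR would drop by ~40 dB;
# with the adaptive step, both frames land at essentially the same SNR.
```

Printing `snr_db(loud, quantize_adaptive(loud))` and the soft-frame equivalent shows the two values nearly identical, which is the behavior the text describes: the noise floor tracks the signal downward.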
Channel coding. In designing the WidexLink, special attention has been paid to ensure a robust system that can
tolerate electromagnetic interference during transmission. When interference occurs, the content of the transmitted signals
may change. The difference between the "intended-for-transmission" signals and the transmitted signals is called an error.
Channel-coding algorithms provide a method to ensure and check the accuracy of the received data. They also include
additional codes that specify how errors are handled, and these error correction codes are different among manufacturers.
The codes can range from simply identifying an error to those that can anticipate the errors and correct for them—an
approach called Forward Error Correction (FEC).
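A classic minimal example of FEC is the Hamming(7,4) code, which adds 3 parity bits to every 4 data bits and can locate and correct any single flipped bit. It is shown here only to illustrate the principle; it is not the (proprietary) code used in WidexLink:

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # bit positions 1..7

def hamming74_decode(c):
    """Recompute the parities; the 'syndrome' spells out the error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3        # 0 means no error detected
    if pos:
        c = c.copy()
        c[pos - 1] ^= 1               # correct the flipped bit in place
    return [c[2], c[4], c[5], c[6]]   # the 4 recovered data bits

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[4] ^= 1                      # simulate interference flipping bit 5
assert hamming74_decode(codeword) == data   # the error is corrected, no re-send
```

This is "forward" correction in miniature: the receiver repairs the corrupted bit from the redundancy already in hand, with no re-transmission and no drop-out.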
The error correction method in the WidexLink is an FEC type capable of anticipating what types of errors will likely
occur. As a consequence, a high proportion of the transmitted 212 kbits are designated for error correction purposes so
when errors occur, they are corrected immediately. As a result, audio information is continuously passed to the user to
ensure minimum drop-out of sounds and that the transmitted sounds are of good quality. This differs from simpler digital
audio transmission systems where corrupted audio samples are either removed (resulting in a complete drop-out of the
audio) or the data are re-transmitted (which delays the transmission). The error correction codes in the WidexLink are one
of the many lines of defense in ensuring exceptional sound quality and robustness of the wireless communication system.
Before sending the encoded digital audio, the channel encoder adds the FEC, the address of the receiver, command data,
and a data type identification code that specifies which data are instructions, which are audio data, and which are error
correction codes so the receiver knows how to process the transmitted data. This also ensures correct transmission of the
audio to the intended hearing aid, thereby preventing eavesdropping. This step also eliminates potential interference from
other hearing aids in the surroundings and ensures security of the data as well. The data type ID code lets the receiving
hearing aid know if the audio data that it is receiving is intended for mono or stereo playback. If the data type ID has
changed, the hearing aid will check the command data to determine in what format the audio data are being transmitted and
make appropriate corrections.
The channel decoder of the receiving hearing aid can look at the added FEC information and compare it with the received
audio data. If the audio data agree with the FEC data that are sent, then the receiving hearing aid can be confident that the
received audio data are indeed correct and error-free. If the audio data do not agree with the FEC, the receiving hearing
aid will register that an error has occurred and initiate the correction process.
FIGURE 3. Traditional demodulation principle with 2 sample points
per detected bit.
WidexLink Transceiver
In addition to a way of reducing the size of the load and the meticulous use of forward error correction to ensure
robustness, an accurate means to detect and identify the transmitted bits is also important to ensure robust transmission.
This is achieved through the use of the patented, robust WidexLink transceiver technology. Of note is the use of a new
oversampling method for receiving the wirelessly transmitted signal that is modulated with the Frequency Shift Keying
(FSK) technique.
Figure 2 shows a block diagram of the functional components in the WidexLink transceiver. In order to use the same
antenna for transmitting and receiving the FSK modulated signals, a receiver/transmitter switch is used. As a standard
means, two radio frequencies are used to transmit the digital "1" and "0." The transmitted "1" and "0" are transformed into a
voltage with a rising slope and a falling slope, respectively.
FIGURE 2. The WidexLink transceiver block diagram.
The received FSK signal is first demodulated. The task for the decoder is to detect if the voltage slopes are rising or falling.
Conventional FSK detectors sample twice per slope to determine if it is rising or falling (Figure 3). In general, this is
sufficient for a correct identification when no interference is present. However, if the FSK signal is contaminated by
interference, it might be difficult to detect the slopes correctly. Figure 3 shows that, when interference is present (red
curve), the detected signal (orange line) may suggest a bit "0" instead of a bit "1" because of the estimated falling slope
when it is rising (green in Figure 3).
The proprietary WidexLink transceiver
system utilizes an "oversampling" technique
where multiple (5) samples are taken per
slope to determine if the slope is rising or
falling (Figure 4). This allows greater
certainty in the identity of the transmitted bit.
Thus, the WidexLink transceiver is designed
to be more accurate and sensitive than
traditional FSK transceivers, especially
when interference is present. The end result
is a more robust communication system with
consistently exceptional sound quality.
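The benefit of deciding each bit from more samples can be illustrated with a toy model (hypothetical noise levels and slope values, not the WidexLink implementation): each slope reading is the true value plus noise, and the detector averages the readings before thresholding:

```python
import numpy as np

rng = np.random.default_rng(2)
N_BITS = 20_000
true_bits = rng.integers(0, 2, N_BITS)

def detect(n_samples, noise_sigma=1.0):
    """Average n noisy slope readings per bit, then threshold at zero."""
    slope = np.where(true_bits == 1, 1.0, -1.0)   # rising vs falling slope
    readings = slope[:, None] + rng.normal(0, noise_sigma,
                                           (N_BITS, n_samples))
    return (readings.mean(axis=1) > 0).astype(int)

ber_2 = np.mean(detect(2) != true_bits)  # two samples per bit (traditional)
ber_5 = np.mean(detect(5) != true_bits)  # five samples per bit (oversampling)
```

Running this shows `ber_5` falling well below `ber_2`: averaging five readings shrinks the effective noise on each bit decision, which is the intuition behind the oversampling demodulator.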
Multiple Utilities
The previous discussion provided a general
description of the considerations behind the
WidexLink codec used in the CLEAR440
hearing aid. To take advantage of the
FIGURE 4. The WidexLink demodulation uses an oversampling
method.
different properties of radio waves,
WidexLink is utilized in both short-range
and long-range applications. They are briefly
described here, but their audiological
ramifications will be further explained in the
next paper.
Short-range between CLEAR440
hearing aids. Short-range communication is used to exchange
synchronization and coordination data
between hearing aids at 21 times per
second. In addition, audio is exchanged
between the two hearing aids, as well as
between the hearing aids and streamer
devices. This type of communication uses
magnetic induction with a carrier frequency
in the mega-hertz (MHz) radio frequency
range. This uses much less power (approximately 50 times less) than long-range communications such as Bluetooth and
WiFi. Although short-range radio technology can be used up to 1 meter, the range of transmission between hearing aids is
approximately 30 centimeters (about 1 foot) using standard hearing aid batteries. The sensitivity is highly dependent on the
transmitting and receiving antennae having the same orientation.
Short-range between assistive listening devices and CLEAR440. Short-range communication is also used between the remote control (RC-DEX) and the hearing aids. In addition, the use of optional gateway assistive listening
devices (ALDs) allows routing the audio signal received from the external devices, such as a TV or cell phone, to the
hearing aids. Because of the short distance, this gateway device can make use of the short-range inductive transmission
technology to transmit the commands from the RC-DEX or the audio data from the external devices to the hearing aids.
Proprietary long-range between TV and Streamer (TV-DEX). The WidexLink platform is also
implemented on ALDs that use a proprietary long-range strategy to transmit the audio data. One is to transmit data from
the TV via the TV-base transmitter to the body-worn TV-controller (or streamer). The TV-base provides two mini-jack
inputs that allow a TV and one other audio device to input into the TV-base. It uses an electromagnetic wave at a carrier
frequency of 2.4 GHz (similar to Bluetooth), and the TV-DEX transmits audio data in a stereo format to the hearing aids.
The current consumption of this long-range transmission is higher than would be normally acceptable for use in a hearing
aid. Because the transmission originates from the TV-base, which is connected to a standard wall outlet for power, current
drain on the hearing aid battery is not a problem. Additionally, due to the efficiency of the codec, the audio delay between
the TV and the hearing aid is less than 10 ms (end to end).
Bluetooth between mobile phone and streamer (M-DEX). Because Bluetooth wireless connectivity is an industry standard for cellular telephones, Bluetooth is used for communication between a cell phone and a cell phone ALD,
called the M-DEX. The higher audio delay of Bluetooth (between the M-DEX and the phone) is still present in this
connection. But, since it is only used in the auditory mode (listening to the transmitted cell phone sounds alone), the effect of
the delay is not a problem in this application. There is a microphone on the M-DEX that can be used to pick up the hearing
aid user's voice. A lanyard is available for the wearer to hang the M-DEX around the neck. Thus, the wearer can use the
M-DEX in conjunction with the CLEAR440 and the cellular phone in a hands-free mode.
Conclusion
The newly developed WidexLink wireless communication technology is a proprietary digital radio-frequency transmission
technology that is designed to provide high audio quality and efficiency. WidexLink offers new possibilities for extended
bandwidth audio streaming from external devices (DEX) to the hearing aids, and the continuous exchange of coordination
and synchronization data between hearing aids.
The unique digital wireless connection is designed to offer an unparalleled short delay when transmitting audio. This ensures
low distortion and echo-free audio quality when using hearing aid microphones together with direct audio transmission to
the hearing aids through optional ALDs.
References
1. Kuk F, Crose B, Korhonen P, Kyhn T, Mørkebjerg M, Rank M, Kidmose P, Jensen M, Larsen S, Ungstrup M.
Digital wireless hearing aids: Part 1: A primer. Hearing Review. 2010;17(3):54-67. Accessed May 1, 2011.
2. Kuk F, Korhonen P, Baekgaard L, Jessen A. MPO: A forgotten parameter in hearing aid fitting. Hearing Review.
2008;15(6):34-40. Accessed May 1, 2011.
3. Kuk F, Jessen A, Baekgaard L. Ensuring high fidelity in hearing aid sound processing. Hearing Review.
2009;16(3):34-43. Accessed May 1, 2011.
4. Stone M, Moore B. Tolerable hearing aid delays. I. Estimation of limits imposed by the auditory path alone using