Audio Equalization Using LMS Adaptive Filtering
Travis Bartley, Bob Minnich, and Dustin Cunningham
ECE 683 - Group Design Project
May 31, 2010
Contents
1 Introduction 4
1.1 Audio Equalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3 Proposed Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Adaptive Filtering 5
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 LMS Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
3 MATLAB Simulation 7
3.1 Test Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 Channel Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.3 Room Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3.4 Adaptive Filter Simulation . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.5 Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.5.1 Filter Order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.5.2 Learning Rate, µ . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4 Room Experiments 15
4.1 Experiment 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.2 Experiment 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.3 Experiment 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
4.4 Experiment 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4.5 Experiment 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
5 Plug-In Implementation 24
6 Conclusion 26
A Matlab Code 26
A.1 filteradaptfinal.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
A.2 playback.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
A.3 rec.m . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
List of Figures
1 Audio equalization adaptive filter block diagram . . . . . . . . . . . . . . 5
2 White noise training signal . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3 Cross correlation of desired and received signal to find room delay . . . . 9
4 Adaptive filter learns to cancel effects of first arbitrary channel . . . . . . 10
5 Adaptive filter learns to cancel effects of second arbitrary channel . . . . 10
6 Adaptive filter order of 7 . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
7 Adaptive filter order of 15 . . . . . . . . . . . . . . . . . . . . . . . . . . 11
8 Error analysis of length 7 adaptive filter . . . . . . . . . . . . . . . . . . 12
9 Error analysis of length 15 adaptive filter . . . . . . . . . . . . . . . . . . 12
10 Adaptive filter order of 25 . . . . . . . . . . . . . . . . . . . . . . . . . 13
11 Adaptive filter order of 50 . . . . . . . . . . . . . . . . . . . . . . . . . 13
12 Error analysis of length 25 adaptive filter . . . . . . . . . . . . . . . . . . 13
13 Error analysis of length 50 adaptive filter . . . . . . . . . . . . . . . . . . 13
14 Adaptive filter response with relatively low learning rate . . . . . . . . . 14
15 Adaptive filter response with relatively high learning rate . . . . . . . . . 14
16 Error analysis of relatively low learning rate . . . . . . . . . . . . . . . . 15
17 Error analysis of relatively high learning rate . . . . . . . . . . . . . . . . 15
18 Experiment 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
19 Studio Projects C4 microphone Bode plot . . . . . . . . . . . . . . . . . 17
20 Experiment 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
21 Experiment 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
22 Experiment 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
23 Experiment 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
24 Experiment 5, part 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
25 Exporting filter to keFIR plug-in . . . . . . . . . . . . . . . . . . . . . . 25
1 Introduction
1.1 Audio Equalization
Equalization, simply put, is the amplification or attenuation of the levels of different
frequencies in a signal. The frequencies of interest here are those detectable by the
human ear, approximately 20 Hz to 20 kHz. An example of audio equalization familiar to
most people is the treble/bass control on most audio equipment. The treble control adjusts
the high frequencies while the bass control adjusts the low frequencies. However, precise equalization of
a speaker or sound system is not possible with such a limited number of controls since
these controls necessarily affect a wide band of frequencies. In order to equalize a system
to a high degree of precision, control over narrow ranges of frequencies is required.
1.2 Problem Statement
As an audio signal propagates from the transmitter (speakers) to the receiver (human
ear), there are four main problems that can cause a difference between the original signal
and what is received at a particular listening point: 1) the interaction between multiple
speakers causes a superposition of responses that differs depending on the location of the
listener; 2) the interaction of the speaker with the environment (room) causes reflected
signals that acoustically sum with the direct-path signal; 3) the variable conditions of the
room, such as temperature, humidity, and the changing absorption coefficients of the walls
and objects in the room, alter the response over time; and 4) equipment distortions, such
as the non-ideal frequency responses of the speakers and digital-to-analog converters.
1.3 Proposed Solution
The proposed solution is to design and implement an adaptive filter using the least-
mean-square adaptation algorithm to correct for the frequency bands of the system that
deviate from unity gain. To do this, a known training sequence will be sent through
the DAC and speakers, and a microphone will be used to record the resultant distorted
sequence. Using this approach, the adaptive filter will adjust for the frequency response
of the overall system, including the DAC, the stereo, the room, and the microphone.
2 Adaptive Filtering
2.1 Introduction
Adaptive filters are self-designing filters that use a recursive algorithm to automatically
update their coefficients according to a specific optimization algorithm. They are therefore
useful in applications in which certain parameters of the desired processing operation
are not known in advance. The block diagram representing the implementation of the
adaptive filter for the purpose of audio equalization is illustrated in Figure 1.
Figure 1: Audio equalization adaptive filter block diagram
In this block diagram sn represents the audio signal, hn is the impulse response of the
system, xn is a filtered version of sn, i.e., the audio signal after it has propagated through
the system and is picked up by the listener, wn is the impulse response of the adaptive
filter or simply the adaptive filter weights, yn is the audio signal after it has been filtered
by the adaptive filter, and sn−∆ is a delayed version of the audio signal.
2.2 LMS Algorithm
The Least Mean Square (LMS) algorithm is a stochastic gradient descent algorithm that
uses the error signal, which is the difference between the desired and the actual signal,
at each iteration to update the filter coefficients. As a result, the error will diminish as
the adaptive filter reaches steady state, also known as convergence. Using Figure 1, the
error signal at iteration n can be written as
en = sn−∆ − yn (1)
where sn−∆ is a delayed version of the desired signal and yn is the actual signal at iteration
n and can be written as
yn = xn ∗ wn (2)
where xn is the signal after it has passed through the room and is picked up by the
recording microphone, wn is the adaptive filter, and * represents convolution between
two signals. The error signal can therefore be written as
en = sn−∆ − (xn ∗ wn) (3)
The goal of LMS is to minimize the mean squared error. Therefore the cost function to
be minimized can be written as
(1/2)(en)² = (1/2)(sn−∆ − (xn ∗ wn))² (4)
In order to see how the adaptive filter affects this error signal we take the gradient with
respect to wn
∇w((1/2)(en)²) = ∇w((1/2)(sn−∆ − (xn ∗ wn))²)
= −(sn−∆ − (xn ∗ wn))(xn)
= −(en)(xn) (5)
The LMS algorithm is based on gradient descent. To minimize the error, the adaptive
filter should update its weights so as to move the solution against the gradient. Using
this formulation the LMS weight update can be defined as
∆wn = −(µ)∇w((1/2)(en)²)
= (µ)(en)(xn) (6)
where µ is the learning rate parameter of the adaptive filter. The final weight update
rule for the LMS algorithm can be written as
wn+1 = wn + ∆wn
= wn + (µ)(en)(xn) (7)
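The derivation above maps directly onto a few lines of code. The following sketch (Python rather than the project's Matlab, with a hypothetical two-tap channel 1 + 0.5z^-1 standing in for the room response) implements equations (1), (2), and (7):

```python
import random

def lms_equalize(x, d, order, mu):
    """Adapt FIR weights w so that the filtered signal x * w tracks d.

    x: received (channel-distorted) signal, d: desired signal,
    order: number of taps, mu: learning rate.
    """
    w = [0.0] * order
    err = []
    for n in range(order - 1, len(x)):
        window = x[n - order + 1:n + 1][::-1]          # [x_n, x_{n-1}, ...]
        y = sum(wi * xi for wi, xi in zip(w, window))  # eq. (2): y_n = x_n * w_n
        e = d[n] - y                                   # eq. (1): error signal
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]  # eq. (7)
        err.append(abs(e))
    return w, err

random.seed(0)
s = [random.gauss(0.0, 1.0) for _ in range(20000)]     # white-noise training signal
# Toy minimum-phase channel: x_n = s_n + 0.5 s_{n-1}
x = [s[n] + 0.5 * s[n - 1] if n > 0 else s[0] for n in range(len(s))]
w, err = lms_equalize(x, s, order=8, mu=0.01)
```

With this channel the converged weights approximate the truncated inverse 1 − 0.5z^-1 + 0.25z^-2 − …, and the error trace shrinks as the filter converges.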
3 MATLAB Simulation
3.1 Test Signal
A training period of 5 seconds with a test signal of Gaussian white noise was used to
determine the adaptive filter coefficients. White noise serves as an ideal test signal because
it probes the system with equal energy in all frequency bands. The Matlab command
‘randn’ was used to generate pseudo-random values drawn from a normal distribution
with mean zero and standard deviation one. A plot of the test signal is shown in Figure 2.
Figure 2: White noise training signal
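The flat-spectrum property can be checked with a small numerical sketch (Python, pure standard library; the block size, number of blocks, and band split are arbitrary choices). It averages the spectrum of seeded Gaussian noise over several blocks and compares the energy in the lower and upper halves of the band:

```python
import cmath
import random

def dft_power(x):
    """Power of each DFT bin of x (naive O(N^2) transform)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n in range(N))) ** 2 for k in range(N)]

random.seed(0)
N, blocks = 128, 20
low = high = 0.0
for _ in range(blocks):
    noise = [random.gauss(0.0, 1.0) for _ in range(N)]
    p = dft_power(noise)
    low += sum(p[1:32])    # bins 1-31: lower half of the positive band
    high += sum(p[32:63])  # bins 32-62: upper half, same bin count
ratio = low / high
```

Averaged over enough blocks the ratio approaches 1, i.e. the training signal probes the low and high bands with equal energy.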
3.2 Channel Simulation
For simulation purposes the Matlab command ‘firls’ was used to create a filter that
would distort the input signal, simulating the response of an arbitrary system. This was
useful for the initial testing of the LMS algorithm.
3.3 Room Delay
Due to the propagation delay, the desired signal and the received signal must be synchronized
in order to calculate the correct error for each sample. This was achieved by taking
the cross correlation between the original input and the recorded signal to find
the room delay. Figure 3 shows the cross correlation between the two signals; when
they are perfectly aligned, a strong peak is observed.
Figure 3: Cross correlation of desired and received signal to find room delay
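A minimal version of this synchronization step can be sketched as follows (Python; the 37-sample delay is an arbitrary illustration, whereas the appendix code uses Matlab's xcorr over the full recording):

```python
import random

def estimate_delay(original, recorded, max_lag):
    """Return the lag at which the cross-correlation of the signals peaks."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(max_lag + 1):
        # Correlate `original` against `recorded` shifted by `lag` samples
        val = sum(o * recorded[i + lag]
                  for i, o in enumerate(original)
                  if i + lag < len(recorded))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

random.seed(1)
sig = [random.gauss(0.0, 1.0) for _ in range(2000)]
received = [0.0] * 37 + sig        # simulated 37-sample room delay
lag = estimate_delay(sig, received, max_lag=100)
```

Because the training signal is white, the correlation is near zero at every lag except the true delay, where a strong peak appears.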
3.4 Adaptive Filter Simulation
The goal of the adaptive filter simulation is to show that the LMS algorithm coded in Matlab
is working properly. To illustrate this, various channels were created and tested through
the algorithm. Figures 4 and 5 show how the adaptive filter is able to match
and invert the channel response of the room. This ensures that when the
system response is convolved with the adaptive filter, the result is an all-pass filter with
unity gain.
Figure 4: Adaptive filter learns to cancel effects of first arbitrary channel
Figure 5: Adaptive filter learns to cancel effects of second arbitrary channel
3.5 Optimization
The adaptive filter used in this application can be controlled by two variables, the filter
order and the learning rate µ. If the order of the filter is too short there will not be
enough coefficients to counteract all the undesired effects of the system and the filter
designed will be suboptimal at best. If the order of the filter is too long it can cause three
problems. First, the computation needed for the error to converge will be unnecessarily
high, which is undesirable for real world application. Second, too long of a filter order
causes large memory consumption due to the convolution needed for filtering, and third,
if two sufficiently long filters are used for a particular room response the longer filter will
take more time to converge to the same weights as the shorter filter. If µ is too low the
LMS algorithm may again require excessive computation since it will take too long to
find the minimum of the cost function. If µ is too large the algorithm may cause the
solution to overshoot the minimum error or possibly wander off course and diverge.
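These trade-offs can be illustrated numerically. The sketch below (Python; the channel, tap count, and µ values are arbitrary illustrations, not the values used in the experiments) runs the same LMS recursion with a low and a high step size:

```python
import random

def lms_error_trace(mu, taps, n_samples):
    """Run LMS equalization against a fixed toy channel; return |error| per step."""
    random.seed(2)
    s = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
    x = [s[n] + 0.4 * s[n - 1] if n > 0 else s[0] for n in range(n_samples)]
    w = [0.0] * taps
    trace = []
    for n in range(taps - 1, n_samples):
        window = x[n - taps + 1:n + 1][::-1]
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = s[n] - y
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
        trace.append(abs(e))
    return trace

slow = lms_error_trace(mu=0.001, taps=8, n_samples=4000)
fast = lms_error_trace(mu=0.02, taps=8, n_samples=4000)
```

On the same data, the larger (but still stable) step size drives the error down far sooner. A commonly quoted stability rule of thumb is 0 < µ < 2/(L·Px), where L is the number of taps and Px the input power; beyond that bound the recursion diverges rather than merely converging slowly.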
3.5.1 Filter Order
The channel length used in the following examples was 11 and Figure 6 shows an adaptive
filter with an order of 7. From this figure it can be clearly seen that the filter does not
produce the desired results since the filter length is insufficient. Figure 7 shows an
adaptive filter with an order of 15. The general shape of the adaptive filter matches the
room response but is still unable to create an optimal filter.
Figure 6: Adaptive filter order of 7
Figure 7: Adaptive filter order of 15
Figures 8 and 9 show the error convergence for adaptive filter lengths of 7 and 15,
respectively. For a length of 7 the error never decreases in the amount of
time that was simulated, while for a length of 15 the error decreases to reach a steady state. However,
since the response of the filter was still suboptimal, the error should be able to reach a
lower value.
Figure 8: Error analysis of length 7 adaptive filter
Figure 9: Error analysis of length 15 adaptive filter
Figures 10 and 11 show the responses of adaptive filters of length 25 and 50, respectively,
while the channel length has remained at 11. Both of these are able to adapt and cancel
the negative effects of the system. This agrees with the general rule of thumb that
the filter order should be at least twice as long as the channel impulse response.
Figure 10: Adaptive filter order of 25
Figure 11: Adaptive filter order of 50
Figures 12 and 13 show the respective error convergences for the length 25 and 50 adaptive
filters. Both plots show the error decreasing, but the length 25 adaptive filter converges
more quickly than the length 50 filter, showing that a higher filter order is not always better.
Figure 12: Error analysis of length 25 adaptive filter
Figure 13: Error analysis of length 50 adaptive filter
3.5.2 Learning Rate, µ
Figures 14 and 15 show an arbitrarily low and high value of the learning parameter,
µ. The specific values are not important at this time, as the purpose of this simulation
is to display a general concept. Given the same amount of time to adapt, it can be seen
that the filter with the lower value of µ is forming the correct general shape but has not
yet fully adapted, while the filter with the higher µ has converged on the optimal filter
weights to mitigate the effects of the room.
Figure 14: Adaptive filter response with relatively low learning rate
Figure 15: Adaptive filter response with relatively high learning rate
This concept can be further visualized by error analysis. Figures 16 and 17 show the
error convergence for the relatively low and high µ. Given the same number of iterations
the high µ has already converged to an optimal solution while the low µ is taking a much
longer time, which is undesirable.
Figure 16: Error analysis of relatively low learning rate
Figure 17: Error analysis of relatively high learning rate
The optimal choice of filter order and µ will be determined empirically for real-world
testing using the concepts learned from this optimization procedure.
4 Room Experiments
4.1 Experiment 1
A ten second training sequence was generated and recorded. Figure 18 shows the filter
generated from adapting the filter weights over thirty iterations of these sequences.
Figure 18: Experiment 1
The recording was created using a stereo sound system (Sony MHC-GX450) and a built-in
Presario laptop mic. The resulting filter W produces a boost to the low frequencies. To
observe the effect of the filter, music was filtered by W and played through the speakers.
After listening to the music, it was easy to notice that the low end was overemphasized.
One hypothesis based on this observation is that there is a highpass filter in the
Presario laptop mic used in the experiment. Since the recording was colored by the frequency
response of the microphone, and the microphone attenuated the low frequencies
so much, the adaptive filter compensated by boosting these frequencies. This
clearly demonstrates the importance of using a high-quality microphone for this experiment,
since the fidelity of the EQ-enhanced system is directly dependent on the fidelity
of the microphone.
4.2 Experiment 2
In order to create a filter that accurately compensates for the frequency response of the
overall system, it is necessary to use suitable equipment and a consistent setup. This
is because the transfer function of the microphone affects the results of the program.
For instance, if the microphone attenuates a certain frequency band, the adaptive filter
will boost this band in order to compensate for the loss. In order to account for this
effect, the Studio Projects C4 microphone was chosen. This microphone has a relatively
flat frequency response, and an omni-directional pickup pattern, making it ideal for this
application. The frequency spectrum of the microphone is shown in Figure 19.
Figure 19: Studio Projects C4 microphone Bode plot
Since Matlab does not allow wavplay and wavrecord to run simultaneously in one pro-
gram, two separate instances of Matlab must be running concurrently. The first instance
should be used to run rec.m while the second instance of Matlab is used to run
playback.m. These m-files should be executed nearly simultaneously, with rec.m starting first,
followed by playback.m. Playback.m initializes a training sequence consisting of white
Gaussian noise. If the training sequence is not long enough, the spectrum of the sequence
will not be very flat. Therefore, the duration of the training sequence was chosen to be
ten seconds. This ensures that the adaptive filter is provided with enough frequency
resolution to provide accurate results. After setting up the training sequence, playback.m will
then normalize the training sequence so that the peak value is 1. This is done by dividing
the training sequence by the largest sample. This is necessary so that the sequence is not
clipped when it is played as a wav file. Rec.m records the sequence played by playback.m.
These two programs then save the original training sequence and the recording to the
current Matlab directory. This data can then be processed by filteradapt.m to calculate
the stereo filter.
A Sony MHC-GX450 stereo with subwoofer and Studio Projects C4 microphone were
used to conduct experiments 2 and 3. A microphone was placed in the listening position,
forming an equilateral triangle with the speakers. The frequency response of the
resulting filter can be seen in the plot below. This frequency response is as expected
for a couple of reasons. First, the response shows a peak at around 4.5 kHz, which is the
crossover frequency between the tweeter and the mid-range speaker. Because of how the
crossover was designed by Sony, this frequency range is attenuated; to account for this,
the adaptive filter boosted these frequencies. The same principle applies to the crossover
between the subwoofer and the mid-range speakers. This crossover occurs between 100
Hz and 200 Hz, and a large peak in the adaptive filter response can be seen in this range.
Figure 20: Experiment 2
4.3 Experiment 3
The purpose of experiment 3 was to verify the results of experiment 2. To do this, the
original training sequence was first filtered using the weights calculated in experiment 2.
Then, the same procedure as experiment 2 was followed: the filtered training sequence
was played out of the speakers using playback.m and recorded using rec.m. Using this
arrangement, the resulting frequency response of the overall system should theoretically
be flat. Subsequently, the derived adaptive filter response from this experiment should
also be flat. Although the frequency response of the derived adaptive filter is not totally
flat, the results are as expected. As seen in the plot below, there are peaks and valleys
of 4 dB in the frequency response. However, there is no overall shape to the frequency
response as there was in experiment 2. That is, there is no particular frequency range that
is boosted or attenuated significantly. These 4 dB variations can be attributed to noise.
The microphone and stereo are not particularly noisy. However, the audible noise from
a computer fan was unavoidable. Also, the room used to conduct the experiments was
not soundproof and as a result, noise from the surrounding environment may also have
corrupted the recordings to a degree. These noises were imprinted onto the recordings
from both experiment 2 and 3, compromising the results of both.
Figure 21: Experiment 3
4.4 Experiment 4
Although the training signal of randomly generated numbers has, on average, a flat
frequency spectrum, a 10 second sequence will not have a totally flat spectrum. This can
present a problem: if certain frequencies are missing in the training signal, the adaptive
filter will not be able to detect error at these frequencies. There are a number of ways
to lessen the effects of this issue. One would be to average the results from a number of
experiments. Another would be to simply use a longer training sequence. The general
idea is to decrease the likelihood of gaps in the spectrum. To do this, an
experiment was conducted using a 120 second training sequence with a moderate filter
length of 1000. As can be seen from the figure, the shape is essentially that of experiment 3.
However, because a moderate filter length was used, the result is less precise. As
a result, the frequency response from experiment 4 resembles a smoothed-over version
of the frequency response from experiment 3.
Figure 22: Experiment 4
4.5 Experiment 5
To examine the effect of the room, an experiment was conducted with the microphone
about 15 feet away from the speaker. The microphone was near a corner, so it is expected
that the frequency response will differ from that of experiment 1. A moderate filter
length of 1000 was used.
Figure 23: Experiment 5
As seen in the figure above, the resulting filter was relatively flat in the high frequency
range. The low frequencies on the other hand had variations of about ±10 dB. To examine
the effectiveness of this filter, the training sequence was pre-equalized and sent through
the system. The resulting frequency response can be seen below. The high frequencies
deviate by roughly the same amount as before, but there is some improvement of the
low frequency response. Here the low frequency deviations are only approximately 6 dB.
Though the resulting filter is not totally flat, we can again attribute this to the reasons
stated before, namely the moderate filter length and the length of the training sequence.
Figure 24: Experiment 5, part 2
5 Plug-In Implementation
To provide additional functionality, the filter created by filteradapt.m can be easily
exported for use with a free VST plugin called keFIR. VST plugins are supported by a large
number of audio applications. keFIR is a zero-latency FIR filter based on user-defined
impulse responses, which can be loaded from .wav files. The plugin supports up to
approximately thirty-two thousand filter weights and includes a number of useful features
such as mix, gain, and length controls. The mix knob determines the proportion of the dry or wet
signal. For instance, at 0% the output will be untouched by the filter, whereas at 100%
the resultant output will be exactly that of the filter. The gain knob allows the user to
set the output to the optimum level without clipping. The length knob is automatically
set to the length of the impulse response upon loading. However, this knob can be used
to truncate the filter coefficients to save on CPU cycles.
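The dry/wet behaviour described here amounts to a linear crossfade between the two signals; whether keFIR implements exactly this law is an assumption, but the idea can be sketched as:

```python
def apply_mix(dry, wet, mix):
    """Blend unprocessed (dry) and filtered (wet) samples; mix is 0.0-1.0."""
    return [(1.0 - mix) * d + mix * w for d, w in zip(dry, wet)]

# At 0% the output is untouched; at 100% it is fully the filtered signal.
untouched = apply_mix([0.5, -0.25], [0.1, 0.3], 0.0)
filtered = apply_mix([0.5, -0.25], [0.1, 0.3], 1.0)
```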
Figure 25: Exporting filter to keFIR plug-in
To export the filter, first use filteradapt.m to calculate the stereo filter weights WL
and WR. Then run the code cell labelled “Export filter to keFIR.” The program will
then use WL and WR, normalize them optimally and write them to a .wav file in the
current Matlab directory. Then in keFIR, use the load button and navigate to the current
Matlab directory and load “ExportedFilter.wav.” The keFIR license is restated below
from the original source, http://habib.webhost.pl/vst_keFIR.php: “keFIR
(VST plugin) is free (donate if you like it) for usage in music applications, I forbid
it’s sale, modifications or sharing it over internet without consulting this with me, the
author. I give no guarantee of proper work of this software and I take no responsibility
of unwanted results of this software.”
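The normalization performed before the .wav is written can be sketched as follows (Python; note this sketch scales by the largest absolute tap, a slightly more defensive choice than the max(WL) division in filteradaptfinal.m):

```python
def normalize_for_16bit_wav(weights):
    """Scale taps into (-1, 1) so a 16-bit .wav writer will not clip them.

    The factor (1 - 2**-15) keeps the peak just below full scale,
    mirroring the (1 - 2^-15) scaling in filteradaptfinal.m.
    """
    peak = max(abs(w) for w in weights)
    scale = (1.0 - 2.0 ** -15) / peak
    return [w * scale for w in weights]

taps = normalize_for_16bit_wav([0.2, -1.6, 0.9])
```

After scaling, the tap with the largest magnitude sits exactly at ±(1 − 2⁻¹⁵), the most negative representable 16-bit-safe value short of full scale.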
6 Conclusion
As an audio signal propagates from speakers to a listener, distortions occur due to path
loss, reflections that cause destructive and constructive interference, and the inherent
frequency responses of equipment. These distortions, collectively referred to here as room
response, were corrected by the implementation of an adaptive filter based on the LMS
algorithm. The goal of the adaptive filter was to learn the impulse response of the room
and create its inverse, therefore providing a flat frequency spectrum for the audible range.
The LMS algorithm was chosen for its simple yet effective implementation and provides
a robust method of counteracting a non-ideal room response. It is shown that there are
several ways to control how the LMS algorithm performs. These factors, µ and the fil-
ter order, allowed for control of the computation time and accuracy possible for a given
number of iterations. These values were chosen through trial and error while computing
the filter for the room response. Once the optimal values were finalized they were applied
to the final filter. The results have shown that, due to the crossover between
the subwoofers and the midrange speakers, there was an obvious loss that the filter was
attempting to recover. This resulted in a boost from the adaptive filter. Once the signal
is predistorted with this filter, the output from the speakers creates an undesirable distortion
of the audio signal. This simply shows how difficult real-world implementation can
be. The simulations shown in this report describe how the LMS adaptive filter is able to
compensate for a non-ideal room response. However, the room experiments display how
important it is to have high-quality hardware for real-world implementation. The plug-in
functionality provided by keFIR allows the adaptive filter to be tailored to a specific
listening point and room response and provides real-time filtering while listening to
audio.
A Matlab Code
A.1 filteradaptfinal.m
%% Synchronization
% The First half of recording is from the L channel, and the second half is
% from the R channel. The recording is first split in half then the
% respective L and R recordings are synchronized to the original input
% sequence
CoutL=Cout(1:round(length(Cout)/2));
CoutR=Cout(round(length(Cout)/2)+1:length(Cout));
correlationL=xcorr(input,CoutL); % Use correlation to synchronize L
[mL,iL]=max(correlationL);
lagL=(abs(iL-max(length(input),length(CoutL)))+1);
CoutL=CoutL(lagL:lagL+length(input));
correlationR=xcorr(input,CoutR); % Use correlation to synchronize R
[mR,iR]=max(correlationR);
lagR=(abs(iR-max(length(input),length(CoutR)))+1);
CoutR=CoutR(lagR:lagR+length(input));
%% LMS Filter Adaptation
iterations=1; % Number of times W is adapted over the input sequence
M=2200; % Adaptive filter length, must be at least 2X room length
mu=.001/M; % Determines rate of convergence
WL=zeros(M,1); % Initial filter weights
WR=zeros(M,1);
% LMS algorithm
for index=1:iterations
for samplenum=M:length(input)-M+1 % Iterate through entire recorded sequence
% Output from stereo filter
WoutL=WL.'*flipud(CoutL(samplenum:M+samplenum-1));
WoutR=WR.'*flipud(CoutR(samplenum:M+samplenum-1));
% Calculate error between desired sequence and filter output
eL=input(samplenum+floor(M/2)+1)-WoutL;
eR=input(samplenum+floor(M/2)+1)-WoutR;
% Adjust filter weights based on error calculation
WL=WL+mu*eL.*flipud(input(samplenum:samplenum+M-1));
WR=WR+mu*eR.*flipud(input(samplenum:samplenum+M-1));
end
end
%% Graphical Analysis
% A variety of graphs useful for studying the behaviour of the
% adaptive filter and debugging
figure(1);
freqz(WR,1,2000,44100);
hold on;freqz(WL,1,2000,44100);hold off;
title('Adaptive filter response')
ax = findall(gcf, 'Type', 'axes');
set(ax, 'XScale', 'log');
figure(2);
stem(WL);
title('Adaptive filter coefficients')
xlabel('Filter Coefficient');ylabel('Amplitude (Arbitrary Units)');
figure(3);
plot(correlationL);
title('Correlation between training sequence and recorded sequence');
xlabel('Sample Number (n)');ylabel('Amplitude (Arbitrary Units)');
figure(4);
plot(input);hold on;
plot(CoutL,'r');hold off;
title('Training sequence (blue) vs recorded (red)')
xlabel('Sample Number (n)');ylabel('Amplitude (Arbitrary Units)');
%% Test filter
% This cell may be used to listen to the effect of the filter. To use this
% cell, click File -> Import Data then navigate to a .au, .snd or .wav file
samplestart=1; %
sampleend=10;
fs=44100;
out=[conv(data((samplestart*fs):(sampleend*fs),1),WL), conv(data((samplestart*fs):(sampleend*fs),2),WR)];
outn=out/max(abs(out));
wavplay(outn,fs);
%% Export filter to keFIR
WnormL=(1-2^-15)*WL/max(WL); % Normalize Filter Coeffs
WnormR=(1-2^-15)*WR/max(WR); % Normalize Filter Coeffs
wavwrite([WnormL WnormR],44100,'ExportedFilter'); % Write stereo (L,R) filter
save('WEQed','WR','WL');
A.2 playback.m
fs = 44100; % Sampling Frequency: 44.1 kHz
duration=10; % In Seconds
input=randn(duration*fs,1); % White noise signal
input=input/max(abs(input)); % Normalize to avoid clipping
% Send training sequence through L channel, wait 2 seconds, then send training
% sequence through R channel
inputLR=[[input zeros(length(input),1)];zeros(2*fs,2);[zeros(length(input),1) input]];
wavplay(inputLR,fs);
save('input','input') % Save training sequence to current MATLAB directory
A.3 rec.m
Cout=wavrecord(24*44100,44100);
Cout=Cout/max(abs(Cout));
save('Cout','Cout')