8 Advanced Signal Processing Techniques: Optimal and Adaptive Filters
OPTIMAL SIGNAL PROCESSING: WIENER FILTERS
The FIR and IIR filters described in Chapter 4 provide considerable flexibility
in altering the frequency content of a signal. Coupled with MATLAB filter
design tools, these filters can provide almost any desired frequency characteris-
tic to nearly any degree of accuracy. The actual frequency characteristics attained
by the various design routines can be verified through Fourier transform
analysis. However, these design routines do not tell the user what frequency
characteristics are best; i.e., what type of filtering will most effectively separate
out signal from noise. That decision is often made based on the user’s knowl-
edge of signal or source properties, or by trial and error. Optimal filter theory
was developed to provide structure to the process of selecting the most appro-
priate frequency characteristics.
A wide range of different approaches can be used to develop an optimal
filter, depending on the nature of the problem: specifically, what, and how
much, is known about signal and noise features. If a representation of the de-
sired signal is available, then a well-developed and popular class of filters
known as Wiener filters can be applied. The basic concept behind Wiener filter
theory is to minimize the difference between the filtered output and some de-
sired output. This minimization is based on the least mean square approach,
which adjusts the filter coefficients to reduce the square of the difference be-
tween the desired and actual waveform after filtering. This approach requires
FIGURE 8.1 Basic arrangement of signals and processes in a Wiener filter.
an estimate of the desired signal which must somehow be constructed, and this
estimation is usually the most challenging aspect of the problem.*
The Wiener filter approach is outlined in Figure 8.1. The input waveform
containing both signal and noise is operated on by a linear process, H(z). In
practice, the process could be either an FIR or IIR filter; however, FIR filters
are more popular as they are inherently stable,† and our discussion will be
limited to the use of FIR filters. FIR filters have only numerator terms in the
transfer function (i.e., only zeros) and can be implemented using convolution
first presented in Chapter 2 (Eq. (15)), and later used with FIR filters in Chapter
4 (Eq. (8)). Again, the convolution equation is:
y(n) = ∑_{k=1}^{L} h(k) x(n − k)     (1)
where h(k ) is the impulse response of the linear filter. The output of the filter,
y(n), can be thought of as an estimate of the desired signal, d (n). The difference
between the estimate and desired signal, e(n), can be determined by simple
subtraction: e(n) = d (n) − y(n).
As mentioned above, the least mean square algorithm is used to minimize
the error signal: e(n) = d (n) − y(n). Note that y(n) is the output of the linear
filter, H ( z). Since we are limiting our analysis to FIR filters, h(k ) ≡ b(k ), and
e(n) can be written as:
e(n) = d(n) − y(n) = d(n) − ∑_{k=0}^{L−1} h(k) x(n − k)     (2)
where L is the length of the FIR filter. In fact, it is the sum of e(n)² that is
minimized, specifically:
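ε = ∑_{n=1}^{N} e²(n) = ∑_{n=1}^{N} [d(n) − ∑_{k=0}^{L−1} b(k) x(n − k)]²     (3)

where N is the number of points in the data record. (Eq. (3) is stated here in its
standard least squares form; it is the quantity referred to again in the discussion
of the LMS algorithm below.)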
*As shown below, only the crosscorrelation between the unfiltered and the desired output is neces-
sary for the application of these filters.
†IIR filters contain internal feedback paths and can oscillate with certain parameter combinations.
FIGURE 8.2 Configuration for using optimal filter theory for systems identification.
the correlation matrix mentioned in Chapter 2 (Eq. (21)) and has a symmetrical
structure termed a Toeplitz structure.* The equation can be written more suc-
cinctly using standard matrix notation, and the FIR coefficients can be obtained
by solving the equation through matrix inversion:
Rb = rdx     and the solution is:     b = R−1 rdx     (7)
The application and solution of this equation are given for two different
examples in the following section on MATLAB implementation.
The Wiener-Hopf approach has a number of other applications in addition
to standard filtering including systems identification, interference canceling, and
inverse modeling or deconvolution. For system identification, the filter is placed
in parallel with the unknown system as shown in Figure 8.2. In this application,
the desired output is the output of the unknown system, and the filter coeffi-
cients are adjusted so that the filter’s output best matches that of the unknown
system. An example of this application is given in a subsequent section on
adaptive signal processing where the least mean squared (LMS) algorithm is
used to implement the optimal filter. Problem 2 also demonstrates this approach.
In interference canceling, the desired signal contains both signal and noise while
the filter input is a reference signal that contains only noise or a signal correlated
with the noise. This application is also explored under the section on adaptive
signal processing since it is more commonly implemented in this context.
MATLAB Implementation
The Wiener-Hopf equation (Eqs. (5) and (6)) can be solved using MATLAB’s
matrix inversion operator (‘\’) as shown in the examples below. Alternatively,
*Due to this matrix’s symmetry, it can be uniquely defined by only a single row or column.
since the matrix has the Toeplitz structure, matrix inversion can also be done
using a faster algorithm known as the Levinson-Durbin recursion.
The MATLAB toeplitz function is useful in setting up the correlation
matrix. The function call is:
Rxx = toeplitz(rxx);
where rxx is the input row vector. This constructs a symmetrical matrix from a
single row vector and can be used to generate the correlation matrix in Eq. (6)
from the autocorrelation function rxx. (The function can also create an asymmet-
rical Toeplitz matrix if two input arguments are given.)
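For example, using illustrative values, toeplitz([1 0.5 0.2]) returns the symmetric
matrix

1.0   0.5   0.2
0.5   1.0   0.5
0.2   0.5   1.0

in which each diagonal contains a single value of the input vector, as required of
the correlation matrix in Eq. (6).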
In order for the matrix to be inverted, it must be nonsingular; that is, the
rows and columns must be independent. Because of the structure of the correla-
tion matrix in Eq. (6) (termed positive-definite), it cannot be singular. However,
it can be near singular: some rows or columns may be only slightly independent.
Such an ill-conditioned matrix will lead to large errors when it is inverted. The
MATLAB ‘\’ matrix inversion operator provides an error message if the matrix
is not well-conditioned, but this can be more effectively evaluated using the
MATLAB cond function:
c = cond(X)
where X is the matrix under test and c is the ratio of the largest to smallest
singular values. A very well-conditioned matrix would have singular values in
the same general range, so the output variable, c, would be close to one. Very
large values of c indicate an ill-conditioned matrix. Values greater than 10⁴ have
been suggested by Stearns and David (1996) as too large to produce reliable
results in the Wiener-Hopf equation. When this occurs, the condition of the matrix
can usually be improved by reducing its dimension, that is, reducing the range,
L, of the autocorrelation function in Eq. (6). This will also reduce the number
of filter coefficients in the solution.
Example 8.1 Given a sinusoidal signal in noise (SNR = −8 dB), design
an optimal filter using the Wiener-Hopf equation. Assume that you have a copy
of the actual signal available, in other words, a version of the signal without the
added noise. In general, this would not be the case: if you had the desired signal,
you would not need the filter! In practical situations you would have to estimate
the desired signal or the crosscorrelation between the estimated and desired
signals.
Solution The program below uses the routine wiener_hopf (also shown
below) to determine the optimal filter coefficients. These are then applied to the
noisy waveform using the filter routine introduced in Chapter 4 although
correlation could also have been used.
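A minimal sketch of how such a routine could be organized is given below; it
is an illustration of the approach (using MATLAB’s xcorr, toeplitz, and cond
functions), not the text’s wiener_hopf listing, and the variable names are
assumptions.

function b = wiener_hopf(x,d,L)
% Sketch of a Wiener-Hopf solver: finds L FIR coefficients, b, so that
% filtering the input x best matches the desired signal d in the least
% mean square sense. (Illustrative implementation, not the original listing.)
rxx = xcorr(x,L-1);              % Autocorrelation of input, lags -(L-1) to L-1
rxx = rxx(L:end);                % Keep lags 0 through L-1
rdx = xcorr(d,x,L-1);            % Crosscorrelation of desired signal and input
rdx = rdx(L:end);                % Keep lags 0 through L-1
R = toeplitz(rxx);               % Symmetrical (Toeplitz) correlation matrix
if cond(R) > 1e4                 % Warn if the matrix is ill-conditioned
    disp('Correlation matrix is ill-conditioned; consider reducing L')
end
b = (R \ rdx(:))';               % Solve R*b = rdx for the filter coefficients

The coefficients returned by such a routine would then be applied to the noisy
waveform with y = filter(b,1,x).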
The output plots from this example are shown in Figure 8.4. Note the
close match in spectral characteristics between the “unknown” process and the
matching output produced by the Wiener-Hopf algorithm. The transfer functions
also closely match as seen by the similarity in impulse response coefficients:
ADAPTIVE SIGNAL PROCESSING
The area of adaptive signal processing is relatively new yet already has a rich
history. As with optimal filtering, only a brief example of the usefulness and
broad applicability of adaptive filtering can be covered here. The FIR and IIR
filters described in Chapter 4 were based on a priori design criteria and were
fixed throughout their application. Although the Wiener filter described above
does not require prior knowledge of the input signal (only the desired outcome),
it too is fixed for a given application. As with classical spectral analysis meth-
ods, these filters cannot respond to changes that might occur during the course
of the signal. Adaptive filters have the capability of modifying their properties
based on selected features of the signal being analyzed.
A typical adaptive filter paradigm is shown in Figure 8.5. In this case, the
filter coefficients are modified by a feedback process designed to make the filter’s
output, y(n), as close to some desired response, d (n), as possible, by reducing the
error, e(n), to a minimum. As with optimal filtering, the nature of the desired
response will depend on the specific problem involved and its formulation may
be the most difficult part of the adaptive system specification (Stearns and David,
1996).
The inherent stability of FIR filters makes them attractive in adaptive appli-
cations as well as in optimal filtering (Ingle and Proakis, 2000). Accordingly, the
adaptive filter, H ( z), can again be represented by a set of FIR filter coefficients,
b(k ). The FIR filter equation (i.e., convolution) is repeated here, but the filter
coefficients are indicated as bn(k ) to indicate that they vary with time (i.e., n).
y(n) = ∑_{k=1}^{L} bn(k) x(n − k)     (8)
The adaptive filter operates by modifying the filter coefficients, bn(k ),
based on some signal property. The general adaptive filter problem has similari-
ties to the Wiener filter theory problem discussed above in that an error is
minimized, usually between the input and some desired response. As with opti-
mal filtering, it is the squared error that is minimized, and, again, it is necessary
to somehow construct a desired signal. In the Wiener approach, the analysis is
applied to the entire waveform and the resultant optimal filter coefficients were
similarly applied to the entire waveform (a so-called block approach). In adap-
tive filtering, the filter coefficients are adjusted and applied on an ongoing basis.
While the Wiener-Hopf equations (Eqs. (6) and (7)) can be, and have been,
adapted for use in an adaptive environment, a simpler and more popular ap-
proach is based on gradient optimization. This approach is usually called the
LMS recursive algorithm. As in Wiener filter theory, this algorithm also deter-
mines the optimal filter coefficients, and it is also based on minimizing the
squared error, but it does not require computation of the correlation functions,
r xx and r xy. Instead the LMS algorithm uses a recursive gradient method known
as the steepest-descent method for finding the filter coefficients that produce
the minimum sum of squared error.
Examination of Eq. (3) shows that the sum of squared errors is a quadratic
function of the FIR filter coefficients, b(k ); hence, this function will have a
single minimum. The goal of the LMS algorithm is to adjust the coefficients so
that the sum of squared error moves toward this minimum. The technique used
by the LMS algorithm is to adjust the filter coefficients based on the method of
steepest descent. In this approach, the filter coefficients are modified based on
an estimate of the gradient of the squared error with respect to each coefficient,
moving each coefficient in the direction that reduces the error.
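For reference, the standard LMS coefficient update has the form

bn+1(k) = bn(k) + ∆ e(n) x(n − k)

where ∆ is a convergence gain that trades adaptation speed against stability; the
text’s own update equation (Eq. (11), used in the MATLAB implementation
below) may differ slightly in scaling or indexing.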
can have an influence on the result. The adaptive filter will try to adjust its
output to minimize this result, but since its output component, Nb*(n), only
correlates with the narrowband component of the waveform, Nb(n), it is only
the narrowband component that is minimized. In adaptive interference suppres-
sion, the narrowband component is the noise and this is the component that is
minimized in the subtracted signal. The subtracted signal, now containing less
noise, constitutes the output in adaptive interference suppression (upper output,
Figure 8.6).
In adaptive line enhancement, the configuration is the same except the
roles of signal and noise are reversed: the narrowband component is the signal
and the broadband component is the noise. In this case, the output is taken from
the filter output (Figure 8.6, lower output). Recall that this filter output is opti-
mized for the narrowband component of the waveform.
As with the Wiener filter approach, a filter of equal or better performance
could be constructed with the same number of filter coefficients using the tradi-
tional methods described in Chapter 4. However, the exact frequency or frequen-
cies of the signal would have to be known in advance and these spectral features
would have to be fixed throughout the signal, a situation that is often violated
in biological signals. The ALE can be regarded as a self-tuning narrowband
filter which will track changes in signal frequency. An application of ALE is
provided in Example 8.3 and an example of adaptive interference suppression
is given in the problems.
Adaptive Noise Cancellation
Adaptive noise cancellation can be thought of as an outgrowth of the interfer-
ence suppression described above, except that a separate channel is used to
supply the estimated noise or interference signal. One of the earliest applications
of adaptive noise cancellation was to eliminate 60 Hz noise from an ECG signal
(Widrow, 1964). It has also been used to improve measurements of the fetal
ECG by reducing interference from the mother’s ECG. In this approach, a refer-
ence channel carries a signal that is correlated with the interference, but not
with the signal of interest. The adaptive noise canceller consists of an adaptive
filter that operates on the reference signal, N’(n), to produce an estimate of the
interference, N*(n) (Figure 8.7). This estimated noise is then subtracted from the
signal channel to produce the output. As with ALE and interference cancellation,
the difference signal is used to adjust the filter coefficients. Again, the
strategy is to minimize the difference signal, which in this case is also the
output, since minimum output signal power corresponds to minimum interfer-
ence, or noise. This is because the only way the filter can reduce the output
power is to reduce the noise component since this is the only signal component
available to the filter.
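As a rough sketch of this configuration in MATLAB (signal content, parameter
values, and variable names here are illustrative assumptions; lms refers to the
routine given under MATLAB Implementation below):

fs = 1000; N = 2000; t = (1:N)/fs;       % Assumed sampling frequency and record length
x = sin(2*pi*10*t);                      % Signal of interest
intf = 0.5*randn(1,N);                   % Interference (noise) source
primary = x + intf;                      % Signal channel: signal plus interference
ref = filter([0.5 0.5],1,intf);          % Reference channel: correlated with the interference
delta = 0.01; L = 32;                    % Convergence gain and filter length (illustrative)
[b,y,e] = lms(ref,primary,delta,L);      % Filter estimates the interference from the reference
out = e;                                 % Canceller output: primary channel minus noise estimate

Note that, as described above, the error signal e is itself the canceller output.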
FIGURE 8.7 Configuration for adaptive noise cancellation. The reference channel
carries a signal, N ’(n ), that is correlated with the noise, N (n ), but not with the
signal of interest, x(n). The adaptive filter produces an estimate of the noise,
N*(n), that is in the signal. In some applications, multiple reference channels are
used to provide a more accurate representation of the background noise.
MATLAB Implementation
The implementation of the LMS recursive algorithm (Eq. (11)) in MATLAB is
straightforward and is given below. Its application is illustrated through several
examples below.
The LMS algorithm is implemented in the function lms.
function [b,y,e] = lms(x,d,delta,L)
%
% Inputs: x = input
% d = desired signal
% delta = the convergence gain
% L is the length (order) of the FIR filter
% Outputs: b = FIR filter coefficients
% y = ALE output
% e = residual error
% Simple function to adjust filter coefficients using the LMS
% algorithm
% Adjusts filter coefficients, b, to provide the best match
% between the input, x(n), and a desired waveform, d(n).
% Both waveforms must be the same length.
% Uses a standard FIR filter
%
M = length(x);
b = zeros(1,L); y = zeros(1,M); % Initialize outputs
for n = L:M
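% The loop body below is a sketch of the standard LMS update (Eq. (11)),
% given here for completeness; it is not the original listing verbatim.
   x1 = x(n:-1:n-L+1);           % Most recent L input points, reversed for convolution
   y(n) = b * x1';               % Filter output: weights times input segment
   e(n) = d(n) - y(n);           % Error between desired signal and filter output
   b = b + delta*e(n)*x1;        % Adjust the filter coefficients
end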
FIGURE 8.8 Application of an adaptive filter using the LMS recursive algorithm
to data containing a single sinusoid (10 Hz) in noise (SNR = −8 dB). Note that the
filter requires the first 0.4 to 0.5 sec to adapt (400–500 points), and that the fre-
quency characteristics of the coefficients produced after adaptation are those of
a bandpass filter with a single peak at 10 Hz. Comparing this figure with Figure
8.3 suggests that the adaptive approach is somewhat more effective than the
Wiener filter for the same number of filter weights.
empirically set to give rapid, yet stable convergence. (In fact, close inspection of
Figure 8.8 shows a small oscillation in the output amplitude suggesting marginal
stability.)
Example 8.4 The application of the LMS algorithm to a stationary sig-
nal was given in Example 8.3. Example 8.4 explores the adaptive characteristics
of the algorithm in the context of an adaptive line enhancement problem. Specifi-
cally, a single sinusoid that is buried in noise (SNR = −6 dB) abruptly changes
frequency. The ALE-type filter must readjust its coefficients to adapt to the new
frequency.
The signal consists of two sequential sinusoids of 10 and 20 Hz, each
lasting 0.6 sec. An FIR filter with 256 coefficients will be used. Delay and
convergence gain will be set for best results. (As in many problems some adjust-
ments must be made on a trial and error basis.)
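A minimal sketch of such a setup is shown below; the parameter values are
illustrative, fs = 1000 Hz is assumed, randn is used in place of the sig_noise
routine, and lms is the function given earlier.

fs = 1000;                                 % Assumed sampling frequency
t = (1:600)/fs;                            % Time vector for each 0.6-sec segment
x = [sin(2*pi*10*t) sin(2*pi*20*t)] ...
      + 1.4*randn(1,1200);                 % 10 Hz then 20 Hz sinusoid in noise (roughly -6 dB SNR)
delay = 5; L = 256; delta = 0.0005;        % Decorrelation delay, filter length, convergence gain
xd = [zeros(1,delay) x(1:end-delay)];      % Delayed waveform forms the filter input
[b,y,e] = lms(xd,x,delta,L);               % Desired signal is the undelayed waveform; y is the ALE output

Plotting y against time would show the filter output collapse and then re-form
as the coefficients re-adapt to the new frequency.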
hardware. Prepackaged phase sensitive detectors that incorporate a wide variety
of optional features are commercially available, and are sold under the term
lock-in amplifiers. While lock-in amplifiers tend to be costly, less sophisticated
analog phase sensitive detectors can be constructed quite inexpensively. The
reason phase sensitive detection is commonly carried out in the analog domain
has to do with the limitations on digital storage and analog-to-digital conversion.
AM signals consist of a carrier signal (usually a sinusoid) which has an ampli-
tude that is varied by the signal of interest. For this to work without loss of
information, the frequency of the carrier signal must be much higher than the
highest frequency in the signal of interest. (As with sampling, the greater the
spread between the highest signal frequency and the carrier frequency, the easier
it is to separate the two after demodulation.) Since sampling theory dictates that
the sampling frequency be at least twice the highest frequency in the input
signal, the sampling frequency of an AM signal must be more than twice the
carrier frequency. Thus, the sampling frequency will need to be much higher
than the highest frequency of interest, much higher than if the AM signal were
demodulated before sampling. Hence, digitizing an AM signal before demodula-
tion places a higher burden on memory storage requirements and analog-to-
digital conversion rates. However, with the reduction in cost of both memory
and high-speed ADCs, it is becoming more and more practical to decode AM
signals using the software equivalent of phase sensitive detection. The following
analysis applies to both hardware and software PSD’s.
AM Modulation
In an AM signal, the amplitude of a sinusoidal carrier signal varies in proportion
to changes in the signal of interest. AM signals commonly arise in bioinstrumen-
tation systems when a transducer based on variations in electrical properties is
excited by a sinusoidal voltage (i.e., the current through the transducer is sinus-
oidal). The strain gage is an example of this type of transducer where resistance
varies in proportion to small changes in length. Assume that two strain gages
are differentially configured and connected in a bridge circuit, as shown in Figure
1.3. One arm of the bridge circuit contains the transducers, R + ∆ R and R − ∆ R,
while the other arm contains resistors having a fixed value of R, the nominal
resistance value of the strain gages. In this example, ∆ R will be a function of
time, specifically a sinusoidal function of time, although in the general case it
would be a time-varying signal containing a range of sinusoid frequencies. If
the bridge is balanced, and ∆ R << R, then it is easy to show using basic circuit
analysis that the bridge output is:
Vin = V∆R/2R     (14)
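As a quick numerical illustration of Eq. (14) (the values are assumed for the
example): with a 5 V bridge excitation and a fractional resistance change ∆R/R
of 0.1%, the bridge output is only Vin = (5)(0.001)/2 = 2.5 mV; it is this small
amplitude-modulated signal that the phase sensitive detector must recover.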
V′(t) = (Vsk/4R)[cos(2ωct + θ + ωst) + cos(2ωct + θ − ωst)
          + cos(ωst + θ) + cos(ωst − θ)]     (24)
The spectrum of V ′(t ) is shown in Figure 8.13. Note that the phase angle,
θ, would have an influence on the magnitude of the signal, but not its frequency.
After lowpass digital filtering, the higher frequency terms at 2ωc ± ωs will be
reduced to near zero, so the output, Vout(t), becomes:
Vout(t) = A(t) cos θ = (Vsk/2R) cos θ     (25)
Since cos θ is a constant, the output of the phase sensitive detector is the
demodulated signal, A(t ), multiplied by this constant. The term phase sensitive
is derived from the fact that the constant is a function of the phase difference,
θ, between Vc(t) and Vin(t). Note that while θ is generally constant, any shift in
phase between the two signals will induce a change in the output signal level, so
this approach could also be used to detect phase changes between signals of
constant amplitude.
The multiplier operation is similar to the sampling process in that it gener-
ates additional frequency components. This will reduce the influence of low
frequency noise since it will be shifted up to near the carrier frequency. For
example, consider the effect of the multiplier on 60 Hz noise (or almost any
noise that is not near the carrier frequency). Using the principle of superposition,
only the noise component needs to be considered. For a noise component at
frequency ωn (Vin(t)NOISE = Vn cos(ωnt)), the contribution to V′(t) after
multiplication will be:
FIGURE 8.13 Frequency spectrum of the signal created by multiplying Vin(t)
by the carrier frequency. After lowpass filtering, only the original low frequency
signal at ωs will remain.
V′(t)NOISE = Vn [cos(ωct + ωnt) + cos(ωct − ωnt)]     (26)
and the new, complete spectrum for V ′(t ) is shown in Figure 8.14.
The only frequencies that will not be attenuated in the input signal, Vin(t), are
those around the carrier frequency that also fall within the bandwidth of the
lowpass filter. Another way to analyze the noise attenuation characteristics of
phase sensitive detection is to view the effect of the multiplier as shifting the
lowpass filter’s spectrum to be symmetrical about the carrier frequency, giving
it the form of a narrow bandpass filter (Figure 8.15). Not only can extremely
narrowband bandpass filters be created this way (simply by having a low cutoff
frequency in the lowpass filter), but more importantly the center frequency of
the effective bandpass filter tracks any changes in the carrier frequency. It is
these two features, narrowband filtering and tracking, that give phase sensitive
detection its signal processing power.
MATLAB Implementation
Phase sensitive detection is implemented in MATLAB using simple multiplica-
tion and filtering. The application of a phase sensitive detector is given in
Example 8.6.
FIGURE 8.14 Frequency spectrum of the signal created by multiplying V in(t ) in-
cluding low frequency noise by the carrier frequency. The low frequency noise is
shifted up to ± the carrier frequency. After lowpass filtering, both the noise and
higher frequency signal are greatly attenuated, again leaving only the original
low frequency signal at ωs.
vc = [vc(ishift:N) vc(1:ishift-1)];   % Shift carrier by 1/8 period (45 deg)
                                      % using periodic shift
v1 = vc .* vm; % Multiplier
vout = filter(b,a,v1); % Apply lowpass filter
subplot(3,1,3);
plot(t,vout,'k');               % Plot demodulated signal
.......axis, label,title.......
The lowpass filter was set to a cutoff frequency of 20 Hz (0.02 * fs/2) as
a compromise between good noise reduction and fidelity. (The fidelity can be
roughly assessed by the sharpness of the peaks of the recovered sawtooth wave.)
A major limitation in this process was the characteristics of the lowpass filter:
digital filters do not perform well at low frequencies. The results are shown in
Figure 8.16 and show reasonable recovery of the demodulated signal from the
noise.
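A self-contained sketch of the complete software approach is given below; the
carrier frequency, modulating signal, noise level, and filter cutoff used here are
illustrative values rather than those of the example above.

% Sketch of a software phase sensitive detector (illustrative parameters)
fs = 2000; N = 4000; t = (1:N)/fs;           % Sampling frequency and time vector
fc = 250;                                    % Carrier frequency in Hz
A = 1 + 0.5*sin(2*pi*2*t);                   % Low frequency signal of interest (2 Hz)
vin = A .* cos(2*pi*fc*t) + 0.5*randn(1,N);  % AM signal plus broadband noise
vc = cos(2*pi*fc*t);                         % Reference carrier (in phase, so theta = 0)
v1 = vin .* vc;                              % Multiplier (phase sensitive detection)
[b,a] = butter(4,10/(fs/2));                 % Lowpass filter with a 10 Hz cutoff
vout = filter(b,a,v1);                       % Demodulated output, proportional to A(t)

Here vout approximates A(t)/2; the multiplication shifts the noise up to near the
carrier frequency, where the lowpass filter removes it.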
Even better performance can be obtained if the interference signal is nar-
rowband such as 60 Hz interference. An example of using phase sensitive detec-
tion in the presence of a strong 60 Hz signal is given in Problem 6 below.
PROBLEMS
1. Apply the Wiener-Hopf approach to a signal plus noise waveform similar
to that used in Example 8.1, except use two sinusoids at 10 and 20 Hz in 8 dB
noise. Recall, the function sig_noise provides the noiseless signal as the third
output to be used as the desired signal. Apply this optimal filter for filter lengths
of 256 and 512.
2. Use the LMS adaptive filter approach to determine the FIR equivalent to
the linear process described by the digital transfer function:
H(z) = (0.2 + 0.5z−1)/(1 − 0.2z−1 + 0.8z−2)
As with Example 8.2, plot the magnitude digital transfer function of the
“unknown” system, H ( z), and of the FIR “matching” system. Find the transfer
function of the IIR process by taking the square of the magnitude of
fft(b,n)./fft(a,n) (or use freqz). Use the MATLAB function filtfilt
to produce the output of the IIR process. This routine produces no time delay
between the input and filtered output. Determine the approximate minimum
number of filter coefficients required to accurately represent the function above
by limiting the coefficients to different lengths.
3. Generate a 20 Hz interference signal in noise with an SNR of +8 dB; that is,
the interference signal is 8 dB stronger than the noise. (Use sig_noise with an
SNR of +8.) In this problem the noise will be considered as the desired signal.
Design an adaptive interference filter to remove the 20 Hz “noise.” Use an FIR
filter with 128 coefficients.
4. Apply the ALE filter described in Example 8.3 to a signal consisting of two
sinusoids of 10 and 20 Hz that are present simultaneously, rather than sequen-
tially as in Example 8.3. Use FIR filter lengths of 128 and 256 points. Evaluate
the influence of modifying the delay between 4 and 18 samples.
5. Modify the code in Example 8.5 so that the reference signal is correlated
with, but not the same as, the interference data. This should be done by con-
volving the reference signal with a lowpass filter consisting of 3 equal weights;
i.e.:
b = [0.333 0.333 0.333];
For this more realistic scenario, note the degradation in performance as
compared to Example 8.5 where the reference signal was identical to the noise.
6. Redo the phase sensitive detector in Example 8.6, but replace the white
noise with a 60 Hz interference signal. The 60 Hz interference signal should
have an amplitude that is 10 times that of the AM signal.