EURASIP Journal on Applied Signal Processing 2005:18, 3076–3086
© 2005 C. Charoensak and F. Sattar

Design of Low-Cost FPGA Hardware for Real-time ICA-Based Blind Source Separation Algorithm

Charayaphan Charoensak
School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798
Email: [email protected]

F. Sattar
School of Electrical and Electronic Engineering, Nanyang Technological University, Nanyang Avenue, Singapore 639798
Email: [email protected]

Received 29 April 2004; Revised 7 January 2005

Blind source separation (BSS) of independent sources from their convolutive mixtures is a problem in many real-world multisensor applications. In this paper, we propose and implement an efficient FPGA hardware architecture for the realization of real-time BSS. The architecture can be implemented using a low-cost FPGA (field programmable gate array) and offers a good balance between hardware requirements (gate count and minimal clock speed) and separation performance. The FPGA design implements the modified Torkkola's BSS algorithm for audio signals, based on the ICA (independent component analysis) technique. Here, the separation is performed by implementing noncausal filters, instead of the typical causal filters, within the feedback network. This reduces the required length of the unmixing filters as well as provides better separation and faster convergence. A description of the hardware and a discussion of some issues regarding the practical hardware realization are presented. Results of various FPGA simulations, as well as real-time testing of the final hardware design in a real environment, are given.

Keywords and phrases: ICA, BSS, codesign, FPGA.

1. INTRODUCTION

Blind signal separation, or BSS, refers to
performing inverse channel estimation despite having no knowledge about the true channel (or mixing filter) [1, 2, 3, 4, 5]. The BSS technique has been found to be very useful in many real-world multisensor applications such as blind equalization, fetal ECG detection, and hearing aids. BSS methods based on the ICA technique have been found effective and are thus commonly used [6, 7]. A limitation of the ICA technique is the need for long unmixing filters in order to estimate the inverse channels [1]. Here, we propose the use of noncausal filters [6] to shorten the filter length. In addition, using noncausal filters in the feedback network allows good separation even in the case where the direct-channel filters do not have stable inverses. A variable step-size parameter for adaptation of the learning process is introduced here to provide fast and stable convergence.

(This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.)

The FPGA architecture allows the optimal parallelism needed to handle the high computation load of the BSS algorithm in real time. Being fully custom-programmable, FPGA
offers rapid hardware prototyping of DSP algorithms. Recent advances in IC processing technology and innovations in architecture have made the FPGA a suitable alternative to using powerful but expensive computing platforms.

In spite of its potential for real-world applications, there have been very few published papers on real-time hardware implementation of the BSS algorithm. Many of the works, such as [8], focus on VLSI implementation and do not provide a detailed set of specifications for the BSS implementation that offers a good balance between hardware requirements (gate count and minimal clock speed) and separation performance. Here, we propose an efficient hardware architecture that can be implemented using a low-cost FPGA and yet offers good blind source separation performance. An extensive set of experiments, a discussion of separation performance, and proposals for future improvement are presented.

The FPGA design process requires familiarity with the associated signal processing. Furthermore, the developed FPGA prototype needed to be verified, through both functional simulation and real-time testing, in order to fully understand the advantages and pitfalls of the architecture under investigation. Thus, an integrated system-level environment for software-hardware co-design and verification is needed. Here, we carried out the FPGA design of a real-time implementation of the ICA-based BSS using a new system-level design tool called System Generator from Xilinx.

The rest of the paper is organized as
follows. Section 2 provides an introduction to the BSS algorithm and the application of noncausal filters within the feedback network. Section 3 describes the hardware architecture of our FPGA design for the ICA-based BSS algorithm. Some critical issues regarding the implementation of the BSS algorithm using the limited hardware resources in the FPGA are discussed. Section 4 presents the system-level design of the FPGA, followed by detailed FPGA simulation results, the synthesis result, and the real-time experimentation of the final hardware using a real environment setup. The summary and the motivations for future improvement are given in Section 5.

2. BACKGROUND OF BSS ALGORITHM

2.1. Infomax or entropy maximization criterion

BSS is the main application of ICA, which aims at reducing the redundancy between source signals. Bell and Sejnowski [9] proposed an information-theoretic approach for BSS, which is referred to as the Infomax algorithm.

2.2. Separation of convolutive mixture

The Infomax algorithm proposed by Bell and Sejnowski works well only with instantaneous mixtures and was further extended by Torkkola for the convolutive mixture problem [10]. As shown in Figure 1, minimizing the mutual information between the outputs u1 and u2 can be achieved by maximizing the total entropy at the output.

This architecture can be simplified by forcing W11 and W22 to be mere scaling coefficients, to achieve the relationships shown below [6, 11]:

u_1(t) = x_1(t) + \sum_{k=0}^{L_{12}} w_{12}(k) u_2(t-k),
u_2(t) = x_2(t) + \sum_{k=0}^{L_{21}} w_{21}(k) u_1(t-k),   (1)

and the learning rule for the separation matrix:

\Delta w_{ij}(k) \propto (1 - 2 y_i) u_j(t-k).   (2)
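As a concrete illustration, the feedback relationships in (1) can be sketched in a few lines of Python. The sketch is ours, not the hardware implementation: the filter values used are illustrative rather than learned, and only the strictly causal cross-terms (k >= 1) are kept, which sidesteps the mutual dependence of u1(t) and u2(t) through the k = 0 taps.

```python
def feedback_separate(x1, x2, w12, w21):
    """Two-channel feedback network of (1): each output equals its own
    mixed input plus a cross-filtered version of the other output."""
    n = len(x1)
    u1, u2 = [0.0] * n, [0.0] * n
    for t in range(n):
        # Strictly causal cross-terms: only past output samples (k >= 1).
        u1[t] = x1[t] + sum(w12[k] * u2[t - k]
                            for k in range(1, min(len(w12), t + 1)))
        u2[t] = x2[t] + sum(w21[k] * u1[t - k]
                            for k in range(1, min(len(w21), t + 1)))
    return u1, u2

# With all-zero cross-filters the network passes the inputs through.
u1, u2 = feedback_separate([1.0, 0.5], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0])
```

When the cross-filters are nonzero, each output sample adds a filtered version of the other channel's past output, which is exactly the mechanism that the learning rule (2) adapts.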
2.3. Modified ICA-based BSS method using modified Torkkola's feedback network

Torkkola's algorithm works only when a stable inverse of the direct-channel filters exists. This is not always guaranteed in real-world systems. Considering the acoustical condition where the direct channel can be assumed to be a nonminimum-phase FIR filter, the impulse response of its stable inverse becomes a noncausal, infinite, double-sided converging sequence, that is, a noncausal IIR filter.

Figure 1: Torkkola's feedback network for BSS.

By truncating this noncausal IIR filter, a stable noncausal FIR filter (W12 or W21) can be realized [12]. It was also shown in [6] that the algorithm can easily be modified to use the noncausal unmixing FIR filters. The relationships between the signals are now changed to

u_1(t) = x_1(t+M) + \sum_{k=-M}^{M} w_{12}(k) u_2(t-k),   (3)

u_2(t) = x_2(t+M) + \sum_{k=-M}^{M} w_{21}(k) u_1(t-k),   (4)

where M is half of the filter length L, that is, L = 2M + 1, and the learning rule is

w_{ij}(t_1 - p_1 + M) = w_{ij}(t_0 - p_0 + M) + K(u_i(t_0)) u_j(p_0),   (5)

where

K(u_i(t_0)) = \mu (1 - 2 y_i(t_0)),   (6)

y_i(t_0) = \frac{1}{1 + e^{-u_i(t_0)}},
t_1 = t_0 + 1,
p_0 = t_0 - k  for k = -M, -M + 1, ..., M,
p_1 = t_1 - k  for k = -M, -M + 1, ..., M.   (7)

The term μ in (6) represents the variable learning step size, which is explained in more detail in Section 3.5.
3. ARCHITECTURE OF HARDWARE FOR BSS ALGORITHM

The top-level block diagram of our hardware architecture for the BSS algorithm based on Torkkola's network is shown in Figure 2. The subsystems in the figure are discussed in the following subsections, together with some critical issues regarding the practical realization of the algorithm while minimizing the hardware resources.

Figure 2: Top-level block diagram of the hardware architecture for the BSS algorithm based on Torkkola's network.

3.1. Practical implementation of the modified Torkkola's network for FPGA realization

In order to understand the effect of each filter parameter, such as the filter length (L) and the learning step size, on the separation performance, a number of simulation programs written in MATLAB were tested. A set of compromise specifications was then proposed for practical hardware realization [11].

As a result of our MATLAB simulations, we propose that, in order to reduce the FPGA resources needed, as well as to ensure real-time BSS separation given the limited maximum FPGA clock speed, the specifications shown below are to be used. Sections 3.2 and 3.6 explain the impact of some parameters on the hardware requirements in more detail.

(i) Filter length, L = 161 taps.
(ii) Buffer size for iterative convolution, N = 2500.
(iii) Maximum number of iterations, I = 50.
(iv) Approximation of the exponential learning step size using a linear piecewise approximation.

The linear piecewise approximation is used to avoid the complex circuitry needed to implement the exponential function (see Section 3.5 for more explanation of the implementation, and Section 4.2.1 for a comparison of simulation results using the exponential function and the linear piecewise function).

There are many papers discussing the effect of filter length on the separation performance [13]. We have chosen a smaller filter length considering the maximum operating speed of the FPGA, and considered only the case of the echo of a small room. (See Section 3.6 for the calculation of the required FPGA clock speed and Section 4.2.3 for
the FPGA simulation result.)

3.2. Three-buffer technique

In real-time hardware implementation, to achieve uninterrupted processing, the hardware must process the input and output as streams of continuous samples. However, this is in contrast with the batch processing required by the BSS algorithm [14]. To perform the separation, a block of buffered data has to be filtered iteratively. Here, we implement a buffering mechanism using three 2500-sample (N = 2500) buffers per input source. While one buffer is being filled with the input data, a second buffer is being filtered, and the third buffer is being streamed out.

Figure 3: Implementation of (8) for the modified Torkkola's feedback network.

A side effect of this three-buffer technique is that the system produces a processing delay equivalent to twice the time needed to fill up a buffer. For example, if the signal sampling frequency is 8000 Hz, the time to fill up one buffer is 2500/8000 = 0.31 second. The system then needs another 0.31 second of processing before the result is ready for output. The total delay is thus 0.31 + 0.31 = 0.62 s. This processing delay may be too long for many real-time applications. A suggestion on applying overlapped processing windows, together with a polyphase filter, in order to shorten this delay is given in Section 5.
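The rotation of the three buffers can be modeled as below. The class and its one-sample push interface are ours for illustration only; in the FPGA the buffers reside in external memory, and the `process` argument stands in for the iterative batch filtering.

```python
class TripleBuffer:
    """Three rotating N-sample buffers: one fills with input, one is
    processed as a batch, and one is streamed out, giving a fixed
    latency of two full buffers (2 * N / Fs seconds)."""

    def __init__(self, n):
        self.n = n
        self.fill = []                 # currently receiving input samples
        self.work = [0.0] * n          # currently being batch-processed
        self.out = [0.0] * n           # currently being streamed out

    def push(self, sample, process):
        """Accept one input sample and return one output sample."""
        self.fill.append(sample)
        y = self.out[len(self.fill) - 1]
        if len(self.fill) == self.n:   # buffer full: rotate the roles
            self.out = process(self.work)
            self.work = self.fill
            self.fill = []
        return y

# Toy run with N = 2 and an identity "process": input samples reappear
# at the output exactly 2 * N pushes later.
buf = TripleBuffer(2)
stream = [buf.push(s, lambda block: block) for s in [1, 2, 3, 4, 5, 6]]
```

With N = 2500 and Fs = 8000 Hz, the same rotation reproduces the 0.62 s latency computed for the hardware.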
3.3. Implementation of feedback network mechanism

According to the feedback network in (3), there is a need to address negative addresses for the values of w12(i) when i < 0. In practice, the equation is modified slightly to include only positive addresses:

u_1(t) = x_1(t+M) + \sum_{i=-M}^{M} w_{12}(i+M) u_2(t-i).   (8)

Equation (8) performs the same noncausal filtering on u2 as (3) without the need for negative addressing of w12. Equation (4) is modified accordingly.

The block diagram shown in Figure 3 depicts the hardware implementation of (8). Note that the FIR filtering of w12 is implemented with a multiply-accumulate (MAC) unit, which significantly reduces the number of multipliers and adders needed when compared to a direct parallel implementation. The tradeoff is that the FPGA has to operate at an oversampled frequency (see Section 3.6).
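The address shift in (8) is easy to see in a software sketch. The function below (our illustration, not the RTL) computes one output sample; each loop pass corresponds to one MAC clock cycle, and samples falling outside the buffer are simply skipped, an edge-handling assumption of this sketch.

```python
def noncausal_fir(x_ahead, u_other, t, w, M):
    """One output sample of (8).  w holds the L = 2M + 1 taps at the
    nonnegative addresses 0..2M, i.e., tap w12(i) is stored at address
    i + M, so no negative addressing is ever needed."""
    acc = 0.0
    for i in range(-M, M + 1):            # one MAC operation per cycle
        if 0 <= t - i < len(u_other):     # skip samples outside the buffer
            acc += w[i + M] * u_other[t - i]
    return x_ahead + acc                  # x_ahead plays the role of x1(t + M)

# Center tap only (stored at address M): output is x_ahead + u_other[t].
y = noncausal_fir(2.0, [5.0, 7.0, 9.0], 1, [0.0, 1.0, 0.0], 1)
```

Placing a weight at address 0 instead selects the anticipative sample u_other[t + M], which is what makes the filter noncausal.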
3.4. Mechanism for learning the filter coefficients

The mechanism for learning the filter coefficients was implemented according to (5). The implementation of the variable learning step size, μ, is explained in the next subsection.

3.5. Implementation of variable learning step size

In order to speed up the learning of the filter coefficients in (5), we implement a simplified variable step-size technique.

In our application, the variable learning step size in (6), that is, the learning step size μ, may be implemented using (9) below, where n is the iteration level, \mu_0 is the initial step size, and I is the maximum number of iterations, that is, 50:

\mu = \exp\left(u_0 \frac{n}{I}\right),   (9)

where

u_0 = \log\left(\mu_0^{1/I}\right).   (10)

The exponential term is difficult to implement in digital
hardware. A lookup table could be used, but it would require a large block of ROM (read-only memory). An alternative to a lookup ROM is the CORDIC (COordinate Rotation DIgital Computer) algorithm [15]. However, CORDIC circuitry imposes a very long latency (if not heavily pipelined), which would in turn require an even higher FPGA clock speed. Instead, we used a linearly decreasing variable step size:

\mu = 0.0006 - 0.000012 n.   (11)

We carried out MATLAB simulations to compare the separation performance using the exponential equation and the linear piecewise equation. It was found that, using the specifications given in Section 3.1, there is no significant degradation in separation performance (see also the simulation results in Section 4.2.1). A multipoint piecewise approximation will be implemented in a future improvement.
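Both schedules are easy to model in software. `mu_linear` is exactly (11); `mu_exp` is an illustrative exponential decay standing in for the schedule of (9)-(10), with the initial value and final floor chosen by us for the sketch.

```python
import math

I = 50  # maximum number of iterations, per the specification in Section 3.1

def mu_linear(n):
    """Linearly decreasing step size of (11): 0.0006 at n = 0,
    reaching zero at n = I = 50."""
    return 0.0006 - 0.000012 * n

def mu_exp(n, mu0=0.0006, floor=1e-6):
    """Illustrative exponential decay from mu0 down to `floor` over I
    iterations (the constants are assumptions for this sketch)."""
    return mu0 * math.exp((n / I) * math.log(floor / mu0))
```

Either schedule drives the update term in (5) toward zero as the iteration count approaches I, which is the convergence behavior shown in Figure 11.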
3.6. Calculation of the required FPGA clock speed

As mentioned earlier, in order to save hardware resources, the multiply-accumulate (MAC) technique is used. This implies that the MAC operation has to be done at a much higher rate than the input sampling frequency. This MAC operating frequency is also the frequency of the FPGA clock. Thus, a detailed analysis of the FPGA system clock needed for real-time blind source separation is required.

The FPGA clock frequency can be calculated as shown in (12) (see also [11]), where Fs is the sampling frequency of the input signals, L is the tap length of the FIR filter, and I is the number of iterations:

FPGA clock frequency = L \times I \times F_s.   (12)

In our FPGA design, the filter tap length L = 161, the number of iterations I = 50, and the input sampling frequency Fs = 8000 Hz; the FPGA clock frequency is thus

161 \times 50 \times 8000 = 64.4 MHz.   (13)

This means that the final FPGA design must operate properly at 64.4 MHz. In practice, the maximum operating speed of a hardware circuit can be optimized by analyzing the critical path in the design. A more detailed analysis of the critical path in the FPGA design is given in Section 4.1.2.

Note that the frequency 64.4 MHz also represents the number of multiplications per second needed to perform the blind source separation (per channel). This represents a very large computation load if a general-purpose processor, or DSP (digital signal processor), is to be used for real-time applications. Using a fully hardware implementation, the performance gain is easily obtained by bypassing the fetch-decode-execute overhead, as well as by exploiting the inherent parallelism. FPGAs allow the above-mentioned advantages plus reprogrammability and cost effectiveness.

Some discussion on applying a polyphase filter to increase the filter length while maintaining the FPGA clock speed is given in Section 5.
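The arithmetic of (12)-(13) can be checked with a one-line helper:

```python
def fpga_clock_hz(filter_taps, iterations, fs_hz):
    """Required MAC clock of (12): one multiply-accumulate per tap,
    repeated for every iteration, for every input sample."""
    return filter_taps * iterations * fs_hz

# Design point of Section 3.1: L = 161 taps, I = 50, Fs = 8 kHz.
clk = fpga_clock_hz(161, 50, 8000)   # 64.4 MHz, matching (13)
```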
4. SYSTEM-LEVEL DESIGN AND TESTING OF THE FPGA FOR BSS ALGORITHM

4.1. System-level design of FPGA for BSS algorithm

System Generator provides bit-true and cycle-true FPGA blocksets for functional simulation under the MATLAB Simulink environment, thus offering a very convenient and realistic system-level FPGA co-design and cosimulation flow. Some preliminary theoretical results from MATLAB m-files can directly be used as references for verifying the FPGA results (refer to [16] and http://www.xilinx.com/systemgenerator for a more detailed description).

Note that FPGA design using System Generator differs from the more typical approach using an HDL (hardware description language) or schematics. Using System Generator, the FPGA is designed by means of Simulink models. Thus, the FPGA functional simulation can be carried out easily, right inside the Simulink environment. After successful simulation, synthesizable VHDL (VHSIC HDL, where VHSIC is very-high-speed integrated circuit) code is automatically generated from the models. As a result, one can define an abstract representation of a system-level design and easily transform it into a gate-level representation in the FPGA.

The top-level design of the ICA-based BSS algorithm based on the described architecture is shown in Figure 4. As can be seen from the figure, the FPGA reads in the data from each wave file and outputs the separated signals as streams of 16-bit data samples. The whole design is made up of many subsystems. A more detailed circuit design of the subsystem which implements the computation for updating the filter coefficients is shown in Figure 5.

The process of FPGA design is thus simplified. The Simulink design environment also enhances system-level software-hardware co-design and verification. The upper right-hand side of Figure 4 shows the use of a Simulink scope to display the waveform generated by the FPGA during simulation. This technique can be used to display the waveform at any point in the FPGA circuitry, which offers a very practical way to implement software-hardware co-design and verification. Note also the use of the "To wave file" blockset to transfer the simulation outputs from the FPGA back to a wave file for further analysis.

Figure 4: Top-level design of BSS using System Generator under MATLAB Simulink.

Figure 5: Detailed circuit for updating the filter coefficients.
4.1.1. Using fixed-point arithmetic in FPGA implementation

When simulating a DSP algorithm using programming languages such as C and MATLAB, a double-precision floating-point numeric system is commonly used. In hardware implementation, fixed-point numerics are more practical. Although several groups have implemented floating-point adders and multipliers using FPGA devices, very few practical systems have been reported [17]. The main disadvantages of using floating point in FPGA hardware are higher resource requirements, a higher clock frequency, and a longer design time than an equivalent fixed-point system.

For fixed filters, the analysis of the effect of fixed-point arithmetic on filter performance has been well presented in other publications [18, 19]. An analysis of the word-length effect on the adaptation of the LMS algorithm is given in [18].

We use only fixed-point arithmetic in our design. As mentioned earlier, we use 16-bit fixed point at the inputs and outputs of our FPGA design. In practice, depending on the application, the data width should be selected taking into account the desired system performance and the FPGA gate count. In terms of separation performance, it can be shown that the word length of the accumulator in the MAC unit is most critical. Using System Generator, it is easy to study the effect of round-off error on the overall system performance, and that helps in selecting the optimal bit sizes in the final FPGA design.
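The word-length study described above can be prototyped in plain software with a saturating quantizer. The model below is a generic Q1.15 (16-bit) format of our choosing, not the exact System Generator datapath:

```python
def to_fixed(x, frac_bits=15, word_bits=16):
    """Quantize to two's-complement fixed point (word_bits total,
    frac_bits fractional), saturating at the representable range."""
    scale = 1 << frac_bits
    lo = -(1 << (word_bits - 1))
    hi = (1 << (word_bits - 1)) - 1
    q = int(round(x * scale))
    return max(lo, min(hi, q))            # saturation instead of wraparound

def from_fixed(q, frac_bits=15):
    """Map the integer code back to its real value."""
    return q / (1 << frac_bits)

# Round-off error of a Q1.15 code is bounded by half an LSB (2**-16).
err = abs(from_fixed(to_fixed(0.333)) - 0.333)
```

Running the whole algorithm through such a quantizer at different word lengths is one way to locate the accumulator width at which separation quality starts to degrade.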
4.1.2. Analysis of critical path in FPGA implementation of BSS algorithm

It was shown in Section 3.6 that, in order for the FPGA to perform the blind source separation in real time, the required clock frequency is 64.4 MHz. We will now analyze the critical paths in the ICA-based BSS design in order to verify that this clock frequency can be met. Referring to Figures 1 and 2, one can see that the maximum operating speed of the FPGA is determined by the critical path in the multiply-accumulate blocks. The propagation delay in this critical path is the sum of the combinational delays of one multiplier and one adder, as shown in the simplified block diagram in Figure 6.

Figure 6: Simplified block diagram of the MAC operation.

Based on the Xilinx data sheet for the Virtex-E FPGA, the delays of the dedicated pipelined multiplier and adder units are

(i) delay of the 16-bit adder: 4.3 nanoseconds;
(ii) delay of the pipelined 16-bit multiplier: 6.3 nanoseconds.

Thus, we can approximate the total delay in the critical path to be 4.3 + 6.3 = 10.6 nanoseconds, which converts to a 94 MHz maximum FPGA clock frequency. This is much higher than the required 64.4 MHz. Note that this is only an approximation, and we mention it here for the purpose of discussion of our BSS architecture. The accurate maximum clock speed can easily be extracted from the FPGA synthesis result, which is given in Section 4.3. Note also that the available dedicated multipliers depend very much on the FPGA family and the size of the device used.
4.2. Simulation results of the FPGA design for ICA-based BSS

We carried out experiments using different wave files, all sampled at 8000 Hz, and with various types of mixing conditions. The output separation performance was measured.

4.2.1. FPGA simulation using two female voices

In this simulation, we use wave files of two female voices. Each file is approximately one second long, with a sampling frequency of 8000 samples per second and 16 bits per sample. The files were preprocessed to remove the dc component and amplitude-normalized. The two original input voices are shown in Figures 7a and 7b, respectively.

To simulate the mixing process, the two inputs were processed using the instantaneous mixing program instamix.m, downloaded from the website http://www.ele.tue.nl/ica99 given in [20]. The mixing matrix used is [0.6, 1.0; 1.0, 0.6]. The two mixed voices are shown in Figures 8a and 8b.

Figure 7: Two original female voices used for FPGA simulations: (a) first female voice and (b) second female voice.

The separation results from the FPGA simulation are shown in Figures 9 and 10. The separated outputs are shown in Figures 9a and 9b. By comparing the separated output voice in Figure 9a to the corresponding mixed voice in Figure 8a, the separation is clearly visible. By listening to the output, the voice is much more intelligible. A similar conclusion can be drawn for the other voice (the measurements showing the separation performance are given in the next subsection).

The error of the separated first female voice, measured by subtracting the output in Figure 9a from the original input in Figure 7a, is shown in Figure 10a. Figure 10b shows the error of the second voice.

Figures 11a and 11b show the FPGA simulation results using the linear piecewise function, compared to the exponential function shown in (9). Figure 11a shows the learning step sizes, using the exponential function and the linear piecewise function, plotted against the number of iterations. Figure 11b compares the averaged changes of the filter coefficients w12 and w21, over all 161 taps, plotted against the number of iterations. It can be seen that, for the maximum number of iterations used (= 50), both the learning step size using the exponential function and the learning step size using the linear piecewise function converge to zero properly.
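The instantaneous mixing used above is simple to reproduce. The helper below mirrors what instamix.m does conceptually, applying a fixed 2 x 2 matrix sample by sample; the function name and interface are ours, not the original script's:

```python
def insta_mix(s1, s2, a11, a12, a21, a22):
    """Memoryless mixing x = A s with A = [[a11, a12], [a21, a22]],
    applied independently at every sample instant."""
    x1 = [a11 * p + a12 * q for p, q in zip(s1, s2)]
    x2 = [a21 * p + a22 * q for p, q in zip(s1, s2)]
    return x1, x2

# The mixing matrix of the experiments: [0.6, 1.0; 1.0, 0.6].
x1, x2 = insta_mix([1.0, 0.0], [0.0, 1.0], 0.6, 1.0, 1.0, 0.6)
```

Feeding unit impulses on each source, as above, simply reads the matrix back out of the mixtures, a quick sanity check of the mixing step.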
4.2.2. FPGA simulation using one female voice mixed with Gaussian noise

A second experiment was carried out to measure the separation performance of the FPGA in a noisy environment [21]. The first female voice used in the previous experiment was mixed with white Gaussian noise (see Figure 12), and the signal-to-noise ratios (SNRs) before and after the BSS operation by the designed FPGA were measured. The same mixing matrix [0.6, 1.0; 1.0, 0.6] was used. By adjusting the variance of the white Gaussian noise source, \sigma^2, the input SNR varies in the range -9 dB to 10 dB.

Figure 8: Two mixed female voices used for FPGA simulation. The program instamix.m downloaded from http://www.ele.tue.nl/ica99 is used; (a) mixed first signal and (b) mixed second input.

The input signal-to-noise ratio, SNR_i, is defined in this experiment as

SNR_i (dB) = 10 \log \frac{\sum_{i=1}^{T} x_1^2(i)}{\sum_{i=1}^{T} x_2^2(i)}.   (14)

Here, T is the total number of samples in the time period of measurement, and x1(i) and x2(i) are the original female voice and the white Gaussian noise, respectively.

Similarly, the output signal-to-noise ratio, SNR_o, is defined in (15). e1(i) represents the overall noise left in the first separated output u1 and is defined as e1(i) = u1(i) - x1(i), as shown in Figure 12:

SNR_o (dB) = 10 \log \frac{\sum_{i=1}^{T} x_1^2(i)}{\sum_{i=1}^{T} e_1^2(i)}.   (15)

Figure 9: Two separated voices from FPGA simulation; (a) separated first voice and (b) separated second voice.

The improvement of SNR after BSS processing, SNR_imp (dB), is defined as

SNR_imp (dB) = 10 \log \frac{\sum_{i=1}^{T} x_2^2(i)}{\sum_{i=1}^{T} e_1^2(i)}.   (16)

The results in Figure 13 show the averaged output SNR and the average improvement of SNR plotted against the input SNR. It can be seen that, as the input SNR varies, the output SNR is almost constant at approximately 35 dB. The maximum achievable output SNR is limited by the width of the datapath implemented inside the FPGA. For this reason, the amount of improvement of the SNR decreases with increasing input SNR.
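The three measures (14)-(16) share one form, a 10 log10 ratio of summed squared samples, so a single helper covers all of them. The toy signals below are ours, for illustration only:

```python
import math

def snr_db(signal, noise):
    """10 log10 power ratio shared by (14), (15), and (16)."""
    p_signal = sum(v * v for v in signal)
    p_noise = sum(v * v for v in noise)
    return 10.0 * math.log10(p_signal / p_noise)

# SNRi of (14) compares the voice x1 against the added noise x2;
# SNRo of (15) would compare x1 against the residual e1 = u1 - x1.
x1 = [1.0, -1.0, 1.0, -1.0]      # toy "voice"
x2 = [0.1, 0.1, -0.1, -0.1]      # toy "noise", 20 dB below the voice
snr_i = snr_db(x1, x2)
```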
4.2.3. FPGA simulation using two female voices mixed using a simulated room environment

The next experiment was carried out to measure the separation performance of the designed FPGA in a realistic room environment, using the same two female voices. The room environment was simulated using the program simroommix.m, downloaded from the same website mentioned earlier. The coordinates (in meters) of the first and second signal sources are (2, 2, 2) and (3, 4, 2), respectively. The locations of the first and second microphones are (3.5, 2, 2) and (4, 3, 2), respectively. The room size is 5 x 5 x 5. The simulated room arrangement is shown in Figure 14.

Figure 10: Error of the two separated voices; (a) error of the separated first voice and (b) error of the separated second voice.

The measurement of separation performance is based on (17) (see [20]). Here, S_j is the separation performance of the jth separated output, where u_{j,x_j} is the jth output of the cascaded mixing/unmixing system when only input x_j is active; E[u] represents the expected value of u:

S_j = 10 \log \frac{E\left[u_{j,x_j}^2\right]}{E\left[\left(\sum_{i \neq j} u_{j,x_i}\right)^2\right]}.   (17)

The program bsep.m, which performs the computation shown in (17), was downloaded from the website and used to test our FPGA. It was found that before the BSS,

S_1 = 17.29 dB,  S_2 = 10.63 dB,   (18)

and after the BSS operation,

S_1 = 20.29 dB,  S_2 = 16.53 dB.   (19)

Thus, the BSS hardware improves the separation by 3 dB and 5.9 dB for channels 1 and 2, respectively. These figures may not appear very high. However, by listening to the separated output signals, the improvement was obvious, and further improvement can be made by increasing the filter lengths.

Figure 11: (a) Learning step sizes using the exponential and linear piecewise functions and (b) averaged changes of the filter coefficients w12 and w21 using the two functions.

Figure 12: FPGA simulation to test BSS using one female voice mixed with white Gaussian noise with adjustable variance \sigma^2.

Note also that the measurements of separation performance and the improvement after the BSS depend very much on the room arrangement.
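For reference, (17) can be estimated from finite data by replacing the expectations with sample means, as sketched below. bsep.m itself is not reproduced here; this helper is our reading of the formula:

```python
import math

def separation_db(u_own, u_leaks):
    """S_j of (17): mean power of output j when only its own source is
    active, over the mean power of the summed leakage produced when
    only the other sources are active."""
    own = sum(v * v for v in u_own) / len(u_own)
    summed = [sum(vs) for vs in zip(*u_leaks)]   # sum over sources i != j
    leak = sum(v * v for v in summed) / len(summed)
    return 10.0 * math.log10(own / leak)

# One interfering source leaking at 10% amplitude: 20 dB of separation.
s = separation_db([2.0, 2.0], [[0.2, 0.2]])
```

In practice each u_{j,x_i} is obtained by running the cascaded mixing/unmixing system with all sources but x_i silenced.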
4.2.4. FPGA simulation using convolutive mixtures of voices

In order to test the FPGA using convolutive mixtures recorded under a real cocktail-party effect, two sample voices were downloaded from a website (T-W Lee's website at http://inc2.ucsd.edu/tewon/) and used in the FPGA simulation. The two wave files used were from two speakers recorded speaking simultaneously. Speaker 1 counts the digits from one to ten in English, and speaker 2 counts in Spanish. The recording was done in a normal office room. The distance between the speakers and the microphones was about 60 cm, in a square ordering. A commercial program was used to resample the original sampling rate of 16 kHz down to 8000 Hz. The separation results from the developed FPGA can be found at our website (http://www.ntu.edu.sg/home/ecchara/, click "project" on the left frame). Note that we present a real-time experiment using these convolutive mixtures in Section 4.4. We have also posted the separation results of the same convolutive mixtures using a MATLAB program at the same website for comparison.

Figure 13: Results of SNR measurements using one female voice mixed with white Gaussian noise. The figure shows the output SNRs and the improvement of SNRs against the input SNRs.

Figure 14: Arrangement of female voice sources and microphones in the simulated room environment using the program simroommix.m.

Because the original wave files (before the convolutive mixing) were not available, we could not perform the measurement of the separation performance. However, by listening to the separated outputs, we noticed approximately the same level of separation as in our earlier experiments.

Table 1: Detailed gate requirement of the BSS FPGA design.

Number of slices for logic: 550
Number of slices for flip-flops: 405
Number of 4-input LUTs: 3002
  - used as LUTs: 2030
  - used as route-thru: 450
  - used as shift registers: 522
Total equivalent gate count for the design: 100 213

Table 2: Maximum combinational path delay and operating frequency of the FPGA design for BSS.

Maximum path delay from/to any node: 15.8 ns
Maximum operating frequency: 71.2 MHz
4.3. FPGA synthesis result

After the successful simulation, the VHDL code was automatically generated from the design using System Generator. The VHDL code was then synthesized using Xilinx ISE 5.2i and targeted for a Virtex-E device of 600 000 gates. The optimization setting for the ISE was maximum clock speed. Table 1 details the gate requirement of the FPGA design. The total gate requirement reported by the ISE is approximately 100 kgates. The price of an FPGA device in this range of gate size is very low. Note that all of the buffers are implemented using external memory.

Table 2 shows the reported maximum path delay and the highest FPGA clock frequency.
FPGAdesignThe real-time testing of the FPGA was done using a
proto-type board equipped with a 600 000-gate Virtex-E FPGA
de-vice. The systemsetup is shown in Figure 15. The wave les ofthe
convolutive mixtures used in Section 4.2.4 were encoded into the left and right channels of a stereo MP3 file, which was then played back repeatedly using a portable MP3 player. The prototype board is equipped with 20-bit A/D and D/A converters, and the sampling frequency was set to 8000 samples per second. Only the highest 16 bits of each sampled signal were used by the FPGA.

The FPGA streamed out the separated outputs, which were then converted into analog signals, amplified, and played back on the speakers. By listening to the playback sound, we concluded that we had achieved the same level of separation as found in the earlier simulations.

5. SUMMARY

In
this paper, the hardware implementation of the modified BSS algorithm was realized using an FPGA. A compromise specification for the BSS algorithm was proposed, taking into consideration both the separation performance and the hardware resource requirements. The design implements the modified Torkkola BSS algorithm using noncausal unmixing filters, which reduce the required filter length and provide faster convergence. There is no whitening of the separated sources; the only preprocessing is a scaling applied to the mixed inputs, proportional to their variances.

Figure 15: System setup for real-time testing of the blind source separation.
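To make the unmixing structure concrete, the sketch below (our own illustration, not the authors' implementation) shows the kind of feedback separation network described above: each output subtracts a cross-filtered version of the other output from its sensor signal. For clarity the cross-filters here are fixed, causal, and set equal to the known mixing filters, so the network inverts the mixture exactly; the actual algorithm adapts the taps, and the paper's noncausal variant additionally delays the direct path so the taps can reach "future" samples. All signal lengths and tap values below are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
s1, s2 = rng.standard_normal(n), rng.standard_normal(n)

# Cross-coupling mixing filters with at least one sample of delay
# (arbitrary illustrative tap values).
a12 = np.array([0.0, 0.5, 0.2])    # leakage of s2 into sensor 1
a21 = np.array([0.0, 0.3, -0.1])   # leakage of s1 into sensor 2

x1 = s1 + np.convolve(a12, s2)[:n]  # convolutive 2x2 mixture
x2 = s2 + np.convolve(a21, s1)[:n]

def fb(h, u, t):
    # feedback FIR: sum_{k>=1} h[k] * u[t-k], zero outside the signal
    return sum(h[k] * u[t - k] for k in range(1, len(h)) if t - k >= 0)

u1, u2 = np.zeros(n), np.zeros(n)
for t in range(n):
    u1[t] = x1[t] - fb(a12, u2, t)   # subtract estimated cross-talk
    u2[t] = x2[t] - fb(a21, u1, t)

# u1 and u2 recover s1 and s2 up to numerical precision.
```

Because the cross-filters have at least one sample of delay, each output sample depends only on past output samples, so the feedback loop can be computed sample by sample, which is what makes the structure attractive for a streaming MAC-based hardware realization.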
The FPGA design was carried out using a system-level approach, and the final hardware achieves real-time operation with minimal FPGA resources. Many FPGA functional simulations were carried out to verify and measure the separation performance of the proposed simplified architecture. The test input signals included additive mixtures, convolutive mixtures, and a simulated room environment. The results show that the method is robust against noise; that is, it produces only a small variation in the output SNR over a large variation in the input SNR. Note that we have considered the sensors to be almost perfect, that is, high quality with negligible sensor error.

A relatively short filter length of 161 taps was attempted first because of the limitation on the maximum clock speed of the FPGA. The FPGA can perform the separation successfully when the delay is small, that is, less than 161/8000 ≈ 20 ms. In environments where the delay is longer, a much longer filter is needed, and this can be realized. In that case, to keep the FPGA clock frequency unchanged, we propose using multiple MAC engines together with a polyphase decomposition. For example, if a tap length of 161 × 16 = 2576 is needed, 16 MAC engines are implemented with a 16-band polyphase decomposition. Since one MAC engine consists of only one multiplier and one accumulator, the additional MAC engines add few extra gates.
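The multiple-MAC claim rests on the standard polyphase identity H(z) = sum_{k=0}^{M-1} z^{-k} H_k(z^M): an M-band split turns one long FIR into M branch filters of length N/M whose outputs are summed, so each branch can be served by its own MAC engine while the per-engine tap count and clock frequency stay unchanged. A toy-sized sketch of this identity (our illustration; the dimensions are deliberately small, not the 2576-tap case):

```python
import numpy as np

def fir_polyphase(x, h, M):
    # M-band polyphase split: branch k uses the taps h[k::M], i.e. taps
    # spaced M samples apart with an overall delay of k samples. Each
    # branch maps onto one MAC engine handling only len(h)/M taps.
    y = np.zeros(len(x))
    for k in range(M):                   # one iteration per MAC engine
        for m, c in enumerate(h[k::M]):
            d = k + m * M                # total delay of this tap
            y[d:] += c * x[:len(x) - d]
    return y

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
h = rng.standard_normal(32)              # stand-in for a long unmixing filter

# The summed branch outputs equal the direct convolution exactly.
direct = np.convolve(x, h)[:len(x)]
assert np.allclose(fir_polyphase(x, h, 4), direct)
assert np.allclose(fir_polyphase(x, h, 16), direct)
```

In the 16-band case discussed above, each engine would handle 2576/16 = 161 taps per output sample, the same per-engine workload as the original 161-tap design.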
The application of block-mode separation has the side effect of a long processing delay. It was shown in Section 3.2 that our current design incurs a delay of 0.62 s. A practical solution to this long delay is to apply overlapped processing windows. For example, to reduce the processing delay to 0.62/32 ≈ 20 ms, the 2500-sample windows must be overlapped by 31/32 ≈ 97%; that is, the BSS must be performed for every 78 new input samples.

In this paper, we considered the case of 2 sources (or 2 voices) and 2 sensors (or 2 microphones). In the future, we will carry out further improvements in our FPGA architecture to tackle situations in which the number of sources is higher, or lower, than the number of sensors. Using our existing design, we have done some FPGA simulations using 3 voices with 2 microphones. The separation results are good (please visit http://www.ntu.edu.sg/home/ecchara/ for a listening test). In this situation, two of the three voices are successfully separated from each other, while the third, suppressed voice is still present at both outputs. We will improve our current FPGA design to handle this situation. Since redesigning the FPGA takes considerable time, we will carry out these improvements in our
future works.

REFERENCES

[1] T.-W. Lee, Independent Component Analysis: Theory and Applications, Kluwer Academic, Hingham, Mass, USA, 1998.
[2] R. M. Gray, Entropy and Information Theory, Springer, New York, NY, USA, 1990.
[3] P. Comon, "Independent component analysis, a new concept?" Signal Processing, vol. 36, no. 3, pp. 287–314, 1994.
[4] K. Torkkola, "Blind source separation for audio signals - are we there yet?" in Proc. IEEE International Workshop on Independent Component Analysis and Blind Signal Separation (ICA '99), pp. 239–244, Aussois, France, January 1999.
[5] T.-W. Lee, A. J. Bell, and R. Orglmeister, "Blind source separation of real world signals," in Proc. IEEE International Conference on Neural Networks (ICNN '97), vol. 4, pp. 2129–2134, Houston, Tex, USA, June 1997.
[6] F. Sattar, M. Y. Siyal, L. C. Wee, and L. C. Yen, "Blind source separation of audio signals using improved ICA method," in Proc. 11th IEEE Signal Processing Workshop on Statistical Signal Processing (SSP '01), pp. 452–455, Singapore, August 2001.
[7] H. H. Szu, I. Kopriva, and A. Persin, "Independent component analysis approach to resolve the multi-source limitation of the nutating rising-sun reticle based optical trackers," Optics Communications, vol. 176, no. 1–3, pp. 77–89, 2000.
[8] C. Abdullah, S. Milutin, and C. Gert, "Mixed-signal real-time adaptive blind source separation," in Proc. International Symposium on Circuits and Systems (ISCAS '04), pp. 760–763, Vancouver, Canada, May 2004.
[9] A. J. Bell and T. J. Sejnowski, "An information-maximisation approach to blind separation and blind deconvolution," Neural Computation, vol. 7, no. 6, pp. 1129–1159, 1995.
[10] K. Torkkola, "Blind separation of convolved sources based on information maximization," in Proc. IEEE Signal Processing Society Workshop on Neural Networks for Signal Processing VI, pp. 423–432, Kyoto, Japan, September 1996.
[11] F. Sattar and C. Charayaphan, "Low-cost design and implementation of an ICA-based blind source separation algorithm," in Proc. 15th Annual IEEE International ASIC/SOC Conference (ASIC/SOC '02), pp. 15–19, Rochester, NY, USA, September 2002.
[12] B. Yin, P. C. W. Sommen, and P. He, "Exploiting acoustic similarity of propagating paths for audio signal separation," EURASIP Journal on Applied Signal Processing, vol. 2003, no. 11, pp. 1091–1109, 2003.
[13] H. Sawada, S. Araki, R. Mukai, and S. Makino, "Blind source separation with different sensor spacing and filter length for each frequency range," in Proc. 12th IEEE Workshop on Neural Networks for Signal Processing (NNSP '02), pp. 465–474, Martigny, Valais, Switzerland, September 2002.
[14] A. Cichocki and A. K. Barros, "Robust batch algorithm for sequential blind extraction of noisy biomedical signals," in Proc. 5th International Symposium on Signal Processing and Its Applications (ISSPA '99), vol. 1, pp. 363–366, Brisbane, Queensland, Australia, August 1999.
[15] R. Andraka, "A survey of CORDIC algorithms for FPGAs," in Proc. ACM/SIGDA 6th International Symposium on Field Programmable Gate Arrays (FPGA '98), pp. 191–200, Monterey, Calif, USA, February 1998.
[16] Xilinx, Xilinx System Generator v2.3 for the MathWorks Simulink: Quick Start Guide, February 2002.
[17] J. Allan and W. Luk, "Parameterised floating-point arithmetic on FPGAs," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '01), vol. 2, pp. 897–900, Salt Lake City, Utah, USA, 2001.
[18] J. R. Treichler, C. R. Johnson Jr., and M. G. Larimore, Theory and Design of Adaptive Filters, Prentice-Hall, Upper Saddle River, NJ, USA, 2001.
[19] S. Haykin, Adaptive Filter Theory, Prentice-Hall, Upper Saddle River, NJ, USA, 1996.
[20] D. Schobben, K. Torkkola, and P. Smaragdis, "Evaluation of blind signal separation methods," in Proc. IEEE International Workshop on Independent Component Analysis and Blind Signal Separation (ICA '99), pp. 261–266, Aussois, France, January 1999.
[21] A. K. Barros and N. Ohnishi, "Heart instantaneous frequency (HIF): an alternative approach to extract heart rate variability," IEEE Trans. Biomed. Eng., vol. 48, no. 8, pp. 850–855, 2001.

Charayaphan Charoensak
received the M.A.S. and Ph.D. degrees in electrical engineering from the Technical University of Nova Scotia, Halifax, NS, Canada, in 1989 and 1993, respectively. He is currently an Assistant Professor with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, in the Division of Information Engineering. His research interests include applications of FPGAs in digital signal processing, sigma-delta signal processing, pattern recognition, face recognition, adaptive signals, speech/audio segmentation, watermarking, and image processing.

Farook Sattar is an Assistant Professor at the Information Engineering Division, Nanyang Technological University, Singapore. He received his Technical Licentiate and Ph.D. degrees in signal and image processing from Lund University, Sweden, and his M.Eng. degree from Bangladesh University of Technology, Bangladesh. His current research interests include blind signal separation, watermarking, speech/audio segmentation, speech enhancement, 3D audio, image feature extraction, image enhancement, wavelets, filter banks, and adaptive beamforming. He has training in both signal and image processing, and has been involved in a number of signal and image processing related projects sponsored by the Swedish National Science and Technology Board (NUTEK) and the Singapore Academic Research Funding (AcRF) Scheme. His research has been published in a number of leading journals and conferences.
APPLIED SIGNAL PROCESSINGSpecial Issue onImage PerceptionCall for
PapersPerception is a complex process that involves brain
activitiesat dierent levels. The availability of models for the
represen-tation and interpretation of the sensory information
opensup new research avenues that cut across neuroscience,
imag-ing, information engineering, and modern robotics.The goal of
the multidisciplinary eld of perceptual signalprocessing is to
identify the features of the stimuli that deter-mine their
perception, namely a single unied awarenessderived from sensory
processes while a stimulus is present,and to derive associated
computational models that can begeneralized.In the case of vision,
the stimuli go through a complexanalysis chain along the so-called
visual pathway, start-ing with the encoding by the photoreceptors
in the retina(low-level processing) and ending with cognitive
mecha-nisms (high-level processes) that depend on the task
beingperformed.Accordingly, low-level models are concerned with
imagerepresentation and aim at emulating the way the visualstimulus
is encoded by the early stages of the visual system aswell as
capturing the varying sensitivity to the features of theinput
stimuli; high-level models are related to image inter-pretation and
allow to predict the performance of a humanobserver in a given
predened task.A global model, accounting for both such bottom-up
andtop-down approaches, would enable the automatic interpre-tation
of the visual stimuli based on both their low-level fea-tures and
their semantic content.Among the main image processing elds that
would takeadvantage of such models are feature extraction,
content-based image description and retrieval, model-based
coding,and the emergent domain of medical image perception.The goal
of this special issue is to provide original contri-butions in the
eld of image perception and modeling.Topics of interest include
(but are not limited to): Perceptually plausible mathematical bases
for therepresentation of visual information (static and dy-namic)
Modeling nonlinear processes (masking, facilitation)and their
exploitation in the imaging eld (compres-sion, enhancement, and
restoration) Beyond early vision: investigating the pertinence
andpotential of cognitive models (feature extraction, im-age
quality) Stochastic properties of complex natural scenes(static,
dynamic, colored) and their relationships withperception
Perception-based models for natural (static and dy-namic) textures.
Theoretical formulation and psy-chophysical validation Applications
in the eld of biomedical imaging (med-ical image perception)Authors
should follow the EURASIP JASP manuscriptformat described at
http://www.hindawi.com/journals/asp/.Prospective authors should
submit an electronic copy of theircomplete manuscripts through the
EURASIP JASP man-uscript tracking system at
http://www.mstracking.com/asp/,according to the following
timetable:Manuscript Due December 1, 2005Acceptance Notication
April 1, 2006Final Manuscript Due July 1, 2006Publication Date 3rd
Quarter, 2006GUEST EDITORS:Gloria Menegaz, Department of
Information Engineering,University of Siena, Siena, Italy;
[email protected] Yang, Department of
Computing,Engineering Imperial College London, London,
UK;[email protected] Concetta Morrone, Universit Vita-Salute
SanRaaele, Milano, Italy; [email protected] Winkler, Genista
Corporation, Montreux,Switzerland; [email protected]
Portilla, Department of Computer Science andArticial Intelligence
(DECSAI), Universidad de Granada,Granada, Spain;
[email protected] Publishing
Corporationhttp://www.hindawi.comEURASIP JOURNAL ON APPLIED SIGNAL
PROCESSINGSpecial Issue onMusic Information Retrieval Based on
Signal ProcessingCall for PapersThe main focus of this special
issue is on the application ofdigital signal processing techniques
for music informationretrieval (MIR). MIR is an emerging and
exciting area of re-search that seeks to solve a wide variety of
problems dealingwith preserving, analyzing, indexing, searching,
and access-ing large collections of digitized music. There are also
stronginterests in this eld of research from music libraries and
therecording industry as they move towards digital music
distri-bution. The demands from the general public for easy
accessto these music libraries challenge researchers to create
toolsand algorithms that are robust, small, and fast.Music is
represented in either encoded audio waveforms(CD audio, MP3, etc.)
or symbolic forms (musical score,MIDI, etc.). Audio
representations, in particular, require ro-bust signal processing
techniques for many applications ofMIR since meaningful
descriptions need to be extractedfrom audio signals in which sounds
from multiple instru-ments and vocals are often mixed together.
Researchers inMIR are therefore developing a wide range of new
meth-ods based on statistical pattern recognition, classication,and
machine learning techniques such as the Hidden MarkovModel (HMM),
maximum likelihood estimation, and Bayesestimation as well as
digital signal processing techniques suchas Fourier and Wavelet
transforms, adaptive ltering, andsource-lter models. New music
interface and query systemsleveraging such methods are also
important for end users tobenet from MIR research.Although research
contributions on MIR have been pub-lished at various conferences in
1990s, the members of theMIR research community meet annually at
the InternationalConference on Music Information Retrieval (ISMIR)
since2000.Topics of interest include (but are not limited to):
Automatic summarization (succinct representation ofmusic) Automatic
transcription (audio to symbolic formatconversion) Music annotation
(semantic analysis) Music ngerprinting (unique identication of
music) Music interface Music similarity metrics (comparison) Music
understanding Musical feature extraction Musical styles and genres
Optical music score recognition (image to symbolicformat
conversion) Performer/artist identication Query systems
Timbre/instrument recognitionAuthors should follow the EURASIP JASP
manuscriptformat described at
http://www.hindawi.com/journals/asp/.Prospective authors should
submit an electronic copy of theircomplete manuscripts through the
EURASIP JASP man-uscript tracking system at
http://www.mstracking.com/asp/,according to the following
timetable:Manuscript Due December 1, 2005Acceptance Notication
April 1, 2006Final Manuscript Due July 1, 2006Publication Date 3rd
Quarter, 2006GUEST EDITORS:Ichiro Fujinaga, McGill University,
Montreal, QC, Canada,H3A 2T5; [email protected] Goto,
National Institute of Advanced IndustrialScience and Technology,
Japan; [email protected] Tzanetakis, University of Victoria,
Victoria, BC,Canada, V8P 5C2; [email protected] Publishing
Corporationhttp://www.hindawi.comEURASIP JOURNAL ON APPLIED SIGNAL
PROCESSINGSpecial Issue onVisual Sensor NetworksCall for
PapersResearch into the design, development, and deploymentof
networked sensing devices for high-level inference andsurveillance
of the physical environment has grown tremen-dously in the last few
years.This trend has been motivated, in part, by recent
techno-logical advances in electronics, communication
networking,and signal processing.Sensor networks are commonly
comprised of lightweightdistributed sensor nodes such as low-cost
video cameras.There is inherent redundancy in the number of nodes
de-ployed and corresponding networking topology. Operationof the
network requires autonomous peer-based collabora-tion amongst the
nodes and intermediate data-centric pro-cessing amongst local
sensors. The intermediate processingknown as in-network processing
is application-specic. Of-ten, the sensors are untethered so that
they must commu-nicate wirelessly and be battery-powered. Initial
focus wasplaced on the design of sensor networks in which scalar
phe-nomena such as temperature, pressure, or humidity
weremeasured.It is envisioned that much societal use of sensor
networkswill also be based on employing content-rich
vision-basedsensors. The volume of data collected as well as the
sophis-tication of the necessary in-network stream content
process-ing provide a diverse set of challenges in comparison
withgeneric scalar sensor network research.Applications that will
be facilitated through the develop-ment of visual sensor networking
technology include auto-matic tracking, monitoring and signaling of
intruders withina physical area, assisted living for the elderly or
physically dis-abled, environmental monitoring, and command and
con-trol of unmanned vehicles.Many current video-based surveillance
systems have cen-tralized architectures that collect all visual
data at a cen-tral location for storage or real-time interpretation
by a hu-man operator. The use of distributed processing for
auto-mated event detection would signicantly alleviate mundaneor
time-critical activities performed by human operators,and provide
better network scalability. Thus, it is expectedthat video
surveillance solutions of the future will success-fully utilize
visual sensor networking technologies.Given that the eld of visual
sensor networking is still inits infancy, it is critical that
researchers from the diverse dis-ciplines including signal
processing, communications, andelectronics address the many
challenges of this emergingeld. This special issue aims to bring
together a diverse setof research results that are essential for
the development ofrobust and practical visual sensor
networks.Topics of interest include (but are not limited to):
Sensor network architectures for high-bandwidth vi-sion
applications Communication networking protocols specic to vi-sual
sensor networks Scalability, reliability, and modeling issues of
visualsensor networks Distributed computer vision and aggregation
algo-rithms for low-power surveillance applications Fusion of
information from visual and other modali-ties of sensors Storage
and retrieval of sensor information Security issues for visual
sensor networks Visual sensor network testbed research Novel
applications of visual sensor networks Design of visual
sensorsAuthors should follow the EURASIP JASP manuscriptformat
described at http://www.hindawi.com/journals/asp/.Prospective
authors should submit an electronic copy of theircomplete
manuscripts through the EURASIP JASP man-uscript tracking system at
http://www.mstracking.com/asp/,according to the following
timetable:Manuscript Due December 1, 2005Acceptance Notication
April 1, 2006Final Manuscript Due July 1, 2006Publication Date 3rd
Quarter, 2006GUEST EDITORS:Deepa Kundur, Department of Electrical
Engineering,Texas A&M University, College Station, Texas,
USA;[email protected] Lin, Distributed Computing
Department,IBM TJ Watson Research Center, New York,
USA;[email protected] Shien Lu, Institute of Information
Science, AcademiaSinica, Taipei, Taiwan;
[email protected] Publishing
Corporationhttp://www.hindawi.comEURASIP JOURNAL ON APPLIED SIGNAL
PROCESSINGSpecial Issue onMultirate Systems and ApplicationsCall
for PapersFilter banks for the application of subband coding of
speechwere introduced in the 1970s. Since then, lter banks
andmultirate systems have been studied extensively. There hasbeen
great success in applying multirate systems to many ap-plications.
The most notable of these applications includesubband coding for
audio, image, and video, signal anal-ysis and representation using
wavelets, subband denoising,and so forth. Dierent applications also
call for dierent l-ter bank designs and the topic of designing
one-dimensionaland multidimentional lter banks for specic
applicationshas been of great interest.Recently there has been
growing interest in applying mul-tirate theories to the area of
communication systems such as,transmultiplexers, lter bank
transceivers, blind deconvolu-tion, and precoded systems. There are
strikingly many duali-ties and similarities between multirate
systems and multicar-rier communication systems. Many problems in
multicarriertransmission can be solved by extending results from
mul-tirate systems and lter banks. This exciting research area
isone that is of increasing importance.The aim of this special
issue is to bring forward recent de-velopments on lter banks and
the ever-expanding area ofapplications of multirate systems.Topics
of interest include (but are not limited to): Multirate signal
processing for communications Filter bank transceivers
One-dimensional and multidimensional lter bankdesigns for specic
applications Denoising Adaptive ltering Subband coding Audio,
image, and video compression Signal analysis and representation
Feature extraction and classication Other applicationsAuthors
should follow the EURASIP JASP manuscriptformat described at
http://www.hindawi.com/journals/asp/.Prospective authors should
submit an electronic copy of theircomplete manuscripts through the
EURASIP JASP man-uscript tracking system at
http://www.mstracking.com/asp/,according to the following
timetable:Manuscript Due January 1, 2006Acceptance Notication May
1, 2006Final Manuscript Due August 1, 2006Publication Date 4th
Quarter, 2006GUEST EDITORS:Yuan-Pei Lin, Department of Electrical
and ControlEngineering, National Chiao Tung University,
Hsinchu,Taiwan; [email protected] Phoong, Department of
Electrical Engineeringand Graduate Institute of Communication
Engineering,National Taiwan University, Taipei,
Taiwan;[email protected] Selesnick, Department of Electrical
and ComputerEngineering, Polytechnic University, Brooklyn, NY
11201,USA; [email protected] Oraintara, Department of
ElectricalEngineering, The University of Texas at
Arlington,Arlington, TX 76010, USA; [email protected]
Schuller, Fraunhofer Institute for Digital MediaTechnology (IDMT),
Langewiesener Strasse 22, 98693Ilmenau, Germany;
[email protected] Publishing
Corporationhttp://www.hindawi.comEURASIP JOURNAL ON APPLIED SIGNAL
PROCESSINGSpecial Issue onMultisensor Processing for Signal
Extractionand ApplicationsCall for PapersSource signal extraction
from heterogeneous measurementshas a wide range of applications in
many scientic and tech-nological elds, for example,
telecommunications, speechand acoustic signal processing, and
biomedical pattern anal-ysis. Multiple signal reception through
multisensor systemshas become an eective means for signal
extraction due to itssuperior performance over the monosensor mode.
Despitethe rapid progress made in multisensor-based techniques
inthe past few decades, they continue to evolve as key
tech-nologies in modern wireless communications and biomedi-cal
signal processing. This has led to an increased focus by thesignal
processing community on the advanced multisensor-based techniques
which can oer robust high-quality sig-nal extraction under
realistic assumptions and with minimalcomputational complexity.
However, many challenging tasksremain unresolved and merit further
rigorous studies. Majoreorts in developing advanced
multisensor-based techniquesmay include high-quality signal
extraction, realistic theoret-ical modeling of real-world problems,
algorithm complexityreduction, and ecient real-time
implementation.The purpose of this special issue aims to present
state-of-the-art multisensor signal extraction techniques and
applica-tions. Contributions in theoretical study, performance
analy-sis, complexity reduction, computational advances, and
real-world applications are strongly encouraged.Topics of interest
include (but are not limited to): Multiantenna processing for radio
signal extraction Multimicrophone speech recognition and
enhance-ment Multisensor radar, sonar, navigation, and
biomedicalsignal processing Blind techniques for multisensor signal
extraction Computational advances in multisensor processingAuthors
should follow the EURASIP JASP manuscriptformat described at
http://www.hindawi.com/journals/asp/.Prospective authors should
submit an electronic copy of theircomplete manuscripts through the
EURASIP JASP man-uscript tracking system at
http://www.mstracking.com/asp/,according to the following
timetable:Manuscript Due January 1, 2006Acceptance Notication May
1, 2006Final Manuscript Due August 1, 2006Publication Date 4th
Quarter, 2006GUEST EDITORS:Chong-Yung Chi, National Tsing Hua
University, Taiwan;[email protected] Lee, National Chiao
Tung University, Taiwan;[email protected] Luo,
University of Minnesota, USA;[email protected] Yao, University
of California, Los Angeles, USA;[email protected] Wang, Virginia
Polytechnic Institute and StateUniversity, USA;
[email protected] Publishing
Corporationhttp://www.hindawi.comEURASIP JOURNAL ON APPLIED SIGNAL
PROCESSINGSpecial Issue onSearch and Retrieval of 3DContent and
AssociatedKnowledge Extraction and PropagationCall for PapersWith
the general availability of 3D digitizers, scanners, andthe
technology innovation in 3D graphics and computa-tional equipment,
large collections of 3D graphical mod-els can be readily built up
for dierent applications (e.g.,in CAD/CAM, games design, computer
animations, manu-facturing and molecular biology). For such large
databases,the method whereby 3D models are sought merits
carefulconsideration. The simple and ecient query-by-content
ap-proach has, up to now, been almost universally adopted inthe
literature. Any such method, however, must rst dealwith the proper
positioning of the 3D models. The twoprevalent-in-the-literature
methods for the solution to thisproblem seek either Pose
Normalization: Models are rst placed into acanonical coordinate
frame (normalizing for transla-tion, scaling, and rotation). Then,
the best measure ofsimilarity is found by comparing the extracted
featurevectors, or Descriptor Invariance: Models are described in
atransformation invariant manner, so that any trans-formation of a
model will be described in the sameway, and the best measure of
similarity is obtained atany transformation.The existing 3D
retrieval systems allow the user to performqueries by example. The
queried 3D model is then processed,low-level geometrical features
are extracted, and similar ob-jects are retrieved from a local
database. A shortcoming ofthe methods that have been proposed so
far regarding the3D object retrieval, is that neither is the
semantic informa-tion (high-level features) attached to the
(low-level) geomet-ric features of the 3D content, nor are the
personalizationoptions taken into account, which would signicantly
im-prove the retrieved results. Moreover, few systems exist sofar
to take into account annotation and relevance feedbacktechniques,
which are very popular among the correspond-ing content-based image
retrieval systems (CBIR).Most existing CBIR systems using knowledge
either an-notate all the objects in the database (full annotation)
orannotate a subset of the database manually selected (par-tial
annotation). As the database becomes larger, full anno-tation is
increasingly dicult because of the manual eortneeded. Partial
annotation is relatively aordable and trimsdown the heavy manual
labor. Once the database is partiallyannotated, traditional image
analysis methods are used to de-rive semantics of the objects not
yet annotated. However, itis not clear how much annotation is
sucient for a specicdatabase and what the best subset of objects to
annotate is.In other words how the knowledge will be propagated.
Suchtechniques have not been presented so far regarding the
3Dcase.Relevance feedback was rst proposed as an interactivetool in
text-based retrieval. Since then it has been proven tobe a powerful
tool and has become a major focus of researchin the area of
content-based search and retrieval. In the tra-ditional computer
centric approaches, which have been pro-posed so far, the best
representations and weights are xedand they cannot eectively model
high-level concepts andusers perception subjectivity. In order to
overcome theselimitations of the computer centric approach,
techniquesbased on relevant feedback, in which the human and
com-puter interact to rene high-level queries to
representationsbased on low-level features, should be developed.The
aim of this special issue is to focus on recent devel-opments in
this expanding research area. The special issuewill focus on novel
approaches in 3D object retrieval, trans-forms and methods for
ecient geometric feature extrac-tion, annotation and relevance
feedback techniques, knowl-edge propagation (e.g., using Bayesian
networks), and theircombinations so as to produce a single,
powerful, and domi-nant solution.Topics of interest include (but
are not limited to): 3D content-based search and retrieval
methods(volume/surface-based) Partial matching of 3D objects
Rotation invariant feature extraction methods for 3Dobjects
Graph-based and topology-based methods 3D data and knowledge
representation Semantic and knowledge propagation over
heteroge-neous metadata types Annotation and relevance feedback
techniques for 3DobjectsAuthors should follow the EURASIP JASP
manuscriptformat described at
http://www.hindawi.com/journals/asp/.Prospective authors should
submit an electronic copy oftheir complete manuscript through the
EURASIP JASP man-uscript tracking system at
http://www.mstracking.com/asp/,according to the following
timetable:Manuscript Due February 1, 2006Acceptance Notication June
1, 2006Final Manuscript Due September 1, 2006Publication Date 4th
Quarter, 2006GUEST EDITORS:Tsuhan Chen, Carnegie Mellon University,
Pittsburgh, PA15213, USA; [email protected] Ouhyoung, National
Taiwan University, Taipei 106,Taiwan; [email protected]
Daras, Informatics and Telematics Institute, Centrefor Research and
Technology Hellas, 57001 Thermi, Thessa-loniki, Greece;
[email protected] Publishing
Corporationhttp://www.hindawi.comEURASIP JOURNAL ON APPLIED SIGNAL
PROCESSINGSpecial Issue onRobust Speech RecognitionCall for
Papers

Robustness can be defined as the ability of a system to maintain performance or degrade gracefully when exposed to conditions not well represented in the data used to develop the system. In automatic speech recognition (ASR), systems must be robust to many forms of signal degradation, including speaker characteristics (e.g., dialect and accent), ambient environment (e.g., cellular telephony), transmission channel (e.g., voice over IP), and language (e.g., new words, dialect switching). Robust ASR systems, which have been under development for the past 35 years, have made great progress over the years, closing the gap between performance on pristine research tasks and noisy operational data.

However, in recent years, demand is emerging for a new class of systems that tolerate extreme and unpredictable variations in operating conditions. For example, in a cellular telephony environment, there are many nonstationary forms of noise (e.g., multiple speakers) and significant variations in microphone type, position, and placement. Harsh ambient conditions typical in automotive and mobile applications pose similar challenges. Development of systems in a language or dialect for which there is limited or no training data in a target language has become a critical issue for a new generation of voice mining applications. The existence of multiple conditions in a single stream, a situation common to broadcast news applications, and that often involves unpredictable changes in speaker, topic, dialect, or language, is another form of robustness that has gained attention in recent years.

Statistical methods have dominated the field since the early 1980s. Such systems tend to excel at learning the characteristics of large databases that represent good models of the operational conditions and do not generalize well to new environments.

This special issue will focus on recent developments in this key research area. Topics of interest include (but are not limited to):

- Channel and microphone normalization
- Stationary and nonstationary noise modeling, compensation, and/or rejection
- Localization and separation of sound sources (including speaker segregation)
- Signal processing and feature extraction for applications involving hands-free microphones
- Noise robust speech modeling
- Adaptive training techniques
- Rapid adaptation and learning
- Integration of confidence scoring, metadata, and other alternative information sources
- Audio-visual fusion
- Assessment relative to human performance
- Machine learning algorithms for robustness
- Transmission robustness
- Pronunciation modeling

Authors should follow the EURASIP JASP manuscript format described at http://www.hindawi.com/journals/asp/. Prospective authors should submit an electronic copy of their complete manuscript through the EURASIP JASP manuscript tracking system at http://www.mstracking.com/asp/, according to the following timetable:

Manuscript Due: February 1, 2006
Acceptance Notification: June 1, 2006
Final Manuscript Due: September 1, 2006
Publication Date: 4th Quarter, 2006

GUEST EDITORS:

Herve Bourlard, IDIAP Research Institute, Swiss Federal Institute of Technology at Lausanne (EPFL), 1920 Martigny, Switzerland; [email protected]

Mark Gales, Department of Engineering, University of Cambridge, Cambridge CB2 1PZ, UK; [email protected]

Maurizio Omologo, ITC-IRST, 38050 Trento, Italy; [email protected]

S. Parthasarathy, AT&T Labs - Research, NJ 07748, USA; [email protected]

Joe Picone, Department of Electrical and Computer Engineering, Mississippi State University, MS 39762-9571, USA; [email protected]

EURASIP JOURNAL ON APPLIED SIGNAL PROCESSING

Special Issue on Signal Processing Technologies for Ambient Intelligence in Home-Care Applications

Call
for Papers

The possibility of allowing elderly people with different kinds of disabilities to conduct a normal life at home and achieve a more effective inclusion in society is attracting more and more interest from both industrial and governmental bodies (hospitals, healthcare institutions, and social institutions). Ambient intelligence technologies, supported by adequate networks of sensors and actuators, as well as by suitable processing and communication technologies, could enable such an ambitious objective.

Recent research has demonstrated the possibility of providing constant monitoring of environmental and biomedical parameters, and of autonomously originating alarms, providing primary healthcare services, and activating emergency calls and rescue operations through distributed assistance infrastructures. Nevertheless, several technological challenges are still connected with these applications, ranging from the development of enabling technologies (hardware and software), to the standardization of interfaces, the development of intuitive and ergonomic human-machine interfaces, and the integration of complex systems in a highly multidisciplinary environment.

The objective of this special issue is to collect the most significant contributions and visions coming from both academic and applied research bodies working in this stimulating research field. This is a highly interdisciplinary field comprising many areas, such as signal processing, image processing, computer vision, sensor fusion, machine learning, pattern recognition, biomedical signal processing, multimedia, human-computer interfaces, and networking. The focus will be primarily on the presentation of original and unpublished works dealing with ambient intelligence and domotic technologies that can enable the provision of advanced homecare services.

This special issue will focus on recent developments in this key research area. Topics of interest include (but are not limited to):

- Video-based monitoring of domestic environments and users
- Continuous versus event-driven monitoring
- Distributed information processing
- Data fusion techniques for event association and automatic alarm generation
- Modeling, detection, and learning of user habits for automatic detection of anomalous behaviors
- Integration of biomedical and behavioral data
- Posture and gait recognition and classification
- Interactive multimedia communications for remote assistance
- Content-based encoding of medical and behavioral data
- Networking support for remote healthcare
- Intelligent/natural man-machine interaction, personalization, and user acceptance

Authors should follow the EURASIP JASP manuscript format described at http://www.hindawi.com/journals/asp/. Prospective authors should submit an electronic copy of their complete manuscripts through the EURASIP JASP manuscript tracking system at http://www.mstracking.com/asp/, according to the following timetable:

Manuscript Due: March 1, 2006
Acceptance Notification: July 1, 2006
Final Manuscript Due: October 1, 2006
Publication Date: 1st Quarter, 2007

GUEST EDITORS:

Francesco G. B. De Natale, Department of Information and Communication Technology, University of Trento, Via Sommarive 14, 38050 Trento, Italy; [email protected]

Aggelos K. Katsaggelos, Department of Electrical and Computer Engineering, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208-3118, USA; [email protected]

Oscar Mayora, Create-Net Association, Via Solteri 38, 38100 Trento, Italy; [email protected]

Ying Wu, Department of Electrical and Computer Engineering, Northwestern University, 2145 Sheridan Road, Evanston, IL 60208-3118, USA; [email protected]

EURASIP JOURNAL ON APPLIED SIGNAL PROCESSING

Special Issue on Spatial Sound and Virtual Acoustics

Call
for Papers

Spatial sound reproduction has become widespread in the form of multichannel audio, particularly through home theater systems. Reproduction systems from binaural (by headphones) to hundreds of loudspeaker channels (such as wave field synthesis) are entering practical use. The application potential of spatial sound is much wider than multichannel sound, however, and research in the field is active. Spatial sound covers, for example, the capturing, analysis, coding, synthesis, reproduction, and perception of spatial aspects in audio and acoustics.

In addition to the topics mentioned above, research in virtual acoustics broadens the field. Virtual acoustics includes techniques and methods to create realistic percepts of sound sources and acoustic environments that do not exist naturally but are rendered by advanced reproduction systems using loudspeakers or headphones. Augmented acoustic and audio environments contain both real and virtual acoustic components.

Spatial sound and virtual acoustics are among the major research and application areas in audio signal processing. Topics of active study range from new basic research ideas to improvement of existing applications. Understanding of spatial sound perception by humans is also an important area, in fact a prerequisite to advanced forms of spatial sound and virtual acoustics technology.

This special issue will focus on recent developments in this key research area. Topics of interest include (but are not limited to):

- Multichannel reproduction
- Wave field synthesis
- Binaural reproduction
- Format conversion and enhancement of spatial sound
- Spatial sound recording
- Analysis, synthesis, and coding of spatial sound
- Spatial sound perception and auditory modeling
- Simulation and modeling of room acoustics
- Auralization techniques
- Beamforming and sound source localization
- Acoustic and auditory scene analysis
- Augmented reality audio
- Virtual acoustics (sound environments and sources)
- Intelligent audio environments
- Loudspeaker-room interaction and equalization
- Applications

Authors should follow the EURASIP JASP manuscript format described at http://www.hindawi.com/journals/asp/. Prospective authors should submit an electronic copy of their complete manuscript through the EURASIP JASP manuscript tracking system at http://www.mstracking.com/asp/, according to the following timetable:

Manuscript Due: May 1, 2006
Acceptance Notification: September 1, 2006
Final Manuscript Due: December 1, 2006
Publication Date: 1st Quarter, 2007

GUEST EDITORS:

Ville Pulkki, Helsinki University of Technology, Espoo, Finland; [email protected]

Christof Faller, EPFL, Lausanne, Switzerland; [email protected]

Aki Harma, Philips Research Labs, Eindhoven, The Netherlands; [email protected]

Tapio Lokki, Helsinki University of Technology, Espoo, Finland; [email protected]

Werner de Bruijn, Philips Research Labs, Eindhoven, The Netherlands; [email protected]

NEWS RELEASE

Nominations Invited
for the Institute of Acoustics 2006 A B Wood Medal

The Institute of Acoustics, the UK's leading professional body for those working in acoustics, noise and vibration, is inviting nominations for its prestigious A B Wood Medal for the year 2006.

The A B Wood Medal and prize is presented to an individual, usually under the age of 35, for distinguished contributions to the application of underwater acoustics. The award is made annually, in even-numbered years to a person from Europe and in odd-numbered years to someone from the USA/Canada. The 2005 Medal was awarded to Dr A. Thode from the USA for his innovative, interdisciplinary research in ocean and marine mammal acoustics.

Nominations should consist of the candidate's CV, clearly identifying peer-reviewed publications, and a letter of endorsement from the nominator identifying the contribution the candidate has made to underwater acoustics. In addition, there should be a further reference from a person involved in underwater acoustics and not closely associated with the candidate. Nominees should be citizens of a European Union country for the 2006 Medal. Nominations should be marked confidential and addressed to the President of the Institute of Acoustics at 77A St Peter's Street, St. Albans, Herts, AL1 3BN. The deadline for receipt of nominations is 15 October 2005.

Dr Tony Jones, President of the Institute of Acoustics, comments, "A B Wood was a modest man who took delight in helping his younger colleagues. It is therefore appropriate that this prestigious award should be designed to recognise the contributions of young acousticians."

Further information and a nomination form can be found on the Institute's website at www.ioa.org.uk.

A B Wood

Albert Beaumont Wood was born in Yorkshire in 1890 and graduated from Manchester University in 1912. He became one of the first two research scientists at the Admiralty to work on antisubmarine defence. He designed the first directional hydrophone and was well known for the many contributions he made to the science of underwater acoustics and for the help he gave to younger colleagues. The medal was instituted after his death by his many friends on both sides of the Atlantic and was administered by the Institute of Physics until the formation of the Institute of Acoustics in 1974.

PRESS CONTACT

Judy Edrich, Publicity & Information Manager, Institute of Acoustics
Tel: 01727 848195; E-mail: [email protected]

EDITORS' NOTES

The Institute of Acoustics is the UK's professional body for those working in acoustics, noise and vibration. It was formed in 1974 from the amalgamation of the Acoustics Group of the Institute of Physics and the British Acoustical Society (a daughter society of the Institution of Mechanical Engineers). The Institute of Acoustics is a nominated body of the Engineering Council, offering registration at Chartered and Incorporated Engineer levels.

The Institute has some 2500 members from a rich diversity of backgrounds, with engineers, scientists, educators, lawyers, occupational hygienists, architects and environmental health officers among their number. This multidisciplinary culture provides a productive environment for cross-fertilisation of ideas and initiatives. The range of interests of members within the world of acoustics is equally wide, embracing such aspects as aerodynamics, architectural acoustics, building acoustics, electroacoustics, engineering dynamics, noise and vibration, hearing, speech, underwater acoustics, together with a variety of environmental aspects. The lively nature of the Institute is demonstrated by the breadth of its learned society programmes.

For more information please visit our site at www.ioa.org.uk.

HIGH-FIDELITY MULTICHANNEL AUDIO CODING
Dai Tracy
Yang, Chris Kyriakakis, and C.-C. Jay Kuo

This invaluable monograph addresses the specific needs of audio-engineering students and researchers who are either learning about the topic or using it as a reference book on multichannel audio compression. This book covers a wide range of knowledge on perceptual audio coding, from basic digital signal processing and data compression techniques to advanced audio coding standards and innovative coding tools. It is the only book available on the market that solely focuses on the principles of high-quality audio codec design for multichannel sound sources.

This book includes three parts. The first part covers the basic topics on audio compression, such as quantization, entropy coding, psychoacoustic models, and sound quality assessment. The second part of the book highlights the currently most prevalent low-bit-rate high-performance audio coding standards: MPEG-4 audio. More space is given to the audio standards that are capable of supporting multichannel signals, that is, MPEG advanced audio coding (AAC), including the original MPEG-2 AAC technology, additional MPEG-4 toolsets, and the most recent aacPlus standard. The third part of this book introduces several innovative multichannel audio coding tools, which have been demonstrated to further improve the coding performance and expand the available functionalities of MPEG AAC, and is more suitable for graduate students and researchers at the advanced level.

Dai Tracy Yang is currently Postdoctoral Research Fellow, Chris Kyriakakis is Associate Professor, and C.-C. Jay Kuo is Professor, all affiliated with the Integrated Media Systems Center (IMSC) at the University of Southern California.

EURASIP Book Series on Signal Processing and Communications

The EURASIP Book Series on Signal Processing and Communications publishes monographs, edited volumes, and textbooks on Signal Processing and Communications. For more information about the series please visit: http://hindawi.com/books/spc/about.html

For more information and online orders please visit: http://www.hindawi.com/books/spc/volume-1/

For any inquiries on how to order this title please contact [email protected]

EURASIP Book Series on SP&C, Volume 1, ISBN 977-5945-08-9

The EURASIP
Book Series on Signal Processing and Communications publishes monographs, edited volumes, and textbooks on Signal Processing and Communications. For more information about the series please visit: http://hindawi.com/books/spc/about.html

For more information and online orders please visit: http://www.hindawi.com/books/spc/volume-2/

For any inquiries on how to order this title please contact [email protected]

EURASIP Book Series on SP&C, Volume 2, ISBN 977-5945-07-0

GENOMIC SIGNAL PROCESSING AND STATISTICS
Edited by: Edward R. Dougherty, Ilya Shmulevich, Jie Chen, and Z. Jane Wang
EURASIP Book Series on Signal Processing and Communications

Recent advances in genomic studies have stimulated synergetic research and development in many cross-disciplinary areas. Genomic data, especially the recent large-scale microarray gene expression data, represent enormous challenges for signal processing and statistics in processing these vast data to reveal the complex biological functionality. This perspective naturally leads to a new field, genomic signal processing (GSP), which studies the processing of genomic signals by integrating the theory of signal processing and statistics. Written by an international, interdisciplinary team of authors, this invaluable edited volume is accessible to students just entering this emergent field, and to researchers, both in academia and industry, in the fields of molecular biology, engineering, statistics, and signal processing. The book provides tutorial-level overviews and addresses the specific needs of genomic signal processing students and researchers as a reference book.

The book aims to address current genomic challenges by exploiting potential synergies between genomics, signal processing, and statistics, with special emphasis on signal processing and statistical tools for structural and functional understanding of genomic data. The book is partitioned into three parts. In part I, a brief history of genomic research and a background introduction from both biological and signal-processing/statistical perspectives are provided so that readers can easily follow the material presented in the rest of the book. In part II, overviews of state-of-the-art techniques are provided. We start with a chapter on sequence analysis, and follow with chapters on feature selection, clustering, and classification of microarray data. The next three chapters discuss the modeling, analysis, and simulation of biological regulatory networks, especially gene regulatory networks based on Boolean and Bayesian approaches. The next two chapters treat visualization and compression of gene data, and supercomputer implementation of genomic signal processing systems. Part II concludes with two chapters on systems biology and medical implications of genomic research. Finally, part III discusses the future trends in genomic signal processing and statistics research.