Model-based Bayesian analysis in acoustics—A tutoriala)
Ning Xiangb)
Graduate Program in Architectural Acoustics, Rensselaer Polytechnic Institute, Troy, New York 12180, USA
Bayesian analysis has been increasingly applied in many acoustical applications. In these applications, prediction models are often involved to better understand the process under investigation by purposely learning from the experimental observations. When carrying out model-based data analysis within a Bayesian framework, issues related to incorporating the experimental data and assigning probabilities into the inferential learning procedure need fundamental consideration. This paper introduces Bayesian probability theory on a tutorial level, including fundamental rules for manipulating probabilities and the principle of maximum entropy for assignment of the probabilities needed prior to the data analysis. This paper also employs a number of examples recently published in this journal to explain the detailed steps of applying model-based Bayesian inference to solving acoustical problems.

© 2020 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1121/10.0001731
(Received 2 May 2020; revised 9 July 2020; accepted 24 July 2020; published online 28 August 2020)
[Editor: James F. Lynch] Pages: 1101–1120
I. INTRODUCTION
Many recent Bayesian applications in acoustics hint at a "Bayesian revolution" in science and engineering (Ballard et al., 2020; Landschoot and Xiang, 2019; Lee et al., 2020; Martiartu et al., 2019; Nannuru et al., 2018). However, the logical foundations of Bayesian methods cannot be put in fully satisfactory form until the classical problem of arbitrariness (sometimes called "subjectivity") in assigning prior probability is resolved (Jaynes, 2003). These recent Bayesian applications have already reached a level where the problem of prior probabilities can no longer be ignored.
This paper describes model-based Bayesian analysis at an introductory level for the readers of the journal, sacrificing some rigor for clarity and keeping the mathematical handling of Bayesian probability methods to the minimum feasible. Model-based approaches rely crucially on parametric models, which can be derived from physical/acoustical principles but can also be phenomenological or numerical. Another class of Bayesian data analysis that does not necessarily rely on parametric models, the so-called nonparametric Bayesian analysis (Müller et al., 2015), is outside the scope of this paper.
This tutorial paper is organized as follows. Section II introduces the logical concept of probability from the Bayesian viewpoint. Section III discusses two simple examples within the Bayesian framework. Section IV applies the principle of maximum entropy to assign prior probabilities. Section V explains two levels of Bayesian inference. Section VI presents a number of recent applications of model-based Bayesian analysis. Section VII summarizes the paper.
II. BAYESIAN PROBABILITY
When considering the interpretation of probability, the debate between the two main schools of statistical inference, the Bayesian school and the frequentist school, is now in its third century. Frequentists consider probability to be a proportion in a large ensemble of repeatable observations (Cox, 2006). This interpretation has been dominant in statistics until recent decades (McGrayne, 2011). But in considering, for instance, the probability of the mass of the universe falling between certain bounds, the probability cannot be interpreted in terms of frequencies of repeatable observations, because there is only one universe. The Bayesian school [henceforth just "Bayesian" (Fienberg, 2006)] views probability instead as a degree or strength of implication: how strongly one thing implies another (Garrett, 1998; Keynes, 1921). Carnap (1950), taking a similar view, considered it as the degree of confirmation (or conclusion) of a hypothesis on the basis of some given evidence (or premises). In detail, p(B|A) represents how strongly the assumed truth of one binary proposition, A, implies the truth of another, B, according to the relations known between the things that the propositions refer to; in this expression the proposition in question (B) appears first, to the left of the vertical line or "conditioning solidus," and the proposition held to be true (A) appears to its right. Use of implication or confirmation, rather than belief, does away with common criticisms of Bayesianism relating to psychology, and also demonstrates clearly that all probabilities are conditional: they depend on at least two propositions, one of which is the conditioning information, so there is no such thing as an unconditional probability (Carnap, 1950; de Finetti, 2017).
a) Extended from Xiang and Fackler (2015), Acoust. Today 11, 54–61. Portions of this work have also been presented at the 175th Meeting of the Acoustical Society of America, Minneapolis, MN, USA, May 2018.
b) Electronic mail: [email protected]
of data. The new information comprising the data D comes from experiments.

In the room-acoustic example mentioned above, the quantity p(θ|I) is a probability representing the initial implication of the parameter values from the background information before taking into account the experimental data D. It is consequently referred to as the prior probability for θ, or prior for short.
The term p(D|θ, I) reads "the probability of the data D given the parameters θ and the background information I," and represents the strength of implication that the measured data D would have been generated for a given value of θ. It represents the probability of observing the dataset D if the parameters take any particular set of values θ. It is often called the likelihood function (likelihood for short). This likelihood represents the probability of getting the measured data D supposing that the model H(θ) holds with values of its defining parameters θ.
The term p(θ|D, I) represents the probability of the parameter values θ after taking the data values D into account. It is therefore referred to as the posterior probability (posterior for short). The quantity p(D|I) in the denominator on the right-hand side of Eq. (23) represents the probability that the observed data occur no matter what the values of the parameters. It can be interpreted as a normalization factor ensuring that the posterior probability for the parameters integrates to unity.
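To make these four quantities concrete, the following is a minimal numerical sketch, not taken from the paper itself: it discretizes a single parameter θ on a grid, assigns a flat prior, evaluates a Gaussian likelihood for simulated data, and normalizes by the evidence p(D|I). The model function, data, and noise level are all invented for illustration.

```python
import numpy as np

# Hypothetical single-parameter model: exponential decay h(t; theta) with
# unknown decay constant theta (values invented purely for illustration).
t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
data = np.exp(-3.0 * t) + 0.05 * rng.standard_normal(t.size)  # simulated data D

theta = np.linspace(0.1, 10.0, 1000)      # discretized parameter space
dtheta = theta[1] - theta[0]
prior = np.ones_like(theta)               # flat prior p(theta|I)
prior /= prior.sum() * dtheta             # normalize so the prior integrates to 1

sigma = 0.05                              # assumed error standard deviation
residuals = data[None, :] - np.exp(-theta[:, None] * t[None, :])
log_like = -0.5 * np.sum(residuals**2, axis=1) / sigma**2  # Gaussian log-likelihood

unnorm = prior * np.exp(log_like - log_like.max())  # rescaled for numerical stability
evidence = unnorm.sum() * dtheta                    # p(D|I), up to the rescaling
posterior = unnorm / evidence                       # p(theta|D, I) integrates to 1

print("posterior mean of theta:", (theta * posterior).sum() * dtheta)
```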
Acousticians who face data analysis tasks from experimental observations are typically challenged to estimate a set of parameters encapsulated in the model H(θ), also called the hypothesis. Figure 2 illustrates this task schematically. A model H(θ) is specified by a functional form containing a set of parameters θ whose values are not known in advance. The task is to go backwards: given the experimental data D and the well-established model form containing the parameters θ, estimate the unknown (hopefully optimal) set of parameters.
Figure 3 illustrates Bayes' theorem as shown in Eq. (23). The posterior probability, up to a normalization constant p(D|I), arises by updating the prior probability p(θ|I). Once the experimental data D become available, the likelihood function p(D|θ, I) represents a multiplicative factor that updates the prior probability so as to transform it into the posterior probability (up to a normalization constant). Bayes' theorem represents how one's initial assignment is updated in the light of the data. This corresponds exactly to the process of scientific exploration: acoustical scientists seek to gain new knowledge from acoustic experiments. The prior knowledge in many acoustics fields is the fruit of long development, leading to well-understood hypotheses, i.e., models. These models, as part of the prior knowledge (Candy, 2016), are typically based on generations of learning/education.
In Eq. (23), Bayes' theorem requires two probabilities in the calculation of the posterior probability up to a normalization constant. These are the prior probability, p(θ|I), and the likelihood function, p(D|θ, I). Use of the prior probability in the data analysis has been an element of controversy between the Bayesian and frequentist schools. Frequentists have criticized the "subjective" aspect of the prior probability involved in the Bayesian methodology (Berger, 2006), since different prior assignments lead to different posteriors according to Bayes' theorem [see Fig. 3 and Cowan (2007)]. Figure 3(a) shows that, if the prior probability is sharply peaked in parameter space, the likelihood function multiplied by the prior probability may give rise to a posterior probability peaked at a significantly different position than either the prior or the likelihood. Sharply peaked probability density functions encode a strong implication that the parameter values fall in certain (narrow) ranges in parameter space. Assignment of the prior probability in this way injects information that may differ from person to person into the data analysis. In the case of Fig. 3(a), the parameters are already known accurately, in which case
FIG. 2. Model-based parameter estimation scheme as an inverse problem. The inversion to be performed is to estimate a set of parameters encapsulated in a model, based on experimental data and the model itself (the "hypothesis").
FIG. 3. (Color online) Bayes' theorem [in Eq. (23)] used to estimate a parameter with the same data set as incorporated in the likelihood function p(D|θ, I), in which different prior probabilities p(θ|I) lead to different posterior probabilities p(θ|D, I) (Cowan, 2007). (a) A more sharply peaked prior p(θ|I) has a considerable influence on the posterior p(θ|D, I). (b) A flat prior p(θ|I) has almost no influence on the posterior p(θ|D, I).
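The effect illustrated in Fig. 3 can be reproduced numerically. Continuing the grid sketch above, the following hypothetical snippet multiplies one and the same likelihood by a sharply peaked Gaussian prior and by a flat prior; the posterior peak shifts noticeably in the first case and hardly at all in the second. All numbers are invented for illustration.

```python
import numpy as np

theta = np.linspace(0.0, 10.0, 1000)
dtheta = theta[1] - theta[0]

# One and the same likelihood (a Gaussian bump serves as a stand-in)
likelihood = np.exp(-0.5 * ((theta - 6.0) / 0.8) ** 2)

# (a) sharply peaked prior centered away from the likelihood peak
prior_sharp = np.exp(-0.5 * ((theta - 4.0) / 0.3) ** 2)
# (b) flat prior over the same range
prior_flat = np.ones_like(theta)

for name, prior in [("sharp prior", prior_sharp), ("flat prior", prior_flat)]:
    unnorm = prior * likelihood
    posterior = unnorm / (unnorm.sum() * dtheta)   # normalize by the evidence
    mean = (theta * posterior).sum() * dtheta
    print(f"{name}: posterior mean = {mean:.2f}")  # the sharp prior pulls the peak
```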
with the principle of the maximum entropy, represents a quantitative tool capable of dealing with ignorance in the sense of complete lack of knowledge. The least informative prior probability is consistent with the principle of indifference (Keynes, 1921), or the principle of insufficient reason, as historically applied to Bayesian inference by its pioneers Bayes (1763), Richard Price (Hooper, 2013), and Laplace (1812), among others. The principle of indifference was eventually given a deeper logical justification by Jaynes (1968), based on the principle of transformation groups followed by the principle of maximum entropy. The methods of probability theory constitute consistent reasoning in scenarios where insufficient information for certainty is available, namely, inductive reasoning. Thus, probability is always the appropriate tool for dealing with uncertainty, lack of scientific data, and ignorance.
In summary, this assignment of the likelihood function incorporates no more than the available information; the experimenter knows a priori that the data model H(θ) describes the experimental data D well, such that the residual errors between the model and the data are expressed in a finite error variance σ². The resulting Gaussian likelihood distribution in Eq. (42) is then the consequence of the principle of the maximum entropy, which is fundamentally different from assuming that the probability of the residual errors is Gaussian. The end result in Eq. (47) is due also to the residual errors being logically independent of each other given the conditioning information, which is also according to the principle of the maximum entropy. The assigned distribution ensures no bias for which there is no prior evidence.
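As a concrete reading of this result, the following is a small sketch (not from the paper) of the Gaussian log-likelihood that the maximum-entropy argument leads to for K independent residuals with finite variance σ²; the model function and data below are placeholders.

```python
import numpy as np

def gaussian_log_likelihood(data, model, sigma):
    """log p(D|theta, I) for K residuals with known error std sigma,
    i.e. log of (2*pi*sigma**2)**(-K/2) * exp(-sum(eps**2) / (2*sigma**2))."""
    eps = data - model                      # residual errors eps_k = d_k - h_k(theta)
    K = eps.size
    return (-0.5 * K * np.log(2.0 * np.pi * sigma**2)
            - 0.5 * np.sum(eps**2) / sigma**2)

# Placeholder example: data scattered around an invented model prediction
rng = np.random.default_rng(1)
model = np.linspace(1.0, 0.1, 100)
data = model + 0.02 * rng.standard_normal(model.size)
print(gaussian_log_likelihood(data, model, sigma=0.02))
```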
C. Student's t-distribution

In the above formulation, the likelihood function encodes the extent of implication of the experimental data D by the values θ of the parameters in the model H(θ). However, the likelihood also contains a hyperparameter, the error standard deviation σ, stemming from the maximum-entropy assignment discussed in Sec. IV B. For some applications, this is a nuisance parameter of no interest to the analysis. Marginalization (as introduced in Sec. II B) may be applied to the joint probability of the data D and the standard deviation σ (Bretthorst, 1988, 1990). Recent acoustic applications of Bayesian analysis have also utilized a similar marginalization process to remove nuisance parameters (Jasa and Xiang, 2009; Xiang et al., 2011). The following derivation relies heavily on these references, particularly Bretthorst (1988, 1990).
To perform the marginalization [as in Eq. (20)], the joint probability p(D, σ|θ, H, I) of the data D and the standard deviation σ, namely the product of the conditional likelihood in Eq. (47) and the marginal distribution p(σ|θ, H, I) of σ, is integrated over all possible values of σ, resulting in a likelihood free from the error standard deviation,

$$p(D|\theta, H, I) = \int p(D, \sigma|\theta, H, I)\, d\sigma = \int p(D|\sigma, \theta, H, I)\, p(\sigma|\theta, H, I)\, d\sigma. \qquad (49)$$
The marginal distribution p(σ|θ, H, I) is regarded as a prior probability for the standard deviation σ, which is a scale parameter as discussed in Sec. IV A above, so that the principle of maximum entropy assigns the Jeffreys prior

$$p(\sigma|\theta, H, I) = \frac{1}{\sigma}. \qquad (50)$$
Substituting Eqs. (47) and (50) in Eq. (49) and integrating over all possible values of σ results in

$$p(D|\theta, H, I) = \int_0^\infty \frac{\sigma^{-K}}{\sigma}\, (2\pi)^{-K/2} \exp\!\left(-\frac{Q}{\sigma^2}\right) d\sigma, \qquad (51)$$

where $Q = \tfrac{1}{2}\sum_{k=1}^{K} \epsilon_k^2$ collects the squared residuals from Eq. (47).
This integral can be performed (see Appendix B) to give a marginalized likelihood function taking the form of Student's t-distribution,

$$p(D|\theta, H, I) = \frac{\Gamma(K/2)}{2} \left(\pi \sum_{k=1}^{K} \epsilon_k^2\right)^{-K/2}, \qquad (52)$$

where Γ(·) is the standard gamma function (Abramowitz and Stegun, 1964), and the residuals are ε_k = d_k − h_k(θ).
An extended Bayesian parameter estimation scheme is a sequential Bayesian process (Candy, 2015; Carrière and Hermand, 2012; Özdamar et al., 1990), such as the work recently reported on underwater applications (Yardim et al., 2011), in which the parameters encapsulated in the model evolve in time and space, with the data arriving consecutively.
V. TWO LEVELS OF BAYESIAN INFERENCE

In many acoustic experiments there are a finite number of models (hypotheses) H_1, H_2, …, H_M that compete against one another to explain the data. In the room-acoustic example above, H_1 is specified in Eq. (22) as containing one exponential decay term and one noise term, but the same data in Fig. 1 may alternatively be described by H_2, containing a sum of two exponential decay terms (a double-rate decay) with differing time constants and amplitude coefficients. Which model explains the data better? Figure 5 illustrates this scenario. Each model H_S is governed by a set of parameters θ_S. Model selection is an inverse problem: inferring which of a competing finite set of models is preferred by the data.
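Under the grid approach sketched earlier, this second level of inference can be illustrated by comparing Bayesian evidences, each obtained by integrating prior times likelihood over that model's parameter space. The following hypothetical snippet compares a one-decay model against a two-decay model on simulated single-rate data; everything here is invented for illustration, and real applications use samplers rather than dense grids.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 60)
sigma = 0.02
data = np.exp(-4.0 * t) + sigma * rng.standard_normal(t.size)  # single-rate decay

def log_like(model):
    eps = data - model
    return (-0.5 * np.sum(eps**2) / sigma**2
            - 0.5 * eps.size * np.log(2 * np.pi * sigma**2))

rates = np.linspace(0.5, 10.0, 200)        # flat prior over this range of decay rates
dr = rates[1] - rates[0]
span = rates[-1] - rates[0]

# Model H1: one decay rate
ll1 = np.array([log_like(np.exp(-r * t)) for r in rates])
m = ll1.max()
log_ev1 = m + np.log(np.sum(np.exp(ll1 - m)) * dr / span)

# Model H2: two decay rates, equal amplitudes 0.5 each, flat prior on a 2D grid
ll2 = np.array([[log_like(0.5 * np.exp(-r1 * t) + 0.5 * np.exp(-r2 * t))
                 for r2 in rates] for r1 in rates])
m = ll2.max()
log_ev2 = m + np.log(np.sum(np.exp(ll2 - m)) * dr * dr / span**2)

print("log evidence H1:", log_ev1)   # expected to win on single-rate data
print("log evidence H2:", log_ev2)   # penalized by the extra parameter (Occam factor)
```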
In practice, architectural acousticians often expect single-, double-, or triple-rate energy decays (Jasa and Xiang, 2012; Xiang et al., 2011). In the case of such competing models, it would be unhelpful to apply an inappropriate model to the parameter estimation problem (Xiang et al., 2011; Xiang et al., 2010). Before undertaking parameter
FIG. 5. Model selection, which comprises a second level of inference, represents an inverse problem of choosing one of a finite set of models. The model selection task is to infer which of a finite set of S models/hypotheses is preferred according to the experimental data.
sufficiently predicts the beamforming data, with θ being the azimuthal angular variable. A_0 is a constant parameter accounting for the noise floor, and the three parameters per sound source are the amplitude, A_s, the angle of arrival, φ_s, and the beam width, δ_s, of each sound source. The model parameter set for S sound sources, Θ_S = {A_0, A_1, …, A_S, φ_1, …, φ_S, δ_1, …, δ_S}, includes all of the amplitude and angular parameters. Figure 9 illustrates the directional response experimentally measured using a coprime linear microphone array of 16 elements, compared with the one predicted by the model in Eq. (66), where two simultaneous sound sources are present in the data. Figure 9(b) also illustrates the residual error function when the predictive model in Eq. (66) fits the experimental data well. The residual errors fulfill the condition expressed in Eq. (36) for assigning the likelihood function in Sec. IV B.
The DoA model using a coprime microphone array expressed in Eq. (66), including those in Eqs. (64) and (65), represents a class of models, generalized linear models, consisting of a linear superposition of S nonlinear functions. The DoA estimation of Escolano et al. (2014) using two microphones, the room-acoustic modal analysis (Beaton and Xiang, 2017), and the decay analysis (Xiang et al., 2011) all employ generalized linear models, where Bayesian model selection is applied to estimate the number S of the nonlinear functions in their models.
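Equation (66) itself falls on a page not reproduced here, so the following sketch only illustrates the general form of such a generalized linear model: a constant noise-floor term plus a superposition of S Laplace-shaped beam functions, one per source, with amplitude A_s, arrival angle φ_s, and beam width δ_s. The exact functional form in the paper may differ; all numbers are invented.

```python
import numpy as np

def doa_model(theta, A0, amps, phis, deltas):
    """Hypothetical generalized linear DoA model: a noise floor A0 plus a sum
    of S Laplace-shaped beams, each with amplitude, arrival angle, and width."""
    response = np.full_like(theta, A0, dtype=float)
    for A_s, phi_s, d_s in zip(amps, phis, deltas):
        response += A_s * np.exp(-np.abs(theta - phi_s) / d_s)
    return response

# Two simultaneous sources, as in the measurement of Fig. 9 (values invented)
theta = np.radians(np.linspace(-90.0, 90.0, 361))   # azimuth grid
model = doa_model(theta, A0=0.05,
                  amps=[1.0, 0.8],
                  phis=[np.radians(-30.0), np.radians(20.0)],
                  deltas=[np.radians(5.0), np.radians(7.0)])
print(model.max(), model.min())
```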
b. Spherical harmonic sensing model. Xiang and Landschoot (2019) recognize similar problems using a spherical microphone array in the DoA analysis of sound events. They apply spherical harmonics beamforming to formulate a parametric model predicting multiple sound sources as

$$H_S(\Phi_S, \Phi) = \sum_{s=1}^{S} A_s\, g_s(\Phi_s, \Phi), \qquad (67)$$

with A_s representing the strength associated with the sth sound source, and Φ_S = {θ_1, …, θ_S, φ_1, …, φ_S} being the S sound source directions. Instead of basic nonlinear functions, a normalized sound energy function, g_s(Φ_s, Φ),

$$g_s(\Phi_s, \Phi) = \frac{|g(\Phi_s, \Phi)|^2}{\max\!\left[|g(\Phi_s, \Phi)|^2\right]} \qquad (68)$$
exploits the completeness property of the spherical harmonics; with a finite spherical harmonic order, it is expressed by the truncated completeness (Williams, 1999) as

$$g(\Phi_s, \Phi) = 2\pi \sum_{n=1}^{N} \sum_{m=-n}^{n} Y_n^m(\Phi_s)\, Y_n^m(\Phi), \qquad (69)$$

where Φ_s = {θ_s, φ_s} denotes the specific filtering direction, and g(Φ_s, Φ) represents the specific beamforming function oriented towards direction Φ_s over the angular range specified by Φ. The maximum order, N, dictated by the number of microphone channels, determines the sharpness of the beam patterns (Landschoot and Xiang, 2019).
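As a numerical illustration of Eqs. (68) and (69), the following sketch evaluates the normalized energy function over a grid of directions. It makes two assumptions not stated in the extracted text: it uses scipy's complex spherical harmonics, and it conjugates the steering factor so that the truncated sum behaves as a beam peaking at the steering direction; the paper's convention may differ.

```python
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, n, azimuth, polar)

def g(phi_s, theta_s, phi, theta, N):
    """Truncated completeness sum of Eq. (69) for steering direction
    (theta_s, phi_s), evaluated at (theta, phi). The conjugate on the
    steering factor is an assumption about the intended convention."""
    total = np.zeros(np.broadcast(phi, theta).shape, dtype=complex)
    for n in range(1, N + 1):
        for m in range(-n, n + 1):
            total += (np.conj(sph_harm(m, n, phi_s, theta_s))
                      * sph_harm(m, n, phi, theta))
    return 2.0 * np.pi * total

def g_energy(phi_s, theta_s, phi, theta, N):
    """Normalized sound energy function of Eq. (68)."""
    e = np.abs(g(phi_s, theta_s, phi, theta, N)) ** 2
    return e / e.max()

# Beam steered to (polar 60 deg, azimuth 30 deg), truncated at order N = 4
theta = np.linspace(0.0, np.pi, 91)[None, :]       # polar angles
phi = np.linspace(0.0, 2 * np.pi, 181)[:, None]    # azimuth angles
beam = g_energy(np.radians(30.0), np.radians(60.0), phi, theta, N=4)
print("peak value:", beam.max())                    # 1.0 at the steering direction
```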
This section has dealt with three different applications, yet they are similar in that the prediction models are nested analytical expressions in the form of generalized linear models (Bretthorst, 1988; Ó Ruanaidh and Fitzgerald, 1996), consisting in essence of a sum of simple nonlinear functions. The sum of total number S leads to different
FIG. 9. (Color online) Broadband beamforming data and the Laplace model for two simultaneous sound sources (Bush and Xiang, 2018). (a) Broadband directional pattern in response to two noise sources (solid line). The two-sound-source Laplace distribution function model with reasonable values of the model parameters is superimposed onto the experimental data (dashed line). (b) Errors between model and data are finite with mean zero.
FIG. 10. (Color online) Logarithm of the Bayesian evidence among competing models for up to five simultaneous sound sources. Twenty trials of evidence estimation are run per model. In this case, the three-source model correctly shows a significant increase over the one- and two-source models. The four- and five-source models show slight increases in evidence, though not enough to justify their higher complexity (Bush and Xiang, 2018).
otherwise, it is fixed at the physically measured value.
C. Concise representation of head-related transfer functions

Head-related transfer functions (HRTFs) represent directionally dependent responses between an external source and the eardrums of a binaural listener. They are a crucial part of the databases used for auditory virtual reality (Blauert, 2001; Xie, 2013) in communication-acoustics applications. Experimentally measured HRTFs are often represented by several hundred data points in the frequency domain; approximating them by a recursive filter can result in substantially less computation and data storage. Botts et al. (2013) represent them by a pole-zero model realized as digital filters with infinite impulse response (IIR). In order to represent measured HRTFs in a parsimonious form, Botts et al. (2013) applied Bayesian model-based analysis to filter design in the form of IIR filters.
1. Parametric representation

The head-related transfer functions, H(z), can be represented in the frequency domain (Botts et al., 2013) through a zero-pole model,

$$H(z) = G \prod_{l=1}^{L} \frac{z - q_l}{z - p_l} \prod_{m=1}^{M} \frac{z^2 - 2 r_{q,m} \cos\phi_{q,m}\, z + r_{q,m}^2}{z^2 - 2 r_{p,m} \cos\phi_{p,m}\, z + r_{p,m}^2}, \qquad (82)$$

where z = e^{−jω}, q_l, p_l ∈ ℝ are real-valued zeros and poles, respectively,

$$q_m = r_{q,m}\, e^{j\phi_{q,m}}, \qquad p_m = r_{p,m}\, e^{j\phi_{p,m}}, \qquad (83)$$

and G is a real-valued gain factor. In the second product of Eq. (82), the complex-valued zeros, (z − q_m)(z − q_m*), and poles, (z − p_m)(z − p_m*), are represented through Eq. (83) by magnitudes, r_{q,m}, r_{p,m}, and phases, φ_{q,m}, φ_{p,m}, with −1 ≤ cos φ_{p,m}, cos φ_{q,m} ≤ 1. Collectively, N = L + M is the filter order.
2. Likelihood function and parameter prior

Botts et al. (2013) apply this model to derive the filter coefficients of concise IIR filters for the HRTFs. The squared error, ε_k, between the complex-valued HRTFs and the model, with D(ω_k) = HRTF(ω_k) at discrete angular frequency ω_k,
Michalopoulou, Z.-H., and Aunsri, N. (2018). “Environmental inversion
using dispersion tracking in a shallow water environment,” J. Acoust.
Soc. Am. 143, EL188–EL193.
Miki, Y. (1990). “Acoustical properties of porous materials—
Generalizations of empirical models,” J. Acoust. Soc. Jpn. 11, 25–28.
Müller, P., Quintana, F. A., Jara, A., and Hanson, T. (2015). Bayesian Nonparametric Data Analysis (Springer International Publishing, Cham).
Nannuru, S., Koochakzadeh, A., Gemba, K. L., Pal, P., and Gerstoft, P.
(2018). “Sparse Bayesian learning for beamforming using sparse linear
arrays," J. Acoust. Soc. Am. 144, 2719–2729.
Ó Ruanaidh, J. J. K., and Fitzgerald, W. J. (1996). Numerical Bayesian Methods Applied to Signal Processing (Springer-Verlag, New York).
Özdamar, Ö., Eilers, R. E., Miskiel, E., and Widen, J. (1990).
“Classification of audiograms by sequential testing using a dynamic
Bayesian procedure,” J. Acoust. Soc. Am. 88, 2171–2179.
Hurley, P. J. (2015). A Concise Introduction to Logic, 12th ed. (Cengage Learning, Stamford), Chap. 6.
Roncen, R., Fellah, Z. E. A., Lafarge, D., Piot, E., Simon, F., Ogam, E.,
Fellah, M., and Depollier, C. (2018). “Acoustical modeling and Bayesian
inference for rigid porous media in the low-mid frequency regime,”
J. Acoust. Soc. Am. 144, 3084–3101.
Schlittenlacher, J., Turner, R. E., and Moore, B. C. (2018). “Audiogram
estimation using Bayesian active learning,” J. Acoust. Soc. Am. 144,
421–430.
Schwarz, G. (1978). “Estimating the dimension of a model,” Ann. Stat. 6,
461–464.
Shannon, C. E. (1948). "A mathematical theory of communication," Bell Syst. Tech. J. 27, 379–423.
Sivia, D. S., and Skilling, J. (2006). Data Analysis: A Bayesian Tutorial, 2nd ed. (Oxford University Press, Oxford), pp. 103–126, 181–208.
Skilling, J. (2004). "Nested sampling," in Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Vol. 735 of AIP Conference Proceedings, Garching, Germany (25–30 July), pp. 395–405.
Skilling, J. (2006). “Nested sampling for general Bayesian computation,”
Bayesian Anal. 1, 833–859.
Spiegelhalter, D. J., Best, N. G., Carlin, B. P., and van der Linde, A. (2002).
“Bayesian measures of model complexity and fit,” J. R. Stat. Soc. 64,
583–639.
Steininger, G., Dosso, S. E., Holland, C. W., and Dettmer, J. (2014).
“Estimating seabed scattering mechanisms via Bayesian model selection,”
J. Acoust. Soc. Am. 136, 1552–1562.
Stoica, P., and Selen, Y. (2004). “Model-order selection—A review of
information criterion rules,” IEEE Sign. Process. Mag. 21, 36–47.
Sü Gül, Z., Odabaş, E., Xiang, N., and Çalışkan, M. (2019). "Diffusion equation modeling for sound energy flow analysis in multi-domain structures," J. Acoust. Soc. Am. 145, 2703–2717.
Tick, J., Pulkkinen, A., Lucka, F., Ellwood, R., Cox, B. T., Kaipio, J. P., Arridge, S. R., and Tarvainen, T. (2018). "Three dimensional photoacoustic tomography in Bayesian framework," J. Acoust. Soc. Am. 144, 2061–2071.
Vaidyanathan, P. P., and Pal, P. (2011). “Sparse sensing with co-prime sam-
plers and arrays,” IEEE Trans. Sign. Process. 59, 573–586.
Williams, E. (1999). Fourier Acoustics: Sound Radiation and Near Field Acoustical Holography (Academic Press, London), Vol. 2.
Woodward, P. M. (1953). Probability and Information Theory, with Applications to Radar, 2nd ed. (Pergamon Press, London).
Xiang, N. (2015). "Advanced room-acoustics decay analysis," in Acoustics, Information, and Communication: Memorial Volume in Honor of Manfred R. Schroeder, edited by N. Xiang and G. M. Sessler (Springer, Cham), Chap. 3, pp. 33–56.
Xiang, N. (2017a). “Acoustics in coupled volume systems,” in
Architectural Acoustics Handbook, edited by N. Xiang (J. Ross
Publishing, Plantation, FL), Chap. 4, pp. 59–74.
Xiang, N. (2017b). “Room-acoustic energy decay analysis,” in
Architectural Acoustics Handbook, edited by N. Xiang (J. Ross
Publishing, Plantation, FL), Chap. 6, pp. 119–136.
Xiang, N., Bush, D., and Summers, J. E. (2015). “Experimental validation
of a coprime linear microphone array for high-resolution direction-of-
arrival measurements,” J. Acoust. Soc. Am. 137, EL261–EL266.
Xiang, N., and Fackler, C. (2015). “Objective Bayesian analysis in
acoustics,” Acoust. Today 11, 54–61.
Xiang, N., Fackler, C. J., and Hou, Y. (2018). “Sound absorber design of
multilayered microperforated panels using Bayesian inference,” in
Proceedings of InterNoise 2018, Chicago, IL (August).
Xiang, N., and Goggans, P. M. (2001). “Evaluation of decay times in cou-
pled spaces: Bayesian parameter estimation,” J. Acoust. Soc. Am. 110,
1415–1424.
Xiang, N., and Goggans, P. M. (2003). “Evaluation of decay times in cou-
pled spaces: Bayesian decay model selection,” J. Acoust. Soc. Am. 113,
2685–2697.
Xiang, N., Goggans, P. M., Jasa, T., and Robinson, P. (2011). “Bayesian
characterization of multiple-slope sound energy decays in coupled-
volume systems,” J. Acoust. Soc. Am. 129, 741–752.
Xiang, N., and Landschoot, C. (2019). "Bayesian inference for acoustic direction of arrival analysis using spherical harmonics," Entropy 21, 579.
Xiang, N., Robinson, P., and Botts, J. (2010). “Comment on ‘Optimum
absorption and aperture parameters for realistic coupled volume spaces
determined from computational analysis and subjective testing results’ [J.