Neuromagnetic Source Reconstruction and Inverse Modeling
Kensuke Sekihara1, Srikantan S. Nagarajan2
1Department of Electronic Systems and Engineering,
Tokyo Metropolitan Institute of Technology
Asahigaoka 6-6, Hino, Tokyo 191-0065, Japan
2Department of Bioengineering, University of Utah
50 S. Campus Center Drive, 2480 MEB, Salt Lake City, UT 84112-9202, USA
Corresponding author:
Kensuke Sekihara, Ph.D.
Tokyo Metropolitan Institute of Technology
Asahigaoka 6-6, Hino, Tokyo 191-0065, Japan
Tel: 81-42-585-8642
Fax: 81-42-585-8642
E-mail: [email protected]
Contents
8.1 Introduction
8.2 Brief summary of neuromagnetometer hardware
8.3 Forward modeling
    8.3.1 Definitions
    8.3.2 Estimation of the sensor lead field
    8.3.3 Low-rank signals and their properties
8.4 Spatial filter formulation and non-adaptive spatial filter techniques
    8.4.1 Spatial filter formulation
    8.4.2 Resolution kernel
    8.4.3 Non-adaptive spatial filter
    8.4.4 Noise gain and weight normalization
8.5 Adaptive spatial filter techniques
    8.5.1 Scalar minimum-variance-based beamformer techniques
    8.5.2 Extension to eigenspace-projection beamformer
    8.5.3 Comparison between minimum-variance and eigenspace beamformer techniques
    8.5.4 Vector-type adaptive spatial filter
8.6 Numerical experiments: resolution kernel comparison between adaptive and non-adaptive spatial filters
    8.6.1 Resolution kernel for the minimum-norm spatial filter
    8.6.2 Resolution kernel for the minimum-variance adaptive spatial filter
8.7 Numerical experiments: evaluation of adaptive beamformer performance
    8.7.1 Data generation and reconstruction condition
    8.7.2 Results from the minimum-variance vector beamformer
    8.7.3 Results from the vector-extended Borgiotti-Kaplan beamformer
    8.7.4 Results from the eigenspace-projected vector-extended Borgiotti-Kaplan beamformer
8.8 Application of adaptive spatial filter technique to MEG data
    8.8.1 Application to auditory-somatosensory combined response
    8.8.2 Application to somatosensory response: high-resolution imaging experiments
8.9 Acknowledgments
8.1 Introduction
The human brain has approximately 10^10 neurons in its cerebral cortex. Their electrophysiological activity generates weak but measurable magnetic fields outside the scalp.
Magnetoencephalography (MEG) is a method which measures these neuromagnetic fields
to obtain information about these neural activities (Hamalainen et al., 1993; Roberts et
al., 1998; Lewine et al., 1995). Among the various kinds of functional neuroimaging meth-
ods, such a neuro-electromagnetic approach has a major advantage in that it can provide
fine time resolution of millisecond order. Therefore, the goal of neuromagnetic imaging
is to visualize neural activities with such fine time resolution and to provide functional
information about brain dynamics. To attain this goal, one technical hurdle must be over-
come. That is, an efficient method to reconstruct the spatio-temporal neural activities
from neuromagnetic measurements needs to be developed. Toward this goal, a number
of algorithms for reconstructing spatio-temporal source activities have been investigated
(Baillet et al., 2001).
This chapter deals with this neuromagnetic reconstruction problem. However, we
do not provide a general review of various algorithms for this reconstruction problem.
Instead, we describe a particular class of source reconstruction techniques referred to as the
spatial filter, which allows the spatio-temporal reconstruction of neural activities without
assuming any kind of source model. Furthermore, among the spatial filter techniques, we
focus on adaptive spatial filter techniques. These techniques were originally developed in
the fields of array signal processing, including radar, sonar, and seismic exploration, and
have been widely used in such fields (van Veen et al., 1988). Nonetheless, the adaptive
spatial filter techniques are relatively less acknowledged in the MEG/EEG community.
In this chapter, we also formulate reconstruction techniques based on linear least-
squares methods (Hamalainen and Ilmoniemi, 1984) as the non-adaptive spatial filter.
This formulation enables us to compare the least-squares-based techniques with the adap-
tive spatial filter techniques on a common, unified basis. Actually, we compare these two
types of techniques by using the same figure of merit called the resolution kernel, and
show that the adaptive techniques can provide much higher spatial resolution than the
least-squares-based methods.
The organization of this chapter is as follows: Following a brief review of the neu-
romagnetometer hardware in Section 8.2, we describe the forward modeling and some
basic properties of MEG signals in Section 8.3. Section 8.4 presents the formulation of
the linear least-squares-based methods as the non-adaptive spatial filter. Section 8.5 de-
scribes the adaptive spatial filter techniques. In Section 8.6, we present a quantitative
comparison between the adaptive and non-adaptive methods using the resolution kernel
criterion. Section 8.7 presents a series of numerical experiments on the adaptive spatial
filter performance. In Section 8.8, we demonstrate the effectiveness of the adaptive spatial
filter techniques by applying them to two sets of MEG data.
8.2 Brief summary of neuromagnetometer hardware
It is generally believed that the neuromagnetic field is generated by the post-synaptic
ionic current in the pyramidal cells of cortical layer III. Here, neuronal cells are organized
into a so-called columnar structure, and the synchronous activity of these cells results in
superimposed magnetic fields strong enough to be measured outside the human head. The
average intensity of this neuromagnetic field, however, is around a few hundred femto-
Tesla (fT)∗. To measure such an extremely weak magnetic field, a neuromagnetometer uses
a special device called a super-conducting quantum interference device (SQUID) (Clarke,
1994; Drung et al., 1991). This device is so sensitive that it can in principle measure a
single quantum of magnetic flux.
When measuring such a weak neuromagnetic field, a major problem arises from the
background environmental magnetic noise. Background magnetic noise is generated by
electronic appliances such as computers, power lines, cars, and elevators. Such noise is
common at a site where the neuromagnetometer is installed, and its average intensity
is usually five to six orders of magnitude greater than the neuromagnetic field. One
obvious way to reduce it is to use a magnetically-shielded room. However, a typical
medium-quality shielded room, most commonly used in MEG measurements, can reduce
the background noise by up to only three orders of magnitude.
To further reduce the background noise, neuromagnetometers are usually equipped
with a special type of detector configuration called a gradiometer (Hamalainen et al.,
1993; Lewine et al., 1995). The first-order gradiometer consists of two coils of exactly the
same area; they are connected in series, but wound in opposite directions. Therefore, the
gradiometer cancels the electric current induced by the background noise fields because
the sources of such noise fields are generally far from the gradiometer and induce nearly
the same amount of electric current in both coils. The gradiometer can achieve two
to three orders of magnitude reduction in the background noise, and can remove the
influence of the residual noise field within the magnetically-shielded room. The reduction
∗One fT is equivalent to 1 × 10^−15 T.
performance, however, depends on the manufacturing precision of the two coils.
Aside from the gradiometer, several other methods of removing the external noise
have been investigated (Vrba and Robinson, 2001; Adachi et al., 2001). Many of these
methods use extra sensors that measure only the noise fields. (Such sensors are usually
located apart from the sensor array which measures the neuromagnetic fields.) Quasi-
real-time electronics then perform the on-line subtraction between the outputs from the
extra sensors and from the regular sensors. A method that does not require additional
sensor channels has also been developed. This method applies a technique called signal-
space projection, and can be implemented completely as a post-processing procedure
(Parkkonen et al., 1999). These external noise cancellation methods make it possible to
use a magnetometer as a sensor coil instead of the gradiometer. They also permit the
measurement of neuromagnetic fields outside a magnetically-shielded room.
The most remarkable advance in neuromagnetometer hardware over the last ten
years has been the rapid increase in the number of sensors. Since neuromagnetometers
with a 37-channel sensor array became commercially available in the late 80s (Lewine
et al., 1995), the number of sensors in commercially available neuromagnetometers has
constantly increased. The latest neuromagnetometers are equipped with 200-300 sensor
channels, with whole-head coverage of the sensor array.
8.3 Forward modeling
8.3.1 Definitions
Let us define the magnetic field measured by the mth detector coil at time t as bm(t), and
a column vector b(t) = [b1(t), b2(t), . . . , bM(t)]T as a set of measured data where M is the
total number of detector coils and the superscript T indicates the matrix transpose. A
spatial location is represented by a three-dimensional vector r: r = (x, y, z). The source-
current density at r and time t is defined as a three-dimensional column vector s(r, t).
The magnitude of the source current is denoted as s(r, t) (= |s(r, t)|), and the orientation
of the source is defined as a three-dimensional column vector η(r, t) = s(r, t)/s(r, t) =
[ηx(r, t), ηy(r, t), ηz(r, t)]^T, whose ξ component (where ξ equals x, y, or z in this chapter) is equal to the cosine of the angle between the direction of the source moment and the ξ direction.
Let us define lξm(r) as the mth sensor output induced by the unit-magnitude source
located at r and directed in the ξ direction. The column vector lξ(r) is defined as lξ(r) =
[lξ1(r), lξ2(r), · · · , lξM (r)]T . Then, we define a matrix which represents the sensitivity of
the whole sensor array at r as L(r) = [lx(r), ly(r), lz(r)]. The mth row of L(r), lm(r) =
[lxm(r), lym(r), lzm(r)], represents the sensitivity at r of the mth sensor. Then, using the
superposition law, the relationship between b(t) and s(r, t) is expressed as
b(t) = ∫ L(r)s(r, t) dr + n(t).   (8.1)
Here, n(t) is the noise vector at t. The sensor sensitivity pattern, represented by the
matrix L(r), is customarily called the sensor lead field (Hamalainen et al., 1993; Sarvas,
1987), and this matrix is called the lead-field matrix. We define, for later use, the lead-field
vector in the source-moment direction as l(r), which is obtained by using l(r) = L(r)η(r).
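The discretized form of this relationship is easy to sketch numerically. In the toy example below, the lead-field entries are random placeholders rather than a real sensor geometry, and the grid size, source index, and amplitudes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 64, 500                          # sensors, source-grid points (hypothetical)
L = rng.standard_normal((M, 3 * N))     # lead-field matrix: [l_x, l_y, l_z] per grid point

s = np.zeros(3 * N)                     # source-moment vector over the grid
s[3 * 10] = 5e-9                        # one x-oriented source at grid point 10 (arbitrary)

n = 1e-10 * rng.standard_normal(M)      # additive sensor noise n(t) at one time instant
b = L @ s + n                           # discretized Eq. (8.1): b(t) = L s(t) + n(t)
print(b.shape)
```

With a real forward model, the columns of L would come from the lead-field calculation described in the next subsection.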
8.3.2 Estimation of the sensor lead field
The problem of the source reconstruction is the problem of obtaining the best estimate
of s(r, t) from the array measurement b(t). It is thus apparent that, to solve this re-
construction problem, we need to have a reasonable estimate of the lead-field matrix. In
this subsection, we describe how we can obtain the lead field using the spherically sym-
metric homogeneous conductor model, which is most commonly used in estimating the
MEG sensor lead field. Also, we briefly mention a realistically-shaped volume-conductor
model, which can generally provide more accurate lead field estimates, particularly for
non-superficial brain regions. Estimation of the lead-field is called the forward problem,
because this is equivalent to estimating the magnetic field from a point source located at
a known location; this problem stands in contrast to the inverse problem in which the
source configuration is estimated from a known magnetic field distribution.
Let us define the electric potential as V , the magnetic field as B, and the electric
current density as j. The source current s(r) defined in Section 8.3.1 is called the primary
current, (alternatively called the impressed current), which is directly generated from the
neural activities. There is another type of electric current called the return current or
volume current. It results from the electric field in the conducting medium, and it is not
directly caused by the neural activities. Defining the conductivity as ρ, the return current
is expressed as −ρ∇V where −∇V is equal to the electric field. Thus, the total electric
current j is expressed as
j(r) = s(r) − ρ∇V. (8.2)
The relationship between the total current j and the resultant magnetic field B is
given by the Biot-Savart law,
B(r) = (µ0/4π) ∫ j(r′) × (r − r′)/|r − r′|³ dr′,   (8.3)
where µ0 is the magnetic permeability of free space. In order to derive the analytical expression for the relationship between the primary source current and the magnetic
field, we first consider the case in which a whole space is filled with a conductor with
constant conductivity ρ. In this case, it is easy to show that the following relationship
holds:
B0(r) = (µ0/4π) ∫ s(r′) × (r − r′)/|r − r′|³ dr′.   (8.4)
Here the magnetic field is denoted as B0 for later convenience. Note that Eq. (8.4) is
similar to Eq. (8.3). The only difference is that the total current density j(r) is replaced
by the primary current density s(r).
We then proceed to deriving a formula for the magnetic field outside a spherically-
symmetric homogeneous conductor, B. To do so, we make use of the fact that the radial
component of the magnetic field B is equal to the radial component of B0 in Eq. (8.4)
(Cuffin and Cohen, 1977; Sarvas, 1987), i.e.,
B · f_r = B0 · f_r,   (8.5)

where f_r is the unit vector in the radial direction defined as f_r = r/|r|. (Note that we
set the coordinate origin at the sphere origin.) The relationship ∇×B = 0 holds outside
the volume conductor, because there is no electric current. Thus, B can be expressed in
terms of the magnetic scalar potential U(r),
B = −µ0∇U(r). (8.6)
This potential function is derived from
U(r) = (1/µ0) ∫₀^∞ B(r + τf_r) · f_r dτ = (1/µ0) ∫₀^∞ B0(r + τf_r) · f_r dτ,   (8.7)
where we use the relationship in Eq. (8.5). By substituting Eq. (8.4) into Eq. (8.7), we
finally obtain
U(r) = −(1/4π) ∫ [s(r′) × r′ · r / A] dr′,   (8.8)

where

A = |r − r′|(|r − r′||r| + |r|² − r′ · r).
The formula for B is then obtained by substituting Eq. (8.8) into Eq. (8.6), i.e.,
B(r) = (µ0/4π) ∫ (1/A²)[A s(r′) × r′ − (s(r′) × r′ · r)∇A] dr′,   (8.9)

where

∇A = [|r − r′|²/|r| + (r − r′) · r/|r − r′| + 2|r − r′| + 2|r|] r − [|r − r′| + 2|r| + (r − r′) · r/|r − r′|] r′.
To obtain the component of the lead-field matrix, lξm(r0), we first calculate B(rm) (where rm is the mth sensor location) by using Eq. (8.9) with s(r′) = f_ξ δ(r′ − r0), where f_ξ is the unit vector in the ξ direction. When the sensor coil is a magnetometer coil (which measures only the magnetic field component normal to the sensor coil), lξm(r0) is calculated from lξm(r0) = B(rm) · f_m^coil, where f_m^coil is a unit vector expressing the normal direction of the mth sensor coil. When the sensor coil is a first-order axial gradiometer with a baseline of D, lξm(r0) is calculated from lξm(r0) = B(rm) · f_m^coil − B(rm + D f_m^coil) · f_m^coil.
This lξm(r0) represents the sensitivity of the mth sensor to the primary current density
located at r0 and directed in the ξ direction.
One of the important properties of the lead field obtained using the spherically-
symmetric homogeneous conductor model is that if s(r0) and r0 are parallel, i.e., if the
primary current source is oriented in the radial direction, no magnetic fields are generated
outside the spherical conductor from such a radial source. Also, we can see that when
r0 approaches the center of the sphere, lξm(r0) becomes zero, and no magnetic field is
generated outside the conductor from a source at the origin.
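This field computation can be written compactly using the closed-form solution for a current dipole in a spherically symmetric conductor (Sarvas, 1987), which is the integrand form of Eq. (8.9) evaluated for s(r′) = q δ(r′ − r0). The sketch below is a minimal implementation; the sensor and dipole coordinates are illustrative assumptions:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space [T·m/A]

def sarvas_field(r, r0, q):
    """Field B(r) outside a spherically symmetric homogeneous conductor
    (centered at the origin) from a current dipole with moment q at r0."""
    a_vec = r - r0
    a, rn = np.linalg.norm(a_vec), np.linalg.norm(r)
    A = a * (rn * a + rn**2 - np.dot(r0, r))                   # the quantity A above
    grad_A = ((a**2 / rn + np.dot(a_vec, r) / a + 2 * a + 2 * rn) * r
              - (a + 2 * rn + np.dot(a_vec, r) / a) * r0)      # the gradient of A
    qxr0 = np.cross(q, r0)
    return MU0 / (4 * np.pi * A**2) * (A * qxr0 - np.dot(qxr0, r) * grad_A)

# A radially oriented dipole (q parallel to r0) produces no external field,
# because q × r0 vanishes:
r = np.array([0.0, 0.0, 0.12])          # sensor 12 cm from the sphere center
r0 = np.array([0.0, 0.0, 0.07])         # dipole 7 cm from the center
print(sarvas_field(r, r0, np.array([0.0, 0.0, 1e-8])))   # radial source -> zero field
```

The same silent-source property discussed in the text thus follows directly from the q × r0 factor in the formula.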
The spherically-symmetric homogeneous conductor is generally satisfactory in ex-
plaining the measured magnetic field when only superficial sources exist, i.e., when all
sources are located relatively close to the sensor array. This is because the curvature of
the upper half of the brain is well approximated by a sphere. However, for sources located
in lower regions of the brain, the model becomes inaccurate because the curvature of the
lower brain regions significantly differs from a sphere. Such errors caused by misfits of the
model may be reduced by using a spheroidally-symmetric conductor model (Cuffin and
Cohen, 1977) or an eccentric sphere conductor model (Cuffin, 1991).
More fundamental improvements can be obtained by using realistically-shaped volume-
conductor models. Such conductor models can be constructed by first extracting the brain
boundary surface from the subject’s 3D MRI. We denote this surface Σ. We then use the
following Geselowitz formula (Geselowitz, 1970; Sarvas, 1987) to calculate magnetic fields
outside the volume conductor:
B(r) = B0(r) − (µ0ρ/4π) ∫_Σ V(r′) f_Σ(r′) × (r − r′)/|r − r′|³ dS,   (8.10)
where the integral on the right-hand side indicates the surface integral over Σ; r′ represents
a point on Σ, and fΣ(r′) is a unit vector perpendicular to Σ at r′. Here, we assume that
the conductivity within the brain is uniform, and denote it ρ. The second term on the
right-hand side of Eq. (8.10) represents the influence from the volume current, and to
calculate this term, we need to know V (r) on Σ, which is obtained by solving (Sarvas,
1987)
(ρ/2) V(r) = V0(r) − (ρ/4π) ∫_Σ V(r′) f_Σ(r′) · (r − r′)/|r − r′|³ dS,   (8.11)
where
V0(r) = (1/4π) ∫ s(r′) · (r − r′)/|r − r′|³ dr′,   (8.12)
and r and r′ are points on the boundary surface.
Because the brain boundaries are irregular, the magnetic field B(r) can only be
obtained numerically using the boundary element method (BEM). In this calculation, we
first estimate the electric potential V (r) on the brain boundary surface Σ by numerically
solving Eq. (8.11). We then calculate magnetic fields outside the brain using Eq. (8.10).
The details of these numerical calculations are beyond the scope of this chapter; they can be found in (Barnard et al., 1967; Hamalainen and Sarvas, 1989). The numerical
method mentioned so far assumes uniform conductivity within the brain boundary, and
is called single-compartment BEM. It is usually used in estimating MEG sensor lead
fields (Fuchs et al., 1998). It can be extended to the multiple-compartment BEM and
such models are usually used for estimating the EEG sensor lead fields. The BEM-based
realistically-shaped volume conductor models generally provide significant improvements
in the accuracy of the forward calculation particularly for deep sources (Fuchs et al.,
1998; Cuffin, 1996), although they are computationally expensive. Improvements in the
computational efficiency of the BEM have been reported (Bradley et al., 2001; van't Ent et al., 2001).
8.3.3 Low-rank signals and their properties
Let us consider specific cases where the primary current consists of localized discrete
sources. The number of sources is denoted Q, and we assume that Q is less than the
number of sensors M . The locations of these sources are denoted r1, r2, . . . , rQ. The
source-moment distribution is then expressed as
s(r, t) = Σ_{q=1}^{Q} sD(rq, t) δ(r − rq),   (8.13)

where sD(rq, t) = ∫ s(r, t) dr, and this integral extends over the small region around rq in which
the qth source is confined. This type of localized source is called the equivalent current
dipole with moment sD. Here, sD is called the moment because it has a dimension of
current×distance. The basis underlying the equivalent current dipole is physiologically
plausible (Okada et al., 1987), and sources of neuromagnetic fields are often modeled with
the current dipoles.
Since there is little advantage in explicitly distinguishing between the current density s(rq, t) and the current moment sD(rq, t), for simplicity we keep the same notation s(rq, t) to express the current moment. Then, the Q-dimensional source magnitude vector is defined
as ν(t) = [s(r1, t), s(r2, t), . . . , s(rQ, t)]^T. We define a 3Q × Q matrix that expresses the orientations of all Q sources as Ψ(t) such that

Ψ(t) = diag[η(r1, t), η(r2, t), . . . , η(rQ, t)],

that is, the block-diagonal matrix whose qth diagonal block is the 3 × 1 orientation vector η(rq, t).
The composite lead-field matrix for the entire set of Q sources is defined as
Lc = [L(r1), L(r2), · · · ,L(rQ)]. (8.14)
Then, substituting Eq. (8.13) into Eq. (8.1), we have the discrete form of the basic rela-
tionship between b(t) and ν(t) such that
b(t) = [LcΨ(t)]ν(t) + n(t). (8.15)
Let us define the measurement covariance matrix as Rb; i.e., Rb = 〈b(t)b^T(t)〉, where 〈·〉 indicates the ensemble average. (This ensemble average is usually replaced by
the time average over a certain time window.) Let us also define the covariance matrix
of the source-moment activity as Rs; i.e., Rs = 〈Ψ(t)ν(t)ν^T(t)Ψ^T(t)〉. Then, using
Eq. (8.15), we get the relationship between the measurement covariance matrix and the
source-activity covariance matrix such that
Rb = Lc Rs Lc^T + σ²I,   (8.16)
where the noise in the measured data is assumed to be white Gaussian noise with a
variance of σ2, and I is the M × M identity matrix.
Let us define the kth eigenvalue and eigenvector of Rb as λk and ek, respectively. Let
us assume, for simplicity, that all sources have fixed orientations. Then, unless some source
activities are perfectly correlated with each other, the rank of Rs is equal to the number
of sources Q. Therefore, according to Eq. (8.16), Rb has Q eigenvalues greater than σ2
and M −Q eigenvalues that are equal to σ2. The signal whose covariance matrix has such
properties is referred to as the low-rank signal (Paulraj et al., 1993; Sekihara et al., 2000).
Let us define the matrices ES and EN as ES = [e1, · · · ,eQ] and EN = [eQ+1, · · · ,eM ].
The column span of ES is the maximum-likelihood estimate of the signal subspace of
Rb, and the span of EN is that of the noise subspace (Scharf, 1991). In the low-rank
signals, the measurement covariance matrix Rb can be decomposed into its signal and
noise subspace components; i.e.,
Rb = ES ΛS ES^T + EN ΛN EN^T.   (8.17)
Here, we define the matrices ΛS and ΛN as
ΛS = diag[λ1, . . . , λQ] and ΛN = diag[λQ+1, . . . , λM ], (8.18)
where diag[· · · ] indicates a diagonal matrix whose diagonal elements are equal to the
entries in the brackets.
The most important property of the low-rank signal is that, at source locations, the
lead-field matrix is orthogonal to the noise subspace of Rb. This can be understood by
first considering that
(Rb − σ²I)ek = Lc Rs Lc^T ek = 0, for k = Q + 1, · · · , M.   (8.19)
Since Lc is a full column-rank matrix, and we assume that Rs is a full rank matrix, the
above equation gives
Lc^T ek = [L(r1), L(r2), · · · , L(rQ)]^T ek = 0, for k = Q + 1, · · · , M.   (8.20)
This implies that the lead-field matrices at the true source locations are orthogonal to
any noise level eigenvector; that is, they are orthogonal to the noise subspace (Schmidt,
1981), i.e.,
L^T(rq) EN = 0, for q = 1, . . . , Q.   (8.21)
Since the equation above holds, the relationship l^T(rq)EN = 0 also holds. These orthogonality relationships are the basis of the eigenspace-projection adaptive beamformer de-
scribed in Section 8.5.2, as well as the basis of the well-known MUSIC algorithm (Schmidt,
1986; Schmidt, 1981; Mosher et al., 1992).
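These subspace properties are easy to verify numerically. The sketch below builds Rb directly from Eq. (8.16), using randomly generated lead-field vectors as an idealized stand-in for a real sensor array, and checks both the eigenvalue structure and the orthogonality relationship of Eq. (8.21):

```python
import numpy as np

rng = np.random.default_rng(1)
M, Q = 20, 3                          # sensors, sources (Q < M)
Lc = rng.standard_normal((M, Q))      # lead-field vectors in the fixed source orientations
Rs = np.diag([2.0, 1.5, 1.0])         # uncorrelated sources -> full-rank Rs
sigma2 = 0.1                          # white-noise variance

Rb = Lc @ Rs @ Lc.T + sigma2 * np.eye(M)   # Eq. (8.16)

lam, E = np.linalg.eigh(Rb)           # eigenvalues in ascending order
E_N = E[:, :M - Q]                    # noise subspace (the M - Q smallest eigenvalues)

print(np.allclose(lam[:M - Q], sigma2))    # M - Q eigenvalues equal sigma^2: True
print(np.abs(Lc.T @ E_N).max() < 1e-10)    # Eq. (8.21): lead field ⊥ noise subspace: True
```

With real data, Rb would be a sample covariance and the noise-level eigenvalues would only be approximately equal, but the same subspace separation underlies the eigenspace and MUSIC methods cited above.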
8.4 Spatial filter formulation and non-adaptive spatial filter techniques
8.4.1 Spatial filter formulation
Spatial filter techniques estimate the source current density (or the source moment) by
applying a linear filter to the measured data. Because the source is a three-dimensional
vector quantity, there are two ways to implement the spatial filter approach in the neuro-
magnetic source reconstruction. One is the scalar spatial filter and the other is the vector
spatial filter.
In the scalar approach, we use a single set of weights that characterizes the properties
of the spatial filter, and define the set of the filter weights as a column vector w(r,η) =
[w1(r,η), w2(r,η), . . . , wM(r,η)]T . Here the weight vector depends both on the location
r and the source orientation η. This weight vector w(r,η) should only pass the signal
from a source with a particular location r and an orientation η. The weight vector rejects
not only the signals from other locations but also the signal from the location r if the
orientation of the source at r differs from η. Then, the magnitude of the source moment
is estimated using a simple linear operation,
ŝ(r, t) = w^T(r, η)b(t) = Σ_{m=1}^{M} wm(r, η)bm(t),   (8.22)

where the estimate of the source magnitude is denoted ŝ(r, t). When using the scalar-type
beamformer in Eq. (8.22), we need to first determine the beamformer orientation η to
estimate the source activity at a specific location r. However, this η is generally unknown,
although several techniques have been developed to obtain the optimum estimate of the
source orientation (Sekihara and Scholz, 1996; Mosher et al., 1992).
The vector spatial filter uses a weight matrix W (r) that contains three weight vectors
wx(r), wy(r), and wz(r), which respectively estimate the x, y, and z components of the
source moment. That is, the source current vector is estimated from
ŝ(r, t) = W^T(r)b(t) = [wx(r), wy(r), wz(r)]^T b(t),   (8.23)
where ŝ(r, t) is the estimate of the source current vector. The vector spatial filter estimates the source orientation as well as the source magnitude.
The application of a spatial filter weight artificially focuses the sensitivity of a sensor
array on a specific location r, and this location r is a controllable parameter. Therefore,
in a post-processing procedure, we can scan the focused region over a region of interest
to reconstruct the entire source distribution.
8.4.2 Resolution kernel
The major problem with spatial filter techniques is how to derive a weight vector with
desirable properties. To develop such weight vectors, we need a criterion which charac-
terizes how appropriately the weight has been designed. The resolution kernel can play
this role. Combining Eqs. (8.1) and (8.23), we obtain the relationship,
ŝ(r, t) = ∫ W^T(r)L(r′)s(r′, t) dr′ = ∫ R(r, r′)s(r′, t) dr′,   (8.24)
where,
R(r, r′) = W^T(r)L(r′).   (8.25)
This R(r, r′) is called the resolution kernel, which expresses the relationship between
the original and estimated source distributions (Menendez et al., 1997; Menendez et al.,
1996; Lutkenhoner and Menendez, 1997). Therefore, the resolution kernel can provide a
measure of the appropriateness of the filter weight. In other words, the weight must be
chosen so that the resolution kernel has a desirable shape, which generally satisfies the
three properties: (i) peak at the source location, (ii) a small main-lobe width, and (iii) a
low side-lobe amplitude.
The most important property among them is that the kernel should peak at the source location. Only in this case can the reconstructed source distribution be interpreted as a smoothed version of the true source distribution. If this condition is not met, the reconstructed source distribution contains systematic bias and may be totally different from the true source distribution. The kernel should also have a small
main-lobe width, so that the reconstruction results have high spatial resolution. When
the kernel has a lower side-lobe amplitude, the results have less systematic noise and
artifacts.
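For any given weight, the resolution kernel of Eq. (8.25) is a simple matrix product that can be tabulated over the whole source grid. The sketch below does this with random placeholder lead fields and a unit-gain weight constructed so that the kernel equals the identity at the targeted location; with a real forward model, inspecting the rows of R away from the target reveals the main-lobe width and side-lobe level:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 32, 100                         # sensors, grid points (placeholders)
L_grid = rng.standard_normal((N, M, 3))    # L(r_n): an M x 3 lead-field matrix per point

n0 = 42                                # target location r (arbitrary grid index)
L0 = L_grid[n0]
W = L0 @ np.linalg.inv(L0.T @ L0)      # unit-gain weight: W^T L(r_{n0}) = I

# Resolution kernel R(r, r') = W^T(r) L(r') evaluated at every grid point r'
R = np.einsum('mi,nmj->nij', W, L_grid)

print(R.shape)                         # one 3 x 3 kernel matrix per grid point
print(np.allclose(R[n0], np.eye(3)))   # unit response at the targeted location: True
```

The unit-gain construction used here is only one illustrative choice; the spatial filters discussed in the following sections each produce their own weight and hence their own kernel shape.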
8.4.3 Non-adaptive spatial filter
Minimum norm spatial filter
There are generally two types of spatial filter techniques. One is a non-adaptive method
in which the filter weight is independent of the measurements. The other is an adaptive
method in which the filter weight depends on the measurements. The primary interest of
this chapter is the application of the adaptive spatial filter technique to the neuromagnetic
source reconstruction. However, before proceeding to describe the adaptive spatial filter,
we briefly describe the non-adaptive spatial filter in order to clarify the difference between
these two types of spatial filter methods.
The best-known non-adaptive spatial filter is the minimum-norm estimate (Hamalainen
and Ilmoniemi, 1984; Hamalainen and Ilmoniemi, 1994; Wang et al., 1992; Graumann,
1991). The filter weight can be obtained by the following minimization:
min_W ∫ ||R(r, r′) − δ(r − r′)||² dr′,   (8.26)
where δ(r) is the three-dimensional delta function. That is, the weight is obtained by making the resolution kernel as close as possible to the delta function; the result is expressed as
W^T(r) = [wx(r), wy(r), wz(r)]^T = L^T(r)G⁻¹ = [lx(r), ly(r), lz(r)]^T G⁻¹.   (8.27)
The estimated current density is then expressed as
ŝ(r, t) = W^T(r)b(t) = L^T(r)G⁻¹b(t).   (8.28)
The matrix G is often referred to as the gram matrix. The (p, q) element of G is given by calculating the overlap between the lead fields of the pth and qth sensors,
Gp,q = ∫ lp(r) lq^T(r) dr.   (8.29)
Unfortunately, in biomagnetic instruments, the overlaps between the adjacent sensor
lead fields are very large, as depicted in Fig. 1(a). As a result, Gp,q has a more-or-less
similar value for various pairs of p and q. Consequently, the matrix G is generally very
poorly conditioned. This fact greatly degrades the performance of this non-adaptive spatial filter method, because the method requires computing the inverse of G, a computation that is highly error-prone when G is nearly singular.
This gram matrix G is usually numerically calculated by introducing pixel grids
throughout the source space. Let us denote the locations of the pixel grid points r1, r2, . . . , rN ,
and the composite lead-field matrix for the entire pixel grids LN :
LN = [L(r1), L(r2), . . . ,L(rN)].
Then, the matrix G is calculated from
G = LN LN^T.   (8.30)
However, to avoid the numerical instability when inverting it, the following regularized
version is usually used:
G = LN LN^T + γI,   (8.31)
where γ is the regularization parameter. The final solution is expressed as
ŝ(r, t) = L^T(r)(LN LN^T + γI)⁻¹ b(t).   (8.32)
The regularization, however, inevitably introduces considerable amounts of smearing into
the reconstruction results. Besides, the solution obtained using the minimum-norm spatial
filter suffers a geometric bias in that the current estimates are forced to be closer to the
sensor array than their actual locations.
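In discretized form, Eqs. (8.31) and (8.32) amount to a few lines of linear algebra. The sketch below uses random placeholder lead fields and an arbitrary regularization value, so it illustrates the mechanics only, not realistic MEG behavior:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 32, 60                          # sensors, pixel-grid points (placeholders)
L_N = rng.standard_normal((M, 3 * N))  # composite lead-field matrix over the pixel grid

s_true = np.zeros(3 * N)               # simulate one time instant ...
s_true[3 * 7] = 1.0                    # ... with a single source at pixel 7 (arbitrary)
b = L_N @ s_true + 0.05 * rng.standard_normal(M)

gamma = 1.0                            # regularization parameter (arbitrary)
G = L_N @ L_N.T + gamma * np.eye(M)    # regularized gram matrix, Eq. (8.31)
s_hat = L_N.T @ np.linalg.solve(G, b)  # Eq. (8.32) via a linear solve, avoiding G^{-1}

power = np.sum(s_hat.reshape(N, 3) ** 2, axis=1)   # per-pixel power of the estimate
print(power.shape)
```

In practice, the choice of γ trades numerical stability against the smearing discussed above, and the solve-based formulation is preferred over forming the inverse of G explicitly.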
It should be emphasized, however, that the poor performance of the minimum-norm spatial filter arises not because the method itself has a serious defect, but because a mismatch exists between the method and the biomagnetic instruments. This can be understood by
considering a situation for other imaging modalities such as the X-ray computed tomogra-
phy (CT). As is shown in Fig. 1(b), the overlaps between the lead fields of different sensors
are very small for the X-ray CT. As a result, the matrix G is close to the identity matrix
and the non-adaptive spatial filter method works quite well. Indeed, the minimum-norm
spatial filter technique is considered identical to the filtered-backprojection algorithm
(Herman, 1980) used for image reconstruction from projections in commercial X-ray CT
systems. In the next subsection, we briefly describe investigations into ways of improving
the performance of the minimum-norm-based spatial filter.
Least-squares-based interpretation of the minimum-norm methods
The minimum-norm spatial filter is commonly derived by minimizing the least-squares-
based cost function. Actually, this least-squares-based interpretation is much more pop-
ular than the spatial-filter-based interpretation described in Section 8.4.3. Namely, the
solution in Eq. (8.32) minimizes the cost function

F = ‖b − L_N s_N‖^2 + γ‖s_N‖^2,    (8.33)

where s_N is a source vector whose elements consist of the current estimates at the pixel
points, i.e., s_N = [s(r_1, t), ..., s(r_N, t)]^T. In Eq. (8.33), the first term on the right-hand
side is the least-squares error term and the second term is the total sum of the current
norm. Therefore, the optimum solution minimizes the total current norm as well as the
least-squares error. This is why the method is often referred to as the minimum-norm
estimate.
The trick to improving the performance of the minimum-norm method is to use a
more general form of the cost function, expressed as

F = (b − L_N s_N)^T Υ (b − L_N s_N) + γ s_N^T Φ s_N,    (8.34)

where Φ represents some kind of weighting applied to the solution vector s_N, and Υ
represents the weighting applied to the residual of the least-squares term. The solution
derived by minimizing this cost function is expressed as

s_N = Φ^{-1} L_N^T (L_N Φ^{-1} L_N^T + γΥ^{-1})^{-1} b.    (8.35)
In this solution, the gram matrix becomes G = L_N Φ^{-1} L_N^T + γΥ^{-1}. The inclusion of the
matrices Φ and Υ gives a greater degree of freedom in regularizing G, and by choosing
appropriate forms for these matrices, the numerical instability can be improved without
introducing unwanted side effects such as image blur.
In general, the matrix Φ is derived from a desired property of the solution. One
widely used example is the minimum weighted-norm constraint, in which we use Φ = Φ^L,
whose off-diagonal elements are zero and whose diagonal elements are given by

Φ^L_{3k,3k} = 1/‖l_x(r_k)‖^2,  Φ^L_{3k+1,3k+1} = 1/‖l_y(r_k)‖^2,  and  Φ^L_{3k+2,3k+2} = 1/‖l_z(r_k)‖^2.    (8.36)

This weight Φ^L can reduce the geometric bias of the minimum-norm solution to some
extent by compensating for the variation in the lead-field norm. Low-resolution electromagnetic
tomography (LORETA) (Pascual-Marqui and Michel, 1994; Wagner et al., 1996) is
another popular application of this particular type of Φ. It seeks the maximally smooth
solution by using Φ = Φ^L Φ^R, where Φ^R is the Laplacian matrix.
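A minimal sketch of the weighted minimum-norm solution of Eq. (8.35), using the lead-field normalization Φ^L of Eq. (8.36) with Υ = I. The lead field is again a random stand-in, and the value of γ is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 148, 200
L_N = rng.standard_normal((M, 3 * N))   # columns are l_x(r_k), l_y(r_k), l_z(r_k)
b = rng.standard_normal(M)

def weighted_minimum_norm(L_N, b, gamma):
    """Eq. (8.35) with Upsilon = I and the diagonal weight of Eq. (8.36):
    Phi^{-1} is diagonal with entries ||l||^2, one per lead-field column."""
    phi_inv = np.sum(L_N ** 2, axis=0)                           # diagonal of Phi^{-1}
    G = (L_N * phi_inv) @ L_N.T + gamma * np.eye(L_N.shape[0])   # L_N Phi^{-1} L_N^T + gamma I
    return phi_inv * (L_N.T @ np.linalg.solve(G, b))

s_w = weighted_minimum_norm(L_N, b, 1.0)
```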
Bayesian-type estimation methods determine Φ based on prior knowledge of the neu-
ral current distribution (Schmidt et al., 1999; Baillet and Garnero, 1997). Determination
of Φ by fMRI has been proposed (Liu et al., 1998; Dale et al., 2000). The matrix Υ is
generally determined from the noise properties. When the measurements contain non-
white noise and we know the noise covariance matrix, Υ is usually set to the inverse of
the noise covariance matrix. The determination of the optimum forms for the matrices Φ
and Υ has been an active research topic, and many investigations have been performed
in this direction. However, we will not digress into the details of these investigations.
Instead, in the following section we describe a different approach, known as the adaptive
spatial filter, which does not use the gram matrix of the lead field.
8.4.4 Noise gain and weight normalization
The spatial filter weights determine the gain for the noise in the reconstructed results. In
the scalar spatial filter techniques, the output noise power due to the noise input, Pn, is
given by
P_n = w^T(r)〈n(t)n^T(t)〉w(r) = w^T(r) R_n w(r),    (8.37)

where R_n is the noise covariance matrix. When the noise is uncorrelated white Gaussian
noise, the output noise power is equal to

P_n = σ^2 w^T(r) w(r) = σ^2 ‖w(r)‖^2,    (8.38)

where σ^2 is the power of the input noise, i.e., 〈n(t)n^T(t)〉 = σ^2 I. Therefore, the norm of the
filter weight vector, ‖w(r)‖^2, is called the noise power gain or the white noise gain. In vector
spatial filter techniques, the output noise power is expressed as

P_n = tr[W^T(r)〈n(t)n^T(t)〉W(r)] = tr[W^T(r) R_n W(r)].    (8.39)

When the input noise is uncorrelated white Gaussian noise, this reduces to

P_n = σ^2 (‖w_x(r)‖^2 + ‖w_y(r)‖^2 + ‖w_z(r)‖^2).    (8.40)

Here, the squared sum of the norms of the filter weights, tr[W^T(r)W(r)] = ‖w_x(r)‖^2 +
‖w_y(r)‖^2 + ‖w_z(r)‖^2, is the noise power gain. A minimum-norm spatial filter with weight
normalization has also been proposed (Dale et al., 2000). The output of this spatial filter
is expressed as

s(r, t) = W^T(r) b(t) / tr[W^T(r) W(r)].    (8.41)

Because the weight norm is the noise gain, the output of this spatial filter is interpreted
as being equal to the SNR of the minimum-norm filter outputs.
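The white-noise gain of Eq. (8.40) and the weight-normalized output of Eq. (8.41) can be sketched as follows; the weight matrix and data here are arbitrary stand-ins, only for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
M = 148
W = rng.standard_normal((M, 3))   # vector filter weight W(r) = [w_x, w_y, w_z]
b = rng.standard_normal(M)
sigma2 = 0.5                      # input white-noise power

noise_gain = np.trace(W.T @ W)    # = ||w_x||^2 + ||w_y||^2 + ||w_z||^2
P_n = sigma2 * noise_gain         # output noise power, Eq. (8.40)

def weight_normalized_output(W, b):
    """Weight-normalized minimum-norm output, Eq. (8.41)."""
    return (W.T @ b) / np.trace(W.T @ W)

s_norm = weight_normalized_output(W, b)
```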
8.5 Adaptive spatial filter techniques
8.5.1 Scalar minimum-variance-based beamformer techniques
The adaptive spatial filter techniques use a weight vector that depends on measurement.
The best-known adaptive spatial filter technique is probably the minimum-variance beam-
former. The term “beamformer” has been customarily used in the signal-processing com-
munity with the same meaning as “spatial filter”. In this method, the spatial filter weights
are obtained by solving the constrained optimization problem

min_w w^T R_b w,  subject to  w^T l(r) = 1,    (8.42)

and consequently we get

w^T(r) = l^T(r) R_b^{-1} / [l^T(r) R_b^{-1} l(r)],    (8.43)

where l(r) is defined as l(r) = L(r)η.
The idea behind the above optimization is that the filter weight is designed to mini-
mize the total output signal power while maintaining the signal from the pointing location
r. Therefore, ideally, this weight only passes the signal from a source at the location r
with the orientation η, and suppresses the signals from sources at other locations or ori-
entations. One difficulty arises when applying it to actual MEG/EEG source localization
problems. That is, when we use the spherically symmetric homogeneous conductor model
to calculate l(r), the beamformer output has erroneously large values near the center of
the sphere. This is because ‖l(r)‖ becomes very small when r approaches the center of
the sphere.
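In code, the weight of Eq. (8.43) is one linear solve plus one normalization. The covariance below is a synthetic positive-definite stand-in; the unit-gain constraint w^T l(r) = 1 then holds by construction:

```python
import numpy as np

def minimum_variance_weight(l, R_b):
    """Scalar minimum-variance beamformer weight, Eq. (8.43):
    w = R_b^{-1} l / (l^T R_b^{-1} l)."""
    Ri_l = np.linalg.solve(R_b, l)    # R_b^{-1} l
    return Ri_l / (l @ Ri_l)

rng = np.random.default_rng(3)
M = 37
A = rng.standard_normal((M, M))
R_b = A @ A.T + M * np.eye(M)         # synthetic positive-definite covariance
l = rng.standard_normal(M)            # stand-in lead-field vector l(r)
w = minimum_variance_weight(l, R_b)
```

The beamformer output power at r is then w^T R_b w = 1/[l^T R_b^{-1} l].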
A variant of the minimum-variance beamformer, proposed by Borgiotti and Kaplan
(Borgiotti and Kaplan, 1979), uses the optimization

min_w w^T R_b w,  subject to  w^T w = 1.    (8.44)

The resultant weight vector is expressed as

w^T(r) = l^T(r) R_b^{-1} / √[l^T(r) R_b^{-2} l(r)].    (8.45)
Because wT (r)w(r) represents the noise power gain, the output of the above beamformer
directly corresponds to the power of the source activity normalized by the power of the
output noise. This Borgiotti–Kaplan beamformer is known to provide a spatial resolution
higher than that of the minimum-variance beamformer (Borgiotti and Kaplan, 1979).
Moreover, it can easily be seen that the output of the beamformer in Eq. (8.45) does not
depend on ‖l(r)‖. Thus, ‖l(r)‖-related artifacts are avoided.
Another more serious problem with the adaptive beamformer techniques described so
far is that they are very sensitive to errors in the forward modeling or errors in estimating
the data covariance matrix. Since such errors are nearly inevitable in neuromagnetic
measurements, these techniques generally provide noisy spatio-temporal reconstruction
results, as demonstrated in Section 8.7.
One technique has been developed to overcome such poor performance (Cox et al.,
1987; Carlson, 1988). The technique, referred to as diagonal loading, uses the regular-
ized inverse of the measurement covariance matrix, instead of its direct matrix inverse.
Although this technique has been applied to the MEG source localization problem (Robin-
son and Vrba, 1999; Gross and Ioannides, 1999; Gross et al., 2001), it is known that the
regularization leads to a trade-off between the spatial resolution and the SNR of the
beamformer output.
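Diagonal loading amounts to replacing R_b^{-1} with (R_b + γI)^{-1} in Eq. (8.43). A sketch, together with a check of the trade-off mentioned above: increasing γ monotonically reduces the weight norm (and hence the output noise gain), while broadening the spatial response. The covariance is again a synthetic stand-in:

```python
import numpy as np

def loaded_mv_weight(l, R_b, gamma):
    """Minimum-variance weight with diagonal loading:
    w = (R_b + gamma I)^{-1} l / (l^T (R_b + gamma I)^{-1} l)."""
    Ri_l = np.linalg.solve(R_b + gamma * np.eye(len(l)), l)
    return Ri_l / (l @ Ri_l)

rng = np.random.default_rng(4)
M = 37
A = rng.standard_normal((M, M))
R_b = A @ A.T + np.eye(M)             # synthetic positive-definite covariance
l = rng.standard_normal(M)

norms = [np.linalg.norm(loaded_mv_weight(l, R_b, g)) for g in (0.0, 1.0, 10.0)]
# The noise gain ||w||^2 shrinks as the loading gamma grows.
```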
8.5.2 Extension to eigenspace-projection beamformer
We here describe the eigenspace-projection beamformer (van Veen, 1988; Feldman and
Griffiths, 1991), which is tolerant of the above-mentioned errors and provides improved
output SNR without sacrificing the spatial resolution in practical low-rank signal situa-
tions. Using Eqs. (8.43) and (8.17), and defining α = 1/[l^T(r) R_b^{-1} l(r)], we rewrite the
weight vector for the minimum-variance beamformer as

w(r) = α R_b^{-1} l(r) = α Γ_S l(r) + α Γ_N l(r),    (8.46)

where

Γ_S = E_S Λ_S^{-1} E_S^T,  and  Γ_N = E_N Λ_N^{-1} E_N^T.
In Eq. (8.46), the second term on the right-hand side, αΓ_N l(r), should ideally be
equal to zero because the lead-field vector l(r) is orthogonal to E_N at the source locations,
as indicated by Eq. (8.21). Various factors, however, prevent this term from being zero,
and a non-zero αΓ_N l(r) seriously degrades the SNR, as explained in the next section. There-
fore, the eigenspace-based beamformer uses only the first term of Eq. (8.46) to calculate
its weight vector w̄(r); i.e.,

w̄(r) = αΓ_S l(r) = Γ_S l(r) / [l^T(r) R_b^{-1} l(r)].    (8.47)

Note that w̄(r) is equal to the projection of w(r) onto the signal subspace of R_b. Namely,
the following relationship holds (Feldman and Griffiths, 1991; Yu and Yeh, 1995):

w̄(r) = E_S E_S^T w(r).    (8.48)

Therefore, the extension to an eigenspace-projection beamformer is attained by projecting
the weight vectors onto the signal subspace of the measurement covariance matrix.
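A sketch of Eqs. (8.47) and (8.48): compute the minimum-variance weight and project it onto the span of the Q dominant eigenvectors of R_b. Q is assumed known here; in practice it is read off the eigenvalue spectrum of R_b:

```python
import numpy as np

def eigenspace_weight(l, R_b, Q):
    """Eigenspace-projection beamformer weight, Eq. (8.48):
    project the minimum-variance weight of Eq. (8.43) onto the
    signal subspace spanned by the Q largest eigenvectors of R_b."""
    Ri_l = np.linalg.solve(R_b, l)
    w = Ri_l / (l @ Ri_l)               # minimum-variance weight
    _, E = np.linalg.eigh(R_b)          # eigenvalues in ascending order
    E_S = E[:, -Q:]                     # signal-subspace eigenvectors E_S
    return E_S @ (E_S.T @ w)            # w_bar = E_S E_S^T w

rng = np.random.default_rng(5)
M, Q = 20, 2
A = rng.standard_normal((M, M))
R_b = A @ A.T + np.eye(M)               # synthetic positive-definite covariance
l = rng.standard_normal(M)              # stand-in lead-field vector
w_bar = eigenspace_weight(l, R_b, Q)
```

When Q equals the number of sensors, the projector E_S E_S^T is the identity and the weight reduces to the plain minimum-variance weight.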
8.5.3 Comparison between minimum-variance and eigenspace
beamformer techniques
Although the minimum-variance beamformer ideally has exactly the same SNR as that of
the eigenspace-based beamformer, the SNR of the eigenspace beamformer is significantly
higher in practical applications. The reason for this higher SNR can be understood as
follows. Let us assume that a single source with a moment magnitude equal to s(t) exists
at r. We assume that the estimated lead field, l̃(r), is slightly different from the true lead
field, l(r). The estimate of s(t), denoted ŝ(t), is derived from ŝ(t) = w^T(r)b(t) = w^T(r)l(r)s(t),
and the average power of the estimated source moment, P̂_s, is expressed as

P̂_s = 〈ŝ(t)^2〉 = P_s [w^T(r) l(r)]^2,    (8.49)

where P_s is the average power of s(t), defined by P_s = 〈s(t)^2〉. For the minimum-variance
beamformer, this P̂_s is expressed as

P̂_s = α^2 P_s [l̃^T(r) R_b^{-1} l(r)]^2 = α^2 P_s [l̃^T(r) Γ_S l(r)]^2,    (8.50)
where we use the orthogonality relationship l^T(r) E_N = 0. For the eigenspace-projection
beamformer, P̂_s is also expressed as

P̂_s = α^2 P_s [l̃^T(r) Γ_S l(r)]^2.    (8.51)

The average noise power, P̂_n, is obtained using Eqs. (8.38) and (8.46) and, for the
minimum-variance beamformer, it is expressed as

P̂_n = α^2 σ^2 [l̃^T(r) Γ_S^2 l̃(r) + l̃^T(r) Γ_N^2 l̃(r)].    (8.52)

For the eigenspace-projection beamformer, P̂_n is expressed as

P̂_n = α^2 σ^2 [l̃^T(r) Γ_S^2 l̃(r)].    (8.53)
Thus, the output SNR of the minimum-variance beamformer, SNR(MV), is expressed as
(Chang and Yeh, 1992; Chang and Yeh, 1993)

SNR(MV) = P̂_s / P̂_n = (P_s / σ^2) [l̃^T(r) Γ_S l(r)]^2 / [l̃^T(r) Γ_S^2 l̃(r) + l̃^T(r) Γ_N^2 l̃(r)].    (8.54)

The SNR for the eigenspace-based beamformer, SNR(ES), is thus

SNR(ES) = P̂_s / P̂_n = (P_s / σ^2) [l̃^T(r) Γ_S l(r)]^2 / [l̃^T(r) Γ_S^2 l̃(r)].    (8.55)
The only difference between Eqs. (8.54) and (8.55) is the presence of the second term,
l̃^T(r) Γ_N^2 l̃(r), in the denominator of the right-hand side of Eq. (8.54). It is readily apparent
that SNR(MV) and SNR(ES) are equal if we can use an accurate noise-subspace estimate
and an accurate lead-field vector, because the term l̃^T(r) Γ_N^2 l̃(r) is exactly equal to zero
in this case. It is, however, generally difficult to attain the relationship l̃^T(r) Γ_N^2 l̃(r) = 0.
One obvious reason for this difficulty is that, when calculating Γ_N^2 in practice, the sample
covariance matrix R̂_b must be used instead of R_b; R̂_b is calculated from
R̂_b = Σ_{k=1}^{K} b(t_k) b^T(t_k), where K is the number of time points.
Another factor, specific to MEG, that causes l̃^T(r) Γ_N^2 l̃(r) to have a non-zero
value is that it is almost impossible to use a perfectly accurate lead-field vector. This
is because the conductivity distribution in the brain is usually approximated by using
some kind of conductor model, such as the spherically symmetric homogeneous conductor
model, to calculate the lead field. Although this error may be reduced to a certain extent
by using a realistic head model, it cannot be perfectly avoided. Let us define the
overall error in estimating l(r) as ε. Assuming that ‖l(r)‖^2 ≫ ‖ε‖^2, we can rewrite
Eq. (8.54) as

SNR(MV) = P̂_s / P̂_n ≈ (P_s / σ^2) [l^T(r) Γ_S l(r)]^2 / [l^T(r) Γ_S^2 l(r) + ε^T Γ_N^2 ε].    (8.56)

Note that, in the denominator of the right-hand side of this equation, the term
ε^T Γ_N^2 ε has an order of magnitude proportional to ‖ε‖^2 / λ_N^2, where λ_N represents
one of the noise-level eigenvalues of R_b. The eigenvalue λ_N is usually significantly smaller
than the signal-level eigenvalues. Therefore, Equation (8.56) indicates that even when the
error ‖ε‖ is very small, the term ε^T Γ_N^2 ε may not be negligibly small compared to the
first term in the denominator. Thus, in practice the eigenspace-projection beamformer
attains an SNR significantly higher than that of the minimum-variance beamformer.
8.5.4 Vector-type adaptive spatial filter
The scalar beamformer techniques described in the preceding subsections require
determination of the beamformer orientation η to calculate l(r). Here, we describe the
extension to vector-type adaptive spatial filter techniques, which do not require the
pre-determination of η.
Problem of virtual source correlation
A naive way of extending to the vector beamformer is to simply use the scalar beamformer
weight vector obtained with η = f ξ to estimate the source in the ξ direction (where
ξ = x, y, or z). Let us try to estimate s_ξ(t) by using ŝ_ξ(t) = w^T(r, f_ξ) b(t), where
w(r, f_ξ) is obtained using Eq. (8.43). The use of such weight vectors, however, generally
gives erroneous results, and the cause of this estimation failure can be explained as follows.
Let us assume that a single source with its orientation equal to η = [ηx, ηy, ηz]T exists at
r. Its activation time course is assumed to be s(t). Then, we can express the measured
magnetic field as b(t) = η_x s(t) l_x(r) + η_y s(t) l_y(r) + η_z s(t) l_z(r). This can be interpreted
as showing that the magnetic field is generated by three perfectly correlated sources
located at the same location r, with moments equal to η_x s(t) f_x, η_y s(t) f_y, and η_z s(t) f_z.
Let us, for example, consider the case of estimating the x component of the source
moment. The estimated moment, ŝ_x(t), is expressed as

ŝ_x(t) = w_x^T(r) b(t) = [η_x w^T(r, f_x) l_x(r) + η_y w^T(r, f_x) l_y(r) + η_z w^T(r, f_x) l_z(r)] s(t).    (8.57)

Since the weight w(r, f_x) is obtained by imposing the constraint w^T(r, f_x) l_x(r) = 1,
we have

ŝ_x(t) = η_x s(t) + [η_y w^T(r, f_x) l_y(r) + η_z w^T(r, f_x) l_z(r)] s(t).    (8.58)

In this equation, there is no guarantee that the relationships w^T(r, f_x) l_y(r) = 0 and
w^T(r, f_x) l_z(r) = 0 hold. Instead, w^T(r, f_x) l_y(r) and w^T(r, f_x) l_z(r) generally have
fairly large non-zero values, resulting in considerable errors.
A vector-extended minimum-variance beamformer
The above analysis also suggests how we can avoid such errors. Equation (8.57) indicates
that the weight should be derived with the multiple constraints,

w_x^T l_x(r) = 1,  w_x^T l_y(r) = 0,  and  w_x^T l_z(r) = 0.    (8.59)

That is, we impose null constraints on the directions orthogonal to the one to be
estimated. We here omit the notation (r) for the weight expression unless this omission
causes ambiguity. Similarly, to derive w_y and w_z, the following constraints should be
imposed:

w_y^T l_x(r) = 0,  w_y^T l_y(r) = 1,  and  w_y^T l_z(r) = 0,    (8.60)

w_z^T l_x(r) = 0,  w_z^T l_y(r) = 0,  and  w_z^T l_z(r) = 1.    (8.61)
The minimum-variance beamformer with such multiple linear constraints, referred to as
the linearly constrained minimum-variance beamformer (Frost, 1972), is known to have
the following solution:

W(r) = [w_x, w_y, w_z] = R_b^{-1} L(r) [L^T(r) R_b^{-1} L(r)]^{-1}.    (8.62)

It is clear from the discussion above that, when estimating one of the three orthogonal
components of the source moment, we need to suppress the other two components. By
doing this, we can avoid the errors caused by the perfectly correlated virtual sources, and the
beamformer can detect the source moment projected onto three orthogonal directions. Note
that the set of weight vectors in Eq. (8.62) has been previously reported (van Drongelen
et al., 1996; van Veen et al., 1997; Spencer et al., 1992). In these reports, however, the
necessity of imposing the null constraints was not fully explained.
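The vector weight of Eq. (8.62) in code; the property to check is that W^T(r) L(r) = I, i.e., the unit and null constraints of Eqs. (8.59)–(8.61) hold simultaneously. The covariance and lead fields are synthetic stand-ins:

```python
import numpy as np

def vector_mv_weights(L, R_b):
    """Linearly constrained minimum-variance weights, Eq. (8.62):
    W = R_b^{-1} L (L^T R_b^{-1} L)^{-1}, with L = [l_x, l_y, l_z]."""
    Ri_L = np.linalg.solve(R_b, L)          # R_b^{-1} L
    return Ri_L @ np.linalg.inv(L.T @ Ri_L)

rng = np.random.default_rng(6)
M = 37
A = rng.standard_normal((M, M))
R_b = A @ A.T + np.eye(M)                   # synthetic positive-definite covariance
L = rng.standard_normal((M, 3))             # stand-in lead-field matrix L(r)
W = vector_mv_weights(L, R_b)
# W.T @ L recovers the identity: each weight passes its own component
# and nulls the two orthogonal ones (Eqs. 8.59-8.61).
```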
Vector-extended Borgiotti-Kaplan beamformer
The extension of the Borgiotti-Kaplan beamformer in Eq. (8.45) is performed in a
similar manner. The weight vectors are obtained by using the following constrained
minimizations:

min_{w_x} w_x^T R_b w_x  subject to  w_x^T w_x = 1,  w_x^T l_y(r) = 0,  and  w_x^T l_z(r) = 0,    (8.63)

min_{w_y} w_y^T R_b w_y  subject to  w_y^T l_x(r) = 0,  w_y^T w_y = 1,  and  w_y^T l_z(r) = 0,    (8.64)

min_{w_z} w_z^T R_b w_z  subject to  w_z^T l_x(r) = 0,  w_z^T l_y(r) = 0,  and  w_z^T w_z = 1.    (8.65)
We first derive the expression for w_x. Let us introduce a scalar constant ζ such that
w_x^T l_x(r) = ζ, where ζ can be determined from the relationship w_x^T w_x = 1. Then, the
constrained optimization problem in Eq. (8.63) becomes

min_{w_x} w_x^T R_b w_x  subject to  w_x^T L(r) = ζ [1, 0, 0] = ζ f_x^T.    (8.66)

The solution of this optimization problem is known to have the form

w_x = ζ R_b^{-1} L(r) [L^T(r) R_b^{-1} L(r)]^{-1} f_x.    (8.67)
Then, we have

w_x^T w_x = ζ^2 f_x^T Ω f_x,    (8.68)

where

Ω = [L^T(r) R_b^{-1} L(r)]^{-1} L^T(r) R_b^{-2} L(r) [L^T(r) R_b^{-1} L(r)]^{-1}.

Thus, we get ζ = 1/√(f_x^T Ω f_x) from the relationship w_x^T w_x = 1. Using exactly the same
derivation, the weights w_y and w_z can be derived, and the set of weights is expressed
as

w_ξ = R_b^{-1} L(r) [L^T(r) R_b^{-1} L(r)]^{-1} f_ξ / √(f_ξ^T Ω f_ξ).    (8.69)
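A sketch of Eq. (8.69), using synthetic stand-in matrices. Each weight satisfies the unit-noise-gain constraint w_ξ^T w_ξ = 1 as well as the null constraints on the other two components:

```python
import numpy as np

def bk_vector_weight(L, R_b, xi):
    """Vector-extended Borgiotti-Kaplan weight, Eq. (8.69), for
    component xi (0, 1, or 2, i.e., x, y, or z)."""
    Ri_L = np.linalg.solve(R_b, L)              # R_b^{-1} L
    T = np.linalg.inv(L.T @ Ri_L)               # [L^T R_b^{-1} L]^{-1}
    R2i_L = np.linalg.solve(R_b, Ri_L)          # R_b^{-2} L
    Omega = T @ (L.T @ R2i_L) @ T               # the matrix Omega above
    f = np.eye(3)[:, xi]                        # f_x, f_y, or f_z
    w = Ri_L @ (T @ f)
    return w / np.sqrt(f @ Omega @ f)           # zeta = 1/sqrt(f^T Omega f)

rng = np.random.default_rng(7)
M = 37
A = rng.standard_normal((M, M))
R_b = A @ A.T + np.eye(M)                       # synthetic covariance
L = rng.standard_normal((M, 3))                 # stand-in lead-field matrix
w_x = bk_vector_weight(L, R_b, 0)
```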
Extension to eigenspace-projection vector beamformer
The extension to the eigenspace-projection vector beamformer is attained by using

w̄_ξ = E_S E_S^T w_ξ.    (8.70)

The projection onto the signal subspace, however, cannot preserve the null constraints
imposed on the orthogonal components. This can be understood by considering, for
example, the case of w̄_x. The null constraints in this case should be w̄_x^T l_y(r) = 0 and
w̄_x^T l_z(r) = 0. However, let us consider

w̄_x^T l_y(r) = (E_S E_S^T w_x)^T l_y(r) = w_x^T E_S E_S^T l_y(r),

w̄_x^T l_z(r) = (E_S E_S^T w_x)^T l_z(r) = w_x^T E_S E_S^T l_z(r).    (8.71)

Because l_y(r) and l_z(r) are not necessarily in the signal subspace, we generally have
E_S E_S^T l_y(r) ≠ l_y(r) and E_S E_S^T l_z(r) ≠ l_z(r), and therefore w̄_x^T l_y(r) ≠ 0 and
w̄_x^T l_z(r) ≠ 0. It can, however, be shown that the eigenspace-projection beamformer in
Eq. (8.70) can still detect the three orthogonal components of the source moment even
though the null constraints are not preserved (Sekihara et al., 2001).
8.6 Numerical Experiments: Resolution kernel comparison between adaptive and non-adaptive spatial filters
8.6.1 Resolution kernel for the minimum-norm spatial filter
We compare the resolution kernels for the minimum-norm and the minimum-variance
spatial filter techniques. These two methods are typical and basic spatial filter techniques
in each category. In these numerical experiments, we use the coil configuration of the
148-channel Magnes 2500TM neuromagnetometer (4D Neuroimaging, San Diego). The
sensor coils are arranged on a helmet-shaped surface whose sensor locations are shown in
Fig. 2. The coordinate origin is chosen as the center of the sensor array. The z direction
is defined as the direction perpendicular to the plane of the detector coil located at this
center. The x direction is defined as that from the posterior to the anterior, and the
y direction is defined as that from the left to the right hemisphere. The values of the
spatial coordinates (x, y, z) are expressed in centimeters. The coordinate system is also
shown in Fig. 2. The origin of the spherically symmetric homogeneous conductor was set
to (0, 0,−11).
To plot the resolution kernel, we assume a vertical plane x = 0 located below the
center of the sensor array (Fig. 2). The power of the resolution kernel, ‖R‖^2, was calculated
using Eqs. (8.25), (8.27), and (8.31) for the minimum-norm method. A point source was
assumed to exist at (0, 0,−6), i.e., r′ in Eq. (8.25) was set to (0, 0,−6). The kernel was
plotted within a region defined as −5 ≤ y ≤ 5 and −9 ≤ z ≤ −1 on the vertical plane of
x = 0. The resulting resolution kernels are shown in Fig. 3. Here, the results in Fig. 3(a)
show the kernel obtained from the original minimum-norm method. It is well known
that the original minimum-norm method suffers from a strong geometric bias toward
the sensors. The results in Fig. 3(a) confirm this fact. The kernels of the minimum-
norm method with the lead-field normalization are shown in Fig. 3(b) and (c). Here, ΦL
in Eq. (8.36) was used and the regularization parameter γ was set at 0.0001λ1 for (b)
and 0.001λ1† for (c). These results show that the lead-field normalization significantly
improves the performance of the minimum-norm method. However, the resolution is still
significantly low, particularly in the depth direction. Moreover, the peak of the kernel
is located a few centimeters shallower than the assumed location; the depth difference
depends on the choice of the regularization parameter. The results in Fig. 3(d) show the
kernel from the minimum-norm method with the normalized weight (Eq. (8.41)). The
main lobe is significantly sharper than those in the lead-field normalization cases of (b)
and (c). However, the peak is located 2 cm deeper than its original position.
8.6.2 Resolution kernel for the minimum-variance adaptive spatial filter
Next, the resolution kernel for the minimum-variance vector beamformer technique was
plotted. The kernel was calculated using Eqs. (8.25) and (8.62). Here, the covariance
matrix was calculated from R_b = σ^2 I + P_s l(r′) l^T(r′), where r′ is set to the source
location, (0, 0, −6). The calculated resolution kernels for the four SNR values of
√(P_s/σ^2) are shown in Fig. 4. First of all, the kernel peaks exactly at the target
source location, (0, 0, −6). The kernel's sharpness depends on the SNR value; this is one
characteristic of the adaptive spatial filter methods. In most actual measurements, we
encounter SNR values between 2 and 8. In such an SNR range, the kernel sharpness obtained with
the minimum-variance spatial filter is significantly higher than that with the non-adaptive
minimum-norm method. These results clearly demonstrate that the minimum-variance
filter can provide more accurate reconstruction results, with significantly higher spatial
resolution, than the minimum-norm spatial filter.
†This λ1 is the largest eigenvalue of the gram matrix L_N^T L_N.
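The behavior described above can be reproduced qualitatively with a one-dimensional toy model; Gaussian lead fields stand in for the real forward model, and the geometry, source power, and noise power below are illustrative assumptions. We build R_b for a single source, form the minimum-variance weight at each scan point, and evaluate the kernel against the source lead field:

```python
import numpy as np

sensors = np.linspace(-5.0, 5.0, 37)        # 1-D stand-in sensor array
grid = np.linspace(-5.0, 5.0, 101)          # scan locations r

def lead(r):
    """Hypothetical Gaussian lead field l(r)."""
    return np.exp(-0.5 * (sensors - r) ** 2)

r_src, P_s, sigma2 = -1.0, 4.0, 1.0         # source location, power, noise power
l_src = lead(r_src)
R_b = sigma2 * np.eye(sensors.size) + P_s * np.outer(l_src, l_src)

def mv_kernel(r):
    l = lead(r)
    Ri_l = np.linalg.solve(R_b, l)
    w = Ri_l / (l @ Ri_l)                   # minimum-variance weight, Eq. (8.43)
    return (w @ l_src) ** 2                 # squared resolution kernel at r

kernel = np.array([mv_kernel(r) for r in grid])
# The kernel peaks exactly at the assumed source location r' = -1.
```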
8.7 Numerical experiments: evaluation of adaptive
beamformer performance
8.7.1 Data generation and reconstruction condition
We next conducted a series of numerical experiments to test the performance of the
adaptive beamformer technique. Here, we assume the 37-sensor array of the Magnes™ neu-
romagnetometer, in which the sensor coils are arranged in a uniform, concentric array on
a spherical surface with a radius of 12.2 cm. The sensors are configured as first-order
axial gradiometers with a baseline of 5 cm. Three signal sources were assumed to exist
on a plane defined as x = 0. The source configuration is schematically shown in Fig. 5(a),
and their time courses are shown in Fig. 5(b). The magnetic field was generated at a 1-ms
interval from 0 to 400 ms. Gaussian noise was added to the generated magnetic field,
and the SNR, defined as the ratio of the Frobenius norm of the signal-magnetic-field data
matrix to that of the noise matrix, was set to 16. The generated magnetic field is shown
also in Fig. 5(b).
The covariance matrix R_b was calculated using the data in the time
window between 0 and 400 ms. We express the source-moment vector using the two
tangential components (θ, φ), and the radial component is assumed to be zero. The
reconstruction was performed by using
sφ(r, t) = wTφ (r)b(t) and sθ(r, t) = wT
θ (r)b(t). (8.72)
The reconstruction region was defined as the area between −4 ≤ y ≤ 4 and −8 ≤ z ≤ −3
on the plane x = 0, and the reconstruction interval was 1 mm in the y and z directions.
Once sφ(r, t) and sθ(r, t) were obtained, ϕ, an angle representing the mean source
direction in the φ − θ plane was calculated using
ϕ = arctan(
√〈sθ(t)2〉〈sφ(t)2〉) if
〈sθ(t)〉〈sφ(t)〉 ≥ 0,
ϕ = − arctan(
√〈sθ(t)2〉〈sφ(t)2〉) if
〈sθ(t)〉〈sφ(t)〉 < 0, (8.73)
where 〈·〉 indicates the average over the time window with which R_b was calculated. Then,
the time course expressed in the mean source direction, s_∥(r, t), and that in its orthogonal
direction, s_⊥(r, t), were given by

s_∥(r, t) = s_φ(r, t) cos ϕ + s_θ(r, t) sin ϕ,

s_⊥(r, t) = s_θ(r, t) cos ϕ − s_φ(r, t) sin ϕ.    (8.74)

In the following experiments, we used s_∥(r, t) and s_⊥(r, t) when displaying the time course
of a source activity.
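Eqs. (8.73) and (8.74) translate directly into code. In this sketch, a fixed-orientation source is simulated (the waveform and orientation angle are illustrative assumptions) to verify that s_⊥ vanishes:

```python
import numpy as np

def mean_direction_timecourses(s_phi, s_theta):
    """Mean source direction (Eq. 8.73) and the projected
    time courses s_par, s_perp (Eq. 8.74)."""
    mag = np.arctan(np.sqrt(np.mean(s_theta ** 2) / np.mean(s_phi ** 2)))
    phi = mag if np.mean(s_theta) * np.mean(s_phi) >= 0 else -mag
    s_par = s_phi * np.cos(phi) + s_theta * np.sin(phi)
    s_perp = s_theta * np.cos(phi) - s_phi * np.sin(phi)
    return phi, s_par, s_perp

# A source with fixed orientation phi0: s_perp should be essentially zero.
t = np.linspace(0.0, np.pi, 100)
s, phi0 = np.sin(t), 0.5
phi, s_par, s_perp = mean_direction_timecourses(s * np.cos(phi0), s * np.sin(phi0))
```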
To display the results of the spatio-temporal reconstruction, three time points at
220, 268, and 300 ms were selected. The amplitude of the second source happened to
be zero at 220 ms, and all the sources had non-zero amplitudes at 268 ms, while only
the second source had a non-zero amplitude at 300 ms. The snapshots of the source
magnitude distribution |s(r, t)| =√
s2φ(r, t) + s2
θ(r, t) at these three time points, and the
time averaged reconstruction√〈s(r, t)2〉 are presented in the following experiments.
8.7.2 Results from minimum-variance vector beamformer
The results of the spatio-temporal reconstruction obtained using the minimum-variance
vector beamformer in Eq. (8.62) are shown in Fig. 6(a). The estimated time courses at
the pixels nearest to the three source locations are shown in Fig. 6(b). These results show
that the reconstruction at each instant in time was fairly noisy: the snapshot at 220 ms
showed some influence from the second source, and the snapshot at 300 ms contained
the activities of the first and third sources. The time-averaged reconstruction, however,
resolved the three active sources. In Fig. 6(b), only s_∥(r, t) shows the source activity time
courses, while s_⊥(r, t) contains no significant activity. This is consistent with the fact that
the source orientation is fixed during the observation period.
We next tested the minimum-variance beamformer with the regularized inverse (R_b +
γI)^{-1} instead of R_b^{-1}. The regularization parameter was set to 0.003λ1‡. The results in
‡This λ1 is the largest eigenvalue of R_b.
Fig. 7(a) show that a considerable amount of blur was introduced. The estimated time
courses are shown in Fig. 7(b). This figure shows that the SNR of the beamformer output
increased considerably in this case, although each time course shows some influence from
neighboring sources. The results demonstrate that the regularization leads to a trade-off
between the spatial resolution and the SNR of the beamformer output.
8.7.3 Results from the vector-extended Borgiotti-Kaplan beamformer
The reconstruction results from weight vectors obtained using Eq. (8.69) are shown in
Fig. 8. The weight is equivalent to the vector-extended Borgiotti-Kaplan (B-K) beam-
former without the eigenspace projection. Comparison between the time-averaged re-
construction in Fig. 6(a) and in Fig. 8(a) confirms that the Borgiotti-Kaplan-type beam-
former has a spatial resolution much higher than that of the minimum-variance beamformer.
The spatio-temporal reconstruction, however, is very noisy in both cases.
8.7.4 Results from the eigenspace-projected vector-extended Borgiotti-Kaplan beamformer
We then applied the eigenspace-projected B-K beamformer obtained using Eqs. (8.69) and
(8.70) to the same computer-generated data set. The reconstructed source distributions
are shown in Fig. 9(a), and the estimated time courses are shown in Fig. 9(b). Comparison
between Figs. 8 and 9 confirms that the eigenspace projection can improve the SNR
with almost no sacrifice in spatial resolution. Comparing the results in Fig. 9 with the
minimum-variance results in Fig. 6, we can clearly see that the eigenspace-projected B-K
beamformer technique significantly improved both spatial resolution and output SNR.
8.8 Application of adaptive spatial filter technique to
MEG data
This section describes the application of the adaptive spatial filter technique to actual
MEG data. The MEG data sets were collected using the 37-channel Magnes™ neuro-
magnetometer. The first data set is an auditory and somatosensory combined response,
which contains two major source activities. We show that the adaptive spatial filter tech-
nique can reconstruct these two sources and retrieve their time courses. The second data
set is the somatosensory response with very high SNR achieved by averaging 10000 trials.
With this data set, we show that the adaptive technique can separate cortical activities
only 0.7 cm apart. Throughout this section, we use the head coordinates shown in Fig. 10
to express the reconstruction results.
8.8.1 Application to auditory-somatosensory combined response
The evoked response was measured by simultaneously presenting an auditory stimulus and
a somatosensory stimulus to a male subject. The auditory stimulus was a 200 ms pure-tone
pulse with 1 kHz frequency presented to the subject’s right ear, and the somatosensory
stimulus was a 30 ms tactile pulse delivered to the distal segment of the right index
finger. These two stimuli started at the same time. The sensor array was placed above
the subject’s left hemisphere with the position adjusted to optimally record the N1m
auditory evoked field. A total of 256 epochs were measured, and the response averaged
over all the epochs is shown in the upper part of Fig. 11.
The adaptive vector beamformer technique was applied to localize sources from this
data set. The covariance matrix Rb was calculated with the time window between 0 ms
and 300 ms. We calculated, by using Eqs. (8.69) and (8.70), the eigenspace-projected
Borgiotti-Kaplan weight matrix containing the two weight vectors [wφ,wθ], and estimated
the source magnitude vector s(r, t) using Eq. (8.72). The signal subspace dimension Q was
set to two because the eigenvalue spectrum of Rb showed two distinctly large eigenvalues.
The maximum-intensity projections of the reconstructed moment magnitude ‖s(r, t)‖^2
onto the axial, coronal, and sagittal planes are shown in Fig. 11. The source magnitude
at three latencies, (65, 138, and 194 ms), is shown in this figure. The source magnitude
map at 138 ms contains a source activity presumably in the primary somatosensory cortex.
The source magnitude map at 194 ms shows a source activity in the primary auditory
cortex. The map at 65 ms contains both of these activities.
The time courses of points in the primary somatosensory and auditory cortices are
shown in Figs. 12(a) and (b), respectively. The coordinates of these cortices were deter-
mined from the maximum points in the source magnitude maps at 138 ms and 194 ms.
In Fig. 12(a) the P50 peak, which is known to represent the activity of the primary so-
matosensory cortex, is observed at a latency of about 50 ms. In Fig. 12(b), the auditory
N1m peak is observed at a latency of about 100 ms.
8.8.2 Application to somatosensory response: high-resolution
imaging experiments
Electrical stimuli with 0.2 ms duration were delivered to the right posterior tibial nerve
at the ankle with a repetition rate of 4 Hz. The MEG recordings were taken from the
vertex centering at Cz of the international 10-20 system. An epoch of 60 ms duration was
digitized at a 4000 Hz sampling frequency and 10000 epochs were averaged. The upper
part of Fig. 13 shows the MEG signals, recorded over the foot somatosensory region in
the left hemisphere. The eigenspace-projected Borgiotti-Kaplan beamformer was applied
to this MEG recording. The covariance matrix Rb was calculated with a time window
between 20 and 45 ms containing 100 time samples. The maximum-intensity projections
of the reconstructed source magnitude ‖s(r, t)‖^2 onto the axial, coronal, and sagittal
planes are shown in Fig. 13.
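As a check on the bookkeeping, a 20-45 ms window at a 4000 Hz sampling rate indeed spans 100 samples; a minimal sketch, with the channel count and pre-stimulus interval assumed for illustration:

```python
import numpy as np

fs = 4000                                     # sampling frequency (Hz)
pre = 20                                      # pre-stimulus samples (5 ms; assumed)
rng = np.random.default_rng(1)
b = rng.standard_normal((37, 240))            # averaged 60-ms epoch (37 channels assumed)

i0 = pre + int(0.020 * fs)                    # window start: 20 ms after the stimulus
i1 = pre + int(0.045 * fs)                    # window end: 45 ms after the stimulus
Rb = b[:, i0:i1] @ b[:, i0:i1].T / (i1 - i0)  # sample covariance over the window
print(i1 - i0, Rb.shape)
```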
The source magnitude maps revealed initial activation in the anterior part of the
S1 foot area at 33.1 ms, followed by co-activation of the posterior part of S1 cortex at
36.2 ms. The posterior activation became dominant at 37.2 ms and the initial anterior
activation completely disappeared at 39.1 ms. Fig. 14 shows the source magnitude map
at 36.9 ms overlaid, with proper thresholding, onto the subject’s MRI. Here, the anterior
source was probably in area 3b and the posterior source was in an area near the marginal
sulcus. The separation of the two sources was approximately 7 mm, demonstrating the
high-resolution imaging capability of the adaptive spatial filter techniques. Details of this
investigation have been reported (Hashimoto et al., 2001a), and the results of applying
the adaptive beamformer technique to the response from the median nerve stimulation
have also been reported (Hashimoto et al., 2001b).
8.9 Acknowledgments
The authors would like to thank Dr. D. Poeppel, Dr. A. Marantz, and Dr. T. Roberts
for providing the auditory data. We are also grateful to Dr. I. Hashimoto, and Dr. K.
Sakuma for providing the somatosensory data and for useful discussion regarding the
interpretation of the reconstructed results. This work has been supported by Grants-in-
Aid from the Kayamori Foundation of Informational Science Advancement; Grants-in-Aid
from the Suzuki Foundation; and Grants-in-Aid from the Ministry of Education, Science,
Culture and Sports in Japan (C13680948). This work has also been supported by the
Whitaker Foundation and by the National Institutes of Health.
Bibliography
[1] Hamalainen, M. S., Hari, R., Ilmoniemi, R. J., Knuutila, J., and Lounasmaa, O. V.,
1993, Magnetoencephalography: theory, instrumentation, and applications to nonin-
vasive studies of the working human brain, Rev. Mod. Phys., 65, pp. 413–497.
[2] Roberts, T. P. L. , Poeppel, D., and Rowley, H. A., 1998, Magnetoencephalography
and magnetic source imaging, Neuropsychiatry, Neuropsychology, and Behavioral
Neurology, 11, pp. 49–64.
[3] Lewine, J. D. and Orrison Jr., W. W., 1995, Magnetoencephalography and
magnetic source imaging, in Functional Brain Imaging, (W. W. Orrison Jr. et al.,
eds.), Mosby-Year Book, Inc., pp. 369–417.
[4] Baillet, S., Mosher, J. C. , and Leahy, R. M., 2001, Electromagnetic brain mapping,
IEEE Signal Processing Magazine, 18, pp. 14–30.
[5] van Veen, B. D. and Buckley, K. M., 1988, Beamforming: A versatile approach to
spatial filtering, IEEE ASSP Magazine, 5, pp. 4–24, April.
[6] Hamalainen, M. S. and Ilmoniemi, R. J., 1984, Interpreting measured magnetic fields
of the brain: Estimates of current distributions, Tech. Rep. TKK-F-A559, Helsinki
University of Technology.
[7] Clarke, J., 1994, SQUIDs, Scientific American, 271, pp. 36–43.
[8] Drung, D., Cantor, R., Peters, M., Ryhanen, P., and Koch, H., 1991, Integrated DC
SQUID magnetometer with high dV/dB, IEEE Trans. Magn., 27, pp. 3001–3004.
[9] Vrba, J. and Robinson, S., 2001, The effect of environmental noise on magnetometer-
and gradiometer-based MEG systems, in Proceedings of 12th International Conference
on Biomagnetism, (R. Hari et al., eds.), Helsinki University of Technology, pp.
953–956.
[10] Adachi, Y., Shimogawara, M., Higuchi, M., Haruta, Y., and Ochiai, M., 2001,
Reduction of non-periodical extramural magnetic noise in MEG measurement by
continuously adjusted least squares method, in Proceedings of 12th International
Conference on Biomagnetism, (R. Hari et al., eds.), Helsinki University of Technology,
pp. 899–902.
[11] Parkkonen, L. T., Simola, J. T., Tuoriniemi, J. T., and Ahonen, A. I., 1999, An
interference suppression system for multichannel magnetic field detector arrays, in
Recent Advances in Biomagnetism, (T. Yoshimoto et al., eds.), Tohoku University
Press, Sendai, pp. 13–16.
[12] Sarvas, J., 1987, Basic mathematical and electromagnetic concepts of the biomagnetic
inverse problem, Phys. Med. Biol., 32, pp. 11–22.
[13] Cuffin, B. N. and Cohen, D., 1977, Magnetic fields of a dipole in special volume
conductor shapes, IEEE Trans. Biomed. Eng., 24, pp. 372–381.
[14] Cuffin, B. N., 1991, Eccentric spheres models of the head, IEEE Trans. Biomed.
Eng., 38, pp. 871–878.
[15] Geselowitz, D. B., 1970, On the magnetic field generated outside an inhomogeneous
volume conductor by internal current sources, IEEE Trans. Magn., 6, pp. 346–347.
[16] Barnard, A., Duck, I., Lynn, M., and Timlake, W., 1967, The application of electro-
magnetic theory to electrocardiography II. Numerical solution of the integral equa-
tions, Biophys. J., 7, pp. 433–462.
[17] Hamalainen, M. S. and Sarvas, J., 1989, Realistic conductivity geometry model of
the human head for interpretation of neuromagnetic data, IEEE Trans. Biomed.
Eng., 36, pp. 165–171.
[18] Fuchs, M., Drenckhahn, R., Wischmann, H.-A., and Wagner, M., 1998, An improved
boundary element method for realistic volume-conductor modeling, IEEE Trans.
Biomed. Eng., 45, pp. 980–997.
[19] Cuffin, B. N., 1996, EEG localization accuracy improvements using realistically
shaped head models, IEEE Trans. Biomed. Eng., 43, pp. 299–303.
[20] Bradley, C. P. , Harris, G. M. , and Pillan, A. J., 2001, The computational perfor-
mance of a high-order coupled FEM/BEM procedure in electropotential problems,
IEEE Trans. Biomed. Eng., 48, pp. 1238–1250.
[21] van 't Ent, D., de Munck, J. C., and Kaas, A. L., 2001, A fast method to derive
realistic BEM models for E/MEG source reconstruction, IEEE Trans. Biomed. Eng.,
48, pp. 1434–1443.
[22] Okada, Y., Lauritzen, M., and Nicholson, C., 1987, MEG source models and physi-
ology, Phys. Med. Biol., 32, pp. 43–51.
[23] Paulraj, A., Ottersten, B., Roy, R., Swindlehurst, A., Xu, G., and Kailath, T., 1993,
Subspace methods for directions-of-arrival estimation, in Handbook of Statistics,
(N. K. Bose and C. R. Rao, eds.), Elsevier Science Publishers, Netherlands, pp.
693–739.
[24] Sekihara, K., Poeppel, D., Marantz, A., and Miyashita, Y., 2000, Neuromagnetic
inverse modeling: applications of eigenstructure-based approaches to extracting cor-
tical activities from MEG data, in Image, Language, Brain, (Alec Marantz et al.,
eds.), The MIT Press, Cambridge, pp. 197–231.
[25] Scharf, L. L., 1991, Statistical Signal Processing: detection, estimation, and time
series analysis, Addison-Wesley Publishing Company, New York.
[26] Schmidt, R. O., 1981, A signal subspace approach to multiple emitter location and
spectral estimation, PhD thesis, Stanford University, Stanford, CA.
[27] Schmidt, R. O., 1986, Multiple emitter location and signal parameter estimation,
IEEE Trans. Antenn. Propagat., 34, pp. 276–280.
[28] Mosher, J. C., Lewis, P. S., and Leahy, R. M., 1992, Multiple dipole modeling and
localization from spatio-temporal MEG data, IEEE Trans. Biomed. Eng., 39, pp.
541–557.
[29] Sekihara, K. and Scholz, B., 1996, Generalized Wiener estimation of three-
dimensional current distribution from biomagnetic measurements, in Biomag 96:
Proceedings of the Tenth International Conference on Biomagnetism, (C. J. Aine
et al., eds.), Springer-Verlag, New York, pp. 338–341.
[30] de Peralta Menendez, R. G., Hauk, O., Gonzalez Andino, S., Vogt, H., and Michel,
C., 1997, Linear inverse solutions with optimal resolution kernels applied to
electromagnetic tomography, Human Brain Mapping, 5, pp. 454–467.
[31] de Peralta Menendez, R. G., Gonzalez Andino, S., and Lutkenhoner, B., 1996, Fig-
ures of merit to compare distributed linear inverse solutions, Brain Topography, 9,
pp. 117–124.
[32] Lutkenhoner, B. and de Peralta Menendez, R. G., 1997, The resolution field concept,
Electroenceph. Clin. Neurophysiol., 102, pp. 326–334.
[33] Hamalainen, M. S. and Ilmoniemi, R. J., 1994, Interpreting magnetic fields of the
brain: minimum norm estimates, Med. & Biol. Eng. & Comput., 32, pp. 35–42.
[34] Wang, J. Z., Williamson, S. J., and Kaufman, L., 1992, Magnetic source images
determined by a lead-field analysis: The unique minimum-norm least-squares esti-
mation, IEEE Trans. Biomed. Eng., 39, pp. 565–575.
[35] Graumann, R., 1991, The reconstruction of current densities, Tech. Rep. TKK-F-
A689, Helsinki University of Technology.
[36] Herman, G. T., 1980, Image Reconstruction from projections, Academic Press, New
York, USA.
[37] Pascual-Marqui, R. D. and Michel, C. M., 1994, Low resolution electromagnetic
tomography: A new method for localizing electrical activity in the brain, Int. J.
Psychophysiol., 18, pp. 49–65.
[38] Wagner, M., Fuchs, M., Wischmann, H.-A., Drenckhahn, R., and Kohler, T., 1996,
Smooth reconstruction of cortical sources from EEG or MEG recordings, NeuroImage,
3, p. S168.
[39] Schmidt, D. M., George, J. S., and Wood, C. C., 1999, Bayesian inference applied
to the electromagnetic inverse problem, Human Brain Mapping, 7, pp. 195–212.
[40] Baillet, S. and Garnero, L., 1997, A Bayesian approach to introducing anatomo-
functional priors in the EEG/MEG inverse problem, IEEE Trans. Biomed. Eng., 44,
pp. 374–385.
[41] Liu, A. K., Belliveau, J. W., and Dale, A. M., 1998, Spatiotemporal imaging of
human brain activity using functional MRI constrained magnetoencephalography
data: Monte Carlo simulations, Proc. Natl. Acad. Sci., 95, pp. 8945–8950.
[42] Dale, A. M., Liu, A. K., Fischl, B. R., Buckner, R. L., Belliveau, J. W., Lewine,
J. D., and Halgren, E., 2000, Dynamic statistical parametric mapping: Combining
fMRI and MEG for high-resolution imaging of cortical activity, Neuron, 26, pp.
55–67.
[43] Borgiotti, G. and Kaplan, L. J., 1979, Superresolution of uncorrelated interference
sources by using adaptive array technique, IEEE Trans. Antenn. and Propagat., 27,
pp. 842–845.
[44] Cox, H., Zeskind, R. M., and Owen, M. M., 1987, Robust adaptive beamforming,
IEEE Trans. Acoust., Speech, Signal Process., 35, pp. 1365–1376.
[45] Carlson, B. D., 1988, Covariance matrix estimation errors and diagonal loading in
adaptive arrays, IEEE Trans. Aerospace and Electronic Systems, 24, pp. 397–401.
[46] Robinson, S. E. and Vrba, J., 1999, Functional neuroimaging by synthetic aperture
magnetometry (SAM), in Recent Advances in Biomagnetism, (T. Yoshimoto et al.,
eds.), Tohoku University Press, Sendai, pp. 302–305.
[47] Gross, J. and Ioannides, A. A., 1999, Linear transformations of data space in MEG,
Phys. Med. Biol., 44, pp. 2081–2097.
[48] Gross, J., Kujala, J., Hamalainen, M. S., Timmermann, L., Schnitzler, A., and
Salmelin, R., 2001, Dynamic imaging of coherent sources: Studying neural inter-
actions in the human brain, Proc. Natl. Acad. Sci., 98, pp. 694–699.
[49] van Veen, B. D., 1988, Eigenstructure based partially adaptive array design, IEEE
Trans. Antenn. Propagat., 36, pp. 357–362.
[50] Feldman, D. D. and Griffiths, L. J., 1991, A constrained projection approach for
robust adaptive beamforming, in Proc. Int. Conf. Acoust., Speech, Signal Process.,
Toronto, May, pp. 1357–1360.
[51] Yu, J. L. and Yeh, C. C., 1995, Generalized eigenspace-based beamformers, IEEE
Trans. Signal Process., 43, pp. 2453–2461.
[52] Chang, L. and Yeh, C. C., 1992, Performance of DMI and eigenspace-based beam-
formers, IEEE Trans. Antenn. Propagat., 40, pp. 1336–1347.
[53] Chang, L. and Yeh, C. C., 1993, Effect of pointing errors on the performance of the
projection beamformer, IEEE Trans. Antenn. Propagat., 41, pp. 1045–1056.
[54] Frost, O. T., 1972, An algorithm for linearly constrained adaptive array processing,
Proc. IEEE, 60, pp. 926–935.
[55] van Drongelen, W., Yuchtman, M., van Veen, B. D., and van Huffelen, A. C., 1996, A
spatial filtering technique to detect and localize multiple sources in the brain, Brain
Topography, 9, pp. 39–49.
[56] van Veen, B. D., van Drongelen, W., Yuchtman, M., and Suzuki, A., 1997, Local-
ization of brain electrical activity via linearly constrained minimum variance spatial
filtering, IEEE Trans. Biomed. Eng., 44, pp. 867–880.
[57] Spencer, M. E., Leahy, R. M., Mosher, J. C., and Lewis, P. S., 1992, Adaptive
filters for monitoring localized brain activity from surface potential time series, in
Conference Record of the 26th Annual Asilomar Conference on Signals, Systems, and
Computers, November, pp. 156–161.
[58] Sekihara, K., Nagarajan, S. S., Poeppel, D., Marantz, A., and Miyashita, Y., 2001,
Reconstructing spatio-temporal activities of neural sources using an MEG vector
beamformer technique, IEEE Trans. Biomed. Eng., 48, pp. 760–771.
[59] Hashimoto, I., Sakuma, K., Kimura, T., Iguchi, Y., and Sekihara, K., 2001a, Serial
activation of distinct cytoarchitectonic areas of the human SI cortex after posterior
tibial nerve stimulation, NeuroReport, 12, pp. 1857–1862.
[60] Hashimoto, I., Kimura, T., Iguchi, Y., Takino, R., and Sekihara, K., 2001b,
Dynamic activation of distinct cytoarchitectonic areas of the human SI cortex after
median nerve stimulation, NeuroReport, 12, pp. 1891–1897.
Figure Captions
Fig. 1 Schematic views of the sensor lead field. (a) Biomagnetic instrument and (b)
X-ray computed tomography.
Fig. 2 The sensor locations and the coordinate system used for plotting the resolution
kernels in Section 8.6. The filled spots indicate the locations of the 148 sensors, and
the hatched rectangle shows the plane of x = 0 on which the resolution kernels were
plotted.
Fig. 3 Results of plotting the resolution kernels for minimum-norm-based spatial filter
techniques. (a) Results for the original minimum-norm method. (b) Results for
the minimum-norm method with the normalized lead field and the regularization
parameter γ set to 0.0001λ1. (c) Results for the minimum-norm method with the
normalized lead field and γ set to 0.001λ1. (d) Results for the minimum-norm
method with weight normalization. For the results in (a) and (d), γ was set to
0.0001λ1.
Fig. 4 Results of plotting the resolution kernels for the minimum-variance adaptive spa-
tial filter technique. The plots are for SNR values of (a) 8, (b) 4, (c) 2, and (d)
1.
Fig. 5 (a) The coordinate system and the source configuration used in the numerical
experiments in Section 8.7. The cross section at x = 0 is shown. The square shows
the reconstruction region for the experimental results shown in Figs. 6 – 9. (b)
Time courses of the three sources assumed in the numerical experiments. Time
courses from the first to the third sources are shown from the top to the third row,
respectively. The three vertical broken lines indicate the time instants 220, 268,
and 300 ms, at which the source-moment magnitude is displayed. The bottom row
shows the generated magnetic field.
Fig. 6 (a) Results of the spatio-temporal reconstruction obtained using the minimum-
variance-based vector beamformer in Eq. (8.62). The upper-left, upper-right, and
lower-left maps show the snapshots of the source-moment magnitude at 220 ms,
268 ms, and 300 ms, respectively. The lower-right map shows the time-averaged
reconstruction. (b) Estimated time courses from the first to the third sources are
shown from the top to the bottom, respectively. The two time courses in each panel
correspond to s‖(r, t) and s⊥(r, t). The three vertical broken lines indicate the time
instants at 220, 268, and 300 ms.
Fig. 7 (a) Results of the spatio-temporal reconstruction obtained using the minimum-
variance-based vector beamformer in Eq. (8.62) together with the regularized inverse
(Rb +γI)−1. The parameter γ was set to 0.003λ1, where λ1 is the largest eigenvalue
of Rb. (b) Estimated time courses from the first to the third sources are shown from
the top to the bottom, respectively.
Fig. 8 (a) Results of the spatio-temporal reconstruction with the vector-extended Borgiotti-
Kaplan-type beamformer (Eq. (8.69)). (b) Estimated time courses from the first to
the third sources are shown from the top to the bottom, respectively.
Fig. 9 (a) Results of the spatio-temporal reconstruction obtained using the eigenspace-
projected Borgiotti-Kaplan vector beamformer technique (Eqs. (8.69) and (8.70)).
(b) Estimated time courses from the first to the third sources are shown from the
top to the bottom, respectively.
Fig. 10 The x, y, and z coordinates used to express the reconstruction results in Sec-
tion 8.8. The coordinate origin is defined as the midpoint between the left and
right pre-auricular points. The axis directed away from the origin toward the left
pre-auricular point is defined as the +y axis, and that from the origin to the nasion
as the +x axis. The +z axis is defined as the axis perpendicular to both these axes
and is directed from the origin to the vertex.
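This fiducial-based frame can be constructed explicitly by orthogonalization; the fiducial coordinates below (in cm, in an arbitrary device frame) are invented for illustration.

```python
import numpy as np

nasion = np.array([9.0, 0.5, -6.0])
lpa    = np.array([0.5, 7.0, -6.5])      # left pre-auricular point
rpa    = np.array([0.5, -7.0, -6.5])     # right pre-auricular point

origin = (lpa + rpa) / 2                 # midpoint of the pre-auricular points
ex = nasion - origin
ex /= np.linalg.norm(ex)                 # +x: from the origin toward the nasion
ey = lpa - origin
ey -= ex * (ey @ ex)                     # remove the component along +x
ey /= np.linalg.norm(ey)                 # +y: toward the left pre-auricular point
ez = np.cross(ex, ey)                    # +z: perpendicular, toward the vertex

print(np.round([ex @ ey, ex @ ez, ey @ ez], 12))  # orthonormality check
```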
Fig. 11 Results of the spatio-temporal reconstruction from the auditory-somatosensory
combined response shown in the upper trace of this figure. The auditory-somatosensory
combined response was measured by simultaneously applying an auditory stimulus
and a somatosensory stimulus. A total of 256 epochs were averaged. The contour
maps show reconstructed source magnitude distributions at three different laten-
cies (65, 138, and 194 ms). The reconstruction grid spacing was set to 5 mm.
The maximum-intensity projections onto the axial (left column), coronal (middle
column), and sagittal (right column) directions are shown. The letters L and R
indicate the left and right hemispheres. The circles depicting a human head show
the projections of the sphere used for the forward modeling. The contour colors
represent the relative intensity of the source magnitude; the relationship between
the colors and relative intensities is indicated by the color bar.
Fig. 12 Time courses of the points nearest to (a) the primary somatosensory cortex and
(b) the primary auditory cortex. The solid and broken plotted lines correspond,
respectively, to s‖(r, t) and s⊥(r, t). Three vertical broken lines indicate the time
instants at 65, 138, and 194 ms.
Fig. 13 Results of the spatio-temporal reconstruction from the somatosensory response
shown in the upper trace of this figure. The somatosensory response was mea-
sured using the right posterior tibial nerve stimulation. The contour maps show
reconstructed source magnitude distributions at four different latencies. The re-
construction grid spacing was set to 1 mm. The maximum-intensity projections
onto the axial (left column), coronal (middle column), and sagittal (right column)
directions are shown. The letters L and R indicate the left and right hemispheres.
The circles depicting a human head show the projections of the sphere used for the
forward modeling. The contour colors represent the relative intensity of the source
magnitude; the relationship between the colors and relative intensities is indicated
by the color bar.
Fig. 14 The source magnitude reconstruction results at a latency of 36.9 ms. The source
magnitude map was properly thresholded and overlaid onto the sagittal cross section
of the subject’s MRI. The colors represent the relative intensity of the source
magnitude; the relationship between the colors and relative intensities is indicated
by the color bar. The anterior source was probably in area 3b and the posterior
source was in an area near the marginal sulcus. The separation of the two sources
was approximately 7 mm in this case.
(Figures 1–14 appear here as graphics pages; their content is described in the Figure Captions above.)