RICE UNIVERSITY

Building a 3D Atlas of the Mouse Brain

by

Tao Ju

A Thesis Submitted in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

Approved, Thesis Committee:

Joe Warren, Professor, Computer Science
Ron Goldman, Professor, Computer Science
Richard G. Baraniuk, Victor E. Cameron Professor, Electrical and Computer Engineering
Lydia Kavraki, Noah Harding Professor, Computer Science

Houston, Texas
April, 2005
Recent years have witnessed the rapid growth of medical imaging methods such as light
microscopy, CT, MRI, and PET. These imaging methods have been applied in biology
and medical science, and the vast amount of resulting data imaged from experimental
organisms plays a critical role in the study of animal functions and diseases. Due to
the anatomical heterogeneity between different species and individuals, comparative
studies of data imaged from different animals often rely on a common template, called
an atlas. An atlas represents the average shape of an anatomical structure, and serves
as a road-map for analyzing imaged data from diverse species and individuals.
By far the most intriguing yet the least understood anatomical structure is the brain.
Understanding how the brain functions and falters will not only shed light on the fight
against fatal neurogenetic diseases that have caused millions of deaths, but will also
help to answer ultimate questions about ourselves. For this reason, building an atlas
of the brain has become particularly important, as the analysis of brain images often
holds the keys to brain science. In the past few decades, biological researchers around
the world have created brain atlases of various animals such as flies [4], monkeys [17],
rats [85], and mice [56, 72, 79, 68].
In our work, we focus on building atlases of the mouse brain. The mouse is a close
relative of human beings, and the mouse genome shares a large similarity with our
own. Studying the mouse brain will not only further our understanding of the human
brain, but will also help in the study of human genetic diseases. As a motivating
application, we shall see in the next section how 2D and 3D atlases of the mouse
brain are used for the study of gene expression patterns.
1.2 Brain Atlases in Gene Expression
1.2.1 Background
One of the major recent successes in biology has been sequencing the genome of vari-
ous organisms such as fruit flies, mice and human beings. Gene sequencing represents
the first step towards the larger goal of understanding the organization and function
of biological organisms at the molecular level. In a recent project [18] that represents
the next logical step towards this larger goal, biologists are determining where each
gene in the genome is being expressed; that is, which cells are producing transcripts
for specific proteins. Using a method known as in situ hybridization, Dr. Eichele at
the Baylor College of Medicine is collecting gene expression data for all 30K genes in
the mouse genome.
This data (consisting of 2D images of the mouse brain taken at distinct cross-sections)
will play a key role in understanding the functional relationship between distinct
genes in the mouse genome. To aid in the organization and analysis of this data, we
have developed a geometric database that allows biologists to pose queries comparing
expression data for different genes. At the core of this database lies a set of 2D atlases
of the mouse brain that partition the brain into disjoint anatomical regions.
Gene expression images
Genes play a basic role in biology because genes serve as blueprints for creating pro-
teins, the building blocks of biological organisms. However, not all genes are actively
involved in protein synthesis. In a particular tissue, only a subset of the genes are
expressing (i.e. synthesizing) proteins. To retrieve gene expression patterns, biologists developed a technique called in situ hybridization that identifies cells expressing
a particular gene by means of antibody-antigen interactions. Given a section of a
mouse brain, in situ hybridization highlights the expression of a particular gene, and
the stained cross-section can be imaged using light-microscopy at high resolutions.
Several examples of gene expression images at a similar sagittal (i.e. vertically cut
from front to back) cross-section of different mouse brains are shown in Figure 1.1.
The dark blue regions in each image are where the corresponding gene is expressed.
Note that each image differs slightly due to rotations and translations induced dur-
ing data collection, as well as anatomical deviations between individual mice. One
feature of these images is that they all exhibit common anatomical regions such as
the cerebellum (the dark folded region in the upper right portion of the images).
Given gene expression images, biologists want to compare expression patterns between
different genes within a region of interest. For example, they would like to pose queries
of the form: “What genes have high expressions in the cortex region?” or “What genes
have similar expression patterns with gene X in the midbrain?” Answers to these
queries often hold the keys to discovering gene functions and gene networks. With the
plan to generate expression images for over 2000 genes in the next two years, automated
methods that accurately and efficiently compare expression patterns between a large
number of genes are in great demand.
To organize gene expression data into a searchable form, we have constructed a 2D
database of gene expressions using a set of 2D atlases of the mouse brain. One atlas
is constructed on each of eleven standard cross-sections, each corresponding to a
sagittal section of the mouse brain with particular anatomical interest. Queries to this
database are of the form: “For a given cross-section, which genes have a particular
expression pattern over a specific anatomical region?” The atlases are represented
using subdivision, a geometric technique in computer graphics that allows smooth
modelling of anatomical boundaries as well as flexible deformation of the atlas onto
Figure 1.1: Expression images from four genes (CRY1, GLUR1B, CHAT, and TLE3) taken at the same sagittal cross-section of the mouse brain.
expression images.
1.2.2 2D brain atlas
Subdivision is a fractal-like process that takes a coarse mesh and generates a sequence
of increasingly fine meshes which converge to a smooth mesh in the limit. A brief
description of the subdivision methods used in studying gene expressions is presented in
[46]. We refer interested readers to an excellent introduction to subdivision methods
in general by Warren [91].
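Subdivision is easiest to see in the curve setting. The following sketch uses Chaikin's corner-cutting rule, a simple curve-subdivision scheme, as an illustrative analogue of the surface schemes discussed here; it is not the Catmull-Clark scheme used for the atlas.

```python
# Chaikin's corner-cutting: one round replaces every edge of a closed
# polygon with two new points at the 1/4 and 3/4 positions; repeated
# rounds converge to a smooth (quadratic B-spline) curve.

def chaikin_round(points):
    """One round of corner cutting on a closed polygon."""
    refined = []
    n = len(points)
    for i in range(n):
        (x0, y0), (x1, y1) = points[i], points[(i + 1) % n]
        refined.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
        refined.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
    return refined

def subdivide(points, rounds):
    for _ in range(rounds):
        points = chaikin_round(points)
    return points

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
fine = subdivide(square, 3)   # 4 -> 8 -> 16 -> 32 vertices
```

Each new point is a convex combination of old points, so the refined curve stays inside the convex hull of the coarse polygon, mirroring the stability of the surface schemes.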
We model each standard cross-section of the mouse brain as a Catmull-Clark sub-
division mesh [19] partitioned by a network of crease curves. The left portion of
Figure 1.2 shows such a coarse mesh for one sagittal cross-section of the mouse brain.
(Crease edges are thickened; crease vertices are large dots.) The middle part of this
figure shows the mesh generated by three rounds of subdivision. Note that the crease
curves partition the mesh into 15 disjoint sub-meshes, each corresponding to an im-
portant anatomical region of the mouse brain. The right portion of Figure 1.2 shows
the crease curve network and the sample image used in laying out the coarse mesh.
By tagging each quad in the coarse mesh with its corresponding anatomical region,
the partitioned smooth mesh serves as an atlas for this standard cross-section.
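The tagging idea can be sketched abstractly: when a tagged quad is split into four children during subdivision, each child inherits the parent's anatomical-region tag, so region membership is preserved at every level. Quads are kept as bare identifiers here, and the geometric smoothing of Catmull-Clark is omitted; the quad ids and region names are made up for illustration.

```python
# Hypothetical sketch of region-tag propagation under quad subdivision.

def split_tagged_quads(tags):
    """tags: dict mapping quad id -> region name. One subdivision round
    replaces quad q with children (q, 0)..(q, 3) carrying the same tag."""
    return {(q, child): region
            for q, region in tags.items()
            for child in range(4)}

coarse = {"q0": "cerebellum", "q1": "cortex"}
level1 = split_tagged_quads(coarse)     # 8 tagged quads
level2 = split_tagged_quads(level1)     # 32 tagged quads
```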
Figure 1.2: Atlas at the coarsest level of subdivision (a), subdivided three times (b), and overlaid on its defining image (c).
Representing the brain atlas as a subdivision mesh offers several desirable features
for comparing gene expression patterns:
1. The mesh explicitly models the partitioning of the brain into anatomical regions
by smooth boundary curves, which allows queries to the associated database of
gene expression data to be restricted to anatomical regions without effort.
2. Each sub-mesh corresponding to an anatomical region contains a smooth param-
eterization, which can be used for mapping gene expressions on tissue images.
3. The inherent multi-resolution structure of the subdivision mesh not only allows
easy deformation by manipulating the control vertices at the coarsest level, but
also enables fast multi-level comparison of expression data mapped onto the
atlas.
Geneatlas: A 2D geometric database for gene expressions
Using subdivision meshes as 2D brain atlases, we constructed Geneatlas.org, a web-
based database of gene expressions over the mouse brain [46]. For a particular stan-
dard cross-section, the gene expression database consists of the refined subdivision
mesh for the standard atlas annotated with gene expression information for each gene
in the database. For each particular gene, we first deform the subdivision mesh onto
the gene’s corresponding expression image using a combination of affine transforma-
tions and local least-square deformations (details are described in [46]). Next, for
each quad in the deformed mesh, we estimate the number of cells covered by that
quad and their corresponding levels of expression. Note that the use of the deformed
mesh corrects for anatomical variations in individual mouse brains.
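The per-quad aggregation step can be sketched as follows, under the assumption that the deformed mesh has already been rasterized into a label image (pixel to quad id); the function name, the `None`-outside convention, and the 0..1 expression scale are illustrative, not the thesis's actual data format.

```python
# Minimal sketch: aggregate per-pixel expression levels into per-quad
# statistics (pixel count and mean level) over each quad's footprint.

from collections import defaultdict

def expression_per_quad(label_img, expr_img):
    """label_img, expr_img: equally sized 2D lists (pixel -> quad id,
    pixel -> expression level)."""
    total = defaultdict(float)
    count = defaultdict(int)
    for row_l, row_e in zip(label_img, expr_img):
        for quad, expr in zip(row_l, row_e):
            if quad is not None:          # None = outside the brain
                total[quad] += expr
                count[quad] += 1
    return {q: (count[q], total[q] / count[q]) for q in count}

labels = [[0, 0, 1], [0, 1, 1], [None, 1, 1]]
expr   = [[0.9, 0.7, 0.1], [0.8, 0.2, 0.0], [0.0, 0.1, 0.2]]
stats  = expression_per_quad(labels, expr)
```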
To facilitate interaction with the database, Geneatlas.org features a graphical inter-
face to allow users to pose queries of the following form: “For a given region of the
brain, which genes have a particular expression pattern?” Users may specify the tar-
get region by name or by interactively painting the desired region onto the atlas.
Target expression patterns can be either uniform patterns, such as high, medium, low,
or none, or the expression pattern of a given gene. For example, on the left of
Figure 1.3, the user has painted a query onto the lower midbrain of the atlas at a
particular cross-section, seeking those genes whose expression pattern is
similar to that of the gene Slc6a3, which exhibits a check-mark expression shape.
To compute the answer to a particular query, the database compares the gene ex-
pression data for various pairs of genes. For example, if the user desires those genes
whose expression patterns are similar to the ith gene in the database, the query com-
pares expression data of all genes in the database to the ith gene. By computing a
norm (such as L1) that measures the similarity between the two expression patterns,
the database then reports those genes whose norm with respect to the target gene is
smallest. For example, the right portion of Figure 1.3 shows eight genes computed by
the database that answer the query shown on the left. Note that each gene exhibits a
similar expression pattern to the target gene in the region selected by the user. Using
the multi-resolution structure of the subdivision atlas, the comparison computation
can be greatly accelerated by generalizing the multi-level search technique proposed
by Chen et al. [21] for rectangular images (details are described in [46]).
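The similarity query described above can be sketched as follows: expression patterns are vectors of per-quad levels restricted to the user-selected region, and genes are ranked by L1 distance to the target. The gene names, quad ids, and expression values are made up for illustration, and the multi-level acceleration is omitted.

```python
# Sketch of an L1-norm similarity query restricted to a painted region.

def l1(a, b):
    """L1 distance between two expression vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def most_similar(target, database, region, k=2):
    """database: gene -> {quad: level}. Rank genes by L1 distance to the
    target pattern, comparing over the region's quads only."""
    tvec = [target[q] for q in region]
    scored = sorted(database,
                    key=lambda g: l1([database[g][q] for q in region], tvec))
    return scored[:k]

region = ["q3", "q4", "q5"]                     # painted sub-region
target = {"q3": 0.9, "q4": 0.1, "q5": 0.8}      # target gene's pattern
db = {
    "geneA": {"q3": 0.85, "q4": 0.15, "q5": 0.75},
    "geneB": {"q3": 0.1,  "q4": 0.9,  "q5": 0.2},
    "geneC": {"q3": 1.0,  "q4": 0.0,  "q5": 0.9},
}
hits = most_similar(target, db, region)   # geneA and geneC rank first
```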
The ability to search for genes with desired expression patterns in regions of interest
has the potential to lead to significant medical discoveries. For example, the target
gene Slc6a3 in the sample query in Figure 1.3 is a dopamine transporter related to
Parkinson’s Disease. Among the eight genes computed by the database, as shown
on the right of Figure 1.3, seven are known to be dopamine-related. The gene Hbn1,
however, is a newly discovered gene with unknown functions. This query suggests
further investigation into the function of the gene Hbn1 and its possible relations
with dopamine transport or even with Parkinson’s Disease. In the face of the large
Figure 1.3: Example query using a 2D brain atlas for the genes with similar expression patterns to the gene Slc6a3 in the lower midbrain (left), and the results computed by the database (right).
number of genes that are active in the brain, the ability to answer such queries
efficiently and accurately will play an important role in identifying potential genes of
interest in genetics research and drug design.
1.2.3 3D brain atlas
The 2D brain atlases on Geneatlas.org are fundamentally limited because these atlases
are only defined on a fixed set of cross-sections of the mouse brain. Consequently,
queries into the database are limited to 2D planes of the brain on which there is an
atlas defined. Ultimately, we would like to answer queries of the form: “Which genes
have a particular expression pattern within a specific anatomical structure in 3D?” To
answer such queries, we need to construct a 3D atlas of the mouse brain that stores 3D
gene expression patterns over the entire brain volume. The 3D atlas would serve as
a 3D database of gene expressions that supports fully spatial queries. The availability
of this atlas-based database with gene expressions from a significant portion of the
mouse genome would contribute immensely to the bio-informatics community.
To be able to store and compare spatial biological data, such as gene expression
patterns, we seek a 3D atlas representation with the following features:
• The atlas should model accurately the boundaries of the 3D anatomical regions
of the brain. In particular, different anatomical regions modeled in the atlas
should not overlap or leave gaps; each point in space should belong to a unique
anatomical region (and consequently should have exactly one associated level
of gene expression).
• The atlas should provide a smooth parameterization interior to each anatomical
region. The parameterization provides a common, smooth coordinate frame in
which gene expressions can be accurately stored and compared.
• In the face of the large volume of gene expression data in the database, the atlas
should support a multi-resolution structure on which the comparison of gene
expressions can be accelerated, in order to answer online queries in real time.
Recall that our 2D brain atlases represented as quad subdivision meshes have these
desirable properties. Each 2D atlas consists of quadrilaterals associated with different
anatomical regions, and two quadrilaterals from different regions are separated by a
crease edge. After subdivision, the refined atlas consists of smoothly parameterized
quadrilaterals within each anatomical region, and the boundaries between different
regions are modelled by a network of smooth crease curves.
Similarly, we shall represent the 3D brain atlas as a tetrahedral subdivision mesh.
The 3D atlas consists of tetrahedral elements associated with different anatomical
structures, and two tetrahedra from different anatomical structures share a crease
triangular face. Using tetrahedral subdivision techniques developed by Schaefer et al.
[77], we can subdivide a coarse tetrahedral mesh and generate a refined tetrahedral
mesh. The refined 3D atlas consists of smoothly parameterized tetrahedra within each
anatomical structure, and the boundaries between different structures are modelled by
a network of smooth triangular faces. The subdivided atlas is well suited for mapping
gene expression patterns and for answering queries comparing 3D expression patterns
of different genes.
1.3 Atlas Creation
While our ultimate goal in the motivating project on mouse gene expressions is to
build a tetrahedral subdivision atlas, we focus in this thesis on building a high-
resolution polygonal atlas, which will serve as the basis for the construction of the
final tetrahedral atlas. The polygonal atlas consists of a surface network that models
the partitioning of the brain into anatomical volumes. Each anatomical volume in
the polygonal atlas can be further tetrahedralized to yield a subdivision atlas.
Here we briefly outline the steps involved in creating a high-resolution polygonal atlas
of the mouse brain:
1. Collect tissue sections of the mouse brain with Nissl staining imaged using light
microscopy.
2. Correct distortions on each tissue section, such as stretching and compacting,
which are induced by cryo-sectioning.
3. Annotate each tissue section with anatomical regions, and connect 2D anatom-
ical boundaries from each section to form a 3D surface network modelling the
boundaries of various anatomical divisions.
Note that our atlas is constructed from cross-section images of the brain, which
offer a much higher resolution than conventional 3D imaging data such as MRI. The
increased resolution allows our atlas to represent fine anatomical features, such as the
tubular structures of fiber tracts and ventricles. However, the physical sectioning of
brain tissue induces distortions that need to be corrected in order to create a smooth
atlas representing the undistorted mouse brain.
In this thesis, I present robust, efficient and accurate methods for correcting distor-
tions in the tissue sections as well as for building a surface model from corrected brain
sections. Our method for correcting distortion makes use of the sectioning direction
to automatically reproduce a smooth brain volume from consecutive tissue images.
After manual annotation, we are able to construct a geometrically and topologically
correct surface network from boundary curves on annotated sections in a robust and
flexible manner.
As an on-going collaborative project with the University of Houston and the Baylor
College of Medicine, I will be actively pursuing further research in the construction
and utilization of atlases for the study of mouse gene expressions. As part of my
future research, I plan to investigate the construction of a volumetric atlas from the
polygonal brain atlas generated using the techniques in this thesis. The eventual goal
is to construct a geometric database of gene expression patterns using the volumetric
atlas. The techniques described in this thesis are well suited for building high-quality
surface models of complex anatomical structures from serial tissue sections, a topic
which has become increasingly important in modern biological research.
Chapter 2
Data Collection
In medical imaging, volumetric data generated by 3D imaging methods such as MRI
and CT have wide ranging applications in the visualization and analysis of organs. 2D
imaging methods, such as optical microscopy, typically generate serial sections with
much higher resolution than MRI or CT scans. Reconstructing these 2D sections in
3D has become an important tool for understanding anatomical structures in 3D, and
in particular, for building high-resolution atlases.
2.1 Cryo-sectioning
Brain images are collected from a P7 C57BL/6 Mus musculus brain using cryo-sectioning
[18]. The brain is first extracted from the skull and put into solution to freeze.
The frozen brain is then cut coronally into serial cryo-sections each 25µm thick.
Tissue sections are placed on slides and then a Nissl stain is applied. After staining,
a coverslip is applied to each slide. The tissue section is then imaged using light
microscopy at a resolution of 3.3 µm per pixel. Given that the mouse brain is
roughly 0.6 × 0.8 × 1.2 cm, imaging a single mouse brain yields around 500 images
whose dimensions are 2000 by 3500 pixels. Pictures in Figure 2.1 are taken from Dr.
Gregor Eichele’s laboratory at Baylor College of Medicine for high throughput in situ
hybridization.
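The figures quoted above can be sanity-checked with a little arithmetic; all input values are taken from the text, and the computation only confirms that they are consistent.

```python
# Back-of-the-envelope check of the data-collection figures.

section_thickness_um = 25.0
pixel_size_um = 3.3
brain_length_cm = 1.2          # the dimension along which the brain is cut

# Number of 25 um coronal sections through a 1.2 cm brain:
n_sections = brain_length_cm * 1e4 / section_thickness_um   # 480, i.e. ~500

# Physical extent covered by a 2000 x 3500 pixel image at 3.3 um/pixel:
img_w_cm = 2000 * pixel_size_um / 1e4    # 0.66 cm
img_h_cm = 3500 * pixel_size_um / 1e4    # 1.155 cm (the image includes a
                                         # margin around the tissue section)
```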
Figure 2.1: Facilities for cryo-sectioning (a) and microscopy imaging (b).
2.2 Tissue Distortions
Unfortunately, raw cryo-sections cannot be used directly for 3D reconstruction. The
tissue preparation steps for light microscopy may introduce undesirable tissue distor-
tions. Similar distortions that are specific to different preparation procedures have
been closely examined by several authors [10, 28, 78, 16]. Figure 2.2 shows four
coronal (i.e. vertically cut from side to side) sections of a single mouse brain acquired
by cryo-sectioning. Each slice may exhibit tissue distortions of the following forms:
• Global rotations and translations introduced when the tissue sections are placed
on the slides.
• Regional tissue deformations in the form of vertical stretching and compacting
induced both during cutting and when the section is placed onto the slide.
Extreme deformations may lead to tearing and folding.
• Image artifacts, such as dust (dark artifact) and air bubbles (black ring-shaped
artifact), introduced during coverslipping.
Figure 2.2: Four coronal sections (Sections 14, 15, 140, and 141) from a stack of histological sections of a mouse brain acquired by cryo-sectioning.
Observe from Figure 2.2 that tissue distortions may vary significantly even between
neighboring sections. Due to random global and regional tissue distortions, stacking
successive images will not produce a coherent 3D volume. For example, Figure 2.3
shows two synthetic cross-sections through the middle of 350 coronal
sections of a single mouse brain. Note that the images look
jagged and the boundaries of anatomical structures are far from smooth.
Figure 2.3: Synthetic cross-cut through the middle of 350 coronal brain sections in the sagittal (a) and horizontal (b) directions.
Chapter 3
Smooth Volume Reconstruction
To construct a 3D atlas representing smooth anatomical structures in the mouse brain,
we need to correct tissue distortions in order to generate a smooth 3D image from
tissue sections. In this chapter we present a robust and efficient method for producing
a smooth 3D volume from distorted 2D sections. The method automatically computes
deformations of tissue sections that result in a smooth volume when stacked. The
technique works in the absence of any undistorted references. Our approach is based
on computing image warps between adjacent section pairs and then smoothing the
pairwise warps.
3.1 Related work
There has been a tremendous amount of work on 3D reconstruction from distorted
serial sections. Typically, reconstruction is achieved by deforming individual 2D sec-
tions using image registration (warping) techniques. Here we briefly review related
work on 2D image warping and 3D reconstruction.
Image warping
Image warping is the task of finding the best deformation (warp) of a source im-
age that matches a target image under a specific distance measure. Image warping
methods have been intensively studied in the field of medical imaging, and we re-
fer interested readers to survey articles by Maintz and Viergever [58], by Glasbey
[35] and by Lester and Arridge [54], as well as books by Toga [86], by Hajnal et al.
[39], and by Modersitzki [63] for excellent reviews. Software packages implementing
state-of-the-art warping methods are also available for medical applications, such as
the Automated Image Registration (AIR) package from UCLA [94] and the Insight
Segmentation and Registration Toolkit (ITK) from NLM [97].
For the purpose of 3D reconstruction, we are interested in classifying image warping
methods by the nature of the resulting image deformations. A number of methods
compute global deformations of the source image, such as rigid-body transformations
[96, 57], linear affine transformations [70, 83], and higher-degree polynomial defor-
mations [95]. Since these deformations are global in nature, they do not perform
well in the case of local variations between images [35]. To overcome this problem,
other warping methods consider local image deformations. Bookstein [13] considers
thin-plate splines in image registration, which was later applied in conjunction with
maximization of Mutual Information by Meyer et al. [61] or minimization of points
correlation by Guest et al. [37]. Alternatively, free-form, B-spline based deformations
defined on a regular grid of control points have been studied by numerous authors
including Rueckert et al. [74] and Studholme et al. [82]. Recently, a hybrid model of
global and local deformations was also considered by Pitiot et al. [69] in a piecewise
affine setting. Local warping methods have more flexible mapping functions and are
often regularized by some form of elastic energy to prevent excessive deformations.
Common optimization techniques for computing a regularized solution include solv-
ing partial differential equations (an excellent review of such methods can be found
in the book by Modersitzki [63]), finite-element methods [37], graph methods such as
max-flow [73] and min-cut [14], stochastic methods [1], and dynamic programming.
Since its first successful application in speech recognition by Sakoe and Chiba [76],
dynamic programming has been known in 1D as Dynamic Time Warping (DTW) and
has been extended in various ways for warping 2D images [55, 26, 90, 71]. Compared
to other optimization methods in the continuous domain, dynamic programming finds
discrete minimizers in a robust manner. However, due to the exponential time
complexity of a fully 2D dynamic programming task, existing DTW-based image warping
methods can only handle extremely low-resolution images.
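The 1D Dynamic Time Warping computation referenced above can be illustrated with a short dynamic-programming sketch; this is a standard textbook formulation (returning only the minimal cumulative alignment cost, not the warp path), not code from the cited works.

```python
# Minimal 1D DTW: D[i][j] holds the minimal cumulative cost of aligning
# the first i elements of a with the first j elements of b.

def dtw(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # predecessors: match, insertion, deletion
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

# A locally stretched copy of a sequence aligns at zero cost:
cost = dtw([1, 2, 3], [1, 2, 2, 2, 3])   # 0.0
```

The O(nm) table explains why a naive generalization to full 2D warps is exponential: each row of a 2D warp would itself be a high-dimensional alignment.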
3D reconstruction
Previous methods on warp-based 3D reconstruction from serial sections can be ap-
preciated from two perspectives: the types of image deformations performed, and the
subjects from which the deformations are computed.
Over the past two decades, many authors have reported how global linear trans-
formations can be applied automatically to bring successive sections into alignment
[87, 53, 75, 78, 81, 67, 16, 7]. The simplicity of linear transformations not only reduces
the computational complexity but also allows convenient manual adjustment through
software interfaces [72, 22, 47]. However, as shown in several studies [28, 44, 15], the
section distortions induced by tissue preparation are local in nature. With the ad-
vance of image registration techniques, reconstruction methods have been proposed to
correct localized section distortions by using local 2D deformations [29, 36, 50, 85, 3]
as well as elastic 3D surface deformations [84, 60, 31, 33].
A large number of the above reconstruction methods compute the warp of each sec-
tion so that the warped section matches a neighboring section. For example, Durr et
al. [29] compute elastic deformations between pairs of distorted images, while Karen
et al. [47] use software tools for user-assisted alignment between every two consecutive sections by rigid-body transformations. However, unlike image registration
where exact matching is preferred, the goal of reconstruction is to form a smooth vol-
ume allowing natural progression of features through successive images [36]. Recent
methods, and in particular elastic 3D surface deformation methods, focus on warping
distorted sections onto an undistorted reference, such as block-face photos [60, 50, 33]
or tissue markers [81, 7], 2D sections of a 3D in vivo image [84, 78, 67], or sections
from an existing template [40, 3, 85]. The problem with this approach is that in many
applications an un-sectioned reference is not always available.
There have been only a few works so far [64, 36, 93] that address the problem of
smooth 3D reconstruction using local, elastic 2D image deformations in the absence of
a reference volume (note that existing methods using elastic 3D surface deformations
such as [84, 60, 33] cannot be applied due to the lack of a reference). Without
the knowledge of the original object before sectioning, these researchers based their
reconstruction on the following assumption: the shape of an anatomical structure
varies slowly with respect to section thickness. In other words, corresponding points
on adjacent sections are likely to be located close together. In a global approach,
both Guest and Baldock [36] and Wirtz et al. [93] aim at minimizing an energy
functional consisting of a distance measure between consecutive sections (e.g. spring
forces between corresponding points [36] or pixel-wise squared differences [93]) and an
elastic deformation potential on each section. Although numerical solutions can be
computed by using finite element methods [36] or by approximating the differential
equations using finite differences [93], solving the global minimization problem is
non-trivial due to the massive size of the system.
A completely different local approach was taken by Montgomery and Ross [64]. Their
idea is to reposition a point on each section by applying a local Laplacian smoothing
operator on the position of that point and the positions of corresponding points on
the two adjacent sections. However, their method requires manual delineation of
contours on each section to establish the correspondence. Moreover, the Laplacian
operator is only applied to points on the contours; hence the deformation is restricted
to the overall shape of a contoured region and has limited power in matching interior
features with neighboring sections.
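The per-point Laplacian smoothing idea attributed to Montgomery and Ross above can be sketched as follows: a contour point is pulled toward the average of its corresponding points on the two adjacent sections. The damping factor `lam` is an assumed parameter for illustration, not a value from the cited work.

```python
# Sketch of one Laplacian smoothing step across adjacent sections.

def laplacian_step(p_prev, p, p_next, lam=0.5):
    """p_prev, p, p_next: corresponding (x, y) points on sections
    k-1, k, k+1. Move p a fraction lam toward its neighbors' average."""
    ax = 0.5 * (p_prev[0] + p_next[0])
    ay = 0.5 * (p_prev[1] + p_next[1])
    return (p[0] + lam * (ax - p[0]), p[1] + lam * (ay - p[1]))

# A point displaced off the line of its neighbors moves halfway back:
moved = laplacian_step((0.0, 0.0), (2.0, 1.0), (0.0, 0.0))   # (1.0, 0.5)
```

Because only contour points are moved, interior features are untouched, which is exactly the limitation the text notes.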
3.2 Method Overview
Here we introduce a new local approach for reconstructing a smooth representative
volume by elastically deforming serial sections, which requires neither human inter-
vention nor the use of a reference volume. Since it is impossible to undo the tissue
distortions without the knowledge of the original object before sectioning, we base
our reconstruction on the same assumption as previous work [36] - that is, the sec-
tion thickness (25 µm) is small compared to variations in the shape of anatomical
structures. Hence the reconstruction goal is a smooth volume where a point on a
section lies close to its corresponding points on the neighboring sections. To achieve
this goal, we present:
• An automatic 3D reconstruction method, called warp filtering, based on 2D
image warps between each pair of adjacent sections. During the reconstruction,
each section is deformed using an average warp computed from the pairwise
warps within a group of neighboring sections. The average warp results in a
smooth volume by effectively repositioning each point on one section to the
weighted-average location of the corresponding points on neighboring sections.
The algorithm works with any 2D image registration techniques for computing
pairwise warps, and can be easily parallelized for speed. We performed quan-
titative and qualitative validation of the reconstruction method on both real
and synthetic data, and the results revealed the effectiveness of our method in
building a smooth volume from distorted sections.
• A new image warping algorithm based on dynamic programming for computing
regularized warps between adjacent serial sections. Due to the nature of sec-
tioning distortions, we consider a class of 2D warps that can be decomposed into
1D piecewise linear deformations with elastic constraints. The representation
of such a decomposable 2D warp greatly facilitates warp filtering. Moreover,
the decomposition allows us to extend a well-known 1D discrete minimization
method called Dynamic Time Warping [76] to compute 2D elastic image warps
even between images of high resolutions. Experimental results have shown
that the proposed method achieves an efficiency comparable to state-of-the-art
warping methods, while the resulting deformations often match successive tissue
sections with improved accuracy.
• A novel process for recovering the shape of the original object from deformed
sections using rigid-body transformations. The process requires minimal human
interaction, and precedes the above automatic reconstruction so that a naturally
shaped, coherent volume can be created.
We present the reconstruction method by warp filtering in Section 3.3 and describe our
image warping algorithm in Section 3.4. Section 3.5 presents the complete framework
for producing a coherent and naturally shaped volume from deformed tissue sections,
using semi-automatic rigid-body alignment and automatic warp filtering. In Section
3.6 we present our validation framework involving quantitative measures developed
from the evaluation criteria suggested by Guest and Baldock [36], and we report the
experimental results on an MRI test volume and a stack of 350 histology sections.
Finally, we discuss limitations of the method and future research in Section 3.7.
3.3 Warp Filtering
To reconstruct a volume in which corresponding points on successive sections are
located close to each other, warp filtering computes image warps that match each
section to a group of neighboring sections based on pairwise warps between successive
sections.
3.3.1 The algorithm
The input to the algorithm is a stack of N serial sections represented as 2D images
gk : R2 → R (k = 1, . . . , N) and pairwise warps represented as bivariate, bivalued
functions φk,k+1 : R2 → R2 (k = 1, . . . , N − 1) between every two successive images.
The warps φk,k+1 can be computed using any user-specified 2D image warping method
so that the warped image gk ◦ φk,k+1¹ best matches the neighboring image gk+1.
The algorithm computes new warps Φk : R2 → R2 (k = 1, . . . , N) that match each im-
age gk to a group of its neighboring images {gk−d, ..., gk+d} (d > 0). Φk is represented
as the following weighted average,
Φk = ∑_{i=k−d}^{k+d} γi φk,i,     (3.1)
where γi (i = k − d, . . . , k + d) are binomially-distributed weights that approximate a
Gaussian filter,

γi = 2^{−2d} C(2d, i − k + d)   (with C(·, ·) the binomial coefficient),
and φk,i represent the warps from image gk to each image gi in the neighborhood
i ∈ [k − d, k + d]. Given the pairwise warps {φk−d,k−d+1, . . ., φk+d−1,k+d}, φk,i can be
constructed inductively as²

φk,i = { φk,i+1 ◦ φi,i+1⁻¹,   k − d ≤ i < k
       { φk,k,               i = k                        (3.2)
       { φk,i−1 ◦ φi−1,i,    k < i ≤ k + d

where φk,k denotes the identity warp.
In effect, under the warp Φk, each point on gk is repositioned not to match exactly
the corresponding points on a single neighboring image, but to match the location of
¹ The operator ◦ denotes function composition: (f1 ◦ f2)(x) = f1(f2(x)).
² If pairwise warps φk,k+1 are not invertible, reverse warps φk+1,k also have to be computed using
the chosen image warping method between successive image pairs.
the weighted average of corresponding points on a group of images {gk−d, ..., gk+d}.
Consequently, in the warped stack of images gk ◦ Φk (k = 1, . . . , N), high-frequency
noise along lines of corresponding points through successive images, which is often
induced by random sectioning distortions, is removed, and the distance between
corresponding points on adjacent images is reduced.
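The filter defined by formulas (3.1) and (3.2) can be sketched in code for the 1D setting of the Figure 3.1 example, with each warp stored as a monotone function sampled on a pixel grid, so that composition and inversion reduce to linear interpolation. This is an illustrative sketch rather than the thesis implementation: the sampled representation and the clamping of neighbor indices at the ends of the stack are our own assumptions.

```python
import math
import numpy as np

def binomial_weights(d):
    """gamma_i = 2^(-2d) * C(2d, i - k + d), i = k-d .. k+d (formula 3.1)."""
    return np.array([math.comb(2 * d, j) for j in range(2 * d + 1)]) / 4.0**d

def compose(phi1, phi2):
    """(phi1 o phi2)(x) for monotone warps sampled on the grid 0..n."""
    x = np.arange(len(phi1), dtype=float)
    return np.interp(phi2, x, phi1)

def invert(phi):
    """Inverse of a monotone sampled warp."""
    x = np.arange(len(phi), dtype=float)
    return np.interp(x, phi, x)

def filtered_warps(pairwise, d):
    """Phi_k = sum_i gamma_i * phi_{k,i}, with phi_{k,i} built inductively
    from the pairwise warps as in formula (3.2).  pairwise[k] samples
    phi_{k,k+1}; neighbor indices are clamped at the stack boundaries
    (our own boundary assumption)."""
    n_sections = len(pairwise) + 1
    x = np.arange(len(pairwise[0]), dtype=float)
    gamma = binomial_weights(d)
    result = []
    for k in range(n_sections):
        acc = np.zeros_like(x)
        for j, i in enumerate(range(k - d, k + d + 1)):
            i = min(max(i, 0), n_sections - 1)
            phi = x.copy()                      # phi_{k,k} is the identity
            if i < k:                           # phi_{k,i} = phi_{k,i+1} o inv(phi_{i,i+1})
                for m in range(k - 1, i - 1, -1):
                    phi = compose(phi, invert(pairwise[m]))
            else:                               # phi_{k,i} = phi_{k,i-1} o phi_{i-1,i}
                for m in range(k, i):
                    phi = compose(phi, pairwise[m])
            acc += gamma[j] * phi
        result.append(acc)
    return result
```

With identity pairwise warps the filtered warps are again the identity, which serves as a quick sanity check.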
We illustrate warp filtering in a 2D example in Figure 3.1. Here we consider the
simplified problem of reconstructing a coherent 2D image from a sequence of deformed
1D columns of pixels gk : R → R. Figure 3.1 (a) shows a grayscale image with
letters “TMI”, and Figure 3.1 (b) shows the same image but with each column gk
synthetically distorted using a random local function. To reconstruct a smooth 2D
image, pairwise warps φk,k+1 : R → R that establish correspondences between points
on column gk to points on column gk+1 are computed between successive columns. In
Figure 3.2 (a), a subset of lines connecting corresponding points on successive columns
is plotted based on these pairwise warps. Observe that the lines are jagged due to
the local distortions that vary from column to column. Our algorithm computes the
filtered warps Φk from pairwise warps φk,k+1 and produces a stack of warped columns
gk ◦ Φk that reconstruct a smooth 2D image shown in Figure 3.1 (c). Observe in
Figure 3.2 (b) that, after warping, corresponding points on successive columns lie
close to each other and form smooth curves in space.
3.3.2 Comparison
The proposed algorithm is most closely related to the local approach taken by Mont-
gomery and Ross [64], but we present two major improvements. First, the correspon-
dence between successive sections, which was established manually using contour
lines in [64], is now computed automatically as image warps. Second, the simple
two-neighbor Laplacian operator on contour lines is replaced by a more general de-
noising Gaussian filter that operates on general warps between image pairs in a larger
Figure 3.1 : Reconstruction of a smooth 2D image from deformed 1D columns using warp filtering. (a) The original grayscale image. (b) The same image with distorted columns. (c) The reconstructed image.
neighborhood.
The improved local approach offers several unique advantages over the global mini-
mization methods used in [36] and [93]:
Simplicity: There is no need to set up and solve a 3D minimization problem. Our
local approach involves only 2D image warping and simple weighted-averaging.
In general, the algorithm can be used in conjunction with any 2D image regis-
tration technique for smooth 3D reconstruction.
Figure 3.2 : (a) A subset of lines connecting corresponding points in successive distorted columns of Figure 3.1 (b). (b) The smoothed lines of corresponding points after reconstruction.
Stability: In contrast to global minimization over all sections, computing pairwise
warps between two individual sections is a minimization problem at a small
scale and therefore less prone to errors due to variations in the data. Such
stability is highly desirable due to the systematic and random nature of section
distortions.
Efficiency: Decomposing a global problem into local warping and filtering tasks
makes substantial performance increase possible through parallelization. In
particular, since the pairwise warp between each image pair as well as the filtered
warp for each image are computed independently, computation time can easily
be reduced linearly in a distributed computing environment. Such dramatic
performance improvements are not possible with global minimizers.
3.3.3 Warp representation
Although the warp filtering algorithm can be used with any image warping techniques,
efficient computation using formulas (3.1) and (3.2) requires representing the pairwise
image warps φk,k+1 in an appropriate manner.
Regardless of how pairwise warps φk,k+1 are represented, the filtered warp Φk can
always be constructed implicitly by evaluating the right-hand side of formulas (3.1)
and (3.2) at every point (x, y). This approach, however, needs to evaluate {φk−d,k−d+1,
. . ., φk+d−1,k+d} at every point in the image. For efficiency, using formulas (3.1) and
(3.2), we can instead construct Φk as an explicit function, which can then be used to
evaluate Φk(x, y) directly at every point (x, y). The second approach requires the pairwise
warps φk,k+1 to be represented in a form that supports the following operations on functions:
• Inversion: φi,j⁻¹, representing the warp from image gj to image gi.
• Composition: φi,j ◦ φj,k, representing the warp from image gi to image gk.
• Convex combination: a φi,j + b φi,k, representing a weighted average of the two
warps.
Although linear functions are feasible, warps represented as higher-degree global poly-
nomials or as displacement fields in many local warping methods cannot be easily
adapted to support these function operations. In the next section, we de-
scribe a new image warping method for computing a local, piecewise linear warping
function that is readily represented for efficient warp filtering.
3.4 Computing image warps
The problem of computing the optimal warp is ill-posed [63] and NP-complete [49];
hence regularization is required. For warping between successive serial cryo-sectioned
images, we consider specific types of regularized image warps such that
1. Each 2D warp can be decomposed into a single 1D deformation in the horizontal
direction and independent 1D deformations in the vertical direction for each
column, and
2. Each 1D deformation is represented as a monotonic piecewise linear function
with elasticity constraints.
As we shall see in the following discussion, such regularized warps are capable of
representing local deformations that characterize the differences between adjacent
serial sections. On the other hand, the regularization allows fast warp filtering by
supporting the three operations on warp functions (i.e., inversion, composition and
convex combination) and efficient warp computation using dynamic programming.
3.4.1 Decomposing 2D warps
Even if the sections have been rigidly aligned to correct the rotational and trans-
lational differences, differences between successive tissue sections still exist in the
form of anatomical variance, localized tissue distortion, and image artifacts. While
anatomical variances and image artifacts typically do not have specific orientation,
regional distortions in the form of stretching and compacting take place mostly in
the vertical (i.e., slicing) direction. To better represent the deformations caused by
sectioning distortions while accommodating other possible local variances, we shall
compute a restricted warp φ : R2 → R2 of the following form:
φ(x, y) = (φX(x), φY (x, y))
where φX , φY are single-valued functions. In other words, φX models an overall 1D
deformation in the horizontal direction (i.e., shifting of image columns) and φY models
independent 1D deformations in the vertical direction for different values of x (i.e.,
shifting of pixels in each column). A similar decomposition has been considered
previously by Agazzi et al. [1] for text recognition. Such decompositions greatly
simplify the task of warp computation, since 1D (vector) warps can be computed
much more efficiently and robustly than 2D (image) warps.
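Applying such a decomposed warp amounts to shifting whole columns by φX and then shifting pixels within each column by φY. A minimal sketch follows; the nearest-neighbour sampling and the function name are our own choices, and a real implementation would interpolate intensities.

```python
import numpy as np

def apply_decomposed_warp(img, phi_x, phi_y):
    """Resample img under phi(x, y) = (phi_X(x), phi_Y(x, y)).

    img   : (rows, cols) array
    phi_x : (cols,) monotone map of column positions (shift of columns)
    phi_y : (rows, cols) per-column monotone maps of row positions
    """
    rows, cols = img.shape
    out = np.empty_like(img)
    for x in range(cols):
        sx = int(round(min(max(phi_x[x], 0), cols - 1)))         # column shift
        for y in range(rows):
            sy = int(round(min(max(phi_y[y, x], 0), rows - 1)))  # within-column shift
            out[y, x] = img[sy, sx]
    return out
```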
Previous authors, such as Cox et al. [26], have considered more restricted forms of 2D
warps. In their methods, each 2D warp consists solely of independent 1D warps on
each column (i.e., φX is the identity function). To illustrate the difference, Figure 3.3
shows a simple example of warping between two images containing rings of different
radii, a typical type of deformation between successive tissue sections. Figure 3.3
(c) shows the source image deformed using only 1D warps in the vertical direction,
and Figure 3.3 (d) shows the same source image deformed using the combination
of 1D warps in both vertical and horizontal directions. Observe that the horizontal
warp φX is necessary to model the non-vertical variances between source and target
images, which is required to handle anatomical differences and image artifacts in
tissue sections.
Figure 3.3 : Warping from a source image (a) to a target image (b) by computing only the vertical deformation (c) and by computing deformations in both the horizontal and vertical directions (d).
3.4.2 Piecewise linear representation
For convenient representations that will facilitate the function operations in warp fil-
tering, we consider 1D warps {φX , φY } that are monotonic, piecewise linear functions.
In particular, given two images s and t with n + 1 columns and m + 1 rows,
1. The horizontal deformation φX is represented by a pair of piecewise linear func-
tions σ, ψ : [0, K] → [0, n] (K ∈ Z+) that match column σ(k) in image s to
column ψ(k) in image t for k = 0, . . . , K.

2. The vertical deformation φY is represented by a sequence of piecewise linear
function pairs σk, ψk : [0, L] → [0,m] (L ∈ Z+) for k = 0, . . . , K that match the
point (σ(k), σk(l)) in image s to the point (ψ(k), ψk(l)) in image t for k = 0, . . . , K
and l = 0, . . . , L.
We require the functions σ, ψ and σΣ, ψΣ³ to be invertible, so that the warp φs,t from
image s to image t can be represented as:
φs,t(x, y) = (σ(x̄), σx̄(ψx̄⁻¹(y)))

where x̄ = ψ⁻¹(x) for x ∈ [0, n] and y ∈ [0,m] (linear interpolation is used when
the subscript x̄ assumes a non-integer value). The symmetry of σ and ψ allows the
inverse warp φt,s from image t to image s to be represented in the same way with
symbols σ and ψ exchanged. Such convenience will become crucial for performing
function operations during warp filtering.
3.4.3 Elastic deformation
To prevent excessive image deformation, an image is typically modelled as an elastic
material and image warps are constrained by some form of deformation energy. Due
³ σΣ is shorthand for the sequence σ0, σ1, . . . , σK.
to the piecewise-linear nature of our warp representation, we consider a discrete form
of deformation energy that consists of three terms:
1. 1st and 2nd order deformation energy in the X direction:

   EX(σ, ψ) = αX ∑_{k=0}^{K} δ(k)² + βX ∑_{k=0}^{K−1} (δ(k + 1) − δ(k))²

   where δ(k) = σ(k) − ψ(k) and αX, βX are constant weights.
2. Independent 1st and 2nd order deformation energy in the Y direction for each
vertical warp:
   EY (σk, ψk) = αY ∑_{l=0}^{L} δk(l)² + βY ∑_{l=0}^{L−1} (δk(l + 1) − δk(l))²

   where δk(l) = σk(l) − ψk(l), and αY, βY are constant weights.
3. Coherence between neighboring vertical warp functions:
   EC(σΣ, ψΣ) = γ ∑_{k=0}^{K−1} ∑_{l=0}^{L} ((σk+1(l) − σk(l))² + (ψk+1(l) − ψk(l))²).
where γ is a constant weight.
Note that the inclusion of the coherence term in 3) is necessary to ensure a smooth
deformation field using a 2D warp that is decomposed into independent 1D warps.
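The three terms translate directly into array code when the warp pairs are sampled. The sketch below transcribes the sums above; the default weights simply echo the values quoted for the experiments later in the chapter and are placeholders, not a recommendation.

```python
import numpy as np

def deformation_energy(sigma, psi, sigma_k, psi_k,
                       a_x=1e-5, b_x=0.02, a_y=1e-5, b_y=0.02, g=0.005):
    """E_X + sum_k E_Y + E_C for a decomposed, sampled 2D warp.

    sigma, psi     : (K+1,) horizontal warp pair
    sigma_k, psi_k : (K+1, L+1) vertical warp pairs, one row per k
    """
    d = sigma - psi
    e_x = a_x * np.sum(d ** 2) + b_x * np.sum(np.diff(d) ** 2)            # E_X
    dk = sigma_k - psi_k
    e_y = a_y * np.sum(dk ** 2) + b_y * np.sum(np.diff(dk, axis=1) ** 2)  # sum of E_Y
    e_c = g * (np.sum(np.diff(sigma_k, axis=0) ** 2)                      # E_C
               + np.sum(np.diff(psi_k, axis=0) ** 2))
    return e_x + e_y + e_c
```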
Putting these terms together, the warping problem that we consider becomes the
2. Construct an (m + 1) × (m + 1) auxiliary table e, in which ei,j is the error of the
minimal-error path with bounded slope from entry (0, 0) to entry (i, j) in the
error table ε. We can compute ei,j inductively as follows:
ei,j = { min{ ei−2,j−1 + Left(i, j) + βY ,  ei−1,j−2 + Down(i, j) + βY ,  ei−3,j−3 + Diag(i, j) },   i > 0 or j > 0
       { εi,j ,   i = 0 and j = 0                                                                    (3.5)
       { ∞,   i < 0 or j < 0
where each function Left, Down, and Diag computes a weighted sum of entries
in ε according to the direction and length of the previous path segment, as shown
in Figure 3.5. Note that the sum of the weights is proportional to the length of
the path segment in each direction.
3. The minimal-error path that leads to em,m encodes the piecewise linear functions
(σk, ψk) : [0, L] → [0,m], where L is the number of path segments and eσk(l),ψk(l)
are entries along the path for l = 0, . . . , L.
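The recurrence (3.5) can be sketched as a small dynamic program. Note two simplifications in this sketch: where the thesis weights every ε entry crossed by a path segment as in Figure 3.5, this version charges only the midpoint and endpoint of each segment, and the banding optimization is omitted for clarity.

```python
import numpy as np

INF = float("inf")

def dtw_table(eps, b_y=0.02):
    """Fill the auxiliary table e of recurrence (3.5) and record back-pointers.

    eps is the (m+1) x (m+1) error table.  Segment costs here are simplified
    (midpoint + endpoint of each step) relative to the Figure 3.5 weights.
    """
    m1 = eps.shape[0]
    e = np.full((m1, m1), INF)
    back = {}
    e[0, 0] = eps[0, 0]
    for i in range(m1):
        for j in range(m1):
            if i == 0 and j == 0:
                continue
            cands = []
            if i >= 2 and j >= 1 and e[i - 2, j - 1] < INF:   # "Left" step, slope 1/2
                cands.append((e[i - 2, j - 1] + (eps[i - 1, j] + eps[i, j]) / 2 + b_y,
                              (i - 2, j - 1)))
            if i >= 1 and j >= 2 and e[i - 1, j - 2] < INF:   # "Down" step, slope 2
                cands.append((e[i - 1, j - 2] + (eps[i, j - 1] + eps[i, j]) / 2 + b_y,
                              (i - 1, j - 2)))
            if i >= 3 and j >= 3 and e[i - 3, j - 3] < INF:   # "Diag" step, slope 1
                cands.append((e[i - 3, j - 3] + eps[i - 1, j - 1] + eps[i, j],
                              (i - 3, j - 3)))
            if cands:
                e[i, j], back[(i, j)] = min(cands)
    return e, back
```

Following the back-pointers from (m, m) to (0, 0) recovers the minimal-error path and hence the pair (σk, ψk).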
Figure 3.4 : Comparison of a Manhattan path (a) and a restricted path (b) whose slope is bounded between 1/2 and 2 and whose possible nodes (colored dark gray) lie in a skewed square along the diagonal.
The time and space complexity of both of the above algorithms is O(m²). However,
we can dramatically improve both the speed and the memory usage. First, εi,j and
ei,j need to be computed only for grid points i, j that lie within a skewed square
Figure 3.5 : Weights assigned to the entries in the error table ε for computing the functions Left(i, j), Down(i, j) and Diag(i, j).
of size (m + 2)/3 along the diagonal, as shown in Figure 3.4 (b). The restriction
effectively reduces the computation time by a factor of 9. Furthermore, we can limit
our computation to grid points (i, j) within a narrow band along the diagonal, whose
half-width is the maximum deformation |σk(l) − ψk(l)|. When warping between
adjacent tissue sections, we typically observe a maximum deformation of less than
10% in both the horizontal and vertical directions; hence banding further reduces the
computation time by approximately a factor of 5.
2D Warping
The 2D warping task M(σ, ψ, σΣ, ψΣ) is computed in two stages. First, observing a
similar form on the right-hand side of the 2D minimization goal in (3.3) (ignoring
the coherence term EC) and the 1D minimization goal in (3.4), we consider functions
(σ, ψ) such that (σ(k), ψ(k)) (k = 0, ..., K) form a path on the integer grid from (0, 0)
to (n, n) with bounded slope. Similarly, using DTW, such a path that minimizes
the sum of the vertical warping error ∑_{k=0}^{K} MY (σ(k), ψ(k), σk, ψk) and the
horizontal elastic energy EX(σ, ψ) can be found in three steps:
1. Construct an (n + 1) × (n + 1) error table ε, so that

   εi,j = MY (i, j, σi, ψj) + αX (i − j)²,   ∀ i, j ∈ [0, n]
where (σi, ψj) is the minimal-error vertical warp between column i in s and
column j in t, computed using the previous 1D algorithm.
2. Construct an (n + 1) × (n + 1) auxiliary table e in the same way as in (3.5), except
that the constant βY is replaced by βX .
3. The minimal-error path that leads to en,n encodes the piecewise linear functions
(σ, ψ) : [0, K] → [0, n], where K is the number of path segments and eσ(k),ψ(k)
are entries along the path for k = 0, . . . , K.
Next, given (σ, ψ), the vertical warps (σΣ, ψΣ) can be computed using the previous
1D algorithm. However, these minimal-error vertical warps yield 2D warps that may
vary significantly from column to column. For example, Figure 3.6 (a) shows the
result of applying a 2D warp computed from Section 14 to Section 15 in Figure
2.2, without considering the coherence among vertical warps, to a test pattern (a
sequence of uniformly spaced horizontal bars). Notice the high-frequency noise in the
warped test pattern due to the large disparity between vertical warps in neighboring
columns. Typically, the true 2D deformations induced during the sectioning process
are much smoother. To incorporate the coherence energy EC(σΣ, ψΣ), given (σ, ψ),
instead of computing a single best vertical warp (σk, ψk) for every k = 0, . . . , K, we
compute a subset of the low-error warps (σkΣ , ψkΣ) in step 3 of the 1D algorithm.
Finally, we add an extra dynamic programming pass to choose the best warp (σk, ψk)
from each group (σkΣ , ψkΣ) that minimizes the sum of the vertical warping error
∑_{k=0}^{K} MY (σ(k), ψ(k), σk, ψk) and the coherence energy EC(σΣ, ψΣ) (each vertical warp
function must first be re-parameterized to have the same domain [0, L]; see Section
3.4.5). Figure 3.6 (b) shows that the coherent warp applied to the same test pattern
exhibits a much smoother deformation.
The dominant operation in 2D warping is the construction of the error table ε in step 1,
which has time complexity O(n2m2). However, the computation can be implemented
efficiently by restricting the calculation to a subset of entries and using appropriate
banding, as described in 1D warping. Experimental results have shown that the speed
Figure 3.6 : A test pattern warped by applying a non-coherent warp (a) and a coherent warp (b) from Section 14 to Section 15 in Figure 2.2.
of our algorithm is comparable to other state-of-the-art methods while producing
higher quality warps.
Examples and comparisons
Figure 3.7 illustrates two examples of warping between the adjacent Nissl-stained cryo-sections shown in
Figure 2.2. In each example, the goal is to find the warp that deforms the source
section s to match the target section t. Sections are represented as grayscale images
of dimensions n = 850 and m = 670. We compare the result of our method⁴ to the
results of two other popular warping methods: Automated Image Registration (AIR)
[94], and NLM Insight Segmentation and Registration Toolkit (ITK) [97]. For AIR, we
used the 182-parameter 12-degree global polynomial non-rigid transformation model.⁵
For ITK, we used the FEM-based local deformable registration method.⁶ The l2-norm
between the warped source image and the target image in each method as well as the
computation time are compared in Table 3.1. Note that our dynamic programming
⁴ With elastic energy weights αX = αY = 0.00001, βX = βY = 0.02, γ = 0.005.
⁵ In our test, the align warp program was used with model menu number (m) 32, threshold (t1, t2) 2, and convergence threshold (c) 0.00001.
⁶ In our test, FEMRegistrationFilter was used with the mean square metric, single resolution, 8 pixels per element, and 40 iterations, with elasticity (E) and density (RhoC) set to 10⁵.
method achieves efficiency comparable to a global warping method, while the quality
of the resulting 2D warp is often better than a conventional local warping method
when deforming between successive tissue sections.
Observe from Figure 3.7 that the proposed warping method also handles extreme
deformations (e.g. severe folding in the second example) and image artifacts (e.g.
the big air bubble in the first example) in a reasonable manner. These random
artifacts and distortions are examples of incompatible features between the source
and target images, which imply that there does not exist a perfect warp that exactly
matches one image to the other. Such incompatible features may also include tissue
tears or appearing (or disappearing) anatomical features (e.g. the dense circular feature
in the upper middle of the target image in the first example) that are typical across
successive tissue sections. The proposed warping method matches compatible features
between the two images well, while producing moderate modifications in places where
incompatibility arises without triggering excessive deformations.
Figure 3.7 : Two examples of warping from a source image s to a target image t using the global polynomial transformation in the AIR package (AIR(s)), the deformable registration implementation in ITK (ITK(s)), and our dynamic programming method (DP(s)).
                                         Example 1             Example 2
                                       Time    l2-norm       Time    l2-norm
  Source image (s)                       –     11717.1         –     17069.7
  Polynomial transformation (AIR(s))   36 s     5907.5      252 s    12419.5
  Deformable registration (ITK(s))   1325 s     5005.9     1190 s    14314.6
  Dynamic programming (DP(s))         165 s     3951.9      173 s     9207.4
Table 3.1 : Performance comparison of three warping methods applied to the two examples in Figure 3.7. The l2-norm is computed between the source image (or warped source image) and the target image.
3.4.5 Warp operations
As we mentioned before, the piecewise linear 2D warp representation facilitates the
three operations involved in warp filtering (i.e. inversion, composition and convex
combination). First, however, we need to resolve a subtle problem that arises in our
dynamic programming approach. Horizontal warps (σ, ψ) between different images
may have different domains K, and even vertical warps (σΣ, ψΣ) in a same 2D warp
may have different domains L. For convenience, we re-parameterize every horizontal
warp σ, ψ : [0, K] → [0, n] onto a fixed domain [0, 2n] as follows:
σ∗(x) = σ(ω−1(x)), ψ∗(x) = ψ(ω−1(x)) (3.6)
where ω : [0, K] → [0, 2n] is a monotonic, invertible mapping defined as ω(x) =
σ(x) + ψ(x). Intuitively, (3.6) re-parameterizes the two functions σ and ψ along the
diagonal from (0, 0) to (n, n), as shown in Figure 3.8 (a). Every vertical warp (σk, ψk)
can be diagonalized onto the domain [0, 2m] in a similar fashion.
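In code, (3.6) is just a resampling of the pair (σ, ψ) at integer values of ω = σ + ψ. A minimal sketch, assuming the restricted path is given as sampled arrays:

```python
import numpy as np

def diagonalize(sigma, psi):
    """Re-parameterize a warp pair onto the diagonal, as in (3.6).

    sigma, psi : (K+1,) samples of the monotone path (sigma(k), psi(k))
    with sigma(K) = psi(K) = n.  Returns sigma*, psi* sampled at the
    integers of the fixed domain [0, 2n].
    """
    w = sigma + psi                      # omega(k), monotone and invertible
    grid = np.arange(w[0], w[-1] + 1)    # 0, 1, ..., 2n
    return np.interp(grid, w, sigma), np.interp(grid, w, psi)
```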
Figure 3.8 : (a) Re-parameterized functions σ∗ and ψ∗ of the restricted path (colored in gray) along the diagonal from (0, 0) to (n, n). (b) Symmetric addition of two diagonalized warps (colored in gray). The resulting warp is plotted in solid black.
1D warp operations
• Inversion: Given a pair of 1D warps {σ, ψ}, the inverted warp {σ, ψ}−1 is easily
generated by exchanging the two functions as {ψ, σ}.
• Composition: The composition of two pairs of warps {σ, ψ} and {σ̄, ψ̄} with
domain [0, 2n], represented as {σ, ψ} ◦ {σ̄, ψ̄}, is computed as a new symmetric
warp {σ(ψ⁻¹), ψ̄(σ̄⁻¹)}. Note that the new warping functions are defined on the
domain [0, n], so diagonalization (3.6) must be performed to re-parameterize
σ(ψ⁻¹) and ψ̄(σ̄⁻¹) onto the domain [0, 2n].

• Convex combination: Given two pairs of warps {σ, ψ} and {σ̄, ψ̄} with do-
main [0, 2n], the sum a{σ, ψ} + b{σ̄, ψ̄} is easily computed as the symmetric
warp {aσ + bσ̄, aψ + bψ̄}. Note that the new warping functions preserve
monotonicity, invertibility and the common domain [0, 2n]. Figure 3.8 (b) illustrates
this operation for a = b = 1/2.
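With diagonalized warps stored as sampled arrays on a shared grid, the three operations reduce to a few interpolation calls. In this sketch, compose_warps(pair1, pair2) chains pair1 (matching image a to image b) with pair2 (matching b to c); the names and the sampled representation are our own, and, as the text notes, the composite still needs re-diagonalization via (3.6).

```python
import numpy as np

def invert_warp(pair):
    """{sigma, psi}^(-1) = {psi, sigma}."""
    sigma, psi = pair
    return (psi, sigma)

def combine(a, pair1, b, pair2):
    """Convex combination a*pair1 + b*pair2 of diagonalized warps."""
    return (a * pair1[0] + b * pair2[0], a * pair1[1] + b * pair2[1])

def compose_warps(pair1, pair2):
    """Composite of pair1 (image a to b) and pair2 (image b to c).

    Re-parameterizing by the shared coordinate t of the middle image b
    gives {sigma(psi^(-1)(t)), psi2(sigma2^(-1)(t))}, defined on [0, n];
    apply (3.6) afterwards to return to the domain [0, 2n].
    """
    s1, p1 = pair1
    s2, p2 = pair2
    n = int(round(min(p1[-1], s2[-1])))
    t = np.arange(n + 1, dtype=float)    # coordinate in the middle image b
    return np.interp(t, p1, s1), np.interp(t, s2, p2)
```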
2D warp operations
Let {σ, ψ, σΣ, ψΣ} be a symmetric 2D warp between two images; the inverted warp
is easily generated by exchanging the symbols σ and ψ.
Given two 2D warps {σ, ψ, σΣ, ψΣ} and {σ̄, ψ̄, σ̄Σ, ψ̄Σ}, the convex combination can
be computed simply as {aσ + bσ̄, aψ + bψ̄, aσΣ + bσ̄Σ, aψΣ + bψ̄Σ}. Composition of the
two warps can be performed as follows. First, the horizontal warp in the composite
2D warp, denoted by {σ̂, ψ̂}, is computed as the composite of the horizontal warps
{σ, ψ} ◦ {σ̄, ψ̄}. Then, the vertical warps in the composite 2D warp, denoted by
{σ̂Σ, ψ̂Σ}, are computed so that

{σ̂k, ψ̂k} = {σk1 , ψk1} ◦ {σ̄k2 , ψ̄k2}

where k1 = σ⁻¹(σ̂(k)) and k2 = ψ̄⁻¹(ψ̂(k)).
3.5 Reconstruction framework
Having described the algorithms for computing and filtering the image warps, we
are now able to present the complete framework for reconstructing a coherent vol-
ume from a stack of parallel coronally-sliced histological sections of a mouse brain.
Reconstruction proceeds in two steps: rigid-body alignment and elastic alignment.
3.5.1 Rigid-body alignment
We first perform an initial alignment of the sections using rigid-body transformations.
Sectioning the mouse brain not only induces local tissue distortions, but also causes
each tissue section to be randomly rotated and shifted. These global deformations
need to be undone before the elastic alignment step for the following two reasons:
1. Rotational differences between successive images cannot be captured well by
our 2D warping algorithm, which is tailored to deformations characterized by
vertical variations within each column and horizontal shifting of columns.
2. Random translations and rotations of tissue sections disrupt the spatial con-
tinuity between successive sections in the un-sectioned brain, which makes it
hard to recover the correct and natural 3D shape of the brain.
To undo the global deformations, one can apply PCA (principal component analysis)
[43] to align all the sections by their centers of mass (CM) and their principal direc-
tions of variations. However, PCA-based methods typically cannot reliably determine
the orientation of an image with a near-circular shape, such as the coronal sections
of the brain. More importantly, since the brain is naturally curved, the CM of suc-
cessive sections of the un-sectioned brain form a curve in space instead of a straight
line. Aligning images by their CM will have the effect of turning a curved brain into
a straight tube. Therefore, we develop a new semi-automatic method for reliably
aligning coronal brain sections while preserving the original shape of an un-sectioned
brain.
Symmetry detection
We first use Marola symmetry measures [59] to detect the bilateral symmetry in each
coronal slice and align each image by identifying the lines of symmetry of the tissue
with the vertical mid-line of the image (see Figure 3.9). The bilateral symmetry of the
brain not only allows us to determine the exact orientation of each coronal slice, but
also restricts the translational variation between successive sections to the vertical
direction on each image. Symmetry detection and alignment is implemented as an
automatic algorithm followed by user inspection and correction.
Figure 3.9 : (a) A coronal section with the line of symmetry detected. (b) The line of symmetry is aligned to the center of the image, while the vertical displacement is yet to be determined.
Vertical alignment
The purpose of vertical alignment is to recover the shape of the un-sectioned object,
especially in the stacking direction (i.e. the direction orthogonal to the sectioning
plane). Since an un-sectioned volume is not available, we use the 31 sagittal histolog-
ical sections in Paxinos' atlas [68] as the reference to align our coronal sections.
The idea is to compute a reference CM by cutting the reference sagittal sections with
the plane corresponding to an experimental coronal section (see Figure 3.10), and to
translate the coronal section in the vertical direction so that its CM agrees vertically
with the reference CM.
The main question is how to determine the plane corresponding to each experimental
coronal section in the coordinate system of Paxinos' atlas.⁷ Note that the orientation
of slicing during the sectioning process usually deviates from the standard coronal
plane in Paxinos' atlas due to the difficulty of positioning the brain during sectioning.
However, we do know that the coronal sections are parallel and uniformly spaced, and
thus we can represent the kth section by the plane αx + βy + γz + δ − k = 0, in which
⁷ In Paxinos' atlas, we let the X axis be orthogonal to the sagittal planes, the Y axis be orthogonal
to the horizontal planes, and the Z axis be orthogonal to the coronal planes.
Figure 3.10 : Reference sagittal sections from Paxinos' atlas intersected with the plane of a coronal section at parallel line segments (thick bars) and the resulting reference CM on that plane.
the parameters α, β, γ, δ are yet to be determined.
Here we present a simple method for determining the plane equations for each coronal
section with minimal human assistance. First, a group of n landmarks specified by
the anatomists are located in Paxinos' atlas with coordinates (xi, yi, zi) for i = 1, . . . , n.
Next, we let the user determine the index si of the experimental coronal section on
which the ith landmark appears (note that the exact location of the landmark on the
section is not required). Finally, the parameters for the plane equations are computed
by minimizing the quadratic error:
E = ∑_{i=1}^{n} (αxi + βyi + γzi + δ − si)².     (3.7)
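Minimizing (3.7) is an ordinary linear least-squares problem in the four unknowns (α, β, γ, δ). A minimal sketch (the function and variable names are our own):

```python
import numpy as np

def fit_section_planes(landmarks, section_ids):
    """Least-squares fit of (alpha, beta, gamma, delta) minimizing (3.7).

    landmarks   : (n, 3) atlas coordinates (x_i, y_i, z_i)
    section_ids : (n,) index s_i of the coronal section containing landmark i
    The kth section then corresponds to the plane
    alpha*x + beta*y + gamma*z + delta - k = 0.
    """
    pts = np.asarray(landmarks, dtype=float)
    A = np.hstack([pts, np.ones((len(pts), 1))])   # rows (x_i, y_i, z_i, 1)
    params, *_ = np.linalg.lstsq(A, np.asarray(section_ids, dtype=float),
                                 rcond=None)
    return params                                  # alpha, beta, gamma, delta
```

With four or more well-spread landmarks the system is overdetermined, and the fit averages out small errors in the user-assigned section indices.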
3.5.2 Elastic alignment
After rigid-body transformations, we next need to elastically warp each coronal sec-
tion so that a coherent volume is formed. We first apply the warping algorithm
presented in Section 3.4 to compute the warps between successive images, and then
re-parameterize each image using the Gaussian-filtered warps presented in Section
3.3.
Image filters can be applied to further improve the quality of the volume reconstructed
by warp filtering. Since the distance between adjacent sections (25µm) far exceeds
the size of a cell (∼ 10µm), matching individual cells on two adjacent sections is not
practical. To avoid aligning cellular details while matching macro features (e.g. the
dark folds of the cerebellum), we can apply a smoothing filter on the tissue sections
before computing the pairwise warps. Although a Gaussian filter could be used, an
edge-preserving filter is more appropriate because boundaries between anatomical
structures are retained. Figure 3.11 shows the result of applying the bilateral filter
[88] that we used in our experiments (other filters may also be used). Note that the
filtered image exhibits clearer boundaries between different anatomical features. After
performing warp filtering on the bilaterally filtered images, the final warps would then
be applied to the original images for accurate reconstruction.
Figure 3.11 : A tissue section before (a) and after (b) applying the bilateral filter.
Even after reconstruction, image artifacts such as tissue folds and air bubbles still
remain. To clean up the image artifacts, we found that an effective solution is to
apply a majority filter to corresponding points through successive images based on
the pairwise image warps. For each pixel on an image, if the intensity of the pixel
is the highest or the lowest among its corresponding pixels in a group of nearby
images, we replace the pixel’s intensity by the average intensity of its neighbors (see
Figure 3.12 top). The filter removes outliers along matched pixels through different
images, which are often introduced by bubbles or tissue folding on a single image.
The bottom of Figure 3.12 shows a coronal section in the reconstructed volume before
and after the majority filter is applied. Observe that the dark folds in the original
image are much less apparent in the filtered image. The purpose of majority filtering
is to ensure a smooth appearance to the reconstructed volume, which is ideal for
overall visualization but may not be suitable for analysis of fine anatomical details.
For efficient implementation, majority filtering of pixel intensities can be performed
at the same time as the Gaussian filter is applied to the pairwise image warps.
Figure 3.12 : (a) A dark pixel and its corresponding pixels on neighboring sections. (b) Using a majority filter, the dark pixel is replaced with the average pixel intensity of its neighbors. (c) A reconstructed coronal section. (d) The same coronal section after the majority filter is applied to the stack.
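A sketch of this cross-section outlier filter follows. For simplicity it takes the correspondences between sections to be the identity, whereas the actual method follows the pairwise image warps; the stack shape and the simulated artifact in the test are invented:

```python
import numpy as np

def outlier_filter_stack(stack, d=3):
    """For each pixel, examine its corresponding pixels in the d sections on
    either side (identity correspondences here; the real method uses the
    pairwise warps).  If the pixel is the strict minimum or maximum along
    that line of correspondences, replace it with the average of its
    neighbors -- removing outliers caused by folds or bubbles on a single
    section."""
    stack = stack.astype(np.float64)
    out = stack.copy()
    n = len(stack)
    for k in range(n):
        lo, hi = max(0, k - d), min(n, k + d + 1)
        neigh = np.concatenate([stack[lo:k], stack[k + 1:hi]])  # exclude self
        is_min = np.all(stack[k] < neigh, axis=0)
        is_max = np.all(stack[k] > neigh, axis=0)
        replace = is_min | is_max
        out[k][replace] = neigh.mean(axis=0)[replace]
    return out
```

A dark fold confined to one section is a strict minimum along its line of correspondences and is therefore replaced, while pixels that agree with their neighbors are left untouched.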
3.6 Experimental Validations
Here we report on the validation methods and the corresponding results for evaluating
the effectiveness of the proposed reconstruction method both qualitatively (i.e. by
visual examination) and quantitatively (i.e. by computing distance and smoothness
measures). These methods form a general framework for validating any warp-based
3D reconstruction algorithm. The evaluation is carried out on two sets of data:
a synthetic volume with known distortions, and real serial sections with unknown
distortions. All computations are performed on a commodity PC with 1.5GHz AMD
Athlon processor and 2.5GB memory.
3.6.1 Using a synthetic volume
We first apply our reconstruction method to a stack of synthetically distorted cross-
sections from an existing 3D volume. In our experiment, we use a MRI volume of
the C57BL/6J mouse brain that has been generously provided by the LONI group
at UCLA. The volume has dimension 2563 with a uniform spacing of 56µm. Figure
3.13 (a) shows a sagittal cross-section of the MRI volume. We take each of the
256 coronal cross-sections of the volume and apply a B-spline based local image
distortion that varies randomly from section to section. Figure 3.13 (b) and (c)
show one of the coronal sections from the MRI volume before and after the synthetic
distortion. Notice that deformation takes place mainly in the vertical direction in
order to simulate the actual cutting distortions in real cryo-sections. The sagittal
cross-sectional view of the distorted volume is shown in Figure 3.13 (d).
The reconstruction proceeds by first computing the pairwise warps between successive
sections followed by warp filtering with width d = 5. The complete computation took
742 seconds (12 minutes and 22 seconds), with 661 seconds spent on warp computation
and 81 seconds on warp filtering and final image deformation.
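As a rough illustration of warp filtering (the precise formulation is given by equations (3.1) and (3.2), which involve true warp composition), the sketch below represents each pairwise warp as a displacement field and averages the accumulated displacements to the 2d + 1 nearest sections with Gaussian weights. Composition is approximated by summation of displacements, so this is only a caricature of the actual method:

```python
import numpy as np

def filter_warps(pairwise_disp, d, sigma=None):
    """Simplified warp filtering: the correcting warp for each section is a
    Gaussian-weighted average of the accumulated pairwise displacements to
    its neighbors within distance d.  pairwise_disp[k] is the displacement
    field carrying section k onto section k+1."""
    if sigma is None:
        sigma = max(d / 2.0, 1.0)
    n = len(pairwise_disp) + 1  # number of sections
    filtered = []
    for k in range(n):
        num, den = 0.0, 0.0
        for j in range(max(0, k - d), min(n, k + d + 1)):
            # Accumulated displacement carrying section k onto section j
            # (composition approximated by summation).
            if j >= k:
                disp = sum(pairwise_disp[k:j], np.zeros_like(pairwise_disp[0]))
            else:
                disp = -sum(pairwise_disp[j:k], np.zeros_like(pairwise_disp[0]))
            w = np.exp(-((j - k) ** 2) / (2 * sigma ** 2))
            num = num + w * disp
            den += w
        filtered.append(num / den)
    return filtered
```

Note that a constant inter-section displacement (a genuine anatomical trend rather than random jitter) yields a zero correction at interior sections, since symmetric neighbors cancel; this is the behavior one wants from a smoothing filter that should preserve real shape changes.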
Visual validation
Figure 3.13 (e) shows the sagittal cross-section at the same location through the
reconstructed stack. Our reconstruction method removes most of the high-frequency
noise caused by the synthetic distortions (see Figure 3.13 (d)) and recovers smooth,
easily recognizable anatomical structures.
Figure 3.13 : (a) A sagittal cross-section of an MRI volume of the mouse brain. (b) A coronal section from the MRI volume. (c) The same section in (b) after synthetic distortion. (d) A sagittal cross-section of the MRI volume after each coronal section has been randomly distorted. (e) A sagittal cross-section of the volume reconstructed from distorted sections using warp filtering.
Quantitative validation
We further compute the l2-norm from each coronal cross-section in the original MRI
volume to the same section after synthetic distortion and after reconstruction; the
results are plotted in Figure 3.14. The l2-norm measures quantitatively how close
the reconstructed volume is to the real volume. Observe that the sum of all the
section-wise l2-norms is reduced by 46% after automatic reconstruction.
Figure 3.14 : The l2-norm from each of the 256 coronal sections of an MRI volume to the same section after synthetic distortion (top curve) and to the reconstructed section after warp filtering (bottom curve).
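The section-wise l2 validation amounts to one norm per section; `original`, `distorted`, and `reconstructed` in the usage comment are hypothetical volume arrays standing in for the data behind Figure 3.14:

```python
import numpy as np

def sectionwise_l2(vol_a, vol_b):
    """l2-norm between corresponding coronal sections of two volumes
    (both of shape (num_sections, height, width))."""
    return np.array([np.linalg.norm(a - b) for a, b in zip(vol_a, vol_b)])

# Hypothetical usage, with `original`, `distorted`, `reconstructed` volumes:
#   reduction = 1 - (sectionwise_l2(original, reconstructed).sum()
#                    / sectionwise_l2(original, distorted).sum())
```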
3.6.2 Using serial sections
We next test our method in reconstructing a 3D volumetric representation of the
cell-density for the mouse brain from serial sections. The input is a stack of 350
Nissl-stained images acquired by coronally cryo-sectioning a single frozen adult mouse
brain (data preparation is detailed in Chapter 2). Each image is 850× 670 pixels at
a resolution of 25µm per pixel. The reconstruction consists of rigid-body alignment
and elastic alignment using warp filtering.
Figure 3.15 (a) shows a synthetic cross-section cut in the sagittal direction through
the original stack of 350 coronal sections, and Figure 3.15 (b) shows a cross-section
cut at the same position after rigid-body alignment using the line of symmetry in
each coronal section. In Figure 3.15 (b), the CM of each coronal section is initially
positioned at the middle of the image. To determine the correct vertical alignment,
we let the user specify the indices of the coronal sections containing the 9 landmarks
whose coordinates (expressed in the Paxinos atlas) are shown in Table 3.2. The plane
equation αx + βy + γz + δ = k, locating the k-th coronal section in atlas coordinates,
is fitted to best match the landmarks on their specified sections. Table 3.3 compares the parameters and the
resulting fitting error E (as defined in equation 3.7) computed in two scenarios: using
standard coronal planes (i.e., set α = β = 0) and using arbitrary plane orientations.
Note that allowing deviation of the plane from the standard coronal direction reduces
the fitting error by 90.7%. In particular, a large non-zero value of β indicates a signif-
icant tilting of the slicing plane around the X axis (about 8.7 degrees) during physical
sectioning. Figure 3.15 (c) shows the result after aligning the sections vertically with
the reference CM computed by intersecting the computed planes with the sagittal
sections in the Paxinos atlas. Note that the CMs of the aligned sections now form a curve
in space, and the sagittal profile of the stack resembles that of a real sagittal section
shown in Figure 3.15 (d).
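The vertical-alignment step above can be sketched as an ordinary least-squares fit of αx + βy + γz + δ = k to the landmark coordinates and section indices of Table 3.2. The exact definition of the fitting error E is given in equation (3.7); the sum of squared residuals is used here as a stand-in:

```python
import numpy as np

def fit_section_planes(landmarks, indices, coronal_only=False):
    """Least-squares fit of alpha*x + beta*y + gamma*z + delta = k to landmark
    coordinates and their coronal section indices k.  coronal_only=True
    constrains alpha = beta = 0 (standard coronal planes).  Returns the
    parameters (alpha, beta, gamma, delta) and the sum of squared residuals,
    a stand-in for the fitting error E of equation (3.7)."""
    P = np.asarray(landmarks, dtype=float)
    k = np.asarray(indices, dtype=float)
    if coronal_only:
        A = np.column_stack([P[:, 2], np.ones(len(k))])   # fit gamma, delta only
        (gamma, delta), *_ = np.linalg.lstsq(A, k, rcond=None)
        params = np.array([0.0, 0.0, gamma, delta])
    else:
        A = np.column_stack([P, np.ones(len(k))])          # fit all four
        params, *_ = np.linalg.lstsq(A, k, rcond=None)
    resid = P @ params[:3] + params[3] - k
    return params, float(resid @ resid)
```

Running both variants on the same landmarks reproduces the comparison of Table 3.3: the constrained fit cannot absorb a tilt of the slicing plane, so its error is much larger.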
Computing 349 pairwise warps between successive sections consumes 27MB and 976
minutes (16 hours and 16 minutes), averaging 168 seconds for each single warp. The
performance of the subsequent warp filtering stage at different filtering width d (de-
fined in Section 3.3.1) is summarized in Table 3.4. Since every pairwise warp φk,k+1 is
computed independently (and so is every filtered warp Φk), both warping and filtering
can be greatly sped up by distributing the computation of different warps to differ-
ent processors. Using this simple parallel scheme on a cluster of 16 processors, for
example, all pairwise warps would be computed within an hour, while warp filtering
would finish in approximately ten minutes at filtering width d = 20.
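The parallel scheme is a plain map over independent section pairs. The sketch below uses a thread pool and a placeholder warp routine so that it is self-contained; the actual deployment would run the warp computations as separate processes on cluster nodes:

```python
from concurrent.futures import ThreadPoolExecutor

def compute_pairwise_warp(pair):
    """Placeholder for the expensive, independent warp computation between
    sections k and k+1; the real implementation solves the elastic image
    registration of Section 3.4.  Here it just returns a sum of absolute
    differences as a stand-in result."""
    k, (img_a, img_b) = pair
    return k, sum(abs(a - b) for a, b in zip(img_a, img_b))

def parallel_warps(sections, workers=4):
    """Distribute the 349 independent pairwise warp computations; since no
    pair depends on another, a simple map parallelizes perfectly."""
    pairs = list(enumerate(zip(sections, sections[1:])))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = dict(pool.map(compute_pairwise_warp, pairs))
    return [results[k] for k in range(len(sections) - 1)]
```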
Figure 3.15 : Rigid-body alignment of coronal sections, viewed as synthetic cross-section cuts in the sagittal direction through the stack of all sections. (a): the original sections. (b): sections aligned using their lines of symmetry, whose centers of mass are initially positioned in the middle of each image. (c): sections aligned vertically using reference centers of mass. (d): a real histological sagittal section.
Landmarks X (Lateral) Y (Interaural) Z (Bregma) Section Index
1 −2.25 1.5 −4 121
2 2.25 1.5 −4 123
3 0 4.3 −2.5 152
4 0.6 4 −0.9 209
5 −0.6 4 −0.9 210
6 0 1.5 0.25 255
7 0 3 1.15 280
8 0.3 2 2.5 325
9 −0.3 2 2.5 327
Table 3.2 : The coordinates (in mm) of the landmarks in the Paxinos atlas, and the indices of the coronal sections in the stack that contain the landmarks.
Standard Coronal Plane Arbitrary Plane
α 0 0.29932
β 0 −4.90077
γ 31.9736 31.9222
δ 243.405 256.331
E 300.081 27.7732
Table 3.3 : Parameters in the plane equation computed for each coronal section and the resulting fitting error.
Filter width d Time (min) Memory (MB) Mean Sk Max Sk
0 (Original stack) – – 85.43 1165.68
1 4.4 37 7.71 74.39
2 17.2 91 3.15 24.83
5 46.6 178 1.33 7.04
10 83.4 269 1.09 6.22
20 172.1 449 1.04 6.15
Table 3.4 : Comparison of performance for different filtering widths d and the corresponding mean and maximum smoothness measure Sk of the reconstructed volume.
Visual validation
Figures 3.16 (a2) and 3.17 (b2) show the result of reconstruction at filtering width d =
20. Note that anatomical features become much more coherent in the reconstructed
volume. The reconstruction also recovers the shape of some key structures, such as
the folds in the cerebellum to the left and the hippocampus region in the middle;
these profiles are barely recognizable or completely illegible in the original stack. For
qualitative validation, we compare the reconstructed volume to real histology sections
from the Paxinos atlas [68] at similar sagittal (Figure 3.16 (a4)) and horizontal (Figure
3.17 (b4)) locations. We find that the shapes of the anatomical structures recovered in
the reconstructed volume closely resemble the shapes of the corresponding structures
in real tissue sections.
Quantitative validation
Objective validation with serial sections is difficult due to the lack of knowledge of
the original object before sectioning. One commonly used technique is to measure
the correlation between adjacent sections before and after reconstruction, as done
Figure 3.16 : Comparison of sagittal cross-sections: the original stack of 350 brain sections after rigid-body alignment (a1), the reconstructed volume (a2), after applying the majority image filter (a3), and a real tissue section from the Paxinos atlas [68] at sagittal plate No. 110 (a4).
by Wirtz et al. [93]. However, as pointed out by Guest and Baldock [36], such
a criterion is inappropriate, since the purpose of reconstruction is not to exactly match
one section to the next section. Inspired by the smoothness measure proposed by
Guest and Baldock [36], we propose the following two-step procedure to measure the
smoothness of our reconstruction:
Correspondence evaluation
We first evaluate the quality of the pairwise warps φk,k+1 that establish the corre-
spondences between adjacent images gk, gk+1 for k = 1, . . . , N . This evaluation is
accomplished by computing the l2-norm between each warped image gk ◦ φk,k+1 and
target image gk+1, as plotted in Figure 3.18; the l2-norm reveals the quality of the
Figure 3.17 : Comparison of horizontal cross-sections: the original stack of 350 brain sections after rigid-body alignment (b1), the reconstructed volume (b2), after applying the majority image filter (b3), and a real tissue section from the Paxinos atlas [68] at horizontal plate No. 157 (b4).
match. For comparison, we plot the l2-norm between successive images in the origi-
nal stack as well as in the rigidly registered stack. Observe that the pairwise warps
reduce the pixel-wise differences between neighboring sections substantially, by 60%
to 90% in comparison to the original stack. Correspondence evaluation is a necessary
step because a smoothness measure over corresponding features is meaningless if the
correspondences themselves are computed incorrectly.
Figure 3.18 : The l2-norm between successive images in the stack of 350 coronal sections before (top curve) and after (middle curve) rigid-body alignment. The elastic warping error between successive sections (bottom curve) is computed as the l2-norm between the warped image gk ◦ Φk and image gk+1 for k = 1, ..., 349.
Smoothness evaluation
We compute a smoothness measure Sk on each section k as follows:
S_k = \frac{\sum_{i=0}^{n} \sum_{j=0}^{m} \left( B(i,j) + C(i,j) - 2A(i,j) \right)^2}{nm}    (3.8)
where A(i,j) = Φ_k^{-1}(i,j) denotes the location of (i,j) in the warped image g_k ◦ Φ_k,
and B(i,j) = Φ_{k-1}^{-1}(φ_{k-1,k}(i,j)) and C(i,j) = Φ_{k+1}^{-1}(φ_{k,k+1}^{-1}(i,j)) denote
the locations of the corresponding points of (i,j) on the two neighboring sections
g_{k-1} ◦ Φ_{k-1} and g_{k+1} ◦ Φ_{k+1} after warping. As a reminder, φ refers to pairwise
warps between adjacent
sections, and Φ refers to the filtered warp. Sk is similar to the CAM measure proposed
by [36], which effectively measures how close a point lies to its corresponding points on
neighboring sections. For comparison, the smoothness measure of each section before
reconstruction is computed by setting Φk−1, Φk and Φk+1 to the identity function.
Table 3.4 reports the mean and maximum smoothness measure Sk for all 350 sections
in the reconstructed volume for different filtering widths d. Observe that Sk decreases
by an order of magnitude just by warp filtering using a single neighbor on each side
of a section (i.e., d = 1). A reduction by two orders of magnitude is observed in both
the mean and maximum Sk for d ≥ 5. To interpret the numbers, we notice from (3.8)
that Sk reflects the average distance from every point A(i, j) to the mid-point of its
two corresponding points (B(i, j) + C(i, j))/2. For example, at d = 20, Sk = 1.04,
hence the average distance ‖(B(i, j)+C(i, j))/2−A(i, j)‖ = 0.51. In other words, in
the reconstructed volume, each point deviates on average from the middle of its two
corresponding points on the neighboring sections by only 0.51 pixels. These results
appear to improve on the experimental results of a global reconstruction method
reported in [36] that uses a similar measure of smoothness.
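Given per-pixel location arrays for A, B, and C as defined in (3.8), the smoothness measure reduces to a mean squared vector norm. This sketch assumes the correspondence arrays have already been computed from the warps:

```python
import numpy as np

def smoothness_measure(A, B, C):
    """S_k as the mean over pixels of ||B + C - 2A||^2, where A holds the
    warped location of each pixel of section k, and B, C the locations of
    its corresponding points on sections k-1 and k+1 (arrays of shape
    (n, m, 2)).  A perfectly smooth reconstruction -- every point midway
    between its two correspondents -- gives S_k = 0."""
    diff = B + C - 2.0 * A
    return float(np.mean(np.sum(diff**2, axis=-1)))
```

Consistent with the interpretation in the text, if the point-to-midpoint distances were uniform, the average distance would be sqrt(S_k)/2; e.g. S_k = 1.04 gives 0.51 pixels.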
Figures 3.16 (a3) and 3.17 (b3) show the cross-section cuts of the reconstructed vol-
ume after incorporating the two image filters (the majority filter is applied to 3 neigh-
bors on each side of a given pixel). In comparison with Figures 3.16 (a2) and 3.17
(b2), computing warps on edge-enhanced images better reconstructs the anatomi-
cal boundaries, and majority filtering along lines of corresponding points through
successive sections improves the smoothness of the appearance.
3.7 Discussion
3.7.1 Smoothness of the volume
The smoothness of the reconstructed volume using our method is controlled by the
filtering width d. Inappropriate choices of d, however, may cause the method to
either flatten the highly-curved or sharp features (when d is too big) or fail to remove
the distortion errors (when d is too small). This problem of producing the desired
amount of smoothness in the reconstructed volume arises in all smoothness-based
reconstruction methods such as [36] and [93].
A good choice of d depends on the input data. In most cases, the scale of random tissue
distortions is small in comparison to the size of anatomical features, and the value
of d can be determined via experiments and visual evaluation. For example, Figure
3.19 shows the hippocampus region in a sagittal cross-section cut through the original
stack of 350 images and the reconstructed stack with filter width d = 5, 10, 20. Note
that although a choice of d = 5 already achieves a good local smoothness measure in
Table 3.4, the dark stripe at the top of hippocampus still exhibits low-frequency noise
inherent in the input data, which spans a larger neighborhood and requires a wider
filtering width. At d = 20, the waviness is gone and the region becomes smooth. Still,
the reconstructed volume needs to be examined to ensure that other key anatomical
features have not been overly smoothed.
Figure 3.19 : Close-up views of the hippocampus region in the synthetic sagittal cross-section of the rigid-body aligned stack (a) and the elastically aligned stack with filter width d = 5 (b), d = 10 (c) and d = 20 (d). The region is indicated by red boxes in Figure 3.16 (a1) and (a2).
Sometimes a reasonable choice of d fails to exist, for example, when the scale of the
largest distortion noise exceeds that of the smallest anatomical feature. To remove
large distortions in one place while preserving small meaningful features in other lo-
cations, we use a more flexible method by finding independent filter widths dk for
each section gk or even independent widths dk(i, j) for each individual pixel gk(i, j).
These local filter widths adapt to the regional noise-signal ratio based on the corre-
spondence information between adjacent sections. Such local choices of filter widths
can easily be incorporated into the warp filtering framework in (3.1) and (3.2). More-
over, edge-preserving filters, such as a bilateral filter [42], can be used in place of the
Gaussian filter during warp filtering to preserve sharp features in the reconstructed
volume that are often flattened out using the Gaussian filter.
3.7.2 Limitation of 2D warps
Any restricted image deformation, such as the regularized 2D warp described in Sec-
tion 3.4, has its limitations. Our warp representation models only those 2D deforma-
tions that can be characterized by non-uniform deformations in the vertical direction
and a simple uniform deformation in the horizontal direction. Eligible deformations
include translation, scaling, uniform growth/shrinkage of anatomical structures in the
horizontal direction, and sectioning distortions in the vertical direction. Deforma-
tions that are not well modelled include rotation and non-uniform growth/shrinkage
of anatomical structures in the horizontal direction. Although rotational differences
between adjacent sections can be eliminated using rigid-body registration, we do ob-
serve, in a few cases, non-uniform variance of anatomical structures in the horizontal
direction that are not modelled accurately by our method. Even for these rare cases,
our method behaves in a reasonable manner, as observed in the uniform appearance
of the graph of warping error in Figure 3.18. To better handle a wider range of
2D deformations, we are currently investigating more flexible ways of decomposing
2D warps into 1D warps, such as allowing columns to bend and break during the
horizontal deformation as discussed in [55, 90, 71].
While we will continue our research on more accurate and efficient warping meth-
ods, notice that comparing the quality of 2D warps generated by various methods
is difficult due to the lack of standard benchmark data. In an effort to promote
the development of benchmarks for 2D image warping methods as well as for 3D
reconstruction methods, we have made the 350 histological sections in our exper-
iment as well as our preliminary reconstruction results available for download at
http://www.geneatlas.org/gene/data/histology.zip. We encourage those interested
in this problem to apply their methods to this data.
Chapter 4
Surface Construction
Once the distortions on the tissue sections have been corrected and a smooth image
volume has been generated, we can go on to build a surface model representing the
anatomy of the mouse brain. Given a stack of parallel 2D sections of an anatomical
structure, a 3D surface can be constructed in three steps:
1. Annotate each section with anatomical regions that share common boundary
curves. Figure 4.1 (a,b) show two neighboring annotated sections of the mouse
brain.
2. Connect 2D boundary curves on neighboring sections to form a 3D boundary
surface between the two sections. Figure 4.1 (c) shows an example of a boundary
surface between the sections in (a,b).
3. Concatenate the surfaces constructed in Step 2 between successive sections to
form the model of the entire anatomical structure.
In our experiment, annotation of the mouse brain sections is provided by anatomists
at the Baylor College of Medicine. Since surface concatenation is straightforward,
the key step is generating a surface between the curves on two neighboring sections.
Surface generation is a challenging task because the annotated sections may contain
boundary curves of complex topology and geometry, which may vary significantly
from section to section. For example, the two sections shown in Figure 4.1 (a,b)
are only 25 µm apart in the mouse brain, yet the complexity of annotation and the
Figure 4.1 : Constructing a surface model from planar tissue sections: two neighboring tissue sections annotated with anatomical regions partitioned by boundary curves (a,b); a boundary surface between the two planes that connects the boundary curves on the two sections (c).
difference in the annotated regions make it hard even for a human to create the
intermediate surface.
Here we present a new surface construction technique that is capable of building
geometrically and topologically correct surface networks between annotated tissue
sections with arbitrarily complex boundary curves. Using this technique, we can
construct a high-resolution model of the mouse brain automatically from the 350
planar tissue sections that have undergone distortion-correction as in the previous
chapter. The new technique also allows experts to conveniently adjust the topology
of the reconstructed surface to comply with the topology of the actual brain.
4.1 Introduction
4.1.1 Surface from curves
A typical way of building a 3D surface model is from 2D curves on planar sections.
Here we consider a stack of uniformly spaced parallel sections. Each section contains
a non self-intersecting curve network that partitions the section into closed regions,
as shown in Figure 4.1 (a,b). In particular, each region is associated with a material
type. In our application, each annotated region is associated with its anatomical type
(e.g. cortex, cerebellum, etc.), which are represented by different colors in Figure 4.1.
The goal is to connect the curve networks on each section to form a surface network
in 3D that partitions the entire space into volumes associated with corresponding
materials. The surface network should possess the following properties:
1. Interpolation: the surface network should exactly interpolate the curve networks
on each section, and the material types of the volumes should agree with those
of the regions on each section.
2. Geometric correctness: the surface network should be a closed mesh partitioning
the space into disjoint volumes. In particular, the mesh should not contain self-
intersections or gaps.
3. Topological correctness: the topology of the surface network should agree with
the topology of the original object (e.g. the un-sectioned mouse brain in our
study) from which the curve networks are derived.
Note that the second property is particularly important for building a 3D atlas of the
mouse brain. To map experimental brain data on the annotated brain atlas, we must
associate any point in the atlas with a unique anatomical structure. This requirement
will be violated if the surfaces bounding different anatomical structures intersect or
leave gaps. A geometrically correct surface network will generate anatomical volumes
that are exact partitions of the brain.
Given curve networks on two parallel sections, there usually exist multiple, geomet-
rically correct, yet topologically distinct surface networks. For example, Figure 4.2
(d,e,f) show three topologically different surface networks connecting the curve net-
works in Figure 4.2 (a,b). Since it is not possible to infer the topology of the original
Figure 4.2 : Curve networks on two neighboring sections (a,b), an invalid surface network with self-intersecting geometry (c), and three geometrically correct surface networks with various topology (d,e,f).
object solely from the curve networks, manual adjustment is often necessary to pro-
duce topologically correct surfaces.
4.1.2 Background
Surface reconstruction from curve networks has been studied extensively in the past
three decades, and numerous methods have been proposed. In many fields the problem
is also known as contour interpolation or contour tiling, and the curve networks on
each section are often referred to as contour lines [48]. Here we attempt only to
provide a brief review of some of these methods, and we refer interested readers to
excellent surveys by Hagen [38] and by Sloan and Painter [80].
With few exceptions [92], previous methods are designed to build a surface between
sections that are partitioned by the curve network into regions of only two material
types: outside and inside (e.g. air and tissue or empty and solid). Early approaches
attempt to find a triangular tiling that connects the contour lines on the neighbor-
ing sections while optimizing a quality measure, such as the minimum surface area
criterion in the work of Fuchs et al. [30]. While Fuchs's and later approaches [24, 62]
produce fairly good-looking surfaces connecting simple contour lines, these techniques
may generate surfaces with either self-intersections or gaps between complex sections
containing multiple and nested contours.
Recent work on surface reconstruction from two-material sections focuses on handling
curve networks of arbitrary topology and geometry. Boissonnat presents a method
based on Delaunay triangulations [12], which is refined by Geiger [34]. Cheng et
al. [23] improve Boissonnat’s method further to generate surfaces without having to
compute 3D Delaunay triangulations. Several researchers have used implicit functions
to interpolate between 2D contours, such as the method of Herman et al. [41] and
Csebfalvi et al. [27]. The variational approach of Turk et al. [89] combines the two
steps of building and interpolating implicit functions. Generating smooth surfaces
by solving Partial Differential Equations (PDEs) has also been considered by Bloor
et al. [11] and later by Chai et al. [20], who further studied smooth connections of
neighboring surfaces sharing a common contour line. Finally, a group of researchers
([9, 6, 66, 51, 8]) have developed methods based on computing orthogonal projections
of two neighboring sections in order to construct surfaces with correct geometry and
reasonable topology. In particular, the methods of Oliva et al. [66] and Barequet
et al. [8] compute the areas of differences on the projected sections and triangulate
these areas using Voronoi diagrams [66] or Straight Skeletons [8].
The above methods all have severe difficulty in extending to sections containing re-
gions associated with multiple materials. A common approach [34] is to treat each
material type separately and build boundary surfaces for one material at a time.
This approach, however, will result in invalid geometry such as self-intersections. For
example, each of the two sections shown in Figure 4.2 (a,b) contains three materi-
als colored as white, green and purple 1. Building surfaces separately for the green
material and the purple material will result in self-intersecting geometry as shown in
Figure 4.2 (c).
To the best of our knowledge, the only method for surface reconstruction between
sections with multiple material types is the approach proposed recently by Weinstein
[92]. Weinstein’s method builds surfaces simultaneously for all materials using a
volumetric approach. Each section is first voxelized onto a uniform grid, where each
grid point is associated with a material type; then the surfaces between volumes
of different materials are generated using contouring on the voxel grid. Weinstein’s
approach produces geometrically correct surfaces for multiple-material sections that
may contain curve networks of arbitrary topology. However, due to the use of discrete
voxels, the surface produced by Weinstein’s approach only approximates rather than
interpolates the curve network on each section.
Most existing algorithms, and particularly recent ones for handling complex curve
networks, are designed to completely automate surface reconstruction, leaving little
room for the user to adjust the topology of the resulting surface network. The few
exceptions include the method of Christiansen and Sederberg [24], where user inter-
action is required to guide the triangulation in cases of excessive ambiguity. Software
packages, such as SURFdriver [65], allow a limited degree of user interaction dur-
ing surface reconstruction, such as capping and connecting regions. However, these
mechanisms for user interaction are restricted to sections containing two material
types. The topology of the surface network becomes much more complicated with
the increase in the number of materials due to the interaction between volumes of
different material types. In particular, topology modification methods that treat each
material type separately cannot guarantee to create a geometrically correct surface.
1 In grayscale printing, green is dark gray while purple is light gray.
4.1.3 Proposed method
In the next section, we introduce a new technique for surface reconstruction in the
presence of multiple materials. Our method automatically creates a geometrically
correct surface with reasonable topology from serial sections containing
arbitrarily complex curve networks and any number of material types. Our technique
is inspired by the projection-based approach taken by Oliva et al. [66] and Barequet
et al. [8], but our method can handle multiple material types. Unlike Weinstein’s
method [92], our approach produces 3D surface networks that exactly interpolate the
2D curve networks on the input sections. For example, Figure 4.2 (d) shows the result
produced automatically by our method given the two sections in (a,b). Notice that
the purple volume extends into the green volume without causing self-intersecting
geometry.
Another novelty of our method lies in its ability to allow users to modify the topology
of the surface network in a convenient and creative manner. Our method is composed
of two phases: topology specification and geometry construction. While a default
surface topology is assumed based on orthogonal projections of the curve networks,
the user is free to modify this basic topology and to create a wide range of variations.
The geometry construction phase is guaranteed to produce an intersection-free and gap-
free surface network consistent with the specified topology. For example, Figure 4.2
(e,f) show two variations to the surface network in (d), where the connectivity of the
purple and green volumes has been modified manually.
4.2 Surface generation for multiple materials
4.2.1 Algorithm overview
Like most other methods for building surfaces from parallel sections, our method first
computes a layer of surface network between every two neighboring sections and then
concatenates successive layers to form a complete model. Given two neighboring
sections (an example is shown in Figure 4.3 (a,b)), our method proceeds in three
steps:
1. Project the curve networks from the two sections orthogonally onto a common
plane, referred to as the mid-plane (Figure 4.3 (c)). The projected curves
partition the mid-plane into closed regions.
2. Each closed region on the mid-plane is the orthogonal projection of a cylindrical
volume (or a wedge) between the two sections. Create a topology graph, shown
in Figure 4.3 (d), that describes how each wedge is partitioned vertically into
slabs, and how slabs between neighboring wedges are connected.
3. Polygonalize the boundary between two connected slabs of different materials.
The polygonal surface network partitions the space into volumes corresponding
to connected slabs of the same material in the topology graph (Figure 4.3 (e)).
The key difference between our method and most other methods for two-material
surface construction is that instead of considering how 2D curves are connected to
form 3D surfaces, we consider how 2D regions on each section are connected to form
3D volumes. In our method, each volume consists of basic volumetric units called
slabs. In the topology graph, connected slabs of the same material represent a con-
tinuous volume, while connected slabs of different materials represent the interface
between neighboring volumes. The volumetric approach provides an intuitive way of
representing and modifying the material transition between sections, and naturally
results in a closed surface network without self-intersections or gaps.
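The role of the topology graph can be illustrated with a small sketch: nodes are (wedge, material) slabs, connected slabs of the same material are flood-filled into volumes, and edges between different materials mark where boundary surface must be generated. All identifiers here are invented for illustration:

```python
from collections import defaultdict

def group_volumes(slabs, edges):
    """slabs maps a slab id to its material; edges connect slabs of
    neighboring wedges (or stacked slabs within one wedge).  Connected
    slabs of the SAME material form one continuous volume; edges joining
    different materials are the interfaces to be polygonalized."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    interfaces = [(a, b) for a, b in edges if slabs[a] != slabs[b]]
    seen, volumes = set(), []
    for start in slabs:
        if start in seen:
            continue
        # Flood-fill over same-material neighbors.
        comp, stack = [], [start]
        seen.add(start)
        while stack:
            s = stack.pop()
            comp.append(s)
            for t in adj[s]:
                if t not in seen and slabs[t] == slabs[s]:
                    seen.add(t)
                    stack.append(t)
        volumes.append(sorted(comp))
    return volumes, interfaces
```

Editing the edge set of this graph is exactly the kind of topology modification described later: reconnecting or splitting same-material slabs changes which volumes merge, while the geometry construction phase remains responsible for realizing the interfaces without self-intersections.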
The automatic surface generation produces a surface network that relates regions on
different sections with overlapping orthogonal projections. However, one can revisit
and modify the graph structure to alter how volumes are formed between the two
sections. For example, Figure 4.3 (f,g) show a modified topology graph from the
Figure 4.3 : Surface generation: boundary curves on two neighboring sections (a,b),
wedges partitioned by the orthogonal projections of the boundary curves on the mid-
plane (c), the default material graph (d) and the triangulated surface (e), a modified
material graph (f) and the resulting surface (g).
default graph in (d) and the resulting surface. The polygonalization step is guaranteed
to produce a geometrically correct surface network given any valid graph, while at
the same time ensuring exact interpolation of the curve networks on the two sections.
4.2.2 Projection
Given two neighboring sections, orthogonal projection of the curve networks onto the
mid-plane is performed in two steps. First, we compute intersection points of the
curves from different sections projected on the mid-plane. These intersections are
added to the existing set of vertices in the curve network on each section². Second,
closed regions are identified on the mid-plane by tracing closed boundary cycles in the
projected curve network. Note that a region may be bounded by multiple boundary
cycles, which can be detected using a scan-line based algorithm.
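The first step can be illustrated with a minimal sketch (the helper names are hypothetical, not from the thesis implementation; curves are assumed to be given as polyline segments already dropped to 2D on the mid-plane):

```python
# Hypothetical sketch of the projection step: segments from the top and
# bottom sections are intersected pairwise on the mid-plane, and each
# intersection point becomes a new vertex of the projected curve network.

def seg_intersect(p, q, r, s):
    """Return the intersection point of segments p-q and r-s, or None."""
    d = (q[0] - p[0]) * (s[1] - r[1]) - (q[1] - p[1]) * (s[0] - r[0])
    if d == 0:                       # parallel or degenerate segments
        return None
    t = ((r[0] - p[0]) * (s[1] - r[1]) - (r[1] - p[1]) * (s[0] - r[0])) / d
    u = ((r[0] - p[0]) * (q[1] - p[1]) - (r[1] - p[1]) * (q[0] - p[0])) / d
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
    return None

def project_and_split(top_segs, bot_segs):
    """Collect intersection points between projected top/bottom segments."""
    points = []
    for p, q in top_segs:
        for r, s in bot_segs:
            x = seg_intersect(p, q, r, s)
            if x is not None:
                points.append(x)
    return points
```

A production version would also deduplicate coincident intersections and handle near-parallel segments robustly; the sketch shows only the core pairwise test.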
4.2.3 Topology graph
Each closed region partitioned by the projected curve networks on the mid-plane
becomes the cross-section of a wedge extending from the top to the bottom section.
Since the projection of a wedge on each section is a subset of some region on that
section, each wedge is associated with a distinct material at its top and bottom. This
observation suggests the vertical transition of materials within each wedge.
Definition
The topology graph is built to determine how the space between two sections is
partitioned into volumes of various materials. A node in the topology graph represents
a slab of volume within a wedge associated with a particular material. In the example
of Figure 4.3 (d), the nodes belonging to the same wedge are organized along a vertical
line to illustrate the vertical composition of slabs within a wedge. In particular, the
top and bottom slabs in each wedge are associated with the corresponding materials
on the top and bottom section, and successive slabs in the same wedge consist of
different materials.
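As a minimal sketch of this bookkeeping (the class and method names are my own, not the thesis implementation), a wedge can be modeled as a bottom-to-top stack of material labels in which successive slabs must carry different materials:

```python
# Hypothetical representation of topology-graph nodes within one wedge.

class Wedge:
    """A wedge as a bottom-to-top stack of material labels (one per slab)."""

    def __init__(self, bottom_material, top_material):
        # Default stack: a single slab if both sections agree, two otherwise.
        if bottom_material == top_material:
            self.slabs = [bottom_material]
        else:
            self.slabs = [bottom_material, top_material]

    def insert_slab(self, index, material):
        """Insert an intermediate slab; its neighbors must differ in material."""
        below = self.slabs[index - 1] if index > 0 else None
        above = self.slabs[index] if index < len(self.slabs) else None
        if material == below or material == above:
            raise ValueError("successive slabs must have different materials")
        self.slabs.insert(index, material)
```

For instance, inserting a white slab into a wedge with a purple bottom and a green top (as in the modification of wedge C in Figure 4.3 (f)) yields the stack purple, white, green.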
The edges in the topology graph connect successive nodes within a wedge or nodes
between neighboring wedges. The edges determine how volumes are formed from
²To maintain topological connectivity between surface networks generated for successive sections,
the curves on each section are augmented by intersection points with curves on its two neighboring
sections.
slabs and how volumes of different materials interact. Specifically,
• An edge connecting nodes of the same material (shown as thickened lines in
Figure 4.3 (d)) represents a continuous volume formed by the corresponding
slabs.
• An edge connecting nodes of different materials (shown as dotted lines in
Figure 4.3 (d)) represents a boundary between the volumes to which the corre-
sponding slabs belong.
By default, the topology graph contains edges connecting successive nodes in the
same wedge, as well as the top and bottom nodes of neighboring wedges, as shown
in Figure 4.3 (d). The default graph directly relates two regions on different sec-
tions with overlapping projections: if two regions consist of the same material, they
are connected into a volume; otherwise, the regions generate two volumes sharing a
common boundary, as shown in Figure 4.3 (d).
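A sketch of the default construction under these conventions (the data layout and function name are assumptions of this illustration): nodes are (wedge, slab-index) pairs with slabs indexed bottom-to-top, and each edge is classified by whether its two materials match:

```python
# Hypothetical sketch of default topology-graph construction.

def default_graph(wedge_slabs, adjacency):
    """wedge_slabs: {wedge id: [materials bottom-to-top]};
    adjacency: set of frozensets, each naming two neighboring wedges."""
    volume_edges, boundary_edges = set(), set()

    def add(a, b, ma, mb):
        # Same material -> part of one volume; different -> boundary edge.
        (volume_edges if ma == mb else boundary_edges).add(frozenset([a, b]))

    for w, slabs in wedge_slabs.items():
        for i in range(len(slabs) - 1):       # successive slabs in a wedge
            add((w, i), (w, i + 1), slabs[i], slabs[i + 1])
    for pair in adjacency:                    # bottom/top links across wedges
        w1, w2 = sorted(pair)
        s1, s2 = wedge_slabs[w1], wedge_slabs[w2]
        add((w1, 0), (w2, 0), s1[0], s2[0])                        # bottoms
        add((w1, len(s1) - 1), (w2, len(s2) - 1), s1[-1], s2[-1])  # tops
    return volume_edges, boundary_edges
```

With a single-slab white wedge A next to a wedge B that is white at the bottom and red at the top, the bottoms join into one volume while the top link and B's internal edge become boundary edges.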
Graph variation
The default topology graph can be modified by inserting nodes between the top
and bottom nodes of each wedge, and by adding edges connecting inserted nodes to
existing ones. The addition of nodes and edges effectively builds intermediate layers of
structures between the two sections, making it possible to construct complex surfaces.
For example, the default graph in Figure 4.3 (d) can be modified by inserting a node
associated with the white material to wedge C and connecting the new node to the
existing white nodes in neighboring wedges B and D, as shown in Figure 4.3 (f).
While the purple and green volumes share a common boundary in the default graph,
they are separated by a continuous volume of white material in the modified graph,
as reflected in the resulting surface network in Figure 4.3 (g).
Our topology representation allows the user to design complicated surface topology
Figure 4.4 : The projection of curve networks from the two sections in Figure 4.2
(a,b) (a), the default material graph (b) and the two variations (c,d) that generate
the surface networks in Figure 4.2 (c,d,e).
by modifying a simple graph structure. Although an arbitrary number of nodes and
edges can be added to the graph, the addition should adhere to the following rules to
ensure a valid topology:
1. To prevent self-intersecting volumes, intersecting edges are prohibited in the
topology graph. Assuming we number the successive nodes in each wedge from
bottom to top, if two edges connect the ith and jth node of one wedge to the
kth and lth node of a neighboring wedge, we require (i − j)(k − l) ≥ 0.
2. To prevent isolated volumes between two sections, each newly added node in
the topology graph must be connected to a top or a bottom node of some wedge
through a chain of edges between nodes with the same material type.
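Rule 1 can be checked mechanically. The sketch below (a hypothetical helper, assuming the edges between one fixed pair of neighboring wedges are given as (i, k) slab-index pairs) tests every pair of edges against the non-crossing condition:

```python
# Hypothetical validity check for rule 1: edges between two neighboring
# wedges must not interleave when slabs are numbered bottom-to-top.

def edges_are_valid(edges):
    """edges: list of (i, k) slab-index pairs between one pair of wedges."""
    for a in range(len(edges)):
        for b in range(a + 1, len(edges)):
            i, k = edges[a]
            j, l = edges[b]
            if (i - j) * (k - l) < 0:     # edges cross between the wedges
                return False
    return True
```

Parallel edges such as (0, 0) and (1, 1) pass, while the crossing pair (0, 1) and (1, 0) is rejected.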
As a more complex example, Figure 4.4 shows the default topology graph and two
variations that generate the surface networks in Figure 4.2 (d,e,f). Note that the mod-
ification of the topology graph satisfies the above two rules to ensure non-intersecting
and well-connected volumes.
Wedge splitting
Modifying the topology graph by adding nodes and edges is confined to existing
wedges. To represent a larger class of surface topology (e.g., connecting regions with
non-overlapping projections), we allow an existing wedge to be split into smaller
wedges. The refined topology graph after wedge splitting allows for finer control over
the resulting surface topology. Moreover, as we shall see in the next section, splitting
is also used for generating the surface geometry.
We explain wedge splitting using the example of Figure 4.5. The two sections in (a,b)
contain regions whose orthogonal projections on the mid-plane, shown in (c), do not
overlap. As a result, the default topology graph in (e) generates two separate pieces of
surface, as shown in (f). Wedge splitting proceeds in two steps. First, new boundary
curves are added to the projected curve networks in (c) that divide wedge A into
two smaller wedges, D and E, as shown in (d). Second, the default topology graph
in (e) is refined in (g) to reflect the division of the wedges. Each newly created wedge
(e.g., D) inherits the nodes and edges from the original wedge A as well as edges
to A’s neighboring wedges (e.g., B). Meanwhile, corresponding nodes in neighboring
new wedges, D and E, are connected by edges.
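The inheritance rules can be sketched as follows. All names here are hypothetical; moreover, this simplified version attaches every new wedge to every neighbor of the original wedge, whereas the actual construction uses geometric adjacency on the mid-plane to decide which new wedge borders which old neighbor:

```python
# Hypothetical sketch of wedge splitting: new wedges inherit the slab
# stack of the original wedge and its edges, and corresponding slabs of
# sibling wedges are linked together.

def split_wedge(wedge_slabs, edges, old, new_ids):
    slabs = wedge_slabs.pop(old)
    for n in new_ids:
        wedge_slabs[n] = list(slabs)          # inherit the slab stack
    new_edges = set()
    for e in edges:
        (w1, i1), (w2, i2) = sorted(e)
        if w1 == old or w2 == old:            # re-attach edges touching `old`
            for n in new_ids:
                a = (n, i1) if w1 == old else (w1, i1)
                b = (n, i2) if w2 == old else (w2, i2)
                new_edges.add(frozenset([a, b]))
        else:
            new_edges.add(e)
    for i in range(len(slabs)):               # link corresponding slabs
        for a, b in zip(new_ids, new_ids[1:]):
            new_edges.add(frozenset([(a, i), (b, i)]))
    return new_edges
```

Splitting a single-slab wedge A (with one edge to neighbor B) into D and E yields edges D-B, E-B, and the sibling link D-E, mirroring the refinement in Figure 4.5 (g).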
Wedge splitting offers more freedom in designing variations of surface topologies be-
tween sections. For example, the refined topology graph in Figure 4.5 (g) after wedge
splitting can be further modified into (h), where nodes and edges are added to form
a continuous volume between the originally disconnected green nodes from wedges
B and C. The resulting surface shown in (i) forms a tubular structure between the
green regions on the two sections.
Figure 4.5 : Topology modification through wedge splitting: two neighboring sections
containing non-overlapping regions of the same material type (a,b), their orthogonal
projections (c) and the default material graph (e) that results in two separate surfaces
(f), the splitting of wedges on the mid-plane (d) and the refined topology graph (g),
the modified topology graph (h) and the resulting surface topology (i) that forms a
tube connecting regions on the two sections.
4.2.4 Polygonalization
While the topology graph abstractly describes how the 2D regions on each section
are connected to form volumes, it remains to determine the geometry of the surface
that forms the boundary between adjacent volumes. Polygonalization is performed
by first refining the abstract topology graph into an actual volumetric grid, referred
to as the topology grid. Contouring this volumetric grid produces a closed surface
network that partitions the space between the two sections into volumes of desired
topology. Finally, the contoured surface is smoothed to improve the fairness of the
surface while preserving geometric correctness.
Topology grid
The topology grid is an instantiation of the topology graph onto a 2D triangulation
of wedges. The regions enclosed by the projected curve networks on the mid-plane
are triangulated, as shown in Figure 4.6 (a). The wedges corresponding to the tri-
angulated regions are split into small wedges, each with the shape of a triangular
prism, while the topology graph is refined into a grid structure. Figure 4.6 (d,f) show
portions of the topology grid generated from the default and the modified topology
graph in Figure 4.6 (b,c).
In our implementation, we use the Straight Skeleton triangulation proposed by Aich-
holzer and Aurenhammer [2] and adopted by Barequet et al. [8] for their projection-
based contour interpolation. Straight Skeleton triangulation adds internal vertices
within each region that form a skeleton of the region suitable for interpolating curve
networks. Since surfaces will only be formed between nodes of different materials on
the grid, we need to triangulate only those wedges that contain two or more nodes of
different materials. For example, wedge A in Figure 4.6 (a) is not triangulated since
it contains only a single node in the topology graphs of (b,c).
Contouring the topology grid
Just like the topology graph, each edge in the topology grid containing a material
change represents the boundary between two neighboring volumes, and hence gives
rise to the polygons on the final surface. Here we extend the multi-material contouring
method of Ju et al. [45] from a voxel grid onto the topology grid. The special structure
of the topology grid, unlike that of a regular voxel grid, allows us to produce contoured
Figure 4.6 : (a) A triangulation of the projected regions on the mid-plane from Figure
4.3 (c). (b) The default topology graph. (c) The modified topology graph. (d) A
portion of the topology grid corresponding to the gray triangles in (a) generated from
the default topology graph in (b). (e) Contoured surface obtained by creating one
polygon for each dotted edge in (d). (f) The topology grid generated from the modified
topology graph in (c). (g) Contoured surface obtained by creating one polygon for
each dotted edge in (f).
surfaces that exactly interpolate the curve networks.
To employ contouring algorithms that are typically designed for a voxelized grid,
however, we need to define two more topology elements on the topology grid: faces
and cells. A face on the topology grid is a closed cycle of edges connecting nodes
between two neighboring triangulated wedges, such as {a, b, c, d} in Figure 4.6 (d)
and {a, b, c} in Figure 4.6 (f). We refer to the portion of the grid shown in Figure
4.6 (d,f) as a column. A column consists of nodes and edges on triangulated wedges
whose projections on the mid-plane surround a single vertex. A cell is a layer on
the column that is made up of grid faces bounded between two closed cycles of edges
around the column. For example, the column in Figure 4.6 (d) contains one cell,
while the column in Figure 4.6 (f) contains two cells.
Assuming the top and bottom sections lie on planes z = 0 and z = 1 with the
mid-plane at z = 0.5, contouring proceeds in two simple steps:
1. Create one vertex for each cell on the topology grid. We assign coordinates
{x, y, (2i − 1)/(2n)} to the vertex of the ith (i = 1, . . . , n) cell in the triangulated
wedges that project onto triangles on the mid-plane sharing a common vertex
{x, y, 0.5}.
2. Create one polygon for each edge on the topology grid that contains a material
change. The polygon connects the vertices of the grid cells sharing that edge.
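Step 1's vertex placement is direct to transcribe (a minimal sketch; the function name is hypothetical). Note that vertices of successive cells in a column have strictly increasing z, which is the monotonicity that rules out self-intersections:

```python
# Hypothetical transcription of the contouring vertex rule, for sections
# at z = 0 (bottom) and z = 1 (top).

def cell_vertex(x, y, i, n):
    """Vertex of the i-th (1-based) of n cells in the column over (x, y)."""
    return (x, y, (2 * i - 1) / (2 * n))
```

A column with a single cell gets its vertex on the mid-plane at z = 0.5; a column with two cells (as in Figure 4.6 (f)) gets vertices at z = 0.25 and z = 0.75.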
Figure 4.6 (e,g) show the contoured surfaces in the grid columns of (d,f). Note
that the contour contains polygons that are dual to grid edges connecting nodes of
different materials. The polygons project onto the triangulation on the mid-plane
and interpolate polygonal boundary segments on the two sections.
Dualizing the topology grid is guaranteed to produce a closed contour that partitions
the space between the two sections into distinct volumes whose topology and materials
are specified by the topology graph. The surface is also free of self-intersections:
the only self-intersection that could take place is between triangles on the contour
projecting to the same triangle on the mid-plane, which is impossible since the z-
coordinates of the vertices of the grid cells sharing successive edges within the same
triangulated wedge are monotonically increasing (or decreasing).
Our contouring method lifts the triangles from the mid-plane to form polygons on the
3D contour. A similar mechanism was originally used by Barequet [8], whose method
only handles sections containing two materials. More importantly, Barequet’s method
forbids topology modification by maintaining a one-to-one map between a triangle on
the mid-plane and a triangle on the surface. Our method allows complex surface
topology to be generated by modifying the topology graph, which may result in
multiple triangles on the surface projecting onto the same triangle on the mid-plane
(see Figure 4.6 (g)).
Surface smoothing
The vertices of the surface produced by contouring lie on pre-set coordinates with
uniform spacing in the z direction, which may produce a surface with a stair-stepping
appearance. To achieve a smoother-looking surface while maintaining geometric cor-
rectness, we apply a Laplacian smoothing operator:

z_i* = α z_i + (1 − α) · (∑_{j∈N(i)} z_j) / |N(i)|
where z_i denotes the z coordinate of vertex i, N(i) denotes the set of edge-adjacent
vertices of vertex i, and α is a scalar value between 0 and 1. In our implementation, we
choose α = 0.5. Note that smoothing the z-coordinate of vertices will not cause self-
intersections on the smoothed surface so long as the monotonicity of the z-coordinate
of vertices with the same projection on the mid-plane is explicitly preserved.
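Restricted to the z coordinate as described, one pass of this operator can be sketched as follows (a hypothetical function, with α = 0.5 as in our implementation):

```python
# Hypothetical one-pass Laplacian smoothing of z-coordinates only; x and
# y are left untouched so the mid-plane projection is preserved.

def smooth_z(z, neighbors, alpha=0.5):
    """z: list of vertex z-coordinates; neighbors[i]: indices of the
    edge-adjacent vertices of vertex i."""
    return [alpha * z[i]
            + (1 - alpha) * sum(z[j] for j in neighbors[i]) / len(neighbors[i])
            for i in range(len(z))]
```

A practical implementation would additionally clamp or re-sort the new z values within each column to preserve the monotonicity noted above.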
4.3 Results
Here we present the results of applying our surface generation method to serial sections
of the mouse brain after distortion correction. The sections were first annotated
to delineate 17 major anatomical types³. Annotation is performed using a Java-based
user-interface that allows the creation of smooth boundary curves using subdivision.
A surface network interpolating the curve networks on each section is then generated
automatically using our method. Finally, the surface topology is verified and corrected
manually, followed by automatic re-generation of the corrected surface network.
4.3.1 Automatic construction
Figure 4.7 (a,c,e) show the side, bottom and cross-section views of the surface network
automatically constructed using our method for the 350 sections of the mouse brain.
The input curve networks contain 200,404 vertices, and the output surface contains
1,132,731 polygons and 590,902 vertices. The surface network partitions space into
different anatomical structures, indicated by colors (the legend is shown in Figure 4.7
(g)), whose cross-sections coincide with the annotation on each of the 2D sections.
Note that although individual layers of surfaces between consecutive sections have
been smoothed, the model in Figure 4.7 (a,c,e) looks jagged due to the manual layout
of the curve networks. We applied a preliminary smoothing pass to the entire surface
model using a non-shrinking smoothing operator [Taubin] and the results are shown in
Figure 4.7 (b,d,f). In addition, edges on the surface network shared by three or more
polygons, representing the ridges where three or more anatomical structures meet, are
smoothed separately to generate a smooth curve network.
Figure 4.7 : Surface reconstruction of the mouse brain before (left column) and after
(right column) smoothing: side view (a,b), bottom view (c,d), and cross-section view
(e,f). The color legend is shown on the far right (g).
Figure 4.8 : Sections from the first 20 cross-sections of the mouse brain (a), the
reconstructed surface network (b), the surface bounding Cortex (c) and Olfactory
bulb (d).
In Figure 4.8 we take a closer look at the surface constructed for the first 20
cross-sections of the mouse brain. We observe in particular the close interaction
between two anatomical structures: Cortex (red) and Olfactory bulb (green). While
the Cortex starts at the top surrounded by the Olfactory bulb, the Olfactory bulb
is surrounded by the Cortex at the bottom. The reconstructed surfaces for the two
anatomical structures are shown in Figure 4.8 (c,d).
An important property of our brain model, in contrast with existing polygonal mod-
els of the mouse brain [5], is that the various anatomical structures share common
boundary surfaces that do not leave gaps or self-intersect. Figure 4.9 (a) shows the
reconstruction of a complex structure in the mouse brain, the Fiber tracks. To view
the adjacency relation between Fiber tracks and its neighboring anatomical struc-
tures, we color the surface of Fiber tracks by its neighboring anatomical structures
in Figure 4.9 (b). Note that the coloring also provides us with information about the
3D anatomy of the brain that is difficult to obtain from 2D images, which will be
helpful for anatomists and brain researchers.
Figure 4.9 : Surface of the Fiber tracks (a) and a version colored by its neighboring
anatomical structures (b).
4.3.2 Manual adjustments
Due to the small distance between adjacent 2D sections in the mouse brain (i.e.,
25 µm), the automatically constructed surface network possesses the correct topology
in most places. However, the reconstructed surface occasionally fails to produce the
desired topology when there is a large migration of anatomical features between
successive sections. Here we show two examples of repairing incorrect surface topology on the
reconstructed Ventricles through manual modification of the topology graph.
Figure 4.10 (a) shows a tubular portion of Ventricles that is broken into separate
pieces after automatic reconstruction due to a large displacement of corresponding
regions on successive sections. The surface is colored by the neighboring anatomical
structures. The surface was corrected using the wedge splitting technique discussed
in Section 4.2.3, and the regenerated surface, shown in Figure 4.10 (b), models a
continuous tube.
Figure 4.11 (a) shows a portion of the Ventricles containing a hole due to the migration
of its cross-sections. A continuous reconstruction of Ventricles is produced in Figure
4.11 (b) after modifying the topology graph between the corresponding sections in a
manner similar to the graph modification in Figure 4.3 (f).
Figure 4.10 : Broken pieces of Ventricles resulting from automatic construction (a)
and after manual adjustment (b).
Figure 4.11 : A hole in Ventricles resulting from automatic construction (a) and after
manual adjustment (b).
4.4 Discussion
The current surface generation method proceeds in two steps: automatic generation
and user-assisted modification. The first step produces a geometrically correct surface
with minimal topological variation; the user is then responsible for checking and
correcting invalid topologies. A major direction for future research is to develop
automatic methods for producing particular types of topologies, which would greatly
reduce the amount of human interaction. A simple example would be to always
connect a particular anatomical region (e.g. the nerve track) on successive sections.
More complicated examples include tracking feature points (e.g. points where three
or more regions meet) on successive curve networks. These topology requirements
can be enforced by developing rules that would result in a desirable topology graph
in the automatic generation step.
Chapter 5
Future Work
The geometric techniques described in previous chapters can generate high-resolution
surface models from distorted tissue sections of a complex anatomical structure. We
are actively expanding our work to create a volumetric atlas for the larger goal of
analyzing gene expression. I identify the following topics for my future
research:
Surface smoothing and simplification: although the preliminary smoothing scheme
discussed in Section 4.3.1 significantly improves the surface quality of the orig-
inal model, artifacts such as waviness can still be observed from the smoothed
surface in Figure 4.7 (d). One possible cause is that the concatenated contours
between successive sections have a fairly non-uniform triangulation, which may
reduce the effectiveness of a linear smoothing scheme. I intend to investigate
other mesh fairing methods that would deliver a smoother surface model. For
simplification, existing methods, such as QSlim [32] and simplification envelopes
[25], can either be directly utilized or modified to work on surface networks.
Surface tetrahedralization: After the surface network has been simplified to a
reasonable size, we need to tetrahedralize the surface network to form a vol-
umetric model. In particular, we desire that the tetrahedra of neighboring
anatomical structures share common boundary faces and that the sizes of the
tetrahedra adapt to the shape of the anatomical structure. Although
robust, adaptive tetrahedralization techniques have been developed for closed
meshes, these techniques are not yet available for surface networks that par-
tition space into multiple volumes. Hence the problem of stable and adaptive
tetrahedralization of surface networks will be an exciting area of research.
Atlas utilization: As in Geneatlas.org, our ultimate goal is to construct a 3D database
of gene expression patterns over the mouse brain. The availability of the vol-
umetric atlas will allow us to map expression data from different brains onto
the same atlas for accurate and efficient comparison. Constructing a spatial
database will involve extensive research on 3D registration of the atlas onto
experimental images, efficient comparison of expression data over the atlas, and
developing graphical tools for querying and visualizing gene expression pat-
terns. The availability of an atlas-based database containing expression data of
the twenty thousand genes in the mouse genome will be an invaluable resource
for biological and medical researchers.
Chapter 6
Summary
In this thesis, I have described how computer graphics techniques are applied in con-
structing a 3D geometric atlas of the mouse brain anatomy. In particular, I have
presented:
1. An automatic and robust method for elastically reconstructing a smooth 3D
brain from tissue cross-sections containing local distortions induced by physi-
cal sectioning. The method employs a novel image warping algorithm based
on dynamic programming and designed for adjacent tissue sections formed by
directional slicing.
2. A robust and flexible method for creating a geometrically and topologically
correct surface network from complex boundary curves on successive planar
sections. The method is best suited for building surface models of a complex
structure containing multiple anatomical regions and allows interactive user-
guided modification.
The combination of these two techniques presents a robust and efficient framework
for generating a high-quality polygonal atlas of an anatomical structure from cross-
sectional images. As part of a larger project, the surface model generated using these
techniques may become the basis for creating a volumetric atlas of the mouse brain
for the study of gene expression.
Bibliography
[1] Agazzi, O., Kuo, S., Levin, E., and Pieraccini, R. Connected and de-
graded text recognition using planar hidden Markov models. In IEEE Inter-
national Conference on Acoustics, Speech, and Signal Processing (ICASSP-93)
(April 1993), vol. 5, pp. 113–116.
[2] Aichholzer, O., and Aurenhammer, F. Straight skeletons for general
polygonal figures in the plane. In Computing and Combinatorics (1996), pp. 117–
126.
[3] Ali, W., and Cohen, F. Registering coronal histological 2-d sections of a rat
brain with coronal sections of a 3-d brain atlas using geometric curve invariants
and b-spline representation. IEEE Transactions on Medical Imaging 17, 6 (1998),
957–966.
[4] Armstrong, J., Kaiser, K., Müller, A., Fischbach, K., Merchant, N.,
and Strausfeld, N. Flybrain, an on-line atlas and database for the Drosophila
nervous system. Neuron 15, 1 (1995), 17–20.
[5] Mouse Atlas Project at UCLA. http://www.loni.ucla.edu/map/index.html.
[6] Bajaj, C. L., Coyle, E. J., and Lin, K.-N. Arbitrary topology shape