
Available online at www.sciencedirect.com

Magnetic Resonance Imaging 27 (2009) 1163–1174

Computational anatomy with the SPM software

John Ashburner

Wellcome Trust Centre for Neuroimaging, WC1N 3BG London, UK

Received 20 October 2008; revised 7 January 2009; accepted 9 January 2009

Abstract

An overview of computational procedures for examining neuroanatomical variability is presented. The review focuses on approaches that can be applied using the SPM software package, beginning by explaining briefly how statistical parametric mapping is usually applied to functional imaging data. The review then proceeds to discuss volumetry, with an emphasis on voxel-based morphometry, and the pre-processing steps involved using the SPM software. Most volumetric studies involve univariate approaches, with a correction for some global measure, such as total brain volume. In contrast, the overall form of the brain may be more accurately modeled using multivariate approaches. Such models of anatomical variability may prove accurate enough for computer-assisted diagnoses.
© 2009 Elsevier Inc. All rights reserved.

Keywords: Brain imaging; Computational anatomy; Morphometry; SPM

1. Introduction

Morphometrics is about studying the variability of the form (size and shape) of organisms or objects, and is a field that owes a great deal to the pioneering work of D'Arcy Thompson. In particular, it is his book Growth and Form that first proposed the concept of spatial normalization, where diverse and dissimilar brains “can be referred as a whole to identical functions of very different coordinate systems” [1]. This can be conceptualized as a generative model, whereby the brain of an individual subject is modeled by some canonical brain, which is deformed, or warped, to a different shape. Within this model, there is an assumption of a one-to-one mapping between the anatomy of one brain and that of another, and that it is possible to transfer this configuration of homologous points between them. This is achieved by intersubject registration, which involves estimating the deformation field that warps (maps) from one brain to another. Morphometry, or computational anatomy, involves analyzing features derived from the shapes of the brains. It is a large field of research, which has been applied within many areas of science. This review can only touch on a very small fraction of the literature and will omit a great deal. Emphasis is placed on those aspects of

E-mail address: [email protected].

¹ The software is available from http://www.fil.ion.ucl.ac.uk/spm at the Wellcome Trust Centre for Neuroimaging, London, UK.

0730-725X/$ – see front matter © 2009 Elsevier Inc. All rights reserved.
doi:10.1016/j.mri.2009.01.006

morphometry that the SPM5¹ software package has been applied to and areas that the author considers to have the greatest potential impact.

The next section outlines the general framework of the SPM software package, which is followed by a section on the basic principles of volumetry. After this, the principles used for performing volumetry within the SPM framework are presented in the section on voxel-based morphometry (VBM). Morphometry can also be done using information from deformation fields, both within mass univariate and multivariate settings. This is described, prior to making some predictions about the direction the field may take.

2. The statistical parametric mapping framework

SPM is a software package that was originally devised for statistical parametric mapping of PET and fMRI data. It is one of a number of packages that provide a similar framework, as well as some newer functionality that diverges from this. Originally, the SPM software was based entirely around the frequentist approach, which is still the view that most users of SPM, and other packages such as FSL or


AFNI, prefer to work with. The idea is to pre-process the raw PET or fMRI data to transform it into a version that is easier to work with, and then apply voxel-wise statistical tests, such as t or F tests, to obtain P values [2]. These would usually be interpreted after correcting for multiple dependent comparisons using the theory of Gaussian random fields [3].

2.1. Pre-processing

Pre-processing largely involves ensuring that the image data are in alignment. Rigid-body registration would be used for within-subject alignment and some form of nonlinear registration for aligning subjects to a common anatomical space. The skull of an individual subject is a rigid body, and the brain moves relatively little within the skull, so rigid-body registration is a reasonable model to use for aligning multiple images of the same subject. The skulls and brains of different subjects are different shapes, so more complicated models are needed to parameterize this anatomical variability in order to achieve accurate intersubject image registration. When analyzing brain function, the anatomical variability among subjects is generally treated as an inconvenient form of noise, which is to be factored out from the data as far as possible.

Prior to performing the voxel-wise statistical tests, the image data would be blurred (low-pass filtered). This can be motivated from a Wiener filtering perspective, such that the signal-to-noise ratio is increased. The noise (uninteresting signal) is assumed to contain proportionally more of the higher spatial frequencies than the signal of interest, so blurring the data should remove proportionally more noise than signal. Spatial smoothing also ensures that the residual differences, after fitting the model, are closer to Gaussian. This makes the data less likely to violate the assumptions required for parametric statistical testing and hides more of the intersubject registration error. Another reason for blurring is that it simplifies the ensuing results, producing fewer regions of significant difference (“blobs”) to report.
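As an illustration of this smoothing step (a minimal sketch using SciPy, not the SPM implementation), the kernel width is usually quoted as a full width at half maximum (FWHM) in millimeters, which relates to the gaussian standard deviation by FWHM = σ√(8 ln 2). The volume shape and voxel size below are arbitrary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_volume(vol, fwhm_mm, voxel_size_mm):
    """Low-pass filter a 3D volume with a gaussian kernel.

    The FWHM (in mm) is converted to a standard deviation in voxel
    units along each axis, using FWHM = sigma * sqrt(8 * ln 2).
    """
    sigma_vox = fwhm_mm / (np.sqrt(8 * np.log(2)) * np.asarray(voxel_size_mm, float))
    return gaussian_filter(vol, sigma=sigma_vox)

# Toy example: a pure-noise volume; smoothing reduces voxel-wise variance.
rng = np.random.default_rng(0)
vol = rng.normal(size=(32, 32, 32))
smoothed = smooth_volume(vol, fwhm_mm=8.0, voxel_size_mm=(2.0, 2.0, 2.0))
```

Typical VBM analyses would apply kernels of around 8–12 mm FWHM, although the source does not prescribe a specific value here.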

2.2. Statistical inference

Statistical testing is performed within the framework of the general linear model, which is fitted at each voxel [2]. The brain images should be in alignment after pre-processing so that each voxel in one image approximately corresponds with the same voxel in another image. Fitting a general linear model at each voxel involves finding the coefficients of a linear combination of basis functions, such that the residual variance is minimized. The basis functions usually encode the experimental design, but there are also additional basis functions that model out confounding effects, in which the experimenter is not interested. The design matrix should be specified so that the principles of Occam's razor are followed, which involves achieving an optimal balance between over- and underfitting. In theory, principled model selection procedures could be used to compare design matrices in terms of how well they model the signal in particular brain regions. The optimal model (for some region) would be the one with the greatest model evidence.
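The voxel-wise fit described above can be sketched in a few lines of numpy (an illustrative least-squares implementation, not SPM's code; the design matrix and "active" voxel below are made up for the example):

```python
import numpy as np

def fit_glm(Y, X):
    """Fit Y = X @ beta + error at every voxel by least squares.

    Y : (n_scans, n_voxels) data matrix, one column per voxel.
    X : (n_scans, n_regressors) design matrix (experimental effects
        plus confounds).
    Returns coefficients and the residual variance per voxel.
    """
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof
    return beta, sigma2

def t_statistic(beta, sigma2, X, c):
    """t statistic for the contrast c @ beta at each voxel."""
    cov = c @ np.linalg.pinv(X.T @ X) @ c   # var(c @ beta) / sigma2
    return (c @ beta) / np.sqrt(sigma2 * cov)

# Toy design: 20 scans, a condition regressor plus a constant term.
rng = np.random.default_rng(1)
n = 20
X = np.column_stack([np.tile([1.0, 0.0], n // 2), np.ones(n)])
Y = rng.normal(size=(n, 100))
Y[:, 0] += 3.0 * X[:, 0]                    # one hypothetical "active" voxel
beta, sigma2 = fit_glm(Y, X)
t = t_statistic(beta, sigma2, X, c=np.array([1.0, 0.0]))
```

The contrast vector `c` here picks out the condition effect while ignoring the constant, in the spirit of the contrast vectors described below.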

The purpose of the analysis is to answer some question about the brain, and this question is expressed in terms of a contrast vector. This vector specifies which linear combination(s) of parameters are considered interesting and essentially separates interesting from uninteresting signal (confounding noise). For a t test, this linear combination should be significantly greater than zero for the test to be considered significant. For an F test, the variance modeled by these linear combinations of parameters should be significantly greater than zero.

Testing is done on a voxel-by-voxel basis, so a correction for multiple comparisons is required. Without this correction, about 5% of the voxels would have P values less than .05 and be considered significant simply by chance. A Bonferroni correction could be used if the tests were fully independent, but because of the spatial smoothness of the images, the data at each voxel are similar to those of its neighboring voxels. This means that the tests are not independent, so a Bonferroni correction would be too conservative. Gaussian random field theory is therefore used to make the necessary corrections [3].
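The 5%-by-chance figure is easy to demonstrate by simulation (a toy sketch with independent voxels; note that, as the text explains, real smoothed images are spatially correlated, which is exactly why Bonferroni is too conservative there and random field theory is used instead):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_voxels, n_subjects = 10_000, 16

# Pure-noise "images": every voxel is null, so any significant
# voxel is a false positive.
data = rng.normal(size=(n_subjects, n_voxels))
t = data.mean(axis=0) / (data.std(axis=0, ddof=1) / np.sqrt(n_subjects))
p = 2 * stats.t.sf(np.abs(t), df=n_subjects - 1)

uncorrected = (p < 0.05).mean()           # fraction significant: about 0.05
bonferroni = (p < 0.05 / n_voxels).sum()  # count after Bonferroni correction
```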

3. Traditional volumetrics

The traditional approach to morphometry involves manually measuring lengths, angles, areas, volumes and so on. Usually though, the measures of interest are volumes. A number of automated and semiautomated procedures have been devised for parcellating the brain into different regions, from which the regional volumes can be computed. Once the volume measurements are made, they can be compared statistically, using a variety of models.

3.1. Volume measurements

A number of interactive tools are available for manually making volume measurements from MRI data, and this is probably the most widely accepted form of morphometry among the medical community. This traditional approach does have a number of disadvantages though. In particular, manual outlining of brain regions can be extremely time-consuming. It is also a subjective procedure, whereby the interrater reproducibility of how various structures are defined may be quite low. Structures such as the hippocampi or caudate may be traced accurately, but the highly variable folding patterns of the cortex make defining the boundaries of cortical regions an extremely subjective procedure. When labeling a brain image, a human expert has access to the same information as an automated algorithm. Human experts are currently believed to be able to partition a brain image in a more accurate way than automated algorithms. If this is actually true, it may not be for much longer. Usually, there is no ground truth available about the underlying Brodmann areas of the cortex. However, a recent evaluation of the FreeSurfer surface-based registration approach showed that it was able to align the borders of histologically defined


Brodmann areas to an accuracy of between 2.5 mm (for area V1) and about 10 mm (BA 44) [4]. Human expert segmentations have not yet been evaluated in a similar way.

Intersubject registration usually forms a major component of automated labeling strategies. If accurate nonlinear registration can be achieved among scans of different subjects [5], then labels drawn on the scan of one subject can be propagated on to the scan of another. Additional refinements may then be made, based on Markov fields or morphological operations. Automated labeling approaches are set to become even more accurate, especially as computers become more powerful, so allowing much more sophisticated registration approaches to be run within a reasonable time. The ANIMAL+INSECT procedure [6,7] was one of the earliest automated labeling strategies. Since then, many others have been developed [8–10], including some third-party toolboxes for SPM. These are often based around the older nonlinear image registration approaches within the SPM software and include Individual Brain Atlases using SPM and Anatomical Automatic Labeling [11,12]. Rather than being based on volumetric registration, some other approaches are based on deformable surfaces [13–15] or use other more sophisticated approaches [16,17]. Deformable surfaces are especially widespread for parcellating subcortical structures [18], although volumetric registration models are also being used for refining such segmentations [19].

In principle, parcellations obtained from automated tools could also be manually corrected in order to obtain the benefit of human expertise, but such corrections are rarely performed in practice.

3.2. Statistical analysis

Once the measurements have been made, they would usually be analyzed in order to detect statistically significant differences among populations of subjects. Such analyses are often performed independently, region-by-region. In such situations, there may be additional corrections to account for a global measure of whole-brain volume variability. Generally, the volume of a brain structure would be related to the whole-brain volume: larger brains contain larger structures. One approach to account for whole-brain volume would be to divide the individual structure volumes by the measure of whole-brain volume, which is the same as the “proportional scaling” correction made by the SPM software [20]. This essentially renders the measurements in terms of the proportion of the brain occupied by each structure. Rather than simply taking the whole-brain volume, various other measurements could be used for the correction. For example, in the field of dementia research, it is typical to use total intracranial volume. Total gray matter volume is another commonly used reference. Another form of correction would be to factor out any effect of whole-brain volume when making the comparisons. This is the same as the “AnCova” correction made by the SPM software [20] and involves testing whether there are significant volumetric differences that cannot be explained by the whole-brain volume. In principle, a number of “global” measures could be factored out, but this is not possible with the proportional scaling model. The reader is referred to [21] for a very useful review of such correction methods.
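The two forms of global correction described above can be sketched as follows (an illustrative numpy example; the volumes, units and scaling coefficient are invented for the demonstration and are not taken from the source):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
# Hypothetical whole-brain volumes (ml) for n subjects, and a
# structure whose volume scales with whole-brain volume.
whole_brain = rng.normal(1200.0, 100.0, size=n)
structure = 0.004 * whole_brain + rng.normal(0.0, 0.05, size=n)

# "Proportional scaling": express the structure as a fraction of
# whole-brain volume before any group comparison.
proportion = structure / whole_brain

# "AnCova"-style correction: regress out whole-brain volume (plus a
# constant) and analyze the residuals, i.e., test for differences
# that cannot be explained by whole-brain volume.
X = np.column_stack([np.ones(n), whole_brain])
beta, _, _, _ = np.linalg.lstsq(X, structure, rcond=None)
adjusted = structure - X @ beta
```

Unlike proportional scaling, the regression form extends naturally to several "global" covariates by adding columns to `X`.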

Significant correlations occur among the volumes of different brain structures [22], and the volume of one brain structure is not independent of the volume of another. Rather than comparing each region independently, another analysis framework could involve multivariate statistical tests, such as the multivariate analysis of covariance (MANCOVA). Darwin discusses “correlation of growth” in The Origin of Species and provides many examples from nature. The shape of a brain is a result of its development, which in turn is influenced by gene expression during different developmental stages. There are also a number of complex environmental interactions influencing development, and the whole dynamical system is currently much too complex to model explicitly. Each gene controlling brain development is likely to be expressed as a pattern over space (and time). Even an extremely simple generative model of brain shape, whereby growth is controlled by a linear combination of such patterns, would need to be multivariate. If gene expression maps were highly localized, then this may provide evidence to support the use of univariate models for each region, but empirical data show that this is not the case. However, the results of univariate models are generally easier to interpret and explain [23], and this is a major factor for determining how data are analyzed and results are presented. Multivariate analyses make inferences about patterns of difference and do not explicitly localize their results to particular structures.

3.3. Allometry

We know, a priori, that larger brains are likely to have larger brain structures, but we do not know exactly what the relationship is. Other examples from biology show that proportions are unlikely to remain similar over different sizes. For example, larger species have smaller brains in proportion to body size [24]. Many of D'Arcy Thompson's ideas were refined by Julian Huxley and others, who introduced the framework of allometry, which characterizes shapes in terms of models of differential growth [25]. Within the simplest allometric model, the relationships between two measurements (such as length, area or volume) of structures x and y within an organism are assumed to have a relationship of the form y = bx^k (or log y = log b + k log x), where b and k are constants. The factor b simply denotes the value of y when x is unity. The variable k is of more biological interest and denotes a rate of growth per unit of measure. In these models, growth is assumed to be a process of “self-multiplication of living substance.” Under such a generative model of growth (or atrophy), the changes in the volume of a structure are proportional to its current volume.
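Because the model is linear in log space, b and k can be estimated by ordinary linear regression of log y on log x. A minimal sketch on simulated measurements (the true b and k below are arbitrary values chosen for the demonstration):

```python
import numpy as np

def fit_allometry(x, y):
    """Estimate b and k in the allometric model y = b * x**k by
    linear regression of log y on log x (log y = log b + k log x)."""
    k, log_b = np.polyfit(np.log(x), np.log(y), deg=1)
    return np.exp(log_b), k

# Simulated measurements with multiplicative (log-normal) noise.
rng = np.random.default_rng(4)
x = rng.uniform(1.0, 100.0, size=200)
b_true, k_true = 0.5, 1.25
y = b_true * x ** k_true * np.exp(rng.normal(0.0, 0.02, size=200))
b_hat, k_hat = fit_allometry(x, y)
```

Fitting in log space also matches the generative assumption that growth multiplies existing volume, so the noise is naturally multiplicative rather than additive.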


Fig. 1. This illustrates the effect of convolving with different kernels. On the left is a panel containing dots, which are intended to reflect some distribution of pixels containing some particular tissue. In the center, these dots have been convolved with a circular function. The result is that each pixel now represents a count of the neighboring pixels containing that tissue. This is analogous to the effect of using measurements from circular regions of interest, centered at each pixel. In practice though, a gaussian kernel would be used (right). This gives a weighted integral of the tissue volume, where the weights are greater close to the center of the kernel.


Many allometric relationships have been found within biology, and some of these relate to the brain. For example, Zhang and Sejnowski [26] derived an allometric model for the relative proportion of gray and white matter within the mammalian brain. Their model accurately predicts that larger-brained mammals have proportionally more white matter than gray matter. Their model is accurate for tiny species such as shrews, through to larger-brained species such as humans and whales. They also determined a model for the relationship between the volume of gray matter and the whole-brain volume, but noted² that the two models were not strictly compatible. The simple allometric framework may require refinements that account for overlapping measurements or measurements from regions that are within close proximity. This theme will be revisited later, when discussing the diffeomorphic framework.

Within an allometric model, it would seem unsurprising that female human brains have been found to have proportionally more gray matter than male brains or that the rate of loss of brain tissue with age is faster for men than for women [27]. Within a more biologically plausible shape model, findings that might appear significant without working with logarithms are rendered inconsequential.

4. Voxel-based morphometry

Voxel-based morphometry is another approach for comparing the volume of tissue among populations of subjects [28–31]. The basic idea behind VBM is extremely simple, and involves identifying a particular tissue type — usually gray matter — in the scan of each subject (segmentation) and warping these tissue maps to a common anatomical space. These deformed tissue maps are then spatially blurred (see Fig. 1). A voxel-by-voxel statistical analysis of this pre-processed data is performed in a similar way to the univariate analyses of structure volumes. Finally, the statistical measures are corrected for multiple dependent comparisons.

² Huxley also noted this. His first example of allometry involved the relationship between the weight of an organ and the weight of the whole body minus the weight of the organ.

The original aim of VBM was to study so-called mesoscopic anatomical differences, after discounting macroscopic differences, which would be modeled by the deformation fields that warp individual brains to a common reference space. However, the precise definition of what constitutes mesoscopic and what constitutes macroscopic is unclear [32]. Another view would be that the analysis of mesoscopic differences could be considered simply as an examination of registration errors. If image registration were exact, then there would be no differences to examine. More recently, there was a change to the VBM framework, such that the pre-processed data were scaled so that the total volume of tissue in each structure is preserved after warping the data to a standard reference space [33]. This correction involves scaling by the Jacobian determinant of the deformation and is colloquially known as “modulation.” The result is that the pre-processed data represent a quantitative measure (tissue volume per unit volume of spatially normalized image).
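The volume-preserving effect of modulation can be illustrated in one dimension (a toy sketch with an invented linear deformation `phi`, not SPM's spatial normalization code): warping alone changes the total amount of tissue in the map, while multiplying by the Jacobian of the deformation restores it.

```python
import numpy as np

# 1D toy: a "tissue map" t(x) and a smooth deformation phi(x).
n = 200
x = np.arange(n, dtype=float)
tissue = np.exp(-0.5 * ((x - 100.0) / 15.0) ** 2)   # a gray matter blob

# Pull-back warping: warped(x) = tissue(phi(x)); this phi compresses
# the blob towards the center, shrinking its apparent volume.
phi = 100.0 + 1.5 * (x - 100.0)
warped = np.interp(phi, x, tissue)

# "Modulation": multiply by the Jacobian (here d phi / d x) so that
# total tissue volume is preserved, up to discretisation error.
jac = np.gradient(phi, x)
modulated = warped * jac
```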

4.1. Segmentation

To achieve an acceptable interpretation of a VBM study, it is important that the pre-processing is as accurate as possible (see Fig. 2). For example, if the tissue classification is inaccurate, then the pre-processed data will not properly reflect tissue volumes. If the intersubject registration is inaccurate, then the statistical analysis will not compare homologous structures across all brains. These issues are exactly analogous to those for manual volumetry, whereby if anatomical structures are inaccurately defined, the results of statistical analyses are more difficult to interpret. For this reason, it is essential that the pre-processing model is as accurate as possible [28].

Similarly, in order for the pre-processed data to be accurate, the MRI data must also be of high quality. When designing MR sequences, it may be useful to know


Fig. 2. This illustrates how findings from a VBM study of gray matter could be interpreted. The top row shows situations where there would be less gray matter in a cortical region compared to the situation shown below it. From left to right, differences could be attributed to folding, thickness, misclassification or misregistration. Generally, the objective is to interpret differences in terms of thickness or folding.


something about the principles that underlie the segmentation model and some of the assumptions that are made [34]. In order to separate gray matter from other tissues, there must be a high contrast between gray matter and other surrounding tissues. If gray matter is not clearly visible, then it cannot be precisely segmented. The SPM5 segmentation approach [35] assumes that every voxel belongs to one of four distinct tissue classes: gray matter, white matter, cerebrospinal fluid and everything else. Similar assumptions are common among brain segmentation algorithms, but they are not universal. Algorithms that are more sophisticated assume that the brain is composed of more tissue types. For example, the intensity distribution of gray matter is typically very different between cortex and thalamus [13], so greater segmentation accuracy should be possible by using this information. This is one reason why a number of segmentation approaches do not identify the thalamus very accurately. Another reason is that its intensity is often very close to that of white matter, which means that a slightly different MRI sequence may have a large effect on the segmentation of the thalamus.

SPM5 segmentation is based around a mixture of gaussians model. It typically uses a positive linear combination of two or more gaussians to represent the intensity distribution of each tissue class. Even if the intensities of each tissue had a perfectly gaussian distribution, partial volume effects (from mixing of signals from different tissues in a single voxel) may result in a more kurtotic distribution. The use of multiple gaussians allows some of this kurtosis to be modeled. Part of the segmentation algorithm involves estimating the intensity distributions of the tissue types, in terms of means, variances and mixing proportions of the gaussian distributions.
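The estimation of means, variances and mixing proportions can be illustrated with a bare-bones expectation-maximization fit of a 1D mixture of gaussians (a didactic sketch, far simpler than the SPM5 model, which additionally handles bias correction and spatial priors; the two intensity clusters below are invented stand-ins for two tissue classes):

```python
import numpy as np

def fit_gmm_1d(x, n_components, n_iter=100):
    """Fit a 1D mixture of gaussians to intensities x by EM.

    Returns means, variances and mixing proportions of the gaussians.
    """
    # Crude initialisation from quantiles of the data.
    mu = np.quantile(x, np.linspace(0.1, 0.9, n_components))
    var = np.full(n_components, x.var())
    pi = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # E-step: responsibility of each gaussian for each voxel.
        d = x[:, None] - mu[None, :]
        like = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = like / like.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances, mixing proportions.
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d**2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

# Hypothetical intensities: two clusters standing in for gray and
# white matter, with gaussian noise.
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(0.4, 0.05, 4000),
                    rng.normal(0.7, 0.05, 6000)])
mu, var, pi = fit_gmm_1d(x, n_components=2)
```

Assigning several gaussians per class, as SPM5 does, is a straightforward extension in which responsibilities are summed within each class.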

Image artefacts can be a problem, but some of them are accounted for by the segmentation model. Image intensity inhomogeneity is usually modeled out from the data — provided it is of sufficiently low spatial frequency. This artefact is treated as a purely multiplicative field, which is modeled by the exponential of a linear combination of low-frequency 3D cosine transform spatial basis functions. Typically, about 1000 basis function coefficients would be estimated by the segmentation algorithm in order to model these effects. Although these artefacts may not be noticed visually in the images, it is essential that they be dealt with. Tissues are largely identified by their intensities, so if gray matter is brighter than white matter in some parts of a T1-weighted image, then an accurate intensity-based segmentation of the different tissue types would not be possible.
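A 2D sketch of this parameterization (illustrative only; the coefficients below are arbitrary rather than estimated, and the real model is 3D with roughly 1000 coefficients): the bias field is the exponential of a linear combination of low-frequency cosine basis functions, which guarantees a strictly positive multiplicative field.

```python
import numpy as np

def dct_basis(n, order):
    """First `order` 1D discrete cosine transform basis functions."""
    k = np.arange(order)
    x = np.arange(n)
    return np.cos(np.pi * (2 * x[:, None] + 1) * k[None, :] / (2 * n))

nx, ny, order = 64, 64, 4
Bx, By = dct_basis(nx, order), dct_basis(ny, order)
rng = np.random.default_rng(6)
coef = 0.1 * rng.normal(size=(order, order))   # arbitrary illustration
bias = np.exp(Bx @ coef @ By.T)                # smooth, strictly positive

# The artefact multiplies the true image; correction divides it out.
image = np.full((nx, ny), 100.0)               # hypothetical uniform tissue
observed = image * bias
corrected = observed / bias
```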

Other artefacts are not modeled by the segmentation, so they must be suppressed or corrected as far as possible. Studies of morphometry — particularly those involving multicenter data — are more accurate if the images are not distorted spatially. Freely available tools exist for performing gradient nonlinearity unwarping [36], but there are none released as part of the SPM software. Such spatial (and intensity) distortions should be accounted for when acquiring scan data. For a VBM study that compares one population of subjects against another, it is also important that the positioning of the subjects in the scanner should not systematically differ between the groups. This is because spatial distortions are partially a function of head location within the magnet.

Assigning a voxel to a tissue class is only partly based upon the intensity of the voxel. If that particular intensity has a higher probability density for the gray matter intensity distribution than it has for other tissue types, then there is an


increased probability of the voxel belonging to gray matter. A simple mixture of gaussians model would also consider how common the particular tissue is in the image. For example, if 20% of the voxels are gray matter, then the prior probability of a voxel being gray matter is 0.2. This prior probability would be combined with the likelihood derived from the intensity distributions of the classes and can be formulated probabilistically. In practice though, the segmentation algorithm obtains this prior probability from tissue probability maps, which are registered with the image to be segmented. In SPM5, deformations of the tissue probability maps are modeled by about 1000 cosine transform basis functions, and this registration is an integral component of the segmentation model. In fact, this spatial transformation provides more accurate intersubject registration of brain images than the older least squares image registration approach in the SPM software [37].
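The probabilistic combination described here is Bayes' rule applied per voxel: posterior ∝ likelihood × prior. A minimal sketch (the class means, variances and priors below are invented for illustration, not SPM5's estimated values):

```python
import numpy as np

def posterior_tissue(intensity, mu, var, prior):
    """Posterior tissue probabilities for a single voxel.

    Combines the likelihood of the observed intensity under each
    class's gaussian intensity distribution with a prior probability
    for each class (e.g., from registered tissue probability maps):
        P(class | intensity)  proportional to  P(intensity | class) * P(class).
    """
    like = np.exp(-0.5 * (intensity - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    post = like * prior
    return post / post.sum()    # normalize so the posteriors sum to 1

# Hypothetical classes: gray matter, white matter, CSF, other.
mu = np.array([0.45, 0.75, 0.15, 0.30])
var = np.array([0.004, 0.003, 0.005, 0.02])
prior = np.array([0.40, 0.35, 0.15, 0.10])   # from the probability maps
post = posterior_tissue(0.5, mu, var, prior)
```

An intensity of 0.5 lies closest to the hypothetical gray matter distribution, so the posterior favors gray matter; shifting the prior would shift that assignment, which is why registration of the probability maps matters.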

Currently, most MR image processing is conceptualized within pipeline procedures, which involve applying a series of tools to the data [38–40]. A raw image would be entered into the pipeline, and the result of applying the first tool may be a skull-stripped version. This skull-stripped image would then be treated as input to the next tool, after which it may be inhomogeneity corrected. Then, the inhomogeneity-corrected image would be classified into different tissues and so on. The order in which the tools are applied will influence the result of the processing. In many cases, earlier steps would benefit from later steps, so it is not intuitively obvious what the optimal sequence of steps should be. For example, image registration allows tissue probability maps to be overlaid on the MR scan, making tissue classification more accurate. However, tissue classification and bias correction make image registration more accurate. The SPM5 segmentation approach resolves this circularity by treating the whole procedure in terms of a probabilistic generative model, which considers inhomogeneity correction, tissue classification and nonlinear registration with the tissue probability maps as components of a larger model.

This Bayesian generative modeling perspective is an increasingly common theme within the neuroimaging literature. Essentially, such models are attempts to represent the probability density of the data in the most accurate but parsimonious way possible (see, e.g., [41]). From a Bayesian perspective, scientific theories are essentially models [42]. Older models are abandoned as increasingly accurate models are devised, which better represent the probability density. VBM findings obtained by applying one model to a data set generally differ from findings obtained from another model. As generative models of anatomical images improve, the interpretations of the findings should become more accurate.

4.2. Dartel registration

The segmentation of SPM5 incorporates a relatively simple intersubject registration model, whereby only about 1000 coefficients are used to explain the shape of the brain. This relatively small number of parameters only models global brain shape and is unable to account for more of the detailed shape variability found within the general population. One of the criticisms of VBM was that the precision of intersubject registration may not have been high enough [32]. As one region of the brain is matched, the small number of parameters may force a poor match of other brain regions. A simple illustrative example would be the behavior of alignment when dealing with a population of subjects with large ventricles. An effect of aligning the ventricles of these subjects may be that there is displacement of tissue elsewhere in the spatially normalized brain images. If misregistration is systematically greater in one population than in another, then it may be a cause of statistically significant differences among the pre-processed data. The anatomical differences may be real, but their localization and explanation in terms of gray matter volumetry may be incorrect [43].

Dartel uses a more sophisticated registration model [44], which was developed to attempt to counter these criticisms. Rather than only about 1000 parameters, this model uses in the order of 6,000,000 parameters. These provide enough degrees of freedom to achieve intersubject alignment that is much more precise. Recent work has evaluated Dartel in comparison to various ROI-based registration algorithms [45], which explicitly match user-defined structures. Although Dartel was not found to be as accurate as the ROI approaches, it still performed well in comparison to the other approaches instantiated in the SPM software. Further work [46] shows that the accuracy of Dartel registration is much higher than that achieved by the other intersubject registration approaches in the SPM software. The evaluations also compared a number of other widely used and fully automated intersubject registration algorithms. Measures of accuracy were based on the amount of overlap achieved for manually defined regions of the brain, but this time, these manual definitions played no part in computing the actual registrations. Although ground truth, in the form of cytoarchitectonic maps, was not available for the evaluation, the manual delineation of cortical regions by human experts was assumed to be a reasonably accurate reflection of the underlying functional areas.

Ideally, the more detailed registration approach of Dartel would be incorporated into the segmentation model. Unfortunately, theory and practice do not always coincide, and sometimes shortcuts are taken for practical purposes. Therefore, segmentation and Dartel registration are currently two separate steps in a processing pipeline. The strategy used by Dartel is to align tissue class images to their common average. It relies on SPM5 segmentation for generating gray and white matter tissue class images for all subjects, which are in the closest possible rigid-body alignment with each other. The Dartel registration itself involves alternating between computing a template, based on the average tissue probability maps from all subjects, and then warping all subjects' tissue maps into increasingly good alignment with the template [47]. Because the procedure is initiated with rigidly aligned data, the initial averages are blurred. As the data gradually become better aligned, the averages become increasingly sharp. Currently, gray and white matter tissue maps are simultaneously aligned, but additional maps could also be included in the registration procedure.

Image registration can be conceptualized as an optimization procedure, set within a probabilistic framework. The optimization searches for those parameter settings that are most probable according to the registration model. It is convenient to frame the optimization as one of minimizing a negative log probability. There are usually two terms to such an objective function: the likelihood term and the prior term. The likelihood term reflects the similarity among the warped images, whereas the prior term serves to regularize the estimated deformations and penalizes them as they become too extreme or unrealistic. One of the commonest likelihood terms involves minimizing the mean squared difference between the individual's data and a deformed template. This is most appropriate for models that involve gaussian random noise. Because Dartel is based on matching tissue maps, which are almost binary, a more appropriate likelihood term is based on assuming multinomial distributions [47].
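A minimal sketch of such an objective function, assuming gaussian noise and a simple quadratic penalty on the displacement parameters (the regularization weight `lam` is a hypothetical setting, not anything used by SPM):

```python
import numpy as np

def neg_log_posterior(warped_template, image, displacement, lam=0.1):
    # Likelihood term: mean squared difference, appropriate for gaussian noise.
    likelihood = 0.5 * np.sum((image - warped_template) ** 2)
    # Prior term: penalizes deformations as they become extreme or unrealistic.
    prior = 0.5 * lam * np.sum(displacement ** 2)
    return likelihood + prior
```

Minimizing this sum trades off image similarity against deformation plausibility; Dartel's actual likelihood term is multinomial rather than gaussian.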

It is quite common for nonlinear registration algorithms to use a small deformation model, whereby the parameters describe a displacement field, which would be added to or subtracted from an identity transform. Such models are usually not constrained to enforce a one-to-one mapping among brains, so this relationship can easily break down if the displacements become too large [48]. When the magnitudes of the displacements are smaller, there is less chance of this occurring. If two deformations that both encode a one-to-one mapping are composed (i.e., one deformation warped by another), then the resulting deformation will also be one-to-one [48]. This leads to the strategy of the large deformation diffeomorphic metric mapping (LDDMM) algorithm [49], which effectively treats a larger deformation as the composition of a sequence of smaller ones. Dartel was intended to be a fast approximation to the LDDMM approach, and models a large deformation as a small deformation composed with itself a large number of times [50–52]. Although the deformations it achieves are not as optimal as those obtained by LDDMM, it still ensures that they are approximately one-to-one.
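The squaring strategy can be illustrated in one dimension: a small displacement is recursively composed with itself, so eight squarings yield the equivalent of 256 compositions while the mapping stays monotonic (one-to-one). This is a toy sketch using linear interpolation, not Dartel's implementation:

```python
import numpy as np

def compose(phi, psi, x):
    """(phi o psi)(x): sample phi at the points psi(x) by linear interpolation."""
    return np.interp(psi, x, phi)

x = np.linspace(0.0, 1.0, 101)
v = 0.001 * np.sin(np.pi * x)   # a small displacement field (zero at the ends)
phi = x + v                     # small deformation: identity plus displacement
for _ in range(8):              # 8 squarings correspond to 2**8 = 256 compositions
    phi = compose(phi, phi, x)
```

Because each composition of monotone mappings is itself monotone, the accumulated deformation remains one-to-one even though the total displacement is far larger than the initial one.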

Dartel creates a “flow field” for each subject, which encodes how the individual images should be warped, or deformed, to best match the average shape of the template. For a typical VBM study, the gray matter tissue class image of each subject would be warped to this average space. This average may not be well aligned with the coordinate system of Talairach and Tournoux [53,54] or MNI space [55,56], so a further spatial transformation may be required in order to report the location of differences within a more established coordinate system.

In order to preserve the volume of tissue from each structure, the warped images are multiplied, voxel-by-voxel, with the Jacobian determinants of the deformations. Jacobian determinants encode relative volumes of tissue, before and after warping. For example, if a region shrinks to half its original volume during the warping, then the intensity would be doubled so that the total signal from that region is conserved.
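A one-dimensional toy check of this modulation step, assuming a warp that shrinks a structure to half its width (so the Jacobian determinant there is 2):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

original = ((x >= 0.4) & (x < 0.6)).astype(float)   # structure of "volume" ~0.2
warped = ((x >= 0.45) & (x < 0.55)).astype(float)   # same structure at half width
jacobian = np.where(warped > 0, 2.0, 1.0)           # relative volume change
modulated = warped * jacobian                       # intensity doubled where shrunk

vol_original = original.sum() * dx
vol_modulated = modulated.sum() * dx                # total signal is conserved
```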

4.3. “Smoothing”

The Jacobian-corrected, warped tissue class images would then be blurred by low-pass filtering the images. This is usually done by convolving with an isotropic gaussian kernel, which typically has a full width at half maximum of between about 8 and 12 mm. After blurring, the resulting image essentially contains a weighted sum of the tissue around each voxel. If convolution were done using a spherical kernel, then each voxel of the convolved image would represent a count of the number of voxels containing tissue within a sphere around that voxel. In practice though, the data are typically convolved with a gaussian, so the result is a weighted count of voxels containing the tissue (see Fig. 3). The degree of blurring should relate to the accuracy with which the data can be registered. More blurring should be used if intersubject registration is less accurate.
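The quoted FWHM relates to the gaussian's standard deviation by sigma = FWHM / (2 * sqrt(2 * ln 2)), or roughly FWHM / 2.355. A sketch of such smoothing on a single "voxel" of tissue, assuming a hypothetical 2 mm voxel size:

```python
import numpy as np

fwhm_mm = 8.0
voxel_mm = 2.0   # assumed voxel size
sigma = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm  # in voxels

# Build a unit-sum gaussian kernel; unit sum preserves the total tissue signal.
r = int(np.ceil(4 * sigma))
k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
k /= k.sum()

signal = np.zeros(41)
signal[20] = 1.0                                  # one voxel containing tissue
smoothed = np.convolve(signal, k, mode="same")    # weighted count around each voxel
```

After convolution the total signal still sums to one, but it is spread as a weighted count over the neighborhood, exactly as described above.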

In principle, rather than using spheres or gaussians, it would be possible to define regions of interest on the pre-processed data. The voxel count within such a region would provide an estimate of the tissue volume within the structure over which the region is defined. It should be apparent that this would be the same as the traditional ROI-based volumetry approach [23], where the accuracy of the volume measure depends on the accuracy with which the tissues are segmented and registered.

4.4. Statistical analysis

Statistical analyses of the pre-processed data are performed by fitting a general linear model at each voxel [31]. The principle here is that a design matrix is specified, which models the sources of variance among the data. If there were N subjects in the study, the design matrix would contain N rows. The number of columns must be much less than N in order to accurately estimate the residual variance and model the data properly. Typically, the matrix may contain blocks that represent the group to which each subject belongs. Consider a study comparing (for example) healthy controls with some population of patients. There would be two columns in the design matrix to indicate group membership of each subject. Elements of the first and second columns may be zero and one, respectively, for controls, and one and zero, respectively, for patients. This would be the simplest kind of experimental design for a VBM study.
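A minimal sketch of this two-group design, fitting the model by ordinary least squares at a single voxel (the gray matter values are invented; SPM additionally handles contrasts, covariates and corrections for multiple comparisons):

```python
import numpy as np

n_controls, n_patients = 3, 2
X = np.zeros((n_controls + n_patients, 2))   # N rows, one per subject
X[:n_controls, 1] = 1.0                      # controls: (0, 1), as in the text
X[n_controls:, 0] = 1.0                      # patients: (1, 0)

# Pre-processed gray matter values at one voxel (illustrative numbers).
y = np.array([0.61, 0.59, 0.60, 0.50, 0.52])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # here, beta holds the group means
```

For this simple design the fitted coefficients are just the patient and control group means, and a contrast of the two columns tests their difference.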

Fig. 3. This illustrates some of the steps involved in pre-processing for VBM. The left-hand column shows gray matter tissue maps from two subjects, which are aligned according to a rigid-body transform. The middle column shows the same data after Dartel registration and scaling according to the Jacobian determinant of the deformation. The right-hand column shows these data after blurring with a gaussian kernel. These pre-processed data would be entered for statistical testing.

Typically, there would be both male and female subjects in a study. Male brains tend to be systematically larger (with proportionally more white matter) than female brains. Because of this, it is useful to model out the effects of sex and possibly their interactions with the effect of interest [57]. Similarly, age has quite a large influence on the composition of brains, so modeling out age effects is also often desirable. If there are large numbers of subjects in the study, then some nonlinear effects can also be accounted for by including age and age squared as covariates in the model. Another important effect to consider is that of scanner and sequence. Although there is evidence to suggest that some data from different scanners may not be very systematically different from each other [58], it is much safer to model out any scanner effects from the data. This would therefore preclude a comparison of (for example) patient data collected on one scanner with control data collected on another.
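Such covariates can be appended as extra columns of the design matrix. In this sketch, age and age squared are mean-centered (a common convention, adopted here as an assumption) so that the group columns remain interpretable as group means; the ages are invented:

```python
import numpy as np

groups = np.array([[1, 0], [1, 0], [1, 0],
                   [0, 1], [0, 1]], dtype=float)   # two-group indicator columns
age = np.array([25.0, 40.0, 60.0, 35.0, 55.0])     # invented subject ages

a = age - age.mean()                               # mean-centered age
a2 = a ** 2 - (a ** 2).mean()                      # centered quadratic age term
X = np.column_stack([groups, a, a2])               # N x 4 design matrix
```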

For VBM, the same issues arise concerning “global” effects as in the case of traditional volumetric analyses. The choice of how best to correct for overall brain or head size is left up to the user [21]. Similarly, concerns about allometric effects should also apply within the VBM framework, but how best to deal with them is an unresolved issue.

Comparisons of cortical thickness measurements can also be made within a similar framework to VBM [59], as can comparisons of other features pertaining to shape. The choice of features to examine would generally be based upon existing knowledge about the condition under study. For example, if the objective were to study a disease that is believed to cause thinning of the cortical sheet, then comparisons of cortical thickness should reveal more significant differences than comparisons of volume. Similarly, if the degree of folding were believed to differ because of some condition, then the appropriate comparisons would involve some measure that relates more directly to folding, such as the curvature of the surface. Such measures are not as local as volumetric differences and require a neighborhood of voxels to be considered for their computation. If very little is known about the condition, region-wise multivariate measures may be most appropriate.

5. Analyzing deformation fields

Usually, when procedures such as SPM are applied to imaging data, the images are all spatially normalized to some common reference space. This requires a deformation field for each subject, which maps from the reference space to homologous points in the individual. These deformation fields encode the shapes of the individual brains, relative to the reference. These codifications can be analyzed in a number of different ways, some of which are described next.

5.1. Localized differences

If intersubject image registration could be made highly accurate, then there would be no need to partition the images into different tissue classes or to blur the data spatially. The same volumetric information could be obtained entirely from the Jacobian determinants of the deformations that warp the individual subjects' data to a common space. In fact, the analysis of the deformation fields themselves has the potential to reveal more useful information than could be obtained from a purely VBM-type analysis. There is much more information about shape available than just that which is encoded by volumes. Information about lengths, areas, angles and so on may be missed by an analysis of only volumes.

One approach for analyzing deformation fields has become known as “tensor-based morphometry.” This approach is intended for localizing certain forms of morphometric difference and is based on examining the Jacobian matrices at each point in the deformation fields. These matrices (tensors) encode localized relative measures, such as volume (the determinant), length and area. These measures can be compared using mass-univariate tests at each voxel. It is also possible to use multivariate tests at each voxel, such that several features extracted from the Jacobian matrices are analyzed simultaneously. The principles of allometry can also be used, for example, by testing the logarithms of the Jacobian determinants. Alternatively, the Jacobian matrices can be transformed in various ways, for example, by converting them into Hencky strain tensors [60] or by working with the symmetric component of the matrix logarithm of the Jacobian matrices [61,62]. The symmetric component encodes zooms and shears, and is obtained using a Cartan decomposition. The antisymmetric (skew-symmetric) components encode rotations, but rotations and translations would not be included in a voxel-wise analysis as they are not local shape properties. In order to work with such logarithmic transforms, the registration procedure must enforce a continuous one-to-one mapping (i.e., be diffeomorphic).
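A sketch of extracting such features from a single 2x2 Jacobian matrix: the determinant, its logarithm, and the Hencky strain tensor, here computed as E = 0.5 * log(J^T J) via an eigendecomposition (one common definition; the matrix entries are invented):

```python
import numpy as np

def hencky_strain(J):
    """E = 0.5 * log(J^T J), via the eigendecomposition of the symmetric J^T J."""
    w, V = np.linalg.eigh(J.T @ J)
    return V @ np.diag(0.5 * np.log(w)) @ V.T

J = np.array([[1.2, 0.1],
              [0.0, 0.9]])          # Jacobian matrix at one voxel (illustrative)
det = np.linalg.det(J)              # relative volume
log_det = np.log(det)               # feature suited to allometric-style tests
E = hencky_strain(J)                # encodes zooms and shears, not rotations
```

As a consistency check, the trace of E equals the log of the Jacobian determinant, so the volumetric feature is recoverable from the strain tensor.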

5.2. Patterns of difference

Another approach to morphometry would involve an analysis of the entire shape, within a multivariate model. This type of analysis has a longer history, beginning with the early approaches of Bookstein and others. It is possible to perform multivariate shape analyses on manually placed landmarks at homologous points in the brain images of a population of subjects. The first step would be to correct for the pose of the subjects and would typically be done by a Procrustes analysis [63,64]. This involves determining the rigid-body transform (possibly also with isotropic scaling) that most closely aligns the points. It begins by translating so that the centroids of the clusters of points for each subject are aligned with the origin. Then, if isotropic scaling is to be removed, the points are transformed such that their average root mean squared distance from the origin is one. Rotations are computed by a procedure involving singular value decomposition of the cross-covariance matrix of the points. Once the correction matrix is determined, it is used to reposition the points such that they no longer encode any information about location and orientation (and possibly scale). Following the Procrustes analysis, multivariate statistical tests, such as MANCOVA, can be used to assess whether there are significant differences associated with some measure. Visualization of these differences can be done via methods such as canonical correlation analyses (CCAs) or Fisher's linear discriminant analysis (LDA), which essentially show the most highly discriminative deformations. It should be noted that the relationship between CCA and LDA is analogous to that between an F test and a t test, in that LDA is less flexible but does encode the sign of the discriminant direction. This kind of analysis does not localize discrete differences, so the concept of applying statistical tests to find significant differences at particular voxels becomes meaningless. Instead, these models show the overall pattern of difference, and any statistical tests would be applied to the overall patterns. For more information about this form of shape modeling, the reader is referred to the textbooks [65,66].
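The pose-correction step can be sketched as follows: after centering two landmark sets, the rotation comes from a singular value decomposition of their cross-covariance matrix (rigid-body only, with no isotropic scaling; the landmark coordinates are invented):

```python
import numpy as np

def procrustes_rotation(src, dst):
    """Rotation R such that (src - mean) @ R best matches (dst - mean)."""
    S = src - src.mean(axis=0)              # translate centroids to the origin
    D = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(S.T @ D)       # SVD of the cross-covariance matrix
    R = U @ Vt
    if np.linalg.det(R) < 0:                # exclude reflections
        U[:, -1] *= -1
        R = U @ Vt
    return R

theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.5, 1.0]])  # landmarks
B = A @ R_true.T + np.array([3.0, -1.0])    # rotated and translated copy
R = procrustes_rotation(B, A)               # recovers the rotation
```

After applying the recovered rotation to the centered points, the two configurations coincide and no longer encode location or orientation.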

A similar multivariate morphometry approach can be applied to deformation fields that are estimated by nonlinear image registration [67]. This procedure, known within the neuroimaging field as “deformation-based morphometry” (DBM), involves very similar statistical analysis models to the procedures involving manual identification of features. As is the case for VBM analyses, it is important that the intersubject registration is as accurate as possible. Inaccurate registration would lead to unreliable interpretation of the results. The parameter files generated by the segmentation approach of SPM5 encode deformation fields, which have been corrected by Procrustes analyses, and are therefore in a suitable form for applying DBM.

Automated intersubject image registration algorithms estimate much more detailed mappings than would alignment based on manually defined landmarks. As a result, there are many more parameters per subject than there are subjects in the study. Issues pertaining to the “curse of dimensionality” mean that some form of regularization is needed to obtain meaningful results from shape modeling. Many multivariate models require within-group covariance matrices to be computed, but these are inaccurate, or singular, if there are too many parameters per subject compared with the number of subjects. One form of regularization involves dimensionality reduction to reduce the size of the covariance matrices. Principal component analysis (PCA) is one commonly used dimensionality reduction approach, but there are alternative approaches for identifying potentially salient features. Many of these procedures can be viewed as generative models of the deformations (see, e.g., [41] for descriptions of PCA, independent component analyses, etc.). More sophisticated regularization methods are found within the multivariate pattern recognition literature. The general perception of pattern recognition approaches is that they are only useful for separating individual subjects into different groups, but some forms of pattern recognition are also able to provide similar characterizations to approaches such as CCA.
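A sketch of the dimensionality reduction step: with far more parameters than subjects, projecting the mean-centered feature matrix onto its first few principal components, computed here by SVD, gives a compact representation with a well-conditioned covariance (random numbers stand in for deformation features):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_params = 8, 1000
F = rng.normal(size=(n_subjects, n_params))  # stand-in for deformation parameters

Fc = F - F.mean(axis=0)                      # mean-center across subjects
U, s, Vt = np.linalg.svd(Fc, full_matrices=False)

k = 3                                        # retain 3 components
scores = Fc @ Vt[:k].T                       # each subject reduced to k numbers
```

The reduced scores are mutually orthogonal, so their covariance matrix is diagonal and trivially invertible, which is what the multivariate tests require.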


5.3. Diffeomorphic models

Rather than simply comparing the deformation fields themselves, it is possible to attempt to understand the causes of the deformations. Allometry was first conceived as a model of growth [25], whereby a representation of a constant growth rate is derived from the logarithms of the measures. Computing an exponential of these growth rates is equivalent to an integration procedure. For a growth rate, v, its exponential (φ1) can be computed by setting φ0 = 1 and integrating the evolution equation dφ/dt = vφ from t = 0 to t = 1.
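A quick numerical check of this relationship: forward-Euler integration of dφ/dt = vφ from φ(0) = 1 over unit time recovers exp(v).

```python
import math

def integrate_growth(v, n_steps=100000):
    """Euler integration of dphi/dt = v * phi over unit time, from phi(0) = 1."""
    phi, dt = 1.0, 1.0 / n_steps
    for _ in range(n_steps):
        phi += dt * v * phi
    return phi

phi1 = integrate_growth(0.5)   # approaches exp(0.5) as the step size shrinks
```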

In order to gain a proper understanding of the process of morphogenesis, it is important to use a biologically plausible shape model. The diffeomorphic framework provides the basis for such a model and is concerned with the deeper causes of shape variability [68]. The mathematics is too complicated to enter into here, but [69] provides a good overview. In order to utilize this framework, images must be aligned using approaches such as LDDMM [49], which explicitly model shape differences between brains by integrating the appropriate differential equations. LDDMM is based on a variational approach (the principle of stationary action) for computing the trajectory, over unit time, as one brain evolves to match another, but other strategies are also possible [70,71]. Subsequent analysis is then based either on metric distances between all pairs of brains or on analysis of initial momentum maps. These momentum maps encode the initial conditions for the differential equations that warp a template image to match each of the individual brain images in the study. The initial evaluations of Dartel were based on such an analysis of its flow fields [44]. These results were slightly disappointing, but they are easily explained by comparing the properties of its flow fields with those of the initial momentum maps obtained by LDDMM.

Diffeomorphisms form an essential component of Grenander's Pattern Theory [72], which is a generative modeling framework for representing the kind of complicated data that are usually encountered in biology. Initial momentum maps provide a more parsimonious representation of shape than do other parameterizations, so they provide a potentially very powerful framework for characterizing shape differences. Given the rate with which computer power is growing, along with the increasing mathematical knowledge among neuroscientists, this framework is likely to be very effective in the future.

6. Predictions

From a Bayesian perspective of science [42], a body of knowledge is accumulated so that posterior probabilities from one experiment become prior probabilities for another. In practice though, each experiment is usually performed independently, with a relatively small data set. Findings from different experiments are combined by meta-analyses of the statistical results of the individual experiments (i.e., statistical measures are derived from statistical measures), making a consistent synthesis impossible (even if the problem of publication bias is ignored). Another strategy would be to increase the amount of raw data that is shared among researchers. Other biological sciences share much more of their data, and in these fields, bioinformatics projects have become a more important component of the metrics used to assess the research. Such data-sharing projects are starting to emerge within neuroimaging. For example, the Alzheimer's Disease Neuroimaging Initiative project is beginning to show that data sharing could potentially lead to greater returns in terms of the general goal of increasing human health [73–75]. The greatest understanding of neuroanatomical variability can only really come from large data sets, particularly for multivariate models of shape, where the curse of dimensionality limits the number of features that can be examined [76].

Sometimes, it is worth examining why researchers would like to adopt some particular form of morphometric analysis. If the goal is to derive useful knowledge that could one day lead to better diagnosis, then a framework that is capable of actually providing a diagnosis should be preferable [5,77]. Published papers from morphometry studies only report a fraction of the anatomical variability that could be pertinent to the intended goal. Provided that suitable training data are available, pattern recognition approaches have the potential to provide diagnoses based on brain anatomy. One example of such an approach applied to Alzheimer's disease diagnosis, which used a relatively small sample of (postmortem-confirmed) subjects for training, showed an accuracy comparable to that achieved by radiologists examining the same data [78,79].

Similarly, if the goal is to find links between genetic data and overall brain anatomy, or the anatomy of brain regions comprising several voxels, then it may be desirable to apply pattern recognition approaches. These are more sensitive, at the level of the individual, than univariate approaches. Cross-validation procedures can then be used to obtain P values. Currently, most applications of pattern recognition are applied to volumetric data, although the emergence of improved shape models may allow more accurate predictions to be made. Scientific progress could be defined by increasing predictive accuracy.
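One way to sketch this: leave-one-out cross-validation of a simple nearest-group-mean classifier on synthetic "anatomical" features, with a permutation test over shuffled labels supplying the P value. This is a generic illustration, not any specific published method:

```python
import numpy as np

rng = np.random.default_rng(0)

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-group-mean classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                    # hold subject i out
        m0 = X[mask & (y == 0)].mean(axis=0)
        m1 = X[mask & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0))
        correct += int(pred == y[i])
    return correct / len(y)

# Two synthetic groups of 10 subjects with 5 features each.
X = np.vstack([rng.normal(0.0, 1.0, (10, 5)),
               rng.normal(2.0, 1.0, (10, 5))])
y = np.repeat([0, 1], 10)

acc = loo_accuracy(X, y)
null = [loo_accuracy(X, rng.permutation(y)) for _ in range(99)]
p_value = (1 + sum(a >= acc for a in null)) / (1 + len(null))
```

The null distribution is built by repeating the identical cross-validation on label-shuffled data, so the resulting P value reflects how often chance alone matches the observed accuracy.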

Acknowledgment

John Ashburner is funded by the Wellcome Trust.

References

[1] Thompson DW. Growth and Form. Cambridge, UK: Cambridge University Press; 1917.

[2] Friston KJ, Holmes AP, Poline JB, Price CJ, Frith C. Detecting activations in PET and fMRI: levels of inference and power. Neuroimage 1995;40:223–35.

[3] Friston KJ, Holmes AP, Worsley KJ, Poline JB, Frith C, Frackowiak RSJ. Statistical parametric maps in functional imaging: a general linear approach. Hum Brain Mapp 1995;2:189–210.


[4] Fischl B, Rajendran N, Busa E, Augustinack J, Hinds O, Yeo BTT,et al. Cortical folding patterns and predicting cytoarchitecture. CerebCortex 2008;18(8):1973–80.

[5] Lao Z, Shen D, Xue Z, Karacali B, Resnick SM, Davatzikos C.Morphological classification of brains via high-dimensional shapetransformations and machine learning methods. Neuroimage 2004;21(1):46–57.

[6] Collins DL, Holmes C, Peters T, Evans A. Automatic 3D model-basedneuroanatomical segmentation. Hum Brain Mapp 1995;3:190–208.

[7] Collins DL, Zijdenbos AP, Barré WFC, Evans AC. ANIMAL+INSECT: improved cortical structure segmentation. In: Kuba A,Samal M, Todd-Pokropek A, editors. Proc. of the Annual Symposiumon Information Processing in Medical Imaging. Lect Notes ComputSci. New York: Springer; 1999. p. 210–23.

[8] Goldszal AF, Davatzikos C, Pham DL, Yan MXH, Bryan RN, ResnickSM. An image-processing system for qualitative and quantitativevolumetric analysis of brain images. J Comput Assist Tomogr 1998;22(5):827–37.

[9] Heckemann RA, Hajnal JV, Aljabar P, Rueckert D, Hammers A.Automatic anatomical brain MRI segmentation combining labelpropagation and decision fusion. Neuroimage 2006;33(1):115–26.

[10] Klein A, Hirsch J. Mindboggle: a scatterbrained approach to automatelabelling. Neuroimage 2005;24(2):261–80.

[11] Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, EtardO, Delcroix N, et al. Automated anatomical labeling of activations inSPM using a macroscopic anatomical parcellation of the MNI MRIsingle-subject brain. Neuroimage 2002;15(1):273–89.

[12] Tzourio-Mazoyer N, Hervé PY, Mazoyer B. Neuroanatomy: tool forfunctional localization, key to brain organization. Neuroimage 2007;37(4):1059–60.

[13] Fischl B, Salat DH, Busa E, Albert M, Dieterich M, Haselgrove C,et al. Whole brain segmentation: automated labeling of neuroanato-mical structures in the human brain. Neuron 2002;33(3):341–55.

[14] Fischl B, van der Kouwe A, Destrieux C, Halgren E, Segonne F, SalatDH, et al. Automatically parcellating the human cerebral cortex. CerebCortex 2004;14(1):11–22.

[15] Poupon F, Mangin J-F, Hasboun D, Magnin I, Frouin V. Multi-objectdeformable templates dedicated to the segmentation of brain deepstructures. In: Wells WM, Colchester A, Delp S, editors. Proc. 1stMICCAI, LNCS-1496, MIT, Boston. New York: Springer Verlag;1998. p. 1134–43.

[16] Cachia A, Mangin J-F, Rivière D, Papadopoulos-Orfanos D, Kherif F,Bloch I, et al. A generic framework for parcellation of the corticalsurface into gyri using geodesic Voronoi diagrams. Med Image Anal2003;7(4):403–16.

[17] Pitiot A, Delingette H, Thompson PM, Ayache N. Expert knowledge-guided segmentation system for brain MRI. Neuroimage 2004;23(1):S85–96.

[18] Patenaude B. Bayesian statistical models of shape and appearancefor subcortical brain segmentation. D.Phil. Thesis. Oxford, UK:University of Oxford; 2007.

[19] Khan AR, Wang L, Beg MF. FreeSurfer-initiated fully-automatedsubcortical brain segmentation in MRI using large deformationdiffeomorphic metric mapping. Neuroimage 2008;41(3):735–46.

[20] Friston KJ, Frith C, Liddle PF, Dolan R, Lammertsma AA, FrackowiakRSJ. The relationship between global and local changes in PET scans.J Cereb Blood Flow Metab 1990;10:458–66.

[21] O' Brien LM, Ziegler DA, Deutsch CK, Kennedy DN, Goldstein JM,Seidman LJ, et al. Adjustment for whole brain and cranial size involumetric brain studies: a review of common adjustment factors andstatistical methods. Harv Rev Psychiatry 2006;14(3):141–51.

[22] Mechelli A, Friston KJ, Frackowiak RS, Price CJ. Structuralcovariance in the human cortex. J Neurosci 2005;25(36):8303–10.

[23] Good CD, Scahill RI, Fox NC, Ashburner J, Friston KJ, Chan D, et al.Automatic differentiation of anatomical patterns in the human brain:validation with studies of degenerative dementias. Neuroimage 2002;17(1):29–46.

[24] Schoenemann PT. Evolution of the size and functional areas of thehuman brain. Annu Rev Anthropol 2006;35:379–406.

[25] Huxley JS. Problems of Relative Growth. London, UK: Methuen &Co. Ltd; 1932.

[26] Zhang K, Sejnowski TJ. A universal scaling law between gray matterand white matter of cerebral cortex. Proc Natl Acad Sci USA 2000;97(10):5621–6.

[27] Good CD, Johnsrude IS, Ashburner J, A Henson RN, Friston KJ,Frackowiak RSJ. A voxel-based morphometric study of ageing in 465normal adult human brains. Neuroimage 2001;14:21–36.

[28] Ashburner J, Friston KJ. Voxel-based morphometry: the methods.Neuroimage 2000;11(6):805–21.

[29] Mechelli A, Price CJ, Friston KJ, Ashburner J. Voxel-basedmorphometry of the human brain: methods and applications. CurrentMedical Imaging Reviews 2005;1(2):105–13.

[30] Wright IC, McGuire PK, Poline JB, Travere JM, Murray RM, Frith C,et al. A voxel-based method for the statistical analysis of gray andwhite matter density applied to schizophrenia. Neuroimage 1995;2:244–52.

[31] Salmond CH, Ashburner J, Vargha-Khadem F, Connelly A, GadianDG, Friston KJ. Distributional assumptions in voxel-based morpho-metry. Neuroimage 2002;17(2):1027–30.

[32] Bookstein FL. “Voxel-based morphometry” should not be used withimperfectly registered images. Neuroimage 2001;14(6):1454–62.

[33] Davatzikos C, Genc A, Xu D, Resnick SM. Voxel-basedmorphometry using the RAVENS Maps: methods and validationusing simulated longitudinal atrophy. Neuroimage 2001;14(6):1361–9.

[34] Deichmann R, Good CD, Josephs O, Ashburner J, Turner R.Optimization of 3-D MP-RAGE sequences for structural brainimaging. Neuroimage 2000;12(1):112–27.

[35] Ashburner J, Friston KJ. Unified segmentation. Neuroimage 2005;26(3):839–51.

[36] Jovicich J, Czanner S, Greve D, Haley E, van der Kouwe A, Gollub R, et al. Reliability in multi-site structural MRI studies: effects of gradient non-linearity correction on phantom and human data. Neuroimage 2005;30:436–43.

[37] Crinion J, Ashburner J, Leff A, Brett M, Price C, Friston K. Spatial normalization of lesioned brains: performance evaluation and impact on fMRI analyses. Neuroimage 2007;37(3):866–75.

[38] Rex DE, Ma JQ, Toga AW. The LONI pipeline processing environment. Neuroimage 2003;19(3):1033–48.

[39] Zijdenbos AP, Forghani R, Evans AC. Automatic “pipeline” analysis of 3-D MRI data for clinical trials: application to multiple sclerosis. IEEE Trans Med Imaging 2002;21(10):1280–91.

[40] Mazziotta J, Toga A, Evans A, Fox P, Lancaster J. A probabilistic atlas of the human brain: theory and rationale for its development. The International Consortium for Brain Mapping. Neuroimage 1995;2(2):89–101.

[41] Bishop CM. Pattern recognition and machine learning. New York: Springer Science and Business Media, LLC; 2006.

[42] Jaynes ET. Probability Theory: The Logic of Science. Cambridge, UK: Cambridge University Press; 2003.

[43] Ashburner J, Friston KJ. Why voxel-based morphometry should be used. Neuroimage 2001;14(6):1238–43.

[44] Ashburner J. A fast diffeomorphic image registration algorithm. Neuroimage 2007;38(1):95–113.

[45] Yassa MA, Stark CEL. A quantitative evaluation of cross-participant registration techniques for MRI studies of the medial temporal lobe. Neuroimage 2009;44:319–27.

[46] Klein A, Andersson J, Ardekani BA, Ashburner J, Avants B, Chiang M-C, et al. Evaluation of 15 nonlinear deformation algorithms applied to human brain MRI registration. Neuroimage, in press. doi:10.1016/j.neuroimage.2008.12.037.

[47] Ashburner J, Friston KJ. Computing average shaped tissue probability templates. Neuroimage, in press. doi:10.1016/j.neuroimage.2008.12.008.

[48] Christensen GE, Rabbitt RD, Miller MI, Joshi SC, Grenander U, Coogan TA, et al. Topological properties of smooth anatomic maps. In: Bizais Y, Barillot C, Di Paola R, editors. Proceedings of Information Processing in Medical Imaging. Dordrecht, The Netherlands: Kluwer Academic Publishers; 1995. p. 101–12.

[49] Beg MF, Miller MI, Trouvé A, Younes L. Computing large deformation metric mappings via geodesic flows of diffeomorphisms. Int J Comput Vis 2005;61(2):139–57.

[50] Arsigny V, Commowick O, Pennec X, Ayache N. Statistics on diffeomorphisms in a Log-Euclidean framework. In: Larsen R, Nielsen M, Sporring J, editors. MICCAI 2006, LNCS, vol. 4190. Berlin: Springer-Verlag; 2006. p. 924–31.

[51] Hernandez M, Bossa MN, Olmos S. Registration of anatomical images using geodesic paths of diffeomorphisms parameterized with stationary vector fields. IEEE Workshop on Math. Meth. in Biom. Image Anal. (MMBIA 07); 2007.

[52] Vercauteren T, Pennec X, Perchant A, Ayache N. Symmetric log-domain diffeomorphic registration: a demons-based approach. Proceedings of the 11th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2008), September 6–10, 2008, New York; 2008.

[53] Lancaster JL, Woldorff MG, Parsons LM, Liotti M, Freitas CS, Rainey L, et al. Automated Talairach atlas labels for functional brain mapping. Hum Brain Mapp 2000;10:120–31.

[54] Talairach J, Tournoux P. Coplanar stereotaxic atlas of the human brain. New York: Thieme Medical; 1988.

[55] Eickhoff SB, Paus T, Caspers S, Grosbras M-H, Evans AC, Zilles K, et al. Assignment of functional activations to probabilistic cytoarchitectonic areas revisited. Neuroimage 2007;36(3):511–21.

[56] Evans AC, Collins DL, Mills SR, Brown ED, Kelly RL, Peters TM. 3D statistical neuroanatomical models from 305 MRI volumes. Proc. IEEE Nuclear Science Symposium and Medical Imaging Conference; 1993. p. 1813–7.

[57] Luders E, Narr KL, Thompson PM, Rex DE, Woods RP, DeLuca H, et al. Gender effects on cortical thickness and the influence of scaling. Hum Brain Mapp 2006;27:314–24.

[58] Stonnington CM, Tan G, Klöppel S, Chu C, Draganski B, Jack CR Jr, et al. Interpreting scan data acquired from multiple scanners: a study with Alzheimer's disease. Neuroimage 2008;39(3):1180–5.

[59] Hutton C, De Vita E, Ashburner J, Deichmann R, Turner R. Voxel-based cortical thickness measurements in MRI. Neuroimage 2008;40(4):1701–10.

[60] Ashburner J, Andersson JLR, Friston KJ. High-dimensional image registration using symmetric priors. Neuroimage 1999;9(6):619–28.

[61] Lepore N, Brun C, Chou Y-Y, Chiang M-C, Dutton RA, Hayashi KM, et al. Generalized tensor-based morphometry of HIV/AIDS using multivariate statistics on deformation tensors. IEEE Trans Med Imaging 2008;27(1):129–41.

[62] Woods RP. Characterizing volume and surface deformations in an atlas framework: theory, applications, and implementation. Neuroimage 2003;18(3):769–88.

[63] Bookstein FL. Endophrenology: new statistical techniques for studies of brain form: life on the hyphen in neuro-informatics. Neuroimage 1996;4(3):S36–8.

[64] Bookstein FL. Landmark methods for forms without landmarks: morphometrics of group differences in outline shape. Med Image Anal 1997;1:225–43.

[65] Dryden IL, Mardia KV. Statistical shape analysis. New York: John Wiley and Sons; 1998.

[66] Kendall DG, Barden D, Carne TK, Le H. Shape and shape theory. Chichester, UK: Wiley; 1999.

[67] Ashburner J, Hutton C, Frackowiak RSJ, Johnsrude I, Price CJ, Friston KJ. Identifying global anatomical differences: deformation-based morphometry. Hum Brain Mapp 1998;6(5):348–57.

[68] Vaillant M, Miller MI, Younes L, Trouvé A. Statistics on diffeomorphisms via tangent space representations. Neuroimage 2004;24:S161–9.

[69] Miller MI. Computational anatomy: shape, growth, and atrophy comparison via diffeomorphisms. Neuroimage 2004;23:S19–33.

[70] Marsland S, McLachlan RI. A Hamiltonian particle method for diffeomorphic image registration. In: Karssemeijer N, Lelieveldt B, editors. Proceedings of Information Processing in Medical Images. Lect Notes Comput Sci. New York: Springer; 2006. p. 396–407.

[71] Miller M, Trouvé A, Younes L. Geodesic shooting for computational anatomy. J Math Imaging Vis 2006;24(2):209–28.

[72] Grenander U, Miller MI. Pattern theory: from representation to inference. Oxford, UK: Oxford University Press; 2007.

[73] Butcher J. Alzheimer's researchers open the doors to data sharing. Lancet Neurol 2007;6(5):480–1.

[74] Mueller SG, et al. The Alzheimer's disease neuroimaging initiative. Neuroimaging Clin N Am 2005;15:869–77.

[75] Mueller SG, et al. Ways toward an early diagnosis in Alzheimer's disease: the Alzheimer's Disease Neuroimaging Initiative (ADNI). Alzheimer Dement 2005;1:55–66.

[76] Hastie T, Tibshirani R, Friedman J. The Elements of Statistical Learning. New York: Springer; 2001.

[77] Davatzikos C. Why voxel-based morphometric analysis should be used with great caution when characterizing group differences. Neuroimage 2004;23:17–20.

[78] Klöppel S, Stonnington CM, Chu C, Draganski B, Scahill RI, Rohrer JD, et al. Automatic classification of MR scans in Alzheimer's disease. Brain 2008;131(3):681–9.

[79] Klöppel S, Stonnington CM, Barnes J, Chen F, Chu C, Good CD, et al. Accuracy of dementia diagnosis—a direct comparison between radiologists and a computerised method. Brain 2008;131:2969–74.