
DARTEL

John Ashburner

2008

Overview

• Motivation
  – Dimensionality
  – Inverse-consistency

• Principles

• Geeky stuff

• Example

• Validation

• Future directions

Motivation

• More precise inter-subject alignment
  – Improved fMRI data analysis
    • Better group analysis
    • More accurate localization
  – Improve computational anatomy
    • More easily interpreted VBM
    • Better parameterization of brain shapes
  – Other applications
    • Tissue segmentation
    • Structure labeling

Image Registration

• Figure out how to warp one image to match another

• Normally, all subjects’ scans are matched with a common template

Current SPM approach

• Only about 1000 parameters.
  – Unable to model detailed deformations

A one-to-one mapping

• Many models simply add a smooth displacement to an identity transform
  – One-to-one mapping not enforced

• Inverses approximately obtained by subtracting the displacement
  – Not a real inverse

Small deformation approximation

Overview

• Motivation

• Principles

• Optimisation

• Group-wise Registration

• Validation

• Future directions

Principles

Diffeomorphic Anatomical Registration Through Exponentiated Lie algebra

Deformations parameterized by a single flow field, which is considered to be constant in time.

DARTEL

• Parameterising the deformation

• φ(0)(x) = x

• φ(1)(x) = x + ∫ u(φ(t)(x)) dt, integrated over t = 0 to 1

• u is a flow field to be estimated

Euler integration

• The differential equation is
  dφ(x)/dt = u(φ(t)(x))

• By Euler integration
  φ(t+h) = φ(t) + h u(φ(t))

• Equivalent to
  φ(t+h) = (x + hu) o φ(t)
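As a concrete illustration, here is a minimal 1-D sketch in Python/NumPy of this Euler integration, evaluating u at the current positions by linear interpolation. The grid and velocity field are toy assumptions, not SPM/DARTEL code.

```python
# Toy 1-D Euler integration of dphi/dt = u(phi), starting from phi(0) = x.
import numpy as np

x = np.linspace(0.0, 1.0, 65)            # sample positions
u = 0.05 * np.sin(2 * np.pi * x)         # toy flow field, constant in time

def euler_integrate(u, x, n_steps=8):
    """phi(t+h) = phi(t) + h * u(phi(t)), starting from phi(0) = x."""
    h = 1.0 / n_steps
    phi = x.copy()
    for _ in range(n_steps):
        phi = phi + h * np.interp(phi, x, u)   # evaluate u at current positions
    return phi

phi1 = euler_integrate(u, x, n_steps=8)  # approximation to phi^(1)(x)
print(phi1[:5])
```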

Flow Field

For (e.g.) 8 time steps

Simple integration
• φ(1/8) = x + u/8
• φ(2/8) = φ(1/8) o φ(1/8)
• φ(3/8) = φ(1/8) o φ(2/8)
• φ(4/8) = φ(1/8) o φ(3/8)
• φ(5/8) = φ(1/8) o φ(4/8)
• φ(6/8) = φ(1/8) o φ(5/8)
• φ(7/8) = φ(1/8) o φ(6/8)
• φ(8/8) = φ(1/8) o φ(7/8)

7 compositions

Scaling and squaring
• φ(1/8) = x + u/8
• φ(2/8) = φ(1/8) o φ(1/8)
• φ(4/8) = φ(2/8) o φ(2/8)
• φ(8/8) = φ(4/8) o φ(4/8)

3 compositions

• A similar procedure is used for the inverse, starting with φ(-1/8) = x - u/8
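The following Python/NumPy sketch, using the same toy 1-D flow field as before (not SPM code), contrasts the two schemes and checks the inverse obtained by starting from x - u/8.

```python
# Toy 1-D comparison of simple integration (7 compositions)
# with scaling and squaring (3 compositions).
import numpy as np

x = np.linspace(0.0, 1.0, 65)
u = 0.05 * np.sin(2 * np.pi * x)              # toy flow field

def compose(phi_a, phi_b, x):
    """(phi_a o phi_b)(x): evaluate phi_a at phi_b(x) by linear interpolation."""
    return np.interp(phi_b, x, phi_a)

phi_8th = x + u / 8                           # phi^(1/8)

# Simple integration: phi^(k/8) = phi^(1/8) o phi^((k-1)/8)
phi_simple = phi_8th.copy()
for _ in range(7):                            # 7 compositions
    phi_simple = compose(phi_8th, phi_simple, x)

# Scaling and squaring: repeatedly compose the map with itself
phi_ss = phi_8th.copy()
for _ in range(3):                            # 3 compositions
    phi_ss = compose(phi_ss, phi_ss, x)

print(np.max(np.abs(phi_simple - phi_ss)))    # tiny (interpolation error only)

# Inverse: same squaring, but starting from phi^(-1/8) = x - u/8
phi_inv = x - u / 8
for _ in range(3):
    phi_inv = compose(phi_inv, phi_inv, x)
print(np.max(np.abs(compose(phi_inv, phi_ss, x) - x)))   # approximately identity
```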

Scaling and squaring example

DARTEL

Jacobian determinants remain positive
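One way to check this property numerically is to evaluate the determinant of the Jacobian at every voxel. The sketch below does this for an assumed toy 2-D deformation (Python/NumPy, not a DARTEL output).

```python
# Toy check that Jacobian determinants of a deformation stay positive.
import numpy as np

n = 64
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')

# identity transform plus a smooth, made-up displacement
phi_x = x + 2.0 * np.sin(2 * np.pi * y / n)
phi_y = y + 2.0 * np.sin(2 * np.pi * x / n)

# partial derivatives of each component (axis 0 = y, axis 1 = x)
dphix_dy, dphix_dx = np.gradient(phi_x)
dphiy_dy, dphiy_dx = np.gradient(phi_y)

# determinant of the 2x2 Jacobian at every voxel
jac_det = dphix_dx * dphiy_dy - dphix_dy * dphiy_dx
print(jac_det.min())          # > 0 means the mapping is locally one-to-one
```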

Overview

• Motivation

• Principles

• Optimisation
  – Multi-grid

• Group-wise Registration

• Validation

• Future directions

Registration objective function

• Simultaneously minimize the sum of:
  – Likelihood component
    • From the sum of squared differences
    • ½ ∑i (g(xi) – f(φ(1)(xi)))²
    • φ(1) is parameterized by u
  – Prior component
    • A measure of deformation roughness
    • ½ uTHu
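A minimal 1-D sketch of the two terms, assuming toy images, a made-up regularisation weight, and a simple membrane-energy-like H built from finite differences (Python/NumPy, not SPM code):

```python
# Toy 1-D evaluation of the objective:
#   0.5 * sum_i (g(x_i) - f(phi(x_i)))^2  +  0.5 * u' H u
import numpy as np

x = np.linspace(0.0, 1.0, 65)
g = np.exp(-(x - 0.50) ** 2 / 0.02)       # toy template
f = np.exp(-(x - 0.55) ** 2 / 0.02)       # toy individual image
u = 0.01 * np.sin(2 * np.pi * x)          # candidate flow field
lam = 100.0                               # assumed regularisation weight

# phi^(1) from u by scaling and squaring (3 compositions)
phi = x + u / 8
for _ in range(3):
    phi = np.interp(phi, x, phi)

likelihood = 0.5 * np.sum((g - np.interp(phi, x, f)) ** 2)

# membrane-energy-like prior: H = lam * D'D with D a forward-difference operator
D = (np.eye(len(x), k=1) - np.eye(len(x)))[:-1]
H = lam * D.T @ D
prior = 0.5 * u @ H @ u

print(likelihood, prior, likelihood + prior)
```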

Regularization model

• DARTEL has three different models for H
  – Membrane energy
  – Linear elasticity
  – Bending energy

• H is very sparse

An example H for 2D registration of 6x6 images (linear elasticity)

Regularization models

Optimisation

• Uses Levenberg-Marquardt
  – Requires a matrix solution to a very large set of equations at each iteration

    u(k+1) = u(k) – (H+A)-1 b

  – b is the vector of first derivatives of the objective function
  – A is a sparse matrix of second derivatives
  – Computed efficiently, making use of scaling and squaring
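A sketch of that update for a small toy problem, with H and A assumed to be a 1-D membrane-energy stencil and a diagonal matrix respectively (Python/SciPy, not SPM code):

```python
# Toy Levenberg-Marquardt style update u <- u - (H + A)^-1 b with a sparse solver.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
H = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')  # toy roughness penalty
A = sp.diags(np.full(n, 0.5), 0, format='csc')   # stand-in second-derivative term
b = np.random.randn(n)                           # stand-in first derivatives

u = np.zeros(n)
u_new = u - spla.spsolve((H + A).tocsc(), b)
print(u_new[:5])
```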

Relaxation

• To solve Mx = c, split M into E and F, where
  – E is easy to invert
  – F is more difficult

• Sometimes: x(k+1) = E-1(c – F x(k))
• Otherwise: x(k+1) = x(k) + (E+sI)-1(c – M x(k))

• Gauss-Seidel when done in place
• Jacobi's method if not

• Fits high frequencies quickly, but low frequencies slowly

H+A = E+F
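A dense toy sketch of this splitting (Python/NumPy, not the actual multi-grid relaxation in SPM), using the lower triangle of M as the easy-to-invert part E, which corresponds to Gauss-Seidel:

```python
# Toy relaxation for M x = c with the split M = E + F.
# E = lower triangle of M (Gauss-Seidel); Jacobi would use E = diag(M) instead.
import numpy as np

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n)) * 0.1
M = M @ M.T + n * np.eye(n)                 # symmetric positive definite toy matrix
c = rng.standard_normal(n)

E = np.tril(M)                              # easy to invert (triangular solve)
F = M - E                                   # the harder remainder

x = np.zeros(n)
for k in range(20):
    x = np.linalg.solve(E, c - F @ x)       # x_{k+1} = E^{-1}(c - F x_k)
    print(k, np.linalg.norm(M @ x - c))     # residual drops quickly at first
```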

Full Multi-Grid

(Figure: the full multi-grid scheme cycles between the highest and lowest resolution grids.)

Overview

• Motivation

• Principles

• Optimisation

• Group-wise Registration
  – Simultaneous registration of GM & WM
  – Tissue probability map creation

• Validation

• Future directions

Generative Models for Images

• Treat the template as a deformable probability density.
  – Consider the intensity distribution at each voxel of lots of aligned images.
    • Each point in the template represents a probability distribution of intensities.
  – Spatially deform this intensity distribution to the individual brain images.
    • Likelihood of the deformations given by the template (assuming spatial independence of voxels).

Generative models of anatomy

• Work with tissue class images.

• Brains of differing shapes and sizes.
• Need strategies to encode such variability.

Automatically segmented grey matter images.

Simultaneous registration of GM to GM and WM to WM

(Figure: grey matter and white matter images of the template and of Subjects 1 to 4, registered simultaneously.)

Template Creation

• Template is an average shaped brain.
  – Less bias in subsequent analysis.

• Iteratively created mean using the DARTEL algorithm.
  – Generative model of the data.
  – Multinomial noise model.

Grey matter average of 471 subjects

White matter average of 471 subjects

(Figure: generative model relating the template μ to individual tissue images t1 to t5 through deformations ϕ1 to ϕ5.)

Average Shaped Template

• For computational anatomy (CA), work in the tangent space of the manifold, using linear approximations.
  – Average-shaped templates give less bias, as the tangent space at this point is a closer approximation.

• For spatial normalisation of fMRI, warping to a more average shaped template is less likely to cause signal to disappear.
  – If a structure is very small in the template, then it will be very small in the spatially normalised individuals.

• Smaller deformations are needed to match with an average-shaped template.
  – Smaller errors.

Average shaped templates

(Figure: a linear average, which is not on the Riemannian manifold, compared with an average computed on the Riemannian manifold.)

(Figure: the template evolving from the initial average, through a few iterations, to the final template.)

Iteratively generated from 471 subjects

Began with rigidly aligned tissue probability maps

Used an inverse consistent formulation

Grey matter average of 452 subjects – affine

Grey matter average of 471 subjects

Multinomial Model

• Current DARTEL model is multinomial for matching tissue class images.

log p(t|μ,ϕ) = Σj Σk tjk log(μk(ϕj))

t – individual GM, WM and background

μ – template GM, WM and background

ϕ – deformation

• A general purpose template should not have regions where log(μ) is –Inf.
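A toy sketch of this matching term, assuming the template has already been warped and resampled to the individual's voxels (Python/NumPy; all arrays are random placeholders, not SPM data):

```python
# Toy evaluation of log p(t|mu,phi) = sum_j sum_k t_jk log(mu_k(phi_j)),
# with the warp already applied.
import numpy as np

rng = np.random.default_rng(0)
J, K = 1000, 3                               # voxels; GM, WM, background

t = rng.dirichlet(np.ones(K), size=J)        # individual tissue probabilities
mu = rng.dirichlet(np.ones(K), size=J)       # warped template probabilities

# mu must be strictly positive everywhere, otherwise log(mu) can reach -Inf
log_lik = np.sum(t * np.log(mu))
print(log_lik)
```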

Laplacian Smoothness Priors on template

2D: nicely scale invariant

3D: not quite scale invariant, but probably close enough

Smoothing by solving matrix equations using multi-grid

Template modelled as softmax of a Gaussian process

μk(x) = exp(ak(x))/(Σj exp(aj(x)))
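A small sketch of this softmax parameterisation (Python/NumPy; subtracting the maximum is a standard numerical-stability device added here, not something stated on the slide):

```python
# Softmax of template parameters a_k(x): mu_k(x) = exp(a_k) / sum_j exp(a_j).
# This keeps every mu_k strictly positive, so log(mu) never reaches -Inf.
import numpy as np

def softmax_template(a):
    """a has shape (voxels, classes); returns probabilities of the same shape."""
    a = a - a.max(axis=-1, keepdims=True)   # for numerical stability
    e = np.exp(a)
    return e / e.sum(axis=-1, keepdims=True)

a = np.random.randn(1000, 3)                # toy GM/WM/background parameters
mu = softmax_template(a)
print(mu.min() > 0, np.allclose(mu.sum(axis=-1), 1.0))
```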

Rather than compute mean images and convolve with a Gaussian, the smoothing is done by maximising a log-likelihood for a MAP solution.

Note that Jacobian transformations are required (cf “modulated VBM”) to properly account for expansion/contraction during warping.

Determining amount of regularisation

• Matrices too big for REML estimates.

• Used cross-validation.

• Smooth an image by different amounts, see how well it predicts other images:

log p(t|μ) = Σj Σk tjk log(μjk)

(Figure: predictive performance plotted for rigidly aligned and for nonlinearly registered data.)

ML and MAP templates from 6 subjects

(Figure: ML and MAP templates, and their logarithms, for nonlinearly registered and rigidly registered data.)

Overview

• Motivation

• Principles

• Optimisation

• Group-wise Registration

• Validation
  – Sex classification
  – Age regression

• Future directions

Validation

• There is no "ground truth"

• Looked at predictive accuracy
  – Can information encoded by the method make predictions?
    • Registration method blind to the predicted information
    • Could have used an overlap of fMRI results
  – Chose to see whether ages and sexes of subjects could be predicted from the deformations

• Comparison with small deformation model

Training and Classifying

(Figure: control training data and patient training data are used to train a classifier, which then assigns unlabelled subjects to the control or patient group.)

y = f(aTx + b)

Support Vector Classifier (SVC)

(Figure: a linear decision boundary determined by the support vectors.)

a is a weighted linear combination of the support vectors

Nonlinear SVC

Support-vector classification

• Guess sexes of 471 subjects from brain shapes
  – 207 Females / 264 Males

• Use a random sample of 400 for training.

• Test on the remaining 71.

• Repeat 50 times.
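The sketch below mirrors this repeated random-split protocol with scikit-learn's SVC. The features and labels are synthetic stand-ins; the real analysis used features derived from the deformations.

```python
# Repeated random-split protocol: train on 400 subjects, test on 71, 50 times.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((471, 200))          # stand-in feature vectors
y = rng.integers(0, 2, size=471)             # stand-in sex labels (0/1)

accuracies = []
for _ in range(50):
    perm = rng.permutation(471)
    train, test = perm[:400], perm[400:]
    clf = SVC(kernel='linear').fit(X[train], y[train])
    accuracies.append(clf.score(X[test], y[test]))

print(np.mean(accuracies), np.std(accuracies))
```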

Sex classification results

• Small deformation
  – Linear classifier
    • 87.0% correct
    • Kappa = 0.736
  – RBF classifier
    • 87.1% correct
    • Kappa = 0.737

• DARTEL
  – Linear classifier
    • 87.7% correct
    • Kappa = 0.749
  – RBF classifier
    • 87.6% correct
    • Kappa = 0.748

An unconvincing improvement

Regression

(Figure: example brain images labelled with subject ages of 23, 26, 30, 29, 18, 32 and 40 years.)

Relevance-vector regression

• A Bayesian method, related to SVMs
  – Developed by Mike Tipping

• Guess ages of 471 subjects from brain shapes.

• Use a random sample of 400 for training.

• Test on the remaining 71.

• Repeat 50 times.
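The same split protocol for regression can be sketched as below, reporting RMS error and correlation. Relevance-vector regression is not part of scikit-learn, so kernel ridge regression with an RBF kernel is used purely as a stand-in, and all data are synthetic.

```python
# Repeated 400/71 random splits for age regression with RMS error and correlation.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.standard_normal((471, 200))          # stand-in feature vectors
age = rng.uniform(18, 80, size=471)          # stand-in ages

rms, corr = [], []
for _ in range(50):
    perm = rng.permutation(471)
    train, test = perm[:400], perm[400:]
    model = KernelRidge(kernel='rbf', alpha=1.0).fit(X[train], age[train])
    pred = model.predict(X[test])
    rms.append(np.sqrt(np.mean((pred - age[test]) ** 2)))
    corr.append(np.corrcoef(pred, age[test])[0, 1])

print(np.mean(rms), np.mean(corr))
```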

Age regression results

• Small deformation
  – Linear regression
    • RMS error = 7.55
    • Correlation = 0.836
  – RBF regression
    • RMS error = 6.68
    • Correlation = 0.856

• DARTEL
  – Linear regression
    • RMS error = 7.90
    • Correlation = 0.813
  – RBF regression
    • RMS error = 6.50
    • Correlation = 0.867

An unconvincing improvement (slightly worse for linear regression)

Overview

• Motivation

• Principles

• Optimisation

• Group-wise Registration

• Validation

• Future directions

Future directions

• Compare with variable velocity methods
  – Beg's LDDMM algorithm.

• Classification/regression from “initial momentum”.

• Combine with tissue classification model.

• Develop a proper EM framework for generating tissue probability maps.

(Figure: a flow field u and its "initial momentum" Hu, in the variable velocity framework as used in LDDMM.)

Thank you
