
London Mathematical Society ISSN 1461–1570

DURGA: A HEURISTICALLY-OPTIMIZED DATA COLLECTION STRATEGY

FOR VOLUMETRIC MAGNETIC RESONANCE IMAGING

CHRISTOPHER KUMAR ANAND, ANDREW THOMAS CURTIS and RAKSHIT KUMAR

Abstract

We present a heuristic design method for rapid volumetric magnetic resonance imaging data acquisition trajectories using a series of second order cone optimization subproblems. Other researchers have considered non-raster data collection trajectories and under-sampled data patterns. This work demonstrates that much higher rates of under-sampling are possible with an asymmetric set of trajectories, with very little loss in resolution, but the addition of noise-like artifacts. The proposed data collection trajectory, Durga, further minimizes collection time by incorporating short, un-refocussed excitation pulses, resulting in above 98 percent collection efficiency for balanced steady state free precession imaging. The optimization subproblems are novel, in that they incorporate all requirements, including data collection (coverage), physicality (device limits) and signal generation (zeroth and higher moment properties) in a single convex problem, which allows the resulting trajectories to exhibit a higher collection efficiency than any existing trajectory design.

1. Introduction

Reducing imaging time in magnetic resonance imaging is driven by the desire to

1. reduce patient discomfort by reducing time in the magnet and shortening or eliminating breath-hold times,

2. capture dynamic processes, such as the beating of the heart and peristaltic motion in the abdomen,

3. reduce cost for the health-care system by increasing throughput, and decreasing delays caused by motion-artifacted images.

Rapid imaging has been an active research topic for the last twenty years, beginning with [14]. Research in this area can be roughly divided into the discovery and exploitation of

1. efficient data collection strategies: echo planar [13] and spiral imaging trajectories [11], [3],

Received 8 August 2006.
2000 Mathematics Subject Classification 92C55, 90C90, 49N99, 68U10, 15A29.
© ????, Christopher Kumar Anand, Andrew Thomas Curtis and Rakshit Kumar

LMS J. Comput. Math. ?? (????) 1–25


Optimized data collection for 3d MRI

2. efficient signal generation techniques: multiple echoes per excitation [10], steady-state imaging [21],

3. relationships between signals from geometrically different antennae: SMASH [27], SENSE [23], collectively called parallel imaging.

This paper is a contribution to the search for efficient collection strategies for volumetric imaging, specifically (i) a reformulation of the accepted design criteria, (ii) a series of convex optimization problems incorporating all of the criteria as proxy constraints, and (iii) a specific collection strategy for steady-state imaging designed according to these criteria.

Parallel imaging with non-regular sampling is very expensive computationally, and still the subject of active research [25]. One motivation for the present research was to provide non-regular data collection strategies which would result in better-conditioned inverse problems and faster iterative methods.

In section 2 we describe the basic image reconstruction problem in magnetic resonance imaging, to establish notation and motivate the fitness criteria we present in the next section. We derive design constraints from these informal criteria in section 3, which form the basis for the convex second order cone optimization problems. We briefly describe the details of our implementation in section 5, and interpret the results, including a comparison of the results against previously-published work in section 6. In the final two sections, we list some open questions and possible new approaches, and summarize the results.

1.1. Related work

In her MSc thesis [24], Ren presents a method of optimizing planar data collection by incorporating velocity insensitivity constraints into Teardrop [2] designs. Velocity insensitive data collection results in (approximately) the same signal, independent of the tissue velocity. Nayak et al. [19] incorporate such constraints into a rewinder following spiral data collection. Teardrop is a natural evolution of spiral data collection, which is an evolution of EPI, the first fast collection method. Spiral strategies, named for the shape of the sampling trajectory, work because, in the plane, rotations about the origin are a group with one generator, which allows such trajectories to be efficiently packed. Others have tried to generalize such strategies to three dimensions, but there can be no equally nice analogue to the two dimensional case because now the group of rotations does not have a single generator.

Stacked spirals [11], [28], cones [6], and similar strategies are designed to maintain a maximum separation between local pieces of the trajectory, based on the dictates of the Nyquist sampling criterion. Computationally, lack of a nice symmetry would result in a much larger optimization problem with quadratic growth in constraints and exponential growth in solution time.

In this paper, we present a strategy for volumetric imaging, which, instead of lamenting this lack of simplifying symmetry, looks instead to the 'blessing' of dimensionality: random curves in R³ have near zero probability of intersection. Instead of struggling with optimization problems with unmanageable numbers of constraints, we pare the constraints down to a minimum, and rely on the fact that unless constrained to intersect, they will not. The resulting randomness means that Nyquist's criterion does not apply, which allows us to achieve significant under-sampling. We are not the first to propose using randomness to combat aliasing, see for example [18].

2. Magnetic resonance imaging

In magnetic resonance imaging, we measure radio-frequency magnetic fields created by the resonances of one or more nuclei in the object, usually hydrogen (mostly water) in people. Measurable resonance occurs because the object is placed in a large homogeneous field, and excited by the momentary application of oscillating transverse fields. For a very readable and complete account of how we create the signals, and the complications which arise, see [8]. MR signals are collected using devices a lot like radio antennae, commonly called coils. The measurements we make with these coils are not localized, but contain contributions from every nucleus in the object. Ignoring nonuniformity in the coil, and signal propagation through the sample, the signal is the sum of the magnetic fields produced by each nucleus. By working in a rotating frame of reference close to the resonant frequency, we can encode both relative frequencies and phases into complex-valued signals.

Geometric encoding is achieved by inducing transient linear variations (referred to as gradients) in strength on the homogeneous field. Linear variations in field produce linear variations in resonant frequency, which over time create linear phase variations. If ρ : R³ → C is the original transverse magnetic field, the new field will be exp(i〈x, k〉)ρ(x), where x ∈ R³ and k ∈ R³*, the element of the dual space corresponding to the accumulated phase. It follows that the measured signal

s(t) = ∫_{R³} e^{i〈x, k(t)〉} ρ(x) dx,

is a sampling of the Fourier Transform of the object's original magnetization. For any given trajectory k(t), we have a linear transformation Map(R³, C) → Map(R, C), and if it is invertible, we can reconstruct the original magnetization from the measurements.
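As a quick illustration (not from the paper), the signal model above can be discretized as a direct sum over point sources; the function name and array shapes below are our own.

```python
import numpy as np

# Sketch of the signal model s(t) = integral of exp(i<x, k(t)>) rho(x) dx,
# discretized as a sum over point sources. Names and shapes are
# illustrative, not taken from the paper's software.
def signal(positions, rho, k_samples):
    """positions: (P, 3) source locations; rho: (P,) complex magnetizations;
    k_samples: (T, 3) k-space locations along a trajectory."""
    phases = np.exp(1j * positions @ k_samples.T)  # (P, T) phase factors
    return (rho[:, None] * phases).sum(axis=0)     # (T,) measured samples
```

For a single point source, |s(t)| is constant and equal to |ρ|, reflecting that the gradients encode phase only.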

Early MR image reconstruction was constrained by the cost of computation, and focussed on making data better fit existing hardware and software Fourier transforms. Data collection was forced to be regular, and sampled on rectangular grids (first in two and later in three dimensions). Even the first image reconstructions based on non-trivial inverse problems, e.g. phase conjugate symmetry [15], assumed regular rectangular data sampling, as did the first parallel imaging schemes. Regular sampling is an approximation of an object in an infinite-dimensional function space by a vector in a finite-dimensional vector space. All approximations introduce errors. If sufficient breadth of sampling in k-space is not made, the reconstructed image, being the sum of only low-spatial-frequency Fourier basis functions, will lack fine detail. (How much detail is required depends on the application, but there can never be too much.) If the samples are too widely separated, the smallest common period of all the basis functions used to reconstruct the image will be too small, resulting in aliasing: the appearance of signal where it does not belong.

For non-regular sampling, both problems still occur, but the inversely-linear relationship between defects in sampling, and defects in the image, does not hold. In practice, hardware sampling rates have been higher than the sampling rates dictated by object size, resulting in under-sampling gaps only appearing between trajectories. For (interleaved) spiral imaging, under-sampling as a result of excessive pitch of the spiral results in spiral artifacts in the image beginning at a radius which grows inversely with the gap between successive spiral arcs. These artifacts are also aliased signal, but their source is not recognizable, unlike the aliasing observed with regular rectangular sampling.

This simple picture is inexact. Design tolerances, manufacturing defects, and patient-dependent effects combine to produce

1. distortions of the designed sampling trajectory k(t),

2. deviations in the resonance frequency,

3. signal variation over time.

In fact, imaging can be considered a four-, five- or six-dimensional problem, if one takes into account time of acquisition, velocity of the tissue (blood flows), and chemical resonance offset (between water, fat and other organic molecules). Practical implementations account for these factors during image reconstruction. A common approach to calibration and correction is to measure some data several times (most commonly, the lowest spatial frequencies, called the centre of k-space). Tissue velocity can be measured in this way, but also compensated for by arranging for the first and higher moments of the waveform k(t) to be zero [7], [20]. We take both of these approaches.

3. Fitness

Fitness of sampling trajectories is universally judged first with reference to the point spread function, ρ_psf, which is the image reconstructed from the data generated by a single delta function in image space. Other measures incorporating properties of tissues, the excitation pulse, etc. are beyond the scope of this paper, and it is safe to say that problems in the psf will manifest themselves in any other measure of fitness. Ideally, the psf would itself be a delta function, but this is impossible given finitely many samples. The two most important features are the height and width of the central peak, and relative height and structure in signals outside the peak. Since the final image will be, to a first approximation, the convolution of the true image with the psf, broadness of the central peak manifests as blurriness in the image, and nonzero values outside the peak as unwanted signals. Structure in the psf outside the peak will be repeated in the image, and since the human visual system is attuned to structure this is distracting in itself. Structure is associated with larger extreme values outside the peak, which could obscure fainter features in the image. Resolution can be defined to be the width of the central peak when it reaches half its peak height (called FWHM, full-width at half max), although this is open to interpretation for non-monotone and asymmetrical central peaks.

Since using the psf directly to define an objective or constraints would be very expensive, we use the following heuristics: the union of the loci of the trajectories should visit all parts of k-space within a ball {k : |k| < 1/resolution}, and should not display visible symmetries, because these will manifest as symmetries in the spurious signal, which is usually observed in structured aliasing which contains larger extreme values. Rather than trying to minimize the maximum gap between trajectories, which corresponds to the usual interpretation of the Nyquist Theorem (see [24]), we will seek to create an irregular distribution of sampling voids while avoiding very large gaps.

Specifically for the application to multi-coil reconstructions, we seek psfs in which the phase of the unwanted signal is uniformly distributed in a pseudo-random manner. This will result in the maximum cancellation of aliasing noise from different receivers. We believe that the resulting trajectory design will lead to faster convergence rates for iterative SENSE reconstructions, but an analysis of this is beyond the scope of this paper. Even for single-coil reconstructions, such psfs will cause aliasing noise cancellation for noise associated with large-scale features, although it will not help for small, bright features, such as blood vessels enhanced by contrast agents.

Our initial design using this methodology targets steady-state imaging, so the total time for each trajectory is short. For comparison with the most efficient published trajectory, we use a repetition time, TR = 5.6 ms.

To be able to calibrate and correct for machine and patient-dependent effects, we will constrain most of the trajectories to pass through k = 0, the point corresponding to no linear phase modulation across the imaging volume.

4. Convex subproblems

To reconstruct a volume, one or more trajectories through k-space are sampled, the data is resampled onto a rectangular, regular lattice, which is then transformed using a Fast Fourier Transform. For each individual trajectory, we now develop several parametrized convex, second order cone optimization subproblems. The subproblems can then be solved multiple times with different parameters, and in different orders. The composition strategy for subproblems is entirely heuristic, based on the above analysis of properties of trajectories which should be expected to lead to fit psfs. It is based on refinement on two levels: the heuristics have themselves evolved based on numerous tests, and the choice of parameters and order for the subproblems is tuned to complement properties of previously designed trajectories.

4.1. Variables

Changes in linear phase variation are induced by linear field variations, which result from currents in electromagnets (so called gradient coils) designed for this purpose. The currents are in turn driven by the gradient amplifiers, which are controlled by digital electronics. So the fundamental physical variables are the piecewise-constant control functions, which are parametrized by lists of heights and widths. The gradient strengths correspond to the differences in position in k-space. Knowing the positions in k-space is therefore equivalent to knowing the gradients, and more convenient when formulating the constraints.
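The equivalence between gradients and positions can be sketched in a few lines (our own illustration; physical scaling constants omitted): positions are cumulative sums of gradient steps, and gradients are first differences of positions.

```python
import numpy as np

# Positions are cumulative sums of gradient steps (scaling omitted), and
# gradients are recovered as first differences, so either parametrization
# carries the same information. Illustrative only.
def k_from_gradients(g, k0):
    return k0 + np.cumsum(g, axis=0)

def gradients_from_k(k, k0):
    return np.diff(np.vstack([k0, k]), axis=0)
```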

For a single subproblem, m, the variables are k_{i,m}, and the values of k_{i,m′}, m′ ≠ m, are either fixed or unavailable. To handle soft (penalized) constraints we introduce a variable for the violation of soft constraints, τ. For reference, we collect all the parameters and variables in Table 1.

4.2. Universal Constraints

We use both soft and hard constraints. Hard constraints must be satisfied by feasible points, whereas the violation of soft constraints is penalized, but still allowed. All of the hard constraints are common to all subproblems. We develop constraints for readout trajectories which are rotatable. These trajectories can be used to acquire data for volumes in any orientation. Less strict constraints could be formulated if the relative orientation of the rectangular imaging volume and the principal axes of the gradient coils is fixed, and the constraints on the individual gradient amplifiers are independent. In this paper we will only consider the freely rotatable case.

δt > 0: the size of the time-step,
M: number of trajectories,
N_m: number of time steps of trajectory m ∈ {1, ..., M},
G_max: peak gradient strength (see (1)),
S_max: peak slew rate (see (2)),
R: transform size (see (3)),
κ: half excitation duration (see (5)),
λ_0: penalty scale for zero targets (see (6)),
λ_goal: penalty scale for boundary sphere targets (see (6)),
λ_nulling: penalty scale for the first moment (see (4)),
k_{i,m} ∈ R³, ∀m ∈ {1, ..., M}, ∀i ∈ {1, ..., N_m}: discrete k-space positions,
τ > 0: penalty variable.

Table 1: List of all parameters and variables.

Peak constraints: Gradient amplifiers have peak current limits which restrict the maximum absolute value of the gradient waveform amplitude. These limits can be expressed as inequality constraints on the first-order differences of the discrete waveform sequence as

||k_{i+1,m} − k_{i,m}||_2 ≤ G_max, i ∈ {1, ..., N_m − 1}, (1)

where G_max is the maximum allowable gradient amplitude.

Slew constraints: Gradient amplifiers also have limits on slew rate, or rate of change of amplitude. This can be approximated as an inequality constraint on the second-order differences between adjacent discrete points as

||k_{i+2,m} − 2k_{i+1,m} + k_{i,m}||_2 ≤ S_max, i ∈ {1, ..., N_m − 2}. (2)

Transform size constraints: To ensure that all data is usable without increasing the computational complexity of the reconstruction algorithm, we must constrain trajectories to visit only the part of k-space that will be used in a Fast Fourier transform. For rotatable trajectories, this requires that the trajectories be contained in a ball of radius corresponding to the resolution of the reconstructed volume. So if reconstructed voxels will be 1mm × 1mm × 1mm, trajectories must remain inside the ball {k : |k| < 1 mm⁻¹}:

||k_{j,m}||_2 ≤ R, j ∈ {1, ..., N_m}, (3)

where the radius, R, is the resolution in m⁻¹. In practice, we want a slightly smaller ball, so that even corrected trajectories will be inside the larger ball. This is easily done simply by adjusting R.
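A discrete trajectory can be checked against the hard constraints (1)–(3) directly; the following sketch (our own, with G_max, S_max and R expressed in the same discrete units as the first and second differences of k) mirrors the three norm bounds.

```python
import numpy as np

def satisfies_hard_constraints(k, Gmax, Smax, R):
    """k: (N, 3) discrete k-space positions of one trajectory."""
    grad = np.diff(k, axis=0)        # k_{i+1} - k_i, bounded by (1)
    slew = np.diff(k, n=2, axis=0)   # k_{i+2} - 2 k_{i+1} + k_i, bounded by (2)
    return bool(np.all(np.linalg.norm(grad, axis=1) <= Gmax)
                and np.all(np.linalg.norm(slew, axis=1) <= Smax)
                and np.all(np.linalg.norm(k, axis=1) <= R))  # ball (3)
```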

First moment nulling: To make the readout gradient motion-insensitive, we zero the first moment,

|| Σ_{i=1}^{N_m−1} i (k_{i+1,m} − k_{i,m}) || < λ_nulling τ. (4)

Involving all the variables, this is a global constraint. This has two consequences: (i) it cannot be effectively optimized by local or greedy algorithms which consider one point at a time, and (ii) it corresponds to dense columns/rows in the concrete problem definition, and needs extra consideration if using a solver which can take advantage of sparsity. Motion insensitivity means that the phase of the magnetization we are measuring before and after the readout will be identical for tissues which are moving at a constant velocity. We can easily introduce constraints to null higher moments, which will prevent pulsatile flow from modifying the magnetization. Sensitivity to motion affects all MR experiments, and is in fact used to quantify flow in phase contrast angiography, but it is generally unwelcome otherwise. In the type of balanced steady-state imaging technique we are targeting, since we do not dephase (destroy) the magnetization from one readout to the next, but keep modifying it with new RF pulses, errors will build up over time, which means that motion artifacts are more likely to be a problem, making such compensation a necessity.
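The first-moment expression in (4) is a weighted sum of the discrete steps; a minimal sketch (ours):

```python
import numpy as np

# First moment of a discrete trajectory, as in constraint (4):
# sum over i of i * (k_{i+1} - k_i). A motion-insensitive readout keeps
# the norm of this vector (near) zero.
def first_moment(k):
    steps = np.diff(k, axis=0)         # (N-1, 3)
    weights = np.arange(1, len(k))     # i = 1, ..., N-1
    return (weights[:, None] * steps).sum(axis=0)
```

Constraint (4) then reads `np.linalg.norm(first_moment(k)) < lam_nulling * tau`.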

Endpoints: Trajectories must begin and end somewhere. Balanced steady-state imaging must begin and end at the centre of k-space. Although data collection need not begin and end there, anything else is less efficient. Conventional pulse sequences also contain gradients for the purpose of defining a slice profile. The details of this are beyond the scope of this paper, and interested readers should consult a basic textbook such as [8]. We will constrain the endpoints of our trajectories by

k_{1,m} = (0, 0, −κ),
k_{2,m} = (0, 0, κ),
k_{N_m−2,m} = (0, 0, −κ),
k_{N_m−1,m} = (0, 0, κ),
∀m ∈ {1, ..., M}, (5)

where κ > 0 is a parameter determined by the length of the excitation. For the present application, we use κ = G_max/2 as it simplifies testing of the algorithm.

Remark: The endpoints are chosen in conjunction with the design of an excitation radio-frequency pulse. This is another optimization problem outside the scope of this paper. Our choice of endpoints is consistent with a so-called hard pulse, which excites the entire volume.

4.3. Second order cone subproblem, I

To these constraints, we add additional constraints, depending on the type of trajectory we are designing. For the first type, we add constraints to ensure coverage of all parts of k-space, and repeated traversal of k = 0. The idea is that by forcing the trajectory to visit points distributed on the boundary, and visit the origin in between, other parts of k-space will be covered too.

We define a distribution of points on the boundary sphere of radius R. Together, these points cover the sphere without large gaps. Each of these points c_{j,m} will be the goal of a designated point, i_{j,m}, on a designated trajectory, m. The number of goals per trajectory depends on the duration of the trajectories. All results in this paper refer to five goals per trajectory, with three goals on the boundary sphere, and two (the second and fifth) at the origin. Goals are encoded into constraints

||k_{i_{m,j},m} − c_{j,m}||_2 < λ_0 τ if c_{j,m} = 0, λ_goal τ otherwise, ∀(j, m). (6)

We use different penalty parameters for the points we are pulling to the boundary sphere and the points we are pulling to the origin, because we require points to go very close to the origin if they are to be useful for calibration, whereas points on the boundary sphere are sparse, and there is little to be gained by reaching them exactly. The best method of choosing goals is an open question. We use a simple pseudo-random distribution of vertices from a tesselation of the boundary, but the rest of the algorithm is independent of this choice. We did not use a more random distribution, because we are most interested in a small number of trajectories.
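The paper selects goals pseudo-randomly from tessellation vertices; as a simplified stand-in (our assumption, not the paper's construction), one can scale random directions to the boundary radius:

```python
import numpy as np

# Simplified stand-in for choosing boundary-sphere goals: uniform random
# directions scaled to radius R. The paper instead picks vertices of a
# tessellation of the sphere pseudo-randomly; this sketch only captures
# the idea of irregular points on the goal sphere.
def boundary_goals(n_goals, R, rng):
    v = rng.standard_normal((n_goals, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # unit directions
    return R * v                                   # points on the sphere
```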

4.4. Second order cone subproblem, II

We observed that solutions of these problems were of variable quality: some trajectories passed close to their goals, and some did not. Since our heuristic objective is to pass through k = 0 often, not on every trajectory, we introduced a second stage, in which we chose some of the trajectories with the worst (highest) objective value, and re-optimized them with a lower value for λ_0. This has the effect of requiring these trajectories to pass near k = 0 but not through it, as visible in the right-most trajectory in figure 1.

4.5. Second order cone subproblem, III

Although we took pains to eliminate symmetries in the design of the trajectories, the choice of initial and final points along the z axis creates a preferred direction, and we observed lower coverage along the z = 0 plane, which is apparent in the psf. To correct this, we augment the trajectories with trajectories designed to fill holes in the sampling pattern near the z = 0 plane.

To find the holes, we first find the density of the stage I and II trajectories. To calculate the density, we interpolate the control points k_{i,m} to a large number of equally-spaced samples. To each sample in k-space, we associate a delta function, and we take their sum. We then convolve the sum of these delta functions with a positive, continuous function, and resample the resulting continuous function on a regular, rectangular lattice. In our experiments, we used 32³ lattices to save time by reducing the memory footprint of the resampling operation. This is equivalent to resampling the data corresponding to the image with a single delta function at the origin in image space (the point where the gradient coils have no effect). We use the resampling kernel defined in [1], because it is convenient to reuse the same code used to reconstruct images, but the exact shape is not important.

Figure 1: Three individual trajectories, oriented so positive z is vertical. The two left trajectories are stage I trajectories, with the crossing of two points at k = 0 clearly visible. The right trajectory was optimized with relaxed k = 0 penalties, and does not go through k = 0, and has no other crossings.

Figure 2: Two views of the extra ten trajectories which thread through points of low density in a thickened annulus. On the left, one sees that the trajectories move initially to different sectors but follow the annulus around in the same direction. On the right, from the top view, one sees that the density is concentrated in an annulus, and that the trajectories are quite round from this point of view.

To find 'holes' near z = 0, and to put them into suitable trajectories, we restrict our attention to points in the thickened annulus:

E = { (x, y, z) : 6 ≤ √(x² + y²) ≤ 12, |z| ≤ 5 }.
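The density estimate described above can be sketched with a histogram in place of the kernel convolution of [1] (the exact kernel shape is, as noted, unimportant); the names and the linear interpolation are our own simplifications.

```python
import numpy as np

# Simplified density estimate on a 32^3 lattice: linearly interpolate each
# trajectory's control points to many samples, then bin the samples.
# The paper convolves with the resampling kernel of [1]; a plain histogram
# is a cruder stand-in with the same intent.
def sample_density(trajectories, R, grid=32, upsample=10):
    pts = []
    for k in trajectories:                       # each k is (N, 3)
        t = np.linspace(0.0, 1.0, len(k))
        tt = np.linspace(0.0, 1.0, upsample * len(k))
        pts.append(np.column_stack(
            [np.interp(tt, t, k[:, d]) for d in range(3)]))
    edges = [np.linspace(-R, R, grid + 1)] * 3
    density, _ = np.histogramdd(np.vstack(pts), bins=edges)
    return density                               # (grid, grid, grid) counts
```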

We divide these points into six sectors, E_1, ..., E_6,

E_i = { (x, y, z) ∈ E : −π + 2π(i − 1)/6 < atan2(y, x) ≤ −π + 2πi/6 },

and try to construct the longest trajectory (m) we can which circumnavigates the annulus visiting the point with the lowest density in each sector using the following greedy algorithm:

1. Start with the optimization problem given by the universal constraints. (Note that it is really a feasibility problem, not an optimization problem, at this stage.)

2. Pick the point e ∈ E_j with the lowest density, and the control point k_{i,m} = k_{3,m}.

3. Augment the optimization problem by adding the constraint

||k′ − e||_2 < τ.

If the solution has a value of τ below a fixed threshold, keep this constraint in the problem and pick another pair in step 2, with j and i increased by one. If not, and i = N_m − 3, then remove this constraint from the problem, and use the solution to the smaller problem as the trajectory. Otherwise, return to step 2 with the same e but i increased by one.

To eliminate uneven sampling, we start this process in a different sector E_i for each trajectory, repeating after all sectors have played this role. The different shape of these trajectories is visible in figure 2.
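The sector membership test implied by the definition of E_i reduces to a single atan2 comparison; a sketch (ours):

```python
import numpy as np

# Sector index from the definition of E_i: (x, y, z) lies in sector i when
# -pi + 2*pi*(i-1)/6 < atan2(y, x) <= -pi + 2*pi*i/6. Since atan2 returns
# values in (-pi, pi], shifting by pi and dividing by the sector width of
# pi/3 gives i directly.
def sector_index(x, y):
    theta = np.arctan2(y, x)
    i = int(np.ceil((theta + np.pi) * 3.0 / np.pi))
    return max(i, 1)   # guard the theta == -pi edge case
```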

In figure 3, we put together all of the different types of trajectories, with different colouring to show that the stage I and II trajectories in red and green are similar, without any observable geometric relationship. The extra trajectories in turquoise, however, are visibly concentrated near a plane. In the final view of figure 3, one observes that the bounding sphere constraint is tight at some points on some trajectories.

5. Implementation

In principle, the hard work is in formulating an approximation to the trajectory design problem, which consists of convex, second order cone optimization (sub)problems (SOCP/SOCO): a class of optimization problem to which efficient interior point methods are known to apply [12], [4]. Interpreting the numerical results of our subproblems as trajectories is difficult. Initial attempts at formulating convex subproblems were hampered by the lack of a good visualization of the solutions, see [24]. Subsequently, undergraduate students developed a highly interactive visualizer [16] for trajectories, using OpenGL and Cocoa Widgets. The visualizer was then modified by the third author to include a user interface for modifying model parameters and to integrate two solvers. Initially, we tried IPOPT [29], because it accepts quite general non-linear optimization problems allowing flexibility in the model, but it was unable to solve simple problems designed to test its performance on the types of constraints we planned to use. (Note that IPOPT is a very powerful solver, and we have used it successfully for other problems.) Next we decided to look at primal-dual solvers specialized for SOCO constraints. The only open-source solver easily called from C is socp.c [12], written to demonstrate applications of SOCP. Although the authors warn that it is not being maintained, and offers limited performance relative to other solvers, we found that it solved our test problems with hundreds of variables. Its main limitations are: the C API requires that the user provide a feasible starting point, errors in input data are not detected and reported to the user in a useful way, dependent variables must be eliminated, and it does not use sparsity, so it will not perform as well on large sparse problems as it could. Given these limitations, we looked for a commercial solver with a better C interface which could be linked to the visualizer, which we needed to compile on both PowerPC and Intel Mac OS X, but none were available at the time of writing, nor could the developers provide an estimate of when support would be available for the (admittedly brand new) Intel platform.

Figure 3: Three views of complete sets of trajectories. On the left, the red trajectories are designed in stage I, and the green are the redesigned relaxed trajectories. In the middle, the extra annular trajectories are added in cyan. On the right the full set of trajectories is shown, one colour per trajectory.

5.1. SOCP

A second order cone programming problem, as implemented in socp.c, is defined as

min f^T x
s.t. ‖A_i x + b_i‖ ≤ c_i^T x + d_i,   i = 1, . . . , L,     (7)

although other, more abstract definitions are more common. The optimization variable is the vector x ∈ R^m. The problem data are f ∈ R^m, and, for i = 1, 2, . . . , L: A_i ∈ R^{N_i×m}, b_i ∈ R^{N_i}, c_i ∈ R^m and d_i ∈ R. The norm appearing in the constraints is the Euclidean norm, i.e. ‖v‖ = (v^T v)^{1/2}. The constraint,

‖A_i x + b_i‖ ≤ c_i^T x + d_i

is called a second order cone constraint of dimension N_i. Such constraints bound a cone with elliptical cross section, hence the name, but are often used to bound the interior of a sphere, which frequently occurs as a norm constraint on a vector. All of our universal constraints have this form. The penalized soft constraints have

11

Page 12: Durga: A heuristically-optimized data collection strategy for volumetric magnetic resonance imaging

Optimized data collection for 3d MRI

the more general form. The SOCP (7) is a convex programming problem since the objective is a convex function and the constraints define a convex set, so each of our subproblems has a connected (convex) set of minimizers and a unique minimal value. In practice it is always helpful to start a search at a feasible point (a trajectory satisfying the constraints). For our choice of solver it is required, and we explain how to get one in an appendix. We now collect a complete stage I subproblem, and explain how we represented it in this standard form.
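A second order cone constraint is easy to check directly for a candidate x. The following minimal sketch (pure Python; `soc_satisfied` and the toy data are illustrative, not from the paper's code) verifies the norm-ball special case mentioned above:

```python
# Hedged sketch: a direct check of the second order cone constraint
# ||A x + b|| <= c.x + d from (7), in pure Python. Illustrative names only.
from math import sqrt

def soc_satisfied(A, b, c, d, x, tol=1e-9):
    """Return True if ||A x + b|| <= c.x + d (Euclidean norm)."""
    Ax_b = [sum(Arow[j] * x[j] for j in range(len(x))) + bi
            for Arow, bi in zip(A, b)]
    lhs = sqrt(sum(v * v for v in Ax_b))
    rhs = sum(ci * xi for ci, xi in zip(c, x)) + d
    return lhs <= rhs + tol

# A norm ball ||x|| <= 1 is the special case A = I, b = 0, c = 0, d = 1.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [0.0, 0.0]
c = [0.0, 0.0]
print(soc_satisfied(A, b, c, 1.0, [0.6, 0.6]))   # inside the unit ball: True
print(soc_satisfied(A, b, c, 1.0, [1.0, 1.0]))   # outside: False
```

The same check applies unchanged to the elliptical-cone case with nonzero c.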

For a given design subproblem for trajectory m, the variables are τ ∈ R and k_{i,m} ∈ R^3 for i ∈ {3, . . . , N_m − 2}, where we exclude the initial and final positions k_{i,m}, which are constant. The problem is

min τ
s.t. ‖k_{i_{m,j},m} − 0‖ ≤ λ_0 τ,   ∀j ∈ {2, 5}
     ‖k_{i_{m,j},m} − c_{m,j}‖ ≤ λ_goal τ,   ∀j ∈ {1, 3, 4}
     ‖Σ_{i=1}^{n−1} i (k_{i+1,m} − k_{i,m})‖ ≤ λ_nulling τ,
     ‖k_{i+2,m} − 2 k_{i+1,m} + k_{i,m}‖ ≤ S_max,   ∀i ∈ {1, . . . , N_m − 2}
     ‖k_{i+1,m} − k_{i,m}‖ ≤ G_max,   ∀i ∈ {1, . . . , N_m − 1}
     ‖k_{i,m}‖ ≤ R,   ∀i ∈ {1, . . . , N_m}.     (8)
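The moment-nulling quantity in (8) can be evaluated directly for any discrete trajectory. A sketch, with `first_moment` and the sample loop purely illustrative:

```python
# Hedged sketch of the moment-nulling quantity in (8): the norm of
# sum_{i=1}^{n-1} i * (k_{i+1} - k_i) for a discrete trajectory.
# The function name and the toy trajectory are illustrative.
from math import sqrt, cos, sin, pi

def first_moment(k):
    """k: list of 3d points; returns || sum_i i*(k[i+1]-k[i]) || (1-based i)."""
    m = [0.0, 0.0, 0.0]
    for i in range(1, len(k)):          # i = 1 .. n-1
        for d in range(3):
            m[d] += i * (k[i][d] - k[i - 1][d])
    return sqrt(sum(v * v for v in m))

# A closed loop has zero net zeroth moment, but its first moment is
# generally nonzero; the optimizer drives this quantity toward zero.
loop = [(cos(2*pi*t/10), sin(2*pi*t/10), 0.0) for t in range(11)]
print(round(first_moment(loop), 6))
```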

The only component of the objective is the penalty term, so if we order the variables

x = (τ, x_3, y_3, z_3, x_4, y_4, . . . , z_{N_m−2}),     (9)

where k_{i,m} = (x_i, y_i, z_i), then f = (1, 0, 0, . . . ). The cones in our problems are in one-to-one correspondence with the constraints, all of which have dimension 3. The cone corresponding to moment nulling is the only dense constraint. The rows of the other blocks have between one and three nonzeros. Since socp.c is not sparsity-aware, we didn't organize the blocks to take advantage of the sparse structure. Care must be taken for the peak and slew constraints abutting the end points, which have to be treated separately, since they contain k_{i,m} values which are constant, and hence don't contribute to the A-block, but rather to the b-block, which is otherwise mostly zero.
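As a sketch of this bookkeeping, the following assembles one interior slew-constraint cone in the (A_i, b_i, c_i, d_i) form of (7), under the variable ordering (9). The helpers `col` and `slew_block` are hypothetical, not the paper's code:

```python
# Hedged sketch: mapping the slew constraint ||k_{i+2}-2k_{i+1}+k_i|| <= Smax
# into (A, b, c, d) data, with x = (tau, x3, y3, z3, ..., zN-2) as in (9).

def col(i, d, first=3):
    """Column of coordinate d (0, 1, 2) of k_i in x; k_first is the first
    free control point, preceded only by tau."""
    return 1 + 3 * (i - first) + d

def slew_block(i, Smax, m):
    """Dense (A, b, c, d) for an interior slew constraint touching only the
    free points k_i, k_{i+1}, k_{i+2}. m = len(x)."""
    A = [[0.0] * m for _ in range(3)]
    for d in range(3):                  # one row per spatial coordinate
        A[d][col(i, d)] = 1.0
        A[d][col(i + 1, d)] = -2.0
        A[d][col(i + 2, d)] = 1.0
    b = [0.0, 0.0, 0.0]
    c = [0.0] * m                       # right-hand side is the constant Smax
    return A, b, c, Smax

# Constraints abutting the fixed endpoints differ: the columns of the
# constant k values move into b instead, as described in the text.
A, b, c, d = slew_block(4, 150.0, 1 + 3 * 10)
print(A[0][col(4, 0)], A[0][col(5, 0)], A[0][col(6, 0)])
```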

To construct the initial (primal) feasible problem, we modified the shape constraints from hard to soft:

‖k_{i+2,m} − 2 k_{i+1,m} + k_{i,m}‖ ≤ S_max/2 + τ
‖k_{i+1,m} − k_{i,m}‖ ≤ G_max/2 + τ     (10)

which requires only a few changes in elements of the matrices defining the problem. For the ranges of parameters we are interested in, the solution always satisfies τ < min{G_max/2, S_max/2}, so we always get a feasible point for the original constraints. The relaxed problem has a simple feasible starting point given by taking all k values to be zero. Based on the (very sparse) block structure of our problem, it is simple to find feasible initial values of the dual variables z and w by solving small linear systems for z, and using a suitably large vector w_i in the null-space of c_i.
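The feasible start described above can be sketched as follows; `feasible_tau` and the toy goals are illustrative, and we assume only the goal constraints of (8) bind once all k are set to zero (the relaxed shape constraints (10) then hold for any τ ≥ 0):

```python
# Hedged sketch of a feasible start for the relaxed problem: all k = 0, and
# tau large enough that every goal constraint ||0 - c|| <= lambda * tau holds.
# Names and data are illustrative.
from math import sqrt

def feasible_tau(goals, lam_goal, lam_zero):
    """goals: list of (goal_point, is_origin_goal). Returns the smallest tau
    making every goal constraint hold with all k = 0."""
    tau = 0.0
    for c, at_origin in goals:
        lam = lam_zero if at_origin else lam_goal
        tau = max(tau, sqrt(sum(v * v for v in c)) / lam)
    return tau

goals = [((1.0, 0.0, 0.0), False), ((0.0, 0.0, 0.0), True),
         ((0.0, 0.6, 0.8), False)]
tau0 = feasible_tau(goals, lam_goal=1.0, lam_zero=1.0)
print(tau0)   # 1.0: the farthest goal is at distance 1 from the origin
```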

At every stage in development, we found it extremely helpful to have an integrated visualization tool. The implementation details of the tool are not interesting,


but the features may be:

1. trajectories drawn as cylindrical tubes,

2. a user interface for trajectory coloration and selection,

3. coloured control points on trajectories,

4. spheres for goals, and the outer sphere,

5. transparency,

6. cut-planes,

7. partial trajectory display,

8. density display (based on selected trajectories),

9. psf display (based on selected trajectories).

6. Performance

We use three surrogate measures of design performance prior to validation of our design: duty cycle, point spread function (psf), and frame rate. Regarding the solver, we have observed only that it runs faster with large tolerances, and that it does not scale well to larger problem sizes. Scaling would improve by using a solver which leveraged sparsity, because computation is bound by solving the linear system for a Newton step direction. Once we are satisfied the model cannot be improved, we will take steps to improve the solver's efficiency.

We will report all results with the following optimization parameters, which were chosen to facilitate comparison with the work of other authors:

# basic trajectories            54
# relaxed trajectories          10
# threaded trajectories         10
time between control points     0.1 ms
duration of gradient            5.5 ms
peak gradient                   40 mT/m
max slew per control point      150 T/m/s
target maximum resolution       1 mm^−1

We use 0.1 ms as the discrete time step because it works well enough, and it is a convenient unit for exposition and analysis.
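With this discretization, the gradient and slew limits become bounds on first and second differences of the control points. A sketch, with illustrative names, arbitrary k units, and the gyromagnetic-ratio scaling omitted:

```python
# Hedged sketch: with a 0.1 ms step, discrete first and second differences of
# a trajectory stand in for gradient amplitude and slew rate (up to constant
# physical scaling, which we omit). Names and data are illustrative.
from math import sqrt

DT = 1e-4                                # 0.1 ms between control points

def norm(v):
    return sqrt(sum(x * x for x in v))

def max_grad_slew(k):
    """Largest first- and second-difference rates over a discrete trajectory."""
    g = max(norm([(k[i+1][d] - k[i][d]) / DT for d in range(3)])
            for i in range(len(k) - 1))
    s = max(norm([(k[i+2][d] - 2*k[i+1][d] + k[i][d]) / DT**2 for d in range(3)])
            for i in range(len(k) - 2))
    return g, s

# A straight line traversed at constant speed: first differences are exactly
# one unit per step, second differences vanish.
line = [(float(i), 0.0, 0.0) for i in range(56)]   # 5.5 ms of control points
g, s = max_grad_slew(line)
print(g, s)
```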

6.1. Optimizer Performance

The quality of the solution depends on the coverage of k-space. Coverage in k-space will not be significantly altered by small changes in the individual trajectories. So we can set the tolerance in solving the SOCP high, while heavily weighting the moment-nulling constraint violation, and somewhat heavily weighting the zero-crossing constraint. So if we are anywhere near the outer goals, the first moment will be within measurement error of zero. We found that increasing both the relative and absolute duality-gap tolerances used for termination from 10^−6 and 10^−4 (suggested defaults) to 1/10 reduced the execution time for each subproblem by a factor of two, from four seconds to two seconds per subproblem, on average. (Times refer to measurements on a 2.5 GHz PowerMac G5.) Total optimization time for the first


Figure 4: Pulse sequence diagrams for an optimized Teardrop planar bSSFP readout on the left, and for a single Durga trajectory on the right. The left diagram is a screen capture from the console of a Picker Medical Systems Infineon scanner, and the right is a plot generated with gnuplot. The trace order is the same in both cases (rf, slice, phase and read gradients, and data acquisition state).

two stages, including set-up and display, was under a minute. Total optimization time for the density threading stage was half an hour. Neither time is reasonable for on-line optimization while a patient is in the magnet, but both are reasonable for one-time optimization to set up a protocol.

After 10 trajectories were recalculated with relaxed penalties on the k = 0 goals, the maximum resolution in k-space sampled by each trajectory averaged 0.940 mm^−1, with the maximum being 1.003 mm^−1 and the minimum 0.824 mm^−1. The worst-case distance between the selected point and its goal was between 0.042 mm^−1 and 0.534 mm^−1, with an average of 0.263 mm^−1. Changes in the choice of goals for a single trajectory and flexibility in the point pulled to each goal could improve these numbers, but of many experimental adjustments, the ones which made a noticeable difference to the psf were the relaxations and the additional density threading trajectories.

6.2. Time efficiency comparison

The simplest efficiency comparison is sampling duty-cycle, calculated by dividing the total time data is collected by the repetition time for the pulse sequence. The same k-space trajectory will result in different duty-cycles when used with different pulse sequences, although a better trajectory in one usage is usually better in most usages. Durga is designed to be used with volumetric, fully-balanced pulses, specifically balanced steady-state free precession (bSSFP).

In figure 4, we show pulse sequence diagrams for an efficient waveform design for planar bSSFP imaging, see [2], and for the Durga waveform design for volumetric imaging. The first line shows the envelope of the transmitted radio-frequency pulse used to excite the magnetization. (In a steady state experiment, many repetitions are required to set up the steady state.) No data can be collected during this


time on a conventional imager, although in the more controlled environment of an NMR spectrometer, it is possible to transmit and receive simultaneously with well-calibrated antennae designed to be sensitive to orthogonal plane-polarized magnetic waves. If the rf pulse occurs when all the gradients are zero (no current), then the pulse excites magnetization uniformly across the imaging volume. In Teardrop and Durga, the rf pulse corresponds to the period where the slice gradient (the second trace) is nonzero. Such combinations are called selective excitations, because they excite the volume differently at different slice positions, with the profile of the excitation well-approximated by the Fourier transform of the rf pulse envelope. In the planar case, time is required to ramp up and down the slice gradient, because during data collection, no phase variation in the slice direction (transverse to the slab being measured) is desired. For the pulse used, phase variation in the slice direction is created by the main lobe, so negative lobes must surround the excitation lobe. After 1.47 ms, excitation is complete, and the Teardrop readout waveform begins, lasting 3.43 ms. Data collection is indicated by the step function in the bottom trace. The trajectory is the integral of the read and phase gradients, which correspond to x and y in our notation.

The displayed Teardrop sequence does not have zero first moment, so flowing tissue will not reconstruct properly. In her master's thesis [24], Tingting Ren developed an optimization model and sequential SOCP implementation capable of incorporating this and other global constraints, and found that doing so resulted in a 3 percent decrease in resolution.

On the right, the Durga pulse sequence design is quite different. It is designed for volumetric imaging, so we must dephase in the slice direction at some point to be able to resolve the z direction. To save time, we use the dephasing (linear phase variation, exp(iz), of the magnetization) which would be caused by any finite-time pulse, and start sampling at k = (0, 0, c_0), and finish sampling at k = (0, 0, −c_0). This removes the need for the negative slice gradient lobes for rewinding. In figure 1, this behaviour is visible as a constant gap between the endpoints of the trajectory. In the pulse sequence diagram, it is visible as a short period with zero x- and y-gradients, and maximum value for the z-gradient.

Some authors refer to experiments as 'volumetric' when the excitation pulse is not accompanied by any gradient activity. We use it in the more general sense of exciting a large volume uniformly, which is possible, although more difficult, if gradients are also active. We have designed an energy-limited pulse which excites a volume 50 cm across with uniform magnitude and linearly-varying phase, so we know such pulses exist, and plan to publish the optimization method in the future.

We will compare Durga and Teardrop to what we believe to be the most efficient alternative, the Hargreaves-Nishimura-Conolly (HNC), see [9], spiral pulse with extra rewinder to zero the first two moments. In table 2, we tabulate the times for data collection, excitation, and the rewinder, where necessary. Unfortunately, on real imagers, there are always some switching delays when going from rf transmit (excitation) to rf receive (data collection), when loading waveform data, etc. For Teardrop, this makes a small difference in efficiency, but for HNC, it is larger, so we break the ideal and the implemented repetition times into two columns. Although Teardrop and HNC are designed for planar imaging, they can both be adapted to volume imaging with very little extra cost. (See [9].) Teardrop gains in efficiency over HNC by incorporating first-moment nulling as a constraint on the


                        Teardrop   HNC    HNC-imp   Durga
readout (ms)            3.43       2.40   2.40      5.50
excitation (ms)         1.47       1.20   1.20      0.10
rewinder (ms)           -          1.40   1.40      -
TR (ms)                 4.93       5.00   5.90      5.60
duty-cycle (fraction)   0.78       0.48   0.41      0.98

Table 2: Duty-cycle calculations for three balanced k-space sampling patterns, designed for bSSFP imaging. The HNC pattern uses extra time for a rewinder to balance the first moment; the other strategies incorporate this as a constraint on the readout trajectory. The repeat time (TR) used for Teardrop is the actual time, including dead time required by the scan controller to switch modes. For HNC, both the ideal and implemented time with overhead are given, in separate columns. For Durga, no measurements have been made, so no dead time is added. Durga's efficiency comes from what is not there: time to rewind the first moment, and time to rewind the slice excitation.

readout trajectory, and avoiding the time lost for the rewinder. Durga has the same advantage, and by being designed only for (large) volumetric imaging, it also saves time taken by slice rewinders, resulting in the best overall efficiency.

Duty-cycle is not the only factor influencing efficiency, if this is taken to be information gained per unit time. The sampling pattern in k-space, and the signal to noise ratio of the sampled data, are also important. The effectiveness of the sampling pattern is largely captured by the quality of the psf, and Durga performs well in this respect, as we demonstrate below. Signal to noise ratios are in turn dependent on other factors, most importantly on the type of pulse sequence (the sequencing of excitations with different energies and phases, gradient waveforms, and data collection) and the volume of tissue excited. Most noise sources are not dependent on the amount of tissue excited, so the signal to noise ratio (in the measurements) is approximately proportional to the volume excited, so Durga will benefit from being volumetric. The signal to noise ratio in the reconstructed image will also depend inversely on the volume of the voxels.

6.3. Point Spread Function

The point spread function (psf) is the image which would be constructed if a delta function were being scanned. If the image reconstruction process is linear, then the measured image will be the convolution of the psf with the true image. In practice, the approximation is good, and its failings are well understood, if not always easy to correct. Figure 5 shows the x = 0 slice of the psf obtained using the 54 trajectories after stage II, and plots of the cross sections along the axes. The central peak in the psf is not symmetric, with the y (vertical) axis being longer than the z (horizontal) axis. This would produce more blurring in the y direction (and, similarly, the x direction, which is not shown) than in the z direction, but the psf is already quite good. The contour at half the peak height has a radius of 1 mm. When enlarged, the first aliasing ring has a height at most 8 percent of the height of the central peak. Other aliasing, which will appear as noise, decays quickly, with some points of height 2 percent.


Figure 5: Close-up of the psf restricted to x = 0. In the image, hue is phase and brightness is magnitude. Red is positive real, and turquoise is negative real. Brightness has been lightened to make the background visible. On the right, cross sections through the centre are plotted, with the y axis in red and the z axis in blue.


Figure 6: Comparison of 54 trajectories on the left and 54 + 10 density-filling trajectories on the right. Part of an x-y cross section of a numerical phantom containing cubes with different phases is displayed, with cross-sectional plots in both the x and y directions. In the images, brightness corresponds to magnitude, and hue corresponds to phase. In the cross-sectional graphs, magnitude is black, and real and imaginary parts are blue and red, respectively. The effect of the extra trajectories on the resolution in the x-y plane is visible in two ways: in the image, the squares on the right are clearly more square, and in the graphs, the definition and separation of the two objects is clear, especially when looking at the magnitudes, and looking at the inflection point on the real part of the x cross section between the objects (the x sections are the adjacent plots between the images).


Figure 7: A comparison of psfs corresponding to the first trajectory and all 54 trajectories designed in the first phase. The corresponding (sets of) trajectories are shown on the left.

In figure 6, we show an x-y cross section of a numerical simulation of two cubes. The cubes are meant to have sharp edges, and the simulation shows the edges blurred as the psf predicts. On the right, the same reconstruction is done including data from the 10 extra trajectories, and one can easily see that the resolution in this plane is increased by their addition, although with an increase in apparent noise.

We stated as a goal that Durga should trade acquisition time for apparent noise. Regular trajectories, whether sampled on lattices, spirals or other nested shapes, have lower limits on acquisition time imposed by the Nyquist sampling theorem. If this doesn't hold for Durga, then one would expect that as the density of sampling (i.e. number of trajectories) decreases, the apparent noise will increase, but nothing regular like aliasing ghosts or spiral artifacts will appear. This is what we observe in comparing 1 trajectory to 54 trajectories in figure 7.

Although adding extra trajectories can reduce asymmetry in the central peak, there is still asymmetry which may contribute to distortion of reconstructed images, see figure 8, but overall, the psf promises sharp images (figure 9). Not visible in figure 9, due to Fourier transform size limitations, there is a region of apparent noise in the psf at a radius of 30 cm. The largest magnitude value observed after examining several different narrow transforms (i.e. 2048 × 64) was half a percent of the peak value. After observing this ring from different angles, which we had not expected, given the irregularity of the sampling, we realized this ring corresponds to the regular sampling along the trajectory: we were sampling once per 2 µs (500 kHz), so Nyquist aliasing should occur at

1 / (γ̄ · 40 mT/m · 2 × 10^−6 s) = 0.298 m,     (11)

i.e. at a radius of thirty centimetres. Since the apparent noise is small, this would


Figure 8: Three planar cross sections of the psf: x-y, x-z, and y-z, enlarged by zero padding (using a larger Fourier transform than necessary).

Figure 9: Grayscale surface plot of psf for 64 trajectories.

be hard to distinguish from other sources of noise in imaging, but it does point to the need for megahertz sampling in modern MR imagers.
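The estimate (11) is a one-line computation. The sketch below assumes the proton value γ̄ ≈ 42.58 MHz/T; this reproduces the thirty-centimetre radius up to rounding:

```python
# Hedged sketch of the aliasing-radius estimate (11): one over (gyromagnetic
# ratio x gradient amplitude x dwell time). The proton gamma-bar is assumed.
GAMMA_BAR = 42.58e6      # Hz/T, proton (assumed)
G = 40e-3                # T/m, peak gradient
DWELL = 2e-6             # s, one sample per 2 microseconds (500 kHz)

radius = 1.0 / (GAMMA_BAR * G * DWELL)
print(round(radius, 3))  # about 0.29 m, a radius of roughly thirty centimetres
```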

6.4. Frame Rate

In applications to dynamic imaging (beating heart, flowing blood), the most important factor is frame rate, the rate at which data for an entire plane or volume can be collected. The authors of the missile-guided trajectory (MG) compared six methods of volumetric imaging, using machine constraints of 10 mT/m and 30 mT/m/ms. We can time-dilate our trajectories by a factor of 5 to meet these requirements for purposes of comparison. In table 3, we repeat the reported results from [17] for the faster methods, adding the equivalent numbers for Durga. Durga is five times faster than the nearest competitor, and a comparison of the psfs indicates Durga would display similar amounts of blurring, but larger amounts of apparent noise, as a result of the lower sampling rate. We don't know how MG would perform under similar under-sampling conditions. It is as irregular as Durga, but it is designed with very different objectives, and doesn't include velocity insensitivity as a constraint. So for spoiled scans (in which the signal is zeroed and regenerated from scratch for each repetition), MG may perform as well as Durga, and we encourage the authors to test it under extreme under-sampling.

There are other undersampled acquisitions, most notably undersampled radial trajectories using Projection Reconstruction and different types of data-sharing. In


Trajectory          Total number of shots   Scan time (s)
Cylindrical EPI     537                     27.0
Stack of Spirals    215                     11.0
Missile Guidance    357                     18.0
Durga               64                      1.8

Table 3: Scan times for different rapid volumetric trajectories. See [17].

Figure 10: Reconstruction of a numerical phantom with two linked solid tori, centred on the x-y and the x-z planes. Showing the x-y and the x-z planes, the two planes in a 3d multi-planar reformatting, and a surface rendering of the volume. All images are captured from OsiriX ([26]).

[5], Du et al. report a frame rate of 0.21 frames per second for (PR)Hyper-TRICKS, which is roughly comparable to 1/(64 × 0.0056 s) = 2.79 frames per second for Durga (see table 2). After scaling for different gradient performance, this is in the range reported for the fully-sampled trajectories in table 3. This comparison must be put into context: (PR)Hyper-TRICKS is being used for sequence types to which Durga is not applicable, at least as currently designed, and calibration and artifact reduction for projection reconstruction is much simpler and more effective than general non-raster reconstruction, which must be used with Durga.
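The Durga frame rate quoted above is simply one volume per (number of trajectories × repetition time); a sketch with an illustrative helper name:

```python
# Hedged sketch of the frame-rate comparison: one volume per
# (number of trajectories x repetition time).
def frames_per_second(n_trajectories, tr_s):
    return 1.0 / (n_trajectories * tr_s)

print(round(frames_per_second(64, 0.0056), 2))   # Durga: 2.79 frames per second
```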

6.5. Numerical Phantom

Within our trajectory visualization application, we have included a number of numerical phantoms, described by lists of cubes, which we simulate using a sinc function approximation. The most interesting are two linked solid tori contained within a 20 cm field of view. The limit on field of view is a function of memory limits on our workstation, and not the trajectory itself. Using a Fourier transform of size 256³, and a resampling kernel designed to preserve image intensity within the central 128³, we would lose image intensity to roll-off with a larger phantom, see [1]. In figure 10, we show the resulting multi-planar reformats, which clearly show intersections with two solid tori, and a surface rendering, which shows the linking. Looking carefully at the two cross sections, it is clear there is more apparent noise in the x-y plane, which is related to the stronger visible structure in the final psf in those directions.
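The sinc-function simulation of a cube phantom rests on the fact that the transform of an axis-aligned cube factors into one sinc term per axis. A sketch, with `cube_signal` and the sample values purely illustrative:

```python
# Hedged sketch: k-space value of a unit-intensity cube of width w centred at
# c, as a product of per-axis sinc factors and a phase ramp for the centre.
from cmath import exp, pi
from math import sin

def sinc(u):
    return 1.0 if u == 0.0 else sin(u) / u

def cube_signal(k, centre, width):
    """Value at k-space point k (3-tuple) of a unit-intensity cube."""
    s = exp(-2j * pi * sum(kd * cd for kd, cd in zip(k, centre)))
    for kd in k:
        s *= width * sinc(pi * kd * width)
    return s

# At k = 0 the signal magnitude is the cube's volume, width**3.
print(round(abs(cube_signal((0.0, 0.0, 0.0), (0.05, 0.0, 0.0), 0.2)), 6))
```

Summing `cube_signal` over a list of cubes at each sampled k-space point gives simulated data for phantoms like the linked tori described above.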


7. Future Work

We plan to continue this work, with the following list of priorities:

1. find more efficient ways of threading through points of lower sampling density

2. compare performance of this model with an LP model for the problem of designing trajectories which are not rotatable

3. incorporate sparsity into the solver, or benchmark on another solver

4. add constraints to the current design problems to further improve the psf

5. try multiple assignments of goals to trajectories, searching for a set of assignments which produces a better psf

6. reduce and compensate for gradient waveform distortion by incorporating it into the model

And while we stated as a goal the ability to trade off noise for reduced scan time, we have really concentrated on the trade going one way: towards a minimum scan time producing an acceptable image. We don't know how this approach performs when longer scan times are possible, but improved image quality is necessary. Given that existing methods work well in this case, and longer scan times also mean longer design times, another idea is probably needed to make this end of the spectrum compelling.

Finally, for those who want both shorter scan time and high image quality, we want to incorporate iterative SENSE (see [22]) image reconstruction techniques, which seek to take advantage of differences in the geometric encoding of different receiving antennae.

8. Summary

We have formulated a set of small second order cone problems, which taken together design a very efficient set of k-space trajectories for sampling volumetric MRI data. Very high frame rates – above what the Nyquist sampling theorem indicates – are possible because the sampling is irregular. The amount of blurring, as shown by analysis of the point spread function, is comparable to trajectories an order of magnitude slower. Some of this efficiency comes from the integration of both first-moment nulling and slice rewinding into the readout trajectories, allowing 98 percent sampling efficiency, as measured by duty cycle. Some of this efficiency comes at the price of artifacts which appear to be background noise. There is some structure to the apparent background noise, as seen in the reconstructed numerical simulations, but given the reduction in sample time, the defects are minor.

Acknowledgements

We thank Mark Haacke, Paul Margosian, Michael Noseworthy, Michael Thompson, Ian Young and Yuri Zinchenko for research suggestions and comments on the manuscript. We thank NSERC, CFI and OIT for research support.


References

1. C. Anand, T. Terlaky and B. Wang, 'Rapid, embeddable design method for spiral magnetic resonance image reconstruction resampling kernels.' Optimization and Engineering (2004) 485–502.

2. Christopher Anand, Michael Thompson, Dee Wu and Tom Cull, 'Teardrop, a novel trajectory for truefisp.' 'ISMRM April 2001 Conference Proceedings, Glasgow,' (2001) p. 1804.

3. Andrew V. Barger, Walter F. Block, Yuriy Toropov, Thomas M. Grist and Charles A. Mistretta, 'Time-resolved contrast-enhanced imaging with isotropic resolution and broad coverage using an undersampled 3D projection trajectory.' Magn Reson Med 48 (2002) 297–305.

4. Stephen Boyd and Lieven Vandenberghe, Convex Optimization (Cambridge University Press, 2004).

5. J. Du, F. J. Thornton, S. B. Fain, F. R. Korosec, F. Browning, T. M. Grist and C. A. Mistretta, 'Artifact reduction in undersampled projection reconstruction MRI of the peripheral vessels using selective excitation.' Magn Reson Med 51 (2004) 1071–1076.

6. Paul T. Gurney, Brian A. Hargreaves and Dwight G. Nishimura, 'Design and analysis of a practical 3D cones trajectory.' Magn Reson Med 55 (2006) 575–582.

7. E. M. Haacke, 'Improving mr image quality in the presence of motion by using rephasing of gradients.' American Journal of Roentgenology 148 (1987) 1251–1258.

8. E. M. Haacke, R. W. Brown, M. R. Thomson and R. Venkatesan, Magnetic Resonance Imaging: Physical Principles and Sequence Design (Wiley-Liss (John Wiley & Sons), New York, 1999).

9. Brian A. Hargreaves, Dwight G. Nishimura and Steven M. Conolly, 'Time-optimal multidimensional gradient waveform design for rapid imaging.' Magnetic Resonance in Medicine 51 (2004) 81–92. http://www3.interscience.wiley.com/cgi-bin/jissue/106592541

10. J. Hennig, A. Nauerth and H. Friedburg, 'Rare imaging: a fast imaging method for clinical mr.' Magnetic Resonance in Medicine 3 (1986) 823–833.

11. P. Irarrazabal and D. Nishimura, 'Fast three-dimensional magnetic resonance imaging.' Magn Reson Med (1995) 33.

12. Miguel Sousa Lobo, Lieven Vandenberghe, Stephen Boyd and Herve Lebret, 'Applications of second-order cone programming.' Linear Algebra Appl. 284 (1998) 193–228. ILAS Symposium on Fast Algorithms for Control, Signals and Image Processing (Winnipeg, MB, 1997).

13. P. Mansfield, 'Multi-planar image formation using nmr spin echoes.' J Phys C (1977) L55–L58.

14. P. Mansfield, A. A. Maudsley and T. Baines, 'Fast scan proton density imaging by nmr.' J. Phys. E: Scient. Instrum. 9 (1976) 271.

15. P. Margosian, F. Schmitt and D. E. Purdy, 'Faster mr imaging: Imaging with half the data.' Health Care Instr. 1 (1986) 194.

16. Gentaro Matsumoto and James Castura, '3d visualiser for trajectories.' BSc Thesis Project, (April 2005).

17. Roberto Mir, Andres Guesalaga, Juan Spiniak, Marcelo Guarini and Pablo Irarrazaval, 'Fast three-dimensional k-space trajectory design using missile guidance ideas.' Magn Reson Med 52 (2004) 329–336.

18. K. S. Nayak and D. G. Nishimura, 'Randomized trajectories for reduced aliasing artifact.' 'ISMRM 1998 Conference Proceedings,' (1998) p. 670.

19. Krishna S. Nayak, Brian A. Hargreaves, Bob S. Hu, Dwight G. Nishimura, John M. Pauly and Craig H. Meyer, 'Spiral balanced steady-state free precession cardiac imaging.' Magn Reson Med 53 (2005) 1468–1473.

20. D. G. Nishimura, A. Macovski and J. Pauly, 'Magnetic resonance angiography.' IEEE Transactions on Medical Imaging 5 (1986) 140–151.

21. A. Oppelt, R. Graumann, H. Barfuss, H. Fischer, W. Hartl and W. Shajor, 'Fisp: a new fast mri sequence.' Electromedica (1986) 15–18.

22. Klaas P. Pruessmann, Markus Weiger, Peter Bornert and Peter Boesiger, 'Advances in sensitivity encoding with arbitrary k-space trajectories.' Magn Reson Med 46 (2001) 638–651.

23. Klaas P. Pruessmann, Markus Weiger, Markus B. Scheidegger and Peter Boesiger, 'Sense: Sensitivity encoding for fast mri.' Magn Reson Med 42 (1999) 952–962.

24. Tingting Ren, 'An optimal design method for mri teardrop gradient waveforms.' Master's thesis, McMaster University, (August 2005).

25. D. Rosenfeld, 'New approach to gridding using regularization and estimation theory.' Magnetic Resonance in Medicine 48 (2002) 193–202.

26. Antoine Rosset, Luca Spadola and Osman Ratib, 'Osirix: an open-source software for navigating in multidimensional dicom images.' J Digit Imaging 17 (2004) 205–216.

27. D. K. Sodickson and W. J. Manning, 'Simultaneous acquisition of spatial harmonics (smash): fast imaging with radiofrequency coil arrays.' Magn Reson Med (1997) 591–603.

28. D. R. Thedens, P. Irarrazaval, T. S. Sachs, C. H. Meyer and D. G. Nishimura, 'Fast magnetic resonance coronary angiography with a three-dimensional stack of spirals trajectory.' Magn. Reson. Med. (1999) 1170–1179.

29. A. Wachter, 'An interior point algorithm for large-scale nonlinear optimization with applications in process engineering.' Ph.D. thesis, Carnegie Mellon University, (2002).

23

Page 24: Durga: A heuristically-optimized data collection strategy for volumetric magnetic resonance imaging

Optimized data collection for 3d MRI

Appendix: Pseudo-random goal generation

To find well-distributed points on the boundary sphere:

1. Start with a triangulation of one face of the hexahedron formed by joining

two triangular pyramids at the bases, with an edge in the plane y = 0. For reference, we will put the bases on the plane z = 0 and the apex of the pyramid at (0, 0, R), scaling the pyramid if necessary, so that the bases have radius R. We use one triangle, but different initial triangulations would result in different numbers of goals on the boundary sphere.

2. Use mid-point subdivision to generate more and more dense triangulationsof this face, until the desired number of points is reached, and project all ofthe points radially onto the boundary sphere. By rotating this set through±2π/3 radians about the z axis, and reflecting in the x − y plane, we cangenerate a reasonably uniform covering of the boundary sphere, but to get‘pseudo-random’ trajectories, we need to mix things up more than that.

3. Eliminate points on z = 0 and y = 0, since these points overlap with otherfaces.

4. Sort the points lexicographically by z and then y co-ordinates.5. Create triplets of points, discarding extras. Triplets correspond to trajectories.6. From the mth triplet, (S1, S2, S3), form goals

c1,m = S1,c2,m = 0,c3,m = rotation of S2 by 2π/3 about z-axis and reflection in z = 0,c4,m = rotation of S2 by −2π/3 about z-axis,c5,m = 0.

7. Triplicate each of these sets of goals by rotating each set by 2π/3 and −2π/3,respectively.

8. Duplicate all of the above trajectories by reflecting the trajectories throughz = 0 and swap the first and third goals in the resulting sets, that is the firsttwo nonzero goals.

We now have 6n1 trajectories, where n1 is the number of triples we formed from points in the triangulation of one equilateral triangular face of the hexahedron. For example, in our numerical tests, we subdivide a single triangle 3 times to produce 45 points, 28 without the two edges, which result in 9 triplets and 54 sets of goals. Two examples of such trajectories are shown in figure 1.
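The point-generation steps above, and the counts quoted for our numerical tests, can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the face-vertex coordinates, tolerances and function names are our own choices, made only so that the edge eliminations in step 3 correspond to the planes z = 0 and y = 0.

```python
import numpy as np

R = 1.0  # boundary-sphere radius (arbitrary units)

# One face of the hexahedron: two base vertices on z = 0 and the apex
# at (0, 0, R), with the edge A-C lying in the plane y = 0.
A = np.array([R, 0.0, 0.0])
B = np.array([-R / 2, R * np.sqrt(3) / 2, 0.0])
C = np.array([0.0, 0.0, R])

def subdivide(n):
    """n mid-point subdivisions give a barycentric grid of
    (2^n + 1)(2^n + 2)/2 points on the triangular face."""
    m = 2 ** n
    pts = []
    for i in range(m + 1):
        for j in range(m + 1 - i):
            k = m - i - j
            pts.append((i * A + j * B + k * C) / m)
    return np.array(pts)

pts = subdivide(3)
assert len(pts) == 45          # 3 subdivisions -> 45 points

# Step 3: drop points on z = 0 (edge A-B) and on y = 0 (edge A-C),
# which overlap with neighbouring faces.
keep = pts[(pts[:, 2] > 1e-12) & (pts[:, 1] > 1e-12)]
assert len(keep) == 28         # 45 - (9 + 9 - 1) = 28

# Project radially onto the boundary sphere.
sphere = R * keep / np.linalg.norm(keep, axis=1, keepdims=True)

# Step 4: sort lexicographically by z, then y.
sphere = sphere[np.lexsort((sphere[:, 1], sphere[:, 2]))]

# Step 5: triplets, discarding the one left-over point; steps 7 and 8
# then multiply each triplet into 6 sets of goals.
n1 = len(sphere) // 3
print(n1, 6 * n1)              # 9 54
```

The assertions reproduce the counts in the text: 45 grid points, 28 after removing the two shared edges, 9 triplets, and 6 × 9 = 54 sets of goals.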

Note that if we did not swap two of the goals for the inverted sets, the problems would be pairwise symmetric, and the resulting trajectories would intersect pairwise in the plane z = 0. This reduces coverage of k-space, and introduces symmetry which is observable as a reduction in quality of the psf, similar to the type of reduction observable for other reasons in figures 5 and 6.



Appendix: Finding feasible starting points

Since the C interface to SOCP requires a feasible starting point, we needed to generate one for each subproblem. This depends on the particular form of the optimization problem accepted by the solver.

Since SOCP is a primal-dual solver, the problem it actually solves includes the dual of our optimization problem. The dual of (7) is given by

\[
\begin{aligned}
\max \quad & -\sum_{i=1}^{L} \left( b_i^T z_i + d_i w_i \right) \\
\text{s.t.} \quad & \sum_{i=1}^{L} \left( A_i^T z_i + c_i w_i \right) = f, \\
& \| z_i \| \leqslant w_i, \quad i = 1, \dots, L.
\end{aligned}
\tag{12}
\]

The dual optimization variables are zi ∈ R^(Ni−1), wi ∈ R. We form the vector z = (zi, i = 1, . . . , L). The form of the dual problem is important because, in combination with knowledge of the sparsity of every constraint other than moment nulling, we can find simple, efficient means of computing dual-feasible points. Even if we were using an infeasible solver, starting with good primal-dual feasible points saves time.
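As a concrete illustration of the constraints in (12), dual feasibility of a candidate point (z, w) can be checked as follows. This is a hypothetical helper in NumPy, not the paper's C implementation; the toy problem data are our own, chosen only so that the equality constraint holds by construction.

```python
import numpy as np

def is_dual_feasible(As, cs, f, zs, ws, tol=1e-8):
    """Check the dual constraints of (12):
    sum_i (A_i^T z_i + c_i w_i) = f  and  ||z_i|| <= w_i for each cone i."""
    residual = sum(A.T @ z + c * w for A, c, z, w in zip(As, cs, zs, ws)) - f
    in_cones = all(np.linalg.norm(z) <= w + tol for z, w in zip(zs, ws))
    return bool(np.linalg.norm(residual) <= tol and in_cones)

# Toy data: a single cone (L = 1) with A_1 the 2x2 identity, c_1 = (1, 0).
A1 = np.eye(2)
c1 = np.array([1.0, 0.0])
z1 = np.array([0.5, 0.0])
w1 = 1.0
f = A1.T @ z1 + c1 * w1       # equality constraint satisfied by construction
print(is_dual_feasible([A1], [c1], f, [z1], [w1]))   # True
print(is_dual_feasible([A1], [c1], f, [z1], [0.1]))  # False: ||z_1|| > w_1
```

Exploiting sparsity, as described above, amounts to choosing the z_i for the sparse cones cheaply and then solving the remaining equality for the dense moment-nulling block.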

The C solver interface is actually a bit simpler. Block matrices are formed from Ai and ci; from bi and di; and from zi and wi. Each one of these contains multiple blocks, one block per cone.
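The block assembly might be sketched along the following lines. The exact memory layout expected by the SOCP C interface is an assumption here; the sketch only shows the pairing described above, one block per cone, with c_i appended to A_i as a final row, and d_i and w_i appended to b_i and z_i.

```python
import numpy as np

def stack_blocks(As, cs, bs, ds, zs, ws):
    """Stack per-cone data into single block arrays: for each cone i,
    one [A_i; c_i^T] block, one [b_i; d_i] vector, one [z_i; w_i] vector,
    concatenated vertically across cones."""
    A = np.vstack([np.vstack([Ai, ci[None, :]]) for Ai, ci in zip(As, cs)])
    b = np.concatenate([np.append(bi, di) for bi, di in zip(bs, ds)])
    z = np.concatenate([np.append(zi, wi) for zi, wi in zip(zs, ws)])
    return A, b, z

# One cone with a 2x2 A_1 yields a 3x2 A block and length-3 b and z blocks.
A, b, z = stack_blocks([np.eye(2)], [np.array([1.0, 0.0])],
                       [np.zeros(2)], [0.5],
                       [np.array([0.5, 0.0])], [1.0])
print(A.shape, b.shape, z.shape)   # (3, 2) (3,) (3,)
```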

Christopher Kumar Anand [email protected]
Andrew Thomas Curtis [email protected]
Rakshit Kumar [email protected]

Department of Computing and Software,
McMaster University,
1280 Main St. W, ITB-202,
Hamilton, ON, L8S 4K1, Canada
