An interactive visualization and navigation tool for medical volume data

Ove Sommer*, Alexander Dietz, Rüdiger Westermann, Thomas Ertl

Computer Graphics Group, Universität Erlangen-Nürnberg, Germany

Computers & Graphics 23 (1999) 233–244

WSCG '98

*Corresponding author. IMMD IX-Uni Erlangen, Lehrstuhl für GDV, Am Weichselgarten 9, 91058 Erlangen, Germany. Tel.: +49-9131-85-299-23; fax: +49-9131-85-299-31. E-mail address: ove.sommer@informatik.uni-erlangen.de (O. Sommer)

Abstract

In order to make direct volume rendering practicable, convenient visualization options and data analysis tools have to be integrated. For example, direct rendering of semi-transparent volume objects with integrated display of lighted iso-surfaces is one important challenge, especially in medical applications. Furthermore, explicit use of multi-dimensional image processing operations often helps to optimize the exploration of the available data sets. On the other hand, only if interactive frame rates can be guaranteed will such visualization tools be accepted in medical planning and surgery simulation systems. In this paper we propose a volume visualization tool for large scale medical volume data which takes advantage of hardware-assisted 3D texture interpolation and convolution operations. We demonstrate how to use the 3D texture mapping capability of high-end graphics workstations to display arbitrary iso-surfaces which can be directly illuminated to enhance the spatial relations between objects. Back-to-front 3D texture slicing is used to simultaneously display semi-transparent material densities. In order to enable on-the-fly data analysis, first approaches using hardware-assisted convolution operations have been integrated. An implementation of the proposed method based on the OpenInventor rendering toolkit is described, offering interactive frame rates at high image quality including sophisticated user interactions. © 1999 Elsevier Science Ltd. All rights reserved.

Keywords: Volume rendering; Graphics hardware; Convolution; Clipping geometries; OpenInventor

1. Introduction

A number of techniques have been developed to directly visualize 3D scalar fields on rectilinear Cartesian grids, such as data sets from medical imaging modalities. In order to optimize the exploration process, an important challenge is the integration of different rendering algorithms which allow simultaneously representing arbitrary material quantities such as soft tissue or sharp boundary regions.

Two widely used volume visualization methods which are often applied concurrently in medical applications are surface fitting [1] and direct volume rendering [2]. However, due to the large amount of voxels to be processed and the geometric primitives which may be generated, both techniques require time-intensive computations, making it difficult to achieve interactive visualization.

Besides the use of dedicated volume rendering hardware [3–5], the most impressive frame rates at high image quality are definitely obtained by taking advantage of hardware-assisted 3D texture interpolation on modern high-end graphics workstations [6]. On the other hand, direct volume ray-casting offers the highest flexibility in terms of integrated feature enhancement, e.g. the reconstruction of iso-contours and the simulation of realistic lighting and shading effects. Additionally, there is no need to generate intermediate representations such as polygonal meshes if rendering of iso-contours is desired.

Another important trend is the development of graphics hardware that is able to support processing algorithms like discrete convolution operations on 2D or 3D images. Of course, it is not yet clear how these operations can be seriously used in medical applications, but on the other hand, it is quite easy to simulate classical edge detection or noise removal algorithms which can then be applied interactively.



Fig. 1. Volume rendering by 3D texture slicing.


This paper proposes a visualization system that takes advantage of specialized graphics hardware to obtain flexible rendering and data analysis options at optimal speed. The following major issues are addressed by this tool:

• Performance tuning: We make optimal use of hardware resources by exploiting 3D texture interpolation for the acceleration of the volume rendering process. A selection of classical image processing algorithms is implemented based on fast convolution operations.

• Direct volume rendering: We retain the full flexibility of volume ray-casting; in particular, we do not need an intermediate polygon representation for the visualization of surface features.

• Surface shading: We use realistic lighting effects to enhance the understanding of the spatial orientation of iso-surfaces and their relations to semi-transparent volume structures.

• User interface: Since the visualization tool is embedded into the OpenInventor rendering toolkit, it offers the highest flexibility in terms of user manipulation and navigation. Arbitrary objects which are specified in the Inventor file format can be included.

The basic problem in volume visualization is the extraction and rendering of the information content of three-dimensional structures contained in the volume in a way that also allows for accurate analysis. An analytic description of the volume visualization process has been proposed by Krüger [7], who showed that all known direct volume rendering approaches can be understood as specializations of a transport theory model of the propagation of light in materials. Different approximations to the underlying transport equation have been developed [8–12] which differ substantially in the physical phenomena they account for and in the way the emerging numerical models are solved.

One fundamental difference lies in the order of the numerical evaluation of the arising integral equations. This can be done either in object-space order, where for each voxel the projection onto the image plane is computed and visualized, or in image-space order, where for each pixel those voxels are determined and rendered which contribute to the final pixel intensity. Image-space methods, i.e. ray-casting, are known to produce superior quality images at the cost of repeated re-sampling of the data. Although several acceleration techniques have been proposed, such as adapting the integration step size to the variance in the data [13,14] or efficiently stepping through the volume [15], none of these methods allows for interactive image generation rates. Since voxel-based projection methods [16–18] in general avoid the numerically complex re-sampling of the original domain, the expected frame rates are much higher than those of typical ray-casting techniques. Expressing the rotation of the volume objects by 2D shears and a final image warp and exploiting coherence and parallelism accelerates the process considerably [19].

Recently, the use of 3D texture mapping hardware [20], now also available in desktop graphics workstations and PCs, has become a powerful visualization option for direct volume rendering [6]. The rectilinear volume data is first converted to a 3D texture. Then, a number of planes perpendicular to the viewer's line of sight are clipped against the volume bounding box. The resulting polygons are projected onto the image plane using an adequate blending operation to realize back-to-front or front-to-back compositing. Prior to the drawing procedure, texture coordinates in parametric space are assigned to each vertex of the clipped polygons. During rasterization, the slice that is cut out of the 3D texture according to the texture coordinates is mapped onto the generated fragments (see Fig. 1). Since this process is supported by specialized graphics hardware, the time it consumes decreases considerably compared to a software realization. Thus, interactive frame rates can be achieved.
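To make the procedure concrete, the following is a minimal OpenGL sketch of this slicing loop, not the authors' implementation: uploadVolume() and drawSlicePolygon() are hypothetical helpers (the latter is assumed to clip a view-aligned plane against the bounding box and emit the polygon with 3D texture coordinates), and the texture calls are written with the OpenGL 1.2 names rather than the EXT_texture3D suffixes of the paper's era.

```cpp
#include <GL/gl.h>

// Upload the scalar volume as a tri-linearly interpolated 3D texture.
void uploadVolume(const unsigned char* voxels, int dim)
{
    glEnable(GL_TEXTURE_3D);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_INTENSITY8,
                 dim, dim, dim, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, voxels);
}

// Hypothetical: clips a view-aligned plane at 'depth' against the volume
// bounding box and draws the resulting textured polygon.
void drawSlicePolygon(float depth);

// Back-to-front compositing of view-aligned slices.
void renderSlices(int numSlices)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // "over" operator
    for (int i = 0; i < numSlices; ++i) {
        float depth = 1.0f - float(i) / float(numSlices - 1);  // farthest first
        drawSlicePolygon(depth);
    }
    glDisable(GL_BLEND);
}
```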

However, there is one major limitation of this method: it is not possible to integrate realistic lighting and shading effects, which are known to enhance the perception of spatial relations between objects. On the other hand, even in medical applications where the expert often navigates through highly complex structures it is important to further improve the viewer's spatial sensation. Furthermore, it should not be ignored that also in medical data sets there are opaque boundary regions which are best visualized by means of lighted iso-surfaces. In Fig. 2 an example is given which compares the visualization of an iso-surface of the skin of a 128³ human aneurysm CTA-Scan. On the left the rendering was performed with 3D texture mapping and appropriately adjusted transfer functions, while on the right a volume ray-casting technique with integrated lighting was applied. Note that although about 1200 slices were used to generate the left image, it can easily be seen that the visual sensation of fine details and the spatial relation of arbitrary structures is considerably improved by the lighting and shading in the right image.



Fig. 2. Texture slicing vs. iso-surface shading.


Of course, combination methods could be used which first extract a polygonal surface by means of a marching cubes type algorithm [1], which is then integrated with texture based volume rendering in a two-pass procedure. However, especially for large data sets, the memory and computation time requirements of even the most advanced iso-surface algorithms do not allow for an interactive modification of the iso-values. Additionally, the amount of geometric primitives which may be generated can hardly be rendered in an acceptable amount of time.

Recently, first approaches for combining hardware accelerated volume rendering via 3D texture maps and volume lighting were presented. Van Gelder [21] stores the sum of precomputed ambient and reflected light components into the texture volume and performs standard 3D texture compositing as described above. The obvious drawback of this technique is the need for reloading the texture memory each time any of the lighting parameters (including the rotation of the object) changes. On the contrary, Haubner et al. [22] precompute the voxel gradients and store a quantized representation of the orientation as the 3D texture map together with the volume density. Lighting is achieved by indexing into an appropriately replicated color table. However, shaded surfaces are difficult to achieve due to the limited quantization of the normal orientation and the intrinsic hardware interpolation problems.

The basic idea of our approach lies in the utilization of hardware-supported 3D texture interpolation to re-sample the data available on a rectilinear grid. In this way the numerically most critical part of traditional volume ray-casting techniques can be avoided. We propose a method to access the re-sampled texture values which allows detecting arbitrary iso-surfaces. Once their location in space is determined, various illumination effects can be rendered. In a second rendering pass we perform 3D texture slicing to integrate direct rendering of semi-transparent density volumes.

Another feature of the developed visualization system is the integration of specific data analysis algorithms. Up to now, most of the available volume visualization tools based on 3D texture interpolation do not offer additional functionality to analyze the data appropriately; specific structures can only be enhanced or eliminated by modifying the transfer functions in a trial-and-error process. Therefore, we integrated hardware-supported convolution operations into the present approach. These operations can be applied on arbitrarily chosen clip planes and thus allow interactively performing edge detection or noise removal algorithms on selected portions of the data.

In the remaining sections we discuss the basic ideas of our approach. Once we have outlined the underlying concepts and algorithms, we also describe the way they are realized within the OpenInventor toolkit. This is important since the acceptance of such a visualization tool in practical applications can only be guaranteed if it provides an intuitive user interface with convenient manipulation and navigation options. Finally, some examples are given to demonstrate the basic functionality.

2. Textured sweep-planes

So far, all known methods using 3D texture maps for the rendering of scalar volume data operate in a back-to-front or front-to-back order parallel to the image plane. For a number of equally spaced clipping planes, the planar cross sections between these planes and the volume bounding box are computed and drawn into the framebuffer. Multiple polygons mapping onto the same screen region are blended appropriately according to the fragment opacity.

Apart from the advantages of this approach there is one obvious disadvantage which prevents direct lighting calculations. Since texture mapping occurs during rasterization at the end of the graphics pipeline, and incoming fragments are immediately accumulated with pixel values already drawn, it seems to be impossible to access interpolated texture samples. On the other hand, in order to determine smooth iso-surfaces and to compute realistic lighting effects we do need to retrieve these values and their location in object space.

To retain the whole flexibility offered by direct ray-casting techniques while taking advantage of real-time 3D texture interpolation, we propose a slightly different approach.

Instead of slicing the 3D texture map perpendicular to the viewer's line of sight, we perform the same procedure but with cutting planes orthogonal to the image plane, as outlined in Fig. 3. The number of cross sections we cut out of the texture is equal to the number of scanlines defined by the actual viewport.



Fig. 3. Texture slicing orthogonal to image plane.

Fig. 4. Changing the viewing system.

This approach is very similar to the algorithm proposed in [3], which was integrated into a special purpose architecture for interactive volume rendering. However, it is interesting to see that the same algorithm can be efficiently implemented on general purpose graphics workstations and also on low-cost PCs. Additionally, we will show that even if we only aim at rendering opaque iso-surfaces and semi-transparent density volumes without lighting effects, we can save a lot of memory resources.

At first glance, slicing the texture orthogonal to the image plane does not seem to gain anything. Since the cross sections are parallel to the scanlines, rendering with respect to the actual viewing transformation would just project each textured polygon onto a single pixel row. Meaningful images would not be produced because it is impossible to composite these polygons along the viewing direction.

However, note that one cross section generated in this way covers all interpolated data samples which would have been re-sampled by a standard volume ray-casting approach for the corresponding scanline. The entire interpolation procedure is performed by efficiently using the available hardware resources. Nevertheless, it is still an open problem how to retrieve the texture samples in the cutting plane from the graphics pipeline in order to simulate an image-space driven approach.

2.1. Adjusted projection

We solve this problem by temporarily adjusting the global viewing transformation. The eyepoint is rotated in such a way that we are looking from above down onto the object. Now, the viewing direction is parallel to the former viewing up vector. The new viewing plane becomes orthogonal to the original one (Fig. 4).

In the new viewing system, 3D texture based volume rendering is performed as usual by intersecting the volume's bounding box with the slicing planes. By drawing the textured polygons slice by slice, the material values which would have been interpolated within one scanline are generated. In order to access texture samples we read the pixel values from the framebuffer into local main memory. Now, all the information necessary to process an arbitrary scanline is available at the expense of one framebuffer operation. Of course, this operation is quite expensive, but it is still much faster than the interpolation of all data samples without hardware support.
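A minimal sketch of this readback step, assuming the rotated viewing system is already in place; readSweepPlane is a hypothetical helper that returns the re-sampled values of one cutting plane as RGBA bytes.

```cpp
#include <GL/gl.h>
#include <vector>

// Read one rendered sweep-plane back into main memory. Each returned RGBA
// quadruple is a re-sampled texture value of the cutting plane.
std::vector<unsigned char> readSweepPlane(int x, int y, int width, int height)
{
    std::vector<unsigned char> buffer(size_t(width) * height * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed rows
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer.data());
    return buffer;
}
```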

The major benefit of this approach is that volume re-sampling, which accounts for about 60–70% of the total time of a standard ray-caster, is completely avoided. Instead, this is accomplished by 3D texture interpolation and a framebuffer operation. In contrast to the volume rendering pipeline as proposed by Levoy [14] (see Fig. 5), two different data and computation flows are now possible. We should also mention that in general only a small fraction of the entire framebuffer has to be read. This will be discussed below.

2.2. Volume integration

We are now ready to render the volume scanline by scanline. After reading the framebuffer, all re-sampled data values which are necessary to process an entire scanline are available in main memory. Each column of the rectangular memory segment in which the pixel data is stored contains the material samples which would have been reconstructed along one ray of sight emanating from the image plane within the active scanline.



Fig. 5. Overview of volume rendering pipelines.

The amount of light impinging on the view plane at a certain position can be simulated by evaluating the volume rendering integral

I(t_0, t_1) = \int_{t_0}^{t_1} q(t)\, e^{-\int_{t_0}^{t} \alpha(s)\, ds}\, dt    (1)

along each ray. It sums up the contributions of the volume emission q(t) along the ray, scaled by the optical depth according to the volume absorption α(s). Traditionally, the evaluation of the integral is performed using an Eulerian sum: the ray is split into segments of equal length over which the source term and the opacity are assumed to be constant. The continuous integral then evaluates to a discrete sum over the segments along each ray:

I(t_0, t_1) \approx \sum_{k=0}^{n} q_k \alpha_k \prod_{i=0}^{k-1} (1 - \alpha_i).    (2)

Usually, the volume emission and absorption are assigned to each voxel before the integration is performed, or both values are obtained from a transfer function which maps the re-sampled material values to an RGB color and an α component.

In the latter case, as long as we assume an orthographic projection, the volume integration along a ray of sight collapses to the traversal of columns in the rectilinear 2D framebuffer segment in main memory. For each column, Eq. (2) is evaluated in front-to-back order, which allows us to apply arbitrary acceleration techniques like α-termination or β-acceleration.
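As an illustration, a front-to-back evaluation of Eq. (2) over one such column, including α-termination, might look as follows; transfer() is a hypothetical transfer-function lookup, and the threshold 0.99 is an arbitrary choice.

```cpp
struct RGBA { float r, g, b, a; };

// Hypothetical transfer function: material value -> emission and opacity.
RGBA transfer(unsigned char material);

// Front-to-back compositing of Eq. (2) along one framebuffer column.
RGBA compositeColumn(const unsigned char* column, int n)
{
    RGBA out = {0.0f, 0.0f, 0.0f, 0.0f};
    for (int k = 0; k < n && out.a < 0.99f; ++k) {   // alpha-termination
        RGBA s = transfer(column[k]);
        float w = (1.0f - out.a) * s.a;  // remaining transparency times opacity
        out.r += w * s.r;
        out.g += w * s.g;
        out.b += w * s.b;
        out.a += w;
    }
    return out;
}
```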

3. Iso-contour extraction

Maybe the most important drawback of volume projection techniques via 3D texture maps is that the visual appearance of iso-contours can hardly be enhanced by realistic illumination effects. Iso-contours, in general, can be polygonalized by determining the cross section between the surface and the volume cells [1]. Once the polygon model has been generated it can be rendered taking advantage of hardware-assisted rendering of lit and shaded triangle meshes.



Fig. 6. Gradient approximation on original data.

Basically, surface fitting techniques and direct volume rendering with 3D texture maps can be integrated quite easily. But neither the emerging memory overhead nor the time needed to reconstruct arbitrary surfaces is acceptable for large data sets. On the contrary, in volume ray-casting no intermediate surface representation is generated. The surface is directly visualized by successively testing whether the data samples along a ray meet certain criteria. The common approach, which does not always yield accurate results, is to traverse the ray until an iso-value is hit. Then the surface normal at this point is computed and arbitrary lighting or shading models can be evaluated. In [14] a different procedure was employed. Prior to re-sampling, the material values are shaded and classified with respect to an iso-value and the local greyscale gradient. Different material types can be enhanced or suppressed in this way.

Our present approach offers the whole flexibility volume ray-casting does. Classification and shading as proposed in [14] can be applied to the acquired or prepared scalar values, but also the visualization of iso-surfaces by iso-value testing can be integrated straightforwardly. While the sweep-plane is traversed, each data value is compared to the specified iso-value. If a hit with the iso-contour is determined, the normal is calculated and the contour at the actual location is shaded.

Very fast results can be achieved by directly computing the normal from the values already in the sweep-plane buffer. On the other hand, the generated images show typical block artifacts which emerge from the discretization of the buffer, and additionally two further sweep-planes have to be stored in memory in order to access the top and bottom neighbors.

Instead, we transform the location of each data sample which belongs to a contour back into the original voxel array (see Fig. 6). Then, the normal at this point is interpolated from the normals at the eight nearest neighbors. This yields smooth results and is also less memory intensive. Note that the normals we approximate at the discrete grid points are temporarily stored until a complete scanline is processed, thus avoiding multiple approximations of the same normal.

In order to accelerate the shading process we use an additional 3D RGBα texture, which is pre-computed from the volume data set. In each texture element the original scalar value is stored in the α component, while the gradient components are stored in the RGB color channels. Since texture elements are clamped to the range [0, 1], the gradients have to be scaled and translated by a factor of 0.5 before they are inserted into the texture map.
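A small sketch of the scale-and-bias packing just described; the names pack and packTexel are ours, for illustration only.

```cpp
// Map a gradient component from [-1, 1] into [0, 1] and quantize to a byte.
unsigned char pack(float g)
{
    return (unsigned char)((g * 0.5f + 0.5f) * 255.0f);
}

// Build one RGBA texel: gradient in RGB, original scalar value in alpha.
void packTexel(float gx, float gy, float gz, unsigned char scalar,
               unsigned char texel[4])
{
    texel[0] = pack(gx);
    texel[1] = pack(gy);
    texel[2] = pack(gz);
    texel[3] = scalar;
}
```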

When textured with the new RGBα map, after rendering the cross polygons and transferring the pixel values into main memory, each sample consists of a quadruple in which the first three components correspond to the tri-linearly interpolated voxel gradient and the fourth component represents the scalar material value. In order to determine the normal at a surface point, the RGB values have to be scaled and translated appropriately.

In this way we can directly evaluate the lighting model based on the retrieved gradient values. This can be done with respect to one or multiple iso-values or for each re-sampled data item separately.
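To illustrate the unpacking, the sketch below undoes the scale-and-bias and evaluates a simple diffuse term at a surface point; the paper does not spell out its lighting model, so the diffuse-only shading and the normalized light direction are assumptions.

```cpp
#include <cmath>

// Recover the normal from an interpolated RGBA sample and evaluate a
// diffuse term against a normalized light direction.
float shadeSample(const unsigned char rgba[4], const float light[3])
{
    float n[3];
    for (int i = 0; i < 3; ++i)
        n[i] = rgba[i] / 255.0f * 2.0f - 1.0f;   // undo the 0.5 scale/bias
    float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
    float diffuse = n[0]*light[0] + n[1]*light[1] + n[2]*light[2];
    return diffuse > 0.0f ? diffuse : 0.0f;      // clamp back-facing normals
}
```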

4. Convolution operations

Particularly in image processing, the application of discrete convolution operations on the available pixel values is a commonly used data analysis option. Depending on the desired result, different kinds of convolution kernels are applied which offer distinct choices to enhance or suppress specific features. For example, difference operators to detect edges or simple average operators to perform noise reduction are often applied separately or one after another to improve the overall understanding of the data. However, although fast software realizations exist which efficiently perform discrete convolution operations on 2D images, in general their use is limited to non-realtime applications due to the numerical complexity of the filtering process. On the other hand, special purpose hardware exists, now also available on modern graphics workstations, which enables the convolution of large scale images with arbitrary filter kernels interactively.

In particular, the newer SGI machines provide extensions which allow hardware-supported convolution of multi-channel pixel data. While the data is sent through the rendering pipeline, e.g., drawing from main memory into the framebuffer, convolution of the data takes place before it gets written to the framebuffer. Thus, the convolution of arbitrary slices from the volume data can be performed quite easily. In the present work the key idea was to extend this functionality to arbitrary clip planes passing through the volume.

Whenever a clip plane is activated and convolution is enabled, the pixel values within the clip plane are convolved with a filter kernel that can be chosen from a pre-defined toolbox. Several kernels have been implemented, e.g., Sobel, median, Laplacian, Marr-Hildreth, blur, etc. In this way it is possible to detect edges, to sharpen the clip plane image or to suppress noise within it. Of course, it is not obvious whether it really makes sense to perform the convolution on arbitrarily sliced images from the data. Since the data values within the clip plane are interpolated from the discrete voxel values, filter operations can lead to results which may be in some sense misleading. However, since the operations are completely interactive, their application can often help to enhance the overall understanding (see Fig. 7).



Fig. 7. Clip plane without convolution and with enabled high-pass filtering.


In order to perform the convolution, the clip plane extent has to be retrieved. This is accomplished in three steps. First, we request the clip plane equation from the OpenGL state. Then, the viewing transformation is adjusted such that we are looking orthogonal to the clip plane. Finally, the plane is clipped against the volume bounding box and the obtained polygon is textured with the 3D volume and projected onto the viewport. Now we have all the interpolated voxel values within the clip plane in the framebuffer. Reading the framebuffer, enabling convolution and drawing the image back into the framebuffer leads to the desired result.
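A sketch of this read-convolve-draw cycle, written against the OpenGL imaging subset (the EXT_convolution functionality of the SGI machines mentioned above); the 3×3 Laplacian kernel stands in for the high-pass filter, and the raster position is assumed to be set to the lower-left corner of the clip-plane image.

```cpp
#include <GL/gl.h>
#include <vector>

// Apply a high-pass filter to the clip-plane image currently in the
// framebuffer: read it back, then redraw it through the convolution stage.
void convolveClipPlane(int width, int height)
{
    std::vector<float> pixels(size_t(width) * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels.data());

    static const float laplacian[9] = { 0.0f, -1.0f,  0.0f,
                                       -1.0f,  4.0f, -1.0f,
                                        0.0f, -1.0f,  0.0f };
    glConvolutionFilter2D(GL_CONVOLUTION_2D, GL_RGBA, 3, 3,
                          GL_LUMINANCE, GL_FLOAT, laplacian);
    glEnable(GL_CONVOLUTION_2D);
    // The kernel is applied while the pixels travel through the pixel
    // pipeline, before they are written back to the framebuffer.
    glDrawPixels(width, height, GL_RGBA, GL_FLOAT, pixels.data());
    glDisable(GL_CONVOLUTION_2D);
}
```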

5. Clipping geometries

In addition to interactive frame rates, the possibility to flexibly edit the data to be processed is one of the most important requirements in practical applications. Direct manipulation of transfer functions available in the extended OpenGL functionality allows the user to arbitrarily map scalar values to RGBα color components. Although this technique enables easy and intuitive enhancement of structures based on simple thresholding, it fails whenever different materials are represented by similar scalar values.

Quite often, multiple planar clipping planes are used to construct more complex geometries. However, this strategy seems to be rather cumbersome, and even the simple task of clipping an arbitrarily scaled box cannot be realized using this approach. This feature, and even more flexibility, can be achieved by taking advantage of the per-pixel operations provided in the rasterization unit of modern graphics workstations.

As outlined in [23], the basic idea is that for each cutting plane used in the texture slicing algorithm, those textured fragments which are not contained in the clipping object are prevented from contributing to the final image (see Fig. 8). The OpenGL stencil buffer is used to restrict the rendering of the texture-mapped slice to only those pixels. When the stencil buffer test is enabled, a pixel is drawn only if it passes the test between a user-defined reference value and the value of the corresponding entry in the stencil buffer. Thus, by initializing the entries in the stencil buffer properly and by choosing an adequate comparison function, pixels can be locked against further drawing operations.

In order to determine for each plane whether a pixel is covered by a cross section, the clipping object is rendered in polygon mode, thereby setting the stencil buffer whenever a pixel is affected. First, an additional clipping plane with the same orientation and position as the slicing plane is enabled. Second, back faces are drawn with respect to the present viewing direction and everything that is in front of the plane is clipped. In a second pass all stencil bits which are set improperly have to be updated. This is accomplished by rendering the front faces and clearing the stencil buffer at each position where a pixel passes the depth test and the stencil buffer test.
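A simplified sketch of this two-pass stencil setup for one slicing plane; drawClipObject() and drawTexturedSlice() are hypothetical helpers, only the stencil buffer is written during the two object passes, and the depth-test interplay needed for concave clip objects is elided (the version below is correct for convex geometries).

```cpp
#include <GL/gl.h>

void drawClipObject();     // hypothetical: renders the clipping geometry
void drawTexturedSlice();  // hypothetical: renders the textured cross section

void renderClippedSlice(const double slicePlane[4])
{
    // Clip away everything in front of the current slicing plane.
    glClipPlane(GL_CLIP_PLANE0, slicePlane);
    glEnable(GL_CLIP_PLANE0);

    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // stencil only
    glEnable(GL_CULL_FACE);

    // Pass 1: back faces mark pixels covered by the clip object's interior.
    glCullFace(GL_FRONT);                    // draw back faces only
    glStencilFunc(GL_ALWAYS, 1, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    drawClipObject();

    // Pass 2: front faces clear stencil bits that were set improperly.
    glCullFace(GL_BACK);                     // draw front faces only
    glStencilFunc(GL_ALWAYS, 0, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
    drawClipObject();

    glDisable(GL_CULL_FACE);
    glDisable(GL_CLIP_PLANE0);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

    // Finally, draw the textured slice only where the stencil is still set.
    glStencilFunc(GL_EQUAL, 1, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawTexturedSlice();
    glDisable(GL_STENCIL_TEST);
}
```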



Fig. 8. The use of arbitrary clipping geometries is demonstrated for the polygonal object. In regions where the object intersects the actual slice the stencil buffer is locked. The textured slice is only rendered into the locked/unlocked pixels.

Fig. 9. Clipping the brain by applying advanced pixel operations.


In our visualization toolkit we integrated different alternative clipping geometries which can be further modified by different kinds of manipulators. Our tool offers the user simple objects like boxes and spheres but also more complex and user-defined shapes. In Fig. 9 the advantages of this approach are demonstrated by a simple example.

Let us assume that an MRI-Scan is given and that the user wants to separate the brain. Usually this can hardly be achieved by simple thresholding, since values representing the brain are similar to those representing the skin. Only a time-consuming manual segmentation of the relevant structures would lead to sufficient results. In the present example we constructed a polygonal object which completely contains the brain by interactively distorting the vertices of a coarsely defined sphere. After the few minutes needed to model the brain's hull, the object can be used immediately to separate the relevant structures. Even more efficiently, the transfer function can now be adapted appropriately in order to suppress non-relevant structures.

6. Using OpenInventor

The presented algorithm was implemented using OpenInventor, an object-oriented graphics toolkit built on top of OpenGL, which has become a de facto standard for interactive modeling, rendering and manipulation of 3D scenes [24]. One part of the work presented here is the complete integration of texture mapping based volume rendering into the OpenInventor framework in order to obtain the whole flexibility and functionality offered by the toolkit. By introducing a new class, the volume renderer is represented as a separate object within the hierarchical structure of the scene graph. This allows convenient application of built-in manipulators, sensors, editors and other pre-defined classes, methods and features (light sources, anti-aliasing, stereo mode, perspective/parallel rendering, fly, walk, trackball).

In particular, the design of the new volume object takes advantage of the OpenInventor structuring mechanism of node kits which organizes the newly implemented nodes as separately managed subgraphs (see Fig. 10). Our SoVolumeKit is subclassed from SoBaseKit and contains clip planes with a geometric representation which can be accessed from the OpenInventor standard manipulators, and a SoVolume::SoShape node. The internal structure is designed to support the handling of multiple volumes. During rendering, an object of type SoGLRenderAction traverses the scene graph and asks all objects to render themselves by calling their local GLRender method. Within this method of SoVolume all object-specific OpenGL calls are performed.
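The skeleton of such a shape node might look as follows; this is a sketch in the spirit of the SoVolume node, not the authors' class, and renderSlicesFromState() is a hypothetical helper that issues the texture-slicing GL calls.

```cpp
#include <Inventor/nodes/SoShape.h>
#include <Inventor/nodes/SoSubNode.h>
#include <Inventor/actions/SoGLRenderAction.h>

class SoVolume : public SoShape {
    SO_NODE_HEADER(SoVolume);
public:
    static void initClass() { SO_NODE_INIT_CLASS(SoVolume, SoShape, "Shape"); }
    SoVolume() { SO_NODE_CONSTRUCTOR(SoVolume); }

protected:
    // Called by SoGLRenderAction during scene graph traversal; all
    // volume-specific OpenGL calls are issued from here.
    virtual void GLRender(SoGLRenderAction* action)
    {
        if (!shouldGLRender(action))
            return;
        // Traversal state (clip planes, matrices, lights) is available
        // through action->getState().
        renderSlicesFromState(action);  // hypothetical helper
    }

    virtual void computeBBox(SoAction*, SbBox3f& box, SbVec3f& center)
    {
        box.setBounds(SbVec3f(0, 0, 0), SbVec3f(1, 1, 1));  // unit volume
        center.setValue(0.5f, 0.5f, 0.5f);
    }

    virtual void generatePrimitives(SoAction*) {}  // nothing to generate

private:
    void renderSlicesFromState(SoGLRenderAction* action);
};
```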

We can arbitrarily switch between traditional back-to-front 3D texture projection and our new technique of ray-casting through textured sweep-planes implementing realistic lighting effects. If the expensive lighting mode is enabled, we automatically switch back to 3D texture projection while the camera position is modified, returning to lighting mode after a certain time-out. For the lighting calculation during ray traversal we use the lighting parameters (position, direction, etc.) of the interactively placed OpenInventor lights. These and other parts of the traversal state like color, clip planes, transformation matrices, etc. can be accessed from the SoState object delivered by the SoGLRenderAction and by direct OpenGL calls.

Since we have to cut multiple slices out of the volume orthogonal to the viewport, we need a separate render area in which the textured polygons can be drawn from the adjusted view point. Choosing the back buffer has several disadvantages. First, objects already drawn into the back buffer would be destroyed. Second, the actual render area could be too small or could be overlaid by other windows, which makes calls to glDrawPixels fail. Therefore, we decided to use the SGIX_pbuffer extension, which provides a part of the physical framebuffer that can be directly accessed by the graphics hardware but is not displayed on the screen. Furthermore, pbuffers can be locked exclusively by a certain application against other access.



Fig. 10. Subgraph of SoVolumeKit: It contains a geometric representation of the bounding box, the subgraph of up to six clip planes, and the SoVolume node which is responsible for the rendering.

Table 1
Timings in seconds for the 128³ human head data set using one texture map. Image resolution was 400×400.

          Density          Iso
Mode      HW      SW       HW      SW
GrPipe    1.01    -        1.01    -
Sample    -       42.4     -       15.1
Comp      32.20   33.4     -       -
Grad*     -       -        1.80    1.9
All       33.21   75.8     1.01*   17.0

* If we use an RGBα texture map with pre-computed gradients, the time which is needed to perform the gradient calculations is almost negligible [25].


In order to minimize the amount of pbuffer memory needed, only the bounding region that is covered by the volume object is taken into account during slicing. First, we determine the bounding box extent, and then we slice that extent scanline by scanline. Arbitrarily translated objects can be handled in this way.

Our approach easily extends to perspective projections. All that has to be done is to modify the current projection matrix in such a way that an additional rotation of 90° around the x-axis is applied after the perspective projection. Also, the textured sweep-planes must be inclined according to the chosen perspective for each scanline. However, this procedure does not produce correct results on all types of graphics hardware, since perspective correction during 3D texture mapping on the slicing planes is not always supported.
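A sketch of that projection adjustment, assuming the fixed-function matrix stack: loading the rotation first makes it act after the perspective transform, since a vertex is multiplied by the product rotation × projection.

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

// Set up the sweep-plane projection: perspective first, then an extra
// 90-degree rotation about the x-axis applied after it.
void setSweepProjection(double fovY, double aspect, double zNear, double zFar)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glRotated(90.0, 1.0, 0.0, 0.0);             // applied last
    gluPerspective(fovY, aspect, zNear, zFar);  // applied first
    glMatrixMode(GL_MODELVIEW);
}
```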



Fig. 11. Example images from an OpenInventor session. They show the filtering of a clip plane extent with a high-pass filter, multiple clip planes, lighted iso-surfaces and integrated display of semi-transparent material and opaque surfaces.



7. Results

The results were computed on a Silicon Graphics Indigo2 Maximum Impact with a 250 MHz R4400 processor, 128 MB memory and the TRAM option. Our experiments were run on different data sets to demonstrate the impact of the data resolution on certain parts of our algorithm and also to show the functionality of the implemented visualization tool. Two data sets were used: a human head MRI-Scan with 128³ voxels and a 256²×128 CTA-Scan of a human aneurysm.

Table 1 shows accurate timings for all distinct parts of the algorithm. Basically, we distinguished between four different tasks: (1) all operations within the graphics pipeline (GrPipe), including framebuffer access; (2) volume re-sampling by tri-linear interpolation (Sample); (3) mapping via the transfer function and compositing (Comp); (4) gradient calculations in the iso-contour reconstruction (Grad). Finally, the overall times are given (All).

First, the front-to-back compositing of material color and opacity values was applied (Density). This is equivalent to the standard volume rendering technique using 3D texture maps. Second, we focused on a specific contour surface belonging to a given iso-value (Iso). Within each column we compared the results of the hardware-assisted (HW) variant with its software (SW) counterpart. Note that re-sampling is almost negligible in our approach, and that we do not need to process the whole volume if the iso-value is changed. This is necessary in shell-rendering, where those voxels which belong to a certain iso-surface have to be classified in advance.

We can see that the framebuffer operations dominate the overall times during surface rendering. For each scanline a 400×⌈128·√3⌉ framebuffer array was read and traversed in main memory. This corresponds to a step size of one voxel, which has also been chosen in the reference method. No time is spent on volume re-sampling, which is in fact the dominant cost in a standard ray-casting method. We see that gradient calculations significantly slow down the overall times. This overhead can be avoided, as described above, if we store the volume data as an RGBα texture map.

We should mention that, in order to optimize our algorithm, we completely avoid the software compositing in the actual implementation. Once the iso-contour is determined and written to the framebuffer, we store the z-buffer values and perform a second rendering pass using the traditional back-to-front 3D texture projection. Consequently, the overall time in the first column of Table 1 decreases to approximately 0.3 s. This method was used to generate the images in the color page, Fig. 11.

Our second experiment was run on the 256²×128 aneurysm CT-Scan. Due to the limited texture memory of our target architecture, we first had to split the volume into distinct blocks. In order to obtain a texture slice belonging to a certain scanline, all bricks have to be reloaded into texture memory. This slows down the rendering process considerably: the overall time increased by about a factor of 4.

The last two images in the color plate show additional examples from an interactive session with the presented visualization tool. The time needed to render the head data set was approximately 3.0 s. Rendering the bricked aneurysm took about 8.7 s.

8. Discussion

We have extended the volume rendering techniques via 3D texture maps by combining hardware-assisted texture interpolation with realistic illumination effects for shaded iso-surfaces. The whole flexibility of front-to-back volume ray-casting is maintained in this way, i.e. simultaneous visualization of soft tissue and solid iso-contours can be achieved. Of course, we cannot compete with the interactive frame rates achieved by 3D texture slicing techniques, but on the other hand, the integration of iso-surface reconstruction and hardware-assisted convolution operations allows almost interactive analysis of large data sets. In contrast to other approaches which use extended color maps and pre-computed voxel gradients to simulate direct volume lighting, we save memory, get smooth contours and avoid numerical re-calculations during movements. One drawback is the handling of multiple texture maps: successively reloading texture maps for each scanline slows down the algorithm.

Acknowledgements

Thanks to Wolfgang Heidrich for his valuable advice concerning the OpenGL programming and to Peter Hastreiter for providing the medical background.

References

[1] Lorensen WE, Cline HE. Marching cubes: a high resolution 3D surface construction algorithm. Computer Graphics 1987;21(4):163-9.

[2] Kaufman A. Introduction to volume visualization. Silver Spring, MD: IEEE Computer Society Press, 1991.

[3] Guenter T. VIRIM: a massively parallel processor for real-time volume visualization in medicine. In: Proceedings of the 9th EG Hardware Workshop. Eurographics Association, Reading, MA: Addison-Wesley, 1994:103-8.

[4] Knittel G, Straßer W. A compact volume rendering accelerator. In: Kaufman A, Krüger W, editors. 1994 Symposium on Volume Visualization. ACM SIGGRAPH, 1994:67-74.

[5] Pfister H, Kaufman A. Cube-4: a scalable architecture for real-time volume rendering. In: Crawfis R, Hansen Ch, editors. 1996 Symposium on Volume Visualization. ACM SIGGRAPH, 1996:47-54.

[6] Cabral B, Cam N, Foran J. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. In: Kaufman A, Krüger W, editors. 1994 Symposium on Volume Visualization. ACM SIGGRAPH, 1994:91-8.

[7] Krüger W. The application of transport theory to the visualization of 3-D scalar data fields. In: Kaufman A, editor. Visualization '90, Los Alamitos, CA. Silver Spring, MD: IEEE Computer Society Press, 1990:273-80.

[8] Blinn J. Light reflection functions for simulation of clouds and dusty surfaces. Computer Graphics, Proceedings of SIGGRAPH '82 1982;16(3):21-30.

[9] Drebin B, Carpenter L, Hanrahan P. Volume rendering. Computer Graphics 1988;22(4):65-74.

[10] Kajiya JT, Von Herzen BP. Ray tracing volume densities. Computer Graphics 1984;18(3):165-74.

[11] Levoy M. Efficient ray tracing of volume data. ACM Transactions on Graphics 1990;9(3):245-61.

[12] Sabella PA. A rendering algorithm for visualizing 3D scalar fields. Computer Graphics 1988;22(4):51-8.

[13] Danskin J, Hanrahan P. Fast algorithms for volume ray tracing. In: Transactions of the 1992 Workshop on Volume Visualization, 1992:91-8.

[14] Levoy M. Display of surfaces from volume data. IEEE Computer Graphics and Applications 1988;8(3):29-37.

[15] Yagel R, Shi Z. Accelerating volume animation by space leaping. In: Nielson GM, Bergeron D, editors. Visualization '93, Los Alamitos, CA. Silver Spring, MD: IEEE Computer Society Press, 1993:62-9.

[16] Laur D, Hanrahan P. Hierarchical splatting: a progressive refinement algorithm for volume rendering. Computer Graphics 1991;25(4):285-8.

[17] Westover L. Footprint evaluation for volume rendering. Computer Graphics 1990;24(4):367-76.

[18] Wilhelms J, Van Gelder A. A coherent projection approach for direct volume rendering. Computer Graphics 1991;25(4):275-84.

[19] Lacroute P, Levoy M. Fast volume rendering using a shear-warp factorization of the viewing transform. Computer Graphics, Proceedings of SIGGRAPH '94 1994;28(4):451-8.

[20] Akeley K. RealityEngine graphics. Computer Graphics, Proceedings of SIGGRAPH '93 1993;27(4):109-16.

[21] Van Gelder A, Kim K. Direct volume rendering with shading via three-dimensional textures. In: Crawfis R, Hansen Ch, editors. 1996 Symposium on Volume Visualization. ACM SIGGRAPH, 1996:23-30.

[22] Haubner M, Krapichler Ch, Lösch A, Englmeier K-H, van Eimeren W. Virtual reality in medicine: computer graphics and interaction techniques. IEEE Transactions on Information Technology in Biomedicine, 1996.

[23] Westermann R, Ertl T. Efficiently using graphics hardware in volume rendering applications. In: Computer Graphics Proceedings SIGGRAPH '98. Annual Conference Series, ACM SIGGRAPH, July 1998:169-77.

[24] Wernecke J. The Inventor mentor: programming object-oriented 3D graphics with OpenInventor. Reading, MA: Addison-Wesley, 1994.

[25] Sommer O, Dietz A, Westermann R, Ertl T. An interactive visualization and navigation tool for medical volume data. In: Skala V, editor. WSCG '98, vol. 2, February 1998:362-71. URL: http://wscg.zcu.cz.
