APPEARANCE-PRESERVING SIMPLIFICATION OF POLYGONAL MODELS

by Jonathan David Cohen

A dissertation submitted to the faculty of the University of North Carolina at Chapel Hill in partial fulfillment of the requirements for the degree of Doctor of Philosophy in the Department of Computer Science

Chapel Hill 1999

Approved by:
Advisor: Professor Dinesh Manocha
Reader: Professor Anselmo Lastra
Reader: Professor Gregory Turk
Professor Frederick Brooks, Jr.
Professor Ming Lin
1.7.1 Increased Robustness and Scalability of the Simplification Envelopes Algorithm ........................................................................................................... 11
1.7.2 Local Error Metric for Surface-to-Surface Deviation, with Bijective Mappings between Original and Simplified Surfaces ........................................ 12
1.7.3 Bijective Mappings between Original and Simplified Surfaces for the Edge Collapse Operation .................................................................................... 12
1.7.4 Local Error Metric for Texture Deviation between Original and Simplified Surfaces ............................................................................................ 13
LIST OF TABLES

Table 1: Simplification ε's as a percentage of bounding box diagonal and runtimes in minutes on HP 735/125 MHz ..................................................................... 56
Table 2: A few simplification timings run on an SGI MIPS R10000 processor ....................... 56
Table 3: Experimenting with cascading on the rotor model and the resulting number of triangles ................................................................................................... 59
Table 4: Comparison of simplification using average normal vectors for offset computation vs. using linear programming to achieve fewer invalid normals. The bunny model is simplified using cascaded simplifications after ε=1/2 %. ........................................................................................................... 61
Table 5: Effect of lazy cost evaluation on simplification speed. The lazy method reduces the number of edge cost evaluations performed per edge collapse operation, speeding up the simplification process. Time is in minutes and seconds on a 195 MHz MIPS R10000 processor ................................................. 90
Table 6: Simplifications performed. CPU time indicates time to generate a progressive mesh of edge collapses until no more simplification is possible. ............................ 92
Table 7: Several models used to test appearance-preserving simplification. Simplification time is in minutes on a MIPS R10000 processor. .......................... 117
LIST OF FIGURES
Figure 1: The auxiliary machine room of a notional submarine model: 250,000 triangles ....................................................................................................... 1
Figure 2: The Stanford bunny model: 69,451 triangles............................................................. 3
Figure 19: Computation of ∆i .................................................................................................. 43
Figure 20: Simplification envelopes for various ε, measured as a percentage of bounding box diagonal ...................................................................................... 46
Figure 21: Adding a triangle into a hole creates up to three smaller holes ............................ 48
Figure 22: Curve at local minimum of approximation............................................................ 49
Figure 23: Simplifying a bordered surface using border tubes. .............................................. 51
Figure 24: An adaptive simplification of the bunny model that favors the face, while simplifying its hind quarters ........................................................................ 53
Figure 25: Viewing a level of detail. ....................................................................................... 54
Figure 26: Looking down into the auxiliary machine room (AMR) of a submarine model. This model contains nearly 3,000 objects, for a total of over half a million triangles. We have simplified over 2,600 of these objects, for a total of over 430,000 triangles. ..................................................................... 57
Figure 27: A battery from the AMR. All parts but the red are simplified representations. At full resolution, this array requires 87,000 triangles. At this distance, allowing 4 pixels of error in screen space, we have reduced it to 45,000 triangles. ............................................................................ 57
Figure 28: Level-of-detail hierarchies for three models. The approximationdistance, ε, is taken as a percentage of the bounding box diagonal. ..................... 58
Figure 29: The natural mapping primarily maps triangles to triangles. The two grey triangles map to edges, and the collapsed edge maps to the generated vertex. .................................................................................................. 64
Figure 30: Polygons in the plane. (a) A simple polygon (with an empty kernel). (b) A star-shaped polygon with its kernel shaded. (c) A non-simple polygon with its kernel shaded. ............................................................................ 65
Figure 31: Projections of a vertex neighborhood, visualized in polar coordinates. (a) No angular intervals overlap, so the boundary is star-shaped, and the projection is a bijection. (b) Several angular intervals overlap, so the boundary is not star-shaped, and the projection is not a bijection. .................. 66
Figure 32: Three projections of a pair of edge-adjacent triangles. (a) The projected edge is not a fold, because the normals of both triangles are within 90° of the direction of projection. (b) The projected edge is a degenerate fold, because the normal of ∆2 is perpendicular to the direction of projection. (c) The projected edge is a fold, because the normal of ∆2 is more than 90° from the direction of projection. ....................................................................... 68
Figure 33: The edge neighborhood is the union of two vertex neighborhoods. If we remove the two triangles of their intersection, we get two independent polygons in the plane. ............................................................................................ 70
Figure 34: A fold-free projection of an edge neighborhood, Ne, that is not a bijection. (a) The projection of Ne has a non-empty kernel. (b) The projection of Nvgen has a 2-covered angle space. This can be detected by noting that the angular intervals of the triangles of Nvgen sum to 4π. .............. 73
Figure 35: A 2D example of an invalid projection due to folding. ......................................... 76
Figure 36: A 2D example of the valid projection space. (a) Two line segments and their normals. (b) The 2D Gaussian circle, the planes corresponding to each segment, and the space of valid projection directions (shaded in grey). ................................................................................................... 77
Figure 37: The neighborhood of an edge as projected into 2D ............................................... 79
Figure 38: (a) An invalid 2D vertex position. (b) The kernel of a polygon is the set of valid positions for a single, interior vertex to be placed. It is the intersection of a set of inward half-spaces. ........................................................ 79
Figure 39: (a) Edge neighborhood and generated vertex neighborhood superimposed. (b) A mapping in the plane, composed of 25 polygonal cells (each cell contains a dot). Each cell maps between a pair of planar elements in 3D. ........... 81
Figure 40: Each point, x, in the plane of projection corresponds to two 3D points, Xi-1 and Xi, on meshes Mi-1 and Mi, respectively. ................................................ 82
Figure 41: The minimum of the upper envelope corresponds to the vertex positionthat minimizes the incremental surface deviation. ................................................ 84
Figure 42: 2D illustration of the box approximation to total surface deviation. (a) A curve has been simplified to two segments, each with an associated box to bound the deviation. (b) As we simplify one more step, the approximation is propagated to the newly created segment. ................................ 85
Figure 43: Pseudo-code to propagate the total deviation from mesh Mi-1 to Mi...................... 86
Figure 44: Error growth for simplification of two models: (top) bunny model; (bottom) wrinkled torus model. The nearly coincident curves indicate that the error for the lazy cost evaluation method grows no faster than the error for the complete cost evaluation method over the course of a complete simplification. .................................................................................... 89
Figure 45: Complexity vs. screen-space error for several simplified models. ........................ 92
Figure 46: The Ford Bronco model at 6 levels of detail, all at 2 pixels of screen-space error (0.17 mm) ............................................................................... 93
Figure 48: Close-ups of the Ford Bronco model at several resolutions. ................................. 94
Figure 49: Two transitional distances for the wrinkled torus model at 1 pixel (0.085 mm) of error. ............................................................................................. 95
Figure 50: 6 levels of detail for the lion (colors indicate levels of detail of individual parts) .................................................................................................... 96
Figure 51: Bumpy Torus Model. Left: 44,252 triangles, full resolution mesh. Middle and Right: 5,531 triangles, 0.25 mm maximum image deviation. Middle: per-vertex normals. Right: normal maps. ................................................ 104
Figure 52: Components of an appearance-preserving simplification system........................ 106
Figure 53: A look at the ith edge collapse. Computing Vgen determines the shape of the new mesh, Mi. Computing vgen determines the new mapping, Fi, to the texture plane, P. ......................................................................................... 106
Figure 54: A patch from the leg of an “armadillo” model and its associated normal map. ....................................................................................................... 109
Figure 56: Texture coordinate deviation and correction on the lion’s tail. Left: 1,740 triangles, full resolution. Middle and Right: 0.25 mm maximum image deviation. Middle: 108 triangles, no texture deviation metric. Right: 434 triangles, with texture metric. .................................................................. 112
Figure 57: Levels of detail of the “armadillo” model shown with 1.0 mm maximum image deviation. Triangle counts are: 7,809, 3,905, 1,951, 975, 488. ................ 117
Figure 58: Close-up of several levels of detail of the “armadillo” model. Top: normal maps. Bottom: per-vertex normals. ........................................................ 118
1. INTRODUCTION
1.1 Motivation
In 3D computer graphics, polygonal models are often used to represent individual objects
and entire environments. Planar polygons, especially triangles, are used primarily because
they are easy and efficient to render. Their simple geometry has enabled the development of
custom graphics hardware, currently capable of rendering millions or even tens of millions of
triangles per second. In recent years, such hardware has become available even for personal
computers. Due to the availability of such rendering hardware and of software to generate
polygonal models, polygons will continue to play an important role in 3D computer graphics
for many years to come.
However, the simplicity of the triangle is not only its main advantage, but its main disad-
vantage as well. It takes many triangles to represent a smooth surface, and environments of
tens or hundreds of millions of triangles or more are becoming quite common in the fields of
industrial design and scientific visualization. For instance, in 1994, the UNC Department of
Computer Science received a model of a notional submarine from the Electric Boat division
Figure 1: The auxiliary machine room of a notional submarine model: 250,000 triangles
of General Dynamics, including an auxiliary machine room composed of 250,000 triangles
(see Figure 1) and a torpedo room composed of 800,000 triangles. In 1997, we received from
ABB Engineering a coarsely-tessellated model of an entire coal-fired power plant, composed
of over 13,000,000 triangles. It seems that the remarkable performance increases of 3D
graphics hardware systems cannot yet match the desire and ability to generate detailed and
realistic 3D polygonal models.
1.2 Polygonal Simplification
This imbalance between 3D rendering performance and 3D model size makes it difficult for
graphics applications to achieve interactive frame rates (10-20 frames per second or more).
Interactivity is an important property for applications such as architectural walkthrough,
industrial design, scientific visualization, and virtual reality. To achieve this interactivity in
spite of the enormity of data, it is often necessary to trade fidelity for speed.
We can enable this speed/fidelity tradeoff by creating a multi-resolution representation of
our models. Given such a representation, we can render smaller or less important objects in
the scene at a lower resolution (i.e. using fewer triangles) than the larger or more important
objects, and thus we render fewer triangles overall. Figure 2 shows a widely-used test model:
the Stanford bunny. This model was acquired using a laser range-scanning device; it contains
over 69,000 triangles. When the 2D image of this model has a fairly large area, this may be a
reasonable number of triangles to use for rendering the image. However, if the image is
smaller, like Figure 3 or Figure 4, this number of triangles is probably too large. The
rightmost image in each of these figures shows a bunny with fewer triangles. These
complexities are often more appropriate for images of these sizes. Each of these images is typically some
small piece of a much larger image of a complex scene.
For CAD models, such representations could be created as part of the process of building
the original model. Unfortunately, the robust modeling of 3D objects and environments is
already a difficult task, so we would like to explore solutions that do not add extra burdens to
the original modeling process. Also, we would like to create such representations for models
acquired by other means (e.g. laser scanning), models that already exist, and models in the
process of being built.
Figure 2: The Stanford bunny model: 69,451 triangles
69,451 triangles 2,204 triangles
Figure 3: Medium-sized bunnies.
69,451 triangles 575 triangles
Figure 4: Small-sized bunnies.
Simplification is the process of automatically reducing the complexity of a given model.
By creating one or more simpler representations of the input model (generally called levels of
detail), we convert it to a multi-resolution form. This problem of automatic simplification is
rich enough to provide many interesting and useful avenues of research. There are many
issues related to how we represent these multi-resolution models, how we create them, and
how we manage them within an interactive graphics application. This dissertation is con-
cerned primarily with the issues of level-of-detail quality and rendering performance. In
particular, we explore the question of how to preserve the appearance of the input models to
within an intuitive, user-specified tolerance and still achieve a significant increase in render-
ing performance.
1.3 Thesis Statement
By applying 3D Euclidean distance metrics to the process of geometric simplification and
by representing appearance attribute fields in a decoupled form, we can preserve the ap-
pearance of polygonal models to within an intuitive, user-specified tolerance while achieving
significant increases in rendering performance.
1.4 Design Criteria
In this dissertation, we present techniques for automatically generating and employing
simplifications of polygonal models. We have developed these techniques with two major
design criteria in mind.
• Provide guaranteed, measured quality in the output models.
• Pre-compute as much as possible, keeping the real-time graphics application as fast as
possible.
The first criterion is the one that really defines our work. With guaranteed error bounds
available for all our levels of detail, it is possible for a graphics application to automatically
choose which level of detail to render for each object without any user intervention on a
per-object basis. This is crucial for complex environments containing thousands of objects or
more.
The second criterion is based on empirical observation. Several excellent simplification
systems, like those of [Hoppe 1997] and [Luebke and Erikson 1997], now exist that dynami-
cally update the simplification of an object or scene while an interactive graphics application
is running. This becomes more feasible as more processing power becomes available on the
host machine. However, we have chosen to emphasize run-time efficiency. In our view, those
extra CPU cycles available on the host may be required for other application-related pur-
poses, rather than for assisting our level-of-detail management. Our research here explores
the limits of what level of quality preservation is possible using only statically-computed
levels of detail, allowing us to maximize the processing resources of our graphics computer
during interactive applications. This trade-off is discussed further in Section 2.4.3.
1.5 Input Domain
The algorithms we develop operate on manifold triangle meshes, including those with
borders. In the continuous domain, a manifold surface is one in which every point has a
neighborhood homeomorphic to an open disc. In the discrete domain of triangle meshes, such a surface has two
topological properties. First, every vertex is adjacent to a set of triangles that form a single,
complete cycle around the vertex. Second, each edge is adjacent to exactly two triangles. For
a manifold mesh with borders, these restrictions are slightly relaxed. A vertex may be sur-
rounded by a single, incomplete cycle (i.e. the beginning need not meet the end). Also, an
edge may be adjacent to either one or two triangles.
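The two discrete properties above translate directly into a simple connectivity test. The following sketch is a hypothetical helper, not part of the dissertation's implementation, and it checks only the edge-adjacency property; a complete test would also verify that each vertex's triangles form a single cycle (or a single open fan on a border).

```python
from collections import defaultdict

def classify_mesh(triangles):
    """Classify a triangle mesh (a list of vertex-index triples) using
    the edge-adjacency property: in a manifold mesh every edge is shared
    by exactly two triangles; with borders, by one or two; anything else
    is non-manifold. (The vertex-cycle property is not checked here.)"""
    edge_count = defaultdict(int)
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            # Store edges with sorted endpoints so (u, v) and (v, u) match.
            edge_count[(min(u, v), max(u, v))] += 1
    counts = set(edge_count.values())
    if counts <= {2}:
        return 'manifold'
    if counts <= {1, 2}:
        return 'manifold with border'
    return 'non-manifold'
```

For example, a tetrahedron classifies as 'manifold', a single triangle as 'manifold with border', and three triangles sharing one edge as 'non-manifold'.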
A mesh that does not have these properties is said to be non-manifold. Such meshes may
occur in practice by accident or by design. Accidents are possible, for example, either
during the creation of the mesh or during conversions between representations, such as the
conversion from a solid to a boundary representation. The correction of such accidents is a subject
of much interest [Barequet and Kumar 1997, Murali and Funkhouser 1997]. Non-manifold
meshes may occur by design, because such a mesh may require fewer triangles to render
than a visually-comparable manifold mesh, or because such a mesh may be easier to create
in some situations.
Although our algorithms are designed to operate on strictly manifold meshes, and the cur-
rent implementations reflect this, they may be modified to deal well with meshes that are
“mostly manifold”. The simplest such modification might just leave the non-manifold
portions of the mesh unchanged from the input surface. A more sophisticated modification
may break the non-manifold mesh into a set of manifold meshes with borders, noting the
adjacency of these meshes. Such a modification fits well into the patch-based approach
described in Chapter 5. We acknowledge, however, that such modifications do not totally
solve the problem of non-manifold meshes. If a significant portion of the input mesh is non-
manifold, such algorithms may be of limited use for reducing complexity.
1.6 Research Summary
We began this research project in the summer of 1995. At that time, there were relatively
few publications on the subject of general polygonal mesh simplification; much of the earlier
work focused on the simplification of convex polyhedra [Das and Joseph 1990] and polyhe-
dral terrains [Agarwal and Suri 1994]. There were several publications on the more general
problem, though. The most well-known of these were [Rossignac and Borrel 1992],
[Schroeder et al. 1992], [Turk 1992], and [Hoppe et al. 1993]. During the course of this
research, the field of automatic simplification has become much more active, and quite a few
interesting techniques have been developed by other researchers. We discuss only the previ-
ous work here; the most relevant of the concurrent work is discussed in Chapter 2.
1.6.1 Previous Work
[Rossignac and Borrel 1992] present a simple, but powerful, scheme based on the clus-
tering of nearby vertices and the removal of any resulting degenerate geometry. This ap-
proach is remarkably flexible, and guarantees that the distance of the resulting geometry from
the original is no greater than the maximum vertex displacement. Unfortunately, this error
bound is quite loose. Also, the resulting geometry is often poorly shaped because no attention
is given to the local topology (connectivity) or curvature of the original geometry.
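As a rough illustration of this clustering scheme, the sketch below snaps vertices to a uniform grid and drops degenerate triangles. The centroid-based representative and the function names are illustrative choices, not the exact formulation of [Rossignac and Borrel 1992], which also supports weighted representatives and other grading functions.

```python
def cluster_simplify(vertices, triangles, cell_size):
    """Vertex-clustering sketch: replace each grid cell's vertices by
    their centroid, then discard triangles with two or more corners in
    the same cell. No original vertex moves farther than the cell
    diagonal, which is exactly the loose error bound criticized above."""
    cell = lambda p: tuple(int(c // cell_size) for c in p)
    groups = {}
    for v in vertices:
        groups.setdefault(cell(v), []).append(v)
    # Representative vertex for each occupied cell: the centroid.
    rep = {k: tuple(sum(c) / len(vs) for c in zip(*vs))
           for k, vs in groups.items()}
    new_tris = []
    for a, b, c in triangles:
        ca, cb, cc = cell(vertices[a]), cell(vertices[b]), cell(vertices[c])
        if len({ca, cb, cc}) == 3:  # survives clustering (non-degenerate)
            new_tris.append((rep[ca], rep[cb], rep[cc]))
    return new_tris
```

Note that nothing here consults connectivity or curvature, which is why the output geometry is often poorly shaped even though the vertex-displacement bound holds.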
The approaches of [Schroeder et al. 1992] and [Turk 1992], on the other hand, pay close
attention to these details, preserving the local topology of the original geometry, and allowing
more simplification in regions of lower curvature than in those of higher curvature. These
approaches, which produce fairly nice-looking simplifications, provide no error bounds to
describe the quality of the final output.
[Hoppe et al. 1993] pose the simplification problem in an optimization framework. Tak-
ing a topology-preserving approach, the algorithm performs a sequence of mesh-simplifying
operations according to the guidance of an energy function. This function includes terms for
distance error, mesh complexity, and an extra spring force. This optimization process pro-
vides some confidence that the competing concerns of quality and complexity are balanced in
a reasonable fashion. However, the actual metric used for measuring distance is not rigorous;
a number of sample points are recorded on the original surface, and their minimum distances
from the new surface are measured as the simplification progresses. This metric does not
provide a guarantee on the maximum distance, and it is one-sided, with the potential for
points on the simplified surface to be quite far from the original surface.
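A 2D sketch makes the one-sidedness concrete. Assuming polylines in place of meshes (the 3D case substitutes point-to-triangle distances), the metric below takes samples on the original curve only, so it says nothing about how far the simplified curve strays from the original:

```python
import math

def point_segment_dist(p, a, b):
    """Euclidean distance from 2D point p to segment ab."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    t = 0.0 if dx == dy == 0 else max(0.0, min(1.0,
        ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy  # closest point on the segment
    return math.hypot(px - cx, py - cy)

def one_sided_error(samples, simplified):
    """One-sided sampled deviation in the style of the distance term of
    [Hoppe et al. 1993]: the max, over sample points taken on the
    original curve, of the min distance to the simplified polyline.
    Because no samples are taken on the simplified curve, points of the
    simplified curve may still lie arbitrarily far from the original."""
    return max(min(point_segment_dist(p, simplified[i], simplified[i + 1])
                   for i in range(len(simplified) - 1))
               for p in samples)
```

For samples [(0,0), (1,1), (2,0)] against the single segment (0,0)-(2,0), the metric reports 1.0: the peak sample's distance to the simplified curve.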
We also knew of another interesting algorithm, which appeared in [Varshney 1994]. The
algorithm measures error by building a pair of geometric constraint surfaces around the input
surface to some distance tolerance, using these constraints to guarantee a desired error bound.
This approach preserves local topology, and provides a guaranteed error bound on the dis-
tance from the original surface to the simplified surface. Thus, it achieves the quality appear-
ance of [Schroeder et al. 1992], [Turk 1992], and [Hoppe et al. 1993], as well as the guaran-
teed error bounds of [Rossignac and Borrel 1992]. However, the running time of the algo-
rithm grows at least quadratically with the size of the input mesh. We took this algorithm as
an appealing starting point for our endeavor.
1.6.2 Our Approach
The main goal of our research is to automatically generate simplifications that preserve
the appearance of the original models. We define appearance preservation as the proper
sampling of all the appearance attributes that determine the final, shaded colors of a rendered
surface. For current real-time image generation systems, the most common appearance
attributes that vary across a surface are position, curvature, and color. Our implementation
supports these three attributes. Current off-line rendering systems, such as those supporting
the RenderMan shading language [Upstill 1989, Hanrahan and Lawson 1990], allow a myriad
of other appearance attributes to vary across a surface. In the limit, such attributes describe a
bidirectional reflectance distribution function [Foley et al. 1990] over the surface domain.
Our approach seems general enough to handle a wide variety of such additional attributes as
the need arises.
Figure 5 depicts the components of our system. In the left side of the diagram, we convert
the original mesh representation to a decoupled form; the position attribute is stored at the
polygon vertices, as usual, whereas the color and curvature information is stored in auxiliary
texture and normal map structures. These maps are linked to the polygon mesh using a
parameterization of the mesh, stored as 2D texture coordinates at the polygon vertices.
We then apply the actual simplification process, shown in the right side of the diagram.
We generate the simplified meshes using a surface approximation algorithm that provides
guaranteed bounds on the surface deviation (we develop two such surface approximation
algorithms: the simplification envelopes algorithm and the successive mapping algorithm).
We augment the surface approximation algorithm with a new texture deviation metric, which
guarantees bounds on the texture deviation. Our simplified meshes are thus equipped with
both surface and texture deviation bounds, and are supplemented by texture and normal
maps, which preserve the other attribute data.
[Diagram: a Representation Conversion stage (Surface Parameterization, Map Creation) converts the input Polygonal Mesh into the decoupled form, and a Simplification stage (Surface Approximation, Texture Deviation Metric) produces the Simplified Meshes together with the Texture and Normal Maps.]
Figure 5: Components of an appearance-preserving simplification system.
When we render the model, we choose an appropriate level of detail to use as the geome-
try, then perform a mip-mapped look-up to find the appropriate values for the color and
normal vector at each pixel covered by the geometry, shading that pixel according to these
attribute values. If we consider this data reduction as a filtering process, the simplification
pre-processing has taken care of the filtering of the surface position attribute, whereas the
run-time mip-mapping properly filters the curvature and color attributes on a per-pixel basis.
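Stripped to essentials, the per-pixel step looks like the sketch below. It assumes a Lambertian (diffuse-only) model and a nearest-neighbor map lookup purely for brevity; the actual system relies on the hardware's mip-mapped, filtered lookups described above.

```python
def shade_pixel(normal_map, u, v, light_dir, base_color):
    """Fetch a unit normal from a normal map at texture coordinates
    (u, v) in [0, 1) via nearest-neighbor lookup, then apply Lambertian
    shading. normal_map is a 2D grid (rows) of (nx, ny, nz) tuples;
    light_dir is a unit vector; base_color is an RGB triple."""
    h = len(normal_map); w = len(normal_map[0])
    n = normal_map[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
    # Lambert's cosine term, clamped at zero for back-facing normals.
    lam = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
    return tuple(c * lam for c in base_color)
```

Because the normal is sampled per pixel rather than interpolated from per-vertex values, the shading survives aggressive geometric simplification of the underlying mesh.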
The error bounds we compute during the simplification process are essential for guaran-
teeing the quality of the resulting images. Both the surface and texture deviations are meas-
ured as the maximum 3D distances between corresponding points on the original and simpli-
fied surfaces. When we apply the current viewing parameters to project these 3D distances
into 2D we get an error bound in terms of pixels of deviation. For instance, if the surface and
texture deviations project to 2 pixels of deviation, the shaded pixels in an image of the
simplified model will be no more than 2 pixels from their correct positions in an image of the
original model. This intuitive error bound allows the user or application to automatically
control the levels of detail of all the objects in a complex environment, guaranteeing a
uniform quality, if desired, and accelerating the frame rate accordingly.
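The projection from 3D deviation to pixels of deviation can be sketched as follows. The code assumes a standard symmetric perspective projection with a vertical field of view; the function names and the tuple-based LOD records are illustrative, not the dissertation's actual interface.

```python
import math

def pixels_of_error(deviation, distance, fov_y_deg, screen_height_px):
    """Project a 3D deviation bound (same units as distance) to pixels.
    At depth d, the view frustum spans 2*d*tan(fov/2) world units over
    screen_height_px pixels, so
    error_px = deviation * H / (2 * d * tan(fov/2))."""
    world_per_screen = 2.0 * distance * math.tan(math.radians(fov_y_deg) / 2.0)
    return deviation * screen_height_px / world_per_screen

def choose_lod(lods, distance, fov_y_deg, screen_height_px, tolerance_px):
    """Pick the coarsest level of detail whose deviation bound projects
    to at most tolerance_px pixels. lods is a list of
    (triangle_count, deviation_bound) pairs; returns None when even the
    tightest bound exceeds the tolerance."""
    ok = [lod for lod in lods
          if pixels_of_error(lod[1], distance, fov_y_deg,
                             screen_height_px) <= tolerance_px]
    # Tuples compare by triangle count first, so min() gives the
    # fewest-triangle LOD among the acceptable ones.
    return min(ok) if ok else None
```

With a 90° field of view, a 1000-pixel-high screen, and an object 10 units away, a 0.04-unit deviation projects to about 2 pixels, so a 2-pixel tolerance admits any LOD with a bound at or below that.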
1.6.3 Results
We have applied our simplification algorithms to polygonal environments composed of
thousands of objects and up to a few million polygons, including the auxiliary machine room
of a notional submarine model, a lion sculpture from the Yuan Ming garden model, a Ford
Bronco model, a detailed “armadillo” model, and more. The algorithms have proven to be
efficient and effective. We have seen improvements of up to an order of magnitude in the
frame rate of interactive graphics applications, with little or no degradation in image quality.
For example, look at the bunnies in Figure 3 and Figure 4. Although the positions of the
surfaces are preserved quite well, as evidenced by the similarity of the silhouettes of the
bunnies, the shading makes it quite easy to tell which bunnies have been simplified and
which have not (i.e. the appearance has not been totally preserved). Figure 6 shows a view of
a complex “armadillo” model. We have applied our appearance-preserving algorithm to this
model to generate the simplified versions of Figure 7 and Figure 8, in which it is nearly
impossible to distinguish the simplifications from the original. This demonstrates that our
definition of appearance preservation may match our more intuitive notion of what it means
to preserve appearance.
Figure 6: “Armadillo” model: 249,924 triangles
249,924 triangles 7,809 triangles
Figure 7: Medium-sized “armadillos”
249,924 triangles 975 triangles
Figure 8: Small-sized “armadillos”
1.7 Contributions
This dissertation presents three major algorithms: the simplification envelopes and suc-
cessive mapping algorithms for geometric simplification, and an overall appearance-
preserving simplification algorithm. These techniques make a number of contributions to the
field of polygonal mesh simplification:
1. Increased robustness and scalability of the simplification envelopes algorithm
2. Local error metric for surface-to-surface deviation between original and simplified
surfaces
3. Bijective (one-to-one and onto) mappings between original and simplified surfaces
for the edge collapse operation
4. Local error metric for texture deviation, with bijective mappings between original and
simplified surfaces
5. Appearance-preserving simplification algorithm
6. Intuitive, screen-space error metric for surface and texture deviations
We now summarize each of these contributions.
1.7.1 Increased Robustness and Scalability of the Simplification Envelopes Algorithm
The simplification envelopes algorithm, which first appeared in [Varshney 1994], has
several useful properties: it provides a global error metric for surface-to-surface deviation
between original and simplified surfaces, using highly-detailed offset-like surfaces to provide
tight error bounds; it preserves global topology, preventing self-intersections that may result
in large screen-space artifacts for models built with close geometric tolerances; and it scales
well to environments composed of large numbers of objects, automating not only the process
of simplification, but also the selection of appropriate viewing distances for simplified objects.
We present new techniques for both the creation of envelope surfaces and the simplifica-
tion of complex meshes using these envelopes. Our new approaches perform only conserva-
tive intersection tests between linear elements (line segments and triangles). Intersections are
reported conservatively (an intersection is reported if the elements come within some small
tolerance of each other), and actual intersection points are never computed. This approach is
more geometrically robust than the original algorithm of [Varshney 1994]. In addition, we
have improved the asymptotic running time of the overall simplification algorithm to
O(n log n) for typical input models of n triangles, so it scales well to input meshes of very high
complexity.
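One crude stand-in for such a conservative query, offered here only as a sketch and not as the dissertation's actual test, is to compare tolerance-inflated bounding boxes: the test may report pairs that are in fact separated, but it can never miss elements within the tolerance of each other, and it computes no intersection points, so it cannot fail on degenerate input.

```python
def aabb(points, tol):
    """Axis-aligned bounding box of a set of 3D points, inflated by tol
    on every side."""
    return ([min(p[i] for p in points) - tol for i in range(3)],
            [max(p[i] for p in points) + tol for i in range(3)])

def may_intersect(tri, seg, tol=1e-6):
    """Conservative proximity test between a triangle (three 3D points)
    and a segment (two 3D points): report a possible intersection
    whenever the tolerance-inflated boxes overlap on all three axes.
    Over-reports separated pairs, but never misses a true
    near-intersection within tol."""
    (lo1, hi1), (lo2, hi2) = aabb(tri, tol), aabb(seg, tol)
    return all(lo1[i] <= hi2[i] and lo2[i] <= hi1[i] for i in range(3))
```

A practical implementation would follow this cheap rejection test with a tighter, still-conservative distance test between the linear elements themselves; the box test alone merely illustrates the report-if-possibly-close discipline.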
1.7.2 Local Error Metric for Surface-to-Surface Deviation, with Bijective Mappingsbetween Original and Simplified Surfaces
The successive mapping algorithm provides a local error metric for measuring surface
deviation as the simplification process progresses. Although a number of simplification
algorithms now provide local error metrics, most provide bounds on the distance from the
original vertices to points on the simplified surface, rather than the true surface-to-surface
deviation. The tolerance volume simplification algorithm by Guéziec [Guéziec 1995] is a
noteworthy exception. The two-sided Hausdorff metric algorithm by Klein [Klein et al. 1996]
is also an exception, though the one-sided metric he advocates does not provide such an error
bound. However, neither of these other algorithms provides a bijective (one-to-one and onto)
mapping between the original and simplified surfaces, nor do they extend gracefully to deal
with other important attribute fields.
1.7.3 Bijective Mappings between Original and Simplified Surfaces for the Edge Collapse Operation
Our successive mapping algorithm is the first edge-collapse-based simplification algo-
rithm to provide bijective mappings among the levels of detail. The wavelet-based algorithms
[DeRose et al. 1993] provide bijective mappings for the unsubdivide operation and the
mapping algorithm of [Bajaj and Schikore 1996] provides a bijective mapping for the vertex
remove operation (though it does not provide correct error bounds for this mapping). We use
this mapping in our edge-collapse-based simplification algorithm both to bound the surface
deviation error and to maintain a texture coordinate parameterization of the input surface
through the simplification process. Such a mapping has many potential uses, including
measuring and localizing the deviations of other attribute fields.
1.7.4 Local Error Metric for Texture Deviation between Original and Simplified Surfaces
The successive mapping simplification algorithm may be used not only to maintain a
texture coordinate parameterization of the input surface, but also to bound the maximum
texture deviation between the original and simplified surfaces, measuring it as a 3D distance
between corresponding points of the 2D texture domain. Such an error bound is important for
models rendered with generic, re-usable texture maps as well as for models with per-vertex
attributes stored in texture, normal, or other attribute maps. These models have been common
in the off-line rendering community for some time, and models with texture maps have
become quite common in the real-time graphics community as well. As graphics acceleration
hardware becomes capable of performing more complex shading operations, models
with such maps will likely become increasingly common.
Table 4: Comparison of simplification using average normal vectors for offset computation vs. using linear programming to achieve fewer invalid normals. The bunny model is simplified using cascaded simplifications after ε = 1/2%.
4. A LOCAL ERROR METRIC FOR SURFACE DEVIATION
The simplification envelopes error metric for surface deviation presented in Chapter 3 is
global in the sense that we compute only a single error measure for each level of detail we
create. The simplification envelopes algorithm may be seen as a “pay up front” method, because
we devote considerable effort to constructing a pair of envelope surfaces, then perform our
simplification operations rather quickly, verifying only that the resulting surface is still
contained between the envelopes.
In this chapter, we present a more local approach, called the successive mapping algo-
rithm, employing more of a “pay as you go” paradigm. We will still incur some initialization
overhead as we prioritize a set of edge collapse operations on our surface, but then we will
perform the majority of the work as we simplify the surface. This work consists of measuring
the local error in the neighborhood of each edge collapse.
The algorithm computes a piece-wise linear mapping between the original surface and the
simplified surface. It uses the edge collapse operation due to its simplicity, local control, and
suitability for generating smooth transitions between levels of detail. We also present rigor-
ous and complete algorithms for collapsing an edge to a vertex such that there are no local
self-intersections and a bijective (one-to-one and onto) mapping is guaranteed. The algorithm
keeps track of both incremental surface deviation from the current level of detail and total
deviation from the original surface.
The output of the algorithm is a sequence of edge collapse operations, each with its own
error bound describing the error in the neighborhood of the operation. This progressive mesh
[Hoppe 1996], as described in Section 2.4.2, may be used as part of a dynamic simplification
system, though our current system only renders from static levels of detail (see Sections 2.4.1
and 4.6.2).
This research was performed in collaboration with Dinesh Manocha and Marc Olano.
Much of this work appeared in the Proceedings of IEEE Visualization ’97 [Cohen et al.
1997]. The initial algorithm concept is based on work by Schikore and Bajaj [Bajaj and
Schikore 1996]. The key differences between our work and theirs are discussed in Section 4.8.
The rest of the chapter is organized as follows. We give an overview of our algorithm in
Section 4.1. In Section 4.2 we provide the details of the mathematical underpinnings of our
projection-based mapping algorithms. Section 4.3 discusses the creation of local mappings
for the purpose of collapsing edges. Using these mappings, we bound and minimize surface
deviation error in Section 4.4. Section 4.5 describes how to compute new texture coordinates
for the new mesh vertices. The implementation is discussed in Section 4.6 and its perform-
ance in Section 4.7. In Section 4.8 we compare our approach to some other algorithms, and
we conclude the chapter with Section 4.9.
4.1 Overview
Our simplification approach may be seen as a high-level algorithm that controls the sim-
plification process with a lower-level cost function based on local mappings. Next we de-
scribe this high-level control algorithm and the idea of using local mappings for cost evalua-
tion.
4.1.1 High-level Algorithm
At a broad level, our simplification algorithm is a generic greedy algorithm. Our simplifi-
cation operation is the edge collapse. We initialize the algorithm by measuring the cost of all
possible edge collapses, then we perform the edge collapses in order of increasing cost. The
cost function represents local error bounds on surface deviation and other attributes. After
performing each edge collapse, we locally re-compute the cost functions of all edges whose
neighborhoods were affected by the collapse. This process continues until none of the re-
maining edges can be collapsed.
The output of our algorithm is the original model plus an ordered list of edge collapses
and their associated cost functions. This progressive mesh [Hoppe 1996] represents an entire
continuum of levels of detail for the surface. Section 4.6.2 discusses how we use these levels
of detail to render the model with the desired quality or speed-up.
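The greedy control loop described above can be sketched as a priority queue with lazy invalidation of stale entries. This is an illustrative sketch, not the dissertation's implementation: the mesh itself is abstracted away behind hypothetical `cost` and `collapse` callbacks.

```python
import heapq

def greedy_collapse(edges, cost, collapse):
    """Greedy edge-collapse loop with a lazily updated priority queue.

    edges       -- set of candidate edge ids (collapse() removes dead ones)
    cost(e)     -- local error bound of collapsing e now, or None if illegal
    collapse(e) -- performs the collapse and returns the edges whose
                   neighborhoods were affected (their costs must be redone)
    Returns the edges in the order they were collapsed.
    """
    stamp = {e: 0 for e in edges}          # version counters for lazy deletion
    heap = []
    for e in edges:
        c = cost(e)
        if c is not None:
            heap.append((c, 0, e))
    heapq.heapify(heap)
    order = []
    while heap:
        c, s, e = heapq.heappop(heap)
        if e not in edges or s != stamp[e]:
            continue                       # stale entry: edge gone or re-costed
        order.append(e)
        for a in collapse(e):              # neighborhood changed: re-prioritize
            if a in edges:
                stamp[a] += 1
                ca = cost(a)
                if ca is not None:
                    heapq.heappush(heap, (ca, stamp[a], a))
    return order
```

Edges whose cost becomes None (uncollapsible) simply fall out of the queue, so the loop terminates exactly when none of the remaining edges can be collapsed.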
4.1.2 Local Mappings
The edge collapse operation we perform to simplify the surface contracts an edge (the
collapsed edge, e) to a single, new vertex (the generated vertex, vgen). Most of the earlier
algorithms position the generated vertex at one of the end vertices or at the midpoint of the
collapsed edge. These choices for the generated vertex position are reasonable heuristics, and may
reduce storage overhead. However, these choices may not minimize the surface deviation or
other attribute error bound and can result in a local self-intersection. We choose a vertex
position in two dimensions to avoid self-intersections and optimize in the third dimension to
minimize error. This optimization of the generated vertex position and measurement of the
error are the keys to simplifying the surface without introducing significant error.
Figure 29: The natural mapping primarily maps triangles to triangles. The two grey triangles map to edges, and the collapsed edge maps to the generated vertex.
For each edge collapse, we consider only the neighborhood of the surface that is modified
by the operation (i.e. those faces, edges and vertices adjacent to the collapsed edge). There is
a natural mapping between the neighborhood of the collapsed edge and the neighborhood of
the generated vertex (see Figure 29). Most of the triangles incident to the collapsed edge are
stretched into corresponding triangles incident to the generated vertex. However, the two
triangles that share the collapsed edge are themselves collapsed to edges. These natural
correspondences are one form of mapping.
This natural mapping has two weaknesses.
1. The degeneracy of the triangles mapping to edges prevents us from mapping points of
the simplified surface back to unique points on the original surface. This also implies
that if we have any sort of attribute field across the surface, a portion of it disappears
as a result of the operation.
2. The error implied by this mapping may be larger than necessary.
We measure the surface deviation error of the edge collapse operation as the distances
between corresponding points of our mapping. Using the natural mapping, the maximum
distance between any pair of corresponding points is defined as:
E = max(distance(v1, vgen), distance(v2, vgen)),     (14)
where v1 and v2 are the vertices of e.
If we place the generated vertex at the midpoint of the collapsed edge, this distance error
will be half the length of the edge. If we place the vertex at any other location, the error will
be even greater.
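As a concrete check of Equation 14, a minimal sketch (the function name is illustrative) computing the natural-mapping error for a candidate generated vertex:

```python
import math

def natural_mapping_error(v1, v2, vgen):
    """Equation 14: the natural mapping's surface deviation bound is the
    larger of the distances from the collapsed edge's endpoints to the
    generated vertex."""
    return max(math.dist(v1, vgen), math.dist(v2, vgen))

# Midpoint placement yields half the edge length; any other placement does worse.
v1, v2 = (0.0, 0.0, 0.0), (2.0, 0.0, 0.0)
midpoint = tuple((a + b) / 2.0 for a, b in zip(v1, v2))
```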
We can create mappings that are free of degeneracies and often imply less error than the
natural mapping. For simplicity, and to guarantee no self-intersections, we perform our
mappings using orthogonal projections of our local neighborhood to the plane. We refer to
them as successive mappings.
4.2 Projection Theorems
The simplification algorithm we will present depends on our ability to efficiently com-
pute orthogonal projections that provide bijective mappings between small portions of
triangle meshes. With this in mind, we present the mathematical properties of the mapping
used in designing the projection algorithm.
Figure 30: Polygons in the plane. (a) A simple polygon (with an empty kernel). (b) A star-shaped polygon with its kernel shaded. (c) A non-simple polygon with its kernel shaded.
Definition 1 A simple polygon is a planar polygon in which edges only intersect at their two
endpoints (vertices) and each vertex is adjacent to exactly two edges (see Figure 30(a)).
Definition 2 The kernel of a simple polygon is the intersection of the inward-facing half-
spaces bounded by its edges (see Figure 30(b)). For a non-simple polygon (see Figure 30(c)),
the kernel is the intersection of a consistently-oriented set of half-spaces bounded by its
edges (i.e. if we traverse the edges in a topological order, the half-spaces must be either all
to our right or all to our left).
Definition 3 A star-shaped polygon is a simple polygon with a non-empty kernel (see Figure
30(b)).
By construction, any point in the kernel of a star-shaped polygon has an unobstructed line
of sight to the polygon's entire boundary.
Definition 4 A complete vertex neighborhood, Nv, is a set of triangles that forms a complete
cycle around a vertex, v.
The triangles of Nv are ordered: ∆0, ∆1, ..., ∆n-1, ∆0. Each pair of consecutive triangles in
this ordering, (∆i, ∆i+1), is adjacent, sharing a single edge, ei; one of the vertices of ei is v.
Figure 31: Projections of a vertex neighborhood, visualized in polar coordinates. (a) No angular intervals overlap, so the boundary is star-shaped, and the projection is a bijection. (b) Several angular intervals overlap, so the boundary is not star-shaped, and the projection is not a bijection.
Definition 5 The angle space of an orthogonal projection of a complete vertex neighbor-
hood, Nv, is the θ-coordinate space, [0,2π], constructed by converting the projected neigh-
borhood to polar coordinates, (r,θ), with v at the origin (see Figure 31(a)).
Definition 6 The angular interval covered by the orthogonal projection of triangle, ∆i, from
a complete vertex neighborhood, Nv, is the interval [θi,θ(i+1) mod n], where θi is the θ-
coordinate of edge ei.
Definition 7 The angle space of an orthogonal projection of a complete vertex neighbor-
hood is multiply-covered if each angle, θ ∈ [0,2π], is covered by the projections of at least
two triangles from Nv. It is k-covered if each angle is covered by the projections of exactly k
such triangles. A k-covered angle space is exactly multiply-covered if k > 1.
Lemma 1 The orthogonal projection of a complete vertex neighborhood, Nv, onto the plane,
P, provides a bijective mapping between Nv and a polygonal subset of P iff the angular
intervals of the projected triangles of Nv do not overlap.
Proof. Consider the projection of Nv in polar coordinates, with v at the origin, and e0 at
θ=0 (see Figure 31). Each triangle, ∆i, spans an angular interval in θ, bounded by ei on one
side and e(i+1) mod n on the other. If the intervals of the triangles do not overlap, then the
triangles cannot overlap, and the projection must be a bijection. If the intervals do overlap,
the triangles themselves must overlap (near the origin, which they both contain), and the
projection cannot be a bijection (see Figure 31(b)). ∎
Corollary 1 The orthogonal projection of a complete vertex neighborhood, Nv, onto the
plane, P, provides a bijective mapping between Nv and a polygonal subset of P iff the angle
space of the projection of Nv is 1-covered.
Proof. Lemma 1 shows that for a bijective mapping, the angle space cannot be multiply-
covered. Because the triangles of Nv form a complete cycle around v, the angle space must be
fully covered. Thus, each angle must be covered exactly once. ∎
Lemma 2 The orthogonal projection of Nv onto the plane, P, provides a bijective mapping
between Nv and a polygonal subset of P iff the projection of Nv’s boundary forms a star-
shaped polygon in P, with v in its kernel.
Proof. If the projection provides a bijective mapping, the angular intervals of the trian-
gles do not overlap, and the boundary forms a simple polygon, with the origin in the interior.
The entire boundary of the polygon is visible from the origin. This is by definition a star-
shaped polygon, with the origin, v, in its kernel. In the case where one or more interval pairs
overlap, portions of the boundary are typically occluded from the origin's point of view. Thus
v cannot be in the kernel of a star-shaped polygon. Note that if the angle space is exactly
multiply-covered, and the boundaries of these coverings are totally coincident, the entire
boundary also seems to be visible from the origin. However, such a polygon is not technically
simple, thus the projection of Nv is not technically star-shaped. ∎
Definition 8 A fold in an orthogonal projection of a triangle mesh is an edge with two
adjacent triangles whose projections lie to the same side of the projected edge. A degenerate
fold is an edge with at least one triangle with a degenerate projection, lying entirely on the
projected edge.
Figure 32: Three projections of a pair of edge-adjacent triangles. (a) The projected edge is not a fold, because the normals of both triangles are within 90° of the direction of projection. (b) The projected edge is a degenerate fold, because the normal of ∆2 is perpendicular to the direction of projection. (c) The projected edge is a fold because the normal of ∆2 is more than 90° from the direction of projection.
Lemma 3 An orthogonal projection of a consistently-oriented triangle mesh is fold-free iff
the triangle normals either all lie less than 90° or all lie greater than 90° from a vector in the
direction of projection.
Proof. We are given that the triangle mesh is orientable, with consistently oriented trian-
gles and consistent normal vectors. The orientation of a projected triangle depends only on
the relationship of its normal vector to the direction of projection (see Figure 32). When these
two vectors are less than 90° apart, the projected triangle will have one orientation, whereas if
they are greater than 90° apart, the projected triangle will have the opposite orientation. At
exactly 90°, the projected triangle degenerates into a line segment.
At a fold, the two triangles adjacent to the folded edge have opposite orientations in the
plane, whereas at a non-folded edge, they have the same orientation. If all the triangle
normals lie within the same hemisphere, either less than or greater than 90° from the direction of
projection, all the projected triangles will be consistently oriented, implying that none of the
edges are folded.
If the normals do not all lie in one of these two hemispheres, the projected triangles may
be divided into three groups according to their orientations in the plane (one group is for
degenerate projections). Because the triangle mesh is fully connected, there must exist some
edge that is adjacent to two triangles from different groups; this edge is a fold (or degenerate
fold). ∎
Lemma 4 The orthogonal projection of Nv onto P provides a bijective mapping iff the
projection is fold-free and its angle space is not exactly multiply-covered.
Proof. Again, consider the projection of Nv in polar coordinates. When a fold occurs, the
angular intervals of these triangles overlap. Thus a projection with a fold does not provide a
bijective mapping. On the other hand, if the projection is fold-free, every edge around v has
its triangles laid out to either side. Because the final triangle of Nv connects to the initial
triangle, this fold-free projection provides a k-covering of the angle space. If k=1, the projec-
tion provides a bijective mapping (from Corollary 1). If k>1, the projection is exactly multi-
ply-covered, implying that angular intervals overlap, and the projection does not provide a
bijective mapping. ∎
Lemma 5 The orthogonal projection of Nv onto P provides a bijective mapping iff the
projected triangles are consistently oriented and the angle-space of the projection is not
exactly multiply-covered.
Proof. We must show that the consistent orientation criterion is equivalent to the fold-
free criterion of Lemma 4. The projection of each of the edges, e0...en, is either a fold or not a
fold. The two triangles adjacent to each non-folded edge are consistently oriented, whereas
those adjacent to each folded edge are inconsistently oriented (or degenerate). If none of the
edges are folded, all adjacent pairs of triangles are consistently oriented, implying that all of
Nv is consistently oriented. If any of the edges are folded, Nv is not consistently oriented. ∎
Theorem 1 The following statements about the orthogonal projection of a complete vertex
neighborhood, Nv, onto the plane, P, are equivalent:
• The projection provides a bijective mapping between Nv and a polygonal subset of P.
• The angular intervals of the projected triangles of Nv do not overlap.
• The angle space of the projection of Nv is 1-covered.
• The projection of Nv's boundary forms a star-shaped polygon in P, with the vertex, v,
in its kernel.
• The normals of the triangles of Nv all lie within the same hemisphere about the line of
projection and the angle space of the projection is not exactly multiply-covered.
• The projection of Nv is fold-free and its angle space is not exactly multiply-covered.
• The projected triangles of Nv are consistently oriented in P and the angle space of the
projection is not exactly multiply-covered.
Proof. This equivalence list is a direct consequence of Lemmas 1, 2, 3, 4, and 5 and Corollary 1. ∎
Figure 33: The edge neighborhood is the union of two vertex neighborhoods. If we remove the two triangles of their intersection, we get two independent polygons in the plane.
Definition 9 A complete edge neighborhood, Ne, is a set of triangles that forms a complete
cycle around an edge, e (see Figure 33).
If v1 and v2 are the vertices of e, we can also write:
Ne = Nv1 ∪ Nv2     (15)
Lemma 6 Given an edge, e, and its vertices, v1 and v2, the orthogonal projection of Ne onto
the plane, P, is fold-free iff the projections of Nv1 and Nv2 onto P are fold-free.
Proof. The set of triangle edges in Ne is the union of the edges from Nv1 and Nv2. If
neither Nv1 nor Nv2 contains a folded edge, then Ne cannot contain a folded edge. Similarly, if
either Nv1 or Nv2 contains a folded edge, Ne will contain that folded edge as well. Also note
that the projections of Nv1 and Nv2 must have the same orientation, because they have two
triangles and one interior edge (e) in common. ∎
Lemma 7 The orthogonal projection of Ne onto P provides a bijective mapping between Ne
and a polygonal subset of P iff the projections of its vertices, v1 and v2, provide bijective
mappings between their neighborhoods and the plane, and the projection of the boundary of
Ne is a simple polygon in P.
Proof. The projection provides a bijective mapping between Nv1 and a star-shaped subset
of P, and between Nv2 and a star-shaped subset of P. The only way for Ne to not have a
bijective mapping with a polygon in the plane is if the projections of Nv1 and Nv2 overlap,
covering some points in the plane more than once.
Let N′v1 and N′v2 be the neighborhoods Nv1 and Nv2 with the two common triangles removed,
as shown in Figure 33:
N′v1 = Nv1 − (Nv1 ∩ Nv2);  N′v2 = Nv2 − (Nv1 ∩ Nv2)     (16)
The projections of N′v1 and N′v2 are two polygons in P. If the projections of Nv1 and Nv2
are each bijections, and these two polygons do not overlap, then the projection of Ne is a
bijection. If the two polygons do overlap, the projection is not a bijection, because multiple
points on Ne are projecting to the same point in P. Given that the projections of Nv1 and Nv2
are fold-free, the only way for the two polygons to overlap is for their boundaries to intersect.
This intersection implies that the projection of Ne is a non-simple polygon.
So we have shown that if the projections of Nv1 and Nv2 provide bijective mappings
with polygons in P and the projection of Ne’s boundary is a simple polygon in P, then the
projection provides a bijective mapping between Ne and this simple polygon in P. Also, if the
projection covers a non-simple polygon, there can be no bijective mapping. ∎
Theorem 2 The orthogonal projection of Ne onto P provides a bijective mapping between Ne
and a polygonal subset of P iff the projection of Ne is fold-free, the projections of the neighborhoods
of its vertices, v1 and v2, are not exactly multiply-covered, and the projection of its
boundary is a simple polygon in P.
Proof. Given Lemma 7, we only need to show that the projections of Nv1 and Nv2 provide
bijective mappings iff the projection of Ne is fold-free, and the projections of Nv1 and
Nv2 are not exactly multiply-covered. This is a direct consequence of Lemmas 4 and 6. ∎
Definition 10 An edge collapse operation applied to edge e, with vertices v1 and v2, merges
v1 and v2 into a single, generated vertex, vgen. In the process, any triangles adjacent to e
become degenerate and are deleted.
Lemma 8 Given an edge, e, which is collapsed to a vertex, vgen, an orthogonal projection of
Ne is a simple polygon iff the same orthogonal projection of Nvgen is a simple polygon.
Proof. The collapse of e to vgen does not affect the vertices on the boundary of Ne,
so Ne and Nvgen have the same boundary. Thus the projection of the boundary of Ne is simple
iff the projection of the boundary of Nvgen is simple. ∎
Lemma 9 A planar polygon with a non-empty kernel is simple iff it is star-shaped.
Proof. A star-shaped polygon is defined as a simple polygon with a non-empty kernel.
Thus if a polygon with a non-empty kernel is simple, it is star-shaped by definition. If a
polygon with a non-empty kernel is not simple, it cannot be star-shaped. ∎
Lemma 10 Given an edge, e, which is collapsed to a vertex, vgen, inside the kernel of e, an
orthogonal projection of Ne is simple iff the same projection of Nvgen is star-shaped.
Proof. Recall from Lemma 8 that Ne and Nvgen have the same projected boundary. We
have been given that this projected boundary is a planar polygon with a non-empty kernel.
From Lemma 9, we know that this polygon is simple iff it is star-shaped. Thus the projection
of the boundary of Ne is a simple polygon iff the projection of the boundary of Nvgen is a
star-shaped polygon. ∎
Figure 34: A fold-free projection of an edge neighborhood, Ne, that is not a bijection. (a) The projection of Ne has a non-empty kernel. (b) The projection of Nvgen has a 2-covered angle space. This can be detected by noting that the angular intervals of the triangles of Nvgen sum to 4π.
Theorem 3 Given an edge, e, which is collapsed to a vertex, vgen, in the kernel of e, an
orthogonal projection of Ne onto P provides a bijective mapping between Ne and a polygonal
subset of P iff the projection of Ne is fold-free and the projected triangles of Nvgen are
consistently oriented and do not multiply-cover the angle space.
Proof. Theorem 2 shows that the projection of Ne is a bijection iff it is fold-free and sim-
ple. Lemma 10 shows that it is simple iff the projection of Nvgen is star-shaped. Theorem 1
shows that the projection of Nvgen is star-shaped iff its projected triangles are consistently
oriented and do not multiply-cover the angle space. Figure 34 depicts an example of a fold-
free edge projection that is not a bijection and collapses to a multiply-covered vertex
neighborhood. ∎
Edge Collapse in the Plane
Theorems 1 and 3 lead us to an efficient algorithm for performing an edge collapse op-
eration in the plane.
First, we find a fold-free projection for the edge, e. We can use linear programming with
the normals of Ne as constraints to find a direction that guarantees such a fold-free projection,
if one exists for this edge. We do not yet know if the projection is a bijection, but rather than
check to see if the projection forms a simple polygon, we defer this test until later.
Second, we find a point inside the kernel of the projection of Ne. Again, we can use linear
programming to find such a point, if one exists for this projection.
Third, we collapse e’s vertices, v1 and v2, to this point, vgen, in the kernel. We do not know
yet if the overall polygon is star-shaped, because even a non-simple polygon may have a non-
empty kernel. If the polygon is not simple, neither the projection of Ne nor the projection of
Nvgen provides a bijective mapping with a polygon in P.
Finally, we verify that the projections of Nv1, Nv2, and Nvgen are all bijections. For Nv1
and Nv2, we verify that they are not exactly multiply-covered by adding up the spans of the
angular intervals of their triangles. These spans should sum to 2π (within some floating point
tolerance). For Nvgen, we check not only the sum of the angular spans, but also the orientations
of the projected triangles. If the spans sum to 2π and the orientations are consistent,
Nvgen has a bijective mapping, and its boundary is star-shaped. These are all simple, O(n)
tests, with small constant factors. They guarantee that we have a bijective mapping between
Ne and the plane, and also between Nvgen and the plane; this also provides a bijective mapping
between Ne and Nvgen.
All the steps of the preceding algorithm run in O(n) time (though we will later need to
find O(n2) edge-edge intersections, which we will use in the error calculation and 3D vertex
placement). This algorithm for performing an edge-collapse in the plane is described in more
detail in Section 4.3.
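The final verification step can be sketched for a projected vertex neighborhood as follows. This is an illustrative sketch, not the dissertation's code: it assumes the neighborhood is represented as the projected vertex plus its boundary vertices in cyclic order, checks that the projected triangles are consistently oriented, and checks that their angular intervals sum to 2π (rather than 4π, as in the Figure 34 failure case).

```python
import math

def check_vertex_neighborhood(v, ring):
    """Verify that a projected vertex neighborhood provides a bijection.

    v    -- (x, y) projected vertex
    ring -- boundary vertices in cyclic order; triangle i is
            (v, ring[i], ring[(i + 1) % n])
    Returns True iff all projected triangles are consistently oriented and
    their angular intervals about v sum to 2*pi (a 1-covered angle space).
    """
    n = len(ring)
    total, signs = 0.0, set()
    for i in range(n):
        ax, ay = ring[i][0] - v[0], ring[i][1] - v[1]
        bx, by = ring[(i + 1) % n][0] - v[0], ring[(i + 1) % n][1] - v[1]
        cross = ax * by - ay * bx
        if cross == 0.0:
            return False                   # degenerate projected triangle
        signs.add(cross > 0.0)             # orientation of this triangle
        # signed angular span of this triangle's interval about v
        total += math.atan2(cross, ax * bx + ay * by)
    return len(signs) == 1 and math.isclose(abs(total), 2 * math.pi)
```

A doubly-wound ring accumulates a total of 4π and is correctly rejected, mirroring the detection described in Figure 34.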
4.3 Successive Mapping
In this section we present an algorithm to construct the mappings we use to compute error
bounds and to guide the simplification process. We present efficient and complete algorithms
for computing a planar projection, finding a generated vertex in the plane, and creating a
mapping in the plane. These algorithms employ well-known techniques from computational
geometry and are efficient in practice. The basis for these algorithms is proven in Section
4.2.
4.3.1 Computing a Planar Projection
Given a set of triangles in 3D, we present an efficient algorithm to compute a planar pro-
jection that is fold-free. Such a fold-free projection contains no pair of edge-adjacent triangles
that overlap in the plane. This fold-free characteristic is a necessary, but not sufficient,
condition for a projection to provide a bijective mapping between the set of triangles and a
portion of the plane. In practice, most fold-free projections provide such a bijective mapping.
We later perform an additional test to verify that our fold-free projection is indeed a bijection
(see Section 4.3.3).
The projection we seek should be a bijection to guarantee that the operations we perform
in the plane are meaningful. For example, suppose we project a connected set of triangles
onto a plane and then re-triangulate the polygon described by their boundary. The resulting
set of triangles will contain no self-intersections, as long as the projection is a bijection.
Many other simplification algorithms, such as those by Turk [Turk 1992], also use such
projections for vertex removal. However, they simply choose a likely direction, such as the
average of the normal vectors of the triangles of interest. To test the validity of the resulting
projection, these earlier algorithms project all the triangles onto the plane and check for self-
intersections. This process can be relatively expensive and does not provide a robust method
for finding a bijective projection plane.
We improve on earlier brute-force approaches in two ways. First, we present a simple,
linear-time algorithm for testing the validity of a given direction, ensuring that it produces a
fold-free projection. Second, we present a slightly more complex, but still expected linear-
time, algorithm that will find a valid direction if one exists, or report that no such direction
exists for the given set of triangles. We defer until Section 4.3.3 a final, linear-time test to
guarantee that our fold-free projection provides a bijective mapping.
4.3.1.1 Validity Test for Planar Projection
In this section, we briefly describe the algorithm that checks whether a given set of trian-
gles has a fold-free planar projection. Assume that we can calculate a consistent set of normal
vectors for the set of triangles in question (if we cannot, the local surface is non-orientable
and cannot be mapped onto a plane in a bijective fashion). If the angle between a given
direction of projection and the normal vector of each of the triangles is less than 90°, then the
direction of projection is valid, and defines a fold-free mapping from the 3D triangles to a set
of triangles in the plane of projection (any plane perpendicular to the direction of projection).
Note that for a given direction of projection and a given set of triangles, this test involves
only a single dot product and a sign test for each triangle in the set. The correctness of this
test is demonstrated in Section 4.2.
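The per-triangle validity test is just a dot product and a sign test. A minimal sketch (the function name is illustrative), assuming consistently oriented normals and a candidate direction given as 3-tuples:

```python
def is_fold_free(normals, direction):
    """Validity test for a candidate direction of projection: one dot
    product and one sign test per triangle. The projection is fold-free
    iff every consistently oriented triangle normal makes an angle of
    strictly less than 90 degrees with the direction of projection."""
    dx, dy, dz = direction
    return all(nx * dx + ny * dy + nz * dz > 0.0 for nx, ny, nz in normals)
```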
Figure 35: A 2D example of an invalid projection due to folding.
To develop some intuition, we examine a 2D version of our problem, shown in Figure 35.
We would like to determine if the projection of the curve onto the line is fold-free. Without
loss of generality, assume the direction of projection is the y-axis. Each point on the curve
projects to its x-coordinate on the line. If we traverse the curve from its left-most endpoint,
we will project onto a previously projected location if and only if we reverse our direction
along the x-axis. This can only occur when the y-component of the curve's normal vector
goes from a positive value to a negative value. This is equivalent to our statement that the
invalid normal will be more than 90° from the direction of projection.
4.3.1.2 Finding a Valid Direction
The validity test in the previous section provides a quick method of testing the validity of
a likely direction as a fold-free projection. Unfortunately, the wider the spread of the normal
vectors of our set of triangles, the less likely we are to find a valid direction by using any sort
of heuristic. It is possible, in fact, to compute the set of all valid directions of projection for a
given set of triangles. However, to achieve greater efficiency and to reduce the complexity of
the software system, we choose to find only a single valid direction, which is typically all we
require.
n2n1n1
n2
(a) (b)
Figure 36: A 2D example of the valid projection space. (a) Two line segments and their normals. (b) The2D Gaussian circle, the planes corresponding to each segment, and the space of valid projection direc-tions (shaded in grey).
The Gaussian sphere [Carmo 1976] is the unit sphere on which each point corresponds to
a unit normal vector with the same coordinates. Given a triangle, we define a plane through
the origin with the same normal as the triangle. For a direction of projection to be valid with
respect to this triangle, its point on the Gaussian sphere must lie on the correct side of this
plane (i.e. within the correct hemisphere). If we consider two triangles simultaneously
(shown in 2D in Figure 36) the direction of projection must lie on the correct side of each of
the two planes determined by the normal vectors of the triangles. This is equivalent to saying
that the valid directions lie within the intersection of half-spaces defined by these two planes.
Thus, the valid directions of projection for a set of N triangles lie within the intersection of N
half-spaces.
This intersection of half-spaces forms a convex polyhedron. This polyhedron is a cone,
with its apex at the origin and an unbounded base (shown as a shaded, triangular region in
Figure 36). We can force this polyhedron to be bounded by adding more half-spaces (we use
the six faces of a cube containing the origin). By finding a point on the interior of this cone
and normalizing its coordinates, we shall construct a unit vector in a valid direction of
projection.
Rather than explicitly calculating the boundary of the cone, we simply find a few corners
(vertices) and average them to find a point that is strictly inside. By construction, the origin is
definitely such a corner, so we just need to find three more unique corners to calculate an
interior point. We can find each of these corners by solving a 3D linear programming prob-
lem. Linear programming allows us to find a point that maximizes a linear objective function
subject to a collection of linear constraints [Kolman and Beck 1980]. The equations of the
half-spaces serve as our linear constraints. We maximize in the direction of a vector to find
the corner of our cone that lies the farthest in that direction.
As stated above, the origin is our first corner. To find the second corner, we try maxi-
mizing in the positive-x direction. If the resulting point is the origin, we instead maximize in
the negative-x direction. To find the third corner, we maximize in a direction orthogonal to
the line containing the first two corners. If the resulting point is one of the first two corners,
we maximize in the opposite direction. Finally, we maximize in a direction orthogonal to the
plane containing the first three corners. Once again, we may need to maximize in the opposite
direction instead. Note that it is possible to reduce the worst-case number of optimizations
from six to four by using the triangle normals to guide the selection of optimization vectors.
We use Seidel's linear-time randomized algorithm [Seidel 1990] to solve each linear
programming problem. A public domain implementation of this algorithm by Hohmeyer is
available. It is very fast in practice.
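As a lightweight stand-in for the linear-programming corner search described above, the sketch below finds a direction with a positive dot product against every normal (a point strictly inside the cone of valid directions) using a simple perceptron-style iteration. This is not the dissertation's method; the function name and iteration cap are illustrative assumptions, and the iteration converges whenever a valid direction with some positive margin exists.

```python
def find_valid_direction(normals, max_iters=10000):
    """Search for a unit direction whose dot product with every triangle
    normal is positive, i.e. a point strictly inside the cone of valid
    projection directions. Perceptron-style: repeatedly nudge the
    candidate toward any violated half-space constraint."""
    d = [sum(n[i] for n in normals) for i in range(3)]  # start at the normal sum
    for _ in range(max_iters):
        bad = next((n for n in normals
                    if sum(di * ni for di, ni in zip(d, n)) <= 0.0), None)
        if bad is None:                                 # all constraints satisfied
            length = sum(x * x for x in d) ** 0.5
            return tuple(x / length for x in d)
        d = [di + ni for di, ni in zip(d, bad)]         # move toward the violated normal
    return None  # no valid direction found within the iteration budget
```

The linear-programming formulation in the text remains the general, guaranteed solution; this sketch only illustrates the geometry of the problem.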
4.3.2 Placing the Vertex in the Plane
In the previous section, we presented an algorithm to compute a valid plane. The edge
collapse, which we use as our simplification operation, merges the two vertices of a particular
edge into a single vertex. The topology of the resulting mesh is completely determined, but
we are free to choose the position of the vertex, which will determine the geometry of the
resulting mesh.
When we project the triangles neighboring the given edge onto a valid plane of projec-
tion, we get a triangulated polygon with two interior vertices, as shown in Figure 37. The
edge collapse will reduce this edge to a single vertex. There will be edges connecting this
generated vertex to each of the vertices of the polygon. We would like the set of triangles
around the generated vertex to have a bijective mapping with our chosen plane of projection,
and thus to have a one-to-one mapping with the original edge neighborhood as well.
In this section, we present linear time algorithms both to test a candidate vertex position
for validity, and to find a valid vertex position, if one exists.
4.3.2.1 Validity Test for Vertex Position
The edge collapse operation leaves the boundary of the polygon in the plane unchanged.
For the neighborhood of the generated vertex to have a bijective mapping with the plane, its
edges must lie entirely within the polygon, ensuring that no edge crossings occur.
Figure 38: (a) An invalid 2D vertex position. (b) The kernel of a polygon is the set of valid positions for a single, interior vertex to be placed. It is the intersection of a set of inward half-spaces.
This 2D visibility problem has been well-studied in the computational geometry literature
[O'Rourke 1994]. The generated vertex must have an unobstructed line of sight to each of the
surrounding polygon vertices (unlike the vertex shown in Figure 38(a)). This condition holds
if and only if the generated vertex lies within the polygon's kernel, shown in Figure 38(b).
This kernel is the intersection of inward-facing half-planes defined by the polygon's edges.
Figure 37: The neighborhood of an edge (with vertices v1 and v2) as projected into 2D.
Given a candidate position for the generated vertex in 2D, we test its validity by plugging
it into the implicit-form equation of each of the lines containing the polygon's edges. If the
position is on the interior with respect to each line, the position is valid; otherwise it is
invalid.
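The kernel-membership test amounts to a sign test against the line through each edge. A minimal sketch, assuming the polygon vertices are given in counter-clockwise order (names are illustrative):

```python
def in_kernel(p, polygon, eps=1e-12):
    """Return True if point p lies strictly inside the kernel of the
    polygon, i.e. on the interior (left) side of the line through every
    edge of the counter-clockwise polygon."""
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        # 2D cross product; non-positive means p is on or outside this edge's line
        if (x1 - x0) * (p[1] - y0) - (y1 - y0) * (p[0] - x0) <= eps:
            return False
    return True
```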
4.3.2.2 Finding a Valid Position
The validity test described above is useful if we wish to test out a likely candidate for the
generated vertex position, such as the midpoint of the edge being collapsed. If such a heuris-
tic choice succeeds, we can avoid the work necessary to compute a valid position directly.
Given the kernel definition for valid points, it is straightforward to find a valid vertex po-
sition using 2D linear programming. Each of the lines provides one of the constraints for the
linear programming problem. Using the same methods as in Section 4.3.1.2, we can find a
point in the kernel with no more than four calls to the linear programming routine. The first
and second corners are found by maximizing in the positive- and negative-x directions. The final corner is found by maximizing along a vector orthogonal to the line through the first two corners.
4.3.3 Guaranteeing a Bijective Projection
Although rare in practice, it is possible in theory for us to find both a fold-free projection
and a vertex position within the planar polygon's kernel, yet still have a projection that is not
a bijection. Figure 34 shows an example of such a projection.
As proved in Section 4.2, we can verify that both the neighborhoods of the generated
vertex and the collapsed edge have bijective projections with the plane with a simple, linear-
time test. Given our edge, e, its two vertices, v1 and v2, and the generated vertex, vgen, these
projections are bijections if and only if the orientations of the triangles surrounding vgen are
consistent and the triangles surrounding v1, v2, and vgen each cover angular ranges in the plane
that sum to 2π.
We can verify the orientations of vgen’s triangles by performing a single cross product for
each triangle. If the signed areas of all the triangles have the same sign, they are consistently
oriented, and the projections are bijections. We verify the angular sums of triangles sur-
rounding v1, v2, and vgen using a vector normalization, dot product, and arccosine operation
for each triangle to compute its angular range. Each floating point sum will be within some
small tolerance of an integer multiple of 2π, with 1 being the valid multiplier.
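Both checks can be sketched for a single 2D fan as follows. `is_bijective_fan` is a hypothetical helper, not the dissertation's code; it takes the projected center vertex and its ring of surrounding vertices in order:

```python
import math

def is_bijective_fan(center, ring):
    """Verify a triangle fan around `center` (2D projected position):
    all triangles must share the same orientation (consistent signed-area
    sign) and the angles at `center` must sum to exactly one full turn."""
    signs, angle_sum = set(), 0.0
    n = len(ring)
    for i in range(n):
        a, b = ring[i], ring[(i + 1) % n]
        ax, ay = a[0] - center[0], a[1] - center[1]
        bx, by = b[0] - center[0], b[1] - center[1]
        cross = ax * by - ay * bx          # twice the signed triangle area
        if abs(cross) < 1e-12:
            return False                   # degenerate triangle
        signs.add(cross > 0)
        la, lb = math.hypot(ax, ay), math.hypot(bx, by)
        cos_t = max(-1.0, min(1.0, (ax * bx + ay * by) / (la * lb)))
        angle_sum += math.acos(cos_t)      # angular range of this triangle
    return len(signs) == 1 and abs(angle_sum - 2 * math.pi) < 1e-6
```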
4.3.4 Creating a Mapping in the Plane
After mapping the edge neighborhood to a valid plane and choosing a valid position for
the generated vertex, we define a mapping between the edge neighborhood and the generated
vertex neighborhood. We shall map to each other the pairs of 3D points that project to
identical points on the plane. These correspondences are shown in Figure 39(a) by superim-
posing these two sets of triangles in the plane.
Figure 39: (a) The collapsed edge neighborhood and generated vertex neighborhood superimposed. (b) A mapping in the plane, composed of 25 polygonal cells (each cell contains a dot). Each cell maps between a pair of planar elements in 3D.
We can represent the mapping by a set of map cells, shown in Figure 39(b). Each cell is a
convex polygon in the plane and maps a piece of a triangle from the edge neighborhood to a
similar piece of a triangle from the generated vertex neighborhood. The mapping represented
by each cell is linear.
The vertices of the polygonal cells fall into four categories: vertices of the overall poly-
gon in the plane, vertices of the collapsed edge, the generated vertex itself, and edge-edge
intersection points. We already know the locations of the first three categories of cell vertices,
but we must calculate the edge-edge intersection points explicitly. Each such point is the
intersection of an edge adjacent to the collapsed edge with an edge adjacent to the generated
vertex. The number of such points can be quadratic (in the worst case) in the number of
neighborhood edges. If we choose to construct the actual cells, we may do so by sorting the
intersection points along each neighborhood edge and then walking the boundary of each cell.
However, this is not necessary for computing the surface deviation.
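The edge-edge intersection points can be computed with a standard 2D segment-intersection routine, sketched below (a generic formulation, not code from the dissertation):

```python
def segment_intersection(p1, p2, p3, p4, eps=1e-12):
    """Intersection point of segments p1-p2 and p3-p4, or None if they are
    parallel or do not cross. Locates cell vertices where an edge of the
    old neighborhood crosses an edge of the new one."""
    d1x, d1y = p2[0] - p1[0], p2[1] - p1[1]
    d2x, d2y = p4[0] - p3[0], p4[1] - p3[1]
    denom = d1x * d2y - d1y * d2x
    if abs(denom) < eps:
        return None                       # parallel segments
    ex, ey = p3[0] - p1[0], p3[1] - p1[1]
    s = (ex * d2y - ey * d2x) / denom     # parameter along p1-p2
    t = (ex * d1y - ey * d1x) / denom     # parameter along p3-p4
    if 0.0 <= s <= 1.0 and 0.0 <= t <= 1.0:
        return (p1[0] + s * d1x, p1[1] + s * d1y)
    return None
```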
4.4 Measuring Surface Deviation Error
Up to this point, we have projected the collapsed edge neighborhood onto a plane, col-
lapsed the edge to the generated vertex in this plane, and computed a mapping in the plane
between these two local meshes. The generated vertex has not yet been placed in 3D. We will
choose its 3D position to minimize the error in surface deviation.
Figure 40: Each point, x, in the plane of projection, P, corresponds to two 3D points, Xi-1 and Xi, on meshes Mi-1 and Mi, respectively.
Given the overlay in the plane of the collapsed edge neighborhood, Mi-1, and the gener-
ated vertex neighborhood, Mi, we define the incremental surface deviation between them:
Ei,i-1(x) = ||Fi^-1(x) − Fi-1^-1(x)||   (17)
The function, Fi(X):Mi → P, maps points on the 3D mesh, Mi, to points, x, in the plane.
Fi-1(X):Mi-1 → P acts similarly for the points on Mi-1. Ei,i-1 measures the distance between the
pair of 3D points corresponding to each point, x, in the planar overlay (shown in Figure 40).
Within each of our polygonal mapping cells in the plane, the incremental deviation func-
tion is linear, so the maximum incremental deviation for each cell occurs at one of its bound-
ary points. Thus, we bound the incremental deviation using only the deviation at the cell
vertices, V:
Ei,i-1 = max_{x ∈ P} Ei,i-1(x) = max_{vk ∈ V} Ei,i-1(vk)   (18)
4.4.1 Distance Functions of the Cell Vertices
To preserve our bijective mapping, it is necessary that all the points of the generated ver-
tex neighborhood, including the generated vertex itself, project back into 3D along the
direction of projection (the normal to the plane of projection). This restricts the 3D position
of the generated vertex to the line parallel to the direction of projection and passing through
the generated vertex's 2D position in the plane. We choose the vertex's position along this
line such that it minimizes the incremental surface deviation.
We parameterize the position of the generated vertex along its line of projection by a sin-
gle parameter, t. As t varies, the distance between the corresponding cell vertices in 3D varies
linearly. Notice that these distances will always be along the direction of projection, because
the distance between corresponding cell vertices is zero in the other two dimensions (those of
the plane of projection). The distance function for each cell vertex, vk, has the form (see
Figure 41):
Ei,i-1(vk) = mk·t + bk,   (19)

where mk and bk are the slope and y-intercept of the signed distance function for vk as t varies.
4.4.2 Minimizing the Incremental Surface Deviation
Given the distance function, we would like to choose the parameter t that minimizes the
maximum distance between any pair of mapped points. This point is the minimum of the so-
called upper envelope, shown in Figure 41. For a set of k functions, we define the upper
envelope function as follows:
U(t) = { fi(t) | fi(t) > fj(t) ∀ j, 1 ≤ i, j ≤ k, i ≠ j }   (20)
For linear functions with no boundary conditions, this function is convex. We convert the
distance function for each cell vertex to two linear functions, then use linear programming to
find the t value at which the minimum occurs. We use this value of t to calculate the 3D
position for the generated vertex that minimizes the maximum incremental surface deviation.
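The dissertation solves this minimization with linear programming. As an illustrative alternative, the convexity of the upper envelope also admits a simple ternary search over t; the function and parameter names below are hypothetical:

```python
def minimize_upper_envelope(lines, lo=-1e6, hi=1e6, iters=200):
    """Find the t minimizing max_k |m_k * t + b_k|. The upper envelope of
    the 2k linear functions +-(m_k t + b_k) is convex, so ternary search
    converges to its minimum. `lines` is a list of (m_k, b_k) pairs."""
    def envelope(t):
        return max(abs(m * t + b) for m, b in lines)
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if envelope(m1) < envelope(m2):
            hi = m2
        else:
            lo = m1
    t = (lo + hi) / 2
    return t, envelope(t)
```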
Figure 41: The distance functions E(vk) as t varies. The minimum of the upper envelope (at tmin) corresponds to the vertex position that minimizes the incremental surface deviation.
4.4.3 Bounding Total Surface Deviation
Although it is straightforward to measure the incremental surface deviation and choose
the position of the generated vertex to minimize it, this is not the error we eventually store
with the edge collapse. To know how much error the simplification process has created, we
need to measure the total surface deviation of the mesh Mi:

Si(X) = Ei,0(Fi(X)) = ||X − F0^-1(Fi(X))||   (21)
Unfortunately, our projection formulation of the mapping functions provides only Fi-1^-1 and Fi^-1 when we are performing edge collapse i; it is more difficult to construct F0^-1, and the complexity of this mapping to the original surface will retain the complexity of the original surface.
We approximate Ei,0 by using a set of axis-aligned boxes (other possible choices for these
approximation volumes include triangle-aligned prisms and spheres). This provides a con-
venient representation of a bound on Si(X) that we can update from one simplified mesh to
the next without having to refer to the original mesh. Each triangle, ∆k, in Mi has its own axis-aligned box, bi,k, such that at every point on the triangle, the Minkowski sum of the 3D point with the box gives a region that contains the corresponding point on the original surface:

∀ X ∈ ∆k : F0^-1(Fi(X)) ∈ X ⊕ bi,k   (22)
Figure 42: 2D illustration of the box approximation to total surface deviation. (a) A curve, M0, has been simplified to two segments of Mi, each with an associated box to bound the deviation. (b) As we simplify one more step, to Mi+1, the approximation is propagated to the newly created segment.
Figure 42(a) shows an original surface (curve) and a simplification of it, consisting of two
thick lines. Each line has an associated box. As the box slides over the line it is applied to
each point along the way; the corresponding point on the original mesh is contained within
the translated box. One such correspondence is shown halfway along the left line.
From (21) and (22), we produce ~Si(X), a bound on the total surface deviation, Si(X). This is the surface deviation error reported with each edge collapse:

~Si(X) = max_{X′ ∈ X ⊕ bi,k} ||X − X′|| ≥ Si(X)   (23)

~Si(X) is the distance from X to the farthest corner of the box at X. This will always bound the distance from X to F0^-1(Fi(X)). The maximum deviation over an edge collapse neighborhood is the maximum ~Si(X) for any cell vertex.
The boxes, bi,k, are the only information we keep about the position of the original mesh as we simplify. We create a new set of boxes, bi+1,k, for mesh Mi+1 using an incremental computation (described in Figure 43). Figure 42(b) shows the propagation from Mi to Mi+1. The two lines from Figure 42(a) have now been simplified to a single line. The new box, bi+1,0, is constant as it slides across the new line. Its size and offset are chosen so that, at every point of application, bi+1,0 contains the box bi,0 or bi,1, as appropriate.
If X is a point on Mi in triangle k, and Y is the corresponding point on Mi+1, the containment property of (22) holds:

F0^-1(Fi+1(Y)) ∈ X ⊕ bi,k ⊆ Y ⊕ bi+1,k′   (24)

For example, all three dots in Figure 42(b) correspond to each other. The dot on the original surface, M0, is contained in a small box, X ⊕ bi,0, which is contained in the larger box, Y ⊕ bi+1,0.
Because each mapping cell in the overlay between Mi and Mi+1 is linear, we compute the sizes of the boxes, bi+1,k′, by considering only the box correspondences at cell vertices. In Figure 42(b), there are three places we must consider. If bi+1,0 contains bi,0 and/or bi,1 at all three places, it will contain them everywhere.
Together, the propagation rules, which are simple to implement, and the box-based approximation to the total surface deviation give us the tools we need to efficiently bound surface deviation throughout the simplification process.
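The core of the propagation rule, expanding a box on Mi+1 until it contains a box on Mi at a corresponding cell vertex, can be sketched as follows; the box and point representations are illustrative assumptions:

```python
def expand_box(parent_box, child_box, X, Y):
    """Propagation rule for the deviation boxes: grow parent_box (applied
    at point Y on mesh Mi+1) until Y (+) parent_box contains
    X (+) child_box, where X is the point on Mi corresponding to Y.
    Boxes are (lo, hi) pairs of offset 3-vectors; (+) is Minkowski sum."""
    lo, hi = parent_box
    clo, chi = child_box
    off = tuple(x - y for x, y in zip(X, Y))   # child box position seen from Y
    new_lo = tuple(min(pl, o + cl) for pl, o, cl in zip(lo, off, clo))
    new_hi = tuple(max(ph, o + ch) for ph, o, ch in zip(hi, off, chi))
    return (new_lo, new_hi)
```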
4.4.4 Accommodating Bordered Surfaces
Bordered surfaces are those containing edges adjacent to only a single triangle, as opposed to two triangles. Such surfaces are quite common in practice. Borders create some complications for the creation of a mapping in the plane. The problem is that the overall shape of the neighborhood projected into the plane changes as a result of the edge collapse.
Bajaj and Schikore [Bajaj and Schikore 1996], who employ a vertex-removal approach,
deal with this problem by mapping the removed vertex to a length-parameterized position
PropagateError():
    foreach cell vertex, v
        foreach triangle, ∆i-1, in Mi-1 touching v
            foreach triangle, ∆i, in Mi touching v
                expand ∆i's box so that, when applied at Xi, it contains ∆i-1's box applied at Xi-1

Figure 43: Pseudo-code to propagate the total deviation from mesh Mi-1 to Mi.
along the border. This solution can be employed for the edge-collapse operation as well. In
their case, a single vertex maps to a point on an edge. In ours, three vertices map to points on
a chain of edges.
4.5 Computing Texture Coordinates
The use of texture maps has become common over the last several years, as the hardware
support for texture mapping has increased. Texture maps provide visual richness to com-
puter-rendered models without adding more polygons to the scene.
Texture mapping requires 2D texture coordinates at every vertex of the model. These co-
ordinates provide a parameterization of the texture map over the surface.
As we collapse an edge, we must compute texture coordinates for the generated vertex.
These coordinates should reflect the original parameterization of the texture over the surface.
We use linear interpolation to find texture coordinates for the corresponding point on the old
surface, and assign these coordinates to the generated vertex.
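The linear interpolation is the usual barycentric interpolation over the projected triangle containing the point; a minimal sketch (a hypothetical helper, not the dissertation's code):

```python
def interpolate_uv(p, tri2d, tri_uv):
    """Texture coordinates for 2D point p inside a projected triangle, by
    barycentric interpolation of the triangle's vertex uv coordinates."""
    (x0, y0), (x1, y1), (x2, y2) = tri2d
    denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
    w0 = ((y1 - y2) * (p[0] - x2) + (x2 - x1) * (p[1] - y2)) / denom
    w1 = ((y2 - y0) * (p[0] - x2) + (x0 - x2) * (p[1] - y2)) / denom
    w2 = 1.0 - w0 - w1                     # barycentric weights sum to one
    return tuple(w0 * a + w1 * b + w2 * c
                 for a, b, c in zip(tri_uv[0], tri_uv[1], tri_uv[2]))
```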
This approach works well in many cases, as demonstrated in Section 4.7. However, there
can still be some sliding of the texture across the surface. We have extended our mapping
approach to also measure and bound the deviation of the texture coordinates (see Chapter 5).
In this approach, the texture coordinates produce a new set of pointwise correspondences
between simplifications, and the deviation measured using these correspondences measures
the deviation of the texture. This extension allows us to make guarantees about the complete
appearance of the simplified meshes we create and render.
As we add more error measures to our system, it becomes necessary to decide how to
weight these errors to determine the overall cost of an edge collapse. Each type of error at an
edge mandates a particular viewing distance based on a user-specified screen-space tolerance
(e.g. number of allowable pixels of surface or texture deviation). We conservatively choose
the farthest of these. At run-time, the user can still adjust an overall screen-space tolerance,
but the relationships between the types of error are fixed at the time of the simplification pre-
process.
4.6 System Implementation
We divide our software system into two major components: the simplification pre-
process, which performs the automatic simplification described previously in this chapter,
and the interactive visualization application, which employs the resulting levels of detail to
perform high-speed, high-quality rendering.
4.6.1 Simplification Pre-Process
All the algorithms described in this chapter have been implemented and applied to vari-
ous models. Although the simplification process itself is only a pre-process with respect to
the graphics application, we would still like it to be as efficient as possible. The most time-
consuming part of our implementation is the re-computation of edge costs as the surface is
simplified, as described in Section 4.1.1. To reduce this computation time, we allow our
approach to be slightly less greedy by performing a lazy evaluation of edge costs as the
simplification proceeds.
Rather than recompute all the local edge costs after a collapse, we simply set a dirty flag for these edges. When we pick the next edge to collapse off the priority queue, we check to see if the edge is dirty. If so, we re-compute its cost, place it back in the queue, and pick again. We repeat this until the lowest-cost edge in the queue is clean. This clean edge has a lower cost than the known costs of all the other edges, be they clean or dirty. If recent edge collapses cause an edge's cost to increase significantly, we will find out about it before actually choosing to collapse it. The potential drawback is that if the cost of a dirty edge has decreased, we may not find out about it immediately, so we will not collapse the edge until later in the simplification process.
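The dirty-flag scheme maps naturally onto a binary heap. The sketch below is a schematic reconstruction (the class and method names are our own, and edges are represented by arbitrary comparable keys):

```python
import heapq

class LazyEdgeQueue:
    """Priority queue with lazy cost re-evaluation: collapsing an edge
    marks its neighbors dirty; a dirty edge popped from the queue is
    re-costed and pushed back instead of being collapsed."""
    def __init__(self, cost_fn):
        self.cost_fn = cost_fn     # computes the current cost of an edge
        self.heap = []
        self.dirty = set()

    def push(self, edge):
        heapq.heappush(self.heap, (self.cost_fn(edge), edge))

    def mark_dirty(self, edges):
        self.dirty.update(edges)

    def pop_clean(self):
        while self.heap:
            cost, edge = heapq.heappop(self.heap)
            if edge in self.dirty:          # stale cost: re-evaluate, re-queue
                self.dirty.discard(edge)
                self.push(edge)
            else:
                return cost, edge           # clean edge with lowest known cost
        return None
```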
This lazy evaluation of edge costs significantly speeds up the algorithm without much
effect on the error growth of the progressive mesh. Table 5 shows the number of edge cost
evaluations and running times for simplifications of the bunny and torus models with the
complete and lazy evaluation schemes. Figure 44 shows the effect of lazy evaluation on error
growth for these models. The lazy evaluation has a minimal effect on error; in fact, in some cases the error of the simplification using the lazy evaluation is actually smaller. This is not surprising, because a strictly greedy choice of edge collapses does not guarantee optimal error growth.

Figure 44: Error growth for simplification of two models: (top) bunny model; (bottom) wrinkled torus model. Both plots show error (as a percentage of the bounding box diagonal) versus number of triangles, on log-log axes, for the "complete cost evaluation" and "lazy cost evaluation" methods. The nearly coincident curves indicate that the error for the lazy cost evaluation method grows no faster than the error for the complete cost evaluation method over the course of a complete simplification.
Given that the lazy evaluation is so successful at speeding up the simplification process
with little impact on the error growth, we still have room to be more aggressive in speeding
up the process. For instance, it may be possible to include a cost estimation method in our
prioritization scheme. If we have a way to quickly estimate the cost of an edge collapse, we
can use these estimates in our prioritization. Of course, we must still record the guaranteed
error bound when we finally perform a collapse operation. If our guaranteed bound is too far
off from our initial estimate, we may choose to put the edge back on the queue, prioritized by
its true cost.
Model   Method     # Evaluations   # Collapses   #E / #C   CPU Time   % Speedup
Bunny   complete       1,372,122        34,819      39.4       5:01         N/A

Table 5: Effect of lazy cost evaluation on simplification speed. The lazy method reduces the number of edge cost evaluations per edge collapse operation performed, speeding up the simplification process. Time is in minutes and seconds on a 195 MHz MIPS R10000 processor.
4.6.2 Interactive Visualization Application
More important than the speed of the simplification itself is the speed at which our
graphics application runs. The simplification algorithm outputs a list of edge collapses and
associated error bounds. Although it is possible to use this output to create view-dependent
simplifications on the fly in the visualization application (as described by Hoppe [Hoppe
1997]), such a system is fairly complex, requiring computational resources to adapt the
simplifications and immediate-mode rendering of the final triangles.
Our application is written to be simple and efficient. We first sample the progressive
mesh to generate a static set of levels of detail. These are chosen to have triangle counts that
decrease by a factor of two from level to level. This limits the total memory usage to twice
the size of the input model.
We next load these levels of detail into our visualization application, which stores them
as display lists (often referred to as retained mode). On machines with high-performance
graphics acceleration, such display lists are retained in a cache on the accelerator and do not
need to be sent by the CPU to the accelerator every frame. On an SGI Onyx with InfiniteRe-
ality graphics, we have seen a speedup of 2-3 times, just due to the use of display lists.
Our interactive application is written on top of SGI's Iris Performer library [Rohlf and
Helman 1994], which provides a software pipeline designed to achieve high graphics per-
formance. The geometry of our model, which may be composed of many individual objects at
several levels of detail, is stored in a scene graph. One of the scene graph structures, the
LODNode, is used to store the levels of detail of an object. This LODNode also stores a list
of switching distances, which indicate at what viewing distance each level of detail should be
used (the viewing distance is the 3D distance from the eye point to the center of the object's
bounding sphere). We compute these switching distances based on the 3D surface deviation
error we have measured for each level of detail.
The rendering of the levels of detail in this system involves minimal overhead. When a
frame is rendered, the viewing distance for each object is computed and this distance is
compared to the list of switching distances to determine which level of detail to render.
The application allows the user to set a 2D error tolerance, which is used to scale the
switching distances. When the error tolerance is set to 1.0, the 3D error for the rendered
levels of detail will project to no more than a single pixel on the screen. Setting it to 2.0
allows two pixels of error, etc. This screen-space surface deviation amounts to the number of
pixels the objects' silhouettes may be off from a rendering of the original level of detail.
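The conversion from a measured 3D deviation and a pixel tolerance to a switching distance follows from the perspective projection. The sketch below assumes a symmetric vertical field of view and square pixels; the exact formula used in the system is not given in the text:

```python
import math

def switching_distance(error_3d, fov_y, viewport_h, pixel_tolerance):
    """Viewing distance at which a 3D surface deviation projects to the
    given pixel tolerance: at distance d the frustum spans
    2 * d * tan(fov_y / 2) world units over viewport_h pixels, so the
    error covers error_3d * viewport_h / (2 * d * tan(fov_y / 2)) pixels."""
    return error_3d * viewport_h / (2.0 * pixel_tolerance * math.tan(fov_y / 2.0))
```

Doubling the pixel tolerance halves the switching distance, which is exactly the scaling behavior described above.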
4.7 Results
We have applied our simplification algorithm to four distinct objects: a bunny rabbit, a
wrinkled torus, a lion, and a Ford Bronco, with a total of 390 parts. Table 6 shows the total
input complexity of each of these objects as well as the time needed to generate a progressive
mesh representation. All simplifications were performed on an SGI MIPS R10000 processor.
Figure 45 graphs the complexity of each object vs. the number of pixels of screen-space
error for a particular viewpoint. Each set of data was measured with the object centered in the
foreground of a 1000x1000-pixel viewport, with a 45° field-of-view, like the Bronco in Plates
2 and 3. This was the easiest way for us to measure the screen-space error, because the lion
and bronco models each have multiple parts that independently switch levels of detail.
Conveniently, this function of complexity vs. error at a fixed distance is proportional to the
function of complexity vs. viewing distance with a fixed error. The latter is typically the
function of interest.
Figure 46 shows the typical way of viewing levels of detail with a fixed error bound
and levels of detail changing as a function of distance. Figure 47 is a snapshot from our
“circling Broncos” video. We achieve a speedup nearly proportional to the reduction in
triangles. Figure 48 shows close-ups of the Bronco model at full and reduced resolutions.
Figure 49 and Figure 50 show the application of our algorithm to the texture-mapped lion
and wrinkled torus models. If you know how to free-fuse stereo image pairs, you can fuse the
tori or any of the adjacent pairs of textured lions. Because the tori are rendered at an appropri-
ate distance for switching between the two levels of detail, the images are nearly indistin-
guishable, and fuse to a sharp, clear image. The lions, however, are not rendered at their
appropriate viewing distances, so certain discrepancies will appear as fuzzy areas. Each of the
lion's 49 parts is individually colored in the wire-frame rendering to indicate which of its
levels of detail is currently being rendered.
Model    Parts   Original Triangles   CPU Time (Min:Sec)
Bunny        1               69,451                 1:56
Torus        1               79,202                 2:44
Lion        49               86,844                 1:56
Bronco     339               74,308                 1:29

Table 6: Simplifications performed. CPU time indicates time to generate a progressive mesh of edge collapses until no more simplification is possible.
Figure 45: Complexity vs. screen-space error for several simplified models. Each curve plots pixels of error against number of triangles (log scale) for the "bunny", "torus", "lion", and "bronco" models.
Figure 58: Close-up of several levels of detail of the "armadillo" model. Top: normal maps. Bottom: per-vertex normals.
6. CONCLUSION
6.1 Contributions
This dissertation presents three major simplification algorithms:
1. Simplification Envelopes
2. Successive Mappings
3. Appearance-Preserving Simplification
Together, they make a number of contributions to the field of polygonal mesh simplifica-
tion (see Section 1.7 for more detail):
• Increased robustness and scalability of the simplification envelopes algorithm
• Local error metric for surface-to-surface deviation between original and simplified
surfaces
• Bijective mappings between original and simplified surfaces for the edge collapse op-
eration
• Local error metric for texture deviation, with bijective mappings between original and
simplified surfaces
• Appearance-preserving simplification algorithm
• Intuitive, screen-space error metric for surface and texture deviations
6.2 Future Extensions
We now describe some possible extensions to this dissertation research. These extensions
are organized by category: minimizing error, bijective mappings, non-manifold meshes,
parameterization, and appearance preservation.
6.2.1 Minimizing Error
It seems clear that there is much more we could do algorithmically to reduce the error in
our simplified models. This can be accomplished by reducing the actual error, tightening the
bounds on our measurement of the actual error, or both. One shortcoming of greedy simplification algorithms, which offer no bound on their output size relative to the optimal solution, is that we have little idea how much better we could be doing. Reducing our reported
error may very well increase the time it takes to perform the simplification. If we are ap-
proaching the point of diminishing returns, this may be of purely academic interest. On the
other hand, there may still be significant room for improvement, and we may find the error
reduction to be of great practical use.
The minimization of surface deviation for the successive mapping algorithm currently
uses only one degree of freedom for its optimization of the 3D vertex position. Moving in the
other two dimensions can change the mapping, making the optimization process more
complex. In addition, we currently minimize the incremental surface deviation, not the total
surface deviation. Finally, we accumulate our total error from the original surface using axis-
aligned boxes. If we could measure and minimize the total error directly from the original
surface, we could avoid some unnecessary error accumulation.
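To make the accumulation issue concrete, here is a minimal sketch (our own illustration, not the dissertation's implementation) of conservative error accumulation with axis-aligned boxes: each simplified region carries a box of extents bounding its deviation from the original surface, and each incremental deviation grows the box rather than being measured back to the original surface directly.

```python
def accumulate_box(box, incremental_deviation):
    """Grow an axis-aligned error box (ex, ey, ez extents) by an
    incremental deviation vector.  The result still bounds the total
    deviation, but over-estimates it whenever true deviations cancel."""
    return tuple(e + abs(d) for e, d in zip(box, incremental_deviation))

# Two collapse steps whose true deviations partly cancel in x:
box = (0.0, 0.0, 0.0)
box = accumulate_box(box, (0.5, 0.1, 0.0))
box = accumulate_box(box, (-0.4, 0.1, 0.0))
# The bound grows to 0.9 in x although the net deviation is only 0.1 --
# exactly the kind of unnecessary accumulation described above.
```

Measuring directly against the original surface would keep the x bound near 0.1 here, at the cost of a more expensive error computation per step.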
For the appearance-preserving simplification algorithm, the minimization of texture de-
viation is even more heuristic. The orthogonal projection mapping is used to compute the
corresponding texture coordinates for the generated vertex. These texture coordinates are not
currently optimized to minimize the texture deviation. Such an optimization would be similar
to optimizing the remaining two dimensions of the 3D vertex in the successive mapping
algorithm; it would change the mapping in the texture plane, making the optimization process
more complex.
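The orthogonal-projection step can be sketched as follows (a hypothetical 2D illustration with names of our own choosing, not the dissertation's code): after projecting the generated vertex into the plane of an original triangle, its barycentric weights in that triangle interpolate the triangle's texture coordinates.

```python
def barycentric_2d(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    det = (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])
    u = ((b[0]-p[0])*(c[1]-p[1]) - (c[0]-p[0])*(b[1]-p[1])) / det
    v = ((c[0]-p[0])*(a[1]-p[1]) - (a[0]-p[0])*(c[1]-p[1])) / det
    return u, v, 1.0 - u - v

def projected_texcoord(p2d, tri2d, tri_uv):
    """Texture coordinates at the projection p2d of the generated
    vertex, interpolated with its barycentric weights in tri2d."""
    w = barycentric_2d(p2d, *tri2d)
    return (sum(wi * uv[0] for wi, uv in zip(w, tri_uv)),
            sum(wi * uv[1] for wi, uv in zip(w, tri_uv)))
```

An optimization as proposed above would instead treat the returned (u, v) as free variables and move them in the texture plane to shrink the deviation bound.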
For the simplification of complex models, the heuristic nature of the current minimization
is offset somewhat by the priority queue mechanism. There are many edge collapse opera-
tions to choose from to determine the next simplification step, so it may not matter too much
if each operation is not optimized as well as possible. However, as the model becomes more
coarse, the minimization of error becomes more important, and simplification with small

error growth becomes more difficult. The need for better error minimization depends, to
some extent, on what level of simplification the graphics applications require.
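The priority queue mechanism referred to here can be sketched schematically (a toy driver of our own, omitting the mesh bookkeeping): candidate collapses sit in a min-heap keyed on their error bound, and each step performs the cheapest one.

```python
import heapq

def simplify(edges, error_of, max_error):
    """Apply candidate collapses in order of increasing error bound,
    stopping when the cheapest remaining candidate exceeds max_error.
    In a real simplifier the errors of neighboring edges must be
    re-evaluated after every collapse; this sketch omits that."""
    heap = [(error_of(e), e) for e in edges]
    heapq.heapify(heap)
    performed = []
    while heap and heap[0][0] <= max_error:
        _, e = heapq.heappop(heap)
        performed.append(e)
    return performed
```

Because the queue always supplies the globally cheapest remaining operation, a suboptimal local minimization mostly delays an edge rather than forcing a bad collapse, which is why the heuristic is tolerable at high triangle counts.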
6.2.2 Bijective Mappings
The mappings we generate using the successive mappings approach are not the best mappings possible. Given any choice of 3D position for the generated vertex, it should be possible
to optimize the mapping so it minimizes the maximum deviation. Adjusting the mapping
affects the distance function by changing the correspondences. In fact, we could even con-
struct our mappings directly on the 3D surface, rather than relying on planar projections at all.
Such a mapping might be initialized to the natural mapping, after which an optimization process could minimize the error by adding, removing, or adjusting the piecewise-linear mapping cells on the two surfaces. A similar process might even be used to generate mappings
between levels of detail created by the simplification envelopes approach.
6.2.3 Non-manifold Meshes
These simplification algorithms, like most topology-preserving simplification algorithms,
apply most readily to manifold meshes. However, non-manifold meshes are of great impor-
tance for practical applications. Many meshes are not manifold, and we can even convert
meshes with all sorts of arbitrary polygon intersections into non-manifold meshes, with all
the intersections identified.
Our appearance-preserving simplification algorithm has the makings of an approach to
simplifying non-manifold meshes. The non-manifold meshes may be decomposed into
several manifold meshes and their adjacency information, much like the meshes in the
appearance-preserving algorithm are decomposed into adjacent patches with their individual
parameterizations. This sort of approach may work well for meshes that are “mostly mani-
fold,” with a few non-manifold regions. If the mesh is “mostly non-manifold,” there will be
too many patches, and we must develop new ways to achieve some degree of coherence
across the surface.
There is considerable overlap of the handling of non-manifold meshes with the handling
of topological modifications. In this area, there is a great need to explore ways of modifying
topology while preserving the surface appearance. This may include a characterization of
which types of topological modifications can have the most impact on the appearance of a
surface.
6.2.4 Parameterization
Most algorithms for polygonal surface parameterization today optimize the parameteriza-
tion to minimize distortions between the parameter space and the surface. This is important
for applying prefabricated, general-purpose textures to a variety of objects. Our application of
the parameterization is different, however, because the data originates at the vertices of the
polygonal surface; any distortion in mapping the data to the texture domain is reversed when
the data is reapplied to the surface during rendering.
In our case, it is more important to optimize the parameterization for the minimization of
filtering error and storage consumption. Using the actual data to be stored in a map, we
should be able to optimize the parameterization to further these goals. Topologically adjacent
data values with similar values should be stored more closely together in the parameter space
than those with dissimilar values. This will help minimize the error that occurs at each level
of the mip-map pyramid structure. For applications willing to tolerate some additional error,
it may be possible to discard one or more of the highest resolution mip-map levels, reducing
storage and possibly rendering bandwidth.
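The storage argument can be made concrete with a quick worked sketch (our own arithmetic illustration): a full mip-map pyramid of an n x n map costs about 4/3 of the base level, so discarding just the single finest level removes roughly three quarters of the total.

```python
def mip_bytes(base, bytes_per_texel=4, skip_finest_levels=0):
    """Total bytes for a square mip pyramid with a base x base finest
    level, optionally discarding the finest `skip_finest_levels` levels."""
    total, size, level = 0, base, 0
    while size >= 1:
        if level >= skip_finest_levels:
            total += size * size * bytes_per_texel
        size //= 2
        level += 1
    return total

full = mip_bytes(1024)                        # all 11 levels
trimmed = mip_bytes(1024, skip_finest_levels=1)
# trimmed / full is almost exactly 1/4.
```

The same function also shows the rendering-bandwidth side of the argument: transmitting only levels at or below some resolution is equivalent to raising skip_finest_levels.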
6.2.5 Appearance Preservation
Our appearance-preserving algorithm takes a clear and consistent approach to preserving
appearance attributes that vary across a given polygonal surface. However, the algorithm
depends on the commercial availability of graphics acceleration hardware that provides:
a) Sufficient bandwidth to render all the necessary appearance attribute maps at the de-
sired screen resolution
b) Computing resources and flexibility to light and shade polygonal primitives according
to attribute map values (in either single- or multi-pass fashion)
Even for those platforms with these capabilities, it is important for us to manage the
bandwidth required for complex scenes by ensuring that only the smallest necessary mip-map
levels are transmitted within the graphics engine.
For less capable graphics platforms, it will be necessary to develop other algorithms for
preserving appearance, using the more traditionally accelerated rendering techniques. Though
most graphics accelerators today have support for texture mapping, they may not have the
necessary bandwidth to transmit enough unique texture data to cover the screen at the desired
resolution (many gaming applications cover most of the screen area with generic, repeatable
texture patterns applied to the polygons). Thus, we may need to rely only on the capability of
handling per-vertex or per-polygon colors and normals. The metrics we use to measure the
effects of simplification on these colors and normals must take into account not only the changes in the values themselves, but also those changes weighted by the screen area they cover, and the resulting changes in the final, lit colors weighted by that same area. Such metrics will allow us to make
some guarantees as to the quality of the appearance of the resulting images.
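One toy formulation of such a metric (our own, not one from the dissertation, and assuming a simple Lambertian diffuse term) weights the change in lit color caused by a normal change by the screen area over which it is visible:

```python
def lit_color_change(n_before, n_after, light_dir):
    """Change in the Lambertian diffuse term caused by a normal change."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return abs(max(dot(n_after, light_dir), 0.0) -
               max(dot(n_before, light_dir), 0.0))

def area_weighted_error(faces, light_dir):
    """faces: (normal_before, normal_after, screen_area) triples.
    A large normal change on a few pixels can thus cost less than a
    small change spread over many."""
    return sum(lit_color_change(n0, n1, light_dir) * area
               for n0, n1, area in faces)
```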
6.3 Simplification in Context
Polygonal simplification is one of many techniques used to accelerate the rendering of
complex scenes. Other techniques include back-face culling, view-frustum culling, occlusion
culling, cell-and-portal culling, and image replacement. The various culling techniques
attempt to remove hidden geometry, whereas polygonal simplification and image replacement
(replacing some amount of complex geometry with an image) attempt to reduce the com-
plexity of the visible geometry.
An ambitious goal for an interactive 3D graphics algorithm is to have a rendering com-
plexity that is output sensitive. That is to say, the rendering algorithm should ideally take an
amount of time that is proportional only to the screen resolution, but not to the scene com-
plexity, to generate an image. Also, the rendered image should have as high a quality as that
produced with a reasonable non-output-sensitive algorithm. Polygonal simplification is not
likely to become a powerful enough tool to achieve this goal on its own. Even if we incorpo-
rate topological changes and object merging into our simplification scheme, as in [Luebke
and Erikson 1997], it will be difficult to achieve both our complexity and quality goals.
The combination of simplification techniques and image replacement techniques seems a
promising approach to achieving our high-quality, interactive rendering goals. A prototype
system that takes just such an approach is demonstrated in [Aliaga et al. 1998]. Pre-computed
images are used to replace far geometry, whereas simplification is used to reduce the complexity of nearby geometry. Each technique is thus applied to its area of greatest strength: geometric primitives up close and images far away.
Among its other contributions, this dissertation presents an algorithm for appearance-
preserving simplification, using rendering primitives that provide a high ratio of quality to
rendering complexity (especially reducing transformation complexity). The representation is
a hybrid of sorts, itself employing both geometry and mapped images. We hope that our
approaches to appearance preservation and simplification in general, which provide guaran-
teed error bounds with the simplified models, will be useful components of the interactive
rendering algorithms of the future.
7. REFERENCES
Agarwal, Pankaj K. and Subhash Suri. Surface Approximation and Geometric Partitions. Proceedings of 5th ACM-SIAM Symposium on Discrete Algorithms. 1994. pp. 24-33.
Aliaga, Daniel, Jonathan Cohen, Andrew Wilson, Hansong Zhang, Carl Erikson, Kenneth Hoff, Thomas Hudson, Eric Baker, Rui Bastos, Mary Whitton, Frederick Brooks, and Dinesh Manocha. A Framework for the Real-Time Walkthrough of Massive Models. Technical Report TR #98-013. Department of Computer Science, University of North Carolina at Chapel Hill. March 1998.
Aliaga, Daniel G. Visualization of Complex Models using Dynamic Texture-Based Simplification. Proceedings of IEEE Visualization '96. pp. 101-106.
Bajaj, Chandrajit and Daniel Schikore. Error-bounded Reduction of Triangle Meshes with Multivariate Data. SPIE. vol. 2656. 1996. pp. 34-45.
Barequet, Gill and Subodh Kumar. Repairing CAD Models. Proceedings of IEEE Visualization '97. October 19-24. pp. 363-370, 561.
Bastos, Rui, Mike Goslin, and Hansong Zhang. Efficient Rendering of Radiosity using Texture and Bicubic Interpolation. Proceedings of 1997 ACM Symposium on Interactive 3D Graphics.
Becker, Barry G. and Nelson L. Max. Smooth Transitions between Bump Rendering Algorithms. Proceedings of SIGGRAPH 93. pp. 183-190.
Blinn, Jim. Simulation of Wrinkled Surfaces. Proceedings of SIGGRAPH 78. pp. 286-292.
Brönnimann, H. and Michael T. Goodrich. Almost Optimal Set Covers in Finite VC-Dimension. Proceedings of 10th Annual ACM Symposium on Computational Geometry. 1994. pp. 293-302.
Cabral, Brian, Nelson Max, and R. Springmeyer. Bidirectional Reflection Functions from Surface Bump Maps. Proceedings of SIGGRAPH 87. pp. 273-281.
Carmo, M. do. Differential Geometry of Curves and Surfaces. Prentice Hall. 1976.
Certain, Andrew, Jovan Popovic, Tony DeRose, Tom Duchamp, David Salesin, and Werner Stuetzle. Interactive Multiresolution Surface Viewing. Proceedings of SIGGRAPH 96. pp. 91-98.
Chiba, N., T. Nishizeki, S. Abe, and T. Ozawa. A Linear Algorithm for Embedding Planar Graphs using PQ-Trees. Journal of Computer and System Sciences. vol. 30(1). 1985. pp. 54-76.
Clarkson, Kenneth L. Algorithms for Polytope Covering and Approximation. Proceedings of 3rd Workshop on Algorithms and Data Structures. 1993. pp. 246-252.
Cohen, Jonathan, Dinesh Manocha, and Marc Olano. Simplifying Polygonal Models using Successive Mappings. Proceedings of IEEE Visualization '97. pp. 395-402.
Cohen, Jonathan, Marc Olano, and Dinesh Manocha. Appearance-Preserving Simplification. Proceedings of ACM SIGGRAPH 98. pp. 115-122.
Cohen, Jonathan, Amitabh Varshney, Dinesh Manocha, Gregory Turk, Hans Weber, Pankaj Agarwal, Frederick Brooks, and William Wright. Simplification Envelopes. Proceedings of SIGGRAPH 96. pp. 119-128.
Cook, Robert L. Shade Trees. Proceedings of SIGGRAPH 84. pp. 223-231.
Darsa, Lucia, Bruno Costa Silva, and Amitabh Varshney. Navigating Static Environments using Image-Space Simplification and Morphing. Proceedings of 1997 Symposium on Interactive 3D Graphics. pp. 25-34.
Das, G. and D. Joseph. The Complexity of Minimum Convex Nested Polyhedra. Proceedings of 2nd Canadian Conference on Computational Geometry. 1990. pp. 296-301.
DeFloriani, Leila, Paola Magillo, and Enrico Puppo. Building and Traversing a Surface at Variable Resolution. Proceedings of IEEE Visualization '97. pp. 103-110.
DeRose, Tony, Michael Lounsbery, and J. Warren. Multiresolution Analysis for Surfaces of Arbitrary Topology Type. Technical Report TR 93-10-05. Department of Computer Science, University of Washington. 1993.
Di Battista, Giuseppe, Peter Eades, Roberto Tamassia, and Ioannis G. Tollis. Algorithms for Drawing Graphs: An Annotated Bibliography. Computational Geometry Theory and Applications. vol. 4. 1994. pp. 235-282.
Dörrie, H. Euler's Problem of Polygon Division. 100 Great Problems of Elementary Mathematics: Their History and Solutions. Dover, New York. 1965. pp. 21-27.
Eck, Matthias, Tony DeRose, Tom Duchamp, Hugues Hoppe, Michael Lounsbery, and Werner Stuetzle. Multiresolution Analysis of Arbitrary Meshes. Proceedings of SIGGRAPH 95. pp. 173-182.
El-Sana, Jihad and Amitabh Varshney. Controlled Simplification of Genus for Polygonal Models. Proceedings of IEEE Visualization '97. pp. 403-410.
Erikson, Carl. Polygonal Simplification: An Overview. Technical Report TR96-016. Department of Computer Science, University of North Carolina at Chapel Hill. 1996.
Erikson, Carl and Dinesh Manocha. Simplification Culling of Static and Dynamic Scene Graphs. Technical Report TR98-009. Department of Computer Science, University of North Carolina at Chapel Hill. 1998.
Foley, James D., Andries van Dam, Steven K. Feiner, and John F. Hughes. Computer Graphics: Principles and Practice. The Systems Programming Series. 2nd edition. Addison-Wesley, Reading, MA. 1990. 1175 pages.
Fournier, Alain. Normal Distribution Functions and Multiple Surfaces. Proceedings of Graphics Interface '92 Workshop on Local Illumination. pp. 45-52.
Fraysseix, H. de, J. Pach, and R. Pollack. How to Draw a Planar Graph on a Grid. Combinatorica. vol. 10. 1990. pp. 41-51.
Garland, Michael and Paul Heckbert. Surface Simplification using Quadric Error Metrics. Proceedings of SIGGRAPH 97. pp. 209-216.
Gieng, Tran S., Bernd Hamann, Kenneth I. Joy, Gregory L. Schlussmann, and Isaac J. Trotts. Smooth Hierarchical Surface Triangulations. Proceedings of IEEE Visualization '97. pp. 379-386.
Guéziec, André. Surface Simplification with Variable Tolerance. Proceedings of Second Annual International Symposium on Medical Robotics and Computer Assisted Surgery (MRCAS '95). pp. 132-139.
Hamann, Bernd. A Data Reduction Scheme for Triangulated Surfaces. Computer Aided Geometric Design. vol. 11. 1994. pp. 197-214.
Hanrahan, Pat and Jim Lawson. A Language for Shading and Lighting Calculations. Proceedings of SIGGRAPH 90. pp. 289-298.
He, Taosong, Lichan Hong, Amitabh Varshney, and Sidney Wang. Controlled Topology Simplification. IEEE Transactions on Visualization and Computer Graphics. vol. 2(2). 1996. pp. 171-184.
Heckbert, Paul and Michael Garland. Survey of Polygonal Simplification Algorithms. SIGGRAPH 97 Course Notes. 1997.
Hoffmann, Christoph M. Geometric and Solid Modeling. in B. A. Barsky, ed. The Morgan Kaufmann Series in Computer Graphics and Geometric Modeling. Morgan Kaufmann, San Mateo, California. 1989. 338 pages.
Hoppe, Hugues. Progressive Meshes. Proceedings of SIGGRAPH 96. pp. 99-108.
Hoppe, Hugues. View-Dependent Refinement of Progressive Meshes. Proceedings of SIGGRAPH 97. pp. 189-198.
Hoppe, Hugues, Tony DeRose, Tom Duchamp, John McDonald, and Werner Stuetzle. Mesh Optimization. Proceedings of SIGGRAPH 93. pp. 19-26.
Hughes, Merlin, Anselmo Lastra, and Eddie Saxe. Simplification of Global-Illumination Meshes. Proceedings of Eurographics '96, Computer Graphics Forum. pp. 339-345.
Kajiya, Jim. Anisotropic Reflection Models. Proceedings of SIGGRAPH 85. pp. 15-21.
Kalvin, Alan D. and Russell H. Taylor. Superfaces: Polygonal Mesh Simplification with Bounded Error. IEEE Computer Graphics and Applications. vol. 16(3). 1996. pp. 64-77.
Kent, James R., Wayne E. Carlson, and Richard E. Parent. Shape Transformation for Polyhedral Objects. Proceedings of SIGGRAPH 92. pp. 47-54.
Klein, Reinhard. Multiresolution Representations for Surface Meshes Based on the Vertex Decimation Method. Computers and Graphics. vol. 22(1). 1998. pp. 13-26.
Klein, Reinhard and J. Krämer. Multiresolution Representations for Surface Meshes. Proceedings of Spring Conference on Computer Graphics 1997. June 5-8. pp. 57-66.
Klein, Reinhard, Gunther Liebich, and Wolfgang Straßer. Mesh Reduction with Error Control. Proceedings of IEEE Visualization '96.
Koenderink, Jan J. Solid Shape. in J. M. Brady, D. G. Bobrow and R. Davis, eds. Series in Artificial Intelligence. MIT Press, Cambridge, MA. 1989. 699 pages.
Kolman, B. and R. Beck. Elementary Linear Programming with Applications. Academic Press, New York. 1980.
Krishnamurthy, Venkat and Marc Levoy. Fitting Smooth Surfaces to Dense Polygon Meshes. Proceedings of SIGGRAPH 96. pp. 313-324.
Lee, Aaron W. F., Wim Sweldens, Peter Schröder, Lawrence Cowsar, and David Dobkin. MAPS: Multiresolution Adaptive Parameterization of Surfaces. Proceedings of SIGGRAPH 98. pp. 95-104.
Luebke, David and Carl Erikson. View-Dependent Simplification of Arbitrary Polygonal Environments. Proceedings of SIGGRAPH 97. pp. 199-208.
Lyon, Richard F. Phong Shading Reformulation for Hardware Renderer Simplification. Technical Report #43. Apple Computer. 1993.
Maciel, Paulo W. C. and Peter Shirley. Visual Navigation of Large Environments using Textured Clusters. Proceedings of 1995 Symposium on Interactive 3D Graphics. pp. 95-102.
Maillot, Jérôme, Hussein Yahia, and Anne Veroust. Interactive Texture Mapping. Proceedings of SIGGRAPH 93. pp. 27-34.
Mitchell, Joseph S. B. and Subhash Suri. Separation and Approximation of Polyhedral Surfaces. Proceedings of 3rd ACM-SIAM Symposium on Discrete Algorithms. 1992. pp. 296-306.
Murali, T. M. and Thomas A. Funkhouser. Consistent Solid and Boundary Representations from Arbitrary Polygonal Data. Proceedings of 1997 Symposium on Interactive 3D Graphics. April 27-30. pp. 155-162, 196.
Olano, Marc and Anselmo Lastra. A Shading Language on Graphics Hardware: The PixelFlow Shading System. Proceedings of SIGGRAPH 98. July 19-24. pp. 159-168.
O'Neill, Barrett. Elementary Differential Geometry. Academic Press, New York, NY. 1966. 411 pages.
O'Rourke, Joseph. Computational Geometry in C. Cambridge University Press. 1994. 357 pages.
Pedersen, Hans. A Framework for Interactive Texturing Operations on Curved Surfaces. Proceedings of SIGGRAPH 96. pp. 295-302.
Peercy, Mark, John Airey, and Brian Cabral. Efficient Bump Mapping Hardware. Proceedings of SIGGRAPH 97. pp. 303-306.
Plouffe, Simon and Neil James Alexander Sloane. The Encyclopedia of Integer Sequences. Academic Press. 1995. pp. 587.
Rohlf, John and James Helman. IRIS Performer: A High Performance Multiprocessing Toolkit for Real-Time 3D Graphics. Proceedings of SIGGRAPH 94. July 24-29. pp. 381-395.
Ronfard, Remi and Jarek Rossignac. Full-range Approximation of Triangulated Polyhedra. Computer Graphics Forum. vol. 15(3). 1996. pp. 67-76 and 462.
Rossignac, Jarek and Paul Borrel. Multi-Resolution 3D Approximations for Rendering. Modeling in Computer Graphics. Springer-Verlag. 1993. pp. 455-465.
Rossignac, Jarek and Paul Borrel. Multi-Resolution 3D Approximations for Rendering Complex Scenes. Technical Report RC 17687-77951. IBM Research Division, T. J. Watson Research Center. Yorktown Heights, NY 10958. 1992.
Schikore, Daniel and Chandrajit Bajaj. Decimation of 2D Scalar Data with Error Control. Technical Report CSD-TR-95-004. Department of Computer Science, Purdue University. 1995.
Schroeder, William J., Jonathan A. Zarge, and William E. Lorensen. Decimation of Triangle Meshes. Proceedings of SIGGRAPH 92. pp. 65-70.
Seidel, R. Linear Programming and Convex Hulls Made Easy. Proceedings of 6th Annual ACM Symposium on Computational Geometry. 1990. pp. 211-215.
Shade, Jonathan, Dani Lischinski, David Salesin, Tony DeRose, and John Snyder. Hierarchical Image Caching for Accelerated Walkthroughs of Complex Environments. Proceedings of SIGGRAPH 96. pp. 75-82.
Turk, Greg. Re-tiling Polygonal Surfaces. Proceedings of SIGGRAPH 92. pp. 55-64.
Upstill, Steve. The RenderMan Companion. Addison-Wesley. 1989.
van Dam, Andries. PHIGS+ Functional Description, Revision 3.0. Computer Graphics. vol. 22(3). 1988. pp. 125-218.
Varshney, Amitabh. Hierarchical Geometric Approximations. Ph.D. Thesis. Department of Computer Science. University of North Carolina at Chapel Hill. 1994.
Westin, Steven H., James R. Arvo, and Kenneth E. Torrance. Predicting Reflectance Functions from Complex Surfaces. Proceedings of SIGGRAPH 92. pp. 255-264.
Williams, Lance. Pyramidal Parametrics. Proceedings of SIGGRAPH 83. pp. 1-11.
Xia, Julie C., Jihad El-Sana, and Amitabh Varshney. Adaptive Real-Time Level-of-Detail-Based Rendering for Polygonal Models. IEEE Transactions on Visualization and Computer Graphics. vol. 3(2). 1997. pp. 171-183.