Page 1: Lecture24

An overview of lecture

• Geometric Algorithms

– Range searching.

– Nearest neighbor.

– Finding intersections of geometric objects.

• An optimal parallel algorithm for the 2D

convex hull problem,

• Some applications of the 2D convex hull

algorithm.

Page 2: Lecture24

Geometric search: Overview

• Types of data. Points, lines, planes, polygons,

circles, ...

• This lecture: Sets of N objects.

• Geometric problems extend to higher dimensions.

– Good algorithms also extend to higher dimensions.

– Curse of dimensionality.

• Basic problems.

– Range searching.

– Nearest neighbor.

– Finding intersections of geometric objects.

Page 3: Lecture24

Range Searching

1D Range Search

• Extension to symbol-table ADT with comparable keys.

– Insert key-value pair.

– Search for key k.

– How many records have keys between k1 and k2?

– Iterate over all records with keys between k1 and k2.

• Application: database queries.

• Geometric intuition.

– Keys are points on a line.

– How many points in a given interval?

Example trace (operation → resulting keys / answer):

insert B        → B
insert D        → B D
insert A        → A B D
insert I        → A B D I
insert H        → A B D H I
insert F        → A B D F H I
insert P        → A B D F H I P
count G to K    → 2
search G to K   → H I
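For illustration (not part of the lecture), the same trace can be reproduced with Java's ordered set java.util.TreeSet, whose subSet view provides both the range count and the range search:

import java.util.NavigableSet;
import java.util.TreeSet;

public class RangeSearch1D {
    public static void main(String[] args) {
        TreeSet<String> st = new TreeSet<>();
        // Insertions in the order shown on the slide.
        for (String key : new String[] {"B", "D", "A", "I", "H", "F", "P"})
            st.add(key);

        // Keys in the closed interval [G, K]: subSet gives a sorted view.
        NavigableSet<String> range = st.subSet("G", true, "K", true);
        System.out.println("count G to K = " + range.size()); // 2
        System.out.println("search G to K = " + range);       // [H, I]
    }
}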

Page 4: Lecture24

1D Range Search: Implementations

• Range search. How many records have keys between k1 and k2?

• Ordered array. Slow insert, binary search for k1 and k2 to find range.

• Hash table. No reasonable algorithm (key order lost in hash).

• BST. In each node x, maintain the number of nodes in the tree rooted at x.

Search for the smallest element ≥ k1 and the largest element ≤ k2.

                insert    count    range search
ordered array   N         log N    R + log N
hash table      1         N        N
BST             log N     log N    R + log N

N = # records, R = # records that match

[Figure: range search in a BST, distinguishing nodes examined, nodes within the interval, and nodes not touched.]
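For concreteness, a minimal (unbalanced) BST sketch of this approach is shown below; a balanced BST would give the log N guarantees in the table. This is an illustration, not the lecture's code; count(k1, k2) is computed from the rank function.

public class RangeBST<Key extends Comparable<Key>> {
    private class Node {
        Key key; Node left, right; int size = 1;   // size = nodes in this subtree
        Node(Key key) { this.key = key; }
    }
    private Node root;

    public void insert(Key key) { root = insert(root, key); }
    private Node insert(Node x, Key key) {
        if (x == null) return new Node(key);
        int cmp = key.compareTo(x.key);
        if      (cmp < 0) x.left  = insert(x.left,  key);
        else if (cmp > 0) x.right = insert(x.right, key);
        x.size = 1 + size(x.left) + size(x.right);
        return x;
    }
    private int size(Node x) { return x == null ? 0 : x.size; }

    // Number of keys strictly less than key.
    public int rank(Key key) { return rank(root, key); }
    private int rank(Node x, Key key) {
        if (x == null) return 0;
        int cmp = key.compareTo(x.key);
        if (cmp < 0) return rank(x.left, key);
        if (cmp > 0) return 1 + size(x.left) + rank(x.right, key);
        return size(x.left);
    }

    public boolean contains(Key key) {
        Node x = root;
        while (x != null) {
            int cmp = key.compareTo(x.key);
            if      (cmp < 0) x = x.left;
            else if (cmp > 0) x = x.right;
            else return true;
        }
        return false;
    }

    // Number of keys in the closed interval [k1, k2], assuming k1 <= k2.
    public int count(Key k1, Key k2) {
        int c = rank(k2) - rank(k1);
        return contains(k2) ? c + 1 : c;
    }
}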

Page 5: Lecture24

2D Orthogonal Range Search

• Extension to symbol-table ADT with 2D keys.

– Insert a 2D key.

– Search for a 2D key.

– Range search: find all keys that lie in a 2D range?

– Range count: how many keys lie in a 2D range?

• Applications: networking, circuit design, databases.

• Geometric interpretation.

– Keys are points in the plane.

– Find all points in a given h-v rectangle?

Page 6: Lecture24

2D Orthogonal Range Search:

Grid Implementation

• Grid implementation.

– Divide space into M-by-M grid of squares.

– Create linked list for each square.

– Use 2D array to directly access relevant square.

– Insert: insert (x, y) into corresponding grid square.

– Range search: examine only those grid squares that could

have points in the rectangle.

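A hedged sketch of this grid method (illustration only), assuming points lie in the unit square [0,1) x [0,1) and the query rectangle is given by its lower-left and upper-right corners:

import java.util.ArrayList;
import java.util.List;

public class GridRangeSearch {
    private final int M;
    private final List<double[]>[][] grid;   // M-by-M array of lists of points

    @SuppressWarnings("unchecked")
    public GridRangeSearch(int m) {
        M = m;
        grid = new List[M][M];
        for (int i = 0; i < M; i++)
            for (int j = 0; j < M; j++)
                grid[i][j] = new ArrayList<>();
    }

    private int cell(double coord) {          // coordinate -> grid index
        return Math.min(M - 1, (int) (coord * M));
    }

    public void insert(double x, double y) {  // O(1): drop into its square
        grid[cell(x)][cell(y)].add(new double[] {x, y});
    }

    // Report all points in [xlo, xhi] x [ylo, yhi], examining only the grid
    // squares that can intersect the rectangle.
    public List<double[]> range(double xlo, double ylo, double xhi, double yhi) {
        List<double[]> result = new ArrayList<>();
        for (int i = cell(xlo); i <= cell(xhi); i++)
            for (int j = cell(ylo); j <= cell(yhi); j++)
                for (double[] p : grid[i][j])
                    if (p[0] >= xlo && p[0] <= xhi && p[1] >= ylo && p[1] <= yhi)
                        result.add(p);
        return result;
    }
}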

Page 7: Lecture24

2D Orthogonal Range Search:

Grid Implementation Costs

• Space-time tradeoff.

– Space: M² + N.

– Time: 1 + N/M² per grid square examined, on average.

• Choose grid square size to tune performance.

– Too small: wastes space.

– Too large: too many points per grid square.

– Rule of thumb: √N by √N grid.

• Running time. [if points are evenly distributed]

– Initialize: O(N).

– Insert: O(1).

– Range: O(1) per point in range.

Page 8: Lecture24

Clustering

• Grid implementation. Fast, simple solution for well-distributed

points.

• Problem. Clustering is a well-known phenomenon in geometric

data.

• Ex: USA map data.

– 80,000 points, 20,000 grid squares.

– Half the grid squares are empty.

– Half the points have 10 others in same grid square.

– Ten percent have 100 others in same grid square.

• Need data structure that gracefully adapts to data.

Page 9: Lecture24

Space Partitioning Trees

• Space partitioning tree. Use a tree to represent the recursive

hierarchical subdivision of d-dimensional space.

• BSP tree: recursively divide space into two regions.

• Quad tree: recursively divide the plane into four quadrants.

• Octree: recursively divide 3D space into eight octants.

• kD tree: recursively divide k-dimensional space into two half-spaces.

• Applications.

– Ray tracing.

– Flight simulators.

– N-body simulation.

– Collision detection.

– Astronomical databases.

– Adaptive mesh generation.

– Accelerate rendering in Doom.

– Hidden surface removal and shadow casting.

[Figures: example subdivisions by a grid, a quadtree, a kD tree, and a BSP tree.]

Page 10: Lecture24

Quad Trees

• Quad tree. Recursively partition plane into 4 quadrants.

• Implementation: 4-way tree.

• Good clustering performance is a primary reason to choose quad trees

over grid methods.

[Figure: points a through h in the plane and the corresponding 4-way quad tree.]

public class QuadTree {
    private Quad quad;                  // 2D key of this node
    private Value value;                // value stored at this node
    private QuadTree NW, NE, SW, SE;    // the four quadrant subtrees
}
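The slide's Quad and Value types are not shown; as a hypothetical, self-contained sketch of the 4-way insertion, a point quad tree over plain coordinates might look like this:

public class PointQuadTree {
    private double x, y;                      // point stored at this node
    private PointQuadTree NW, NE, SW, SE;     // four quadrant subtrees

    private PointQuadTree(double x, double y) { this.x = x; this.y = y; }

    // Recursively descend into the quadrant of (px, py) relative to this node.
    public static PointQuadTree insert(PointQuadTree t, double px, double py) {
        if (t == null) return new PointQuadTree(px, py);
        if      (px <  t.x && py >= t.y) t.NW = insert(t.NW, px, py);
        else if (px >= t.x && py >= t.y) t.NE = insert(t.NE, px, py);
        else if (px <  t.x && py <  t.y) t.SW = insert(t.SW, px, py);
        else                             t.SE = insert(t.SE, px, py);
        return t;
    }
}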

Page 11: Lecture24

kD Trees

• kD tree. Recursively partition k-dimensional space into 2 halfspaces.

• Implementation: BST, but cycle through dimensions.

• Efficient, simple data structure for processing k-dimensional data.

– Adapts well to clustered data.

– Adapts well to high dimensional data.

– Discovered by an undergrad in an algorithms class!

[Figure: a kD-tree node p at a level with level ≡ i (mod k); the left subtree holds the points whose i-th coordinate is less than p's, and the right subtree holds the points whose i-th coordinate is greater than p's.]
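A minimal sketch of kD-tree insertion that cycles through the k dimensions as described above (illustration only; points are assumed to be double[k] arrays):

public class KdTree {
    private final int k;                        // dimension of the space
    private Node root;

    private static class Node {
        double[] point; Node left, right;
        Node(double[] p) { point = p; }
    }

    public KdTree(int k) { this.k = k; }

    public void insert(double[] p) { root = insert(root, p, 0); }

    private Node insert(Node x, double[] p, int level) {
        if (x == null) return new Node(p);
        int i = level % k;                      // coordinate compared at this level
        if (p[i] < x.point[i]) x.left  = insert(x.left,  p, level + 1);
        else                   x.right = insert(x.right, p, level + 1);
        return x;
    }
}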

Page 12: Lecture24

Summary

• Basis of many geometric algorithms: search in a planar subdivision.

                   grid                  2D tree        Voronoi diagram         intersecting lines
basis              N h-v lines           N points       N points                N lines
representation     2D array of N lists   N-node BST     N-node multilist        ~N-node BST
cells              ~N squares            N rectangles   N polygons              ~N triangles
search cost        1                     log N          log N                   log N
extend to kD?      too many cells        easy           cells too complicated   use (k-1)D hyperplane

Page 13: Lecture24

Geometric Intersection

• Problem. Find all intersecting pairs among a set of N geometric objects.

• Applications. CAD, games, movies, virtual reality.

• Simple version: 2D, all objects are horizontal or vertical line

segments.

• Brute force. Test all (N choose 2) pairs of line segments for intersection.

• Sweep line. Efficient solution extends to 3D and general objects.

Page 14: Lecture24

Orthogonal Segment Intersection:

Sweep Line Algorithm

• Sweep vertical line from left to right.

– Event times: x-coordinates of h-v line segments.

– Left endpoint of h-segment: insert y-coordinate into ST.

– Right endpoint of h-segment: remove y-coordinate from ST.

– v-segment: range search for interval of y endpoints.

[Figure: sweep line over h-v segments, marking "insert y", "delete y", and "range search" events.]
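A sequential Java sketch of this sweep (illustration only), using java.util.TreeSet for the 1D range search and assuming axis-parallel segments given as {x1, y1, x2, y2} with distinct y-coordinates for the horizontal ones:

import java.util.*;

public class OrthoIntersect {
    // Returns the intersection points of horizontal and vertical segments.
    public static List<double[]> intersections(List<double[]> segs) {
        // Event: {x, type, ...} with type 0 = insert y, 1 = vertical query, 2 = delete y.
        List<double[]> events = new ArrayList<>();
        for (double[] s : segs) {
            if (s[1] == s[3]) {                            // horizontal segment
                double xlo = Math.min(s[0], s[2]), xhi = Math.max(s[0], s[2]);
                events.add(new double[] {xlo, 0, s[1]});
                events.add(new double[] {xhi, 2, s[1]});
            } else {                                       // vertical segment
                double ylo = Math.min(s[1], s[3]), yhi = Math.max(s[1], s[3]);
                events.add(new double[] {s[0], 1, ylo, yhi});
            }
        }
        // Sort by x; at equal x, process inserts before queries before deletes.
        events.sort(Comparator.comparingDouble((double[] e) -> e[0])
                              .thenComparingDouble(e -> e[1]));
        TreeSet<Double> active = new TreeSet<>();          // y's of h-segments on the sweep line
        List<double[]> out = new ArrayList<>();
        for (double[] e : events) {
            if      (e[1] == 0) active.add(e[2]);
            else if (e[1] == 2) active.remove(e[2]);
            else                                           // vertical: 1D range search in y
                for (double y : active.subSet(e[2], true, e[3], true))
                    out.add(new double[] {e[0], y});
        }
        return out;
    }
}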

Page 15: Lecture24

Orthogonal Segment Intersection:

Sweep Line Algorithm

• Sweep line: reduces 2D orthogonal segment intersection

problem to 1D range searching!

• Running time of sweep line algorithm.

– Put x-coordinates on a PQ (or sort). O(N log N)

– Insert y-coordinate into SET. O(N log N)

– Delete y-coordinate from SET. O(N log N)

– Range search. O(R + N log N)

• Efficiency relies on judicious use of data structures.

N = # line segments

R = # intersections

Page 16: Lecture24

Line Segment Intersection:

Implementation

• Efficient implementation of sweep line algorithm.

– Maintain PQ of important x-coordinates: endpoints and intersections.

– Maintain ST of segments intersecting sweep line, sorted by y.

– O(R log N + N log N).

• Implementation issues.

– Degeneracy.

– Floating point precision.

– Use PQ since intersection events aren't known ahead of time.

Page 17: Lecture24

The convex hull problem

Input: A set S = {p1, p2, …, pn} of n points in the plane.

Output: The convex hull CH(S) of these n points.

• The convex hull is the smallest convex polygon containing all the n points.

• Each vertex of CH(S) is called an extreme point and the convex hull is output as a list of the extreme points.

Page 18: Lecture24

The convex hull problem

• Let pmax and pmin be the two points in the set S with the maximum and minimum x-coordinates.

• Then pmax and pmin are convex hull vertices.

• The line segment pmax pmin divides the convex hull into two parts, the upper hull and the lower hull.

Page 19: Lecture24

The convex hull problem

• We use the notation x(p) and y(p) to denote the x and y coordinates of a point p.

• Given a line L specified by the equation y = ax + b, and a point q = (x(q), y(q)):

• We say q is below L if y(q) < a·x(q) + b.

• We also say q is above L if y(q) > a·x(q) + b.
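In code, the test is a single comparison (illustration, not from the lecture):

public final class LineSideTest {
    // For L: y = a x + b and q = (x0, y0), q is below L when y0 < a*x0 + b.
    public static boolean below(double a, double b, double x0, double y0) {
        return y0 < a * x0 + b;
    }
    // q is above L when y0 > a*x0 + b.
    public static boolean above(double a, double b, double x0, double y0) {
        return y0 > a * x0 + b;
    }
}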

Page 20: Lecture24

The convex hull problem

• Given the set S of n points, we can find pmax and pmin in O(n) time.

• We can find all the points above and below the line pmax pmin, also in O(n) time.

• We can compute the convex hull of all the points above the line and call it UH(S).

• Similarly, we can compute the convex hull of all the points below the line and call it LH(S).

• At the end, we can stitch these two hulls together at the two points pmax and pmin.

Page 21: Lecture24

Sequential complexity

• The convex hull of n planar points can be constructed in Θ(n log n) time sequentially.

• The lower bound can be proved by

showing that the convex hull problem is

equivalent to sorting.

• We need to design an O(n log n) work

algorithm to achieve optimality.

Page 22: Lecture24

Computing the upper hull

• We will discuss an algorithm for computing the

upper hull of the set S. The algorithm for

computing the lower hull is exactly the same.

• A line L is tangent to a convex polygon P if all the

vertices of P are on the same side of L.

Page 23: Lecture24

A divide-and-conquer algorithm

• We discuss a divide-and-conquer

algorithm for computing the upper hull.

There are two phases, top-down and

bottom-up.

• First, we sort the points according to x-

coordinates.

Page 24: Lecture24

A divide-and-conquer algorithm

• In the top-down phase, we divide the point

set recursively into two parts and compute

the convex hull when the size of each

subproblem is small.

• In the bottom-up phase, we merge these

hulls pairwise to get the upper hull.

• The strategy is similar to the sequential merge sort algorithm.
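For reference, a sequential sketch of this divide-and-conquer structure (an illustration, not the lecture's parallel algorithm), assuming points {x, y} with distinct x-coordinates, already sorted by x. The merge below simply rescans the two hulls' vertices in one pass; it stands in for the O(1)-time tangent search developed on the following slides.

import java.util.*;

public class UpperHullDC {
    public static List<double[]> upperHull(List<double[]> pts) {
        if (pts.size() <= 2) return new ArrayList<>(pts);
        int mid = pts.size() / 2;                          // split by x (points are sorted)
        List<double[]> left  = upperHull(pts.subList(0, mid));
        List<double[]> right = upperHull(pts.subList(mid, pts.size()));
        return merge(left, right);
    }

    // Simplified merge: recompute the upper hull of the two hulls' vertices
    // with one left-to-right scan.
    private static List<double[]> merge(List<double[]> left, List<double[]> right) {
        List<double[]> all = new ArrayList<>(left);
        all.addAll(right);                                 // still sorted by x
        List<double[]> hull = new ArrayList<>();
        for (double[] p : all) {
            while (hull.size() >= 2 && !rightTurn(hull.get(hull.size() - 2),
                                                  hull.get(hull.size() - 1), p))
                hull.remove(hull.size() - 1);
            hull.add(p);
        }
        return hull;
    }

    // True if a -> b -> c makes a clockwise (right) turn, as required on an upper hull.
    private static boolean rightTurn(double[] a, double[] b, double[] c) {
        double cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]);
        return cross < 0;
    }
}

In the parallel algorithm, this merge is replaced by the common-tangent computation described next, so that each level of the recursion costs only O(1) time and O(n) work.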

Page 25: Lecture24

A divide-and-conquer algorithm

Page 26: Lecture24

Merging two upper hulls

• The main problem in combining two upper hulls to form a single upper hull is to compute a common tangent to the two hulls.

• To achieve O(n log n) work, we need to complete the merging of all the upper hulls at a level of the tree in O(1) time.

Page 27: Lecture24

Merging two upper hulls

• We consider two upper hulls UH(S1) and UH(S2).

• Our aim is to find a common tangent to these two

upper hulls.

• We first find a tangent to UH(S2) from a point ri on

UH(S1).

Page 28: Lecture24

Merging two upper hulls

• Suppose the line ri qi is the tangent to UH(S2) from ri.

• Suppose ql is another vertex of UH(S2).

• Given the line ri ql, we first try to locate the tangent ri qi.

Page 29: Lecture24

Merging two upper hulls

• In O(1) time, we can say whether the tangent ri qi is above or below the line ri ql.

• If the neighboring vertices ql-1 and ql+1 of ql are on either side of ri ql (one above and one below), then ri qi is above ql.
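A small sketch of this O(1) test (illustration only, assuming general position and that ri lies to the left of UH(S2)):

public final class TangentTest {
    // true if p is strictly below the line through a and b (assumes a.x < b.x).
    static boolean below(double[] a, double[] b, double[] p) {
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) < 0;
    }

    // true exactly when ql is the tangent point qi of UH(S2) as seen from ri,
    // i.e. both neighbors qlMinus1 and qlPlus1 lie below the line ri-ql.
    static boolean isTangentPoint(double[] ri, double[] ql,
                                  double[] qlMinus1, double[] qlPlus1) {
        return below(ri, ql, qlMinus1) && below(ri, ql, qlPlus1);
    }

    // true when ql's neighbors lie on either side of ri-ql, i.e. the tangent
    // ri qi passes above ql and the search must continue.
    static boolean tangentIsAbove(double[] ri, double[] ql,
                                  double[] qlMinus1, double[] qlPlus1) {
        return below(ri, ql, qlMinus1) != below(ri, ql, qlPlus1);
    }
}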

Page 30: Lecture24

Merging two upper hulls

• Suppose UH(S2) has s points, given in an array according to their order on UH(S2).

• We allocate √s processors, divide the points on UH(S2) into √s intervals, and do a parallel search.

• We can identify the tangent point qi in O(log s / log √s) = O(1) time.

Page 31: Lecture24

Merging two upper hulls

• Suppose the common tangent to UH(S1) and UH(S2) is the line uv.

• u is on UH(S1) and v is on UH(S2).

• If we know the line ri qi, we can say in O(1) time whether u is above or below the line ri qi.

Page 32: Lecture24

Merging two upper hulls

• Suppose there are t points on UH(S1), given in an array according to their order on UH(S1).

• We divide these t points into √t intervals; each interval contains √t points.

• We now do a parallel search in the following way.

Page 33: Lecture24

Merging two upper hulls

• We allocate √t · √s processors for the parallel search.

• Suppose ri is the boundary vertex of one of the √t intervals.

• For each such ri, we can find the tangent ri qi to UH(S2) in O(1) time using √s processors.

Page 34: Lecture24

Merging two upper hulls

• Hence, we can identify two boundary vertices rj and rk such that u is above the tangent from rj and below the tangent from rk.

• Hence, u must be one of the √t vertices in between rj and rk.

• This computation takes O(1) time and √s · √t = O(n) processors.

Page 35: Lecture24

Merging two upper hulls

• We can do a similar computation to find a group of √s vertices on UH(S2) in which v is a member.

• This computation again takes O(1) time and √s · √t = O(n) processors.

Page 36: Lecture24

Merging two upper hulls

• Now, we have √s candidate vertices on UH(S2) and √t candidate vertices on UH(S1).

• There are √s · √t = O(n) possible lines if we join one point from UH(S1) and one point from UH(S2).

• For each of these O(n) lines, we can check in O(1) time whether the line is a common tangent to UH(S1) and UH(S2).

Page 37: Lecture24

Merging two upper hulls

• Suppose uv is one such line.

• ul and ur are the two neighboring vertices of u. Also, vl and vr are the two neighboring vertices of v.

• uv is the common tangent to both UH(S1) and UH(S2) if all the points ul, ur, vl, vr are below uv.
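The O(1) check can be written directly from the "below" test (illustration only, assuming u lies to the left of v):

public final class CommonTangentCheck {
    // true if p is strictly below the line through a and b (assumes a.x < b.x).
    static boolean below(double[] a, double[] b, double[] p) {
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) < 0;
    }

    // uv is a common tangent of UH(S1) and UH(S2) when the neighbors
    // ul, ur of u and vl, vr of v all lie below the line uv.
    static boolean isCommonTangent(double[] u, double[] v,
                                   double[] ul, double[] ur,
                                   double[] vl, double[] vr) {
        return below(u, v, ul) && below(u, v, ur)
            && below(u, v, vl) && below(u, v, vr);
    }
}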

Page 38: Lecture24

Merging two upper hulls

• For each of the O(n) lines, we can check this condition

in O(1) time.

• Hence, we can find a common tangent to UH(S1) and

UH(S2) in O(1) time and O(n) work.

• We can form another array of vertices containing the vertices in UH(S1) ∪ UH(S2) by deleting some parts of the arrays of UH(S1) and UH(S2) and merging the remaining parts.

Page 39: Lecture24

The convex hull algorithm

• We solve the problem through a divide and

conquer strategy.

• The depth of the recursion is O(log n) and we can

do the merging of the convex hulls at every level

of the recursion in O(1) time and O(n) work.

• Hence, the overall time required is O(log n) and

the overall work done is O(n log n) which is optimal.

• We need the CREW PRAM model due to the

concurrent reading in the parallel search algorithm.

Page 40: Lecture24

Intersection of half planes

• Consider a line L defined by the equation y = ax + b.

• L divides the entire plane into two half planes,

H+(L) and H-(L).

• H+(L) consists of all the points (x0, y0) such that y0 ≥ a·x0 + b.

• Similarly, H-(L) consists of all the points (x0, y0) such that y0 ≤ a·x0 + b.

• Intuitively, H+(L) is the set of points on or above the line L,

• and H-(L) is the set of points on or below the line L.

Page 41: Lecture24

Intersection of half planes

• For a set of lines, the intersection of the positive

half planes defined by these lines is a convex

region.

• However, the intersection may or may not be

bounded.

• Our aim is to compute the boundary of the

intersection.

Page 42: Lecture24

Dual transform

• Let T be a transformation that maps a point

p = (a, b) into the line T(p) defined by y = ax + b.

• The reverse transformation maps the line

L : y = ax + b into the point T(L) = (-a, b).
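A tiny sketch of the two transformations, with a numeric check of the property stated on the next slide (illustration only):

public final class DualTransform {
    // T(p): point (a, b) -> coefficients {a, b} of the line y = a x + b.
    static double[] pointToLine(double a, double b) {
        return new double[] {a, b};
    }
    // T(L): line y = a x + b -> the dual point (-a, b).
    static double[] lineToPoint(double a, double b) {
        return new double[] {-a, b};
    }

    public static void main(String[] args) {
        double[] p = {1, 2};          // point p = (1, 2)
        double a = 3, b = 1;          // line L: y = 3x + 1
        boolean pBelowL = p[1] < a * p[0] + b;             // 2 < 4 -> true
        double[] tl = lineToPoint(a, b);                   // T(L) = (-3, 1)
        // T(p) is the line y = 1*x + 2; its value at x = -3 is -1, which is below y = 1.
        boolean tpBelowTL = p[0] * tl[0] + p[1] < tl[1];
        System.out.println(pBelowL + " " + tpBelowTL);     // true true
    }
}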

Page 43: Lecture24

A property

Property: A point p is below a line L if and only if the line T(p) is below the point T(L).

– Consider a set of lines L1, L2, …, Ln, and the region C defined by C = ∩_{1 ≤ i ≤ n} H+(Li).

– The region C consists of all the points above all the lines Li, 1 ≤ i ≤ n.

Page 44: Lecture24

Intersection of half planes

• In the transformed domain, T(C) = { T(p) | p ∈ C } consists of all the lines above all the points T(Li), for 1 ≤ i ≤ n.

Page 45: Lecture24

Intersection of half planes

• The extreme points of the intersection of half planes correspond to the line segments between two consecutive vertices of the convex hull in the dual space.

Page 46: Lecture24

Intersection of half planes

• To compute the intersection of the half planes,

we first convert the lines into their dual points.

• Then we compute the convex hull of these dual

points.

• Finally, we get the extreme points of the

intersection of half planes by converting the line

segments between two consecutive extreme

points of the convex hull into points.

Page 47: Lecture24

Intersection of half planes

• The transformations take O(1) time each if

we allocate one processor for each line.

• The convex hull construction takes O(log n)

time and O(n log n) work on the CREW

PRAM.

Page 48: Lecture24

Two variable linear program

• The two-variable linear program problem is

defined as:

Minimize cx + dy (Objective function)

Subject to: ai x + bi y + ci ≥ 0, 1 ≤ i ≤ n. (Constraints)

Page 49: Lecture24

Two variable linear

programming

• Each constraint is a half plane. The

feasible region is a set of points satisfying

all the constraints.

• The solution of the linear program is a

point in the feasible region that minimizes

the objective function.

• The objective function is minimized at one

of the extreme points of the feasible region.

Page 50: Lecture24

Two variable linear

programming

• Hence, we can find all the O(n) extreme

points of the feasible region by the half

plane intersection algorithm.

• Then we can find the extreme point which

minimizes the objective function.

• The algorithm takes O(log n) time and

O(n log n) work on the CREW PRAM.
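A minimal sketch of this last step (illustration only): given the extreme points returned by the half-plane intersection, a linear scan (or, on the PRAM, a min-reduction in O(log n) time) picks the minimizing vertex.

import java.util.List;

public final class TwoVarLP {
    // Returns the extreme point minimizing c*x + d*y, or null if the list is empty.
    static double[] minimize(double c, double d, List<double[]> extremePoints) {
        double[] best = null;
        double bestVal = Double.POSITIVE_INFINITY;
        for (double[] p : extremePoints) {
            double val = c * p[0] + d * p[1];   // objective value at this vertex
            if (val < bestVal) { bestVal = val; best = p; }
        }
        return best;
    }
}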