INCREMENTAL ALGORITHMS: SOLVING PROBLEMS IN A CHANGING WORLD A Dissertation Presented to the Faculty of the Graduate School of Cornell University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy by Alexa Megan Sharp August 2007
We can use Theorem 3.6’s algorithm as a strictly 1/2-competitive algorithm in
the unweighted case, because r*_M ≥ 1/2. Furthermore, the algorithm of Theorem 3.4
finds the optimal strict competitive ratio. This proves the following two theorems.
Theorem 3.11 Incremental bipartite matching is strictly 1/2-competitive.
Theorem 3.12 There is a polynomial-time algorithm for 2-level incremental bi-
partite matching that achieves the optimal strict competitive ratio on every in-
stance.
When we consider weighted edges, the results of Theorems 3.5, 3.6, and 3.7 all
hold under the strict analysis.
3.4 Network Flow
Given a directed network G = (V, E) with source s, sink t, and capacity function
c : E → Q, a flow is a function f : E → Q satisfying
(i) capacity constraints f(e) ≤ c(e) for all e ∈ E, and
(ii) flow conservation f^in(v) = f^out(v) for all v ∈ V \ {s, t}.
Figure 3.5: An incremental network with solid level-one edges and dashed level-two
edges. Unless otherwise specified, the edges have unit capacity.
The maximum flow problem finds the maximum s-t flow f. In terms of the model
of Section 3.2, elements are unit s-t flow paths, and feasible solutions are flows
satisfying the given capacity function. The value of a flow f , denoted by |f |, is the
number of unit s-t flow paths it contains. Network flow is a well-studied problem in
computer science, with applications from airline scheduling to computer graphics.
It can be solved in polynomial time.
Incremental flow is defined on a directed network G = (V, E) with source s,
sink t, and a non-decreasing sequence of k capacity functions c_ℓ : E → Q that
define k feasible sets. A solution is a sequence of s-t flows (f_1, f_2, . . . , f_k) such that
f_ℓ is feasible with respect to capacity function c_ℓ, and f_ℓ(e) ≥ f_{ℓ−1}(e) for every
edge e. The flow f*_ℓ denotes the maximum single-level flow at level ℓ.
As an example, Figure 3.5 is a 2-level incremental flow network. We obtain a
(0, 2)-flow in this network by sending no level-1 flow and two units of level-2 flow
in parallel along paths s · u · t and s · v · t. Alternatively, we obtain a (1, 1)-flow
by sending one unit of level-1 flow along the only level-1 path s · u · v · t and no
additional level-2 flow. No (1, 2)-flow exists because using uv in level 1 precludes us
from sending any additional level-2 flow. In this chapter we only consider directed
integral flows; however, undirected and fractional flows are discussed in Chapter 5.
3.4.1 Aggregate Objective
Theorem 3.1 shows that for the aggregate objective, the incremental structure does
not affect the complexity of bipartite matching; this is an example of a polynomial
problem whose incremental version remains polynomial. However, the following
theorem demonstrates that the incremental structure affects network flow enough
to significantly change its complexity, thereby illustrating an intriguing dichotomy
between the closely related problems of bipartite matching and network flow.
Theorem 3.13 No algorithm for two-level incremental flow achieves the optimal
aggregate value on every instance in polynomial time, unless P = NP .
The proof of Theorem 3.13 follows from a reduction from 3-SAT. We are given a
3-SAT formula φ with variables V and clauses C. Each clause is a set of three
literals; the set of literals is the union of all positive literals v and all negative
literals v̄ for v ∈ V. We denote each literal occurrence as a clause-literal pair
(c ∈ C, ℓ ∈ c). Given the formula φ, we construct an instance of incremental flow
such that a 3-aggregate solution exists if and only if φ is satisfiable.
Literal Gadgets For every literal occurrence (c, `) we create a literal gadget:
an in vertex, an out vertex, and a level-1 edge between the two.
Clause Gadgets For every clause c ∈ C we create a clause gadget: an in vertex,
an out vertex, and the three literal gadgets of the form (c, ` ∈ c), joined by
level-1 edges as shown in Figure 3.6.
Variable Gadgets For every variable v ∈ V we create a variable gadget: an in
vertex, an out vertex, and all literal gadgets of the form (c, v) or (c, v̄) for
any c, linked together as shown in Figure 3.6: positive literals on one side,
negative literals on the other, and each side joined by level-2 edges.
Figure 3.6: The gadgets for clause c and variable v constructed from the formula
(u∨ v∨w)∧ (u∨ v∨w)∧ (u∨ v∨w)∧ (u∨ v∨w); the clauses are denoted a, b, c, d.
Literal gadgets appear inside labeled ovals.
Connectors We link the source, sink, and all clause gadgets together in series
with level-1 edges, forming the clause component; the same is done for the
variable gadgets with level-2 edges, forming the variable component. These
edges are called connectors and are shown in Figure 3.7(a).
Lemma 3.14 The maximum single-level flows have values |f*_1| = 1 and |f*_2| = 2.
Proof. The sink t has in-degree 1 and in-degree 2 in the level-1 and level-2 graphs,
respectively, indicating that |f*_1| ≤ 1 and |f*_2| ≤ 2. The level-1 graph is connected,
hence |f*_1| ≥ 1, and the level-2 graph has no cut of size 1, hence |f*_2| ≥ 2.
Lemma 3.15 There is a satisfying assignment iff there is a 3-aggregate flow.
[⇒] Given a satisfying assignment, we route one unit of level-1 flow through all
clause gadgets, passing through gadget c using literal gadget (c, `) for some true
literal `. We route an additional unit of flow at level two through all variable
gadgets, using only false literals by passing through the positive (negative) side of
gadget v if v is false (true). Such a flow is shown in Figure 3.7(b).
Figure 3.7: 3.7(a) The incremental instance for formula (u∨v∨w)∧(u∨v∨w)∧(u∨v∨w)∧(u∨v∨w); the clauses are a, b, c, d. Each of the twelve literal gadgets appears
once in a clause gadget and once in a variable gadget, and thus the seemingly
separate source-sink paths in 3.7(a) actually share many vertices. The dotted
paths in 3.7(b) show a (1, 2)-flow based on the assignment u = 0, v = 0, w = 1.
[⇐] We make the following observations concerning (1, 2)-flows in our construction:
1. The level-1 component of any (1, 2)-flow is a path including every level-1
connector and one literal from each clause gadget. This is because the three
literals of any clause form a level-1 cut, as does any level-1 connector.
2. In any (1, 2)-flow, level-2 flow at (c, v).in (or (c, v̄).in) must proceed to
(c, v).out ((c, v̄).out) and back to the positive (negative) side of variable gad-
get v. The only other path would be to c.out, whose sole out-edge is a level-1
connector already carrying level 1 flow (see observation 1).
3. In any (1, 2)-flow, level-2 flow at v.in proceeds through either all (c, v) gadgets
or all (c, v̄) gadgets to v.out. This is because v.in has two out-edges: one to a
sequence of all (c, v) gadgets and one to a sequence of all (c, v̄) gadgets. Once
flow proceeds to one of these sequences, induction on observation 2 implies it
passes through every literal gadget in the target sequence and ends at v.out.
4. The level-2 component of any (1, 2)-flow is a path passing through the posi-
tive or negative side of each variable gadget. The source vertex has only two
out-edges: a level-1 connector to a clause gadget and a level-2 connector to
a sequence of all variable gadgets. By observation 1, the level-1 connector is
already used, forcing the flow to the first variable gadget. Repeated applica-
tion of observation 3 implies that the flow proceeds through the positive or
negative side of every variable gadget to the sink.
By observation 4, every variable v carries level-2 flow through one side of its gadget.
We set v false if this flow passes through v’s positive side and true otherwise; under
this assignment, all false literals carry level-2 flow. By observation 1, one literal
from each clause must carry level-1 flow (and not level-2 flow) and thus cannot be
false. This assignment satisfies the given formula.
Theorem 3.13 follows from Lemma 3.15 and the polynomial nature of our re-
duction. Thus, finding the optimal aggregate value is not possible, and we turn
to approximation algorithms. Theorem 3.26 gives a (1/H_k)-approximation to the ag-
gregate objective for incremental flow; the next theorem proves that this approxi-
mation is tight. The proof relies on the reduction described in Theorem 3.13: we
make multiple copies of Theorem 3.13's clause and variable components, which
appear¹ over k levels rather than just two. If a clause component appears at level
ℓ_c and its corresponding (i.e., linked) variable component appears at level ℓ_v > ℓ_c,
then the observations of 3.15 are easily extended to the following lemma.
Lemma 3.16 Let ℓ′_c ≥ ℓ_c denote the earliest level in which clause component c
carries flow. If ℓ′_c < ℓ_v, then any flow through variable component v determines
a satisfying assignment. Also, any satisfying assignment can be used to achieve a
flow with separate flow paths through components c and v.
Theorem 3.17 It is NP-hard to find a β-approximation algorithm for the aggre-
gate objective of incremental flow for β > 1/H_k.
Proof. Suppose we have a (1/(H_k − ε))-approximation algorithm. We will show we
can solve any instance of 3-SAT by constructing an incremental flow network and
using the approximation algorithm to identify satisfiable formulas.
Let b = k⌈1/ε⌉. We create a b × b matrix of components as shown in Fig-
ure 3.8. For 0 ≤ a_0 ≤ a_1 ≤ · · · ≤ a_k ≤ b defined later, assign to level ℓ the
columns a_{ℓ−1} + 1 through a_ℓ; each level-ℓ column j contains variable components
v_{1j}, v_{2j}, . . . , v_{a_{ℓ−1},j} and clause components c_{j,a_ℓ+1}, . . . , c_{j,b−1}, c_{jb}, all linked in series
between the source and the sink. Components in these columns contain only level-ℓ
edges. Variable component v_{ij} is linked to clause component c_{ij}. Then
1. The maximum flow at level ℓ has value a_ℓ, therefore we can obtain an a_ℓ(k +
1 − ℓ)-sum for any level ℓ, regardless of whether φ is satisfiable or not.
2. Any level-ℓ flow must pass through clause components c_{j,a_ℓ+1}, . . . , c_{jb} for
some column j ≤ a_ℓ. If there is any flow being sent down a level-i column
¹A component is said to appear at level ℓ if, in the incremental network, all of its edges have capacity 0 prior to level ℓ and capacity 1 for all subsequent levels.
contains ℓ with 2v_1 ≤ v_ℓ < 2²v_1, and so on. Let max(i) denote the last level in
interval i. This notation is illustrated in Figure 4.1.
Figure 4.1: Ten-level incremental input clustered into four intervals. Black dots
represent single-level solutions, labeled by cost and sorted on a number line. All
max(i) levels are indicated.
For each level ℓ in interval i, define S_ℓ = ⋃_{j=1}^{i} A_{max(j)}. Then S = (S_1, S_2, . . . , S_k)
is an incremental solution by construction. Note that S is also feasible because,
by monotonicity, any superset of A_{max(i)} is feasible for all levels in interval i. To
establish the approximation bound, recall that 2^{i−1}v_1 ≤ v_ℓ < 2^i v_1. Then

    v(S_ℓ) ≤ Σ_{j=1}^{i} v_{max(j)} < Σ_{j=0}^{i} 2^j v_1 < 2^{i+1}v_1
           = 4 · 2^{i−1}v_1 ≤ 4 · v_ℓ ≤ 4 · α · v(OPT(F_ℓ)).

Thus each S_ℓ is at most a factor of 4α greater than the optimal at each level, and
hence the desired bound holds for both the aggregate and ratio objectives.
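The clustering step above can be sketched directly. This sketch assumes the single-level solutions A_ℓ are given as sets with non-decreasing costs v_ℓ; the function name and input format are ours:

```python
def incremental_from_intervals(costs, solutions):
    """costs[l], solutions[l]: cost v_l and single-level solution A_l for
    level l = 0..k-1, with costs non-decreasing.  Builds the nested chain
    S_l = union of A_max(j) over intervals j = 1..i, where interval i holds
    the levels l with 2^(i-1) * v_1 <= v_l < 2^i * v_1."""
    v1 = costs[0]

    def interval(v):                      # smallest i with v < 2^i * v1
        i = 1
        while v >= (1 << i) * v1:
            i += 1
        return i

    idx = [interval(v) for v in costs]
    last = {i: l for l, i in enumerate(idx)}   # max(i): last level in interval i
    return [set().union(*(solutions[last[j]] for j in last if j <= i))
            for i in idx]

# toy instance: five levels, costs roughly doubling
costs = [1, 1, 3, 4, 9]
sols = [{"a"}, {"a", "b"}, {"c"}, {"c", "d"}, {"e"}]
S = incremental_from_intervals(costs, sols)
# the chain is nested, as required of an incremental solution
assert all(S[l] <= S[l + 1] for l in range(len(S) - 1))
```

Note that S_ℓ is built from A_{max(j)}, the solution at the *last* level of each interval, which is what makes every superset feasible by monotonicity.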
It is worth comparing the performance of these two algorithms to that of the
general (α/H_k)-approximation for incremental packing problems given in Chapter
3. First, the above results apply to both the aggregate and competitive ratio
objectives, whereas the packing result applies only to the former. For two levels,
the result of Chapter 3 gives an (α/1.5)-approximation to the aggregate value, which
is slightly better than the (ϕ·α)-approximation provided by Theorem 4.1. For large
k, however, covering problems have a (4α)-approximation, which is much better
than the O(α/log k)-approximation for packing problems. This approximation gap
can only get wider, as the latter algorithm is tight for incremental flow, whereas
there is no evidence to suggest that the former algorithm cannot be improved.
We now consider the three problems discussed in Section 4.1 and illustrate how
they fit the covering framework. There are many ways to define incremental ver-
sions of these problems; we consider a limited number in this chapter. Alternative
interpretations of edge cover and other problems are discussed in Chapter 5.
4.4 Edge Cover
Given an undirected graph G = (V, E), the edge cover problem finds a minimum
edge cover of G, that is, a minimum set of edges E ′ ⊆ E such that each vertex is
incident to at least one edge of E ′. Strict application of our incremental model to
edge cover has a negligible effect on the nature of the problem (an issue discussed
in Chapter 5). We instead consider subset edge cover, a generalization of edge cover
defined in [70]. In addition to G there is an input T ⊆ V , where T is a target set
of vertices that must be covered. In terms of our model, the objects are the edges
of G, and feasible solutions are edge sets that cover all vertices of T . The cost of
an edge cover S is v(S) = |S|. Subset edge cover can be solved in polynomial time
by reducing it to an instance of edge cover [70].
Incremental edge cover (IEC) is defined on an undirected graph G = (V, E)
with an increasing sequence of k target sets T_1 ⊆ T_2 ⊆ . . . ⊆ T_k. The objects
are the edges E, and the level-ℓ feasible set contains the subset edge covers of T_ℓ.
Therefore a solution is a sequence of edge covers (S_1, S_2, . . . , S_k) such that S_ℓ is a
subset edge cover of T_ℓ in G, and S_ℓ ⊆ S_{ℓ+1}. The minimum single-level cover for
level ℓ is denoted by S*_ℓ. The weighted case is defined analogously, except each
edge e ∈ E has a fixed cost c_e ≥ 0, and v(S_ℓ) = Σ_{e∈S_ℓ} c_e.
4.4.1 Aggregate Objective
As with bipartite matching (Section 3.3), imposing the incremental constraint on
edge cover does not increase its complexity with respect to the aggregate objective.
The following theorem shows that for any number of levels, the optimal aggregate
solution can be found in polynomial time, even when the edges of E are weighted.
Theorem 4.3 There is an algorithm for incremental weighted edge cover that
achieves the optimal aggregate value on every instance.
We introduce some preliminary concepts and results to aid in the proof of
Theorem 4.3. Given an IEC instance (G, c, T = (T_1, T_2, . . . , T_k)), a vertex t ∈ T_k
is a level-ℓ vertex if t first appears in target set T_ℓ; this level of t is denoted ℓ_t. For
each such vertex t, let e_t denote a minimum-cost edge incident to t.
Definition 4.3 An incremental solution is reduced if, for each edge {u, v}, either
(1) u, v ∈ T_k and {u, v} is added to the solution at level min{ℓ_u, ℓ_v}, or (2) {u, v} =
e_t for some t ∈ T_k and is added at level ℓ_t.
Lemma 4.4 For every incremental cover there is a reduced cover of no higher
cost.
Proof. Given an incremental cover S, we convert it into a reduced cover of equal
or lower cost. Let e = {u, v} be any edge in S. There are three cases.
(i) |{u, v} ∩ T_k| = 0. Remove e to produce a feasible solution of lower cost.
(ii) |{u, v} ∩ T_k| = 1. Without loss of generality, assume u ∈ T_k. If e is added to
S after level ℓ_u, remove it, because u must be covered by some other edge at
or prior to level ℓ_u. Otherwise, replace e with e_u at level ℓ_u.
(iii) |{u, v} ∩ T_k| = 2. Without loss of generality, assume ℓ_u ≤ ℓ_v. If e is added
to S after level ℓ_v, remove it. If e appears at or before level ℓ_u, keep e in the
cover, but remove it from all levels prior to ℓ_u. Otherwise, e appears after
level ℓ_u but no later than level ℓ_v. In this case, replace e with e_v at level ℓ_v.
Executing these modifications on all edges in the original cover S neither affects the
feasibility of the cover nor increases its cost. All edges in the produced cover are
of the appropriate form and therefore the new cover is reduced.
Algorithm 3 Finds an optimal aggregate incremental edge cover.
We construct a subset edge cover instance (G′, c′, T_k) such that covers of G′ corre-
spond to reduced incremental covers of G of equal cost. There are two phases:
(i) Begin with the subgraph of G induced by T_k. For each e = {u, v} set c′(e) =
c(e) · Σ_{ℓ=min{ℓ_u,ℓ_v}}^{k} w_ℓ: the cost of using e to cover u and v in a reduced cover.
(ii) For each u ∈ T_k, create a new vertex ū and edge {u, ū} of cost c(e_u) · Σ_{ℓ=ℓ_u}^{k} w_ℓ:
the cost of using e_u to cover only u in a reduced cover.
We solve this single-level instance to obtain a minimum-cost subset edge cover S̄,
and build an incremental solution S = (S_1, S_2, . . . , S_k) as follows: if {u, v} ∈ S̄
then place {u, v} in S_{min{ℓ_u, ℓ_v}}, and if {u, ū} ∈ S̄ then place e_u in S_{ℓ_u}.
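The cost transformation of Algorithm 3 can be sketched as follows. The sketch assumes the level weights w_1, . . . , w_k of the aggregate objective are given, and it only builds the single-level instance; solving it still requires a subset edge cover solver, treated here as a black box. All names, including the ("bar", u) stand-in for ū, are ours:

```python
def build_reduction(edges, cost, level_of, k, w):
    """edges: list of (u, v) pairs; cost[(u, v)]: c(e); level_of[t]: l_t for
    every t in T_k; w[l]: weight of level l, for l = 1..k.
    Returns the edge list and cost function c' of the single-level instance;
    the vertex ('bar', u) plays the role of the auxiliary vertex u-bar."""
    tail = lambda l: sum(w[j] for j in range(l, k + 1))   # sum_{j=l}^{k} w_j

    new_edges, new_cost = [], {}
    # phase (i): edges inside T_k, charged from level min{l_u, l_v} onward
    for (u, v) in edges:
        if u in level_of and v in level_of:
            new_edges.append((u, v))
            new_cost[(u, v)] = cost[(u, v)] * tail(min(level_of[u], level_of[v]))
    # phase (ii): for each u in T_k, a pendant edge standing for e_u
    for u in level_of:
        c_eu = min(cost[e] for e in edges if u in e)      # cost of e_u
        new_edges.append((u, ("bar", u)))
        new_cost[(u, ("bar", u))] = c_eu * tail(level_of[u])
    return new_edges, new_cost

# x enters the target set at level 1, y at level 2 (k = 2, unit level weights)
edges = [("x", "y"), ("y", "z")]
cost = {("x", "y"): 3, ("y", "z"): 1}
E2, c2 = build_reduction(edges, cost, {"x": 1, "y": 2}, 2, {1: 1, 2: 1})
assert c2[("x", "y")] == 3 * 2           # usable from level 1: weight w_1 + w_2
assert c2[("y", ("bar", "y"))] == 1 * 1  # e_y = {y, z}, usable from level 2
```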
Lemma 4.5 Algorithm 3 finds a subset edge cover S̄ of cost σ if and only if there
is a reduced incremental edge cover S of (G, T) with aggregate value σ.
Figure 4.2: 4.2(a): A three-level instance of IEC. Labels indicate the level at
which vertices first appear in the target set. 4.2(b): A subset edge cover instance
constructed from the graph in 4.2(a), where shaded vertices are the target set.
[⇐] Given a reduced cover S, we build a subset edge cover S̄ by considering each
edge e ∈ S. By definition, there are only two types of edges in a reduced cover. If
e = {u, v} for u, v ∈ T_k appears in S at level min{ℓ_u, ℓ_v}, then include {u, v} in S̄.
Otherwise, e = e_u for some u ∈ T_k and appears at level ℓ_u; in this case, include
{u, ū} in S̄. Because S incrementally covers T_k, S̄ covers T_k. Moreover, the costs of
S̄ and S are edge-by-edge equivalent, therefore v(S̄) is exactly the aggregate value.
[⇒] For analogous reasons, any S produced as described in Algorithm 3 from
some S̄ that covers T_k will be a feasible reduced v(S̄)-aggregate solution.
The proof of Theorem 4.3 follows from Lemmas 4.4 and 4.5.
4.4.2 Ratio Objective
Although there is a polynomial-time algorithm achieving the optimal aggregate
value for incremental edge cover, the situation is more complicated for the com-
petitive ratio, where we must deal with both complexity and competitiveness.
This section has three parts: the first and second explore unweighted and
weighted edge cover, respectively, and the third studies the strict competitive ratio.
Figure 4.3: No algorithm is < 4/3-competitive on this instance, built from n copies
of the 3-edge gadget. The level-1 target nodes are shaded and the level-2 target
nodes are white.
Unweighted Edge Cover
Theorem 4.6 No algorithm for incremental edge cover is < 4/3-competitive.
Proof. Consider the incremental edge cover instance given in Figure 4.3, con-
structed from n copies of a 3-edge gadget. Note that |S*_1| = n, |S*_2| = 2n = 2·|S*_1|,
and that for each gadget we take either a (1, 3)-cover or a (2, 2)-cover. Suppose
there is a (4/3 − α)-competitive algorithm for some α > 0. Then there is a constant
c such that |S_1| ≤ (4/3 − α)|S*_1| + c, and

    (4/3 − α)|S*_2| + c ≥ |S_2| ≥ 2|S*_2| − |S_1|
                              ≥ 2|S*_2| − (4/3 − α)|S*_1| − c
                              = 2|S*_2| − (1/2)(4/3 − α)|S*_2| − c
                              = (4/3 + α/2)|S*_2| − c.

Therefore c ≥ (3α/4)|S*_2|, a contradiction.
Thus for any number of levels, it is not possible to find an α-competitive algo-
rithm with α < 4/3. For k = 2, we have an algorithm that matches this bound.
Theorem 4.7 Two-level incremental edge cover is 4/3-competitive.
Proof. We argue that Algorithm 4 is 4/3-competitive. First observe that, since each
Algorithm 4 Finds a 4/3-competitive incremental edge cover.
Given input target sets T_1 and T_2, first find the minimum edge covers S*_1, S*_2 of
T_1 and T_2, respectively. Consider any edge e = {u, v} ∈ S*_1 \ S*_2. Without loss
of generality we may assume that both u, v ∈ T_1 and u, v are covered by edges of
S*_2 \ S*_1 at level 2. (If only u ∈ T_1 then we swap e with the S*_2 edge that covers u.
If u is covered at level 2 with an edge in S*_2 ∩ S*_1 then we swap e with the S*_2 \ S*_1
edge covering v.) Initialize S_1 = S*_1 ∩ S*_2.
While |S_1| < ⌊(2/3)|S*_1|⌋, select the remaining component C of S*_1 ∪ S*_2 with the
smallest ratio |S*_2 ∩ C| / |S*_1 ∩ C| and set S_1 = S_1 ∪ (S*_1 ∩ C), breaking the last
component in two, if necessary, to ensure |S_1| ≤ ⌊(2/3)|S*_1|⌋. For all remaining
components C, set S_1 = S_1 ∪ (S*_2 ∩ C ∩ T_1). Let S_2 = S_1 plus all edges of
S*_2 \ S*_1 with at most one endpoint in S_1.
edge of S*_1 has at most two edges of S*_2 covering its endpoints,

    |S_1| ≤ ⌊(2/3)|S*_1|⌋ + 2⌈(1/3)|S*_1|⌉ ≤ (4/3)|S*_1| + 1.
Let C_1, C_2 denote the sets of components selected for S_1 and S_2 \ S_1, respectively,
and let q be the ratio of the last component selected for S_1. Then all components
C ∈ C_1 have |S*_2 ∩ C| / |S*_1 ∩ C| ≤ q, all components C ∈ C_2 have
|S*_2 ∩ C| / |S*_1 ∩ C| ≥ q, and

    |S*_1 ∩ C_1| ≥ (1/q)|S*_2 ∩ C_1|,
    |S*_1 ∩ C_2| ≤ (1/q)|S*_2 ∩ C_2|, and
    |S*_2 ∩ C_1| ≤ q|S*_1 ∩ C_1| ≤ q · 2|S*_1 ∩ C_2| ≤ 2|S*_2 ∩ C_2|.

Since S*_2 = (S*_2 ∩ C_1) ∪ (S*_2 ∩ C_2), we have |S*_2 ∩ C_2| ≥ (1/3)|S*_2|. Let E_0, E_1, and E_2
be the edges of (S*_2 \ S*_1) ∩ C_1 with zero, one, and two endpoints in (S*_1 \ S*_2) ∩ C_1,
respectively. Then 2|(S*_1 \ S*_2) ∩ C_1| = |E_1| + 2|E_2|, and

    |S*_1 ∩ C_1| + |E_1 ∪ E_0| = |(S*_1 ∩ S*_2) ∩ C_1| + |(S*_1 \ S*_2) ∩ C_1| + |E_1 ∪ E_0|
                             = |(S*_1 ∩ S*_2) ∩ C_1| + (1/2)|E_1| + |E_2| + |E_1 ∪ E_0|
                             ≤ |(S*_1 ∩ S*_2) ∩ C_1| + (3/2)|E_1 ∪ E_2 ∪ E_0|
                             = |(S*_1 ∩ S*_2) ∩ C_1| + (3/2)|(S*_2 \ S*_1) ∩ C_1|
                             ≤ (3/2)|S*_2 ∩ C_1|.
Finally we get the desired bound on |S_2|,

    |S_2| ≤ |S*_1 ∩ C_1| + |E_1 ∪ E_0| + |S*_2 ∩ C_2|
         ≤ (3/2)|S*_2 ∩ C_1| + |S*_2 ∩ C_2|
         = (3/2)|S*_2| − (3/2)|S*_2 ∩ C_2| + |S*_2 ∩ C_2|
         ≤ (3/2)|S*_2| − (1/2)·(1/3)|S*_2| = (4/3)|S*_2|. □
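At the level of whole components, the selection loop of Algorithm 4 amounts to sorting the components of S*_1 ∪ S*_2 by the ratio |S*_2 ∩ C| / |S*_1 ∩ C| and taking the cheapest ones until roughly two thirds of S*_1 is used. A simplified sketch of that loop, working on per-component edge counts and omitting the preprocessing swaps and the splitting of the last component (names are ours):

```python
def select_components(comps, s1_total):
    """comps: list of (a, b) pairs, a = |S*_1 ∩ C| and b = |S*_2 ∩ C| for one
    component C of S*_1 ∪ S*_2.  Returns the indices of the components whose
    S*_1 edges go into S_1 (the set C_1), chosen in order of increasing ratio
    b/a until the budget of 2/3 * |S*_1| edges is reached."""
    budget = (2 * s1_total) // 3
    ratio = lambda i: comps[i][1] / comps[i][0] if comps[i][0] else float("inf")
    order = sorted(range(len(comps)), key=ratio)
    chosen, used = [], 0
    for i in order:
        if used >= budget:
            break
        chosen.append(i)
        used += comps[i][0]
    return chosen

# four components with |S*_1| = 8: the two low-ratio components fill the budget
comps = [(3, 3), (2, 2), (2, 4), (1, 3)]
assert select_components(comps, 8) == [0, 1]
```

The low-ratio components contribute little extra S*_2 cost per S*_1 edge kept, which is exactly the trade-off the proof's quantity q captures.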
Theorems 4.6 and 4.7 show that for two levels, no algorithm has better com-
petitive ratio than Algorithm 4. Algorithm 4 is not optimal, however: there are
instances, as in Figure 4.4, for which it obtains a competitive ratio of 4/3 although
better solutions exist. In the bipartite matching analog, we were able to show that
an optimal algorithm existed by using a black box that found the maximum-weight
matching of a certain size. Unfortunately, we have no such algorithm for the in-
cremental edge cover problem. We are able, however, to use the general result of
Theorem 4.2 to get a 4-competitive algorithm for k ≥ 3 levels.
Weighted Edge Cover
The addition of edge weights significantly changes both the complexity and com-
petitiveness of edge cover, although not as considerably as the weighted bipartite
matching problem (whose competitive ratio became arbitrarily small).
Figure 4.4: The bold edges in 4.4(b) and 4.4(c) show two possible S*_1 of 4.4(a). If
Algorithm 4 selects the edge cover in 4.4(b) then it can be 1-competitive; however,
if it selects the edge cover in 4.4(c) then it can at best be 4/3-competitive.
Theorem 4.8 There is no ϕ-competitive algorithm for incremental weighted edge
cover.
Proof. Consider the incremental weighted edge cover instance given in Figure
4.5. Note that |S*_1| = (1/ϕ)A and |S*_2| = A. Suppose there is a (ϕ − α)-competitive
algorithm for some α > 0. Then there is a constant c such that |S_1| ≤ (ϕ − α)|S*_1| + c
and |S_2| ≤ (ϕ − α)|S*_2| + c. Then

    |S_1| = A      ⇒ (ϕ − α)(1/ϕ)A + c ≥ |S_1| = A
                   ⇒ c ≥ α(1/ϕ)A, a non-constant, and
    |S_1| = (1/ϕ)A ⇒ (ϕ − α)A + c ≥ |S_2| = A + (1/ϕ)A
                   ⇒ c ≥ αA, also a non-constant.

Therefore no such α exists.
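The two cases trade off exactly at the golden ratio, via the identity ϕ = 1 + 1/ϕ: whichever edge the algorithm commits to at level 1, one of the two levels ends up a factor ϕ worse than optimal. A quick numeric check of the bounds used above (A is an arbitrary positive number here):

```python
phi = (1 + 5 ** 0.5) / 2
A = 1000.0
opt1, opt2 = A / phi, A                 # v(S*_1), v(S*_2) for the instance

# identity the instance is built on: phi = 1 + 1/phi
assert abs(phi - (1 + 1 / phi)) < 1e-9

# case 1: commit to the heavy edge at level 1 -> level-1 ratio is phi
assert abs(A / opt1 - phi) < 1e-9

# case 2: take the cheap edge at level 1 -> level 2 costs A + A/phi = phi * A
assert abs((A + A / phi) / opt2 - phi) < 1e-9
```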
In contrast to the weighted bipartite matching problem, we have a constant
bound on the competitive ratio, and this bound is tight due to Theorem 4.1. Thus
Figure 4.5: No incremental algorithm is < ϕ-competitive on this instance, for A
defined in the proof of Theorem 4.8. The black vertex is in neither T1 nor T2.
for edge cover, the incremental constraint adversely affects the problem’s compet-
itiveness, but by at most a constant factor. However, there are still instances on
which the general ϕ-competitive algorithm does not obtain the optimal ratio; in
fact, finding such an optimal algorithm is NP-hard.
Theorem 4.9 No algorithm for two-level incremental weighted edge cover achieves
the optimal competitive ratio on every instance, unless P = NP.
We prove Theorem 4.9 by a reduction from Partition, an NP-hard problem [57].
Given a finite set A and sizes s(a) ∈ Z+ for all a ∈ A, a partition of A is some
A′ ⊆ A such that Σ_{a∈A′} s(a) = Σ_{a∈A\A′} s(a). We construct a 2-level instance of
IEC such that the optimal competitive ratio is (1+ϕ)/2 if and only if A has a partition.
We start with a vertex s, and for each element a ∈ A, we create vertices u_a, v_a
and edges e¹_a = {s, u_a}, e²_a = {u_a, v_a} of cost C·s(a) and ϕ·C·s(a), respectively,
for some C > 1. The target sets are T_1 = ⋃_{a∈A} {u_a} and T_2 = ⋃_{a∈A} {u_a, v_a}; this
construction is shown in Figure 4.6.
Let S = Σ_{a∈A} s(a). The optimal level-one cover S*_1 is all e¹_a edges and has cost
C·S. The optimal level-two cover S*_2 is all e²_a edges and has cost ϕ·C·S.
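The construction can be sketched directly from a Partition instance. Names are ours, and C can be any constant larger than 1 (the proof below picks C large relative to the additive constant c):

```python
def build_instance(sizes, C, phi=(1 + 5 ** 0.5) / 2):
    """sizes: dict a -> s(a).  Returns the weighted edges and the two target
    sets of the two-level IEC instance used in Theorem 4.9."""
    edges = {}
    T1, T2 = set(), set()
    for a, s_a in sizes.items():
        u, v = ("u", a), ("v", a)
        edges[("s", u)] = C * s_a        # e1_a, cost C * s(a)
        edges[(u, v)] = phi * C * s_a    # e2_a, cost phi * C * s(a)
        T1.add(u)
        T2.update({u, v})
    return edges, T1, T2

sizes = {"a1": 1, "a2": 2, "a3": 3}
edges, T1, T2 = build_instance(sizes, C=2)
S = sum(sizes.values())
phi = (1 + 5 ** 0.5) / 2
# v(S*_1): all e1_a edges; v(S*_2): all e2_a edges
assert sum(c for e, c in edges.items() if e[0] == "s") == 2 * S
assert abs(sum(c for e, c in edges.items() if e[0] != "s") - phi * 2 * S) < 1e-9
```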
Lemma 4.10 The optimal competitive ratio of this construction is at least (1+ϕ)/2.
Proof. By way of contradiction, suppose there is an algorithm finding (S_1, S_2)
such that v(S_1) ≤ ((1+ϕ)/2 − α)v(S*_1) + c and v(S_2) ≤ ((1+ϕ)/2 − α)v(S*_2) + c for some
Figure 4.6: The two-level IEC graph for partition instance A = {a_1, a_2, a_3}.
α > 0 and constant c. If we take C > c/(α·ϕ·S) then v(S_2) < ((1+ϕ)/2)v(S*_2). Note that
v(S_2) = ϕ·CS + C·d for some integer d ≥ 0, so v(S_2) ≤ ϕ·CS + (S/2 − 1/2)C, and

    v(S_1) ≥ ϕ·CS − (ϕ − 1)(v(S_2) − ϕ·CS)
           ≥ ϕ·CS − (ϕ − 1)(ϕ·CS + (S/2 − 1/2)C − ϕ·CS)
           = ϕ·CS − (ϕ − 1)(S/2 − 1/2)C
           = ((ϕ+1)/2)CS + ((ϕ−1)/2)C.

Then c ≥ αv(S*_1) + ((ϕ−1)/2)C, a contradiction to c being constant.
Lemma 4.11 There is a partition of A if and only if there is a (1+ϕ)/2-competitive
incremental cover.
Proof. [⇒] Given a partition A′ ⊆ A such that Σ_{a∈A′} s(a) = S/2, we construct an
incremental cover (S_1, S_2) by selecting S_1 = {e¹_a | a ∈ A′} ∪ {e²_a | a ∈ A \ A′} and
S_2 = S_1 ∪ {e²_a | a ∈ A′}. This is a feasible solution, as both u_a and v_a are covered
for each element a ∈ A. Furthermore,

    v(S_1) = C Σ_{a∈A′} s(a) + C Σ_{a∈A\A′} ϕ·s(a) = CS/2 + ϕ·CS/2 = ((1+ϕ)/2)v(S*_1), and
    v(S_2) = ϕ·CS + C Σ_{a∈A′} s(a) = ϕ·CS + CS/2 = ((1+ϕ)/2)v(S*_2).
[⇐] Suppose we have a (1+ϕ)/2-competitive cover (S_1, S_2). Then we claim that
v(S_2) = ((1+ϕ)/2)v(S*_2), using the fact that v(S_2) = ϕ·CS + C·d for some integer d ≥ 0:
v(S_2) > ((ϕ+1)/2)v(S*_2) implies v(S_2) ≥ ϕ·CS + (S/2 + 1/2)C, and S_2 is not (ϕ+1)/2-
competitive. Conversely, v(S_2) < ((ϕ+1)/2)v(S*_2) implies v(S_2) ≤ ϕ·CS + (S/2 − 1/2)C,
and S_1 is not competitive by Lemma 4.10. Thus (S_1, S_2) is such that v(S_2) =
ϕ·CS + (S/2)C; we define A′ = {a | e¹_a ∈ S_2} so that

    Σ_{a∈A′} s(a) = (1/C)(v(S_2) − ϕ·CS) = S/2.
Theorem 4.9 follows from Lemmas 4.10 and 4.11. As with bipartite matching,
it may be possible to find an algorithm achieving the optimal competitive ratio on
instances with polynomial-size edge weights.
Strictly Competitive Edge Cover
When we consider the strict competitive ratio, we have the following results.
Theorem 4.12 There is no strictly α-competitive algorithm for incremental edge
cover, for α < 3/2.
Proof. Consider one copy of the 3-edge gadget of Figure 4.3. In order to achieve
a strict 3/2-competitive ratio, an algorithm must find |S_1| ≤ 3/2. Integrality implies
that |S_1| = 1, therefore |S_2| = 3 = (3/2)|S*_2|.
The algorithms of Theorems 4.1 and 4.2 find strictly competitive solutions for 2
and k ≥ 3 levels, respectively, for both unweighted and weighted edges. Moreover,
the reduction of Theorem 4.9 also holds for strict competitive ratio.
4.5 Minimum Cut
Given a directed network G = (V, E) with source s, sink t, and a capacity function
c : E → Q, the minimum cut problem finds a minimum capacity s-t cut, that is, a
minimum capacity set of edges S ⊆ E such that all paths directed from s to t are
disconnected in G \ S. In terms of our model, the objects are the edges of G, and
feasible solutions are subsets of E that cut s from t. The cost of a cut is the sum
of the weights of its edges, v(S) = Σ_{e∈S} c_e. Minimum cut is the dual problem to
network flow, and can thus be solved in polynomial time.
Incremental cut is defined on a directed network G = (V, E) with source s, sink
t, and a non-decreasing sequence of k capacity functions c_ℓ : E → Q. A solution
is a sequence of edge sets (S_1, S_2, . . . , S_k) such that S_ℓ is an s-t cut of (G, c_ℓ) and
S_ℓ ⊆ S_{ℓ+1}. The minimum single-level cut for level ℓ is denoted by S*_ℓ.
4.5.1 Aggregate Objective
As with the network flow problem, we show that the incremental structure in-
creases the complexity of minimum cut from polynomial-time to NP-hard for both
objectives.
Theorem 4.13 No algorithm for two-level incremental cut achieves the optimal
aggregate value on every instance in polynomial time, unless P = NP .
Theorem 4.13 follows via a reduction from 3-SAT. Given a formula φ with n vari-
ables and m clauses, we construct an incremental cut instance (G1, G2) such that
the optimal aggregate value is 18m if and only if φ is satisfiable.
Create source s, sink t, auxiliary vertices s′, t′, and edges (s, s′), (t′, t) of capacity
C > 0. Let |v| denote the number of literals using variable v or its negation v̄.
For each v, create 2|v| unit-capacity edges e¹_v, e²_v, . . . , e^{|v|}_v, ē¹_v, ē²_v, . . . , ē^{|v|}_v joined by
high-capacity edges as shown in Figure 4.7(a). Connect all variable gadgets in
parallel between s′ and t′ using high-capacity edges to form G_1 as in Figure 4.7(b).
Next construct the level-two graph G2. First add high-capacity bypass edges
(s, t′) and (s′, t). Then for each clause c, create an s-t path that passes in series
79
Figure 4.7: 4.7(a): A variable gadget. 4.7(b): The level-one graph. 4.7(c): The
level-two graph, showing bypass edges and a clausal path for clause c = {u, v, w}.
Bold edges have prohibitively high cost. Thin edges have unit cost unless labelled
otherwise. Solid edges are level-one edges whereas dashed edges are level-two edges.
through some e^i_ℓ for each of the three literals ℓ in c; these paths are such that
every e^i_ℓ edge is used by at most one clausal path, and non-e^i_ℓ edges are given
high capacity. The bypass edges and one example clausal path are illustrated
in Figure 4.7(c). Give the high-capacity edges capacity 20m, thereby preventing
their use in any reasonable-cost solution. Finally, replace capacity-c edges with c
unit-capacity parallel paths.
Set C = 6m. Then all minimum cuts of G_1 have cost 3m and contain either all
e^i_v edges or all ē^i_v edges for each variable v; the only other reasonable-cost minimal
cuts are {(s, s′)} and {(t′, t)}, which cost C > 3m. Moreover, all reasonable-cost
cuts of G_2 contain both (s, s′) and (t′, t). The graph G_2 can be cut optimally at
cost 2C + m = 13m by cutting (s, s′), (t′, t), and the first unit-cost edge in each of
the m clausal paths. Lemma 4.14 completes the proof.
Lemma 4.14 There is a satisfying assignment iff there is an 18m-aggregate cut.
[⇒] Given a satisfying assignment A, we construct an incremental cut as follows:
if A(v) = true then cut all e^i_v, otherwise cut all ē^i_v. This selection of 3m edges is
a cut of G_1. To cut G_2, we only add (s, s′) and (t′, t). This costs an additional
2C = 12m, yielding an incremental cut with aggregate value 3m + 15m = 18m.
We claim these edges are sufficient to cut s from t in G_2: if not, then some s-t
path would remain. Because (s, s′) is cut, this path must originate along one of
the m clausal paths. It cannot follow such a path all the way from s to t, as each
clause contains one true literal whose edge is contained in our G_1 cut. On the other
hand, any deviation from a clausal path is only possible immediately after the path
passes through an e^i_v or ē^i_v edge. If it deviates after an edge of the form e^i_v, then
the only path remaining to t passes through the cut edge (t′, t), a contradiction.
And if it deviates after an edge of the form ē^i_v, then all e^i_v must be cut, and it is
impossible to exit the variable gadget except to follow the clausal path.
[⇐] Now suppose we are given an incremental cut (S1, S2) with aggregate value
18m. Without loss of generality, we may assume that the cut S1 is minimal.
Furthermore, we claim that neither (s, s′) nor (t′, t) is contained in S1; if either
were, then the aggregate value would be at least 6m + 13m = 19m > 18m. Thus
the cut S1 must contain either all e^i_v or all e^i_v̄ edges for each variable v, at cost 3m. This
defines our truth assignment A: set A(v) = true if all e^i_v are cut and A(v) = false
if all e^i_v̄ are cut. In addition to S1, the cut S2 contains only (s, s′) and (t′, t); if it
contained even one more edge, it would have aggregate value 3m + 15m + 1 > 18m.
Hence the addition of (s, s′) and (t′, t) to S2 must suffice to cut s from t in G2, so
all clausal paths are cut by our level-one cut, indicating that under our assignment
every clause contains at least one true literal.
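As a small sanity check of the arithmetic in this reduction (a sketch of ours, not part of the thesis; the function name is hypothetical), the two sides of the 18m threshold follow directly from C = 6m:

```python
def reduction_values(m):
    """Cut costs arising in the reduction with C = 6m.
    A satisfying assignment pays 3m for G1 plus (3m + 2C) for G2, i.e. 18m,
    while any cut placing (s, s') or (t', t) into S1 already pays
    C + (2C + m) = 19m in aggregate."""
    C = 6 * m
    satisfiable_aggregate = 3 * m + (3 * m + 2 * C)   # 18m
    cheapest_bad_aggregate = C + (2 * C + m)          # 19m
    return satisfiable_aggregate, cheapest_bad_aggregate
```

The one-unit-per-clause gap between 18m and 19m is what makes the decision version of the aggregate objective NP-hard.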
Figure 4.8: No incremental algorithm is < ϕ-competitive on this instance, for A
defined in Theorem 4.8.
Thus, finding the optimal aggregate value is not possible in polynomial time
unless P = NP. However, Theorems 4.1 and 4.2 give ϕ- and 4-approximations to
the optimal aggregate value for k = 2 and k ≥ 3, respectively. These bounds are
much nicer than the tight 1/H_k-approximation for incremental flow, illustrating
that the incremental versions of dual problems are not themselves duals.
4.5.2 Ratio Objective
Theorem 4.15 No algorithm for incremental cut is < ϕ-competitive.
Proof. Consider the incremental cut instance given in Figure 4.8. Note that
|S∗_1| = A and |S∗_2| = ϕ·A + 1. Suppose there is a (ϕ − α)-competitive algorithm
for some α > 0. Then there is a constant c such that |S1| ≤ (ϕ − α)|S∗_1| + c and
|S2| ≤ (ϕ − α)|S∗_2| + c. Then

|S1| = ϕ·A ⇒ (ϕ − α)A + c ≥ |S1| = ϕ·A
⇒ c ≥ αA, a non-constant, and
|S1| = A ⇒ (ϕ − α)(ϕ·A + 1) + c ≥ |S2| ≥ A + ϕ·A + 1
⇒ c ≥ α(ϕ·A + 1) − ϕ + 1, also a non-constant.

Therefore no such α exists.
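The last implication uses the identity ϕ² = ϕ + 1; a quick numeric check of that algebra (ours, not the thesis's):

```python
import math

# Verify: A + phi*A + 1 - (phi - alpha)*(phi*A + 1) == alpha*(phi*A + 1) - phi + 1,
# which relies on phi^2 = phi + 1 for the golden ratio phi.
phi = (1 + math.sqrt(5)) / 2
for A in (10.0, 1e4):
    for alpha in (0.2, 1e-3):
        lhs = A + phi * A + 1 - (phi - alpha) * (phi * A + 1)
        rhs = alpha * (phi * A + 1) - phi + 1
        assert abs(lhs - rhs) < 1e-6
```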
Figure 4.9: The bold edges in 4.9(b) and 4.9(c) show two possible cuts S∗_1 for 4.9(a). If
the algorithm of Theorem 4.1 selects the cut in 4.9(b) then it can be 1-competitive;
however, if it selects the cut in 4.9(c) then it can at best be ϕ-competitive.
This bound is matched by the two-level algorithm of Theorem 4.1; however,
there are instances, as in Figure 4.9, for which this tight algorithm is not optimal.
As with incremental flow, such an optimal algorithm is NP-hard to find.
Theorem 4.16 No algorithm for two-level incremental cut achieves the optimal
competitive ratio on every instance in polynomial time, unless P = NP .
Proof. We modify the reduction from Theorem 4.13 by multiplying each edge
capacity by a suitably large D ≥ 1. We then claim that a formula φ is satisfiable
if and only if there is a (15/13)-competitive incremental cut.
[⇒] Given a satisfying assignment A, Theorem 4.13 describes how to construct a
(3mD, 15mD)-cut, which is (15/13)-competitive.
[⇐] Suppose we have a (15/13)-competitive incremental cut (S1, S2). Then there exists
a constant c such that v(S1) ≤ (15/13)·v(S∗_1) + c and v(S2) ≤ (15/13)·v(S∗_2) + c. If either
(s, s′) or (t′, t) is contained in S1 then v(S1) ≥ 6mD = (15/13)·3mD + (33/13)·mD, which
is a contradiction for c < (33/13)·mD. Thus the cut S1 must contain either all e^i_v or all e^i_v̄
edges for each variable v, at cost 3mD. If S1 contains more than 3m edges then
v(S2) ≥ 15mD + D = (15/13)·13mD + D, which is a contradiction for c < D. Therefore
the cut S1 defines our truth assignment as in Theorem 4.13.
Therefore both incremental flow and incremental cut are NP-hard; however,
the latter admits a constant-competitive algorithm, whereas the former does not.
4.6 r-Domination
Given a set of points in a metric space, the (k, r)-center problem finds a set of at
most k centers to open such that every point is within distance r of some open
center. There are two natural approaches to the (k, r)-center problem.
(i) k-center : fix the number of centers k and minimize the maximum distance
between any client and its closest center. The k-center problem is NP-hard;
however, Gonzalez [39] gives a 2-approximation algorithm.
(ii) r-domination [6, 37]: fix the radius r and cover all clients with as few cen-
ters as possible. The r-domination problem is NP-hard; the current best
algorithm is an O(log n)-approximation [6].
Incremental k-center is a well-studied problem [66, 71, 64, 21]; the input is the
same as k-center, but the goal is to find a sequence of k centers such that, for any
`, the first ` centers are competitive with the optimal `-center solution. The k-
center problem is not monotonic and therefore our model does not apply; however,
Gonzalez’ 2-approximation [39] translates to a 2-competitive incremental solution.
In contrast, there is no known work on incremental r-domination, but we can
apply our general model. The objects are potential center locations, and feasible
solutions are sets of centers that cover an incrementally increasing client base. This
precisely models restaurant expansion. Theorem 4.2 and the log n-approximation
of [6] yield a 4 log n-approximation for incremental r-domination – a new problem
and solution which arise naturally from our general minimization model.
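Gonzalez's farthest-point heuristic is simple enough to sketch; opening centers in the order it selects them is what yields the 2-competitive incremental sequence mentioned above. The following is our own rendering for Euclidean points, not the thesis's pseudocode:

```python
import math

def farthest_first(points, k):
    """Gonzalez's farthest-point traversal, a 2-approximation for k-center.
    Every prefix of the returned list is itself a valid center set, which
    is what makes the selection order usable as an incremental solution."""
    centers = [points[0]]                        # arbitrary first center
    dist = [math.dist(p, centers[0]) for p in points]
    for _ in range(k - 1):
        i = max(range(len(points)), key=dist.__getitem__)
        centers.append(points[i])                # open the farthest point
        dist = [min(d, math.dist(p, points[i])) for p, d in zip(points, dist)]
    return centers
```

Because each call extends the previous prefix, `farthest_first(points, k)[:ℓ]` equals `farthest_first(points, ℓ)` for every ℓ ≤ k.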
Chapter 5
Alternative Incremental Formulations
5.1 Introduction
Chapters 3 and 4 discuss some standard minimization and maximization problems,
along with their natural incremental formulations. Each of these problems has
several possible incremental definitions, only one of which was explored in depth.
The selected variants are not the only interesting formulations; they are simply the
most instructive. Not only are alternative definitions of practical interest, they can
also admit different complexity and competitiveness results.
There are two steps to constructing an incremental problem. First one must
decide which single-level problem to make incremental, and then which param-
eter to change over time. For example, the maximum flow problem has many
variations, such as fractional and undirected flow; the incremental problems differ
greatly depending on which problem is chosen as the basis. Varying the incremen-
tal parameters can have a similar effect; for example, Chapter 3 studies incremental
bipartite matching where the edges appear over time. An equally appealing model,
however, has edge costs increasing, rather than edge sets. Both formulations seem
reasonable, yet the choice could lead to different performance results.
Both of these decisions can produce a wide variety of problems suiting different
situations. We look at two examples of the first decision by studying incremental
versions of undirected and fractional flow; we find that removing directionality
does not change the complexity or competitiveness of the problem from the di-
rected version, yet allowing fractional solutions improves both the complexity and
competitiveness. We then consider two examples of the second decision by study-
ing incremental versions of minimum spanning tree and edge cover. We find that
the complexity and competitiveness of incremental spanning tree varies depending
on whether node sets or edge costs are increasing over time. Moreover, we study a
new version of edge cover with different complexity than that of the version studied
in Chapter 4, although its competitiveness is unchanged.
5.2 Network Flow
Chapter 3 studies a version of network flow where the underlying network is di-
rected and flow must be integral. In this section we consider both undirected and
fractional flow as the underlying problem in our incremental setting, and discuss
how this variation affects incremental performance. We investigate the undirected
and fractional cases separately in the following two sections, focusing on the strict
competitive ratio objective. Many of the proofs appear in [46], and are omitted.
5.2.1 Fractional Flow
For a network with integer edge capacities, the optimal fractional flow is no larger
than the optimal integral flow [28, 29, 30]. This property does not carry over to
flow’s incremental variant, however, as even with integer capacities, a fractional
incremental flow may have better competitive ratio than the best integral solution.
Theorem 3.21 shows that no integral incremental flow is strictly α-competitive for
α > 2/n on the instance of Figure 3.11; with fractional flow there is a strictly (1/2)-
competitive solution that sends a 1/2 unit of flow along each level-1 flow path, leaving
at least half the capacity of each edge free for the level-2 flow. We generalize
this idea to a simple (1/k)-competitive algorithm for any k-level incremental flow
instance.
Theorem 5.1 The incremental fractional flow problem is strictly (1/k)-competitive.
Proof. First determine optimal single-level integral flows f∗_1, f∗_2, . . . , f∗_k. For each
edge e, set f_ℓ(e) = f_{ℓ−1}(e) + (1/k)·f∗_ℓ(e), where f_0(e) = 0. The incremental constraint
and flow conservation are easily verified; the capacity constraints are satisfied
because f_{ℓ−1}(e) = (1/k)·Σ_{i=1}^{ℓ−1} f∗_i(e) ≤ ((ℓ−1)/k)·f∗_{ℓ−1}(e) ≤ ((k−1)/k)·f∗_ℓ(e). Finally, |f_ℓ| ≥ (1/k)·|f∗_ℓ|,
and we have the desired strict competitive ratio.
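The proof's construction is easy to state in code. The sketch below uses our own data representation (flows as edge-to-value dictionaries) and assumes the single-level optima are given:

```python
def combine_flows(opt_flows, capacity):
    """Theorem 5.1's construction: f_l(e) = f_{l-1}(e) + f*_l(e) / k.
    opt_flows: optimal single-level flows f*_1..f*_k, each a dict edge -> value.
    capacity: dict edge -> capacity. Returns the incremental flows f_1..f_k,
    asserting that every level respects the capacities."""
    k = len(opt_flows)
    levels, prev = [], {}
    for f_star in opt_flows:
        f = {e: prev.get(e, 0.0) + f_star.get(e, 0.0) / k
             for e in set(prev) | set(f_star)}
        assert all(v <= capacity[e] + 1e-9 for e, v in f.items())
        levels.append(f)
        prev = f
    return levels
```

Each |f_ℓ| is at least |f∗_ℓ|/k, which is exactly the strict 1/k ratio claimed.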
Thus we see a contrast between integral and fractional flow: no algorithm is
more than strictly (2/n)-competitive for the former, whereas the latter admits strictly
(1/k)-competitive solutions. Moreover, it is NP-hard to find the optimal strict com-
petitive ratio for integral flows (Theorem 3.20); this is not true of fractional flows.
Theorem 5.2 There is an algorithm for incremental fractional flow that achieves
the optimal strict competitive ratio on every instance.
Proof. The proof relies on linear programming techniques; see [46] for details.
The linear programming (LP) techniques used in Theorem 5.2, although of the-
oretical interest, are impractical in many situations due to the constants associated
with solving LPs. This motivates a search for faster algorithms that find either
optimal or approximate solutions. We give an approximation scheme that finds a
near-optimal solution in almost-polynomial time.
Theorem 5.3 For any ε > 0, there is a strictly (1 − ε)³-competitive algorithm for
k-level incremental fractional flow with runtime O(S(m, k)·(mk + k²)·(1/ε)·log_{1+ε}(km/(1 − ε))),
where S(m, k) is O(mk) plus the running time of a shortest-paths algorithm.
Proof. The proof, described in [46], is based on the techniques of [36, 27, 76].
Thus we conclude that both complexity and competitiveness are much improved
if we study an incremental model based on fractional rather than integral flow.
5.2.2 Undirected Flow
We now examine incremental networks where the underlying edges are undirected.
In such networks, each edge e = {u, v} is considered as two directed edges (u, v)
and (v, u). There are two possible interpretations of undirected edges: in
bidirected networks, both directed copies can carry flow provided that their sum is
at most c(e); in unidirected networks, only one of the two copies can carry flow.
These two interpretations are equivalent in the integral unit-capacity case.
Theorem 5.4 There is no strictly α-competitive algorithm for incremental undi-
rected flow for α > 2/n.
Proof. We remove directionality from Figure 3.11; Theorem 3.21’s proof carries
over directly for both bidirected and unidirected flow.
As with integral flow, but unlike fractional, it is NP-hard to find the optimal
competitive ratio for both bidirected and unidirected flow.
Theorem 5.5 No algorithm for two-level incremental undirected flow achieves the
optimal strict competitive ratio on every instance, unless P = NP .
Proof. The result follows with slight modification from Theorem 3.20’s reduction;
see [46] for details.
With fractional flows, however, the bidirected case becomes tractable.
Theorem 5.6 There is an algorithm for incremental bidirected fractional flow that
achieves the optimal strict competitive ratio on every instance.
Proof. Incremental bidirected fractional flow can be formulated as an LP; see [46]
for details.
Unfortunately, unidirected fractional flow contains a non-linear constraint and
therefore cannot be modeled by an LP. Thus LP techniques cannot be used to
solve the problem optimally; moreover, neither can any other polynomial-time technique.
Theorem 5.7 No algorithm for 3-level incremental unidirected fractional flow
achieves the optimal strict competitive ratio on every instance, unless P = NP .
Proof. The result follows with slight modification from Theorem 3.20’s reduction;
see [46] for details.
5.3 Minimum Spanning Tree
Given an undirected graph G = (V, E) with edge cost ce ≥ 0 for each edge e ∈ E,
the minimum spanning tree (MST) problem finds a minimum-cost tree T ⊆ E
that spans all nodes of V . This is a classic problem in theoretical computer science
with many paradigmatic polynomial-time algorithms [60].
Minimum spanning tree fits the minimization model of Chapter 4; the objects
are the edges E, and feasible solutions are minimal edge sets that span V . If we
apply Chapter 4’s model directly then our incremental input is a graph G = (V, E)
with an increasing set of vertices V1 ⊆ V2 ⊆ · · · ⊆ Vk ⊆ V that must be covered at
each step using subsets of E. This is a special case of the incremental Steiner tree
problem discussed in Section 2.4.3; that is, our general model produces incremental
versions of MST and SMT that are very similar. This formulation of MST is
therefore nothing new, so we consider two alternative constructions that define
problems which are not special cases of Steiner tree.
5.3.1 Node-Incremental Spanning Tree
In the first formulation, the input is a graph G = (V, E) with a sequence of
vertices V1 ⊆ V2 ⊆ · · · ⊆ Vk ⊆ V , that is, nodes arrive over time. The goal is
to find an incremental sequence of trees (T1, T2, . . . , Tk), where T` spans precisely
the vertices of V`. We call this the node-incremental spanning tree problem. This
model prohibits a solution T_ℓ from using edges with endpoints in V \ V_ℓ; therefore, it
is possible for v(T∗_ℓ) < v(T∗_{ℓ−1}), where T∗_ℓ denotes level ℓ's minimum spanning tree.
Thus the model does not display Steiner tree's modified monotonicity property
(Section 2.4.3), and Theorem 4.2's 4-competitive algorithm does not apply.
Let E_ℓ ⊆ E be the set of edges with both endpoints in V_ℓ, that is, the edges of
the subgraph induced by V_ℓ, and let c^min_ℓ, c^max_ℓ denote the minimum and maximum
cost of any edge in E_ℓ, respectively. Define r∗_n = max_ℓ c^max_ℓ / c^min_ℓ.
Theorem 5.8 There is no α-competitive algorithm for node-incremental spanning
tree for α < r∗_n.
Proof. Consider the node-incremental spanning tree instance given in Figure
5.1, where A = r∗_n·B. Note that v(T∗_1) = A(n − 1), v(T∗_2) = Bn, and r∗_n = A/B.
Suppose there is an (r∗_n − α)-competitive algorithm for some α > 0. Then there is
a constant c such that v(T_1) ≤ (r∗_n − α)·A(n − 1) + c, and

(A/B − α)·Bn + c ≥ v(T_2) = A(n − 1) + B.

Therefore c ≥ αBn + B − A, which is not a constant.
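A quick numeric check (ours, not the thesis's) of the final implication:

```python
# Verify: A(n-1) + B - (A/B - alpha)*B*n == alpha*B*n + B - A, so any
# (r*_n - alpha)-competitive algorithm needs c >= alpha*B*n + B - A,
# a quantity that grows without bound in n.
A, B, alpha = 12.0, 3.0, 0.25
for n in (5, 500):
    lhs = A * (n - 1) + B - (A / B - alpha) * B * n
    rhs = alpha * B * n + B - A
    assert abs(lhs - rhs) < 1e-9
```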
Therefore no algorithm achieves a ratio better than r∗_n on all instances of node-
incremental spanning tree. This bound can be arbitrarily bad, as r∗_n is not bounded
from above; however, there is an algorithm matching this r∗_n bound.
Figure 5.1: All incremental algorithms are at best r∗_n-competitive on this instance;
level-1 nodes and edges are solid, level-2 nodes and edges are dashed.
Theorem 5.9 There is an r∗_n-competitive algorithm that achieves the optimal com-
petitive ratio and aggregate value for node-incremental spanning tree.
Proof. The algorithm initializes T_1 = T∗_1; for each subsequent level ℓ, it sets
T_ℓ = T_{ℓ−1}, then repeatedly adds the cheapest edge connecting each node of V_ℓ \ V_{ℓ−1}
to the current tree T_ℓ. We can view this as contracting V_{ℓ−1} into a single
vertex v_{ℓ−1} and running Prim's MST algorithm [40, 69, 60] on {v_{ℓ−1}} ∪ (V_ℓ \ V_{ℓ−1}).
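A compact sketch of this procedure (our own Python rendering, not the thesis's pseudocode; it assumes a connected adjacency map on each level's induced subgraph):

```python
import heapq

def node_incremental_mst(adj, levels):
    """Node-incremental spanning tree via repeated Prim extension:
    level 1 gets an MST of V_1; each later level reuses the previous tree
    and attaches the new nodes with the cheapest edges inside V_l.
    adj: dict u -> {v: cost};  levels: V_1 subseteq V_2 subseteq ..."""
    trees = []
    in_tree = {next(iter(levels[0]))}          # arbitrary start node in V_1

    def prim_extend(target):
        tree = set(trees[-1]) if trees else set()
        # edges leaving the spanned set, restricted to the current level
        frontier = [(c, u, v) for u in in_tree for v, c in adj[u].items()
                    if v in target and v not in in_tree]
        heapq.heapify(frontier)
        while in_tree < target:                # grow until V_l is spanned
            c, u, v = heapq.heappop(frontier)
            if v in in_tree:
                continue
            in_tree.add(v)
            tree.add((u, v, c))
            for w, cw in adj[v].items():
                if w in target and w not in in_tree:
                    heapq.heappush(frontier, (cw, v, w))
        return tree

    for V in levels:
        trees.append(prim_extend(set(V)))
    return trees
```

Each returned tree is a superset of its predecessor, so the sequence satisfies the incremental constraint by construction.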
Let (S_1, S_2, . . . , S_k) denote any incremental solution; we show that v(T_ℓ) ≤
v(S_ℓ) for all ℓ, which will imply the optimality of (T_1, T_2, . . . , T_k) for both the
aggregate and ratio objectives. Clearly v(T_1) = v(T∗_1) ≤ v(S_1). Now suppose by
induction that v(T_{ℓ−1}) ≤ v(S_{ℓ−1}). By the optimality of Prim's algorithm we know
This algorithm applies even with weighted edges; therefore, finding the optimal
competitive ratio takes only polynomial time, whereas it was NP-hard for the
subset edge cover version (Theorem 4.9). Edge cover is thus another example of a
problem whose different incremental formulations have different complexities.
Chapter 6
Algorithmic Extensions
6.1 Introduction
The algorithms presented thus far have two properties in common: they (1) are
deterministic, and (2) produce incremental solutions. This chapter investigates
whether relaxing these properties can result in improved performance.
Certainly when the input sequence is not known in advance (as with online
algorithms), randomization can help mitigate some of the adversarial constraint’s
power, thereby lessening its contribution to the competitive ratio. Even when the
input sequence is known, however, randomization can still improve performance;
that is, it can diminish the incremental constraint’s contribution to the competitive
ratio. At first glance this may seem surprising, as the input is deterministic;
however, randomization has been used with deterministic input to solve problems
such as MAX-3SAT [53] and global minimum cut [56].
The incremental constraint is reasonable in general, but there are situations in
which violating the constraint is desirable. If we remove the constraint altogether,
however, allowing unlimited violations, then our model degenerates into the single-
level problem: using each individual level’s optimal solution is now feasible. We
find a balance between these two extremes by considering a model that allows
incremental constraint violations, but charges a penalty p for each such infraction.
For example, if p = 0, then we ignore the incremental constraint completely; in
this case, an α-approximation for the single-level problem leads to an α-competitive
algorithm for its incremental formulation by just using each level’s α-approximate
solution. At the other end of the spectrum, if p is large, then the incremental
constraint is law, and the best we can do is determined by the results of the model
without penalties. How do we deal with values of p between these extremes?
Clearly, the complexity of any problem with the addition of penalties cannot
be better than that without, as setting p very large reduces the latter problem to
the former. Similarly, the competitiveness cannot improve; however, we can hope to
express an algorithm’s competitive ratio as a function of p such that small values
of p lead to improved competitiveness.
6.1.1 Results
We first consider probabilistic algorithms and find that randomization leads to
an improved competitive algorithm for k-level covering problems; in particular,
Theorem 4.2’s 4α competitive ratio is lowered to an expected ratio of eα, where α
is the best-known approximation bound for the single-level covering problem.
We then relax the incremental constraint by studying a model whose solutions
can violate the constraint at a cost of p for each such violation. We consider
the (unweighted) bipartite matching problem, and find we can achieve a
max{2/3, |M∗_2|/(|M∗_2| + p|M∗_1|)}-competitive algorithm for any penalty p ≥ 0.
6.2 Probabilistic Algorithms
We use ideas from [64] to show that randomization improves Theorem 4.2’s de-
terministic 4α-competitive algorithm for covering problems to a probabilistic algo-
rithm with expected competitiveness of eα.
Theorem 6.1 Given an α-approximate algorithm A for a problem Π, we obtain
an eα-approximate aggregate value and an eα-competitive ratio in expectation for
its k-level incremental version Π_k.
Proof. The algorithm is the same as that of Theorem 4.2, except the intervals
are defined differently. Let A_ℓ denote A's level-ℓ solution and v_ℓ denote its cost,
that is, v_ℓ = v(A_ℓ) = v(A(F_ℓ)) ≤ α · v(OPT(F_ℓ)). Rather than define interval i as
the set of levels ℓ with 2^{i−1}·v_1 ≤ v_ℓ < 2^i·v_1 (Theorem 4.2), we define interval i as
all levels ℓ with β·e^{i−1}·v_1 ≤ v_ℓ < β·e^i·v_1, where β = e^X for X a random variable in
[0, 1). Then if v′_ℓ = v_ℓ/(e^{i−1}·v_1) then β ≤ v′_ℓ < βe; in fact, v′_ℓ = β·e^Y for Y ∈ [0, 1).
Let max(i) denote the last level in interval i.
For each level ℓ in interval i, define S_ℓ = ⋃_{j=1}^{i} A_{max(j)}. Then S = (S_1, . . . , S_k)
is an incremental solution by construction. Note that S is also feasible because, by
monotonicity, any superset of A_{max(i)} is feasible for all levels in interval i. Then

E[v(S_ℓ)] ≤ E[Σ_{j=1}^{i} v_{max(j)}] < Σ_{j=0}^{i} E[β·e^j·v_1] < E[β·e^{i+1}·v_1/(e − 1)], and

E[v(S_ℓ)/v_ℓ] < E[(β·e^i·v_1/v_ℓ) · e/(e − 1)] = (e/(e − 1))·E[βe/v′_ℓ] = (e/(e − 1))·E[e^{1−Y}]
= (e/(e − 1))·E[e^Y] = (e/(e − 1))·(e − 1)/ln e = e.

Thus each S_ℓ is at most an expected factor of eα greater than the optimal at each
level, and the desired bound holds for both the aggregate and ratio objectives.
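The randomized bucketing at the heart of this proof can be sketched as follows (our own helper names and data layout, with the interval index relabeled as floor(log(v_ℓ/(βv_1))); solutions are given as sets and assumed monotone):

```python
import math
import random

def randomized_incremental_union(solutions, costs):
    """Sketch of Theorem 6.1's construction: draw beta = e^X, X ~ U[0,1),
    bucket level l into interval i when beta*e^(i-1)*v1 <= v_l < beta*e^i*v1,
    and let S_l be the union of the last solution of every interval up to l's.
    solutions: monotone sets A_1..A_k;  costs: nondecreasing v_1..v_k."""
    beta, v1 = math.exp(random.random()), costs[0]
    bucket = [math.floor(math.log(v / (beta * v1))) for v in costs]
    last = {b: i for i, b in enumerate(bucket)}    # last level of each interval
    S, out, cur = set(), [], None
    for b in bucket:
        if b != cur:                               # entering a new interval:
            S = S | solutions[last[b]]             # add its last level's solution
            cur = b
        out.append(set(S))
    return out
```

The output is incremental by construction, and each S_ℓ contains the interval-final solution covering level ℓ, which is what the expectation argument charges against.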
Similar techniques could possibly improve the performance of non-covering
problems, although no attempts have been made in this direction.
6.3 Relaxed Incremental Constraint
We consider the two-level bipartite matching problem discussed in Chapter 3. The-
orem 3.4 shows there is an algorithm that finds the incremental bipartite matching
of optimal ratio, and this optimal ratio may be as bad as 2/3 but no worse (Theorems
3.3, 3.2). Now consider the variant that still maximizes the competitive
ratio, but in which later solutions are allowed to remove edges from earlier solutions by
paying a penalty p for each such edge.
For bipartite matching, we assume p < 1 without loss of generality; if p ≥ 1
then we will never violate the incremental constraint, and Theorem 3.4 can obtain
an optimal (2/3)-competitive solution.
Theorem 6.2 Incremental bipartite matching is (|M∗_2|/(|M∗_2| + p|M∗_1|))-competitive for 2 lev-
els, where 0 ≤ p < 1 is the penalty for each violation of the incremental constraint.
Proof. Consider Algorithm 1 of Theorem 3.3, which finds a (2/3)-competitive incre-
mental bipartite matching (M_1, M_2) starting from some M∗_1, M∗_2. We partition the
set of alternating paths of M∗_1 ∪ M∗_2 as follows. Let P_1, P_2 denote the sets of alter-
nating paths selected for M_1 and M_2 \ M_1, respectively, in Algorithm 1. We further
partition P_1: let P_11 denote the set of alternating paths P of P_1 with p ≥ 1/|P ∩ M∗_1|.
These paths will not violate the incremental constraint because the penalty is too
high. Let P_12 denote the set of alternating paths P of P_1 with p < 1/|P ∩ M∗_1|. These
paths will violate the incremental constraint on edges P ∩ M∗_1 for a penalty of
|P ∩ M∗_1|·p < 1 in order to gain the edges of P ∩ M∗_2.
We construct an (|M∗_2|/(|M∗_2| + p|M∗_1|))-competitive incremental matching (M′_1, M′_2) by
initializing M′_1 = M_1 and M′_2 = M_2. For each alternating path P of P_12, we
augment M′_2 with P's level-2 edges and reduce M′_2 by P's level-1 edges, that is,
M′_2 = M′_2 ∪ (P ∩ M∗_2) \ (P ∩ M∗_1); this increases the value of M′_2 by 1 − p|P ∩ M∗_1| > 0.
We now augment M′_1 with ⌊Q⌋ level-1 edges from the alternating paths of P_2, where

Q = (|M∗_1|/(|M∗_2| + p|M∗_1|)) · (|M∗_2| − p|M_1| − |M∗_2||M_1|/|M∗_1|).

This increases the value of M′_1 by ⌊Q⌋ ≥ Q − 1 and decreases the value of M′_2 by
at most ⌊Q⌋·p ≤ Q·p. Therefore

v(M′_1) = |M_1| + ⌊Q⌋ ≥ |M_1| + Q − 1
= (|M_1||M∗_2| + p|M_1||M∗_1| + |M∗_1||M∗_2| − p|M_1||M∗_1| − |M_1||M∗_2|)/(|M∗_2| + p|M∗_1|) − 1
= (|M∗_2|/(|M∗_2| + p|M∗_1|))·|M∗_1| − 1, and

v(M′_2) ≥ |M_2| − p|P_12 ∩ M_1| + |P_12| − ⌊Q⌋·p
≥ |M∗_2| − |P_11 ∪ P_12| − p|P_12 ∩ M_1| + |P_12| − Q·p
= |M∗_2| − |P_11| − p|P_12 ∩ M_1| − Q·p
≥ |M∗_2| − p|P_11 ∩ M_1| − p|P_12 ∩ M_1| − Q·p
= |M∗_2| − p|M_1| − Q·p
= (|M∗_2|² + p|M∗_1||M∗_2| − p|M∗_1||M∗_2| + p²|M_1||M∗_1| + p|M_1||M∗_2|)/(|M∗_2| + p|M∗_1|) − p|M_1|
= (|M∗_2|/(|M∗_2| + p|M∗_1|))·|M∗_2|.

Therefore the algorithm achieves a competitive ratio of |M∗_2|/(|M∗_2| + p|M∗_1|).
If the bound |M∗_2|/(|M∗_2| + p|M∗_1|) < 2/3, then we ignore the penalties in order to obtain
Theorem 3.4's optimal (2/3)-competitive solution.
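Combining Theorems 3.4 and 6.2, the ratio actually achieved is the better of the two bounds; as a one-line sketch (the function name is ours):

```python
def penalty_matching_ratio(m1_star, m2_star, p):
    """Best ratio available for 2-level incremental bipartite matching with
    penalty p: Theorem 6.2's bound, never worse than the penalty-free 2/3."""
    return max(2 / 3, m2_star / (m2_star + p * m1_star))
```

At p = 0 this is 1 (each level simply uses its own optimum); as p grows the bound degrades toward the purely incremental 2/3.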
Similar techniques could be applied to other incremental problems that have
polynomial-time algorithms; however, no work has been done in this area. More-
over, it would be interesting to generalize the penalty model to support a different
penalty for each individual incremental constraint.
Chapter 7
Conclusion
We now explore some of the more interesting open problems inspired by this thesis.
7.1 Online Algorithms
Chapter 2 examines select online problems, using their incremental results to ana-
lyze their online performance. We find that a gap between online and incremental
performance indicates the adversarial constraint contributes to the online compet-
itive ratio; conversely, little or no gap indicates the incremental constraint is pos-
sibly at fault. In this way, bounds on incremental competitiveness and complexity
reveal how much we can hope to improve an online algorithm’s performance.
Other online problems, such as online load balancing, robot navigation, trav-
eling salesman, call admission, and facility location could benefit from this sort of
analysis. These problems must first be studied in the incremental and online set-
tings before any such analysis can be done. If removing the adversarial constraint
leads to a 1-competitive solution then incremental analysis will not help; otherwise,
we can hope to determine what makes the problem’s online performance suffer.
7.2 Extensions to the Incremental Model
We would like to adapt our incremental model to handle the following limitations.
1. We require our solutions to build over time; a more realistic model allows
violations of this constraint at the expense of a penalty. We briefly consider
such a model for bipartite matching in Chapter 6, but more work should be
done with other problems and the general models of Chapters 3 and 4.
2. We require feasible solutions at each time step; in appropriate situations we
could allow violations of this constraint, but at the expense of a penalty.
This idea is used in facility location with outliers [17].
3. The incremental model assumes perfect knowledge of the input sequence.
However restrictive this may seem, it allows us to better understand models
with other levels of foresight, such as stochastic and online problems. Chapter
2 begins to address this issue, but it is not comprehensive.
7.3 Other General Models
Chapters 3 and 4 formulate general incremental models for packing and covering
problems; however, these are not the only two classes of interest. Partitioning and
sequencing problems, for instance, are large categories of combinatorial problems
that would also extend naturally to the incremental setting.
Packing and covering problems search over subsets of a collection of objects; in
contrast, partitioning problems search over all partitions of a collection of objects
[60]. Typically, such problems try to solve a covering and a packing problem simul-
taneously by choosing some number of pairwise-conflict-free subsets that completely
cover the ground elements. For example, the interval scheduling and coloring problems
partition objects in the presence of conflicts, where conflicting objects cannot be
in the same partition. A natural incremental model for these problems reveals
the conflicts over time, as with incremental covering problems; the incremental
solution produces more finely grained partitions of the objects as time goes on.
For instance, the incremental coloring problem could have the edges appear over
time; a solution would be a sequence of colorings such that later colorings are
refinements of earlier colorings. Alternatively, our incremental model could reveal
the collection of objects over time, and now the incremental solution would grow
the partitions as time goes on; this variation is used in the online coloring problem
studied in Section 2.4.2.
Another large class of interest is that of sequencing problems, which search over
all permutations of a collection of objects [60]. For example, the Hamiltonian cycle
problem finds an ordering of n vertices subject to restrictions preventing one vertex
from following another. Similarly, the traveling salesman problem finds an ordering
of n vertices, but there is a cost associated with placing one vertex after another,
rather than a hard constraint. Online models where the objects are revealed over
time have been studied in [3, 4, 61]; the goal is to produce an ongoing sequence
of minimum cost. Notice that the competitive ratio and aggregate value objective
functions may not be the best choices in new classes of problems.
7.4 Correlation Clustering
There are individual combinatorial problems that do not fit the problem classes
discussed thus far in this dissertation. Of particular interest is correlation clustering
[5, 16], a problem of theoretical and practical value. The input is a set of items for
which it is known whether any pair is similar or dissimilar. The goal is to cluster
the items such that either (a) disagreements (similar pairs in different clusters
plus dissimilar pairs in the same cluster) are minimized, or (b) agreements are
maximized. Correlation clustering is challenging in its own right; its incremental
counterpart will only prove more difficult. However, it has natural applicability.
For instance, suppose a set of web pages needs to be organized into a directory
structure; one knows whether each pair is similar enough to be in the same directory
or sub-directory, or sub-sub-directory, down to the desired level of granularity.
One wants to cluster the pages at each level, that is, assign them to a directory,
a sub-directory, etc., such that items clustered together at an early stage (e.g., a
sub-directory) are necessarily clustered together in later stages (e.g., a directory).
This precisely fits the incremental framework, but we do not yet have the tools to
handle such a problem.
7.5 Completing the Picture
Finally, there are some open problems motivated by incomplete results of this
dissertation. For example, Chapter 3 presents a general approximation algorithm
for the aggregate value of any incremental packing problem; we do not, however,
have such an algorithm for the competitive ratio. We believe that Algorithm 1 for
bipartite matching can be generalized to certain packing problems, and that the
greedy algorithm of Theorem 3.19 can be generalized to others; however, we have
no formal proof as yet.
BIBLIOGRAPHY
[1] N. Alon, B. Awerbuch, Y. Azar, N. Buchbinder, and J. Naor. The online set cover problem. In Proceedings of the 35th Annual ACM Symposium on Theory of Computing, pages 100–105. ACM Press, 2003.
[2] N. Alon and Y. Azar. On-line Steiner trees in the Euclidean plane. In Proceedings of the 8th Annual Symposium on Computational Geometry, pages 337–343. ACM Press, 1992.
[3] G. Ausiello, V. Bonifaci, and L. Laura. The on-line asymmetric traveling salesman problem. In Proceedings of the 9th Workshop on Algorithms and Data Structures, pages 306–317, 2005.
[4] G. Ausiello, E. Feuerstein, S. Leonardi, L. Stougie, and M. Talamo. Competitive algorithms for the on-line traveling salesman problem. In Proceedings of the 4th Workshop on Algorithms and Data Structures, volume 955 of Lecture Notes in Comput. Sci., pages 206–217. Springer, 1995.
[5] N. Bansal, A. Blum, and S. Chawla. Correlation clustering. Mach. Learn., 56(1-3):89–113, 2004.
[6] J. Bar-Ilan, G. Kortsarz, and D. Peleg. How to allocate network centers. J. Algorithms, 15(3):385–415, 1993.
[7] P. Benedek, A. Darazs, and J. D. Pinter. Risk management of accidental water pollution: An illustrative application. Water Science and Technology, 22:265–274, 1990.
[8] J. R. Birge and J. W. Yen. A stochastic programming approach to the airline crew scheduling problem. In Proceedings of the 35th Annual Conference of the Operational Research Society of New Zealand, pages 19–30, 2000.
[9] B. Bollobas and A. J. Harris. List-colourings of graphs. Graphs and Combinatorics, 1(2):115–127, 1985.
[10] J. A. Bondy and U. S. R. Murty. Graph Theory with Applications. The MacMillan Press Ltd, 1978.
[11] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, New York, NY, USA, 1998.
[12] P. Briggs, K. D. Cooper, K. Kennedy, and L. Torczon. Coloring heuristics for register allocation. In Proceedings of the ACM SIGPLAN 1989 Conference on Programming Language Design and Implementation, pages 275–284, New York, NY, USA, 1989. ACM Press.
[13] D. W. Bunn and S. N. Paschentis. Development of a stochastic model for the economic dispatch of electric power. Eur. J. Oper. Res., 27:179–191, 1986.
[14] G. J. Chaitin. Register allocation and spilling via graph coloring. Proceedings of the SIGPLAN 1982 Symposium on Compiler Construction, 17(6):98–105, June 1982.
[15] G. J. Chaitin, M. A. Auslander, A. K. Chandra, J. Cocke, M. E. Hopkins, and P. W. Markstein. Register allocation via coloring. Computer Languages, 6:47–57, 1981.
[16] M. Charikar, V. Guruswami, and A. Wirth. Clustering with qualitative information. J. Comput. Syst. Sci., 71(3):360–383, 2005.
[17] M. Charikar, S. Khuller, D. M. Mount, and G. Narasimhan. Algorithms for facility location problems with outliers. In Proceedings of the 12th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 642–651, 2001.
[18] A. Chetwynd. Total colourings of graphs. In R. Nelson and R. J. Wilson,editors, Graph Colourings, Pitman Research Notes in Mathematics Series,pages 65–77. Longman Scientific & Technical, Longman house, Burnt Mill,Harlow, Essex, UK, 1990.
[19] V. Chvatal. A greedy heuristic for the set-covering problem. Mathematics ofOperations Research, 4:233–235, 1979.
[20] G. B. Dantzig and G. Infanger. Multi-stage stochastic linear programs forportfolio optimization. Ann. Oper. Res., 45(1-4):59–76, 1993.
[21] S. Dasgupta. Performance guarantees for hierarchical clustering. In Proceed-ings of the 15th Annual Conference on Computational Learning Theory, pages351–363. Springer-Verlag, 2002.
[22] D. Dentcheva and W. Romisch, editors. Optimal power generation underuncertainty via stochastic programming, volume 458 of Lecture Notes in Eco-nomics and Mathematical Systems. Springer-Verlag, 1998.
[23] Y. Fang and W. Lou. A multipath routing approach for secured data delivery.In Proceedings of IEEE Micron, pages 1467–1473. IEEE Computer Society,2001.
[24] U. Feige. A threshold of ln n for approximating set cover. J. ACM, 45(4):634–652, 1998.
[25] A. Fiat and G. J. Woeginger, editors. Online Algorithms, The State of theArt, volume 1442 of Lecture Notes in Computer Science. Springer, 1998.
106
[26] S. Fiorini and R. J. Wilson. Edge-colourings of graphs. In L. W. Beinekeand R. J. Wilson, editors, Selected Topics in Graph Theory, chapter 5, pages103–126. Academic Press, Inc., London, 1978.
[27] L. K. Fleischer. Approximating fractional multicommodity flow independentof the number of commodities. SIAM J. Discret. Math., 13(4):505–520, 2000.
[28] Jr. L.R. Ford and D.R. Fulkerson. Maximal flow through a network. CanadianJournal of Mathematics, 8:399–404, 1956.
[29] Jr. L.R. Ford and D.R. Fulkerson. A simple algorithm for finding maximalnetwork flows and an application to the Hitchcock problem. Canadian Journalof Mathematics, 9:210–218, 1957.
[30] Jr. L.R. Ford and D.R. Fulkerson. Flows in Networks. Princeton UniversityPress, 1962.
[31] E. Fragniere and A. Haurie. A stochastic programming model for en-ergy/environment choices under uncertainty. International Journal of En-vironment and Pollution, 6(4–6):587–603, 1996.
[32] A.A. Gajvoronskij. Numerical minimization methods for convex function-als dependent on probability measures with applications to optimal pollutionmonitoring. Cybernetics 25, No.4, 471-482 translation from Kibernetika 1989,No.4, 43-51 (1989)., 1989.
[33] A. Gamst. Some lower bounds for a class of frequency assignment problems.IEEE Trans. Veh. Technol., VT-35:8–14, 1986.
[34] M. R. Garey and D. S. Johnson. Computers and Intractability; A Guide tothe Theory of NP-Completeness. W. H. Freeman & Co., 1990.
[35] M. R. Garey, D. S. Johnson, and H. C. So. An application of graph coloringto printed circuit testing. IEEE Transactions on Circuits and Systems, Vol.CAS-23(10):591–598, October 1976.
[36] N. Garg and J. Koenemann. Faster and simpler algorithms for multicommod-ity flow and other fractional packing problems. In Proceedings of the 39thAnnual Symposium on Foundations of Computer Science, page 300. IEEEComputer Society, 1998.
[37] C. Gavoile, D. Peleg, A. Raspaud, and E. Sopena. Small k-dominating setsin planar graphs with applications. In Proc. Workshop on Graphs, WG’01,LNCS 2204, pages 201–216, 2001.
[38] M. X. Goemans. Advanced algorithms. Technical Report MIT/LCS/RSS-27,M.I.T., 1994.
107
[39] T. Gonzalez. Clustering to minimize the maximum intercluster distance. The-oretical Computer Science, 38:293–306, 1985.
[40] R.L. Graham and P. Hell. On the history of the minimum spanning treeproblem. IEEE Annals of the History of Computing, 7(1):43–57, 1985.
[41] A. Gupta, M. Pal, R. Ravi, and A. Sinha. Boosted sampling: approximationalgorithms for stochastic optimization. In Proceedings of the 36th AnnualACM Symposium on Theory of Computing, pages 417–426. ACM Press, 2004.
[42] A. Gyarfas and J. Lehel. Online and first-fit colorings of graphs. J. GraphTh., 12:217–227, 1988.
[43] M. M. Halldorsson. A still better performance guarantee for approximategraph coloring. Inf. Process. Lett., 45(1):19–23, 1993.
[44] M. M. Halldorsson. Online coloring known graphs. In Proceedings of the10th annual ACM-SIAM Symposium on Discrete algorithms, pages 917–918,Philadelphia, PA, USA, 1999. Society for Industrial and Applied Mathematics.
[45] M. M. Halldorsson and M. Szegedy. Lower bounds for on-line graph coloring.In Proceedings of the 3rd annual ACM-SIAM Symposium on Discrete algo-rithms, pages 211–216, Philadelphia, PA, USA, 1992. Society for Industrialand Applied Mathematics.
[46] J. Hartline and A. Sharp. Hierarchical flow. In Proceedings of the 2nd Inter-national Network Optimization Conference, pages 681–687, 2005.
[47] J. Hartline and A. Sharp. An incremental model for combinatorial max-imization problems. In Proceedings of the 5th International Workshop onExperimental Algorithms, pages 36–48. Springer-Verlag, 2006.
[48] J. Hartline and A. Sharp. An incremental model for combinatorial minimiza-tion. technical report. Available at www.cs.cornell.edu/~asharp., 2006.
[49] J. Hastad. Some optimal inapproximability results. In STOC ’97: Proceedingsof the twenty-ninth annual ACM symposium on Theory of computing, pages1–10, New York, NY, USA, 1997. ACM Press.
[50] W. Hsu and G. L. Nemhauser. Easy and hard bottleneck location prpoblems.Discrete Applied Math, 1:209–215, 1979.
[51] O. H. Ibarra and C. E. Kim. Fast approximation algorithms for the knapsackand sum of subset problems. J. ACM, 22(4):463–468, 1975.
[52] N. Immorlica, D. Karger, M. Minkoff, and V. S. Mirrokni. On the costs andbenefits of procrastination: approximation algorithms for stochastic combina-torial optimization problems. In Proceedings of the 15th Annual ACM-SIAM
108
Symposium on Discrete Algorithms, pages 691–700. Society for Industrial andApplied Mathematics, 2004.
[53] D. S. Johnson. Approximation algorithms for combinatorial problems. InSTOC ’73: Proceedings of the fifth annual ACM symposium on Theory ofcomputing, pages 38–49, New York, NY, USA, 1973. ACM Press.
[54] D. S. Johnson. Fast algorithms for bin packing. Journal of Computer andSystem Sciences, 8:272–314, 1974.
[55] H. Kaplan and M. Szegedy. On-line complexity of monotone set systems.In Proceedings of the 10th annual ACM-SIAM symposium on Discrete algo-rithms, pages 507–516. Society for Industrial and Applied Mathematics, 1999.
[56] D. R. Karger. Random sampling in graph optimization problems. PhD thesis,Stanford University, Stanford, CA, USA, 1995.
[57] R. M. Karp. Reducibility among combinatorial problems. In R. E. Millerand J. W. Thatcher, editors, Complexity of Computer Computations, pages85–103. Plenum Press, 1972.
[58] R. M. Karp, U. V. Vazirani, and V. V. Vazirani. An optimal algorithm for on-line bipartite matching. In Proceedings of the 22nd Annual ACM Symposiumon Theory of Computing, pages 352–358. ACM Press, 1990.
[59] H. A. Kierstead, S. G. Penrice, and W. T. Trotter. On-line coloring andrecursive graph theory. SIAM J. Discret. Math., 7(1):72–89, 1994.
[60] J. Kleinberg and E. Tardos. Algorithm Design. Addison-Wesley, 2005.
[61] S. O. Krumke, W. E. de Paepe, D. Poensgen, and L. Stougie. News from theonline traveling repairman. In Proc. 26th Symp. on Mathematical Foundationsof Computer Science (MFCS), volume 2136 of Lecture Notes in Comput. Sci.,pages 487–499. Springer, 2001.
[62] H.W. Kuhn. The hungarian method for the assignment problem. Naval Re-search Logistics Quarterly, 2:83–97, 1955.
[63] F. T. Leighton. A graph colouring algorithm for large scheduling problems.Journal of Research of the National Bureau of Standards, 84(6):489–503, 1979.
[64] G. Lin, C. Nagarajan, R. Rajaraman, and D. P. Williamson. A general ap-proach for incremental approximation and hierarchical clustering. In Pro-ceedings of the 17th Annual ACM-SIAM Symposium on Discrete Algorithms,2006.
[65] L. Lovasz. On the ratio of optimal and fractional covers. Discrete Mathemat-ics, 13:383–390, 1975.
109
[66] R. R. Mettu and C. G. Plaxton. The online median problem. In Proceedings ofthe 41st Annual Symposium on Foundations of Computer Science, page 339.IEEE Computer Society, 2000.
[67] A. Meyerson. Online algorithms for network design. In Proceedings of the16th Annual ACM Symposium on Parallelism in Algorithms and Architec-tures, pages 275–280. ACM Press, 2004.
[68] B. Monien and E. Speckenmeyer. Ramsey numbers and an approximationalgorithm for the vertex cover problem. Acta Inf., 22(1):115–123, 1985.
[69] J. Nesetril. A few remarks on the history of the mst-problem. ArchivumMathematicum, 9:15–22, 1997.
[70] O. Parekh. Edge dominating and hypomatchable sets. In Proceedings of the13th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 287–291.Society for Industrial and Applied Mathematics, 2002.
[71] C. G. Plaxton. Approximation algorithms for hierarchical location problems.In Proceedings of the 35th Annual ACM Symposium on Theory of Computing,pages 40–49. ACM Press, 2003.
[72] J. Plesnık. On the computational complexity of centers locating in a graph.Aplikace Matematiky, 25:445–452, 1980.
[73] G. Robins and A. Zelikovsky. Improved Steiner tree approximation in graphs.In Proceedings of the 11th Annual ACM-SIAM Symposium on Discrete Algo-rithms, pages 770–779. Society for Industrial and Applied Mathematics, 2000.
[74] D. D. Sleator and R. E. Tarjan. Amortized efficiency of list update and pagingrules. Commun. ACM, 28(2):202–208, 1985.
[75] V. V. Vazirani. Approximation Algorithms. Springer-Verlag, 2003.
[76] K. D. Wayne and L. Fleischer. Faster approximation algorithms for generalizedflow. In Proceedings of the 10th annual ACM-SIAM Symposium on Discretealgorithms, pages 981–982. Society for Industrial and Applied Mathematics,1999.
[77] B. Wilson. Line-distinguishing and harmonious colourings. In Roy Nelsonand Robin J. Wilson, editors, Graph Colourings, Pitman Research Notes inMathematics Series, pages 115–133. Longman Scientific & Technical, Long-man house, Burnt Mill, Harlow, Essex, UK, 1990.
[78] A. Yao. New algorithms for bin packing. J. ACM, 27(2):207–227, 1980.
[79] L. Yu, X. Ji, and S. Wang. Stochastic programming models in financial opti-mization: A survey. Advanced Modeling and Optimization, 5(1), 2003.