Page 1

Max-product Belief Propagation for Linear Programming

in Combinatorial Optimization

Jinwoo Shin

KAIST EE

IMA Workshop on Graphical Models, Statistical Inference, and Algorithms, May 20th, 2015

Page 2

Outline

•1. Motivation

•2. Belief Propagation for Linear Programming (LP)

- We found a generic criterion for the convergence and correctness of max-product BP

- We apply this criterion and show that BP can solve the LPs associated with Shortest Path, Perfect Matching, Network Flow, Traveling Salesman, Vertex Cover, and Cycle Packing

•3. Belief Propagation for Minimum Weight Perfect Matching

- We develop a BP-based polynomial-time algorithm for solving the minimum weight perfect matching problem over arbitrary graphs

•4. Parallel Implementation of Belief Propagation for Large-scale Combinatorial Optimization

- We implement “parallel BP” using OpenMP and SIMD

•5. Conclusion

Page 3

Motivation

“Why Belief Propagation for Optimization?”

Page 4

Research in Large-scale Optimization

Design distributed & parallel algorithms

Implement them in large-scale systems

Mathematicians

System Engineers

“This communication is a major bottleneck for large-scale machine learning research”

Page 5

Why is System Abstraction Helpful?

Design distributed & parallel algorithms

Implement them in large-scale systems: Exploit limited abstraction to address system design challenges

Mathematicians

Abstraction (API): Define a narrow Interface

System Engineers

Page 6

Example of System Abstraction

Page 7

Our Goal

•Goal: Design and implement distributed or parallel algorithms based on Belief Propagation for large-scale optimization

•Why Belief Propagation (BP)?

- BP is a popular message-passing heuristic for solving computational inference problems arising in probabilistic graphical models

- BP was first proposed by [Pearl 1982] and gained much attention from the 1990s for error-correcting codes, speech recognition, signal processing, computer vision, etc.

- BP is very easy to code in various system abstractions, and we are currently implementing BP using, e.g., GraphChi, GraphLab, pthreads and OpenMP.

Page 8

Part I

Belief Propagation for Linear Programming:

How to solve LP using BP?

Joint work with Sejun Park (KAIST)

(available at http://arxiv.org/abs/1412.4972)

Page 9

Background: Graphical Model

•Graphical Model (GM) = A way to represent probabilistic relations between random variables through a graph

•Factor Graph for GM

Page 10

Background: Belief Propagation

•Maximum-A-Posteriori (MAP) = An assignment maximizing the probability of the GM

- MAP is necessary in many applications of GMs

- However, it is computationally intractable (i.e., NP-hard) in general

•Max-product Belief Propagation (BP) is a heuristic for MAP

- It is a message-passing algorithm over the factor graph

- It provably works if the factor graph is a tree (i.e., no cycle), but its performance on loopy GMs is not well understood

$$m_{\alpha \to i}(x_i) \;\propto\; \max_{y:\, y_i = x_i} \psi_\alpha(y) \prod_{j \neq i} m_{j \to \alpha}(y_j), \qquad m_{i \to \alpha}(x_i) \;\propto\; \prod_{\beta \neq \alpha} m_{\beta \to i}(x_i)$$
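For concreteness, here is a minimal C++ sketch of these two updates (an illustration, not code from the talk; it assumes binary variables, a factor supplied as a callable `psi`, and brute-force enumeration over the factor's local assignments):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Factor-to-variable message m_{alpha->i}(x_i) for a factor over k binary variables,
// computed by brute-force enumeration of the factor's local assignments y.
std::vector<double> factor_to_var(
    const std::function<double(const std::vector<int>&)>& psi, // psi_alpha(y) >= 0
    const std::vector<std::vector<double>>& in,                // in[j][y_j] = m_{j->alpha}(y_j)
    int i)                                                     // index of the target variable in the factor
{
    const int k = static_cast<int>(in.size());
    std::vector<double> out(2, 0.0);
    std::vector<int> y(k);
    for (int mask = 0; mask < (1 << k); ++mask) {              // enumerate all y in {0,1}^k
        double val = 1.0;
        for (int j = 0; j < k; ++j) {
            y[j] = (mask >> j) & 1;
            if (j != i) val *= in[j][y[j]];                    // product of incoming messages, j != i
        }
        val *= psi(y);
        out[y[i]] = std::max(out[y[i]], val);                  // max over y with y_i fixed
    }
    return out;
}

// Variable-to-factor message m_{i->alpha}(x_i): product of messages from all factors beta != alpha.
std::vector<double> var_to_factor(const std::vector<std::vector<double>>& from_factors, int alpha)
{
    std::vector<double> out(2, 1.0);
    for (int b = 0; b < static_cast<int>(from_factors.size()); ++b)
        if (b != alpha)
            for (int x = 0; x < 2; ++x) out[x] *= from_factors[b][x];
    return out;
}
```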

Page 11

Motivation: Can BP solve LP?

•Consider the following GM and LP (Linear Programming); a standard way to pair an LP with a GM is sketched after this list

- If the LP has an integral solution, it is equivalent to MAP, i.e., BP can be a heuristic for solving LP!

- Question: Does it work? (not obvious, since the GM has loops)

- Next slide: Yes, it sometimes does!
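Since the slide's GM and LP appear only as a figure that is not in this transcript, here is a standard construction (an assumption, consistent with the setup of the arXiv paper above): given the integer program $\min_{x \in \{0,1\}^n} w \cdot x$ subject to constraints indexed by $\alpha$, define the GM

$$p(x) \;\propto\; \prod_i e^{-w_i x_i} \prod_\alpha \psi_\alpha(x_\alpha), \qquad \psi_\alpha(x_\alpha) = \mathbf{1}\{x_\alpha \text{ satisfies constraint } \alpha\}.$$

A MAP assignment of $p$ is an optimal solution of the integer program, so if the LP relaxation has a unique integral optimum, that optimum is exactly the MAP assignment.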

Page 12

Motivation: Can BP solve LP?

•In the past years, it has been evidenced that “BP can solve LP (Linear Programming) associated with several combinatorial optimization problems”

- e.g., Bipartite Matchings [Bayati, Shah and Sharma 2006], Matchings [Sanghavi, Malioutov and Willsky 2007], Perfect Matchings [Bayati, Borgs, Chayes and Zecchina 2008], Shortest Path [Ruozzi and Tatikonda 2008], Independent Sets [Sanghavi, Shah and Willsky 2008], Min-cost Network Flow [Gamarnik, Shah and Wei 2011], Matchings with Odd Cycles [S., Chertkov and Gelfand 2013]

•However, it is not clear how much these results generalize

- i.e., what are the limitations and possibilities of BP in this direction?

- Proof strategies in prior works are very sensitive to the underlying problem setups

Page 13

Our Contribution: Can BP solve LP?

•[Park and S. 2015] BP converges to the solution of LP if

- C1. LP has a unique and integral solution

- C2. Each variable is associated to at most two factors

- C3.

“We derive C2 and C3 while generalizing the proof techniques of [Sanghavi, Malioutov and Willsky 2007] to more general GMs.”

“C3 is the only non-trivial condition, but it is typically easy to check for a given GM.”

Page 14

Proof Strategy and Our Technical Contribution

•Our proof generalizes [Sanghavi, Malioutov and Willsky 2007]

- They prove that BP solves the LP relaxation to the maximum weight matching problem if LP has an integral solution, i.e., C1 holds

- The reason their result is not easy to generalize, even to perfect matching, is that if the alternating path touches leaves of the computation tree, one needs specialized case studies to make the better solution feasible (with respect to the LP constraints)

- We develop new proof techniques that eliminate such case studies

•Question: How useful is this criterion?

- Next slides: LP examples satisfying the conditions include shortest path, perfect matching, vertex cover, traveling salesman, cycle packing, integer network flow

1. They assume, for contradiction, that BP does not work, where x denotes the LP optimal solution.

2. They build an alternating path in the computation tree to find a better solution.

Page 15

Example I of LP solvable by BP

•Shortest Path = Find a shortest path from a source s to a destination t in a graph G=(V,E); a standard LP formulation is sketched after this list

- It is known that this LP always has an integral solution, i.e., C1 holds.

- One can easily check that the corresponding GM satisfies C2 and C3, i.e., BP can solve this LP!

- However, this fact is already known [Ruozzi and Tatikonda 2008] and we rediscover it under our generic framework.
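The slide's LP is not in this transcript; a standard shortest-path LP (routing one unit of flow from s to t in a directed graph) is:

$$\min\; w \cdot x \quad \text{s.t.} \quad \sum_{e \in \delta^o(v)} x_e - \sum_{e \in \delta^i(v)} x_e = \begin{cases} 1 & v = s \\ -1 & v = t \\ 0 & \text{otherwise} \end{cases} \quad \forall v \in V, \qquad x = [x_e] \in [0,1]^{|E|}.$$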

Page 16

Example II of LP solvable by BP

•Minimum Weight Perfect Matching = Find a minimum weight perfect matching in a graph G=(V,E)

- A perfect matching is a subset of edges where every vertex has degree one.

- It is known that this LP (shown below) always has a half-integral solution (x_e = 0, 1/2 or 1), i.e., C1 does not hold

- However, it is easy to check that C2 and C3 hold, i.e., if the LP has an integral solution, BP can compute it. This fact is already known from [Bayati, Borgs, Chayes and Zecchina 2008]

- Next slide: Can BP compute a fractional solution of the LP?

$$\min\; w \cdot x \quad \text{s.t.} \quad \sum_{e \in \delta(v)} x_e = 1,\ \forall v \in V, \qquad x = [x_e] \in [0,1]^{|E|}$$

Page 17

Example II of LP solvable by BP

•One can design an equivalent LP having an integral solution

- Construct a new graph with the same vertices, but with each edge copied twice.

- This LP on the new graph is equivalent to the original LP.

- However, this LP always has an integral solution, i.e., C1 holds.

- One can easily check that the corresponding GM satisfies C2 and C3, i.e., BP can solve this LP !

- This example shows that BP can be used for solving LP having fractional solutions.

$$\min\; w \cdot x \quad \text{s.t.} \quad \sum_{e_i \in \delta(v)} x_{e_i} = 2,\ \forall v \in V, \qquad x = [x_{e_i} : i = 1, 2] \in [0,1]^{|E'|}$$
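A brief note on the correspondence (not from the slide): the two LPs are related by $x_e = \tfrac{1}{2}(x_{e,1} + x_{e,2})$, so a half-integral value $x_e = 1/2$ in the original LP corresponds to selecting exactly one of the two copies of $e$, which is integral in the new LP.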

Page 18

Example III of LP solvable by BP

•Tighter LP for Minimum Weight Perfect Matching

- $\mathcal{C}$ is a set of odd cycles

- The corresponding GM does not satisfy C2 and C3.

- Next slide: However, it is possible to design an equivalent LP satisfying C2 and C3

$$\min\; w \cdot x \quad \text{s.t.} \quad \sum_{e \in \delta(v)} x_e = 1,\ \forall v \in V, \qquad \sum_{e \in E(C)} x_e \le \frac{|C| - 1}{2},\ \forall C \in \mathcal{C}, \qquad x \in [0,1]^{|E|}$$

Page 19

Example III of LP solvable by BP

•[S., Gelfand and Chertkov 2013] design an equivalent LP on a new graph via graphical transformation

- One can easily check that the corresponding GM satisfies C2 and C3, i.e., BP can solve this LP if its solution is integral !

Original LP:

$$\min\; w \cdot x \quad \text{s.t.} \quad \sum_{e \in \delta(v)} x_e = 1,\ \forall v \in V, \qquad \sum_{e \in E(C)} x_e \le \frac{|C| - 1}{2},\ \forall C \in \mathcal{C}, \qquad x \in [0,1]^{|E|}$$

Equivalent LP after the graphical transformation:

$$\min\; w' \cdot y \quad \text{s.t.} \quad \sum_{e \in \delta(i)} y_e = 1,\ \forall i \in V, \qquad y_e \in [0,1],\ \forall e \in E',$$
$$\sum_{j \in V(C)} (-1)^{d_C(j,e)}\, y_{i_C, j} \in [0, 2],\ \forall e \in E(C), \qquad \sum_{e \in \delta(i_C)} y_e \le |C| - 1,\ \forall C \in \mathcal{C}$$

“Equivalent in integrality, uniqueness and optimality”

Page 20

Example IV of LP solvable by BP

•Vertex Cover = Find a smallest set of vertices that covers all edges in a graph G=(V,E); a standard LP relaxation is sketched after this list

- Even if this LP has an integral solution (i.e., C1 holds), the corresponding GM does not satisfy C2 and C3.

- Next slide: However, one can design an equivalent LP satisfying C1, C2 and C3.
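The slide's LP is not in this transcript; the standard vertex cover LP relaxation is:

$$\min\; \sum_{v \in V} x_v \quad \text{s.t.} \quad x_u + x_v \ge 1,\ \forall (u,v) \in E, \qquad x = [x_v] \in [0,1]^{|V|}$$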

Page 21

Example IV of LP solvable by BP

•We consider the following dual LP (a reconstruction is sketched after this list)

- This dual LP always has a half-integral solution. Furthermore, the domain of x is not [0,1]

- However, one can design an equivalent LP by duplicating edges many times so that the domain of x is [0,1] and it has an integral solution, i.e., C1 holds.

- The corresponding GM also satisfies C2 and C3, i.e., BP can solve this dual LP.

- The primal solution can be easily obtained from the dual one via the complementary slackness condition.
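The dual on the slide is not in this transcript; a hedged reconstruction is the LP dual of the vertex cover relaxation above, namely the fractional matching (edge packing) LP

$$\max\; \sum_{e \in E} x_e \quad \text{s.t.} \quad \sum_{e \in \delta(v)} x_e \le 1,\ \forall v \in V, \qquad x_e \ge 0,\ \forall e \in E,$$

whose variables live in $\mathbb{R}_+^{|E|}$ rather than $[0,1]^{|E|}$, matching the remark above.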

Page 22

Example V of LP solvable by BP

•Minimum Cost Integer Network Flow = Find a minimum cost flow in a graph G=(V,E)

- It is known that this LP always has an integral solution, i.e., C1 holds.

- The domain of x is not [0,1], but one can use the duplicating technique we used for perfect matching and vertex cover

- One can easily check that the corresponding GM satisfies C2 and C3, i.e., BP can solve this LP !

$$\min\; w \cdot x \quad \text{s.t.} \quad \sum_{e \in \delta^o(v)} x_e - \sum_{e \in \delta^i(v)} x_e = d_v,\ \forall v \in V, \qquad x = [x_e] \in \mathbb{R}_+^{|E|}$$

Page 23

Example VI of LP solvable by BP

•Traveling Salesman Problem = Find a shortest tour in a graph G=(V,E)

- In this case, this LP often has a fractional solution, i.e., C1 might not hold.

- One can easily check that the corresponding GM satisfies C2 and C3, i.e., BP can solve this LP if its solution is integral !

# vertices            5      10     15     20     25
integral solution     100%   93%    88%    87%    84%

$$\min\; w \cdot x \quad \text{s.t.} \quad \sum_{e \in \delta(v)} x_e = 2,\ \forall v \in V, \qquad x = [x_e] \in [0,1]^{|E|}$$

Page 24

Example VII of LP solvable by BP

•Cycle Packing = Find the maximum weight set of cycles with no common vertex in a graph G=(V,E)

- In this case, the associated LP often has a fractional solution, i.e., C1 might not hold.

- One can easily check that the corresponding GM satisfies C2 and C3, i.e., BP can solve this LP if its solution is integral!

Page 25

Summary of Part I

•Motivation: Can we use BP to solve Linear Programming?

- We found generic sufficient conditions under which BP can solve LP

- These conditions are satisfied by many LPs associated with combinatorial optimization problems, e.g., matching, perfect matching, shortest path, traveling salesman, vertex cover, network flow, cycle packing

- However, solving the LP does not mean solving the combinatorial optimization problem if the LP has fractional solutions

•Part II : If the LP has fractional solutions, can we still design a BP-based algorithm for solving the combinatorial optimization problem?

- We will see such a case for the minimum weight perfect matching problem

Page 26

Part II

Minimum Weight Perfect Matching via Blossom Belief Propagation

Joint work with Sungsoo Ahn (KAIST) and Michael Chertkov (LANL)

(The draft will appear at arXiv in a few weeks)

Page 27

Minimum Weight Perfect Matching

•Minimum Weight Perfect Matching = Find a minimum weight perfect matching in a graph G=(V,E)

- A perfect matching is a subset of edges where every vertex has degree one.

- It is not clear how to solve this IP (Integer Program) in polynomial time

Page 28

Minimum Weight Perfect Matching

•Linear Programming for Minimum Weight Perfect Matching

- One can relax the integer constraints to obtain an LP.

- From Part I, we know that BP can solve this LP

- However, this LP often does not have integral solutions (if G is non-bipartite)

- Motivation: Can we design a BP-based algorithm for solving the minimum weight perfect matching problem, i.e., the IP (Integer Program) instead of the LP?

- Question: Is there another LP relaxation with integral solutions?

LP relaxation

Page 29

Edmonds’ LP

•Edmonds’ LP for Minimum Weight Perfect Matching

- L is the set of odd cycles, called blossoms.

- [Edmonds, 1960s] proved that this LP always has an integral solution if L contains all odd cycles.

- However, the number of odd cycles is exponentially large, i.e., it is impossible to solve this LP directly!

- Next slide: A polynomial-time algorithm using this LP!

LP relaxation
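The slide's formula is not in this transcript; Edmonds' LP is standardly written as follows (consistent with the dual shown on the next page):

$$\min\; w \cdot x \quad \text{s.t.} \quad \sum_{e \in \delta(v)} x_e = 1,\ \forall v \in V, \qquad \sum_{e \in \delta(S)} x_e \ge 1,\ \forall S \in L, \qquad x_e \ge 0,\ \forall e \in E$$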

Page 30

Edmonds’ Blossom Algorithm

•The blossom algorithm uses the primal-dual method

- The algorithm runs in polynomial time, i.e., O(|V|^2 |E|): it maintains primal and dual variables (satisfying the complementary slackness conditions) and updates them iteratively until both are feasible

- The algorithm updates (adds to, removes from) the current set L of blossoms at each iteration

- There are many variations of the dual updates for more efficient implementation, where “Blossom-V” [Kolmogorov 2009] provides the fastest version.

- Our motivation: Can we use BP to develop a faster one?

Dual

$$\max\; \sum_{v \in V} y_v + \sum_{S \in L} y_S \quad \text{s.t.} \quad w_e - y_v - y_u - \sum_{S \in L:\, e \in \delta(S)} y_S \ge 0,\ \forall e = (u,v) \in E, \qquad y_S \ge 0,\ \forall S \in L$$

Page 31

Our Contribution

•Can we use BP to develop a faster or parallel version of Edmonds’ blossom algorithm?

- Our plan: Design a sequence of LPs where each LP is solvable by BP (i.e., a cutting-plane-like approach)

- [Vempala et al. 2012] provides a polynomial-time cutting-plane scheme that requires solving many intermediate LPs. However, the algorithm is quite complex and those LPs are not solvable by BP.

- We develop such an algorithm, which we call Blossom-BP!

- [Ahn, Chertkov, and S. 2015] Blossom-BP outputs the minimum weight perfect matching after running O(|V|^2) BPs or LPs.

- Blossom-BP looks quite different from other variants of Edmonds’ algorithm (including Blossom-V) since it updates primal variables by solving BP or LP, while most others use greedy approaches

- However, we prove that Blossom-BP is “implicitly” a special version of Edmonds’ algorithm

Page 32

Blossom-BP

•Algorithm Parameters

•The algorithm has three steps: A, B and C

Page 33

Blossom-BP

•Step A for primal update

“This LP is solvable by BP from our theorem in Part I”

Page 34

Blossom-BP

•Step B for dual update

Page 35

Blossom-BP

•Step C for termination

Page 36

Blossom-BP vs. Blossom-V

•Blossom-V has four basic operations for updating parameters

- GROW, AUGMENT, SHRINK, EXPAND

Page 37

Blossom-BP vs. Blossom-V

• Blossom-BP jumps over GROWs and AUGMENTs via BP

- However, Blossom-BP “implicitly” chooses a different order of GROW operations

- To guarantee the right ordering, we add some random noise to the weights

- Blossom-BP is much easier to implement than Blossom-V

- Blossom-BP is easy to parallelize

Page 38

Summary of Part I and II

•Motivation: Can we use BP to solve optimization problems?

- Part I : Sufficient conditions so that BP can solve Linear Programming

- Part II : Primal-dual strategy to solve the minimum weight perfect matching by BP

- We are currently implementing a parallel version of Blossom-BP using OpenMP

- We dream more … but such theoretical studies are often slow

•Part III : Empirical study on BP for solving large-scale combinatorial optimizations

Page 39

Part III

Parallel Implementation of Belief Propagation for Large-scale Combinatorial Optimization

Joint work with Inho Cho (KAIST) and Dongsu Han (KAIST)

Page 40

Our Goal and Framework

•We aim to develop a distributed and parallel solver for generic combinatorial optimization using BP

Combinatorial Optimization

Design BP: We also study careful initialization and damping strategies to boost its convergence and accuracy (a damping sketch follows this list)

Post-processing: Run known heuristics for combinatorial optimization

Provide IP (Integer Programming)

Update weights by BP beliefs
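As a concrete illustration of damping (a minimal sketch under assumed conventions, not the authors' code): instead of overwriting a message with its freshly computed value, one mixes the old and new messages, which often stabilizes BP on loopy graphs. A simple convex combination is shown here; geometric (log-domain) mixing is also common.

```cpp
#include <vector>

// Damped BP message update: `message` holds the previous message (one entry per state),
// `new_message` the freshly computed update, and `delta` in [0,1) is the damping factor.
void damped_update(std::vector<double>& message,
                   const std::vector<double>& new_message,
                   double delta = 0.5) {
    for (std::size_t k = 0; k < message.size(); ++k)
        message[k] = (1.0 - delta) * new_message[k] + delta * message[k];
}
```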

Page 41

Impact of Initialization on BP

•We run BP for Maximum Weight Matching

- We consider random graphs of 100k vertices and 50m edges

- We only run 100 iterations of BP (i.e. do not wait for convergence)

Page 42

Impact of Initialization on BP

•We run BP for Maximum Weight Matching

- We consider random graphs of 100k vertices and 50m edges

- We only run 100 iterations of BP (i.e. do not wait for convergence)

Page 43

Impact of Initialization on BP

•We run BP for Maximum Weight Matching

- We consider random bipartite graphs of 1k vertices and 50k edges

- We run BP until it converges

Page 44

Impact of Damping on BP

•We run BP for Maximum Weight Matching

- We consider random graphs of 100k vertices and 50m edges

- We only run 100 iterations of BP (i.e. do not wait for convergence)

Page 45

Parallel Implementation of Our Algorithm

•We implemented a multi-threaded version of BP

- We consider random sparse graphs

- We only run 100 iterations of BP (i.e. do not wait for convergence)

- We implement it using “OpenMP”
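Since the slides do not include code, here is a minimal sketch of how one synchronous BP iteration can be parallelized with OpenMP (assumptions not from the talk: messages live in flat arrays indexed by directed edge, and the hypothetical `compute_message` reads the previous messages and writes only its own edge's slot, so loop iterations are independent).

```cpp
#include <omp.h>
#include <functional>
#include <utility>
#include <vector>

// Double-buffered messages: `cur` is read, `next` is written during one iteration.
struct Messages { std::vector<double> cur, next; };

void bp_iteration(Messages& m, int num_directed_edges,
                  const std::function<void(const std::vector<double>&, int,
                                           std::vector<double>&)>& compute_message) {
    #pragma omp parallel for schedule(static)
    for (int e = 0; e < num_directed_edges; ++e)
        compute_message(m.cur, e, m.next);  // each iteration touches a disjoint slot of m.next
    std::swap(m.cur, m.next);               // publish the new messages
}
```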

Page 46

Parallel Implementation of Our Algorithm

•We further use SIMD for faster BP computations

- We consider random sparse graphs

- We only run 100 iterations of BP (i.e. do not wait for convergence)

- SIMD (Single Instruction, Multiple Data) suits BP well since BP involves many repeated operations
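For illustration, a minimal sketch (not the authors' implementation) of the kind of inner loop SIMD accelerates in max-product BP: an element-wise maximum over float arrays, written with AVX intrinsics (assumes AVX support and arrays padded to a multiple of 8; compile with -mavx).

```cpp
#include <immintrin.h>

// out[i] = max(a[i], b[i]) for i in [0, n), processing 8 floats per AVX instruction.
void elementwise_max(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_max_ps(va, vb));
    }
}
```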

Page 47

Comparison for Maximum Weight Matching

•We consider random bipartite graphs of 10k vertices and 5m edges

- BP provides a 99.9+% approximation ratio, while Gurobi and Blossom always output the optimal solution

Page 48

Conclusion

Page 49

Conclusion

•Our goal is to implement message-passing algorithms using Belief Propagation for solving large-scale optimization

- Theory: This requires understanding BP's performance theoretically

- Implementation: This requires understanding system abstraction issues

•Open Questions

- How can a given LP be engineered to satisfy the conditions in Part I?

- Are the conditions in Part I stronger than necessary?

- Can BP solve other convex optimization problems?

- Is the primal-dual strategy using BP in Part II applicable to other problems?

- What about other variants of BP?