The PCP Theorem via gap amplification
Irit Dinur, Hebrew University
Jan 13, 2016
The PCP Theorem [AroraSafra, AroraLundMotwaniSudanSzegedy, 1992]
Probabilistically Checkable Proof
SAT instance: φ
Verifier
If sat(φ) = 1 then ∃ proof: Pr[Verifier accepts] = 1.
If sat(φ) < 1 then ∀ proofs: Pr[Verifier accepts] < ½.
The PCP Theorem [AroraSafra, AroraLundMotwaniSudanSzegedy, 1992]
[figure: the proof is an assignment to variables V1, V2, V3, …, Vn]
PCP Thm ↔ reduction from SAT to gap-CSP: given a constraint graph G, it is NP-hard to decide between
1. gap(G) = 0
2. gap(G) > ε, for some fixed constant ε > 0
[figure: constraint graph on variables x1, …, xn with constraints on the edges]
This talk
New proof for the PCP Theorem: given a constraint graph G, it is NP-hard to decide between
1. gap(G) = 0
2. gap(G) > ε, for some fixed constant ε > 0
Based on: gap amplification, inspired by Reingold’s SL=L proof
Also: “very” short PCPs
step 0: Constraint Graph SAT is NP-hard
Given a constraint graph G, it is NP-hard to decide if gap(G) = 0 or gap(G) > 0.
Proof: reduction from 3-coloring. Σ = {1,2,3}, inequality constraints on edges. Clearly, G is 3-colorable iff gap(G) = 0.
PCP Thm: given a constraint graph G, it is NP-hard to decide if gap(G) = 0 or gap(G) > ε.
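The step-0 reduction can be checked on toy instances. A brute-force sketch (not part of the talk; the function and example names are mine): the alphabet is Σ = {1,2,3}, every edge carries the inequality constraint, and gap(G) is the minimum fraction of violated edges over all assignments.

```python
from itertools import product

def gap(n, edges, sigma=(1, 2, 3)):
    """Brute-force gap of the inequality constraint graph (tiny n only)."""
    best = 1.0
    for assignment in product(sigma, repeat=n):
        violated = sum(assignment[u] == assignment[v] for u, v in edges)
        best = min(best, violated / len(edges))
    return best

triangle = [(0, 1), (1, 2), (2, 0)]                       # 3-colorable
k4 = [(u, v) for u in range(4) for v in range(u + 1, 4)]  # K4: not 3-colorable

print(gap(3, triangle))  # 0.0: G is 3-colorable iff gap(G) = 0
print(gap(4, k4) > 0)    # True: every assignment violates some edge
```

This is exponential in n, of course; it only illustrates that "3-colorable iff gap = 0" while deciding gap = 0 vs. gap > 0 is exactly graph 3-coloring, hence NP-hard.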
Basic Plan
• Start with a constraint graph G (from 3-coloring)
• G → G1 → G2 → … → Gk = final output of the reduction
• Main Thm: gap(G_{i+1}) ≥ 2 · gap(G_i) (if not already too large)
• size(G_{i+1}) = const · size(G_i); degree, alphabet, and expansion all remain the same.
• Conclusion: NP-hard to distinguish between gap(Gk) = 0 and gap(Gk) > ε (a constant)
Main Step
G_i → G_{i+1}: G_{i+1} = ( prep(G_i) )^t ∘ P
1. Preprocess G: standard transformations making G a regular, constant-degree expander with self-loops.
2. Raise to power t.
3. Compose with P = a "constant-size" PCP; P can be as inefficient as possible.
Key step: G → G^t multiplies the gap by √t, and keeps the size linear!
Powering a constraint graph
Vertices: same. Edges: length-t paths (= powering of the adjacency matrix). Alphabet: Σ^(d^t), reflecting "opinions" about neighbors. Constraints: check everything you can!
Observations:
1. New degree = d^t
2. New size = O(size) (#edges is multiplied by d^(t−1))
3. If gap(G) = 0 then gap(G^t) = 0
4. Alphabet increases from Σ to Σ^(d^t)
Amplification Lemma: gap(G^t) ≥ √t · gap(G)
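The edge set of G^t can be read off the t-th power of the adjacency matrix, as the slide says: entry (u,v) counts the length-t walks from u to v. A minimal pure-Python sketch (the example graph and all names are mine, not from the talk):

```python
# Assumption: G is a d-regular multigraph given by its adjacency matrix;
# the edges of G^t are exactly the length-t walks, i.e. the t-th matrix power.
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def power_graph(A, t):
    """Adjacency matrix of G^t: entry (u, v) counts length-t walks u -> v."""
    R = A
    for _ in range(t - 1):
        R = mat_mul(R, A)
    return R

# 4-cycle with a self-loop at every vertex: 3-regular.
A = [[1, 1, 0, 1],
     [1, 1, 1, 0],
     [0, 1, 1, 1],
     [1, 0, 1, 1]]
d, t = 3, 4
At = power_graph(A, t)
total = sum(map(sum, At))
print(total == len(A) * d**t)   # True: n * d^t walks, i.e. 2|E| * d^(t-1)
```

This verifies observations 1 and 2 numerically: each row of A^t sums to d^t (the new degree), so the number of edges grows by exactly the constant factor d^(t−1).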
Amplification Lemma: gap(G^t) ≥ √t · gap(G)
Intuition: spread the information; inconsistencies will be detected more often.
Assumption: G is d-regular, d = O(1), an expander, with self-loops.
Given A : V → Σ^(d^t) (a "best" assignment for G^t), extract a : V → Σ by taking, for each v, the most popular value claimed for v at the end of a random t/2-step walk from v.
Extracting a : V → Σ
Given A : V → Σ^(d^t) (a "best" assignment for G^t), extract a : V → Σ as above, and consider F = { edges of G rejecting a }.
Note: |F|/|E| ≥ gap(G).
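The plurality-extraction step can be sketched directly. Everything here is an assumed toy interface, not the talk's notation: `nbrs[w]` lists w's neighbors (self-loop included) and `opinions[w][v]` is the value A(w) claims for v; the real construction reads these claims off the Σ^(d^t)-valued assignment A.

```python
import random
from collections import Counter

def extract(v, nbrs, opinions, t, samples=200, rng=random.Random(0)):
    """a(v) = most popular A-opinion about v at the end of a random t/2-walk."""
    votes = Counter()
    for _ in range(samples):
        w = v
        for _ in range(t // 2):           # random t/2-step walk from v
            w = rng.choice(nbrs[w])
        votes[opinions[w][v]] += 1        # the endpoint's claim about v
    return votes.most_common(1)[0][0]     # plurality vote

# Toy check (assumed data): every vertex claims value 7 for every v,
# so the plurality vote must return 7.
nbrs = {0: [0, 1, 2], 1: [1, 0, 2], 2: [2, 0, 1]}
opinions = {w: {v: 7 for v in range(3)} for w in range(3)}
print(extract(0, nbrs, opinions, t=4))  # 7
```

In the proof the vote is over the true walk distribution rather than samples; sampling is only for illustration.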
Amplification Lemma: gap(G^t) ≥ √t · gap(G)
Relate the fraction of rejecting paths to the fraction of rejecting edges (= |F|/|E|).
Two Definitions
π = (v0, v1, …, u, v, …, vt); π_j = (v_{j−1}, v_j).
Definition: the j-th edge strikes π if
1. |j − t/2| < √t
2. π_j = (u,v) ∈ F, i.e., (u,v) rejects (a(u), a(v))
3. A(v0) agrees with a(u) on u, and A(vt) agrees with a(v) on v.
Definition: N(π) = # edges that strike π. Then 0 ≤ N(π) < 2√t, and if N(π) > 0 then π rejects, so gap(G^t) ≥ Pr[N(π) > 0].
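A toy version of the counting in these definitions, checking only conditions 1 and 2 (the agreement condition 3 is stubbed out as a caller-supplied predicate); all names are mine:

```python
from math import sqrt

def N(path_edges, F, agrees=lambda j: True):
    """Count edges that strike the path pi = (pi_1, ..., pi_t)."""
    t = len(path_edges)
    return sum(
        1
        for j, e in enumerate(path_edges, start=1)
        if abs(j - t / 2) < sqrt(t)   # condition 1: j is in the middle window
        and e in F                    # condition 2: the edge rejects a
        and agrees(j)                 # condition 3 (stubbed)
    )

# A length-9 path whose 5th edge rejects: |5 - 4.5| = 0.5 < 3 = sqrt(9).
path = [(i, i + 1) for i in range(9)]
print(N(path, F={(4, 5)}))  # 1
print(N(path, F={(0, 1)}))  # 0: edge 1 is outside the middle window
```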
We will prove: Pr[N(π) > 0] > √t · |F|/|E| · const
Lemma 1: E[N] > √t · |F|/|E| · const(d, λ)
Intuition: assuming N(π) is always 0 or 1, Pr[N > 0] = E[N].
Lemma 2: E[N²] < √t · |F|/|E| · const(d, λ)
Standard: Pr[N > 0] ≥ (E[N])² / E[N²]
pf: E[N] = E[N · 1_{N>0}] ≤ (E[N²])^½ · Pr[N > 0]^½ by Cauchy–Schwarz; square and rearrange.
Pr[N > 0] > (√t · |F|/|E|)² / (√t · |F|/|E|) = √t · |F|/|E| (up to constants)
⇒ gap(G^t) ≥ √t · gap(G)
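The second-moment inequality Pr[N > 0] ≥ (E[N])²/E[N²] holds for any nonnegative random variable, so it can be sanity-checked numerically. Here is a quick check on a made-up distribution (nothing below models the actual path statistic N):

```python
import random

# Sample N from an arbitrary nonnegative distribution and compare
# Pr[N > 0] against the Cauchy-Schwarz lower bound E[N]^2 / E[N^2].
rng = random.Random(1)
samples = [rng.choice([0, 0, 0, 1, 2, 5]) for _ in range(100_000)]
EN = sum(samples) / len(samples)
EN2 = sum(x * x for x in samples) / len(samples)
p_pos = sum(x > 0 for x in samples) / len(samples)
print(p_pos >= EN**2 / EN2)   # True: the bound always holds
```

The bound is tight exactly when N is 0/1-valued, matching the slide's intuition for Lemma 1.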
Lemma 1: E[N] > √t · |F|/|E| · const
N_i(π) = indicator of the event "the i-th edge strikes π"; N = Σ_{i∈J} N_i, where J = { i : |i − t/2| < √t }.
Claim: if i ∈ J, then E[N_i] ≈ (1/(2|Σ|))² · |F|/|E|.
π can be chosen by the following process:
1. Select a random edge (u,v) ∈ E, and let π_i = (u,v).
2. Select a random (i−1)-step path ending at u.
3. Select a random (t−i)-step path starting at v.
Clearly, Pr[π_i ∈ F] = |F|/|E|. What is the probability that A(v0) agrees with a(u) on u and A(vt) agrees with a(v) on v?
Claim: if i ∈ J, then E[N_i] ≈ (1/(2|Σ|))² · |F|/|E|. π chosen by:
1. Select a random edge (u,v) ∈ E, and let π_i = (u,v).
2. Select a random (i−1)-step path ending at u.
3. Select a random (t−i)-step path starting at v.
If i − 1 = t/2: the walk from u reaches a vertex v0 for which A(v0) claims a(u) for u with probability ≥ 1/|Σ| (a(u) is, by definition, the most popular such claim).
For general i ∈ J: roughly the same!! (because of the self-loops)
Back to E[N]
Fix i ∈ J. Select π by the following process:
1. Select a random edge (u,v) ∈ E, and let π_i = (u,v).
2. Select a random (i−1)-step path ending at u.
3. Select a random (t−i)-step path starting at v.
Then:
1. Pr[π_i ∈ F] = |F|/|E|
2. Pr[A(v0) agrees with a on u | (u,v)] > 1/(2|Σ|)
3. Pr[A(vt) agrees with a on v | (v0,…,u,v)] > 1/(2|Σ|)
E[N_i] = Pr[N_i = 1] > |F|/|E| · (1/(2|Σ|))² = |F|/|E| · const
so E[N] = Σ_{i∈J} E[N_i] > √t · |F|/|E| · const. QED
We will prove: Pr[N(π) > 0] > √t · |F|/|E| · const
Lemma 1: E[N] > √t · |F|/|E| · const(d, λ)
Lemma 2: E[N²] < √t · |F|/|E| · const(d, λ)
read: "most struck paths see ≤ a constant number of striking edges"
By Pr[N > 0] ≥ (E[N])² / E[N²]:
Pr[N > 0] > (√t · |F|/|E|)² / (√t · |F|/|E|) = √t · |F|/|E| (up to constants)
⇒ gap(G^t) ≥ √t · gap(G)
Lemma 2: Upper bounding E[N²]
Observe: N(π) ≤ N′(π) := # middle intersections of π with F (steps π_i ∈ F with i ∈ J).
Claim: if G = (V,E) is an expander, and F ⊆ E is any (small) fixed set of edges, then E[(N′)²] < √t · |F|/|E| · (√t · |F|/|E| + const).
Proof sketch: compute Σ_{i<j} E[N′_i N′_j]. Conditioned on π_i ∈ F, the expected # of remaining steps in F is still ≤ a constant.
The full inductive step
G_i → G_{i+1}: G_{i+1} = ( prep(G_i) )^t ∘ P
1. Preprocess G. 2. Raise to power t. 3. Compose with P = a constant-size PCP.
Preprocessing
G → H = prep(G) such that H is d-regular, d = O(1), and H is an expander with self-loops,
maintaining:
size(H) = O(size(G)) and gap(G) ≈ gap(H), i.e.,
1. gap(G) = 0 ⇒ gap(H) = 0
2. gap(G)/const ≤ gap(H)
Steps:
• Add expander edges.
• Add self-loops.
• [PY] Blow up every vertex u into a cloud of deg(u) vertices, and interconnect them via an expander.
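The [PY] blow-up is mostly bookkeeping, sketched below. As a loud simplification, the in-cloud expander is replaced by a simple cycle (a real implementation needs a constant-degree expander on each cloud, which is what preserves the gap up to a constant), and all names are mine:

```python
from collections import defaultdict

def blow_up(edges):
    """Replace each vertex u by a cloud with one copy per incident edge."""
    copies = defaultdict(list)            # u -> list of its cloud vertices
    new_edges = []
    for idx, (u, v) in enumerate(edges):  # one cloud vertex per edge endpoint
        cu, cv = (u, idx), (v, idx)
        copies[u].append(cu)
        copies[v].append(cv)
        new_edges.append((cu, cv, "original constraint"))
    for cloud in copies.values():         # connect each cloud by a cycle
        for i in range(len(cloud)):       # (stand-in for an expander),
            new_edges.append((cloud[i], cloud[(i + 1) % len(cloud)], "equality"))
    return new_edges

# Triangle: 3 original edges + 3 clouds of size 2 with 2 cycle edges each = 9.
print(len(blow_up([(0, 1), (1, 2), (2, 0)])))  # 9
```

Every new vertex now has constant degree (its in-cloud edges plus one original edge), and the size grew only linearly, matching the slide's requirements on prep(G).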
Reducing the alphabet Σ^(d^t) back to Σ
Consider the constraints {C1, …, Cn} (and forget the graph structure).
For each i, we replace C_i by {c_ij} = constraints over the smaller alphabet Σ.
P = an algorithm that takes C to {c_j}, each c_j over Σ, such that:
If C is "satisfiable", then gap({c_j}) = 0.
If C is "unsatisfiable", then gap({c_j}) > ε.
Composition Lemma [BGHSV, DR]: the system C′ = ∪_i P(C_i) has gap(C′) ≈ gap(C).
[figure: P maps each constraint C_i to small-alphabet constraints c_i1, …, c_i5]
Assignment-testers [DR] / PCPPs [BGHSV]
Composition
If P is any AT / PCPP then this composition works. P can be:
• Hadamard-based
• long-code-based
• found via exhaustive search (existence must be ensured, though)
P's running time only affects the constants.
Summary: Main theorem
G_i → G_{i+1}: G_{i+1} = ( prep(G_i) )^t ∘ P; gap(G_{i+1}) > 2 · gap(G_i) and the other parameters stay the same.
1. G [Σ, ε]
2. G → prep(G) [Σ, ε/const]
3. G → G^t [Σ^(d^t), √t · ε/const]
4. G → G ∘ P [Σ, √t · ε/const′] = [Σ, 2ε]
G = G0 → G1 → G2 → … → Gk = final output of the reduction. After k = log n steps:
If gap(G0) = 0 then gap(Gk) = 0.
If gap(G0) > 0 then gap(Gk) > const.
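The doubling bookkeeping can be illustrated with assumed numbers: an unsatisfiable G0 has at least one violated edge, so gap(G0) ≥ 1/|E| ≈ 1/n; each step doubles the gap until it reaches a constant (the threshold ε = 1/8 below is my arbitrary choice):

```python
import math

def final_gap(n, eps=1/8):
    """Double a starting gap of 1/n until it reaches the constant eps."""
    gap, steps = 1.0 / n, 0
    while gap < eps:
        gap, steps = 2 * gap, steps + 1
    return gap, steps

gap, steps = final_gap(10**6)
print(steps, steps <= math.ceil(math.log2(10**6)))  # 17 steps, within log2(n)
```

So k = O(log n) iterations suffice, and since each iteration multiplies the size by a constant, the final instance has size n · const^(log n) = poly(n).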
Application: short PCPs
…[PS, HS, GS, BSVW, BGHSV, BS]
[BS'05]: NP ⊆ PCP_{1, 1−1/polylog}[ log(n · polylog n), O(1) ]: there is a reduction taking a constraint graph G to G′ such that
|G′| = |G| · polylog|G|; if gap(G) = 0 then gap(G′) = 0; if gap(G) > 0 then gap(G′) > 1/polylog|G|.
Applying our main step loglog|G| times to G′, we get a new constraint graph G″ such that:
If gap(G) = 0 then gap(G″) = 0. If gap(G) > 0 then gap(G″) > const.
i.e., NP ⊆ PCP_{1, 1/2}[ log(n · polylog n), O(1) ]
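Starting instead from the [BS'05] gap of 1/polylog|G|, the same doubling needs only O(loglog) main steps, so the size blow-up stays polylogarithmic. A quick check with assumed numbers (n = 2^64, polylog exponent c = 3, and threshold 1/8 are all my choices):

```python
import math

# [BS'05] leaves gap 1/(log2 n)^c; each main step doubles it.
n, c = 2**64, 3
gap = 1.0 / math.log2(n)**c   # = 1/64^3 = 2^-18, exactly representable
steps = 0
while gap < 1/8:              # double until the gap is a constant
    gap *= 2
    steps += 1
print(steps)                  # 15 doublings, i.e. O(loglog n) main steps
```

Each step multiplies the size by a constant, so loglog|G| steps cost only a polylog factor overall, giving the quoted "very short" PCP.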
final remarks
Main point: gradual amplification
Compare to Raz's parallel-repetition theorem.
Q: get the gap up to 1 − o(1).