Linear network code for erasure broadcast channel with feedback Presented by Kenneth Shum Joint work with Linyu Huang, Ho Yuet Kwan and Albert Sung 1 Mar 2014
Erasure broadcast channel
[Diagram: source node broadcasts data packets P1, P2, …, PN to User 1, User 2, User 3, …, User K]
We want to send all source data packets to each user.
Each transmitted packet is erased with a certain probability.
Erasure broadcast channel with feedback
[Diagram: source node and User 1, User 2, User 3, …, User K]
Users can send acknowledgements back to the source node.
Linear Network Code
• The source node broadcasts encoded packets.
• A packet is considered as a vector over a finite field F.
• An encoded packet is obtained by taking a linear combination of the N source packets, with coefficients drawn from F.
  – The vector formed by the N coefficients is called the encoding vector of the encoded packet.
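As a toy illustration (not from the slides), take F = GF(2): a linear combination then reduces to XOR-ing the source packets selected by the encoding vector. The function name and packet contents below are my own.

```python
def encode(packets, coeffs):
    """Form the GF(2) linear combination of equal-length source packets:
    XOR together every packet whose coefficient is 1."""
    out = bytes(len(packets[0]))
    for pkt, c in zip(packets, coeffs):
        if c:
            out = bytes(a ^ b for a, b in zip(out, pkt))
    return out

# Three toy one-byte source packets; encoding vector (1, 0, 1) selects P1 and P3:
P = [b"\x0f", b"\x33", b"\x55"]
print(encode(P, [1, 0, 1]).hex())  # 0x0f XOR 0x55 = "5a"
```

Over a larger field GF(q), the XOR would be replaced by coefficient-wise multiply-and-add modulo q.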
Erasure broadcast channel
[Diagram: source node broadcasts linear combinations of P1, P2, …, PN to User 1, User 2, User 3, …, User K]
The packet header contains the encoding vector of the encoded packet.
The received packets are cached
[Diagram: each user caches its received encoding vectors (v1, v2, …; v′1, v′2, …; v″1, v″2, …). The source packets are interpreted as the standard basis e1, e2, …, eN in the vector space F^N.]
Each user stores the received packets and the corresponding encoding vectors
Synopsis
• Objectives:
  – Minimize the completion time of each user.
  – Minimize encoding and decoding complexity.
• Decoding complexity can be reduced if the encoding vectors are sparse.
  – Apply some version of Gaussian elimination which exploits sparsity.
• The problem of generating sparse encoding vectors is related to some NP-complete problems.
• Heuristic algorithms and comparison.
Complexity Issues in Network Coding

• Deciding whether there exists a linear network code with prescribed alphabet size is NP-hard.
  – Lehman and Lehman, Complexity classification of network information flow problems, SODA, 2004.
• The minimization of the number of encoding nodes is NP-hard.
  – Langberg, Sprintson and Bruck, The encoding complexity of network coding, Trans. IT, 2006.
  – Langberg and Sprintson, On the hardness of approximating the network coding capacity, Trans. IT, 2011.
• For the noiseless broadcast channel with a binary alphabet, the problem of minimizing the number of packet transmissions in the index coding problem is NP-hard.
  – El Rouayheb, Chaudhry and Sprintson, On the minimum number of transmissions in single-hop wireless coding networks, ITW, 2007.
Innovative Packet
• An encoded packet is said to be innovative to a user if its encoding vector is linearly independent of the encoding vectors previously received by that user.
• If an encoded packet is innovative to all users, then we say that it is innovative.
• It is known that innovative packets always exist if the finite field size is larger than or equal to the number of users.
  – Keller, Drinea and Fragouli, Online broadcasting with network coding, NetCod, 2008.
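A minimal sketch (my own, for GF(2)) of the innovativeness test: each encoding vector is an N-bit integer, and a packet is innovative to a user iff its vector lies outside the GF(2) span of the vectors that user has already cached.

```python
def gf2_reduce(v, basis):
    """Reduce bitmask v against an echelon GF(2) basis kept in descending
    order; a residue of 0 means v lies in the span."""
    for b in basis:
        v = min(v, v ^ b)
    return v

def gf2_insert(basis, v):
    """Add v to the basis if it enlarges the span; keep the basis in echelon
    form (distinct leading bits, descending order)."""
    r = gf2_reduce(v, basis)
    if r:
        basis.append(r)
        basis.sort(reverse=True)
    return r != 0

# A user has cached e1 = 100 and e1+e2 = 110 (N = 3):
cache = []
gf2_insert(cache, 0b100)
gf2_insert(cache, 0b110)
print(gf2_reduce(0b010, cache) != 0)  # False: e2 is already in the span
print(gf2_reduce(0b001, cache) != 0)  # True: e3 is innovative to this user
```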
Notation: Encoding matrix
[Diagram: source node and User 1, User 2, User 3, …, User K, with matrices C1, C2, C3, …, CK]
The rows of the matrix C_k are the encoding vectors of the packets received by user k.
The source packets are interpreted as the standard basis e1, e2, …, eN in the vector space F^N.
The set of all innovative encoding vectors

Given C_k for all k's, the set of all innovative encoding vectors is defined as

  V = { v ∈ F^N : v ∉ rowspace(C_k) for all k = 1, 2, …, K }.
Hamming weight and sparsity

Given an encoding vector v = (v_1, …, v_N), the support of v is defined as supp(v) = { i : v_i ≠ 0 }.

The Hamming weight wt(v) of v is defined as the cardinality of supp(v). A vector v with Hamming weight wt(v) ≤ n is said to be n-sparse.
SPARSITY Problem

Considering both the sparsity and the innovativeness of an encoding vector, we formulate the problem below:

Problem: SPARSITY
Instance: K matrices C_1, …, C_K over GF(q), where q ≥ K; n is a positive integer.
Question: Is there a vector v ∈ V with Hamming weight less than or equal to n?
Example: Let q = 2, K = 2, N = 4 and n = 2. Consider the following two matrices C_1 and C_2 [shown on slide]. We then have the set V [shown on slide]. There are three vectors in V with Hamming weight less than or equal to n = 2.
Theorem. SPARSITY is NP-complete.

Now define the optimization version of SPARSITY as follows:
Question: Find a vector v ∈ V with minimum Hamming weight.

It can be shown that the optimization version of SPARSITY is NP-hard. However, for fixed K and q, it can be solved by brute-force methods [running time shown on slide].
Orthogonal complement

Let V_k be the row space of C_k. Denote the orthogonal complement of V_k by V_k^⊥.
Let Z_k be a matrix whose rows form a basis of V_k^⊥. Z_k can be obtained from the Reduced Row Echelon Form (RREF) of C_k.
To check whether an encoding vector is innovative, we use the following fact.

Theorem. Given Z_1, …, Z_K, an encoding vector v belongs to V iff Z_k v^T ≠ 0 for all k's.
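This test is mechanical to apply; the hedged sketch below (function and variable names are mine) checks a candidate vector against every user's Z_k over GF(q).

```python
def innovative_to_all(Z_list, v, q):
    """v is innovative iff for every user k some row z of Z_k satisfies
    z . v != 0 (mod q), i.e. Z_k v^T != 0; otherwise v lies inside user k's
    row space and is not innovative to that user."""
    for Z in Z_list:
        if all(sum(zi * vi for zi, vi in zip(z, v)) % q == 0 for z in Z):
            return False
    return True

# q = 2, N = 3; user 1 holds e1 and e2 (so Z_1 spans e3); user 2 holds e1 only.
Z1, Z2 = [[0, 0, 1]], [[0, 1, 0], [0, 0, 1]]
print(innovative_to_all([Z1, Z2], [0, 0, 1], 2))  # True
print(innovative_to_all([Z1, Z2], [1, 1, 0], 2))  # False: in user 1's span
```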
Minimizing the Hamming Weight
Given all C_k's, we obtain their orthogonal complements Z_k by RREF. Let z_{k,i} be the i-th row of Z_k. Define

  u_k = z_{k,1} ∨ z_{k,2} ∨ …,

where ∨ denotes the logical-OR operator applied component-wise to vectors, with each non-zero component being treated as a "1".
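The component-wise OR above is straightforward to compute; a small sketch (names mine) that returns the binary vector u_k from the rows of Z_k:

```python
def or_vector(Z):
    """Component-wise logical OR of the rows of Z_k: coordinate j of u_k is 1
    iff some row of Z_k has a non-zero entry in coordinate j."""
    return [int(any(row[j] != 0 for row in Z)) for j in range(len(Z[0]))]

# A hypothetical Z_k over GF(3) with N = 4:
Z = [[1, 0, 2, 0],
     [0, 0, 0, 1]]
print(or_vector(Z))  # [1, 0, 1, 1]
```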
Example: Let q = 3, K = 3, N = 4, and let the orthogonal complements of C_1, C_2, C_3 be given by the row spaces of three matrices [shown on slide]. The vectors u_k for k = 1, 2, 3 are [shown on slide].
Define B as the K × N matrix whose k-th row is u_k. Note that B is a binary matrix and has no zero rows.

Given a subset S of the column indices of B, let B(S) be the submatrix of the matrix B whose columns are chosen according to S.
Lemma 3. Let S be an index set and B(S) be as defined above. There exists an encoding vector v ∈ V with support inside S (i.e. v_i = 0 for i ∉ S) iff B(S) has no zero rows.
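Lemma 3 turns the search for a sparse support into a purely combinatorial test. A minimal sketch (my own), where B is a list of binary rows and S a set of column indices:

```python
def support_feasible(B, S):
    """An innovative vector supported inside S exists iff the submatrix B(S)
    (the columns of B restricted to S) has no all-zero row."""
    return all(any(row[j] for j in S) for row in B)

# A hypothetical 3 x 4 matrix B:
B = [[1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]
print(support_feasible(B, {0, 1, 2}))  # True: every row is hit
print(support_feasible(B, {0, 3}))     # False: the last two rows become zero
```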
Example (cont’d)
[Diagram: the rows of B for the first, second and third users]
Choose a 3 x w submatrix of B with minimal w, such that the submatrix has no zero rows.
We may choose the first two columns. We can find an encoding vector with two non-zero components.
By reducing HITTING SET to SPARSITY, the NP-completeness of SPARSITY can be shown.

Problem: HITTING SET
Instance: A finite set U, a collection T of subsets of U, and an integer n.
Question: Is there a subset W ⊆ U with cardinality |W| ≤ n, such that for each T_j ∈ T we have W ∩ T_j ≠ ∅?
Example (cont’d)
[Diagram: the supports of the three users' rows of B, viewed as sets to be hit]

The minimal hitting sets are: {1,2}, {1,3}, {2,3}, {2,4}, {3,4}.
Optimal Hitting method
• Solve the hitting set problem optimally by reducing it to binary integer programming.
  – Minimum sparsity at each iteration is guaranteed.
• After the support of the encoding vector is determined, find the coefficients which make the vector innovative.
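The slides solve this step with binary integer programming; as a hedged stand-in, an exhaustive search over supports (exponential, but exact on small instances) illustrates what the optimal method computes:

```python
from itertools import combinations

def min_hitting_set(sets, universe):
    """Smallest W that intersects every set in `sets`; exhaustive search used
    here in place of the binary-integer-programming formulation."""
    for size in range(1, len(universe) + 1):
        for W in combinations(sorted(universe), size):
            if all(any(e in s for e in W) for s in sets):
                return set(W)
    return None

sets = [{1, 2}, {2, 3}, {1, 3}]
print(min_hitting_set(sets, {1, 2, 3}))  # {1, 2}
```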
Greedy Hitting method
• Solve the hitting set problem heuristically by a greedy method.
  – Sequentially pick an element which hits the largest number of sets.
  – Minimum sparsity is not guaranteed.
• After the support of the encoding vector is determined, find the coefficients which make the vector innovative.
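A sketch of the greedy heuristic (code and names are my own): repeatedly pick the element contained in the largest number of not-yet-hit sets.

```python
def greedy_hitting_set(sets):
    """Greedy heuristic: pick the element hitting the most unhit sets.
    The result is always a hitting set, but not necessarily a minimum one."""
    unhit = [set(s) for s in sets]
    chosen = []
    while unhit:
        candidates = set().union(*unhit)
        best = max(candidates, key=lambda e: sum(e in s for s in unhit))
        chosen.append(best)
        unhit = [s for s in unhit if best not in s]
    return chosen

W = greedy_hitting_set([{1, 2}, {2, 3}, {1, 3}])
print(W)  # a valid hitting set; its exact members and size may vary
```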
Existing encoding schemes (I)
• Random linear network codes
  – Encoding:
    • Phase 1: The source node first broadcasts each packet.
    • Phase 2: The source node sends encoded packets with randomly generated coefficients.
  – Decode by Gaussian elimination.
  – No feedback is required.
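As a sketch of the decoding step (GF(2) only, with encoding vectors as bitmask integers; all names are mine), Gaussian elimination applied jointly to the vectors and the payloads recovers the source packets once the received vectors reach full rank:

```python
def decode_gf2(vectors, payloads, N):
    """Recover the N source packets from N linearly independent received
    (encoding vector, payload) pairs by Gaussian elimination over GF(2).
    Bit N-1-i of a vector is the coefficient of source packet i."""
    rows = list(zip(vectors, payloads))
    for i in range(N):
        pivot = 1 << (N - 1 - i)
        # Bring a row with the pivot bit into position i, then clear that
        # bit from every other row, XOR-ing the payloads in lockstep.
        j = next(k for k in range(i, N) if rows[k][0] & pivot)
        rows[i], rows[j] = rows[j], rows[i]
        for k in range(N):
            if k != i and rows[k][0] & pivot:
                rows[k] = (rows[k][0] ^ rows[i][0],
                           bytes(a ^ b for a, b in zip(rows[k][1], rows[i][1])))
    return [p for _, p in rows]

# Received P1+P2 and P2, for sources P1 = 0x01 and P2 = 0x02:
print(decode_gf2([0b11, 0b01], [b"\x03", b"\x02"], 2))  # [b'\x01', b'\x02']
```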
Existing encoding schemes (II)
• Chunked code
  – An extension of random linear network coding.
  – Divide the source packets into chunks; each chunk contains c packets.
  – Apply random linear network coding to each chunk.
  – The resulting encoding vectors are c-sparse.
  – Feedback is not required.
Existing encoding schemes (III)
• Instantly decodable network code
  – Encoding:
    • Phase 1: The source packets are first broadcast once.
    • Phase 2: Find a subset of users such that each of them can decode a source packet from the transmitted encoded packet.
  – Decoding: A user in the target set can decode one packet immediately if the encoded packet is received successfully.
  – Feedback is required.
Existing encoding schemes (IV)
• LT code
  – Uses the robust soliton degree distribution in encoding.
  – No feedback is required.
Comparison of complexity
Scheme                              Encoding          Decoding
LT code                             O(N)              O(N^2)
Random linear network code          O(N)              O(N^3)
Chunked code                        O(c)              O(c^2 N)
Instantly decodable network code    O(K^3 N^2)        O(min(K,N) N)
Optimal hitting method              O(1.238^N + K)    O(min(K,N) N^2)
Greedy hitting method               O(K^2 N^2)        O(min(K,N) N^2)
Completion time vs number of users (perfect feedback)
Binary alphabet
Completion time vs number of users (lossy feedback)
Decoding time vs no. of users
Encoding time vs no. of users
Hamming weight vs no. of users
Conclusion

• We investigated the generation of the sparsest innovative encoding vectors, which is proven to be NP-hard.
• A systematic way to generate the sparsest innovative encoding vectors is given.
• There is a tradeoff between encoding complexity, decoding complexity, and completion time.