IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 9, SEPTEMBER 2010

Efficient Interpolation in the Guruswami-Sudan Algorithm

Peter V. Trifonov, Member, IEEE

Abstract: A novel algorithm is proposed for the interpolation step of the Guruswami-Sudan list decoding algorithm. The proposed method is based on the binary exponentiation algorithm, and can be considered as an extension of the Lee-O'Sullivan method. The algorithm is shown to achieve both asymptotical and practical performance gain compared to the case of the iterative interpolation algorithm. Further complexity reduction is achieved by employing the re-encoding transformation. The key contribution of the paper, which enables the complexity reduction, is a novel randomized ideal multiplication algorithm.

Index Terms: Gröbner basis, Guruswami-Sudan algorithm, ideal multiplication, interpolation, list decoding, re-encoding, Reed-Solomon codes.

    I. INTRODUCTION

The Guruswami-Sudan list decoding algorithm [1] is one of the most powerful decoding methods for Reed-Solomon codes. Its complexity is known to be polynomial. However, the degree of the polynomial turns out to be too high. Therefore, computationally efficient algorithms are needed in order to obtain a practical implementation of this method.

The most computationally intensive step of the Guruswami-Sudan algorithm is construction of a bivariate polynomial passing through a number of points with a given multiplicity. In this paper a novel reduced-complexity interpolation algorithm is presented. It is based on the well-known binary exponentiation method, so we call it the binary interpolation algorithm. The algorithm exploits the relationship between the Gröbner bases of zero-dimensional ideals and appropriate modules. The key component of the proposed method is a novel fast randomized ideal multiplication algorithm (see Fig. 3). We also show that the interpolation complexity can be further reduced by employing the re-encoding transformation [2], [3].

The paper is organized as follows. Section III presents the necessary background information. Section IV introduces the novel interpolation algorithm. Numeric performance results are given in Section V. Finally, some conclusions are drawn.

    II. NOTATION

is the ideal generated by . is the module generated by . is the Hasse derivative of at point . means that has a root of multiplicity at least in , i.e., . is the ideal of polynomials having roots of multiplicity at least at points . is the leading term of with respect to some term ordering. iff for some and . is the dimension of vector . , where is a Gröbner basis of some module, and .

Manuscript received August 17, 2009; revised December 30, 2009. Date of current version August 18, 2010.

The author is with the Distributed Computing and Networking Department, Saint-Petersburg State Polytechnic University, St. Petersburg 195251, Russia (e-mail: [email protected]).

Communicated by H.-A. Loeliger, Associate Editor for Coding Techniques.

Digital Object Identifier 10.1109/TIT.2010.2053901

    III. PRELIMINARIES

This section introduces some background information on the Guruswami-Sudan list decoding method, associated computational algorithms, and various algebraic concepts used in this paper.

    A. Term Orderings

Multivariate polynomials are extensively used in this paper, so one needs to introduce monomial orderings to deal with them. The -weighted degree of a monomial equals . The -weighted degree of a polynomial equals the maximum of the -weighted degrees of its nonzero terms. Weighted degree can be used to define a term ordering. The -weighted degree lexicographic ordering is defined as . Lexicographic ordering is defined as . The leading term of a polynomial is given by . Multivariate polynomials can be ordered according to their leading terms.
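To make the ordering concrete, here is a minimal Python sketch (the names and the tie-breaking convention are this sketch's assumptions, not the paper's) of a (w1, w2)-weighted degree computation and the induced leading-term selection for bivariate polynomials stored as dictionaries:

    # Minimal sketch: (w1, w2)-weighted degree of a bivariate monomial
    # x^i * y^j and the induced weighted-degree lexicographic ordering,
    # as commonly used with weights (1, k-1) in list decoding.

    def wdeg(i, j, w1=1, w2=1):
        """(w1, w2)-weighted degree of the monomial x^i * y^j."""
        return w1 * i + w2 * j

    def wdeglex_key(monomial, w1=1, w2=1):
        """Sort key: weighted degree first, ties broken by y-degree.
        (This tie-breaking rule is one common convention; the paper's
        exact ordering is the one defined in the text above.)"""
        i, j = monomial
        return (wdeg(i, j, w1, w2), j)

    def leading_term(poly, w1=1, w2=1):
        """Leading term of a bivariate polynomial given as {(i, j): coeff}."""
        return max((m for m, c in poly.items() if c != 0),
                   key=lambda m: wdeglex_key(m, w1, w2))

    # Example: for an [n, k] Reed-Solomon code one would use weights (1, k-1).
    Q = {(0, 0): 3, (2, 0): 1, (0, 1): 5}   # 3 + x^2 + 5y
    print(leading_term(Q, w1=1, w2=3))      # (0, 1) when k - 1 = 3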

B. Guruswami-Sudan Algorithm

The Guruswami-Sudan algorithm addresses the problem of list decoding of a Reed-Solomon code over the field . That is, given a received vector , it finds all message polynomials , such that and for at least distinct code locators [1]. This is


    Fig. 1. Iterative interpolation algorithm (IIA).

accomplished by constructing a polynomial , such that , and factoring it. It is possible to show that the parameters of this algorithm must satisfy [4]

(1)

(2)

(3)

    C. Interpolation

Construction of a polynomial turns out to be the most computationally expensive step of the Guruswami-Sudan algorithm. This section presents an overview of two existing algorithms for the interpolation problem. The first one will be used to derive some important properties of the underlying algebraic structures, and the second will be used as a component of the proposed method.

Observe that the set of polynomials is an ideal. The smallest non-zero polynomial of this ideal with respect to the -weighted degree lexicographic ordering must satisfy the constraints of the Guruswami-Sudan algorithm. Such a polynomial is guaranteed to appear in a Gröbner basis of with respect to this term ordering [5]. However, it turns out to be easier to construct a Gröbner basis of the module .

Iterative Interpolation Algorithm: The algorithm shown in Fig. 1 constructs non-zero polynomials , such that , and are the smallest possible integers [4], [6], [7]. These polynomials represent a Gröbner basis of the module [8], [9]. In the context of list decoding one has to use the -weighted degree lexicographic ordering. The solution of the interpolation problem is given by the smallest polynomial in the obtained vector. It can be seen that the complexity of the iterative interpolation algorithm (IIA) is given by .
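As an illustration of the procedure in Fig. 1, the following Python sketch implements a simplified Koetter-style variant of the iterative interpolation step for root multiplicity 1 over a prime field; the dictionary representation, the restriction to multiplicity 1, and the parameter values are assumptions of this sketch, not the paper's code:

    # Simplified sketch (multiplicity 1 only, prime field GF(P)) of an
    # iterative interpolation step in the spirit of Fig. 1.

    P = 19            # field size (assumed prime here for simplicity)
    K = 4             # message length, so the y-weight is K - 1

    def ev(q, x0, y0):
        """Evaluate a bivariate polynomial {(i, j): c} at (x0, y0) over GF(P)."""
        return sum(c * pow(x0, i, P) * pow(y0, j, P) for (i, j), c in q.items()) % P

    def sub_scaled(a, b, s):
        """Return a - s*b over GF(P)."""
        r = dict(a)
        for m, c in b.items():
            r[m] = (r.get(m, 0) - s * c) % P
        return {m: c for m, c in r.items() if c}

    def shift_x(q, x0):
        """Return (x - x0) * q over GF(P)."""
        r = {}
        for (i, j), c in q.items():
            r[(i + 1, j)] = (r.get((i + 1, j), 0) + c) % P
            r[(i, j)] = (r.get((i, j), 0) - x0 * c) % P
        return {m: c for m, c in r.items() if c}

    def lt_key(q):
        """Leading-term key under the (1, K-1)-weighted degree lex ordering."""
        return max((i + (K - 1) * j, j) for (i, j) in q)

    def iia(points, l):
        """Return Q_0..Q_l, all vanishing at every point (multiplicity 1)."""
        Q = [{(0, j): 1} for j in range(l + 1)]          # start from y^j
        for x0, y0 in points:
            d = [ev(q, x0, y0) for q in Q]
            J = [j for j in range(l + 1) if d[j]]
            if not J:
                continue
            js = min(J, key=lambda j: lt_key(Q[j]))      # smallest polynomial
            f, ds_inv = Q[js], pow(d[js], P - 2, P)
            for j in J:
                if j != js:
                    Q[j] = sub_scaled(Q[j], f, d[j] * ds_inv % P)
            Q[js] = shift_x(f, x0)
        return Q

    # Usage: interpolate through a few points and pick the smallest polynomial.
    pts = [(1, 5), (2, 7), (3, 11), (4, 2)]
    basis = iia(pts, l=2)
    Qmin = min(basis, key=lt_key)
    assert all(ev(Qmin, x, y) == 0 for x, y in pts)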

It will sometimes be convenient to represent a vector of polynomials as , where is a polynomial matrix.

Lemma 1: Let be a vector of polynomials constructed by IIA for the input . Then

where is the corresponding polynomial matrix.

Proof: Observe that at each iteration of IIA the -degree of exactly one polynomial is increased by one. Hence, the sum of the leading term -degrees of all polynomials after algorithm termination is equal to the number of partial Hasse derivatives forced to be zero. On the other hand, this algorithm can be interpreted as construction of the polynomial matrix

(4)

where and is the index of the smallest polynomial selected on line 7 of the algorithm. Obviously, for some non-zero , and the number of terms in (4) is again equal to the number of Hasse derivatives forced to be zero.

Observe that for a fixed term ordering there may exist many different Gröbner bases of a module. However, they share the following property.

Lemma 2: Let be a Gröbner basis of the module . Then .

Proof: Let be the Gröbner basis of constructed by IIA for the same term ordering. Then and . This means that the leading terms of and are the same up to some constants in , and the statement follows from Lemma 1.

Transformation of the Module Basis: It was shown in [10]-[13] that the ideal of interpolation polynomials is generated by

(5)


    Fig. 2. Multi-dimensional Euclidean algorithm.

where and . Hence, the basis of the module is given by , where

(6)

Lemma 3: The polynomials represent a Gröbner basis of the module if , are distinct values.

Proof: The lemma follows from the Buchberger S-pair criterion [14].

The above-described basis has to be transformed into a Gröbner one with respect to the -weighted degree lexicographic monomial ordering. This can be done with the algorithm given in [11], [12], which can be considered as a simplified instance of the Buchberger algorithm. It is convenient to present it here in a slightly modified form. Namely, this algorithm takes as input some polynomial , a Gröbner basis of some module , and constructs a Gröbner basis of the module . The algorithm is shown in Fig. 2.

Lemma 4: Let , be the polynomials such that . Then the Reduce algorithm constructs a Gröbner basis of the module .

Proof: This statement follows from Lemma 3 and invertibility of the transformations used by the algorithm.

The required Gröbner basis is obtained as , where

(7)

The complexity of this method is given by [12]. Curiously, if the -weighted degree lexicographic ordering is used and , it reduces to the Gao decoding method [15], [16], with Reduce being the standard extended Euclidean algorithm with an early termination condition. Therefore, Reduce will be referred to as the multidimensional Euclidean algorithm.
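Since in this special case Reduce degenerates to the extended Euclidean algorithm with an early termination condition, the following Python sketch of that special case may help fix ideas; the coefficient-list representation, the stopping threshold, and all names are illustrative assumptions of this sketch, not definitions from the paper:

    # Illustrative special case only: extended Euclidean algorithm with an
    # early termination condition, which the multidimensional Euclidean
    # algorithm (Reduce) generalizes. Polynomials are coefficient lists over
    # GF(P), lowest degree first.

    P = 929  # an arbitrary prime, assumed here for the sketch

    def trim(a):
        while a and a[-1] % P == 0:
            a.pop()
        return a

    def add(a, b, s=1):
        r = [0] * max(len(a), len(b))
        for i, c in enumerate(a): r[i] = (r[i] + c) % P
        for i, c in enumerate(b): r[i] = (r[i] + s * c) % P
        return trim(r)

    def mul(a, b):
        r = [0] * (len(a) + len(b) - 1) if a and b else []
        for i, ca in enumerate(a):
            for j, cb in enumerate(b):
                r[i + j] = (r[i + j] + ca * cb) % P
        return trim(r)

    def divmod_poly(a, b):
        q, r = [], list(a)
        inv = pow(b[-1], P - 2, P)
        while len(r) >= len(b):
            d = len(r) - len(b)
            c = r[-1] * inv % P
            q = add(q, [0] * d + [c])
            r = add(r, [0] * d + [x * c % P for x in b], s=-1)
        return q, r

    def partial_euclid(a, b, stop_deg):
        """Run the extended Euclidean algorithm on (a, b) and stop as soon as
        deg(remainder) < stop_deg; returns (r, u, v) with r = u*a + v*b."""
        r0, r1 = list(a), list(b)
        u0, u1 = [1], []
        v0, v1 = [], [1]
        while len(r1) - 1 >= stop_deg:
            q, r = divmod_poly(r0, r1)
            r0, r1 = r1, r
            u0, u1 = u1, add(u0, mul(q, u1), s=-1)
            v0, v1 = v1, add(v0, mul(q, v1), s=-1)
        return r1, u1, v1

In Gao-style decoding this routine would be applied to the locator polynomial and the polynomial interpolating the received values, with a stopping threshold of roughly (n + k)/2; here it is meant only to illustrate the early-termination idea that Reduce generalizes.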

    IV. BINARY INTERPOLATION ALGORITHM

This section introduces a novel interpolation algorithm. The main idea of this algorithm is to construct a sequence of ideals and modules of polynomials having roots with increasing multiplicity. The proposed method can be considered as an application of the well-known binary exponentiation algorithm to zero-dimensional ideals.

    A. Interpolation via Ideal Multiplication

The main drawback of the method given by (7) is that one has to manipulate polynomials having large common divisors. For example, . Furthermore, polynomial exponentiation is used in (5). The method proposed in this paper avoids both reducing the polynomials with large GCD and computing large powers of polynomials. This is achieved by first constructing Gröbner bases for small root multiplicities, and using them to obtain bases for larger root multiplicities.

Lemma 5: Let . Then .

Proof: .

This lemma implies that . One can avoid repeated calculations and reduce the overall number of calls to the algorithm by using the binary exponentiation method [17]. Namely, one can compute , where , , and .

The key problem addressed in this paper is how to construct efficiently a Gröbner basis of the product of ideals and .
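The square-and-multiply schedule itself is standard; the short Python sketch below shows only the call pattern (an integer stand-in replaces the ideal product, so the arithmetic is purely symbolic). In the paper the multiplication step would be the randomized procedure of Fig. 3; everything else here is an assumption of the sketch.

    # Schematic only: the square-and-multiply schedule that the binary
    # interpolation algorithm applies to ideals. `mul` stands for the ideal
    # product; the integer demo below merely traces the call pattern.

    def binary_power(base, m, mul):
        """Compute base^m using O(log m) calls to mul (square and multiply)."""
        assert m >= 1
        result = None
        for bit in bin(m)[2:]:                 # most significant bit first
            if result is not None:
                result = mul(result, result)   # squaring step
            if bit == '1':
                result = base if result is None else mul(result, base)
        return result

    # For root multiplicity m = 5 this computes ((I^2)^2)*I: two squarings
    # and one extra product instead of four products.
    calls = []
    def traced_mul(a, b):
        calls.append((a, b))
        return a + b                           # stand-in: add exponents

    print(binary_power(1, 5, traced_mul))      # prints 5
    print(len(calls))                          # prints 3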

The standard way is given by

(8)

i.e., to compute pairwise products of all basis elements of the ideals being multiplied. This requires bivariate polynomial multiplications, and the basis of obtained in such a way is extremely redundant. Furthermore, the Buchberger algorithm must be used in order to obtain a Gröbner basis of .

To the best of the author's knowledge, the problem of efficient ideal multiplication has not been considered in the literature, except in [9], where multiplication of zero-dimensional coprime ideals was reduced to linear convolution. However, the ideals considered in this paper are not coprime.

This problem can again be solved by constructing, at each step of the binary exponentiation algorithm, a basis of the module of polynomials with limited -weighted degree.

Lemma 6: Consider the polynomials , such that , and

(9)


Then , and the polynomials constitute a Gröbner basis of this ideal.

Proof: Observe that the polynomials represent a Gröbner basis of some module by Lemma 3. Obviously, . Suppose that the polynomials do not constitute a Gröbner basis of . That is, there exists , where the terms of are not divisible by , i.e., and . Observe that . This means that the polynomials , do not represent a Gröbner basis of the module . The true Gröbner basis of this module should consist of smaller polynomials, i.e., the sum of the -degrees of their leading terms should be less than . But this contradicts Lemma 2. Hence, and constitute a Gröbner basis of .

Observe that there may exist Gröbner bases of not satisfying the constraints of this lemma.

Let be the ideals given by their Gröbner bases satisfying the above lemma. One can construct a Gröbner basis of the product of these ideals as follows. Let be a sequence of distinct pairs of integers such that , and for . Let

(10)

be a basis of some submodule of . By Lemma 3, it is a Gröbner basis of this submodule. It can be seen that . Let

(11)

The algorithm attempts to cancel the leading terms of the provided polynomials, so . As soon as one obtains , is a Gröbner basis of .

Lemma 7: is generated by .

Proof: Consider , such that . Any such polynomial can be represented as

(12)

The inner sum is an element of . Since the polynomials , are a Gröbner basis of and , one can use the multivariate polynomial division algorithm to obtain . Similarly, is in , and the multivariate division algorithm leads to . Hence, . The last term in this expression is zero, since does not contain any monomials with , so .

Fig. 3. Construction of a Gröbner basis of from the Gröbner bases of and .

The lemma states that for any suitable polynomial one can replace the bivariate polynomials in (12) with univariate ones . This implies that the sequence eventually converges to the required module basis. However, the convergence turns out to be quite slow. One may need to compute many bivariate polynomial products and apply the algorithm to them before the constraint (9) is satisfied. In many cases it even appears that . That is, a significant fraction of pairs is useless.

Therefore, we propose to replace the pairwise products in (8) with their random linear combinations

(13)

where are independent random variables uniformly distributed over . Obviously, such polynomials still generate the ideal product if the linear transformation given by is invertible, i.e., if at least polynomials are given. However, it turns out that on average one needs just a few such polynomials to obtain a basis of the ideal product. The reason is that depend on all pairwise products , and a Gröbner basis construction algorithm (e.g., Reduce) can take them into account simultaneously. This will be discussed in more detail in Section IV-B.
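A naive, purely illustrative Python sketch of (13) follows: it forms random GF(p)-linear combinations of the pairwise products explicitly, which is precisely what the construction described in the next paragraph avoids. The dictionary representation of bivariate polynomials and all names are assumptions of this sketch.

    # Naive illustration of (13) only: form random GF(P)-linear combinations
    # of all pairwise products of two generating sets A and B. The paper's
    # efficient construction avoids computing the products explicitly.

    import random

    P = 19

    def bmul(a, b):
        """Product of bivariate polynomials stored as {(i, j): coeff} over GF(P)."""
        r = {}
        for (i1, j1), c1 in a.items():
            for (i2, j2), c2 in b.items():
                m = (i1 + i2, j1 + j2)
                r[m] = (r.get(m, 0) + c1 * c2) % P
        return {m: c for m, c in r.items() if c}

    def random_combinations(A, B, count):
        """Random linear combinations of all products a*b, a in A, b in B.
        They still generate the product ideal whenever the random coefficient
        matrix defines an invertible transformation (so count must be at least
        the number of pairwise products for that to be possible)."""
        products = [bmul(a, b) for a in A for b in B]
        combos = []
        for _ in range(count):
            acc = {}
            for prod in products:
                s = random.randrange(P)
                for m, c in prod.items():
                    acc[m] = (acc.get(m, 0) + s * c) % P
            combos.append({m: c for m, c in acc.items() if c})
        return combos

    # Tiny example: two generating sets of two polynomials each.
    A = [{(1, 0): 1, (0, 0): P - 1}, {(0, 1): 1}]   # x - 1 and y
    B = [{(1, 0): 1, (0, 0): P - 2}, {(0, 1): 1}]   # x - 2 and y
    print(len(random_combinations(A, B, 3)))        # 3 combinations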

However, it is impractical to construct the polynomials explicitly as given by (13), since this requires one first to compute all pairwise products . A more efficient way is to construct a sequence of bases

Reduce

where , and are some random values uniformly distributed over . Furthermore, we propose to construct the initial basis as , where for each is selected so that , and the values are minimized. This reduces the number of iterations needed by the Reduce algorithm. The proposed approach is summarized in Fig. 3.

Fig. 4. Construction of a Gröbner basis for .

Theorem 1: Given Gröbner bases and of ideals and , the result of is a Gröbner basis of , where .

Proof: Observe that the sequence still converges to a basis of , since it is possible to select and so that the linear transformation (13) given by is invertible, provided that sufficiently many polynomials are constructed. By Lemma 4, the Reduce algorithm always produces a Gröbner basis of some module. By Lemma 6, this basis is a Gröbner basis of .

Remark 1: The algorithm of Fig. 3 is not guaranteed to produce a minimal Gröbner basis of . In particular, a few polynomials may have . Such polynomials are redundant and should be eliminated, except the smallest one.

The overall interpolation algorithm is shown in Fig. 4. The -weighted degree lexicographic ordering must be used throughout this algorithm. Observe that in most practical cases the polynomial can be constructed by using a fast inverse discrete Fourier transform. The FFT can also be used in the implementation of polynomial multiplication, which is used extensively by this algorithm.
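As a concrete (though not FFT-based) illustration, the polynomial interpolating the received values, written R(x) here (the symbol is this sketch's, not necessarily the paper's), can be built by plain Lagrange interpolation over GF(p); the inverse DFT mentioned above does the same job faster when the code locators form a suitable multiplicative group.

    # Sketch with assumed names: build R(x) with R(x_i) = y_i over GF(P)
    # by Lagrange interpolation; coefficient list, lowest degree first.

    P = 19

    def lagrange_interpolate(points):
        """Return coefficients of the polynomial R of degree < len(points)
        with R(x_i) = y_i over GF(P)."""
        n = len(points)
        coeffs = [0] * n
        for i, (xi, yi) in enumerate(points):
            basis = [1]          # running product of (x - x_j), j != i
            denom = 1
            for j, (xj, _) in enumerate(points):
                if j == i:
                    continue
                shifted = [0] + basis                        # x * basis
                scaled = [(-xj * c) % P for c in basis] + [0]
                basis = [(a + b) % P for a, b in zip(shifted, scaled)]
                denom = denom * (xi - xj) % P
            scale = yi * pow(denom, P - 2, P) % P
            for k, c in enumerate(basis):
                coeffs[k] = (coeffs[k] + scale * c) % P
        return coeffs

    def ev(coeffs, x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

    pts = [(1, 5), (2, 7), (3, 11), (4, 2)]
    R = lagrange_interpolate(pts)
    assert all(ev(R, x) == y for x, y in pts)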

Theorem 2: The Interpolate algorithm constructs a Gröbner basis of with respect to a given term ordering.

Proof: The objective of the loop is to construct a Gröbner basis of , such that it is also a Gröbner basis of . Any Gröbner basis of a zero-dimensional ideal must contain a polynomial for some [14, Th. 6.54], so this loop terminates eventually, and is indeed a Gröbner basis of . Let . By induction, the input vectors to at line 14 are two copies of a Gröbner basis of . By Theorem 1 its output is a Gröbner basis of . A similar argument applies to line 17. Hence, at the end of each iteration of the loop one obtains a Gröbner basis of . Observe also that at the end of each iteration .

The interpolation polynomial needed by the Guruswami-Sudan algorithm can be found as the smallest element of the basis produced by the algorithm.

    B. Complexity Analysis

Let us first estimate the convergence speed of the algorithm. Recall that this algorithm constructs a Gröbner basis of given Gröbner bases of and (in fact, and ). For the sake of simplicity we will estimate the probability of being generated by polynomials given by (13), such that for , and the leading terms of the summands do not cancel. The difference in the behavior of the actual algorithm with respect to this impractical ideal multiplication method will be discussed below.

The polynomials can be represented as a polynomial matrix . If they indeed generate , then the polynomial matrix corresponding to the Gröbner basis of this module satisfies

where is the polynomial matrix corresponding to , and is some transformation matrix. On the other hand, , i.e., , where the elements of the matrix can be obtained by the multivariate division algorithm. Hence

(14)

Since , the polynomials are linearly independent over , and is invertible over the field of rational functions, so it can be cancelled in (14). Therefore, the problem reduces to estimating the probability of existence of a polynomial matrix satisfying

(15)

This is a system of linear equations in terms of . Observe that is a full-rank matrix over . A polynomial solution exists if and only if the scalar matrix equations

(16)

are solvable for any , i.e., the matrices have rank , where is the algebraic closure of . It is sufficient to consider only such that some fixed submatrix of loses rank for , i.e., the roots of . Such roots are called eigenvalues of the polynomial matrix [18].

Let be a matrix consisting of the first columns of . This matrix satisfies

where the polynomial matrix corresponds to . For each eigenvalue of one can identify linearly independent left eigenvectors, i.e., vectors , such that . The geometric multiplicity of eigenvalue is upper-bounded by its algebraic multiplicity , which is defined as the multiplicity of the root of . Equation (16) is solvable


if for each for at least one , i.e., if is a full-rank matrix. The total number of such pairs is upper-bounded by . The polynomials represent a Gröbner basis of some submodule of , and could be obtained from those given by by executing lines 6-10 of IIA for a few additional points and/or pairs . Hence, by Lemma 1, and .

Let the polynomials be a Gröbner basis with respect to the -weighted degree lexicographic ordering of and for some and . Then , where¹ for some , , and . Hence , and

Since the polynomials and represent Gröbner bases of and , , , where and . Then and . Hence

(17)

Let us assume that the elements of are univariate polynomials with independent coefficients uniformly distributed over . Then are random variables uniformly distributed over , where is the smallest algebraic extension of such that , and is the extension degree. Then the probability of being nonzero for at least one is given by .

Consider the factorization , where , and are some monic irreducible polynomials. Each eigenvalue is a root of at least one of , so , and . Let . Observe that has distinct roots in . Assuming the worst case, where the geometric and algebraic multiplicities of the eigenvalues are the same, one obtains the following expression for the probability of (16) being solvable for all eigenvalues :

(18)

¹There is no formal proof for this approximation. However, one can argue that the polynomials in a Gröbner basis of should have approximately the same -weighted degree, since the IIA, which can be used to construct them, always increases the degree of the smallest polynomial. Numerical experiments confirm this claim. Alternatively, if the received sequence is not a codeword, a Gröbner basis of the zero-dimensional ideal must contain polynomials with -weighted degree both below and above the value given by (2), and the approximate expression for derived below coincides with that one.

Assuming that is a polynomial with independent coefficients uniformly distributed over , one can estimate the probability of obtaining a particular factorization of as [19], where is the number of monic irreducible polynomials of degree . Hence, the probability of (15) being solvable is given by

where the summation is performed over all partitions of . Exact evaluation of this expression does not seem to be feasible. However, it can be seen that the value of (18) is dominated by the first factor, and it is known that a random polynomial over a finite field has on average one root in it [20]. Hence, the probability of (15) being unsolvable decreases exponentially fast with . Thus, for sufficiently large one can assume that a Gröbner basis of can be derived from polynomials given by (13).

The above analysis was performed for an impractical version of the proposed randomized ideal multiplication method. It turns out that the polynomial matrix corresponding to the actual polynomials constructed on line 2 of the algorithm usually has more than one eigenvalue in with high algebraic multiplicity. But the geometric multiplicity of the corresponding eigenvectors appears to be much less than the algebraic one (although still greater than 1), so the algorithm still converges quickly.

Let us now estimate the number of iterations of the Reduce algorithm called on line 8 of . To do this, observe that the objective of Reduce is to decrease the -weighted degrees of the polynomials constructed on lines 2 and 7 of from approximately to approximately , i.e., to cancel the monomials with too high a -weighted degree. For each polynomial, approximately monomials should be eliminated. The total number of monomials to be eliminated can be estimated as² . At least one monomial is cancelled during each iteration of Reduce. Taking into account (17), one obtains that the number of iterations in Reduce is given by , where . The algorithm operates with polynomials containing terms, i.e., its complexity is given by .

It can be seen from (1) that the number of polynomials in the basis of is . The degrees of these polynomials can be estimated as and . Computing a product of two such polynomials requires operations. The analysis given above suggests that the number of iterations performed by Reduce is . Therefore, the complexity of the polynomial multiplications needed to construct the Gröbner basis of from the basis of is . Hence, one call to at line 11 of the interpolation algorithm requires operations for some positive and .

Obviously, the complexity of the algorithm is dominated by the FOR loop. The number of calls to in this loop is given by

(19)

The second term in this expression corresponds to line 17 of the algorithm. The complexity of the whole algorithm is dominated by the last iteration, so the overall complexity is given by . Observe that this is better than the complexity of IIA.

²Observe that the objective of the minimization at line 2 of is to decrease the number of monomials to be cancelled, i.e., to decrease the number of iterations in Reduce.

C. Re-encoding

The proposed binary interpolation algorithm can be integrated with the re-encoding transformation [2], [3], [21]. As was shown in Section III-C, . Let . Dividing by , one obtains where and . Substituting and dividing³ all polynomials in by , one obtains the module shown in the equation at the bottom of the page, which is generated by and , where . There is a one-to-one correspondence between the polynomials in and , and the smallest polynomial with respect to the -weighted degree lexicographic ordering in corresponds to the smallest polynomial with respect to the -weighted degree lexicographic ordering in . If the leading term of a polynomial in is , then the leading term of the corresponding polynomial in is . This transformation essentially reduces the number of interpolation points. For high-rate codes this significantly decreases the number of terms in the polynomials, thus reducing the overall algorithm complexity.
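The effect of re-encoding can be illustrated by the following simplified Python sketch (mine, not the paper's algorithm): a codeword agreeing with the received word on k positions is subtracted, so that k interpolation points become (x_i, 0) and the corresponding part of the interpolation problem collapses.

    # Simplified illustration of the re-encoding idea (assumed setup, not the
    # paper's transformation): subtract a codeword that matches the received
    # word on the first K positions.

    P = 19
    K = 4

    def lagrange_eval(points, x):
        """Evaluate at x the unique degree < len(points) polynomial through points."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if j != i:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P
        return total

    def reencode(locators, received):
        """Return (shifted word, re-encoded codeword) so that the shifted
        word is zero on the first K positions."""
        anchor = list(zip(locators[:K], received[:K]))
        codeword = [lagrange_eval(anchor, x) for x in locators]
        shifted = [(r - c) % P for r, c in zip(received, codeword)]
        return shifted, codeword

    locators = [1, 2, 3, 4, 5, 6, 7]
    received = [5, 7, 11, 2, 9, 3, 8]
    shifted, cw = reencode(locators, received)
    assert shifted[:K] == [0] * K
    # A decoder run on `shifted` recovers the message up to addition of `cw`.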

The Gröbner basis of can be constructed by a modified version of the algorithm, as shown in Fig. 5. The -weighted degree lexicographic ordering must be used throughout this algorithm. The algorithm first constructs a Gröbner basis of . The corresponding loop in the original algorithm terminates as soon as a polynomial with leading term is discovered. After the change of variables and term ordering, the termination condition transforms to . Then the algorithm proceeds with increasing root multiplicity in the same way as the original algorithm. However, the termination threshold of the algorithm has to be changed. For root multiplicity , after the change of variables the basis polynomials should have leading terms , such that , where are the leading terms of the corresponding polynomials which would be obtained without re-encoding. Therefore, the termination threshold in the modified algorithm should be set to , where is the termination threshold derived from (9). If the sizes of the bases supplied to are and , then . Hence, the termination threshold for is given by .

Fig. 5. Construction of a Gröbner basis for .

³This operation prevents one from using the concept of an ideal here.

    V. NUMERIC RESULTS

This section presents simulation results illustrating the performance of the proposed algorithm. The Karatsuba fast univariate polynomial multiplication algorithm [17] was used at steps 2 and 7 of the algorithm.
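For reference, a compact Python sketch of Karatsuba multiplication for univariate polynomials over a prime field is given below; it only illustrates the subroutine cited from [17], and its coefficient-list representation and cutoff are assumptions of this sketch.

    # Reference sketch of Karatsuba multiplication over GF(P); coefficient
    # lists, lowest degree first. Result may carry trailing zeros.

    P = 19

    def add(a, b):
        r = [0] * max(len(a), len(b))
        for i, c in enumerate(a): r[i] = (r[i] + c) % P
        for i, c in enumerate(b): r[i] = (r[i] + c) % P
        return r

    def sub(a, b):
        r = [0] * max(len(a), len(b))
        for i, c in enumerate(a): r[i] = (r[i] + c) % P
        for i, c in enumerate(b): r[i] = (r[i] - c) % P
        return r

    def karatsuba(a, b):
        """Multiply polynomials a and b over GF(P) in O(n^1.585) operations."""
        n = max(len(a), len(b), 1)
        if n <= 8:                      # small cases: schoolbook multiplication
            r = [0] * (2 * n - 1)
            for i, ca in enumerate(a):
                for j, cb in enumerate(b):
                    r[i + j] = (r[i + j] + ca * cb) % P
            return r
        a = a + [0] * (n - len(a))
        b = b + [0] * (n - len(b))
        h = n // 2
        a0, a1 = a[:h], a[h:]
        b0, b1 = b[:h], b[h:]
        z0 = karatsuba(a0, b0)
        z2 = karatsuba(a1, b1)
        z1 = sub(sub(karatsuba(add(a0, a1), add(b0, b1)), z0), z2)
        r = [0] * (2 * n - 1)
        for i, c in enumerate(z0): r[i] = (r[i] + c) % P
        for i, c in enumerate(z1): r[i + h] = (r[i + h] + c) % P
        for i, c in enumerate(z2): r[i + 2 * h] = (r[i + 2 * h] + c) % P
        return r

    print(karatsuba([1, 2, 3], [4, 5]))  # [4, 13, 3, 15, 0] over GF(19)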

Fig. 6. Probability distribution of the number of iterations in the algorithm.

Fig. 6 illustrates the probability distribution of the number of iterations performed by the algorithm while constructing Gröbner bases of for different values of . A code over and a code over were considered. The first and third plots were obtained with the algorithm presented in Fig. 3. The second plot was obtained by replacing the leading term minimization on line 2 of the algorithm with random selection of from the set . It can be seen that in this case the algorithm indeed converges exponentially fast. In the first and third cases the convergence is still exponential for , although a bit slower than in the case of random polynomial selection. However, for the case of , up to 8 iterations may be needed with high probability. The reason is that constructing a Gröbner basis of requires processing two different collections of polynomials and with . There is a high probability that the smallest polynomial is used for almost all on line 2 of the algorithm. In this case the coefficients of are no longer independent. In particular, this causes the most probable partitions in (18) to be those with large . This results in high algebraic and geometric multiplicity of the eigenvalues in . Hence, more correction polynomials (i.e., iterations of the loop) are needed to eliminate them. This effect is partially compensated by the reduction of the total number of iterations in the algorithm. Observe also that for the algorithm converges faster for the case of compared to , as predicted by (18).

Fig. 7. Performance comparison of interpolation algorithms.

Fig. 7 presents the average list decoding time obtained with IIA, the Lee-O'Sullivan algorithm, the proposed binary interpolation algorithm, IIA with the re-encoding transformation, and the binary interpolation algorithm with re-encoding. and Reed-Solomon codes were considered. It can be seen that the proposed algorithm provides up to 12 times lower complexity than IIA for the case of the code, and up to 15 times lower complexity for the case of the code. Observe that the complexity of the proposed method increases more slowly than in the case of IIA, thus confirming the conclusion of Section IV-B. The complexity of the Lee-O'Sullivan algorithm turns out to be essentially the same as that of IIA with re-encoding for the rate-1/2 code, and exceeds it considerably for the high-rate code. Observe also that in some cases increasing the root multiplicity reduces the complexity of the proposed interpolation method. This represents the impact of the second term in (19), i.e., line 17 of the proposed algorithm.


Observe also that the proposed algorithm outperforms the re-encoding method in the case of the low-rate code. For the high-rate code the re-encoding method turns out to be better. However, employing re-encoding jointly with the proposed method further reduces the complexity. The overall gain with respect to IIA is up to 22 times for the case of the code, and up to 157 times for the case of the code.

    VI. CONCLUSION

In this paper, a novel algorithm was proposed for the interpolation step of the Guruswami-Sudan list decoding algorithm. It is based on the binary exponentiation algorithm, and can be considered as an extension of the Lee-O'Sullivan method. The proposed algorithm was shown to achieve significant asymptotical and practical gain compared to the case of the iterative interpolation algorithm. Furthermore, it can be integrated with the re-encoding transformation, thus achieving additional complexity reduction.

An important advantage of the new approach is that its first step (the first iteration of the WHILE loop in the algorithm) coincides with the Gao decoding algorithm, which is able to correct up to errors. Since the most likely error patterns can be corrected with this algorithm, one should invoke the remaining computationally expensive part of the proposed method only if the Gao algorithm does not produce a valid codeword. It is an open problem whether it is possible to terminate the interpolation algorithm as soon as it produces a bivariate polynomial containing all the solutions of a particular instance of the decoding problem, and avoid construction of a basis for the worst case given by (1)-(3). Another interesting problem is to generalize the proposed algorithm to the rational curve fitting problem considered in [22].

For the sake of simplicity, the proposed method was presented for the case of all interpolation points having the same multiplicity. However, it can be extended to the case of weighted interpolation, thus allowing efficient implementation of soft-decision decoding.

    ACKNOWLEDGMENT

The author thanks Dr. V. R. Sidorenko for many stimulating discussions. The author is indebted to the anonymous reviewers, whose comments have greatly improved the quality of the paper.

REFERENCES

[1] V. Guruswami and M. Sudan, "Improved decoding of Reed-Solomon and algebraic-geometric codes," IEEE Trans. Inf. Theory, vol. 45, pp. 1757-1767, Sep. 1999.

[2] R. Koetter and A. Vardy, "A complexity reducing transformation in algebraic list decoding of Reed-Solomon codes," in Proc. ITW2003, Mar. 2003, pp. 10-13.

[3] R. Koetter, J. Ma, A. Vardy, and A. Ahmed, "Efficient interpolation and factorization in algebraic soft-decision decoding of Reed-Solomon codes," in Proc. IEEE Int. Symp. Inf. Theory, Yokohama, Japan, Jul. 2003, p. 365.

[4] R. R. Nielsen and T. Hoholdt, "Decoding Reed-Solomon codes beyond half the minimum distance," in Proc. Int. Conf. Coding Theory Cryptogr., Mexico, 1998, pp. 221-236.

[5] T. Sauer, "Polynomial interpolation of minimal degree and Gröbner bases," in Gröbner Bases and Applications (Proc. Int. Conf. 33 Years of Gröbner Bases), B. Buchberger and F. Winkler, Eds., London Mathematical Society Lecture Notes, vol. 251, 1998, pp. 483-494.

[6] R. Koetter, "Fast generalized minimum-distance decoding of algebraic-geometry and Reed-Solomon codes," IEEE Trans. Inf. Theory, vol. 42, May 1996.

[7] H. O'Keefe and P. Fitzpatrick, "Gröbner basis solutions of constrained interpolation problems," Linear Algebra Applicat., vol. 351, pp. 533-551, 2002.

[8] J. Ma, P. Trifonov, and A. Vardy, "Divide-and-conquer interpolation for list decoding of Reed-Solomon codes," in Proc. IEEE Int. Symp. Inf. Theory, Chicago, IL, Jul. 2004, p. 387.

[9] P. Trifonov, "Interpolation in list decoding of Reed-Solomon codes," Probl. Inf. Transm., vol. 43, no. 3, pp. 190-198, 2007.

[10] K. Lee and M. E. O'Sullivan, "An interpolation algorithm using Gröbner bases for soft-decision decoding of Reed-Solomon codes," in Proc. IEEE Int. Symp. Inf. Theory, 2006, pp. 2032-2036.

[11] M. Alekhnovich, "Linear Diophantine equations over polynomials and soft decoding of Reed-Solomon codes," IEEE Trans. Inf. Theory, vol. 51, pp. 2257-2265, Jul. 2005.

[12] K. Lee and M. E. O'Sullivan, "List decoding of Reed-Solomon codes from a Gröbner basis perspective," J. Symbol. Comput., vol. 43, no. 9, Sep. 2008.

[13] P. Trifonov, "On the relationship of some Reed-Solomon decoding algorithms," in Proc. Coding Theory Days in Saint-Petersburg Workshop, 2008, pp. 83-87.

[14] T. Becker and V. Weispfenning, Gröbner Bases. A Computational Approach to Commutative Algebra. New York: Springer, 1993.

[15] S. Gao, "A new algorithm for decoding Reed-Solomon codes," in Communications, Information and Network Security, V. Bhargava, H. V. Poor, V. Tarokh, and S. Yoon, Eds. New York: Kluwer, 2003, pp. 55-68.

[16] S. Fedorenko, "A simple algorithm for decoding Reed-Solomon codes and its relation to the Welch-Berlekamp algorithm," IEEE Trans. Inf. Theory, vol. 51, pp. 1196-1198, Mar. 2005.

[17] D. E. Knuth, The Art of Computer Programming. Reading, MA: Addison-Wesley, 1973, vol. 2.

[18] I. Gohberg, P. Lancaster, and L. Rodman, Matrix Polynomials. Singapore: SIAM, 2009.

[19] J. D. Dixon and D. Panario, "The degree of the splitting field of a random polynomial over a finite field," Electron. J. Combinat., vol. 11, no. 1, 2004.

[20] V. Leontev, "Roots of random polynomials over a finite field," Math. Notes, vol. 80, no. 2, pp. 300-304, 2006.

[21] J. Ma and A. Vardy, "A complexity reducing transformation for the Lee-O'Sullivan interpolation algorithm," in Proc. IEEE Int. Symp. Inf. Theory, 2007, pp. 1986-1990.

[22] Y. Wu, "New list decoding algorithms for Reed-Solomon and BCH codes," IEEE Trans. Inf. Theory, vol. 54, no. 8, pp. 3611-3630, Aug. 2008.

Peter V. Trifonov (M'05) was born in St. Petersburg, Russia, in 1980. He received the M.Sc. degree in computer science in 2003 and the Ph.D. (Candidate of Science) degree from St. Petersburg State Polytechnic University in 2005.

Currently, he is an Associate Professor at the Distributed Computing and Networking Department of St. Petersburg State Polytechnic University. His research interests include coding theory and its applications in telecommunications and other areas. Since September 2008 he has been serving as a secretary of the IEEE Russia Joint Sections Information Theory Society Chapter.