JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 31, No. 3, JULY 1980

    A Trace Minimization Problem with Applications in Joint Estimation and Control under Nonclassical Information

T. BAŞAR

    Communicated by Y. C. Ho

Abstract. Let A and Σ be positive-definite matrices of dimensions n × n and m × m, respectively. Then, this paper considers the problem of minimizing Tr[A(I + C'C)^{-1}] over all m × n real matrices C and under the constraint Tr[ΣCC'] ≤ 1. The solution is obtained rigorously and without a priori employing the Lagrange multipliers technique. An application of this result to a decentralized team problem which involves joint estimation and control and with signaling strategies is also discussed.

Key Words. Nonlinear programming, quadratic inequality constraints, team problems, signaling strategies, nonclassical information.

    1. Introduction

    In this paper, we report the solution of the following trace minimization problem [Problem (A)]:

(A)  min_{C ∈ Ω} J(C),  (1-1)

where

J(C) = Tr[A(I_n + C'C)^{-1}],  (1-2)

Ω ≜ {C ∈ M_mn : Tr[C'ΣC] ≤ 1},  (1-3)

M_mn = space of m × n real matrices,  (1-4)

A ∈ M_nn,  Σ ∈ M_mm,  A > 0,  Σ > 0,  (1-5)

I_n = n × n identity matrix.  (1-6)

    Senior Research Scientist, Applied Mathematics Division, Marmara Scientific and Industrial Research Institute, Gebze, Kocaeli, Turkey.



    Let H be a unitary matrix, such that

Π'ΣΠ = Σ̄ = diag(σ_1, ..., σ_m),  σ_i ≤ σ_{i+1},  (2-1)

    and let

C̄ = Π'C.  (2-2)

    Then, Problem (A) becomes isomorphic to Problem (B) formulated below; i.e., the solution of one problem can be obtained from the solution of the other by a unitary transformation [Problem (B)]:

(B)  min_{C̄ ∈ Ω̄} J(C̄),  (3-1)

where J is defined by (1-2) and Ω̄ by

Ω̄ ≜ {C̄ ∈ M_mn : Tr[C̄'Σ̄C̄] ≤ 1},  (3-2)

with Σ̄ being diagonal as given by (2-1).

It should be noted that Problem (B) admits a global minimizing solution, since J(C̄) is continuous and Ω̄ is compact. In the sequel, we seek to obtain such a solution, which is given in Theorem 2.2 after a series of lemmas. Such an optimization problem arises within the context of a nonclassical joint estimation and control problem. This intimate relation is discussed in Section 3, where we also provide the solution of the related estimation and control problem. Another application of the result can be seen in the encoder-decoder design for multichannel communication systems.

2. Solution of Problem (B)

We first make use of the following lemma to convert Problem (B) into an equivalent minimization problem with equality constraints.

Lemma 2.1. The constraint set Ω̄ of Problem (B) can be replaced by

Ω_p ≜ {C ∈ M_mn : Tr[C'Σ̄C] = 1},  (4)

    without any loss of generality.

Proof. Assume the contrary; i.e., there exists a solution C_0 to Problem (B), such that

Tr[C_0'Σ̄C_0] = α < 1.

It should be apparent that α = 0 is not possible, since it corresponds to


the maximizing solution C = 0. Now, let ε be chosen such that

0 < ε ≤ (1/√α) − 1,

and define

C̃ = (1 + ε)C_0.

Note that C̃ is in Ω̄, since

Tr[C̃'Σ̄C̃] = (1 + ε)²α ≤ 1.

Moreover,

Δ ≜ C̃'C̃ − C_0'C_0 = [(1 + ε)² − 1]C_0'C_0 ≥ 0,

and Δ has at least one positive eigenvalue, since C_0 is at least of rank one (α being positive). This implies that the matrix

B ≜ (I_n + C̃'C̃)^{-1} − (I_n + C_0'C_0)^{-1} ≤ 0

has at least one negative eigenvalue. Thus,

J(C̃) − J(C_0) = Tr[AB] < 0,

since A > 0, which is a contradiction. □

We thus now seek the solution of the following problem [Problem (C)]:

(C)  min_{C ∈ Ω_p} J(C),  (5)

where Ω_p is defined by (4).

Every solution of this minimization problem will have the following property.

Lemma 2.2. Let C_0 ∈ Ω_p be a solution. Then, A and C_0'C_0 commute.²

Proof. Let ℋ denote the set of all n × n real, skew-symmetric matrices. Then, for every H ∈ ℋ and t ∈ R, C = C_0 exp(tH) is also in Ω_p. This implies that, for all H ∈ ℋ,

J(C_0) = min_t Tr[A(I_n + exp(−tH)C_0'C_0 exp(tH))^{-1}]
       = min_t Tr[exp(tH)A exp(−tH)(I_n + C_0'C_0)^{-1}],  (6)

    where the last line follows, since exp(tH) is unitary. Using the polynomial expansion of exp(tH), we have

exp(tH) = I_n + tH + O(t²),

    2 The proof given in the sequel is in the same spirit as that of Lemma 3 of Ref. 1.


where O(t²) denotes an n × n matrix with the property that

lim_{t→0} [O(t²)/t] = 0_n,

where 0_n is the n × n zero matrix. Using this expansion in (6) and defining

K_0 ≜ (I_n + C_0'C_0)^{-1},

    we can rewrite Eq. (6) as an inequality:

Tr[AK_0] ≤ Tr[[I_n + tH + O(t²)]A[I_n − tH + O(t²)]K_0],

which further reduces to

0 ≤ t Tr[(HA − AH)K_0] + Tr[Õ(t²)],  (7-1)

where Õ(t²) collects the higher-order terms. Dividing (7-1) by t > 0, letting t → 0, and rearranging yields

0 ≤ Tr[H[A, K_0]],  for all H ∈ ℋ,  (7-2)

    where

[A, K_0] ≜ AK_0 − K_0A

is the commutator bracket of A and K_0. Since we could also have divided (7-1) by t < 0, the direction of the inequality in (7-2) could have been reversed. Thus we have, as a necessary condition,

0 = Tr[H[A, K_0]],  for all H ∈ ℋ,

    which further implies that

[A, K_0] = 0,  (8)

since one can choose

H = [A, K_0] ∈ ℋ,

and

Tr[H²] = 0,  for any H ∈ ℋ,

implies that H = 0_n. Now, since (8) is equivalent to

[K_0^{-1}, A] = [I_n + C_0'C_0, A] = 0,

and since I_n commutes with every n × n matrix, the result follows. □

Corollary 2.1. Let C_0 ∈ Ω_p be a solution. Then, there exists a unitary matrix P, such that

P'AP = Λ = diag(λ_1, ..., λ_n),  λ_i ≥ λ_{i+1},  (9)


and P'C_0'C_0P is diagonal. Furthermore, such a unitary transformation does not alter the equality constraint that characterizes Ω_p; in other words, C_0P ∈ Ω_p.

Proof. This result is a direct consequence of Lemma 2.2 and the well-known fact that commuting matrices can be simultaneously diagonalized by a unitary transformation. Furthermore,

Tr[P'C_0'Σ̄C_0P] = Tr[PP'C_0'Σ̄C_0] = Tr[C_0'Σ̄C_0],

since P is unitary. □

It now follows directly from this corollary that Problem (C), and consequently Problem (B), becomes equivalent to the following problem [Problem (D)]:

(D)  min_{G ∈ Ω_0} J(G),  (10-1)

where

J(G) = Tr[Λ(I_n + G'G)^{-1}],  (10-2)

Ω_0 ≜ {G ∈ Ω_p : G'G is diagonal}.  (10-3)

Remark 2.1. It should be noted that, if G ∈ Ω_0 is a solution of Problem (D), then GP' is a solution of Problem (C), where P is defined by (9). Conversely, if C_0 ∈ Ω_p is a solution of Problem (C), then C_0P ∈ Ω_0 is a solution of Problem (D).

    Let us now introduce the decomposition

G = [g_1, g_2, ..., g_n],  g_i ∈ R^m,  (11)

g_i a column vector, for any G ∈ M_mn. We now note that

G ∈ Ω_0 ⟹ g_i'g_j = 0,  for all i, j, i ≠ j,  (12)

and thus

Tr[G'Σ̄G] = Tr[Σ̄GG'] = ∑_{i=1}^n Tr[Σ̄g_i g_i'] = ∑_{i=1}^n g_i'Σ̄g_i.
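The trace identity in the last display holds for an arbitrary G ∈ M_mn, not only for G ∈ Ω_0. A quick numerical check (a sketch; the dimensions and entries are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
sigma = np.sort(rng.uniform(0.5, 2.0, m))     # diagonal of Sigma-bar, nondecreasing as in (2-1)
Sigma_bar = np.diag(sigma)
G = rng.standard_normal((m, n))               # columns g_1, ..., g_n in R^m

lhs = np.trace(G.T @ Sigma_bar @ G)
mid = np.trace(Sigma_bar @ G @ G.T)
rhs = sum(G[:, i] @ Sigma_bar @ G[:, i] for i in range(n))
assert np.isclose(lhs, mid) and np.isclose(lhs, rhs)
```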

We therefore observe that Problem (D) can be reformulated as follows [Problem (E)]:

(E)  min_{𝒢} J({g_i}),  (13-1)

subject to

∑_{i=1}^n g_i'Σ̄g_i = 1,  (13-2)


    where

J({g_i}) = ∑_{i=1}^n λ_i/(1 + g_i'g_i),  (13-3)

and 𝒢 is the set of orthogonal n-tuples {g_1, ..., g_n} in R^m. It should be noted that the number of such nonzero vectors is at most min(n, m).
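That J collapses to the sum (13-3) whenever G'G is diagonal can be verified directly. A sketch (dimensions, eigenvalues, and column scales are arbitrary; the orthogonal columns are built from a QR factorization):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
lam = np.array([5.0, 4.0, 3.0, 2.0, 1.0])          # eigenvalues of Lambda, nonincreasing
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))   # orthonormal columns
scales = np.array([2.0, 1.5, 0.5])
G = np.zeros((m, n))
G[:, :m] = Q * scales                              # orthogonal columns; at most min(n, m) nonzero
assert np.allclose(G.T @ G, np.diag(np.r_[scales**2, 0, 0]))

lhs = np.trace(np.diag(lam) @ np.linalg.inv(np.eye(n) + G.T @ G))
rhs = sum(lam[i] / (1.0 + G[:, i] @ G[:, i]) for i in range(n))
assert np.isclose(lhs, rhs)
```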

    Using this new formulation for Problem (D), we can now prove the following result.

Lemma 2.3. Let G^0 ∈ Ω_0 be a solution of Problem (D). Then, g_i^0 is either an eigenvector of Σ̄ or a zero vector, for i = 1, ..., n.

Proof. Since Σ̄ is diagonal with positive eigenvalues, the constraint equation (13-2) describes a hyperellipsoid, and all points on that surface are regular. Letting

I ≜ {i : g_i^0 ≠ 0},

    we note that the constraints

g_i'g_j = 0,  i ≠ j,  i, j ∈ I,  (14-1)

are also regular. Thus, Lagrange multipliers can be introduced to obtain a necessary condition for any extremizing solution of Problem (E) to satisfy. Let ξ denote the Lagrange multiplier of (13-2), and let ξ_ij be the Lagrange multiplier of (14-1). Then, a first-order necessary condition can be obtained by setting the gradient of

J + ξ ∑_{i=1}^n g_i'Σ̄g_i + ∑_{i,j∈I, i≠j} ξ_ij g_i'g_j,

with respect to g_i, i = 1, ..., n, equal to zero. This yields the equation

{ξΣ̄ − [λ_i/(1 + g_i^0'g_i^0)²] I_m} 2g_i^0 + ∑_{j∈I, j≠i} ξ_ij g_j^0 = 0,  i = 1, ..., n.  (14-2)

Since {g_j^0, j ∈ I} are linearly independent, it follows from (14-2) that

ξ_ij = 0,  i, j ∈ I,  i ≠ j,

and that g_i^0 is an eigenvector of Σ̄. □

It therefore follows from Lemma 2.3 that each g_i^0 should have at most one nonzero element. But the location of those nonzero elements is not yet determined. The following lemma does that.


Lemma 2.4. Let G^0 ∈ Ω_0 be a solution of Problem (D), with the eigenvalues of Σ̄ and Λ ordered as in (2-1) and (9), respectively. Then, with a possible exception of the ith location, all other elements of g_i^0 are zero, for i = 1, ..., n. Note that, if n > m, this implies that g_i^0 ≡ 0, for i > m. Furthermore, for all i such that λ_i > λ_{i+1}, we have

g_i^0'g_i^0 ≥ g_{i+1}^0'g_{i+1}^0.  (15)

Proof. We first prove (15). Let us first introduce the notation

γ_i ≜ ∏_{j=1, j≠i}^n (1 + g_j'g_j).  (16-1)

Then, (13-3) can be written as

J({γ_i}) = ∑_{i=1}^n λ_i γ_i / ∏_{j=1}^n (1 + g_j'g_j).  (16-2)

Now, for a solution G^0, let us assume that (15) does not hold true for i = k, i.e.,

g_k^0'g_k^0 < g_{k+1}^0'g_{k+1}^0  and  λ_k > λ_{k+1}.

Using (16-1), this implies that

γ_{k+1}^0 < γ_k^0.  (17-1)

Since the denominator of (16-2) is independent of the permutation of the indices {i}, we can rewrite (16-2) as

J({γ_i^0}) = ∑_{i≠k,k+1} λ_i γ_i^0/[(1 + g_1^0'g_1^0)γ_1^0] + [1/((1 + g_1^0'g_1^0)γ_1^0)](λ_k γ_k^0 + λ_{k+1} γ_{k+1}^0).

However, if we define G̃ as

g̃_i = g_i^0,  i ≠ k, k+1,  (17-2)

g̃_k = g_{k+1}^0,  (17-3)

g̃_{k+1} = g_k^0,  (17-4)

it is still in Ω_0, and thus it is an admissible candidate for a solution. Now,

J({γ^0}) − J({γ̃}) = [1/((1 + g_1^0'g_1^0)γ_1^0)](λ_k − λ_{k+1})(γ_k^0 − γ_{k+1}^0),

and, since

γ_{k+1}^0 < γ_k^0


from (16-1) and (17), and since λ_k > λ_{k+1} by hypothesis, we finally obtain

J({γ^0}) − J({γ̃}) > 0,

which is a contradiction to the optimality of G^0.

To prove the first statement of Lemma 2.4, we follow a similar approach. Let G^0 ∈ Ω_0 be a solution that does not satisfy the first statement of the lemma. This then implies that there exists a permutation of the indices {i}, to be denoted by β(i), with β(i) ≢ i, such that

∑_{i=1}^n (g_i^0'g_i^0) σ_{β(i)} = 1.  (18)

Let

α_i ≜ g_i^0'g_i^0.

If λ_i > λ_{i+1} for a particular i, then α_i ≥ α_{i+1} from (15). If λ_i = λ_{i+1}, we can still order the indices so that α_i ≥ α_{i+1}, since the objective functional J is invariant under any such permutation. Hence, (18) can be written as

∑_{i=1}^n α_i σ_{β(i)} = 1.

It is well known that

min_{β(i)} ∑_{i=1}^n α_i σ_{β(i)}

is attained by choosing β(i) ≡ i, since that particular choice orders the σ_{β(i)} in nondecreasing magnitude. We thus have

min_{β(i)} ∑_{i=1}^n α_i σ_{β(i)} = δ < 1.

Now, redefining

ᾱ_i = α_i/δ,

we observe that G̃ defined by

G̃ = [g̃_1, ..., g̃_n],  g̃_i = g_i^0/√δ,  i = 1, ..., n,

satisfies the constraint (13-2) and

J({g̃_i}) < J({g_i^0}),


since ᾱ_i > α_i > 0, for at least one i. This then constitutes a contradiction to the optimality of G^0. □
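The rearrangement fact invoked in the proof, namely that min_β ∑ α_i σ_{β(i)} is attained at β(i) ≡ i when {α_i} is nonincreasing and {σ_i} is nondecreasing, can be checked exhaustively on a small instance (a sketch; the numbers are arbitrary):

```python
import itertools
import numpy as np

alpha = np.array([4.0, 2.5, 1.0, 0.5])   # nonincreasing, as ordered in the proof
sigma = np.array([0.3, 0.7, 1.1, 2.0])   # nondecreasing, as in (2-1)

# value of the sum for every permutation beta of the indices
values = {perm: float(np.dot(alpha, sigma[list(perm)]))
          for perm in itertools.permutations(range(4))}
identity = tuple(range(4))
assert min(values.values()) == values[identity]
```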

Corollary 2.2. Let G^0 be a solution of Problem (D). Then, there exists an integer p ≤ min(m, n) and constants α_1 ≥ α_2 ≥ ... ≥ α_p > 0, such that

g_1^0 = (√α_1, 0, ..., 0)',

g_2^0 = (0, √α_2, 0, ..., 0)',

...

g_p^0 = (0, ..., 0, √α_p, 0, ..., 0)',

g_i^0 = (0, ..., 0)',  i > p.


    Theorem 2.1. There exists a unique p* that satisfies (22)-(23), and the solution of Problem (F) is given uniquely by

p^0 = p*,  (24)

α_i = −1 + √(λ_i/σ_i) [∑_{j=1}^{p*} √(σ_jλ_j)]^{-1} [1 + ∑_{j=1}^{p*} σ_j] > 0,  i = 1, ..., p*.  (25)

Proof. Under (22-2), p* given by (22-1) is unique by definition. Furthermore, since

f(p + 1) − f(p) = [√(σ_{p+1}/λ_{p+1}) − √(σ_p/λ_p)] ∑_{i=1}^p √(σ_iλ_i),

and since {σ_i/λ_i} is a nondecreasing sequence, it follows that f(p) is a nondecreasing function of p. This then implies that there exists a unique p* that satisfies (23-1); in other words, under (23-2), p* is the maximum integer that satisfies the inequality f(p) < 1.

To prove (24) and (25), we know a priori that Problem (F) admits a solution for some p ≤ min(n, m), and thus the expression given by (27) should be positive. Furthermore, we note that, if α_p given by (27) is positive for some p ≤ min(n, m), then α_i > 0, i < p;


moreover, α_{i+1} ≤ α_i, since {λ_i/σ_i} is a nonincreasing sequence. This therefore implies that the optimum p^0 is the largest integer p for which α_p given by (27) is positive. Since the denominator of (27) is always positive, (24) readily follows. □

Theorem 2.2. Let λ_1 ≥ λ_2 ≥ ... ≥ λ_n > 0 and σ_m ≥ σ_{m-1} ≥ ... ≥ σ_1 > 0 denote the eigenvalues of A and Σ, respectively. Then, the minimum value


    of Problem (A) is

J* = [∑_{i=1}^{p*} √(σ_iλ_i)]² [1 + ∑_{i=1}^{p*} σ_i]^{-1} + ∑_{i=p*+1}^n λ_i,  (30)

where p* is uniquely defined by (22)-(24). A corresponding minimizing solution is

C_0 = ΠG^0P',

where Π and P are defined by (2-1) and (9), respectively, and G^0 is given by (29) and (25).
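Theorem 2.2 is straightforward to implement numerically. The sketch below (the function name and the scalar bound μ of Remark 2.2 in the sequel are our additions) computes p* and J* from the eigenvalues alone, and cross-checks, on the eigenvalue data of Example 3.1 of Section 3, that no randomly drawn feasible C does better:

```python
import numpy as np

def trace_min(lam, sig, mu=1.0):
    """Minimum of Tr[A(I + C'C)^{-1}] s.t. Tr[C'Sigma C] <= mu, via (25) and (30)-(31).

    lam: eigenvalues of A, nonincreasing; sig: eigenvalues of Sigma, nondecreasing.
    Returns (p_star, J_star)."""
    n, m = len(lam), len(sig)
    p_star, J_star = 0, sum(lam)                    # p = 0 corresponds to C = 0
    for p in range(1, min(n, m) + 1):
        s = sum(np.sqrt(sig[i] * lam[i]) for i in range(p))
        t = mu + sum(sig[:p])
        alpha_p = -1.0 + np.sqrt(lam[p - 1] / sig[p - 1]) * t / s   # (25) at i = p
        if alpha_p > 0:                             # keep the largest p with alpha_p > 0
            p_star, J_star = p, s**2 / t + sum(lam[p:])             # (30)-(31)
    return p_star, J_star

lam = [9.0, 4.0, 4.0, 1.0]     # eigenvalue data of Example 3.1, with mu = 3
sig = [1.0, 1.0]
p_star, J_star = trace_min(lam, sig, mu=3.0)
assert p_star == 2 and np.isclose(J_star, 10.0)     # F(3) = 25/5 + 5 = 10 in the example

# sanity check: no randomly drawn feasible C beats the formula
rng = np.random.default_rng(2)
A, Sig = np.diag(lam), np.diag(sig)
for _ in range(200):
    C = rng.standard_normal((2, 4))
    C *= np.sqrt(3.0 / np.trace(C.T @ Sig @ C))     # scale onto Tr[C'Sigma C] = mu
    J = np.trace(A @ np.linalg.inv(np.eye(4) + C.T @ C))
    assert J >= J_star - 1e-9
```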

Remark 2.2. If the upper bound in (1-3) is not one, but another scalar μ > 0, it can be absorbed in Σ by scaling all its eigenvalues by the factor 1/μ. If this is carried out, it follows that the minimum value J* given by (30) will be replaced by

[∑_{i=1}^{p*} √(σ_iλ_i)]² [μ + ∑_{i=1}^{p*} σ_i]^{-1} + ∑_{i=p*+1}^n λ_i,  (31)

where p* is defined by (22)-(23) with 1 replaced by μ.

Remark 2.3. Another approach to obtain the result of Theorem 2.2 is first to adjoin the constraint to the objective functional (1-2) via a Lagrange multiplier and then to investigate the first-order and second-order conditions. A rigorous derivation of the said conditions, however, is difficult to obtain for the problem under consideration. For such an ad hoc solution of a similar problem, arising in communication theory, the reader is referred to Ref. 2.

    3. An Application to Joint Estimation and Control

Consider the following two-stage stochastic decision (team) problem with nonclassical information structure. A special version of this problem was first mentioned in Ref. 3 and then discussed in Refs. 4 and 6.

Let x denote an n-dimensional Gaussian vector with mean zero and covariance Λ > 0. Decision maker DM1 wishes to estimate the value of x based on a noisy observation y, where

y = u(x) + w,  (32)

with u(·) being the decision rule of decision maker DM2, of dimension m ≤ n, and w being an m-dimensional Gaussian random vector of mean


zero and covariance Σ > 0. In decision and control theory, u(·) is also known as a signaling strategy, since DM2 signals the value of x to DM1 through the (possibly lower-dimensional) observation vector y. In doing this, DM2 would also like to keep the signaling power as low as possible; hence, the problem is to find an estimator δ(·) and a decision rule u(·), such that the objective functional

J(δ, u) = E[(δ(y) − x)'Q(δ(y) − x) + u'(x)Du(x)]  (33)

is jointly minimized over admissible δ(·) and u(·), for appropriately chosen weighting matrices Q > 0, D > 0.

    To show the applicability of Theorem 2.2 to this problem, we further assume that the signaling strategy is linear, i.e.,

    u(x) = Ax, (34)

for some m × n matrix A.³ Under such a choice for u(·), (33) is minimized by a unique δ(·) for each A, which is

δ^0(y) = E[x | y] = ΛA'(AΛA' + Σ)^{-1}y.  (35)
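That (35) is the minimizing estimator for a fixed A can be verified without simulation, since the quadratic cost of any linear estimator My is available in closed form for the jointly Gaussian pair (x, y). A sketch (Q = I_n for simplicity; all numerical values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2
B = rng.standard_normal((n, n)); Lam = B @ B.T + n * np.eye(n)   # covariance of x
W = rng.standard_normal((m, m)); Sig = W @ W.T + m * np.eye(m)   # covariance of w
A = rng.standard_normal((m, n))                                  # a fixed linear strategy u(x) = Ax

S = A @ Lam @ A.T + Sig                      # cov(y)
K = Lam @ A.T @ np.linalg.inv(S)             # gain of (35): delta0(y) = K y

def mse(M):
    # E||My - x||^2 for the jointly Gaussian (x, y), in closed form
    return np.trace(Lam - M @ A @ Lam - Lam @ A.T @ M.T + M @ S @ M.T)

# no perturbed linear gain does better than K
for _ in range(100):
    M = K + 0.1 * rng.standard_normal((n, m))
    assert mse(K) <= mse(M) + 1e-9
```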

    Substitution of (35) into (33) and manipulation yield an expression in terms of

C ≜ (Σ^{1/2})^{-1}AΛ^{1/2},  (36-1)

    which is

    where

J(C) = Tr[Q̂] − Tr[Q̂C'(CC' + I_m)^{-1}C] + Tr[CC'D̂],  (36-2)

Q̂ ≜ Λ^{1/2}QΛ^{1/2},  (36-3)

D̂ ≜ Σ^{1/2}DΣ^{1/2}.  (36-4)

    Furthermore, using the well-known identity

C'(CC' + I_m)^{-1}C = I_n − (I_n + C'C)^{-1}

    in (36-2), we finally obtain

J(C) = Tr[Q̂(I_n + C'C)^{-1}] + Tr[C'D̂C],  (37)

    which has to be minimized over all real m x n matrices C. It should be noted

3 Whether such a choice is optimal within the general class of strategies could be investigated (as has been done in Ref. 3 for a special version), but this is not the intent of the present paper. This problem is in fact considered in Ref. 5 within the context of a vector-source multichannel communication system (see Section 4).


that, since

J(C) ≥ 0,  J(0) = Tr[Q̂] < ∞,

and J is continuous, we can restrict the search for a minimizing solution to a compact subset of R^mn; consequently, a solution exists. Let C_0 be such a minimizing solution, and let

Tr[C_0'D̂C_0] = μ_0.  (38)

Then, C_0 is definitely also a solution of the minimization problem

min_{C ∈ Ω_{μ_0}} Tr[Q̂(I + C'C)^{-1}],  (39-1)

where

Ω_{μ_0} = {C ∈ M_mn : Tr[C'D̂C] = μ_0}.  (39-2)

Therefore, the minimization of (37) can be carried out in two steps.

Step (i). Determine

F(μ) = min_{C ∈ Ω_μ} Tr[Q̂(I + C'C)^{-1}],  (40)

where Ω_μ is defined by (39-2), with μ_0 replaced by μ > 0. This is precisely Problem (C), with Ω_p replaced by Ω_μ, and thus F(μ) can be obtained from (31) for each μ > 0.

Step (ii). Minimize F(μ) + μ over μ ≥ 0, where F(μ) is given by (40) for positive μ, and by F(0) = Tr[Q̂] for μ = 0, provided that we identify λ_1 ≥ λ_2 ≥ ... ≥ λ_n > 0 as the eigenvalues of Q̂ and σ_m ≥ σ_{m-1} ≥ ... ≥ σ_1 > 0 as the eigenvalues of D̂.
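The two steps can be combined into a single candidate enumeration, using the stationary value (41-2) and the region condition (42) derived in the sequel. A sketch (function name ours; equal-eigenvalue ties are not treated carefully):

```python
import numpy as np

def solve_joint(lam, sig):
    """Candidate-by-candidate solution of Step (ii) via (41-2) and (42).

    lam: eigenvalues of Qhat, nonincreasing; sig: eigenvalues of Dhat, nondecreasing.
    Returns (mu0, minimal value of F(mu) + mu)."""
    best_mu, best_val = 0.0, sum(lam)            # the boundary candidate F(0) = Tr[Qhat]
    m = min(len(lam), len(sig))
    for p in range(1, m + 1):
        s = sum(np.sqrt(sig[i] * lam[i]) for i in range(p))
        mu = s - sum(sig[:p])                    # stationary point (41-2)
        if mu <= 0:
            continue
        ok_low = np.sqrt(sig[p - 1] / lam[p - 1]) * s <= s          # (42), left part
        ok_high = p == m or s < np.sqrt(sig[p] / lam[p]) * s        # (42), right part
        if ok_low and ok_high:
            val = s**2 / (mu + sum(sig[:p])) + sum(lam[p:]) + mu    # F(mu) + mu via (31)
            if val < best_val:
                best_mu, best_val = mu, val
    return best_mu, best_val

mu0, total = solve_joint([9.0, 4.0, 4.0, 1.0], [1.0, 1.0])   # data of Example 3.1 below
assert np.isclose(mu0, 3.0) and np.isclose(total, 13.0)
```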

F(μ) is actually defined in m different regions in the μ-domain, each region determined from the inequalities (23-1) with 1 replaced by μ. It can be shown that F(·) is both continuous and continuously differentiable across the boundaries, although it will be sufficient to differentiate F(·) in each of the intervals determined by (23). If the solution of our problem is an inner point (i.e., μ_0 > 0), then μ_0 should be a stationary point of F(μ) + μ in at least one of these regions. In other words, the solution of

−[∑_{i=1}^p √(σ_iλ_i)]² [μ + ∑_{i=1}^p σ_i]^{-2} + 1 = 0  (41-1)

should give us μ_0 for some particular p. Solving the above equation, we obtain

μ = ∑_{i=1}^p (√(σ_iλ_i) − σ_i).  (41-2)


    Since this has to be positive by hypothesis, we immediately have the following conclusion.

Theorem 3.1. Let λ_1 ≥ ... ≥ λ_n > 0 and σ_m ≥ ... ≥ σ_1 > 0 denote the eigenvalues of Q̂ and D̂, respectively. Then, if the expression (41-2) is nonpositive for all p ≤ m, μ_0 is necessarily zero. Consequently, the joint estimation-control problem under consideration admits a unique solution, given by

A^0 = 0,  δ^0(y) = 0.

Now, the fact that (41-2) is positive for some integer p does not necessarily imply that the resulting μ is optimal, since we also have to check whether that μ lies in the region governed by p. This is verified by employing inequality (23-1) with 1 replaced by μ. The result is the condition

√(σ_p/λ_p) ∑_{i=1}^p √(σ_iλ_i) ≤ ∑_{i=1}^p √(σ_iλ_i) < √(σ_{p+1}/λ_{p+1}) ∑_{i=1}^p √(σ_iλ_i)  (42)

for that particular p for which (41-2) is positive. If no such p can be found [i.e., an integer p ≤ m that satisfies (42) and simultaneously makes (41-2) positive], then the conclusion is again that of Theorem 3.1, i.e., μ_0 = 0.

If positivity of (41-2) and condition (42) are satisfied by more than one p, then one has to compare the resulting values of F(μ) + μ with one another, and also with F(0). Hence, in the most general case, the final solution can be obtained after a finite number (≤ m) of comparisons. To illustrate this, let us consider the following numerical example.

    Example 3.1. Assume

n = 4,  m = 2,  λ_1 = 9,  λ_2 = λ_3 = 4,  λ_4 = σ_1 = σ_2 = 1.

Then, there are two regions: Region I, where 0 ≤ μ ≤ 1/2, and Region II, where μ > 1/2. F(·) can be obtained as

F(μ) = 9/(μ + 1) + 9,   in Region I,

F(μ) = 25/(μ + 2) + 5,  in Region II.

F(μ) + μ has a unique stationary point, which is μ = 3 and lies in Region II. The minimum value of (33) is then F(3) + 3 = 13, for that particular choice of eigenvalues.
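The numbers of the example can be reproduced directly from the two region formulas (a sketch; a grid search stands in for the differentiation):

```python
# region formulas of Example 3.1
def F(mu):
    return 9.0 / (mu + 1.0) + 9.0 if mu <= 0.5 else 25.0 / (mu + 2.0) + 5.0

assert abs(F(0.5) - 15.0) < 1e-12            # F is continuous across the region boundary
grid = [k / 1000.0 for k in range(20001)]    # mu in [0, 20]
mu0 = min(grid, key=lambda mu: F(mu) + mu)
assert abs(mu0 - 3.0) < 1e-9                 # the stationary point mu = 3
assert abs(F(mu0) + mu0 - 13.0) < 1e-9       # minimum value F(3) + 3 = 13
```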


    4. Conclusions

In this paper, we have obtained rigorously the solution of a trace minimization problem under a quadratic inequality constraint, without employing the Lagrange multipliers technique. This nonlinear programming problem has then been shown to be related to a decentralized stochastic team problem which involves joint estimation and control and a nonclassical information structure. A decision maker wishes to estimate the value of a Gaussian random vector by making use of a noise-corrupted observation, in which the said random vector appears only implicitly, through the signaling strategy of another decision maker; and, in estimating the value of the random vector under quadratic expected loss, DM1 also has to take into account the soft constraint on the strategy of DM2. This is thus a joint minimization problem with a strictly nonclassical information structure, and its solution has been discussed in Section 3 under the assumption that DM2 uses only linear strategies. It has been shown that the solution of the trace minimization problem obtained in the main part of the paper can be used effectively to obtain explicitly the solution of that particular team problem.

Now, as has been mentioned in Section 3, the scalar version of this decision problem with nonclassical information structure first appeared in Ref. 3, where Witsenhausen commented on the applicability of certain results from information theory to prove that the linear solution is actually optimal within the class of all measurable mappings. Verification of this result basically involves obtaining the Shannon performance bound for a related communication problem and showing that linear decision rules actually attain that bound (see Ref. 4). Recently, using a similar technique, this result was extended to the decision problem of Section 3, but with the specific choices n = m, Q = Σ = Λ = I_n, and it was shown that linear decision rules are still optimal within the general class of decision rules (Refs. 4, 6). For n ≠ m, however, it has been discussed in Refs. 4 and 6 that nonlinear decision rules could result in better performances.

In Ref. 5, the vector-source multichannel communication problem related to the decision problem of Section 3 has been investigated for general parameter values, and it has been shown that it is possible to determine the Shannon performance bounds explicitly and that, even if n ≠ m and Σ and Λ have distinct eigenvalues, linear coding can achieve a performance very close to the Shannon bounds. All these results imply that there are cases where the linear decision rules described in Section 3 could be optimal or nearly optimal even outside the linear class.


    References

1. WITSENHAUSEN, H. S., A Determinant Maximization Problem Occurring in the Theory of Data Communication, SIAM Journal on Applied Mathematics, Vol. 29, pp. 515-522, 1975.

2. LEE, K. H., and PETERSEN, D. P., Optimal Linear Coding for Vector Channels, IEEE Transactions on Communications, Vol. COM-24, pp. 1283-1290, 1976.

    3. WITSENHAUSEN, H. S., The Intrinsic Model for Discrete Stochastic Control: Some Open Problems, Control Theory, Numerical Methods, and Computer Systems Modelling, Edited by A. Bensoussan and J. L. Lions, Springer-Verlag, New York, New York, 1975.

    4. KASTNER, M. P., Information and Signaling in Decentralized Decision Problems, Harvard University, Cambridge, Massachusetts, Division of Applied Sciences, Technical Report No. 669, 1977.

5. BAŞAR, T., Performance Bounds and Optimal Linear Coding for Multichannel Communication Systems, Boğaziçi University, Istanbul, Turkey, PhD Thesis, 1978.

6. HO, Y. C., KASTNER, M. P., and WONG, E., Teams, Signaling, and Information Theory, Proceedings of the IEEE Conference on Decision and Control, Clearwater Beach, Florida, 1977.