A
Ordinary and Partial Differential and Difference Equations
A.1 Introduction
In this appendix we will present the solution to standard ordinary and partial differential equations and their corresponding difference equations which arise when solving for the boundary response in RBC.
A.2 Second Order Ordinary Differential Equations
We are interested in equations of the type
d²u/dt² = ±k²u(t); 0 ≤ t ≤ τ  (A.2.1)
with boundary conditions u(0) = A and u(τ) = B. The general solution depends on the sign of the right hand side (RHS) of the equation. If we select the positive sign, then the general solution will be
u(t) = a cosh kt + b sinh kt,  (A.2.2)
while the negative sign leads to
u(t) = a cos kt + b sin kt.  (A.2.3)
We solve the full boundary value problem by splitting it into two related problems with boundary values: {u(0) = A, u(τ) = 0} and {u(0) = 0, u(τ) = B}. After solving these two problems we can add the solutions to get the overall solution by calling on the principle of superposition.
We will consider the second set of boundary conditions {0, B} in detail and then infer the result for {A, 0}. We use the boundary conditions to determine the unknown coefficients a and b in (A.2.2) and (A.2.3). If we consider the general solution (A.2.2) which contains the hyperbolic
functions, we see that the cosh(.) function cannot meet the boundary condition u(0) = 0, so a = 0 and b = B/sinh kτ, and the solution is
u(t) = B sinh kt / sinh kτ.  (A.2.4)
Similarly, with (A.2.3) the cos(.) function must be rejected because of the boundary condition at t = 0 and we then have
u(t) = B sin kt / sin kτ.  (A.2.5)
By symmetry, the boundary conditions {A, 0} are met by replacing B with A and t with τ − t in each case. We then sum the partial solutions to get the overall solution to (A.2.1).
In summary, we have the two equations and their corresponding solutions
d²u/dt² = k²u(t); 0 ≤ t ≤ τ, u(0) = A, u(τ) = B,  (A.2.6)
u(t) = [A sinh k(τ−t) + B sinh kt] / sinh kτ,  (A.2.7)
and
d²u/dt² = −k²u(t); 0 ≤ t ≤ τ, u(0) = A, u(τ) = B,  (A.2.8)
u(t) = [A sin k(τ−t) + B sin kt] / sin kτ.  (A.2.9)
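As a numerical illustration, the closed form (A.2.9) can be checked against the boundary value problem (A.2.8) directly; this is only a sketch, and the parameter values k, τ, A, B below are arbitrary choices, not from the text.

```python
import numpy as np

# Verify that (A.2.9) meets the boundary conditions and satisfies
# u'' = -k^2 u.  All parameter values are arbitrary illustrations.
k, tau, A, B = 1.3, 2.0, 0.5, 1.5

def u(t):
    # Solution (A.2.9) for the negative-sign equation (A.2.8).
    return (A * np.sin(k * (tau - t)) + B * np.sin(k * t)) / np.sin(k * tau)

# Boundary conditions u(0) = A and u(tau) = B.
assert abs(u(0.0) - A) < 1e-12
assert abs(u(tau) - B) < 1e-12

# Check u'' = -k^2 u at an interior point with a central difference.
t, h = 0.7, 1e-4
u2 = (u(t - h) - 2.0 * u(t) + u(t + h)) / h**2
assert abs(u2 + k**2 * u(t)) < 1e-4
print("(A.2.9) solves the boundary value problem (A.2.8)")
```

The positive-sign case (A.2.7) can be checked the same way with sinh in place of sin.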
A.3 Second Order Ordinary Difference Equations
We can relate the differential equations in section A.2 to analogous difference equations by approximating the second order differential by a second order difference using the Taylor polynomial expansion, viz.,
d²u/dt² ≈ [u(n−1) − 2u(n) + u(n+1)] / h²  (A.3.1)
where t = nh and u(n) ≜ u(nh). If we discretize (A.2.1) by setting τ = (N+1)h, we obtain
u(n−1) − 2u(n) + u(n+1) = ±h²k²u(n); n = 1, ..., N.  (A.3.2)
It can be shown that the general solution of this difference equation has the same functional form as the corresponding differential equation (A.2.1) but the argument is not the obvious transformation kt → knh. Instead the general solution is, for a positive RHS,
u(n) = a cosh nθ + b sinh nθ,  (A.3.3)
and for a negative RHS we have
u(n) = a cos nθ + b sin nθ,  (A.3.4)
where the parameter θ is determined by substituting the general solution into (A.3.2).
As before, the cos(.) and cosh(.) functions are rejected for the boundary value problem with u(0) = A and u(N+1) = B, and the general solution is b sin nθ or b sinh nθ, depending on the sign of the RHS of the equation.
Let us consider the negative case with the trigonometric solution and substitute into (A.3.2) to obtain an implicit equation for θ, viz.,
cos θ = 1 − h²k²/2  (A.3.5)
where we have made use of the identity
sin α + sin β = 2 sin((α + β)/2) cos((α − β)/2).  (A.3.6)
In the hyperbolic case, when we substitute the proposed general solution, we obtain the implicit equation for θ as
cosh θ = 1 + h²k²/2  (A.3.7)
where we have used the hyperbolic identity
sinh α + sinh β = 2 sinh((α + β)/2) cosh((α − β)/2).  (A.3.8)
More explicitly, we can solve for θ as
θ = ln[cosh θ + √(cosh²θ − 1)].  (A.3.9)
So in conclusion, the discrete boundary value problems with u(0) = A and u(N+1) = B have the solutions
u(n) = [A sinh(N+1−n)θ + B sinh nθ] / sinh(N+1)θ
for the positive RHS and
u(n) = [A sin(N+1−n)θ + B sin nθ] / sin(N+1)θ
for the negative RHS. In either case, as h → 0, if we compare the implicit equations for θ with the series expansions for cos θ and cosh θ we have θ ≈ hk, and then nθ ≈ nhk = kt so the discrete argument nθ converges to the continuous kt.
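This convergence is easy to observe numerically; the following sketch (k is an arbitrary choice) solves (A.3.7) for θ via (A.3.9) and shows θ/h → k as h shrinks.

```python
import numpy as np

# theta solves cosh(theta) = 1 + h^2 k^2 / 2  (A.3.7), explicitly
# theta = ln[cosh(theta) + sqrt(cosh^2(theta) - 1)]  (A.3.9).
# As h -> 0, theta/h -> k, so n*theta -> k*t.  k is arbitrary here.
k = 2.0
for h in (0.1, 0.01, 0.001):
    c = 1.0 + h**2 * k**2 / 2.0              # cosh(theta) from (A.3.7)
    theta = np.log(c + np.sqrt(c**2 - 1.0))  # (A.3.9), i.e. arccosh(c)
    print(h, theta / h)                      # tends to k

c = 1.0 + 1e-4**2 * k**2 / 2.0
theta = np.log(c + np.sqrt(c**2 - 1.0))
assert abs(theta / 1e-4 - k) < 1e-5
```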
A.4 Second Order Partial Differential Equations
Two dimensional RBC models are boundary value problems in both dimensions and correspond to the elliptic class of partial differential equations (PDEs). We are particularly interested in the Helmholtz equation
∂²u/∂x² + ∂²u/∂y² = π²k²u(x,y)  (A.4.1)
which is the continuous analog of the NCI model. The classical solution to this equation is to propose a separable solution u(x,y) = X(x)Y(y) which results in
Y d²X/dx² + X d²Y/dy² = π²k²XY.  (A.4.2)
As in the previous sections we solve this boundary problem by setting the boundary conditions to zero on all but one side and then use superposition to derive the full solution. For example, we specify that the boundary conditions be zero for u(x, 0), u(x, b), and u(0, y) and that u(a, y) = g(y), the true boundary values at x = a. We then separate out the PDE into two ordinary differential equations of the form (A.2.1), one in x and the other in y. In order to meet the zero boundary conditions at y = 0 and y = b we require a sin(.) function for the y-direction. Hence the RHS is negative for the equation in y and consequently positive for the equation in x, and the two equations are
d²Y/dy² = −π²γ²Y,  (A.4.3)
and
d²X/dx² = π²(γ² + k²)X.  (A.4.4)
Solving for Y we get
Y = k_y sin(γπy)  (A.4.5)
and the boundary condition u(x, b) = 0 determines γ as
γ_p ≜ γ(p) = p/b; p = 1, 2, ....  (A.4.6)
The solution for X is
X = k_x sinh(√(γ_p² + k²) πx) = k_x sinh(√(p² + b²k²) πx/b).  (A.4.7)
Remembering that u = XY we have the solution
. qb7rX smh-b-
up(x,y) = cp sin( Pb7rY ) (A.4.8) qb7ra
sinh-b-
where we have defined q_b ≜ √(p² + b²k²), and of course there are an infinite number of these solutions so the overall solution is the series
u(x,y) = Σ_{p=1}^∞ u_p(x,y)  (A.4.9)
where the c_p = k_x k_y sinh(q_b πa/b) are chosen to meet the boundary condition at x = a:
g(y) = Σ_{p=1}^∞ c_p sin(pπy/b).  (A.4.10)
In particular, the c_p are the coefficients in the Fourier sine series of period 2b for g(y) and are given by
c_p = (2/b) ∫₀^b g(y) sin(pπy/b) dy.  (A.4.11)
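A quick numerical sketch of (A.4.11): for g(y) chosen as a single sine harmonic the integral should recover that one coefficient and annihilate the rest. The value of b and the quadrature grid are arbitrary choices.

```python
import numpy as np

# Fourier sine coefficients (A.4.11) computed by the trapezoid rule.
# For g(y) = sin(3*pi*y/b), only c_3 should be non-zero (and equal 1).
b = 2.0
y = np.linspace(0.0, b, 20001)
dy = y[1] - y[0]

def coeff(p, gvals):
    # c_p = (2/b) * integral_0^b g(y) sin(p*pi*y/b) dy   (A.4.11)
    f = gvals * np.sin(p * np.pi * y / b)
    return (2.0 / b) * np.sum((f[:-1] + f[1:]) / 2.0) * dy

g = np.sin(3 * np.pi * y / b)
c = [coeff(p, g) for p in range(1, 6)]
print(np.round(c, 6))
assert abs(c[2] - 1.0) < 1e-6
assert max(abs(c[0]), abs(c[1]), abs(c[3]), abs(c[4])) < 1e-6
```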
This is the solution for the boundary conditions at x = a and we must superpose the results for each of the four sides to obtain the complete solution to (A.4.1) as
u(x,y) = Σ_{p=1}^∞ {[a_p sinh(q_b π(a−x)/b) + c_p sinh(q_b πx/b)] / sinh(q_b πa/b)} sin(pπy/b)
  + Σ_{p=1}^∞ {[b_p sinh(q_a π(b−y)/a) + d_p sinh(q_a πy/a)] / sinh(q_a πb/a)} sin(pπx/a)  (A.4.12)
where we have defined q_a ≜ √(p² + a²k²) and a_p, b_p, c_p, and d_p are the pth Fourier sine coefficients of the four boundaries at x = 0, y = 0, x = a, and y = b respectively.
A.5 Second Order Partial Difference Equations
We again approximate the second order differentials by second order differences using the Taylor polynomial expansion. In particular
∂²u/∂x² ≈ [u(i−1,j) − 2u(i,j) + u(i+1,j)] / h²,  ∂²u/∂y² ≈ [u(i,j−1) − 2u(i,j) + u(i,j+1)] / h²  (A.5.1)
so that the discrete counterpart of (A.4.1) is
u(i−1,j) + u(i+1,j) + u(i,j−1) + u(i,j+1) − 4u(i,j) = π²k²h²u(i,j).  (A.5.2)
The general approach that we take to solve this partial difference equation is analogous to the technique used for the continuous PDE in section A.4 and we propose a separable solution of the form u(i,j) = X(i)YU). If we again impose zero boundary conditions on all but the right side at i = (N + 1), then we split (A.5.2) into the two discrete counterparts of (A.4.3) and (A.4.4), viz.,
Y(j−1) − 2Y(j) + Y(j+1) = −π²γ²Y(j)  (A.5.3)
an ordinary difference equation in j with a trigonometric solution, and
X(i−1) − 2X(i) + X(i+1) = π²(γ² + k²h²)X(i),  (A.5.4)
an ordinary difference equation in i with a hyperbolic solution. Solving for Y we get
Y(j) = k_y sin(jπθ)  (A.5.5)
so we have a sin(.) function in the j-direction and the argument is again fixed by the boundary condition at j = (M + 1) as
θ_p ≜ θ(p) = p/(M+1); p = 1, 2, ..., M.  (A.5.6)
Note that p now only takes M distinct values and so the infinite series solution obtained in the continuous case will now become a finite series and the Fourier sine series will be replaced by the DST as we shall show below.
Now θ_p depends on γ via (A.3.5)
cos πθ = 1 − π²γ²/2  (A.5.7)
so the boundary condition at j = (M+1) effectively balances the two equations (A.5.3) and (A.5.4) by regulating the coupling coefficient γ as
γ_p² ≜ γ²(p) = (2/π²)(1 − cos πθ_p).  (A.5.8)
As with the ordinary difference equation, the functional form for the PDE is the same in both the continuous and discrete cases but the argument is different. So the solution to the discrete approximation of the Helmholtz equation with one non-zero boundary condition at i = (N+1) is the finite series expansion
u(i,j) = Σ_{p=1}^M c_p sin(pπj/(M+1)) [sinh(iω_p) / sinh((N+1)ω_p)].  (A.5.9)
The coefficients cp are chosen to match the boundary condition at i = (N + 1) so
u(N+1, j) = Σ_{p=1}^M c_p sin(pπj/(M+1))  (A.5.10)
and the c_p are √(2/(M+1)) times the DST coefficients s_p of u(N+1, j), viz.,
s_p = √(2/(M+1)) Σ_{j=1}^M u(N+1, j) sin(pπj/(M+1)).  (A.5.11)
We use (A.3.7) to determine the parameter ω_p as
cosh ω_p = 1 + (k²h² + γ_p²)π²/2
  = 2 + k²h²π²/2 − cos πθ_p  (A.5.12)
where we have used (A.5.8).
When we solve with the true boundary conditions at j = 0 or j = (M+1) we must replace M by N and then ω_p → ω̄_p, which is defined via (A.5.12) with θ_p → θ̄_p ≜ p/(N+1).
We must sum the four partial solutions for each boundary to give the complete solution to (A.5.2) as
u(i,j) = Σ_{p=1}^M {u_{p,a}(i,j) + u_{p,c}(i,j)} + Σ_{q=1}^N {u_{q,b}(i,j) + u_{q,d}(i,j)}  (A.5.13)
where
u_{p,a}(i,j) + u_{p,c}(i,j) = [a_p sinh((N+1−i)ω_p) + c_p sinh(iω_p)] / sinh((N+1)ω_p) · sin(pπj/(M+1))  (A.5.14)
with a similar expression, (A.5.15), for u_{q,b}(i,j) + u_{q,d}(i,j) in which the roles of i and j are interchanged and ω̄_q replaces ω_p. The coefficients a_p, b_q, c_p, and d_q are, by analogy with (A.5.11), proportional to the pth and qth DST sine coefficients of the four boundaries at x = 0, y = 0, x = a, and y = b respectively. In the special case when M = N we have
u(i,j) = Σ_{p=1}^N { [a_p sinh((N+1−i)ω_p) + c_p sinh(iω_p)] / sinh((N+1)ω_p) · sin(pπj/(N+1))
  + [b_p sinh((N+1−j)ω_p) + d_p sinh(jω_p)] / sinh((N+1)ω_p) · sin(pπi/(N+1)) }.  (A.5.16)
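The building blocks of this solution can be verified numerically: with θ_p, γ_p, and ω_p as defined in (A.5.6), (A.5.8), and (A.5.12), the factors sin(πθ_p j) and sinh(iω_p) should satisfy the difference equations (A.5.3) and (A.5.4). This is a sketch, assuming the reconstructed forms of those equations; M, N, p, k, and h are arbitrary choices.

```python
import numpy as np

# Verify the separable factors of (A.5.9) against (A.5.3) and (A.5.4).
M, N, p = 7, 9, 3
k, h = 0.8, 0.1
theta = p / (M + 1.0)                                        # (A.5.6)
gamma2 = (2.0 / np.pi**2) * (1.0 - np.cos(np.pi * theta))    # (A.5.8)
w = np.arccosh(1.0 + (k**2 * h**2 + gamma2) * np.pi**2 / 2)  # (A.5.12)

j = np.arange(0, M + 2)
i = np.arange(0, N + 2)
Y = np.sin(np.pi * theta * j)
X = np.sinh(i * w)

# (A.5.3): second difference of Y equals -pi^2 gamma^2 Y.
dY = Y[:-2] - 2 * Y[1:-1] + Y[2:]
assert np.allclose(dY, -np.pi**2 * gamma2 * Y[1:-1])
# (A.5.4): second difference of X equals pi^2 (gamma^2 + k^2 h^2) X.
dX = X[:-2] - 2 * X[1:-1] + X[2:]
assert np.allclose(dX, np.pi**2 * (gamma2 + k**2 * h**2) * X[1:-1])
print("separable discrete solution verified")
```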
B
Properties of the Discrete Sine Transform
B.1 Introduction
In this appendix we will introduce the discrete sine transform (DST) and present the DST of the following standard sequences: constant, linear, exponential, sinusoidal and hyperbolic.
Definition The DST matrix Ψ is defined as
Ψ(i,j) = √(2/(N+1)) sin(ijπ/(N+1)); 1 ≤ i, j ≤ N.  (B.1.1)
It is a unitary symmetric transform and hence is its own inverse, viz.,
ΨΨ = I  (B.1.2)
where I is the N × N identity matrix. We refer to the kth column, or row, of the Ψ matrix as ψ_k ≜ Ψ(·, k) with components (Ψ(1,k), Ψ(2,k), ..., Ψ(N,k)).
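The self-inverse property (B.1.2) is straightforward to confirm numerically; N below is an arbitrary choice.

```python
import numpy as np

# Build the DST matrix (B.1.1) and confirm it is symmetric and
# its own inverse (B.1.2).
N = 8
i = np.arange(1, N + 1)
Psi = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(i, i) * np.pi / (N + 1))

assert np.allclose(Psi, Psi.T)             # symmetric
assert np.allclose(Psi @ Psi, np.eye(N))   # Psi Psi = I  (B.1.2)
print("DST matrix is symmetric and self-inverse")
```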
Notation We use lower case letters to indicate vectors and upper case for matrices, and denote the corresponding 1d or 2d transformation by ^, so
x̂ = Ψx  (B.1.3)
and
T̂ = ΨTΨ.  (B.1.4)
Tridiagonal Toeplitz Matrices The columns ψ_k are the eigenvectors of a symmetric tridiagonal Toeplitz matrix T with entries {−α, 1, −α}, since Tψ_k = λ(k)ψ_k and therefore
ΨTΨ = Λ  (B.1.5)
where Λ is a diagonal matrix of eigenvalues λ(k), with
λ(k) = 1 − 2α cos(kπ/(N+1)); k = 1, ..., N.  (B.1.6)
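The eigen-relation (B.1.5)–(B.1.6) can be checked directly; N and α below are arbitrary choices.

```python
import numpy as np

# Check that the DST columns diagonalize a symmetric tridiagonal
# Toeplitz matrix {-alpha, 1, -alpha}, with eigenvalues (B.1.6).
N, alpha = 8, 0.45
n = np.arange(1, N + 1)
Psi = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(n, n) * np.pi / (N + 1))
T = np.eye(N) - alpha * (np.eye(N, k=1) + np.eye(N, k=-1))
lam = 1.0 - 2.0 * alpha * np.cos(n * np.pi / (N + 1))   # (B.1.6)

for k in range(N):
    assert np.allclose(T @ Psi[:, k], lam[k] * Psi[:, k])  # T psi_k = lambda(k) psi_k
assert np.allclose(Psi @ T @ Psi, np.diag(lam))            # (B.1.5)
print("T psi_k = lambda(k) psi_k for all k")
```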
B.2 DST Evaluation Technique
To determine the DST of the standard sequences we will study the following vector equation
Tx=b (B.2.1)
where x is the sequence under study, T is the tridiagonal Toeplitz matrix {−α, 1, −α}, and b contains only two non-zero entries:
b(1) = αA = αx(0);  b(N) = αB = αx(N+1),  (B.2.2)
so that the sequence is
x(n) = α[x(n−1) + x(n+1)]; n = 1, ..., N;  (B.2.3)
x(0) = A;  x(N+1) = B.
Transforming (B.2.1) we get
ΨTx = b̂ = αA ψ₁ + αB ψ_N  (B.2.4)
but using (B.1.2) and (B.1.5) we have ΨTx = ΨT(ΨΨ)x = Λx̂, so
x̂(k) = [A ψ₁(k) + B ψ_N(k)] / (λ(k)/α)  (B.2.5)
and we can express the DST coefficients as the sum of two weighted sin(.) functions. Let us now use this technique to determine the DSTs of the standard sequences.
B.3 Exponential Sequences
If we consider x(n) = e^{np} then (B.2.3) is satisfied, and we have
A = 1;  B = e^{(N+1)p}  (B.3.1)
and α is given by
e^{np} = α[e^{(n−1)p} + e^{(n+1)p}]  (B.3.2)
hence dividing by e^{np} we have
α = 1/(e^p + e^{−p}) = 1/(2 cosh p).  (B.3.3)
Applying (B.2.5) we obtain the DST coefficients as
x̂(k) = [ψ₁(k) + e^{(N+1)p} ψ_N(k)] / [2(cosh p − cos(kπ/(N+1)))]  (B.3.4)
for the sequence x(n) = e^{np}.
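The closed form (B.3.4) can be compared against the direct transform x̂ = Ψx; N and p below are arbitrary choices.

```python
import numpy as np

# Compare (B.3.4) for the DST of x(n) = e^{np} with the direct DST.
N, p = 8, 0.3
n = np.arange(1, N + 1)
Psi = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(n, n) * np.pi / (N + 1))

x = np.exp(n * p)
direct = Psi @ x

psi1 = Psi[:, 0]        # psi_1(k)
psiN = Psi[:, N - 1]    # psi_N(k)
closed = (psi1 + np.exp((N + 1) * p) * psiN) / \
         (2.0 * (np.cosh(p) - np.cos(n * np.pi / (N + 1))))   # (B.3.4)

assert np.allclose(direct, closed)
print("closed-form DST of the exponential sequence matches")
```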
B.4 Constant Sequences
If we consider the exponential sequence with p = 0 then we get a constant sequence with x(n) = 1. Substituting in (B.3.4) we get
" i'1(k) + i'N<k) x(k) = k-... .. (B.4.1)
2(1 - cos N + 1 )
Now ψ_N(k) = (−1)^{k−1} ψ₁(k), therefore the even harmonics are all zero and the odd harmonics, k = 2r + 1, are
x̂(k) = ψ₁(k) / (1 − cos(kπ/(N+1))).  (B.4.2)
Now sin 2θ/(1 − cos 2θ) = cot θ, therefore the DST coefficients for x(n) = 1 are
x̂(k) = √(2/(N+1)) cot(kπ/(2(N+1))); k odd.
which is the same as (B.6.2) with ψ_N replaced by ψ₁. Similarly, the DST of cosh((N+1−n)p) is a combination of (B.6.2) and (B.6.4) and it can be shown that
x̂(k) = [cosh((N+1)p) ψ₁(k) + ψ_N(k)] / [2(cosh p − cos(kπ/(N+1)))]  (B.6.7)
which is the same as (B.6.4) with ψ_N and ψ₁ interchanged.
B.7 Sinusoidal Sequences
We can also get the DST of the trigonometric functions by combining the DST coefficients for two exponential sequences, since
sin np = (e^{inp} − e^{−inp}) / (2i)  (B.7.1)
we have
x̂(k) = sin((N+1)p) ψ_N(k) / [2(cos p − cos(kπ/(N+1)))].  (B.7.2)
Similarly for the cos(.) function, since
cos np = (e^{inp} + e^{−inp}) / 2  (B.7.3)
we have
x̂(k) = [ψ₁(k) + cos((N+1)p) ψ_N(k)] / [2(cos p − cos(kπ/(N+1)))].  (B.7.4)
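As with the exponential case, the sinusoidal result can be checked against a direct transform; here (B.7.4) for x(n) = cos np, with arbitrary N and p (p chosen away from resonant values where cos p = cos(kπ/(N+1))).

```python
import numpy as np

# Compare (B.7.4) for the DST of x(n) = cos(np) with the direct DST.
N, p = 8, 0.4
n = np.arange(1, N + 1)
Psi = np.sqrt(2.0 / (N + 1)) * np.sin(np.outer(n, n) * np.pi / (N + 1))

x = np.cos(n * p)
direct = Psi @ x

psi1, psiN = Psi[:, 0], Psi[:, N - 1]
closed = (psi1 + np.cos((N + 1) * p) * psiN) / \
         (2.0 * (np.cos(p) - np.cos(n * np.pi / (N + 1))))    # (B.7.4)

assert np.allclose(direct, closed)
print("closed-form DST of the cosine sequence matches")
```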
C
Transform Domain Variance Distributions
C.1 Introduction
In this appendix we will derive the normalized variance distributions for RBC and DCT coding of 1st-order Markov processes. We will first consider RBC in section C.2 and then DCT coding in C.3.
C.2 1d RBC
In this section we will show that the normalized variance distribution varies only slowly with ρ while the variance reduction ratio η varies very rapidly with ρ.
Normalized Variance Distribution
Assuming a 1st-order Markov process, we have already seen in (2.7.23) that
(C.2.1)
and so taking the diagonal entries from the matrix we have the variances σ̂²(k), viz.,
σ̂²(k) = R̂(k,k) = β²/λ(k); k = 1, ..., N  (C.2.2)
where
β² = (1 − ρ²)σ²/(1 + ρ²)  (C.2.3)
and
λ(k) = 1 − (2ρ/(1+ρ²)) cos(kπ/(N+1)); k = 1, ..., N.  (C.2.4)
The average value of the transform domain variances is σ̄², viz.,
σ̄² ≜ (1/N) Σ_{k=1}^N β²/λ(k).  (C.2.5)
We divide the transform domain variances by their average value σ̄² to obtain the normalized variance distribution as
σ̂²(k)/σ̄² = N / [λ(k) Σ_{j=1}^N 1/λ(j)]  (C.2.6)
where λ(k) is given in (C.2.4). If we now consider the special case when ρ ≈ 1, which is a good approximation for practical images, then
ρ ≈ 1 = 1 − ε, say,  (C.2.7)
and
2ρ/(1 + ρ²) = 2/(ρ + ρ^{−1})
  = 2/[(1 − ε) + (1 + ε + O(ε²))]
  = 2/(2 + O(ε²))  (C.2.8)
  = 1 + O(ε²).
So, we then have
λ(k) ≈ 1 − cos(kπ/(N+1))  (C.2.9)
which is independent of ρ for highly correlated processes, and therefore the normalized distribution is
σ̂²(k)/σ̄² = N / {[1 − cos(kπ/(N+1))] Σ_{j=1}^N [1 − cos(jπ/(N+1))]^{−1}}  (C.2.10)
which is also independent of ρ. In summary, we have shown that ε has only a second order effect on the distribution, which is therefore approximately constant for small ε (large ρ) and varies only slowly as ρ decreases, as shown in Fig. C.2.1 for the case N = 7 as ρ varies from 0.5 to 0.99.
FIGURE C.2.1. Theoretical variance distribution for 1d RBC, N = 7, of a 1st-order Markov process for ρ = 0.5, ..., 0.99.
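The insensitivity to ρ is easy to reproduce numerically from (C.2.4) and (C.2.6); this sketch uses N = 7 as in the figure.

```python
import numpy as np

# Normalized RBC variance distribution (C.2.6) for N = 7, showing
# that it changes only slowly as rho varies.
N = 7
k = np.arange(1, N + 1)

def norm_var(rho):
    lam = 1.0 - (2.0 * rho / (1.0 + rho**2)) * np.cos(k * np.pi / (N + 1))  # (C.2.4)
    v = 1.0 / lam
    return N * v / v.sum()   # (C.2.6)

for rho in (0.5, 0.9, 0.99):
    print(rho, np.round(norm_var(rho), 3))

# The rho = 0.9 and rho = 0.99 distributions are nearly identical.
assert np.max(np.abs(norm_var(0.99) - norm_var(0.9))) < 0.25
```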
Variance Reduction Ratio As we have just shown, the normalized variance distribution is fairly insensitive to changes in ρ; however, the variance reduction ratio is very sensitive to such changes, as we shall now show.
By definition
η ≜ σ²/σ̄²  (C.2.11)
and so from (C.2.3) we have
η = Nσ² / [β² Σ_{k=1}^N 1/λ(k)] = N(1 + ρ²) / [(1 − ρ²) Σ_{k=1}^N 1/λ(k)]  (C.2.12)
where as before λ(k) is given in (C.2.4).
Again taking the special case when ρ ≈ 1 = (1 − ε), λ(k) is largely independent of ρ, so that
η ∝ (1 + ρ²)/(1 − ρ²) = [1 + (1 − 2ε + O(ε²))] / [1 − (1 − 2ε + O(ε²))]
  ≈ (2 − 2ε)/(2ε)
  = 1/ε − 1  (C.2.13)
and so, since η ∝ (1/ε − 1), η → ∞ as ε → 0. For example,
η_.99 = (99/19) η_.95 = 5.2 η_.95 and η_.95 = (19/9) η_.9 = 2.1 η_.9.
Computing η from (C.2.12) for N = 3, 7, 15 and ρ = 0.80, ..., 0.995 we obtain the values shown in Table C.2.1, which closely follow the ratios predicted by the approximate proportional relationship given in (C.2.13). The tabulated results clearly show how very rapidly η increases as ρ approaches unity. Furthermore, as the transform block size N decreases, the prediction improves and η increases.
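A sketch of this computation, assuming the form of (C.2.12) given above; it prints η over a grid of N and ρ and checks one of the predicted ratios.

```python
import numpy as np

# Variance reduction ratio (C.2.12) and its small-epsilon behaviour
# eta ~ (1/eps - 1) from (C.2.13).
def eta(N, rho):
    k = np.arange(1, N + 1)
    lam = 1.0 - (2.0 * rho / (1.0 + rho**2)) * np.cos(k * np.pi / (N + 1))
    return N * (1.0 + rho**2) / ((1.0 - rho**2) * np.sum(1.0 / lam))  # (C.2.12)

for N in (3, 7, 15):
    print(N, [round(eta(N, r), 1) for r in (0.80, 0.90, 0.95, 0.99)])

# eta(.99)/eta(.95) should be close to (1/.01 - 1)/(1/.05 - 1) = 99/19.
r = eta(3, 0.99) / eta(3, 0.95)
print("ratio:", round(r, 2), "predicted:", round(99.0 / 19.0, 2))
assert abs(r - 99.0 / 19.0) < 0.3
```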
C.3 1d DCT
In this case, since the DCT, although a good approximation, is not the true KLT of the Markov process, there are no simple eigen equations for the transform domain variances and we have
σ̂²(k) = σ² [CRCᵀ](k,k)  (C.3.1)
where C is the DCT matrix and R is the symmetric Toeplitz image autocorrelation matrix with first row [1 ρ ρ² ... ρ^{N−1}]. We again normalize by the average variance value σ̄², which is
σ̄² = (1/N) Σ_{k=1}^N σ̂²(k) = σ²  (C.3.2)
where we have made use of the fact that C is a unitary transform and so the trace of the R matrix is unchanged after transformation, that is the sum of the diagonal elements, N in this case, is invariant under unitary transformation. Consequently the normalized variances are simply
σ̂²(k)/σ̄² = [CRCᵀ](k,k).  (C.3.3)
Using (C.3.3), we calculated the normalized transform domain variance distributions for a block size of 8 with ρ varying over the same interval of 0.5 to 0.99 used for RBC. The distributions are plotted in Fig. C.3.1 and, as the correlation coefficient varies, the distribution varies much more rapidly than in the case of RBC shown in Fig. C.2.1. These differences are most pronounced as ρ becomes large, when the curves are widely spread for the DCT but tightly grouped with RBC.
FIGURE C.3.1. Theoretical variance distribution for 1d DCT, N = 8, of a 1st-order Markov process for ρ = 0.5, ..., 0.99.
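The DCT distributions can be reproduced from (C.3.3); this sketch builds the orthonormal DCT-II matrix directly (the text does not specify a construction, so this is one standard choice) and the normalized Markov autocorrelation R.

```python
import numpy as np

# Normalized DCT variances (C.3.3) for N = 8 via diag(C R C^T).
N = 8
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(n, 2 * n + 1) / (2 * N))
C[0, :] /= np.sqrt(2.0)                  # unitary normalization of row 0
assert np.allclose(C @ C.T, np.eye(N))   # C is orthonormal

for rho in (0.5, 0.9, 0.99):
    R = rho ** np.abs(np.subtract.outer(n, n))  # Toeplitz, first row 1, rho, rho^2, ...
    v = np.diag(C @ R @ C.T)                    # transform variances / sigma^2
    assert np.isclose(v.sum(), N)               # trace invariant under unitary C
    print(rho, np.round(N * v / v.sum(), 3))    # normalized distribution (C.3.3)
```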
D
Coding Parameters for Adaptive Coding Based on Activity Classes
D.1 Introduction
The normalized variances and resulting bit allocations used for the adaptive coding experiments described in chapter five are given in this appendix. The DCT parameters are given in section D.2 and then the RBC parameters are tabulated in section D.3.
D.2 Adaptive DCT Coding Parameters
The normalized DCT variances for the four activity classes are listed below in Table D.2.1 for ensemble A, and Table D.2.2 for ensemble B. The resulting bit allocations are then given in Tables D.2.3(a)-(d) for ensemble A at the four rates of 1.5, 1.0, 0.5, and 0.25 bits/pixel. The next four tables, D.2.4(a)-(d), are then for ensemble B at these same four data rates.
TABLE D .2.1. Normalized variances for adaptive DCT coding.
D.3 Adaptive RBC Coding Parameters
The normalized RBC variances for the four activity classes are listed below in Table D.3.1 for ensemble A, and Table D.3.2 for ensemble B. The resulting bit allocations are then given in Tables D.3.3(a)-(d) for ensemble A at the four rates of 1.5, 1.0, 0.5, and 0.25 bits/pixel. The next four tables, D.3.4(a)-(d), are then for ensemble B at these same four data rates.
TABLE D.3.1. Normalized variances for adaptive RBC coding.