Encyclopaedia of Mathematics: Coproduct — Hausdorff — Young Inequalities

Managing Editor
M. Hazewinkel
Scientific Board
J. F. Adams†, S. Albeverio, J. B. Alblas, S. A. Amitsur, I. J. Bakelman, J. W. de Bakker, C. Bardos, H. Bart, H. Bass, A. Bensoussan, M. Bercovier, M. Berger, E. A. Bergshoeff, L. Berkovitz, E. Bertin†, F. Beukers, A. Beutelspacher, K.-D. Bierstedt, H. P. Boas, J. Bochnak, H. J. M. Bos, B. L. J. Braaksma, T. P. Branson, D. S. Bridges, A. E. Brouwer, M. G. de Bruin, R. G. Burns, P. Cameron, H. Capel, P. Cartier, C. Cercignani, J. M. C. Clark, Ph. Clement, A. M. Cohen, J. W. Cohen, P. Conrad, H. S. M. Coxeter, R. F. Curtain, M. H. A. Davis, M. V. Dekster, C. Dellacherie, G. van Dijk, H. C. Doets, I. Dolgachev, A. Dress, J. J. Duistermaat, D. van Dulst, H. van Duyn, H. Dym, A. Dynin, M. L. Eaton, W. Eckhaus, J. Eells, P. van Emde Boas, H. Engl, G. Eskin, G. Ewald, V. I. Fabrikant, A. Fasano, M. Fliess, R. M. Fossum, B. Fuchssteiner†, G. B. M. van der Geer, R. D. Gill, V. V. Goldberg, J. de Graaf, J. Grasman, P. A. Griffith, A. W. Grootendorst, L. Gross, P. Gruber, E. J. Hannan, K. P. Hart, G. Heckman, A. J. Hermans, W. H. Hesselink, C. C. Heyde, K. Hirsch†, M. W. Hirsch, K. H. Hofmann, A. T. de Hoop, P. J. van der Houwen, N. M. Hugenholtz, C. B. Huijsmans, J. R. Isbell, A. Isidori, E. M. de Jager, D. Johnson, P. T. Johnstone, D. Jungnickel, M. A. Kaashoek, V. Kac, W. L. J. van der Kallen, D. Kanevsky, Y. Kannai, H. Kaul, M. S. Keane, E. A. de Kerf, W. Klingenberg, T. Kloek, J. A. C. Kolk, G. Komen, T. H. Koornwinder, L. Krop, B. Kupershmidt, H. A. Lauwerier, J. van Leeuwen, J. Lennox, H. W. Lenstra Jr., J. K. Lenstra, H. Lenz, M. Levi, J. Lindenstrauss, J. H. van Lint, F. Linton, A. Liulevicius, M. Livshits, W. A. J. Luxemburg, R. M. M. Mattheij, L. G. T. Meertens, P. Mekenkamp, A. R. Meyer, J. van Mill, I. Moerdijk, J. P. Murre, H. Neunzert, G. Y. Nieuwland, G. J. Olsder, B. Ørsted, F. van Oystaeyen, B. Pareigis, K. R. Parthasarathy, I. I. Piatetskiĭ-Shapiro, H. G. J. Pijls, N. U. Prabhu, G. B. Preston, E. Primrose, A. Ramm, C. M. Ringel, J. B. T. M. Roerdink, K. W. Roggenkamp, G. Rozenberg, W. Rudin, S. N. M. Ruysenaars, A. Salam, A. Salomaa, J. P. M. Schalkwijk, C. L. Scheffer, R. Schneider, J. A. Schouten, A. Schrijver, F. Schurer, I. A. Segal, J. J. Seidel, A. Shenitzer, V. Snaith, T. A. Springer, J. H. M. Steenbrink, J. D. Stegeman, F. W. Steutel, P. Stevenhagen, I. Stewart, R. Stong, L. Streit, K. Stromberg, L. G. Suttorp, D. Tabak, F. Takens, R. J. Takens, N. M. Temme, S. H. Tijs, B. Trakhtenbrot, L. N. Vaserstein, M. L. J. van de Vel, F. D. Veldkamp, P. M. B. Vitányi, N. J. Vlaar, H. A. van der Vorst, J. de Vries, F. Waldhausen, B. Wegner, J. J. O. O. Wiegerinck, J. Wiegold, J. C. Willems, J. M. Wills, B. de Wit, S. A. Wouthuysen, S. Yuzvinskiĭ, L. Zalcman, S. I. Zukhovitzkiĭ
ENCYCLOPAEDIA OF MATHEMATICS
An updated and annotated translation of the Soviet 'Mathematical Encyclopaedia'
Springer Science+Business Media, B.V. 1995
This International Edition in 6 volumes is an unabridged reprint of the original 10-volume hardbound library edition. (ISBN original 10-volume set: 1-55608-101-7)
ISBN 978-0-7923-2974-9 ISBN 978-1-4899-3795-7 (eBook) DOI 10.1007/978-1-4899-3795-7
All Rights Reserved © 1995 by Springer Science+Business Media Dordrecht
Originally published by Kluwer Academic Publishers in 1995. Softcover reprint of the hardcover 1st edition 1995.
No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical,
including photocopying, recording or by any information storage and retrieval system, without written permission from the copyright owner.
COPRODUCT of a family of objects in a category - A concept describing the (categorical analogues of the) construction of a direct sum of modules or a discrete union (bouquet) of sets in the language of morphisms. Let $A_i$, $i \in I$, be an indexed family of objects in a category $\mathfrak{M}$. An object $S$, together with morphisms $\sigma_i \colon A_i \to S$, is called the coproduct of the family $A_i$, $i \in I$, if for any family of morphisms $\alpha_i \colon A_i \to X$, $i \in I$, there exists a unique morphism $\alpha \colon S \to X$ such that $\sigma_i \alpha = \alpha_i$, $i \in I$. The morphisms $\sigma_i$ are called the imbeddings of the coproduct; the coproduct is denoted by $\coprod_{i\in I} A_i(\sigma_i)$, $\coprod_{i\in I} A_i$, or $S = A_1 * \cdots * A_n$ in case $I = \{1,\dots,n\}$. The morphism $\alpha$ figuring in the definition of the coproduct is sometimes denoted by $\coprod_{i\in I}\alpha_i$ or $(*)_{i\in I}\alpha_i$.

The coproduct of a family of objects is defined uniquely up to an isomorphism; it is associative and commutative. The coproduct is the dual concept of the product of a family of objects in a category.

The coproduct of the empty family of objects is the left zero (initial object) of the category. In an Abelian category, the coproduct is frequently called the direct sum of the family $A_i$, $i \in I$, and is denoted by $\sum_{i\in I}A_i$, or $A_1 + \cdots + A_n$ in case $I = \{1,\dots,n\}$. In most categories of structured sets, the coproduct of a family of objects coincides with the free product of the family, and as a rule requires special description. Thus, in the category of groups, the coproduct is the free product of groups; in the category of modules it is the direct sum of modules; etc.

In a category with null morphisms, if $S = \coprod_{i\in I}A_i(\sigma_i)$ is a coproduct, there exist uniquely defined morphisms $\pi_i \colon S \to A_i$ such that $\sigma_i \pi_i = 1_{A_i}$, $\sigma_i \pi_j = 0$ for $i \ne j$. In an Abelian category the coproduct and the product of a finite family of objects are one and the same.
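In the category of sets the coproduct is the disjoint union. The following minimal Python sketch (function names and data are illustrative, not from the article) tags each element with its index and builds the unique induced morphism required by the definition.

```python
# A concrete sketch of the universal property of the coproduct in the
# category of sets (disjoint union); names and data are hypothetical.

def coproduct(families):
    """Disjoint union of an indexed family of sets, with its imbeddings."""
    S = {(i, a) for i, A in families.items() for a in A}
    imbeddings = {i: (lambda a, i=i: (i, a)) for i in families}
    return S, imbeddings

def induced_morphism(alphas):
    """The unique map alpha: S -> X with sigma_i followed by alpha = alpha_i."""
    return lambda pair: alphas[pair[0]](pair[1])

families = {1: {'a', 'b'}, 2: {'x'}}
S, sigma = coproduct(families)
alphas = {1: lambda a: len(a), 2: lambda a: 7}   # a family of maps into X = integers
alpha = induced_morphism(alphas)
for i, A in families.items():
    for a in A:
        assert alpha(sigma[i](a)) == alphas[i](a)   # sigma_i composed with alpha = alpha_i
```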
References
[1] TSALENKO, M.SH. and SHUL'GEĬFER, E.G.: Fundamentals of category theory, Moscow, 1974 (in Russian).
M.Sh. Tsalenko
Editorial comments. Also in not necessarily Abelian categories the coproduct of a family of objects is frequently called the sum of a family of objects or the direct sum of a family of objects. Often used notations are $\coprod_{i\in I}A_i$, $\sum_{i\in I}A_i$ and $\oplus_{i\in I}A_i$.
References
[A1] POPESCU, N.: Abelian categories with applications to rings and modules, Acad. Press, 1973.
[A2] ADÁMEK, J.: Theory of mathematical structures, Reidel, 1983.
AMS 1980 Subject Classification: 18AXX
CORE IN THE THEORY OF GAMES - The set of all non-dominated outcomes, that is, the set $C$ of outcomes such that a domination $s \succ_K c$ cannot hold for any outcomes $s \in S$, $c \in C$ and coalition $K \in \mathfrak{R}$. One defines in this respect:

1) The core. The set $c(v)$ of imputations that are not dominated by any other imputation; the core coincides with the set of imputations satisfying $\sum_{i\in S}x_i \ge v(S)$ for any coalition $S$. If $c(v) \ne \emptyset$ and a von Neumann-Morgenstern solution (see Solution in game theory) exists, then $c(v)$ is contained in any von Neumann-Morgenstern solution.

2) The kernel. The set $k(v)$ of individually rational configurations $(x, \mathfrak{B})$ (see Stability in game theory) such that the following inequality holds for any $i, j \in B \in \mathfrak{B}$:
$$\Bigl[\max_{S\in\Gamma_{ij}} e(S,x) - \max_{S\in\Gamma_{ji}} e(S,x)\Bigr]x_j \le 0,$$
where $e(S,x) = v(S) - \sum_{k\in S}x_k$ and $\Gamma_{ij}$ is the set of coalitions containing the player $i$ and not containing the player $j$. The kernel $k(v)$ is contained in an $\mathcal{M}_1^{(i)}$-bargaining set.

3) The nucleolus. The minimal imputation $n(v)$ relative to the quasi-order $\prec$ defined on the set of imputations by: $x \prec y$ if and only if the vector $\theta(x,v) = (\theta_1(x,v),\dots,\theta_n(x,v))$, where
$$\theta_i(x,v) = \max_{|\mathfrak{U}|=i}\,\min_{S\in\mathfrak{U}} e(S,x),$$
lexicographically precedes $\theta(y,v)$. The nucleolus $n(v)$ exists and is unique for any game with a non-empty set of imputations. In a cooperative game the nucleolus is contained in the kernel.
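As a small illustration of the core condition in 1), the following Python sketch checks whether an imputation of a hypothetical three-person game satisfies $\sum_{i\in S}x_i \ge v(S)$ for every coalition $S$; the characteristic function used here is an invented example.

```python
from itertools import combinations

# Hypothetical 3-person game: v({i}) = 0, v of any pair = 60, v(N) = 90.
players = [1, 2, 3]
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 60, frozenset({1, 3}): 60, frozenset({2, 3}): 60,
     frozenset({1, 2, 3}): 90}

def in_core(x):
    """Core test: sum over x of every coalition S is at least v(S), total = v(N)."""
    if abs(sum(x.values()) - v[frozenset(players)]) > 1e-9:
        return False
    coalitions = (frozenset(c) for r in range(1, len(players) + 1)
                  for c in combinations(players, r))
    return all(sum(x[i] for i in S) >= v[S] for S in coalitions)

print(in_core({1: 30, 2: 30, 3: 30}))  # True:  every pair gets 60 >= v(S) = 60
print(in_core({1: 50, 2: 30, 3: 10}))  # False: coalition {2,3} gets only 40 < 60
```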
References
[1] VOROB'EV, N.N.: 'The present state of the theory of games', Russian Math. Surveys 25, no. 2 (1970), 77-136. (Uspekhi Mat. Nauk 25, no. 2 (1970), 103-107)
A.I. Sobolev
Editorial comments. The Russian word ('yadro') is the same for all three notions defined above, but these notions may be distinguished by prefixing with the corresponding English letter ('c-yadro' for core, 'k-yadro' for kernel and 'n-yadro' for nucleolus). These three notions do not share many properties.
See [A1], [A7] for core, [A2] for kernel and [A3] for nucleolus. [A4], [A5] are general references. [A6] deals also with mathematical economics and the role of the concept of the core of a game in that setting.
References
[A1] BONDAREVA, O.N.: 'Certain applications of the methods of linear programming to the theory of cooperative games', Probl. Kibernet. 10 (1963), 119-139 (in Russian).
[A2] MASCHLER, M. and DAVIS, M.: 'The kernel of a cooperative game', Naval Res. Logist. Quart. 12 (1965), 223-259.
[A3] SCHMEIDLER, D.: 'The nucleolus of a characteristic function game', SIAM J. Appl. Math. 17 (1969), 1163-1170.
[A4] OWEN, G.: Game theory, Acad. Press, 1982.
[A5] SZÉP, J. and FORGÓ, F.: Introduction to the theory of games, Reidel, 1985.
[A6] ROSENMÜLLER, J.: Cooperative games and markets, North-Holland, 1981.
[A7] SHAPLEY, L.S.: 'On balanced sets and cores', Naval Res. Logist. Quart. 14 (1967), 453-460.
AMS 1980 Subject Classification: 90D12
CORNISH-FISHER EXPANSION - An asymptotic expansion of the quantiles of a distribution (close to the standard normal one) in terms of the corresponding quantiles of the standard normal distribution, in powers of a small parameter. It was studied by E.A. Cornish and R.A. Fisher [1]. If $F(x,t)$ is a distribution function depending on $t$ as a parameter, if $\Phi(x)$ is the normal distribution function with parameters $(0,1)$, and if $F(x,t)\to\Phi(x)$ as $t\to 0$, then, subject to certain assumptions on $F(x,t)$, the Cornish-Fisher expansion of the function $x = F^{-1}[\Phi(z),t]$ (where $F^{-1}$ is the function inverse to $F$) has the form
$$x = z + \sum_{i=1}^{m-1} S_i(z)t^i + O(t^m), \tag{1}$$
where the $S_i(z)$ are certain polynomials in $z$. Similarly, one defines the Cornish-Fisher expansion of the function $z = \Phi^{-1}[F(x,t)]$ ($\Phi^{-1}$ being the function inverse to $\Phi$) in powers of $x$:
$$z = x + \sum_{i=1}^{m-1} Q_i(x)t^i + O(t^m), \tag{2}$$
where the $Q_i(x)$ are certain polynomials in $x$. Formula (2) is obtained by expanding $\Phi^{-1}$ in a Taylor series about the point $\Phi(x)$ and using the Edgeworth expansion. Formula (1) is the inversion of (2).
If $X$ is a random variable with distribution function $F(x,t)$, then the variable $Z = Z(X) = \Phi^{-1}[F(X,t)]$ is normally distributed with parameters $(0,1)$, and, as follows from (2), $\Phi(x)$ approximates the distribution function of the variable
$$X + \sum_{i=1}^{m-1} Q_i(X)t^i$$
as $t\to 0$ better than it approximates $F(x,t)$. If $X$ has zero expectation and unit variance, then the first terms of the expansion (1) have the form
$$x = z + [\gamma_1 h_1(z)] + [\gamma_2 h_2(z) + \gamma_1^2 h_3(z)] + \cdots.$$
Here $\gamma_1 = \kappa_3/\kappa_2^{3/2}$, $\gamma_2 = \kappa_4/\kappa_2^2$ with $\kappa_r$ the $r$-th cumulant of $X$, $h_1(z) = H_2(z)/6$, $h_2(z) = H_3(z)/24$, $h_3(z) = -[2H_3(z)+H_1(z)]/36$, and with $H_r(z)$ the Hermite polynomials, defined by the relation
$$\phi(z)H_r(z) = (-1)^r\frac{d^r\phi(z)}{dz^r} \qquad (\phi(z) = \Phi'(z)).$$
Concerning expansions for random variables obeying limit laws from the family of Pearson distributions see [3]. See also Random variables, transformations of.
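A numerical sketch of the first correction terms in (1); the target distribution (a standardized gamma law) and its parameters are illustrative assumptions, not taken from the article.

```python
import math
from statistics import NormalDist

# First terms of (1): approximate quantile of a standardized variable with
# skewness gamma1 and excess kurtosis gamma2.

def cornish_fisher_quantile(p, gamma1, gamma2):
    z = NormalDist().inv_cdf(p)            # standard normal quantile
    h1 = (z**2 - 1) / 6                    # H_2(z)/6
    h2 = (z**3 - 3*z) / 24                 # H_3(z)/24
    h3 = -(2*z**3 - 5*z) / 36              # -[2*H_3(z) + H_1(z)]/36
    return z + gamma1 * h1 + gamma2 * h2 + gamma1**2 * h3

# A standardized Gamma(shape=a) variable has gamma1 = 2/sqrt(a), gamma2 = 6/a.
a = 16.0
print(cornish_fisher_quantile(0.95, 2 / math.sqrt(a), 6 / a))
# about 1.77, close to the exact standardized Gamma 0.95-quantile
```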
References
[1] CORNISH, E.A. and FISHER, R.A.: 'Moments and cumulants in the specification of distributions', Rev. Inst. Internat. Statist. 5 (1937), 307-320.
[2] KENDALL, M.G. and STUART, A.: The advanced theory of statistics. Distribution theory, Griffin, 1969.
[3] BOL'SHEV, L.N.: 'Asymptotically Pearson transformations', Theor. Probab. Appl. 8 (1963), 121-146. (Teor. Veroyatnost. i Primen. 8 (1963), 129-155)
V.I. Pagurova
Editorial comments. For methods of using an Edgeworth expansion to obtain (2), see [A1] (see also Edgeworth series).
References
[A1] BICKEL, P.J.: 'Edgeworth expansions in nonparametric statistics', Ann. Statist. 2 (1974), 1-20.
[A2] JOHNSON, N.L. and KOTZ, S.: Distributions in statistics. Continuous distributions, 1, Houghton Mifflin, 1970.
AMS 1980 Subject Classification: 60F05, 62E20, 60EXX
CORNU SPIRAL, clothoid - A transcendental plane curve (see Fig.) whose natural equation is
$$r = \frac{a}{s},$$
where $r$ is the radius of curvature, $a = \text{const}$ and $s$ is the arc length. It can be parametrized by the Fresnel integrals
$$x = \int_0^s \cos\frac{s^2}{2}\,ds, \qquad y = \int_0^s \sin\frac{s^2}{2}\,ds,$$
which are well known in diffraction theory. The spiral of Cornu touches the horizontal axis at the origin. The asymptotic points are $M_1(\sqrt{\pi}/2, \sqrt{\pi}/2)$ and $M_2(-\sqrt{\pi}/2, -\sqrt{\pi}/2)$.
[Figure: the Cornu spiral]
The spiral of Cornu is sometimes called the spiral of Euler after L. Euler, who mentioned it first (1744). Beginning with the works of A. Cornu (1874), the spiral of Cornu is widely used in the calculation of diffraction of light.
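A numerical sketch of the parametrization: points of the spiral are obtained here by cumulative trapezoidal sums of the two integrals above, and the coordinates wind towards the asymptotic point $(\sqrt{\pi}/2, \sqrt{\pi}/2)$, oscillating about it with an amplitude of order $1/s$.

```python
import numpy as np

# Cumulative trapezoidal approximation of x(s), y(s) on a fine grid.
s = np.linspace(0.0, 25.0, 500001)
cos_vals, sin_vals = np.cos(0.5 * s**2), np.sin(0.5 * s**2)
ds = s[1] - s[0]
x = np.concatenate(([0.0], np.cumsum(0.5 * (cos_vals[1:] + cos_vals[:-1]) * ds)))
y = np.concatenate(([0.0], np.cumsum(0.5 * (sin_vals[1:] + sin_vals[:-1]) * ds)))

print(x[-1], y[-1], np.sqrt(np.pi) / 2)   # last two values lie within ~1/25 of sqrt(pi)/2
```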
References
[1] JAHNKE, E., EMDE, F. and LÖSCH, F.: Tafeln höherer Funktionen, Teubner, 1966.
D.D. Sokolov
[A1] LAWRENCE, J.D.: A catalog of special plane curves, Dover, reprint, 1972.
AMS 1980 Subject Classification: 53A04
CORRELATION, duality - A bijective mapping $\kappa$ between projective spaces of the same finite dimension such that $S_p \subset S_q$ implies $\kappa(S_q) \subset \kappa(S_p)$. The image of a sum of subspaces under a correlation is the intersection of their images and, conversely, the image of an intersection is the sum of the images. In particular, the image of a point is a hyperplane and vice versa. A necessary and sufficient condition for the existence of a correlation of a projective space $\Pi_n(K)$ over a division ring $K$ onto a space $\Pi_n(L)$ over a division ring $L$ is that there exists an anti-isomorphism $\alpha\colon K \to L$, i.e. a bijective mapping for which $\alpha(x+y) = \alpha(x)+\alpha(y)$, $\alpha(xy) = \alpha(y)\alpha(x)$; in that case $\Pi_n(L)$ is dual to $\Pi_n(K)$. Examples of spaces with an auto-correlation, i.e. a correlation onto itself, are the real projective spaces ($K = \mathbf{R}$, $\alpha = \mathrm{id}$), the complex projective spaces ($K = \mathbf{C}$, $\alpha\colon z \mapsto \bar z$) and the quaternion projective spaces ($K = \mathbf{H}$, $\alpha\colon z \mapsto \bar z$).
A polarity is an auto-correlation $\kappa$ satisfying $\kappa^2 = \mathrm{id}$. A projective space $\Pi_n(K)$ over a division ring $K$ admits a polarity if and only if $K$ admits an involutory anti-automorphism, i.e. an anti-automorphism $\alpha$ with $\alpha^2 = \mathrm{id}$.
A subspace $W$ is called a null subspace relative to an auto-correlation $\kappa$ if $P \subset \kappa(P)$ for any point $P \in W$, and strictly isotropic if $W \subset \kappa(W)$. Any strictly isotropic subspace is a null subspace. A polarity relative to which the whole space is a null space is called a null (or symplectic) polarity (see also Polarity).
Let the projective space $\Pi_n(K)$ over a division ring $K$ be interpreted as the set of linear subspaces of the (left) linear space $K^{n+1}$ over $K$. A semi-bilinear form on $K^{n+1}$ is a mapping $f\colon K^{n+1}\times K^{n+1}\to K$ together with an anti-automorphism $\alpha$ of $K$ such that
$$f(x+y,z) = f(x,z)+f(y,z),$$
$$f(x,y+z) = f(x,y)+f(x,z),$$
$$f(x,ky) = f(x,y)\alpha(k).$$
In particular, if $K$ is a field and $\alpha = \mathrm{id}$, then $f$ is a bilinear form. A semi-bilinear form $f$ is called non-degenerate provided $f(x,y) = 0$ for all $x$ (all $y$) implies $y = 0$ ($x = 0$, respectively). Any auto-correlation $\kappa$ of $\Pi_n(K)$ can be represented with the aid of a non-degenerate semi-bilinear form $f$ in the following way: for a subspace $V$ of $K^{n+1}$ its image is the orthogonal complement of $V$ with respect to $f$:
$$\kappa(V) = \{y\in K^{n+1}\colon f(x,y) = 0 \text{ for all } x\in V\}$$
(the Birkhoff-von Neumann theorem, [2]). $\kappa$ is a polarity if and only if $f$ is reflexive, i.e. if $f(x,y) = 0$ implies $f(y,x) = 0$. By multiplying $f$ by a suitable element of $K$ one can bring any reflexive non-degenerate semi-bilinear form $f$ and the corresponding anti-automorphism $\alpha$ in either of the following two forms:
1) $\alpha$ is an involution, i.e. $\alpha^2 = \mathrm{id}$, and
$$f(y,x) = \alpha(f(x,y)).$$
In this case one calls $f$ symmetric if $\alpha = \mathrm{id}$ (and hence necessarily $K$ is a field) and Hermitian if $\alpha \ne \mathrm{id}$.

2) $\alpha = \mathrm{id}$ (and hence $K$ is a field) and
$$f(y,x) = -f(x,y).$$
Such an $f$ is called anti-symmetric. A special example of a correlation is the following.
Let $\Pi_n(K)$ be a projective space over a division ring $K$. Define the opposite division ring $K^{\circ}$ as the set of elements of $K$ with the same addition but with multiplication $x\circ y = yx$. The mapping $\alpha\colon x\mapsto x$ is an anti-isomorphism from $K$ onto $K^{\circ}$ which defines the canonical correlation from $\Pi_n(K)$ onto $\Pi_n(K^{\circ})$. The (left) projective space $\Pi_n(K^{\circ})$, which can be identified with the right projective space $\Pi_n(K)^*$, i.e. with the set of linear subspaces of the $(n+1)$-dimensional right vector space $K^{n+1}$, is the (canonical) dual space of $\Pi_n(K)$ (cf. Projective algebra, the construction of $\Pi_n$).
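A small numerical sketch of the Birkhoff-von Neumann representation over the real field: the image of a subspace $V$ under the auto-correlation defined by a non-degenerate symmetric bilinear form $f(x,y)=x^{\mathrm T}Ay$ is computed as the orthogonal complement $\{y\colon f(x,y)=0 \text{ for all } x\in V\}$. The matrix and subspace below are made-up data.

```python
import numpy as np

def correlation_image(A, V_basis):
    """Rows of V_basis span V in K^(n+1); return a basis of kappa(V) = null(V A)."""
    M = V_basis @ A
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > 1e-10))
    return vt[rank:]                        # rows spanning the null space of M

A = np.diag([1.0, 1.0, 1.0, -1.0])          # a non-degenerate symmetric form on R^4
V = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])        # a line of the projective space P_3
W = correlation_image(A, V)
print(W.shape[0])                           # dim kappa(V) = 4 - dim V = 2
print(np.allclose(V @ A @ W.T, 0.0))        # f(x, y) = 0 for all x in V, y in kappa(V)
```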
Editorial comments.
References
[A1] BAER, R.: Linear algebra and projective geometry, Acad. Press, 1952.
[A2] BIRKHOFF, G. and NEUMANN, J. VON: 'The logic of quantum mechanics', Ann. of Math. 37 (1936), 823-843.
[A3] DIEUDONNÉ, J.: La géométrie des groupes classiques, Springer, 1963.
[A4] HUGHES, D.R. and PIPER, F.C.: Projective planes, Springer, 1972.
AMS 1980 Subject Classification: 51A10, 10C05, 15A63
CORRELATION COEFFICIENT - A numerical characteristic of the joint distribution of two random variables, expressing a relationship between them. The correlation coefficient $\rho = \rho(X_1,X_2)$ for random variables $X_1$ and $X_2$ with mathematical expectations $a_1 = EX_1$ and $a_2 = EX_2$ and non-zero variances $\sigma_1^2 = DX_1$ and $\sigma_2^2 = DX_2$ is defined by
$$\rho(X_1,X_2) = \frac{E(X_1-a_1)(X_2-a_2)}{\sigma_1\sigma_2}.$$
The correlation coefficient of $X_1$ and $X_2$ is simply the covariance of the normalized variables $(X_1-a_1)/\sigma_1$ and $(X_2-a_2)/\sigma_2$. The correlation coefficient is symmetric with respect to $X_1$ and $X_2$ and is invariant under change of the origin and scaling. In all cases $-1 \le \rho \le 1$. The importance of the correlation coefficient as one of the possible measures of dependence is determined by its following properties: 1) if $X_1$ and $X_2$ are independent, then $\rho(X_1,X_2) = 0$ (the converse is not necessarily true). Random variables for which $\rho = 0$ are said to be non-correlated. 2) $|\rho| = 1$ if and only if the dependence between the random variables is linear:
$$X_2 = \rho\frac{\sigma_2}{\sigma_1}(X_1-a_1)+a_2.$$
The difficulty of interpreting $\rho$ as a measure of dependence is that the equality $\rho = 0$ may be valid for both independent and dependent random variables; in the general case, a necessary and sufficient condition for independence is that the maximal correlation coefficient equals zero. Thus, the correlation coefficient does not exhaust all types of dependence between random variables and it is a measure of linear dependence only. The degree of this linear dependence is characterized as follows: the random variable
$$\hat X_2 = \rho\frac{\sigma_2}{\sigma_1}(X_1-a_1)+a_2$$
gives a linear representation of $X_2$ in terms of $X_1$ which is best in the sense that
$$E(X_2-\hat X_2)^2 = \min_{c_1,c_2}E(X_2-c_1X_1-c_2)^2.$$
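The following simulation sketch illustrates the remark that $\rho = 0$ does not imply independence; the data-generating choice (a standard normal variable and its square) is purely illustrative.

```python
import numpy as np

# X_1 standard normal and X_2 = X_1^2 are functionally dependent, yet their
# correlation coefficient is (theoretically) zero.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(200_000)
x2 = x1 ** 2
rho = np.corrcoef(x1, x2)[0, 1]
print(round(rho, 3))   # close to 0 although X_2 is a function of X_1
```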
A. V. Prokhorov
AMS 1980 Subject Classification: 62JXX
CORRELATION FUNCTION of a real stochastic process $\{X(t)\colon t\in T\}$ - The function in the arguments $t, s\in T$ defined by
$$B(t,s) = E[X(t)-EX(t)][X(s)-EX(s)].$$
For the correlation function to be defined, it must be assumed that the process $X(t)$ has a finite second moment $EX(t)^2$ for all $t\in T$. The parameter $t$ varies here over some subset $T$ of the real line; it is usually interpreted as 'time', though an entirely analogous definition is possible for the correlation function of a stochastic field, where $T$ is a subset of a finite-dimensional space. If $X(t) = [X_1(t),\dots,X_n(t)]$ is a multivariate stochastic process (stochastic function), then its correlation function is defined to be the matrix-valued function
$$B(t,s) = \|B_{ij}(t,s)\|_{i,j=1}^n,$$
where
$$B_{ij}(t,s) = E[X_i(t)-EX_i(t)][X_j(s)-EX_j(s)]$$
is the joint correlation function of the processes $X_i(t)$, $X_j(t)$.
The correlation function is an important characteristic of a stochastic process. If $X(t)$ is a Gaussian process, then its correlation function $B(t,s)$ and its mean value $EX(t)$ (i.e. its first and second moments) uniquely determine its finite-dimensional distributions; hence also the process as a whole. In the general case, the first two moments are known to be insufficient for a full description of a stochastic process. For example, $B(t,s) = e^{-\alpha|t-s|}$ is at one and the same time the correlation function of a stationary Gaussian Markov process the trajectories of which are continuous, and also the correlation function of the so-called telegraph signal, a stationary Markov point process taking the two values $\pm 1$. However, the correlation function does determine several important properties of a process: the so-called second-order properties (i.e. properties expressed in terms of second moments). In view of this, and also because of their relative simplicity, correlation methods are frequently employed both in the theory of stochastic processes and in its statistical applications (see Correlogram).
The rate and nature of decrease of the correlations as $|t-s|\to\infty$ provides an idea of the ergodic properties of a process. Conditions relating to the rate of decrease of correlations, in some form or another, appear in limit theorems for stochastic processes. Local second-order properties, such as mean-square continuity and differentiability, provide a useful - though extremely crude - characteristic of the local behaviour of a process. The properties of the trajectories in terms of the correlation function have been investigated to a considerable degree in the Gaussian case (see Sample function). One of the most complete branches of the theory of stochastic processes is the theory of linear extrapolation and filtration, which yields optimal linear algorithms for the prediction and approximation of stochastic processes; this theory is based on a knowledge of the correlation function.
A characteristic property of the correlation function is the fact that it is positive definite:
$$\sum_{i,j=1}^n c_i\bar c_j B(t_i,t_j) \ge 0,$$
for any $n$, any complex $c_1,\dots,c_n$ and any $t_1,\dots,t_n\in T$. In the most important case of a stationary process in the broad sense, $B(t,s)$ depends (only) on the difference between the arguments: $B(t,s) = R(t-s)$. The condition that it be positive definite then becomes
$$\sum_{i,j=1}^n c_i\bar c_j R(t_i-t_j) \ge 0.$$
If $R(t)$ is also continuous at $t=0$ (in other words, the process $X(t)$ is mean-square continuous), then
$$R(t) = \int e^{it\lambda}F(d\lambda),$$
where $F(d\lambda)$ is a positive finite measure; here $\lambda$ runs over the entire real line if $T = (-\infty,\infty)$ (the case of 'continuous time'), or over the interval $[-\pi,\pi]$ if $T = \{\dots,-1,0,1,\dots\}$ (the case of 'discrete time'). The measure $F(d\lambda)$ is known as the spectral measure of the stochastic process. Thus, the correlation and spectral properties of a stationary stochastic process prove to be closely related; for example, the rate of decrease in correlations as $t\to\infty$ corresponds to the degree of smoothness of the spectral density $f(\lambda) = F(d\lambda)/d\lambda$, etc.
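As a small numerical check of the positive-definiteness property for the exponential correlation function $R(t)=e^{-\alpha|t|}$ mentioned above (the sample points and the value of $\alpha$ are arbitrary choices):

```python
import numpy as np

# The matrix (R(t_i - t_j)) must be positive semi-definite for any points t_i.
rng = np.random.default_rng(1)
alpha = 0.7
t = np.sort(rng.uniform(0.0, 10.0, size=8))
R = np.exp(-alpha * np.abs(t[:, None] - t[None, :]))
eigenvalues = np.linalg.eigvalsh(R)
print(eigenvalues.min() >= -1e-12)   # True: all eigenvalues are non-negative
```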
In statistical mechanics, the term is also used for the joint probability density $p(x_1,\dots,x_m)$ of $m$ distinct particles of the system under consideration placed at points $x_1,\dots,x_m$; the totality of these functions uniquely determines the corresponding discrete stochastic field.
References
[1] DOOB, J.L.: Stochastic processes, Chapman and Hall, 1953.
[2] LOÈVE, M.: Probability theory, Princeton Univ. Press, 1963.
[3] GIKHMAN, I.I. and SKOROKHOD, A.V.: Introduction to the theory of stochastic processes, Saunders, 1969 (translated from the Russian).
A.S. Kholevo
AMS 1980 Subject Classification: 62JXX
CORRELATION FUNCTION IN STATISTICAL MECHANICS - A function describing the influence of particles or groups of particles on one another and the effects due to the interaction of subsystems of the system under consideration.
In classical statistical mechanics, the correlation functions $G_2(1,2), G_3(1,2,3),\dots,$ are defined by the relations
$$F_2(1,2) = F_1(1)F_1(2)+G_2(1,2),$$
$$F_3(1,2,3) = F_1(1)F_1(2)F_1(3)+F_1(1)G_2(2,3)+F_1(2)G_2(1,3)+F_1(3)G_2(1,2)+G_3(1,2,3),\ \dots,$$
where the symbols $1,2,\dots,$ in the arguments of the functions denote the sets of coordinates $r$ and momenta $p$ of the 1-st, 2-nd, ..., particles, respectively, and $F_s(1,\dots,s)$ are the reduced distribution functions
$$F_s(1,\dots,s) = V^s\Bigl[1-\frac{1}{N}\Bigr]\cdots\Bigl[1-\frac{s-1}{N}\Bigr]\int D_t\,d(s+1)\cdots dN,$$
where $V$ is the volume of the system, $N$ is the number of particles and the $D_t = D_t(1,\dots,N)$ are the distribution functions in the phase space at time $t$, normalized so that
$$\int D_t(1,\dots,N)\,d1\cdots dN = 1.$$
The variation of $D_t$ in time is characterized by the Liouville equation $\partial D_t/\partial t = \Lambda D_t$, where $\Lambda$ represents the Liouville operator, which is not explicitly dependent on time. One usually considers the case in which $\Lambda$ is the sum of an additive part and a binary part characterizing the interactions of the particles:
$$\Lambda = \sum_{1\le j\le N}\Lambda(j)+\sum_{1\le j_1<j_2\le N}\Lambda(j_1,j_2).$$
According to the principle of correlation damping, the correlation functions satisfy the boundary conditions $G_s(1,\dots,s)\to 0$ as $\max\{|r_1-r_2|,\dots,|r_1-r_s|,\dots,|r_{s-1}-r_s|\}\to\infty$.
The correlation functions $G_1(1) = F_1(1), G_2(1,2),\dots,G_s(1,\dots,s)$ are the functional derivatives,
$$G_s(1,\dots,s) = \left[\frac{\delta^s A_t(u)}{\delta u(1)\,\delta u(2)\cdots\delta u(s)}\right]_{u=0},$$
of a functional $A_t(u)$ which is related to the so-called generating functional
$$L_t(u) = \int\Bigl\{\prod_{1\le j\le N}\Bigl[1+\frac{u(j)}{V}\Bigr]\Bigr\}D_t\,d1\cdots dN.$$
The functional $A_t(u)$ satisfies the equation
$$\frac{\partial A_t(u)}{\partial t} = \int u(1)\Lambda(1)\frac{\delta A_t(u)}{\delta u(1)}\,d1+\frac12\int\{u(1)u(2)+u(1)+u(2)\}\Lambda(1,2)\left\{\frac{\delta A_t(u)}{\delta u(1)}\frac{\delta A_t(u)}{\delta u(2)}+\frac{\delta^2A_t(u)}{\delta u(1)\,\delta u(2)}\right\}d1\,d2.$$
In quantum statistical mechanics, the correlation functions are operator quantities, defined as follows:
$$F_2(1,2) = S(1,2)\{F_1(1)F_1(2)\}+G_2(1,2), \tag{*}$$
$$F_3(1,2,3) = S(1,2,3)\{F_1(1)F_1(2)F_1(3)+F_1(1)G_2(2,3)+F_1(2)G_2(1,3)+F_1(3)G_2(1,2)\}+G_3(1,2,3),\ \dots,$$
where $S(1,2)$, $S(1,2,3)$ are the symmetrization operator for Bose systems and the anti-symmetrization operator for Fermi systems. The correlation functions (*), forming the density matrix, satisfy the quantum-mechanical Liouville equation (see [2]).
In quantum statistical mechanics, besides the correlation function (*) one considers correlation functions based on conventional thermodynamical averages (see [3]), and correlation functions based on quasi-averages (see [3]).

Bilinear combinations of correlation functions (both quantum-mechanical and classical) yield the Green functions (see [5]). Correlation functions possess spectral representations; they satisfy the Bogolyubov inequality and a variation of the mean-value theorem (see [4]).

Correlation functions corresponding to the Kirkwood decomposition are sometimes used (see [6]); another version is a space-time correlation function (see [8]).

Correlation functions may be interpreted as characteristic functions of probability measures (see [9]).
References
[1] BOGOLYUBOV, N.N.: Problems in the dynamic theory in statistical mechanics, Moscow, 1946 (in Russian).
[2] BOGOLYUBOV, N.N. and GUROV, K.P.: Zh. Eksp. i Teoret. Fiziki 17, no. 7 (1947), 614-628.
[3] BOGOLYUBOV, N.N.: Selected works, 3, Kiev, 1971 (in Russian).
[4] BOGOLYUBOV, N.N., JR. and SADOVNIKOV, B.I.: Some questions in statistical mechanics, Moscow, 1975 (in Russian).
[5] BOGOLYUBOV, N.N. and TYABLIKOV, S.B.: Dokl. Akad. Nauk SSSR 159, no. 1 (1959), 53-56.
[6] LIBOV, R.: Introduction to the theory of kinetic equations, Wiley, 1969.
[7] ISIHARA, A.: Statistical physics, Acad. Press, 1971.
[8] RUELLE, D.: Statistical mechanics: rigorous results, Benjamin, 1974.
[9] PRESTON, C.J.: Gibbs states on countable sets, Cambridge Univ. Press, 1974.
A.N. Ermilov
A.M. Kurbatov
AMS 1980 Subject Classification: 82A05, 82A15
CORRELATION (IN STATISTICS) - A dependence between random variables not necessarily expressed by a rigorous functional relationship. Unlike functional dependence, a correlation is, as a rule, considered when one of the random variables depends not only on the other (given) one, but also on several random factors. The dependence between two random events is manifested in the fact that the conditional probability of one of them, given the occurrence of the other, differs from the unconditional probability. Similarly, the influence of one random variable on another is characterized by the conditional distributions of one of them, given fixed values of the other. Let $X$ and $Y$ be random variables with given joint distribution, let $m_X$ and $m_Y$ be the expectations of $X$ and $Y$, let $\sigma_X^2$ and $\sigma_Y^2$ be the variances of $X$ and $Y$, and let $\rho$ be the correlation coefficient of $X$ and $Y$. Assume that for every possible value $X = x$ the conditional mathematical expectation $y(x) = E[Y\mid X=x]$ of $Y$ is defined; then the function $y(x)$ is known as the regression of $Y$ given $X$, and its graph is the regression curve of $Y$ given $X$. The dependence of $Y$ on $X$ is manifested in the variation of the mean values of $Y$ as $X$ varies, although for each fixed value $X = x$, $Y$ remains a random variable with a well-defined spread. In order to determine to what degree of accuracy the regression reproduces the variation of $Y$ as $X$ varies, one uses the conditional variance of $Y$ for a given $X = x$ or its mean value (a measure of the spread of $Y$ about the regression curve):
$$\sigma^2_{Y\mid x} = E[Y-E(Y\mid X=x)]^2.$$
If $X$ and $Y$ are independent, then all conditional mathematical expectations of $Y$ are independent of $x$ and coincide with the unconditional expectations: $y(x) = m_Y$; and then also $\sigma^2_{Y\mid x} = \sigma^2_Y$. When $Y$ is a function of $X$ in the strict sense of the word, then for each $X = x$ the variable $Y$ takes only one definite value and $\sigma^2_{Y\mid x} = 0$. Similarly one defines $x(y) = E[X\mid Y=y]$ (the regression of $X$ given $Y$). A natural index of the concentration of the distribution near the regression curve $y(x)$ is the correlation ratio
$$\eta^2_{Y\mid X} = 1-\frac{\sigma^2_{Y\mid x}}{\sigma^2_Y}.$$
One has $\eta^2_{Y\mid X} = 0$ if and only if the regression has the form $y(x) = m_Y$, and in that case the correlation coefficient $\rho$ vanishes and $Y$ is not correlated with $X$. If the regression of $Y$ given $X$ is linear, i.e. the regression curve is the straight line
$$y(x) = m_Y+\rho\frac{\sigma_Y}{\sigma_X}(x-m_X),$$
then
$$\sigma^2_{Y\mid X} = \sigma^2_Y(1-\rho^2)\quad\text{and}\quad\eta^2_{Y\mid X} = \rho^2;$$
if, moreover, $|\rho| = 1$, then $Y$ is related to $X$ through an exact linear dependence; but if $\eta^2_{Y\mid X} = \rho^2<1$, there is no functional dependence between $Y$ and $X$. There is an exact functional dependence of $Y$ on $X$, other than a linear one, if and only if $\rho^2<\eta^2_{Y\mid X} = 1$. With rare exceptions, the practical use of the correlation coefficient as a measure of the lack of dependence is justifiable only when the joint distribution of $X$ and $Y$ is normal (or close to normal), since in that case $\rho = 0$ implies that $X$ and $Y$ are independent. Use of $\rho$ as a measure of dependence for arbitrary random variables $X$ and $Y$ frequently leads to erroneous conclusions, since $\rho$ may vanish even when a functional dependence exists. If the joint distribution of $X$ and $Y$ is normal, then both regression curves are straight lines and $\rho$ uniquely determines the concentration of the distribution near the regression curves: When $|\rho| = 1$ the regression curves merge into one, corresponding to linear dependence between $X$ and $Y$; when $\rho = 0$ one has independence.
When studying the interdependence of several random variables $X_1,\dots,X_n$ with a given joint distribution, one uses multiple and partial correlation ratios and coefficients. The latter are evaluated using the ordinary correlation coefficients between $X_i$ and $X_j$, the totality of which form the correlation matrix. A measure of the linear relationship between $X_1$ and the totality of the other variables $X_2,\dots,X_n$ is provided by the multiple-correlation coefficient. If the mutual relationship of $X_1$ and $X_2$ is assumed to be determined by the influence of the other variables $X_3,\dots,X_n$, then the partial correlation coefficient of $X_1$ and $X_2$ with respect to $X_3,\dots,X_n$ is an index of the linear relationship between $X_1$ and $X_2$ relative to $X_3,\dots,X_n$.
For measures of correlation based on rank statistics (cf. Rank statistic) see Kendall coefficient of rank corre­ lation; Spearman coefficient of rank correlation.
Mathematical statisticians have developed methods for estimating coefficients that characterize the correlation between random variables or tests; there are also methods to test hypotheses concerning their values, using their sampling analogues. These methods are collectively known as correlation analysis. Correlation analysis of statistical data consists of the following basic practical steps: 1) the construction of a scatter plot and the compilation of a correlation table; 2) the computation of sampling correlation ratios or correlation coefficients; 3) testing statistical hypotheses concerning the significance of the dependence. Further investigation may consist in establishing the concrete form of the dependence between the variables (see Regression).
Among the aids to analysis of two-dimensional sample data are the scatter plot and the correlation table. The scatter plot is obtained by plotting the sample points on the coordinate plane. Examination of the configuration formed by the points of the scatter plot yields a preliminary idea of the type of dependence between the random variables (e.g. whether one of the variables increases or decreases on the average as the other increases). Prior to numerical processing, the results are usually grouped and presented in the form of a correlation table. In each entry of this table one writes the number $n_{ij}$ of pairs $(x,y)$ with components in the appropriate grouping intervals. Assuming that the grouping intervals (in each of the variables) are equal in length, one takes the centres $x_i$ (or $y_j$) of the intervals and the numbers $n_{ij}$ as the basis for calculation.
For more accurate information about the nature and strength of the relationship than that provided by the scatter plot, one turns to the correlation coefficient and correlation ratio. The sample correlation coefficient is defined by the formula
$$\hat\rho = \frac{\sum_i\sum_j(x_i-\bar x)(y_j-\bar y)\,n_{ij}}{\sqrt{\sum_i n_{i\cdot}(x_i-\bar x)^2}\,\sqrt{\sum_j n_{\cdot j}(y_j-\bar y)^2}},$$
where $\bar x$ and $\bar y$ are the sample means of the two variables. In the case of a large number of independent observations, governed by one and the same near-normal distribution, $\hat\rho$ is a good approximation to the true correlation coefficient $\rho$. In all other cases, as characteristic of the strength of the relationship the correlation ratio is recommended, the interpretation of which is independent of the type of dependence being studied. The sample value $\hat\eta^2_{Y\mid X}$ is computed from the entries in the correlation table:
$$\hat\eta^2_{Y\mid X} = \frac{\dfrac{1}{n}\sum_i n_{i\cdot}(\bar y_i-\bar y)^2}{\dfrac{1}{n}\sum_j n_{\cdot j}(y_j-\bar y)^2},$$
where the numerator represents the spread of the conditional mean values $\bar y_i$ about the unconditional mean $\bar y$ (the sample value $\hat\eta^2_{X\mid Y}$ is defined analogously). The quantity $\hat\eta^2_{Y\mid X}-\hat\rho^2$ is used as an indicator of the deviation of the regression from linearity.
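A small sketch on ungrouped data (all weights $n_{ij}=1$; the data-generating model is an illustrative assumption): it computes $\hat\rho$ and $\hat\eta^2_{Y\mid X}$ and shows their difference for a non-linear regression.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.integers(1, 6, size=2_000).astype(float)      # grouping variable with 5 values
y = x ** 2 + rng.normal(0.0, 1.0, size=x.size)        # a non-linear dependence

rho_hat = np.corrcoef(x, y)[0, 1]                     # sample correlation coefficient

y_bar = y.mean()
between = sum((y[x == v].mean() - y_bar) ** 2 * (x == v).sum() for v in np.unique(x))
eta2_hat = between / ((y - y_bar) ** 2).sum()         # sample correlation ratio of y on x

print(round(rho_hat ** 2, 3), round(eta2_hat, 3))
# eta2_hat exceeds rho_hat^2; their difference indicates the deviation from linearity
```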
The testing of hypotheses concerning the significance of a relationship is based on the distributions of the sample correlation characteristics. In the case of a normal distribution, the value of the sample correlation coefficient $\hat\rho$ is significantly distinct from zero if
$$|\hat\rho|>\left[1+\frac{n-2}{t_\alpha^2}\right]^{-1/2},$$
where $t_\alpha$ is the critical value of the Student $t$-distribution with $(n-2)$ degrees of freedom corresponding to the chosen significance level $\alpha$. If $\rho\ne 0$ one usually uses the Fisher $z$-transform, with $\hat\rho$ replaced by $z$ according to the formula
$$z = \frac12\ln\frac{1+\hat\rho}{1-\hat\rho}.$$
Even at relatively small values of $n$ the distribution of $z$ is a good approximation to the normal distribution with mathematical expectation
$$\frac12\ln\frac{1+\rho}{1-\rho}+\frac{\rho}{2(n-1)}$$
and variance $1/(n-3)$. On this basis one can now define approximate confidence intervals for the true correlation coefficient $\rho$.
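A sketch of such an approximate confidence interval based on the Fisher $z$-transform (the numbers are illustrative; the small bias term $\rho/2(n-1)$ in the mean is ignored here for simplicity).

```python
import math
from statistics import NormalDist

def fisher_ci(rho_hat, n, level=0.95):
    """Approximate CI for rho: normal interval in the z-scale, mapped back with tanh."""
    z = 0.5 * math.log((1 + rho_hat) / (1 - rho_hat))
    half_width = NormalDist().inv_cdf(0.5 + level / 2) / math.sqrt(n - 3)
    return math.tanh(z - half_width), math.tanh(z + half_width)

print(fisher_ci(0.62, n=50))   # roughly (0.41, 0.77)
```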
For the distribution of the sample correlation ratio and for tests of the linearity hypothesis for the regression, see [3].
References
[1] CRAMÉR, H.: Mathematical methods of statistics, Princeton Univ. Press, 1946.
[2] WAERDEN, B.L. VAN DER: Mathematische Statistik, Springer, 1957.
[3] KENDALL, M.G. and STUART, A.: The advanced theory of statistics, 2. Inference and relationship, Griffin, 1979.
[4] AIVAZYAN, S.A.: Statistical study of dependence, Moscow, 1968 (in Russian).
A.V. Prokhorov
AMS 1980 Subject Classification: 62JXX
CORRELATION MATRIX - The matrix of correlation coefficients of several random variables. If $X_1,\dots,X_n$ are random variables with non-zero variances $\sigma_1^2,\dots,\sigma_n^2$, then the matrix entries $\rho_{ij}$ ($i\ne j$) are equal to the correlation coefficients (cf. Correlation coefficient) $\rho(X_i,X_j)$; for $i = j$ the element is defined to be 1. The properties of the correlation matrix $P$ are determined by the properties of the covariance matrix $\Sigma$, by virtue of the relation $\Sigma = BPB$, where $B$ is the diagonal matrix with (diagonal) entries $\sigma_1,\dots,\sigma_n$.
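The relation $\Sigma = BPB$ can be used directly to pass between the two matrices; a minimal numpy sketch with made-up numbers:

```python
import numpy as np

Sigma = np.array([[4.0, 2.4, -1.0],
                  [2.4, 9.0,  0.6],
                  [-1.0, 0.6, 1.0]])             # an example covariance matrix

B = np.diag(np.sqrt(np.diag(Sigma)))             # B = diag(sigma_1, ..., sigma_n)
P = np.linalg.inv(B) @ Sigma @ np.linalg.inv(B)  # correlation matrix
print(np.round(P, 3))                            # unit diagonal, entries rho_ij
print(np.allclose(B @ P @ B, Sigma))             # Sigma = B P B
```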
A. V. Prokhorov
AMS 1980 Subject Classification: 62JXX
CORRELATION RATIO - A characteristic of dependence between random variables. The correlation ratio of a random variable $Y$ relative to a random variable $X$ is the expression
$$\eta^2_{Y\mid X} = 1-E\left[\frac{D(Y\mid X)}{DY}\right],$$
where $DY$ is the variance of $Y$ and $D(Y\mid X)$ is the conditional variance of $Y$ given $X$, which characterizes the spread of $Y$ about its conditional mathematical expectation $E(Y\mid X)$ for a given value of $X$. Invariably, $0\le\eta^2_{Y\mid X}\le 1$. The equality $\eta^2_{Y\mid X} = 0$ corresponds to non-correlated random variables; $\eta^2_{Y\mid X} = 1$ if and only if there is an exact functional relationship between $Y$ and $X$; if $Y$ is linearly dependent on $X$, the correlation ratio coincides with the squared correlation coefficient. The correlation ratio is non-symmetric in $X$ and $Y$, and so, together with $\eta^2_{Y\mid X}$, one considers $\eta^2_{X\mid Y}$ (the correlation ratio of $X$ relative to $Y$, defined analogously). There is no simple relationship between $\eta^2_{Y\mid X}$ and $\eta^2_{X\mid Y}$. See also Correlation (in statistics).
A.V. Prokhorov
AMS 1980 Subject Classification: 62JXX
CORRELOGRAM of a time series $x_1,\dots,x_T$ - The set of serial (sample) correlation coefficients
$$r_t = \frac{\dfrac{1}{T}\sum_{s=1}^{T-t}(x_s-\bar x)(x_{s+t}-\bar x)}{\dfrac{1}{T}\sum_{s=1}^{T}(x_s-\bar x)^2},\qquad t = 1,2,\dots,$$
where $\bar x$ is the sample mean of the series, i.e.
$$\bar x = \frac{1}{T}\sum_{s=1}^{T}x_s.$$
The term correlogram is sometimes applied to the graph of $r_t$ as a function of $t$. It is an empirical measure of the statistical interdependence of the terms of the sequence $\{x_t\}$. In time-series analysis, the correlogram is used for statistical inferences concerning a probability model suggested for the description and explanation of an observed sequence of data.
The term theoretical correlogram is sometimes used for the normalized correlation function of a (stationary) random sequence $\{X_t\}$:
$$\rho_t = \frac{\operatorname{cov}(X_s,X_{s+t})}{D(X_s)},\qquad t = 1,2,\dots,$$
where
$$\operatorname{cov}(X_s,X_{s+t}) = E(X_s-EX_s)(X_{s+t}-EX_{s+t})$$
is the covariance of the random variables $X_s$, $X_{s+t}$, and $D(X_s)$ is the variance of the random variable $X_s$. If $\{x_t\}$ is regarded as a realization of the random sequence $\{X_t\}$, then, under fairly general assumptions, the sample correlogram $\{r_t\}$ gives consistent and asymptotically normal estimators for the theoretical correlogram $\{\rho_t\}$ (see [3]).
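A simulation sketch comparing the sample correlogram of an auto-regressive series $x_s=\phi x_{s-1}+e_s$ with its theoretical correlogram $\rho_t=\phi^t$, which decreases exponentially as noted below (the series and its parameter are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)
phi, T = 0.6, 5_000
x = np.zeros(T)
for s in range(1, T):
    x[s] = phi * x[s - 1] + rng.standard_normal()     # simulate an AR(1) series

xc = x - x.mean()
denom = np.sum(xc ** 2)
r = [np.sum(xc[:T - t] * xc[t:]) / denom for t in range(1, 6)]   # sample correlogram

print(np.round(r, 2))                      # close to the theoretical values below
print(np.round(phi ** np.arange(1, 6), 2)) # [0.6, 0.36, 0.22, 0.13, 0.08]
```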
From a mathematical point of view, the descriptions of a stationary random sequence in correlation and spectral terms are equivalent; in the statistical analysis of time series, however, the correlation and spectral approaches have different fields of application, depending on the initial material and the final aim of the analysis. Whereas spectral analysis gives one an idea of the existence and intensities of periodic components in a time series, correlation methods are more convenient when one is investigating statistical relationships between consecutive values of the observed data. In statistical practice, methods based on the correlogram are usually employed when there are grounds to postulate a fairly simple stochastic model (auto-regression, moving averages or a mixed model with auto-regression and moving averages of relatively low orders) generating the given time series (e.g. in econometrics). In such models, the theoretical correlogram $\{\rho_t\}$ possesses special properties (it vanishes for all sufficiently large $t$-values in the moving-averages model; in auto-regression models it decreases exponentially with possible oscillation). The presence of one such property in the sample correlogram may serve as an indication for a certain hypothetical probability model. In order to check for goodness-of-fit and to estimate the parameters of the selected model, statistical methods have been developed based on the distributions of the serial correlation coefficients.
References
[1] ANDERSON, T.W.: The statistical analysis of time series, Wiley, 1971.
[2] KENDALL, M.G. and STUART, A.: The advanced theory of statistics, 3. Design and analysis, Griffin, 1966.
[3] HANNAN, E.J.: Multiple time series, Wiley, 1970.
A. S. Kholevo
AMS 1980 Subject Classification: 62M1 0
CORRESPONDENCE, relation - A generalization of the notion of a binary relation (usually) between two sets or mathematical structures of the same type. Correspondences are widely used in mathematics and also in various applied disciplines, such as theoretical programming, graph theory, systems theory, and mathematical linguistics.
A correspondence between two sets $A$ and $B$ is any subset $R$ of the Cartesian product $A\times B$. In other words, a correspondence between $A$ and $B$ consists of certain ordered pairs $(a,b)$, where $a\in A$ and $b\in B$. As a rule, a correspondence is denoted by a triple $(R,A,B)$ and one may write $aRb$ or $R(a,b)$ in place of $(a,b)\in R$. Instead of 'correspondence' the term 'binary relation', or 'relation' is sometimes used (in the general case where $A$ and $B$ need not coincide).
For finite sets, the matrix and graphical representations of a correspondence are commonly used. Suppose that $A$ and $B$ have $n$ and $m$ elements, respectively, and let $(R,A,B)$ be some correspondence. One can describe this by using an $n\times m$ matrix the rows and columns of which are labelled with the elements of $A$ and $B$, respectively, and the intersection of the $a$-th row with the $b$-th column contains 1 if $(a,b)\in R$, and 0 otherwise. Conversely, every $(n\times m)$-matrix consisting of zeros and ones describes a unique correspondence between $A$ and $B$. In the graphical representation, the elements of $A$ and $B$ are represented by points in the plane. These points are usually denoted by the same symbols as the corresponding elements. Then $a$ and $b$ are connected by an arrow (arc) from $a$ to $b$ if $(a,b)\in R$. Thus, the correspondence is represented by an oriented graph.
The set of all correspondences between two sets $A$ and $B$ forms a complete Boolean algebra the zero of which is the empty correspondence and the identity of which is the so-called complete correspondence, consisting of all pairs $(a,b)$, $a\in A$, $b\in B$. Let $R\subset A\times B$. The set
$$\operatorname{Dom}R = \{a\in A\colon\exists b\,(a,b)\in R\}$$
is called the domain of definition of $R$, and the set
$$\operatorname{Ran}R = \{b\in B\colon\exists a\,(a,b)\in R\}$$
is called the range, or image, of $R$. The correspondence $R$ is everywhere defined if $\operatorname{Dom}R = A$, and surjective if $\operatorname{Ran}R = B$. For every $a\in A$ the set
$$\operatorname{Im}_R a = \{b\in B\colon(a,b)\in R\}$$
is called the image of $a$ with respect to $R$, and for every $b\in B$, the set
$$\operatorname{Coim}_R b = \{a\in A\colon(a,b)\in R\}$$
is called the co-image (or pre-image) of $b$ with respect to $R$. Then
$$\operatorname{Dom}R = \bigcup_{b\in B}\operatorname{Coim}_R b,\qquad\operatorname{Ran}R = \bigcup_{a\in A}\operatorname{Im}_R a.$$
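A small sketch (with made-up finite sets) of a correspondence $R$ between $A$ and $B$ given as a set of ordered pairs, together with its domain, range, images and co-images and the two union identities above.

```python
A, B = {1, 2, 3, 4}, {'p', 'q', 'r'}
R = {(1, 'p'), (1, 'q'), (3, 'q')}

Dom = {a for (a, b) in R}                                 # domain of definition
Ran = {b for (a, b) in R}                                 # range (image) of R
Im = {a: {b for (aa, b) in R if aa == a} for a in A}      # Im_R a
Coim = {b: {a for (a, bb) in R if bb == b} for b in B}    # Coim_R b

print(Dom == {1, 3}, Ran == {'p', 'q'})                   # True True
print(Dom == set().union(*Coim.values()))                 # Dom R = union of co-images
print(Ran == set().union(*Im.values()))                   # Ran R = union of images
```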
Any correspondence $R$ establishes a Galois correspondence between the subsets of $A$ and those of $B$. Namely, to any subset $X\subset A$ one assigns the subset $X' = \bigcup_{a\in X}\operatorname{Im}_R a\subseteq B$. Together with the dual correspondence $S$, which assigns to every $Y\subset B$ the set $Y' = \bigcup_{b\in Y}\operatorname{Coim}_R b$, the Galois correspondence defines a closure operator on both $A$ and $B$.
a closure operator on both A and B. The inverse or involution R •, or R -I, of a
correspondence (R, A, B) is defined by the equation
R# = {(b,a): (a,b)eR}.
This establishes a bijection between the correspondences $(R,A,B)$ and $(R^{\#},B,A)$, which is an isomorphism of Boolean algebras. Given two correspondences $(R,A,B)$ and $(S,B,C)$, their product or composite is given by
$$(RS,A,C) = \{(a,c)\colon\exists b\,(a,b)\in R\wedge(b,c)\in S\}.$$
Multiplication of correspondences is associative, and its identities are the diagonal binary relations. Moreover, $(RS)^{\#} = S^{\#}R^{\#}$, and $R_1\subset R_2$ implies that $R_1^{\#}\subset R_2^{\#}$. Therefore the correspondences between a family of sets form an ordered category with involution. Multiplication and involution enable one to express the properties of correspondences in terms of algebraic relations. For example, a correspondence $(R,A,B)$ is everywhere defined if $RR^{\#}\supseteq E_A$ ($E_A$ is the diagonal of $A$), and $R$ is functional, that is, it is the graph of a function from $A$ into $B$, if $RR^{\#}\supseteq E_A$ and $R^{\#}R\subseteq E_B$.
For any correspondence $R$, there are functional correspondences $F$ and $G$ such that $R = F^{\#}G$. Moreover, $R\subseteq RR^{\#}R$; a correspondence $R$ satisfying $R = RR^{\#}R$ is called difunctional. Any difunctional correspondence induces equivalence relations on the domain and on the image whose quotient sets have the same cardinality. This only holds for difunctional correspondences.
Let $\mathfrak{K}$ be a class of mathematical structures of the same type that is closed under finite Cartesian products. By a correspondence between two structures $A,B\in\mathfrak{K}$ one means a substructure $R$ of $A\times B$. Thus one has group correspondences, module correspondences, ring correspondences, and others. Such correspondences often have useful descriptions of their structure. For example, let $A$ and $B$ be groups and let $R$ be a subgroup of the direct product $A\times B$. The sets
$$\operatorname{Ker}R = \{a\in A\colon(a,1)\in R\},\qquad I_R = \{b\in B\colon(1,b)\in R\}$$
are called the kernel and the indeterminacy of $R$, respectively. $\operatorname{Ker}R$ is a normal subgroup of $\operatorname{Dom}R$, $I_R$ is a normal subgroup of $\operatorname{Ran}R$, and the quotient groups $(\operatorname{Dom}R)/\operatorname{Ker}R$ and $(\operatorname{Ran}R)/I_R$ are isomorphic. It follows, in particular, that all group correspondences are difunctional.
References
[1] KUROSH, A.G.: Lectures on general algebra, Chelsea, 1963 (translated from the Russian).
[2] MAL'TSEV, A.I.: Algebraic systems, Springer, 1973 (translated from the Russian).
[3] CALENKO, M.S. [M.SH. TSALENKO]: 'Classification of correspondence categories and types of regularities for categories', Trans. Moscow Math. Soc. 1 (1982), 239-282. (Trudy Moskov. Mat. Obshch. 41 (1980), 241-285)
M.Sh. Tsalenko
Editorial comments. In algebraic geometry correspondences are used widely, [A1], Chapt. 2, 3. They are defined as the following slightly more technical concept. A correspondence $Z$ between two (projective) varieties $X$ and $Y$ is defined by a closed algebraic subset $Z\subset X\times Y$. It is said to be a rational mapping if $Z$ is irreducible and there exists a Zariski-open subset $X_0\subset X$ such that each $x\in X_0$ is related by $Z$ to one and only one point of $Y$ (i.e. $\operatorname{card}\operatorname{Im}_Z(x) = 1$). The correspondence $Z$ is said to be a birational mapping if both $Z$ and $Z^{\#}$ are rational mappings.
References [A1] MUMFORD, D.: Algebraic geometry 1: Complex projective
varieties, Springer, 1976.
COSECANT - One of the trigonometric functions:
$$y = \operatorname{cosec}x = \frac{1}{\sin x};$$
other notations are $\csc x$, $\operatorname{cosc}x$. The domain of definition is the entire real line with the exception of the points with abscissas
$$x = \pi n,\qquad n = 0,\pm 1,\pm 2,\dots.$$
The cosecant is an unbounded odd periodic function (with period $2\pi$). Its derivative is:
$$(\operatorname{cosec}x)' = -\frac{\cos x}{\sin^2x} = -\operatorname{cotg}x\operatorname{cosec}x.$$
The integral of the cosecant is:
$$\int\operatorname{cosec}x\,dx = \ln\left|\operatorname{tg}\frac{x}{2}\right|+C.$$
The series expansion is:
$$\operatorname{cosec}x = \frac{1}{x}+\frac{x}{6}+\frac{7x^3}{360}+\frac{31x^5}{15120}+\cdots,\qquad 0<|x|<\pi.$$
Yu.A. Gor'kov
AMS 1980 Subject Classification: 33A 1 0, 26A09
COSET IN A GROUP $G$ by a subgroup $H$ (from the left) - A set of elements of $G$ of the form
$$aH = \{ah\colon h\in H\},$$
where $a$ is some fixed element of $G$. This coset is also called the left coset by $H$ in $G$ defined by $a$. Every left coset is determined by any of its elements. $aH = H$ if and only if $a\in H$. For all $a,b\in G$ the cosets $aH$ and $bH$ are either equal or disjoint. Thus, $G$ decomposes into pairwise disjoint left cosets by $H$; this decomposition is called the left decomposition of $G$ with respect to $H$. Similarly one defines right cosets (as sets $Ha$, $a\in G$) and also the right decomposition of $G$ with respect to $H$. These decompositions consist of the same number of cosets (in the infinite case, their cardinalities are equal). This number (cardinality) is called the index of the subgroup $H$ in $G$. For normal subgroups, the left and right decompositions coincide, and in this case one simply speaks of the decomposition of a group with respect to a normal subgroup.
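A small computational sketch in the additive group $\mathbf{Z}/12\mathbf{Z}$ with the subgroup $H=\{0,4,8\}$ (an illustrative choice; this group is Abelian, so the left and right decompositions coincide):

```python
# Left decomposition of G = Z/12Z with respect to H = {0, 4, 8}, and the index.
n = 12
G = set(range(n))
H = {0, 4, 8}

cosets = {frozenset((a + h) % n for h in H) for a in G}
print(sorted(sorted(c) for c in cosets))   # [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]]
print(len(cosets))                         # the index of H in G, here 4
```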
O.A. Ivanova
AMS 1980 Subject Classification: 20A05
COSINE - One of the trigonometric functions:
y = cosx.
Its domain of definition is the entire real line; its range of values is the closed interval $[-1,1]$; the cosine is an even periodic function (with period $2\pi$). The cosine and the sine are related via the formula
sin2 x +cos2 x = 1.
The cosine and the secant are related via the formula
$$\cos x = \frac{1}{\sec x}.$$
The derivative of the cosine is:
$$(\cos x)' = -\sin x.$$
The integral of the cosine is:
$$\int\cos x\,dx = \sin x+C.$$
The series expansion is:
$$\cos x = 1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots,\qquad -\infty<x<\infty.$$
The inverse function is the arccosine.
The cosine and sine of a complex argument $z$ are related to the exponential function by Euler's formula:
$$e^{iz} = \cos z+i\sin z,$$
so that
$$\cos z = \frac{e^{iz}+e^{-iz}}{2}.$$
If $z = ix$ (a purely imaginary number), then
$$\cos ix = \frac{e^{-x}+e^{x}}{2} = \cosh x,$$
where $\cosh x$ is the hyperbolic cosine.
Yu.A. Gor'kov
Editorial comments. A geometric interpretation of the cosine of an argument (angle) $\varphi$ is as follows. Consider the unit circle $T$ in the (complex) plane with origin $0$. Let $\varphi$ denote the angle between the radius (thought of as varying) and the positive $x$-axis. Then $\cos\varphi$ is equal to the (signed) distance from the point $e^{i\varphi}$ on $T$ corresponding to $\varphi$ to the $y$-axis. See also Sine.
References
[1] MARKUSHEVICH, A.I.: Theory of functions of a complex variable, 1, Chelsea, 1977 (translated from the Russian).
AMS 1980 Subject Classification: 33A 1 0, 26A09
CosiNE AMPLITUDE, elliptic cosine - One of the three basic Jacobi elliptic functions, denoted by
cnu = cn(u, k) = cosamu.
The cosine amplitude is expressible in terms of the Weierstrass sigma-functions, the Jacobi theta-functions or a power series, as follows:
a1(u) 8o(0)8z(v) cnu = cn(u, k) = a3(u) = 8z(0)8o(v) =
2 4 6 = 1-.!!_+(1+4k2).!!_-(1+44k2 +16k4).!!_+ ... ,
2! 4! 6!.
where k is the modulus of the elliptic function, OE;;;kE;;;l; v=u j2<N, and 2<N='1TIJ~(O). For k=O, 1 one has, respectively, en( u, 0) =cos u, cn(u, 1) = 1 I cosh u.
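A quick numerical check of the two degenerate cases, using SciPy's Jacobi elliptic functions (note that `ellipj` takes the parameter $m=k^2$ rather than the modulus $k$):

```python
import numpy as np
from scipy.special import ellipj

u = np.linspace(0.0, 3.0, 7)
_, cn0, _, _ = ellipj(u, 0.0)                # k = 0, i.e. m = 0
_, cn1, _, _ = ellipj(u, 1.0)                # k = 1, i.e. m = 1
print(np.allclose(cn0, np.cos(u)))           # cn(u, 0) = cos(u)
print(np.allclose(cn1, 1.0 / np.cosh(u)))    # cn(u, 1) = 1 / cosh(u)
```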
References
[1] HURWITZ, A. and COURANT, R.: Vorlesungen über allgemeine Funktionentheorie und elliptische Funktionen, 2, Springer, 1964, Chapt. 3.
E.D. Solomentsev
Editorial comments. More on the function $\operatorname{cn}u$, e.g. derivatives, evenness, behaviour on the real line, etc., can be found in [A1].
References
[A1] MARKUSHEVICH, A.I.: Theory of functions of a complex variable, 3, Chelsea, 1977 (translated from the Russian).
AMS 1980 Subject Classification: 33A25
COSINE, HYPERBOLIC - See Hyperbolic functions.
AMS 1980 Subject Classification: 33A10
COSINE THEOREM - The square of a side of a triangle is equal to the sum of the squares of the other two sides, minus double the product of the latter two sides and the cosine of the angle between them:
$$c^2 = a^2+b^2-2ab\cos C.$$
Here $a, b, c$ are the sides of the triangle and $C$ is the angle between $a$ and $b$.
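A worked numerical instance of the formula (side lengths and angle chosen arbitrarily):

```python
import math

# Side c opposite the 60-degree angle C between sides a = 5 and b = 7.
a, b, C = 5.0, 7.0, math.radians(60.0)
c = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(C))
print(round(c, 4))   # 6.245, since c^2 = 25 + 49 - 70 * 0.5 = 39
```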
Yu.A. Gor'kov
COSMOLOGICAL CONSTANT - A physical constant characterizing the properties of vacuum, sometimes introduced in the general theory of relativity. Einstein's equations (cf. Einstein equations) including the cosmological constant are
$$R_{ij}-\frac12 g_{ij}R = \frac{8\pi G}{c^4}T_{ij}+\Lambda g_{ij},$$
where $\Lambda$ is the cosmological constant, $g_{ij}$ is the metric tensor, $R_{ij}$ is the Ricci tensor, $R$ is the curvature of space, $T_{ij}$ is the energy-momentum tensor, $c$ is the speed of light, and $G$ is the gravitational constant. These equations are the Lagrange equations for the action
$$S = S_0-\frac{c^3}{16\pi G}\int(R+2\Lambda)\,dV,$$
where $S_0$ is the action for matter and $V$ denotes four-dimensional volume. A. Einstein introduced the cosmological constant in the general theory of relativity [1] in order to ensure that the equations of the gravitational field admit a spatially homogeneous static solution (the so-called Einstein cosmological model). However, since the advent of Friedmann's evolutionary cosmological model and its experimental verification, the fact that the original Einstein equations have no such solution is not considered a defect of the theory. There are no reliable indications that the cosmological constant is distinct from zero. However, the existence of a sufficiently small cosmological constant ($|\Lambda|\le 10^{-55}$ cm$^{-2}$) does not contradict the observed data or general physical principles.

The existence of a cosmological constant may essentially modify certain steps in the evolution of the most
widespread cosmological models (see [2], Chapt. 4). In this connection, it has been proposed that cosmological models with a cosmological constant should be utilized to explain certain properties of the distribution of quasars (see [3], [4], [5]).
The term $\Lambda g_{ij}$ in the gravitational field equations may be incorporated in the energy-momentum tensor of vacuum (see [2]). In this case, vacuum has energy density $\varepsilon = c^4\Lambda/8\pi G$ and pressure $p = -c^4\Lambda/8\pi G$, corresponding to the state equation $p = -\varepsilon$. In a theory with a cosmological constant the properties of vacuum already appear in the non-relativistic approximation. Thus, the gravitational potential of a point mass in a theory with a cosmological constant is (see [6], Chapt. 16)
$$\varphi = -\frac{GM}{r}-\frac{\Lambda r^2c^2}{6}.$$
The term $\Lambda g_{ij}$ is invariant under transformations of the local Lorentz group, corresponding to the principle of Lorentz-invariance of vacuum in quantum field theory. The concept of the cosmological constant as an index of the energy density and pressure of vacuum makes it possible, in principle, to relate the concept of a cosmological constant with the concepts of quantum field theory. There are various formulas that link the value of the cosmological constant to the fundamental physical constants and the age of the Universe (see [2], Chapt. 24).
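A short numerical sketch of the vacuum energy density formula above, evaluated at the quoted upper bound on $|\Lambda|$ (unit conversion to SI is done in the code; the numbers are purely illustrative):

```python
import math

# epsilon = c^4 * Lambda / (8 * pi * G) for |Lambda| = 10^-55 cm^-2.
c = 2.998e8          # speed of light, m/s
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
Lam = 1e-55 * 1e4    # 10^-55 cm^-2 converted to m^-2

epsilon = c**4 * Lam / (8 * math.pi * G)
print(f"{epsilon:.1e} J/m^3")   # a few times 10^-9 J/m^3
```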
References
[1] EINSTEIN, A.: Sitzungsber. K. Preuss. Akad. Wissenschaft. (1917), 142-152.
[2] ZEL'DOVICH, YA.B. and NOVIKOV, I.D.: Structure and evolution of the universe, Vol. 2. Relativistic astrophysics, 1983 (translated from the Russian).
[3] PETROSIAN, V., SALPETER, E. and SZEKERES, P.: Astrophys. J. 147 (1967), 1222-1226.
[4] SHKLOVSKII, I.S.: Astron. Tsirkulyar 429 (1967).
[5] KARDASHEV, N.S.: Astron. Tsirkulyar 430 (1967).
[6] TOLMAN, R.C.: Relativity, thermodynamics and cosmology, Clarendon Press, 1934.
D.D. Sokolov
Editorial comments.
AMS 1980 Subject Classification: 83F05, 85A40
COSMOLOGICAL MODELS - One of the basic concepts in cosmology as a science describing the Universe (the mega-world surrounding us) as a whole, ignoring details of no significance in this respect.
The mathematical form of a cosmological model depends on which physical theory is adopted as basis for the description of moving matter: accordingly, one distinguishes between general-relativistic models, Newtonian models, steady-state models, models with variable gravitational constant, etc. The most important of these are the general-relativistic models. Astronomical systems may also be categorized as cosmological models: the Ptolemaic system, the Copernican system, etc. Modern cosmological models enable one to concentrate attention on essential details by introducing the concept of averaging physical properties over a physically large volume. The averaged values are assumed to be continuous and (usually) many times differentiable. That this averaging operation is possible is not self-evident. One can imagine a hierarchical model of the Universe, in which there exist qualitatively distinct objects of ever-increasing scales. However, the available observational data do not match a model of this type.
As yet, the averaging procedure for general-relativistic cosmological models lacks an adequate mathematical basis. The difficulty here is that different 'microstates', which yield the same cosmological model when averaged, constitute distinct pseudo-Riemannian manifolds, possibly even possessing distinct topological structures (see also Geometro-dynamics).
The physical basis for general-relativistic cosmological models is Einstein's general relativity theory (sometimes including the version with a cosmological constant; see Relativity theory). The mathematical form of general-relativistic cosmological models is the global geometry of pseudo-Riemannian manifolds. It is assumed that the topological structure of the manifold must be predicted theoretically. The choice of a specific topological structure for a cosmological model is complicated by the fact that models having different topologies and different global properties may be locally isometric. One method for solving this problem is to advance additional postulates, which either follow from general theoretical considerations (such as the causality principle), or are experimental facts (e.g. the postulate in [1], Vol. 2, Chapt. 24, follows from CP-violation). The construction of a cosmological model usually begins with the assumption of some specific type of symmetry, with respect to which one distinguishes between homogeneous and isotropic cosmological models, anisotropic homogeneous cosmological models, and the like (see [1], Vol. 2, [2]). The first general-relativistic cosmological model was proposed by A. Einstein in 1917 (see [3]); it was static, homogeneous and isotropic and included a Λ-term, i.e. a cosmological constant. Subsequently, A.A. Friedmann developed a non-static homogeneous isotropic model, known as the Friedmann model [4]. The non-static nature predicted by this model was observed in 1929 (see [5]). The Friedmann model has different versions, depending on the values of the parameters that figure in it. If the density of matter ρ is not greater than some critical value ρ₀, one has what is called an open model; if ρ > ρ₀ one has a closed model. In terms of suitable coordinates, the metric of the Friedmann cosmological model has the form
ds^2 = c^2 dt^2 - \left[\frac{R(t)}{R_0}\right]^2 \left[\frac{dr^2}{1 - kr^2/R_0^2} + r^2(d\theta^2 + \sin^2\theta \, d\phi^2)\right],
where t denotes time, ρ and ρ₀ are the average and so-called critical densities of matter at the moment of time in question, c is the velocity of light, and r, θ and φ are coordinates. This metric is also known as the Robertson-Walker metric. The critical density ρ₀ is a certain function of time, and it turns out that the quantity ρ − ρ₀ does not change sign. If k < 0, the spatial cross-section t = const is Lobachevskii space; if k = 0 it is Euclidean space (though the cosmological model itself is not flat); if k > 0 one obtains spherical space. The function R(t) (the world radius) is determined from the Einstein equations and the equations of state; it vanishes at one (k ≤ 0) or two (k > 0) values of t, and simultaneously the average density, curvature and other physical characteristics of the model become infinite. At such points the cosmological model is said to have a singularity. Depending on the equation of state, one speaks of cold (the pressure p = 0) or hot (p = ε/3, where ε is the energy density) models. The discovery (see [6]) of isotropic equilibrium radiation (T ≈ 3 K) corroborates the hot model. Despite the crude nature of the Friedmann models, they already convey the main features of the structure of the Universe. For the further construction of cosmological models on the basis of Friedmann models see [1], Vol. 2, Chapt. 3. A theory has been developed for the evolution of small deviations of a cosmological model from a Friedmann model. A result of this evolution is apparently the formation of galactic clusters and other astronomical objects. The available observational data seem to imply that the real Universe is described to a good degree of accuracy by a Friedmann model. These data, however, do not permit the determination of the sign of k (it seems somewhat more probable that k < 0). There are other possible topological interpretations of Friedmann models, obtained by different factorizations of a spatial section (different ways of pasting it together). The observed data impose only weak restrictions on the nature of these factorizations (see [1], Vol. 2). In a logically consistent theory, the construction of a cosmological model must begin with the selection of a manifold, the carrier of a pseudo-Riemannian metric. However, there is as yet no method for selecting manifolds. There are only a few restrictions on the possible global structure of the cosmological model, based on the causality principle and on CP-violation (see [1], Vol. 2).
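As a small numerical illustration of the open/closed dichotomy described above (a sketch added in this rewrite, not part of the original article), one can compare a given matter density with the critical density ρ₀ = 3H²/(8πG); the Hubble parameter and density values below are assumed placeholders for illustration only.

import math

G = 6.674e-11  # gravitational constant in SI units, m^3 kg^-1 s^-2 (assumed value)

def critical_density(H):
    """Critical density rho_0 = 3 H^2 / (8 pi G) for a Friedmann model; H in 1/s."""
    return 3.0 * H**2 / (8.0 * math.pi * G)

def classify(rho, H):
    """'open' if rho <= rho_0 (the article's convention), 'closed' if rho > rho_0."""
    return "closed" if rho > critical_density(H) else "open"

# Assumed illustrative numbers: H about 70 km/s per Mpc, rho about 2.7e-27 kg/m^3.
H = 70.0 * 1000.0 / 3.086e22          # convert km/s per Mpc to 1/s
print(critical_density(H))            # roughly 9e-27 kg/m^3
print(classify(2.7e-27, H))           # -> 'open'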
Many other cosmological models have been proposed, in particular anisotropic homogeneous models (see [1], Vol. 2, Chapts. 18-22, [8]).
Prior to the appearance of general-relativistic cosmological models, it was implicitly assumed that the distribution of matter is isotropic, homogeneous and static. However, this assumption leads to what are known as the gravitational, photometric and other paradoxes (infinitely large gravitational potential, infinitely large illuminance, etc.). General-relativistic models avoid these paradoxes (see [2]). For certain general-relativistic cosmological models, good Newtonian approximations, analogous with respect to their mass distributions, have been obtained (see [7]). These cosmological models are also free of the above-mentioned paradoxes.
References
[1] Zel'dovich, Ya.B. and Novikov, I.D.: Relativistic astrophysics, 1 - Stars and relativity; 2 - Structure and evolution of the Universe, 1971-1983 (translated from the Russian).
[2] Petrov, A.Z.: Einstein spaces, Pergamon, 1969 (translated from the Russian).
[3] Einstein, A.: Sitzungsber. K. Preuss. Akad. Wissenschaft. (1917), 142-152.
[4] Friedmann, A.A.: Z. Phys. 10 (1922), 377-386.
[5] Hubble, E.P.: Proc. Nat. Acad. Sci. 15 (1929), 168-173.
[6] Penzias, A.A. and Wilson, R.W.: Astrophys. J. 142 (1965), 419-421.
[7] Heckmann, O. and Schücking, E.: Handbuch der Physik, Vol. 53, Berlin, 1959, pp. 489-519.
[8] Belinskii, V.A., Lifshits, E.M. and Khalatnikov, I.M.: Soviet Physics Uspekhi 13 (1971), 745-765. (Uspekhi Fiz. Nauk 102, no. 3 (1970), 463-500)
[9] Penrose, R.: 'Structure of space-time', in C.M. DeWitt and J.A. Wheeler (eds.): Battelle Rencontres 1967 Lectures in Math. Physics, Benjamin, 1968.
D.D. Sokolov
Editorial comments. Exact spherical symmetry around every point implies that the Universe is spatially homogeneous and that it admits a six-parameter group of isometries, whose surfaces of transitivity are space-like three-surfaces with constant curvature (see [A1], [A2]). This Universe has the above-mentioned Robertson-Walker metric. The unexpectedly strong requirement of spherical symmetry around every point comes from the fact that here an observational symmetry (e.g. microwave background) is intended. The observations refer to the past null-cone and not to the three-surface of transitivity. All expanding Friedmann models with zero cosmological constant and non-negative density and pressure contain a singularity in the past (the 'big bang'). The past null-cone stops on this singularity without covering all material in the Universe. It is only possible to see material inside our 'horizon'. The observed isotropy does not say anything about matter outside the horizon.
It has long been thought that the initial singularity is a result of the imposed symmetry, but a number of recent theorems by S.W. Hawking and R. Penrose indicate that some initial singularity must exist in many models of the Universe irrespective of symmetry (see [A2], Chapt. 8). Because of this singularity and the resulting horizon, the microwave background radiation arriving from two well-separated points on the sky has no causal connection (i.e. the two points do not have overlapping past null-cones). This makes it a surprise that the temperature ends up the same everywhere. A solution of this 'horizon problem' is given in the inflationary model. This is a Friedmann model with a physically special equation of state for matter and vacuum at ultra-high temperature (≈ 10²⁵ K). In this regime a phase transition is expected, in which the true vacuum appears as rapidly expanding bubbles in a surrounding of negative pressure. During this transition the Universe and the horizon expand exponentially (hence 'inflation'). The presently visible Universe is only a tiny fraction of one bubble of the phase transition and fits inside the inflation horizon by a large margin. Other long-standing problems also find a natural solution in this model. For a physical introduction see [A3]; for a more rigorous treatment see [A4].
Many other cosmological models have been considered in the literature. For a discussion of the Bianchi type I - IX models see [A5], Chapt. 11. Nearly all known models can be found in [A6].
References
[A1] Walker, A.G.: 'Completely symmetric spaces', J. London Math. Soc. 19 (1944), 219-226.
[A2] Hawking, S.W. and Ellis, G.F.R.: The large scale structure of space-time, Cambridge Univ. Press, 1973.
[A3] Guth, A.H. and Steinhardt, P.J.: Scientific American May (1984), 90-102.
[A4] Gibbons, G.W., Hawking, S.W. and Siklos, S.T.C. (eds.): The very early universe, Cambridge Univ. Press, 1983.
[A5] Hawking, S.W. and Israel, W. (eds.): General relativity, Cambridge Univ. Press, 1979.
[A6] Kramer, D., Stephani, H., MacCallum, M. and Herlt, E.: Exact solutions of Einstein's field equations, Cambridge Univ. Press, 1980.
[A7] Weinberg, S.: Gravitation and cosmology, Wiley, 1972.
[A8] Landsberg, P.T. and Evans, D.A.: Mathematical cosmology, Oxford Univ. Press, 1977.
COTANGENT - One of the trigonometric functions:

y = \operatorname{cotan} x = \frac{\cos x}{\sin x};
other notations are cot x, cotg x and ctg x. The domain of definition is the entire real line with the exception of the points with abscissas x = πn, n = 0, ±1, ±2, .... The cotangent is an unbounded odd periodic function (with period π). The cotangent and the tangent are related by

\operatorname{cotan} x = \frac{1}{\tan x}.
The inverse function to the cotangent is called the arccotangent. The derivative of the cotangent is given by:

(\operatorname{cotan} x)' = -\frac{1}{\sin^2 x}.
The series expansion is:

\operatorname{cotan} x = \frac{1}{x} - \frac{x}{3} - \frac{x^3}{45} - \frac{2x^5}{945} - \cdots, \quad 0 < |x| < \pi.
The cotangent of a complex argument z is a meromorphic function with poles at the points z = πn, n = 0, ±1, ±2, ....
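As a quick numerical check of the relations above (an illustrative sketch added in this rewrite, not part of the original article), one can compare cotan x with 1/tan x, the derivative with −1/sin²x via a finite difference, and the value near 0 with the first terms of the series:

import math

def cotan(x):
    """cotan x = cos x / sin x (undefined at the points x = pi*n)."""
    return math.cos(x) / math.sin(x)

x = 0.7
print(abs(cotan(x) - 1.0 / math.tan(x)))              # ~0: cotan x = 1/tan x

# Derivative check: (cotan x)' = -1/sin^2 x, via a central difference.
h = 1e-6
numerical = (cotan(x + h) - cotan(x - h)) / (2.0 * h)
print(abs(numerical + 1.0 / math.sin(x) ** 2))        # ~0

# Series check near 0: cotan x = 1/x - x/3 - x^3/45 - 2x^5/945 - ...
x = 0.1
series = 1.0 / x - x / 3.0 - x**3 / 45.0 - 2.0 * x**5 / 945.0
print(abs(cotan(x) - series))                         # tiny (next term is x^7/4725)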
Yu.A. Gor'kov
Editorial comments. See also Tangent, curve of the; Sine; Cosine.
AMS 1980 Subject Classification: 33A10, 26A09
COTES FORMULAS - Formulas for the approximate computation of definite integrals, given the values of the integrand at finitely many equidistant points, i.e. quadrature formulas with equidistant interpolation points (cf. Quadrature formula). Cotes' formulas are
\int_0^1 f(x) \, dx \cong \sum_{k=0}^{n} a_k^{(n)} f\left(\frac{k}{n}\right), \quad n = 1, 2, \ldots  (*)
The numbers a_k^{(n)} are known as Cotes' coefficients; they are determined from the condition that formula (*) be exact if f(x) is a polynomial of degree at most n.
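As an illustration (a sketch under the assumptions of this rewrite, not from the article), the coefficients a_k^{(n)} can be computed numerically from exactly this exactness condition: requiring (*) to hold for 1, x, ..., x^n gives a linear system with a Vandermonde matrix.

import numpy as np

def cotes_coefficients(n):
    """Cotes coefficients a_k^(n) on [0, 1] with nodes k/n, k = 0, ..., n,
    determined by exactness for all polynomials of degree at most n."""
    nodes = np.arange(n + 1) / n
    # Row j of the system: sum_k a_k * nodes[k]**j = integral_0^1 x^j dx = 1/(j + 1).
    V = np.vander(nodes, N=n + 1, increasing=True).T
    rhs = 1.0 / np.arange(1, n + 2)
    return np.linalg.solve(V, rhs)

# n = 2 reproduces Simpson's rule weights 1/6, 4/6, 1/6 on [0, 1].
print(cotes_coefficients(2))

# Applying (*) with n = 4 to f(x) = exp(x) on [0, 1]:
a = cotes_coefficients(4)
print(a @ np.exp(np.arange(5) / 4), np.e - 1.0)   # close agreement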
The formulas were proposed by R. Cotes (1722) and considered in a more general form by I. Newton. See Newton-Cotes quadrature formula.
BSE-3
Editorial comments. Cotes' formulas were published in [A2] after Cotes' death. In the Western literature these formulas are known as the Newton-Cotes formulas. A detailed analysis of them can be found in [A1], [A3], [A4].
References
[A1] Brass, H.: Quadraturverfahren, Vandenhoeck & Ruprecht, 1977.
[A2] Cotes, R.: Harmonia mensurarum, 1722. Published by A. Smith after Cotes' death.
[A3] Davis, P.J. and Rabinowitz, P.: Methods of numerical integration, Acad. Press, 1984.
[A4] Engels, H.: Numerical quadrature and cubature, Acad. Press, 1980.
AMS 1980 Subject Classification: 65D32
COUNTABLE SET - A set of the same cardinality as the set of natural numbers. For example, the set of rational numbers or the set of algebraic numbers.
M.I. Voitsekhovskii
AMS 1980 Subject Classification: 03E10, 04A10
COUNTABLY-ADDITIVE SET FUNCTION - An additive set function μ defined on the algebra 𝔄 of subsets of a set M such that

\mu\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} \mu(E_i)

for any countable family of non-intersecting sets E_i from 𝔄.
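A toy illustration (an assumption-laden sketch added in this rewrite, not from the article): for the discrete set function μ(E) = Σ_{i∈E} 2^{-i} on subsets of the natural numbers, additivity over a disjoint family can be checked exactly on a finite truncation of the union.

from fractions import Fraction

def mu(E):
    """A discrete, countably-additive set function: mu(E) = sum over i in E of 2^(-i)."""
    return sum(Fraction(1, 2**i) for i in E)

# Pairwise non-intersecting sets E_i = {i}; check additivity on a finite truncation.
N = 20
union = set(range(1, N + 1))
parts = [{i} for i in range(1, N + 1)]
assert mu(union) == sum(mu(E) for E in parts)   # holds exactly (Fraction arithmetic)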
M.I. Voitsekhovskii
AMS 1980 Subject Classification: 28A10
COUNTABLY-COMPACT SPACE - A topological space X in which it is possible to extract a finite subcovering from any countable open covering of that space.
M.I. Voitsekhovskii
Editorial comments.
References
[A1] Arkhangel'skii, A.V. and Ponomarev, V.I.: Fundamentals of general topology: problems and exercises, Reidel, 1984 (translated from the Russian).
AMS 1980 Subject Classification: 54D20
COUNTABLY-NORMED SPACE - A locally convex space X whose topology is defined using a countable set of compatible norms ‖·‖_1, ..., ‖·‖_n, ..., i.e. norms such that if a sequence {x_n} ⊂ X that is fundamental in the norms ‖·‖_p and ‖·‖_q converges to zero in one of these norms, then it also converges to zero in the other. The sequence of norms {‖·‖_n} can be replaced by a non-decreasing sequence ‖·‖_p ≤ ‖·‖_q, where p < q, which generates the same topology with base of neighbourhoods of zero U_{p,ε} = {x ∈ X : ‖x‖_p < ε}. A countably-normed space is metrizable, and its metric ρ can be defined by
\rho(x, y) = \sum_{n=1}^{\infty} \frac{1}{2^n} \, \frac{\|x - y\|_n}{1 + \|x - y\|_n}.
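A minimal computational sketch of this metric (the concrete norm values below are assumed for illustration only and are not from the article): given the numbers ‖x − y‖_n for the first few n, the series is summed directly; truncating it after N terms changes the value by at most 2^{-N}.

def countably_normed_metric(norm_values):
    """rho(x, y) = sum over n of 2^-n * d_n / (1 + d_n), with d_n = ||x - y||_n.
    `norm_values` is a finite list [d_1, d_2, ...] (a truncation of the series)."""
    return sum(d / (2**n * (1.0 + d)) for n, d in enumerate(norm_values, start=1))

# Assumed values of ||x - y||_n for a non-decreasing family of norms.
print(countably_normed_metric([0.5, 1.0, 2.0, 4.0]))   # a number in [0, 1)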
An example of a countably-normed space is the space of entire functions that are analytic in the unit disc |z| < 1 with the topology of uniform convergence on any closed subset of this disc and with the collection of norms ‖x(z)‖_n = max_{|z| ≤ 1 − 1/n} |x(z)|.
References
[1] Gel'fand, I.M. and Shilov, G.E.: Generalized functions, Acad. Press, 1964 (translated from the Russian).
V.I. Sobolev
AMS 1980 Subject Classification: 46BXX
COUNTABLY ZERO-DIMENSIONAL SPACE - A normal space X that can be represented in the form of a union X = ⋃_{i=1}^∞ X_i of subspaces X_i of dimension dim X_i ≤ 0.
M.I. Voitsekhovskii
Editorial comments. If X is a metrizable space, then its countable zero-dimensionality is equivalent to it being countable dimensional, i.e. being the union of countably many finite-dimensional subspaces.
AMS 1980 Subject Classification: 54F50, 54F45
COURANT-FRIEDRICHS-LEWY CONDITION - A necessary condition for the stability of difference schemes in the class of infinitely-differentiable coefficients. Let O(P) be the dependence region for the value of the solution with respect to one of the coefficients (in particular, the latter might be an initial condition) and let O_h(P) be the dependence region of the value u_h(P) of the solution to the corresponding difference equation. A necessary condition for u_h(P) to be convergent to u(P) is that, as the grid spacing h is diminished, the dependence region of the difference equation covers the dependence region of the differential equation, in the sense that
O(P) \subset \lim_{h \to 0} O_h(P).
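For the model equation u_t + a u_x = 0 and the explicit upwind scheme, the covering of dependence regions reduces to the restriction aτ/h ≤ 1 on the Courant number (cf. Courant number). The following sketch is illustrative only; the scheme and the parameter values are assumptions of this rewrite, not taken from the article.

import numpy as np

def upwind_step(u, courant):
    """One explicit upwind step for u_t + a u_x = 0 (a > 0) on a periodic grid;
    courant = a * tau / h is the Courant number."""
    return u - courant * (u - np.roll(u, 1))

def max_growth(courant, steps=200, n=100):
    """Amplification of the max-norm of a random initial profile after `steps` steps."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(n)
    norm0 = np.max(np.abs(u))
    for _ in range(steps):
        u = upwind_step(u, courant)
    return np.max(np.abs(u)) / norm0

print(max_growth(0.8))   # condition satisfied: stays bounded
print(max_growth(1.2))   # condition violated: grows without bound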
References
[1] Courant, R., Friedrichs, K.O. and Lewy, H.: 'Über die partiellen Differenzengleichungen der mathematischen Physik', Math. Ann. 100 (1928), 32-74.
[2] Godunov, S.K. and Ryaben'kii, V.S.: The theory of difference schemes, North-Holland, 1964.
N.S. Bakhvalov
References
[A1] Courant, R. and Friedrichs, K.O.: Supersonic flow and shock waves, Interscience, 1948.
[A2] Courant, R., Friedrichs, K.O. and Lewy, H.: On the partial difference equations of mathematical physics, NYO-7689, Inst. Math. Sci. New York Univ., 1956 (translated from the German).
[A3] Forsythe, G.E. and Wasow, W.R.: Finite difference methods for partial differential equations, Wiley, 1960.
[A4] Mitchell, A.R. and Griffiths, D.F.: The finite difference method in partial differential equations, Wiley, 1980.
[A5] Richtmyer, R.D. and Morton, K.W.: Difference methods for initial value problems, Wiley, 1967.
AMS 1980 Subject Classification: 65M10
COURANT NUMBER - A term used in the consideration of difference schemes for integrating one-dimensional hyperbolic systems. If τ is the grid spacing with respect to t, h the grid spacing with respect to x and λ the maximum inclination of the characteristics, then the Courant number of the difference scheme equals λτ/h.
References
[1] Godunov, S.K. and Ryaben'kii, V.S.: The theory of difference schemes, North-Holland, 1964.
N.S. Bakhvalov
Editorial comments. The Courant number plays a role in the Courant-Friedrichs-Lewy condition. References are given there.
AMS 1980 Subject Classification: 65M10
COURANT THEOREM on conformal mapping of domains with variable boundaries - Let {D_n} be a sequence of nested simply-connected domains in the complex z-plane, D_{n+1} ⊂ D_n, which converges to its kernel D_{z_0} relative to some point z_0; the set D_{z_0} is assumed to be bounded by a Jordan curve. Then the sequence of functions {w = f_n(z)} which map D_n conformally onto the disc Δ = {w : |w| < 1}, f_n(z_0) = 0, f_n'(z_0) > 0, is uniformly convergent in the closed domain D̄_{z_0} to the function w = f(z) which maps D_{z_0} conformally onto Δ; moreover, f(z_0) = 0, f'(z_0) > 0.
This theorem, due to R. Courant [1], is an extension of the Caratheodory theorem.
References
[1A] Courant, R.: Gött. Nachr. (1914), 101-109.
[1B] Courant, R.: Gött. Nachr. (1922), 69-70.
[2] Markushevich, A.I.: Theory of functions of a complex variable, 3, Chelsea, 1977 (translated from the Russian).
E.D. Solomentsev
Editorial comments. Cf. Caratheodory theorem for the definition of 'kernel of a sequence of domains'.
AMS 1980 Subject Classification: 30C35, 30C20
COUSIN PROBLEMS - Problems named after P. Cousin [1], who first solved them for certain simple domains in the complex n-dimensional space C^n.
First (additive) Cousin problem. Let 𝔘 = {U_α} be a covering of a complex manifold M by open subsets U_α, in each of which is defined a meromorphic function f_α; assume that the functions f_αβ = f_α − f_β are holomorphic in U_αβ = U_α ∩ U_β for all α, β (compatibility condition). It is required to construct a function f which is meromorphic on the entire manifold M and is such that the functions f − f_α are holomorphic in U_α for all α. In other words, the problem is to construct a global meromorphic function with locally specified polar singularities.
The functions f_αβ, defined in the pairwise intersections U_αβ of elements of 𝔘, define a holomorphic 1-cocycle for 𝔘, i.e. they satisfy the conditions

f_{\alpha\beta} + f_{\beta\alpha} = 0 in U_{\alpha\beta},  (1)

f_{\alpha\beta} + f_{\beta\gamma} + f_{\gamma\alpha} = 0 in U_\alpha \cap U_\beta \cap U_\gamma,
for all α, β, γ. A more general problem (known as the first Cousin problem in cohomological formulation) is the following. Given holomorphic functions f_αβ in the intersections U_αβ, satisfying the cocycle conditions (1), it is required to find functions h_α, holomorphic in U_α, such that

f_{\alpha\beta} = h_\beta - h_\alpha  (2)
for all α, β. If the functions f_αβ correspond to the data of the first Cousin problem and the above functions h_α exist, then the function

f = \{ f_\alpha + h_\alpha in U_\alpha \}

is defined and meromorphic throughout M and is a solution of the first Cousin problem. Conversely, if f is a solution of the first Cousin problem with data {f_α}, then the holomorphic functions h_α = f − f_α satisfy (2). Thus, a specific first Cousin problem is solvable if and only if the corresponding cocycle is a holomorphic coboundary (i.e. satisfies condition (2)).
The first Cousin problem may also be formulated in a local version. To each set of data {U_α, f_α} satisfying the compatibility condition there corresponds a uniquely defined global section of the sheaf 𝓜/𝒪, where 𝓜 and 𝒪 are the sheaves of germs of meromorphic and holomorphic functions, respectively; the correspondence is such that any global section of 𝓜/𝒪 corresponds to some first Cousin problem (the value of the section κ corresponding to data {f_α} at a point z ∈ U_α is the element of 𝓜_z/𝒪_z with representative f_α). The mapping of global sections φ: Γ(𝓜) → Γ(𝓜/𝒪) maps each meromorphic function f on M to the section κ_f of 𝓜/𝒪, where κ_f(z) is the class in 𝓜_z/𝒪_z of the germ of f at the point z, z ∈ M. The localized first Cousin problem is then: Given a global section κ of the sheaf 𝓜/𝒪, to find a meromorphic function f on M (i.e. a section of 𝓜) such that φ(f) = κ.
Theorems concerning the solvability of the first Cousin problem may be regarded as a multi-dimensional generalization of the Mittag-Leffler theorem on the construction of a meromorphic function with prescribed poles. The problem in cohomological formulation, with a fixed covering 𝔘, is solvable (for arbitrary compatible {f_αβ}) if and only if H¹(𝔘, 𝒪) = 0 (the Čech cohomology for 𝔘 with holomorphic coefficients is trivial).
A specific first Cousin problem on M is solvable if and only if the corresponding section of 𝓜/𝒪 belongs to the image of the mapping φ. An arbitrary first Cousin problem on M is solvable if and only if φ is surjective. On any complex manifold M one has an exact sequence

\Gamma(\mathcal{M}) \stackrel{\phi}{\to} \Gamma(\mathcal{M}/\mathcal{O}) \to H^1(M, \mathcal{O}).
If the Čech cohomology for M with coefficients in 𝒪 is trivial (i.e. H¹(M, 𝒪) = 0), then φ is surjective and H¹(𝔘, 𝒪) = 0 for any covering 𝔘 of M. Thus, if H¹(M, 𝒪) = 0, any first Cousin problem is solvable on M (in the classical, cohomological and local versions). In particular, the problem is solvable in all domains of holomorphy and on Stein manifolds (cf. Stein manifold). If D ⊂ C², then the first Cousin problem in D is solvable if and only if D is a domain of holomorphy. An example of an unsolvable first Cousin problem is: M = C² \ {0}, U_α = {z_α ≠ 0}, α = 1, 2, f_1 = (z_1 z_2)^{-1}, f_2 = 0.
Second (multiplicative) Cousin problem. Given an open covering 𝔘 = {U_α} of a complex manifold M and, in each U_α, a meromorphic function f_α, f_α ≢ 0 on each component of U_α, with the assumption that the functions f_αβ = f_α f_β^{-1} are holomorphic and nowhere vanishing in U_αβ for all α, β (compatibility condition). It is required to construct a meromorphic function f on M such that the functions f f_α^{-1} are holomorphic and nowhere vanishing in U_α for all α.
The cohomological formulation of the second Cousin problem is as follows. Given the covering 𝔘 and functions f_αβ, holomorphic and nowhere vanishing in the intersections U_αβ, and forming a multiplicative 1-cocycle, i.e.

f_{\alpha\beta} f_{\beta\alpha} = 1 in U_{\alpha\beta},

f_{\alpha\beta} f_{\beta\gamma} f_{\gamma\alpha} = 1 in U_\alpha \cap U_\beta \cap U_\gamma,
it is required to find functions h_α, holomorphic and nowhere vanishing in U_α, such that f_αβ = h_β h_α^{-1} in U_αβ for all α, β. If the cocycle {f_αβ} corresponds to the data of a second Cousin problem and the required h_α exist, then the function f = {f_α h_α in U_α} is defined and meromorphic throughout M and is a solution to the given second Cousin problem. Conversely, if a specific second Cousin problem is solvable, then the corresponding cocycle is a holomorphic coboundary.
The localized second Cousin problem. To each set of data {U_α, f_α} for the second Cousin problem there corresponds a uniquely defined global section of the sheaf 𝓜*/𝒪* (in analogy to the first Cousin problem), where 𝓜* = 𝓜 \ {0} (with 0 the null section) is the multiplicative sheaf of germs of meromorphic functions and 𝒪* is the subsheaf of 𝒪 in which each stalk 𝒪*_z consists of germs of holomorphic functions that do not vanish at z. The mapping of global sections

\psi: \Gamma(\mathcal{M}^*) \to \Gamma(\mathcal{M}^*/\mathcal{O}^*)

maps a meromorphic function f to the section κ_f of the sheaf 𝓜*/𝒪*, where κ_f(z) is the class in 𝓜*_z/𝒪*_z of the germ of f at z, z ∈ M. The localized second Cousin problem is: Given a global section κ* of the sheaf 𝓜*/𝒪*, to find a meromorphic function f on M, f ≢ 0 on the components of M (i.e. a global section of 𝓜*), such that ψ(f) = κ*.
The sections of 𝓜*/𝒪* uniquely correspond to divisors (cf. Divisor); therefore 𝒟 = 𝓜*/𝒪* is called the sheaf of germs of divisors. A divisor on a complex manifold M is a formal locally finite sum Σ_j k_j Δ_j, where the k_j are integers and the Δ_j analytic subsets of M of pure codimension 1. To each meromorphic function f corresponds the divisor whose terms are the irreducible components of the zero and polar sets of f with respective multiplicities k_j, with multiplicities of zeros considered positive and those of poles negative. The mapping ψ maps each function f to its divisor (f); such divisors are called proper divisors. The second Cousin problem in terms of divisors is: Given a divisor Δ on the manifold M, to construct a meromorphic function f on M such that Δ = (f).
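As a one-variable illustration of the divisor formulation (added here as a sketch; the example is not from the article), consider the divisor Δ = 1·{0} − 2·{1} on M = C. A meromorphic function with Δ = (f) is

f(z) = \frac{z}{(z - 1)^2},

which has a simple zero at 0 and a double pole at 1; for one complex variable the solvability of such problems is precisely the content of the classical Weierstrass theorem mentioned below.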
Theorems concerning the solvability of the second Cousin problem may be regarded as multi-dimensional generalizations of Weierstrass' theorem on the construction of a meromorphic function with prescribed zeros and poles. As in the case of the first Cousin problem, a necessary and sufficient condition for the solvability of any second Cousin problem in cohomological version is that H¹(M, 𝒪*) = 0. Unfortunately, the sheaf 𝒪* is not coherent, and this condition is less effective. The attempt to reduce a given second Cousin problem to a first Cousin problem by taking logarithms encounters an obstruction in the form of an integral 2-cocycle, and one obtains an exact sequence
H^1(M, \mathcal{O}) \to H^1(M, \mathcal{O}^*) \stackrel{\alpha}{\to} H^2(M, \mathbf{Z}),
where Z is the constant sheaf of integers. Thus, if H¹(M, 𝒪) = H²(M, Z) = 0, any second Cousin problem is solvable on M, and any divisor is proper. If M is a Stein manifold, then α is an isomorphism; hence the topological condition H²(M, Z) = 0 on a Stein manifold M is necessary and sufficient for the second Cousin problem in cohomological version to be solvable. The composite mapping c = α ∘ β,
\Gamma(\mathcal{D}) \stackrel{\beta}{\to} H^1(M, \mathcal{O}^*) \stackrel{\alpha}{\to} H^2(M, \mathbf{Z}),
maps each divisor Δ to an element c(Δ) of the group H²(M, Z), which is known as the Chern class of Δ. The specific second Cousin problem corresponding to Δ is solvable, assuming H¹(M, 𝒪) = 0, if and only if the Chern class of Δ is trivial: c(Δ) = 0. On a Stein manifold, the mapping c is surjective; moreover, every element in H²(M, Z) may be expressed as c(Δ) for some divisor Δ with positive multiplicities k_j. Thus, the obstructions to the solution of the second Cousin problem on a Stein manifold M are completely described by the group H²(M, Z).
Examples. 1) M = C² \ {z_1 = z_2, |z_1| = 1}; the first Cousin problem is unsolvable; the second Cousin problem is unsolvable, e.g., for the divisor Δ = M ∩ {z_1 = z_2, |z_1| < 1} with multiplicity 1.
2) M = {|z_1² + z_2² + z_3² − 1| < 1} ⊂ C³, Δ is one of the components of the intersection of M and the plane z_2 = iz_1, with multiplicity 1. The second Cousin problem is unsolvable (M is a domain of holomorphy, the first Cousin problem is solvable).
3) The first and second Cousin problems are solvable in domains D = D_1 × ⋯ × D_n ⊂ C^n, where the D_j are plane domains and all D_j, with the possible exception of one, are simply connected.
References
[1] Cousin, P.: 'Sur les fonctions de n variables', Acta Math. 19 (1895), 1-62.
[2] Shabat, B.V.: Introduction to complex analysis, 2, Moscow, 1976 (in Russian).
[3] Gunning, R.C. and Rossi, H.: Analytic functions of several complex variables, Prentice-Hall, 1965.
E.M. Chirka
Editorial comments. The Cousin problems are related to the Poincaré problem (is a meromorphic function given on a complex manifold X globally the quotient of two holomorphic functions whose germs are relatively prime for all x ∈ X?) and to the, more algebraic, Theorems A and B of H. Cartan and J.-P. Serre, cf. [A1], [A2], [A3].
References
[A1] Cazacu, C.A.: Theorie der Funktionen mehrerer komplexer Veränderlicher, Birkhäuser, 1975 (translated from the Rumanian).
[A2] Grauert, H. and Remmert, R.: Theory of Stein spaces, Springer, 1979 (translated from the German).
[A3] Hörmander, L.: An introduction to complex analysis in several variables, North-Holland, 1973.
[A4] Krantz, S.G.: Function theory of several complex variables, Wiley, 1982, Chapt. 6.
[A5] Range, R.M.: Holomorphic functions and integral representations in several complex variables, Springer, 1986, Chapt. 6.
AMS 1980 Subject Classification: 32C35
COVARIANCE - A numerical characteristic of the joint distribution of two random variables, equal to the mathematical expectation of the product of the deviations of these two random variables from their mathematical expectations. The covariance is defined for random variables X_1 and X_2 with finite variance and is usually denoted by cov(X_1, X_2). Thus,
\operatorname{cov}(X_1, X_2) = E[(X_1 - EX_1)(X_2 - EX_2)],
so that cov(X_1, X_2) = cov(X_2, X_1); cov(X, X) = DX = var(X). The covariance naturally occurs in the expression for the variance of the sum of two random variables:
D(X_1 + X_2) = DX_1 + DX_2 + 2\operatorname{cov}(X_1, X_2).
If X_1 and X_2 are independent random variables, then cov(X_1, X_2) = 0. The covariance gives a characterization of the dependence of the random variables; the correlation coefficient is defined by means of the covariance. In order to estimate the covariance statistically one uses the sample covariance, computed from the formula
\frac{1}{n-1} \sum_{i=1}^{n} (X_1^{(i)} - \bar{X}_1)(X_2^{(i)} - \bar{X}_2),
where the (X_1^{(i)}, X_2^{(i)}), i = 1, ..., n, are independent random variables and X̄_1 and X̄_2 are their arithmetic means.
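A small numerical sketch of these formulas (the data below are simulated for illustration and are not from the article): the sample covariance with the 1/(n−1) factor above coincides with numpy's default unbiased estimator, and the variance-of-the-sum identity can be checked on the same sample.

import numpy as np

def sample_cov(x1, x2):
    """Sample covariance: sum of (x1_i - mean1)(x2_i - mean2), divided by n - 1."""
    x1, x2 = np.asarray(x1), np.asarray(x2)
    return np.sum((x1 - x1.mean()) * (x2 - x2.mean())) / (len(x1) - 1)

rng = np.random.default_rng(0)
x1 = rng.standard_normal(1000)
x2 = 0.5 * x1 + rng.standard_normal(1000)     # correlated with x1 by construction

c = sample_cov(x1, x2)
print(c, np.cov(x1, x2)[0, 1])                # the two estimates agree

# Sample analogue of D(X1 + X2) = D(X1) + D(X2) + 2 cov(X1, X2).
lhs = np.var(x1 + x2, ddof=1)
rhs = np.var(x1, ddof=1) + np.var(x2, ddof=1) + 2.0 * c
print(abs(lhs - rhs))                          # ~0 up to rounding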
A.V. Prokhorov
Editorial comments. In the Western literature one always uses V(X) or var(X) for the variance, instead of D(X).
AMS 1980 Subject Classification: 62J10
COVARIANCE ANALYSIS - A collection of methods in mathematical statistics relating to the analysis of models of the dependence of the mean value of some random variable Y on a set of non-quantitative factors F and simultaneously on a set of quantitative factors x.
The variables x are called the concomitant variables relative to Y; the factors F define a set of conditions of a qualitative nature under which the observations on Y and x are obtained, and are described by so-called indicator variables; among the concomitant and indicator variables there can be both random and non-random ones (controlled in the experiment); if the random variable Y is a vector, then one talks about multivariate analysis of covariance.
The basic theoretic and applied problems in the analysis of covariance relate to linear models. For example, if the scheme under analysis consists of n observations Y_1, ..., Y_n with p concomitant variables and k possible types of experimental conditions, then the linear model of the corresponding anal