Variable Structure Systems with Application to Economics and Biology: Proceedings of the Second US-Italy Seminar on Variable Structure Systems, May 1974
Lecture Notes in Economics and Mathematical Systems
(Vol. 1-15: Lecture Notes in Operations Research and Mathematical Economics, Vol. 16-59: Lecture Notes in Operations Research and Mathematical Systems) Vol. 1: H. Bühlmann, H. Loeffel, E. Nievergelt, Einführung in die Theorie und Praxis der Entscheidung bei Unsicherheit. 2. Auflage, IV, 125 Seiten. 1969. DM 18,-
Vol. 2: U. N. Bhat, A Study of the Queueing Systems M/G/1 and GI/M/1. VIII, 78 pages. 1968. DM 18,-
Vol. 3: A. Strauss, An Introduction to Optimal Control Theory. Out of print
Vol. 4: Branch and Bound: Eine Einführung. 2., geänderte Auflage. Herausgegeben von F. Weinberg. VII, 174 Seiten. 1973. DM 20,-
Vol. 5: L. P. Hyvärinen, Information Theory for Systems Engineers. VII, 205 pages. 1968. DM 18,-
Vol. 6: H. P. Künzi, O. Müller, E. Nievergelt, Einführungskursus in die dynamische Programmierung. IV, 103 Seiten. 1968. DM 18,-
Vol. 7: W. Popp, Einführung in die Theorie der Lagerhaltung. VI, 173 Seiten. 1968. DM 18,-
Vol. 8: J. Teghem, J. Loris-Teghem, J. P. Lambotte, Modèles d'Attente M/G/1 et GI/M/1 à Arrivées et Services en Groupes. IV, 53 pages. 1969. DM 18,-
Vol. 9: E. Schultze, Einführung in die mathematischen Grundlagen der Informationstheorie. VI, 116 Seiten. 1969. DM 18,-
Vol. 10: D. Hochstädter, Stochastische Lagerhaltungsmodelle. VI, 269 Seiten. 1969. DM 20,-
Vol. 11/12: Mathematical Systems Theory and Economics. Edited by H. W. Kuhn and G. P. Szegö. VIII, IV, 486 pages. 1969. DM 38,-
Vol. 13: Heuristische Planungsmethoden. Herausgegeben von F. Weinberg und C. A. Zehnder. II, 93 Seiten. 1969. DM 18,-
Vol. 14: Computing Methods in Optimization Problems. 191 pages. 1969. DM 18,-
Vol. 15: Economic Models, Estimation and Risk Programming: Essays in Honor of Gerhard Tintner. Edited by K. A. Fox, G. V. L. Narasimham and J. K. Sengupta. VIII, 461 pages. 1969. DM 27,-
Vol. 16: H. P. Künzi und W. Oettli, Nichtlineare Optimierung: Neuere Verfahren, Bibliographie. IV, 180 Seiten. 1969. DM 18,-
Vol. 17: H. Bauer und K. Neumann, Berechnung optimaler Steuerungen, Maximumprinzip und dynamische Optimierung. VIII, 188 Seiten. 1969. DM 18,-
Vol. 18: M. Wolff, Optimale Instandhaltungspolitiken in einfachen Systemen. V, 143 Seiten. 1970. DM 18,-
Vol. 19: L. P. Hyvärinen, Mathematical Modeling for Industrial Processes. VI, 122 pages. 1970. DM 18,-
Vol. 20: G. Uebe, Optimale Fahrpläne. IX, 161 Seiten. 1970. DM 18,-
Vol. 21: Th. Liebling, Graphentheorie in Planungs- und Tourenproblemen am Beispiel des städtischen Straßendienstes. IX, 118 Seiten. 1970. DM 18,-
Vol. 22: W. Eichhorn, Theorie der homogenen Produktionsfunktion. VIII, 119 Seiten. 1970. DM 18,-
Vol. 23: A. Ghosal, Some Aspects of Queueing and Storage Systems. IV, 93 pages. 1970. DM 18,-
Vol. 24: G. Feichtinger, Lernprozesse in stochastischen Automaten. V, 66 Seiten. 1970. DM 18,-
Vol. 25: R. Henn und O. Opitz, Konsum- und Produktionstheorie I. II, 124 Seiten. 1970. DM 18,-
Vol. 26: D. Hochstädter und G. Uebe, Ökonometrische Methoden. XII, 250 Seiten. 1970. DM 20,-
Vol. 27: I. H. Mufti, Computational Methods in Optimal Control Problems. IV, 45 pages. 1970. DM 18,-
Vol. 28: Theoretical Approaches to Non-Numerical Problem Solving. Edited by R. B. Banerji and M. D. Mesarovic. VI, 466 pages. 1970. DM 27,-
Vol. 29: S. E. Elmaghraby, Some Network Models in Management Science. III, 176 pages. 1970. DM 18,-
Vol. 30: H. Noltemeier, Sensitivitätsanalyse bei diskreten linearen Optimierungsproblemen. VI, 102 Seiten. 1970. DM 18,-
Vol. 31: M. Kühlmeyer, Die nichtzentrale t-Verteilung. II, 106 Seiten. 1970. DM 18,-
Vol. 32: F. Bartholomes und G. Hotz, Homomorphismen und Reduktionen linearer Sprachen. XII, 143 Seiten. 1970. DM 18,-
Vol. 33: K. Hinderer, Foundations of Non-stationary Dynamic Programming with Discrete Time Parameter. VI, 160 pages. 1970. DM 18,-
Vol. 34: H. Störmer, Semi-Markoff-Prozesse mit endlich vielen Zuständen. Theorie und Anwendungen. VII, 128 Seiten. 1970. DM 18,-
Vol. 35: F. Ferschl, Markovketten. VI, 168 Seiten. 1970. DM 18,-
Vol. 36: M. J. P. Magill, On a General Economic Theory of Motion. VI, 95 pages. 1970. DM 18,-
Vol. 37: H. Müller-Merbach, On Round-Off Errors in Linear Programming. V, 48 pages. 1970. DM 18,-
Vol. 38: Statistische Methoden I. Herausgegeben von E. Walter. VIII, 338 Seiten. 1970. DM 24,-
Vol. 39: Statistische Methoden II. Herausgegeben von E. Walter. IV, 157 Seiten. 1970. DM 18,-
Vol. 40: H. Drygas, The Coordinate-Free Approach to Gauss-Markov Estimation. VIII, 113 pages. 1970. DM 18,-
Vol. 41: U. Ueing, Zwei Lösungsmethoden für nichtkonvexe Programmierungsprobleme. IV, 92 Seiten. 1971. DM 18,-
Vol. 42: A. V. Balakrishnan, Introduction to Optimization Theory in a Hilbert Space. IV, 153 pages. 1971. DM 18,-
Vol. 43: J. A. Morales, Bayesian Full Information Structural Analysis. VI, 154 pages. 1971. DM 18,-
Vol. 44: G. Feichtinger, Stochastische Modelle demographischer Prozesse. XIII, 404 Seiten. 1971. DM 32,-
Vol. 45: K. Wendler, Hauptaustauschschritte (Principal Pivoting). II, 64 Seiten. 1971. DM 18,-
Vol. 46: C. Boucher, Leçons sur la théorie des automates mathématiques. VIII, 193 pages. 1971. DM 20,-
Vol. 47: H. A. Nour Eldin, Optimierung linearer Regelsysteme mit quadratischer Zielfunktion. VIII, 163 Seiten. 1971. DM 18,-
Vol. 48: M. Constam, FORTRAN für Anfänger. 2. Auflage. VI, 148 Seiten. 1973. DM 18,-
Vol. 49: Ch. Schneeweiß, Regelungstechnische stochastische Optimierungsverfahren. XI, 254 Seiten. 1971. DM 24,-
Vol. 50: Unternehmensforschung Heute - Übersichtsvorträge der Zürcher Tagung von SVOR und DGU, September 1970. Herausgegeben von M. Beckmann. IV, 133 Seiten. 1971. DM 18,-
Vol. 51: Digitale Simulation. Herausgegeben von K. Bauknecht und W. Nef. IV, 207 Seiten. 1971. DM 20,-
Vol. 52: Invariant Imbedding. Proceedings of the Summer Workshop on Invariant Imbedding Held at the University of Southern California, June-August 1970. Edited by R. E. Bellman and E. D. Denman. IV, 148 pages. 1971. DM 18,-
Vol. 53: J. Rosenmüller, Kooperative Spiele und Märkte. IV, 152 Seiten. 1971. DM 18,-
Vol. 54: C. C. von Weizsäcker, Steady State Capital Theory. III, 102 pages. 1971. DM 18,-
Vol. 55: P. A. V. B. Swamy, Statistical Inference in Random Coefficient Regression Models. VIII, 209 pages. 1971. DM 22,-
Vol. 56: Mohamed A. El-Hodiri, Constrained Extrema. Introduction to the Differentiable Case with Economic Applications. III, 130 pages. 1971. DM 18,-
Vol. 57: E. Freund, Zeitvariable Mehrgrößensysteme. VIII, 160 Seiten. 1971. DM 20,-
Vol. 58: P. B. Hagelschuer, Theorie der linearen Dekomposition. VII, 191 Seiten. 1971. DM 20,-
Lecture Notes in Economics and Mathematical Systems Managing Editors: M. Beckmann and H. P. Künzi
Systems Theory
Variable Structure Systems with Application to Economics and Biology Proceedings of the Second US-Italy Seminar on Variable Structure Systems, May 1974
Edited by A. Ruberti and R. R. Mohler
Springer-Verlag Berlin · Heidelberg · New York 1975
Editorial Board
H. Albach · A. V. Balakrishnan · M. Beckmann (Managing Editor) · P. Dhrymes · J. Green · W. Hildenbrand · W. Krelle · H. P. Künzi (Managing Editor) · K. Ritter · R. Sato · H. Schelbert · P. Schönfeld
Managing Editors
Prof. Dr. M. Beckmann, Brown University, Providence, RI 02912/USA
Prof. Dr. H. P. Künzi, Universität Zürich, 8090 Zürich/Schweiz
Editors
Dr. A. Ruberti, Istituto di Automatica, Università di Roma, 00184 Roma, Italy
Dr. R. R. Mohler, Dept. of Electrical and Computer Engineering, Oregon State University, Corvallis, Oregon 97331, USA
AMS Subject Classifications (1970): 90AXX, 90CXX, 92A05, 92A15, 93AXX, 93BXX
ISBN 978-3-540-07390-1    ISBN 978-3-642-47457-6 (eBook)    DOI 10.1007/978-3-642-47457-6
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically those of translation, reprinting, re-use of illustrations, broadcasting, reproduction by photocopying machine or similar means, and storage in data banks. Under § 54 of the German Copyright Law, where copies are made for other than private use, a fee is payable to the publisher, the amount of the fee to be determined by agreement with the publisher. © by Springer-Verlag Berlin · Heidelberg 1975
PREFACE
The proceedings of the Second US-Italy Seminar on Variable Structure
Systems are published in this volume. Like the first seminar, its conception
evolved from common research interests on bilinear systems at the Istituto di
Automatica of Rome University and at the Electrical and Computer Engineering
Department of Oregon State University. Again, the seminar was focused on
variable structure systems in general. In this case, however, emphasis is
given to applications in biology and economics along with theoretical
investigations which are so necessary to establish a unified theory and to motivate
further developments in these applications of social significance.
By bringing together the talents of social and biological scientists with
those of engineers and mathematicians from throughout Italy and the United States,
the seminar was intended to yield a cross-pollination of significant results and
a base for more meaningful future research. The editors are encouraged by the
progress which, they hope the reader will agree, has been made in this direction.
No pretense is made, however, that a completely satisfactory integration of
theoretical results and applications has been accomplished at this time.
Among the more important conclusions which have resulted from this seminar
are that bilinear and more general variable structure models arise in a natural
manner from basic principles for certain biological and economic processes.
Interesting results have been achieved on representation, identification and
control theory for bilinear systems. Nevertheless, much remains to be done on a
number of problems in such areas as control system design, analysis and
comparison of different abstract representations, analysis of structural properties
(including controllability, stability, and so on), and identification with
additive input noise.
The control problem for bilinear systems naturally leads to a feedback
structure and therefore to more complex types of systems (for instance, with
quadratic terms in the differential equation). Similarly, these systems appear
in modeling biological and socio-economic processes with built-in control
mechanisms. Thus, the investigation of these classes of systems seems to be the
natural development of the research on bilinear systems, within the wider
framework of variable structure systems.
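The point made above, that feedback control of a bilinear system produces quadratic terms, can be made concrete in a minimal sketch (not drawn from any model in this volume; all coefficients are illustrative assumptions): a scalar bilinear system dx/dt = ax + u·bx under linear state feedback u = kx becomes dx/dt = ax + kb·x², a quadratic differential equation.

```python
# Hedged sketch: a scalar bilinear system dx/dt = a*x + u*b*x with feedback
# u = k*x acquires a quadratic term.  Coefficients a, b, k are illustrative.
a, b, k = -1.0, 0.5, 0.8

def bilinear_rhs(x, u):
    # open-loop bilinear dynamics: linear in x, linear in u, with a u*x product
    return a * x + u * b * x

def closed_loop_rhs(x):
    # substituting u = k*x yields a*x + k*b*x**2 -- a quadratic vector field
    return bilinear_rhs(x, k * x)

# forward-Euler simulation of the closed loop from x(0) = 1
x, dt = 1.0, 1e-3
for _ in range(5000):          # integrate up to t = 5
    x += dt * closed_loop_rhs(x)
print(x)                       # decays toward the stable equilibrium x = 0
```

With these illustrative coefficients the quadratic term is dominated by the linear one near the origin, so the closed loop still converges to zero, which is precisely the local analysis question raised by such built-in control mechanisms.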
The Editors wish to thank the Consiglio Nazionale delle Ricerche and the
National Science Foundation for their support of this seminar as a part of the
US-Italian Cultural Program. Also, sincere appreciation is extended to all
colleagues and friends who collaborated to make this a successful venture.
CONTENTS
Stochastic Bilinear Partial Differential Equations ........ 1
A. V. Balakrishnan, University of California, Los Angeles, California
Time-Varying Bilinear Systems ........ 44
On the Reachable Set for Bilinear Systems ........ 54
R. Brockett, Harvard University, Cambridge, Massachusetts
Algebraic Realization Theory of Two-Dimensional Filters ........ 64
E. Fornasini, G. Marchesini, Università di Padova, Padova, Italy
Controllability of Bilinear Systems ........ 83
G-S. J. Cheng, T. J. Tarn, D. L. Elliott, Washington University, St. Louis, Missouri
Periodic Control of Singularly Perturbed Systems ........ 101
G. Guardabassi, A. Locatelli, Politecnico di Milano, Milan, Italy
Estimation for Bilinear Stochastic Systems ........ 116
A. Willsky, Steven I. Marcus, Massachusetts Institute of Technology, Cambridge, Massachusetts
A Probabilistic Approach to Identifiability ........ 138
G. Picci, Università di Padova, Padova, Italy
Some Examples of Dynamic Bilinear Models in Economics ........ 163
M. Aoki, University of Illinois, Urbana, Illinois
Bilinearity and Sensitivity in Macroeconomy ........ 170
P. d'Alessandro, Università di Roma, Rome, Italy
Variable Parameter Structures in Technology Assessment and Land Use ........ 200
H. Koenig, Michigan State University, East Lansing, Michigan
An Optimization Study of the Pollution Subsystem of the World Dynamics Model ........ 206
L. Mariani, Università di Padova, Padova, Italy; B. Nicoletti, Università di Napoli, Naples, Italy
A Basis for Variable Structure Models in Human Biology ........ 233
R. Mohler, Oregon State University, Corvallis, Oregon
The Immune Response as a Variable Structure System ........ 244
C. Bruni, M. Giovenco, G. Koch, R. Strom, Università di Roma, Rome, Italy
Nonlinear Systems in Models for Enzyme Cascades ........ 265
H. T. Banks, R. P. Miech, D. J. Zinberg, Brown University, Providence, Rhode Island
Mathematical Model of the Peripheral Nervous Acoustical System: Applications to Diagnosis and Prostheses ........ 278
E. Biondi, F. Grandori, Politecnico di Milano, Milan, Italy
A Systems Analysis of Cerebral Dehydration ........ 299
R. Bell, University of California, Davis, California
STOCHASTIC BILINEAR PARTIAL DIFFERENTIAL EQUATIONS
A. V. Balakrishnan
System Science Department, University of California, Los Angeles, California
Abstract: We prove existence and uniqueness theorems for a class of partial
differential equations with a bilinear stochastic forcing term. We give both
white noise and Wiener process [Ito integral] versions and indicate the interrelationships. Another feature is the use of semigroup theory, in contrast to the
Lions-Magenes variational theory.
1. Introduction
Let us begin with a simple example in one spatial variable x of the kind of stochastic bilinear partial differential equation we have in mind:

∂f/∂t = ∂²f/∂x² + f(t,x) n(t,x),   0 < t, x ∈ Ω   (1.1)

(Ω being an open interval of the real line) with appropriate initial conditions and boundary conditions such that the Cauchy problem

∂f/∂t = ∂²f/∂x²,   0 < t, x ∈ Ω   (1.2)

has a unique solution with the usual continuity properties in t. The main question then concerns the bilinear forcing term n(t,x), which we wish to allow to be 'white noise'. If we fix the point x in Ω, we should clearly get white Gaussian noise in the time variable t. Also, if we keep t fixed and take two distinct space points x₁, x₂, then n(t,x₁), n(t,x₂) should be stochastically independent. On the other hand, we already know that for fixed t and x, n(t,x) will have infinite variance, so that such 'pointwise' statements must be suitably modified. In the Wiener process model, this is in effect accomplished by going to the 'integral' version. In this paper we shall retain the 'differential' form (1.1), for various reasons, some of which will hopefully be clearer as we proceed.
In the first place we assume that n(t,x) for each 'realization' is such that it is Lebesgue measurable and square integrable in the product space Ω × [0 < t < T], the time interval being fixed and finite throughout. Let H denote the Hilbert space L₂(Ω). Then

n(t,·) ∈ H a.e. in t, 0 < t < T,

and

∫₀ᵀ ‖n(t,·)‖² dt < ∞.

For any h(·) in W, we should then have that

[n, h]

is Gaussian, [·,·] denoting the inner product in W (and more generally in any Hilbert space we may be working with), with variance

d [h, h],

where d denotes the constant corresponding to the noise spectral density. Further, the independence properties may now be stated as: for any g, h ∈ W, [n,h] and [n,g] are jointly Gaussian with

E([n,h] · [n,g]) = d [g,h],
E denoting 'expectation'. Thus, given the distinct points (t₁,x₁), (t₂,x₂), we can find functions 'approximating' delta functions as closely as we wish at these points, even of the product form if necessary, such that their inner product in W is zero.
It is natural then also to pose (1.2) as an abstract Cauchy problem in H, or in other words as a semigroup equation, the solution for each t being in H:

df/dt = A f(t),

where A is the differential operator in (1.2) with the given boundary conditions, and has to be the infinitesimal generator of a semigroup of operators, strongly continuous at the origin. Using the notation

B(f,n)

to denote the product function f(x) n(x), each being an element of H, we may then rewrite (1.1) as an abstract 'first-order' equation in H:

df/dt = A f(t) + B(f(t), n(t)),   0 < t < T.   (1.3)
This, then, in essence is the abstract setting we shall employ. We shall study both Ito solutions (see [1] for a version of the Ito solution in the case where H is finite dimensional) as well as white noise solutions. The fundamental notions concerning Ito integrals and white noise integrals are introduced in Section 2. The Ito solution is described in Section 3 and the (extended) white noise solution in Section 4, where also some of the interrelationships between the two solutions are described.
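Although the paper develops no numerical scheme, the model problem (1.1) can be sketched in a few lines of code. The following is a hedged finite-difference Euler-Maruyama discretization on Ω = (0,1) with zero Dirichlet boundary data; the grid sizes, time step, and initial condition are illustrative assumptions (not from the paper), and space-time white noise is approximated per cell by independent Gaussians whose increment over a step has variance dt/dx.

```python
import numpy as np

# Hedged sketch of (1.1): df/dt = f_xx + f*n on (0,1), zero boundaries.
# Discretization choices below are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
nx, nt = 50, 2000
dx, dt = 1.0 / (nx + 1), 1e-5          # dt/dx**2 ~ 0.026: explicit scheme stable
x = np.linspace(dx, 1 - dx, nx)
f = np.sin(np.pi * x)                   # illustrative initial condition f(0, x)

for _ in range(nt):
    # discrete Laplacian; endpoints recomputed with zero ghost values
    lap = (np.roll(f, 1) + np.roll(f, -1) - 2 * f) / dx**2
    lap[0] = (f[1] - 2 * f[0]) / dx**2
    lap[-1] = (f[-2] - 2 * f[-1]) / dx**2
    # discretized space-time white noise increment, variance dt/dx per cell
    dW = rng.standard_normal(nx) * np.sqrt(dt / dx)
    f = f + dt * lap + f * dW           # bilinear forcing term f(t,x) n(t,x)

print(f.shape, np.isfinite(f).all())
```

The bilinear character shows up in the last line: the noise multiplies the state, rather than entering additively.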
2. White Noise: Fundamental Notions
Because the use of the white noise concept is unique to this presentation and is basic to the discussion of solutions to non-linear equations, we shall begin with a brief exposition of the relevant ideas.
Let H be a real separable Hilbert space; even the finite dimensional case is not without interest. Let

W = L₂([0,T]; H).

Then of course W is also a similar real separable Hilbert space. We shall use [·,·] to denote the inner product in all Hilbert spaces involved. For f, g in W, let us note that

[f,g] = ∫₀ᵀ [f(t), g(t)] dt.

We invoke a 'Function Space' definition of the 'white noise' process. Thus any element of W will be a white noise sample function or sample point. We shall use the generic notation w to denote a sample point. Each w then is an element of W, with corresponding function w(t), 0 < t < T, which is defined a.e. in t as an element of H, and for each element h in H
[w(t), h]

is a Lebesgue measurable function of t, and square integrable in [0,T]. As with any Lebesgue measurable function, we cannot talk about the value at any fixed t, for arbitrary w. We must define next (to complete the definition of a Function Space stochastic process) a sigma-algebra of sets. This will be the sigma-algebra of Borel sets in W, generated by the class of all open sets in W. Finally we must define a measure on this sigma-algebra. Here is where the peculiarity of the 'white noise' notion comes in. We shall be able to define only a 'weak distribution', or a measure on cylinder sets (with bases in finite dimensional subspaces), countably additive on cylinder sets with bases in the same finite dimensional subspace. Put another way, let B be a Borel set in W; then for each finite-dimensional subspace Eₙ, the measure of the corresponding cylinder set with base in Eₙ is defined and countably additive for each fixed n. Thus we cannot in general talk about the probability of the event B but only of its finite-dimensional 'cross-sections'. Let μ denote such a measure. If h is an arbitrary element of W, then
[w, h]

is a numerical valued random variable, since μ is defined and countably additive on inverse images of Borel sets of the real line:

{w | [w,h] ∈ Borel set},

these being cylinder sets with base in the same one-dimensional space. Moreover μ is completely specified by the characteristic function:

φ(h) = E(exp i[w,h]).   (2.1)

For an exposition on such measures see [2], [3]. By 'white noise' we shall mean that the corresponding 'weak distribution' is the 'Gauss measure', defined by

φ(h) = exp(−(1/2)[h,h])

(we omit a possible arbitrary multiplicative constant in the exponent). Under this definition, it is immediate that for any h in W, [w,h] is Gaussian with mean zero and variance [h,h]; and further, for any two elements g, h in W, the random variables [w,h], [w,g] are jointly Gaussian with covariance

E([w,h] · [w,g]) = [h,g].   (2.2)
The most important question is what we shall mean by a 'random variable'. It is clear that we must have, denoting by f(w) the function mapping W into some other Hilbert space Y, that f(·) must be (Borel) measurable. But since μ is defined only on cylinder sets, it need not be defined in general on inverse sets of the form:

{w | f(w) ∈ Borel set in Y}.

But we may consider functions in the first instance such that inverse images of Borel sets are cylinder sets. Such a function is called a 'tame' function. Clearly any tame function has the form f(Pₙw), where f(·) is measurable and Pₙ is a finite-dimensional projection. Thus every tame function will be a 'random variable'. To define a random variable in general we may use the familiar technique in analysis: 'completion' in an appropriate topology. Here it is convenient to use convergence in probability. Thus we may now make a precise definition: Let f(·) map W into Y. It will be called a random variable if f(·) is continuous and, given any ε, δ > 0, we can find a finite dimensional projection P_ε such that for all finite-dimensional projections P, Q bigger than P_ε

μ[w | ‖f(Pw) − f(Qw)‖ ≥ ε] < δ.   (2.3)

Or, put another way, if {P_α} denotes the class of all finite dimensional projections and we consider {P_α} as a 'directed system' under the usual ordering, then f(P_α w) must be Cauchy in probability. This means that if f(·) is a random variable, then we may take, as a limit in probability,

μ[f(w) ∈ B_Y] = limₙ μ[f(Pₙw) ∈ B_Y],

where Pₙ is a sequence of finite-dimensional projections converging to the identity; Eₙ is a finite dimensional subspace for each n, Eₙ₊₁ ⊃ Eₙ, ∪ₙ Eₙ = W. More generally, a Cauchy sequence in probability of tame functions will be identified with a random variable. To distinguish the two cases, we shall call an f(w) that is continuous and satisfies (2.3) a 'white noise integral'. Note that a random variable need not be defined for every w.
Not every continuous function is a random variable. For example, take

f(w) = [w,w].

This is continuous in w (and this is of course the crux of the white noise point of view), but it is NOT a random variable. In fact we can give a precise answer (see [4]): Let L denote any bounded linear transformation mapping W into W. Then

f(w) = [Lw,w]   (2.4)

is a random variable if and only if (L + L*) is trace-class (nuclear), and in that case the variable has finite second moment, and

E([Lw,w]) = Trace (L + L*)/2,   (2.5)

Var([Lw,w]) = 2 ‖(L + L*)/2‖²_{H.S.}.   (2.6)
As we might expect, if the function f(·) is linear, the characterization is equally simple. Thus, let L be any bounded linear transformation mapping W into Y. Then

f(w) = Lw

is a random variable if and only if L is Hilbert-Schmidt, and in that case it is Gaussian with zero mean, and further

E(‖Lw‖²) = ‖L‖²_{H.S.}.   (2.7)
It would be useful at this point to pause and look at a concrete example, in particular to note the distinction from Ito multiple integrals. Thus let H = R¹, the real line. Then

[h,w] = ∫₀ᵀ h(t) w(t) dt.

If (however abhorrent in a rigorous sense) we consider the heuristic equivalence

w(t) ~ Ẇ(t),

where W(·) is the Wiener process on C(0,T), then it is true that

∫₀ᵀ h(t) w(t) dt = ∫₀ᵀ h(t) dW(t)   (2.8)

in the sense that both sides yield the same random variable distribution. However, differences appear as soon as we go to non-linear functions. Thus let k(·) be any function in W, and define the linear transformation L by:

L h = [k,h] k.

Then [Lw,w] = [k,w]², whereas the corresponding double Ito integral yields [k,w]² − ∫₀ᵀ k(t)² dt.
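The rank-one example above can be discretized directly. The following is a hedged sketch (the time grid, sample size, and the choice k(t) = t are illustrative assumptions): both sides of (2.8) become the Gaussian sum Σ k(tᵢ) ΔWᵢ, and the white-noise quadratic form [k,w]² differs from its Ito counterpart exactly by the constant ∫ k(t)² dt.

```python
import numpy as np

# Hedged discretization of the rank-one example L h = [k,h] k.
# [Lw, w] = [k, w]^2, while the double Ito integral is [k, w]^2 - ∫ k(t)^2 dt.
rng = np.random.default_rng(2)
m, n_samp = 200, 20_000
t = np.linspace(0.0, 1.0, m, endpoint=False)
dt = 1.0 / m
k = t                                        # illustrative element k(.) of W
trace_L = np.sum(k**2) * dt                  # discrete ∫ k(t)^2 dt = Trace L

dW = rng.standard_normal((n_samp, m)) * np.sqrt(dt)   # Wiener increments
kw = dW @ k                                  # samples of [k, w] ~ N(0, ∫ k^2)
white_noise_version = kw**2                  # [Lw, w]: mean = Trace L
ito_version = kw**2 - trace_L                # Ito version: mean = 0
print(white_noise_version.mean(), ito_version.mean(), trace_L)
```

The constant offset Trace L = ∫ k(t)² dt is exactly the correction appearing in (2.10) below for general nuclear symmetric kernels.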
To sharpen this difference further, let L be any Hilbert-Schmidt operator mapping W into W; we know then that it can be characterized by a kernel K(t,s):

(L f)(t) = ∫₀ᵀ K(t,s) f(s) ds,

where

∫₀ᵀ ∫₀ᵀ K(t,s)² ds dt = ‖L‖²_{H.S.} < ∞.

Assume for simplicity of notation that L is symmetric:

K(t,s) = K(s,t).

Let {φₙ(·)} be any complete orthonormal system in W. Let Pₙ denote the projection operator corresponding to the span of φₖ, k = 1,...,n. We know that we have the representation:

K(t,s) = Σᵢ Σⱼ aᵢⱼ φᵢ(t) φⱼ(s).

Moreover Pₙ L Pₙ has the kernel:

Kₙ(t,s) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ φᵢ(t) φⱼ(s).

Now

[L Pₙw, Pₙw] = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ [w,φᵢ] [w,φⱼ],
whereas the Ito integral

∫₀ᵀ ∫₀ᵀ Kₙ(t,s) dW(s) dW(t) = Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ aᵢⱼ [w,φᵢ] [w,φⱼ] − Σᵢ₌₁ⁿ aᵢᵢ.

On the other hand

E( (∫₀ᵀ ∫₀ᵀ Kₙ(t,s) dW(s) dW(t) − ∫₀ᵀ ∫₀ᵀ Kₘ(t,s) dW(s) dW(t))² ) = (1/2) ∫₀ᵀ ∫₀ᵀ (Kₙ(t,s) − Kₘ(t,s))² ds dt,   (2.9)

so that the left side of (2.9) converges (in the mean of order two) to the Ito integral

∫₀ᵀ ∫₀ᵀ K(t,s) dW(s) dW(t),

whereas

[L Pₙw, Pₙw]

converges only if the series

Σᵢ₌₁^∞ aᵢᵢ converges,
or L (now assumed symmetric) is trace class. On the other hand, given any Hilbert-Schmidt operator L, we can always find a sequence Lₙ of trace-class operators with zero trace such that

‖L − Lₙ‖²_{H.S.}

goes to zero, and hence the corresponding Ito integrals converge. For each Lₙ, the Ito integral as well as our 'white noise' integral yield the same random variable (that is, the distribution is the same). In fact we can define the Ito integral as the mean-square limit of the white noise integrals:

I₂(L) = limₙ [Lₙw, w].

Note that I₂(L) is a random variable, according to our definition, since Lₙ can be chosen to have finite dimensional range for each n.
Let us also note that if L is a Hilbert-Schmidt map of W into W, and is symmetric and nuclear, and if the corresponding kernel is K(t,s), then the Ito integral

    I_2(L) = ∫_0^T ∫_0^T K(t,s) dW(s) dW(t) = [Lw,w] − Trace L        (2.10)
Finally, as L ranges over the class of nuclear symmetric operators mapping W into W, the random variables

    [Lw,w]        (2.11)

form a linear space which can be made into an inner-product space under the inner product:

    <[Lw,w], [Mw,w]> = 2 Tr LM* + (Tr L)(Tr M)        (2.12)

But this space is NOT closed; the limits can be Ito integrals of kernels which are NOT trace class. This is one disadvantage (from the mathematical point of view) of our white noise integrals.
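The trace term in (2.10) and the inner product (2.12) can be illustrated in a finite-dimensional discretization. The sketch below (with a randomly chosen symmetric matrix L standing in, hypothetically, for a trace-class kernel and a Gaussian vector for white noise) checks by Monte Carlo that [Lw,w] has mean Trace L, and that after centering its second moment is 2 Tr L², as (2.12) with M = L predicts:

```python
import numpy as np

# Finite-dimensional sketch: w is a standard Gaussian vector, standing in for
# white noise restricted to the span of the first n basis functions, and the
# (hypothetical) symmetric matrix L stands in for a trace-class kernel a_ij.
rng = np.random.default_rng(0)
n, n_samples = 5, 200_000

A = rng.standard_normal((n, n))
L = (A + A.T) / 2.0                          # symmetric "kernel" matrix
W = rng.standard_normal((n_samples, n))      # samples of w

# the white-noise quadratic form [Lw, w] for each sample
quad = np.einsum('ki,ij,kj->k', W, L, W)

mean_quad = quad.mean()
trace_L = np.trace(L)

# subtracting the trace -- the finite-dimensional analogue of (2.10) --
# centers the form; its second moment then matches 2 Tr L^2, the M = L
# case of the inner product (2.12).
centered = quad - trace_L
second_moment = (centered ** 2).mean()
```

The same experiment with a basis in which the diagonal entries a_ii vanish shows why the trace term is the whole obstruction: the centered and uncentered forms then coincide.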
Let us extend these ideas to more general functionals, still taking H = R¹. Thus let K(t_1,...,t_n) be an element of L₂((0,T)^n). Let

    p(w) = ∫_0^T ... ∫_0^T K(t_1,...,t_n) w(t_1) w(t_2) ... w(t_n) dt_1 ... dt_n        (2.13)

defining a continuous homogeneous polynomial of degree n over W. Let K̂(t_1,...,t_n) denote the 'symmetrised' version of K(...):

    K̂(t_1,...,t_n) = (1/n!) Σ_π K(t_π(1),...,t_π(n)),

the sum over all permutations π of the indices. Then for φ_i in W:

    k(φ_1,...,φ_n) = ∫_0^T ... ∫_0^T K̂(t_1,...,t_n) φ_1(t_1) ... φ_n(t_n) dt_1 ... dt_n

defines a continuous linear functional on the tensor product Hilbert space W ⊗ ... ⊗ W, n times (or a continuous n-linear form over W that is also Hilbert-Schmidt). Also:

    p(w) = k(w,...,w)        (2.14)
We need to consider some special classes of such polynomials. Thus let F_i, i = 1,...,N, denote disjoint (Lebesgue) measurable subsets of [0,T) and let χ_i(.) denote the corresponding characteristic (indicator) functions. Define

    K(t_1,...,t_n) = Σ_{i_1=1}^N ... Σ_{i_n=1}^N a_{i_1...i_n} χ_{i_1}(t_1) ... χ_{i_n}(t_n),

defining

    p(w) = Σ a_{i_1...i_n} ζ_{i_1} ... ζ_{i_n},    ζ_i = ∫_{F_i} w(t) dt

Hence it can readily be verified that (the ζ_i being independent Gaussians):

    E(p(w)²) = n! Σ â²_{i_1...i_n} m(F_{i_1}) ... m(F_{i_n})
        + Σ_{ν=1}^{[n/2]} (n−2ν)! (n!/((n−2ν)! 2^ν ν!))² Σ_{i_{2ν+1}...i_n} ( Σ_{i_1...i_ν} â_{i_1 i_1 ... i_ν i_ν i_{2ν+1}...i_n} m(F_{i_1}) ... m(F_{i_ν}) )² m(F_{i_{2ν+1}}) ... m(F_{i_n})
where [n/2] denotes the largest integer ≤ n/2, and moreover by direct integration it is immediate that:

    Σ_{i_1...i_ν} â_{i_1 i_1 ... i_ν i_ν i_{2ν+1}...i_n} m(F_{i_1}) ... m(F_{i_ν}) χ_{i_{2ν+1}}(t_{2ν+1}) ... χ_{i_n}(t_n)
        = ∫_0^T ... ∫_0^T K̂(t_1,t_1,...,t_ν,t_ν,t_{2ν+1},...,t_n) dt_1 ... dt_ν

    Σ â²_{i_1...i_n} m(F_{i_1}) ... m(F_{i_n}) = ∫_0^T ... ∫_0^T K̂(t_1,...,t_n)² dt_1 ... dt_n
Following Ito [5] we shall call a 'special elementary function' any function which is a finite linear combination of functions of the form:

    Σ_{i_1=1}^N ... Σ_{i_n=1}^N a_{i_1...i_n} χ_{i_1}(t_1) ... χ_{i_n}(t_n),    a_{i_1...i_n} = 0 unless all the indices are distinct        (2.15)

For such a function f we note that

    E(P_f(w)²) = n! ∫_0^T ... ∫_0^T f̂(t_1,...,t_n)² dt_1 ... dt_n

since

    ∫_0^T ... ∫_0^T f̂(t_1,t_1,...,t_ν,t_ν,t_{2ν+1},...,t_n) dt_1 ... dt_ν = 0
for any ν, 1 ≤ ν ≤ [n/2]. Moreover P_f(w) is clearly a tame function. Now, as Ito has shown, such special elementary functions are fundamental in L₂((0,T)^n). Let K(...) be any element of L₂((0,T)^n). Then define the Ito integral

    I_n(K) = limit_m P_{f_m}(w)

where the limit is taken in the mean square sense, f_m being a sequence of special elementary functions such that

    ∫_0^T ... ∫_0^T (K̂ − f̂_m)² dt_1 ... dt_n → 0

Note that

    E( (P_{f_m}(w) − P_{f_j}(w))² ) = n! ∫_0^T ... ∫_0^T (f̂_m − f̂_j)² dt_1 ... dt_n
Thus defined, I_n(K) is a random variable, since each P_{f_m}(w) is tame. Moreover, for any K(...) in L₂((0,T)^n) we have, denoting by K̂(t_1,...,t_n) the symmetrized version:

    p(w) = ∫_0^T ... ∫_0^T K̂(t_1,...,t_n) w(t_1) ... w(t_n) dt_1 ... dt_n
         = Σ_{ν=0}^{[n/2]} (n!/((n−2ν)! 2^ν ν!)) I_{n−2ν}(K̂_{n−2ν})        (2.16)

where

    K̂_{n−2ν}(t_{2ν+1},...,t_n) = ∫_0^T ... ∫_0^T K̂(t_1,t_1,t_2,t_2,...,t_ν,t_ν,t_{2ν+1},...,t_n) dt_1 ... dt_ν

This is an easy consequence of the Ito decomposition formula [5] and has been established in [6]. Note that (2.16) reduces to the usual decomposition formula for Hermite polynomials for K(...) = 1, T = 1. The important (indeed crucial) property of Ito integrals is the orthogonality:

    E( I_n(K) I_m(K') ) = 0,    n ≠ m;

note that the approximating functions for the terms in (2.16) are tame.
Thus from (2.16) we have

    E(p(w)²) = Σ_{ν=0}^{[n/2]} (n!/((n−2ν)! 2^ν ν!))² (n−2ν)! ∫_0^T ... ∫_0^T K̂_{n−2ν}(t_{2ν+1},...,t_n)² dt_{2ν+1} ... dt_n        (2.17)

Both (2.16) and (2.17) are of course also valid for the elementary functions (2.15). We can now answer our question as to when p(w) defined by (2.13) is a random variable. Before we do this, however, we shall introduce the degree of generality necessary for us.
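The remark above, that (2.16) reduces to the usual Hermite decomposition for K(...) = 1, T = 1, rests on the orthogonality E(He_n(Z) He_m(Z)) = n! δ_nm for a standard Gaussian Z. A quick numerical check of this relation, using probabilists' Hermite polynomials and Gauss-Hermite quadrature (a sketch, independent of the text's notation):

```python
import numpy as np
from math import factorial, sqrt, pi
from numpy.polynomial import hermite_e as He

# Gauss-HermiteE nodes/weights: exact for polynomial integrands of degree < 60
# against the weight exp(-x^2/2), whose total mass is sqrt(2*pi).
x, w = He.hermegauss(30)

def gauss_expect(vals):
    # expectation against the standard normal density
    return (w * vals).sum() / sqrt(2.0 * pi)

# Gram matrix G[n, m] = E[He_n(Z) He_m(Z)]; should be n! on the diagonal,
# zero elsewhere.
N = 5
G = np.empty((N, N))
for n in range(N):
    for m in range(N):
        cn = np.zeros(n + 1); cn[n] = 1.0    # coefficients selecting He_n
        cm = np.zeros(m + 1); cm[m] = 1.0
        G[n, m] = gauss_expect(He.hermeval(x, cn) * He.hermeval(x, cm))
```

The diagonal entries 1, 1, 2, 6, 24 are exactly the n! weights that appear in the mean-square formula (2.17).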
Thus let us get back to the general case where H is a separable Hilbert space. Let H_s denote another, possibly different, separable Hilbert space, and let

    W = L₂((0,T); H),    W_s = L₂((0,T); H_s)

Let p(.) denote a homogeneous polynomial of degree j mapping W into W_s. Then we know that

    p(w) = k(w,...,w)

where k(...) is a symmetric j-linear form over W with range in W_s. Let us assume that k(...) is Hilbert-Schmidt; or, equivalently, that k(...) is a Hilbert-Schmidt linear bounded transformation mapping the j-fold tensor product Hilbert space W ⊗ ... ⊗ W into W_s. Then we know that we can express k(...) as

    k(φ_1,...,φ_j)(t) = ∫_0^T ... ∫_0^T K(t; s_1,...,s_j; φ_1(s_1),...,φ_j(s_j)) ds_1 ... ds_j

where K(t; s_1,...,s_j; ...) is a member of L₂((0,T)^{j+1}; N), N being the Hilbert space of Hilbert-Schmidt operators on the j-fold tensor space H ⊗ ... ⊗ H into H_s. In fact we can write:
    K(t; s_1,...,s_j; x_1,...,x_j) = Σ_{i_1} ... Σ_{i_j} k(φ_{i_1},...,φ_{i_j})(t) [φ_{i_1}(s_1), x_1] ... [φ_{i_j}(s_j), x_j]

where φ_i is any complete orthonormal sequence in W, the convergence of the series being in the strong sense in L₂((0,T)^{j+1}; N). If e_i denotes a complete orthonormal system in H, we may take the φ_i of the form e_k f_l(t), so that each function

    k(φ_{i_1},...,φ_{i_j})(t) [φ_{i_1}(s_1), x_1] ... [φ_{i_j}(s_j), x_j]

is in the linear space generated by functions of the form a(t) f_1(s_1) [e_{k_1}, x_1] ... f_j(s_j) [e_{k_j}, x_j], where a(t) is a function in W_s and the f_i(.) are in L₂(0,T). The point is that we may thus introduce the notion of the 'special elementary functions' as in the scalar case: linear combinations of such functions in which the scalar factor is a function f(...) defined as in (2.15) (with n = j), the e_i being arbitrarily chosen from the orthonormal basis {e_i}; and these are dense in L₂((0,T)^{j+1}; N). Given any K(t, s_1,...,s_j; ...) therein, let K_n(t, s_1,...,s_j; ...)
denote an approximating sequence of special elementary functions; then we may define the Ito integral as in the scalar case by:

    I_j(K) = limit_n P_{K_n}(w)

where again each P_{K_n}(w) is a tame function, the limit being taken in the mean square sense. Of course the limit is independent of the particular approximating sequence.
The decomposition formula (2.16) can again be proved in the same way. Define

    P_K(w) = ∫_0^T ... ∫_0^T K̂(t; s_1,...,s_j; w(s_1),...,w(s_j)) ds_1 ... ds_j        (2.18)

where K(t, s_1,...,s_j; ...) is defined by:

    Σ_{i_1=1}^N ... Σ_{i_j=1}^N a_{i_1...i_j}(t) χ_{i_1}(s_1) ... χ_{i_j}(s_j),

where a_{i_1...i_j}(t) defines an element of L₂((0,T); H_s) and is symmetric in the indices. Then

    K̂_{j−2ν}(t; s_{2ν+1},...,s_j) = ∫_0^T ... ∫_0^T K̂(t; t_1,t_1,...,t_ν,t_ν,t_{2ν+1},...,t_j) dt_1 ... dt_ν

Of course all functions in (2.18) are tame; and (2.17) holds, as well as the orthogonality property, as in the scalar case.
We are now ready to state our theorem on when a polynomial is a random variable (or a 'white noise' integral).

Theorem 2.1  Let p(w) denote a homogeneous polynomial of degree j mapping W into W_s. Let k_j(...) denote the corresponding symmetric j-linear form, so that:

    p(w) = k_j(w,...,w)

Given any orthonormal sequence {φ_i} in W, suppose that for each ν, 0 ≤ 2ν ≤ j,

    Σ_{i_1=1}^{N_1} ... Σ_{i_ν=1}^{N_ν} k_j(φ_{i_1}, φ_{i_1},..., φ_{i_ν}, φ_{i_ν}, ψ_{2ν+1},...,ψ_j),    ψ_i ∈ W,

defines, for each N_1,...,N_ν, a Hilbert-Schmidt linear bounded transformation mapping the (j − 2ν)-fold tensor product space W ⊗ ... ⊗ W into W_s. Denote this operator by k_{j−2ν}(N_1,...,N_ν; ...). Suppose further that this sequence of operators converges in the Hilbert-Schmidt norm as the N_i, i = 1,...,ν, go to infinity. Then p(w) is a random variable with finite second moment.
Proof  Taking ν = 0, we see that k_j(...) itself must be Hilbert-Schmidt. Let {φ_i} denote an orthonormal basis in W. Let

    K_j(t; s_1,...,s_j; x_1,...,x_j) = Σ_{i_1} ... Σ_{i_j} k_j(φ_{i_1},...,φ_{i_j})(t) [φ_{i_1}(s_1), x_1] ... [φ_{i_j}(s_j), x_j]

Let

    k_{j,N}(ψ_1,...,ψ_j) = ∫_0^T ... ∫_0^T K_{j,N}(t; s_1,...,s_j; ψ_1(s_1),...,ψ_j(s_j)) ds_1 ... ds_j,    ψ_i ∈ W,

K_{j,N} denoting the truncation of the series to indices ≤ N. Let P_N denote the projection on the space spanned by the first N φ_i. Then

    p(P_N w) = k_{j,N}(w,...,w) = Σ_{ν=0}^{[j/2]} (j!/((j−2ν)! 2^ν ν!)) I_{j−2ν}(K̂_{j−2ν,N})        (2.19)

The assumptions of the theorem assert that the k_{j−2ν,N}(...) form a Cauchy sequence in the Hilbert-Schmidt norm, implying in turn that p(P_N w) is a Cauchy sequence in the mean square sense.
We shall state, as a corollary, a sufficient condition which ensures the conditions of the theorem. First let us recall the so-called S-topology on a Hilbert space. This is the locally convex topology induced by taking as seminorms

    p(x) = [Sx,x]^{1/2}

where S is any self-adjoint, non-negative definite, nuclear operator (also called a 'covariance' operator).
Corollary  With p(.) and k_j(...) as in the theorem, suppose k_j(...) is continuous in the product S-topology on W^j. Then p(w) is a random variable.

Proof  The implication of the continuity in the product S-topology is that there exist covariance operators S_i, i = 1,...,j, such that for any φ_i in W:

    ||k_j(φ_1,...,φ_j)|| ≤ C ∏_{i=1}^j [S_i φ_i, φ_i]^{1/2}        (2.20)

and this in turn enables us to verify the conditions imposed in the theorem.
Before we close this section we need to introduce the notion (new with this paper) of an 'extended white noise integral'. With W = L₂((0,T); H) and Gauss measure μ on W as before, we shall call f(w), with range in another Hilbert space, an 'extended white noise integral' if {f(P_N w)} is a Cauchy sequence in the mean square sense, where P_N is the projection on the first N members of any orthonormal basis of the form:

    {e_i f_j(t)}

where {e_i} is any orthonormal basis in H and {f_j(.)} is an orthonormal basis in L₂(0,T). In the case where H is of dimension one, nothing new is introduced, clearly. However, as soon as the dimension is more than one we are introducing a new integral. For example, let H be of dimension two, and let e_1, e_2 be two orthonormal basis vectors in H. Define the operator L mapping W into W by:

    (Lf)(t) = e_1 ∫_0^t [e_2, f(s)] ds + e_2 ∫_t^T [e_1, f(s)] ds

Then L is not nuclear; in fact the eigenvalues of L are:

    ± 2T/((2n+1)π),    n = 0,1,...
On the other hand, the infinite series

    Σ_{i,j} [L e_i f_j(.), e_i f_j(.)]

is absolutely convergent and converges to zero. Hence we can see that

    E([L P_N w, P_N w]²) = 2 ||P_N L P_N||²_H.S.,

and hence [Lw,w] is an 'extended' white noise integral even though it is NOT a white noise integral. See Section 4 for more examples. For a polynomial to be an extended white noise integral we have only to impose the conditions of Theorem 2.1, but now using only orthonormal bases of the form required in the definition.
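The eigenvalues ±2T/((2n+1)π) quoted in the example above are plus and minus the singular values of the indefinite-integration (Volterra) operator (Vf)(t) = ∫_0^t f(s) ds on L₂(0,T), since the example's L couples the two basis directions e_1, e_2 through V. A crude discretization (the rectangle-rule matrix below is a hypothetical stand-in for V) recovers them numerically:

```python
import numpy as np

T, N = 1.0, 1000
h = T / N
# rectangle-rule discretization of (Vf)(t) = integral_0^t f(s) ds
V = h * np.tril(np.ones((N, N)))

sigma = np.linalg.svd(V, compute_uv=False)             # descending order
predicted = 2.0 * T / ((2 * np.arange(5) + 1) * np.pi)
```

The singular values decay only like 1/n, so their sum diverges: L is Hilbert-Schmidt but not nuclear, exactly as the text asserts.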
3. Bilinear Equations: Abstract Formulation

We begin with an abstract formulation of bilinear equations. Let H, H_n denote real separable Hilbert spaces. Let B(x,n) denote a bilinear form mapping the cross product space H × H_n into H. Let A denote the infinitesimal generator of a strongly continuous semigroup S(t), t ≥ 0, of linear bounded operators mapping H into itself. Our first objective is the bilinear equation:

    ẋ(t) = A x(t) + B(x(t), n(t)),    0 < t < T,        (3.1)
    x(0) given,

where the bilinear form B(...) is bounded (or, equivalently, B is a linear bounded operator mapping the tensor product space H ⊗ H_n into H).

Theorem 3.1  Let

    W_n = L₂([0,T]; H_n)
Then for each n(.) in W_n, the integral version of (3.1):

    [x(t),h] = [x(0),h] + ∫_0^t [B(x(s), n(s)), h] ds + ∫_0^t [x(s), A*h] ds        (3.2)

for every h in the domain of A*, has a unique solution x(t), continuous in 0 ≤ t ≤ T (in H). Moreover the corresponding mapping F(n) = x(.), defined on W_n with range in W, where

    W = L₂([0,T]; H),

is continuous.
Proof.  We first note that the equation:

    [x(t),h] = [x(0),h] + ∫_0^t [x(s), A*h] ds + ∫_0^t [u(s), h] ds        (3.3)

has, for each u(.) in W, the unique continuous solution:

    x(t) = S(t)x(0) + ∫_0^t S(t−σ) u(σ) dσ,    0 ≤ t ≤ T        (3.4)

Now, because the bilinear form B(...) is bounded, we have:

    ||B(x,n)|| ≤ ||B|| ||x|| ||n||

and hence, if x(t) is a continuous function satisfying (3.2),

    B(x(s), n(s)),    0 < s < T,

clearly defines an element of W. Let u(s) denote this function. Then, combining (3.2), (3.3) and (3.4), we obtain

    x(t) = S(t)x(0) + ∫_0^t S(t−σ) B(x(σ), n(σ)) dσ        (3.5)

Also, if y(.) is another continuous solution of (3.2), we see that z(t) = x(t) − y(t) satisfies

    z(t) = ∫_0^t S(t−σ) B(z(σ), n(σ)) dσ
Next, for fixed n(.) in W_n, we note that for any element x in H,

    S(t−σ) B(x, n(σ)) = L(t,σ)x,    0 ≤ σ ≤ t,

is such that (||S(t)|| being bounded in [0,T]):

    ||L(t,σ)x|| ≤ k ||n(σ)|| ||x||,    0 ≤ σ ≤ t,        (3.6)

where k is a positive constant independent of t, 0 ≤ t ≤ T. Hence

    (L_n f)(t) = ∫_0^t L(t,σ) f(σ) dσ

defines a linear bounded transformation L_n mapping W into W. But what is crucial is that, because of (3.6) and the fact that n(.) is an element of W_n, L_n is actually quasi-nilpotent. In fact, following the usual Tricomi method for finite-dimensional kernels as in [3], we can readily show that

    ||L_n^j|| ≤ ( ∫_0^T ∫_0^T ||L(t,σ)||² dσ dt )^{j−1} / (j−2)!

Hence

    ||L_n^j||^{1/j} → 0,

or, L_n is quasi-nilpotent, and hence (I − L_n) has a bounded inverse. It should be noted that L_n is NOT necessarily Hilbert-Schmidt, even though it is of the type that one should label Volterra.
Hence (3.5) shows that the difference z can now be expressed

    z = L_n z,

which implies that z(t) must be zero a.e., and being continuous must be identically zero.
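The quasi-nilpotence argument has a transparent finite-dimensional shadow: after discretization, a Volterra-type kernel becomes a strictly lower-triangular matrix, which is nilpotent, so (I − L) is always invertible and the Neumann series terminates. The sketch below (the vector n_vals is a hypothetical stand-in for ||n(s)||) also shows the faster-than-geometric decay of the iterated norms behind the Tricomi estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200
h = 1.0 / N
n_vals = rng.standard_normal(N)      # hypothetical sample path of ||n(s)||

# strictly lower-triangular discrete Volterra operator: L[i,j] = h*n(s_j), j < i
L = h * np.tril(np.tile(n_vals, (N, 1)), k=-1)

# norms of the iterates L^j decay much faster than geometrically
norms = []
P = np.eye(N)
for _ in range(7):
    P = P @ L
    norms.append(np.linalg.norm(P))

# L is nilpotent (L^N = 0), so the Neumann series for (I - L)^(-1) is finite
u = rng.standard_normal(N)
term = u.copy()
x_neumann = u.copy()
for _ in range(N):
    term = L @ term
    x_neumann = x_neumann + term
x_direct = np.linalg.solve(np.eye(N) - L, u)
```

The agreement of the truncated Neumann series with the direct solve is the discrete counterpart of x = (I − L_n)^(−1) u below; no smallness condition on n(.) is needed.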
Now let us prove existence. Let

    u(t) = S(t)x(0),        (3.7)

defining an element of W. Now since L_n is quasi-nilpotent, (I − L_n) has a bounded inverse, given by:

    (I − L_n)^{−1} = Σ_{j=0}^∞ L_n^j        (3.8)

Let

    x(.) = (I − L_n)^{−1} u        (3.9)

Then we have that

    x(t) − ∫_0^t L(t,σ) x(σ) dσ = S(t)x(0)    a.e.

But for x(.) in W,

    ∫_0^t L(t,σ) x(σ) dσ

is actually continuous in t. This can be seen readily as follows:
    ∫_0^{t+Δ} L(t+Δ,σ) x(σ) dσ − ∫_0^t L(t,σ) x(σ) dσ
        = ∫_0^t (S(Δ) − I) S(t−σ) B(x(σ), n(σ)) dσ + ∫_t^{t+Δ} L(t+Δ,σ) x(σ) dσ        (3.10)

where in the first term the integrand clearly goes to zero a.e., while

    ||(S(Δ) − I) S(t−σ) B(x(σ), n(σ))|| ≤ const. ||B(x(σ), n(σ))||,

so that (by dominated convergence) the first term on the right in (3.10) goes to zero, and it is immediate that the second term goes to zero also. Hence x(.) is the difference a.e. of two
continuous functions, and hence can be taken to be continuous. Or, (3.9) has a continuous solution, and any such solution is of course unique in W. Using the fact that any solution of (3.4) is a solution of (3.3), we see that the solution of (3.9) satisfies (3.2). This proves existence.
Next we wish to study the dependence of the solution on n(.), for fixed x(0). For this let us rewrite L_n = L(n). Now for each n in W_n, L(n) defines a linear transformation on W into W. Let E(W,W) denote the Banach space of linear bounded transformations on W into W. Then L(n) is a linear transformation of W_n into this space, and moreover

    ||L(n)|| ≤ k √T ||n||        (3.11)

where the left side denotes the norm as an element of E(W,W). Hence L(n) defines a linear bounded transformation of W_n into E(W,W). Moreover

    F(n) = (I − L(n))^{−1} u = Σ_{j=0}^∞ L(n)^j u

and hence

    ||F(n)|| ≤ Σ_{j=2}^∞ ||L(n)^j u|| + ||u|| + k √T ||n(.)|| ||u||
            ≤ ||u|| Σ_{j=2}^∞ (√T k ||n(.)||)^{j−1} / (j−2)!  +  ||u||  +  k √T ||n(.)|| ||u||        (3.12)

This is enough to show the continuity with respect to n of (I − L(n))^{−1} u. As a matter of fact, it is actually Frechet differentiable and locally bounded in every sphere of finite radius. In other words F(n) is analytic, entire.
Introducing the Gauss measure on W_n, we have then that (3.1) has a unique solution for each given x(0) and w(.) in W_n. The question that remains is: when is the corresponding mapping F(w) a random variable? We shall exploit Ito integrals in the resolution of this question. First we need to define Ito integrals more generally than in the polynomial case. Using this definition we shall show that (3.2), or equivalently (3.5), has an 'Ito integral solution', interpreting, that is, the integrals in (3.2) and (3.5) as Ito integrals. Thus
let us first define the integral when the argument is an Ito polynomial. Let p_j(w) denote a Hilbert-Schmidt polynomial, homogeneous of degree j, mapping W_n into W. Let k_j(...) denote the corresponding Hilbert-Schmidt symmetric j-linear form, and let K_j(...) denote its kernel. Let

    x_j(w) = I_j(K_j)

Let k_2(x(.), n(.)) denote a bilinear form over W × W_n with range in W (not necessarily H.S.!):

    k_2(x,n)(t) = ∫_0^T K_2(t,s; x(s), n(s)) ds,

K_2(t,s; x,n) being a Hilbert-Schmidt operator on the tensor-product space H ⊗ H_n a.e. in 0 < s < T, such that

    ∫_0^T ∫_0^T ||K_2(t,s; ·,·)||²_H.S. ds dt < ∞

Then we define the Ito integral

    ∫_0^T K_2(t,s; x_j(s,w), w(s)) ds    (Ito)

as

    I_{j+1}(K_{j+1}),

where K_{j+1} is the kernel corresponding to the (j+1)-degree polynomial defined by

    k_2(p_j(w), w),

and is readily verified to be Hilbert-Schmidt. Thus defined, the integral clearly enjoys the additivity property in terms of the argument x_j(.). Suppose now we assume that B(x,n) is Hilbert-Schmidt, and take K_2(...) as: K_2(t,s; x,n) = S(t−s) B(x,n), 0 < s < t; zero otherwise. Then we may define
the integral in (3.5) as an Ito integral for the case where x(.) there is an Ito polynomial x_j(w), where

    x_j(w) = Σ_{i=1}^j I_i(K_i),

K_i being the kernel of a Hilbert-Schmidt homogeneous polynomial p_i(w) of degree i. Suppose now that {x_j(w)} is a Cauchy sequence in the L₂
(or mean square) sense. Let us observe first that the limit will be a random variable, according to our definition. For each j, define the polynomial:

    Σ_{i=0}^j p_i(P_N w)

where P_N is any sequence of finite-dimensional projections converging monotonically to the identity. Let K_{i,N} denote the kernel corresponding to p_i(P_N w). Then

    I_i(K_{i,N})

is tame, and is a Cauchy sequence in the mean square sense converging to I_i(K_i), since the kernels converge in H.S. norm; further,

    Σ_{i=0}^j I_i(K_{i,N})

converges to x_j(w) in the mean square. Hence, taking the combined limit in N and j, we see that the Cauchy sequence {x_j(w)} defines a random variable, which we shall denote by x(w). If in addition

    ∫_0^t S(t−s) B(x_j(s,w), w(s)) ds    (Ito)

is also a Cauchy sequence in the mean square sense, the limit is also a random variable, and we define it to be the Ito integral 'corresponding to x(w)'. In particular, in connection with (3.5), this is what we shall mean by the Ito integral:

    ∫_0^t S(t−s) B(x(s,w), w(s)) ds    (Ito),    0 < t < T,
as an element of W; that is, x(.,.) is the mean square limit of a Cauchy sequence of Ito polynomials, and the Ito integral corresponding to each such polynomial converges and is defined as the Ito integral. We can then state:

Theorem 3.2  Suppose that B(.,.) is Hilbert-Schmidt, and the integral in (3.5) is interpreted as an Ito integral. Then (3.5) has a unique solution with finite second moment.
Proof  We shall show that the solution is given by

    x(w) = Σ_{j=0}^∞ I_j(K_j)        (3.13)

where K_j is the symmetric j-linear form corresponding to

    p_j(w) = k_j(w,...,w) = L(w)^j u;    u(t) = S(t)x(0).        (3.14)

First of all we shall show that, for each u in W,

    L(w)u,

with range in the Hilbert space W, is a random variable with finite second moment. First let us show that (for fixed w) L(w) is H.S. Let w ∈ W_n, let u ∈ W, and let

    g(t) = ∫_0^t S(t−σ) B(u(σ), w(σ)) dσ,        (3.15)

so that

    L(w)u = g
where B(.) maps H_n into E(H,H). But the Hilbert-Schmidt hypothesis on B implies first of all that B(.) maps H_n into the Hilbert space of H.S. operators on H into H, and further that this mapping is in addition itself Hilbert-Schmidt. Let ||B||_H.S. denote the Hilbert-Schmidt norm of the operator B(.). Then in (3.15) we have that

    S(t−σ) B(u(σ), w(σ)) = S(t−σ) B(w(σ)) u(σ)

is of course H.S., and its H.S. norm is

    ≤ c ||B||_H.S. ||w(σ)||,    0 ≤ σ ≤ t ≤ T.

From this it follows that L(w) is a Hilbert-Schmidt operator, and further

    ||L(w)||_H.S. ≤ c ||B||_H.S. ||w(.)||        (3.16)

Since B is Hilbert-Schmidt we may reverse the roles of w and u, and obtain that

    L(w)u = L(u)w        (3.17)

where L(u) is Hilbert-Schmidt for each u, with a similar inequality as (3.16):

    ||L(u)||_H.S. ≤ c ||B||_H.S. ||u(.)||        (3.18)
It is convenient to write (3.17) in the bilinear form

    L(w)u = L(u,w)

We note that this bilinear form (over W × W_n) is NOT Hilbert-Schmidt in general. [Take H = H_n = R¹, B(x,n) = xn.] Also, the mapping L(w), defined on W_n into the Hilbert space of Hilbert-Schmidt operators on W into W, is not Hilbert-Schmidt either; so that in particular L(w) is NOT a random variable.
Next let

    p_j(w) = L(w)^j u

Then of course p_j(.) is a homogeneous polynomial of degree j, and we can write:

    p_j(w)(t) = ∫_{0<σ_j<...<σ_1<t} S(t−σ_1) B( S(σ_1−σ_2) B( ... B(u(σ_j), w(σ_j)) ..., w(σ_2)), w(σ_1)) dσ_1 ... dσ_j,

so that the symmetrized kernel K_j(t, σ_1,...,σ_j; x_1,...,x_j) is (1/j!) times the sum over all permutations of the indices of

    S(t−σ_1) B( S(σ_1−σ_2) B( ... B(u(σ_j), x_j) ..., x_2), x_1),    0 < σ_j < σ_{j−1} < ... < σ_1 < t,

and zero otherwise.        (3.19)

Note that the kernel K_j(t, σ_1,...,σ_j; x_1,...,x_j) is 'symmetric' (in the indices in the argument). Define now the j-linear forms k_j(φ_1,...,φ_j) over W_n by

    k_j(φ_1,...,φ_j)(t) = ∫_0^t ... ∫_0^t K_j(t, σ_1,...,σ_j; φ_1(σ_1),...,φ_j(σ_j)) dσ_1 ... dσ_j,    0 < t < T,

with range in W. Then of course k_j(...) is symmetric and

    p_j(w) = k_j(w,...,w).
Let us now first show that p_j(w) is a random variable with finite second moment. For this, let {φ_i} be any complete orthonormal sequence in W_n and let

    ζ_i(w) = [w, φ_i],

yielding a sequence of independent zero-mean, unit-variance Gaussian variables. Let P_N denote the projection on the space spanned by the first N of the φ_i. Then we have

    p_j(P_N w) = k_j(P_N w,...,P_N w)        (3.20)

Let us note that, consistent with our previous notation, we can also write:

    k_j(φ_{i_1},...,φ_{i_j}) = (1/j!) Σ_π L(φ_{i_1}) ... L(φ_{i_j}) u        (3.21)

From (3.16) we may calculate, using the decomposition as in (2.17), an estimate (3.22) for E(||p_j(P_N w)||²) which is uniform in N and summable over j.

Proof  For σ_j < σ_{j−1} < ... < σ_1 < T, we can readily calculate that the Hilbert-Schmidt norm of the kernel in (3.19) is bounded by a multiple of ||u(σ_j)||, and the estimate of the lemma readily follows.
Lemma 1 proves that the series in (3.13) converges in the mean square sense. Let

    x_j(w) = Σ_{i=0}^j I_i(K_i)

Then the Ito integral

    ∫_0^t S(t−s) B(x_j(s,w), w(s)) ds  (Ito)  +  S(t)x(0),    0 < t < T,

is clearly equal to x_{j+1}(w), and hence converges, in fact to x(w), so that we have obtained a solution to (3.5) interpreted in the Ito sense. Uniqueness is immediately deduced from (3.13). Thus non-uniqueness would imply that there is a sequence of Ito polynomials z_j(w), each satisfying the homogeneous equation

    z_j(w) = ∫_0^t S(t−s) B(z_j(s,w), w(s)) ds  (Ito)
But this implies that, if p_j(...) is the polynomial (not necessarily homogeneous) of degree j corresponding to z_j(w) and {φ_i} is an orthonormal basis, then p_j must vanish. Indeed, since from (3.11) we can deduce for every i:

    ||I − L(φ_i)|| ≥ ε > 0

for some ε, it follows that the Hilbert-Schmidt norm of p_j(...), or equivalently the second moment of z_j(w), must be zero.
4. White Noise Solutions

We are now ready to examine the conditions under which F(w), defined in Theorem 3.1, defines a white noise integral. Unfortunately, in dimension higher than one for H_n, we can only obtain 'extended' white noise integrals.

Theorem 4.1  Let A in (3.1) be zero. Then the corresponding F(w) given by Theorem 3.1 is an extended white noise integral.

Proof  Let us pick an orthonormal basis of the type required:

    e_i f_j(t),    0 < t < T.

Let N denote the double index (m,n) and let m,n ≤ N. Let

    u(t) = x,    0 < t < T,
and let

    p_j(w) = L(w)^j u,

with k_j(...) the corresponding symmetric j-linear form, so that:

    p_j(w) = k_j(w,...,w),

and let K̂_j(...) denote the corresponding kernel defined by (3.19). For each ν, 0 ≤ 2ν ≤ j, and each N, define the (j − 2ν)-homogeneous polynomial

    k^{N,j}_{j−2ν}(ψ_{2ν+1},...,ψ_j) = Σ_{i_1=1}^N ... Σ_{i_ν=1}^N k_j(φ_{i_1}, φ_{i_1},..., φ_{i_ν}, φ_{i_ν}, ψ_{2ν+1},...,ψ_j)

and let K^{N,j}_{j−2ν} denote its kernel.

Lemma 1:  Let

    K̂_{j−2ν}(t; s_{2ν+1},...,s_j; y_{2ν+1},...,y_j) = ∫_0^T ... ∫_0^T K̂_j(t; s_1,s_1,...,s_ν,s_ν,s_{2ν+1},...,s_j; y_{2ν+1},...,y_j) ds_1 ... ds_ν        (4.1)

Then

    ||K^{N,j}_{j−2ν} − K̂_{j−2ν}|| goes to zero with N in L₂((0,T)^{j−2ν+1}; N).
Proof  Let us first consider the case j = 2. Then

    Σ_{i_1=1}^N k_2(φ_{i_1}, φ_{i_1})(t) = Σ_{i=1}^m Σ_{j=1}^n ∫_0^t ∫_0^{s_1} B(B(x,e_i), e_i) f_j(s_2) f_j(s_1) ds_2 ds_1

and

    || Σ_{i_1=1}^N k_2(φ_{i_1}, φ_{i_1}) ||² ≤ ∫_0^T || Σ_{i=1}^m B(B(x,e_i), e_i) ||² ( Σ_{j=1}^n ∫_0^t ∫_0^{s_1} f_j(s_1) f_j(s_2) ds_1 ds_2 )² dt

But

    Σ_{i=1}^∞ ||B(B(x,e_i), e_i)|| < ∞.
Hence the lemma is readily verified for j = 2. Next let us consider j = 3, ν = 1, and let us note that

    k_3(φ_{i_1}, φ_{i_1}, ψ) = (1/3!) ( 2 L(φ_{i_1}) L(φ_{i_1}) L(ψ)u + 2 L(φ_{i_1}) L(ψ) L(φ_{i_1})u + 2 L(ψ) L(φ_{i_1}) L(φ_{i_1})u )
Let us look at the sum obtained for each term in turn. First the middle term:

    Σ L(φ_{i_1}) L(ψ) L(φ_{i_1}) u = v_N;
    v_N(t) = Σ_{i=1}^m Σ_{j=1}^n ∫_0^t ds_1 ∫_0^{s_1} ds_2 ∫_0^{s_2} ds_3  B(B(B(x,e_i), ψ(s_2)), e_i) f_j(s_3) f_j(s_1),

where we set

    F_m(s_1,s_3) = Σ_{i=1}^m B(B(B(x,e_i), ·), e_i).

Now the main point is that for fixed k:

    Σ_{j=1}^n ∫_0^t ∫_0^{s_1} [F_m(s_1,s_3)ψ]_k f_j(s_1) f_j(s_3) ds_1 ds_3

can be expressed as:

    Σ_{j=1}^n [J_{m,k} f_j, T f_j]

where the inner products are in L₂(0,T) and J_{m,k} is an operator defined by:

    g = J_{m,k} f;    g(s) = ∫_0^s [ Σ_{i=1}^m B(B(B(x,e_i), ψ(σ)), e_i), x_k ] f(σ) dσ,    g(s) = 0 for s > t,

mapping L₂(0,T) into itself, and similarly T is defined by:

    g = T f;    g(s) = ∫_0^s f(σ) dσ.

Both J_{m,k} and T are Hilbert-Schmidt. Hence we have that

    Σ_j ||T f_j||² < ∞
together with

    ∫_0^t ∫_0^σ Σ_{i=1}^m || [B(B(x,e_i), ψ(s)), e_i] ||² ds dσ < ∞,

so that further

    Σ_{i=1}^∞ B(B(B(x,e_i), y), e_i) = My

defines M as a H.S. operator mapping H_n into H. Hence we can verify (we skip the tedious details) that the corresponding kernels K_N form a H.S. operator sequence converging in the H.S. norm to a limit K.
Let

    J_m(s) = ∫_0^s Σ_{i=1}^m B(B(B(x, ψ(σ)), e_i), e_i) dσ

Then we can write

    v_N(t) = { Σ_{j=1}^n ∫_0^t ∫_0^{s_1} (J_m(s_2) − J_m(s_1)) f_j(s_2) f_j(s_1) ds_2 ds_1 } + Σ_{j=1}^n ∫_0^t ∫_0^{s_1} J_m(s_1) f_j(s_2) f_j(s_1) ds_2 ds_1,

where the term in curly brackets goes to zero; combining the other term with the sum just above, we get a limit which can be expressed as

    ∫_0^t (t−s) C B(x, ψ(s)) ds

where C is defined by:

    Cy = Σ_{i=1}^∞ B(B(y,e_i), e_i)

and is a H.S. operator mapping H into itself. The convergence is in the sense required; we omit the details, these being similar to the case just finished. In a similar manner we can treat the sums arising from the remaining two terms.
Thus

    Σ_{i_1=1}^N k_3(φ_{i_1}, φ_{i_1}, ψ) → (1/3!) ( ∫_0^t (t−s) C B(x, ψ(s)) ds + ∫_0^t s B(Cx, ψ(s)) ds )

and the corresponding kernel agrees with (4.1) as required.
More generally, using J to denote the operator:

    J u = v;    v(t) = ∫_0^t C u(s) ds,    0 ≤ t ≤ T,

we obtain

    Σ_{i_1=1}^N ... Σ_{i_ν=1}^N k_j(φ_{i_1}, φ_{i_1},..., φ_{i_ν}, φ_{i_ν}, ψ_{2ν+1},...,ψ_j) → (1/j!) Σ_π J_1 ... J_ν L(ψ_{2ν+1}) ... L(ψ_j) u        (4.2)

where Σ_π denotes the sum over all permutations of the indices, the ν operators J being interspersed among the operators L(ψ_i). The convergence in (4.2) is in the H.S. norm, or we have the statement of the Lemma in terms of the kernels.
Let us now turn to the proof of the Theorem. By virtue of the Lemma, p_j(w) can be defined as an extended white noise integral, and further we have:

    p_j(P_N w) = Σ_{ν=0}^{[j/2]} (j!/((j−2ν)! 2^ν ν!)) I_{j−2ν}(K^{N,j}_{j−2ν}),

which is a Cauchy sequence in the mean of order two. Since k_m(ψ_1,...,ψ_m) contains, for each ν, the term

    ((m+2ν)!/(m! 2^ν ν!)) (1/(m+2ν)!) Σ_π J_1 ... J_ν L(ψ_{ν+1}) ... L(ψ_{ν+m}) u,

where now π denotes all permutations of the indices 1 through (m+ν), we can argue as in Section 3, Lemma 1, to estimate ||(m+2ν)! K_m||²_H.S. in such a way that the resulting series converges absolutely. The estimate is enough, further, to yield convergence in the mean of order two of x_n(w), and we can define the limit as

    x(w) = Σ_{m=0}^∞ I_m(K̂_m)        (4.6)
where K̂_m is the kernel corresponding to the homogeneous polynomial:

    k̂_m(ψ_1,...,ψ_m) = Σ_ν (1/(2^ν ν!)) Σ_π J_1 ... J_ν L(ψ_{ν+1}) ... L(ψ_{ν+m}) u        (4.7)

with L(ψ_{ν+j}) = L(ψ_j). Moreover, since

    x(P_N w) = Σ_{m=0}^∞ Σ_{ν=0}^∞ ((m+2ν)!/(m! 2^ν ν!)) I_m(K^{N}_{m,ν}),

we note that x(P_N w) is a Cauchy sequence in the mean square with the same limit (4.6), and hence x(w) is an extended white noise integral described by (4.6). By calculating K̂_m further we can obtain an infinite-dimensional extension of the Wong-Zakai [7] and Ito [8] results. The Ito work in particular helps in pointing to the relationship of our extended white noise integrals to the Stratonovich-Fisk integrals.
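The relationship to the Wong-Zakai [7] correction can be seen already in the scalar case H = H_n = R¹ with B(x,n) = b x n (a hypothetical special case, for which Cx = b² x): the smoothed-noise solution of x' = b x n is x_0 exp(b W(t)), and it coincides with the Ito solution of dx = (b²/2) x dt + b x dW, i.e. the equation with the extra (1/2)Cx drift. A sketch along one fixed Brownian path:

```python
import numpy as np

rng = np.random.default_rng(2)
b, x0, T, N = 0.7, 1.0, 1.0, 200_000
dt = T / N
dW = np.sqrt(dt) * rng.standard_normal(N)    # one fixed Brownian path
W_T = dW.sum()

# smoothed-noise ("white noise") solution of x' = b*x*n at t = T
x_white = x0 * np.exp(b * W_T)

# Euler-Maruyama with the (1/2)Cx correction drift: step factor
# 1 + (b^2/2)*dt + b*dW
x_ito = x0 * np.prod(1.0 + 0.5 * b**2 * dt + b * dW)

# Euler-Maruyama WITHOUT the correction converges to a different limit
x_plain = x0 * np.prod(1.0 + b * dW)
```

Without the correction drift, the limit is x_0 exp(bW(T) − b²T/2), smaller by the factor exp(−b²T/2).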
'" Cx I B(B(x,e.),e.)
where { e.} l
is any orthonorrral basis in Hs' the definition being readily veri-
fied to be independent of the particular basis chosen. Then the extended white
noise solution of (3.1) with A=O, is also the Ito solution of:
dx(t) """d-t = (1/2)C x(t) + B(x(t), net)); x(O) = Xo (4.9)
Proof  Let us note that C is Hilbert-Schmidt. The Ito solution to (4.9) can be expressed:

    Σ_{m=0}^∞ I_m(K_m)

where K_m is the kernel of the m-linear form k_m(ψ_1,...,ψ_m) generated by (4.9), with

    u_o(t) = S(t)x_o,    0 < t < T        (4.10)

On the other hand, the extended white noise solution of (3.1) with A = 0 is given by:

    Σ_{m=0}^∞ I_m(K̂_m),

and hence the two are equal if and only if the kernels are the same; or, it is enough to show that for each ψ_1,...,ψ_m we have:

    k_m(ψ_1,...,ψ_m) = (1/(2^n n!)) Σ_{π'} J_1 ... J_n L(ψ_{n+1}) ... L(ψ_{n+m}) u_o        (4.11)
where ψ_{n+j} = ψ_j on the right, and π' means that the order of the indices (n+1),...,(n+m) is fixed but otherwise all permutations of the indices 1 through (n+m) are allowed in the sum. For n = 1, m = 2, for example, π' would mean the orderings (1,2,3), (2,1,3), (2,3,1).
But now, to see that (4.11) holds, we note that the left-hand side is

    v = L(ψ_1) ... L(ψ_m) u_o;
    v(t) = ∫_0^t ∫_0^{s_1} ... ∫_0^{s_{m−1}} S(t−s_1) B( S(s_1−s_2)( B ... B(S(s_m)x_o, ψ_m(s_m)) ... ), ψ_1(s_1)) ds_m ... ds_1,

and we have only to expand the integrand using (4.10). Expanding in the powers (t−s_1)^{i_1}/i_1!, and so on for the remaining factors, the result can be written

    Σ_{π'} J_1 ... J_n L(ψ_{n+1}) ... L(ψ_{n+m}) u_o,

and the identification is complete.
Next let us consider the general case A ≠ 0.

Theorem 4.2  Suppose S(t) is self-adjoint and compact. Suppose further

    B(x,n) = Σ_{i=1}^m b_i [L_i x, n]        (4.14)

where each L_i is a Hilbert-Schmidt operator mapping H into H_n, and each b_i is an eigenvector of A, so that

    A b_i = λ_i b_i        (4.15)

Then (3.1) has an extended white noise solution, and further it is also the Ito solution of:

    dx(t)/dt = A x(t) + B(x(t), n(t)) + C x(t)/2,        (4.16)

where Cx is again defined by (4.8) and, because of (4.14), specializes to:

    Cx = Σ_{j,k} [L_j b_k, L_k x] b_j

Proof  The main point is to note that, because of (4.15):

    v = L(ψ)u;    v(t) = Σ_{i=1}^m (exp λ_i t) b_i ∫_0^t (exp −λ_i s) [L_i u(s), ψ(s)] ds,

and as a result, for the special orthonormal basis,

    2 Σ_i L(φ_i) L(φ_i) u = v;    v(t) = ∫_0^t S(t−σ) C u(σ) dσ

The rest of the argument leading to (4.12), (4.13) is established similarly.
Acknowledgment.  The author acknowledges his deep indebtedness to Professor K. Ito for a stimulating discussion and much insight gained as a result. In particular, Professor Ito made the useful observation that the use of the special orthonormal functions in the definition of the extended white noise integral is appropriate in view of the special role played by the time variable.
References

University Press, 1970.

2. I. M. Gelfand and N. Ya. Vilenkin: Generalized Functions, Vol. 4, Academic Press, 1964.

3. A. V. Balakrishnan: Introduction to Optimization Theory in a Hilbert Space, Springer-Verlag, Lecture Notes, 1970.

4. A. V. Balakrishnan: Stochastic Optimization Theory in a Hilbert Space, Journal of Applied Mathematics and Optimization, Vol. 1, No. 2, 1974.

5. K. Ito: Multiple Wiener Integral, Journal of the Mathematical Society of Japan, May 1951.

6. A. V. Balakrishnan: On the Approximation of Ito Integrals by Band-Limited Processes, SIAM Journal on Control, 1974.

7. E. Wong and M. Zakai: On the Relation Between Ordinary and Stochastic Differential Equations, International Journal of Engineering Science, Vol. 3, 1965.

8. K. Ito: Stochastic Differentials, Journal of Applied Mathematics and Optimization (to appear).
Research supported in part under AFOSR Grant No. 73-2492, Applied Math Division,
USAF.
Via Eudossiana, 18 - Roma
The early results on realization theory with bilinear internal descriptions have been presented in [1] and [2]. The first one considers zero-state realizations and makes essential use of linear (associative) algebraic tools; the second one considers realizations homogeneous in the state, with equilibrium initial state, and uses mainly Lie algebra techniques. The theory developed in [1] has been further extended, in [3], to the case of multidimensional input and output and, in [4], to the case of equilibrium initial state.

In this paper we consider the problem of finding bilinear time-varying internal descriptions, without any special assumption on the initial state, for a given input-output function. The results are developed in tight connection with those established in [1]. They are concerned with a realizability condition, a test for minimality and the equivalence of all minimal realizations.

In order to avoid notational complexities we consider here only the case of single input and single output.
This work was partially supported by Consiglio Nazionale delle Ricerche.
Paper presented at the U.S.A. - Italy Seminar on "Variable Structure Systems", Portland (Oregon), May 26-31, 1974.
Consider the following input-state-output description

    ẋ(t) = A(t)x(t) + N(t)x(t)u(t) + B(t)u(t)        (1)
    y(t) = C(t)x(t)

where u(t) ∈ R is the input and x(t) ∈ R^n is the state at time t. The matrices A(·), N(·), B(·), C(·) are continuous functions of time, of suitable dimensions.
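As a concrete instance of description (1), the scalar constant-coefficient case with a constant input reduces to a linear ODE and can be solved in closed form; the sketch below (all coefficient values hypothetical) integrates (1) with a classical RK4 scheme and compares against that solution:

```python
import numpy as np

# hypothetical constant coefficients A=a, N=n_c, B=b, C=c and constant input u0
a, n_c, b, c, u0, x0, T = -0.4, 0.6, 1.0, 2.0, 0.5, 1.0, 2.0

def rhs(x):
    # right-hand side of (1): A x + N x u + B u
    return a * x + n_c * x * u0 + b * u0

steps = 4000
h = T / steps
x = x0
for _ in range(steps):                # classical RK4 integration of (1)
    k1 = rhs(x)
    k2 = rhs(x + 0.5 * h * k1)
    k3 = rhs(x + 0.5 * h * k2)
    k4 = rhs(x + h * k3)
    x += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
y_sim = c * x

# with constant u0, (1) is linear: x' = (a + n_c*u0) x + b*u0
alpha = a + n_c * u0
x_exact = np.exp(alpha * T) * x0 + b * u0 * (np.exp(alpha * T) - 1.0) / alpha
y_exact = c * x_exact
```

With a time-varying input the same integrator applies; only the closed-form comparison is special to constant u.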
The procedure proposed in [5] for computing the response in the case of constant coefficients can be easily extended to the present one. Let Φ(t,τ) denote the state transition matrix associated with the equation

    ẋ(t) = A(t)x(t)        (2)

and assume that the input u(·) is a continuous function of time. Then the value of the output at time t, corresponding to an initial state x assumed at time t_0, can be expressed as follows

    y(t) = y_0(t) + Σ_{i=1}^∞ ∫_{t_0}^t ... ∫_{t_0}^t w_i(t,t_1,...,t_i) u(t_1) ... u(t_i) dt_1 ... dt_i        (3)

where

    y_0(t) = C(t)Φ(t,t_0)x        (4)

and the kernels w_i(t,t_1,...,t_i) of the Volterra-series expansion are symmetrical with respect to t_1,...,t_i and assume the value (*)

    w_i(t,t_1,...,t_i) = C(t)Φ(t,t_1)N(t_1)Φ(t_1,t_2)N(t_2) ... Φ(t_{i−1},t_i)[N(t_i)Φ(t_i,t_0)x + B(t_i)]        (5)

on the subset

    t ≥ t_1 ≥ t_2 ≥ ... ≥ t_i ≥ t_0        (6)
Consider a mathematical object specified as follows: an element t_0 of R, a continuous function y_0(·): R → R, and a collection {w_i(t,t_1,...,t_i)}_1^∞ of functions w_i(·,...,·): R^{i+1} → R continuous with respect to all variables. This may be used to define an input-output function having the form specified by (3). We shall say that this function can be realized by means of a bilinear and finite-dimensional internal description if there exists a description like (1) whose response, from a suitable state x assumed at time t_0, coincides with the given one. This is equivalent to the existence of four matrices A(·), N(·), B(·), C(·), continuous functions of time, of suitable dimensions, and a vector x, such that the equations (4) and (5) are satisfied (the latter on the subset (6)). The object {A(·), N(·), B(·), C(·), x} is called a bilinear realization and the integer n its dimension.
On the basis of these assumptions it is possible to prove the following

THEOREM 1. An input-output function like (3) can be realized by means of a bilinear and finite dimensional internal description if and only if there exist two continuous functions of time F(·) and H(·), respectively n×n and 1×n, and an n×1 vector G, such that

    y₀(t) = H(t)G   for all t ∈ R                             (7)

    w_i(t,t₁,...,t_i) = H(t)F(t₁)F(t₂)···F(t_i)G   on the subset (6)      (8)

(*) This prescription specifies uniquely each w_i(t,t₁,...,t_i) if this is symmetrical.
PROOF. Sufficiency. Assume that (7) and (8) are satisfied and let

    A(t) = 0,   N(t) = F(t),   B(t) = 0,   C(t) = H(t),   x̄ = G      (9)

Clearly the response of a description like (1), with coefficients defined as in (9), coincides with the given input-output function.
Necessity. Consider a bilinear description of type (1); from this it is always possible to define a new bilinear description [2]
    ż(t) = Ã(t)z(t) + Ñ(t)z(t)u(t)
                                                              (10)
    ỹ(t) = C̃(t)z(t),   z(t₀) = z̄

with

    Ã(t) = [ A(t)  0 ]     Ñ(t) = [ N(t)  B(t) ]     C̃(t) = (C(t)  0)     z̄ = [ x̄ ]
           [  0    0 ]            [  0     0   ]                              [ 1 ]
                                                              (11)

having the same response as the given one. The input-output function of the description (10) can be evaluated by means of (3), (4) and (5). Observing that the corresponding state transition matrix Φ̃(t,τ) can be factored in the form

    Φ̃(t,τ) = X̃(t)X̃⁻¹(τ)
it is easy to deduce equations (7), (8) by assuming

    H(t) = C̃(t)X̃(t)                                          (12)

    F(t) = X̃⁻¹(t)Ñ(t)X̃(t),   G = X̃⁻¹(t₀)z̄                   (13)
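The state-augmentation step (10)-(11), which trades the B(t)u(t) term for a description homogeneous in the state, is easy to check numerically. The sketch below (with arbitrarily chosen coefficient matrices and input, not taken from the paper) integrates the original description (1) and the augmented description (10) side by side, compares the outputs, and confirms that the appended coordinate stays at 1.

```python
import numpy as np

def rk4(f, x0, T, steps):
    """Fixed-step RK4 integrator returning the whole trajectory."""
    x, h = np.array(x0, dtype=float), T / steps
    out = [x.copy()]
    for k in range(steps):
        t = k * h
        k1 = f(t, x); k2 = f(t + h / 2, x + h / 2 * k1)
        k3 = f(t + h / 2, x + h / 2 * k2); k4 = f(t + h, x + h * k3)
        x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        out.append(x.copy())
    return np.array(out)

# Illustrative constant coefficients and input
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
N = np.array([[0.5, 0.0], [0.0, -0.3]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])
xbar = np.array([1.0, -1.0])
u = lambda t: np.sin(t)

# Original description (1)
f1 = lambda t, x: A @ x + (N @ x) * u(t) + B.ravel() * u(t)
X1 = rk4(f1, xbar, 5.0, 2000)

# Augmented homogeneous description (10)-(11): z = (x, 1)
At = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 2)), np.zeros((1, 1))]])
Nt = np.block([[N, B], [np.zeros((1, 2)), np.zeros((1, 1))]])
Ct = np.hstack([C, [[0.0]]])
f2 = lambda t, z: At @ z + (Nt @ z) * u(t)
X2 = rk4(f2, np.append(xbar, 1.0), 5.0, 2000)

y1, y2 = X1 @ C.ravel(), X2 @ Ct.ravel()
assert np.max(np.abs(y1 - y2)) < 1e-9          # identical responses
assert np.max(np.abs(X2[:, 2] - 1.0)) < 1e-12  # appended state stays at 1
```

Because the last row of Ã and Ñ is zero, the appended coordinate has derivative zero along every trajectory, so the constraint z = (x, 1) is preserved exactly.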
REMARK 1. Of interest is the case in which the given input-output function has the form

    y(t) = y₀(t) + ∫_{t₀}^{t} w₁(t,t₁)u(t₁)dt₁                (14)
It is easy to show that, in this case, the condition given in Theorem 1 can be reduced to that of the existence of two continuous functions F(·) and H(·), respectively n×n and 1×n, and an n×1 vector G, such that (7) and (8) are satisfied only for i = 1, i.e. such that

    H(t)G = y₀(t)                                             (15)

    H(t)F(t₁)G = w₁(t,t₁)                                     (16)

In fact, the object defined by

                                                              (17)

where F(·), H(·) and G are respectively (n+1)×(n+1), 1×(n+1) and (n+1)×1, satisfies (7) and (8) for y₀(t) and w₁(t,t₁) as well as for all the kernels w_i(t,t₁,...,t_i) (i = 2,3,...) that are zero in equation (14). Thus Theorem 1 guarantees the realizability of the input-output function (14) by means of a bilinear internal description.
It is possible to observe, at this point, that (17) identifies, besides the bilinear description characterized by (9), also a linear description, that is, the one specified by the equations

    ẋ(t) = F(t)Gu(t)
                                                              (18)
    y(t) = H(t)x(t)

This can be easily verified by performing, in the backward direction, the transformation used to pass from (1) to (10).
REMARK 2. If one seeks directly the conditions under which the input-output function (14) can be realized by means of a linear internal description, one arrives easily at the existence of two continuous functions C(·) and B(·), respectively 1×n and n×1, and an n×1 vector γ such that

    C(t)γ = y₀(t)                                             (19)

    C(t)B(t₁) = w₁(t,t₁)                                      (20)

It is easy to see that these conditions are equivalent to (15) and (16).
4. MINIMAL REALIZATIONS
The results of Theorem 1 naturally lead to the consideration of the sequences of functions {y₀(·), w₁(·,·), w₂(·,·,·), ...} that can be factored in the forms (7) and (8). Such sequences have been called factorizable, the triplet {F(·), G, H(·)} a factorization and the integer n its dimension; they have been extensively studied in [1], with reference to the (more general) case in which G is also a function of time. For the reader's convenience we report here a result useful for the analysis of the minimal realizations of a function like (3).
THEOREM 2 [1]. An m-dimensional factorization {F(·), G(·), H(·)} of a factorizable sequence of functions is minimal (i.e. of least dimension) if and only if the m rows of

                                                              (21)

and, respectively, the m columns of

                                                              (22)

are linearly independent on Rᵐ. All minimal factorizations are a single equivalence class modulo the relation: {F₁(·), G₁(·), H₁(·)} ~ {F₂(·), G₂(·), H₂(·)} if and only if

    F₂(t) = TF₁(t)T⁻¹,   G₂(t) = TG₁(t),   H₂(t) = H₁(t)T⁻¹   (23)

where T is a constant nonsingular m×m matrix.
On the basis of this property it is possible to prove some results on the minimality of bilinear realizations. More particularly, since any input-output function admitting bilinear realizations always admits bilinear realizations homogeneous in the state (i.e., with B(·) = 0; see proof of Theorem 1), we shall firstly consider realizations of this kind. It is worth noting that this choice allows us to present the results in a very concise form. We have, in fact,
THEOREM 3. A bilinear realization, homogeneous in the state, of the input-output function (3) is minimal (i.e. of least dimension over all bilinear realizations, homogeneous in the state, of the given function) if and only if its dimension is equal to the dimension of a minimal factorization of the sequence {y₀(·), w₁(·,·), w₂(·,·,·), ...}. All minimal realizations, homogeneous in the state, are a single equivalence class modulo the relation: {A₁(·), N₁(·), C₁(·), x̄₁} ~ {A₂(·), N₂(·), C₂(·), x̄₂} if and only if

    A₂(t) = T(t)A₁(t)T⁻¹(t) + Ṫ(t)T⁻¹(t),   N₂(t) = T(t)N₁(t)T⁻¹(t),
    C₂(t) = C₁(t)T⁻¹(t),   x̄₂ = T(t₀)x̄₁                      (24)

where T(·) is a nonsingular n×n matrix of class C¹.
PROOF. The proof of the condition of minimality is very simple. It has been proved in Theorem 1 that a factorization {F(·), G, H(·)} always provides a bilinear realization homogeneous in the state (see (9)), of the same dimension. On the other hand, the converse is also true. Therefore any minimal factorization always provides a minimal bilinear realization homogeneous in the state.
To prove the second part, observe firstly that a transformation like (24) leaves the input-output function unchanged. Thus we have only to prove that any two minimal bilinear realizations, homogeneous in the state, are related by (24). It is immediately seen that there exist T₁(·) and T₂(·) such that

                                                              (25)

                                                              (26)

But the right-hand terms of (25) and (26) identify two minimal factorizations of the sequence {y₀(·), w₁(·,·), w₂(·,·,·), ...} and, by Theorem 2, are related by (23). The concatenation of the three equivalences is clearly like (24).
REMARK 3. To check whether a given bilinear realization
{A(.) ,N(·) ,C(·) ,x},homogeneous in the state, is minimal or not it is suf
ficient to put
G(t) (27)
H(t) = G(t)X(t)
in the matrices (21) and (22) (where X(t) is a fundamental matrix sol­
ution of the homogeneous equation (2) associated with (1» and to verify
whether the n rows of the former and, respectively, the n columns of the
latter are linearly indipendent.
It is clearly possible that, in some cases, there exist bilinear realizations of lower dimension; if this is the case, they are necessarily non-homogeneous in the state. To handle this problem it is convenient to prove the following result.

THEOREM 4. Let {F(·), G, H(·)} be any minimal factorization of the sequence {y₀(·), w₁(·,·), w₂(·,·,·), ...} and let n denote its dimension. Then, there exists a bilinear realization, non-homogeneous in the state,
of dimension lower than n if and only if there exists a constant n×n nonsingular matrix T such that

    TF(t)T⁻¹ = [ F₁₁(t)  F₁₂(t) ]     H(t)T⁻¹ = (H₁(t)  0)     TG = [ G₁ ]
               [   0       0    ]                                  [ 1  ]
                                                              (28)

where F₁₁(·) is (n-1)×(n-1), H₁(·) is 1×(n-1) and G₁ is (n-1)×1. In this case the dimension of a minimal realization (i.e. of least dimension over all bilinear realizations) is equal to n-1.
PROOF. Sufficiency. If there exists a matrix T that satisfies (28), then a bilinear realization, non-homogeneous in the state, is given by

    A(t) = 0,   N(t) = F₁₁(t),   C(t) = H₁(t),   B(t) = F₁₂(t),   x̄ = G₁   (29)

and this is of dimension n-1. A realization of dimension m < n-1 cannot exist because, if this were the case, it would be possible to construct (see proof of Theorem 1) a bilinear realization, homogeneous in the state, of dimension m+1 < n: this would contradict the hypothesis.
Necessity. Assume that there exists a bilinear realization {A(·), N(·), B(·), C(·), x̄} of dimension lower than n (and, necessarily, equal to n-1). From this it is always possible to construct, firstly, a realization of equal dimension like {0, T(·)N(·)T⁻¹(·), T(·)B(·), C(·)T⁻¹(·), T(t₀)x̄} and, then, a realization of dimension n like

    { 0,  [ T(t)N(t)T⁻¹(t)  T(t)B(t) ],  0,  (C(t)T⁻¹(t)  0),  [ T(t₀)x̄ ] }
          [        0            0    ]                         [    1    ]

The latter is homogeneous in the state and, therefore, defines a minimal factorization of {y₀(·), w₁(·,·), w₂(·,·,·), ...} and, in turn, is related to the given one {F(·), G, H(·)} by (23). This completes the proof of the necessity of (28).
Theory of Bilinear Dynamical Systems, SIAM J. on Control, 12 (1974), to appear.
[2] R. W. BROCKETT, On the Algebraic Structure of Bilinear Systems, Theory and Applications of Variable Structure Systems, R. R. Mohler and A. Ruberti, eds., Academic Press, New York, 1972, pp. 153-158.
[3] P. d'ALESSANDRO, A. ISIDORI and A. RUBERTI, Theory of Bilinear Dynamical Systems, Notes for a Course Held at C.I.S.M. (Udine), Springer, Vienna, 1972, pp. 1-72.
eds., Reidel, Dordrecht, 1973, pp. 83-130.
[5] C. BRUNI, G. DI PILLO and G. KOCH, On the Mathematical Models of Bilinear Systems, Ricerche di Automatica, 2 (1971), pp. 11-26.

ON THE REACHABLE SET FOR BILINEAR SYSTEMS
ON THE REACHABLE SET FOR BILINEAR SYSTEMS
Roger W. Brockett
Division of Engineering and Applied Physics
Harvard University, Cambridge, Massachusetts
For finite dimensional linear systems, under very mild regularity assumptions, the reachable set for a compact control set is closed and convex,
regardless of the initial state. This fact is significant in understanding
the time optimal control problem and in the design of computational algorithms for
producing optimal controls. For bilinear systems the reachable set is typically
not convex and not even simply connected although Filippov's theorem [1] shows
that it is closed if the control set is compact and convex. Sussmann [2] has
shown that it need not be closed for a compact control set, and examples abound
indicating that its connectivity depends heavily on the initial state.
In this paper we give some sufficient conditions for the reachable set at a
fixed time of a bilinear system to be convex. We also make a deeper study of a
particular class of bilinear systems which occurs in some applications in
economics, probability, etc. For this class we are able to describe the reachable
set rather concretely, making it one of the few classes of systems with a drift
term for which this is possible.
2. CONVEXITY FOR t SMALL
We will be considering systems in Rⁿ of the form

    ẋ(t) = (A + Σ_{i=1}^m u_i(t)B_i)x(t)                      (*)

If the system

                                                              (**)

is controllable in the usual sense, then for |u_i(t)| ≤ ε and ε sufficiently small the reachable set for (*) at time T > 0 will be near the reachable set for (**), and intuition suggests that the reachable set for (*) will be homeomorphic to the n-ball {x : ||x|| ≤ 1}. In any case, as ε grows this character would be destroyed. For example, for a harmonic oscillator with a controllable "spring", i.e. for
This work was supported by the U.S. Office of Naval Research under the Joint Services Electronics Program by Contract N00014-67-A-0298-0006.
    ẋ₁(t) = x₂(t),   ẋ₂(t) = -(1 + u(t))x₁(t)

with |u(t)| ≤ ε ≪ 1, the reachable set at time t for t small is homeomorphic to a disk but for t large enough the reachable set encircles the origin. (See Figure 1.) Examples of this type play a role in the classical theory of the stability of second order periodic equations. Reference [3] discusses this circle of ideas from the point of view of bilinear systems.
Figure 1: The development of holes in the reachable set. (a) The reachable set for small t; (b) the reachable set for larger t.
The difficulty in this example is that the free motion of the system is an undamped oscillation. If we are to establish that the reachable set is homeomorphic to an n-ball then we must adopt hypotheses which will exclude this type of behavior. If we ask that the reachable set be convex, assumptions which are stronger still must be made. For the system

    ẋ(t) = u(t)Bx(t);   x(0) = x₀

the reachable set is the image of the admissible controls under the map

    x(t) = {exp[(∫₀ᵗ u(σ)dσ)B]}x₀

This set will not be convex for -1 ≤ u(t) ≤ 1, say, unless B² = γB, convexity in this case following from the expression

    exp(Bα) = I + f(α)B

In the absence of such a condition there exist vectors x₀ such that the set of points (exp Bα)x₀, -1 ≤ α ≤ 1, is not convex. One way to insure B² = γB is to ask that B be of rank 1, and we will exploit this possibility systematically.
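For a rank-one B = bc' one has B² = γB with γ = c'b, and the exponential then collapses to exp(αB) = I + f(α)B with f(α) = (e^{αγ} - 1)/γ. A quick numerical check of both identities, with arbitrarily chosen vectors b and c (so here γ = 0.5):

```python
import numpy as np

def expm_taylor(M, terms=60):
    """Matrix exponential by truncated Taylor series (adequate for small-norm M)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

b = np.array([[1.0], [2.0], [0.0], [-1.0]])
c = np.array([[0.5], [1.0], [-1.0], [2.0]])
B = b @ c.T                    # rank-one matrix b c'
gamma = float(c.T @ b)         # B @ B == gamma * B

assert np.allclose(B @ B, gamma * B)

alpha = 0.7
f = (np.exp(alpha * gamma) - 1.0) / gamma
assert np.allclose(expm_taylor(alpha * B), np.eye(4) + f * B)
```

The identity makes the convexity claim transparent: exp(αB)x₀ = x₀ + f(α)(c'x₀)b moves along a straight line through x₀ in the direction b, and the image of an interval of α-values is then a line segment.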
Our first theorem is based on the following obvious lemma.
Lemma: Let x₁ and x₂ be real numbers, at least one of which is nonzero, such that x₁x₂ ≥ 0. Let u₁ and u₂ be real numbers with α ≤ u_i ≤ β, i = 1,2. Then for 0 < γ < 1

    α ≤ (γu₁x₁ + (1-γ)u₂x₂)/(γx₁ + (1-γ)x₂) ≤ β

Proof: Clearly the hypothesis implies that x₁(γx₁ + (1-γ)x₂) ≥ 0 and x₂(γx₁ + (1-γ)x₂) ≥ 0. The quantity in question takes on its minimum value when u₁ = u₂ = α, and then it equals α. It takes on its largest value when u₁ = u₂ = β, and then it equals β.
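A brute-force check of the lemma over random samples; the bounds α = -1 and β = 2 are chosen arbitrarily for illustration:

```python
import numpy as np

alpha, beta = -1.0, 2.0              # illustrative bounds
rng = np.random.default_rng(1)
for _ in range(1000):
    s = rng.choice([-1.0, 1.0])      # x1, x2 share a sign, so x1*x2 >= 0
    x1, x2 = s * rng.uniform(0.01, 3.0), s * rng.uniform(0.01, 3.0)
    u1, u2 = rng.uniform(alpha, beta, size=2)
    g = rng.uniform(0.01, 0.99)
    q = (g * u1 * x1 + (1 - g) * u2 * x2) / (g * x1 + (1 - g) * x2)
    assert alpha - 1e-12 <= q <= beta + 1e-12
```

The quantity q is a weighted average of u₁ and u₂ with weights γx₁ and (1-γ)x₂ of a common sign, which is the content of the lemma.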
Recall that any rank one matrix can be factored as bc' where b and c are vectors and prime denotes transpose. Thus expressing a matrix as bc' is simply a way of insuring that it is of rank one.
Theorem 1: Let b_i(·) and c_i(·) be Rⁿ-valued continuous functions of time and let A(·) be an n by n matrix valued measurable function of time. Consider the system

    ẋ(t) = A(t)x(t) + Σ_{i=1}^m u_i(t)b_i(t)c_i(t)x(t);   x₀ given;   α_i ≤ u_i(t) ≤ β_i

Suppose that for each i, c_i(0)x₀ is nonzero. Then there exists T > 0 such that for each t ∈ [0,T] the reachable set at time t is convex.
Proof: If the controls u_i steer the state from x(0) to x and the controls u_i* steer the state from x(0) to x*, then

    u_i°(t) = [a u_i(t)c_i(t)x(t) + (1-a)u_i*(t)c_i(t)x*(t)] / [a c_i(t)x(t) + (1-a)c_i(t)x*(t)]

steers the system along the path ax(t) + (1-a)x*(t). Moreover, if c_i(t)x(t) and c_i(t)x*(t) are of the same sign then, by the lemma, u_i°(t) lies between α_i and β_i provided u_i(t) and u_i*(t) are likewise bounded. But by continuity of c(·) and x(·), and since the c_i(0)x(0) are all nonzero, there exist t > 0 such that the c_i(t)x(t) have constant sign on [0,t], and thus the theorem is proven.
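The interpolation step of the proof can be exercised numerically. The sketch below uses an arbitrarily chosen rank-one input matrix and two arbitrary admissible controls; it integrates the two trajectories x and x* together with a third trajectory driven by the interpolated control u°, and confirms that the third stays on the convex combination ax + (1-a)x* over a short horizon on which c x keeps constant sign.

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -0.5]])
b = np.array([1.0, 0.5])
c = np.array([1.0, 0.2])             # input matrix b c' has rank one
u  = lambda t: 0.3 * np.cos(t)       # two admissible controls, |u| <= 1
us = lambda t: -0.2 + 0.1 * np.sin(t)
a = 0.4                              # convex-combination weight

def f(t, w):
    x, xs, z = w[:2], w[2:4], w[4:]
    u0 = (a * u(t) * (c @ x) + (1 - a) * us(t) * (c @ xs)) \
         / (a * (c @ x) + (1 - a) * (c @ xs))
    return np.concatenate([
        A @ x  + u(t)  * (c @ x)  * b,   # trajectory under u
        A @ xs + us(t) * (c @ xs) * b,   # trajectory under u*
        A @ z  + u0    * (c @ z)  * b,   # trajectory under interpolated u°
    ])

w = np.tile([1.0, 1.0], 3)           # all three start at x(0) = (1, 1)
h, T = 0.001, 0.5                    # short horizon: c x keeps constant sign
for k in range(int(T / h)):
    t = k * h
    k1 = f(t, w); k2 = f(t + h / 2, w + h / 2 * k1)
    k3 = f(t + h / 2, w + h / 2 * k2); k4 = f(t + h, w + h * k3)
    w = w + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x, xs, z = w[:2], w[2:4], w[4:]
assert np.max(np.abs(z - (a * x + (1 - a) * xs))) < 1e-8
```

By construction u°(c z) reproduces the convex combination of the two forcing terms, so the identity z = ax + (1-a)x* propagates exactly through every integration stage.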
3. CONVEXITY FOR ALL TIME
We denote by K the cone of real matrices which are nonnegative off the diagonal, any values being allowed on the diagonal. As is well known, and easily proven, the solution of the matrix differential equation

    Ẋ(t) = A(t)X(t);   X(0) = I

has, for t ≥ 0, nonnegative entries if A(t) ∈ K for all t ≥ 0. We denote by R₊ⁿ the open subset of Rⁿ consisting of those n-tuples having positive entries. We note the following consequence of Theorem 1.
Theorem 2: Let A(·), b_i(·) and c_i(·) be measurable and let the components of c_i(t) be nonnegative for all t ≥ 0. Suppose that x satisfies the Rⁿ-valued differential equation

    ẋ(t) = (A(t) + Σ_{i=1}^m u_i(t)b_i(t)c_i(t))x(t);   x(0) ∈ R₊ⁿ

and assume that for each admissible u_i and all t > 0 the matrix A(t) + Σ_{i=1}^m u_i(t)b_i(t)c_i(t) is nonnegative off the diagonal. Then the reachable set at time t is convex for all positive t.
Proof: This is an immediate consequence of the proof of Theorem 1 and the fact that under the hypothesis each of the c_i(t)x(t) is nonnegative for all t.
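The fact that matrices nonnegative off the diagonal generate nonnegative transition matrices — which is what keeps trajectories in the positive orthant and the signs of the c_i(t)x(t) constant — can be spot-checked numerically. The matrix below is an arbitrary member of K, chosen for illustration:

```python
import numpy as np

def expm_taylor(M, terms=80):
    """Matrix exponential by truncated Taylor series (adequate here)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# A in K: nonnegative off the diagonal, arbitrary diagonal
A = np.array([[-3.0,  1.0,  0.5],
              [ 2.0, -1.0,  0.0],
              [ 0.1,  0.4, -2.0]])

for t in [0.1, 0.5, 1.0]:
    X = expm_taylor(A * t)           # solves X' = AX, X(0) = I, at time t
    assert np.all(X >= -1e-9)        # entries stay (numerically) nonnegative
    x = X @ np.array([1.0, 2.0, 3.0])
    assert np.all(x > 0)             # positive orthant is invariant
```

Writing e^{At} = e^{-st}·e^{(A+sI)t} with s large enough that A + sI is entrywise nonnegative makes the nonnegativity of the entries evident; the code merely confirms it on one instance.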
4. BANG-BANG CONTROL
Under the hypothesis discussed in Theorem 2 it is possible to describe in some detail the minimum time control to transfer x₀ ∈ R₊ⁿ to any other reachable point. That time optimal controls exist follows from the remarks in the introduction, and in view of the convexity established in Theorem 2 we have the relatively simple situation described in the following theorem.
Theorem 3: Let A(·), b_i(·) and c_i(·) be measurable and let the components of c_i(t) ≠ 0 be nonnegative for all t ≥ 0. Suppose that x satisfies the Rⁿ-valued differential equation

    ẋ(t) = (A(t) + Σ_{i=1}^m u_i(t)b_i(t)c_i(t))x(t);   x(0) ∈ R₊ⁿ

and assume that for each admissible u_i and all t > 0 the matrix A(t) + Σ_{i=1}^m u_i(t)b_i(t)c_i(t) is nonnegative off the diagonal. Then if p is the outward pointing normal to a support hyperplane for the reachable set at time t₁ and point x₁ on the boundary of the reachable set, the control which steers x(0) to this boundary point satisfies

    u_i(t) = β_i   if ⟨p, Φ_A(t₁,t)b_i(t)⟩ > 0
    u_i(t) = α_i   if ⟨p, Φ_A(t₁,t)b_i(t)⟩ < 0

In particular, if m = 1 and A and b_i are time invariant with (A,b_i) controllable, then this condition specifies u_i(·) uniquely (i.e. almost everywhere). If A is n by n with real eigenvalues then n-1 switches is the maximum number required for a minimum time transfer.
Proof: If x₁ is a point on the boundary of the reachable set at time t₁ and if u is a control which steers the system from x₀ to x₁ then

    x₁ = Φ(t₁,t₀)x₀ + ∫_{t₀}^{t₁} Φ(t₁,σ)(Σ_{i=1}^m b_i(σ)u_i(σ)⟨c_i(σ),x(σ)⟩)dσ

If p is the outward pointing normal for a support hyperplane at x₁ then ⟨p,x⟩ is maximized by the given choice of u_i(·). Thus the controls maximize

    Σ_{i=1}^m ∫_{t₀}^{t₁} ⟨p, Φ(t₁,σ)u_i(σ)b_i(σ)⟩⟨c_i(σ),x(σ)⟩dσ

But x(σ) ∈ R₊ⁿ and the components of c_i(t) are nonnegative and not all zero, so that ⟨c_i(σ),x(σ)⟩ is positive. Thus if the u_i(t) maximize this integral they must satisfy the description given in the theorem statement. Now if A and b_i are constant with (A,b_i) a controllable pair for each i, then ⟨p, e^{At}b_i⟩ does not vanish on an interval unless p is zero. Thus in this case u_i(·) is specified uniquely. As is well known in connection with the linear time optimal control problem, ⟨p, e^{At}b⟩ changes sign at most n-1 times if A has real eigenvalues.
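The sign-change bound invoked at the end of the proof says that t ↦ ⟨p, e^{At}b⟩ is a linear combination of the exponentials e^{λ_i t} when A has real eigenvalues λ_i, and such a combination of n exponentials vanishes at most n-1 times. A randomized spot check for n = 3, with arbitrarily chosen eigenvalues:

```python
import numpy as np

lams = np.array([-1.0, 0.5, 2.0])    # real, distinct eigenvalues of A
ts = np.linspace(0.0, 4.0, 20001)
E = np.exp(ts[:, None] * lams)       # columns e^{lambda_i t}

rng = np.random.default_rng(2)
for _ in range(200):
    coeffs = rng.normal(size=3)      # plays the role of the products p_i b_i
    vals = E @ coeffs                # <p, e^{At} b> in the eigenbasis of A
    s = np.sign(vals)
    s = s[s != 0]
    changes = int(np.count_nonzero(s[1:] * s[:-1] < 0))
    assert changes <= 2              # at most n - 1 = 2 sign changes
```

Sampling on a grid can only undercount the true sign changes, so the assertion is a safe (one-sided) check of the bound.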
5. THE REACHABLE SET WITH UNCONSTRAINED u(·)
For time invariant linear systems with an unconstrained control set one finds
that the reachable set does not depend on T. This is not true in general for
bilinear systems however with u(t) unconstrain ed it is easier to compute the
reachable set. In this section we demonstrate this fact by actually computing
the reachable set as a function of t. The following lemma, of some interest in
its own right. plays a central role.
Lemma: Let A and B be constant matrices, n by n and n by m respectively. Suppose (B, AB, ..., A^{n-1}B) spans Rⁿ. The reachable set at time t₁ for

    ẋ(t) = Ax(t) + Σ_{i=1}^m b_iu_i(t);   u_i(t) ≥ 0;   x(0) = x₀ ∈ Rⁿ

is

    S = {x : x = e^{At₁}x(0) + K{e^{At}B; 0 ≤ t ≤ t₁}}

where K{e^{At}B; 0 ≤ t ≤ t₁} indicates the interior of the smallest convex cone with vertex zero containing the vectors e^{At}b_i, 0 ≤ t ≤ t₁.
Proof:
and since the system is controllable the reachable set for u(t) > 0 is open. Thus it is clear that S contains the reachable set. On the other hand, from the standard properties of the Lebesgue integral, if z is any point in S then

    z = e^{At₁}x(0) + Σ_{i=1}^N a_ie^{At_i}b_{j_i} + ε

with a_i ≥ 0 and ε arbitrarily small. Now consider a family of differentiable positive functions u_i^ε(t) defined in such a way as to approximate impulses which occur at t_i with strength a_i, the approximation being in the sense that

    ∫ u_i^ε(t)f(t)dt → a_if(t_i)

for any continuous function f with compact support. That such approximations exist is a standard construction in distribution theory. Moreover, if u_i^ε(·) is such a family then the response to u_i^ε approaches z. Thus we see that any point in S can be approximated arbitrarily closely. But it is clear that the reachable set is convex so this means that all points in the set S can be reached.
We now apply this lemma and our previous results to display the reachable
set for a class of bilinear systems.
Theorem 4: Consider the time invariant n-dimensional bilinear system

    ẋ(t) = Ax(t) + Σ_{i=1}^m u_i(t)b_ib_i'x(t);   x(0) ∈ R₊ⁿ

Assume that the b_i are linearly independent and have a single nonzero entry. Assume further that {Ab_i} span Rⁿ and that A is nonnegative off the diagonal so that, after a possible reordering of the basis, the equations take the form

    ẋ₁(t) = A₁₁x₁(t) + A₁₂x₂(t) + Σ_{i=1}^m u_i(t)b_ib_i'x₁(t)
    ẋ₂(t) = A₂₂x₂(t) + A₂₁x₁(t)

with the b_ib_i' forming a basis for the diagonal matrices in the 11-block. Then the reachable set at time t₁ for x(0) ∈ R₊ⁿ is

    S(t₁) = {x : x₁ > 0;  x₂ ∈ e^{A₂₂t₁}x₂(0) + K{e^{A₂₂t}A₂₁; 0 ≤ t ≤ t₁}}

Proof: Assume that the basis is ordered so that the equations take the given form. Now since x₁(t) has positive entries and since the entries in the b_ib_i' are positive and form a basis as assumed, along any trajectory we can define u as

    u(t) = A₁₁x₁(t) + A₁₂x₂(t) + Σ_{i=1}^m u_i(t)b_ib_i'x₁(t)

and write

    ẋ₁(t) = u(t)
    ẋ₂(t) = A₂₂x₂(t) + A₂₁x₁(t)

Now x₁(t) must be positive and differentiable but other than that it is unrestricted, since u(t) is otherwise arbitrary. Thus the x₂'s we can reach are precisely those of the form

    x₂(t) = e^{A₂₂t}x₂(0) + ∫₀ᵗ e^{A₂₂(t-σ)}A₂₁x₁(σ)dσ

with x₁(·) differentiable and positive. One sees, using the previous lemma, that this set is then precisely the set ascribed to x₂ in the theorem statement. The vector x₁, on the other hand, can be transferred from any positive value to any other positive value in arbitrarily small time with

    ∫₀^δ x₁(σ)dσ

arbitrarily small. Thus we can co