

* Martin Fisher School of Physics, Brandeis University, Waltham, MA 02254, U.S.A. (e-mail: schweber@brandeis.edu). † Collegium Helveticum, Semper-Sternwarte, ETH-Zentrum/STW, Schmelzbergstrasse 25, CH-8092 Zurich, Switzerland (e-mail: waechter@collegium.ethz.ch).

Stud. Hist. Phil. Mod. Phys., Vol. 31, No. 4, pp. 583–609, 2000. © 2000 Published by Elsevier Science Ltd. Printed in Great Britain. 1355-2198/00 $ - see front matter.

Complex Systems, Modelling and Simulation

Sam Schweber*, Matthias Wächter†

Some mathematicians see analogies between theorems or theories; the very best ones see analogies between analogies.

Stefan Banach (quoted in Ulam, 1976)

1. Introduction

We tend to think of the growth of scientific knowledge in terms of the Kuhnian model. But in his Structure Kuhn was specifically concerned with the dynamics of disciplinary and subdisciplinary changes. There are also much larger revolutions than the disciplinary ones described by Kuhn, which we call Hacking-type revolutions. Ian Hacking has identified their characteristics (Hacking, 1981, 1992):

(i) They transform a wide range of scientific practices and they are multidisciplinary. In a Hacking-type revolution something happens in more than one discipline; a multiplicity of scientific disciplines are transformed.

(ii) New institutions are formed that epitomise the new directions.

(iii) They are linked with substantial social change. After a Hacking-type revolution there is a different feel to the world; there is a marked change in the texture of the world. And

(iv) because they are 'big', there can be no complete, all-encompassing history of a Hacking-type revolution.

PII: S1355-2198(00)00030-7

1 We would like to thank Prof. Libby Schweber for showing us a manuscript of the Introduction to her forthcoming book on the meanings and uses of statistics in France and Great Britain during the nineteenth century. Hacking's notion of a style of reasoning is there analysed critically and we have made use of that analysis.

That the scientific revolution of the seventeenth century satisfies all these criteria is clear. The Royal Society and the Académie des Sciences are some of the new institutions it created. Hacking pointed to the rise of the bourgeoisie as indicative of the substantial social change associated with that revolution. The numerous statistical societies founded in the 1830s are some of the new institutions associated with what Hacking called the probabilistic revolution. The avalanche of numbers gave a different feel to the world: it had become quantified, and numbers and statistics ruled it. It was Mr. Gradgrind's world. Concomitantly, the previously dominant determinist Weltanschauung became replaced by a view of the world in which probability and chance played an ever increasing role. The result was the emergence of a new statistical style, constituted by a plethora of abstract statistical entities and governed by autonomous statistical laws, which are 'used not only to predict phenomena but also to explain [them]'. Hacking (1981) called this new use of numbers the 'inferential' style of statistical reasoning, and contrasted it with the more traditional descriptive style already in place.

The end of the nineteenth century witnessed another Hacking-type revolution connected with the microscopic modelling of matter and the establishment of science-based technologies, particularly those grounded in the new understanding of organic chemistry and of electricity. World War II was likewise responsible for a large scale transformation of science and scientific practices, but we shall not call it a Hacking-type revolution, in order to emphasise the fact that we require a Hacking revolution to have a basic and central conceptual component: it must have associated with it a new style of scientific reasoning. Styles of reasoning are the constructs that specify what counts as scientific knowledge, and constitute the cognitive conditions of possibility for science (Hacking, 1992). They are made concrete and explicit through the specification of ontological and explanatory models. The emergence of a new style is associated with the introduction of new types of sentences, entities and explanations. Hacking has given the following as examples of styles of reasoning: the taxonomic, experimental, statistical and genetic or evolutionary styles of reasoning.1

The World War II transformation was brought about by the plethora of novel devices and instruments produced by the wartime activities: oscilloscopes; fast electronic circuitry; the myriad of new vacuum tubes; microwave generators (magnetrons, klystrons) and detectors; rockets; computers; nuclear reactors; new particle detectors. Many of these devices had been introduced before the war, but in a relatively primitive state and on an individual basis. These instruments did affect a wide variety of fields. The scale on which these devices and instruments became available transformed the stage, but the conceptual foundations after


2 See Fortun and Schweber (1993) and Edwards (1996) for the genesis and the details of this revolution.
3 See for example Herman et al. (1973), Wilson (1986), Lawley (1987), Allen and Tildesley (1993), Galison (1996, 1998) and Ceperley (1999). For an insightful philosophical overview see Keller (2000b). See also the National Research Council 1999 report Strengthening the Linkages between the Sciences and Mathematical Sciences (Washington DC: National Academy Press).

the war were not altered by their introduction. World War II did initiate a scientific revolution in management science, risk assessment and military planning, and consolidated an engineering approach that became known as systems engineering. A new style of reasoning evolved during World War II in management science connected with operations research and game theory (and the associated mathematical tools such as linear and non-linear programming). In the United States, the new institutions connected with this Hacking revolution are exemplified by the Rand Corporation. The new style of reasoning restructured companies like the Ford Motor Company under McNamara and transformed the U.S. Department of Defense when he and his whiz-kids came to the Pentagon in 1960.2

We are witnessing another Hacking-type revolution, in which the computer is the central element, in the same sense that the steam engine was the central element in the first industrial revolution and that factories driven by steam power and steam locomotives and railroads transformed the economic and social landscape. That the computer has similarly generated a sweeping transformation of the social, material, economic and cultural context is evident: think only of the transformation of the workplace and the novel routinisations that the computer has introduced, of e-commerce, of the new classes of professionals, etc. However, we shall not be concerned in this paper with these broad and general features of the revolution. Rather we focus on one of its components, which for lack of a better name we call the 'complex systems computer modelling and simulation' revolution, for complexity has become one of our buzzwords and mathematical modelling and simulation on computers constitute, we claim, its style of reasoning. Historically, the interaction between mathematics and the sciences has had many different aspects. Two of these have been modelling and simulation. The formulation of a mathematical model of a system may involve any branch of mathematics in its architecture, construction, testing and evaluation. By simulation of a system is usually meant the articulation of an existing mathematical model. The output of the simulation may be graphical, numerical or analytical; its purpose is to better understand the model and/or to assess its predictions (Abraham, 1991, p. 2). The facility with which computers can generate such outputs, in ever more complex problems, is of course the reason why they have played a fundamental role in this 'revolution'. Dramatic advances in microelectronics made large mainframes and cheap, powerful desktop computing possible. Powerful graphics were one of the by-products of these advances, and graphics in turn generated a new kind of intuition in mathematical modelling. Computers have transformed simulation.3


Although they have not replaced classical analysis, numerical computation and graphical representation have become the dominant methods of simulation. Computers may also be revolutionising mathematics by their use as heuristic tools and by challenging the usual concept of what constitutes a proof (MacKenzie, 1996).

Julian Schwinger once quipped that Feynman diagrams had brought quantum field theory, one of the most abstruse branches of theoretical physics, to the masses (Schwinger, 1983). Similarly, desktop computing has revolutionised doing theory and the relationship between theory and experiment in many fields. Indeed, computers have revolutionised modelling in the physical sciences. The advantage of being able to explore models rapidly and efficiently using the tools of 'pencil and paper' mathematical analysis has been greatly reduced. A skilled modeller can now analyse the consequences of a given model often much more rapidly using computers than doing so by analytic means. The computer has thus levelled the field of doing certain kinds of theoretical modelling. And the number of scientists using desktop computers for modelling purposes is increasing dramatically. This quantitative increase is responsible for a qualitative change in the way science is being done in many fields. Thus Rohrlich already in 1990 suggested that computer simulation in physics entailed

a qualitatively new and different methodology [that lies] somewhere intermediate between traditional theoretical physical science and its empirical methods of experimentation and observation. In many cases, it involves a new syntax which gradually replaces the old, and it involves theoretical model experimentation in a qualitatively new and interesting way. Scientific activity has thus reached a new milestone somewhat comparable to the milestones that started the empirical approach (Galileo) and the deterministic mathematical approach to dynamics (the old syntax of Newton and Laplace) (Rohrlich, 1990, p. 507).

An equally dramatic transformation is occurring in 'supercomputing'. James Langer, in a report to the U.S. National Academy of Sciences summarising the findings of a July 1998 National Conference on Advanced Scientific Computing, noted that

computing speeds and capacities have been growing exponentially for over two decades. [But] it is only over the last several years that scientific computation has reached the point where it is on a par with laboratory experiments and mathematical theory as a tool for research in science and engineering. The computer literally is providing a new window through which we can observe the natural world in exquisite detail.

Similarly and just as dramatically, during the past few years the computer-based Internet has emerged as a vehicle for communicating huge amounts of information throughout the world. [...] These advances in computing and communication point to a structural transformation of the ways in which we gain understanding, make informed decisions, and innovate in modern society. A profound transformation in the way research is carried out is taking place. Nor is the transformation over. Since it is likely that in the next decade 'terascale' computing systems, with


4 In 1996, in order to ensure the safety and reliability of the United States' nuclear arsenal and to adhere fully to the Comprehensive Test Ban Treaty, the U.S. Department of Energy established the Accelerated Strategic Computing Initiative (ASCI). The goal of ASCI is to simulate the results of new weapons designs, and to simulate the effects of aging on existing and new designs, all in the absence of additional data from underground nuclear tests. Thus with funding from ASCI three new computer systems that can sustain more than 1 teraflops have been installed at the Los Alamos, Sandia and Lawrence Livermore National Laboratories. By 2002 computer systems 10 times more powerful are to be delivered to these laboratories, and by 2004 computers capable of 100 trillion operations per second will be available.
5 Thus in December 1999 the Human Genome Project 'celebrated' the completion of the mapping of chromosome 22 by an international team located in Cambridge, England, St. Louis and Oklahoma City in the US, and in Tokyo, Japan.
6 For a valuable introduction to the subject see Auyang (1998).
7 See for example the websites of the centres for the study of complex systems at the University of Michigan (http://www.pscs.umich.edu/), Georgia Tech and Brandeis University.

speeds and capacities approximately 1000 times larger than the present ones, will become widely available to the scientific and engineering community, major changes will continue to be effected.[4] In addition, broad access to digital libraries in which massive experimental data sets are stored, and access to web-based connections to a wide variety of analytic tools, will become the norm. This is already the case in high energy physics and with some parts of the genome project.[5]

The ever growing complexity of the problems being addressed is nurturing and accelerating the trend toward collaborative research. Success in solving the problems being tackled not only requires investigators with different backgrounds and toolkits to interact with one another but also demands intensive collaboration with computer scientists. But by virtue of the web and of the internet these collaborations no longer require that the members of the collaboration be together at the same place at the same time.

The computer made the study of non-linear systems and the phenomena they exhibit a practical possibility. In particular, the use of high resolution computer graphics made it possible to identify and explore ordered patterns in these highly irregular phenomena. The combining of 'numerical experiments' on the computer with mathematical analysis gave rise to the new interdisciplinary field of non-linear dynamics. As a result of such studies, starting in the mid 1960s, a number of people suggested that there possibly existed surprising and deep connections among a wide variety of systems chosen from disciplines as diverse as statistical mechanics, biology, computer science, economics, and perhaps others. The approaches in all these fields used non-reductive, synthetic analyses, and non-linearity was an essential part of the mathematical modelling of the phenomenon or process.6 The restructuring of universities with the establishment of new multidisciplinary doctoral programmes in interdisciplinary centres for non-linear dynamics or complex systems,7 and the recognition of computer modelling as a valid, accepted subdiscipline in physics, chemistry, astrophysics, fluid dynamics, geology, biology, ecology, in engineering, the social sciences, economics, medicine, library science and in other fields, are instantiations of the


8 See the Santa Fe Studies in the Sciences of Complexity, the proceedings of the conferences held there.
9 In this connection note the advances in nanotechnology, made possible by advances in instrumentation and technology (e.g. scanning electron microscopes and lasers): Slater's vision of molecular engineering has been realised. Note also the linking of the cosmological scale with the submicroscopic in string theory, and the fact that experiments relating to cosmology have become the testing ground for speculations in 'elementary particle physics'.
10 Note that 'directed evolution' is already being used commercially for the selection of better enzymes.
11 For a readable account of computability and computational complexity see Garey and Johnson (1979) and Flake (1999).
12 How these results would be affected if the computations were done on a quantum computer remains to be explored.

social and institutional changes taking place. The Santa Fe Institute is perhaps the paradigmatic institutional manifestation of this viewpoint and of the 'complex systems computer modelling and simulation' revolution.8 The fact that the Institute for Advanced Study in Princeton (not one of the most innovative institutions around) has recently assembled a group doing theoretical biology indicates how pervasive the revolution is.

We have indicated that Hacking-type revolutions are linked with substantial social change and a marked change in the texture of the world. Perhaps the most palpably felt change in 'atmospherics' is the dramatic collapse of time scales of change stemming from the use of computers and the tasks they can accomplish by virtue of computations and simulations, and an associated change in spatial scales,9 not to mention the creation of virtual new spaces. And if thus far collapsing time scales have only affected the tempo of social evolution, the near future portends the collapsing of the time scale of biological evolution, which may well turn out to be even more significant.10

There are other aspects of the revolution that merit emphasis. It seems that the twentieth-century emphasis on reductionism, fuelled by the great insights and accomplishments of non-relativistic quantum mechanics in explaining the atomic and molecular world, and by the equally impressive achievements of relativistic quantum field theory and of experimental high energy physics that yielded the standard model, is being replaced by 'constructivism', i.e. the attempt to see how far the world can be 'reconstructed' in terms of the stable foundational theories at hand. This, of course, is only possible with the use of computers. And in doing so, the use of computers has focused attention on what is and what is not computable in finite time on universal computers of the Turing type.11 Thus Wolfram (1985) noted that physical questions that concern idealised limits of infinite time, infinite volume, or numerical precision can require arbitrarily long computations, and thus be formally undecidable. The study of spin glasses (Fischer and Hertz, 1991) and other disordered systems in condensed matter physics has shown that certain fundamental properties of such systems, e.g. the true ground state, are not computable on a Turing-type computer in finite time (when the number of spins becomes very large).12 Similarly, it has recently been proved that the three-dimensional Ising model is


13 NP = non-deterministic polynomial time problem. NP-problems form a class of computational problems that may or may not be solvable in polynomial time but that can be expressed in such a way that plausible solutions can be tested for correctness in polynomial time. An NP-complete problem is one into which any instance of any other NP-class problem can be translated in polynomial time. This in turn means that if a polynomial algorithm exists for an NP-complete problem, then any problem that is NP can be solved with the same algorithm. One of the outstanding, and as yet unsolved, problems in computer science is to prove (or disprove) the claim that P = NP. The answer appears to be that P ≠ NP. What is clear is that all P algorithms, i.e. algorithms executable in polynomial time, are NP: if one can solve a problem in polynomial time, one can certainly check it in polynomial time.
14 Biological systems are much more complex than those studied in physics and chemistry and we shall not be concerned with them. Biological systems, such as organisms or ecologies, cannot be understood unless several levels are simultaneously considered. Moreover, one is often required to study fine details at a lower level that are linked to large outcomes at higher levels. One seeks to understand how higher level units (e.g. societies) emerge from the interactions among lower-level biological units, e.g. organisms, and in particular, how natural selection at one given level affects selection at lower or higher levels. For example, ecological communities are structured to a large extent by the competition among individuals for food and space, and by that of predators and their potential prey over who utilises the resources in the prey's bodies: whether they become food for the predators or the site where offspring is produced. But in addition, an essential component of the stability of an ecosystem is an intensive mutualism; and it is this web of interdependence that makes possible the diversity and abundance that sustains the ecosystem (see Leigh, 1991). Ecologists want to understand how competition among individuals and species brings forth the interdependence that characterises ecological communities. What keeps competition among a group's members from overwhelming their common interest in their group's effectiveness and destroying the common good of their cooperation? (Allen and Starr, 1982).

an NP-complete problem (Cipra, 2000).13 These results, at the practical level, together with deep results about algorithmic computability in general, have challenged previously held notions about what is calculable 'in principle' and have given support to more 'naturalistic' explanations and understanding of the world. Coupled with a widespread trend to look upon our amazingly robust, empirically based representations, even the most foundational ones such as QCD or the standard model, as 'effective' theories, 'models' of the world, the 'modelling and simulation revolution' has implied that what is meant by 'reality' is again up for grabs, and the views of the American pragmatists are gaining ever greater currency. The robustness and accuracy of the 'effective' theories in the computation of the properties of not only simple systems but also of fairly complex systems in many branches of physics and chemistry has shifted the emphasis in modelling from 'models of' to 'models for' (Keller, 2000a).
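The defining property of NP invoked in footnote 13, that a proposed solution can be checked in polynomial time even when finding one may be infeasible, can be made concrete with a toy sketch of our own (not drawn from the paper): verifying a truth assignment for a Boolean formula in conjunctive normal form, the canonical NP-complete problem, takes only a single pass over the clauses.

```python
def check_sat(clauses, assignment):
    """Verify a truth assignment against a CNF formula.

    clauses: list of clauses; each clause is a list of integer literals,
             where literal n means variable n and -n its negation.
    assignment: dict mapping variable number -> bool.
    Runs in time proportional to the total number of literals,
    i.e. polynomial in the problem size.
    """
    for clause in clauses:
        # A clause is satisfied if at least one literal evaluates to True.
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is falsified by the assignment
    return True

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(check_sat(formula, {1: True, 2: False, 3: True}))    # prints True
print(check_sat(formula, {1: False, 2: False, 3: False}))  # prints False
```

Finding a satisfying assignment, by contrast, has no known polynomial-time algorithm; the brute-force search is exponential in the number of variables, which is exactly the asymmetry behind the P = NP question.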

Complex systems are characterised by multiple temporal and spatial scales. The 'simplest' cases, like those beautifully analysed by Sunny Auyang in her book Foundations of Complex System Theories (1998), are essentially single level models in which system, constituent level and the connections between them are encompassed. In this paper we shall be primarily concerned with composite systems in physics and chemistry.14 One of the reasons that the modelling in physics and chemistry is relatively 'simple' is that relatively simple descriptions of the environment in which the 'elementary' constituents are stable can be given, and that in such environments these constituents have quasi-constant


properties. Furthermore, the 'elementary' entities that constitute the furniture of that part of the world can usually be taken to be ahistoric, and the interactions between them are either given by the foundational theory that describes their dynamics or ascertained experimentally and described phenomenologically. But when one recalls that the number of identified molecules is of the order of 12 to 13 × 10⁶, and that similarly the number of systems and materials investigated by condensed matter physicists ranges in the tens of thousands, it is clear that the challenge is to give descriptions and understanding, i.e. formulate models, that on the one hand reflect the foundational theory and the properties of the 'elementary' constituents, yet are applicable to a broad class of structures and processes. Thus, useful models should be simple, yet describe the empirically determined facts to the accuracy of the chemical or physical experiments. Furthermore, they should be efficient and cost effective: the amount of computer time should not increase too rapidly with the size of the system.

In physics and chemistry, the way the interactions between the constituents that make up the system are described can broadly be characterised as

(a) 'energetic' or
(b) 'informational'.

By 'energetic' we mean the generic description of the interactions of the 'elementary' constituents in a given level as described by physics. For example, in the case of electrons and nuclei, the description of their interaction by exchanges of photons, or, when applicable, i.e. depending on the context, the (approximate) description of their interactions by Coulomb and Biot–Savart potentials. Or, in the case of molecules, the description of their structure or their interactions by potentials derived from the foundational theory by considering simple models, e.g. Heitler–London covalent bonding; or, in the case of more phenomenological descriptions of intermolecular forces, such as van der Waals or polarisation interactions, using simple quantum mechanical models and giving the parameters that enter this description empirically determined values. In these cases the mathematical, foundational description of the system in terms of the interacting constituents is known, and the problem is to get a valid, robust, approximate description at the system level, the system level description being selected by the context and the measuring processes used to probe the system (see for example Kaplan, 1987; Marina, 1988).
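To make the 'energetic' description concrete: for electrons and nuclei treated non-relativistically, the foundational description alluded to above is the standard Coulomb Hamiltonian of molecular theory (written here in Gaussian units; the Biot–Savart and radiative terms mentioned in the text are omitted):

```latex
\hat{H} = -\sum_{i}\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2}
          -\sum_{A}\frac{\hbar^{2}}{2M_{A}}\nabla_{A}^{2}
          +\sum_{i<j}\frac{e^{2}}{r_{ij}}
          -\sum_{i,A}\frac{Z_{A}e^{2}}{r_{iA}}
          +\sum_{A<B}\frac{Z_{A}Z_{B}e^{2}}{R_{AB}}
```

where i, j label electrons, A, B label nuclei of charge Z_A and mass M_A, and r_ij, r_iA, R_AB are the interparticle distances. The system-level approximations the text mentions (Heitler–London bonding, van der Waals and polarisation potentials) are derived from, or fitted consistently with, this Hamiltonian.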

By 'informational interaction' we mean a description of the interaction of more complex structural entities, as in the case of organic reactions or macromolecular interactions in which structural changes are described in terms of 'molecular recognition', or in the case of biological entities in terms of transfers of 'information' which are presumably linked to entropic changes. Thus chemists sometimes describe the early stage in the catalytic reaction of an enzyme in vivo as follows: 'the enzyme recognizes a substrate and forms a substrate-enzyme complex. Self-regulation of the enzyme occurs at the substrate-binding site to identify the shape and positions of functional groups of substrate' (Kitaura, 1998, p. 95; emphasis added). No one doubts that the phenomena associated with


molecular recognition and informational transfer are ultimately to be understood in terms of 'energetic' interactions. An underlying assumption is that each molecule at a particular time exists in a particular quantum state (described by a density matrix), and that molecular function can be defined and described in terms of a transition from this initial state to another, the final state (again describable by a density matrix). This molecular function or process can involve not only changes in energy levels (electronic, vibrational, rotational), but also changes in conformation (from changes in the positions of the nuclei). But such a detailed description may be cumbersome and complex, whereas the 'informational' language has heuristic merit. In phase transitions, one speaks of configurational entropy as the difference between the entropy of initial and final states. The issue is how to relate such differences in entropy to structural properties of the initial and final configurations (which are well defined for the 'infinite systems' considered in phase transitions), to properties of individual molecules in the case of biochemical reactions, and how to differentiate entropic from energetic considerations. Some biochemists, in fact, speak for example of a protein (in a certain conformation) as being endowed with a certain amount of 'information' which is inherent in its structure. By virtue of their structure, macromolecules can select or pick out (i.e. interact selectively with) certain other molecules. As a result of the interaction, their structure is changed, and hence their informational content. Reverting back to their original configuration implies a change in informational content. Such interactions can be analysed in informational terms and related to entropy changes in the reaction: the informational change stemming from configurational restructuring must be balanced by a change in the entropy of the macromolecules and that of the environment. This way of modelling and visualising the interaction introduces a new language, which reflects the structural changes, but which still admits the possibility of quantification (see Loewenstein, 1999).
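The quantification alluded to here can be sketched, as our gloss rather than the authors' own formalism, with the von Neumann entropy of the density matrices just introduced. A molecular function taking an initial state ρ_i to a final state ρ_f carries an entropy change

```latex
S(\rho) = -k_{B}\,\operatorname{Tr}\!\left(\rho\ln\rho\right),
\qquad
\Delta S = S(\rho_{f}) - S(\rho_{i})
```

and the 'informational' bookkeeping described in the text amounts to requiring that the ΔS of the macromolecule be balanced against the entropy change of its environment in the course of the reaction.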

Our aim in this paper is to sketch how the simpler language and description of the (complex) emergent level is related to the more detailed (and complex) language and description at the compositional level. We shall restrict ourselves to considerations relating to physics and chemistry and limit our discussion to the case where interactions are described 'energetically'. The relationship between structure and the seemingly robust 'informational' language, whereby structural changes allow descriptions in terms of informational exchanges and relate these to entropic changes in a reaction, will be addressed in a subsequent paper. We shall be principally concerned with 'computational chemistry', by which we understand the study of the structure, properties and dynamics of chemical systems. We further shall only be concerned with problems of molecular electronic structure. The advances in that field can be gauged by the fact that twenty-five years ago accurate calculations of the structure and properties of molecules with more than a dozen atoms were rare. They are commonplace today. It can be argued that the award of the 1998 Nobel Prize in chemistry to Walter Kohn and John Pople is an affirmation of the importance of this field and the acceptance of computational chemistry as a third methodology

591 Complex Systems, Modelling and Simulation


15 Models, modelling, simulations and their objects have become the focus of a great deal of attention from historians and philosophers of science. See for example the Summer 1999 issue of Science in Context, and the informative introductory essay by Sergio Sismondo (Sismondo, 1999). See also Cartwright (1983), Morrison and Morgan (1999).
16 See Hesse (1966), and also Nersessian (1999) for recent work on the subject.

alongside theory and experiment. Our paper is structured as follows: Section 2 states what we mean by a model. Since quantum mechanical modelling is what we are principally concerned with, Section 3 outlines the assumptions that go into that representation of the world. The main part of the paper, Section 4, is an attempt to understand the relation between the description of an emergent level and that of the level of its compositional elements, in particular, the relation between the description of molecules in terms of structural diagrams and formulas and the quantum mechanical description that represents the molecules in terms of their constituent atoms. An epilogue reassesses our presentation.

2. What Is a Model?15

In its simplest form, a model is a simplified representation of a structure or process. Achinstein (1968, p. 257) has characterised models and analogies in the physical sciences as follows:

In all the cases considered we might describe the model or analogy as (or as containing)

(1) a representation of X; but
(2) one that is either not literal, or not faithful in all respects, or not complete, and may represent X in some 'indirect' manner; and
(3) one that utilizes something more or less familiar, known, understood, readily grasped, or easily experimented upon.

Thus, a representational model represents X, but not completely and not necessarily literally, by utilizing something Y that is familiar or more easily grasped. In a theoretical model we represent X, but only approximately and not completely, by bringing it under, or at least utilizing parts of, some more basic theory or theories that are familiar and understood. In an imaginary [fictitious] model we represent X but not in a way intended to be literal, by imagining how X could satisfy certain conditions, where either the set of conditions or the way we represent X is more or less familiar. In an analogy X is represented in an indirect way by being shown to be similar in some though not all respects to a distinct item more familiar or more easily grasped.

Models can be conceptual, as in the case of mathematical modelling or the use of Feynman diagrams in high energy physics, or material, as in the case of the cogs and wheels in Maxwell's modelling of the electromagnetic field, or the use of lego-like structures to represent chemical structure.16 In this paper we shall limit ourselves to the models used in physics and chemistry. We shall differentiate

592 Studies in History and Philosophy of Modern Physics


17 See in this connection the Introductory Remarks made by R. S. Mulliken at a 1972 Symposium on computational methods for large molecules (Herman et al., 1973).

(loosely) between models and the theories used in the models, reserving the appellation 'theory' for what has recently become equated with 'foundational' or 'effective theories' (Georgi, 1989). We accept Margaret Morrison's insight that models have an instrumental component that cannot be ascribed to theories (Morrison, 1998). By a theoretical model for any complex structure or process we shall mean an 'approximate but well defined mathematical procedure of simulation' (Pople, 1999; emphasis in the original). The kinds of mathematical modelling we shall be concerned with are general ones: we will not consider particular procedures for particular molecules or particular symmetries. In both physics and chemistry, models which make use only of the fundamental constants of physics (i.e. the mass, charge and magnetic moment of the electron; the masses, charges, and magnetic moments of the nuclei, ...) are usually given the appellation ab initio; those in which some of the parameters are determined by fitting to some empirical data are usually called semi-empirical. In this paper we shall be principally concerned with ab initio modelling in chemistry, and adopt non-relativistic quantum mechanics as the 'foundational' theory for the description of chemical systems, wherein the binding in molecules is assumed to be due to the Coulomb forces between electrons and nuclei.

Our notion of a 'model' is well illustrated by the density functional theory (DFT) developed by Walter Kohn, Pierre Hohenberg and Lu Sham (Dreizler and Gross, 1990). An exact solution of the Schrödinger equation describing a molecule or a solid is impossible to obtain.17 But surprisingly accurate approximations for the ground state of a system can be based on DFT. One can readily prove that the ground state energy of an n-electron system is a functional only of the electron density ρ(r). The DFT model then consists in representing the electrons by one-body wavefunctions ψ_i which satisfy one-body Hartree-Fock-like equations

$$\left(-\frac{\hbar^2}{2m}\nabla^2 + V_N(\mathbf{r}) + e^2\int \frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r' + \mu_{xc}[\rho(\mathbf{r})]\right)\psi_i(\mathbf{r}) = \varepsilon_i\,\psi_i(\mathbf{r}), \qquad (1)$$

where the first term is the kinetic energy of the electron, the second the potential due to all the nuclei, the third the electron-electron repulsive potential, and the fourth the exchange and correlation potential, which accounts for the Pauli principle and spin effects. This last term is usually approximated by a local potential corresponding to the exchange-correlation potential of a homogeneous electron gas of density ρ(r). DFT can account for lattice constants, atomic positions, phonon frequencies and elastic properties with errors of less than a few percent. It has done so for the fullerenes. It has explained the properties of C60, and has predicted that solid C36 should be a superconductor with a high T_c (Bernholc, 1999).
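Equation (1) must be solved self-consistently: the Hartree and exchange-correlation terms depend on ρ(r), which is in turn built from the orbitals ψ_i obtained with that very potential. The following toy sketch (ours, not Kohn's; every function and number in it is a hypothetical scalar stand-in, not a real electronic-structure calculation) illustrates the fixed-point iteration this entails:

```python
# Toy self-consistency loop in the spirit of the Kohn-Sham equations (1):
# the effective potential depends on the density, the density on the
# orbitals obtained with that potential, so one iterates to a fixed point.
# All quantities are scalar stand-ins; nothing here is a real DFT code.

def effective_potential(density, v_nuclear=-2.0, coupling=0.5):
    """Hypothetical effective potential: a nuclear part plus a
    density-dependent Hartree/exchange-correlation stand-in."""
    return v_nuclear + coupling * density

def density_from_potential(v_eff):
    """Hypothetical 'solve the one-body equations' step, returning the
    density implied by the current effective potential."""
    return 1.0 / (1.0 + v_eff * v_eff)

def scf_loop(tol=1e-10, max_iter=500, mixing=0.3):
    """Iterate potential -> orbitals -> density until self-consistent."""
    density = 1.0  # initial guess
    for _ in range(max_iter):
        new_density = density_from_potential(effective_potential(density))
        if abs(new_density - density) < tol:
            return new_density
        # linear mixing of old and new densities aids convergence
        density = (1 - mixing) * density + mixing * new_density
    raise RuntimeError("SCF did not converge")

rho = scf_loop()
```

Production electronic-structure codes iterate essentially this loop, with the density represented on a grid or by orbital coefficients and with far more sophisticated mixing schemes.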



The complexity of the mathematical description of molecular systems requires the use of approximations. One way to assess the adequacy of any such approximation is to compare the computed value of a property of interest with its measured value. However, the uncertainties that may be associated with each of these values often imply that a critical attitude concerning the merit of the approximation and of the model is warranted. It is because of this that various bounds that may exist for certain theoretical values can be of importance in assessing the adequacy of any model that may be used to determine them. The outstanding example of such bounds in non-relativistic quantum mechanical computations is the conventional 'variational principle' for ground state total energies of systems. This principle asserts that the value determined for the energy of a system described to be in some energy eigenstate other than the exact ground state can never be less than the exact theoretical value for the ground state energy. In marked contrast, the corresponding ground state energy values of systems that consist of collections of electrons and nuclei which are described by the Born-Oppenheimer adiabatic approximation of molecular systems must satisfy an obverse 'variational principle' (Brattsev, 1965; Epstein, 1966), whereby any system in the energetic ground state of that adiabatic approximation will have a computed value of the total energy that can never be greater than the exact theoretical value. The possibility that extremely complicated computations can now be executed by available computers does not assure an acquisition of exact adiabatic ground state energy values. It is possible that one could obtain ostensible theoretical values of that quantity that might appear to be in good agreement with experimental values as a result of approximations that may arise in the course of such computations.
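The conventional variational principle invoked above can be made concrete with a textbook example (our illustration, not the authors'): for the harmonic oscillator H = -(1/2) d²/dx² + (1/2) x² (units ℏ = m = ω = 1), a normalised Gaussian trial function exp(-a x²/2) has the analytic energy expectation E(a) = a/4 + 1/(4a), which can never fall below the exact ground state energy 1/2:

```python
# The variational bound for the harmonic oscillator with a Gaussian trial
# function psi_a(x) ~ exp(-a x^2 / 2): the analytic expectation value is
# E(a) = a/4 + 1/(4a).  The exact ground state energy is 1/2, attained at
# a = 1; every other a overestimates it, never undershoots it.

def trial_energy(a):
    """<psi_a | H | psi_a> for the normalised Gaussian trial function."""
    return a / 4.0 + 1.0 / (4.0 * a)

E_exact = 0.5
energies = [trial_energy(a) for a in (0.25, 0.5, 1.0, 2.0, 4.0)]
# every entry lies at or above E_exact; equality holds only at a = 1
```

This one-sided error is precisely what makes the bound diagnostically useful: an 'improved' trial function is one whose energy goes down.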

3. Quantum Mechanical Descriptions

The formulation of non-relativistic quantum mechanics by Werner Heisenberg, Max Born, Pascual Jordan, Paul Dirac, Erwin Schrödinger, and Wolfgang Pauli from 1925 to 1927 was a revolutionary achievement. Its underlying metaphysics is atomistic. Its success derived from the confluence of a theoretical understanding, the representation of the dynamics of microscopic particles (non-relativistic quantum mechanics) and the formulation within that formalism of the consequence of the strict identity and indistinguishability of electrons, and the apperception of a quasi-stable ontology, namely electrons and nuclei, the building blocks of the entities (atoms, molecules, simple solids) that populated the domain that was being carved out. Quasi-stable means that under normal terrestrial conditions, electrons and (non-radioactive) nuclei could be treated as ahistoric objects, whose physical characteristics were seemingly independent of their mode of production and whose lifetimes could be considered as essentially infinite.

Quantum mechanics implies that bound composite structures have discreteenergy levels, and determines the characteristic energies involved in terms of the



18 Quantum mechanics indicates that structured systems can only have discrete states, each having a characteristic energy. The smaller the size of the system, the greater the scale of characteristic energies. The excitation energies of the valence electrons of atoms, whose sizes are of the order of 10⁻⁸ cm, are of the order of electron-volts. The vibrational and rotational spectra of molecules, composite systems made up of atoms, have characteristic energies of a tenth and a hundredth of an eV respectively. Nuclei, which have radii of the order of 10⁻¹³ cm, have characteristic excitation energies ranging from thousands (keV) to millions of electron-volts (MeV). In composite systems made up of electrically charged particles, the excited states tend to have short lifetimes. They decay to the stable ground state by emitting electromagnetic radiation.
19 The plasticity of atoms is manifested by the ease with which the valence electrons can make transitions to excited states, by their ability to enter into covalent bonds with the valence electrons of other atoms, or to form ionic bonds. The stability is displayed by the permanence, under most terrestrial conditions, of the core structure.

masses, charges and spins of the constituent elements.18 If the ambient temperature T of the surroundings is such that kT is much larger than the binding energies of the compound structures under consideration, they will dissociate, and the physical processes involved must be described in terms of their constituents. On the surface of the earth an atom can be considered a simple object, composed of a nucleus and a surrounding cloud of electrons, that exhibits both stability and plasticity.19 The combined stability and plasticity of atoms allows them to further combine with one another and to form more complex objects, such as molecules and solids. Moreover, the possible composite structures that can be created from the 'elementary' constituents are almost limitless.
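The dissociation criterion just stated is easy to quantify (a back-of-the-envelope aside of ours): at terrestrial temperatures kT ≈ 0.026 eV, far below the eV-scale electronic excitation energies of footnote 18, which is why atoms and molecules survive at all:

```python
# Back-of-the-envelope check of the energy scales in footnote 18:
# thermal energy kT at terrestrial temperatures versus the characteristic
# excitation energies quoted in the text.

K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K (CODATA value)

def thermal_energy_ev(temperature_kelvin):
    """kT expressed in electron-volts."""
    return K_B_EV * temperature_kelvin

kT_room = thermal_energy_ev(300.0)  # roughly 0.026 eV
# Electronic excitations (~eV) comfortably survive room temperature,
# while rotational levels (~0.01 eV) are readily thermally populated.
```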

In 1929, in the wake of the enormous success of non-relativistic quantum mechanics in explaining atomic and molecular structure and interactions, Dirac, one of the main contributors to these developments, in a now famous quotation declared: 'The general theory of quantum mechanics is now almost complete'. Whatever imperfections still remained were connected with the synthesis of the theory with the special theory of relativity. But these were '[...] of no importance in the consideration of atomic and molecular structure and ordinary chemical reactions [...]. The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble [...]' (Dirac, 1929). Dirac's assertion is still valid, and computational complexity may well be the limiting factor. As Anderson (1972) has emphasised:

[...] the reductionist hypothesis does not by any means imply a 'constructionist' one: The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less to those of society. The constructionist hypothesis breaks down when confronted with the twin difficulties of scale and complexity.



20 In the case of nuclear physics it again required the apperception of a quasi-stable ontology, in this case neutrons and protons, the masses, electric charges and magnetic moments of which were empirically determined. Neutrons and protons could be considered the 'elementary' constituents of nuclei, and with the determination of phenomenological low energy neutron-neutron, neutron-proton and proton-proton interaction potentials a quantum mechanical description of nuclear structure and nuclear reactions could be given. (Note that low energy means energies such that the nucleons move non-relativistically, well below the threshold for meson production.) The forces between the nucleons were initially unknown (in contradistinction to the Coulomb and magnetic interactions between charges). These forces had to be ascertained through scattering experiments, and then used to see whether they could account for the observed regularities in nuclear structure. The story repeated itself again in the world of hadrons: the classification by Murray Gell-Mann and Yuval Ne'eman of the multitude of seemingly 'elementary particles' discovered in cosmic rays and in the collisions induced in high energy accelerators eventually led to the standard model. Once again there are (some twenty) parameters that enter that description, e.g. the masses of the quarks, and why the parameters have the values they have are meta questions: the standard model cannot answer them.

The constructionist hypothesis has to confront the problem of computability. It may be, in analogy with Gödel's theorem, that the only way to overcome the computability limitation when trying to describe complex systems in terms of foundational theories is to insert empirical data at some intermediate step and thus to bootstrap oneself. We shall return to this issue in Section 4.

Recall that in non-relativistic quantum mechanics the mass, charge, magnetic moment and spin of the electron, and those of the atomic nuclei, are empirically determined parameters that enter into the theory as constants. Why these masses and charges are what they are, are meta questions that non-relativistic quantum mechanics cannot answer. But non-relativistic quantum mechanics gives an explanation of the Mendeleev periodic table of the elements, and of many other intrinsic questions. A comparable story unfolded in nuclear physics and in the world of hadrons.20 These advances speak of the robustness of quantum mechanics, and it is worthwhile to try to identify the attributes that are responsible for it. To do so it is important to differentiate between the mathematical structures employed (Hilbert spaces, the elements of which are identified with the possible states of systems; self-adjoint operators, associated with the observables of the system; the implementation of the time evolution of state vectors by a unitary transformation) and the specific equations associated with particular systems (e.g. the Hamiltonian used to give a non-relativistic description of a helium atom or of a hydrogen molecule).

This same disjunction appears in the various models developed within the framework of quantum field theory (QFT) to describe the subatomic world. During the past twenty years our understanding of quantum field theories (and their renormalisation) has changed dramatically. What has emerged from the work of Kenneth Wilson and Steven Weinberg is a more limited view: all extant theoretical representations of phenomena are only partial descriptions that depend on the energy at which the interactions are being analysed. Successful quantum field theories are low energy approximations to a more fundamental theory that may or may not be a field theory. What Weinberg and others have



21 The notions of finalisation and decoupling clearly have important ramifications for policy issues. The concept of a quasi-stable theory has affinity with the notion of a 'closed theory' that Heisenberg advanced in the late twenties and that he published in Dialectica in 1948. The notion of a quasi-stable theory was introduced in Schweber (1989). For Heisenberg's views see 'The Notion of a "Closed Theory" in Modern Science' in Heisenberg (1974). Heisenberg's conceptualisation emphasises the stability of the theory against small changes in the formalism. Thus it seems extremely difficult to make small changes in the formalism of quantum mechanics that maintain logical consistency, empirical validity and the usual notions of causality. See also Weinberg (1996).

shown is that the reason that quantum field theory describes physics at accessible energies 'is that any relativistic quantum theory will at sufficiently low energies look like a quantum field theory'. These reconceptualisations have led to a hierarchical structuring of the microscopic and sub-microscopic world, wherein, except for the level of quarks and the standard model, the meta questions at a given level are answered by the next lower level. Only emergent questions remain at any given level. A hierarchical arraying of parts of the physical universe has been stabilised, each with its quasi-stable ontology and quasi-stable effective theory, and the partitioning is fairly well understood. For the energy scales that are experimentally probed in atomic, molecular and condensed matter physics the irrelevance (to a very high degree of accuracy) of the domains at much shorter wavelengths has been justified. Effectively some kind of stabilisation, one might call it a modified form of finalisation, has taken place in these domains.21

One can further analyse this hierarchical arraying into two tracks that can readily be recognised historically: a reductionist component characterised by a search for underlying simplicity and for ever more 'elementary' constituents, and a component motivated by the attempt to understand the complexity that can emerge from the composition of 'elementary constituents' and from their interactions with one another. It should be emphasised that this hierarchical arraying and the laws applicable in the various domains are context dependent. This context dependence of physical laws is made explicit when one recognises that at the earliest stage of the evolution of our universe, the table of nuclides registered only the following elements: protons, deuterons and helium. Similarly, there are no laws of chemistry in stellar interiors: nuclei are fully ionised at the ambient temperatures within stars. One can therefore look upon the laws of physics as more foundational (not fundamental) than chemistry, because

(a) when the context is properly taken into account the laws of physics encompass 'in principle' the phenomena and the laws of chemistry, and

(b) the laws of physics are more general than those of chemistry; that is, those of chemistry are valid under more special conditions than those of physics (see Gell-Mann, 1994).

Primas has consistently drawn attention to the context dependence of physical laws, and has elaborated an approach to describing that context dependence and the emergent properties that can be realised under those conditions, given the foundational theory. His method is formulated in terms of a topological



22 When the chemical system of interest is one in which no chemical changes are presumed to occur and the molecules which are presumed to constitute it remain unaltered, their characterisation by means of non-relativistic quantum mechanics entails just the same procedure that would be used to characterise any physical system. A possible characterisation of 'molecules' is to consider them entities whose constituents have relative distances that do not exceed some stipulated values, irrespective of the total energy of the combination.
23 For an overview of the history of quantum chemistry see Gavroglu and Simões (1994, 1997, 1999, 2001a,b), Simões (1993, 2001) and Simões and Gavroglu (2000, 2001). For an overview of the methods of computational chemistry see Wilson (1986).

characterisation of the context: in a mathematically formalised theory Primas introduces 'contexts' by choosing coarser topologies which are compatible with the finer topology of the more foundational theory (Primas, 1998).

We shall take a more pedestrian approach to ascertain the relationship between the chemists' modelling of their world and that given by the physicists' foundational theory of that domain, which, to the degree of approximation we shall be working with, we take to be non-relativistic quantum mechanics. Our aim is to understand the coarse graining that allows the visual structural representations of the chemists to emerge and be useful. Thus the question we wish to address is the relationship between the foundational physical viewpoint and what can be viewed as a foundational chemical viewpoint. While a system considered from a foundational physical viewpoint can be regarded as a collection of elementary entities that remain unchanged with the passage of time, the same system considered from a foundational chemical viewpoint is described in terms of constituent molecules, regarded as suitable collections of fundamentally unchanging constituents, the combinations of which can undergo changes in composition. The chemical description of a system thus corresponds to a model that involves a conceptually different set of foundational entities to describe chemical behaviour than does the model yielding a physical description of the same system to describe physical behaviour. Everyone agrees that chemical systems are reducible conceptually to physical systems and that the chemical description must be consistent with the physical one; the question is whether the chemical description is derivable from the physical description.22

4. Chemistry: The Born-Oppenheimer Approximation23

One of the characteristic properties of matter in our terrestrial environment is that to a good approximation it can be described in terms of the positions and velocities of fairly well-defined constituent entities. For molecules and simple solids these constituent entities are often simply related to their atomic constituents. Recall that one of the great triumphs of the old quantum theory was accounting for the rotational and vibrational spectra of a large class of polyatomic molecules considered as quasi-rigid structures (Dennison, 1925, 1926).



24 There is an enormous literature on the Born-Oppenheimer approximation. It probably is only valid for modelling the ground state and low-lying rotational and vibrational states of a molecule. See Primas (1981b) for an insightful discussion; we have learned much from this presentation.
25 The shape of a molecule can be defined in a manner that is susceptible to mathematical analysis. Thus it is supposed that the shape of a molecule or a solid is apprehended by the shape of a finite subset of its points, which is usually taken to be the equilibrium positions of its constituent nuclei (modulo rotations about the centre of mass).

According to non-relativistic quantum mechanics the stability of matter (when the interaction between the constituent entities is taken to be just the Coulomb interaction) is a consequence of:

(a) the exclusion principle for the electrons;
(b) the smallness of the ratio of the electron mass to the mass of the nuclei: m/M ≈ 10⁻⁴.

The smallness of the value of m/M is the basis of the Born-Oppenheimer approximation, the standard wave mechanical approach to 'explaining' chemical structure.24 That molecules have a structure has been taken by chemists as an essentially undisputed fact since the 1870s; X-ray diffraction cemented the idea, and generations of chemists have been raised on the assumption that this is uncontroversial. What is meant by attributing a structure and a shape to a molecule is to say that in specified contexts it makes sense to think of the constituents of a molecule as forming a regular geometrical figure.25 Furthermore, if the substructural entities are taken to be atoms, it is often meaningful to join certain of them by (one or more) lines, which represent the chemical bonds between them. Thus a methane molecule is usually represented as a carbon atom located at the centroid of a regular tetrahedron and a hydrogen atom at each of the vertices.

Recall briefly first the case of a diatomic molecule. If one assumes that the translational motion of the molecule as a whole has been separated out, then to a good approximation (neglecting terms of order m/M) the Schrödinger equation for the internal motion is given by

$$\left(-\frac{\hbar^2}{2M}\nabla_R^2 - \frac{\hbar^2}{2m}\sum_{i=1}^{n}\nabla_i^2 + V\right)\Psi(\mathbf{r}_1,\ldots,\mathbf{r}_n,\mathbf{R}) = E\,\Psi(\mathbf{r}_1,\ldots,\mathbf{r}_n,\mathbf{R}), \qquad (2)$$

where Ψ is the total electronic and nuclear wave function, E the total energy, M the reduced mass of the nuclei in their relative motion, and

$$V = V_{ee} + V_{eN} + V_{NN},$$

$$V_{ee} = \sum_{i<j}\frac{e^2}{r_{ij}}, \qquad V_{eN} = -\sum_{i=1}^{n}\frac{Z_A e^2}{r_{iA}} - \sum_{i=1}^{n}\frac{Z_B e^2}{r_{iB}}, \qquad V_{NN} = \frac{Z_A Z_B e^2}{R}, \qquad (3)$$



26 Note that ψ({r_i}, R) must be properly antisymmetrised in the electron coordinates to be an acceptable wave function.
27 Nor is it likely that a foundational theory for chemical reactions should be expressed in terms of an adiabatic model. Recently, Feynman's path-integral approach to quantum theory has been used to consider non-adiabatic approximations. In addition to its application to dynamical problems, this formulation has been widely used to obtain a variety of properties of physical systems in quantum-statistical-thermodynamic equilibrium (Roepstorff, 1994) and has been shown to enable the construction of approximations to the quantum-mechanical partition functions that are rigorous bounds of the exact entities. It is evidently capable in principle of providing rigorous lower bounds for ground-state energies of molecular systems, although these do not appear to have been determined computationally as yet. We thank Prof. Sidney Golden for pointing this out to us.

with V_ee the electron-electron repulsion potential, V_eN the electron-nuclear attraction energy, and V_NN the nuclear-nuclear repulsion energy (see for example Karplus and Porter, 1970, pp. 454ff.). The wave function Ψ is a function of the electronic coordinates and of the internuclear separation: Ψ = Ψ({r_i}, R). By virtue of the fact that m/M ≪ 1, nuclear and electronic motion have very different time scales. Born and Oppenheimer (1927) suggested that for the 'fast' electronic motion the nuclei could be considered as essentially not moving. Therefore for the electrons one first solves the Schrödinger equation with R fixed (but for all possible values of R):

$$\left(-\frac{\hbar^2}{2m}\sum_{i=1}^{n}\nabla_i^2 + V(\mathbf{r},\mathbf{R})\right)\psi(\mathbf{r},\mathbf{R}) = E(\mathbf{R})\,\psi(\mathbf{r},\mathbf{R}). \qquad (4)$$

After solving for ψ({r_i}, R),26 assume that Ψ({r_i}, R) can be approximated by

$$\Psi(\{\mathbf{r}_i\},\mathbf{R}) = \psi(\{\mathbf{r}_i\},\mathbf{R})\,\chi(\mathbf{R}). \qquad (5)$$

Upon substituting this wave function into equation (2), and if, since the electronic wave function ψ({r_i}, R) varies slowly with internuclear distance, one assumes that

$$\nabla_R^2\,[\psi(\{\mathbf{r}_i\},\mathbf{R})\,\chi(\mathbf{R})] \approx \psi(\{\mathbf{r}_i\},\mathbf{R})\,\nabla_R^2\,\chi(\mathbf{R}), \qquad (6)$$

one finds that χ(R) satisfies the following equation:

$$\left(-\frac{\hbar^2}{2M}\nabla_R^2 + E(R)\right)\chi(\mathbf{R}) = E\,\chi(\mathbf{R}), \qquad (7)$$

where E(R) is the eigenvalue determined from the solution of equation (4). This equation is readily solved by setting

$$\chi(\mathbf{R}) = Y_{lm}(\theta,\varphi)\,R(R). \qquad (8)$$

Although some fairly rigorous justification can be given for the approximations made when calculating the ground state of a diatomic molecule, it is not likely that these approximations are justifiable for the excited states.27

When E(R) corresponds to a stable configuration, with short-distance repulsion and an attractive basin around some distance R_e (such as in the case of the triplet electronic state of a hydrogen molecule), E(R) is usually approximated by the first terms of its Taylor expansion around R_e, and the resulting Schrödinger equation is then readily solved to yield the vibrational energies and the vibrational states R_vl(R) of the nuclear motion (see Karplus and Porter, 1970, p. 475). It is however important to recognise that a solution of the form (8) does not attribute to the molecule a definite orientation, a prerequisite for attributing to it a shape.
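The Taylor-expansion step can be made concrete with a toy numerical check (ours; the Morse form and all parameter values are merely illustrative stand-ins for a computed E(R)): the curvature E''(R_e) at the minimum is what fixes the harmonic vibrational frequency ω = √(E''(R_e)/M) when equation (7) is solved with the quadratic approximation to E(R):

```python
# Sketch of the second-order expansion of E(R) about its minimum R_e, with
# a hypothetical Morse curve standing in for the electronic eigenvalue E(R)
# of equation (4).  For a Morse potential the curvature at the minimum is
# analytically E''(R_e) = 2 * D_E * A**2, which we check numerically.

import math

D_E, A, R_E = 4.7, 1.9, 0.74  # illustrative parameters (eV, 1/angstrom, angstrom)

def morse(r):
    """Stand-in for the Born-Oppenheimer potential curve E(R)."""
    return D_E * (1.0 - math.exp(-A * (r - R_E))) ** 2

def second_derivative(f, x, h=1e-4):
    """Central-difference estimate of f''(x)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

k_numeric = second_derivative(morse, R_E)   # curvature at the minimum
k_exact = 2.0 * D_E * A ** 2                # analytic Morse result
```

Dividing this curvature by the reduced nuclear mass M and taking the square root gives the harmonic vibrational frequency of the nuclear motion.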

To get an insight into how shape emerges consider a macroscopic object. Theequations governing the atoms of the object are rotationally symmetric, buta solution of these equations representing the macroscopic object has a de"niteorientation in space. How is the symmetry broken? Spontaneous symmetrybreaking occurs only for systems with in"nitely many degrees of freedom. Theappearance of broken symmetry for a macroscopic object arises because it hasa macroscopic moment of inertia I, so that its ground state is part of a tower ofrotationally excited states whose energies are separated by only very, very tinyamounts, of order h2/I. This gives the state vector of the macroscopic body anextreme sensitivity to external perturbations; even very weak external "elds willshift the energy by much more than the energy di!erence of these rotationallevels. In consequence, any rotationally asymmetric external "eld will cause theground state or any other state of the macroscopic object with de"nite angularmomentum quantum numbers rapidly to develop components with other angu-lar momentum quantum numbers. The states of the object which are relativelystable with respect to small external perturbations are not those with de"niteangular momentum quantum numbers, but rather those with a de"nite orienta-tion, in which the rotational symmetry of the underlying theory is broken (seeWeinberg, 1996, Vol. 2, pp. 164}165; Brown and Cao, 1991).

For the case of a molecule a similar thing happens: the environment is responsible for giving the molecule its shape. If the diatomic molecule is in an environment at temperature T, the appropriate (non-pure) state vector describing it will be of the form

$$\sum_{v,l,m} C^{T}_{vl}\,\psi(\mathbf{r},\mathbf{R})\,Y_{lm}(\theta,\varphi)\,R_{vl}(R). \qquad (9)$$

Thus only when the context is taken into account can we attribute a shape to the molecule (see Primas, 1981b).

These considerations can be generalised to apply to polyatomic molecules. As is well known, there exists by now a huge literature concerning this subject, in particular regarding ab initio computations of chemical properties. We shall be primarily interested in the limits of the 'foundational' theory and its relation to the conventional 'chemical' descriptions. We take the Hamiltonian which describes a polyatomic molecule to be

$$H = -\frac{\hbar^2}{2}\sum_{a=1}^{N}\frac{1}{M_a}\nabla_a^2 - \frac{\hbar^2}{2m}\sum_{i=1}^{n}\nabla_i^2 + V_{nn} + V_{en} + V_{ee}, \qquad (10)$$



28 The Born-Oppenheimer approximation gives lower bounds for the resulting adiabatic ground state total energies of molecules. A later theory developed by Born, known as the Born-Huang adiabatic approximation (1954), gives upper bounds for the resulting adiabatic ground-state total energies of molecules. An essential feature of the theory is that accidental degeneracies of electronic energies are possible and cannot be avoided by merely carrying out the adiabatic approximation exactly when they occur. The degeneracies themselves are not observable experimentally, but give rise to the so-called Jahn-Teller effect that can modify the spatial structures of molecules from those that would be expected to occur in the absence of degeneracy. The degeneracies are removed by more extended computations of the electronic energy than the strictly adiabatic one, because there remain terms of the total molecular Hamiltonian, identifiable as non-adiabatic coupling between electronic and nuclear motion, that serve to remove them.

where a runs over the N nuclei, i over the electrons, and the V's, as above, are the potentials between the various charges (see e.g. Tinkham, 1964; Sprik, 1993). For the ground state (and perhaps the lowest vibrational and rotational states), assuming the validity of the Born–Oppenheimer approximation, one attempts to solve a Schrödinger equation for the electrons with the nuclei clamped at the positions $R_a$, $a = 1, 2, \ldots, N$, and one tries to determine properly antisymmetrised electronic wave functions that satisfy the equation

$$\left(-\frac{\hbar^{2}}{2m}\sum_{i=1}^{n}\nabla_{i}^{2} - e^{2}\sum_{i=1}^{n}\sum_{a=1}^{N}\frac{Z_{a}}{|r_{i}-R_{a}|} + e^{2}\sum_{i<j}\frac{1}{|r_{i}-r_{j}|}\right)\psi(r_{i}, R_{a}) = E(R_{a})\,\psi(r_{i}, R_{a}). \tag{11}$$

Again assuming that one can approximate $\Psi(\{r_{i}\}, \{R_{a}\})$ by $\psi(\{r_{i}\}, \{R_{a}\})\,\chi(\{R_{a}\})$, and that one can neglect the variation of $\psi(\{r_{i}\}, \{R_{a}\})$ with $R_{a}$, one finds that the potential function that determines the nuclear motion is

$$u_{BO}(R_{a}) = e^{2}\sum_{a<b}\frac{Z_{a}Z_{b}}{|R_{a}-R_{b}|} + E(R_{a}). \tag{12}$$

One usually determines the ground state of the combined system by what is called 'relaxing the nuclear configuration', i.e. by finding the solutions of

$$\frac{\partial}{\partial R_{a}}\,u(R_{a}) = 0, \qquad a = 1, \ldots, N. \tag{13}$$

This equation has numerous distinct locally stable solutions even for small values of N. Thus, if the use of the Born–Oppenheimer approximation could be justified, this approximation becomes the explanation of both the stability and the diversity of molecular structure, and gives support to the intuitive picture of molecular structure determined by the positions of the nuclei.28 It is likely that for the case of a protein (that is, a molecule with fairly large N and much larger n) the computational complexity of determining the positions of the nuclei from equation (13) is exponential in time on a Turing-type computer. The structure of a complex molecule like a protein can therefore probably not be predicted by a priori calculations! In fact, the equilibrium positions of the nuclei (in complex molecules such as proteins) are almost always empirically determined (e.g.

602 Studies in History and Philosophy of Modern Physics


29 In the case of simpler molecules the equilibrium positions can be, and often are, determined by fitting the theoretically determined absorption and/or emission spectra of radiation to the observed spectra.

through X-ray or neutron scattering experiments) and taken as inputs for determining the shape of the energy surface determined by equation (13).29 Furthermore, large ab initio calculations usually replace the nuclear potential and the chemically inert core electrons by pseudopotentials. In this model the valence electrons of atoms and molecules are treated dynamically, whereas the effect of the core electrons is approximated by potentials in which the valence electrons move (Szasz, 1985).
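The relaxation step of equation (13) can be sketched numerically. In the toy example below (our illustration, not the paper's Hamiltonian), a Lennard-Jones pair potential stands in for the Born–Oppenheimer surface $u_{BO}$, and a gradient-based optimiser searches for a locally stable arrangement of the nuclei; different starting guesses generally relax to different locally stable solutions:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch of 'relaxing the nuclear configuration' (eq. (13)):
# a gradient-based search for a locally stable arrangement of N nuclei.
# A Lennard-Jones pair potential is a hypothetical stand-in for u_BO.

def u(flat_R, N):
    """Total pair energy of N particles at positions flat_R (length 3N)."""
    R = flat_R.reshape(N, 3)
    energy = 0.0
    for a in range(N):
        for b in range(a + 1, N):
            r = np.linalg.norm(R[a] - R[b])
            energy += 4.0 * (r**-12 - r**-6)
    return energy

# Relax four nuclei starting from a square of side 1.2; other starting
# configurations would land in other local minima of the surface.
N = 4
start = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0],
                  [0.0, 1.2, 0.0], [1.2, 1.2, 0.0]]).ravel()
res = minimize(u, start, args=(N,))
```

Even for N = 4 the surface has several distinct stationary configurations (planar and tetrahedral), which is the small-scale analogue of the combinatorial explosion of minima that makes the protein case intractable.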

A related question is the following: how does one justify the Newtonian model of polyatomic molecules in which one describes the motion of the nuclei classically by the equations of motion

$$M_{a}\frac{d^{2}R_{a}}{dt^{2}} = -\nabla_{a}u_{BO}(R_{a}), \tag{14}$$

which are taken as the starting point for determining the properties of large biological molecules (see Höltje and Folkers, 1997)? In these (computer-based) calculations the potential in equation (14) is further approximated by assuming that the forces between the nuclei are local, oriented in space, and harmonic, i.e. Hooke-like.
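A minimal sketch of equation (14) under the harmonic, Hooke-like approximation just described (our illustration, with hypothetical parameters): two nuclei joined by a spring, integrated with the velocity-Verlet scheme commonly used in such computer-based calculations:

```python
import numpy as np

# Minimal sketch of eq. (14) with a harmonic ('Hooke-like') force field:
# two nuclei coupled by a spring of stiffness K and equilibrium length R0,
# integrated with velocity Verlet.  All parameters are illustrative.

K, R0 = 1.0, 1.0

def force(x):
    """1-D forces on two particles from a single harmonic bond."""
    stretch = (x[1] - x[0]) - R0
    return np.array([K * stretch, -K * stretch])

def verlet(x, v, m, dt, steps):
    """Velocity-Verlet integration of M d^2R/dt^2 = -grad u."""
    f = force(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * (f / m) * dt**2
        f_new = force(x)
        v = v + 0.5 * (f + f_new) / m * dt
        f = f_new
    return x, v

m = np.array([1.0, 1.0])
x, v = verlet(np.array([0.0, 1.2]), np.zeros(2), m, dt=0.01, steps=1000)
```

The scheme conserves the total (kinetic plus spring) energy to high accuracy over many vibrational periods, which is why it is the workhorse integrator for biomolecular force fields.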

The above approaches to understanding molecular structure are framed in the wave-mechanical formulation of quantum mechanics and ascribe properties to wave functions that translate the intuitions obtained from models of molecules developed prior to the advent of quantum mechanics, such as ascribing a fairly well-defined structure to the molecule. In important, seminal papers and in his book, Chemistry, Quantum Mechanics and Reductionism, Primas (1981a,b) has indicated some of the general features of the process of going from the foundational theory of equation (10) to the description that attributes definite positions to the nuclei and considers them classical (commuting) variables. What is involved in obtaining this emergent description is best exhibited by considering the equations of motion for the electronic and nuclear variables in the Heisenberg representation. The Hamiltonian in terms of these variables is

$$H = \frac{1}{2m}\sum_{i} P_{i}^{2} + \sum_{a}\frac{1}{2M_{a}} P_{a}^{2} + V_{ee}(q) + V_{NN}(Q) + V_{eN}(q, Q). \tag{15}$$

To investigate the limit $\varepsilon = (m/M)^{1/2} \to 0$, Primas rescaled the nuclear variables:

$$\tau = \varepsilon t, \qquad K_{a} = \varepsilon^{2} M_{a}, \qquad X_{a}(\tau) = Q_{a}(\tau/\varepsilon), \qquad Y_{a}(\tau) = \varepsilon P_{a}(\tau/\varepsilon), \tag{16}$$


so that

$$[X_{ai}(\tau),\, Y_{a'j}(\tau)] = i\hbar\,\varepsilon\,\delta_{aa'}\,\delta_{ij}. \tag{17}$$

In the limit $\varepsilon \to 0$, i.e. in the limit in which the masses of the nuclei are considered infinite, the electronic equations of motion become autonomous:

$$\frac{d}{dt}\, q_{i}(t) = \frac{1}{m}\, p_{i}(t), \qquad \frac{d}{dt}\, p_{i}(t) = -\frac{\partial}{\partial q_{i}(t)}\bigl(V_{ee}(q(t)) + V_{eN}(q(t), X(\varepsilon t))\bigr), \tag{18}$$

where in the limit $\varepsilon \to 0$, $X(\varepsilon t) = X(0)$. The nuclear variables satisfy the equations

$$\frac{d}{d\tau}\, X_{a}(\tau) = \frac{1}{K_{a}}\, Y_{a}(\tau), \qquad \frac{d}{d\tau}\, Y_{a}(\tau) = -\frac{\partial}{\partial X_{a}(\tau)}\bigl(V_{NN}(X(\tau)) + V_{eN}(q(\tau/\varepsilon), X(\tau))\bigr). \tag{19}$$

The standard way to handle this multi-time-scale equation is to use the Bogoliubov and Mitropolsky (1961) perturbation theory, which averages the equation over a period of the fast motion. Primas defines this averaging to be the average over the electronic degrees of freedom with respect to some stationary state described by a density matrix $\rho_{e}$. This coarse graining loses information, a required step in obtaining the 'emergent' description, and results in a description of the nuclei by variables X and Y that commute for all times $\tau, \tau'$; that is, they behave like Newtonian point particles moving on definite trajectories, thus intuitively justifying the approach of equation (14). The quantum-mechanical treatment for finite masses of the nuclei requires consideration of the case $\varepsilon \neq 0$. The expansion in $\varepsilon$ is an asymptotic one, the point $\varepsilon = 0$ being singular. The picture that emerges is one that describes the nuclei as Newtonian point particles with definite positions and momenta at each instant, but 'they fluctuate very rapidly (on a time scale much more rapid than the collective nuclear motions like translations, vibration or rotation) so that the trajectories are only stochastically knowable' (Primas, 1981b, p. 341).
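The averaging idea can be illustrated with a toy fast-slow system (our example, not Primas' calculation): a slow variable X is driven through a rapidly oscillating coefficient, and replacing that coefficient by its mean over the fast period yields the effective coarse-grained equation, with the two solutions agreeing as $\varepsilon \to 0$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy illustration of averaging over the fast motion: X is driven through
# the rapidly oscillating coefficient cos(t/eps)**2.  Averaging over the
# fast period replaces cos**2 by its mean 1/2, so the effective slow
# equation is dX/dt = -X/2, with solution exp(-t/2) from X(0) = 1.
eps = 1e-2

sol = solve_ivp(lambda t, X: -X * np.cos(t / eps) ** 2,
                (0.0, 1.0), [1.0], rtol=1e-8, atol=1e-10, max_step=eps)
X_full = sol.y[0, -1]       # full fast-slow dynamics at t = 1
X_averaged = np.exp(-0.5)   # averaged (coarse-grained) prediction
```

The discrepancy between the full and averaged solutions is of order $\varepsilon$, mirroring the asymptotic character of the expansion noted in the text.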

5. Epilogue

We have sketched some of the steps in modelling chemical structure to emphasise some of the 'foundational' aspects of the problem. The important feature is that no one questions the quantum-mechanical underpinning. Nature has a hierarchical structure, with different time, energy, and length scales, and


30 See the articles by Coker, and by Hilbers and Esselink, in Allen and Tildesley (1993).

it is possible (up to some degree of accuracy) to investigate and describe these levels independently. The lesson from condensed matter physics and chemistry is that in the microscopic domain, where the context is such that (non-radioactive) nuclei and electrons can be considered stable, ahistoric objects, there are no new foundational laws, only new phenomena, and the challenge is to link the descriptions and representations of the hierarchical levels (see Lebowitz, 1999). If in the future greater experimental accuracy requires that relativistic effects be taken into account to explain some structural features (e.g. the behaviour of some of the core electrons; see Wilson (1988)), that would not entail a challenge to the 'foundational' assumptions. It would merely change the Hamiltonian used in the description, or the nature of the approximations.

We have outlined Primas' approach to emphasise that going from the physical foundational theory to the more intuitive chemical representations that describe the nuclei as located at more or less fixed positions entails taking the context and the experimental situation into account, and that these must be invoked in justifying the averaging procedures that yield the classical chemical structural models (for relatively simple, medium-sized molecules). Given that the foundational theory and the foundational ontology for the domain have been stabilised, the problem at hand is to see how far the constructionist programme can be implemented. In this enterprise the computer has been an essential and invaluable tool. Ab initio quantum-mechanical calculations for molecules and solids are now predictive and are regularly used as instruments for addressing structural problems in condensed matter physics, molecular chemistry and materials science. Computational chemistry and physics have become an integral part of these branches of the physical sciences. Computational modelling constitutes a new style of scientific reasoning. But it must always be kept in mind that the modelling depends on approximations made at the foundational level. Thus, as the number of electrons and nuclei increases, processes such as chemical reactions resist easy description: a theory that is quantitatively correct for all molecular states and all nuclear configurations is required. Progress in this area will depend on progress both in modelling (and therefore on which formulation of the foundational theory is used: Feynman, Heisenberg, or Schrödinger) and in computer technology.30

In view of the impressive cognitive advances in (computational) condensed matter physics, quantum chemistry and materials science, and of the sociological restructuring of these fields, we believe that we are justified in asserting that we are in the midst of a Hacking-type revolution that is integrally connected with the powers of the modern computer. The essential manifestation of this revolution is that modelling and simulation in these fields have achieved a qualitatively new level of effectiveness, ubiquity and authority. In physics and chemistry this revolution is connected with the stabilisation of quasi-autonomous ontological levels for which foundational theories not only provide


31 This aspect has been stressed by Norton Wise. The point has also been made by Eric Francoeur (2000) and was communicated to us by Steve Weininger.

explanations (to a high degree of accuracy), but are also able to predict (to a high degree of accuracy) the structure and properties of composite systems made up from the 'elementary' constituents of that domain. However, though foundational, the microscopic theory that describes the dynamics of the 'elementary' constituents (non-relativistic quantum mechanics) is not capable, because of computational complexity, of 'reconstructing' ab initio all possible entities, systems and processes within its domain of validity without the aid of empirical input. This is surely an important conclusion that merits further investigation.

There is another aspect of the Hacking revolution, and one of its most consequential by-products, which we have not addressed.31 The effectiveness of computer simulations such as those of the conformations of large proteins and their transitions derives in great part from their being visual and interactive. Many of the researchers doing such simulations declare that they have the sense of manipulating the 'thing' itself. The interactive and visual character of the simulation gives the modelling an almost tactile quality and allows model and reality to become confounded in a way not experienced previously. This is surely a new perception of the notion of a scientific model.

Acknowledgements. We would like to thank Sunny Auyang, Sidney Golden, Evelyn Fox Keller, Alfred Redfield, Hugh Pendleton, Norton Wise and Steve Weininger for useful and helpful comments, and especially Hans Primas for stimulating discussions.

References

Abraham, R. (1991) 'Complex Dynamical Systems Theory: Historical Origins, Contemporary Applications', in E. Laszlo (ed.), The New Evolutionary Paradigm (New York: Gordon and Breach), pp. 1–31.

Achinstein, P. (1968) Concepts of Science: A Philosophical Analysis (Baltimore: Johns Hopkins Press).

Allen, M. P. and Tildesley, D. J. (eds) (1993) Computer Simulation in Chemical Physics (Dordrecht: Kluwer Academic Publishers).

Allen, T. F. H. and Starr, T. B. (1982) Hierarchy: Perspectives for Ecological Complexity (Chicago: University of Chicago Press).

Anderson, P. W. (1972) 'More is Different', Science 177, 393–396.

Auyang, S. (1998) Foundations of Complex System Theories (Cambridge: Cambridge University Press).

Bernholc, J. (1999) 'Computational Materials Science: The Era of Applied Quantum Mechanics', Physics Today 52(9), 30–35.

Bogoliubov, N. N. and Mitropolsky, Y. A. (1961) Asymptotic Methods in the Theory of Non-linear Oscillations (New York: Gordon and Breach).

Born, M. and Oppenheimer, J. R. (1927) 'Zur Quantentheorie der Molekeln', Annalen der Physik 84, 457–484.

Brattsev, V. F. (1965) 'Ground State Energy of a Molecule in the Adiabatic Approximation', Soviet Physics-Doklady 10, 44–54.


Brown, L. M. and Cao, T. Y. (1991) 'Spontaneous Breakdown of Symmetry: Its Rediscovery and Integration into Quantum Field Theory', Historical Studies in the Physical and Biological Sciences 21, 211–235.

Cartwright, N. (1983) How the Laws of Physics Lie (Oxford: Clarendon Press).

Ceperley, D. M. (1999) 'Microscopic Simulations in Physics', Reviews of Modern Physics 71, 438–443.

Cipra, B. (2000) 'Statistical Physicists Phase out a Dream', Science 288, 1261–1262.

Dennison, D. (1925) 'The Molecular Structure and the Infrared Spectrum of Methane', Astrophysical Journal 62, 84–103.

Dennison, D. (1926) 'On the Analysis of Certain Molecular Spectra', Philosophical Magazine 1, 195–218.

Dirac, P. A. M. (1929) 'Quantum Mechanics of Many-Electron Systems', Proceedings of the Royal Society (London) A 126, 714–723.

Dreizler, R. M. and Gross, E. K. U. (1990) Density Functional Theory (Berlin: Springer).

Edwards, P. (1996) The Closed World (Cambridge, MA: MIT Press).

Epstein, S. T. (1966) 'Ground State Energy of a Molecule in the Adiabatic Approximation', Journal of Chemical Physics 44, 836–837; 'Erratum', Journal of Chemical Physics 44, 4062.

Fischer, K. H. and Hertz, J. A. (1991) Spin Glasses (Cambridge: Cambridge University Press).

Flake, G. W. (1999) The Computational Beauty of Nature (Cambridge, MA: MIT Press).

Fortun, M. and Schweber, S. S. (1993) 'Scientists and the State: The Legacy of World War II', Social Studies of Science 23, 595–642.

Francoeur, E. (2000) 'Beyond Dematerialization and Inscription', Hyle 6(1), 1–19.

Galison, P. (1996) 'Computer Simulation and the Trading Zone', in P. Galison and D. J. Stump (eds), The Disunity of Science (Stanford: Stanford University Press), pp. 118–157.

Galison, P. (1998) Image and Logic (Chicago: The University of Chicago Press).

Garey, M. R. and Johnson, D. S. (1979) Computers and Intractability (New York: Freeman).

Gavroglu, K. and Simões, A. (1994) 'The Americans, the Germans and the Beginnings of Quantum Chemistry. The Confluence of Diverging Traditions', Historical Studies in the Physical Sciences 25, 47–110.

Gavroglu, K. and Simões, A. (1997) 'Different Legacies and Common Aims: Robert Mulliken, Linus Pauling and the Origins of Quantum Chemistry', in J.-L. Calais and E. S. Kryachko (eds), Conceptual Perspectives in Quantum Chemistry (Dordrecht: Kluwer Academic Publishers), pp. 383–413.

Gavroglu, K. and Simões, A. (1999) 'Quantum Chemistry qua Applied Mathematics. The Contributions of Charles Alfred Coulson (1910–1974)', Historical Studies in the Physical Sciences 29, 363–406.

Gavroglu, K. and Simões, A. (2001a) 'Preparing the Ground for Quantum Chemistry in Great Britain: The Contributions of the Physicist R. H. Fowler and the Chemist N. V. Sidgwick', submitted for publication in British Journal for the History of Science.

Gavroglu, K. and Simões, A. (2001b) 'The Role of Meetings in the Making of Quantum Chemistry, 1923–1977', submitted for publication in Physics in Perspective.

Gell-Mann, M. (1994) The Quark and the Jaguar: Adventures in the Simple and the Complex (New York: Freeman).

Georgi, H. (1989) 'Effective Quantum Field Theories', in P. Davies (ed.), The New Physics (Cambridge: Cambridge University Press), pp. 446–457.

Hacking, I. (1981) 'From the Emergence of Probability to the Erosion of Determinism', in J. Hintikka, D. Gruender and E. Agazzi (eds), Probabilistic Thinking, Thermodynamics and the Interaction of the History and Philosophy of Science. Proceedings of the 1978 Pisa Conference on the History and Philosophy of Science, 2 Vols (Dordrecht: Reidel), Vol. II, pp. 105–123.

Hacking, I. (1992) '"Style" for Historians and Philosophers', Studies in History and Philosophy of Science 23, 1–20.


Heisenberg, W. (1974) 'The Notion of a Closed Theory in Modern Physics', in W. Heisenberg, Across the Frontiers (New York: Harper and Row), pp. 39–46.

Herman, F., McLean, A. D. and Nesbet, R. K. (eds) (1973) Computational Methods for Large Molecules and Localized States in Solids (New York: Plenum Press).

Hesse, M. (1966) Models and Analogies in Science (Notre Dame: University of Notre Dame Press).

Höltje, H. D. and Folkers, G. (1997) Molecular Modeling: Basic Principles and Applications (New York: Weinheim).

Kaplan, I. (1987) Theory of Intermolecular Interactions (Amsterdam: Elsevier).

Karplus, M. and Porter, R. N. (1970) Atoms and Molecules (Reading, MA: Benjamin/Cummins).

Keller, E. F. (2000a) 'Models of, Models for', Philosophy of Science 67, Supp. Proceedings of the 1998 Philosophy of Science Association Annual Meeting.

Keller, E. F. (2000b) 'Models, Simulation, and Computer Experiments', paper presented at a conference on scientific experimentation, Amsterdam, 15–17 June 2000.

Kitaura, K. (1998) 'Molecular Recognition and Self-Regulation', in S. Nagakura (ed.), Functionality of Molecular Systems, Vol. 1 (Berlin: Springer), pp. 95–109.

Lawley, K. P. (ed.) (1987) 'Ab Initio Methods in Quantum Chemistry', in Advances in Chemical Physics, Vol. LXVII (New York: Wiley).

Lebowitz, J. (1999) 'Statistical Mechanics: A Selective Review of Two Central Issues', Reviews of Modern Physics 71, 346–347.

Leigh, E. G. Jr. (1991) 'Levels of Selection, Potential Conflicts, and their Resolution: The Role of the "Common Good"', in L. Keller (ed.), Levels of Selection in Evolution (Princeton: Princeton University Press), pp. 15–30.

Loewenstein, W. R. (1999) The Touchstone of Life (New York: Oxford University Press).

MacKenzie, D. (1996) Knowing Machines (Cambridge, MA: MIT Press).

Marina, J. (ed.) (1988) Molecules in Physics, Chemistry, and Biology (Dordrecht: Kluwer Academic).

Morrison, M. (1998) 'Modelling Nature: Between Physics and the Physical World', Philosophia Naturalis 35, 64–85.

Morrison, M. and Morgan, M. (1999) Models as Mediators (Cambridge: Cambridge University Press).

Nersessian, N. J. (1999) 'Model-Based Reasoning in Conceptual Change', in L. Magnani, N. J. Nersessian and P. Thagard (eds), Model-Based Reasoning in Scientific Discovery (New York: Kluwer Academic/Plenum Publishers), pp. 5–22.

Pople, J. A. (1999) 'Nobel Lecture: Quantum Chemical Models', Reviews of Modern Physics 71, 1267–1274.

Primas, H. (1981a) 'Foundations of Theoretical Chemistry', in R. G. Wooley (ed.), Quantum Dynamics of Molecules (New York: Plenum Press), pp. 39–114.

Primas, H. (1981b) Chemistry, Quantum Mechanics and Reductionism (Berlin: Springer; second edition 1983).

Primas, H. (1998) 'Emergence in Exact Natural Sciences', Acta Polytechnica Scandinavica 91, 83–98.

Roepstorff, G. (1994) Path Integral Approach to Quantum Physics: An Introduction (Berlin: Springer).

Rohrlich, F. (1990) 'Computer Simulation in the Physical Sciences', in A. Fine, M. Forbes and L. Wessels (eds), Proceedings of the 1990 Biennial Meeting of the Philosophy of Science Association (East Lansing: Philosophy of Science Association), Vol. 2, pp. 507–517.

Schweber, S. S. (1989) 'Molecular Beam Experiments, the Lamb Shift, and the Relation Between Experiments and Theory', American Journal of Physics 57, 299–307.

Schwinger, J. (1983) 'Renormalization Theory for Quantum Electrodynamics', in L. Brown and L. Hoddeson (eds), The Birth of Particle Physics (Cambridge: Cambridge University Press), pp. 329–353.


Simões, A. (1993) Converging Trajectories, Diverging Traditions: Chemical Bond, Valence, Quantum Mechanics and Chemistry, 1927–1937 (University of Maryland, University Microfilms Inc.), Publication no. 9327498.

Simões, A. (2001) 'Chemical Physics and Quantum Chemistry in the Twentieth Century', in D. C. Lindberg and R. L. Numbers (eds), Cambridge History of Science, 8 Vols (Cambridge: Cambridge University Press), M. J. Nye (ed.), Volume 5, Modern Physical and Mathematical Sciences.

Simões, A. and Gavroglu, K. (2000) 'Quantum Chemistry in Great Britain: Developing a Mathematical Framework for Quantum Chemistry', Studies in History and Philosophy of Modern Physics 31, 511–548.

Simões, A. and Gavroglu, K. (2001) 'Issues in the History of Theoretical and Quantum Chemistry, 1927–1960', in C. Reinhardt (ed.), Bridging Boundaries. Chemical Sciences in the Twentieth Century (New York: Wiley).

Sismondo, S. (1999) 'Models, Simulations, and their Objects', Science in Context 12, 247–260.

Sprik, M. (1993) 'Effective Pair Potentials and Beyond', in Allen and Tildesley (eds), pp. 211–260.

Szasz, L. (1985) Pseudopotential Theory of Atoms and Molecules (New York: Wiley).

Tinkham, M. (1964) Group Theory and Quantum Mechanics (New York: McGraw Hill).

Ulam, S. (1976) Adventures of a Mathematician (New York: Scribner).

Weinberg, S. (1996) The Quantum Theory of Fields, Vol. 2 (New York: Cambridge University Press).

Wilson, S. (1986) Chemistry by Computer (New York: Plenum Press).

Wilson, S. (ed.) (1988) Relativistic Effects in Atoms and Molecules. Methods in Computational Chemistry, Vol. 2 (New York: Plenum Press).

Wolfram, S. (1985) 'Undecidability and Intractability in Theoretical Physics', Physical Review Letters 54, 735–738.
