
Journal of Earthquake Engineering, Vol. 6, Special Issue 1 (2002) 43–73 © Imperial College Press

DETERMINISTIC VS. PROBABILISTIC SEISMIC HAZARD ASSESSMENT: AN EXAGGERATED AND OBSTRUCTIVE DICHOTOMY

JULIAN J. BOMMER

Department of Civil and Environmental Engineering, Imperial College, London SW7 2BU, UK

Deterministic and probabilistic seismic hazard assessment are frequently represented as irreconcilably different approaches to the problem of calculating earthquake ground motions for design, each method fervently defended by its proponents. This situation often gives the impression that the selection of either a deterministic or a probabilistic approach is the most fundamental choice in performing a seismic hazard assessment. The dichotomy between the two approaches is not as pronounced as often implied and there are many examples of hazard assessments combining elements of both methods. Insistence on the fundamental division between the deterministic and probabilistic approaches is an obstacle to the development of the most appropriate method of assessment in a particular case. It is neither possible nor useful to establish an approach to seismic hazard assessment that will be the ideal tool for all situations. The approach in each study should be chosen according to the nature of the project and also be calibrated to the seismicity of the region under study, including the quantity and quality of the data available to characterise the seismicity. Seismic hazard assessment should continue to evolve, unfettered by almost ideological allegiance to particular approaches, with the understanding of earthquake processes.

Keywords: Seismic hazard; probabilistic seismic hazard assessment; deterministic seismic hazard assessment; seismic risk.

    1. Introduction

Seismic hazard could be defined, in the most general sense, as the possibility of potentially destructive earthquake effects occurring at a particular location. With the exception of surface fault rupture and tsunami, all the destructive effects of earthquakes are directly related to the ground shaking induced by the passage of seismic waves. Textbooks that present guidance on how to assess the hazard of strong ground-motions invariably present the fundamental choice facing the analyst as that between adopting a deterministic or probabilistic approach [e.g. Reiter, 1990; Krinitzsky et al., 1993; Kramer, 1996]. Statements made by proponents of the two approaches often imply very serious differences between deterministic and probabilistic seismic hazard assessment and reinforce the idea that the choice between them is one of the most important steps in the process of hazard assessment. This paper aims to show that this apparently diametric split between the two approaches is misleading and, more importantly, that it is not helpful to those faced

with the problem of assessing the hazard presented by earthquake ground motions at a site.

    2. DSHA VS. PSHA: An Exaggerated Dichotomy

Probabilistic seismic hazard assessment (PSHA) was introduced a little over 30 years ago in the landmark paper by Cornell [1968] and has become the most widely used approach to the problem of determining the characteristics of strong ground-motion for engineering design. Some, however, have challenged the approach and put up vociferous defence of deterministic seismic hazard assessment (DSHA), in turn soliciting firm responses from those favouring PSHA. The division between the two camps has been expressed in the scientific and technical literature, sometimes in terms reminiscent of political debate.a In some situations the division has become public, as for example when in October 2000 US newspapers reported the disagreements between Caltrans and the US Army Corps of Engineers regarding the choice of PSHA or DSHA in assessing design loads for the new eastern span of the Bay Bridge in San Francisco Bay. Hanks and Cornell [2001] have predicted a similar showdown around the seismic hazard assessment for the Yucca Mountain nuclear waste repository [Stepp et al., 2001]. All of this points to a seemingly irreconcilable split between DSHA and PSHA, which warrants closer examination.

    2.1. Determinism and probability in seismic hazard assessment

Reiter [1990] and Kramer [1996], currently the most widely consulted textbooks on seismic hazard analysis, describe DSHA in the same way. The basis of DSHA is to develop earthquake scenarios, defined by location and magnitude, which could affect the site under consideration. The resulting ground motions at the site, from which the controlling event is determined, are then calculated using attenuation relations; in some cases, there may be more than one controlling event to be considered in design.

The mechanics of PSHA are far less obvious than those of DSHA, with the result that there is often misunderstanding of many of the basic features. The excellent recent papers by Abrahamson [2000] and by Hanks and Cornell [2001] provide very useful clarification of many issues that have created confusion regarding PSHA. The essence of PSHA is to identify all possible earthquakes that could affect a site, including all feasible combinations of magnitude and distance, and to characterise the frequency of occurrence of different size earthquakes through a recurrence relationship. Attenuation equations are then employed to calculate the ground-motion parameters that would result at the site due to each of these earthquakes and hence

a The reader is referred, for example, to the acknowledgements in the paper by Krinitzsky [1998] and the response by Hanks and Cornell [2001].

the rate at which different levels of ground motion occur at the site is calculated. The design values of motion are then those having a particular annual frequency of occurrence.
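
To make the preceding description of PSHA concrete, the following is a minimal numerical sketch (not taken from the paper: the recurrence parameters, source geometry and ground-motion coefficients are all invented for illustration) of how the annual rate of exceedance of a ground-motion level is obtained by summing, over all magnitude-distance scenarios, the scenario rate multiplied by the probability that the scenario produces motion above that level.

# Minimal, illustrative PSHA sketch for a single areal source (not the method
# of any particular study; all numerical values below are invented).
import numpy as np
from scipy.stats import norm

# Hypothetical truncated Gutenberg-Richter recurrence model
m_min, m_max, a_val, b_val = 5.0, 7.5, 3.0, 1.0
mags = np.arange(m_min, m_max, 0.1)                 # magnitude bins

def rate_above(m):                                  # annual rate of events with M >= m
    return 10 ** (a_val - b_val * m)

bin_rates = rate_above(mags) - rate_above(mags + 0.1)

# Hypothetical distance distribution: site at the centre of a circular source
# of 50 km radius, epicentres uniform in area
dists = np.arange(5.0, 50.0, 5.0)
dist_weights = np.diff(np.insert(dists, 0, 0.0) ** 2)
dist_weights = dist_weights / dist_weights.sum()

# Hypothetical ground-motion prediction equation: ln PGA (g) with lognormal scatter
def ln_pga_median(m, r):
    return -3.5 + 0.8 * m - 1.1 * np.log(r + 10.0)
sigma_ln = 0.6

# Annual rate of exceedance of each PGA level: sum over all M-R scenarios of
# (scenario rate) x P[PGA > a | M, R], i.e. the PSHA integration
pga_levels = np.array([0.05, 0.1, 0.2, 0.3, 0.5])
hazard = np.zeros_like(pga_levels)
for m, lam_m in zip(mags, bin_rates):
    for r, w_r in zip(dists, dist_weights):
        p_exceed = 1.0 - norm.cdf(np.log(pga_levels),
                                  loc=ln_pga_median(m, r), scale=sigma_ln)
        hazard += lam_m * w_r * p_exceed

for a, lam in zip(pga_levels, hazard):
    print(f"PGA > {a:.2f} g: annual rate {lam:.2e}, return period {1/lam:.0f} yr")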

Common to both approaches is the very fundamental, and highly problematic, issue of identifying potential sources of earthquakes. Another common feature is the modelling of the ground motion through the use of attenuation relationships, more correctly called ground-motion prediction equations [D. M. Boore, written communication]. The principal difference between the two procedures, as described above, resides in those steps of PSHA that are related to characterising the rate at which earthquakes and particular levels of ground motion occur. Hanks and Cornell [2001] point out that the two approaches have far more in common than differences and that in fact the only difference is that a PSHA has units of time and DSHA does not. This is indeed a very fundamental distinction between the two approaches as currently practised: in a DSHA the hazard will be defined as the ground motion at the site resulting from the controlling earthquake, whereas in PSHA the hazard is defined as the mean rate of exceedance of some chosen ground-motion amplitude [Hanks and Cornell, 2001]. At this point it is useful to briefly define the different terms used to characterise probabilistic seismic hazard: the return period of a particular ground motion, $T_r(Y)$, is simply the reciprocal of the annual rate of exceedance. For a specified design life, $L$, of a project, the probability of exceedance of the level of ground motion (assuming that a Poisson model is adopted) is given by:

$$q = 1 - e^{-L/T_r}. \tag{1}$$

Once a mean rate of exceedance or probability of exceedance or return period is selected as the basis for design, the output of PSHA is then expressed in terms of a specified ground motion, in the same way as DSHA.
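
As a worked instance of Eq. (1), using the code values discussed later in Sec. 4.2: for a design life $L = 50$ years and a return period $T_r = 475$ years,

$$q = 1 - e^{-50/475} \approx 0.10,$$

i.e. a 475-year return period corresponds to roughly a 10% probability of exceedance in 50 years; conversely, $T_r = -L/\ln(1-q)$, so 10% in 50 years gives $T_r \approx 475$ years.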

Another important difference between the two approaches, which is discussed further in Sec. 3, is related to the treatment of hazard due to different sources of earthquakes. In PSHA, the hazard contributions of different seismogenic sources are combined into a single frequency of exceedance of the ground-motion parameter; in DSHA, each seismogenic source is considered separately, the design motions corresponding to a single scenario in a single source.

Regarding differences and similarities between the two methods, it is often pointed out that probabilities are at least implicitly present in DSHA in so far as the probability of a particular earthquake scenario occurring during the design life of the engineering project is effectively assigned as unity. An alternative interpretation is that within a framework of spatially distributed seismicity, the probability of occurrence of a deterministic scenario is, mathematically, zero [Hanks and Cornell, 2001], but in general the implied probability of one is a valid interpretation of the scenarios defined in DSHA. As regards the resulting ground motions, however, the probability depends upon the treatment of the scatter in the strong-motion prediction equation: if the median plus one standard deviation is

used, this will correspond to a motion with a 16-percent probability of exceedance, for the particular earthquake specified in the scenario. The probabilistic nature of ground-motion levels obtained from scaling relationships is reflected in the fact that in Japan "probabilistic" usually refers to ground motions obtained in this way from a particular scenario, as opposed to "deterministic" ground motions obtained by modelling of the fault rupture and wave propagation [Abrahamson, 2000].
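
The 16-percent figure follows directly from the assumed lognormal scatter: for a normally distributed $\ln Y$ with mean $\mu$ and standard deviation $\sigma$,

$$P[\ln Y > \mu + \sigma] = 1 - \Phi(1) \approx 0.16,$$

so, given that the scenario earthquake occurs, the median-plus-one-standard-deviation motion is exceeded about 16% of the time (and the median itself 50% of the time).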

It can equally be pointed out that any PSHA includes many deterministic elements in so much as the definition of nearly all of the input requires the application of judgements to select from a range of possibilities. This applies in particular to the definition of the geographical limits of the seismic source zones and the selection of the maximum magnitude. The deterministic nature of defining seismic source zones and the consequently great differences that can arise amongst the interpretations of different experts are well known [e.g. Barbano et al., 1989; Reiter, 1990].

In addition to the various parameters that define the physical model that is the basis for any PSHA, it could also be argued that another parameter, which has a pronounced influence on the input to engineering design, is also defined deterministically: the design probability of exceedance. This issue, which is of fundamental importance to the raison d'être of probabilistic seismic hazard assessment, is explored in Sec. 4.

2.2. From Cornell to kernel – which PSHA?

The nature of the conflict between proponents of PSHA and DSHA was previously likened to that between opposing political or religious ideologies, in which each side claims exclusive ownership of the truth. However, scratching the surface of opposing sides in ideological conflicts nearly always reveals the division to be far less clear, with divisions within each camp being often almost as pronounced as the fundamental ideological split itself. One only needs to look at the history of the Left in international politics, in which internal conflicts have often taken on a ferocity at least as great as that shown in the confrontation with the Right, for a clear example of an apparent dichotomy concealing a multitude of opinions and philosophies.b In this sense, the analogy remains useful for the split between the proponents of DSHA and PSHA: the defendants of PSHA often ignore the fact that there are many different methods of analysis that fall under the heading of probabilistic, and the proponents of each of these methods argue their case by pointing out shortcomings in the others.

The formal beginning of PSHA, as mentioned before, can be traced back to the classic paper by Cornell [1968]. Important developments included the development of the software EQRISK by McGuire [1976], with the result that the method is

b Hence the joke that if there are three Trotskyists gathered in a room there will be four political parties.

frequently referred to by the name Cornell–McGuire. A significant difference between EQRISK and the original formulation of Cornell [1968] was the inclusion of the influence of the uncertainty or scatter in the strong-motion prediction equation, subsequently explored by Bender [1984].

Two fundamental features of the Cornell–McGuire method are the definition of seismogenic source zones, as areas or lines, with spatially uniform activity and the assumption of a Poisson process to represent the seismicity, both of which have been challenged by different researchers who have proposed alternatives. For example, the use of Markov renewal chains has been proposed as an alternative probabilistic model for subduction zones with identified seismic gaps, to develop either slip-dependent [Kiremidjian and Anagnos, 1984] or time-dependent [Kiremidjian and Suzuki, 1987] estimates of hazard. There have also been numerous studies published giving short-term hazard estimates based on non-Poissonian seismicity, such as the forecast by Parsons et al. [2000] for hazard in the Sea of Marmara area following the 1999 Kocaeli and Düzce earthquakes.

Many alternatives to uniformly distributed seismicity within sources defined by polygons have been put forward, such as Bender and Perkins [1982, 1987] who proposed sources with smoothed boundaries, obtained by defining a standard error on earthquake locations. In an earlier study Peek et al. [1980] used fuzzy set theory to smooth the transitions between source boundaries.

There have also been proposals to do away with source zones altogether and use the seismic catalogue itself to represent the possible locations of earthquakes, an approach that may have been used before 1968. Such historic approaches can be non-parametric [Veneziano et al., 1984] or parametric [Shepherd et al., 1993; Kijko and Graham, 1998]; Makropoulos and Burton [1986] developed an approach using the earthquake catalogue to represent sources and Gumbel distributions. Recent adaptations of these zone-free methods include the approach based on spatially-smoothed historical seismicity of Frankel [1995] and the kernel method of Woo [1996], who alludes to "misgivings over zonation and, consequently, with the edifice of hazard computation built on zonation", the cornerstone of the Cornell–McGuire method. In these most recent methods, earthquake epicentres in the catalogue are smoothed, according to criteria related to their magnitude and recurrence interval, to form seismic sources.

The differences amongst these different approaches to PSHA are not simply academic: Bommer et al. [1998] produced hazard maps for upper-crustal seismicity in El Salvador determined using the Cornell–McGuire approach, two zone-free methods and the kernel method. The four hazard maps, prepared using exactly the same input, show very significant differences in the resulting spatial distribution of the hazard, and the maximum values of PGA vary amongst the four maps by a factor of more than two. Similarly divergent results obtained using the Cornell–McGuire and the Frankel [1995] approaches have been presented by Wahlström and Grünthal [2000] for Fennoscandia.

2.3. ... and which DSHA?

Just as there are many different approaches to PSHA, developed since the publication of Cornell [1968], there is not a single, established and universally accepted approach to DSHA, in part precisely because of the absence of a classic point of reference. Arguably, the recent paper by Krinitzsky [2001] could become the standard reference for DSHA that is currently missing from the technical literature. One important difference between DSHA as proposed by Krinitzsky [2001] and DSHA as described by Reiter [1990] and Kramer [1996] is that whereas the latter imply that the ground motions for each scenario should be calculated using median (50-percentile) values from strong-motion scaling relationships, Krinitzsky [2001] proposes the use of the median-plus-one-standard-deviation (84-percentile) values.

Confusion regarding the meaning of DSHA is created by the use of the word deterministic to describe scenarios obtained by deaggregation of PSHA, a subject discussed in Sec. 3. Romeo and Prestininzi [2000] obtain design earthquake scenarios by manipulation of magnitude-distance pairs found from deaggregation and refer to these as "deterministic reference events"; to add to the confusion, the paper also uses the term in the sense defined in Sec. 2.1, assigning the maximum magnitude earthquake to an active fault.

It is sometimes thought that DSHA is mainly applicable to site-specific hazard assessments, but there are examples of deterministic seismic hazard maps, such as those produced by the California Division of Mines and Geology for Caltrans [Mualchin, 1996]. The map is prepared by assigning the maximum credible earthquake (MCE) to each known active or potentially active fault, calculating the resulting ground motions, and mapping contours of the highest values of the chosen ground-motion parameter. Anderson [1997] argues for supplementing probabilistic maps with such deterministic scenario maps to provide insight into what might happen if particular faults actually rupture during the design life of a project.

The hazard mapping method developed by Costa et al. [1993] for Italy, and subsequently applied to many other countries [e.g. Alvarez et al., 1999; Aoudia et al., 2000; Panza et al., 1996; Radulian et al., 1996], has also been called deterministic, although it is significantly different from DSHA as described in Sec. 2.1. The method is based on the generation of synthetic accelerograms at the nodes of a grid, from which contours of the peak motions are drawn. One notable feature of the approach is that the choice of grid size results in an arbitrary minimum source-to-site distance of about 15 km, which is hardly a worst-case scenario.

    2.4. Combined DSHA and PSHA

The apparently irreconcilable contradiction between PSHA and DSHA has not been sufficient to prevent their combined use in some recent applications. In common with most seismic design codes, the 1997 edition of the Uniform Building Code (UBC97) defines the design earthquake actions on the basis of a zonation map corresponding to a 475-year return period, used to anchor a response spectrum. Within Zone 4,

for any site within 15 km of an identified active fault capable of producing an earthquake of M 6.5 or greater, the near-source factors $N_a$ and $N_v$ are applied to increase the spectral ordinates for the effects of rupture directivity [e.g. Somerville et al., 1997]. Arguably there is a probabilistic element in this procedure since the slip rate is used in addition to the maximum magnitude to classify the fault and thus determine the values of the factors. However, the application of $N_a$ and $N_v$ is essentially deterministic: the implicit assumption is that if there is an active fault within 15 km of the site, it will rupture during the design life of the structure and furthermore it will rupture in such a way as to produce forward directivity effects at the site.

A more explicit combination of DSHA and PSHA has been used in the preparation of the maximum considered earthquake (for which the acronym MCE is, confusingly, also used) maps for the 1997 NEHRP Recommended Provisions for Seismic Regulations for New Buildings [Leyendecker et al., 2000]. Before continuing, it is important to note here that in this context MCE has a different meaning from that within DSHA mentioned above. Most importantly, and this is a source for much potential confusion, in the DSHA context the "earthquake" refers to an event defined by magnitude and location, whereas in PSHA "earthquake" now refers to the ground motion at the site. Two maps are produced, giving contours of spectral ordinates at 0.2 and 1.0 s, one using PSHA for a probability of exceedance of 2% in 50 years, the other deterministic, using active fault locations and activities combined with median values from strong-motion scaling relations multiplied by a factor of 1.5. Areas are defined on the probabilistic map where the ordinates exceed 1.5 g and 0.6 g for periods of 0.2 and 1.0 s respectively; within these plateaus, the values from the deterministic map are used instead wherever these are lower than those from the probabilistic map. This process effectively uses DSHA to cap the results of PSHA by not allowing motions higher than those corresponding to a deterministic scenario. Wherever the deterministic values are higher than those from the probabilistic map, the latter are retained; hence, the conclusion is that the probabilistic estimates should not exceed the deterministic estimates, which are effectively considered therefore as an upper bound. The validity of this assumption is questionable, however, in part due to the possibility that a significant active fault has not been recognised, but more importantly because of the very different treatment of uncertainty in the preparation of the two maps, a subject explored further in the next section.
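
A short sketch of the capping rule just described, assuming illustrative (invented) spectral ordinates; only the 1.5 g plateau threshold for the 0.2 s map comes from the text above.

# Illustrative sketch of the NEHRP MCE capping rule described above
# (invented ordinate values; 1.5 g is the 0.2 s plateau level quoted in the text).
import numpy as np

plateau = 1.5                                   # 1.5 g threshold for the 0.2 s map
prob = np.array([0.8, 1.6, 2.2, 1.4])           # probabilistic (2% in 50 yr) ordinates, g
det  = np.array([1.2, 1.8, 1.7, 0.9])           # deterministic (1.5 x median) ordinates, g

# Within the plateau (prob >= 1.5 g), take the deterministic value where it is
# lower; everywhere else keep the probabilistic value.
mce = np.where(prob >= plateau, np.minimum(prob, det), prob)
print(mce)                                      # -> [0.8 1.6 1.7 1.4]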

    3. Earthquakes, Seismic Actions and Seismic Hazard

One of the most fundamental differences between DSHA and PSHA lies in the transparency or otherwise of the underlying features of the earthquake processes and in the treatment of the uncertainties associated with current models of these processes. In this section, the two methods are considered with respect to their treatment of uncertainty and their relationship with real earthquake processes.

    3.1. Earthquake scenarios and seismic actions

The first and most fundamental output from a PSHA is a hazard curve, a plot of the values of the annual rate of exceedance, return period or the probability of exceedance within a particular design life against a selected ground-motion or response parameter (Fig. 1).

The values on a hazard curve convey whether the area is of low, moderate or high seismic hazard and its slope may indicate if the larger earthquakes have relatively short or long recurrence intervals. Beyond this, a hazard curve for a single ground-motion parameter tells one almost nothing about the nature of earthquakes likely to affect the site. For a given exceedance probability, the curves imply that the corresponding level of ground motion increases smoothly and continuously with the period of exposure. This is the inevitable result of calculations based on a random spatial distribution of earthquakes and a continuous magnitude-frequency relationship. In passing it is worth noting that the validity of the Gutenberg–Richter recurrence relationship, which forms the backbone of PSHA, has been questioned by several researchers [Krinitzsky, 1993; Speidel and Mattson, 1995; Hofmann, 1996]. Several centuries from now, when some accelerograph stations have been operating for thousands of years, it will be possible to plot the actual variation of average ground-motion parameters with time to observe how well our hazard curves are modelling what actually may happen at any given site. The results will of course be a series of step functions rather than a smooth curve, but if the hazard

Fig. 1. Seismic hazard curves obtained for San Salvador (El Salvador) using the median values from the attenuation relationship and including the integration across the scatter [Bommer et al., 2000].

Fig. 2. Relationship between magnitude and recurrence intervals for earthquakes of M 7.1 and greater in the Mexican subduction zone [Hong and Rosenblueth, 1988].

Fig. 3. Contours of MMI > VII for upper-crustal earthquakes in and around San Salvador during the last three centuries [Harlow et al., 1993].

curve provided a good fit to the recorded values this would vindicate PSHA. In the meantime, it is not actually possible to validate the results of a probabilistic seismic hazard assessment.

The characteristic earthquake model, in which major faults produce large earthquakes of similar characteristics at more or less constant intervals, is clearly not consistent with the idea that hazard varies gradually with time. The model was originally proposed by Singh et al. [1983] for major events in the Mexican subduction zone, where earthquakes of magnitude above 8 occur quasi-periodically and there is an absence of activity in the magnitude range 7.4 to 8.0 (Fig. 2). The concept of characteristic earthquakes has been subsequently applied to major crustal faults and reinforced by paleoseismology [Schwartz and Coppersmith, 1984]. For a given site affected by a single source with a characteristic earthquake, the ground motion expected at the site will clearly jump by a certain increment when the characteristic earthquake occurs.

The concept of characteristic earthquakes may not be limited, however, to large events on plate boundaries and major faults. Main [1987] identified characteristic earthquakes of magnitude about 4.6 in the sequence preceding the 1980 eruption of Mount St. Helens. Another example may be the upper-crustal seismicity along the Central American volcanic chain [White and Harlow, 1993]. These destructive events occur at irregular intervals, often in clusters, around the main volcanic centres (Fig. 3). Recurrence relationships derived for the entire volcanic chain clearly indicate a bilinear behaviour, reminiscent of the characteristic model (Fig. 4). The recurrence of these destructive events, sometimes with almost identical locations and characteristics [Ambraseys et al., 2001], obviously suggests itself for a deterministic treatment, as has been done for example in the microzonation study of San Salvador [Faccioli et al., 1988].

Fig. 4. Magnitude-frequency relationships for the Central American volcanic chain using the seismic catalogue for different periods: clockwise from top left 1898–1994, 1931–1994 and 1964–1994 [Bommer et al., 1998].

    3.2. Seismic hazard from multiple sources of seismicity

In the simple descriptions of DSHA and PSHA given in Sec. 2.1, apart from the fundamental difference related to the units of time, the most important distinction between the two approaches is in the way the hazard from different sources of seismicity is treated. DSHA treats each seismogenic source separately and their influence on the final outcome is entirely transparent. PSHA combines the contributions from all relevant sources into a single rate for each level of a particular ground-motion parameter. The consequence is that if the hazard is calculated in terms of a range of parameters, such as spectral ordinates at several periods, the final results will generally not be compatible with any physically feasible earthquake scenario. In recent years, many deaggregation techniques have been developed to identify the earthquake scenarios that contribute most significantly to the design motions obtained from PSHA [McGuire, 1995; Chapman, 1995, 1999; Bazzurro and Cornell, 1999; Musson, 1999]. The output of these techniques is often that more than one scenario must be defined in order to match different portions of the uniform hazard spectrum (UHS). This is the reason that Romeo and Prestininzi [2000] needed to use an adjusted design earthquake, altering the scenario found by deaggregation, in order for the resulting spectrum not to fall below the UHS.

If such manipulation of the scenarios found from deaggregation is required, or if different scenarios from different sources are to be used, this begs the question of why the influence of different seismogenic sources should be combined in the first place. The answer would appear to lie in the primordial importance that PSHA attaches to the rate of exceedance of ground-motion levels. The total probability theorem requires that all earthquakes contributing to the rate of exceedance of a given ground-motion level be considered simultaneously, and therein PSHA creates for itself considerable physical problems for the sake of mathematical rigour.
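
As a concrete illustration of what deaggregation does (invented scenario rates and a hypothetical prediction equation; this is not the method of any of the papers cited above), the sketch below computes the fractional contribution of each magnitude-distance scenario to the annual rate of exceeding a chosen ground-motion level.

# Minimal deaggregation sketch (illustrative only; all values invented).
import numpy as np
from scipy.stats import norm

scenarios = [  # (magnitude, distance km, annual rate)
    (5.5, 10.0, 2e-2),
    (6.5, 25.0, 4e-3),
    (7.5, 60.0, 5e-4),
]

def ln_pga_median(m, r):            # hypothetical prediction equation, ln PGA (g)
    return -3.5 + 0.8 * m - 1.1 * np.log(r + 10.0)
sigma_ln = 0.6

target = 0.2                         # deaggregate the hazard at PGA = 0.2 g

# Contribution of each scenario to the annual rate of exceeding the target:
# rate x P[PGA > target | M, R], the conditional probability coming from the
# lognormal scatter of the prediction equation.
contrib = []
for m, r, rate in scenarios:
    eps = (np.log(target) - ln_pga_median(m, r)) / sigma_ln   # epsilon of the target
    contrib.append(rate * (1.0 - norm.cdf(eps)))

total = sum(contrib)
for (m, r, _), c in zip(scenarios, contrib):
    print(f"M {m:.1f} at {r:>4.0f} km contributes {100*c/total:5.1f}% of the hazard")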

For now let us assume that there is a sound and rigorous basis for the selection of the design probabilities, an assumption that will be revisited later. If many earthquake sources affect a site, and consequently a wide range of M-R scenarios are possible, selecting the design seismic actions at a particular annual frequency may be a rational approach. If, however, characteristic earthquakes are identified, what is the result of this approach? The answer will depend on the relation between the characteristic recurrence interval and the design return period; for large magnitude earthquakes on plate boundaries the result could lie somewhere in the no-man's land above the ground-motion levels from the background seismicity and below the ground motions due to the occurrence of the characteristic event.

Let us return to the case of San Salvador (Fig. 4); the historical record over three centuries reveals recurrence intervals of between 2 and 65 years for local destructive earthquakes, with an average of about 22 years [Harlow et al., 1993]. What then is the result of selecting ground motions corresponding to a 475-year return period? Unlike characteristic events on major faults or subduction zones, there is evidently a degree of uncertainty associated with the location of future

events, hence a probabilistic approach may help to select an appropriate source-to-site distance (although it cannot guarantee that the distance for any site in the next earthquake will not actually be shorter). The source-to-site distance of the hazard-consistent scenario has indeed been found to decrease as the return period grows, but the main effect of going to lower and lower probabilities of exceedance is just to add more and more increments of standard deviation to the expected motions from a typical earthquake scenario [Bommer et al., 2000].

    3.3. The issue of uncertainty

All seismic hazard assessment, based on our current knowledge of earthquake processes and strong-motion generation, must inevitably deal with very considerable uncertainties. Fundamental amongst these uncertainties are the location and magnitude of future earthquakes: PSHA integrates all possible combinations of these parameters while DSHA assumes the most unfavourable combinations.

Another very important element of the uncertainty is that associated with the scatter inherent to strong-motion scaling relationships, which again is treated differently in the two methods. PSHA generally now includes integration across the scatter in the attenuation relationship as part of the calculations, although this has not always been the case: the US hazard study by Algermissen et al. [1982] is based on median values. In DSHA, as noted previously, the scatter is either ignored, by using median values, or accounted for by the addition of one standard deviation to the median values of ground motion. Both approaches have shortcomings, discussed in the next section and also in Sec. 5.3, but it is also important to note here that the different treatments of scatter in the two methods question the validity of the comparisons that are sometimes made between the two. In particular, the use of the deterministic map to cap the ground motions calculated probabilistically in the NEHRP MCE maps, discussed in Sec. 2.4, ignores completely the fact that the deterministic motions are based on median values while the probabilistic values may, for a 2475-year return period, be related to values more than 1.5 standard deviations above the median [Bommer et al., 2000]. Like is not being compared with like.

An important development in the understanding of the nature of uncertainty in strong-motion scaling relationships is the distinction between epistemic and aleatory uncertainty. In very simple terms, the epistemic uncertainty is due to incomplete data and knowledge regarding the earthquake process and the aleatory uncertainty is related to the unpredictable nature of future earthquakes [e.g. Anderson and Brune, 1999]; a more complete definition of epistemic and aleatory uncertainty is provided by Toro et al. [1997]. A major component of the uncertainty is due to all of the parameters that are currently not included in simple strong-motion scaling relationships, whose inclusion would reduce the scatter. Consider equations for duration based on magnitude, distance and site classification: additional parameters that could be included to reduce the scatter are those related to the directivity

[Somerville et al., 1997] and velocity of rupture, and the degree to which the rupture is unilateral or bilateral [Bommer and Martinez-Pereira, 1999]. This would reduce the scatter in the regressions to determine the equations, but it would not necessarily reduce the uncertainty associated with estimates of future ground motions because the point of rupture initiation, and the direction and speed of its propagation, are currently almost impossible to estimate for future events. The reduction of the uncertainty in the strong-motion scaling relationship has simply transferred it to the uncertainty in the other parameters of the hazard model. PSHA would then handle this by considering more and more scenarios to cover all of the possible variations of each feature, whereas DSHA would simply assume their least favourable combination.

    3.4. Acceleration time-history representation of seismic hazard

The most complete representation of the ground-shaking hazard is an acceleration time-history. The specifications for selecting or generating accelerograms in current seismic design codes are generally unworkable, largely because of the representation of the basic design seismic actions in terms of probabilistic maps and uniform hazard spectra [Bommer and Ruggeri, 2002].

The requirement of dynamic analysis for critical, irregular and high-ductility structures and the need to define appropriate input has been one of the main motivations for the development of deaggregation techniques such as Kameda and Nojima [1988], Hwang and Huo [1994], Chapman [1995] and McGuire [1995], the last of which has been widely adopted into practice. Since the PSHA calculations include integration across all possible combinations of magnitude (M) and distance (R) and also across the scatter in the strong-motion relationships, the deaggregation must define the earthquake scenario in terms of M, R and ε, the number of standard deviations above the median. Bommer et al. [2000] have shown that if a single seismogenic source is considered, it is possible to define the hazard-consistent scenario almost exactly rather than as bins of M, R and ε values. Performing the deaggregations of hazard determined with and without integration across the scatter reveals interesting features of the way PSHA treats the uncertainty: for a 100 000-year return period, the scenario without scatter is an earthquake of $M_s$ 6.6 at 2 km from the site; with scatter, the scenario becomes an $M_s$ 6.3 earthquake at 6 km, together with 2.7 standard deviations above the median. The implications of this for hazard assessment in general are discussed in Sec. 5.4 but here the interest is to consider the implications for the selection of hazard-consistent real accelerograms for engineering design. In the first case, records could be searched that approximately match the M-R pair of 6.6 and 2 km; if a sufficient number of accelerograms were found, the uncertainty would be represented by their own scatter, whereas if only a few records were found they could be scaled to account for the uncertainty. That the scatter is inherent in selected suites of accelerograms is reflected by specifications such as those in the 1999 AASHTO Guidelines for Base-Isolated

Bridges: if three records are used in dynamic analysis, the maximum response values are used for design, whereas if more than seven are employed, it is permitted to use the average response. In the case of the second earthquake scenario, one is faced with the almost impossible task of finding records that approximately match the M-R pair of 6.3 and 6 km, for which all ground-motion parameters are simultaneously almost three standard deviations above the median for this M-R scenario. This problem can be overcome by carrying out the hazard assessment considering the joint probabilities of different strong-motion parameters [Bazzurro, 1998] but such techniques are some way from being adopted into general practice.
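
As a sketch of the record-selection logic described here (the record catalogue, tolerance window and helper function are hypothetical), candidate accelerograms are filtered around the deaggregated M-R pair and the suite-size rule quoted from the AASHTO Guidelines is applied.

# Sketch of scenario-based record selection and the AASHTO rule quoted above
# (the record catalogue and tolerance window are invented for illustration).
records = [  # (name, magnitude, distance km) - hypothetical entries
    ("rec_A", 6.5, 3.0), ("rec_B", 6.7, 1.5), ("rec_C", 6.6, 8.0),
    ("rec_D", 6.4, 2.5), ("rec_E", 7.1, 2.0), ("rec_F", 6.6, 2.2),
]

target_m, target_r = 6.6, 2.0          # deaggregated M-R scenario
dm, dr = 0.3, 5.0                      # assumed search-window tolerances

selected = [name for name, m, r in records
            if abs(m - target_m) <= dm and abs(r - target_r) <= dr]

def design_response(responses):
    """AASHTO-style rule: maximum of the suite if few records are used,
    average if more than seven are employed."""
    if len(responses) >= 8:
        return sum(responses) / len(responses)
    return max(responses)

print(selected)                        # -> ['rec_A', 'rec_B', 'rec_D', 'rec_F']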

4. Seismic Hazard and Seismic Risk

Seismic hazard outside of the context of seismic risk is little more than an academic amusement. The development of better practice is not always well served by papers that present local, regional or even global hazard studies whose intended purpose is never stated or those that propound the virtues of one particular approach or method as the best for all applications.

    4.1. Hazard assessment as an element of risk mitigation

The seismic risk in the existing built environment may be calculated in order to design financial (insurance and reinsurance) or physical (retrofit and upgrading) mitigation measures. For planned construction, hazard estimates are required so that appropriate measures can be taken to control the consequent levels of risk through relocation (exposure) or earthquake-resistant design (vulnerability).

In the case of financial loss estimation for the purposes of insurance, the probability associated with different levels of risk is a vital part of the information required to fix premiums and to guide the purchase of reinsurance. In the case of seismic design, of new or existing structures, the target is an acceptable level of risk, for which probability may be needed. In the PSHA approach, once the design probability of exceedance is chosen, it is assumed that design for the corresponding level of ground motion will provide an acceptable level of risk.

The important issue is always what is at stake, since what matters is the possibility of loss of function due to an earthquake, whether that be to safely house people, to provide rapid transportation, emergency medical care or a constant energy supply, or to contain radioactive material. Krinitzsky [2001] proposes that any project for which the consequences of failure are intolerable, to the owner and/or the users, should be considered critical. There are strong arguments for using a deterministic approach for critical facilities [e.g. Krinitzsky, 1995b] although current DSHA procedures may warrant improvement (Sec. 5.4).

The definition of intolerable is, of course, subjective, although it is unlikely that anyone would contend the adjective being applied to the failure of a nuclear power plant. In this context, let us consider the seismicity of Great Britain, which by

Fig. 5. British earthquakes over a period of more than seven centuries. Magnitudes: a: M ≥ 5.5, b: 5.5 > M > 5.0, c: 5.0 > M > 4.5, d: 4.5 > M > 4.0 [Ambraseys and Jackson, 1985].

anyone's standards is low. Ambraseys and Jackson [1985] review several centuries of seismicity in Britain (Fig. 5) and find no earthquake larger than $M_s$ 5.5, with the additional blessings that these events are very infrequent (recurrence intervals nationally of at least 300 years) and that focal depths tend to increase with magnitude. Accounts of losses of life or property due to British earthquakes are hard to come across. This lack of significant activity and of any consensus on "the principles for zoning this seismotectonically inhomogeneous territory" [Woo, 1996] has not prevented PSHA being performed. The probabilistic study by Musson and Winter [1997] found 10 000-year PGA values below 0.10 g in most of the country but nudging past 0.25 g at isolated locations, no doubt in part because of their use of rather high values of maximum magnitude from 6.2 to 7.0. An earlier study by Ove Arup and Partners [1993] found similar results at selected locations and also carried out a probabilistic risk assessment. It was found that annual earthquake losses were only 10% of those due to meteorological hazards, although this was largely the result of the unlikely but still credible scenario of a magnitude 6 earthquake directly under a city [Booth and Pappin, 1995]. A useful, but complex and lengthy, study would compare these probabilistic losses with the cost of incorporating earthquake-resistant design into UK construction, and actually perform an iterative analysis to determine the reduction in risk from different levels of investment in earthquake-resistant design and construction. This would allow an informed decision regarding the benefit of the investment compared to the loss that may be prevented, taking into account competing demands on the same resources.

The issue of seismic safety definitely cannot be dismissed for nuclear power plants built in the UK, for which elaborate probabilistic hazard assessments have been carried out to define the 10 000-year design ground motions. The real hazard is posed by moderate magnitude (M ∼ 5) earthquakes, whose association with tectonic structures is tenuous, that could produce damaging ground motions over a radius of about 5–10 km. In the author's opinion, unless there are good scientific reasons to exclude it as unfeasible, the only rational design basis for seismic safety in such a setting is a DSHA based on an earthquake of about $M_s$ 5.5, which would require rupture on a fault that could very easily escape detection, occurring close to or even below the power station. This approach could be classified as one of conservatism. In their treatise on the philosophy of seismic hazard assessment for nuclear power plants in the UK, Mallard and Woo [1993] argue against the use of conservatism and in favour of "a systematic methodology for quantifying uncertainty". The tool proposed for this procedure is the logic-tree, a device that has become part of the stock-in-trade of PSHA enthusiasts. As applied to seismic hazard assessment, the etymology of the second part of its name is obvious from its dendritic structure, but the logic is sometimes harder to detect.

    4.2. The use and abuse of probability

    At this point, three principles for seismic hazard assessment can be stated:

(1) Seismic hazard can only be rationally interpreted in relation to the mitigation of the attendant risk.

(2) If determinism is taken to mean the assignment of values by judgement, it is impossible to perform a seismic hazard assessment, by any method, without many deterministic elements. These should be based, as far as possible, exclusively on the best scientific data available.

(3) Probability, at least in a relative sense, is essential to the evaluation of riskc (but not necessarily to its calculation).

The application of logic-trees to seismic hazard assessment is a mechanism for handling those epistemic uncertainties that cannot, unlike the aleatory scatter in strong-motion scaling relationships, be statistically measured. The different options at each step are considered and each is assigned a subjective weight that reflects the confidence in each particular value or choice. The results allow the determination of confidence bands on the mean hazard curve, which as Reiter [1990] rightly points out, "places an uneasy burden on those who have to use the results". Krinitzsky [1995a] presents powerful arguments against the use of the logic-tree for hazard assessment, while illustrating how without the contentious application of weights it

Fig. 6. Contours of 475-year PGA levels for El Salvador produced by independent seismic hazard studies [Bommer et al., 1997].

c For critical structures, any finite probability of failure may be considered intolerable.

can be a useful tool for comparing risk scenarios. In the context of this study, and in terms of the three principles outlined above, the logic-tree for hazard assessment appears to be upside down: deterministic judgementsd are turned into numbers and treated together with observational data to calculate probabilities that then fix the design ground motions. Returning to the point made in Sec. 3.2, this again points to PSHA having a degree of confidence in its probabilities which, to the author, does not seem justified. Reiter [1990] notes that different studies may conflict over whether the likelihood of exceeding 0.3 g at site X is $10^{-3}$ or $10^{-4}$ and whether the likelihood of exceeding the same acceleration at location Y is $10^{-4}$ or $10^{-5}$. Given how PSHA is actually applied, this means that for a specified annual rate of exceedance estimates of PGA can vary by factors of two or more (Fig. 6). It is important to note that a degree of this divergence in results can be removed through application of the procedures proposed by the Senior Seismic Hazard Analysis Committee [SSHAC, 1997]. However, the available earthquake and ground-motion data for most parts of the world is such that any estimate of the ground motion for a prescribed rate of exceedance will inevitably carry an appreciable degree of uncertainty.
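
A minimal sketch of how the subjective weights just described propagate through a logic-tree to a weighted-mean hazard estimate; the branch models, weights and PGA values are all invented and are not taken from any study cited here.

# Minimal logic-tree sketch: subjective weights on alternative model choices
# propagate to a weighted-mean hazard estimate (all branches, weights and
# PGA results are invented for illustration).
from itertools import product

# Alternative 475-year PGA estimates (g) from two ground-motion models and
# two maximum-magnitude choices, with subjective weights on each option.
gmpe_branches = [("GMPE_A", 0.6), ("GMPE_B", 0.4)]        # (label, weight)
mmax_branches = [("Mmax 6.5", 0.7), ("Mmax 7.0", 0.3)]

# Hypothetical PGA result for each end branch of the tree
pga = {("GMPE_A", "Mmax 6.5"): 0.18, ("GMPE_A", "Mmax 7.0"): 0.22,
       ("GMPE_B", "Mmax 6.5"): 0.25, ("GMPE_B", "Mmax 7.0"): 0.31}

weighted_mean = 0.0
for (g, wg), (m, wm) in product(gmpe_branches, mmax_branches):
    weighted_mean += wg * wm * pga[(g, m)]

print(f"Weighted-mean 475-year PGA: {weighted_mean:.3f} g")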

So the big question is, how are the probabilities fixed? Hanks and Cornell [2001] explain that the starting point is a performance target expressed as a probability of failure, $P_f$, which is set by life safety concerns or perhaps political fiat. This probability is defined by the integral:

$$P_f = \int_0^{\infty} H(a)\, F(a)\, da \tag{2}$$

where $H(a)$ describes the hazard curve (annual rate of exceedance of different levels of acceleration, $a$), and $F(a)$ is the derivative of the fragility function (the probability of failure given a particular level of acceleration). This is fine if an iterative process is carried out in order to determine the appropriate fragility curve, which would then define the design criteria, to give the desired level of $P_f$. If the fragility curve is sufficiently steep, once it has been determined Hanks and Cornell [2001] show how Eq. (2) can be approximated to determine the level of $H(a)$ to be used as the basis of design, although this begins to get complicated because to obtain $F(a)$ the design must already have been carried out. However, this is not how the design levels used in current practice have been fixed, or else it is quite remarkable that nearly every country in the world, from New Zealand to Ethiopia, regardless of seismicity, building types or construction standards, all came up with 0.002 as the design annual rate of exceedance!
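
To make Eq. (2) concrete, the following numerical sketch convolves an invented power-law hazard curve with the derivative of an invented lognormal fragility function; the coefficients are purely illustrative and are not calibrated to any real facility.

# Numerical sketch of the risk integral in Eq. (2): an invented power-law
# hazard curve convolved with the derivative of an invented lognormal
# fragility function (illustration only).
import numpy as np
from scipy.stats import lognorm
from scipy.integrate import trapezoid

a = np.linspace(0.01, 2.0, 2000)            # peak ground acceleration, g

# Hypothetical hazard curve H(a): annual rate of exceeding acceleration a
H = 1.0e-4 * a ** (-2.5)

# Hypothetical fragility: probability of failure given a, lognormal in a
# with median capacity 0.8 g and logarithmic standard deviation 0.4
fragility = lognorm(s=0.4, scale=0.8)
dF_da = fragility.pdf(a)                     # derivative of the fragility function

P_f = trapezoid(H * dF_da, a)                # Eq. (2), evaluated numerically
print(f"Annual probability of failure P_f ~ {P_f:.1e}")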

In fact, the almost universal use of the 475-year return period in codes can be traced back to the hazard study for the USA produced by Algermissen and Perkins [1976], which was based on an exposure period of 50 years (a typical design life)

d Krinitzsky [1995a] describes these as "degrees of belief that are no more quantifiable than love or taste".

and a probability of exceedance of 10%, whose selection has not been explained. Every code and regulation since has either followed suit or else adopted its own return periods: Whitman [1989] reports proposals by the Structural Engineers Association of Northern California to use a 2000-year return period, chosen because "the committee believed it reflected a probability or risk comparable to other risks that the public accepts in regard to life safety". If one considers that in most structural codes the importance factor increases the design ground motions by a constant factor, this means that the actual return period of the design accelerations will be different from 475 years and will probably vary throughout a country. In the development of the NEHRP Guidelines, this has actually been done intentionally in order to provide a uniform margin against collapse, resulting in design for motions corresponding to 10% in 50 years in San Francisco and 5% in 50 years in Central and Eastern US [Leyendecker et al., 2000]. A review of seismic design regulations around the world reveals a host of design return periods, the origin of which is rarely if ever explained. The AASHTO Guidelines specify 500 years for essential bridges and 2500 years for critical bridges.e In the Vision 2000 proposal for a framework for performance-based seismic design, return periods of 72, 244, 475 and 974 years have been specified as the basis for fixing demand for different performance levels [SEAOC, 1995]. It is hard not to feel that some of these values have been almost pulled from a hat, all the more so for the 10 000-year return periods specified for safety-critical structures such as nuclear power plants.

    5. The Best of Both Worlds

The three principles stated in Sec. 4.2 imply that it is not possible to perform useful seismic hazard assessment that is free from determinism or probability, since both are essential ingredients. Hence it is logical to look for ways that make best use of both these components.

    5.1. Hybrid approaches

Hanks and Cornell [2001] assert that the main difference between deterministic and probabilistic approaches is that PSHA has units of time and DSHA does not. This is true in the brief descriptions of the two methods presented at the beginning of the paper, but it does not mean that time and probabilities cannot be attached to deterministic scenarios. Orozova and Suhadolc [1999] assign recurrence rates to earthquakes of particular magnitude in order to adapt the method of Costa et al. [1993] discussed in Sec. 2.3 to create a "deterministic-probabilistic" approach. Furthermore, it is common in loss estimation to model the hazard as a series of earthquake scenarios, each of which is assigned an average rate from the

e A wonderful example occurred in a recent project: the design return period was finally fixed by the engineer at 2000 years because the bridge was judged to be "almost critical"!

recurrence relationships [e.g. Frankel and Safak, 1998; Kiremidjian, 1999; Bommer et al., 2002].
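
A minimal sketch of the scenario-plus-rate representation just described (the scenario list and rates are invented, although the 22-year interval echoes the San Salvador figure quoted in Sec. 3.2): each deterministic scenario carries an annual rate, from which Eq. (1) gives its probability of occurrence over the design life.

# Sketch of the hybrid representation described above: deterministic scenarios
# that each carry an annual rate (all values invented), with Poisson
# probabilities of occurrence in a design life following Eq. (1).
import math

design_life = 50.0   # years

scenarios = [  # (label, magnitude, distance km, annual rate)
    ("volcanic-chain event near the city", 6.0, 5.0, 1 / 22.0),
    ("subduction interface characteristic event", 8.0, 80.0, 1 / 75.0),
    ("background seismicity, moderate event", 5.5, 20.0, 1 / 200.0),
]

for label, m, r, rate in scenarios:
    p_occurrence = 1.0 - math.exp(-rate * design_life)
    print(f"M {m:.1f} at {r:>4.0f} km ({label}): "
          f"P[at least one in {design_life:.0f} yr] = {p_occurrence:.2f}")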

It has in fact already become well recognised that there is not a mutually exclusive dichotomy between DSHA and PSHA, opening up the possibility of exploring combined methods. Reiter [1990] says that in many situations "the choice has been rephrased so that the issue is not whether but rather to what extent a particular approach should be used". McGuire [2001] asserts that determinism vs. probabilism "is not a bivariate choice but a continuum in which both analyses are conducted, but more emphasis is given to one over the other". It is, however, interesting to note that both Reiter [1990] and McGuire [2001], who make positive contributions to moving away from the dichotomy, point towards deaggregation of PSHA as an important tool in hybrid approaches.

    5.2. Against orthodoxies f

How should the method for seismic hazard assessment be chosen? Reiter [1990] rightly argues that "the analysis must fit the needs". Nonetheless, there is a tendency amongst many in the field of earthquake engineering to argue for the superiority of one approach or the other as the ideal, a panacea for all situations. Such arguments carry the danger of establishing orthodoxies, with all their attendant perils and limitations. Hazard assessment should be chosen and adapted not only to the required output and objectives of the risk study of which it forms part; it should also be adapted to the characteristics of the area where it is being applied and the level and quality of the data available. There is no need to establish standard approaches that may be well suited to some settings and not to others, or which may be appropriate now and yet not be in a few years' time as understanding of the earthquake process advances. Every major earthquake that is well recorded by accelerographs throws up new answers and poses new questions, leading to rapid evolution. The near-field directivity factors proposed by Somerville et al. [1997], for example, have had to be revised not only numerically but also conceptually following the 1999 earthquakes in Turkey and Taiwan, a mere two years after their publication. The ability to locate and identify active geological faults in continental areas has also advanced by leaps and bounds over recent years, and continues to do so [e.g. Jackson, 2001].

What are the bases for the orthodoxies of PSHA and DSHA? For the PSHA church, the cornerstone of its creed seems to be the overriding importance of the probability of exceedance of particular levels of ground motion, despite the fact that teams of experts working with the same data can easily come up with answers

f In the period 182 to 188 A.D., at a time when the early Christian church had gone beyond only having enemies outside its camp, St. Irenaeus of Lyons penned a multi-tome treatise entitled Adversus Haereses (Against Heresies). The effects of such rigid insistence on a particular, narrow philosophy in stifling religious and scientific thought in later centuries are known only too well.

that differ by an order of magnitude. For the DSHA temple, the sacred cow is the worst-case scenario, even though this is not what they actually produce, as discussed in Sec. 5.4. One cannot get away from the fact that with current levels of knowledge of earthquake processes a major component of seismic hazard assessment is judgement, and risk decisions are governed not only by scientific and technical data but also by opinion. The word orthodoxy, from the Greek, has exactly that paradoxical and unscientific meaning: correct (ortho) opinion (doxa). The Oxford Dictionary goes on to define the word also to mean "not independent-minded" and "unoriginal".

5.3. Alternative approaches to seismic hazard mapping

One area in which PSHA currently enjoys almost total domination is in seismic hazard mapping, particularly for seismic design codes. The difference between deterministic and probabilistic hazard maps highlights one extremely important difference between current practice of the two methods, quite apart from the debatable issue of whether or not units of time are included. Probabilistic maps combine the influence and effects of different sources of seismicity, often into a single map showing contours of PGA that are then used to anchor spectral shapes that relate only to the site conditions and which somehow try to cover all possible earthquake scenarios.

Donovan [1993] states that the ground motion portion of codes "represents an

    attempt to produce a best estimate of what ground motions might occur at a site during a future earthquake . Current code descriptions of earthquake actions clearlydo not full this objective for many reasons, particularly because of the practice of combining the hazard from various sources of seismicity. Here it is hard to justifythis by insisting on the importance of the total probability when codes are generallybased on the arbitrary level of the 475-year return period and, even more impor-tantly, a return period which holds only for the zero period spectral ordinate. In

    fact, because of the use of discrete zones to represent continuously varying hazardover geographical areas, the actual return period will often not correspond to the475-year level even at the spectral anchor point. The uniform hazard spectra of seismic codes do not represent motions that might occur during a plausible futureearthquake, whence the difficulties of codes to specify acceleration time-histories.In certain regions, particularly those affected by both small, local earthquakes andlarger, distant earthquakes, the application of spectral modal analysis using thecode spectrum can amount to designing a long-period structure for two differenttypes of earthquake occurring simultaneously. It is important to note here that

    several codes, and in particular the NEHRP provisions [FEMA, 1997], use two pa-rameters to dene the spectrum, hence the spectral shape does vary with hazardlevel, but the above comments still apply to some extent.
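As an aside on the arithmetic behind the 475-year figure: it is simply the return period implied by a 10% probability of exceedance in a 50-year exposure time under the usual Poisson assumption. A minimal sketch of the calculation (Python, for illustration only):

    import math

    def return_period(p_exceed, t_years):
        """Return period (years) for a probability of exceedance p_exceed
        in an exposure time of t_years, assuming Poissonian occurrence."""
        return -t_years / math.log(1.0 - p_exceed)

    print(round(return_period(0.10, 50)))  # 475, the customary code level
    print(round(return_period(0.02, 50)))  # 2475, the level of the NEHRP maximum considered earthquake maps

The same expression gives roughly 2475 years for the 2%-in-50-years level adopted for the maximum considered earthquake maps of the NEHRP provisions [Leyendecker et al., 2000].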

If it is accepted that the total probability theorem is not an inviolable principle in defining earthquake actions, especially when the calculation of that probability is subject to such uncertainty and presented so approximately, more imaginative procedures can be used. The hazard resulting from different sources of seismicity can be treated individually and their resulting spectra defined separately. This is, in fact, already done: the seismic design codes of both China and Portugal define one spectrum for local events and another for more distant, large-magnitude earthquakes.

Once different types of seismicity are separated, it will usually be found that at most sites one source dominates. One possibility is then to deaggregate the hazard associated with the dominant source for each site and hence characterise the hazard in terms of actual M-R scenarios. The hazard could then be represented by maps showing contours of M and R for different types of seismicity. The problem still remains as to how to select an appropriate return period at which to perform the deaggregation, but in many cases it will be possible to define the M-R pairs deterministically [Bommer and White, 2001]. This would differ from the maps of magnitude and distance presented by Harmsen et al. [1999] because a single pair of maps would cover all strong-motion parameters rather than just the spectral ordinate at a single response period.
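To make the idea of deaggregation concrete, the following sketch steps through a toy calculation. The scenario rates and the attenuation coefficients are invented for illustration (they are not taken from this paper or from any published model), but the logic is the essence of the procedure: weight each M-R pair by its contribution to the rate of exceedance of a target ground-motion level and report the modal pair.

    import math

    def p_exceed(log_target, log_median, sigma_log):
        """P[log ground motion > log_target] for lognormal scatter."""
        z = (log_target - log_median) / sigma_log
        return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

    def log10_median_pga(m, r_km):
        """Placeholder attenuation relation of the usual a + b*M - c*log10(R)
        form; the coefficients are illustrative, not a published equation."""
        return -1.0 + 0.3 * m - 1.0 * math.log10(r_km + 10.0)

    # Hypothetical (magnitude, distance in km, annual rate) triplets
    scenarios = [(5.5, 10.0, 0.020), (6.0, 15.0, 0.008),
                 (6.5, 25.0, 0.003), (7.0, 60.0, 0.002)]

    target = math.log10(0.2)   # exceedance of 0.2 g, an arbitrary target level
    sigma = 0.25               # log10 scatter, typical order of magnitude

    contrib = [(m, r, rate * p_exceed(target, log10_median_pga(m, r), sigma))
               for m, r, rate in scenarios]
    total = sum(c for _, _, c in contrib)
    m_mode, r_mode, c_mode = max(contrib, key=lambda t: t[2])
    print(f"annual rate of exceedance: {total:.5f}")
    print(f"modal scenario: M {m_mode}, R {r_mode} km ({100 * c_mode / total:.0f}% of hazard)")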

Another issue is how to incorporate the uncertainty rationally without resorting to very complex representations of hazard in terms of M-R-ε contours [Bazzurro et al., 1998]. The important point is that once the hazard is characterised by pairs of magnitude and distance, every required parameter of the ground motion, including ordinates of spectral acceleration and displacement in both the vertical and horizontal directions, and duration, can be computed directly, and the required criteria for selecting or generating acceleration time-histories are provided [Bommer, 2000]. Within the framework of performance-based seismic design it is already envisaged that several levels of earthquake action will be considered in structural design. If the objective is to obtain improved control over structural response to expected seismic actions, then it is surely a step in the right direction to begin by separating different types of earthquake action.
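One practical consequence of working with explicit M-R scenarios, as noted above, is that the criteria for selecting acceleration time-histories reduce to simple windows on magnitude and distance. The fragment below is a hypothetical illustration only; the record names and the tolerances (0.3 magnitude units, a factor of two on distance) are arbitrary choices made for the example, not recommendations from the paper.

    # Hypothetical record catalogue: (record id, magnitude, distance in km)
    catalogue = [("rec_001", 6.6, 18.0), ("rec_002", 5.4, 9.0),
                 ("rec_003", 6.4, 30.0), ("rec_004", 7.1, 55.0)]

    def select_records(catalogue, m_scenario, r_scenario, dm=0.3, r_factor=2.0):
        """Keep records within +/- dm magnitude units of the scenario and within
        a factor of r_factor of the scenario distance (one simple set of criteria)."""
        return [rec for rec, m, r in catalogue
                if abs(m - m_scenario) <= dm
                and r_scenario / r_factor <= r <= r_scenario * r_factor]

    print(select_records(catalogue, m_scenario=6.5, r_scenario=25.0))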

5.4. Upper bounds: the missing piece

In 1971, the San Fernando earthquake more than doubled the number of strong-motion accelerograms available, marking the dawn of the age of curve-fitting to clouds of data to find scaling relationships. It is curious that the fitted curves severely underestimated the most significant ground motion recorded in the earthquake (Fig. 7).

Fig. 7. Attenuation relationship derived for the 1971 San Fernando earthquake data [Dowrick, 1987].

The scatter in these relationships is generally assumed to be lognormal and is invariably large: for spectral ordinates, the 84-percentile values are typically 80-100% higher than the median. This scatter creates difficulties for both DSHA and PSHA. In PSHA, the untruncated lognormal scatter results in the probable maximum ground motion at a particular site increasing indefinitely as the time window of the PSHA increases, due to the increasing influence of the tail of the Gaussian distribution on the probabilistic values [Anderson and Brune, 1999]. This has given rise to different mechanisms for truncating the scatter at a certain number of standard deviations above the median, which assumes that for all magnitude and distance combinations the physical upper bound on the ground motion is always at a fixed ratio of the median amplitudes. There is also debate regarding the actual level at which the truncation should be applied: proposals range from 2 to 4.5 standard deviations [Reiter, 1990; Abrahamson, 2000; Romeo and Prestininzi, 2000] and some widely used software codes employ cut-offs at 6 standard deviations above the median.
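The consequence of such a cut-off can be illustrated directly. The sketch below is an illustration only, not the scheme of any particular hazard program: it compares the probability of exceeding a given number of logarithmic standard deviations for the untruncated distribution and for one in which the upper tail is removed at 3 sigma and the remainder renormalised.

    import math

    def phi(z):
        """Standard normal cumulative distribution function."""
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def p_exceed(z, k_trunc=None):
        """Probability that the normalised residual exceeds z, either untruncated
        or with the upper tail cut off and renormalised at k_trunc sigmas."""
        if k_trunc is None:
            return 1.0 - phi(z)
        if z >= k_trunc:
            return 0.0
        return (phi(k_trunc) - phi(z)) / phi(k_trunc)

    for z in (2.0, 3.0, 4.0):
        print(f"z = {z}: untruncated {p_exceed(z):.2e}, "
              f"truncated at 3 sigma {p_exceed(z, k_trunc=3.0):.2e}")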

Untruncated lognormal distributions create even more difficulties for DSHA since, at least for critical structures, its basis should be to identify the worst-case scenario. The first step is to establish a maximum credible earthquake (MCE) as the basis for the design. Abrahamson [2000] points out that if the MCE is determined as the mean value from an empirical relationship between magnitude and rupture dimensions, it is not the worst case; the worst case would be the magnitude at which the probability distribution is truncated. This truncation requires a physical rather than a statistical basis: upper bounds are required, especially because the regressions are supported by very few data at large magnitudes [Jackson, 1996].
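The gap between a median magnitude and a truncated "worst case" can be made concrete with a generic relation of the form M = a + b log10(A). The coefficients and scatter below are round, illustrative numbers rather than the values of any published regression; the point is simply how far the estimate moves when the scatter is included.

    import math

    def magnitude_from_area(area_km2, a=4.0, b=1.0, n_sigma=0.0, sigma=0.25):
        """Generic relation M = a + b*log10(A) + n_sigma*sigma; the coefficients
        and sigma are illustrative round numbers, not a published regression."""
        return a + b * math.log10(area_km2) + n_sigma * sigma

    area = 600.0  # hypothetical maximum rupture area, km^2
    print(f"median magnitude:      {magnitude_from_area(area):.2f}")
    print(f"magnitude at +2 sigma: {magnitude_from_area(area, n_sigma=2.0):.2f}")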


After fixing the MCE, DSHA calculates design ground motions as the median or median-plus-one-standard-deviation values from strong-motion scaling relationships. Again, Abrahamson [2000] points out that this is not the worst case, but then goes on to state that "the worst case ground motion would be 2 to 3 standard deviations above the mean". The problem is where, between these two limits, to fix the worst case. Probabilistically they are close, corresponding to the 97.7 and 99.9 percentiles respectively, although in terms of spectral amplitudes the third standard deviation can increase the amplitudes by a factor of about two, which clearly has very significant implications for design.

Here again, physical upper bounds are needed. Can upper bounds be fixed for ground-motion parameters? Reiter [1990] argues that there must be a physical limit on the strength of ground motion that a given earthquake can generate. It is also interesting to note that before sufficient strong-motion accelerograms were available for regression analyses, a number of studies considered possible upper bounds on ground-motion parameters [e.g. Ambraseys, 1974; Ambraseys and Hendron, 1967; Ida, 1973; Newmark, 1965; Newmark and Hall, 1969]. Upper bounds need to be established, and procedures for their application developed, that take into account current understanding of epistemic uncertainty. For example, if a deterministic scenario has included forward-directivity effects, does it then make sense to add on two or three standard deviations when a significant component of the scatter has already been accounted for?
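The arithmetic behind these percentiles and factors is easily checked; a minimal sketch (Python, standard library only):

    import math
    from statistics import NormalDist

    # If the 84-percentile (median + 1 sigma in log space) is 80-100% above the
    # median, the implied natural-log standard deviation and the corresponding
    # multipliers on the median at +2 and +3 sigma are:
    for ratio_84 in (1.8, 2.0):
        sigma_ln = math.log(ratio_84)
        print(f"sigma_ln = {sigma_ln:.2f}: x{math.exp(2 * sigma_ln):.1f} at +2 sigma, "
              f"x{math.exp(3 * sigma_ln):.1f} at +3 sigma")

    # Non-exceedance percentiles of the untruncated normal at +2 and +3 sigma
    print([round(100 * NormalDist().cdf(k), 2) for k in (2, 3)])  # [97.72, 99.87]

With the 84-percentile a factor of 1.8 to 2.0 above the median, each additional standard deviation multiplies the amplitudes by that same factor, which is the origin of the factor of two quoted above.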

There are many ways in which the scatter in attenuation equations can be truncated, including the adaptation of models developed for lognormal distributions with finite maxima [Bezdek and Solomon, 1983] to ground-motion prediction models [Restrepo-Velez, 2001]. As PSHA pushes estimates to longer return periods, the issue of truncating the influence of the scatter, in order to avoid physically unrealisable ground-motion amplitudes, becomes more important. The Pegasos project, currently underway to assess seismic hazard at nuclear power plant sites in Switzerland down to very low rates of exceedance, is taking PSHA to new limits [Smit et al., 2002]. The ground-motion estimates will be defined by median values, standard deviations and truncations, with associated confidence intervals for each of these, for a wide range of magnitude and distance pairs.

    5.5. Time-dependent estimates of seismic hazard

In order to make full use of the output from a PSHA, including estimates of uncertainty and the expected levels of ground motion for a range of return periods, it is necessary that the engineering design also follow a probabilistic approach. Such approaches are now being developed, particularly through the outstanding work of Allin Cornell and his students at Stanford University [e.g. Bazzurro, 1998]. However, there are many obstacles to the adoption of these approaches in routine engineering practice, simply because there are few areas of activity where the saying "time is money" is as true as it is in the construction industry. One could actually conclude that probabilistic assessment of design earthquake actions is far more advanced than probabilistic approaches to seismic design, and for this reason the results of PSHA are generally not used to their full advantage.

The limited time made available for even major engineering projects is reflected in the very tight deadlines often set for the execution of seismic hazard assessments. In engineering practice it is not unusual for a hazard assessment for a site to be carried out in a few weeks, generally making it inevitable that the study will be based largely on available data published in papers and reports. Under such circumstances, and especially where the design considers the seismic performance at only one limit state, DSHA can represent an attractive option to define a level of ground motion that is unlikely to be severely exceeded. One advantage of DSHA is that it immediately releases the hazard analyst from the responsibility of defining the design return period, which the client or engineer often passes, quite incorrectly, to the Earth scientist. More importantly, DSHA is generally more dependent on the data that are likely to be determined most reliably: the characteristics of the largest historical earthquakes and the location of the largest geological faults. Since the actual calculations involved in DSHA are simple, the analyst can carry out sensitivity analyses even when working to a very strict timetable. Furthermore, because of the transparency of the deterministic approach, peer review can be carried out swiftly and easily, which may not be the case for a probabilistic assessment in which so many input parameters and assumptions, whose individual influences are often difficult to distinguish, need to be defined.

One cannot escape the fact that expert opinion is of overriding importance, regardless of whether the probabilistic or the deterministic approach is adopted. The value of a logic-tree formulation, with weights assigned by one analyst, is questionable in situations where the design engineer (for whom the provision of earthquake resistance is just one of a series of considerations that affect siting, dimensions and detailing) is seeking a single number to characterise the earthquake action. This is not to say that careful use of multiple expert opinions is not to be encouraged; indeed, a Level 4 hazard study as recommended by SSHAC [1997] provides a very well crafted framework for exactly such a process. However, the time scale, and hence cost, required to perform such a study is beyond the means and the schedule of most engineering projects: to date the SSHAC Level 4 method has only been applied in the Yucca Mountain project [Stepp et al., 2001] and now the Pegasos project in Switzerland [Smit et al., 2002]. When working to a very short time scale, however, a peer-reviewed DSHA may be far more useful to the project engineer and ultimately to the provision of an adequate level of earthquake resistance.

    6. Conclusions

There is not a simple dichotomy between probabilistic and deterministic approaches to seismic hazard assessment: many different probabilistic approaches exist and there is no standard methodology for deterministic approaches. There is not, nor does there need to be, a method that can be applied as a panacea in all situations, since the available time and resources, and the required output, will vary for different applications. There is no reason why a single approach should be ideal for drafting zonation maps for seismic design codes, for loss estimation studies in urban areas, for emergency planning, and for site-specific assessments for nuclear power plants. McGuire [2001] presents an interesting scheme for selecting the relative degrees of determinism and probabilism according to the application. Woo [1996] states that the method of (probabilistic) analysis should be decided on the merits of the regional data rather than the availability of particular software or the analyst's own philosophical inclination. The reality is that both of these views are partially correct and hence both criteria need to be used simultaneously: the selection of the appropriate method must fit the requirements of the application and also consider the nature of the seismicity in the region and its correlation, or indeed lack thereof, with the tectonics. It is not possible to define a set of criteria that can then be blindly applied to all types of hazard assessment in all regions of the world, and for this reason no such attempt has been made in this paper. The analyst, after establishing the needs and conditions set by the engineer, should adapt the assessment both to these criteria and to the region under study. Casting aside simplistic choices between DSHA and PSHA will help the best approach to be found and will also allow the best use to be made of the considerable and growing body of expertise in engineering seismology around the world.

The confidence with which seismic probabilities can be calculated does not generally warrant their rigid use to define design ground motions, and wherever there is sufficient data for these to be defined, the results should at least be checked against deterministic scenarios [McGuire, 2001]. One application in which the arguments for strict adherence to the total probability theorem cannot be defended is the derivation of earthquake actions for code-based seismic design. The currently very crude definition of earthquake actions in seismic codes could be greatly improved if it were accepted that it is not necessary to use representations that attempt to envelop simultaneously all of the possible ground motions that may occur at a site.

Finally, one area in which research is required, which will be of benefit to seismic hazard assessment in general and perhaps to deterministic approaches in particular, is the identification of upper bounds on ground-motion parameters for different combinations of magnitude, distance and rupture mechanism. The existing database of strong-motion accelerograms can provide some insight into this issue [e.g. Martínez-Pereira, 1999; Restrepo-Velez, 2001]. Advances in finite-fault models for numerical simulation of ground motions could be employed to perform large numbers of runs, with a wide range of combinations of physically realisable values of the independent parameters, in order to obtain estimates of the likely range of the upper bounds on some parameters.


    Acknowledgements

The author wishes to thank David M. Boore, Luis Fernando Restrepo-Velez, John Douglas and John Berrill for pointing out some key references, and particularly Ellis Krinitzsky and Tom Hanks for providing pre-prints of papers. The author has enjoyed and benefited from discussing many of the issues in this paper with others, particularly Norman A. Abrahamson, Amr Elnashai, Rui Pinho, Nigel Priestley, Sarada K. Sarma and the students of the ROSE School in Pavia. Robin McGuire, Belén Benito, John Douglas, Nick Ambraseys and Sarada Sarma all provided very useful feedback on the first draft of this manuscript. The author is particularly indebted to Dave Boore for an extremely thorough, detailed and challenging review of the first draft; in the few cases where my determined stubbornness has led me to ignore his counsel, I have probably done so to the detriment of the paper. The second draft of the paper has further benefited from thorough reviews by John Berrill and two anonymous referees, to whom I also extend my gratitude.

    References

Abrahamson, N. A. [2000] State of the practice of seismic hazard evaluation, GeoEng 2000, Melbourne, Australia, 19–24 November.

Algermissen, S. T. and Perkins, D. M. [1976] A probabilistic estimate of maximum accelerations in rock in the contiguous United States, US Geological Survey Open-File Report 76-416.

Algermissen, S. T., Perkins, D. M., Thenhaus, P. C., Hanson, S. L. and Bender, B. L. [1982] Probabilistic estimates of maximum acceleration and velocity in rock in the contiguous United States, US Geological Survey Open-File Report 82-1033.

Alvarez, L., Vaccari, F. and Panza, G. F. [1999] Deterministic seismic zoning of eastern Cuba, Pure Appl. Geophys. 156, 469–486.

Ambraseys, N. N. [1974] Dynamics and response of foundation materials in epicentral regions of strong earthquakes, Proceedings Fifth World Conference on Earthquake Engineering, Rome, vol. 1, cxxvi–cxlviii.

Ambraseys, N. N., Bommer, J. J., Buforn, E. and Udías, A. [2001] The earthquake sequence of May 1951 at Jucuapa, El Salvador, J. Seism. 5(1), 23–39.

Ambraseys, N. N. and Hendron, A. [1967] Dynamic behaviour of rock masses, Rock Mechanics in Engineering Practice, Stagg, K. and Zienkiewicz, O. (eds.), John Wiley, pp. 203–236.

Ambraseys, N. N. and Jackson, J. A. [1985] Long-term seismicity of Britain, Earthquake Engineering in Britain, Thomas Telford, pp. 49–65.

Anderson, J. G. [1997] Benefits of scenario ground motion maps, Engrg. Geol. 48, 43–57.

Anderson, J. G. and Brune, J. N. [1999] Probabilistic seismic hazard analysis without the ergodic assumption, Seism. Res. Lett. 70(1), 19–28.

Aoudia, A., Vaccari, F., Suhadolc, P. and Meghraoui, M. [2000] Seismogenic potential and earthquake hazard assessment in the Tell Atlas of Algeria, J. Seism. 4, 79–98.

Barbano, M. S., Egozcue, J. J., García Fernández, M., Kijko, A., Lapajne, L., Mayer-Rosa, D., Schenk, V., Schenková, Z., Slejko, D. and Zonno, G. [1989] Assessment of seismic hazard for the Sannio-Matese area of southern Italy - a summary, Natural Hazards 2, 217–228.


Bazzurro, P. [1998] Probabilistic seismic demand analysis, PhD Dissertation, Stanford University.

Bazzurro, P. and Cornell, C. A. [1999] Disaggregation of seismic hazard, Bull. Seism. Soc. Am. 89, 501–520.

Bazzurro, P., Winterstein, S. R. and Cornell, C. A. [1998] Seismic contours: a new characterization of seismic hazard, Proceedings Eleventh European Conference on Earthquake Engineering, Paris.

Bender, B. [1984] Incorporating acceleration variability into seismic hazard analysis, Bull. Seism. Soc. Am. 74, 1451–1462.

Bender, B. and Perkins, D. M. [1982] SEISRISK II, a computer program for seismic hazard estimation, US Geological Survey Open-File Report 82-293.

Bender, B. and Perkins, D. M. [1987] SEISRISK III, a computer program for seismic hazard estimation, US Geol. Surv. Bull. 1772, 1–20.

Bezdek, J. C. and Solomon, K. H. [1983] Upper limit lognormal distribution for drop size data, ASCE J. Irrigation and Drainage Engrg. 109(1), 72–88.

Bommer, J. J. [2000] Seismic zonation for comprehensive definition of earthquake actions, Proceedings Sixth International Conference on Seismic Zonation, Palm Springs, CA.

Bommer, J. J. and Martínez-Pereira, A. [1999] The effective duration of earthquake strong motion, J. Earthq. Engrg. 3(2), 127–172.

Bommer, J. J. and Ruggeri, C. [2002] The specification of acceleration time-histories in seismic design codes, Eur. Earthq. Engrg. 16(1), in press.

Bommer, J. J. and White, N. [2001] Una propuesta para un método alternativo de zonificación sísmica en los países de Iberoamérica, Segundo Congreso Iberoamericano de Ingeniería Sísmica, Madrid, 16–19 October.

Bommer, J., McQueen, C., Salazar, W., Scott, S. and Woo, G. [1998] A case study of the spatial distribution of seismic hazard (El Salvador), Natural Hazards 18, 145–166.

Bommer, J. J., Scott, S. G. and Sarma, S. K. [2000] Hazard-consistent earthquake scenarios, Soil Dyn. Earthq. Engrg. 19, 219–231.

Bommer, J., Spence, R., Erdik, M., Tabuchi, S., Aydinoglu, N., Booth, E., del Re, D. and Peterken, O. [2002] Development of an earthquake loss model for Turkish Catastrophe Insurance, J. Seism., accepted for publication.

Bommer, J. J., Udías, A., Cepeda, J. M., Hasbun, J. C., Salazar, W. M., Suárez, A., Ambraseys, N. N., Buforn, E., Cortina, J., Madariaga, R., Méndez, P., Mezcua, J. and Papastamatiou, D. [1997] A new digital accelerograph network for El Salvador, Seism. Res. Lett. 68, 426–437.

Booth, E. D. and Pappin, J. W. [1995] Seismic design requirements for structures in the United Kingdom, European Seismic Design Practice, Elnashai, A. S. (ed.), Balkema, pp. 133–140.

Chapman, M. C. [1995] A probabilistic approach to ground-motion selection for engineering design, Bull. Seism. Soc. Am. 85, 937–942.

Chapman, M. C. [1999] On the use of elastic input energy for seismic hazard analysis, Earthq. Spectra 15, 607–635.

Cornell, C. A. [1968] Engineering seismic risk analysis, Bull. Seism. Soc. Am. 58, 1583–1606.

Costa, G., Panza, G. F., Suhadolc, P. and Vaccari, F. [1993] Zoning of the Italian territory in terms of expected peak ground acceleration derived from complete synthetic seismograms, J. Appl. Geophys. 30, 149–160.

Donovan, N. [1993] Relationship of seismic hazard studies to seismic codes in the United States, Tectonophysics 218, 257–271.


Dowrick, D. J. [1987] Earthquake Resistant Design for Engineers and Architects, second edition, John Wiley & Sons.

Faccioli, E., Battistella, C., Alemani, P. and Tibaldi, A. [1988] Seismic microzoning investigations in the metropolitan area of San Salvador, El Salvador, following the destructive earthquake of 10 October 1986, Proceedings International Seminar on Earthquake Engineering, Innsbruck, pp. 28–65.

FEMA [1997] NEHRP recommended provisions for seismic regulations for new buildings and other structures, FEMA 302, Federal Emergency Management Agency, Washington, DC.

Frankel, A. [1995] Mapping seismic hazard in the Central and Eastern United States, Seism. Res. Lett. 66(4), 8–21.

Frankel, A. and Safak, E. [1998] Recent trends and future prospects in seismic hazard analysis, Geotechnical Earthquake Engineering and Soil Dynamics III, ASCE Geotechnical Special Publication 75, vol. 1, pp. 91–115.

Hanks, T. C. and Cornell, C. A. [2001] Probabilistic seismic hazard analysis: a beginner's guide, Earthq. Spectra, submitted.

Harlow, D. H., White, R. A., Rymer, M. J. and Alvarado, S. G. [1993] The San Salvador earthquake of 10 October 1986 and its historical context, Bull. Seism. Soc. Am. 83(4), 1143–1154.

Harmsen, S., Perkins, D. and Frankel, A. [1999] Deaggregation of probabilistic ground motions in the Central and Eastern United States, Bull. Seism. Soc. Am. 89, 1–13.

Hofmann, R. B. [1996] Individual faults can't produce a Gutenberg-Richter earthquake recurrence, Engrg. Geol. 43, 5–9.

Hong, H. P. and Rosenblueth, E. [1988] The Mexico earthquake of September 19, 1985: model for generation of subduction earthquakes, Earthq. Spectra 4(3), 481–498.

Hwang, H. H. M. and Huo, J. R. [1994] Generation of hazard-consistent ground motion, Soil Dyn. Earthq. Engrg. 13, 377–386.

Ida, Y. [1973] The maximum acceleration of seismic ground motion, Bull. Seism. Soc. Am. 63(3), 959–968.

Jackson, D. D. [1996] The case for huge earthquakes, Seism. Res. Lett. 67(1), 3–5.

Jackson, J. A. [2001] Living with earthquakes: know your faults, J. Earthq. Engrg. 5, Special Issue No. 1, 5–123.

Kameda, H. and Nojima, N. [1988] Simulation of risk-consistent earthquake motion, Earthq. Engrg. Struct. Dyn. 16, 1007–1019.

Kijko, A. and Graham, G. [1998] Parametric-historic procedure for probabilistic seismic hazard analysis, Proceedings Eleventh European Conference on Earthquake Engineering, Paris.

Kiremidjian, A. S. [1999] Multiple earthquake event loss estimation methodology, Proceedings of the Eleventh European Conference on Earthquake Engineering: Invited Lectures, Balkema, pp. 151–160.

Kiremidjian, A. S. and Anagnos, T. [1984] Stochastic slip-predictable models for earthquake occurrence, Bull. Seism. Soc. Am. 74, 739–755.

Kiremidjian, A. S. and Suzuki, S. [1987] A stochastic model for site ground motions from temporally independent earthquakes, Bull. Seism. Soc. Am. 77, 1110–1126.

Kramer, S. L. [1996] Geotechnical Earthquake Engineering, Prentice Hall.

Krinitzsky, E. L. [1993] Earthquake probability in engineering - Part 2: Earthquake recurrence and limitations of Gutenberg-Richter b-values for the engineering of critical structures, Engrg. Geol. 36, 1–52.

Krinitzsky, E. L. [1995a] Problems with Logic Trees in earthquake hazard evaluation, Engrg. Geol. 39, 1–3.


Krinitzsky, E. L. [1995b] Deterministic versus probabilistic seismic hazard analysis for critical structures, Engrg. Geol. 40, 1–7.

Krinitzsky, E. L. [1998] The hazard in using probabilistic seismic hazard assessment for engineering, Env. Engrg. Geosci. 4(4), 425–443.

Krinitzsky, E. L. [2001] How to obtain earthquake ground motions for engineering design, Engrg. Geosci., submitted.

Krinitzsky, E. L., Gould, J. P. and Edinger, P. H. [1993] Fundamentals of Earthquake Resistant Construction, Wiley.

Leyendecker, E. V., Hunt, R. J., Frankel, A. D. and Rukstales, K. S. [2000] Development of maximum considered earthquake ground motion maps, Earthq. Spectra 16, 21–40.

Main, I. G. [1987] A characteristic earthquake model of the seismicity preceding the eruption of Mount St. Helens on 18 May 1980, Phys. Earth Planetary Interiors 49, 283–293.

Makropoulos, K. C. and Burton, P. W. [1986] HAZAN: a FORTRAN program to evaluate seismic hazard parameters using Gumbel's theory of extreme value statistics, Comput. Geosci. 12, 29–46.

Mallard, D. J. and Woo, G. [1993] Uncertainty and conservatism in UK seismic hazard assessment, Nuclear Energy 32(4), 199–205.

McGuire, R. K. [1976] FORTRAN computer program for seismic risk analysis, US Geological Survey Open-File Report 76-67.

McGuire, R. K. [1995] Probabilistic seismic hazard analysis and design earthquakes: closing the loop, Bull. Seism. Soc. Am. 85, 1275–1284.

McGuire, R. K. [2001] Deterministic vs. probabilistic earthquake hazard and risks, Soil Dyn. Earthq. Engrg. 21, 377–384.

Mualchin, L. [1996] Development of Caltrans deterministic fault and earthquake hazard map of California, Engrg. Geol. 42, 217–222.

Musson, R. M. W. [1999] Determination of design earthquakes in seismic hazard analysis through Monte Carlo simulation, J. Earthq. Engrg. 3(4), 463–474.

Musson, R. M. W. and Winter, P. W. [1997] Seismic hazard maps for the UK, Natural Hazards 14, 141–154.

Newmark, N. M. [1965] Effects of earthquakes on dams and embankments, Geotechnique 15(2), 139–160.

Newmark, N. M. and Hall, W. J. [1969] Seismic design criteria for nuclear reactor facilities, Proceedings Fourth World Conference on Earthquake Engineering, Santiago de Chile, vol. 2, B5.1–B5.12.

Orozova, I. M. and Suhadolc, P. [1999] A deterministic-probabilistic approach for seismic hazard assessment, Tectonophysics 312, 191–202.

Ove Arup and Partners [1993] Earthquake hazard and risk in the UK, Her Majesty's Stationery Office, London.

Panza, G. F., Vaccari, F., Costa, G., Suhadolc, P. and Fäh, D. [1996] Seismic input modelling for zoning and microzoning, Earthq. Spectra 12, 529–566.

Parsons, T., Toda, S., Stein, R. S., Barka, A. and Dieterich, J. H. [2000] Heightened odds of large earthquakes near Istanbul: an interaction-based probability calculation, Science 288, 28 April, 661–665.

Peek, R., Berrill, J. B. and Davis, R. O. [1980] A seismicity model for New Zealand, Bull. New Zealand Nat. Soc. Earthq. Engrg. 13(4), 355–364.

Radulian, M., Vaccari, F., Mandrescu, N., Panza, G. F. and Moldoveanu, C. L. [2000] Seismic hazard of Romania: deterministic approach, Pure Appl. Geophys. 157, 221–248.


Reiter, L. [1990] Earthquake Hazard Analysis: Issues and Insights, Columbia University Press.

Restrepo-Velez, L. F. [2001] Explorative study of the scatter in strong-motion attenuation equations for application to seismic hazard assessment, Master Dissertation, ROSE School, Università di Pavia.

Romeo, R. and Prestininzi, A. [2000] Probabilistic versus deterministic hazard analysis: an integrated approach for siting problems, Soil Dyn. Earthq. Engrg. 20, 75–84.

Schwartz, D. P. and Coppersmith, K. J. [1984] Fault behaviour and characteristic earthquakes: examples from the Wasatch and San Andreas faults, J. Geophys. Res. 89, 5681–5698.

SEAOC [1995] Vision 2000: A framework for performance based design, Structural Engineers Association of California, Sacramento.

Shepherd, J. B., Tanner, J. G. and Prockter, L. [1993] Revised estimates of the levels of ground acceleration and velocity with 10% probability of exceedance in any 50-year period for the Trinidad and Tobago region, Caribbean Conference on Earthquakes, Volcanoes, Windstorms and Floods, 11–15 October, Port of Spain, Trinidad.

Singh, S. K., Rodriguez, M. and Esteva, L. [1983] Statistics of small earthquakes and frequency of occurrence of large earthquakes along the Mexican subduction zone, Bull. Seism. Soc. Am. 73, 1779–1796.

Speidel, D. H. and Mattson, P. H. [1995] Questions on the validity and utility of b-values: an example from the Central Mississippi Valley, Engrg. Geol. 40, 9–27.

Somerville, P. G., Smith, N. F., Graves, R. W. and Abrahamson, N. A. [1997] Modification of empirical strong ground motion attenuation relations to include the amplitude and duration effects of rupture directivity, Seism. Res. Lett. 68, 199–222.

SSHAC [1997] Recommendations for probabilistic seismic hazard analysis: guidance on uncertainty and the use of experts, Senior Seismic Hazard Analysis Committee, NUREG/CR-6372, Washington, DC.

Smit, P., Torri, A., Sprecher, C., Birkhäuser, P., Tinic, S. and Graf, R. [2002] Pegasos: a comprehensive probabilistic seismic hazard assessment for nuclear power plants in Switzerland, Twelfth European Conference on Earthquake Engineering, London.

Stepp, J. C., Wong, I., Whitney, J., Quittmeyer, R., Abrahamson, N., Toro, G., Youngs, R., Coppersmith, K., Savy, J., Sullivan, T. and Yucca Mountain PSHA Project Members [2001] Probabilistic seismic hazard analyses for ground motions and fault displacements at Yucca Mountain, Nevada, Earthq. Spectra 17(1), 113–151.

Toro, G. R., Abrahamson, N. A. and Schneider, J. F. [1997] Model of strong ground motions from earthquakes in Central and Eastern North America: best estimates and uncertainties, Seism. Res. Lett. 68(1), 41–57.

Veneziano, D., Cornell, C. A. and O'Hara, T. [1984] Historical method of seismic hazard analysis, Electric Power Research Institute Report NP-3438, Palo Alto, California.

White, R. A. and Harlow, D. H. [1993] Destructive upper-crustal earthquakes of Central America since 1900, Bull. Seism. Soc. Am. 83(4), 1115–1142.

Whitman, R. (ed.) [1989] Workshop on ground motion parameters for seismic hazard mapping, Technical Report NCEER-89-0038, National Center for Earthquake Engineering Research, State University of New York at Buffalo.

Woo, G. [1996] Kernel estimation methods for seismic hazard area source modeling, Bull. Seism. Soc. Am. 86, 353–362.