
CDQM, Volume 12, Number 1, 2009, pp. 24-34

COMMUNICATIONS IN DEPENDABILITY AND QUALITY MANAGEMENT

An International Journal

A framework for multi-objective optimization of the technical specifications of PHWRs

G. Srinivas 1*, A. K. Verma 2 and A. Srividya 2

1 Nuclear Power Corporation of India Ltd., Mumbai, India, E-Mail: [email protected] 2 IIT Bombay, Mumbai, India, E-Mail: [email protected], [email protected]

accepted February 26, 2009

Summary

This paper presents a framework for multi-objective optimization of the Technical Specifications of the Pressurized Heavy Water Reactors in India. The Probabilistic Safety Assessment study of the 540 MWe PHWR is taken as the basis for setting up the multi-objective problem for arriving at optimized surveillance test intervals for the Emergency Core Cooling valves. The proposed work is based on a genetic algorithm applied to this multi-objective problem.

Key words: Multi-objective optimization, pressurized heavy water reactor, probabilistic safety assessment, genetic algorithm.

1. INTRODUCTION

The Pressurized Heavy Water Reactor (PHWR) is a very complex nuclear power generating system. The PHWR concept was first exploited commercially by the Canadians and was called the CANDU (CANadian Deuterium Uranium) reactor. The choice of the PHWR for the Indian nuclear power program is based on long-term objectives and the availability of resources and infrastructure. The features of the PHWR that favored this choice are:

• Use of natural uranium as fuel, which obviates the need for developing fuel enrichment facilities.

• High neutron economy made possible by the use of heavy water as moderator, which means low requirements of natural uranium both for the initial core and for subsequent refueling. Also, fissile plutonium production (required for Stage 2 of the Indian nuclear power program) is high compared to Light Water Reactors.

* corresponding author

UDC 519.863:621.039.51

• Being a pressure tube reactor, with no high pressure reactor vessel, the required fabrication technologies were within the capability of indigenous industry.

• The technology for production of heavy water, required as moderator and coolant in PHWR, was available in the country.

The Indian PHWR design has evolved through a series of improvements over the years in successive projects. Such improvements have been driven by, among other factors, evolution in technology, feedback from operating experience in India and abroad (including lessons learnt from incidents and their precursors), evolving regulatory requirements and cost considerations. Valuable experience gained in design, manufacture, construction, operation, maintenance and safety regulation has enabled continual evolution, improvement and refinement of the PHWR concept [1].

All nuclear power plants operate within a strict operating framework established prior to the licensing of their operation. This framework is known as the Technical Specifications (TS) of the reactor. This document lays down the Limiting Conditions of Operation (LCO), which are vital to ensure the availability of the safety and safety support systems and thereby the safe operation of the plant. The Technical Specifications explicitly lay down the rules governing the operation of the plant, and in particular of the safety and safety support systems. While most safety and safety support systems remain in a poised standby condition, some of them are continually operating. The Technical Specifications lay out the Surveillance Test Intervals (STIs), In-Service Inspection (ISI) intervals, Allowed Outage Times (AOTs) and other guidelines to ensure the safety of the operating plant. Most of these operating criteria have been arrived at based on traditional practices and assumptions made by the designers of the systems.

Safety assessment of the Indian PHWR has also evolved and is at par with international practice. The deterministic and probabilistic safety assessments are part of the Preliminary Safety Analysis Report (PSAR) submitted to the Atomic Energy Regulatory Board (AERB) of India for the licensing of reactors being installed in the country. India has completed the Probabilistic Safety Assessment (PSA) of all the operating reactors in the country, the latest being the PSA of the 540 MWe PHWR station TAPS 3&4 (Tarapur Atomic Power Station) [2]. These developments have added a new dimension to the optimization of the operating criteria for plant operations through the application of Risk Informed Decision Making (RIDM). The progress made in recent years in the probabilistic safety assessment of PHWRs in India has provided a new quantitative tool for evaluating the Technical Specifications on a risk-informed basis.

The information available through the PSA of the plant can be effectively utilized for the optimization of the STIs and AOTs of the operating stations. In this paper a framework is established for the optimization of the STIs and AOTs fulfilling the multiple objectives of system reliability, core damage frequency and cost of surveillance testing. The system reliability and the core damage frequency are represented by thousands of minimal cut sets, which are determined from the PSA model comprising the event trees and the system fault trees.

Cut sets are the unique combinations of component failures that can cause system failure. Specifically, a cut set is said to be a minimal cut set if, when any basic event is removed from the set, the remaining events collectively no longer form a cut set. Each minimal cut set consists of a series-parallel combination of the unavailabilities of its components. The PSA model includes all of the important safety and safety support systems. It thus gives rise to a nonlinear and multimodal optimization problem with constraints, which is attempted to be solved based on the PSA model at the plant level. Most conventional techniques have difficulties in handling such multimodal and nonlinear optimization problems. A Genetic Algorithm is one of the non-traditional optimization techniques that can be successfully applied to such problems [3].

2. MULTI-OBJECTIVE OPTIMIZATION PROBLEM

A multi-objective optimization problem (MOOP) has a number of objective functions which are to be minimized or maximized. In its general form it is stated as follows:

Minimize/Maximize   f_m(x),                     m = 1, 2, ..., M;
subject to          g_j(x) ≥ 0,                 j = 1, 2, ..., J;          (1)
                    h_k(x) = 0,                 k = 1, 2, ..., K;
                    x_i^(L) ≤ x_i ≤ x_i^(U),    i = 1, 2, ..., n.

As can be observed, formulation (1) states a set of objective functions which have to be maximized or minimized subject to constraints. A solution x is a vector of n decision variables: x = (x_1, x_2, ..., x_n)^T. The last set of constraints is called the variable bounds, restricting each decision variable x_i to take a value within a lower bound x_i^(L) and an upper bound x_i^(U). These bounds constitute a decision variable space D, or simply a decision space. Associated with the problem are J inequality and K equality constraints. The terms g_j(x) and h_k(x) are called constraint functions. A solution x that does not satisfy all of the (J+K) constraints and all of the 2n variable bounds stated above is called an infeasible solution. On the other hand, if a solution x satisfies all constraints and variable bounds, it is known as a feasible solution. The set of all feasible solutions is called the feasible region S, which constitutes a part of the decision variable space D.
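As a minimal illustration of the feasibility test implied by formulation (1), the following Python sketch (hypothetical function and variable names, not from the paper) checks a candidate solution against the inequality constraints, the equality constraints and the variable bounds:

    from typing import Callable, Sequence

    def is_feasible(x: Sequence[float],
                    g: Sequence[Callable[[Sequence[float]], float]],
                    h: Sequence[Callable[[Sequence[float]], float]],
                    lower: Sequence[float],
                    upper: Sequence[float],
                    tol: float = 1e-9) -> bool:
        """True if x satisfies g_j(x) >= 0 for all j, h_k(x) = 0 for all k,
        and the 2n variable bounds x_i^(L) <= x_i <= x_i^(U)."""
        if any(not (lo <= xi <= up) for xi, lo, up in zip(x, lower, upper)):
            return False                      # a variable bound is violated
        if any(gj(x) < 0.0 for gj in g):
            return False                      # an inequality constraint is violated
        if any(abs(hk(x)) > tol for hk in h):
            return False                      # an equality constraint is violated
        return True

    # Example: two variables in [0, 1], one inequality constraint x1 + x2 - 1 >= 0
    print(is_feasible([0.6, 0.7], g=[lambda x: x[0] + x[1] - 1.0], h=[],
                      lower=[0.0, 0.0], upper=[1.0, 1.0]))    # True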

There are M objective functions considered in the above formulation. Each objective function can be either minimized or maximized. Multi-objective optimization is sometimes referred to as vector optimization because a vector of objectives, instead of a single objective, is optimized [4].

3. GENETIC ALGORITHM IN SOLVING THE MULTI-OBJECTIVE PROBLEM

The multi-objective optimization problem arises when, in correspondence with each point X in the search space, we must consider several objective functions f_i(X), i = 1, 2, ..., N, and then identify the X* which gives rise to the best compromise among the various objective functions. The comparison of two solutions with respect to several objectives may be achieved through the introduction of the concepts of Pareto optimality and dominance, which enable solutions to be compared and ranked without imposing any a priori measure of the relative importance of the individual objectives, neither in the form of subjective weights nor of arbitrary constraints.

Let us consider N different objective functions f_i(X), i = 1, 2, ..., N, where X represents the vector of independent variables identifying a generic candidate solution. We say that solution X (weakly) dominates solution Y if X is at least as good as Y on all objectives and strictly better on at least one [6], i.e. if

f_i(X) ≥ f_i(Y)   for i = 1, 2, ..., N, with the inequality strictly holding for at least one i.

The solutions not dominated by any other are said to be non-dominated solutions. Within the genetic approach (Figure 1), in order to treat several objective functions simultaneously, it is necessary to generalize the single-fitness procedure employed in the single-objective GA by assigning N fitnesses to each X.
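This weak-dominance test is straightforward to express in code. The sketch below (maximization assumed, as in the inequality above; purely illustrative) returns True when the first fitness vector dominates the second:

    from typing import Sequence

    def dominates(fx: Sequence[float], fy: Sequence[float]) -> bool:
        """True if a solution with fitnesses fx weakly dominates one with fitnesses fy:
        f_i(X) >= f_i(Y) for every objective i, strictly greater for at least one."""
        return (all(a >= b for a, b in zip(fx, fy))
                and any(a > b for a, b in zip(fx, fy)))

    print(dominates([0.9, 0.8], [0.9, 0.7]))   # True: equal on f1, better on f2
    print(dominates([0.9, 0.6], [0.8, 0.7]))   # False: better on f1, worse on f2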

Concerning the insertion of an individual (i.e. an X value) into the population, constraints often exist which impose restrictions that the candidate individual has to satisfy; their introduction speeds up the convergence of the algorithm, due to a reduction in the search space. Such constraints may be handled, just as in the case of single-objective GAs, by testing whether, in the course of the population creation and replacement procedures, the candidate solution fulfills the criteria pertaining to all the N fitnesses.

Figure 1. Flow chart of genetic algorithm [5]. (The flow chart comprises: setting the objective functions and constraints; generating a random population; evaluating the objective functions and sorting the population; selecting parents; applying crossover and mutation; verifying the constraints, with violating candidates discarded; forming the new population until it reaches size N; re-evaluating and sorting; and checking for convergence, at which point the algorithm ends.)

Once a population of individuals (chromosomes) {X} has been created, we rank them according to the Pareto dominance criterion by looking at the N-dimensional space of the fitnesses f_i(X), i = 1, 2, ..., N. All non-dominated individuals in the current population are identified. These solutions are considered the best ones and are assigned rank 1. They are then virtually removed from the population, and the next set of non-dominated individuals is identified and assigned rank 2. This process continues until every solution in the population has been ranked. The selection and replacement procedures of the multi-objective GA are based on this ranking: every chromosome belonging to the same rank class is considered equivalent to any other in the class, i.e. it has the same probability as the others of being selected as a parent and of surviving the replacement.
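The ranking procedure just described can be sketched as a naive front-by-front routine; the dominance test from the earlier sketch is repeated here so the snippet stands alone (illustrative only, not the authors' implementation):

    from typing import Dict, List, Sequence

    def dominates(fx: Sequence[float], fy: Sequence[float]) -> bool:
        return (all(a >= b for a, b in zip(fx, fy))
                and any(a > b for a, b in zip(fx, fy)))

    def pareto_rank(fitnesses: List[Sequence[float]]) -> Dict[int, int]:
        """Rank 1 = non-dominated front; that front is then (virtually) removed,
        rank 2 is assigned to the next front, and so on until all are ranked."""
        remaining = set(range(len(fitnesses)))
        ranks: Dict[int, int] = {}
        rank = 1
        while remaining:
            front = {i for i in remaining
                     if not any(dominates(fitnesses[j], fitnesses[i])
                                for j in remaining if j != i)}
            for i in front:
                ranks[i] = rank
            remaining -= front
            rank += 1
        return ranks

    print(pareto_rank([[0.9, 0.2], [0.5, 0.5], [0.4, 0.4], [0.2, 0.9]]))
    # e.g. {0: 1, 1: 1, 3: 1, 2: 2}: three rank-1 solutions, one dominated (rank 2)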

During the optimization search, an archive of vectors, each constituted by a non-dominated chromosome and its corresponding N fitnesses, is recorded and updated; this archive represents the dynamic Pareto-optimality surface. At the end of each generation, the non-dominated individuals in the current population are compared, in terms of their fitnesses, with those already stored in the archive, and the following archival rules are implemented:

1. If the new individual dominates existing members of the archive, these are removed and the new one is added.

2. If the new individual is dominated by any member of the archive, it is not stored.
3. If the new individual neither dominates nor is dominated by any member of the archive, then:
   • If the archive is not full, the new individual is stored.
   • If the archive is full, the new individual replaces the most similar one in the archive (an appropriate concept of distance being introduced to measure the similarity between two individuals: in this paper we shall adopt a Euclidean distance based on the values of the fitnesses of the chromosomes, normalized to the respective mean values in the archive).
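The three archival rules, together with the Euclidean similarity measure on mean-normalized fitnesses, could be coded along the following lines. This is a sketch under stated assumptions (a fixed archive capacity, strictly positive fitness values, names invented for illustration), not the authors' implementation:

    import math
    from typing import List, Sequence, Tuple

    Entry = Tuple[List[float], List[float]]          # (chromosome, fitnesses)

    def dominates(fx: Sequence[float], fy: Sequence[float]) -> bool:
        return (all(a >= b for a, b in zip(fx, fy))
                and any(a > b for a, b in zip(fx, fy)))

    def update_archive(archive: List[Entry], new: Entry, capacity: int) -> None:
        """Apply the three archival rules to a candidate individual."""
        _, f_new = new
        # Rule 2: if any archive member dominates the candidate, do not store it.
        if any(dominates(f_arch, f_new) for _, f_arch in archive):
            return
        # Rule 1: drop archive members dominated by the candidate.
        archive[:] = [(x, f) for x, f in archive if not dominates(f_new, f)]
        # Rule 3a: if the archive is not full, store the candidate.
        if len(archive) < capacity:
            archive.append(new)
            return
        # Rule 3b: archive full -> replace the most similar member; similarity is the
        # Euclidean distance between fitness vectors normalized to the archive means
        # (assumes strictly positive fitnesses, e.g. unavailability and cost figures).
        means = [sum(f[k] for _, f in archive) / len(archive) for k in range(len(f_new))]
        def distance(f: Sequence[float]) -> float:
            return math.sqrt(sum((f[k] - f_new[k]) ** 2 / means[k] ** 2
                                 for k in range(len(f_new))))
        closest = min(range(len(archive)), key=lambda i: distance(archive[i][1]))
        archive[closest] = new

    # Hypothetical usage: a chromosome of 16 test intervals with two fitness values
    archive: List[Entry] = []
    update_archive(archive, ([1500.0] * 16, [0.995, 120.0]), capacity=50)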

The archive of non-dominated individuals is also exploited by introducing an elitist parents’ selection procedure, which should in principle be more efficient. Every individual in the archive (or, if the archive’s size is too large, a pre-established fraction of the population size Nga, typically Nga/4) is chosen once as a parent in each generation. This should guarantee a better propagation of the genetic code of non-dominated solutions, and thus a more efficient evolution of the population towards Pareto optimality.

At the end of the search procedure, the result of the optimization is constituted by the archive itself, which gives the Pareto-optimality region [5].

4. CASE STUDY: ECCS VALVES' TEST INTERVAL OPTIMIZATION

4.1 ECCS System Description

The Emergency Core Cooling System (ECCS) is one of the safety systems provided to mitigate the consequences of a Loss of Coolant Accident (LOCA) in the event of a break within the PHT Main Circuit Pressure Boundary. The ECCS is designed to provide enough coolant to the primary system to transport heat from the core to the ultimate heat sink, so as to ensure adequate core cooling during all phases of the accident, thereby avoiding any significant fuel failure and release of activity beyond prescribed limits to the environment. The schematic of the ECCS is shown in Figure 2.

The PHT system main circuit is divided into two identical loops for the purpose of heat transport from the core to the Steam Generators (SGs). These two loops are isolated at a system pressure of ≤ 55 kg/cm²(g) (as measured at the Reactor Inlet Header, RIH) by closing the feed and bleed valves and the pressuriser isolation valves. The ECCS consists of a high-pressure light water injection system and a long-term recirculation system.


i) High Pressure Light Water Injection System

A gas accumulator (TK-3) and two light water accumulators (TK-1 & 2), both of carbon steel, are provided to supply emergency coolant to the core. The gas tank (TK-3), of 65 m³ capacity and pressurized to 45 kgf/cm²(g), is connected to the light water accumulators TK-1 & 2 through two normally closed, pneumatically operated isolation valves (MV-74 & 75). The initiation of light water injection takes place if the reactor inlet header pressure is less than or equal to 40 kg/cm²(g) and one of the following ECCS conditioning signals is present: (i) F/M vault temperature high, i.e. ≥ 65°C; (ii) pump room pressure high, i.e. > 18 g/cm²(g); and (iii) moderator level in calandria high, i.e. ≥ 105% FT.

ii) Low Pressure Long Term Recirculation

ECC recirculation ensures prolonged cooling to remove decay heat from the core. The system consists of 4 x 50% capacity pumps and 3 x 50% plate type heat exchangers. The ECCS pumps take suction from the suppression pool water in the reactor building basement. Each pump has a check valve on the discharge side to prevent reverse flow through an idle pump. The ECC heat exchangers are used to cool the suppression pool water (H2O/D2O mixture) during the low-pressure recirculation stage. The secondary sides of the heat exchangers are provided with Active Process Water.

When the PHT pressure drops to < 40 kg/cm²(g) in any one RIH, along with any one conditioning signal, the high pressure injection valves line up for the 'All header injection' mode. The light water accumulators get pressurized, the rupture disc ruptures and coolant injection starts from the accumulators. Crash cooling starts automatically and 2 out of 4 ECCS pumps start automatically. Decay heat removal for the healthy loop is through the SGs.

4.2 Mathematical Model of the Problem

The goal is to optimize the effectiveness of the Surveillance Test Intervals (STIs) of the ECCS header injection valves, namely MV1 to MV16, with respect to two opposing criteria: (i) mean availability; and (ii) cost. The STIs then represent the decision variables of the optimization problem, and different choices of their values will lead to different performances with respect to the above-mentioned objectives.
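In a GA implementation, a chromosome would simply be the vector of the sixteen valve test intervals, kept within the bounds later imposed as constraints (720 to 2160 hours). The following is a minimal, purely illustrative encoding sketch; names and the repair strategy are assumptions, not taken from the paper:

    import random

    N_VALVES = 16                          # header injection valves MV1 ... MV16
    STI_MIN, STI_MAX = 720.0, 2160.0       # hours; bounds used later as constraints

    def random_chromosome() -> list:
        """One candidate solution: a surveillance test interval for each valve."""
        return [random.uniform(STI_MIN, STI_MAX) for _ in range(N_VALVES)]

    def clip_to_bounds(chromosome: list) -> list:
        """Simple repair operator keeping every decision variable inside its bounds."""
        return [min(max(tau, STI_MIN), STI_MAX) for tau in chromosome]

    print(clip_to_bounds(random_chromosome()))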

Mean unavailability

To compute the system unavailability, the fault tree with the top event "Failure of ECCS injection to all headers of Loop 1" is used. This top event is chosen because of the loop isolation that takes place prior to the actuation of the ECCS. The Boolean reduction of the corresponding structure function gives the N system minimal cut sets (MCS).

As for the mean unavailability u_i of a generic individual component i, several models have been proposed in the literature to account for the different contributions coming from failure on demand, human errors, maintenance, etc. In this study the following model will be used:

u_i = ρ_i + (λ_i τ_i)/2 + t_i/τ_i + (ρ_i + λ_i τ_i) d_i/τ_i + γ_0 ,

where ρ_i is the probability of failure on demand, λ_i the failure rate of the i-th component, τ_i the test interval of the i-th component, t_i the mean downtime due to testing, d_i the mean downtime due to corrective maintenance and γ_0 the probability of human error. The above equation is valid for ρ < 0.1 and λτ < 0.1, which are reasonable assumptions when considering safety systems.
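Under the stated validity conditions, the model can be evaluated directly. The sketch below uses illustrative parameter values only, not plant data:

    def mean_unavailability(rho: float, lam: float, tau: float,
                            t_test: float, d_repair: float, gamma0: float) -> float:
        """u = rho + lam*tau/2 + t/tau + (rho + lam*tau)*d/tau + gamma0,
        valid for rho < 0.1 and lam*tau < 0.1."""
        return (rho + lam * tau / 2.0 + t_test / tau
                + (rho + lam * tau) * d_repair / tau + gamma0)

    # Illustrative values only (not plant data): monthly test interval of 720 h
    print(mean_unavailability(rho=1e-3, lam=1e-5, tau=720.0,
                              t_test=1.0, d_repair=8.0, gamma0=5e-4))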


Then, the mean system unavailability U can be expressed approximately as the sum of the probabilities of all the minimal cut sets. The minimal cut sets in turn depend on the redundancies in the system and on the modeling of common cause failures [6] and human error probabilities [7]:

U ≈ Σ_{j=1}^{N} MCS_j ,

where N is the number of MCS.
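Given the component unavailabilities, this first-order approximation amounts to summing, over the cut sets, the product of the basic-event unavailabilities in each cut set (as implied by the series-parallel description above). The cut sets and values below are hypothetical, for illustration only:

    from math import prod
    from typing import Dict, List

    def system_unavailability(mcs_list: List[List[str]], u: Dict[str, float]) -> float:
        """First-order approximation: U ~ sum over cut sets of the product of the
        unavailabilities of the basic events appearing in each minimal cut set."""
        return sum(prod(u[event] for event in cut_set) for cut_set in mcs_list)

    # Hypothetical two-event cut sets involving ECCS injection valve failures
    u = {"MV1_FAILS": 2.0e-3, "MV2_FAILS": 2.0e-3, "MV3_FAILS": 2.5e-3}
    mcs = [["MV1_FAILS", "MV2_FAILS"], ["MV1_FAILS", "MV3_FAILS"]]
    print(system_unavailability(mcs, u))     # 4.0e-6 + 5.0e-6 = 9.0e-6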

Figure 2. Schematic Diagram of Emergency core cooling system for 540MWe TAPS 3,4

The analytical expression for the unavailability of a one-out-of-two system is [8]:

q_av = q_R + q_C + q_D + q_M ,

where:

q_R - random independent failure contribution,
q_C - dependent failure contribution (failures due to common causes),
q_D - demand failure contribution (this term represents the contribution due to failure on demand),
q_M - maintenance contribution.



The detailed expressions for q_R, q_C, q_D and q_M are functions of the random and common cause failure rates λ_R and λ_C, the demand-related probability Q_0, the human error probability γ_0, the test interval τ and the repair time τ_r; they are given in [8].

Similar expressions for the system unavailabilities of two-out-of-three and one-out-of-three systems, given in ref. [8], indicate the nonlinear nature of this objective function.

Cost function

We assume that the cost objective C is made up of two major contributions:

1. C_S&M - the costs associated with surveillance and maintenance (S&M),
2. C_accident - the costs associated with the consequences of accidents possibly occurring at the plant.

Therefore,

C = C_S&M + C_accident .

While the S&M cost decreases with an increase in the test interval, the accident cost increases with the interval. The costs considered are referred to baseline costs: in this analysis the baseline surveillance cost is 1.0 when the interval is one month, while the accident cost is 1.0 for the baseline frequency of the sum of the accident sequences considered.

For a given component i, the S&M costs are computed on the basis of given hourly inspection (C_ht,i) and corrective maintenance (C_hc,i) costs.

For a given mission time T_M, the number of inspections performed on component i is (T_M/τ_i); of these, on average, a fraction equal to (ρ_i + λ_i τ_i) also demands a corrective maintenance action. Thus the surveillance and maintenance costs amount to:

C_S&M = Σ_{i=1}^{N} [ (T_M/τ_i) t_i C_ht,i + (T_M/τ_i) (ρ_i + λ_i τ_i) d_i C_hc,i ] .
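With the expression above, the S&M cost is a straightforward sum over components; the following sketch uses hypothetical cost figures and parameter values (not plant data):

    from typing import List

    def sm_cost(mission_time: float, tau: List[float], rho: List[float],
                lam: List[float], t_test: List[float], d_repair: List[float],
                c_ht: List[float], c_hc: List[float]) -> float:
        """C_S&M = sum_i (T_M/tau_i)*t_i*C_ht,i + (T_M/tau_i)*(rho_i + lam_i*tau_i)*d_i*C_hc,i"""
        total = 0.0
        for i in range(len(tau)):
            n_tests = mission_time / tau[i]            # inspections of component i over T_M
            cm_fraction = rho[i] + lam[i] * tau[i]     # fraction also needing corrective maintenance
            total += n_tests * (t_test[i] * c_ht[i] + cm_fraction * d_repair[i] * c_hc[i])
        return total

    # One illustrative component, monthly testing over a one-year mission time
    print(sm_cost(8760.0, tau=[720.0], rho=[1e-3], lam=[1e-5],
                  t_test=[1.0], d_repair=[8.0], c_ht=[100.0], c_hc=[500.0]))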

C_accident represents the costs associated with the damage from accidents that are not mitigated because the ECCS fails to intervene. It is evaluated by accounting for the probability of the corresponding accident sequences. The small LOCA event tree is shown in Figure 3.

Actually, the ECCS plays an important role in many other accident sequences generated from other initiators such as medium LOCA, station blackout and turbine trip. In this study, for simplicity, only the contribution due to small LOCAs in the accident categories A3 and B2 will be considered in the initial phase. These categories represent:

A3 - late loss of core structural integrity, expected a few hours after the start of the accident, resulting in a large fraction of fuel failures;

B2 - fission product release from overheated fuel and significant metal-water reaction.


Figure 3. The small LOCA event tree


At a later stage, the entire set of accident sequences for these two categories would be considered across all the event trees of the Level-1 PSA study:

C_accident = C_A3 + C_B2 .

Constraints on the optimization problem

1. System unavailability < 1×10^-3,
2. Incremental change in Core Damage Frequency < 0.1 × baseline CDF,
3. 720 hrs < STI < 2160 hrs; currently the STI for the ECCS valves is one month, and it is targeted to find the optimum between one month and three months.

5. CONCLUSIONS

The use of Genetic Algorithms in the multi-objective optimization problem would give us the Pareto-optimal solutions to the conflicting objectives. The selection of the optimal solution would depend on the trade-off between the objectives. The constraints would help us decide on the optimal solution. The use of multi-objective optimization with genetic algorithms would demonstrate its efficacy in RIDM.

Acknowledgements

I wish to acknowledge the support of Nuclear Power Corporation of India Limited in the pursuit of research on this subject and for the extensive use of the Probabilistic Safety Assessment study in this work.

REFERENCES

[1] Bajaj, S.S. and Gore, A.R. (2006), The Indian PHWR, Nuclear Engineering and Design, 236, 701-722.

[2] Nalini Mohan et al. (2007), Salient Features of Level-1 Probabilistic Safety Assessment (PSA) for Tarapur Atomic Power Plant, in: Quality, Reliability and Infocom Technology (edited by Kapur, P.K. and Verma, A.K.), Macmillan India, New Delhi, pp. 496-502.

[3] Alok Mishra et al. (2006), Safety Management in NPPs: A Multiobjective Optimization, in: Reliability, Safety and Hazard - Advances in Risk Informed Technology (edited by Varde, P.V., Srividya, A., Sanyasi Rao, V.V.S. and Ashok Chauhan), Narosa Publishing House, New Delhi, pp. 415-422.

[4] Kalyanmoy Deb (2005), Multi-Objective Optimization using Evolutionary Algorithms, John Wiley and Sons, Ltd.

[5] Aureli Munoz, Sebastian Martorell and Vicente Serradel (1997), Genetic algorithms in optimizing surveillance and maintenance of components, Reliability Engineering and System Safety, 57, 107-120.

[6] Srinivas, G. et al. (2006), Significance of Common Cause Failures in Level-1 PSA and Techniques for Reducing its Impact on Core Damage Frequency, in: Reliability, Safety and Hazard - Advances in Risk Informed Technology (edited by Varde, P.V., Srividya, A., Sanyasi Rao, V.V.S. and Ashok Chauhan), Narosa Publishing House, New Delhi, pp. 355-360.


[7] Srinivas, G. et al. (2007), Human Reliability Assessment in PSA of Indian PHWR - A Comparative Study of ASEP and HCR Methodologies, in: Quality, Reliability and Infocom Technology (edited by Kapur, P.K. and Verma, A.K.), Macmillan India, New Delhi, pp. 269-275.

[8] Apostolakis, G. and Chu, T.L. (1980), The unavailability of systems under periodic maintenance, Nuclear Technology, Vol. 50, August 1980.