Georgia Tech Research Institute
Information & Communications Laboratory

Technical Report ICL-DO-01-15
3 June 2015

A Comparison of Traditional Simulation and the MSAL Approach

Margaret L. Loper
Georgia Tech Research Institute
Information & Communications Laboratory
75 5th Street, N.W.
Atlanta, GA 30332-0832

R. K. Garrett, Jr.
Prime Solutions Group, INC.
1300 S. Litchfield Rd, STE A1020
Goodyear, AZ 85338

Table of Contents

1.0 SYSTEM MODELING AND SIMULATION BASICS
    CONCEPTUAL MODEL
    SIMULATION PROGRAM
    SENSITIVITY ANALYSIS
    MONTE CARLO METHODS
    SIMULATION OPTIMIZATION
2.0 RISK AND UNCERTAINTY PRIMER
3.0 M&S IN SYSTEM OF SYSTEMS ENGINEERING
    SYSTEM OF SYSTEMS AND COMPLEX SYSTEMS
    UNCERTAINTY AND COMPLEXITY
    CURRENT STATE OF M&S IN SYSTEMS ENGINEERING
4.0 GRAPHS, ANALYTICS, AND SIMULATION
5.0 THE MODEL-SIMULATION-ANALYSIS-LOOPING APPROACH
    MSAL ARCHITECTURE
    MSAL LOOPS
    BENEFITS OF MSAL VS TRADITIONAL M&S APPROACH
REFERENCES


The world is becoming more complex and therefore, more unpredictable. Current risk management systems and other attempts to predict the future are based too much upon linear relationships derived from past experience. They fail to take into account our behavioral limitations in handling probabilities, and also the nature of complex non-linear systems, which do not always have a definite or repeatable cause and effect relationship. (Leong 2010)

It used to be thought that if we could just collect enough data, if we had enough computing power, we could model any system, such as the weather or the economy. Complexity theory has demonstrated that non-linear effects, when amplified over a sufficiently long period of time, can upset all our predictions. The weather and the economy are examples of complex systems. Such systems exhibit regularity in behavior without being completely predictable. They are therefore capable of creating entirely unexpected new forms of behavior. In other words, the relationship between cause and effect in complex systems is not as consistent as the regular, predictable simple systems we are familiar with. (Leong 2010)

Advances in technology are driving complexity into the manufacturing, deployment and operations of systems, and Systems of Systems (SoS), at all levels. This complexity is rooted in the advanced software and computing developments required to achieve the desired capabilities in future systems. Decision makers, practitioners and researchers are in need of innovation in modeling, simulation, data and knowledge engineering to enable current methods to meet future challenges. Garrett, et al. (2010) presented the idea that the essence of SoS engineering (SoSE) needed to address this complexity is based not on the characteristics of the entity systems, but instead on the relationships and interactions between the entities that comprise the SoS. To enable SoSE, the use of graph mathematics and agent-based simulations was proposed to explicitly model the SoS, providing a bounding context for actionable architectures, models and simulations.

1.0 System Modeling and Simulation Basics

Simulation is a multidisciplinary approach to solving problems that includes mathematics, engineering, physical science, social science, computing, medical research, business, economics, and so on. Simulation is not new; it dates back to the beginnings of civilization where it was most commonly used in warfare. With the development of computers, simulation moved from role-playing, where people or toy soldiers represented the systems of interest, to computer-based simulation, where software is developed to encode algorithms that represent the systems of interest. While once referred to simply as simulation, today the discipline is more often called modeling and simulation (M&S or MODSIM), emphasizing the importance of first modeling the system of interest before developing a computational representation. There are a number of definitions of models, simulations, and M&S. The definitions published by the US Department of Defense (DoD) in their online glossary (MSCO 2011) are as follows:

Model is a physical, mathematical, or otherwise logical representation of a system, entity, phenomenon, or process. For the purposes of this discussion, models are considered static.

Simulation is a method for implementing a model and behaviors in executable software. For the purposes of this discussion, simulations are considered dynamic.

M&S is the discipline that comprises the development and/or use of models and simulations.

Although the terms “modeling” and “simulation” are often used as synonyms, within the discipline of M&S both are treated as individual and equally important concepts.

A good way to understand Modeling & Simulation (M&S) is to look at the process through which models and simulations are developed. There are different M&S life cycles described in the literature (Sargent 1982) (Kreutzer 1986) (Balci and Nance 1987) (Balci 2012). Despite emphasizing different aspects, most represent similar concepts or steps. The process shown in Figure 1 captures the basic ideas represented in most processes (Loper 2015). The boxes in blue represent model development activities and the orange boxes represent simulation development activities. These boxes do not represent an absolute separation between modeling and simulation - the "develop simulation model & program" step bridges the modeling and simulation activities, and "verify & validate model and simulation" represents activities that are done throughout the entire lifecycle. Some of these steps are described in the following sections.

Figure 1: M&S Lifecycle Process


Conceptual Model

A model is a simplification and approximation of reality, and the art of modeling involves choosing which essential factors must be included, and which factors may be ignored or safely excluded from the model. This is accomplished through the process of simplification and abstraction. Simplification is an analytical technique in which unimportant details are removed in an effort to define simpler relationships. Abstraction is an analytical technique that establishes the essential features of a real system and represents them in a different form. The resultant model should demonstrate the qualities and behaviors of a real world system that impact the questions that the modeler is trying to answer. The process of simplification and abstraction is part of developing the conceptual model. A simulation conceptual model is a living document that grows from an informal description to a formal description and serves to communicate between the diverse groups participating in the simulation's development. It describes what is to be represented, the assumptions limiting those representations, and other capabilities (e.g., data) needed to satisfy the user's requirements. An informal conceptual model may be written using natural language and contain assumptions about what you are or are not representing in your model. A formal conceptual model is an unambiguous description of model structure. It should consist of mathematical and logical relationships describing the components and the structure of the system. It is used as an aid to detect omissions and inconsistencies and resolve ambiguities inherent in informal models, and used by software developers to develop code for the computational model.

Simulation Program

The next step is to create the simulation by coding the conceptual model into a computer-recognizable form that can calculate the impact of uncertain inputs, and of the decisions we make, on outcomes that we care about. Translating the model into computer code and then into an executable involves selecting the most appropriate simulation methodology and an appropriate computer implementation. According to (Kinder 2015), simulation methodologies appropriate for System of Systems include: discrete event simulation, discrete event system specification (DEVS), petri nets, agent-based modeling and simulation, system dynamics, surrogate models, artificial neural networks, Bayesian belief networks, Markov models, game theory, network models (graph theory), and enterprise architecture frameworks.

A simulation model includes:

- Model inputs that are uncertain numbers - uncertain variables
- Simulation model - mathematical calculations as required
- Model outputs that depend on the inputs - uncertain functions

System behavior at specific values of input variables is evaluated by running the simulation model for a fixed period of time. The uncertain (input) variables are represented by a random number generator that returns sample values from a representative distribution of possible values for each uncertain element. When we perform a simulation with this model, we test many different numeric values for the uncertain variables, and we obtain many different numeric values for the uncertain (output) functions.

The basic step in a simulation run, called a trial, is very simple:

- Choose sample values for the uncertain variables
- Evaluate the model; run the program
- Observe and record the values of the uncertain functions

For models with few uncertain variables, where the possible values of these variables cover a limited range, it may be possible to run a series of trials, where we systematically "step through" the range of each variable, in all combinations. When the number of input variables is large and the simulation model is complex, the simulation trial may become computationally prohibitive. For most models of any size, we would need millions or even billions of trials, and running them all might not actually tell us very much.

Hence, simulation normally relies on random sampling of values for the uncertain variables. When choosing the values for the uncertain variables, we draw one or more random numbers and we use these numbers to randomly select sample values from the range of possible values (the distribution) of each uncertain variable. If we do this effectively, we can obtain “coverage” of the possible values and model outcomes, even if the model has many uncertain variables.

A simulation run includes many hundreds or thousands of trials. Each trial is an experiment, or instance, where we supply numerical values for input variables, evaluate the model to compute numerical values for outcomes of interest, and collect these values for later analysis. To obtain more accurate results, you run more trials, so there is a tradeoff between accuracy of the results, and the time taken to run the simulation.  
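As an illustration only (the report does not specify an implementation), the trial loop described above can be sketched in Python; the two uncertain inputs and the model function below are hypothetical stand-ins for a real simulation program:

    import random
    import statistics

    def model(demand, service_time):
        # Hypothetical model: the mathematical calculations that map the
        # uncertain inputs to an output of interest (a notional delay).
        return max(0.0, demand * service_time - 8.0)

    def run_simulation(num_trials=10000, seed=42):
        rng = random.Random(seed)
        outputs = []
        for _ in range(num_trials):                # each iteration is one trial
            demand = rng.normalvariate(10.0, 2.0)  # sample the uncertain variables
            service_time = rng.uniform(0.5, 1.5)
            outputs.append(model(demand, service_time))  # evaluate and record
        return outputs

    results = run_simulation()
    print("mean:", statistics.mean(results))
    print("std dev:", statistics.pstdev(results))

Running more trials narrows the spread of these summary statistics, which is the accuracy versus run-time tradeoff noted above.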

Sensitivity Analysis

An approach for assessing model results is sensitivity analysis. Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system can be apportioned to different sources of uncertainty in its inputs, and is usually run one input variable at a time. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty, where multiple variables are varied together to interrogate interaction effects. Ideally, uncertainty and sensitivity analysis should be run in tandem. Sensitivity analysis can be useful for a range of purposes:

- Testing the robustness of the results of a model or system in the presence of uncertainty.
- Increased understanding of the relationships between input and output variables in a system or model.
- Uncertainty reduction: identifying model inputs that cause significant uncertainty in the output and should therefore be the focus of attention if robustness is to be increased.
- Searching for errors in the model.
- Model simplification: fixing model inputs that have no effect on the output, or identifying and removing redundant parts of the model structure.
- Finding regions in the space of input factors for which the model output is either maximum or minimum or meets some optimum criterion (see optimization and Monte Carlo methods).
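As a minimal sketch of the one-at-a-time style of sensitivity analysis described above (the model and its input ranges are invented for illustration), each input is perturbed over its range while the others are held at nominal values, and the resulting swing in the output is recorded:

    def model(x1, x2, x3):
        # Hypothetical response; a real study would run the simulation program here.
        return 3.0 * x1 + 0.5 * x2 ** 2 + 0.1 * x3

    nominal = {"x1": 1.0, "x2": 2.0, "x3": 5.0}
    ranges = {"x1": (0.5, 1.5), "x2": (1.0, 3.0), "x3": (0.0, 10.0)}

    for name, (low, high) in ranges.items():
        outputs = []
        for value in (low, high):
            inputs = dict(nominal)
            inputs[name] = value                  # vary one input at a time
            outputs.append(model(**inputs))
        print(name, "output swing:", round(max(outputs) - min(outputs), 2))

Inputs with a large swing are candidates for uncertainty reduction; inputs with little or no swing are candidates for model simplification.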

Monte Carlo Methods

The Monte Carlo method was invented by scientists working on the atomic bomb in the 1940s, who named it for the city in Monaco famed for its casinos and games of chance. Monte Carlo methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The numbers from random sampling are "plugged into" a mathematical model and used to calculate outcomes of interest. This process is repeated many hundreds or thousands of times. Monte Carlo simulation is especially helpful when there are several different sources of uncertainty that interact to produce an outcome. Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generation of draws from a probability distribution.

To use Monte Carlo simulation, you must be able to build a mathematical model of your system. To deal with uncertainties in your model, you replace certain fixed numbers with functions that draw random samples from probability distributions. To analyze the results of a simulation run, you use statistics such as the mean, standard deviation, and percentiles. Monte Carlo methods vary, but tend to follow a particular pattern:

- Define a domain of possible inputs
- Generate inputs randomly from a probability distribution over the domain
- Perform a deterministic computation on the inputs
- Aggregate the results

Monte Carlo simulations sample the probability distribution of each variable one at a time to produce hundreds or thousands of possible outcomes. The results are analyzed to get probabilities of different outcomes occurring. A disadvantage of Monte Carlo simulation is that it usually requires several (perhaps many) runs at given input values to obtain an accurate estimator. Therefore, computing time can be very high, and much of a multi-dimensional performance surface goes un-interrogated. In addition, if there are multi-variable interactions that drive system performance, Monte Carlo techniques may not capture that behavior.
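The four-step pattern above can be illustrated with the textbook Monte Carlo estimate of pi, which is not taken from the report but follows the same structure (domain, random inputs, deterministic computation, aggregation):

    import random

    def estimate_pi(num_samples=100000, seed=1):
        rng = random.Random(seed)
        inside = 0
        for _ in range(num_samples):
            x, y = rng.random(), rng.random()   # generate inputs from the unit-square domain
            if x * x + y * y <= 1.0:            # deterministic computation on each input
                inside += 1
        return 4.0 * inside / num_samples       # aggregate the results into an estimate

    print(estimate_pi())   # approaches 3.14159... as the sample count grows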

Simulation Optimization

Simulation optimization is the process of finding the best input variable values from among all possibilities without explicitly evaluating each possibility. The objective of simulation optimization is to minimize the resources spent while maximizing the information obtained in a simulation experiment (Carson and Maria 1997). Optimization helps us make better choices when we have all the data, and simulation helps us understand the possible outcomes when we don't. Simulation optimization combines these analytic methods to make better choices for decisions we do control, taking into account the range of potential outcomes for factors we don't control.

An optimization model includes:

- Decision variables for resources you control
- Uncertain variables for factors you don't control
- An objective to maximize or minimize, which may depend on decision and uncertain variables
- Constraints to satisfy, which may also depend on decision and uncertain variables
- Techniques to sample multi-variable interactions
- Techniques to optimize against single and multiple goals

An example of a simulation optimization problem is an intersection with four-way stop signs (Carson and Maria 1997). Suppose the intersection is a bottleneck for traffic during rush hours and a decision has been made to replace it with traffic lights in all four directions. The problem is to determine the optimal green times in all directions in order to ensure the least wait-time for cars arriving from all directions. The green-times must allow for the flow of traffic in all the pre-specified directions. Given the rates at which cars arrive from every direction, and picking reasonable green-time values, a simulation model of the intersection can be developed and run to determine the wait-time statistics. Running of this simulation model can only provide the answer to a “what if” question (e.g. What if the green-time in all four directions is 2 minutes ... what is the resulting wait-time for cars arriving at the intersection?). Optimization is finding the optimal values for the various green-times.

A simulation optimization solver can search through thousands of ways to allocate green-times and, for each green-time, through thousands of possible future outcomes, seeking the best set of choices. The solution gives us a much better picture of the decisions we should make, given what we know for sure and the range of outcomes for factors that we don’t know for sure.
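A heavily simplified sketch of such a search is shown below; the wait-time model and the random-search strategy are invented for illustration and are not the intersection simulation described by Carson and Maria:

    import random

    rng = random.Random(0)

    def simulate_wait(green_ns, green_ew, trials=200):
        # Stand-in for the intersection simulation: average wait for one
        # candidate pair of green-times under uncertain traffic demand.
        total = 0.0
        for _ in range(trials):
            demand_ns = rng.gauss(20, 4)   # uncertain cars per cycle, north-south
            demand_ew = rng.gauss(15, 3)   # uncertain cars per cycle, east-west
            # Each direction waits through the other's green, plus an overflow
            # penalty when its own green is too short to clear the queue.
            wait_ns = 0.5 * green_ew + max(0.0, demand_ns - green_ns / 2.0) * 5.0
            wait_ew = 0.5 * green_ns + max(0.0, demand_ew - green_ew / 2.0) * 5.0
            total += wait_ns + wait_ew
        return total / trials

    best = None
    for _ in range(500):                               # optimization strategy: random search
        candidate = (rng.uniform(30, 120), rng.uniform(30, 120))
        score = simulate_wait(*candidate)              # simulation evaluates the candidate
        if best is None or score < best[0]:
            best = (score, candidate)                  # feedback guides the search
    print("best green-times (s):", best[1], "average wait:", round(best[0], 1))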

A simulation optimization model is shown in Figure 2. The output of a simulation model is used by an optimization strategy to provide feedback on progress of the search for the optimal solution. This in turn guides further input to the simulation model.

Figure 2: A Simulation Optimization Model


2.0 Risk and Uncertainty Primer

A quantitative risk model calculates the impact of the uncertain parameters and the decisions we make on outcomes that we care about. Such a model can help decision makers understand the impact of uncertainty and the consequences of different decisions. The process of risk analysis includes identifying and quantifying uncertainties, estimating their impact on outcomes that we care about, building a risk analysis model that expresses these elements in quantitative form, exploring the model through simulation, and making risk management decisions that can help us avoid, mitigate, or otherwise deal with risk.

In risk analysis, our goal is to identify each important source of uncertainty and quantify its magnitude as well as we can. Uncertainty quantification (UQ) is the identification, characterization, propagation, analysis and reduction of all uncertainties in M&S (CASL 2015). It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. A physics-based example would be to predict the acceleration of a human body in a head-on crash with another car: even if we exactly knew the speed, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense. In a system simulation example, e.g., a military commander pondering a decision to engage a threat based on data from a sensor network, the outcome would depend upon the latency of information, the effects of malfunctioning sensors, the position of assets capable of performing the engagement, etc. The commander's decision will be highly stochastic and based not on physics but on the ability to gather accurate data, fuse the data, communicate the data in a timely manner, and then on the subsequent cognitive processes (human and/or machine-based) used to make the decision.

Uncertainty can enter mathematical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty includes (Wikipedia Uncertainty):

Parameter uncertainty, which comes from the model parameters that are inputs to the mathematical model, but whose exact values are unknown and cannot be controlled in physical experiments, or whose values cannot be exactly inferred by statistical methods.

Parametric variability, which comes from the variability of input variables of the model.

Structural uncertainty, aka model inadequacy, model bias, or model discrepancy, which comes from the lack of knowledge of the underlying true physics. It depends on how accurately a mathematical model describes the true system for a real-life situation, considering the fact that models are almost always only approximations to reality.

Algorithmic uncertainty, aka numerical uncertainty, which comes from numerical errors and numerical approximations in the implementation of the computer model.

Experimental uncertainty, aka observation error, which comes from the variability of experimental measurements. The experimental uncertainty is inevitable and can be noticed by repeating a measurement many times using exactly the same settings for all inputs/variables.

Interpolation uncertainty, which comes from a lack of available data collected from computer model simulations and/or experimental measurements. For other input settings that don't have simulation data or experimental measurements, one must interpolate or extrapolate in order to predict the corresponding responses.

It is commonly assumed that uncertainty can be classified into two categories (Wikipedia):

Aleatoric uncertainty, aka statistical uncertainty, which is representative of unknowns that differ each time we run the same experiment. For example, arrows shot with a mechanical bow that exactly duplicates each launch will not all impact the same point on the target due to random and complicated vibrations of the arrow shaft, knowledge of which cannot be determined sufficiently to eliminate the resulting scatter of impact points.

Epistemic uncertainty, aka systematic uncertainty, which is due to things we could in principle know but don't in practice. This may be because we have not measured a quantity sufficiently accurately, or because our model neglects certain effects, or because particular data are deliberately hidden.

Significant uncertainties complicate engineering, fielding and employment decisions about systems. Complex systems include both aleatoric and epistemic uncertainties (vernacularly distinguished as “known unknowns” and “unknown unknowns”). Aleatoric uncertainties are “irreducible” in the sense that they are always present, while epistemic uncertainties are often “reducible” through investment, time or research. An example of aleatoric uncertainty in the ballistic missile defense system (BMDS) is space weather. An example of epistemic uncertainty in BMDS is a threat trajectory before launch (e.g., launch timing, identification of the threat missile, how many missiles). Figure 3 shows where uncertainties exist in the M&S process.

Figure 3: Uncertainties in M&S (Grange and Deiotte 2014)


There are two major types of problems in uncertainty quantification: one is the forward propagation of uncertainty and the other is the inverse assessment of model uncertainty and parameter uncertainty.

Forward uncertainty propagation: the quantification of uncertainties in system output(s) propagated from uncertain inputs. One of the five categories of probabilistic approaches for uncertainty propagation is the simulation-based method, which includes Monte Carlo simulation.

Inverse uncertainty quantification: estimates the discrepancy between the experimental measurements of a system and the results from the mathematical model (i.e., bias correction), and estimates the values of unknown parameters in the model if there are any (calibration).
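A minimal sketch of the forward-propagation case, assuming invented input distributions and a toy physics model rather than anything from the report, pushes samples of the uncertain inputs through the model and summarizes the resulting output distribution:

    import random
    import statistics

    rng = random.Random(7)

    def impact_velocity(height_m, drag_factor):
        # Toy model; a real UQ study would wrap the actual simulation here.
        g = 9.81
        return (2.0 * g * height_m) ** 0.5 * (1.0 - drag_factor)

    samples = []
    for _ in range(50000):
        height = rng.normalvariate(10.0, 0.5)   # parameter uncertainty
        drag = rng.uniform(0.02, 0.08)          # parametric variability
        samples.append(impact_velocity(height, drag))

    samples.sort()
    print("mean:", round(statistics.mean(samples), 2))
    print("5th / 95th percentile:",
          round(samples[int(0.05 * len(samples))], 2),
          round(samples[int(0.95 * len(samples))], 2))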

3.0 M&S in System of Systems Engineering

A system is a set of interacting or interdependent components forming an integrated whole. Most systems share common characteristics, including:

- Structure: it contains parts that are directly or indirectly related to each other;
- Behavior: it contains processes that transform inputs into outputs;
- Interconnectivity: the parts and processes are connected by structural and/or behavioral relationships.

Different types of systems are commonly modeled in the context of systems engineering, including systems of systems and complex systems.

System of Systems and Complex Systems

System of systems (SoS) is a collection of task-oriented or dedicated systems that pool their resources and capabilities together to create a new, more complex system, which offers more functionality and performance than simply the sum of the constituent systems. Formally, SoS is defined as a set or arrangement of systems that results when independent and useful systems are integrated into a larger system that delivers unique capabilities (DoD 2004) [DoD SE Guide].

SoS are typically defined as having the following properties (Maier 1996):

- Operational independence: the system-of-systems is composed of systems that can be usefully operated independently if the system-of-systems is decomposed
- Managerial independence: the individual systems that comprise the system-of-systems are separately acquired, and therefore are actually run and maintained independently of each other
- Evolutionary development: the development of the system-of-systems is evolutionary, with functions added, removed, and modified with experience, therefore never actually appearing to be fully formed
- Emergent behavior: the entire system-of-systems performs functions and carries out purposes that are emergent of the entire process, and cannot be localized to any component system; the principal purpose of the system-of-systems is fulfilled by its emergent behavior
- Geographic distribution: the independent system components can readily exchange information only, and not substantial items like mass or energy

The methodology for defining, abstracting, modeling, and analyzing SoS problems is typically referred to as system of systems engineering (SoSE). SoSE methods characterize and evolve the SoS, and design for interoperability. There is a critical need for SoSE to have capabilities that allow analysis and prediction of emergent behaviors, quality of service, continuous verification, as well as methods for managing the integration of systems in a dynamic context with limited control (INCOSE 2014).

Complex Systems are systems composed of interconnected parts that as a whole exhibit one or more properties (e.g., behavior) not obvious from the properties of the individual parts (Joslyn 2000). A system’s complexity may be one of two forms: disorganized complexity or organized complexity (Weaver 1948). Disorganized complexity is a matter of a very large number of parts, and organized complexity is a matter of the subject system (possibly with a limited number of parts) exhibiting emergent properties. Phenomena of disorganized complexity are treated using probability theory and statistical mechanics, while organized complexity deals with phenomena that escape such approaches and confront “dealing simultaneously with a sizable number of factors which are interrelated into an organic whole”. Complex systems are composed of a (1) large number of entities, with (2) non-trivial interaction networks (not too simple or too complete), whose (3) impacts on one another are non-linear, and whose overall behavior tends to display emergent characteristics. The relationship of SoS and complex systems can be found in Figure 4.

Figure 4: Intersection of SoS and Complex Systems (Balestrini-Robinson 2009)

The additional dimension of managerial control is critical to identifying the appropriate principles of the system-of-systems. Not all SoS, or the component systems within them, are of similar complexity. The classes of SoS include (ODUSD 2008):

- Virtual systems
  o Completely lack a central management authority, although a common purpose for the system-of-systems is maintained. Any large-scale behavior emerging from the combined systems must rely on "invisible" mechanisms for maintainability.
  o A swarm of systems is a virtual system, as its long-term nature is determined by highly distributed mechanisms that appear to have no central management control.
- Collaborative systems
  o Have a central management organization that does not have the authority to run individual systems, which must voluntarily collaborate to fulfill the central objective.
  o Require agreement to work toward mutually beneficial goals and frequently require re-negotiation of goals based on an evolving environment (natural or man-made).
  o The Internet is an example of a collaborative system, where internet component sites exchange information using protocol regulations, adhered to voluntarily.
- Directed systems
  o Built and managed to fulfill specific purposes, with centrally managed operation. The constituent systems maintain the ability to run independently, but operation within the integrated system-of-systems is subordinate to central management purposes.
  o More a limit to an SoS because this requires the systems to give up their independence.
- Acknowledged systems
  o Have recognized objectives, a designated manager, and resources for the SoS. Constituent systems retain their independent ownership, objectives, funding, and development and sustainment approaches. Changes in systems are based on collaboration between the SoS and the system.
  o Most DoD systems tend toward acknowledged within an SoS. However, multinational military coalitions are a collaborative SoS.

Uncertainty and Complexity

Complexity1 and uncertainty have become critical considerations for SoSE applications, opening new avenues for the use and development of models. Increasingly, models are being recognized as essential tools to learn, communicate, explore and resolve the particulars of complex problems. However, this shift in the way in which models have been used has not always been accompanied by a simultaneous shift in the way in which models have been conceived and implemented. Too often, models were conceived and built as predictive devices, aimed at capturing single, best, objective explanations. Considerations of uncertainty were often downplayed and even eliminated because they interfered with the modeling goals. This view did not take into account that other uses may require models to be developed differently and thus require different ways of managing uncertainty (Brugnach 2008).

1 Complexity is being defined as a function of the number of entities and entity interactions/relationships, in a manner in kind with McCabe's cyclomatic number (McCabe 1976) used in the software community.

The most common approach for representing uncertainty in models is to use stochastic methods for parameters in the model. A deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables. A deterministic model always performs the same way for a given set of initial conditions. Conversely, in a stochastic model - usually called a statistical model - randomness is present, and variable states are not described by unique values, but rather by probability distributions.
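The distinction can be made concrete with a small sketch (the growth model below is hypothetical): the deterministic update always returns the same next state, while the stochastic update draws its rate from a distribution and so yields a different outcome on each run:

    import random

    def deterministic_step(population, growth_rate=0.03):
        # Same inputs always produce the same next state.
        return population * (1.0 + growth_rate)

    def stochastic_step(population, rng, growth_mean=0.03, growth_sd=0.01):
        # The growth rate is drawn from a distribution, so repeated runs from
        # the same initial conditions yield a distribution of outcomes.
        return population * (1.0 + rng.normalvariate(growth_mean, growth_sd))

    rng = random.Random(3)
    print(deterministic_step(1000.0))                                # always 1030.0
    print([round(stochastic_step(1000.0, rng)) for _ in range(3)])   # varies run to run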

Important observations related to the fundamental formulation of complex engineering design problems in these new settings include (DeLaurentis 2000):

The need to directly model uncertainty, in its variety of forms, such as: low fidelity contributing analyses, unknown operational environment, ambiguous requirements, and human preferences.

The inappropriateness of optimizing to deterministic objectives, in light of uncertainty.

Addressing affordability as an objective and uncertainty as a reality in system design shifts the fundamental question from “can it be built” to “should it be built” to “with what confidence might it succeed”. The goal should be to design a system that satisfies the requirements, and then to determine the robustness of the design to changes in assumptions made along the way. Uncertainty associated with the engineering analyses conducted is greatest in the conceptual and preliminary design phases of the systems engineering lifecycle, yet this is also the time when large numbers of possible alternatives are being excluded. (DeLaurentis 2000)

Deterministic design simply neglects these uncertainties by assuming all inputs and outputs to be precise. This practice is increasingly inappropriate for the development of affordable systems, especially those that are network-communications and decision-making intensive, where the importance of cost prediction and risk mitigation is equal to that of system performance. Thus, advances in modeling and simulation architectures must be able to handle imprecise, ambiguous, or uncertain information in the contributing analyses.

Current State of M&S in Systems Engineering

Characterizing M&S

There are many ways of characterizing the types of models and simulation methodologies used in representing systems. Figure 5 gives one such taxonomy. Each methodology is best suited for a particular problem type, and addresses the dimensions of complexity in different ways, as shown in Figure 6. What Figure 6 indicates is that there is no ideal method for modeling a complex system. While agent-based modeling is most suitable, it is also the most difficult to implement and validate.


Network models are the easiest to implement but do not capture the dynamic behaviors or intelligence of the systems we are interested in representing. Since all complex systems have many interconnected components, the science of networks and network theory are important aspects of the study of complex systems.

Figure 5: Taxonomy of Modeling and Simulation (Balestrini-Robinson 2009)

Figure 6: Modeling Techniques for Complex Systems (Balestrini-Robinson 2009)


Regardless of simulation methodology, a good way to think about representing the decision-making behavior in a SoS simulation is with the observe-orient-decide-act (OODA)2 loop, as shown in Figure 7. The idea behind OODA is that decision-making occurs in recurring cycles and processing the cycle quickly, observing and reacting to unfolding events rapidly, can enable us to “get inside” the opponent's decision cycle and gain an advantage.

The four interrelated and overlapping OODA processes are listed below. In parentheses is the relationship of each process to common simulation representation/functions:

Observe: the collection of data by means of sensing (M&S: electromagnetic phenomena)

Orient: the analysis and synthesis of data to form one’s current mental perspective (M&S: networks, network communications, data fusion, data analytics)

Decide: the determination of a course of action based on one’s current mental perspective (M&S: decision theory, artificial intelligence, control theory, cognitive learning…)

Act: the physical playing out of decisions (M&S: F=ma)

Boyd focuses on orientation as the critical link to integrate observation, decision and action. And it is orientation that ‘manages’ the relationships, i.e., network, between the constituents of the SoS.

When a SoS works toward meeting a goal, the OODA-based event sequence (or mission thread) that is followed has some interesting effects on simulation methodology. Simulation of sensing and acting tends toward a structured, deterministic problem with a strong physics basis and a long history of empirical data. Simulation of Orient and Decide is unstructured and stochastic, and emergent behaviors are to be anticipated. Empirical data for Orient and Decide tend to be very specific to a use case and environment, and rarely is there sufficient data to obtain statistical significance. A SoS requires many iterative steps back and forth through the OODA-based events, resulting in significant challenges to building a robust simulation composition. It also supports the notion that validation of Observe and Act simulations, which are physics based, is tractable. However, current validation techniques don't work as well where there are no/minimal physics (i.e., just equations) because the number of unknowns is much higher than the number of equations, leading to non-unique solutions. This indicates that agent-based techniques are ideally suited for interrogating the many plausible behaviors of the SoS.

2 Developed in 1996 by military strategist and USAF Colonel John Boyd for air combat (Boyd 1996)


Figure 7: The OODA Loop

The M&S Pyramid

Models and simulations are often classified by the DoD into four levels - campaign, mission, engagement, and engineering - and depicted as a Pyramid, as shown in Figure 8. The pyramid is a useful construct for describing the difference between models and simulations based on resolution, fidelity and aggregation (Loper 2015).

Resolution is the degree of detail and precision used in the representation of an item or the real world aspects in a model or simulation (MSCO 2011). The layers in the pyramid reflect different levels of resolution. When developing a model or simulation, the level of resolution needed depends on the type of problem to be solved. Using a defense example, campaign models and simulations are applied in warfare analysis; mission-level simulations are used in such areas as air defense, missile defense, and power projection; engagement simulations are used in most DoD weapon system projects; and engineering-level models and simulations have a strong base in understanding phenomenology and the environment. While the M&S pyramid originated in the DoD, other science and engineering domains (e.g., physics, meteorology, social science) have similar constructs for thinking about levels of resolution (e.g., social science uses the terms macro, meso, and micro).

Fidelity is the accuracy of the representation when compared to the real world (DoDD 1995). Aggregation is the ability to group objects while preserving the salient effects of object behavior and interaction while grouped (MSCO 2011). The bottom of the pyramid (Engineering level) is considered to represent the model in a very precise way (e.g., theory or physics-based equations). Due to the types of equations used to represent the system, these models and simulations can require long computational run times (e.g., hours, days, weeks). Models and simulations at this level are considered to have high fidelity and resolution, and low levels of aggregation. In other words, a high degree of detail and precision is used in the representation, as well as a high degree of accuracy. These simulations are physics-based and deterministic. Low levels of aggregation mean that the entities in the system are not grouped; they are represented as individual objects.


As you move up the pyramid (i.e., Engagement, Mission, Campaign), models and simulations use a concept called aggregation to group objects and represent them in a less precise way. For example, the components of a Patriot missile at the Engineering level would be aggregated to represent missile intercept at the Engagement level. At the Mission level, a simulation might include multiple missile intercept systems in order to represent an air defense scenario. If the need were to look at a larger operational environment such as the Gulf War, a simulation at the Campaign level would need to represent many air defense systems in addition to command and control and other defense systems. Moving up the pyramid results in the interrogation of missions involving scenarios with ever more entity systems, communications infrastructures, and layers of command and control, where the effects of OODA Orient and Decide drive performance. However, current simulation techniques do not use Orient and Decide behavior algorithms that include UQ, cognitive processing, etc., over the entire range of combinatoric possibilities; they use just the scripted effects of typically single-path communications and decision making. Basically, these techniques result in performance predictions that are highly biased by and calibrated to 'yesterday', and they make eliciting emergence difficult.

Models and simulations at higher levels of the pyramid typically use more abstract representations of systems. For example, the Patriot missile may be an important system in a Gulf War simulation, but it would be represented with less precision and fidelity at the Campaign level than the Engineering level (i.e., lethality tables instead of physics). This type of abstraction leads to less computationally intensive simulations as you move up the pyramid, and this leads to shorter run times (e.g., minutes, hours).

Figure 8: The M&S Pyramid

Models and simulations are traditionally developed to solve problems for a single level of the M&S pyramid (e.g., terminal guidance for the Patriot). However, not all system design and management problems can be addressed this way, especially if they are distributed and require interactions with other systems. For example, it is common for land or sea-based systems to rely on space-based sensors to manage ever increasing numbers of tracks (threats, friendlies, unknowns). There is an increasing need to understand the relationships and influences of systems that are represented at different levels of the pyramid. For example, what if we could take the Engineering level design of a new missile system and put it in a Campaign level model of the Gulf War? This would give operators the ability to determine the effectiveness and use of the new system while it is still being designed. This need to cross boundaries exists in all areas of modern science and engineering.

From a simulation perspective, our focus has been on spanning multiple levels of the M&S pyramid in one of two ways: distributed or unified. Distributed refers to connecting existing models and simulations, developed for different levels of the M&S pyramid, using a distributed simulation framework. There are several commonly used frameworks to choose from to connect the models and simulations, including Distributed Interactive Simulation (DIS) (IEEE 1278) and the High Level Architecture (HLA) (IEEE 1516). This approach has the advantage of being able to use existing simulation assets and repurpose them to solve a new problem. The disadvantage of this approach is that it requires solving common distributed simulation design issues (e.g., semantic interoperability, model assumptions, time and ordering), and solving issues related to crossing levels of the pyramid (e.g., aggregating, disaggregating, spatial and temporal scale). In addition, these simulations typically have disparate data architectures, making integration of the federates at best challenging and cumbersome despite interface standards.

Unified refers to developing a single, standalone simulation that represents all of the levels of the pyramid. This approach has the advantage of being able to solve many of the common distributed design issues through careful choice of assumptions, data, formalisms, and time. In other words, it develops a multi-level model that is designed to work together instead of connecting pieces of simulations that were never intended to work together. The disadvantage of this approach is that the representations of the different levels of the pyramid may need to be very abstract, having less fidelity and precision than the original simulations. Adding more and more fidelity to the unified simulation (over time) could result in a simulation that is complex and difficult to understand. In other words, it could result in the type of simulations used decades ago, before the advantages of distributed simulation were understood. As shown in Figure 6, there is no single M&S methodology that captures all aspects of a complex system, which means that a unified simulation will need to use multiple simulation approaches to represent the system of interest, thereby creating a different version of distributed simulation (i.e., between simulation methodologies vs. standalone simulations).

Current State of Systems Engineering

Systems engineering is based on the historical success of delivering systems with a single mission focus. These historical systems tended to be deliberately designed around a hierarchical architecture with tightly coupled and embedded components. The systems were mechanical- and electrical-intensive systems where software was not the primary driver of system performance, acquisition cost, or time to market. How do the successes of the past extend into new trends and constantly evolving employment schemes and uncertain environments?

The engineering world began an explosive change in the mid-1990s with the maturation of Computer Aided Design, physics-based simulation (e.g., continuum mechanics), and parallel computing. Systems engineering attempted to follow this revolution without realizing the significant differences between the structured nature of physics-based continuum mechanics, with well-posed differential equations and extensive empirical data sets, and the semi-structured domain of systems, where the basis is only in mathematics and limited empirical data. In addition, systems now tend to be multi-mission in nature, loosely coupled, modular, and software intensive with a distributed computing environment. Thus, software and computing have been key drivers in the explosive growth of system complexity, data and the challenges associated with integration. Integration is now about managing the complexity of interfaces and relationships that are loosely coupled and multi-mission in nature (both intra-system and inter-system) to enable functionality and interoperability within a greater enterprise. This enterprise has become the domain of SoSE.

Challenges of M&S for SoSE

The traditional approach to M&S for SoSE has the following challenges.

- Model Development
  o Conceptual models tend to be developed around a specific mission environment and a specific use case. The conceptual models capture the assumptions about what is important to the real-world system we want to understand. Conceptual models use many types of representations to capture the structure, behavior and interconnectivity of the system of interest, but are most commonly captured in non-executable formats.
  o Model Based Systems Engineering (MBSE) tools support structure, behavior and interconnectivity of the system, but focus on hierarchical connectivity across systems - not large numbers of objects with arbitrary associations across objects. Focusing on hierarchical structures also makes traversing layers in the pyramid overly complicated. Further, MBSE tools produce artifacts that are predominately non-executable.
  o Simulation developers use the conceptual model as the basis for implementing the simulation. Note that many different simulation implementations can come from a single conceptual model, depending on the simulation methods (e.g., time, environment, behavior) and methodologies (e.g., agent based, system dynamics, network models) used by the developer.
  o The power of M&S is in the conceptual modeling phase. We need MBSE tools to support arbitrary associations across objects, as well as quantitative and executable conceptual models.

- Model Execution
  o In all SoSE analyses, focus is on the simulation itself – a specific implementation of assumptions about objects, structure, behavior and interconnectivity of the real-world system. The only ability to understand the system comes from the embedded structure through inputs and outputs. There is no easy way to look at new structures and relationships without changing the simulation itself.
  o Simulations are based on a set of assumptions about what is important to the real-world system. These assumptions may not be relevant to the next operational need/environment, but we reuse simulations to save cost and time. The ability to adapt a simulation or federation (a collection of networked simulations) to address a new SoS architecture can be severely limited, yet we do it anyway.
  o A common approach to modeling an SoS is by networking simulations together to create a complex (federation) environment. We use solutions like DIS and HLA to connect the simulations, which are focused on interfaces and data passing. These federations are rarely based on common and/or consistent models, and the architectures do not resolve the differences in assumptions, behavior, structure, etc. of the simulations at different levels of fidelity, resolution or aggregation.
  o Distributed simulation federations are not typically used for SoS analysis. Time management is a vexing issue for these types of applications, and since Monte Carlo analysis is used to understand uncertainty, restarting the federation for each new replication is a concern. For these reasons, large constructive models are used for SoS analysis in a standalone mode.
  o The focus on developing federations from disparate simulations has distracted us from defining simulation goals and intended use. This has enabled a cottage industry that demands infinite resolution be used everywhere for everything, despite the fact that performance is driven by specific time scales and more detail only gets in the way, driving failure and cost even higher.

- Systems Engineering Process
  o M&S is used in each step of the systems engineering life cycle, but current practice does not reuse the models and simulations in each phase. In other words, a unique set of models and simulations is used at each phase. There is a need to push everything to the left – use models and simulations developed for later life cycle phases earlier in the process, or have a common system model that can be used across life cycle phases.
  o The systems engineering "V" process is serial in nature, but an approach that is iterative and concurrent would have more value for an SoS. For example, systems engineering techniques are still dominated by hardware-driven design, but software now dominates system cost. A systems engineering approach is needed that integrates both hardware and software design.

- Understanding Uncertainty
  o The traditional approach to understanding uncertainty in SoSE is focused on a sequential process with only a few loop-backs for refinement: a conceptual model exists from which a simulation is developed (but rarely is the conceptual model revisited as the simulation evolves); the simulation is executed for a constrained number of scenarios using sensitivity or Monte Carlo analysis to explore uncertainty; the data are analyzed and the input parameters of the simulation adjusted; repeat. The number of runs that can be made at a given input value limits this process, so much of a multi-dimensional performance surface goes un-interrogated.
  o Uncertainties about SoS capabilities are inherently greater than just the sum of the uncertainties about the constituent systems. Unresolved or even undiscovered conflicts among intended uses (e.g., missions) and functions exist even with well-engineered interfaces. This leads to unanticipated SoS operational environment impacts that were inconsequential to and ignored in the constituent systems. Composing SoS from constituent M&S systems compounds their uncertainties. Yet the challenge in testing SoS drives an increasing reliance on M&S to predict SoS capabilities.
  o Deterministic modeling and simulation techniques are well established and can be validated. Validation of SoS is at best challenging, perhaps impossible to achieve. What does validation mean in system simulation where there is no physics or determinism, only math and ambiguity?
  o SoS M&S engineering needs a deliberate process to design and invest in successive test and M&S refinement for progressive uncertainty reduction and increasing confidence.

4.0 Graphs, Analytics, and Simulation

Graph theory is the study of graphs, which are mathematical structures used to model pairwise relations between objects. A "graph" in this context is made up of "vertices" or "nodes" and the edges, i.e., the relationships that connect them. A graph may be undirected, meaning that there is no distinction between the two vertices associated with each edge, or its edges may be directed from one vertex to another. (Wikipedia Graph Theory)

Graphs can be used to model many types of relations and processes in physical, biological, social and information systems. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. In the context of network theory, a complex network is a graph (network) with non-trivial topological features. Most social, biological, and technological networks display substantial non-trivial topological features, with patterns of connection between their elements that are neither purely regular nor purely random. Such features include a heavy tail in the degree distribution, a high clustering coefficient, community structure, and hierarchical structure. Networks with a medium number of interactions can realize the most complex structures. Complexity of graphs tends to be defined as a function of the number of nodes, edges, and active paths in the graph. An example is McCabe’s cyclomatic number (McCabe 1976) used in computer science to describe software complexity.
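As a small, self-contained illustration (the graph below is invented), two of the features mentioned above, the degree distribution and the clustering coefficient, can be computed directly from an adjacency-set representation:

    import itertools

    # Hypothetical undirected graph stored as adjacency sets.
    adj = {
        "a": {"b", "c", "d"},
        "b": {"a", "c"},
        "c": {"a", "b", "d"},
        "d": {"a", "c", "e"},
        "e": {"d", "f"},
        "f": {"e"},
    }

    def clustering(node):
        # Fraction of a node's neighbor pairs that are themselves connected.
        neighbors = adj[node]
        if len(neighbors) < 2:
            return 0.0
        links = sum(1 for u, v in itertools.combinations(neighbors, 2) if v in adj[u])
        return links / (len(neighbors) * (len(neighbors) - 1) / 2)

    degrees = sorted((len(adj[n]) for n in adj), reverse=True)
    print("degree distribution:", degrees)
    print("average clustering:", round(sum(clustering(n) for n in adj) / len(adj), 3))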

The breadth of problems requiring graph analytics is growing rapidly, including areas such as:

- Large network systems
- Social networks
- Cyber security
- Natural language understanding
- Semantic search and knowledge discovery

Graph analytics is different from relational analytics (Hoskins 2015). Relational analytics typically explore relationships by comparing one-to-one or maybe even one-to-many. Using relational analytics, it would be easy to identify one person and his or her 10 friends. It would also be easy to find any number of people and all of their friends. Graph analytics can compare many-to-many. With relational analytics, it becomes much more difficult to answer questions about the second level of indirect friends a person has. Graph analytics make it possible to ask not only about the friends of a person but also about the friends of those friends. Building on these kinds of questions allows researchers to find key influencers within an entire network, not merely within the direct relationships of a subset of that network.
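A minimal adjacency-list sketch of the friends-of-friends query described above (the data and names are hypothetical; a production system would typically use a graph database or graph-analytics engine):

    friends = {
        "ann": {"bob", "cara"},
        "bob": {"ann", "dave"},
        "cara": {"ann", "dave", "eve"},
        "dave": {"bob", "cara"},
        "eve": {"cara"},
    }

    def friends_of_friends(person):
        direct = friends.get(person, set())
        two_hop = set()
        for friend in direct:
            two_hop |= friends.get(friend, set())   # expand one more hop
        return two_hop - direct - {person}          # keep only the indirect connections

    print(friends_of_friends("ann"))   # {'dave', 'eve'}

Repeating the expansion step walks further out through the network, which is how many-to-many questions about key influencers are built up.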

Graph analytics can also infer paths (a tensor defining an event sequence from node, to node, to node through a subgraph) through complex relationships to find connections that are not easy to see in relational analytics. Relational analytics are ideal for analysis of structured, unchanging data via tables and columns. Graph analytics are useful for unstructured, constantly changing data because they give users information and context about relationships in a network and deeper insights that improve the accuracy of predictions and decision-making.

One area where graph analytics particularly earns its stripes is in data discovery. It enables us to discover the “unknown unknowns” - to see patterns in the data when we don’t know the right question to ask in the first place - by teasing out relationships that aren’t obvious. As patterns begin to emerge from multiple data sets, we gain a more complete picture of everything that actually affects business outcomes. This is also known as inference testing.

There are a few common ways that we classify graph analytics approaches:

• Identifying centralities, such as items or events that lie at the root of other surrounding events or patterns. The aim is to quantify the importance or influence of a particular node (or group) within a network.

• Identifying connections between two or more items originally thought to be unrelated. One example of this in financial services is identifying preliminary indicators of cyber attacks so that they can be prevented.

• Identifying communities that revolve around certain patterns, e.g., a common theme. For example, the FBI might be interested in identifying groups of people who have been communicating about bomb making.
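The following Python sketch (again assuming networkx and a made-up communication graph) illustrates the first and third categories: a betweenness-centrality ranking to surface influential nodes, and connected components as a simple stand-in for community detection.

# Minimal sketch (hypothetical data): centrality and community-style grouping.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("b", "c"), ("c", "a"),   # one tightly knit group
    ("c", "d"),                           # bridge node
    ("d", "e"), ("e", "f"), ("f", "d"),   # a second group
    ("x", "y"),                           # an isolated pair
])

# Centrality: which nodes sit on the most shortest paths between others?
ranked = sorted(nx.betweenness_centrality(G).items(),
                key=lambda kv: kv[1], reverse=True)
print("most central:", ranked[:2])

# Community-style grouping: connected components here; real analyses would
# use dedicated community-detection algorithms.
print("groups:", [sorted(c) for c in nx.connected_components(G)])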

Graphs and graph analytics apply to M&S in several ways. The first is the definition of the execution of events in the simulation (an event string is a graph path). Two fundamental components of a simulation model are a set of state variables and a set of events. The model emulates the system being studied by producing state trajectories/paths, that is, time plots of the values of the system's state variables. Measures of performance are determined as statistics of these state trajectories. Events are the points in time at which at least one state variable changes value. No simulated time passes when an event occurs; simulated time passes only between the occurrences of events. An Event Graph consists of nodes and edges. Each node corresponds to an event, or state transition, and each edge corresponds to the scheduling of another event. An event graph may be acyclic (i.e., there is no way to start at a vertex v and follow a sequence of edges that eventually loops back to v) or cyclic (a loop back to vertex v is possible).
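The following Python sketch illustrates the idea with a hypothetical two-event model: an "arrival" event schedules a "departure" and the next "arrival", and simulated time advances only when the next event is pulled from the event list.

# Minimal sketch (hypothetical model): an event graph executed as a
# discrete-event simulation. Each event handler may schedule further events.
import heapq

event_list = []   # priority queue ordered by simulated time
clock = 0.0
queue_length = 0  # the single state variable of this toy model

def schedule(time, name):
    heapq.heappush(event_list, (time, name))

def arrival(t):
    global queue_length
    queue_length += 1
    schedule(t + 2.0, "departure")    # edge: arrival schedules a departure
    if t < 10.0:
        schedule(t + 1.0, "arrival")  # edge: arrival schedules the next arrival

def departure(t):
    global queue_length
    queue_length -= 1

handlers = {"arrival": arrival, "departure": departure}

schedule(0.0, "arrival")              # initial stimulus
while event_list:
    clock, name = heapq.heappop(event_list)
    handlers[name](clock)             # state changes only at event times
    print(f"t={clock:4.1f}  {name:9s}  queue={queue_length}")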

In addition to graphs defining the order in which events are processed in a simulation, graphs can also be the basis for stochastic simulation, e.g., Markov chains, Bayes nets, credal nets. For example, a Bayesian network (Bayes network or belief network) is a probabilistic graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). A Bayesian network could be used to represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Or, a Bayes net could capture an OODA-based event trajectory representing a military operation.
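As a minimal illustration, the following Python sketch encodes a two-node Bayes net, Disease -> Symptom, with made-up probabilities and computes P(disease | symptom) by direct enumeration.

# Minimal sketch (hypothetical numbers): a two-node Bayesian network
# Disease -> Symptom, queried by direct enumeration / Bayes' rule.
p_disease = 0.01                      # prior P(D = true)
p_symptom_given = {True: 0.90,        # P(S = true | D = true)
                   False: 0.05}       # P(S = true | D = false)

# P(S = true), summing over both disease states
p_symptom = (p_symptom_given[True] * p_disease
             + p_symptom_given[False] * (1 - p_disease))

# Posterior P(D = true | S = true)
p_disease_given_symptom = p_symptom_given[True] * p_disease / p_symptom
print(f"P(disease | symptom) = {p_disease_given_symptom:.3f}")  # ~0.154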

The second way in which graphs apply to simulation is in the definition of scenarios. A scenario (or mission) is an identification of the major systems/players that must be represented by the simulation, a conceptual description of the capabilities, behavior, and relationships (interactions) between these major systems/players over time, and a specification of relevant environmental conditions (e.g., terrain, atmospherics) (MSCO 2011). It is common to think of scenarios as event based and cast as a directed acyclic graph (DAG) with branches at decision points. Unlike a fault tree, a scenario is described as a success tree.

Figure 9 is a sample plot graph from a hypothetical bank robbery scenario as interpreted from crowd-sourced stories (Moss 2015). Notice that the plot is event based; any number of plausible event sequences could be cast as distinct DAGs.

Figure 9: Hypothetical bank robbery scenario

The data associated with the scenario is commonly captured in an ontology. An ontology is a formal naming and definition of the types, properties, and interrelationships of the entities that exist in a particular domain of discourse (e.g., a mission). An ontology compartmentalizes the variables needed for some set of computations and establishes the relationships between them3. Figure 10 represents a simple ontology of Resource Description Framework (RDF) statements for the semantic web.

3 However, ontology languages tend to be hierarchical (e.g., RDF/OWL), so cross-hierarchical associations can only be handled as one-offs; there is no general mechanism for cross-class associations.

Figure 10: RDF graphs create web of data
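The following Python sketch shows how such RDF statements form a graph; it assumes the rdflib library and a hypothetical mission namespace, and each statement is a subject-predicate-object triple.

# Minimal sketch (hypothetical namespace and facts): RDF triples as a graph.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/mission#")
g = Graph()
g.add((EX.Radar1, EX.detects, EX.Track42))
g.add((EX.Track42, EX.classifiedAs, Literal("ballistic")))
g.add((EX.Interceptor7, EX.assignedTo, EX.Track42))

# Every statement is a (subject, predicate, object) edge in the graph.
for s, p, o in g:
    print(s, p, o)

# A simple pattern query: everything asserted about Track42.
for s, p, o in g.triples((EX.Track42, None, None)):
    print("Track42 ->", p, o)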

When creating a simulation model, we need both the structure of the sequence of events for the mission, as well as the data and behavior associated with the individual components of the mission. Graphs are commonly used constructs for capturing this information. It is important to note that while we can describe aspects of M&S as graphs, traditional simulations don’t necessarily take advantage of these constructs. Legacy simulations were developed in a time when graph constructs were not as mature as they are today, so simulation models use hierarchical structures with relational database design.

As for the M&S pyramid, you can envision that each level of resolution could be described as a graph, or set of graphs, which represent the system or mission of interest. Traversing the pyramid would then amount to merging graphs (going up) or extracting sub-graphs (going down). In this context, graphs would provide continuity across the pyramid, allowing for data and structure connectivity4.
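The following Python sketch (networkx assumed, with hypothetical sensor and weapon models) illustrates that traversal: merging two lower-level graphs to move up the pyramid, and extracting a sub-graph to move back down.

# Minimal sketch (hypothetical graphs): moving up the pyramid by merging
# graphs, and moving down by extracting a sub-graph.
import networkx as nx

# Two lower-level (higher-resolution) models.
sensor_model = nx.DiGraph([("radar", "tracker"), ("tracker", "fusion")])
weapon_model = nx.DiGraph([("fusion", "fire_control"),
                           ("fire_control", "interceptor")])

# Going up: merge the graphs into a mission-level model.
mission_model = nx.compose(sensor_model, weapon_model)
print("mission-level nodes:", sorted(mission_model.nodes))

# Going down: extract the sub-graph relevant to one engineering question.
engagement_view = mission_model.subgraph(["fusion", "fire_control", "interceptor"])
print("engagement edges:", list(engagement_view.edges))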

5.0 The Model-Simulation-Analysis-Looping Approach

Why has systems engineering so often failed? Current M&S processes don't provide the necessary context, structure and constraint given all the combinatoric possibilities in plausible employment concepts and the dynamic pace of change in the mission environment. The question "How can we evaluate architecture concepts against mission capabilities of interest?" needs to be answered early in the systems engineering lifecycle. This is not possible with the static, language-based M&S architecture development methods currently in use by systems engineers today.

4 Most hierarchical graphs are structured such that arbitrary associations across the graph are impossible to define - it is a data structure problem (e.g., RDF/OWL ontologies). While the SysML Block Definition Diagram (BDD) is better, most tool implementations do not deal well with arbitrary associations. A possible approach would be to use the adjacency matrix (i.e., a means of representing which vertices (or nodes) of a graph are adjacent to which other vertices). Using matrices gets away from data architectures, at least for a while.

Generally the systems engineering (SE) literature describes SoSE as an extension of traditional systems engineering following a serial process typical of the SE ‘Vee’. This basis is rooted in a decades old notion that a system is characterized as:

• Single Mission, centralized control
• Hierarchical and structured
• Tightly coupled, embedded subsystems
• Dominated by Mechanical and Electrical hardware
• Human-in-the-loop decision-making

Unfortunately, this approach to SE has not kept pace with the technology revolution, especially in computing, software, information technology, and big data analytics. To complicate matters further, systems are now fielded as one component of an integrated SoS. Today both SoSE and SE must deal with a new, top-down paradigm where a system must work in a SoS characterized as:

• Multi-Mission, decentralized control, uncertain environments
• Un/semi-structured, modular and loosely coupled
• Distributed decision making
• Dominated by software, data and computing infrastructure
• Human-on-the-loop decisions under uncertainty

In this 'new order', due to combinatoric complexity, there are many plausible ways to configure a SoS to achieve a given goal. In addition, due to these characteristics, it is the inter-relationships, or interstitials, between entities that drive performance. Thus SoSE and SE are inherently graph-based in nature. Unfortunately, SE tools are currently ill suited to this SoS domain because they are not aligned with modern computer science and software best practice, and because they are hierarchically based and reliant on relational database tools. One has to look no further than the state of requirements engineering, based on static, hierarchical requirements databases that the GAO repeatedly describes as problematic for maintaining acquisition cost, schedule and performance in US federal programs.

A new approach to SoSE and SE is proposed in which traditional SE concepts are exploited in an iterative and concurrent process of modeling, simulation, and data analytics. This approach is rooted in model-based engineering practice and the vision for simulation-based acquisition of the 1980s and 1990s, and exploits modern information technology, computer science and software engineering. It is based on graph mathematics and the emerging graph computing and NoSQL graph databases, and is called Model-Simulation-Analysis-Looping, or MSAL.

The MSAL approach is shown in Figure 11. It applies mathematical architecture techniques that focus on the mission environment and goal-based mission threads to quantitatively answer key questions about architecture alternatives. Architectures are evaluated early and often in run-time environments, providing an understanding of break points and performance boundaries. The architecture process addresses risk issues that have repeatedly challenged complex SoS architectures. Risk is identified as patterns in the architecture bounded by uncertainty. High uncertainty is equated to high risk. All areas of a common pattern can be readily identified and tracked as a risk, and monitored through continuous testing.

Figure 11: Model-Simulation-Analysis-Looping Approach

MSAL Architecture

MSAL is based on iterative looping between modeling, simulation and data analytics. The architectural components of MSAL include:

A Model is a static representation of the system entities (objects), their structure and interconnections. Models are represented as graphs, enabling complex relationships in systems to be captured and represented. The model represents the system's structure.

The Mission Environment is an ontology composed of an object oriented, hierarchical class structure of the operational mission goals, stakeholder requirements, resource entities, operations doctrine, and constraining policy. The mission environment represents the system context - the set of physical entities, conditions, circumstances and influences needed to meet a specific mission goal.

A Mission Thread provides the operational and technical description of the end-to-end set of activities to meet a specific mission goal in accordance with a specific mission operation (as specified by the mission environment). The mission threads represent a plausible sequence of events required to achieve the mission goal. There are many plausible mission threads for each mission environment. Mission threads are based on state machines (e.g. UML or SysML sequence diagram) and become a testable simulation.

A Simulation is a dynamic representation of the model to include entity and inter-entity behaviors. The output from a simulation run is an instance of a mission thread. Simulations are executable software that are dynamic, have quantitative behaviors, and provide a simulation infrastructure, e.g., time management, data management, optimizer interface, and data output schemes. The simulation becomes a 'test harness' by which models can be tested, rather than the focus of the analysis.
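As a minimal sketch of this pairing, the following Python fragment expresses a hypothetical mission thread as a small state machine and steps through it with a toy 'test harness' loop; a production MSAL simulation would add the time management, data management and optimizer interfaces described above.

# Minimal sketch (hypothetical mission thread): a state machine stepped
# through by a tiny "test harness" simulation loop.
import random

# Transitions: state -> list of (next_state, probability). Numbers are
# illustrative only.
transitions = {
    "stimulus": [("detect", 1.0)],
    "detect":   [("track", 0.9), ("mission_failed", 0.1)],
    "track":    [("engage", 0.8), ("mission_failed", 0.2)],
    "engage":   [("mission_goal_met", 0.7), ("mission_failed", 0.3)],
}

def run_thread(seed=None):
    """Execute one instance of the mission thread; return the event path."""
    rng = random.Random(seed)
    state, path = "stimulus", ["stimulus"]
    while state in transitions:
        states, weights = zip(*transitions[state])
        state = rng.choices(states, weights=weights)[0]
        path.append(state)
    return path

print(run_thread(seed=1))   # e.g. ['stimulus', 'detect', 'track', ...]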

Multiple iterations between modeling, simulation and data analytics are used to reduce uncertainty and maximize probability of meeting mission goals. The looping is enabled and empowered by running on a common graph-computing infrastructure. In addition to the architectural components described above, two other significant pieces of the MSAL ecosystem include:

Data and graph analytics are leveraged to address the huge amounts of data produced by the millions of iterative simulation runs and evolving models. Data and graph analytics support simulation-based optimization and uncertainty reduction of parametric data from millions of simulation runs.

Uncertainty quantification plays a fundamental role in characterizing the impact of variability (aleatoric) and lack-of-knowledge (epistemic) variables of the graph model, as compared to the physical SoS, on quantities of interest. Uncertainty quantification techniques add powerful forensics that enable architecture comparisons and synthesis of the best components of many architectural options.

MSAL Loops

MSAL is a set of three nested loops about a common Mission Model, as shown in Figure 12. The Uber Loop is the intersection of the real or tactical world with the virtual run-time environment. The Model-Analysis-Loop (MAL) creates the static models that are abstractions of the real world. The Simulation-Analysis-Loop (SAL) tests the dynamic behavior of a model along a goal-based mission thread via simulation to quantify both performance and uncertainty.

Figure 12: MSAL Looping

Another way to view the MSAL loops is to unroll them into a timeline, as shown in Figure 13.

Figure 13: MSAL Loop Timeline

Uber Loop

The purpose of the Uber Loop is to intersect the real or tactical world with the virtual run-time environment.

The Uber loop starts with the establishment of a Mission Environment that defines the SoS and the determination of the Model Under Test.

The Mission Environment is the ontology describing the stakeholders, resources, operations, policy, and mission goals for the SoS.

The Model Under Test is the architecture, design and subsequent hardware and software of the tactical systems and employed SoS. It is a subset of the Mission Environment and is modeled graphically using a variety of graph-based artifacts, e.g., SysML and UML. The Model Under Test is a composition of sub-models, which achieve their own objectives, or missions, in concert with each other in pursuit of the composite mission goal.

After iterations of the MAL and SAL, the updated mission model is returned to the Uber loop to update the mission environment and the model under test.

Model-Analyze Looping

The purpose of the MAL is to create the static models that are abstractions of the real world. The MAL is completely in the run-time environment.

The key component of the MAL is the Mission Model, an abstraction of the Model Under Test. The model under test is the real world; the mission model is an abstracted version of the real world. There could be many mission models for one model under test. Hence, the mission model is an abstraction that represents a single sub-model of the Model Under Test. The mission model is in the run-time environment.

Mission Threads for each Mission Model are created for key mission goals. Each mission thread is a sub-graph that explicitly defines the event sequencing through the prosecution of a mission starting with the introduction of a stimulus and ending with the completion of the mission goal. A Mission is composed of multiple mission threads. This graphical representation of the SoS is called the event space. A SoS is generally multi-mission (or at least some of the pieces are). Thus a SoS has a number of missions, and many possible mission threads for each mission.

• Graph analytics are conducted on iterations of mission models and/or historical archives of other relevant models looking for characteristics such as complexity, centrality, density, etc. This enables us to reduce the number of mission models to the ones with the most impact on the model under test.

• Inference testing for pattern recognition is conducted between multiple mission models. Through inference testing, a plausible set of ‘good enough’ mission threads can be established.

• The MAL allows early modeling of mission threads across diverse component systems within the SoS to provide constraint and structure for subsequent architecture development. The goal is to find the most plausible “structures” on which to conduct the SAL.

Simulate-Analyze Looping

The purpose of the SAL is to test the dynamic behavior of a model along a goal-based mission thread via simulation to quantify both performance and uncertainty. The SAL is completely in the run-time environment and is Live, Virtual, Constructive (LVC); many different simulators may be appropriate depending on the mission goals being queried.

The first step of the SAL is to drive the simulation by one-at-a-time parameter sensitivity studies. Key variables are selected and then binned to run optimization campaigns and calculate local uncertainty for areas of noteworthy performance (ANP).
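The following Python sketch illustrates such a one-at-a-time study with a hypothetical performance function and made-up parameter bins; each parameter is swept across its bins while the others are held at nominal values, and the output spread indicates sensitivity.

# Minimal sketch (hypothetical model): one-at-a-time parameter sensitivity.
import numpy as np

def performance(detect_range_km, reaction_time_s, pk):
    """Hypothetical mission-level performance measure (illustrative only)."""
    return pk * np.exp(-reaction_time_s / 30.0) * min(detect_range_km / 400.0, 1.0)

nominal = {"detect_range_km": 300.0, "reaction_time_s": 20.0, "pk": 0.8}
bins = {
    "detect_range_km": np.linspace(100.0, 500.0, 5),
    "reaction_time_s": np.linspace(5.0, 60.0, 5),
    "pk": np.linspace(0.5, 0.95, 5),
}

for name, values in bins.items():
    outputs = []
    for v in values:
        params = dict(nominal, **{name: v})   # vary one parameter at a time
        outputs.append(performance(**params))
    spread = max(outputs) - min(outputs)
    print(f"{name:16s} output spread = {spread:.3f}")  # larger spread => more sensitive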

Forward propagation of combined aleatory and epistemic uncertainty is then conducted to define local uncertainty.

Each simulation run creates an instance of a mission thread. The integration of multiple instances of mission threads and subsequent use of Bayes (or Markov or credal) statistics creates macro uncertainty about the mission thread.

• Calculate the probability of success, Ps, for meeting mission goals and bound the macro uncertainty.
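As a minimal sketch of this step (hypothetical outcomes; numpy and scipy assumed), the following Python fragment reduces each simulated mission-thread instance to a success/failure outcome, estimates Ps, and bounds it with a Beta-posterior credible interval.

# Minimal sketch (hypothetical outcomes): estimate probability of success Ps
# from many mission-thread simulation runs and bound its macro uncertainty
# with a Beta posterior (Bayes statistics over success counts).
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(seed=0)
true_ps = 0.72                                  # unknown in practice
outcomes = rng.random(10_000) < true_ps         # one bool per simulation run

successes = int(outcomes.sum())
failures = outcomes.size - successes

posterior = beta(1 + successes, 1 + failures)   # uniform Beta(1,1) prior
lo, hi = posterior.ppf([0.025, 0.975])
print(f"Ps estimate = {successes / outcomes.size:.3f}")
print(f"95% credible interval = ({lo:.3f}, {hi:.3f})")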

• Postulate new variable definitions and new mission models to reduce uncertainty, and then iterate through the SAL. Is the probability of success Ps now acceptable? Have 'risky' ANP been reduced or eliminated? Are the macro and local uncertainty acceptable? If not, continue to iterate.

• There will be 10^3 to 10^6 simulation runs in each SAL campaign. From SAL results, new mission models are postulated and a new MAL-to-SAL cycle begins again.

Benefits of MSAL vs Traditional M&S Approach

1. It's about quantifying mission success against a set of mission goals upon mission threads.

2. There is one data store, i.e., a knowledge management system, for cost, schedule and performance. All applications, including data analytics, are built upon and enabled by this single data store that is extensible across the SoS lifecycle.

3. Stochastic assessment of success in meeting a mission goal, including cost and schedule assessments, with both local and global uncertainty bounds for each mission thread.

4. Exploits existing tools in the M&S toolbox. MSAL is simply a different way of integrating existing engineering tools and techniques to get at a top-down assessment early in the concept definition phase.

5. Requirements are an outcome of MSAL, not an input. These requirements are dynamic, math-based, testable, and are in fact also a 'test harness' manifested in a simulation engine.

References

Balci, Osman, and Richard E. Nance. "Simulation model development environments: a research prototype." Journal of the Operational Research Society (1987): 753-763.

Balci, Osman. "Verification validation and accreditation of simulation models." Proceedings of the 29th conference on Winter simulation. IEEE Computer Society, 1997.

Balestrini-Robinson, S., Zenter, J., and T. Ender. “On Modeling and Simulation Methods for Capturing Emergent Behaviors for Systems of Systems.” 12th Annual Systems Engineering Conference National Defense Industrial Association. Oct 2009.

Boyd, John R. "The essence of winning and losing." Unpublished lecture notes (1996).

Brugnach, M., et al. "Chapter four complexity and uncertainty: rethinking the modelling activity." Developments in Integrated Environmental Assessment 3 (2008): 49-68.

Cares, J. R., Distributed Networked Operations, Alidade Press, 2005.

Carson, Yolanda, and Anu Maria. "Simulation optimization: methods and applications." Proceedings of the 29th conference on Winter simulation. IEEE Computer Society, 1997.

Center for Applied Scientific Computing (CASC), "The PSUADE Uncertainty Quantification Project," Lawrence Livermore National Laboratory, US Department of Energy, https://computation.llnl.gov/casc/uncertainty_quantification (Retrieved on 2015-04-01)

Department of Defense Directive (DoDD) 5000.59-P, DoD Modeling and Simulation (M&S) Master Plan, 1995.

Department of Defense (DoD) Defense Acquisition Guidebook, 2004, available at: https://dag.dau.mil/Pages/Default.aspx.

Department of Defense, Systems Engineering Guide for System of Systems, v1.0, 2008.

DeLaurentis, Daniel A., and Dimitri N. Mavris. "Uncertainty modeling and management in multidisciplinary analysis and synthesis." AIAA Aerospace Sciences Meeting, Paper No. AIAA-2000–422. 2000.

Frontline Solvers Website, available at: http://www.solver.com .

Garrett, R. K., Anderson, S., Baron, N. T., & Moreland, J. D. (2011). Managing the interstitials, a system of systems framework suited for the ballistic missile defense system. Systems Engineering, 14(1), 87-109.

Grange, F. and R. Deiotte, “A Modeling and Simulation Strategy for Uncertainty Reduction in System-of-Systems Engineering”, NDIA 17th Annual Systems Engineering Conference, Springfield, VA, 2014.

Hoskins, Mike “How graph analytics deliver deeper understanding”, InfoWorld, http://www.infoworld.com/article/2877489/big-data/how-graph-analytics-delivers-deeper-understanding-of-complex-data.html

IEEE 1278.1 – 2012. Standard for Distributed Interactive Simulation - Application Protocols, Version 7

IEEE 1278.2 – 1995. Standard for Distributed Interactive Simulation - Communication Services and Profiles

IEEE 1516 – 2010. Standard for Modeling and Simulation High Level Architecture - Framework and Rules

IEEE 1516.1 – 2010. Standard for Modeling and Simulation High Level Architecture - Federate Interface Specification

IEEE 1516.2 – 2010. Standard for Modeling and Simulation High Level Architecture - Object Model Template (OMT) Specification

INCOSE, “A World in Motion: SE Vision 2025”, 2014, http://www.incose.org/docs/default-source/aboutse/se-vision-2025.pdf?sfvrsn=4.

Joslyn, C. and L. Rocha, “Towards semiotic agent-based models of socio-technical organizations”, Proc. AI, Simulation and Planning in High Autonomy Systems (AIS 2000) Conference, Tucson, Arizona, pp. 70-79, 2000.

Kinder, Andrew, Michael Henshaw, and Carys Siemieniuch. "System of systems modelling and simulation–an outlook and open issues." International Journal of System of Systems Engineering 5.2 (2014): 150-192.

Kreutzer, Wolfgang. System simulation programming styles and languages. Addison-Wesley Longman Publishing Co., Inc., 1986.

Leong, Lam Chuan, “Thinking through Complexity, Managing for Uncertainty”, Civil Service College, Singapore, Ethics — Issue 7, January 2010, accessed March 2015, available at: https://www.cscollege.gov.sg/Knowledge/Ethos/Issue%207%20Jan%202010/Pages/Thinking-through-Complexity-Managing-for-Uncertainty.aspx.

Loper, M. (editor) Modeling & Simulation in the Systems Engineering Life Cycle: Core Concepts and Accompanying Lectures, Series: Simulation Foundations, Methods and Applications, Springer, expected publication 14 May 2015.

Maier, M. W., “Architecting Principles for Systems of Systems,” in Proceedings of the Sixth Annual International Symposium, International Council on Systems Engineering, Boston, MA, 1996.

Marvin, Joseph W., and Robert K. Garrett Jr. "Quantitative SoS Architecture Modeling." Procedia Computer Science 36 (2014): 41-48.

McCabe, Thomas J. "A complexity measure." Software Engineering, IEEE Transactions on 4 (1976): 308-320.

McManus, Hugh, and Daniel Hastings. "3.4. 1 A Framework for Understanding Uncertainty and its Mitigation and Exploitation in Complex Systems." INCOSE International Symposium. Vol. 15. No. 1. 2005.

Moss, Richard “Creative AI: Software writing software and the broader challenges of computational creativity”, Gizmag, 2 March 2015, retrieved from http://www.gizmag.com/creative-ai-computational-creativity-challenges-future/36353/

MSCO DoD Modeling and Simulation (M&S) Glossary, 2011. Available at: http://www.msco.mil/MSGlossary.html .

Office of the Deputy Under Secretary of Defense for Acquisition and Technology, Systems and Software Engineering. Systems Engineering Guide for Systems of Systems, Version 1.0. Washington, DC: ODUSD(A&T)SSE, 2008.

Sargent, Robert G. "Verification and validation of simulation models." Proceedings of the 37th conference on Winter simulation. Winter Simulation Conference, 2005.

Weaver, W., "Science and Complexity", American Scientist 36: 536, 1948 (Retrieved on 2007–11–21.)

Wikipedia Uncertainty, accessed March 2015, available at: https://en.wikipedia.org/wiki/Uncertainty.

Wikipedia Graph Theory, accessed March 2015, available at: https://en.wikipedia.org/wiki/Graph_theory.