ENABLING TECHNOLOGIES FOR EMBEDDED SIMULATION & EMBEDDED TRAINING

Hubert A. Bahr
HQ STRICOM AMSTI-ES
12249 Science Drive, Suite 1403A
Orlando, FL 32826
407-384-3874
[email protected]

Claude W. Abate
Sherikon, Inc.
12249 Science Drive, Suite 140
Orlando, FL 32826
407-384-5397
[email protected]

Keywords: Embedded Simulation, Embedded Training, Autonomous Trainers

ABSTRACT

The Army has placed a renewed emphasis on an embedded training capability as a result of lessons learned from the Advanced Warfighting Experiment (AWE) 97-06 on the potential of digitization. Through the Inter-Vehicle Embedded Simulation Technology (INVEST) Science and Technology Objective (STO), the Simulation, Training and Instrumentation Command (STRICOM) will develop the technology that will lay the foundation for incorporating embedded simulation into future and legacy combat vehicles. This paper presents the current status and future evolution of the enabling technologies needed to fully embed these capabilities in a combat vehicle. These embedded simulation (ES) systems will support both training and operational (go-to-war) enhancements for the Army XXI and Army After Next inventory of combat vehicles. The key enabling technologies for an autonomous vehicle capability include low-cost image generation, live-virtual object and terrain integration, virtual target injection into sensor displays, synchronized semi-automated player models, a simulation filtering tool, an intelligent tutoring system, time-based and UWB communication, and automated vehicle model development and optimization.

ENABLING TECHNOLOGIES FOR EMBEDDED SIMULATION & EMBEDDED TRAINING

By Hubert Bahr and Claude Abate, STRICOM

Introduction

To fight and win on the modern battlefield, two things are required: weapon systems that outperform the opponent's weapon systems, and crews that are better trained to use those systems effectively. A cost-effective means of improving weapon system performance is Embedded Simulation (ES), which includes Embedded Training (ET), the capability to train and maintain crew proficiency on the same equipment crews will go to war on, and the Embedded Operations (EO) functions of situational awareness (SA), mission rehearsal (MR), command coordination (CC), critical decision making (CDM) and course of action analysis (COAA).

The competition for better weapons is one component of the challenge; the other involves training crews to be more proficient than the opponent. But there are training costs associated with more complex weapon systems. To date, stand-alone trainers have been employed at the schoolhouse and in the units. The power projection army of the future will have to spend more time maintaining task proficiency, yet stand-alone trainers cannot meet deploying force requirements and are too costly to operate and maintain. One option that overcomes this deficiency is a training system integrated into the vehicle. However, any sub-system integrated into the vehicle becomes a luxury unless it provides improved combat effectiveness. For this reason, the proposed embedded simulation (ES) approach addresses a system that provides both training support and go-to-war capabilities. An expanded discussion of ES training uses can be found in reference 1.

In this paper we walk through the various levels of training applications (gunnery), from a single vehicle, to multiple vehicles, to multiple vehicles participating in live fire and force-on-force exercises similar to training conducted at the Combat Training Centers (CTCs), and finally to go-to-war examples.

Background

While stand-alone trainers such as COFT/AGTS, SIMNET/CCTT and M-1 Driver Trainers have served the Army of Excellence well, technological advances and miniaturization now make it feasible and affordable to embed crew and collective training systems in the vehicle. We will refer to these ground combat systems with an embedded training capability as "autonomous" trainers.

The goal of the Inter-Vehicle Embedded Simulation Technology Science and Technology Objective (INVEST-STO) program is to develop and demonstrate the technology that will lay the foundation for incorporating ES and ET into future as well as legacy vehicles.

The enabling technologies and components used to run an Embedded Simulation System (ESS) are basically the same as for a stand-alone trainer, except that they will be smaller, faster, more powerful and less expensive. Common components include image generators, a simulation computer, Semi-Automated Forces (SAF), a data logger, a terrain database, communications and an instructor operator. Today's stand-alone image generators occupy large racks wired to monitors at the various crew stations. In the future they will be no larger than a card, with images projected directly into vehicle sights or sensors. The large rack-mounted computers will be replaced by a very small and powerful laptop-size computer that is accessible to the crew and loaded with software (SW) containing current ModSAF and world terrain database models. An application hardware (HW) data logger will be linked to the simulation computer to record crew actions and support After Action Reviews (AARs). The only common component that does not apply to an ESS is a dedicated instructor operator, because that task belongs to the vehicle commander or senior cadre personnel.

When units deploy to a combat zone in response to a rapid deployment mission in the next century, the benefits of autonomous trainers become readily apparent. In addition, the dual-use design of the ESS can be used to enhance operational effectiveness.


ESS Training Applications

Let's assume that in any future combat system the crew will have to be trained to maneuver and engage targets using a highly sophisticated suite of fire control sensors and devices. The ESS will be designed so that the crew can hone its maneuver and gunnery skills by projecting selected training exercises into vehicle optics or sensor systems. Due to advances in technology and miniaturization, the graphics card and open-scene visual processing will be capable of displaying a terrain database (world models support SW), and the SAF (CGF support SW) will display the target array on that database. These SAF entities will be fully functional (move, shoot and maneuver) and will replicate enemy capabilities. The on-board data logger will record all engagements for follow-on After Action Review (AAR) by the unit cadre or vehicle commander. This training can be conducted in the motor pool, the assembly area or en route to a combat theater. Training can be tailored to meet individual or crew (collective training) needs in terms of tactical conditions (offense/defense), force ratios, degree of difficulty in terms of probability of hit and kill, and environmental, terrain and light conditions. This provides a virtual 360-degree battlefield with ground and air targets.

Training Example

Training will follow the normal crawl, walk, run strategy, starting with a stationary single-crew exercise and progressing to multiple moving vehicles in a combined arms live fire exercise. However, autonomous trainers require a further segregation of exercises in terms of simulation mode: (1) live vehicle firing virtual rounds vs. virtual targets on virtual terrain; (2) live vehicle firing virtual rounds vs. virtual targets on live terrain; and (3) live vehicle firing live rounds vs. virtual targets on live terrain. The technology and engineering challenges associated with each mode are listed below.

1. Live vehicle firing virtual rounds vs. virtual target on virtual terrain requires:

a. geometric pairing vice laser pairing
b. aim point determination
c. realistic fire on target effects
d. scenario generation

(a) Geometric pairing is required because a laser pairing system will not work between live and virtual targets (no vehicle is present to provide a laser return). In virtual-on-virtual simulation, shooter-target pairing is practically inherent because the locations and orientations of the vehicles and weapons are known almost perfectly in the simulation world. Engagements are simulated by computing the ballistic flyout of simulated rounds and determining where they impact on the target or the terrain. The geometric pairing solution takes place at the time of ranging to the target (in the shooter's sight picture). The on-board simulation computer calculates the distance to the target and stimulates the vehicle to enter the appropriate range return in the gunner's sight.
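The range-return step above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name and coordinate convention are assumptions.

```python
import math

def geo_pair_range(shooter_xyz, target_xyz):
    """Slant range from shooter to a virtual target, derived purely from
    simulation-world coordinates -- no laser return is needed."""
    return math.dist(shooter_xyz, target_xyz)

# Shooter at the origin; virtual target 2 km east and 30 m higher.
rng = geo_pair_range((0.0, 0.0, 0.0), (2000.0, 0.0, 30.0))
# The simulation computer would inject this value as the range return
# displayed in the gunner's sight.
print(round(rng))  # 2000 (to the nearest metre)
```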

(b) Aim point determination is accomplished by capturing, at the instant of firing, the crosshair location with respect to the target. In a virtual-on-virtual engagement, the locations and orientations of the vehicles are known essentially perfectly (they are synthesized by the simulation), and their orientations relative to each other are easily derived from their world coordinates. The simulation computer knows the position of the crosshair relative to the target and stimulates the appropriate burst-on-target effect. The location of impact is also needed to determine target and casualty effects.
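One way to picture the aim-point capture is as an angular offset sampled at trigger pull. The function and units below are illustrative assumptions, not taken from the paper.

```python
def aim_point_offset(crosshair_az_el, target_az_el):
    """Angular offset (here in degrees) between the crosshair and the
    target centroid, sampled at the instant of firing; the offset tells
    the simulation where to place the burst-on-target effect."""
    d_az = crosshair_az_el[0] - target_az_el[0]
    d_el = crosshair_az_el[1] - target_az_el[1]
    return d_az, d_el

# Crosshair 0.5 deg right of and about 0.2 deg below the target centre.
d_az, d_el = aim_point_offset((45.5, 1.8), (45.0, 2.0))
# d_az == 0.5 (right); d_el is approximately -0.2 (low)
```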

(c) Realistic fire-on-target effects models are stored as part of the terrain database and will be generated by the image generator (IG) at the time of round impact on the target. Obscuration, gun recoil and visual tracers will also be stimulated in the sights of the firing vehicle at the time of firing.

(d) Scenario generation would be accomplished at the battalion level and in accordance with published gunnery tactics, techniques and procedures. The scenarios developed in this example would be a series of crew gunnery exercises or firing tables designed to train or sustain crew proficiency. All targets would be virtual and arrayed to match current enemy fire and maneuver doctrine. Firing scenarios can reside on the vehicle simulation computer HW or on a CD-ROM, be ported down to the using unit, or be plugged into the removable storage application HW.

2. Live vehicle firing virtual rounds vs. virtual target on live terrain requires:

a. terrain fidelity and terrain correlation
b. injection of virtual target into a live scene

(a) Terrain fidelity and terrain correlation support a clear image of the virtual target that is spatially correlated to the live terrain. For example, the virtual target must realistically move over the live terrain and not appear to float above or sink into the terrain.
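Keeping a virtual vehicle "glued" to the live terrain amounts to sampling the elevation surface under it. A minimal sketch, assuming a DTED-style regular grid of elevation posts and bilinear interpolation (the function and grid layout are illustrative):

```python
def terrain_height(grid, spacing, x, y):
    """Bilinearly interpolate a regular elevation grid (metres) so a
    virtual vehicle sits on, rather than above or below, the terrain.
    grid[row][col] holds post elevations; spacing is post spacing in m."""
    i, j = int(x // spacing), int(y // spacing)
    fx, fy = (x / spacing) - i, (y / spacing) - j
    h00, h10 = grid[j][i], grid[j][i + 1]
    h01, h11 = grid[j + 1][i], grid[j + 1][i + 1]
    return (h00 * (1 - fx) * (1 - fy) + h10 * fx * (1 - fy)
            + h01 * (1 - fx) * fy + h11 * fx * fy)

# 5 m post spacing; a vehicle at the cell centre gets the mean of the
# four surrounding posts.
grid = [[100.0, 104.0],
        [102.0, 106.0]]
z = terrain_height(grid, 5.0, 2.5, 2.5)
print(z)  # 103.0
```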

(b) Injection of a virtual target into the live scene, or augmented reality, involves generating virtual images that appear to fit seamlessly into the real-world environment. A critical requirement is image clipping: removal of those portions of virtual images that should be partly or fully obscured by intervening real-world objects.
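At its core, the clipping requirement is a per-pixel depth comparison between the rendered virtual scene and the sensed real scene. The sketch below is an assumption about how such a test could look, not the paper's design; `None` marks pixels with no virtual content.

```python
def clip_virtual_pixels(virtual_depth, real_depth):
    """Per-pixel occlusion test for augmented reality: a virtual pixel
    is drawn only where it is nearer than the real-world surface sensed
    along the same ray (e.g. by a range sensor)."""
    return [v is not None and v < r
            for v, r in zip(virtual_depth, real_depth)]

# A virtual target at 800 m, partly behind a real berm at 600 m.
visible = clip_virtual_pixels([800, 800, None], [600, 1200, 900])
print(visible)  # [False, True, False] -- the first pixel is hidden
```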

3. Live vehicle firing live rounds vs. virtual targets on live terrain requires:

a. GPS location of firer
b. Vehicle/hull attitude
c. Gun/turret orientation (azimuth & elevation)
d. Safety overwatch (observer console)
e. Aim point determination (GPS interferometry)
f. Geo-pairing

(a) All vehicles will be equipped with a Global Positioning System (GPS) receiver. This system accurately identifies the location of the firer in terms of its X and Y coordinates, and every other friendly or enemy vehicle in the exercise can be tracked and geo-paired for gunnery purposes.

(b) The hull attitude of the virtual target is important for determining the strike of the round and calculating vehicle damage. The orientation of a virtual vehicle influences its vulnerability and its ability to identify and engage the live vehicle. The simulation computer on the live platform generates this information.

(c) Gun orientation further defines the virtual vehicle's ability to identify and engage the live vehicle. The simulation computer on the live platform generates this information.

(d) If this technology is used to replace wooden targetry on live fire ranges at home station or at the CTCs, then it is essential to have tower or safety officer overwatch in order to see the live engagement of a virtual target. Safety overwatch can only be accomplished by providing a safety-overwatch console that shows the same virtual target array the firing vehicle sees. A simulation computer and image generator (IG) capability must be available to safety personnel, or the imagery must be sent by telemetry from the firing vehicle to overwatch element sensors.

(e) When crossing the boundary between live and virtual (or engaging real targets that cannot be seen), the orientation of the real shooter vehicle with respect to the world becomes important. This situation requires accurately measuring not only the positions of the shooter and target vehicles, but also the pointing angle, in world coordinates, of the shooter's gun.

(f) In the real world of live-instrumented vehicles on training ranges, locations and orientations cannot be determined nearly as precisely as in the virtual world. Geometric pairing from shooter to target is determined using geometry: the locations of the vehicles, the pointing angle of the shooter's gun, and the line from the shooter outward toward the target.
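Putting (a), (c) and (f) together, a geo-pairing check reduces to comparing the gun's world-frame azimuth with the bearing from shooter to target. The sketch below assumes x east / y north, azimuth measured from north, and an illustrative angular tolerance; none of these specifics come from the paper.

```python
import math

def geo_pair_hit(shooter, target, gun_az_deg, tol_deg=0.25):
    """Geometric shooter-target pairing: the shot is paired to the
    target when the gun's world-frame azimuth matches the bearing from
    shooter to target within a tolerance."""
    bearing = math.degrees(math.atan2(target[0] - shooter[0],
                                      target[1] - shooter[1]))
    # Wrap the error into (-180, 180] before comparing.
    err = (gun_az_deg - bearing + 180.0) % 360.0 - 180.0
    return abs(err) <= tol_deg

# Target 1 km due north; gun laid 0.1 deg off is paired, 5 deg off is not.
print(geo_pair_hit((0, 0), (0, 1000), 0.1))  # True
print(geo_pair_hit((0, 0), (0, 1000), 5.0))  # False
```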

4. ESS in support of multi-echelon combined arms (collective) training conducted at our Combat Training Centers requires an additional set of enabling technologies. (The fact that multiple players are participating means communication links become pacing items for successful execution.) These technologies include:

a. Digitized terrain database
b. Optimal live/virtual registration (geometric pairing combined with GPS, vehicle attitude and gun orientation)
c. Synchronized SAF
d. Increased communications bandwidth / reduced commo / distributed processing
e. Automated vehicle / smart models
f. Communications / sensor surrogates
g. Intelligent tutoring system
h. Automated battlefield information filtering tool
i. Scenario builder / modification tool

(a) High-resolution digitized terrain databases are essential for any live-virtual exercise. Resolution must be at Digital Terrain Elevation Data (DTED) level 4, with horizontal resolution of 5 meters and vertical resolution of 1.5 meters or less. All databases should be standardized and interoperable, or compliant with Synthetic Environment Data Representation & Interchange Specification (SEDRIS) conversion mechanisms.

(b) Implementation of geo-pairing for direct fire and non-line-of-sight engagement simulation is based upon accurate GPS position location measurements and accurate GPS-based turret angle measurements (shooter-target pairing and accurate visual representation of live vehicles in the virtual world).

(c) Synchronized SAF, or Collective Observation of a Common Entity, is designed to synchronize the SAF entities generated on each player platform. For example, in a platoon exercise all tanks will see exactly the same view of the Opposing Force (OPFOR) as it moves or as it is attrited by platoon direct fire. The advantage of synchronization is the reduction of update communications traffic between friendly players as actions take place affecting the status of the SAF entities. The technology involves modeling the SAF entity at a high level (in terms of behaviors) so that only infrequent updates to the model are required.
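One way to make independently running SAF copies agree without per-move traffic is to drive every copy from the same deterministic seed, so each platform computes an identical OPFOR. This toy sketch is an assumption about the mechanism (the scenario-seed approach), not the project's actual design:

```python
import random

def saf_route(seed, steps=5):
    """Deterministically seeded SAF movement model: every platform that
    runs this with the same seed computes the same OPFOR route, so only
    state-changing events (e.g. kills) ever need to be broadcast."""
    rng = random.Random(seed)
    x = y = 0
    route = []
    for _ in range(steps):
        x += rng.choice([-1, 0, 1])
        y += rng.choice([-1, 0, 1])
        route.append((x, y))
    return route

# Two "vehicles" seeded identically see exactly the same OPFOR route.
assert saf_route(42) == saf_route(42)
```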

(d) ESS has more stringent communications requirements than any stand-alone system. These requirements include weight, volume, range, power, and bandwidth management. With multiple players in the simulation exercise, each must constantly report current state (a few updates per second at a minimum) and interaction information to ensure proper representation. The use of ultra wideband (UWB) technology shows promise for handling the high data bandwidth load. It uses less power for a given range, has urban environment capability and a low probability of detection and interception. UWB technology also has potential for the longer-term "go to war" tactical internet communications requirements.
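The scale of the load is easy to estimate. The numbers below (player count, update rate, update size) are purely hypothetical placeholders for a back-of-envelope check:

```python
# Back-of-envelope exercise load, with hypothetical figures:
# 16 players, 5 state updates per second each, 144 bytes per update.
players, rate_hz, update_bytes = 16, 5, 144
bps = players * rate_hz * update_bytes * 8
print(bps)  # 92160 bits/s, before any model-based traffic reduction
```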

(e) By accurately modeling the behavior of a human player, each live or virtual entity can use that behavior model to predict the state of other live entities on the battlefield, and thus reduce the communications bandwidth required to exchange operational and status information between entities. Model optimization can be performed using on-board computational resources (the simulation computer and vehicle modeling SW).
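The bandwidth saving comes from transmitting only when truth diverges from the shared prediction, in the spirit of dead reckoning in distributed simulation. A minimal sketch; the threshold and function are illustrative assumptions:

```python
def needs_update(actual, predicted, threshold_m=1.0):
    """Model-based traffic reduction: peers extrapolate each vehicle
    from its behavior model, and the owning vehicle transmits a fresh
    state only when truth drifts beyond a tolerance from the shared
    prediction."""
    dx = actual[0] - predicted[0]
    dy = actual[1] - predicted[1]
    return (dx * dx + dy * dy) ** 0.5 > threshold_m

print(needs_update((100.4, 50.2), (100.0, 50.0)))  # False: within 1 m
print(needs_update((103.0, 50.0), (100.0, 50.0)))  # True: resend state
```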

(f) Communication and real-time sensor surrogates will employ UWB technology. UWB can be used as precision radar while retaining its inherent benefits as a communication system. Used in an imaging array, this capability can provide an electronic video camera, more accurate live/virtual vehicle registration, and terrain database updates covering features that were not present when the database was produced, thus avoiding costly high-resolution database regeneration.

(g) A SW application connected to the simulation computer system will be an embedded intelligent tutoring system. The SW will duplicate, as closely as possible without a human instructor, the experience of a trainee undergoing on-the-job training in a crew position task environment. Students can train in an interactive environment toward a particular goal or task, with challenging training scenarios, monitoring and evaluation of trainee actions, meaningful feedback on errors, and responses to trainee requests for information.

(h) A SW application connected to the simulation computer system will be designed to process tactical information and/or use intelligent agents to filter out extraneous information not readily needed by the commander. This system will automate the collection and dissemination of critical information, allowing rapid decision making. It will prevent information overload by eliminating non-essential information, reducing communications bandwidth, and uncluttering the commander's display.
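A filtering agent of this kind is, at its simplest, a predicate over incoming reports. The field names and thresholds below are hypothetical, invented for illustration only:

```python
def filter_for_commander(reports, min_priority=2, max_age_s=300):
    """Intelligent-agent style filter: pass only fresh, high-priority
    reports through to the commander's display."""
    return [r for r in reports
            if r["priority"] >= min_priority and r["age_s"] <= max_age_s]

reports = [
    {"id": 1, "priority": 3, "age_s": 60},   # enemy contact: keep
    {"id": 2, "priority": 1, "age_s": 30},   # routine traffic: drop
    {"id": 3, "priority": 3, "age_s": 900},  # stale report: drop
]
print([r["id"] for r in filter_for_commander(reports)])  # [1]
```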

(i) The TRADOC community will provide a standard library of ARTEP scenarios, and units will have the capability to develop their own scenarios or modify existing ones to meet METL requirements. This technology will be located at battalion staff level and interfaced to the automated Battle Planning System (BPS).

5. Go-To-War Operational Enhancements

ESS support of operational enhancements makes the technology more affordable than a training-only enhancement system. An expanded discussion of ES uses for the AAN can be found in reference 2.

a. Situational awareness
b. Battlefield visualization
c. Mission planning/rehearsal
d. Course of action analysis
e. Critical decision making
f. Command & control (staff uses)
g. Information overload reduction (information filtering tool)

(a) Situational Awareness (SA) can be enhanced by an ESS. The rapid processing and sharing of enemy and friendly location information in a structured format can assist the commander in making timely decisions. ESS can be used to automate the Tactical Decision Making Process (TDMP) because the computer can collect and compile essential enemy information, filter out non-essential information, and display the result as either a 2D or 3D view. The simulation computer can compare old and new enemy situational templates to predict possible enemy actions or intentions. As the enemy closes, the computer can display on-screen weapon range arcs to alert crews of their vulnerability to enemy direct or indirect fire. SA will not be degraded by extended distances, because UWB technology has multi-path, over-the-horizon connectivity that can use every vehicle as a store-and-forward relay platform, nor by operations in built-up areas, because UWB is immune to signal interference caused by man-made structures.
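The range-arc alert described above is a simple geometric check. A sketch under assumed names and an assumed 2D coordinate frame:

```python
import math

def in_threat_arc(own_xy, enemy_xy, weapon_range_m):
    """On-screen range-arc alert: warn the crew when own vehicle lies
    inside an enemy weapon's range ring."""
    return math.dist(own_xy, enemy_xy) <= weapon_range_m

# Enemy tank 2,400 m away; a hypothetical 3,000 m direct-fire range.
print(in_threat_arc((0, 0), (2400, 0), 3000))  # True: crew is alerted
print(in_threat_arc((0, 0), (2400, 0), 2000))  # False: outside the arc
```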

(b) Tactical information from the various ground and airborne sensor systems can be ported into the battalion Tactical Operations Center (TOC) for use by the commander and staff in making timely decisions. If necessary, this information can be displayed on every vehicle tactical display instantaneously to give every crew a clear picture of enemy action. The rapid graphic display of enemy information, such as a Family of Scatterable Mines (FASCAM) minefield, becomes a powerful tool that can save time and lives. Operations orders and graphics can be transmitted electronically, thereby reducing report preparation and distribution times.

(c) Mission planning and rehearsal can be realistically accomplished by conducting a virtual reconnaissance of the battle area or a virtual look back at the defensive position, and by virtually rehearsing against a CGF on the same terrain database under similar light and environmental conditions. Electronic planning and stealth reconnaissance will maximize the use of planning time and minimize exposure to enemy observation and fire. Because of the inherent covertness of UWB technology, critical information can be passed freely during both planning and rehearsal phases. Rehearsal eliminates some of the unknowns and reinforces proper execution while exercising enemy "what if" situations. The concept that perfect practice makes for perfect execution would enhance crew confidence.

(d) Developing the best course of action can be made easier by running the various Blue courses of action virtually against the Red courses of action. Quick simulations can be run to determine the possible results of each course of action. The commander can make his final decision based upon the results of the computer's comparative analysis and the risks involved.
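The "quick simulations" step can be pictured as a small Monte Carlo comparison. The combat model below (win probability proportional to relative strength) is a deliberately crude stand-in for whatever model the ESS would actually run; all names and numbers are hypothetical.

```python
import random

def score_coa(blue_strength, red_strength, trials=1000, seed=7):
    """Crude Monte Carlo COA scorer: each trial is 'won' with a
    probability proportional to relative combat power. This is a
    placeholder model, not the paper's simulation."""
    rng = random.Random(seed)
    p_win = blue_strength / (blue_strength + red_strength)
    return sum(rng.random() < p_win for _ in range(trials)) / trials

coas = {"envelopment": 1.4, "frontal": 1.0, "infiltration": 1.2}
best = max(coas, key=lambda name: score_coa(coas[name], red_strength=1.0))
print(best)  # the COA with the highest simulated win rate
```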


(e) An ESS can automate the collection and dissemination of key situational awareness information, allowing rapid decision making based upon the most recent information available and models of enemy tactics, techniques, procedures and order of battle. Using the ESS to reduce the commander's information processing duties can abate the stresses of combat decision making. ESS contributes to digitization as a force multiplier.

(f) Command, control and communication will be expedited and improved by using the on-board processing capacity, smart models, intelligent agents, covert digitized communications, and the real-time display of enemy and friendly activity and status. Graphical displays, vice verbal transmission of critical information, will save time and standardize information exchange. Commanders and staffs can overwatch unit personnel and logistical status and anticipate support requirements.

(g) ES can perform as an intelligent agent to filter out extraneous information and provide non-redundant transmission of information that is crucial to decision making. The resulting filtered output to the human decision maker will permit faster and more accurate decisions and prevent information overload and display clutter. The system can be embedded into the Advanced Tactical Command & Control System (ATCCS), with the display tailored to show information the commander considers critical to his decision making process.

Embedded Simulation (ES) Architecture

a. Three architecture approaches are currently being studied.

(1) A completely separate stand-alone ES subsystem with its own processing, image generator and SW. Advantages would be the use of off-the-shelf, ruggedized HW, lower cost, use of state-of-the-art graphics cards and processors, and the least interference with actual vehicle HW and SW. The disadvantage is the need for additional space.

(2) Fully embed ES into the vehicle subsystems by adding to or upgrading the vehicle's own computer and SW architecture to accommodate the additional requirements of ES. Advantages would be requiring no extra space and using the current video interface. The disadvantage is that new components would have to be militarized and integrated with the existing system, with the associated costs.

(3) A hybrid or combination of 1 and 2. In this approach the idea is to find the optimal split that utilizes as much of the vehicle computer and networking capability as possible, while anything new or too costly to militarize is placed in a separate subsystem. The disadvantage of this approach is that it still requires extra space.

b. The final decision on an architecture approach will be made at a later time. The hardware and software architecture for ESS must be defined in a way that allows seamless integration into any potential vehicle platform. This will require emphasis on architecture interfaces (HW & SW) that can be fully embedded into the vehicle architecture design. Computer technology and graphics systems are improving at such a rapid rate that the ES architecture must be designed to reduce upgrade costs. ESS will be developed under an architecture-based approach that emphasizes loosely coupled interfaces and maximum use of commercial standards and software Application Programming Interfaces (APIs). The ES Technical Reference Model below shows the interrelationship between the HW and SW needed for a full ESS. Future vehicles will be designed with ES as part of the vehicle architecture.


c. ES Technical Reference Model

Conclusion

The enabling technologies associated with the INVEST STO are a significant first step toward meeting the training and operational challenges of the Army After Next (AAN). The force projection army of the next century will have the benefit of an autonomous training system and a dual-use ESS capable of providing improved SA and other operational enhancements. This capability will give new meaning to the "train as you fight" imperative. Intelligent tutoring systems and a robust on-board training support package will ensure that crews attain and sustain proficiency advantages over any adversary. The mental agility and information dominance gained through Force XXI will spawn the technology enablers that will make an ESS a key component of combat and training readiness in all future crew and command & control systems. The INVEST STO is at the leading edge of these future operational and training capabilities.

[Figure 2. Embedded Simulation Technical Reference Model. The diagram relates the HW and SW of a full ESS: application software (Embedded Training, Mission Rehearsal, Command Coordination, Battlefield Visualization), support software (World Models, Vehicle Models, CGF, Entity Communications, Process Communications), system software (Operating System, Device Drivers), simulation computer system hardware (Processor, Main Memory, Mass Memory, IGs, Motherboard) and computer support hardware (Power Bus, Audio/Video/Internal Databuses, Vehicle External Communications, User Internal Communications, External User Interface w/ HW), mapped against the training/operational mission applications 1 through n.

Significant Capabilities: Embedded Mission Rehearsal, Embedded Training, Battlefield Visualization, Command Coordination, Virtual Test & Evaluation, Simulation Based Acquisition, 3D Mission Visualization, Stealth/Virtual Recon.

Functional Drivers: Vehicle Simulation Mode (i.e. weapons, mobility, ...), Virtual World/Virtual Target Injection, Mission Planning/Scenario Generation System, Terrain Database Generation System, Stealth/Flying Carpet Mode, Automated Exercise Manager, After Action Review/Replay System, Vehicle-to-Vehicle Simulation Communication Architecture, Entity Generation System (i.e. ModSAF).]


REFERENCES

1. Bahr, H., Abate, C. and Collins, J., "Embedded Simulation for Army Ground Combat Vehicles," 19th I/ITSEC Conference Proceedings, December 1997.

2. Abate, C., Bahr, H. and Brabbs, J., "Embedded Simulation for the Army After Next," Armor, July-August 1998.

Hubert A. Bahr is a decorated Vietnam veteran with 28 years of federal service. He received his BS degree in engineering from the University of Oklahoma in 1972 and his Master's degree in computer engineering from the University of Central Florida in 1994. For the past 18 years he has been involved with instrumented force-on-force ranges. He is currently the lead engineer for the INVEST STO in the Research and Engineering Directorate of STRICOM. His research interests are in the areas of parallel processing, artificial intelligence, and computer architecture. He is also pursuing his Ph.D. at the University of Central Florida.

Claude W. Abate is a Senior Military Analyst for Sherikon, Inc. and currently supports the Simulation Technology Division, Research and Engineering Directorate of STRICOM. He is a graduate of Florida Southern College and holds a Master of Science degree from Florida State University. Prior to joining Sherikon, Mr. Abate was a career Army officer with a variety of command and staff assignments in the US and overseas. A retired Colonel, his experience includes service as a training and doctrine developer and Training Brigade Commander at the US Army Armor Center and School, and as an opposing force commander at the National Training Center. He has two years of experience working with PM CATT on the Close Combat Tactical Trainer and is currently the project coordinator for the INVEST STO. His military schooling includes the Command and General Staff College and the Army War College.


DEVELOPING SYNCHRONIZED PLAYER MODELS FOR EMBEDDED TRAINING

Vanna McHale and Wesley Braudaway, Ph.D.
Science Applications International Corporation (SAIC)

Orlando, Florida

Abstract

The Synchronized Player Models (SPM) project supports the U.S. Army Inter-Vehicle Embedded Simulation Technology (INVEST) Science & Technology Objective (STO) Program [1]. The overall goal of the SPM project is to reduce the network bandwidth required to maintain synchronization between a live vehicle, a Modular Semi-Automated Forces (ModSAF) player model simulation, and its associated clone models in separate simulation environments. The SPM project conducted a series of experiments to determine the feasibility of the SPM objective. The first experiment, reported in this paper, focused on the ability to have computer-generated forces operate identically in separate simulation environments without requiring network communication. To obtain this level of synchronization, it is necessary to have a repeatable ModSAF that produces simulation events (e.g., vehicle location events, firing events, damage events) at the same simulation time in each run of the same scenario.

This paper discusses the use of repeatability to support synchronized embedded simulation and focuses on the modifications required to produce a deterministic, repeatable ModSAF. Experiments were conducted to test and demonstrate the repeatable ModSAF and are described in this paper. These ModSAF modifications, developed in support of SPM, were the basis for the repeatability mode currently supported in the ModSAF version 4.0 baseline.

Authors' Biography

Vanna McHale is a member of the Advanced Simulation Research Team within SAIC's Orlando Operation and a scientist on the Synchronized Player Models project. Ms. McHale received her B.S. in Computer Science from the University of West Florida and has been actively involved in the M&S community since 1992.

Wesley Braudaway, Ph.D., is a member and technical lead of the Advanced Simulation Research Team within SAIC's Orlando Operation. Dr. Braudaway was the Principal Investigator for the Synchronized Player Models project. He was also the System Architect for CCTT SAF and has been involved in several Computer Generated Forces related research and development projects. Dr. Braudaway received his Ph.D. from Rutgers University's Computer Science Department and has been actively involved in the M&S community since 1991.


DEVELOPING SYNCHRONIZED PLAYER MODELS FOR EMBEDDED TRAINING

Vanna McHale and Wesley Braudaway, Ph.D.
Science Applications International Corporation (SAIC)

Orlando, Florida

1. INTRODUCTION

Simulation technology advances can be leveraged into a form suitable for embedding into ground vehicles for training, mission planning, and other operational uses. Embedded Training (ET) is a capability designed into or added onto operational hardware and software systems that enables them to provide the simulation cues necessary to train crewmembers. This on-board technology will allow mission rehearsal and sustainment training to occur whether the soldiers are at home station or deployed.

As part of this simulation environment, collective operation requires the synchronization of multiple embedded simulations. Using today's distributed interactive simulation technology, the volume of data transfer required to support embedded training at the unit and battalion level is a significant obstacle because of the network and communication limitations of the fielded systems. Typically, the fielded systems rely on wireless communication, which provides very low bandwidth for simulation use. The Simulation, Training and Instrumentation Command (STRICOM) is conducting the Inter-Vehicle Embedded Simulation Technology (INVEST) Science and Technology Objective (STO) to address the technologies required to provide this ET capability [1].

Providing deterministic Computer Generated Forces (CGF) as part of the simulation environment solves part of the ET problem by removing the need for communication to synchronize the computer-generated models. Suppose each simulation environment has its own deterministic CGF to simulate all computer-generated models. The simulation environments will then produce exactly the same simulation event sequence for the same scenario without requiring any coordination. If the input to these CGFs is synchronized, then there is no need to synchronize the computer-generated parts of the simulation environments as is done today using the Distributed Interactive Simulation (DIS) protocol.

The SPM project chose the Modular Semi-Automated Forces (ModSAF) system as its simulation platform. This paper describes the modifications necessary to implement a deterministic, or repeatable, ModSAF. The paper is organized to describe the synchronization challenge for embedded simulation, the solution to synchronization using a repeatable CGF, the effort required to make ModSAF repeatable, and the remaining effort required to complete the synchronization of multiple embedded simulations.

2. PROBLEM DEFINITION

The INVEST objective is to provide a simulation environment for both individual vehicle operations and multiple vehicles that interoperate within a collective simulation. In the collective mode, the simulation environments (one for each live vehicle) must be synchronized to present an identical synthetic situation to each vehicle concurrently.

There are two types of synchronization required for this modeling (see Figure 1).

Figure 1: SPM Synchronization (simulation environments for live Vehicles A and B, with Synch1 between each vehicle and its own environment and Synch2 between the two environments)


The first synchronization activity occurs between the live Vehicle A and its simulation environment. The interaction of the vehicle and its simulation environment is achieved by a direct coupling of the vehicle's vision blocks and controls to the simulation environment. A virtual model in the simulation environment representing the live vehicle, called the player model, replicates the behaviors of an actual vehicle under the command and control of its crew. Any interaction between the live vehicle and simulated entities is achieved as a side effect of the interaction between the player model for the vehicle and the simulated entities within the simulation environment.

The second synchronization occurs between Vehicle A's simulation environment and Vehicle B's simulation environment. Contained in each simulation environment are CGF (vehicles, units, munitions, etc.) and a player model representing each live vehicle participating in the collective simulation. The synchronization activity makes each environment identical as defined by the state data and events affecting each virtual entity, regardless of whether they are computer-generated or player models. For example, in addition to the player model in live Vehicle A's own simulation environment, a "clone" of that player model must exist within Vehicle B's simulation environment and replicate the same behavior. This synchronization is implemented today using the DIS protocol and communication channels requiring very high bandwidths. However, in meeting the objectives of the INVEST STO, high bandwidth and physical connectivity are not feasible alternatives.

3. SYNCHRONIZATION THROUGH REPEATABILITY

The CGF of an embedded simulation can be synchronized by ensuring that each simulation environment is started at the same time and that the computer-generated models process the same events with respect to time in each simulation environment. Assuming that all other aspects of the simulation environments are synchronized, synchronization of the computer-generated models is achieved using a repeatable implementation of the CGF in each environment. Each CGF replicates the exact simulation of all computer-generated models in each simulation environment.

A CGF implementation is repeatable if the simulation events (e.g., vehicle location events, firing events, damage events) occur at the same simulation time in each run of the same scenario (where "scenario" is defined as a set of initial conditions and external events). By satisfying this requirement, the CGF implementation within each embedded simulation environment will result in a synchronized simulation environment as long as the initial conditions are the same, the simulation time is synchronized, and the external events are synchronized. No additional communication will be necessary to synchronize the CGF models in each simulation environment.
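The repeatability property described above can be illustrated with a minimal event-driven simulation sketch. This is hypothetical Python, not ModSAF code; the entities, event names, and scheduling delays are invented for the example. With a fixed random seed, identical initial events, and a simulation clock decoupled from real time, two runs produce byte-identical event traces.

```python
import heapq
import random

def run_scenario(seed, initial_events, end_time):
    """Tiny event-driven simulation; repeatable for a fixed seed and
    identical initial events (illustrative sketch, not ModSAF code)."""
    rng = random.Random(seed)        # fixed seed -> repeatable stochastic events
    queue = list(initial_events)     # (sim_time, entity, action) tuples
    heapq.heapify(queue)
    trace = []
    while queue:
        sim_time, entity, action = heapq.heappop(queue)
        if sim_time > end_time:
            break
        trace.append((sim_time, entity, action))
        if action == "move":
            # Schedule the next move; the delay is drawn from the seeded RNG,
            # so the whole future event sequence is deterministic.
            heapq.heappush(queue, (sim_time + 1.0 + rng.random(), entity, "move"))
    return trace

# Two runs with the same seed and initial conditions give identical traces,
# so no communication is needed to keep them synchronized.
run1 = run_scenario(42, [(0.0, "tank1", "move")], end_time=10.0)
run2 = run_scenario(42, [(0.0, "tank1", "move")], end_time=10.0)
assert run1 == run2
```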

4. IMPLEMENTING ModSAF REPEATABILITY

Although CGF implementations such as ModSAF are not typically designed to be repeatable, they can be modified to provide a repeatable mode of execution. Repeatability can be achieved by modifying the scheduling of simulation events, the generation of stochastic events, and the elimination or control of distributed events.

4.1 Event Scheduling

Many CGF implementations, including ModSAF, are event simulations that are managed to ensure a perceived real-time performance. They are designed to emulate a real vehicle by implementing a model that performs as close to the vehicle's actual real-time performance as possible.

A model's behavior is implemented as discrete event changes maintained in an event queue, such as changing its location over time, firing weapons, or taking damage. Each event in the queue is executed relative to a simulation time that is also updated periodically with respect to real time. An attempt is made to maintain a simulation time equivalent to real time so that the model's performance appears to correctly emulate an actual vehicle's real-time performance. This differs slightly from discrete event simulations, where events in the queue are executed at an explicit, predetermined simulation time.

While the generation and execution of simulation events are deterministic and repeatable, the synchronization of simulation time to real time is not repeatable. As the operating system interrupts the simulation to make system service calls at different times and for different periods from simulation run to simulation run, the execution of events with respect to real time (relative to the start of the simulation) will also vary. Because the discrete events are a sampling of continuous real time, and therefore an approximation of real time, the outcome of an event may vary if a different sampling occurs between the runs. For example, consider the same movement event for a vehicle in two different simulation runs (as shown in Figure 2).

Figure 2: Same Discrete Event in Different Simulation Runs (the move event executes at time T1 in run SR1 and at time T2 in run SR2)

Because real time advanced further in SR2 (possibly due to a longer system interrupt), the vehicle traveled farther during this event in SR2 than in SR1 in order to maintain the real-time approximation. Suppose that this update in both runs placed the vehicle within line of sight of an opposing vehicle, and that the opposing vehicle reacted immediately by firing its weapon. Because this observation and reaction occurred at different simulation times within the two simulation runs, the executions are no longer identical (i.e., not repeated).

Because of the great dependency between events, as a single event between the simulation runs diverges with respect to time, the differences between the runs will cascade until there are significant variations in the simulation outcomes of each.

To provide a repeatable mode for ModSAF, its event scheduling mechanism was modified. These modifications included changes to the clock and scheduler implementations and to the scheduling of behavior models.

4.1.1 Clock Implementation

To support real-time simulation, the ModSAF real-time clock and its simulation clock were tightly interconnected. As stated earlier, when running in a real-time mode, the simulation clock is dependent on the real-time clock and is advanced with respect to the real-time clock. However, what was unusual about the ModSAF implementation was that the procedure to advance the simulation clock also always advanced the real-time clock. The ModSAF clock implementation was modified to remove this reverse dependency.

In ModSAF's repeatable mode, ModSAF's real-time clock continues to be based upon the system clock. However, the simulation clock was modified to advance to the next event's time on the event queue after processing each event. The continuous frame update rate for the simulation clock was disabled, essentially severing the simulation clock's dependency on the real-time clock.

4.1.2 Scheduler Implementation

A thorough review of the ModSAF scheduler was conducted to determine areas resulting in random scheduling and/or invoking of events and function calls. ModSAF's scheduler implements four queues: a high-priority real-time queue, a periodic real-time queue, a deferred real-time queue, and a simulation time queue. All four queues utilize the real-time clock to invoke events on the queues. To produce a repeatable ModSAF mode, the simulation queue implementation was altered to be event driven and based upon the simulation clock rather than the real-time clock. Therefore, when a simulation queue event is invoked in this mode, the simulation clock is advanced to the time of the next event on the simulation queue.
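The clock and scheduler changes described above can be sketched as follows. This is an illustrative Python model of an event-driven simulation queue, not the ModSAF API; the class and function names are invented for the example. The key idea is that the simulation clock jumps to each event's scheduled time rather than being sampled from a real-time clock between frames.

```python
import heapq

class RepeatableScheduler:
    """Sketch of an event-driven simulation queue whose clock is
    decoupled from real time (illustrative names, not ModSAF code)."""
    def __init__(self):
        self.sim_clock = 0
        self.sim_queue = []       # (event_time, seq, callback)
        self._seq = 0             # tie-breaker keeps same-time ordering deterministic

    def schedule(self, event_time, callback):
        heapq.heappush(self.sim_queue, (event_time, self._seq, callback))
        self._seq += 1

    def run(self):
        # Event driven: advance the simulation clock to each event's time
        # instead of tracking a (non-repeatable) real-time clock.
        while self.sim_queue:
            event_time, _, callback = heapq.heappop(self.sim_queue)
            self.sim_clock = event_time
            callback(self, event_time)

fired = []
def tick(sched, t):
    """A vehicle 'tick' that reschedules itself on the simulation queue."""
    fired.append(t)
    if t < 3:
        sched.schedule(t + 1, tick)

s = RepeatableScheduler()
s.schedule(0, tick)
s.run()
assert fired == [0, 1, 2, 3]
```

Because nothing in the run loop consults wall-clock time, the callback sequence is identical on every execution, which is exactly the property the repeatable mode needs.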

4.1.3 Scheduling Behavior Models

To utilize the modified, event-driven simulation clock, the scheduling of behavior and physical model update functions was moved from the real-time queue to the simulation queue. All non-simulation functions, such as user interface and network functions, remained on the real-time queues.

The simulation processing of each vehicle occurs in that vehicle's "tick" or update function. All vehicle events are generated as part of this tick and, therefore, have the biggest influence on repeatable performance. Moving this function and its associated unit behaviors to the simulation queue provides the required ModSAF repeatability by removing them from the influence of the real-time clock. No other modification to the models or the modeling architecture was required.

4.2 Stochastic Events

Many CGF systems, including ModSAF, include an implementation of stochastic events. For example, given that a vehicle is hit by munitions, it may lose its ability to move, it may lose its ability to shoot, or it may be totally destroyed, depending on some statistical representation of the chances for each alternative. These events are implemented using algorithms that are based on a number taken from a random number sequence. A repeatable random number sequence is easily implemented by using the same random number seed between simulation runs.

ModSAF's random number seed was not initialized in one central location; this was corrected as part of this effort. ModSAF was modified to provide initialization within the random number generator library, supplying a constant random number seed for ModSAF's repeatable mode.
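A minimal sketch of the fixed-seed approach, assuming a hypothetical damage table (the outcome names and weights below are invented for illustration, not ModSAF's actual probabilities):

```python
import random

# Hypothetical damage outcomes for a hit, drawn from one seeded
# random-number stream, as described for the repeatable mode.
OUTCOMES = ["mobility kill", "firepower kill", "catastrophic kill", "no damage"]
WEIGHTS = [0.3, 0.3, 0.2, 0.2]

def assess_hits(seed, n_hits):
    """Return the damage outcome of n_hits hits using one centrally
    initialized, seeded random number generator."""
    rng = random.Random(seed)
    return [rng.choices(OUTCOMES, weights=WEIGHTS)[0] for _ in range(n_hits)]

# The same seed reproduces the same damage sequence on every run,
# so stochastic events no longer break repeatability.
assert assess_hits(7, 5) == assess_hits(7, 5)
```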

4.3 Distributed Events

The distribution of simulation events on a network also impacts repeatability, since it is difficult to guarantee the execution time of delivered events between simulation executions. This is due to network latency, the variable order of network packets, and the variability in times at which the operating system services its network operations. A repeatable CGF could be achieved either by not allowing distributed simulation events or by explicitly controlling the network events. Controlled network events can be achieved using a reliable network mechanism and some reliable time management capability that alleviates the problems of network latency and system network management.

In ModSAF's repeatable mode, this problem is avoided by providing repeatability only in a non-networked mode.

5. MODSAF REPEATABILITY EXPERIMENT

ModSAF's repeatability was confirmed by experimentation with several scenarios to demonstrate that ModSAF's repeatable mode generates a duplicate simulation outcome for the same scenario. For a particular scenario, two separate ModSAF executions were run to collect data. The data was collected to confirm that identical location update, fire, and damage events occurred at exactly the same simulation time in successive runs. Several other scenarios were used to determine the success of the repeatable ModSAF implementation relative to a variety of behaviors. Data collection and analysis was performed for the following areas:

Scheduler Analysis: to determine which function calls were placed on the scheduler, the number of calls made to the functions, and the cumulative processing time.

Queue Scheduling: to identify the function scheduling sequence for the deferred, periodic, and high-priority real-time queues and the scheduled simulation time queue.

Random Number Generator: to analyze the random number generator activity, determine its calling functions, and determine the calling event simulation time.

Vehicle Specific Data: to gather the vehicle's position (x and y coordinates) and obtain its damage assessment with respect to simulation time.

Using this data collection, graphs were created to represent and assess ModSAF's repeatable performance. The first graph (see Figure 3) compares the position variation for the same vehicle between two runs of the non-repeatable ModSAF. Position variation is defined as the distance between two instances of the same vehicle at the same simulation times in the two runs of the same scenario. This graph shows the cascading effect of event differences over time between two runs.
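The position-variation metric defined above can be computed as sketched below. This is illustrative Python; the layout of the run data (a mapping from simulation time to an (x, y) position) is an assumption for the example, not the paper's actual data format.

```python
import math

def position_variation(run_a, run_b):
    """Distance between the same vehicle's positions at matching
    simulation times in two runs of the same scenario.
    Each run maps simulation time -> (x, y)."""
    times = sorted(set(run_a) & set(run_b))
    return [math.dist(run_a[t], run_b[t]) for t in times]

# In repeatable mode the two runs coincide, so the variation is zero
# at every sampled simulation time (hypothetical sample data).
run1 = {0: (0.0, 0.0), 1: (10.0, 5.0), 2: (20.0, 9.0)}
run2 = {0: (0.0, 0.0), 1: (10.0, 5.0), 2: (20.0, 9.0)}
assert position_variation(run1, run2) == [0.0, 0.0, 0.0]
```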


Figure 3: Vehicle Location Difference, Non-Repeatable

The second graph (see Figure 4) shows the same change in position for the same vehicle from one run to another using ModSAF's repeatable mode. This graph illustrates that the vehicle's position throughout the entire scenario remains consistent from run to run (i.e., the distance between the vehicle's instances in the two runs is always zero).

Figure 4: Vehicle Location Difference, Repeatable (x-axis in simulation ticks)

In addition to location data, damage assessment data was also analyzed to determine the repeatability of fire and detonation events and overall vehicle damage. The third graph (see Figure 5) shows the damage assessment data collected for all eleven vehicles within the scenario. Each run is illustrated as a separate series to show that, within the non-repeatable ModSAF, no two events happen in the same manner at the same simulation time (time events in the graph).

Figure 5: ModSAF Damage Assessment, Non-Repeatable

The final graph (see Figure 6) represents the damage assessment data using the ModSAF repeatable mode. In this figure, the two runs are individually graphed, showing that run 1 is entirely overlaid with the data from run 2. This illustrates that over the entire scenario, all vehicles received the same damage at the same scenario time from one run to another.

Figure 6: ModSAF Damage Assessment, Repeatable

Numerous other scenarios were executed to test the robustness of the ModSAF repeatability mode. These scenarios incorporated a variety of additional behaviors and obstacles. In each case, the data collected about the simulation events was consistent from one run to the other, confirming that repeatability was attained within ModSAF.

6. FURTHER SPM EFFORT

The remaining SPM project goals [2] address the synchronization between a live vehicle and its player model, and the communication of behavior control updates. Current SPM activities are focusing on continued experimentation to refine the understanding of the player model control interface, behavior parameterization, and overall bandwidth issues. Additional experiments are being developed to investigate the capability to synchronize the simulated models using the state changes of vehicle behavior parameters rather than vehicle location updates. It is believed that this method of synchronization, combined with a repeatable CGF, will substantially reduce the bandwidth required to synchronize a distributed simulation of player models.

In future phases, additional experiments will be performed to identify and implement the control between ModSAF and the behavior control interface based upon the INVEST STO operations identified by the INVEST Architecture Working Group.

The ongoing SPM experimentation will also consider alternative behavior control strategies within ModSAF to control the synchronization of a player model and its associated clone models.

The results of current and future experiments will influence the final SPM architecture and implementation. It is intended that the SPM prototype will be integrated into a developed ET architecture whose objective is to demonstrate the viability of the embedded simulation approach.

7. CONCLUSIONS

The objective of the SPM architecture is to provide Computer Generated Forces that will interact with live vehicles through the use of a player/clone model approach. This objective must be achieved while using a lower bandwidth than the current DIS protocol approach for implementing simulation environment synchronization. To achieve this objective, it is believed that dead reckoning at the behavior abstraction level, together with reducing the requirement for communication to achieve synchronization, will reduce the bandwidth [2].

Through the use of a repeatable CGF, SPM achieves synchronization of the computer-generated models in different simulation environments without resorting to the DIS protocol. This approach assumes that the scenario (inputs and distributed events) implemented in each environment is synchronized. A repeatable CGF is achieved by scheduling the simulation events independent of real time, generating stochastic events from a fixed random number sequence, and either eliminating or controlling the processing of distributed events. The results of this effort were fully integrated into ModSAF version 4.0 to provide a ModSAF with a repeatable mode.

8. ACKNOWLEDGEMENT

The authors would like to acknowledge the efforts of Deanna Nocera, Gene McCulley, and Eddie Cason for their contribution to this analysis and the repeatable ModSAF development.

The authors would also like to acknowledge the efforts of Derrick Franceschini and Gene McCulley of the Advanced Distributed Simulation Technology (ADST-II) program for their complete integration and testing of the repeatable mode into the Army's ModSAF version 4.0 baseline.

The work presented in this paper was sponsored by STRICOM under contract N61339-97-C-0040.

9. REFERENCES

[1] Bahr, H.A., "Embedded Simulation for Ground Vehicles," Spring 97 Simulation Interoperability Standards Workshop Proceedings, Institute for Simulation & Training, Orlando, FL, March 1997.

[2] Braudaway, W., and Nocera, D.L., "Synchronized Player Models for Embedded Training," Spring 98 Simulation Interoperability Standards Workshop Proceedings, Institute for Simulation & Training, Orlando, FL, March 1998.


BEHAVIOR MODELING FRAMEWORK FOR EMBEDDED SIMULATION

Amy Henninger, William Gerber, Ronald DeMara, Michael Georgiopoulos, and Avelino Gonzalez
University of Central Florida

Orlando, FL.

ABSTRACT

Although embedded training has become the preferred approach for training military forces, it is surrounded by a variety of technical challenges. The Inter-Vehicle Embedded Simulation Technology (INVEST) Science and Technology Objective (STO) program explores technologies required to embed simulation in combat vehicles. One of these requirements is to provide a simulation environment in which computer generated forces, manned simulators, and live vehicles may interact in real time. Unfortunately, providing this geographically distributed and untethered real-time interaction is severely limited by the communications requirements imposed by the need to convey large amounts of data between the respective players. By extending the concept of Distributed Interactive Simulation (DIS) dead reckoning, a vehicle movement method, to the behavioral level, this limitation may be mitigated. The Vehicle Model Generation and Optimization for Embedded Simulation (VMGOES) project at the University of Central Florida is focusing on this aspect of the INVEST program. This paper presents the specifications and development process of VMGOES.

ABOUT THE AUTHORS

Amy Henninger is a doctoral candidate in computer engineering at the University of Central Florida, a Research Fellow at U.S. Army STRICOM, and a recipient of the Ninth Annual I/ITSEC Scholarship. She has earned B.S. degrees in Psychology, Industrial Engineering, and Mathematics from Southern Illinois University, an M.S. in Engineering Management from Florida Institute of Technology, and an M.S. in Computer Engineering from UCF.

William Gerber, Lt. Col., U.S.A.F. (Ret.), is a Ph.D. student in computer engineering at the University of Central Florida and a Research Fellow at U.S. Army STRICOM. He has a B.S.E.S. degree in Astronautics and Engineering Sciences from the U.S.A.F. Academy, an M.S.E. in Nuclear Engineering from the University of California at Los Angeles, and an M.S.Cp.E. in Knowledge-Based Systems from UCF.

Ronald DeMara is a full-time faculty member in the Electrical and Computer Engineering Department at the University of Central Florida. Dr. DeMara received the B.S.E.E. degree from Lehigh University in 1987, the M.S.E.E. degree from the University of Maryland, College Park in 1989, and the Ph.D. degree in Computer Engineering from the University of Southern California, Los Angeles in 1992.

Michael Georgiopoulos is an Associate Professor in the Department of Electrical and Computer Engineering at UCF. His research interests lie in the areas of neural networks, fuzzy logic, and genetic algorithms, and the applications of these technologies in cognitive modeling, signal processing, and electromagnetics. He has published over a hundred papers in scientific journals and conferences.

Avelino Gonzalez received his bachelor's and master's degrees in Electrical Engineering from the University of Miami in 1973 and 1974, respectively. He obtained his Ph.D. degree from the University of Pittsburgh in 1979, also in Electrical Engineering. He is currently a professor in the Electrical and Computer Engineering Department at UCF, specializing in human behavior representation.


BEHAVIOR MODELING FRAMEWORK FOR EMBEDDED SIMULATION

Amy Henninger, William Gerber, Ronald DeMara, Michael Georgiopoulos, and Avelino Gonzalez
University of Central Florida

Orlando, Florida

INTRODUCTION

The combination of computer simulation and networking technologies has provided the U.S. military forces with an effective means of training through the use of Distributed Interactive Simulation (DIS). DIS is an architecture for building large-scale simulation models from a set of independent simulator nodes (Smith, 1992) that represent one or more entities in the battlefield simulation. By communicating over a network via a common protocol, these entities are able to exist simultaneously and interact meaningfully in the same virtual environment. Currently, however, the ability of live vehicles to interact with these simulated forces in the virtual world is constrained by the communication requirements needed for real-time interoperability. Eliminating or reducing this impediment would enhance military training in a number of ways. For example, it would diminish the costs associated with having live vehicles travel to maneuver ranges for live exercises. Also, by shifting more of the training to operational units, it would reduce the costs associated with the training schools. In essence, the military could rely less on formal school-house training, more on deployable training systems, and fundamentally make training more readily available on an "as-needed" basis.

To accomplish these objectives, the Department of Defense has recently initiated an effort to determine how embedded training and advanced simulation technologies could be used to overcome the obstacles surrounding this technology. One problem, for instance, is that in order for a driver of a live vehicle to train in a virtual domain, he must be able to traverse the artificial/virtual terrain. Correspondingly, he must be able to see the other live and virtual entities on the virtual battlefield and interact with them in real time. To accomplish this, the embedded training systems must sustain the transfer of massive volumes of data. Unfortunately, the networking and communications limitations of currently fielded systems make the transfer of this data using current DIS-supported techniques a strenuous task.

Current forms of DIS dead reckoning are vehicle movement methods used to reduce DIS packet traffic. By communicating a given vehicle's location, velocity, and acceleration to other DIS simulators, the models residing on these simulators can predict the unperturbed near-term physical location of the vehicle. In the event that this vehicle begins to deviate from its predicted path, the simulator responsible for creating the entity will send out an update of the vehicle's true location to the other simulators. Thus, the predictive utility of the dead-reckoning model is pivotal to the success of network traffic minimization.
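Movement-level dead reckoning of this kind can be sketched as follows. This is an illustrative Python sketch of the general technique, not the DIS standard's algorithms; the update threshold is an assumed tuning parameter for the example.

```python
def dead_reckon(p0, v, a, dt):
    """Predict position from the last reported state using the
    kinematic extrapolation p = p0 + v*dt + 0.5*a*dt^2."""
    return tuple(p + vi * dt + 0.5 * ai * dt * dt
                 for p, vi, ai in zip(p0, v, a))

def needs_update(true_pos, predicted, threshold):
    """The owning simulator sends a state update only when the true
    position drifts from the prediction by more than the threshold,
    which is how dead reckoning reduces packet traffic."""
    err = sum((t - p) ** 2 for t, p in zip(true_pos, predicted)) ** 0.5
    return err > threshold

# A vehicle last reported at the origin moving at 10 m/s along x:
pred = dead_reckon((0.0, 0.0), (10.0, 0.0), (0.0, 0.0), dt=2.0)
assert pred == (20.0, 0.0)
assert not needs_update((20.5, 0.0), pred, threshold=1.0)  # within tolerance
assert needs_update((25.0, 0.0), pred, threshold=1.0)      # update required
```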

The requirement to transfer enormous volumes of data, coupled with the communication limitations of currently fielded systems, makes using currently existing DIS methods an inadequate approach. Bahr and DeMara (1996) suggest that extending the concept of DIS dead reckoning to the behavioral level may reduce DIS traffic more than merely applying DIS dead reckoning to vehicle movement tasks. Figure 1 illustrates the DIS dead-reckoning concept applied to embedded training and simulation. As indicated by Figure 1, this concept requires the distributed processing of multiple vehicle models because every live or simulated vehicle is represented by a model and every model is resident on every vehicle. The vehicle model (VM) serves to predict the actions of the vehicle it represents. When the actions of the vehicle are consistent with the actions predicted by the vehicle's model, all of the copies of that vehicle's model are correctly reflecting the live vehicle's actions. In this instance, the interaction between the other vehicles and the vehicle model in the virtual world is an accurate representation of the vehicles' interactions in the real world. However, if the actions of the vehicle are not consistent with the actions predicted by the vehicle's model, the copies of that vehicle's model are not correctly reflecting the live vehicle's actions. In this instance, the interaction between the other vehicles and the vehicle model is not consistent with their real-world interaction.

As indicated in Figure 1, a system that extends the DIS dead-reckoning concept to the behavioral level requires the identification of discrepancies between the behavior of an actual vehicle and that vehicle's model. The portion of this system that identifies and classifies these discrepancies is referred to as the Difference Analysis Engine (DAE) in Figure 1. By comparing the state of the vehicle model with the state of the actual entity, the DAE identifies whether discrepancies exist in the behavior as well as the position. If there are discrepancies, the DAE determines whether an update is necessary and what that update should be. The types of information provided by the DAE are specified in a later section of this paper.
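A single DAE comparison step might look like the following hypothetical sketch, which assumes a vehicle state reduced to a behavior label and a position. The function name, state layout, and threshold are invented for illustration and are not taken from VMGOES.

```python
def difference_analysis(model_state, actual_state, pos_threshold=1.0):
    """Compare the vehicle model's prediction with the live vehicle's
    reported state and decide what updates, if any, must be broadcast
    to all remote copies of the model.

    Each state is a dict with a 'behavior' label and a 'pos' tuple.
    """
    updates = []
    # Behavioral discrepancy: remote models are executing the wrong context.
    if model_state["behavior"] != actual_state["behavior"]:
        updates.append(("behavior", actual_state["behavior"]))
    # Positional discrepancy: fall back to a classic dead-reckoning correction.
    drift = sum((m - a) ** 2 for m, a in
                zip(model_state["pos"], actual_state["pos"])) ** 0.5
    if drift > pos_threshold:
        updates.append(("position", actual_state["pos"]))
    return updates  # an empty list means no network traffic this cycle
```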

Figure 1. DIS Dead-Reckoning Approach Extended to Behavioral Level

This paper offers a framework for the development of the VM and the DAE. It also addresses the integration of the two components into the full system, known as Vehicle Model Generation and Optimization for Embedded Simulation (VMGOES).

SCOPE OF MODEL

Frequently, DIS simulations use computer-controlled combatants known as Computer Generated Forces (CGFs) to populate the battlefield. The behavior of a CGF may be generated by a human operator assisted by software, in which case the class of CGF is referred to as a semi-automated force (SAF), or generated completely by software, in which case the class of CGF is referred to as an autonomous force (AF). The behaviors generated by CGFs are based on doctrine and represent a wide variety of tasks with a reasonable level of detail. Because these behavioral models are fashioned entirely by doctrine, they emulate standard procedures that are acquired from declarative knowledge (i.e., manuals and interviews) and provide a range of feasible behavior. However, these models of behavior provide no representation for the 1) implicit knowledge or 2) intrinsic performance characteristics that make "live entities" unique from one another. For example, the current CGF behavioral models used in DIS exercises may simulate the movement of a vehicle to a given location by some standard movement model, but they do not "individualize" that movement by either assigning or simulating human performance characteristics (e.g., a tendency to hug the side of the road, a propensity to exceed the speed limit, etc.). Thus, behavioral models fashioned entirely by doctrine are often characterized as yielding responses that are "canned", "predictable", or "too perfect". However, the fact that these behaviors are "canned" or "preprogrammed" in no way suggests that they are simplistic. Prevalent SAF systems have integrated hundreds of thousands of lines of code to successfully emulate the command and control hierarchy of a military unit and its operation on the battlefield. By providing a variety of planned behaviors (e.g., "Conduct a Tactical Road

March", "Attack By Fire", "Service Station Resupply", etc.), situational awareness and assessment, and reactive behaviors (e.g., "Breach a Minefield", "Call for Indirect Fire", "Actions on Contact", etc.), they have successfully provided suitable friendly and enemy forces to populate the battlefield.

The models to be used in this project are conceptually similar to CGF models, but they are distinguishable by the addition of human performance characteristics. In other words, whereas a CGF may emulate the selection of a vehicle's cover and concealment position, extending the DIS dead-reckoning concept to the behavioral level requires the prediction of the vehicle's actual cover and concealment position. This necessary increase in detail for the VM, coupled with the research-oriented nature of this project, limits the initial efforts for VMGOES to an exercise smaller in scope than one may find in a typical DIS exercise.

The exercise used in VMGOES centers on a Blufor M1A2 tank platoon or section performing a Tactical Road March and executing an Actions on Contact task in response to a potential enemy threat (i.e., an Opfor T-72 platoon, section, or vehicle). A variety of control parameters can be modified by the VMGOES model users. This allows the users to exercise the model more fully in order to evaluate its ability to generalize. These parameters fall into two groups: (1) task parameters and (2) operational parameters. The parameters and their permissible ranges are defined below.

Task parameters that may be changed by the evaluators are expressed by task. These tasks include the Tactical Road March and tasks related to Actions on Contact maneuvers.

Tactical Road March Parameters

Tactical Road March parameters that may be modified include the route and the march rate.

Route - may be defined within the constraints of the assumptions/conditions (listed under the Assumptions section).

March rate - must be defined within the acceptable limits of the march rates delimited in the simulation.

Actions on Contact Parameters

Rules of engagement is the only parameter that may be modified to influence this task.

Rules of engagement - may be initialized as free, tight, or hold for either all or none of the Blufor M1A2 entities.

Operational Parameters

The following operational parameters may be changed in a VMGOES exercise:

Terrain - the area where the scenario is executed, within the constraints of the Assumptions section

Blufor Unit Size - tank section or tank platoon

Opfor Unit Size - single vehicle, tank section, or tank platoon, and

Opfor Unit Location - positioning (location and direction) of the Opfor unit
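The task and operational parameters above could be collected into a single configuration object along the following lines. All field names, units, and validation rules are assumptions made for illustration; they are not taken from the VMGOES Requirements Document.

```python
from dataclasses import dataclass

ROE_VALUES = ("free", "tight", "hold")          # per the task parameters above
UNIT_SIZES = ("vehicle", "section", "platoon")  # Opfor may be a single vehicle

@dataclass
class ExerciseConfig:
    """Hypothetical container for the user-adjustable VMGOES parameters."""
    route: list            # waypoints, subject to the Assumptions section
    march_rate_kph: float  # must fall inside the simulation's allowed rates
    roe: str               # rules of engagement for the Blufor M1A2 entities
    blufor_size: str       # "section" or "platoon" only
    opfor_size: str        # any entry of UNIT_SIZES
    opfor_location: tuple  # (x, y, heading) of the Opfor unit

    def validate(self):
        assert self.roe in ROE_VALUES, "unknown rules of engagement"
        assert self.blufor_size in ("section", "platoon")
        assert self.opfor_size in UNIT_SIZES
        return True
```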

Assumptions

Lastly, the following conditions/assumptions will apply to the exercises considered by VMGOES:

1. Terrain does not include bodies of water (e.g., lakes, rivers, swamps, or ponds).

2. The model does not simulate Command Overrides, Fragmented Orders, or other externally initiated changes in orders.

3. Opfor (T-72 vehicles) operate according to the default behavior of the simulation unless specified otherwise for a scenario.

4. Blufor units should begin the exercise on the route, have their heading directed toward the end of the route, and be oriented closely parallel to their position on the route.

5. The manned module always represents the lead tank (i.e., the platoon leader).

6. There will be no modifications to the terrain (e.g., obstacles or minefields).

7. The M1A2 may not initiate calls for support (e.g., indirect fire).

8. The section of terrain east of Barstow Road and west of Hill 720 in the NTC-0101 terrain database will be used for development and tests.

9. The simulation's environmental factors (e.g., weather, tactical smoke, etc.) will not change during a scenario.


10. Tactical Road March tasks may only be assigned to terrain where the road is observable.

MODELING PARADIGM

To develop the vehicle model, VMGOES is using a machine learning technique known as Learning by Observation (Gonzalez et al., 1998). This technique facilitates the development of intelligent, computational models of human behavior. Although a relatively new concept in the discipline of machine learning, Learning by Observation has been successfully used in a variety of highly complicated, real-world tasks. Pomerleau (1992), for example, used Learning by Observation in the development of an Autonomous Land Vehicle In a Neural Network (ALVINN). In this project, Pomerleau trained a neural network to drive a vehicle on a one-lane road under ideal environmental conditions. Moreover, this network was able to generalize its training to perform satisfactorily on two-lane as well as dirt roads, and under adverse environmental conditions (snow, rain, etc.). Expanding on this work, Pomerleau et al. (1994) developed "smart" vehicles as part of the Advanced Research Projects Agency's (ARPA's) Unmanned Ground Vehicle (UGV) program, intended to reduce the need for human presence in hazardous situations. These vehicles are capable of driving themselves at speeds up to 55 mph for distances of over 90 miles on public roads. Moreover, they are capable of driving both during the day and at night, driving on a variety of roads, avoiding obstacles, and even performing parallel parking.

With respect to the vehicle models in this project, Learning by Observation will be used to learn human decision-making skills (e.g., reactive transitions, route planning, selection of cover and concealment) and low-level human control strategies (e.g., route following, scanning, etc.). Ultimately, these behaviors will be learned through the observation of a human expert tank commander. However, as a preparatory step, VMGOES is currently developing a VM prototype by learning these behaviors through the observation of a ModSAF M1A2 entity. In other words, instead of using a human as the expert whose behavior is learned, VMGOES is initially using a ModSAF entity as the "expert" whose behavior is learned. This prototype model is referred to as VMModSAF. It is anticipated that this prototype work will assist in the identification of technical issues that may arise in the second phase of the study and, ultimately, will increase the likelihood of successful results in the final system.
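As a loose illustration of the Learning by Observation idea (not the actual VMGOES or ALVINN machinery), an observer can log (sensor vector, expert action) pairs during a run and later reproduce the expert's choice for the most similar observed situation. A nearest-neighbor lookup stands in here for the real learning method.

```python
def observe(log, sensors, action):
    """Record one (sensor vector, expert action) pair from the observed run."""
    log.append((tuple(sensors), action))

def imitate(log, sensors):
    """Reproduce the expert's action for the nearest observed situation.
    A deliberately simple stand-in for the learning machinery."""
    def dist(observed):
        return sum((a - b) ** 2 for a, b in zip(observed, sensors))
    nearest = min(log, key=lambda pair: dist(pair[0]))
    return nearest[1]
```

Any learner that maps sensor inputs to expert outputs could replace `imitate`; the essential point is that knowledge is acquired from observed behavior rather than hand-coded doctrine.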

VMGOES DEVELOPMENT

In addition to using ModSAF to supply the M1A2 "expert" entity from which VMModSAF will learn, the ModSAF system is being used to provide the simulated environment in which the vehicle models interact. This allows VMGOES researchers to focus on the development of accurate behavior-based models as opposed to the implementation issues pertaining to vehicle simulation (e.g., physical modeling, weapons modeling, network interface, etc.). The VM is embedded in ModSAF and receives input data consistent with sensory information obtainable from the controls and display systems resident in an M1A2. The VM output contains the commands and parameters needed to control the vehicle's motion and weapons execution. The DAE, alternatively, runs as a separate process and receives input from both the vehicle model and the (live or simulated) master vehicle's interface. In both the VMModSAF case and the vehicle model derived from the human expert (VMMM), this interface supplies sensory information and dead-reckoning type data pertinent to the given model's behavior. This is also true of the interface to the (live or simulated) master vehicle. Once it receives these inputs, the DAE identifies whether discrepancies exist between the vehicle models and the master vehicle and sends out the necessary updates. The updates sent out by the DAE contain one or more of the following four types of information: 1) position, orientation, and other basic dead-reckoning information, 2) model parameter information, 3) behavior enumeration, or 4) action enumeration. Examples of updates containing these types of information are provided in the following section.
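The four update types might be represented as a simple tagged message, as in the hypothetical sketch below. This enumeration is an illustration, not the actual VMGOES update format.

```python
from enum import Enum
from dataclasses import dataclass

class UpdateType(Enum):
    """The four kinds of information a DAE update may carry."""
    DEAD_RECKONING = 1   # position, orientation, other basic DR corrections
    MODEL_PARAMETER = 2  # e.g., an adjusted preferred road speed
    BEHAVIOR = 3         # enumerated reactive behavior, e.g., "Assault"
    ACTION = 4           # enumerated low-level action

@dataclass
class DAEUpdate:
    kind: UpdateType
    payload: object  # type-specific content of the correction
```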

Software Engineering Model

As illustrated in Figure 2, VMGOES has adopted an Incremental Model (Schach, 1993) software engineering process. Using this type of model, the product is designed, implemented, integrated, and tested as a series of thirteen incremental builds, where each build is represented by a numbered circle. Each build consists of code pieces that interact to provide a specific functional capability. For the most part, the stages down the left-hand column of Figure 2 represent the VM builds, and the stages across the bottom two rows represent the DAE builds as they are integrated with the VM. Of the two rows representing the DAE builds, the top row represents the development cycle of VMGOES using a ModSAF entity as the "expert", and the bottom row represents the development cycle using the human expert. Finally, the integration of the two components (VM and DAE) occurs at the intersections of the column and rows.

Figure 2. VMGOES Development Model

The thirteen VMGOES model builds presented in Figure 2 are enumerated and defined below according to their functional divisions. Builds 1 and 2 can be described as:

1. Train VM1 to replicate reactive behavior context transitions by observing a ModSAF M1A2 CGF entity. In this build, the reactive behaviors are enumerated as part of the training set.

2. Train VM2 to replicate reactive behavior context transitions by observing a ModSAF M1A2 CGF entity. In this build, the reactive behaviors are not enumerated as part of the training set.

In both Builds 1 and 2, a ModSAF M1A2 entity serves as the "expert" from which knowledge is acquired. Also, both models resulting from these builds focus on the acquisition of knowledge pertaining to unit-level reactive behaviors. VM1 and VM2 are both trained with data containing sensory information (input). The difference, however, is that the output data used to train VM1 includes the reactive behavior enumeration, whereas the VM2 model does not have access to this enumeration. As a result, VM2 must additionally employ some strategy to infer the reactive behavior type that should be associated with a given input vector or cluster of vectors. This second build better reflects the actual task at hand: to learn behaviors from a human expert. In other words, since the human expert will not be verbalizing or enumerating his behavior, the methodology for developing the final models in this project must be capable of inferring what that behavior is. It is anticipated that methods developed in Build 2 will assist in meeting this requirement.

Builds 3 and 4 can be described as:

3. Train VMModSAF to replicate context transitions and actions for the VMGOES test scenarios by observing a ModSAF M1A2 CGF entity.

4. Train VMMM to replicate context transitions and actions for the VMGOES test scenarios by observing a human expert in an M1A2 Manned Module.

The functionality provided by Builds 3 and 4 is specific to the VMGOES test scenarios as defined in the VMGOES Requirements Document. These are briefly described in the Scope of Model section of this paper. The difference between Builds 3 and 4 is that Build 3 uses a ModSAF entity as the "expert", whereas Build 4 uses a human expert. As previously discussed, it is anticipated that modeling strategies learned by the VMGOES team in Build 3 will be useful in the development of the final vehicle model (VMMM) in Build 4.

Builds 5 and 6 can be described as:

5. Integrate VMModSAF with DAE dead-reckoning control (DAEDR) to evaluate VMModSAF/DAEDR for the VMGOES test scenarios.

6. Integrate VMMM with DAEDR to evaluate VMMM/DAEDR for the VMGOES test scenarios.

Builds 5 and 6 are simply integration checkpoints in the development cycle. In both of these builds, the vehicle models are being integrated with the

DAEs' basic dead-reckoning control mechanism. This will enable the DAE to update the vehicle model's position or orientation with basic dead-reckoning type parameters in the event that the VM has deviated from the path pursued by the master vehicle.

Builds 7, 8, and 9 can be described as:

7. Train the DAE context transition override control (DAECTO) of VMModSAF to recognize context transitions for the VMGOES test scenarios. In this build, reactive behavior transitions are supplied as part of the training set.

8. Train the DAECTO of VMModSAF to recognize context transitions for the VMGOES test scenarios. In this build, reactive behavior transitions are not supplied as part of the training set.

9. Train the context transition override control (DAECTO) of VMMM to recognize context transitions for the VMGOES test scenarios.

In general, these builds focus on the context transition override control of the DAE. This control mechanism allows the DAE to update the VM's enumerated behavior type and is used when the DAE identifies the behavior/context of the live vehicle as being different from the behavior enumerated by the vehicle's model. For example, if the VM is performing a "Withdraw" and the DAE determines that the live vehicle is performing an "Assault", the DAE directs the VM to change its behavior to an Assault.

Specifically, Builds 7 and 8 use ModSAF as the "expert" and Build 9 uses a human in a manned module as the expert. Additionally, Builds 7 and 8, like Builds 1 and 2, are distinguishable by the availability of behavior enumerations in the output of the training set.

Builds 10 and 11 can be described as:

10. Train the DAE unrecognized context transition override control (DAEUCTO) of VMModSAF to recognize unrecognizable context transitions for the VMGOES test scenarios.

11. Train the DAEUCTO of VMMM to recognize unrecognizable context transitions for the VMGOES test scenarios.

Builds 10 and 11 focus on providing the DAE with the capability to completely control the vehicle model when the DAE is unable to recognize what the live vehicle is doing. Again, Build 10 uses the ModSAF M1A2 entity as the "expert" and Build 11 uses a human expert.
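The override logic developed across Builds 7 through 11 can be summarized in a hypothetical decision function: when the DAE's classifier recognizes a live context that differs from the VM's current behavior, it overrides that behavior; when it cannot recognize the live context at all, it takes full control of the VM. The names and return values below are invented for illustration.

```python
def context_override(vm_behavior, classified_live_behavior):
    """Decide the DAE's override action for one comparison cycle.

    classified_live_behavior is the behavior label the DAE's classifier
    assigned to the live vehicle, or None when the classifier could not
    recognize the live context (the Builds 10-11 case).
    """
    if classified_live_behavior is None:
        # Unrecognized context: the DAE assumes full control of the VM.
        return ("take_full_control", None)
    if classified_live_behavior != vm_behavior:
        # Recognized mismatch: direct the VM to switch behaviors.
        return ("switch_behavior", classified_live_behavior)
    return ("no_override", None)
```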

Lastly, Builds 12 and 13 can be described as:

12. Develop procedures for refining the previously trained DAEUCTO/VMModSAF off-line (adjusting DAEUCTO/VMModSAF parameters or contexts) for the VMGOES test scenarios.

13. Develop procedures for refining the previously trained DAEUCTO/VMMM off-line (adjusting DAEUCTO/VMMM parameters or contexts) for the VMGOES test scenarios.

Once VMGOES becomes functional, it will have access to more observational data, and those data may help further explain behavior. Builds 12 and 13 capitalize on this fact by providing a mechanism to capture those data and refine the vehicle models.

SUMMARY

This paper described a modeling framework for the development of a system designed to reduce the communications bandwidth required for an inter-vehicle embedded simulation exercise. This system includes a behavior model of the vehicle in the exercise and a difference analysis engine tasked with keeping that model synchronized with its live counterpart. Presently, the vehicle model and the DAE are being developed using a ModSAF M1A2 entity as the "expert" from which knowledge is acquired. Future endeavors include efforts to apply the lessons learned from this phase of the study to the elicitation of knowledge from a human expert.

ACKNOWLEDGEMENTS

This work was sponsored by the U.S. Army Simulation, Training, and Instrumentation Command as part of the Inter-Vehicle Embedded Simulation and Technology (INVEST) Science and Technology Objective (STO), contract N61339-98-K-0001. That support is gratefully acknowledged.


BIBLIOGRAPHY

Bahr, H.A., and DeMara, R.F. (1996). A Concurrent Model Approach to Reduced Communication in Distributed Simulation. Proceedings of the 15th Annual Workshop on Distributed Interactive Simulation, Orlando, FL.

Gonzalez, A.J., Georgiopoulos, M., DeMara, R.F., Henninger, A., and Gerber, W. (1998). Automating the CGF Model Development and Refinement Process by Observing Expert Behavior in a Simulation. In Proceedings of the 8th Conference on Computer Generated Forces and Behavior Representation, Orlando, FL: University of Central Florida Institute for Simulation and Training.

Pomerleau, D.A. (1992). Neural Network Perception for Mobile Robot Guidance. Ph.D. Dissertation, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

Pomerleau, D., Thorpe, C., Langer, D., Rosenblatt, J.K., and Sukthankar, R. (1994). AVCS Research at Carnegie Mellon University. Proceedings of Intelligent Vehicle Highway Systems America 1994 Annual Meeting, pp. 257-262.

Schach, S.R. (1993). Software Engineering. Aksen Associates Incorporated Publishers, Boston, MA.

Smith, S., and Petty, M. (1992). Controlling Autonomous Behavior in Real-Time Simulation. In Proceedings of the Second Conference on Computer Generated Forces and Behavior Representation, Orlando, FL: University of Central Florida Institute for Simulation and Training.