
Emergence-Oriented Programming

Daniel W. Palmer, Dept. of Math. & Comp. Sci., John Carroll University, Cleveland, OH, USA ([email protected])

Marc Kirschenbaum, Dept. of Math. & Comp. Sci., John Carroll University, Cleveland, OH, USA ([email protected])

Linda Seiter, Dept. of Math. & Comp. Sci., John Carroll University, Cleveland, OH, USA ([email protected])

Abstract – In this paper we describe Emergence-Oriented Programming (EOP), a novel, human-centric technique for engineering swarm algorithms at a higher level of complexity than those developed with simple reactive agents. The process is iterative, building modules of behavior that can be layered to produce solutions that converge to the desired emergent goal faster than reactive swarms do. The layers are modular and can be independently applied, mirroring the arbitrarily nested cognitive model proposed by Baas and Emmeche. The layers are produced by external observers recognizing and reinforcing patterns within swarms that are not visible at lower levels. Each layer builds upon the previous one, leading to emergence, but the entire hierarchy can be mechanically collapsed into executable if-then rules based on robot primitives. We demonstrate portions of this technique to improve on the reactive swarm approach for solving the 4-color mapping problem.

Keywords: Swarm Engineering, emergent behavior, software development, aspect-oriented programming.

1 Introduction

President Harry S Truman once said, "I have found the best way to give advice to your children is to find out what they want and then advise them to do it." Mangling his intentions, the quote also applies to emergent behavior: "...the best way to write a swarm program to do something specific is to find out what it does and then adopt that as the thing you want it to do." It is much easier to steal an existing swarm algorithm and apply it to a specific domain than to build a swarm algorithm in that domain from scratch. There are many examples of this, including puck clustering systems, ant colony optimization[4], honey bee routing algorithms, etc. One reason this strategy works well is that biological swarms have been refining their individual behaviors through natural selection for uncounted generations.

Developing agent algorithms to produce specific emergent results requires encompassing a very wide range of expressiveness. At one extreme are complex swarm behaviors that do not easily decompose into discernible patterns of cause and effect for an external observer. At the other extreme is programming an autonomous agent that only gains information about the world through fixed sensors, and can only interact with the world through predefined action primitives. It is comparatively easy to understand how a simple set of executable rules governs the behavior of a single agent, but it is not at all clear from those rules how a swarm of interacting agents will behave.

Humans are very good at translating well-understood algorithms into formal representations. We have a natural ability to understand step-by-step instructions and to decompose larger problems into smaller ones. Both skills are necessary in writing procedural or object-oriented programs. However, humans do not intuitively understand emergence - often our "gut" predicts the opposite of what actually occurs: traffic jams travel in the opposite direction of the flow of traffic, slowing down information transfer in a system can speed up the decision-making process as well as improve its accuracy, increasing the amount of memory available for a page replacement algorithm can decrease system performance, introducing a destabilization factor into a system can make it more stable, having fewer options leads to increased overall satisfaction, etc. Thus it is very easy to express desired, system-wide results (e.g., increase throughput, reduce resource usage, form groups by an external characteristic, satisfy constraints, avoid bottlenecks), but it is difficult to identify the specific low-level behaviors that will produce those results.

Several techniques have been suggested to bridge this gap. Work has been done with genetic algorithms (GA) to automatically produce agent programs by mimicking many generations of simulated natural selection. This approach succeeds when the gap is small or can be broken down into multiple smaller sub-problems[8]. Without expressible intermediate goals, fitness functions have difficulty recognizing or rewarding progress. Thus, it becomes a much less automated process and requires intelligent problem decomposition.
Alternatively, Icosystems has developed a GA-based system with fitness functions that can recognize "progress" without expressing these intermediate goals in software. Human observers interact with the results of a simulated swarm and select those that they find "interesting" [3]. The humans are the fitness function. The ideal way to program a swarm is to simply shout out high-level objectives and let the swarm figure out how to solve the problem. We have used this approach in our experiments involving human swarms and find that it works quite well (see Figure 4). Unfortunately, software systems and autonomous robots are not yet capable of reacting to arbitrary voice commands in spoken language. Nevertheless, it does present an ideal to strive for. Instead, we pursue the more practical, but more labor-intensive, approach outlined in the solid arrows of Figure 1. The developers of a swarm must have a high-level goal in mind for the swarm to accomplish (Desired System-Level Behaviors). Using the technique of Emergence-Oriented Programming (EOP) described in this paper, they can produce agent rules (Agent Behavior Rules) that generate emergent behavior to accomplish those goals. These rules can be mechanically translated into robot primitives (Executable Robot Commands, or simulation equivalents). Then, either a physical robotic swarm or a software simulation can be run to demonstrate and observe the emergent problem solving (Realized Emergent System). It is the long way around, but it bridges the gap between conceptualizing what the swarm should do and having it happen.

[Figure 1: four boxes (Desired System-Level Behaviors, Agent Behavior Rules, Executable Robot Commands, Realized Emergent System) connected by arrows labeled Emergence-Oriented Programming, Compilation, and Simulation & Physical Swarms.]
Figure 1. Emergence-Oriented Programming Approach

2 A Model of Emergence

To demonstrate the power of EOP, we first present the model of emergence upon which our methodology is built. Baas and Emmeche[2] define the following framework for emergent structures: Let S1 be a collection of general systems or "agents" (S1i is an arbitrary agent), Int1 be interactions between agents, and Obs1 be observation mechanisms for measuring the properties of agents to be used in the interactions. The interactions then generate a new kind of structure, S2 = Result(S1, Obs1, Int1), which is the result of the interactions among the agents. S2 is an emergent structure, which may be subject to new interactions Int2 as well as new observational mechanisms Obs2. P is an emergent property if P∈Obs2(S2) and P∉Obs2(S1i). A property is considered emergent if it is observable or explainable within the emergent structures and their interactions, yet the same observation mechanism can neither observe nor explain the property in the realm of the initial underlying structure. Baas and Emmeche illustrate this using the concept of viscosity; it is only observable in the presence of large numbers of liquid molecules and has no meaning at all for an individual molecule. An additional example lies in the domain of programming languages. It is relatively easy for an experienced Java programmer to look at a specific Java program and identify the presence of object-oriented design patterns such as the abstract factory, strategy, or visitor patterns[6]. It is not realistically possible to look at the corresponding byte code or machine-level instructions and recognize the same patterns.
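To make the framework concrete, here is a minimal plain-Java toy (all class and method names are ours, not the paper's). A collective property, the spread of agent positions, plays the role of viscosity: Obs2 can measure it on the swarm S2, while the same observation carries no information for a single agent S1i, since one agent's spread is identically zero.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Toy illustration of the Baas-Emmeche framework (names are ours):
//   S1  = individual agents with a position property
//   Int1 = each agent steps away from its nearest neighbor
//   Obs2 = measuring a collective property (spread) on the resulting structure S2
public class EmergenceSketch {
    static class Agent { double x; Agent(double x) { this.x = x; } }

    // Obs2 applied to the collection: the spread of positions. For a single
    // agent it is always zero, so P is not in Obs2(S1i).
    static double spread(List<Agent> swarm) {
        double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
        for (Agent a : swarm) { min = Math.min(min, a.x); max = Math.max(max, a.x); }
        return max - min;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        List<Agent> swarm = new ArrayList<>();
        for (int i = 0; i < 20; i++) swarm.add(new Agent(rng.nextDouble()));
        double before = spread(swarm);
        // Int1: each agent repeatedly steps away from its nearest neighbor.
        for (int step = 0; step < 50; step++) {
            for (Agent a : swarm) {
                Agent nearest = null;
                for (Agent b : swarm) {
                    if (b == a) continue;
                    if (nearest == null || Math.abs(b.x - a.x) < Math.abs(nearest.x - a.x)) nearest = b;
                }
                a.x += (a.x >= nearest.x) ? 0.01 : -0.01;
            }
        }
        // true: dispersion, a property of the collective, increased
        System.out.println(spread(swarm) > before);
    }
}
```

The emergent observation (growing dispersion) is made by code outside any single agent; no agent's rule mentions it.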

Figure 2: Emergence Framework

Figure 2 shows the emergence framework relative to a swarm simulation. As a simulation executes, the discrete structure and interactions of simple agents are observed. Over time, the data will contain unexpected patterns of global structure and interactions; the agent model has evolved into a swarm. While Figure 2 only shows two layers of structure, the model can clearly grow to multiple layers. Observation is both a critical and labor-intensive component for explaining emergence. It can be a daunting task to examine massive quantities of data, looking for patterns and attempting to derive some intuition toward understanding why a set of simple low-level agent behaviors results in a particular collective behavior. However, a visual presentation of the emergent structures and behaviors may provide researchers with a better means for explaining how and why they arise. Ultimately, developing observer software can lead to automating some of the role of human observers in the process.

3 Emergence-Oriented Programming

Emergence-oriented programming mirrors the multiple structured levels of the emergence framework. At the base level is a simulation of a decentralized, multi-agent system executing a single, simple ruleset. The global structures that emerge from the interactions can be observed from outside the base simulation. These global structures interact and can be observed by a higher-level external entity. This layering is unbounded and makes sense as long as the global structures from the previous level have interesting interactions, producing additional emergent structure. At each iteration of the process, the observer can effect downward causation[4], changing the behavior at the lower level. This amounts to having meta-rules in an agent's ruleset that select among several possible rulesets depending on the agent's participation in higher-level emergent structures.

For example, consider betting on horses at a pari-mutuel race track. A gambler treats favorite and longshot bets differently, yet the payoff odds for a horse winning are directly related to the number of bets on that horse with respect to all bets placed. Therefore, as a group, all the bettors determine which horse is the favorite and which ones are longshots. This information is gathered by the track and immediately posted, affecting subsequent betting behavior. The example maps onto the terminology of section 2 as follows: an individual bettor is a member of S1; the entire collection of bettors and their interactions comprise S2; the rumor mill and conversations among bettors, as well as their previous bets, make up Int1; Obs1 consists of the betting-window clerks; and Obs2 is the track management, which observes all the bets and reports the updated odds. P, an emergent property, is the selection of the race favorite. The act of posting new odds is downward causation – changing the behavior of the bettors. Note that the track cannot determine the favorite by looking at the actions of a single bettor, P∉Obs2(S1i), and the individual bettors, even in groups, cannot determine the favorite without the track's feedback.
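The feedback loop in this example can be sketched as a small simulation. The following plain-Java toy (class and method names are ours, and the betting model is deliberately naive) shows the track computing odds from the aggregate pool (Obs2) and posting them so that they bias subsequent bets, which is the downward causation; the favorite, the emergent property P, is defined only on the pool, never on a single bet.

```java
import java.util.Random;

// Toy pari-mutuel loop (our own sketch): individual bets aggregate into a
// pool (S2); the track (Obs2) computes each horse's share and posts it, and
// the posted shares feed back into subsequent betting behavior.
public class ParimutuelSketch {
    // Obs2: the track derives each horse's share of the total pool.
    static double[] postedShares(int[] pool) {
        int total = 0;
        for (int p : pool) total += p;
        double[] share = new double[pool.length];
        for (int i = 0; i < pool.length; i++)
            share[i] = (total == 0) ? 1.0 / pool.length : (double) pool[i] / total;
        return share;
    }

    // The emergent property P: the favorite. It is defined only on the
    // aggregate pool, never on a single bettor's action.
    static int favorite(int[] pool) {
        int best = 0;
        for (int i = 1; i < pool.length; i++) if (pool[i] > pool[best]) best = i;
        return best;
    }

    public static void main(String[] args) {
        Random rng = new Random(7);
        int[] pool = {5, 4, 3};                   // initial bets on three horses
        for (int round = 0; round < 200; round++) {
            double[] share = postedShares(pool);  // downward causation: posted odds...
            double r = rng.nextDouble();          // ...bias the next bettor's choice
            int horse = r < share[0] ? 0 : (r < share[0] + share[1] ? 1 : 2);
            pool[horse]++;
        }
        System.out.println("favorite: " + favorite(pool));
    }
}
```

Because each new bet is drawn in proportion to the posted shares, early aggregate behavior reinforces itself, a simple rich-get-richer feedback of the kind the posted odds create at a real track.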

Arbitrarily nested layers with cascading dependencies are conceptually difficult to deal with, and difficult to implement on a "simple" autonomous agent. Fortunately, we can conceptually describe any single level and its immediately higher-level observer independently of specifics. Thus, we can "focus" our developmental attention at any level (call it f), and consider only it and its immediate observer (Obs(f)). All higher-level observers, Obs(...Obs(f)), have no direct effect on f, and any lower levels can be ignored, treating f as the base level. If we want to consider the interaction between level f and the level below (i.e., the level f is observing), we need only change the focus of consideration to level g, such that f = Obs(g). Thus consideration of all levels can be conceptually described by explaining only two levels, and all specifics of observation, upward and downward causality, and emergence can be expressed within those two levels. By demonstrating the base-level simulation, we can then describe by induction all levels within the model.

Consider the following example, illustrating this concept: a swarm of robots is deposited into a bounded region with the goal of finding and transporting large, randomly distributed items to the perimeter of the region. The items are too bulky to be moved by a single robot, so cooperation must play a role. A first-level algorithm would be to have robots do a random walk until they stumble onto an item and then begin pushing the item in a random direction. Over time, items that "accidentally" get pushed to the edge will stay there, because robots cannot get behind them and push them away. This algorithm will eventually position all the items at the perimeter. Observers watching the robots will see that they waste a lot of energy trying to push items that are too heavy, pushing against each other, and moving items unproductively. If we consider the robots' behaviors as the level of focus (f), then the observers that can see the wasted efforts are denoted Obs(f). One way to streamline the robots' efforts is to reinforce behaviors that lead to the robots forming groups before trying to push the items. The movements of the groups indicate that the swarm could be more efficient if the groups began to follow a dispersion algorithm to make it easier to find any remaining items. Now the level of focus, h (where h = Obs(f)), is the groups of robots, and observers of h (where Obs(h) = Obs(Obs(f))) watch the groups' behaviors.
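The first-level algorithm above amounts to a single memoryless rule. A minimal sketch in Java (the enum of primitives and the rule itself are our own illustration, not the paper's code):

```java
import java.util.Random;

// First-cut reactive rule for the transport task (a sketch; names are ours).
// Each tick, the rule maps the robot's immediate percept to an action with
// no memory and no coordination: random-walk until an item is sensed, then push.
public class TransportRule {
    enum Action { STEP_NORTH, STEP_SOUTH, STEP_EAST, STEP_WEST, PUSH }

    static final Random rng = new Random();

    // The entire level-f behavior: one if-then rule over the current percept.
    static Action decide(boolean itemSensed) {
        if (itemSensed) return Action.PUSH;  // push along the current heading
        Action[] walk = { Action.STEP_NORTH, Action.STEP_SOUTH,
                          Action.STEP_EAST, Action.STEP_WEST };
        return walk[rng.nextInt(walk.length)];  // otherwise, random walk
    }
}
```

Everything an Obs(f)-level observer later adds (grouping, then dispersion) is layered on top of a rule this simple.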

However, we need to be able to do more than just conceptually collapse the layers. At some point this multi-layered structure must be executed on an autonomous robot using something as simple as if-then rules. Given the requirement that the robots have onboard memory, we can demonstrate that the only limit on the complexity of the layered model is the amount of available memory. An autonomous robot is nothing more than a sensor platform with predefined primitive actions. Without storage memory, an agent can only be reactive - the current readings of the sensors wholly dictate the robot's immediate action. In this setting, the robot's behavior can be fully described by a finite-state automaton, a look-up table, or a fixed set of prioritized if-then rules whose predicates are sensor readings and whose consequents are primitives. The robot can also get information through communication with other agents, but the only information they can provide is the current readings of their sensors. In the limit, any agent in the swarm can only derive information about its surroundings through a fixed set of onboard sensors. By adding onboard memory, an agent can now store only two things: its own sensor readings and actions over time and space, or other agents' sensor readings and actions over time and space. The critical point is that all information onboard an agent-robot is based on its sensors and internal information (time stamps, agent ID tags, etc.).
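Such a memoryless reactive agent can be written down directly as a prioritized rule table. The following sketch (the sensor semantics and primitive names are hypothetical, chosen for illustration) fires the first rule whose predicate matches the current sensor vector:

```java
import java.util.List;
import java.util.function.Predicate;

// A memoryless agent as a fixed, prioritized if-then rule set (our sketch):
// predicates read only the current sensor vector; consequents are primitives.
// The first matching rule fires, equivalent to a lookup table or finite-state
// automaton over sensor readings.
public class RuleTable {
    record Rule(Predicate<double[]> when, String primitive) {}

    static final List<Rule> RULES = List.of(
        new Rule(s -> s[0] < 0.1, "REVERSE"),    // obstacle very close ahead
        new Rule(s -> s[1] > 0.8, "TURN_LEFT"),  // strong signal on right sensor
        new Rule(s -> true,       "FORWARD")     // default action
    );

    static String act(double[] sensors) {
        for (Rule r : RULES)
            if (r.when().test(sensors)) return r.primitive();
        throw new IllegalStateException("unreachable: default rule always matches");
    }
}
```

Adding onboard memory changes nothing structurally: stored past readings simply become extra entries in the sensor vector the predicates test.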

Thus, no matter how sophisticated the higher-level observers are, no matter how complex the upper levels of emergent structure are, they too are based only on an agent's sensor data and internal data. Likewise, no matter how convoluted or how many levels the downward causation traverses, the actions produced can be broken down into the base primitives the robot can execute. Therefore, swarm programmers can make liberal use of observers, extending them to as many levels as is practical, confident that whatever is produced as base-level actions can be mechanically compiled into executable instructions on an autonomous agent.

4 Implementation Framework

Emergent properties arise from the numerous interactions of many simple agents. It is difficult to encapsulate emergence in a traditional programming language. An object-oriented language such as Java is capable of encapsulating the properties and behavior of a single agent within a class instance, or object. Thus, Java provides a specific language mechanism for modularizing agents. However, emergent structure does not exist within a single agent, nor does it result from the execution of a single agent behavior; rather, it arises from the interactions of many agents over the course of time. Thus, to encapsulate emergence, we need a programming language construct that can express modularity across a set of objects and a set of object interactions. This requires encapsulation of a set of classes and class method invocations. Object-oriented languages do not support such modularity. A new programming language paradigm has been introduced in recent years to address such "cross-cutting" concerns [7]. Aspect-oriented programming (AOP) is a language model that supports the encapsulation of concerns that cross the object-oriented class boundary. AOP provides direct language support for modularizing emergent structure and behavior. For example, an aspect may be written that describes a set of method invocations, which may occur in multiple classes. Emergent behavior may thus be recognized when such a set of method invocations occurs during a program execution.

4.1 Using Aspect-Oriented Programming

Aspect-oriented programming allows the implementation to accurately reflect the conceptual layers of the cognitive model by creating modular structures that encapsulate the layers on the software side[7]. It supports modularized code that can be mechanically inserted into executable code at specific, user-definable locations.
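For illustration, this kind of cross-cutting construct can be approximated in plain Java with a dynamic proxy. This is a sketch of ours, not the paper's code, and far less expressive than what AspectJ provides declaratively, but it shows one module intercepting every invocation of a method foo() across implementations without touching the callers or the target class:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Plain-Java approximation of "before advice" on all calls to foo() (our
// sketch): the cross-cutting concern lives in one module instead of being
// scattered before each call site.
public class BeforeFooDemo {
    public interface Worker { void foo(); void bar(); }

    static final List<String> LOG = new ArrayList<>();

    static Worker withBeforeAdvice(Worker target) {
        InvocationHandler h = (proxy, method, args) -> {
            if (method.getName().equals("foo"))  // the "pointcut": calls to foo()
                LOG.add("before foo");
            return method.invoke(target, args);  // proceed with the original call
        };
        return (Worker) Proxy.newProxyInstance(
                Worker.class.getClassLoader(), new Class<?>[] { Worker.class }, h);
    }

    public static void main(String[] args) {
        Worker w = withBeforeAdvice(new Worker() {
            public void foo() {}
            public void bar() {}
        });
        w.foo(); w.bar(); w.foo();
        System.out.println(LOG.size());  // 2: advice ran only for the foo() calls
    }
}
```

AspectJ achieves the same separation declaratively, with a pointcut selecting the join points and advice woven into the bytecode at compile time, and without requiring callers to go through a proxy.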
The key point is that the set of locations cannot be encapsulated modularly at the lower layer, because the programming language at the lower level provides no construct for describing the set of execution points. For example, if we want to perform some logging operation directly before a particular set of invocations of the method foo( ), we can either manually insert code prior to each invocation of foo( ) throughout the Java program, or use a higher-level language such as AOP that has a structure to express all invocations of foo( ) simultaneously. By weaving the new AOP modules expressed at the higher level into the executable bytecode (in the case of Java), the original source is untouched, yet the desired functionality is integrated into the resulting executable. Many aspects can be applied to the same code, and aspects can be applied to other aspects. This powerful mechanism provides a direct, tangible representation of the layers within the emergence model described in section 3. We use the AspectJ compiler[1] to develop all code related to the visualization and implementation of downward causation of emergence, thus implementing the emergence framework without tangling the agent and swarm models.

4.2 Control Flow of Emergence-Oriented Programming

Figure 3. EOP Roadmap

Our framework for implementing emergence-oriented programming is depicted in Figure 3. Typically the design process loops through the boxes marked 2, 3, and 4, beginning with a visualization of the swarm's behavior. This is accomplished by applying AOP to gather appropriate data that is later displayed for analysis. The visual data is analyzed by humans (currently) and/or swarm-based software observers (envisioned for the future) looking for patterns that impact the emergent behavior of the swarm. The analysis leads to a higher-level understanding of the behavior of the swarm, which is fed back to individual agents. Thus, information is gathered at a higher level, and downward causation is used to alter the behavior of agents at the base level. The loop is repeated until the swarm becomes an efficient problem solver.

It may turn out that finding patterns is too difficult because of the randomness of the agents. In general, patterns can be found more easily if some of the variables are determined in advance. One way to implement this is to provide the ability to manually set the actions of individual agents and observe the resulting outcomes. The 2, 3, 5 loop allows for dynamic, human-driven what-if experimentation. By clicking on the visualized agents and picking their actions, it is possible to find patterns that create the next level of understanding of the swarm. This loop is continued until it is possible to implement downward causation (4) or the swarm's performance is acceptable.
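One plausible way to implement such manual overrides is a meta-rule that lets a human-forced action take priority over the agent's own rule for a fixed number of ticks. A sketch, with all names our own:

```java
import java.util.HashMap;
import java.util.Map;

// What-if experimentation hook (our sketch): a human-set override takes
// priority over the agent's rule-driven action for a set number of ticks,
// letting an observer pin some variables while watching the swarm respond.
public class AgentOverride {
    record Forced(String action, int ticksLeft) {}

    static final Map<Integer, Forced> forced = new HashMap<>();

    // Called from the clickable interface: force one agent's action.
    static void force(int agentId, String action, int ticks) {
        forced.put(agentId, new Forced(action, ticks));
    }

    // Called each tick: the forced action wins until it expires.
    static String nextAction(int agentId, String ruleAction) {
        Forced f = forced.get(agentId);
        if (f == null) return ruleAction;  // normal rule-driven behavior
        if (f.ticksLeft() <= 1) forced.remove(agentId);
        else forced.put(agentId, new Forced(f.action(), f.ticksLeft() - 1));
        return f.action();                 // human-forced behavior
    }
}
```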

[Figure 3 components: (0) Virtual Human Swarm Experiments; (1) Algorithm First Cut; (2) Visualization Supported by AOP; (3) High-Level Understanding of Emergent Algorithm and Feedback; (4) Software (AOP) Implementation of Downward Causation; (5) Interactive Human-Driven Downward Causation/Exploration; (6) Problem-Solving Algorithm.]


There is one more possibility if the process lingers too long in the 2, 3, 5 loop. Instead of using an individual programmer (5) to force actions, a virtual human swarm (0) experiment is run [10]. It is possible to limit all sensory input to a virtual human swarm, allowing researchers to design experiments that constrain the humans to match the agents' sensory capabilities. All human actions can be recorded and given to the visualization module (2) to begin another loop cycle. At this point, visualizing the movements of groups might indicate that the swarm would be more efficient if the groups began to follow a dispersion algorithm, making it easier to find any remaining items.

4.3 Three Facets of Human Involvement

In Figure 3 there are three places where humans perform integral roles in the EOP process: in component 3, humans are observers who visually assess the swarm's behavior, looking for patterns that encourage emergence; in component 5, humans dynamically alter the state of agents and observe their responses; and in component 0, humans participate in virtual swarm experiments for algorithm extraction. This section describes each facet in that order.

4.3.1 Humans as Observers and Pattern Recognizers

One difficulty in writing swarm algorithms derives from their imprecise debugging process. Programs are written at the agent level, but results can only be considered at the swarm level, so actual bugs can only be observed indirectly through the lens of hundreds or thousands of agent interactions. The nature of the offending code must be inferred from the high-level behavior, with the confident knowledge that the bug exists within the current program. However, as difficult as this is, it is much easier than the reverse process of trying to conjure up code that does not yet exist to produce better global behaviors not yet observed. Yet this is exactly the task of the swarm developer trying to produce more effective emergent systems. The primary common ground between these two processes is that both require good visualization tools. Developing sophisticated emergent code thus becomes an iterative process of visualizing global behavior through software, observing and analyzing it for constructive patterns, writing aspect components to reinforce these patterns, and then repeating the process at the next higher level.

The whole process begins with a baseline, reactive swarm that produces a desired emergent behavior. This particular swarm relies solely on randomness to eventually generate a solution within the problem space it inhabits. To build a more effective swarm, one with less wasted effort and faster solution convergence, we use the iterative EOP process. Each iteration starts with swarm researchers (humans) crowding around a terminal screen that visually reveals the behaviors at the agent level. This can mean coloring the agents according to an important characteristic, such as state, stability, or sub-goal, and then speculating on the low-level causes for the observed high-level behavior. As more and more observations are made, hypotheses about the inefficiencies in the agent algorithms gradually grow and ideas about the root causes accrue. When confidence is high, the developers write new aspects to automatically recognize the observed behavior identified by humans and trigger downward causation to produce more effective emergent behavior. For example, in an autonomous robot dispersion algorithm, the visualization tool revealed that a single isolated agent could sometimes wander into a large stable region, causing pockets of overcrowding and destabilizing the entire partial solution. By adding an aspect that recognized stable regions and resisted the effects of "rogue" agents, the simulation converged slightly faster and produced more uniform dispersions [9]. The process continues by creating new, higher-level visualizations of the augmented algorithm. Humans also determine when and to what extent the swarm algorithm is effectively accomplishing its intended goal. When the results are "good enough" for the current problem, the EOP development cycle can end. However, when the answer is "not good enough", there are additional avenues of attack available.
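The stable-region idea can be sketched as two small decision functions (the thresholds and names here are our own, not those of [9]): an observer-level test for stability, and a base-level repulsion step that is suppressed, via downward causation, while the observer reports the region as stable.

```java
// Downward causation in the dispersion example (a plain-Java sketch of the
// behavior such an aspect enforces; names and thresholds are our own).
public class StableRegionAdvice {
    // Obs(f): a region is judged stable when its agents' recent displacements
    // all stay under a threshold.
    static boolean isStable(double[] recentDisplacements, double threshold) {
        for (double d : recentDisplacements)
            if (d >= threshold) return false;
        return true;
    }

    // Base-level rule, modified from above: the repulsion response to a nearby
    // "rogue" agent is suppressed while the region is reported stable.
    static double repulsionStep(double nominalStep, boolean regionStable) {
        return regionStable ? 0.0 : nominalStep;
    }
}
```

The division of labor mirrors the model: the stability judgment lives with the observer, while each agent's rule only consults a flag it could receive over its normal communication channel.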

4.3.2 Humans Interact Dynamically with Swarm Simulations to Extract Complex Patterns

When the swarm behavior becomes too complex to extract patterns and understanding through passive observation, the technique depicted in the 2,3,4 loop of Figure 3 and described in the previous section is no longer beneficial. For humans to gain further insight into the agents’ dependencies and behaviors, they must interact dynamically, using the approach represented by the 2,3,5 loop in the diagram. The AOP visualizations are upgraded to provide a clickable, menu-driven interface that allows humans to directly alter agent states and behavior. For example, after watching a swarm algorithm visualization for a while without progress, a human observer might get a sense (without even being able to articulate why) that “agents near the perimeter are underperforming”, that “agents in a certain region tend to stay in that region”, or that some other simulation-specific flaw prevents the swarm from completing its goal. Exploring these “hunches” can lead to new insights and improvements to the algorithms. The person can produce immediate feedback by clicking on the offending agent or agents and forcing some behavior on them: initiate a random walk, pause for a set duration, increase the probability of interaction, or some other simulation-specific response. The human observer can now enter the realm of “What if?”, trying different constraints and monitoring their effect on performance. Patterns are much easier to recognize if inputs can be controlled. To enhance the pattern recognition and data extraction process, a human observer can also dynamically modify the frequency and characteristics of the information visualization. Part of the interaction process will reveal to the active human observers what information they lack, allowing them to identify the additional data to collect and display.

Another way to use the dynamic interaction is as an immediate-feedback prototyping process. In section 4.3.1, we describe how human observations and insights are implemented in software, allowing higher-level observations. It may be the case that some high-level observations are difficult to code, with several possible approaches to choose from. In this case, the active human observer can play the role of the highest-level observer, initiating several different flavors of downward causation with the click of a mouse. Once the most effective approach is identified, only that one needs to be implemented in software, and the human interaction can move up to the next level.
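The click-driven overrides just described can be prototyped as a simple dispatch of forced behaviors onto a selected agent. This is a minimal sketch: the Agent fields, override names, and parameter values are illustrative assumptions, not the interface of our visualization tool.

```python
class Agent:
    """Hypothetical agent state that a human override can manipulate."""
    def __init__(self):
        self.mode = "normal"        # current behavior mode
        self.pause_steps = 0        # steps to remain idle
        self.interact_prob = 0.5    # chance of interacting per step

# Each menu choice maps to a forced change in the selected agent's state.
OVERRIDES = {
    "random_walk": lambda a: setattr(a, "mode", "random_walk"),
    "pause":       lambda a: setattr(a, "pause_steps", 50),
    "more_social": lambda a: setattr(a, "interact_prob",
                                     min(1.0, a.interact_prob * 2)),
}

def on_click(agent, command):
    """Called by the visualization GUI when a human selects an agent
    and picks an override from the menu."""
    OVERRIDES[command](agent)
```

Because each override is just an entry in a table, new “What if?” experiments can be added without touching the agent algorithm itself.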

4.3.3 Virtual Human Swarm (VHS)

When even interactive observation no longer yields fruitful understanding, a human swarm experiment can be run to further explore the algorithm. In these experiments, a human is the controller for each agent in the swarm. Elsewhere, we describe the process of extracting swarm algorithms from observing physical human swarms that take place in a gymnasium (see Figure 4) [10]. To more precisely control the human swarm environment and more easily extract information, we developed virtual human swarm (VHS) experiments that take place in a networked virtual environment (see Figures 5, 6, 7). This system gives us the ability to control everything a human agent senses, and to record everything the agent does. The actions, as well as the location, of each agent over time are recorded into a database to be used later to observe the behavior of the humans in the VHS. Additionally, we can interview each participant to determine the factors that impacted their behavior and the reasons for their actions. It is also helpful to inquire about the problems they faced, obstacles to the solution, and possible options for improvement. For example, someone might say that “reaching the goal was easy once I found a cluster of other agents”, “it was too difficult to get other agents to help me”, or “it felt like there weren’t enough agents to address the problem.” These insights can be considered and folded into the next iteration of the EOP process. To run these experiments and do subsequent analysis of the results, it is absolutely necessary to have both the agent point-of-view and a global view of all the participants. Figure 6 shows a typical first-person perspective and Figure 7 shows the overhead, world-view of the virtual swarm. A very useful aspect of this system is that it gives swarm algorithm developers an agent’s-eye view of the problem. It is extremely difficult not to think in terms of a global algorithm when all data is presented in a global context. Seeing the agent’s viewpoint, with its limitations and constraints, forces the developers to adopt a more agent-based brand of thinking.

Figure 4. Physical human swarm shown clustering by color

Figure 5. Virtual Human swarm interacting across a networked virtual world

A VHS can be used as a tool in Emergence-Oriented Programming (EOP). At any step in the process, if researchers are unable to produce the next level in the EOP paradigm, a VHS experiment can be designed and run to match the sensorial capabilities of the agents. Patterns observed can be used to help produce a possible direction for designing the next level of abstraction. Note that, as a special case, a VHS can be applied as a first step for developing a first-cut swarm algorithm.

Figure 6. “Form a Line” experiment seen in the first-person perspective view of the virtual world (full visual capability)

Figure 7. Global information view of the virtual world – used for evaluating algorithms

5 EOP in Action

Emergence-Oriented Programming is an ongoing development project; we have been researching different components of it for several years and spent much of last year implementing and using it in a manual way. In this section we present our vision of how the complete, integrated system will work, and describe those portions that have been realized. We use the 4-coloring graph problem as the driving example to demonstrate the system. It is well known that the nodes of a planar graph can be colored using 4 or fewer colors such that no two nodes connected by an edge have the same color. We developed a first-cut swarm algorithm with stationary agents assigned to each node. An agent can determine whether or not its node’s color is in conflict with one of its neighbors. If it is in conflict, the agent randomly chooses a new color for the node from a palette. Under this algorithm, randomness is enough for the swarm to successfully color a planar graph. We use this as the base level, analogous to the Structure agents found in Figure 2. To understand how the swarm accomplishes this, and in the hopes of improving efficiency, the team applied AOP to create a visualization showing the colors of the nodes of the graph as they changed (see Figure 8a). The team, acting as observers, analyzes the visual data and notices that partial solutions form, but adjacent nodes in conflict quickly change color, causing a ripple effect that dissolves the partially constructed solution. This pattern can be minimized if agents participating in a partial solution resist changing their color. The team develops aspects as the next higher level in the cognitive hierarchy model. This level observes how long a node has remained the same color (its stability) and uses this information to create downward causation that modifies the probability of a node choosing to change its color. The downward causation dramatically improves the performance of the swarm.
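The base-level random-recoloring algorithm described above can be sketched as follows. This is a minimal sketch in Python: the graph representation, the palette, and the function names are illustrative assumptions, not the authors’ code.

```python
import random

PALETTE = ["red", "green", "blue", "black"]  # a 4-color palette

def in_conflict(node, colors, adjacency):
    """An agent's only sensor: does my node share its color with a neighbor?"""
    return any(colors[nbr] == colors[node] for nbr in adjacency[node])

def base_swarm_step(colors, adjacency):
    """One iteration: every stationary agent reacts independently,
    recoloring at random whenever it detects a conflict."""
    for node in adjacency:
        if in_conflict(node, colors, adjacency):
            colors[node] = random.choice(PALETTE)

def run_base_swarm(adjacency, max_iters=100_000):
    """Run the reactive swarm until the graph is properly colored."""
    colors = {node: random.choice(PALETTE) for node in adjacency}
    for it in range(max_iters):
        if not any(in_conflict(n, colors, adjacency) for n in adjacency):
            return colors, it  # conflict-free: a valid coloring
        base_swarm_step(colors, adjacency)
    return colors, max_iters
```

Randomness alone is enough to reach a valid coloring, but, as the “Static Swarm” curve of Figure 9 shows, the number of iterations grows quickly with graph size.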
(See Figure 9.) To understand how the improvement works, another cycle of EOP begins by creating a visualization of the stability of the nodes (see Figure 8b). A new graph is produced in which the nodes are given colors ranging from bright blue through dark blue and dark red to red. Bright blue indicates a very stable node, whereas red represents a very unstable node.
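The stability-feedback level can be sketched as a small change to the base rule. This is a minimal sketch assuming a simple 1/(1 + k·stability) decay for the recoloring probability; the decay form, the constant k, and all names are illustrative assumptions, not the authors’ implementation.

```python
import random

PALETTE = ["red", "green", "blue", "black"]

def feedback_swarm_step(colors, stability, adjacency, k=0.25):
    """One iteration of the stability-feedback swarm: the higher level's
    observation (how long a node has kept its color) feeds back down,
    lowering the probability that a long-stable node abandons its color."""
    for node in adjacency:
        conflicted = any(colors[nbr] == colors[node] for nbr in adjacency[node])
        if conflicted:
            # downward causation: recolor with probability 1 / (1 + k*stability)
            if random.random() < 1.0 / (1.0 + k * stability[node]):
                colors[node] = random.choice(PALETTE)
                stability[node] = 0  # changing color resets stability
        else:
            stability[node] += 1  # unchallenged nodes grow more stable

def run_feedback_swarm(adjacency, max_iters=100_000):
    """Run the feedback swarm until the graph is properly colored."""
    colors = {n: random.choice(PALETTE) for n in adjacency}
    stability = {n: 0 for n in adjacency}
    for it in range(max_iters):
        if all(colors[n] != colors[m] for n in adjacency for m in adjacency[n]):
            return colors, it
        feedback_swarm_step(colors, stability, adjacency)
    return colors, max_iters
```

With this rule, nodes embedded in a partial solution resist the ripple effect, which is the behavior that gives the feedback swarm its faster convergence in Figure 9.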

Figure 8a. Agent-level swarm graph coloring. 8b. Higher-level stability visualization.

Figure 9. “Static vs. Feedback Swarm”: comparison between the base and node-stability levels, plotting iterations to solution (0–100,000) against the number of nodes in the graph (0–300) for the static swarm and the feedback swarm.

We are currently investigating three possible next steps in the development of this algorithm. The first is to perform another standard iteration of EOP with upgraded visualization aspects to observe and understand the clustering of stable nodes. The second is to allow humans to interact with the executing system, selecting nodes and forcing behavior in order to observe the resulting system attributes. The third is to develop a virtual human swarm experiment and observe how interacting humans solve the problem. Another iteration of EOP will produce a third level in the cognitive model of Figure 2, yielding something like Figure 10. The interactive execution has not yet been implemented; it is currently under development. However, experiments with the virtual human swarm approach to solving the 4-color problem have begun (see Figure 11). The current layout of the interface is depicted in Figures 12a, b, and c. Humans select the background color of the screen (the color for the node that they control) by selecting one of the four colors found on the lower palette. The colors of all neighboring nodes are shown in the row of colors near the top of the screen. When a human changes his or her node’s color, that information is sent to all adjacent neighbors and those GUIs are updated. Figures 12a and 12b show the solution for two nodes in the graph. Note that in 12a, the node’s color is black, and none of its neighbors are black.

Likewise, in Figure 12b the node is red and not in conflict with any of its neighbors. However, note in Figure 11 that the leftmost node and its connected neighbor in the upper right are both green. The humans know this information from their GUIs, and can choose to take action or not.
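The neighbor-update protocol described above can be sketched as a small message-passing model: when a human picks a new background color, the change is pushed to all adjacent nodes, whose displays are updated. The class and method names are hypothetical, not the VHS software’s actual interface.

```python
class NodeGUI:
    """Hypothetical per-node GUI in the VHS 4-coloring experiment."""
    def __init__(self, node_id, graph):
        self.node_id = node_id
        self.graph = graph          # shared map: node_id -> NodeGUI
        self.color = None           # background color chosen by the human
        self.neighbor_colors = {}   # the row of neighbor colors near the top

    def select_color(self, color, adjacency):
        """Human clicks a palette entry: set own color, notify neighbors."""
        self.color = color
        for nbr in adjacency[self.node_id]:
            self.graph[nbr].neighbor_colors[self.node_id] = color

    def in_conflict(self):
        """The human can see a conflict when a neighbor shows the same color."""
        return self.color in self.neighbor_colors.values()
```

Note that the conflict is only displayed, never resolved automatically: as in the experiment, the humans decide whether to take action.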

Figure 10. Emergence Framework for Graph Coloring

Figure 11. Six humans using the VHS software solve the 4-coloring problem for the graph found in Figure 12c.

6 Conclusion

Emergence-Oriented Programming is an iterative approach to creating autonomous swarm algorithms by leveraging human decision-making and pattern-recognition abilities during the development process. While there is much debate in the swarm engineering community as to the overall feasibility of this approach, we have implemented and experimented with several major components of the system and demonstrated remarkable improvement over a static, reactive swarm algorithm on the graph coloring problem.

Figures 12a, b, c. Screen shots of the VHS 4-coloring graph problem. 12a corresponds to the lower left black node in Figure 12c, and 12b corresponds to the upper left red node.

References

[1] AspectJ. http://www.aspectj.org.

[2] N. Baas and C. Emmeche, “On Emergence and Explanation,” Intellectica, no.25, pp.67-83, 1997.

[3] E. Bonabeau, P. Funes & B. Orme, “Exploratory Design of Swarms,” Proceedings of the 2nd International Workshop on the Mathematics and Algorithms of Social Insects, C. Anderson & T. Balch, Eds. Georgia Institute of Technology, pp. 17–24.

[4] F. Comellas, J. Ozon, “An Ant Algorithm for the Graph Coloring Problem,” ANTS’98, First International Workshop on Ant Colony Optimization, Brussels, Belgium, October 15-16, 1998.

[5] C. Emmeche, S. Køppe and F. Stjernfelt, “Levels, Emergence, and Three Versions of Downward Causation,” in Downward Causation: Minds, Bodies and Matter, Aarhus University Press, 2000, pp. 13-34.

[6] E. Gamma, R. Helm, R. Johnson, J. Vlissides, Design Patterns: elements of reusable object-oriented software. Addison-Wesley Publishing Company, Inc. 1995.

[7] G. Kiczales, J. Lamping, A. Mendhekar, C. Maeda, C. Lopes, J.-M. Loingtier, J. Irwin, “Aspect-Oriented Programming,” Proceedings of the European Conference on Object-Oriented Programming, Springer-Verlag LNCS, 1997.

[8] M. Kovacina, Evolving Swarm Behaviors, Masters Thesis, Case Western Reserve University, 2005.

[9] Daniel W. Palmer, Marc Kirschenbaum, Linda M. Seiter, Jason Shifflet, Peter Kovacina, “Behavioral Feedback as a Catalyst for Emergence in Multi-Agent Systems,” Advanced Intelligent Mechatronics, 2005.

[10] D. Palmer et al., “Using a Collection of Humans as an Execution Testbed for Swarm Algorithms,” Proceedings of the IEEE Swarm Intelligence Symposium, Indianapolis, IN: IEEE, 2003, pp. 58-64.

[Figure 10 diagram labels: View Color, Graph Color; View Stability, Graph Stability, Int ThrashingAvoidance; View Clustering, Graph Clustering, Int GrowStability; Int ColorResolution]