Technical Report GIT-GVU-07-11

    Governing Lethal Behavior: Embedding Ethics in a Hybrid

    Deliberative/Reactive Robot Architecture*

Ronald C. Arkin
Mobile Robot Laboratory, College of Computing
Georgia Institute of Technology
[email protected]

[N.B.] "State a moral case to a ploughman and a professor. The former will decide it as well, and often better than the latter, because he has not been led astray by artificial rules."1
Thomas Jefferson, 1787

    Abstract

This article provides the basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system, so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement. It is based upon extensions to existing deliberative/reactive autonomous robotic architectures, and includes recommendations for (1) post facto suppression of unethical behavior, (2) behavioral design that incorporates ethical constraints from the outset, (3) the use of affective functions as an adaptive component in the event of unethical action, and (4) a mechanism in support of identifying and advising operators regarding the ultimate responsibility for the deployment of such a system.

1. Introduction

Since the Roman Empire, through the Inquisition and the Renaissance, until today [May et al. 05], humanity has long debated the morality of warfare. While it is universally acknowledged that peace is a condition preferable to warfare, that has not deterred the persistent conduct of lethal conflict over millennia. Referring to the improving technology of the day and its impact on the inevitability of warfare, [Clausewitz 1832] stated that "the tendency to destroy the adversary which lies at the bottom of the conception of War is in no way changed or modified through the progress of civilization." More recently, [Cook 04] observed: "The fact that constraints of just war are routinely overridden is no more a proof of their falsity and irrelevance than the existence of immoral behavior refutes standards of morality: we know the standard, and we also know human beings fall short of that standard with depressing regularity."

* This research is funded under Contract #W911NF-06-0252 from the U.S. Army Research Office.
1 ME 6:257, Paper 12:15 as reported in [Hauser 06, p. 61]


St. Augustine, writing roughly 1,600 years ago, is generally credited with laying the foundations of Christian Just War thought [Cook 04], and Christianity is said to have helped humanize war by refraining from unnecessary killing [Wells 96]. Augustine (as reported via Aquinas) noted that emotion can clearly cloud judgment in warfare:

    The passion for inflicting harm, the cruel thirst for vengeance, an unpacific and relentless spirit, the fever of revolt, the lust of power, and suchlike things, all these are rightly condemned in war [May et al. 05, p. 28].

    Fortunately, these potential failings of man need not be replicated in autonomous battlefield robots2.

From the 19th Century on, nations have struggled to create laws of war based on the principles of Just War Theory [Wells 96, Walzer 77]. These laws speak to both Jus in Bello, which applies limitations to the conduct of warfare, and Jus ad Bellum, which restricts the conditions required prior to entering into war; both form a major part of the logical underpinnings of the Just War tradition. The advent of autonomous robotics on the battlefield, as with any new technology, primarily raises questions of Jus in Bello, i.e., what constitutes the ethical use of these systems during conflict, given military necessity. Many questions in this context remain unanswered, and some even undebated. At least two central principles are asserted from the Just War tradition: the principle of discrimination of military objectives and combatants from non-combatants and the structures of civil society; and the principle of proportionality of means, where acts of war should not yield damage disproportionate to the ends that justify their use. Non-combatant harm is considered justifiable only when it is truly collateral, i.e., indirect and unintended, even if foreseen. Combatants retain certain rights as well; e.g., once they have surrendered and laid down their arms they assume the status of non-combatant and are no longer subject to attack. Jus in Bello also requires that agents be held responsible for their actions in war [Fieser and Dowden 07]. This includes the consequences of obeying orders when they are known to be immoral, as well as the status of ignorance in warfare. These aspects also need to be addressed in the application of lethality by autonomous systems, and, as we will see in Section 2, they are hotly debated by philosophers. The Laws of War (LOW), encoded in protocols such as the Geneva Conventions, and the Rules of Engagement (ROE) prescribe what is and what is not acceptable in the battlefield, in both a global (standing ROE) and local (supplemental ROE) context. The ROE are required to be fully compliant with the Laws of War. Defining these terms [DOD-02]:

Laws of War - That part of international law that regulates the conduct of armed hostilities.

    Rules of Engagement - Directives issued by competent military authority that delineate the circumstances and limitations under which United States Forces will initiate and/or continue combat engagement with other forces encountered.

2 That is not to say, however, that they couldn't be. Indeed the Navy (including myself) is already conducting research in Affect-Based Computing and Cognitive Models for Unmanned Vehicle Systems [OSD 06], although clearly not designed for the condemned intentions stated by Augustine.


As early as 990, the Angiers Synod issued formal prohibitions regarding combatants' seizure of hostages and property [Wells 96]. The Codified Laws of War have developed over centuries, with Figure 1 illustrating several significant landmarks along the way. Typical battlefield limitations, especially relevant with regard to the potential use of lethal autonomous systems, include [May et al. 05, Wikipedia 07a]:

Acceptance of surrender of combatants and the humane treatment of prisoners of war.

Use of proportionality of force in a conflict.

Protection of both combatants and non-combatants from unnecessary suffering.

Avoidance of unnecessary damage to property and people not involved in combat.

Prohibition on attacking people or vehicles bearing the Red Cross or Red Crescent emblems, or those carrying a white flag and acting in a neutral manner.

Avoidance of the use of torture on anyone for any reason.

Non-use of certain weapons, such as blinding lasers and small caliber high-velocity projectiles, in addition to weapons of mass destruction.

Prohibition of the mutilation of corpses.

[Walzer 77, p. 36] sums it up: "... war is still, somehow, a rule-governed activity, a world of permissions and prohibitions, a moral world, therefore, in the midst of hell." These laws of war continue to evolve over time as technology progresses, and any lethal autonomous system which attempts to adhere to them must similarly be able to adapt to new policies and regulations as they are formulated by international society.

Figure 1: Development of Codified Laws of War (After [Hartle 04])
[Timeline figure depicting: the 1864 Geneva Convention (Armed Forces); the 1899 and 1907 Hague Conventions and Regulations; the 1906 Geneva Convention (Armed Forces); the 1929 Geneva Conventions (Armed Forces, Prisoners); the 1949 Geneva Conventions (Armed Forces, Sea, Prisoners, Civilians); and the 1977 Protocols Added to the Geneva Convention.]


Of course there are serious questions and concerns regarding the just war tradition itself, often evoked by pacifists. [Yoder 84] questions the premises on which it is built, and in so doing also raises some issues that potentially affect autonomous systems. For example, he asks: Are soldiers, when assigned a mission, given sufficient information to determine whether this is an order they should obey? If a person under orders is convinced he or she must disobey, will the command structure, the society, and the church honor that dissent? Clearly, if we embed an ethical conscience into an autonomous system, it is only as good as the information upon which it functions. It is a working assumption, perhaps naïve, that the autonomous agent ultimately will be provided with an amount of battlefield information equal to or greater than a human soldier is capable of managing. With the advent of network-centric warfare and the emergence of the Global Information Grid (GIG), however, this seems a reasonable assumption. It is also assumed in this work that if an autonomous agent refuses to conduct an unethical action, it will be able to explain to some degree its underlying logic for such a refusal. If commanders are provided with the authority by some means to override the autonomous system's resistance to executing an order which it deems unethical, then in so doing they would assume responsibility for the consequences of such action. Section 5.2.4 discusses this in more detail. These issues are but the tip of the iceberg regarding the ethical quandaries surrounding the deployment of autonomous systems capable of lethality. It is my contention, nonetheless, that if (or when) these systems are deployed in the battlefield, it is the roboticist's duty to ensure they are as safe as possible to combatant and noncombatant alike, as is prescribed by our society's commitment to the International Conventions encoded in the Laws of War, and other similar doctrine, e.g., the Code of Conduct and Rules of Engagement. The research in this article operates upon these underlying assumptions.

1.1 Trends towards lethality in the battlefield

There is only modest evidence that the application of lethality by autonomous systems is currently considered differently than any other weaponry. This is typified by informal commentary in which some individuals state that a human will always be in the loop regarding the application of lethal force to an identified target. Often the use of lethality in this context is considered more from a safety perspective [DOD 07] than a moral one. But if a human being in the loop is the flashpoint of this debate, the real question is at what level the human is in the loop. Will it be confirmation prior to the deployment of lethal force for each and every target engagement? Will it be at a high-level mission specification, such as "Take that position using whatever force is necessary"? Several military robotic automation systems already operate at a level where the human is in charge of and responsible for the deployment of lethal force, but not in a directly supervisory manner. Examples include the Phalanx system for Aegis-class cruisers in the Navy, cruise missiles, and even anti-personnel mines (generally considered unethical due to their indiscriminate use of lethal force) or other more discriminating classes of mines (e.g., anti-tank). These devices can even be considered robotic by some definitions, as they are all capable of sensing their environment and actuating, in these cases through the application of lethal force.


It is anticipated that teams of autonomous systems and human soldiers will work together on the battlefield, as opposed to the common science fiction vision of armies of unmanned systems operating by themselves. Multiple unmanned robotic systems that employ lethal force are already being developed or are in use, such as the ARV (Armed Robotic Vehicle), a component of the Future Combat System (FCS); Predator UAVs (unmanned aerial vehicles) equipped with Hellfire missiles, which have already been used in combat but under direct human supervision; and an armed platform under development for use in the Korean Demilitarized Zone [Argy 07, SamsungTechwin 07], to name a few. Some particulars follow:

    The South Korean robot platform mentioned above is intended to be able to detect and identify targets in daylight within a 4km radius, or at night using infrared sensors within a range of 2km, providing for either an autonomous lethal or non-lethal response. Although a designer of the system states that the ultimate decision about shooting should be made by a human, not the robot, the system does have an automatic mode in which it is capable of making the decision on its own [Kumagai 07].

    iRobot, the maker of Roomba, is now providing versions of their Packbots capable of tasering enemy combatants [Jewell 07]. This non-lethal response, however, does require a human-in-the-loop, unlike the South Korean robot under development.

    The SWORDS platform developed by Foster-Miller is already at work in Iraq and Afghanistan and is capable of carrying lethal weaponry (M240 or M249 machine guns, or a Barrett .50 Caliber rifle). [Foster-Miller 07]

Israel is deploying stationary robotic gun-sensor platforms along its borders with Gaza in automated kill zones, equipped with fifty-caliber machine guns and armored folding shields. Although it is currently used only in a remote-controlled manner, an IDF division commander is quoted as saying, "At least in the initial phases of deployment, we're going to have to keep a man in the loop," implying the potential for more autonomous operations in the future. [Opall-Rome 07]

    Lockheed-Martin, as part of its role in the Future Combat Systems program is developing an Armed Robotic Vehicle-Assault (Light) MULE robot weighing in at 2.5 tons. It will be armed with a line-of-sight gun and an anti-tank capability, to provide immediate, heavy firepower to the dismounted soldier. [Lockheed-Martin 07]

    The U.S. Air Force has created their first hunter-killer UAV, named the MQ-9 Reaper. According to USAF General Moseley, the name Reaper is fitting as it captures the lethal nature of this new weapon system. It has a 64 foot wingspan and carries 15 times the ordnance of the Predator, flying nearly three times the Predators cruise speed. As of September 2006, 7 were already in inventory with more on the way. [AirForce 06]

    The U.S. Navy for the first time is requesting funding for acquisition in 2010 of armed Firescout UAVs, a vertical-takeoff and landing tactical UAV that will be equipped with kinetic weapons. The system has already been tested with 2.75 inch unguided rockets. The UAVs are intended to deal with threats such as small swarming boats. As of this time the commander will determine whether or not a target should be struck. [Erwin 07]

    An even stronger indicator regarding the future role of autonomy and lethality appears in a recent U.S. Army Solicitation for Proposals [US Army 07], which states:


Armed UMS [Unmanned Systems] are beginning to be fielded in the current battlespace, and will be extremely common in the Future Force Battlespace... This will lead directly to the need for the systems to be able to operate autonomously for extended periods, and also to be able to collaboratively engage hostile targets within specified rules of engagement with final decision on target engagement being left to the human operator. Fully autonomous engagement without human intervention should also be considered, under user-defined conditions, as should both lethal and non-lethal engagement and effects delivery means. [Boldface added for emphasis]

    There is some evidence of restraint, however, in the use of unmanned systems designed for lethal operations, particularly regarding their autonomous use. A joint government industry council has generated a set of safety precepts [JGI 07] that bear this hallmark:

    DSP-6: The UMS [UnManned System] shall be designed to prevent uncommanded fire and/or release of weapons or propagation and/or radiation of hazardous energy.

    DSP-13: The UMS shall be designed to identify to the authorized entity(s) the weapon being released or fired.

    DSP-15: The firing of weapon systems shall require a minimum of two independent and unique validated messages in the proper sequence from authorized entity(ies), each of which shall be generated as a consequence of separate authorized entity action. Both messages should not originate within the UMS launching platform.

Nonetheless, the trend is clear: warfare will continue, and autonomous robots will ultimately be deployed in its conduct. Given this, questions arise regarding how these systems can conform as well as, or better than, our soldiers do with respect to adherence to the existing Laws of War. This article focuses on this issue directly from a design perspective. This is no simple task, however. In the fog of war it is hard enough for a human to effectively discriminate whether or not a target is legitimate. Fortunately, it may be anticipated, despite the current state of the art, that in the future autonomous robots may be able to perform better than humans under these conditions, for the following reasons:

1. The ability to act conservatively: i.e., they do not need to protect themselves in cases of low certainty of target identification. UxVs do not need to have self-preservation as a foremost drive, if at all. They can be used in a self-sacrificing manner if needed and appropriate, without reservation, by a commanding officer.

    2. The eventual development and use of a broad range of robotic sensors better equipped for battlefield observations than humans currently possess.

3. They can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events. In addition, "Fear and hysteria are always latent in combat, often real, and they press us toward fearful measures and criminal behavior" [Walzer 77, p. 251]. Autonomous agents need not suffer similarly.

4. Avoidance of the human psychological problem of scenario fulfillment is possible, a factor believed to have partly contributed to the downing of an Iranian airliner by the USS Vincennes in 1988 [Sagan 91]. This phenomenon leads to distortion or neglect of contradictory information in stressful situations, where humans use new incoming information in ways that only fit their pre-existing belief patterns, a form of premature cognitive closure. Robots need not be vulnerable to such patterns of behavior.

5. They can integrate more information from more sources far faster than a human possibly could in real time before responding with lethal force. This information can arise from multiple remote sensors and intelligence (including human) sources, as part of the Army's network-centric warfare concept and the concurrent development of the Global Information Grid.

    6. When working in a team of combined human soldiers and autonomous systems, they have the potential capability of independently and objectively monitoring ethical behavior in the battlefield by all parties and reporting infractions that might be observed. This presence alone might possibly lead to a reduction in human ethical infractions.

It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of. Unfortunately, the trends in human behavior in the battlefield regarding adherence to legal and ethical requirements are questionable at best. A recent report from the Surgeon General's Office [Surgeon General 06] assessing the battlefield ethics of Soldiers and Marines deployed in Operation Iraqi Freedom is disconcerting. The following findings are taken directly from that report:

    1. Approximately 10% of Soldiers and Marines report mistreating noncombatants (damaged/destroyed Iraqi property when not necessary or hit/kicked a noncombatant when not necessary). Soldiers that have high levels of anger, experience high levels of combat or those who screened positive for a mental health problem were nearly twice as likely to mistreat non-combatants as those who had low levels of anger or combat or screened negative for a mental health problem.

    2. Only 47% of Soldiers and 38% of Marines agreed that noncombatants should be treated with dignity and respect.

    3. Well over a third of Soldiers and Marines reported torture should be allowed, whether to save the life of a fellow Soldier or Marine or to obtain important information about insurgents.

    4. 17% of Soldiers and Marines agreed or strongly agreed that all noncombatants should be treated as insurgents.

    5. Just under 10% of soldiers and marines reported that their unit modifies the ROE to accomplish the mission.

    6. 45% of Soldiers and 60% of Marines did not agree that they would report a fellow soldier/marine if he had injured or killed an innocent noncombatant.

    7. Only 43% of Soldiers and 30% of Marines agreed they would report a unit member for unnecessarily damaging or destroying private property.

    8. Less than half of Soldiers and Marines would report a team member for an unethical behavior.

    9. A third of Marines and over a quarter of Soldiers did not agree that their NCOs and Officers made it clear not to mistreat noncombatants.


    10. Although they reported receiving ethical training, 28% of Soldiers and 31% of Marines reported facing ethical situations in which they did not know how to respond.

11. Soldiers and Marines are more likely to report engaging in the mistreatment of Iraqi noncombatants when they are angry, and are twice as likely to engage in unethical behavior in the battlefield as when they have low levels of anger.

    12. Combat experience, particularly losing a team member, was related to an increase in ethical violations.

    Possible explanations for the persistence of war crimes by combat troops are discussed in [Bill 00]. These include:

High friendly losses leading to a tendency to seek revenge.

High turnover in the chain of command, leading to weakened leadership.

Dehumanization of the enemy through the use of derogatory names and epithets.

Poorly trained or inexperienced troops.

No clearly defined enemy.

Unclear orders where intent of the order may be interpreted incorrectly as unlawful.

    There is clear room for improvement, and autonomous systems may help.

    In Section 2 of this article, we first review relevant related work to set the stage for the necessity of an ethical implementation of lethality in autonomous systems assuming they are to be deployed. Section 3 presents the mathematical formalisms underlying such an implementation. In Section 4, recommendations regarding the internal design and content of representational structures needed for an automated ethical code are provided, followed in Section 5 by architectural considerations and recommendations for implementation. In Section 6, example scenarios are presented, followed by a summary, conclusions, and future work in Section 7.

2. Related Philosophical Thought

We now turn to several philosophers and practitioners who have specifically considered the military's potential use of lethal autonomous robotic agents. In a contrarian position regarding the use of battlefield robots, [Sparrow 06] argues that any use of fully autonomous robots is unethical due to the Jus in Bello requirement that someone must be responsible for a possible war crime. His position is based upon deontological and consequentialist arguments. He argues that while responsibility could ultimately vest in the commanding officer for the system's use, it would be unfair, and hence unjust, to both that individual and any resulting casualties in the event of a violation. Nonetheless, due to the increasing tempo of warfare, he shares my opinion that the eventual deployment of systems with ever increasing autonomy is inevitable. I agree that responsibility for the use of these systems must be made clear, but I do not agree that it is infeasible to do so. As mentioned earlier in Section 1, several existing weapons systems are already in use that deploy lethal force autonomously to some degree, and they (with the exception of anti-personnel mines, due to their lack of discrimination, not responsibility attribution) are not generally considered to be unethical.


Sparrow further draws parallels between robot warriors and child soldiers, both of which he claims cannot assume moral responsibility for their actions. He neglects, however, to consider the possibility of embedding prescriptive ethical codes within the robot itself, which can govern its actions in a manner consistent with the Laws of War and Rules of Engagement. This would seem to significantly weaken the claim he makes. Along other lines, [Sparrow 07] points out several clear challenges to the roboticist attempting to create a moral sense for a battlefield robot:

"Controversy about right and wrong is endemic to ethics."

Response: While that is true, we have reasonable guidance from the agreed upon and negotiated Laws of War, as well as the Rules of Engagement, as a means to constrain behavior when compared to ungoverned solutions for autonomous robots.

"I suspect that any decision structure that a robot is capable of instantiating is still likely to leave open the possibility that robots will act unethically."

Response: Agreed. It is the goal of this work to create systems that can perform more ethically than human soldiers do in the battlefield, albeit still imperfectly. This challenge seems achievable; reaching perfection in almost anything in the real world, including human behavior, seems beyond our grasp.

While he is quite happy to allow that robots will become capable of increasingly sophisticated behavior in the future, and perhaps even of distinguishing between war crimes and legitimate use of military force, the underlying question regarding responsibility, he contends, is not solvable (see above [Sparrow 06]).

Response: It is my belief that by making the assignment of responsibility transparent and explicit, through the use of a responsibility advisor at all steps in the deployment of these systems, this problem is indeed solvable.

[Asaro 06] similarly argues from a position of loss of attribution of responsibility, but does broach the subject of robots possessing moral intelligence. His definition of a moral agent seems applicable, where the agent adheres to a system of ethics, which they employ in choosing the actions that they either take or refrain from taking. He also considers legal responsibility, which he states will compel roboticists to build ethical systems in the future. He notes, similar to what is proposed here, that if an existing set of ethical policy (e.g., LOW and ROE) is replicated by the robot's behavior, it enforces a particular morality through the robot itself. It is in this sense that we strive to create such an ethical architectural component for unmanned systems, where that particular morality is derived from International Conventions. One of the earliest arguments based upon the difficulty of attributing responsibility and liability to autonomous agents in the battlefield was presaged by [Perri 01]. He assumes that at the very least the rules of engagement for the particular conflict have been programmed into the machines, and that only in certain types of emergencies are the machines expected to set aside these rules. I personally do not trust the view of the rules being set aside by the autonomous agent itself, as it begs the question of responsibility if it does so, but it may be possible for a human to assume responsibility for such deviation if it is ever deemed appropriate (and ethical) to do so.


Section 5.2.4 discusses specific issues regarding order refusal overrides by human commanders. While he rightly notes the inherent difficulty in attributing responsibility to the programmer, designer, soldier, commander, or politician for the potential of war crimes by these systems, it is believed that a deliberate assumption of responsibility by human agents for these systems can at least help focus such an assignment when required. An inherent part of the architecture for the project described in this article is a responsibility advisor, which will specifically address these issues, although it would be naïve to say it will solve all of them. Often, assigning and establishing responsibility for human war crimes, even through International Courts, is quite daunting.

    Some would argue that the robot itself can be responsible for its own actions. [Sullins 06], for example, is willing to attribute moral agency to robots far more easily than most, including myself, by asserting that simply if it is (1) in a position of responsibility relative to some other moral agent, (2) has a significant degree of autonomy, and (3) can exhibit some loose sort of intentional behavior (there is no requirement that the actions really are intentional in a philosophically rigorous way, nor that the actions are derived from a will that is free on all levels of abstraction), that it can then be considered to be a moral agent. Such an attribution unnecessarily complicates the issue of responsibility assignment for immoral actions, and a perspective that a robot is incapable of becoming a moral agent that is fully responsible for its own actions in any real sense, at least under present and near-term conditions, seems far more reasonable. [Dennett 96] states that higher-order intentionality is a precondition for moral responsibility (including the opportunity for duplicity for example), something well beyond the capability of the sorts of robots under development in this article. [Himma 07] requires that an artificial agent have both free will and deliberative capability before he is willing to attribute moral agency to it. Artificial (non-conscious) agents, in his view, have behavior that is either fully determined and explainable, or purely random in the sense of lacking causal antecedents. The bottom line for all of this line of reasoning, at least for our purposes, is (and seemingly needless to say): for the sorts of autonomous agent architectures described in this article, the robot is off the hook regarding responsibility. We will need to look toward humans for culpability for any ethical errors it makes in the lethal application of force.

    But responsibility is not the lone sore spot for the potential use of autonomous robots in the battlefield regarding Just War Theory. In a recent presentation [Asaro 07] noted that the use of autonomous robots in warfare is unethical due to their potential lowering of the threshold of entry to war, which is in contradiction of Jus ad Bellum. One can argue, however, that this is not a particular issue limited to autonomous robots, but is typical for the advent of any significant technological advance in weapons and tactics, and for that reason will not be considered here. Other counterarguments could involve the resulting human-robot battlefield asymmetry as having a deterrent effect regarding entry into conflict by the state not in possession of the technology, which then might be more likely to sue for a negotiated settlement instead of entering into war. In addition, the potential for live or recorded data and video from gruesome real-time front-line conflict, possibly being made available to the media to reach into the living rooms of our nations citizens, could lead to an even greater abhorrence of war by the general public rather than its acceptance3. Quite different imagery, one could imagine, as compared to the relatively antiseptic stand-off precision high altitude bombings often seen in U.S. media outlets. 3 This potential effect was pointed out by BBC reporter Dan Damon during an interview in July 2007.


The Navy is examining the legal ramifications of the deployment of autonomous lethal systems in the battlefield [Canning et al. 04], observing that a legal review is required of any new weapons system prior to its acquisition to ensure that it complies with the LOW and related treaties. To pass this review it must be shown that the system does not act indiscriminately nor cause superfluous injury; in other words, it must act with proportionality and discrimination, the hallmark criteria of Jus in Bello. The authors contend, and rightly so, that the problem of discrimination is the most difficult aspect of lethal unmanned systems, with only legitimate combatants and military objectives as just targets. They shift the paradigm for the robot to identify and target only weapons and weapon systems, not the individual(s) manning them, until that individual poses a potential threat. While they acknowledge several significant difficulties associated with this approach (e.g., spoofing and ruses to injure civilians), another question is whether simply destroying weapons, without clearly identifying those nearby as combatants and without recognition of neighboring civilian objects, is legal in itself (i.e., ensuring that proportionality is exercised against a military objective). Canning advocates the use of escalating force if a combatant is present, to encourage surrender over the use of lethality, a theme common to our approach as well. Canning's approach poses an interesting alternative, where the system directly targets either the bow or the arrow, but not the archer [Canning 06]. These concerns arise from current limits on the ability to discriminate combatants from noncombatants. Although we are nowhere near providing robust methods to accomplish this in the near term (except in certain limited circumstances with the use of friend-foe interrogation (FFI) technology), in my estimation considerable effort can and should be put into this research area by the DOD, and in many ways it already has been, e.g., by using gait recognition and other patterns of activity to identify suspicious persons. These very early steps, coupled with weapon recognition capabilities, could potentially provide even greater target discrimination than recognizing the weapons alone. Unique tactics (yet to be developed) by an unmanned system to actively ferret out the traits of a combatant, by using a direct approach by the robot or other risk-taking (exposure) methods, can further illuminate what does or does not constitute a legitimate target in the battlefield. This is an acceptable strategy by virtue of the robot's not needing to defend itself as a soldier would, perhaps by using self-sacrifice to reveal the presence of a combatant. There is no inherent need for the right of self-defense for an autonomous system. In any case, this is clearly not a short-term research agenda, and the material presented in this report constitutes very preliminary steps in that direction. Of related interest is the elimination of the need for an autonomous agent's claim of self-defense as an exculpation of responsibility through either justification or excuse, a common occurrence during the occasioning of civilian casualties by human soldiers [Woodruff 82]. Robotic systems need make no appeal to self-defense or self-preservation in this regard, and can and should thus value civilian lives above their own continued existence.
Of course there is no guarantee that a lethal autonomous system would be given that capability, but to be ethical, I would contend that it must. This is a condition that a human soldier likely could not easily, if ever, attain, and as such it would allow an ethical autonomous agent to potentially perform in a manner superior to that of a human in this regard. It should be noted that the system's use of lethal force does not preclude collateral damage to civilians and their property during the conduct of a military mission according to the Just War Principle of Double Effect4, only that no claim of self-defense could be used to justify any such incidental deaths. It also does not negate the possibility of the autonomous system acting to defend fellow human soldiers under attack in the battlefield. We will strive to hold ethical autonomous systems to an even higher standard, invoking the Principle of Double Intention. [Walzer 77, p. 155] argues that the Principle of Double Effect is not enough, i.e., that it is inadequate to tolerate noncombatant casualties as long as they are not intended (i.e., they are neither the ends nor the means to the ends). He argues for a stronger stance, the Principle of Double Intention, which has merit for our implementation. It has the necessity of a good being achieved (a military end), the same as for the Principle of Double Effect, but instead of simply tolerating collateral damage, it argues for the necessity of intentionally reducing noncombatant casualties as far as possible. Thus the acceptable (good) effect is aimed to be achieved narrowly, and the agent, aware of the associated evil effect (noncombatant casualties), aims intentionally to minimize it, accepting the costs associated with that aim. This seems an altogether acceptable approach for an autonomous robot to subscribe to as part of its moral basis. This principle is captured in the requirement that due care be taken. The challenge is to determine just what that means, but any care is better than none. In our case, this can be in regard to choice of weaponry (rifle versus grenade), targeting accuracy (standoff distances) in the presence of civilian populations, or other similar criteria. Walzer does provide some guidance:

Since judgments of due care involve calculations of relative value, urgency, and so on, it has to be said that utilitarian arguments and rights arguments (relative at least to indirect effects) are not wholly distinct. Nevertheless the calculations required by the proportionality principle and those required by due care are not the same. Even after the highest possible standards of care have been accepted, the probable civilian losses may still be disproportionate to the value of the target; then the attack must be called off. Or, more often, ... due care is an additional requirement [above the proportionality requirement]. [Walzer 77, p. 156]

    [AndersonK 07], in his blog, points out the fundamental difficulty of assessing proportionality by a robot as required for Jus in Bello, largely due to the apples and oranges sorts of calculations that may be needed. He notes that a practice, as opposed to a set of decision rules, will need to be developed, and although a daunting task, he sees it in principle as the same problem that humans have in making such a decision. Thus his argument is based on the degree of difficulty rather than any form of fundamental intransigence. Research in this area can provide the opportunity to make this form of reasoning regarding proportionality explicit. Indeed, different forms of reasoning beyond simple inference will be required, and case-based reasoning (CBR) is just one such candidate [Kolodner93] to be considered. We have already put CBR to work in intelligent robotic systems [Ram et al. 97, Likhachev et al. 02], where we reason from previous experience using analogy as appropriate. It may also be feasible to expand its use in the context of proportional use of force.
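As a rough indication of how case-based reasoning might be brought to bear on proportionality, consider the sketch below. The case structure, feature names, similarity measure, and the 0.5 retrieval cutoff are all hypothetical placeholders, intended only to make the idea concrete; they are not drawn from the CBR systems cited above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Case:
    features: dict     # e.g., target type, civilian proximity, available weapons
    force_used: str    # the response chosen in the stored episode
    outcome_ok: bool   # whether proportionality / due care was judged to be satisfied

def similarity(a: dict, b: dict) -> float:
    # Fraction of shared features with matching values (a deliberately crude measure).
    shared = set(a) & set(b)
    return sum(1.0 for k in shared if a[k] == b[k]) / max(len(shared), 1)

def recommend_force(situation: dict, library: List[Case]) -> str:
    """Retrieve the most similar prior case with an acceptable outcome and reuse its
    (proportional) choice of force; default to the most conservative option."""
    acceptable = [c for c in library if c.outcome_ok]
    if not acceptable:
        return "no_engagement"
    best = max(acceptable, key=lambda c: similarity(c.features, situation))
    return best.force_used if similarity(best.features, situation) > 0.5 else "no_engagement"
```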

    4 The Principle of Double Effect, derived from the Middle Ages, asserts that while the death or injury of innocents is always wrong, either may be excused if it was not the intended result of a given act of war [Wells 96, p.258]. As long as the collateral damage is an unintended effect (i.e., innocents are not deliberately targeted), it is excusable according to the LOW even if it is foreseen (and that proportionality is adhered to).


Walzer comments on the issue of risk-free war-making, an imaginable outcome of the introduction of lethal autonomous systems. He states there is no principle of Just War Theory that bars this kind of warfare [Walzer 04, p. 16]. Just war theorists have not discussed this issue to date, and he states it is time to do so. Despite Walzer's assertion, discussions of this sort could possibly lead to prohibitions or restrictions on the use of lethal autonomous systems in the battlefield, for this or any of the other reasons above. For example, [Bring 02] states for the more general case: "An increased use of standoff weapons is not to the advantage of civilians. The solution is not a prohibition of such weapons, but rather a reconsideration of the parameters for modern warfare as it affects civilians." Personally, I clearly support the start of such talks at any and all levels to clarify just what is and is not acceptable internationally in this regard. In my view the proposition will not be risk-free, as teams of robots (as organic assets) and soldiers will be working side-by-side in the battlefield, taking advantage of the principle of force multiplication, where a single warfighter can now project a presence equivalent to several soldiers' capabilities in the past. Substantial risk to the soldier's life will remain present, albeit significantly less so on the friendly side, in a clearly asymmetrical fashion. I suppose a discussion of the ethical behavior of robots would be incomplete without some reference to [Asimov 50]'s Three Laws of Robotics5 (there are actually four [Asimov 85]). Needless to say, I am not alone in my belief that, while they are elegant in their simplicity and have served a useful fictional purpose by bringing to light a whole range of issues surrounding robot ethics and rights, they are at best a strawman to bootstrap the ethical debate, and as such serve no useful practical purpose beyond their fictional roots. [AndersonS 07], from a philosophical perspective, similarly rejects them, arguing: "Asimov's Three Laws of Robotics are an unsatisfactory basis for Machine Ethics, regardless of the status of the machine." With all due respect, I must concur.

    5 See http://en.wikipedia.org/wiki/Three_Laws_of_Robotics for a summary discussion of all 4 laws.


3. Formalization for Ethical Control

In order to provide a basis for the development of autonomous systems architectures capable of supporting ethical behavior regarding the application of lethality in war, we now consider the use of formalization as a means to express first the underlying flow of control in the architecture itself, and then how an ethical component can effectively interact with that flow. This approach is derived from the formal methods used to describe behavior-based robotic control as discussed in [Arkin 98], which have been used to provide direct architectural implementations for a broad range of autonomous systems, including military applications (e.g., [MacKenzie et al. 97, Balch and Arkin 98, Arkin et al. 99, Collins et al. 00, Wagner and Arkin 04]).

    Mathematical methods can be used to describe the relationship between sensing and acting using a functional notation:

β(s) → r

where behavior β, when given stimulus s, yields response r. In a purely reactive system, time is not an argument of β, as the behavioral response is instantaneous and independent of the time history of the system. Immediately below we address the formalisms that are used to capture the relationships within the autonomous system architecture that supports ethical reasoning described in Section 5 of this article. The issues regarding specific representational choices for the ethical component are presented in Section 4.
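As a concrete (if deliberately simplified) illustration of this notation, a behavior β can be modeled as an ordinary function from stimulus to response. The sketch below is illustrative only; the names Stimulus, Response, and move_away are hypothetical and not part of the architecture described in this report.

```python
from typing import Callable, NamedTuple, Tuple

class Stimulus(NamedTuple):
    perceptual_class: str   # p: what kind of thing was perceived
    strength: float         # how strongly / certainly it was perceived

class Response(NamedTuple):
    values: Tuple[float, ...]   # one component per degree of freedom of the robot

# beta : S -> R.  A purely reactive behavior takes only the current stimulus;
# time and history are not arguments.
Behavior = Callable[[Stimulus], Response]

def move_away(s: Stimulus) -> Response:
    # Respond in proportion to the stimulus strength, along a fixed direction.
    return Response(values=(-s.strength, 0.0))

r = move_away(Stimulus(perceptual_class="obstacle", strength=0.8))
```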

3.1 Formal methods for describing behavior

We first review the use of formal methods for describing autonomous robotic performance. The material in this sub-section is taken largely verbatim from [Arkin 98] and adapted as required. A robotic behavior can be expressed as a triple (S, R, β), where S denotes the domain of all interpretable stimuli, R denotes the range of possible responses, and β denotes the mapping β: S → R.

3.1.1 Range of Responses: R

An understanding of the dimensionality of a robotic motor response is necessary in order to map the stimulus onto it. It will serve us well to factor the robot's actuator response into two orthogonal components: strength and orientation.

    Strength: denotes the magnitude of the response, which may or may not be related to the strength of a given stimulus. For example, it may manifest itself in terms of speed or force. Indeed the strength may be entirely independent of the strength of the stimulus yet modulated by exogenous factors such as intention (what the robot's internal goals are) and habituation or sensitization (how often the stimulus has been previously presented).

    Orientation: denotes the direction of action for the response (e.g., moving away from an aversive stimulus, moving towards an attractor, engaging a specific target). The realization of this directional component of the response requires knowledge of the robot's kinematics.


The instantaneous response r, where r ∈ R, can be expressed as an n-length vector representing the responses for each of the individual degrees of freedom (DOFs) for the robot. Weapons system targeting and firing are now to be considered within these DOFs, and considered to also have components of strength (regarding firing pattern) and orientation (target location).

3.1.2 The Stimulus Domain: S

S consists of the domain of all perceivable stimuli. Each individual stimulus or percept s (where s ∈ S) is represented as a binary tuple (p, λ) having both a particular type or perceptual class p and a property of strength, λ, which can be reflective of its uncertainty. The complete set of all p over the domain S defines all the perceptual entities distinguishable to a robot, i.e., those things which it was designed to perceive. This concept is loosely related to affordances [Gibson 79]. The stimulus strength λ can be defined in a variety of ways: discrete (e.g., binary: absent or present; categorical: absent, weak, medium, strong), or it can be real valued and continuous. λ, in the context of lethality, can refer to the degree of discrimination of a candidate combatant target; in our case it may be represented as a real value between -1 and 1, with -1 representing 100% certainty of a noncombatant, +1 representing 100% certainty of a combatant, and 0 representing unknown. Other representational choices may be developed in the future to enhance discriminatory reasoning, e.g., two separate independent values between [0,1], one each for combatant and noncombatant probability, which are maintained by independent ethical discrimination reasoners. We define τ as a threshold value for a given perceptual class p, above which a behavioral response is generated. Often the strength of the input stimulus (λ) will determine whether or not to respond and the associated magnitude of the response, although other factors can influence this (e.g., habituation, inhibition, ethical constraints, etc.), possibly by altering the value of τ. In any case, if λ is non-zero, this denotes that the stimulus specified by p is present to some degree, whether or not a response is taken. The primary p involved in this research in ethical autonomous systems is the discrimination of an enemy combatant as a well-defined perceptual class. The threshold τ in this case serves as a key factor for providing the necessary discrimination capabilities prior to the application of lethality in a battlefield autonomous system, and both the determination of λ for this particular p (enemy combatant) and the associated setting of τ provide some of the greatest challenges for the effective deployment of an ethical battlefield robot from a perceptual viewpoint. It is important to recognize that certain stimuli may be important to a behavior-based system in ways other than provoking a motor response. In particular they may have useful side effects upon the robot, such as inducing a change in a behavioral configuration even if they do not necessarily induce motion. Stimuli with this property will be referred to as perceptual triggers and are specified in the same manner as previously described (p, λ). Here, however, when p is sufficiently strong as evidenced by λ, the desired behavioral side effect, a state change, is produced rather than direct motor action. This may involve the invocation of specific tactical behaviors if λ is sufficiently low (uncertain), such as reconnaissance in force6, reconnaissance by fire7, changing formation, or other aggressive maneuvers such as purposely brandishing or targeting a weapon system (without fire), or putting the robot itself at risk in the presence of the enemy (perhaps by closing distance with the suspected enemy or exposing itself in the open, leading to increased vulnerability and potential engagement by the suspected enemy). This is all in an effort to increase or decrease the certainty of the potential target p, as opposed to directly engaging a candidate target with unacceptably low discrimination.
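A minimal sketch of this stimulus representation and threshold logic follows, assuming hypothetical names and an arbitrary illustrative value of τ: λ encodes combatant discrimination on [-1, 1], and a value below the threshold yields a perceptual-trigger state change (a certainty-raising tactic) rather than an engagement response.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    p: str       # perceptual class, e.g. "enemy_combatant"
    lam: float   # lambda in [-1, 1]: -1 certain noncombatant, 0 unknown, +1 certain combatant

TAU = 0.9        # illustrative discrimination threshold for the class "enemy_combatant"

def respond(stimulus: Percept):
    if stimulus.p != "enemy_combatant":
        return None                              # not the perceptual class considered here
    if stimulus.lam <= 0.0:
        return None                              # unknown or believed noncombatant: no engagement
    if stimulus.lam < TAU:
        # Perceptual trigger: below threshold the stimulus produces a state change
        # (e.g., reconnaissance, formation change, brandishing a weapon) intended to
        # raise or lower certainty, not a direct lethal motor response.
        return ("state_change", "raise_certainty")
    # Above threshold a (possibly lethal) behavioral response may be generated,
    # still subject to the ethical constraints of Section 3.2.
    return ("motor_response", "engage_candidate_target")

print(respond(Percept(p="enemy_combatant", lam=0.4)))   # -> ('state_change', 'raise_certainty')
```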

3.1.3 The Behavioral Mapping: β

Finally, for each individual active behavior we can formally establish the mapping between the stimulus domain and response range that defines a behavioral function β where:

β(s) → r

β can be defined arbitrarily, but it must be defined over all relevant p in S. In the case where a specific stimulus threshold, τ, must be exceeded before a response is produced for a specific s = (p, λ), we have:

β(p, λ) = { r = ∅ for all λ < τ   * no response *
            r = arbitrary-function otherwise   * response * }

where ∅ indicates that no response is required given the current stimulus s. Associated with a particular behavior, β, there may be a scalar gain value g (strength multiplier) further modifying the magnitude of the overall response r for a given s.

r' = g * r

These gain values are used to compose multiple behaviors by specifying their strengths relative one to another. In the extreme case, g can be used to turn off the response of a behavior by setting it to 0, thus reducing r' to 0. Shutting down lethality can be accomplished in this manner if needed. The behavioral mappings β of stimuli onto responses fall into three general categories:

Null - the stimulus produces no motor response.

Discrete - the stimulus produces a response from an enumerable set of prescribed choices where all possible responses consist of a predefined cardinal set of actions that the robot can enact. R consists of a bounded set of stereotypical responses that is enumerated for the stimulus domain S and is specified by β. It is anticipated that all behaviors that involve lethality will fall in this category.

    6 Used to probe an enemys strength and disposition, with the option of a full engagement or falling back. 7 A reconnaissance tactic where a unit may fire on likely enemy positions to provoke a reaction. The issue of potential collateral casualties must be taken into account before this action is undertaken. Effective reconnaissance of an urban area is often difficult to achieve, thus necessitating reconnaissance by fire [OPFOR 98]


Continuous - the stimulus domain produces a motor response that is continuous over R's range. (Specific stimuli s are mapped into an infinite set of response encodings by β.)
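The threshold and gain mechanics defined above can be sketched as follows. The code is a hedged illustration with hypothetical names, not an implementation from the architecture, and treats a response as a single (strength, orientation) pair for brevity.

```python
from typing import Optional, Tuple

Response = Tuple[float, float]   # (strength, orientation) for a single degree of freedom

def beta(p: str, lam: float, tau: float) -> Optional[Response]:
    """beta(p, lambda): the null response when lambda < tau, otherwise some arbitrary function."""
    if lam < tau:
        return None                    # the null response: no action required
    return (lam, 0.0)                  # e.g., strength proportional to lambda, fixed orientation

def apply_gain(r: Optional[Response], g: float) -> Optional[Response]:
    """r' = g * r.  Setting g = 0 turns the behavior off entirely, which is one
    way lethality can be shut down."""
    if r is None or g == 0.0:
        return None
    strength, orientation = r
    return (g * strength, orientation)

# A lethal behavior whose output is disabled by zeroing its gain:
assert apply_gain(beta("enemy_combatant", lam=0.95, tau=0.9), g=0.0) is None
```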

Obviously it is easy to handle the null case as discussed earlier: for all s, β: s → ∅. Although this is trivial, there are instances (perceptual triggers) where this response is wholly appropriate and useful, enabling us to define perceptual processes that are independent of direct motor action. For the continuous response space (which, as we will see below, is less relevant for the direct application of lethality in the approach outlined in this article, although this category may be involved in coordinating a range of other normally active behaviors not involved with the direct application of lethality by the autonomous system), we now consider the case where multiple behaviors may be concurrently active within a robotic system. Defining additional notation, let:

S denotes a vector of all stimuli si relevant for each behavior βi at a given time t.

B denotes a vector of all active behaviors βi at a given time t.

G denotes a vector encoding the relative strength or gain gi of each active behavior βi.

R denotes a vector of all responses ri generated by the set of active behaviors B.

S defines the perceptual situation the robot is in at any point in time, i.e., the set of all computed percepts and their associated strengths. Other factors can further define the overall situation, such as intention (plans) and internal motivations (endogenous factors such as fuel levels, affective state, etc.). A new behavioral coordination function, C, is now defined such that the overall robotic response ρ is determined by:

ρ = C(G * B(S))

or alternatively:

ρ = C(G * R)

where R = B(S), and where * denotes the special scaling operation for multiplication of each scalar component (gi) by the corresponding magnitude of the component vectors (ri), resulting in a column vector R' = (G * R) of the same dimension as R, composed of component vectors r'i. Restating: the coordination function C, operating over all active behaviors B, modulated by the relative strengths of each behavior specified by the gain vector G, for a given vector of detected stimuli S (the perceptual situation) at time t, produces the overall robotic response ρ.
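The coordination step can be sketched as below. For illustration only, C is taken here to be simple vector summation of the gain-scaled responses; the report leaves the choice of coordination function open, and the behavior and stimulus values shown are hypothetical.

```python
from typing import Callable, List, Sequence

Stimulus = dict            # s_i: the percept(s) relevant to behavior beta_i
Response = List[float]     # r_i: one component per degree of freedom

def coordinate(behaviors: Sequence[Callable[[Stimulus], Response]],
               gains: Sequence[float],
               stimuli: Sequence[Stimulus]) -> Response:
    """rho = C(G * B(S)), with C chosen here (illustratively) as vector summation."""
    # R = B(S): each active behavior computes its response to its own stimulus.
    responses = [beta(s) for beta, s in zip(behaviors, stimuli)]
    # G * R: scale each response vector by its gain g_i.
    scaled = [[g * component for component in r] for g, r in zip(gains, responses)]
    # C: combine the scaled responses into the single overt response rho.
    return [sum(components) for components in zip(*scaled)]

move_to_goal = lambda s: [1.0, 0.0]      # hypothetical behaviors over 2 DOFs
avoid_threat = lambda s: [-0.5, 0.5]
rho = coordinate([move_to_goal, avoid_threat], gains=[1.0, 2.0], stimuli=[{}, {}])
# rho == [0.0, 1.0]
```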


3.2 Ethical Behavior

In order to concretize the discussion of what is acceptable and unacceptable regarding the conduct of robots capable of lethality and consistent with the Laws of War, we describe the set of all possible behaviors capable of generating a discrete lethal response (rlethal) that an autonomous robot can undertake as the set Blethal, which consists of the set of all potentially lethal behaviors it is capable of executing {βlethal-1, βlethal-2, ..., βlethal-n} at time t. Summarizing the notation used below:

Regarding individual behaviors: βi denotes a particular behavioral sensorimotor mapping that for a given sj (stimulus) yields a particular response rij, where sj ∈ S (the stimulus domain) and rij ∈ R (the response range). rlethal-ij is an instance of a response that is intended to be lethal that a specific behavior βlethal-i is capable of generating for stimulus sj.

Regarding the set of behaviors that define the controller: Bi denotes a particular set of m active behaviors {β1, β2, ..., βm} currently defining the control space of the robot, that for a given perceptual situation Sj, defined as a vector of individual incoming stimuli (s1, s2, ..., sn), produces a specific overt behavioral response ρij, where ρij ∈ P (read as capital rho), and P denotes the set of all possible overt responses. ρlethal-ij is a specific overt response which contains a lethal component, produced by a particular controller Blethal-i for a given situation Sj.

Plethal is the set of all overt lethal responses ρlethal-ij. A subset Pl-ethical of Plethal can be considered the set of ethical lethal behaviors if, for all discernible S, any rlethal-ij produced by βlethal-i satisfies a given set of specific ethical constraints C, where C consists of a set of individual constraints ck that are derived from and span the LOW and ROE over the space of all possible discernible situations (S) potentially encountered by the autonomous agent. If the agent encounters any situation outside of those covered by C, it cannot be permitted to issue a lethal response, a form of Closed World Assumption preventing the usage of lethal force in situations which are not governed by (or are outside of) the ethical constraints. The set of ethical constraints C defines the space where lethality constitutes a valid and permissible response by the system. Thus, the application of lethality as a response must be constrained by the LOW and ROE before it can be used by the autonomous system.

    A particular ck can be considered either:

    1. a negative behavioral constraint (a prohibition) that prevents or blocks a behavior βlethal-i from generating rlethal-ij for a given perceptual situation Sj; or

    2. a positive behavioral constraint (an obligation) which requires a behavior βlethal-i to produce rlethal-ij in a given perceptual situational context Sj.

    Discussion of the specific representational choices for these constraints C is deferred until the next section.
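    One minimal way to carry this prohibition/obligation distinction into an implementation is sketched below; the Constraint structure, its fields, and the toy predicates are assumptions made for illustration, not the representation chosen in Section 4.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Constraint:
    """An individual constraint c_k derived from the LOW/ROE (illustrative only).
    kind is 'prohibition' (blocks a lethal response in matching situations)
    or 'obligation' (requires a lethal response in matching situations)."""
    kind: str                            # 'prohibition' | 'obligation'
    applies_to: Callable[[dict], bool]   # predicate over the perceptual situation Sj
    description: str = ""

# Toy examples (situation fields invented):
c1 = Constraint("prohibition",
                lambda s: s.get("target_class") == "civilian_object",
                "Civilian objects are protected from intentional attack.")
c2 = Constraint("obligation",
                lambda s: s.get("declared_hostile", False) and s.get("roe_engage", False),
                "ROE-obligated engagement of a declared hostile target.")
C = [c1, c2]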


    Figure 2: Behavioral Action Space (Pl-ethical ⊆ Plethal)

    Figure 3: Unethical and Permissible Actions regarding the Intentional use of Lethality (Compare to Figure 2)

    Now consider Figure 2, where P denotes the set of all possible overt responses ρij (situated actions) generated by the set of all active behaviors B for all discernible situational contexts S; Plethal is a subset of P which includes all actions involving lethality, and Pl-ethical is the subset of Plethal representing all ethical lethal actions that the autonomous robot can undertake in all given situations S. Pl-ethical is determined by C being applied to Plethal. For simplicity in notation, the l-ethical and l-unethical subscripts in this context refer only to ethical lethal actions, and not to a more general sense of ethics.

    Plethal − Pl-ethical is denoted as Pl-unethical, where Pl-unethical is the set of all individual unethical lethal responses ρl-unethical-ij for a given Blethal-i in a given situation Sj. These unethical responses must be avoided in the architectural design through the application of C onto Plethal. P − Pl-unethical forms the set of all permissible overt responses Ppermissible, which may be lethal or not. Figure 3 illustrates these relationships.
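    As a toy worked example of these set relationships (Python sets standing in for the response sets; the labels are invented):

# Toy worked example of the set relationships using invented symbolic labels.
P           = {"hold_fire", "warning_shot", "engage_target_A", "engage_target_B"}
P_lethal    = {"engage_target_A", "engage_target_B"}
P_l_ethical = {"engage_target_A"}                 # lethal responses satisfying C

P_l_unethical = P_lethal - P_l_ethical            # unethical lethal responses
P_permissible = P - P_l_unethical                 # lethal or not, but never unethical

print(P_l_unethical)    # {'engage_target_B'}
print(P_permissible)    # {'hold_fire', 'warning_shot', 'engage_target_A'}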



    The goal of the robotic controller design is to fulfill the following conditions:

    A) Ethical Situation Requirement: Ensure that only situations Sj that are governed (spanned) by C can result in ρlethal-ij (a lethal action for that situation). Lethality cannot result in any other situations.

    B) Ethical Response Requirement (with respect to lethality): Ensure that only permissible actions ρij ∈ Ppermissible result in the intended response in a given situation Sj (i.e., actions that either do not involve lethality or are ethical lethal actions that are constrained by C).

    C) Unethical Response Prohibition: Ensure that any response ρl-unethical-ij ∈ Pl-unethical is either:

       1) mapped onto the null action (i.e., it is inhibited from occurring if generated by the original controller);

       2) transformed into an ethically acceptable action by overwriting the generating unethical response ρl-unethical-ij, perhaps by a stereotypical non-lethal action or maneuver, or by simply eliminating the lethal component associated with it; or

       3) precluded from ever being generated by the controller in the first place, by suitable design through the direct incorporation of C into the design of B.

    D) Obligated Lethality Requirement: In order for a lethal response ρlethal-ij to result, there must exist at least one constraint ck derived from the ROE that obligates the use of lethality in situation Sj.

    E) Jus in Bello Compliance: In addition, the constraints C must be designed to result in adherence to the requirements of proportionality (incorporating the Principle of Double Intention) and combatant/noncombatant discrimination of Jus in Bello.
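    Requirements A-D can be read, under the earlier constraint sketch, as a single gating check over a candidate overt response. The following is an illustrative reading only; condition E (proportionality and discrimination) is deliberately left out of this toy check, and the Constraint objects are those of the sketch given earlier in this section.

def permit_response(is_lethal, situation, constraints):
    """Illustrative gate over a candidate overt response (not the Section 5 design)."""
    if not is_lethal:
        return True                                  # B: non-lethal responses pass
    active = [c for c in constraints if c.applies_to(situation)]
    if not active:
        return False                                 # A: situation not governed by C
    if any(c.kind == "prohibition" for c in active):
        return False                                 # C: a matching prohibition blocks it
    return any(c.kind == "obligation" for c in active)   # D: an obligation must exist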

    We will see that these conditions result in several alternative architectural choices for the implementation of an ethical lethal autonomous system:

    1. Ethical Governor: suppresses, restricts, or transforms any lethal behavior ρlethal-ij (ethical or unethical) produced by the existing architecture so that it falls within Ppermissible after it is initially generated by the architecture (post facto). This means that if ρl-unethical-ij is the result, the governor must either nullify the original lethal intent or modify it so that it fits within the ethical constraints determined by C, i.e., it is transformed to ρpermissible-ij. (Section 5.2.1)

    2. Ethical Behavioral Control: constrains all active behaviors (β1, β2, ..., βm) in B to yield R with each vector component ri in the Ppermissible set as determined by C, i.e., only ethical lethal behavior is produced in the first place by each individual active behavior involving lethality. (Section 5.2.2)

    3. Ethical Adaptor: if a resulting executed lethal behavior is determined post facto to have been unethical, i.e., ρij ∈ Pl-unethical, adapts the system by some means to prevent or reduce the likelihood of such a recurrence, and propagates that adaptation across all similar autonomous systems (group learning), e.g., via an after-action reflective review or an artificial affective function (e.g., guilt, remorse, grief) as described in Section 5.2.3.
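    Schematically, and only as a sketch of where these three components could sit within a single control cycle (this is not the architecture specified in Section 5; all classes and fields below are placeholders):

class EthicalGovernor:
    """Post facto filter: nullify or transform impermissible overt responses."""
    def filter(self, rho, situation, constraints):
        return rho if rho.get("permissible", False) else {"action": "null"}

class EthicalAdaptor:
    """After-action adaptation: tighten constraints if an executed action is
    later judged unethical (placeholder logic only)."""
    def after_action(self, executed, situation, constraints):
        return constraints

def control_cycle(situation, behaviors, constraints, governor, adaptor):
    # Ethical Behavioral Control: behaviors propose only permissible responses.
    proposals = [b(situation, constraints) for b in behaviors]
    rho = proposals[0] if proposals else {"action": "null"}          # trivial coordination
    rho = governor.filter(rho, situation, constraints)               # Ethical Governor
    constraints = adaptor.after_action(rho, situation, constraints)  # Ethical Adaptor
    return rho, constraints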


    These architectural design opportunities lie within both the reactive (ethical behavioral control approach) and deliberative (ethical governor approach) components of an autonomous system architecture. If the system verges beyond appropriate behavior, after-action review and reflective analysis can be useful during both training and in-the-field operations, resulting in more restrictive alterations in the constraint set, perceptual thresholds, or tactics for use in future encounters. An ethical adaptor driven by affective state, also acting to restrict the lethality of the system, can fit within an existing affective component of the hybrid architecture such as AuRA [Arkin and Balch 97], similar to the one currently being developed in our laboratory referred to as TAME (for Traits, Attitudes, Moods, and Emotions) [Moshkina and Arkin 03, Moshkina and Arkin 05]. These three ethical architectural components are not mutually exclusive, and indeed can serve complementary roles.

    In addition, a crucial design criterion and associated design component, a Responsibility Advisor (Section 5.2.4), should make clear and explicit, as best as possible, just where responsibility vests should (1) an unethical action within the space Pl-unethical be undertaken by the autonomous robot as a result of an operator/commander override, or (2) the robot perform an unintended unethical act due to some representational deficiency in the constraint set C or in its application, either by the operator or within the architecture itself. To do so requires not only suitable training of operators and officers and appropriate architectural design, but also an on-line system that generates awareness among soldiers and commanders alike of the consequences of deploying a lethal autonomous system. It must be capable, to some degree, of providing suitable explanations for its actions regarding lethality (including refusals to act). Section 5 puts forward architectural specifications for handling all of the design alternatives above.

    One issue not yet considered is that it is possible, although not certain, that certain sequences of actions, when composed together, may yield unethical behavior even when none of the individual actions by itself is unethical. Although the ethical adaptor can address these issues to some extent, it is still preferable to ensure that unethical behavior does not occur in the first place. Representational formalisms exist to accommodate this situation (e.g., finite state automata [Arkin 98]), but they will not be considered within this article and are left for future work.

    4. Representational Considerations

    Based on the requirements of the formalisms derived in the previous section, we now need to determine how to ensure that only ethical lethal behavior is produced by a system capable of life or death decisions. This requires us to consider what constitutes the constraint set C as previously mentioned, in terms of both what it represents and how to represent it in a manner that ensures unethical lethal behavior is not produced. The primary question is how to operationalize the information regarding the application of lethality that is available in the LOW and ROE, which prescribe what is permissible, and then to determine how to implement it within a hybrid deliberative/reactive robotic architecture. Reiterating from the last section: the set of ethical constraints C defines the space where a lethal action constitutes a valid permissible or obligated response. The application of lethal force as a response must be constrained by the LOW and ROE before it can be employed by the autonomous system.


    We are specifically dealing here with bounded morality [Allen et al. 06], a system that can adhere to its moral standards within the situations that it has been designed for, in this case specific battlefield missions. It is thus equally important to be able to represent these situations correctly to ensure that the system will indeed provide the appropriate response when encountered. This is further complicated by the variety of sensor and information feeds that are available to a particular robotic implementation. Thus it is imperative that the robot be able to assess the situation correctly in order to respond ethically. A lethal response for an incorrectly identified situation is unacceptable. Clearly this is a non-trivial task. For the majority of this article, however, we will assume that effective situational assessment methods exist, and then given a particular battlefield situation, we examine how an appropriate response can be generated.

    This requires determining at least two things: specifically what content we need to represent to ensure the ethical application of lethality (Section 4.1) and then how to represent it (Section 4.2). Section 5 addresses the issues regarding how to put this ethical knowledge to work from an architectural perspective once it has been embedded in the system. Clearly the representational choices that are made will significantly affect the overall architectural design.

    4.1 Specific issues for lethality: What to represent

    The application of lethality by a robot is, in one sense, no different from the generation of any other robotic response to a given situation. In our view, however, we choose to designate actions with the potential for lethality as a class of specially privileged responses governed by a set of external factors, in this case the Laws of War and other related ethical doctrine such as the Rules of Engagement. Issues surround the underpinning ethical structure, i.e., whether a utilitarian approach is applied, which can afford a specific calculus for the determination of action (e.g., [Brandt 82, Cloos 05]), or a deontological basis that invokes a rights- or duty-based approach (e.g., [Powers 05]). This will affect the selection of the representations to be chosen. Several options are described below in support of the decision regarding the representations to be employed in the architecture outlined in Section 5.

    While robotic responses in general can be encoded using either discrete or continuous approaches, as mentioned in Section 3, behaviors charged with the application of weapons will be treated as producing a binary discrete response (r), i.e., the weapon system is either fired with intent or not. There may be variability in a range of targeting parameters, some of which involve direct lethal intent and others that do not, such as weapon firing for warning purposes (a shot across the bow), probing by fire (testing whether a target is armed), reconnaissance by fire (searching for responsive combatant targets using weaponry), wounding with non-lethal intent, or deliberate lethal intent. There may also be variations in the patterns of firing, both spatially and temporally (e.g., single shot, multiple bursts with a pattern, suppressing fire, etc.), but each of these will be considered a separate discrete behavioral response rij, all of which nonetheless have the potential effect of resulting in lethality, even if unintended. The application of non-lethal weaponry (e.g., tasers, sting-nets, foaming agents) can also be considered as discrete responses which, although technically designated as non-lethal, can likewise lead to unintentional lethality.
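    For concreteness, such discrete weapon-related responses could be enumerated along the following lines; the labels are illustrative only and are not drawn from any fielded system or from the architecture of Section 5.

from enum import Enum, auto

class WeaponResponse(Enum):
    """Illustrative discrete weapon-related responses rij; each is treated as a
    distinct binary (fire / do-not-fire) behavioral response, and every one of
    them can potentially result in lethality, even when that is not the intent."""
    WARNING_SHOT      = auto()   # e.g., a shot across the bow
    PROBE_BY_FIRE     = auto()   # testing whether a target is armed
    RECON_BY_FIRE     = auto()   # searching for responsive combatant targets
    WOUND_NONLETHAL   = auto()   # wounding without lethal intent
    DELIBERATE_LETHAL = auto()   # deliberate lethal intent
    NONLETHAL_WEAPON  = auto()   # taser, sting-net, foaming agent, etc.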


    4.1.1 Laws of War

    But specifically what are we trying to represent within the architecture? Some examples can be drawn from the United States Army Field Manual FM 27-10, The Law of Land Warfare [US Army 56], which states that the law of land warfare:

       is inspired by the desire to diminish the evils of war by:
       a) protecting both combatants and noncombatants from unnecessary suffering;
       b) safeguarding certain fundamental human rights of persons who fall into the hands of the enemy, particularly prisoners of war, the wounded and sick, and civilians; and
       c) facilitating the restoration of peace.

    Although these are lofty words, they provide little guidance regarding specific constraints. Other literature can help us in that regard. [Walzer 77, pp. 41-42] recognizes two general classes of prohibitions that govern the central principle that soldiers have an equal right to kill: war is distinguishable from murder and massacre only when restrictions are established on the reach of the battle. The resulting restrictions constitute the set of constraints C we desire to represent. The underlying principles that guide modern military conflict are [Bill 00]:

    1. Military Necessity: One may target those things which are not prohibited by the LOW and whose targeting will produce a military advantage. Military Objective: persons, places, or objects that make an effective contribution to military action.

    2. Humanity or Unnecessary Suffering: One must minimize unnecessary suffering (incidental injury to people and collateral damage to property).

    3. Proportionality: The US Army prescribes the test of proportionality in a clearly utilitarian perspective as: "The loss of life and damage to property incidental to attacks must not be excessive in relation to the concrete and direct military advantage expected to be gained." [US Army 56, para. 41, change 1]

    4. Discrimination or Distinction: One must discriminate or distinguish between combatants and non-combatants, and between military objectives and protected people/protected places.
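    The proportionality test as stated presupposes that incidental harm and military advantage can somehow be compared. Purely as a toy reading under that strong and unresolved assumption (the scales and threshold below are invented), one might write:

def proportionality_ok(expected_collateral_harm, expected_military_advantage,
                       excess_factor=1.0):
    """Toy reading of the proportionality test: incidental loss must not be
    'excessive in relation to the concrete and direct military advantage'.
    Both arguments are assumed to lie on comparable (hypothetical) scales;
    excess_factor encodes how 'excessive' is interpreted."""
    return expected_collateral_harm <= excess_factor * expected_military_advantage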

    These restrictions determine when and how soldiers can kill, and whom they can kill. Specific U.S. Army policy assertions from Army Headquarters Field Manual FM 3-24 validate the concepts of lawful warfighting [US Army 06]:

    Combat, including COIN [Counterinsurgency] and other irregular warfare, often obligates Soldiers and Marines to choose the riskier course of action to minimize harm to noncombatants.

    Even in conventional operations, Soldiers and Marines are not permitted to use force disproportionately or indiscriminately.

    As long as their use of force is proportional to the gain to be achieved and discriminate in distinguishing between combatants and noncombatants, Soldiers and Marines may take actions where they knowingly risk, but do not intend, harm to noncombatants. [Principle of Double Effect]


    Combatants must take all feasible precautions in the choice of means and methods of attack to avoid and minimize loss of civilian life, injury to civilians, and damage to civilian objects.

    Drawing directly from the Laws of War, we now aggregate specific prohibitions, permissions, and obligations that the warfighter (and an ethical autonomous system) must abide by. It must be ensured that these constraints are effectively embedded within a robot potentially capable of lethal action for the specific battlefield situations it will encounter.

    Specific examples of prohibited acts include [US Army 56]:

    1. It is especially forbidden:
       a. To declare that no quarter will be given the enemy.
       b. To kill or wound an enemy who, having laid down his arms, or having no longer means of defense, has surrendered at discretion.
       c. To employ arms, projectiles, or material calculated to cause unnecessary suffering.
    2. The pillage of a town or place, even when taken by assault, is prohibited.
    3. The taking of hostages is prohibited (including civilians).
    4. Devastation as an end in itself or as a separate measure of war is not sanctioned by the law of war. There must be some reasonably close connection between the destruction of property and the overcoming of the enemy's army.
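    Two of these prohibitions could be written in the constraint form of Section 3.2 roughly as follows, reusing the illustrative Constraint sketch from that section; the situation fields are invented.

no_quarter = Constraint(
    "prohibition",
    lambda s: s.get("order") == "no_quarter",
    "It is forbidden to declare that no quarter will be given.")

hors_de_combat = Constraint(
    "prohibition",
    lambda s: s.get("target_status") in {"surrendered", "laid_down_arms", "defenseless"},
    "Do not kill or wound an enemy who has surrendered or no longer has means of defense.")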

    Regarding lawful targeting (who can and cannot be killed, and what can be targeted in warfare):

    1. Regarding combatants and military objectives:
       a. Once war has begun, soldiers (combatants) are subject to attack at any time, unless they are wounded or captured. [Walzer 77, p. 138]
       b. Targeting of enemy personnel and property is permitted unless otherwise prohibited by international law. [Bill 00, p. 152]
       c. Attacks on military objectives which may cause collateral damage to civilian objects or collateral injury to civilians not taking a direct part in the hostilities are not prohibited (Principle of Double Effect). [Rawcliffe and Smith 06, p. 21]
       d. Collateral/incidental damage is not a violation of international law in itself (subject to the law of proportionality). [Bill 00, p. 154]
       e. All reasonable precautions must be taken to ensure only military objectives are targeted, so that damage to civilian objects (collateral damage) or death and injury to civilians (incidental injury) is avoided as much as possible. [Klein 03]
       f. The presence of civilians in a military objective does not alter its status as a military objective. [Rawcliffe and Smith 06, p. 23]
       g. In general, any place the enemy chooses to defend makes it subject to attack. This includes forts or fortifications, places occupied by a combatant force or through which they are passing, and a city or town with indivisible defensive positions. [Bill 00, p. 160]
       h. A belligerent attains combatant status by merely carrying his arms openly during each military engagement and being visible to an adversary while deploying for an attack. (The United States believes this is not an adequate test, as it diminishes the distinction between combatants and civilians, thus undercutting the effectiveness of humanitarian law.) [Bill 00, p. 157]
       i. Retreating troops, even in disarray, are legitimate targets. They could only be immunized from further attack by surrender, not retreat. [Dinstein 02]
       j. Destroy, take, or damage property based only upon military necessity. [Bill 00, p. 140]
       k. A fighter must wear a fixed distinctive sign visible at a distance and carry arms openly to be eligible for the war rights of soldiers. Civilian clothes should not be used as a ruse or disguise. [Walzer 77, p. 182]
       l. [Dinstein 02] enumerates what he views as legitimate military objectives under the current Jus in Bello:
          1) Fixed military fortifications, bases, barracks and installations, including training and war-gaming facilities
          2) Temporary military camps, entrenchments, staging areas, deployment positions, and embarkation points
          3) Military units and individual members of the armed forces, whether stationed or mobile
          4) Weapon systems, military equipment and ordnance, armor and artillery, and military vehicles of all types
          5) Military aircraft and missiles of all types
          6) Military airfields and missile launching sites
          7) Warships (whether surface vessels or submarines) of all types
          8) Military ports and docks
          9) Military depots, munitions dumps, warehouses or stockrooms for the storage of weapons, ordnance, military equipment and supplies (including raw materials for military use, such as petroleum)
          10) Factories (even when privately owned) engaged in the manufacture of arms, munitions and military supplies
          11) Laboratories or other facilities for the research and development of new weapons and military devices
          12) Military repair facilities
          13) Power plants (electric, hydroelectric, etc.) serving the military
          14) Arteries of transportation of strategic importance, principally mainline railroads and rail marshaling yards, major motorways, navigable rivers and canals (including the tunnels and bridges of railways and trunk roads)
          15) Ministries of Defense and any national, regional or local operational or coordination center of command, control and communication relating to running the war (including computer centers, as well as telephone and telegraph exchanges, for military use)
          16) Intelligence-gathering centers (even when not run by the military establishment)
          17) All enemy warships
          18) An enemy merchant vessel engaged directly in belligerent acts (e.g., laying mines or minesweeping)
          19) An enemy merchant vessel acting as an auxiliary to the enemy armed forces (e.g., carrying troops or replenishing warships)
          20) An enemy merchant vessel engaging in reconnaissance or otherwise assisting in intelligence gathering for the enemy armed forces
          21) An enemy merchant vessel refusing an order to stop or actively resisting capture
          22) An enemy merchant vessel armed to an extent that it can inflict damage on a warship (especially a submarine)
          23) An enemy merchant vessel traveling under a convoy escorted by warships, thereby benefiting from the (more powerful) armament of the latter
          24) An enemy merchant vessel making an effective contribution to military action (e.g., by carrying military materials)
          25) All enemy military aircraft
          26) Enemy civilian aircraft when flying within the jurisdiction of their own State, should enemy military aircraft approach and they do not make the nearest available landing
          27) Enemy civilian aircraft when flying (i) within the jurisdiction of the enemy; or (ii) in the immediate vicinity thereof and outside the jurisdiction of their own State; or (iii) in the immediate vicinity of the military operations of the enemy by land or sea (the exceptional right of prompt landing is inapplicable)

    2. Regarding noncombatant immunity:
       a. Civilians:
          1) Individual civilians, the civilian population as such, and civilian objects are protected from intentional attack. [Rawcliffe and Smith 06, p. 23]
          2) Civilians are protected from being the sole or intentional objects of a military attack, from an indiscriminate attack, or from attack without warning prior to a bombardment [Bill 00, p. 157], "unless and for such time as he or she takes a direct part in hostilities." [Rawcliffe and Smith 06, p. 29]
          3) Launching attacks against civilian populations is prohibited [Klein 03]. Noncombatants cannot be attacked at any time or be the targets of military activity (noncombatant immunity). [Walzer 77, p. 153]
          4) There exists an obligation to take feasible measures to remove civilians from areas containing military objectives. [Bill 00, p. 136]
          5) It is forbidden to force civilians to give information about the enemy. [Brandt 72]
          6) It is forbidden to conduct reprisals against the civilian population on account of the acts of individuals for which they cannot be regarded as jointly and severally responsible. [Brandt 72]
          7) Treatment of civilians [Bill 00, pp. 129-130, 139-141] (including those in the conflict area):
             a) No adverse distinction based upon race, religion, sex, etc.
             b) No violence to life or person
             c) No degrading treatment
             d) No civilian may be the object of a reprisal
             e) No measures of brutality
             f) No coercion (physical or moral) to obtain information
             g) No insults and exposure to public curiosity
             h) No general punishment for the acts of an individual, subgroup, or group
             i) Civilians may not be used as human shields in an attempt to immunize an otherwise lawful military objective. However, violations of this rule by a party to the conflict do not relieve the opponent of the obligation to do everything feasible to implement the concept of distinction (discrimination)
             j) Civilian wounded and sick must be cared for
             k) Special need civilians are defined as: mothers of children under seven; the wounded, sick and infirm; the aged; children under the age of 15; and expectant mothers; which results from the presumption that they can play no role in support of the war effort. Special need civilians are to be respected and protected by all parties to the conflict at all times. This immunity is further extended to Ministers, medical personnel and transport, and civilian hospitals.
          8) In order to ensure respect and protection of the civilian population and civilian objects, the Parties to the conflict shall at all times distinguish between the civilian population and combatants, and between civilian objects and military objectives, and accordingly direct their operations only against military objectives [UN 48]. This includes the following specific prohibitions:
             a) Civilians may never be the object of attack.
             b) Attacks intended to terrorize the civilian population are prohibited.
             c) Indiscriminate attacks are prohibited. Indiscriminate is defined as:
                (1) Attacks not directed at a specific military objective, or employing a method or means of combat that cannot be so directed
                (2) Attacks which employ a method or means of combat the effects of which cannot be controlled
                (3) Attacks treating dispersed military objectives, located in a concentration of civilians, as one objective
                (4) Attacks which may be expected to cause collateral damage excessive in relation to the concrete and direct military advantage to be gained (proportionality)

       b. Prisoners of War (POWs) [Bill 00, p. 158]:
          1) Surrender may be made by any means that communicates the intent to give up (no clear rule)
          2) The onus is on the person or force surrendering to communicate the intent to surrender
          3) The captor must not attack, and must protect, those who surrender (no reprisals)
          4) A commander may not put his prisoners to death because their presence retards his movements or diminishes his power of resistance by necessitating a large guard ... or it appears that they will regain their liberty through the impending success of their forces. It is likewise unlawful for a commander to kill his prisoners on the grounds of self-preservation. [US Army 56]
       c. Medical personnel, relief societies, religious personnel, journalists, and people engaged in the protection of cultural property shall not be attacked. [Bill 00, p. 159]
       d. Passing sentences and carrying out [summary] executions without previous judgment of a regularly constituted court is prohibited at any time and in any place whatsoever. [US Army 04, p. 3-38]

    3. Regarding non-military objectives:
       a. A presumption of civilian property attaches to objects traditionally associated with civilian use (dwellings, schools, etc.), as contrasted with military objectives, i.e., they are presumed not subject to attack. [Rawcliffe and Smith 06, p. 23]
       b. Undefended places are not subject to attack. This requires that all combatants and mobile military equipment be removed, no hostile use of fixed military installations, no acts of hostility committed by the authorities or the population, and no activities in support of military operations present (excluding medical treatment and enemy police forces). [Bill 00, p. 160]
       c. The environment cannot be the object of reprisals. Care must be taken to prevent long-term, widespread and severe damage. [Bill 00, p. 161]
       d. Cultural property is prohibited from being attacked, including buildings dedicated to religion, art, science, charitable purposes, and historic monuments. The enemy has a duty to mark them clearly with visible and distinctive signs. Misuse will make them subject to attack. [Bill 00, p. 162]
       e. Works and installations containing dangerous forces should be considered immune from attack. This includes nuclear power plants, dams, dikes, etc. (This is not U.S. law, however, which holds that the standard proportionality test should apply.)
       f. It is prohibited to attack, destroy, remove or render useless objects indispensable for the survival of the civilian population, such as foodstuffs, crops, livestock, water installations, and irrigation works [Rawcliffe and Smith 06, p. 24], unless these objects are used solely to support the enemy military. [Bill 00, p. 135]
       g. There exists an obligation to take feasible precautions in order to minimize harm to non-military objectives. [Bill 00, p. 135]

    4. Regarding Use of Arms:
       a. Lawful arms cannot be used in a manner that causes unnecessary suffering, or with the intent to cause civilian suffering (proportionality). The test essentially is whether the suffering occasioned by the use of the weapon is needless, superfluous, or grossly disproportionate to the advantage gained by its use. [Bill 00, p. 164]
       b. Indiscriminate attacks are prohibited. This includes attacks not directed against a military objective and the use of a method of attack that cannot be effectively directed or limited against an enemy objective. [Bill 00, pp. 154-156]

    5. Regarding War Crime Violations:
       a. All violations of the law of war should be promptly reported to a superior. [US Army 06; Rawcliffe and Smith 06, p. 45]
       b. Members of the armed forces are bound to obey only lawful orders. [US Army 04, p. 3-37]
       c. Soldiers must also attempt to prevent LOW violations by other U.S. soldiers. [Rawcliffe and Smith 06, p. 45]
       d. (Troop Information) In the rare case when an order seems unlawful, don't carry it out right away, but don't ignore it either; instead, seek clarification of that order. [Rawcliffe and Smith 06, p. 43]

    6. Regarding the definition of civilians: An important issue regarding discrimination is how to determine who is defined as a civilian [Bill 00, pp. 127-129] so as to afford them due protection from war. As late as 1949, the fourth Geneva Convention, which was primarily concerned with the protection of civilians, provided no such definition and relied on common sense, which may be hard to operationalize in modern warfare. The 1977 Protocol I commentary acknowledged that a clear definition is essential, but used an awkward negative definition: anyone who does not qualify for Prisoner of War (POW) status, i.e., does not have combatant status, is considered a civilian. This is clarified further by the following [US Army 62]:

       The immunity afforded individual civilians is subject to an overriding condition, namely, on their abstaining from all hostile acts. Hostile acts should be understood to be acts which by their nature and purpose are intended to cause actual harm to the personnel and equipment of the armed forces. Thus a civilian who takes part in armed combat, either individually or as part of a group, thereby becomes a legitimate target . . .

    Expanding further, this "actual harm" standard is consistent with contemporary U.S. practice, as reflected in the ROE-based harmful act/harmful intent test for justifying the use of deadly force against civilians during military operations [Bill 00]. Those civilians who participate only in a general sense in the war effort (non-hostile support, manufacturing, etc.) are excluded from attack [Bill 00, US Army 56]. According to Article 51(3) [Geneva Convention Protocol I of 1977], civilians "shall enjoy the protection of this section (providing general protection against dangers arising from military operations) unless and for such time as they take a direct part in hostilities," where "direct part" means acts of war which by their nature or purpose are likely to cause actual harm to the personnel and equipment of the enemy armed forces. Although the United States decided not to ratify Protocol I, there was no indication that this definition of civilian was objectionable.

    Appendix A contains the specific language used in the U.S. military manual that describes these Laws of War in more detail. We will restrict ourselves in this research to those laws that are specifically concerned with the application of lethality in direct combat, but it is clear that a more expansive treatment of ethical behavior of autonomous system