
Language & Communication, Vol. 9, No. 1, pp. 23-33, 1989. Printed in Great Britain.

0271-5309/89 $3.00 + .00 Pergamon Press plc

ON SAYING ‘STOP’ TO A ROBOT

COLLEEN CRANGLE

1. Introduction

This paper discusses the interpretation of stop-commands addressed to a robot. The purpose of the paper is first to show how complex a problem is introduced by the need to tell a robot, in English, to stop some activity. It further aims to show, in outline, the simple and elegant solution adopted for an experimental robot now under development. Third, the paper discusses ways in which the present analysis can be deepened using insights from recent philosophical analysis of the notion of intention. Finally, it seeks to identify some as yet unsolved problems in saying stop to a robot.

The work described in this paper forms part of a larger project on instructable robots, robots that can be taught, in English, to perform new tasks (Crangle et al., 1987; Suppes and Crangle, 1988). Our early work used a simple robot emulator that was taught, in English, to perform tasks in elementary arithmetic (Maas and Suppes, 1984, 1985; Crangle and Suppes, 1987). More recently, we have used the Robotic Aid, an experimental robot being developed as an aid for physically disabled people in the home or workplace (Michalowski et al., 1987a, b). This robot consists of a manipulator and simple gripper mounted on an omnidirectional vehicle. We are currently looking at its use in an office-like environment, a room containing a table and bookcase and books of various sizes and shapes. We want the user to be able to teach the robot to perform organizational tasks such as sorting the books and packing them into or out of the bookcase. Most of the examples in this paper are taken from our experiment with the Robotic Aid. Others are included where appropriate, however, to demonstrate the generality of the approach.

Tasks such as these book-handling chores, and indeed any that take place in a relatively unstructured environment, demand a complex and subtle combination of motor, perceptual, and cognitive skills. In the instructable robot project, we work with the realistic assumption that the robot will not perform perfectly or as expected in all circumstances. The ‘pick up’ action, for instance, will not initially be successful if the book cannot easily be grasped or is at the bottom of a pile, and a heavy book may slip in the robot’s grasp and have to be put down and grasped again more securely. An integral part of the project is therefore the interpretation and execution of corrective commands, commands that modify action the robot has already taken. Stop-commands are a particularly important form of corrective command. It is vital both for safety and for efficacy that a robot that understands natural-language commands requesting action also understands requests to halt that action.

This paper focuses on commands such as Stop moving!, Stop pushing the book!, and Stop pushing the book off the table!, commands that make reference to a specific action the robot is capable of performing. Other commands that are of interest but cannot be examined here in detail include the following: commands such as Stop standing next to the table! that request a state to be brought to an end; commands such as Stop rearranging the books on the shelf! that call for the interruption of a multi-step, structured activity; the single-word command Stop!; and commands such as Stop! You’re going to damage the book! that call a current activity to a halt by referring to its unwanted consequences.

Correspondence relating to this paper should be addressed to Colleen Crangle, Institute for Mathematical Studies in the Social Sciences, Ventura Hall, Stanford University, Stanford, CA 94305, U.S.A.

The paper is organized as follows. Section 2 shows why stop-commands pose such an interesting problem for natural-language instruction. Sections 3, 4, and 5 outline the broad strategy we have adopted for the interpretation of natural-language commands to a robot and show how that strategy is being applied to the experimental robot for the interpretation of commands of the form Stop doing X! In section 6 the analysis of stop-commands is evaluated and deepened in response to an important distinction introduced by Michael Bratman (1987). This distinction leads us to differentiate between those cases in which the robot intends to act in a certain way and those in which it intentionally acts that way.

2. The challenge of ‘stop’-commands: three problems

In most contexts, the command Stop doing X! or Stop X-ing! expresses more than the simple intention that the agent (human or robot) freeze. Consider, for instance, the command Stop pressing the button! Here the robot must withdraw its end-effector and reduce the force on the button to zero. In general, a command to stop doing X not only expresses the idea that X should be interrupted, it carries with it the usually unexpressed intention for the agent to do Z instead. The problem is to determine what that Z should be.

Consider first the command Stop moving! addressed to a mobile robot. All that is required is that the robot freeze. But now consider two different circumstances in which the command is given, one in which the robot is moving over level ground, the other in which it is moving up a ramp. In the second case the robot’s wheels may be turning, but with the robot making little or no progress up the ramp, most or all of the wheel action being dissipated in the robot’s attempts not to slip back down. If the interpretation of the stop-command has the effect simply of suspending all active processes in the robot giving rise to wheel rotation, the robot will roll back down the ramp, certainly not the anticipated response to the stop-command. Yet the straightforward action of suspending all such processes would suffice for the robot’s movement along a surface with no incline. The point of this ‘spinning wheel’ example is that even for cases where the intention expressed by the command Stop doing X! seems straightforwardly to depend only on what X is, at some level of detail the action that must be taken in response to the stop-command depends on the context in which the command is given.
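
The force of the ‘spinning wheel’ example can be made concrete with a small sketch. The fragment below is not the Robotic Aid’s control code; it is a minimal, hypothetical illustration in Python (all names invented) of why even Stop moving! must consult context, such as the incline the robot is on, before deciding what stopping amounts to.

```python
from dataclasses import dataclass

# Hypothetical stand-ins only: these class and method names are invented for
# illustration and are not part of the Robotic Aid's software.

@dataclass
class MobileBase:
    incline_deg: float = 0.0
    state: str = "driving"

    def on_incline(self) -> bool:
        # Arbitrary threshold for treating the surface as a ramp.
        return abs(self.incline_deg) > 2.0

    def hold_position(self) -> None:
        # Keep the drive active so the robot does not roll back down.
        self.state = "holding"

    def suspend_drive(self) -> None:
        # Simply suspend the processes that produce wheel rotation.
        self.state = "stopped"

def stop_moving(base: MobileBase) -> None:
    """Respond to 'Stop moving!' in a context-sensitive way."""
    if base.on_incline():
        base.hold_position()
    else:
        base.suspend_drive()

ramp = MobileBase(incline_deg=8.0)
stop_moving(ramp)
print(ramp.state)   # "holding": on a ramp, cutting drive power alone would not do
```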

Now consider the ‘moving target’ problem, described as follows. Suppose that in response to the command Move in the direction of Sam! the robot begins to move. Sam now gets up and walks away, but the robot does not change course, for (regardless of the fact that tracking a moving object is difficult) a reasonable interpretation of the command is one that assumes the reference was to Sam’s position at the time of utterance. But now suppose the robot is told Stop moving toward Sam! Perhaps the speaker did not notice Sam leave or perhaps from her perspective the robot is still moving toward Sam. Even so, the command is not a spurious one. It should not be ignored. The robot’s response should take into account the fact that its current action arose from an original intention to move toward Sam. What we have in the ‘spinning wheel’ problem and the ‘moving target’ problem are in fact examples of this more general situation: the robot intends to do one thing but due to external circumstances is instead doing something else.


A further problem with stop-commands is, to borrow a phrase from studies of practical reason within philosophy, the ‘package deal’ problem. Suppose the user has told the robot to go 3 feet left. To the left, however, is a table and when the user notices the table she says Stop going toward the table! (Often the more colloquial Stop! You’re going to hit the table would be used.) Here the robot intends to do X but due to external circumstances is also doing Y. Ideally, the robot’s perceptual functioning should tell it that it is indeed doing Y and provide the information that will allow it to stop doing Y. But perceptual feedback will not always be the solution. Suppose the robot is obeying the sequence of commands Scoop up some ice cream!, Bring the spoon up to my mouth!, Lower the spoon! The user then says Stop feeding me! Ideally, the robot should know all that is entailed in doing Y (feeding the speaker). But even that knowledge would not always be enough. Suppose the robot is obeying the command Pick up the book on your right! This book, a thin and flimsy one, bends out of shape as it is picked up and the operator says Stop bending the book! Being bent out of shape is certainly not generally entailed in a book’s being picked up. It is simply a possible side effect, and typically not even an expected one.

The ‘spinning wheel’, ‘moving target’, and ‘package deal’ problems all point to the fact that to get the interpretation of stop-commands right and to ensure an appropriate response from the robot, we must take several things into account. We must consider not only the nature of the action referred to in the stop-command but also what the robot’s intention is, what its original intention was, and the relation between the robot’s current action and the action referred to in the stop-command.

3. The interpretation of action commands

When we make a request such as Put the book down on the table! or Stop bending the book!, we want some specific action to be produced to achieve a desired result. But we seldom have in mind either a detailed algorithm for undertaking the action or a fixed set of constraints for the result that must be produced; for the first command, for instance, any clear spot on the table might do, or anywhere at all on the table. Such details are not part of the meaning of the command. Nor are other details such as exactly how long the action takes, whether the motion is smooth and swift or somewhat hesitant, whether the book is placed face up or face down on the table. However, if the robot were to knock several other books off the table in the process, or drop the book from a height of 3 feet, in ordinary circumstances we would not consider the request to have been fully satisfied. And if the command were Put the cup of coffee down on the table!, we certainly would intend the motion to be smooth enough to avoid spillage.

The interpretation and execution of a natural-language request for action thus apparently has conflicting demands to meet. Some specific and appropriate response must be produced. But we cannot always build explicit details of that response into the natural-language semantics, for such details are not always part of the meaning of the command.

This problem, although inescapably present with natural-language commands, is not unique to natural language. Similar difficulties arise whenever high-level task specifications are given to a robot. Within artificial intelligence research, a common approach to the problem of achieving specific and appropriate action has been to postulate plans or task schemas that the robot fills out to achieve a given goal. The fully formed plan then specifies the action the robot must take. Recent work has paid special attention to the circumstances in which the action has to be taken, seeking to design robots that can adjust their plans and even their goals in response to information supplied by the context (Georgeff et al., 1987; Nilsson, 1985).

This ‘planning’ paradigm has obvious intuitive appeal for some tasks and for autonomous robots in general. When a robot is being taught a new task, however, it will not yet have a general plan for that task, not even a partially filled-out plan that it can work from and reason about. For an instructable robot with its strong umbilical link to the operator giving it natural-language instructions, greater reliance has to be placed on the following two-fold strategy. First, all information that is present in the words of the natural-language command must be exploited. Second, the non-linguistic contextual information demanded by those words must be identified and accessed. When the robot’s own sensory and cognitive capacities do not provide this information, the system must rely on interaction with the user to get it.

As demonstrated in general terms by the ‘spinning wheel’, ‘moving target’, and ‘package deal’ problems, what is important for the interpretation of stop-commands is information about both the robot’s intentions and its actions. Studies of aspectual complementation further confirm the extent to which the semantics of stop-commands relies on an analysis of action. See, for example, The Semantics of English Aspectual Complementation by Alice Freed (1979). Although Freed’s comparative analysis of a wide range of aspectualizers (namely, begin, start, continue, keep, resume, repeat, stop, quit, cease, finish and end) does much to explain the conversational presuppositions and entailments of these words, it does not provide an adequate framework for the interpretation of stop-commands by a robot. What is missing and what is added in this study is the notion of intentionality. The next section therefore examines actions and intentions in the context of stop-commands to a robot, and proposes as a starting point a simple view of what it is for a robot to perform an action intentionally. The section that follows describes the responses of the experimental robot of our study to various stop-commands, pinpointing several natural ways in which the user is expected to interact with the robot.

Before moving on, however, it is important to point out that there are two competing concerns in any analysis of natural language for robots. The first is to be accurate in the semantic analysis, that is, to propose and implement an interpretive system that, for our study of stop-commands, does justice to the way stop and its complements really work in English. The second concern, however, is to devise a system that allows the robot to interpret natural-language commands given its limited cognitive and perceptual functioning. For instance, we do not expect to make the robot able to detect what the speaker is focused on visually (whether the cup or the plate, for instance) for the command Stop pushing the cup to the plate! (Is it the cup that must no longer be pushed or must the pushing no longer be toward the plate?) Yet when such clues are present in human discourse, they are undoubtedly used to determine the appropriate interpretation. Furthermore, as already noted, we are also limited in the extent to which we can make the robot reason about the task being performed. The discussion that follows will show how we have attempted to balance these competing concerns.

4. Actions and intentions

As stated in the Introduction, the actions of interest in this paper are all singular acts performed by an agent, namely our robot. To clarify what a singular act is, compare the singular act of picking up a book to the serial act of picking up books a, b and c one after the other, and to the generic act of picking books up off the counter at the end of every workday in the library. Distinctions such as these are familiar in studies of aspect in linguistics. See David Dowty (1972), for instance, and Bernard Comrie (1976).

Aspect is said to be concerned with the internal temporal quality of an event in terms of its inception, completion, repetition, duration, and so on. Many different syntactic and lexical means are used to express aspectual distinctions. To take just one example, the verb throw by its ordinary meaning signifies an activity that has a well-defined end point while the verb run does not always do so: to run a race is to engage in an activity with a well-defined end point, to run around is not (in general). It is important to note too that the aspectual character of an action is determined not by the verb alone but by the verb in consort with other parts of the sentence, such as the noun phrase or adverbial phrase, for instance. Consider the aspectual difference between singing versus singing a song and pushing a book versus pushing a book to the edge of the table. In both cases, the first activity has no well-defined end point whereas the second does. The aspectual character of an action is also determined by non-linguistic factors. For instance, consider the statement The robot is pushing the book said of a robot that is doing its daily cleaning of the bookshelf. This chore consists of dusting the top of each book and pushing the book up to the one on its immediate left. Each book-pushing action in this context thus has a well-defined end point.

An action that consists of a process leading up to a well-defined end point will be called telic, and an action without a well-defined end point will be called atelic. The end point of a telic action does not extend through time, does not have any duration; in Comrie’s terms, it is a punctual event. A natural-language expression will sometimes refer specifically to these punctual end-point events, not to the process leading up to the end point at all. Examples are reaching the summit of the mountain, winning the prize, finding the red book on the shelf. These three categories of action will be examined in the next section: telic actions, atelic actions and punctual end-point events. In the course of the discussion, repeated and habitual actions will also be referred to.
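
As a rough illustration of how these categories might be assigned in practice, the toy classifier below (a sketch with an invented mini-lexicon, not the system's actual semantics) shows the verb and its complement jointly determining aspectual character.

```python
# Toy sketch with an invented mini-lexicon; the real determination of aspect
# also depends on non-linguistic context, as the book-dusting example shows.

PUNCTUAL_VERBS = {"find", "reach", "win"}            # name end-point events
GOAL_MARKERS = ("to the", "into the", "onto the")    # complements that add an end point

def aspectual_character(verb: str, complement: str = "") -> str:
    if verb in PUNCTUAL_VERBS:
        return "punctual end-point event"    # e.g. "find the red book"
    if any(marker in complement for marker in GOAL_MARKERS):
        return "telic"                       # e.g. "push the book to the edge of the table"
    return "atelic"                          # e.g. "push the book"

print(aspectual_character("push", "the book"))
print(aspectual_character("push", "the book to the edge of the table"))
print(aspectual_character("find", "the biggest book"))
```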

What of intentions? Typically, an instructable robot obeys commands; it does not generate its own goals or form its own intentions. In its most straightforward operation, it is said to be performing X intentionally when it is performing X in response to the command Do X! For instance, while going down the corridor toward the front door in response to the command Go to the front door!, although the robot may also be thought to be going to the mailboxes which are at the front door, only going to the front door is thought to be done intentionally. There will be a reason later, in section 6, to return to this notion of intentionality and to make use of the distinction offered by Bratman in his Intention, Plans, and Practical Reason (1987). There he distinguishes between intending to do X and doing X intentionally.

For now, it is enough to stay with this simpler view of intention and therefore to differentiate between two sets of circumstances under which stop-commands may be issued. In the first, the user says Stop doing X! and the English expression X refers straightforwardly to some action X for which there was an earlier command of the form Do X! In the second set of circumstances, the user says Stop doing Y! and the English expression Y is a natural way of referring to the action X (perhaps just one of many different ways), but is not the way the action was originally referred to in the English command that gave rise to it. The primary significance of distinguishing these two circumstances is that in the second the robot must determine that it is indeed doing Y without relying on knowledge of its own internal ‘state’ or processes, for they will reflect the intention expressed in the original command, the intention to do X.
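
A minimal sketch of this distinction might look as follows (hypothetical function and names, not the implemented interpreter): if the stop-command names the very action the robot was commanded to do, the robot's own record of that command settles the matter, whereas a different description of the action calls for external verification or a question to the user.

```python
from typing import Optional

def handle_stop(stop_description: str, commanded_description: Optional[str]) -> str:
    """Crude sketch: match the stop-command against the description used in
    the original command; internal state cannot confirm redescriptions."""
    if commanded_description is not None and stop_description == commanded_description:
        return "interrupt the intentional action recorded for that command"
    return "verify externally (or ask the user) that the action is occurring"

# The robot was told to push the book out the way; the user now refers to the
# same activity under a different description.
print(handle_stop("push the book out the way", "push the book out the way"))
print(handle_stop("push the book to the edge of the table",
                  "push the book out the way"))
```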

Note that at this stage the discussion has been restricted to those cases in which the robot is indeed doing X. That is, examples of the ‘moving target’ problem described in section 2 are buried for the moment. They will be resurrected in section 6.

5. The robot’s responses

Interrupting intentional actions

First consider the case in which an action X (either telic or atelic) is being performed intentionally by the robot when the command Stop X-ing! is issued. Although one might expect these examples to be almost trivially easy, a case of simply canceling the original command, the difficulty lies in determining the appropriate alternative action that must be taken. Studies of negation suggest an approach to these commands.

Take the sentence Susan is not walking to school. It can be interpreted as saying quite generally that it is not the case that Susan is walking to school (the so-called sentence-negation interpretation). But, as pointed out by D. Gabbay and J. Moravcsik (1978), normal, everyday discourse typically uses negation to make a specific denial, to say in effect ‘No, it wasn’t like that but like this.’ So Susan is not walking to school, for instance, would be used to express the idea that Susan is not walking to school, she’s running, or the idea that Susan is not walking to school but to church, or the idea that it is not Susan but Sally who is walking to school. Similarly, Stop pushing the red book to the edge of the table! would be used to achieve one of the following specific actions: to get the robot to stop pushing altogether, that is, to cease all motion; to change the direction of pushing so that the book no longer gets pushed to the edge of the table but perhaps to the center of the table; to get not the red book but some other book pushed; to stop the red book’s being pushed without altering the basic robot movement (the book is put aside and the motion continues as before).

Clearly, then, although X does not in and of itself determine what the appropriate response should be to Stop X-ing!, it does suggest a range of responses that are appropriate under ‘normal’ or ‘default’ circumstances. This fact shapes our current design of the robot’s response. For each action X within the robot’s repertoire a set of default responses is identified. For instance, for the action of moving in a certain direction, two responses are appropriate: to cease all motion or to move in a different direction. For the action of pushing an object to a specific spot, four responses are appropriate: to push the object elsewhere, to push some other object, to cease all manipulator motion, or to put the object aside and continue the former motion.

How should the robot choose between the different actions? Studies of negation argue that in a negated assertion it is the most specific part of the assertion that is negated. When a sentence contains an adverbial, for instance, it is the adverbial that is negated; otherwise the specific information contained in the adverbial is superfluous. Similarly, the most specific part of a stop-command indicates what is to be stopped and so suggests an appropriate response. For each action in the robot’s repertoire, then, the responses to it are ordered in terms of the specificity they address in the original action. For instance, for the command Stop moving left!, the response of moving in a different direction is a response to the specific directionality of the original action and so it appears first. For the action of pushing an object to a specific spot, pushing the object elsewhere and pushing another object are both more specific than the other two responses and they are listed first. In response to the stop-command, then, the robot selects the first action in the list, and, if it is acceptable to the user, performs it in place of X. If the first action is rejected by the user, the second is selected, and so on. Future work must look at ways of using the particular specificity of the stop-command to help select a response.
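
The selection scheme just described can be sketched as follows. The action names, response lists, and confirmation dialogue are illustrative stand-ins drawn from the examples above, not the actual repertoire of the experimental robot.

```python
# Default responses per action, ordered so that responses addressing the most
# specific part of the original action come first (illustrative entries only).
DEFAULT_RESPONSES = {
    "move left": [
        "move in a different direction",   # addresses the specific direction
        "cease all motion",
    ],
    "push the object to a spot": [
        "push the object elsewhere",
        "push some other object",
        "cease all manipulator motion",
        "put the object aside and continue the former motion",
    ],
}

def respond_to_stop(action: str, user_accepts) -> str:
    """Propose default responses in order until the user accepts one."""
    for candidate in DEFAULT_RESPONSES.get(action, []):
        if user_accepts(candidate):
            return candidate
    return "ask the user what should be done instead"

# Example: the user rejects the first proposal and accepts the second.
answers = iter([False, True])
print(respond_to_stop("push the object to a spot", lambda proposal: next(answers)))
```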

Interrupting unintentional actions

Now consider the interpretation of Stop X-ing! when X is not being performed intentionally. First, for telic actions, if the robot is not doing X intentionally, there is no way the robot can tell with certainty whether or not it is doing X before the end point has been attained. Consider, for instance, the case of the robot’s being told Stop pushing the book to the edge of the table! while it is pushing the book out the way, having been told to push the book out the way. The robot can deny the imputed intention and continue its present activity, a response many humans would adopt most naturally. A more cooperative response is for the robot to reason from the fact that it is pushing the book out the way to the likelihood of its being pushed to the edge of the table while being pushed out the way. That is, in general, the robot would reason from its present intentional activity to the likelihood that the end point of X will be attained. If the likelihood was high, the robot would suspend that current intentional activity.

However, because an instructable robot is limited in the extent to which it can reason about its own actions, under the system’s current design the robot temporarily suspends whatever current intentional activity is most closely related to the telic action referred to in the stop-command. So, for example, for Stop pushing the book to the edge of the table!, the robot stops its pushing of the book, the action it took in response to the command Push the book out the way! The user is then asked if that is what he or she wants. If there is no such related intentional action to be stopped, the robot responds as most people would, that is, by saying in effect, ‘Am I really doing that (pushing the book to the edge of the table)? What must I do to stop it?’ This last solution, interrogating the user directly, is adopted also at present for all atelic actions, such as Stop bending the book! The underlying theory of action for stop-commands must therefore include some measure of how closely related the various actions in the robot’s repertoire are to each other. The action of moving, for instance, is related to almost all the other actions in its repertoire but not closely to many. This measure does not have to be above all reproach; the choice of related action is always confirmed by the operator. Future work should examine how the robot can use its knowledge of its current physical surroundings to select the most appropriate related intentional action and, when X is an atelic action, to determine whether or not it is indeed doing X.
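
The treatment of unintentional telic actions can likewise be sketched. The relatedness scores and action names below are invented for illustration; the text specifies only that some such measure exists and that the operator always confirms the choice.

```python
# Invented relatedness scores between current intentional actions and the
# telic action named in the stop-command.
RELATEDNESS = {
    ("push the book out the way", "push the book to the edge of the table"): 0.9,
    ("move left", "push the book to the edge of the table"): 0.2,
}

def handle_unintended_telic_stop(current_actions, stop_target, confirm):
    """Suspend the most closely related current intentional action and ask the
    user to confirm; with no related action, question the user directly."""
    best = max(current_actions,
               key=lambda a: RELATEDNESS.get((a, stop_target), 0.0),
               default=None)
    if best is not None and RELATEDNESS.get((best, stop_target), 0.0) > 0.0:
        if confirm(f"I have suspended '{best}'. Is that what you want?"):
            return f"suspended: {best}"
        return "ask the user what to do instead"
    return f"Am I really doing '{stop_target}'? What must I do to stop it?"

print(handle_unintended_telic_stop(
    ["push the book out the way"],
    "push the book to the edge of the table",
    confirm=lambda question: True))
```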

Punctual end-point events and repeated actions

When a command Do X! has been given and X is a punctual end-point event (Find the biggest book!, for instance) the stop-command needed to halt the action must make explicit reference to the process leading up to the end point. That is, we must say something like Stop looking for the book!; we will not say Stop finding the book! Process-oriented stop-commands like Stop looking for the book! are interpreted as calls to interrupt an intentional action if the robot is currently intentionally pursuing an associated end point of that process.

But what about the possibility of the command Stop X-ing! being issued where X is a punctual end-point event? In these cases, the stop-command must be interpreted as a call to halt some habitual or repeated action. Take the command often addressed to a child, Stop jumping on the furniture! By the time the individual jumping event can truly be said to be taking place (at which time it would make sense to issue this command as a call to halt the specific individual act), it typically cannot be stopped. The real purpose of the command is to refer to a past action and to curtail repetitions of it. Even if only one jump has taken place, the expectation in issuing this command is that future occurrences will be avoided. Such a stop-command therefore not only brings a repeated or habitual action to a halt, it also constrains future behavior. To take another example (not of a command that refers to a punctual end-point event but of a future-directed command), Stop bending the books! would typically be not only a call to take some corrective action right then but also a call to behave in a certain way in the future. A command that most clearly makes the point about punctual end-point events, though it would not be used very often, is Stop finding the book! A situation in which it would make sense to issue this command is one in which someone keeps on hiding a book and someone else keeps on finding it. As a suggestion on how to bring this game to an end, a third person says to the second Stop finding the book!
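
The forward-looking force of such commands can be put schematically, as in the sketch below (hypothetical class and names; the paper does not describe an implementation of this behavior): the stop-command halts any current repetition and also records a constraint on future behavior.

```python
class BehaviourConstraints:
    """Sketch: a stop-command over a repeated or habitual action both halts
    any current repetition and constrains future behavior."""

    def __init__(self):
        self.prohibited = set()

    def stop_repeated(self, action: str, currently_active: bool) -> str:
        self.prohibited.add(action)          # e.g. "bend the books"
        if currently_active:
            return f"halt the current '{action}' and avoid it in future"
        return f"avoid '{action}' in future"

constraints = BehaviourConstraints()
print(constraints.stop_repeated("bend the books", currently_active=True))
print("bend the books" in constraints.prohibited)   # True: future occurrences barred
```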

6. Intending, trying, and doing something intentionally

The previous section examined our experimental robot’s response to commands of the form Stop doing X! The correct interpretation and appropriate response were seen to depend not only on the action X (its aspectual character, in particular) but also on whether the robot was performing X intentionally or unintentionally, and if unintentionally, what it was the robot was doing intentionally.

In this section, the notion of intentionality is explored in more depth and the three problems of section 2 (the ‘spinning wheel’, ‘moving target’ and ‘package deal’ problems) are revisited.

The lesson of the ‘spinning wheel’ problem is that you cannot assume that simply because the robot intends to do X (move, or move up the ramp) it is actually doing X. What you can be sure of, however, in a robot designed to obey your instructions is that having been told to do X where X is, broadly speaking, within its capabilities, it will try to do X. Following Anscombe’s remarks about the relation between wanting and trying to get (Anscombe, 1963), Bratman remarks that the ‘primitive sign’ of an intention to X is trying to X. That there is some strong relation between intending to do X and trying to do X is borne out by our tendency to say to the robot Stop trying to do X! when we know of its intention to do X and see that it is not succeeding. If the robot were told Pick up the book by Taylor and Moore!, for instance, but because the book was large it did not manage to get a firm enough grip, we would say something like Stop trying to pick up the book. Push it instead!, not Stop picking up the book! Similarly for moving up the ramp, we would say Stop trying to move up the ramp!

In Intention, Plans, and Practical Reason (1987) Bratman identifies the ‘standard triad’ of intentional action. In a typical case of agent A’s intentionally doing X, three things are true: (a) A intends to do X; (b) A tries to do X; and (c) A intentionally does X. This is the typical case, however. He rejects the simple view that in general to do X intentionally I must intend to do X, suggesting rather that to do X intentionally I must intend to do something, but I need not intend to do X. He uses an extended example of video games to develop this position, an example too detailed to reproduce here in its entirety. A simpler supporting example that has strong intuitive appeal, however, for acknowledging a distinction between intentionally doing X and intending to do X is the following. It illustrates the point that one may do X intentionally even while doubting that one is X-ing but if one intends to do X one must believe that one will do X (or at least not have beliefs inconsistent with the belief that one will do X). The example is that of a person trying to make ten carbon copies on a typewriter. She may be skeptical of succeeding but if she does succeed, under normal circumstances we would say that she intentionally made ten copies. We would not ordinarily allow that she intends to make the ten copies if she really believes that she will not make them. [Challenges to this line of reasoning are pursued and countered in Bratman (1987). They are not of central concern here; the example is simply intended to make the distinction plausible in the absence of a detailed review of Bratman’s argument.]

Bratman offers the following schema as a framework: If an agent A intends to do Y and A does X in the course of executing the intention to do Y AND - - -, then A does X intentionally.

A full theory of intentional action would tell us how to fill in the blanks. He suggests one rough set of conditions based on his extended discussion of the video-games example. For our instructable robot and its need to obey stop-commands, we suggest a comparable set of conditions. Consider the following example.

Suppose we want our robot to push a book from one location to another on the table. Suppose we in fact have two robots. Robot A is untrained. It will blindly push a book from its present location to the target location taking the most direct route regardless of anything that may be in its way. Robot B has been instructed in the niceties of moving books around. If it encounters another book in its path it first pushes that book out the way before continuing. A and B are both told: Push the book by Taylor and Moore to the other side of the table! A book by Sande and Hawkins (S&H) lies in the direct path of the book by Taylor and Moore (T&M). A and B both end up pushing S&H in the course of pushing T&M to the other side of the table. But do they both do so intentionally? It is very natural to say that robot B does. But robot A is more accurately described as pushing S&H inadvertently or accidentally. The difference of course lies in the fact that with robot B, S&H is pushed in order to push T&M to the other side of the table. With robot A, S&H’s being pushed is merely a side effect of pushing T&M to the other side of the table. To be clear about the example, we must eliminate the possibility that robot B pushes S&H in a distinctively purposeful way by, say, breaking contact with T&M altogether and moving over to S&H then back again. Let us suppose the two acts of pushing appear similar. The difference is not one of appearance; the difference between the two acts is concretely vested in the robots’ different understandings of what it is to push a book from one location to another on the table and in their correspondingly different book-pushing skills. Because pushing a book from one location to another for robot B entails pushing other books out the way, the acts of pushing these other books are intentional for robot B and stop-commands that refer to them should be interpreted as calls to interrupt intentional actions.
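
The difference between the two robots can be put schematically (the plan structures below are invented for illustration; they are not the robots' actual control code): robot B's way of executing the pushing task entails a book-clearing sub-step, and it is that entailment, not any difference in outward appearance, that makes its pushing of S&H intentional.

```python
# Invented plan structures: what each robot's own skills entail for the task.
ROBOT_A_PLAN = {
    "push T&M to the other side of the table": [],   # no entailed sub-steps
}
ROBOT_B_PLAN = {
    "push T&M to the other side of the table": [
        "push an obstructing book out the way",      # entailed by B's skill
    ],
}

def is_intentional(plan, commanded_action: str, queried_action: str) -> bool:
    """True if the queried action is the commanded action itself or one of
    the sub-steps the robot's skills entail for it."""
    return (queried_action == commanded_action or
            queried_action in plan.get(commanded_action, []))

task = "push T&M to the other side of the table"
print(is_intentional(ROBOT_A_PLAN, task, "push an obstructing book out the way"))  # False: side effect
print(is_intentional(ROBOT_B_PLAN, task, "push an obstructing book out the way"))  # True: entailed sub-step
```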

At this stage we can break down the ‘package deal’ problem into three categories.

In the first, an intentional action by the robot (going to the front door, for example) entails not another distinct action such as opening a door along the way but is itself an action that can be described in some other way (going to the mailboxes which are at the front door, for instance). The example of going 3 feet left and in so doing also going to the table falls into this category too.

In the second category, an intentional action by the robot has some side effect, either expected or unexpected. The examples of bending the book in the course of picking it up and robot A’s pushing S&H out the way while pushing T&M to the other side of the table fall into this category.

In the third category, an intentional action undertaken by the robot entails some other action that must be undertaken, when appropriate, in order to perform the first action successfully. Here we have robot B’s pushing S&H out the way while pushing T&M to the other side of the table. Here too, we have the action of feeding the operator by bringing the spoon up to the operator’s mouth.

Stop-commands associated with the third category should, as argued for the robot A and robot B example, be treated as calls to interrupt intentional acts. Stop-commands associated with the first two categories are rightly treated for our experimental robot as calls to halt some unintentional action. In a robot equipped with extensive knowledge of its surroundings and of the possible side effects of its actions, this strategy would have to be revised. It is a topic for further study how this deeper knowledge would influence the robot’s interpretation of stop-commands.

In closing, I return briefly to the ‘moving target’ problem of section 2. Under our revised view of what the robot intends to do and what it does intentionally, if the operator intends the robot to do X and gives it a command of the form Do X!, the robot will itself intend to do X and will try to do X. If it then succeeds in doing X, it will be doing X intentionally. In addition, depending on the robot’s particular set of skills, there will be some actions entailed by the intentional performance of other actions and these too are performed intentionally when they are executed in the course of performing those other actions.

For the ‘moving target’ problem, when Sam gets up and goes away after the robot has begun to obey the command Move in the direction of Sam!, we have a case in which the robot intends to move toward Sam and is indeed trying to move toward Sam but is certainly not intentionally moving toward Sam because it is not in fact moving toward Sam at all. However, because the robot’s current action arose solely and entirely from an intention to move toward Sam, that current action surely is intentional. It is just not the action referred to in the original command or the action referred to in the stop-command. The current action is thus intentional, but, I would say, unintended. The proper treatment of the stop-command will be neither as specified earlier for unintentional actions nor as specified earlier for intentional actions. The intuition expressed initially in section 2 remains valid, however: the stop-command should not be treated as a spurious one that can be ignored; some appropriate action must be taken in response. Just what the response should be in the case of an intentional but unintended action remains a topic for further study.

Acknowledgements. This work was funded in part by the United States Veterans Administration through the Rehabilitation Research and Development Center of the VA Medical Center in Palo Alto. The author acknowledges the use of facilities in the Center for Design Research, the Center for the Study of Language and Information, and the Institute for Mathematical Studies in the Social Sciences at Stanford University. Lin Liang, Michael Barlow and Stefan Michalowski all contributed to the development of the Robotic Aid and the implementation of its natural-language system. The paper benefited from discussions with Patrick Suppes.


REFERENCES

ANSCOMBE, G. E. M. 1963 Intention. Cornell University Press, Ithaca.

BRATMAN, M. 1987 Intention, Plans, and Practical Reason. Harvard University Press, Cambridge, MA.

COMRIE, B. 1976 Aspect: an Introduction to the Study of Verbal Aspect and Related Problems. Cambridge University Press, Cambridge.

CRANGLE, C. E. and SUPPES, P. 1987 Context-fixing semantics for an instructable robot. International Journal of Man-Machine Studies 21, 371-400.

CRANGLE, C. E., SUPPES, P. and MICHALOWSKI, S. J. 1987 Types of verbal interaction with instructable robots. In Rodriguez, G. (Ed.) Proceedings of the Workshop on Space Telerobotics, JPL Publication 87-13, Vol. II, pp. 393-402. NASA Jet Propulsion Laboratory, Pasadena, CA.

DOWTY, D. R. 1972 Studies in the logic of verb aspect and time reference in English, Ph.D. dissertation, University of Texas at Austin.

FREED, A. F. 1979 The Semantics of English Aspectual Complementation. D. Reidel, Boston, MA.

GABBAY, D. M. and MORAVCSIK, J. M. 1978 Negation and denial. In Guenthner, F. and Rohrer, C. (Eds) Studies in Formal Semantics, pp. 251-265. North Holland, Amsterdam.

GEORGEFF, M. P., LANSKY, A. L. and SCHOPPERS, M. J. 1987 Reasoning and planning in dynamic domains: an experiment with a mobile robot. Artificial Intelligence Center, SRI International, 333 Ravenswood Avenue, Menlo Park, CA 94025.

MAAS, R. E. and SUPPES, P. 1984 A note on discourse with an instructable robot. Theoretical Linguistics 11, 5-20.

MAAS, R. E. and SUPPES, P. 1985 Natural-language interface for an instructable robot. International Journal of Man-Machine Studies 22, 215-240.

MICHALOWSKI, S., CRANGLE, C. and LIANG, L. 1987a A natural-language interface to a mobile robot. In Rodriguez, G. (Ed.) Proceedings of the Workshop on Space Telerobotics, JPL Publication 87-13, Vol. II, pp. 381-392. NASA Jet Propulsion Laboratory, Pasadena, CA.

MICHALOWSKI, S. J., CRANGLE, C. and LIANG, L. 1987b Experimental study of a natural-language interface to an instructable robotic aid for the severely disabled. Proceedings of the Tenth Annual Conference on Rehabilitation Technology RESNA ’87, San Jose, June 1987.

NILSSON, N. J. 1985 Triangle tables: a proposal for a robot programming language. Technical Note 347, Artificial Intelligence Center, SRI International, Menlo Park, CA 94025.

SUPPES, P. and CRANGLE, C. 1988 Context-fixing semantics for the language of action. In Dancy, J., Moravcsik, J. and Taylor, C. (Eds) Human Agency: Language, Duty, and Value, pp. 47-76. Stanford University Press, Stanford, CA.