Intentional Learning Procedures as Intention Revision Mechanisms

José Martín Castro-Manzano1, Axel Arturo Barceló-Aspeitia1, and Alejandro Guerra-Hernández2

1 Instituto de Investigaciones Filosóficas, Universidad Nacional Autónoma de México
Circuito Mario de la Cueva s/n, Ciudad Universitaria, México, D.F., México, 04510
[email protected], [email protected]

2 Departamento de Inteligencia Artificial, Universidad Veracruzana
Sebastián Camacho No. 5, Xalapa, Ver., México, 91000
[email protected]

Abstract. Recovering some insights from a philosophical analysis of intentions, we focus on intentions as plans writ large. Under this assumption we prove that an existing implementation of BDI learning procedures works as an intention revision mechanism, as suggested by an abstract specification. Finally, we translate the abstract postulates into AgentSpeak(L) terms so that the intention revision specification meets the implementation.

Key words: Intention revision, BDI agents, learning, AgentSpeak(L).

1 Introduction

Although intentions have received a lot of attention from both the philosophical and the computational points of view, their dynamic features have not been much studied [15]. Certainly, there are philosophical and formal theories of intention [4,5,11,13], but few of them consider the possibility of the revision of intentions [15]. And while the adaptation of the revision postulates to analyze intentional changes provides a useful and abstract specification for intention revision, it is not committed to any fixed mechanism or implementation. On the other hand, AgentSpeak(L) [12,14] has a concrete operational semantics that provides a framework to analyze the explicit changes in the agent's state and events in the environment; however, it does not account for intention revision as such. By contrast, the intention revision framework does not explicitly analyze the events that produce intentional changes nor the mechanism by which intentional changes may occur, and it only focuses on three particular operations of reconsideration: expansion, contraction and revision, whose completeness as a repertoire of actions is still left open [2]. Finally, there is a learning procedure for AgentSpeak(L) agents [6,8] that may be used to fix a particular mechanism for intention revision if intentions are considered as plans.


Following a philosophical analysis about intentions, we consider intentions as plans. While this consideration may be subtle, it is indeed useful for our purposes. Assuming this position and the existing learning procedures, we go on to prove some propositions regarding the abstract specification and the learning implementation for AgentSpeak(L) agents. In this way, we can explore how learning procedures work as intention revision mechanisms. Thus, the general issue is, as usual, the meeting between some abstract specification and some particular implementation; the goal, and the main contribution of this work, is to show how an existing implementation of BDI learning allows intention revision as suggested in an abstract specification.

This work is motivated by the problem of intention revision. We know intention revision is possible since the philosophical analysis of the role of intentions tells us that intentions have certain features such as pro-activity, inertia and admissibility [4]. These features guarantee, respectively, that intentions need mechanisms of commitment, defeasibility and consistency which, in turn, allow intentions to be studied in terms of revision and non-monotonicity [15]. Thus, we explore how some learning procedures may work as mechanisms for intention revision in order to study the relationships between intentional learning and intention revision.

The paper is organized as follows. In Section 2 we consider the distinction Bratman makes about intentions and we focus on the concept of intentions as plans. In Section 3 we revisit the abstract specification for intention revision. In Section 4 we present our results on how an existing learning procedure works as an intention revision mechanism, and so, we translate the abstract specification in terms of a concrete one. Finally, in Section 5 we discuss the results as well as current and future work.

2 Preliminaries about intentions

Bratman proposes a taxonomy of intentions and distinguishes three kinds of intentions: deliberative, non-deliberative and policy-based [4]. When an agent at t1 intends φ at t2 as the outcome of a process of deliberation, the intention is called deliberative. If the agent has an intention, not as the result of an actual deliberation, but because the agent has it from a previous moment t0 and has preserved the intention from t0 to t1 without reconsidering it, it is called non-deliberative. Finally, when the intentions are general and about particular circumstances, they are called policy-based intentions. The importance of policy-based intentions lies in their structure: Bratman considers policy-based intentions to be intentions that behave like rules.

Following this account of intentions, Bratman also suggests that plans are intentions writ large. These two ideas are useful for two reasons: the existing formalisms about intention seem to forget that intentions have a plan structure and treat intentions as just another atomic fragment of the BDI architecture; and also, this insight lets us treat plans as intentions. However, plans alone are merely courses of action that wait for some form of activation, i.e., in order for plans to become actual intentions we require some form of commitment, for we also know that intentions are particular courses of action which the agent has committed to achieve. So, in order to reconcile these ideas we say intentions are plans tout court.

Thus, we consider that the first two kinds of intention, deliberative and non-deliberative, can be reduced into a single kind, say, ordinary intention. And so, we modify Bratman's taxonomy about intentions, and at the top of the classification we obtain only ordinary intentions and policy-based intentions. It is clear that, from now on, we focus on the second kind of intention, which we are going to formally understand as policy-based intentions with a structure te : ct ← h, where te denotes an event that triggers the whole intention, ct denotes a set of particular circumstances and h denotes the set of actions to do (see Table 1).

It should be clear that we cannot talk about intention revision if the intentions do not allow the possibility of revision. So, the non-deliberative intentions are not part of this study. Deliberative and policy-based intentions have a certain inclusion relationship: all policy-based intentions are deliberative, but not all deliberative intentions are policy-based. It should be clear that, from now on, when we consider an agent, its intentions constitute a particular sort of intentions, namely, policy-based intentions. These intentions, as an irreducible component of the BDI model of rational agency [4], have certain features that, taken together, make them different from beliefs and desires: pro-activity (intentions are conduct-controlling components), inertia (once an intention has been taken, it resists being abandoned) and admissibility (once an intention has been taken, the agent will not consider contradictory options). Therefore, when we consider an agent, we say its intentions require a notion of commitment (given the principle of pro-activity), a notion of consistency (given the admissibility criteria) and a notion of retractability (given the notion of inertia). These features guarantee, respectively, that intentions need mechanisms of commitment, defeasibility and consistency which, in turn, allow the research about intentions in terms of revision: just as the changes of beliefs require a theory of belief revision, the changes of intentions require a theory of intention revision.

3 The abstract specification

One of the first steps for a theory of intention revision is the adaptation of the existing specifications for other components of the BDI model. The adaptation of the AGM postulates [1] for revision and contraction so far seems to preserve its properties and behave accordingly for intentions. However, such an adaptation is not committed to any fixed mechanism or implementation. For the sake of completeness, we present the postulates as an abstract specification.

The symbol ⊛ stands for the revision function, ⊖ for contraction, ⊕ for expansion, Σ for the intentional set and φ for a particular intention. Σ⊥ denotes an inconsistent intentional set.


Postulate 1 (⊛1) For any intention φ and any intentional set Σ, Σ ⊛ φ is an intentional set.

Postulate 2 (⊛2) φ ∈ Σ ⊛ φ.

Postulate 3 (⊛3) Σ ⊛ φ ⊆ Σ ⊕ φ.

Postulate 4 (⊛4) If ¬φ ∉ Σ, then Σ ⊕ φ ⊆ Σ ⊛ φ.

Postulate 5 (⊛5) Σ ⊛ φ = Σ⊥ if ⊢ ¬φ.

Postulate 6 (⊛6) If ⊢ φ ⇔ ψ, then Σ ⊛ φ = Σ ⊛ ψ.

Postulate 7 (⊖1) For any intention φ and any intentional set Σ, Σ ⊖ φ is an intentional set.

Postulate 8 (⊖2) Σ ⊖ φ ⊆ Σ.

Postulate 9 (⊖3) If φ ∉ Σ, then Σ ⊖ φ = Σ.

Postulate 10 (⊖4) If ⊬ φ, then φ ∉ Σ ⊖ φ.

Postulate 11 (⊖5) If φ ∈ Σ, then Σ ⊆ (Σ ⊖ φ) ⊕ φ.

Postulate 12 (⊖6) If ⊢ φ ⇔ ψ, then Σ ⊖ φ = Σ ⊖ ψ.

Our goal now is to show how an existing learning procedure works as a mechanism for intention revision, i.e., how an existing implementation meets these postulates.

4 Intentional learning procedures as a mechanism for intention revision

To relate the intentional learning procedures with intention revision, we will need the formalism of AgentSpeak(L) [14] as defined for its interpreter Jason [3].

4.1 Syntax of AgentSpeak(L)

An agent ag is formed by a set of plans ps (agps) and beliefs bs (grounded literals). Each plan has the form te : ct ← h. The context ct of a plan is a literal or a conjunction of literals. A non-empty plan body h is a finite sequence of actions, goals (achieve ! or test ? an atomic formula), or belief updates (addition + or deletion −). ⊤ denotes empty elements, e.g., plan bodies, contexts, intentions. The triggering events are updates (addition or deletion) of beliefs or goals. The syntax is shown in Table 1.


ag ::= bs ps
bs ::= b1 . . . bn (n ≥ 0)
ps ::= p1 . . . pn (n ≥ 1)
p ::= te : ct ← h
te ::= +at | −at | +g | −g
ct ::= ct1 | ⊤
ct1 ::= at | ¬at | ct1 ∧ ct1
h ::= h1 ; ⊤ | ⊤
h1 ::= a | g | u | h1 ; h1
at ::= P(t1, . . . , tn) (n ≥ 0)
a ::= A(t1, . . . , tn) (n ≥ 0)
g ::= !at | ?at
u ::= +b | −b

Table 1. Syntax of AgentSpeak(L) [3]
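As an illustration of this grammar, a plan te : ct ← h can be encoded as a simple structure. The following Python sketch (with invented predicate names, not code from Jason) shows one such plan:

```python
# Illustration only: one way to encode a plan te : ct ← h from Table 1
# as a Python structure. The predicate and action names are invented.
from collections import namedtuple

Plan = namedtuple("Plan", ["te", "ct", "h"])

# +!own(book) : ¬own(book) ∧ have(money) ← !go(bookstore); buy(book)
p = Plan(te="+!own(book)",
         ct=("¬own(book)", "have(money)"),   # a conjunction of literals
         h=("!go(bookstore)", "buy(book)"))  # a finite sequence of goals/actions

print(p.te)  # the triggering event that activates the plan
```

The triple mirrors the production p ::= te : ct ← h directly: each field corresponds to one non-terminal of Table 1.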

4.2 Operational semantics of AgentSpeak(L)

The operational semantics of AgentSpeak(L) is defined as a transition system between configurations ⟨ag, C, M, T, s⟩, where:

– ag is an agent program formed by beliefs bs and plans ps.

– An agent circumstance C is a tuple ⟨I, E, A⟩ where I is the set of intentions {i, i′, . . .} s.t. each i ∈ I is a stack of partially instantiated plans p ∈ ps; E is a set of events {⟨te, i⟩, ⟨te′, i′⟩, . . .}, s.t. te is a triggering event and each i is an intention (internal event) or an empty intention ⊤ (external event); and A is a set of actions to be performed by the agent in the environment.

– M is a tuple ⟨In, Out, SI⟩ that works as a mailbox, where In is the mailbox of the agent, Out is a list of messages to be delivered by the agent and SI is a register of suspended intentions (intentions that wait for an answer message).

– T is a tuple ⟨R, Ap, ι, ε, ρ⟩ that registers temporal information: R is the set of relevant plans given a certain triggering event; Ap is the set of applicable plans (the subset of R s.t. bs |= ct); ι, ε and ρ register, respectively, the intention, the event and the current plan during an agent execution.

– The configuration label s ∈ {SelEv, RelPl, AppPl, SelAppl, SelInt, AddIM, ExecInt, ClrInt, ProcMsg} indicates the current step in the reasoning cycle of the agent.

For the time being, we will only deal with the definition of an agent as a tuple ⟨bs, ps⟩ and the circumstance component CI.

4.3 Intentional learning procedures

It is well known that in dynamic environments a very cautious agent performs better than a bold one; and conversely, in static environments boldness pays better [10]. The relevance of learning intentionally is that the right degree of cautiousness or boldness is learned by the agents, instead of being established once and for all by the programmers [7]. This adaptive behavior is only possible if the agents have a single-minded commitment strategy.


In the context of AgentSpeak(L) it is known that agents do not follow a single-minded commitment explicitly [9]. However, the use of intentional learning mechanisms provides an alternative way to achieve a single-minded strategy [9]. The basic idea is that agents can learn, in the same way they learn the adoption of successful plans, the reasons for the adoption of plans that fail. An extension of the operational semantics of AgentSpeak(L) that deals with intentional learning, by way of incremental and inductive methods, has been proposed [8,7]. It is inspired by the way Jason [3] is extended with speech acts: the new rules of the operational semantics are implemented in a library of plans. Using these techniques it is possible to study the relations between revision and learning using rules like this:

(Abandon)

    SE(CE) = ⟨+abandon(φ), ⊤⟩ ∧ agbs |= intending(φ)
    ⟨ag, C, M, T, SelEv⟩ → ⟨ag′, C′, M, T, SelEv⟩

s.t. C′E = CE \ {⟨+abandon(φ), ⊤⟩}, ag′bs ⊭ intending(φ), C′I = CI \ φ.

This rule dictates that when an agent intends φ and an event of the form +abandon(φ) is generated, the event is removed from CE (the set of events), the agent no longer believes intending(φ), the intention is removed from CI (the set of intentions) and a new event is to be selected. The important step here is that we can reduce this rule to the next function without losing its main properties about intentions:

Definition 1 (Abandon) The abandon rule is a function s.t.

abandon(φ, CI) =
    CI − φ    if φ ∈ CI
    CI        if φ ∉ CI or φ = ⊤

Conversely, we can define a function that behaves like this:

Definition 2 (Learn) The learn rule is a function s.t.

learn(φ, CI) =
    CI        if φ ∈ CI or φ = ⊤
    CI ∪ φ    if φ ∉ CI
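Under this set-based reading, Definitions 1 and 2 can be sketched in Python. This is an illustration only, not the Jason library code, and the intention names are invented:

```python
# Sketch of Definitions 1 and 2: abandon and learn as operations on a
# set of intentions C_I. Intentions are opaque hashable values.
TOP = "⊤"  # marker for the empty element ⊤

def abandon(phi, C_I):
    """Definition 1: drop phi from C_I; vacuous if phi is absent or empty."""
    if phi in C_I:
        return C_I - {phi}
    return set(C_I)  # phi ∉ C_I or phi = ⊤

def learn(phi, C_I):
    """Definition 2: add phi to C_I; vacuous if phi is present or empty."""
    if phi in C_I or phi == TOP:
        return set(C_I)
    return C_I | {phi}

C_I = {"own(book)", "go(library)"}
print(abandon("go(library)", C_I))  # {'own(book)'}
print(learn("buy(book)", C_I))      # C_I with 'buy(book)' added
```

Note that both functions return fresh sets, so applying a rule never mutates the circumstance it was given, which matches the transition-system reading where C′I is a new configuration component.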

Notice that here we work with CI. It is important to see the difference between agps and CI, which lies in the level of commitment. The component CI of the agent circumstance denotes the intentions the agent is committed to achieve within a reasoning cycle, while agps is the base or library of plans. Working with this component has two advantages: it lets us work within each reasoning cycle and not with the whole base of plans (although, eventually, we would like to extend our results to the base of plans); and, from a cognitive point of view, rational agents seem to perform reconsideration processes during reasoning cycles.

Another important issue to consider is the problem of the consequences of an intention. During reasoning cycles the consequences of an intention are subintentions that need to be accomplished before the original intention is achieved. If we let φ and ψ be intentions, the consequences of an intention φ are Cn(φ) = {ψ : ψ ∈ h(φ)}. Thus, considering the nature of CI, we can say its consequences are defined as Cn(CI) = ⋃φ∈CI {ψ : ψ ∈ h(φ)}. Notice, also, that the consequences of intentions do not behave like implications. When we require that ψ has to be achieved to accomplish φ, we do not mean an implication of the form ψ → φ, for such a form implies a problem of side effects [5], which goes against the well-defined theories of intentions. On the contrary, we mean φ ← ψ as a recipe for action, which is coherent with the view of intentions as plans. Finally, the application of the abandon rule yields C′I ⊆ CI, which is trivially true; and φ ∉ Cn(C′I), since if φ ∈ Cn(C′I) then φ will fail and the rule ClrInt will take φ out of C′I [3].

4.4 Results

Using these functions we prove the following results.

Proposition 1 Abandon iff Contraction:

abandon(φ, CI) ⇔ CI ⊖ φ

Proof. From left to right, assuming abandon, the six postulates for contraction should hold.
Case 1. By definition, the result of abandon is a set of intentions CI, s.t. CI is an intentional set.
Case 2. The application of abandon(φ, CI) yields C′I = CI − φ, with C′I ⊆ CI.
Case 3. If φ ∉ CI, then it is clear that abandon(φ, CI) = CI.
Case 4. If φ ∉ Cn(CI), then φ ∉ CI. Thus, abandon(φ, CI) = CI.
Case 5. If φ ∈ CI, then it follows that CI ⊆ abandon(φ, CI) ∪ φ.
Case 6. If φ = ψ, then abandon(φ, CI) = abandon(ψ, CI).
From right to left, assuming the six postulates of contraction, the definition of abandon should hold. We have to check that the postulates yield intentional sets whether φ ∈ CI or φ ∉ CI, which is the case from ⊖1 to ⊖3 and ⊖5 to ⊖6. The case of ⊖4 is slightly different: if ⊬ φ then φ ∉ CI, therefore abandon(φ, CI) = CI, which means that φ ∉ CI ⊖ φ. □
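The left-to-right direction of Proposition 1 can be made concrete with a toy check (a sketch, not a proof) that the abandon function satisfies the inclusion, vacuity and success postulates on a small finite intentional set; the element names are invented, and no element is a theorem, so the antecedent of ⊖4 holds throughout:

```python
# Toy check that abandon behaves like AGM-style contraction on a finite
# intentional set: success, inclusion and vacuity hold for every element tried.

def abandon(phi, C_I):
    return C_I - {phi} if phi in C_I else set(C_I)

Sigma = {"a", "b", "c"}
for phi in ["a", "b", "c", "d"]:
    out = abandon(phi, Sigma)
    assert phi not in out        # success (⊖4): φ ∉ Σ ⊖ φ
    assert out <= Sigma          # inclusion (⊖2): Σ ⊖ φ ⊆ Σ
    if phi not in Sigma:
        assert out == Sigma      # vacuity (⊖3): Σ ⊖ φ = Σ
print("contraction postulates hold on the toy set")
```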

Proposition 2 Learn iff expansion.

learn(φ,CI)⇔ CI ⊕ φ

Proof. From left to right, assuming learn, the definition of expansion should hold, i.e., CI ⊕ φ = CI ∪ φ. There is only one case: learn(φ, CI) = CI ∪ φ, whether φ ∈ CI or φ ∉ CI. In the other direction, assuming the definition of expansion, it trivially matches the definition of learn. □

Having these results, it is easy to see the next one which, basically, statesthat learning and abandoning are duals:

Lemma 1 learn(φ, abandon(φc, CI))⇔ abandon(φc, learn(φ,CI))
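Lemma 1 can be illustrated with the same set-based sketch; the names are invented, and φc is modeled simply as another element, following the paper's complement notation:

```python
# Illustration of Lemma 1: learning φ after abandoning φ^c commutes with
# abandoning φ^c after learning φ. Element names are invented.

def abandon(phi, C_I):
    return C_I - {phi} if phi in C_I else set(C_I)

def learn(phi, C_I):
    return set(C_I) if phi in C_I else C_I | {phi}

C_I = {"p", "q"}
left = learn("r", abandon("q", C_I))
right = abandon("q", learn("r", C_I))
print(left == right)  # True: both orders yield {'p', 'r'}
```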


Remark 1 Using the lemma, the following results are straightforward:
If φ ∈ CI or φ ∉ CI then learn(φ, abandon(φc, CI)) ⊆ learn(φ, CI).
If φc ∉ CI and φ ∈ CI then learn(φ, abandon(φc, CI)) = CI.
If φc ∉ CI and φ ∉ CI then learn(φ, abandon(φc, CI)) = CI ∪ φ.

Proposition 3 Learn and abandon iff revision.

learn(φ, abandon(φc, CI)) ⇔ CI ⊛ φ

Proof. From left to right, assuming the composition of both functions, the revision postulates should hold.
Case ⊛1: learn(φ, abandon(φc, CI)) yields a set CI, which is an intentional set.
Case ⊛2: Any φ is accepted in learn(φ, abandon(φc, CI)).
Case ⊛3: learn(φ, abandon(φc, CI)) ⊆ learn(φ, CI). By Remark 1, whether φ ∈ CI or φ ∉ CI, learn(φ, abandon(φc, CI)) is a subset of learn(φ, CI).
Case ⊛4: If φc ∉ CI then learn(φ, CI) ⊆ learn(φ, abandon(φc, CI)). By Remark 1, if φ ∈ CI then learn(φ, abandon(φc, CI)) = CI and if φ ∉ CI then learn(φ, abandon(φc, CI)) = CI ∪ φ, i.e., learn(φ, CI) ⊆ learn(φ, abandon(φc, CI)).
Case ⊛5: If ⊢ ¬φ, then for all agent configurations, ¬φ ∈ CI. If we apply learn(φ, abandon(φc, CI)), then we obtain a set CI s.t. φ ∈ CI, which is a contradiction.
Case ⊛6: If φ ⇔ ψ, then learn(φ, abandon(φc, CI)) = learn(ψ, abandon(ψc, CI)).
In the remaining direction, assuming the axioms of revision, both definitions have to hold. We only have to check that the postulates yield closed intentional sets, which is the case from ⊛1 to ⊛4 and ⊛6. The case of ⊛5 is slightly different: an inconsistent intentional set is not supported by the application of learn(φ, abandon(φc, CI)). □
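Proposition 3 can be sketched as follows, assuming φc is the negation of φ (encoded here with a hypothetical ~ tag): revision by φ is abandoning φc and then learning φ, in the spirit of the Levi identity:

```python
# Sketch of Proposition 3: revision by φ as learn(φ, abandon(φ^c, C_I)).
# The ~ negation tag and the intention names are invented for illustration.

def abandon(phi, C_I):
    return C_I - {phi} if phi in C_I else set(C_I)

def learn(phi, C_I):
    return set(C_I) if phi in C_I else C_I | {phi}

def neg(phi):
    # strip or add the negation tag
    return phi[1:] if phi.startswith("~") else "~" + phi

def revise(phi, C_I):
    return learn(phi, abandon(neg(phi), C_I))

C_I = {"~own(book)", "rest"}
out = revise("own(book)", C_I)
print(out)  # success postulate: own(book) is in; ~own(book) was abandoned
```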

The goal of modifying CI reflects the common assumption that agents need to adapt to the environment in the short term, during reasoning cycles. The next important step is that we want to extend the results above by generalizing the following definitions, so that the functions do not apply to the component CI during a reasoning cycle, but to all the plans of the agent's architecture agps. The idea of modifying agps reflects the idea that agents need to modify their plans during their lifetime, in the long run, given the short-term adaptations to the environment.

Definition 3 (Abandon) The abandon rule is a function s.t.

abandon(φ, agps) =
    agps − φ    if φ ∈ agps
    agps        if φ ∉ agps or φ = ⊤

Conversely:

Definition 4 (Learn) The learn rule is a function s.t.

learn(φ, agps) =
    agps        if φ ∈ agps or φ = ⊤
    agps ∪ φ    if φ ∉ agps


It is straightforward that the generalized application of the learning procedures over the library of plans should behave accordingly:

Proposition 4 The following statements hold:

abandon(φ, agps) ⇔ agps ⊖ φ

learn(φ, agps) ⇔ agps ⊕ φ

learn(φ, abandon(φc, agps)) ⇔ agps ⊛ φ

Lemma 2 learn(φ, abandon(φc, agps))⇔ abandon(φc, learn(φ, agps))

4.5 Translation

With these results in mind, we can proceed to translate the abstract intention revision postulates in terms of AgentSpeak(L), so that the abstract specification meets a particular concrete counterpart, i.e., we answer two questions: i) is there some implementation close to the abstract specification? and ii) what does the implementation mean according to the specification?

Translation 1 (AS⊛1) Given an agent ag = ⟨bs, ps⟩ and an intention φ ∈ agps, agps ⊛ φ ⊆ agps.

Translation 2 (AS⊛2) φ ∈ agps ⊛ φ, given that agps is an intentional set.

Translation 3 (AS⊛3) agps ⊛ φ ⊆ (agps ⊕ φ), and so, the revision by φ is included in the expansion by φ.

Translation 4 (AS⊛4) If ¬φ ∉ agps, then agps ⊕ φ ⊆ agps ⊛ φ, which means that if the negation of an intention is not in the set of intentions, then the expansion by that intention is included in the revision.

Translation 5 (AS⊛5) agps ⊛ φ = agps⊥ if ⊢ ¬φ. This postulate is quite important. It defines one property of Bratman's account of intentions: it defines, prima facie, what we understand as intentional consistency. In terms of this formalism, if ¬φ is derivable, then φ ∈ agps would make agps inconsistent.

Translation 6 (AS⊛6) If φ ⇔ ψ, then agps ⊛ φ = agps ⊛ ψ.

Translation 7 (AS⊖1) If agps is an intentional set, then the contraction of agps is also an intentional set. In other words, contraction is closed.

Translation 8 (AS⊖2) agps ⊖ φ ⊆ agps. This inclusion indicates that the contracted intentional set is a subset of the original one.

Translation 9 (AS⊖3) If φ ∉ agps, then agps ⊖ φ = agps. This is to say, if an intention to be contracted is not in the original set of intentions, then the contraction is vacuous.

Translation 10 (AS⊖4) If φ cannot be derived from agps, then φ does not survive the contraction, i.e., φ ∉ agps ⊖ φ.

Translation 11 (AS⊖5) If φ ∈ agps, then agps ⊆ (agps ⊖ φ) ⊕ φ.

Translation 12 (AS⊖6) If φ ⇔ ψ, then agps ⊖ φ = agps ⊖ ψ. This indicates that logically equivalent intentions must be treated equally when doing contractions.

In this way we can understand, within a particular implementation, the meaning of the abstract specification. It is easy to see that these particular intentional learning procedures provide a mechanism for intention revision as specified by the abstract specification, whenever we consider intentions as plans writ large. The importance of this translation lies in the preservation of the properties of the abstract specification in the grounded implementation.

5 Conclusion

By recovering some insights from a philosophical analysis about intentions, we took the idea that intentions are plans and not merely another atomic fragment of the BDI architecture. Using that assumption we showed how an existing implementation of intentional learning allows intention revision as suggested in an abstract specification. We tried to answer two technical questions: i) is there some implementation close to the abstract specification? and ii) what does the implementation mean according to the specification? And we suggested that agents need to adapt to the environment both in the short term, during reasoning cycles, and in the long term.

Finally, the philosophical and also technical question still remains: how does this whole process preserve the properties of the philosophical analysis? Future work is related to answering this question, as well as to determining what kind of relation, if any, this framework allows for intentions. We certainly require the intention revision structure to be related to a non-monotonic logical framework.

Acknowledgements. The authors would like to thank the anonymous reviewers for their helpful comments and precise corrections. The first author is supported by the CONACyT scholarship 214783.

References

1. Alchourrón, C.E., Gärdenfors, P., Makinson, D.: On the logic of theory change: partial meet contraction and revision functions. Journal of Symbolic Logic 50, 510–530 (1985)

2. van Benthem, J.: Dynamic logic for belief revision. Journal of Applied Non-Classical Logics 17(2) (2007)

3. Bordini, R.H., Hübner, J.F., Wooldridge, M.: Programming Multi-Agent Systems in AgentSpeak using Jason. Wiley, England (2007)

4. Bratman, M.: Intention, Plans, and Practical Reason. Harvard University Press, Cambridge (1987)

5. Cohen, P., Levesque, H.: Intention is choice with commitment. Artificial Intelligence 42(3), 213–261 (1990)

6. Guerra-Hernández, A., Ortíz-Hernández, G.: Toward BDI sapient agents: Learning intentionally. In: Mayorga, R.V., Perlovsky, L.I. (eds.) Toward Artificial Sapience: Principles and Methods for Wise Systems, pp. 77–91. Springer, London (2008)

7. Guerra-Hernández, A., Castro-Manzano, J.M., El Fallah Seghrouchni, A.: Toward an AgentSpeak(L) theory of commitment and intentional learning. In: Gelbukh, A., Morales, E.F. (eds.) MICAI 2008. LNCS, vol. 5317, pp. 848–858. Springer-Verlag, Berlin Heidelberg (2008)

8. Guerra-Hernández, A., Ortíz-Hernández, G., Luna-Ramírez, W.A.: Jason smiles: Incremental BDI MAS learning. In: MICAI 2007 Special Session, pp. 61–70. IEEE Computer Society Press, Los Alamitos (2008)

9. Guerra-Hernández, A., Castro-Manzano, J.M., El Fallah Seghrouchni, A.: CTLAgentSpeak(L): a specification language for agent programs. Journal of Algorithms in Cognition, Informatics and Logic (2009)

10. Kinny, D., Georgeff, M.: Commitment and effectiveness of situated agents. In: Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), pp. 82–88. Sydney, Australia (1991)

11. Konolige, K., Pollack, M.E.: A representationalist theory of intentions. In: Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-93), pp. 390–395. Morgan Kaufmann, San Mateo (1993)

12. Rao, A.S., Georgeff, M.P.: Modelling rational agents within a BDI-architecture. In: Huhns, M.N., Singh, M.P. (eds.) Readings in Agents, pp. 317–328. Morgan Kaufmann (1998)

13. Rao, A.S., Georgeff, M.P.: Modelling rational agents within a BDI-architecture. In: Huhns, M.N., Singh, M.P. (eds.) Readings in Agents, pp. 317–328. Morgan Kaufmann (1998)

14. Rao, A.S.: AgentSpeak(L): BDI agents speak out in a logical computable language. In: Van de Velde, W., Perram, J.W. (eds.) MAAMAW. LNCS, vol. 1038, pp. 42–55. Springer, Heidelberg (1996)

15. van der Hoek, W., Jamroga, W., Wooldridge, M.: Towards a theory of intention revision. Synthese. Springer-Verlag (2007)