Noname manuscript No. (will be inserted by the editor)

Mario Gómez · Javier Carbó · Clara Benac Earle
E-mail: {mgomez1,jcarbo,cbenac}@inf.uc3m.es

Honesty and Trust Revisited: The Advantages of Being Neutral About Other’s Cognitive Models

Received: date / Accepted: date

Abstract Open distributed systems pose a challenge to trust modelling due to the dynamic nature of these systems (e.g., electronic auctions) and the unreliability of self-interested agents. The majority of trust models implicitly assume a shared cognitive model for all the agents participating in a society, and thus they treat the discrepancy between information and experience as a source of distrust: if an agent states a given quality of service, and one experiences a different quality for that service, such discrepancy is assumed to indicate dishonesty, and thus trust is reduced. Herein, we propose a trust model which does not assume a concrete cognitive model for other agents, but instead uses the discrepancy between information and experience to better predict the behavior of the others. This neutrality about other agents’ cognitive models allows an agent to exploit information provided by dishonest agents, and to get utility from agents having a different model of the world. The experiments performed suggest that this model improves the performance of an agent in dynamic scenarios under certain conditions, such as those found in market-like environments.

1 Introduction and motivation

The large expansion of network access is increasing the number of decentralized applications that can be considered as open distributed systems. Moreover, these open distributed systems can be modelled as agent systems where agents are autonomous components that interact with one another. Agents may constantly change the way they interact, forming an uncertain and dynamic environment. Although there are different definitions [9], we

Group of Applied Artificial Intelligence
Carlos III University of Madrid
Av. Universidad 30, 28911, Leganés (Spain)


can state that trust is an abstract property applied to others that helps to reduce the complexity of decisions. Trust is a universal concept that plays a very important role in social organizations as a mechanism of social control. Therefore, modelling trust in open distributed systems such as agent systems becomes a critical issue, since their offline and large-scale nature weakens the social control of direct interactions.

Often, there are objective and universal criteria to evaluate the quality of interactions (the products/services provided through them). In this case, trust can be inferred from certificates issued by third parties that verify such objective criteria. Unfortunately, when a set of universal objective evaluation criteria is not available, this subjective and local trust is not easily asserted. There are several application domains where interpersonal communications are the main source of trust due to the subjective nature of the evaluation criteria: books, films, web pages, leisure activities, consulting services, technical assistance, etc.

Although there are several ways to infer trust, numerous studies have shown that in real life one of the most effective channels to avoid deception is through reputation-based mechanisms [2]. Reputation is a concrete valuation that can be used to build trust in others. Usually, the group of people with good reputation (collaborators, colleagues and friends) that cooperates with a particular person to improve the quality of decisions forms an informal social network [18]. In this context, trust and reputation are strongly linked.

So far, the majority of researchers on trust and reputation have assumed, either explicitly or implicitly, that a discrepancy between the information provided by other agents and the perceptions of an agent based on his direct experience is an indicator of dishonesty. Furthermore, although dishonesty does not necessarily entail that an agent or service provides no value at all, the usual way of handling the perceived discrepancy between information and experience is based on a reduction of the trust upon the information provider: if agent a says that the quality of a service s has a value of q, and agent a′ has experienced a quality of service q′, then q − q′ is assumed to represent a degree of dishonesty and is used as a source of distrust: the higher the discrepancy, the lower the trust. Although the former approach is a valid way of avoiding risks, we claim that under certain circumstances a radically different approach to handling such discrepancy would be advantageous. In particular, we propose a trust modeling framework that does not interpret q − q′ as a signal of dishonesty, nor use it as a source of distrust; instead, our model exploits such discrepancy to try to predict the changes in the behavior of other agents.

Our proposal for trust modeling draws upon the following epistemic assumption: the actual beliefs of other agents cannot be known, because different agents may well use different frameworks to represent and update their beliefs about other agents. That assumption, which may seem negative at first glance, is turned into an advantage by using the discrepancy between information and experience as a valuable knowledge source to better anticipate the behavior of other agents. This assumption is quite likely to be very relevant when considering open agent societies made up of heterogeneous agents that are likely to have different cognitive systems, and thus different


views of the world. In other words, in open environments an agent has well-founded reasons to believe that other agents may use different frameworks to represent and update their beliefs. The same idea is applicable to other kinds of open distributed systems in addition to multi-agent systems, such as semantic web services, P2P networks and electronic marketplaces. Furthermore, even when a discrepancy does indicate dishonesty, there is no strong reason to discard other agents or services just because of that, as far as one is able to learn the pattern of discrepancy between information and experience. For example, the recommendations obtained from a provider a about a competitor a′ might try to make us believe a is a better provider than a′; if our agent is able to learn such a pattern, it will be able to estimate the real value of a′ and take advantage of that.

The paper is organized as follows: section §2 presents an overview of the state of the art in the field, §3 describes the different components of our trust modeling framework and presents a synthetic definition of trust as an aggregation of its primitive components, §4 describes some experiments to test our model using the ART testbed, and finally, §5 sums up the contributions of our work.

2 State of the Art

Several trust models have been proposed; two of the most cited reputationmodels are SPORAS and HISTOS [20]. SPORAS is inspired in the founda-tions of the chess players evaluation system called ELOS. The key idea ofthis model is that trusted agents with very high reputation experience muchsmaller changes in reputation than agents with low reputation. SPORAScomputes the reliability of agents’ reputation using the standard deviationof such measure.

HISTOS is designed to complement SPORAS by including a way to dealwith witness information (personal recommendations). HISTOS includes wit-ness information as a source of reputation through a recursive computationof weighted means of ratings. It computes reputation of agent i for agent jfrom the knowledge of all the chain of reputation beliefs corresponding toeach possible path connecting i and j. In addition, HISTOS plans to limitthe length of paths that are taken into account. To make a fair comparisonwith other proposals, that limit should be valued as 1, since most of the otherviews consider that agents communicate only their own beliefs, but not thebeliefs of other sources that contributed to their own beliefs of reputation.Based on these principles, the reputation value of a given agent at iterationi, Ri, is obtained in SPORAS recursively from the previous one Ri−1 andfrom the subjective evaluation of the direct experience DEi:

Ri = Ri−1 +1θ· Φ(Ri−1) · (DEi −Ri−1)

Let θ be the effective number of ratings taken into account in an evaluation (θ > 1). The bigger the number of considered ratings, the smaller the change in reputation. Furthermore, Φ stands for a damping function that slows down the changes for very reputable users:

\Phi(R_{i-1}) = 1 - \frac{1}{1 + e^{-(R_{i-1} - D)/\sigma}}

where D is the maximum possible reputation value and σ is chosen in such a way that the resulting Φ remains above 0.9 when reputation values are below 3/4 of D.

Another well-known reputation model is due to Singh and Yu. This trust

model [19] uses Dempster-Shafer theory of evidence to aggregate recommendations from different witnesses. The main characteristic of this model is the relative importance of failures over successes. It assumes that deceptions cause stronger impressions than satisfactions. It then applies different gradients to the curves of gaining/losing reputation, so that reputation is lost easily while it is hard to acquire. The authors of this trust model define different equations to calculate reputation according to the sign (positive/negative) of the received direct experience (satisfaction/deception) and the sign of the previous reputation corresponding to the given agent.

Instead of Dempster-Shafer theory, Sen’s reputation model [17] uses learning to cope with recommendations from different witnesses. Unfortunately, learning requires a high number of interactions and a relatively high number of witnesses to avoid colluding agents benefiting from reciprocative agents.

Another remarkable reference in the field is REGRET [16]. The REGRET model takes into account three ways of computing indirect reputation depending on the information source: system, neighborhood and witness reputations. Note that witness reputation is the one that corresponds to the concept of reputation that we are considering. REGRET includes a measure of the social credibility of the agent and a measure of the credibility of the information in the computation of witness reputation. The former is computed from the social relations shared between agents. It is computed in a similar way to neighborhood reputation, using third-party references about the recommender directly in the computation of how its recommendations are taken into account. In addition, the latter measure of credibility (information credibility) is computed from the difference between the recommendation and what the agent experienced by itself. The similarity is computed by matching this difference with a triangular fuzzy set centered on 0 (the value 0 stands for no difference at all). The information credibility is considered relevant and is taken into account in the experiments of the present comparison. Both decisions are also, to a certain degree, supported by the authors of REGRET, who also assume that the accuracy of previous pieces of information (witness) is much more reliable than the credibility based on social relations (neighborhood), and they reduce the use of neighborhood reputation to those situations where there is not enough information on witness reputation. The complete mathematical expression of both measures can be found in [15]. But the key idea of REGRET is that it also considers the role that social relationships may play. It provides a degree of reliability for the reputation values, and it adapts them through the inclusion of a time-dependent


function in the computations. The time-dependent function ρ gives higher relevance to direct experiences produced at times closer to the current time. The reputation held by any party at iteration i is computed as a weighted mean of the corresponding last θ direct experiences. The general equation is of the form:

R_i = \sum_{j=i-\theta}^{i} \rho(i, j) \cdot W_j

where ρ(i, j) is a normalized value calculated from the following weight function:

\rho(i, j) = \frac{f(j, i)}{\sum_{k=i-\theta}^{i} f(k, i)}

where i ≥ j, and both represent the time or number of iterations of a direct experience. For instance, a simple example of a time-dependent function f is:

f(j, i) = \frac{j}{i}

REGRET also computes reliability with the standard deviation of reputation values, computed from:

STD\text{-}DVT_i = 1 - \sum_{j=i-\theta}^{i} \rho(i, j) \cdot |W_j - R_i|

REGRET, however, defines reliability as a convex combination of this deviation with a measure, 0 < NI < 1, of whether the number of impressions obtained is enough or not. REGRET establishes an intimacy level of interactions, itm, to represent a minimum threshold of experiences needed to obtain close relationships. More interactions will not increase reliability. The following function models the level of intimacy with a given agent:

NI = \begin{cases} \sin\left(\frac{\pi}{2 \cdot itm} \cdot i\right) & i \in [0, itm] \\ 1 & \text{otherwise} \end{cases}
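To make these REGRET equations concrete, the following Python sketch (not taken from the REGRET implementation itself) computes the time-dependent weights ρ(i, j) with f(j, i) = j/i, the weighted mean R_i, the deviation-based reliability and the sin-based intimacy level NI; the final combination of the two reliability measures is simplified here to a product, and all input values are illustrative.

```python
import math

def regret_reputation(impressions, theta):
    """Weighted mean of the last theta impressions W_j, with weights
    rho(i, j) = f(j, i) / sum_k f(k, i) and f(j, i) = j / i, so that more
    recent iterations get more relevance (equations above)."""
    i = len(impressions)                       # current iteration index
    window = impressions[-theta:]              # last theta direct experiences
    start = i - len(window) + 1                # iteration index of the oldest one
    f = [j / i for j in range(start, i + 1)]   # time-dependent weights f(j, i)
    rho = [w / sum(f) for w in f]              # normalized weights rho(i, j)
    r_i = sum(r * w for r, w in zip(rho, window))
    deviation = sum(r * abs(w - r_i) for r, w in zip(rho, window))
    return r_i, 1.0 - deviation                # reputation and deviation-based reliability

def intimacy_level(num_interactions, itm):
    """NI grows as sin(pi / (2 * itm) * i) until the threshold itm is reached."""
    if num_interactions >= itm:
        return 1.0
    return math.sin(math.pi / (2 * itm) * num_interactions)

# Example: five direct experiences, a window of theta = 3, intimacy threshold 10.
rep, predictability = regret_reputation([0.6, 0.7, 0.65, 0.8, 0.75], theta=3)
reliability = intimacy_level(5, itm=10) * predictability   # simplified combination
```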

Another trust model that is especially relevant to our work is due to Abdul-Rahman [1]. This model does not deal with direct experiences; instead, it draws on reputation as the only source of information to build trust. While other models combine witness information with direct information to infer trust on a target agent, this model uses direct experience to evaluate and correct the information provided by witnesses about the target agent. In this model, reputation is represented as a discrete belief with four possible values: very trustworthy, trustworthy, untrustworthy and very untrustworthy. The main contribution of this model is the use of the discrepancy between direct experience and witness information to infer trust on a third agent: prior to combining the information about a given agent provided by a number of witnesses, this information is adjusted according to previous information coming from that witness about any other agent and the experienced outcomes that support/invalidate such information. In other words, this model adjusts witness information according to the semantic closeness between the witness and oneself (how differently they “think”), giving more importance to the information provided by agents that are perceived as being akin in terms of their cognitive models or preferences.

FIRE [11] is a trust and reputation model that integrates information coming from four different sources: interaction trust, role-based trust, witness reputation and certified reputation. Interaction trust is built from the direct experience of an agent; in particular, the direct trust component of REGRET is exploited in this model. Role-based trust is based on relationships between the agents, which is mostly domain-specific. Witness reputation is built from reports of witnesses about an agent’s behavior. Certified reputation is a novel type of reputation introduced by the authors, which is built from third-party references provided by the agent itself. Certified reputation plays a similar role to what we call advertisements, since in both cases an agent i that has just joined the environment can make some assessment of the trustworthiness of another agent j, based on the certified reputation or advertisements provided by the agent j itself. The main limitation of the FIRE model in [11] is that all agents are assumed to be honest in exchanging information.

Another approach, when agents are acting in uncertain environments, is to apply adaptive filters such as Alpha-Beta, Kalman and IMM [5,6]. These filters have been recognized as a reasoning paradigm for time-variable facts within the Artificial Intelligence community [14]. Making time-dependent predictions in noisy environments is not an easy task. These filters apply a temporal statistical model to the noisy observations perceived, through a linear recursive algorithm that estimates a future state variable. In particular, when they are applied to reputation modeling, the state variable would be the reputation, while the observations would be the results of direct experiences.
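As an illustration of this filtering view (a sketch, not part of the models cited above), the snippet below treats reputation as the hidden state of a scalar Kalman-style filter; the noise parameters q and r are illustrative assumptions.

```python
def kalman_reputation(observations, q=0.01, r=0.1):
    """Scalar Kalman-style filter: the hidden state is the reputation, the
    observations are outcomes of direct experiences, q is the process noise
    (how fast reputation may drift) and r the observation noise."""
    x, p = 0.5, 1.0          # initial reputation estimate and its variance
    history = []
    for z in observations:
        p = p + q            # predict: the reputation may have drifted
        k = p / (p + r)      # Kalman gain: how much to trust the new outcome
        x = x + k * (z - x)  # update the estimate towards the observation
        p = (1 - k) * p      # updated uncertainty
        history.append(x)
    return history

# Example: a provider whose quality of service suddenly drops.
print(kalman_reputation([0.8, 0.8, 0.8, 0.3, 0.3]))
```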

From an Artificial Intelligence perspective, reputation models embedded in agents should involve a cognitive approach [13]: enriching the internal model for making cooperative and competitive decisions rather than enriching the exchanged reputation information. In contrast to socio-cognitive models, computational models involve numerical decision making, made up of utility functions, probabilities, and evaluations of past interactions. The combination of both kinds of models intends to reproduce the reasoning mechanisms behind human decision-making. In this paper we present a trust modeling framework that combines both views, since it assumes the cognitive stance, but uses a numerical approach.

Other researchers have proposed a socio-cognitive view of trust [12,8,3]. Schillo’s model [12] distinguishes between two types of motivations for trust: honesty and altruism. A more enriched model is that of Castelfranchi and Falcone [8], who claim that some other beliefs, in addition to reputation, are essential to compute the amount of trust in a particular agent: its competence (ability to act as we wish), willingness (intention to cooperate), persistence (consistency along time) and motivation (our contribution to its goals). For the authors, these beliefs should be taken into consideration in determining how much trust is placed on this agent. Brainov and Sandholm [3] highlight the relevance of modeling the opponent’s trust because, if this outside trust were not taken into account, it would lead to an inefficient trade between the agents involved. Thus both agents would be interested in showing the trustworthiness of the counterpart in order to allocate resources efficiently.

Another example of a socio-cognitive approach, from Carbo et al. [7], called a fuzzy reputation agent system (AFRAS), supports the fuzzy nature of the reputation concept itself. It uses fuzzy logic to represent reputation, since this concept is built up from vague evaluations (they depend on personal and subjective criteria), uncertain recommendations (malicious agents, different points of view), and incomplete information (untraceability of every agent in open systems). Furthermore, the reliability of a fuzzy reputation is implicit in the shape of the corresponding fuzzy set. Additionally, it also includes other beliefs that intend to represent an emotive characterization of agents, including shyness, egoism, and susceptibility. It also includes a global belief and a global adaptation value of agent interaction, referred to as remembrance. This attribute determines the relevance given to the last direct interaction when updating trustworthiness. It represents the general confidence of the agent in its own beliefs. The more success is achieved in predicting the behavior of a particular agent, the more relevance is given to the already asserted beliefs over future experiences with any agent (not only that particular agent).

3 Trust Model

A general definition of trust embraces many dimensions, for instance: to have belief or confidence in the honesty, goodness, skill or safety of a person, organization or thing (Cambridge Advanced Learner’s Dictionary). Herein we adopt an approach centered on the skill dimension: trust as confidence in the competence, utility or satisfaction expected from another agent. More specifically, we approach trust as a belief that estimates the quality of service expected from a particular agent, based on both direct experience using that service, and information obtained from other agents. We have chosen the term quality of service to sum up our view of trust because it encompasses the notions of utility and interaction that are characteristic of many open distributed systems.

Typically, a trust model considers two main sources of information to build trust: direct experience or direct interaction, and recommendations, often called witness information or “word of mouth”. In our model we keep this distinction between direct experience and recommendations, but in addition, we distinguish between the recommendations about third-party agents and the recommendations provided by an agent about itself, which we call advertisements. All in all, we distinguish three main components of trust, namely: Direct Trust (DT), Advertisements-based Trust (AT), and Recommendations-based Trust (RT).

– Direct Trust: also named interaction trust, refers to the quality assessment of a service based on direct experience. Direct experience is the most reliable source of information, and the most valuable resource to estimate the reliability of other sources of information. Consequently, this kind of trust is usually either the most expensive or the most scarce (or both).

– Advertisements-based Trust: refers to the assessment of a service based on information obtained directly from the provider of the service. This information is less reliable than direct experience, especially when a provider is motivated to lie about the quality of service it is able (and intends) to provide. However, due to the self-interest of the advertisers, this information tends to be the cheapest and the most affordable.

– Recommendations-based Trust: refers to the assessment of a service based on information provided by third-party agents. This information may be considered more or less reliable depending on the relationships among the agents involved and the specific application domain.

To simplify the dynamics of a multi-agent system, we use a discrete time model made up of time steps. A time step represents the minimal time period an agent requires to take decisions, act, and perceive the result of its actions. We use t to denote a particular time step in the past, T for the current time step, T + 1 for the next time step, and ΣT for a period of time until time step T. In general we distinguish between two types of beliefs: impressions (referred to as partial beliefs in the upcoming definitions), which are based on the experience and information obtained for a single time step, and current beliefs, which are the result of a number of impressions produced over a period of time.

To handle both uncertainty and ignorance, we use two dimensions to represent the confidence on a belief, namely: intimacy and predictability.

– Intimacy is a measure of confidence based on the number of data or interactions backing a belief: more interactions imply more intimacy.

– Predictability is a measure of confidence based on the dispersion or variability of the data: higher dispersion implies lower predictability.

In our model, every component of trust has attached a measure of confidence made up of both intimacy and predictability. In addition, we introduce the use of t-norms for combining intimacy and predictability into a single confidence value, and t-conorms for aggregating the confidence coming from several sources of information.
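To illustrate this combination scheme, the sketch below uses the product as the t-norm and the probabilistic sum as the t-conorm; this is only one possible choice of operators, not the pair prescribed by the model, and the numeric values are illustrative.

```python
def t_norm(a, b):
    """Product t-norm: combines intimacy and predictability into one confidence."""
    return a * b

def t_conorm(values):
    """Probabilistic-sum t-conorm: aggregates confidences from several sources."""
    result = 0.0
    for v in values:
        result = result + v - result * v
    return result

# Confidence on a single belief: high intimacy, moderate predictability.
confidence = t_norm(0.9, 0.6)            # -> 0.54
# Overall confidence aggregated from three sources of information.
overall = t_conorm([0.54, 0.3, 0.2])     # -> about 0.74
```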

3.1 Direct Trust

Direct Trust is an empirical (based on observations) though subjective (observations are mediated by the cognitive system) assessment of the quality of a service provided by an agent. Direct Trust is what we call an historic belief, for it is based on past experiences only. In specific contexts the quality of service may be interpreted as the competence in performing some action, the utility obtained from an economic transaction, or some kind of subjective degree of satisfaction obtained by directly interacting with another agent.


Definition 1 Direct Trust (DT^{ΣT}): assesses the QoS provided by one agent until time step T inclusive, based on direct experience.

DT^{\Sigma T} = \frac{\sum_{t=0}^{T} \varphi(t, T) \cdot pDT^t}{\sum_{t=0}^{T} \varphi(t, T)}

where pDT^t : \mathbb{R} \rightarrow [0, 1] is the partial DT obtained for the agent in time step t, and \varphi(t, T) : \mathbb{N} \rightarrow [0, 1] is a forgetting function used to weight each partial belief according to its age (the number of time steps since the belief was obtained, T − t).

We do not provide a specific definition of pDT because it may depend on the specific application domain. Instead, we abstract the domain by considering that pDT is a value representing the minimum component of DT, that is, the impression an agent obtains from its experience during a single time step.

A forgetting function is a function used to weight each partial belief according to the age of the experience or information that induced that belief. A concrete example of a forgetting function follows:

\varphi(t, T) = \begin{cases} 0 & T - t \geq \phi \\ \cos\left(\frac{\pi}{2\phi} \cdot (T - t)\right) & T - t < \phi \end{cases}

where φ is a parameter that establishes the maximum age for an experience to be relevant (to influence current beliefs).
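The following is a minimal sketch of Definition 1 together with the cosine forgetting function above; the history of partial DT values and the value of φ are illustrative assumptions.

```python
import math

def forgetting(t, T, phi=10):
    """Cosine forgetting function: weight 0 for experiences older than phi steps."""
    age = T - t
    if age >= phi:
        return 0.0
    return math.cos(math.pi / (2 * phi) * age)

def direct_trust(pdt_history, phi=10):
    """Weighted mean of the partial DT values, weighted by their age (Definition 1)."""
    T = len(pdt_history) - 1
    weights = [forgetting(t, T, phi) for t in range(T + 1)]
    if sum(weights) == 0:
        return None  # no recent enough experience
    return sum(w * p for w, p in zip(weights, pdt_history)) / sum(weights)

# Example: quality observed over six time steps; recent steps dominate.
print(direct_trust([0.4, 0.5, 0.5, 0.6, 0.7, 0.7], phi=4))
```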

Definition 2 Direct Trust Confidence (DTC^{ΣT}): assesses the reliance on DT as an estimator of the QoS provided by one agent.

DTC^{\Sigma T} = ITM_{DT} \otimes (1 - \upsilon_{DT}(pDT^t))

where ITM_{DT} \in [0, 1] is the intimacy for DT [16], a growing function in [0, 1] over the number of interactions used to compute DT, \upsilon_{DT} \in [0, 1] is a measure of the variability of pDT^t, and ⊗ is a T-norm operator.

Note that DTC^{ΣT} ∈ [0, 1] because both ITM and υ_{DT} ∈ [0, 1] by definition.

The intimacy level represents the degree to which an agent “knows” (has intimated with) another agent, concerning one specific service, and is a function of the number of interactions or data used to compute a belief. Different functions can be used to calculate the intimacy level; for example, REGRET [16] uses the following function:

ITM^{\Sigma T} = \begin{cases} 1 & N \geq NU_{DT} \\ \frac{N}{NU_{DT}} & N < NU_{DT} \end{cases} \qquad (1)

The value for NU_{DT} can be decided arbitrarily, by empirical test, or theoretically, but the final decision on which NU to use should take into account the specific context. The same can be said of υ_{DT}; the selection of a specific measure of variability should take into account the interpretation and usage of confidence measures in their specific application context. For example, the sample standard deviation of the data would be an appropriate measure of variability when data are abundant and follow a normal distribution, but other estimators, such as the range, may be more appropriate when data are very scarce or do not follow a normal distribution.
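A minimal sketch of Definition 2 and Equation 1 follows: intimacy grows linearly up to NU interactions, variability is estimated with the population standard deviation, and the product is used as the T-norm; all three choices are illustrative, not prescribed by the model.

```python
import statistics

def intimacy(n, nu=10):
    """REGRET-style intimacy (Equation 1): reaches 1 after NU interactions."""
    return 1.0 if n >= nu else n / nu

def dt_confidence(pdt_history, nu=10):
    """DTC = ITM (x) (1 - variability), with the standard deviation of the
    partial DT values as the variability measure."""
    itm = intimacy(len(pdt_history), nu)
    variability = statistics.pstdev(pdt_history) if len(pdt_history) > 1 else 1.0
    return itm * (1.0 - min(variability, 1.0))

print(dt_confidence([0.4, 0.5, 0.5, 0.6, 0.7, 0.7]))
```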

3.2 Trust based on Advertisements

Trust based on Advertisements, or Advertisements-based Trust (AT), refers to the component of trust based on the information about a service obtained from the provider of the service itself. Depending on the domain, such kind of information may adopt the form of a formal or informal recommendation, a certificate, or some kind of subjective, uncertain or imprecise belief.

Definition 3 Advertisements-based Trust (AT^{T+1}): assesses the QoS expected from one agent in the next time step (T + 1), based on the advertisements from the agent.

AT^{T+1} = \begin{cases} 1 & pAT^{T+1} + \Delta AT^{\Sigma T} \geq 1 \\ 0 & pAT^{T+1} + \Delta AT^{\Sigma T} \leq 0 \\ pAT^{T+1} + \Delta AT^{\Sigma T} & \text{otherwise} \end{cases}

where pAT^{T+1} : \mathbb{R} \rightarrow [0, 1] is the most recent advertisement from the agent, and \Delta AT^{\Sigma T} (AT-Discrepancy) is the discrepancy between advertisements and experiences obtained in the past (until time step T inclusive).

We do not provide a more specific definition of pAT because it may depend on the specific application domain. Instead, we abstract the domain by considering pAT as the minimum component of AT.

The point of this approach to using advertisements is to learn the pattern of discrepancy between past advertisements and experiences, and then to use that pattern to anticipate the future behavior of an agent based on the most recent advertisements. The pattern of discrepancy between past experiences and advertisements is what we call Advertisements-based Trust Discrepancy, or just AT-Discrepancy, and is defined below.

Definition 4 AT-Discrepancy (ΔAT^{ΣT}): measures the discrepancy between the past advertisements made by one agent and the experiences actually obtained.

\Delta AT^{\Sigma T} = \frac{\sum_{t=0}^{T} \varphi(t, T) \cdot (pDT^t - pAT^t)}{\sum_{t=0}^{T} \varphi(t, T)}

where pAT^t : \mathbb{R} \rightarrow [0, 1] is the partial AT for the agent and time step t, and \varphi(t, T) is a time forgetting function.

Note that ΔAT^{ΣT} ∈ [−1, 1], since pDT^t, pAT^t, φ(t, T) ∈ [0, 1] by definition. Positive values of ΔAT^{ΣT} mean that the experiences obtained from the agent were better than advertised, negative values have the opposite meaning, and zero means that the experiences matched the advertisements perfectly.
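A minimal sketch of Definitions 3 and 4 follows, reusing the forgetting function from the Direct Trust sketch above: the discrepancy between past advertisements and experiences is learned as a weighted mean and then added to the most recent advertisement, with the result clipped to [0, 1]; the example histories are illustrative.

```python
def at_discrepancy(pdt_history, pat_history, phi=10):
    """Delta AT: forgetting-weighted mean of (experience - advertisement) per step."""
    T = len(pdt_history) - 1
    weights = [forgetting(t, T, phi) for t in range(T + 1)]
    num = sum(w * (d - a) for w, d, a in zip(weights, pdt_history, pat_history))
    return num / sum(weights)

def advert_trust(latest_advert, pdt_history, pat_history, phi=10):
    """AT for T+1: the latest advertisement corrected by the learned discrepancy."""
    corrected = latest_advert + at_discrepancy(pdt_history, pat_history, phi)
    return min(1.0, max(0.0, corrected))

# An agent that systematically advertises 0.2 more quality than it delivers:
pdt = [0.5, 0.5, 0.6, 0.55]
pat = [0.7, 0.7, 0.8, 0.75]
print(advert_trust(0.9, pdt, pat))   # roughly 0.9 - 0.2 = 0.7
```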

Since we have defined the Advertisements-based Trust as a function of uncertain beliefs, it is itself considered an uncertain belief, whose confidence depends on the confidence of its constituent beliefs. Specifically, the confidence on the Advertisements-based Trust is defined over the AT-Discrepancy.

Definition 5 AT Confidence (ATC^{T+1}): assesses the degree of reliance on the AT as an estimation of the QoS to be obtained from one agent in the next time step.

ATC^{T+1} = ITM_{AT} \otimes (1 - \upsilon_{AT}(\Delta AT^t))

where ITM_{AT} is the intimacy for AT, \Delta AT^t = pDT^t - pAT^t is the partial discrepancy observed between AT and DT in time step t, \upsilon_{AT} \in [0, 1] is a measure of the variability of \Delta AT^t, and ⊗ is a T-norm operator.

3.3 Trust based on Recommendations

Trust based on recommendations, or Recommendations-based Trust (RT), refers to the component of trust based on the information obtained from a group of agents about the quality of a service provided by another agent. Depending on the domain, such kind of information may adopt the form of a formal or informal recommendation, a certificate, or some kind of subjective, possibly uncertain belief.

We have defined AT using a measure of the discrepancy between information and experience, and we will define RT likewise. However, RT must handle the fact that there are potentially many providers of information (recommenders) about the same agent. As a result, we have to distinguish between the component of RT due to the recommendations provided by a single agent, and the component due to the recommendations provided by several agents, which we call combined RT.

Definition 6 Recommendations-based Trust (RT_k^{T+1}): assesses the QoS expected from one agent in the next time step (T + 1), based on the recommendations from another agent a_k.

RT_k^{T+1} = \begin{cases} 1 & pRT_k^{T+1} + \Delta RT_k^{\Sigma T} \geq 1 \\ 0 & pRT_k^{T+1} + \Delta RT_k^{\Sigma T} \leq 0 \\ pRT_k^{T+1} + \Delta RT_k^{\Sigma T} & \text{otherwise} \end{cases}

where pRT_k^t : \mathbb{R} \rightarrow [0, 1] is the partial Recommendations-based Trust obtained from agent a_k, and \Delta RT_k^{\Sigma T} (RT-Discrepancy) is the discrepancy between past recommendations and experiences about the agent.


The point here is again to learn the pattern of discrepancy between information and experience, in this case between recommendations and experience. The pattern of discrepancy between past experiences and recommendations is what we call Recommendations-based Trust Discrepancy, or just RT-Discrepancy, and is defined below.

Definition 7 RT-Discrepancy (ΔRT_k^{ΣT}): measures the discrepancy between the past recommendations by agent a_k about one agent and a concrete service, and the experiences obtained using that service, until time step T inclusive.

\Delta RT_k^{\Sigma T} = \frac{\sum_{t=0}^{T} \varphi(t, T) \cdot (pDT^t - pRT_k^t)}{\sum_{t=0}^{T} \varphi(t, T)}

where pRT_k^t : \mathbb{R} \rightarrow [0, 1] is the partial Recommendations-based Trust for the agent and time step t, and \varphi(t, T) is a time forgetting function.

Note that ΔRT_k^{ΣT} ∈ [−1, 1], since pDT^t, pRT_k^t, φ(t, T) ∈ [0, 1] by definition. Positive values of ΔRT_k^{ΣT} mean that the experiences obtained from the agent and the service were better than recommended, negative values have the opposite meaning, and zero means that the experiences matched the recommendations perfectly.

Definition 8 Combined Recommendations-based Trust (cRT^{T+1}): assesses the QoS expected from one agent in the next time step, based on both historic information and the most recent recommendations about that service.

cRT^{T+1} = \frac{\sum_{k=1}^{N_k} RT_k^{T+1} \times RTC_k^{T+1}}{\sum_{k=1}^{N_k} RTC_k^{T+1}}

where RT_k^{T+1} is the Recommendations-based Trust about the agent based on agent a_k’s recommendations, and RTC_k^{T+1} is the confidence on that belief as an estimation of the QoS to be obtained from the agent in T + 1.

cRT aggregates the recommendations obtained from several agents. Similarly, the confidence on cRT is defined as an aggregation of the confidences on every recommendation.

Definition 9 RT Confidence (RTC_k^{T+1}): assesses the degree of reliance on the Recommendations-based Trust (RT_k^{T+1}) obtained from agent a_k, as an estimation of the QoS to be obtained from another agent in the next time step.

RTC_k^{T+1} = ITM_{RT} \otimes (1 - \upsilon_{RT}(\Delta RT_k^t))

where ITM_{RT} is the intimacy for RT, \Delta RT_k^t = pDT^t - pRT_k^t is the partial discrepancy observed between DT and RT in time step t, \upsilon_{RT} \in [0, 1] is a measure of the variability of \Delta RT_k^t, and ⊗ is a T-norm operator.


Definition 10 Combined RT Confidence (cRTC^{T+1}): assesses the degree of reliance on the Combined Recommendations-based Trust as an estimation of the QoS to be obtained from one agent in the next time step.

cRTC^{T+1} = \bigoplus_k \left( RTC_k^{T+1} \right)

where \bigoplus_k denotes the aggregation of the confidence associated to each recommender (RTC_k^{T+1}) using a T-conorm operator.
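A minimal sketch of Definitions 6, 8 and 10 follows, reusing the T-conorm sketched earlier: each recommendation is corrected by its learned discrepancy, and the results are combined as a confidence-weighted mean. The per-recommender confidences are given as illustrative inputs rather than computed from Definition 9.

```python
def recommendation_trust(latest_rec, discrepancy):
    """RT_k for T+1: the latest recommendation from agent k plus its learned discrepancy."""
    return min(1.0, max(0.0, latest_rec + discrepancy))

def combined_rt(recommendations):
    """cRT (Definition 8) and cRTC (Definition 10). Each entry is a tuple
    (latest_recommendation, learned_discrepancy, confidence RTC_k)."""
    rts = [(recommendation_trust(r, d), c) for r, d, c in recommendations]
    total_c = sum(c for _, c in rts)
    crt = sum(rt * c for rt, c in rts) / total_c if total_c > 0 else None
    crtc = t_conorm([c for _, c in rts])   # aggregation with the T-conorm sketch
    return crt, crtc

# Two recommenders: one slightly pessimistic and reliable, one optimistic and noisy.
print(combined_rt([(0.6, +0.1, 0.8), (0.9, -0.3, 0.3)]))
```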

3.4 Global Trust

So far we have defined the components of trust according to the source of information. Each single component alone has some advantages and drawbacks, which encourages their combination into a single overall measure that trades off the pros and cons of every component. On the one hand, DT is a belief that predicts the future QoS to be obtained from an agent based only on past experiences; therefore, this is the component with the highest stability but also the slowest reaction to changes in the environment, since DT would not detect a change until it is actually experienced. On the other hand, AT and RT are much more anticipatory beliefs, in that they provide a way to predict changes in the environment before actually experiencing them through direct interaction.

Below follows the definition of a global measure of trust that integrates the three components into a single belief. This belief, herein called Global Trust, assesses the Quality of Service to be obtained from an agent in the immediate future, using all the available sources of information, and both the historic (already tested) and the most recent (untested) information about a service. More formally, Global Trust (GT) is defined as a weighted mean over DT, AT and RT, using their confidence values as the weights. To better understand the essential difference between DT and the other two components, consider the example below.

Suppose that there are two agents providing the same service, named a1 and a2. Let us assume that the QoS provided by a1 is better than the QoS provided by a2; for instance, the average QoS for a1 is 0.7 and for a2 it is 0.6. Now, suppose that a client agent c has interacted with a1 and a2 in the past and has concluded that a1 is the best provider. Now, imagine that the QoS provided by a2 has shifted from 0.6 to 0.8. Probably, a2 will make new advertisements reflecting that improvement. If c uses only past observations to assess a2’s QoS, it would not react immediately to the new adverts, because it has not interacted with a2 for some time and thus it would have no evidence to update its beliefs about a2. However, if c uses the relation (pattern of discrepancy) between a2’s adverts and the QoS obtained from a2 in the past to estimate the new QoS, then c would update its beliefs about a2 and, as a result, it will realize that a2 is now a better provider than a1, without having interacted with a2 directly.


Summing up, DT constitutes an anchor point to reduce risks and avoid extremely volatile behavior, while AT and RT allow an agent to adapt more quickly to changes in the environment before experiencing them through direct interaction. Although AT and RT may apparently increase the risk of a decision because of the inherently lower reliability of information compared to direct experience, the risks are minimized by using the confidence values to weight each belief, as is apparent in the following definitions:

Definition 11 Global Trust (GT^{T+1}): assesses the QoS expected from one agent during the next time step, using all the sources of information.

GT^{T+1} = \frac{DT^{\Sigma T} \times DTC^{\Sigma T} + AT^{T+1} \times ATC^{T+1} + cRT^{T+1} \times cRTC^{T+1}}{DTC^{\Sigma T} + ATC^{T+1} + cRTC^{T+1}}

where DT^{ΣT} is the DT for the agent; AT^{T+1} is the anticipatory AT; cRT^{T+1} is the Combined Recommendations-based Trust; and DTC^{ΣT}, ATC^{T+1}, cRTC^{T+1} are the confidences associated to DT, AT and cRT respectively.

Global Trust is defined as an aggregation of uncertain beliefs with different degrees of confidence; therefore, the confidence on Global Trust is defined as an aggregation of the confidence on every constituent belief, as follows.

Definition 12 Global Trust Confidence (GTC^{T+1}): assesses the reliance on the Global Trust GT as an estimation of the QoS to be obtained in the next time step.

GTC^{T+1} = DTC^{\Sigma T} \oplus ATC^{T+1} \oplus cRTC^{T+1}

where ⊕ is a T-conorm operator.

Note that Global Trust and Global Trust Confidence can be used either independently or combined into a single value (e.g., GT × GTC), depending on the specific application domain.
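A minimal sketch of Definitions 11 and 12 follows, reusing the T-conorm sketched earlier; all input values are illustrative.

```python
def global_trust(dt, dtc, at, atc, crt, crtc):
    """GT: confidence-weighted mean of DT, AT and cRT (Definition 11), together
    with the aggregated confidence GTC (Definition 12)."""
    weights = dtc + atc + crtc
    gt = (dt * dtc + at * atc + crt * crtc) / weights if weights > 0 else None
    gtc = t_conorm([dtc, atc, crtc])
    return gt, gtc

gt, gtc = global_trust(dt=0.6, dtc=0.5, at=0.75, atc=0.4, crt=0.7, crtc=0.3)
score = gt * gtc   # single value usable, for instance, as a weight in the testbed below
```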

4 Experiments

In this section we explain the tool used and the experiments performed to test our model.

4.1 The ART testbed

We have chosen the ART testbed [10] to test our trust model. This testbed is a simulator of the art appraisals domain whose goal is twofold:

– to serve as a competition forum in which researchers can compare theirtechnologies against objective metrics,

– and as an experimental tool, with flexible parameters, allowing researchersto perform customizable, easily-repeatable experiments.


The ART testbed (ARTT) operates as a simulation game in which painting appraisers compete to obtain the largest portion of the market by maximizing the accuracy of their appraisals. Clients (generated by the simulation engine) request appraisals for paintings from different artistic eras (classical, impressionism, post-modern, etc.). The agents (from now on we use appraiser and agent interchangeably) have different degrees of expertise on pictures belonging to different eras, so they are able to produce more accurate appraisals in some eras than in others. The client share of each appraiser is progressively modified by the simulation, according to the quality of each agent’s appraisals compared with other appraisers: more accurate appraisers tend to increase their client shares to the detriment of other agents. In the end, the winning agent is the appraiser with the highest bank account balance. In this context, an agent has several ways to improve the accuracy of its appraisals:

– First of all, appraisers may choose to invest more money to generate their own opinions. However, the accuracy of an agent’s opinions is strongly limited by its expertise level.

– Secondly, an agent can purchase opinions from other agents. The more accurate the opinions purchased, the more accurate the appraisals; therefore, an agent is strongly motivated to make good estimations of the accuracy of the opinions provided by other agents.

– And finally, appraisers can also purchase reputation information about other agents, which helps appraisers know from which other appraisers to request opinions.

An important feature of the ARTT is the autonomy of agents: their capacity to deny cooperation or to act maliciously. In addition to investing more or less money to generate a better or worse opinion, there are different ways in which agents can try to gain an advantage over other agents, for instance, lying about their level of expertise in a given era, or lying about the reputation of other agents. Therefore, agents must be cautious in deciding when, and from whom, to request opinions and reputation information. Furthermore, accuracy improves if an agent is able to provide the simulation with good estimations of the accuracy of other agents’ opinions. More specifically, the appraisals of an agent are the weighted mean of all the opinions purchased (including its own opinion); the weights are provided by the appraiser agent for each other agent, and are supposed to represent the degree to which an agent trusts other agents.

A simulation in the ARTT comprises a number of time steps. In each time step, the following sequence of actions occurs:

1. At the beginning of each time step, the simulation engine assigns clients, together with the paintings to be appraised, to each appraiser. In the first time step of the simulation, the same number of clients is assigned to each appraiser. In the subsequent time steps, the distribution of clients depends on the appraiser’s accuracy in the previous time step, i.e., the more accurate an appraiser is in a time step (compared with other agents), the more clients he gets in the next time step.


2. Once the clients have been assigned to each appraiser, the appraisers may exchange reputation information following the reputation transaction protocol shown in Figure 1(a). An appraiser i requests reputation information from an appraiser j. The appraiser j may accept the request or not. If it accepts the request, the appraiser i sends the payment and the appraiser j sends the reputation, which need not be truthful. Otherwise, if the appraiser j rejects the request, the reputation transaction is aborted.

3. Figure 1(b) shows the sequence of steps in an opinion transaction. When an appraiser i requests an opinion from an appraiser j, the potential provider may reject the request (and thus the transaction is aborted), or send a (possibly untruthful) certainty assessment about the opinion it claims it will send. In the latter case, the appraiser i may either decline the potential opinion (and abort the transaction) or send payment. Upon receipt of payment, the appraiser j sends the opinion, which need not be truthful.

4. At the end of each time step, the simulation engine calculates the appraiser’s final appraisal based on the opinions the appraiser purchased and the weights the appraiser places on those opinions. Weights are real values between zero and one that an appraiser assigns, based on its trust model, to another’s opinion. The simulation engine, not the appraiser itself, performs the appraisal calculation to ensure the appraiser does not employ strategies for selecting opinions to use after receiving all purchased opinions.

There are several reasons to use ARTT to test our framework.

– First of all, in this domain, as in other market-like domains in which information providers (recommenders and advertisers in our terminology) are themselves competitors, information providers are motivated to deceive other agents about their true competence; and this motivation operates in both directions: (a) by making others believe their competitors are worse providers than themselves, and (b) by making others believe they are better than they really are. In other words, moderately regular patterns of deception are expected, making this domain a good candidate to evaluate the utility of our model.

– Secondly, due to its economic component, this domain is a good example to test the suitability of our model to adapt to smooth changes in the quality of a service, such as those derived from the law of demand, which is typical of open markets.

– And thirdly, this domain uses continuous variables to specify both the quality of service (the accuracy of the opinions in a given era) and the information about the quality of a service. That information can be obtained as advertisements, named certainties in ARTT, and recommendations, named reputations.

4.2 Experimental setup

We describe first the mapping of our trust model to ARTT, and then we describe the purpose and design of the experiments.


Fig. 1 Interaction protocols for opinion and reputation transactions

In order to understand the mapping between the testbed and our model, we should understand the way appraisals of the value of a painting are built. Each agent is assigned by the simulator a random expertise level s*, using a uniform distribution over {0.1, 0.2, ..., 1}. The opinions of an agent on the true market value of a painting are generated by the simulation such that opinion errors adhere to a normal distribution with mean equal to 0 and a standard deviation s defined as:

s = \left( s^* + \frac{\alpha}{C_g} \right) \cdot t_k \qquad (2)

where s* is the expertise level of an agent for a particular era, α is a parameter named Sensing-Cost-Accuracy, C_g is the cost invested by an agent to generate an opinion, and t_k is the true value of a painting, its market price. Note that C_g is a variable, since it is specified by an agent for each particular opinion, ranging from 0 to any positive value. From the definition of s it follows that the so-called expertise level s* is actually the minimum standard deviation of the opinions’ error, and thus s* = 0.1 constitutes the best case (the most expert an agent can be), since it implies a narrow distribution of errors, i.e., more accurate opinions.
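The following sketch reproduces the opinion-generation rule of Equation 2 to build intuition about the role of C_g; numpy is used for the normal noise and the value of α is an illustrative assumption.

```python
import numpy as np

_rng = np.random.default_rng(0)

def generate_opinion(true_value, expertise, cost, alpha=0.5):
    """Opinion error ~ Normal(0, s) with s = (s* + alpha / C_g) * t_k (Equation 2):
    spending more (larger C_g) shrinks the error, but never below the floor set by s*."""
    s = (expertise + alpha / cost) * true_value
    return true_value + _rng.normal(0.0, s)

# An expert appraiser (s* = 0.1) investing a lot vs. very little on a 100-unit painting.
print(generate_opinion(100.0, expertise=0.1, cost=10.0))   # small error
print(generate_opinion(100.0, expertise=0.1, cost=0.1))    # potentially huge error
```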

We map our trust model to the ARTT domain as follows (a numerical sketch of the first mapping is given after this list):

– The partial Direct Trust (pDT^T) is mapped to the perceived expertise of an agent in a given era, which refers to the estimation of the accuracy of that agent’s opinions on paintings belonging to the given era. Since, during a single time step, an agent may have several paintings to appraise per era, pDT^T is defined as a function of the average error of an agent’s opinions for that era, noted e. However, opinion errors cannot be used straightforwardly as a measure of trust. First of all, we must make errors relative to the true price of a painting so as to make them comparable. The relative error of an opinion about a painting k is defined as follows:

e_k = \frac{|o_k - t_k|}{t_k}

where o_k is the value of a painting according to an agent’s opinion, and t_k is the true value of the painting. We use the absolute value of the difference because we are measuring the magnitude of the error, not its direction. In addition, one must invert the measure of error and enclose it in [0, 1] for it to become a measure of trust: small errors imply a high pDT, while large errors imply a low pDT. We use the following function to calculate pDT for each era:

pDT^T = \begin{cases} 1 & e \leq LB_e \\ 0 & e \geq UB_e \\ \frac{UB_e - e}{UB_e - LB_e} & LB_e < e < UB_e \end{cases}

where LB and UB are the lower and upper bounds expected (or enforced) for opinion errors. Values falling outside these limits are rated either as the best value (when e < LB) or as the worst value (when e > UB). The lower bound for opinion errors is imposed by the simulation, and is 0.1, the minimum expertise level. The maximum expertise level is 1, but there is no upper bound due to the way s is calculated (see Equation 2): the component α/C_g can be extremely big because C_g (the price invested to generate an opinion) can be arbitrarily small. A reasonable upper limit for α/C_g can be established depending on the expected domain of C_g in the context of a specific combination of simulation parameters; for instance, we can assume that an agent will never spend more money generating opinions than the fee f paid by clients per appraisal. In particular, in our experiments we have been using 0.1 or 0.2 as reasonable upper limits for α/C_g, which implies UB = 1.1 or UB = 1.2 respectively.

– Advertisements are mapped to certainties. Certainties are provided by agents when their opinions are requested. These values represent the quality (the accuracy) of the opinions that agents are able and willing to provide. A certainty is a value in [0, 1], but it cannot be mapped directly to the partial Advertisements-based Trust (pAT). The point is that an agent may, and probably will, receive multiple certainties per time step and era, due to several paintings belonging to the same era. As a consequence, we have to aggregate all the certainties for the same era, using for instance the mean.

– Recommendations are mapped to reputations. There is a direct mapping between a single reputation and what we call partial Recommendations-based Trust (pRT), since a reputation is a value in [0, 1] that represents the assessment of the accuracy of another agent’s opinions for a specific era.

– The three former mappings go from the application domain to our model; they are input mappings. But we also have to establish an output mapping, from our global notion of trust to the global notion of trust in the ART Testbed, which corresponds to the weights sent by each appraiser to the simulation engine at the end of each time step. In particular, we use the Global Trust × Global Trust Confidence (GT × GTC) to calculate those weights.
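The sketch announced above illustrates the first input mapping: relative opinion errors are averaged per era and rescaled into a partial Direct Trust value. The bounds LB and UB and the example values are the illustrative ones discussed in the text.

```python
def relative_error(opinion, true_value):
    """e_k = |o_k - t_k| / t_k: magnitude of the error, relative to the true price."""
    return abs(opinion - true_value) / true_value

def partial_direct_trust(errors, lb=0.1, ub=1.1):
    """Map the average relative error for an era into pDT in [0, 1]:
    errors at or below LB count as perfect, at or above UB as worthless."""
    e = sum(errors) / len(errors)
    if e <= lb:
        return 1.0
    if e >= ub:
        return 0.0
    return (ub - e) / (ub - lb)

# Three appraised paintings of the same era in one time step.
errs = [relative_error(o, t) for o, t in [(95, 100), (130, 100), (80, 100)]]
print(partial_direct_trust(errs))   # average error ~0.18 -> pDT ~0.92
```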


In order to evaluate the gains and drawbacks of our trust model in dynamic and uncertain environments, we have compared three models to handle trust and reputation: M1, M2 and M3. M1 refers to an implementation of the trust model described in this paper, while M2 and M3 are modifications of this model that handle the discrepancy between information and experience in a more classical way: as an indicator of dishonesty and/or a source of distrust.

M2 In this model, AT and RT are calculated as the weighted average of the past beliefs (pAT and pRT), similarly to the way DT is calculated in our model, that is to say, based only on past data. For instance, AT is defined as follows:

AT^{T+1} = \frac{\sum_{t=0}^{T+1} \varphi(t, T+1) \cdot pAT^t}{\sum_{t=0}^{T+1} \varphi(t, T+1)}

In addition, this model uses the discrepancy between information and experience as a source of distrust, to calculate the confidence on a trust belief. For instance, ATC is defined as follows:

ATC^{T+1} = ITM_{AT} \otimes (1 - \upsilon_{AT}(pAT^t)) \otimes \Delta AT^{\Sigma T}

M3 This model uses the same definitions of AT and RT as M2, but differs in the definitions of the confidence measures; more specifically, this model ignores the discrepancy between information and experience (ΔAT and ΔRT). For example, the confidence on AT is defined as follows:

ATC^{T+1} = ITM_{AT} \otimes (1 - \upsilon_{AT}(pAT^t))
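To make the contrast concrete, the fragment below compares the anticipatory AT of M1 with the purely historical AT of M2, reusing advert_trust, forgetting and the pdt/pat lists from the earlier sketches; it is only an illustration under those assumptions, not the experimental code.

```python
# M1 (anticipatory): the newest advertisement corrected by the learned discrepancy.
at_m1 = advert_trust(0.9, pdt, pat)

# M2 (historical): a forgetting-weighted average over the advertisements themselves,
# so a sudden improvement announced in the newest advert barely moves the estimate.
def at_m2(pat_history, phi=10):
    T = len(pat_history) - 1
    weights = [forgetting(t, T, phi) for t in range(T + 1)]
    return sum(w * a for w, a in zip(weights, pat_history)) / sum(weights)

print(at_m1)               # about 0.7: the new advert corrected by the usual exaggeration
print(at_m2(pat + [0.9]))  # about 0.77: adverts taken at face value, only smoothed over time
```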

We have compared the three models introduced above along several variables. Most of the variables studied have a strong correlation, so we have selected just four variables that are representative of the goodness of each model: number of appraisals, average error, bank balance, and stability. The best indicators of the goodness of a model, in terms of its accuracy in predicting the behavior of other agents, are the number of appraisals (NA) (the total number of paintings to appraise obtained during an entire simulation) and the average error (AE) of the appraisals. Another indicator is the bank balance (BB), the difference between revenues and expenses, which reflects efficiency, i.e., the relation between the accuracy of the appraisals and the resources expended to obtain such accuracy. Finally, the stability (ST) represents the ability of an agent to converge to and keep a low average error; more specifically, ST is defined as the number of time steps in which the average error for the last 5 time steps changes less than a given criterion (|average error increment| < 0.01).

Since in the ART Testbed agents can use their own opinions to appraise a painting, and they know themselves very well, self-opinions tend to neutralize the influence of the opinions purchased from other agents. In order to highlight the differences between the three trust models being compared, we have forced all agents to use only opinions purchased from other agents, and not their own opinions.

We have conducted three groups of experiments:


– (a) Experiments with dynamic prices. In these experiments we introduce a market-like model, with alternating periods of inflation and deflation. Actually, the price to buy an opinion is fixed, but price evolution can be simulated by changing the cost agents invest to generate the opinions being sold to other agents. Investing more money implies an improvement in the quality of opinions, and thus it is equivalent to a price reduction (a better relation between quality and price). Investing less money means a decrease in the quality of the opinions, and thus it is equivalent to a price increase.

– (b) Experiments varying the degree of deception or dishonesty. In these experiments, appraiser agents try to deceive other agents by lying about the quality of both their own opinions and the opinions of other agents. In particular, agents will try to make others believe their own opinions are better than they are, and make others believe that the opinions of third-party agents are worse than they are.

– (c) Experiments varying the prices randomly. This is a variant of the experiments with dynamic prices, in which we establish a maximum random price variation per time step (noise parameter). The result is a dynamic environment that is much more difficult for agents to adapt to than the experiments in group (a).

The same experimental situation is used as the baseline for the three groups of experiments: a static scenario where agents always invest the same amount of money to generate opinions and are completely sincere and fair. Each experiment varying a parameter is repeated twice. A single experiment involves 9 agents competing during 60 time steps, with the same proportion of agents (3) using each trust model. In each time step the simulator randomly generates 270 paintings belonging to 10 artistic eras, and distributes them among the appraiser agents according to their relative average error in the previous time step.

4.3 Experiment Results

Figure 2 summarizes the results of the experiments when evolving the prices dynamically, according to a market-like model consisting of alternating inflation/deflation periods. The bars with diagonal lines represent the average score for the agents using M1 (our model), dotted bars for the agents using M2, and bars with horizontal lines for the agents using M3. There are three experimental situations, from left to right: static prices, slow price dynamics, and fast price dynamics. Price dynamics are simulated by using a function that increments/decrements the money expended to generate the opinions being sold. In particular, each time step the expenses are increased by a fixed quantity that is a proportion of the opinion cost (Cg): 10% in the slow price dynamics scenario, and 20% in the fast price dynamics scenario. Expenses can range between 10% and 200% of the opinion cost.
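
A sketch of this expense schedule, with alternating inflation and deflation phases bounded between 10% and 200% of the opinion cost Cg, could be written as follows (the bouncing behaviour at the bounds is our assumption; rate is 0.10 for the slow scenario and 0.20 for the fast one):

def evolve_expenses(cg, steps, rate=0.10, low=0.10, high=2.00):
    """Market-like schedule: expenses move up by rate*Cg per step until the
    upper bound is reached, then move down until the lower bound, and so on."""
    expenses, direction = low * cg, +1
    schedule = []
    for _ in range(steps):
        schedule.append(expenses)
        expenses += direction * rate * cg
        if expenses >= high * cg:
            expenses, direction = high * cg, -1
        elif expenses <= low * cg:
            expenses, direction = low * cg, +1
    return schedule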

Results show that all models perform similarly in the case of a completely static environment with fixed prices and no deception. However, there are clear differences when considering dynamic prices, and these differences grow stronger the more quickly prices change.


[Figure 2: four bar charts, one per indicator (Number of Appraisals, Bank Balance, Average Error, Stability), each comparing M1, M2 and M3 under static, slow and fast price dynamics.]

Fig. 2 Experiments with dynamic prices

In particular, the agents using M1 produce the most accurate estimations of other agents (lower AE), attract the largest number of clients (higher NA), obtain the best economic results (higher BB), and remain stable for the longest time. The observed differences are statistically significant1. Although at first glance the differences between M2 and M3 seem too small to be generalized, in some cases these differences have turned out to be statistically significant; in particular, the difference in the bank balance for the third scenario (fast price dynamics) reaches statistical significance.

Figure 3 sums up the results of our experiments varying the prices randomly according to a certain amount of noise: no noise (static prices), low noise, and high noise. In this experiment, the expenses to generate opinions evolve following a random schema. Each time step the expenses are incremented or decremented (with equal probability) by a random amount ranging between 0 and a proportion of the opinion cost established by a noise parameter: 15% in the low noise scenario, and 30% in the high noise scenario.
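
A sketch of this random schema, keeping the same 10%–200% bounds as in the previous group of experiments (the bounds and the starting point are our assumptions), could be:

import random

def evolve_expenses_randomly(cg, steps, noise=0.15, low=0.10, high=2.00):
    """Each time step the expenses go up or down with equal probability by a
    random amount between 0 and noise*Cg, clipped to the allowed range."""
    expenses = cg  # illustrative starting point
    schedule = []
    for _ in range(steps):
        schedule.append(expenses)
        delta = random.uniform(0.0, noise * cg)
        expenses += delta if random.random() < 0.5 else -delta
        expenses = min(max(expenses, low * cg), high * cg)
    return schedule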

The results obtained in this experiment are completely consistent with the first group of experiments: the agents with the anticipatory model perform better than the agents using the non-anticipatory models in all the variables analyzed. The differences between M1 and the other two models are statistically significant, but the differences between M2 and M3 are not.

Figure 4 shows the results of our experiments introducing a certain degree of deception concerning both the advertisements (certainties) and the recommendations about the accuracy of agent opinions. We consider three experimental situations: no deception, low deception, and high deception.

1 For this result, as well as for the other results commented herein, we have tested the statistical significance of the difference between the means by applying Student's two-tailed t test for two samples with a significance criterion of α = 0.05.
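
For illustration, an equivalent check can be run with scipy; the sample values below are made up and only show the shape of the test:

from scipy import stats

# Hypothetical per-run bank balances for two of the models (made-up numbers).
m1_balances = [91000, 88500, 90200, 89400, 92100]
m3_balances = [84100, 85800, 83500, 86200, 84900]

t_stat, p_value = stats.ttest_ind(m1_balances, m3_balances)
significant = p_value < 0.05  # two-tailed test, alpha = 0.05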


[Figure 3: four bar charts, one per indicator (Number of Appraisals, Bank Balance, Stability, Average Error), each comparing M1, M2 and M3 under no noise, low noise and high noise.]

Fig. 3 Experiments with noise (random price changes)

In this experiment the expenses are constant, but agents modify the values provided as certainties (i.e., the advertisements) and reputations (i.e., the recommendations) so as to deceive other agents. Certainties are overvalued to make others believe that one is better than it actually is, while reputations are undervalued. In both cases, the current beliefs of the lying agent are shifted by an amount characterized by a parameter that is 0.2 in the low deception scenario and 0.8 in the high deception scenario. In any case, the values reported are adjusted if necessary to stay within the interval [0, 1].
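
Assuming the distortion is an additive shift of size d (0.2 or 0.8 in our experiments), a minimal sketch of the lying behaviour is:

def deceive(certainty, reputation, d):
    """Inflate the certainty reported about one's own opinions and deflate the
    reputation reported about third parties, clipping both to [0, 1]."""
    reported_certainty = min(certainty + d, 1.0)
    reported_reputation = max(reputation - d, 0.0)
    return reported_certainty, reported_reputation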

In this experimental scenario, all the models perform quite similarly in terms of the average error and the number of appraisals achieved (there are no statistically significant differences), but the agents using M1 achieve better economic results and remain stable for longer periods of time (the differences between M1 on the one hand, and M2 and M3 on the other, are statistically significant). The differences between M2 and M3 are not statistically significant. Anticipatory agents earn more money even when obtaining fewer paintings to appraise, as is the case in the low deception scenario. This result may seem contradictory at first glance, because appraisals are the main source of revenue, but there is an explanation: the agents using M1 purchase fewer opinions to obtain appraisals of similar quality; in other words, the M1 agents are more efficient than the M2 and M3 agents.

5 Conclusions

In this paper we have introduced a computational model to handle trust indynamic and uncertain domains such as electronic market places and dis-tributed information systems. This model extends Rahman’s [1] notion ofsemantic closeness between experience and information to deal with con-tinuous domains: first of all, we use continuous variables instead of discretevariables; second, our model combines the reputation information and the


[Figure 4: four bar charts, one per indicator (Number of Appraisals, Bank Balance, Stability, Average Error), each comparing M1, M2 and M3 under no deception, low deception and high deception.]

Fig. 4 Experiments with deception

trust based on direct experiences (Rahman's model uses only reputation); and third, we distinguish between two types of information: advertisements and recommendations.

Several frameworks to handle trust in agent societies rely upon the notion of honesty when considering the discrepancy between information and experience. Usually, the discrepancy observed between direct experience and information concerning that experience (witness information) is interpreted as a consequence of the information provider's intentional behavior, and it is used to estimate the credibility of that provider (more discrepancy implying less credibility); in other words, the discrepancy between information and experience is interpreted in terms of the honesty of the information provider, and it is used as a source of distrust.

Sabater [15] argues that although Rahman's approach is useful in some situations, it has limitations because it is unable to differentiate between lying agents and agents that simply have a different view of the world. However, there are good reasons to adjust beliefs using the discrepancy between experience and information: on the one hand, in many domains, and especially in real applications, it is actually impossible to know whether an agent is lying or just thinking differently; on the other hand, it is often more important to estimate the utility expected from an agent than to figure out whether that agent is lying or not.

In our model, the observed discrepancy between information and experience is used to adapt more quickly to changes in the environment by anticipating changes in the world before experiencing them. This approach does not identify discrepancy with bad behavior as such; instead, our model uses that information to improve the utility of the most recent information. Note that in order to benefit from this approach, discrepancies between information and experience must be relatively consistent over time, that is to say, they must follow a regular pattern.


If there is no such pattern, the confidence would be very low, and thus the model would perform similarly to other models that use the information discrepancy as a source of distrust. A drawback of our approach might arise from the higher risk implied by the importance given to novel information when taking decisions, but this risk is in fact reduced by the use of confidence values that integrate both a notion of intimacy and an estimation of predictability: during the calculation of Global Trust as an aggregation of DT, AT and RT, each component is weighted according to the confidence in that belief.
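
As a rough illustration of this confidence-weighted aggregation (the model's actual operators are based on t-norms and t-conorms, discussed in the appendix; the plain weighted average below is only an assumption for the sketch):

def global_trust(dt, at, rt, c_dt, c_at, c_rt):
    """Aggregate direct trust (DT), advertisement-based trust (AT) and
    reputation-based trust (RT), weighting each by its confidence."""
    weights, values = [c_dt, c_at, c_rt], [dt, at, rt]
    total = sum(weights)
    if total == 0.0:
        return 0.0  # no confidence in any source of trust
    return sum(w * v for w, v in zip(weights, values)) / total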

We have used controlled experimental conditions to demonstrate the feasibility and utility of the anticipatory mechanism in a market-like simulation environment, the art appraisals domain. On the one hand, we have shown the utility of the anticipatory approach to adapt to dynamic environments, including both inflationary and deflationary dynamics. On the other hand, we have demonstrated the robustness of the model against deception, both positive (over-valuation), intended to make an agent believe one is better than it actually is, and negative (under-valuation), intended to make an agent believe a third agent is worse than it actually is. In addition, we have tested our model in two competitions based on the ART Testbed, the First Spanish ART Competition and the First International ART Testbed Competition, winning the former and being finalists in the latter. There are yet some open issues that deserve further work.

Finally, an aspect that has not been discussed in this paper and deserves future discussion and research is the use of t-norms and t-conorms as a way to aggregate beliefs. We think this approach is quite flexible and well suited to further extend the model by introducing other sources of trust, such as social or organizational relationships. These families of operators show some interesting properties for aggregating numerically represented beliefs, and open the door to the use of a number of different operators verifying distinct properties; these alternative operators might fit different scenarios better.

6 Appendix: Some notes on t-conorms

In previous sections of this paper we have proposed the use of T-conorm operators to aggregate information coming from several sources. In particular, we have introduced T-conorm operators to aggregate the confidence in the recommendations coming from several agents, and to compute the Global Trust Confidence.

T-conorms are a family of functions defined as the duals ("complements") of a family of functions called T-norms. In mathematics, a T-norm is a kind of binary operation used in the framework of probabilistic metric spaces and in multi-valued logic, specifically in fuzzy logic. A T-norm generalizes the intersection in a lattice and the AND operator in logic. The name derives from triangular norm, which refers to the fact that, in the framework of probabilistic metric spaces, T-norms are used to generalize the triangle inequality of ordinary metric spaces.

Definition 13 (T-norm) A T-norm is a function T : [0, 1] × [0, 1] → [0, 1] with the following properties:


T-norm                               Dual T-conorm (S-norm)
⊤min(a, b) = min{a, b}               ⊥max(a, b) = max{a, b}
⊤Luka(a, b) = max{0, a + b − 1}      ⊥Luka(a, b) = min{a + b, 1}
⊤prod(a, b) = a · b                  ⊥sum(a, b) = a + b − a · b

Table 1 Some T-norms and their dual T-conorms

1. Commutativity: T(a, b) = T(b, a)
2. Monotonicity: T(a, b) ≤ T(c, d) if a ≤ c and b ≤ d
3. Associativity: T(a, T(b, c)) = T(T(a, b), c)
4. Null element: T(a, 0) = 0
5. Identity element: T(a, 1) = a

A t-norm is called Archimedean if 0 and 1 are its only idempotents. AnArchimedean t-norm is called strict if 0 is its only nilpotent element.

A well-known property of T-norms is T(a, b) ≤ min(a, b). It follows from the axioms: by monotonicity and the identity element, T(a, b) ≤ T(a, 1) = a, and by commutativity also T(a, b) ≤ b.

T-conorms, also called S-norms, can be defined as the duals of T-norms.

Definition 14 (S-norm) Given a T-norm T, the complementary S-norm is defined through a generalization of De Morgan's laws:

S(a, b) = 1− T (1− a, 1− b)

It follows that an S-norm satisfies the following properties:

1. Commutativity: S(a, b) = S(b, a)
2. Monotonicity: S(a, b) ≤ S(c, d) if a ≤ c and b ≤ d
3. Associativity: S(a, S(b, c)) = S(S(a, b), c)
4. Null element: S(a, 1) = 1
5. Identity element: S(a, 0) = a

Some examples of T-norms and their dual T-conorms (⊥) are shown in Table 1.
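
The pairs in Table 1 can be reproduced directly from Definition 14; a minimal sketch in Python:

def t_min(a, b):
    return min(a, b)

def t_luka(a, b):
    return max(0.0, a + b - 1.0)

def t_prod(a, b):
    return a * b

def dual_s_norm(t_norm):
    """Build the dual T-conorm (S-norm) via De Morgan: S(a, b) = 1 - T(1 - a, 1 - b)."""
    return lambda a, b: 1.0 - t_norm(1.0 - a, 1.0 - b)

s_max  = dual_s_norm(t_min)    # max{a, b}
s_luka = dual_s_norm(t_luka)   # min{a + b, 1}
s_sum  = dual_s_norm(t_prod)   # a + b - a*b

assert abs(s_sum(0.3, 0.5) - (0.3 + 0.5 - 0.3 * 0.5)) < 1e-9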

Acknowledgements

Funded by projects CICYT TSI2005-07344, CICYT TEC2005-07186, and CAM MADRINET S-0505/TIC/0255.

References

1. A. Abdul-Rahman and S. Hailes. Supporting trust in virtual communities. In 33rd IEEE International Conference on System Sciences, 2000.

2. S. Ba, A. Whinston, and H. Zhang. Building trust in online auction markets through an economic incentive mechanism. Decision Support Systems, 35:273–286, 2003.

3. S. Braynov and T. Sandholm. Trust revelation in multiagent interaction. pages 57–60, 2002.


4. M. Butz, O. Sigaud, and P. Gerard, editors. Anticipatory Behavior in Adaptive Learning Systems: Foundations, Theories, and Systems, volume 2684 of Lecture Notes in Computer Science. Springer, 2003.

5. J. Carbo, J. Garcia, and J. Molina. Subjective trust inferred by Kalman filtering vs. a fuzzy reputation. In S. Wang et al., editors, Workshop on Conceptual Modelling for Agents (COMOA 2004), 23rd International Conference on Conceptual Modelling, Lecture Notes in Computer Science 3289, pages 496–505, Shanghai, China, November 2004. Springer-Verlag.

6. J. Carbo, J. Garcia, and J. Molina. Convergence of agent reputation with alpha-beta filtering vs. a fuzzy system. In International Conference on Intelligent Agents, Web Technologies and Internet Commerce, Wien, Austria, November 2005.

7. J. Carbo, J. Molina, and J. Davila. Trust management through fuzzy reputation. International Journal of Cooperative Information Systems, 12(1):135–155, March 2003.

8. C. Castelfranchi and R. Falcone. Principles of trust for multiagent systems: Cognitive anatomy, social importance and quantification. In Third International Conference on Multi-Agent Systems, pages 72–79, 1998.

9. R. Conte and M. Paolucci. Reputation in Artificial Societies. Kluwer Academic Publishers, 2002.

10. K. Fullam, T. Klos, G. Muller, J. Sabater, A. Schlosser, Z. Topol, K. S. Barber, J. Rosenschein, L. Vercouter, and M. Voss. A specification of the Agent Reputation and Trust (ART) testbed: Experimentation and competition for trust in agent societies. In The Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS-2005), pages 512–518, 2005.

11. T. D. Huynh, N. R. Jennings, and N. R. Shadbolt. An integrated trust and reputation model for open multi-agent systems. Autonomous Agents and Multi-Agent Systems, 13(2):119–154, 2006.

12. M. Schillo, P. Funk, and M. Rovatsos. Using trust for detecting deceitful agents in artificial societies. Applied Artificial Intelligence, Special Issue on Trust, Deception and Fraud in Agent Societies, 14(8):825–849, 2000.

13. S. Marsh. Trust in distributed artificial intelligence. In Castelfranchi and Werner, editors, Lecture Notes in Artificial Intelligence 830, pages 94–112. Springer Verlag, 1994.

14. S. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall Pearson Education International, 2003.

15. J. Sabater. Trust and Reputation for Agent Societies. Consejo Superior de Investigaciones Cientificas, Bellaterra, Spain, 2003.

16. J. Sabater and C. Sierra. Regret: A reputation model for gregarious societies. In Fourth Workshop on Deception, Fraud and Trust in Agent Societies, pages 61–69, Montreal, Canada, 2001.

17. S. Sen, A. Biswas, and S. Debnath. Believing others: pros and cons. In Proceedings of the 4th International Conference on MultiAgent Systems, pages 279–285, Boston, MA, July 2000.

18. S. Wasserman and J. Galaskiewicz. Advances in Social Network Analysis. Sage Publications, Thousand Oaks, U.S., 1994.

19. B. Yu and M. Singh. A social mechanism for reputation management in electronic communities. Lecture Notes in Computer Science, 1860:154–165, 2000.

20. G. Zacharia and P. Maes. Trust management through reputation mechanisms. Applied Artificial Intelligence, 14:881–907, 2000.