
Learning to Share Meaning in a Multi-Agent System (Part I)

Feb 01, 2016


Learning to Share Meaning in a Multi-Agent System (Part I). Ganesh Padmanabhan. Article: Williams, A.B., "Learning to Share Meaning in a Multi-Agent System", Journal of Autonomous Agents and Multi-Agent Systems, Vol. 8, No. 2, pp. 165-193, March 2004. (Most downloaded article in the journal.)
Transcript
Page 1: Learning to Share Meaning in a Multi-Agent System (Part I)

Learning to Share Meaning in a Multi-Agent System (Part I)

Ganesh Padmanabhan

Page 2: Learning to Share Meaning in a Multi-Agent System (Part I)

Article

Williams, A.B., "Learning to Share Meaning in a Multi-Agent System", Journal of Autonomous Agents and Multi-Agent Systems, Vol. 8, No. 2, pp. 165-193, March 2004. (Most downloaded article in the journal.)

Page 3: Learning to Share Meaning in a Multi-Agent System (Part I)

Overview

Introduction (Part I)
Approach (Part I)
Evaluation (Part II)
Related Work (Part II)
Conclusions and Future Work (Part II)
Discussion

Page 4: Learning to Share Meaning in a Multi-Agent System (Part I)

Introduction

One common ontology: does that work?

If not, what issues do we face when agents have similar views of the world but different vocabularies?

Reconciling diverse ontologies so that agents can communicate effectively when appropriate.

Page 5: Learning to Share Meaning in a Multi-Agent System (Part I)

Diverse Ontology Paradigm: Questions Addressed

"How do agents determine if they know the same semantic concepts?"

"How do agents determine if their different semantic concepts actually have the same meaning?"

"How can agents improve their interpretation of semantic concepts by recursively learning missing discriminating attributes?"

"How do these methods affect the group performance at a given collective task?"

Page 6: Learning to Share Meaning in a Multi-Agent System (Part I)

Ontologies and Meaning

Operational definitions needed: conceptualization, ontology, universe of discourse, functional basis set, relational basis set, object, class, concept description, meaning, object constant, semantic concept, semantic object, semantic concept set, distributed collective memory

Page 7: Learning to Share Meaning in a Multi-Agent System (Part I)

Conceptualization

All objects that an agent presumes to exist and their interrelationships with one another.

Tuple: (Universe of Discourse, Functional Basis Set, Relational Basis Set)

Page 8: Learning to Share Meaning in a Multi-Agent System (Part I)

Ontology

Specification of a conceptualization: a mapping of language symbols to an agent's conceptualization
Terms used to name objects
Functions to interpret objects
Relations in the agent's world

Page 9: Learning to Share Meaning in a Multi-Agent System (Part I)

Object

Anything we can say something about
Concrete or abstract classes
Primitive or composite
Fictional or non-fictional

Page 10: Learning to Share Meaning in a Multi-Agent System (Part I)

UOD and Ontology

"The difference between the UOD and the ontology is that the UOD are objects that exist but until they are placed in an agent's ontology, the agent does not have a vocabulary to specify objects in the UOD."

Page 11: Learning to Share Meaning in a Multi-Agent System (Part I)

Forming a Conceptualization

The agent's first step in looking at the world.
Declarative knowledge
Declarative semantics
The interpretation function maps an object in a conceptualization to language elements

Page 12: Learning to Share Meaning in a Multi-Agent System (Part I)

Distributed Collective Memory

Page 13: Learning to Share Meaning in a Multi-Agent System (Part I)

Approach Overview

Assumptions
Agents' use of supervised inductive learning to learn representations for their ontologies.
Mechanics of discovering similar semantic concepts, translation, and interpretation.
Recursive Semantic Context Rule Learning for improved performance.

Page 14: Learning to Share Meaning in a Multi-Agent System (Part I)

Key Assumptions

"Agents live in a closed world represented by distributed collective memory."

"The identity of the objects in this world are accessible to all agents and can be known by the agents."

"Agents use a knowledge structure that can be learned using objects in the distributed collective memory."

"The agents do not have any errors in their perception of the world even though their perceptions may differ."

Page 15: Learning to Share Meaning in a Multi-Agent System (Part I)

Semantic Concept Learning

Individual learning, i.e. learning one's own ontology

Group learning, i.e. one agent learning that another agent knows a particular concept

Page 16: Learning to Share Meaning in a Multi-Agent System (Part I)

WWW Example Domain

Web page = specific semantic object
Groupings of web pages = semantic concept or class (analogous to bookmark organization)
Words and HTML tags are taken to be boolean features.
A web page is represented by a boolean vector.
Concepts → Concept Vectors → Learner → Semantic Concept Description (rules)
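The boolean-vector representation above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the vocabulary and pages are invented, and a real agent's vocabulary would span the words and HTML tags of its training pages.

```python
# Hypothetical sketch: each word or HTML tag is a boolean feature,
# and a web page becomes a boolean vector over that feature set.
VOCABULARY = ["python", "tutorial", "recipe", "<table>", "<code>"]

def to_boolean_vector(page_tokens):
    """Map a web page's tokens to a boolean vector over the vocabulary."""
    tokens = set(page_tokens)
    return [feature in tokens for feature in VOCABULARY]

page = ["python", "tutorial", "<code>", "loops"]
print(to_boolean_vector(page))  # [True, True, False, False, True]
```

Vectors like these, grouped by concept (bookmark folder), are what the learner consumes to produce the semantic concept description rules.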

Page 17: Learning to Share Meaning in a Multi-Agent System (Part I)

Ontology Learning

Supervised inductive learning
Output = Semantic Concept Descriptions (SCDs)
SCDs are rules with a LHS and RHS, etc.
Object instances are discriminated based on the tokens they contain, sometimes resulting in "…a peculiar learned descriptor vocabulary."
Certainty value
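An SCD rule of this shape can be sketched as follows. All tokens, concept names, and certainty values here are invented for illustration; the paper's learned rules would come from the inductive learner, not be hand-written.

```python
# Illustrative sketch of a semantic concept description (SCD) as an
# if-then rule: the LHS tests boolean token features, the RHS names
# the concluded concept, and a certainty value qualifies the rule.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    lhs: frozenset      # tokens that must all be present
    rhs: str            # concept this rule concludes
    certainty: float    # confidence learned from training data

def apply_rules(rules, page_tokens):
    """Return (concept, certainty) of the strongest matching rule, or None."""
    tokens = set(page_tokens)
    matches = [r for r in rules if r.lhs <= tokens]  # LHS satisfied?
    if not matches:
        return None
    best = max(matches, key=lambda r: r.certainty)
    return best.rhs, best.certainty

rules = [Rule(frozenset({"goal", "score"}), "sports", 0.9),
         Rule(frozenset({"recipe"}), "cooking", 0.8)]
print(apply_rules(rules, ["goal", "score", "team"]))  # ('sports', 0.9)
```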

Page 18: Learning to Share Meaning in a Multi-Agent System (Part I)

Locating Similar Semantic Locating Similar Semantic ConceptsConcepts

1)1) Agent queries another agent for a concept by Agent queries another agent for a concept by showing it examples.showing it examples.

2)2) Second agent receives examples and uses its own Second agent receives examples and uses its own conceptualization to determine if it knows the conceptualization to determine if it knows the concept (K), maybe knows it (M), or doesn’t know it concept (K), maybe knows it (M), or doesn’t know it (D).(D).

3)3) For cases, K and M, the second agent sends back For cases, K and M, the second agent sends back examples of what it thinks is the concept that was examples of what it thinks is the concept that was queried.queried.

4)4) First agent receives the examples, and interprets First agent receives the examples, and interprets those using its own conceptualization to “verify” those using its own conceptualization to “verify” that they are talking about the same concept.that they are talking about the same concept.

5)5) If verified, the querying agent then adds that the If verified, the querying agent then adds that the other agent knows its concept to its own other agent knows its concept to its own knowledge base.knowledge base.
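The five-step dialogue above can be sketched as a small runnable toy. As a simplification, each agent is a dict of {concept label: set of object ids} and similarity is plain set overlap, standing in for the learned rule-based classification; the agents, labels, and thresholds are all invented for illustration.

```python
def best_match(agent, objects):
    """Return (label, overlap fraction) of the agent's best-matching concept."""
    scores = {label: len(objects & members) / len(objects)
              for label, members in agent["concepts"].items()}
    label = max(scores, key=scores.get)
    return label, scores[label]

def query_concept(asker, responder, concept, know=0.7, maybe=0.4):
    examples = asker["concepts"][concept]                 # 1) show examples
    label, score = best_match(responder, examples)        # 2) K, M, or D?
    if score < maybe:                                     #    D: give up
        return None
    returned = responder["concepts"][label]               # 3) send own examples
    back_label, back_score = best_match(asker, returned)  # 4) verify
    if back_label == concept and back_score >= know:
        asker["knows"][(responder["name"], concept)] = label  # 5) record
        return label
    return None

a = {"name": "A", "concepts": {"sports": {1, 2, 3, 4}}, "knows": {}}
b = {"name": "B", "concepts": {"athletics": {2, 3, 4, 5}}, "knows": {}}
print(query_concept(a, b, "sports"))  # athletics
```

Note that the recorded fact pairs the other agent's label with the asker's own concept, which is exactly the bookkeeping that later enables translation.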

Page 19: Learning to Share Meaning in a Multi-Agent System (Part I)

Concept Similarity Estimation

Assuming two agents know a particular concept, it is feasible and probable, given a large DCM, that their sets of concept-defining objects differ completely.

We cannot simply assume that the target functions each agent generates by supervised inductive learning from examples will be the same.

We need to define other ways to estimate similarity.

Page 20: Learning to Share Meaning in a Multi-Agent System (Part I)

Concept Similarity Estimation Function

Input: a sample set of objects representing a concept in another agent
Output: Knows Concept (K), Might Know Concept (M), or Doesn't Know Concept (D)

Set of objects → the agent tries mapping the set to each of its concepts using its description rules → each concept receives an interpretation value → the interpretation value is compared with thresholds to make the K, M, or D determination.

The interpretation value for one concept is the proportion of objects in the CBQ that were inferred to be this concept.

Positive interpretation threshold = how often this concept description correctly determined an object in the training set to belong to this concept
Negative interpretation threshold
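The interpretation value and the threshold comparison can be sketched directly. The threshold numbers below are invented; in the approach described above they would be derived from each concept's training-set accuracy.

```python
# Sketch of the K/M/D decision: the interpretation value is the fraction
# of queried objects inferred to be the concept, compared against a
# positive and a negative threshold (illustrative values only).
def interpretation_value(inferred_labels, concept):
    """Fraction of queried objects that were inferred to be this concept."""
    return sum(1 for lbl in inferred_labels if lbl == concept) / len(inferred_labels)

def decide(value, positive_threshold, negative_threshold):
    if value >= positive_threshold:
        return "K"   # knows the concept
    if value > negative_threshold:
        return "M"   # might know the concept
    return "D"       # doesn't know the concept

labels = ["bird", "bird", "bird", "fish", "bird"]
v = interpretation_value(labels, "bird")   # 0.8
print(decide(v, 0.75, 0.25))               # K
```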

Page 21: Learning to Share Meaning in a Multi-Agent System (Part I)

Group Knowledge
Individual Knowledge
Verification

Page 22: Learning to Share Meaning in a Multi-Agent System (Part I)

Translating Semantic Concepts

Same algorithm as for locating similar concepts in other agents.

Two concepts determined to be the same can be translated regardless of their labels in the ontologies.

Difference: after verification, the knowledge is stored as "Agent B knows my semantic concept X as Y."