Tetherless World Constellation
Why Watson Won: A cognitive perspective
Jim Hendler and Simon Ellis
Tetherless World Professor of Computer, Web and Cognitive Sciences
Director, Rensselaer Institute for Data Exploration and Applications
Rensselaer Polytechnic Institute (RPI)
http://www.cs.rpi.edu/~hendler
@jahendler (twitter)
In this talk, we present how the Watson program, IBM's famous Jeopardy!-playing computer, works (based on papers published by IBM); we look at some aspects of potential scoring approaches; we examine how Watson compares to several well-known cognitive systems; and we offer some preliminary thoughts on using it in future artificial intelligence and cognitive science approaches.
IBM Watson
How’d I get into it? Watson and Semantic Web
Watson and Semantic Web
Is Watson cognitive?
“The computer’s techniques for unraveling Jeopardy! clues sounded just like mine. That machine zeroes in on key words in a clue, then combs its memory (in Watson’s case, a 15-terabyte data bank of human knowledge) for clusters of associations with those words. It rigorously checks the top hits against all the contextual information it can muster: the category name; the kind of answer being sought; the time, place, and gender hinted at in the clue; and so on. And when it feels ‘sure’ enough, it decides to buzz. This is all an instant, intuitive process for a human Jeopardy! player, but I felt convinced that under the hood my brain was doing more or less the same thing.”
— Ken Jennings
Outline
• Is Ken right?
– How Watson Works
– Watson as a cognitive architecture??
– Beyond Watson
Inside Watson
Watson pipeline as published by IBM; see IBM J Res & Dev 56 (3/4), May/July 2012, p. 15:2
Question Analysis
What is the question asking for?
Which terms in the question refer to the answer?
Given any natural language question, how can Watson accurately discover this information?
Who is the president of Rensselaer Polytechnic Institute?
Focus Terms: “Who”, “president of Rensselaer Polytechnic Institute”
Answer Types: Person, President
Question Analysis
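As a rough illustration of the focus/answer-type extraction described above, a few hand-written rules can be sketched in Python. This is a toy stand-in, not IBM's implementation; the `WH_TYPES` table, the regex, and the `analyze_question` function are all invented for this example:

```python
import re

# Toy, rule-based question analysis (a sketch, not Watson's actual approach).
WH_TYPES = {"who": ["Person"], "where": ["Location"], "when": ["Date"]}

def analyze_question(question):
    """Guess the focus phrase and a coarse answer type for a question."""
    text = question.strip().rstrip("?")
    words = text.split()
    answer_types = WH_TYPES.get(words[0].lower(), ["Thing"])
    # Very rough heuristic: the phrase after the copula is the focus phrase.
    m = re.match(r"(?i)(who|what|where|when)\s+(?:is|are|was|were)\s+(.*)", text)
    focus_phrase = m.group(2) if m else " ".join(words[1:])
    return {"focus": words[0],
            "focus_phrase": focus_phrase,
            "answer_types": answer_types}

analyze_question("Who is the president of Rensselaer Polytechnic Institute?")
```

A real system would additionally derive types from the head noun of the focus phrase (e.g. "president" → President), which this sketch omits.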
Parsing and semantic analysis
What information about a previously unseen piece of English text can Watson determine?
How is this information useful?
Natural Language Parsing:
- grammatical structure
- parts of speech
- relationships between words
- ...etc.

Semantic Analysis:
- meanings of words, phrases, etc.
- synonyms, entailment
- hypernyms, hyponyms
- ...etc.
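The semantic-analysis side can be illustrated with a lookup over a tiny lexicon. The `LEXICON` entries here are invented for illustration; a real system would consult a resource such as WordNet for synonyms, hypernyms, and hyponyms:

```python
# Toy semantic analysis over a tiny, invented lexicon (not a real resource).
LEXICON = {
    "president": {"synonyms": ["head", "chief executive"],
                  "hypernyms": ["leader", "person"]},
    "university": {"synonyms": ["college"],
                   "hypernyms": ["institution", "organization"]},
}

def semantic_info(word):
    """Return known synonyms and hypernyms for a word (empty if unknown)."""
    entry = LEXICON.get(word.lower(), {})
    return {"synonyms": entry.get("synonyms", []),
            "hypernyms": entry.get("hypernyms", [])}

semantic_info("President")
```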
Question analysis pipeline
Unstructured Question Text → Parsing & Semantic Analysis → Machine Learning Classifiers → Structured Annotations of Question: focus, answer types, useful search queries
Search Result Processing and Candidate Generation
Primary Search
Primary Search generates the corpus of information from which the system takes candidate answers, passages, supporting evidence, and essentially all textual input.
It formulates queries based on the results of Question Analysis.
These queries are passed to a (cached) search engine, which returns a set number of highly relevant documents and their ranks. On the open Web this could be a regular search engine (our extension).
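To make the retrieval step concrete, here is a minimal sketch of primary search over an in-memory corpus. The corpus, document texts, and term-overlap ranking are all invented stand-ins for Watson's cached search engine:

```python
# Stand-in corpus: document title -> text (contents invented for illustration).
CORPUS = {
    "Shirley Ann Jackson":
        "Shirley Ann Jackson is the president of Rensselaer Polytechnic Institute.",
    "Rensselaer Polytechnic Institute":
        "RPI is a university in Troy, New York.",
}

def primary_search(query_terms, k=10):
    """Rank documents by how many query terms appear in their text."""
    scored = []
    for title, text in CORPUS.items():
        hits = sum(1 for t in query_terms if t.lower() in text.lower())
        if hits:
            scored.append((hits, title))
    scored.sort(reverse=True)
    return [title for _, title in scored[:k]]

primary_search(["president", "Rensselaer", "Polytechnic"])
```

A real search engine would use inverted indexes and relevance ranking rather than linear scans, but the interface (query terms in, ranked documents out) is the same idea.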
Candidate Generation
Candidate Generation casts a wide net of possible answers for the question from each document.
Using each document, and the passages created by Search Result Processing, we generate candidates using three techniques:
- Title of Document (T.O.D.): Adds the title of the document as a candidate.
- Wikipedia Title Candidate Generation: Adds any noun phrases within the document’s passage texts that are also the titles of Wikipedia articles.
- Anchor Text Candidate Generation: Adds candidates based on the hyperlinks and metadata within the document.
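The three techniques can be sketched together in a few lines. The document record, the noun phrases, and the `WIKI_TITLES` set are invented for this example; a real system would extract them from the retrieved passages:

```python
# Stand-in for the set of Wikipedia article titles (invented subset).
WIKI_TITLES = {"Shirley Ann Jackson", "Rensselaer Polytechnic Institute"}

def generate_candidates(doc):
    """Generate candidate answers from one retrieved document."""
    candidates = set()
    candidates.add(doc["title"])                 # 1. Title of Document
    for phrase in doc["noun_phrases"]:           # 2. Wikipedia title match
        if phrase in WIKI_TITLES:
            candidates.add(phrase)
    candidates.update(doc["anchor_texts"])       # 3. Anchor text / metadata
    return candidates

doc = {
    "title": "Shirley Ann Jackson",
    "noun_phrases": ["president", "Rensselaer Polytechnic Institute"],
    "anchor_texts": ["Troy, New York"],
}
generate_candidates(doc)
```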
Search Result Processing and Candidate Generation
Scoring & Ranking
Scoring
Analyzes how well a candidate answer relates to the question
Two basic types of scoring algorithm:
- Context-independent scoring
- Context-dependent scoring
Types of scorers
Context-independent:
- Question Analysis
- Ontologies (DBpedia, YAGO, etc.)
- Type hierarchy reasoning

Context-dependent:
- Analyzes features of the natural language environment where candidates were found
- Relies on “passages” found during search

Many special-purpose scorers were used in Jeopardy!
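A context-independent scorer can be illustrated with type-hierarchy reasoning: does the candidate's type, looked up in an ontology, match the expected answer type? The tiny `TYPE_OF` and `SUBTYPES` tables below are invented stand-ins for resources like DBpedia or YAGO:

```python
# Invented mini-ontology (a real system would query DBpedia, YAGO, etc.).
TYPE_OF = {"Shirley Ann Jackson": "Person", "Troy, New York": "Location"}
SUBTYPES = {"Person": {"President", "Scientist"}}

def type_score(candidate, expected_type):
    """1.0 if the candidate's type matches (or subsumes) the expected type."""
    t = TYPE_OF.get(candidate)
    if t == expected_type or expected_type in SUBTYPES.get(t, set()):
        return 1.0
    return 0.0

type_score("Shirley Ann Jackson", "President")
```

Note this scorer never looks at the passage the candidate came from, which is what makes it context-independent.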
Scorers
- Passage Term Match
- Textual Alignment
- Skip-Bigram

Each of these scores supportive evidence. These scores are then merged to produce a single candidate score.
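Two of the listed scorers and the merge step can be sketched as follows. These are simplified interpretations of passage term match and skip-bigram, not IBM's definitions, and the merge weights (learned by machine learning in Watson) are invented here:

```python
def passage_term_match(question_terms, passage):
    """Fraction of question terms that appear in the passage."""
    p = passage.lower()
    return sum(t.lower() in p for t in question_terms) / len(question_terms)

def skip_bigram(question_terms, passage):
    """Fraction of ordered question-term pairs (any gap) preserved in the passage."""
    p_tokens = [w.strip(".,?").lower() for w in passage.split()]
    pairs = [(a, b) for i, a in enumerate(question_terms)
                    for b in question_terms[i + 1:]]
    def preserved(a, b):
        la, lb = a.lower(), b.lower()
        return (la in p_tokens and lb in p_tokens
                and p_tokens.index(la) < p_tokens.index(lb))
    return sum(preserved(a, b) for a, b in pairs) / len(pairs)

def merged_score(question_terms, passage, weights=(0.5, 0.5)):
    """Weighted merge of the individual scores (weights are invented)."""
    return (weights[0] * passage_term_match(question_terms, passage)
            + weights[1] * skip_bigram(question_terms, passage))

merged_score(["president", "of", "RPI"],
             "Shirley Ann Jackson is the president of RPI.")
```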
Example: Textual Alignment
Finds an optimal alignment of a question and a passage
Assigns “partial credit” for close matches
Question: “Who is the President of RPI?”
Aligned terms: Who … President of RPI
Passage: “Shirley Ann Jackson is the President of RPI.”
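A minimal stand-in for the aligner uses Python's `difflib.SequenceMatcher` over token sequences: shared runs of tokens earn credit, and the ratio acts as the "partial credit" for close matches. This is an illustration of the idea, not Watson's alignment algorithm:

```python
from difflib import SequenceMatcher

def alignment_score(question, passage):
    """Similarity of token sequences; higher = better alignment."""
    q = [w.strip("?.,").lower() for w in question.split()]
    p = [w.strip("?.,").lower() for w in passage.split()]
    return SequenceMatcher(None, q, p).ratio()

q = "Who is the President of RPI?"
alignment_score(q, "Shirley Ann Jackson is the President of RPI.")
```

Here the passage shares the run "is the President of RPI" with the question, so it scores much higher than an unrelated passage about RPI.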
• Q: How does Watson fare as a model of question answering?
• A: Poorly
– no conversational ability
– no concept of self
– no deeper reasoning
Watson and Q/A
• Watson’s feed-forward pipeline has the following properties:
– lots of candidates generated
• the more the better
– “ad hoc” filtering pipelines
• domain-independent scorers usually score lower than domain-dependent ones
– no “counter-reasoning” between answers
• separately scored; the only comparison is numbers
Production rules, modules, etc.
Production Rule style Architectures, cf. ACT-R (Anderson 1974; …2012)
- modularization, but not Watson style
- parallelization, but in rule productions (procedural memory)
- declarative memory is fact-based
Watson is not well correlated, except for using search for declarative memory.
Network-based
Network-based architectures (cf. spreading activation (Collins 75), marker-passing (Hendler 86) … Microsaint 2006)
- positive activations
- inhibitory nodes (or other negative enforcers)
Watson has no negative inhibition, but does use network-based scorers.
MAC/FAC
MAC/FAC (Gentner & Forbus, 1991)
- “Many are called, few are chosen” model of analogical reasoning
- Strong correspondence in performance, not in mechanism
- New work by Forbus (SME) uses a more feed-forward mechanism
(Discussions in progress)
Cognitive Architecture? Watson as “component”
Memory – Reasoning – Decision Making
Watson, Cogito, and Clarion
Summary
• Watson won by a combination of
– natural language processing
– search technologies
– semantic typing (minimal reasoning)
– scoring heuristics
– machine learning (scorer tuning)
• Watson Q/A has some interesting analogies to cognitive architectures of the past
– but mainly at a “level of abstraction”
• Watson as a memory component in a more complex cognitive system is a very intriguing possibility