Explanation and Trust for Adaptive Systems
Alyssa Glass (Stanford / SRI / Willow Garage)
In collaboration with Deborah McGuinness (Stanford/RPI), Michael Wolverton (SRI), and Paulo Pinheiro da Silva (UTEP)

Transcript
Page 1:

Explanation and Trust for Adaptive Systems

Alyssa Glass (Stanford / SRI / Willow Garage)

In collaboration with Deborah McGuinness (Stanford/RPI), Michael Wolverton (SRI), and Paulo Pinheiro da Silva (UTEP)

Page 2:

Outline

- Adaptive Systems
- The User Perspective
- Trusting Adaptive Systems
- Discussion & Future Work

Page 3:

Why is “trust” an issue?

Systems are getting more complex:
- Hybrid and distributed processing
- Multiple learning components
- Multiple heterogeneous, distributed information sources
- Highly variable reliability of information sources
- Less transparency of system computation and reasoning

Systems are taking more autonomous control:
- Guide/assist user actions
- Perform autonomous actions on behalf of the user
- “reason, learn from experience, be told what to do, explain what they are doing, reflect on their experience, and respond robustly to surprise” *

* DARPA PAL program: http://www.darpa.mil/ipto/programs/pal/

Page 4:

One Adaptive System: CALO

Cognitive Assistant that Learns and Organizes: a personal office assistant, tasked with:
- Noticing things in the cyber and physical environments
- Aggregating what it notices, thinks, and does
- Executing, adding/deleting, suspending/resuming tasks
- Planning to achieve abstract objectives
- Anticipating things it may be called upon to do or respond to
- Interacting with the user
- Adapting its behavior in response to past experience and user guidance

Contributed to by 22 different organizations.

Page 5:

Working with a Cognitive Assistant

CALO users need to:
- Understand system behavior and responses
- Trust system reasoning and actions

To believe and act on recommendations from CALO, users need ways of exploring how and why the system acted, responded, recommended, and reasoned the way it did.

Additional wrinkle: CALO knowledge, behavior, and assumptions are constantly changing through several forms of machine learning.

A unified framework for explaining behavior and reasoning is essential for users to trust and adopt cognitive assistants.

Page 6:

Outline

- Adaptive Systems
- The User Perspective
- Trusting Adaptive Systems
- Discussion & Future Work

Page 7:

Interacting with Complex Systems

Page 8:

Study Procedure

- 14 participants (12 men, 2 women)
- Wide range of ages, education, and previous CALO experience
- Assigned tasks to accomplish with CALO (many scripted)
- Told about the trust study in advance
- Structured interview format
- Identified 8 themes, in 3 major categories

Page 9:

Usability

Theme 1: Basic usability is important even in prototype-level systems.

“I can’t tell you how much I would love to have [the system]. But I also can’t tell you how much I can’t stand it.”

Page 10:

Usability

Theme 2: Learning algorithms can give the impression that the user is being ignored.

“You specify something, and [the system] comes up with something completely different. And you’re like, it’s ignoring what I want!”

Page 11:

Explanations

Theme 3: Users consistently want to ask context-sensitive questions, particularly when they are surprised by responses or failures.

- What are you doing?
- Why did you do that?
- When will you be finished?
- What information sources did you use?

“If there had been an option to ask a question, I would have loved to ask a question.”

“I asked [‘Why?’] all the time, but I wasn’t getting answers!”

Page 12:

Explanations

Theme 4: The granularity of feedback is important.

“I don’t just want an idiot light.”

Page 13:

Trust

Theme 5: Users don’t trust opaque systems; they want transparency.

“The ability to check up on the system, ask it questions, get transparency to verify what it is doing, is the number one thing that would make me want to use it.”

Page 14:

Trust

Theme 6: Access to information provenance can improve trust in both the information and the automated reasoning.

“[The system] needs a better way to have a meta-conversation.”

Page 15:

Trust

Theme 7: As in politics and economics, gaining user trust relies on properly managing expectations.

“I was paralyzed with fear about what it would understand and what it would not.”

Page 16:

Trust

Theme 8: Most users have a “trust but verify” attitude that makes system autonomy difficult without explainable verification.

“I trust [the system’s] accuracy, but not its judgment.”

Page 17:

Outline

- Adaptive Systems
- The User Perspective
- Trusting Adaptive Systems
- Discussion & Future Work

Page 18:

The Use-Ask-Understand-Update Cycle

Use → Ask → Understand → Update → (back to Use)

Page 19:

What is an “explanation”…

Initial request and answer strategy:

<user>: Why are you doing <subtask>?
<system>: I am trying to do <high-level-task> and <subtask> is one subgoal in the process.

Follow-up questions for mixed-initiative dialogue:

<user>: Why are you doing <high-level-task>?
<user>: Why haven’t you completed <subtask> yet?
<user>: Why is <subtask> a subgoal of <high-level-task>?
<user>: When will you finish <subtask>?
<user>: What sources did you use to do <subtask>?
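A minimal sketch of this question-answering pattern, dispatching question types over a task hierarchy. All names here (Task, explain, the sample tasks) are hypothetical illustrations, not the ICEE or CALO API:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Task:
    """A node in the executor's task hierarchy (hypothetical structure)."""
    name: str
    parent: Optional["Task"] = None
    subtasks: List["Task"] = field(default_factory=list)
    sources: List[str] = field(default_factory=list)

def explain(question: str, task: Task) -> str:
    """Map a question type to an answer strategy."""
    if question == "why" and task.parent is not None:
        # Strategy: reveal the task hierarchy, one level up.
        return (f"I am trying to do {task.parent.name}, "
                f"and {task.name} is one subgoal in the process.")
    if question == "sources":
        # Strategy: expose the information sources used.
        return f"I used: {', '.join(task.sources) or 'no external sources'}."
    return "I have no explanation strategy for that question yet."

# A follow-up "why" about the parent task would walk one more level up.
plan = Task("organize-meeting")
sub = Task("find-free-room", parent=plan, sources=["room calendar"])
plan.subtasks.append(sub)
print(explain("why", sub))
print(explain("sources", sub))
```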

Page 20:

How Explanation Can Help

- T1: Basic Usability
- T2: Being Ignored
- T3: Context-Sensitive Questions
- T4: Granularity of Feedback
- T5: Transparency
- T6: Provenance
- T7: Managing Expectations
- T8: Autonomy & Verification


Page 23:

The Integrated Cognitive Explanation Environment (ICEE)

- Unified framework for explaining logical and task reasoning
- Applicable to multiple task execution systems
- Leverages existing InferenceWeb work for generating formal justifications
- Underlying task reasoning useful beyond explanation
- Provides a sample implementation of an end-to-end system

Page 24:

Explanation Example

Sample question type: task motivation
  “Why are you doing <subtask>?”

Strategy: reveal task hierarchy
  “I am trying to do <high-level-task> and <subtask> is one subgoal in the process.”

Alternate strategies:
- Provide task abstraction
- Expose preconditions
- Expose termination conditions
- Reveal meta-information about task dependencies
- Explain provenance related to task preconditions or other knowledge

Possible follow-up suggestions:
- Request additional detail
- Request clarification of the given explanation
- Request an alternate strategy to the original query

McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. A Categorization of Explanation Questions for Task Processing Systems. AAAI 2007 Workshop on Explanation-Aware Computing (ExaCt-2007), Vancouver, Canada, 2007.

Page 25:

Sidetrack: An InferenceWeb Primer

[Figure: Inference Web stack. Labels: Trust; Explanation; Presentation; Abstraction; Interaction; Understanding; Inference Meta-Language; Inference Rule Specs; Provenance Meta-data; Information Manipulation; Data; Proof Markup Language]

Framework for explaining reasoning and execution tasks by abstracting, storing, exchanging, combining, annotating, filtering, comparing, and rendering justifications from varied cognitive reasoners.

1. Registry and service support for knowledge provenance.

2. Language for encoding hybrid, distributed proof fragments (both formal and informal).

3. Declarative inference rule representation for checking proofs.

4. Multiple strategies for proof abstraction, presentation, and interaction.

Page 26:

Sidetrack continued: Representations in PML

- Proof Markup Language (PML) is a proof interlingua
- Used to represent justifications of information manipulation steps done by theorem provers, extractors, and other reasoners
- Main components concern inference representation and provenance issues
- Specification written in OWL

[Figure: example PML justification. An iw:NodeSet with iw:hasConclusion “(Supports GA BL)” and iw:hasLanguage KIF is iw:isConsequenceOf an iw:InferenceStep with iw:hasRule SupportsTopLevelGoal, iw:hasSourceUsage http://foo.com/Example.owl#Laptop (TailorComment), and iw:hasEngine SPARK]
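PML itself is specified in OWL, but purely as an illustration of the shape of such a justification (plain Python records, not the real PML schema), the figure’s node set could be recorded like this:

```python
# Illustrative only: a PML-style justification as nested Python records.
# Property names mirror the iw: terms in the figure; this is not the OWL schema.
node_set = {
    "type": "iw:NodeSet",
    "iw:hasConclusion": "(Supports GA BL)",  # conclusion, written in KIF
    "iw:hasLanguage": "KIF",
    "iw:isConsequenceOf": {
        "type": "iw:InferenceStep",
        "iw:hasRule": "SupportsTopLevelGoal",
        "iw:hasEngine": "SPARK",  # the task executor that performed the step
        "iw:hasSourceUsage": "http://foo.com/Example.owl#Laptop",
    },
}

# An explainer can walk the justification to answer provenance questions.
step = node_set["iw:isConsequenceOf"]
print(f"Concluded {node_set['iw:hasConclusion']} via {step['iw:hasRule']} "
      f"on engine {step['iw:hasEngine']}")
```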

Page 27:

Sample Interface Linked to ICEE

Page 28:

Learning by Instruction

- Relatively straightforward to explain
- Store the instruction and the resulting modification
- Strategies present the instruction and related meta-information
- Demonstrated in CALO with the Tailor task learning system
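As a toy illustration of the “store instruction, resulting modification” idea (hypothetical names, not Tailor’s actual representation):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class InstructionRecord:
    """Pairs a user instruction with the procedure modification it caused."""
    instruction: str   # what the user told the system
    modification: str  # the resulting change to the procedure
    timestamp: datetime

    def explain(self) -> str:
        # Strategy: present the instruction and related meta-information.
        return (f"I changed my behavior ({self.modification}) because on "
                f"{self.timestamp:%Y-%m-%d} you told me: '{self.instruction}'.")

rec = InstructionRecord("Always cc my manager on travel requests",
                        "added cc-manager step to travel-request procedure",
                        datetime(2007, 5, 1))
print(rec.explain())
```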

Page 29:

Learning by Demonstration

- Generalizes the user’s demonstration to learn a procedure
- One data point means the generalization will sometimes be wrong; specifically, it will occasionally over-generalize:
  - Generalize the wrong variables, or too many variables
  - Produce too general a procedure because of a coarse-grained type hierarchy
- Explain the relevant aspects of the generalization process:
  - To help the user identify and correct over-generalizations
  - To help the user understand and trust the learned procedures
- Working with the LAPDOG task learning system in CALO
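A toy sketch of why one-shot generalization over a coarse type hierarchy over-generalizes (illustrative only, not LAPDOG’s algorithm; the hierarchy and names are made up):

```python
# Generalize one demonstrated value by climbing a type hierarchy.
# A coarse-grained hierarchy makes the learned procedure too general.
TYPE_PARENT = {
    "quarterly-report.doc": "document",
    "document": "file",
    "file": "object",
}

def generalize(value: str, levels: int = 1) -> str:
    """Replace a concrete value with an ancestor type, `levels` steps up."""
    for _ in range(levels):
        value = TYPE_PARENT.get(value, value)
    return value

print(generalize("quarterly-report.doc"))            # 'document': plausible
print(generalize("quarterly-report.doc", levels=2))  # 'file': over-general;
# the procedure would now fire on spreadsheets, images, executables, etc.
# Explaining which level was chosen, and why, lets the user correct it.
```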

Page 30:

Support-Vector Machines

Augment the SVM to gather additional meta-information about the SVM itself:
- Support vectors identified by the SVM
- Support vectors nearest to the query point
- Margin to the query point
- Average margin over all data points
- Non-support vectors nearest to the query point
- Kernel transformation used, if any
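A minimal sketch of collecting such meta-information, using scikit-learn’s SVC purely as a stand-in learner (PLIANT’s actual SVM internals are not shown in the slides; the signed decision value is used here as a proxy for the margin):

```python
import numpy as np
from sklearn.svm import SVC

# Tiny stand-in training set: two scheduling choices per class.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
clf = SVC(kernel="rbf").fit(X, y)

query = np.array([[0.6, 0.5]])

meta = {
    # Support vectors identified by the SVM
    "support_vectors": clf.support_vectors_,
    # Signed decision value: a proxy for the margin to the query point
    "query_margin": float(clf.decision_function(query)[0]),
    # Average (absolute) margin over all training points
    "avg_margin": float(np.abs(clf.decision_function(X)).mean()),
    # Support vector nearest to the query point, in input space
    "nearest_sv": clf.support_vectors_[
        np.linalg.norm(clf.support_vectors_ - query, axis=1).argmin()
    ],
    # Kernel transformation used
    "kernel": clf.kernel,
}
print(meta)
```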

- Represent the SVM learning and meta-information as a justification in PML, using added SVM rules
- Design abstraction strategies for presenting the justification to the user as a similarity-based explanation
- Demonstrated in CALO with the PLIANT preference learning system:
  - PLIANT uses user-elicited preferences and past choices to learn user scheduling preferences
  - Inconsistent user preferences, over-constrained schedules, and the necessity of exploring the preference space result in user confusion about why a schedule is being presented
  - Lack of user understanding of PLIANT’s updates creates confusion, mistrust, and the appearance that preferences are being ignored
  - Provide justifications of schedule suggestions without requiring the user to understand SVM learning

Page 31:

Future Work

- Using conflicts to drive the learn-explain cycle
- Using explanations to identify high-reward learning opportunities
- Supporting more advanced dialogues and interfaces
- Conducting a user study using ICEE explanations

Page 32:

Questions?

Page 33:

Resources

Explanation questions & strategies:
McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. A Categorization of Explanation Questions for Task Processing Systems. AAAI 2007 Workshop on Explanation-Aware Computing (ExaCt-2007), Vancouver, Canada, 2007.

CALO trust study:
Glass, A., McGuinness, D.L., and Wolverton, M. Toward Establishing Trust in Adaptive Agents. Technical Report KSL-07-04, Knowledge Systems, Artificial Intelligence Laboratory, Stanford University, 2007.

Explanation interfaces:
McGuinness, D.L., Ding, L., Glass, A., Chang, C., Zeng, H., and Furtado, V. Explanation Interfaces for the Semantic Web: Issues and Models. 3rd International Semantic Web User Interaction Workshop (SWUI’06), Athens, Georgia, 2006.

Overview of ICEE:
McGuinness, D.L., Glass, A., Wolverton, M., and Pinheiro da Silva, P. Explaining Task Processing in Cognitive Assistants that Learn. Proceedings of the 20th International FLAIRS Conference (FLAIRS-20), Key West, Florida, 2007.

Video demonstration of ICEE: http://iw.stanford.edu/2006/10/ICEE.640.mov