LABORATORY FOR PERCEPTUAL ROBOTICS • UNIVERSITY OF MASSACHUSETTS AMHERST • DEPARTMENT OF COMPUTER SCIENCE
A Relational Representation for Procedural Task
Knowledge
Stephen Hart Roderic Grupen David Jensen
Laboratory for Perceptual Robotics
University of Massachusetts Amherst
New England Manipulation Symposium
May 25, 2005
Introduction and Motivation
• Robots performing tasks in real-world environments require methods to:
– Produce fault-tolerant behavior
– Focus on the most salient and relevant information
– Handle multi-modal, continuous data
– Leverage past experience (i.e., adapt and reuse)
• Can we learn probability estimates regarding the effects of sensorimotor variables on task success?
– e.g., If I take these actions, how likely am I to succeed at my task?
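The question above can be sketched as a simple frequency estimate over logged trials. This is a minimal illustration, not the talk's actual learner; the record fields (`grasp`, `approach`, `success`) and the sample data are assumptions for the example.

```python
# Hypothetical sketch: estimate P(success | conditions) by counting
# matching trials. Field names and data are illustrative only.
def success_rate(trials, **conditions):
    """Fraction of trials matching `conditions` that succeeded."""
    matched = [t for t in trials
               if all(t.get(k) == v for k, v in conditions.items())]
    if not matched:
        return None  # no evidence for this condition
    return sum(t["success"] for t in matched) / len(matched)

trials = [
    {"grasp": "2VF", "approach": "top",  "success": 1},
    {"grasp": "2VF", "approach": "top",  "success": 0},
    {"grasp": "3VF", "approach": "side", "success": 1},
    {"grasp": "3VF", "approach": "top",  "success": 1},
]
print(success_rate(trials, grasp="3VF"))  # → 1.0
```

With enough trials, such estimates answer "if I take these actions, how likely am I to succeed?" for any observed combination of sensorimotor variables.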
Generalized Task Expertise
• Declarative knowledge
– Captures abstract knowledge about the task
– e.g., find an object, reach to it, pick it up...
• Procedural knowledge
– Captures knowledge about how to instantiate the abstract policy in a particular environmental context
– e.g., turn my head to the left, use my left hand to reach, use an enveloping grasp...
Schema Theory
• Arbib (1995) describes control programs composed of:
– Perceptual schemas - a Ball might be characterized by “size,” “color,” “velocity,” etc.
– Motor schemas - actions characterized by a “degree of readiness” and “activity level.”
• Are such distinctions misleading?
– Gibsonian Affordances: a perceptual feature is only meaningful if it facilitates action
– Mirror Neurons: the same neurons activate when performing an action and when observing someone else perform that action
• Claim: All perceptual information can come from appropriately designed controllers
How do we learn procedural structure?
• We would like the robot to differentiate its actions based on environmental context
– e.g., Pick and Place
• Which available sensorimotor features are correlated – structure learning
• How these features relate, probabilistically, to each other – parameter learning
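A toy stand-in for the structure-learning step is to score each candidate feature by its empirical mutual information with task success and keep the informative ones. This sketch assumes discrete features and is only an illustration, not the relational learner used in the talk.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete variables."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Illustrative data: success is determined by `grasp`, independent of `locale`.
grasp  = ["2VF", "2VF", "3VF", "3VF"]
locale = ["A", "B", "A", "B"]
succ   = [0, 0, 1, 1]
print(mutual_information(grasp, succ) > mutual_information(locale, succ))  # → True
```

A structure learner would retain `grasp` as a predictor of success and drop `locale`; parameter learning then fills in the conditional probabilities for the retained dependencies.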
Relational Data
• Data with complex dependencies between instances or varying structure (not i.i.d.)
• Applicable to the robotics domain because:
– Different training episodes may exhibit varying structure
• Data designated as Objects and Attributes
– Objects are related through the structure of the data
– Attributes are related through learned statistical dependencies
• Relational Dependency Networks
– Approximate the full joint distribution of a set of variables with a set of conditional probability distributions
– Perform Gibbs sampling to do joint inference
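The Gibbs-sampling idea can be shown in miniature with two binary variables, each specified only by its conditional distribution given the other; repeatedly resampling each from its conditional recovers the joint. The CPD numbers below are illustrative, not from the learned model.

```python
import random

def gibbs(p_a1_given_b, p_b1_given_a, steps=20000, seed=0):
    """Gibbs sampler over two binary variables, each given as a CPD
    {value_of_other: P(var = 1)}. Returns the empirical marginal P(a = 1)."""
    rng = random.Random(seed)
    a, b = 0, 0
    hits = 0
    for _ in range(steps):
        a = 1 if rng.random() < p_a1_given_b[b] else 0  # resample a | b
        b = 1 if rng.random() < p_b1_given_a[a] else 0  # resample b | a
        hits += a
    return hits / steps

# Illustrative CPDs; the stationary marginal P(a=1) works out to 0.5.
p_a1 = gibbs({0: 0.2, 1: 0.8}, {0: 0.3, 1: 0.7})
print(round(p_a1, 1))  # ≈ 0.5
```

An RDN does the same thing at scale: one learned conditional distribution per attribute, with Gibbs sampling tying them together into approximate joint inference.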
Some Controller Objects
[Figure: controller objects for Localize, Reach, and Grasp, with attributes including locale, bounding box, dimensions, orientation, convergence state, lift-able, and fingers.]
What is Relational About this Data?
[Figure: Simple Assembly 1 – controller sequence ReachController, GraspController, ReachController, GraspController, AssembleController.]
What is Relational About this Data?
[Figure: Simple Assembly 2 – controller sequence ReachController, GraspController, ReachController, GraspController, AssembleController, RemanipulateController.]
Gathering the Dataset
• Observe an autonomous program or a teleoperator performing a task in a variety of ways
• Each trial may follow a different trajectory
• Data is collected after each trial
• Model is learned with Proximity (http://kdl.cs.umass.edu/proximity/)
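One way to picture a logged trial is as a chain of controller objects, each carrying its attributes, with links giving the relational structure between them. This sketch is a hypothetical illustration of that shape; Proximity's actual data schema differs.

```python
# Hypothetical trial record: controller objects linked in execution order,
# each with an attribute dictionary. Names and attributes are illustrative.
from dataclasses import dataclass

@dataclass
class ControllerObject:
    name: str          # e.g. "Reach"
    attributes: dict   # e.g. {"convergence": True}
    next: "ControllerObject" = None

def record_trial(steps):
    """Chain (name, attrs) pairs into a linked trial structure."""
    head = prev = None
    for name, attrs in steps:
        node = ControllerObject(name, attrs)
        if prev:
            prev.next = node
        else:
            head = node
        prev = node
    return head

trial = record_trial([("Localize", {"locale": 3}),
                      ("Reach", {"convergence": True}),
                      ("Grasp", {"fingers": 2, "lift-able": True})])
print(trial.next.name)  # → Reach
```

Because trials are stored as linked objects rather than fixed-length vectors, episodes with different structure (extra Reach or Grasp attempts, say) fit the same representation.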
Experiments
• PickUp with Dexter
• 2 objects (3 orientations)
– tall box, coffee can
• 2 grasps:
– 2 VF, 3 VF
• 2 reaches:
– top approach
– side approach
• 8 locales
– uniformly distributed
The Learned Model Graph
[Figure: learned dependency graph over the Localize, Reach, and Grasp controller attributes: locale, bounding box, dimensions, orientation, convergence state, lift-able, and fingers.]
Attribute Trees
• The RDN algorithm estimates a CPD for each attribute
– Learns a locally consistent Relational Probability Tree (RPT) for that attribute
• Each tree focuses attention on the most salient predictors of the corresponding attribute
– Manages complexity
– Allows for easy and intuitive interpretation
– Each attribute (sensorimotor feature) has an affordance in terms of the current task
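A probability tree for one attribute can be pictured as nested tests on other attributes, with a probability at each leaf. The structure and numbers below are invented for illustration; they are not the RPT actually learned in the experiments.

```python
# Toy probability-tree CPD for a "lift-able" attribute: internal nodes test
# other attributes, leaves hold P(lift-able = true). All values illustrative.
def p_liftable(features):
    if features["orientation"] == "upright":
        if features["fingers"] == 3:
            return 0.9   # upright object, three-fingered grasp
        return 0.7       # upright object, two-fingered grasp
    return 0.2           # lying down: harder to lift in this toy model

print(p_liftable({"orientation": "upright", "fingers": 3}))  # → 0.9
```

Reading a path from root to leaf gives the intuitive interpretation the slide mentions: which features matter for this attribute, and how much.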
RPT for “Lift-able”
Using the RDN to construct policy
• How do we use the learned schema to perform the task again?
– At each action point:
• Perform joint inference on task success variables and find the most likely resource assignment
• Use this assignment and see how likely success is
• Perform the next action with the resource binding, possibly uncovering new information through interaction
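The selection step above can be sketched as an argmax over candidate resource bindings scored by the model's predicted success. Here `predict_success` is a hypothetical stand-in for joint inference over the RDN, and the binding choices echo the experimental setup (2 grasps, 2 approaches).

```python
# Hedged sketch of action selection: enumerate candidate resource bindings,
# score each with a success predictor, take the best. `predict_success` is
# a placeholder for RDN joint inference; names are illustrative.
from itertools import product

def choose_binding(predict_success, context):
    grasps = ["2VF", "3VF"]
    approaches = ["top", "side"]
    return max(product(grasps, approaches),
               key=lambda gb: predict_success(context, *gb))

# Toy stand-in model: 3VF from the top works best in this context.
toy = lambda ctx, grasp, approach: {("3VF", "top"): 0.9}.get((grasp, approach), 0.4)
print(choose_binding(toy, {}))  # → ('3VF', 'top')
```

After executing with the chosen binding, newly observed attribute values (e.g., controller convergence) can be fed back as evidence before the next action point.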
Yeah, but... how does it perform?
• Pick up the can with 2 or 3 fingers from the top
• Pick up the box with 2 fingers
– From the side or the top, standing up
– From the top, lying down
• Predicts little probability of success if object is outside reachable workspace
Where to Next?
• How do we learn the declarative structure?
– Previous work by Huber, Platt, etc.
• Capture the dynamic response of controllers during execution
– Learn dependencies through direct interaction with the environment
• Can we sample a set of attributes from an uncountably large possible set?
– Resample if poor policies are learned
The End
RDNs in Robotics
• What do we know?
– A collection of controllers is necessary for a task, usually organized as a sequence of sub-goals
– Controllers have state, attached resources, and can reveal perceptual information through execution
– Controllers can execute sequentially or in conjunction
• What don’t we know?
– Which sensorimotor features of each controller are important and how they correlate
Four Training Structures
[Figure: the four observed controller sequences –
Localize, Reach, Reach, Grasp, Grasp
Localize, Reach, Reach, Grasp
Localize, Reach, Grasp, Grasp
Localize, Reach, Grasp]