
Reinforcement Learning Simulations and Robotics

Models

● Partially observable – noise in sensors

● Policy search methods rather than value function-based approaches

● Isolate key parameters by choosing an appropriate representation for a value function or policy

● Incorporate prior knowledge and transfer knowledge from simulations

Safety

● Key issue of the learning process

● Doesn't apply to the rest of the RL community

● Perkins and Barto
  – RL agents based on Lyapunov functions
  – Switching between the underlying controllers
  – Always safe and offers basic performance guarantees (see the sketch after this list)
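
The Perkins and Barto approach restricts the learner to switching among base controllers that have already been verified (via Lyapunov analysis) to be safe, so any switching sequence inherits that safety. A minimal sketch of the idea, with hypothetical controllers and a plain Q-learning update standing in for the details:

    import random

    # Sketch of safe RL by switching among pre-verified base controllers
    # (after Perkins and Barto). The controllers themselves are hypothetical;
    # because each one is assumed safe, any switching policy remains safe.
    class SafeSwitchingAgent:
        def __init__(self, controllers, epsilon=0.1, alpha=0.5, gamma=0.95):
            self.controllers = controllers      # callables: state -> low-level command
            self.q = {}                         # Q-values over (state, controller index)
            self.epsilon, self.alpha, self.gamma = epsilon, alpha, gamma

        def choose(self, state):
            # Epsilon-greedy over the verified controllers, never over raw torques.
            if random.random() < self.epsilon:
                return random.randrange(len(self.controllers))
            return max(range(len(self.controllers)),
                       key=lambda i: self.q.get((state, i), 0.0))

        def update(self, s, i, reward, s_next):
            best_next = max(self.q.get((s_next, j), 0.0)
                            for j in range(len(self.controllers)))
            td = reward + self.gamma * best_next - self.q.get((s, i), 0.0)
            self.q[(s, i)] = self.q.get((s, i), 0.0) + self.alpha * td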

Grid World Themed Movements

● Classical RL approaches

● Discrete states and actions

● Suited to navigational tasks

● Use actions like “move to the cell to the left”

● Use a lower-level controller to take care of accelerating, moving, and stopping while ensuring precision (see the sketch below)
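
One way to realize such a discrete action on a real robot is for the RL layer to pick a neighbouring cell while a lower-level controller handles the continuous motion. The sketch below is hypothetical: it assumes a robot object exposing x, y, set_velocity, and step, and uses a simple proportional controller.

    # Hypothetical "move to the cell to the left" action that delegates
    # accelerating, moving, and stopping to a low-level P-controller.
    CELL_SIZE = 0.5            # metres per grid cell (assumed)
    K_P, TOLERANCE = 1.2, 0.01

    def move_to_cell(robot, dx_cells, dy_cells, dt=0.02):
        """High-level RL action: drive to a neighbouring cell, then stop."""
        tx = robot.x + dx_cells * CELL_SIZE
        ty = robot.y + dy_cells * CELL_SIZE
        while True:
            ex, ey = tx - robot.x, ty - robot.y
            if (ex * ex + ey * ey) ** 0.5 < TOLERANCE:
                robot.set_velocity(0.0, 0.0)        # stop precisely at the cell centre
                return
            robot.set_velocity(K_P * ex, K_P * ey)  # proportional velocity command
            robot.step(dt)

    # The RL agent only ever chooses among actions such as:
    #   move_to_cell(robot, -1, 0)    # "move to the cell to the left"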

Quick Reward Shaping

● Rewards should lead to quick success
  – Real-world experience is costly

● Specifying good reward functions
  – Requires domain knowledge
  – Difficult in practice

● Intermediate rewards instead of a single binary success reward (see the sketch below)
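
As an illustration of intermediate rewards instead of a single binary success reward, the shaped reward below adds a dense distance term; the reaching task, state representation, and scale factor are all hypothetical.

    import math

    # Binary reward: informative only at the moment of success.
    def binary_reward(state, goal, tol=0.01):
        return 1.0 if math.dist(state, goal) < tol else 0.0

    # Shaped reward: intermediate feedback on every step, plus the success bonus,
    # so fewer costly real-world episodes are wasted on uninformative failures.
    def shaped_reward(state, goal, tol=0.01, scale=1.0):
        d = math.dist(state, goal)
        return -scale * d + (1.0 if d < tol else 0.0)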

Tracking Solution

● Used to help convergence

● The dynamics of a robot can change
  – Temperature
  – Wear on gears or motors
  – Other external factors

Building an Accurate Model

● Challenging

● Requires very many data samples

● Under-modeling errors accumulate
  – The simulated robot can quickly diverge from the real-world system

● Transfer requires significant modifications if the model is not accurate

Approximate Models

● Verifying and testing algorithms in simulation

● Establishing proximity to the theoretical optimal solution

● Calculating approximate gradients for local policy improvement

● Identifying strategies for collecting more data

● Performing “mental rehearsal”

Mental Rehearsal

● Practicing in simulation (see the sketch after this list)

● The simulated learning step

● Used after learning a forward model from the real world

● Only the resulting policy is transferred to the robot

● Model-based methods
  – Sample efficient
  – Often require a great deal of memory

Mental Rehearsal Issues

● Simulation biases

● Stochasticity of the real world

● Efficient optimization when sampling from a simulator

Mental Rehearsal Solutions

● Add a stochastic model or a distribution over models to the simulation

● Average results over model uncertainty

● Artificially add noise to the simulation
  – Avoids policy over-fitting
  – Smooths model errors

● Explicitly model uncertainty (see the sketch below)
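
One way to combine these fixes, sketched below with hypothetical model and noise parameters, is to score a candidate policy by averaging returns over rollouts in several sampled models, with artificial noise injected into each simulated transition, rather than trusting a single deterministic simulation.

    import random

    def evaluate_policy(policy, sampled_models, initial_state, horizon=100,
                        noise_std=0.05, rollouts_per_model=5):
        returns = []
        for model in sampled_models:           # e.g. models drawn from a posterior
            for _ in range(rollouts_per_model):
                s, total = initial_state, 0.0
                for _ in range(horizon):
                    a = policy(s)
                    s, r = model.step(s, a)
                    # Artificial noise smooths model errors and discourages the
                    # optimizer from over-fitting the policy to one simulator.
                    s = [x + random.gauss(0.0, noise_std) for x in s]
                    total += r
                returns.append(total)
        return sum(returns) / len(returns)     # average over model uncertainty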

Grounded Simulation Learning

Iterative optimization framework for speeding up robot learning using an imperfect simulator

1. Behavior is optimized in simulation

2. Behavior is tested on the robot and compared to the results expected from the simulation

3. The simulator is modified using a machine-learning approach to come closer to reality (see the sketch below)
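
The three steps can be read as the following loop. This is a hypothetical sketch, not the authors' code: the callables passed in stand for the Fitness Sim, Fitness Robot, Explore Robot, Learn, and Optimize components described on the following slides.

    def grounded_simulation_learning(simulator, robot, behavior_params,
                                     optimize_in_sim, evaluate_in_sim,
                                     evaluate_on_robot, ground_simulator,
                                     n_iterations=5):
        for _ in range(n_iterations):
            # 1. Behavior is optimized in the (imperfect) simulation.
            behavior_params = optimize_in_sim(simulator, behavior_params)

            # 2. Behavior is tested on the robot and compared with what the
            #    simulator predicted for the same parameters.
            real_trace = evaluate_on_robot(robot, behavior_params)
            sim_trace = evaluate_in_sim(simulator, behavior_params)

            # 3. The simulator is modified so that it comes closer to the
            #    real robot's observed behavior.
            simulator = ground_simulator(simulator, sim_trace, real_trace)
        return behavior_params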

GSL: Fitness Sim

● Imperfect simulation of the robot

● Evaluates the parametrized behavior of the robot

● Function must be modifiable

● Used to make the simulation better match the real robot’s behavior

GSL Fitness Robot

● Small number of evaluations

● Evaluates the fitness of the parametrized behavior on the robot itself

GSL Explore Robot

● A small number of explorations can be run on the real robot

● While exploring, collect the states and actions relevant to the current parameterization of the behavior

GSL Learn

● Used to learn a model of the effects of actions on the state of the real robot

● This model is then used to modify Fitness Sim so that it better reflects the behavior of the real robot (see the sketch below)
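
One plausible realization of the Learn step, sketched here with an assumed affine correction fitted by least squares: learn how the real robot's observed next states differ from the simulator's predictions, then wrap Fitness Sim so its output is corrected toward the real behavior.

    import numpy as np

    def learn_correction(sim_next_states, real_next_states):
        """Fit an affine map from simulator predictions to observed real states."""
        X = np.asarray(sim_next_states)             # what Fitness Sim predicted
        Y = np.asarray(real_next_states)            # what the robot actually did
        X1 = np.hstack([X, np.ones((len(X), 1))])   # affine term
        W, *_ = np.linalg.lstsq(X1, Y, rcond=None)
        return lambda s: np.append(s, 1.0) @ W

    def grounded_step(sim_step, correction):
        # Returns a modified simulation step that better reflects the real robot.
        def step(state, action):
            return correction(sim_step(state, action))
        return step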

GSL Optimize

● In simulation

● Optimization to find better parameters

Ball in Cup Real Robot Example

Ball in Cup Real Robot Results

● 42-45 episodes to get the ball in the cup

● 70-80 episodes to be consistent

● Always converged to the maximum after 100 episodes

Simulation in Robot RL

● Simulation matched recorded data very well

● Policies learned only in simulation usually missed on the real robot

● First improve a demonstrated policy in simulation and only perform the fine-tuning on the real robot

● Importance sampler
  – Considers only the n best previous episodes (see the sketch below)
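
A minimal sketch of such a top-n importance sampler, with the actual policy-update rule left abstract; the return-based weighting scheme here is an assumption for illustration.

    import heapq

    def select_best_episodes(episode_history, n=10):
        """episode_history: list of (return, episode_data) tuples."""
        best = heapq.nlargest(n, episode_history, key=lambda e: e[0])
        total = sum(ret for ret, _ in best) or 1.0
        # Keep only the n best previous episodes, weighted by their returns,
        # as the data for the next policy update.
        return [(ret / total, data) for ret, data in best]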

SARSA

● Popular base RL algorithm for robotics

● Compatible with Q-Value Reuse

● The Q-Value Reuse function relies on a mapping between tasks (see the sketch under “Q-Value Reuse” below)

Q-Value Reuse
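
The formula from this slide is not preserved in the transcript. Below is a minimal sketch of Q-Value Reuse in the spirit of Taylor and Stone's transfer work, assuming hand-coded inter-task mappings chi_state and chi_action (hypothetical names): the agent's Q-value is the sum of a frozen source-task estimate, looked up through the mappings, and a target-task estimate that is the only part updated while learning.

    # Q(s, a) = Q_target(s, a) + Q_source(chi_state(s), chi_action(a))
    class QValueReuse:
        def __init__(self, q_source, chi_state, chi_action, alpha=0.1, gamma=0.95):
            self.q_source = q_source        # frozen Q-table from the source task
            self.q_target = {}              # learned in the target task
            self.chi_state, self.chi_action = chi_state, chi_action
            self.alpha, self.gamma = alpha, gamma

        def q(self, s, a):
            transferred = self.q_source.get(
                (self.chi_state(s), self.chi_action(a)), 0.0)
            return self.q_target.get((s, a), 0.0) + transferred

        def sarsa_update(self, s, a, reward, s_next, a_next):
            # Standard SARSA target, but the TD error adjusts only Q_target.
            td = reward + self.gamma * self.q(s_next, a_next) - self.q(s, a)
            self.q_target[(s, a)] = self.q_target.get((s, a), 0.0) + self.alpha * td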

Transfer Methods

● Weak Transfer: Time spent in the source task doesn't count against the learner in the target task

● Strong Transfer: Time spent in the source task does count

Two Step Transfer

● Learned sequentially from multiple source tasks

● The Q-Value Reuse function for two-step transfer
