Action and active inference: A free-energy formulation

Abstract

This presentation questions the need for reinforcement learning and related paradigms from machine learning when trying to optimise the behaviour of an agent. We show that it is fairly simple to teach an agent complicated and adaptive behaviours under a free-energy principle. This principle suggests that agents adjust their internal states and sampling of the environment to minimise their free energy. In this context, free energy represents a bound on the probability of being in a particular state, given the nature of the agent, or more specifically the model of the environment the agent entails. We show that such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. The result is a policy that reproduces exactly the policies optimised by reinforcement learning and dynamic programming. Critically, at no point do we need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming, namely the mountain-car problem, using just the free-energy principle. The ensuing proof of concept is important because the free-energy formulation also provides a principled account of perceptual inference in the brain and furnishes a unified framework for action and perception.
Overview

• The free energy principle and action
• Active inference and prediction error
• Orientation and stabilization
• Intentional movements
• Cued movements
• Goal-directed movements
• Autonomous movements
• Forward and inverse models
Active inference under flat priors (movement with percept)

[Figure: Visual stimulus sampled through sensory channels; time courses of the sensory prediction and error, hidden states (location), cause (perturbing force), and perturbation and action.]
Active inference under tight priors (no movement or percept)

[Figure: Time courses of the sensory prediction and error, hidden states (location), cause (perturbing force), and perturbation and action.]
Retinal stabilisation or tracking induced by priors

[Figure: Visual stimulus under flat versus tight priors; action and the perceived and true perturbation over time, with the real and perceived displacement.]
Active inference under tight priors (movement and percept)

[Figure: Time courses of the sensory prediction and error, cause (prior), and perturbation and action.]
The mountain car problem

Equations of motion:

ẋ = x′
ẋ′ = a + v − G(x) − x′/8

where G = ∂H/∂x is the force exerted by the height H of the landscape:

G = 2x + 1 : x ≤ 0
G = (1 + 5x²)^(−1/2) − 5x²(1 + 5x²)^(−3/2) : x > 0

[Figure: null-clines of the flow in position–velocity space.]
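As a rough illustration, the passive dynamics can be simulated in a few lines of Python. This is a minimal sketch assuming equations of motion of the form ẍ = a + v − G(x) − ẋ/8 with the piecewise gradient G given above; the function names are illustrative, not from the slides:

```python
def height_gradient(x):
    """Piecewise gradient G = dH/dx of the landscape: a quadratic valley
    for x <= 0 and a slope that flattens out for x > 0."""
    if x <= 0:
        return 2.0 * x + 1.0
    s = 1.0 + 5.0 * x ** 2
    return s ** -0.5 - 5.0 * x ** 2 * s ** -1.5

def step(x, xdot, a=0.0, v=0.0, dt=0.01):
    """One Euler step of the assumed dynamics x'' = a + v - G(x) - x'/8."""
    xddot = a + v - height_gradient(x) - xdot / 8.0
    return x + dt * xdot, xdot + dt * xddot

# With no action or perturbation the car cannot reach the hilltop:
# friction draws it to the bottom of the valley, where G(x) = 0 (x = -0.5).
x, xdot = 0.0, 0.0
for _ in range(20_000):
    x, xdot = step(x, xdot)
```

Because the force an agent can exert is bounded, the car must first back away from the target to gain momentum, which is what makes this a standard benchmark for value-based control schemes.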
[Figure: Height and forces as functions of position; the desired location is at x = 1, at the top of the hill.]
Flow and density null-clines

[Figure: Null-clines in position–velocity space for the uncontrolled and controlled flows and the expected (desired) density Q.]
The controlled flow f is optimised by minimising the divergence between the equilibrium density it induces and the desired density Q:

f = arg minf D
D = ∫ p(x|m) ln [p(x|m)/Q(x)] dx
p(x|m) = eig(f)

where the equilibrium density p(x|m) is the principal eigensolution of the Fokker–Planck operator associated with the flow f. Action then minimises free energy: a = arg mina F.
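The density-matching objective can be illustrated numerically. The sketch below assumes one-dimensional Gaussian densities evaluated on a grid; the names (gaussian, cross_entropy_term, p_far, p_near) are illustrative, not from the slides:

```python
import numpy as np

def gaussian(x, mu, sigma):
    """Normal density evaluated on a grid."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def cross_entropy_term(p, q, dx):
    """D = integral of p(x) ln[p(x)/q(x)] dx on a grid (KL divergence)."""
    mask = p > 1e-12          # avoid log(0) where p is numerically zero
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])) * dx)

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
Q = gaussian(x, 1.0, 0.5)        # desired density, peaked at the target
p_far = gaussian(x, -0.5, 0.5)   # equilibrium density of an unadapted flow
p_near = gaussian(x, 0.9, 0.5)   # equilibrium density after adjusting the flow

D_far = cross_entropy_term(p_far, Q, dx)    # large: ensemble far from Q
D_near = cross_entropy_term(p_near, Q, dx)  # small: ensemble close to Q
```

Adjusting the flow so that its equilibrium density approaches Q drives D toward zero; for equal-variance Gaussians D reduces analytically to (μ₁ − μ₂)²/(2σ²).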
Learning in a controlled environment and active inference in an uncontrolled environment

[Figure: For each condition, time courses of the prediction and error, hidden states, causes (level 2), and perturbation and action.]
Using just the free-energy principle and a simple gradient ascent scheme, we have solved a benchmark problem in optimal control theory within a handful of learning trials. At no point did we use reinforcement learning or dynamic programming.
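The core mechanism — action as gradient descent on sensory prediction error, with no reward signal — can be sketched in a toy one-dimensional setting. This is not the paper's simulation; the state, prior and gains below are assumed for illustration:

```python
# Toy 1-D illustration: the agent's prior says its sensed position should
# be mu; action performs gradient descent on the squared sensory prediction
# error, a stand-in for free energy.
mu = 1.0              # prior expectation (desired position) -- assumed value
x, a = -0.5, 0.0      # true hidden state and action
dt, k = 0.01, 4.0     # integration step and gradient gain -- assumed values
for _ in range(5000):
    error = x - mu                  # sensory prediction error (s = x here)
    a += dt * (-k * error - a)      # descend dF/da (ds/da = 1), with decay
    x += dt * a                     # action changes the state it senses
# x is drawn to the prior mu without any reward, value or utility function.
```

The state fulfils the agent's expectation purely because action suppresses prediction error at the sensory level.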
Goal-directed behaviour and trajectories
[Figure: Prediction and error, hidden states, and perturbation and action over time, with the ensuing behaviour shown as trajectories in position–velocity space.]
Action under perturbation

[Figure: Hidden states over time under a transient perturbation lasting 8 s.]
Simulating Parkinson's disease?
[Figure: Trajectories and densities in position–velocity space, before and after learning, compared with the controlled density Q.]
Learning autonomous behaviour
Autonomous behaviour under random perturbations

[Figure: Prediction and error and hidden states over time, the perturbation and action, and the learnt trajectory in position–velocity space.]
• The free energy can be minimised by action (through changes in the states generating sensory input) or by perception (through optimising the predictions of that input).
• The only way action can suppress free energy is by reducing prediction error at the sensory level (speaking to a juxtaposition of motor and sensory systems).
• Action fulfils expectations, which can manifest as an explaining away of prediction error through resampling sensory input (e.g., visual tracking), or as intentional movement fulfilling expectations furnished by empirical priors.
• In an optimal control setting, a training environment can be constructed by minimising the cross-entropy between the ensemble density and some desired density; this can be learnt and reproduced under active inference.