Workshop on Mathematical Models of Cognitive Architectures
December 5-9, 2011, CIRM, Marseille
Abstract
This presentation will look at action, perception and cognition
as emergent phenomena under a unifying perspective: This
Helmholtzian perspective regards the brain as a (generative) model
of its environment. The imperative for any brain is then to
optimize a free energy bound on the (Bayesian) evidence for its
model of the world. We will see that this is not just mandated for
the brain but for any self-organizing system that resists a natural
tendency to disorder in a changing environment. More specifically,
maximizing Bayesian evidence leads in a fairly straightforward way
to an understanding of action as active inference, and perception
in terms of predictive coding. I hope to illustrate these points
using simulations of perceptual categorization and action
observation.
Active inference, free energy and the Bayesian brain
Karl Friston, University College London
"Objects are always imagined as being present in the field of vision as would have to be there in order to produce the same impression on the nervous mechanism." - Hermann Ludwig Ferdinand von Helmholtz
From the Helmholtz machine to the Bayesian brain and self-organization
Thomas Bayes, Geoffrey Hinton, Richard Feynman, Hermann Haken, Richard Gregory, Gerry Edelman, Stephen Grossberg
Overview
Ensemble dynamics: entropy and equilibria; free-energy and surprise
Free-energy principle: action and perception; hierarchies and generative models
Perception: birdsong and categorization; simulated lesions
Action: active inference; action observation
What is the difference between a snowflake and a bird?
[Figure: phase-boundary as a function of temperature]
A bird can move (to avoid surprises).
What is the difference between snowfall and a flock of birds?
Ensemble dynamics, clumping and swarming
Birds (biological agents) stay in the same place. They resist the second law of thermodynamics, which says that their entropy should increase.
This means biological agents must self-organize to minimize surprise - to ensure they occupy a limited number of states (cf. homeostasis).
But what is entropy?
Entropy is just average surprise.
[Figure: a probability density over states, with low surprise where we usually are and high surprise where we never are]
But there is a small problem: agents cannot measure their surprise. They can, however, measure their free-energy, which is always bigger than surprise.
This means agents can minimize surprise by minimizing their free-energy.
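In a standard formulation (the notation here is an assumption: $s$ for sensations, $\vartheta$ for hidden causes, $m$ for the agent's model, $q$ for the proposal density), average surprise and the free-energy bound can be written:

```latex
\begin{aligned}
H &= \mathbb{E}_{p(s\mid m)}\!\left[-\ln p(s\mid m)\right]
  && \text{(entropy = average surprise)} \\
F(s,q) &= -\ln p(s\mid m)
  + D_{\mathrm{KL}}\!\left[\,q(\vartheta)\,\Vert\,p(\vartheta\mid s,m)\,\right]
  \;\geq\; -\ln p(s\mid m)
  && \text{(free-energy bounds surprise)}
\end{aligned}
```

The divergence term is non-negative, so minimizing $F$ with respect to $q$ tightens the bound (perception), while minimizing $F$ by changing sensations suppresses surprise itself (action).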
What is free-energy? Free-energy is basically prediction error.
[Figure: sensations are compared with predictions to form a prediction error; action changes sensory input, perception changes predictions]
Action and perception work to suppress prediction errors and minimise surprise:
Action minimises a bound on surprise; perception optimises the bound.
[Figure: action and sensations coupling external states in the world to internal states of the agent (m)]
More formally, free-energy is a function of sensations and a proposal density over hidden causes, and can be evaluated given a generative model comprising a likelihood and prior.
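Both statements can be written in one standard formulation (the symbols are assumptions consistent with the free-energy literature, with $q$ the proposal density over hidden causes $\vartheta$):

```latex
F(s,q) = \mathbb{E}_{q(\vartheta)}\!\left[-\ln p(s,\vartheta\mid m)\right]
  - H\!\left[q(\vartheta)\right],
\qquad
p(s,\vartheta\mid m) = p(s\mid \vartheta, m)\,p(\vartheta\mid m)
```

Every term on the right-hand side is available to the agent: the expectation is under its own proposal density, and the joint density is supplied by its generative model, so free-energy, unlike surprise, can actually be evaluated.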
So what models might the brain use?
Hierarchical models in the brain, and their hidden states, causes and parameters
[Figure: external states in the world and internal states of the agent (m), coupled by sensations and action; a cortical hierarchy with backward (modulatory), forward (driving) and lateral connections]
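Such hierarchical dynamic models are often written in the following standard form (the notation is an assumption: $v^{(i)}$ causal states, $x^{(i)}$ hidden states, $z^{(i)}, w^{(i)}$ random fluctuations at each level $i$):

```latex
\begin{aligned}
s &= g^{(1)}\!\left(x^{(1)}, v^{(1)}\right) + z^{(1)} \\
\dot{x}^{(i)} &= f^{(i)}\!\left(x^{(i)}, v^{(i)}\right) + w^{(i)} \\
v^{(i-1)} &= g^{(i)}\!\left(x^{(i)}, v^{(i)}\right) + z^{(i)}
\end{aligned}
```

Causal states link levels of the hierarchy, while hidden states give each level its own dynamics (memory); this is what lets such models generate sequences of sequences.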
The proposal density and its sufficient statistics (Laplace approximation):
Synaptic activity: perception and inference (cf. Bayesian filtering or predictive coding)
Synaptic efficacy: learning and memory, via activity-dependent plasticity and functional specialization (cf. Hebb's law, Rescorla-Wagner)
Synaptic gain: attention and salience, via attentional gain and the enabling of plasticity
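Under the Laplace approximation the proposal density is Gaussian, so its sufficient statistics reduce to a mean and a covariance:

```latex
q(\vartheta) = \mathcal{N}\!\left(\mu, \Sigma\right)
```

On this reading, neuronal quantities (activity, efficacy, gain) encode these sufficient statistics at different time-scales.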
Synaptic activity and message-passing
[Figure: free-energy minimisation under the Laplace code assumption, given a generative model: backward predictions and forward prediction errors]
Predictive coding (David Mumford)
Perceptual inference by hierarchical message passing in the brain: backward connections return predictions, and forward connections convey prediction errors, which feed back to adjust hypotheses about sensory input.
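This message passing can be caricatured in a few lines. Below is a minimal, scalar sketch (all names and numbers are illustrative assumptions, not Friston's implementation): the generative model is s = w * v with a prior expectation on the cause v, and perception runs a gradient descent that balances the forward (sensory) prediction error against the error on the cause.

```python
# Minimal scalar predictive-coding sketch (illustrative assumptions only).
# Generative model: s = w * v, with prior expectation v_p on the cause v.
w = 2.0          # likelihood mapping (assumed known)
v_true = 1.5     # hidden cause that generated the sensation
s = w * v_true   # sensation (noiseless, for clarity)
v_p = 0.0        # prior expectation on the cause

mu = v_p         # posterior expectation ("synaptic activity")
lr = 0.05        # integration step
for _ in range(2000):
    eps_s = s - w * mu              # forward (sensory) prediction error
    eps_v = mu - v_p                # prediction error on the cause
    mu += lr * (w * eps_s - eps_v)  # perception: adjust the prediction

# Fixed point: mu = w * s / (w**2 + 1) = 1.2, i.e. the sensory
# evidence shrunk towards the prior expectation
print(mu)
```

At the fixed point the posterior expectation is the sensory evidence shrunk towards the prior, which is exactly what suppressing both prediction errors at once buys you.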
Summary
Biological agents resist the second law of thermodynamics
They must minimize their average surprise (entropy)
They minimize surprise by suppressing prediction error
(free-energy)
Prediction error can be reduced by changing predictions
(perception)
Prediction error can be reduced by changing sensations
(action)
Perception entails recurrent message passing in the brain to
optimise predictions
Action makes predictions come true (and minimises surprise)

Overview
Ensemble dynamics: entropy and equilibria; free-energy and surprise
Free-energy principle: action and perception; hierarchies and generative models
Perception: birdsong and categorization; simulated lesions
Action: active inference; action observation
Generating bird songs with attractors
[Figure: an attractor in HVC driving a syrinx; sonogram of frequency against time (0.5 to 1.5 sec)]
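A toy version of an attractor-based song generator can be sketched as follows. The parameters, initial conditions and the mapping from hidden states to syrinx frequency are all assumptions for illustration; the point is only that low-dimensional attractor dynamics yield an extended, quasi-periodic sensory trajectory (a "song").

```python
# Minimal sketch: Lotka-Volterra dynamics as hidden attractor states
# driving a (hypothetical) syrinx frequency. All parameters are
# illustrative assumptions.
a, b, c, d = 1.0, 1.0, 1.0, 1.0   # classic predator-prey parameters
x, y = 1.5, 0.5                    # hidden states, off equilibrium
dt = 0.001                         # Euler integration step
freqs = []
for step in range(20000):
    dx = x * (a - b * y)           # Lotka-Volterra dynamics
    dy = y * (c * x - d)
    x += dt * dx
    y += dt * dy
    if step % 1000 == 0:
        # map a hidden state to a hypothetical syrinx frequency (Hz)
        freqs.append(2000.0 + 1500.0 * x)

print(freqs)   # a quasi-periodic sequence of frequencies
```

Because the trajectory orbits the attractor rather than settling, the emitted frequency keeps rising and falling, giving the song its repeating structure.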
Perception and message passing
[Figure: time courses of causal and hidden states, with predictions and prediction error (backward predictions, forward prediction error); stimulus sonogram, 2000-5000 Hz over 0.2-0.8 seconds]
Perceptual categorization
[Figure: sonograms (frequency in Hz against time in seconds) of three songs: a, b and c]
Hierarchical (deep) birdsong: sequences of sequences
[Figure: a neuronal hierarchy driving a syrinx; sonogram of frequency (kHz) against time (0.5 to 1.5 sec)]
Christoph von der Malsburg
Simulated lesions and false inference
[Figure: sonograms of the percept, with no top-down messages (no structural priors) and with no lateral messages (no dynamical priors), and the corresponding LFPs (micro-volts) over peristimulus time (ms)]

Overview
Ensemble dynamics: entropy and equilibria; free-energy and surprise
Free-energy principle: action and perception; hierarchies and generative models
Perception: birdsong and categorization; simulated lesions
Action: active inference; action observation
Active inference: from reflexes to action
[Figure: descending predictions meet sensory signals in a spinal reflex arc (dorsal root, ventral horn); action suppresses the sensory error]
Action can only suppress (sensory) prediction error. This means action fulfils our (sensory) predictions.
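The claim that action can only suppress sensory prediction error invites a one-line simulation. In this minimal sketch (everything here is an illustrative assumption: a single proprioceptive channel and a descending prediction of limb position), action descends the gradient of the sensory error, and the world, hence the sensation, is dragged to the predicted state.

```python
# Minimal active-inference sketch (illustrative assumptions only):
# action changes the sensation so as to fulfil a descending prediction.
s_pred = 1.0    # descending prediction (e.g. predicted limb position)
x = 0.0         # true state of the world (actual limb position)
dt = 0.01       # integration step
for _ in range(2000):
    s = x                  # sensation reports the true state
    eps_s = s - s_pred     # sensory prediction error (the reflex signal)
    a = -eps_s             # action descends the sensory error gradient
    x += dt * a            # acting changes the world, hence the sensation

print(x)   # the limb ends up where it was predicted to be
```

No inverse model is needed here: the reflex arc simply cancels sensory prediction error, so the prediction is fulfilled and surprise is minimised.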
Action, predictions and priors
[Figure: descending proprioceptive and exteroceptive predictions, with visual and proprioceptive input]
Autonomous behavior and action-observation
[Figure: trajectories of position (x, y) under action and under observation, generated by descending predictions from hidden attractor states (Lotka-Volterra)]
Thank you
And thanks to collaborators:
Rick Adams, Sven Bestmann, Jean Daunizeau, Harriet Brown, Lee Harrison, Stefan Kiebel, James Kilner, Jérémie Mattout, Klaas Stephan
And colleagues:
Peter Dayan, Jörn Diedrichsen, Paul Verschure, Florentin Wörgötter
And many others
Free-energy minimisation over different time-scales, leading to:

Perception and action: the optimisation of neuronal and neuromuscular activity to suppress prediction errors (or free-energy), based on generative models of sensory data.

Learning and attention: the optimisation of synaptic gain and efficacy, over seconds to hours, to encode the precisions of prediction errors and causal structure in the sensorium. This entails suppression of free-energy over time.

Neurodevelopment: model optimisation through activity-dependent pruning and maintenance of neuronal connections that are specified epigenetically.

Evolution: optimisation of the average free-energy (free-fitness) over time and individuals of a given class (e.g., conspecifics) by selective pressure on the epigenetic specification of their generative models.