Review
Hopefully, the exercise with the coin-tossing example has illustrated how the process of learning
can be modelled by sequential application of Bayes' Theorem.
This approach has the advantage that it tells us how uncertain the participant should be after each
observation. We might expect this uncertainty to predict behavioural measures such as:
- Reaction time
- Response confidence (e.g. how much they are willing to gamble on the next trial)
Interestingly, uncertainty also predicts how much the observer should update their beliefs on each trial.
Note that in the example above, the posterior moves around a lot in the first few trials, but later on
it moves less with each new observation. This is because the prior becomes more and more precise
as it comes to be based on more and more observations, as we saw in the lecture:
a case of precision weighting - remember that one?
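To make the shrinking update concrete, here is a minimal sketch of sequential Beta-Bernoulli updating for a coin-tossing setup. The flat Beta(1, 1) prior, the 70% heads bias, and the 20 simulated tosses are illustrative assumptions, not values from the exercise; the point is simply that the trial-by-trial shift in the posterior mean gets smaller as the prior accumulates precision.

```python
import numpy as np

# Sketch: sequential Bayesian updating of a Beta prior over P(heads).
# As observations accumulate, the prior pseudo-counts (a, b) grow, the
# prior becomes more precise, and each new toss moves the posterior less.

rng = np.random.default_rng(0)
true_p_heads = 0.7                       # hypothetical coin bias (assumed)
tosses = rng.random(20) < true_p_heads   # simulated sequence of tosses

a, b = 1.0, 1.0                          # Beta(1, 1) flat prior pseudo-counts
prev_mean = a / (a + b)
for t, outcome in enumerate(tosses, start=1):
    heads = bool(outcome)
    a += heads                           # accumulated heads (plus prior count)
    b += not heads                       # accumulated tails (plus prior count)
    mean = a / (a + b)                   # posterior mean after this toss
    print(f"trial {t:2d}: posterior mean = {mean:.3f}, "
          f"shift = {abs(mean - prev_mean):.3f}")
    prev_mean = mean
```

Running this, the `shift` column starts relatively large and then dwindles: the same precision weighting described above, with the effective learning rate falling as the prior tightens.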
In the next section, we will see how a Bayesian learning model, like the one we just used
in the coin-tossing example, can be used to model learning in a one-armed bandit task.