Models of Learning

Developed by
Jill O'Reilly
Hanneke den Ouden
August 2015

## Statistical inference on the model parameters

Let's end with a little bit of science and ask the question:

Do subjects adapt their learning rate or softmax decision parameter based on whether they are in a volatile or stable environment?

For this, we can simply use MATLAB's Statistics Toolbox: a two-sample t-test (`ttest2`) or, if the parameters are not normally distributed, a Mann-Whitney U test (`ranksum`).

We will use the variable `fitted` that was output by MATLAB in the previous step.

For the sake of time we will not go over testing the data for normality, which you should normally do. Instead, we will run both the parametric and nonparametric statistics and check whether they are consistent, keeping in mind that the nonparametric test is more stringent.
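If you do want to run a normality check, a minimal sketch using MATLAB's `lillietest` (Lilliefors test, Statistics Toolbox) might look like this; the choice of test and the 5% level are just one common option, not the only one:

```matlab
% Lilliefors test for normality on the fitted learning rates and
% softmax temperatures; h = 1 means normality is rejected at the 5%
% level, suggesting the nonparametric test is the safer choice.
h_alpha = lillietest(fitted.alpha.ev(1,:));
h_beta  = lillietest(fitted.beta.ev(1,:));
```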

```matlab
[h, p(1)] = ttest2(fitted.alpha.ev(1,:), fitted.alpha.ev(2,:));
[p(2), h] = ranksum(fitted.alpha.ev(1,:), fitted.alpha.ev(2,:));
[h, p(3)] = ttest2(fitted.beta.ev(1,:), fitted.beta.ev(2,:));
[p(4), h] = ranksum(fitted.beta.ev(1,:), fitted.beta.ev(2,:));
```

Here `p` contains the p-values testing for differences in the posterior expected values of:

• the learning rate (parametric `p(1)`; nonparametric `p(2)`)
• the softmax temperature (parametric `p(3)`; nonparametric `p(4)`)

#### Note on the scope of this tutorial

The aim of this tutorial is to give you an introduction to the basics of using reinforcement learning models to understand behaviour: what the most basic model looks like mathematically; how to specify this model in MATLAB; how to run simulations; how to estimate parameters; and how to visualise results. This section on inference on model parameters is purposefully brief. You can use the estimated parameters in the same way you would use any other summary statistic of your data, and analyse them in your favourite statistical package (R, SPSS, SAS) and statistical framework (frequentist, Bayesian).
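For example, the fitted parameters could be exported to a CSV file for analysis elsewhere. A minimal sketch using `writetable`; the file name and column labels are illustrative, and the assignment of row 1 to the stable and row 2 to the volatile condition is an assumption you should match to your own data:

```matlab
% Collect the posterior expected values in a table, one row per subject,
% with separate columns for the two conditions (assumed: row 1 = stable,
% row 2 = volatile).
T = table(fitted.alpha.ev(1,:)', fitted.alpha.ev(2,:)', ...
          fitted.beta.ev(1,:)',  fitted.beta.ev(2,:)', ...
          'VariableNames', {'alpha_stable', 'alpha_volatile', ...
                            'beta_stable', 'beta_volatile'});
writetable(T, 'fitted_parameters.csv');  % illustrative file name
```

The resulting CSV can then be read directly into R, SPSS, or SAS.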

In addition, I hope in the future to find time to extend this tutorial with a section on Bayesian model comparison, which allows us to test hypotheses not about the parameters but about the models themselves: assessing which of a set of models is best, taking into account both their complexity and their ability to explain the data.