Models of Learning

Developed by
Jill O'Reilly
Hanneke den Ouden
August 2015

## Summary values for α and β

The distributions over α and β give us a full picture of these parameters: for each binned value, the probability that that parameter value generated the observed data.
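As a concrete illustration of where such a distribution comes from, here is a minimal Python sketch (the tutorial itself runs in MATLAB; the grid, initial values, and toy data below are assumptions, not the tutorial's code): the likelihood of the observed choices is evaluated on a grid of candidate (α, β) values under a Rescorla–Wagner + softmax model, then normalized into a distribution.

```python
import numpy as np

def choice_likelihood(choices, rewards, alpha, beta):
    """Likelihood of a choice sequence under Rescorla-Wagner + softmax."""
    v = np.array([0.5, 0.5])  # initial option values (an assumption)
    logL = 0.0
    for c, r in zip(choices, rewards):
        p0 = 1.0 / (1.0 + np.exp(-beta * (v[0] - v[1])))  # P(choose option 0)
        logL += np.log(p0 if c == 0 else 1.0 - p0)
        v[c] += alpha * (r - v[c])  # prediction-error update
    return np.exp(logL)

# Grid of candidate parameter values (bin centres are illustrative)
alphas = np.linspace(0.05, 0.95, 19)
betas = np.linspace(0.5, 10.0, 20)

choices = [0, 0, 1, 0, 1, 0, 0, 1]  # toy data
rewards = [1, 1, 0, 1, 0, 1, 1, 0]

L = np.array([[choice_likelihood(choices, rewards, a, b) for b in betas]
              for a in alphas])
posterior = L / L.sum()          # joint distribution over (alpha, beta)
p_alpha = posterior.sum(axis=1)  # marginal distribution over alpha
p_beta = posterior.sum(axis=0)   # marginal distribution over beta
```

Each entry of `p_alpha` (or `p_beta`) is then the probability, for that bin, that the parameter value generated the data.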

However, for our analysis we usually want a single estimate for α and β.

How would you go about obtaining this from the distributions plotted on the previous page?

?
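Two standard summaries are worth contrasting here (a Python sketch; `values` and `p` stand in for any binned marginal distribution like those plotted on the previous page): the maximum likelihood (ML) estimate takes the bin with the highest probability, while the expected value (EV) estimate takes the probability-weighted mean of the bins.

```python
import numpy as np

values = np.linspace(0.05, 0.95, 19)         # bin centres (illustrative)
p = np.exp(-((values - 0.35) ** 2) / 0.02)   # toy distribution over alpha
p /= p.sum()                                 # normalize to a probability mass

ml_estimate = values[np.argmax(p)]  # ML: the mode of the distribution
ev_estimate = np.sum(values * p)    # EV: the mean of the distribution
```

For a symmetric, single-peaked distribution the two agree; for a skewed or multi-peaked one they can differ noticeably, which is why it is worth comparing them below.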

Let's have a look at how well this model is performing. In the original trial-by-trial data plot, we have now added the estimated choice probabilities (in turquoise).

?

From this plot, it is hard to quantify how well the estimated probabilities match the simulated choice probabilities. We will next take the original, simulated choice probabilities and correlate them with the choice probabilities obtained from both the maximum likelihood (ML) and expected value (EV) estimates:

?
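Such a comparison boils down to a correlation coefficient per estimator; a Python sketch (the probability arrays here are toy stand-ins for the real simulated and estimated trial-by-trial choice probabilities):

```python
import numpy as np

# Toy trial-by-trial choice probabilities (stand-ins for real model output)
p_simulated = np.array([0.6, 0.7, 0.55, 0.8, 0.4, 0.3, 0.65, 0.75])
p_ml = p_simulated + np.random.default_rng(0).normal(0, 0.05, 8)  # ML-based
p_ev = p_simulated + np.random.default_rng(1).normal(0, 0.05, 8)  # EV-based

r_ml = np.corrcoef(p_simulated, p_ml)[0, 1]  # correlation for ML estimate
r_ev = np.corrcoef(p_simulated, p_ev)[0, 1]  # correlation for EV estimate
```

The estimator whose predicted probabilities correlate more strongly with the simulated ones is tracking the generating model more closely.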

To quantify whether the ML or the EV estimate is doing better, you would have to run many simulations over a range of parameter values and compare how much the ML- and EV-predicted choice probabilities deviate from the simulated ones.

Below, we will do something slightly different and look at how good both the ML and the EV estimates are at recovering the parameter values for α and β across a large number of iterations.
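Such a parameter-recovery loop can be sketched in Python (the tutorial itself runs this in MATLAB; the true parameters, grid, reward probabilities, and iteration count below are illustrative assumptions): simulate an agent with known parameters, fit the grid distribution to its choices, and record the ML and EV estimates on each iteration.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_subject(alpha, beta, n_trials=60, p_reward=(0.8, 0.2)):
    """Simulate choices and rewards from a Rescorla-Wagner + softmax agent."""
    v = np.array([0.5, 0.5])
    choices, rewards = [], []
    for _ in range(n_trials):
        p0 = 1.0 / (1.0 + np.exp(-beta * (v[0] - v[1])))
        c = 0 if rng.random() < p0 else 1
        r = 1 if rng.random() < p_reward[c] else 0
        v[c] += alpha * (r - v[c])
        choices.append(c)
        rewards.append(r)
    return choices, rewards

def grid_posterior(choices, rewards, alphas, betas):
    """Normalized likelihood of the data on an (alpha, beta) grid."""
    logL = np.zeros((len(alphas), len(betas)))
    for i, a in enumerate(alphas):
        for j, b in enumerate(betas):
            v = np.array([0.5, 0.5])
            for c, r in zip(choices, rewards):
                p0 = 1.0 / (1.0 + np.exp(-b * (v[0] - v[1])))
                logL[i, j] += np.log(p0 if c == 0 else 1.0 - p0)
                v[c] += a * (r - v[c])
    L = np.exp(logL - logL.max())  # subtract max for numerical stability
    return L / L.sum()

alphas = np.linspace(0.05, 0.95, 10)
betas = np.linspace(1.0, 10.0, 10)
true_alpha, true_beta = 0.3, 4.0

ml_a, ev_a = [], []
for _ in range(10):  # a handful of iterations; a real run would use many more
    ch, rw = simulate_subject(true_alpha, true_beta)
    p_alpha = grid_posterior(ch, rw, alphas, betas).sum(axis=1)
    ml_a.append(alphas[np.argmax(p_alpha)])  # ML recovery of alpha
    ev_a.append(np.sum(alphas * p_alpha))    # EV recovery of alpha

print(f"true alpha = {true_alpha}, "
      f"mean ML = {np.mean(ml_a):.2f}, mean EV = {np.mean(ev_a):.2f}")
```

Comparing the spread of the recovered estimates around the true value, for each estimator, is exactly the exercise the following figures illustrate.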

Because estimating the parameters takes a bit longer than just generating data, we'll go for a slightly smaller number of subjects: let's do 20 for each of the two conditions.

Set the following....

subjects = 1:40;
simulate = true;
fitData = true;
plotIndividual = false;

...save, and run!

You now see the probability density functions (PDFs) for each of the parameters across all subjects, separately for each condition.

?

We can also view these aggregated across all subjects.

?
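One simple way to aggregate per-subject distributions (a Python sketch; whether to average the individual PDFs, as shown here, or to multiply the likelihoods into a group posterior is a modelling choice, and the toy PDFs below are assumptions):

```python
import numpy as np

# Toy per-subject PDFs over alpha: rows = subjects, columns = bins
values = np.linspace(0.05, 0.95, 19)
rng = np.random.default_rng(0)
subject_pdfs = rng.random((20, 19))
subject_pdfs /= subject_pdfs.sum(axis=1, keepdims=True)  # normalize each row

group_pdf = subject_pdfs.mean(axis=0)  # average of the individual PDFs
```

Averaging preserves between-subject variability in the group curve, whereas multiplying likelihoods would instead sharpen the distribution around a single group-level value.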

What conclusions can we draw from these plots?

?