Models of Learning

Developed by
Jill O'Reilly
Hanneke den Ouden
August 2015

Bayes' Theorem and sequential learning

Say that instead of revealing the sequence of coin tosses all at once, we reveal the tosses one at a time.
How should the observer's beliefs about q evolve as each new coin toss in the sequence is observed?

We can model the observer's evolving beliefs using Bayes' Theorem.
Let the sequence of coin tosses be denoted y1, y2, y3, ..., yi.
Then:

p(q|y1:i) ∝ p(yi|q) p(q|y1:i-1)

Where:

  • p(q|y1:i) is the posterior probability of some value of q given all the observed data on trials 1-i
  • p(yi|q) is the likelihood function for some value of q given the most recent coin toss
  • p(q|y1:i-1) is the prior probability of some value of q given all the coin tosses before the most recent one
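
In code, this update is a pointwise multiplication over a grid of candidate values of q, followed by normalisation. Here is a minimal Matlab sketch of a single update step (the grid resolution and variable names are our own choices, not taken from the tutorial scripts):

    q     = 0:0.01:1;             % grid of candidate values of q
    prior = ones(size(q));        % p(q|y1:i-1); flat if we know nothing yet
    prior = prior ./ sum(prior);  % normalise so the grid sums to 1

    y = 1;                        % most recent toss yi: 1 = heads, 0 = tails
    if y == 1
        lik = q;                  % p(yi|q) = q for a head
    else
        lik = 1 - q;              % p(yi|q) = 1-q for a tail
    end

    posterior = lik .* prior;                 % p(q|y1:i) ∝ p(yi|q) p(q|y1:i-1)
    posterior = posterior ./ sum(posterior);  % normalisation replaces the ∝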

Let's walk this through:

  • On trial 1, we have no prior knowledge about the value of q, so the posterior probability of each candidate value of q is given by the likelihood function alone.
    • Say I observe a head on trial 1. What would be the likelihood that the true value of q is 0.8?
      (remember: q is the probability that the coin comes up heads)
      ?
    • Now say I observe a tail on trial 1. What would be the likelihood that the true value of q is 0.8?
      ?
    • Can you work out what the likelihood function over all candidate values of q from 0 to 1 should look like, given that I observed a head?
      ?
    • Can you work out what the likelihood function over all candidate values of q from 0 to 1 should look like, given that I observed a tail?
      ?
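
    To check your answers to the four questions above, you can evaluate the likelihood function on a grid (a sketch; the grid and variable names are our own choices):

      q = 0:0.01:1;                % candidate values of q
      lik_H = q;                   % likelihood of each q given a head: p(H|q) = q
      lik_T = 1 - q;               % likelihood of each q given a tail: p(T|q) = 1-q
      lik_H(abs(q - 0.8) < 1e-9)   % likelihood that q = 0.8 after a head
      plot(q, lik_H, q, lik_T);    % the two likelihood functions over q
      legend('after a head', 'after a tail');
      xlabel('q'); ylabel('likelihood');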
  • On trial 2, we can again calculate the likelihood function p(y2|q), but we also want to take into account our prior knowledge p(q|y1:i-1)=p(q|y1) - ie the posterior from trial 1 (which, since we started with no prior knowledge, is just the normalised likelihood function from trial 1).
    • For a given candidate value of q, the posterior is proportional to p(y2|q)*p(q|y1).
      Say I observe the sequence HT. What is the posterior probability that q=0.6?
      ?
    • If we perform the calculations above for every candidate value of q between 0 and 1, we get a distribution that expresses the posterior probability of each value of q.
      Can you work out what this would look like for the sequences HH, TT and HT?
      ?
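
    The same grid check works for two tosses: multiply the two likelihoods (again a sketch; normalisation only changes the scale, not the shape):

      q = 0:0.01:1;
      post_HH = q .^ 2;          % ∝ p(H|q) * p(H|q)
      post_TT = (1 - q) .^ 2;    % ∝ p(T|q) * p(T|q)
      post_HT = q .* (1 - q);    % ∝ p(T|q) * p(H|q)
      plot(q, post_HH, q, post_TT, q, post_HT);
      legend('HH', 'TT', 'HT');
      % for HT, the unnormalised value at q = 0.6 is 0.6 * 0.4 = 0.24;
      % dividing by sum(post_HT) turns it into a posterior probability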
  • On trial 3, we again have a likelihood function p(y3|q), and a prior.
    The prior p(q|y1:i-1)=p(q|y1:2) needs to capture the information from trials 1 and 2. But the posterior from trial 2 is exactly that: when i=2, the posterior p(q|y1:i)=p(q|y1:2) becomes the prior for trial 3.
    • Section 3 of the Matlab script UncertaintyTutorial1.m works out the posterior probability for each value of q after each of a series of coin tosses.
      Run this section of the script and have a look at how the posterior evolves (a stand-alone sketch of the same computation follows this list).
      ?
    • Try altering the sequence of coin tosses and observe how the posterior then evolves.
      In particular:
      • If you change the order of the Hs and Ts but not the number of each, does the posterior on the last trial change?
      • What if you increase the number of coin tosses but keep the proportion of Hs and Ts the same?
        How does the posterior on the last trial differ if you have 15 coin tosses instead of 5?
        What about the uncertainty?
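
If you do not have the scripts to hand, the loop below captures the same idea (a sketch of our own, not the actual UncertaintyTutorial1.m code): it applies the update rule once per toss and plots the posterior after every trial, so you can try the manipulations above by editing the tosses vector.

    tosses = [1 0 1 1 0];                % 1 = heads, 0 = tails; edit freely
    q      = 0:0.01:1;                   % grid of candidate values of q
    post   = ones(size(q)) ./ numel(q);  % flat prior before the first toss

    figure; hold on
    for i = 1:numel(tosses)
        if tosses(i) == 1
            lik = q;                     % p(yi|q) for a head
        else
            lik = 1 - q;                 % p(yi|q) for a tail
        end
        post = lik .* post;              % p(q|y1:i) ∝ p(yi|q) p(q|y1:i-1)
        post = post ./ sum(post);        % normalise
        plot(q, post);                   % posterior after toss i
    end
    xlabel('q'); ylabel('p(q | y_{1:i})');

Because each update is just a multiplication followed by normalisation, yesterday's posterior can be reused directly as today's prior, which is what makes the sequential scheme so cheap.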
