Exercise 4.14 - MAP estimation for 1d Gaussians

Answers

Assume that the variance $\sigma^2$ of this distribution is known, and that the mean $\mu$ follows a normal prior with mean $m$ and variance $s^2$. Similar to the previous question, the posterior takes the form:

$$p(\mu \mid X) \propto p(\mu)\, p(X \mid \mu).$$

So the posterior is another normal distribution. Comparing the coefficient of $\mu^2$ in the exponent:

$$-\left(\frac{1}{2s^2} + \frac{N}{2\sigma^2}\right),$$

and that for $\mu$:

$$\frac{m}{s^2} + \frac{\sum_{n=1}^{N} x_n}{\sigma^2},$$

we obtain the posterior mean and variance by completing the square. Matching the $\mu^2$ coefficients gives $\frac{1}{\sigma_{\text{post}}^2} = \frac{1}{s^2} + \frac{N}{\sigma^2}$, so:

$$\sigma_{\text{post}}^2 = \frac{s^2 \sigma^2}{\sigma^2 + N s^2},$$

$$\mu_{\text{post}} = \left( \frac{m}{s^2} + \frac{\sum_{n=1}^{N} x_n}{\sigma^2} \right) \sigma_{\text{post}}^2.$$

This finishes question (a).
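As a quick numerical sanity check of these closed-form expressions (a sketch only, with hypothetical values for $\sigma^2$, $m$, $s^2$, and the data), one can compare them against a brute-force evaluation of prior $\times$ likelihood on a dense grid of $\mu$ values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical settings: known noise variance sigma^2, prior N(m, s^2), N draws.
sigma2, m, s2, N = 4.0, 1.0, 2.0, 10
x = rng.normal(3.0, np.sqrt(sigma2), size=N)

# Closed-form posterior from completing the square.
sigma2_post = s2 * sigma2 / (sigma2 + N * s2)
mu_post = (m / s2 + x.sum() / sigma2) * sigma2_post

# Numerical check: evaluate log(prior * likelihood) on a dense grid of mu.
mu = np.linspace(-10, 10, 200_001)
log_post = -(mu - m) ** 2 / (2 * s2) - ((x[:, None] - mu) ** 2).sum(0) / (2 * sigma2)
w = np.exp(log_post - log_post.max())
w /= w.sum()
mu_num = (w * mu).sum()
var_num = (w * (mu - mu_num) ** 2).sum()

print(mu_post, sigma2_post)
print(mu_num, var_num)  # should agree up to grid resolution
```

The grid-based mean and variance should match the closed-form values to within the grid spacing.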

For question (b), we already know that the MLE is:

$$\mu_{\text{MLE}} = \frac{\sum_{n=1}^{N} x_n}{N}.$$

As $N \to \infty$, we have:

$$\lim_{N \to \infty} \mu_{\text{post}} = \lim_{N \to \infty} \frac{\frac{\sigma^2}{s^2} m + \sum_{n=1}^{N} x_n}{\frac{\sigma^2}{s^2} + N} = \lim_{N \to \infty} \frac{\sum_{n=1}^{N} x_n}{N} = \lim_{N \to \infty} \mu_{\text{MLE}}.$$
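This convergence is easy to see numerically. The sketch below (with made-up values for $\sigma^2$, $m$, and $s^2$) tracks the gap between $\mu_{\text{post}}$ and $\mu_{\text{MLE}}$ as $N$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical settings: known noise variance sigma^2 and prior N(m, s^2).
sigma2, m, s2 = 1.0, 0.0, 0.5
x = rng.normal(2.0, 1.0, size=100_000)

gaps = []
for N in (10, 1_000, 100_000):
    xs = x[:N]
    mu_mle = xs.mean()
    # mu_post written using the ratio sigma^2 / s^2, as in the limit above.
    mu_post = ((sigma2 / s2) * m + xs.sum()) / (sigma2 / s2 + N)
    gaps.append(abs(mu_post - mu_mle))
    print(N, gaps[-1])
```

The printed gap shrinks roughly like $1/N$, since $\mu_{\text{post}} - \mu_{\text{MLE}} = \frac{\sigma^2/s^2}{\,\sigma^2/s^2 + N\,}(m - \bar{x})$.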

For question (c), when $s^2 \to \infty$, $\mu_{\text{post}}$ also converges to $\mu_{\text{MLE}}$, since $\frac{\sigma^2}{s^2} \to 0$.

For question (d), when $s^2 \to 0$, then $\frac{\sigma^2}{s^2} \to \infty$ and $\mu_{\text{post}}$ converges to $m$.

Both (c) and (d) are very intuitive. $s^2 \to \infty$ means a non-informative prior has been introduced, so the MAP estimate coincides with the MLE. $s^2 \to 0$ means the prior belief that $\mu$ is close to $m$ is so strong that no finite number of observations can override it.
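Both limiting cases can be checked with the same closed-form expression, using very large and very small (hypothetical) values of $s^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical settings: noise variance sigma^2, prior mean m, sample size N.
sigma2, m, N = 1.0, 5.0, 20
x = rng.normal(0.0, 1.0, size=N)
mu_mle = x.mean()

def mu_post(s2):
    """Posterior mean as a function of the prior variance s^2."""
    r = sigma2 / s2
    return (r * m + x.sum()) / (r + N)

flat = mu_post(1e10)    # s^2 -> infinity: nearly flat prior, MAP ~ MLE
tight = mu_post(1e-10)  # s^2 -> 0: prior dominates, MAP ~ m
print(flat, mu_mle)
print(tight, m)
```

The first pair of numbers should agree (case (c)), and the second pair should agree (case (d)).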

2021-03-24 13:42