Exercise 11.13 - EM for EB estimation of Gaussian shrinkage model
This is an example of a non-mixture latent variable model. Here the latent variables are no longer one-hot indicator vectors, which distinguishes it from the mixture-model EM derivations we have developed so far.
Recall the Gaussian shrinkage model: each observation $x_j$ is drawn as $x_j \mid \theta_j \sim \mathcal{N}(\theta_j, \sigma_j^2)$ with known $\sigma_j^2$, and the latent means share a common prior $\theta_j \sim \mathcal{N}(\mu, \tau^2)$. The complete likelihood is:

$$p(\mathbf{x}, \boldsymbol{\theta} \mid \mu, \tau^2) = \prod_{j=1}^{D} \mathcal{N}(x_j \mid \theta_j, \sigma_j^2)\, \mathcal{N}(\theta_j \mid \mu, \tau^2)$$

Taking the logarithm yields:

$$\log p(\mathbf{x}, \boldsymbol{\theta} \mid \mu, \tau^2) = -\sum_{j=1}^{D}\left[ \frac{(x_j - \theta_j)^2}{2\sigma_j^2} + \frac{(\theta_j - \mu)^2}{2\tau^2} \right] - \frac{D}{2}\log(2\pi\tau^2) + \text{const}$$

where the constant collects the terms that depend on neither $\boldsymbol{\theta}$ nor $(\mu, \tau^2)$.
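Since EB estimation maximizes the marginal likelihood $p(\mathbf{x} \mid \mu, \tau^2)$, and integrating out $\theta_j$ gives the closed form $x_j \sim \mathcal{N}(\mu, \sigma_j^2 + \tau^2)$, this objective is easy to evaluate for monitoring EM's progress. A minimal NumPy sketch (the function and variable names here are my own, not from the exercise):

```python
import numpy as np

def marginal_loglik(x, sigma2, mu, tau2):
    """log p(x | mu, tau^2): marginally, each x_j ~ N(mu, sigma_j^2 + tau^2)."""
    var = sigma2 + tau2  # sigma2 may be a scalar or a length-D array
    return np.sum(-0.5 * np.log(2 * np.pi * var) - 0.5 * (x - mu) ** 2 / var)
```

This quantity should increase monotonically across EM iterations, which makes it a useful correctness check.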
Note that $p(\theta_j \mid x_j, \mu, \tau^2) \propto \mathcal{N}(x_j \mid \theta_j, \sigma_j^2)\,\mathcal{N}(\theta_j \mid \mu, \tau^2)$ is again Gaussian, hence the posterior over $\theta_j$ can be written down analytically with (4.125) (though the algebra is tedious):

$$p(\theta_j \mid x_j, \mu, \tau^2) = \mathcal{N}(\theta_j \mid m_j, v_j), \qquad v_j = \left(\frac{1}{\sigma_j^2} + \frac{1}{\tau^2}\right)^{-1}, \qquad m_j = v_j\left(\frac{x_j}{\sigma_j^2} + \frac{\mu}{\tau^2}\right)$$

Hence every term in the log complete likelihood that depends on $\theta_j$ can be estimated by its posterior moment. This is possible since all such terms take the form $\theta_j$ or $\theta_j^2$, whose expectations are

$$\mathbb{E}[\theta_j] = m_j, \qquad \mathbb{E}[\theta_j^2] = v_j + m_j^2$$

This completes the E-step.
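As a concrete check of these formulas, here is a minimal sketch of the E-step in NumPy; again, `e_step` and its argument names are hypothetical choices for illustration:

```python
import numpy as np

def e_step(x, sigma2, mu, tau2):
    """Posterior moments of each theta_j given x_j (Gaussian-Gaussian conjugacy)."""
    v = 1.0 / (1.0 / sigma2 + 1.0 / tau2)  # posterior variance v_j
    m = v * (x / sigma2 + mu / tau2)       # posterior mean m_j
    return m, v + m ** 2                   # E[theta_j] and E[theta_j^2]
```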
For the M-step, this model is no different from the others we have developed so far. Taking partial derivatives of the expected complete log likelihood with respect to $\mu$ and $\tau^2$ and setting them to zero yields the update rules:

$$\mu = \frac{1}{D}\sum_{j=1}^{D}\mathbb{E}[\theta_j], \qquad \tau^2 = \frac{1}{D}\sum_{j=1}^{D}\mathbb{E}\left[(\theta_j - \mu)^2\right] = \frac{1}{D}\sum_{j=1}^{D}\left(\mathbb{E}[\theta_j^2] - 2\mu\,\mathbb{E}[\theta_j] + \mu^2\right)$$
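Putting the two steps together gives the full EM loop. This is a sketch under the same assumptions as above (it reuses the hypothetical `e_step` helper; `em_shrinkage` and `n_iters` are my own names). Note that since the $\mu$ update does not depend on $\tau^2$, using the freshly updated $\mu$ inside the $\tau^2$ update is the exact joint maximizer, not an approximation:

```python
import numpy as np

def em_shrinkage(x, sigma2, n_iters=100):
    """Alternate the E- and M-steps above to get EB estimates of (mu, tau^2)."""
    mu, tau2 = float(np.mean(x)), float(np.var(x))  # crude initialization
    for _ in range(n_iters):
        e_theta, e_theta2 = e_step(x, sigma2, mu, tau2)       # E-step
        mu = float(np.mean(e_theta))                          # M-step for mu
        tau2 = float(np.mean(e_theta2 - 2 * mu * e_theta + mu ** 2))  # M-step for tau^2
    return mu, tau2
```

On data simulated from the model, these iterates climb the marginal likelihood $p(\mathbf{x} \mid \mu, \tau^2)$ monotonically, which can be verified with the `marginal_loglik` helper sketched earlier.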