Exercise 7.8 - Bayesian linear regression in 1d with known $\sigma^{2}$
Answers
For question (a), the unbiased estimate of $\sigma^{2}$ is
$$\hat{\sigma}^{2}=\frac{1}{N-2}\sum_{i=1}^{N}\left(y_{i}-\hat{w}_{0}-\hat{w}_{1}x_{i}\right)^{2},$$
where $(\hat{w}_{0},\hat{w}_{1})$ is the maximum-likelihood (least-squares) fit; the denominator is $N-2$ because two parameters are estimated from the data.
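A minimal numerical sketch of part (a) in Python, using synthetic stand-in data (the exercise's actual $(x_{i},y_{i})$ table is not reproduced here, so the printed numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the exercise's data; substitute the real
# (x_i, y_i) pairs from the book to reproduce its numbers.
N = 11
x = rng.uniform(90.0, 130.0, size=N)
y = 0.02 * x - 1.5 + rng.normal(scale=0.1, size=N)

# MLE (ordinary least squares) for (w0, w1) via the design matrix [1, x].
X = np.column_stack([np.ones(N), x])
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Unbiased estimate of sigma^2: residual sum of squares over N - 2,
# since two parameters were fit.
resid = y - X @ w_hat
sigma2_hat = resid @ resid / (N - 2)
print(f"w_hat = {w_hat}, sigma2_hat = {sigma2_hat:.6f}")
```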
For question (b), we have the prior
$$p(\mathbf{w})=p(w_{0})\,p(w_{1}),\qquad p(w_{0})\propto 1,\qquad p(w_{1})=\mathcal{N}(w_{1}\mid 0,1).$$
To simplify the algebra, we observe that $w_{0}$ and $w_{1}$ are independent in this prior. Thus the prior distribution can be written as a single Gaussian,
$$p(\mathbf{w})=\mathcal{N}(\mathbf{w}\mid\mathbf{m}_{0},\mathbf{V}_{0}),\qquad \mathbf{m}_{0}=\begin{pmatrix}0\\0\end{pmatrix},\qquad \mathbf{V}_{0}=\begin{pmatrix}v&0\\0&1\end{pmatrix},$$
where $v$ can be an arbitrarily large finite number, thus in the limit $v\to\infty$:
$$\mathbf{V}_{0}^{-1}=\begin{pmatrix}0&0\\0&1\end{pmatrix},$$
which recovers the improper uniform prior on $w_{0}$.
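To see that the particular finite $v$ hardly matters, here is a hedged sketch (same hypothetical synthetic data as above, with an assumed known value for $\sigma^{2}$) that computes the joint Gaussian posterior $\mathbf{V}_{N}=(\mathbf{V}_{0}^{-1}+\mathbf{X}^{\top}\mathbf{X}/\sigma^{2})^{-1}$, $\mathbf{m}_{N}=\mathbf{V}_{N}\mathbf{X}^{\top}\mathbf{y}/\sigma^{2}$ and prints the marginal on $w_{1}$ as $v$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 11
x = rng.uniform(90.0, 130.0, size=N)
y = 0.02 * x - 1.5 + rng.normal(scale=0.1, size=N)
sigma2 = 0.01  # assumed known noise variance (illustrative value)

X = np.column_stack([np.ones(N), x])
for v in [1e0, 1e2, 1e4, 1e8]:
    V0_inv = np.diag([1.0 / v, 1.0])               # prior precision diag(1/v, 1)
    VN = np.linalg.inv(V0_inv + X.T @ X / sigma2)  # posterior covariance
    mN = VN @ (X.T @ y) / sigma2                   # posterior mean (m0 = 0)
    print(f"v={v:.0e}: E[w1|D]={mN[1]:+.6f}, var[w1|D]={VN[1, 1]:.3e}")
```

The marginal mean and variance of $w_{1}$ stabilize quickly, matching the closed-form expressions derived below.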
For questions (c) and (d), we consider the posterior distribution over the parameters:
$$p(w_{0},w_{1}\mid\mathcal{D})\propto\exp\!\left(-\frac{1}{2\sigma^{2}}\sum_{i=1}^{N}\left(y_{i}-w_{0}-w_{1}x_{i}\right)^{2}-\frac{w_{1}^{2}}{2}\right),$$
where we only keep terms dependent on $w_{0}$ and $w_{1}$. To marginalize out $w_{0}$, we complete the square in $w_{0}$ (the minimizer is $w_{0}^{*}=\bar{y}-w_{1}\bar{x}$) and carry out the Gaussian integral, which leaves
$$p(w_{1}\mid\mathcal{D})\propto\exp\!\left(-\frac{1}{2\sigma^{2}}\sum_{i=1}^{N}\big((y_{i}-\bar{y})-w_{1}(x_{i}-\bar{x})\big)^{2}-\frac{w_{1}^{2}}{2}\right),$$
with $\bar{x}=\frac{1}{N}\sum_{i}x_{i}$ and $\bar{y}=\frac{1}{N}\sum_{i}y_{i}$.
Hence the posterior distribution over $w_{1}$ is a normal distribution. The coefficients for $w_{1}^{2}$ and $w_{1}$ in the exponent are respectively:
$$-\frac{1}{2}\left(\frac{\sum_{i}(x_{i}-\bar{x})^{2}}{\sigma^{2}}+1\right)\qquad\text{and}\qquad\frac{\sum_{i}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sigma^{2}}.$$
Hence its posterior variance is:
$$\operatorname{var}[w_{1}\mid\mathcal{D}]=\left(\frac{\sum_{i}(x_{i}-\bar{x})^{2}}{\sigma^{2}}+1\right)^{-1}=\frac{\sigma^{2}}{\sum_{i}(x_{i}-\bar{x})^{2}+\sigma^{2}},$$
and its mean is:
$$\mathbb{E}[w_{1}\mid\mathcal{D}]=\frac{\sum_{i}(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sum_{i}(x_{i}-\bar{x})^{2}+\sigma^{2}}.$$
For question (d), since the marginal posterior is Gaussian, a 95% credible interval is $\mathbb{E}[w_{1}\mid\mathcal{D}]\pm 1.96\sqrt{\operatorname{var}[w_{1}\mid\mathcal{D}]}$.
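These closed forms are easy to evaluate numerically; the sketch below (again on the hypothetical synthetic data, with an assumed $\sigma^{2}$) computes the posterior mean, variance, and the 95% credible interval for $w_{1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 11
x = rng.uniform(90.0, 130.0, size=N)
y = 0.02 * x - 1.5 + rng.normal(scale=0.1, size=N)
sigma2 = 0.01  # assumed known noise variance (illustrative value)

# Centered sufficient statistics.
xc, yc = x - x.mean(), y - y.mean()
Sxx = xc @ xc   # sum_i (x_i - xbar)^2
Sxy = xc @ yc   # sum_i (x_i - xbar)(y_i - ybar)

post_var = sigma2 / (Sxx + sigma2)   # var[w1 | D]
post_mean = Sxy / (Sxx + sigma2)     # E[w1 | D]

# 95% credible interval (question (d)): Gaussian posterior, so mean +/- 1.96 sd.
half = 1.96 * np.sqrt(post_var)
print(f"E[w1|D]={post_mean:.6f}, var[w1|D]={post_var:.3e}, "
      f"95% CI=({post_mean - half:.6f}, {post_mean + half:.6f})")
```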
Finally, let us rewrite the result in terms of the raw statistics of $\mathcal{D}$. Since $\sum_{i}(x_{i}-\bar{x})^{2}=\sum_{i}x_{i}^{2}-\frac{1}{N}\left(\sum_{i}x_{i}\right)^{2}$, the posterior variance is:
$$\operatorname{var}[w_{1}\mid\mathcal{D}]=\frac{\sigma^{2}}{\sum_{i}x_{i}^{2}-\frac{1}{N}\left(\sum_{i}x_{i}\right)^{2}+\sigma^{2}},$$
from which we observe that, as $N$ grows, the denominator increases; it is nonnegative by the Cauchy–Schwarz inequality:
$$N\sum_{i}x_{i}^{2}\ge\left(\sum_{i}x_{i}\right)^{2}\quad\Longleftrightarrow\quad\sum_{i}x_{i}^{2}-\frac{1}{N}\left(\sum_{i}x_{i}\right)^{2}\ge 0,$$
with equality iff all the $x_{i}$ are equal. To put it in other words, the larger this Cauchy–Schwarz gap is, the more confident we are in the estimate of $w_{1}$. The gap is determined by the empirical variance of the $x_{i}$: writing $s_{x}^{2}=\frac{1}{N}\sum_{i}(x_{i}-\bar{x})^{2}$, the posterior variance reduces to:
$$\operatorname{var}[w_{1}\mid\mathcal{D}]=\frac{\sigma^{2}}{N s_{x}^{2}+\sigma^{2}}.$$
Therefore, for any fixed generative distribution on $x$ (so that $s_{x}^{2}$ converges to a positive constant $\operatorname{var}(x)$), the uncertainty on $w_{1}$ declines as $O(1/N)$.
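The $1/N$ decay is easy to check empirically; a short sketch (with a hypothetical uniform generative distribution for $x$ and an assumed $\sigma^{2}$) shows that $N\cdot\operatorname{var}[w_{1}\mid\mathcal{D}]$ approaches the constant $\sigma^{2}/\operatorname{var}(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.01  # assumed known noise variance (illustrative value)

# For x_i drawn i.i.d. from a fixed distribution, N * var[w1|D] should
# approach sigma^2 / var(x): the posterior variance decays as O(1/N).
for N in [10, 100, 1000, 10000]:
    x = rng.uniform(90.0, 130.0, size=N)  # hypothetical generative distribution
    Sxx = np.sum((x - x.mean()) ** 2)     # N * s_x^2
    post_var = sigma2 / (Sxx + sigma2)
    print(f"N={N:5d}: var[w1|D]={post_var:.3e}, N*var={N * post_var:.3e}")
```

Here $\operatorname{var}(x)=40^{2}/12\approx 133.3$ for the uniform draw, so $N\cdot\operatorname{var}[w_{1}\mid\mathcal{D}]$ settles near $\sigma^{2}/133.3\approx 7.5\times 10^{-5}$.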