Kevin P. Murphy, Machine Learning: A Probabilistic Perspective

Exercise 11.15 - Posterior mean and variance of a truncated Gaussian

Answers

Denote $A = \frac{c_i - \mu_i}{\sigma}$ and write $z_i = \mu_i + \sigma \epsilon_i$ with $\epsilon_i \sim \mathcal{N}(0, 1)$. For the conditional mean, by linearity:

$$\mathbb{E}[z_i \mid z_i \ge c_i] = \mu_i + \sigma\, \mathbb{E}[\epsilon_i \mid \epsilon_i \ge A].$$

And we have:

$$\mathbb{E}[\epsilon_i \mid \epsilon_i \ge A] = \frac{1}{p(\epsilon_i \ge A)} \int_A^{+\infty} \epsilon_i\, \mathcal{N}(\epsilon_i \mid 0, 1)\, d\epsilon_i = \frac{\phi(A)}{1 - \Phi(A)} = H(A),$$

where H is defined by (11.139). (Recall the definition of the conditional expectation!) Therefore we have:

$$\mathbb{E}[z_i \mid z_i \ge c_i] = \mu_i + \sigma H(A).$$
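As a quick numerical sanity check (not part of the original solution), the conditional-mean formula can be compared against a Monte Carlo estimate obtained by rejection sampling; the values of $\mu_i$, $\sigma$, and $c_i$ below are arbitrary examples, not from the text:

```python
import math
import random

def Phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def H(A):
    """The ratio H(A) = phi(A) / (1 - Phi(A)) from (11.139)."""
    return phi(A) / (1.0 - Phi(A))

# Arbitrary example values for mu_i, sigma, c_i.
mu, sigma, c = 1.0, 2.0, 1.5
A = (c - mu) / sigma

# Closed-form conditional mean derived above: E[z | z >= c] = mu + sigma * H(A).
mean_formula = mu + sigma * H(A)

# Monte Carlo estimate: rejection-sample z ~ N(mu, sigma^2), keep z >= c.
random.seed(0)
samples = [z for z in (random.gauss(mu, sigma) for _ in range(1_000_000)) if z >= c]
mean_mc = sum(samples) / len(samples)

print(f"formula: {mean_formula:.3f}  Monte Carlo: {mean_mc:.3f}")
```

Truncating from below at $c_i$ discards small values, so the conditional mean always exceeds $\mu_i$, which the check confirms.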

Now we compute the expectation of the squared term; expanding $z_i^2 = (\mu_i + \sigma \epsilon_i)^2$ gives:

$$\mathbb{E}[z_i^2 \mid z_i \ge c_i] = \mu_i^2 + 2\mu_i \sigma\, \mathbb{E}[\epsilon_i \mid \epsilon_i \ge A] + \sigma^2\, \mathbb{E}[\epsilon_i^2 \mid \epsilon_i \ge A].$$

To evaluate $\mathbb{E}[\epsilon_i^2 \mid \epsilon_i \ge A]$, we make use of the hint given in the exercise:

$$\frac{d}{dw}\big(w\, \mathcal{N}(w \mid 0, 1)\big) = \mathcal{N}(w \mid 0, 1) - w^2\, \mathcal{N}(w \mid 0, 1).$$

Integrating both sides from $b$ to $c$ and rearranging, we can evaluate the following integral by parts:

$$\int_b^c w^2\, \mathcal{N}(w \mid 0, 1)\, dw = \Phi(c) - \Phi(b) - c\, \mathcal{N}(c \mid 0, 1) + b\, \mathcal{N}(b \mid 0, 1).$$
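This identity is easy to spot-check numerically by comparing the closed form against a direct quadrature of the left-hand side; the endpoints $b$ and $c$ below are arbitrary example values:

```python
import math

def Phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal pdf, i.e. N(x | 0, 1)."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Arbitrary example endpoints b < c for the check.
b, c = -0.5, 1.3

# Right-hand side of the integration-by-parts identity.
rhs = Phi(c) - Phi(b) - c * phi(c) + b * phi(b)

# Left-hand side: integrate w^2 * N(w | 0, 1) over [b, c]
# with a simple midpoint rule.
n = 100_000
h = (c - b) / n
lhs = sum((b + (k + 0.5) * h) ** 2 * phi(b + (k + 0.5) * h) for k in range(n)) * h

print(f"lhs = {lhs:.6f}  rhs = {rhs:.6f}")
```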

Setting $b = A$ and letting $c \to +\infty$:

$$\mathbb{E}[\epsilon_i^2 \mid \epsilon_i \ge A] = \frac{1}{p(\epsilon_i \ge A)} \int_A^{+\infty} w^2\, \mathcal{N}(w \mid 0, 1)\, dw = \frac{1 - \Phi(A) + A\, \phi(A)}{1 - \Phi(A)} = 1 + A\, H(A).$$

Plugging this into the previous formula:

$$\mathbb{E}[z_i^2 \mid z_i \ge c_i] = \mu_i^2 + 2\mu_i \sigma H(A) + \sigma^2\, \frac{1 - \Phi(A) + A\, \phi(A)}{1 - \Phi(A)} = \mu_i^2 + \sigma^2 + H(A)(\sigma c_i + \sigma \mu_i),$$

where the last equality uses $\frac{1 - \Phi(A) + A\, \phi(A)}{1 - \Phi(A)} = 1 + A\, H(A)$ and $\sigma A = c_i - \mu_i$.
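As with the mean, the final second-moment formula can be checked by Monte Carlo; the parameter values are the same arbitrary examples used before, not values from the text:

```python
import math
import random

def Phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def H(A):
    """The ratio H(A) = phi(A) / (1 - Phi(A)) from (11.139)."""
    return phi(A) / (1.0 - Phi(A))

# Arbitrary example values for mu_i, sigma, c_i.
mu, sigma, c = 1.0, 2.0, 1.5
A = (c - mu) / sigma

# Closed-form second moment derived above:
# E[z^2 | z >= c] = mu^2 + sigma^2 + H(A) * (sigma * c + sigma * mu).
m2_formula = mu**2 + sigma**2 + H(A) * (sigma * c + sigma * mu)

# Monte Carlo estimate by rejection sampling from N(mu, sigma^2).
random.seed(1)
samples = [z for z in (random.gauss(mu, sigma) for _ in range(1_000_000)) if z >= c]
m2_mc = sum(z * z for z in samples) / len(samples)

print(f"formula: {m2_formula:.3f}  Monte Carlo: {m2_mc:.3f}")
```

Subtracting the squared conditional mean from this second moment then yields the posterior variance of the truncated Gaussian.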

2021-03-24 13:42