
Exercise 11.12 - EM for robust linear regression with a Student t likelihood

Answers

Assuming that $\nu$ and $\sigma^2$ are fixed, the likelihood takes the form:

$$p(y \mid \mathbf{x}, \mathbf{w}, \sigma^2, \nu) = f(\sigma^2, \nu) \left( 1 + \frac{(y - \mathbf{w}^T \mathbf{x})^2}{2\sigma^2} \right)^{-\frac{\nu + 1}{2}}.$$

For this exercise we only derive the M-step for $\mathbf{w}$, with $\sigma^2$ and $\nu$ held fixed. Up to constants that do not depend on $\mathbf{w}$, the negative log-likelihood is:

$$L(\mathbf{w}) \propto \sum_{i=1}^{N} \log\left( 1 + \frac{(y_i - \mathbf{w}^T \mathbf{x}_i)^2}{2\sigma^2} \right).$$
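
To make the proportionality explicit, take the negative log of the product of the $N$ likelihood terms; the normalizer $f(\sigma^2, \nu)$ and the factor $\frac{\nu+1}{2}$ do not depend on $\mathbf{w}$ and can be dropped:

$$-\log \prod_{i=1}^{N} p(y_i \mid \mathbf{x}_i, \mathbf{w}, \sigma^2, \nu) = -N \log f(\sigma^2, \nu) + \frac{\nu + 1}{2} \sum_{i=1}^{N} \log\left( 1 + \frac{(y_i - \mathbf{w}^T \mathbf{x}_i)^2}{2\sigma^2} \right).$$

Minimizing over $\mathbf{w}$ is therefore equivalent to minimizing the sum of log terms above.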

Taking the gradient:

$$\frac{\partial L}{\partial \mathbf{w}} \propto -\sum_{i=1}^{N} \frac{(y_i - \mathbf{w}^T \mathbf{x}_i)\,\mathbf{x}_i}{2\sigma^2 + (y_i - \mathbf{w}^T \mathbf{x}_i)^2}.$$
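
To spell out a single term of the sum, the chain rule gives

$$\frac{\partial}{\partial \mathbf{w}} \log\left( 1 + \frac{(y_i - \mathbf{w}^T \mathbf{x}_i)^2}{2\sigma^2} \right) = \frac{-(y_i - \mathbf{w}^T \mathbf{x}_i)\,\mathbf{x}_i / \sigma^2}{1 + (y_i - \mathbf{w}^T \mathbf{x}_i)^2 / (2\sigma^2)} = \frac{-2\,(y_i - \mathbf{w}^T \mathbf{x}_i)\,\mathbf{x}_i}{2\sigma^2 + (y_i - \mathbf{w}^T \mathbf{x}_i)^2},$$

with the constant factor of $2$ absorbed into the proportionality sign.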

Because the denominator also depends on $\mathbf{w}$, setting this gradient to zero has no closed-form solution. One option is to initialize $\mathbf{w}_0$ (for example, at random) and iterate the fixed-point update:

$$\mathbf{w}_{t+1} = \left( \sum_{i=1}^{N} \frac{\mathbf{x}_i \mathbf{x}_i^T}{2\sigma^2 + (y_i - \mathbf{w}_t^T \mathbf{x}_i)^2} \right)^{-1} \left( \sum_{i=1}^{N} \frac{y_i \mathbf{x}_i}{2\sigma^2 + (y_i - \mathbf{w}_t^T \mathbf{x}_i)^2} \right).$$
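
As an illustration, here is a minimal NumPy sketch of this fixed-point scheme (not part of the original answer; the function name robust_fit, the zero initialization, and the stopping tolerance are illustrative choices):

```python
import numpy as np


def robust_fit(X, y, sigma2=1.0, n_iter=100, tol=1e-8):
    """Fixed-point iteration for Student-t linear regression with fixed sigma2 and nu.

    Each sweep computes d_i = 2*sigma2 + (y_i - w_t^T x_i)^2 and solves
    w_{t+1} = (sum_i x_i x_i^T / d_i)^{-1} (sum_i y_i x_i / d_i).
    """
    p = X.shape[1]
    w = np.zeros(p)  # a random initialization works as well
    for _ in range(n_iter):
        resid = y - X @ w                # residuals under the current w
        d = 2.0 * sigma2 + resid ** 2    # per-example denominators d_i
        Xd = X / d[:, None]              # rows of X scaled by 1 / d_i
        w_new = np.linalg.solve(Xd.T @ X, Xd.T @ y)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=3, size=200)
    print(robust_fit(X, y))  # should recover roughly [1, -2, 0.5]
```

Each sweep is a weighted least-squares solve in which an example with a large residual receives a small weight $1/(2\sigma^2 + r_i^2)$, which is what makes the fit resistant to outliers.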
