Exercise 4.9 - Sensor fusion with known variances in 1d

Answers

Denote the two observed datasets by $Y^{(1)}$ and $Y^{(2)}$, with sizes $N_1$ and $N_2$. The likelihood is:

$$p(Y^{(1)}, Y^{(2)} \mid \mu) = \prod_{n_1=1}^{N_1} p(Y_{n_1}^{(1)} \mid \mu) \prod_{n_2=1}^{N_2} p(Y_{n_2}^{(2)} \mid \mu) \propto \exp\{-A\mu^2 + B\mu\},$$

where we have dropped terms independent of $\mu$ and defined:

$$A = \frac{N_1}{2v_1} + \frac{N_2}{2v_2}, \qquad B = \frac{1}{v_1}\sum_{n_1=1}^{N_1} Y_{n_1}^{(1)} + \frac{1}{v_2}\sum_{n_2=1}^{N_2} Y_{n_2}^{(2)}.$$

Differentiating the log-likelihood (the exponent above) w.r.t. $\mu$ and setting it to zero, we obtain:

$$\mu_{\text{MLE}} = \frac{B}{2A}.$$
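As a sanity check, $\mu_{\text{MLE}} = B/(2A)$ is exactly the precision-weighted average of the two sensors' readings. A minimal numerical sketch (the data values and variances below are made up for illustration):

```python
# Numerical sanity check: the MLE computed from A and B equals the
# precision-weighted average of the two sensors' readings.
# All data values and variances here are arbitrary illustrative numbers.
y1 = [1.1, 0.9, 1.3]   # sensor 1 readings, observation variance v1
y2 = [0.7, 1.2]        # sensor 2 readings, observation variance v2
v1, v2 = 0.5, 2.0
N1, N2 = len(y1), len(y2)

# A and B as defined in the derivation above.
A = N1 / (2 * v1) + N2 / (2 * v2)
B = sum(y1) / v1 + sum(y2) / v2

mu_mle = B / (2 * A)

# Equivalent form: precision-weighted combination of all readings.
precision = N1 / v1 + N2 / v2
mu_weighted = (sum(y1) / v1 + sum(y2) / v2) / precision

assert abs(mu_mle - mu_weighted) < 1e-12
print(mu_mle)
```

Because $v_1 < v_2$ here, the fused estimate is pulled toward the mean of the more precise sensor's readings.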

The conjugate prior for this model must have a form proportional to $\exp\{-a\mu^2 + b\mu\}$, so it is a normal distribution:

$$p(\mu \mid a, b) \propto \exp\{-a\mu^2 + b\mu\}.$$

The posterior distribution is:

$$p(\mu \mid Y^{(1)}, Y^{(2)}) \propto \exp\{-(A+a)\mu^2 + (B+b)\mu\}.$$

Hence the MAP estimate is:

$$\mu_{\text{MAP}} = \frac{B+b}{2(A+a)}.$$

Note that the MAP estimate converges to the ML estimate as the number of observations grows: $A$ and $B$ scale with $N_1$ and $N_2$, while the prior parameters $a$ and $b$ stay fixed, so

$$\mu_{\text{MAP}} \to \mu_{\text{MLE}} \quad \text{as } N_1 + N_2 \to \infty.$$

The posterior distribution is another normal distribution, with variance:

$$\sigma_{\text{MAP}}^2 = \frac{1}{2(A+a)}.$$
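Writing the conjugate prior as a normal $\mathcal{N}(m_0, s_0^2)$, i.e. $a = 1/(2s_0^2)$ and $b = m_0/s_0^2$ (a parameterization assumed here, not given in the exercise), the MAP mean and variance above match the standard Gaussian posterior-update formulas. A small sketch under those assumptions, with illustrative numbers:

```python
# Check that mu_MAP = (B + b) / (2(A + a)) and sigma2_MAP = 1 / (2(A + a))
# agree with the standard Gaussian posterior update
# (posterior precision = sum of precisions).
# All data values, variances, and prior parameters are illustrative.
y1 = [1.1, 0.9, 1.3]
y2 = [0.7, 1.2]
v1, v2 = 0.5, 2.0
m0, s0sq = 0.0, 1.0     # prior N(m0, s0^2); a = 1/(2 s0^2), b = m0/s0^2

N1, N2 = len(y1), len(y2)
A = N1 / (2 * v1) + N2 / (2 * v2)
B = sum(y1) / v1 + sum(y2) / v2
a = 1.0 / (2 * s0sq)
b = m0 / s0sq

mu_map = (B + b) / (2 * (A + a))
sigma2_map = 1.0 / (2 * (A + a))

# Standard form: posterior precision is the sum of the prior precision
# and the data precisions; the mean is their precision-weighted average.
post_prec = 1.0 / s0sq + N1 / v1 + N2 / v2
post_mean = (m0 / s0sq + sum(y1) / v1 + sum(y2) / v2) / post_prec

assert abs(mu_map - post_mean) < 1e-12
assert abs(sigma2_map - 1.0 / post_prec) < 1e-12

# With a = b = 0 (non-informative prior) the MAP reduces to the MLE.
mu_mle = B / (2 * A)
assert abs((B + 0) / (2 * (A + 0)) - mu_mle) < 1e-12
```

The last assertion illustrates the non-informative-prior case discussed below: setting $a = b = 0$ recovers the ML estimate exactly.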

For a non-informative prior we take $a = b = 0$, so $p(\mu \mid a, b)$ is uniform over the domain, and the MAP estimate coincides with the MLE.

2021-03-24 13:42