Exercise 20.4 - Inference in 2D lattice MRFs

Answers

For question (a), we consider only two options:

  • Eliminate by column, i.e., eliminate m variables each round.
  • Eliminate by row, i.e., eliminate n variables each round.

Let $\{X_{i,j}\}_{i=1,j=1}^{m,n}$ denote the collection of all hidden variables. Note that the local evidence $\{Y_{i,j}\}_{i=1,j=1}^{m,n}$ can be absorbed into the local energy; the only thing this breaks is the normalization condition. It is easy to see that the bottleneck in computing $p(\mathbf{X} \mid \mathbf{Y})$ is the evidence:

$$p(\mathbf{Y}) = \int p(\mathbf{X}, \mathbf{Y}) \, d\mathbf{X}.$$

The order in which we eliminate the hidden variables determines the efficiency of the overall algorithm.
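
To make the bottleneck concrete, here is a minimal brute-force sketch, assuming discrete (here binary) variables; the containers `pairwise` and `local` are illustrative names of our choosing, not part of the exercise. It enumerates all $2^{mn}$ joint configurations, which is exactly the exponential blow-up that a good elimination order avoids.

```python
import itertools

def brute_force_evidence(m, n, pairwise, local):
    """Sum the unnormalized joint over all 2**(m*n) configurations
    of an m x n binary lattice.

    pairwise[(a, b)] -- edge potential for neighboring values a, b
    local[i][j][a]   -- local-evidence potential for X_{i,j} = a
                        (the Y_{i,j} terms absorbed into the energy)
    """
    total = 0.0
    for flat in itertools.product((0, 1), repeat=m * n):
        x = [flat[i * n:(i + 1) * n] for i in range(m)]  # row-major grid
        p = 1.0
        for i in range(m):
            for j in range(n):
                p *= local[i][j][x[i][j]]                # local evidence
                if i + 1 < m:                            # vertical edge
                    p *= pairwise[(x[i][j], x[i + 1][j])]
                if j + 1 < n:                            # horizontal edge
                    p *= pairwise[(x[i][j], x[i][j + 1])]
        total += p
    return total
```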

For the first option, we begin by eliminating the last column, $\{X_{i,n}\}_{i=1}^{m}$. This step involves altogether $3m - 1$ terms: the $(m-1)$ edges within this column, the $m$ edges connecting it to the $(n-1)$-th column, and the $m$ local evidence terms. To perform the integral, let us assume that a sampling-based paradigm is adopted, in which each variable in $\{X_{i,n}\}$ and $\{X_{i,n-1}\}$ is sampled $K$ times. This elimination ends up with a table of $K^m$ entries, one per configuration of the $(n-1)$-th column, so the integral is performed with complexity:

$$K^{2m} \times (3m-1) = \mathcal{O}(mK^{2m}).$$

This procedure is iterated along the MRF $n$ times, resulting in an overall complexity of:

$$\mathcal{O}(mnK^{2m}).$$

Symmetrically, the complexity for the second option is:

$$\mathcal{O}(mnK^{2n}).$$
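
As a sanity check on these bounds, here is a sketch of the first option as column-by-column elimination, under the same illustrative binary setup as the brute-force sketch above: each column becomes one super-variable with $2^m$ states, and each step contracts a $2^m \times 2^m$ transition against the running message, giving the $\mathcal{O}(mnK^{2m})$ behavior with $K = 2$.

```python
import itertools

def evidence_by_column_elimination(m, n, pairwise, local):
    """Eliminate one column (m variables) at a time; same arguments
    and return value as brute_force_evidence, but each step costs
    O(2**(2m)) instead of enumerating the whole lattice."""
    states = list(itertools.product((0, 1), repeat=m))  # 2**m column states

    def column_factor(j, s):
        # local evidence and vertical edges inside column j
        p = 1.0
        for i in range(m):
            p *= local[i][j][s[i]]
            if i + 1 < m:
                p *= pairwise[(s[i], s[i + 1])]
        return p

    # message over the states of the most recently kept column
    msg = [column_factor(0, s) for s in states]
    for j in range(1, n):
        new_msg = []
        for t in states:                       # state of column j
            acc = 0.0
            for s, w in zip(states, msg):      # eliminate column j - 1
                trans = 1.0
                for i in range(m):             # m horizontal edges
                    trans *= pairwise[(s[i], t[i])]
                acc += w * trans
            new_msg.append(acc * column_factor(j, t))
        msg = new_msg
    return sum(msg)                            # sum out the last column
```

On a small lattice (say $m = 2$, $n = 3$) this agrees with the brute-force sum, which is a quick way to check the factorization into per-column factors and horizontal transitions.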

For question (b), with $m < n$, we should adopt the first method, since $K^{2m} \ll K^{2n}$.

For question (c), since $m = n$, either method will do. We can adopt the Viterbi algorithm for HMMs in this case, treating each column as a single chain node with $2^n$ states and using a table of size $2^{2n}$ (binary variables, i.e., $K = 2$) to store the propagated information (as dynamic programming).
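
A sketch of that Viterbi pass under the same binary assumption: it is the column elimination above with the sum over the previous column replaced by a max, scanning $2^{2n}$ state pairs per step, which is where the table size comes from.

```python
import itertools

def map_value_by_column_viterbi(n, pairwise, local):
    """Max-product over column super-states of an n x n binary lattice.
    Returns the unnormalized probability of the best configuration;
    keeping backpointers per (column, state) would recover the argmax."""
    states = list(itertools.product((0, 1), repeat=n))  # 2**n per column

    def column_factor(j, s):
        p = 1.0
        for i in range(n):
            p *= local[i][j][s[i]]
            if i + 1 < n:
                p *= pairwise[(s[i], s[i + 1])]
        return p

    best = [column_factor(0, s) for s in states]
    for j in range(1, n):
        new_best = []
        for t in states:
            cand = 0.0
            for s, w in zip(states, best):      # 2**(2n) pairs per step
                trans = 1.0
                for i in range(n):
                    trans *= pairwise[(s[i], t[i])]
                cand = max(cand, w * trans)     # max instead of sum
            new_best.append(cand * column_factor(j, t))
        best = new_best
    return max(best)
```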
