This week we’ll be covering Kumar, Kolmogorov, and Torr, “An Analysis of Convex Relaxations for MAP Estimation of Discrete MRFs”, JMLR 2009.

A related paper is Kumar, Torr, and Zisserman, “Solving Markov Random Fields using Second Order Cone Programming Relaxations”, CVPR 2006, which may be useful as a quick introduction as well as an example of how to apply these methods to a vision problem (object recognition), but the main focus of the meeting will be on the JMLR paper.

The paper begins by describing the integer programming formulation of MAP estimation, equivalently energy minimization; hence the integer program and all relaxations in the paper are minimization problems. Next, the paper gives three relaxations of the integer program – LP-S, the linear programming relaxation (the main one we dealt with in Wainwright); QP-RL, a quadratic programming relaxation; and SOCP-MS, a second order cone programming relaxation. One focus of the paper is comparing these relaxations to one another; the next section describes how to do so through the notion of dominance. The paper goes on to prove that LP-S dominates SOCP-MS, and that SOCP-MS and QP-RL are equivalent.
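For reference, the dominance comparison can be stated compactly. Since every relaxation here is a minimization whose optimal value lower-bounds the true minimum energy, relaxation $A$ dominates relaxation $B$ when $A$'s bound is at least as tight on every instance (my paraphrase; writing $e_A(\theta)$ for the optimal value of relaxation $A$ on the problem with parameters $\theta$, check the paper for the exact statement):

```latex
e_A(\theta) \ge e_B(\theta) \quad \text{for all parameters } \theta .
```

Strict dominance additionally requires the inequality to be strict for at least one $\theta$.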

The next focus of the paper is proving that LP-S strictly dominates a class of SOCP relaxations (those defined on trees and certain types of cycles), including SOCP-MS (and consequently QP-RL). The paper then gives two additional types of constraints that can be added to the SOCP relaxation, such that the new SOCP relaxation dominates LP-S on some problems and has a feasibility region that is not a super-set of the LP-S feasibility region.

The majority of the paper is in the proofs, so beyond understanding the basic set-up, I think that’s where we should be spending our time. In particular, if there are any steps that you do not follow, post them in the comments and we can try to work through them during the meeting.

# Notes:

**Going from SDP to SOCP relaxation:**

One fact that seems to be used without an explicit proof is $X \succeq 0 \iff C \bullet X \ge 0$ for all $C \succeq 0$, where $A \bullet B = \sum_{ij} A_{ij} B_{ij} = \mathrm{tr}(A^\top B)$ is the Frobenius inner product on matrices. Here’s a quick proof, where the second direction uses part of the CVPR paper.

First, note that $C \bullet X \ge 0$ for all $C \succeq 0$ implies $X \succeq 0$, since for any vector $u$, $u u^\top \succeq 0$. Therefore, $u^\top X u = (u u^\top) \bullet X \ge 0$.
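As a quick numerical sanity check of the identity $(u u^\top) \bullet X = u^\top X u$ used above (a sketch; the variable names are mine, not from either paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random symmetric X and a random vector u.
B = rng.standard_normal((4, 4))
X = B + B.T
u = rng.standard_normal(4)

# Frobenius inner product of uu^T with X equals the quadratic form u^T X u.
frob = np.sum(np.outer(u, u) * X)
quad = u @ X @ u
assert np.isclose(frob, quad)
```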

To go the reverse direction, note that any symmetric positive semidefinite matrix can be written as $U U^\top$ using the Cholesky decomposition. Let $X = V V^\top$, and let $C = U U^\top$ be any positive semidefinite matrix. Then $C \bullet X = \mathrm{tr}(U U^\top V V^\top) = \sum_i u_i^\top V V^\top u_i = \sum_i \|V^\top u_i\|^2 \ge 0$, where $u_i$ is the $i$th column of $U$ (not row, as stated in the CVPR paper), and the last inequality uses the fact that $\|v\|^2 \ge 0$.
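Both directions of the fact can be checked numerically in a few lines (a sketch under my own naming; not code from either paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Forward: X = VV^T and C = UU^T psd gives C . X = ||V^T U||_F^2 >= 0.
V = rng.standard_normal((n, n))
U = rng.standard_normal((n, n))
X = V @ V.T
C = U @ U.T
inner = np.sum(C * X)
assert inner >= -1e-12
assert np.isclose(inner, np.linalg.norm(V.T @ U, "fro") ** 2)

# Converse: a non-psd X has some rank-one C = uu^T with C . X < 0.
X_bad = np.diag([1.0, -1.0])
u = np.array([0.0, 1.0])  # eigenvector of the negative eigenvalue
assert np.sum(np.outer(u, u) * X_bad) < 0
```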

Recall that in the SDP relaxation, our positive semidefinite constraint was $X - x x^\top \succeq 0$. From the CVPR paper, we then have, for any positive semidefinite matrix $C = U U^\top$, $x^\top C x = \|U^\top x\|^2 \le C \bullet X$.

Finally, note that the final inequality is equivalent to the following SOCP constraint: $\left\| (2z, \; 1 - y) \right\|_2 \le 1 + y$, where $z = U^\top x$ and $y = C \bullet X$, which can be verified by squaring both sides.
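A quick numerical check of that squaring argument on random points (a sketch with my own variable names; squaring the cone constraint gives $4\|z\|^2 + (1-y)^2 \le (1+y)^2$, i.e. $4\|z\|^2 \le 4y$, and for $y < 0$ both sides of the equivalence are false):

```python
import numpy as np

rng = np.random.default_rng(2)

# ||z||^2 <= y  <=>  ||(2z, 1 - y)|| <= 1 + y.
for _ in range(100):
    z = rng.standard_normal(3)
    y = rng.uniform(-2.0, 5.0)
    lhs = np.linalg.norm(np.concatenate([2 * z, [1.0 - y]]))
    assert (z @ z <= y) == (lhs <= 1.0 + y)
```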

At this point, the SOCP relaxation is equivalent to the SDP relaxation. However, the SOCP relaxation is made more tractable (at the cost of removing constraints) by only considering a subset of psd matrices $C$. The typical set of $C$ used is those with all zeros except for a $2 \times 2$ submatrix corresponding to two vertices connected by an edge in the MRF.
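Here's a sketch of what such a restricted $C$ looks like (the function name and index convention are mine; the sign choice corresponds to $u = e_a \pm e_b$, which I believe matches the SOCP-MS style construction, so treat it as illustrative):

```python
import numpy as np

def edge_psd_matrix(n, a, b, sign=1.0):
    """Rank-one psd matrix C = u u^T with u = e_a + sign * e_b:
    all zeros except the 2x2 submatrix on rows/columns a and b."""
    u = np.zeros(n)
    u[a] = 1.0
    u[b] = sign
    return np.outer(u, u)

C = edge_psd_matrix(5, 1, 3, sign=-1.0)
assert np.all(np.linalg.eigvalsh(C) >= -1e-12)      # psd
rows, cols = np.nonzero(C)
assert set(rows) <= {1, 3} and set(cols) <= {1, 3}  # supported on the edge
```

Each such $C$ turns the single semidefinite constraint into a second-order cone constraint involving only the variables of one edge, which is what makes the relaxation cheaper to solve.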