# UMass LIVING

## August 29, 2009

### July 16 3-5 pm: Meeting Summary, Chapter 6 – Variational Methods in Parameter Estimation

Filed under: Uncategorized — umassliving @ 4:38 pm

to be filled in

## August 20, 2009

### (Aug 20 3-5pm) Meeting Summary: Wainwright – Estimating the “Wrong” Graphical Model

Filed under: Uncategorized — umassliving @ 5:08 pm

Things we covered today:

1. In the regularized surrogate likelihood (Eq 16) $\ell_B(\theta)$, we use $B(\theta)$ from the Bethe surrogate formulation covered in the survey paper. We then went through the steps of the joint estimation and prediction procedure in section 4.2. We noted that the noise model is localized for each element of $y$: $p(y|z) = \prod_{s = 1}^N p(y_s | z_s)$. Erik wondered whether this was done for simplicity or by necessity, and we thought it was for simplicity.
2. The parameter estimate based on the surrogate likelihood, $\hat{\theta}^n$, is asymptotically normal.
3. We had a long discussion about the relationship between global stability, Lipschitz stability, and strong convexity.  Any variational method whose entropy approximation is strongly convex is globally stable. Also, any variational method based on a convex optimization is Lipschitz stable.  I’m not sure if there’s a difference between Lipschitz stable and globally stable…
4. None of us knew how to derive eq 22, but we got some intuition by observing that if $\alpha = 0$, the observation $Y$ is pure noise and therefore useless. This is reflected in eq 22 by the removal of the term involving $Y$. Similarly, if $\alpha = 1$, then $Z_s = Y_s$, which is intuitive.
5. When the SNR $\alpha = 0$, in section 6.2 they show that $\nabla B(\hat{\theta}) = \mu^* = \nabla A(\theta)$, which means $\nabla B(\theta)$ will be different.
6. In Figure 4, the curve marked with the red diamonds (approximate model) is upper bounded, as stated in Theorem 7.  Figure 4 also illustrates that performing approximate inference (TRW method) using the approximate model (parameters) can be superior to performing approximate inference using the true model (black circles).
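To make the localized noise model concrete, here is a minimal sketch of one per-site channel with the right endpoints (my own construction for illustration, not necessarily the paper's eq 22): each $y_s$ copies $z_s$ with probability $\alpha$ and is replaced by a uniform random label otherwise, so $\alpha = 0$ makes $Y$ independent of $Z$ and $\alpha = 1$ gives $Y_s = Z_s$.

```python
import random

def channel(z, alpha, num_labels, rng):
    """Localized per-site observation: keep z_s with prob. alpha, else a uniform label."""
    return [z_s if rng.random() < alpha else rng.randrange(num_labels)
            for z_s in z]

rng = random.Random(0)
z = [1, 0, 1, 1, 0]
assert channel(z, 1.0, 2, rng) == z   # alpha = 1: Y_s = Z_s exactly
noisy = channel(z, 0.0, 2, rng)       # alpha = 0: Y ignores Z entirely
assert len(noisy) == len(z)
```

Because the channel factorizes over sites, $p(y|z) = \prod_s p(y_s|z_s)$ holds by construction, which is the "localized for simplicity" point discussed above.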

## August 13, 2009

### (Aug 20 3-5pm) Pre-meeting Overview: Wainwright – Estimating the “wrong” graphical model

Filed under: Uncategorized — umassliving @ 6:37 pm

M. J. Wainwright. Estimating the “wrong” graphical model: Benefits in the computation-limited setting. – http://www.eecs.berkeley.edu/~wainwrig/Papers/Wainwright06_JMLR.pdf

This paper is about variational approaches to parameter estimation and prediction in an MRF.  In particular, Wainwright argues that, in the variational setting, an inconsistent estimator of $\theta$ can be desirable, because it can offset errors made later in the prediction step.

Sections 2 and 3 are covered in the Graphical Model survey.

Some points to examine:

• the overall procedure presented in Section 4.2.
• the application to Gaussian mixtures in Section 6.1.
• the comparison of the tree-reweighted approach and sum-product in Section 7.

### (Aug 13 3-5pm) Meeting Summary: Wainwright / Jordan, ch 8, p214-233

Filed under: Uncategorized — umassliving @ 6:35 pm

The main points that will be covered in these subsections are

(1) Extending the LP view of mode finding that we saw for max-product on tree graphs to reweighted-max-product.

(2) Examples of first order LP relaxations.

(3) Higher order LP approximations and how they can be more accurate.

(1) Extending the LP view of mode finding that we saw for max-product on tree graphs to reweighted-max-product.

The tree-reweighted Bethe variational problem uses the same polytope $\mathbb{L}$ that defines the first order LP in the first half of this section. Thus, naturally, the tree-reweighted Bethe variational problem (and hence the reweighted max-product) has the same connection to first order LP as the regular Bethe problem (or regular max-product).

$\lambda^* := \log M^*$, where $M^*$ is the fixed point of the reweighted max-product algorithm, is a solution to the LP in the reweighted case. This holds when equations 8.23 are satisfied by the pseudo-max-marginals.
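As a sanity check on the fixed-point claim, here is a pure-Python toy (potentials invented by me) verifying that max-product messages on a small chain reproduce the exact max-marginals $M^*$, whose logs give the dual variables $\lambda^*$:

```python
from itertools import product

# unnormalized potentials on a 3-node binary chain 1-2-3 (illustrative numbers)
psi = [[1.0, 2.0], [1.5, 0.5], [1.0, 3.0]]   # node potentials
psi12 = [[2.0, 1.0], [1.0, 2.0]]             # edge (1,2)
psi23 = [[1.0, 3.0], [2.0, 1.0]]             # edge (2,3)

def joint(x):
    return (psi[0][x[0]] * psi[1][x[1]] * psi[2][x[2]]
            * psi12[x[0]][x[1]] * psi23[x[1]][x[2]])

# brute-force max-marginals M_s(x_s) = max over all other variables
M_brute = [[max(joint(x) for x in product(range(2), repeat=3) if x[s] == v)
            for v in range(2)] for s in range(3)]

# max-product messages along the chain (each message maxes out its subtree)
m32 = [max(psi[2][t] * psi23[s][t] for t in range(2)) for s in range(2)]
m21 = [max(psi[1][t] * psi12[s][t] * m32[t] for t in range(2)) for s in range(2)]
m12 = [max(psi[0][t] * psi12[t][s] for t in range(2)) for s in range(2)]
M1 = [psi[0][s] * m21[s] for s in range(2)]
M2 = [psi[1][s] * m12[s] * m32[s] for s in range(2)]

assert all(abs(M1[v] - M_brute[0][v]) < 1e-9 for v in range(2))
assert all(abs(M2[v] - M_brute[1][v]) < 1e-9 for v in range(2))
```

On a tree the message fixed point is exact, which is why $\lambda^* = \log M^*$ can serve as the LP dual solution; on graphs with cycles this breaks down, as discussed in 8.4.2.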

(2) Examples of first order LP relaxations.

Section 8.4.4 discusses examples where an LP problem is formulated from a graph problem. Example 8.3 shows how the Error-control coding problem may be thought of as an LP problem.

Steps in the process:

(a) Convert the error-control coding problem’s graph to a pairwise Markov random field, creating auxiliary variables to handle any interactions that involve more than two variables. Appendix E.3 should help in understanding this step.

(b) Identify the constraints that need to be enforced in the model (Equations 8.24, which may be rewritten as 8.25).

(c) Solve the LP problem (Feldman).

Example 8.4 shows the same for the (a) Ising model, (b) (Maximum Weight) Independent Set problem, (c) (Minimum Weight) Vertex Cover problem, (d) Max-cut problem. These are all examples of intractable problems, but a first order LP approximate solution is possible. Example 8.5 is another classic graph problem, the Maximum Weight Matching problem, solved using an LP. Equation 8.29 is the exact problem; Equation 8.30 is the LP version.
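To make the MWIS integer program concrete ($\max \sum_s w_s x_s$ subject to $x_s + x_t \le 1$ on every edge, $x_s \in \{0,1\}$), here is a brute-force solve on a toy path graph (weights invented); this is the exact IP that the first order LP relaxes:

```python
from itertools import product

# toy path graph 0-1-2-3 with node weights (illustrative numbers)
w = [3.0, 2.0, 2.0, 3.0]
edges = [(0, 1), (1, 2), (2, 3)]

# enumerate all binary assignments, keep those satisfying x_s + x_t <= 1
best_val, best_x = max(
    (sum(w[s] * x[s] for s in range(4)), x)
    for x in product((0, 1), repeat=4)
    if all(x[s] + x[t] <= 1 for s, t in edges)
)
assert best_x == (1, 0, 0, 1)   # take the two endpoints
assert best_val == 6.0
```

The LP relaxation replaces $x_s \in \{0,1\}$ with $x_s \in [0,1]$; on this tree-structured toy the relaxation is tight, but on graphs with cycles it can have fractional vertices.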

Other important points in this subsection:

(a) The first order LP relaxation of an integer program with binary variables has the strong persistency property. This is not true for variables with more than two states.

(b) MWIS and Max-cut are submodular maximization problems, and MWVC is a supermodular minimization problem, so none of the three is a regular binary QP. They are intractable in general, and LP approximate solutions are hence needed for all three.

(3) Higher order LP approximations and how they can be more accurate.

Section 8.5 discusses higher order LP relaxations by using the marginal polytopes defined on hypergraphs. So far, we have used the polytopes defined by the pairwise interactions in the graph. By considering polytopes defined on hypergraphs instead, we can enforce more constraints, which means the LP relaxations will be of higher order.

Higher order relaxation LPs are more accurate than (or at least as accurate as) the lower order relaxation LPs for the same graphical model. This is because the higher order relaxation constraints are in addition to the lower order ones.

Equation 8.34 is significant: $\mathbb{L}_1(G) \supseteq \mathbb{L}_2(G) \supseteq \mathbb{L}_3(G) \supseteq \dots \supseteq \mathbb{L}_k(G) \supseteq \dots \supseteq \mathbb{M}(G)$.

If $t$ is the tree-width of the hypergraph for the graphical model, then the $t^{th}$ LP relaxation is exact, i.e. its solution coincides with the solution over the original polytope: $\mathbb{L}_t(G) = \mathbb{M}(G)$.

Finally, example 8.6 shows how the second order relaxation adds additional constraints that make the solution “tighter” than a first order relaxation. This is done by solving the same problem solved earlier in Example 8.3 by two methods:

(a) binary MRF, which is a first order relaxation LP

(b) hypergraph MRF, which is a second order relaxation LP

Comparing the constraints, we see that $\tau_{23}$ has no constraints in (a).  Equation 8.38, applied in case (b), enforces the constraint on $\tau_{23}$.

Considering one candidate solution $\tilde{\tau} = (1, \frac{1}{2}, \frac{1}{2}, 0)$, we see it is valid in (a), but not valid in (b). Thus, adding higher order constraints results in more accurate solutions.

Lifting and Projecting

The introduction of additional parameters in order to achieve higher order relaxations is called lifting. The lifted polytope can then be projected back to the original, lower-dimensional space. Example 8.7 illustrates this. Lifting by introducing $\tau_{stu}$ results in the inequalities 8.41, which can be projected back onto the original pairwise parameters by using the simple Fourier-Motzkin elimination method. Some of the resulting inequalities correspond to the cycle inequalities or triangle inequalities, which were alluded to earlier in Example 4.1.
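The cycle inequalities can be seen in a small pure-Python check (a toy instance of my own, in the spirit of Example 4.1): on a triangle of binary variables, uniform singleton pseudomarginals with perfectly anti-correlated edges satisfy every first order (local marginalization) constraint, yet no true joint distribution can realize them, because every configuration of an odd cycle has at least one agreeing edge.

```python
from itertools import product

# pseudomarginals on a 3-cycle of binary variables: uniform singletons and
# perfectly anti-correlated pairwise marginals on every edge
tau_s = (0.5, 0.5)
tau_st = {(a, b): 0.5 if a != b else 0.0 for a, b in product((0, 1), repeat=2)}

# local consistency: each row of tau_st marginalizes to tau_s
assert all(abs(sum(tau_st[(a, b)] for b in (0, 1)) - tau_s[a]) < 1e-12
           for a in (0, 1))

# but every joint configuration of a 3-cycle has at least one agreeing edge, so
# any true distribution has sum over edges of P(x_s = x_t) >= 1, whereas the
# pseudomarginals above give that sum the value 0 -- a cycle inequality violation
edges = [(0, 1), (1, 2), (2, 0)]
assert all(any(x[s] == x[t] for s, t in edges)
           for x in product((0, 1), repeat=3))
```

This is exactly the kind of point that lies in $\mathbb{L}_1(G)$ but outside $\mathbb{M}(G)$, and that the lifted-then-projected constraints cut off.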

## August 11, 2009

### (Aug 13 3-5pm) Pre-meeting Overview: Wainwright / Jordan, ch 8, p214-233

Filed under: Uncategorized — umassliving @ 3:49 pm

Section 8.4.3 till 8.5

The main points that will be covered in these subsections are

(1) Extending the LP view of mode finding that we saw for max-product on tree graphs to reweighted-max-product.

(2) Examples of first order LP relaxations.

(3) Higher order LP approximations and how they can be more accurate.

Section 8.4.3 extends the arguments from earlier subsections in this chapter to show that reweighted max-product (and the tree-reweighted Bethe variational problem) can be thought of as an LP problem. Conditions 8.23 have to be met for the max-product solution and the LP solution to be duals of one another.

Section 8.4.4 discusses examples where an LP problem is formulated from a graph problem. Example 8.3 shows how the Error-control coding problem may be thought of as an LP problem. Example 8.4 shows the same for the (a) Ising model, (b) (Maximum Weight) Independent Set problem, (c) (Minimum Weight) Vertex Cover problem, (d) Max-cut problem. These are all examples of intractable problems, but a first order LP approximate solution is possible. Example 8.5 is another classic graph problem, the Maximum Weight Matching problem, solved using an LP.

Section 8.5 discusses higher order LP relaxations by using the marginal polytopes defined on hypergraphs (rather than the polytopes defined by pairwise interactions in a tree). The main claim and proof in this section is that higher order relaxation LPs are more accurate than lower order relaxation LPs for the same graphical model. The $t^{th}$ relaxation LP solution, where $t$ is the tree-width of the hypergraph for the graphical model, is the same as the exact solution on the original polytope $\mathbb{M}(G)$.

Finally, example 8.6 shows how the second order relaxation adds additional constraints that make the solution “tighter” than in a first order relaxation. This is done by solving the same problem solved earlier in Example 8.3 by the two methods, comparing the constraints, and considering one solution $\tilde{\tau} = (1, \frac{1}{2}, \frac{1}{2}, 0)$.

We will be reading papers next week, so please come with suggestions for papers you think are interesting.

## August 8, 2009

### Duality for Linear Programs

Filed under: Uncategorized — umassliving @ 5:11 pm

I felt bad that my explanation of duality for linear programs was not especially cogent during the meeting, so I decided to write up a quick ~5 minute primer on duality.

First, I should mention some references.  The bulk of what will follow is from Introduction to Linear Optimization by Bertsimas and Tsitsiklis.  They also write a lot of other stuff on optimization that is probably useful if you’re really interested in this area.  Also, there’s a book called Convex Optimization by Boyd and Vandenberghe that talks about the more general case of convex problems rather than strictly linear problems.  Both of these books are in our vision lab library in Moe’s cubicle.  Lastly, duality plays an important role in Support Vector Machines; in particular, going from the primal to the dual is necessary to see how kernels can be used with SVMs.  The notes on this page – http://www.stanford.edu/class/cs229/materials.html – are a good overview of duality for SVMs.  (Also let me plug the fact that these lectures can be found on YouTube.)

To begin with, consider the following dietary problem – you have $n$ types of food and $m$ types of nutrients.  You know that one unit of food $j$ gives you $a_{ij}$ units of nutrient $i$.  You want to achieve an ideal diet that contains $b_i$ units of nutrient $i$ using amount $x_j$ of food $j$, and finally you have a cost $c_j$ associated with each food $j$.  Achieving a mix of foods that gives you your desired nutrients at minimal cost is then the following linear program:

minimize $c^T x$ subject to $Ax = b$

Now to that, let’s add a vector of Lagrange multipliers $p$ to form the generalized Lagrangian:

minimize $c^T x + p^T (b - Ax)$

This is what Moe was referring to as turning a constrained problem into an unconstrained problem, but it is different from the dual.

Let’s now define $q(p) = \min_x c^T x + p^T (b - Ax)$, and also define $x^*$ as an optimal feasible solution to the original (primal) minimization problem.  Knowing this, we can see that for any $p$:

$q(p) = \min_x c^T x + p^T (b - Ax) \leq c^T x^* + p^T (b - Ax^*) = c^T x^* \leq c^T x'$

The second equality comes from the fact that $x^*$ is a feasible solution and so satisfies $Ax = b$, and the final inequality is for all feasible $x'$ and follows from the fact that $x^*$ is an optimal solution.

Thus we know that for any $p$, $q(p)$ is a lower bound to the original primal problem, so we want to maximize $q(p)$ over all possible values of $p$ to get the tightest lower bound possible.  We can also rearrange the terms:

$q(p) = \min_x p^T b + (c^T - p^T A)x$

Now we can see that if $p^T A \neq c^T$, then we can choose $x$ supported on a non-zero component of $c^T - p^T A$ with the opposite sign and make the minimum value equal to $-\infty$.  Putting together this fact with the fact that we want to maximize $q(p)$, we now have our dual problem:

$\max_p q(p)$ subject to $p^T A = c^T$

What happened here?  Our constraints turned into variables $p$, and our variables $x$ turned into constraints $p^T A = c^T$.  This is going to be true in general – going from the primal to the dual will change a minimization problem to a maximization problem, change constraints to variables and variables to constraints.

Moreover, think about going back to the original primal problem and adding the constraint $x \geq 0$, corresponding to the constraint that we have to use a non-negative amount of food.  Then you can see that we only need $c^T \geq p^T A$ in order to avoid $-\infty$, and hence direct inequality constraints on variables $x \geq 0$ in the primal turn into inequality constraints in the dual, and inequality constraints in the primal turn into direct inequality constraints on variables in the dual.  Equality constraints in one form turn into free variables in the other form, as we saw originally.

What’s the point of all of this?  One thing to note is what happens if you manage to find a pair $(x', p')$ satisfying $c^T x' = q(p')$.  We know from above that $q(p) \leq c^T x$ for any $p$ and any feasible $x$; therefore $x'$ is an optimal solution to the primal problem, and vice versa $p'$ is optimal for the dual.  Thus, one obvious thing to try when solving a linear program is to solve the primal and dual at the same time.  The difference in the values gives you bounds on what the optimal solution can be, and once you’ve closed the gap to zero, you know you’ve found the optimal solution.

There is also the stronger result of strong duality, that says if a linear programming problem has an optimal solution, so does its dual, and the optimal costs of the two problems are equal.  So if you find a solution to one problem, then you know the other must be feasible and have the same optimal solution.

Note these results depend on the fact that the original problem was linear, they will not hold for any arbitrary problem.  However, many results can be extended to convex problems.  There are also additional details such as complementary slackness and KKT conditions that play a deeper role in understanding duality and help with solving such problems.  The notion of complementary slackness also gives an intuition behind the meaning of the dual problem for the original dietary problem: we know that food $j$ has cost $c_j$.  In the dual, we can think of $p$ as prices on the nutrients, and based on how much food $j$ has of each nutrient, we have a second price $p^T A_j$ on food $j$, where $A_j$ is the $j$th column of $A$.  The primal-dual relationship says that, when the prices on the nutrients are correctly chosen, the constraints are satisfied and the price accounting balances: at the optimal solution, $c^T x^* = (p^*)^T b$.
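To tie the diet example together, here is a tiny numeric instance (numbers invented) checked by grid search in pure Python: every feasible dual value lower-bounds every feasible primal value (weak duality), and the two optima coincide (strong duality).

```python
# diet LP (illustrative numbers): one nutrient, two foods
# primal:  min 2*x1 + 3*x2   s.t.  x1 + x2 = 10,  x >= 0
# dual:    max 10*p          s.t.  p <= 2,  p <= 3   (one constraint per food)
c, b = (2.0, 3.0), 10.0

# grid over feasible points of both problems
primal = [c[0] * x1 + c[1] * (b - x1) for x1 in range(0, 11)]  # x2 = 10 - x1
dual = [b * 0.1 * k for k in range(0, 21)]                     # p in [0, 2]

# weak duality: every dual value lower-bounds every primal value
assert max(dual) <= min(primal) + 1e-9
# strong duality: the optima agree (both 20 here, at x = (10, 0) and p = 2)
assert abs(min(primal) - 20.0) < 1e-9
assert abs(max(dual) - 20.0) < 1e-9
```

The optimal dual price $p^* = 2$ is exactly the cost of the cheapest food per unit of nutrient, which matches the complementary-slackness intuition above.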

## August 6, 2009

### (Aug 06 3-5pm) Meeting Summary: Wainwright / Jordan, ch 8, p195-213

Filed under: Uncategorized — umassliving @ 5:47 pm

Discussed the variational formulation of mode computation in theorem 8.1 and the zero-temperature limit of the Bethe variational principle. The resulting IP can be transformed to an LP over $\mathbb{L}(G)$ (more precisely, its dual). Gary provided a description of the dual of an LP.

Covered the max-product algorithm for Gaussians and the process of converting the standard Gaussian in exponential form to a pairwise MRF. Gary pointed out the typo in 8.15 (see pre-meeting overview comments).

Covered LP relaxation and how it provides an approximation to the exact LP. Discussed the relationship between the extreme points in $\mathbb{M}(G)$ and $\mathbb{L}(G)$ and the definition of weakly and strongly persistent solutions (Gary won the debate, after Manju sided with him).

Covered the main point of section 8.4.2, that max-product does not solve the relaxed LP in the case of graphs with cycles (contrary to the analogous result for sum-product).

### (Aug 06 3-5pm) Pre-meeting Overview: Wainwright / Jordan, ch 8, p195-213

Filed under: Uncategorized — umassliving @ 10:06 am

Section 8 covers the problem of mode computation: that is, given a probability distribution, compute the most probable configuration, or mode. It turns out that this problem also has a variational representation, as made explicit in Theorem 8.1 (see Appendix B.5 for the proof).  Similar to our treatment in chapter 4, this variational problem can be solved exactly for tree-structured and Gaussian graphical models. Some points to focus on for this week:

• The objective function is simpler than the one for inference since it does not include $A^*$, and so the main source of difficulty is in characterizing the space of mean parameters $\mathcal{M}$ explicitly. It should be clear that the optimization problem is an integer program.
• Section 8.2 transforms the IP into an LP over the set $\mathbb{L}(G)$ for the case of tree-structured distributions and shows that the max-product updates arise as a Lagrangian method for solving the dual of the LP. Note that this result does not carry over to non-tree-structured graphs, as shown in section 8.4.2.
• Section 8.3 discusses the exact computation of the mode for Gaussian graphical models (see Appendix C).
• Section 8.4.1 discusses first order LP relaxation for computing the mode of a discrete graphical model with cycles, and discusses the concept of strong and weak persistency.
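For Section 8.3, a minimal sketch of the Gaussian case (toy numbers of my own): writing the Gaussian in exponential form $p(x) \propto \exp(\langle h, x\rangle - \frac{1}{2} x^T J x)$, the mode coincides with the mean and is found by solving the linear system $Jx = h$, which is effectively what the Gaussian max-product updates compute on a tree.

```python
# Gaussian in exponential (information) form: p(x) ∝ exp(h·x - x·Jx/2).
# Its mode equals its mean, the solution of J x = h (toy 2x2 instance).
J = [[2.0, 0.5], [0.5, 1.0]]   # precision matrix (positive definite)
h = [1.0, 2.0]

# solve the 2x2 system by Cramer's rule
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
x = [(J[1][1] * h[0] - J[0][1] * h[1]) / det,
     (J[0][0] * h[1] - J[1][0] * h[0]) / det]

# at the mode the gradient h - Jx vanishes
grad = [h[i] - sum(J[i][j] * x[j] for j in range(2)) for i in range(2)]
assert all(abs(g) < 1e-9 for g in grad)
```

For a Gaussian the objective is a concave quadratic, so the stationarity condition $h - Jx = 0$ is both necessary and sufficient for the mode.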

## July 31, 2009

### (Jul 30) Meeting Summary: Wainwright / Jordan, ch 7

Filed under: Uncategorized — umassliving @ 10:43 am

Discussion from the meeting:

• Why is the Bethe approximation to the entropy $H_{Bethe}$ not generally concave?
• Since we know that $A^*$ is convex, and is the negative entropy, we know that the entropy is concave.  However, $H_{Bethe}$ does not generally correspond to a true entropy.  We assume that the edges form a tree and compute the entropy as if they did, subtracting off the mutual information on each edge, but this is not a true entropy unless the edges actually do form a tree, and hence is not generally concave.
• On page 167, in the discussion of the projection operation $\mapsto$, how do we know that $\mu(F) \in \mathcal{M}(F)$?
• To see this, we need to go back to the original definition of $\mathcal{M}$.  Assume $\mu \in \mathcal{M}$.  By the definition of $\mathcal{M}$, there exists some distribution $p$ such that $\mu_\alpha = \mathbb{E}_p[\phi_\alpha(x)]$ for all sufficient statistics indexed by $\alpha$, where the expectation is with respect to $p$.  Clearly then, we can use the exact same $p$ to show that $\mu(F) \in \mathcal{M}(F)$ since the requirements are the same, but only for a subset of the sufficient statistics $\mathcal{I}(F)$.
• Understand 7.2.1.  In particular, consider the distribution over spanning trees $\rho$ that places all its mass on one specific spanning tree.  This seems very similar to structured mean field, where we just use the one spanning tree as the tractable subfamily we optimize over.  Yet structured mean field gives us a lower bound on $A(\theta)$ whereas tree-reweighted Bethe gives us an upper bound.  What accounts for this difference?  For instance, why does structured mean field give a lower bound when we know that $A^*(\mu(F)) \leq A^*(\mu)$, and we are subtracting off $A^*$ in the objective function?
• In mean field, we have that $A^*(\tau) = A^*(\mu(F)) = A^*(\mu)$, as proven in last week’s meeting summary.  Understanding this is the key to understanding this issue.  In mean field, we assumed that the canonical parameters that were outside those defined for the tractable subgraph were 0.  Therefore, $A(\theta)$ is the same, whether you include the inner product with the extra parameters, which are all zero, or not.  The convex combination of spanning trees is different because of the projection operator $\mapsto$ above.  We are still searching across the full space of possible mean parameters, but when we compute entropy, we use the projection operator to essentially ignore all the mean parameters that are not in our subgraph.  This differs from mean field, where the mean parameters that are not in our subgraph are deterministic functions of the mean parameters that are in the subgraph, corresponding to the functions $g$ in Wainwright and $\Gamma$ in the structured mean field optimization paper.  This difference means that you are searching over an outer bound as opposed to an inner bound, and are using an upper bound to the objective function, thus you are guaranteed to get an upper bound to $A(\theta)$.
• There should be a $\rho(T')$ in front of the second inner product in the second equation on page 176.
• This is for the $T'$ they mention that has $\Pi^{T'}(\lambda)$ different from zero.  Additionally, if you are wondering why $A^*$ is strictly convex, it is a consequence of the fact that $A$ is strictly convex.  The details are in Appendix B.3.
• Understand the general ideas in 7.2.2 and 7.2.3, the exact details are not as important.  For instance, understand why $H_{ep}$ is generally not concave, but why using the re-weighting subject to $\sum_{\ell=1}^{d_I} \rho(\ell) = 1$ gives a concave function.
• Since entropy is concave, if we take a positive combination of entropy functions, that should also be concave.  In $H_{ep}$, we have a negative amount $(1-d_I) H(\tau)$, so it is not concave.  Re-weighting subject to $\sum_{\ell=1}^{d_I} \rho(\ell) = 1$, or actually, $\sum_{\ell=1}^{d_I} \rho(\ell) \leq 1$, will fix this problem, making the approximation concave.
• Understand the benefit of the convexified algorithms for algorithmic stability and what it means for an algorithm to be globally Lipschitz stable, why that is important, and how Figure 7.3 suggests that ordinary sum-product is not globally Lipschitz stable.
• One other important take-away from Figure 7.3 is why the approximations are better the lower the coupling strength (the size of the parameter on the edges).  This is because the lower the coupling strength, the more independent the individual nodes, which in the extreme case of a coupling strength of zero becomes a simple product distribution.  The higher the coupling strength, the more important the loops in the graph become, and the more difficult inference becomes.
• Understand how the convexified algorithms can be used within learning in 7.5.  We use the fact that when $B$ upper bounds the true cumulant function, then the surrogate likelihood lower bounds the true likelihood.  Yet in chapter 6, we saw how using mean field within EM also gives us a lower bound on the true likelihood, despite the fact that mean field gives a lower bound on the cumulant function.  What accounts for this discrepancy?
• This is a check on your understanding of mean field within EM.  If you look at what mean field is approximating in EM, you should see why a lower bound on the cumulant gives a lower bound, whereas in 7.5 we need an upper bound on the cumulant to give a lower bound.
• Understand the intuitive interpretation of example 7.3.
• Another interesting point is the idea, in the final paragraph of the chapter, that it may be better to have an inconsistent estimator of the true parameter $\theta^*$, if your interest is in prediction (inference) and you must use an approximate inference algorithm.

Slightly more esoteric details:

• On page 184, why do $\mu_i$ show up along the diagonal of $N_1[\mu]$?  Hint: each random variable $X_i$ is binary valued.
• The diagonal elements are $\mathbb{E}[X_i^2]$.  Since $X_i$ is binary, we have $\mathbb{E}[X_i^2] = \sum_{0,1} X_i^2 p(X_i) = p(X_i = 1) = \mu_i$.
• Still unsure of whether you can go the other direction and show that $\text{cov}(X) \succeq 0$ implies $N_1[\mu] \succeq 0$.  The wording in Wainwright seems to imply that you can, but it’s not clear to me how to do that using the Schur complement formula as they claim.
• Update: Kim, Kojima, Yamashita, “Second Order Cone Programming Relaxation of Positive Semidefinite Constraint” has a simple proof of this equivalence.  Let $A = \left( \begin{array}{cc} 1 & -\mu^T \\ 0 & \mathbf{I} \end{array} \right)$.  Then $B = A^T N_1[\mu] A$ where $B = \left( \begin{array}{cc} 1 & 0 \\ 0 & \mathbb{E}[XX^T] - \mu \mu^T \end{array} \right)$.  Clearly $B$ is positive semidefinite if and only if $\mathbb{E}[XX^T] - \mu \mu^T = \text{cov}(X) \succeq 0$, and since $A$ is nonsingular, $N_1[\mu]$ is psd if and only if $B$, and hence $\text{cov}(X)$, is psd.
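The nonsingular-congruence argument above can be checked numerically in the one-variable binary case (scalar $\mu$, toy value of my own): with $\mathbb{E}[X^2] = \mu$, conjugating $N_1[\mu]$ by $A$ produces a block-diagonal $B$ with $\text{cov}(X) = \mu - \mu^2$ in the corner.

```python
# one binary variable X with mean mu: N1[mu] = [[1, mu], [mu, E[X^2]]],
# and E[X^2] = mu since X is binary.  Check B = A^T N1[mu] A is block diagonal
# with cov(X) in the bottom-right, as in the Kim-Kojima-Yamashita argument.
mu = 0.3
N1 = [[1.0, mu], [mu, mu]]        # E[X^2] = mu for binary X
A = [[1.0, -mu], [0.0, 1.0]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

At = [[A[j][i] for j in range(2)] for i in range(2)]   # transpose of A
B = matmul(matmul(At, N1), A)

cov = mu - mu ** 2                # variance of a Bernoulli(mu)
assert abs(B[0][0] - 1.0) < 1e-12 and abs(B[1][1] - cov) < 1e-12
assert abs(B[0][1]) < 1e-12 and abs(B[1][0]) < 1e-12
```

Since $A$ is nonsingular, $N_1[\mu] \succeq 0$ iff $B \succeq 0$, and the block form of $B$ makes that equivalent to $\text{cov}(X) \geq 0$, exactly the equivalence in question.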

## July 23, 2009

### (Jul 30 3-5pm) Pre-meeting Overview: Wainwright / Jordan, ch 7

Filed under: Uncategorized — umassliving @ 4:26 pm

The approximation methods we’ve covered in chapters 4 and 5 have both been nonconvex.  This chapter addresses the issue of how to create convex approximations, where both the set $\mathcal{L}$ that we use is convex and the objective function that we seek to maximize is concave.

7.1 deals with the generic framework for the convex approximations.  7.2 shows how to convexify the algorithms we saw in chapter 4, while 7.3 shows how to create new algorithms that are convex, not based on using convex combinations.  7.4 shows a separate benefit of convexity in terms of algorithmic stability, and 7.5 discusses how the convex approximations for inference fit in to learning.

Errata:

• There is a missing subscript in equation (7.1) on page 167, the expectation should be $\mathbb{E}[\phi_\alpha(X)]$.
• There should be (I believe) a $\rho(T)$ in front of the second inner product in the second equation on page 176.

• Why is the Bethe approximation to the entropy $H_{Bethe}$ not generally concave?
• On page 167, in the discussion of the projection operation $\mapsto$, how do we know that $\mu(F) \in \mathcal{M}(F)$?
• Understand 7.2.1.  In particular, consider the distribution over spanning trees $\rho$ that places all its mass on one specific spanning tree.  This seems very similar to structured mean field, where we just use the one spanning tree as the tractable subfamily we optimize over.  Yet structured mean field gives us a lower bound on $A(\theta)$ whereas tree-reweighted Bethe gives us an upper bound.  What accounts for this difference?  For instance, why does structured mean field give a lower bound when we know that $A^*(\mu(F)) \leq A^*(\mu)$, and we are subtracting off $A^*$ in the objective function?
• Convince yourself that you can obtain the tree-reweighted sum-product updates in equation (7.12) on page 172 by following the Lagrangian formulation of ordinary sum-product in chapter 4, and using the new messages $M_{ts}(x_s) = \exp(\frac{1}{\rho_{st}} \lambda_{ts}(x_s))$.
• Understand the general ideas in 7.2.2 and 7.2.3, the exact details are not as important.  For instance, understand why $H_{ep}$ is generally not concave, but why using the re-weighting subject to $\sum_{\ell=1}^{d_I} \rho(\ell) = 1$ gives a concave function.
• Understand the general idea in 7.3.1.  The details are not as important, although for the interested, I have put some notes on this section below.
• Understand the benefit of the convexified algorithms for algorithmic stability and what it means for an algorithm to be globally Lipschitz stable, why that is important, and how Figure 7.3 suggests that ordinary sum-product is not globally Lipschitz stable.
• Understand how the convexified algorithms can be used within learning in 7.5.  We use the fact that when $B$ upper bounds the true cumulant function, then the surrogate likelihood lower bounds the true likelihood.  Yet in chapter 6, we saw how using mean field within EM also gives us a lower bound on the true likelihood, despite the fact that mean field gives a lower bound on the cumulant function.  What accounts for this discrepancy?
• Understand the intuitive interpretation of example 7.3.

Slightly more esoteric details:

• On page 184, why do $\mu_i$ show up along the diagonal of $N_1[\mu]$?  Hint: each random variable $X_i$ is binary valued.
• On page 184, why does the constraint $N_1[\mu] \succeq 0$ imply the constraint $\text{cov}(X) \succeq 0$?  Use the Schur complement formula and the stated fact that if $M$ is symmetric and positive-definite, then so is the Schur complement of $D$ in $M$.
• Here is a quick proof of that fact: we want to show that, if for all $x$, $x^T M x > 0$, then, for all $y$, $y^T S y > 0$ where $S$ is the Schur complement of $D$.  Split $x$ into two components, $x_1$ and $x_2$, with $x_1$ matching the size of $A$, and $x_2$ matching the size of $D$.  Then we have $x^T M x = x_1^T A x_1 + x_2^T C x_1 + x_1^T B x_2 + x_2^T D x_2 > 0$.  Now substitute in $C = B^T$ since $M$ is symmetric, and assign $x_2 = -D^{-1} B^T x_1$.  Putting that in and simplifying, we get $x_1^T A x_1 - x_1^T B D^{-1} B^T x_1 = x_1^T S x_1 > 0$, where $x_1$ was arbitrary, thus proving that $S$ is positive-definite.
• Now, how can we apply that to prove that $N_1[\mu] \succeq 0$ implies $\text{cov}(X) \succeq 0$?  If we let the single element 1 in the top left of $N_1[\mu]$ be $A$, and $\mathbb{E}[XX^T]$ in the bottom right be $D$, and take the Schur complement of $A$, we get the fact that, if $N_1[\mu] \succeq 0$, then $\mathbb{E}[XX^T] - \mu\mu^T \succeq 0$, and the second term is exactly $\text{cov}(X)$, thus proving the implication.
• One thing I am unsure of is whether you can go the other direction, and show that $\text{cov}(X) \succeq 0$ implies $N_1[\mu] \succeq 0$.  The wording in Wainwright seems to imply that you can, but it’s not clear to me how to do that using the Schur complement formula as they claim.
• The important thing is that we know that the constraint $N_1[\mu] \succeq 0$ is an outer bound to the marginal polytope $\mathbb{M}(K_m)$.  To show this, we use the fact that if there is a distribution $p(X)$ that realizes mean parameter $\mu$, then letting $Y$ be the random vector $(1, X)$, we have for any vector $a$, $a^T N_1[\mu] a = a^T \mathbb{E}[YY^T] a = \mathbb{E}[(a^T Y)^2] \geq 0$, proving that $N_1[\mu] \succeq 0$.
• So right now, we have $\mathbb{M}(K_m)$ is outer bounded by the constraint $N_1[\mu] \succeq 0$ which in turn is outer bounded by the constraint $\text{cov}(X) \succeq 0$, and uncertain as to whether the two constraints are equivalent or not.
• Why is it valid to replace the quantity within the determinant on page 184, $\text{cov}(X)$, with $N_1[\tau]$, as in equation (7.22) on page 185?  Again we use the Schur complement formula, and the property that $\det(M) = \det(A)\det(S)$ where $S$ is the Schur complement of $A$ in $M$ (see properties of determinants).  Using this and the fact above that, by setting $A$ as the single element 1 in the top left corner of $N_1[\mu]$, the Schur complement of $A$ in $N_1[\mu]$ is $\text{cov}(X)$, shows that the substitution is valid.
• Why does setting canonical parameters as in (7.28) lead to mean parameters matching the empirical mean parameters?  The paper Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching gives a proof that the optimal $\tau$ returned by the modified sum-product algorithm satisfies an admissibility condition that $\langle \theta, \phi(x) \rangle + \text{constant} = \sum \log \tau_s(x_s) + \sum \rho_{st} \log \frac{\tau_{st}(x_s, x_t)}{\tau_s(x_s) \tau_t(x_t)}$.  Using the $\tilde{\theta}$ given in (7.28), we can see that $\hat{\mu}$ satisfies this condition, and since the solution is unique, we have $\hat{\mu} = \tau$.
• On page 193, how do we know that $\|\theta\|_q = \sup_{\|\tau\|_{q'} \leq 1} \langle \tau, \theta \rangle$?  From Hölder’s inequality.