Cleared up confusion over univariate versus multivariate sufficient statistics. The key point was that the sufficient statistic $\phi(x)$ will in general be a vector, not a scalar.

“Marginalization” for a Gaussian means computing the mean and covariance; in general it means computing the mean parameters (for discrete random variables using the standard overcomplete sufficient statistics, this coincides with the usual definition of marginalization).
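A quick numerical check of the parenthetical claim, as a sketch: with overcomplete indicator sufficient statistics, the mean parameters are exactly the ordinary marginals (the variable names and the toy joint here are my own, not from the monograph).

```python
import numpy as np

# Toy joint distribution over two ternary variables (X1, X2): rows = x1, cols = x2.
rng = np.random.default_rng(0)
p = rng.random((3, 3))
p /= p.sum()

# Overcomplete indicator statistics: I[X1 = j] for each state j.
# Mean parameter mu_{1;j} = E[ I[X1 = j] ] = sum over x2 of p(j, x2).
mu_1 = np.array([p[j, :].sum() for j in range(3)])

# Ordinary marginalization: sum X2 out of the joint.
marginal_1 = p.sum(axis=1)

assert np.allclose(mu_1, marginal_1)  # mean parameters coincide with the marginals
```

For a Gaussian the same computation with statistics $(x, xx^\top)$ yields the mean and second-moment matrix, which is why “marginalization” there amounts to computing mean and covariance.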

Note that a symbol is missing from the beginning of equation 4.63. Also, starting on page 113 and repeated several other times, the text references equation 4.48 when it actually means equation 4.58.

Answer to the pre-meeting question of how we know that $\mathcal{M}$ is convex: for any given projection onto a fixed direction, the corresponding constraint set is convex; $\mathcal{M}$ is then just the intersection of all of these convex sets, and hence is convex itself.
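A complementary direct check (not the intersection argument above, and with my own toy variable names): a convex combination of two mean-parameter vectors is realized by the mixture of the two underlying distributions, which is itself a valid distribution, so the combination stays in $\mathcal{M}$.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_params(p, phi):
    # phi: (num_states, d) matrix of sufficient-statistic values; p: distribution over states.
    return p @ phi

states = 5
phi = rng.random((states, 3))            # arbitrary vector-valued sufficient statistic
p1 = rng.random(states); p1 /= p1.sum()  # two arbitrary distributions
p2 = rng.random(states); p2 /= p2.sum()

lam = 0.3
mix = lam * p1 + (1 - lam) * p2          # the mixture is again a distribution

# Convex combination of mean parameters = mean parameters of the mixture,
# so it lies in M as well.
lhs = lam * mean_params(p1, phi) + (1 - lam) * mean_params(p2, phi)
rhs = mean_params(mix, phi)
assert np.allclose(lhs, rhs)
```

This works because expectation is linear in the distribution, which is the underlying reason $\mathcal{M}$ is convex however one slices it.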

How do we get the term-by-term entropy approximation 4.68? If you start with the definition of entropy as $H(p) = -\mathbb{E}_p[\log p(X)]$, use equation 4.58 for $p$, and in addition ignore the log partition function, you can derive 4.68; but this is a mistake, because the log partition function ties all the components together, so you can't decompose the probability. The real approximation, then, is to assume that the distribution factorizes into a product over its components, and under that assumption you can derive 4.68.
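To make the decomposition step explicit (the notation here is my reconstruction in generic exponential-family symbols, not necessarily the monograph's exact indices): if the distribution really did factorize into independent blocks, the entropy would split term by term,

```latex
H(p) \;=\; -\,\mathbb{E}_p\!\left[\log p(X)\right],
\qquad
p(x) \;=\; \prod_{\ell} p_\ell(x_\ell)
\;\Longrightarrow\;
H(p) \;=\; -\sum_{\ell} \mathbb{E}_p\!\left[\log p_\ell(X_\ell)\right]
      \;=\; \sum_{\ell} H(p_\ell).
```

The true distribution contains a single global log partition function $A(\theta)$ coupling every factor; quietly dropping it is exactly what turns the exact entropy into the factorized, term-by-term version.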

In the pre-meeting overview, there was the question of what would happen in example 4.9 (which shows the Bethe approximation as a special case of EP) if you took the sufficient statistics associated with one particular node $s$ and put them together as a single element of the intractable component. What I believe will happen is that you will lose the marginalization constraints at node $s$ for all edges connected to $s$. In addition, the entropy approximation will change, since the edge term at $s$ will no longer be equal to the mutual information, due to the lack of the marginalization constraint.

Deriving equation 4.77: the key is how to take the derivative of the entropy $H(\tau)$. Unlike in the proof of Theorem 4.2, we do not know the exact form of the distribution, and so we can't assume that the mean parameters $\tau$ are the marginals and use them in the standard entropy formula. It turns out that the derivative of the entropy is the negative of the canonical parameters that correspond to $\tau$. This is because (as you can read on page 68) the conjugate dual $A^*$ is the negative entropy, and $\nabla A^*$ gives the mapping from mean parameters to canonical parameters. You can also prove this directly: let $\theta(\tau)$ be the canonical parameters associated with $\tau$, use the fact that $A^*(\tau) = \langle \theta(\tau), \tau \rangle - A(\theta(\tau))$, and take the derivative of this expression. Remember that $\theta(\tau)$ depends on $\tau$, so you'll need to take that into account when you take the derivative with respect to $\tau$.
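The direct computation sketched above goes through as follows (assuming the standard identities $-H(\tau) = A^*(\tau)$ and $\nabla A(\theta(\tau)) = \tau$ from the monograph):

```latex
\frac{\partial A^*}{\partial \tau}
  \;=\; \frac{\partial}{\partial \tau}
        \Big[ \langle \theta(\tau), \tau \rangle - A(\theta(\tau)) \Big]
  \;=\; \theta(\tau)
        + \Big(\tfrac{\partial \theta}{\partial \tau}\Big)^{\!\top} \tau
        - \Big(\tfrac{\partial \theta}{\partial \tau}\Big)^{\!\top} \nabla A(\theta(\tau))
  \;=\; \theta(\tau),
```

since the last two terms cancel by $\nabla A(\theta(\tau)) = \tau$. Hence $\partial H / \partial \tau = -\theta(\tau)$: the negative of the canonical parameters corresponding to $\tau$, as claimed.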

If anyone figures out how to derive equation 4.78, please put that in a comment.

One thing that wasn’t mentioned during the meeting, and which may be either important or obvious, is that the parameter $\lambda^\ell$ does not appear in the augmented distribution for term $\ell$. Therefore, you can compute the expectation of $\Phi^\ell$ with respect to the augmented distribution, and then adjust $\lambda^\ell$ so that the expected value of $\Phi^\ell$ is the same under the base distribution without changing the augmented distribution; this would not be true if you adjusted any of the other $\lambda$’s.
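Symbolically (my notation, not necessarily the monograph's, with $f_\ell$ a placeholder for the exact intractable term): if the base distribution is $q(x) \propto \exp\{\langle\theta,\phi(x)\rangle + \sum_m \langle\lambda^m, \Phi^m(x)\rangle\}$, then the augmented distribution for term $\ell$ swaps the approximating factor for the exact one,

```latex
q^{\ell}(x) \;\propto\;
  \exp\Big\{ \langle \theta, \phi(x) \rangle
    \;+\; \textstyle\sum_{m \neq \ell} \langle \lambda^m, \Phi^m(x) \rangle
    \;+\; f_\ell(x) \Big\},
```

so $\lambda^\ell$ is absent from $q^{\ell}$ by construction. Moment-matching $\mathbb{E}_q[\Phi^\ell] = \mathbb{E}_{q^{\ell}}[\Phi^\ell]$ by adjusting $\lambda^\ell$ therefore leaves the right-hand side fixed, which is what makes the update well defined.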

Another question for thought: at the bottom of p123, it says the entropy associated with the augmented distribution does not have an explicit form, but can be computed easily. How would you compute it, and why can’t you do the same thing for the original entire distribution?

In the middle of p124, there is a typo: the symbol immediately to the right of the product is not the one intended.
