UMass LIVING

June 30, 2009

(Jul 2 3-5pm) Pre-meeting Overview: “GBP” & “EP for LDA”

Filed under: Uncategorized — umassliving @ 2:19 pm

Paper 1: “Constructing Free-Energy Approximations and Generalized Belief Propagation Algorithms”, Yedidia, Freeman & Weiss

Sections I and II are an intro and overview, covering the basics of factor graphs and BP.  Useful to review, although we probably won’t spend any time on it.

Sections III and IV give the approximation framework – think about how this relates to the variational framework given in Wainwright and Jordan.  For instance, what corresponds to U(b) and H(b)?
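For concreteness, here is the standard decomposition in Yedidia et al.’s notation (signs and constants may differ slightly from the paper):

F(b) = U(b) - H(b), \qquad U(b) = \sum_{\mathbf{x}} b(\mathbf{x})\, E(\mathbf{x}), \qquad H(b) = -\sum_{\mathbf{x}} b(\mathbf{x}) \ln b(\mathbf{x})

Minimizing F(b) over valid beliefs recovers -\ln Z, so with E(\mathbf{x}) = -\langle \theta, \phi(\mathbf{x}) \rangle this is, up to sign, the same optimization as A(\theta) = \sup_{\mu} \{ \langle \theta, \mu \rangle + H(\mu) \} in Wainwright and Jordan: U(b) plays the role of -\langle \theta, \mu \rangle and H(b) the role of the entropy H(\mu).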

Section V puts the Bethe approximation in this framework and covers some of its properties, and Section VI covers the correspondence between the Bethe free energy and BP.  Useful for review (although the subsections dealing with hard constraints may not be as important), but we probably won’t spend any time on these either.
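As a refresher, the Bethe free energy has the familiar form (modulo notation):

F_{\text{Bethe}} = \sum_a \sum_{\mathbf{x}_a} b_a(\mathbf{x}_a) \ln \frac{b_a(\mathbf{x}_a)}{f_a(\mathbf{x}_a)} - \sum_i (d_i - 1) \sum_{x_i} b_i(x_i) \ln b_i(x_i)

where d_i is the number of factors that involve variable i.  The key result of Section VI is that fixed points of BP are exactly the stationary points of F_{\text{Bethe}} subject to the normalization and marginalization constraints.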

Section VII builds on III and IV and corresponds to the hypergraph section of Wainwright and Jordan; Section VIII discusses some details of the region graph method; Section IX gives the details of GBP; Section X works through a detailed example of GBP; and Section XI discusses the accuracy of GBP versus BP.
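The core object in Sections VII–IX is the region-based free energy; a sketch of its usual form (again, modulo notation):

F_{\mathcal{R}} = \sum_{R \in \mathcal{R}} c_R \left( U_R(b_R) - H_R(b_R) \right), \qquad c_R = 1 - \sum_{R' \in \text{Ancestors}(R)} c_{R'}

where each region R has its own average energy U_R and entropy H_R, and the counting numbers c_R correct for overcounting when regions overlap.  The Bethe approximation is the special case where the regions are the factors (with c_a = 1) and the single variables (with c_i = 1 - d_i).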

Appendices A, B, and C talk about some specific variants and special cases of the region graph method, and put everything together in a Venn diagram.  Good things to know, but we might not have time to discuss them.

Summary:

Sections III, IV, VII–XI: read carefully; we will focus on these sections.

Sections I, II, V, VI: should mostly be review; we’ll skip these.

Appendices A–C: will cover if we have time.

Paper 2: “Expectation Propagation for the Generative Aspect Model”, Minka & Lafferty

The generative aspect model is the latent Dirichlet allocation (LDA) [paper, code] model we covered in Wainwright & Jordan (fig. 2.7 on pg. 21 and example 3.5 on pg. 47). All the factors in the model are discrete, and thus the model is a member of the exponential family, as discussed in detail on pg. 47. This paper tackles two problems: inference (i.e., marginalization to compute the document probability using (3)) and learning (i.e., estimating the parameters of the model given a collection of documents). Note that we have not yet covered learning in Wainwright & Jordan (that’s the topic of chapter 6, after we cover the mean field algorithms in chapter 5), so for the purposes of this meeting please pay particular attention to inference (section 4.2). The most important sections for our purposes are 3 and 4.2 and how they relate to section 4.3 of Wainwright and Jordan, so spend the majority of your time understanding the connections between the two papers.
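To make the inference problem concrete: the quantity EP approximates is the document marginal likelihood, which in LDA-style notation (this may differ cosmetically from the paper’s equation (3)) is

p(\mathbf{w} \mid \alpha, \beta) = \int \mathrm{Dir}(\theta \mid \alpha) \prod_{n=1}^{N} \left( \sum_{k=1}^{K} \theta_k \, p(w_n \mid \beta_k) \right) d\theta

The product of mixture terms makes the integral intractable; EP proceeds by approximating each word’s term with a simpler factor, so that the approximate posterior over \theta stays in a tractable (Dirichlet-like) family.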

Summary:

Sections 3 & 4.2: Read carefully; we’ll spend the most time on these. I added a link to the original EP paper in case section 3 is too brief.

Sections 6 & 7: Results and conclusions; we’ll spend some time on these.

Sections 1 & 2: Overview of LDA; we’ll spend very little time on this.

Section 4.1: The original variational inference algorithm for LDA. Read briefly; we’ll cover it in more detail with chapter 5 of Wainwright & Jordan.

Section 5: Covers learning the parameters. Read briefly; we’ll cover it in more detail with chapter 6 of Wainwright & Jordan.
