UMass LIVING

August 8, 2009

Duality for Linear Programs

Filed under: Uncategorized — umassliving @ 5:11 pm

I felt bad that my explanation of duality for linear programs was not especially cogent during the meeting, so I decided to write up a quick ~5 minute primer on duality.

First, I should mention some references.  The bulk of what follows is from Introduction to Linear Optimization by Bertsimas and Tsitsiklis.  They have also written a lot of other material on optimization that is probably useful if you're really interested in this area.  There's also a book called Convex Optimization by Boyd and Vandenberghe that covers the more general case of convex problems rather than strictly linear ones.  Both of these books are in our vision lab library in Moe's cubicle.  Lastly, duality plays an important role in Support Vector Machines; in particular, going from the primal to the dual is necessary to see how kernels can be used with SVMs.  The notes on this page – http://www.stanford.edu/class/cs229/materials.html – are a good overview of duality for SVMs.  (Also let me plug the fact that these lectures can be found on YouTube.)

To begin with, consider the following dietary problem – you have n types of food and m types of nutrients.  You know that one unit of food j gives you a_{ij} units of nutrient i.  You want to achieve an ideal diet that contains b_i units of nutrient i using amount x_j of food j, and finally you have a cost c_j associated with each food j.  Achieving a mix of foods that gives you your desired nutrients at minimal cost is then the following linear program:

minimize c^T x subject to Ax = b
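To make this concrete, here's a minimal sketch using SciPy's LP solver; the particular numbers for A, b, and c (two nutrients, three foods) are made up purely for illustration.  One caveat: linprog imposes x \geq 0 by default, a constraint we'll add to the formulation explicitly a bit further down.

```python
# Tiny made-up instance of the diet problem: 2 nutrients, 3 foods.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0, 0.0],   # a_{ij} = units of nutrient i in one unit of food j
              [1.0, 3.0, 2.0]])
b = np.array([8.0, 12.0])        # b_i = required units of nutrient i
c = np.array([3.0, 2.0, 4.0])    # c_j = cost of one unit of food j

# linprog minimizes c^T x subject to A_eq x = b_eq (plus x >= 0 by default).
res = linprog(c, A_eq=A, b_eq=b)
print("optimal diet x* =", res.x)        # amounts of each food to buy
print("minimal cost c^T x* =", res.fun)  # ~13.6 for these numbers
```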

Now let's add to that a vector of Lagrange multipliers p, forming the generalized Lagrangian:

minimize c^T x + p^T (b - Ax)

This is what Moe was referring to as turning a constrained problem into an unconstrained problem, but it is different from the dual.

Let's now define q(p) = \min_x c^T x + p^T (b - Ax), and also define x^* as an optimal feasible solution to the original (primal) minimization problem.  With these definitions, we can see that for any p:

q(p) = \min_x c^T x + p^T (b - Ax) \leq c^T x^* + p^T (b - Ax^*) = c^T x^* \leq c^T x'

The first inequality holds because x^* is just one particular choice of x inside the minimization; the second equality comes from the fact that x^* is a feasible solution and so satisfies Ax^* = b; and the final inequality holds for any feasible x' and follows from the fact that x^* is an optimal solution.

Thus we know that for any p, q(p) is a lower bound on the optimal value of the original primal problem, so we want to maximize q(p) over all possible values of p to get the tightest possible lower bound.  We can also rearrange the terms:

q(p) = \min_x p^T b + (c^T - p^T A)x

Now we can see that if p^T A \neq c^T, then c^T - p^T A has some non-zero component, and by giving x a large value with the opposite sign in that component we can make the minimum value equal to -\infty.  Putting together this fact with the fact that we want to maximize q(p), we now have our dual problem:
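Here's a quick numerical illustration of that unboundedness, reusing the made-up numbers from the sketch above: for any p with p^T A \neq c^T, scaling x against the residual c - A^T p drives the Lagrangian to -\infty.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0]])
b = np.array([8.0, 12.0])
c = np.array([3.0, 2.0, 4.0])

p = np.array([1.0, 0.5])  # arbitrary multipliers with p^T A != c^T
r = c - A.T @ p           # residual (c^T - p^T A)^T; nonzero here
for t in [1.0, 10.0, 100.0, 1000.0]:
    x = -t * r            # push x against the residual
    # L(x, p) = c^T x + p^T (b - A x) = p^T b - t * ||r||^2, unbounded below
    print(t, c @ x + p @ (b - A @ x))
```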

\max_p q(p) subject to p^T A = c^T

Note that whenever p^T A = c^T the x term in q(p) vanishes, so on the feasible set q(p) = p^T b, and the dual is simply \max_p p^T b subject to p^T A = c^T.

What happened here?  Our constraints turned into variables p, and our variables x turned into constraints p^T A = c^T.  This is going to be true in general – going from the primal to the dual will change a minimization problem into a maximization problem, and turn constraints into variables and variables into constraints.

Moreover, think about going back to the original primal problem and adding the constraint x \geq 0, corresponding to the constraint that we have to use a non-negative amount of food.  With x \geq 0, the minimum over x of (c^T - p^T A)x is 0 when c^T - p^T A \geq 0 and -\infty otherwise, so now we only need c^T \geq p^T A in order to avoid -\infty.  Hence direct sign constraints on the variables (x \geq 0) in the primal turn into inequality constraints in the dual, and inequality constraints in the primal turn into direct sign constraints on the variables in the dual.  Equality constraints in one form turn into free variables in the other form, as we saw originally.
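A quick sanity check of weak duality on the toy numbers from above (the particular p below is just a hand-picked point satisfying the dual feasibility condition p^T A \leq c^T):

```python
import numpy as np
from scipy.optimize import linprog

# Same made-up diet problem as before.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0]])
b = np.array([8.0, 12.0])
c = np.array([3.0, 2.0, 4.0])

p = np.array([1.0, 0.0])             # hand-picked so that A^T p <= c
assert np.all(A.T @ p <= c)          # dual feasibility
primal = linprog(c, A_eq=A, b_eq=b)  # x >= 0 is linprog's default
print("lower bound p^T b   =", p @ b)       # 8.0
print("optimal cost c^T x* =", primal.fun)  # ~13.6, so the bound holds
```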

What's the point of all of this?  One thing to note is what happens if you manage to find a pair (x', p') satisfying c^T x' = q(p'), with x' feasible.  We know from above that q(p) \leq c^T x for any p and any feasible x, therefore x' must be an optimal solution to the primal problem, and vice versa p' must be optimal for the dual.  Thus, one obvious thing to try when solving a linear program is to solve the primal and dual at the same time.  The difference between the two values gives you bounds on what the optimal cost can be, and once you've closed the gap to zero, you know you've found the optimal solution.
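As a concrete sketch of that strategy on the same toy numbers, we can solve the primal and its dual (for the x \geq 0 version, the dual is \max_p p^T b subject to p^T A \leq c^T with p free) and check that the gap closes to zero:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0]])
b = np.array([8.0, 12.0])
c = np.array([3.0, 2.0, 4.0])

# Primal: min c^T x  s.t.  A x = b, x >= 0 (linprog's default bounds).
primal = linprog(c, A_eq=A, b_eq=b)

# Dual: max p^T b  s.t.  A^T p <= c, p free.  linprog only minimizes,
# so we minimize -b^T p and flip the sign of the optimal value.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * len(b))

print("primal optimum:", primal.fun)
print("dual optimum:  ", -dual.fun)
print("duality gap:   ", primal.fun + dual.fun)  # ~0: both solutions are optimal
```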

There is also the stronger result of strong duality, which says that if a linear programming problem has an optimal solution, then so does its dual, and the optimal costs of the two problems are equal.  So if you find an optimal solution to one problem, then you know the other must be feasible and have the same optimal cost.

Note that these results depend on the fact that the original problem was linear; they will not hold for an arbitrary problem, although many of them can be extended to convex problems.  There are also additional details, such as complementary slackness and the KKT conditions, that play a deeper role in understanding duality and help with solving such problems.

The notion of complementary slackness also gives an intuition behind the meaning of the dual problem for the original dietary problem.  We know that food j has cost c_j.  In the dual, we can think of p as prices on the nutrients, so that, based on how much food j has of each nutrient, we get a second price p^T A_j on food j, where A_j is the jth column of A.  The primal-dual relationship is saying that, when the prices on the nutrients are correctly chosen, they are consistent with the costs of the foods we actually buy, and the accounting balances: for the optimal solution, c^T x^* = (p^*)^T b.
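One last sketch on the made-up numbers: complementary slackness shows up as x^*_j (c_j - (p^*)^T A_j) = 0 for every j (we only buy foods whose cost is exactly matched by their nutrient price), and the accounting balances, c^T x^* = (p^*)^T b.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 2.0]])
b = np.array([8.0, 12.0])
c = np.array([3.0, 2.0, 4.0])

primal = linprog(c, A_eq=A, b_eq=b)                              # x >= 0 by default
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2)  # p free

x_star, p_star = primal.x, dual.x
slack = c - A.T @ p_star               # per-food gap between cost and nutrient price
print("x* * slack =", x_star * slack)  # ~0 componentwise: complementary slackness
print("c^T x*     =", c @ x_star)      # the two optimal values agree
print("p*^T b     =", p_star @ b)
```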
