
Exercise 4.7 (Duality in piecewise linear convex optimization)

Consider the problem of minimizing $\max_{i=1,\dots,m}(a_i'x - b_i)$ over all $x \in \mathbb{R}^n$. Let $v$ be the value of the optimal cost, assumed finite. Let $A$ be the matrix with rows $a_1',\dots,a_m'$, and let $b$ be the vector with components $b_1,\dots,b_m$.

(a)
Consider any vector $p \in \mathbb{R}^m$ that satisfies $p'A = 0'$, $p \geq 0$, and $\sum_{i=1}^m p_i = 1$. Show that $-p'b \leq v$.
(b)
In order to obtain the best possible lower bound of the form considered in part (a), we form the linear programming problem
$$
\begin{array}{ll}
\text{maximize} & -p'b \\
\text{subject to} & p'A = 0' \\
& p'e = 1 \\
& p \geq 0,
\end{array}
$$

where $e$ is the vector with all components equal to 1. Show that the optimal cost in this problem is equal to $v$.

Answers

Recall from Section 1.3 (Piecewise linear convex objective functions) that we can rewrite the piecewise linear optimization problem

$$
\begin{array}{ll}
\text{minimize} & \displaystyle\max_{i=1,\dots,m} \, (a_i'x - b_i) \\
\text{subject to} & x \text{ free}
\end{array}
\tag{1}
$$

in the following equivalent form as a linear optimization problem:

$$
\begin{array}{ll}
\text{minimize} & z \\
\text{subject to} & z \geq a_1'x - b_1 \\
& \;\;\vdots \\
& z \geq a_m'x - b_m \\
& x \text{ free}, \; z \text{ free}.
\end{array}
\tag{2}
$$

The two problems have the same optimal cost $v$: for any fixed $x$, the smallest $z$ satisfying all $m$ constraints in (2) is exactly $\max_{i=1,\dots,m}(a_i'x - b_i)$. For the sake of completeness, we rewrite (2) in a more canonical matrix form:

$$
\begin{array}{ll}
\text{minimize} & \begin{bmatrix} 0' & 1 \end{bmatrix} \begin{bmatrix} x \\ z \end{bmatrix} \\
\text{subject to} & \begin{bmatrix} -A & e \end{bmatrix} \begin{bmatrix} x \\ z \end{bmatrix} \geq -b \\
& x \text{ free}, \; z \text{ free}.
\end{array}
\tag{3}
$$

Following the definition on page 142: the constraints of (3) are inequalities of the form $\geq$, so the corresponding dual variables satisfy $p \geq 0$, and the variables $x$ and $z$ are free, so the corresponding dual constraints are equalities. The dual of (3) is therefore

$$
\begin{array}{ll}
\text{maximize} & -p'b \\
\text{subject to} & p' \begin{bmatrix} -A & e \end{bmatrix} = \begin{bmatrix} 0' & 1 \end{bmatrix} \\
& p \geq 0,
\end{array}
\tag{4}
$$

or, in a more reader-friendly form,

$$
\begin{array}{ll}
\text{maximize} & -p'b \\
\text{subject to} & p'A = 0' \\
& p'e = 1 \\
& p \geq 0.
\end{array}
\tag{5}
$$
(a)
Any vector $p$ satisfying the conditions given in the exercise is feasible for the dual problem (5). By weak duality (Theorem 4.3), its objective value $-p'b$ cannot exceed the optimal cost of the primal problem (3), and the optimal cost of (3) equals the optimal cost $v$ of the original problem (1). Hence $-p'b \leq v$.
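The bound can also be verified directly, without appealing to the dual problem. For any $x \in \mathbb{R}^n$, since $p \geq 0$ and $\sum_{i=1}^m p_i = 1$, the number $p'(Ax - b)$ is a convex combination of the values $a_i'x - b_i$, while $p'A = 0'$ gives $p'(Ax - b) = -p'b$. Therefore

$$
-p'b = p'(Ax - b) = \sum_{i=1}^m p_i \, (a_i'x - b_i) \leq \max_{i=1,\dots,m} (a_i'x - b_i),
$$

and taking the infimum over all $x \in \mathbb{R}^n$ yields $-p'b \leq v$.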
(b)
We have shown that (5) is the dual of problem (3), which is equivalent to (1) and therefore has optimal cost $v$. Since $v$ is assumed finite, problem (3) has an optimal solution (a linear programming problem with finite optimal cost attains it), and strong duality (Theorem 4.4) then guarantees that the dual (5) also has an optimal solution and that the optimal costs of the two problems coincide. The optimal cost of (5) is therefore equal to $v$.
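As an informal sanity check (not part of the argument), one can compare the optimal costs of (2) and (5) numerically on a small random instance. The sketch below is ours and assumes NumPy and SciPy are available; it centers the rows of $A$ so that $p = e/m$ is feasible for (5), which keeps $v$ finite.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n = 6, 3
A = rng.normal(size=(m, n))
A -= A.mean(axis=0)          # rows of A now sum to 0, so p = e/m is feasible for (5) and v is finite
b = rng.normal(size=m)

# Primal (2): minimize z subject to a_i'x - z <= b_i, with x and z free.
c = np.r_[np.zeros(n), 1.0]                  # cost vector for the variables (x, z)
A_ub = np.c_[A, -np.ones(m)]                 # encodes A x - z e <= b
primal = linprog(c, A_ub=A_ub, b_ub=b,
                 bounds=[(None, None)] * (n + 1), method="highs")

# Dual (5): maximize -p'b subject to p'A = 0', p'e = 1, p >= 0.
# linprog minimizes, so we minimize b'p and negate the optimal value afterwards.
A_eq = np.vstack([A.T, np.ones((1, m))])
b_eq = np.r_[np.zeros(n), 1.0]
dual = linprog(b, A_eq=A_eq, b_eq=b_eq,
               bounds=[(0, None)] * m, method="highs")

print(primal.fun, -dual.fun)   # the two optimal costs should agree up to solver tolerance
```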