Exercise 4.6* (Duality in Chebychev approximation)
Let $A$ be an $m \times n$ matrix and let $b$ be a vector in $\mathbb{R}^m$. We consider the problem of minimizing $\|Ax - b\|_\infty$ over all $x \in \mathbb{R}^n$. Here $\|\cdot\|_\infty$ is the vector norm defined by $\|y\|_\infty = \max_i |y_i|$. Let $v$ be the value of the optimal cost.
- (a)
- Let $p$ be any vector in $\mathbb{R}^m$ that satisfies $\sum_{i=1}^m |p_i| = 1$ and $p'A = 0'$. Show that $p'b \le v$.
- (b)
- In order to obtain the best possible lower bound of the form considered in part (a), we form the linear programming problem
$$\begin{array}{ll} \text{maximize} & p'b \\ \text{subject to} & p'A = 0', \\ & \displaystyle\sum_{i=1}^m |p_i| \le 1. \end{array}$$
Show that the optimal cost in this problem is equal to $v$.
Answers
Proof. Unfolding the definition of the maximum norm, we rewrite our optimization problem as follows:
$$\min_{x \in \mathbb{R}^n} \ \max_{i=1,\dots,m} \, |a_i'x - b_i|, \tag{1}$$

where $a_i'$ denotes the $i$-th row of $A$.
Recall from Section 1.3 (Piecewise linear convex objective functions) that problems involving maxima and absolute values can be rewritten as linear optimization problems by introducing an auxiliary variable $t$ that is bounded from below by each of the pieces:
$$\begin{array}{lll} \text{minimize} & t & \\ \text{subject to} & a_i'x - b_i \le t, & i = 1, \dots, m, \\ & -(a_i'x - b_i) \le t, & i = 1, \dots, m. \end{array} \tag{2}$$
For the sake of completeness, we rewrite (2) in a more canonical way:
$$\begin{array}{ll} \text{minimize} & t \\ \text{subject to} & Ax + te \ge b, \\ & -Ax + te \ge -b, \end{array} \tag{3}$$

where $e = (1, \dots, 1)'$.
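As a sanity check, the reformulation (3) can be solved numerically. The sketch below (the data and variable names are illustrative, not from the text) uses `scipy.optimize.linprog` to minimize $t$ over the constraints of (3), then compares the optimal $t$ against the Chebyshev error of the recovered $x$:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: a small overdetermined system (not from the text).
rng = np.random.default_rng(0)
m, n = 6, 2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Decision variables z = (x, t); objective: minimize t.
c = np.zeros(n + 1)
c[-1] = 1.0

# Constraints of (3), written in <= form:  Ax - te <= b  and  -Ax - te <= -b.
e = np.ones((m, 1))
A_ub = np.block([[A, -e], [-A, -e]])
b_ub = np.concatenate([b, -b])

# All variables are free (linprog's default bounds are [0, inf)).
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + 1))
x_opt, t_opt = res.x[:n], res.x[-1]

# The optimal t coincides with the Chebyshev error of the recovered x.
print(t_opt, np.max(np.abs(A @ x_opt - b)))
```

The two printed numbers agree, since at an optimum of (2)–(3) the variable $t$ is pressed down onto $\max_i |a_i'x - b_i|$.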
Using the definition on page 142, the linear optimization problem in (3) results in the following $2m$-dimensional dual problem, with dual vectors $p^+, p^- \in \mathbb{R}^m$ attached to the two constraint blocks:
$$\begin{array}{ll} \text{maximize} & b'(p^+ - p^-) \\ \text{subject to} & A'(p^+ - p^-) = 0, \\ & e'(p^+ + p^-) = 1, \\ & p^+ \ge 0, \ p^- \ge 0. \end{array} \tag{4}$$
We introduce the following change of variables. Set $p = p^+ - p^-$. If $(p^+, p^-)$ is feasible for (4), then $p'A = 0'$ and $\sum_i |p_i| \le \sum_i (p_i^+ + p_i^-) = 1$, with the same objective value $p'b$. Conversely, given $p$ with $p'A = 0'$ and $\sum_i |p_i| \le 1$, take $p_i^+ = \max(p_i, 0)$ and $p_i^- = \max(-p_i, 0)$; if $e'(p^+ + p^-) < 1$, increase $p_1^+$ and $p_1^-$ by the same amount until equality holds, which changes neither $p$ nor the objective. (At an optimal solution of (4) with $t > 0$, complementary slackness in fact forces $p_i^+ p_i^- = 0$ for each $i$, since the two primal constraints for index $i$ can only be active simultaneously when $t = 0$; then $|p_i| = p_i^+ + p_i^-$ exactly.) Thus the above optimization problem is equivalent to
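In one direction the change of variables is explicit: any $p \in \mathbb{R}^m$ splits into its positive and negative parts,

$$p_i^+ = \max(p_i, 0), \qquad p_i^- = \max(-p_i, 0), \qquad p_i = p_i^+ - p_i^-, \qquad |p_i| = p_i^+ + p_i^-,$$

so under this particular splitting the constraint $e'(p^+ + p^-) = 1$ of (4) becomes $\sum_{i=1}^m |p_i| = 1$.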
$$\begin{array}{ll} \text{maximize} & p'b \\ \text{subject to} & p'A = 0', \\ & \displaystyle\sum_{i=1}^m |p_i| \le 1. \end{array} \tag{5}$$
By weak duality (Theorem 4.3), every feasible solution of the dual problem (4) — and hence, by the equivalence just shown, every $p$ feasible for (5) — has an objective value less than or equal to the optimal cost of the primal problem (3), which is $v$. This is exactly the exercise assertion in (a). By strong duality (Theorem 4.4), the optimal costs of the two optimization problems must be equal. This is the assertion in (b). □
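The duality chain can also be checked numerically. The sketch below (again with illustrative data, not from the text) solves the primal (3) and the dual (4) with `scipy.optimize.linprog` and confirms that the two optimal costs coincide, as strong duality asserts:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (not from the text).
rng = np.random.default_rng(0)
m, n = 6, 2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Primal (3): minimize t over z = (x, t), with Ax - te <= b, -Ax - te <= -b.
c = np.zeros(n + 1)
c[-1] = 1.0
e = np.ones((m, 1))
primal = linprog(c,
                 A_ub=np.block([[A, -e], [-A, -e]]),
                 b_ub=np.concatenate([b, -b]),
                 bounds=[(None, None)] * (n + 1))

# Dual (4) over (p+, p-): maximize b'(p+ - p-), i.e. minimize -b'p+ + b'p-,
# subject to A'(p+ - p-) = 0, e'(p+ + p-) = 1, p+, p- >= 0.
c_dual = np.concatenate([-b, b])
A_eq = np.vstack([np.hstack([A.T, -A.T]),
                  np.ones((1, 2 * m))])
b_eq = np.concatenate([np.zeros(n), [1.0]])
dual = linprog(c_dual, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (2 * m))

# Strong duality: the primal minimum equals the dual maximum (-dual.fun).
print(primal.fun, -dual.fun)
```

The two printed values agree up to solver tolerance, which is the numerical counterpart of the assertion in (b).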