
      Then Γ and Σ are closed and convex sets.

THEOREM 1.5.– x is a basic feasible solution of (Ps) if and only if x is an extreme point of Σ.

Examples, including cases where the solution is not unique, show that the optimal solutions always lie on the boundary:

      THEOREM 1.6.– Every optimal solution of (P) (respectively of (Ps)) belongs to the boundary of Γ (respectively of Σ).

      A question presents itself: can there be optimal solutions but no optimal basic feasible solutions? If so, the simplex method, which only considers basic solutions, would not be able to find an optimal solution. The following theorem tells us that this does not happen.

      THEOREM 1.7.– If (Ps) has an optimal solution, then (Ps) has an optimal basic feasible solution.
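These results are what allow the simplex method to restrict its search to basic feasible solutions. As a rough illustration (the program below is an invented two-variable LP, not one taken from the text), one can enumerate every basis of a small standard-form problem max cx, Ax = b, x ≥ 0, keep the feasible ones and retain the best; this brute-force search visits exactly the extreme points among which Theorem 1.7 guarantees an optimum can be found.

import numpy as np
from itertools import combinations

# Illustrative standard-form LP (invented data):
#   max 3*x1 + 2*x2   s.t.  x1 + x2 <= 4,  x1 <= 3,  x >= 0,
# written as  max c x,  A x = b,  x >= 0  after adding slacks s1, s2.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
c = np.array([3.0, 2.0, 0.0, 0.0])
m, n = A.shape

best_x, best_z = None, -np.inf
for basis in combinations(range(n), m):        # choose m basic columns
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-12:          # columns do not form a basis
        continue
    xB = np.linalg.solve(B, b)                 # values of the basic variables
    if np.any(xB < -1e-12):                    # basic solution is infeasible
        continue
    x = np.zeros(n)
    x[list(basis)] = xB
    if c @ x > best_z:
        best_x, best_z = x, c @ x

print("optimal basic feasible solution:", best_x[:2], "value:", best_z)
# Expected: x* = (3, 1), z* = 11, a vertex of the feasible polyhedron.

The simplex algorithm does the same thing far more economically, moving from one basic feasible solution to an adjacent, better one instead of inspecting all of them.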

The simplex algorithm moves along a sequence of vertices of the polyhedron and converges rapidly on average. However, Klee and Minty [KLE 72] found pathological examples on which the simplex algorithm visits almost every vertex, which can potentially be an enormous number. The existence of theoretical linear programs that derail the simplex algorithm motivated research into algorithms with polynomial complexity in m and n, even in the worst case. Khachian [KHA 79] found the first such algorithm, based on the ellipsoid method from nonlinear optimization. Although it is faster than the simplex method on the Klee–Minty problems, it never established itself because in practice it tended to be much slower than the simplex algorithm on real problems. Nevertheless, as a mathematical result, it inspired research into interior-point algorithms.

In 1984, Karmarkar [KAR 84] found a new algorithm. Its computation time in the worst case is proportional to n^{3.5}L, where L denotes the number of bits required to encode the arrays A, b and c that define the linear program. The algorithm is too complicated for computations by hand; known as an interior-point algorithm, it moves through the interior of the polyhedron itself to converge on the optimal vertex more quickly.

      EXAMPLE 1.9.– Consider the LP

[LP formulation given as a figure in the original; not reproduced here.]

      The optimum is x* = (2, 2)T with cost z* = 6. Starting from the interior point x0 = (1, 1)T and without a safety factor, the algorithm performs two iterations: it considers the point x1 = (2, 1.591)T, followed by the point x*.
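The mechanics of such an iteration can be sketched with the affine-scaling variant, the simplest interior-point scheme: rescale the current iterate, take a projected steepest-descent step, and stop short of the boundary. The problem data below are an assumption (the LP of Example 1.9 is not reproduced above); they are merely chosen so that the optimum is again x* = (2, 2)T with z* = 6, and, unlike the two-step run quoted in the example, a safety factor γ < 1 is used to keep the iterates strictly interior.

import numpy as np

# Hypothetical standard-form LP (NOT the one from Example 1.9), chosen to
# share the optimum x* = (2, 2), z* = 6:
#   max x1 + 2*x2   s.t.  x1 + x2 <= 4,  x2 <= 2,  x >= 0,
# rewritten as  min c x,  A x = b,  x >= 0  with slack variables s1, s2.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 2.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

x = np.array([1.0, 1.0, 2.0, 1.0])   # strictly interior starting point
gamma = 0.9                          # safety factor keeping iterates interior

for k in range(50):
    D2 = np.diag(x**2)                            # affine-scaling weights
    y = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c) # dual estimate
    r = c - A.T @ y                               # reduced costs
    dx = -D2 @ r                                  # descent direction
    if np.linalg.norm(dx) < 1e-9:                 # no further progress
        break
    ratios = [xi / -di for xi, di in zip(x, dx) if di < 0]
    step = gamma * min(ratios) if ratios else 1.0 # stay strictly feasible
    x = x + step * dx

print(x[:2], -c @ x)   # approaches (2, 2) and 6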

      Duality is an essential notion in linear programming. The duality operation associates any so-called primal linear programming problem with another such problem, known as the dual, that is defined solely in terms of the data of the primal problem.

      Duality is interesting for two reasons:

 – the dual problem has an important economic interpretation that offers another perspective on the primal problem;

       – there are straightforward and powerful mathematical relations between the primal and dual problems; this will allow us to develop new algorithms that will be more efficient in many situations.

      Suppose that we have the following primal program:

(P) max z(x) = cx s.t. Ax ≤ b, x ≥ 0,

      where A is an m × n matrix, b a column vector with m components and c a row vector with n components. Its dual is defined as the program

(D) min w(y) = yb s.t. yA ≥ c, y ≥ 0,

      where y is a row vector of m dual variables.

      REMARK.– The primal and dual programs satisfy the following relations:

       – m = number of constraints of (P) = number of variables of (D),

       – n = number of variables of (P) = number of constraints of (D).

      If (P) has two constraints, (D) has two variables and can be solved graphically, regardless of the number of variables of (P).
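The correspondence can be read directly off the data: in the dual, b supplies the objective, c the right-hand sides and A is used transposed, so the numbers of constraints and variables are exchanged. A minimal sketch on invented data:

import numpy as np

# Invented canonical primal (P): max c x  s.t.  A x <= b,  x >= 0,
# with m = 2 constraints and n = 3 variables.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0]])
b = np.array([10.0, 15.0])      # one component per constraint of (P)
c = np.array([4.0, 5.0, 3.0])   # one component per variable of (P)

# Dual (D): min y b  s.t.  y A >= c,  y >= 0, built from the same data.
m, n = A.shape
print("(P):", m, "constraints,", n, "variables")
print("(D):", n, "constraints,", m, "variables")   # roles exchanged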

      So, what is the form of (D) when (P) is not in canonical form?

      In such a case, we can determine (D) by reducing (P) to canonical form, as shown in the following example.

      EXAMPLE 1.10.– Consider the LP:

(P) max z(x) = cx s.t. Ax = b, x ≥ 0.

      Writing each equality constraint as two inequalities, (P) takes the canonical form

max z(x) = cx s.t. Ax ≤ b, −Ax ≤ −b, x ≥ 0.

      By the above definition, the dual of (P) is given by

(D) min w(u, v) = ub − vb s.t. uA − vA ≥ c, u ≥ 0, v ≥ 0.

Setting y = u − v gives

(D) min w(y) = yb s.t. yA ≥ c, with y unrestricted in sign.


      THEOREM 1.8.– The dual of the dual is the primal.
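For canonical programs, Theorem 1.8 can be verified mechanically. Writing the dual min yb, yA ≥ c, y ≥ 0 again as a maximization, max (−b)y s.t. (−A^T)y ≤ −c, y ≥ 0, dualization acts on the data as (A, b, c) → (−A^T, −c, −b), and applying this map twice returns the original data. A small check on invented data:

import numpy as np

def dual(A, b, c):
    # Dual of the canonical program  max c x  s.t.  A x <= b, x >= 0,
    # rewritten in the same canonical (maximization) form:
    #   max (-b) y  s.t.  (-A^T) y <= -c,  y >= 0.
    return -A.T, -c, -b

# Hypothetical primal data
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0])

A2, b2, c2 = dual(*dual(A, b, c))   # dual of the dual
print(np.allclose(A2, A), np.allclose(b2, b), np.allclose(c2, c))  # True True True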

      1.8.1. Duality theorem

      Now that we have defined the dual of a linear program, let us study the links between the solutions of programs that are dual to one another.

THEOREM 1.9.– Let (x, y) be a feasible solution of the primal and (u, v) a feasible solution of the dual. Then:

       1) z(x, y) ≤ w(u, v);

 2) z(x, y) = w(u, v) ⇒ (x, y) and (u, v) are optimal solutions of the primal and the dual, respectively.

      THEOREM 1.10.– One of the following three cases is satisfied:

 1) both the primal and the dual have optimal solutions, and max z(x, y) = min w(u, v);

 2) one of the two problems has a feasible solution, but the other does not;

 3) neither the primal nor the dual has a feasible solution.
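Case 1 can be illustrated numerically. The sketch below uses invented data and scipy.optimize.linprog (which minimizes and takes "<=" constraints, so the primal objective is negated and the dual constraints yA ≥ c are rewritten as (−A^T)y ≤ −c); the two optimal values coincide, as the theorem predicts.

import numpy as np
from scipy.optimize import linprog

# Invented canonical primal (P): max c x  s.t.  A x <= b,  x >= 0
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])
c = np.array([3.0, 5.0])

# linprog minimizes, so maximize c x by minimizing -c x; bounds default to x >= 0
primal = linprog(-c, A_ub=A, b_ub=b)

# Dual (D): min y b  s.t.  y A >= c,  y >= 0,  i.e.  (-A^T) y <= -c
dual = linprog(b, A_ub=-A.T, b_ub=-c)

print(-primal.fun, dual.fun)   # max z = min w  (here both equal 36)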

THEOREM 1.11.– Let (x, y) (respectively, (u, v)) be a feasible solution of the primal (respectively, the dual). Then (x, y) and (u, v) are optimal solutions of the primal and the dual if and only if the following statements hold:

 – if a constraint of the primal (respectively, the dual) is satisfied as a strict inequality, then the corresponding variable of the dual (respectively, the primal) is zero;