If the value of a restricted variable (i.e. a variable constrained to be nonnegative or nonpositive) in one of the two programs (primal or dual) is not zero, then the corresponding constraint in the other program is satisfied as an equality.

      This theorem can alternatively be stated as follows:

      THEOREM 1.12.– Let x and p be admissible solutions of the primal and dual programs, respectively. Then x and p are optimal solutions of their respective problems if and only if:

pi(Ai x − bi) = 0 for every i,
(cj − pT Aj)xj = 0 for every j,

where Ai denotes the ith row and Aj the jth column of the constraint matrix A.

       Application

      Consider the LP

min 13x1 + 10x2 + 6x3
subject to 5x1 + x2 + 3x3 = 8
3x1 + x2 = 3
x1, x2, x3 ≥ 0

      The dual of this program is:

max 8p1 + 3p2
subject to 5p1 + 3p2 ≤ 13
p1 + p2 ≤ 10
3p1 ≤ 6

with p1 and p2 unrestricted in sign, since the primal constraints are equalities.

      The solution of the primal is x* = (1, 0, 1)T .

      The optimal dual solution can be constructed from the complementary slackness conditions:

       – the condition pi(Ai x − bi) = 0 is satisfied because x* is admissible (both primal constraints hold with equality);

       – the condition (cj − pT Aj)xj = 0 is automatically satisfied for j = 2, since x2* = 0. For the nonzero components x1* and x3*, it forces cj = pT Aj:
- for j = 1, this condition becomes 5p1 + 3p2 = 13;
- for j = 3, it becomes 3p1 = 6.

      These two equations give p1 = 2 and p2 = 1. This solution is dual admissible (p1 + p2 = 3 ≤ 10). The total dual cost is 19, which equals the primal cost, so both solutions are optimal.
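      This calculation can be checked numerically. The following Python sketch (using NumPy and SciPy, with the data of the example above) solves the primal and then recovers the dual solution from the two complementary slackness equations:

import numpy as np
from scipy.optimize import linprog

# Primal data of the example: min c^T x  s.t.  A_eq x = b_eq, x >= 0.
c = np.array([13.0, 10.0, 6.0])
A_eq = np.array([[5.0, 1.0, 3.0],
                 [3.0, 1.0, 0.0]])
b_eq = np.array([8.0, 3.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
x = res.x  # expected: (1, 0, 1)

# Complementary slackness: x_j > 0 forces c_j = p^T A_j, so p solves the
# 2x2 system built from columns 1 and 3 (0-based indices 0 and 2).
cols = [0, 2]
p = np.linalg.solve(A_eq[:, cols].T, c[cols])  # expected: (2, 1)

# Dual admissibility (p^T A_j <= c_j for all j) and equal total costs.
assert np.all(A_eq.T @ p <= c + 1e-9)
print("primal cost:", c @ x, " dual cost:", b_eq @ p)  # both 19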

      In mathematical programming, good performance essentially depends on the program’s ability to construct useful bounds (upper bounds for maximization and lower bounds for minimization). These bounds can be obtained by relaxing the linear program, i.e.

       – either enlarging the set of solutions so as to optimize over a larger space (that is easier to optimize); this includes constraint relaxation (and in particular continuous relaxation);

       – or replacing the objective function with an upper bound (in the case of maximization), for example, Lagrangian relaxation.

      For a mathematical program, the following relaxation options are available:

       – (simple) continuous relaxation, which is very effective for LPs (with the simplex algorithm, the interior-point method, or the volume algorithm); however, we will see that continuous relaxation is rarely the best option (see the sketch after this list);

       – in the specific context of LPs, we can improve the value of the linear relaxation by adding constraints (reinforcement);

       – we can also add variables (reformulation);

       – Lagrangian relaxation is another approach.
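      To see the first option at work, the following sketch compares a small integer program with its continuous relaxation; the program and its data are hypothetical, chosen only to exhibit the bound (SciPy's linprog solves the relaxation, and its integrality option solves the integer program):

import numpy as np
from scipy.optimize import linprog

# Hypothetical program: max 5x1 + 4x2  s.t.  6x1 + 4x2 <= 13, x >= 0.
# linprog minimizes, so the objective is negated.
c = np.array([-5.0, -4.0])
A_ub = np.array([[6.0, 4.0]])
b_ub = np.array([13.0])

# Continuous relaxation: drop integrality; its value bounds the integer optimum.
relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)

# Integer program (branch-and-bound in HiGHS via the integrality flag).
integer = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
                  integrality=np.ones(2))

print("relaxation bound :", -relaxed.fun)  # 13.0, an upper bound
print("integer optimum  :", -integer.fun)  # 12.0

      Here the relaxation yields the bound 13 while the integer optimum is 12; reinforcement and reformulation aim precisely at narrowing this kind of gap.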

      1.9.1. Lagrangian relaxation

      Lagrangian duality provides a particularly fruitful framework for relaxation. It works as follows:

       – choose an inequality in the LP and remove it from the constraint set;

       – add this inequality to the objective function, weighted by a multiplier that penalizes the objective if the solution found does not satisfy the inequality.

      This approach often produces a very good relaxation value, far better than that of continuous relaxation.

      [1.11] min cT x subject to Ai x ≥ bi for i ∈ I, and x ∈ S

      To each constraint i ∈ I, we assign a real number λi ≥ 0 called the Lagrange multiplier. We collect these multipliers into the vector λ = (λi, i ∈ I). The Lagrangian function is then defined as

L(x, λ) = cT x + Σi∈I λi(bi − Ai x)

      This function has the two vector arguments x and λ and takes values in ℝ. Suppose that the LP is such that the problem minx∈S L(x, λ) has an optimal solution for every λ ≥ 0.
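      When S is finite, the dual function g(λ) = minx∈S L(x, λ) can be evaluated by enumeration. The Python sketch below does this for a hypothetical instance (the cost vector, the single relaxed constraint, and the set S = {0, 1, 2}² are all invented for illustration):

import itertools
import numpy as np

# Hypothetical instance: min c^T x  s.t.  a^T x >= b, over the finite set S.
c = np.array([2.0, 3.0])
a = np.array([1.0, 1.0])
b = 3.0
S = [np.array(v, dtype=float) for v in itertools.product(range(3), repeat=2)]

def lagrangian(x, lam):
    # L(x, lambda) = c^T x + lambda (b - a^T x): the dualized constraint is
    # moved into the objective, penalized by the multiplier lambda >= 0.
    return c @ x + lam * (b - a @ x)

def dual_function(lam):
    # g(lambda) = min over x in S of L(x, lambda), a lower bound on the optimum.
    return min(lagrangian(x, lam) for x in S)

for lam in (0.0, 1.0, 2.0, 3.0):
    print("lambda =", lam, " g(lambda) =", dual_function(lam))

      Each value g(λ) is a lower bound on the optimal cost of the original problem; maximizing g over λ ≥ 0 gives the best Lagrangian bound.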

      More details are given in Chapter 7. Now consider the following LP [FLE 10]:

      [1.12]

      We obtain the optimal solution (0, 1.1, 4.4)T, which gives the optimal cost as z* = 6.6.

      By relaxing the third constraint (x1 + x3 ≥ 4.4), i.e. adding the expression λ(4.4 − x1 − x3) to the objective function, and keeping the integrality constraints on the variables xi, we obtain the following program, where λ is the Lagrangian relaxation coefficient, updated at each iteration of the method:

      [1.13]

      The last constraint is added to bound the variables xi, which guarantees that the problem is bounded for every λ > 0. The coefficient λ is updated as follows: λk+1 = max{0, λk + θk(4.4 − x1k − x3k)} (a projected subgradient step, with step size θk > 0 and xk the solution of the relaxed problem at iteration k). It can be shown that λ tends to 1 and z tends to 8.4.
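      Since the data of [1.12] is not reproduced here, the following sketch runs this update on a stand-in integer program (the cost vector and variable bounds are hypothetical); it illustrates only the mechanics of the iteration, with the relaxed problem solved by enumeration over the bounded box:

import itertools
import numpy as np

# Stand-in for [1.13]: min c^T x over x in {0,...,5}^3, with the constraint
# x1 + x3 >= 4.4 dualized as lambda * (4.4 - x1 - x3).
c = np.array([2.0, 1.0, 1.5])
box = [np.array(v, dtype=float) for v in itertools.product(range(6), repeat=3)]

lam = 0.0
for k in range(1, 51):
    # Solve the relaxed problem by enumeration over the bounded box.
    x = min(box, key=lambda v: c @ v + lam * (4.4 - v[0] - v[2]))
    # Projected subgradient step on the dualized constraint, step size 1/k.
    lam = max(0.0, lam + (4.4 - x[0] - x[2]) / k)

print("lambda ~", lam, " last relaxed solution:", x)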

      Consider the LP:

      [1.14] max z = cT x subject to Ax ≤ b, x ≥ 0, where A is an m × n matrix, b ∈ ℝm and c ∈ ℝn.

      We will now perform a postoptimal analysis of the optimal solution in terms of the problem data, also known as a sensitivity analysis. In other words, how does the solution or the objective function vary if we modify one of the entries of the vectors c or b, or the matrix A?

      To conduct a postoptimal analysis, we need to be able to express the solution of the simplex algorithm in terms of the initial problem data instead of simplex tableaus, which change at each iteration.

      As always, we introduce slack variables to the problem [1.14], writing A for the resulting matrix of size m × (m + n). We also assume that the basic variables B of the optimal solution xB are known. This information can be read off from the final tableau of the simplex algorithm, which is why this procedure is called postoptimal analysis.

      Let B = {xj1, xj2, . . . , xjm} be the list of basic variables, and let N be its complement with respect to the index set {1, 2, . . . , m + n}. For example, if m = n = 3 and B = {x1, x5, x2}, then N = {x3, x4, x6}. Based on the sets B and N, we can extract the following submatrices of A:

       – AB is the submatrix formed by the columns corresponding to the variables of B. We take them in the order specified by B.

       – AN is the submatrix formed by the columns corresponding to the variables of N, in the order specified by N (see the sketch below).
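      This bookkeeping is easy to reproduce in code. The following sketch uses the example m = n = 3 with B = {x1, x5, x2} (the numerical entries of A and b are hypothetical); it extracts AB and AN and computes the associated basic solution xB = AB−1 b, which need not be feasible:

import numpy as np

# Hypothetical data with m = n = 3; hstack appends the three slack columns,
# so A has size m x (m + n) as in the text.
A = np.hstack([np.array([[2.0, 1.0, 1.0],
                         [1.0, 3.0, 2.0],
                         [1.0, 2.0, 2.0]]),
               np.eye(3)])
b = np.array([10.0, 15.0, 12.0])

# Basis B = {x1, x5, x2} and complement N = {x3, x4, x6}, as 0-based indices.
B = [0, 4, 1]
N = [2, 3, 5]

A_B = A[:, B]  # basic columns, in the order specified by B
A_N = A[:, N]  # nonbasic columns, in the order specified by N

# Basic solution associated with B: x_B = A_B^{-1} b, with x_N = 0.
x_B = np.linalg.solve(A_B, b)
print("x_B =", x_B)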

      EXAMPLE 1.11.– Consider the problem

      [1.15]