optional algorithms, elitism is fairly important. In addition, although not as commonly employed as elitism, the migration operator has been shown to have a significant impact on performance. Clearly, there is a wide variety of GA operators, and many books have been written on this subject. The reader is referred to references [4,5,8,9] for just a few of these. The good news is that almost all variations are effective optimization engines—they vary primarily in how quickly they converge and their probability of finding global solutions.
1.7 Multi‐Objective Optimization and the Pareto‐Optimal Front
Much of our discussion in this chapter has focused on the single-objective optimization of an objective or fitness function f(x). In design problems, however, there are normally multiple objectives of interest. In this section, we begin our consideration of the multi-objective optimization problem.
Let us begin by considering some designs of an electric motor. Suppose we are designing a motor and wish to minimize both material cost and loss. Further, suppose that we have eight designs available, each of which meets all specifications for the machine, but each of which has a different cost and a different loss, as illustrated in Figure 1.15. Therein, each number listed is the index of a design. Thus, the point labeled 1 is plotted at coordinates f(x1), where x1 is the parameter vector of design 1.
Now let us consider the question of which design is best. Let us first compare design 5 to design 7 (again using the shorthand in which 5 designates x5). Design 7 has lower loss and lower cost than design 5, and so it is clearly the better of the two. Using similar reasoning, we can see that design 8 is better than designs {4,5,7}. Now let us compare design 8 to design 6. Design 8 has lower loss, but design 6 has lower cost. At this point, it is difficult to say which is better. This leads to the concept of dominance.
Consider two parameter vectors xa and xb. The parameter vector xa is said to dominate xb if f(xa) is no worse than f(xb) in all objectives and f(xa) is strictly better than f(xb) in at least one objective. The statement “xa dominates xb” is equivalent to the statement that “xb is dominated by xa.” Returning to the example of the previous paragraph, we can say that design 8 dominates designs {4,5,7}.
Consider a set of parameter vectors denoted X. The nondominated set Xnd ⊂ X is the set of parameter vectors in X that are not dominated by any other member of X. In our example, X = {1, 2, …, 8} and Xnd = {1, 6, 8}. The nondominated set can be viewed as the set of best designs.
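To make the dominance test concrete, the sketch below (in Python, which the text does not prescribe) implements the dominance check and extracts the nondominated set from a list of candidate designs. The (cost, loss) pairs are hypothetical placeholders rather than the values plotted in Figure 1.15, but they are chosen so that the relations described above (design 7 dominates design 5, design 8 dominates designs {4, 5, 7}, and Xnd = {1, 6, 8}) come out the same.

```python
# A minimal sketch of the dominance test and nondominated-set extraction
# (minimization assumed for every objective).

def dominates(fa, fb):
    """True if objective vector fa dominates fb: no worse in every
    objective and strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def nondominated(objs):
    """Indices (1-based) of members not dominated by any other member."""
    return [i for i, fi in enumerate(objs, start=1)
            if not any(dominates(fj, fi)
                       for j, fj in enumerate(objs, start=1) if j != i)]

# Hypothetical (cost, loss) pairs for the eight motor designs; the numbers
# are placeholders, not taken from Figure 1.15.
f = [(2.0, 10.0), (6.0, 7.0), (3.0, 11.0), (8.0, 5.2),
     (10.0, 5.0), (5.0, 6.0), (9.0, 4.5), (7.0, 3.0)]

print(dominates(f[7], f[6]))   # design 8 vs design 7 -> True
print(nondominated(f))         # -> [1, 6, 8]
```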
If we consider the set of parameters to be the entire domain of the parameter vector (i.e., Ω), then the set of nondominated designs is referred to as the Pareto-optimal set. The boundary defined by the objectives over this set is known as the Pareto-optimal front. Pareto was an Italian engineer, economist, and philosopher who lived from 1848 to 1923 (overlapping Mendel and Darwin) and who, besides his work in multi-objective optimization, formulated the Pareto principle that 80% of effects stem from 20% of causes (and that 80% of wealth is owned by 20% of the population).
Figure 1.15 Motor performance objective space.
Figure 1.16 Pareto‐optimal set and front.
The concepts of a Pareto-optimal set and Pareto-optimal front are illustrated in Figure 1.16 for a system with a two-dimensional parameter space (i.e., a two-dimensional parameter vector) and a two-dimensional objective space. As an example, if we were designing an inductor, the parameter space variables might be the number of turns and the diameter of the wire, while the objective space variables might be inductor volume and inductor power loss. Returning to Figure 1.16(a), we see the parameter space defined over a set Ω (shaded). Members of this set include x1, x2, x3, and x4. Every point in Ω is mapped to a point in the objective space Γ (also shaded) shown in Figure 1.16(b). For example, x4 in parameter space maps to f(x4) in objective space. Points along the dashed line in Ω in Figure 1.16(a) are nondominated and form the Pareto-optimal set. They map to the dashed line in Figure 1.16(b) in objective space, where they form the Pareto-optimal front.
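As an illustration of the mapping from Ω to Γ, the sketch below samples a hypothetical two-parameter, two-objective problem on a grid, maps each sample into objective space, and retains the nondominated samples as a discrete approximation of the Pareto-optimal set and front. The objective functions are invented for the example and are not a model of an inductor or motor.

```python
import numpy as np

# Sketch: sample the parameter space (Omega) on a grid, evaluate both
# objectives at each sample (mapping into Gamma), and keep the nondominated
# samples as a discrete approximation of the Pareto-optimal set and front.

def f(x):
    f1 = x[0] ** 2 + x[1] ** 2                      # stand-in first objective
    f2 = (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2      # stand-in second objective
    return np.array([f1, f2])

grid = np.linspace(0.0, 2.5, 25)
X = np.array([(a, b) for a in grid for b in grid])  # sampled parameter space
F = np.array([f(x) for x in X])                     # image in objective space

# Retain samples not dominated by any other sample.
keep = [i for i in range(len(F))
        if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                   for j in range(len(F)))]

pareto_set = X[keep]      # approximate Pareto-optimal set (in Omega)
pareto_front = F[keep]    # approximate Pareto-optimal front (in Gamma)
print(len(keep), "nondominated samples out of", len(X))
```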
The goal of multi-objective optimization is to calculate the Pareto-optimal set and the associated Pareto-optimal front. One may argue that, in the end, only one design is chosen for a given application. This is often the case. However, making such a decision requires knowledge of the tradeoff between the competing objectives, that is, of the Pareto-optimal front.
There are many approaches to calculating the Pareto‐optimal set and the Pareto‐optimal front. These approaches include the weighted sum method, the ε‐constraint method, and the weighted metric method, to name a few [10]. In each of these methods, the multi‐objective problem is converted to a single‐objective optimization, and that optimization is conducted to yield a single point on the Pareto‐optimal front. Repetition of the procedure while varying the appropriate parameters yields the Pareto‐optimal front.
To illustrate this procedure, let us consider the ε-constraint method for a system with two objectives, f1(x) and f2(x), which we wish to minimize. To determine the Pareto-optimal set and front, the problem

$$\min_{\mathbf{x}\in\Omega} f_2(\mathbf{x}) \quad \text{subject to} \quad f_1(\mathbf{x}) = \varepsilon \qquad (1.7\text{-}1)$$
is repeatedly solved for different values of ε. This minimization can be carried out numerically using any single-objective minimization method. Solving (1.7-1) with ε = ε1a yields x1a with objective function values (ε1a, f2(x1a)), which is a point on the Pareto-optimal front, as shown in Figure 1.17. Solving (1.7-1) with ε = ε1b similarly yields x1b with objective function values (ε1b, f2(x1b)). Repeating the procedure over a range of ε values, the Pareto-optimal set and front can be determined.
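The sketch below carries out this sweep numerically for a pair of hypothetical objectives, using scipy.optimize.minimize with the SLSQP method as the single-objective constrained solver; the objective functions are invented test functions, and any constrained minimizer could be substituted.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the epsilon-constraint sweep of (1.7-1) for two hypothetical
# objectives. f1 and f2 are invented test functions, not a machine model.

def f1(x):
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

front = []
for eps in np.linspace(0.2, 4.8, 15):           # sweep the constraint level
    res = minimize(
        f2,
        x0=np.array([1.0, 0.5]),
        method="SLSQP",
        constraints=[{"type": "eq", "fun": lambda x, e=eps: f1(x) - e}],
    )
    if res.success:
        front.append((f1(res.x), f2(res.x)))    # one front point per epsilon

for p in front:
    print("f1 = %.3f, f2 = %.3f" % p)
```

Each pass through the loop converts the two-objective problem into a single-objective constrained minimization; collecting the minimizers over the swept ε values builds up the Pareto-optimal set, and their objective values trace out the front.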
Figure 1.17 Calculation of the Pareto-optimal front.