Multi-Objective Optimization in Theory and Practice presents a simplified two-part approach to multi-objective optimization (MOO) problems.
This second part focuses on the use of metaheuristic algorithms in more challenging practical cases. The book includes ten chapters that cover several advanced MOO techniques: the determination of Pareto-optimal sets of solutions, metaheuristic algorithms, genetic search algorithms and evolution strategies, decomposition algorithms, hybridization of different metaheuristics, and many-objective (more than three objectives) optimization and parallel computation. The final section of the book presents the design and types of fifty test problems; for each of them, the package NSGA-II is used to approximate the Pareto-optimal front.
It is an essential handbook for students and teachers involved in advanced optimization courses in engineering, information science and mathematics degree programs.
You can read the e-book in Legimi apps or in any app that supports the following format:
Page count: 307
Publication year: 2019
This is an agreement between you and Bentham Science Publishers Ltd. Please read this License Agreement carefully before using the ebook/echapter/ejournal (“Work”). Your use of the Work constitutes your agreement to the terms and conditions set forth in this License Agreement. If you do not agree to these terms and conditions then you should not use the Work.
Bentham Science Publishers agrees to grant you a non-exclusive, non-transferable limited license to use the Work subject to and in accordance with the following terms and conditions. This License Agreement is for non-library, personal use only. For a library / institutional / multi user license in respect of the Work, please contact: [email protected].
Bentham Science Publishers does not guarantee that the information in the Work is error-free, or warrant that it will meet your requirements or that access to the Work will be uninterrupted or error-free. The Work is provided "as is" without warranty of any kind, either express or implied or statutory, including, without limitation, implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the results and performance of the Work is assumed by you. No responsibility is assumed by Bentham Science Publishers, its staff, editors and/or authors for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, advertisements or ideas contained in the Work.
In no event will Bentham Science Publishers, its staff, editors and/or authors, be liable for any damages, including, without limitation, special, incidental and/or consequential damages and/or damages for lost data and/or profits arising out of (whether directly or indirectly) the use or inability to use the Work. The entire liability of Bentham Science Publishers shall be limited to the amount actually paid by you for the Work.
Bentham Science Publishers Ltd. Executive Suite Y - 2 PO Box 7917, Saif Zone Sharjah, U.A.E. Email: [email protected]
This book is the second part of a presentation on “multi-objective optimization in theory and practice.” It treats static multi-objective optimization programming (MOOP) problems, in which objectives and constraints do not vary over time. The first part focused on classical methods of problem-solving. These methods concerned extensive areas of real-life decision problems, including hierarchical programming problems, fuzzy programming problems, and game theory applications.
This second part concerns the use of metaheuristic methods of resolution in more challenging practical cases. These difficulties include the size of real-world applications and the nonlinearities and discontinuities of the Pareto-optimal front.
The development of multi-objective approaches accelerated in the mid-1980s with the help of evolutionary algorithms for solving real-world multi-objective problems. A recent review by I. Fister et al. (2013) collected 74 nature-inspired algorithms. These algorithms refer mainly to swarm intelligence, physics and chemistry, and biological systems.
Thus, the Particle Swarm Optimization (PSO) algorithm is inspired by the social behavior of birds flocking in search of food. A recent survey by Lalwani et al. (2013) showed that PSO can be applied to a variety of real-world problems. The first three domains (electrical engineering, industrial engineering, and flow shop scheduling problems) account for half of the collected studies across 16 different areas and challenges. A significant contribution is due to K. Deb et al. (2002, 2005) for generating tunable test problems.
Different toolkits and suites were designed to evaluate the performance of optimization algorithms. Evolutionary computation includes standard evolution strategies, evolutionary programming, genetic algorithms, and genetic programming.
Their extension to MOOP problems required new concepts, such as Pareto domination ranking and fitness sharing. The goal is to maintain diversity in the population of solutions through successive generations. Evolution strategy algorithms are nature-inspired, stochastic, population-based approaches. Collective strategies are possible within a population; such situations occur in nature with flocks of birds or schools of fish. Other co-evolutionary models involve two different populations or species that compete. MOOP problems can also be solved in two further ways. One is to decompose the initial problem into a sequence of subproblems. The other is the hybrid evolutionary algorithm, an efficient method in which different metaheuristics or heuristics are combined.
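To make the decomposition idea concrete, the following sketch solves a toy bi-objective problem as a family of weighted-sum scalar subproblems, a strong simplification of MOEA/D-style decomposition. The objectives, grid, and weights are illustrative assumptions of ours, not taken from the book.

```python
# A sketch of decomposition: a toy bi-objective problem is solved as a
# family of weighted-sum scalar subproblems (a simplification of
# MOEA/D-style decomposition; objectives, grid, and weights are our own
# illustrative choices, not taken from the book).

def f1(x):
    return x ** 2            # first objective (minimized at x = 0)

def f2(x):
    return (x - 2) ** 2      # second, conflicting objective (minimized at x = 2)

def solve_subproblem(w, xs):
    """Minimize the scalarized objective w*f1 + (1-w)*f2 over a grid xs."""
    return min(xs, key=lambda x: w * f1(x) + (1 - w) * f2(x))

xs = [i / 100 for i in range(-100, 301)]   # search grid on [-1, 3]
weights = [i / 10 for i in range(11)]      # 11 scalar subproblems
pareto_x = [solve_subproblem(w, xs) for w in weights]
print(pareto_x)  # trade-off solutions sweeping from x = 2 down to x = 0
```

Each weight vector yields one point of the trade-off set; weighted sums recover only convex parts of a Pareto front, which is one reason more sophisticated scalarizations exist.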
This book is an attempt to handle the most significant aspects of real-life complexities in decision-making. Various difficulties arise from non-convexities, nonlinearities, and non-differentiability of objective functions and side constraints, multiple conflicting goals, a multiplicity of global and local solutions, uncertainties due to inaccurate data, etc.
This book originated in lectures by the author on operations research and optimization techniques with applications at the University of Paris. More recently, the author participated as an invited speaker at several international conferences, notably on game theory, evolutionary optimization, and metaheuristic algorithms in general.
This book reviews and evaluates multi-objective programming models using several software packages. The book includes ten chapters. Chapter 1 focuses on techniques that determine Pareto-optimal sets of solutions. This chapter shows the vast variety of algorithms currently available to solve multi-objective optimization problems. We attempt a classification of these methods by considering standard categories (e.g., swarm intelligence, genetic algorithms, physics- and chemistry-based algorithms). In Chapter 1, dominance concepts are defined, and we propose an original Pareto ranking method based on Hasse diagrams. Chapter 2 and Chapter 3 introduce the value of metaheuristic algorithms for solving optimization problems. A metaheuristic algorithm refers to a higher-level ‘master’ strategy that guides and controls the operations of other, lower-level ‘worker’ heuristics. The simulated annealing algorithm illustrates this approach through the physical annealing principle (Chapter 2). The free software packages SciLab and GENOCOP solve constrained and unconstrained multi-objective optimization problems (Chapter 3). Chapter 4 to Chapter 6 discuss genetic search algorithms and evolution strategies. The population-based genetic algorithms (Chapter 4) adopt new concepts such as Pareto ranking and fitness sharing; one goal is to maintain diversity in the population of solutions. Evolution strategy algorithms are nature-inspired techniques (Chapter 5 and Chapter 6). Collective behaviors are those prevailing in flocks of birds or schools of fish. Co-evolutionary models involve different populations or species. In Chapter 7, we examine two ways of solving MOOP problems: one consists of decomposing the initial problem into a sequence of subproblems; another efficient way is the hybridization of different metaheuristics or heuristics. Chapter 8 covers many-objective (more than three objectives) optimization and parallel computation.
These approaches are suitable for complex and time-consuming real-world applications. Chapter 9 and Chapter 10 treat the design and types of test problems. Test functions help to evaluate MOOP algorithms. Toolkits and suites allow constructing tunable test problems (Chapter 9). Chapter 10 includes a collection of fifty test functions; for each of them, the package NSGA-II is used to approximate the Pareto-optimal front.
This book is user-oriented, with both theoretical and practical aspects. It includes detailed examples, figures, test functions, and small-size applications from the literature. It uses the commercial package Mathematica 7® and free software packages, notably SciLab 5.5.2 (an alternative to MatLab), GENOCOP III, NSGA-II, and a pseudo NSGA-III.
The author thanks Philippe Mathieu, Professor at the University of Lille in France, for associating him with his research unit on Multi-Agent Systems and Behaviors and for his interest in this research. This research unit is part of the division Interaction and Collective Intelligence in the Center for Research in Computer Science, Signal, and Automatic Control of Lille. This work also benefited from the electronic documentation at the University of Lille.
The author also thanks the anonymous reviewers for their comments. I especially thank one of them, from the University of Birmingham (UK), for her/his suggestions along with other documentary references. The suggestion to present new elements of many-objective optimization (i.e., more than three objectives) made it possible to address more real-life applications.
I express my gratitude to my Publisher Dr. Humaira Hashmi, In-charge of the eBook Department Publications, Bentham Science Publishers, for her patient and stimulating assistance in preparing this book. I thank Asma Ahmed, Dealing Manager of this project for her professional cooperation in realizing this eBook. I also extend my thanks to Sahar Iftekhar of the ‘Graphics Department’ for his expertise and technical advice in the preparation of graphics in this book. My gratitude goes also to Mr. Tashfeen Hashmi for the attractive design of the cover page.
None declared.
None declared.
Multi-objective optimization (MOO) problems belong to programming approaches in which the decision-maker is faced with a multiplicity of conflicting objectives. This situation occurs in real-world problems involving engineering design, chemical processes, financial management, etc. In such problems, we obtain a set of equally good solutions rather than just one best solution. Classical methods for solving MOO problems have shown their limits in finding Pareto-optimal fronts. The difficulties were the necessity of repetitive applications, the requirement of some knowledge about the problem, the insufficient spread of non-dominated solutions, and the sensitivity of results to the shape of the Pareto-optimal front. Therefore, metaheuristics with application to practical MOO problems developed rapidly during the 1980s and 1990s. A variety of nature-inspired algorithms have been proposed in recent years, with extensive applications. The concept of dominance is the basis of Pareto optimality. The dominance binary relation is a strict partial order. This approach allows a comparison between feasible solutions in the fitness space and the decision space. The non-dominated solution sets yield Pareto fronts. Different techniques were proposed to find good approximations of the Pareto sets when a Pareto front cannot be determined analytically. Our method uses graph-theoretic analysis: it provides a non-dominated set by using the Hasse diagram of an acyclic graph. Numerous examples from the literature show connected and disconnected Pareto-optimal fronts. In particular, Pareto-optimal fronts change if we decide to maximize instead of minimize the objectives. Algorithms use different selection procedures. The selection criteria can be an elitist Pareto ordering, non-Pareto criteria such as indicators, a bi-criteria evolution, or extended concepts of dominance such as ε-dominance and grid dominance.
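The dominance relation just described can be made concrete with a minimal sketch, for minimization, of a Pareto dominance test and of filtering a non-dominated set. The function names and sample points are our own illustration, not the book's notation.

```python
# A minimal sketch of Pareto dominance (minimization) and non-dominated
# filtering; the helper names and sample points are our own, not the book's.

def dominates(a, b):
    """True if a Pareto-dominates b: a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Keep only the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

front = non_dominated([(1, 4), (2, 2), (3, 1), (3, 3), (4, 4)])
print(front)  # -> [(1, 4), (2, 2), (3, 1)]
```

Since dominance is a strict partial order, some pairs of points (e.g., (1, 4) and (2, 2)) are simply incomparable; the mutually incomparable survivors form the approximation of the Pareto front.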
The first use of heuristic algorithms goes back to Turing (1948) [1] when he was breaking the German Enigma code during World War II (see also Yang, 2014 [2], and Angelov, 2016 [3]). After that, heuristic and metaheuristic algorithms for solving programming problems resulted from the difficulties with classical optimization methods. Abido (2010) [4] specified four inconveniences with conventional algorithms. Firstly, there is a need for a repetitive application of an algorithm to find the Pareto-optimal solutions. Secondly, some knowledge about the problem is required. Thirdly, an algorithm is sensitive to the shape of the Pareto-optimal front. Finally, the spread of the Pareto-optimal solutions depends on the chosen algorithm. Heuristic algorithms are suitable solvers for severe high-dimensional real-life problems (see, for instance, Tong et al., 2014 [5]).
Heuristics and metaheuristics are approximate resolution methods. A heuristic denotes a technique that seeks near-optimal solutions at low cost and is often problem-specific; more generally, heuristics denote a set of strategies that guide the search process in the feasible space. A metaheuristic consists of a master strategy that guides and corrects the operations of subordinate heuristics (see, for instance, Reeves, 1995 [6]). Metaheuristics, such as evolutionary algorithms (EAs), refer to a higher-level procedure that combines different subordinate heuristics for exploring the search space. Metaheuristics include notably genetic algorithms (GAs), evolution strategies (ES), and genetic programming (GP). They also include, but are not limited to, nature-inspired algorithms such as neural methods, simulated annealing, tabu search, ant colony systems, and other particle swarm intelligence techniques. The capacity of such methods to solve NP-hard 1 combinatorial problems is well known (e.g., the problems of the traveling salesperson, scheduling, graphs, and transportation). The book by Michalewicz (1999) [7] introduced metaheuristics for solving numerical optimization problems. Many authors proposed overviews of evolutionary techniques with applications, including Srinivas and K. Deb’s sorting GA (1994) [8], Fonseca and Fleming’s multi-objective GA (1993, 1995) [9, 10], Hajela and J. Lee’s weighted-based GA (1996) [11], and Zitzler (1999) [12], including Schaffer’s vector evaluated GA [13].
Genetic algorithms use principles of Darwinian evolution, characterized as follows. Individuals within populations (or species) differ. Traits are passed on to offspring. Few offspring survive in each generation, and the members that survive are those with the most favorable performance. This natural process on individuals has consequences for the corresponding population. The evolution process is backward-looking and largely deterministic (i.e., only partially random). It is not perfect and can produce new traits besides existing traits. Such algorithms are population-based stochastic algorithms whose elements include a population of individuals, fitness evaluation, genetic operators guiding evolution, and selection.
Fig. (1.1) shows the basic cycle of the process. The initial step consists of a population of individuals chosen at random. In the evaluation phase of the primary cycle, we evaluate all the individuals by using the objective functions of the programming problem. Next, fitness values are assigned to individuals on this basis. Then, the fittest individuals are selected for reproduction. After that, new individuals emanate from the use of genetic operators, such as crossover and mutation. Closing the basic cycle, the new population, including the selected individuals and offspring, is transferred to the first step for evaluation, and a new cycle begins.
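As an illustration, the cycle above can be sketched for a toy single-objective problem, minimizing f(x) = x². The population size, operators, and parameter values below are illustrative assumptions of ours, not the book's.

```python
# A compact sketch of the basic evolutionary cycle of Fig. (1.1) for a toy
# single-objective problem, minimizing f(x) = x**2. Population size,
# operators, and parameters are illustrative assumptions, not the book's.
import random

def evolve(fitness, pop_size=20, generations=50, sigma=0.3):
    random.seed(1)                               # reproducible toy run
    pop = [random.uniform(-5, 5) for _ in range(pop_size)]  # random initial population
    for _ in range(generations):
        pop.sort(key=fitness)                    # evaluation and fitness assignment
        parents = pop[: pop_size // 2]           # selection of the fittest
        offspring = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                  # arithmetic crossover
            child += random.gauss(0, sigma)      # Gaussian mutation
            offspring.append(child)
        pop = parents + offspring                # new population; the cycle repeats
    return min(pop, key=fitness)

best = evolve(lambda x: x * x)
print(best)  # typically very close to the optimum x = 0
```

Keeping the parents alongside the offspring is a simple form of elitism, so the best solution found never worsens from one generation to the next.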
Fig. (1.1) Basic cycle of an evolutionary algorithm.
One should note the fast development of the MOO approach in the mid-1980s with the help of evolutionary algorithms2. An early attempt to use genetic algorithms to solve MOO problems was made by Ito et al. (1983) [15]. Goldberg (1989) [16] proposed Pareto-set fitness assignment to solve Schaffer’s multi-objective problems. In the same period, two books dealt with the theory and techniques of MOO: Chankong and Haimes’ book (1983) [17] and the book by Sawaragi et al. (1985) [18]. The fast expansion of this approach was stimulated by numerous real-world applications from science, technology, management, and finance. Rangaiah’s book (2009) [19] was the first publication on MOOPs with a focus on chemical engineering; applications in this area were notably in the chemical, mineral processing, oil and gas, petroleum, and pharmaceutical industries. Lai and C-L. Huang (1994) [20] extended the MOO approach to fuzzy decision-making problems 3.
The first use of genetic-based search algorithms for multi-objective optimization problems goes back to the pioneering work of Rosenberg (1967) [24] (see Coello, 1999 [25]). In his brief history of metaheuristics, X-S. Yang (2014) [2] specified the relevant decades of the development of evolutionary algorithms. The 1960s and 1970s saw the development of genetic algorithms at the University of Michigan. Holland (1975) [26] proposed a search method based on Darwinian evolution concepts and natural principles of biological systems. Crossover, mutation, and selection operators were used to solve difficult combinatorial problems. In the same period, researchers from the Technical University of Berlin proposed evolution strategies: Rechenberg (1971) [27] and Schwefel (1975) [28] proposed a search method for solving optimization problems. D.B. Fogel (1994) [29] introduced evolutionary programming, using simulated evolution as a learning process 4.
Following X-S. Yang, the 1980s and 1990s were fruitful decades for metaheuristic algorithms. Kirkpatrick, Gelatt Jr., and Vecchi pioneered the simulated annealing (SA) algorithm in 1983 [30]. This method has its origin in the annealing process of metals. In 1986, the use of memory was introduced by Glover’s tabu search [31]. In 1992, the search technique by Dorigo [32] was inspired by the swarm intelligence of ant colonies, which use pheromones to communicate. Later, in 1995, Kennedy and Eberhart [33] developed particle swarm optimization, inspired by the swarm intelligence of fish and birds. In 1997, Storn and Price [34] proposed the differential evolution (DE) algorithm; this vector-based evolutionary algorithm proved to be more efficient than a genetic algorithm. In the same year, the study of No Free Lunch (NFL) theorems by Wolpert and Macready (1997) [35] was a significant step in the development of better algorithms. Indeed, the theorems proved that no algorithm is universally better for all applications; the most efficient algorithm is valid only for a given class of problems.
In recent years, other nature-inspired algorithms were proposed. We can mention the harmony search (HS) algorithm for distribution, transport, and scheduling (2001); honeybee algorithms (2004); the firefly algorithm (FA, 2007); the cuckoo search algorithm (CSA, 2009); the bat algorithm (BA, 2010), based on echolocation behavior; and the flower pollination algorithm (FPA, 2012).
Some survey articles on metaheuristic algorithms suggested classifications. Besides conventional aggregating approaches, Fonseca and Fleming (1995) [10] considered population-based non-Pareto methods (VEGA), Pareto-based approaches, and niche induction techniques. I. Fister et al. (2013) [36] divided algorithms into four groups by inspiration: 1) swarm intelligence (SI)-based algorithms, 2) bio-inspired but not SI-based algorithms, 3) physics/chemistry-based algorithms, and 4) other algorithms. Fig. (1.2) and Table 1.1 show the classification selected for this study.
Fig. (1.2) Metaheuristic algorithms for optimization problems [classification inspired by I. Fister et al. (2013) [36]].
However, I. Fister et al. (2013) [36] state that there is no single classification; taxonomies depend on focus and perspective. Thus, algorithms may be population-based (e.g., the GA, FA, and PSO algorithms) or trajectory-based (e.g., the SA algorithm). We can also divide algorithms into attraction-based procedures (e.g., the FA algorithm, which uses the attraction of light) and non-attraction-based procedures (e.g., GAs). Algorithms can also be classified as rule-based (crossover and mutation principles in GAs) or updating-equation-based (e.g., PSO, CS).
Fig. (1.2) shows the main metaheuristic algorithms, consisting of bio-inspired algorithms and physics- and chemistry-based algorithms. These algorithms belong to one or more of the following groups: swarm intelligence, genetic algorithms, genetic programming (GP), and evolutionary algorithms (see Tong et al., 2014 [5]).
Table 1.1 defines the algorithms of Fig. (1.2) with more information about the acronym, the authors, and the year of publication. Additional information from the literature is the number of references provided by the IEEE Xplore Digital Library 5 for each algorithm; it indicates how widely each algorithm is used. The first ten algorithms, with more than 1,000 publications each, are: 1) HEMH, 2) GA, 3) ECBO, 4) PSO, 5) GP, 6) SA, 7) EP, 8) ACO, 9) DE, and 10) MOEA, where the first three algorithms account for 174,091 items. Twenty-five algorithms have between 100 and 1,000 publications, 15 others have between 10 and 100 items, and eight algorithms have fewer than ten items.
The review of the literature by Lalwani et al. (2013) [93] showed the applicability of particle swarm optimization (PSO) to multi-objective problems 7 (MOPSO). In this algorithm 8, the swarm represents the set of potential solutions, whose individuals are the particles. This heuristic is inspired by the choreography of bird flocks. It was validated using standard test functions and compared against the EMO algorithms 9. Each particle tries to move using the information it holds on its position, velocity, and performance. The movement of a particle is governed by updating its position and velocity. Fig. (1.3) shows applications in a broad range of activities in engineering, industry, biology, management, and the environment. The application areas of the studies 10 were presented by Lalwani et al. (2013) [93]. The applications included aerospace engineering (abbreviated by “Aero Eng” in Fig. (1.3)), biological sciences (or “Biological”), chemical engineering (or “Chem Eng”), data mining (or “data min”), electrical engineering (or “Elec Eng”), electromagnetic engineering (or “Electromag”), electronics engineering (or “Electronics”), environmental sciences (or “Environ”), flow shop and job shop scheduling problems (or “Flowshop”), image processing (or “Image Proc”), industrial engineering (or “Ind Eng”), mechanical engineering (or “Mecha Eng”), neural networks (or “Neural netw”), robotics, and software engineering (or “Software E.”).
Fig. (1.3) Application areas of the MOPSO algorithm in papers published in the period 2006-2012 [data and figure adapted from Lalwani et al., 2013 [93], Figure 3, p. 45]. Note: the sum of the rounded percentages differs from one hundred percent.
Problems studied in these applications are listed in Table 1.2 (see Lalwani et al., 2013 [93], Table 2 for more details).
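The position and velocity update rule described above can be sketched, for a one-dimensional toy problem, as follows. The inertia and acceleration coefficients are common default values assumed by us, not taken from Lalwani et al.

```python
# Minimal sketch of the PSO velocity/position update for a one-dimensional
# toy problem (inertia weight w and acceleration coefficients c1, c2 are
# common defaults, assumed here, not taken from Lalwani et al.).
import random

def pso(fitness, n_particles=15, iters=60, w=0.7, c1=1.5, c2=1.5):
    random.seed(2)                                            # reproducible toy run
    x = [random.uniform(-5, 5) for _ in range(n_particles)]   # positions
    v = [0.0] * n_particles                                   # velocities
    pbest = x[:]                                              # personal best positions
    gbest = min(x, key=fitness)                               # global best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            # velocity update: inertia + cognitive term + social term
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (gbest - x[i])
            x[i] += v[i]                                      # position update
            if fitness(x[i]) < fitness(pbest[i]):
                pbest[i] = x[i]                               # update personal best
        gbest = min(pbest, key=fitness)                       # update global best
    return gbest

best_x = pso(lambda x: (x - 1) ** 2)
print(best_x)  # converges near the optimum x = 1
```

The cognitive term pulls each particle toward its own best position, while the social term pulls it toward the swarm's best, which is the "information sharing" the text describes; multi-objective variants (MOPSO) replace the single global best with an archive of non-dominated solutions.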
Optimum design of framed structures is a typical application of evolutionary algorithms 11. Framed structures include planar and space trusses. The trusses considered are composed of two, four, ten, 25, and 72 bars (see Table 1.3). Single-objective problems consist of minimizing the total weight (or volume) of truss structures. Multiple-objective problems may involve two to nine objectives. Two common objectives are reducing the structural weight and minimizing the deflection (i.e., the vertical displacement of structural elements under a load). Other objectives concern, e.g., the frequency of vibration, the peak stress, or the stress in each member of the trusses. Table 1.3 shows the evolutionary methods used for each application from the literature. The studies are listed in chronological order over a 30-year period, 1986-2016. GAs characterized a first sub-period, 1986-2000; various evolutionary methods belong to the next sub-period, 2003-2016. Applications of multi-objective structural optimization are shown in Table 1.4, covering the period 1993-2004.