Multi-Objective Optimization in Theory and Practice II: Metaheuristic Algorithms - André A. Keller - E-Book


André A. Keller

Description

Multi-Objective Optimization in Theory and Practice is a simplified two-part approach to multi-objective optimization (MOO) problems.

This second part focuses on the use of metaheuristic algorithms in more challenging practical cases. The book includes ten chapters that cover several advanced MOO techniques. These include the determination of Pareto-optimal sets of solutions, metaheuristic algorithms, genetic search algorithms and evolution strategies, decomposition algorithms, hybridization of different metaheuristics, and many-objective (more than three objectives) optimization and parallel computation. The final section of the book presents information about the design and types of fifty test problems for which the Pareto-optimal front is approximated. For each of them, the package NSGA-II is used to approximate the Pareto-optimal front.

It is an essential handbook for students and teachers involved in advanced optimization courses in engineering, information science and mathematics degree programs.

The e-book can be read in Legimi apps or in any app that supports the EPUB format.

Pages: 307

Publication year: 2019




Table of Contents
Cover
Table of Contents
Title
BENTHAM SCIENCE PUBLISHERS LTD.
End User License Agreement (for non-institutional, personal use)
Usage Rules:
Disclaimer:
Limitation of Liability:
General:
PREFACE
ACKNOWLEDGEMENTS
Conflict of Interest
Consent for Publication
Pareto-Optimal Front Determination
Abstract
1.1. Introduction
1.1.1. Heuristic and Metaheuristic Algorithm
1.1.2. History of Metaheuristics
1.1.3. Metaheuristic Algorithms and Applications
1.1.4. Optimum Design of Framed Structures: A Review of Literature
1.2. Elements of Static Multi-Objective Programming
1.2.1. Problem Formulation
1.2.2. Concept of Dominance
1.2.3. Pareto-Optimality
1.3. Pareto-Optimal Front
1.3.1. Non-Dominated Solution
1.3.2. Analytical Pareto-Optimal Front
1.3.3. Near Pareto-Optimal Front
1.3.3.1. Pareto Ranking
1.3.3.2. Extended Pareto Ranking Application
1.3.3.3. Application to an Engineering Problem
1.3.3.4. Near Pareto-Optimal Front Approximation
1.3.4. Shape of a Pareto-Optimal Front
1.3.4.1. The Impact of Minimizing or Maximizing each Objective
1.3.4.2. The Impact of Conflicting or Non-conflicting Objectives
1.4. Selection Procedures of Algorithms
1.4.1. Elitist Pareto Criteria
1.4.2. Non-Pareto Criteria
1.4.3. Bi-criterion Evolution
1.4.4. Other Concepts of Dominance
REFERENCES
Metaheuristic Optimization Algorithms
Abstract
2.1. Introduction
2.2. Simulated Annealing Algorithm
2.2.1. Annealing Principle and Description
2.2.2. Problem Formulation
2.2.3. Algorithm Description
2.2.3.1. Acceptance Probability
2.2.3.2. Algorithm
2.2.3.3. Different Cooling Schedules
2.2.3.4. Penalty Function Approach
2.3. Multi-Objective Simulated Annealing
2.3.1. MOSA Algorithm
2.3.2. Test Problems
REFERENCES
Evolutionary Strategy Algorithms
Abstract
3.1. Introduction
3.2. Principles and Operators
3.2.1. Algorithm for Solving Optimization Problems
3.2.2. Binary and Real-Number Encoding
3.2.3. Genetic Operators
3.3. GA-Based Mathematica® Notebook
3.4. Single-Objective Optimization
3.4.1. SciLab Package for Genetic Algorithm
3.4.2. GA-Based Software Package: GENOCOP III
REFERENCES
Genetic Search Algorithms
Abstract
4.1. Introduction
4.2. Niched Pareto Genetic Algorithms (NPGA)
4.3. Non-Dominated Sorting Genetic Algorithm
4.4. Multi-Objective Optimization Test Problems
4.4.1. Unconstrained Optimization Problems
4.4.1.1. Kursawe's Test Function using SciLab
4.4.1.2. UF1 Test Function using MOEA Framework
4.4.1.3. Viennet’s Test Function with SciLab
4.4.2. Constrained Optimization Problem
REFERENCES
Evolution Strategy Algorithms
Abstract
5.1. Introduction
5.2. Differential Evolution Strategy
5.2.1. Principles and Algorithm
5.2.2. DE Operators
5.2.2.1. Mutation Operators
5.2.2.2. Crossover Operators
5.2.2.3. Selection Operators
5.3. DE Algorithm for Single-Objective Optimization Problems
5.4. Multi-Objective DE Algorithm
5.4.1. Diversity-Promoting
5.4.2. Performing Elitism
REFERENCES
Swarm Intelligence and Co-Evolutionary Algorithms
Abstract
6.1. Introduction
6.2. Particle Swarm Optimization
6.3. Cooperative Co-Evolutionary Genetic Algorithm
6.4. Competitive Predator-Prey Optimization Model
6.4.1. Principle of PP Algorithm
6.4.2. PP Algorithm
6.4.3. Illustrative Problems
REFERENCES
Decomposition-Based and Hybrid Evolutionary Algorithms
Abstract
7.1. Introduction
7.2. Decomposition-Based Algorithm
7.2.1. Scalar Decomposition Principle
7.2.1.1. Weighted Sum Approach
7.2.1.2. Tchebycheff Approach
7.2.1.3. Weighted Vector Generation Techniques
7.2.2. Decomposition-Based MOEA Algorithm
7.2.3. MOEA Framework Software Package
7.3. Hybrid Evolutionary Algorithms
7.3.1. Hybridization under Lamarckian Strategy
7.3.2. Two-Stage Hybrid Search Method
7.3.3. Multi-Objective Genetic Local Search Algorithm
7.3.4. Archive-Based Hybrid Scatter Search (AbYSS)
7.3.4.1. Description and Main Features
7.3.4.2. Scatter Search Template
7.3.4.3. jMetal Implementation
REFERENCES
Many-Objective Optimization and Parallel Computation
Abstract
8.1. Introduction
8.2. Many-Objective Optimization
8.2.1. Dominance Extensions for MaOPs
8.2.2. Efficient Optimizers for MaOPs
8.2.2.1. Hypervolume-Based Algorithm
8.2.2.2. Vector Angle-Based Algorithm
8.2.2.3. Reference Point-Based Evolutionary Algorithm
8.2.2.4. Preference-Based Evolutionary Algorithm
8.2.3. Test Problem Suites for MaOPs
8.2.4. A Matlab Platform for MaOPs
8.2.5. Test Problem DTLZ#2 with Many Objectives
8.2.5.1. Formulation of Test Problem DTLZ#2
8.2.5.2. Implementation of a Pseudo NSGA-III Algorithm
8.2.5.3. Parallel Coordinate Plots for DTLZ#2
8.3. Parallel Computation of Metaheuristic Algorithms
8.3.1. Parallelization Strategies
8.3.2. Parallel Designs
8.3.3. Parallel Metaheuristic Algorithms and Applications
REFERENCES
Design of Test Problems
Abstract
9.1. Introduction
9.2. Key Characteristics of Test Functions
9.2.1. Major Complexities
9.2.2. Generating Methods
9.2.3. Test Suites
9.3. Analytical Pareto-Optimal Fronts
9.4. Multi-Objective Optimization with Mathematica®
9.4.1. Interactive Mathematica® Demonstration
9.4.2. Pareto-Optimal Solution
9.4.3. Resolution Process
REFERENCES
Fifty Collected Test Functions
Abstract
10.1. Introduction
10.2. All Fifty Test Problems
10.2.1. Problem Description
10.2.2. Approximated Pareto Fronts Using NSGA-II
10.2.3. Mathematical Problem Formulation
10.3. Selected Test Problems
REFERENCES
List of Abbreviations
List of Journal Abbreviations in the References
List of Symbols

Multi-Objective Optimization

In Theory and Practice II:

Metaheuristic Algorithms

Authored by André A. Keller

Center for Research in Computer Science, Signal and Automatic Control of Lille,
University of Lille – CNRS, France

BENTHAM SCIENCE PUBLISHERS LTD.

End User License Agreement (for non-institutional, personal use)

This is an agreement between you and Bentham Science Publishers Ltd. Please read this License Agreement carefully before using the ebook/echapter/ejournal (“Work”). Your use of the Work constitutes your agreement to the terms and conditions set forth in this License Agreement. If you do not agree to these terms and conditions then you should not use the Work.

Bentham Science Publishers agrees to grant you a non-exclusive, non-transferable limited license to use the Work subject to and in accordance with the following terms and conditions. This License Agreement is for non-library, personal use only. For a library / institutional / multi user license in respect of the Work, please contact: [email protected].

Usage Rules:

All rights reserved: The Work is the subject of copyright and Bentham Science Publishers either owns the Work (and the copyright in it) or is licensed to distribute the Work. You shall not copy, reproduce, modify, remove, delete, augment, add to, publish, transmit, sell, resell, create derivative works from, or in any way exploit the Work or make the Work available for others to do any of the same, in any form or by any means, in whole or in part, in each case without the prior written permission of Bentham Science Publishers, unless stated otherwise in this License Agreement.

You may download a copy of the Work on one occasion to one personal computer (including tablet, laptop, desktop, or other such devices). You may make one back-up copy of the Work to avoid losing it.

The following DRM (Digital Rights Management) policy may also be applicable to the Work at Bentham Science Publishers’ election, acting in its sole discretion:

25 ‘copy’ commands can be executed every 7 days in respect of the Work. The text selected for copying cannot extend to more than a single page. Each time a text ‘copy’ command is executed, irrespective of whether the text selection is made from within one page or from separate pages, it will be considered as a separate / individual ‘copy’ command.

25 pages only from the Work can be printed every 7 days.

The unauthorised use or distribution of copyrighted or other proprietary content is illegal and could subject you to liability for substantial money damages. You will be liable for any damage resulting from your misuse of the Work or any violation of this License Agreement, including any infringement by you of copyrights or proprietary rights.

Disclaimer:

Bentham Science Publishers does not guarantee that the information in the Work is error-free, or warrant that it will meet your requirements or that access to the Work will be uninterrupted or error-free. The Work is provided "as is" without warranty of any kind, either express or implied or statutory, including, without limitation, implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the results and performance of the Work is assumed by you. No responsibility is assumed by Bentham Science Publishers, its staff, editors and/or authors for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products instruction, advertisements or ideas contained in the Work.

Limitation of Liability:

In no event will Bentham Science Publishers, its staff, editors and/or authors, be liable for any damages, including, without limitation, special, incidental and/or consequential damages and/or damages for lost data and/or profits arising out of (whether directly or indirectly) the use or inability to use the Work. The entire liability of Bentham Science Publishers shall be limited to the amount actually paid by you for the Work.

General:

Any dispute or claim arising out of or in connection with this License Agreement or the Work (including non-contractual disputes or claims) will be governed by and construed in accordance with the laws of the U.A.E. as applied in the Emirate of Dubai. Each party agrees that the courts of the Emirate of Dubai shall have exclusive jurisdiction to settle any dispute or claim arising out of or in connection with this License Agreement or the Work (including non-contractual disputes or claims).

Your rights under this License Agreement will automatically terminate without notice and without the need for a court order if at any point you breach any terms of this License Agreement. In no event will any delay or failure by Bentham Science Publishers in enforcing your compliance with this License Agreement constitute a waiver of any of its rights.

You acknowledge that you have read this License Agreement, and agree to be bound by its terms and conditions. To the extent that any other terms and conditions presented on any website of Bentham Science Publishers conflict with, or are inconsistent with, the terms and conditions set out in this License Agreement, you acknowledge that the terms and conditions set out in this License Agreement shall prevail.

Bentham Science Publishers Ltd. Executive Suite Y - 2 PO Box 7917, Saif Zone Sharjah, U.A.E. Email: [email protected]

PREFACE

André A. Keller

This book is the second part of a presentation on “multi-objective optimization in theory and practice.” It treats static multi-objective optimization programming (MOOP) problems, in which the objectives and constraints do not vary over time. The first part focused on classical methods of problem-solving. These methods covered extensive areas of real-life decision problems, including hierarchical programming problems, fuzzy programming problems, and game theory applications.

This second part concerns the use of metaheuristic methods of resolution in more challenging practical cases. The difficulties include the size of real-world applications and the nonlinearities and discontinuities that the Pareto-optimal front may exhibit.

The development of multi-objective approaches accelerated in the mid-1980s with the help of evolutionary algorithms for solving real-world multi-objective problems. A recent review by I. Fister et al. (2013) collected 74 nature-inspired algorithms. These algorithms refer mainly to swarm intelligence, physics and chemistry, and biological systems.

Thus, the Particle Swarm Optimization (PSO) algorithm is inspired by the social behavior of bird flocks searching for food. A survey by Lalwani et al. (2013) showed that PSO can be applied to a variety of real-world problems. The first three domains, electrical engineering, industrial engineering, and flow shop scheduling, account for half of the studies collected across 16 different areas and challenges. A significant contribution is due to K. Deb et al. (2002, 2005) for generating tunable test problems.

Different toolkits and suites were designed to evaluate the process of optimization algorithms. Evolutionary computation includes standard evolution strategy, evolution programming, genetic algorithm, and genetic programming.

Their extension to MOOP problems required new concepts, such as Pareto dominance ranking and fitness sharing, whose goal is to maintain the diversity in the population of solutions through successive generations. Evolution strategy algorithms are nature-inspired, stochastic, population-based approaches. Collective strategies are possible within a population; such situations occur in nature with flocks of birds or schools of fish. Other co-evolutionary models involve two different populations or species that compete. MOOP problems may be solved in two different ways. One way is to decompose the initial problem into a sequence of subproblems. The hybrid evolutionary algorithm is another efficient method, in which different metaheuristics or heuristics are combined.

This book attempts to handle the most significant aspects of real-life complexity in decision-making. Various difficulties arise from non-convexities, nonlinearities, and non-differentiability of objective functions and side-constraints, multiple conflicting goals, a multiplicity of global and local solutions, uncertainties due to inaccurate data, etc.

This book originated in lectures by the author on operations research and optimization techniques with applications at the University of Paris. More recently, the author has participated as an invited speaker at several international conferences, notably on game theory, evolutionary optimization, and metaheuristic algorithms in general.

This book reviews and evaluates multi-objective programming models using several software packages. The book includes ten chapters. Chapter 1 focuses on techniques that determine Pareto-optimal sets of solutions. This chapter shows the vast variety of algorithms currently available to solve multi-objective optimization problems. We attempt a classification of these methods by considering standard categories (e.g., swarm intelligence, genetic algorithms, physics- and chemistry-based algorithms). In Chapter 1, dominance concepts are defined, and we propose an original Pareto ranking method based on Hasse diagrams. Chapter 2 and Chapter 3 introduce the interest of metaheuristic algorithms for solving optimization problems. A metaheuristic algorithm refers to a higher-level ‘master’ strategy that guides and controls the operations of other, lower-level ‘worker’ heuristics. The simulated annealing algorithm, based on the physical annealing principle, illustrates this approach (Chapter 2). The free software packages SciLab and GENOCOP solve constrained and unconstrained multi-objective optimization problems (Chapter 3). Chapter 4 to Chapter 6 discuss genetic search algorithms and evolution strategies. The population-based genetic algorithms (Chapter 4) adopt new concepts such as Pareto ranking and fitness sharing; one goal is to maintain the diversity in the population of solutions. Evolution strategy algorithms are nature-inspired techniques (Chapter 5 and Chapter 6). Collective behaviors are those prevailing in flocks of birds or schools of fish. Co-evolutionary models involve different populations or species. In Chapter 7, we examine two ways of solving MOOP problems. One consists of decomposing the initial problem into a sequence of subproblems; another efficient way is the hybridization of different metaheuristics or heuristics. Chapter 8 is on many-objective (more than three objectives) optimization and parallel computation. These approaches are suitable for complex and time-consuming real-world applications. Chapter 9 and Chapter 10 treat the design and types of test problems. Test functions help to evaluate MOOP algorithms. Toolkits and suites allow the construction of tunable test problems (Chapter 9). Chapter 10 includes a collection of fifty test functions. For each of them, the package NSGA-II is used to approximate the Pareto-optimal front.

This book is user-oriented, with both theoretical and practical aspects. It includes detailed examples, figures, test functions, and small-size applications from the literature. The book uses the commercial package Mathematica 7® and free software packages, including notably SciLab 5.5.2 (an alternative to MATLAB), GENOCOP III, NSGA-II, and a pseudo NSGA-III.

Prof. André A. Keller Center for Research in Computer Science Signal and Automatic Control of Lille University of Lille – CNRS France

ACKNOWLEDGEMENTS

André A. Keller

The author thanks Philippe Mathieu, Professor at the University of Lille in France, for associating him with his research unit on Multi-Agent Systems and Behaviors, and for the interest he showed in this research. This research unit is part of the division Interaction and Collective Intelligence in the Center for Research in Computer Science, Signal, and Automatic Control of Lille. This work also benefited from the electronic documentation at the University of Lille.

The author also thanks the anonymous reviewers for their comments. I especially thank one of them, from the University of Birmingham (UK), for her/his suggestions and additional documentary references. The suggestion to present new elements of many-objective optimization (i.e., more than three objectives) made it possible to address more real-life applications.

I express my gratitude to my publisher, Dr. Humaira Hashmi, In-charge of the eBook Department Publications, Bentham Science Publishers, for her patient and stimulating assistance in preparing this book. I thank Asma Ahmed, Dealing Manager of this project, for her professional cooperation in realizing this eBook. I also extend my thanks to Sahar Iftekhar of the Graphics Department for the expertise and technical advice in the preparation of the graphics in this book. My gratitude also goes to Mr. Tashfeen Hashmi for the attractive design of the cover page.

Prof. André A. Keller 2019, Lille France

Conflict of Interest

None declared.

Consent for Publication

None declared.

Pareto-Optimal Front Determination

André A. Keller

Abstract

Multi-objective optimization (MOO) problems belong to programming approaches in which the decision-maker faces a multiplicity of conflicting objectives. This situation occurs with real-world problems involving engineering design, chemical processes, financial management, etc. In such problems, we obtain a set of equally good solutions rather than a single best solution. Classical methods for solving MOO problems have shown their limits in finding Pareto-optimal fronts. The difficulties were the necessity of repetitive applications, the requirement of some knowledge about the problem, the insufficient spread of non-dominated solutions, and the sensitivity of results to the shape of the Pareto-optimal front. Therefore, metaheuristics with applications to practical MOO problems developed rapidly in the 1980s and 1990s. A variety of nature-inspired algorithms has been proposed in recent years, with extensive applications. The concept of dominance is the basis of Pareto optimality. The dominance binary relation is a strict partial order. It allows a comparison between feasible solutions in the fitness space and the decision space. The non-dominated solution sets yield Pareto fronts. Different techniques have been proposed to find good approximations of the Pareto sets when a Pareto front cannot be determined analytically. Our method uses graph-theoretic analysis: it provides a non-dominated set by using the Hasse diagram of an acyclic graph. Numerous examples from the literature show connected and disconnected Pareto-optimal fronts. In particular, Pareto-optimal fronts change if we decide to maximize instead of minimize the objectives. Algorithms use different selection procedures. The selection criteria can be an elitist Pareto ordering, non-Pareto criteria such as indicators, a bi-criterion evolution, or extended concepts of dominance such as ε-dominance and grid dominance.
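The dominance relation described above can be sketched in a few lines of Python. This is a minimal illustration, not the book's implementation; the helper names `dominates` and `non_dominated` are ours, and minimization of all objectives is assumed.

```python
# Pareto dominance for minimization: u dominates v if u is no worse in
# every objective and strictly better in at least one. The points that no
# other point dominates form the non-dominated set (the Pareto front).

def dominates(u, v):
    """True if objective vector u Pareto-dominates v (minimization)."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def non_dominated(points):
    """Return the points that are not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

points = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
front = non_dominated(points)   # (3.0, 3.0) is dominated by (2.0, 2.0)
print(front)
```

Note that the relation is a strict partial order: some pairs of points, such as (1.0, 4.0) and (2.0, 2.0), are mutually non-dominated, which is why a set of equally good solutions results.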

Keywords: Attraction-based algorithm, Conflicting/nonconflicting objectives, Darwinian evolution, Engineering design problem, Globally/locally Pareto-optimality, Hasse diagram, Master strategy, Metaheuristics, Multi-objective optimization, Nature-inspired algorithm, Near/exact Pareto-optimal front, Non-dominated solution, Pareto-optimal solution, Pareto ranking, Population-based algorithm, Strongly/weakly Pareto-optimality, Subordinate heuristics, Swarm intelligence, Trajectory-based algorithm.

1.1. Introduction

The first use of heuristic algorithms goes back to Turing (1948) [1] and his work on breaking the German Enigma code during World War II (see also Yang, 2014 [2], and Angelov, 2016 [3]). After that, heuristic and metaheuristic algorithms for solving programming problems resulted from the difficulties with classical optimization methods. Abido (2010) [4] specified four drawbacks of conventional algorithms. Firstly, there is a need for repetitive application of an algorithm to find the Pareto-optimal solutions. Secondly, some knowledge about the problem is required. Thirdly, an algorithm is sensitive to the shape of the Pareto-optimal front. Finally, the spread of the Pareto-optimal solutions depends on the chosen algorithm. Heuristic algorithms are suitable solvers for hard, high-dimensional real-life problems (see, for instance, Tong et al., 2014 [5]).

1.1.1. Heuristic and Metaheuristic Algorithm

Heuristics and metaheuristics are approximate resolution methods. Heuristics denotes techniques that seek near-optimal solutions at a low cost and are often problem-specific; it also denotes a set of strategies to guide the search process in the feasible space. Metaheuristics consists of a master strategy that guides and corrects the operations of subordinate heuristics (see, for instance, Reeves, 1995 [6]). Metaheuristics, such as evolutionary algorithms (EAs), refer to a higher-level procedure that combines different subordinate heuristics for exploring the search space. Metaheuristics includes notably genetic algorithms (GAs), evolution strategies (ES), and genetic programming (GP). EAs also include, but are not limited to, nature-inspired algorithms such as neural methods, simulated annealing, tabu search, ant colony systems, and other particle swarm intelligence techniques. The capacity of such methods to solve NP-hard 1 combinatorial problems is well known (e.g., the problems of the traveling salesperson, scheduling, graphs, and transportation). The book by Michalewicz (1999) [7] introduced metaheuristics for solving numerical optimization problems. Many authors have proposed overviews of evolutionary techniques with applications, including Srinivas and K. Deb’s sorting GA (1994) [8], Fonseca and Fleming’s multi-objective GA (1993, 1995) [9, 10], Hajela and J. Lee’s weight-based GA (1996) [11], and Zitzler (1999) [12], including Schaffer’s vector evaluated GA [13].

Genetic algorithms use principles of Darwinian evolution, characterized as follows. Individuals within populations (or species) differ. Traits are passed on to offspring. Only a few offspring survive in each generation, and the members selected to survive are those with the most favorable performance. This natural process on individuals has consequences for the corresponding population. The evolution process is not forward-looking and is not fully deterministic (i.e., it is partially random). It is not perfect and can produce new traits besides existing ones. Such algorithms are population-based stochastic algorithms whose elements include a population of individuals, fitness evaluation, genetic operators guiding the evolution, and selection.

Fig. (1.1) shows the basic cycle of the process. The initial step consists of a population of individuals chosen at random. In the evaluation phase of the basic cycle, we evaluate all the individuals by using the objective functions of the programming problem. Next, fitness values are assigned to the individuals on this basis. Then, the fittest individuals are selected for reproduction. After that, new individuals emanate from the use of genetic operators, such as crossover and mutation. Closing the basic cycle, the new population, including the selected individuals and offspring, is transferred to the first step for evaluation, and a new cycle goes on.

Fig. (1.1)) Basic cycle of an evolutionary algorithm.
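The basic cycle of Fig. (1.1) can be sketched as follows. This is a minimal illustration under assumptions of our own, not the book's code: real-number encoding, tournament selection, one-point crossover, Gaussian mutation, and minimization of the sphere function as the fitness criterion.

```python
import random

def fitness(x):                      # evaluation phase (to minimize)
    return sum(v * v for v in x)     # sphere function, optimum at 0

def tournament(pop):                 # selection of the fittest
    a, b = random.sample(pop, 2)
    return a if fitness(a) < fitness(b) else b

def crossover(p1, p2):               # one-point crossover
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def mutate(x, rate=0.1, sigma=0.2):  # Gaussian mutation
    return [v + random.gauss(0, sigma) if random.random() < rate else v
            for v in x]

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(30)]
for _ in range(100):                 # repeat the basic cycle
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(len(pop))]

best = min(pop, key=fitness)
print(fitness(best))                 # typically a small value near 0
```

In a multi-objective setting, the scalar `fitness` comparison would be replaced by a dominance-based ranking, which is the subject of the following sections.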

1.1.2. History of Metaheuristics

One should note the fast development of the MOO approach in the mid-1980s, with the help of evolutionary algorithms2. An early attempt to use genetic algorithms to solve MOO problems was realized by Ito et al. (1983) [15]. Goldberg (1989) [16] proposed a Pareto-set fitness assignment to solve Schaffer’s multi-objective problems. In the same period, two books dealt with the theory and techniques of MOO: Chankong and Haimes (1983) [17], and Sawaragi et al. (1985) [18]. The fast expansion of this approach was stimulated by numerous real-world applications from science, technology, management, and finance. Rangaiah’s book (2009) [19] was the first publication on MOOPs with a focus on chemical engineering; the applications in this area were notably in the chemical, mineral processing, oil and gas, petroleum, and pharmaceutical industries. Lai and C-L. Hwang (1994) [20] extended the MOO approach to fuzzy decision-making problems 3.

The first use of genetic-based search algorithms for multi-objective optimization problems goes back to the pioneering work of Rosenberg (1967) [24] (see Coello, 1999 [25]). In his brief history of metaheuristics, X-S. Yang (2014) [2] specified the relevant decades of the development of evolutionary algorithms. The 1960s and 1970s saw the development of genetic algorithms at the University of Michigan. Holland (1975) [26] proposed a search method based on the Darwinian evolution concepts and the natural principles of biological systems; crossover, mutation, and selection operators were used to solve difficult combinatorial problems. In the same period, researchers from the Technical University of Berlin proposed evolution strategies: Rechenberg (1971) [27] and Schwefel (1975) [28] proposed a search method for solving optimization problems. D.B. Fogel (1994) [29] introduced evolutionary programming, which uses simulated evolution as a learning process 4.

Following X-S. Yang, the 1980s and 1990s were fruitful decades for metaheuristic algorithms. Kirkpatrick, Gelatt Jr., and Vecchi pioneered the simulated annealing (SA) algorithm in 1983 [30]. This method has its origin in the annealing process of metals. In 1986, Glover’s tabu search [31] introduced the use of memory. In 1992, Dorigo’s search technique [32] was inspired by the swarm intelligence of ant colonies, which use pheromones to communicate. Later, in 1995, Kennedy and Eberhart [33] developed particle swarm optimization, inspired by the swarm intelligence of fish schools and bird flocks. In 1997, Storn and Price [34] proposed the differential evolution (DE) algorithm; this vector-based evolutionary algorithm proved more efficient than a genetic algorithm. In the same year, the study of the No Free Lunch (NFL) theorems by Wolpert and Macready (1997) [35] was a significant step in the development of better algorithms. Indeed, the theorems prove that no algorithm is universally better for all applications; the most efficient algorithm is valid only for a given class of problems.

In recent years, other nature-inspired algorithms have been proposed. We can mention the harmony search (HS) algorithm for distribution, transport, and scheduling (2001); honeybee algorithms (2004); the firefly algorithm (FA, 2007); the cuckoo search algorithm (CSA, 2009); the bat algorithm (BA, 2010), based on echolocation behavior; and the flower pollination algorithm (FPA, 2012).

1.1.3. Metaheuristic Algorithms and Applications

Some survey articles on metaheuristic algorithms have suggested classifications. Besides conventional aggregating approaches, Fonseca and Fleming (1995) [10] considered population-based non-Pareto methods (VEGA), Pareto-based approaches, and niche induction techniques. I. Fister et al. (2013) [36] divided algorithms into four groups by inspiration: 1) swarm intelligence (SI)-based algorithms, 2) bio-inspired but not SI-based algorithms, 3) physics/chemistry-based algorithms, and 4) other algorithms. Fig. (1.2) and Table 1.1 show the classification selected for this study.

Fig. (1.2)) Metaheuristic algorithms for optimization problems [classification inspired by I. Fister et al. (2013) [36]].

However, I. Fister et al. (2013) [36] note that there is no single classification; taxonomies depend on focus and perspective. Thus, algorithms may be population-based (e.g., the GA, FA, and PSO algorithms) or trajectory-based (e.g., the SA algorithm). We can also divide algorithms into attraction-based procedures (e.g., the FA algorithm, which uses the attraction of light) and non-attraction-based procedures (e.g., GA algorithms). Algorithms can also be classified as rule-based (crossover and mutation principles in GAs) or updating-equation-based (e.g., PSO, CS).

Fig. (1.2) shows most metaheuristic algorithms. They consist of bio-inspired algorithms and physics- and chemistry-based algorithms. Each algorithm belongs to one or more of the following groups: swarm intelligence, genetic algorithms, genetic programming (GP), and evolutionary algorithms (see Tong et al., 2014 [5]).

Table 1.1 describes the algorithms of Fig. (1.2), with information about the acronym, the authors, and the year of publication. Additional information from the literature is the number of references provided by the IEEE Xplore Digital Library 5 for each algorithm, which indicates how widely each algorithm is used. The first ten algorithms, each with more than 1,000 publications, are: 1) HEMH, 2) GA, 3) ECBO, 4) PSO, 5) GP, 6) SA, 7) EP, 8) ACO, 9) DE, and 10) MOEA, where the first three algorithms account for 174,091 items. Twenty-five algorithms have between 100 and 1,000 publications, 15 others have between 10 and 100 items, and eight algorithms have fewer than ten items.

Table 1.2 Description of 65 metaheuristic algorithms for optimization.

Acronym | Metaheuristic algorithm | Authors | Year | IEEE Xplore (update: 08-30-2016)
ABC | Artificial bee colony | Karaboga & Basturk [37] | 2007 | 869
ACO | Ant colony optimization | Dorigo [32] | 1992 | 5,064
AIA | Artificial immune algorithm | Yildiz [38] | 2009 | 1,738
BA | Bat algorithm | Yang [39] | 2010 | 335
BBO | Big bang optimization | Erol & Eksin [40] | 2006 | 36
BCO | Bee colony optimization | Teodorović & Dell’Orco [41, 42] | 2005 | 910
BFO | Bacterial foraging optimization | Passino [43] | 2002 | 311
BS | Bee system | Sato & Hagiwara [44] | 1997 | 1,158
BSO | Bee swarm optimization | Drias et al. [45] | 2005 | 500
CBO | Colliding bodies optimization | Kaveh & Mahdavi [46] | 2014 | 4
CCDE | Cooperative coevolutionary differential evolution | Ghasemishabankareh et al. [47] | 2016 | 11
COEA | Coevolutionary metaheuristic | Yang [2] | 2014 | 4
CSA | Cuckoo search algorithm | Yang and S. Deb [48] | 2009 | 218
CSO | Cat swarm optimization | Chu et al. [49] | 2006 | 62
CSP | Chaotic swarming of particles | Kaveh et al. [50] | 2014 | 357
CSSA | Charged system search algorithm | Kaveh & Talatahari [51] | 2009 | 176
DE | Differential evolution | Storn & Price [34] | 1997 | 4,835
DEA | Dolphin echolocation algorithm | Kaveh & Farhoudi [52] | 2013 | 4
ECBO | Enhanced colliding bodies optimization | Kaveh & Ilchi Ghazaan [53] | 2014 | 18,345
EMO | Electro-magnetism optimization | Cuevas et al. [54] | 2012 | 209
EP | Evolutionary programming | L.J. Fogel et al. [55] | 1966 | 6,557
ES | Eagle strategy | Yang and S. Deb [56] | 2010 | 11
EVO | Egyptian vulture optimization | Sur et al. [57] | 2013 | 43
FA | Firefly algorithm | Yang [58, 59] | 2010 | 382
FPA | Flower pollination algorithm | Yang [60] | 2012 | 35
FSA | Fish school algorithm | X-L. Li et al. [61] | 2002 | 232
GA | Genetic algorithm | Holland [26] | 1975 | 52,038
GE | Grammatical evolution | Ryan et al. [62] | 1998 | 106
GP | Genetic programming | Koza [63] | 1992 | 10,965
GSA | Gravitational search algorithm | Rashedi et al. [64] | 2009 | 266
GSO | Glowworm swarm optimization | Krishnanand & Ghose [65] | 2005 | 48
HEMH | Hybrid evolutionary metaheuristics | Grosan & Abraham [66]; Ting et al. [67] | 2007, 2015 | 102,708
HS | Harmony search | Geem et al. [68] | 2001 | 506
IEC | Interactive evolutionary algorithm | Takagi [69] | 2001 | 348
IWD | Intelligent water drop | Shah-Hosseini [70] | 2007 | 57
KH | Krill herd algorithm | Gandomi & Alavi [71] | 2012 | 30
MBO | Marriage bees optimization | Abbas [72] | 2001 | 9
MCSS | Magnetic charged system search | Kaveh et al. [73] | 2015 | 27
MOEA | Multiobjective evolutionary algorithm | Coello et al. [74] | 2005 | 3,129
MOEA/D | MOEA based on decomposition | Q. Zhang & H. Li [75] | 1997 | 48
MOGA | Multiobjective GA | Fonseca & Fleming [9] | 1993 | 668
MOGLS | Multiobjective genetic local search algorithm | Ishibuchi & Murata [76] | 1998 | 193
MOMDPSO | Multiobjective mixed-discrete PSO | Tong et al. [5] | 2014 | 1
MOPFA | Multiobjective FPA | Yang et al. [77] | 2013 | 2
MOPSO | Multiobjective PSO | Moore & Chapman [78] | 1999 | 486
MOSA | Multi-objective simulated annealing | Serafini [79] | 1992 | 253
MSA | Monkey search algorithm | Mucherino & Seref [80] | 2007 | 8
NPGA | Niched Pareto genetic algorithm | Horn et al. [81] | 1994 | 31
NSGA | Non-dominated sorting genetic algorithm | Srinivas & K. Deb [8] | 1994 | 674
PAES | Pareto-archived evolution strategy | Knowles & Corne [82] | 2000 | 18
PPES | Predator prey evolution strategy | Laumanns et al. [83] | 1998 | 16
PSO | Particle swarm optimization | Kennedy & Eberhart [33] | 1995 | 16,593
SA | Simulated annealing | Kirkpatrick et al. [30] | 1983 | 8,697
SFLA | Shuffled frog leaping algorithm | Eusuff & Lansey [84] | 2003 | 162
SO | Spiral optimization | Tamura & Yasuda [85] | 2011 | 717
SPEA | Strength Pareto evolutionary algorithm | Zitzler & Thiele [86] | 1998 | 189
SS | Scatter search | Glover [31] | 1986 | 1,435
SSO | Swallow swarm optimization | Kaveh et al. [87] | 2014 | 3
TCO | Termite colony optimization | Hedayatzadeh et al. [88] | 2010 | 16
TLBO | Teaching learning-based optimization | R.V. Rao et al. [89] | 2011 | 89
VBA | Virtual bee algorithm | Yang [90] | 2005 | 37
VEGA | Vector evaluated GA | Schaffer [13] | 1984 | 301
VOES | Vector optimized evolution strategy | Yang [59] | 2010 | 54
WBGA | Weight-based genetic algorithm | Hajela & J. Lee [11] | 1996 | 24
WCA | Water cycle algorithm | Eskandar et al. [91] | 2012 | 196
WSA | Wolf search algorithm | R. Tang et al. [92] | 2012 | 82

The literature review by Lalwani et al. (2013) [93] showed the applicability of particle swarm optimization (PSO) to multi-objective problems (MOPSO). In this algorithm, the swarm represents the set of potential solutions, and its individuals are the particles. This heuristic is inspired by the choreography of bird flocks. It was validated using standard test functions and compared against EMO algorithms. Each particle moves using the information it holds on its position, velocity, and performance; its movement is governed by updating its position and velocity. Fig. (1.3) shows applications in a broad range of activities in engineering, industry, biology, management, and the environment. The application areas of the studies were presented by Lalwani et al. (2013) [93]. They include aerospace engineering (abbreviated “Aero Eng” in Fig. (1.3)), biological sciences (“Biological”), chemical engineering (“Chem Eng”), data mining (“Data Min”), electrical engineering (“Elec Eng”), electromagnetic engineering (“Electromag”), electronics engineering (“Electronics”), environmental sciences (“Environ”), flow shop and job shop scheduling problems (“Flowshop”), image processing (“Image Proc”), industrial engineering (“Ind Eng”), mechanical engineering (“Mecha Eng”), neural networks (“Neural Netw”), robotics, and software engineering (“Software E.”).
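The position and velocity updates just described can be sketched in a few lines. The following is a minimal single-objective PSO sketch (not the MOPSO variant reviewed by Lalwani et al.); the function name `pso` and all parameter values (inertia weight `w`, cognitive and social coefficients `c1`, `c2`) are illustrative assumptions, not a reference implementation.

```python
import random

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over a box given by bounds with a basic particle swarm."""
    dim = len(bounds)
    # Initialize random positions and zero velocities.
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                lo, hi = bounds[d]
                pos[i][d] = max(lo, min(hi, pos[i][d]))  # stay inside the box
            val = f(pos[i])
            if val < pbest_val[i]:               # improve personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:              # improve global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Usage: minimize the sphere function on [-5, 5]^2.
best, value = pso(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

The sphere function is used here only as a standard test function; MOPSO additionally maintains an archive of non-dominated solutions instead of a single global best.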

Fig. (1.3) Application areas of the MOPSO algorithm in papers published in the period 2006-2012 [data and Figure adapted from Lalwani et al., 2013 [93], Figure 3, p. 45]. Note: Because of rounding, the percentages do not sum to exactly one hundred percent.

Problems studied in these applications are listed in Table 1.2 (see Lalwani et al., 2013 [93], Table 2 for more details).

Table 1.2 Type of problems solved by using MOPSO, by area [adapted from Lalwani et al. (2013) [93], Table 2, pp. 60-89].
● Aerospace engineering: flight path optimization; multi-mode resource leveling; design optimization of laminated composite plates.
● Biological sciences: molecular docking; consistent mine patterns from microarray data; biclustering of microarray data; biclusters in gene expression; cancer chemotherapy optimization; non-redundant gene markers from microarray gene expression data.
● Chemical engineering: electrochemical machining process parameter optimization; α-amylase and ethanol production from spoiled starch-rich vegetables.
● Civil engineering: selective withdrawal from thermally stratified reservoirs; reservoir operation problem; automatic calibration of a rainfall-runoff model; water resource management; operation management with sedimentation problems; dispatch wheel motor design; power system stability enhancement.
● Data mining: non-ordered data mining; modeling of classification rule mining; mining rules from large datasets; designing novel classifiers; multilayer ensemble pruning.
● Electrical engineering: dispatch of electric power under economic and environmental issues; power flow problem; congestion problem in power systems; EED problem; design of surface-mount permanent magnet motors; tuning of PID speed controllers; highly constrained EED problem; optimal capacitor sizing; power; electrical distribution system planning; photovoltaic grid-connected system design; reactive power optimization and voltage control; energy-saving power generation scheduling; operation management of fuel cell power plants; intelligent state space pruning; electrical distribution systems; scheduling optimization problem of wind power integrated systems; allocation of multi-type flexible alternating current transmission system devices; static SVC installation for power system loading margin improvement; planning of electrical distribution systems; design of a wind farm.
● Electromagnetic engineering: design of microwave absorbers; design of planar multilayered electromagnetic absorbers; design of UWB planar antennas; cogging torque minimization; design of pumping lasers of Raman amplifiers; design of an antipodal Vivaldi antenna for UWB; hysteresis model parameter identification.
● Electronics engineering: WSN; VLSI floorplanning; layout for a WSN; hybrid power filter compensator; power system shunt filter design; crossing waypoints location in air route networks; energy-efficient clustering in WSN; field-effect transistor modeling.
● Environmental sciences: land use zoning; design of soil sampling networks; land management; water-saving crop planning.
● Image processing: contrast enhancement of gray-level digital images; hyperspectral image clustering; pixel-level image fusion; image compression quality assessment.
● Industrial engineering: optimal power plant operation; bin packing; mixed-model assembly line sequencing; industrial cracking furnace; multi-criteria human resource allocation; topology optimization of compliant mechanisms; project management; inventory classification; vehicle routing; tolerance allocation; project selection; multi-loop PI controller tuning; refining process in the oil industry; grinding process; time-cost-quality tradeoff; cylindrical helical gear design; supply chain sourcing strategy design; machining optimization in titanium alloys; traveling salesman problem; optimization of the activated sludge process; optimization of heating, ventilating and air conditioning systems; tanker conceptual design; decisions of facility location and allocation.
● Mechanical engineering: drawbead design in sheet metal forming; Grashof mechanism; diesel engine control parameter optimization; optimization of multi-pass face milling; air compressor design; gain tuning of gas turbine engine fuel controllers.
● Robotics: robot motion planning; multi-robot cooperative box pushing problem.
● Software engineering: software testing for fault prediction; portfolio optimization; stock traders problems; vibration suppression of smart structures; construction of emerging markets exchange-traded funds; implementation of software environments; support vector parameter selection; motion segmentation problem; video coding.
Note: The acronyms are defined in the list of abbreviations in this book.

1.1.4. Optimum Design of Framed Structures: A Review of Literature

The optimum design of framed structures is a typical application of evolutionary algorithms. Framed structures include planar and space trusses. The trusses considered are composed of two, four, ten, 25, and 72 bars (see Table 1.3). Single-objective problems consist of minimizing the total weight (or volume) of truss structures. Multiple-objective problems may involve two to nine objectives. Two common objectives are reducing the structural weight and minimizing the deflection (i.e., the vertical displacement of structural elements under a load). Other objectives concern, e.g., the frequency of vibration, the peak stress, or the stress in each member of the truss. Table 1.3 shows the evolutionary methods used for each application from the literature. The studies are presented in chronological order over a 30-year period, 1986-2016. GAs characterized a first sub-period, 1986-2000; various evolutionary methods belong to the next sub-period, 2003-2016. Applications of multi-objective structural optimization are shown in Table 1.4, covering the period 1993-2004.
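In the multi-objective setting just described, candidate truss designs are compared by Pareto dominance rather than by a single aggregated value. The following is a minimal sketch of a dominance test and a non-dominated filter for minimization problems; the (weight, deflection) pairs are hypothetical illustrations, not data from the cited studies.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (weight, deflection) pairs for four candidate truss designs.
designs = [(10.0, 3.0), (12.0, 2.0), (11.0, 2.5), (13.0, 3.5)]
front = pareto_front(designs)
# (13.0, 3.5) is dominated by (10.0, 3.0): heavier and more deflected.
```

Metaheuristics such as NSGA-II or MOPSO apply this comparison repeatedly to drive a population toward an approximation of the Pareto-optimal front.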

Table 1.3 Applications of single-objective structural optimization in chronological order.

Application | Method | Reference | Date
● Space truss structures | Dual simplex algorithm | Adeli & Kamal [97] | 1986
● Truss structures | GA algorithm | Rajeev & Krishnamoorthy [98] | 1992
● Framed structures | GA algorithm | Cao [99] | 1996
● Design of planar and space truss structures | GA algorithm | Erbatur et al. [100] | 2000
● Sizing and layout | PSO algorithm | Schutte & Groenwold [101] | 2003
● Design stress | HS algorithm | K.S. Lee & Geem [102] | 2004
● Design of space truss | BB-BC algorithm | Erol & Eksin [40] | 2006
● Design of space trusses | BB-BC algorithm | Camp [103] | 2007
● Design of planar and space truss structures | PSO algorithm | I.J. Li et al. [104] | 2007
● Design of truss structures | PSO algorithm | Perez & Behdinan [105] | 2007
● Sizing and layout | SA algorithm | Lamberti [106] | 2008
● Design of truss structures | GA algorithm | Togan & Daloglu [107] | 2008
● Sizing of space trusses | Hybrid BB-BC/PSO algorithm | Kaveh & Talatahari [108] | 2009
● Design of space trusses | Hybrid PSO, ACO & HS algorithms | Kaveh & Talatahari [109] | 2009
● Design of truss structures | ABC algorithm | Sonmez [110] | 2011
● Design of truss structures | HS algorithm | Degertekin [111] | 2012
● Truss sizing | TLBO algorithm | Degertekin & Hayalioğlu [112] | 2013
● Sizing and layout of truss structures | RO algorithm | Kaveh & Khayatazad [113] | 2013
● Design of trusses | Modified TLBO algorithm