Meta-heuristic and Evolutionary Algorithms for Engineering Optimization

Omid Bozorg-Haddad

Description

A detailed review of a wide range of meta-heuristic and evolutionary algorithms, presented in a systematic manner and related to engineering optimization problems. This book introduces the main meta-heuristic algorithms and their applications in optimization. It describes 20 leading meta-heuristic and evolutionary algorithms and presents discussions and assessments of their performance in solving optimization problems from several fields of engineering. The book features clear and concise principles and detailed descriptions of leading methods such as the pattern search (PS) algorithm, the genetic algorithm (GA), the simulated annealing (SA) algorithm, the tabu search (TS) algorithm, ant colony optimization (ACO), and the particle swarm optimization (PSO) technique. Chapter 1 of Meta-heuristic and Evolutionary Algorithms for Engineering Optimization provides an overview of optimization and defines it by presenting examples of optimization problems in different engineering domains. Chapter 2 presents an introduction to meta-heuristic and evolutionary algorithms and links them to engineering problems. Chapters 3 to 22 are each devoted to a separate algorithm; each starts with a brief literature review of the development of the algorithm and its applications to engineering problems. The principles, steps, and execution of the algorithms are described in detail, and a pseudocode of each algorithm is presented, which serves as a guideline for coding the algorithm to solve specific applications.
This book:

* Introduces state-of-the-art meta-heuristic algorithms and their applications to engineering optimization;
* Fills a gap in the current literature by compiling and explaining the various meta-heuristic and evolutionary algorithms in a clear and systematic manner;
* Provides a step-by-step presentation of each algorithm and guidelines for practical implementation and coding of algorithms;
* Discusses and assesses the performance of meta-heuristic algorithms in multiple problems from many fields of engineering;
* Relates optimization algorithms to engineering problems employing a unifying approach.

Meta-heuristic and Evolutionary Algorithms for Engineering Optimization is a reference intended for students, engineers, researchers, and instructors in the fields of industrial engineering, operations research, optimization/mathematics, engineering optimization, and computer science.

OMID BOZORG-HADDAD, PhD, is Professor in the Department of Irrigation and Reclamation Engineering at the University of Tehran, Iran. MOHAMMAD SOLGI, M.Sc., is Teaching Assistant for M.Sc. courses at the University of Tehran, Iran. HUGO A. LOÁICIGA, PhD, is Professor in the Department of Geography at the University of California, Santa Barbara, United States of America.


Page count: 419

Publication year: 2017




Table of Contents

Cover

Title Page

Preface

About the Authors

List of Figures

1 Overview of Optimization

Summary

1.1 Optimization

1.2 Examples of the Formulation of Various Engineering Optimization Problems

1.3 Conclusion

2 Introduction to Meta‐Heuristic and Evolutionary Algorithms

Summary

2.1 Searching the Decision Space for Optimal Solutions

2.2 Definition of Terms of Meta‐Heuristic and Evolutionary Algorithms

2.3 Principles of Meta‐Heuristic and Evolutionary Algorithms

2.4 Classification of Meta‐Heuristic and Evolutionary Algorithms

2.5 Meta‐Heuristic and Evolutionary Algorithms in Discrete or Continuous Domains

2.6 Generating Random Values of the Decision Variables

2.7 Dealing with Constraints

2.8 Fitness Function

2.9 Selection of Solutions in Each Iteration

2.10 Generating New Solutions

2.11 The Best Solution in Each Algorithmic Iteration

2.12 Termination Criteria

2.13 General Algorithm

2.14 Performance Evaluation of Meta‐Heuristic and Evolutionary Algorithms

2.15 Search Strategies

2.16 Conclusion

References

3 Pattern Search

Summary

3.1 Introduction

3.2 Pattern Search (PS) Fundamentals

3.3 Generating an Initial Solution

3.4 Generating Trial Solutions

3.5 Updating the Mesh Size

3.6 Termination Criteria

3.7 User‐Defined Parameters of the PS

3.8 Pseudocode of the PS

3.9 Conclusion

References

4 Genetic Algorithm

Summary

4.1 Introduction

4.2 Mapping the Genetic Algorithm (GA) to Natural Evolution

4.3 Creating an Initial Population

4.4 Selection of Parents to Create a New Generation

4.5 Population Diversity and Selective Pressure

4.6 Reproduction

4.7 Termination Criteria

4.8 User‐Defined Parameters of the GA

4.9 Pseudocode of the GA

4.10 Conclusion

References

5 Simulated Annealing

Summary

5.1 Introduction

5.2 Mapping the Simulated Annealing (SA) Algorithm to the Physical Annealing Process

5.3 Generating an Initial State

5.4 Generating a New State

5.5 Acceptance Function

5.6 Thermal Equilibrium

5.7 Temperature Reduction

5.8 Termination Criteria

5.9 User‐Defined Parameters of the SA

5.10 Pseudocode of the SA

5.11 Conclusion

References

6 Tabu Search

Summary

6.1 Introduction

6.2 Tabu Search (TS) Foundation

6.3 Generating an Initial Searching Point

6.4 Neighboring Points

6.5 Tabu Lists

6.6 Updating the Tabu List

6.7 Attributive Memory

6.8 Aspiration Criteria

6.9 Intensification and Diversification Strategies

6.10 Termination Criteria

6.11 User‐Defined Parameters of the TS

6.12 Pseudocode of the TS

6.13 Conclusion

References

7 Ant Colony Optimization

Summary

7.1 Introduction

7.2 Mapping Ant Colony Optimization (ACO) to Ants’ Foraging Behavior

7.3 Creating an Initial Population

7.4 Allocating Pheromone to the Decision Space

7.5 Generation of New Solutions

7.6 Termination Criteria

7.7 User‐Defined Parameters of the ACO

7.8 Pseudocode of the ACO

7.9 Conclusion

References

8 Particle Swarm Optimization

Summary

8.1 Introduction

8.2 Mapping Particle Swarm Optimization (PSO) to the Social Behavior of Some Animals

8.3 Creating an Initial Population of Particles

8.4 The Individual and Global Best Positions

8.5 Velocities of Particles

8.6 Updating the Positions of Particles

8.7 Termination Criteria

8.8 User‐Defined Parameters of the PSO

8.9 Pseudocode of the PSO

8.10 Conclusion

References

9 Differential Evolution

Summary

9.1 Introduction

9.2 Differential Evolution (DE) Fundamentals

9.3 Creating an Initial Population

9.4 Generating Trial Solutions

9.5 Greedy Criteria

9.6 Termination Criteria

9.7 User‐Defined Parameters of the DE

9.8 Pseudocode of the DE

9.9 Conclusion

References

10 Harmony Search

Summary

10.1 Introduction

10.2 Inspiration of the Harmony Search (HS)

10.3 Initializing the Harmony Memory

10.4 Generating New Harmonies (Solutions)

10.5 Updating the Harmony Memory

10.6 Termination Criteria

10.7 User‐Defined Parameters of the HS

10.8 Pseudocode of the HS

10.9 Conclusion

References

11 Shuffled Frog‐Leaping Algorithm

Summary

11.1 Introduction

11.2 Mapping Memetic Evolution of Frogs to the Shuffled Frog Leaping Algorithm (SFLA)

11.3 Creating an Initial Population

11.4 Classifying Frogs into Memeplexes

11.5 Frog Leaping

11.6 Shuffling Process

11.7 Termination Criteria

11.8 User‐Defined Parameters of the SFLA

11.9 Pseudocode of the SFLA

11.10 Conclusion

References

12 Honey‐Bee Mating Optimization

Summary

12.1 Introduction

12.2 Mapping Honey‐Bee Mating Optimization (HBMO) to the Honey‐Bee Colony Structure

12.3 Creating an Initial Population

12.4 The Queen

12.5 Drone Selection

12.6 Brood (New Solution) Production

12.7 Improving Broods (New Solutions) by Workers

12.8 Termination Criteria

12.9 User‐Defined Parameters of the HBMO

12.10 Pseudocode of the HBMO

12.11 Conclusion

References

13 Invasive Weed Optimization

Summary

13.1 Introduction

13.2 Mapping Invasive Weed Optimization (IWO) to Weeds’ Biology

13.3 Creating an Initial Population

13.4 Reproduction

13.5 The Spread of Seeds

13.6 Eliminating Weeds with Low Fitness

13.7 Termination Criteria

13.8 User‐Defined Parameters of the IWO

13.9 Pseudocode of the IWO

13.10 Conclusion

References

14 Central Force Optimization

Summary

14.1 Introduction

14.2 Mapping Central Force Optimization (CFO) to Newton’s Gravitational Law

14.3 Initializing the Position of Probes

14.4 Calculation of Accelerations

14.5 Movement of Probes

14.6 Modification of Deviated Probes

14.7 Termination Criteria

14.8 User‐Defined Parameters of the CFO

14.9 Pseudocode of the CFO

14.10 Conclusion

References

15 Biogeography‐Based Optimization

Summary

15.1 Introduction

15.2 Mapping Biogeography‐Based Optimization (BBO) to Biogeography Concepts

15.3 Creating an Initial Population

15.4 Migration Process

15.5 Mutation

15.6 Termination Criteria

15.7 User‐Defined Parameters of the BBO

15.8 Pseudocode of the BBO

15.9 Conclusion

References

16 Firefly Algorithm

Summary

16.1 Introduction

16.2 Mapping the Firefly Algorithm (FA) to the Flashing Characteristics of Fireflies

16.3 Creating an Initial Population

16.4 Attractiveness

16.5 Distance and Movement

16.6 Termination Criteria

16.7 User‐Defined Parameters of the FA

16.8 Pseudocode of the FA

16.9 Conclusion

References

17 Gravity Search Algorithm

Summary

17.1 Introduction

17.2 Mapping the Gravity Search Algorithm (GSA) to the Law of Gravity

17.3 Creating an Initial Population

17.4 Evaluation of Particle Masses

17.5 Updating Velocities and Positions

17.6 Updating Newton’s Gravitational Factor

17.7 Termination Criteria

17.8 User‐Defined Parameters of the GSA

17.9 Pseudocode of the GSA

17.10 Conclusion

References

18 Bat Algorithm

Summary

18.1 Introduction

18.2 Mapping the Bat Algorithm (BA) to the Behavior of Microbats

18.3 Creating an Initial Population

18.4 Movement of Virtual Bats

18.5 Local Search and Random Flying

18.6 Loudness and Pulse Emission

18.7 Termination Criteria

18.8 User‐Defined Parameters of the BA

18.9 Pseudocode of the BA

18.10 Conclusion

References

19 Plant Propagation Algorithm

Summary

19.1 Introduction

19.2 Mapping the Natural Process to the Plant Propagation Algorithm (PPA)

19.3 Creating an Initial Population of Plants

19.4 Normalizing the Fitness Function

19.5 Propagation

19.6 Elimination of Extra Solutions

19.7 Termination Criteria

19.8 User‐Defined Parameters of the PPA

19.9 Pseudocode of the PPA

19.10 Conclusion

References

20 Water Cycle Algorithm

Summary

20.1 Introduction

20.2 Mapping the Water Cycle Algorithm (WCA) to the Water Cycle

20.3 Creating an Initial Population

20.4 Classification of Raindrops

20.5 Streams Flowing to the Rivers or Sea

20.6 Evaporation

20.7 Raining Process

20.8 Termination Criteria

20.9 User‐Defined Parameters of the WCA

20.10 Pseudocode of the WCA

20.11 Conclusion

References

21 Symbiotic Organisms Search

Summary

21.1 Introduction

21.2 Mapping Symbiotic Relations to the Symbiotic Organisms Search (SOS)

21.3 Creating an Initial Ecosystem

21.4 Mutualism

21.5 Commensalism

21.6 Parasitism

21.7 Termination Criteria

21.8 Pseudocode of the SOS

21.9 Conclusion

References

22 Comprehensive Evolutionary Algorithm

Summary

22.1 Introduction

22.2 Fundamentals of the Comprehensive Evolutionary Algorithm (CEA)

22.3 Generating an Initial Population of Solutions

22.4 Selection

22.5 Reproduction

22.6 Roles of Operators

22.7 Input Data to the CEA

22.8 Termination Criteria

22.9 Pseudocode of the CEA

22.10 Conclusion

References

Wiley Series in Operations Research and Management Science

Index

End User License Agreement

List of Tables

Chapter 02

Table 2.1 The input data of the example problems presented in Chapter 1.

Table 2.2 The decision variables of the example problems presented in Chapter 1.

Table 2.3 The state variables of the example problems presented in Chapter 1.

Table 2.4 The objective function of the example problems presented in Chapter 1.

Table 2.5 The constraints of the example problems presented in Chapter 1.

Table 2.6 Recommended reporting of the solutions calculated in k runs of an algorithm that solves a minimization problem.

Chapter 03

Table 3.1 The characteristics of the PS.

Chapter 04

Table 4.1 The characteristics of the GA.

Chapter 05

Table 5.1 The characteristics of the SA.

Chapter 06

Table 6.1 The characteristics of the TS.

Chapter 07

Table 7.1 The characteristics of the ACO.

Chapter 08

Table 8.1 The characteristics of the PSO.

Chapter 09

Table 9.1 The characteristics of the DE.

Chapter 10

Table 10.1 The characteristics of the HS.

Chapter 11

Table 11.1 The characteristics of the SFLA.

Chapter 12

Table 12.1 The characteristics of the HBMO.

Chapter 13

Table 13.1 The characteristics of the IWO.

Chapter 14

Table 14.1 The characteristics of the CFO.

Chapter 15

Table 15.1 The characteristics of the BBO.

Chapter 16

Table 16.1 The characteristics of the FA.

Chapter 17

Table 17.1 The characteristics of the GSA.

Chapter 18

Table 18.1 The characteristics of the BA.

Chapter 19

Table 19.1 The characteristics of the PPA.

Chapter 20

Table 20.1 The characteristics of the WCA.

Chapter 21

Table 21.1 The characteristics of the SOS.

Chapter 22

Table 22.1 The characteristics of the CEA.

Table 22.2 Types of crossover operators in the CEA.

Table 22.3 Types of mutation operators in the CEA.

Table 22.4 List of the parameters of the CEA.

List of Illustrations

Chapter 01

Figure 1.1 Decision space of a constrained two‐dimensional optimization problem.

Figure 1.2 Schematic of global and local optimums in a one‐dimensional maximizing optimization problem.

Figure 1.3 Different types of decision spaces: (a) maximization problem with single‐modal surface and one global optimum; (b) maximization problem with multimodal surface that has one global optimum.

Figure 1.4 Demonstration of near optima in a one‐dimensional maximizing optimization problem.

Figure 1.5 Compound gear train made of four gears.

Figure 1.6 Schematic of a two‐bar truss.

Figure 1.7 Schematic of a hydropower dam.

Chapter 02

Figure 2.1 Sampling grid on a two‐dimensional decision space.

Figure 2.2 General schematic of a simple algorithm; K denotes the counter of iterations.

Figure 2.3 Diagram depicting the relation between a simulation model and an optimization algorithm.

Figure 2.4 The main components of the optimization by meta‐heuristic and evolutionary algorithms.

Figure 2.5 Different solutions in a two‐dimensional decision space.

Figure 2.6 Selection probability of a set of solutions 1–10 of a hypothetical maximization problem.

Figure 2.7 The flowchart of the general algorithm.

Figure 2.8 Convergence history of an optimization algorithm toward the best solution in a minimization problem.

Figure 2.9 Convergence of an optimization algorithm in which the best solution is not always transferred to the next iteration during the search in a minimization problem.

Figure 2.10 Convergence of different runs of an optimization algorithm toward near‐optimal solutions of a minimization problem.

Figure 2.11 Convergence of different runs of an optimization algorithm in a minimization problem.

Chapter 03

Figure 3.1 The flowchart of the PS.

Figure 3.2 Meshes generated by the GPS and MADS methods in a two‐dimensional decision space.

Chapter 04

Figure 4.1 The flowchart of the GA.

Figure 4.2 Demonstration of a roulette wheel. F = fitness function value; P = probability of selecting a solution.

Figure 4.3 Ranking chromosomes (or solutions) according to the fitness function (F) in a maximizing problem.

Figure 4.4 The process of constituting a new generation from the previous generation.

Figure 4.5 Different approaches of crossover: (a) one‐point crossover, (b) two‐point crossover, and (c) uniform crossover.

Figure 4.6 An example of the mutation operator.

Chapter 05

Figure 5.1 The flowchart of the SA.

Chapter 06

Figure 6.1 The flowchart of the TS.

Figure 6.2 The decision space of a two‐dimensional optimization problem.

Figure 6.3 Illustration of steps based on recency‐based memory in a two‐dimensional decision space.

Chapter 07

Figure 7.1 A double‐bridge experiment with pathways of unequal length.

Figure 7.2 The flowchart of the ACO.

Figure 7.3 Representation of a solution in the ACO; there are M such solutions.

Chapter 08

Figure 8.1 The flowchart of the PSO.

Figure 8.2 Concepts of the best individual position in a two‐dimensional maximization problem.

Figure 8.3 Concept of the global best position in a maximization problem.

Chapter 09

Figure 9.1 The flowchart of the DE.

Chapter 10

Figure 10.1 The flowchart of the HS.

Figure 10.2 Generating a new solution based on memory strategy.

Chapter 11

Figure 11.1 The flowchart of the SFLA.

Figure 11.2 Sorting frogs according to the fitness function F(X) in a maximizing problem.

Figure 11.3 Assigning frogs to different memeplexes; Z = number of memeplexes and Y = number of frogs assigned to each memeplex.

Figure 11.4 The representation of a memeplex and a submemeplex within the entire population.

Chapter 12

Figure 12.1 The flowchart of the HBMO algorithm.

Figure 12.2 Determining the queen and trial solutions according to the fitness function F(X).

Figure 12.3 Different crossover approaches: (a) one‐point crossover, (b) two‐point crossover, and (c) uniform crossover.

Figure 12.4 Performance of mutation operator.

Chapter 13

Figure 13.1 The flowchart of the IWO.

Figure 13.2 Number of seeds for each weed with respect to the fitness value.

Chapter 14

Figure 14.1 The flowchart of the CFO.

Figure 14.2 Distribution of initial probes in the CFO algorithm.

Chapter 15

Figure 15.1 Species immigration and emigration pattern in a habitat.

Figure 15.2 The flowchart of the BBO.

Figure 15.3 Comparison of two solutions for one problem.

Figure 15.4 Ranking habitats (solutions) according to their fitness values in a minimization problem.

Chapter 16

Figure 16.1 The flowchart of the FA.

Chapter 17

Figure 17.1 Gravity force between different particles; Force1 is the resultant force on Mass1.

Figure 17.2 The flowchart of the GSA.

Chapter 18

Figure 18.1 The flowchart of the BA.

Chapter 19

Figure 19.1 The flowchart of the PPA.

Chapter 20

Figure 20.1 The flowchart of the WCA for a minimization problem.

Figure 20.2 Classification of raindrops and relations between the sea, rivers, and streams according to their fitness values (F(X)) for a minimization problem where M = 10 and R = 3.

Figure 20.3 Schematic of a stream flowing toward a river at distance d: X = existing stream; X′ = new stream.

Chapter 21

Figure 21.1 The flowchart of the SOS.

Chapter 22

Figure 22.1 The flowchart of the CEA.

Figure 22.2 Illustration of a roulette wheel.

Figure 22.3 Types of one‐point cut crossover in the CEA: (a) Type 1 (new solution (1)) and type 2 (new solution (2)), (b) Type 3 (new solution (1)) and type 4 (new solution (2)), (c) Type 5 (new solution (1)) and type 6 (new solution (2)), (d) Type 7 (new solution (1)) and type 8 (new solution (2)), and (e) Type 9 (new solution (1)) and type 10 (new solution (2)).

Figure 22.4 Types of two‐point cut crossover in the CEA: (a) Type 11 (new solution (1)) and type 12 (new solution (2)), (b) Type 13 (new solution (1)) and type 14 (new solution (2)), (c) Type 15 (new solution (1)) and type 16 (new solution (2)), (d) Type 17 (new solution (1)) and type 18 (new solution (2)), and (e) Type 19 (new solution (1)) and type 20 (new solution (2)).

Figure 22.5 Types of overall crossover in CEA: (a) Type 21, (b) Type 22, and (c) Type 23.

Guide

Cover

Table of Contents

Begin Reading


Wiley Series in Operations Research and Management Science

A complete list of the titles in this series appears at the end of this volume

Meta‐Heuristic and Evolutionary Algorithms for Engineering Optimization

 

Omid Bozorg‐Haddad

University of Tehran, Iran

Mohammad Solgi

University of Tehran, Iran

Hugo A. Loáiciga

University of California, Santa Barbara, USA

This edition first published 2017. © 2017 John Wiley & Sons, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Omid Bozorg‐Haddad, Mohammad Solgi, and Hugo A. Loáiciga to be identified as the authors of this work has been asserted in accordance with law.

Registered Office: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

Editorial Office: 111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty

The publisher and the authors make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of fitness for a particular purpose. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for every situation. In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of experimental reagents, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each chemical, piece of equipment, reagent, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. The fact that an organization or website is referred to in this work as a citation and/or potential source of further information does not mean that the author or the publisher endorses the information the organization or website may provide or recommendations it may make. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. No warranty may be created or extended by any promotional statements for this work. Neither the publisher nor the author shall be liable for any damages arising herefrom.

Library of Congress Cataloguing‐in‐Publication Data

Names: Bozorg‐Haddad, Omid, 1974– author. | Solgi, Mohammad, 1989– author. | Loaiciga, Hugo A., author.
Title: Meta‐heuristic and evolutionary algorithms for engineering optimization / Omid Bozorg‐Haddad, Mohammad Solgi, Hugo A. Loáiciga.
Description: Hoboken, NJ : John Wiley & Sons, 2017. | Series: Wiley series in operations research and management science | Includes bibliographical references and index.
Identifiers: LCCN 2017017765 (print) | LCCN 2017026412 (ebook) | ISBN 9781119387077 (pdf) | ISBN 9781119387060 (epub) | ISBN 9781119386995 (cloth)
Subjects: LCSH: Mathematical optimization. | Engineering design–Mathematics.
Classification: LCC QA402.5 (ebook) | LCC QA402.5 .B695 2017 (print) | DDC 620/.0042015196–dc23
LC record available at https://lccn.loc.gov/2017017765

Cover Design: Wiley
Cover Images: (Top Image) © Georgina198/Getty Images; (Bottom Image) © RomanOkopny/Getty Images

Preface

Engineers search for designs of new systems that perform optimally and are cost-effective, or for the optimal operation and rehabilitation of existing systems. Design and operation usually involve the calibration of models that describe physical systems. The tasks of design, operation, and model calibration can be approached systematically through optimization. Optimization is the selection of the best elements or actions from a set of feasible alternatives. More precisely, optimization consists of finding the set of variables that produces the best values of the objective functions, where the feasible domain of the variables is restricted by constraints.
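To make the definition concrete, consider a small hypothetical example (not one of the book's case studies): minimize f(x, y) = x² + y² subject to the constraint x + y ≥ 1. A brute-force scan over a grid of candidate solutions, keeping only feasible ones, locates the optimum at x = y = 0.5:

```python
# Hypothetical example: minimize f(x, y) = x^2 + y^2 subject to x + y >= 1.
# The exact optimum is x = y = 0.5 with f = 0.5.
def objective(x, y):
    return x**2 + y**2

def feasible(x, y):
    return x + y >= 1.0  # the constraint restricts the decision space

# Scan a grid of candidate decision variables and keep the best feasible one.
steps = [i / 100.0 for i in range(-200, 201)]
best = None
for x in steps:
    for y in steps:
        if feasible(x, y) and (best is None or objective(x, y) < best[0]):
            best = (objective(x, y), x, y)

print(best)  # (0.5, 0.5, 0.5): objective value, x, y
```

Grid scans like this become intractable as the number of decision variables grows, which is precisely why the algorithms presented in this book sample the decision space more intelligently.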

Meta‐heuristic and evolutionary algorithms, many of which are inspired by natural systems, are optimization methods commonly employed to calculate good approximate solutions to problems that are difficult or impossible to solve with other optimization techniques such as linear programming, nonlinear programming, integer programming, and dynamic programming. They are problem-independent methods of wide applicability that have proven effective in solving a wide range of complex, real-world engineering problems, and they have become popular tools for engineering optimization.
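The pattern these algorithms share can be sketched in a few lines. The sketch below is an illustrative toy, not any specific algorithm from the book: it minimizes f(x) = (x − 3)² by repeatedly perturbing a current solution and greedily accepting improvements.

```python
import random

# Illustrative sketch of the generic meta-heuristic loop (not a specific
# algorithm from the book): keep one current solution, propose random
# perturbations, and accept a candidate only if it improves the objective.
def f(x):
    return (x - 3.0) ** 2  # convex toy objective; optimum at x = 3

random.seed(42)                  # fixed seed for reproducibility
x = random.uniform(-10.0, 10.0)  # random initial solution in the domain
for _ in range(10_000):          # iteration count as termination criterion
    candidate = x + random.gauss(0.0, 0.5)  # perturb the current solution
    if f(candidate) < f(x):                 # greedy acceptance rule
        x = candidate

print(round(x, 3))  # close to the optimum x = 3
```

Real meta-heuristics differ in how they generate candidates (populations, memory, attraction rules) and in when they accept worse solutions to escape local optima; those distinctions are exactly what the individual chapters develop.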

Yet, despite the frequent application of meta‐heuristic and evolutionary algorithms, there is at present no reference that presents and explains them in a clear, systematic, and comprehensive manner. Several bibliographical sources deal with engineering optimization and the application of meta‐heuristic and evolutionary algorithms. However, their focus is largely on the results of applying these algorithms and less on the basic concepts on which the algorithms are founded. In view of this, a comprehensive, unified, and insightful overview of these algorithms is timely and would be welcomed by those who seek to learn the principles and ways to apply meta‐heuristic and evolutionary algorithms.

This book fills the cited gap by presenting the best‐known meta‐heuristic and evolutionary algorithms, those whose performance has been tested in many engineering domains. Chapter 1 provides an overview of optimization and illustrates its application to engineering problems in various specialties. Chapter 2 presents an introduction to meta‐heuristic and evolutionary algorithms and their relation to engineering problems. Chapters 3–22 are dedicated to pattern search (PS), genetic algorithm (GA), simulated annealing (SA), tabu search (TS), ant colony optimization (ACO), particle swarm optimization (PSO), differential evolution (DE), harmony search (HS), shuffled frog‐leaping algorithm (SFLA), honey‐bee mating optimization (HBMO), invasive weed optimization (IWO), central force optimization (CFO), biogeography‐based optimization (BBO), firefly algorithm (FA), gravity search algorithm (GSA), bat algorithm (BA), plant propagation algorithm (PPA), water cycle algorithm (WCA), symbiotic organisms search (SOS), and comprehensive evolutionary algorithm (CEA), respectively. The order of the chapters follows the chronological appearance of the algorithms, with the most recent ones receiving the higher chapter numbers. Each chapter describes a specific algorithm and starts with a brief literature review of its development and subsequent modifications since its inception. This is followed by the presentation of the basic concept on which the algorithm is based and its mathematical statement. The workings of the algorithm are then described in detail. Each chapter closes with a pseudocode of the algorithm that constitutes an insightful and sufficient guideline for coding the algorithm to solve specific applications.

Several of the algorithms reviewed in this book were developed decades ago, and some have experienced modifications and hybridization with other algorithms. This presentation is concerned primarily with the original version of each algorithm, yet it provides references that are concerned with modifications to the algorithms.

This book was written for graduate students, researchers, educators, and practitioners with interests in the field of engineering optimization. The format and contents chosen are intended to satisfy the needs of beginners and experts seeking a unifying, complete, and clear presentation of meta‐heuristic and evolutionary algorithms.

Omid Bozorg‐HaddadMohammad SolgiHugo A. Loáiciga

About the Authors

Omid Bozorg‐Haddad

He is a professor in the Department of Irrigation and Reclamation Engineering at the University of Tehran, Iran (e-mail: [email protected]). His teaching and research interests include water resources and environmental systems analysis, planning, and management, as well as the application of optimization algorithms in water-related systems. He has published more than 200 articles in peer-reviewed journals and 100 papers in conference proceedings, and has supervised more than 50 M.Sc. and Ph.D. students.

Mohammad Solgi

He received his M.Sc. degree in hydrological engineering from the University of Tehran, Iran (e-mail: [email protected]). His research interests include the development of mathematical and computational techniques and their application in hydrology. He has published several articles in peer-reviewed journals and received awards for his work.

Hugo A. Loáiciga

He is a professor in the Department of Geography, University of California (Santa Barbara), United States (E‐mail: [email protected]). His teaching and research interests include hydrology and water resources systems. He has published more than 250 articles in peer‐reviewed journals and received numerous awards for his work.

List of Figures

Figure 1.1

Decision space of a constrained two‐dimensional optimization problem.

Figure 1.2

Schematic of global and local optimums in a one‐dimensional maximizing optimization problem.

Figure 1.3

Different types of decision spaces: (a) maximization problem with single‐modal surface and one global optimum; (b) maximization problem with multimodal surface that has one global optimum.

Figure 1.4

Demonstration of near optima in a one‐dimensional maximizing optimization problem.

Figure 1.5

Compound gear train made of four gears.

Figure 1.6

Schematic of a two‐bar truss.

Figure 1.7

Schematic of a hydropower dam.

Figure 2.1

Sampling grid on a two‐dimensional decision space.

Figure 2.2

General schematic of a simple algorithm; K denotes the counter of iterations.

Figure 2.3

Diagram depicting the relation between a simulation model and an optimization algorithm.

Figure 2.4

The main components of the optimization by meta‐heuristic and evolutionary algorithms.

Figure 2.5

Different solutions in a two‐dimensional decision space.

Figure 2.6

Selection probability of a set of solutions 1–10 of a hypothetical maximization problem.

Figure 2.7

The flowchart of the general algorithm.

Figure 2.8

Convergence history of an optimization algorithm toward the best solution in a minimization problem.

Figure 2.9

Convergence of an optimization algorithm in which the best solution is not always transferred to the next iteration during the search in a minimization problem.

Figure 2.10

Convergence of different runs of an optimization algorithm toward near‐optimal solutions of a minimization problem.

Figure 2.11

Convergence of different runs of an optimization algorithm in a minimization problem.

Figure 3.1

The flowchart of the PS.

Figure 3.2

Meshes generated by the GPS and MADS methods in a two‐dimensional decision space.

Figure 4.1

The flowchart of the GA.

Figure 4.2

Demonstration of a roulette wheel. F = Fitness function value; P = probability of selecting a solution.

Figure 4.3

Ranking chromosomes (or solutions) according to the fitness function (F) in a maximizing problem.

Figure 4.4

The process of constituting a new generation from the previous generation.

Figure 4.5

Different approaches of crossover: (a) one‐point crossover, (b) two‐point crossover, and (c) uniform crossover.

Figure 4.6

An example of the mutation operator.

Figure 5.1

The flowchart of the SA.

Figure 6.1

The flowchart of the TS.

Figure 6.2

The decision space of a two‐dimensional optimization problem.

Figure 6.3

Illustration of steps based on recency‐based memory in a two‐dimensional decision space.

Figure 7.1

A double‐bridge experiment with pathways of unequal length.

Figure 7.2

The flowchart of the ACO.

Figure 7.3

Representation of a solution in the ACO; there are M such solutions.

Figure 8.1

The flowchart of the PSO.

Figure 8.2

Concepts of the best individual position in a two‐dimensional maximization problem.

Figure 8.3

Concept of the global best position in a maximization problem.

Figure 9.1

The flowchart of the DE.

Figure 10.1

The flowchart of the HS.

Figure 10.2

Generating a new solution based on memory strategy.

Figure 11.1

The flowchart of the SFLA.

Figure 11.2

Sorting frogs according to the fitness function F(X) in a maximizing problem.

Figure 11.3

Assigning frogs to different memeplexes; Z = number of memeplexes and Y = number of frogs assigned to each memeplex.

Figure 11.4

The representation of a memeplex and a submemeplex within the entire population.

Figure 12.1

The flowchart of the HBMO algorithm.

Figure 12.2

Determining the queen and trial solutions according to the fitness function F(X).

Figure 12.3

Different crossover approaches: (a) one‐point crossover, (b) two‐point crossover, and (c) uniform crossover.

Figure 12.4

Performance of mutation operator.

Figure 13.1

The flowchart of the IWO.

Figure 13.2

Number of seeds for each weed with respect to the fitness value.

Figure 14.1

The flowchart of the CFO.

Figure 14.2

Distribution of initial probes in the CFO algorithm.

Figure 15.1

Species immigration and emigration pattern in a habitat.

Figure 15.2

The flowchart of the BBO.

Figure 15.3

Comparison of two solutions for one problem.

Figure 15.4

Ranking habitats (solutions) according to their fitness values in a minimization problem.

Figure 16.1

The flowchart of the FA.

Figure 17.1

Gravity force between different particles; Force1 is the resultant force on Mass1.

Figure 17.2

The flowchart of the GSA.

Figure 18.1

The flowchart of the BA.

Figure 19.1

The flowchart of the PPA.

Figure 20.1

The flowchart of the WCA for a minimization problem.

Figure 20.2

Classification of raindrops and relations between the sea, rivers, and streams according to their fitness values (F(X)) for a minimization problem where M = 10 and R = 3.

Figure 20.3

Schematic of stream flowing toward a river at distance d: X: existing stream; X′: new stream.

Figure 21.1

The flowchart of the SOS.

Figure 22.1

The flowchart of the CEA.

Figure 22.2

Illustration of a roulette wheel.

Figure 22.3

Types of one‐point cut crossover in the CEA: (a) Type 1 (new solution (1)) and type 2 (new solution (2)), (b) Type 3 (new solution (1)) and type 4 (new solution (2)), (c) Type 5 (new solution (1)) and type 6 (new solution (2)), (d) Type 7 (new solution (1)) and type 8 (new solution (2)), and (e) Type 9 (new solution (1)) and type 10 (new solution (2)).

Figure 22.4

Types of two‐point cut crossover in the CEA: (a) Type 11 (new solution (1)) and type 12 (new solution (2)), (b) Type 13 (new solution (1)) and type 14 (new solution (2)), (c) Type 15 (new solution (1)) and type 16 (new solution (2)), (d) Type 17 (new solution (1)) and type 18 (new solution (2)), and (e) Type 19 (new solution (1)) and type 20 (new solution (2)).

Figure 22.5

Types of overall crossover in CEA: (a) Type 21, (b) Type 22, and (c) Type 23.

Meta‐heuristic and evolutionary algorithms are problem‐independent optimization techniques. They are effective in solving a wide range of real‐world and complex engineering problems. This book presents and explains the most important meta‐heuristic and evolutionary algorithms known to date in a clear, systematic, and comprehensive manner. The algorithms presented in this book are pattern search (PS), genetic algorithm (GA), simulated annealing (SA), tabu search (TS), ant colony optimization (ACO), particle swarm optimization (PSO), differential evolution (DE), harmony search (HS), shuffled frog‐leaping algorithm (SFLA), honey‐bee mating optimization (HBMO), invasive weed optimization (IWO), central force optimization (CFO), biogeography‐based optimization (BBO), firefly algorithm (FA), gravity search algorithm (GSA), bat algorithm (BA), plant propagation algorithm (PPA), water cycle algorithm (WCA), symbiotic organisms search (SOS), and comprehensive evolutionary algorithm (CEA). These algorithms are presented in a consistent and systematic format, explaining their applications to engineering optimization problems. This book provides students, researchers, and teachers with a comprehensive exposition of meta‐heuristic and evolutionary algorithms with sufficient detail to understand their principles and apply them to specific problems.

Keywords:

Optimization

Engineering optimization

Meta‐heuristic search

Evolutionary algorithms

Nature‐inspired optimization algorithms

1Overview of Optimization

Summary

This chapter defines optimization and its basic concepts. It provides examples of various engineering optimization problems.

1.1 Optimization

Engineers are commonly confronted with the tasks of designing and operating systems to meet or surpass specified goals while satisfying numerous constraints imposed on the design and operation. Optimization is the organized search for such designs and operating modes. It determines the set of actions or elements that must be implemented to achieve optimized systems. In the simplest case, optimization seeks the maximum or minimum value of an objective function corresponding to variables defined in a feasible range or space. More generally, optimization is the search for the set of variables that produces the best values of one or more objective functions while complying with multiple constraints. A single‐objective optimization model embodies several mathematical expressions including an objective function and constraints as follows:

(1.1) Optimize f(X), X = (x1, x2, …, xN)

subject to

(1.2) gj(X) ≤ bj, j = 1, 2, …, m
(1.3) xi(L) ≤ xi ≤ xi(U), i = 1, 2, …, N

in which f(X) = the objective function; X = a set of decision variables xi that constitutes a possible solution to the optimization problem; xi = the ith decision variable; N = the number of decision variables, which determines the dimension of the optimization problem; gj(X) = the jth constraint; bj = the constant of the jth constraint; m = the total number of constraints; xi(L) = the lower bound of the ith decision variable; and xi(U) = the upper bound of the ith decision variable.
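As an illustration, the model above can be sketched in code. The quadratic objective, the single constraint, and the bounds below are illustrative assumptions, not taken from the text.

```python
# Sketch of the single-objective model of Equations (1.1)-(1.3).
# The objective, constraint, and bounds are illustrative assumptions.

def objective(x):
    # f(X): a simple quadratic chosen for illustration
    return sum(xi ** 2 for xi in x)

def constraints_ok(x):
    # g_j(X) <= b_j for every constraint j (here a single one)
    g = [x[0] + x[1]]   # g_1(X) = x1 + x2
    b = [10.0]          # b_1
    return all(gj <= bj for gj, bj in zip(g, b))

def bounds_ok(x, lower, upper):
    # x_i(L) <= x_i <= x_i(U) for every decision variable i
    return all(lo <= xi <= up for xi, lo, up in zip(x, lower, upper))

x = [1.0, 2.0]
feasible = constraints_ok(x) and bounds_ok(x, [-5.0, -5.0], [5.0, 5.0])
```

A candidate solution is feasible only when both checks pass; an optimizer then compares objective function values among feasible candidates.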

1.1.1 Objective Function

The objective function constitutes the goal of an optimization problem. That goal is maximized or minimized by choosing variables, or decision variables, that satisfy all problem constraints. The desirability of a set of variables as a possible solution to an optimization problem is measured by the value of the objective function corresponding to that set of variables.

Some of the algorithms reviewed in this book are explained with optimization problems that involve maximizing the objective function. Others do so with optimization problems that minimize the objective function. It is useful to keep in mind that a maximization (or minimization) problem can be readily converted, if desired, to a minimization (or maximization) problem by multiplying its objective function by −1.
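For instance, a maximization problem can be handed to a minimization routine by negating its objective; the sample objective below is an illustrative assumption.

```python
# Converting maximization to minimization by multiplying f by -1.
# The objective is an illustrative assumption (its peak is at x = 3).

def f(x):
    return -(x - 3) ** 2 + 4   # a maximization objective

def f_neg(x):
    return -f(x)               # minimizing f_neg is equivalent to maximizing f

best_x = min(range(7), key=f_neg)  # coarse search over the integers 0..6
```

Here best_x recovers the maximizer of f even though only a minimization step was used.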

1.1.2 Decision Variables

The decision variables determine the value of the objective function. In each optimization problem we search for the decision variables that yield the best value of the objective function or optimum.

In some optimization problems the decision variables range continuously between a lower bound and an upper bound. Such decision variables form a continuous decision space. For example, choosing adequate proportions of different substances to make a mixture involves variables in a continuous decision space, in which each proportion can take any value in the range [0,1]. On the other hand, there are optimization problems in which the decision variables are discrete. Discrete decision variables take only specific values between a lower bound and an upper bound; integer values are one example. For instance, the number of wells in a groundwater withdrawal problem must be an integer. Binary variables are also of the discrete type. Typically, the value 1 implies taking one type of action, while the value 0 implies that no action is taken. For example, a decision variable equal to 1 could mean building a water treatment plant at a site, while a value of 0 means that the plant would not be constructed at that site. Optimization problems involving continuous decision variables are called continuous problems, and those defined in terms of discrete decision variables are known as discrete problems. There are, furthermore, optimization problems that involve both discrete and continuous variables. One example is deciding whether or not to build a facility at a certain location and, if so, what its capacity ought to be. The siting variable is of the binary type (0 or 1), whereas the capacity is a real, continuous variable. Such an optimization problem is said to be of mixed type.
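The mixed-type siting example can be represented as follows; the dictionary keys and cost coefficients are hypothetical, chosen only for illustration.

```python
# A mixed-type solution: a binary siting decision plus a continuous capacity.
# The cost coefficients below are hypothetical, for illustration only.

def cost(sol):
    fixed, per_unit = 100.0, 2.0
    # If the plant is not built (build = 0), no cost is incurred at all.
    return sol["build"] * (fixed + per_unit * sol["capacity"])

built = {"build": 1, "capacity": 42.5}    # build the plant, capacity 42.5
skipped = {"build": 0, "capacity": 42.5}  # capacity is irrelevant when build = 0
```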

1.1.3 Solutions of an Optimization Problem

Each objective function is expressed in terms of decision variables. When there is only one decision variable, the optimization problem is said to be one‐dimensional, while optimization problems with two or more decision variables are called N‐dimensional. The solutions of an N‐dimensional optimization problem are one or more sets of decision variables, each set containing N values.

1.1.4 Decision Space

The set of decision variables that satisfies the constraints of an optimization problem is called the feasible decision space. In an N‐dimensional problem, each possible solution is a vector with N elements, each of which is a decision variable. Optimization algorithms search for a point (i.e., a vector of decision variables) or points (i.e., more than one vector of decision variables) in the decision space that optimizes the objective function.

1.1.5 Constraints or Restrictions

Each optimization problem may have two types of constraints. Some constraints directly restrict the possible value of the decision variables, such as a decision variable x being a positive real number, x > 0, or analogous to Equation (1.3). Another form of constraint is written in terms of formulas, such as when two decision variables x1 and x2 are restricted to the space x1 + x2 ≤ b or analogous to Equation (1.2). The goal of an optimization problem is to find an optimal feasible solution that satisfies all the constraints and yields the best value of the objective function among all feasible solutions. Figure 1.1 depicts a constrained two‐dimensional decision space with infeasible and feasible spaces.

Figure 1.1 Decision space of a constrained two‐dimensional optimization problem.

The set of all feasible solutions constitutes the feasible decision space, and the infeasible decision space is made up of all the infeasible solutions. Evidently, the optimal solution must lie in the feasible space.

1.1.6 State Variables

State variables are dependent variables whose values change as the decision variables change their values. State variables are important in engineering problems because they describe the system being modeled and the objective function and constraints are evaluated employing their values. As an example, consider an optimization problem whose objective is to maximize hydropower generation by operating a reservoir. The decision variable is the amount of daily water release passing through turbines. The state variable is the amount of water stored in the reservoir, which is affected by the water released through turbines according to an equation of water balance that also involves water inflow to the reservoir, evaporation from the reservoir, water diversions or imports to the reservoir, water released from the reservoir bypassing turbines, and other water fluxes that change the amount of reservoir storage.

1.1.7 Local and Global Optima

It has been established that a well‐defined optimization problem has a well‐defined decision space. Each point of the decision space defines a value of the objective function. A local optimum refers to a solution that has the best objective function in its neighborhood. In a one‐dimensional optimization problem, a feasible decision variable X* is a local optimum of a maximization problem if the following condition holds:

(1.4) f(X*) ≥ f(X) for all X such that |X − X*| ≤ ɛ

In a minimization problem the local‐optimum condition becomes

(1.5) f(X*) ≤ f(X) for all X such that |X − X*| ≤ ɛ

where X* = a local optimum and ɛ = the length of the neighborhood about the local optimum X*. A local optimum is limited to a neighborhood of the decision space, and it might not be the best solution over the entire decision space.

A global optimum is the best solution in the decision space. Some optimization problems may have more than one global optimum, in fact an infinite number of them; these situations arise commonly in linear programming problems. In that case, all the global optima produce the same value of the objective function, and no decision variables produce a better objective function value than the global optimum. In a one‐dimensional optimization problem with decision variable X and objective function f(X), the value X* is a global optimum of a maximization problem if, for any feasible decision variable X, the following is true:

(1.6) f(X*) ≥ f(X) for all feasible X

In a minimization problem we would have

(1.7) f(X*) ≤ f(X) for all feasible X

Figure 1.2 illustrates global and local optima for a one‐dimensional maximization problem.

Figure 1.2 Schematic of global and local optimums in a one‐dimensional maximizing optimization problem.

L1, L2, and L3 in Figure 1.2 are local optima, and G denotes the global optimum with the largest value of the objective function. The decision space may be single modal or multimodal. In a single‐modal surface there is only one extreme point, while a multimodal surface has several extremes. In a single‐modal problem, the single local optimum is also the global optimum. On the other hand, a multimodal problem may include several local and global optima. However, the decision variables that produce a global optimum must all produce the same value of the global optimum, by definition. Figure 1.3 illustrates the surfaces of one‐dimensional optimization problems with single‐modal and multimodal decision spaces, each having a single global optimum.
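The distinction between local and global optima can be demonstrated with a simple hill climb on a two-peaked function; the function, the starting points, and the step size below are all illustrative assumptions.

```python
import math

# A two-peaked objective with a local optimum near x = 1 and the global
# optimum near x = 4; both the function and the step size are assumptions.
def f(x):
    return math.exp(-(x - 4) ** 2) + 0.6 * math.exp(-(x - 1) ** 2)

def hill_climb(x, step=0.01):
    # Move uphill until neither neighboring point improves f (a local search).
    while True:
        best = max((x, x - step, x + step), key=f)
        if best == x:
            return x
        x = best

local = hill_climb(0.0)   # starts near the smaller peak, stalls near x = 1
glob = hill_climb(3.0)    # starts near the larger peak, stops near x = 4
```

A search started near the smaller peak stalls at the local optimum, which is precisely why multimodal problems are difficult for purely local methods.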

Figure 1.3 Different types of decision spaces: (a) maximization problem with single‐modal surface and one global optimum; (b) maximization problem with multimodal surface that has one global optimum.

1.1.8 Near‐Optimal Solutions

A near optimum has an objective function value very close to, but inferior to, that of the global optimum. In some engineering problems, achieving the exact global optimum is extremely difficult or even impossible because of the innate complexity of the problem or of the method employed to solve it, or it may be computationally prohibitive. In such situations, a near optimum is calculated and reported as an approximation to the global optimum. Near optima are satisfactory in solving many real‐world problems. How close a near optimum must be to the global optimum depends on the optimization problem being solved and the judgment of the analyst. Figure 1.4 depicts the concept of a near optimum in a maximization problem.

Figure 1.4 Demonstration of near optima in a one‐dimensional maximizing optimization problem.

1.1.9 Simulation

Each solution of an optimization problem, that is, each set of decision variables, defines an objective function value. The process of evaluating the state variables (which are necessary for estimating the objective function) and the constraints for any set of decision variables is known as simulation. A simulation model receives the decision variables as inputs and simulates the system's state variables. Sometimes the simulation model consists of one or more simple mathematical functions and equations. However, most real‐world and engineering problems require simulation models with complex procedures that must solve systems of equations and various formulas approximating physical processes. Simulation is, therefore, the computational imitation of the operation of a real‐world process or system over time.
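The reservoir example of Section 1.1.6 can serve as a toy simulation model: the releases are the decision variables, and the storage is the state variable. Every number below (inflows, initial storage, capacity, and the crude energy proxy) is an illustrative assumption.

```python
# Toy simulation model: inputs are daily releases (decision variables);
# the simulated state variable is reservoir storage. All figures are
# illustrative assumptions, including the crude energy proxy.

def simulate(releases, inflows, s0=50.0, capacity=100.0):
    storage, energy = s0, 0.0
    for release, inflow in zip(releases, inflows):
        release = min(release, storage + inflow)             # cannot release more than is available
        storage = min(storage + inflow - release, capacity)  # water balance; excess spills
        energy += release * storage                          # crude proxy for hydropower output
    return storage, energy

final_storage, total_energy = simulate([10, 10, 10], [12, 8, 10])
```

An optimization algorithm would call such a model once per candidate release schedule and read the objective (here, total_energy) from its outputs.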

1.2 Examples of the Formulation of Various Engineering Optimization Problems

This section presents examples of the formulation of different types of engineering optimization problems including mechanical design, structural design, electrical engineering optimization, water resources optimization, and calibration of hydrological models.

1.2.1 Mechanical Design

Designing a compound gear train exemplifies optimal design. A compound gear train is designed to achieve a particular gear ratio between the driver and driven shafts. The purpose of the gear train design is to find the number of teeth on each gear so that the error between the obtained and required gear ratios is minimized. In practice, the term gear ratio is used interchangeably with velocity ratio. It is defined as the ratio of the angular velocity of the output shaft to that of the input shaft. For a pair of matching gears, the velocity or gear ratio α is calculated as follows:

(1.8) α = ωout/ωin = θin/θout

in which α = gear ratio; ωin = angular velocity of the input shaft; ωout = angular velocity of the output shaft; θin = the number of teeth on the input gear; and θout = the number of teeth on the output gear. The ratio is, thus, inversely proportional to the number of teeth on the input and output gears.

Figure 1.5 shows a compound gear train made of four gears. It is desired to produce a gear ratio as close as possible to a required value μ. The objective of the design is to find the number of teeth on each gear so that the error between the obtained and required gear ratios is minimized. Normally, additional factors, such as the number of gear pairs to use, the geometric arrangement of the shafts, and wear, must also be considered. To simplify the problem, only the particular configuration shown in Figure 1.5 is considered here.

Figure 1.5 Compound gear train made of four gears.

For the system shown in Figure 1.5, the gear ratio is evaluated as follows:

(1.9) α = (τd · τb)/(τa · τf)

in which τd, τa, τb, and τf = the number of teeth on gears D, A, B, and F, respectively.

The number of teeth on each gear constitutes the decision variables:

(1.10) X = (x1, x2, x3, x4) = (τd, τa, τb, τf)

Minimizing the square of the difference between the desired gear ratio (μ) and the actual design gear ratio (α) leads to the following optimization problem:

(1.11) Minimize f(X) = (μ − α)²

in which

(1.12) α = (x1 · x3)/(x2 · x4)

subject to

(1.13) x(L) ≤ xi ≤ x(U), i = 1, 2, 3, 4

where μ = the required gear ratio; α = the actual gear ratio; and x(L) and x(U) = the minimum and maximum number of teeth on each gear, respectively. The minimization of the objective function (Equation (1.11)) is with respect to x1, x2, x3, and x4. The objective function is nonlinear, and the constraints (Equation (1.13)) are simple bounds on the decision variables. Since the number of teeth is an integer, this problem has a discrete domain, and the decision variables must take integer values.
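Because the teeth counts are integers with simple bounds, the problem can even be solved by exhaustive enumeration when the bounds are small; the required ratio mu and the teeth bounds below are illustrative assumptions.

```python
# Brute-force enumeration for the gear train design problem.
# The required ratio mu and the teeth bounds are illustrative assumptions.

def gear_error(x, mu):
    x1, x2, x3, x4 = x
    alpha = (x1 * x3) / (x2 * x4)   # actual gear ratio for these teeth counts
    return (mu - alpha) ** 2        # squared error against the required ratio

mu = 1.0 / 3.0                      # assumed required gear ratio
lo, hi = 12, 30                     # assumed bounds on the number of teeth
teeth = range(lo, hi + 1)
best = min(
    ((a, b, c, d) for a in teeth for b in teeth for c in teeth for d in teeth),
    key=lambda x: gear_error(x, mu),
)
```

For realistic bounds the search space grows as the fourth power of the teeth range, which is one motivation for the meta-heuristic algorithms presented in later chapters.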

1.2.2 Structural Design

Structural optimization problems are formulated and solved to determine the configurations of structures that satisfy specifications and optimize a chosen objective function. The main purpose of structural optimization is typically to minimize the weight of a structure or the vertical deflection of a loaded member. Here, a two‐bar truss design model is considered for illustration purposes.

The truss shown in Figure 1.6 is designed to carry a certain load without elastic failure. In addition, the truss is subject to limitations in geometry, area, and stress.

Figure 1.6 Schematic of a two‐bar truss.

The stresses on nodes A and B are calculated as follows:

(1.14)
(1.15)

in which σAC and σBC = the stress on node A and B, respectively (N/m2); Force = force on node C (N); H = perpendicular distance from AB to point C (m); L