Metaheuristics for Machine Learning (E-Book)

Description
The book unlocks the power of nature-inspired optimization in machine learning and presents a comprehensive guide to cutting-edge algorithms, interdisciplinary insights, and real-world applications.

The field of metaheuristic optimization algorithms is experiencing rapid growth, both in academic research and industrial applications. These nature-inspired algorithms, which draw on phenomena like evolution, swarm behavior, and neural systems, have shown remarkable efficiency in solving complex optimization problems. With advancements in machine learning and artificial intelligence, the application of metaheuristic optimization techniques has expanded, demonstrating significant potential in optimizing machine learning models, hyperparameter tuning, and feature selection, among other use cases. In the industrial landscape, these techniques are becoming indispensable for solving real-world problems in sectors ranging from healthcare to cybersecurity and sustainability. Businesses are incorporating metaheuristic optimization into machine learning workflows to improve decision-making, automate processes, and enhance system performance. As the boundaries of what is computationally possible continue to expand, the integration of metaheuristic optimization and machine learning represents a pioneering frontier in computational intelligence, making this book a timely resource for anyone involved in this interdisciplinary field.

Metaheuristics for Machine Learning: Algorithms and Applications serves as a comprehensive guide to the intersection of nature-inspired optimization and machine learning. Authored by leading experts, this book seamlessly integrates insights from computer science, biology, and mathematics to offer a panoramic view of the latest advancements in metaheuristic algorithms.
You'll find detailed yet accessible discussions of algorithmic theory alongside real-world case studies that demonstrate their practical applications in machine learning optimization. Perfect for researchers, practitioners, and students, this book provides cutting-edge content with a focus on applicability and interdisciplinary knowledge. Whether you aim to optimize complex systems, delve into neural networks, or enhance predictive modeling, this book arms you with the tools and understanding you need to tackle challenges efficiently. Equip yourself with this essential resource and navigate the ever-evolving landscape of machine learning and optimization with confidence.

Audience

The book is aimed at a broad audience encompassing researchers, practitioners, and students in the fields of computer science, data science, engineering, and mathematics. The detailed but accessible content makes it a must-have for both academia and industry professionals interested in the optimization aspects of machine learning algorithms.


Publication year: 2024




Table of Contents

Cover

Table of Contents

Series Page

Title Page

Copyright Page

Foreword

Preface

1 Metaheuristic Algorithms and Their Applications in Different Fields: A Comprehensive Review

1.1 Introduction

1.2 Types of Metaheuristic Algorithms

1.3 Application of Metaheuristic Algorithms

1.4 Future Direction

1.5 Conclusion

References

2 A Comprehensive Review of Metaheuristics for Hyperparameter Optimization in Machine Learning

2.1 Introduction

2.2 Fundamentals of Hyperparameter Optimization

2.3 Overview of Metaheuristic Optimization Techniques

2.4 Population-Based Metaheuristic Techniques

2.5 Single Solution-Based Metaheuristic Techniques

2.6 Hybrid Metaheuristic Techniques

2.7 Metaheuristics in Bayesian Optimization

2.8 Metaheuristics in Neural Architecture Search

2.9 Comparison of Metaheuristic Techniques for Hyperparameter Optimization

2.10 Applications of Metaheuristics in Machine Learning

2.11 Future Directions and Open Challenges

2.12 Conclusion

References

3 A Survey of Computer-Aided Diagnosis Systems for Breast Cancer Detection

3.1 Introduction

3.2 Procedure for Research Survey

3.3 Imaging Modalities and Their Datasets

3.4 Research Survey

3.5 Conclusion

3.6 Acknowledgment

References

4 Enhancing Feature Selection Through Metaheuristic Hybrid Cuckoo Search and Harris Hawks Optimization for Cancer Classification

4.1 Introduction

4.2 Related Work

4.3 Proposed Methodology

4.4 Experimental Setup

4.5 Results and Discussion

4.6 Conclusion

References

5 Anomaly Identification in Surveillance Video Using Regressive Bidirectional LSTM with Hyperparameter Optimization

5.1 Introduction

5.2 Literature Survey

5.3 Proposed Methodology

5.4 Result and Discussion

5.5 Conclusion

References

6 Ensemble Machine Learning-Based Botnet Attack Detection for IoT Applications

6.1 Introduction

6.2 Literature Survey

6.3 Proposed System

6.4 Results and Discussion

6.5 Conclusion

References

7 Machine Learning-Based Intrusion Detection System with Tuned Spider Monkey Optimization for Wireless Sensor Networks

7.1 Introduction

7.2 Literature Review

7.3 Proposed Methodology

7.4 Result and Discussion

7.5 Conclusion

References

8 Security Enhancement in IoMT‑Assisted Smart Healthcare System Using the Machine Learning Approach

8.1 Introduction

8.2 Literature Review

8.3 Proposed Methodology

8.4 Conclusion

References

9 Building Sustainable Communication: A Game-Theoretic Approach in 5G and 6G Cellular Networks

9.1 Introduction

9.2 Related Works

9.3 Methodology

9.4 Result

9.5 Conclusion

References

10 Autonomous Vehicle Optimization: Striking a Balance Between Cost-Effectiveness and Sustainability

10.1 Introduction

10.2 Methods

10.3 Results

10.4 Conclusions

References

11 Adapting Underground Parking for the Future: Sustainability and Shared Autonomous Vehicles

11.1 Introduction

11.2 Related Works

11.3 Methodology

11.4 Analysis

11.5 Conclusion

References

12 Big Data Analytics for a Sustainable Competitive Edge: An Impact Assessment

12.1 Introduction

12.2 Related Works

12.3 Hypothesis and Research Model

12.4 Results

12.5 Conclusion

References

13 Sustainability and Technological Innovation in Organizations: The Mediating Role of Green Practices

13.1 Introduction

13.2 Related Work

13.3 Methodology

13.4 Discussion

13.5 Conclusions

References

14 Optimal Cell Planning in Two Tier Heterogeneous Network through Meta-Heuristic Algorithms

14.1 Introduction

14.2 System Model and Formulation of the Problem

14.3 Result and Discussion

14.4 Conclusion

References

15 Soil Aggregate Stability Prediction Using a Hybrid Machine Learning Algorithm

15.1 Introduction

15.2 Related Works

15.3 Proposed Methodology

15.4 Result and Discussion

15.5 Conclusion

References

Index

Also of Interest

End User License Agreement

List of Tables

Chapter 1

Table 1.1 Strengths and weaknesses of metaheuristic algorithms.

Chapter 2

Table 2.1 Breakdown of popular metaheuristics and their I&D components [75]....

Table 2.2 Performance comparison of four different metaheuristics based on a...

Table 2.3 Performance comparison of eight population-based metaheuristics fo...

Chapter 3

Table 3.1 Summary of the medical jargon used.

Table 3.2 Advantages and disadvantages.

Chapter 4

Table 4.1 Information regarding the six cancer microarray data.

Table 4.2 Parameter settings of the proposed algorithm.

Table 4.3 Accuracies of the proposed algorithm with the mRMR, mRMR+CSA, and ...

Table 4.4 Accuracies of the proposed algorithm with the mRMR, mRMR+CSA, and ...

Table 4.5 Accuracies of the proposed algorithm with the mRMR, mRMR+CSA, and ...

Table 4.6 Comparison of the different published methods with the proposed me...

Chapter 8

Table 8.1 Comparison of the accuracy.

Table 8.2 Comparison of the precision.

Table 8.3 Comparison of the sensitivity.

Table 8.4 Comparison of the specificity.

Table 8.5 Comparison of the security.

Chapter 10

Table 10.1 Example of demand and supply data from expert interviews with the...

Table 10.2 Provides an analysis of the logistic network situation.

Chapter 11

Table 11.1 Index of the driver, status, and response model system.

Table 11.2 The DSR indexes’ weights and value attributions.

Table 11.3 Rankings of function replacement for each UPS type.

Chapter 12

Table 12.1 Results of validity and reliability tests.

Table 12.2 HTMT values.

Table 12.3 Examine the legitimacy of differences.

Table 12.4 The model’s fit outcomes.

Chapter 13

Table 13.1 Description of the companies.

Table 13.2 Presentation of illustrative information.

Table 13.3 Impact on the various aspects.

Table 13.4 Credibility, dependability, and relevance.

Table 13.5 Inferential statistics.

Table 13.6 Evaluation of interactions.

Chapter 14

Table 14.1 Parameter values.

List of Illustrations

Chapter 1

Figure 1.1 Flowchart of the genetic algorithm.

Figure 1.2 Flowchart of simulated annealing.

Figure 1.3 Flowchart of the particle swarm optimization.

Figure 1.4 Flowchart of the ant colony optimization.

Chapter 2

Figure 2.1 Tabu search for optimizing the tour cost for a city plotted vs. ite...

Figure 2.2 A Gaussian process approximation of an objective function being ite...

Figure 2.3 Convergence comparison of four metaheuristics based on the first 10...

Figure 2.4 Best score convergence profiles vs. iterations for eight renowned a...

Figure 2.5 Accuracy of metaheuristics for different ML models [83].

Chapter 3

Figure 3.1 Pictorial representation of the imaging modalities.

Figure 3.2 CNN architecture as illustrated by Mohamed et al. in [29].

Chapter 4

Figure 4.1 The proposed research methodology.

Figure 4.2 Hybrid flowchart of the HHO and CSA.

Figure 4.3 Error comparison with the SVM classifier.

Figure 4.4 The variance observed in the proposed algorithm (mRMR+CSAHHO) compa...

Figure 4.5 Error comparison with the KNN classifier.

Figure 4.6 The variance observed in the proposed algorithm (mRMR+CSAHHO) compa...

Figure 4.7 Error comparison with the NB classifier.

Figure 4.8 The variance observed in the proposed algorithm (mRMR+CSAHHO) compa...

Chapter 5

Figure 5.1 Schematic architecture of our proposed system.

Figure 5.2 Normal and abnormal clips from the ShanghaiTech dataset.

Figure 5.3 Accuracy comparison between the suggested and current techniques.

Figure 5.4 Precision comparison between the suggested and current techniques....

Figure 5.5 Recall comparison between the suggested and current techniques.

Figure 5.6 Error rate comparison between the suggested and current techniques....

Chapter 6

Figure 6.1 The proposed methodology.

Figure 6.2 The dataset’s distribution.

Figure 6.3 Architecture of the ANN.

Figure 6.4 Results of accuracy.

Figure 6.5 Results of precision.

Figure 6.6 Results of recall.

Figure 6.7 Results of the F-measure.

Chapter 7

Figure 7.1 Flowchart of the proposed SVM-TSMO model.

Figure 7.2 The support vector machine.

Figure 7.3 Accuracy of the existing and proposed methods.

Figure 7.4 Precision of the existing and proposed methods.

Figure 7.5 Recall % of the existing and proposed methods.

Figure 7.6 F1-measure of the existing and proposed methods.

Chapter 8

Figure 8.1 The IoMT-smart healthcare system.

Figure 8.2 A systematic diagram of security enhancement in the IoMT using mach...

Figure 8.3 Diagrammatic representation of the proposed method.

Figure 8.4 The linear SVM model.

Figure 8.5 The MLPSO algorithm flowchart.

Chapter 9

Figure 9.1 IDO based on a game model.

Figure 9.2 Spectrum use ratio.

Figure 9.3 Offload ratio.

Figure 9.4 Throughput analysis.

Figure 9.5 Response delay analysis.

Figure 9.6 Energy consumption analysis.

Chapter 10

Figure 10.1 Framework for logistic clusters that limits supply chain managemen...

Figure 10.2 Illustrates the assumptions of the logistic network model.

Figure 10.3 Distribution model simulations with simulated annealing.

Chapter 11

Figure 11.1 Weight matrix of attributes.

Figure 11.2 Ranking of the factors.

Figure 11.3 UPS-type characteristics.

Figure 11.4 Renewal time outcomes.

Figure 11.5 Analyzing renewal timing and UPS properties.

Figure 11.6 Distribution of renewal times.

Chapter 12

Figure 12.1 Suggested research design.

Figure 12.2 Reliability and validity of the CA.

Figure 12.3 Reliability and validity of the CR.

Figure 12.4 Reliability and validity of the AVE.

Chapter 13

Figure 13.1 Method of measuring model.

Figure 13.2 Model of structure.

Chapter 14

Figure 14.1 System model.

Figure 14.2 Flowchart of the proposed model.

Figure 14.3 Optimal user association to BSs with data suit-1.

Figure 14.4 Optimal user association to BSs with data suit-2.

Figure 14.5 Network utility maximization graph.

Chapter 15

Figure 15.1 Block diagram of soil aggregation.

Figure 15.2 C5.0’s algorithm flow.

Figure 15.3 Comparative analysis of the RMSE.

Figure 15.4 Comparative analysis of the R².

Figure 15.5 Comparative analysis of the nRMSE.

Figure 15.6 Comparative analysis of the MAE.



Scrivener Publishing, 100 Cummings Center, Suite 541J, Beverly, MA 01915-6106

Artificial Intelligence and Soft Computing for Industrial Transformation

Series Editor: Dr S. Balamurugan ([email protected])

The book series aims to provide comprehensive handbooks and reference books for the benefit of scientists, research scholars, students, and industry professionals working toward next-generation industrial transformation.

Publishers at Scrivener
Martin Scrivener ([email protected])
Phillip Carmical ([email protected])

Metaheuristics for Machine Learning

Algorithms and Applications

Edited by

Kanak Kalita

Vel Tech University, Avadi, India

Narayanan Ganesh

Vellore Institute of Technology, Chennai, India

and

S. Balamurugan

Intelligent Research Consultancy Services, Coimbatore, Tamil Nadu, India

This edition first published 2024 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA

© 2024 Scrivener Publishing LLC

For more information about Scrivener publications please visit www.scrivenerpublishing.com.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

Wiley Global Headquarters, 111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Limit of Liability/Disclaimer of Warranty

While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials, or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.

Library of Congress Cataloging-in-Publication Data

ISBN 978-1-394-23392-2

Cover image: Pixabay.com
Cover design by Russell Richardson

Foreword

In the dynamic landscape of today’s technological revolution, machine learning and its applications span multiple domains, offering both opportunities and challenges. As we navigate this terrain, the significance of data has shifted: data have transformed from passive entities to active drivers that influence decisions, sculpt perceptions, and determine collective trajectories. This book serves as a pivotal reference that sheds light on complex computational arenas and provides clarity to those navigating this domain.

This book is more than an aggregation of knowledge. It epitomizes the expertise and adaptability of current computational researchers and accentuates the potential of metaheuristics. For those unfamiliar with the term, envision metaheuristics as high-level strategists that steer a multitude of heuristic methodologies toward their zenith. They offer the requisite tools to address complex challenges where conventional algorithms might be inadequate.

Throughout the book, you will find a wide range of applications and potential uses of metaheuristics that span across domains from machine learning to the cutting-edge fields of sustainability, communication, and networking. It is fascinating to note that the algorithms aren’t just theoretical entities; they resonate with pressing real-world challenges. For instance, consider the pivotal role of metaheuristics in life-saving applications like breast cancer detection, or in ensuring security through anomaly identification in surveillance systems and botnet attack detection.

Moreover, as we delve deeper, we witness the subtle yet profound synergies between metaheuristics and contemporary technological innovations. The chapters dedicated to the advancements in 5G and 6G communication, and the future of autonomous vehicles, are prime examples. These sections underline the intricate balance and interdependence of the challenges we face today and the innovative solutions metaheuristics can offer.

For researchers who dedicate their lives to exploration, practitioners at the frontline of technological innovations, and students who look with hopeful eyes toward the future, this book will be a pivotal tool. Let it guide you, as it did for me, through the mesmerizing world of algorithms and their real-world applications.

Diego Alberto Oliva

Universidad de Guadalajara, Mexico

Preface

While compiling this book, we were guided by a singular vision: to sculpt a resource that seamlessly melds the theoretical intricacies of metaheuristics with their myriad practical applications. Our aspiration was to produce a reference that not only delves deeply into the subject, but is also accessible to readers across spectra, offering a holistic understanding that is both profound and practical.

With every chapter, we strived to weave a narrative, oscillating between the vast expanse of the topic and the intricate minutiae that define it. The book commences with a foundational introduction, leading readers through the labyrinthine world of metaheuristics. Going forward, the narrative transitions, diving deeper into their multifaceted applications—spanning from the dynamic domain of machine learning to the ever-evolving spheres of technology, sustainability, and the intricate web of communication networks.

Metaheuristics present a promising solution to many formidable optimization conundrums. Yet, their true allure comes not just from their theoretical promise but their practical prowess. This book attempts to unveil this allure, transforming nebulous algorithms into tangible entities with real-world resonances—whether in the life-saving realm of healthcare or the cutting-edge world of vehicular communications.

We extend our endless gratitude to the brilliant authors, reviewers, and countless others whose relentless dedication, insight, and expertise are evident in these pages. The editorial journey has been one of profound learning and growth for all involved. With each chapter, we have gleaned new perspectives, and we hope this book becomes a wellspring of knowledge, inspiration, and introspection for both scholars and professionals.

In closing, we offer our sincere thanks to the Scrivener and Wiley publishing teams for their help with this book. We entreat you to immerse your intellect and curiosity in the mesmerizing world of metaheuristics and their applications. Here’s to an enlightening reading journey ahead!

Kanak Kalita

Narayanan Ganesh

S. Balamurugan

1Metaheuristic Algorithms and Their Applications in Different Fields: A Comprehensive Review

Abrar Yaqoob1*, Navneet Kumar Verma2 and Rabia Musheer Aziz1

1School of Advanced Science and Language, VIT Bhopal University, Kothrikalan, Sehore, India

2State Planning Institute (New Division), Planning Department, Lucknow, Uttar Pradesh, India

Abstract

Metaheuristic algorithms are heuristic optimization approaches that provide a potent method for resolving challenging optimization problems. They offer an effective way to explore huge solution spaces and identify optimal or near-optimal solutions; they are iterative and often inspired by natural or social processes. This study provides comprehensive information on metaheuristic algorithms and the many areas in which they are applied. The article covers twenty well-known metaheuristic algorithms, including tabu search, particle swarm optimization, ant colony optimization, genetic algorithms, simulated annealing, and harmony search, and extensively explores their applications in diverse domains such as engineering, finance, logistics, and computer science. It underscores particular instances where metaheuristic algorithms have found utility, such as optimizing structural design, controlling dynamic systems, enhancing manufacturing processes, managing supply chains, and addressing problems in artificial intelligence, data mining, and software engineering. The paper thus provides a thorough insight into the versatile deployment of metaheuristic algorithms across different sectors, highlighting their capacity to tackle complex optimization problems in a wide range of real-world scenarios.

Keywords: Optimization, metaheuristics, machine learning, swarm intelligence

1.1 Introduction

Metaheuristics represent a category of optimization methods widely employed to tackle intricate challenges in diverse domains such as engineering, economics, computer science, and operations research. These adaptable techniques are designed to locate favorable solutions by exploring an extensive array of possibilities and avoiding stagnation in suboptimal outcomes [1]. The roots of modern optimization can be traced back to 1947, when George Dantzig introduced the simplex method for linear programming [2]. This innovative technique marked a pivotal point in optimization and paved the way for the emergence of subsequent optimization algorithms. Nonetheless, the simplex method’s applicability is confined to linear programming problems and does not extend to nonlinear ones. Later, John Holland devised the genetic algorithm, drawing inspiration from concepts of natural selection and evolution [3]. The genetic algorithm assembles a set of potential solutions and iteratively enhances this set through genetic operations like mutation, crossover, and selection [4]. It was a major milestone in the development of metaheuristics and opened up new possibilities for resolving difficult optimization problems. During the 1980s and 1990s, the field of metaheuristics experienced significant expansion and the emergence of numerous novel algorithms, including simulated annealing (SA), tabu search (TS), ant colony optimization (ACO), particle swarm optimization (PSO), and differential evolution (DE), each created expressly to deal with a particular variety of optimization problems. These techniques drew inspiration from concepts such as physical annealing, adaptive memory, swarm intelligence, and evolutionary processes [5].

The term “meta-” in metaheuristic algorithms indicates a higher level of operation beyond simple heuristics, leading to enhanced performance. These algorithms balance local search and global exploration by using randomness to provide a range of solutions. Although metaheuristics are frequently employed, there is no single definition of heuristics and metaheuristics in the academic literature, and some academics even use the terms synonymously. However, it is currently fashionable to classify as metaheuristics all algorithms of a stochastic nature that utilize randomness and comprehensive exploration across the entire search space. Metaheuristic algorithms are ideally suited for global optimization and nonlinear modeling because randomization is a useful method for switching from local to global search. As a result, almost all metaheuristic algorithms can be used to solve issues involving nonlinear optimization at the global level [6]. In recent years, the study of metaheuristics has developed over time, and new algorithms are being developed that combine different concepts and techniques from various fields such as machine learning, deep learning, and data science. The development and evolution of metaheuristics have made significant contributions to solving complex optimization problems and have led to the development of powerful tools for decision-making in various domains [7]. In order to find solutions in a huge search space, metaheuristic algorithms are founded on the idea of mimicking the behaviors of natural or artificial systems. These algorithms are particularly valuable for tackling problems that are challenging or impossible to solve using traditional optimization methods. Typically, metaheuristic algorithms involve iterations and a series of steps that modify a potential solution until an acceptable one is discovered.
Unlike other optimization techniques that may become stuck in local optimal solutions, metaheuristic algorithms are designed to explore the entire search space. They also exhibit resilience to noise or uncertainty in the optimization problem. The adaptability and plasticity of metaheuristic algorithms are two of their main features. They can be modified to take into account certain limitations or goals of the current task and are applicable to a wide variety of optimization situations. However, for complex problems with extensive search spaces, these algorithms may converge slowly toward an optimal solution, and there is no guarantee that they will find the global optimum. Metaheuristic algorithms find extensive application in various fields including engineering, finance, logistics, and computer science. They have been successfully employed in solving diverse problems such as optimizing design, control, and manufacturing processes, portfolio selection, and risk management strategies [8].

1.2 Types of Metaheuristic Algorithms

We shall outline some of the most popular metaheuristic methods in this section.

1.2.1 Genetic Algorithms

Genetic algorithms (GAs) belong to a family of metaheuristic optimization techniques that draw inspiration from natural selection and genetics [9–11]. The core idea underlying the GA is to mimic the evolutionary process in order to find the optimal solution to a particular problem. The genetic algorithm has the capability to address challenges spanning various fields such as biology, engineering, and finance [12–14]. In the methodology of the GA, a potential solution is denoted as a chromosome, that is, a collection of genes. Each gene signifies an individual variable within the context of the problem, and its value is drawn from the range of values that the variable can take [15, 16]. These chromosomes then undergo genetic operations like mutation and crossover, which can give rise to a fresh population of potential solutions [17–19].

The following are the major steps in the GA:

Initialization: The algorithm first initializes a set of potential solutions. Each solution is represented by a chromosome, a string of genes randomly generated based on the problem domain [20].

Evaluation: The suitability of each chromosome is assessed based on the objective function of the problem. The quality of the solution is evaluated by the fitness function, and the objective is to optimize the fitness function by either maximizing or minimizing it, depending on the particular problem [21].

Selection: Chromosomes that possess higher fitness values are chosen to form a fresh population of potential solutions. Various techniques, such as roulette wheel selection, tournament selection, and rank-based selection, are employed for the selection process [22].

Crossover: The selected chromosomes are combined through crossover to generate new offspring chromosomes. The crossover operation exchanges the genetic information from the parent chromosomes and is utilized to generate novel solutions [23].

Mutation: The offspring chromosomes are subjected to mutation, which introduces random changes to the genetic information. Mutation aids in preserving diversity within the population and preventing the occurrence of local optima [24].

Replacement: The offspring chromosomes form a new population of potential solutions, which replaces the less fit members of the prior population.

Termination: The technique iterates through the selection, crossover, mutation, and replacement phases until a specific termination condition is satisfied, such as reaching a predetermined maximum number of iterations, attaining a desired fitness value, or exceeding a predetermined computational time limit.
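The steps above can be sketched in a few lines of Python. The OneMax objective (counting 1-bits), the parameter values, and the helper names used here are illustrative choices for the sake of a runnable example, not details taken from the text:

```python
import random

random.seed(42)

GENES, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.01

def fitness(chrom):
    # Toy objective (OneMax): number of 1-bits; the GA maximizes it.
    return sum(chrom)

def select(pop):
    # Tournament selection: the fitter of two random chromosomes wins.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # One-point crossover exchanges gene segments between the parents.
    point = random.randint(1, GENES - 1)
    return p1[:point] + p2[point:]

def mutate(chrom):
    # Flip each bit with a small probability to preserve diversity.
    return [g ^ 1 if random.random() < MUTATION_RATE else g for g in chrom]

# Initialization: a random population of bit-string chromosomes.
population = [[random.randint(0, 1) for _ in range(GENES)]
              for _ in range(POP_SIZE)]
best = max(population, key=fitness)

for _ in range(GENERATIONS):
    # Selection, crossover, mutation, then replacement of the population.
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]
    best = max(population + [best], key=fitness)
```

For a real problem, the bit-string encoding and `fitness` would be replaced by a problem-specific representation and objective.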

Figure 1.1 Flowchart of the genetic algorithm.

The GA has several advantages, such as being capable of solving complex issues, locating the global optimum, and being applicable to various domains. However, the GA also has some limitations, such as the need for a suitable fitness function, the possibility of premature convergence, and the high computational cost for complex problems. Figure 1.1 shows the flowchart of the genetic algorithm.

1.2.2 Simulated Annealing

Simulated annealing is a probabilistic method for optimizing complex multidimensional problems by seeking the global best solution. It draws inspiration from the metallurgical technique of annealing, in which a metal is heated and gradually cooled to enhance its strength and resilience [25]. Similarly, simulated annealing commences at an elevated temperature, enabling the algorithm to extensively investigate a vast array of possible solutions, and then slowly decreases the temperature to narrow the search down to the most promising areas. SA works by maintaining a current solution and repeatedly making small changes to it in search of a better solution. At each iteration, the algorithm calculates a cost function that measures how good the current solution is. The cost function can be any function that assigns a score to a potential solution, such as a distance metric or a likelihood function. The algorithm then decides whether to accept or reject a new solution using a probability distribution that depends on the current temperature and the difference between the costs of the current and new solutions [26]. At high temperatures, SA is more willing to accept new solutions even if they have a higher cost than the current solution, because the algorithm is still exploring the space of potential solutions and needs to be open to new possibilities. As the temperature decreases, SA becomes increasingly selective and accepts new solutions only if they improve on the existing one. By employing this approach, SA avoids becoming trapped in local optima and eventually converges toward the global optimum [27]. SA offers a notable benefit in effectively addressing non-convex optimization problems, which are characterized by numerous local optima. By permitting the acceptance of solutions with greater costs, SA can navigate diverse areas within the solution space and prevent entrapment in local optima.
Moreover, SA boasts ease of implementation and independence from cost function gradients, rendering it suitable for scenarios where the cost function lacks differentiability. However, SA does have some limitations. It can be slow to converge, especially for large or complex problems, and may require many iterations to find the global optimum. SA's effectiveness is also influenced by the choice of cooling schedule, which determines how quickly the temperature decreases. If the cooling schedule is too slow, the algorithm may take too long to converge, while if it is too fast, the algorithm may converge prematurely to a suboptimal solution [28, 29].
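A minimal sketch of the accept/reject loop described above, assuming a toy one-dimensional cost function and a geometric cooling schedule (both invented here for illustration):

```python
import math
import random

random.seed(0)

def cost(x):
    # Toy one-dimensional cost function with several local minima.
    return x * x + 4 * math.sin(5 * x)

def simulated_annealing(x, temp=10.0, cooling=0.99, steps=2000):
    best, best_cost = x, cost(x)
    for _ in range(steps):
        # Propose a small random perturbation of the current solution.
        candidate = x + random.uniform(-0.5, 0.5)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / temp), which shrinks as the temperature falls.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        temp *= cooling   # geometric cooling schedule
    return best, best_cost

best_x, best_c = simulated_annealing(5.0)
```

The starting point, temperature, and cooling rate would all need tuning for a real problem, as the surrounding text notes.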

Figure 1.2 Flowchart of simulated annealing.

To put it briefly, simulated annealing is a highly effective optimization method capable of addressing intricate, multidimensional problems with multiple local optima. It works by exploring the solution space and gradually narrowing the search down to the most promising regions. While it has some limitations, SA is a helpful tool for a variety of real-world optimization problems. Figure 1.2 shows the flowchart of simulated annealing.

1.2.3 Particle Swarm Optimization

Particle swarm optimization is a technique for optimization that employs a population-based strategy to address a wide range of optimization problems. First introduced by Kennedy and Eberhart in 1995, this concept takes inspiration from the coordinated movements observed in the flocking of birds and the schooling of fish [30–32]. This algorithm emulates the social dynamics exhibited by these creatures, where each member learns from its own encounters and the experiences of its nearby peers, with the aim of discovering the best possible solution [33]. The PSO method begins by generating a population of particles, each of which acts as a potential solution to the optimization issue at hand. These particles, which have both a location and a velocity vector, are randomly distributed throughout the search space [34]. The location vector represents the particle’s current solution, whereas the velocity vector represents the particle’s moving direction and magnitude inside the search space. Through iterative steps, each particle’s location and velocity vectors undergo constant modification and adjustment in the PSO algorithm, guided by its own best solution encountered thus far and the solutions of its neighboring particles [35]. Collaborative learning continues until a predetermined stopping condition is met, such as when the desired outcome is attained or the maximum number of iterations has been reached. Compared to other optimization algorithms, the PSO algorithm boasts various advantages, including simplicity, rapid convergence, and robustness [36, 37]. PSO has found applications in diverse problem domains, spanning function optimization, neural network training, image processing, and feature selection. Nevertheless, the algorithm does come with certain limitations. These include the risk of premature convergence, where the algorithm may converge to suboptimal solutions prematurely, and challenges in effectively handling problems with high-dimensional spaces [38].
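The velocity and position updates described above might look as follows in Python; the sphere objective and the inertia/acceleration coefficient values are illustrative assumptions, not values from the text:

```python
import random

random.seed(1)

def sphere(v):
    # Objective to minimize: the sphere function.
    return sum(x * x for x in v)

DIM, SWARM, ITERS = 2, 20, 100
W, C1, C2 = 0.7, 1.5, 1.5   # inertia weight and acceleration coefficients

# Random positions; personal bests start at the initial positions.
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sphere)

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            # Velocity update: inertia + pull toward personal and global bests.
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)
```

Here the swarm's global best acts as the "neighboring peers" mentioned above; ring or other local neighborhood topologies are a common variation.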

Figure 1.3 Flowchart of the particle swarm optimization.

In general, the particle swarm optimization algorithm is a robust and effective optimization method capable of addressing numerous practical optimization problems. Its simplicity and intuitive approach make it an appealing choice compared to more intricate optimization methods. Figure 1.3 shows the flowchart of the particle swarm optimization.

1.2.4 Ant Colony Optimization

Ant colony optimization is a nature-inspired method that addresses difficult optimization problems by mimicking the behavior of ant colonies, specifically the way ants communicate to discover the shortest path toward food sources. The fundamental idea behind the ACO is to simulate the foraging behavior of ants to solve optimization problems effectively. A simulated group of ants is placed on a graph representing the problem space. These ants navigate the graph by selecting the next node to visit based on the pheromone trails left behind by other ants. The strength of the pheromone trail on an edge represents the quality of the solutions that passed through that edge, and as more ants traverse the same edge, the trail becomes stronger. This mirrors how real ants communicate by leaving pheromone trails to signal the location of food sources [39, 40]. The ACO algorithm has several key parameters, such as the amount of pheromone each ant deposits, the rate at which pheromones evaporate, and the balance between exploiting the best solution and exploring new solutions. The optimal values of these parameters are determined through experimentation and refinement to obtain the best possible results for a specific problem [41].
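These pheromone mechanics can be sketched on a hypothetical four-city tour problem; the distance matrix and all parameter values below are invented purely for illustration:

```python
import random

random.seed(3)

# Symmetric distance matrix for a toy 4-city tour (hypothetical data).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
N = len(dist)
pher = [[1.0] * N for _ in range(N)]       # pheromone level on each edge
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 10.0  # influence, evaporation, deposit

def tour_length(tour):
    return sum(dist[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour():
    # Each ant picks the next city with probability ~ pheromone^a * (1/d)^b.
    tour = [random.randrange(N)]
    while len(tour) < N:
        i = tour[-1]
        choices = [j for j in range(N) if j not in tour]
        weights = [(pher[i][j] ** ALPHA) * ((1.0 / dist[i][j]) ** BETA)
                   for j in choices]
        tour.append(random.choices(choices, weights=weights)[0])
    return tour

best = None
for _ in range(30):                              # iterations
    tours = [build_tour() for _ in range(10)]    # 10 ants per iteration
    for i in range(N):                           # pheromone evaporation
        for j in range(N):
            pher[i][j] *= (1 - RHO)
    for t in tours:                              # shorter tours deposit more
        for k in range(N):
            a, b = t[k], t[(k + 1) % N]
            pher[a][b] += Q / tour_length(t)
            pher[b][a] += Q / tour_length(t)
    it_best = min(tours, key=tour_length)
    if best is None or tour_length(it_best) < tour_length(best):
        best = it_best
```

On this tiny instance the optimal cycle has length 18, which the sketch finds quickly; real instances need far more ants and iterations.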

The ACO has showcased impressive achievements in resolving diverse optimization challenges, including but not limited to the traveling salesman problem, vehicle routing, and job scheduling. One notable advantage of the algorithm is its ability to swiftly discover favorable solutions, even when confronted with extensive search spaces. Furthermore, because the ACO belongs to the category of metaheuristic algorithms, it can be applied to a variety of situations without requiring a deep understanding of the underlying structure of those problems [42]. Figure 1.4 shows the flowchart of the ant colony optimization.

Figure 1.4 Flowchart of the ant colony optimization.

1.2.5 Tabu Search

The tabu search is a metaheuristic technique utilized for optimization problems, initially proposed by Fred Glover in 1986. It has gained significant popularity across diverse domains, including operations research, engineering, and computer science. The core concept behind the tabu search involves systematically traversing the search space by transitioning between different solutions in order to identify the optimal solution. However, unlike other local search algorithms, the tabu search incorporates a memory structure that records previous moves executed during the search. These data are then used to steer the search to potential places within the search space [43]. The tabu list, a memory structure that plays an important part in the algorithm, is at the heart of the tabu search. This list serves to store and remember previous moves made during the search process, ensuring that the algorithm avoids revisiting solutions that have already been explored. By utilizing the tabu list, the tabu search effectively restricts the search to new and unexplored regions of the solution space, promoting efficient exploration and preventing repetitive or redundant searches. This list is used to enforce a set of constraints, known as the tabu tenure, which determines how long a move is considered tabu. By imposing this constraint, the algorithm is compelled to investigate diverse regions within the search space and evade being trapped in local optima. This ensures that the algorithm remains dynamic and continually explores new possibilities, preventing it from being overly fixated on suboptimal solutions [43]. The tabu search is a versatile optimization algorithm applicable to both continuous and discrete optimization problems. When addressing continuous optimization problems, the algorithm typically uses a neighborhood function to generate new solutions by perturbing the present solution. 
In the event of discrete optimization problems, the neighborhood function is typically defined in terms of specific moves that can be made to the solution, such as swapping two elements in a permutation. The effectiveness of the tabu search is based on a number of variables, such as the choice of neighborhood function, the tabu tenure, and the stopping criterion. The algorithm can be enhanced by using various strategies, such as diversification and intensification, which balance the search space’s exploitation and exploration [44].
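The tabu-list mechanics can be sketched on a toy subset-selection objective with a single-bit-flip neighborhood; the weights, target value, tenure, and iteration count below are hypothetical:

```python
import random
from collections import deque

random.seed(7)

WEIGHTS = [3, 1, 4, 1, 5, 9, 2, 6]   # hypothetical item weights
TARGET = 15

def objective(sol):
    # Toy objective: distance of the selected weights' sum from a target.
    return abs(sum(w for w, s in zip(WEIGHTS, sol) if s) - TARGET)

def tabu_search(iters=50, tenure=3):
    current = [random.randint(0, 1) for _ in WEIGHTS]
    best, best_val = current[:], objective(current)
    tabu = deque(maxlen=tenure)          # recently flipped bit indices
    for _ in range(iters):
        # Evaluate all single-bit-flip neighbors that are not tabu.
        neighbors = []
        for i in range(len(WEIGHTS)):
            if i in tabu:
                continue
            cand = current[:]
            cand[i] ^= 1
            neighbors.append((objective(cand), i, cand))
        # Move to the best non-tabu neighbor, even if it is worse.
        val, move, current = min(neighbors)
        tabu.append(move)                # forbid undoing this move for a while
        if val < best_val:
            best, best_val = current[:], val
    return best, best_val

sol, val = tabu_search()
```

Note that the search accepts the best non-tabu neighbor even when it worsens the objective, which is what lets it escape local optima; aspiration criteria (overriding tabu status for a new best) are a common refinement not shown here.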


In general, the tabu search is a robust and adaptable optimization technique that has demonstrated its effectiveness in addressing diverse problem sets. It can be employed independently or integrated into more intricate optimization algorithms. Its popularity stems from its versatility and straightforwardness, making it a favored option for tackling real-life challenges in various domains.

1.2.6 Differential Evolution

The DE is a population-based optimization algorithm originally created by Storn and Price in 1997 [47]. It belongs to the category of evolutionary algorithms, which iteratively evolve a population of potential solutions to find the optimal one. The algorithm adheres to the fundamental steps of mutation, crossover, and selection, key elements commonly shared among numerous evolutionary algorithms [48].

In the process of the differential evolution, a population of potential solutions undergoes iterative evolution through the implementation of the following sequential steps:

Initialization: A population of N possible solutions is produced at random.

Mutation: Three distinct candidate solutions are randomly selected and combined to create a mutant (donor) vector.

Crossover: A new trial solution is created by mixing components of the mutant vector with those of the target vector.

Selection: If the trial solution has a better fitness value than the target vector, it replaces the target vector in the population.
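These four steps can be sketched as follows, assuming the classic DE/rand/1/bin scheme on a toy sphere objective (the F and CR values and population size are illustrative):

```python
import random

random.seed(5)

def sphere(v):
    # Objective to minimize: the sphere function.
    return sum(x * x for x in v)

DIM, NP, GENS = 3, 15, 80
F, CR = 0.8, 0.9   # mutation factor and crossover rate

# Initialization: NP random candidate vectors.
pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(NP)]

for _ in range(GENS):
    for i in range(NP):
        # Mutation: combine three distinct randomly chosen vectors.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(DIM)]
        # Crossover: mix mutant and target genes (one gene forced from mutant).
        jrand = random.randrange(DIM)
        trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                 for d in range(DIM)]
        # Selection: keep the trial only if it is at least as good.
        if sphere(trial) <= sphere(pop[i]):
            pop[i] = trial

best = min(pop, key=sphere)
```

Adaptive variants such as JADE and SHADE, mentioned below, replace the fixed F and CR with values adapted during the run.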

The success of the differential evolution depends on the selection of its control parameters, such as the mutation rate, crossover rate, and population size [49, 50]. Several variants of the DE have been proposed, including the SHADE (success-history based adaptive differential evolution) and JADE (adaptive differential evolution) algorithms, which incorporate adaptive control parameters to improve the algorithm's performance.

1.2.7 Harmony Search

The harmony search (HS) is an optimization technique motivated by the process of musical improvisation. Geem put forward the idea in 2001 [125], and it has since been used to solve several optimization problems. The technique mimics the improvisation of a group of musicians, who adjust their pitches (or notes) to create harmony. In the HS, the decision variables of an optimization problem are analogous to musical notes, and the value of the objective function reflects the harmony [51].

The HS starts from an initial set of decision variable vectors (i.e., the notes) and iteratively searches for better solutions by generating new solutions through the following steps:

Harmony memory: A set of the best candidate solutions (i.e., the harmonies) is maintained.

Harmony creation: A new candidate solution is created by randomly selecting values from the harmony memory.

Pitch adjustment: The values in the new candidate solution are adjusted with a probability based on a pitch adjustment rate.

Acceptance: The new candidate solution is accepted if it improves the objective function value.
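A compact sketch of these steps on a toy continuous objective; the HMCR, pitch adjustment rate, and bandwidth values are illustrative assumptions:

```python
import random

random.seed(11)

def objective(v):
    # Objective to minimize: the sphere function.
    return sum(x * x for x in v)

DIM, HMS, ITERS = 2, 10, 500
HMCR = 0.9    # probability of drawing a value from harmony memory
PAR = 0.3     # pitch adjustment rate
BW = 0.2      # pitch adjustment bandwidth

# Harmony memory: the HMS best harmonies found so far.
memory = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(HMS)]

for _ in range(ITERS):
    new = []
    for d in range(DIM):
        if random.random() < HMCR:
            # Harmony creation: pick this variable from a stored harmony.
            value = random.choice(memory)[d]
            if random.random() < PAR:
                value += random.uniform(-BW, BW)   # pitch adjustment
        else:
            value = random.uniform(-5, 5)          # random improvisation
        new.append(value)
    # Acceptance: replace the worst harmony if the new one is better.
    worst = max(memory, key=objective)
    if objective(new) < objective(worst):
        memory.remove(worst)
        memory.append(new)

best = min(memory, key=objective)
```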

The control parameters, such as the harmony memory size, pitch adjustment rate, and number of iterations, affect how well the HS performs. The approach has been utilized to tackle diverse optimization challenges, such as managing water resources, designing structures, and operating power systems [52, 53].

1.2.8 Artificial Bee Colony

The artificial bee colony (ABC) is a population-based optimization method that draws inspiration from honey bees’ feeding habits. Since its introduction by Karaboga in 2005, the technique has been used to solve a number of optimization issues [54]. The ABC mimics the foraging process of bees, where they search for food sources by visiting the flowers in the vicinity of the hive [55].

The artificial bee colony technique starts with an arbitrarily generated population of candidate solutions (i.e., food sources) and iteratively searches for better solutions by simulating the foraging process of bees through the following steps:

Phase of employed bees: The employed bees develop new candidate solutions by modifying the values of current solutions.

Phase of the onlooker bees: The onlooker bees probabilistically choose food sources according to the fitness information shared by the employed bees, favoring the candidate solutions with the highest fitness values, and search in their vicinity.

Phase of scout bees: The scout bees search for new candidate solutions by randomly generating new solutions.
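The three phases can be sketched as follows; the sphere objective, the abandonment limit, and the other parameter values are illustrative assumptions:

```python
import random

random.seed(13)

def fitness(v):
    # Objective to minimize: the sphere function.
    return sum(x * x for x in v)

DIM, SOURCES, ITERS, LIMIT = 2, 10, 100, 10

foods = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SOURCES)]
trials = [0] * SOURCES          # failed-improvement counters per source
best = min(foods, key=fitness)[:]

def neighbor(i):
    # Perturb one dimension of source i relative to another source k.
    k = random.choice([j for j in range(SOURCES) if j != i])
    d = random.randrange(DIM)
    cand = foods[i][:]
    cand[d] += random.uniform(-1, 1) * (foods[i][d] - foods[k][d])
    return cand

def try_improve(i):
    cand = neighbor(i)
    if fitness(cand) < fitness(foods[i]):
        foods[i], trials[i] = cand, 0
    else:
        trials[i] += 1

for _ in range(ITERS):
    # Employed bee phase: one local search attempt per food source.
    for i in range(SOURCES):
        try_improve(i)
    # Onlooker phase: fitter sources attract proportionally more attempts.
    weights = [1.0 / (1.0 + fitness(f)) for f in foods]
    for _ in range(SOURCES):
        try_improve(random.choices(range(SOURCES), weights=weights)[0])
    # Scout phase: abandon exhausted sources and re-initialize them.
    for i in range(SOURCES):
        if trials[i] > LIMIT:
            foods[i] = [random.uniform(-5, 5) for _ in range(DIM)]
            trials[i] = 0
    cur = min(foods, key=fitness)
    if fitness(cur) < fitness(best):
        best = cur[:]
```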

The success of the ABC depends on the control parameters, such as the population size, the number of iterations, and the probability of abandoning a food source. The algorithm has been applied to a wide range of optimization problems, including image processing, wireless sensor networks, and fuzzy control systems [56].

1.2.9 Firefly Algorithm

A metaheuristic optimization technique called the firefly algorithm (FA) is based on how fireflies behave. Xin-She Yang first presented the FA in 2008. The social behavior of fireflies, characterized by flashing light to attract mates or prey, served as the inspiration for the algorithm [57]. Light intensity plays the central role in the firefly technique: to maximize light intensity, fireflies travel toward the brighter fireflies. Starting with a random population of fireflies, the algorithm calculates the light intensity of each firefly using the objective function [58]. The movement of fireflies is governed by their attractiveness, which is determined by the brightness of their light and the distance between them. The fireflies move toward the brighter fireflies and update their positions until the maximum light intensity is achieved [59]. Numerous optimization issues, including those involving machine learning, image processing, and function optimization, have been effectively solved using the firefly algorithm.
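The brightness-driven movement rule can be sketched as follows. The attractiveness model beta0 * exp(-gamma * r^2) follows the usual FA formulation, while the objective and all parameter values are illustrative assumptions:

```python
import math
import random

random.seed(17)

def intensity(v):
    # Brightness: the negative of the sphere objective (higher is better).
    return -sum(x * x for x in v)

DIM, N, ITERS = 2, 15, 100
BETA0, GAMMA, ALPHA = 1.0, 0.1, 0.2   # attractiveness, decay, noise scale

flies = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]

for _ in range(ITERS):
    for i in range(N):
        for j in range(N):
            if intensity(flies[j]) > intensity(flies[i]):
                # Attractiveness decays with squared distance:
                # beta = beta0 * exp(-gamma * r^2).
                r2 = sum((a - b) ** 2 for a, b in zip(flies[i], flies[j]))
                beta = BETA0 * math.exp(-GAMMA * r2)
                # Move firefly i toward brighter firefly j, plus a random kick.
                flies[i] = [a + beta * (b - a)
                            + ALPHA * random.uniform(-0.5, 0.5)
                            for a, b in zip(flies[i], flies[j])]

best = max(flies, key=intensity)
```

In practice the noise scale ALPHA is usually decayed over time to sharpen convergence; it is held fixed here for brevity.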

1.2.10 Gray Wolf Optimizer

The gray wolf optimizer (GWO) is a metaheuristic optimization algorithm based on wolves' social structure and hunting techniques. The algorithm was developed in 2014 by Seyedali Mirjalili, Seyed Mohammad Mirjalili, and Andrew Lewis [60]. In the gray wolf optimizer, the optimization problem is viewed as a wolf pack's prey–predator dynamic. The wolf population used by the method is divided into four groups: alpha, beta, delta, and omega. The alpha wolf assumes the role of the leader within the wolf pack and has the greatest fitness value, while the omega wolves are the weakest and have the lowest fitness values [61–63]. The movement of the wolves is governed by three phases of hunting: searching for prey, encircling the prey, and attacking the prey. The wolves adjust their locations based on these phases until the optimal solution is reached. The gray wolf optimizer has been applied to a variety of optimization issues, such as function optimization, engineering design, and feature selection [64, 65].
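A sketch of the leader-guided position update, in which each wolf moves toward the mean of positions proposed by the alpha, beta, and delta wolves. The coefficient names (a, A, C) follow the common GWO formulation; the objective and parameter values are illustrative:

```python
import random

random.seed(19)

def cost(v):
    # Objective to minimize: the sphere function.
    return sum(x * x for x in v)

DIM, WOLVES, ITERS = 2, 12, 100
pack = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(WOLVES)]
best = min(pack, key=cost)[:]

for t in range(ITERS):
    # Rank the pack: alpha, beta, delta are the three fittest wolves.
    pack.sort(key=cost)
    if cost(pack[0]) < cost(best):
        best = pack[0][:]
    alpha, beta, delta = pack[0], pack[1], pack[2]
    a = 2 - 2 * t / ITERS          # exploration coefficient, decays 2 -> 0
    for i in range(WOLVES):
        new = []
        for d in range(DIM):
            guided = []
            for leader in (alpha, beta, delta):
                r1, r2 = random.random(), random.random()
                A = 2 * a * r1 - a          # controls attack vs. search
                C = 2 * r2
                D = abs(C * leader[d] - pack[i][d])
                guided.append(leader[d] - A * D)
            # Move to the mean of the positions proposed by the leaders.
            new.append(sum(guided) / 3)
        pack[i] = new

best = min(pack + [best], key=cost)
```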

1.2.11 Imperialist Competitive Algorithm

The imperialist competitive algorithm (ICA) is a metaheuristic optimization method designed to address numerous optimization issues, inspired by the political and economic rivalry of imperialist nations. Esmaeil Atashpaz-Gargari and Caro Lucas introduced the method in 2007 [66]. In the imperialist competitive algorithm, the optimization problem is considered as a competition between empires. The algorithm starts with a random population of countries, which are categorized into two groups: imperialists and colonies. The imperialist countries have higher fitness values, and they expand their territories by annexing colonies. The movement of empires is governed by two different types of actions: assimilation and revolution. In the assimilation process, the imperialist empires try to improve the fitness of their colonies, while in the revolution process, the colonies rebel against their imperialist empires and become independent. Various optimization problems, including function optimization, image segmentation, and parameter estimation, have been effectively solved through the successful application of the imperialist competitive algorithm [67, 68].

1.2.12 Bat Algorithm

Xin-She Yang created the bat algorithm (BA) in 2010, a metaheuristic optimization technique used to tackle a variety of optimization challenges [69]. The bat algorithm is inspired by bats' echolocation behavior, in which ultrasonic pulses are used to navigate and locate prey in the dark. It replicates the hunting activity of bats in order to discover the best solution to a given optimization issue [70].

To solve an optimization problem using the bat algorithm, a population of bats is created in the search space with random positions and velocities. The bats move randomly, emitting frequencies proportional to their fitness values. Bats with better fitness emit higher-frequency sounds that attract other bats toward their position in the search space [71]. In addition to the frequency-based attraction mechanism, the bat algorithm includes a random walk component that allows the bats to explore uncharted regions of the search space. During each iteration, the algorithm updates the velocity and position of each bat using information on the best solution found thus far, as well as the loudness and frequency of its emitted signal. The algorithm iterates until it reaches a preset stopping criterion, such as a maximum number of iterations or a goal fitness value [72].
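A simplified sketch of the frequency-tuned velocity update and loudness-gated acceptance; the loudness and pulse rate are held fixed here for brevity (full BA variants decay them over time), and all values are illustrative:

```python
import random

random.seed(23)

def cost(v):
    # Objective to minimize: the sphere function.
    return sum(x * x for x in v)

DIM, BATS, ITERS = 2, 15, 200
FMIN, FMAX = 0.0, 2.0              # pulse-frequency range
LOUDNESS, PULSE_RATE = 0.8, 0.5    # kept fixed in this simplified sketch

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(BATS)]
vel = [[0.0] * DIM for _ in range(BATS)]
best = min(pos, key=cost)[:]

for _ in range(ITERS):
    for i in range(BATS):
        # Frequency-tuned velocity update relative to the best bat.
        f = FMIN + (FMAX - FMIN) * random.random()
        vel[i] = [v + (x - b) * f for v, x, b in zip(vel[i], pos[i], best)]
        cand = [x + v for x, v in zip(pos[i], vel[i])]
        if random.random() > PULSE_RATE:
            # Local random walk around the current best solution.
            cand = [b + 0.1 * random.uniform(-1, 1) for b in best]
        # Greedy, loudness-gated acceptance of the new position.
        if cost(cand) < cost(pos[i]) and random.random() < LOUDNESS:
            pos[i] = cand
        if cost(cand) < cost(best):
            best = cand[:]
```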

1.2.13 Cuckoo Search

The cuckoo search, a metaheuristic optimization method inspired by the reproductive behavior of cuckoo birds, was introduced in 2009 by Xin-She Yang and Suash Deb. Cuckoo birds use an unusual approach in which they deposit their eggs in the nests of other bird species, leaving the host birds to care for and rear their young [73]. The cuckoo search algorithm imitates this strategy of brood parasitism, with the objective of discovering the optimal solution for an optimization problem. The technique commences by initializing a population of cuckoos with random positions and velocities. Subsequently, each cuckoo deposits an egg in a nest that is chosen randomly, with the likelihood of selecting a specific nest being proportionate to its fitness level [74–76]. The cuckoo search algorithm incorporates a random walk element, allowing the cuckoos to explore unexplored regions within the search space. During each iteration, the algorithm adjusts the position and speed of each cuckoo based on the best solution found thus far and the potential for uncovering a superior solution by depositing an egg in a new nest. To enhance the solution quality by exploring the vicinity of the current answer, the algorithm also integrates a local search component. The process persists until a predetermined stopping condition is fulfilled, such as reaching the maximum iteration count or achieving the desired fitness level [77, 78].

1.2.14 Flower Pollination Algorithm

The flower pollination algorithm (FPA), created by Xin-She Yang in 2012, is a metaheuristic optimization technique inspired by the way flowers attract pollinators such as butterflies and bees through their fragrance [79]. The algorithm emulates flower pollination behavior to discern the optimal solution for a given problem. It begins by initializing a group of flowers with random positions and fragrances. Each flower releases a fragrance that draws pollinators, with the likelihood of attraction determined by its fitness level. A random walk element is also integrated, enabling pollinators to explore novel sections of the search space [80]. Throughout each iteration, the flower pollination algorithm adjusts the position and fragrance of each flower based on the current best solution and the probability of enticing new pollinators to potentially uncover an improved solution. Furthermore, the algorithm incorporates a local search component that investigates the neighboring vicinity to enhance solution quality. The algorithm continues until a predefined stopping condition is met, which could be achieving a specific fitness target or reaching the maximum iteration count [81].

1.2.15 Krill Herd Algorithm

In 2012, Amir H. Gandomi and Amir H. Alavi introduced the krill herd algorithm (KHA), a metaheuristic optimization technique inspired by the coordinated motions and interplays of krill within their oceanic habitat. This algorithm endeavors to replicate the herding conduct of krill as they forage for sustenance and navigate survival challenges, all with the aim of identifying the finest solution for a specified problem [82].

The krill herd algorithm emulates the synchronized movements and interactions of krill in their pursuit of nourishment and companionship, striving to pinpoint the optimal solution for a given task. Commencing with the creation of a krill population featuring random positions and velocities, each individual krill engages in random movements across the search space. The course and speed of these movements are shaped by forces of attraction, repulsion from other krill, and environmental influences [83].

At each iteration, the algorithm adjusts the location and velocity of each krill based on the best solution found thus far and the influence of other krill and the environment. The technique also has a random walk component that enables krill to explore new regions of the search space [84]. The algorithm continues to run until a predefined stopping condition is met, such as a predetermined number of iterations or a predetermined goal fitness value [85].

1.2.16 Whale Optimization Algorithm

In 2016, Seyedali Mirjalili and Andrew Lewis introduced the whale optimization algorithm (WOA), a metaheuristic optimization technique. This concept drew inspiration from the hunting behaviors of humpback whales, characterized by a blend of independent and cooperative movements and vocalizations [86]. The core concept of the whale optimization algorithm is to replicate the foraging conduct of humpback whales as they seek sustenance, aiming to uncover the optimal solution for a particular problem. The algorithm commences by establishing a group of whales, each assigned random positions and velocities. Subsequently, every whale undertakes random movements across the search space, with the course and speed influenced by the forces of attraction, repulsion exerted by other whales, and environmental conditions [87]. Throughout each iteration, the whale optimization algorithm modifies the positions and velocities of individual whales based on the current best solution and the effects of other whales and the environment. Moreover, the algorithm integrates a random walk aspect to facilitate exploration of uncharted regions within the search space. Termination of the algorithm occurs upon meeting predefined cessation criteria, such as attaining the maximum iteration count or reaching a desired fitness level [88–90].

1.2.17 Glowworm Swarm Optimization

The glowworm swarm optimization (GSO) is an optimization technique inspired by nature and was introduced by K. N. Krishnanand and Debasish Ghose in 2005. It draws inspiration from the bioluminescent communication exhibited by fireflies and glowworms to emulate the coordinated behavior of swarms [91]. The glowworm swarm optimization serves as a metaheuristic optimization algorithm that replicates the bioluminescent actions of glowworms as they navigate their surroundings in pursuit of sustenance and companionship, all with the aim of identifying the optimal solution for a given problem. Commencing with the creation of a group of glowworms, each assigned random positions and luminosities, every glowworm emits light that attracts other individuals, with the degree of attraction being proportional to both the luminosity and the distance of neighboring glowworms [92]. Throughout each iteration, the glowworm swarm optimization algorithm updates the position and luminosity of each glowworm based on the likelihood of drawing new glowworms and the best solution attained up to that point. A random walk component is also integrated into the algorithm, allowing glowworms to venture into unexplored territories of the search space. The algorithm's execution continues until reaching a specific termination criterion, such as achieving a target fitness level or reaching the maximum permissible number of iterations [93].

1.2.18 Cat Swarm Optimization

The cat swarm optimization (CSO) method, a metaheuristic optimization algorithm based on the cooperative hunting behavior of a colony of cats, was first proposed by Shu-Chuan Chu, Pei-Wei Tsai, and Jeng-Shyang Pan in 2006. The method is modeled after the collaboration and communication that occur among cats while they are hunting [94]. The fundamental concept behind the cat swarm optimization involves emulating the collective hunting actions of a group of cats to ascertain the finest solution for a given problem. The approach commences by placing and endowing a population of cats with random positions and velocities. Subsequently, each cat undertakes randomized movements within the search space, wherein the course and speed of movement are influenced by the forces of attraction and repulsion exerted by fellow cats, alongside environmental variables [95].

In each iteration of the cat swarm optimization algorithm, adjustments are made to the positions and velocities of the cats based on the prevailing best solution, the impact of other cats, and environmental conditions. Furthermore, a random walk element is infused into the algorithm, granting cats the capability to explore previously uncharted territories within the search space. This optimization process continues until a predetermined halting criterion, such as reaching a desired fitness threshold or attaining the maximum allowable number of iterations, is met [96].

1.2.19 Grasshopper Optimization Algorithm

The grasshopper optimization algorithm (GOA), created in 2017 by Shahrzad Saremi, Seyedali Mirjalili, and Andrew Lewis, is a nature-inspired optimization algorithm that imitates the swarming behavior of grasshoppers. The GOA bases its optimization on the collective behavior of grasshoppers, which involves interpersonal communication and cooperation [97].

Throughout each iteration of the grasshopper optimization algorithm, adjustments are made to the position and velocity of every grasshopper, guided by the prevailing best solution and the influences stemming from fellow grasshoppers and environmental factors. Moreover, a random walk attribute is introduced, empowering grasshoppers to explore and uncover previously unexplored sectors within the search space [98, 99]. This algorithm persists until a predetermined cessation criterion is satisfied, such as reaching a maximum iteration count or achieving a designated fitness target.

1.2.20 Moth–Flame Optimization

The moth–flame optimization (MFO) algorithm draws inspiration from the natural behavior of moths, which exhibit an inherent attraction to flames and utilize celestial cues for navigation. This technique emulates the search pattern of moths, with the goal of uncovering the most optimal solution for a specific problem [100].

In the initial stages of the optimization procedure, the algorithm commences by generating a cluster of moths, each assigned random positions and luminosities. Subsequent to this, every moth traverses the search space via randomized movements, adjusting its velocity and trajectory based on two key factors: its inclination toward the brightest moth and the impact of environmental conditions [101