Principles of Soft Computing Using Python Programming

An accessible guide to the revolutionary techniques of soft computing

Soft computing is a computing approach designed to replicate the human mind's unique capacity to integrate uncertainty and imprecision into its reasoning. It is uniquely suited to computing operations where rigid analytical models fail to account for the variety and ambiguity of possible solutions. As machine learning and artificial intelligence become increasingly prominent in the computing landscape, the potential for soft computing techniques to revolutionize computing has never been greater.

Principles of Soft Computing Using Python Programming provides readers with the knowledge required to apply soft computing models and techniques to real computational problems. Beginning with a foundational discussion of soft (or fuzzy) computing and its differences from hard computing, it describes different models for soft computing and their many applications, both demonstrated and theoretical. The result is a set of tools with the potential to produce new solutions to the thorniest computing problems.

Readers of Principles of Soft Computing Using Python Programming will also find:

* Each chapter accompanied by Python code and step-by-step comments to illustrate applications
* Detailed discussion of topics including artificial neural networks, rough set theory, genetic algorithms, and more
* Exercises at the end of each chapter, including both short- and long-answer questions to reinforce learning

Principles of Soft Computing Using Python Programming is ideal for researchers and engineers in a variety of fields looking for new solutions to computing problems, as well as for advanced students in programming or the computer sciences.
Page count: 500
Publication year: 2023
Cover
Table of Contents
Title Page
Copyright
About the Author
Preface
1 Fundamentals of Soft Computing
1.1 Introduction to Soft Computing
1.2 Soft Computing versus Hard Computing
1.3 Characteristics of Soft Computing
1.4 Components of Soft Computing
Exercises
2 Fuzzy Computing
2.1 Fuzzy Sets
2.2 Fuzzy Set Operations
2.3 Fuzzy Set Properties
2.4 Binary Fuzzy Relation
2.5 Fuzzy Membership Functions
2.6 Methods of Membership Value Assignments
2.7 Fuzzification vs. Defuzzification
2.8 Fuzzy c-Means
Exercises
3 Artificial Neural Network
3.1 Fundamentals of Artificial Neural Network (ANN)
3.2 Standard Activation Functions in Neural Networks
3.3 Basic Learning Rules in ANN
3.4 McCulloch–Pitts ANN Model
3.5 Feed-Forward Neural Network
3.6 Feedback Neural Network
Exercises
4 Deep Learning
4.1 Introduction to Deep Learning
4.2 Classification of Deep Learning Techniques
Exercises
5 Probabilistic Reasoning
5.1 Introduction to Probabilistic Reasoning
5.2 Four Perspectives on Probability
5.3 The Principles of Bayesian Inference
5.4 Belief Network and Markovian Network
5.5 Hidden Markov Model
5.6 Markov Decision Processes
5.7 Machine Learning and Probabilistic Models
Exercises
6 Population-Based Algorithms
6.1 Introduction to Genetic Algorithms
6.2 Five Phases of Genetic Algorithms
6.3 How Genetic Algorithms Work
6.4 Application Areas of Genetic Algorithms
6.5 Python Code for Implementing a Simple Genetic Algorithm
6.6 Introduction to Swarm Intelligence
6.7 A Few Important Aspects of Swarm Intelligence
6.8 Swarm Intelligence Techniques
Exercises
7 Rough Set Theory
7.1 The Pawlak Rough Set Model
7.2 Using Rough Sets for Information System
7.3 Decision Rules and Decision Tables
7.4 Application Areas of Rough Set Theory
7.5 Using ROSE Tool for RST Operations
Exercises
8 Hybrid Systems
8.1 Introduction to Hybrid Systems
8.2 Neurogenetic Systems
8.3 Fuzzy-Neural Systems
8.4 Fuzzy-Genetic Systems
8.5 Hybrid Systems in Medical Devices
Exercises
Index
End User License Agreement
Chapter 1
Table 1.1 Important points of differences between soft computing and hard computing.
Table 1.2 Transaction details of a supermarket store.
Chapter 2
Table 2.1 The x value and the corresponding membership function value.
Table 2.2 Paired comparison for breakfast item preference.
Table 2.3 Input vector set to the neural network.
Table 2.4 Subareas and their properties.
Table 2.5 Fuzzy Membership Matrix.
Table 2.6 Data point distance to a given cluster.
Table 2.7 Updated Fuzzy Membership Matrix.
Chapter 3
Table 3.1 List of standard activation functions.
Chapter 5
Table 5.1 Random experiments and possible outcomes.
Chapter 6
Table 6.1 Initial population and the fitness values.
Table 6.2 Clusters arranged for selection based on fitness values.
Table 6.3 New population generated after mutation.
Table 6.4 A 2-D array of cities and their randomly generated weight values....
Chapter 7
Table 7.1 An information table.
Table 7.2 A simple information table.
Table 7.3 A decision table for bank loan.
Table 7.4 Support values of a decision table for bank loan.
Table 7.5 Strength, certainty, and coverage values of decision rules.
Table 7.6 An exemplary decision table containing patient details.
Table 7.7 Support, strength, certainty, and coverage values of all decision ...
Table A An information table.
Chapter 1
Figure 1.1 Basic concepts of computing.
Figure 1.2 Summarization of three varying cases of a self-driving car.
Figure 1.3 Classification of computing (in computer science).
Figure 1.4 Soft computing characteristics.
Figure 1.5 Components of soft computing.
Figure 1.6 (a) Boolean (nonfuzzy) and (b) fuzzy logic-based solutions for a ...
Figure 1.7 Parts of a neuron.
Figure 1.8 A simple example of neural network.
Figure 1.9 The design of an artificial neuron.
Figure 1.10 Basic steps of evolutionary algorithms.
Figure 1.11 Main families of evolutionary algorithms.
Figure 1.12 Colony of ants marching toward food source.
Figure 1.13 Steps followed in genetic algorithms.
Figure 1.14 Steps followed in differential evolution.
Figure 1.15 Machine learning algorithm used for training data to form cluste...
Figure 1.16 Machine learning algorithm used for classifying email as spam or...
Figure 1.17 Types of machine learning.
Figure 1.18 Supervised learning.
Figure 1.19 The two main types of regression.
Figure 1.20 Unsupervised learning.
Figure 1.21 Reinforcement learning.
Chapter 2
Figure 2.1 (a) Crisp set of values (b) Fuzzy set of values.
Figure 2.2 (a) Crisp logic; (b) fuzzy logic.
Figure 2.3 Membership function of fuzzy set having real number close to 0.
Figure 2.4 Support, core, and boundary of fuzzy membership function.
Figure 2.5 (a) Normal fuzzy set; (b) subnormal fuzzy set.
Figure 2.6 (a) Convex fuzzy set; (b) nonconvex fuzzy set.
Figure 2.7 Fuzzy set operations: (a) Fuzzy union; (b) fuzzy intersection; an...
Figure 2.8 Singleton membership function.
Figure 2.9 Triangular membership function.
Figure 2.10 Trapezoidal membership function.
Figure 2.11 Gaussian membership function.
Figure 2.12 Sigmoidal membership function.
Figure 2.13 Membership function for the fuzzy variable “weight”.
Figure 2.14 Triangle for prediction to a category.
Figure 2.15 Membership functions based on rank ordering of items.
Figure 2.16 Linguistic terms and their corresponding θ values using ang...
Figure 2.17 An example of company's earnings for a year using angular fuzzy ...
Figure 2.18 Angular fuzzy membership function.
Figure 2.19 (a) ANN with two input and three class output (b) Graphical resu...
Figure 2.20 Input and output membership function of a fuzzy system.
Figure 2.21 Solution of first chromosome in population.
Figure 2.22 The fuzzification and defuzzification processes.
Figure 2.23 An example of max-membership principle of defuzzification.
Figure 2.24 An example of mean-max membership method of defuzzification.
Figure 2.25 An example of center of mass method of defuzzification.
Figure 2.26 An example of weighted average method of defuzzification.
Figure 2.27 Fuzzy membership plot.
Figure 2.28 Scatter plot for (a) Sepal length versus sepal width, and (b) Pe...
Chapter 3
Figure 3.1 Main parts of a neuron.
Figure 3.2 Working of a neuron.
Figure 3.3 Binary step activation function.
Figure 3.4 Linear activation function.
Figure 3.5 Sigmoid activation function.
Figure 3.6 ReLU activation function.
Figure 3.7 tanh activation function.
Figure 3.8 Leaky ReLU activation function.
Figure 3.9 SoftMax activation function.
Figure 3.10 McCulloch–Pitts neuron model.
Figure 3.11 Types of artificial neural network (ANN).
Figure 3.12 Single-layer Perceptron model of neural network.
Figure 3.13 A one-unit neural network model.
Figure 3.14 ANN output representation corresponding to Figure 3.11.
Figure 3.15 ANN output representation corresponding to Figure 3.14 after inc...
Figure 3.16 Multilayer Perceptron (all neuron connections are not shown).
Figure 3.17 Multilayer Perceptron that solves the XOR problem (balanced weig...
Figure 3.18 Radial basis function neural network architecture.
Figure 3.19 Von Der Malsburg and Willshaw Model of SOM.
Figure 3.20 Kohonen's self-organizing model.
Figure 3.21 A self-organizing model (SOM) having 500 data points and 8 featu...
Chapter 4
Figure 4.1 Deep learning performance.
Figure 4.2 (a) Traditional machine learning (b) Deep learning.
Figure 4.3 Standard deep learning techniques.
Figure 4.4 An example of feature map.
Figure 4.5 The first step of convolution operation.
Figure 4.6 The second step of convolution operation.
Figure 4.7 The convoluted feature after applying the convolution operation....
Figure 4.8 Max pooling vs Average pooling.
Figure 4.9 Flattening of pooled feature map.
Figure 4.10 Basic CNN architecture.
Figure 4.11 Block diagram of CNN architecture.
Figure 4.12 Training and test loss.
Figure 4.13 Training and test accuracy.
Figure 4.14 Working of RNN.
Figure 4.15 Use of GAN to generate synthetic images.
Figure 4.16 An Autoencoder used for denoising images.
Chapter 5
Figure 5.1 Schematic representation of Bayes' theorem.
Figure 5.2 (a) Belief network where A and C are conditionally independent, g...
Figure 5.3 Markov decision process.
Chapter 6
Figure 6.1 Standard population-based algorithms.
Figure 6.2 Flowchart of the phases of genetic algorithm.
Figure 6.3 A Sample example of gene, chromosome, and population.
Figure 6.4 A chromosome consisting of seven genes.
Figure 6.5 The Roulette wheel parent selection.
Figure 6.6 The Stochastic universal sampling parent selection.
Figure 6.7 An example of tournament selection.
Figure 6.8 An example of rank-based selection.
Figure 6.9 An example of one-point crossover.
Figure 6.10 An example of two-point crossover.
Figure 6.11 An example of uniform crossover.
Figure 6.12 An example of partially mapped crossover.
Figure 6.13 An example of bit flip mutation.
Figure 6.14 An example of random resetting mutation.
Figure 6.15 An example of swap mutation.
Figure 6.16 An Example of scramble mutation.
Figure 6.17 An example of inverse mutation.
Figure 6.18 The Roulette wheel consisting of clusters of population.
Figure 6.19 Crossover operations performed for two pairs of chromosomes.
Figure 6.20 Routes connecting every capital of North-East India.
Figure 6.21 An example of crossover operation to solve TSP using GA.
Figure 6.22 An example of mutation to solve TSP using GA.
Figure 6.23 An example of the vehicle routing problem.
Figure 6.24 Flowchart of the VRP solution using genetic algorithm.
Figure 6.25 Crossover operation for VRP using Genetic Algorithm approach. (a...
Figure 6.26 Output of the genetic algorithm code.
Figure 6.27 (a) A colony of ants. (b) A swarm of honey bees. (c) A scho...
Figure 6.28 Nest built by termites.
Figure 6.29 Few aspects of swarm intelligence.
Figure 6.30 The positive feedback loop for birth rate of a population.
Figure 6.31 The positive feedback loop for clotting of wounded tissues.
Figure 6.32 The negative feedback loop for death rate of a population.
Figure 6.33 Ant foraging leaving pheromones trails.
Figure 6.34 Travelling all the cities through the shortest path.
Figure 6.35 Flowchart of TSP using ACO.
Figure 6.36 Output of ant colony optimization.
Figure 6.37 Velocity and position updates of particle in the PSO algorithm....
Figure 6.38 Flowchart of the PSO algorithm.
Figure 6.39 Output of particle swarm optimization.
Chapter 7
Figure 7.1 Basics of rough set theory.
Figure 7.2 The standard classification problem.
Figure 7.3 Sequence of steps used by RSES tool.
Figure 7.4 Determining the nearest centroid for an object.
Figure 7.5 Algorithmic steps for image segmentation.
Figure 7.6 The standard speech recognition process.
Figure 7.7 Rough set-based vector quantizier.
Figure 7.8 The BANK.ISF dataset.
Figure 7.9 Dialog box for local discretization process.
Figure 7.10 Display of BANK.ISF dataset (having discretized values for all a...
Figure 7.11 The MUSHROOM.ISF dataset.
Figure 7.12 Dialog box for finding approximations.
Figure 7.13 Display of approximations in the approximation viewer window.
Chapter 8
Figure 8.1 (a) Sequential hybrid system, (b) embedded hybrid system, and (c)...
Figure 8.2 Block diagram of a Fuzzy-Neural system.
IEEE Press
445 Hoes Lane
Piscataway, NJ 08854

IEEE Press Editorial Board
Sarah Spurgeon, Editor in Chief
Jón Atli Benediktsson Anjan Bose James Duncan Amin Moeness Desineni Subbaram Naidu
Behzad Razavi Jim Lyke Hai Li Brian Johnson
Jeffrey Reed Diomidis Spinellis Adam Drobot Tom Robertazzi Ahmet Murat Tekalp
Gypsy Nandi
Assam Don Bosco University
Guwahati, India
Copyright © 2024 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data Applied for:
Hardback ISBN: 9781394173136
Cover Design: Wiley
Cover Image: © Tuomas A. Lehtinen/Getty Images
Dr. Gypsy Nandi currently holds the position of Associate Professor and Head of the Department of Computer Applications at Assam Don Bosco University, located in Assam, India. With a profound educational background that includes a PhD in computer science, she has amassed nearly two decades of invaluable experience within the academic sphere. Driven by a fervent passion for cutting-edge technologies, she has been instrumental in advancing the fields of machine learning, data science, and social network analysis through her significant contributions.
Her accomplishments extend to successfully managing various government-sanctioned consultancy and research-based projects. Additionally, she has authored two impactful books that delve into the domains of data science and soft computing. She has also secured an Indian patent grant for her innovative design of a versatile, multi-functional robot.
Over the course of her illustrious academic career spanning 18 years, she has been a sought-after speaker for both national and state-level events, where she shares her expertise in her respective fields. Her extensive research output is evident in her numerous publications, which include esteemed journal articles, conference papers, and book chapters.
Beyond her academic pursuits, she remains actively engaged in social commitment activities. She serves as the coordinator of VanitAgrata, a women empowerment cell at Assam Don Bosco University. Through this initiative, she provides free digital literacy training to girls and women in rural areas, contributing to the advancement of society. She has also received international recognition from a university in the Philippines for her dedication to service-learning at the institutional level. Her commitment extends to offering free digital literacy training to various underprivileged communities in rural Assam.
In summary, she stands as a distinguished scholar and educator who has left an indelible mark on the fields of computer science and technology. Her accomplishments showcase not only academic excellence but also a profound dedication to social development and enriching the lives of students and communities alike.
In an era defined by rapid technological advancements, the field of soft computing has emerged as a powerful paradigm for solving complex real-world problems. Soft computing leverages the principles of human-like decision-making, allowing machines to handle uncertainty, vagueness, and imprecision in data and reasoning. This interdisciplinary field encompasses a variety of computational techniques, each with its unique strengths and applications.
This comprehensive textbook, Principles of Soft Computing Using Python Programming, is designed to provide students, researchers, and practitioners with a solid foundation in the core concepts and techniques of soft computing. With a focus on clarity and accessibility, this book takes you on a journey through the fundamental principles and methods that underpin soft computing.
Chapter 1 – Fundamentals of Soft Computing initiates our exploration, setting the stage by introducing soft computing and distinguishing it from its counterpart, hard computing. It delves into the key characteristics of soft computing and explores its essential components, including fuzzy computing, neural networks, evolutionary computing, machine learning, and other techniques. Engaging exercises at the end of the chapter invite you to apply your newfound knowledge.
Chapter 2 – Fuzzy Computing delves deeper into one of the cornerstone techniques of soft computing. It covers fuzzy sets, operations on fuzzy sets, properties, and more. You will also explore the practical aspects of fuzzy computing, such as membership functions, fuzzification, defuzzification, and the application of fuzzy c-means clustering.
Chapter 3 – Artificial Neural Network introduces the fundamentals of artificial neural networks (ANNs), a powerful tool inspired by the human brain. You will learn about standard activation functions, basic learning rules, and various types of neural network architectures, including feedforward and feedback networks. Engaging exercises will help reinforce your understanding of ANN concepts.
Chapter 4 – Deep Learning delves into the realm of deep neural networks, which have revolutionized fields such as computer vision, natural language processing, and speech recognition. This chapter provides an overview of deep learning techniques, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), and autoencoders.
Chapter 5 – Probabilistic Reasoning explores the world of probability and its applications in soft computing. You will delve into random experiments, random variables, and different perspectives on probability. Bayesian inference, belief networks, Markovian models, and their applications in machine learning are also covered.
Chapter 6 – Population Based Algorithms introduces genetic algorithms and swarm intelligence techniques. You will discover how genetic algorithms work and explore their applications in optimization problems. Additionally, you will dive into swarm intelligence methods, including ant colony optimization (ACO) and particle swarm optimization (PSO), with practical Python code examples.
Chapter 7 – Rough Set Theory delves into the Pawlak Rough Set Model and its applications in information systems, decision rules, and decision tables. You will explore the use of rough sets in various domains such as classification, clustering, medical diagnosis, image processing, and speech analysis.
Chapter 8 – Hybrid Systems concludes our journey by discussing hybrid systems that combine different soft computing techniques, including neuro-genetic systems, fuzzy-neural systems, and fuzzy-genetic systems. You will also explore their applications in medical devices.
Each chapter in this book is carefully structured to provide a clear understanding of the topic, with practical exercises to reinforce your learning. Whether you are a student, researcher, or practitioner, “Principles of Soft Computing Using Python Programming” equips you with the knowledge and skills to tackle complex real-world problems using the power of soft computing. So, let us embark on this enlightening journey through the world of soft computing.
16 October 2023
Dr. Gypsy Nandi
Guwahati, Assam, India
Soft computing is a vital tool for performing several computing operations. It uses one or more computational models or techniques to generate optimum outcomes. To understand this concept, let us first clarify the idea of computation. In any computation, inputs are fed into a computing model, operations are performed on them, and results are produced accordingly. In the context of computing, the input provided for computation is called an antecedent, and the output generated is called the consequence. Figure 1.1 illustrates the basics of any computing operation, where computing is done using a control action (a series of steps or actions). In this example, the control action is stated as p = f(q), where "q" is the input, "p" is the output, and "f" is the mapping function, which can be any formal method or algorithm for solving a problem.
Hence, computing is essentially a mapping function that solves a problem by producing an output based on the input provided. The control action for computing should be precise and definite so as to provide an accurate solution for a given problem.
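The mapping view of computing described above can be sketched in a few lines of Python. The temperature-conversion function here is a hypothetical stand-in for any formal method or algorithm f:

```python
# Computing as a mapping p = f(q): the antecedent q is mapped to the
# consequence p by a precise, definite control action f.
# Illustrative example: f converts a temperature q in Celsius to Fahrenheit.

def f(q):
    """A formal, deterministic mapping from input to output."""
    return q * 9 / 5 + 32

p = f(100)   # antecedent q = 100 -> consequence p = 212.0
print(p)     # 212.0
```

Because f is precise and definite, the same antecedent always yields the same consequence, which is exactly the property that characterizes hard computing in the discussion that follows.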
It has often been observed that no fixed solution can be found for a computationally hard task. In such cases, a precisely stated analytical model may fail to produce precise results. Here the soft computing approach can be used, as it does not require a fixed mathematical model for problem solving. In fact, the uniqueness and strength of soft computing lie in its ability to fuse two or more soft computing models or techniques to generate optimum results.
The concept of soft computing was introduced by Prof. Lotfi A. Zadeh (University of California, USA) in 1981. Soft computing, as described by Prof. Zadeh, is "a collection of methodologies that aim to exploit the tolerance for imprecision and uncertainty to achieve tractability, robustness, and low solution cost." Prof. Zadeh also emphasized that "soft computing is likely to play an increasingly important role in many application areas, including software engineering. The role model for soft computing is the human mind." Soft computing mimics the notable ability of the human mind to reason and make decisions in an environment of improbability and imprecision. The principal components of soft computing include fuzzy logic, neurocomputing, and probabilistic reasoning (PR).
Figure 1.1 Basic concepts of computing.
If you are wondering in which areas soft computing is being used in our day-to-day lives, the simplest and most common examples include kitchen appliances (rice cookers, microwaves, etc.) and home appliances (washing machines, refrigerators, etc.). Soft computing also finds its dominance in gaming (chess, poker, etc.), as well as in robotics work. Prominent research areas such as data compression, image/video recognition, speech processing, and handwriting recognition are some of the popular applications of soft computing.
If we consider computing from the perspective of computer science, it refers to a task that can be accomplished using computers. Such computing may require certain software or hardware systems to accomplish the task(s) and derive a certain outcome or output. To understand this easily, let us take a simple example of a self-driving car (named, say, Ziva). The car Ziva is instructed to start moving (say, from point A) and to arrive at a destination point B. To accomplish this task, two possible cases can be considered, as discussed below:
Case A: The car Ziva uses a software program to make movement decisions. The path coordinates are already included in the software program, with the help of which Ziva can take a predefined path to its destination. Now suppose that, while moving, Ziva encounters an obstacle in the path. In such a case, the software program can direct it to move to the right, to the left, or to turn back. The self-driving car is not modeled to identify the nature and complexity of the obstacle so as to make a meaningful and proper decision. In this situation, the computation model used for the car is deterministic in nature, and the output is also concrete. Undoubtedly, there is less complexity in solving the problem, but the output is always fixed due to the rigidity of the computation method.
Case B: The car Ziva uses a software program to make movement decisions. In this case, however, the program is more complex than the one defined in Case A, because the car is involved in much more complex decision-making. Ziva can now mimic the human brain in making decisions when any kind of obstacle is met during its travel.
Ziva first assesses the type of the obstacle, then decides whether it can overcome the obstacle by any means, and finally checks whether an alternative path can be chosen instead of overcoming the obstacle on the current path. The decision to be taken by Ziva is not very crisp and precise, as there are many alternative solutions that can be followed to reach destination point B. For example, if the obstacle is a small stone, Ziva can simply climb over the stone and continue on the same path, as this leads to a computationally less expensive solution. However, if the obstacle is a big rock, Ziva may choose another path to reach the destination point.
Case C: Now, let us consider Case C, in which the software program lets the self-driving car reach its destination by initially listing all the possible paths from source A to destination B. For each available path, the cost of traveling is calculated, and the paths are sorted accordingly to find the fastest. Finally, the optimum path is chosen, considering the minimum cost as well as the avoidance of any major obstacle. Case C thus combines Case A and Case B, inheriting approaches from both, and adds functionality to tackle complex scenarios by choosing an optimum decision to finally reach destination point B.
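The Case C strategy just described can be sketched as a short Python fragment. The paths, costs, and obstacle flags below are illustrative values invented for this example, not data from the book:

```python
# Enumerate candidate paths from source A to destination B, filter out any
# path blocked by a major obstacle, and pick the cheapest remaining path.
paths = [
    {"route": ["A", "C", "B"], "cost": 12.5, "major_obstacle": False},
    {"route": ["A", "D", "B"], "cost": 9.0,  "major_obstacle": True},   # big rock
    {"route": ["A", "E", "B"], "cost": 10.2, "major_obstacle": False},
]

viable = [p for p in paths if not p["major_obstacle"]]   # avoid major obstacles
best = min(viable, key=lambda p: p["cost"])              # minimum travel cost
print(best["route"])   # ['A', 'E', 'B']
```

Note that the cheapest path overall (cost 9.0) is rejected because of the obstacle, so the optimum decision balances cost against feasibility, just as Case C requires.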
The above three cases can be summarized (as listed in Figure 1.2) to highlight the points of difference among them. It can be observed that the nature of computation in the three cases is not the same.
Notice that in the first case, the emphasis is on reaching the destination point. As the result is precise and fixed, computation of the Case A type is termed hard computing. In the second case, the interest is in arriving at an approximate result, as a precise result is not guaranteed by this approach. Computation of the Case B type is termed soft computing. The third case inherits the properties of both Case A and Case B, and this type of computing is referred to as hybrid computing. Thus, computing from the perspective of computer science can be broadly categorized as shown in Figure 1.3.
Figure 1.2 Summarization of three varying cases of a self-driving car.
Figure 1.3 Classification of computing (in computer science).
The choice of which type of computing to use relies mainly on the nature of the problem to be solved. Before choosing a computing technique for problem solving, however, we should be clear about the main differences between hard computing and soft computing. Table 1.1 lists a few notable differences between hard computing and soft computing in dealing with real-world problems.
The differences listed in Table 1.1 make clear that soft computing methods are more suitable for solving real-world problems for which ideal models are not available. Applications that may be solved using soft computing techniques include signal processing, robotics control, pattern recognition, business forecasting, speech processing, and many more. Recent research has given a lot of importance to the field of computational intelligence (CI). While traditional artificial intelligence (AI) follows the principle of hard computing, CI follows the principle of soft computing.
As we have seen, soft computing can deal with imprecision, partial truth, and uncertainty, so its applications are varied, ranging from day-to-day applications to various applications related to science and engineering. Some of the dominant characteristics of soft computing are listed in Figure 1.4, and a brief discussion of each of these characteristics is given next:
Table 1.1 Important points of difference between soft computing and hard computing.

Sl. no. | Hard computing | Soft computing
1 | Requires a precisely stated analytical model | Can deal with imprecise models
2 | Often requires a lot of computation time to solve a problem | Can solve a problem in reasonably less time
3 | These techniques commonly use arithmetic, science, and computing | Mostly imitates models from nature
4 | Cannot be used for real-world problems for which an ideal model is not present | Suitable for real-world problems for which an ideal model is not present
5 | Requires full truth to produce an optimum result | Can work with partial truth to produce an optimum result
6 | Needs a precise and accurate environment | Can work in an environment of improbability and imprecision
7 | Programs written using these techniques are deterministic | Developed mainly to get better results for nondeterministic polynomial (NP)-complete problems
8 | Usually, high cost is involved in developing solutions | Low cost is involved in developing solutions
Human expertise: Soft computing utilizes human expertise by framing fuzzy if–then rules as well as conventional knowledge representation for solving real-world problems that may involve some degree of truth or falsehood. In short, where a concrete decision fails to represent a solution, soft computing techniques work best to provide human-like conditional solutions.
Biologically inspired computational models: Computational learning models that follow the neural model of the human brain have been studied and framed for solving complex problems with approximate solutions. A few such popular neural network models include the artificial neural network (ANN)-, convolutional neural network (CNN)-, and recurrent neural network (RNN)-based models. These models are commonly used for solving classification problems, pattern recognition, and sentiment analysis.
Optimization techniques: Nature-inspired optimization techniques for complex problems are often used as soft computing techniques. For example, genetic algorithms (GA) can be used to select the top-N fittest people out of a human population of a hundred people. The selection of the fittest is done using mutation properties inspired by the biological evolution of genes.
Figure 1.4 Soft computing characteristics.
Fault tolerant: Fault tolerance of a computational model indicates the capacity of the model to continue operating without interruption, even if a software or hardware failure occurs. That is, the normal computational process is not affected even if any of the software or hardware components fail.
Goal-driven: Soft computing techniques are considered to be goal-driven. This indicates that emphasis is given more to reaching the goal or destination than to the path taken from the current state to reach the goal. Simulated annealing and GA are good examples of goal-driven soft computing techniques.
Model-free learning: The training models used in soft computing need not be aware in advance of all the states in the environment. Learning takes place in due course of the actions taken in the present state. In other words, there is no teacher who specifies beforehand all the precise actions to be taken per condition or state; the learning algorithm only has a critic that provides feedback on whether the action taken is to be rewarded or punished. The rewards or punishments help in better decision-making for future actions.
Applicable to real-world problems: Most real-world problems are built on uncertainties. In such circumstances, soft computing techniques are often used to construct satisfactory solutions to deal with such real-world problems.
The three principal components of soft computing are fuzzy logic-based computing, neurocomputing, and GA. These three components form the core of soft computing. There are a few other components of soft computing often used for problem solving, such as machine learning (ML), PR, evolutionary reasoning, and chaos theory. Each of these components of soft computing is briefly summarized next, along with an illustrative diagram, as given in Figure 1.5.
While fuzzy computing involves understanding fuzzy logic and fuzzy sets, neural networks include the study of several neural network systems such as artificial neural network (ANN) and CNN. Evolutionary computing (EC) involves a wide range of techniques such as GA and swarm intelligence. Techniques for ML are categorized mainly as supervised learning (SL), unsupervised learning, and reinforcement learning (RL). Soft computing also involves a wide variety of techniques such as chaos theory, PR, and evolutionary reasoning.
The idea of fuzzy logic was first introduced by Dr. Lotfi Zadeh of the University of California at Berkeley in the 1960s. While Boolean logic allows an output of either 0 (false) or 1 (true), with no other acceptable values in between, fuzzy logic is an approach to computing that works on the basis of "degrees of truth" and can consider any value between 0 and 1. That is, fuzzy logic considers 0 and 1 as extreme values of a fact or truth (value "0" represents absolute falsehood, and value "1" represents absolute truth). Any value between 0 and 1 in fuzzy logic indicates one of the various levels or states of truth.
Figure 1.5 Components of soft computing.
Figure 1.6 (a) Boolean (nonfuzzy) and (b) fuzzy logic-based solutions for a problem.
Let us understand this simple concept with the help of an example. For instance, if we consider the question, "Is the XYZ Courier Service Profitable?" the reply to this question can be simply stated as either "Yes" or "No." If only these two close-ended choices are provided, the answer can be taken as value 1 if it is "Yes" or 0 if it is "No." However, what if the profit is not remarkably high, and only a moderate profit is earned from the courier service? If we take a deeper look at the question, there is a possibility that the answer lies in a range between 0 and 1, as the profitability level may be neither totally 100% profitable nor 100% unprofitable. Here, the role of fuzzy logic comes into play, where the values can be considered as fractions (say, neither profit nor loss, i.e., 0.5). Thus, fuzzy logic tries to deal with real-world situations, which consider partial truth as a possible solution to a problem.
Figure 1.6(a) illustrates the two outcomes provided for the question “Is the XYZ Courier Service Profitable?” The solution provided for the question in this case is Boolean logic based, as only two extreme choices are provided for responses. On the other hand, Figure 1.6(b) illustrates the various possibilities of answers that can be provided for the same question “Is the XYZ Courier Service Profitable?” Here, the concept of fuzzy logic is applied to the given question by providing a few possibilities of answers such as “fully unprofitable,” “moderately unprofitable,” “neither profitable nor unprofitable,” “moderately profitable,” and “fully profitable.” The class membership is determined by the fuzzy membership function. As seen in Figure 1.6(b), the membership degree (e.g., 0, 0.25, 0.5, 0.75, and 1) is taken as output value for each response given.
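The fuzzy responses of Figure 1.6(b) can be sketched in Python as a simple mapping from response labels to membership degrees (the labels and the 0/0.25/0.5/0.75/1 values follow the figure; the function name is illustrative):

```python
# Fuzzy degrees of truth for "Is the XYZ Courier Service Profitable?"
# Boolean logic would allow only 0 and 1; fuzzy logic admits the values in between.
profitability = {
    "fully unprofitable": 0.0,
    "moderately unprofitable": 0.25,
    "neither profitable nor unprofitable": 0.5,
    "moderately profitable": 0.75,
    "fully profitable": 1.0,
}

def degree_of_truth(response):
    """Return the fuzzy membership degree for a given response label."""
    return profitability[response]
```

Note that the two Boolean answers survive as the extreme values 0 and 1, while the three intermediate labels capture the partial truths that Boolean logic cannot express.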
One common example of using fuzzy sets in computer science is in the field of image processing, specifically in edge detection. Edge detection is the process of identifying boundaries within an image, which are areas of rapid intensity change. Fuzzy logic can be used to make edge detection more robust and accurate, especially in cases where the edges are not clearly defined. Let us consider a grayscale image where each pixel's intensity value represents its brightness. To detect edges using fuzzy logic, one might define a fuzzy set for "edgeness" that includes membership functions like "definitely an edge," "possibly an edge," and "not an edge." In such a case, the membership functions can be defined as follows:
Definitely an edge: If the intensity difference is high, the pixel is more likely to be on an edge.
Possibly an edge: If the intensity difference is moderate, the pixel might be on an edge.
Not an edge: If the intensity difference is low, the pixel is unlikely to be on an edge.
Using these membership functions, you can assign degrees of membership to each pixel for each of these fuzzy sets. For instance, a pixel with a high-intensity difference would have a high degree of membership in the “definitely an edge” fuzzy set.
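A minimal sketch of these three "edgeness" membership functions in Python, using simple piecewise-linear shapes; the thresholds `low` and `high` (on a 0-255 intensity scale) are illustrative assumptions, not values from the text:

```python
def edgeness_memberships(intensity_diff, low=30, high=100):
    """Assign degrees of membership in three fuzzy 'edgeness' sets
    based on a pixel's local intensity difference."""
    # "Not an edge": full membership below `low`, falling to 0 at `high`.
    not_edge = max(0.0, min(1.0, (high - intensity_diff) / (high - low)))
    # "Definitely an edge": 0 below `low`, rising to full membership at `high`.
    definitely = max(0.0, min(1.0, (intensity_diff - low) / (high - low)))
    # "Possibly an edge": peaks for moderate intensity differences.
    possibly = min(1.0, 2 * min(not_edge, definitely))
    return {"not an edge": not_edge,
            "possibly an edge": possibly,
            "definitely an edge": definitely}
```

For example, a pixel with a high intensity difference (say 120) gets full membership in "definitely an edge," while a moderate difference (say 65) has its highest membership in "possibly an edge" — the same pixel can belong to several fuzzy sets at once, each to a different degree.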
A crisp set, as you may know, is a set with fixed and well-defined boundaries. For instance, if the universal set (U) is the set of all states of India, a crisp set may be the set of all states of North-East India. A crisp set (A) can be represented in two ways, as shown in Equations (1.1) and (1.2):

(1.1) A = {a1, a2, …, an}

(1.2) A = {x | P(x)}

Here, in Equation (1.1), the crisp set "A" consists of a collection of elements ranging from a1 to an. Equation (1.2) shows the other way of representing a crisp set "A," where "A" consists of the collection of values of "x" such that x has the property P(x).
Now, a crisp set can also be represented using a characteristic function, as shown in Equation (1.3):

(1.3) χA(x) = 1 if x ∈ A, and χA(x) = 0 if x ∉ A
A fuzzy set is a generalization of the crisp set. It is a powerful tool to deal with uncertainty and imprecision. It is usually represented by ordered pairs, where the first element of each pair is an element belonging to the set, and the second element is the degree of membership of that element in the set. The membership function value may vary from 0 to 1. Mathematically, a fuzzy set A′ is represented as shown in Equation (1.4):

(1.4) A′ = {(x, μA′(x)) | x ∈ X}

Here, the membership function value μA′(x) indicates the degree of belongingness of x to A′, and "X" indicates the universal set, which consists of elements "x." A membership function can either be a standard function (for example, the Gaussian function) or a user-defined function suited to the problem domain. As this membership function is used to represent the degree of truth in fuzzy logic, its value on the universe of discourse "X" is defined as:

(1.5) μA′: X → [0, 1]

Here, in Equation (1.5), each value of "X" represents an element that is mapped to a value between 0 and 1.
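The contrast between a crisp characteristic function and a fuzzy membership function can be sketched in Python as follows; the set A, the universe X, and the Gaussian parameters below are illustrative choices, not values from the text:

```python
import math

A = {2, 4, 6, 8}  # a crisp set with fixed, well-defined boundaries

def chi_A(x):
    """Characteristic function of the crisp set A: 1 if x belongs to A, else 0."""
    return 1 if x in A else 0

def mu(x, c=5.0, sigma=2.0):
    """A Gaussian membership function on the universe X, with values in [0, 1]."""
    return math.exp(-((x - c) ** 2) / (2 * sigma ** 2))

# A fuzzy set as ordered pairs (element, degree of membership).
X = range(0, 11)
A_fuzzy = {x: round(mu(x), 3) for x in X}
```

Where chi_A returns only the extremes 0 or 1, mu grades every element of the universe with a partial degree of membership, which is exactly the generalization a fuzzy set provides.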
The above explanations lead us to the understanding that a fuzzy set does not have a crisp, clearly defined boundary; rather, it contains elements with only a partial degree of membership. Some of the standard properties of fuzzy sets include the commutative property, associative property, distributive property, transitivity, and idempotent property. A few other properties of fuzzy sets will be discussed in detail in Chapter 2.
Also, there are three standard fuzzy set operators used in fuzzy logic – fuzzy union, fuzzy intersection, and fuzzy complement. In case of complement operation, while a crisp set determines “Who do not belong to the set?,” a fuzzy set determines “How many elements do not belong to the set?” Again, in case of union operation, while a crisp set determines “Which element belongs to either of the set?,” a fuzzy set determines “How much of the element is in either of the set?” Lastly, in case of intersection operation, while a crisp set determines “Which element belongs to both the sets?,” a fuzzy set determines “How much of the element is in both the sets?” These fuzzy operations will also be elaborately discussed in Chapter 2.
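The three standard operators can be sketched using their common max/min/complement definitions (the definitions are standard fuzzy set theory; the example sets A and B are illustrative):

```python
def fuzzy_union(a, b):
    """How much of each element is in either set: max of the memberships."""
    return {x: max(a[x], b[x]) for x in a}

def fuzzy_intersection(a, b):
    """How much of each element is in both sets: min of the memberships."""
    return {x: min(a[x], b[x]) for x in a}

def fuzzy_complement(a):
    """How much of each element is *not* in the set: 1 minus the membership."""
    return {x: 1 - a[x] for x in a}

# Two fuzzy sets over the same universe {x1, x2, x3}.
A = {"x1": 0.2, "x2": 0.7, "x3": 1.0}
B = {"x1": 0.5, "x2": 0.4, "x3": 0.0}
```

For instance, the union gives x1 the membership max(0.2, 0.5) = 0.5, answering "how much of x1 is in either set" rather than the crisp "is x1 in either set."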
Fuzzy logic systems have proved to be extremely helpful in dealing with situations that involve decision-making. As some problems cannot be solved by simply determining whether it is True/Yes or False/No, fuzzy logic is used to offer flexibility in reasoning in order to deal with uncertainty in such a situation. The applications of fuzzy logic are varied, ranging from domestic appliances to automobiles, aviation industries to robotics.
The human brain consists of billions of interconnected neurons. These neurons are cells that use biochemical reactions to receive data, and accordingly process and transmit information. A typical neuron consists of four main parts – dendrites (receptors that receive signals from other neurons), soma (the cell body that sums up all the incoming signals to create input), axon (the area through which neuron signals travel to other neurons when a neuron is fired), and synapses (point of interconnection of one neuron with other neurons). The different parts of a neuron are illustrated in Figure 1.7. A neuron gets fired only if certain conditions are met.
Figure 1.7 Parts of a neuron.
The signals received on each synapse may be of excitatory or inhibitory type. When the excitatory signals exceed the inhibitory signals by a certain quantified threshold value, the neuron gets fired. Accordingly, either positive or negative weights are assigned to signals: a positive weight is assigned to excitatory signals, whereas a negative weight is assigned to inhibitory signals. The weight value indicates the amount of impact of a signal on the excitation of the neuron. The signals, multiplied by their weights across all incoming synapses, are summed up to get a final cumulative value. If this value exceeds the threshold, the neuron is excited. This biological model has been mathematically formulated to accomplish optimal solutions to different problems and is technically termed an "Artificial Neural Network (ANN)." ANN has been applied in a large number of applications such as pattern matching, pattern completion, classification, optimization, and time-series modeling.
A simple example of an ANN is given in Figure 1.8. The nodes in ANN are organized in a layered structure (input layer, hidden layer, and output layer) in which each signal is derived from an input and passes via nodes to reach the output. Each black circular structure in Figure 1.8 represents a single neuron. The simplest artificial neuron can be considered to be the threshold logic unit (TLU). The TLU operation performs a weighted sum of its inputs and then outputs either a “0” or “1.” An output of “1” occurs if the sum value exceeds a threshold value and a “0” otherwise. TLU thus models the basic “integrate-and-fire” mechanism of real neurons.
The basic building block of every ANN is the artificial neuron. At the entrance section of an artificial neuron, inputs are assigned weights. For this, every input value is multiplied by an individual weight (Figure 1.9). In the middle section of the artificial neuron, a sum function is evaluated to find the sum of all the weighted inputs and bias. Next, toward the exit of the artificial neuron, the calculated sum value is passed through an activation function, also called a transfer function.
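The TLU described above can be sketched directly in Python: a weighted sum of the inputs plus a bias, passed through a step activation function (the weights and threshold below are illustrative):

```python
def tlu(inputs, weights, bias=0.0, threshold=0.0):
    """Threshold logic unit: fire (output 1) if the weighted sum of the
    inputs plus the bias exceeds the threshold, else output 0."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if total > threshold else 0

# With weights (1, 1) and threshold 1.5, the TLU behaves like a logical AND:
# it fires only when both inputs are active.
for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, tlu(pair, (1, 1), threshold=1.5))
```

This captures the basic "integrate-and-fire" mechanism: the sum function in the middle section, followed by the step-shaped activation (transfer) function at the exit.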
Figure 1.8 A simple example of neural network.
Figure 1.9 The design of an artificial neuron.
ANN provides a simplified model of the network of neurons that occurs in the human or animal brain. ANN was initially developed with the sole purpose of solving problems in the same way that a human or animal brain does. However, continued research on ANN has led it to deviate from biology toward solving several challenging tasks such as speech recognition, medical diagnosis, computer vision, and social network filtering.
EC is a distinct subfield of soft computing that has gained wide popularity in the past decade in various areas of research related to natural evolution. In natural evolution, an environment hosts a population of individuals that struggle for survival and strive for reproduction. The fitness of each individual decides its probability of being able to survive in a given environment. Evolutionary algorithms (EA) follow a heuristic-based approach to problem solving, as the problems they address typically cannot be solved in polynomial time. Many variants of EC have evolved over time, and each variant is suited to more specific types of problems and data structures. At times, two or more EA are applied together for problem solving in order to generate better results. This makes EC very popular in computer science, and a lot of research is carried out in this area.
Figure 1.10 Basic steps of evolutionary algorithms.
In general, EA mimic the behavior of biological species based on Darwin's theory of evolution and natural selection mechanism. The four main steps involved in EA include – initialization, selection, use of genetic operators (crossover and mutation), and termination. Each of these chronological steps makes an important contribution to the process of natural selection and also provides easy ways to modularize implementations of EA. The four basic steps of EA are illustrated in Figure 1.10, which begins with the initialization process and ends with the termination process.
The initialization step of EA helps in creating an initial population of solutions. The initial population is either created randomly or created considering the ideal condition(s). Once the population is created in the first step, the selection step is carried out to select the top-N population members. This is done using a fitness function that can accurately select the right members of the population. The next step involves use of two genetic operators – crossover and mutation – to create the next generation of population. Simply stated, these two genetic operators help in creating new offspring from the given population by introducing new genetic material into the new generation. Lastly, the EA involve the termination step to end the process. The termination step occurs in either of the cases – the algorithm has reached some maximum runtime, or the algorithm has reached some threshold value based on performance.
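The four steps above can be sketched as a minimal EA loop in Python, here maximizing the toy objective f(x) = -(x - 3)^2; the population size, mutation scale, and generation count are illustrative assumptions:

```python
import random

def fitness(x):
    """Toy fitness function: maximized at x = 3."""
    return -(x - 3) ** 2

random.seed(0)
# Step 1 - initialization: a random population of candidate solutions.
population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(50):  # Step 4 - termination: fixed maximum runtime
    # Step 2 - selection: keep the top-N fittest members.
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    # Step 3 - genetic operators: crossover (averaging) and mutation (noise).
    children = []
    while len(children) < 10:
        p1, p2 = random.sample(parents, 2)
        children.append((p1 + p2) / 2 + random.gauss(0, 0.5))
    population = parents + children

best = max(population, key=fitness)
```

Because the parents are carried over unchanged, the best solution never degrades between generations; after 50 generations the population clusters near the optimum at x = 3.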
Independent research work on EA led to the development of five main streams of EA, namely, evolutionary programming (EP), evolution strategies (ES), swarm intelligence, GA, and differential evolution (DE) (as shown in Figure 1.11). Each of these subareas of EA is briefly discussed later in this section.
Evolutionary programming: The concept of EP was originally conceived by Lawrence J. Fogel in the early 1960s. It is a stochastic optimization strategy similar to GA. However, EP studies the behavioral linkage between parents and offspring, while GA applies genetic operators (such as crossover operators) to produce better offspring from given parents. EP usually involves four main steps, as listed below; Steps 2 and 3 are repeated either until a threshold number of iterations is exceeded or until an adequate solution for the given problem is obtained:
Figure 1.11 Main families of evolutionary algorithms.
Step 1: An initial population of trial solutions is chosen at random.
Step 2: Each solution is replicated into a new population, and each of these offspring solutions is mutated.
Step 3: Each offspring solution is assessed by computing its fitness.
Step 4: Terminate.
The three common variations of EP include the Classical EP (uses Gaussian mutation for mutating the genome), the Fast EP (uses the Cauchy distribution for mutating the genome), and the Exponential EP (uses the double exponential distribution as the mutation operator). A few of the common application areas of EP include path planning, traffic routing, game learning, cancer detection, military planning, combinatorial optimization, and hierarchical system design.
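A minimal sketch of classical EP (Gaussian mutation, no crossover) on the toy problem of minimizing f(x) = x^2; the population size, mutation scale, and iteration count are illustrative assumptions:

```python
import random

random.seed(1)
# Step 1: an initial population of trial solutions chosen at random.
pop = [random.uniform(-5, 5) for _ in range(10)]

for _ in range(100):
    # Step 2: replicate each solution and mutate the copy (Gaussian mutation,
    # as in classical EP - no crossover operator is used).
    offspring = [x + random.gauss(0, 0.3) for x in pop]
    # Step 3: assess fitness (lower x*x is fitter) and keep the best half
    # of parents plus offspring.
    combined = sorted(pop + offspring, key=lambda x: x * x)
    pop = combined[:10]

# Step 4: terminate after a fixed number of iterations.
best = pop[0]
```

Since parents compete with their own mutated offspring for survival, the best solution can only improve from one iteration to the next, drifting toward the optimum at 0.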
Evolution strategies: Evolution strategies (ES) is yet another optimization technique and an instance of an evolutionary algorithm. The concept of ES was proposed in 1964 by three students at the Technical University of Berlin: Bienert, Rechenberg, and Schwefel. ES is also inspired by the theory of evolution; in fact, it is inspired mainly by the species-level processes of evolution (phenotype, heredity, and variation). The main aim of the ES algorithm is to maximize the fitness of a group of candidate solutions in the context of an objective function from a domain. ES usually involves six main steps, as listed below, of which Steps 2–5 are repeated until convergence:
Step 1: Randomly choose an initial population of n individuals.
Step 2: Create n parameter vectors θ1, θ2, …, θn by adding Gaussian noise to the best parameter (θ is the parameter vector).
Step 3: Evaluate the objective function for all the parameters, and select the top-N best-performing parameters (elite parameters).
Step 4: Find the best parameter (best parameter = mean of the top-N elite parameters).
Step 5: Decay (minimize) the noise by some factor.
Step 6: Terminate.
Mutation and selection are two main operations performed in evolution strategies. These two operations are applied continuously until a termination criterion is met. While selection operation is deterministic and based on fitness rankings, mutation is performed by adding a normally distributed random value to each vector component. The simplest case of ES can be a population of size two – the parent (current point) and the result of its mutation. In such a case, if the fitness of the mutant is either equal or better than the fitness of the parent, the mutant then becomes the parent for the next iteration. If not, the mutant is discarded.
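The six ES steps above can be sketched in Python for a one-dimensional toy objective, maximizing f(θ) = -(θ - 4)^2; the population size, elite count, noise scale, and decay factor are illustrative assumptions:

```python
import random

def objective(theta):
    """Toy objective: maximized at theta = 4."""
    return -(theta - 4) ** 2

random.seed(2)
best_theta, noise = 0.0, 1.0          # Step 1: initial parameter and noise scale

for _ in range(60):                   # Step 6: terminate after a fixed budget
    # Step 2: create candidate parameters by adding Gaussian noise to the best.
    candidates = [best_theta + random.gauss(0, noise) for _ in range(20)]
    # Step 3: evaluate and keep the top-N elite parameters.
    elite = sorted(candidates, key=objective, reverse=True)[:5]
    # Step 4: the new best parameter is the mean of the elites.
    best_theta = sum(elite) / len(elite)
    # Step 5: decay the noise by some factor.
    noise *= 0.95
```

The decaying noise mirrors the transition from exploration (large mutations early on) to exploitation (small refinements near the optimum).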
Swarm intelligence: Swarm intelligence (SI) algorithms are a special type of EA. The concept of SI was first introduced by Beni and Wang in "Swarm Intelligence in Cellular Robotic Systems." SI algorithms adapt the concept of the behavior of swarms. A swarm is a dense group of homogeneous agents that coordinate among themselves and with the environment for an interesting, combined clustered behavior to emerge. The term swarm is mostly used in biology to describe the coordinated behavior of a group of animals, fish, or birds. For example, a colony of ants marching together in search of food is a remarkable example of clustered behavior found in the biological environment. SI algorithms mainly include particle swarm optimization (PSO), ant colony optimization (ACO), and artificial bee colony (ABC).
Particle swarm optimization: Particle swarm optimization (PSO) is a nature-inspired, population-based stochastic optimization technique developed by Kennedy and Eberhart in 1995. PSO algorithms mimic the social behavior of animals, such as fish schooling and bird flocking, in which fish or birds move collectively to solve a task. The PSO technique works on the same principle as the foraging behavior of biological species. PSO is easy to implement, as it requires the adjustment of only a few parameters. This is why PSO has been successfully applied in many application areas, such as the traveling salesman problem, the scheduling problem, the sequential ordering problem, and the vehicle routing problem.
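A minimal one-dimensional PSO sketch in its common global-best form, minimizing the toy function f(x) = (x - 2)^2; the swarm size, inertia weight, and acceleration coefficients are common illustrative choices, not values from the text:

```python
import random

def f(x):
    """Toy objective to minimize: optimum at x = 2."""
    return (x - 2) ** 2

random.seed(3)
n, w, c1, c2 = 15, 0.7, 1.5, 1.5      # swarm size, inertia, acceleration coeffs
pos = [random.uniform(-10, 10) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]                        # each particle's personal best position
gbest = min(pos, key=f)               # the swarm's global best position

for _ in range(80):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        # Velocity update: inertia plus pulls toward the personal and global bests.
        vel[i] = (w * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=f)
```

Only three coefficients (w, c1, c2) needed tuning here, which illustrates why PSO is considered easy to implement.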
Ant colony optimization: Ant colony optimization (ACO) is a population-based metaheuristic mainly used to find an approximate solution to a given challenging optimization problem. ACO uses a set of software agents called artificial ants, which help find good solutions to a given optimization problem. Figure 1.12 shows how a colony of ants marches together to reach a food source. If an obstacle is met for the first time along the way, the ants divide among themselves to travel in both directions. However, the shorter route is noted and followed by the rest of the ants based on the concentration of deposited pheromone (a chemical substance produced and released into the environment).
Figure 1.12 Colony of ants marching toward food source.
The pheromone smell deposited by the ant on the pathway provides an indication to the other worker ants about the presence of food in a nearby area. The pheromone trails help create indirect communication between all the nearby ants, which helps in finding the shortest path between the food source and the nest.
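The positive feedback in Figure 1.12 can be sketched as a toy simulation: ants repeatedly choose between two routes around an obstacle, pheromone evaporates a little after each ant, and a deposit inversely proportional to route length accumulates faster on the shorter route. All the constants below are illustrative assumptions:

```python
import random

random.seed(4)
lengths = {"short": 1.0, "long": 2.0}     # route lengths around the obstacle
pheromone = {"short": 1.0, "long": 1.0}   # both routes start equally attractive
evaporation = 0.1

for _ in range(200):  # 200 ants travel one after another
    # Each ant picks a route with probability proportional to its pheromone.
    total = pheromone["short"] + pheromone["long"]
    route = "short" if random.random() < pheromone["short"] / total else "long"
    # Pheromone evaporates on both routes...
    for r in pheromone:
        pheromone[r] *= (1 - evaporation)
    # ...and the chosen route receives a deposit inversely proportional
    # to its length (shorter trips are completed more often).
    pheromone[route] += 1.0 / lengths[route]
```

Because the shorter route receives larger deposits, its pheromone concentration grows, which in turn attracts more ants: the indirect communication through pheromone trails converges on the shortest path.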
Artificial bee colony:
Artificial bee colony (ABC) is a computing technique based on the intelligent foraging behavior of honey bee swarms. The concept was first proposed by Dervis Karaboga of Erciyes University in 2005. The ABC model considers three types of bees: the employed bees, the onlooker bees, and the scout bees. Usually, only one artificial employed bee is assigned to a food source. The artificial employed bee targets the food source and performs a dance after returning to its hive. Once the target food source is exhausted, the employed bee becomes a scout and starts hunting for a new food source. The role of the onlookers is to watch the dance of the employed bees and choose food sources based on the performance of the dances. The onlookers and the scouts are considered unemployed bees.
ABC accomplishes the task of hunting for food through social cooperation. In the ABC problem, each food source signifies a possible solution to the optimization problem, and the amount of nectar in a food source decides the quality or fitness of the given solution. In fact, the quality of a food source depends on many factors, such as the amount of food available, the ease of extracting its nectar, and its distance from the nest. Depending on the number of food sources, the same number of employed bees is chosen to solve a problem. It is the role of the employed bees to carry the information about the quality of the food sources and share this information with the other bees.
The unemployed bees also play an active role in the food hunt. One type of unemployed bee is the scout, which explores the environment near the nest in search of food. The other type of unemployed bee is the onlooker, which waits in the nest for information about the quality of food sources from the employed bees and uses it to identify the better food sources. Communication among bees about the quality of food sources takes place through the famous "waggle dance" of honey bees. This exchange of information among the three types of bees is the most vital occurrence in the formation of collective knowledge.
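A much-simplified ABC sketch minimizing the toy function f(x) = x^2: employed bees refine food sources (candidate solutions), onlookers favor sources with more "nectar" (better fitness) based on the dance information, and sources that repeatedly fail to improve are abandoned to scouts. The colony size, abandonment limit, and neighborhood radius are illustrative assumptions:

```python
import random

random.seed(5)

def f(x):
    """Toy objective to minimize."""
    return x * x

def nectar(x):
    """Fitness of a food source: more nectar means a better solution."""
    return 1.0 / (1.0 + f(x))

N, LIMIT = 6, 10
sources = [random.uniform(-5, 5) for _ in range(N)]  # one source per employed bee
trials = [0] * N                                     # failed-improvement counters
gbest = min(sources, key=f)                          # best source found so far

def try_improve(i):
    """A bee searches near source i; reset its trial counter on success."""
    global gbest
    cand = sources[i] + random.uniform(-1, 1)
    if f(cand) < f(sources[i]):
        sources[i], trials[i] = cand, 0
        gbest = min(gbest, cand, key=f)
    else:
        trials[i] += 1

for _ in range(100):
    for i in range(N):                               # employed bee phase
        try_improve(i)
    weights = [nectar(s) for s in sources]           # onlooker phase: the "dance"
    for i in random.choices(range(N), weights=weights, k=N):
        try_improve(i)
    for i in range(N):                               # scout phase
        if trials[i] > LIMIT:
            sources[i], trials[i] = random.uniform(-5, 5), 0
```

The onlookers' weighted choice is what biases the colony's effort toward the richer food sources, while the scouts keep the search from stagnating on exhausted ones.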
Genetic algorithms:
The concept of genetic algorithms (GA) was proposed by John Holland in the 1960s; Holland, along with his colleagues and students, developed it further at the University of Michigan in the 1960s and 1970s. A genetic algorithm is a metaheuristic inspired by Charles Darwin's theory of natural evolution. GA belong to the larger class of EA and emphasize selecting the fittest individuals for reproduction in order to produce offspring. The generated offspring inherit the characteristics of their parents and are therefore expected to have better fitness if the parents have good fitness values. Such offspring, in turn, have a better chance of survival. If this process is repeated multiple times, at some point a generation of the fittest individuals will be formed.
There are basically five main phases of GA (as illustrated in Figure 1.13): population initialization, fitness function calculation, parent selection, crossover, and mutation. Initially, a random population of size “n” consisting of several individual chromosomes is chosen. Next, the fitness value of each of the individual chromosomes is calculated based on a fitness function. The fitness value plays a vital role in the decision-making of the selection of chromosomes for crossover.
In the crossover phase, each pair of selected individual chromosomes is reproduced using a standard crossover operator. This results in the generation of two offspring from each pair of chromosomes. The new offspring generated are then mutated to produce a better set of individual chromosomes in the newly generated population. All five phases of GA are repeated until a termination condition is met. Each iteration of the GA is called a generation, and the entire set of generations is called a run. The final output (result) is the generation of the fittest individuals, which have the greatest chance of survival.
Figure 1.13 Steps followed in genetic algorithms.
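The five GA phases can be sketched on the classic "one-max" toy problem: evolving a population of bit strings toward the all-ones chromosome. The chromosome length, population size, and generation count are illustrative assumptions:

```python
import random

random.seed(6)
L, N = 12, 20  # chromosome length and population size

def fitness(chromosome):
    """Phase 2 - fitness function: count of 1-bits (maximum is L)."""
    return sum(chromosome)

# Phase 1 - population initialization: N random bit strings.
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]

for generation in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:N // 2]                   # Phase 3 - parent selection
    children = []
    while len(children) < N // 2:
        p1, p2 = random.sample(parents, 2)
        cut = random.randint(1, L - 1)       # Phase 4 - one-point crossover
        child = p1[:cut] + p2[cut:]
        i = random.randrange(L)              # Phase 5 - mutation: flip one bit
        child[i] = 1 - child[i]
        children.append(child)
    pop = parents + children                 # next generation

best = max(pop, key=fitness)
```

Because the parents survive into the next generation, the best fitness never decreases across generations; over the run, crossover combines good partial chromosomes while mutation keeps introducing fresh genetic material.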
Differential evolution: Differential evolution (DE) is a common evolutionary algorithm inspired by Darwin's theory of evolution and has been studied widely across diverse areas of optimization applications since its inception by Storn and Price in the 1990s. The various steps involved in DE include population initialization, mutation, crossover, selection, and result generation (illustrated in
Figure 1.14