Simulation and Analysis of Mathematical Methods in Real-Time Engineering Applications

Description

Written and edited by a group of international experts in the field, this exciting new volume covers the state of the art of real-time applications of computer science using mathematics. It highlights the security, privacy, artificial intelligence, and practical approaches needed by engineers and scientists in all fields of science and technology, and presents current research intended to advance not only mathematics but all areas of science, research, and development where these disciplines intersect. Because the book focuses on emerging concepts in machine learning and artificial intelligence algorithmic approaches and soft computing techniques, it is an invaluable tool for researchers, academicians, data scientists, and technology developers. The newest and most comprehensive volume on mathematical methods for use in real-time engineering, this groundbreaking work is a must-have for any engineer's or scientist's library. Also useful as a textbook, it is both a working handbook for the new hire or student and a reference for the veteran engineer.


Page count: 458

Year of publication: 2021




Table of Contents

Cover

Title page

Copyright

Preface

Acknowledgments

1 Certain Investigations on Different Mathematical Models in Machine Learning and Artificial Intelligence

1.1 Introduction

1.2 Mathematical Models of Classification Algorithm of Machine Learning

1.3 Mathematical Models and Covid-19

1.4 Conclusion

References

2 Edge Computing Optimization Using Mathematical Modeling, Deep Learning Models, and Evolutionary Algorithms

2.1 Introduction to Edge Computing and Research Challenges

2.2 Introduction for Computational Offloading in Edge Computing

2.3 Mathematical Model for Offloading

2.4 QoS and Optimization in Edge Computing

2.5 Deep Learning Mathematical Models for Edge Computing

2.6 Evolutionary Algorithm and Edge Computing

2.7 Conclusion

References

3 Mathematical Modelling of Cryptographic Approaches in Cloud Computing Scenario

3.1 Introduction to IoT

3.2 Data Computation Process

3.3 Data Partition Process

3.4 Data Encryption Process

3.5 Results and Discussions

3.6 Overview and Conclusion

References

4 An Exploration of Networking and Communication Methodologies for Security and Privacy Preservation in Edge Computing Platforms

Introduction

4.1 State-of-the-Art Edge Security and Privacy Preservation Protocols

4.2 Authentication and Trust Management in Edge Computing Paradigms

4.3 Key Management in Edge Computing Platforms

4.4 Secure Edge Computing in IoT Platforms

4.5 Secure Edge Computing Architectures Using Block Chain Technologies

4.6 Machine Learning Perspectives on Edge Security

4.7 Privacy Preservation in Edge Computing

4.8 Advances of On-Device Intelligence for Secured Data Transmission

4.9 Security and Privacy Preservation for Edge Intelligence in Beyond 5G Networks

4.10 Providing Cyber Security Using Network and Communication Protocols for Edge Computing Devices

4.11 Conclusion

References

5 Nature Inspired Algorithm for Placing Sensors in Structural Health Monitoring System - Mouth Brooding Fish Approach

5.1 Introduction

5.2 Structural Health Monitoring

5.3 Machine Learning

5.4 Approaches of ML in SHM

5.5 Mouth Brooding Fish Algorithm

5.6 Case Studies On OSP Using Mouth Brooding Fish Algorithms

5.7 Conclusions

References

6 Heat Source/Sink Effects on Convective Flow of a Newtonian Fluid Past an Inclined Vertical Plate in Conducting Field

6.1 Introduction

6.2 Mathematical Formulation and Physical Design

6.3 Discussion of Findings

6.4 Conclusion

References

7 Application of Fuzzy Differential Equations in Digital Images Via Fixed Point Techniques

7.1 Introduction

7.2 Preliminaries

7.3 Applications of Fixed-Point Techniques

7.4 An Application

7.5 Conclusion

References

8 The Convergence of Novel Deep Learning Approaches in Cybersecurity and Digital Forensics

8.1 Introduction

8.2 Digital Forensics

8.3 Biometric Analysis of Crime Scene Traces of Forensic Investigation

8.4 Forensic Data Analytics (FDA) for Risk Management

8.5 Forensic Data Subsets and Open-Source Intelligence for Cybersecurity

8.6 Recent Detection and Prevention Mechanisms for Ensuring Privacy and Security in Forensic Investigation

8.7 Adversarial Deep Learning in Cybersecurity and Privacy

8.8 Efficient Control of System-Environment Interactions Against Cyber Threats

8.9 Incident Response Applications of Digital Forensics

8.10 Deep Learning for Modeling Secure Interactions Between Systems

8.11 Recent Advancements in Internet of Things Forensics

References

9 Mathematical Models for Computer Vision in Cardiovascular Image Segmentation

9.1 Introduction

9.2 Cardiac Image Segmentation Using Deep Learning

9.3 Proposed Method

9.4 Algorithm Behaviors and Characteristics

9.5 Computed Tomography Cardiovascular Data

9.6 Performance Evaluation

9.7 Conclusion

References

10 Modeling of Diabetic Retinopathy Grading Using Deep Learning

10.1 Introduction

10.2 Related Works

10.3 Methodology

10.4 Dataset

10.5 Results and Discussion

10.6 Conclusion

References

11 Novel Deep-Learning Approaches for Future Computing Applications and Services

11.1 Introduction

11.2 Architecture

11.3 Multiple Applications of Deep Learning

11.4 Challenges

11.5 Conclusion and Future Aspects

References

12 Effects of Radiation Absorption and Aligned Magnetic Field on MHD Casson Fluid Past an Inclined Vertical Porous Plate in Porous Media

12.1 Introduction

12.2 Physical Configuration and Mathematical Formulation

12.3 Discussion of Result

12.4 Conclusion

References

13 Integrated Mathematical Modelling and Analysis of Paddy Crop Pest Detection Framework Using Convolutional Classifiers

13.1 Introduction

13.2 Literature Survey

13.3 Proposed System Model

13.4 Paddy Pest Database Model

13.5 Implementation and Results

13.6 Conclusion

References

14 A Novel Machine Learning Approach in Edge Analytics with Mathematical Modeling for IoT Test Optimization

14.1 Introduction: Background and Driving Forces

14.2 Objectives

14.3 Mathematical Model for IoT Test Optimization

14.4 Introduction to Internet of Things (IoT)

14.5 IoT Analytics

14.6 Survey on IoT Testing

14.7 Optimization of End-User Application Testing in IoT

14.8 Machine Learning in Edge Analytics for IoT Testing

14.9 Proposed IoT Operations Framework Using Machine Learning on the Edge

14.10 Expected Advantages and Challenges in Applying Machine Learning Techniques in End-User Application Testing on the Edge

14.11 Conclusion

References

Index

End User License Agreement

Guide

Cover

Table of Contents

Title page

Copyright

Preface

Acknowledgments

Begin Reading

Index

End User License Agreement

List of Tables

Chapter 1

Table 1.1 Accuracy of classifiers.

Chapter 2

Table 2.1 Existing studies using deep learning in edge.

Chapter 4

Table 4.1 Protocols and their features.

Chapter 8

Table 8.1 Performance of biometrics in forensic investigation.

Table 8.2 List of datasets for various biometric identities.

Chapter 9

Table 9.1 Acronyms used in the chapter.

Table 9.2 Comparison of algorithms.

Chapter 10

Table 10.1 Data type for attributes of dataset.

Table 10.2 Statistical description of dataset.

Table 10.3 Correlation between attributes in dataset.

Table 10.4 Dataset sample.

Table 10.5 Comparison of the evaluation results.

Chapter 11

Table 11.1 Different deep learning architectures and their applications.

Chapter 12

Table 12.1 Skin friction (τ).

Table 12.2 Nusselt number (Nu).

Table 12.3 Sherwood number (Sh).

Chapter 13

Table 13.1 Sensors and their methodologies.

Table 13.2 Pest of rice – sample dataset.

Table 13.3 Gall midge – GLCM features.

Table 13.4 Classification accuracy for paddy insect with SIFT features.

Chapter 14

Table 14.1 Test cases generated for each of the scenarios.

Table 14.2 Comparison of end-user application testing at the edge with ML and ot...


Scrivener Publishing, 100 Cummings Center, Suite 541J, Beverly, MA 01915-6106

Modern Mathematics in Computer Science

Series Editors: Hanaa Hachimi, PhD, G. Suseendran, PhD, and Noor Zaman, PhD

Scope: The idea of a series of books on modern math methods used in computer science was conceived to address the great demand for information about today’s emerging computer science technologies. Modern math methods, including algorithms, encryption, security, communication, machine learning, artificial intelligence, and other math-based advanced concepts, form the backbone of these technologies and are crucial to them. Modern math plays a vital role in computing technologies by enhancing communication and computing and by extending security through different encryption algorithms. The ever-increasing demand for data storage capacity, from gigabytes to petabytes and beyond, imposes requirements that modern math can help meet.

Empirical studies, theoretical and numerical analysis, and novel research findings are included in this series. The information highlighted in this series encourages cross-fertilization of ideas concerning the application of modern math methods in computer science.

Publishers at Scrivener
Martin Scrivener ([email protected])
Phillip Carmical ([email protected])

Simulation and Analysis of Mathematical Methods in Real-Time Engineering Applications

Edited by

T. Ananth Kumar,

E. Golden Julie,

Y. Harold Robinson,

and

S. M. Jaisakthi

This edition first published 2021 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA

© 2021 Scrivener Publishing LLC

For more information about Scrivener publications please visit www.scrivenerpublishing.com.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

Wiley Global Headquarters

111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Limit of Liability/Disclaimer of Warranty

While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials, or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.

Library of Congress Cataloging-in-Publication Data

ISBN 9781119785378

Cover image: Mathematical - Jevtic | Dreamstime.com

Cover design by Kris Hackerott

Set in 11pt Minion Pro by Manila Typesetting Company, Makati, Philippines

Printed in the USA

10 9 8 7 6 5 4 3 2 1

Preface

This book addresses the primary computational techniques for developing new technologies in terms of soft computing. It also highlights the security, privacy, artificial intelligence, and practical approaches needed in all fields of science and technology, and presents current research intended to advance not only mathematics but all possible areas of science and technology for research and development. Because the book focuses on emerging concepts in machine learning and artificial intelligence algorithmic approaches and soft computing techniques, it will be of use to researchers, academicians, data scientists, and technology developers.

Chapter 1 deals with investigations on different mathematical models in machine learning and artificial intelligence. It starts with a discussion of knowledge-based expert systems, covering primitive representation and primitive inference. This is followed by problem-solving techniques and mathematical models of classification algorithms. The chapter discusses various mathematical algorithms, such as the Markov chain model, automated simulation algorithms, KNN, and SVM, with a comparative analysis of KNN and SVM. Finally, it describes the SEIR model for COVID-19.

Chapter 2 mainly discusses edge computing optimization using mathematical modelling. It covers edge computing architecture, challenges, motivation, and research directions. This is followed by computational offloading in edge computing applications and its classification, along with mathematical schemes such as Markov chain-based schemes and the hidden Markov model, and QoS and optimization. The authors then discuss deep learning mathematical models and evolutionary algorithms in edge computing. Chapter 3 discusses various cryptographic approaches used in cloud computing based on mathematical models. It starts with an introduction to IoT and the cloud, their integration, and applications, followed by a discussion of the data computation and data partition processes. This includes Shamir's Secret Sharing (SS) algorithm for data partitioning, and data encryption using AES algorithms, with results.

Chapter 4 deals with security and privacy preservation in edge computing platforms. It covers key management schemes and secure IoT-based edge computing. To provide maximal security, the authors conduct an extensive exploration of the adoption of blockchain technologies across edge computing networks and of privacy preservation practices. Finally, they explore machine learning approaches and advancements in on-device intelligence in edge computing infrastructures. Chapter 5 is about the Mouth Brooding Fish (MBF) approach for placing sensors in a structural health monitoring system. MBF can handle a wide range of global optimization problems and can be applied to real problems because it is based on a real-world phenomenon. The combined MBF-ILS algorithm improves optimal sensor placement and hence reduces the number of sensors required. Thanks to the ILS component, a proper balance is maintained between the global and local best solutions, which increases the convergence speed of the algorithm.

Chapter 6 mainly deals with the impact of heat source/sink effects on the convective flow of a fluid past an inclined vertical plate in a conducting field. Perturbation techniques are used to solve the fluid velocity, temperature, and concentration equations in terms of dimensionless parameters. The authors then present the mathematical formulation and physical design, and finally discuss their findings with graphs. Chapter 7 focuses on the application of fuzzy differential equations in digital images via fixed point techniques. It begins by discussing the basics of fuzzy logic methods, which seem promising and useful in drug research and design. Digital topology is a developing field that uses objects’ topological properties to describe 2D and 3D digital image features. The fixed-point theorem due to Banach is a valuable method in metric space theory. This chapter applies well-known fixed point theorems, established using the concept of fuzzy numbers, to study the nature of digital images, and determines sufficient conditions for obtaining the desired result.

Chapter 8 discusses novel deep learning approaches in cybersecurity and digital forensics. Digital forensics plays a vital role in solving cybercrime and identifying the proper response to threats that occur in the network. The chapter includes biometric analysis of crime scene traces in forensic investigation. Forensic science holds a major position across informative and scientific domains due to its social impact. A variety of forensic data analytics methods have been proposed by researchers, many concentrating on the domain of physics. Better security can be provided for forensic science through cryptographic algorithms that perform the authentication verification process effectively. Chapter 9 deals with mathematical models for computer vision in cardiovascular image segmentation. It gives a detailed review of the state of the art through practitioner processes and methods. Three popular imaging modalities provide a detailed summary of deep learning strategies, covering a broad spectrum of current deep learning methods designed to segment various cardiac functional structures. For each of the three modalities, the authors highlight the future promise and the existing shortcomings of deep learning-based cardiac segmentation that may impede broad practical adoption. Deep learning-based approaches have made a massive impact on the segmentation of cardiac images but also raise awareness and interpretability problems that demand significant contributions in this area.

Chapter 10 discusses modelling of diabetic retinopathy (DR) grading using deep learning. It contains a thorough introduction to diabetic retinopathy grading and a brief review of related work by various authors. The authors show the application of deep learning to predict DR from retinal images. They propose a hybrid model and present a CNN-LSTM classifier for DR classification using the DRDC dataset. The proposed hybrid model comprises a CNN-LSTM network and has better accuracy: the approach is faster and obtained an accuracy of 98.56% on the DRDC dataset. The training and validation losses of the hybrid model are 0.04 and 0.06, respectively, and the AUC is around 99.9%, demonstrating the reliable performance of the hybrid system. The overall processing time of the proposed hybrid system is around seven minutes.

Chapter 11 describes novel deep-learning approaches for future computing applications and services. After the introduction, the authors discuss architecture, autoencoders, the Convolutional Neural Network (CNN), the hierarchy of layers, and supervised learning as important factors in building a successful learning program. The depth of layers is important for proper monitoring, and the classification of data shows the advantages of maintaining the database. In the current and forthcoming period, deep learning can serve useful security applications through facial recognition and mixed speech recognition. Furthermore, electronic image processing is a research discipline that can be applied in many settings. Chapter 12 gives a full analysis of the magnetic field, chemical, and thermal effects on the free convective motion of a viscous, incompressible, electrically conducting fluid past an inclined plate in a porous medium, where the free stream velocity obeys the exponentially increasing small perturbation law. Skin friction is enhanced by increases in (Gr), (Gc), (Ko), and (α), and is reduced by the effects of (M) and (β). The Nusselt number rises with Ec, while under the influence of (Pr) and (Q) it decreases.

Chapter 13 describes paddy crop cultivation, one of the foremost economic activities of the southern provinces of India. Paddy crops are affected by pest attacks and the diseases the pests cause. The authors propose an efficient pest identification framework based on histogram-gradient feature processing and a deep CNN algorithm with SVM classification for improving paddy crop cultivation. The deep CNN algorithm is used for noise reduction in unclassified pest images to improve classification under a linear SVM. Pests are identified from the de-noised images using a linear SVM classifier with histogram variants embedded with gradient features. Feature descriptors such as SIFT, SURF, and HOG are computed for all classifiers. The proposed methodology is shown to achieve improved classification compared with all other existing algorithms.

Chapter 14 describes edge analytics, which can be defined as tools and algorithms deployed in the internal storage of IoT devices or IoT gateways that collect, process, and analyse data at the deployment site itself, rather than sending the data to the cloud for analytics. The chapter presents novel end-user application testing equipped with ML on the edge of IoT devices, and proposes a novel framework to achieve this. The case study is a real-time one and has been tested successfully using test cases generated on the edge.

Acknowledgments

We are deeply indebted to Almighty God for giving us this opportunity; it was only possible with the grace of God.

We extend our deep sense of gratitude to our son, Master H. Jubin, for his moral support and encouragement at all stages of the successful completion of this book.

We extend our deep sense of gratitude to our scholars and friends for writing their chapters on time and helping to complete this book. We sincerely thank our parents and family members for providing the necessary support.

We express our sincere thanks to the management of Vellore Institute of Technology, Vellore, India and Anna University, Regional Campus, Tirunelveli. Finally, we would like to take this opportunity to specially thank the Wiley Scrivener publisher for their kind help, encouragement, and moral support.

—Y. Harold Robinson, Ph.D.

— E. Golden Julie, Ph.D.

I would like to thank the Almighty for giving me enough mental strength and belief in completing this work successfully. I thank my friends and family members for their help and support. I express my sincere thanks to the management of IFET College of Engineering, Tamilnadu, India. I wish to express my deep sense of gratitude and thanks to Wiley Scrivener publisher for their valuable suggestions and encouragement.

—T. Ananth Kumar, Ph.D.

I express my sincere thanks to the management of Vellore Institute of Technology, Vellore, India. Also, I would like to thank the Wiley Scrivener Press for giving me the opportunity to edit this book.

—S. M. Jaisakthi, Ph.D.

1 Certain Investigations on Different Mathematical Models in Machine Learning and Artificial Intelligence

Ms. Akshatha Y* and Dr. S Pravinth Raja†

Dept. of CSE, Presidency University, Bengaluru, Karnataka, India

Abstract

Artificial Intelligence (AI) is as wide as the other branches of computer science, including computational methods, language analysis, programming systems, and hardware systems. Machine learning algorithms have brought great change to the field of artificial intelligence, supporting the power of human perception in a splendid way. These algorithms fall into different categories, of which the most common is classification. Decision tree, logistic regression, naïve Bayes, support vector machine, boosted tree, random forest, and k-nearest neighbour algorithms all come under classification. The classification process requires a pre-defined method guiding the selection of training data from the user's sample data. Advanced AI programming languages and methodologies can provide high-level frameworks for implementing numerical models and approaches, resulting in computational mechanics codes that are simpler, easier to write, and more adaptable. A range of heuristic search, planning, and geometric reasoning algorithms can provide efficient and comprehensive mechanisms for resolving problems such as shape description and transformation, and constraint-based model representation. Behind every such algorithm lies a strong mathematical model, based on conditional probability. This article analyses the mathematical models and logic behind different classification algorithms that allow users to build the training dataset from which the computer can predict the correct outcome.

Keywords: Artificial intelligence, classification, computation, machine learning

1.1 Introduction

The increasing availability of large computing power in recent years, together with big data and the relevant developments in algorithms, has contributed to an exponential growth in Machine Learning (ML) applications for predictive tasks related to complex systems. In general, by utilizing an appropriately broad dataset of input features coupled to the corresponding predicted outputs, ML automatically constructs a model of the system under analysis. Although automatically learning data models is an extremely powerful approach, the generalization capability of ML models can easily be reduced in the case of complex system dynamics, i.e., the predictions can be incorrect if the model is extrapolated beyond its limits [1]. A collection of AI ideas and techniques has the potential to influence mathematical modelling research. In particular, knowledge-based systems and environments may include representations and associated problem-solving techniques that can be used in model generation and result analysis to encode domain knowledge and domain-specific strategies for a variety of ill-structured problems. Advanced AI programming languages and methodologies may provide high-level frameworks to implement numerical models and solutions, resulting in codes for computational mechanics that are cleaner, easier to write, and more adaptable. A variety of heuristic search, scheduling, and geometric reasoning algorithms may provide efficient and comprehensive mechanisms for addressing issues such as shape definition and transformation, and constraint-based model representation. We study knowledge-based expert systems and problem-solving methods briefly before exploring the applications of AI in mathematical modelling.

1.1.1 Knowledge-Based Expert Systems

Knowledge-based systems are about a decade old as a distinctly separate AI research field. Many changes in the emphasis put on different elements of methodology have been seen in this decade of study. Methodological transition is the most characteristic; the emphasis has changed from application areas and implementation instruments to architectures and unifying concepts underlying a range of tasks for problem-solving. The presentation and analysis were at two levels in the early days of knowledge-based systems: 1) the primitive mechanisms of representation (rules, frames, etc.) and their related primitive mechanisms of inference (forward and backward chaining, inheritance, demon firing, etc.), and 2) the definition of the problem.

A level of description is needed that adequately characterizes what heuristic programmers do and know: a computational characterization of their competence that is independent of both the task domain and the programming language implementation. Recent studies have described many characterizations of generic tasks that exist in a multitude of domains. Generic tasks are characterized by the kind of knowledge they rely on and by their control of problem solving. For expert system architectures, generic tasks constitute higher-level building blocks. Their characteristics form the basis for analysing the content of the knowledge base (completeness, accuracy, etc.), explaining system operations and limitations, and building advanced tools for acquiring knowledge.

1.1.2 Problem-Solving Techniques

Several problem-solving tasks can be formulated as a state-space search. A state space is made up of all the domain states and a set of operators that transform one state into another. The states can best be thought of as nodes in a connected graph, with the operators as edges. Some nodes are designated as goal nodes, and a problem is said to be solved when a path from an initial state to a goal state has been identified. State spaces can get very large, and different search methods are necessary to control the efficiency of the search [7].
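As a minimal, illustrative sketch (not taken from the chapter), a state space can be represented as states plus a successor function, with breadth-first search recovering a path from the initial state to a goal; the numeric states and operators below are hypothetical:

```python
from collections import deque

def bfs(start, is_goal, successors):
    """Breadth-first search over a state space.

    start: initial state; is_goal: predicate on states;
    successors: function mapping a state to its neighbour states.
    Returns a path (list of states) from start to a goal, or None.
    """
    frontier = deque([[start]])   # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy state space: reach 10 from 0 using the operators +3 and +5.
path = bfs(0, lambda s: s == 10, lambda s: [s + 3, s + 5])
print(path)  # → [0, 5, 10]
```

Because breadth-first search expands states level by level, the first goal path found uses the fewest operator applications.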

A) Problem Reduction: To make searching simpler, this strategy transforms the problem space. Examples of problem reduction include: (a) planning in an abstract space with macro operators before getting to the real operator details; (b) means-ends analysis, which tries to reason backwards from a known objective; and (c) sub-goaling.

B) Search Reduction: This approach involves demonstrating that the solution to the problem cannot depend on searching a certain node. There are several reasons why this may be true: (a) There can be no solution in this node’s subtree. This approach has been referred to as “constraint satisfaction” and involves noting that the conditions achievable in the subtree below a node are insufficient to meet any minimum solution requirement. (b) The solution in another direction is superior to any possible solution in the subtree below this node. (c) The node has already been explored elsewhere in the search.

C) Use of domain knowledge: One way to guide the search is to attach additional information to non-goal nodes. This knowledge could take the form of an estimated distance to a goal, operators that can usefully be applied to the node, possible backtracking positions, similarities to other nodes that could be used to prune the search, or some general measure of goodness.

D) Adaptive searching techniques: These strategies use evaluation functions to decide which node to expand next. Some algorithms (A*) expand the node most likely to lie on the optimal solution path; others (B*) expand the node most likely to add the most information to the solution process.
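A* can be sketched as follows; the small grid world, unit step costs, and Manhattan-distance heuristic are illustrative assumptions. The evaluation function f = g + h ranks nodes by cost so far plus an estimate of the cost to go:

```python
import heapq

def a_star(start, is_goal, successors, h):
    """A* search: always expand the node with the lowest f = g + h,
    where g is the cost so far and h is a heuristic estimate to the goal."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path, g
        for nxt, cost in successors(state):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None, float("inf")

# Toy grid: move right/up on a 3x3 grid from (0, 0) to (2, 2), unit cost.
def successors(s):
    x, y = s
    return [((x + dx, y + dy), 1)
            for dx, dy in [(1, 0), (0, 1)]
            if x + dx <= 2 and y + dy <= 2]

manhattan = lambda s: (2 - s[0]) + (2 - s[1])
path, cost = a_star((0, 0), lambda s: s == (2, 2), successors, manhattan)
print(cost)  # 4
```

With an admissible heuristic like Manhattan distance, the first goal popped from the priority queue is guaranteed optimal.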

1.2 Mathematical Models of Classification Algorithm of Machine Learning

In the field of artificial intelligence, machine learning algorithms have brought about a growing change, expressing human discerning power in a splendid manner. There are various types of algorithms, the most common family of which is classification. Logistic regression, the naive Bayes algorithm, decision trees, boosted trees, random forests, the k-nearest neighbour algorithm, and support vector machines all fall under classification algorithms. The classification process involves a predefined method that selects training data from the sample data provided by the user. Decision-making is at the centre for all users, and classification, as supervised learning, proceeds from the decisions of the user.

Machine learning (ML) and deep learning (DL) are popular right now, and for good reason: there is a lot of fascinating work going on there. The hype makes it easy to forget about more tried and tested methods of mathematical modelling, but that does not make those methods any less valuable.

We can look at the landscape in terms of the Gartner Hype Cycle:

Figure 1.1 is a curve that first ramps up to a peak, then falls into a trough and finally rises to a plateau. We think that ML, and DL in particular, is at (or at least very close to) the Peak of Inflated Expectations. Meanwhile, several other methods sit on the Plateau of Productivity. People understand them and use them all the time, but nobody talks about them. They are workhorses. They are still important, though, and we at Manifold understand that. To build effective data products, you have to deploy the full spectrum of available resources, well beyond ML. What does that mean in practice?

Figure 1.1 Gartner hype cycle.

1.2.1 Tried and True Tools

Let’s look at a few of these tried and true tools that continue to be helpful: control theory, signal processing, and mathematical optimization.

Control theory [2], which became its own discipline in the late 1950s, deals with real-time observation, inference, and control of a complex system’s (potentially unobserved) states. It is especially useful when you understand the physics of a system, i.e., when the dynamics are known rather than opaque. This is a big distinction, because ML is really useful when we do not completely understand the underlying physics, such as retail demand behaviour or ad buying on the internet. Consider vehicular motion, which obeys physical laws that we do not need to learn from an ML algorithm; we know how Newton’s equations operate, and we can write down the differential equations that govern a vehicle’s motion. Building ML models to learn this physics would burn through reams of data and compute cycles to learn something that is already understood; it is wasteful. Instead, we can learn something important more quickly by putting the known physics into a state-space model and expressing the assumption in the language of control theory.
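As a minimal sketch of this point (the time step, acceleration value, and constant-acceleration assumption are all illustrative), known kinematics can be written directly as a discrete-time state-space model rather than learned from data:

```python
import numpy as np

# Constant-acceleration vehicle model: state x = [position, velocity].
# Discrete-time state-space form: x[k+1] = A @ x[k] + B * u[k].
dt = 0.1                     # time step in seconds (assumed)
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])

x = np.array([0.0, 0.0])     # start at rest at the origin
u = 2.0                      # constant acceleration input (m/s^2)
for _ in range(100):         # simulate 10 seconds
    x = A @ x + B * u

print(x)  # ~[100, 20]: s = 0.5*a*t^2 = 100 m, v = a*t = 20 m/s
```

No data or training is needed: the matrices A and B encode Newton’s equations exactly, which is the point of preferring a state-space model when the physics is known.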

Signal processing, which deals with the representation and transformation of any signal, from time series to hyperspectral images, is another useful instrument. Classical signal-processing transformations, such as spectrograms and wavelet transforms, also make useful features for ML techniques. Many advances in speech ML currently use these representations as inputs to a deep neural network. At the same time, classical signal-processing filters, such as the Kalman filter, are very effective first solutions to problems, getting you 80% of the way to a solution with 20% of the effort. Furthermore, such techniques are much more interpretable than more advanced DL ones [9].
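As a small illustration of that 80/20 point, here is a one-dimensional Kalman filter estimating a constant value from noisy measurements (the true value, noise variance, and vague prior are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D Kalman filter: estimate a constant value from noisy measurements.
true_value = 5.0
measurements = true_value + rng.normal(0.0, 1.0, size=200)

estimate, variance = 0.0, 1e6   # vague prior on the state
meas_var = 1.0                  # known measurement noise variance
for z in measurements:
    # Update step: blend prediction and measurement by their variances.
    gain = variance / (variance + meas_var)
    estimate += gain * (z - estimate)
    variance *= (1.0 - gain)

print(round(estimate, 2))  # close to 5.0
```

Every quantity here (gain, variance) has a direct statistical interpretation, which is exactly the interpretability advantage over a black-box DL model.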

Mathematical optimization, finally, is concerned with finding optimal solutions to a given objective function. Classical applications include linear programming to optimise product allocation and nonlinear programming to optimise financial portfolio allocations. Advances in DL are partly due to advances in the underlying optimization techniques, such as stochastic gradient descent with momentum, that allow training to escape local minima.
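The momentum update mentioned above can be sketched in a few lines; the quadratic objective and the hyperparameters below are illustrative assumptions:

```python
def minimize_with_momentum(grad, x0, lr=0.1, beta=0.9, steps=300):
    """Gradient descent with momentum: the velocity term accumulates
    past gradients, helping the iterate coast through shallow regions."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)
        x = x + v
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
x_min = minimize_with_momentum(lambda x: 2.0 * (x - 3.0), x0=0.0)
print(round(x_min, 4))  # ≈ 3.0
```

The same update rule, applied per parameter to stochastic mini-batch gradients, is the optimizer behind much of DL training.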

Mathematical optimization, like the other methods, is highly complementary to ML. These instruments do not work against each other; rather, they offer interesting ways of being combined.

1.2.2 Joining Together Old and New

Many active solutions across different fields are used to combine the modern ML/DL environment with conventional mathematical modelling techniques. For instance, you can combine state-space modelling techniques with ML in a thermodynamic parameter estimation problem to infer unobserved system parameters. Or, you can combine ML-based forecasting of consumer behaviour with a broader mathematical optimization in a marketing coupon optimization issue to optimise the coupons sent.

Manifold has extensive experience at the interface of signal processing and ML. A common pattern we have deployed is using signal processing for feature engineering and combining it with modern ML to identify temporal events based on these features. Features inspired by multivariate time-series signal processing, such as the short-time Fourier transform (STFT), exponential moving averages, and edge finders, allow domain experts to quickly encode information into the modelling problem. Using ML lets the system continuously learn from additional annotated data and improve its output over time.
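One of those features, the exponential moving average, is small enough to sketch here (the smoothing factor and the toy step signal are assumptions):

```python
def exponential_moving_average(series, alpha=0.3):
    """Exponential moving average: a classic signal-processing feature
    often fed into ML models alongside STFTs and edge detectors."""
    out, ema = [], series[0]
    for x in series:
        ema = alpha * x + (1 - alpha) * ema
        out.append(ema)
    return out

signal = [0, 0, 0, 10, 10, 10, 0, 0]
smoothed = exponential_moving_average(signal)
print([round(v, 2) for v in smoothed])
```

The smoothed series rises gradually after the step and decays gradually after it ends, which is why such features make temporal events easier for a downstream model to detect.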

In the end, that is what is crucial to remember: all of these methods are complementary, and to build data products that solve real business challenges, you need to keep all of them in mind. An unnecessarily narrow emphasis on ML misses the forest for the trees.

1.2.3 Markov Chain Model

The Markov chain model is a statistical and mathematical structure with some hidden state configurations; it can be interpreted as a simple Bayesian network whose states are directly visible to the observer. This model makes a remarkable contribution to supervised and unsupervised simulation, to reinforcement learning, and to pattern recognition. For instance, consider two states, A and B, with four transitions: when the system is in A it can remain in A or move to B, and when it is in B it can remain in B or move to A (Figure 1.2). In this way a transition matrix is created that defines the probability of each state transformation. The model can be built in this way not only for two states but for any number of states [3].
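The two-state chain of Figure 1.2 can be sketched numerically as follows; the transition probabilities are illustrative assumptions:

```python
import numpy as np

# Two-state Markov chain over states A and B (as in Figure 1.2).
# P[i, j] is the probability of moving from state i to state j.
P = np.array([[0.7, 0.3],    # A -> A, A -> B (assumed values)
              [0.4, 0.6]])   # B -> A, B -> B

# Distribution after k steps: repeatedly multiply the distribution by P.
dist = np.array([1.0, 0.0])  # start in state A with certainty
for _ in range(50):
    dist = dist @ P

print(dist.round(3))  # converges to the stationary distribution [4/7, 3/7]
```

However the chain starts, the distribution converges to the stationary distribution determined by the transition matrix, which is the quantity most analyses of such models revolve around.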

1.2.4 Method for Automated Simulation of Dynamical Systems

We will consider the problem of performing automated dynamic-system simulation and how to solve it using AI techniques. First, we consider some of the key ideas involved in simulating a mathematical model. Then we explore how these concepts can be implemented as a software program.

a. Simulation of mathematical engineering models

If we consider a particular mathematical model, the problem of performing an effective simulation for a specific engineering system can be better understood. Let us consider the model below:

X′ = σ(Y − X)
Y′ = rX − Y − XZ
Z′ = XY − bZ (1.1)

Figure 1.2 Two state Markov model.

where X, Y, Z, σ, r, b ∈ R, and σ, r and b are three parameters, which are usually taken to be positive, regardless of their physical origins. The equations are also studied for different values of r in 0 < r < ∞. Several researchers have studied this mathematical model to some degree, but many questions remain about its very complicated dynamics for some ranges of parameter values [4].

For example, if we consider simulating eq. (1.1), the problem is choosing appropriate parameter values for σ, r, b so that the model’s interesting dynamic behaviour can be extracted. The problem is not a simple one, since we need to consider a three-dimensional search space (σ, r, b) and there are several possible dynamic behaviours for this model. Because the model is composed of three simultaneous differential equations, the behaviours can range from simple periodic orbits to very complicated chaotic attractors. Once the parameter values are selected, the problem becomes a numerical one, since we then need to iterate an appropriate map to approximate the solutions numerically.
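A minimal simulation sketch of this model follows, using forward Euler iteration; the step size and initial condition are assumptions, σ=10, r=28, b=8/3 is the classic chaotic parameter choice, and Z′ = XY − bZ is the standard third component of this Lorenz-type three-equation model:

```python
import numpy as np

def lorenz_step(state, sigma, r, b, dt=0.01):
    """One forward-Euler step of the three-equation model in eq. (1.1)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = r * x - y - x * z
    dz = x * y - b * z
    return state + dt * np.array([dx, dy, dz])

# Classic chaotic parameter choice.
state = np.array([1.0, 1.0, 1.0])
trajectory = [state]
for _ in range(5000):
    state = lorenz_step(state, sigma=10.0, r=28.0, b=8.0 / 3.0)
    trajectory.append(state)

print(len(trajectory), state)
```

This is exactly the "numerical iteration of an appropriate map" step: once (σ, r, b) is fixed, the remaining work is bookkeeping over many small time steps. A higher-order integrator would be preferable for serious study of the chaotic regime.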

b. Method for automated simulation using AI

The issue of performing automatic simulation for a specific engineering system is then one of determining the “best” set of parameter values BP for the mathematical model. This is where AI techniques are really beneficial. The main concept in AI is that we can use such techniques to simulate human experts in a specific application domain. In this case, we use heuristics and statistical estimates derived from experts in the field to limit the computer program’s search space. The algorithm for selecting the “best” set of parameter values may be defined as follows [9].

Step 1: Read the mathematical model M.

Step 2: Analyze the model M to “understand” its complexity.

Step 3: Generate a set of permissible parameters AP using the model’s initial “understanding.” This set is generated by heuristics (expressed as rules in the knowledge base) and by solving some mathematical relationships that will be described later.

Step 4: Perform a selection of the “best” set of parameter values BP. This set is generated using heuristics (expressed as rules in the knowledge base).

Step 5: Execute the simulations by numerically solving the mathematical model equations. The various forms of complex behaviours are described at this stage.

The result of this implementation is a computer program that can be called an intelligent system for simulating dynamical engineering systems [5].
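Since the chapter leaves the knowledge base unspecified, the following sketch of Steps 1–5 uses a hypothetical stand-in model (the one-parameter logistic map), a hypothetical pruning rule, and a toy “richest dynamics” score; it shows only the shape of the pipeline, not any particular expert system:

```python
# Step 1-2: the "model" here is the logistic map x -> r*x*(1-x),
# with a single parameter r standing in for the set (sigma, r, b).
def simulate(r, x0=0.5, steps=200):
    """Step 5: iterate the map numerically."""
    x, out = x0, []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

candidates = [0.5 + 0.05 * i for i in range(70)]        # r in [0.5, 3.95]

# Step 3: heuristic rules (the "knowledge base") prune the search space.
rules = [lambda r: 0.0 < r <= 4.0]                      # keep the map bounded
permissible = [r for r in candidates if all(rule(r) for rule in rules)]

# Step 4: pick the "best" parameter, scored here by the spread of the
# orbit's tail -- a toy proxy for "interesting dynamic behaviour".
def score(r):
    tail = simulate(r)[100:]
    return max(tail) - min(tail)

best = max(permissible, key=score)
print(round(best, 2))
```

Under this toy score the winner lands in the chaotic region of the map (r > 3.5), mirroring how the real system would steer simulation toward parameter ranges with rich dynamics.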

1.2.5 kNN is a Case-Based Learning Method

kNN holds all the training data for classification. Being a lazy learning technique limits its use in several applications, such as dynamic web mining over a large repository. One way to enhance its efficiency is to find a few representatives to stand in for the entire training data, i.e., to build an inductive learning model from the training dataset and use this model (the representatives) for classification. Several existing algorithms were originally built to construct such a model, such as decision trees or neural networks, and their efficiency is one benchmark for evaluating the different algorithms. Since kNN is a simple but effective classification method, and is convincingly one of the most effective methods in text categorization on the Reuters corpus of newswire articles, it motivates us to create a model for kNN that boosts its efficiency while maintaining its classification accuracy [9]. Figure 1.3 shows a frequency distribution, a statistical representation that displays the number of observations within a given interval.
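A minimal kNN sketch (the toy two-cluster dataset is an illustrative assumption) shows the case-based, lazy character of the method: all training points are kept, and classification is a vote among the k nearest:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """k-nearest-neighbour classification: sort the stored training
    points by Euclidean distance to the query and vote among the k closest."""
    neighbours = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
         ((5.0, 5.0), "B"), ((5.2, 4.9), "B"), ((4.8, 5.1), "B")]
print(knn_predict(train, (1.1, 1.0)))  # A
print(knn_predict(train, (5.1, 5.0)))  # B
```

Note that every prediction scans the whole training set, which is exactly the cost that representative-based model building aims to remove.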

B. Support Vector Machine (SVM)

Support vector machines [7] (SVMs, also support vector networks) analyse data used for classification and regression analysis in machine learning. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm generates a model that assigns new examples to one category or the other. Figure 1.4 shows two types of data, red and blue. In kNN, we measured the distance from a test point to all training samples and took the sample with the minimum distance. It takes a lot of time to calculate all the distances and a lot of memory to store all the training samples.

Figure 1.3 Distribution of data points and first obtained representative.

Figure 1.4 SVM.

Our primary objective is to find a line that divides the data uniquely into two regions. Data that can be split in two with a straight line (or a hyperplane in higher dimensions) is called linearly separable.

In Figure 1.4 above, intuitively, the line should pass as far as possible from all the points, since there may be noise in the incoming data; such noise should not affect the accuracy of the classification. The farthest line thus offers more immunity against noise. Therefore, SVM finds the straight line (or hyperplane) with the largest minimum distance to the training samples [10].
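The “largest minimum distance” criterion can be illustrated numerically; the toy points and the two candidate separating lines below are assumptions:

```python
import numpy as np

# Two linearly separable toy classes.
pos = np.array([[2.0, 2.0], [3.0, 3.0]])
neg = np.array([[-2.0, -2.0], [-3.0, -1.0]])

def margin(w, b):
    """Smallest distance from any sample to the line w.x + b = 0.
    SVM selects the separating line that maximizes this quantity."""
    pts = np.vstack([pos, neg])
    return float(np.min(np.abs(pts @ w + b) / np.linalg.norm(w)))

# Two candidate separating lines; the larger-margin one is preferred.
print(margin(np.array([1.0, 1.0]), 0.0))   # line x + y = 0, margin ~2.83
print(margin(np.array([1.0, 0.0]), 0.0))   # line x = 0,     margin 2.0
```

Both lines separate the classes, but the diagonal line keeps every point farther away, so it is the one an SVM would favour; a full SVM solver searches over all (w, b) for the maximum of exactly this margin.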

1.2.6 Comparison of KNN and SVM

KNN classifies data based on a distance metric, while SVM needs a proper training phase. Because of the optimization at its core, SVM guarantees that separable data will be separated optimally. KNN is naturally a multi-class classifier, whereas a standard SVM is a binary classifier that assigns data to one of two classes. A One-vs-One or One-vs-All strategy is therefore used for multiclass SVM. Figure 1.5 illustrates hyperplanes, which are decision boundaries that help classify data points: points falling on either side of the hyperplane can be attributed to different classes, and the dimension of the hyperplane depends on the number of features. In the One-vs-One method, n*(n-1)/2 SVMs must be trained: one SVM for each pair of classes. We feed the unknown pattern to all of them, and the final decision on the class of the data is determined by a majority vote over all SVM outcomes. This method is often used for multi-class classification. In the One-vs-All approach, we have to train as many SVMs as there are classes of unlabelled data. We again feed the unknown pattern to the system, and it is assigned to the class whose SVM yields the greatest decision value.

Figure 1.5 Hyperplane in SVM.
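The n*(n-1)/2 count for the One-vs-One strategy can be checked directly (the class names are illustrative):

```python
from itertools import combinations

# One-vs-One multiclass SVM training plan: one binary SVM per pair of
# classes, n*(n-1)/2 in total.
classes = ["cat", "dog", "bird", "fish"]
pairs = list(combinations(classes, 2))

n = len(classes)
print(len(pairs), n * (n - 1) // 2)  # 6 6
```

Each pair in `pairs` would get its own binary SVM, and an unknown pattern would be classified by majority vote over the six pairwise decisions.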

Although SVMs look more computationally intensive, once training is completed, the model can be used to predict classes even when we come across new unlabelled data. In KNN, by contrast, the distance metric must be recomputed every time we come across a collection of new unlabelled data, so the distance computation is never amortised. There are two major cases for SVMs: the classes may be linearly or non-linearly separable. When the classes are non-linearly separable, we use kernel functions such as the Gaussian radial basis function or polynomials [11].

Therefore, for KNN classification we only have to set the parameter K and select a suitable distance metric, while for SVMs, if the classes are not linearly separable, we have to select the regularisation parameter and the kernel parameters. Table 1.1 compares the accuracy of the two classifiers.

When we compare the accuracy of the two classifiers, SVMs usually have higher accuracy than KNN, as shown [6–8].

When performing the Visual Studio tests after integrating the OpenCV libraries, the accuracy was found to be 94 percent for SVM [7] and 93 percent for KNN [6].

Table 1.1 Accuracy of classifiers.

Classifier   Training set   Test set   Accuracy rate (in %)
SVM          10,000         10,000     98.9
KNN          10,000         10,000     96.47

1.3 Mathematical Models and Covid-19

The compartmental models are divided into different groups depending on the nature of the disease and its pattern of spread:

Susceptible-Infected-Recovered (SIR): This model divides a population of size N into three epidemiological subpopulations: Susceptible, Infected and Recovered, represented by the variables S, I and R respectively. Birth, mortality and vaccination rates can also be added to this model. Newborns enter the susceptible class. Infected individuals transmit the disease to susceptible individuals and remain in the infected class for the infectious period, and individuals in the recovered class are assumed to have lifelong immunity.

Susceptible-Exposed-Infected-Recovered (SEIR): This model divides a population of size N into four epidemiological subpopulations: Susceptible, Exposed, Infected, and Recovered, represented by the variables S, E, I, and R respectively. Birth rate, death rate, and vaccination rate, if known or applicable, can also be considered in this model. It is an appropriate model for a disease with a substantial post-infection incubation period during which an infected person is not yet infectious.

Susceptible-Infected-Susceptible (SIS): Some diseases, such as the common cold, do not confer long-lasting immunity. Such infections leave no immunity upon recovery, and individuals become susceptible again.

Susceptible-Exposed-Infected-Susceptible (SEIS): This model can be used when there is no immunity to the pathogen (implying that the R class would be empty). Tuberculosis can be an instance of this model [12].

1.3.1 SEIR Model (Susceptible-Exposed-Infectious-Removed)

To estimate the number of infected individuals, the traditional Susceptible-Exposed-Infected-Recovered (SEIR) model is used. Viruses or bacteria cause infectious diseases such as rubella, mumps, measles and pertussis. The transmission of these diseases involves an incubation period: a period in which people who have begun to be attacked by viruses or bacteria display clinical signs but cannot yet spread the disease. The SEIR model can reflect the spread of illness by accounting for the incubation period. Immigration also affects the spread of illness, since migrants may bring the disease from their regions to other countries. For this reason, an SEIR model with immigration should be considered. Here we define the SEIR model with immigration, determine an equilibrium point and state the stability of the equilibrium. The model is then applied to the illness of herpes [13].
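A minimal SEIR simulation sketch follows; the transmission, incubation, and recovery rates and the population size are illustrative assumptions, and for simplicity no immigration, birth, or death terms are included:

```python
def seir_simulate(beta, sigma, gamma, S0, E0, I0, R0_, days, dt=0.1):
    """Forward-Euler integration of the standard SEIR equations:
    S' = -beta*S*I/N, E' = beta*S*I/N - sigma*E,
    I' = sigma*E - gamma*I, R' = gamma*I."""
    S, E, I, R = S0, E0, I0, R0_
    N = S + E + I + R
    history = [(S, E, I, R)]
    for _ in range(int(days / dt)):
        new_exposed = beta * S * I / N
        dS = -new_exposed
        dE = new_exposed - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        S += dt * dS
        E += dt * dE
        I += dt * dI
        R += dt * dR
        history.append((S, E, I, R))
    return history

# Illustrative parameters (not fitted to any real outbreak):
# ~5-day incubation (sigma = 1/5), ~10-day infectious period (gamma = 1/10).
hist = seir_simulate(beta=0.5, sigma=1/5, gamma=1/10,
                     S0=9990, E0=10, I0=0, R0_=0, days=200)
S, E, I, R = hist[-1]
print(round(S + E + I + R))  # population is conserved: 10000
```

Because the four derivatives sum to zero, the total population stays constant throughout the run; adding immigration would introduce an inflow term into S (and possibly E), breaking that conservation as the chapter describes.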