COGNITIVE ANALYTICS AND REINFORCEMENT LEARNING
The combination of cognitive analytics and reinforcement learning is a transformational force in the field of modern technological breakthroughs, reshaping the decision-making, problem-solving, and innovation landscape. This book examines the profound overlap between these two fields and illuminates its significant consequences for business, academia, and research.
Cognitive analytics and reinforcement learning are pivotal branches of artificial intelligence. They have garnered increased attention in research and industry for how they model the way humans perceive, interpret, and respond to information. Cognitive science allows systems to understand data, mimic human cognitive processes, make informed decisions, identify patterns, and adapt to dynamic situations, enhancing the capabilities of many applications.
Readers will uncover the latest advancements in AI and machine learning, gaining valuable insights into how these technologies are revolutionizing various industries: transforming healthcare by enabling smarter diagnosis and treatment decisions, enhancing the efficiency of smart cities through dynamic decision control, optimizing debt collection strategies, predicting optimal moves in complex scenarios like chess, and much more.
The book's real strength lies in bridging the gap between theoretical knowledge and practical implementation, offering a rich tapestry of use cases and examples. Whether you are a student looking to gain a deeper understanding of these cutting-edge technologies, an AI practitioner seeking innovative solutions for your projects, or an industry leader interested in the strategic applications of AI, this book offers a treasure trove of insights to help you navigate the complex and exciting world of cognitive analytics and reinforcement learning. It serves as an invaluable resource for researchers and industry professionals seeking to leverage cognitive analytics and reinforcement learning to drive innovation and solve complex problems.
Audience
The book caters to a diverse audience spanning academic researchers, AI practitioners, data scientists, industry leaders, tech enthusiasts, and educators working in artificial intelligence, data analytics, and cognitive sciences.
Page count: 562
Year of publication: 2024
Cover
Table of Contents
Series Page
Title Page
Copyright Page
Preface
Part I: COGNITIVE ANALYTICS IN CONTINUAL LEARNING
1 Cognitive Analytics in Continual Learning: A New Frontier in Machine Learning Research
1.1 Introduction
1.2 Evolution of Data Analytics
1.3 Conceptual View of Cognitive Systems
1.4 Elements of Cognitive Systems
1.5 Features, Scope, and Characteristics of Cognitive System
1.6 Cognitive System Design Principles
1.7 Backbone of Cognitive System Learning/Building Process [10]
1.8 Cognitive Systems vs. AI
1.9 Use Cases
1.10 Conclusion
References
2 Cognitive Computing System-Based Dynamic Decision Control for Smart City Using Reinforcement Learning Model
2.1 Introduction
2.2 Smart City Applications
2.3 Related Work
2.4 Proposed Cognitive Computing RL Model
2.5 Simulation Results
2.6 Conclusion
References
3 Deep Recommender System for Optimizing Debt Collection Using Reinforcement Learning
3.1 Introduction
3.2 Terminologies in RL
3.3 Different Forms of RL
3.4 Related Works
3.5 Proposed Methodology
3.6 Result Analysis
3.7 Conclusion
References
Part II: COMPUTATIONAL INTELLIGENCE OF REINFORCEMENT LEARNING
4 Predicting Optimal Moves in Chess Board Using Artificial Intelligence
4.1 Introduction
4.2 Literature Survey
4.3 Proposed System
4.4 Results and Discussion
4.5 Conclusion
References
5 Virtual Makeup Try-On System Using Cognitive Learning
5.1 Introduction
5.2 Related Works
5.3 Proposed Method
5.4 Experimental Results and Analysis
5.5 Conclusion
References
6 Reinforcement Learning for Demand Forecasting and Customized Services
6.1 Introduction
6.2 RL Fundamentals
6.3 Demand Forecasting and Customized Services
6.4 eMart: Forecasting of a Real-World Scenario
6.5 Conclusion and Future Works
References
7 COVID-19 Detection through CT Scan Image Analysis: A Transfer Learning Approach with Ensemble Technique
7.1 Introduction
7.2 Literature Survey
7.3 Methodology
7.4 Results and Discussion
7.5 Conclusion
References
8 Paddy Leaf Classification Using Computational Intelligence
8.1 Introduction
8.2 Literature Review
8.3 Methodology
8.4 Results and Discussion
8.5 Conclusion
References
9 An Artificial Intelligent Methodology to Classify Knee Joint Disorder Using Machine Learning and Image Processing Techniques
9.1 Introduction
9.2 Literature Survey
9.3 Proposed Methodology
9.4 Experimental Results
9.5 Conclusion
References
Part III: ADVANCEMENTS IN COGNITIVE COMPUTING: PRACTICAL IMPLEMENTATIONS
10 Fuzzy-Based Efficient Resource Allocation and Scheduling in a Computational Distributed Environment
10.1 Introduction
10.2 Proposed System
10.3 Experimental Results
10.4 Conclusion
References
11 A Lightweight CNN Architecture for Prediction of Plant Diseases
11.1 Introduction
11.2 Precision Agriculture
11.3 Related Work
11.4 Proposed Architecture for Prediction of Plant Diseases
11.5 Experimental Results and Discussion
11.6 Conclusion
References
12 Investigation of Feature Fusioned Dictionary Learning Model for Accurate Brain Tumor Classification
12.1 Introduction
12.2 Literature Review
12.3 Proposed Feature Fusioned Dictionary Learning Model
12.4 Experimental Results and Discussion
12.5 Conclusion and Future Work
References
13 Cognitive Analytics-Based Diagnostic Solutions in Healthcare Infrastructure
13.1 Introduction
13.2 Cognitive Computing in Action
13.3 Increasing the Capabilities of Smart Cities Using Cognitive Computing
13.4 Cognitive Solutions Revolutionizing the Healthcare Industry
13.5 Application of Cognitive Computing to Smart Healthcare in Seoul, South Korea (Case Study)
13.6 Conclusion and Future Work
References
14 Automating ESG Score Rating with Reinforcement Learning for Responsible Investment
14.1 Introduction
14.2 Comparative Study
14.3 Literature Survey
14.4 Methods
14.5 Experimental Results
14.6 Discussion
14.7 Conclusion
References
15 Reinforcement Learning in Healthcare: Applications and Challenges
15.1 Introduction
15.2 Structure of Reinforcement Learning
15.3 Applications
15.4 Challenges
15.5 Conclusion
References
16 Cognitive Computing in Smart Cities and Healthcare
16.1 Introduction
16.2 Machine Learning Inventions and Its Applications
16.3 What is Reinforcement Learning and Cognitive Computing?
16.4 Cognitive Computing
16.5 Data Expressed by the Healthcare and Smart Cities
16.6 Use of Computers to Analyze the Data and Predict the Outcome
16.7 Machine Learning Algorithm
16.8 How to Perform Machine Learning?
16.9 Machine Learning Algorithm
16.10 Common Libraries for Machine Learning Projects
16.11 Supervised Learning Algorithm
16.12 Future of the Healthcare
16.13 Development of Model and Its Workflow
16.14 Future of Smart Cities
16.15 Case Study I
16.16 Case Study II
16.17 Case Study III
16.18 Case Study IV
16.19 Conclusion
References
Index
End User License Agreement
Chapter 3
Table 3.1 Literature review—RL in finance.
Table 3.2 Literature review—debt collection process.
Table 3.3 Comparison with existing works.
Table 3.4 RL algorithms—Result comparison.
Chapter 4
Table 4.1 ELO rating-based results.
Chapter 5
Table 5.1 Literature survey table summary.
Table 5.2 Performance of different models for virtual makeup try-on using fa...
Chapter 7
Table 7.1 Literature review.
Table 7.2 Result obtained from different models.
Chapter 8
Table 8.1 Sample features.
Table 8.2 Performance comparison of classifiers based on accuracy.
Table 8.3 Performance comparison with existing methods.
Chapter 9
Table 9.1 Comparison of performance metrics for the proposed HIF with existi...
Chapter 10
Table 10.1 Fuzzy inference rule.
Table 10.2 Expected execution time matrix.
Table 10.3 Average response time comparison.
Table 10.4 Scheduling success rate comparison.
Chapter 12
Table 12.1 Average accuracy produced by various classification models.
Chapter 14
Table 14.1 Glassdoor—sustainability report metrics.
Table 14.2 ESG scores of the companies.
Chapter 15
Table 15.1 Overview of all the applications.
Chapter 16
Table 16.1 Usage of machine learning models in healthcare and the model that...
Table 16.2 Evaluation metrics for machine learning models.
Chapter 1
Figure 1.1 Benefits of analytics (source: https://swifterm.com/the-difference-...
Figure 1.2 Conceptual view of cognitive computing [6].
Figure 1.3 Components of cognitive computing [7].
Figure 1.4 Functions of cognitive computing [9].
Figure 1.5 Types of learning [4].
Figure 1.6 History of IBM Watson (source https://andrewlenhardtmd.com/blog/wp-...
Figure 1.7 Human-centered cognitive cycle [17].
Figure 1.8 Cognitive computing system architecture [18].
Figure 1.9 High-level cognitive IoT architecture [31].
Chapter 2
Figure 2.1 Smart city components.
Figure 2.2 The Reinforcement learning model-based cognitive computing architec...
Figure 2.3 Evaluation time of the product order completion process.
Figure 2.4 Energy consumption of the product order completion process.
Chapter 3
Figure 3.1 Debt collection optimization using RL.
Figure 3.2 Performance of the RL model.
Chapter 4
Figure 4.1 Proposed system.
Figure 4.2 Alpha-beta pruning in min–max tree.
Figure 4.3 Architecture of move prediction using alpha-beta pruning.
Figure 4.4 Architecture of predicting moves using CNN.
Figure 4.5 CNN layers.
Figure 4.6 Architecture of predicting moves using the hybrid algorithm.
Figure 4.7 Main menu vs. AI Human vs. Computer (if the user presses key “1”)....
Figure 4.8 (a) Alpha-beta, (b) Moving a pawn.
Figure 4.9 Choose castling.
Figure 4.10 Game over and get final score.
Figure 4.11 Accuracy analysis.
Chapter 5
Figure 5.1 Model workflow.
Figure 5.2 Image preprocessing.
Figure 5.3 CNN architectural diagram.
Figure 5.4 Face key points.
Figure 5.5 Masked image.
Figure 5.6 Color selector bar.
Figure 5.7 Final output image for lipstick.
Figure 5.8 Output for eyebrows.
Figure 5.9 Output for eyeliner.
Figure 5.10 Real-time output.
Figure 5.11 Accuracy in training and validation.
Chapter 6
Figure 6.1 Reinforcement learning—key components flowchart.
Figure 6.2 Robot working in the maze.
Figure 6.3 Stage of the exploration vs. exploitation process.
Figure 6.4 Workflow diagram.
Chapter 7
Figure 7.1 Layered architecture of deep learning model.
Figure 7.2 Flowchart for classification of COVID-19.
Figure 7.3 Resnet architecture.
Figure 7.4 Inception architecture.
Figure 7.5 Sample images of infected and non-infected patients.
Chapter 8
Figure 8.1 Types of diseases.
Figure 8.2 Overview of the proposed methodology.
Figure 8.3 Illustration of LBP calculation.
Figure 8.4 Proposed ILBP computation.
Figure 8.5 Illustration of rotation of transition bit.
Figure 8.6 The log–log plot for the computation of FD.
Figure 8.7 Sample output images.
Figure 8.8 Performance comparison of KNN, SVM, and AdaBoost classifiers.
Chapter 9
Figure 9.1 Architecture of the proposed feature selection and HIF classificati...
Figure 9.2 SFT time–frequency spectrogram representation of normal and abnorma...
Figure 9.3 Comparison between normal (a) and abnormal (b, c, d, e) VAG signals...
Figure 9.4 Box plot of the features selected.
Figure 9.5 Data spread in feature space for standard samples.
Figure 9.6 Data spread in feature space for standard samples.
Chapter 10
Figure 10.1 System architecture.
Figure 10.2 Resource allocation using cloud virtual machine.
Figure 10.3 (a) Allocating to GIS, (b) Calculating load.
Figure 10.4 Average response time comparison.
Figure 10.5 Scheduling success rate comparison.
Figure 10.6 (a) Replication cost graph, (b) Reliability cost graph.
Chapter 11
Figure 11.1 Challenges of precision agriculture.
Figure 11.2 Proposed lightweight CNN architecture for prediction of plant dise...
Figure 11.3 Proposed system GUI for plant disease diagnosis. (a) Input image. ...
Figure 11.4 Proposed system GUI for plant diseases diagnosis. (a) Input image....
Chapter 12
Figure 12.1 Architecture of proposed FFDLM for brain tumor classification.
Figure 12.2 Performance of proposed model—training accuracy vs. validation acc...
Figure 12.3 Performance of proposed model—training loss vs. validation loss.
Figure 12.4 Performance of various classification model and their accuracy.
Chapter 13
Figure 13.1 Understanding cognitive computing and AI.
Figure 13.2 AI-driven diagnostics and disease detection.
Figure 13.3 Seoul smart city healthcare: cognitive computing transformation.
Chapter 14
Figure 14.1 Distribution of ages of respondents in the survey.
Figure 14.2 Percentage of the investors who are aware of ESG scoring metric.
Figure 14.3 Major reason for investing into sustainable companies.
Figure 14.4 Block diagram of our methodology.
Figure 14.5 Percentile score.
Figure 14.6 UI developed using the streamlit python framework.
Figure 14.7 Deepdive image.
Chapter 15
Figure 15.1 Reinforcement learning process.
Figure 15.2 Policies learned by the various models.
Figure 15.3 Comparison of the difference in doses between those recommended by...
Figure 15.4 Proposed reinforcement learning model.
Figure 15.5 Framework for supervised reinforcement learning using a recurrent ...
Figure 15.6 Supervised learning frameworks, (a) initial conditioning, (b) trea...
Figure 15.7 Success rate of model.
Figure 15.8 A trial treatment strategy and therapy choices for advanced NSCLC....
Figure 15.9 Model used in proposed approach.
Figure 15.10 Review of the study.
Figure 15.11 Framework overview.
Chapter 16
Figure 16.1 Different types of artificial intelligence and prospects.
Figure 16.2 The most commonly used programming languages in artificial languag...
Figure 16.3 Top three applications of artificial intelligence.
Figure 16.4 The importance of artificial intelligence in everyday life.
Figure 16.5 Modern application of machine learning.
Figure 16.6 Pictorial representation of machine learning algorithm types.
Figure 16.7 Workflow of supervised learning algorithm.
Figure 16.8 Workflow of unsupervised learning algorithm.
Figure 16.9 Workflow of reinforcement machine learning algorithm.
Figure 16.10 Overview of machine learning approaches.
Figure 16.11 Methodology for a typical machine learning approach.
Figure 16.12 The eight best programming libraries for machine learning.
Figure 16.13 Future of machine learning (ML) in healthcare.
Figure 16.14 Machine learning model development and workflow.
Figure 16.15 Smart kidney uses for end-stage renal disease (ERSD) patients.
Figure 16.16 Artificial intelligence in renal disorders aids in the identifica...
Scrivener Publishing
100 Cummings Center, Suite 541J
Beverly, MA 01915-6106
Publishers at Scrivener
Martin Scrivener ([email protected])
Phillip Carmical ([email protected])
Edited by
Elakkiya, R.
Department of Computer Science, Birla Institute of Technology & Science Pilani, Dubai Campus, UAE
and
Subramaniyaswamy V.
School of Computing, SASTRA Deemed University, Thanjavur, India
This edition first published 2024 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA
© 2024 Scrivener Publishing LLC
For more information about Scrivener publications please visit www.scrivenerpublishing.com.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
Wiley Global Headquarters
111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials, or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.
Library of Congress Cataloging-in-Publication Data
ISBN 978-1-394-21403-7
Cover image: Pixabay.com
Cover design by Russell Richardson
Together, cognitive analytics and reinforcement learning are a transformational force in the field of modern technological breakthroughs, reshaping the decision-making, problem-solving, and innovation landscape. This book examines the profound overlap between these two fields and illuminates its significant consequences for business, academia, and research.
The harmonious combination of cognitive analytics and reinforcement learning has emerged as a beacon of wise decision-making in a world that is continually evolving and where data-driven insights drive progress. The symbiotic relationship between cognitive capacities and reinforcement learning techniques has become a cornerstone of dealing with complexity as the globe struggles with complicated problems that call for immediate solutions.
This book’s main goal is to shed light on the development of cognitive-enhanced reinforcement learning and all of its numerous uses. Cognitive insights and reinforcement learning dynamics are combined to create a dynamic framework that empowers academics, researchers, and business leaders looking for practical answers to challenging decision-making problems.
This book explores the ideas, methods, and real-world applications that influence the development of cognitive analytics and reinforcement learning. Each chapter contains a narrative of innovation, ranging from the improvement of prediction models to optimizing resource allocation, from identifying healthcare problems to transforming smart cities.
The editors have created a compendium that perfectly captures the essence of reinforcement learning and cognitive analytics—where academic concepts meet practical implementations. This book is a testament to the teamwork of perceptive authors who illuminate the points where different domains converge and provide readers with a broad perspective on the potential that emerges when cognition and reinforcement come together.
We are exceedingly grateful to the authors for their outstanding contribution of knowledge and intelligence, which have made this book a treasure trove of wisdom and innovation. We also thank the readers for joining us on this trip, whether they are seasoned professionals, inquisitive researchers, or enthusiastic learners.
We sincerely hope that this book will spark knowledge, inquiry, and creativity—a physical manifestation of the motivation that propels us ahead in the field of intelligent decision-making.
The Editors
December 2023
Renuga Devi T.1, Muthukumar K.2*, Sujatha M.1† and Ezhilarasie R.1
1School of Computing, SASTRA Deemed University, Thanjavur, India
2School of Electrical & Electronics Engineering, SASTRA Deemed University, Thanjavur, India
The cognitive system that started with automation has now set its benchmark to reach human-centric intelligence. The slow adoption of cognitive systems is most likely due to their meticulous training process. With cognitive computing as its backbone, nowadays any data can be converted into an asset anytime and anywhere. The complexity and abundance of data demand the coexistence of many technologies to provide deep insights in a domain. A generic artificial intelligence system built on deep learning and natural language processing evolves into a personalized business partner and a life companion that continuously learns. Combined with tremendous computing power, this has driven incredible shifts in humanity's relationship with technology. This adoption and embrace have led to a higher level of intelligence augmentation, mainly in decision support and engagement systems, extending into various fields, especially the healthcare industry, business-to-business and industrial marketing, autonomous driving, financial services, manufacturing, and human assistance in day-to-day activities. The expensive and complex process of using cognitive systems to get complete resolutions for specific business segments on historical static data and dynamic real-time data should be addressed with Hadoop, Spark, NoSQL, and other technologies that are part of cognitive systems, alongside NLP, AI, and ML. This chapter begins with an understanding of the different kinds of analytics and the need of the hour, then gradually moves on to give insights into cognitive systems, their design principles, and key characteristics, dwelling on the backbone of cognitive systems and their different learning approaches, with some prominent use cases.
Keywords: Cognitive computing, machine learning algorithms, natural language processing, artificial intelligence, cognitive analytics
The cognitive age is a continuous trend of massive technological development. The driving force behind this trend is the developing field of cognitive technology, which consists of profoundly disruptive systems that interpret unstructured data, reason to generate hypotheses, learn from experience, and organically interact with humans. With this technology, the capacity to generate insight from all types of data will be critical to success in the cognitive age.
Cognitive computing is likely most notable for upending the conventional IT view that a technology’s worth reduces with time; because cognitive systems improve as they learn, they actually grow more useful. This trait makes cognitive technology very valuable for business, and many early adopters are capitalizing on the competitive edge it provides. The cognitive era has arrived, not just because technology has matured, but also because the phenomena of big data necessitate it. The goal of cognitive computing is to be able to solve some uncertain real-world issues comparable to those addressed by the human brain [1].
Since its inception in the 1950s, cognitive science has grown at a rapid pace. Furthermore, as a key component of cognitive science, cognitive computing has a significant influence on artificial intelligence and information technology [2]. Computing systems in the past could gather, transport, and store unstructured data, but they could not interpret it. Cognitive computing systems are intended to foster a better “symbiotic relationship” between humans and technology by replicating human reasoning and problem-solving. Cognitive computing simulates the human brain using computerized models. It is accomplished by the combination of the Von Neumann paradigm and neuromorphic computing, which combines analytic, iterative processes with extremely sophisticated logical and reasoning operations in a very short period of time while utilizing very little power.
The excitement around AI hardware has been dubbed a "renaissance of hardware," as vendors race to manufacture domain-specific or workload-specific designs that can fundamentally scale and increase computing productivity [3]. Cognitive systems are probabilistic in nature and hold the capability to sense and adapt to the unpredictability and complexity of unstructured input. They analyze that information, organize it, and explain what it means, as well as the reasons for their judgments [4]. Cognitive computing refers to technological platforms that combine reasoning, machine learning, natural language processing, vision, voice, and human–computer interaction to replicate the operation of the human brain and aid in decision-making. The progression of analytics evolves from pure description through prediction to prescription, reflecting a journey from understanding to anticipation and active guidance.
As we go forward, the graph in Figure 1.1 shows us the benefits that each type of analytics provides.
Descriptive analytics acquires and evaluates facts to explain what has happened. The majority of business reports are descriptive in nature, providing summaries of historical data or explaining how observations differ from one another. Descriptive analytics provides detailed insights from past data via data aggregation and data mining but fails to explain the reasons behind those insights.
Figure 1.1 Benefits of analytics (source: https://swifterm.com/the-difference-between-descriptive-diagnostic-predictive-and-cognitive-analytics/).
Diagnostic analytics addresses the reason behind an inference and discovers answers to "why" questions. The data are compared with past data to identify why a particular situation has happened. This method of data evaluation is useful to uncover data anomalies, determine the relationships within the data, and detect patterns and trends in product market analysis. Some of the diagnostic analytics techniques used by various business firms include data discovery, alarms, drill-down, correlation, drill-up, and data mining. In-depth analysis by experienced demand planners supports better decision choices. Diagnostic analytics is a reactive process; even when used with forecasting, it helps us only to anticipate whether the current situation is likely to continue.
Predictive analytics is a part of business intelligence that uses the descriptive and predictive factors of available data to forecast and identify the possibility of an unknown pattern occurring in the near future. It combines analytical techniques, data mining strategies, predictive models, and forecasting methods to assess the possibility of risk and the linkages in the current data in order to perform future predictions. At this point, you are more interested in what is likely to happen than in what has already happened. It offers proactive market responses.
Prescriptive analytics combines descriptive, predictive, and diagnostic analysis to create the possibility of making things happen. Beginning with descriptive analysis, which informed us about what has happened, the next stage was diagnostic analysis of why it happened, and the next was predictive analysis to predict what would happen. As a consequence, prescriptive analysis applies business principles and mathematical models to the data to infer future decisions and actions from the current data. Business firms can implement prescriptive analytics in day-to-day transactions only when an analytics-driven culture is followed across the entire organization. Larger firms such as Amazon and McDonald's employ prescriptive analytics to increase revenue and customer experience by improving their demand planning.
Cognitive analytics is software that takes in all available data and analytics and also learns on its own, without explicit human direction. To achieve this self-learning, cognitive analytics combines advanced technologies like Natural Language Processing (NLP), artificial intelligence algorithms, machine learning and deep learning, semantics, data mining, and emotional intelligence [5]. Using these techniques, the cognitive application becomes smarter and can repair itself.
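To make the progression from descriptive to prescriptive analytics concrete, the following is a minimal Python sketch on hypothetical monthly sales and advertising data; the numbers, the simple linear models, and the budget rule are illustrative assumptions, not taken from the chapter.

```python
# A minimal sketch of the four analytics stages on hypothetical monthly sales data.
# All values and relationships are invented for illustration.
import numpy as np

months = np.arange(1, 13)
sales = np.array([10, 12, 11, 15, 18, 17, 21, 24, 23, 27, 30, 32], dtype=float)
ad_spend = np.array([2, 2, 2, 3, 4, 4, 5, 6, 6, 7, 8, 8], dtype=float)

# Descriptive: summarize what has happened.
print("mean sales:", sales.mean(), "total:", sales.sum())

# Diagnostic: look for a reason behind the trend (correlation with ad spend).
print("corr(sales, ad_spend):", np.corrcoef(sales, ad_spend)[0, 1])

# Predictive: fit a simple trend and forecast the next month.
slope, intercept = np.polyfit(months, sales, 1)
print("forecast for month 13:", slope * 13 + intercept)

# Prescriptive: choose the ad spend that maximizes predicted sales net of its cost.
candidate_spend = np.arange(1, 11)
beta = np.polyfit(ad_spend, sales, 1)[0]          # sales gained per unit of spend
predicted = sales.mean() + beta * (candidate_spend - ad_spend.mean())
best = candidate_spend[np.argmax(predicted - candidate_spend)]
print("recommended ad spend:", best)
```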
Figure 1.2 Conceptual view of cognitive computing [6].
Internal components of the cognitive analytics engine are depicted in Figure 1.2 by the large rectangle. To represent and reason with information, many knowledge representation structures are required. A variety of machine learning methods and inference engines are also required. Domain cognitive models encapsulate domain-specific cognitive processes to facilitate cognitive style problem solving. The learning and adaptation component increases system performance by learning from prior encounters with users. In contrast to all previous analytics, cognitive analytics provides many solutions to a query and assigns a level of confidence to each response. In other words, cognitive analytics use probabilistic algorithms to provide several responses with variable degrees of relevance. Noncognitive analytics, on the other hand, uses deterministic algorithms to calculate just one solution to each inquiry. Another component, labeled Hypothesis Generation & Validation, is required to compute numerous responses [6].
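The contrast between a single deterministic answer and several confidence-scored hypotheses can be illustrated with a small sketch; the candidate answers, evidence counts, and the simple normalization used as a confidence score are hypothetical.

```python
# Sketch of deterministic answering versus probabilistic hypothesis scoring.
evidence_for = {          # pieces of corpus evidence supporting each candidate answer
    "answer_A": 14,
    "answer_B": 9,
    "answer_C": 2,
}

# Non-cognitive (deterministic) style: return exactly one answer.
single_answer = max(evidence_for, key=evidence_for.get)

# Cognitive style: generate several hypotheses, each with a confidence score.
total = sum(evidence_for.values())
ranked_hypotheses = sorted(
    ((h, count / total) for h, count in evidence_for.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

print("deterministic:", single_answer)
for hypothesis, confidence in ranked_hypotheses:
    print(f"hypothesis {hypothesis}: confidence {confidence:.2f}")
```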
In general, the important components of a cognitive computing system, shown in Figure 1.3, may be divided into three groups.
Figure 1.3 Components of cognitive computing [7].
A method of analyzing input
A collection of content or information supporting the conclusion
A method of comparing the signal to the content/information corpus
Let us go a little further by looking at some of the bits that make up such components. A highly parallel and distributed infrastructure is provided along with computing and storage cloud resources.
It uses the data acquired from the database and its provenance, along with methods to identify the properties of those data that are not static in nature (e.g., when the data source was created, and by whom). The corpus comprises internal and external data sources, and the data must be prepared before being used within it (i.e., data have to be selected, cleaned, and their accuracy monitored).
The corpus contains huge amounts of data that are mainly text-based, such as documents and reports about patients, consumers, etc. It also holds unstructured and semi-structured data such as videos, photos, and audio. The corpus additionally contains information about the ontology and the connections among data. A taxonomy provides an organized classification of the data present in an ontology.
These services are used to create knowledge about the data that have been ingested and processed within the corpus. A cognitive system model is created by a collection of sophisticated algorithms. The machine learning algorithms adopted in cognitive systems have two sets of dynamics, namely, (i) hypothesis generation and (ii) hypothesis evaluation. A hypothesis is a testable statement that explains an observable occurrence based on the evidence. The evidence to support the hypothesis is derived from a repeated process of training on the data. Learning from structured and unstructured data requires suitable tools for processing it. NLP (Natural Language Processing) services may analyze and find patterns in unstructured textual input to help a cognitive system. Deep Learning (DL) technologies are required for unstructured data such as photos, videos, and audio.
Data visualization is very useful for interpreting results in graphical form and assists in easily drawing recommendations. When patterns and their connections are visualized with color, structure, and other forms, they are easier to identify and understand. Above all, applications must be developed by making use of all the capabilities of the cognitive system that shape the business in different verticals.
The objective of cognitive computing is to design a framework that can handle complex issues without human intervention. Applying cognitive function for commercial and general application development can be done by incorporating the features proposed by Cognitive Computing Consortium. The application must have the following features:
Machine learning algorithms are used to build the cognitive system at its initial stage. Because it mimics the human brain, the system must learn, train itself, and adapt to its surroundings. A single job cannot simply be coded into the system; it must be dynamic in terms of data collection, goal comprehension, and requirement fulfilment.
The solution developed using cognitive systems like the human brain will interact with different kinds of systems such as processors, cloud services, mobile, and the user. The interaction with the system is bidirectional. The human input is interpreted by natural language processing and deep learning techniques to give a suitable result. This can be done by various chatbots like Mitsuku.
The cognitive system remembers the previous data input in a process and substitutes the appropriate input when the application is called in the future. This is to characterize the problem by asking questions or locating further information. This feature demands the proper use of quality data and validation procedures to substantiate that the system is sufficiently supplied with acceptable information in turn to offer reliable and up-to-date input.
The cognitive system must comprehend, recognize, and draw contextual characteristics such as proper domain, user profile, synonyms, objective, time, place, regulations, process, task, and syntax. The system relies on various sources including unstructured and organized digital data, as well as sensory data (gestural, visual, aural, or sensor-provided).
According to the IBM Institute for Business Value, the scope of cognitive computing [8] involves discovery, interaction, and decision. This is very similar to the cognitive abilities humans use in everyday life. A simplified view of this cognitive functionality is shown in Figure 1.4.
Cognitive systems contain unstructured and organized data. They draw on deep domain knowledge and deliver proper expert advice. Testable statements and arguments are constructed by the model by taking into account contextual relationships among various items in the system. This is helpful in finding solutions to unclear and inconsistent facts. It gives the system the capability to engage people in discerning discussions; chatbots are the best example of this kind. Many AI-enabled chatbots are well trained with the required domain knowledge, enabling them to be adopted in many specific business applications.
Figure 1.4 Functions of cognitive computing [9].
Cognitive systems are further ahead in decision-making when compared to other systems. This can be achieved by reinforcement learning. These decisions continuously evolve based on the latest information, recent results, and actions. The self-adaptive system has the capability to change its confidence score by retracing decisions, which is helpful in self-governing decisions. The best example of this kind is IBM Watson in the healthcare industry. This system acquires and analyzes patient data like medical history and diagnoses. The solution given by the system must have the capacity to read queries and respond based on complex medical data, doctors' notes, and clinical comments.
The next, more advanced level of cognitive computing is discovery. Discovery means finding insights in, and grasping, a huge volume of information. This model is built using deep learning and unsupervised machine learning methods. The system adapts to the growth of data and should assist people by efficiently utilizing the information. Although only certain discovery features surfaced in the early development stage of such systems, the benefits for future application demands are also attractive. One such cognitive solution is Louisiana State University's (LSU) Cognitive Information Management (CIM) Shell. The intelligent agents of the system acquire streaming data, such as text and video, to enable real-time monitoring and analysis through inspection and visualization of the system. The CIM Shell not only delivers a warning but also reconfigures on the fly to isolate a crucial event and correct the fault.
Adaptability:
The model’s networked intelligent agents acquire data from various sources; the data may be in the form of documents and video, to form an interactive sensing, inspection, and visualization system that allows real-time monitoring and analysis. The CIM Shell not only gives a warning, but also reconfigures spontaneously to isolate and rectify a critical event.
Contextual Understanding:
Cognitive learning helps AI systems to perceive and interpret data in context. They can recognize important patterns, linkages, and dependencies, allowing for more accurate predictions and judgments.
Continuous Improvement:
Cognitive learning is an iterative process in which AI systems learn from input, identify areas for development, and continuously update their models and algorithms. To improve performance, they can use techniques such as reinforcement learning.
The model in a cognitive computing system emphasizes the collection of data as well as the set of techniques used to produce and score hypotheses in order to solve problems, answer questions, or uncover new information from the corpus. What type of forecasts you can make, which trends and anomalies you can spot, and which actions you can take are all determined by how you model the environment. The system's designers provide the initial model, but the cognitive system updates it and uses it to give solutions to queries or provide in-depth information. The corpus is the store of knowledge used by machine learning algorithms to continually modify the model based on feedback from the user. System performance is mainly influenced by the choice of data structure, since it is accessed repeatedly for generating hypotheses and retrieving information. The design process is carried out with common workloads before being implemented on particular architectures.
A cognitive system is intended to anticipate future outcomes by using a domain model. A cognitive system is created with a number of phases. It necessitates comprehension of the sorts of queries that must be asked, accessible data, and the development of a data source large enough to support the production of hypotheses about the domain based on observable facts. As a result, a cognitive system is designed to analyze alternate hypotheses, generate hypotheses from data, and decide the availability of supporting evidence to solve issues. A cognitive system can give end users a strong method to learn and train the system by employing machine learning algorithms, question analysis, and advanced analytics on relevant data, which may be unstructured or organized.
Taxonomies give machine-ordered representations. The W3C [11] refers to ontology as a collection of terms that are more complex in nature. Taxonomy is the organization of different items or classes. It is observed that taxonomies:
Use a hierarchical framework and give identifying name for every item with respect to other objects.
Record membership attributes of every object with respect to other objects.
The objects in any domain are classified or categorized based on distinct criteria. These guidelines must be consistent, comprehensive, and clear.
Ensure that the newly added item must fit into any of the objects or categories and include rigor specifications for the new item.
Each item derives the characteristics of the class above it and may also have additional attributes.
An ontology builds on a taxonomy; it contains additional information about the behavior of things and their connections. Ontologies take into account how a domain impacts components such as model selection, rules, representations, and needed operations. Taxonomy alone is not enough to model this style of thinking. Incorporating an ontology into the representation presents the information to the user in simple terms after processing it. Thus, the Web Ontology Language (OWL) is promoted to present content in a way that allows AI to comprehend complex items, and ontology is used for this purpose. OWL adds vocabulary with formal semantics, allowing for increased machine interpretability of material. A cognitive system can also generate its own internal representations and understandings of ideas in a data-driven way by using techniques such as deep learning, reinforcement learning, or other cognitive architectures, without explicitly relying on pre-defined taxonomies or ontologies.
The usage of ontologies and taxonomies enables the system to learn efficiently, and the tools used to apply these in a system architecture need to be investigated. When designing a system with cognitive activities, then, the first concept that comes to mind is taxonomies. There are a variety of data structures used to store taxonomies; the structure may be a taxonomy tool, a graph database, or a relational database. Taxonomies and ontologies serve as the foundation for computer self-learning, opening the door to previously inconceivable and useful cooperation with machines.
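As a rough illustration of the difference, the toy Python structures below encode a small taxonomy (a hierarchy with inherited membership attributes) and a handful of ontology-style relations; the domain, item names, and attributes are invented for the example.

```python
# A toy taxonomy: each item names its parent and carries membership attributes.
taxonomy = {
    "equipment":  {"parent": None,        "attributes": {"physical": True}},
    "sensor":     {"parent": "equipment", "attributes": {"produces_data": True}},
    "eeg_sensor": {"parent": "sensor",    "attributes": {"signal": "EEG"}},
}

# Ontology-style relations beyond the hierarchy: how items behave and connect.
relations = [
    ("eeg_sensor", "monitors", "patient"),
    ("eeg_sensor", "sends_data_to", "cloud_service"),
]

def inherited_attributes(item):
    """Collect attributes from the item and all of its ancestors in the taxonomy."""
    attrs = {}
    while item is not None:
        attrs = {**taxonomy[item]["attributes"], **attrs}  # child values take priority
        item = taxonomy[item]["parent"]
    return attrs

print(inherited_attributes("eeg_sensor"))
# {'physical': True, 'produces_data': True, 'signal': 'EEG'}
print([r for r in relations if r[0] == "eeg_sensor"])
```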
Cognitive agents become smarter after repeated processing of data, from which the system learns by itself until it can give accurate results. Several techniques are available for the learning process; they are depicted in Figure 1.5.
Supervised Learning for Beginners
Artificial intelligence plays a very important role in the cognitive systems learning process. In the learning phase, systems are trained using supervised machine learning algorithms to recognize the association among the available data. When presented with fresh data, the system tries to relate the new data based on the pattern rule and constantly improves its decision by developing the ability to relate new patterns by updating the additional pattern finding rules into its knowledge base. This type of artificial intelligence is sometimes used in business chatbots built to answer commonly asked questions through the phone or instant messenger.
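For the FAQ-chatbot use case just mentioned, a minimal supervised-learning sketch might look like the following; the questions, intent labels, and the choice of a TF-IDF plus logistic regression pipeline are illustrative assumptions (scikit-learn is required).

```python
# Supervised learning: the system is shown questions with known intents and
# learns an association it can apply to fresh questions. Data are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

questions = [
    "how do I reset my password",
    "I forgot my login password",
    "what are your opening hours",
    "when are you open on weekends",
    "how can I cancel my order",
    "I want to cancel a purchase",
]
intents = ["password", "password", "hours", "hours", "cancel", "cancel"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(questions, intents)

print(model.predict(["are you open on sunday"]))   # expected: ['hours']
```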
Figure 1.5 Types of learning [4].
Unsupervised Learning for Beginners
Unlike in supervised learning, the cognitive system is not fed with rules based on the available data in the learning phase; instead, the system must educate itself about the associations among the available data, frame its own rules, and modify them over time based on its learning experience.
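A minimal unsupervised-learning sketch is shown below: the system receives no labels and groups hypothetical customer records by similarity on its own. The data and the choice of k-means clustering are illustrative (scikit-learn is required).

```python
# Unsupervised learning: discover customer segments without any labels.
import numpy as np
from sklearn.cluster import KMeans

# Columns: monthly purchases, average basket value (toy data).
customers = np.array([
    [2, 20], [3, 25], [2, 22],      # occasional small buyers
    [12, 80], [11, 95], [13, 85],   # frequent large buyers
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)           # discovered segments, e.g. two groups of three
print(kmeans.cluster_centers_)  # the profile of each discovered segment
```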
Reinforcement Learning for Beginners
Unlike in supervised and unsupervised learning, the system is not provided with patterns or rules; reinforcement learning instead provides reward points and goals. The system aims to maximize its reward by learning its surroundings. Data discovery and pattern identification from the environment are assisted through a reward-seeking approach. Consider a hide-and-seek game as an example: the cognitive system plays the game against itself millions of times to learn it, and more complex strategies, such as erecting barriers, are learned depending on the problem. In another illustration of AI's divergent thinking, cognitive agents have discovered flaws inside the physical restrictions of their simulation. This enabled them to create gaming tactics that their developers had not expected.
Q-Learning for Beginners
The system learns through a model-free learning approach. The system starts with no rules and zero knowledge about its surroundings in the learning phase; instead, all available options are given to the system to explore. Based on trial-and-error actions, the system has to decide on the best option. It is like searching for a place in an unfamiliar locality without GPS or a map: the system discovers the path itself through trial and error. Following that, it learns how to optimize its travel path and speed to that place by drawing on previous experiences.
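A minimal tabular Q-learning sketch of this "find the destination without a map" idea is given below; the corridor environment, rewards, and hyperparameters are illustrative assumptions.

```python
# Tabular Q-learning: an agent in a 5-cell corridor learns by trial and error
# to reach the goal at cell 4. Environment and parameters are illustrative.
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for _ in range(500):                # episodes of trial and error
    state = 0
    while state != 4:               # cell 4 is the destination
        if rng.random() < epsilon:              # explore
            action = int(rng.integers(n_actions))
        else:                                   # exploit current knowledge
            action = int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else min(4, state + 1)
        reward = 1.0 if next_state == 4 else -0.01   # small cost per step
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(np.argmax(Q, axis=1))   # greedy policy; states 0-3 should prefer action 1 (right)
```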
Combining Learning Strategies
Each technique has its own limitations and advantages. More human-like thinking can be made possible by ensembling the learning techniques, which allow the cognitive system to learn from the available data according to the application. When it comes to SDi's AI military simulation, the aforementioned learning methodologies have been combined into an ensemble methodology with the objective of building a cognitive agent that can one day outperform even the finest military tacticians. Supervised and unsupervised learning approaches, for example, enable us to feed the AI data sets from pre-existing military simulations with real-world battle data.
These learning methodologies also allow us to teach the cognitive agent the basic laws of human fighting and intellect, such as fundamental spatial knowledge. Furthermore, we enable the AI to learn from its own experiences playing against top-tier military specialists and to build its own rules concerning ideal fighting strategies by utilizing reinforcement learning and Q-learning. Unsupervised learning cognitive agents are widely used for personalized suggestions such as Netflix movie recommendations, Amazon product suggestions, and the personalized advertisements encountered while browsing social media.
Many everyday goods and services, ranging from search-engine advertising applications to face recognition on social networking sites to “smart” automobiles, phones, and electric grids, are beginning to display characteristics of Artificial Intelligence (white paper). Cognitive systems, on the other hand, integrate five fundamental skills.
Cognitive systems provide more complete human connections with individuals depending on the mode, form, and quality that each person chooses. They sort through all of this organized and unstructured data to determine what is truly important in engaging a person. These encounters grow more natural, anticipatory, and emotionally appropriate as they continue to learn.
Knowledge in every business and profession is growing faster than any practitioner can keep up with—journals, new procedures, new regulations, new practices, and totally new areas. A notable example may be seen in healthcare, where it is estimated that it took 50 years in 1950 to double the world’s medical knowledge; seven years in 1980; and less than three years in 2015. Meanwhile, in his or her lifetime, each individual will create one million terabytes of health-related data, the equivalent of almost 300 million books. Cognitive systems are intended to assist organizations keep up with the times by acting as a companion for professionals looking to improve their performance.
Cognition allows new types of goods and services to perceive, reason, and learn about consumers and the environment around them. This enables for ongoing refinement and adaptability, as well as augmentation of their capabilities to provide previously unimagined uses. This is already occurring with vehicles, medical gadgets, appliances, and even toys. The Internet of Things is drastically increasing the range of digital products and services—and cognition can now move where code and data go.
Cognition also changes the way a business functions. Business operations with cognitive capabilities capitalize on the phenomenon of data from both internal and external sources. This increases their knowledge of processes, context, and surroundings, leading to continuous learning, improved forecasting, and enhanced operational performance—as well as decision-making at the pace of today’s data.
Finally, the most effective weapon that cognitive firms will have is considerably better "headlights" into an ever-turbulent and complicated future. Such headlights are growing increasingly crucial as executives in many industries are forced to make large bets—on medication discovery, complicated financial modeling, materials science breakthroughs, or founding a business. Leaders may reveal patterns, possibilities, and actionable ideas by applying cognitive technology to massive volumes of data, which would be practically impossible to discover with conventional research or programmed systems alone. Cognitive computing is the third computing age, one that uses deep learning algorithms and big data analytics to address very important problems.
Cognitive computing is based on the scientific principles of artificial intelligence and signal processing. It uses machine learning, reasoning, natural language processing, speech recognition and vision, human–computer interaction, dialogue, and narrative generation, and is integrated into many cross-discipline platforms to improve decision-making. It is modeled on how the human brain senses, reasons, and responds to stimuli, and it is capable of customizing its functionality for individuals and varying environments. The effectiveness of cognitive computing applications is strongly influenced by their design. One temptation, however, is to pursue cognitive technology for the technology's sake. AI technology can assist in searching for the many meanings hidden deep in data, but cognitive computing will assist in making sensible judgments. Thus, cognitive computing is the way to go for making complex judgments. The success or failure of applications depends on how they are started: most losses are seen when starting with the technology instead of the business case. People get excited when they can do so many things with cognitive technology. Nevertheless, focusing on what impacts the bottom line would be best. Cognitive computing system applications are numerous and varied.
In 2011, manufacturing floors throughout the country welcomed a new employee: Baxter, a six-foot-tall, 300-pound robot with two long, dexterous arms and a set of expressive digital eyes that followed its arms everywhere they went. Unlike previous industrial robots, Baxter was collaborative because of cognitive computing—an AI method aimed to imitate the human reasoning process. That is how it was taught: humans could hold its arms and show it how to perform jobs more effectively, acting as mentors to Baxter, who could then perfect those skills. Unfortunately, Baxter's life was cut short; after much early hype, its inventor, Rethink Robotics, struggled for years to enhance its operations. IBM's Watson (Figure 1.6), a cognitive computing system that defeated Brad Rutter and Ken Jennings on Jeopardy!, was the first cognitive computing demonstration presented to the world, in February 2011; it marked the end of the so-called artificial intelligence winter [4]. Watson assists doctors with diagnosing challenging patients in the world of medicine. Peer-reviewed research has been integrated into the system along with medical textbooks and journal articles, and Watson refers to these sources when making its most likely diagnosis. Natural language processing (NLP) is a tool that Watson utilizes to assist the banking sector with finding pertinent information and insights more quickly, facilitating decision-making, and enhancing client experiences. It improves operational effectiveness, analysis, and credit initiation through enterprise-wide data management. IBM's Watson, with its natural language processing (NLP) and AI capabilities, is perhaps the most well-known "face" of cognitive computing, a term coined to describe frameworks that can learn, reason, and communicate in a human-like manner.
Cognitive computing exhibits a data analysis behavior that increases cognitive capacity by continually adapting to and gaining information from the data in an interactive manner [12]. One study builds a cognitive model that integrates cognitive computing and deep belief network algorithms; the developed cognitive model acts as a control system for collaborative robots, and the performance of a collaborative robot control system is improved by combining cognitive computing technology and a deep belief network algorithm [13]. The edge computing vision and the success of cognitive computing laid the foundation for a promising technology approach named edge cognitive computing (ECC), in which cognitive computing is implemented at the network edge. Edge cognitive computing saves computing resources and provides energy-efficient service to the user with ultra-low latency [14]. The observations established that ECC realizes optimized, human-centered behavior prediction to assist in service migration, taking into account the network traffic and the availability of network resources in the environment.
Figure 1.6 History of IBM Watson (source https://andrewlenhardtmd.com/blog/wp-content/uploads/2017/07/IBM-Watson-1.jpg).
Cognitive computing applications in smart cities have seen prominent growth and received much attention. The three defining properties of big data, namely, velocity, variety, and volume, have been handled using data analytics, machine learning, and recommender systems. It has also been explored how the cognitive applications side of smart cities is driven by deep reinforcement learning and its semi-supervised variants. The role of data analytics and machine learning algorithms in different aspects of smart-city application frameworks, such as smart environment, parking, and transportation, has been analyzed to address the challenges in big data [15].
Life sciences have always been demanding and need rapid advancements to improve people's well-being. Traditional medicines offer a steady foundation for treating individuals, but they cannot fulfill the requirements of today's unhealthy lifestyles. Even as new technologies penetrate healthcare communities, cognitive computing has also made its way into the field to address the constraints of traditional medicine [16]. With integrity and intelligence, cognitive systems may be implemented for a wide range of applications in healthcare systems. Figure 1.7 depicts the life cycle of cognitive computing systems in cyberspace [17]. The life cycle consists of different stages, such as data collection and transformation followed by data analysis and response. The response received is added back to cyberspace as a fresh interpretation.
Figure 1.8 depicts a cognitive computing system architecture [18]. The architecture defines the integration of the public cloud, database tools, and TensorFlow for the development of cognitive applications. The cognitive data transmission technique (CDTM) in healthcare uses simulated annealing to perform cognitive tasks and uses MapReduce to perform data analytics. The cognitive EEG approach to detecting pathology uses two convolutional neural network architectures for pathology classification, AlexNet and VGG16. The CDTM collects the EEG signals and other medical data using appropriate sensors and transfers them to the cloud infrastructure. The data sent to the cloud are processed using a deep learning model in the cognitive system to detect pathology. The report generated by the cognitive model is shared with the required medical practitioners for timely further treatment of patients [19].
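The transfer-learning idea behind such a pipeline can be sketched as follows; this is a generic illustration of reusing a pretrained VGG16 backbone on EEG-derived spectrogram images, not the authors' exact CDTM model, and the input size, classifier head, and (commented-out) training data are assumptions (TensorFlow/Keras is required).

```python
# Hedged transfer-learning sketch: reuse VGG16 features for binary pathology
# classification of EEG-derived spectrogram images.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # keep the pretrained features frozen

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # pathological vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit(spectrogram_images, labels, epochs=5)   # hypothetical EEG-derived images
model.summary()
```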
Figure 1.7 Human-centered cognitive cycle [17].
Figure 1.8 Cognitive computing system architecture [18].
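To make the cloud-side classification stage of this pipeline more concrete, the following is a minimal Keras sketch: preprocessed EEG data, represented here as fixed-size time-frequency images, are passed through a small CNN that stands in for the AlexNet/VGG-16 models mentioned above and returns a probability of pathology. The input shape, layer sizes, and dummy batch are assumptions for illustration only.

```python
# Illustrative sketch of the cloud-side deep learning stage: sensor data are
# converted to fixed-size images and a small CNN classifies them as normal
# vs. pathological. Shapes and layer sizes are assumptions, not the cited model.
import numpy as np
import tensorflow as tf

def build_eeg_classifier(input_shape=(64, 64, 1)):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(pathological)
    ])

model = build_eeg_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy batch standing in for preprocessed EEG spectrograms uploaded to the cloud.
batch = np.random.rand(4, 64, 64, 1).astype("float32")
print(model.predict(batch))  # probabilities to include in the clinician report
```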
Personalized medicine may create individualized pharmaceuticals that assure exact drug combinations and consistent dose recommendations. Personalized medicine is a scientifically proven concept in the medical domain. It aids in the early diagnosis of potential health problems, giving medical professionals adequate time to implement preventive measures and save patients' lives [20]. A Personal Medical Digital Assistant (PMDA) performs cognitive screening and assists healthcare professionals in trauma centers. The PMDA model is dynamic and flexible enough to perform situation-aware trauma tracking, and it interacts with the hospital's cognitive services to adjust itself to the patient's current health situation [21]. Healthcare collaboration tools are inherently complicated; as a result, it takes a long time to train individuals to use these cutting-edge instruments. Cognitive computing requires a massive amount of data about each patient to produce accurate findings for that patient [22]. Cognitive model algorithms fail in sensor-based machines when sufficient data are lacking. Preserving the privacy of patient data in the healthcare industry is a nontrivial challenge in applying cognitive computing. As a future research avenue in health care, the cognitive ECC model can be extended to a 5G cognitive healthcare model that includes emotion detection and intelligent sensors to collect body signals and provide adequate care [23].
In the digital age, the usage of wireless network devices is witnessing exponential growth [23]. Intelligent networking systems are prone to cyberattacks, a significant challenge in today's environment. Cybersecurity is the barrier that protects the cyber system from such attacks [24]. The dynamic nature of the network demands a self-contained system capable of making immediate, precise decisions rather than relying on solutions based on predefined static scenarios [25]. Security still needs to be improved despite technological advancement in networking and the various protection methodologies adopted in smart devices, and research continues into quality protocols and standards that will safeguard the data generated and stored while allowing efficient sharing [26]. Many of the challenges in cybersecurity can be addressed using cognitive technologies, which play a prominent role in identifying the behavioral patterns of cybersecurity attacks by understanding human psychological behavior. Linguistic biometric threshold schemes facilitate data exchange that makes use of cognitive processes; the key benefits of this scheme are its hierarchical, layered, and mixed architecture and its multiple levels of data handling at the cloud, fog, and basic levels [27].
A cognitive dynamic system (CDS) is used in smart grid networks to detect false data injection (FDI). FDI is a serious issue because it may lead to blackouts and other dangerous consequences in the smart grid electrical network. IEEE 4-bus and 14-bus distribution networks are used to carry out the simulation. Bayesian reinforcement learning is used to analyze data from previous and current perception–action cycles, and Bayesian filtering maintains the system's stability without degradation even when multiple operations are carried out [28]. Cognitive dynamic systems built on Bayesian filtering-based predictive analysis exhibit human-like behavioral skills in handling direct grid current estimation models and grid networks [29]. Most of these approaches, however, exist in the literature only as theoretical frameworks and still await actual real-time deployment [30].
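A simplified view of such residual-based FDI detection can be sketched with a scalar Kalman (Bayesian) filter: each new grid measurement is compared against the filter's prediction, and measurements whose normalized innovation exceeds a threshold are flagged as possible injections. The model parameters, the injected attack, and the threshold below are illustrative assumptions, not the cited IEEE 4-bus/14-bus study.

```python
# Minimal sketch of residual-based false data injection (FDI) detection with a
# scalar Kalman (Bayesian) filter, in the spirit of the perception-action cycle
# described above. All parameters and the injected attack are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_state = 1.0          # e.g., a steady bus voltage (per unit)
x_est, p_est = 0.0, 1.0   # prior mean and variance
q, r = 1e-4, 0.01         # process and measurement noise variances
threshold = 3.0           # flag if normalized innovation exceeds 3 sigma

for t in range(50):
    z = true_state + rng.normal(0, np.sqrt(r))
    if 30 <= t < 35:
        z += 0.5          # injected false data

    p_pred = p_est + q            # predict
    innov = z - x_est             # innovation (residual)
    s = p_pred + r                # innovation variance
    if abs(innov) / np.sqrt(s) > threshold:
        print(f"t={t}: possible FDI, residual={innov:.3f}")
        continue                  # reject the suspect measurement

    k = p_pred / s                # update
    x_est = x_est + k * innov
    p_est = (1 - k) * p_pred
```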
The cognitive IoT (CIoT) architecture for smart buildings shown in Figure 1.9 uses several sensors, and the integrated data obtained from them are analyzed to avoid catastrophic accidents. The architecture comprises three essential tiers, edge, platform, and enterprise, deployed with cognitive computing techniques [31]. The topmost layer is the cloud infrastructure, offered as a platform as a service (PaaS); the middle platform tier provides APIs, user interaction, and machine learning support for reasoning; and data are exchanged among the layers through a web service interface. PaaS makes access to the different applications easier for clients.
Figure 1.9 High-level cognitive IoT architecture [31].
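The data flow through the three CIoT tiers can be pictured with a toy Python sketch in which the edge tier collects sensor readings, the platform tier exposes an analysis service (a simple rule standing in for machine-learning reasoning), and the enterprise tier acts on the result. The class names, readings, and fire-risk rule are hypothetical.

```python
# Toy sketch of the three CIoT tiers described above. Names and thresholds are
# illustrative assumptions, not the architecture from reference [31].
class EdgeTier:
    def read_sensors(self):
        return {"temperature_c": 58.0, "smoke_ppm": 420.0, "occupancy": 12}

class PlatformTier:
    """Cloud PaaS layer: offers an analysis API over data pushed from the edge."""
    def analyze(self, readings):
        risk = readings["temperature_c"] > 50 and readings["smoke_ppm"] > 300
        return {"fire_risk": risk, "readings": readings}

class EnterpriseTier:
    def act(self, report):
        if report["fire_risk"]:
            print("ALERT: evacuate building and notify fire services")
        else:
            print("Status normal:", report["readings"])

edge, platform, enterprise = EdgeTier(), PlatformTier(), EnterpriseTier()
enterprise.act(platform.analyze(edge.read_sensors()))
```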
The people-centric cognitive Internet of Things (PIoT) comprises two layers, namely a local resource layer and a cloud layer, with three essential nodes: a cloud coordinator, a data source, and a people node. PIoT provides a device-to-device interface that manages CPS services and human data, while the cloud layer analyzes data using machine learning algorithms. Using cognitive computing, PIoT can also report on the particulate matter in the air and the degree of each user's exposure to it [32].
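As a rough illustration of the kind of exposure report the PIoT cloud layer could produce, the sketch below combines people-node location traces with data-source PM2.5 readings and computes each user's time-weighted exposure. The zones, readings, and traces are invented for the example.

```python
# Hypothetical exposure report: combine per-zone PM2.5 readings with each
# user's time spent in those zones to estimate individual exposure.
pm25_by_zone = {"home": 8.0, "street": 35.0, "office": 12.0}  # micrograms/m^3

user_traces = {
    "alice": [("home", 8), ("street", 1), ("office", 8)],   # (zone, hours)
    "bob":   [("street", 3), ("office", 6), ("home", 10)],
}

def exposure_report(traces, pm25):
    report = {}
    for user, trace in traces.items():
        total_hours = sum(hours for _, hours in trace)
        dose = sum(pm25[zone] * hours for zone, hours in trace)
        report[user] = round(dose / total_hours, 1)  # time-weighted exposure
    return report

print(exposure_report(user_traces, pm25_by_zone))
```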
Cognitive computing, a new AI idea that mimics the human brain's thinking process, is gradually thriving in the automation of Industry 4.0. The problem of "data islands" can be solved by federated learning, which offers efficient processing and privacy preservation. Blockchain-based federated learning achieves faster convergence through a completely decentralized approach, making the learning model resilient to poisoning attacks. A decentralized cognitive computing paradigm is established in the Industry 4.0 model by combining federated learning with blockchain. Including cognitive computing in federated learning for Industry 4.0 improves its accuracy, strengthens its resistance to poisoning attacks, and supports its incentive mechanism [33].
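The federated part of such a scheme can be illustrated with a minimal averaging sketch: each node fits a local model on its private data and shares only the model weights, and a coordinator aggregates them, with a median aggregation hinting at how a poisoned update can be tolerated. The data, the linear model, and the poisoning example are assumptions for illustration and do not reproduce the blockchain mechanism of [33].

```python
# Minimal federated averaging sketch: local training on private "data islands",
# weight sharing only, and simple vs. robust aggregation at the coordinator.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def local_update(n=100):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # local training step
    return w

updates = [local_update() for _ in range(4)]
updates.append(np.array([50.0, 50.0]))          # one poisoned update

fedavg = np.mean(updates, axis=0)               # plain federated averaging
robust = np.median(updates, axis=0)             # simple robust aggregation

print("FedAvg:", fedavg)   # pulled off course by the poisoned node
print("Median:", robust)   # stays close to the true weights [2, -1]
```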
A blockchain-based architecture is proposed to offer private and secure spatiotemporal smart contract services for the long-term Internet of Things (IoT)-enabled sharing economy in mega smart cities. The framework provides a long-term incentive mechanism that can support safe smart-city services such as smart contracts, the sharing economy, and cyber-physical communication built on IoT and blockchain [34].
A recommender system with cognitive intelligence should be capable of learning from experienced domain experts in the context of recommendation. Recommender systems have traditionally been thought of as e-commerce product recommenders (e.g., Amazon and eBay), playlist generators for video/music services (e.g., Netflix and Spotify), or social content recommenders (e.g., Facebook and Twitter). In modern organizations, however, the recommender system is heavily data- and knowledge-driven and depends on cognitive elements such as user attitude, behavior, and personality [35].
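One simple way to picture such a cognition-aware recommender is to score items by the similarity between an item profile and a user profile inferred from behavioral and personality signals, as in the hypothetical sketch below; the features, profiles, and items are invented for illustration.

```python
# Hypothetical sketch: rank items by cosine similarity between item profiles
# and a user profile built from behavioral/personality signals.
import numpy as np

FEATURES = ["risk_tolerance", "prefers_video", "price_sensitivity"]

user_profile = np.array([0.8, 0.2, 0.6])   # inferred from behavior and attitude

items = {
    "conservative_bond_fund": np.array([0.1, 0.0, 0.9]),
    "growth_stock_course_video": np.array([0.9, 1.0, 0.3]),
    "premium_advisory_service": np.array([0.7, 0.1, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

ranked = sorted(items, key=lambda name: cosine(user_profile, items[name]), reverse=True)
for name in ranked:
    print(f"{name}: {cosine(user_profile, items[name]):.2f}")
```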