The book details deep learning models such as ANNs, RNNs, and LSTMs, and their applications in industrial sectors such as transportation, healthcare, the military, and agriculture, with validated and effective results that will help researchers find solutions to their deep learning research problems.
Audience
Researchers in artificial intelligence, big data, computer science, and electronic engineering, as well as industry engineers in transportation, healthcare, biomedicine, the military, and agriculture.
Page count: 686
Year of publication: 2021
Cover
Title Page
Copyright
Preface
Part 1: Deep Learning and Its Models
1 CNN: A Review of Models, Application of IVD Segmentation
1.1 Introduction
1.2 Various CNN Models
1.3 Application of CNN to IVD Detection
1.4 Comparison With State-of-the-Art Segmentation Approaches for Spine T2W Images
1.5 Conclusion
References
2 Location-Aware Keyword Query Suggestion Techniques With Artificial Intelligence Perspective
2.1 Introduction
2.2 Related Work
2.3 Artificial Intelligence Perspective
2.4 Architecture
2.5 Conclusion
References
3 Identification of a Suitable Transfer Learning Architecture for Classification: A Case Study with Liver Tumors
3.1 Introduction
3.2 Related Works
3.3 Convolutional Neural Networks
3.4 Transfer Learning
3.5 System Model
3.6 Results and Discussions
3.7 Conclusion
References
4 Optimization and Deep Learning-Based Content Retrieval, Indexing, and Metric Learning Approach for Medical Images
4.1 Introduction
4.2 Related Works
4.3 Proposed Method
4.4 Results and Discussion
4.5 Conclusion
References
Part 2: Applications of Deep Learning
5 Deep Learning for Clinical and Health Informatics
5.1 Introduction
5.2 Related Work
5.3 Motivation
5.4 Scope of the Work in Past, Present, and Future
5.5 Deep Learning Tools and Methods Available for Clinical and Health Informatics
5.6 Deep Learning: Not-So-Near Future in Biomedical Imaging
5.7 Challenges Faced in Using Deep Learning for Biomedical Imaging
5.8 Open Research Issues and Future Research Directions in Biomedical Imaging (Healthcare Informatics)
5.9 Conclusion
References
6 Biomedical Image Segmentation by Deep Learning Methods
6.1 Introduction
6.2 Overview of Deep Learning Algorithms
6.3 Other Deep Learning Architecture
6.4 Biomedical Image Segmentation
6.5 Conclusion
References
7 Multi-Lingual Handwritten Character Recognition Using Deep Learning
7.1 Introduction
7.2 Related Works
7.3 Materials and Methods
7.4 Experiments and Results
7.5 Conclusion
References
8 Disease Detection Platform Using Image Processing Through OpenCV
8.1 Introduction
8.2 Problem Statement
8.3 Conclusion
8.4 Summary
References
9 Computer-Aided Diagnosis of Liver Fibrosis in Hepatitis Patients Using Convolutional Neural Network
9.1 Introduction
9.2 Overview of System
9.3 Methodology
9.4 Performance and Analysis
9.5 Experimental Results
9.6 Conclusion and Future Scope
References
Part 3: Future Deep Learning Models
10 Lung Cancer Prediction in Deep Learning Perspective
10.1 Introduction
10.2 Machine Learning and Its Application
10.3 Related Work
10.4 Why Deep Learning on Top of Machine Learning?
10.5 How Is Deep Learning Used for Prediction of Lung Cancer?
10.6 Conclusion
References
11 Lesion Detection and Classification for Breast Cancer Diagnosis Based on Deep CNNs from Digital Mammographic Data
11.1 Introduction
11.2 Background
11.3 Methods
11.4 Application of Deep CNN for Mammography
11.5 System Model and Results
11.6 Research Challenges and Discussion on Future Directions
11.7 Conclusion
References
12 Health Prediction Analytics Using Deep Learning Methods and Applications
12.1 Introduction
12.2 Background
12.3 Predictive Analytics
12.4 Deep Learning Predictive Analysis Applications
12.5 Discussion
12.6 Conclusion
References
13 Ambient-Assisted Living of Disabled Elderly in an Intelligent Home Using Behavior Prediction—A Reliable Deep Learning Prediction System
13.1 Introduction
13.2 Activities of Daily Living and Behavior Analysis
13.3 Intelligent Home Architecture
13.4 Methodology
13.5 Senior Analytics Care Model
13.6 Results and Discussions
13.7 Conclusion
Nomenclature
References
14 Early Diagnosis Tool for Alzheimer’s Disease Using 3D Slicer
14.1 Introduction
14.2 Related Work
14.3 Existing System
14.4 Proposed System
14.5 Results and Discussion
14.6 Conclusion
References
Part 4: Deep Learning - Importance and Challenges for Other Sectors
15 Deep Learning for Medical Healthcare: Issues, Challenges, and Opportunities
15.1 Introduction
15.2 Related Work
15.3 Development of Personalized Medicine Using Deep Learning: A New Revolution in Healthcare Industry
15.4 Deep Learning Applications in Precision Medicine
15.5 Deep Learning for Medical Imaging
15.6 Drug Discovery and Development: A Promise Fulfilled by Deep Learning Technology
15.7 Application Areas of Deep Learning in Healthcare
15.8 Privacy Issues Arising With the Usage of Deep Learning in Healthcare
15.9 Challenges and Opportunities in Healthcare Using Deep Learning
15.10 Conclusion and Future Scope
References
16 A Perspective Analysis of Regularization and Optimization Techniques in Machine Learning
16.1 Introduction
16.2 Regularization in Machine Learning
16.3 Convexity Principles
16.4 Conclusion and Discussion
References
17 Deep Learning-Based Prediction Techniques for Medical Care: Opportunities and Challenges
17.1 Introduction
17.2 Machine Learning and Deep Learning Framework
17.3 Challenges and Opportunities
17.4 Clinical Databases—Electronic Health Records
17.5 Data Analytics Models—Classifiers and Clusters
17.6 Deep Learning Approaches and Association Predictions
17.7 Conclusion
17.8 Applications
References
18 Machine Learning and Deep Learning: Open Issues and Future Research Directions for the Next 10 Years
18.1 Introduction
18.2 Evolution of Machine Learning and Deep Learning
18.3 The Forefront of Machine Learning Technology
18.4 The Challenges Facing Machine Learning and Deep Learning
18.5 Possibilities With Machine Learning and Deep Learning
18.6 Potential Limitations of Machine Learning and Deep Learning
18.7 Conclusion
Acknowledgement
Contribution/Disclosure
References
Index
Chapter 1
Figure 1.1 Architecture of LeNet-5.
Figure 1.2 Architecture of AlexNet.
Figure 1.3 Architecture of ZFNet.
Figure 1.4 Architecture of VGG-16.
Figure 1.5 Inception module.
Figure 1.6 Architecture of GoogleNet.
Figure 1.7 (a) A residual block.
Figure 1.8 Architecture of ResNeXt.
Figure 1.9 Architecture of SE-ResNet.
Figure 1.10 Architecture of DenseNet.
Figure 1.11 Architecture of MobileNets.
Chapter 2
Figure 2.1 General architecture of a search engine.
Figure 2.2 The increase in mobile users.
Figure 2.3 AI-powered location-based system.
Figure 2.4 Architecture diagram for querying.
Chapter 3
Figure 3.1 Phases of CECT images (1: normal liver; 2: tumor within liver; 3: sto...
Figure 3.2 Architecture of convolutional neural network.
Figure 3.3 AlexNet architecture.
Figure 3.4 GoogLeNet architecture.
Figure 3.5 Residual learning—building block.
Figure 3.6 Architecture of ResNet-18.
Figure 3.7 System model for case study on liver tumor diagnosis.
Figure 3.8 Output of bidirectional region growing segmentation algorithm: (a) in...
Figure 3.9 HA Phase Liver CT images: (a) normal liver; (b) HCC; (c) hemangioma; ...
Figure 3.10 Training progress for AlexNet.
Figure 3.11 Training progress for GoogLeNet.
Figure 3.12 Training progress for ResNet-18.
Figure 3.13 Training progress for ResNet-50.
Chapter 4
Figure 4.1 Proposed system for image retrieval.
Figure 4.2 Schematic of the deep convolutional neural networks.
Figure 4.3 Proposed feature extraction system.
Figure 4.4 Proposed model for the localization of the abnormalities.
Figure 4.5 Graph for the retrieval performance of the metric learning for VGG19.
Figure 4.6 PR values for the state-of-the-art ConvNet model for CT images.
Figure 4.7 PR values for the state-of-the-art CNN model for CT images.
Figure 4.8 Proposed system—PR values for the CT images.
Figure 4.9 PR values for proposed content-based image retrieval.
Figure 4.10 Graph for loss function of proposed deep regression networks for tra...
Figure 4.11 Graph for loss function of proposed deep regression networks for val...
Chapter 5
Figure 5.1 Different informatics in healthcare [28].
Chapter 6
Figure 6.1 CT image reconstruction (past, present, and future) [3].
Figure 6.2 (a) Classic machine learning algorithm, (b) Deep learning algorithm.
Figure 6.3 Traditional neural network.
Figure 6.4 Convolutional Neural Network.
Figure 6.5 Psoriasis images [2].
Figure 6.6 Restricted Boltzmann Machine.
Figure 6.7 Autoencoder architecture with vector and image inputs [1].
Figure 6.8 Image of chest x-ray [60].
Figure 6.9 Regular thoracic disease identified in chest x-rays [23].
Figure 6.10 MRI of human brain [4].
Chapter 7
Figure 7.1 Architecture of the proposed approach.
Figure 7.2 Sample Math dataset (including English characters).
Figure 7.3 Sample Bangla dataset (including Bangla numeric).
Figure 7.4 Sample Devanagari dataset (including Hindi numeric).
Figure 7.5 Dataset distribution for English dataset.
Figure 7.6 Dataset distribution for Hindi dataset.
Figure 7.7 Dataset distribution for Bangla dataset.
Figure 7.8 Dataset distribution for Math Symbol dataset.
Figure 7.9 Dataset distribution.
Figure 7.10 Precision-recall curve on English dataset.
Figure 7.11 ROC curve on English dataset.
Figure 7.12 Precision-recall curve on Hindi dataset.
Figure 7.13 ROC curve on Hindi dataset.
Figure 7.14 Precision-recall curve on Bangla dataset.
Figure 7.15 ROC curve on Bangla dataset.
Figure 7.16 Precision-recall curve on Math Symbol dataset.
Figure 7.17 ROC curve on Math symbol dataset.
Figure 7.18 Precision-recall curve of the proposed model.
Figure 7.19 ROC curve of the proposed model.
Chapter 8
Figure 8.1 Eye image dissection [34].
Figure 8.2 Cataract algorithm [10].
Figure 8.3 Pre-processing algorithm [48].
Figure 8.4 Pre-processing analysis [39].
Figure 8.5 Morphologically opened [39].
Figure 8.6 Finding circles [40].
Figure 8.7 Iris contour separation [40].
Figure 8.8 Image inversion [41].
Figure 8.9 Iris detection [41].
Figure 8.10 Cataract detection [41].
Figure 8.11 Healthy eye vs. retinoblastoma [33].
Figure 8.12 Unilateral retinoblastoma [18].
Figure 8.13 Bilateral retinoblastoma [19].
Figure 8.14 Classification of stages of skin cancer [20].
Figure 8.15 Eye cancer detection algorithm.
Figure 8.16 Sample test cases.
Figure 8.17 Actual working of the eye cancer detection algorithm.
Figure 8.18 Melanoma example [27].
Figure 8.19 Melanoma detection algorithm.
Figure 8.20 Asymmetry analysis.
Figure 8.21 Border analysis.
Figure 8.22 Color analysis.
Figure 8.23 Diameter analysis.
Figure 8.24 Completed detailed algorithm.
Chapter 9
Figure 9.1 Basic overview of a proposed computer-aided system.
Figure 9.2 Block diagram of the proposed system for finding out liver fibrosis.
Figure 9.3 Block diagram representing different pre-processing stages in liver f...
Figure 9.4 Flow chart showing Student's t-test.
Figure 9.5 Diagram showing SegNet architecture for convolutional encoder and dec...
Figure 9.6 Basic block diagram of VGG-16 architecture.
Figure 9.7 Flow chart showing SegNet working process for classifying liver fibro...
Figure 9.8 Overall process of the CNN of the system.
Figure 9.9 The stages in identifying liver fibrosis by using Convolutional Neural...
Figure 9.10 Multi-layer neural network architecture for a CAD system for diagnos...
Figure 9.11 Graphical representation of Support Vector Machine.
Figure 9.12 Experimental analysis graph for different classifiers in terms of acc...
Chapter 10
Figure 10.1 Block diagram of machine learning.
Figure 10.2 Machine learning algorithm.
Figure 10.3 Structure of deep learning.
Figure 10.4 Architecture of DNN.
Figure 10.5 Architecture of CNN.
Figure 10.6 System architecture.
Figure 10.7 Image before histogram equalization.
Figure 10.8 Image after histogram equalization.
Figure 10.9 Edge detection.
Figure 10.10 Edge segmented image.
Figure 10.11 Total cases.
Figure 10.12 Result comparison.
Chapter 11
Figure 11.1 Breast cancer incidence rates worldwide (source: International Agenc...
Figure 11.2 Images from MIAS database showing normal, benign, malignant mammogra...
Figure 11.3 Image depicting noise in a mammogram.
Figure 11.4 Architecture of CNN.
Figure 11.5 A complete representation of all the operations that take place at va...
Figure 11.6 An image depicting P_outer, P_lesion, and P_breast in a mammogram.
Figure 11.7 The figure depicts two images: (a) mammogram with a malignant mass a...
Figure 11.8 A figure depicting the various components of a breast as identified ...
Figure 11.9 An illustration of how a mammogram image having a tumor is segmented t...
Figure 11.10 A schematic representation of classification procedure of CNN.
Figure 11.11 A schematic representation of classification procedure of CNN durin...
Figure 11.12 Proposed system model.
Figure 11.13 Flowchart for MIAS database and unannotated labeled images.
Figure 11.14 Image distribution for training model.
Figure 11.15 The graph shows the loss for the trained model on train and test da...
Figure 11.16 The graph shows the accuracy of the trained model for both test and...
Figure 11.17 Depiction of the confusion matrix for the trained CNN model.
Figure 11.18 Receiver operating characteristics of the trained model.
Figure 11.19 The image shows the summary of the CNN model.
Figure 11.20 Performance parameters of the trained model.
Figure 11.21 Prediction of one of the images collected from a diagnostic center.
Chapter 12
Figure 12.1 Deep learning [14]. (a) A simple, multilayer deep neural network tha...
Figure 12.2 Flowchart of the model [25]. The orange icon indicates the dataset, ...
Figure 12.3 Evaluation result [25].
Figure 12.4 Deep learning techniques evaluation results [25].
Figure 12.5 Deep transfer learning–based screening system [38].
Figure 12.6 Classification result.
Figure 12.7 Regression result [45].
Figure 12.8 AE model of deep learning [47].
Figure 12.9 DBN for induction motor fault diagnosis [68].
Figure 12.10 CNN model for health monitoring [80].
Figure 12.11 RNN model for health monitoring [87].
Figure 12.12 Deep learning models usage.
Chapter 13
Figure 13.1 Intelligent home layout model.
Figure 13.2 Deep learning model in predicting behavior analysis.
Figure 13.3 Lifestyle-oriented context-aware model.
Figure 13.4 Components for the identification, simulation, and detection of acti...
Figure 13.5 Prediction stages.
Figure 13.6 Analytics of events.
Figure 13.7 Prediction of activity duration.
Chapter 14
Figure 14.1 Comparison of normal and Alzheimer brain.
Figure 14.2 Proposed AD prediction system.
Figure 14.3 KNN classification.
Figure 14.4 SVM classification.
Figure 14.5 Load data in 3D slicer.
Figure 14.6 3D slicer visualization.
Figure 14.7 Normal patient MRI.
Figure 14.8 Alzheimer patient MRI.
Figure 14.9 Comparison of hippocampus region.
Figure 14.10 Accuracy of algorithms with baseline records.
Figure 14.11 Accuracy of algorithms with current records.
Figure 14.12 Comparison of results with and without the Dice coefficient.
Chapter 15
Figure 15.1 U-Net architecture [19].
Figure 15.2 Architecture of the 3D-DCSRN model [29].
Figure 15.3 SMILES code for Cyclohexane and Acetaminophen [32].
Figure 15.4 Medical chatbot architecture [36].
Chapter 16
Figure 16.1 A classical perceptron.
Figure 16.2 Forward and backward paths on an ANN architecture.
Figure 16.3 A DNN architecture.
Figure 16.4 A DNN architecture for digit classification.
Figure 16.5 Underfit and overfit.
Figure 16.6 Functional mapping.
Figure 16.7 A generalized Tikhonov functional.
Figure 16.8 (a) With hidden layers (b) Dropping h2 and h5.
Figure 16.9 Image cropping as one of the features of data augmentation.
Figure 16.10 Early stopping criteria based on errors.
Figure 16.11 (a) Convex, (b) Non-convex.
Figure 16.12 (a) Affine (b) Convex function.
Figure 16.13 Workflow and an optimizer.
Figure 16.14 (a) Error (cost) function (b) Elliptical: Horizontal cross section.
Figure 16.15 Contour plot for a quadratic cost function with elliptical contours...
Figure 16.16 Gradients when steps are varying.
Figure 16.17 Local minima. (When the gradient ∇ of the partial derivatives is po...
Figure 16.18 Contour plot showing basins of attraction.
Figure 16.19 (a) Saddle point S. (b) Saddle point over a two-dimensional error s...
Figure 16.20 Local information encoded by the gradient usually does not support ...
Figure 16.21 Direction of gradient change.
Figure 16.22 Rolling ball and its trajectory.
Chapter 17
Figure 17.1 Artificial Neural Networks vs. Architecture of Deep Learning Model [...
Figure 17.2 Machine learning and deep learning techniques [4, 5].
Figure 17.3 Model of reinforcement learning (https://www.kdnuggets.com).
Figure 17.4 Data analytical model [5].
Figure 17.5 Support Vector Machine—classification approach [1].
Figure 17.6 Expected output of K-means clustering [1].
Figure 17.7 Output of mean shift clustering [2].
Figure 17.8 Genetic Signature–based Hierarchical Random Forest Cluster (G-HR Clu...
Figure 17.9 Artificial Neural Networks vs. Deep Learning Neural Networks.
Figure 17.10 Architecture of the Convolutional Neural Network.
Figure 17.11 Architecture of the Human Diseases Pattern Prediction Technique (EC...
Figure 17.12 Comparative analysis: processing time vs. classifiers.
Figure 17.13 Comparative analysis: memory usage vs. classifiers.
Figure 17.14 Comparative analysis: classification accuracy vs. classifiers.
Figure 17.15 Comparative analysis: sensitivity vs. classifiers.
Figure 17.16 Comparative analysis: specificity vs. classifiers.
Figure 17.17 Comparative analysis: FScore vs. classifiers.
Chapter 18
Figure 18.1 Deep Neural Network (DNN).
Figure 18.2 The evolution of machine learning techniques (year-wise).
Chapter 1
Table 1.1 Various parameters of the layers of LeNet.
Table 1.2 Every column indicates which feature maps in S2 are combined by the uni...
Table 1.3 AlexNet layer details.
Table 1.4 Various parameters of ZFNet.
Table 1.5 Various parameters of VGG-16.
Table 1.6 Various parameters of GoogleNet.
Table 1.7 Various parameters of ResNet.
Table 1.8 Comparison of ResNet-50 and ResNeXt-50 (32 × 4d).
Table 1.9 Comparison of ResNet-50, ResNeXt-50, and SE-ResNeXt-50 (32 × 4d).
Table 1.10 Comparison of DenseNet.
Table 1.11 Various parameters of MobileNets.
Table 1.12 State of the art of spine segmentation approaches.
Chapter 2
Table 2.1 History of search engines.
Table 2.2 Three types of user refinement of queries.
Table 2.3 Different approaches for the query suggestion techniques.
Chapter 3
Table 3.1 Types of liver lesions.
Table 3.2 Dataset count.
Table 3.3 Hyperparameter settings for training.
Table 3.4 Confusion matrix for AlexNet.
Table 3.5 Confusion matrix for GoogLeNet.
Table 3.6 Confusion matrix for ResNet-18.
Table 3.7 Confusion matrix for ResNet-50.
Table 3.8 Comparison of classification accuracies.
Chapter 4
Table 4.1 Retrieval performance of metric learning for VGG19.
Table 4.2 Performance of retrieval techniques of the trained VGG19 among fine-tu...
Table 4.3 PR values of various models—a comparison for CT image retrieval.
Table 4.4 Recall vs. precision for proposed content-based image retrieval.
Table 4.5 Loss function of proposed deep regression networks for training datase...
Table 4.6 Loss function of proposed deep regression networks for validation data...
Table 4.7 Landmark details (identification rates vs. distance error) for the pr...
Table 4.8 Accuracy value of the proposed system.
Table 4.9 Accuracy of the retrieval methods compared with the metric learning–ba...
Chapter 6
Table 6.1 Definition of the abbreviations.
Chapter 7
Table 7.1 Performance of proposed models on English dataset.
Table 7.2 Performance of proposed model on Bangla dataset.
Table 7.3 Performance of proposed model on Math Symbol dataset.
Chapter 8
Table 8.1 ABCD factor for TDS value.
Table 8.2 Classification of moles according to TDS value.
Chapter 9
Table 9.1 The confusion matrix for different classifiers.
Table 9.2 Performance analysis of different classifiers: Random Forest, SVM, Naï...
Chapter 10
Table 10.1 Result analysis.
Chapter 11
Table 11.1 Comparison of different techniques and tumor.
Chapter 13
Table 13.1 Cognitive functions related to routine activities.
Table 13.2 Situation and design features.
Table 13.3 Accuracy of prediction.
Chapter 14
Table 14.1 Accuracy comparison and mean of algorithms with baseline records.
Table 14.2 Accuracy comparison and mean of algorithms with current records.
Chapter 15
Table 15.1 Variants of Convolutional Neural Network (CNN).
Table 15.2 Various issues and challenges faced by researchers in using deep learnin...
Chapter 17
Table 17.1 Comparative analysis: classification accuracy for 10 datasets—analysi...
Chapter 18
Table 18.1 Comparison among data mining, machine learning, and deep learning.
Scrivener Publishing
100 Cummings Center, Suite 541J
Beverly, MA 01915-6106
Publishers at Scrivener
Martin Scrivener ([email protected])
Phillip Carmical ([email protected])
Edited by
Amit Kumar Tyagi
This edition first published 2021 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA
© 2021 Scrivener Publishing LLC
For more information about Scrivener publications please visit www.scrivenerpublishing.com.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
Wiley Global Headquarters
111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials, or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.
Library of Congress Cataloging-in-Publication Data
ISBN 9781119785729
Cover image: Pixabay.Com
Cover design by Russell Richardson
Set in size of 11pt and Minion Pro by Manila Typesetting Company, Makati, Philippines
Printed in the USA
10 9 8 7 6 5 4 3 2 1
Due to recent technological developments and the integration of millions of Internet of Things (IoT)-connected devices, a large volume of data is being generated every day. This data, known as big data, is summed up by the 7 V's—Volume, Velocity, Variety, Variability, Veracity, Visualization, and Value. Efficient tools, models, and algorithms are required to analyze this data in order to advance the development of applications in several sectors, including e-healthcare (e.g., for disease prediction) and satellites (e.g., for weather prediction), among others. In the case of biomedical imaging, the analyzed data is very useful to doctors and their patients in making predictive and effective decisions when treating disease. The healthcare sector needs to rely on smart machines/devices to collect data; however, these smart machines/devices currently face several critical issues, including security breaches, leaks of private data, and loss of trust.
We are currently entering the era of smart devices, where robots or machines are being used in most applications to solve real-world problems. These smart machines/devices reduce the burden on doctors, which in turn makes their lives easier and the lives of their patients better, thereby increasing patient longevity, which is the ultimate goal of computer vision. Therefore, our goal in writing this book is to provide complete information on the reliable deep learning models required for e-healthcare applications. Ways in which deep learning can enhance healthcare image or text data for making useful decisions will be discussed. Also presented are reliable deep learning models, such as neural networks, convolutional neural networks, backpropagation, and recurrent neural networks, which are increasingly being used in medical image processing, including for the colorization of black and white X-ray images, automatic machine translation of images, object classification in photographs/images (CT scans), character or text generation (ECG), image caption generation, and more. Hence, reliable deep learning methods that perceive or produce better results are a necessity for highly effective e-healthcare applications. Currently, the most difficult data-related problem that needs to be solved concerns the rapid increase of data generated each day by billions of smart devices. To handle the growing amount of data in healthcare applications, challenges such as the lack of standard tools, efficient algorithms, and a sufficient number of skilled data scientists need to be addressed. As a result, there is growing interest in investigating deep learning models and their use in e-healthcare applications.
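To make the kind of model mentioned above concrete, the short sketch below shows a minimal convolutional neural network for classifying single-channel medical images such as CT or X-ray slices. It is an illustrative example only, not code from any chapter of this book; it assumes the PyTorch library, and the names and sizes used (SmallMedicalCNN, num_classes=2, 64 x 64 inputs, layer widths) are hypothetical choices.

import torch
import torch.nn as nn

class SmallMedicalCNN(nn.Module):
    # Hypothetical example: a tiny CNN for 64 x 64 grayscale scans with two classes.
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel input (grayscale scan)
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64 x 64 -> 32 x 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32 x 32 -> 16 x 16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),                  # e.g., normal vs. abnormal
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SmallMedicalCNN(num_classes=2)
    dummy_batch = torch.randn(4, 1, 64, 64)  # four fake 64 x 64 grayscale images
    logits = model(dummy_batch)
    print(logits.shape)                      # expected: torch.Size([4, 2])

In practice, such a network would be trained on labeled scans with a cross-entropy loss; the forward pass above simply checks that the tensor shapes are consistent.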
Based on the above facts, this book on computational analysis and deep learning for medical care brings together reliable deep learning and deep neural network models for healthcare applications. Its chapters are contributed by reputed authors; the importance of deep learning models is discussed, along with the issues and challenges facing current deep learning models. Also included are innovative deep learning algorithms/models for treating disease in the Medicare population. Finally, several research gaps in deep learning models for healthcare applications are revealed, which will provide opportunities for several research communities.
In conclusion, we want to thank God, our family members, teachers, and friends and, last but not least, all our authors and the publisher from the bottom of our hearts for helping us complete this book before the deadline. Really, kudos to all.
Amit Kumar Tyagi
