The book gives invaluable insights into how artificial intelligence is revolutionizing the management and treatment of neurological disorders, empowering you to stay ahead in the rapidly evolving landscape of healthcare.
Embark on a groundbreaking exploration of the intersection between cutting-edge technology and the intricate complexities of neurological disorders. Artificial Intelligence in Neurological Disorders: Management, Diagnosis and Treatment comprehensively introduces how artificial intelligence is becoming a vital ally in neurology, offering unprecedented advancements in management, diagnosis, and treatment. As the digital age converges with medical expertise, this book unveils a comprehensive roadmap for leveraging artificial intelligence to revolutionize neurological healthcare. Delve into the core principles that underpin AI applications in the field by exploring intricate algorithms that enhance the precision of diagnosis and how machine learning not only refines the understanding of neurological disorders but also paves the way for personalized treatment strategies tailored to individual patient needs. With compelling case studies and real-world examples, the realms of neuroscience and artificial intelligence converge, illustrating the symbiotic relationship that holds the promise of transforming patient care.
Readers of this book will find it:
Audience
Researchers, scientists, industrialists, faculty members, healthcare professionals, hospital management, biomedical industrialists, engineers, and IT professionals interested in studying the intersection of AI and neurology.
Page count: 458
Year of publication: 2025
Cover
Table of Contents
Series Page
Title Page
Copyright Page
Foreword
Preface
1 Artificial Intelligence in Neuroscience: A Clinician’s Perception
1.1 Introduction
1.2 Artificial Intelligence and Healthcare
1.3 Prediction Model of AI
1.4 AI in Upskilling Neurosurgical Procedures
1.5 Artificial-Intelligence-Enhanced Smart Gadgets
1.6 The Use of Deep Learning to Glean Insights from Massive Datasets is Gaining Popularity
1.7 Multimodal Data Improves the Predictive Ability of AI Models
1.8 Conclusion
References
2 Application of AI to a Neurological Disorder
2.1 Introduction
2.2 Various Forms of AI and Existing Studies
2.3 Outcomes and Limitation
2.4 Future Direction
2.5 Conclusion
References
3 Treatment Strategies of Neurological Disorders with Deep Learning Algorithm
3.1 Introduction
3.2 Concept of Deep Learning
3.3 Literature Review
3.4 Deep Learning in Neurology
3.5 Challenges
3.6 Conclusion
References
4 Deep Learning for Early Diagnosis of Neurological Disorders
4.1 Introduction
4.2 Reconstructing and Cleaning Up Raw Data
4.3 Extraction of Biomarkers
4.4 Disease Detection and Diagnosis
4.5 Disease Prediction
4.6 Advancing our Knowledge of Disease
4.7 Curing Ailments
4.8 Future Tendencies
4.9 Conclusion
References
5 Diagnosis of Neurological Disorders Using Artificial Intelligence Advances
5.1 Introduction
5.2 Evolutionary Model for Generalized BCI Technologies
5.3 BCI and AI
5.4 Challenges and Opportunities
5.5 Application of Radiology in Neurological Disorder
5.6 Conclusion
References
6 Integrating Artificial Intelligence with Neuroimaging
6.1 Introduction
6.2 Classification and Regression of Deep Learning for Neuroimaging
6.3 Deep Learning Model
6.4 Various DL to Mitigate the Peril of Image Acquisition
6.5 Applications for the Analysis of Brain Disorders Using Medical Images
6.6 Conclusion
References
7 Cognitive Therapy for Brain Disease: Using a Deep Learning Model
7.1 Introduction
7.2 Background
7.3 Related Work
7.4 Methods
7.5 CNN Model Identifies Phases of AD
7.6 Conclusion
References
8 AI Advancements in Tailored Healthcare for Neurodevelopmental Disorders
8.1 Introduction
8.2 Integration of Personalized Medicine and Artificial Intelligence
8.3 Neurodevelopmental Disorders (NDDs)
8.4 Artificial Intelligence in NDDs
8.5 Challenges for Artificial Intelligence about NDDs
8.6 Conclusion
References
9 Artificial Intelligence and Nanorobotic Application in Neurological Disorder
9.1 Introduction
9.2 Methods for the Determination of AI and Nanorobotic Application in Neurological Disorder
9.3 Artificial Intelligence Tools for Self-Driving Pharmaceutical Treatment
9.4 Telemedicine Tools that Can be Used at Dwelling
9.5 Robotics and Artificial Intelligence (AI) are Used to Manage and Control Human Walking Patterns
9.6 Conclusion
References
10 Insightful Vision: Exploring the Contemporary Applications of Artificial Intelligence in Ophthalmology
10.1 Introduction
10.2 Essential Elements of an Artificial Intelligence Platform
10.3 Utilizing Deep Learning-Based Artificial Intelligence to Forecast Visual Acuity Following Vitrectomy Surgery
10.4 Applications of Artificial Intelligence in Ophthalmology
10.5 Discussion and Perspectives
10.6 The Benefits and Constraints of Utilizing AI Tools in Ophthalmology
10.7 Conclusions
References
Index
End User License Agreement
Chapter 2
Table 2.1 Therapeutic uses of AI technology for neurological illness.
Table 2.2 AI and ML in the treatment of neurological disorders.
Table 2.3 Pros and cons in the diagnosis of neurological illnesses using artif...
Chapter 8
Table 8.1 Categorized as neurodevelopmental disorders (NDDs).
Chapter 9
Table 9.1 Utilizing artificial intelligence (AI) for the automated administrat...
Table 9.2 Utilizing home-based telemedicine procedures for the treatment and c...
Chapter 1
Figure 1.1 Translating technical accomplishment into real clinical impact.
Figure 1.2 Biological and artificial neuron.
Figure 1.3 Artificial neural network.
Figure 1.4 Importance of label in AI.
Chapter 2
Figure 2.1 Artificial intelligence and its subtype.
Chapter 3
Figure 3.1 Machine learning flowchart.
Chapter 4
Figure 4.1 DL structures for brain disorder: (a) U-NET, (b) autocoder, (c) var...
Chapter 6
Figure 6.1 Components of biological and computer neural network.
Figure 6.2 Imaging value chain.
Figure 6.3 Structural and functional imaging.
Figure 6.4 (a) Single layer. (b) Multiple layer.
Figure 6.5 Systematically representing the stacked autoencoder.
Figure 6.6 Deep belief network.
Figure 6.7 Deep Boltzmann machine.
Chapter 7
Figure 7.1 A neuroimaging-based machine learning framework for AD diagnosis.
Figure 7.2 Stacked auto-encoder model.
Figure 7.3 The statistical characteristics of both MRI and PET are represented...
Chapter 8
Figure 8.1 The constituent parts of a stochastic algorithm structure that addr...
Figure 8.2 The AI algorithms with the greatest potential promise. Artificial i...
Figure 8.3 Significant events in history in the field of (tailored) precise me...
Chapter 9
Figure 9.1 The research decision-making process is represented by a PRISMA flo...
Chapter 10
Figure 10.1 Networks of neurons encompass both natural and synthetic systems....
Figure 10.2 A study exploring the relationships between artificial intelligenc...
Figure 10.3 The schematic represents the glaucoma forecasting algorithm’s conc...
Scrivener Publishing, 100 Cummings Center, Suite 541J, Beverly, MA 01915-6106
Publishers at Scrivener: Martin Scrivener ([email protected]) and Phillip Carmical ([email protected])
Rishabha Malviya
Department of Pharmacy, School of Medical and Allied Sciences, Galgotias University, Greater Noida, India
Galgotias Multi-Disciplinary Research & Development Cell (G-MRDC), Galgotias University, Greater Noida, UP, India
Suraj Kumar
Pragya College of Pharmaceutical Science, Gaya, Bihar, India
Aditya Sushil Solanke
Byramjee Jeejeebhoy Government Medical College and Sassoon Hospital, Pune, India
Priyanshi Goyal
School of Pharmacy, Mangalayatan University, Aligarh, India
and
Kapil Chauhan
Max Hospital, Dehradun, Uttarakhand, India
This edition first published 2025 by John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA and Scrivener Publishing LLC, 100 Cummings Center, Suite 541J, Beverly, MA 01915, USA
© 2025 Scrivener Publishing LLC
For more information about Scrivener publications please visit www.scrivenerpublishing.com.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
Wiley Global Headquarters, 111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials, or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.
Library of Congress Cataloging-in-Publication Data
ISBN 978-1-394-34750-6
Cover image: Generated with AI using Adobe Firefly
Cover design by Russell Richardson
Rapid advancements in artificial intelligence (AI) have had far-reaching consequences in various industries, but arguably none more significant than its impact on healthcare, particularly in neurology. The complicated structure of the brain and nervous system makes neurological illnesses among the most complex and difficult to detect and cure. In this context, Artificial Intelligence in Neurological Disorders: Management, Diagnosis, and Treatment appears as a significant and essential contribution to the emerging field of AI applications in healthcare. The integration of AI and neurology is more than just a technical accomplishment; it is transforming the landscape of diagnosis, therapy, and patient management in previously unimagined ways. Under the expert guidance of Dr. Rishabha Malviya, this book brings together diverse applications of AI in the management of neurological disorders.
This book provides an extensive discussion of the possible benefits of AI for patients, ranging from the use of deep learning algorithms for early diagnosis to the implementation of AI in personalized treatment plans. The chapters carefully consider the applications of AI to diseases like dementia, stroke, and epilepsy. They also look ahead to potential applications in neuro-oncology, neurorehabilitation, and neuroimaging. Furthermore, the book addresses the fundamental problems associated with these advancements, such as ethical concerns, data integrity, and the adaptability of AI models.
As worldwide healthcare systems confront growing demands, the role of AI in enhancing efficiency, accuracy, and personalized care has never been more important. We are at the forefront of a new era in which neurological illnesses can be understood and treated with greater precision and insight because of the application of AI. This book stands as a symbol of that progress. I hope that this book will not only educate its readers but also inspire future study and innovation in the field. Dr. Rishabha Malviya and his team have put together a genuinely cutting-edge summary of findings, and I am convinced that this book will be a significant resource for medical practitioners, clinical experts, researchers, and technologists alike.
Happy reading and best wishes.
Dr. Dhruv Galgotia
CEO, Galgotias University, Greater Noida, India
The use of artificial intelligence (AI) in different aspects of diagnosis, treatment, and patient care has initiated a paradigm change in the field of neurology. The advancement of AI technology, particularly machine learning and deep learning, has the potential to transform the understanding, treatment, and management of neurological illnesses. As the healthcare environment evolves, AI opens new avenues for early detection, precise treatment, and personalized medication.
This book bridges the gap between cutting-edge AI technology and its practical applications in neurology. The chapters present comprehensive discussions of the use of artificial intelligence in neurosurgical operations, neuroimaging, brain-computer interfaces, and neurorehabilitation.
AI is changing the way we forecast, diagnose, and treat diseases like epilepsy, stroke, dementia, and mobility problems. It allows for more efficient medical imaging, aids in developing new biomarkers, and promotes cognitive therapy for neurological illnesses. The potential of AI-enhanced smart devices and nanorobotics is expanding the fields of neuropharmacology and neurodevelopmental care, resulting in a new era of personalized, AI-driven healthcare.
This book also discusses the problems and ethical implications associated with the application of AI in neurology. The problems of data quantity, quality, and interpretability are discussed, as well as the legal and ethical implications of AI-driven therapy procedures. This comprehensive investigation of AI applications aims to give readers a sophisticated understanding of AI’s capabilities and limitations in neurological illnesses.
I hope that Artificial Intelligence in Neurological Disorders: Management, Diagnosis, and Treatment will be a useful resource for healthcare practitioners, researchers, and technologists interested in the role of AI in improving neurological care. I believe this book will inspire more innovation in the medical field, fostering collaborative endeavors to enhance life for patients worldwide. Finally, my gratitude goes to Martin Scrivener and the team at Scrivener Publishing for their support in bringing this volume to light.
The Editors
April 2025
Undoubtedly, despite the unprecedented hype, the implementation of AI over the next decade will cause a paradigm shift in healthcare and drug delivery. This chapter aims to outline the clinical applications in which AI is being used in the field of neurology. From a clinical perspective, the authors discuss the field's exponential growth without the sophisticated, technical, computational jargon that typically accompanies such discussions. The chapter introduces the basics of artificial intelligence in healthcare and its numerous uses in the neurosciences. Clinically significant information is concealed in these enormous data sets, and powerful AI algorithms can unlock it. However, it is difficult to translate technical computational accomplishment into meaningful therapeutic impact. Before AI can be used in therapeutic settings, it must undergo extensive and methodical study. Even so, its potential to create a significant influence should not be underestimated, as was the case with previous disruptive innovations. Soon we will be living in a world where medical data collected at the point of service is analyzed by sophisticated machine algorithms in real time to provide useful insights.
Keywords: Artificial intelligence, neuroscience, healthcare, machine learning, clinical application
McCarthy coined the term "artificial intelligence" in 1956 [1]. One of the main authors, a practicing clinician, believes that the "A" in AI should instead stand for "ambient," as in "to enhance, magnify, accelerate, and help." Just what does "artificial" mean in the context of artificial intelligence? Ultimately, AI may be seen as an extension of the inherent cognitive abilities present in every human being. Augmented intelligence can widen the scope of a domain expert's work, and accelerated engineering and analysis can speed up data-rich processes. In the present day, AI enables a constellation of ubiquitous technologies that have a major effect on regular life. By comparison, AI is more like a pole vault than a simple technological advance. Artificial intelligence (AI) is the application of computing resources to accomplish activities typically associated with human intelligence, such as the perception of visual or auditory stimuli, the recognition of spoken language, decision-making, and the translation of languages. Overall, AI will be an integral part of the medicine of the future, which will be more predictive, personalized, precise, participatory, and preventative ("5P" medicine). Because most of the roughly 41 zettabytes (about 41 trillion gigabytes) of digital data that we have access to today are unstructured, artificial intelligence will be necessary to spot patterns and trends that humans simply cannot see.
AI's reliance on data has major repercussions for healthcare. Both the collection and dissemination of medical records, and the businesses that deal with them, are subject to strict rules and regulations, and AI makes it simpler to adhere to those regulations. Some anticipate that the clinician will become a relic in a future where algorithms make diagnoses, wearable technology monitors vital signs, and robots perform surgical procedures at the surgeon's command; it may be asserted that the dominion of human physicians is gradually yielding to a new era. Although artificial intelligence (AI) is currently the purview of the world's largest technological corporations, AI technologies intended to increase patient interaction will still require endorsement and recommendation from clinicians. The future duties of specialists, with the assistance of AI technologies, will transition from extracting data (from images and histology) to managing data (within a therapeutic context). By relieving doctors of the burden of sifting through large quantities of data, AI promises to restore a human touch to medicine [2]. In his provocative piece "Surgery, Virtual Reality, and the Future," Vosburgh stresses the importance of AI addressing the challenges actually faced by surgeons rather than the problems programmers assume they face. Technology must accurately represent the patient's morphological, functional, and physiological status, which is why this concept is so important [3]. Only necessary information should be provided, and only at the appropriate time.
To avoid extinction, existing workflows need to be supplemented rather than reimagined. Applying AI in the neurosciences requires an understanding of how the organic brain operates intelligently [4]; artificial intelligence, after all, attempts to emulate human intelligence. A paradigm shift is underway in healthcare as medical data become widely available and analytical methods advance rapidly. AI aids healthcare decision-making by sifting through mountains of data to find the nuggets of knowledge most relevant to individual patients. Learning and self-correcting features can be built into AI so that it improves at its job with each new piece of data. An AI system can help doctors by giving them access to the most recent research published in medical journals, textbooks, and clinical practice, and it can glean information from large groups of patients.
Machine learning methods analyze genetic data, structured images, and patient trait clusters and infer disease prognoses. Information from clinical notes and medical journals, for example, is extracted using NLP techniques, adding to the depth and breadth of structured medical data. To facilitate ML analysis, NLP methods seek to convert texts into structured data that computers can understand [5]. Medical data collected at the point of service could soon be analyzed by complex machine algorithms to give real-time, actionable insights. Making accurate predictions based on collected data is crucial to the fields of personalized medicine and precision public health. The next hurdle will be translating technical accomplishment into real clinical impact (Figure 1.1). Positive outcomes are emerging from collaborations between physicians and data scientists, a process bolstered by the maturing field of clinical informatics. Although AI should be evaluated thoroughly and systematically before being included in ordinary clinical treatment, its potential to cause a large impact should not be underestimated, as it is similar to that of other paradigm-shifting innovations [6].
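To make the NLP-to-structured-data idea concrete, the sketch below (not taken from the chapter, and using invented toy notes and labels) shows one common pattern: free-text notes are vectorized with TF-IDF and passed to a standard scikit-learn classifier.

```python
# Minimal sketch: turning free-text clinical notes into structured features for ML.
# Assumes scikit-learn is installed; the notes and labels below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient reports recurrent focal seizures, normal MRI",
    "acute onset left-sided weakness, CT shows ischemic changes",
    "progressive memory loss over two years, MMSE 21/30",
    "episodes of loss of awareness with lip smacking, abnormal EEG",
]
labels = ["epilepsy", "stroke", "dementia", "epilepsy"]  # toy diagnostic labels

# TF-IDF converts unstructured text into a numeric matrix a classifier can use.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(notes, labels)

print(model.predict(["sudden right arm weakness and slurred speech, CT pending"]))
```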
Figure 1.1 Translating technical accomplishment into real clinical impact.
The future of neurological management will include the use of precision medicine (PM), which is founded on AI. PM is a new approach to the treatment and prevention of disease that recognizes the importance of individual variability in genetics, environment, and lifestyle. To function, PM requires both an abundance of computing resources and the creation of self-teaching computer programs at a previously unimaginable rate [7].
Data mining in clinical research is being combined with evolutionary computation to develop very resilient models that can correctly categorize more than 99% of instances. EpiCS is a learning classifier system for clinical data modeling that achieves statistical parity with traditional methods (logistic regression analysis and decision tree induction) after training [8]. In a study of 1,271 patient records, researchers compared the performance of artificial neural networks (ANNs) and multivariable logistic regression models in predicting outcomes after head trauma and examined how easily the findings might be replicated; the ANNs significantly outperformed the logistic models in discrimination and calibration, although not in accuracy [9]. It has also been claimed that ANNs can predict changes in intracranial pressure [11]. In a randomized clinical trial involving 150 patients undergoing low back surgery, AI predicted results more accurately than doctors did, in 86% of cases [10]. However, it typically takes a significant number of individuals to construct a database for such probability systems. ANNs have also proven superior to logistic regression models in predicting chronic subdural hematoma outcomes and craniocervical junction injury [12, 13], and the use of ensembled ANNs to predict brain death in a neurosurgical ICU has been reported [14]. Middleton has underlined the need for predictive analytics and cognitive assistance across the translational spectrum and continuum of care to improve health care, offering a 25-year retrospective and a 25-year vision for clinical decision support [15].
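As an illustration of the kind of ANN-versus-logistic-regression comparison described above, the following sketch trains both model families on synthetic tabular data (a stand-in for real head-trauma records, which are not available here) and reports ROC AUC for discrimination and the Brier score for calibration. The architecture and metrics are illustrative assumptions, not the cited studies' methods.

```python
# Sketch comparing a small neural network with logistic regression on a synthetic
# tabular outcome-prediction task. Discrimination is measured with ROC AUC and
# calibration with the Brier score.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1271, n_features=20, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(32, 16),
                                                  max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    p = model.predict_proba(X_te)[:, 1]
    print(f"{name}: AUC={roc_auc_score(y_te, p):.3f}  Brier={brier_score_loss(y_te, p):.3f}")
```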
Ablation of brain tumors is one of the most promising applications of autonomous robotic surgery; the robotic system must take in information about its surroundings and adjust its actions accordingly. An additional difficulty in applying AI to suturing is the necessity of tying knots [16]. Motor cortical regions vary in size and position from person to person, and accurate awareness of these spots is essential for neurosurgical planning. Through robot-assisted image-guided transcranial magnetic stimulation (RiTMS), scientists have been able to reconstruct functional motor maps of the primary motor cortex by recording evoked potentials from specific muscles [17]. Neurosurgical residents are finding it harder to "learn" on a patient in the operating room, so the time has come for more lifelike virtual practice in neurosurgery. The insertion of a ventriculostomy catheter through the brain's parenchyma and into the ventricle is simulated in a computer-based virtual reality platform complete with artificial resistance and relaxation. Learning clinical skills and procedures has been aided by recent developments in artificial intelligence and haptics (object recognition through touch) [18].
Machine learning models that predict the outcome of epilepsy surgery are now accessible; they are based on supervised classification and data mining of patient data, taking into account standard factors, including pathological and neuropsychiatric assessments. The outcome can be predicted from a few clinical, neural, and psychological indicators; notably, not all available variables were required to make the forecast [19, 20]. Another example is automatic seizure detection using the electroencephalogram (EEG), for which sophisticated AI methods have recently been reported that operate immediately after filtering and artifact elimination during preprocessing [21].
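A minimal sketch of the filter-then-classify pipeline mentioned for EEG-based seizure detection is shown below. It assumes SciPy and scikit-learn, uses synthetic signals and random labels purely for illustration, and omits the artifact-rejection and validation steps a real system would need.

```python
# Sketch of the preprocessing-then-classification pipeline: band-pass filter scalp
# EEG windows, extract simple band-power features, and classify each window.
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.ensemble import RandomForestClassifier

fs = 256  # sampling rate (Hz)
b, a = butter(4, [0.5, 40], btype="bandpass", fs=fs)  # keep 0.5-40 Hz

def band_power_features(window):
    """Return mean power in delta/theta/alpha/beta bands for one EEG window."""
    filtered = filtfilt(b, a, window)
    freqs, psd = welch(filtered, fs=fs, nperseg=fs)
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands]

rng = np.random.default_rng(0)
windows = rng.standard_normal((200, fs * 2))   # 200 two-second synthetic windows
labels = rng.integers(0, 2, size=200)          # 0 = background, 1 = seizure (toy labels)

X = np.array([band_power_features(w) for w in windows])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print("training accuracy on toy data:", clf.score(X, labels))
```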
Recent research has confirmed that tumors in the brain can significantly alter normal function. In circumstances where tumor-induced alterations in hemodynamics make direct localization problematic, determining the spatial origin of an activated yet dispersed area is essential, and the predictive capability of functional registration, enhanced by artificial intelligence, surpasses that of anatomical registration. The variable placement and size of different people's functional brain areas make it difficult to find a good match, and when disease is present, it can disrupt functional systems in profound ways. These problems have been addressed by artificial intelligence (AI)-enhanced neural information processing systems [22].
The advent of machine learning has made it possible to grade gliomas automatically and noninvasively using quantitative features acquired from multi-modal MRI scans. Zhang et al. (2017) reported over 90% classification accuracy, better than even the most seasoned neuroradiologist [23]. The prognosis of malignant gliomas may be predicted with significantly greater accuracy by prediction models based on data mining and machine learning algorithms than by histopathologic categorization alone [24]. Semi-automatic segmentation performed by four specialists was compared with a fully automated method of enhanced-tumor compartmentalization; the automated approach was feasible and the outcomes were comparable, although the process was slower [25]. Extraction of significant information from MRI images and MR spectra using machine learning algorithms shows promise as a noninvasive alternative to traditional techniques of tumor classification [26].
Machine learning models have been created to predict post-accident mortality in motorcycle riders [27]. The effectiveness of artificial neural networks (ANNs) for prognostic purposes after head trauma has also been studied; on a variety of performance metrics, the ANN vastly surpassed both regression models and human doctors, and the authors believe that this type of modeling has potential as a clinical decision support tool [28]. Several methods based on machine learning and fuzzy logic have been applied to the study of traumatic brain injuries (TBIs) [29, 30].
Prioritizing and filtering incoming images through deep learning is already widely recognized. The images are analyzed by an algorithm that looks for signs of stroke or bleeding in the brain; if the algorithm finds one of the flagged factors, the patient's images are prioritized for analysis, and if it finds no urgent information, the file is assigned a low priority. Image QC, IR triage, FP, CAD/CAT, and AI-generated reports are all areas where AI has proven useful. The quality of MRI scans can be improved with the help of deep learning algorithms, which can even alert the technician if the scans are too blurry to be read properly, so time spent in the MRI machine can be shortened without sacrificing image quality. The development of improved neuroimaging tools and more refined methods for analyzing neural networks has led to more accurate preoperative diagnoses in recent years. Applications of AI include pinpointing the epileptic focus before surgery, identifying the grade of a tumor via functional magnetic resonance imaging (fMRI) while the subject is in a state of deep sleep, and pinpointing the eloquent cortex before surgery. Individual differences in functional localization have been quantified and visualized thanks to big data analysis of healthy controls, and an atlas of the functional organization of the cortical regions has been created from data on a thousand healthy individuals [31].
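The worklist-prioritization idea can be sketched as a simple priority queue ordered by a model's output. The `hemorrhage_probability` function below is a hypothetical placeholder standing in for a trained CNN; the file names and scores are invented.

```python
# Illustrative sketch of worklist triage: a (hypothetical) trained model scores each
# incoming head CT for suspected hemorrhage/stroke, and studies are read in order of
# that score. The scoring function here is a stand-in, not a real model.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Study:
    priority: float                      # negative model score, so highest risk pops first
    accession: str = field(compare=False)

def hemorrhage_probability(image_path: str) -> float:
    """Placeholder for a trained CNN's output probability for this image."""
    return {"ct_001.dcm": 0.92, "ct_002.dcm": 0.07, "ct_003.dcm": 0.55}.get(image_path, 0.0)

worklist: list[Study] = []
for path in ["ct_001.dcm", "ct_002.dcm", "ct_003.dcm"]:
    score = hemorrhage_probability(path)
    heapq.heappush(worklist, Study(priority=-score, accession=path))

while worklist:
    study = heapq.heappop(worklist)
    print(f"read {study.accession} (model score {-study.priority:.2f})")
```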
AI is included in various tools across stroke management: detection, diagnosis, therapy, outcome prediction, and prognosis assessment. The United States Food and Drug Administration (FDA) has approved an mHealth app that uses artificial intelligence software to analyze CT images for signs of a stroke and then texts a neurologist about its findings. Such a clinical decision support tool has the potential to reduce the wait time for stroke patients to receive treatment. The Contact app [32] developed by Viz.ai is used for clinical decision support in the early detection of stroke and the activation of the specialized resources required to begin treatment. The FDA has warned that this software should not be used in place of a thorough examination of patients but rather for analyzing imaging data.
The invention of cutting-edge brain–machine interfaces, aided by AI, is opening up the world to those who would otherwise be unable to engage with it. With advances in artificial intelligence, sight and touch will no longer be necessary: those with severe motor limitations will be able to control a neural prosthesis or robotic arm with the power of thought alone [33, 34].
The Swallowscope is a high-tech gadget that uses artificial intelligence. Its smartphone app uses a real-time swallowing-sound processing algorithm to automate screening, quantitative evaluation, and visualization of swallowing ability, and a cloud-based mechanism for analyzing and distributing the swallowing sound is included as well. This non-intrusive wearable device can keep a constant eye on the wearer's swallowing actions and evaluate swallowing capacity in real time [35].
Smartphones and sensors accumulate a large amount of information that may be used to train deep neural networks and that may be indicative of a patient's everyday status. Sensors collect sound and movement data, whereas cell phones collect a broader range of data. Movement disorders particularly benefit from sensor data: Kim et al. (2018) [36] measured Parkinson's disease (PD) tremor severity with a convolutional neural network (CNN), Nancy Jane et al. (2016) [37] measured gait severity with a time-delay neural network, and Camps et al. (2018) [38] detected PD gait freezing using a 1D CNN. Data from demographic surveys, medical records, and smartphone/watch sensors were all put to use in the BEAT-PD DREAM challenge (synapse.org/beatpdchallenge) to predict dyskinesia and tremor medication status and severity. Recurrent neural networks have also been applied to late-life depression [39] and autism spectrum disorder [40]. Smartphone apps collect more specific data: speech and motor testing separates people with PD from those without it [41], and to speed up the grading of the interlocking pentagon drawing test, Park et al. (2020) [42] developed a smartphone app powered by a U-Net.
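In the spirit of the wearable-sensor studies above, here is a minimal 1D-CNN sketch (assuming PyTorch, with random tensors standing in for real accelerometer windows) that classifies short movement windows into two classes; the layer sizes are arbitrary choices, not those of the cited models.

```python
# Minimal 1D-CNN sketch for classifying accelerometer windows
# (e.g., freezing-of-gait vs. normal walking) from toy data.
import torch
import torch.nn as nn

class Gait1DCNN(nn.Module):
    def __init__(self, n_channels=3, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                       # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = Gait1DCNN()
windows = torch.randn(8, 3, 256)                # 8 windows of 3-axis data, 256 samples each
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(windows), labels)
loss.backward()                                 # backward pass of one toy training step
print("toy loss:", loss.item())
```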
More and more often, DL architectures use genetic and genomic data, either as a standalone modality or in combination with other modalities. Zhou et al. (2019) [43] employed a CNN to conduct a whole-genome investigation of the contribution of non-coding regions to autism risk. Yin et al. (2019) [44] developed a CNN that considers the structure of genomic data to improve amyotrophic lateral sclerosis (ALS) prediction. With genome data as input, a neural network developed by Sun et al. (2019) [45] successfully classified 12 distinct forms of cancer.
Many research efforts combine multiple modalities because most illnesses are multifaceted and multifactorial. Using stacked autoencoders and multi-kernel learning, Suk and Shen [46] extracted characteristics from combined MRI, PET, and CSF information and then combined them with clinical data. Recent research has consolidated the processing of multimodal input into a single neural network: Punjabi et al. (2019) [47] used MRI and PET data together to diagnose Alzheimer's disease, demonstrating that utilizing both modalities improves diagnostic accuracy. Improved cancer survival prediction can also be achieved through the integration of histopathological images, genetic data, and clinical data [48, 49]. There is currently no agreed-upon method for integrating multimodal data, and integrating data from several sources, like preprocessing it, may require specialized approaches. However, one common practice is to combine the features gleaned from the various modalities at the last fully connected layer of the network [47–50].
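A compact sketch of that feature-fusion pattern, assuming PyTorch and illustrative input dimensions, is shown below: each modality has its own small branch, and the learned features are concatenated just before the final fully connected layer.

```python
# Sketch of feature fusion at the last fully connected layer: an imaging-derived
# feature vector and a clinical/genetic feature vector each pass through their own
# branch, and the learned features are concatenated before the classification head.
import torch
import torch.nn as nn

class FeatureFusionNet(nn.Module):
    def __init__(self, img_dim=128, clin_dim=20, n_classes=2):
        super().__init__()
        self.img_branch = nn.Sequential(nn.Linear(img_dim, 64), nn.ReLU())
        self.clin_branch = nn.Sequential(nn.Linear(clin_dim, 16), nn.ReLU())
        self.head = nn.Linear(64 + 16, n_classes)   # fusion at the last layer

    def forward(self, img_feats, clin_feats):
        fused = torch.cat([self.img_branch(img_feats), self.clin_branch(clin_feats)], dim=1)
        return self.head(fused)

model = FeatureFusionNet()
logits = model(torch.randn(4, 128), torch.randn(4, 20))
print(logits.shape)   # torch.Size([4, 2])
```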
Various types of "deep" neural networks have been designed, including recurrent and convolutional neural networks as well as "deep belief" networks [51]. Strategies that use networks of generators and discriminators to improve efficiency include generative adversarial network techniques [52]. All of these networks are capable of modeling non-linear and high-dimensional characteristics as well as learning from massive amounts of unstructured data like images and text. In doing so, they sidestep several obstacles that have plagued attempts over the previous few decades to adapt traditional machine learning techniques into practical medical biomarker-finding tools [53–56]. In a nutshell, while traditional machine learning algorithms necessitate human intervention to reduce the volume of data via feature reduction and feature selection strategies, deep learning interacts with and uses these enormous datasets with minimal effort [57, 58].
Neuronal firing patterns provide a useful metaphor for grasping the principles behind deep learning [59]. By analogy, both brain cells and deep learning nodes take in information and produce new information based on a set of rules that have been established to facilitate learning (Figure 1.2) [54, 60].
To others, deep learning networks may sound familiar; they are sometimes referred to as artificial neural networks due to their resemblance to biological neural networks [61, 62].
Network interactions between various elements, such as brain cells (neurons) or artificial neural network nodes, lead to iterative learning, which contributes to the network's overall complexity. Several "hidden network layers" are used in a deep learning network, which allows it to learn by passing information between them (see Figure 1.3 for a visual representation of this concept). Since nonlinearities allow for highly adaptable modifications, higher-order properties of the input data can be "self-learned" by a deep learning neural network.
Each node in a deep learning network computes a score by multiplying the values of its incoming nodes (resembling the neuron's dendrites) by the weights on its incoming edges (resembling the neuron's synapses), summing the results, and then adding a bias (resembling the neuron's resting membrane potential, which determines whether or not the neuron will initiate an action potential). Similar to the membrane potential and action potential threshold in neurons, this score is passed into a non-linear activation function. Rectified linear units (ReLUs) are the most widely used activation function in modern AI; they are non-linear, quick, and simple, and they permit learning at the layer level [63]. The ReLU can be thought of as representing the initiation of an action potential (or the lack thereof) in a neuron: negative input values are transformed into zero (activation is not passed on to the subsequent layer), whereas positive input values are passed through unchanged (activation is passed on to the subsequent layer), so its gradient is either zero or one. In deep learning, the activation function of the final "output" layer is typically softmax, in contrast to the activations used in the hidden layers [64]. Softmax is widely used because it converts the scores of all the nodes in the output layer into a probability distribution, and a study of the relationship between deep learning results and relevant clinical labels can make use of this probabilistic output.
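The arithmetic described above can be made concrete with a few lines of NumPy: a weighted sum of inputs plus a bias, a ReLU that zeroes negative pre-activations, and a softmax that turns output scores into probabilities. The numbers below are arbitrary illustrations.

```python
# Numeric illustration of a single node and the output layer described above.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)      # negative pre-activations become 0, positive pass through

def softmax(z):
    e = np.exp(z - z.max())        # subtract max for numerical stability
    return e / e.sum()

x = np.array([0.2, -1.0, 0.5])     # incoming activations ("dendrites")
w = np.array([0.7, 0.1, -0.4])     # weights on incoming edges ("synapses")
b = 0.05                           # bias ("resting membrane potential")

pre_activation = w @ x + b         # weighted sum plus bias
print("hidden node output:", relu(pre_activation))

output_scores = np.array([1.2, -0.3, 0.4])               # pre-softmax scores for 3 classes
print("class probabilities:", softmax(output_scores))    # sums to 1
```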
Figure 1.2 Biological and artificial neuron.
Figure 1.3 Artificial neural network.
For optimal performance, deep-learning networks are calibrated with a loss function that measures how well their predictions match the true clinical label values in the training data. Squared error, hinge loss, and cross-entropy loss [65] are just a few examples of loss functions that can be used to quantify model performance; each offers a different trade-off between false positives and false negatives, with the option to prioritize one or the other depending on the specific context at hand.
After settling on a loss function, the network learns its task by adjusting the weights among its neurons in the various layers to minimize the value of the loss function across the whole set of training samples.
The back-propagation algorithm [66] is used for this purpose; it determines how much each weight contributes to the loss and makes fine adjustments, updating the weights after each set of training examples in proportion to a specified learning rate coefficient, usually in the range of 0.1 to 0.5, so as to minimize the value of the loss function [67]. Loss can be minimized by gradually reducing the learning rate over the course of training, so that the optimal point can be accurately pinpointed [68]. Researchers should not give in to the temptation of using a higher learning rate (i.e., >0.6) in a deep-learning model, as pointed out by Smith et al. (2018) [68].
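The following toy sketch (plain NumPy, synthetic data) illustrates the update rule at the heart of this procedure for a single-layer model: weights move a small step against the gradient of a cross-entropy loss, scaled by the learning rate. Back-propagation applies the same rule layer by layer in deeper networks.

```python
# Toy gradient-descent sketch: weights are nudged opposite the loss gradient,
# scaled by the learning rate, until the loss is (approximately) minimized.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w + 0.1 * rng.standard_normal(200) > 0).astype(float)

w, b, lr = np.zeros(3), 0.0, 0.3                  # learning rate in the 0.1-0.5 range cited above
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # predictions of a single sigmoid unit
    grad_w = X.T @ (p - y) / len(y)               # gradient of cross-entropy loss w.r.t. weights
    grad_b = np.mean(p - y)
    w -= lr * grad_w                              # step against the gradient to reduce the loss
    b -= lr * grad_b

loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
print("final cross-entropy loss:", round(loss, 3))
```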
Because finding the minimum of a noisy gradient descent curve can be challenging, increasing the learning rate results in faster but less accurate deep learning predictions. Faster learning can also be achieved by increasing the batch size (the number of training examples used in one iteration of the deep learning model).
Incorporating many data types into one artificial intelligence model has been shown to boost both model performance and forecast accuracy [69]. Ensemble methods, which use groups of independently trained models, have likewise been demonstrated to outperform single-model approaches on a variety of tasks [70]. This is progress, because there is still a lot of work being done to figure out how to integrate data with radically different dimensions, time scales, and scopes.
Epilepsy is a clinical situation in which multimodal data are anticipated to be helpful. Genetic data and high-resolution brain imaging have both greatly contributed to the understanding of epilepsy in recent years and decades [71–77]. Because various data sources may provide different but complementary information about the disease, merging them into a single classifier is likely to produce more accurate predictive AI modeling of epilepsy than a classifier based on a single data type. EEG recordings [78, 79] and clinical documentation of patient features [80] are two more data sources that may further enrich the modeling.
Since there is a lot of information in high-dimensional data [81], it can be challenging to understand and analyze using traditional statistical approaches [82, 83]. Deep learning's ability to process high-dimensional data can address important questions about epilepsy diagnosis and treatment that cannot be answered by physicians using the tools at their disposal today (Figure 1.4).
Figure 1.4 Importance of label in AI.
Artificial intelligence models that incorporate many data types are a hot topic in the field [84–87]; they learn the inherent cross-relationships between data modalities, which helps the models perform better and make more accurate predictions by isolating and combining the most relevant aspects of the many input modalities. For instance, data fusion can be performed early [88]: a unified deep-learning model is built in which the intrinsic relationships between data modalities are taken into account, data from the modalities is integrated throughout the training process, and the model is trained on the fused representations. In clinical AI, it is crucial to have accurate and comprehensive data, but early fusion is susceptible to missing data, which undermines its benefits. Late data fusion is another approach to integrating data types [89]. This method likewise necessitates a single AI model, but it operates under the premise that the different types of data need not be significantly connected, although their combination is nonetheless crucial to the final model's accuracy and success. Joint fusion [87] is a more recent fusion strategy that integrates data at several levels of the deep learning model; a variety of data types can benefit from this, including text and image files.
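To contrast early and late fusion concretely, the sketch below (scikit-learn, synthetic "modalities") trains one model on concatenated inputs and, as a common late-fusion variant, averages the predictions of separate per-modality models. It illustrates the general strategies, not the specific architectures cited above.

```python
# Compact sketch contrasting early and late fusion on two synthetic "modalities",
# with simple classifiers standing in for deep models.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=30, n_informative=10, random_state=1)
mod_a, mod_b = X[:, :15], X[:, 15:]                    # pretend these are two modalities
A_tr, A_te, B_tr, B_te, y_tr, y_te = train_test_split(mod_a, mod_b, y,
                                                      test_size=0.3, random_state=1)

# Early fusion: concatenate raw features, train one model on the combined input.
early = LogisticRegression(max_iter=1000).fit(np.hstack([A_tr, B_tr]), y_tr)
p_early = early.predict_proba(np.hstack([A_te, B_te]))[:, 1]

# Late fusion (one common variant): train one model per modality, average predictions.
clf_a = LogisticRegression(max_iter=1000).fit(A_tr, y_tr)
clf_b = LogisticRegression(max_iter=1000).fit(B_tr, y_tr)
p_late = (clf_a.predict_proba(A_te)[:, 1] + clf_b.predict_proba(B_te)[:, 1]) / 2

print("early fusion AUC:", round(roc_auc_score(y_te, p_early), 3))
print("late fusion  AUC:", round(roc_auc_score(y_te, p_late), 3))
```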
"Listen to the patient; medicine is a science of uncertainty and an art of probability." It is unclear how the originator of those words, Sir William Osler, would have felt about the application of AI to the medical field. In medicine, a doctor's primary role has always been to learn as much as possible about a patient's condition and make educated recommendations based on those facts. Knowledge requires the ability to make sound decisions under pressure and solve problems with simple means. The lead author, a neurosurgeon who had their education in the BC era, must now be conversant in the concepts of "deep learning" and "Bayesian networks." It was the best of times; it was the worst of times: the age of knowledge, the age of ignorance, the season of hope, the season of sorrow, springtime, wintertime; the beginning of an immortal tale. The author of those lines might as well have been referring to artificial intelligence, which has both good and harmful uses. Only time will tell whether the use of AI in the neurosciences will be beneficial or detrimental. Better outcomes and lower costs associated with AI would convince clinicians to adopt it as a standard tool for neuroscientists. Right now, we are in a transitional period, and a change should be looked at as an opportunity. Compassionate doctors will always be needed, and AI will never be able to replace them. With the help of AI, doctors should be able to have more one-on-one time with their patients and spend less time wading through mountains of paperwork.
1. Myers, A., Stanford's John McCarthy, a seminal figure of artificial intelligence, dies at 84, Stanford University, Stanford, CA, USA, 2011. Available from: https://news.stanford.edu/news/2011/October/john-mccarthy-obit-102511.html. [Last accessed on 2018 Apr 11].
2. Arbour, K.C., Mezquita, L., Long, N., Rizvi, H., Auclin, E., Ni, A., Martínez-Bernal, G., Ferrara, R., Lai, W.V., Hendriks, L.E., Sabari, J.K., Impact of baseline steroids on efficacy of programmed cell death-1 and programmed death-ligand 1 blockade in patients with non–small-cell lung cancer. J. Clin. Oncol., 36, 28, 2872–8, 2018.
3. Vosburgh, K.G., Golby, A., Pieper, S.D., Surgery, virtual reality, and the future. Stud. Health Technol. Inf., 184, 7–13, 2013.
4. Hassabis, D., Kumaran, D., Summerfield, C., Botvinick, M., Neuroscience-inspired artificial intelligence. Neuron, 95, 245–58, 2017.
5. Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H., Wang, Y., Artificial intelligence in healthcare: Past, present and future. Stroke Vasc. Neurol., 2, 4, 230–43, 2017.
6. Editorial: Artificial intelligence in health care: within touching distance. Lancet, 390, 27–39, 2017.
7. Mesko, B., The role of artificial intelligence in precision medicine. Expert Rev. Precis. Med. Drug Dev., 2, 239–41, 2017.
8. Holmes, J.H., Durbin, D.R., Winston, F.K., Discovery of predictive models in an injury surveillance database: an application of data mining in clinical research, in: Proceedings of the AMIA Symposium, p. 359, American Medical Informatics Association, 2000.
9. Eftekhar, B., Mohammad, K., Ardebili, H.E., Ghodsi, M., Ketabchi, E., Comparison of artificial neural network and logistic regression models for prediction of mortality in head trauma based on initial clinical data. BMC Med. Inform. Decis. Mak., 5, 1–8, 2005.
10. Mathew, B., Norris, D., Mackintosh, I., Waddell, G., Artificial intelligence in the prediction of operative findings in low back surgery. Br. J. Neurosurg., 3, 161–70, 1989.
11. Swiercz, M., Mariak, Z., Krejza, J., Lewko, J., Szydlik, P., Intracranial pressure processing with artificial neural networks: Prediction of ICP trends. Acta Neurochir., 142, 401–6, 2000.
12. Abouzari, M., Rashidi, A., Zandi-Toghani, M., Behzadi, M., Asadollahi, M., Chronic subdural hematoma outcome prediction using logistic regression and an artificial neural network. Neurosurg. Rev., 32, 479–84, 2009.
13. Bektas, F., Eken, C., Soyuncu, S., Kilicaslan, I., Cete, Y., Artificial neural network in predicting craniocervical junction injury: An alternative approach to trauma patients. Eur. J. Emerg. Med., 15, 318–23, 2008.
14. Liu, Q., Cui, X., Abbod, M.F., Huang, S.-J., Han, Y.-Y., Shieh, J.-S., Brain death prediction based on ensembled artificial neural networks in neurosurgical intensive care unit. J. Taiwan Inst. Chem. Eng., 42, 97–107, 2011.
15. Middleton, B., Sittig, D.F., Wright, A., Clinical decision support: A 25-year retrospective and a 25-year vision. Yearb. Med. Inform., 25, Suppl 1, S103–16, 2016.
16. Hu, D., Gong, Y., Hannaford, B., Seibel, E.J., Semi-autonomous simulated brain tumor ablation with RavenII surgical robot using behaviour tree. IEEE Int. Conf. Robot. Autom., 2015, 3868–75, 2015.
17. Kantelhardt, S.R., Fadini, T., Finke, M., Kallenberg, K., Siemerkus, J., Bockermann, V., et al., Robot-assisted image-guided transcranial magnetic stimulation for somatotopic mapping of the motor cortex: A clinical pilot study. Acta Neurochir. (Wien), 152, 333–43, 2010.
18. Kapoor, S., Arora, P., Kapoor, V., Jayachandran, M., Tiwari, M., Haptics: Touch feedback technology widening the horizon of medicine. J. Clin. Diagn. Res., 8, 294–9, 2014.
19. Armananzas, R., Alonso-Nanclares, L., Defelipe-Oroquieta, J., Kastanauskaite, A., de Sola, R.G., Defelipe, J., Bielza, C., Larrañaga, P., Machine learning approach for the outcome prediction of temporal lobe epilepsy surgery. PLoS One, 8, 4, e62819, 2013.
20. Sinha, N., Dauwels, J., Kaiser, M., Cash, S.S., Brandon Westover, M., Wang, Y., et al., Predicting neurosurgical outcomes in focal epilepsy patients using computational modeling. Brain, 140, 319–32, 2017.
21. Fergus, P., Hignett, D., Hussain, A., Al-Jumpily, D., Abdel-Aziz, K., Automatic epileptic seizure detection using scalp EEG and advanced artificial intelligence techniques. Biomed. Res. Int., 2015, 1–17, 2015.
22. Langs, G., Golland, P., Tie, Y., Rigolo, L., Golby, A.J., Functional geometry alignment and localization of brain areas. Adv. Neural Inf. Process. Syst., 1, 1225–33, 2010.
23. Zhang, X., Yan, L.-F., Hu, Y.-C., Li, G., Yang, Y., Han, Y., et al., Optimizing a machine learning-based glioma grading system using multi-parametric MRI histogram and texture features. Oncotarget, 8, 47816–30, 2017.
24. Zacharaki, E.I., Morita, N., Bhatt, P., O'Rourke, D.M., Melhem, E.R., Davatzikos, C., Survival analysis of patients with high-grade gliomas based on data mining of imaging variables. AJNR Am. J. Neuroradiol., 33, 1065–71, 2012.
25. Porz, N., Habegger, S., Meier, R., Verma, R., Jilch, A., Fichtner, J., et al., Fully automated enhanced tumor compartmentalization: Man vs. machine reloaded. PLoS One, 11, 1–16, 2016.
26. Ranjith, G., Parvathy, R., Vikas, V., Chandrasekharan, K., Nair, S., Machine learning methods for the classification of gliomas: Initial results using features extracted from MR spectroscopy. Neuroradiol. J., 28, 106–11, 2015.
27. Kuo, P.-J., Wu, S.-C., Chien, P.-C., Rau, C.-S., Chen, Y.-C., Hsieh, H.-Y., et al., Derivation and validation of different machine-learning models in mortality prediction of trauma in motorcycle riders: A cross-sectional retrospective study in southern Taiwan. BMJ Open, 8, 1–11, 2018.
28. Rughani, A.I., Bongard, J., Dumont, T.M., Horgan, M.A., Tranmer, B.I., Use of an artificial neural network to predict head injury outcome. J. Neurosurg., 113, 585–90, 2010.
29. Guler, I., Tunca, A., Gulbandilar, E., Detection of traumatic brain injuries using fuzzy logic algorithm. Expert Syst. Appl., 34, 1312–17, 2008.
30. Liu, N.T. and Salinas, J., Machine learning for predicting outcomes in trauma. Shock, 48, 504–10, 2017.
31. Maesawa, S., Bagarinao, E., Fujii, M., Futamura, M., Wakabayashi, T., Use of network analysis to establish neurosurgical parameters in gliomas and epilepsy. Neurol. Med. Chir. (Tokyo), 56, 158–69, 2016.
32. Available from: https://www.viz.ai. [Last accessed on 2018 Apr 11].
33. Brower, V., When mind meets machine. Eur. Mol. Biol. Organ., 6, 108–10, 2005.
34. Gnanayutham, P., Bloor, C., Cockton, G., Artificial intelligence to enhance a brain-computer interface, in: HCI International 2003 Proceedings, Stephanidis, C. (Ed.), pp. 1397–401, 2003.
35. Jayatilake, D., Ueno, T., Teramoto, Y., Nakai, K., Hidaka, K., Ayuzawa, S., et al., Smartphone-based real-time assessment of swallowing ability from the swallowing sound. IEEE J. Transl. Eng. Health Med., 3, 1–10, 2015.
36. Kim, H.B., Lee, W.W., Kim, A., et al., Wrist sensor-based tremor severity quantification in Parkinson's disease using convolutional neural network. Comput. Biol. Med., 95, 140–6, 2018.
37. Nancy Jane, Y., Khanna Nehemiah, H., Arputharaj, K., A Q-backpropagated time delay neural network for diagnosing severity of gait disturbances in Parkinson's disease. J. Biomed. Inform., 60, 169–76, 2016.
38. Camps, J., Samà, A., Martín, M., et al., Deep learning for freezing of gait detection in Parkinson's disease patients in their homes using a waist-worn inertial measurement unit. Knowl. Based Syst., 139, 119–31, 2018.
39. Little, B., Alshabrawy, O., Stow, D., Ferrier, I.N., McNaney, R., Jackson, D.G., Ladha, K., Ladha, C., Ploetz, T., Bacardit, J., Olivier, P., Deep learning-based automated speech detection as a marker of social functioning in late-life depression. Psychol. Med., 51, 9, 1441–50, 2021.
40. Li, J., Zhong, Y., Han, J., et al., Classifying ASD children with LSTM based on raw videos. Neurocomputing, 390, 226–38, 2019.
41. Zhang, Y.N., Can a smartphone diagnose Parkinson's disease? A deep neural network method and telediagnosis system implementation. Park. Dis., 2017, 6209703, 2017.
42. Park, I., Kim, Y.J., Kim, Y.J., et al., Automatic, qualitative scoring of the interlocking pentagon drawing test (PDT) based on U-Net and mobile sensor data. Sensors, 20, 1283, 2020.
43. Zhou, J., Park, C.Y., Theesfeld, C.L., et al., Whole-genome learning analysis identifies the contribution of noncoding mutations to autism risk. Nat. Genet., 51, 973–80, 2019.
44. Yin, B., Balvert, M., Van Der Spek, R.A.A., et al., Using the structure of genome data in the design of deep neural networks for predicting amyotrophic lateral sclerosis from genotype. Bioinformatics, 35, i538–47, 2019.
45. Sun, Y., Zhu, S., Ma, K., et al., Identification of 12 cancer types through genome deep learning. Sci. Rep., 9, 17256, 2019.
46. Suk, H.-I. and Shen, D., Deep learning-based feature representation for AD/MCI classification. Med. Image Comput. Comput. Assist. Interv. (MICCAI), 8150, 583–90, 2013.
47. Punjabi, A., Meterstick, A., Wang, Y., et al., Neuroimaging modality fusion in Alzheimer's classification using convolutional neural networks. PLoS One, 14, e0225759, 2019.
48. Hao, J., Kosaraju, S.C., Tsaku, N.Z., et al., PAGE-Net: Interpretable and integrative deep learning for survival analysis using histopathological images and genomic data. Biocomput., 2020, 355–66, 2019.
49. Mobadersany, P., Yousefi, S., Amgad, M., et al., Predicting cancer outcomes from histology and genomics using convolutional networks. Proc. Natl. Acad. Sci., 115, E297, 2018.
50. Ning, K., Chen, B., Sun, F., et al., Classifying Alzheimer's disease with brain imaging and genetic data using a neural network framework. Neurobiol. Aging, 68, 151–8, 2018.
51. Sainath, T.N., Vinyals, O., Senior, A., Sak, H., Convolutional, long short-term memory, fully connected deep neural networks, in: 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp. 4580–4, 2015.
52. Xing, Y., Ge, Z., Zeng, R., Mahapatra, D., Seah, J., Law, M., et al., Adversarial pulmonary pathology translation for pairwise chest X-ray data augmentation. arXiv:1910.04961 [cs, eess], 11769, 757–65, 2019.
53. Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H., Ng, A.Y., Multimodal deep learning, in: ICML, 11, 689–696, 2011.
54. LeCun, Y., Bengio, Y., Hinton, G., Deep learning. Nature, 521, 436–444, 2015.
55. Schmidhuber, J., Deep learning in neural networks: An overview. Neural Netw., 61, 85–117, 2015.
56. Goodfellow, I., Bengio, Y., Courville, A., Deep Learning, MIT Press, Cambridge, 2016.
57. Mwangi, B., Tian, T.S., Soares, J.C., A review of feature reduction techniques in neuroimaging. Neuroinformatics, 12, 229–244, 2014.
58. Hestness, J., Narang, S., Ardalani, N., Diamos, G., Jun, H., Kianinejad, H., Patwary, M.M., Yang, Y., Zhou, Y., Deep learning scaling is predictable, empirically, Cornell University (arXiv), arXiv:1712.00409, 2017. Available from: http://arxiv.org/abs/1712.00409.
59. Savage, N., How AI and neuroscience drive each other forward. Nature, 571, S15–7, 2019.
60. Daubechies, I., DeVore, R., Foucart, S., Hanin, B., Petrova, G., Nonlinear approximation and (deep) ReLU networks. Constructive Approximation, 55, 1, 127–72, 2022.
61. Hassoun, M.H., Fundamentals of Artificial Neural Networks, MIT Press, Cambridge, 1995.
62. Dreiseitl, S. and Ohno-Machado, L., Logistic regression and artificial neural network classification models: A methodology review. J. Biomed. Inform., 35, 352–9, 2002.
63. Dahl, G.E., Sainath, T.N., Hinton, G.E., Improving deep neural networks for LVCSR using rectified linear units and dropout, in: 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing, IEEE, pp. 8609–8613, 2013.
64. Gibbs, J.W., Elementary Principles in Statistical Mechanics: Developed with Especial Reference to the Rational Foundation of Thermodynamics (Cambridge Library Collection - Mathematics), Cambridge University Press, Cambridge, 2010, doi: 10.1017/CBO9780511686948.
65. Janocha, K. and Czarnecki, W.M., On loss functions for deep neural networks in classification, Cornell University (arXiv), arXiv:1702.05659, 2017.
66. Rojas, R., The backpropagation algorithm, in: Neural Networks: A Systematic Introduction, R. Rojas (Ed.), pp. 149–82, Springer, Berlin, Heidelberg, 1996.
67. Le, Q.V., Ngiam, J., Coates, A., Lahiri, A., Prochnow, B., Ng, A.Y., On optimization methods for deep learning, in: Proceedings of the 28th International Conference on Machine Learning, pp. 265–272, 2011.
68. Smith, S.L., Kindermans, P.-J., Ying, C., Le, Q.V., Don't decay the learning rate, increase the batch size, Cornell University (arXiv), arXiv:1711.00489, 2018. Available at: http://arxiv.org/abs/1711.00489 (8 April 2020, date last accessed).
69. Baltrušaitis, T., Ahuja, C., Morency, L.P., Multimodal machine learning: A survey and taxonomy. IEEE Trans. Pattern Anal. Mach. Intell., 41, 2, 423–43, 2018.
70. D'Mello, S.K. and Westlund, J.K., A review and meta-analysis of multimodal affect detection systems. ACM Comput. Surv. (CSUR), 47, 1–36, 2015.
71. Jackson, G.D., New techniques in magnetic resonance and epilepsy. Epilepsia, 35, S2–13, 1994.
72. Kuzniecky, R.I., Bilir, E., Gilliam, F., Faught, E., Palmer, C., Morawetz, R., et al., Multimodality MRI in mesial temporal sclerosis: Relative sensitivity and specificity. Neurology, 49, 774–8, 1997.
73. Scheffer, I.E. and Berkovic, S.F., Generalized epilepsy with febrile seizures plus. A genetic disorder with heterogeneous clinical phenotypes. Brain, 120, 479–90, 1997.
74. Marini, C., Harkin, L.A., Wallace, R.H., Mulley, J.C., Scheffer, I.E., Berkovic, S.F., Childhood absence epilepsy and febrile seizures: A family with a GABAA receptor mutation. Brain, 126, 230–40, 2003.
75. Dibbens, L.M., de Vries, B., Donatello, S., Heron, S.E., Hodgson, B.L., Chintawar, S., et al., Mutations in DEPDC5 cause familial focal epilepsy with variable foci. Nat. Genet., 45, 546–51, 2013.