Cognitive Modeling of Human Memory and Learning

Lidia Ghosh

Description

Proposes computational models of human memory and learning using a brain-computer interfacing (BCI) approach.

Human memory modeling is important from two perspectives. First, the precise fitting of the model to an individual's short-term or working memory may help in predicting the subject's future memory performance. Second, memory models provide biological insight into the encoding and recall mechanisms undertaken by the neurons in the active brain lobes participating in the memorization process. This book models human memory from a cognitive standpoint by utilizing brain activations acquired from the cortex by electroencephalographic (EEG) and functional near-infrared spectroscopic (fNIRs) means.

Cognitive Modeling of Human Memory and Learning: A Non-invasive Brain-Computer Interfacing Approach begins with an overview of the early models of memory. The authors then propose a simplistic model of Working Memory (WM) built with fuzzy Hebbian learning. A second perspective of memory models is concerned with Short-Term Memory (STM) modeling in the context of two-dimensional object-shape reconstruction from visually examined memorized instances. A third model assesses subjective motor learning skill in driving from erroneous motor actions. Other models introduce a novel strategy of designing a two-layered deep Long Short-Term Memory (LSTM) classifier network and also deal with cognitive load assessment in motor learning tasks associated with driving. The book ends with concluding remarks based on the principles and experimental results of the previous chapters.
The book:

* Examines the scope of computational models of memory and learning, with special emphasis on the classification of memory tasks by deep learning-based models
* Proposes two algorithms of type-2 fuzzy reasoning: Interval Type-2 Fuzzy Reasoning (IT2FR) and General Type-2 Fuzzy Sets (GT2FS)
* Considers three classes of cognitive loads in the motor learning tasks for driving learners

Cognitive Modeling of Human Memory and Learning: A Non-invasive Brain-Computer Interfacing Approach will appeal to researchers in cognitive neuroscience and human/brain-computer interfaces. It will also benefit graduate students of computer science and electrical/electronic engineering.


Page count: 443

Year of publication: 2020




Table of Contents

Cover

Preface

Acknowledgments

About the Authors

1 Introduction to Brain‐Inspired Memory and Learning Models

1.1 Introduction

1.2 Philosophical Contributions to Memory Research

1.3 Brain‐Theoretic Interpretation of Memory Formation

1.4 Cognitive Maps

1.5 Neural Plasticity

1.6 Modularity

1.7 The Cellular Process Behind STM Formation

1.8 LTM Formation

1.9 Brain Signal Analysis in the Context of Memory and Learning

1.10 Memory Modeling by Computational Intelligence Techniques

1.11 Scope of the Book

References

2 Working Memory Modeling Using Inverse Fuzzy Relational Approach

2.1 Introduction

2.2 Problem Formulation and Approach

2.3 Experiments and Performance Analysis

2.4 Discussion

2.5 Conclusions

References

3 Short‐Term Memory Modeling in Shape‐Recognition Task by Type‐2 Fuzzy Deep Brain Learning

3.1 Introduction

3.2 System Overview

3.3 Brain Functional Mapping Using Type‐2 Fuzzy DBLN

3.4 Experiments and Results

3.5 Biological Implications

3.6 Performance Analysis

3.7 Conclusions

C.A Appendix

References

4 EEG Analysis for Subjective Assessment of Motor Learning Skill in Driving Using Type‐2 Fuzzy Reasoning

4.1 Introduction

4.2 System Overview

4.3 Determining Type and Degree of Learning by Type‐2 Fuzzy Reasoning

4.4 Experiments and Results

4.5 Performance Analysis and Statistical Validation

4.6 Conclusions

References

5 EEG Analysis to Decode Human Memory Responses in Face Recognition Task Using Deep LSTM Network

5.1 Introduction

5.2 CSP Modeling

5.3 Proposed LSTM Classifier with Attention Mechanism

5.4 Experiments and Results

5.5 Conclusions

References

6 Cognitive Load Assessment in Motor Learning Tasks by Near‐Infrared Spectroscopy Using Type‐2 Fuzzy Sets

6.1 Introduction

6.2 Principles and Methodologies

6.3 Classifier Design

6.4 Experiments and Results

6.5 Biological Implications

6.6 Performance Analysis

6.7 Conclusions

References

7 Conclusions and Future Directions of Research on BCI‐Based Memory and Learning

7.1 Self‐Review of the Works Undertaken in the Book

7.2 Limitations of EEG BCI‐Based Memory Experiments

7.3 Further Scope of Future Research on Memory and Learning

References

Index

End User License Agreement

List of Tables

Chapter 2

Table 2.1 Connection weights between sources responsible for STM encoding and...

Table 2.2 Partial faces of full‐face stimuli.

Table 2.3 Error calculated (in percentage) for both the θ power and α power f...

Table 2.4 Connecting weights computed for the θ power features for a given su...

Table 2.5 Connecting weights computed for the α power features for a given su...

Table 2.6 Inter‐person variability of W matrix for θ and α power features.

Table 2.7 Error metric E due to variation in imaging attributes.

Table 2.8 Comparison between the accuracy measures of proposed method and exi...

Chapter 3

Table 3.1 Validation of the STM model with respect to ξ for two objects....

Table 3.2 Error metric ξ for more complex shape.

Table 3.3 STM model G for similar but non‐identical object shapes.

Table 3.4 Object shapes according to the increased shape complexity (SC1 < SC...

Table 3.5 Comparison of E' obtained by the proposed mapping methods agains...

Table 3.6 Order of complexity of the proposed T2FS algorithms and other compe...

Table 3.7 Results of statistical validation with the proposed methods as refe...

Chapter 4

Table 4.1 Range of PNDLS_t for four different degrees of learning.

Table 4.2 List of stimuli and required action.

Table 4.3 Activation of scalp maps for different EEG signal detection at diff...

Table 4.4 Reduced feature dimension using PCA.

Table 4.5 Comparison between PNDLS of proposed GT2FS reasoning with and witho...

Table 4.6 TPR, TNR, and percentage classification accuracy of the four LSVM c...

Table 4.7 Comparative performance analysis of the LSVM classifier with the ex...

Table 4.8 Robustness study.

Table 4.9 Comparison of E_t obtained by the proposed reasoning methods against ...

Table 4.10 Run‐time of proposed T2FS algorithms and other competitive reasoni...

Table 4.11 Results of statistical validation with the proposed methods 1–3 as...

Chapter 5

Table 5.1 Comparative analysis of the proposed CSP algorithm with the other f...

Table 5.2 Comparative evaluation of the proposed classifier with other standa...

Table 5.3 Comparison of the classifier accuracy with attention under varying t...

Table 5.4 Comparison of the classifier accuracy without attention under varyin...

Table 5.5 Statistical validation of the classifiers using McNemar's test.

Chapter 6

Table 6.1 List of stimuli and required actions.

Table 6.2 Mean percentage classification accuracy (standard deviation) of pro...

Table 6.3 Run‐time of the proposed classifiers and other competitive classifi...

Table 6.4 Comparative study of percentage TPR and TNR of the proposed classif...

Table 6.5 Statistical validation of the classifiers using McNemar's test.

List of Illustrations

Chapter 1

Figure 1.1 Atkinson–Shiffrin's model of cognitive memory.

Figure 1.2 The architecture of Tveter's model.

Figure 1.3 The architecture of memory hierarchy in Tulving's model.

Figure 1.4 The interconnection between procedural and declarative memory.

Figure 1.5 The interconnection between STM and LTM through memory consolidat...

Figure 1.6 Transient frequency between θ and α frequency band.

Chapter 2

Figure 2.1 (a) First phase dealing with WM modeling. (b) Second phase dealin...

Figure 2.2 Intervals in the sample space of ...

Figure 2.3 Computation of STM to WM connectivity from the membership functio...

Figure 2.4 Ring topology of neighborhood in UPSO. The colored spheres indica...

Figure 2.5 EEG signal acquisition of a subject, participating in memory enco...

Figure 2.6 eLORETA solutions obtained during face encoding: (a) axial view, ...

Figure 2.7 eLORETA solutions obtained during face recall: (a) axial view, (b...

Figure 2.8 Frequency spectra of (a) Butterworth filter, (b) elliptical filte...

Figure 2.9 (a) θ and (b) α band activity of the EEG signals acquired from pr...

Chapter 3

Figure 3.1 General block‐diagram of the proposed DBLN describing four‐stage ...

Figure 3.2 The model used in four‐stage mapping of the DBLN, explicitly show...

Figure 3.3 Iconic memory encoding by Hebbian learning.

Figure 3.4 Second level encoding occipital EEG features and prefrontal EEG f...

Figure 3.5 Functional approximation of the prefrontal to the parietal to the...

Figure 3.6 Computation of flat‐top IT2FS: (a) type‐1 MFs, (b) IT2FS represen...

Figure 3.7 Adaptation of the IT2FS‐induced mapping function by perceptron‐li...

Figure 3.8 Secondary membership assignment in the proposed GT2FS‐based mappi...

Figure 3.9 GT2FS‐based mapping adapted with perceptron‐like learning.

Figure 3.10 The 10–20 electrode placement system (only the dark circled ...

Figure 3.11 Stimulus preparation.

Figure 3.12 Ten objects (with sample number) used in the experiment with inc...

Figure 3.13 Learning ability of the subject with increasing shape complexity...

Figure 3.14 Convergence of the error metric ξ (and weight matrix G) ove...

Figure 3.15 Dissimilar region of the G matrix in successive trials obtained ...

Figure 3.16 eLORETA tomography based on the current electric density (activi...

Figure 3.17 N400 repetition effects along with eLORETA solutions for success...

Figure 3.18 Increasing N400 negativity with increasing shape complexity.

Figure 3.19 Parameter selection of the type‐2 fuzzy DBLN model.

Chapter 4

Figure 4.1 Learning timely motor action from delayed motor execution.

Figure 4.2 Schematic overview of the proposed system.

Figure 4.3 Structure of a stimulus with timing details.

Figure 4.4 Construction of flat‐top CIT2FS: (a) type‐1 MFs, (b) CIT2FS repre...

Figure 4.5 Computation of firing strength in CIT2FS‐based reasoning.

Figure 4.6 Secondary membership assignment.

Figure 4.7 PNDLS_t computation in the proposed triangular vertical slice‐base...

Figure 4.8 Illustrating secondary membership computation.

Figure 4.9 PNDLS computation in the proposed GT2FS‐induced reasoning with Ga...

Figure 4.10 The experimental set‐up.

Figure 4.11 eLORETA tomography based on the current electric density (activi...

Figure 4.12 Selection of pass band (8–12 Hz) for elliptical filter for P300 ...

Figure 4.13 Selection of pass band (4–7 Hz) for elliptical filter during N40...

Figure 4.14 Selection of pass band (12–24 Hz) for elliptical filter during...

Figure 4.15 Selection of pass band (4–10 Hz) for elliptical filter during Er...

Figure 4.16 Separated artifact‐free ERP in (a) component 5, (b) component 12...

Figure 4.17 N400 repetition effects along with eLORETA solutions for differe...

Chapter 5

Figure 5.1 Block diagram of the proposed framework.

Figure 5.2 The proposed two‐layer LSTM network with attention in each layer....

Figure 5.3 The experimental set‐up.

Figure 5.4 Structure of a stimulus used with timing details.

Figure 5.5 eLORETA solutions obtained for a single trial in case of familiar...

Figure 5.6 eLORETA solutions obtained for a single trial in case of unfamili...

Figure 5.7 eLORETA solutions obtained for a single trial of 4 different subj...

Figure 5.8 Grand‐averaged ERPs based on 40 participants in response to famil...

Chapter 6

Figure 6.1 Defining trial and session for a given subject during offline tra...

Figure 6.2 Construction of flat‐top IT2FS: (a) type‐1 MFs, (b) IT2FS represe...

Figure 6.3 (a) Consequent type‐2 class MF and (b) IT2FS classifier design.

Figure 6.4 (a) Consequent IT2 MF for three classes: low, med., and high cogn...

Figure 6.5 Secondary membership assignment.

Figure 6.6 GT2FS classifier design.

Figure 6.7 (a) Experimental set‐up with fNIRs data acquisition, (b) IR penet...

Figure 6.8 Structure of the stimulus used with timing for online cognitive l...

Figure 6.9 The cognitive load, diff_avg, and the topographic maps obtained du...

Figure 6.10 Extracted fNIRs features to discriminate cognitive load of three...

Figure 6.11 Parameter selection of the GT2FS classifier for each healthy sub...

Figure 6.12 Parameter selection of the IT2FS classifier for each healthy sub...

Figure 6.13 (a) Regions of the PFC; voxel plot of the fNIRs data at (b) LE4 ...

Figure 6.14 CLV variations in the prefrontal lobe with decreasing cognitive ...


IEEE Press

445 Hoes Lane

Piscataway, NJ 08854

IEEE Press Editorial Board

Ekram Hossain, Editor in Chief

Jón Atli Benediktsson

David Alan Grier

Elya B. Joffe

Xiaoou Li

Peter Lian

Andreas Molisch

Saeid Nahavandi

Jeffrey Reed

Diomidis Spinellis

Sarah Spurgeon

Ahmet Murat Tekalp

Cognitive Modeling of Human Memory and Learning

A Non‐invasive Brain‐Computer Interfacing Approach

Lidia Ghosh, Amit Konar, and Pratyusha Rakshit

Artificial Intelligence Laboratory, Department of Electronics and Tele‐Communication Engineering, Jadavpur University, Kolkata‐700032, India

 

 

 

 

 

Copyright © 2021 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey.

Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging‐in‐Publication Data

Names: Ghosh, Lidia, author. | Konar, Amit, author. | Rakshit, Pratyusha, author.

Title: Cognitive modeling of human memory and learning : a non‐invasive brain-computer interfacing approach / Lidia Ghosh, Artificial Intelligence Lab., Dept. of Electronics and Tele‐Communication Engineering, Amit Konar, Artificial Intelligence Lab., Dept. of Electronics and Tele‐Communication Engineering, Pratyusha Rakshit, Artificial Intelligence Lab., Dept. of Electronics and Tele‐Communication Engineering.

Description: Hoboken, New Jersey : Wiley, [2021] | Includes bibliographical references and index.

Identifiers: LCCN 2020015457 (print) | LCCN 2020015458 (ebook) | ISBN 9781119705864 (cloth) | ISBN 9781119705871 (adobe pdf) | ISBN 9781119705918 (epub)

Subjects: LCSH: Memory. | Brain‐computer interfaces. | Cognitive neuroscience.

Classification: LCC BF371 .G46 2021 (print) | LCC BF371 (ebook) | DDC 153.1/20113–dc23

LC record available at https://lccn.loc.gov/2020015457

LC ebook record available at https://lccn.loc.gov/2020015458

Cover Design: Wiley

Cover Image: © Paolo Carnassale/Getty Images

Preface

Existing works on human memory models take into account the behavioral perspectives of learning/memory and thus have limited scope in diagnostic and therapeutic applications of memory. The present title makes a humble attempt to model human memory from the cognitive standpoint by utilizing the brain activations acquired from the cortex by electroencephalographic (EEG) and functional near‐infrared spectroscopic (fNIRs) means (during the course of subjective learning and memory recall). EEG‐based memory modeling is advantageous for its inherently high temporal resolution, which offers a prompt response (of memory) to perceptual cues. This prompt response helps in understanding the correspondence between memory and stimulus, thereby justifying the selection of the EEG modality in the experimental protocol design for memory modeling. The fNIRs device, on the other hand, having good spatial resolution, is useful to accurately localize the regions of brain activation for selected memory tasks. Thus a memory activation study with EEG, preceded by localization of the brain regions by the fNIRs device for a given memory task, is an ideal choice for memory modeling by experimental means. Although functional magnetic resonance imaging (fMRI) is a better choice for spatial localization of brain regions in memory tasks, in this book spatial localization is undertaken by the fNIRs device alone for its portability, user‐friendliness, and cost‐effectiveness.

Like computer memory, human memory maintains a hierarchy. For instance, information acquired by the sense organs is temporarily stored in sensory registers for transfer into short‐term memory (STM), located in the prefrontal lobe. Next, depending on the relative importance of the information, the contents of STM are sometimes transferred into long‐term memory (LTM), located in the hippocampal region. A third form of memory, referred to as working memory (WM), also resides in the prefrontal lobe. The WM is generally used to analyze information stored in STM for logical reasoning, information matching, and decision making. There exist signaling pathways from the STM to the WM to the LTM, as well as direct routes from the STM to the LTM. These signaling pathways include long chains of neurons, forming deep brain networks. The book attempts to model the deep signaling pathways connecting the modules in the memory hierarchy by deep learning. The proposed models of memory, developed with brain activations, are advantageous in the early diagnosis and prognosis of certain brain diseases, such as Alzheimer's disease, schizophrenia, and prosopagnosia, among others.

Computational intelligence (CI) is the umbrella term for a number of intelligent tools and techniques that synergistically complement each other's performance and thus jointly serve as a complete approach to handling complex real‐world problems. The memory encoding and recall processes, and the signal transduction across distributed modules of memory, are primarily controlled by the human nervous system; this role can be taken up by the artificial neural networks and deep learning models of CI. In addition, the brain signals, being non‐stationary, exhibit intra‐ and inter‐session fluctuations, which can be modeled by fuzzy sets (in particular, type‐2 fuzzy sets [T2FS]). In fact, the book proposes interesting models of memory and learning by amalgamating fuzziness into the setting of a deep brain learning network. The parameter optimization of the memory models to attain the best performance can be handled by evolutionary computation. Thus memory modeling can be performed efficiently by the synergism of different CI techniques.

Memory modeling in this book is undertaken by considering the brain signals acquired from the expected input and output brain regions/lobes of the respective memory segments/modules. The primary hindrance in memory modeling using brain signals lies in the non‐stationary characteristic of the signals. Non‐stationarity results in wide fluctuations of the signals of a selected lobe (for a given memory task) within and across experimental sessions on the same subject. The non‐stationary characteristics of brain signals have here largely been modeled using T2FS. The intra‐session variations of the brain signals are represented by a (Gaussian/triangular) type‐1 fuzzy membership function (MF), and the inter‐session measurements by a mixture of MFs. The mixture of MFs is represented by an interval type‐2 fuzzy set (IT2FS), where the upper membership function (UMF) and lower membership function (LMF) represent the bounds of the mixture, and the space between them is called the footprint of uncertainty (FOU). The FOU provides a wide span of variations in the primary membership of each measurement point. In order to grade the degree of precision of the primary membership assignments, the notion of secondary memberships is added to the FOU as triangular vertical slices located at the measurement points. The triangular slices emphasize that the mid‐point between the UMF and the LMF has the highest degree of certainty of the primary membership value for a given measurement point. The falling slope of the edges in the vertical slice denotes that the secondary membership gradually diminishes from the mid‐point to the extremities of the FOU at the selected measurement point.
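
As a concrete illustration, the construction above can be sketched in a few lines of Python. The session parameters below are invented for illustration (the book estimates its MFs from recorded EEG); the sketch only shows how a UMF/LMF pair bounds a mixture of session-wise type-1 MFs and how a triangular vertical slice grades the primary memberships inside the FOU.

```python
import math

def gaussian_mf(x, mean, sigma):
    """Type-1 Gaussian membership function for one session."""
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

# One (mean, sigma) pair per recording session -- illustrative values only.
sessions = [(4.0, 1.0), (5.0, 1.2), (6.0, 0.9)]

def it2fs_bounds(x):
    """UMF/LMF of the IT2FS embedding the session mixture at point x.
    The gap between them is the footprint of uncertainty (FOU)."""
    vals = [gaussian_mf(x, m, s) for m, s in sessions]
    return min(vals), max(vals)

def secondary_membership(u, lmf, umf):
    """Triangular vertical slice: certainty peaks at the FOU mid-point
    and falls off linearly toward the UMF/LMF extremities."""
    if umf <= lmf:
        return 1.0
    mid, half = 0.5 * (lmf + umf), 0.5 * (umf - lmf)
    return max(0.0, 1.0 - abs(u - mid) / half)

lmf, umf = it2fs_bounds(5.0)  # bounds at one measurement point
```

The mid-point of the slice receives secondary membership 1.0, and either extremity of the FOU receives 0.0, matching the "falling slope" description above.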

The memory models proposed in the book are developed by utilizing the functional mapping between the brain signals acquired from the expected input and output regions of the selected memory modules. Different forms of functional mapping, realized with bidirectional fuzzy associative memory, (vertical slice‐based) general type‐2 fuzzy sets (GT2FS), and deep long short‐term memory (LSTM) models, are taken up in different chapters of the book to examine the efficacy of the models. The models are built during the training session and tested on the same human subject during the test sessions to validate their efficacy.

The book includes seven chapters and, in a nutshell, provides five interesting and useful models of memory and learning. Chapter 1 provides a thorough review of the existing works on human memory and learning paradigms. It begins with an overview of the early models of memory, proposed by philosophers from behavioral perspectives. Gradually, the chapter explores the brain‐theoretic interpretation of memory formation and consolidation. The later part of the chapter covers surgical and therapeutic experiments on memory encoding, considering the plasticity and stability issues of memory and learning. Finally, the chapter examines the scope of computational models of memory and learning, with special emphasis on the classification of memory tasks by deep learning‐based models.

Chapter 2 proposes a simplistic model of WM built with fuzzy Hebbian learning. Hebbian learning is widely acclaimed as the local learning of neurons, particularly for memory. In the present context, Hebbian learning is extended to represent the mapping from the STM response to the WM response using the notion of fuzzy sets. The reconstruction of the STM response from the WM response is undertaken by an inverse implication relation. Although the forward and inverse relations can be designed in a variety of ways, a fuzzy relational approach is employed here for the following reasons. First and foremost, memory models are generally non‐deterministic, i.e. the WM response can hardly be inferred with certainty, even if there are traces of similarity between it and the STM response. Fuzzy logic has the freedom to model such non‐determinism and thus serves well in modeling the STM‐to‐WM mapping. In addition, a fuzzy relational system has provisions to compute the inverse relation, which is usually absent in traditional non‐deterministic system modeling. These two characteristics of fuzzy logic support its choice for STM‐to‐WM connectivity modeling in the present application.

The merit of the aforementioned work lies in predicting model behavior by acquiring the EEG signals from the dorsolateral region of the prefrontal lobe, representing the WM, and predicting the response of the STM, located in the orbitofrontal region, from the acquired WM response. The fuzzy relational model represents the forward brain connectivity, and an inverse solution to the forward max–min composition model is proposed to predict the STM response from the WM response. A study is undertaken in the context of face recognition experiments to validate the proposed WM model. Here, complete facial images of people are presented to a subject one after the other for several minutes, until the subject can remember and recognize each person from his/her image. After the learning session is over, the subject is asked to recognize the person from partial face images of the acquainted faces. The partial faces used include only the chin, only the eyes, one side of the face, the forehead and eyes, and the like. During the presentation session of complete facial images, the mapping between the EEG responses of the STM and the WM is developed using the fuzzy relational approach. Later, during the recognition phase from partial faces, the (fuzzy‐encoded) EEG response of the WM is measured, and the (fuzzy‐encoded) STM response is predicted by utilizing the inverse fuzzy relational mapping from the WM response. An estimate of the error between the measured and predicted responses is evaluated to determine the model's performance in predicting the STM response from the WM response.
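
The forward max–min composition, and one standard inverse for it, can be sketched as follows. The relation matrix and response vector below are toy values, not book data, and the inverse shown is the classical α-composition (Sanchez-style greatest solution), used here as a stand-in; the book's exact inverse formulation may differ.

```python
def maxmin_compose(a, R):
    """Forward fuzzy relation: b_j = max_i min(a_i, R_ij)."""
    return [max(min(a[i], R[i][j]) for i in range(len(a)))
            for j in range(len(R[0]))]

def maxmin_inverse(R, b):
    """Greatest STM estimate a_hat satisfying the composition, via the
    alpha-operator: alpha(r, b) = 1 if r <= b else b."""
    alpha = lambda r, bj: 1.0 if r <= bj else bj
    return [min(alpha(R[i][j], b[j]) for j in range(len(b)))
            for i in range(len(R))]

# Toy STM-to-WM relation (3 STM features x 2 WM features) and STM response.
R = [[0.8, 0.3],
     [0.5, 0.9],
     [0.2, 0.6]]
a = [0.7, 0.4, 0.9]
b = maxmin_compose(a, R)      # forward: predicted WM response
a_hat = maxmin_inverse(R, b)  # inverse: reconstructed STM response
```

Recomposing `a_hat` with `R` reproduces `b` exactly, and `a_hat` dominates every other solution element-wise, which is why the α-composition yields the greatest (least-committal) STM reconstruction.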

The second perspective of memory models, introduced in Chapter 3, is concerned with STM modeling in the context of two‐dimensional object‐shape reconstruction from visually examined memorized instances. The model employs four brain regions for the reconstruction/recall of memorized instances: the occipital lobe, containing the iconic memory for storing information about the visually examined object shape; the WM, to process the traces of the iconic response; the parietal lobe, for making the necessary plans to move the subject's arm for hand‐drawing of the object; and the motor cortex, for executing the motor activity needed to draw the recollected object. The quality of the reconstructed/recalled object shape is tested by comparing the subject's hand‐drawn object geometry with the geometry of the original object presented for inspection.

Two different forms of computational error are employed in Chapter 3. The first error signal, representing the error in the model, is estimated and fed back as a control signal to the STM model, which adapts itself to reduce the error due to the computational model. This corrective error feedback is applied at the end of each learning epoch. In other words, subjects, at the end of each learning epoch, reproduce the object shape from memory, and the model is adapted to reproduce the object geometry as produced by the subject in that epoch. Similar learning cycles are repeated until no further improvement in the reproduced object geometry is detected. The second error signal measures the perfection of the learning with regard to the actual object shape and is used to adapt the model further at the iconic memory level, reducing the error committed by the iconic memory model.
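
The interplay of the two error signals can be caricatured with a toy epoch loop. Everything here is an assumption for illustration: the shapes are three-element feature vectors, and the simple proportional updates with rates `eta1`/`eta2` stand in for the chapter's actual adaptation rules.

```python
# Two-error adaptation sketch (illustrative update rules, not the book's
# exact algorithm): the first error adapts the computational model toward
# the subject's reproduced shape each epoch; the second adapts the
# iconic-memory stage toward the actual object shape.
true_shape = [1.0, 0.5, 0.8]   # geometry of the presented object
iconic = [0.9, 0.6, 0.7]       # iconic-memory estimate of that geometry
model = [0.0, 0.0, 0.0]        # model's reconstruction, initially blank

eta1, eta2 = 0.5, 0.1          # assumed learning rates
for epoch in range(60):
    drawing = list(iconic)     # subject reproduces the shape from memory
    for k in range(3):
        model[k] += eta1 * (drawing[k] - model[k])       # first error signal
        iconic[k] += eta2 * (true_shape[k] - iconic[k])  # second error signal
```

After enough epochs both stages converge: the iconic estimate drifts toward the true geometry, and the model tracks whatever the subject currently reproduces.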

Examining the performance of the iconic memory–STM interaction in the reconstruction of two‐dimensional imageries of acquired objects is difficult, unless there are provisions for externalizing the subject's mental imagery. Here, the subjects are asked to draw the recollected object shape, so several brain modules, such as the parietal lobe and motor cortex, are naturally involved in enabling the subject to draw it. Realization of the prefrontal‐to‐parietal lobe mapping and the parietal‐to‐object‐geometry mapping is undertaken here by general type‐2 fuzzy relations for their advantage in approximate reasoning under uncertainty. The uncertainty here arises from contamination of the EEG instances by noise due to parallel thoughts and undesirable brain activations. Experiments undertaken on 30 healthy people and 5 memory‐diseased people (suffering from prefrontal lobe impairment) reveal that the proposed model can successfully retrieve the object geometry from STM‐reproduced imagery.

The third model, proposed in Chapter 4, attempts to assess subjective motor learning skill in driving from erroneous motor actions. A set of fuzzy production rules is provided, describing the motor actions to be learnt by the subject for possible sequences of occurrence of the selected brain signals. Type‐2 fuzzy reasoning algorithms are proposed to infer the degree of motor actions learnt by the subjects when one or more rules are fired after being instantiated by the available observations concerning errors in motor actions. Two different algorithms of type‐2 fuzzy reasoning are proposed. The first, called interval type‐2 fuzzy reasoning (IT2FR), provides a simple scheme for automated reasoning to infer the degree of motor actions learnt. IT2FR requires only a small computational overhead and thus is useful for real‐time applications like the present one. IT2FS, however, having limited information resources (only the bunches of user‐provided type‐1 MFs), is unable to offer inference of the same quality as its more general counterpart, the GT2FS. The GT2FS is equipped with a secondary measure of the primary MFs, thereby offering users the benefit of a natural selection of primary MF values based on their secondary measures. The secondary MFs in GT2FS thus add one more dimension of judgment to the process of automated inference generation. The inferences generated by GT2FS are later type‐reduced and defuzzified (decoded) to obtain the degree of motor actions learnt on a 0–100 scale.
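
A minimal sketch of the interval-firing step in IT2FR-style reasoning, assuming Gaussian IT2FS antecedents whose FOU comes from a tight (LMF) and a wide (UMF) spread. The rule, its parameters, and the final mid-point type-reduction are all invented for illustration; the book's reasoning and defuzzification schemes are more elaborate.

```python
import math

def gauss(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2)

def rule_firing_interval(obs, antecedents):
    """Interval firing strength of one fuzzy rule: the min t-norm over the
    [LMF, UMF] memberships of every antecedent IT2FS, where the LMF uses
    the tighter spread and the UMF the wider one."""
    lows = [gauss(x, m, s_lo) for x, (m, s_lo, s_hi) in zip(obs, antecedents)]
    highs = [gauss(x, m, s_hi) for x, (m, s_lo, s_hi) in zip(obs, antecedents)]
    return min(lows), min(highs)

# One hypothetical rule: "IF braking-delay is SMALL AND steering-error is
# SMALL THEN degree of motor learning is HIGH" (parameters illustrative).
antecedents = [(0.1, 0.05, 0.12), (0.1, 0.05, 0.12)]
f_lo, f_hi = rule_firing_interval([0.12, 0.09], antecedents)

# Crude type-reduction and defuzzification: the interval mid-point mapped
# onto the 0-100 learning scale mentioned above.
degree = 100.0 * 0.5 * (f_lo + f_hi)
```

With observations close to the SMALL prototypes, the firing interval sits near 1 and the decoded degree of learning lands high on the 0–100 scale.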

The fourth model, proposed in Chapter 5 of the book, introduces a novel strategy of designing a two-layered deep LSTM classifier network to classify the human memory response involved in the face recognition task by utilizing event-related potential signals. The first layer of the proposed deep LSTM network evaluates the spatial and local temporal correlations between the acquired samples within local EEG time-windows. The second layer of this network models the temporal correlations between the time-windows. An attention mechanism is introduced in each layer of the proposed model to compute the contribution of each EEG time-window to the face recognition task, where the attention weights are optimized using the differential evolution algorithm in order to maximize the overall classification accuracy. Two event-related potential signals, N250 and P600, are used in the present study to recognize familiar and unfamiliar faces. Experiments undertaken reveal that the N250 signal is larger during familiar face recognition than during unfamiliar face recognition, and that the P600 signal appears during familiar face recognition only.
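The per-window attention idea can be pictured with a minimal numpy sketch. The names, shapes, and the dot-product scoring below are assumptions for illustration only; the book optimizes its attention weights with differential evolution, which is not shown here.

```python
import numpy as np

def attend(window_feats, score_vec):
    """Attention over EEG time-windows.

    window_feats: (n_windows, n_feats) array, one feature vector per window.
    score_vec:    (n_feats,) vector rating the relevance of a window.
    Returns the softmax attention weights and the attended summary vector.
    """
    scores = window_feats @ score_vec   # one relevance score per window
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w = w / w.sum()
    return w, w @ window_feats          # weights, weighted combination of windows
```

With identical window features and a zero score vector, every window receives the same weight 1/n_windows and the summary equals any single window; a learned (or evolved) score vector would instead emphasize the windows that matter most for classification.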

The last model, proposed in Chapter 6 of the book, deals with cognitive load assessment in motor learning tasks associated with driving. fNIRs is employed to capture the brain activations during different motor activities, such as braking, acceleration, and steering control. The prefrontal hemodynamic response is recorded in response to certain stimuli, such as the sudden appearance of a child in front of the car, the presence of a bumper ahead of the car, and the like. The recorded fNIRs data are preprocessed to remove noise, and a set of statistical features is extracted from the filtered fNIRs data. Here, three classes of cognitive load in the motor learning tasks for driving learners are considered: LOW, MEDIUM, and HIGH. Fuzzy attributes are used to ensure reliable classification in the presence of measurement noise. Type-2 fuzzy classifiers are proposed to classify the measured cognitive load into one of the three classes. Experiments undertaken confirm that the proposed vertical slice-based general type-2 fuzzy classifier outperforms its competitors in the classification of the cognitive load of driving learners.
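A stripped-down type-1 caricature of the three-class idea is shown below. The book's classifier is type-2 and operates on statistical fNIRs features; the scalar input and the triangular MF parameters here are assumptions purely for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function on the feature axis."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_load(x):
    """Assign a scalar cognitive-load feature (0-100) to one of three classes."""
    memberships = {
        "LOW":    tri(x, 0, 0, 50),      # strongest near the low end
        "MEDIUM": tri(x, 25, 50, 75),    # peaks at mid-scale
        "HIGH":   tri(x, 50, 100, 100),  # strongest near the high end
    }
    return max(memberships, key=memberships.get)  # winner-take-all decision
```

A feature value of 10 falls squarely under the LOW membership function, 50 peaks under MEDIUM, and 90 under HIGH; a type-2 classifier would additionally carry an uncertainty band around each of these MFs.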

Chapter 7 provides the concluding remarks based on the principles and experimental results acquired in Chapters 1–6. Possible future directions of research are also examined briefly at the end of the chapter.

The book, to the best of the authors' knowledge, is the first comprehensive title on the subject and thus is unique in its content, originality, and, above all, its presentation style. Chapters are organized independently, so as to help readers directly access the topics/chapters of their interest. The background mathematics required to understand a chapter is covered at the beginning of the chapter itself, to avoid unnecessary searches for the prerequisite mathematics elsewhere. Experiments are covered in sufficient depth for readers to understand the motivation, the experimental procedure, and the end results very easily. Additionally, the book includes plenty of line diagrams to help readers visualize the topics easily.

January 25, 2020

Lidia Ghosh
Amit Konar
Pratyusha Rakshit
Artificial Intelligence Laboratory
Department of Electronics and Telecommunication Engineering
Jadavpur University, Kolkata-32, India

Acknowledgments

The authors sincerely thank Prof. Suranjan Das, the vice-chancellor of Jadavpur University (JU), and Prof. Chiranjib Bhattacharjee and Dr. Pradip Kumar Ghosh, the pro-vice-chancellors of JU, Kolkata, for creating a beautiful and lively academic environment in which to carry out the scientific work and experiments for the present book. They would also like to acknowledge the technical and moral support they received from Prof. Sheli Sinha Chaudhuri, the HoD of the Department of Electronics and Telecommunication Engineering (ETCE), Jadavpur University, where the background research work for the present book was carried out.

The authors like to thank their family members for their support in many ways for the successful completion of the book. The first author wishes to mention the everlasting support and words of optimism she received from her parents, Mrs. Mithu Ghosh and Mr. Asoke Kumar Ghosh, without whose active support, love, and affection, it would not have been possible to complete the book in the current form. She likes to acknowledge the strong gratitude she has for her grandma, Mrs. Renubala Ghosh, who has nurtured her since her childhood and always remained as a source of inspiration in her life. She also remembers the evergreen faces of her dearest elder sister Mrs. Sonia Ray and brother‐in‐law Mr. Gautam Ray, whose inspiration helped her to complete the book. She likes to mention the special affection she has for her little niece, Ms. Barnalika Ray, whose presence always refreshed her while writing this book. The first author likes to express her deep feeling of gratitude for her beloved cousin, Ms. Minakshi Sinha, who always stood by her in the phase of mental crisis and literally indulged her to work for long hours to complete the book. The second and the third authors acknowledge the support they received from their parents and family members for sparing them from many family responsibilities while writing this book.

The authors like to thank their students, colleagues, and researchers of the AI lab, Jadavpur University for their support in many ways during the phase of writing the book. Finally, the authors thank all their well‐wishers, who have contributed directly and indirectly toward the completion of the book.

January 25, 2020

Lidia Ghosh
Amit Konar
Pratyusha Rakshit
Artificial Intelligence Laboratory
Department of Electronics and Telecommunication Engineering
Jadavpur University, Kolkata-32, India

About the Authors

Lidia Ghosh received her BTech degree in Electronics and Telecommunication Engineering from Bengal Institute of Technology, Techno India College, in 2011, and her MTech degree in Intelligent Automation and Robotics (IAR) from the Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, in 2015. She was awarded gold medals for securing the highest percentage of marks in MTech in IAR in 2015. She is currently pursuing her PhD in Cognitive Intelligence at Jadavpur University under the guidance of Prof. Amit Konar and Dr. Pratyusha Rakshit. Her current research interests include deep learning, type-2 fuzzy sets, human memory formation, short- and long-term memory interactions, and the biological basis of perception and scientific creativity.

Amit Konar is currently a professor in the Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, India. He earned his BE degree from Bengal Engineering College, Sibpur, in 1983, and his ME, MPhil, and PhD degrees, all from Jadavpur University, in 1985, 1988, and 2004, respectively. Dr. Konar has published 15 books and over 350 research papers in leading international journals and conference proceedings. He has supervised 28 PhD theses and 262 Masters' theses. He is a recipient of the AICTE-accredited Career Award for Young Teachers for the period 1997–2000. He was nominated a Fellow of the West Bengal Academy of Science and Engineering in 2010 and of the (Indian) National Academy of Engineering in 2015. Dr. Konar has been serving as an associate editor of several international journals, including IEEE Transactions on Fuzzy Systems and IEEE Transactions on Emerging Topics in Computational Intelligence. His current research interests include cognitive neuroscience, brain–computer interfaces, type-2 fuzzy sets, and multi-agent systems.

Pratyusha Rakshit received her BTech degree in Electronics and Communication Engineering (ECE) from the Institute of Engineering and Management, India, and her ME degree in Control Engineering from the Electronics and Telecommunication Engineering (ETCE) Department, Jadavpur University, India, in 2010 and 2012, respectively. She earned her PhD (Engineering) degree from Jadavpur University, India, in 2016. From August 2015 to November 2015, she was an assistant professor in the ETCE Department, Indian Institute of Engineering Science and Technology, India. She is currently an assistant professor in the ETCE Department, Jadavpur University. She was awarded gold medals for securing the highest percentage of marks in BTech in ECE and among all the courses of ME, in 2010 and 2012, respectively. She was the recipient of the CSIR Senior Research Fellowship, the INSPIRE Fellowship, and the UGC UPE-II Junior Research Fellowship. Her principal research interests include artificial and computational intelligence, evolutionary computation, robotics, bioinformatics, pattern recognition, fuzzy logic, cognitive science, and human–computer interaction. She is the author of over 50 papers published in top international journals and conference proceedings. She serves as a reviewer for IEEE-TFS, IEEE-SMC: Systems, Neurocomputing, Information Sciences, and Applied Soft Computing.

1Introduction to Brain‐Inspired Memory and Learning Models

This chapter overviews memory and learning from four different perspectives. First and foremost, it reviews the philosophical models of human memory. In this regard, it examines Atkinson and Shiffrin's model, Tulving's model, Tveter's model, and the well-known parallel and distributed processing (PDP) approach. The chapter also gives an overview of the philosophical research results on procedural and declarative memory. Second, the chapter is concerned with coding for memory and memory consolidation. Third, the chapter is concerned with a discussion on cognitive maps, neural plasticity, modularity, and the cellular processes involved in short-term memory (STM) and long-term memory (LTM) formation. Finally, the chapter deals with the scope of brain signal analysis in the context of memory and learning. The possible scope of computational intelligence techniques in memory modeling is also appended at the end of the chapter.

1.1 Introduction

The human nervous system comprises several billion neurons spread across the brain, the spinal cord, and the rest of our body. These neurons collectively and/or independently participate in the cognitive processes undertaken by the brain. Usually, the afferent neurons receive stimuli from the receptors present in the cell membranes and carry the resulting electrical activation to the brain, which recognizes and interprets the stimuli. The brain in turn generates a response through efferent neurons to trigger specific localized organs. Consider, for example, the experience of touching a hot body by a two-year-old baby. Presume that the baby has no prior experience of touching a hot body. As she touches the hot body accidentally/incidentally, the afferent neurons connected to the receptors of her skin receive thermal stimulation, the electrical activation of which reaches her brain, and the motor command generated by the brain is then transferred to her limbs to withdraw her hand. This first-hand experience is unconsciously recorded in her brain, providing a cautionary support to avoid similar incidents in future. Two natural questions appear before us: where does the baby save her learning experience, and how does she automatically retrieve this knowledge to avoid similar situations in future?

The book aims at offering answers to the previously mentioned queries and the like by analyzing the brain signals/images acquired during the memory formation (encoding) and memory recall stages in adults. Although very little of the human memory encoding and recall processes is known to date, it is almost unanimously accepted that human memory is distributed in the cortex with localized activities in certain brain regions. For instance, the hippocampal region, residing in the medial temporal lobe, is found to have good correlations with the relatively permanent LTM. Two other forms of short-duration memory are also reported in the literature [1,2]. They are popularly known as STM and working memory (WM). It is known that STM can hold information for a few minutes only [1–3], unless it is refreshed periodically. The WM, on the other hand, provides a support for human reasoning and apparently resembles the cache memory in computer systems. It may be remembered that the central processing unit (CPU) in a computer receives and saves information from the cache while executing a program segment. Although the major part of a selected program resides in the system random access memory (RAM), the cache holds only the few bytes of storage currently under execution. The cache is designed with high-speed logic circuits, such as emitter-coupled logic or integrated injection logic (I2L) [4–6], to maintain parity in speed with the processor. Similarly, the brain performs reasoning in a time-efficient manner, but is often bottlenecked by the relatively slow LTM. The WM thus bridges the speed gap between the human reasoning system and LTM access, which usually is sluggish with respect to our speed of logical reasoning.

The book is all about WM and STM encoding and recall, with a small coverage of the interactions between the WM and the STM. Although there is a voluminous literature on memory encoding and recall, most of the research outcomes are based on behavioral experiments on humans [7]. Thus the existing research results cannot offer a cognitive basis for memory encoding and recall. With the advent of modern brain imaging and signal acquisition equipment, it is now possible to make a thorough study of the memory encoding/recall processes. Although such studies provide a more scientific basis to understand the memory encoding and retrieval processes, they too are not free from limitations. For instance, the existing non-invasive techniques mostly rely on scalp potentials and thus can hardly capture single-neuron activations. So, the analysis is undertaken on the local response of a group of neurons. Second, while administering a memory task, other brain activities also appear on the scalp and thus act as noise input to the memory study. Elimination of this noise is not easy, as the noise distribution often falls in the same frequency spectra used by the memory system.

The mystery of memory formation largely relies on the regulatory and control mechanisms of the cellular proteins. A brief review of molecular biology reveals that the neuronal cells, like any other cells in the human body, contain the deoxyribonucleic acid (DNA) double helix, comprising several million bases of four types (adenine [A], guanine [G], thymine [T], and cytosine [C]). These four bases occur in an apparently random (positional) order along each strand of the DNA. Small sequences of such bases on the DNA that are responsible for the inheritance of genetic material from parents to children are called genes. The neuronal cells containing the DNA double helix thus contain genes, which are often expressed to form cell proteins. Protein formation from DNA, and particularly from genes, is a two-step process. In the first step, the DNA is transcribed into ribonucleic acid (RNA), and in the second step, the RNA is translated into proteins. These cell proteins help in the permanent/semi-permanent encoding of the acquired information in the LTM. How the proteins help in encoding is a complex biochemical process, very little of which is known at present.

This chapter is organized into 11 sections. In Section 1.2, a philosophical survey of memory is undertaken. Section 1.3 is concerned with the brain-theoretic interpretation of memory formation. This section also takes into account the experimental perspectives of memory and learning. It includes both surgical and therapeutic experiments on memory encoding, considering the plasticity and stability issues of memory and learning. Sections 1.4, 1.5, 1.6, 1.7, and 1.8 are concerned with cognitive maps, neural plasticity, modularity, the cellular processes behind STM formation, and LTM formation, respectively. Section 1.9 deals with brain signal analysis in the context of memory and learning. Section 1.10 examines the scope of mathematical/computational models of memory and learning. Section 1.11 reviews the scope of the book. This section also provides a summary of the work presented and future directions of research in memory and learning.

1.2 Philosophical Contributions to Memory Research

Among the early contributions to memory research [8–15], the works by Atkinson and Shiffrin [8] and Tulving [9] need special mention. The PDP approach to imitating natural learning by artificial neural networks, enunciated by Rumelhart and Hinton [10,11], also had a good impact in the late 1980s. In addition, there exist key contributions to memory research concerning the (mental) rotation of imagery in memory. In this regard, Kosslyn's studies on mental imagery [12,13] need mention. The role of memory in comparing relative object sizes with imagery is also important. Reed's work on the part–whole relationship in mental imagery also gave a good impetus to memory studies. An overview of the philosophical issues in memory research is presented next.

1.2.1 Atkinson and Shiffrin's Model

Atkinson and Shiffrin proposed a hierarchical model of memory [8], comprising three layers, as depicted in Figure 1.1. The input layer represents the sensory registers, which acquire information from the real world through different sensory modalities. The sensory registers are named according to the modality they serve. For example, the iconic register holds visual information, the echoic register keeps track of audio cues, the olfactory register takes care of the aroma of a stimulus, and the like. The registers can hold information for a few seconds only, and they need to be refreshed to store new real-world information. Thus the sensory registers primarily acquire dynamic information.

Figure 1.1 Atkinson–Shiffrin's model of cognitive memory.

Before the registers are refreshed, the information from the sensory registers is transferred to the STM, which holds it for several minutes. Finally, with repeated trials, the information from the STM is transferred to the LTM for permanent storage. Two fundamental aspects of Atkinson–Shiffrin's model are (i) the natural decay of memory information at both the sensory register and STM levels and (ii) the provision of feedback from the LTM to the STM.
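The decay aspect can be pictured with a toy formula. The exponential form, the time constant, and the reset-on-rehearsal rule below are assumptions chosen for illustration; the model itself does not prescribe any particular decay law.

```python
import math

def trace_strength(t, tau=5.0, rehearsal_period=None):
    """Strength of a memory trace t seconds after encoding.

    The trace decays exponentially with an assumed time constant tau;
    periodic rehearsal (if given) resets the trace to full strength.
    """
    if rehearsal_period:
        t = t % rehearsal_period  # time elapsed since the last rehearsal
    return math.exp(-t / tau)
```

Without rehearsal the trace at t = 5 s has fallen to e^-1 (about 0.37) of its initial strength, whereas a 2 s rehearsal period keeps it above e^-0.4 (about 0.67), mirroring the model's claim that information survives only if refreshed.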

Although Atkinson–Shiffrin's model received high appreciation for its pioneering contributions, it too is not free from limitations. For example, consider the following case history. In a bike accident, a person's left cerebral cortex was damaged, causing severe malfunctioning of the STM. However, the person's LTM could continue memory encoding and recall even after the accident. This naturally questions the architecture of Atkinson and Shiffrin's model: how does the person update his LTM without using the STM? Consider a second case study. In order to cure serious epilepsy, a part of a patient's temporal lobe containing the hippocampus was removed. After the surgery, the person was found to have lost the power of encoding into the LTM but could retrieve well most of the information acquired before his surgery. Two questions here become apparent. First, if the hippocampus is removed, how does he retain his LTM? Second, if the LTM could retrieve much of his pre-surgery information, why can he not encode new information into the LTM after the surgery? The first question cannot be answered by Atkinson–Shiffrin's model. However, the answer to the second question is apparent from the model: the forward path from the STM to the LTM might have been damaged.

1.2.2 Tveter's Model

Tveter proposed an extension to the Atkinson–Shiffrin's model, which overcomes some of its shortcomings [16]. For example, in the bike accident problem referred to earlier, the updating of the LTM in the absence of the STM can be explained by Tveter's model. In a recent study [17], Tveter indicated a feedback path from the LTM to the STM and two alternative forward paths from the STM to the LTM, as shown in Figure 1.2. The first path from the STM to the LTM is used for decision making with incoming information into the STM, while the second alternative path is used for long‐term storage in the LTM.

Figure 1.2 The architecture of Tveter's model.

1.2.3 Tulving's Model

Tulving proposed a hierarchical model of three-stage memory [18], where the first stage, located at the top of the memory hierarchy, refers to episodic memory. The second stage, located at the second level of the hierarchy, is called semantic memory, and the third stage, located at the bottommost layer of the hierarchy, is called procedural memory. The episodic memory in Tulving's model stores episodes (i.e. incidents that take place from an individual's personal perspective). The semantic memory derives relationships among connected (shared) episodes, whereas the procedural memory extracts and saves procedures (sequences of steps) to solve a complex problem from the semantic interrelationships among events/episodes.

Consider, for example, a child's experience of rain and its precedence relationship with dark clouds. The child experiences the temporal precedence of dark clouds to the occurrence of rain and gradually learns the interrelationship between the two episodes. Initially, the child saves the two episodes independently in the episodic memory. Then, with repeated occurrence of the temporal precedence of dark clouds to rain, she/he learns the semantic relationship between the two episodes. Suppose she/he further experiences city roads flooded with water due to severe rain and notices people opening the blocked drainage system with devices like brushes and sticks. The child may derive the procedure of clearing the water-drainage system and save it in his/her procedural memory. The most interesting part of Tulving's model is the hierarchical representation of memory, where a direct pathway from episodic to semantic to procedural memory exists in the memory hierarchy, and the occurrence of current episodes, when matched with pre-stored ones, reminds the subject of the procedures he/she needs to adopt to handle the present circumstance. Figure 1.3 provides a schematic overview of the memory model following Tulving's postulates.

Figure 1.3 The architecture of memory hierarchy in Tulving's model.

1.2.4 The Parallel and Distributed Processing (PDP) Approach

Rumelhart and Hinton in the 1980s pioneered the parallel and distributed approach to neural signal processing in the context of learning and memory [11]. In their basic framework, they considered a feed-forward topology of artificial neural network with provisions for supervised learning. Supervised learning usually refers to learning the interconnectivity among a set of neurons to satisfy a given set of externally generated training instances, comprising the input and output attributes of episodes/observations/experiments. The PDP approach keeps provisions for the simultaneous learning of a number of neurons placed in layers, based on an estimate of the error at the neurons in the output layer. The error values at the neurons in the output layer represent the difference between the computed signals and the desired (targeted) signals at the respective neurons; these errors are propagated backward to adapt the synaptic connection strengths (weights) between the neurons of successive layers. The policy, well known as error back-propagation, is appealing in the sense that it can quickly adjust layer-wise neural connectivity with the aim of producing the targeted outputs from the measured values of the input instances of the training data set. This process of adapting the weights/neural synaptic connectivity is continued until the error value at the neurons in the output layer of the synthetic neural network goes below a certain threshold, also called the error limit. Naturally, when the training algorithm terminates, the strengths of the synaptic connectivity between the neurons of one layer and the next converge, indicating stable layer-wise interconnections that remember the complete set of training instances.
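The loop described above (forward pass, output-layer error, backward propagation, weight adaptation until the error limit is reached) can be sketched in a few lines of numpy. The architecture, learning rate, and the XOR training set are illustrative choices, not anything prescribed by the PDP literature.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)  # training inputs
T = np.array([[0], [1], [1], [0]], float)              # targets (XOR)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)  # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)  # hidden -> output weights
sig = lambda z: 1.0 / (1.0 + np.exp(-z))         # sigmoid activation

lr, error_limit = 1.0, 0.01
for _ in range(20000):
    H = sig(X @ W1 + b1)              # forward pass, hidden layer
    Y = sig(H @ W2 + b2)              # forward pass, output layer
    err = 0.5 * np.sum((T - Y) ** 2)  # summed squared error at the output layer
    if err < error_limit:             # stop once below the error limit
        break
    dY = (Y - T) * Y * (1 - Y)        # error term back-propagated from the output
    dH = (dY @ W2.T) * H * (1 - H)    # error term at the hidden layer
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(0)  # adapt hidden->output weights
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)  # adapt input->hidden weights
```

Each iteration performs one forward pass over all four training instances, measures the squared error at the output layer, and back-propagates it to adjust both weight layers; when the loop ends, the converged weights encode the full training set, illustrating the "memory as connectivity" view of the PDP framework.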

The PDP approach apparently mimics the biological process of neural learning, particularly for two specific types of problems. First, it demonstrates a general approach to attacking the problems of supervised pattern classification by remembering input–output training instances in the form of neural connectivity. Once the network is trained, i.e. the interconnection weights are learned for a given set of training instances, the network allows generalization, so as to predict the output for an unknown input instance close to one of the known input instances. Second, it offers a new avenue to develop a functional mapping between the input and output instances, particularly when the outputs involve high-order nonlinearity in the input attributes. In fact, in many real-world problems, the true functional form between the input and output is not clearly known. One typical example is the user's response in a nuclear power plant [19