Spiking Neural P Systems for Time Series Analysis

Jun Wang

Description

An up-to-date and accurate discussion of spiking neural P systems in time series analysis

In Spiking Neural P Systems for Time Series Analysis, the authors explore the fundamentals and current state of both spiking neural P systems and time series analysis, and examine application models for time series analysis. You’ll also find walkthroughs of recurrent-like, echo-like, and reservoir computing models for time series prediction.

The book covers applications of time series analysis such as financial time series analysis, power load forecasting, photovoltaic power forecasting, and medical signal processing, and contains illustrative figures and tables designed to improve reader understanding.

Readers will also find:

  • A thorough introduction to the theoretical and application research relevant to membrane computing and spiking neural P systems
  • Comprehensive explorations of a variety of recurrent-like models for time series forecasting, including LSTM-SNP and GSNP models
  • Practical discussions of common problems in reservoir computing models, including classification problems
  • Complete evaluations of models used in financial time series analysis, power load forecasting, and other techniques

Perfect for scientists, researchers, postgraduates, lecturers, and teachers, Spiking Neural P Systems for Time Series Analysis will also benefit undergraduate students interested in advanced techniques for time series analysis.

Page count: 319

Publication year: 2025

Table of Contents

Cover

Table of Contents

Title Page

Copyright

About the Authors

Preface

Acknowledgments

Acronyms

Chapter 1: Introduction

1.1 Background

1.2 Membrane Computing

1.3 Spiking Neural P Systems

1.4 Time Series Analysis

1.5 Organization of Chapters

References

Chapter 2: Spiking Neural P Systems

2.1 Introduction

2.2 Preliminaries

2.3 Spiking Neural P Systems

2.4 Spiking Neural P Systems with Extended Rules

2.5 Spiking Neural P Systems with Autapses

2.6 Nonlinear Spiking Neural P Systems

References

Chapter 3: Recurrent-like Models for Time Series Forecasting

3.1 Introduction

3.2 Recurrent Neural Networks

3.3 LSTM-SNP Model

3.4 GSNP Model

3.5 NSNP-AU Model

References

Chapter 4: Echo-like Models for Time Series Forecasting

4.1 Introduction

4.2 Echo State Networks

4.3 Echo-like Spiking Neural P Model

4.4 Echo-like Feedback Spiking Neural P Model

4.5 Deep Echo-like Spiking Neural P Model

References

Note

Chapter 5: Reservoir Computing Models for Time Series Classification

5.1 Introduction

5.2 Preliminaries

5.3 Basic Models

5.4 Improved Models

5.5 Model Evaluation

References

Notes

Chapter 6: Financial Time Series Analysis

6.1 Stock Market Index Prediction

6.2 Exchange Rate Price Prediction

6.3 Crude Oil Price Prediction

References

Notes

Chapter 7: Power Load Forecasting and Photovoltaic Power Forecasting

7.1 Power Load Forecasting

7.2 Photovoltaic Power Forecasting

References

Chapter 8: Medical Signal Processing

8.1 Introduction

8.2 Methodology

8.3 Model Evaluation

References

Index

End User License Agreement

List of Illustrations

Chapter 2

Figure 2.1 A SNP system (provided by authors).

Figure 2.2 An extended SNP system (provided by authors).

Figure 2.3 A SNP-AU system. Adapted from [17].

Figure 2.4 A NSNP system. Adapted from [18].

Chapter 3

Figure 3.1 The structure of the recurrent unit in LSTM (provided by authors).

Figure 3.2 The structure of the recurrent unit in GRU (provided by authors).

Figure 3.3 (a) In a NGSNP-I system, neuron and its presynaptic neurons are ...

Figure 3.4 Functional structure of LSTM-SNP model [4]/with permission of Elsevier.

Figure 3.5 LSTM-SNP model for time series forecasting [4]/with permission of Elsevier.

Figure 3.6 (a) In a NGSNP-II system, neuron and its presynaptic neurons are sho...

Figure 3.7 Functional structure of GSNP model [5]/IEEE.

Figure 3.8 GSNP model for predicting time series. Adapted from [5].

Figure 3.9 (a) In a NGSNP-III system (or NSNP-AU system), neuron and its presyn...

Figure 3.10 Functional structure of NSNP-AU model [6]/IEEE.

Figure 3.11 NSNP-AU model [6]/IEEE.

Chapter 4

Figure 4.1 The structure of ESN (provided by authors).

Figure 4.2 The specialized NSNP system with external input [1]/with permission of Elsevier.

Figure 4.3 Echo spiking neural P system or ESNP model [1]/with permission of Elsevier.

Figure 4.4 A NSNP-TO system [2]/with permission of Elsevier.

Figure 4.5 An EFSNP model, in which the reservoir is a NSNP-TO system [2]/with permission o...

Figure 4.6 A shallow ESNP model [3]/with permission of Elsevier.

Figure 4.7 The Deep-ESNP model [3]/with permission of Elsevier.

Chapter 5

Figure 5.1 (a) NSNP system; (b) Neuron and its presynaptic neuron...

Figure 5.2 RC-SNP model [28]/with permission of Elsevier.

Figure 5.3 RC-RMS-SNP model [28]/with permission of Elsevier.

Figure 5.4 Reservoir model space for NSNP system [29]/with permission of Elsevier.

Figure 5.5 The structure of TSC-DR-NSNP model [29]/with permission of Elsevier.

Figure 5.6 The structure of TSC-RMS-NSNP model [29]/with permission of Elsevier.

Chapter 6

Figure 6.1 The structure of the EMD-NSNP model for stock index time series prediction (prov...

Figure 6.2 The block diagram of the EMD-NSNP model (provided by authors).

Figure 6.3 The prediction results of the EMD-NSNP model on seven stock index time series: (...

Figure 6.4 Absolute percentage errors (APE) of the EMD-NSNP model on seven stock index time...

Figure 6.5 The neuron structure of GNSNP model [12]/World Scientific Publishing Co Pte Ltd.

Figure 6.6 GNSNP model [12]/World Scientific Publishing Co Pte Ltd.

Figure 6.7 The ERF-GNSNP model: (a) an extended GNSNP neuron; (b) its two-stage implementat...

Figure 6.8 The predictive curves of the ERF-GNSNP model for USDCNY, EURCNY, JPYCNY, and GBP...

Figure 6.9 The predictive curves of the ERF-GNSNP model for USDCAD, USDCNY and USDJPY data ...

Figure 6.10 The predictive curves of the ERF-GNSNP model for EURUSD, GBPUSD and MXNUSD data ...

Figure 6.11 (a) An ENSNP system; (b) AENSNP model (provided by authors).

Figure 6.12 The daily, weekly, and monthly crude oil price prediction results of the AENSNP ...

Figure 6.13 The daily, weekly, and monthly crude oil price prediction results of the AENSNP ...

Figure 6.14 The prediction results of the AENSNP model for WTI crude oil prices in three dif...

Figure 6.15 The prediction results of the AENSNP model for Brent crude oil prices in three d...

Figure 6.16 The prediction results of AENSNP for the Brent and WTI crude oil prices in Exper...

Chapter 7

Figure 7.1 Framework diagram of the prediction methodology. Adapted from [12].

Figure 7.2 Data tagging. Adapted from [12].

Figure 7.3 The structure of improved NSNP model. Adapted from [12].

Figure 7.4 Error curves (bottom) and curve fitting (top) in January 2010 for ISO. Adapted f...

Figure 7.5 Error curves (bottom) and curve fitting (top) in July 2010 for ISO. Adapted from [12].

Figure 7.6 Error curves (bottom) and curve fitting (top) in 2010 for ISO. Adapted from [12].

Figure 7.7 Error curves (bottom) and curve fitting (top) in 2011 for ISO. Adapted from [12].

Figure 7.8 Error curves (bottom) and curve fitting (top) for 2013 load forecast results for...

Figure 7.9 Error curves (bottom) and curve fitting (top) for 2009 load forecast results for...

Figure 7.10 The framework of the photovoltaic power generation prediction model. Adapted fro...

Figure 7.11 The flow chart of the prediction model. Adapted from [26].

Figure 7.12 A NSNP system [26]/MDPI/CC BY 4.0.

Figure 7.13 ESN-NSNP block [26]/MDPI/CC BY 4.0.

Figure 7.14 Prediction results of the discussed model at 15-minute resolution [26]/MDPI/CC BY 4.0.

Figure 7.15 Prediction results of the discussed model at 10-minute resolution [26]/MDPI/CC BY 4.0.

Figure 7.16 Prediction results of the discussed model at 5-minute resolution [26]/MDPI/CC BY 4.0.

Figure 7.17 Prediction results of the discussed model on 0.5Y [26]/MDPI/CC BY 4.0.

Figure 7.18 Prediction results of the discussed model on 4Y [26]/MDPI/CC BY 4.0.

Figure 7.19 Prediction results of the discussed model in spring [26]/MDPI/CC BY 4.0.

Figure 7.20 Forecasting results of the discussed model in summer [26]/MDPI/CC BY 4.0.

Figure 7.21 Forecasting results of the discussed model in autumn [26]/MDPI/CC BY 4.0.

Figure 7.22 Forecasting results of the discussed model in winter [26]/MDPI/CC BY 4.0.

Chapter 8

Figure 8.1 Framework of the DWT+LSTM-SNP method for seizure detection [7] / World Scientifi...

Figure 8.2 Schematic diagram of DWT multilevel decomposition [7] / World Scientific Publish...

Figure 8.3 Structure of the multi-channel LSTM-SNP [7] / World Scientific Publishing Co Pte Ltd.

Figure 8.4 Structure of LSTM-SNP [7] / World Scientific Publishing Co Pte Ltd.

List of Tables

Chapter 3

Table 3.1 Statistical information of datasets [4]/with permission of Elsevier.

Table 3.2 The comparisons of LSTM-SNP with other methods in terms of RMSE [4]/with permiss...

Table 3.3 Statistical information of datasets. Adapted from [5].

Table 3.4 The RMSE values of the different methods on univariate datasets. Adapted from [5].

Table 3.5 The RMSE and MAE values of the different methods on two multivariate datasets [5]/IEEE.

Table 3.6 The RMSE values of the different methods on Beijing PM2.5 multistep forecasting ...

Table 3.7 The results of the different models for Case I [6].

Table 3.8 The results of the different models for Case II [6]/IEEE.

Table 3.9 The results of the different models for Case III [6]/IEEE.

Table 3.10 The results of the distinct models on Mackey–Glass dataset [6]/IEEE.

Table 3.11 The results of the different methods on Sunspot dataset [6]/IEEE.

Table 3.12 The results of the different methods on Gas Furnace dataset [6]/IEEE.

Chapter 4

Table 4.1 Statistical information of time series [1]/with permission of Elsevier.

Table 4.2 Hyperparameter setting of the ESNP model [1]/with permission of Elsevier.

Table 4.3 The RMSE results of the distinct methods on eight univariate time series [1]/wit...

Table 4.4 The results of the different methods on Brent dataset [1]/with permission of Els...

Table 4.5 The results of the different methods on Beijing PM2.5 dataset [1]/with permissio...

Table 4.6 The results of the different models on Sunspot dataset [1]/with permission of Elsevier.

Table 4.7 The RMSE values of the different models [1]/with permission of Elsevier.

Table 4.8 Statistical information of datasets [2]/with permission of Elsevier.

Table 4.9 The RMSE values of the different models [2]/with permission of Elsevier.

Table 4.10 The RMSE values of the different models [2]/with permission of Elsevier.

Table 4.11 Results of the different methods on Brent dataset [2]/with permission of Elsevier.

Table 4.12 Results of the different methods on Beijing PM2.5 dataset [2]/with permission of...

Table 4.13 Experimental results of the different models on Beijing PM2.5 datasets [3]/with ...

Table 4.14 Experimental results of the different models on Lorenz time series [3]/with perm...

Table 4.15 Experimental results of the different models on the Brent time series [3]/with p...

Table 4.16 Experimental results of the different models on Sunspot datasets [3]/with permis...

Table 4.17 Experimental results of the different models on Temperature time series [3]/with...

Table 4.18 Experimental results of the different models on MG dataset [3]/with permission o...

Table 4.19 Experimental results of the different models on Precipitation time series [3]...

Chapter 5

Table 5.1 Statistical information of 17 time series datasets [29]/with permission of Elsevier.

Table 5.2 The accuracy rate values of the different models [29]/with permission of Elsevier.

Table 5.3 The accuracy rate values of the different models [29]/with permission of Elsevier.

Chapter 6

Table 6.1 Statistical information of seven datasets for stock market index (provided by authors).

Table 6.2 Four metric values of EMD-NSNP model (provided by authors).

Table 6.3 Comparisons of the EMD-NSNP with RC, RNN, and LSTM on RMSE and MAE (provided by ...

Table 6.4 Comparisons of the EMD-NSNP model with EMD2FNN and RC models on SP500, NASDAQ, a...

Table 6.5 Comparisons of the EMD-NSNP model with ModAugNet and RC models on SP500 and KOSP...

Table 6.6 Statistical information of exchange rate dataset in case study I [12]/World Scie...

Table 6.7 Comparisons of the ERF-GNSNP with other methods [12]/World Scientific Publishing...

Table 6.8 Statistical information of exchange rate datasets in case study II [12]/World Sc...

Table 6.9 Comparisons of ERF-GN-SNP with other models for MAPE [12]/World Scientific Publi...

Table 6.10 Comparisons of ERF-GNSNP with other models [12]/World Scientific Publishing Co Pte Ltd.

Table 6.11 Statistical description of crude oil spot price data in experiment 1 (provided b...

Table 6.12 Comparison results of different prediction methods on WTI crude oil price (provi...

Table 6.13 Comparison results of different prediction methods on Brent crude oil price (pro...

Table 6.14 Statistical description of crude oil spot price data in experiment 2 (provided b...

Table 6.15 Comparison results of different prediction methods on WTI and Brent crude oil pr...

Table 6.16 Comparison results of different prediction methods on Brent crude oil price (pro...

Chapter 7

Table 7.1 Experimental parameter setting. Adapted from [12].

Table 7.2 Load forecast results for the ISO System for January 2010 and July 2010. Adapted...

Table 7.3 Load forecast results on MAPE% for 2010 and 2011. Adapted from [12].

Table 7.4 ISO system load forecast MAPE (%) for 2013. Adapted from [12].

Table 7.5 Load forecast results on MAPE (%) for 2009. Adapted from [12].

Table 7.6 The hyperparameters for the discussed model [26]/MDPI/CC BY 4.0.

Table 7.7 Comparison results of different models at 15-minute resolution [26]/MDPI/CC BY 4.0.

Table 7.8 Comparison results of different models at 10-minute resolution [26]/MDPI/CC BY 4.0.

Table 7.9 Comparison results of different models at 5-minute resolution [26]/MDPI/CC BY 4.0.

Table 7.10 Parameter configuration of the discussed model. Adapted from Guo et al. [26].

Table 7.11 Outcomes of predictions generated by various models. Adapted from [26].

Table 7.12 Parameters of the discussed model [26]/MDPI/CC BY 4.0.

Table 7.13 Prediction results of distinct methods in spring. Adapted from [26].

Table 7.14 Prediction results of different models in summer. Adapted from [26].

Table 7.15 Prediction results of different models in autumn. Adapted from [26].

Table 7.16 Prediction results of different models in winter. Adapted from [26].

Chapter 8

Table 8.1 The description of CHB-MIT dataset [7] / World Scientific Publishing Co Pte Ltd.

Table 8.2 Frequency range corresponding to five-level decomposition [7] / World Scientific...

Table 8.3 Division of the dataset [7] / World Scientific Publishing Co Pte Ltd.

Table 8.4 Seizure detection results of the DWT+LSTM-SNP method [7] / World Scientific Publ...

Table 8.5 Comparisons of the DWT+LSTM-SNP with other models [7] / World Scientific Publish...

Guide

Cover

Table of Contents

Title Page

Copyright

About the Authors

Preface

Acknowledgments

Acronyms

Begin Reading

Index

End User License Agreement

IEEE Press

445 Hoes Lane

Piscataway, NJ 08854

IEEE Press Editorial Board

Sarah Spurgeon, Editor-in-Chief

Moeness Amin

Jón Atli Benediktsson

Adam Drobot

James Duncan

Hugo Enrique Hernandez Figueroa

Ekram Hossain

Brian Johnson

Hai Li

James Lyke

Joydeep Mitra

Albert Wang

Desineni Subbaram Naidu

Yi Qian

Tony Quek

Behzad Razavi

Thomas Robertazzi

Patrick Chik Yue

Spiking Neural P Systems for Time Series Analysis

Jun Wang

Xihua University

Hong Peng

Xihua University

Copyright © 2025 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

The manufacturer’s authorized representative according to the EU General Product Safety Regulation is Wiley-VCH GmbH, Boschstr. 12, 69469 Weinheim, Germany, e-mail: [email protected].

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty: While the publisher and the authors have used their best efforts in preparing this work, including a review of the content of the work, neither the publisher nor the authors make any representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data has been applied for:

Print ISBN: 9781394381579

ePDF ISBN: 9781394381593

ePub ISBN: 9781394381586

oBook ISBN: 9781394381609

Cover Design: Wiley

Cover Image: © Dmitry/stock.adobe.com

About the Authors

Jun Wang is a professor at the School of Electrical Engineering and Electronic Information at Xihua University, Chengdu, China. She is a member of the International Membrane Computing Society (IMCS). She is the principal investigator of 3 scientific research projects funded by the National Natural Science Foundation of China and of more than 30 scientific research projects at the national and provincial levels. She was awarded the Sichuan Provincial Natural Science Award in 2017. Her research interests cover several topics, including membrane computing, time series analysis, artificial intelligence, machine learning, intelligent control, and power systems. She has published over 90 scientific papers in international journals and conferences. She has more than 3,000 citations with an H-index of 35, according to Google Scholar.

Hong Peng is a professor at the School of Computer and Software Engineering at Xihua University, Chengdu, China. He is a member of the International Membrane Computing Society (IMCS), IEEE, and CCF. He is the principal investigator of 4 scientific research projects funded by the National Natural Science Foundation of China and of more than 20 scientific research projects at the national and provincial levels. He was awarded the Sichuan Provincial Natural Science Award in 2017. His research interests include membrane computing, artificial neural networks, time series analysis, deep learning, image and computer vision, and natural language processing. He has published over 230 scientific papers in international journals and conferences. He has more than 4,900 citations with an H-index of 42, according to Google Scholar.

Preface

Time series analysis has become an invisible pillar of modern scientific and engineering research. Essentially, it processes data points arranged in chronological order in order to extract valuable information, identify underlying patterns, and predict future trends. In meteorology, it is time series modeling of parameters such as atmospheric pressure, temperature, and wind speed that makes relatively accurate weather forecasts possible. In financial engineering, time series analysis of stock prices, trading volumes, and market indices forms the basis of quantitative trading. In neuroscience, functional magnetic resonance imaging (fMRI) maps the functional connectivity of brain regions by analyzing the temporal correlation of blood oxygen level dependent (BOLD) signals, and researchers are attempting to uncover warning signals of epileptic seizures in millisecond-level electroencephalography (EEG) recordings.

With the explosive growth of the Internet of Things (IoT) and big data technology, time series analysis faces unprecedented opportunities and challenges. A modern industrial machine may carry hundreds of sensors, each generating thousands of data points per second, and urban transportation systems produce billions of vehicle location records every day. Handling this kind of "time series big data" requires new algorithms and computing architectures.

Membrane computing, a branch of natural computing, has become a promising paradigm for designing non-traditional computational models. Within this framework, spiking neural P systems (SNP systems for short) stand out for their biologically inspired architecture, parallel processing capabilities, and dynamic spiking mechanism. Rooted in the functioning of brain neurons, these systems provide a unique approach to modeling and solving complex computational problems. Thanks to their inherent parallelism, dynamic spiking behavior, and ability to model discrete events in a biologically plausible manner, SNP systems have received growing attention, and their dynamic modeling capabilities make them particularly suitable for processing time series and other temporal data.

This book, Spiking Neural P Systems for Time Series Analysis, explores the intersection of these two fields and proposes new methods for processing, analyzing, and predicting time series data using SNP systems. Our goal is to bridge the gap between membrane computing theory and real-world time series problems by comprehensively discussing recurrent-like and echo-like SNP models, together with case studies that demonstrate their effectiveness in time series analysis. The case studies cover financial time series analysis, power load forecasting and photovoltaic power forecasting, and medical signal processing.

July, 2025

Jun Wang and Hong Peng

Xihua University, Chengdu, China

Acknowledgments

This book is the result of collaboration between the authors and several graduate students, who were responsible for tasks such as algorithm design, programming implementation, and experimental verification. They are Qian Liu and Lifan Long (2020–2023), Xin Xiong and Yujie Zhang (2021–2024), Min Wu, Lin Guo, and Yunzhu Gao (2022–2025), and Juan He (2023 cohort). We would also like to express our gratitude to several members of the Natural Computing Research Group at the University of Seville in Spain for their long-term collaborative research on spiking neural P systems and time series analysis: Professor Mario J. Pérez-Jiménez, Professor Agustín Riscos-Núñez, Dr. David Orellana-Martín, and Dr. Antonio Ramírez-de-Arellano.

The research work in this book was supported by the National Natural Science Foundation of China (Nos. 62176216 and 62076206) and the Sichuan Provincial Science Foundation Project (No. 2022ZYD0115), for which we express our gratitude.

Acronyms

AI: Artificial Intelligence
ANN: Artificial Neural Networks
ARIMA: Autoregressive Integrated Moving Average
BPTT: Backpropagation Through Time
CNN: Convolutional Neural Networks
CNP: Coupled Neural P (systems)
CUDA: Compute Unified Device Architecture
DE: Differential Evolution
DNA: Deoxyribonucleic Acid
DTNP: Dynamic Threshold Neural P (systems)
EFSNP: Echo-like Feedback Spiking Neural P (model)
EMD: Empirical Mode Decomposition
ENSNP: Echo-like Nonlinear Spiking Neural P (model)
ESN: Echo State Networks
ESNP: Echo-like Spiking Neural P (model)
GA: Genetic Algorithm
GNSNP: Gated Nonlinear Spiking Neural P (model)
GPU: Graphics Processing Unit
GRU: Gated Recurrent Unit
GSNP: Gated Spiking Neural P (model)
IMF: Intrinsic Mode Functions
LSTM: Long Short-term Memory network
LSTM-SNP: Long Short-term Memory-like Spiking Neural P (model)
MTS: Multivariate Time Series
NSNP: Nonlinear Spiking Neural P (systems)
NSNP-AU: Nonlinear Spiking Neural P (systems) with Autapses
NSNP-TO: Nonlinear Spiking Neural P (systems) with Two Outputs
PSO: Particle Swarm Optimization
PSPACE: Polynomial Space (complexity class)
RC: Reservoir Computing
RC-SNP: Reservoir Computing Based on Spiking Neural P (systems)
RMS: Reservoir Mode Space
RNN: Recurrent Neural Networks
SAT: Boolean Satisfiability Problem
SNP: Spiking Neural P (systems)
SNP-AU: Spiking Neural P (systems) with Autapses
SVM: Support Vector Machine
TSC: Time Series Classification
UTS: Univariate Time Series

Chapter 1: Introduction

1.1 Background

Natural computing is a computing paradigm abstracted from nature: it imitates phenomena and laws observed in nature or human society in order to devise new, more effective algorithms for difficult problems [1]. Natural computing typically encompasses adaptive computational frameworks capable of self-organization and continuous improvement, addressing intricate challenges beyond conventional computing paradigms. To date, natural computing has produced numerous families of algorithms, each corresponding to a different source of inspiration, such as swarm intelligence algorithms (biological populations), artificial life (biological individuals), immune algorithms (immune systems), artificial neural networks (biological nervous systems), membrane computing (living cells), and DNA computing (DNA molecules), as well as methods derived from physical and chemical phenomena.

Proposed by Gh. Pǎun in 1998, membrane computing represents an innovative computational framework within the broader field of natural computing [2]. It abstracts computing models from the structure and function of living cells, as well as from tissues and organs that collaborate to process information. Membrane systems, also called P systems, constitute a class of inherently parallel and spatially distributed computational models in membrane computing. In a P system, the interior of the cell is divided into several regions separated by membranes, and each region is treated as an independent computing unit. Each region contains multisets of objects (corresponding to the compounds that constantly evolve inside a cell) as carriers of information, and the evolution of these multisets under rewriting rules (analogous to cellular biochemical reactions) constitutes each region's autonomous information processing mechanism. The transfer of objects across membranes embodies the information-sharing mechanism between computing units. With distributed processing elements working in parallel, P systems achieve significantly greater theoretical efficiency than von Neumann-type computers [3]. Their computational power has been proven equivalent to that of Turing machines, and some variants show indications of going beyond these classical limits.
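
To make this mechanism concrete, the following minimal Python sketch (our illustration, not code from the book) simulates one maximally parallel evolution step of a single region; the rule encoding and helper names are hypothetical, and real P systems add membrane hierarchy, communication targets, and rule priorities.

    from collections import Counter
    import random

    # A region holds a multiset of objects; a rule rewrites a multiset of
    # reactants into a multiset of products (hypothetical encoding).
    Rule = tuple  # (reactants: Counter, products: Counter)

    def applicable(rule, objects):
        reactants, _ = rule
        return all(objects[o] >= n for o, n in reactants.items())

    def step(objects, rules):
        """One maximally parallel step: nondeterministically apply rules
        until none is applicable, then release all products at once."""
        objects = objects.copy()
        produced = Counter()
        while True:
            choices = [r for r in rules if applicable(r, objects)]
            if not choices:
                break
            reactants, products = random.choice(choices)
            objects -= reactants   # consume reactants immediately ...
            produced += products   # ... but products appear only at step end
        return objects + produced

    # Example: rules ab -> c and a -> d compete for the copies of 'a'.
    rules = [(Counter("ab"), Counter("c")), (Counter("a"), Counter("d"))]
    print(step(Counter("aaab"), rules))

The point mirrored here is that products become available only after the step completes, so all rule applications within one step are logically simultaneous.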

Drawing upon the electrophysiological properties of neural signal transmission, Ref. [4] introduced spiking neural P systems (SNP systems) in 2006, a novel membrane computing framework mimicking neuronal architectures. Structurally, an SNP system is a directed graph in which neurons serve as computational nodes interconnected by oriented synaptic pathways, enabling concurrent distributed processing. Each neuron has a state variable representing the number of spikes it contains (simulating the membrane potential of a biological neuron), and a set of spiking/firing rules updates this state (simulating the growth or decay of the membrane potential). Because they share common features with spiking neural networks [5], SNP systems are classified among third-generation neural network architectures. Research results have demonstrated that SNP systems achieve Turing-complete computational power in many cases and show excellent performance and potential in solving computationally hard problems.
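
As a rough illustration of the spiking mechanism (again ours, not the authors'), the sketch below advances a toy SNP system by one synchronous step. Rule conditions are plain predicates on the spike count rather than the regular expressions of the formal definition, and rule delays are omitted.

    def snp_step(spikes, rules, synapses):
        """spikes: {neuron: spike count};
        rules: {neuron: [(cond, consumed, emitted)]};
        synapses: {neuron: [successor neurons]}. Returns next spike counts."""
        spikes = dict(spikes)  # work on a copy
        incoming = {n: 0 for n in spikes}
        for n, count in spikes.items():
            for cond, consumed, emitted in rules.get(n, []):
                if cond(count):                   # rule is enabled
                    spikes[n] = count - consumed  # consume spikes
                    for succ in synapses.get(n, []):
                        incoming[succ] += emitted  # delivered in parallel
                    break                         # one rule per neuron per step
        return {n: spikes[n] + incoming[n] for n in spikes}

    # Two neurons feed an output neuron; each fires whenever it holds a spike.
    spikes = {"n1": 2, "n2": 1, "out": 0}
    rules = {"n1": [(lambda k: k >= 1, 1, 1)], "n2": [(lambda k: k >= 1, 1, 1)]}
    synapses = {"n1": ["out"], "n2": ["out"]}
    print(snp_step(spikes, rules, synapses))  # {'n1': 1, 'n2': 0, 'out': 2}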

Within the domain of biologically-inspired computation, membrane computing represents a significant advancement, offering fresh modeling techniques and computational methodologies for computing research. Concurrently, it furnishes innovative theoretical frameworks, methodological advancements, and practical instruments to address real-world challenges.

1.2 Membrane Computing

Gh. Pǎun first formulated the principles of membrane computing during a 1998 academic stay in Finland and documented them in his pioneering 2000 publication [2]. Numerous researchers have turned their attention to membrane computing because of its extensive potential for real-world applications in fields as diverse as computer science, systems biology, and linguistics. In 2003, the Institute for Scientific Information (ISI) recognized membrane computing as an emerging and fast-growing discipline within computer science, and the pioneering membrane computing paper became one of ISI's most frequently cited publications of its publication year. In 2002, Springer published the field's first comprehensive book, "Membrane Computing – An Introduction" [6]. Oxford University Press published "The Oxford Handbook of Membrane Computing" in 2010 [3]. In 2017, Springer published "Real-life Applications with Membrane Computing" [7], and in 2024 it published "Advanced Spiking Neural P Systems: Models and Applications" [8]. To date, the membrane computing domain has yielded substantial academic output, including 40+ journal special issues, 50+ conference proceedings volumes, 70+ doctoral dissertations, and numerous review articles.

In the membrane computing community, theoretical research, application research, and software/hardware implementation are the three main lines of work. Below, we briefly review the status of each.

1.2.1 Theoretical Research

With over two decades of continuous advancement, membrane computing has established a substantial theoretical foundation. P systems exhibit three primary architectural variants based on their topological configurations: cell-like (hierarchical organization), tissue-like (networked arrangement), and neural-like (directed graph architecture). The biological diversity observed in living cells has inspired numerous P system architectures, exemplified by catalytic variants, membrane transport systems using symport/antiport mechanisms, and promoter/inhibitor-regulated models [9, 10]. Drawing on fundamental concepts in computer science, P systems with varied operational mechanisms and parallel processing capabilities have been constructed [11]. By simulating classical computing devices such as Turing machines and register machines, it has been confirmed that most P systems are Turing complete; that is, their computing power is equivalent to that of Turing machines. By introducing mechanisms such as cell division, cell separation, and cell generation, P systems can generate their own computational space during computation. Trading space for time in this way allows these systems to efficiently solve classic computationally hard problems, such as SAT, knapsack, and even PSPACE-complete problems [12, 13].

1.2.2 Application Research

Thanks to characteristics of membrane computing such as distribution, nondeterminism, and scalability, the computational framework of P systems is particularly effective at solving certain real-world problems. In the early stages of application research, the focus was mainly on membrane algorithms: fusing the naturally parallel framework of P systems with established optimization approaches (genetic algorithm [GA], particle swarm optimization [PSO], differential evolution [DE]) gave rise to an innovative algorithmic paradigm. The membrane algorithm was initially developed in Ref. [14] as a computational approach to combinatorial optimization, and its practicality was demonstrated through a successful application to the traveling salesman problem. The integration of membrane computing with established optimization techniques has since spawned multiple algorithmic derivatives that are effective across diverse problem domains, from theoretical models to real-world applications. Beyond these core applications, membrane computing has proven useful in interdisciplinary domains including ecological system simulation [15], visual data analysis [16], and intelligent algorithm design [17].
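
The sketch below conveys the flavor of such a membrane algorithm on a toy one-dimensional minimization problem. It is our hedged reading of a Nishida-style nested-membrane optimizer, with every name and parameter hypothetical: each membrane improves its candidate solutions by local search, good solutions migrate inward, and poor ones migrate outward.

    import random

    def local_search(x, step=0.5):
        """Hypothetical local optimizer for f(x) = x^2: keep improving moves."""
        y = x + random.uniform(-step, step)
        return y if y * y < x * x else x

    def membrane_algorithm(depth=4, per_membrane=3, rounds=50):
        # membranes[0] is outermost; membranes[-1] is the innermost elite region
        membranes = [[random.uniform(-10, 10) for _ in range(per_membrane)]
                     for _ in range(depth)]
        for _ in range(rounds):
            membranes = [[local_search(x) for x in m] for m in membranes]
            for i in range(depth - 1):
                membranes[i].sort(key=lambda x: x * x)
                membranes[i + 1].append(membranes[i].pop(0))  # best moves inward
                membranes[i + 1].sort(key=lambda x: x * x)
                membranes[i].append(membranes[i + 1].pop())   # worst moves outward
        return min(membranes[-1], key=lambda x: x * x)

    print(membrane_algorithm())  # a value close to the optimum 0

The nested structure acts like a sequence of increasingly selective filters, with the innermost membrane playing the role of an elite archive.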

In recent years, membrane computing learning models have received attention, focusing on how to construct learning models using the mechanisms of membrane computing. Unsupervised membrane computing learning models center on clustering: by exploiting the structures of different P systems and the mechanisms of cells (or membranes), several membrane-inspired clustering algorithms, known as membrane clustering algorithms, have been proposed [18–22]. These algorithms address problems such as data clustering, fuzzy clustering, multi-objective clustering, and automatic clustering.

1.2.3 Software and Hardware Implementation

As research has deepened, the software and hardware implementation of membrane computing has made steady progress. Several simulation tools have been developed for various membrane systems, including Membrane Simulator, a C++ tool for catalytic P systems and active-membrane P systems [23], and SNUPS, a Java tool for numerical P systems [24]. However, current P system simulators are mostly limited in functionality, typically emulating only specific membrane system configurations. A research team specializing in natural computing at the University of Seville in Spain developed P-Lingua, an integrative modeling framework [25]; it enables comprehensive P system modeling and execution through invocations of the Java-based pLinguaCore kernel. For hardware-level realization, the team employed the CUDA parallel computing architecture to enable GPU-accelerated simulation of membrane computing systems [26], demonstrating by solving hard problems on the GPU simulator that GPU parallelism can greatly improve computational efficiency.

1.3 Spiking Neural P Systems

SNP systems, proposed in Ref. [4], are computing systems derived from the way neurons in biological neural networks process information in the form of spikes and interact with one another via pulses. Neural computing is an active research field at the intersection of neuroscience and computational science, and as a new type of third-generation neural computing model, SNP systems have become a research hotspot since their proposal, yielding numerous results. The neural computing research community has disseminated significant findings through high-impact journals such as Neural Networks and Neurocomputing (Elsevier), the International Journal of Neural Systems (World Scientific), and IEEE Transactions on Neural Networks and Learning Systems.

SNP systems represent a category of bio-inspired computing architectures that exhibit both parallel processing and distributed characteristics within membrane computing paradigms [8, 27]. SNP systems fundamentally comprise three key components: a directed graph architecture, neuronal units containing discrete spike counts, and operational rules governing both spike consumption and generation processes. The operational rules fundamentally determine the spiking dynamics that define SNP systems’ computational behavior.

In the past decade, the investigation of SNP systems has yielded substantial findings across fundamental theory, practical applications, and software and hardware aspects. Below, we will introduce each of these three aspects separately.

1.3.1 Theoretical Research Work

The theoretical work of SNP systems mainly involves establishing various computational models and analyzing their computational capabilities, including two key aspects: Turing-equivalent computational capability assessment and the feasibility of deriving efficient algorithms for NP-hard problems.

Since Ref. [4] proposed SNP systems, numerous variants have been derived from diverse biological discoveries over the past decade, particularly findings from neuroscience. Some of these are listed here: multiple channels [19], anti-spikes [28], astrocytes [29], polarization [30], rules on synapses [31], structural plasticity [32], communication on request [33], inhibitory rules [34], autapses [35], dendritic mechanisms [34], delays on synapses [36], nonlinear mechanisms [37], coupling modulation [38], dynamic thresholds [39], plasmids [40], etc. Recent studies on SNP systems continue to reveal an increasing number of biologically inspired mechanisms.

The first theoretical question to address after constructing a model is its computational completeness, that is, whether it achieves the same computational power as a Turing machine. In such analyses, the models are studied as number generators/acceptors, function computing devices, and formal language generators. Many results demonstrate Turing completeness in the first two roles [3]; in addition, some variants have been proven able to generate regular languages and recursively enumerable languages [41].

SNP models generally operate under three distinct execution modes: sequential, asynchronous, and synchronous. In the synchronous mode, a global clock synchronizes all neurons: neurons operate concurrently, although within each neuron the rules are applied sequentially. Most earlier variants work in the synchronous mode. In the asynchronous mode, all neurons work asynchronously, without reference to a global clock [42]; in recent years, computational universality has been theoretically verified for several asynchronous models. In the sequential mode, neurons activate one after another, while rule execution order is still maintained within individual neurons [43]. Researchers have introduced various sequential models that are computationally universal, serving as both number generators/acceptors and function computing devices.
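
The difference between the three modes can be viewed as a scheduling policy over the neurons whose rules are enabled in the current step; a compact sketch (our illustration, with hypothetical names) is:

    import random

    def select_firing(enabled, mode):
        """enabled: neurons with an applicable rule in this step."""
        if mode == "synchronous":     # global clock: every enabled neuron fires
            return enabled
        if mode == "asynchronous":    # any subset may fire, possibly none
            return [n for n in enabled if random.random() < 0.5]
        if mode == "sequential":      # exactly one enabled neuron fires
            return [random.choice(enabled)] if enabled else []
        raise ValueError(mode)

    print(select_firing(["n1", "n2", "n3"], "sequential"))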

Computational theorists continue developing scaled-down universal systems, exemplified by space-efficient Turing machines and register machines with minimal instruction sets. The investigation of minimal universal models has become an active research direction in SNP systems, with Gh. Pǎun pioneering the foundational work on their computational completeness at small scales [29].

Another key theoretical focus is the computational efficiency of SNP system architectures. This line of work evaluates efficiency by applying SNP systems and their variants to NP-hard problems, including SAT and PSPACE-complete problems [44