Discover the design, implementation, and analytical techniques for multi-modal intelligent sensing in this cutting-edge text
The Internet of Things (IoT) is becoming ever more comprehensively integrated into everyday life. The intelligent systems that power smart technologies rely on increasingly sophisticated sensors in order to monitor inputs and respond dynamically. Multi-modal sensing offers enormous benefits for these technologies, but also comes with greater challenges; it has never been more essential to offer energy-efficient, reliable, interference-free sensing systems for use with the modern Internet of Things.
Multimodal Intelligent Sensing in Modern Applications provides an introduction to systems which incorporate multiple sensors to produce situational awareness and process inputs. It is divided into three parts—physical design aspects, data acquisition and analysis techniques, and security and energy challenges—which together cover all the major topics in multi-modal sensing. The result is an indispensable volume for engineers and other professionals looking to design the smart devices of the future.
Multimodal Intelligent Sensing in Modern Applications readers will also find:
Multimodal Intelligent Sensing in Modern Applications is ideal for experienced engineers and designers who need to apply their skills to Internet of Things and 5G/6G networks. It can also serve as an introductory text for graduate researchers seeking to understand the background, design, and implementation of various sensor types and data analytics tools.
Cover
Table of Contents
Series Page
Title Page
Copyright Page
Dedication
About the Editors
List of Contributors
Preface
1 Advances in Multi‐modal Intelligent Sensing
1.1 Multi‐modal Intelligent Sensing
1.2 Sensors for Multi‐modal Intelligent Sensing
1.3 Applications of Multi‐modal Intelligent Sensing
1.4 Challenges and Opportunities in Multi‐modal Sensing
References
2 Antennas for Wireless Sensors
2.1 Wireless Sensors: Definition and Architecture
2.2 Multi‐modal Wireless Sensing
2.3 Antennas: The Sensory Gateway for Wireless Sensors
2.4 Fundamental Antenna Parameters
2.5 Key Operating Frequency Bands for Sensing Antennas
2.6 Fabrication Methods for Sensing Antennas
2.7 Antenna Types for Wireless Sensing Networks
2.8 Advantages of Electronic Beamsteering Antennas in Sensing Systems
2.9 Summary
References
3 Sensor Design for Multimodal Environmental Monitoring
3.1 Environment and Forests
3.2 Methods to Combat Deforestation
3.3 Design of a WSN to Combat Deforestation
3.4 Summary
References
4 Wireless Sensors for Multi‐modal Health Monitoring
4.1 Wearable Sensors
4.2 Flexible Sensors
4.3 Multi‐modal Healthcare Sensing Devices
4.4 AI Methods for Multi‐modal Healthcare Systems
4.5 Summary
References
5 Sensor Design for Industrial Automation
5.1 Multimodal Sensing in Industrial Automation
5.2 Sensors for Realizing Industrial Automation
5.3 Design Considerations for Effective Multimodal Industrial Automation
5.4 Challenges and Opportunities of Multimodal Sensing in Industrial Automation
5.5 Summary
References
6 Hybrid Neuromorphic‐Federated Learning for Activity Recognition Using Multi‐modal Wearable Sensors
6.1 Multi‐modal Human Activity Recognition
6.2 Machine Learning Methods in Multi‐modal Human Activity Recognition
6.3 System Model
6.4 Simulation Setup
6.5 Results and Discussion
6.6 Summary
References
7 Multi‐modal Beam Prediction for Enhanced Beam Management in Drone Communication Networks
7.1 Drone Communication
7.2 Beam Management
7.3 System Model
7.4 Simulation and Analysis
7.5 Summary
References
8 Multi‐modal‐Sensing System for Detection and Tracking of Mind Wandering
8.1 Mind Wandering
8.2 Multi‐modal Wearable Systems for Mind‐Wandering Detection and Monitoring
8.3 Design of Multi‐modal Wearable System
8.4 Results and Discussion
8.5 Summary
References
9 Adaptive Secure Multi‐modal Telehealth Patient‐Monitoring System
9.1 Healthcare Systems
9.2 Security in Healthcare Systems
9.3 Blockchain‐Powered ZTS for Enhanced Security of Telehealth Systems
9.4 Cyber‐resilient Telehealth‐Enabled Patient Management System
9.5 Summary
References
10 Advances in Multi‐modal Remote Infant Monitoring Systems
10.1 Remote Patient Monitoring
10.2 Remote Infant Monitoring (RIM) System
10.3 Disease‐Specific Remote Infant Monitoring Systems
10.4 Challenges in Remote Infant Monitoring Systems
10.5 Summary
References
11 Balancing Innovation with Ethics: Responsible Development of Multi‐modal Intelligent Tutoring Systems
11.1 Intelligent Tutoring Systems and Ethical Considerations
11.2 The Promise and Perils of ITS
11.3 Ethical Frameworks for ITS
11.4 Bias and Fairness in ITS
11.5 Privacy and Security Concerns
11.6 Socioeconomic Disparities in Access
11.7 Dependency on Technology
11.8 Summary
References
12 Road Ahead for Multi‐modal Intelligent Sensing in the Deep Learning Era
12.1 Future Challenges and Perspectives for Intelligent Multi‐modal Sensing
12.2 Summary
References
Index
End User License Agreement
Chapter 2
Table 2.1 Comparison of different operating systems used in wireless sensor...
Chapter 3
Table 3.1 Comparison of wireless sensor network‐based systems for combating...
Table 3.2 Comparison of different commercially available temperature sensor...
Table 3.3 Comparison of various processing boards for wireless sensor netwo...
Table 3.4 Comparison of various LoRa communication modules used for wireles...
Table 3.5 Comparison of batteries used for wireless sensor networks.
Table 3.6 Pros and cons of different solar panels for wireless sensor netwo...
Table 3.7 Comparison of various wireless protocols.
Chapter 6
Table 6.1 Comparative results of global models for CNN, S‐CNN, LSTM, and S‐...
Table 6.2 Comparison of different DL techniques for Real‐World dataset.
Table 6.3 Comparison of energy efficiency using single Eq. (6.15).
Chapter 7
Table 7.1 Hyper‐parameters for design and training.
Table 7.2 Model evaluation metrics.
Chapter 8
Table 8.1 Pros and cons of various sensors for measuring mind wandering.
Table 8.2 The list of numerical features extracted from the raw sensor data...
Chapter 9
Table 9.1 Telehealth systems risk due to security vulnerabilities and their ...
Table 9.2 Overview of the existing approaches being used to deter the cybera...
Table 9.3 Common types of blockchains and their core functionalities.
Chapter 10
Table 10.1 Features of physiological monitors for infant monitoring.
Table 10.2 Respiratory‐related disease observation in remote infant monitor...
Table 10.3 Summary of heart and blood‐related diseases infant monitoring sy...
Table 10.4 Summary of remote infant monitoring systems for various infant d...
Chapter 1
Figure 1.1 Multi‐modal intelligent sensing.
Figure 1.2 Key types of sensors.
Figure 1.3 Key considerations in multiple sensor integration.
Figure 1.4 Time‐division multiplexing.
Figure 1.5 Frequency‐division multiplexing.
Figure 1.6 Realization of smart city concept through multi‐modal sensing.
Chapter 2
Figure 2.1 Architecture of a typical WS node.
Figure 2.2 Classification of wireless sensors.
Figure 2.3 A generic wireless sensing environment.
Figure 2.4 Different kinds of antenna radiation patterns.
Figure 2.5 Antenna polarization matching scheme.
Figure 2.6 Patch antennas for environmental sensing.
Figure 2.7 Conceptual view of omni‐directional antennas and wide area covera...
Figure 2.8 Conceptual depiction of a pattern reconfigurable directional ante...
Chapter 3
Figure 3.1 High‐level end‐to‐end three‐stage WSN system architecture for com...
Figure 3.2 An end‐to‐end system architecture to set up a reliable communicat...
Figure 3.3 A high‐level representation of proposed communication links.
Figure 3.4 A high‐level block diagram of an energy harvesting and battery ma...
Figure 3.5 Illustration of sensor deployment scenarios.
Chapter 4
Figure 4.1 Wireless sensors for healthcare.
Figure 4.2 Wireless sensors positioned on the human body for remote healthca...
Figure 4.3 A typical multi‐modal remote healthcare system.
Chapter 5
Figure 5.1 Industrial automation and smart manufacturing avenues.
Figure 5.2 Conceptual depiction of a multi‐modal sensing network in industri...
Figure 5.3 A high‐level depiction of a typical radar sensing architecture.
Chapter 6
Figure 6.1 Conceptual framework of centralized indoor HAR using wearable sen...
Figure 6.2 Conceptual FL framework for HAR using wearable sensing in the out...
Figure 6.3 Spiking neurons propagation process.
Figure 6.4 Proposed hybrid S‐LSTM model where input LSTM layer activated by ...
Figure 6.5 Learning curve representing the accuracy for UCI dataset obtained...
Figure 6.6 The confusion matrix for four DL models compared in this study fo...
Figure 6.7 Learning curve of accuracy obtained using a global test set for R...
Figure 6.8 The confusion matrix CNN, S‐CNN, LSTM, and S‐LSTM models for Real...
Figure 6.9 Learning curve for Real‐World dataset, with 50% random client par...
Figure 6.10 Accuracy comparison graph for global and personalized models for...
Chapter 7
Figure 7.1 A real wireless communication scenario, the mmWave base station s...
Figure 7.2 A schematic illustration of the stacking model architecture for o...
Figure 7.3 The plot displays the accuracy scores for position‐based predicti...
Figure 7.4 The comparison of the top‐1 normalized power across different app...
Chapter 8
Figure 8.1 The conceptual framework outlines a comprehensive multisensory me...
Figure 8.2 The experimental configuration encompasses a wearable device that...
Figure 8.3 The raw data were collected from each sensor of the Pupil Core ey...
Figure 8.4 The block diagram of a simple GRU to process the recorded raw dat...
Figure 8.5 The SVM classification for GSR, eye tracker, PPG, and fusion. Cla...
Figure 8.6 The GRU classification for GSR, eye tracker, PPG, and fusion. Cla...
Figure 8.7 The boxplot of 10 iterations of training and testing using each i...
Chapter 9
Figure 9.1 Typical client–server architecture of the telehealth system.
Figure 9.2 Prevailing traditional client–server architecture‐based patient m...
Figure 9.3 Blockchain‐enabled zero‐trust security framework for secure teleh...
Figure 9.4 Zero‐trust security for telehealth‐enabled patient management sys...
Figure 9.5 Blockchain‐based zero‐trust compliance for cyber resilience.
Chapter 10
Figure 10.1 Infant mortality rates in South Asia between 2015 and 2020.
IEEE Press
445 Hoes Lane
Piscataway, NJ 08854
IEEE Press Editorial Board
Sarah Spurgeon, Editor‐in‐Chief
Moeness Amin
Ekram Hossain
Desineni Subbaram Naidu
Jón Atli Benediktsson
Brian Johnson
Tony Q. S. Quek
Adam Drobot
Hai Li
Behzad Razavi
James Duncan
James Lyke
Thomas Robertazzi
Joydeep Mitra
Diomidis Spinellis
Edited by
Masood Ur Rehman
James Watt School of Engineering, University of Glasgow, Glasgow, UK
Ahmed Zoha
James Watt School of Engineering, University of Glasgow, Glasgow, UK
Muhammad Ali Jamshed
James Watt School of Engineering, University of Glasgow, Glasgow, UK
Naeem Ramzan
School of Computing, Engineering and Physical Sciences, University of the West of Scotland, Paisley, UK
Copyright © 2025 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging‐in‐Publication Data Applied for:
Hardback ISBN: 9781394257713
Cover Design: Wiley
Cover Image: © MF3d/Getty Images
To Allah, the most merciful and compassionate, who guides us through every challenge in life. His love and grace sustain us always and we dedicate this book to him, seeking his continued guidance and blessings.
To my parents, Khalil and Ilfaz, who have nurtured me with unconditional love, sacrifice, and wisdom. To my brothers, Habib and Waheed for their wholehearted assistance. To my wife, Faiza, for her being a great support system through thick and thin. To my son, Musaab, for filling my life with immense love and joy.
Masood Ur Rehman
To my wife, Mariyam, for her unwavering support, love, and patience throughout the journey of editing this book. Her encouragement has been invaluable. To my daughters, Zainab and Yusra, thank you for your joy and the inspiration you bring into my life every day. This accomplishment would not have been possible without the strength and motivation you all gave me.
Ahmed Zoha
To my biggest inspiration, my parents, Jamshed Iqbal and Nuzhut Jamshed; my support system, my wife, Aqsa Tariq; and my son, Zohaan Ali.
Muhammad Ali Jamshed
To my parents for all their love and for raising me in a way I am proud of. To my wife Nasira for her resolute support. To my daughters Saba, Bisma, and Hadiya for cherishing and elating my life with all the bliss and love.
Naeem Ramzan
Masood Ur Rehman received a BSc degree in electronics and telecommunication engineering from UET, Lahore, Pakistan in 2004 and an MSc and PhD in electronic engineering from Queen Mary University of London, UK, in 2006 and 2010, respectively. He worked at QMUL as a postdoctoral research assistant until 2012 before joining the Centre for Wireless Research at the University of Bedfordshire as a Lecturer. He served briefly at the University of Essex and then moved to the James Watt School of Engineering at the University of Glasgow as an Assistant Professor in 2019. He currently works as an Associate Professor at the University of Glasgow. His research interests include compact antenna design for 6G, Industry 5.0, and Global Navigation Satellite Systems; flexible, wearable, and implantable sensors and systems; bio-electromagnetics and exposure of biological tissues to RF; mmWave and nano-communications for body-centric networks; wireless sensor networks in industrial automation, healthcare, and environmental monitoring; and device-to-device and human-to-human communications. He has worked on several projects supported by industrial partners and research councils. He has contributed to a patent and authored/coauthored 7 books, 13 book chapters, and over 200 technical articles in leading journals and peer-reviewed conferences. He is a Fellow of the Higher Education Academy (UK), a Senior Member of the IEEE, a Member of the IET and BioEM, and part of the technical program committees and organizing committees of several international conferences, workshops, and special sessions. He is a committee member of IEEE APS/SC WG P145, the IEEE APS Best Paper Award committee, and Pearson's focus group on formative assessment. He acts as an editor of PeerJ Computer Science; associate editor of IEEE Sensors Journal, IEEE Journal of Electromagnetics, RF and Microwaves in Medicine and Biology, IEEE Access, IET Electronics Letters, and Microwave & Optical Technology Letters; topic editor for MDPI Sensors; editorial advisor to Cambridge Scholars Publishing; and lead guest editor of numerous special issues of renowned journals. He is chair of the IEEE UKRI Section Young Professionals Affinity Group and vice-chair of the IEEE UKRI Section APS/MTTS Joint Chapter, and has acted as Communications Chair for EuCAP 2024, Workshop Chair for the Workshop on Sustainable and Intelligent Green Internet of Things for 6G and Beyond at IEEE ICC 2024, IEEE GLOBECOM 2024, IEEE VTC-S 2023, and IEEE GLOBECOM 2023, and TPC chair for the UCET 2020 and BodyNets 2021 conferences.
Ahmed Zoha is an Associate Professor at the James Watt School of Engineering, University of Glasgow, and a globally recognized expert in artificial intelligence, machine learning, and smart energy systems. With a PhD from the 6G/5GIC Centre at the University of Surrey and over 15 years of experience, he has contributed to cutting-edge research in AI-driven 5G networks, healthcare technologies, and smart energy monitoring, earning prestigious accolades including IEEE Best Paper Awards and endorsement as a UK exceptional talent by the Royal Academy of Engineering. He serves as an Associate Editor for the Journal of Big Data and Frontiers in Communications and Networks, as well as a Guest Editor for several Q1 journals. His work, widely cited and recognized in both academia and industry, supports scalable technological solutions that benefit vulnerable populations and advance the Sustainable Development Goals (SDGs).
Muhammad Ali Jamshed has been with the University of Glasgow since 2021. He is endorsed by the Royal Academy of Engineering under the exceptional talent category and was nominated for the Departmental Prize for Excellence in Research in 2019 and 2020 at the University of Surrey. He is a Fellow of the Royal Society of Arts, a Fellow of the Higher Education Academy (UK), a Senior Member of the IEEE, an Editor of IEEE Wireless Communications Letters, and an Associate Editor of IEEE Sensors Journal, IEEE Communications Standards Magazine, and IEEE Internet of Things Magazine. He is co-inventor of one patent and has more than 70 publications in top-tier international journals, including IEEE Transactions and Magazines, and in flagship IEEE ComSoc conferences such as ICC, VTC, INFOCOM, and WCNC. He is an Editor of four books. He has been the Lead Guest Editor for Special Issues in IEEE Wireless Communications Magazine (2024–2025), IEEE Communications Standards Magazine (2024–2025), and IEEE Internet of Things Magazine (2023–2024). He has served as General Chair for over 10 workshops at IEEE international conferences such as IEEE GLOBECOM 2024, IEEE VTC Fall 2024, IEEE MECOM 2024, IEEE CAMAD 2023, IEEE WCNC 2023, IEEE PIMRC 2022–2023, IEEE GLOBECOM 2023, IEEE VTC Spring 2022–2023, and IEEE CAMAD 2019. He has been a TPC Member at IEEE ICC 2022–2024, IEEE VTC Spring 2024, and IEEE GLOBECOM 2023, and Sessions Chair at IEEE VTC Spring 2022 and IEEE WCNC 2019 and 2023. He is a founding member of the IEEE Workshop on Sustainable and Intelligent Green Internet of Things.
Naeem Ramzan received an MSc degree in telecommunication from the University of Brest, France, in 2004, and a PhD in electronics engineering from Queen Mary University of London, London, UK, in 2008. Currently, he is a Full Professor of Artificial Intelligence and Computer Engineering and the Director of the Artificial Intelligence, Virtual Communication, Network (AVCN) Institute at the University of the West of Scotland. Before that, he was a senior research fellow and lecturer at Queen Mary University of London from 2008 to 2012. He is a Fellow of the Royal Society of Edinburgh, a Senior Member of the IEEE, a Senior Fellow of the Higher Education Academy (HEA), Co-chair of the MPEG HEVC verification (AHG5) group, and a voting member of the British Standards Institution (BSI). In addition, he holds key roles in the Video Quality Experts Group (VQEG), such as Co-chair of the Ultra High Definition (UltraHD) group, Co-chair of the Visually Lossless Quality Analysis (VLQA) group, and Co-chair of the Psycho-Physiological Quality Assessment (PsyPhyQA) group. He has been a lead researcher in various nationally or EU-sponsored multimillion-funded international research projects. His research interests are cross-disciplinary and industry-focused and include video processing, analysis, and communication; video quality evaluation; brain-inspired multimodal cognitive technology; big data analytics; affective computing; IoT/smart environments; natural multimodal human–computer interaction; and eHealth/connected health. He has a global collaborative research network spanning both academia and key industrial players. He has been the lead supervisor or supervisor for about 30 postdoctoral research fellows and PhD research students. He has published over 250 articles in peer-reviewed journals, conferences, and book chapters, including standardized contributions. His paper received the 2016 Best Paper Award of the IEEE Transactions on Circuits and Systems for Video Technology, and three of his conference papers were selected for Best Student Paper awards in 2015/2016. He was awarded the Scottish Knowledge Exchange Champion award in 2020 and 2023 and is the only academic in Scotland to have received this award twice. He received the STARS (Staff Appreciation and Recognition Scheme) award in 2014 and 2016 for "Outstanding Research and Knowledge Exchange" (University of the West of Scotland) and was awarded the Contribution Reward in 2009 and 2011 for outstanding research and teaching activities (Queen Mary University of London). He has chaired, co-chaired, or organized more than 25 workshops, special sessions, and tracks at international conferences.
Qammer H. Abbasi, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Abdul Jabbar, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Iftikhar Ahmad, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Saima Gulzar Ahmad, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
Nadeem Ajum, Department of Software Engineering, Capital University of Science and Technology, Islamabad, Pakistan
Romina Soledad Albornoz-De Luise, Departament d'Informàtica, Universitat de València, València, Spain
Kamran Ali, Department of Computer Science, Middlesex University, London, UK
Pablo Arnau-González, Departament d'Informàtica, Universitat de València, València, Spain
Fahad Ayaz, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Kashif Ayyub, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
Rami Ghannam, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Muhammad Hanif, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
Bushra Haq, Department of Computer Science, Balochistan University of Information Technology, Engineering and Management Sciences, Quetta, Pakistan
Sajjad Hussain, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Shagufta Iftikhar, Department of Software Engineering, Capital University of Science and Technology, Islamabad, Pakistan
Muhammad Ali Imran, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Tassawar Iqbal, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
Muhammad Ali Jamshed, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Sana Ullah Jan, School of Computing, Engineering and the Built Environment, Edinburgh Napier University, UK
Tahera Kalsoom, Manchester Fashion Institute, Manchester Metropolitan University, Manchester, UK
Mumraiz Khan Kasi, Department of Computer Science, Balochistan University of Information Technology, Engineering and Management Sciences, Quetta, Pakistan
Ahsan Raza Khan, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Dost Muhammad Khan, Department of Information Technology, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
Sara Khosravi, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Nasira Kirn, School of Computing, Engineering and Physical Sciences, University of the West of Scotland, Paisley, UK
Haobo Li, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Habib Ullah Manzoor, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Ehsan Ullah Munir, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
Hassan Rabah, Institut Jean Lamour, Université de Lorraine, France
Rao Naveed Bin Rais, Artificial Intelligence Research Center (AIRC), Ajman University, Ajman, UAE
Naeem Ramzan, School of Computing, Engineering and Physical Sciences, University of the West of Scotland, Paisley, UK
Muhammad Maaz Rehan, Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
Masood Ur Rehman, James Watt School of Engineering, University of Glasgow, Glasgow, UK
Omer Riaz, Department of Information Technology, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
Najia Saher, Department of Information Technology, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
Ana Serrano-Mamolar, Departamento de Lenguajes y Sistemas, Universidad de Burgos, Burgos, Spain
Syed Ahmed Shah, Department of Computer Science, Balochistan University of Information Technology, Engineering and Management Sciences, Quetta, Pakistan
Rizwan Shahid, NHS Trust, Lincoln County Hospital, Lincoln, UK
Sergi Solera-Monforte, Departament d'Informàtica, Universitat de València, València, Spain
Muhammad Suleman, Department of Information Technology, The Islamia University of Bahawalpur, Bahawalpur, Pakistan
Yuyan Wu, Departament d'Informàtica, Universitat de València, València, Spain
Ahmed Zoha, James Watt School of Engineering, University of Glasgow, Glasgow, UK
In recent years, there has been a significant surge in the utilization of the Internet of Things (IoT) and wireless sensors to meet escalating demands for high data rates, low latency, and ultrareliable communication in 5G/6G systems. Various strategies are under development utilizing intelligent sensing platforms to address these growing needs. Using multiple sensors has proven to be an effective approach to enhance reliability, efficiency, and user experience in diverse application scenarios across healthcare, transportation, environmental monitoring, industrial automation, and entertainment industries.
Incorporating data from multiple modalities such as visual images, radiation levels, texture details, and behavioral patterns captured by sensors like light, temperature, humidity, vision, and motion enhances the information assimilation process, leading to precise decision-making. While offering immense benefits, multi-modal sensing presents significant challenges such as energy efficiency, mobility, reliability, interference mitigation, security, and real-time processing requirements.
In an interconnected world, seamless integration of IoT with intelligent sensing platforms is essential to deliver transformative solutions. The customization and diversification of intelligent sensors require efficient techniques for designing and integrating sensors, as well as extracting valuable insights from vast amounts of multimodal data. This involves leveraging advanced sensor design, robust big data analytics, and stringent security measures to promote sustainability, spur innovation, and explore new possibilities.
To date, there is a lack of comprehensive literature that addresses design, implementation, and analytical techniques for multimodal intelligent sensing. A dedicated book focusing on these crucial aspects will not only bridge this gap but also educate readers on the key aspects of efficient sensor networks, laying the groundwork for future advancements in a smart and interconnected world. This book is a structured effort in this direction that explores the cutting‐edge advancements and challenges in the realm of multimodal sensing, discussing both software and hardware solutions. It covers a broad spectrum of topics in multimodal intelligent sensing for a range of applications, bringing together experts from various disciplines including wireless communications, signal processing, and sensor design.
Key topics discussed include sensor design, deployment efficiency, energy management, data fusion, and information extraction through machine learning, deep learning, and federated learning to showcase the latest developments in this dynamic field. By considering challenges and future prospects, the book caters to a diverse readership within the scientific community.
Chapter 1 delves into the realm of multimodal intelligent sensing, uncovering the various sensor types and the integration of multiple sensors for enhanced capabilities. The chapter also explores the applications of multimodal sensing in different sectors and the challenges and opportunities that come with this dynamic field.
Chapter 2 focuses on antennas for wireless sensors, emphasizing their crucial role as the sensory gateway for wireless networks. Readers will gain insights into fundamental antenna parameters, fabrication methods, and the different types of antennas utilized in sensing networks.
Chapter 3 discusses the sensor design for environmental monitoring, shedding light on combating deforestation through wireless sensor networks and the design considerations involved in creating effective systems for environmental conservation.
Chapter 4 explores the applicability of wireless sensors for multimodal health monitoring, detailing the use of wearable and implantable sensors, multimodal healthcare sensing devices, and the use of AI methods in healthcare systems for enhanced monitoring and diagnosis.
Chapter 5 shifts gears to sensor design for industrial automation, highlighting the role of multimodal sensing in revolutionizing industrial processes. From RF sensors to vision sensors, this chapter explores the design considerations and challenges faced in implementing multimodal sensing in industrial settings.
Chapters 6 and 7 probe hybrid neuromorphic federated learning for activity recognition and multimodal beam prediction in drone communication networks, respectively, showcasing the integration of cutting‐edge technologies to enhance sensing capabilities in diverse applications.
Chapter 8 studies the domain of mind-wandering using multiple wearable sensors, aiming to detect mind-wandering episodes through deep learning and paving the way to better gauge students' learning and concentration levels.
Chapters 9 and 10 investigate secure telehealth patient monitoring and advances in remote infant monitoring systems, respectively, emphasizing the design of multi-modal wearable and secure telehealth systems for improved patient care and monitoring.
Chapter 11 explores the ethical considerations involved in the development of multimodal intelligent tutoring systems, highlighting the balance between innovation and ethical responsibility in the deployment of such systems.
Lastly, Chapter 12 paves the way for the future of multimodal intelligent sensing in the deep learning era, outlining the challenges, perspectives, and ethical considerations that will shape the evolution of sensor technology in the years to come.
This book serves as a comprehensive guide for researchers, engineers, and professionals interested in the advancements and applications of multimodal intelligent sensing systems. Each chapter offers insights, practical considerations, and future directions to inspire further innovation and exploration in this vibrant field.
Masood Ur Rehman
James Watt School of Engineering, University of Glasgow, Glasgow, UK
Ahmed Zoha
James Watt School of Engineering, University of Glasgow, Glasgow, UK
Muhammad Ali Jamshed
James Watt School of Engineering, University of Glasgow, Glasgow, UK
Naeem Ramzan
School of Computing, Engineering and Physical Sciences, University of the West of Scotland, Paisley, UK
Masood Ur Rehman1, Muhammad Ali Jamshed1, and Tahera Kalsoom2
1 James Watt School of Engineering, University of Glasgow, Glasgow, UK
2 Manchester Fashion Institute, Manchester Metropolitan University, Manchester, UK
Intelligent sensing systems play a crucial role in various fields, enabling the acquisition of valuable data for analysis, monitoring, and decision‐making processes. Multi‐modal intelligent sensing, with its capability to gather information from multiple sensor types and sensing parameters, has emerged as a powerful tool in diverse applications. This chapter aims to provide a comprehensive overview of multi‐modal intelligent sensing, offering insights into the diverse sensor types, sensing parameters, application scenarios, and data analysis tools associated with this rapidly evolving field.
Multi‐modal intelligent sensing refers to the use of multiple sensor types, such as optical, acoustic, thermal, and chemical sensors, to capture and analyze different aspects of the environment or a system. By combining data from various sensors, multi‐modal sensing systems can provide a more comprehensive and holistic view of the monitored phenomenon, leading to better insights and decision‐making [1].
A simple example is a system designed to monitor a remote environment, such as a home or office, with a view to enhancing security. Such a system might use a variety of sensing devices, including visible light cameras, LiDAR, infra-red motion detectors, and contact microphones, each capturing unique types of data, as shown in Figure 1.1. The behavior of this environment could be quite complex. For example, the sound of a door opening, followed by an increase in infra-red activity, and ending with a light being turned off could be automatically interpreted as a sequence involving an entry into the environment by an unwanted visitor (door opening), movement to a specific location (increase in infra-red activity), and an attempt to allay suspicion (turning off the light). The analysis of this complex behavior would be greatly enhanced if the system could automatically determine spatio-temporal relationships between events and classify these events into a taxonomy based on their threat to security. This could be achieved by organizing the information provided by the various sensing devices into one unified data structure through data fusion, with each type of data being a mode, and building a symbolic or semantic model of the observed events through automatic learning and reasoning using methods from artificial intelligence. The intelligent agent, informed by this understanding, then executes actions or generates outputs accordingly [2].
Figure 1.1 Multi‐modal intelligent sensing.
The components and functionalities of this simple example can be tailored to various applications by selecting appropriate sensors, data fusion methods, and intelligent processing techniques. In recent times, sensing devices like digital cameras, microphones, and range finders have become ubiquitous. Single‐mode intelligent sensing refers to the automatic extraction of information from the data produced by these devices using processing and analysis techniques. There is great demand to build systems that can interact more richly with humans in their environment, for applications such as smart environments, augmented reality, and human–computer interfaces [3]. Systems that can emulate or augment human perception by automatically integrating and processing information from different modes (multi‐modal sensing) are a natural next step in this quest.
The success of multi-modal intelligent sensing systems relies on the selection and integration of appropriate sensor types that can capture different aspects of the phenomenon under observation. Sensors have guided humans since the Han Dynasty [4]. The seismometer, developed in the second century by the Chinese astronomer and mathematician Zhang Heng, was used to sense earthquakes [5], while the auxanometer and the crescograph are sensors used to measure plant growth [6, 7]. The galvanometer, developed in 1820 by the Danish chemist and physicist Hans Christian Ørsted, is used to measure the flow of electric current [8], and the actinometer, developed in 1825 by the English astronomer and mathematician Sir John Frederick William Herschel, is used to measure thermal power and radiation [9]. Depending on the application, scenario, and environment, sensors can be of different types. For instance, to measure sound or noise levels in a given environment, acoustic sensors such as hydrophones and microphones are used [10]. Similarly, the thermometer is a popular sensor used to measure temperature [11]. Different sensor types are presented in Figure 1.2.
Nowadays, sensors play a vital role in our daily routines: tactile sensors in elevators, lamps that brighten or dim at the touch of a hand, fire alarms, and motion detectors are a few examples. Advancements in micro-machinery have expanded sensors beyond their traditional types, as indicated in Figure 1.2. The magnetic, angular rate, and gravity (MARG) sensor unit is a classic example of this expansion and is used to estimate the attitude of an aircraft [13]. Microelectromechanical systems (MEMS) technology has enabled the manufacturing of these sensors on a microscopic scale [14]. Microsensors developed using such microscopic approaches are comparatively more accurate and faster than older sensors. Due to the increased demand for rapid, reliable, and affordable data access, low-cost, easy-to-use disposable sensors have gained importance [15].
Figure 1.2 Key types of sensors.
Source: Ref. [12]/IEEE.
With the evolution of telecommunication technology, especially the advent of wireless communication, greater effort has been put into integrating sensors with wireless antennas capable of transmitting sensing information over large distances. This integration has enabled humans to gain useful information from hard-to-reach areas. In the current era of the fifth generation (5G) of mobile communication and beyond, the addition of the massive machine-type communication (mMTC) and ultra-reliable low-latency communication (URLLC) use cases has increased the connectivity and reliability of Internet of Things (IoT) devices [16, 17]. It is envisioned that the number of IoT devices will reach 75 billion by 2025, which will significantly increase the popularity of sensors in the wireless communication domain [18]. This growth will create a plethora of challenges in meeting end-user requirements. To tackle these challenges, there is first a need to understand the physics behind these sensors, which will enable more fruitful integration with the wireless domain and, in return, provide a meaningful platform to address these challenges effectively.
Each sensor type has its own strengths and limitations, and the selection of sensors for a multi‐modal intelligent sensing system should be based on the specific requirements and characteristics of the application being monitored. By combining different sensor types, the system can leverage their respective advantages and mitigate their limitations, leading to a more robust and comprehensive sensing solution.
Integrating multiple sensor types in a sensing system can provide enhanced capabilities by leveraging the strengths of each sensor and compensating for their individual limitations. It can enhance capabilities, improve performance, and provide a more comprehensive and accurate representation of the monitored environment. By carefully selecting and integrating sensors with complementary strengths, the system can better adapt to dynamic conditions and deliver more valuable insights for various applications.
Some key advantages of integrating multiple sensor types in a multi‐modal sensing system include the following.
By having multiple sensors of different types that measure the same parameters, the system can cross‐check and validate the data, increasing reliability and reducing the risk of errors or false readings [19].
Different sensor types capture different aspects of the environment or object being monitored. Combining optical, thermal, and acoustic sensors, for example, can provide a more comprehensive picture of the surroundings, enabling better analysis and decision‐making [20].
Integrating sensors with different resolutions and sensing capabilities can lead to more accurate and precise measurements. For instance, combining a high‐resolution optical sensor with a thermal sensor can provide detailed visual information along with temperature data for better object identification and tracking [21].
Different sensor types may perform better in specific environmental conditions. By integrating sensors with complementary strengths, the system can maintain functionality across a wider range of operating conditions, such as varying light levels, temperatures, or noise levels [22].
Combining sensors with different modalities can expand the range of detectable signals or anomalies. For example, integrating optical, thermal, and acoustic sensors can enable the system to detect objects based on visual appearance, heat signatures, and sound signals simultaneously [23].
By combining multiple sensor types, the system can gather a multifaceted view of the environment, leading to a better understanding of the context in which events are occurring. This can enable more informed decision‐making and response strategies [24].
Integrating multiple sensor types for multi‐modal sensing is a complex task and requires careful consideration of various factors. Some of the key aspects that must be thoughtfully contemplated are illustrated in Figure 1.3 and discussed below.
Figure 1.3 Key considerations in multiple sensor integration.
Before integrating sensors, it is essential to identify the specific parameters or variables that need to be measured or monitored. A wide variety of physical quantities such as temperature, pressure, humidity, motion, light intensity, sound, gases and chemicals, proximity and presence, force and strain, and environmental parameters such as moisture level, pH, air quality, radiation level, and magnetic field can be sensed through various types of sensors. Understanding and monitoring these sensing parameters are essential for ensuring system performance, safety, efficiency, and reliability in a wide range of applications [25]. By choosing the right sensors and implementing accurate measurement techniques, informed decisions, optimized processes, and improved performance can be achieved through multi‐modal sensing.
Based on the identified sensing parameters, different sensor types can be chosen to capture the desired data. For example, temperature sensors such as thermocouples or infrared sensors can be used for temperature measurement, while accelerometers or gyroscopes can be used for motion sensing. Careful selection of sensors based on their sensitivity, accuracy, resolution, and range is essential for reliable data acquisition [26].
When integrating multiple sensor types, it is crucial to calibrate each sensor to ensure accurate and consistent readings. Calibration involves adjusting the sensor’s output based on known reference values [27]. Additionally, aligning sensors properly in the system is important to ensure that they are measuring the same location or object accurately.
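To make the calibration step concrete, the short sketch below fits a linear gain-and-offset model against known reference values. The raw readings and the linear error model are illustrative assumptions, not a prescription for any particular sensor.

```python
# A minimal sketch of single-sensor calibration, assuming a linear
# (gain and offset) error model and a set of known reference values.
import numpy as np

# Hypothetical raw readings taken while the sensor observes known references
raw = np.array([1.02, 2.10, 3.05, 4.11, 5.08])   # sensor output
reference = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # ground-truth values

# Least-squares fit of reference = gain * raw + offset
gain, offset = np.polyfit(raw, reference, deg=1)

def calibrate(reading: float) -> float:
    """Map a raw sensor reading onto the reference scale."""
    return gain * reading + offset

print(f"gain={gain:.4f}, offset={offset:.4f}")
print(f"calibrated(2.10) = {calibrate(2.10):.3f}")
```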
Integrating multiple sensors often involves data fusion, which is the process of combining data from different sensors to improve overall system performance. This can be achieved through sensor fusion algorithms such as Kalman filters, Bayesian inference, or artificial neural networks [26]. Data fusion helps reduce uncertainties, enhance accuracy, and provide a more complete picture of the environment.
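As a minimal illustration of fusion with a Kalman filter, the sketch below fuses two noisy sensors observing the same scalar quantity; the noise variances and the assumption of a constant true value are simplifications chosen for clarity.

```python
# A minimal sketch of measurement-level fusion with a scalar Kalman filter:
# two noisy sensors observe the same quantity, and each measurement update
# tightens the fused estimate. Noise levels here are assumed for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_temp = 25.0
R1, R2 = 0.5**2, 1.0**2          # measurement noise variances of sensors 1 and 2

x, P = 20.0, 10.0                # initial state estimate and its variance

def update(x, P, z, R):
    """Standard Kalman measurement update for a scalar, static state."""
    K = P / (P + R)              # Kalman gain
    return x + K * (z - x), (1 - K) * P

for _ in range(20):
    z1 = true_temp + rng.normal(0, np.sqrt(R1))
    z2 = true_temp + rng.normal(0, np.sqrt(R2))
    x, P = update(x, P, z1, R1)  # fuse sensor 1's measurement
    x, P = update(x, P, z2, R2)  # fuse sensor 2's measurement

print(f"fused estimate: {x:.2f} (variance {P:.4f})")
```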
Data acquisition involves the process of sampling, digitizing, and storing sensor data for further processing and analysis. The sampling rate, resolution, and timing synchronization are important considerations in data acquisition. Depending on the application, continuous sampling or event‐triggered sampling may be required to capture relevant information [26].
Once the sensor data is acquired, signal processing techniques can be applied to extract meaningful information, detect patterns, or identify anomalies [28]. Signal processing methods such as noise reduction, filtering, feature extraction, and pattern recognition can be used to process the sensor data and derive useful insights.
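The following sketch illustrates two of these steps, noise reduction and feature extraction, on a synthetic signal; the moving-average window length and the chosen features are arbitrary examples rather than recommendations.

```python
# A minimal sketch of pre-processing: moving-average noise reduction
# followed by simple per-window feature extraction on a synthetic signal.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.normal(size=t.size)  # noisy 5 Hz tone

# Noise reduction: moving-average (low-pass) filter
window = 11
smoothed = np.convolve(signal, np.ones(window) / window, mode="same")

# Feature extraction: a few summary statistics for this signal window
features = {
    "mean": smoothed.mean(),
    "std": smoothed.std(),
    "peak_to_peak": smoothed.max() - smoothed.min(),
    "rms": np.sqrt(np.mean(smoothed**2)),
}
print(features)
```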
Concurrent data acquisition from different sensors involves collecting, integrating, and analyzing data from multiple sensors simultaneously to obtain a comprehensive view of the system or environment being monitored. The selection of the appropriate method depends on the specific requirements of the monitoring system, the nature of the sensors involved, and the desired level of synchronization and accuracy in data acquisition. Apart from data fusion, some methods commonly used for concurrent data acquisition from different sensors are the following.
Multiplexing involves switching between different sensors to sample data sequentially. Analog multiplexers or digital multiplexing systems can be used to connect multiple sensors to a single data acquisition system, allowing data to be collected from each sensor in rapid succession [29]. Two widely used methods are time-division multiplexing (TDM), in which each sensor is allocated a specific time slot during which it transmits its data to the data acquisition system, enabling better synchronization (Figure 1.4), and frequency-division multiplexing (FDM), in which specific frequency bands are assigned to different sensors for data transmission (Figure 1.5), allowing truly concurrent data acquisition without interference, at the cost of increased complexity and dedicated hardware [30]. A minimal code sketch of the TDM approach is given after Figure 1.5.
Parallel processing involves acquiring data from multiple sensors simultaneously by using multiple data acquisition channels or systems. Each sensor is connected to a dedicated channel, allowing for independent data collection from different sensors in parallel. This offers the highest level of concurrency but requires more hardware resources [31].
Distributed data acquisition systems consist of multiple nodes or modules, each connected to one or more sensors. These nodes communicate with a central control unit or data aggregator to coordinate data acquisition from different sensors in real time. This approach is well suited for large‐scale, geographically distributed sensor deployments [31].
Synchronous sampling involves triggering all sensors simultaneously to capture data at the same time instance. This method ensures that data from different sensors is synchronized to prevent timing discrepancies and ensure accurate data comparison crucial for applications requiring precise temporal correlation between different modalities (e.g., audio and video synchronization) [32].
Figure 1.4 Time‐division multiplexing.
Figure 1.5 Frequency‐division multiplexing.
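The sketch below illustrates the TDM idea of Figure 1.4: a single acquisition channel polls each sensor in a fixed, repeating time slot. The sensor read functions, their values, and the slot length are hypothetical stand-ins for real hardware drivers.

```python
# A minimal sketch of time-division multiplexed acquisition: one channel
# polls each sensor in a fixed, repeating time slot (round-robin).
import itertools
import time

def read_temperature():   # hypothetical sensor drivers returning fixed values
    return 24.7

def read_humidity():
    return 51.2

def read_light():
    return 310.0

sensors = [("temperature", read_temperature),
           ("humidity", read_humidity),
           ("light", read_light)]
SLOT_SECONDS = 0.1  # assumed slot length

# Round-robin over the sensors: each gets one slot per cycle (3 cycles here)
for name, read in itertools.islice(itertools.cycle(sensors), 9):
    sample = read()
    print(f"t={time.monotonic():.2f}s slot={name} value={sample}")
    time.sleep(SLOT_SECONDS)
```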
Sensor networks consist of interconnected sensors that communicate with each other and a centralized data collection point. Wireless sensor networks (WSNs) and IoT devices enable concurrent data acquisition from multiple sensors distributed across a wide area [33].
Using a common data bus or communication protocol such as controller area network (CAN) or Ethernet can facilitate concurrent data acquisition from different sensors. This method allows for standardized communication between sensors and data acquisition systems [34].
Employing real‐time data processing techniques such as data streaming, event‐driven programming, or edge computing enables immediate analysis of data collected from multiple sensors. Real‐time processing helps in making timely decisions based on the acquired sensor data [35].
Multi‐modal sensing involves the simultaneous collection of data from different sources or types of sensors, such as images, audio, motion, and temperature. Analyzing multi‐modal data requires sophisticated tools and techniques to effectively extract meaningful insights. Data analysis tools for multi‐modal sensing encompass a range of techniques from signal processing to machine learning, data fusion, and visualization.
Signal processing techniques play a crucial role in pre‐processing and enhancing multi‐modal data before analysis. For example, in image and video data, techniques such as filtering, noise reduction, and feature extraction can be applied to improve the quality of the data. In audio data, signal processing techniques like Fourier transforms, wavelet analysis, and spectral analysis can be used to extract relevant information [36]. Signal processing techniques help in reducing noise, extracting features, and preparing the data for further analysis.
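As a small example of the spectral analysis mentioned above, the sketch below applies a discrete Fourier transform to a synthetic two-tone signal and recovers its dominant frequency; the sampling rate and signal are fabricated for illustration.

```python
# A minimal sketch of spectral analysis on an audio-like signal using the
# discrete Fourier transform to find the dominant frequency component.
import numpy as np

fs = 1000                                  # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(x))          # magnitude spectrum
freqs = np.fft.rfftfreq(x.size, d=1 / fs)  # matching frequency axis

dominant = freqs[np.argmax(spectrum)]
print(f"dominant frequency: {dominant:.1f} Hz")   # ~50 Hz, the stronger tone
```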
Machine learning and artificial intelligence algorithms are essential for analyzing multi‐modal data and extracting patterns and insights. These algorithms can be used for tasks such as classification, clustering, regression, anomaly detection, and prediction [37]. Techniques like deep learning, neural networks, support vector machines, and random forests can be applied to multi‐modal data to uncover hidden relationships and patterns [38]. Machine learning algorithms can learn from the data and make predictions or decisions based on the patterns they find, enabling the automation of complex data analysis tasks.
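The sketch below shows this workflow end to end with a random forest classifier trained on feature-level fused data. The features and labels are randomly generated, so the reported score only demonstrates the mechanics, not achievable accuracy.

```python
# A minimal sketch of supervised classification on fused multi-modal data:
# each row concatenates features from two modalities before training.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 400
thermal = rng.normal(size=(n, 4))            # 4 thermal features per sample
acoustic = rng.normal(size=(n, 6))           # 6 acoustic features per sample
X = np.hstack([thermal, acoustic])           # feature-level fusion
y = (thermal[:, 0] + acoustic[:, 0] > 0).astype(int)  # synthetic label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```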
Data fusion and integration involve combining information from multiple sensors or data sources to provide a holistic view of the data. Fusion techniques can be used to merge data at different levels, such as feature‐level fusion, decision‐level fusion, or sensor‐level fusion [39]. By integrating data from diverse sources, data fusion techniques help in reducing redundancy, improving the accuracy of analysis, and providing a more comprehensive understanding of the data. Methods like Bayesian inference, Kalman filtering, and Dempster‐Shafer theory can be applied for data fusion in multi‐modal sensing scenarios [40].
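To contrast with the feature-level fusion shown above, the following sketch performs decision-level fusion: one classifier per modality, with their class probabilities averaged into a single decision. Again, the data are synthetic and purely illustrative.

```python
# A minimal sketch of decision-level fusion: train a separate classifier per
# modality, then average the per-class probabilities into one decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 300
X_img = rng.normal(size=(n, 5))                      # image-derived features
X_audio = rng.normal(size=(n, 5))                    # audio-derived features
y = (X_img[:, 0] + X_audio[:, 0] > 0).astype(int)    # synthetic label

clf_img = LogisticRegression().fit(X_img, y)
clf_audio = LogisticRegression().fit(X_audio, y)

# Decision-level fusion: average the two modality-specific probability outputs
p_fused = (clf_img.predict_proba(X_img) + clf_audio.predict_proba(X_audio)) / 2
y_pred = p_fused.argmax(axis=1)
print(f"fused training accuracy: {(y_pred == y).mean():.2f}")
```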
Visualization and interpretation of multi-modal data play a crucial role in understanding the relationships and patterns within the data. Visualizing data from different modalities together can provide a more comprehensive view and facilitate the identification of correlations and insights. Techniques like dimensionality reduction, scatter plots, heat maps, and clustering visualizations can be used to represent multi-modal data in a human-readable format [41]. Interpretation of multi-modal data involves extracting meaningful insights, identifying trends, anomalies, and patterns, and communicating the findings effectively to stakeholders.
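The sketch below combines two of these techniques: principal component analysis reduces synthetic eight-dimensional fused features to two axes, which are then drawn as a scatter plot. The two clusters stand in for hypothetical "normal" and "anomalous" system states.

```python
# A minimal sketch of dimensionality reduction for visualization: PCA
# projects high-dimensional fused features onto two plottable axes.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, size=(100, 8)),    # hypothetical "normal" state
               rng.normal(3, 1, size=(100, 8))])   # hypothetical "anomalous" state
labels = np.array([0] * 100 + [1] * 100)

X2 = PCA(n_components=2).fit_transform(X)          # project onto 2 components
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap="coolwarm", s=12)
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("Fused sensor features after PCA")
plt.show()
```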
Data fusion and synchronization are crucial aspects of concurrent data acquisition from different sensors to ensure accurate and reliable data processing. The following are some of the key factors that should be considered to make data fusion and synchronization in multi-modal sensing effective and efficient.
One of the major challenges in multi‐modal sensing is ensuring that sensors are accurately calibrated and synchronized. Sensor calibration involves adjusting sensor readings to account for factors such as systematic error, drift, noise, or bias, which can impact the accuracy and reliability of the data [42]. Synchronization is essential to ensure that data from different sensors are aligned in time enabling accurate fusion. Strategies for addressing calibration and synchronization issues include using calibration algorithms, calibration targets, time‐stamping, and synchronization protocols to improve the quality and consistency of the data. Utilizing a common time reference, such as a global positioning system (GPS) signal or a high‐precision clock, can ensure that data from various sensors is timestamped consistently [43].
Ensuring that data samples from different sensors correspond to the same time period is essential for successful data fusion. Aligning data streams by timestamp or aligning data sequences through interpolation techniques can help in merging data from disparate sources [44].
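A minimal sketch of timestamp alignment by interpolation follows: a slow temperature stream is resampled onto a faster accelerometer clock so that each fused row refers to the same instant. The sampling rates and signals are assumed for illustration.

```python
# A minimal sketch of aligning two sensor streams sampled at different rates
# by linearly interpolating the slower stream onto the faster stream's clock.
import numpy as np

t_fast = np.linspace(0, 10, 101)        # 10 Hz accelerometer timestamps
t_slow = np.linspace(0, 10, 11)         # 1 Hz temperature timestamps
accel = np.sin(t_fast)                  # synthetic accelerometer samples
temp = 20 + 0.2 * t_slow                # synthetic temperature samples

# Resample the slow stream onto the fast timestamps by linear interpolation
temp_aligned = np.interp(t_fast, t_slow, temp)

fused = np.column_stack([t_fast, accel, temp_aligned])
print(fused[:3])   # each row: timestamp, accel sample, aligned temperature
```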
Data fusion involves combining information from multiple sensors to create a unified representation of the monitored system. Using fusion algorithms, such as Kalman filters, particle filters, or Bayesian inference, can help in reconciling conflicting or complementary data from different sensors [45].
Dealing with sensor errors, outliers, or missing data is crucial for maintaining data integrity during fusion. Implementing outlier detection algorithms, interpolation methods for missing values, and error correction techniques can improve the robustness of the data fusion process [46].
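The sketch below demonstrates both steps on a toy stream: a robust z-score test flags the outlier, and linear interpolation repairs the resulting gaps. The threshold of 5 is an arbitrary illustrative choice.

```python
# A minimal sketch of two integrity steps: robust z-score outlier rejection
# and linear interpolation over missing (NaN) samples.
import numpy as np

x = np.array([20.1, 20.3, 20.2, 55.0, 20.4, np.nan, 20.6, 20.5])

# Outlier detection: flag points far from the median (robust z-score via MAD)
median = np.nanmedian(x)
mad = np.nanmedian(np.abs(x - median)) or 1e-9   # guard against zero MAD
outliers = np.abs(x - median) / mad > 5
x_clean = np.where(outliers, np.nan, x)          # drop the flagged 55.0

# Missing-value repair: linearly interpolate over all NaN gaps
idx = np.arange(x_clean.size)
good = ~np.isnan(x_clean)
x_repaired = np.interp(idx, idx[good], x_clean[good])
print(x_repaired.round(2))
```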
Incorporating redundancy in sensor data can enhance the reliability of the fusion process by cross‐validating measurements from different sensors. Redundant sensors or sensor networks can provide backup data in case of sensor failures or discrepancies [47].
Standardized communication protocols and data formats facilitate seamless data exchange between sensors, data acquisition systems, and data processing units. Utilizing protocols like MQTT, OPC UA, or Modbus can streamline data transmission and synchronization efforts [48].
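As an example of one such protocol in practice, the sketch below publishes a timestamped reading over MQTT using the paho-mqtt client library; the broker address, topic, and sensor identifier are placeholders that a real deployment would replace.

```python
# A minimal sketch of publishing a timestamped sensor reading over MQTT.
import json
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"      # hypothetical broker address
TOPIC = "plant/line1/temperature"  # hypothetical topic

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
client.connect(BROKER, 1883, keepalive=60)

payload = json.dumps({
    "sensor_id": "temp-01",        # hypothetical sensor identifier
    "timestamp": time.time(),
    "value_c": 24.7,
})
client.publish(TOPIC, payload, qos=1)  # QoS 1: at-least-once delivery
client.disconnect()
```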
Optimizing data fusion algorithms for efficiency and scalability is essential for processing large volumes of sensor data in real time. Implementing parallel processing techniques, distributed computing architectures, or edge computing solutions can enhance computational performance [49].
Maintaining the security and privacy of sensor data during fusion is critical to prevent unauthorized access, data breaches, or tampering. Implementing encryption, access controls, and data anonymization techniques can safeguard sensitive sensor data [50].
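As one concrete safeguard, the sketch below encrypts a sensor payload with the Python cryptography package's Fernet recipe (symmetric, authenticated encryption). Key management, including generation, storage, and rotation, is deliberately out of scope and would need its own design in a real system.

```python
# A minimal sketch of symmetric encryption for a sensor payload using the
# "cryptography" package's Fernet recipe (AES-based authenticated encryption).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, provisioned securely, not per-run
cipher = Fernet(key)

reading = b'{"sensor_id": "hr-07", "bpm": 72}'   # hypothetical payload
token = cipher.encrypt(reading)                   # safe to transmit or store

assert cipher.decrypt(token) == reading           # round-trip check
print(token[:32], b"...")
```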
Integrating data fusion processes with the overall system architecture, control systems, visualization tools, or decision‐making frameworks is essential for leveraging fused sensor data effectively [51]. Ensuring interoperability and compatibility with existing systems is key to maximizing the impact of data fusion efforts.
The use of multi‐modal sensing is rapidly evolving, and sensors are now considered an integral part of performing routine tasks. With the addition of massive machine‐type communication (mMTC) as a use case in 5G/6G, the popularity and feasibility of using wireless sensors in the IoT domain have attracted considerable interest, and it is predicted that the number of connected IoT devices will reach 125 billion by 2030 [52].
Multi‐modal intelligent sensing has various application scenarios across different industries and domains, making human life more secure, increasing industrial production efficiency, and helping mitigate global warming. Some of the key application scenarios of multi‐modal intelligent sensing are discussed below.
The medical sector is experiencing a transformation thanks to multi‐modal intelligent sensing, which fosters a holistic approach to healthcare. By leveraging data from diverse sources (implanted, wearable, and environment‐embedded sensors), this technology empowers patients to manage their health, improves treatment outcomes, and facilitates better decision‐making by healthcare professionals [53]. It personalizes healthcare, fostering patient engagement and well‐being. Data gathered from various sensors (vital signs, imaging, and activity trackers) provides a comprehensive picture of a patient's health. Healthcare providers can track patients' health remotely in real time using wearable devices, smart sensors, and mobile apps, allowing early detection of potential issues and timely interventions. Combining data from different sources (imaging, genetics, and biomarkers) improves the accuracy of disease detection and diagnosis [54], leading to earlier interventions and personalized treatment plans.
Multi‐modal sensing allows the analysis of vast datasets spanning genetics, lifestyle, and environment; this personalized approach tailors interventions and preventive strategies to each patient's unique needs. Surgeons benefit from real‐time feedback drawn from surgical instruments, imaging devices, and physiological sensors, resulting in enhanced surgical precision and improved patient outcomes [55]. Motion sensors, wearables, and biofeedback systems make it possible to track patient progress and movement patterns and to optimize treatment plans, leading to improved recovery and rehabilitation. Smart pill bottles, wearables, and digital health platforms track medication adherence and outcomes, helping healthcare providers identify and address adherence challenges proactively [56].
In orthopedics, the physical condition of bones can be monitored in real time using implantable sensors [57], while body‐worn sensors can be used to treat cardiovascular patients effectively [58]. It is estimated that three million elderly people are brought to Accident & Emergency (A&E) departments for fall‐related injuries every year in the United States [59]. Environment‐embedded WSs also play a key role in monitoring patient health; such sensors can be employed to monitor the health of the elderly in both mobile and static conditions [60].
Multi‐modal intelligent sensing is transforming the automotive and transportation landscape. It contributes to the development of smarter, safer, and more efficient transportation systems, leading to a transformed transportation experience for both passengers and operators [61]. By fusing data from various sensors (cameras, LiDAR, and radar), vehicles gain a 360° view of their surroundings, leading to enhanced safety [62]. Advanced driver assistance systems (ADAS) utilize these sensors to provide real‐time information, enabling features like lane departure warning and automatic emergency braking, ultimately reducing accidents [63].
Multi‐modal sensing is the backbone of self‐driving cars. By combining LiDAR, camera, radar, and GPS data, autonomous vehicles navigate complex environments and make crucial decisions in real time [64]. Sensor data from vehicles and infrastructure helps optimize traffic flow. Intelligent traffic lights and vehicle‐to‐infrastructure communication systems reduce congestion, improve travel times, and enhance road safety. Sensor integration allows tracking of fuel consumption, engine health, and tire pressure, ensuring optimal vehicle operation and timely maintenance.
Sensors personalize the travel experience by adjusting climate control, lighting, and entertainment based on passenger preferences [65]. Air quality, noise, and emissions can be monitored using multi‐modal sensing. This data empowers policymakers to make informed decisions for cleaner and more sustainable transportation solutions.
Multi‐modal intelligent sensing allows for comprehensive environmental monitoring, data analysis, and implementation of effective conservation measures. It empowers researchers, policymakers, and conservationists to address environmental challenges, safeguard ecosystems, and promote sustainable development for future generations [66, 67]. Sensor networks, weather stations, and satellite imagery track air pollution, identify emission sources, and assess their impact, as demonstrated by the Breathe London project [68]. Similarly, sensors, acoustic devices, and satellite observations monitor water bodies, detect contaminants, and assess aquatic health. Data on temperature, pH, and oxygen levels helps identify pollution sources and protect water resources.
Camera traps, acoustic sensors, and satellite tracking devices monitor wildlife populations, endangered species, and migration patterns. This data helps prioritize conservation efforts, establish protected areas, and prevent extinction [69]. Moreover, soil sensors, hyperspectral imaging, and drone surveys assess soil properties, moisture levels, and degradation. Data on composition, nutrients, and erosion helps develop sustainable agricultural practices and improve soil fertility [70].
The impact of deforestation has also raised the need to measure the growth of trees and plants in real time, as they directly affect the level of oxygen in the environment. A fast bacteria‐detection method developed for olive trees can be adapted to other kinds of trees as well [71]. The company Nature 4.0 is dedicated to developing innovative IoT products that protect the environment from various adverse effects [72]. One of their products, the TreeTalker (TT+), measures water consumption, biomass growth, and other parameters of a tree. Unmanned aerial vehicles (UAVs), satellite imaging, and ground sensors monitor forests, detect deforestation, and assess fire risks. Real‐time data on forest cover, tree health, and fire hotspots enables forest conservation, wildfire prevention, and sustainable forest management [73]. Climate models, satellite observations, and ground sensors analyze trends in temperature, precipitation, sea level, and greenhouse gas emissions; this data informs climate policy decisions, helps mitigate climate risks, and promotes climate‐resilient practices.
The vision of cities that are efficient, sustainable, and responsive to residents' needs is becoming a reality with multi‐modal intelligent sensing technologies (Figure 1.6). These technologies create resilient urban environments with optimized resource management and a higher quality of life for residents. Traffic sensors, GPS trackers, and smart cameras optimize traffic flow, reduce congestion, and improve public transit. Real‐time data on vehicles, pedestrians, and road conditions empowers intelligent transportation systems for efficient urban mobility [74, 75].
Smart meters, energy monitoring devices, and renewable energy sources optimize energy use and promote sustainability. Data on consumption, peak demand, and renewable generation allows for implementing smart grids, reducing energy costs, and minimizing a city's carbon footprint [76]. A continuous wireless sensor (WS) monitoring setup can reduce energy consumption by up to 18% compared to traditional manual periodic check‐ups [75]. Water quality sensors, leak detection systems, and irrigation controllers ensure efficient water resource management. Data on water quality, consumption, and distribution networks helps identify leaks, prevent wastage, and sustain water resources for city residents [77, 78]. Waste sensors, smart bins, and sensor‐equipped collection vehicles optimize waste collection and promote recycling. Data on waste volumes, collection frequencies, and recycling rates allows for optimizing collection schedules, reducing costs, and improving waste disposal practices [79].
Figure 1.6 Realization of smart city concept through multi‐modal sensing.
Building automation systems, occupancy sensors, and energy management platforms optimize energy use and improve occupant comfort. Data on temperature, lighting, and occupancy patterns enables smart building solutions that reduce energy consumption and enhance indoor environment quality [75, 80]. Sensor data, satellite imagery, and weather monitoring devices improve disaster preparedness and response. This real‐time information helps track disasters, assess damage, and coordinate rescue efforts, mitigating the impact of natural disasters on urban communities [81].
In industrial settings, multi‐modal intelligent sensing is used for condition monitoring, predictive maintenance, and process optimization. By integrating data from sensors measuring temperature, vibration, pressure, and other parameters, manufacturers can identify equipment failures, optimize production processes, and minimize downtime [82].