A guide to intelligent decision and pervasive computing paradigms for healthcare analytics systems, with a focus on the use of bio-sensors.

Intelligent Pervasive Computing Systems for Smarter Healthcare describes the innovations in healthcare made possible by computing through bio-sensors. The pervasive computing paradigm offers tremendous advantages in diversified areas of healthcare research and technology. The authors, noted experts in the field, present the state-of-the-art intelligence paradigm that enables optimization of medical assessment for a healthy, authentic, safer, and more productive environment. Today's computers are integrated through bio-sensors and generate huge amounts of information, enhancing our ability to process enormous bio-informatics data that can be transformed into meaningful medical knowledge and help with diagnosis, monitoring and tracking of health issues, clinical decision making, early detection and prevention of infectious disease, and rapid analysis of health hazards. The text examines a wealth of topics such as the design and development of pervasive healthcare technologies, data modeling and information management, wearable biosensors and their systems, and more.

This important resource:

* Explores the recent trends and developments in computing through bio-sensors and its technological applications
* Contains a review of biosensors and sensor systems and networks for mobile health monitoring
* Offers readers an opportunity to examine the concepts and future outlook of intelligence in healthcare systems incorporating biosensor applications
* Includes information on privacy and security issues in wireless body area networks for remote healthcare monitoring

Written for scientists, application developers, and professionals in related fields, Intelligent Pervasive Computing Systems for Smarter Healthcare is a guide to the most recent developments in intelligent computer systems that are applicable to the healthcare industry.
Page count: 694
Year of publication: 2019
Arun Kumar Sangaiah and S. P. Shantharajah
VIT University, Vellore, India
Padma Theagarajan
Sona College of Technology, Salem, India
This edition first published 2019
© 2019 John Wiley & Sons, Inc.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Arun Kumar Sangaiah, S. P. Shantharajah, and Padma Theagarajan to be identified as the authors of this work has been asserted in accordance with law.
Registered Office(s)
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
Editorial Office
111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging‐in‐Publication Data
Names: Sangaiah, Arun Kumar, 1981‐ editor. | Shantharajah, S. P., 1972‐
editor. | Theagarajan, Padma, 1968‐ editor.
Title: Intelligent pervasive computing systems for smarter healthcare / Arun
Kumar Sangaiah, VIT University, Vellore, India, S.P. Shantharajah, VIT
University, Vellore, India, Padma Theagarajan, Sona College of Technology,
Salem, India.
Description: First edition. | Hoboken, NJ : John Wiley & Sons, Inc., [2019] |
Includes bibliographical references and index. |
Identifiers: LCCN 2019009752 (print) | LCCN 2019012900 (ebook) | ISBN
9781119438991 (Adobe PDF) | ISBN 9781119439011 (ePub) | ISBN 9781119438960
(hardcover)
Subjects: LCSH: Medical care–Data processing. | Ubiquitous computing. |
Medical electronics.
Classification: LCC R859.7.U27 (ebook) | LCC R859.7.U27 I58 2019 (print) |
DDC 610.285–dc23
LC record available at https://lccn.loc.gov/2019009752
Cover design by Wiley
Cover image: © ktsimage/Getty Images
Amudha Thangavel
Department of Computer Applications
Bharathiar University
Coimbatore
India
Ponnuraman Balakrishnan
Department of Analytics, SCOPE
VIT Deemed University
Vellore
India
Angelo Brayner
Computing Science Department
Federal University of Ceará
Fortaleza
Brazil
Chandrasekaran Vellankoil Marappan
School of Advanced Sciences
VIT University
Vellore
India
Habiba Chaoui
Systems Engineering Laboratory
National School of Applied Sciences
Ibn Tofail University
Kenitra
Morocco
Ashraf Darwish
Faculty of Science
Helwan University
Cairo
Egypt
Deepa Ganesan
School of Advanced Sciences
VIT University
Vellore
India
Dinakaran Karunakaran
Department of Information Technology
Saveetha Engineering College
Chennai
India
Sumathi Doraikannan
CSE
Malla Reddy Engineering College
JNTUH
Hyderabad
India
Younès El Bouzekri El Idrissi
Systems Engineering Laboratory
National School of Applied Sciences
Ibn Tofail University
Kenitra
Morocco
Fatna Elmendili
Systems Engineering Laboratory
National School of Applied Sciences
Ibn Tofail University
Kenitra
Morocco
Gowthambabu Karthikeyan
School of Computer Science and Engineering
VIT University
Vellore
India
Aboul Ella Hassanien
Faculty of Computers and Information
Cairo University
Cairo
Egypt
Jothilakshmi Rajendiran
Department of Physics
Veltech University
Chennai
India
Ramanathan Lakshmanan
School of Computer Science and Engineering
Vellore Institute of Technology
Vellore
India
João Paulo Madeiro
Institute for Engineering and Sustainable Development
University for the International Integration of the Afro‐Brazilian Lusophony
Redenção
Brazil
Mary Mekala
School of Information Technology and Engineering
VIT University
Vellore
India
José Maria Monteiro
Computing Science Department
Federal University of Ceará
Fortaleza
Brazil
Rui Silva Moreira
ISUS unit at FCT
University Fernando Pessoa
Porto, Portugal
INESC TEC and LIACC at FEUP
University of Porto
Porto
Portugal
Jayashree Nair
AIMS Institutes
Bangalore
India
Patitha Parameswaran
Department of Computer Technology
MIT Campus, Anna University
Chennai
India
Padma T.
Sona College of Technology
Salem
India
Ricky Parmar
Dell EMC
Bengaluru
India
Deepalakshmi Perumalsamy
Department of CSE
Kalasalingam Academy of Research and Education
Krishnankoil
India
Praba Bashyam
Department of Mathematics
SSN College of Engineering
Affiliated to Anna University
Chennai
India
Swarnalatha Purushotham
School of Computer Science and Engineering
Vellore Institute of Technology
Vellore
India
Pethru Raj
Site Reliability Engineering (SRE) Division
Reliance Jio Infocomm. Ltd. (RJIL)
Bangalore
India
Rajakumar Krishnan
School of Computer Science and Engineering
VIT
Vellore
India
Rajeswari Kurubarahalli Chinnasamy
Department of Computer Science
Sona College of Technology
Salem
India
Rajeswari Rajendran
Department of Computer Applications
Bharathiar University
Coimbatore
India
Nersisson Ruban
School of Electrical Engineering
VIT University
Vellore
India
Sangeetha Archunan
Department of Computer Applications
Bharathiar University
Coimbatore
India
Sasikala Ramasamy
School of Computer Science and Engineering
VIT University
Vellore
India
Gehad Ismail Sayed
Faculty of Computers and Information
Cairo University
Cairo
Egypt
Sathiyabhama Balasubramaniam
Department of Computer Science
Sona College of Technology
Salem
India
Prabha Selvaraj
CSE
Malla Reddy Institute of Engineering and Technology
JNTUH
Secunderabad
India
Kannan Shanmugam
Department of Computer Science and Engineering
Malla Reddy Engineering College
Hyderabad
India
Rajalakshmi Shenbaga Moorthy
Department of Computer Science and Engineering
St. Joseph's Institute of Technology
Anna University
Chennai
India
Christophe Soares
ISUS unit at FCT
University Fernando Pessoa
Porto
Portugal
Pedro Sobral
ISUS unit at FCT
University Fernando Pessoa
Porto
Portugal
Rajkumar Soundrapandiyan
School of Computer Science and Engineering
Vellore Institute of Technology
Vellore
India
Karthik Subburathinam
Department of Computer Science and Engineering
SNS College of Technology
Chennai
India
Suresh Kumar Nagarajan
School of Computer Science and Engineering
VIT University
Vellore
India
José Torres
ISUS unit at FCT
University Fernando Pessoa
Porto
Portugal
Valarmathie Palanisamy
Department of Computer Science and Engineering
Saveetha Engineering College
Anna University
Chennai
India
Navya Venkatamari
Department of ECE
Kalasalingam Academy of Research and Education
Krishnankoil
India
Krishnamoorthy Venkatesan
Department of Mathematics
College of Natural Sciences
Arba Minch University
Arba Minch
Ethiopia
Veeramuthu Venkatesh
School of Computing
SASTRA Deemed University
Thanjavur
India
Vishnu Priya
Department of Computer Science and Engineering
P.M.R. Engineering College
Anna University
Chennai
India
Rui Silva Moreira (1,2), José Torres (1), Pedro Sobral (1), and Christophe Soares (1)
(1) ISUS unit at FCT, University Fernando Pessoa, Porto, Portugal
(2) INESC TEC and LIACC at FEUP, University of Porto, Porto, Portugal
The concept of ubiquitous computing (ubicomp), coined by Mark Weiser in 1991, focused on having computation in any regular "smart" object (Weiser, 1991). The key idea of ubicomp (aka pervasive computing (Satyanarayanan, 2001)) is the use of embedded technology everywhere, disappearing into the background, i.e. not requiring any extra cognitive effort to use such "augmented" objects. Later, in 1999, Kevin Ashton devised the term Internet of things (IoT), which envisioned the interconnection of any physical object through the Internet (Ashton, 2009). This concept opens the door for sensing massive amounts of data into cloud databases (cf. big data) and exposing general environment contexts to a multitude of analytic and automation tools. The ability to reason about human context environments also allows the orchestration of such environments by pushing actuation back over physical objects. Both ubicomp and IoT propose basically similar seminal ideas and are considered synonyms of physical computing.
It is clear that ubicomp or IoT technologies can tackle the growing need for ambient assisted living (AAL) environments, which is mainly driven by the aging of the world population. Aging is usually associated with chronic or disabling conditions, such as memory loss, disorientation and the hazard of getting lost, coping with polymedication, difficulties in adhering to clinical treatments, unintended erroneous medication intake, adherence to therapeutic exercises, etc. These problems pose several difficulties in executing even simple daily life tasks. However, some of these issues may be addressed and mitigated by the integration of ubicomp home support systems specially developed and tailored for the elderly or those with special needs, thus promoting and enabling outpatient and home healthcare. Therefore, this work advocates that several key capabilities must be provided so that AAL environments may be automated from independently developed commercial off-the-shelf (COTS) systems. These key capabilities are as follows:
Processing and sensing issues: Ubicomp systems sense and explore any knowledge about the context they operate in. Context refers to information that may be used to characterize the situation of an entity (Abowd et al., 1999) and may cover user, physical, computational, and time context (Chen and Kotz, 2000). Sensors are fundamental for collecting data from any environment; however, raw sensor data is most of the time not enough to provide useful high-level context information. Therefore, raw data must be processed into higher-level information constructs, for example, using the signal of Bluetooth Low Energy (BLE) beacons to estimate distances and calculate the location of devices, or using a three-axis accelerometer time series to estimate the posture of a person (e.g. fall, run, walk, stand, lie, sit).
Integration and management issues: Deploying COTS systems in the same AAL household raises two fundamental concerns. (i) The standard interconnection and orchestration of devices for enabling seamless interactions and automation. The integration of COTS systems can be achieved through local or edge middleware frameworks such as openHAB or Simple Network Management Protocol (SNMP)-like tools. Another important trend is integration through cloud services that glue together heterogeneous deployed systems. (ii) The secure and safe integration of COTS systems that are developed by independent vendors without integration concerns and thus were not planned to be deployed together. These systems must coexist in the same ecosystem without causing crossed malfunctions. For example, a drug dispenser (DD) that periodically issues a sound alarm until the user takes a prescribed medicine could suffer a functional interference from an entertainment system simultaneously playing a movie or TV series. When two or more systems compete for a shared medium (e.g. the user's attention), a behavior interference may occur (e.g. preventing the user from taking medication).
Communication and coordination issues: Home healthcare COTS systems play an important role in the deployment of AAL smart spaces. However, it is important to guarantee agile and affordable deployment mechanisms that do not depend on preinstalled communication infrastructures. Wireless mesh technologies are therefore fundamental to enable the growth and spread of such ubiquitous systems. Most smart spaces use body and environmental sensor motes deployed together without fixed infrastructures. These modules communicate through heterogeneous wireless technologies, thus typically requiring bridging gateways equipped with multiple shields [e.g. ZigBee, Bluetooth (BT), Wi-Fi, General Packet Radio Service (GPRS)], enabling simple and adaptable wireless topologies. These agile solutions allow easy data collection and storage for further analytical treatment and consultation by healthcare providers, and also control interactions and trigger real-time alerts in dangerous situations.
Intelligence and reasoning issues: The representation of context information is fundamental in AAL systems. Knowledge representation models and tools are needed to reason about the premises and conditions concerning the user and his or her surroundings. Such tools typically use deductive or inductive processes, and this section proposes the use and combination of both types of reasoning. The former, deductive reasoning, uses the Semantic Web Rule Language (SWRL), which combines OWL and the Rule Markup Language (RuleML) to allow the definition of Horn-like rules. These rules specify a set of state conditions related by Boolean operators that allow the inference of other states or terms. The latter, inductive reasoning, uses machine learning (ML) algorithms that typically capture/learn patterns from sets of observations (training sets) and then generalize those patterns to classify new observations.
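As a minimal illustration of the deductive side, a Horn-like rule of the form Person(?p) AND hasAge(?p, ?a) AND ?a > 65 -> Elderly(?p) can be emulated with plain forward chaining; the predicate and individual names below are hypothetical, chosen only to show the style of inference, not taken from any project ontology:

```python
# Minimal forward-chaining sketch of a Horn-like rule (deductive reasoning).
# Facts are tuples; the predicates and individuals are illustrative only.
facts = {("Person", "anna"), ("Person", "bob"),
         ("hasAge", "anna", 82), ("hasAge", "bob", 40)}

def rule_elderly(facts):
    """Person(?p) AND hasAge(?p, ?a) AND ?a > 65 -> Elderly(?p)."""
    derived = set()
    for f in facts:
        # hasAge facts are 3-tuples, so f[2] is safe to read here.
        if f[0] == "hasAge" and ("Person", f[1]) in facts and f[2] > 65:
            derived.add(("Elderly", f[1]))
    return derived

new = rule_elderly(facts)
facts |= new          # add the inferred states to the knowledge base
print(sorted(new))    # [('Elderly', 'anna')]
```

An ML classifier would play the complementary inductive role: instead of hand-written conditions, its decision boundary is learned from labeled observations.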
The remaining sections revisit each of these key research issues. For every topic, the respective section details one or two example project outcomes related to the research solutions typically applied to home healthcare use cases. Section 1.2 addresses the aspects related to processing raw sensor data signals to filter noise and build higher-level useful information about the context of home healthcare scenarios and user profiles (e.g. activity, location). Section 1.3 focuses on the safe integration and management of heterogeneous ubicomp systems deployed together in the same household without interfering with each other. Section 1.4 is concerned with the communication and networking aspects faced when deploying ubicomp systems on premises without available connection infrastructures. Section 1.5 centers the analysis on different approaches for addressing reasoning in AAL scenarios. Each of these sections presents real use case solutions and examples applied to home healthcare scenarios. Finally, Section 1.6 summarizes the major contributions of this chapter by highlighting the main results of each section.
Working with hardware sensors, and with physical computing in general, is not an easy task. Most of the time sensors require careful calibration to the environment in order to adjust their readings to reality. All sensor readings are also subject to noise. In many cases they require intensive signal processing techniques and statistical treatment in order to accurately represent the environment. In the following sections two application scenarios in the context of IoT are presented. The first application, presented in Section 1.2.1, deals with ambient monitoring and user activity detection for AAL scenarios (Goncalves et al., 2009). The second application, presented in Section 1.2.2, takes advantage of BLE beacons to enable indoor location and tracking scenarios in smart spaces (Gomes et al., 2018). Both examples deal with the sensing issues stated before. Moreover, the algorithms used to gather and process useful information from raw sensor data are crucial for the performance of both applications.
As the average life span increases, people aged 65 or older are the fastest-growing population group in the world. According to the projections of a Eurostat report (Giannakouris, 2008), the median age of the European population will rise from 40.4 years in 2008 to 47.9 years in 2060. The share of people aged 65 years or over in the total population is projected to increase from 17.1% to 30.0%, and their number is projected to rise from 84.6 million in 2008 to 151.5 million in 2060. The healthcare systems of developed countries are under growing pressure and will not be efficient enough to provide reliable health treatment for this aging population (Venkatasubramanian et al., 2005).
A smart home-care system can hold the essential elements of diagnostics used in medical facilities. It extends healthcare from traditional clinic or hospital settings to the patient's home. A smart home-care system benefits healthcare providers and their patients, allowing 24/7 physical monitoring, reducing labor costs, and increasing efficiency. Wearable sensors can detect even small changes in vital signs that humans might overlook (Stankovic et al., 2005).
There are some projects for remote medical monitoring (Jurik and Weaver, 2008). The following are some of the most relevant ones:
CodeBlue: It is a wireless sensor network intended to assist the triage process for monitoring victims in emergencies and disaster scenarios (Welsh et al., 2004).
AMON: It encapsulates many sensors into one wrist-worn device that is connected directly to a telemedicine center via a GSM network, allowing direct contact with the patient (Anliker et al., 2004).
AlarmNet: It continuously monitors assisted-living and independent-living residents. The system integrates information from sensors in the living areas as well as body sensors (Wood et al., 2006).
The main goal of this project is to develop an activity monitoring system with the following requirements: design simplicity, reliability, low cost, and as little user interaction as possible. The system has two elements: a corporal device and a wireless gateway. The corporal device detects the patient's vital signs as well as his or her activity. All the data gathered from the sensors is sent over a wireless point-to-point link to the gateway. The data can then be sent to a local web service or to the cloud.
All system components are built using low‐cost hardware. Size and shape of the corporal device were considered to improve the device usability. Sensor raw data is processed in the device and then transmitted to the gateway. For the device data processing and control, an Arduino Wee was used. The wireless link between the device and the gateway is established using a point‐to‐point wireless link configured on MaxStream XBee pro radios. Temperature sensing is done using a DALLAS DS18B20‐PAR 1‐wire Parasite‐Power digital thermometer. For patient activity monitoring the system uses a Freescale Semiconductor MMA7260QT 1.5g‐6g Three Axis Low‐g Micromachined Accelerometer.
The corporal device should be placed above the patient's right hip, pointing up, because this is the location on the human body with the fewest position changes during activity. The accelerometer measures the acceleration along three axes at that point. The digital temperature sensor measures skin temperature, so for a more reliable body temperature, the sensor must be placed in the patient's armpit. The corporal device transmits the current patient activity and body temperature (Figure 1.1).
Figure 1.1 Corporal device. (a) XBee pro radio, (b) accelerometer, (c) microcontroller, and (d) digital temperature sensor.
After a series of tests, in which the volunteer subjects performed their daily routine activities, the following main activities could be identified from the accelerometer: standing, sitting, walking, running, lying down (sleeping), and falling.
It is also possible to determine whether the patient is sleeping on the back, on the side, or on the stomach. Although this is not a particularly important distinction for monitoring elderly patients, it can be important for monitoring an infant's sleeping position. For example, an infant sleeping on his stomach is up to 12.9 times more likely to die from sudden infant death syndrome (SIDS); hence, placing infants to sleep on their backs reduces the incidence of SIDS by 40% (Baker et al., 2007).
Detecting a fall is a two-class decision problem, with positive samples for falls and negative samples for non-falls. While the positive samples have much in common, negative samples are extremely diverse. So, to train a classifier correctly, a lot of negative samples are required, and even so a real fall could be classified into a doubtful dataset (Zhang et al., 2006). Processing all this data takes a lot of processing power and drains the battery. These requirements are not suitable for this system, so the goal was to find an algorithm that accurately classifies activities in real time without heavy hardware demands. In Section 1.5 a different approach to this system is presented, in which several ML techniques were tested to detect user activity from the accelerometer data.
To determine the activity pattern values, the volunteer subjects performed their activities, and we recorded the raw accelerometer data into the database. The accelerometer reported one value for each axis every 100 ms. After several runs, 200 values for each activity were selected. The analysis of the graphics generated from the stored data shows that single values have no meaning; sets of values, however, can be used to determine activity patterns. Nevertheless, different activities may sometimes produce similar sets of values, so it was important to find another characteristic that, combined with the set of values, could identify a given activity without any doubt. Further graphical analysis showed that the value of each axis could also determine an activity. Combining both parameters, the current activity can be accurately determined.
The next challenge was to decide how large the set of values should be; if it is too small, it will not allow a pattern to be identified, but if it is too large, the risk of overlapping different activity patterns increases. So a set of values cannot be longer than the fastest occurrence of an activity. In fact, the activity that takes the least time to occur is a fall (about one second). Based on this, we selected a set of 10 values. Figures 1.2 and 1.3 show a sample of the graphics for walking and falling; the fall occurs only between readings 33 and 43; after that the volunteer subject lies down on his stomach.
Figure 1.2 Walking accelerometer raw data.
Figure 1.3 Fall accelerometer raw data.
A formula was needed to transform each set of values into a single value without losing its meaning. The statistical variance was the best way to do it, but calculating the variance for the set of values of each axis would give three distinct values. Hence, the average of these three values gives the value for activity indicator (VAI). The variance and VAI formulas are shown in Eqs. 1.1 and 1.2, where v_i is the i-th of the n samples on one axis and v̄ is their mean:

s²_axis = (1/n) · Σ_{i=1..n} (v_i − v̄)²  (1.1)

VAI = (s²_x + s²_y + s²_z) / 3  (1.2)
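The indicator is easy to sketch in Python: per-axis population variance over one 10-sample window, averaged across the three axes. The sample windows below are fabricated for illustration (512 stands in for an arbitrary raw accelerometer reading):

```python
from statistics import pvariance

def vai(window_x, window_y, window_z):
    """Value for activity indicator: the mean of the per-axis population
    variances over one window of accelerometer samples."""
    return (pvariance(window_x) + pvariance(window_y) + pvariance(window_z)) / 3

# 10 samples per axis = one second at one reading every 100 ms (illustrative data).
still = [512] * 10  # nearly constant readings -> variance close to zero
shaky = [400, 700, 350, 800, 420, 760, 390, 810, 430, 770]  # strong oscillation

print(vai(still, still, still))  # 0.0 -> low-movement range (stand, sit, lie)
print(vai(shaky, shaky, shaky))  # large value -> vigorous movement
```

In the real system this value would be computed on the corporal device for each sliding window and compared against the per-activity ranges of Table 1.1.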
The accelerometer values vary from subject to subject, and it is impossible, for example, to walk exactly the same way all the time. A list of range values for each activity was needed. Therefore using the VAI formula and the raw data previously acquired, Table 1.1 was built.
Table 1.1 Max and min VAI values for each activity.

Activity                  Min      Max
Stand, sit, lying down      0      120
Walk                      450    5 000
Run                    50 000        —
Fall                   15 000   48 000
Table 1.1 shows that the sit and lying down activities have the same VAI range. In these cases the axis values are used to classify the activity. With this definition most activities can be well identified.

Fall detection requires a different approach: sometimes running is misidentified as a fall, and a fall may be misidentified as running. To solve this problem, the algorithm was extended with an activity matrix that incorporates known situations in which a fall occurs; e.g. if someone is running and one second later is lying on his stomach, it is feasible to say he suffered a fall. To correlate past and present activities, a matrix was defined, composed of two past activities, the activity to analyze, two future activities, and the activity to be identified. In real time an array is filled with the activities identified from the accelerometer readings: two past activities, the "present" activity to analyze, and two "future" activities. The array is then compared with the matrix. If there is a match with a sequence of events in the matrix, the algorithm outputs the corresponding activity; otherwise, it decides based on the single window of data for the "present" activity. In practice, detection therefore has a delay of three seconds, because two more future readings are needed before making a decision.

After introducing the known-scenario matrix, fall detection improved from 30% to 60%, and adding more known cases to the matrix will improve detection further, although it may also cause some false positives. Running detection has an accuracy of 70%, a number that can likewise be improved by adding known running cases to the matrix. All other activities are detected with 95% accuracy. Moreover, when tested on an elderly subject, our main target group, activity detection improves to nearly 100% due to their reduced motor skills.
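The known-scenario matrix described above amounts to a lookup over five-activity sequences (two past, the one under analysis, two future). The sketch below shows the mechanism; the two entries are illustrative examples, not the project's actual matrix:

```python
# Sketch of the known-scenario activity matrix: each known sequence of
# (past, past, present, future, future) labels maps to a corrected activity.
# The entries below are illustrative, not the original project's matrix.
KNOWN_SCENARIOS = {
    ("run", "run", "run", "lie", "lie"): "fall",     # running, then suddenly lying down
    ("walk", "walk", "fall", "walk", "walk"): "run",  # isolated "fall" amid walking
}

def classify(window):
    """window: five raw labels centered on the 'present' activity under analysis."""
    assert len(window) == 5
    # Prefer a matrix match; otherwise fall back to the raw "present" label.
    return KNOWN_SCENARIOS.get(tuple(window), window[2])

print(classify(["run", "run", "run", "lie", "lie"]))       # fall
print(classify(["walk", "walk", "walk", "walk", "walk"]))  # walk
```

The three-second decision delay discussed in the text appears here as the need to wait for the two "future" labels before `classify` can run.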
Localization systems currently achieve high accuracy and fulfill various needs in our daily lives. The best-known location system is the global positioning system (GPS). This system uses signals sent by satellites; with the appropriate number of signals, a device with a GPS receiver can estimate its current location coordinates with high precision (Kaplan and Hegarty, 2006). A different challenge is estimating the location inside closed spaces, where systems that rely on satellite reception are unable to operate due to the lack of coverage. For this purpose there is the need to take advantage of other technologies (Namiot, 2015; Becvarik and Devetsikiotis, 2016). One example is BT technology, which, after version 4.0, includes a low-power version called Bluetooth Low Energy that can be used for indoor location systems. In this section we present an approach to indoor location estimation using BLE beacons. These are small, energy-efficient devices that transmit small data packets that can be interpreted by intelligent devices whenever they are within reach. BLE beacons are used in different contexts. A very common scenario is their use in advertising, where they present detailed information about a product on clients' mobile devices when they are nearby. These devices are also used to search for lost objects: once a beacon is attached to an object, it is possible through a mobile application to listen for the BLE signals and thus to know whether the object is in the vicinity of the user.
BLE technology is specified in Gupta (2016). The advertising mode on the BLE standard allows a very short message transmission in order to save energy. Those messages can be used for a device to detect the proximity of a specific location based on the received signal strength indicator (RSSI). The lightweight protocol stack allows integration with existing BT technology; long battery life, easy maintenance, and good signal coverage are very important factors to take into account. It is a recent technology whose characteristics make it very attractive for indoor location projects (Faragher and Harle, 2014, 2015).
BLE beacons are small portable devices that consist of a combination of electronic components inserted into a small circuit board. These devices use BLE technology to transmit data in the form of BT frames at predefined intervals. These signals include information about the beacon, allowing device identification, and can trigger certain predefined actions on the client device. Communication is unidirectional, from beacon to the receiving equipment. When a communication between the beacon and the mobile device is established, one of two possible actions can happen.
Passive: The information that the communication has been established is simply stored on the mobile device.
Active: The communication causes a particular application to be started on the mobile device, or acts on an activity in an application prepared to deal with the events and signals of the beacons.
The distance estimation layered architecture is shown in Figure 1.4. The data collection process is responsible for capturing the frames emitted by the beacon device. It is necessary to perform the calibration process for the client device because the RSSI readings are affected by its hardware and software configuration. In the RSSI signal processing layer, several algorithms and statistical methods are applied to the raw data values in order to improve the accuracy of the distance estimation. Finally, the distance calculation process is responsible for calculating the distance between the beacon device and the mobile equipment.
Figure 1.4 Distance estimation layered architecture.
The distance calculation layer receives real‐time RSSI values affected by signal reflections, obstacles, and even interference from other radio communication signals (Seybold, 2005). One might expect that mobile devices subject to the same conditions would capture the same number of beacon frames with close RSSI values. As Figure 1.5 shows, this is not the case. The RSSI readings for the BLE beacon frames on a mobile device are influenced by its radio chipset and configuration; for example, some vendors reduce the BLE beacon receiving rate in order to save battery. In this context, it was necessary to create a calibration process to adapt the distance estimation to each device.
Figure 1.5 RSSI readings for two LG phones under the same conditions.
Each time a frame is captured by the mobile device, the BT chipset measures the strength of the received signal returning an RSSI value. One of the other fields present in all captured frames is the txPower, which indicates the power used in the transmission of the beacon. The propagation of the radio‐frequency (RF) signal varies considerably depending on the distance, building materials, interference sources, etc.
The equipment calibration is performed as follows:
1. Calculate the mode of the RSSI values received for the n known distances (0.5, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, and 12 m from the beacon). For each of the n distances, m samples are received; during the calibration process we used fifty samples for each distance.
2. For each of the n distances, calculate the ratio r_i between the mode of the RSSI readings and the beacon's txPower.
3. Calculate the power regression where the independent values (x) are given by the r_i ratios and the dependent values (y) are given by the known (d_i) distances.
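The calibration steps above can be sketched as follows. This is an illustrative reconstruction, assuming the ratio is the mode of the RSSI readings divided by txPower and fitting the power model d = A · r^B by ordinary least squares in log-log space; all names are our own.

```python
import math
from statistics import mode

def fit_power_regression(samples, tx_power):
    """Fit d = A * r**B, where r = mode(RSSI at d) / txPower.

    `samples` maps each known distance (m) to its list of raw RSSI
    readings; `tx_power` is the beacon's advertised txPower field.
    Returns the fitted constants (A, B).
    """
    xs, ys = [], []
    for dist, rssi_values in samples.items():
        r = mode(rssi_values) / tx_power      # step 2: ratio per distance
        xs.append(math.log(r))
        ys.append(math.log(dist))
    # Step 3: least-squares line in log-log space,
    # log d = log A + B * log r
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

def estimate_distance(rssi, tx_power, a, b):
    """Distance estimate from a single RSSI reading (power model)."""
    return a * (rssi / tx_power) ** b
```

With the constants returned by the regression, `estimate_distance` maps subsequent RSSI readings on the calibrated device to distances.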
The results of applying the calibration process on a mobile phone (LG D855) are presented in Figure 1.6. The calibrated line estimates the distance with much better accuracy than the uncalibrated raw RSSI values.
Figure 1.6 Calibrated vs. uncalibrated distance calculation.
One way of calculating the distance estimate based on the RSSI of the received signal is to compute a power regression that takes into account the parameters fitted during the calibration process. The calculation is done through a nonlinear model represented by a power function:

d = A · r^B (1.4)

where d is the estimated distance, r is the ratio between the RSSI reading and the beacon's txPower, and A and B are constants.
The regression algorithm returns the appropriate constants A and B to be used in Eq. (1.4) for the equipment used in the calibration process. In order to filter the noise from the samples received in real time by the mobile device while the user moves around the smart space, an algorithm was developed that takes into account the maximum movement speed of a person (walking or running) and the time interval between consecutive beacons. Considering that a person moves at a maximum of 2 m/s, we can discard values that imply a distance greater than is possible for that speed in a given period of time. Equation (1.5) is used to determine the acceptable distance for the movement of a person inside a building:

d_acc = v_max · (t_i − t_{i−1}) (1.5)

where v_max = 2 m/s.
where t_i is the instant of time when the device received the current beacon and t_{i−1} is the time instant of the previously received beacon. If the difference between the distance determined by the RSSI value read at instant t_i and the distance determined by the RSSI value read at instant t_{i−1} is greater than the acceptable distance, that value is discarded, since it would indicate that the person had moved faster than 2 m/s, which is considered unlikely. When a value is discarded, the time instant t_{i−1} is not updated and is used in the following calculations. If the algorithm discards consecutive signals, the time window increases because t_{i−1} is not updated; this allows the algorithm to adapt to unforeseen circumstances very quickly and gives very satisfactory results in mobility scenarios. The discarded RSSI values are stored in a data structure, and a moving average with these values is calculated. The distance is calculated with the average of the last three filtered values. The algorithm is presented in Figure 1.7.
Figure 1.7 Moving average algorithm.
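The speed filter and moving average described above can be sketched as follows. This is our own reconstruction from the description (Figure 1.7 gives the authors' actual algorithm), assuming the 2 m/s speed cap and a three-sample averaging window; all names are illustrative.

```python
from collections import deque

MAX_SPEED = 2.0  # m/s, assumed maximum walking/running speed

class DistanceFilter:
    """Speed-gated moving-average filter over distance estimates.

    A new estimate is rejected if reaching it from the last accepted
    one would require moving faster than MAX_SPEED. On rejection the
    reference time is NOT advanced, so the acceptable window grows
    with each discard and the filter recovers quickly. The reported
    distance is the mean of the last three accepted estimates.
    """

    def __init__(self):
        self.last_dist = None
        self.last_time = None
        self.window = deque(maxlen=3)

    def update(self, dist, t):
        if self.last_dist is not None:
            acceptable = MAX_SPEED * (t - self.last_time)  # Eq. (1.5)
            if abs(dist - self.last_dist) > acceptable:
                return self.current()  # discard; keep last_time as-is
        self.last_dist, self.last_time = dist, t
        self.window.append(dist)
        return self.current()

    def current(self):
        return sum(self.window) / len(self.window) if self.window else None
```

For example, a sudden jump from 1 m to 10 m within one second (implying 9 m/s) is discarded, but if later readings keep indicating a larger distance, the growing time window eventually accepts them.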
The comparison of the moving average algorithm with the raw RSSI values is presented in Figure 1.8, where it is clear that the distance estimation becomes much more accurate and stable. Another important feature of the moving average algorithm is its adaptability to motion. In Figure 1.8 four reference distances (1, 3, 5, 8 m) are represented; the mobile device stays at the reference distance along each reference line and moves in the intervals between them. As can be seen, the graph shows the smoothing of the received signal, which is reflected in the calculated distance, as well as the adaptation of the algorithm to the movement.
Figure 1.8 Moving average algorithm results.
The smoothing of the received values aims to reduce the distance estimation error. As shown in Figure 1.9, the application of the moving average algorithm decreases the variance of the calculated distances. As we move away from the beacon, the instability of the received signal increases, and so does the error of the distance estimate. Smoothing the signal improves the calculation behavior, as shown in the graph.
Figure 1.9 Improving error variance with moving average algorithm.
Having a good distance prediction algorithm using BLE beacons will enable the development of many interesting applications in a smart space. There is ongoing work to apply these results in two areas: (i) an indoor navigation system in which a user, through a mobile application installed on the smartphone, can navigate inside a complex building (our test bed is the University Hospital, where we guide patients to the location of their medical appointments) and (ii) a high‐value asset tracking system in which the location of important equipment is always known (our test bed is also the University Hospital, tracking expensive portable medical equipment).
In the context of AAL systems, and particularly safe home healthcare environments (Chen and Kotz, 2000; Chen, 2005), the integration of heterogeneous ubiquitous systems is fundamental to easily monitoring and orchestrating deployed COTS systems. This work advocates two forms of integration: (i) via local or edge open hubs (e.g. openHAB or SNMP based) offering middleware frameworks for the automation of home environments (such solutions are agnostic with respect to the vendors and technologies of the supported COTS systems) and (ii) via cloud services that require connections between the home premises and the cloud infrastructure. This section reports our experiences in implementing home‐care scenarios based on both types of approach. Section 1.3.1 reports the use of several cloud services to manage the integration of a DD system together with a mobile application, both developed to improve the autonomy of outpatient users. Section 1.3.2 reports an SNMP‐based integration of COTS systems for dealing with behavior interferences among such systems. Both projects are clear demonstrators of the deployment and integration of smart and secure home healthcare environments.
The integration of ubiquitous and mobile systems through the cloud will be fundamental in a multiparty response to outpatient users. This section reports the use of two systems that have been developed and integrated through the cloud to mitigate the memory loss and disorientation problems associated with aging: (i) the DD, specially developed to be integrated into a cloud‐based service architecture, and (ii) the PerUbiAssis Android app, which integrates on the smartphone the management of the DD together with other monitoring features by taking advantage of a panoply of cloud services [cf. calendar, maps, time sync, mail, and Voice over IP (VoIP)]. There are several projects and products involving the use of ubiquitous technologies for home‐care support (Murtaza et al., 2006; Chaudhry et al., 2011; Lim et al., 2008; Fischer, 2008; Soares et al., 2012). These solutions focus on improving people's daily quality of life (e.g. cognitive stimulation, support alerts and task organization, mnemonics for the management of daily routines, etc.). In addition, there are market solutions (Choi et al., 2006; Henricksen et al., 2005) typically addressing drug dispensing and treatment adherence problems. However, most focus on specific functional aspects rather than on higher levels of integration with other systems and services. Hence, it is important to address integration concerns in the genesis of COTS systems to facilitate future interoperation and coordination. For example, the operation, scheduling, and alerts of the medication dispensary prototype are coordinated through Google Calendar and SMS services; the PerUbiAssis Android app uses the smartphone communication interfaces (cf. Wi‑Fi and BT) to interact with various cloud services (cf. Maps, Calendar, Contacts) and with the DD to coordinate the user's medicine intake and other daily activities and alerts (Figure 1.10).
Figure 1.10 (a) Cloud services for integration of COTS systems; (b) DD prototype.
The DD prototype was built on the Arduino Mega platform and uses a set of hardware modules (cf. GSM/GPRS and BT BlueSMiRF shields, LCD, etc.). The initialization of the dispensary takes about twenty seconds, the time required for GSM/GPRS registration and for syncing with the smartphone via BT or SMS. In the latter case, the DD sends an SMS to the smartphone and waits for the automatic response, syncing the date/time with the operator's messaging service. The calendar sync between the Arduino and the Android app may also use BT. The schedule information is stored in the Arduino EEPROM, and the current medicine intake information is shown on the LCD. When the next dose time is reached, the DD rotates the medicine cup and triggers a series of beeps. The user must press the button to confirm the dose and stop the warning sound. For missed intakes, the DD rotates back to a rest position and an alert SMS is sent to the caregiver.
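The dose cycle described above can be sketched as follows, with hypothetical callbacks standing in for the hardware actions (the real prototype runs on an Arduino; every name here is our own illustration):

```python
def dose_cycle(wait_confirm, rotate, beep, rotate_back, send_sms,
               timeout=300):
    """Sketch of one dose cycle of the dispenser described above.

    At dose time the cup is rotated into position and a warning sound
    is triggered; if the user does not confirm within `timeout`
    seconds, the cup returns to its rest position and an alert SMS is
    sent to the caregiver.
    """
    rotate()                      # present the medicine cup
    beep()                        # series of warning beeps
    if wait_confirm(timeout):     # user pressed the confirm button
        return "taken"
    rotate_back()                 # missed intake: back to rest position
    send_sms("Missed medication intake")
    return "missed"
```

On the actual device the callbacks would drive the stepper motor, buzzer, button, and GSM shield; the sketch only captures the control flow.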
The PerUbiAssis app uses the Amarino API [4] for BT linking with the DD Arduino, which uses the MeetAndroid library. The main functionalities focus on monitoring the elderly person's location and on the interaction with the DD: visualizing the current and relative locations of the elderly person and caregivers; showing the return path to home or to the caregiver's address, in manual or automatic mode; managing daily activities and sending SMS alerts when the security perimeter is exceeded or a missed task is detected; keeping a history of the elderly person's locations; and configuring the application, namely, the refresh rate of the map, the selection of caregivers, the definition of the security perimeter, the activation of alerts by SMS, etc.
During initialization the operating conditions are checked (cf. status of data links and GPS state). An SMS is then sent to the contacts marked as caregivers to check their availability to follow the elderly person. Each caregiver must respond with a predefined SMS to be considered in the list of available caregivers. One of the main functionalities of the application is the visualization of the elderly person's location in relation to the available caregivers.
