Computational Intelligence and Machine Learning Approaches in Biomedical Engineering and Health Care Systems explains the emerging technologies that currently drive computer-aided diagnosis, medical analysis and other electronic healthcare systems. Eleven chapters cover advances in biomedical engineering achieved through deep learning and soft-computing techniques. Readers gain a fresh perspective on how intelligent systems, supported by advanced computing algorithms, assist healthcare professionals and impact patient outcomes.
Key Features:
- Covers emerging technologies in biomedical engineering and healthcare that assist physicians in diagnosis, treatment, and surgical planning in a multidisciplinary context
- Provides examples of technical use cases for artificial intelligence, machine learning and deep learning in medicine, with examples of different algorithms
- Introduces readers to the concept of telemedicine and electronic healthcare systems
- Provides implementations of disease prediction models for different diseases including cardiovascular diseases, diabetes and Alzheimer’s disease
- Summarizes key information for learners
- Includes references for advanced readers
The book serves as an essential reference for academic readers, as well as computer science enthusiasts who want to familiarize themselves with the practical computing techniques in the field of biomedical engineering (with a focus on medical imaging) and medical informatics.
This is an agreement between you and Bentham Science Publishers Ltd. Please read this License Agreement carefully before using the book/echapter/ejournal (“Work”). Your use of the Work constitutes your agreement to the terms and conditions set forth in this License Agreement. If you do not agree to these terms and conditions then you should not use the Work.
Bentham Science Publishers agrees to grant you a non-exclusive, non-transferable limited license to use the Work subject to and in accordance with the following terms and conditions. This License Agreement is for non-library, personal use only. For a library / institutional / multi user license in respect of the Work, please contact: [email protected].
Bentham Science Publishers does not guarantee that the information in the Work is error-free, or warrant that it will meet your requirements or that access to the Work will be uninterrupted or error-free. The Work is provided "as is" without warranty of any kind, either express or implied or statutory, including, without limitation, implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the results and performance of the Work is assumed by you. No responsibility is assumed by Bentham Science Publishers, its staff, editors and/or authors for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, advertisements or ideas contained in the Work.
In no event will Bentham Science Publishers, its staff, editors and/or authors, be liable for any damages, including, without limitation, special, incidental and/or consequential damages and/or damages for lost data and/or profits arising out of (whether directly or indirectly) the use or inability to use the Work. The entire liability of Bentham Science Publishers shall be limited to the amount actually paid by you for the Work.
Bentham Science Publishers Pte. Ltd. 80 Robinson Road #02-00 Singapore 068898 Singapore Email: [email protected]
Biomedical engineering and healthcare systems must shift from treating computational intelligence as a future idea to viewing it as a practical tool that can be used immediately. If machine learning is to play a role in healthcare, a gradual approach is required. There is a great need to identify specific cases in which machine learning capabilities add value to a particular technical application; this should become a routine part of developing analytics, artificial intelligence, and modeling techniques in clinical settings. Additionally, computational intelligence models may simplify physicians' use of healthcare management systems by offering clinical decision support, automating imaging workflows, and incorporating telehealth technology. Health professionals are implementing machine intelligence-based frameworks and diagnostic tools to maximize the utility of the data gathered. Machine intelligence is essentially the potential of computers to mimic human cognition and rapidly extract information from diverse datasets, allowing healthcare professionals to navigate vast quantities of data and conduct complex statistical analyses more efficiently and accurately.
Recently, technologies such as remote patient monitoring through the Internet of Things have been in high demand. Machine learning is crucial in areas such as health information exchange, through electronic health record management and analytics. Emerging technologies like telemedicine and teleconsultation rely on machine learning for the effective treatment of patients. This evolution has prompted researchers and healthcare professionals to invest in application development to optimize healthcare delivery and improve patient care. Advances in smartphone technology for disease identification and classification through machine learning models have opened a new dimension in the healthcare industry.
Advanced machine learning technologies such as neural networks and deep learning models are extensively used in biomedical engineering and healthcare. Deep learning incorporates multiple hidden layers of comparable functions into a network and can gather insights from enormous volumes of healthcare records and diagnostic data. With lightweight neural network models, the current transformation toward customized healthcare delivery becomes feasible. Deep learning models have had a particular influence on healthcare applications that need robust frameworks able to learn from sparsely labeled samples and to deal with noisy and incorrect annotations. Such frameworks can adjust continuously to new information without losing prior knowledge.
In deep learning models, input is processed through a hierarchy of layers, with each successive layer building on the output of the preceding layer. Deep learning models may improve in accuracy as more data is processed, essentially by learning from past results to enhance their capacity to identify relationships and associations. Computational intelligence technologies improve efficiency and extract valuable insights from massive volumes of complex medical imaging data through effective feature extraction techniques.
Emerging ecosystems in the biomedical engineering and healthcare industry will need to strike the right balance between doctors' and patients' usage and perceptions of machine intelligence. Researchers should design and implement hybrid models in which machine learning acts as a supplement to, or accelerator of, medical knowledge, not as a substitute for physicians. While machine intelligence should be used to assist in diagnosis, treatment planning, and risk factor identification, doctors should retain ultimate responsibility for the patient's care. This hybrid approach will increase healthcare professionals' use of machine intelligence while also providing quantifiable and sustainable benefits in health outcomes.
Biomedical engineering and healthcare systems are rapidly developing through computational intelligence and machine learning-based techniques for smart medical diagnosis and analysis. Biomedical engineering disciplines have been greatly assisted by advances in deep learning and soft computing, which lead to improved accuracy in diagnosis, smart treatment, and therapy. Moreover, with multidisciplinary strategies in biomedical research, physicians can address critical health issues such as cardiac conditions, high blood pressure, stroke, and liver diseases. Medical illnesses can be treated effectively by recognizing them at much earlier stages using sophisticated medical imaging technologies, including X-ray, CT, MRI, and PET scans, together with Electronic Healthcare Records (EHRs). Computational intelligence models are extensively used in several phases of medical imaging and medical data analysis, including initial image rendering, image enhancement, extraction of complex hidden features, image segmentation, post-processing for the identification of abnormalities, and the incorporation of evolutionary computation. EHRs are analyzed through machine learning techniques, and patients are regularly monitored to support a healthier lifestyle that reduces the chances of future illness.
Computational intelligence is the study, design, prototyping, implementation, and development of computational paradigms inspired by biological and semantic principles. Intelligent computational models include advanced technologies such as neural networks, ensemble models, bio-inspired and evolutionary models, swarm intelligence, fuzzy technology, and data-centric, knowledge-driven models. Computational intelligence models have proven robust in precisely predicting future illness and in diagnosing disease at the earlier stages of an abnormality, assisting the physician in providing better treatment and guiding individuals toward living habits and lifestyles that are less likely to result in the predicted illness. Artificial intelligence and machine learning will keep improving in the healthcare sector, improving illness prevention and diagnosis, extracting deeper insight from data across many clinical trials, and assisting in the development of individualized medicines.
This book encompasses path-breaking and remarkable contributions in the field of computer-aided diagnosis and biomedical analytics that can benefit a wide range of biomedical engineering disciplines, from medical imaging to computational medicine, smart diagnosis, healthcare informatics, ambient assisted living, the management and monitoring of wearable medical devices, and effective systems engineering. The book covers a broad range of machine learning techniques and deep neural network-based methodologies in the healthcare domain. The next horizon in image analysis, multimodal imaging, assistive technology, telemedicine, and interdisciplinary applications is emphasized from a practical standpoint.
Medical image processing is critical in disease detection and prediction; for example, it is used to locate lesions and measure an organ's morphological structures. Currently, cardiac magnetic resonance imaging (CMRI) plays an essential role in cardiac motion tracking and in analyzing regional and global heart function with high accuracy and reproducibility. Cardiac MRI datasets are images acquired over the heart's cardiac cycle. These datasets require expert labeling to accurately recognize features and train neural networks to predict cardiac disease. Any erroneous prediction caused by image impairment will affect diagnostic decisions for patients. As a result, image preprocessing is applied, including enhancement tools such as filtering and denoising. This chapter introduces a denoising algorithm that uses a convolutional neural network (CNN) to delineate left ventricle (LV) contours (endocardium and epicardium borders) from MRI images. With only a small amount of training data from the EMIDEC database, this network performs well for MRI image denoising.
Medical image analysis in radiology is critical in the healthcare system for detecting and diagnosing disease at an early stage. Computed tomography (CT), ultrasound (US), positron emission tomography (PET), and magnetic resonance imaging (MRI) are the most commonly used medical imaging tools. Because of its advantages over other imaging techniques, MRI is widely used in clinical imaging. The MRI technique uses contrast to create diagnostic images by combining many pulse sequences. MRI also has distinct parameters such as a strong magnetic field, imaging planes, and dimensions [1]. Furthermore, cardiac MRI is one of the most effective techniques for estimating clinical parameters such as myocardial mass, ventricular volumes, stroke volume, and ejection fraction [2].
Medical images still suffer from noise due to imaging conditions and variability among patients, resulting in lower resolution. As a result, improving image quality is critical for disease detection and prediction, particularly in the early stages of cardiovascular disease. Two factors are important in medical imaging: image reconstruction, which is based on algorithms that create 2D and 3D images of an object, and image processing, which uses algorithms to improve image quality, remove noise, and detect regions of interest (ROIs) [3, 4].
Many methods for denoising MRI images have been proposed in the literature, including spatial domain approaches, statistical techniques, transform domain methods, and filtering techniques [5]. A conventional denoising method, the block-matching 3D (BM3D) filter, was introduced in [6]. The BM4D technique was developed by Foi et al. [7], who extended the BM3D filter to volumetric data. However, neither the BM3D nor the BM4D filter adapts well to varying image content [8]. Several novel learning methods, such as neural network-based techniques [9-11], have recently been proposed to overcome this limitation. With current developments in deep learning architecture, several models, including the convolutional denoising autoencoder (CNN-DAE) [12], residual learning (RL) with a deep convolutional neural network (DnCNN) [13], and the generative adversarial network (GAN), have shown promising results for medical image denoising.
A significant amount of work has been done in the medical image analysis field to denoise images. Denoising is essential in image processing to improve segmentation and classification accuracy. Rician and Gaussian noise are the most common types of noise in MRI images [14]. An efficient algorithm for denoising MR images aims to minimize noise while retaining the image's useful features. The most important consideration in processing a diagnostic image is edge preservation. As a result, denoising algorithms must be robust enough to reduce noise effectively while preserving edges.
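As a rough illustration of these two noise models, the MATLAB sketch below adds Gaussian noise and an approximate Rician corruption to a magnitude image; the stand-in image and noise parameters are illustrative assumptions, not values from this chapter.

```matlab
% Minimal sketch: simulating Gaussian and (approximate) Rician noise on a
% magnitude image. 'slice' is a stand-in 2-D image scaled to [0,1]; the
% variance and sigma values are illustrative only.
slice = im2double(imread('cameraman.tif'));   % placeholder image for demonstration

% Gaussian noise with zero mean and variance 0.01
gaussNoisy = imnoise(slice, 'gaussian', 0, 0.01);

% Approximate Rician noise: magnitude of the signal plus two independent
% Gaussian components (real and imaginary channels)
sigma = 0.05;
ricianNoisy = sqrt((slice + sigma*randn(size(slice))).^2 + ...
                   (sigma*randn(size(slice))).^2);

montage({slice, gaussNoisy, ricianNoisy});    % side-by-side visual comparison
```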
Various filters have been proposed for denoising MRI images. The median filter [15] is a non-linear low-pass filter that reduces unpredictable (impulsive) noise. The Wiener filter [16] minimizes the gap between the filtered and the desired output. The Gaussian filter [17] is used to remove blur noise, whereas the mean filter [18] replaces each pixel with the calculated mean of its neighboring pixels, reducing the intensity variation between pixels through a convolution process. The wavelet filter uses an energy compaction property to denoise the image. Filtering MRI images can also involve noise reduction, re-sampling, and interpolation. The choice of filter depends on the type and amount of noise.
Image filters fall into two types: linear and non-linear. Linear filters have been used to eliminate spatial noise, but they do not retain image textures [19]. Mean filters have been proposed for reducing Gaussian noise, but they over-smooth images with high noise. The Wiener filter was used to address this issue, but it still blurs sharp edges [20]. Non-linear filters such as the median and weighted median filters are used to eliminate such noise.
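The classical filters mentioned above could be applied as in the following minimal sketch, assuming the Image Processing Toolbox; gaussNoisy is the noisy grayscale image from the previous sketch, and the window sizes and sigma are illustrative choices.

```matlab
% Minimal sketch of the classical filters discussed above.
% 'gaussNoisy' is assumed to be a noisy grayscale image in [0,1].
medFiltered    = medfilt2(gaussNoisy, [3 3]);            % non-linear median filter
wienerFiltered = wiener2(gaussNoisy, [5 5]);             % adaptive Wiener filter
gaussFiltered  = imgaussfilt(gaussNoisy, 0.8);           % Gaussian smoothing (sigma = 0.8)
meanFiltered   = imfilter(gaussNoisy, fspecial('average', [3 3]));  % mean (averaging) filter

montage({gaussNoisy, medFiltered, wienerFiltered, gaussFiltered, meanFiltered});
```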
Convolutional neural networks (CNNs) have recently demonstrated remarkable performance in image processing tasks such as image denoising [21] and image super-resolution [22]. Zhang et al. [21] created a noise removal model called the fast and flexible denoising CNN (FFDNet) that can handle white Gaussian noise. Recently, Jiang et al. [23] used the VGG [24] network with ten CNN layers for MRI denoising. Tripathi and Bag [25] developed a CNN for MRI denoising in which the network employs an encoder-decoder structure to retain important image features while excluding unwanted ones. Furthermore, several methods for denoising medical images using CNNs have recently been developed, as shown in Table 1. The review paper [26] also summarizes numerous methods for using CNNs in image filtering.
This section introduces a network that is wide in terms of the number of convolutional layers and filters. The network processes a sequence of MR images using the CNN model; its design and implementation are explained in detail in the following subsections.
The CNN is the prominent deep learning architecture [36, 37] used for medical image processing. The input to the network is organized in a grid structure and then processed across layers that maintain the spatial relationships in the data. A CNN consists of several layers, such as convolutional, activation, and pooling layers, with a fully connected layer computing the final outputs at the end. Fig. (1) depicts the architecture diagram for the designed model, and the four basic layer types are summarized in the list below, followed by a short illustrative sketch.
Fig. (1) A diagram of the network layer architecture.
1. Convolutional layers: the previous layer's outputs are convolved with filters that have small parameters, usually of size 3 × 3, stored in a tensor W(j,i), where j is the number of filters and i is the layer index.
2. Activation layer: the convolutional layer's outputs are fed into a non-linear activation function, frequently rectified linear units (ReLUs), to produce new tensors called feature maps.
3. Pooling layers: the feature maps are pooled in a pooling layer that takes small grid regions as input and produces a single number for each region.
4. Batch normalization (BN) layer: this layer is typically placed after the activation layer, providing normalized activation maps that regularize the network and speed up training.
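As a rough illustration of the layer types just listed, the sketch below assembles a small generic CNN from Deep Learning Toolbox layer objects; the input size and layer counts are placeholders, and this is not the denoising network described later in the chapter.

```matlab
% Illustrative sketch of the four layer types listed above (plus the fully
% connected output stage mentioned in the text), expressed as Deep Learning
% Toolbox layer objects. Sizes are placeholders, not the chapter's model.
layers = [
    imageInputLayer([128 128 1])                    % grid-structured input
    convolution2dLayer(3, 64, 'Padding', 'same')    % 1. convolution: 64 filters of size 3x3
    reluLayer                                       % 2. activation (ReLU)
    batchNormalizationLayer                         % 4. batch normalization after activation
    maxPooling2dLayer(2, 'Stride', 2)               % 3. pooling over small grid regions
    fullyConnectedLayer(10)                         % fully connected layer for final outputs
    softmaxLayer
    classificationLayer];
```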
As shown in Fig. (2), the network is composed of an input layer, one convolutional layer with a rectified linear unit (ReLU), 17 convolutional layers with batch normalization (BN) and ReLU, one convolutional layer, and a regression output layer.
Fig. (2) Network architecture for denoising MR images.
To denoise the left ventricle (LV) MR image corrupted by Gaussian noise, the network architecture is 59 layers deep. As shown in Fig. (3), 64 filters of size 3 × 3 × 1 are applied to the noisy image in layer 1 to produce 64 feature maps, where 3 × 3 is the height and width of the convolution applied to the input image and 1 is the channel count for a grayscale image. This layer is followed by a ReLU activation function to introduce non-linearity. Then, for each hidden layer (layers 2-57), 64 filters of size 3 × 3 × 64 are applied to create 64 feature maps, which are then passed through BN and ReLU functions. The final convolution layer (layer 58) has 64 filters of size 3 × 3, followed by a final regression layer that generates the denoised image.
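A layer stack of this form could be assembled as in the following minimal MATLAB sketch, assuming the Deep Learning Toolbox. The depth follows the text (one Conv+ReLU layer, 17 Conv+BN+ReLU blocks, one final Conv, and a regression output), while the input size, 'same' padding, and the single-channel final convolution are assumptions rather than the chapter's exact configuration.

```matlab
% Minimal sketch of a DnCNN-style stack of the depth described above.
layers = [
    imageInputLayer([256 256 1])                    % placeholder input size (grayscale)
    convolution2dLayer(3, 64, 'Padding', 'same')    % layer 1: 64 filters of size 3x3x1
    reluLayer];

for k = 1:17                                        % hidden Conv + BN + ReLU blocks
    layers = [layers
        convolution2dLayer(3, 64, 'Padding', 'same')   % 64 filters of size 3x3x64
        batchNormalizationLayer
        reluLayer]; %#ok<AGROW>
end

layers = [layers
    convolution2dLayer(3, 1, 'Padding', 'same')     % assumed single output channel
    regressionLayer];                               % regression output (denoised image)
```

The Image Processing Toolbox also provides dnCNNLayers, which returns a comparable prebuilt denoising stack.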
Fig. (3) Model of the convolutional neural network for denoising MR images.
The experiments in this work are carried out in MATLAB version R2020a to process and denoise a sequence of MRI images. After reading all images, zero arrays are created to store the image sequence, Gaussian noise is applied, and the DnCNN is then used to filter the images. Fig. (4) depicts the experiment's implementation procedure.
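The procedure could look like the following MATLAB sketch; the folder name, image size, and noise level are placeholders, and the pretrained network returned by denoisingNetwork stands in for the chapter's own trained model.

```matlab
% Sketch of the experimental procedure described above: read an image
% sequence, preallocate zero arrays, add Gaussian noise, and denoise with a
% DnCNN. File names, sizes, and the pretrained network are assumptions.
files  = dir(fullfile('lv_slices', '*.png'));    % hypothetical folder of LV slices
nSlice = numel(files);

clean    = zeros(256, 256, nSlice);              % preallocated storage (zero arrays)
noisy    = zeros(256, 256, nSlice);
denoised = zeros(256, 256, nSlice);

net = denoisingNetwork('DnCNN');                 % pretrained Gaussian-denoising CNN

for k = 1:nSlice
    I = im2double(imread(fullfile(files(k).folder, files(k).name)));
    I = imresize(I, [256 256]);
    clean(:, :, k)    = I;
    noisy(:, :, k)    = imnoise(I, 'gaussian', 0, 0.005);   % level within 0.001-0.01
    denoised(:, :, k) = denoiseImage(noisy(:, :, k), net);
end
```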
Fig. (4) Experimental procedures.
The dataset for automatic evaluation of myocardial infarction from delayed-enhancement cardiac MRI (EMIDEC) is publicly available with clinical information for 150 patients, together with the corresponding ground truth. The MRI images comprise a sequence of DE-MRI scans in short-axis orientation covering the heart's left ventricle (LV). Both the input images covering the LV and the associated ground truth are stored in separate folders for each case in Neuroimaging Informatics Technology Initiative (NIfTI) format. A text file containing clinical information for each case is included in the same folder [38]. In this experiment, only four patients' datasets were used.
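Reading one case from a NIfTI-formatted dataset might look like the sketch below; the folder layout and file names are hypothetical and should be replaced with the actual EMIDEC structure.

```matlab
% Sketch of loading one case stored in NIfTI format. The folder and file
% names are hypothetical placeholders, not the documented EMIDEC layout.
caseDir = fullfile('EMIDEC', 'Case_P001');
vol  = niftiread(fullfile(caseDir, 'Images',   'Case_P001.nii.gz'));   % DE-MRI short-axis stack
gt   = niftiread(fullfile(caseDir, 'Contours', 'Case_P001.nii.gz'));   % ground-truth contours
info = niftiinfo(fullfile(caseDir, 'Images',   'Case_P001.nii.gz'));   % header / metadata

slice = mat2gray(double(vol(:, :, round(end/2))));   % middle short-axis slice, scaled to [0,1]
imshow(slice);
```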
Simulated data were taken from the EMIDEC website (http://emidec.com/). The experiments were conducted in MATLAB (version R2020a) to denoise the MR images of the left ventricle (LV). The applied noise was Gaussian noise at levels in the range 0.001-0.01. Fig. (5) shows the denoising of the images along with their histograms, and Fig. (6) shows the same processing applied to the contours of these images.
Fig. (5) Processed LV MR images: original image (left), noisy image (middle), and denoised image (right).
Fig. (6) Processed LV contours: original image (left), noisy image (middle), and denoised image (right).
To assess model performance, quantitative measures such as the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) were used. The PSNR and SSIM are defined as follows:
$$\mathrm{PSNR} = 20\log_{10}\!\left(\frac{I_{\max}}{\mathrm{RMSE}}\right) \tag{1}$$
where $I_{\max}$ is the maximum possible pixel intensity and RMSE is the root-mean-square error between the denoised image and the noise-free image, and
$$\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \tag{2}$$
where $\mu_x$ and $\mu_y$ are the means of $x$ and $y$, respectively, $\sigma_x^2$ and $\sigma_y^2$ are the variances, $\sigma_{xy}$ is the covariance of $x$ and $y$, and $C_1$ and $C_2$ are constants.
The SSIM is a perceptual index that compares the image quality of a reference image to a processed image. The PSNR is a significant metric that calculates the ratio of a signal's maximum possible power to the power of noise that influences signal representation. Table 2 shows both indices for images with Gaussian noise at various levels and denoised images. Using the DnCNN algorithm, the PSNR and SSIM at 0.001 level of Gaussian noise increased from 30.33 dB and 0.67 to 35.96 dB and 0.85, respectively.
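The two indices can be computed with MATLAB's built-in psnr and ssim functions, as in this minimal sketch; the variables are reused from the earlier pipeline sketch, and the printed values are illustrative rather than those reported in Table 2.

```matlab
% Minimal sketch of the quantitative evaluation: PSNR and SSIM between the
% noise-free reference and the noisy / denoised images.
ref = clean(:, :, 1);

psnrNoisy    = psnr(noisy(:, :, 1),    ref);   % peak signal-to-noise ratio (dB)
psnrDenoised = psnr(denoised(:, :, 1), ref);

ssimNoisy    = ssim(noisy(:, :, 1),    ref);   % structural similarity index
ssimDenoised = ssim(denoised(:, :, 1), ref);

fprintf('PSNR: %.2f dB -> %.2f dB, SSIM: %.2f -> %.2f\n', ...
        psnrNoisy, psnrDenoised, ssimNoisy, ssimDenoised);
```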
Fig. (7