Regularization and Bayesian Methods for Inverse Problems in Signal and Image Processing
Description

The focus of this book is on "ill-posed inverse problems". These problems cannot be solved on the basis of observed data alone: building solutions requires bringing in additional a priori information, and the resulting solutions are specific to the information taken into account. Identifying this information and making it explicit is necessary to grasp the domain of validity and the field of application of the solutions built. For a long time, interest in these problems remained very limited in the signal-image community; the community has since recognized their importance, and they have become the subject of much greater enthusiasm.

From the application field’s point of view, a significant part of the book is devoted to conventional subjects in the field of inversion: biological and medical imaging, astronomy, non-destructive evaluation, processing of video sequences, target tracking, sensor networks and digital communications.

The variety of chapters is also clear when we examine the acquisition modalities at stake: conventional modalities such as tomography and NMR, visible or infrared optical imaging, and more recent modalities such as atomic force imaging and polarized-light imaging.

Contents

Introduction

1 3D Reconstruction in X-ray Tomography: Approach Example for Clinical Data Processing

1.1. Introduction

1.2. Problem statement

1.3. Method

1.4. Results

1.5. Conclusion

1.6. Acknowledgments

1.7. Bibliography

2 Analysis of Force-Volume Images in Atomic Force Microscopy Using Sparse Approximation

2.1. Introduction

2.2. Atomic force microscopy

2.3. Data processing in AFM spectroscopy

2.4. Sparse approximation algorithms

2.5. Real data processing

2.6. Conclusion

2.7. Bibliography

3 Polarimetric Image Restoration by Non-local Means

3.1. Introduction

3.2. Light polarization and the Stokes–Mueller formalism

3.3. Estimation of the Stokes vectors

3.4. Results

3.5. Conclusion

3.6. Bibliography

4 Video Processing and Regularized Inversion Methods

4.1. Introduction

4.2. Three applications

4.3. Dense image registration

4.4. A few achievements based on direct formulation

4.5. Conclusion

4.6. Bibliography

5 Bayesian Approach in Performance Modeling: Application to Superresolution

5.1. Introduction

5.2. Performance modeling and Bayesian paradigm

5.3. Superresolution techniques behavior

5.4. Application examples

5.5. Real data processing

5.6. Conclusion

5.7. Bibliography

6 Line Spectra Estimation for Irregularly Sampled Signals in Astrophysics

6.1. Introduction

6.2. Periodogram, irregular sampling, maximum likelihood

6.3. Line spectra models: spectral sparsity

6.4. Prewhitening, CLEAN and greedy approaches

6.5. Global approach and convex penalization

6.6. Probabilistic approach for sparsity

6.7. Conclusion

6.8. Bibliography

7 Joint Detection-Estimation in Functional MRI

7.1. Introduction to functional neuroimaging

7.2. Joint detection-estimation of brain activity

7.3. Bayesian approach

7.4. Scheme for stochastic MCMC inference

7.5. Alternative variational inference scheme

7.6. Comparison of both types of solutions

7.7. Conclusion

7.8. Bibliography

8 MCMC and Variational Approaches for Bayesian Inversion in Diffraction Imaging

8.1. Introduction

8.2. Measurement configuration

8.3. The forward model

8.4. Bayesian inversion approach

8.5. Results

8.6. Conclusions

8.7. Bibliography

9 Variational Bayesian Approach and Bi-Model For the Reconstruction-Separation of Astrophysics Components

9.1. Introduction

9.2. Variational Bayesian methodology

9.3. Exponentiated gradient for variational Bayesian

9.4. Application: reconstruction-separation of astrophysical components

9.5. Implementation of the variational Bayesian approach

9.6. Results

9.7. Conclusion

9.8. Bibliography

10 Kernel Variational Approach for Target Tracking in a Wireless Sensor Network

10.1. Introduction

10.2. State of the art: limitations of existing methods

10.3. Model-less target tracking

10.4. Simulation results

10.5. Conclusion

10.6. Bibliography

11 Entropies and Entropic Criteria

11.1. Introduction

11.2. Some entropies in information theory

11.3. Source coding with escort distributions and Rényi bounds

11.4. A simple transition model

11.5. Minimization of the Rényi divergence and associated entropies

11.6. Bibliography

List of Authors

Index

First published 2015 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK

www.iste.co.uk

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.wiley.com

© ISTE Ltd 2015
The rights of Jean-François Giovannelli and Jerome Idier to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2014956810

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-637-2

Introduction

This book was written in tribute to our colleague Guy Demoment, who was a researcher at the CNRS from 1977 to 1988, then Professor at the University of Paris-Sud until 2008, member of the Laboratoire des Signaux et Systèmes (L2S, UMR 8506, Gif-sur-Yvette) and its director from 1997 to 2001, and the founder of a research group on inverse problems in signal and image processing at the beginning of the 1980s.

Guy Demoment’s research activities began in 1970, at the interface between biological and medical engineering, automatic control and the still fledgling field of signal processing. Guy was particularly interested in cardiac function and in the cardiovascular system [DEM 77]. He derived a mathematical model of the functioning of the cardiovascular hemodynamic loop, which was subsequently used to develop the control law of cardiac replacement prostheses. He also focused on aspects closer to theoretical biology, such as left-ventricle modeling, and on questions closer to physics, such as the determination of vascular impedance [DEM 81].

This latter aspect naturally leads to confronting models with reality by means of measurements. The idea is natural, yet in practice these measurements provide only indirect and degraded information on the quantities of interest. These degradations generally take two forms: structural (resolution limitations, dynamics, sampling, etc.) and uncertainty-related (measurement noise, model approximation, etc.). Recovering the quantity of interest then raises a genuinely ill-posed inversion or inference problem. By creating the Groupe Problèmes Inverses (GPI – Inverse Problems Group), Guy promoted this scientific approach within the L2S and then throughout the signal-image community within the engineering sciences. Having shared this approach with him within the GPI is a fortunate opportunity from which most of the co-authors of this book have benefited, as doctoral students or junior colleagues.
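To fix ideas, the sketch below spells out this kind of situation in its simplest linear form: a quantity of interest observed through a blurring operator (structural degradation) with additive noise (uncertainty), where naive inversion amplifies the noise and a small regularization term stabilizes the solution. The example is ours and purely illustrative; it is not drawn from Guy Demoment’s work or from any chapter of this book.

```python
# Minimal illustrative sketch (not from the book): an ill-posed linear inverse
# problem y = H x + n, where H models a resolution limitation (Gaussian blur)
# and n the measurement uncertainty. Naive inversion amplifies the noise;
# Tikhonov regularization stabilizes the reconstruction.
import numpy as np

rng = np.random.default_rng(0)
n = 100
t = np.linspace(0, 1, n)

# Quantity of interest (purely illustrative): two smooth bumps.
x_true = (np.exp(-0.5 * ((t - 0.3) / 0.03) ** 2)
          + 0.6 * np.exp(-0.5 * ((t - 0.7) / 0.05) ** 2))

# Structural degradation: convolution with a narrow Gaussian blur.
kernel = np.exp(-0.5 * ((t - 0.5) / 0.02) ** 2)
H = np.array([np.roll(kernel, k - n // 2) for k in range(n)])
H /= H.sum(axis=1, keepdims=True)

# Uncertainty: additive measurement noise.
y = H @ x_true + 0.01 * rng.standard_normal(n)

# Naive inversion: dominated by amplified noise, because H is ill-conditioned.
x_naive = np.linalg.solve(H, y)

# Regularized (Tikhonov) inversion: a priori preference for moderate energy.
lam = 1e-2
x_reg = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

print("condition number of H:", np.linalg.cond(H))
print("naive inversion error:", np.linalg.norm(x_naive - x_true))
print("regularized error:    ", np.linalg.norm(x_reg - x_true))
```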

Undoubtedly, Guy Demoment was an essential contributor to the field of inverse problems in signal and image processing, and its main instigator in the French community. He has also been passionate about related issues, such as the efficient implementation of algorithms. In particular, in the context of linear deconvolution and adaptive spectral analysis, he devoted special attention to Kalman filtering and smoothing algorithms, and made several significant contributions on their fast versions [DEM 85, DEM 91].

The exploitation of probabilistic models for detection-estimation has also been a subject of choice for Guy and his collaborators since the end of the 1980s, resulting in recursive [GOU 89] and then iterative [GOU 90] computational structures. It is interesting to note that the latter are very competitive precursors of the well-known greedy algorithms of sparse approximation, as is made clear later in this book.

Regarding more fundamental subjects, he has been interested in information modeling and Bayesian inference, inspired by the work of E.T. Jaynes. He contributed to the use of the maximum entropy principle for the synthesis of a priori models and to its application to tomography [MOH 87, MOH 88a], and later studied the principle of maximum entropy on the mean in the context of inverse problems [LEB 99]. He further explored these issues and, during his last period of scientific activity, became interested in variational approaches to Bayesian inference.
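For orientation, a commonly used penalized form of this principle is recalled below; the notation is ours and this is a standard textbook version, not quoted from [MOH 87] or [LEB 99].

```latex
% Standard penalized maximum-entropy criterion (illustrative notation):
% y data, H observation operator, sigma^2 noise variance, m default model,
% alpha regularization weight; the image x is constrained to be non-negative.
\hat{x} \;=\; \operatorname*{arg\,min}_{x \ge 0}\;
  \frac{1}{2\sigma^{2}}\,\lVert y - Hx \rVert^{2}
  \;+\; \alpha \sum_{i} \Bigl( x_{i}\log\frac{x_{i}}{m_{i}} - x_{i} + m_{i} \Bigr)
```

The entropic term is convex, vanishes at x = m and grows as x departs from the default model, so the data pull the solution away from m only where they carry enough information.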

Guy’s scientific sensibility has also been clearly visible in his teaching activities. He created several courses, from undergraduate to PhD level as well as in continuing education, and always dedicated a great deal of energy and creativity to them. Particular examples include a course on Bayesian inference and the foundations of probability and, among the more advanced themes, Kalman algorithms and their fast versions, as well as signal deconvolution.

Beyond his scientific activity, research and teaching, Guy has also been remarkably involved in community life. At the national level, he has been a member of the Conseil National des Universités (www.cpcnu.fr) and a particularly active member of teaching and research networks such as the club EEA (www.clubeea.org) and the GdR ISIS (gdr-isis.fr). Within the University of Paris-Sud, he has chaired the pedagogy commission, served as vice president of the Department of Physics in Orsay, been responsible for a bachelor-level diploma, and co-created a master’s-level diploma.

The present book concerns “ill-posed inverse problems”; readers can refer to the widely cited article [DEM 89] or to a previous collective book [IDI 08] on this subject, of which Guy Demoment is one of the main contributors. These are problems that cannot be solved on the basis of the observed data alone: the construction of solutions requires other information, referred to as a priori, and the resulting solutions are specific to the information taken into account. Identifying and making this information explicit is necessary to appreciate the range of validity and the scope of application of the constructed solutions. Since the 1980s, the scientific community has widely acknowledged the significance of these problems, and contributions have become very abundant, not only in the signal-image community but also in mathematics, computer science and physics.

In response to this abundance of work on inverse problems, we have chosen to address a broad spectrum of data processing problems and application domains, with a particular focus on the diversity of mathematical tools.

From the point of view of application fields, an important part of the book is dedicated to scientific domains that raise a large number of inversion problems: biological and medical imaging, with X-ray tomography (Chapter 1), atomic force microscopy (Chapter 2) and functional MRI (Chapter 7); astronomy (Chapters 6 and 9); and non-destructive evaluation (Chapter 8). Another application area is video sequence processing (Chapters 4 and 5). Two further applications, more rarely encountered in the field of inversion, are also addressed: target tracking in wireless sensor networks (Chapter 10) and digital communications (Chapter 11).

The diversity of chapters is also evident when the acquisition modalities under consideration come under scrutiny: from the more traditional ones, such as tomography (Chapters 1 and 8), MRI (Chapter 7) and optical imaging in the visible (Chapters 4 and 5) or infrared spectrum (Chapters 5 and 9), to more recent modalities such as atomic force imaging (Chapter 2) and polarized optical imaging (Chapter 3).

Throughout the chapters, the duality between the approaches known as “energetic” and “probabilistic” emerges. The first type of approach rests on a deterministic construction leading to criteria and to numerical optimization issues, typically as in Chapters 1, 2, 3, 4 and 6. The second rests on a Bayesian construction, often hierarchical, which assigns probability distributions to the unknown objects as well as to the data. It thus leads to a joint distribution, so that optimal strategies are available and performance can be characterized from the outset, as in Chapter 5. The remaining chapters make use of a posteriori distributions to produce an estimate: the posteriors are explored either by stochastic sampling, as in Chapters 6, 7 and 8, or by approximate maximization, as in Chapters 7, 8, 9 and 10. The latter chapter also introduces a notion of learning and relies on the informational principles discussed in Chapter 11, which presents more theoretical aspects related to entropy criteria.
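As a concrete, deliberately simple illustration of this duality (ours, not drawn from any chapter): under a Gaussian noise model and a Gaussian prior, the “energetic” penalized least-squares solution coincides with the Bayesian MAP estimate, and the posterior distribution additionally quantifies the uncertainty attached to it.

```python
# Illustrative sketch of the energetic/probabilistic duality (not from the book).
# With Gaussian noise and a Gaussian prior, minimizing the penalized criterion
# ||y - Hx||^2 / sigma^2 + ||x||^2 / tau^2 gives the same point as the Bayesian
# MAP (here also the posterior mean), and the posterior covariance measures the
# remaining uncertainty.
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 20
H = rng.standard_normal((m, n))      # generic linear forward model (stand-in)
x_true = np.sin(np.linspace(0, 3, n))
sigma, tau = 0.1, 0.5                # noise and prior standard deviations
y = H @ x_true + sigma * rng.standard_normal(m)

# Energetic view: deterministic criterion and numerical optimization.
lam = (sigma / tau) ** 2
x_energetic = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# Probabilistic view: Gaussian posterior p(x | y) = N(mu, Sigma).
Sigma = np.linalg.inv(H.T @ H / sigma**2 + np.eye(n) / tau**2)
mu = Sigma @ H.T @ y / sigma**2
samples = rng.multivariate_normal(mu, Sigma, size=5000)   # stochastic exploration

print("max |MAP - penalized solution|:", np.max(np.abs(mu - x_energetic)))
print("posterior std, first components:", samples.std(axis=0)[:3])
```

Outside this Gaussian setting the two views no longer coincide so neatly, which is where the sampling and approximate (variational) machinery used in Chapters 6 to 10 comes into play.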

I.1. Bibliography

[DEM 72] DEMOMENT G., Modèle de la boucle cardiovasculaire: évaluation de l’autorégulation mécanique et de la fonction ventriculaire gauche, PhD thesis, no. 169, Orsay center, University of Paris-Sud, 29 June 1972.

[DEM 77] DEMOMENT G., Contribution à l’étude du fonctionnement ventriculaire gauche par des méthodes d’identification paramétriques. Obtention d’un observateur de l’état du ventricule, PhD thesis, no. 1810, Orsay center, University of Paris-Sud, 15 March 1977.

[DEM 82] DEMOMENT G., Introduction à la statistique, Lecture notes, École supérieure d’électricité, no. 2906, 1982.

[DEM 83] DEMOMENT G., Déconvolution des signaux, Lecture notes, École supérieure d’électricité, no. 2964, 1983.

[DEM 85] DEMOMENT G., REYNAUD R., “Fast minimum-variance deconvolution”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-33, pp. 1324–1326, 1985.

[DEM 87] DEMOMENT G., Algorithmes rapides, Lecture notes, École supérieure d’électricité, no. 3152, 1987.

[DEM 89a] DEMOMENT G., “Equations de Chandrasekhar et algorithmes rapides pour le traitement du signal et des images”, Traitement du Signal, vol. 6, pp. 103–115, 1989.

[DEM 89b] DEMOMENT G., “Image reconstruction and restoration: Overview of common estimation structures and problems”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-37, no. 12, pp. 2024–2036, December 1989.

[DEM 91] DEMOMENT G., REYNAUD R., “Fast RLS algorithms and Chandrasekhar equations”, HAYKIN S., (ed.), SPIE Conference on Adaptive Signal Processing, San Diego, CA, pp. 357–367, July 1991.

[DEM 05a] DEMOMENT G., Probabilités: modélisation des incertitudes, inférence logique, et traitement des données expérimentales. Deuxième partie: application au traitement du signal, Lecture notes, University of Paris-Sud, Orsay center, 2005.

[DEM 05b] DEMOMENT G., Probabilités: modélisation des incertitudes, inférence logique, et traitement des données expérimentales. Première partie: bases de la théorie, Lecture notes, University of Paris-Sud, Orsay center, 2005.

[GOU 89] GOUSSARD Y., DEMOMENT G., “Recursive deconvolution of Bernoulli-Gaussian processes using a MA representation”, IEEE Transactions on Geoscience and Remote Sensing, vol. GE-27, pp. 384–394, 1989.

[GOU 90] GOUSSARD Y., DEMOMENT G., IDIER J., “A new algorithm for iterative deconvolution of sparse spike trains”, IEEE International Conference on Acoustics, Speech and Signal Processing, Albuquerque, NM, pp. 1547–1550, April 1990.

[IDI 08] IDIER J., (ed.), Bayesian Approach to Inverse Problems, ISTE, London and John Wiley & Sons, New York, April 2008.

[LEB 99] LE BESNERAIS G., BERCHER J.-F., DEMOMENT G., “A new look at entropy for solving linear inverse problems”, IEEE Transactions on Information Theory, vol. 45, no. 5, pp. 1565–1578, July 1999.

[MOH 87] MOHAMMAD-DJAFARI A., DEMOMENT G., “Maximum entropy Fourier synthesis with application to diffraction tomography”, Applied Optics, vol. 26, no. 10, pp. 1745–1754, 1987.

[MOH 88a] MOHAMMAD-DJAFARI A., DEMOMENT G., “Maximum entropy reconstruction in X-ray and diffraction tomography”, IEEE Transactions on Medical Imaging, vol. MI-7, no. 4, pp. 345–354, 1988.

[MOH 88b] MOHAMMAD-DJAFARI A., DEMOMENT G., “Utilisation de l’entropie dans les problèmes de restauration et de reconstruction d’images”, Traitement du Signal, vol. 5, no. 4, pp. 235–248, 1988.

1

3D Reconstruction in X-ray Tomography: Approach Example for Clinical Data Processing

1.1. Introduction

The work presented in this chapter stems from three-dimensional (3D) reconstruction problems in X-ray computed tomography (XRCT), within a clinical framework. More specifically, the practical objective was to use XRCT to detect and quantify possible restenosis occurring in some patients after the insertion of a stent. Since the quality of the reconstructions achieved by clinical tomographs is insufficient for this purpose, the aim was to develop a method capable of reconstructing small structures in the presence of metal objects, in a 3D context, with a precision higher than that of the tomographs available in hospitals; in addition, this method had to run on computers commonly available in most research laboratories (such as personal computers (PCs), without any particular architecture or processor hardware).

The development of a solution clearly falls within the framework of conventional inverse problem solving. However, it is essential to take into account the characteristics of 3D XRCT, notably the very large volume of data to be processed, the geometric complexity of the raw data collection process, and the practical barriers to accessing these data. To achieve the objective stated above, the difficulties are twofold: (1) at the methodological level, developing an inversion method adapted to the intrinsic characteristics of the problem to be addressed and (2) at the implementation level, accounting for the practical obstacles mentioned above as well as for the constraints on processing time inherent in any clinical application. In the following, we present the approach we adopted, based on an analysis of the main factors likely to improve the quality of the reconstructions while satisfying the practical constraints we face; this leads us to put the methodological aspects into perspective with respect to practical questions, in light of the applied objective of this work.

1.2. Problem statement

Although the image reconstruction methods used in the first tomographs were of the analytical type [AMB 73, HOU 73], the advantages of approaches based on estimation [HER 71, HER 73, HER 76b], and then the ill-posed nature of tomographic reconstruction problems [HER 76a, HER 79], were recognized very early on. Over the past 35 years, many academic studies of tomographic reconstruction have been carried out within the framework of inverse problem solving. Generally, the emphasis is on the three main elements of this type of approach, that is to say, modeling of the data formation process, choice of the estimator, and development of techniques that enable the practical computation of the estimate [DEM 89]. These works have been partly tailored to the various imaging modalities (for example, transmission [HER 76a], emission [LEV 87] and diffraction [BER 97] tomography, and more recently optical and/or multiphysics tomography (see [BOA 04] for a partial synthesis)), which present widely varying degrees of difficulty: whereas estimation conditions are often very unfavorable in diffraction tomography (eddy current tomography [TRI 10], seismic imaging [VAU 11]), owing to the strong non-linearity of the underlying physical phenomena, the importance of attenuation phenomena and the small number of observations relative to the number of unknowns, the inversion conditions are generally better in emission tomography (SPECT, for example) and can be described as relatively favorable in XRCT. This explains why, in this area, reconstruction methods that may be called “naive” provide results that have been used in clinical practice for several decades. In what follows, we present the elements likely to have a significant impact on the performance of an XRCT inversion method.
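To make these three elements concrete before going further, the toy sketch below (ours, not the method developed in this chapter) pairs a linear data-formation model, in which a random matrix merely stands in for the true projection geometry, with a penalized least-squares estimator computed by a simple iterative scheme.

```python
# Toy sketch of the three elements of an inversion approach (illustrative only;
# not the method of this chapter): (1) linear data-formation model y = A x + n,
# with A a random stand-in for the real projection geometry, (2) a penalized
# least-squares estimator, (3) iterative computation (gradient descent), since
# for realistic 3D data the system matrix is far too large to invert directly.
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_meas = 64, 48                    # deliberately underdetermined
A = rng.random((n_meas, n_pixels))           # hypothetical projection operator
x_true = np.zeros(n_pixels)
x_true[20:28] = 1.0                          # small high-contrast structure
y = A @ x_true + 0.05 * rng.standard_normal(n_meas)

lam = 0.5                                    # regularization weight (hand-tuned)

def gradient(x):
    # Gradient of J(x) = ||y - A x||^2 + lam * ||x||^2.
    return 2.0 * A.T @ (A @ x - y) + 2.0 * lam * x

step = 1.0 / (2.0 * (np.linalg.norm(A, 2) ** 2 + lam))   # safe step size
x = np.zeros(n_pixels)
for _ in range(10000):
    x -= step * gradient(x)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", rel_err)
```

The iterative form matters more than the particular estimator here: with clinical 3D data volumes, the system matrix can neither be stored densely nor inverted, so any practical method has to rely on repeated applications of the projection operator and its adjoint.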
