Advanced Markov Chain Monte Carlo Methods

Faming Liang

Description

Markov Chain Monte Carlo (MCMC) methods are now an indispensable tool in scientific computing. This book discusses recent developments of MCMC methods, with an emphasis on those making use of past sample information during simulations. The application examples are drawn from diverse fields such as bioinformatics, machine learning, social science, combinatorial optimization, and computational physics.

Key features:

* Expanded coverage of the stochastic approximation Monte Carlo and dynamic weighting algorithms, which are essentially immune to local-trap problems.
* A detailed discussion of the Monte Carlo Metropolis-Hastings algorithm, which can be used for sampling from distributions with intractable normalizing constants.
* Up-to-date accounts of recent developments of the Gibbs sampler.
* Comprehensive overviews of population-based MCMC algorithms and MCMC algorithms with adaptive proposals.

This book can be used as a textbook or reference for a one-semester graduate course in statistics, computational biology, engineering, or computer science. Applied and theoretical researchers will also find it beneficial.
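To give a concrete flavor of the sampling methods the book surveys, a minimal random-walk Metropolis-Hastings sampler can be sketched as follows. This is an illustrative sketch, not code from the book; the function name, the Gaussian proposal, and the standard-normal target are all assumptions chosen for brevity.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings for a 1-D target density.

    log_target: log of the (possibly unnormalized) target density,
    so the normalizing constant is never needed.
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, step)            # symmetric Gaussian proposal
        log_ratio = log_target(y) - log_target(x)
        if math.log(rng.random()) < log_ratio:  # MH acceptance rule
            x = y                               # accept; otherwise keep x
        samples.append(x)
    return samples

# Target: standard normal, via its unnormalized log-density -x^2/2.
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
burned = draws[5000:]                            # discard burn-in
mean = sum(burned) / len(burned)
```

Because the proposal is symmetric, the acceptance ratio reduces to the ratio of target densities, and the sample mean after burn-in should be close to the target mean of zero.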

Page count: 634

Publication year: 2011




Contents

Preface

Acknowledgments

Publisher’s Acknowledgments

Chapter 1: Bayesian Inference and Markov Chain Monte Carlo

1.1 Bayes

1.2 Bayes Output

1.3 Monte Carlo Integration

1.4 Random Variable Generation

1.5 Markov Chain Monte Carlo

Exercises

Chapter 2: The Gibbs Sampler

2.1 The Gibbs Sampler

2.2 Data Augmentation

2.3 Implementation Strategies and Acceleration Methods

2.4 Applications

Exercises

Appendix 2A: The EM and PX-EM Algorithms

Chapter 3: The Metropolis-Hastings Algorithm

3.1 The Metropolis-Hastings Algorithm

3.2 Variants of the Metropolis-Hastings Algorithm

3.3 Reversible Jump MCMC Algorithm for Bayesian Model Selection Problems

3.4 Metropolis-Within-Gibbs Sampler for ChIP-chip Data Analysis

Exercises

Chapter 4: Auxiliary Variable MCMC Methods

4.1 Simulated Annealing

4.2 Simulated Tempering

4.3 The Slice Sampler

4.4 The Swendsen-Wang Algorithm

4.5 The Wolff Algorithm

4.6 The Møller Algorithm

4.7 The Exchange Algorithm

4.8 The Double MH Sampler

4.9 Monte Carlo MH Sampler

4.10 Applications

Exercises

Chapter 5: Population-Based MCMC Methods

5.1 Adaptive Direction Sampling

5.2 Conjugate Gradient Monte Carlo

5.3 Sample Metropolis-Hastings Algorithm

5.4 Parallel Tempering

5.5 Evolutionary Monte Carlo

5.6 Sequential Parallel Tempering for Simulation of High Dimensional Systems

5.7 Equi-Energy Sampler

5.8 Applications

Exercises

Appendix 5A: Protein Sequences for 2D HP Models

Chapter 6: Dynamic Weighting

6.1 Dynamic Weighting

6.2 Dynamically Weighted Importance Sampling

6.3 Monte Carlo Dynamically Weighted Importance Sampling

6.4 Sequentially Dynamically Weighted Importance Sampling

Exercises

Chapter 7: Stochastic Approximation Monte Carlo

7.1 Multicanonical Monte Carlo

7.2 1/k-Ensemble Sampling

7.3 The Wang-Landau Algorithm

7.4 Stochastic Approximation Monte Carlo

7.5 Applications of Stochastic Approximation Monte Carlo

7.6 Variants of Stochastic Approximation Monte Carlo

7.7 Theory of Stochastic Approximation Monte Carlo

7.8 Trajectory Averaging: Toward the Optimal Convergence Rate

Exercises

Appendix 7A: Test Functions for Global Optimization

Chapter 8: Markov Chain Monte Carlo with Adaptive Proposals

8.1 Stochastic Approximation-Based Adaptive Algorithms

8.2 Adaptive Independent Metropolis-Hastings Algorithms

8.3 Regeneration-Based Adaptive Algorithms

8.4 Population-Based Adaptive Algorithms

Exercises

References

Index

Wiley Series in Computational Statistics

Consulting Editors:

Paolo Giudici, University of Pavia, Italy

Geof H. Givens, Colorado State University, USA

Bani K. Mallick, Texas A&M University, USA

The Wiley Series in Computational Statistics comprises practical guides and cutting-edge research books on new developments in computational statistics. It features quality authors with a strong applications focus. The texts in the series provide detailed coverage of statistical concepts, methods and case studies in areas at the interface of statistics, computing, and numerics.

With sound motivation and a wealth of practical examples, the books show in concrete terms how to select and to use appropriate ranges of statistical computing techniques in particular fields of study. Readers are assumed to have a basic understanding of introductory terminology.

The series concentrates on applications of computational methods in statistics to fields of bioinformatics, genomics, epidemiology, business, engineering, finance and applied statistics.

Titles in the Series

Billard and Diday - Symbolic Data Analysis: Conceptual Statistics and Data Mining
Bolstad - Understanding Computational Bayesian Statistics
Borgelt, Steinbrecher and Kruse - Graphical Models, 2e
Dunne - A Statistical Approach to Neural Networks for Pattern Recognition
Liang, Liu and Carroll - Advanced Markov Chain Monte Carlo Methods
Ntzoufras - Bayesian Modeling Using WinBUGS

This edition first published 2010. © 2010 John Wiley & Sons, Ltd.

Registered office: John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.

The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

Liang, F. (Faming), 1970-
Advanced Markov Chain Monte Carlo methods : learning from past samples / Faming Liang, Chuanhai Liu, Raymond J. Carroll.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-470-74826-8 (cloth)
1. Monte Carlo method. 2. Markov processes. I. Liu, Chuanhai, 1959- II. Carroll, Raymond J. III. Title.
QA298.L53 2010
518′.282 – dc22

2010013148

A catalogue record for this book is available from the British Library.

ISBN 978-0-470-74826-8

To our families

Acknowledgments

Faming Liang is most grateful to his PhD advisor, Professor Wing Hung Wong, for his overwhelming passion for Markov Chain Monte Carlo and scientific problems, and for his constant encouragement. Liang’s research was partially supported by grants from the National Science Foundation (DMS-0607755 and CMMI-0926803).

Chuanhai Liu’s interest in computational statistics is due largely to the support and encouragement of his M.S. advisor, Professor Yaoting Zhang, and his PhD advisor, Professor Donald B. Rubin. In the mid-1980s, Chuanhai Liu learned from Professor Yaoting Zhang the importance of statistical computing. Over a period of more than ten years from the late 1980s, he learned from Professor Donald B. Rubin statistical thinking in the development of iterative methods, such as the EM and Gibbs-type algorithms.

Raymond Carroll’s research was partially supported by a grant from the National Cancer Institute (CA57030).

Finally, we wish to thank our families for their constant love, understanding and support. It is to them that we dedicate this book.

F.L., C.L. and R.C.

Publisher’s Acknowledgments

The publisher wishes to thank the following for permission to reproduce copyright material:

Table 3.2, Figure 3.4: Reproduced by permission of Licensee BioMed Central Ltd.

Table 4.1, Figure 4.2, Table 4.3: Reproduced by permission of Taylor & Francis.

Figure 5.1, Figure 5.5, Figure 5.6: Reproduced by permission of International Chinese Statistical Association.

Figure 5.2, Figure 5.3, Figure 6.5, Figure 7.1, Figure 7.3, Figure 7.10: Reproduced by permission of American Statistical Association.

Table 5.4, Figure 5.4: Reproduced by permission of American Physical Society.

Figure 5.7, Table 5.5, Figure 5.8: Reproduced by permission of American Institute of Physics.

Figure 5.9, Figure 7.11, Table 7.7, Figure 8.1, Figure 8.2, Figure 8.3: Reproduced by permission of Springer.

Figure 6.1, Figure 6.2, Table 7.2, Figure 7.2, Table 7.4, Figure 7.6, Table 7.5, Figure 7.7, Figure 7.8, Figure 7.9: Reproduced by permission of Elsevier.

Figure 6.3: Reproduced by permission of the National Academy of Sciences.

Figure 7.4, Figure 7.5: Reproduced by permission of National Cancer Institute.

Every effort has been made to trace rights holders, but if any have been inadvertently overlooked the publishers would be pleased to make the necessary arrangements at the first opportunity.

Read on in the full edition!
