This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, it aims to make a wide range of statistical modelling applications accessible, using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field, and includes a new set of worked examples. The novel aspect of the first edition was its coverage of statistical modelling using WinBUGS and OpenBUGS; this feature continues in the new edition, along with examples using R, to broaden appeal and for completeness of coverage.
Page count: 887
Year of publication: 2014
Cover
Title Page
WILEY SERIES IN PROBABILITY AND STATISTICS
Copyright
Preface
Chapter 1: Bayesian methods and Bayesian estimation
1.1 Introduction
1.2 MCMC techniques: The Metropolis–Hastings algorithm
1.3 Software for MCMC: BUGS, JAGS and R-INLA
1.4 Monitoring MCMC chains and assessing convergence
1.5 Model assessment
References
Chapter 2: Hierarchical models for related units
2.1 Introduction: Smoothing to the hyper population
2.2 Approaches to model assessment: Penalised fit criteria, marginal likelihood and predictive methods
2.3 Ensemble estimates: Poisson–gamma and Beta-binomial hierarchical models
2.4 Hierarchical smoothing methods for continuous data
2.5 Discrete mixtures and Dirichlet processes
2.6 General additive and histogram smoothing priors
Exercises
Notes
References
Chapter 3: Regression techniques
3.1 Introduction: Bayesian regression
3.2 Normal linear regression
3.3 Simple generalized linear models: Binomial, binary and Poisson regression
3.4 Augmented data regression
3.5 Predictor subset choice
3.6 Multinomial, nested and ordinal regression
Exercises
Notes
References
Chapter 4: More advanced regression techniques
4.1 Introduction
4.2 Departures from linear model assumptions and robust alternatives
4.3 Regression for overdispersed discrete outcomes
4.4 Link selection
4.5 Discrete mixture regressions for regression and outlier status
4.6 Modelling non-linear regression effects
4.7 Quantile regression
Exercises
Notes
References
Chapter 5: Meta-analysis and multilevel models
5.1 Introduction
5.2 Meta-analysis: Bayesian evidence synthesis
5.3 Multilevel models: Univariate continuous outcomes
5.4 Multilevel discrete responses
5.5 Modelling heteroscedasticity
5.6 Multilevel data on multivariate indices
Exercises
Notes
References
Chapter 6: Models for time series
6.1 Introduction
6.2 Autoregressive and moving average models
6.3 Discrete outcomes
6.4 Dynamic linear and general linear models
6.5 Stochastic variances and stochastic volatility
6.6 Modelling structural shifts
Exercises
Notes
References
Chapter 7: Analysis of panel data
7.1 Introduction
7.2 Hierarchical longitudinal models for metric data
7.3 Normal linear panel models and normal linear growth curves
7.4 Longitudinal discrete data: Binary, categorical and Poisson panel data
7.5 Random effects selection
7.6 Missing data in longitudinal studies
Exercises
Notes
References
Chapter 8: Models for spatial outcomes and geographical association
8.1 Introduction
8.2 Spatial regressions and simultaneous dependence
8.3 Conditional prior models
8.4 Spatial covariation and interpolation in continuous space
8.5 Spatial heterogeneity and spatially varying coefficient priors
8.6 Spatio-temporal models
8.7 Clustering in relation to known centres
Exercises
Notes
References
Chapter 9: Latent variable and structural equation models
9.1 Introduction
9.2 Normal linear structural equation models
9.3 Dynamic factor models, panel data factor models and spatial factor models
9.4 Latent trait and latent class analysis for discrete outcomes
9.5 Latent trait models for multilevel data
9.6 Structural equation models for missing data
Exercises
Notes
References
Chapter 10: Survival and event history models
10.1 Introduction
10.2 Continuous time functions for survival
10.3 Accelerated hazards
10.4 Discrete time approximations
10.5 Accounting for frailty in event history and survival models
10.6 Further applications of frailty models
10.7 Competing risks
Exercises
References
Index
WILEY SERIES IN PROBABILITY AND STATISTICS
End User License Agreement
Second Edition
Peter Congdon
Centre for Statistics and Department of Geography, Queen Mary, University of London, UK
Established by WALTER A. SHEWHART and SAMUEL S. WILKS
Editors: David J. Balding, Noel A. C. Cressie, Garrett M. Fitzmaurice,
Geof H. Givens, Harvey Goldstein, Geert Molenberghs, David W. Scott, Adrian F. M. Smith, Ruey S. Tsay, Sanford Weisberg
Editors Emeriti: J. Stuart Hunter, Iain M. Johnstone, Joseph B. Kadane, Jozef L. Teugels
A complete list of the titles in this series appears at the end of this volume.
This edition first published 2014
© 2014 John Wiley & Sons, Ltd
Registered office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Congdon, P.
Applied Bayesian modelling / Peter Congdon.— Second edition.
pages cm
Includes bibliographical references and index.
ISBN 978-1-119-95151-3 (cloth)
1. Bayesian statistical decision theory. 2. Mathematical statistics. I. Title.
QA279.5.C649 2014
519.5′42–dc23
2014004862
A catalogue record for this book is available from the British Library.
ISBN: 978-1-119-95151-3
My gratitude is due to Wiley for proposing a revised edition of Applied Bayesian Modelling, first published in 2003. Much has changed since then for those seeking to apply Bayesian principles or to exploit the growing advantages of Bayesian estimation.
The central program used throughout the text in worked examples is BUGS, though R packages such as R-INLA, R2BayesX and MCMCpack are also demonstrated. References throughout the text to BUGS can be taken to refer both to WinBUGS and to the ongoing OpenBUGS program, on which future development will concentrate (see http://www.openbugs.info/w/). There is a good deal of continuity between the final WinBUGS 1.4 version and OpenBUGS (for details of differences see http://www.openbugs.info/w.cgi/OpenVsWin), though OpenBUGS has a wider range of sampling choices, distributions and functions. BUGS code can also be readily adapted to JAGS applications and to JAGS interfaces with R such as rjags.
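As a minimal sketch of this portability (the data values and node names below are illustrative assumptions, not a worked example from the text), a simple normal-mean model written in the BUGS language can be compiled and sampled from R via rjags:

# Illustrative only: a BUGS-language model run from R through rjags
# (JAGS must be installed; data and priors are hypothetical).
library(rjags)

model_string <- "
model {
  for (i in 1:n) {
    y[i] ~ dnorm(mu, tau)      # likelihood: normal with precision tau
  }
  mu ~ dnorm(0, 1.0E-6)        # vague prior on the mean
  tau ~ dgamma(0.001, 0.001)   # vague prior on the precision
  sigma <- 1 / sqrt(tau)       # derived standard deviation
}
"

y <- c(2.1, 1.8, 2.6, 2.4, 1.9)                       # hypothetical data
jm <- jags.model(textConnection(model_string),
                 data = list(y = y, n = length(y)),
                 n.chains = 2)
update(jm, 1000)                                      # burn-in
samps <- coda.samples(jm, c("mu", "sigma"), n.iter = 5000)
summary(samps)

Essentially the same model block can be pasted directly into WinBUGS or OpenBUGS and run from the menus, which is why familiarity with the BUGS language transfers readily across these interfaces.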
Although R interfaces to BUGS, or packages encapsulating the program, are now widely used, the BUGS programming language itself remains central. Direct experience of WinBUGS or OpenBUGS programming is an important preliminary to using R interfaces such as BRugs and rjags.
For learning Bayesian methods, especially if the main goal is data analysis per se, BUGS has both practical and pedagogical advantages. It can be seen as a half-way house between menu-driven Bayesian computing (still not really established in any major computing package, though SAS has growing Bayesian capabilities) on the one hand, and full development of independent code, including sampling algorithms, on the other.
Many thanks are due to the following for comments on chapters or programming advice: Sid Chib, Cathy Chen, Brajendra Sutradhar and Thomas Kneib.
Please send comments or questions to me at [email protected].
Peter Congdon, London
Bayesian analysis of data in the health, social and physical sciences has been greatly facilitated in the last two decades by improved scope for estimation via iterative sampling methods. Recent overviews are provided by Brooks et al. (2011), Hamelryck et al. (2012), and Damien et al. (2013). Since the first edition of this book in 2003, the major changes in Bayesian technology relevant to practical data analysis have arguably been in distinct new approaches to estimation, such as the INLA method, and in a much extended range of computer packages, especially in R, for applying Bayesian techniques (e.g. Martin and Quinn, 2006; Albert, 2007; Statisticat LLC, 2013).
Among the benefits of the Bayesian approach and of sampling methods of Bayesian estimation (Gelfand and Smith, 1990; Geyer, 2011) are a more natural interpretation of parameter uncertainty (e.g. through credible intervals) (Lu et al., 2012), and the ease with which the full parameter density (possibly skew or multi-modal) may be estimated. By contrast, frequentist estimates may rely on normality approximations based on large sample asymptotics (Bayarri and Berger, 2004). Unlike classical techniques, the Bayesian method allows model comparison across non-nested alternatives, and recent sampling estimation developments have facilitated new methods of model choice (e.g. Barbieri and Berger, 2004; Chib and Jeliazkov, 2005). The flexibility of Bayesian sampling estimation extends to derived ‘structural’ parameters combining model parameters and possibly data, and with substantive meaning in application areas, which under classical methods might require the delta technique. For example, Parent and Rivot (2012) refer to ‘management parameters’ derived from hierarchical ecological models.
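As a hedged illustration of this point (the regression setting, node names and priors below are assumed for exposition rather than taken from a worked example), a derived structural parameter such as a ratio of regression coefficients can be declared as an extra node in BUGS and monitored like any other unknown, so that its full posterior density and credible interval are obtained directly from the MCMC output, without a delta-method approximation:

model {
  for (i in 1:n) {
    y[i] ~ dnorm(mu[i], tau)
    mu[i] <- beta0 + beta1*x1[i] + beta2*x2[i]
  }
  beta0 ~ dnorm(0, 1.0E-6)
  beta1 ~ dnorm(0, 1.0E-6)
  beta2 ~ dnorm(0, 1.0E-6)
  tau ~ dgamma(0.001, 0.001)
  # Derived structural parameter: monitored like any other node, so its
  # (possibly skew) posterior and credible interval come straight from
  # the sampled values.
  ratio <- beta1 / beta2
}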
New estimation methods also assist in the application of hierarchical models to represent latent process variables, which act to borrow strength in estimation across related units and outcomes (Wikle, 2003; Clark and Gelfand, 2006). Letting [A, B] and [A | B] denote joint and conditional densities respectively, the paradigm for a hierarchical model specifies

[data | process, parameters] [process | parameters] [parameters]
based on an assumption that observations are imperfect realisations of an underlying process and that units are exchangeable. Usually the observations are considered conditionally independent given the process and parameters.
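A minimal BUGS sketch of this three-stage structure, using the Poisson-gamma form discussed in Chapter 2 (the node names, data layout and hyperprior settings are illustrative assumptions):

model {
  for (i in 1:n) {
    y[i] ~ dpois(mu[i])          # data | process, parameters
    mu[i] <- E[i] * theta[i]     # expected count scaled by unit-level relative risk
    theta[i] ~ dgamma(a, b)      # process | parameters: exchangeable unit effects
  }
  a ~ dexp(1)                    # parameters: hyperpriors governing the ensemble
  b ~ dgamma(0.1, 0.1)
}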
Such techniques play a major role in applications such as spatial disease patterns, small domain estimation for survey outcomes (Ghosh and Rao, 1994), meta-analysis across several studies (Sutton and Abrams, 2001), educational and psychological testing (Sahu, 2002; Shiffrin et al., 2008) and performance comparisons (e.g. Racz and Sedransk, 2010; Ding et al., 2013).
Continue reading in the full edition!
