GARCH Models

Christian Francq
Description

Provides a comprehensive and updated study of GARCH models and their applications in finance, covering new developments in the discipline 

This book provides a comprehensive and systematic approach to understanding GARCH time series models and their applications whilst presenting the most advanced results concerning the theory and practical aspects of GARCH. The probability structure of standard GARCH models is studied in detail as well as statistical inference such as identification, estimation, and tests. The book also provides new coverage of several extensions such as multivariate models, looks at financial applications, and explores the validation of the models used.

GARCH Models: Structure, Statistical Inference and Financial Applications, 2nd Edition features a new chapter on Parameter-Driven Volatility Models, which covers Stochastic Volatility Models and Markov Switching Volatility Models. A second new chapter titled Alternative Models for the Conditional Variance contains a section on Stochastic Recurrence Equations and additional material on EGARCH, Log-GARCH, GAS, MIDAS, and intraday volatility models, among others. The book is also updated with a more complete discussion of multivariate GARCH; a new section on Cholesky GARCH; a larger emphasis on the inference of multivariate GARCH models; a new set of corrected problems available online; and an up-to-date list of references.

  • Features up-to-date coverage of the current research in the probability, statistics, and econometric theory of GARCH models
  • Covers significant developments in the field, especially in multivariate models
  • Contains completely renewed chapters with new topics and results
  • Handles both theoretical and applied aspects
  • Applies to researchers in different fields (time series, econometrics, finance)
  • Includes numerous illustrations and applications to real financial series
  • Presents a large collection of exercises with corrections
  • Supplemented by a supporting website featuring R codes, Fortran programs, data sets and Problems with corrections

GARCH Models, 2nd Edition is an authoritative, state-of-the-art reference that is ideal for graduate students, researchers, and practitioners in business and finance seeking to broaden their understanding of econometric time series models.




Table of Contents

Cover

Preface to the Second Edition

Preface to the First Edition

Notation

1 Classical Time Series Models and Financial Series

1.1 Stationary Processes

1.2 ARMA and ARIMA Models

1.3 Financial Series

1.4 Random Variance Models

1.5 Bibliographical Notes

1.6 Exercises

Part I: Univariate GARCH Models

2 GARCH(p, q) Processes

2.1 Definitions and Representations

2.2 Stationarity Study

2.3 ARCH(∞) Representation

2.4 Properties of the Marginal Distribution

2.5 Autocovariances of the Squares of a GARCH

2.6 Theoretical Predictions

2.7 Bibliographical Notes

2.8 Exercises

3 Mixing*

3.1 Markov Chains with Continuous State Space

3.2 Mixing Properties of GARCH Processes

3.3 Bibliographical Notes

3.4 Exercises

4 Alternative Models for the Conditional Variance

4.1 Stochastic Recurrence Equation (SRE)

4.2 Exponential GARCH Model

4.3 Log‐GARCH Model

4.4 Threshold GARCH Model

4.5 Asymmetric Power GARCH Model

4.6 Other Asymmetric GARCH Models

4.7 A GARCH Model with Contemporaneous Conditional Asymmetry

4.8 Empirical Comparisons of Asymmetric GARCH Formulations

4.9 Models Incorporating External Information

4.10 Models Based on the Score: GAS and Beta‐t‐(E)GARCH

4.11 GARCH‐type Models for Observations Other Than Returns

4.12 Complementary Bibliographical Notes

4.13 Exercises

Part II: Statistical Inference

5 Identification

5.1 Autocorrelation Check for White Noise

5.2 Identifying the ARMA Orders of an ARMA‐GARCH

5.3 Identifying the GARCH Orders of an ARMA‐GARCH Model

5.4 Lagrange Multiplier Test for Conditional Homoscedasticity

5.5 Application to Real Series

5.6 Bibliographical Notes

5.7 Exercises

6 Estimating ARCH Models by Least Squares

6.1 Estimation of ARCH(q) Models by Ordinary Least Squares

6.2 Estimation of ARCH(q) Models by Feasible Generalised Least Squares

6.3 Estimation by Constrained Ordinary Least Squares

6.4 Bibliographical Notes

6.5 Exercises

7 Estimating GARCH Models by Quasi‐Maximum Likelihood

7.1 Conditional Quasi‐Likelihood

7.2 Estimation of ARMA–GARCH Models by Quasi‐Maximum Likelihood

7.3 Application to Real Data

7.4 Proofs of the Asymptotic Results*

7.5 Bibliographical Notes

7.6 Exercises

8 Tests Based on the Likelihood

8.1 Test of the Second‐Order Stationarity Assumption

8.2 Asymptotic Distribution of the QML When θ_0 is at the Boundary

8.3 Significance of the GARCH Coefficients

8.4 Diagnostic Checking with Portmanteau Tests

8.5 Application: Is the GARCH(1,1) Model Overrepresented?

8.6 Proofs of the Main Results

8.7 Bibliographical Notes

8.8 Exercises

9 Optimal Inference and Alternatives to the QMLE*

9.1 Maximum Likelihood Estimator

9.2 Maximum Likelihood Estimator with Mis‐specified Density

9.3 Alternative Estimation Methods

9.4 Bibliographical Notes

9.5 Exercises

Part III: Extensions and Applications

10 Multivariate GARCH Processes

10.1 Multivariate Stationary Processes

10.2 Multivariate GARCH Models

10.3 Stationarity

10.4 QML Estimation of General MGARCH

10.5 Estimation of the CCC Model

10.6 Looking for Numerically Feasible Estimation Methods

10.7 Proofs of the Asymptotic Results

10.8 Bibliographical Notes

10.9 Exercises

11 Financial Applications

11.1 Relation Between GARCH and Continuous‐Time Models

11.2 Option Pricing

11.3 Value at Risk and Other Risk Measures

11.4 Bibliographical Notes

11.5 Exercises

12 Parameter‐Driven Volatility Models

12.1 Stochastic Volatility Models

12.2 Markov Switching Volatility Models

12.3 Bibliographical Notes

12.4 Exercises

Appendix A: Ergodicity, Martingales, Mixing

A.1. Ergodicity

A.2. Martingale Increments

A.3 Mixing

Appendix B: Autocorrelation and Partial Autocorrelation

B.1. Partial Autocorrelation

B.2. Generalised Bartlett Formula for Non‐linear Processes

Appendix C: Markov Chains on Countable State Spaces

C.1. Definition of a Markov Chain

C.2. Transition Probabilities

C.3. Classification of States

C.4. Invariant Probability and Stationarity

C.5. Ergodic Results

C.6. Limit Distributions

C.7. Examples

Appendix D: The Kalman Filter

D.1. General Form of the Kalman Filter

D.2. Prediction and Smoothing with the Kalman Filter

D.3. Kalman Filter in the Stationary Case

D.4. Statistical Inference with the Kalman Filter

Appendix E: Solutions to the Exercises

Chapter 1

Chapter 2

Chapter 3

Chapter 4

Chapter 5

Chapter 6

Chapter 7

Chapter 8

Chapter 9

Chapter 10

Chapter 11

Chapter 12

References

Index

End User License Agreement

List of Tables

Chapter 1

Sample autocorrelations of returns ε_t (CAC 40 index, 2 Janua...

Chapter 2

Estimations of γ obtained from 1000 simulations of size 1000 ...

Chapter 4

Table 4.1 Empirical autocorrelations (CAC 40 series, period 1988–1998)....

Table 4.2 Empirical autocorrelations (CAC 40, for the period 1988–1998)...

Table 4.3 Portmanteau test of the white noise hypothesis for the CAC 40...

Table 4.4 Likelihoods of the different models for the CAC 40 series.

Table 4.5 Means of the squared differences between the estimated volati...

Table 4.6 Variance (×10⁴) and kurtosis of the CAC 40 index and of...

Table 4.7 Number of CAC returns outside the limits (THEO being the ...

Table 4.8 Means of the squares of the differences between the estimated...

Table 4.9 SAS program for the fitting of a TGARCH(1, 1) model with inte...

Chapter 5

Portmanteau tests on a simulation of size n = 5000 of...

As Table  5.1, for tests based on partial autocorrelations instead o...

White noise portmanteau tests on a simulation of size n = 100 ...

Portmanteau tests on the squared CAC 40 returns (2 March 1990 to 29 ...

LM tests for conditional homoscedasticity of the CAC 40 and FTSE 100...

Portmanteau tests on the CAC 40 (2 March 1990 to 29 December 2006). ...

Portmanteau tests on the FTSE 100 (3 April 1984 to 3 April 2007).

Studentised statistics for the corner method for the CAC 40 series a...

Studentised statistics for the corner method for the FTSE 100 series...

Studentised statistics for the corner method for the squared CAC 40 ...

Studentised statistics for the corner method for the squared FTSE 10...

Chapter 6

Table 6.1 Strict stationarity and moment conditions for the ARCH(1) mod...

Table 6.2 Asymptotic variance of the OLS estimator of an ARCH(1) model ...

Chapter 7

Asymptotic variance for the QMLE of an ARCH(1) process with η_t ∼ 𝒩...

Comparison of the empirical and theoretical asymptotic variances, fo...

Matrices Σ of asymptotic variance of the estimator of (a_0, α_0...

GARCH(1, 1) models estimated by QML for 11 indices.

Chapter 8

Test of the infinite variance assumption for 11 stock market returns...

Asymptotic critical value c_{q,α}, at level α, of the...

Exact asymptotic level (%) of erroneous Wald tests, of rejection reg...

Portmanteau test p-values for adequacy of the ARCH(5) and GARCH(1,1)...

p-values for tests of the null of a GARCH(1,1) model against the GA...

Chapter 9

Table 9.1 Asymptotic relative efficiency (ARE) of the MLE with respect ...

Table 9.2 QMLE and efficient estimator, on N = 1000 ...

Table 9.3 Identifiability constraint under which is consistent.

Table 9.4 Choice of h as function of the prediction problem.

Table 9.5 Asymptotic relative efficiency of the MLE with respect to the...

Chapter 10

Table 10.1 Seconds of CPU time for computing the VTE and QMLE (average ...

Table 10.2 Computation time (CPU time in seconds) and relative efficien...

Chapter 11

Table 11.1 Comparison of the four VaR estimation methods for the CAC 40...

Number of parameters as a function of m.

List of Illustrations

Chapter 1

Figure 1.1 CAC 40 index for the period from 1 March 1990 to 15 October ...

Figure 1.2 CAC 40 returns (2 March 1990 to 15 October 2008). 19 August ...

Figure 1.3 Returns of the CAC 40 (2 January 2008 to 15 October 2008).

Figure 1.4 Sample autocorrelations of (a) returns and (b) squared retur...

Figure 1.5 Kernel estimator of the CAC 40 returns density (solid line) ...

Figure 1.6 Sample autocorrelations (h = 1, …, 36

Chapter 2

Figure 2.1 Simulation of size 500 of the ARCH(1) process with ω = 1 ...

Figure 2.2 Simulation of size 500 of the ARCH(1) process with ...

Figure 2.3 Simulation of size 500 of the ARCH(1) process with ω = 1 ...

Figure 2.4 Simulation of size 200 of the ARCH(1) process with ω = 1 ...

Figure 2.5 Observations 100–140 of Figure 2.4.

Figure 2.6 Simulation of size 500 of the GARCH(1, 1) process with ω = ...

Figure 2.7 Simulation of size 500 of the GARCH(1, 1) process with ω = ...

Figure 2.8 Stationarity regions for the GARCH(1, 1) model when : 1, ...

Figure 2.9 Stationarity regions for the ARCH(2) model: 1, second‐order ...

Figure 2.10 Regions of moments existence for the GARCH(1, 1) model: 1, ...

Figure 2.11 Autocorrelation function (a) and partial autocorrelation fu...

Figure 2.12 Autocorrelation function (a) and partial autocorrelation fun...

Figure 2.13 Prediction intervals at horizon 1, at 95%, for the strong ...

Figure 2.14 Prediction intervals at horizon 1, at 95%, for the GARCH(1, ...

Figure 2.15 Prediction intervals at horizon 1, at 95%, for the...

Figure 2.16 Prediction intervals at horizon 1, at 95%, for the GARCH(1, ...

Chapter 4

Figure 4.1 Volatility (in full line) and volatility estimates (...

Figure 4.2 Theoretical autocorrelation function of the squares of a Log...

Figure 4.3 The set of the Extended Log‐GARCH volatilities contains the ...

Figure 4.4 News impact curves for the ARCH(1) model, (dashed line),...

Figure 4.5 Stationarity regions for the TARCH(1) model with : 1, sec...

Figure 4.6 Stationarity regions for the APARCH(1,0) model with : 1, ...

Figure 4.7 The first 500 values of the CAC 40 index (a) and of the squa...

Figure 4.8 Correlograms of the CAC 40 index (a) and the squared index (...

Figure 4.9 Correlogram of the absolute CAC 40 returns (a) and cross...

Figure 4.10 From left to right and top to bottom, graph of the first 50...

Figure 4.11 Returns r_t of the CAC 40 index (solid lines) and confi...

Figure 4.12 Comparison of the estimated volatilities of the EGARCH and ...

Figure 4.13 Correlogram h ↦ ρ(∣r_t∣, ∣r_{t−h}∣) ...

Chapter 5

Figure 5.1 SACR of exchange rates against the euro, standard significan...

Figure 5.2 SACRs of a simulation of a strong white noise (a) and of the...

Figure 5.3 Sample autocorrelations of a simulation of size n = 5000 ...

Figure 5.4 Sample partial autocorrelations of a simulation of size n = ...

Figure 5.5 Autocorrelations (a) and partial autocorrelations (b) for mo...

Figure 5.6 SACRs (a) and SPACs (b) of a simulation of size n = 1000 ...

Figure 5.7 Correlograms of returns and squared returns of the CAC 40 in...

Chapter 7

Figure 7.1 GARCH(1, 1): zones of strict and second‐order stationa...

Figure 7.2 Box‐plots of the QML estimation errors for the parameters ω ...

Chapter 8

Figure 8.1 ARCH(1) model with θ_0 = (ω_0, 0) a...

Figure 8.2 Concentrated log‐likelihood (solid line) for an ARCH(1) ...

Figure 8.3 Comparison between a kernel density estimator of the Wald st...

Figure 8.4 Comparison of the observed powers of the Wald test (thick li...

Figure 8.5 Local asymptotic power of the Wald test (solid line) and of ...

Chapter 9

Figure 9.1 Density (9.9) for different values of a > 0 ...

Figure 9.2 Local asymptotic power of the optimal Wald test (solid l...

Chapter 11

Figure 11.1 (a) VaR is the (1 − α)‐quantile of t...

Figure 11.2 Effective losses of the CAC 40 (solid lines) and estimated ...

Chapter 12

Figure 12.1 Simulation of length 100 of a three‐regime HMM. The full li...

Figure 12.2 CAC 40 and SP 500 from March 1, 1990 to December 29, 2006, ...


GARCH Models

Structure, Statistical Inference and Financial Applications

Second Edition

Christian Francq

CREST and University of Lille, France

 

Jean-Michel Zakoian

CREST and University of Lille, France

Copyright

This edition first published 2019

© 2019 John Wiley & Sons Ltd

Edition History

John Wiley & Sons (1e, 2010)

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Christian Francq and Jean‐Michel Zakoian to be identified as the authors of this work has been asserted in accordance with law.

Registered Offices

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office

9600 Garsington Road, Oxford, OX4 2DQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty

While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data

Names: Francq, Christian, author. | Zakoian, Jean-Michel, author.

Title: GARCH models : structure, statistical inference and financial applications / Christian Francq, Jean-Michel Zakoian.

Other titles: Modèles GARCH. English

Description: 2 edition. | Hoboken, NJ : John Wiley & Sons, 2019. | Includes bibliographical references and index.

Identifiers: LCCN 2018038962 (print) | LCCN 2019003658 (ebook) | ISBN 9781119313564 (Adobe PDF) | ISBN 9781119313489 (ePub) | ISBN 9781119313571 (hardcover)

Subjects: LCSH: Finance-Mathematical models. | Investments-Mathematical models.

Classification: LCC HG106 (ebook) | LCC HG106 .F7213 2019 (print) | DDC 332.01/5195-dc23

LC record available at https://lccn.loc.gov/2018038962

Cover Design: Wiley

Cover Images: © teekid/E+/Getty Images

Preface to the Second Edition

This edition contains a large number of additions and corrections. The analysis of GARCH models – and more generally volatility models – has undergone various new developments in recent years. There was a need to make the material more complete.

A brief summary of the added material in the second edition is:

A new chapter entitled “Parameter‐driven volatility models”. This chapter is divided into two sections entitled “Stochastic Volatility Models” and “Markov Switching Volatility Models”. Two new appendices on “Markov Chains on Countable State Spaces” and “The Kalman Filter” are provided.

A new chapter entitled “Alternative Models for the Conditional Variance”, replacing and completing the chapter “Asymmetries” of the first version. This chapter contains a new section on “Stochastic Recurrence Equations” and additional material on EGARCH, Log‐GARCH, GAS, MIDAS and intraday volatility models among others.

A more complete discussion of multivariate GARCH models in Chapter 10. In particular a new section on “Cholesky GARCH” has been added. More emphasis has been given to the inference of multivariate GARCH models, through two new sections entitled “QML Estimation of General MGARCH” and “Looking for Numerically Feasible Estimation Methods”.

The previous Appendix D entitled “Problems” has been removed, but a new set of corrected problems is available on the webpages of the authors.

An up‐to‐date list of references.

On the other hand, there was not enough space to keep Chapter 4 of the first edition, entitled “Temporal aggregation and weak GARCH models”.

The webpage http://christian.francq140.free.fr/Christian-Francq/book-GARCH.html features additional material (codes, data sets, and problems with corrections).

We are indebted to many readers who have used the book and made suggestions for improvements. In particular, we thank Francisco Blasques, Lajos Horváth, Hamdi Raïssi, Roch Roy, and Genaro Sucarrat. We are also indebted to Wiley for their support and assistance in preparing this edition.

Christian Francq

Jean‐Michel Zakoian

Palaiseau, France

September, 2018

Preface to the First Edition

Autoregressive conditionally heteroscedastic (ARCH) models were introduced by Engle in an article published in Econometrica in the early 1980s (Engle, 1982). The proposed application in that article focused on macroeconomic data and one could not imagine, at that time, that the main field of application for these models would be finance. Since the mid-1980s and the introduction of generalized ARCH (or GARCH) models, these models have become extremely popular among both academics and practitioners. GARCH models led to fundamental changes in the approaches used in finance, through an efficient modelling of the volatility (or variability) of the prices of financial assets. In 2003, the Nobel Prize for Economics was awarded jointly to Robert F. Engle and Clive W.J. Granger, Engle being cited ‘for methods of analyzing economic time series with time‐varying volatility (ARCH)’.

Since the late 1980s, numerous extensions of the initial ARCH models have been published (see Bollerslev, 2008, for a (tentatively) exhaustive list). The aim of the present volume is not to review all these models, but rather to provide a panorama, as wide as possible, of current research into the concepts and methods of this field. Along with their development in econometrics and finance journals, GARCH models and their extensions have given rise to new directions for research in probability and statistics. Numerous classes of nonlinear time series models have been suggested, but none of them has generated interest comparable to that in GARCH models. The interest of the academic world in these models is explained by the fact that they are simple enough to be usable in practice, but also rich in theoretical problems, many of them unsolved.

This book is intended primarily for master's students and junior researchers, in the hope of attracting them to research in applied mathematics, statistics or econometrics. For experienced researchers, this book offers a set of results and references allowing them to move towards one of the many topics discussed. Finally, this book is aimed at practitioners and users who may be looking for new methods, or may want to learn the mathematical foundations of known methods.

Some parts of the text have been written for readers who are familiar with probability theory and with time series techniques. To make this book as self‐contained as possible, we provide demonstrations of most theoretical results. On first reading, however, many demonstrations can be omitted. Those sections or chapters that are the most mathematically sophisticated and can be skipped without loss of continuity are marked with an asterisk. We have illustrated the main techniques with numerical examples, using real or simulated data. Program codes allowing the experiments to be reproduced are provided in the text and on the authors' web pages. In general, we have tried to maintain a balance between theory and applications.

Readers wishing to delve more deeply into the concepts introduced in this book will find a large collection of exercises along with their solutions. Some of these complement the proofs given in the text.

The book is organized as follows. Chapter 1 introduces the basics of stationary processes and ARMA modeling. The rest of the book is divided into three parts. Part I deals with the standard univariate GARCH model. The main probabilistic properties (existence of stationary solutions, representations, properties of autocorrelations) are presented in Chapter 2. Chapter 3 deals with complementary properties related to mixing, allowing us to characterize the decay of the time dependence. Chapter 4 is devoted to temporal aggregation: it studies the impact of the observation frequency on the properties of GARCH processes.

Part II is concerned with statistical inference. We begin in Chapter 5 by studying the problem of identifying an appropriate model a priori. Then we present different estimation methods, starting with the method of least squares in Chapter 6 which, limited to ARCH, offers the advantage of simplicity. The central part of the statistical study is Chapter 7, devoted to the quasi‐maximum likelihood method. For these models, testing the nullity of coefficients is not standard and is the subject of Chapter 8. Optimality issues are discussed in Chapter 9, as well as alternative estimators allowing some of the drawbacks of standard methods to be overcome.

Part III is devoted to extensions and applications of the standard model. In Chapter 10, models allowing us to incorporate asymmetric effects in the volatility are discussed. There is no natural extension of GARCH models for vector series, and many multivariate formulations are presented in Chapter 11. Without carrying out an exhaustive statistical study, we consider the estimation of a particular class of models which appears to be of interest for applications. Chapter 12 presents applications to finance. We first study the link between GARCH and diffusion processes, when the time step between two observations converges to zero. Two applications to finance are then presented: risk measurement and the pricing of derivatives.

Appendix A includes the probabilistic properties which are of most importance for the study of GARCH models. Appendix B contains results on autocorrelations and partial autocorrelations. Appendix C provides solutions to the end‐of‐chapter exercises. Finally, a set of problems and (in most cases) their solutions are provided in Appendix D.

Notation

General notation

‘is defined as’ (or, in Chapter 10)

Sets and spaces

ℕ, ℤ, ℚ, ℝ: positive integers, integers, rational numbers, real numbers
positive real line
‐dimensional Euclidean space
complement of the set
half‐closed interval

Matrices

‐dimensional identity matrix
the set of real matrices

Processes

iid: independent and identically distributed
iid (0,1): iid centred with unit variance
discrete‐time process
GARCH process
conditional variance or volatility
strong white noise with unit variance
kurtosis coefficient of
lag operator
sigma‐field generated by the past of

Functions

1 if, 0 otherwise (indicator)
integer part of
autocovariance and autocorrelation functions of
sample autocovariance and autocorrelation

Probability

1 Classical Time Series Models and Financial Series

The standard time series analysis rests on important concepts such as stationarity, autocorrelation, white noise, innovation, and on a central family of models, the autoregressive moving average (ARMA) models. We start by recalling their main properties and how they can be used. As we shall see, these concepts are insufficient for the analysis of financial time series. In particular, we shall introduce the concept of volatility, which is of crucial importance in finance.

In this chapter, we also present the main stylized facts (unpredictability of returns, volatility clustering and hence predictability of squared returns, leptokurticity of the marginal distributions, asymmetries, etc.) concerning financial series.

1.1 Stationary Processes

Stationarity plays a central part in time series analysis, because it replaces in a natural way the hypothesis of independent and identically distributed (iid) observations in standard statistics.

Consider a sequence of real random variables (Xt)t ∈ ℤ, defined on the same probability space. Such a sequence is called a time series, and is an example of a discrete‐time stochastic process.

We begin by introducing two standard notions of stationarity.

Definition 1.1 Strict stationarity

The process (X_t) is said to be strictly stationary if the vectors (X_1, …, X_k)′ and (X_{1+h}, …, X_{k+h})′ have the same joint distribution, for any k ∈ ℕ and any h ∈ ℤ.

The following notion may seem less demanding, because it only constrains the first two moments of the variables Xt, but contrary to strict stationarity, it requires the existence of such moments.

Definition 1.2 Second‐order stationarity

The process (Xt) is said to be second‐order stationary if

(i) EX_t² < ∞, for all t ∈ ℤ;

(ii) EX_t = m, for all t ∈ ℤ;

(iii) Cov(X_t, X_{t+h}) = γ_X(h), for all t, h ∈ ℤ.

The function γX(⋅) (ρX(⋅) ≔ γX(⋅)/γX(0)) is called the autocovariance function (autocorrelation function) of (Xt).

The simplest example of a second‐order stationary process is white noise. This process is particularly important because it allows more complex stationary processes to be constructed.

Definition 1.3 Weak white noise

The process (εt) is called weak white noise if, for some positive constant σ2:

(i) Eε_t = 0, for all t ∈ ℤ;

(ii) Eε_t² = σ², for all t ∈ ℤ;

(iii) Cov(ε_t, ε_{t+h}) = 0, for all t ∈ ℤ and all h ≠ 0.

Remark 1.1 Strong white noise

It should be noted that no independence assumption is made in the definition of weak white noise. The variables at different dates are only uncorrelated, and the distinction is particularly crucial for financial time series. It is sometimes necessary to replace hypothesis (iii) by the stronger hypothesis

(iii′) the variables εt and εt + h are independent and identically distributed.

The process (εt) is then said to be strong white noise.

Estimating Autocovariances

The classical time series analysis is centred on the second‐order structure of the processes. Gaussian stationary processes are completely characterized by their mean and their autocovariance function. For non‐Gaussian processes, the mean and autocovariance give a first idea of the temporal dependence structure. In practice, these moments are unknown and are estimated from a realisation of size n of the series, denoted X_1, …, X_n. This step is preliminary to any construction of an appropriate model. To estimate γ(h), we generally use the sample autocovariance defined, for 0 ≤ h < n, by

γ̂(h) = (1/n) Σ_{t=1}^{n−h} (X_t − X̄_n)(X_{t+h} − X̄_n) ≕ γ̂(−h),

where X̄_n = (1/n) Σ_{t=1}^{n} X_t denotes the sample mean. We similarly define the sample autocorrelation function by ρ̂(h) = γ̂(h)/γ̂(0) for ∣h∣ < n.

The previous estimators have finite‐sample bias but are asymptotically unbiased. There are other similar estimators of the autocovariance function with the same asymptotic properties (for instance, obtained by replacing 1/n by 1/(n − h)). However, the proposed estimator is to be preferred over others because the matrix (γ̂(i − j))_{1 ≤ i, j ≤ n} is positive semi‐definite (see Brockwell and Davis 1991, p. 221).

It is, of course, not recommended to use the sample autocovariances when h is close to n, because too few pairs (Xj, Xj + h) are available. Box, Jenkins, and Reinsel (1994, p. 32) suggest that useful estimates of the autocorrelations can only be made if, approximately, n > 50 and h ≤ n/4.
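As a quick illustration (a minimal R sketch, not part of the book's companion material), the estimators defined above can be coded directly and compared with the output of R's acf() function, which uses the same 1/n convention:

# sample autocovariance gamma_hat(h), 0 <= h < n, with the 1/n convention
sample.autocov <- function(x, h) {
  n <- length(x); xbar <- mean(x)
  sum((x[1:(n - h)] - xbar) * (x[(1 + h):n] - xbar)) / n
}
set.seed(1)
x <- arima.sim(list(ar = 0.5), n = 200)             # toy AR(1) series
gamma0 <- sample.autocov(x, 0)
rho.hat <- sapply(0:10, function(h) sample.autocov(x, h) / gamma0)
# should agree, up to numerical precision, with the built-in estimator
max(abs(rho.hat - drop(acf(x, lag.max = 10, plot = FALSE)$acf)))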

It is often of interest to know – for instance, in order to select an appropriate model – if some or all the sample autocovariances are significantly different from 0. It is then necessary to estimate the covariance structure of those sample autocovariances. We have the following result (see Brockwell and Davis 1991, pp. 222, 226).

Theorem 1.1 Bartlett's formulas for a strong linear process

Let (X_t) be a linear process satisfying

X_t = Σ_{j=−∞}^{+∞} φ_j ε_{t−j},  with  Σ_{j=−∞}^{+∞} ∣φ_j∣ < ∞,

where (ε_t) is a sequence of iid variables such that

Eε_t = 0,  Eε_t² = σ²,  Eε_t⁴ = κ_ε σ⁴ < ∞.

Appropriately normalized, the sample autocovariances and autocorrelations are asymptotically normal, with asymptotic variances given by the Bartlett formulas:

lim_{n→∞} n Cov(γ̂(h), γ̂(k)) = (κ_ε − 3) γ(h)γ(k) + Σ_{l=−∞}^{+∞} [γ(l)γ(l + k − h) + γ(l + k)γ(l − h)]   (1.1)

and

lim_{n→∞} n Cov(ρ̂(h), ρ̂(k)) = Σ_{l=−∞}^{+∞} [ρ(l + h) + ρ(l − h) − 2ρ(h)ρ(l)] [ρ(l + k) + ρ(l − k) − 2ρ(k)ρ(l)].   (1.2)

Formula (1.2) still holds under the assumptions Eε_t² < ∞ and Σ_j φ_j² ∣j∣ < ∞.

In particular, if X_t = ε_t and Eε_t⁴ < ∞, we have

lim_{n→∞} n Var ρ̂(h) = 1, for all h ≠ 0,

so that the sample autocorrelations of a strong white noise are approximately 𝒩(0, 1/n) distributed for large n.

The assumptions of this theorem are demanding, because they require a strong white noise (ε_t). An extension allowing the strong linearity assumption to be relaxed is proposed in Appendix B.2. For many non‐linear processes, in particular the ARCH processes studied in this book, the asymptotic covariance of the sample autocovariances can be very different from Eq. (1.1) (Exercises 1.6 and 1.8). Using the standard Bartlett formula can lead to specification errors (see Chapter 5).
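To illustrate this warning, the following R sketch (an illustrative simulation with arbitrary ARCH(1) parameters ω = 0.1 and α = 0.5, not taken from the book) compares the scaled variance of the lag-one sample autocorrelation for a strong white noise, where the Bartlett value is 1, with that obtained for an ARCH(1) weak white noise:

set.seed(123)
n <- 1000; nrep <- 500
rho1 <- function(x) acf(x, lag.max = 1, plot = FALSE)$acf[2]
# strong (iid) white noise
r.iid <- replicate(nrep, rho1(rnorm(n)))
# weak white noise: ARCH(1) with omega = 0.1, alpha = 0.5 (hypothetical values)
sim.arch1 <- function(n, omega = 0.1, alpha = 0.5) {
  eps <- numeric(n)
  eps[1] <- rnorm(1, sd = sqrt(omega / (1 - alpha)))   # rough initialization
  for (t in 2:n) eps[t] <- sqrt(omega + alpha * eps[t - 1]^2) * rnorm(1)
  eps
}
r.arch <- replicate(nrep, rho1(sim.arch1(n)))
n * var(r.iid)    # close to the Bartlett value 1
n * var(r.arch)   # markedly larger: the standard significance bands are too narrow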

1.2 ARMA and ARIMA Models

The aim of time series analysis is to construct a model for the underlying stochastic process. This model is then used for analysing the causal structure of the process or to obtain optimal predictions.

The class of ARMA models is the most widely used for the prediction of second‐order stationary processes. These models can be viewed as a natural consequence of a fundamental result due to Wold (1938), which can be stated as follows: any centred, second‐order stationary, and ‘purely non‐deterministic’ process admits an infinite moving‐average representation of the form

X_t = ε_t + Σ_{i=1}^{∞} c_i ε_{t−i},   (1.3)

where (ε_t) is the linear innovation process of (X_t), that is,

ε_t = X_t − E(X_t ∣ ℋ_X(t − 1)),   (1.4)

where ℋ_X(t − 1) denotes the Hilbert space generated by the random variables X_{t−1}, X_{t−2}, …, and E(X_t ∣ ℋ_X(t − 1)) denotes the orthogonal projection of X_t onto ℋ_X(t − 1). The sequence of coefficients (c_i) is such that Σ_i c_i² < ∞. Note that (ε_t) is a weak white noise.

Truncating the infinite sum in Eq. (1.3), we obtain the process

X_t(q) = ε_t + Σ_{i=1}^{q} c_i ε_{t−i},

called a moving average process of order q, or MA(q). We have

E(X_t − X_t(q))² = Eε_t² Σ_{i>q} c_i² → 0 as q → ∞.

It follows that the set of all finite‐order moving averages is dense in the set of second‐order stationary and purely non‐deterministic processes. The class of ARMA models is often preferred to the MA models for parsimony reasons, because they generally require fewer parameters.
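As a small numerical check (an assumed AR(1) example, not from the text): for X_t = 0.5 X_{t−1} + ε_t the Wold coefficients are c_i = 0.5^i, and the mean-square truncation error of the MA(q) approximation decreases geometrically:

# mean-square error of the MA(q) truncation of an AR(1) with coefficient a = 0.5
# ||X_t - X_t(q)||^2 = Var(eps) * sum_{i > q} a^(2i) = a^(2(q + 1)) / (1 - a^2), taking Var(eps) = 1
a <- 0.5
q <- 0:10
trunc.error <- a^(2 * (q + 1)) / (1 - a^2)
round(trunc.error, 6)   # tends to 0 geometrically as q grows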

Definition 1.4 ARMA(p, q) process

A second‐order stationary process (X_t) is called ARMA(p, q), where p and q are integers, if there exist real coefficients c, a_1, …, a_p, b_1, …, b_q such that, for all t ∈ ℤ,

X_t = c + Σ_{i=1}^{p} a_i X_{t−i} + ε_t + Σ_{j=1}^{q} b_j ε_{t−j},   (1.5)

where (ε_t) is the linear innovation process of (X_t).

This definition entails constraints on the zeros of the autoregressive and moving average polynomials, a(z) = 1 − a_1 z − ⋯ − a_p z^p and b(z) = 1 + b_1 z + ⋯ + b_q z^q (Exercise 1.9). The main attraction of this model, and the representations obtained by successively inverting the polynomials a(⋅) and b(⋅), is that it provides a framework for deriving the optimal linear predictions of the process, in a much simpler way than by only assuming second‐order stationarity.

Many economic series display trends, making the stationarity assumption unrealistic. Such trends often vanish when the series is differentiated, once or several times. Let ΔX_t = X_t − X_{t−1} denote the first‐difference series, and let Δ^d X_t = Δ(Δ^{d−1} X_t) (with Δ⁰X_t = X_t) denote the differences of order d.

Definition 1.5 ARIMA(p, d, q) process

Let d be a positive integer. The process (X_t) is said to be an ARIMA(p, d, q) process if, for k = 0, …, d − 1, the processes (Δ^k X_t) are not second‐order stationary, and (Δ^d X_t) is an ARMA(p, q) process.

The simplest ARIMA process is the ARIMA(0, 1, 0), also called the random walk, satisfying

X_t = X_{t−1} + ε_t,

where ε_t is a weak white noise.

For statistical convenience, ARMA (and ARIMA) models are generally used under stronger assumptions on the noise than that of weak white noise. Strong ARMA refers to the ARMA model of Definition 1.4 when εt is assumed to be a strong white noise. This additional assumption allows us to use convenient statistical tools developed in this framework, but considerably reduces the generality of the ARMA class. Indeed, assuming a strong ARMA is tantamount to assuming that (i) the optimal predictions of the process are linear ((εt) being the strong innovation of (Xt)) and (ii) the amplitudes of the prediction intervals depend on the horizon but not on the observations. We shall see in the next section how restrictive this assumption can be, in particular for financial time series modelling.

The orders (p, q) of an ARMA process are fully characterized through its autocorrelation function (see Brockwell and Davis 1991, pp. 89–90, for a proof).

Theorem 1.2 Characterisation of an ARMA process

Let (X_t) denote a second‐order stationary process. We have

γ_X(h) = Σ_{i=1}^{p} a_i γ_X(h − i), for all ∣h∣ > q and for some real numbers a_1, …, a_p,

if and only if (X_t) is an ARMA(p, q) process.
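Theorem 1.2 can be illustrated by simulation (a minimal R sketch with assumed ARMA(1,1) parameters 0.7 and 0.4): for such a process the autocorrelations satisfy ρ(h) = 0.7 ρ(h − 1) for all h ≥ 2, and the sample ratios reflect this:

set.seed(7)
x <- arima.sim(list(ar = 0.7, ma = 0.4), n = 20000)
rho <- drop(acf(x, lag.max = 6, plot = FALSE)$acf)   # rho[1] corresponds to lag 0
cbind(lag = 2:5, ratio = rho[3:6] / rho[2:5])        # each ratio should be close to 0.7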

To close this section, we summarise the method for time series analysis proposed in the famous book by Box and Jenkins (1970). To simplify presentation, we do not consider seasonal series, for which SARIMA models can be considered.

Box–Jenkins Methodology

The aim of this methodology is to find the most appropriate ARIMA (p, d, q) model and to use it for forecasting. It uses an iterative six‐stage scheme:

(i) A priori identification of the differentiation order d (or choice of another transformation);

(ii) A priori identification of the orders p and q;

(iii) Estimation of the parameters (a_1, …, a_p, b_1, …, b_q and σ² = Var ε_t);

(iv) Validation;

(v) Choice of a model;

(vi) Prediction.

Although many unit root tests have been introduced in the last 30 years, step (i) is still essentially based on examining the graph of the series. If the data exhibit apparent deviations from stationarity, it will not be appropriate to choose d = 0. For instance, if the amplitude of the variations tends to increase, the assumption of constant variance can be questioned. This may be an indication that the underlying process is heteroscedastic. If a regular linear trend is observed, positive or negative, it can be assumed that the underlying process is such that EX_t = at + b with a ≠ 0. If this assumption is correct, the first‐difference series ΔX_t = X_t − X_{t−1} should not show any trend (EΔX_t = a) and could be stationary. If no other sign of non‐stationarity can be detected (such as heteroscedasticity), the choice d = 1 seems suitable. The random walk (whose sample paths may resemble the graph of Figure 1.1), is another example where d = 1 is required, although this process does not have any deterministic trend.

Step (ii) is more problematic. The primary tool is the sample autocorrelation function. If, for instance, we observe that ρ̂(1) is far away from 0 but that, for any h > 1, ρ̂(h) is close to 0, then, from Theorem 1.1, it is plausible that ρ(1) ≠ 0 and ρ(h) = 0 for all h > 1. In this case, Theorem 1.2 entails that X_t is an MA(1) process. To identify AR processes, the partial autocorrelation function (see Appendix B.1) plays an analogous role. For mixed models (that is, ARMA (p, q) with pq ≠ 0), more sophisticated statistics can be used, as will be seen in Chapter 5. Step (ii) often results in the selection of several candidates (p1, q1), …, (pk, qk) for the ARMA orders. These k models are estimated in step (iii), using, for instance, the least‐squares method. The aim of step (iv) is to gauge if the estimated models are reasonably compatible with the data. An important part of the procedure is to examine the residuals which, if the model is satisfactory, should have the appearance of white noise. The correlograms are examined and portmanteau tests are used to decide if the residuals are sufficiently close to white noise. These tools will be described in detail in Chapter 5. When the tests on the residuals fail to reject the model, the significance of the estimated coefficients is studied. Testing the nullity of coefficients sometimes allows the model to be simplified. This step may lead to rejection of all the estimated models, or to consideration of other models, in which case we are brought back to step (i) or (ii). If several models pass the validation step (iv), selection criteria can be used, the most popular being the Akaike (AIC) and Bayesian (BIC) information criteria. Complementing these criteria, the predictive properties of the models can be considered: different models can lead to almost equivalent predictive formulas. The parsimony principle would thus lead us to choose the simplest model, the one with the fewest parameters. Other considerations can also come into play, for instance, models frequently involve a lagged variable at the order 12 for monthly data, but this would seem less natural for weekly data. If the model is appropriate, step (vi) allows us to easily compute the best linear predictions at horizon h = 1, 2, …. Recall that these linear predictions do not necessarily lead to minimal quadratic errors. Non‐linear models, or non‐parametric methods, sometimes produce more accurate predictions. Finally, the interval predictions obtained in step (vi) of the Box–Jenkins methodology are based on Gaussian assumptions. Their magnitude does not depend on the data, which for financial series is not appropriate, as we shall see.
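The six stages translate into a handful of standard R commands. The sketch below is only schematic (the simulated series and the candidate orders are arbitrary choices, not the authors' code):

set.seed(1)
p.series <- 100 * exp(cumsum(rnorm(500, sd = 0.01)))   # toy price-like series
# (i)-(ii) a priori identification: transform/difference, then inspect ACF and PACF
dx <- diff(log(p.series))
acf(dx); pacf(dx)
# (iii) estimation of a few candidate models
fit1 <- arima(dx, order = c(1, 0, 0))
fit2 <- arima(dx, order = c(0, 0, 1))
# (iv) validation: residuals should behave like white noise
Box.test(residuals(fit1), lag = 12, type = "Ljung-Box", fitdf = 1)
# (v) model choice via information criteria
AIC(fit1, fit2)
# (vi) prediction at horizons 1, ..., 5 (Gaussian prediction intervals via +/- 1.96 * se)
predict(fit1, n.ahead = 5)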

Figure 1.1 CAC 40 index for the period from 1 March 1990 to 15 October 2008 (4702 observations).

1.3 Financial Series

Modelling financial time series is a complex problem. This complexity is not only due to the variety of the series in use (stocks, exchange rates, interest rates, etc.), to the importance of the frequency of observation (second, minute, hour, day, etc.), or to the availability of very large data sets. It is mainly due to the existence of statistical regularities (stylised facts) which are common to a large number of financial series and are difficult to reproduce artificially using stochastic models.

Most of these stylised facts were put forward in a paper by Mandelbrot (1963). Since then, they have been documented, and completed, by many empirical studies. They can be observed more or less clearly depending on the nature of the series and its frequency. The properties that we now present are mainly concerned with daily stock prices.

Let p_t denote the price of an asset at time t and let ε_t = log(p_t/p_{t−1}) be the continuously compounded or log return (also simply called the return). The series (ε_t) is often close to the series of relative price variations r_t = (p_t − p_{t−1})/p_{t−1}, since ε_t = log(1 + r_t). In contrast to the prices, the returns or relative price variations do not depend on monetary units, which facilitates comparisons between assets. The following properties have been amply commented upon in the financial literature.

(i) Non‐stationarity of price series. Sample paths of prices are generally close to a random walk without intercept (see the CAC index series displayed in Figure 1.1). On the other hand, sample paths of returns are generally compatible with the second‐order stationarity assumption. For instance, Figures 1.2 and 1.3 show that the returns of the CAC index oscillate around zero. The oscillations vary a great deal in magnitude, but are almost constant on average over long sub‐periods. The extreme volatility of prices in the last period, induced by the financial crisis of 2008, is worth noting.

(ii) Absence of autocorrelation for the price variations. The series of price variations generally displays small autocorrelations, making it close to a white noise. This is illustrated for the CAC in Figure 1.4a. The classical significance bands are used here, as an approximation, but we shall see in Chapter 5 that they must be corrected when the noise is not independent. Note that for intraday series, with very small time intervals between observations (measured in minutes or seconds), significant autocorrelations can be observed due to the so‐called microstructure effects.

(iii) Autocorrelations of the squared price returns. Squared returns (ε_t²) or absolute returns (∣ε_t∣) are generally strongly autocorrelated (see Figure 1.4b). This property is not incompatible with the white noise assumption for the returns, but shows that the white noise is not strong.

(iv) Volatility clustering. Large absolute returns ∣ε_t∣ tend to appear in clusters. This property is generally visible on the sample paths (as in Figure 1.3). Turbulent (high‐volatility) sub‐periods are followed by quiet (low‐volatility) periods. These sub‐periods are recurrent but do not appear in a periodic way (which might contradict the stationarity assumption). In other words, volatility clustering is not incompatible with a homoscedastic (i.e. with a constant variance) marginal distribution for the returns.

(v) Fat‐tailed distributions. When the empirical distribution of daily returns is drawn, one can generally observe that it does not resemble a Gaussian distribution. Classical tests typically lead to rejection of the normality assumption at any reasonable level. More precisely, the densities have fat tails (decreasing to zero more slowly than exp(−x²/2)) and are sharply peaked at zero: they are called leptokurtic. A measure of the leptokurticity is the kurtosis coefficient, defined as the ratio of the sample fourth‐order moment to the squared sample variance. Asymptotically equal to 3 for Gaussian iid observations, this coefficient is much greater than 3 for returns series. When the time interval over which the returns are computed increases, leptokurticity tends to vanish and the empirical distributions get closer to a Gaussian. Monthly returns, for instance, defined as the sum of daily returns over the month, have a distribution that is much closer to the normal than daily returns. Figure 1.5 compares a kernel estimator of the density of the CAC returns with a Gaussian density. The peak around zero appears clearly, but the thickness of the tails is more difficult to visualise.

(vi) Leverage effects. The so‐called leverage effect was noted by Black (1976), and involves an asymmetry of the impact of past positive and negative values on the current volatility. Negative returns (corresponding to price decreases) tend to increase volatility by a larger amount than positive returns (price increases) of the same magnitude. Empirically, a positive correlation is often detected between ε_t^+ and ∣ε_{t+h}∣ (a price increase should entail future volatility increases) but, as shown in Table 1.1, this correlation is generally less than that between −ε_t^− and ∣ε_{t+h}∣.

(vii) Seasonality. Calendar effects are also worth mentioning. The day of the week, the proximity of holidays, among other seasonalities, may have significant effects on returns. Following a period of market closure, volatility tends to increase, reflecting the information accumulated during this break. However, it can be observed that the increase is less than if the information had accumulated at constant speed. Let us also mention that the seasonal effect is also very present for intraday series.

Figure 1.2 CAC 40 returns (2 March 1990 to 15 October 2008). 19 August 1991, Soviet Putsch attempt; 11 September 2001, fall of the Twin Towers; 21 January 2008, effect of the subprime mortgage crisis; 6 October 2008, effect of the financial crisis.

Figure 1.3 Returns of the CAC 40 (2 January 2008 to 15 October 2008).

Figure 1.4 Sample autocorrelations of (a) returns and (b) squared returns of the CAC 40 (2 January 2008 to 15 October 2008).

Figure 1.5 Kernel estimator of the CAC 40 returns density (solid line) and density of a Gaussian with mean and variance equal to the sample mean and variance of the returns (dotted line).

Table 1.1 Sample autocorrelations of returns ε_t (CAC 40 index, 2 January 2008 to 15 October 2008), of absolute returns ∣ε_t∣, and sample correlations between ε_t^+ and ∣ε_{t+h}∣ and between −ε_t^− and ∣ε_{t+h}∣.

h                            1      2      3      4      5      6      7
ρ̂_ε(h)                      0.012  0.014  0.047  0.025  0.043  0.023  0.014
ρ̂_∣ε∣(h)                    0.175  0.229  0.235  0.200  0.218  0.212  0.203
Corr(ε_t^+, ∣ε_{t+h}∣)       0.038  0.059  0.051  0.055  0.059  0.109  0.061
Corr(−ε_t^−, ∣ε_{t+h}∣)      0.160  0.200  0.215  0.173  0.190  0.136  0.173

We use here the notation ε_t^+ = max(ε_t, 0) and ε_t^− = min(ε_t, 0).
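For a generic return series, the quantities reported in Table 1.1 can be reproduced with a few lines of R. This is an illustrative sketch (the function name lev.cor and the toy data are ours; the exact handling of the CAC 40 data is not reproduced here):

# correlations of Table 1.1 for a return series eps (up to end effects,
# cor() over lagged pairs approximates the sample autocorrelations used in the text)
lev.cor <- function(eps, hmax = 7) {
  n <- length(eps)
  epsp <- pmax(eps, 0)                   # eps_t^+ = max(eps_t, 0)
  epsm <- pmin(eps, 0)                   # eps_t^- = min(eps_t, 0)
  out <- sapply(1:hmax, function(h) c(
    ret = cor(eps[1:(n - h)],      eps[(1 + h):n]),
    abs = cor(abs(eps)[1:(n - h)], abs(eps)[(1 + h):n]),
    pos = cor(epsp[1:(n - h)],     abs(eps)[(1 + h):n]),
    neg = cor(-epsm[1:(n - h)],    abs(eps)[(1 + h):n])))
  colnames(out) <- paste0("h=", 1:hmax)
  out
}
eps <- diff(log(100 * exp(cumsum(rnorm(500, sd = 0.01)))))  # toy returns; replace by real data
round(lev.cor(eps), 3)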

1.4 Random Variance Models

The previous properties illustrate the difficulty of financial series modelling. Any satisfactory statistical model for daily returns must be able to capture the main stylised facts described in the previous section. Of particular importance are the leptokurticity, the unpredictability of returns, and the existence of positive autocorrelations in the squared and absolute returns. Classical formulations (such as ARMA models) centred on the second‐order structure are inappropriate. Indeed, the second‐order structure of most financial time series is close to that of white noise.

The fact that large absolute returns tend to be followed by large absolute returns (whatever the sign of the price variations) is hardly compatible with the assumption of constant conditional variance. This phenomenon is called conditional heteroscedasticity:

Var(ε_t ∣ ε_{t−1}, ε_{t−2}, …) ≢ constant.

Conditional heteroscedasticity is perfectly compatible with stationarity (in the strict and second‐order senses), just as the existence of a non‐constant conditional mean is compatible with stationarity. The GARCH processes studied in this book will amply illustrate this point.

The models introduced in the econometric literature to account for the very specific nature of financial series (price variations or log‐returns, interest rates, etc.) are generally written in the multiplicative form

ε_t = σ_t η_t,   (1.6)

where (ηt) and (σt) are real processes such that:

(i) σ_t is measurable with respect to a σ‐field, denoted ℱ_{t−1};

(ii) (η_t) is an iid centred process with unit variance, η_t being independent of ℱ_{t−1} and of (σ_u; u < t);

(iii) σ_t > 0.

This formulation implies that the sign of the current price variation (that is, the sign of ε_t) is that of η_t, and is independent of past price variations. Moreover, if the first two conditional moments of ε_t exist, they are given by

E(ε_t ∣ ℱ_{t−1}) = 0,  E(ε_t² ∣ ℱ_{t−1}) = σ_t².

The random variable σ_t is called the volatility of ε_t.

It may also be noted that (under existence assumptions)

Eε_t = Eσ_t Eη_t = 0

and

Cov(ε_t, ε_{t+h}) = 0, for all h ≠ 0,

which makes (ε_t) a weak white noise. The series of squares, on the other hand, generally have non‐zero autocovariances: (ε_t) is thus not a strong white noise.

The kurtosis coefficient of ε_t, if it exists, is related to that of η_t, denoted κ_η, by

κ_ε = κ_η [1 + Var(σ_t²)/(Eσ_t²)²].   (1.7)

This formula shows that the leptokurticity of financial time series can be taken into account in two different ways: either by using a leptokurtic distribution for the iid sequence (η_t), or by specifying a process (σ_t²) with a great variability.
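Formula (1.7) is easy to check by simulation. The sketch below (illustrative only; the lognormal volatility and its parameters are arbitrary choices) draws a volatility independent of a Gaussian iid sequence and compares the empirical kurtosis of ε_t = σ_t η_t with the value predicted by (1.7):

set.seed(42)
n <- 1e6
eta   <- rnorm(n)                        # iid(0,1), kurtosis kappa_eta = 3
sigma <- exp(rnorm(n, sd = 0.5))         # positive volatility, independent of eta (toy choice)
eps   <- sigma * eta
kurt  <- function(x) mean(x^4) / mean(x^2)^2   # kurtosis via raw moments (mean of eps is ~0)
c(empirical = kurt(eps),
  formula   = 3 * (1 + var(sigma^2) / mean(sigma^2)^2))   # kappa_eta * (1 + Var(sigma^2)/(E sigma^2)^2)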

Different classes of models can be distinguished depending on the specification adopted for σ_t:

(i) Conditionally heteroscedastic (or GARCH‐type) processes, for which ℱ_{t−1} = σ(ε_s; s < t) is the σ‐field generated by the past of ε_t. The volatility is here a deterministic function of the past of ε_t. Processes of this class differ by the choice of a specification for this function. The standard GARCH models are characterised by a volatility specified as a linear function of the past values of ε_t². They will be studied in detail in Chapter 2.

(ii) Stochastic volatility processes, for which ℱ_{t−1} is the σ‐field generated by {v_t, v_{t−1}, …}, where (v_t) is a strong white noise and is independent of (η_t). In these models, volatility is a latent process. The most popular model in this class assumes that the process log σ_t follows an AR(1) of the form

log σ_t = ω + φ log σ_{t−1} + v_t,

where the noises (v_t) and (η_t) are independent (see the simulation sketch following this list).

(iii) Switching‐regime models, for which σ_t = σ(Δ_t, ℱ_{t−1}), where (Δ_t) is a latent (unobservable) integer‐valued process, independent of (η_t). The state of the variable Δ_t is here interpreted as a regime and, conditionally on this state, the volatility of ε_t has a GARCH specification. The process (Δ_t) is generally supposed to be a finite‐state Markov chain. The models are thus called Markov‐switching models.
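As announced in item (ii), here is a minimal R simulation of a stochastic volatility model (the parameter values ω = −0.2, φ = 0.95 and the noise standard deviation 0.2 are arbitrary choices for illustration). It reproduces two stylised facts: returns that are roughly uncorrelated while their squares are positively autocorrelated:

set.seed(1)
n <- 2000
omega <- -0.2; phi <- 0.95; sd.v <- 0.2
v   <- rnorm(n, sd = sd.v)                               # latent noise (v_t), independent of eta
eta <- rnorm(n)                                          # iid(0,1)
logsig <- filter(omega + v, phi, method = "recursive")   # log sigma_t = omega + phi * log sigma_{t-1} + v_t
eps <- exp(logsig) * eta                                 # eps_t = sigma_t * eta_t
drop(acf(eps,   lag.max = 5, plot = FALSE)$acf)[-1]      # close to 0: weak white noise
drop(acf(eps^2, lag.max = 5, plot = FALSE)$acf)[-1]      # clearly positive: not a strong white noise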

1.5 Bibliographical Notes

The time series concepts presented in this chapter are the subject of numerous books. Two classical references are Brockwell and Davis (1991) and Gouriéroux and Monfort (1995, 1996).

The assumption of iid Gaussian price variations has long been predominant in the finance literature and goes back to the dissertation by Bachelier (1900), where a precursor of Brownian motion can be found. This thesis, ignored for a long time until its rediscovery by Kolmogorov in 1931 (see Kahane 1998), constitutes the historical source of the link between Brownian motion and mathematical finance. Nonetheless, it relies on only a rough description of the behaviour of financial series. The stylised facts concerning these series can be attributed to Mandelbrot (1963) and Fama (1965). Based on the analysis of many stock returns series, their studies showed the leptokurticity, hence the non‐Gaussianity, of marginal distributions, some temporal dependencies, and non‐constant volatilities. Since then, many empirical studies have confirmed these findings. See, for instance, Taylor (2007) for a detailed presentation of the stylised facts of financial time series. In particular, the calendar effects are discussed in detail.

As noted by Shephard (2005), a precursor article on ARCH models is that of Rosenberg (1972). This article shows that the decomposition (1.6) allows the leptokurticity of financial series to be reproduced. It also proposes some volatility specifications which anticipate both the GARCH and stochastic volatility models. However, the GARCH models to be studied in the next chapters are not discussed in this article. The decomposition of the kurtosis coefficient in (1.7) can be found in Clark (1973).

A number of surveys have been devoted to GARCH models. See, among others, Bollerslev, Chou, and Kroner (1992), Bollerslev, Engle, and Nelson (1994), Pagan (1996), Palm (1996), Shephard (1996), Kim, Shephard, and Chib (1998), Engle (2001, 2002b, 2004), Engle and Patton (2001), Diebold (2004), Bauwens, Laurent, and Rombouts (2006), and Giraitis, Leipus, and Surgailis (2006). Moreover, the books by Gouriéroux (1997) and Xekalaki and Degiannakis (2009) are devoted to GARCH and several books devote a chapter to GARCH, in chronological order: Mills (1993), Hamilton (1994), Franses and van Dijk (2000), Gouriéroux and Jasiak (2001), Franke, Härdle, and Hafner (2004), McNeil, Frey, and Embrechts (2005), Taylor (2007), Andersen et al. (2009), and Tsay (2010). See also Mikosch (2001).

Although the focus of this book is on financial applications, it is worth mentioning that GARCH models have been used in other areas. Time series exhibiting GARCH‐type behaviour have also appeared, for example, in speech signals (Cohen 2004, 2006; Abramson and Cohen 2008), daily and monthly temperature measurements (Tol 1996; Campbell and Diebold 2005; Romilly 2006; Huang, Shiu, and Lin 2008), wind speeds (Ewing, Kruse, and Schroeder 2006), electricity prices (Dupuis 2017) and atmospheric CO2 concentrations (Hoti, McAleer, and Chan 2005; McAleer and Chan 2006).

Most econometric software (for instance, GAUSS, R, RATS, SAS and SPSS) incorporates routines that permit the estimation of GARCH models. Readers interested in the implementation with Ox may refer to Laurent (2009).

1.6 Exercises

1.1 (Stationarity, ARMA models, white noises)

Let (ηt) denote an iid centred sequence with unit variance (and if necessary with a finite fourth‐order moment).

Do the following models admit a stationary solution? If yes, derive the expectation and the autocorrelation function of this solution.

X_t = 1 + 0.5X_{t−1} + η_t;

X_t = 1 + 2X_{t−1} + η_t;

X_t = 1 + 0.5X_{t−1} + η_t − 0.4η_{t−1}.

Identify the ARMA models compatible with the following recursive relations, where ρ(⋅) denotes the autocorrelation function of some stationary process:

ρ(h) = 0.4ρ(h − 1), for all h