Game-theoretic probability and finance come of age
Glenn Shafer and Vladimir Vovk’s Probability and Finance, published in 2001, showed that perfect-information games can be used to define mathematical probability. Based on fifteen years of further research, Game-Theoretic Foundations for Probability and Finance presents a mature view of the foundational role game theory can play. Its account of probability theory opens the way to new methods of prediction and testing and makes many statistical methods more transparent and widely usable. Its contributions to finance theory include purely game-theoretic accounts of Itô's stochastic calculus, the capital asset pricing model, the equity premium, and portfolio theory.
Game-Theoretic Foundations for Probability and Finance is a book of research. It is also a teaching resource. Each chapter is supplemented with carefully designed exercises and notes relating the new theory to its historical context.
Praise from early readers
“Ever since Kolmogorov's Grundbegriffe, the standard mathematical treatment of probability theory has been measure-theoretic. In this ground-breaking work, Shafer and Vovk give a game-theoretic foundation instead. While being just as rigorous, the game-theoretic approach allows for vast and useful generalizations of classical measure-theoretic results, while also giving rise to new, radical ideas for prediction, statistics and mathematical finance without stochastic assumptions. The authors set out their theory in great detail, resulting in what is definitely one of the most important books on the foundations of probability to have appeared in the last few decades.” – Peter Grünwald, CWI and University of Leiden
“Shafer and Vovk have thoroughly re-written their 2001 book on the game-theoretic foundations for probability and for finance. They have included an account of the tremendous growth that has occurred since, in the game-theoretic and pathwise approaches to stochastic analysis and in their applications to continuous-time finance. This new book will undoubtedly spur a better understanding of the foundations of these very important fields, and we should all be grateful to its authors.” – Ioannis Karatzas, Columbia University
The e-book can be read in Legimi apps or in any other app that supports the following format:
Page count: 764
Year of publication: 2019
Cover
WILEY SERIES IN PROBABILITY AND STATISTICS
Preface
Acknowledgments
Part I: Examples in Discrete Time
1 Borel's Law of Large Numbers
1.1 A PROTOCOL FOR TESTING FORECASTS
1.2 A GAME‐THEORETIC GENERALIZATION OF BOREL'S THEOREM
1.3 BINARY OUTCOMES
1.4 SLACKENINGS AND SUPERMARTINGALES
1.5 CALIBRATION
1.6 THE COMPUTATION OF STRATEGIES
1.7 Exercises
1.8 CONTEXT
2 Bernoulli's and De Moivre's Theorems
2.1 Game‐Theoretic Expected Value and Probability
2.2 Bernoulli's Theorem for Bounded Forecasting
2.3 A Central Limit Theorem
2.4 Global Upper Expected Values for Bounded Forecasting
2.5 Exercises
2.6 Context
3 Some Basic Supermartingales
3.1 KOLMOGOROV'S MARTINGALE
3.2 DOLÉANS'S SUPERMARTINGALE
3.3 HOEFFDING'S SUPERMARTINGALE
3.4 BERNSTEIN'S SUPERMARTINGALE
3.5 EXERCISES
3.6 CONTEXT
4 Kolmogorov's Law of Large Numbers
4.1 STATING KOLMOGOROV'S LAW
4.2 SUPERMARTINGALE CONVERGENCE THEOREM
4.3 HOW SKEPTIC FORCES CONVERGENCE
4.4 HOW REALITY FORCES DIVERGENCE
4.5 FORCING GAMES
4.6 EXERCISES
4.7 CONTEXT
5 The Law of the Iterated Logarithm
5.1 VALIDITY OF THE ITERATED‐LOGARITHM BOUND
5.2 SHARPNESS OF THE ITERATED‐LOGARITHM BOUND
5.3 ADDITIONAL RECENT GAME‐THEORETIC RESULTS
5.4 CONNECTIONS WITH LARGE DEVIATION INEQUALITIES
5.5 EXERCISES
5.6 CONTEXT
Part II: Abstract Theory in Discrete Time
6 Betting on a Single Outcome
6.1 Upper and Lower Expectations
6.2 Upper and Lower Probabilities
6.3 Upper Expectations with Smaller Domains
6.4 Offers
6.5 Dropping the Continuity Axiom
6.6 Exercises
6.7 Context
7 Abstract Testing Protocols
7.1 Terminology and Notation
7.2 Supermartingales
7.3 Global Upper Expected Values
7.4 Lindeberg's Central Limit Theorem for Martingales
7.5 General Abstract Testing Protocols
7.6 Making the Results of Part I Abstract
7.7 Exercises
7.8 Context
8 Zero‐One Laws
8.1 Lévy's Zero‐One Law
8.2 Global Upper Expectation
8.3 Global Upper and Lower Probabilities
8.4 Global Expected Values and Probabilities
8.5 Other Zero‐One Laws
8.6 Exercises
8.7 Context
9 Relation to Measure‐Theoretic Probability
9.1 VILLE'S THEOREM
9.2 MEASURE‐THEORETIC REPRESENTATION OF UPPER EXPECTATIONS
9.3 EMBEDDING GAME‐THEORETIC MARTINGALES IN PROBABILITY SPACES
9.4 EXERCISES
9.5 CONTEXT
Part III: Applications in Discrete Time
10 Using Testing Protocols in Science and Technology
10.1 SIGNALS IN OPEN PROTOCOLS
10.2 COURNOT'S PRINCIPLE
10.3 DALTONISM
10.4 LEAST SQUARES
10.5 PARAMETRIC STATISTICS WITH SIGNALS
10.6 QUANTUM MECHANICS
10.7 JEFFREYS'S LAW
10.8 EXERCISES
10.9 Context
11 Calibrating Lookbacks and p‐Values
11.1 LOOKBACK CALIBRATORS
11.2 LOOKBACK PROTOCOLS
11.3 LOOKBACK COMPROMISES
11.4 LOOKBACKS IN FINANCIAL MARKETS
11.5 CALIBRATING p‐VALUES
11.6 EXERCISES
11.7 CONTEXT
12 Defensive Forecasting
12.1 DEFEATING STRATEGIES FOR SKEPTIC
12.2 CALIBRATED FORECASTS
12.3 PROVING THE CALIBRATION THEOREMS
12.4 USING CALIBRATED FORECASTS FOR DECISION MAKING
12.5 PROVING THE DECISION THEOREMS
12.6 FROM THEORY TO ALGORITHM
12.7 DISCONTINUOUS STRATEGIES FOR SKEPTIC
12.8 Exercises
12.9 CONTEXT
Part IV: Game‐Theoretic Finance
13 Emergence of Randomness in Idealized Financial Markets
13.1 CAPITAL PROCESSES AND INSTANT ENFORCEMENT
13.2 EMERGENCE OF BROWNIAN RANDOMNESS
13.3 EMERGENCE OF BROWNIAN EXPECTATION
13.4 APPLICATIONS OF DUBINS–SCHWARZ
13.5 GETTING RICH QUICK WITH THE AXIOM OF CHOICE
13.6 Exercises
13.7 CONTEXT
14 A Game‐Theoretic Itô Calculus
14.1 Martingale Spaces
14.2 Conservatism of Continuous Martingales
14.3 Itô Integration
14.4 Covariation and Quadratic Variation
14.5 Itô's Formula
14.6 DOLÉANS EXPONENTIAL AND LOGARITHM
14.7 GAME‐THEORETIC EXPECTATION AND PROBABILITY
14.8 Game‐Theoretic Dubins–Schwarz Theorem
14.9 Coherence
14.10 Exercises
14.11 Context
15 Numeraires in Market Spaces
15.1 MARKET SPACES
15.2 MARTINGALE THEORY IN MARKET SPACES
15.3 GIRSANOV'S THEOREM
15.4 EXERCISES
15.5 CONTEXT
16 Equity Premium and CAPM
16.1 Three Fundamental Continuous I‐Martingales
16.2 Equity Premium
16.3 Capital Asset Pricing Model
16.4 Theoretical Performance Deficit
16.5 Sharpe Ratio
16.6 Exercises
16.7 Context
17 Game‐Theoretic Portfolio Theory
17.1 STROOCK–VARADHAN MARTINGALES
17.2 BOOSTING STROOCK–VARADHAN MARTINGALES
17.3 OUTPERFORMING THE MARKET WITH DUBINS–SCHWARZ
17.4 JEFFREYS'S LAW IN FINANCE
17.5 EXERCISES
17.6 CONTEXT
Terminology and Notation
List of Symbols
References
Index
End User License Agreement
Chapter 1
Table 1.1 Basic concepts in Protocol 1.3
Chapter 2
Table 2.1 Some global upper probabilities for … in Protocols 2.1 and 2.11.
Chapter 6
Table 6.1 Notation and terminology for upper expectations.
Chapter 7
Table 7.1 Three sets of processes in Protocol 7.1.
Chapter 11
Table 11.1 Values of some maximal lookback calibrators, rounded down to the next...
Table 11.2 Some values of the p‐value calibrators … and … and of the Vovk–Sellke ...
Chapter 16
Table 16.1 Percentage of active funds outperformed by relevant market indexes
Table 16.2 Continuous I‐martingales used in this chapter; more can be obtained us...
Chapter 2
Figure 2.1 Heat propagation according to the equation … . Part (a) shows an ini...
Figure 2.2 Heat propagation with thermostats set to keep the temperature at … f...
Chapter 3
Figure 3.1 Log–log plot showing two upper bounds on … as functions of … . The d...
Figure 3.2 The functions … (solid) and … (dashed) over a range of … .
Chapter 4
Figure 4.1 Evolution of a nonnegative process … and its transform … over the f...
Chapter 17
Figure 17.1 Lower bounds on the Stroock–Varadhan martingale generated by 17.3 ...
Established by Walter A. Shewhart and Samuel S. Wilks
Editors: David J. Balding, Noel A. C. Cressie, Garrett M. Fitzmaurice,
Geof H. Givens, Harvey Goldstein, Geert Molenberghs, David W. Scott,
Adrian F. M. Smith, Ruey S. Tsay
Editors Emeriti: J. Stuart Hunter, Iain M. Johnstone, Joseph B. Kadane,
Jozef L. Teugels
The Wiley Series in Probability and Statistics is well established and authoritative. It covers many topics of current research interest in both pure and applied statistics and probability theory. Written by leading statisticians and institutions, the titles span both state‐of‐the‐art developments in the field and classical methods.
Reflecting the wide range of current research in statistics, the series encompasses applied, methodological and theoretical statistics, ranging from applications and new techniques made possible by advances in computerized practice to rigorous treatment of theoretical approaches.
This series provides essential and invaluable reading for all statisticians, whether in academia, industry, government, or research.
A complete list of titles in this series can be found at
http://www.wiley.com/go/wsps
GLENN SHAFER
Rutgers Business School
VLADIMIR VOVK
Royal Holloway, University of London
This edition first published 2019
© 2019 John Wiley & Sons, Inc.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Glenn Shafer and Vladimir Vovk to be identified as the authors of this work has been asserted in accordance with law.
Registered Office
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
Editorial Office
111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging‐in‐Publication Data
Names: Shafer, Glenn, 1946‐ author. | Vovk, Vladimir, 1960‐ author.
Title: Game‐theoretic foundations for probability and finance / Glenn Ray
Shafer, Rutgers University, New Jersey, USA, Vladimir Vovk, University of
London, Surrey, UK.
Other titles: Probability and finance
Description: First edition. | Hoboken, NJ : John Wiley & Sons, Inc., 2019. |
Series: Wiley series in probability and statistics | Earlier edition
published in 2001 as: Probability and finance : it's only a game! |
Includes bibliographical references and index. |
Identifiers: LCCN 2019003689 (print) | LCCN 2019005392 (ebook) | ISBN
9781118547939 (Adobe PDF) | ISBN 9781118548028 (ePub) | ISBN 9780470903056
(hardcover)
Subjects: LCSH: Finance–Statistical methods. | Finance–Mathematical models.
| Game theory.
Classification: LCC HG176.5 (ebook) | LCC HG176.5 .S53 2019 (print) | DDC
332.01/5193–dc23
LC record available at https://lccn.loc.gov/2019003689
Cover design by Wiley
Cover image: © Web Gallery of Art/Wikimedia Commons
Probability theory has always been closely associated with gambling. In the 1650s, Blaise Pascal and Christian Huygens based probability's concept of expectation on reasoning about gambles. Countless mathematicians since have looked to gambling for their intuition about probability. But the formal mathematics of probability has long leaned in a different direction. In his correspondence with Pascal, often cited as the origin of probability theory, Pierre Fermat favored combinatorial reasoning over Pascal's reasoning about gambles, and such combinatorial reasoning became dominant in Jacob Bernoulli's monumental Ars Conjectandi and its aftermath. In the twentieth century, the combinatorial foundation for probability evolved into a rigorous and sophisticated measure‐theoretic foundation, put in durable form by Andrei Kolmogorov and Joseph Doob.
The twentieth century also saw the emergence of a mathematical theory of games, just as rigorous as measure theory, albeit less austere. In the 1930s, Jean Ville gave a game‐theoretic interpretation of the key concept of probability 0. In the 1970s, Claus Peter Schnorr and Leonid Levin developed Ville's fundamental insight, introducing universal game‐theoretic strategies for testing randomness. But no attempt was made in the twentieth century to use game theory as a foundation for the modern mathematics of probability.
Probability and Finance: It's Only a Game, published in 2001, started to fill this gap. It gave game‐theoretic proofs of probability's most classical limit theorems (the laws of large numbers, the law of the iterated logarithm, and the central limit theorem), and it extended this game‐theoretic analysis to continuous‐time diffusion processes using nonstandard analysis. It applied the methods thus developed to finance, discussing how the availability of a variance swap in a securities market might allow other options to be priced without probabilistic assumptions and studying a purely game‐theoretic hypothesis of market efficiency.
The present book was originally conceived of as a second edition of Probability and Finance, but as the new title suggests, it is a very different book, reflecting the healthy growth of game‐theoretic probability since 2001. As in the earlier book, we show that game‐theoretic and measure‐theoretic probability provide equivalent descriptions of coin tossing, the archetype of probability theory, while generalizing this archetype in different directions. Now we show that the two descriptions are equivalent on a larger central core, including all discrete‐time stochastic processes that have only finitely many outcomes on each round, and we present an even broader array of new ideas.
We can identify seven important new ideas that have come out of game‐theoretic probability. Some of these already appeared, at least in part, in Probability and Finance, but most are developed further here or are entirely new.
Strategies for testing. Theorems showing that certain events have small or zero probability are made constructive; they are proven by constructing gambling strategies that multiply the capital they risk by a large or infinite factor if the events happen. In Probability and Finance, we constructed such strategies for the law of large numbers and several other limit theorems. Now we add to the list the most fundamental limit theorem of probability – Lévy's zero‐one law. The topic of strategies for testing remains our most prominent theme, dominating Part I and Chapters 7 and 8 in Part II.
Limited betting opportunities. The betting rates suggested by a scientific theory or the investment opportunities in a financial market may fall short of defining a probability distribution for future developments or even for what will happen next. Sometimes a scientist or statistician tests a theory that asserts expected values for some variables but not for every function of those variables. Sometimes an investor in a market can buy a particular payoff but cannot sell it at the same price and cannot buy arbitrary options on it. Limited betting opportunities were emphasized by a number of twentieth‐century authors, including Peter Williams and Peter Walley. As we explained in Probability and Finance, we can combine Williams and Walley's picture of limited betting opportunities in individual situations with Pascal and Ville's insights into strategies for combining bets across situations to obtain interesting and powerful generalizations of classical results. These include theorems that are one‐sided in some sense (see Sections 2.4 and 5.1).
Strategies for reality. Most of our theorems concern what can be accomplished by a bettor playing against an opponent who determines outcomes. Our games are determined; one of the players has a winning strategy. In Probability and Finance, we exploited this determinacy and an argument of Kolmogorov's to show that in the game for Kolmogorov's law of large numbers, the opponent has a strategy that wins when Kolmogorov's hypotheses are not satisfied. In this book we construct such a strategy explicitly and discuss other interesting strategies for the opponent (see Sections 4.4 and 4.7).
Open protocols for science. Scientific models are usually open to influences that are not themselves predicted by the models in any way. These influences are variously represented; they may be treated as human decisions, as signals, or even as constants. Because our theorems concern what one player can accomplish regardless of how the other players move, the fact that these signals or “independent variables” can be used by the players as they appear in the course of play does not impair the theorems' validity and actually enhances their applicability to scientific problems (see Chapter 10).
Insuring against loss of evidence. The bettor can modify his own strategy or adapt bets made by another bettor so as to avoid a total loss of apparently strong evidence as play proceeds further. The same methods provide a way of calibrating the p‐values from classical hypothesis testing so as to correct for the failure to set an initial fixed significance level. These ideas have been developed since the publication of Probability and Finance (see Chapter 11).
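To make the p‐value calibration idea concrete, here is a sketch using two functions that are standard in this literature (our illustration, not the book's development; Chapter 11 and Table 11.2 treat calibrators and the Vovk–Sellke bound systematically, and their conventions may differ in detail).

```python
import math

# Hedged sketch of p-value calibration (standard functions from the
# literature; not code from the book).

def calibrator(p, kappa):
    """A standard family of p-value calibrators: f(p) = kappa * p**(kappa - 1)
    for 0 < kappa < 1.  Each member integrates to 1 over [0, 1], so f(p) can
    be read as a fair betting score: large exactly when p is small."""
    assert 0 < kappa < 1 and 0 < p <= 1
    return kappa * p ** (kappa - 1)

def vovk_sellke_bound(p):
    """Envelope of the family above, maximized over kappa: -1 / (e p ln p),
    valid for p < 1/e."""
    assert 0 < p < 1 / math.e
    return -1 / (math.e * p * math.log(p))

print(round(vovk_sellke_bound(0.05), 2))  # 2.46: what a p-value of 0.05 buys
```

The bound quantifies the correction: a p‐value of 0.05, reinterpreted after the fact as a betting score, is worth a factor of at most about 2.46, far less than the 20 that 1/0.05 might suggest.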
Defensive forecasting. In addition to the player who bets and the player who determines outcomes, our games can involve a third player who forecasts the outcomes. The problem of forecasting is the problem of devising strategies for this player, and we can tackle it in interesting ways once we learn what strategies for the bettor win when the match between forecasts and outcomes is too poor. This idea, which came to our attention only after the publication of Probability and Finance, is developed in Chapter 12.
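A minimal sketch of the defensive idea in the binary case (our toy illustration with hypothetical names, not the book's construction): if the bettor must disclose a strategy whose stake depends continuously on the forecast, the forecaster can always choose a forecast at which that strategy cannot profit.

```python
# Defensive forecasting, simplest binary form (illustrative sketch only).
# The bettor discloses a continuous strategy stake(p): his bet on the outcome
# y in {0, 1} when the forecast is p, with gain stake(p) * (y - p).
# Knowing stake, the forecaster defends by a suitable choice of p.

def defensive_forecast(stake, tol=1e-9):
    """Return p in [0, 1] at which the gain stake(p) * (y - p) is (at least
    approximately) non-positive for both outcomes: a zero of stake found by
    bisection if its sign changes, else the endpoint that works against the bet."""
    s_lo, s_hi = stake(0.0), stake(1.0)
    if s_lo == 0.0:
        return 0.0
    if s_hi == 0.0:
        return 1.0
    if s_lo > 0 and s_hi > 0:
        return 1.0        # bettor backs 1; forecast 1 makes y - p <= 0
    if s_lo < 0 and s_hi < 0:
        return 0.0        # bettor backs 0; forecast 0 makes y - p >= 0
    lo, hi = 0.0, 1.0     # strict sign change: bisect for stake(p) == 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (stake(mid) > 0) == (s_lo > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A bettor who backs the outcome 1 whenever the forecast is below 0.6:
print(round(defensive_forecast(lambda p: 0.6 - p), 6))  # 0.6
```

Continuity of the disclosed strategy is what makes the defense possible; handling discontinuous strategies requires the randomization discussed in Section 12.7.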
Continuous‐time game‐theoretic finance. Measure‐theoretic finance assumes that prices of securities in a financial market follow some probabilistic model such as geometric Brownian motion. We obtain many insights, some already provided by measure‐theoretic finance and some not, without any probabilistic model, using only the actual prices in the market. This is now much clearer than in Probability and Finance, as we use tools from standard analysis that are more familiar than the nonstandard methods we used there. We have abandoned our hypothesis concerning the effectiveness of variance swaps in stabilizing markets, now fearing that the trading of such instruments could soon make them nearly as liquid and consequently treacherous as the underlying securities. But we provide game‐theoretic accounts of a wider class of financial phenomena and models, including the capital asset pricing model (CAPM), the equity premium puzzle, and portfolio theory (see Part IV).
The book has four parts.
Part I, Examples in Discrete Time, uses concrete protocols to explain how game‐theoretic probability generalizes classical discrete‐time limit theorems. Most of these results were already reported in Probability and Finance in 2001, but our exposition has changed substantially. We seldom repeat word for word what we wrote in the earlier book, and we occasionally refer the reader to the earlier book for detailed arguments that are not central to our theme.
Part II, Abstract Theory in Discrete Time, treats game‐theoretic probability in an abstract way, mostly developed since 2001. It is relatively self‐contained, and readers familiar with measure‐theoretic probability will find it accessible without the introduction provided by Part I.
Part III, Applications in Discrete Time, uses Part II's theory to treat important applications of game‐theoretic probability, including two promising applications that have developed since 2001: calibration of lookbacks and p‐values, and defensive forecasting.
Part IV, Game‐Theoretic Finance, studies continuous‐time game‐theoretic probability and its application to finance. It requires different definitions from the discrete‐time theory and hence is also relatively self‐contained. Its first chapter uses a simple concrete protocol to derive game‐theoretic versions of the Dubins–Schwarz theorem and related results, while the remaining chapters use an abstract and more powerful protocol to develop a game‐theoretic version of the Itô calculus and to study classical topics in finance theory.
Each chapter includes exercises, which vary greatly in difficulty; some are simple exercises to enhance the reader's understanding of definitions, others complete details in proofs, and others point to related literature, open problems, or substantial research projects. Following each chapter's exercises, we provide notes on the historical and contemporary context of the chapter's topic. But as a result of the substantial increase in mathematical content, we have left aside much of the historical and philosophical discussion that we included in Probability and Finance.
We are pleased by the flowering of game‐theoretic probability since 2001 and by the number of authors who have made contributions. The field nevertheless remains in its infancy, and this book cannot be regarded as a definitive treatment. We anticipate and welcome the theory's further growth and its incorporation into probability's broad tapestry of mathematics, application, and philosophy.
GLENN SHAFER AND VLADIMIR VOVK
Newark, New Jersey, USA
and Egham, Surrey, UK
10 November 2018
For more than 20 years, game‐theoretic probability has been central to both our scholarly lives. During this period, we have been generously supported, personally and financially, by more individuals and institutions than we can possibly name. The list is headed by two of the most generous and thoughtful people we know, our wives Nell Painter and Lyuda Vovk. We dedicate this book to them.
Among the many other individuals to whom we are intellectually indebted, we must put at the top of the list our students, our coauthors, and our colleagues at Rutgers University and Royal Holloway, University of London. We have benefited especially from interactions with those who have joined us in working on game‐theoretic probability and closely related topics. Foremost on this list are the Japanese researchers on game‐theoretic probability, led by Kei Takeuchi and Akimichi Takemura, and Gert de Cooman, a leader in the field of imprecise probabilities. In the case of continuous time, we have learned a great deal from Nicolas Perkowski, David Prömel, and Rafał Łochowski. The book's title was suggested to us by Ioannis Karatzas, who also provided valuable encouragement in the final stages of the writing.
At the head of the list of other scholars who have contributed to our understanding of game‐theoretic probability, we place a number who are no longer living: Joe Doob, Jørgen Hoffmann‐Jørgensen, Jean‐Yves Jaffray, Hans‐Joachim Lenz, Laurie Snell, and Kurt Weichselberger.
We also extend our warmest thanks to Victor Perez Abreu, Beatrice Acciaio, John Aldrich, Thomas Augustin, Dániel Bálint, Traymon Beavers, James Berger, Mark Bernhardt, Laurent Bienvenu, Nic Bingham, Jasper de Bock, Bernadette Bouchon‐Meunier, Olivier Bousquet, Ivan Brick, Bernard Bru, Peter Carr, Nicolò Cesa‐Bianchi, Ren‐Raw Chen, Patrick Cheridito, Alexey Chernov, Roman Chychyla, Fernando Cobos, Rama Cont, Frank Coolen, Alexander Cox, Harry Crane, Pierre Crépel, Mark Davis, Philip Dawid, Freddy Delbaen, Art Dempster, Thierry Denoeux, Valentin Dimitrov, David Dowe, Didier Dubois, Hans Fischer, Hans Föllmer, Yoav Freund, Akio Fujiwara, Alex Gammerman, Jianxiang Gao, Peter Gillett, Michael Goldstein, Shelly Goldstein, Prakash Gorroochurn, Suresh Govindaraj, Peter Grünwald, Yuri Gurevich, Jan Hannig, Martin Huesmann, Yuri Kalnishkan, Alexander Kechris, Matti Kiiski, Jack King, Elinda Fishman Kiss, Alex Kogan, Wouter Koolen, Masayuki Kumon, Thomas Kühn, Steffen Lauritzen, Gabor Laszlo, Tatsiana Levina, Chuanhai Liu, Barry Loewer, George Lowther, Gábor Lugosi, Ryan Martin, Thierry Martin, Laurent Mazliak, Peter McCullagh, Frank McIntyre, Perry Mehrling, Xiao‐Li Meng, Robert Merton, David Mest, Kenshi Miyabe, Rimas Norvaiša, Ilia Nouretdinov, Marcel Nutz, Jan Obłój, André Orléan, Barbara Osimani, Alexander Outkin, Darius Palia, Dan Palmon, Dusko Pavlovic, Ivan Petej, Marietta Peytcheva, Jan von Plato, Henri Prade, Philip Protter, Steven de Rooij, Johannes Ruf, Andrzej Ruszczyński, Bharat Sarath, Richard Scherl, Martin Schweizer, Teddy Seidenfeld, Thomas Sellke, Eugene Seneta, John Shawe‐Taylor, Alexander Shen, Yiwei Shen, Prakash Shenoy, Oscar Sheynin, Albert N. Shiryaev, Pietro Siorpaes, Alex Smola, Mete Soner, Steve Stigler, Tamas Szabados, Natan T'Joens, Paolo Toccaceli, Matthias Troffaes, Jean‐Philippe Touffut, Dimitris Tsementzis, Valery N.
Tutubalin, Miklos Vasarhelyi, Nikolai Vereshchagin, John Vickers, Mikhail Vyugin, Vladimir V'yugin, Bernard Walliser, Chris Watkins, Wei Wu, Yangru Wu, Sandy Zabell, and Fedor Zhdanov.
We thank Rutgers Business School and Royal Holloway, University of London, as institutions, for their financial support and for the research environments they have created. We have also benefited from the hospitality of numerous other institutions where we have had the opportunity to share ideas with other researchers over these past 20 years. We are particularly grateful to the three institutions that have hosted workshops on game‐theoretic probability: the University of Tokyo (on several occasions), then Royal Holloway, University of London, and the latest one at CIMAT (Centro de Investigación en Matemáticas) in Guanajuato. We are grateful to the Web Gallery of Art and its editor, Dr. Emil Krén, for permission to use “Card Players” by Lucas van Leyden (Museo Nacional Thyssen‐Bornemisza, Madrid) on the cover.
GLENN SHAFER AND VLADIMIR VOVK
Many classical probability theorems conclude that some event has small or zero probability. These theorems can be used as predictions; they tell us what to expect. Like any predictions, they can also be used as tests. If we specify an event of putative small probability as a test, and the event happens, then the putative probability is called into question, and perhaps the authority behind it as well.
The key idea of game‐theoretic probability is to formulate probabilistic predictions and tests as strategies for a player in a betting game. The player – we call him Skeptic – may be betting not so much to make money as to refute a theory or forecaster – whatever or whoever is providing the probabilities. In this picture, the claim that an event has small probability becomes the claim that Skeptic can multiply the capital he risks by a large factor if the event happens.
There is nothing profound or original in the observation that you make a lot more money than you risk when you bet on an event of small probability, at the corresponding odds, and the event happens. But as Jean Ville explained in the 1930s [386, 387], the game‐theoretic picture has a deeper message. In a sequential setting, where probabilities are given on each round for the next outcome, an event involving the whole sequence of outcomes has a small probability if and only if Skeptic has a strategy for successive bets that multiplies the capital it risks by a large factor when the event happens. In this part of the book, we develop the implications of Ville's insight. As we show, it leads to new generalizations of many classical results in probability theory, thus complementing the measure‐theoretic foundation for probability that became standard in the second half of the twentieth century.
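Ville's equivalence between small probability and large capital growth can be seen in the simplest possible case (a toy sketch of ours, not code from the book): under the constant forecast 1/2, the event that the first N outcomes are all heads has probability 2^-N, and the strategy that stakes Skeptic's entire capital on heads each round multiplies his capital by 2^N exactly when that event happens.

```python
# Minimal illustration of Ville's insight: an event of probability 2**-N
# corresponds to a betting strategy that multiplies Skeptic's capital by
# 2**N exactly when the event happens.  (Toy sketch, not the book's code.)

def skeptic_capital(outcomes, initial=1.0):
    """Capital of a Skeptic who stakes everything on heads (1) at even odds."""
    capital = initial
    for y in outcomes:
        # At price 1/2, spending all of capital buys 2*capital units of the
        # payoff y: the capital doubles if y == 1 and is wiped out if y == 0.
        capital = 2 * capital if y == 1 else 0.0
    return capital

print(skeptic_capital([1] * 10))                 # event happens: 1024.0
print(skeptic_capital([1] * 5 + [0] + [1] * 4))  # event fails: 0.0
```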
The charm of the measure‐theoretic foundation lies in its power and simplicity. Starting with the short list of axioms and definitions that Andrei Kolmogorov laid out in 1933 [224] and adding when needed the definition of a stochastic process developed by Joseph Doob [116], we can spin out the whole broad landscape of mathematical probability and its applications. The charm of the game‐theoretic foundation lies in its constructivity and overt flexibility. The strategies that prove classical theorems are computable and relatively simple. The mathematics is rigorous, because we define a precise game, with precise assumptions about the players, their information, their permitted moves, and rules for winning, but these elements of the game can be varied in many ways. For example, the bets offered to Skeptic on a given round may be too few to define a probability measure for the next outcome. This flexibility allows us to avoid some complications involving measurability, and it accommodates very naturally applications where the activity between bets includes not only events that settle Skeptic's last bet but also actions by other players that set up the options for his next bet.
Kolmogorov's 1933 formulation of the measure-theoretic foundation is abstract. It begins with the notion of a probability measure $P$ on a $\sigma$-algebra $\mathcal{F}$ of subsets of an abstract space $\Omega$, and it then proceeds to prove theorems about all such triplets $(\Omega, \mathcal{F}, P)$. Outcomes of experiments are treated as random variables – i.e. as functions on $\Omega$ that are measurable with respect to $\mathcal{F}$. But many of the most important theorems of modern probability, including Émile Borel's and Kolmogorov's laws of large numbers, Jarl Waldemar Lindeberg's central limit theorem, and Aleksandr Khinchin's law of the iterated logarithm, were proven before 1933 in specific concrete settings. These theorems, the theorems that we call classical, dealt with a sequence $y_1, y_2, \ldots$ of outcomes by positing or defining in one way or another a system of probability distributions: (i) a probability distribution for $y_1$ and (ii) for each $n$ and each possible sequence of values $y_1, \ldots, y_{n-1}$ for the first $n-1$ outcomes, a probability distribution for $y_n$. We can fit this classical picture into the abstract measure-theoretic picture by constructing a canonical space $\Omega$ from the spaces of possible outcomes for the $y_n$.
In this part of the book, we develop game-theoretic generalizations of classical theorems. As in the classical picture, we construct global probabilities and expected values from ingredients given sequentially, but we generalize the classical picture in two ways. First, the betting offers on each round may be less extensive. Instead of a probability distribution, which defines odds for every possible bet about the outcome $y_n$, we may offer Skeptic only a limited number of bets about $y_n$. Second, these offers are not necessarily laid out at the beginning of the game. Instead, they may be given by a player in the game – we call this player Forecaster – as the game proceeds.
Our game-theoretic results fall into two classes: finite-horizon results, which concern a finite sequence of outcomes $y_1, \ldots, y_N$, and asymptotic results, which concern an infinite sequence of outcomes $y_1, y_2, \ldots$. The finite-horizon results can be more directly relevant to applications, but the asymptotic results are often simpler.
Because of its simplicity, we begin with the most classical asymptotic result, Borel's law of large numbers. Borel's publication of this result in 1909 is often seen as the decisive step toward modern measure-theoretic probability, because it exhibited for the first time the isomorphism between coin tossing and Lebesgue measure on the interval $[0, 1]$ [54, 350]. But Borel's theorem can also be understood and generalized game-theoretically. This is the topic of Chapter 1, where we also introduce the most fundamental mathematical tool of game-theoretic probability, the concept of a supermartingale.
In Chapter 2, we shift to finite-horizon results, proving and generalizing game-theoretic versions of Jacob Bernoulli's law of large numbers and Abraham De Moivre's central limit theorem. Here we introduce the concepts of game-theoretic probability and game-theoretic expected value, which we did not need in Chapter 1. There, zero was the only probability we needed, and instead of saying that an event has probability 0, we can say simply that Skeptic becomes infinitely rich if it happens.
In Chapter 3, we study some supermartingales that are relevant to the theory of large deviations. Three of these, Kolmogorov's martingale, Doléans's supermartingale, and Hoeffding's supermartingale, will recur in various forms later in the book, even in Part IV's continuous-time theory.
In Chapter 4, we return to the infinite-horizon picture, generalizing Chapter 1's game-theoretic version of Borel's 1909 law of large numbers to a game-theoretic version of Kolmogorov's 1930 law of large numbers, which applies even when outcomes may be unbounded. Kolmogorov's classical theorem, later generalized to a martingale theorem within measure-theoretic probability, gives conditions under which an average of outcomes asymptotically equals the average of the outcomes' expected values. Kolmogorov's necessary and sufficient conditions for the convergence are elaborated in the game-theoretic framework by a strategy for Skeptic that succeeds if the conditions are satisfied and a strategy for Reality (the opponent who determines the outcomes) that succeeds if the conditions are not satisfied.
In Chapter 5, we prove a game-theoretic version of the remaining classical result mentioned earlier, Khinchin's law of the iterated logarithm.
This chapter introduces game-theoretic probability in a relatively simple and concrete setting, where outcomes are bounded real numbers. We use this setting to prove game-theoretic generalizations of a theorem that was first published by Émile Borel in 1909 [44] and is often called Borel's law of large numbers.
In its simplest form, Borel's theorem says that the frequency of heads in an infinite sequence of tosses of a coin, where the probability of heads is always $p$, converges with probability one to $p$. Later authors generalized the theorem in many directions. In an infinite sequence of independent trials with bounded outcomes and constant expected value, for example, the average outcome converges with probability one to the expected value.
Our game‐theoretic generalization of Borel's theorem begins not with probabilities and expected values but with a sequential game in which one player, whom we call Forecaster, forecasts each outcome and another, whom we call Skeptic, uses each forecast as a price at which he can buy any multiple (positive, negative, or zero) of the difference between the outcome and the forecast. Here Borel's theorem becomes a statement about how Skeptic can profit if the average difference does not converge to zero. Instead of saying that convergence happens with probability one, it says that Skeptic has a strategy that multiplies the capital it risks by infinity if the convergence does not happen.
In Section 1.1, we formalize the game for bounded outcomes. In Section 1.2, we state Borel's theorem for the game and prove it by constructing the required strategy. Many of the concepts we introduce as we do so (situations, events, variables, processes, forcing, almost sure events, etc.) will reappear throughout the book.
The outcomes in our game are determined by a third player, whom we call Reality. In Section 1.3, we consider the special case where Reality is allowed only a binary choice. Because our results tell us what Skeptic can do regardless of how Forecaster and Reality move, they remain valid under this restriction on Reality. They also remain valid when we then specify Forecaster's moves in advance, and this reduces them to familiar results in probability theory, including Borel's original theorem.
In Section 1.4, we develop terminology for the case where Skeptic is allowed to give up capital on each round. In this case, a capital process that results from fixing a strategy for Skeptic is called a supermartingale. Supermartingales are a fundamental tool in game‐theoretic probability.
In Section 1.5, we discuss how Borel's theorem can be adapted to test the calibration of forecasts, a topic we will study from Forecaster's point of view in Chapter 12. In Section 1.6, we comment on the computability of the strategies we construct.
Consider a game with three players: Forecaster, Skeptic, and Reality. On each round of the game,

1. Forecaster decides and announces the price $m_n$ for a payoff $y_n$,
2. Skeptic decides and announces how many units, say $M_n$, of $y_n$ he will buy,
3. Reality decides and announces the value of $y_n$, and
4. Skeptic receives the net gain $M_n(y_n - m_n)$, which may be positive, negative, or zero.
The players move in the order listed, and they see each other's moves.
We think of $m_n$ as a forecast of $y_n$. Skeptic tests the forecast by betting on $y_n$ differing from it. By choosing $M_n$ positive, Skeptic bets $y_n$ will be greater than $m_n$; by choosing $M_n$ negative, he bets it will be less. Reality can keep Skeptic from making money. By setting $y_n = m_n$, for example, she can assure that Skeptic's net gain is zero. But if she does this, she will be validating the forecast.
We write $\mathcal{K}_n$ for Skeptic's capital after the $n$th round of play. We allow Skeptic to specify his initial capital $\mathcal{K}_0$, we assume that Forecaster's and Reality's moves are all between $-1$ and $1$, and we assume that play continues indefinitely. These rules of play are summarized in the following protocol.

Protocol 1.1
Skeptic announces $\mathcal{K}_0 \in \mathbb{R}$.
FOR $n = 1, 2, \ldots$:
    Forecaster announces $m_n \in [-1, 1]$.
    Skeptic announces $M_n \in \mathbb{R}$.
    Reality announces $y_n \in [-1, 1]$.
    $\mathcal{K}_n := \mathcal{K}_{n-1} + M_n(y_n - m_n)$.
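The bookkeeping of a single round can be sketched in a few lines of code. This is our own illustration, not part of the book's formalism; the function name and the example values are invented:

```python
# Sketch of one round of the forecasting game: Forecaster prices a payoff,
# Skeptic buys M_n units of it, Reality reveals the outcome, and Skeptic's
# capital changes by M_n * (y_n - m_n).

def play_round(capital, m_n, M_n, y_n):
    """Return Skeptic's capital after one round.

    m_n: Forecaster's price, in [-1, 1]
    M_n: Skeptic's move (any real number, positive, negative, or zero)
    y_n: Reality's outcome, in [-1, 1]
    """
    assert -1 <= m_n <= 1 and -1 <= y_n <= 1
    return capital + M_n * (y_n - m_n)

# If Reality sets y_n = m_n, Skeptic's net gain is zero, whatever he bets.
capital = play_round(1.0, m_n=0.5, M_n=10.0, y_n=0.5)
```

The last line illustrates the point made above: by matching the forecast exactly, Reality keeps Skeptic from making money.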
We call protocols of this type, in which Skeptic can test the consistency of forecasts with outcomes by gambling at prices given by the forecasts, testing protocols. We define the notion of a testing protocol precisely in Chapter 7 (see the discussion following Protocol 7.12).
To make a testing protocol into a game, we must specify goals for the players. We will do this for Protocol 1.1 in various ways. But we never assume that Skeptic merely wants to maximize his capital, and usually we do not assume that his gains are losses to the other players.
We can vary Protocol 1.1 in many ways, some of which will be important in this or later chapters. Here are some examples.

- Instead of $[-1, 1]$, we can use $[-C, C]$, where $C$ is positive but different from 1, as the move space for Forecaster and Reality. Aside from occasional rescaling, this will not change the results of this chapter.
- We can stop playing after a finite number of rounds. We do this in some of the testing protocols we use in Chapter 2.
- We can require Forecaster to set $m_n$ equal to zero on every round. We will impose this requirement in most of this chapter, as it entails no loss of generality for the results we are proving.
- We can use a two-element set, say $\{0, 1\}$ or $\{-1, 1\}$, as Reality's move set instead of $[-1, 1]$. When we use $\{0, 1\}$ and require Forecaster to announce the same number $p$ on each round, the picture reduces to coin tossing (see Section 1.3).
As we have explained, our emphasis in this chapter and in most of the book is on strategies for Skeptic. We show that Skeptic can achieve certain goals regardless of how Forecaster and Reality move. Since these are worst‐case results for Skeptic, they remain valid when we weaken Forecaster or Reality in any way: hiding information from them, requiring them to follow some strategy specified in advance, allowing Skeptic to influence their moves, or otherwise restricting their moves. They also remain valid when we enlarge Skeptic's discretion. They remain valid even when Skeptic's opponents know the strategy Skeptic will play; if a strategy for Skeptic reaches a certain goal no matter how his opponents move, it will reach this goal even if the opponents know it will be played.
We will present protocols in the style of Protocol 1.1 throughout the book. Unless otherwise stated, the players always have perfect information. They move in the order listed, and they see each other's moves. In general, we will use the term strategy as it is usually used in the study of perfect‐information games: unless otherwise stated, a strategy is a pure strategy, not a mixed or randomized strategy.
Skeptic might become infinitely rich in the limit as play continues:

$\lim_{n\to\infty} \mathcal{K}_n = \infty.$   (1.1)
Since Reality can always keep Skeptic from making money, Skeptic cannot expect to win a game in which (1.1) is his goal. But as we will see, Skeptic can play so that Reality and Forecaster will be forced, if they are to avoid (1.1), to make their moves satisfy various other conditions – conditions that can be said to validate the forecasts. Moreover, he can achieve this without risking more than the capital with which he begins.
The following game‐theoretic bounded law of large numbers is one example of what Skeptic can achieve.
In Protocol 1.1, Skeptic has a strategy that starts with nonnegative capital ($\mathcal{K}_0 \ge 0$), does not risk more than this initial capital ($\mathcal{K}_n \ge 0$ for all $n$ is guaranteed), and guarantees that either

$\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} (y_i - m_i) = 0$   (1.2)

or (1.1) will happen.
Later in this chapter (Corollary 1.9), we will derive Borel's law of large numbers for coin tossing as a corollary of this proposition.
After simplifying our terminology and our protocol, we will prove Proposition 1.2 by constructing the required strategy for Skeptic. We will do this step by step, using a series of lemmas as we proceed. First we formalize the notion of forcing by Skeptic and show that a weaker concept suffices for the proof (Lemma 1.4). Then we construct a strategy that forces Reality to eventually keep the average of her moves less than a given small positive number $\epsilon$ in order to keep Skeptic's capital from tending to infinity, and another strategy that similarly forces her to keep it greater than $-\epsilon$ (Lemma 1.7). Then we average the two strategies and average further over smaller and smaller values of $\epsilon$. The final average shares the accomplishments of the individual strategies (Lemma 1.6) and hence forces Reality to keep the average closer and closer to zero.
The strategy resulting from this construction can be called a momentum strategy. Whichever side of zero the average of the first $n-1$ of the $y_i$ falls on, Skeptic bets that the $n$th will also fall on that side: he makes $M_n$ positive if the average so far is positive, negative if the average so far is negative. Reality must make the average converge to zero in order to keep Skeptic's capital from tending to infinity.
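As an informal illustration only (this is not the strategy constructed in the proof, which averages over values of $\epsilon$), one can simulate a crude momentum bettor that stakes a fixed fraction of its capital on the side of zero where the running average falls. The fraction 0.05 and the outcome sequences below are our own choices:

```python
# Illustrative momentum betting in the setting m_n = 0: bet a small fixed
# fraction of current capital in the direction of the running average of the
# past outcomes. Not the exact proof strategy, just a sketch of the idea.

def momentum_capital(outcomes, fraction=0.05, initial=1.0):
    capital, total = initial, 0.0
    for n, y in enumerate(outcomes, start=1):
        avg = total / (n - 1) if n > 1 else 0.0
        # Bet on the same side of zero as the average so far.
        sign = 1 if avg > 0 else (-1 if avg < 0 else 0)
        bet = fraction * capital * sign
        capital += bet * y          # outcomes lie in [-1, 1]
        total += y
    return capital

# Since |bet * y| <= fraction * capital, the capital can never go negative.
# Against a Reality whose average does not converge to zero, e.g. a
# persistently biased sequence, the capital grows without bound.
biased = [0.5] * 200
```

Running `momentum_capital(biased)` shows the capital growing by a factor of roughly $1.025$ per round once the bias is detected, while an outcome sequence whose average stays near zero keeps the capital bounded.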
We need a more concise way of saying that Skeptic can force Forecaster and Reality to do something.
Let us call a condition $E$ on the moves made by Skeptic's opponents an event. We say that a strategy for Skeptic forces an event $E$ if it guarantees both of the two following conditions no matter how Skeptic's opponents move:

$\mathcal{K}_n \ge 0$ for $n = 0, 1, 2, \ldots$   (1.3)

and

either $E$ happens or $\lim_{n\to\infty} \mathcal{K}_n = \infty$.   (1.4)
When Skeptic has a strategy that forces $E$, we say that Skeptic can force $E$. Proposition 1.2 can be restated by saying that Skeptic can force (1.2). When Skeptic can force $E$, we also say that $E$ is almost sure, or happens almost surely. The concepts of forcing and being almost sure apply to all testing protocols (see Sections 6.2, 7.2, and 8.2).
It may deepen our understanding to list some other ways of expressing conditions (1.3) and (1.4):

- If $\mathcal{K}_n < 0$, we say that Skeptic is bankrupt at the end of the $n$th round. So the condition that (1.3) holds no matter how the opponents move can be expressed by saying that the strategy does not risk bankruptcy. It can also be expressed by saying that $\mathcal{K}_0 \ge 0$ and that the strategy risks only this initial capital.
- If Skeptic has a strategy that forces $E$, and $c$ is a positive number, then Skeptic has a strategy for forcing $E$ that begins by setting $\mathcal{K}_0 := c$. To see this, consider these two cases: If the strategy forcing $E$ begins by setting $\mathcal{K}_0 := a$, where $a \le c$, then change the strategy by setting $\mathcal{K}_0 := c$, leaving it otherwise unchanged. If the strategy forcing $E$ begins by setting $\mathcal{K}_0 := a$, where $a > c$, then change the strategy by multiplying $\mathcal{K}_0$ and all the moves $M_n$ it prescribes for Skeptic by $c/a$. In both cases, (1.3) and (1.4) will still hold for the modified strategy.
- We can weaken (1.3) to the condition that there exists a real number $c$ such that Skeptic's capital never falls below $c$. Indeed, if Skeptic has a strategy that guarantees that (1.4) holds and his capital never falls below a negative number $c$, then we obtain a strategy that guarantees (1.3) and (1.4) simply by adding $-c$ to the initial capital.
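The rescaling argument above can be checked concretely: multiplying a strategy's initial capital and all of its moves by a positive factor $r$ multiplies its whole capital process by $r$, so nonnegativity and divergence to infinity survive. A sketch, with a move rule and an outcome sequence invented purely for illustration:

```python
# Scaling a strategy scales its entire capital process. We represent a
# strategy by an initial capital and a move rule mapping past outcomes to
# the next bet, and compute the resulting sequence of capitals.

def capital_process(initial, move_rule, outcomes):
    capitals, past = [initial], []
    for y in outcomes:
        bet = move_rule(past)
        capitals.append(capitals[-1] + bet * y)
        past.append(y)
    return capitals

rule = lambda past: 0.5 * sum(past[-3:])    # arbitrary example move rule
scaled = lambda past: 0.25 * rule(past)      # all moves multiplied by r = 1/4

ys = [0.3, -1.0, 0.8, 0.1, -0.4]
orig = capital_process(4.0, rule, ys)        # starts with capital a = 4
resc = capital_process(1.0, scaled, ys)      # starts with c = 1 = (1/4) * a
# Every entry of resc equals 1/4 of the corresponding entry of orig.
```

Since each rescaled capital is exactly $r$ times the original, $\mathcal{K}_n \ge 0$ and $\lim_n \mathcal{K}_n = \infty$ hold for the rescaled process exactly when they hold for the original.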
Let us also reiterate that a strategy for Skeptic that forces $E$ has a double significance:

- On the one hand, the strategy can be regarded as assurance that $E$ will happen, under the hypothesis that the forecasts are good enough that Skeptic cannot multiply infinitely the capital he risks by betting against them.
- On the other hand, the strategy and $E$ can be seen as a test of the forecasts. If $E$ does not happen, then doubt is cast on the forecasts by Skeptic's success in betting against them.
To make the strategy we construct as simple as possible, we simplify Protocol 1.1 by assuming that Forecaster is constrained to set all the $m_n$ equal to zero. Under this assumption, Skeptic's goal (1.2) simplifies to the goal

$\lim_{n\to\infty} \bar{y}_n = 0,$   (1.5)

where $\bar{y}_n$ is the average $\frac{1}{n}\sum_{i=1}^{n} y_i$ of $y_1, \ldots, y_n$, and the protocol reduces to the following testing protocol, where Reality is Skeptic's only opponent.

Protocol 1.3
Skeptic announces $\mathcal{K}_0 \in \mathbb{R}$.
FOR $n = 1, 2, \ldots$:
    Skeptic announces $M_n \in \mathbb{R}$.
    Reality announces $y_n \in [-1, 1]$.
    $\mathcal{K}_n := \mathcal{K}_{n-1} + M_n y_n$.
Whenever we modify a testing protocol by constraining Skeptic's opponents but without constraining Skeptic or changing the rule by which his capital changes, we say that we are specializing the protocol. We call the modified protocol a specialization.
If Skeptic can force an event in one protocol, then he can force it in any specialization, because his opponents are weaker there. The following lemma confirms that this implication also goes the other way in the particular case of the specialization from Protocol 1.1 to Protocol 1.3.
Suppose Skeptic can force (1.5) in Protocol 1.3. Then he can force (1.2) in Protocol 1.1.
By assumption, Skeptic has a Protocol 1.3 strategy that multiplies its positive initial capital infinitely if (1.5) fails. Consider the Protocol 1.1 strategy that begins with the same initial capital and, when Forecaster and Reality move $m_1, \ldots, m_n$ and $y_1, \ldots, y_{n-1}$, moves $M_n := M'_n / 2$ on the $n$th round, where $M'_n$ is the Protocol 1.3 strategy's $n$th-round move when Reality moves

$\frac{y_1 - m_1}{2}, \frac{y_2 - m_2}{2}, \ldots$   (1.6)

(The $y_i$ and $m_i$ being in $[-1, 1]$, the $(y_i - m_i)/2$ are also in $[-1, 1]$.) When Reality moves (1.6) in Protocol 1.3, the strategy there multiplies its capital infinitely unless

$\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} \frac{y_i - m_i}{2} = 0.$   (1.7)

The $n$th-round gain by the new strategy in Protocol 1.1, $M_n(y_n - m_n) = M'_n \frac{y_n - m_n}{2}$, being the same as the gain in Protocol 1.3 when Reality moves (1.6), this new strategy will also multiply its capital infinitely unless (1.7) happens. But (1.7) is equivalent to (1.2).
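The accounting identity at the heart of this proof – that playing half the Protocol 1.3 move in Protocol 1.1 reproduces the Protocol 1.3 gain on the halved difference – can be checked numerically. A sketch with invented move and outcome values:

```python
# Check the gain translation used in the reduction: if Skeptic plays
# M_n = M'_n / 2 in Protocol 1.1, his gain M_n * (y_n - m_n) equals the
# Protocol 1.3 gain M'_n * z_n on the translated outcome z_n = (y_n - m_n)/2.

def translated_outcome(y, m):
    z = (y - m) / 2
    assert -1 <= z <= 1      # stays inside Reality's move space [-1, 1]
    return z

def gains_match(M_prime, y, m):
    z = translated_outcome(y, m)
    gain_11 = (M_prime / 2) * (y - m)    # gain in Protocol 1.1
    gain_13 = M_prime * z                # gain in Protocol 1.3
    return abs(gain_11 - gain_13) < 1e-12

ok = all(gains_match(Mp, y, m)
         for Mp in (-3.0, 0.0, 7.5)
         for y in (-1.0, 0.25, 1.0)
         for m in (-1.0, 0.0, 0.9))
```

The check covers extreme values of $y_n$ and $m_n$, confirming both that the translated outcomes stay in $[-1, 1]$ and that the two capital processes move in lockstep.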
We now introduce some terminology and notation that is tailored to Protocol 1.3 but applies with some adjustment and elaboration to other testing protocols. Some basic concepts are summarized in Table 1.1.
Table 1.1 Basic concepts in Protocol 1.3

Concept           | Definition                            | Notation
Situation         | Sequence of moves by Reality          | $s = y_1 \ldots y_n$
Situation space   | Set of all situations                 | $\mathbb{S}$
Initial situation | Empty sequence                        | $\Box$
Path              | Complete sequence of moves by Reality | $\omega = y_1 y_2 \ldots$
Sample space      | Set of all paths                      | $\Omega$
Event             | Subset of the sample space            | $E \subseteq \Omega$
Variable          | Function on the sample space          | $X : \Omega \to \mathbb{R}$
We begin with the concept of a situation. In general, this is a finite sequence of moves by Skeptic's opponents. In Protocol 1.3, it is a sequence of moves by Reality – i.e. a finite sequence of numbers from $[-1, 1]$. We use the notation for sequences described in the section on terminology and notation at the end of the book: omitting commas, writing $\Box$ for the empty sequence, and writing $\omega_n$ for the $n$th element of an infinite sequence $\omega$ and $\omega^n$ for its prefix of length $n$. When Skeptic and Reality are playing the $n$th round, after Reality has made the moves $y_1, \ldots, y_{n-1}$, they are in the situation $y_1 \ldots y_{n-1}$. They are in the initial situation $\Box$ during the first round of play. We write $\mathbb{S}$ for the set of all situations, including $\Box$, and we call $\mathbb{S}$ the situation space.
An infinite sequence of elements of $[-1, 1]$ is a path. This is a complete sequence of moves by Reality. We write $\Omega$ for the set of all paths, and we call $\Omega$ the sample space. We call a subset of $\Omega$ an event. We often use an uppercase letter such as $E$ to denote an event, but we also sometimes use a statement about the path $\omega$. As stated earlier, an event is a condition on the moves by Skeptic's opponents. Thus (1.5) is an event, but (1.3) and (1.4) are not events.
We call a real-valued function on $\Omega$ a variable. We often use an uppercase letter such as $X$ to denote a variable, but we also use expressions involving $\omega$. For example, we use $\bar{y}_n$ to denote the variable that maps $\omega$ to $(\omega_1 + \cdots + \omega_n)/n$. We can even think of $y_n$ as a variable; it is the variable that maps $\omega$ to $\omega_n$.
We call a real-valued function on $\mathbb{S}$ a process. Given a process $F$ and a nonnegative integer $n$, we write $F_n$ for the variable $\omega \mapsto F(\omega^n)$. The variables $F_0, F_1, \ldots$ fully determine the process $F$. We sometimes define a process by specifying a sequence of variables $F_0, F_1, \ldots$ such that each $F_n(\omega)$ depends only on $\omega^n$. In measure-theoretic probability it is conventional to call a sequence of variables $F_0, F_1, \ldots$ such that $F_n(\omega)$ depends only on $\omega^n$ an adapted process. We drop the adjective adapted because all the processes we consider in this book are adapted.
We call a real-valued function $M$ on $\mathbb{S} \setminus \{\Box\}$ a predictable process if for all $n$ and $\omega$, $M(\omega^n)$ depends only on $\omega^{n-1}$. Strictly speaking, a predictable process is not a process, because it is not defined on all of $\mathbb{S}$. But like a process, a predictable process can be specified as a sequence of variables, in this case the sequence $M_1, M_2, \ldots$ given by $M_n(\omega) := M(\omega^n)$.
A strategy for Skeptic in Protocol 1.3 can be represented as a pair $(\alpha, M)$, where $\alpha \in \mathbb{R}$ and $M$ is a predictable process. Here $\alpha$ is the value the strategy specifies for the initial capital $\mathcal{K}_0$, and $M(y_1 \ldots y_n)$ is the move $M_n$ it specifies in the situation $y_1 \ldots y_{n-1}$. The predictability property is required because in this situation Skeptic does not yet know $y_n$, Reality's move on the $n$th round.
The strategies for Skeptic form a vector space: (i) if $P = (\alpha, M)$ is a strategy for Skeptic and $c$ is a real number, then $cP := (c\alpha, cM)$ is a strategy for Skeptic, and (ii) if $P_1 = (\alpha_1, M_1)$ and $P_2 = (\alpha_2, M_2)$ are strategies for Skeptic, then $P_1 + P_2 := (\alpha_1 + \alpha_2, M_1 + M_2)$ is as well.
A strategy $P = (\alpha, M)$ for Skeptic determines a process whose value in each situation $s$ is Skeptic's capital in $s$ when he follows $P$. This process, which we designate by $\mathcal{K}^P$, is given by

$\mathcal{K}^P(\Box) := \alpha$

and

$\mathcal{K}^P(s y) := \mathcal{K}^P(s) + M(s y)\, y$

for $s \in \mathbb{S}$ and $y \in [-1, 1]$. We refer to $\mathcal{K}^P$ as $P$'s capital process.
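The recursive definition of the capital process can be mirrored directly in code. The following is our own sketch, not the book's notation: situations are represented as tuples of Reality's moves, and the example move rule is invented (note that it looks only at the prefix before the last move, as predictability requires).

```python
# Sketch of the capital process K^P for Protocol 1.3: a strategy is a pair
# (alpha, M), where alpha is the initial capital and M maps a situation
# y_1 ... y_n to the move made before y_n was revealed, so M may consult
# only the prefix y_1 ... y_{n-1}.

def capital(alpha, M, situation):
    """Evaluate K^P at a situation, given as a tuple of moves in [-1, 1]."""
    if not situation:                 # the initial (empty) situation
        return alpha
    s, y = situation[:-1], situation[-1]
    # Recursion mirrors K^P(s y) = K^P(s) + M(s y) * y.
    return capital(alpha, M, s) + M(situation) * y

# Example predictable move rule: bet the previous outcome (0 on round one).
# It reads sit[-2], i.e. y_{n-1}, never the newly revealed y_n = sit[-1].
M = lambda sit: sit[-2] if len(sit) >= 2 else 0.0

k = capital(1.0, M, (0.5, 0.5, -1.0))
```

Tracing the example: the capital starts at 1.0, gains $0.5 \times 0.5 = 0.25$ on the second round, and loses $0.5 \times 1 = 0.5$ on the third, ending at 0.75.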
If $c_1$ and $c_2$ are nonnegative numbers adding to 1, and $P_1$ and $P_2$ are strategies that both begin with capital
