Introduces a powerful new approach to financial risk modeling with proven strategies for its real-world applications. The 2008 credit crisis did much to debunk the much-touted powers of Value at Risk (VaR) as a risk metric. Unlike most authors on VaR, who focus on what it can do, in this book the author looks at what it cannot. In clear, accessible prose, finance practitioner Max Wong describes the VaR measure and what it was meant to do, then explores its various failures in the real world of crisis risk management. More importantly, he lays out a revolutionary new method of measuring risks, Bubble Value at Risk, that is countercyclical and offers a well-tested buffer against market crashes.
* Describes Bubble VaR, a more macro-prudential risk measure proven to avoid the limitations of VaR by providing a more accurate estimate of risk exposure over market cycles
* Makes a strong case that analysts and risk managers need to unlearn the existing "science" of risk measurement and discover more robust approaches to calculating risk capital
* Illustrates every key concept and formula with an abundance of practical, numerical examples, most of them provided in interactive Excel spreadsheets
* Features numerous real-world applications throughout, based on the author's firsthand experience as a veteran financial risk analyst
Contents
About the Author
Foreword
Preface
Acknowledgments
Part One: Background
Chapter 1: Introduction
1.1 The Evolution of the Riskometer
1.2 Taleb’s Extremistan
1.3 The Turner Procyclicality
1.4 The Common Sense of Bubble Value-at-Risk (BuVaR)
Notes
Chapter 2: Essential Mathematics
2.1 Frequentist Statistics
2.2 Just Assumptions
2.3 Quantiles, VaR, and Tails
2.4 Correlation and Autocorrelation
2.5 Regression Models and Residual Errors
2.6 Significance Tests
2.7 Measuring Volatility
2.8 Markowitz Portfolio Theory
2.9 Maximum Likelihood Method
2.10 Cointegration
2.11 Monte Carlo Method
2.12 The Classical Decomposition
2.13 Quantile Regression Model
2.14 Spreadsheet Exercises
Notes
Part Two: Value at Risk Methodology
Chapter 3: Preprocessing
3.1 System Architecture
3.2 Risk Factor Mapping
3.3 Risk Factor Proxies
3.4 Scenario Generation
3.5 Basic VaR Specification
Notes
Chapter 4: Conventional VaR Methods
4.1 Parametric VaR
4.2 Monte Carlo VaR
4.3 Historical Simulation VaR
4.4 Issue: Convexity, Optionality, and Fat Tails
4.5 Issue: Hidden Correlation
4.6 Issue: Missing Basis and Beta Approach
4.7 Issue: The Real Risk of Premiums
4.8 Spreadsheet Exercises
Notes
Chapter 5: Advanced VaR Methods
5.1 Hybrid Historical Simulation VaR
5.2 Hull-White Volatility Updating VaR
5.3 Conditional Autoregressive VaR (CAViaR)
5.4 Extreme Value Theory VaR
5.5 Spreadsheet Exercises
Notes
Chapter 6: VaR Reporting
6.1 VaR Aggregation and Limits
6.2 Diversification
6.3 VaR Analytical Tools
6.4 Scaling and Basel Rules
6.5 Spreadsheet Exercises
Notes
Chapter 7: The Physics of Risk and Pseudoscience
7.1 Entropy, Leverage Effect, and Skewness
7.2 Volatility Clustering and the Folly of i.i.d.
7.3 “Volatility of Volatility” and Fat Tails
7.4 Extremistan and the Fourth Quadrant
7.5 Regime Change, Lagging Riskometer, and Procyclicality
7.6 Coherence and Expected Shortfall
7.7 Spreadsheet Exercises
Notes
Chapter 8: Model Testing
8.1 The Precision Test
8.2 The Frequency Back Test
8.3 The Bunching Test
8.4 The Whole Distribution Test
8.5 Spreadsheet Exercises
Notes
Chapter 9: Practical Limitations of VaR
9.1 Depegs and Changes to the Rules of the Game
9.2 Data Integrity Problems
9.3 Model Risk
9.4 Politics and Gaming
Notes
Chapter 10: Other Major Risk Classes
10.1 Credit Risk (and CreditMetrics)
10.2 Liquidity Risk
10.3 Operational Risk
10.4 The Problem of Aggregation
10.5 Spreadsheet Exercises
Notes
Part Three: The Great Regulatory Reform
Chapter 11: Regulatory Capital Reform
11.1 Basel I and Basel II
11.2 The Turner Review
11.3 Revisions to Basel II Market Risk Framework (Basel 2.5)
11.4 New Liquidity Framework
11.5 The New Basel III
11.6 The New Framework for the Trading Book
11.7 The Ideal Capital Regime
Notes
Chapter 12: Systemic Risk Initiatives
12.1 Soros’ Reflexivity, Endogenous Risks
12.2 CrashMetrics
12.3 New York Fed CoVaR
12.4 The Austrian Model and BOE RAMSI
12.5 The Global Systemic Risk Regulator
12.6 Spreadsheet Exercises
Notes
Part Four: Introduction to Bubble Value-at-Risk (BuVaR)
Chapter 13: Market BuVaR
13.1 Why an Alternative to VaR?
13.2 Classical Decomposition, New Interpretation
13.3 Measuring the Bubble
13.4 Calibration
13.5 Implementing the Inflator
13.6 Choosing the Best Tail-Risk Measure
13.7 Effect on Joint Distribution
13.8 The Scope of BuVaR
13.9 How Good Is the BuVaR Buffer?
13.10 The Brave New World
13.11 Spreadsheet Exercises
Notes
Chapter 14: Credit BuVaR
14.1 The Credit Bubble VaR Idea
14.2 Model Formulation
14.3 Behavior of Response Function
14.4 Characteristics of Credit BuVaR
14.5 Interpretation of Credit BuVaR
14.6 Spreadsheet Exercises
Notes
Chapter 15: Acceptance Tests
15.1 BuVaR Visual Checks
15.2 BuVaR Event Timing Tests
15.3 BuVaR Cyclicality Tests
15.4 Credit BuVaR Parameter Tuning
Notes
Chapter 16: Other Topics
16.1 Diversification and Basis Risks
16.2 Regulatory Reform and BuVaR
16.3 BuVaR and the Banking Book: Response Time as Risk
16.4 Can BuVaR Pick Tops and Bottoms Perfectly?
16.5 Postmodern Risk Management
16.6 Spreadsheet Exercises
Note
Chapter 17: Epilogue: Suggestions for Future Research
Note
About the Website
Bibliography
Index
Founded in 1807, John Wiley & Sons is the oldest independent publishing company in the United States. With offices in North America, Europe, Australia and Asia, Wiley is globally committed to developing and marketing print and electronic products and services for our customers’ professional and personal knowledge and understanding.
The Wiley Finance series contains books written specifically for finance and investment professionals as well as sophisticated individual investors and their financial advisors. Book topics range from portfolio management to e-commerce, risk management, financial engineering, valuation and financial instrument analysis, as well as much more.
For a list of available titles, visit our Web site at www.WileyFinance.com.
Copyright © 2013 by Max Wong Chan Yue
Published by John Wiley & Sons Singapore Pte. Ltd.
1 Fusionopolis Walk, #07-01, Solaris South Tower, Singapore 138628
All rights reserved.
First edition published by Immanuel Consulting Pte. Ltd. in 2011
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as expressly permitted by law, without either the prior written permission of the Publisher, or authorization through payment of the appropriate photocopy fee to the Copyright Clearance Center. Requests for permission should be addressed to the Publisher, John Wiley & Sons Singapore Pte. Ltd., 1 Fusionopolis Walk, #07-01, Solaris South Tower, Singapore 138628, tel: 65–6643–8000, fax: 65–6643–8008, e-mail: [email protected].
Limit of Liability/Disclaimer of Warranty: While the publisher, author and contributors have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher, authors, or contributors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Excel is a registered trademark of the Microsoft Corporation.
RiskMetrics is a registered trademark of RiskMetrics Group.
CreditMetrics is a registered trademark of RiskMetrics Solutions.
CreditRisk+ is a registered trademark of Credit Suisse Group.
Bloomberg is a registered trademark of Bloomberg L.P.
CrashMetrics is a registered trademark of Paul Wilmott and Philip Hua.
BUVAR™ is a registered trademark of Wong Chan Yue.
Other Wiley Editorial Offices
John Wiley & Sons, 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
John Wiley & Sons (Canada) Ltd., 5353 Dundas Street West, Suite 400, Toronto, Ontario, M9B 6H8, Canada
John Wiley & Sons Australia Ltd., 42 McDougall Street, Milton, Queensland 4064, Australia
Wiley-VCH, Boschstrasse 12, D-69469 Weinheim, Germany
Library of Congress Cataloging-in-Publication Data
ISBN 978-1-118-55034-2 (Hardcover)
ISBN 978-1-118-55035-9 (ePDF)
ISBN 978-1-118-55036-6 (Mobi)
ISBN 978-1-118-55037-3 (ePub)
To my heavenly Father, who gave me this assignment
About the Author
Max Wong is a specialist in the area of risk modeling and Basel III. He started his career as a derivatives consultant at Credit Suisse First Boston in 1996. During the Asian crisis in 1998, he traded index futures at the open-outcry floor of SIMEX (now SGX). From 2003 to 2011, he worked for Standard Chartered Bank as a risk manager and senior quant. He is currently head of VaR model testing at the Royal Bank of Scotland.
He has published papers on VaR models and Basel capital, recently looking at innovative ways to model risk more effectively during crises and to deal with the issues of procyclicality and Black Swan events in our financial system. He has spoken on the subject at various conferences and seminars.
He holds a BSc in physics from the University of Malaya (1994) and an MSc in financial engineering from the National University of Singapore (2004). He is an adjunct at Singapore Management University, a member of the editorial board of the Journal of Risk Management in Financial Institutions, and a member of the steering committee of PRMIA Singapore chapter.
Foreword
Financial markets are all about risk management. Banking and capital markets activities throw up all manner of risk exposures as a matter of course, and these need to be managed accordingly such that stakeholders are comfortable. “Market risk” traditionally referred to risks arising from a change in market factors, and when we say “risk” we mean risk to the profit and loss account or to revenues. These market factors might be interest rates, foreign currency rates, customer default rates, and so on. Managers of a financial institution should expect to have some idea of the extent of their risk to these dynamic factors at any one time, so that they can undertake management action to mitigate or minimize the risk exposure. This is Finance 101 and is as old as commerce and money itself.
Measuring market exposure has always been a combination of certain methods that might be called scientific and others that might be described as application of learned judgment. I have always been a fan of "modified duration" for interest rate risk and I still recommend it. Of course it has its flaws; which estimation method doesn't? But when Value-at-Risk (VaR) was first presented to the world it appeared to promise to make the risk manager's job easier, because it seemed to offer a more accurate estimate of risk exposure at any time. And the latter was all it ever was, or claimed to be: an estimation of risk exposure. A measure of risk, no better and no worse than the competence of the person who was making use of the calculated numbers.
Unfortunately, in some quarters VaR was viewed as being somehow a substitute for “risk management” itself. It didn’t help that the assumptions underpinning every single methodology for calculating VaR were never widely understood, at least not at the senior executive level, which made explaining losses that exceeded the VaR estimate even more difficult than usual. In 2012 JPMorgan announced losses of up to $9 billion in a portfolio of corporate credits that were managed by its London-based chief investment office. Depending on which media report one follows, the VaR number reported for the bank as a whole the day before the announcement was alleged to be between 1 percent and 10 percent of this value. Is there any point in going to the trouble of calculating this estimate if at any time it can be demonstrated to be so completely off the mark?
The short answer is yes and no. VaR is a tool, nothing more nor less, and like all tools must be used within its limitations. One could argue that a bank the size and complexity of JPMorgan is going to struggle to ever get a meaningful estimate of its true risk exposure under all market conditions, but therein lies the value and the worthlessness of any statistical measure like VaR: it is reasonable for some, indeed most, of the time but when it does get out of kilter with market movements the difference could be huge (an order of magnitude of up to 100 times out, if some recent headlines are to be believed). It reminds one of the apocryphal story of the statistician who drowned in a lake that had an "average" depth of six inches.
The circle is complete of course. It was JPMorgan that gave the world VaR back in 1994 (one or two other banks, including CSFB, were applying a similar sort of methodology around the same time), and eighteen years later the bank saw for itself just how inaccurate it could be. Does that mean we should dispense with VaR and go back to what we had before, or look to devise some other method?
Again, yes and no. The key accompanying text for any statistical measurement, VaR most definitely included, has always been “use with care, and only within limitations.” That means, by all means, continue with your chosen VaR methodology for now, but perhaps be aware that an actual loss under conditions that the model is not picking up could well be many times beyond your VaR estimate. In other words, bring in your interest rate risk and credit risk exposure limits because the true picture is going to be in excess of what you think it is. That is true for whichever firm one is working at.
But that isn't all. Knowing VaR's limitations means also seeking to develop an understanding of what it doesn't cover. And this is where Max Wong's very worthwhile and interesting book comes in. In the Basel III era of "macroprudential regulation," Mr Wong applies a similar logic to VaR and presents a new concept of Bubble VaR, which is countercyclical in approach and would be pertinent to a bank running complex exposures across multiple markets and asset classes. But I also rate highly the first half of the book, which gives an accessible description of the vanilla VaR concept and its variations before launching into its limitations and how Bubble VaR is a means of extending the concept's usefulness. The content herein is technical and arcane by necessity, but remains firmly in the domain of "know your risk," which is something every senior banker should be obsessed with.
This book is a fantastic addition to the finance literature, written by that rare beast in financial markets, management consulting, or academia: a person delivering something of practical value to the practitioner, something that advances our understanding and appreciation of finance as a discipline. Finance 201, so to speak, for everyone with an interest in financial risk management.
Professor Moorad Choudhry
Department of Mathematical Sciences
Brunel University
16th December 2012
Preface
This is a story of the illusion of risk measurement. Financial risk management is in a state of confusion. The 2008 credit crisis wreaked havoc on the Basel pillars of supervision by exposing the cracks in the current regulatory framework that had allowed the crisis to fester and ultimately grow into the greatest financial crisis since the Great Depression. Policy responses were swift—the UK's Financial Services Authority (FSA) published the Turner Review, which calls for a revamp of many aspects of banking regulation; the Bank for International Settlements (BIS) speedily passed a revision to Basel II; and the Obama administration called for a reregulation of the financial industry, reversing the Greenspan legacy of deregulation. These initiatives eventually evolved into the Basel III framework and the Dodd-Frank Act, respectively.
The value-at-risk risk measure, VaR, a central ideology for risk management, was found to be wholly inadequate during the crisis. Critically, this riskometer is used as the basis for regulatory capital—the safety buffer money set aside by banks to protect against financial calamities. The foundation of risk measurement is now questionable.
The first half of this book develops the VaR riskometer with emphasis on its traditionally known weaknesses, and talks about current advances in risk research. The underlying theme throughout the book is that VaR is a faulty device during turbulent times, and by its mathematical sophistication it misled risk controllers into an illusion of safety. The author traces the fundamental flaw of VaR to its statistical assumptions—of normality, i.i.d., and stationarity—the Gang of Three.
These primitive assumptions are very pervasive in the frequentist statistics philosophy where probability is viewed as an objective notion and can be measured by sampling. A different school of thought, the Bayesian school, argues for subjective probability and has developed an entire mathematical framework to incorporate the observer’s opinion into the measurement (but this is subject matter for another publication). We argue that the frequentist’s strict mathematical sense often acts as a blinder that restricts the way we view and model the real world. In particular, two “newly” uncovered market phenomena—extremistan and procyclicality—cannot be engaged using the frequentist mindset. There were already a few other well-known market anomalies that tripped the VaR riskometer during the 2008 crisis. All these will be detailed later.
In Part Four of the book, the author proposes a new risk metric called bubble VaR (buVaR), which does not invoke any of the said assumptions. BuVaR is not really a precise measurement of risk; in fact, it presumes that extreme loss events are unknowable (extremistan) and moves on to the more pressing problem—how do we build an effective buffer for regulatory capital that is countercyclical and that safeguards against extreme events?
This book is an appeal (as is this preface) to the reader to consider a new paradigm of viewing risk—that one need not measure risk (with precision) to protect against it. By being obsessively focused on measuring risk, the risk controller may be fooled by the many pitfalls of statistics and randomness. This could lead to a false sense of security and control over events that are highly unpredictable. It is ultimately a call for good judgment and pragmatism.
Since this book was first published in 2011, the financial industry has experienced a sea change in Basel regulation and new risk modeling requirements under the Basel III capital framework. There are also exciting developments in the modeling of risk at the research frontier. This revised edition is an update to include some of these topics, even though the primary objective remains to encourage an alternate paradigm of looking at market risk.
This book is intended to reach out to the top management of banks (CEOs and CROs), to regulators, to policy makers, and to risk practitioners—not all of whom may be as quantitatively inclined as the specialized risk professional. But they are the very influencers of the coming financial reregulation drama. We are living in epic times, and ideas help shape the world for better (or for worse). It is hoped that the ideas in this book can open up new and constructive research into countercyclical measures of risk.
With this target audience in mind, this book is written in plain English with as few Greek letters as possible; the focus is on concepts (and illustrations) rather than mathematics. Because it is narrowly focused on the topic, it can be self-contained. No prior knowledge of risk management is required; preuniversity-level algebra and some basic financial product knowledge are assumed.
In order to internalize the idea of risk, this book takes the reader through the developmental path of VaR starting from its mathematical foundation to its advanced forms. In this journey, fault lines and weaknesses of this methodology are uncovered and discussed. This will set the stage for the new approach, buVaR.
Chapter 2 goes into the foundational mathematics of VaR with emphasis on intuition and concepts rather than mathematical rigor.
Chapter 3 introduces the basic building blocks used in VaR. The conventional VaR systems are then formalized in Chapter 4. At the end of the chapter, readers will be able to calculate VaR on a simple spreadsheet and experiment with the various nuances of VaR.
Chapter 5 discusses some advanced VaR models developed in academia in the last decade. They are interesting and promising, and are selected to give the reader a flavor of current risk research.
Chapter 6 deals with the tools used by banks for VaR reporting. It also contains a prelude to the Basel Rules used to compute minimum capital.
Chapter 7 explores the phenomenology of risks. In particular, it details the inherent weaknesses of VaR and the dangers of extreme risks not captured by VaR.
Chapter 8 covers the statistical tests used to measure the goodness of a VaR model.
Chapter 9 discusses the weaknesses of VaR, which are not of a theoretical nature. These are practical problems commonly encountered in VaR implementation.
Since this book deals primarily with market risk, Chapter 10 is a minor digression devoted to other (nonmarket) risk classes. A broad understanding is necessary for the reader to appreciate the academic quest (and the industry’s ambition) for a unified risk framework where all risks are modeled under one umbrella.
Chapter 11 gives a brief history of the Basel capital framework. It then proceeds to summarize the key regulatory reforms (Basel III) that were introduced from 2009 to 2010.
Chapter 12 discusses developments in measuring and detecting systemic risks. These are recent research initiatives by regulators who are concerned about global crisis contagion. Network models are introduced with as little math as possible. The aim is to give the reader a foretaste of this important direction of development.
The final part of this book, Part Four—spanning five chapters in total—introduces various topics of bubble-VaR. Chapter 13 lays the conceptual framework for buVaR, formalized for market risk.
Chapter 14 shows that with a slight modification, the buVaR idea can be expanded to cover credit risks, including default risk.
Chapter 15 contains the results of various empirical tests of the effectiveness of buVaR.
Chapter 16 is a concluding chapter that covers miscellaneous topics for buVaR. In particular, it summarizes how buVaR is able to meet the ideals proposed by the Turner Review.
Lastly, Chapter 17 lists suggestions for future research. It is a wish list for buVaR which is beyond the scope of this volume.
Throughout this book, ideas are also formulated in the syntax of Excel functions so that the reader can easily implement examples in a spreadsheet. Exercises with important case studies and examples are included as Excel spreadsheets at the end of each chapter and can be downloaded from the companion website: www.wiley.com/go/bubblevalueatrisk.
Excel is an excellent learning platform for the risk apprentice. Monte Carlo simulations are used frequently to illustrate and experiment with key ideas, and, where unavoidable, VBA functions are used. The codes are written with pedagogy (not efficiency) in mind.
Acknowledgments
This book has benefited from the valuable comments of various practitioners and academics. I am most grateful to Michael Dutch for his generous proofreading of the manuscript; and to John Chin, Shen Qinghua, Jayaradha Shanker, and Moorad Choudhry for their useful comments. The book was further enriched by reviews and suggestions from Paul Embrechts from ETHZ.
The production of the book involved many excellent individuals. I thank Lim Tai Wei for grammatical edit work, Sylvia Low for web design and the cover design team in Beijing: Michael Wong, Kenny Chai, Liu DeBin, and Xiao Bin. I am grateful to Nick Wallwork and the staff at Wiley for the production of the revised edition.
I am grateful to my wife, Sylvia Chen, for her patience and for taking care of the children—Werner and Arwen—during this project.
The 2008 global credit crisis is by far the largest boom-bust cycle since the Great Depression (1929). Asset bubbles and manias have been around since the first recorded tulip mania in 1637 and in recent decades have become such a regularity that they are even expected as often as once every 10 years (1987, 1997, 2007). Asset bubbles are in reality more insidious than most people realize, not because of the massive loss of wealth they bring (for which investor has not entertained the possibility of financial ruin?) but because they widen the social wealth gap; they impoverish the poor. The 2008 crisis highlighted this poignantly—in the run-up to the U.S. housing and credit bubble, the main beneficiaries were bankers (who sold complex derivatives on mortgages) and their cohorts. At the same time, a related commodity bubble temporarily caused a food and energy crisis in some parts of the developing world, notably Indonesia, the fourth-most-populous nation in the world and an OPEC member (until 2008). When the bubble burst, $10 trillion of U.S. public money was used to bail out failing banks and to take over toxic derivatives created by banks. On their way out, CEOs and traders of affected banks were given million-dollar contractual bonuses, even as the real economy lost a few million jobs. Just as in 1929, blue-collar workers bore the brunt of the economic downturn in the form of unemployment in the United States.
The ensuing zero interest rate policy and quantitative easing (the printing of dollars by the Fed) induced yet other bubbles—commodity prices are rising to alarming levels and asset bubbles are building up all over Asia, as investors chase non-U.S. dollar assets. We see home prices skyrocketing well beyond the reach of the average person in major cities. The wealthy are again speculating in homes, this time in East Asia. In many countries, huge public spending on infrastructure projects that is meant to support headline GDP has caused a substantial transfer of public wealth to property developers and their cohorts. The lower-income and underprivileged are once again left behind in the tide of inflation and growth.
The danger of an even larger crisis now looms. The U.S. dollar and treasuries are losing credibility as reserve currencies because of rising public debt. This means that flight-to-quality, which has in the past played the role of a pressure outlet for hot money during a crisis, is no longer an appealing option.
If there is a lesson from the 2008 crisis, it is that asset bubbles have to be reined in at all costs. It is not just John Keynes’ “animal spirits” at work here—the herd tipping the supply-demand imbalance—but the spirit of “mammon”—unfettered greed. There is something fundamentally dysfunctional about the way financial institutions are incentivized and regulated. Thus, a global regulatory reform is underway, led by the United Kingdom, the European Union (EU), and the United States, with target deadlines of 2012 and beyond. Our narrow escape from total financial meltdown has highlighted the criticality of systemic risks in an interconnected world; we can no longer think in isolated silos when solving problems in the banking system. The coming reregulation must be holistic and concerted.
One major aspect of the reform is in the way risk is measured and controlled. The great irony is that our progress in risk management has led to a new risk: the risk of risk assessment. What if we are wrong (unknowingly) about our measurement? The crisis is a rude wake-up call for regulators and bankers to reexamine our basic understanding of what risk is and how effective our regulatory safeguards are.
We start our journey with a review of how our current tools for measuring financial market risks evolved. In this chapter, we will also give a prelude to two important concepts that grew out of the crisis response—extremistan and procyclicality. These will likely become the next buzzwords in the unfolding regulatory reform drama. The final section offers bubble VaR, a new tool researched by the author, which regulators can explore to strengthen the safeguards against future financial crises.
Necessity is the mother of invention.
—Plato, Greek philosopher, 427–347 BC
Ask a retail investor what the risks of his investment portfolio are, and he will say he owns USD30,000 in stocks and USD70,000 in bonds, and he is diversified and therefore safe. A lay investor thinks in notional terms, but this can be misleading since two bonds of different duration have very different risks for the same notional exposure. This is because of the convexity behavior peculiar to bonds. The idea of duration, a better risk measure for bonds, was known to bankers as early as 1938.
In the equities world, two different stocks of the same notional amount can also give very different risk. Hence, the idea of using volatility as a risk measure was introduced by Harry Markowitz (1952). His mean-variance method not only canonized standard deviation as a risk measure but also introduced correlation and diversification within a unified framework. Modern portfolio theory was born. In 1963, William Sharpe introduced the single factor beta model. Now investors can compare the riskiness of individual stocks in units of beta relative to the overall market index.
The advent of options introduced yet another dimension of risk, which notional alone fails to quantify, that of nonlinearity. The Black-Scholes option pricing model (1973) introduced the so-called Greeks, a measurement of sensitivity to market parameters that influence a product's pricing, an idea that has gone beyond just option instruments. Risk managers now measure sensitivities to various parameters for every conceivable product and impose Greek limits on trading desks. The use of limits to control risk taking gained acceptance in the mid-1980s, but sensitivity has one blind spot—it is a local risk measure. Consider, for example, the delta of an option (i.e., option price sensitivity to a 1% change in spot) that has a strike near spot price. For a 10% adverse move in spot, the real loss incurred by the option is a lot larger than what is estimated by delta (i.e., 10 times delta). This missing risk is due to nonlinearity, a behavior peculiar to all option products. The problem is more severe for options with complex (or exotic) features.
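To make the delta blind spot concrete, here is a minimal Python sketch (not from the book, which works in Excel; the option parameters are hypothetical). It prices a plain Black-Scholes call before and after a 10% spot move and compares the full-revaluation profit and loss with the delta-only estimate; the gap is the curvature that a local sensitivity cannot see, and for a position that is short the option the true loss exceeds the linear estimate.

from math import log, sqrt, exp
from statistics import NormalDist

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price and delta of a European call option.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * exp(-r * T) * N(d2), N(d1)

S0, K, T, r, vol = 100.0, 100.0, 0.5, 0.02, 0.25   # hypothetical at-the-money call
p0, delta = bs_call(S0, K, T, r, vol)

S1 = S0 * 0.90                                     # a 10% adverse move in spot
p1, _ = bs_call(S1, K, T, r, vol)

linear_pl = delta * (S1 - S0)                      # local, delta-only P&L estimate
actual_pl = p1 - p0                                # full revaluation P&L
print(round(linear_pl, 2), round(actual_pl, 2))    # the difference is the nonlinearity effect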
The impasse was resolved from the early 1990s by the use of stress tests. Here, the risk manager makes up (literally) a set of likely bad scenarios—say a 20% drop in stocks and a 1% rise in bond yields—and computes the actual loss under each scenario. While this full revaluation approach accounts for loss due to nonlinearity, stress testing falls short of being the ideal riskometer—it is too subjective, and it is a static risk measure: the result is not responsive to day-to-day market movements.
Then in 1994, JP Morgan came out with RiskMetrics, a methodology that promotes the use of value-at-risk (VaR) as the industry standard for measuring market risk.1 VaR is a user-determined loss quantile of a portfolio’s return distribution. For example, if a bank chooses to use a 99%-VaR, this result represents the minimum loss a bank is expected to incur with a 1% probability. By introducing a rolling window of say 250 days to collect the distributional data, VaR becomes a dynamic risk measure that changes with new market conditions.
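As an illustration of this definition (a sketch only; the data here are simulated, whereas the book's exercises use Excel spreadsheets), the following Python snippet computes a 99% one-day VaR as the loss quantile of a 250-day rolling window of daily returns.

import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 1000)          # stand-in for 1,000 daily portfolio returns

def historical_var(returns, confidence=0.99, window=250):
    # Loss quantile of the most recent `window` returns, reported as a positive number.
    recent = returns[-window:]
    return -np.quantile(recent, 1.0 - confidence)

print(historical_var(returns))                  # about 0.023 for 1% daily volatility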
In 1995, the Basel Committee on Banking Supervision enshrined VaR as the de facto riskometer for its Internal Model approach for market risk. Under Basel II, all banks are expected to come up with their implementation of VaR (internal) models for computing minimum capital.
The idea of extremistan was made popular by Nassim Taleb, author of the New York Times bestseller The Black Swan.2 The book narrates the probabilistic nature of catastrophic events and warns of the common misuse of statistics in understanding extreme events of low probability. It is uncanny that the book came out a few months before the subprime fiasco that marked the onset of the credit crisis.
The central idea is the distinction between two classes of probability structures—mediocristan and extremistan. Mediocristan deals with rare events that are thin tailed from a statistical distribution perspective. Large deviations can occur, but they are inconsequential. Take for example the chance occurrence of a 10-foot bird, which has little impact on the ecosystem as a whole. Such distributions are well described by the (tail of) bell-shaped Gaussian statistics or modeled by random walk processes. On the other hand, extremistan events are fat tailed—low probability, high impact events. Past occurrences offer no guidance on the magnitude of future occurrences. This is a downer for risk management. The effect of the outcome is literally immeasurable. Some examples are World Wars, flu pandemics, Ponzi schemes, wealth creation of the super rich, a breakthrough invention, and so on.
A philosophical digression—mediocristan and extremistan are closely associated with scalability. In mediocristan, the outlier is not scalable—its influence is limited by physical, biological, or environmental constraints. For example, our lone 10-foot bird cannot invade the whole ecosystem. Extremistan, in contrast, lies in the domain of scalability. For example, capitalism and free enterprise, if unrestrained by regulation, allow for limitless upside for the lucky few able to leverage off other people’s money (or time). Because of scalability, financial markets are extremistan—rare events of immeasurable devastation or Black Swans occur more often than predicted by thin-tailed distributions.
Another reason financial markets are more prone to extremistan than nature is that they involve thinking participants. The inability of science to quantify this cause and effect has pushed the study of the phenomenon into the domain of behavioral finance, with expressions such as herd mentality, animal spirits, madness of the crowd, reflexivity, endogeneity of risk, and positive feedback loops.
VaR is a victim of extremistan. Taleb, a strong critic of VaR, sees this method as a potentially dangerous malpractice.3 The main problem is that financial modelers are in love with Gaussian statistics in which simplistic assumptions make models more tractable. This allows risk modelers to quantify (or estimate) with a high degree of precision events that are by nature immeasurable (extremistan). That can lead to a false sense of security in risk management. Taleb’s extremistan, vindicated by the 2008 crisis, has dealt a serious blow to the pro-VaR camp.
This book introduces bubble VaR (buVaR), an extension of the VaR idea that denounces the common basic statistical assumptions (such as stationarity). It is fair to say that the only assumption made is that one cannot measure the true number: that number is hypothetical, and it is a moving target. In fact, we need not measure the true expected loss in order to invent an effective safeguard. This is what buVaR attempts to achieve.
The idea of procyclicality is not new. In a consultative paper, Danielsson and colleagues (2001)4 first discussed procyclicality risk in the context of using credit ratings as input to regulatory capital computation as required under the Internal Rating Based (IRB) approach. Ratings tend to improve during an upturn of a business cycle and deteriorate during a downturn. If the minimum capital requirement is linked to ratings—requiring less capital when ratings are good—banks are encouraged to lend during an upturn and cut back loans during a downturn. Thus, the business cycle is self-reinforced artificially by policy. This has damaging effects during a downturn as margin and collateral are called back from other banks to meet higher regulatory minimum capital.
This danger is also highlighted in the now-famous Turner Review,5 named after Sir Adair Turner, the new Financial Services Authority (FSA) chief, who was tasked with reforming the financial regulatory regime. The review has gone furthest in raising public awareness of hard-wired procyclicality as a key risk. It also correctly suggested that procyclicality is an inherent deficiency in the VaR measure as well. Plot any popular measure of value at risk (VaR) throughout a business cycle, and you will notice that VaR is low when markets are rallying and spikes up during a crisis.
This is similar to the leverage effect observed in the markets—rallies in stock indices are accompanied by low volatility, and sell downs are accompanied by high volatility. From the reasoning of behavioral science, fear is a stronger sentiment than greed.
However, this is where the analogy ends. The leverage effect deals with the way prices behave, whereas VaR is a measurement device (which can be corrected). The Turner Review says our VaR riskometer is faulty—it contains hardwired procyclicality. Compounding the problem is that trading positions are recorded using mark-to-market accounting. Hence, in a raging bull market, profits are realized and converted into additional capital for even more investment just as (VaR-based) regulatory capital requirements are reduced. It is easy to see that this is a recipe for disaster—the rules of the game encourage banks to chase the bubble.
To mitigate the risk of procyclicality, the Turner Review calls for a longer observation period—the so-called through-the-cycle rather than point-in-time (what VaR is doing currently) measures of risk—as well as more stress tests. Some critics6 argue that the correct solution is not simply to make the capital charge larger or more penal for banks, but also more timely. It is unavoidable that VaR based on short histories is procyclical, precisely because it gives a timely forecast. Efforts to dampen procyclicality by using a longer history will worsen the forecast; it is no longer market sensitive and timely.
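The timeliness trade-off described above can be seen in a small simulation (hypothetical returns; a sketch rather than any regulatory calculation): a 99% historical VaR computed on a short 250-day window jumps as soon as volatility spikes, while a longer "through-the-cycle" window is smoother but lags the regime change.

import numpy as np

rng = np.random.default_rng(1)
calm = rng.normal(0.0, 0.01, 750)              # calm regime: 1% daily volatility
crisis = rng.normal(0.0, 0.03, 250)            # crisis regime: volatility triples
returns = np.concatenate([calm, crisis])

def rolling_var(series, window, confidence=0.99):
    # 99% historical VaR recomputed each day over a rolling window.
    return np.array([-np.quantile(series[t - window:t], 1 - confidence)
                     for t in range(window, len(series))])

point_in_time = rolling_var(returns, 250)      # reacts quickly once the crisis fills the window
through_cycle = rolling_var(returns, 750)      # smoother, but slower to respond
print(point_in_time[-1], through_cycle[-1])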
As we shall see, buVaR addresses the procyclicality problem by being countercyclical in design, without sacrificing timeliness.
The idea of buVaR came from a simple observation: when markets crash, they fall downwards, rather than upwards. Yes, this basic asymmetry is overlooked by present-day measures of risk. Let's think this through.
Even in the credit crisis in 2008 when credit spreads crashed upwards, that event came after a period of unsustainable credit-spread compression. So, to be more precise, a market crash happens only after an unsustainable price rally or decline—often called a bubble—and in the opposite direction to the prevailing trend.
If this is a universal truth, and there is overwhelming evidence that it is, then does it not make sense that market risk at point C is higher than at points A, B, and D (Figure 1.1)? We know this intuitively and emotionally as well: suppose you do not have any trading views; a purchase (or sale) of stocks at which level would make you lose sleep? Surely at C, because while the bubble is obvious, when it will burst is not. Hence the trader's adage "the markets climb the wall of worry."7
FIGURE 1.1 Dow Jones Index
Yet the conventional measure of risk, VaR, does not account for this obvious asymmetry. Table 1.1 compares the 97.5% VaR8 for the Dow Jones index at various points. Notice that A, B, and C have about the same risks.
TABLE 1.1 97.5% Value-at-Risk for Dow Jones Index Using Historical Simulation
Only after the crash (at D) does VaR register any meaningful increase in risk. It's like a tsunami warning system that issues alerts after the waves have made landfall! It seems VaR is reactive rather than preventive. What happened?
The same situation can also be observed for Brent crude oil prices (Figure 1.2 and Table 1.2). Is VaR just a peacetime tool? The root cause can be traced back to model assumptions.
FIGURE 1.2 Crude Oil Price (in U.S. dollars)
TABLE 1.2 97.5% Value at Risk for Crude Oil Price
VaR and most risk models used by banks assume returns are independent and identically distributed (or i.i.d.), meaning that each return event is not affected by past returns, yet they are identical (in distribution)! As a result, the return time series is stationary. Here stationary means that if you take, say, a 250-day rolling window of daily returns, its distribution looks the same in terms of behavior whether you observe the rolling window today, a week ago, or at any date. In other words, the distribution is time invariant. Let’s look at one such time series, the one-day returns of the Dow Jones index (Figure 1.3). Compared to Figure 1.1, the trend has been removed completely (detrended by taking the daily change); you are left with wiggles that look almost identical anywhere along the time scale (say at A, B, or C) and almost symmetrical about zero. At D, risk is higher only because it wiggles more.
FIGURE 1.3 Daily Price Change of Dow Jones Index
VaR models are built on statistics of only these detrended wiggles. Information on price levels, even if it contains telltale signs—such as the formation of bubbles, a price run-up, or a widening of spreads—is ignored (it does not meet the requirement of i.i.d.). VaR is truly nothing more than the science of wiggles. The i.i.d. assumption lends itself to a lot of mathematical tractability. It gives modelers a high degree of precision in their predictions.9 Unfortunately, precision does not equate to accuracy. To see the difference between precision and accuracy, look at the bull's-eye diagrams in Figure 1.4. The right-side diagram illustrates the shotgun approach to getting the correct answer—accurate but not precise. Accuracy is the degree of authenticity, while precision is the degree of reproducibility.
FIGURE 1.4 Precision versus Accuracy
In risk measurement, Keynes's dictum is spot on: "It is clearly better to be approximately right, than to be precisely wrong." The gross underestimation of risk by VaR during the credit crisis, a Black Swan event, is a painful object lesson for banks and regulators. The events of 2008 challenge the very foundation of VaR and are a wake-up call to consider exploring beyond the restrictive, albeit convenient, assumption of i.i.d. BuVaR is one such initiative.
The Turner Review calls for the creation of countercyclical capital buffers on a global scale. It would be ideal to have a VaR system that automatically penalizes the bank—by inflating—when positions are long during a bubble rally, and continues to penalize the bank during a crash. Then, when the crash is over and the market overshoots on the downside, VaR penalizes short positions instead. As we shall learn, buVaR does this—it is an asymmetrical, preventive, and countercyclical risk measure that discourages position taking in the direction of a bubble.
Figure 1.5 is a preview of buVaR versus VaR10 for the Dow Jones index during the recent credit crisis. VaR is perpetually late during a crisis and does not differentiate between long and short positions. BuVaR peaks ahead of the crash (is countercyclical) and is always larger than VaR, to buffer against the risk of a crash on one side. It recognizes that the crash risks faced by long and short positions are unequal. Used for capital purposes, it will penalize positions that are chasing an asset bubble more than contrarian positions.
FIGURE 1.5 BuVaR and VaR Comparison
If implemented on a global scale, buVaR would have the effect of regulating and dampening the market cycle. Perhaps then, this new framework echoes the venerable philosophy of the FED:
It’s the job of the FED to take away the punch bowl just as the party gets going.
—William McChesney Martin Jr., FED Chairman 1951–1970
1. There are claims that some groups may have experimented with risk measures similar to VaR as early as 1991.
2. Taleb, 2007, The Black Swan: The Impact of the Highly Improbable.
3. See the discussion “Against Value-at-Risk: Nassim Taleb Replies to Phillip Jorion,” Taleb, 1997.
4. Danielsson et al., “An Academic Response to Basel II,” Special Paper 130, ESRC Research Centre, 2001.
5. Financial Services Authority, 2009, The Turner Review—A Regulatory Response to the Global Banking Crisis.
6. RiskMetrics Group, 2009, “VaR Is from Mars, Capital Is from Venus.”
7. This is supported by empirical evidence that put-call ratios tend to rise as stock market bubbles peak. This is the ratio of premium between equally out-of-the-money puts and calls, and is a well-studied indicator of fears of a crash.
8. The VaR is computed using a 250-day observation period, and expressed as a percentage loss of the index. VaR should always be understood as a loss; sometimes a negative sign is used to denote the loss.
9. By assuming i.i.d., the return time series becomes stationary. This allows the Law of Large Numbers to apply. This law states that, as more data is collected, the sample mean will converge to a stable expected value. This gives the statistician the ability to predict (perform estimation) with a stated, often high, level of precision.
10. The VaR is computed by the RiskMetrics method using exponentially decaying weights.
This chapter provides the statistical concepts essential for the understanding of risk management. There are many good textbooks on the topic; see Carol Alexander (2008). Here, we have chosen to adopt a selective approach. Our goal is to provide adequate math background to understand the rest of the book. It is fair to say that if you do not find it here, it is not needed later. As mentioned in the preface, this book tells a story. In fact, the math here is part of the plot. Therefore, we will include philosophy or principles of statistical thinking and other pertinent topics that will contribute to the development of the story. And we will not sidetrack the reader with unneeded theorems and lemmas.
Two schools of thought have emerged from the history of statistics—the frequentist and the Bayesian. Bayesians and frequentists hold very different philosophical views on what defines probability. From a frequentist perspective, probability is objective and can be inferred from the frequency of observation in a large number of trials. All parameters and unknowns that characterize an assumed distribution or regression relationship can be backed out from the sample data. Frequentists will base their interpretations on a limited sample; as we shall see, there is a limit to how much data they can collect without running into other practical difficulties. Frequentists will assume the true value of their estimate lies within the confidence interval that they set (typically at 95%). To qualify their estimate, they will perform hypothesis testing that will (or will not) reject their estimate, in which case they will treat the estimate as false (or true).
Bayesians, on the other hand, interpret the concept of probability as “a measure of a state of knowledge or personal belief” that can be updated on arrival of more information (i.e., incorporates learning). Bayesians embrace the universality of imperfect knowledge. Hence probability is subjective; beliefs and expert judgment are permissible inputs to the model and are also expressed in terms of probability distributions. As mentioned earlier, a frequentist hypothesis (or estimate) is either true or false, but in Bayesian statistics the hypothesis is also assigned a probability.
Value at risk (VaR) falls under the domain of frequentist statistics—inferences are backed out from data alone. The risk manager, by legacy of industry development, is a frequentist.1
A random variable or stochastic variable (often just called variable) is a variable that has an uncertain value in the future. Contrast this to a deterministic variable in physics; for example, the future position of a planet can be determined (calculated) to an exact value using Newton’s laws. But in financial markets, the price of a stock tomorrow is unknown and can only be estimated using statistics.
Let X be a random variable. The observation of X (data point) obtained by the act of sampling is denoted with a lower case letter xi as a convention, where the subscript i = 1,2, . . . , is a running index representing the number of observations. In general, X can be anything—price sequences, returns, heights of a group of people, a sample of dice tosses, income samples of a population, and so on. In finance, variables are usually price (levels) or returns (changes in levels). We shall discuss the various types of returns later and their subtle differences. Unless mentioned otherwise, we shall talk about returns as daily percentage change in prices. In VaR, the data set we will be working with is primarily distributions of sample returns and distributions of profit and loss (PL).
Figure 2.1 is a plot of the frequency distribution (or histogram) of S&P 500 index returns using 500 days of data (Jul 2007 to Jun 2009). One can think of this as a probability distribution of events—each day's return being a single event. So as we obtain more and more data (trials), we get closer to the correct estimate of the "true" distribution.
FIGURE 2.1 S&P 500 Index Frequency Distribution
We posit that this distribution contains all available information about risks of a particular market and we can use this distribution for forecasting. In so doing, we have implicitly assumed that the past is an accurate guide to future risks, at least for the next immediate time step. This is a necessary (though arguable) assumption; otherwise without an intelligent structure, forecasting would be no different from fortune telling.
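As a simple illustration of such a frequency distribution (using simulated fat-tailed returns rather than the S&P 500 data behind Figure 2.1), the following Python sketch bins 500 daily returns into a crude text histogram.

import numpy as np

rng = np.random.default_rng(2)
returns = 0.01 * rng.standard_t(df=4, size=500)    # fat-tailed stand-in for 500 daily returns

counts, edges = np.histogram(returns, bins=25)     # frequency distribution of the sample
for left, right, c in zip(edges[:-1], edges[1:], counts):
    print(f"{left:+.3f} to {right:+.3f} | {'#' * c}")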
In risk management, we want to estimate four properties of the return distribution—the so-called first four moments—mean, variance, skewness, and kurtosis. To be sure, higher moments exist mathematically, but they are not intuitive and hence of lesser interest.
The mean of a random variable X is also called the expectation or expected value, written μ = E(X). The mean or average of a sample x1, . . . , xn is just the sum of all the data divided by the number of observations n. It is denoted by \bar{x} or \hat{\mu}.
\hat{\mu} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i    (2.1)
The Excel function is AVERAGE(.). It measures the center location of a sample. A word on statistical notation—generally, when we consider the actual parameter in question μ (a theoretical idea), we want to measure this parameter using an estimator \hat{\mu} (a formula). The outcome of this measurement is called an estimate, also denoted \hat{\mu} (a value). Note the use of the ^ symbol henceforth.
The kth moment of a sample x1, . . . , xn is defined and estimated as:
\hat{m}_k = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^k    (2.2)
The variance or second moment of a sample is defined as the average of the squared distances to the mean:
\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2    (2.3)
The Excel function is VAR(.). It represents the dispersion from the mean. The square-root of variance is called the standard deviation or sigma σ. In risk management, risk is usually defined as uncertainty in returns, and is measured in terms of sigma. The Excel function is STDEV(.).
The skewness or third moment (divided by \hat{\sigma}^3) measures the degree of asymmetry about the mean of the sample distribution. A positive (negative) skew means the distribution slants to the right (left). The Excel function is SKEW(.).
\widehat{skew} = \frac{1}{n\hat{\sigma}^3}\sum_{i=1}^{n}(x_i - \bar{x})^3    (2.4)
The kurtosis or fourth moment (divided by \hat{\sigma}^4) measures the "peakedness" of the sample distribution and is given by:
\widehat{kurt} = \frac{1}{n\hat{\sigma}^4}\sum_{i=1}^{n}(x_i - \bar{x})^4    (2.5)
Since the total area under the probability distribution must sum up to a total probability of 1, a very peakish distribution will naturally have fatter tails. Such a behavior is called leptokurtic. Its Excel function is KURT(.). A normal distribution has a kurtosis of 3. For convenience, Excel shifts the KURT(.) function such that a normal distribution gives an excess kurtosis of 0. We will follow this convention and simply call it kurtosis for brevity.
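The four sample moments above can be computed directly from the formulas. The Python sketch below uses simulated returns; note that Excel's VAR, STDEV, SKEW, and KURT apply small-sample corrections, so their output will differ slightly from these plain 1/n estimates.

import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(0.0, 0.01, 500)                       # sample of 500 daily returns (simulated)

mean = x.mean()                                      # eq. (2.1)
var = ((x - mean) ** 2).mean()                       # eq. (2.3), the 1/n version
sigma = np.sqrt(var)
skew = ((x - mean) ** 3).mean() / sigma ** 3         # eq. (2.4)
kurt = ((x - mean) ** 4).mean() / sigma ** 4 - 3.0   # eq. (2.5), shifted to excess kurtosis

print(mean, sigma, skew, kurt)                       # roughly 0, 0.01, 0, 0 for normal data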
Back to Figure 2.1, the S&P distribution is overlaid with a normal distribution (of the same variance) for comparison. Notice the sharp central peak above the normal line, and the more frequent than normal observations in the left and right tails. The sample period (Jul 2007 to Jun 2009) corresponds to the credit crisis—as expected the distribution is fat tailed. Interestingly, the distribution is not symmetric—it is positively skewed! (We shall see why in Section 7.1.)
This is a pillar assumption for most statistical modeling. A random sample (y1, . . . , yn) of size n is independent and identically distributed (or i.i.d.) if each observation in the sample belongs to the same probability distribution as all others, and all are mutually independent. Imagine yourself drawing random numbers from a distribution. Identical means each draw must come from the same distribution (it need not even be bell-shaped). Independent means you must not meddle with each draw, like making the next random draw a function of the previous draw. For example, a sample of coin tosses is i.i.d.
A time series is a sequence X1, . . . , Xt of random variables indexed by time. A time series is stationary if the distribution of (X1, . . . , Xt) is identical to that of (X1+k, . . . , Xt+k) for all t and all positive integer k. In other words, the distribution is invariant under time shift k. Since it is difficult to prove empirically that two distributions are identical (in every aspect), in financial modeling, we content ourselves with just showing that the first two moments—mean and variance—are invariant under time shift.2 This condition is called weakly stationary (often just called stationary) and is a common assumption.
A market price series is seldom stationary—trends and periodic components make the time series nonstationary. However, if we take the percentage change or the first difference, the resulting series of price changes can often be shown to be stationary. This process is called detrending (or differencing) a time series and is a common practice.
Figure 2.2 illustrates a dummy price series and its corresponding return series. We divide the 200-day period into two 100-day periods, and compute the first two moments. For the price series, the mean moved from 4,693 (first half) to 5,109 (second half). Likewise, the standard deviation changed from 50 to 212. Clearly the price series is nonstationary. The return series, on the other hand, is stationary—its mean and standard deviation remained roughly unchanged at 0% and 0.5% respectively in both periods. Visually a stationary time series always looks like white noise.
FIGURE 2.2 Dummy Price and Return Series
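The Figure 2.2 exercise can be mimicked with simulated data (a sketch only; the book's spreadsheet uses its own dummy series): build a trending price path from small daily returns, then compare the first two moments of each half for the price level and for the return.

import numpy as np

rng = np.random.default_rng(4)
ret = rng.normal(0.0005, 0.005, 200)           # 200 daily returns with a small upward drift
price = 4700 * np.cumprod(1 + ret)             # the corresponding (trending) price series

def halves(series):
    # Mean and standard deviation of the first and second 100-day halves.
    a, b = series[:100], series[100:]
    return (a.mean(), a.std()), (b.mean(), b.std())

print(halves(price))   # both moments shift between halves: the price level is nonstationary
print(halves(ret))     # mean and std stay roughly 0 and 0.5%: the returns are (weakly) stationary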
An i.i.d. process will be stationary for finite distributions.3 The benefit of the stationarity assumption is we can then invoke the Law of Large Numbers to estimate properties such as mean and variance in a tractable way.
Expected values can be estimated by sampling. Let X be a random variable and suppose we want to estimate the expected value of some function g(X), where the expected value is μg ≡ E(g(X)). We sample for n observations xi of X where i = 1, . . . , n. The Law of Large Numbers states that if the sample is i.i.d. then:
\hat{\mu}_g = \frac{1}{n}\sum_{i=1}^{n} g(x_i) \to E(g(X)) = \mu_g \quad \text{as } n \to \infty    (2.6)
For example, in equations (2.3) to (2.5) used to estimate the moments, as we take larger and larger samples, precision improves and our estimate converges to the ("true") expected value. We say our estimate is consistent. On the other hand, if the sample is not i.i.d., one cannot guarantee the estimate will always converge (it may or may not); the forecast is said to be inconsistent. Needless to say, a modeler would strive to derive a consistent theory as this will mean that its conclusion can (like any good scientific result) be reproduced by other investigators.
Let’s look at some examples. We have a coin flip (head +1, tail −1), a return series of a standard normal N(0,1) process and a return series of an autoregressive process called AR(1). The first two are known i.i.d. processes; the AR(1) is not i.i.d. (it depends on the previous random variable) and is generated using:
AR(1) process:
X_t = k_0 + k_1 X_{t-1} + \varepsilon_t    (2.7)
where k0 and k1 are constants and εt, t = 1, 2, . . . is a sequence of i.i.d. random variables with zero mean and a finite variance, also known as white noise. Under certain conditions (i.e., |k1| ≥ 1), the AR(1) process becomes nonstationary.
Figure 2.3 illustrates the estimation of the expected value of the three processes using 1,000 simulations. For AR(1), we set k0 = 0, k1 = 0.99. The coin flip and normal process both converge to zero (the expected value) as n increases, but the AR(1) does not. See Spreadsheet 2.1.
FIGURE 2.3 Behavior of Mean Estimates as Number of Trials Increase
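Spreadsheet 2.1 is not reproduced here, but its idea can be sketched in Python with hypothetical draws: track the running mean of a coin flip, a standard normal series, and an AR(1) series with k0 = 0 and k1 = 0.99, and watch which ones converge.

import numpy as np

rng = np.random.default_rng(5)
n = 1000

coin = rng.choice([-1.0, 1.0], size=n)         # i.i.d. coin flips (+1 head, -1 tail)
normal = rng.standard_normal(n)                # i.i.d. N(0,1) draws

k0, k1 = 0.0, 0.99                             # near-unit-root AR(1), as in Figure 2.3
eps = rng.standard_normal(n)
ar1 = np.zeros(n)
for t in range(1, n):
    ar1[t] = k0 + k1 * ar1[t - 1] + eps[t]     # eq. (2.7)

def running_mean(x):
    return np.cumsum(x) / np.arange(1, len(x) + 1)

# The coin and normal running means settle near zero; the AR(1) one keeps wandering.
print(running_mean(coin)[-1], running_mean(normal)[-1], running_mean(ar1)[-1])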
Figure 2.4 plots the return series for both the normal process and the AR(1) process. Notice there is some nonstationary pattern in the AR(1) plot, whereas the normal process shows characteristic white noise.
FIGURE 2.4 Return Time Series for Normal and AR(1) Processes
To ensure stationarity, risk modelers usually detrend the price data and model changes instead. In contrast, technical analysis (TA) has always modeled prices. This is a well-accepted technique for speculative trading since the 1960s after the ticker tape machine became obsolete. Dealing with non-i.i.d. data does not make TA any less effective. It does, however, mean that the method is less consistent in a statistical sense. Thus, TA has always been regarded as unscientific by academia.
In fact, it would seem that the popular and persistent use of TA (such as in program trading) by the global trading community has made its effectiveness self-fulfilling and the market returns more persistent (and less i.i.d.). Market momentum is a known fact. Ignoring it by detrending and assuming returns are i.i.d. does not make risk measurement more scientific. There is no compelling reason why risk management cannot borrow some of the modelling techniques of TA such as that pertaining to momentum and cycles. From an epistemology perspective, such debate can be seen as a tacit choice between intuitive knowledge (heuristics) and mathematical correctness.
From an academic standpoint, the first step in time series modelling is to find a certain aspect of the data set that has a repeatable pattern or is “invariant” across time in an i.i.d. way. This is mathematically necessary because if the variable under study is not repeatable, one cannot really say that a statistical result derived from a past sample is reflective of future occurrences—the notion of forecasting a number at a future horizon breaks down.
A simple graphical way to test for invariance is to plot a variable Xt with itself at a lagged time step (Xt−1). If the variable is i.i.d. this scatter plot will resemble a circular cloud. Figure 2.5 shows the scatter plots for the normal and AR(1) processes seen previously.
FIGURE 2.5 Scatter Plot of Xt versus Xt−1 for a Gaussian Process and an AR(1) Process
Clearly the AR(1) process is not i.i.d.; its cloud is not circular. For example, if the return of a particular stock, Xt, follows an AR(1) process, it is incorrect to calculate its moments and VaR using Xt. Instead, one should estimate the constant parameters in equation (2.7) from data and back out εt, then compute the moments and VaR from the εt component. The εt is the random (or stochastic) driver of risk and it being i.i.d. allows us to project the randomness to the desired forecast horizon. This is the invariant we are after. The astute reader should refer to Meucci (2011).
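A minimal sketch of that procedure (hypothetical AR(1) returns; a real application would start from observed data): estimate k0 and k1 by a simple least-squares regression of Xt on Xt−1, back out the residuals εt, and measure VaR on those residuals rather than on Xt itself.

import numpy as np

rng = np.random.default_rng(6)
n = 1000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + rng.normal(0.0, 0.01)   # a return series following AR(1)

# Least-squares fit of X_t = k0 + k1 * X_{t-1} + eps_t.
x_lag, x_now = x[:-1], x[1:]
k1_hat, k0_hat = np.polyfit(x_lag, x_now, 1)        # slope first, then intercept

resid = x_now - (k0_hat + k1_hat * x_lag)           # the backed-out i.i.d. driver eps_t
var_99 = -np.quantile(resid, 0.01)                  # 99% VaR measured on the invariant
print(round(k1_hat, 2), round(var_99, 4))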
Needless to say, a price series is never i.i.d. because of the presence of trends (even if only small ones) in the series, hence, the need to transform the price series into returns.
In practice, moments are computed using discrete data from an observed frequency distribution (like the histogram in Figure 2.1). However, it is intuitive and often necessary to think in terms of an abstract continuous distribution. In the continuous world, frequency distribution is replaced by probability density function (PDF). If f(x) is a PDF of a random variable X, then the probability that X is between some numbers a and b is the area under the graph f(x) between a and b. In other words,
\Pr[a \le X \le b] = \int_a^b f(x)\,dx    (2.8)
where f(.) is understood to be a function of x here. The probability density function f(x) has the following intuitive properties:
f(x) \ge 0 \quad \text{for all } x    (2.9)
\int_{-\infty}^{\infty} f(x)\,dx = 1    (2.10)
The cumulative distribution function (CDF) is defined as:
F(x) = \int_{-\infty}^{x} f(u)\,du    (2.11)
It is the probability of observing the variable having values at or below x, written F(x) = Pr[X ≤ x]. As shown in Figure 2.6, F(x) is just the area under the graph f(x) at a particular value x.
FIGURE 2.6 Density Function and Distribution Function
The most important continuous distribution is the normal distribution also known as Gaussian distribution. Among many bell-shaped distributions, this famous one describes amazingly well the physical characteristics of natural phenomena such as the biological growth of plants and animals, the so-called Brownian motion of gas, the outcome of casino games, and so on. It seems logical to assume that a distribution that describes science so accurately should also be applicable in the human sphere of trading.
The normal distribution can be described fully by just two parameters—its mean μ and variance σ2. Its PDF is given by:
f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)    (2.12)
The normal distribution is written in shorthand as X∼N(μ, σ2). The standard normal, defined as a normal distribution with mean zero and variance 1, is a convenient simplification for modeling purposes. We denote a random variable ε as following a standard normal distribution by ε∼N(0,1).
Figure 2.7 plots the standard normal distribution. Note that it is symmetric about the mean; it has skewness 0 and kurtosis 3 (or 0 excess kurtosis in Excel convention).
FIGURE 2.7 Standard Normal Probability Density Function
How do we interpret the idea of one standard deviation σ for N(μ, σ2)? It means that about 68% of the observations (area under f(x)) lie within one σ of the mean and about 95% of the observations lie within 2σ of the mean. Under normal conditions, a stock's daily return will fluctuate between −2σ and 2σ roughly 95% of the time, or roughly 237 days out of 250 trading days per year, as a rule of thumb.
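These rules of thumb can be checked directly from the standard normal CDF (a quick Python check, not part of the book's spreadsheets).

from statistics import NormalDist

N = NormalDist()                        # standard normal: mean 0, sigma 1
print(N.cdf(1) - N.cdf(-1))             # about 0.683: share of observations within one sigma
print(N.cdf(2) - N.cdf(-2))             # about 0.954: share within two sigma
print(0.95 * 250)                       # roughly 237.5 of 250 trading days, per the rule of thumb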
The central limit theorem (CLT) is a very fundamental result. It states that the distribution of the mean of a sample of i.i.d. random variables (regardless of the shape of their distribution) converges to the normal distribution as the sample size becomes very large. Hence, the normal distribution is the limiting distribution for the sample mean. See Spreadsheet 2.2 for an illustration.
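Spreadsheet 2.2 itself is not shown here, but the effect can be sketched with simulated draws from a decidedly non-normal (uniform) distribution: as the sample size grows, the distribution of the sample mean tightens and its excess kurtosis moves toward the normal value of zero.

import numpy as np

rng = np.random.default_rng(7)

def sample_means(sample_size, n_trials=10000):
    # Mean of `sample_size` i.i.d. uniform draws, repeated n_trials times.
    draws = rng.uniform(0.0, 1.0, size=(n_trials, sample_size))
    return draws.mean(axis=1)

for n in (2, 10, 50):
    m = sample_means(n)
    c = m - m.mean()
    excess_kurt = (c ** 4).mean() / m.std() ** 4 - 3.0   # tends toward 0 as n grows
    print(n, round(m.std(), 4), round(excess_kurt, 3))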
