The latest tools and techniques for pricing and risk management. This book introduces readers to the use of copula functions to represent the dynamics of financial assets and risk factors, integrating temporal and cross-section applications. The first part of the book briefly introduces the standard theory of copula functions before examining the link between copulas and Markov processes. It then introduces new techniques to design Markov processes suited to representing the dynamics of market risk factors and their co-movement, providing techniques to both estimate and simulate such dynamics. The second part of the book shows readers how to apply these methods to the pricing of multivariate derivative contracts in the equity and credit markets. It then moves on to explore the applications of joint temporal and cross-section aggregation to the problem of risk integration.
Page count: 452
Year of publication: 2011
Contents
Cover
Series
Title Page
Copyright
Preface
1: Correlation Risk in Finance
1.1 CORRELATION RISK IN PRICING AND RISK MANAGEMENT
1.2 IMPLIED VS REALIZED CORRELATION
1.3 BOTTOM-UP VS TOP-DOWN MODELS
1.4 COPULA FUNCTIONS
1.5 SPATIAL AND TEMPORAL DEPENDENCE
1.6 LONG-RANGE DEPENDENCE
1.7 MULTIVARIATE GARCH MODELS
1.8 COPULAS AND CONVOLUTION
2: Copula Functions: The State of The Art
2.1 COPULA FUNCTIONS: THE BASIC RECIPE
2.2 MARKET CO-MOVEMENTS
2.3 DELTA HEDGING MULTIVARIATE DIGITAL PRODUCTS
2.4 LINEAR CORRELATION
2.5 RANK CORRELATION
2.6 MULTIVARIATE SPEARMAN’S RHO
2.7 SURVIVAL COPULAS AND RADIAL SYMMETRY
2.8 COPULA VOLUME AND SURVIVAL COPULAS
2.9 TAIL DEPENDENCE
2.10 LONG/SHORT CORRELATION
2.11 FAMILIES OF COPULAS
2.12 KENDALL FUNCTION
2.13 EXCHANGEABILITY
2.14 HIERARCHICAL COPULAS
2.15 CONDITIONAL PROBABILITY AND FACTOR COPULAS
2.16 COPULA DENSITY AND VINE COPULAS
2.17 DYNAMIC COPULAS
3: Copula Functions and Asset Price Dynamics
3.1 THE DYNAMICS OF SPECULATIVE PRICES
3.2 COPULAS AND MARKOV PROCESSES: THE DNO APPROACH
3.3 TIME-CHANGED BROWNIAN COPULAS
3.4 COPULAS AND MARTINGALE PROCESSES
3.5 MULTIVARIATE PROCESSES
4: Copula-based Econometrics of Dynamic Processes
4.1 DYNAMIC COPULA QUANTILE REGRESSIONS
4.2 COPULA-BASED MARKOV PROCESSES: NON-LINEAR QUANTILE AUTOREGRESSION
4.3 COPULA-BASED MARKOV PROCESSES: SEMI-PARAMETRIC ESTIMATION
4.4 COPULA-BASED MARKOV PROCESSES: NON-PARAMETRIC ESTIMATION
4.5 COPULA-BASED MARKOV PROCESSES: MIXING PROPERTIES
4.6 PERSISTENCE AND LONG MEMORY
4.7 C-CONVOLUTION-BASED MARKOV PROCESSES: THE LIKELIHOOD FUNCTION
5: Multivariate Equity Products
5.1 MULTIVARIATE EQUITY PRODUCTS
5.2 RECURSIONS OF RUNNING MAXIMA AND MINIMA
5.3 THE MEMORY FEATURE
5.4 RISK-NEUTRAL PRICING RESTRICTIONS
5.5 TIME-CHANGED BROWNIAN COPULAS
5.6 VARIANCE SWAPS
5.7 SEMI-PARAMETRIC PRICING OF PATH-DEPENDENT DERIVATIVES
5.8 THE MULTIVARIATE PRICING SETTING
5.9 H-CONDITION AND GRANGER CAUSALITY
5.10 MULTIVARIATE PRICING RECURSION
5.11 HEDGING MULTIVARIATE EQUITY DERIVATIVES
5.12 CORRELATION SWAPS
5.13 THE TERM STRUCTURE OF MULTIVARIATE EQUITY DERIVATIVES
6: Multivariate Credit Products
6.1 CREDIT TRANSFER FINANCE
6.2 CREDIT INFORMATION: EQUITY VS CDS
6.3 STRUCTURAL MODELS
6.4 INTENSITY-BASED MODELS
6.5 FRAILTY MODELS
6.6 GRANULARITY ADJUSTMENT
6.7 CREDIT PORTFOLIO ANALYSIS
6.8 DYNAMIC ANALYSIS OF CREDIT RISK PORTFOLIOS
7: Risk Capital Management
7.1 A REVIEW OF VALUE-AT-RISK AND OTHER MEASURES
7.2 CAPITAL AGGREGATION AND ALLOCATION
7.3 RISK MEASUREMENT OF MANAGED PORTFOLIOS
7.4 TEMPORAL AGGREGATION OF RISK MEASURES
8: Frontier Issues
8.1 LÉVY COPULAS
8.2 PARETO COPULAS
8.3 SEMI-MARTINGALE COPULAS
Appendix A: Elements of Probability
A.1 ELEMENTS OF MEASURE THEORY
A.2 INTEGRATION
A.3 THE MOMENT-GENERATING FUNCTION OR LAPLACE TRANSFORM
A.4 THE CHARACTERISTIC FUNCTION
A.5 RELEVANT PROBABILITY DISTRIBUTIONS
A.6 RANDOM VECTORS AND MULTIVARIATE DISTRIBUTIONS
A.7 INFINITE DIVISIBILITY
A.8 CONVERGENCE OF SEQUENCES OF RANDOM VARIABLES
A.9 THE RADON–NIKODYM DERIVATIVE
A.10 CONDITIONAL EXPECTATION
Appendix B: Elements of Stochastic Processes Theory
B.1 STOCHASTIC PROCESSES
B.2 MARTINGALES
B.3 MARKOV PROCESSES
B.4 LÉVY PROCESSES
B.5 SEMI-MARTINGALES
References
Extra Reading
Index
For other titles in the Wiley Finance series please see www.wiley.com/finance
This edition first published 2012
© 2012 John Wiley & Sons, Ltd
Registered office John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
The right of the authors to be identified as the authors of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Dynamic copula methods in finance / Umberto Cherubini ... [et al.].
p. cm. — (The Wiley finance series)
Includes bibliographical references and index.
ISBN 978-0-470-68307-1 (hardback)
1. Finance—Mathematical models. I. Cherubini, Umberto
HG106.D96 2011
332.01′519233—dc23
2011034154
A catalogue record for this book is available from the British Library.
ISBN 978-0-470-68307-1 (hardback) ISBN 978-1-119-95451-4 (ebk)
ISBN 978-1-119-95452-1 (ebk) ISBN 978-1-119-95453-8 (ebk)
Preface
This book concludes five years of original research at the University of Bologna on the use of copulas in finance. We would like these results to be called the Bologna school. The problem tackled arises directly from financial applications and the fact that almost always in this field we are confronted with convolution problems along with non-normal distributions and non-linear dependence. More explicitly, almost always in finance we face the problem of evaluating the distribution of the sum

X + Y,
where X and Y may have arbitrary distributions and may be dependent on each other in quite a strange fashion. Very often, we may also be interested in the dependence of this sum on either X or Y. The Bologna school has studied the class of convolution-based copulas that is well suited to address this kind of problem. It is easy to see that this imposes a restriction on the choice of copulas. In a sense, convolution-based copulas address a special compatibility problem, enforcing coherence in the dependence structure between variables and their sum. This compatibility issue is paramount and unavoidable for almost all applications in finance. The first concept that comes to mind is the linear law of price enforced by the fundamental theorem of asset pricing: in order to avoid arbitrage, prices of complex products must be linear combinations of the primitive products constituting the replicating portfolio. In asset allocation, portfolios are also strictly linear concepts, even though they may include (and today they typically do) option-like and other non-linear products whose distribution is far from Gaussian and whose dependence on the other components of the portfolio is not Gaussian either. Moreover, trading and investment activities involve more and more exposures to credit risk, which are non-Gaussian by definition: this was actually the very reason for copula function applications to finance in the first place. Even in the case of credit, losses may be linked by the most complex dependence structure, but they nevertheless cumulate one after the other in a linear combination: computing cumulated losses is again a convolution problem. Finally, linear aggregation is crucial to understanding the dynamics of markets.
From this viewpoint, finance theory has developed under the main assumption of processes with independent increments: convolution-based copulas may allow us to considerably extend the set of possible market dynamics, allowing for general dependence structures between the price level at a given point in time and its increment (the return) in the following period: describing the distribution of the price at the end of a period is again a convolution problem.
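The core problem can be illustrated with a minimal Monte Carlo sketch. This is illustrative only, not the C-convolution machinery developed in the following chapters; the Gaussian copula and the parameter values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rho = 0.7

# Draw (X, Y) with standard normal marginals coupled by a Gaussian copula
# with parameter rho (for normal marginals this is just a bivariate normal).
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
x, y = z[:, 0], z[:, 1]

s_dep = x + y                   # convolution of dependent variables
s_ind = x + rng.permutation(y)  # break the dependence by shuffling Y

# Same marginals, different sum: under positive dependence the sum is more
# dispersed, Var(X+Y) = 2 + 2*rho versus 2 under independence.
print(round(s_dep.var(), 2), round(s_ind.var(), 2))
```

The point of the exercise: the distribution of the sum is determined jointly by the marginals and the copula, so any model of cumulated losses or price increments must keep the two consistent.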
The main message of this book is that copulas remain a flexible tool for applications in finance, but this flexibility is finite, and the Bologna school sets the frontier of this flexibility at the family of convolution-based copulas.
Chapters 1 and 2 review the general problem of dependence and correlation in finance. More particularly, Chapter 2 specializes the analysis to a review of the basic concepts of copulas, as they have been applied to financial problems until today. Chapters 3 and 4 introduce the theory of convolution-based copulas, and the concept of C-convolution, within the mainstream of the Darsow, Nguyen, and Olsen (DNO) application of copulas to Markov processes. More specifically, Chapter 3 addresses theory and Chapter 4 deals with the application to econometrics. Chapters 5, 6, and 7 discuss applications of the approach in turn to the problems of: (i) evaluating multivariate equity derivatives; (ii) analyzing the credit risk exposure of a portfolio; (iii) aggregating Value-at-Risk measures across risk factors and business units. In all these chapters, we exploit the model to address dependence in both a spatial and a temporal perspective. This twofold perspective is entirely new to these applications, and may easily be handled within the set of convolution-based copulas. Chapter 8 concludes by surveying other methodologies available in the mathematical finance and probability literature to set a dependence structure among processes: these approaches are mainly in continuous time, and raise the question, which we leave for future research, of whether they represent some or all of the possible solutions that one would obtain by taking the continuous-time limit of our model, which is defined in discrete time.
We conclude with thanks to our colleagues in the international community who have helped us during these years of work. Their support has been particularly precious, because our work is entirely free from government support. Nemo propheta in patria. As for comments on this manuscript, we would particularly like to thank, without implication, Xiaohong Chen, Fabrizio Durante, Marius Hofert, Matthias Scherer, Bruno Remillard, Paramsoothy Silvapulle, and an anonymous referee provided by John Wiley. And we thank our readers in advance for any comments they would like to share with us.
1
Correlation Risk in Finance
Over the last decade, financial markets have witnessed a progressive concentration of focus on correlation dynamics models. New terms such as correlation trading and correlation products have become the frontier topic of financial innovation. Correlation trading denotes the trading activity aimed at exploiting changes in correlation, or more generally in the dependence structure of assets and risk factors. Correlation products denote financial structures designed with the purpose of exploiting these changes. Likewise, the new term correlation risk in risk management is meant to identify the exposure to losses triggered by changes in correlation. Going long or short correlation has become a standard concept for everyone working in dealing rooms and risk management committees. This actually completes a trend that led the market to evolve from taking positions on the direction of prices towards taking exposures to volatility and higher moments of their distribution, and finally speculating and hedging on cross-moments. These trends were also accompanied by the development of new practices to transfer risk from one unit to others. In the aftermath of the recent crisis, these products have been blamed as one of the main causes. It is well beyond the scope of this book to digress on the economics of the crisis. We would only like to point out that the modular approach which has been typical of financial innovation in the structured finance era may turn out to be extremely useful to ensure the efficient allocation of risks among the agents. While, on the one hand, the use of these techniques without adequate knowledge may represent a source of risk, avoiding them surely represents a distortion and a source of cost. Of course, accomplishing this requires the use of modular mathematical models to split and transfer risk. This book is devoted to such models, which in the framework of dependence are called dependence functions or copula functions.
1.1 CORRELATION RISK IN PRICING AND RISK MANAGEMENT
In order to measure the distance between the current practice of markets and the standard textbook theory of finance, let us consider the standard static portfolio allocation problem. The aim is to maximize the expected utility of wealth W at some final date T using a set of risky assets S_i, i = 1, …, n. Formally, we have

max_w E[u(W_T)],   W_T = W_0 [Σ_{i=1}^n w_i e^{R_i} + (1 − Σ_{i=1}^n w_i) e^{R_f}],
where R_1, …, R_n are the log-returns on the risky assets and R_f is the risk-free rate. The asset allocation problem is completely described by two functions: (i) the utility function u(·), assumed strictly increasing and concave; (ii) the joint distribution function of the returns, F(R_1, …, R_n). While we could argue in depth about both of them, throughout this book the focus will be on the specification of the joint distribution function. In the standard textbook problem, this is actually kept in the background and returns are assumed to be jointly normally distributed, which leads to rewriting the expected utility in terms of a mean–variance problem. Nowadays, real-world asset management has moved miles away from this textbook problem, mainly for two reasons: first, investments are no longer restricted to linear products, such as stocks and bonds, but involve options and complex derivatives; second, the assumption that the distribution of returns is Gaussian is clearly rejected by the data. As a result, the expected utility problem should take into account three different dimensions of risk: (i) directional movements of the market; (ii) changes in volatility of the assets; (iii) changes in their correlation. More importantly, there is also clear evidence that changes in both volatility and correlation are themselves correlated with swings in the market. Typically, both volatility and correlation increase when the market is heading downward (which is called the leverage effect). It is the need to account for these new dimensions of risk that has led to the diffusion of derivative products to hedge against and take exposures to both changes in volatility and changes in correlation. In the same setting, it is easy to recover the other face of the same problem, encountered by the pricer. From his point of view, the problem is tackled from the first-order conditions of the investment problem:

E^Q[e^{R_i}] = e^{R_f},   i = 1, …, n,
where the new probability measure Q is defined through the Radon–Nikodym derivative

dQ/dP = u′(W_T) / E[u′(W_T)].
Pricers face the problem of evaluating financial products using measure Q, which is called the risk-neutral measure (because all the risky assets are expected to yield the same return as the risk-free asset), or the equivalent martingale measure (EMM, because Q is a measure equivalent to P with the property that prices expressed using the risk-free asset as numeraire are martingales). An open issue is whether and under what circumstances the volatility and correlation of the original measure are preserved under this change of measure. If this is not the case, we say that volatility and correlation risks are priced in the market (that is, a risk premium is required for bearing these risks). Under this new measure, pricers face problems which are similar to those of the asset manager, that is, evaluating the sensitivity of financial products to changes in the direction of the market (long/short the asset), volatility (long/short volatility) and correlation (long/short correlation). They face a further problem, though, which is going to be the main motivation of this book: they must ensure that prices of multivariate products are consistent with prices of univariate products. This consistency is part of the so-called arbitrage-free approach to pricing, which leads to the martingale requirement presented above. In the jargon of statisticians, this consistency leads to the term compatibility: the risk-neutral joint distribution F(R_1, …, R_n) has to be compatible with the marginal distributions F_i(R_i).
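As a toy illustration of the investor's side of the problem, the expected-utility maximization can be sketched by simulation. Everything here is hypothetical: two jointly normal log-return assets, a CRRA utility, and a crude grid search in place of a proper optimizer:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hypothetical setup: two risky assets with jointly normal log-returns
# (the textbook case) plus a risk-free asset.
mu = np.array([0.05, 0.08])
vol = np.array([0.15, 0.25])
rho = 0.3
cov = np.outer(vol, vol) * np.array([[1.0, rho], [rho, 1.0]])
R = rng.multivariate_normal(mu, cov, size=n)  # simulated log-returns
Rf = 0.02
gross = np.exp(R)  # gross returns on the risky assets

def expected_utility(w, gamma=5.0):
    """Monte Carlo CRRA expected utility; the residual weight earns Rf."""
    WT = gross @ w + (1.0 - w.sum()) * np.exp(Rf)
    return np.mean(WT ** (1.0 - gamma)) / (1.0 - gamma)

# Crude grid search over long-only weights in the two risky assets.
grid = np.linspace(0.0, 1.0, 21)
best = max(((w1, w2) for w1 in grid for w2 in grid if w1 + w2 <= 1.0),
           key=lambda w: expected_utility(np.array(w)))
print(best)
```

Replacing the Gaussian joint distribution of R with a non-Gaussian one (fatter tails, asymmetric dependence) changes the optimal weights: this is exactly the dimension of the problem that the joint distribution specification controls.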
Like the asset manager and the pricer, the risk manager also faces an intrinsically multivariate problem. This is the issue of measuring the exposure of the position to different risk factors. In standard practice, he transforms the financial positions in the different assets and markets into a set of exposures (buckets, in the jargon) to a set of risk factors (mapping process). The problem is then to estimate the joint distribution of losses on these exposures and define a risk measure on this distribution. Typical measures are Value-at-Risk (VaR) and Expected Shortfall (ES). These measures are multivariate in the sense that they must account for correlation among the losses, but there is a subtle point to be noticed here, which makes this practice obsolete with respect to structured finance products, and correlation products in particular. A first point is that these products are non-linear, so that their value may change even though market prices do not move but their volatilities do. As for volatility, the problem can be handled by including a bucket of volatility exposures for every risk factor. But there is a crucial point that gets lost if correlation products are taken into account. It is the fact that the value of these products can change even if neither the market prices nor their volatilities move, but simply because of a change in correlation. In fact, this exposure to correlation among the assets included in the specific product is lost in the mapping procedure. Correlation risk then induces risk managers to measure this dimension of risk on a product-by-product basis, using either historical simulation or stress-testing techniques.
1.2 IMPLIED VS REALIZED CORRELATION
A peculiar feature of applications of probability and statistics to finance is the distinction between historical and implied information. This duality, which is found in many (if not all) applications in univariate analysis, shows up in the multivariate setting as well. On the one side, standard time series data from the market enable us to gauge the relevance of market co-movements for investment strategies and risk management issues. On the other side, if there exist derivative prices which are dependent on market correlation, it is possible to recover the degree of co-movement credited by investors and financial intermediaries to the markets, and this is done by simply inverting the prices of these derivatives. Of course, recovering implied information is subject to the same flaws as those that are typical of the univariate setting. First, the possibility of neatly backing out this information may be limited by the market incompleteness problem, which has the effect of introducing a source of noise into market prices. Second, the distribution backed out is the risk-neutral one and a market price of risk could be charged to allow for the possibility of correlation changes. These problems are indeed compounded and in a sense magnified in the multivariate setting, in which the uncertainty concerning the dependence structure among the markets adds to that on the shape of marginal distributions.
Unfortunately, there are not many cases in which correlation can be implied from the market. An important exception is found in the FOREX market, because of the so-called triangular arbitrage relationship. Consider the Dollar/Euro (S_{$/€}), the Euro/Yen (S_{€/¥}) and the Dollar/Yen (S_{$/¥}) exchange rates. Triangular arbitrage requires that

S_{$/¥} = S_{$/€} S_{€/¥}.

Taking logs and denoting by σ_{$/€}, σ_{€/¥}, and σ_{$/¥} the corresponding implied volatilities, we have that

σ²_{$/¥} = σ²_{$/€} + σ²_{€/¥} + 2 ρ σ_{$/€} σ_{€/¥},

from which

ρ = (σ²_{$/¥} − σ²_{$/€} − σ²_{€/¥}) / (2 σ_{$/€} σ_{€/¥})

is the implied correlation between the Dollar/Euro and the Euro/Yen exchange rates priced by the market.
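A sketch of the computation, with hypothetical implied volatility levels:

```python
def implied_fx_correlation(vol_usd_jpy, vol_usd_eur, vol_eur_jpy):
    """Implied correlation between Dollar/Euro and Euro/Yen log-changes,
    backed out from the triangular relation
    sigma_usd_jpy^2 = sigma_usd_eur^2 + sigma_eur_jpy^2
                      + 2 * rho * sigma_usd_eur * sigma_eur_jpy."""
    return ((vol_usd_jpy**2 - vol_usd_eur**2 - vol_eur_jpy**2)
            / (2.0 * vol_usd_eur * vol_eur_jpy))

# Hypothetical annualized implied volatilities for the three crosses.
rho = implied_fx_correlation(0.14, 0.10, 0.11)
print(round(rho, 3))
```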
1.3 BOTTOM-UP VS TOP-DOWN MODELS
For all the reasons above, estimating correlation, either historical or implied, has become the focus of research in the last decade. More precisely, the focus has been on the specification of the joint distribution of prices and risk factors. This has raised a first strategic choice between two opposite classes of models, which have been denoted top-down and bottom-up approaches. In all applications (the pricing of equity and credit derivatives, risk management aggregation and allocation), the first choice is whether to fit all markets and risk factors with a joint distribution, specifying in the process both the marginal distributions of the risk factors and their dependence structure. The alternative is to take care of marginal distributions first, and of the dependence structure in a second step. It is clear that copula functions represent the main tool of the latter approach. It is not difficult to gauge what the pros and cons of the two alternatives might be. Selecting a joint distribution fitting all risks may not be easy, beyond the standard choices of the normal distribution for continuous variables and the Poisson distribution for discrete random variables. If one settles instead for non-parametric statistics, even for a moderate number of risk factors the implementation runs into the so-called curse of dimensionality. As for the advantages, a top-down model would make it fairly easy to impose restrictions that make prices consistent with the equilibrium or no-arbitrage restrictions. Nevertheless, this may come at the cost of marginal distributions that do not fit those observed in the market. Only seldom (if ever) does this poor fit correspond to arbitrage opportunities; more often it is merely a symptom of model risk.
On the opposite side, the bottom-up model may ensure that marginal distributions are properly fitted, but it may be the case that this fit does not abide by the consistency relationships that must exist among prices: the most well known example is the no-arbitrage restriction requiring that prices of assets in speculative markets follow martingale processes. The main goal of this book is actually to show how to impound restrictions like these in a bottom-up framework.
1.4 COPULA FUNCTIONS
Copula functions are the main tool for a bottom-up approach. They are actually built on purpose with the goal of pegging a multivariate structure to prescribed marginal distributions. This problem was first addressed and solved by Abe Sklar in 1959. His theorem showed that any joint distribution can be written as a function of marginal distributions:

F(x_1, …, x_n) = C(F_1(x_1), …, F_n(x_n)),

and that the class of functions C, denoted copula functions, may be used to extend the class of multivariate distributions well beyond those known and usually applied. To recall the dual approach above, the former result allows us to say that any top-down approach may be written in the formalism of copula functions, while the latter states that copulas can be applied in a bottom-up approach to generate infinitely many distributions. A question is whether this multiplicity may be excessive for financial applications, and this whole book is devoted to that question.
Often a more radical question is raised, whether there is any advantage at all to working with copulas. More explicitly, one could ask what can be done with copulas that cannot be done with other techniques. The answer is again the essence of the bottom-up philosophy. The crucial point is that in the market, we are used to observing marginal distributions. All the information that we can collect is about marginals: the time series of this and that price, and the implied distribution of the underlying asset of an option market for a given exercise date. We can couple time series of prices or of distributions together and study their dependence, but only seldom can we observe multivariate distributions. For this reason, it is mandatory that any model be made consistent with the univariate distributions observed in the market: this is nothing but an instance of that procedure pervasively used in the markets and called calibration.
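A minimal sketch of the bottom-up construction behind Sklar's theorem: sample from a Gaussian copula (an arbitrary choice here) and push the uniforms through whatever marginals are prescribed; the marginals are matched by construction, whatever the dependence:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 50_000
rho = 0.6

# Step 1: sample from a Gaussian copula (uniform marginals, rho dependence).
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = stats.norm.cdf(z)

# Step 2: plug in whatever marginals are observed/prescribed.
x = stats.expon.ppf(u[:, 0], scale=2.0)  # marginal 1: exponential
y = stats.t.ppf(u[:, 1], df=4)           # marginal 2: Student-t

# The marginal of x is exponential by construction (tiny KS distance),
# while x and y remain dependent through the copula.
print(stats.kstest(x, stats.expon(scale=2.0).cdf).statistic)
print(np.corrcoef(x, y)[0, 1])
```

This is the sense in which the bottom-up approach is a calibration device: the univariate distributions are fixed first, and the copula is free to model co-movement.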
1.5 SPATIAL AND TEMPORAL DEPENDENCE
To summarize the arguments of the previous section, it is of the utmost importance that multivariate models be consistent with univariate observed prices, but this consistency must be subject to some rules and cannot be set without limits. These limits were not considered in standard copula functions applications to finance problems. In these applications the term multivariate was used with the meaning that several different risk factors at a given point in time were responsible for the value of a position at that time. This concept is called spatial dependence in statistics and is also known as cross-section dependence in econometrics. Copula functions could be used in full flexibility to represent the consistency between the price of a multivariate product at a given date and the prices of the constituent products observed in the market. However, the term multivariate could be used with a different meaning, that would make things less easy. It could in fact refer to the dependence structure of the value of the same variable observed at different points in time: this is actually defined as a stochastic process. In the language of statistics, the dependence among these variables would be called temporal dependence. Curiously, in econometric applications copula functions have mainly been intended in this sense. If copulas are used in the same sense in derivative pricing problems, the flexibility of copulas immediately becomes a problem: for example, one would like to impose restrictions on the dynamics to have Markov processes and martingales, and only a proper specification of copulas could be selected to satisfy these requirements.
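As an illustration of copula-driven temporal dependence, one can build a stationary Markov chain whose one-step transitions follow a prescribed bivariate copula by sampling from the conditional copula. A sketch using the Clayton family, whose conditional distribution inverts in closed form (the parameter value is arbitrary, and this is illustrative only, not the DNO construction discussed later in the book):

```python
import numpy as np

rng = np.random.default_rng(3)

def clayton_markov_path(n, theta, u0=0.5):
    """Simulate u_1..u_n where each pair (u_{t-1}, u_t) has a Clayton
    copula with parameter theta > 0, so the stationary marginals are
    uniform. Transitions use the closed-form conditional inverse
    v = [u^{-theta} (w^{-theta/(1+theta)} - 1) + 1]^{-1/theta}."""
    u = np.empty(n)
    u_prev = u0
    for t in range(n):
        w = rng.uniform()
        u_prev = (u_prev**(-theta) * (w**(-theta / (1 + theta)) - 1) + 1) ** (-1 / theta)
        u[t] = u_prev
    return u

path = clayton_markov_path(20_000, theta=2.0)
# Marginals stay (approximately) uniform while successive values are
# positively dependent, with strong lower-tail clustering.
print(round(path.mean(), 2), round(np.corrcoef(path[:-1], path[1:])[0, 1], 2))
```

Restrictions such as the Markov property are automatic here; restrictions such as the martingale property are not, which is exactly why the flexibility of copulas becomes a problem in pricing applications.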
Even more restrictions would apply in an even more general setting, in which a multivariate process would be considered as a collection of random variables representing the value of each asset or risk factor at different points in time. In the standard practice of econometrics, in which it is often assumed that relationships are linear, this would give rise to the models called vector autoregression (VAR). Copula functions allow us to extend these models to a general setting in which relationships are allowed to be non-linear and non-Gaussian, which is the rule rather than the exception of portfolios of derivative products.
1.6 LONG-RANGE DEPENDENCE
These models, which extend traditional VAR time series with a specification in terms of copula functions, are called semi-parametric, and the most well-known example is the so-called SCOMDY model (Semiparametric Copula-based Multivariate Dynamic). Taking the comparison with linear models one step further, these models raise the question of the behavior of the processes over long time horizons. We know from the theory of time series that a univariate process y_t modeled with the dynamics

y_t = μ + α_1 y_{t−1} + … + α_p y_{t−p} + ε_t + β_1 ε_{t−1} + … + β_q ε_{t−q},

where μ, the α_i and the β_j are constant parameters and ε_t is a white-noise innovation with variance σ², is called an ARMA(p,q) model (Autoregressive Moving Average process), and can be extended to a multiple set of processes called VARMA. In particular, the MA part of the process is represented by the dependence of y_t on the past q innovations, and the AR part is given by its dependence on the past p values of the process itself. If we focus on the autoregressive part, we know that in cases in which the characteristic equation of the process

λ^p − α_1 λ^{p−1} − … − α_p = 0

has solutions strictly inside the unit circle, the process is said to be stationary in mean. To make the meaning clear, let us just focus on the simplest AR(1) process:

y_t = μ + α y_{t−1} + ε_t.

It is easy to show that if |α| < 1, by recursive substitution of y_{t−1}, y_{t−2}, … into y_t we have

E(y_t) = μ (1 − α^t)/(1 − α) + α^t y_0 → μ/(1 − α)

and

Var(y_t) = σ² (1 − α^{2t})/(1 − α²) → σ²/(1 − α²),

where we have used the moments of the distribution of ε_t. Notice that if instead α = 1, the dynamics of y_t is defined by

y_t = y_0 + μ t + Σ_{j=1}^{t} ε_j,

and neither the mean nor the variance of the unconditional distribution are defined. In this case the process is called integrated (of order 1) or difference stationary, or we say that the process contains a unit root. The idea is that the first difference of the process is stationary (in mean). The distinguishing feature of these processes is that any shock affecting a variable remains in its history forever, a property called persistence. As an extension, one can conceive that several processes may be linear combinations of the same persistent shock y_t, which is also called the common stochastic trend of the processes. In this case we say that the set of processes constitutes a co-integrated system. More formally, a set of processes is said to constitute a co-integrated system if there exists at least one linear combination of the processes that is stationary in mean.
In a separate stream of literature, an intermediate case has been analyzed, in which the process is said to be fractionally integrated, so that the process is made stationary by taking fractional differences: the long-run behavior of these processes is denoted long memory. In Chapter 4 we shall give a formal definition of long memory (due to Granger, 2003) and we will discuss the linkage with copula-based stochastic processes. As for the contribution of copulas to these issues, notice that while most of the literature on unit roots and persistence vs stationary models has developed under the maintained assumption of Gaussian innovations, the use of copula functions extends the analysis to non-Gaussian models. Whether these models can represent a new specification for the long-run behavior of time series remains an open issue.
1.7 MULTIVARIATE GARCH MODELS
Since copula functions are a general technique to address multivariate non-Gaussian systems, one of the alternative approaches that represents the fiercest challenge to them is the multivariate version of conditional heteroscedasticity models. The idea is to assume that Gaussian dynamics, or dynamics that can be handled with tractable distributions, can be maintained if they are modeled conditionally on the dynamics of volatility. So, in the univariate case a shock has the twofold effect of changing the return of a period and the volatility of the following period. Just as in the univariate case, a possible choice to model the non-normal joint distribution of returns is to assume that they are conditionally normal. A major problem with this approach is that the number of parameters to be estimated can grow very large very quickly. Furthermore, restrictions must be imposed to ensure that the covariance matrix is symmetric and positive definite, and imposing the latter restriction in particular is not easy. The most general model of covariance matrix dynamics is specified by arranging all the coefficients in the matrix in a vector. The best-known specification is, however, the one called BEKK (from the authors: Baba, Engle, Kraft, and Kroner, 1990). Calling $H_t$ the covariance matrix at time $t$, this specification reads

$$H_t = C^{\top}C + A^{\top}\varepsilon_{t-1}\varepsilon_{t-1}^{\top}A + B^{\top}H_{t-1}B$$

where $C$, $A$ and $B$ are $n$-dimensional matrices of coefficients and $\varepsilon_{t-1}$ is the vector of innovations. Very often, special restrictions are imposed on the matrices in order to reduce the dimension of the estimation problem. For example, a typical restriction is to assume that the matrices $A$ and $B$ are diagonal, so limiting the flexibility of the representation.
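As an illustration, the BEKK(1,1) recursion can be simulated directly. The following Python sketch uses hypothetical parameter matrices (with the diagonal restriction on $A$ and $B$); it is a toy simulation, not an estimation procedure. The quadratic forms in the recursion keep every conditional covariance matrix symmetric and positive definite by construction.

```python
import numpy as np

# Toy BEKK(1,1) covariance recursion for n = 2 assets.
# Parameter matrices C, A, B are hypothetical, chosen for stationarity.
rng = np.random.default_rng(0)
n, T = 2, 500

C = np.array([[0.10, 0.00],
              [0.03, 0.08]])        # lower-triangular intercept term
A = np.diag([0.30, 0.25])           # shock loadings (diagonal restriction)
B = np.diag([0.90, 0.92])           # persistence loadings (diagonal restriction)

H = np.eye(n)                       # initial covariance matrix H_0
eps = np.zeros(n)                   # initial innovation vector
paths = []
for t in range(T):
    # BEKK recursion: H_t = C C' + A' eps eps' A + B' H B
    H = C @ C.T + A.T @ np.outer(eps, eps) @ A + B.T @ H @ B
    # draw the next innovation conditionally on H_t
    eps = rng.multivariate_normal(np.zeros(n), H)
    paths.append(H.copy())

# every H_t is symmetric positive definite by construction
assert all(np.all(np.linalg.eigvalsh(Ht) > 0) for Ht in paths)
```

In an actual application, $C$, $A$ and $B$ would be estimated by maximizing the conditional Gaussian likelihood rather than fixed a priori.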
In order to reduce the dimensionality of the problem, the typical recipe used in statistics is to resort to data compression methods: principal component analysis and factor analysis. Both these approaches have been applied to the multivariate GARCH problem. Engle, Ng and Rothschild (1990) resort to a factor GARCH representation in which the joint dynamics of returns is driven by a small set of common factors. This way, the dimension of the estimation problem is drastically reduced. Alexander (2001) proposes the so-called orthogonal GARCH model. The idea is to use principal component analysis to diagonalize the covariance matrix and to estimate a univariate GARCH model on each of the principal components. Eigenvectors are then used to reconstruct the variance matrix. The maintained assumption in this model is of course that the same linear transformation diagonalizes not only the unconditional variance matrix, but also the conditional one.
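The rotation step of the orthogonal GARCH recipe can be sketched in a few lines. The data below are simulated, and the univariate GARCH fitting on each component is only indicated in comments, since the point here is the diagonalization and reconstruction logic.

```python
import numpy as np

# Orthogonal GARCH, compression step only (toy data, no GARCH fitting).
rng = np.random.default_rng(4)
T, n = 2000, 3
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
r = rng.standard_normal((T, n)) @ np.linalg.cholesky(corr).T  # toy returns

S = np.cov(r.T)                   # unconditional covariance matrix
eigval, W = np.linalg.eigh(S)     # diagonalization: S = W diag(eigval) W'
pc = r @ W                        # principal components of the returns

# the components are uncorrelated in sample, so a univariate GARCH(1,1)
# can be fitted to each column of pc separately; with fitted conditional
# variances h_t, the conditional covariance is rebuilt as W diag(h_t) W'
pc_cov = np.cov(pc.T)
```

The maintained assumption mentioned above shows up here explicitly: the same matrix $W$ is used to rotate both the unconditional and the conditional covariance.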
Besides data compression methods, a computationally effective alternative is to separate the specification of the marginal distributions from the dependence structure. This approach to the specification of the system, which very closely resembles the copula approach, was proposed by Engle (2002). The model is known as Dynamic Conditional Correlation (DCC). The idea is to split the specification of a multivariate GARCH model into a two-step procedure in which univariate marginal GARCH processes are specified in the first step, and the dependence structure in the second step. More formally, the approach is based on standardized returns, defined as

$$\varepsilon_{j,t} = \frac{r_{j,t}}{\sqrt{h_{j,t}}}$$

where $h_{j,t}$ is the conditional variance from the univariate GARCH model for return $j$, and the standardized returns are assumed to have standard normal distribution. Now, notice that the pairwise conditional correlation of returns $j$ and $k$ is

$$\rho_{jk,t} = \frac{E_{t-1}[\varepsilon_{j,t}\varepsilon_{k,t}]}{\sqrt{E_{t-1}[\varepsilon_{j,t}^2]\,E_{t-1}[\varepsilon_{k,t}^2]}} = E_{t-1}[\varepsilon_{j,t}\varepsilon_{k,t}]$$

where the last equality follows because standardized returns have unit conditional variance.
Engle (2002) proposes to model such conditional correlations in an autoregressive framework:

$$q_{jk,t} = \bar{\rho}_{jk} + \alpha\left(\varepsilon_{j,t-1}\varepsilon_{k,t-1} - \bar{\rho}_{jk}\right) + \beta\left(q_{jk,t-1} - \bar{\rho}_{jk}\right)$$

where $\bar{\rho}_{jk}$ is the steady-state value of the correlation between returns $j$ and $k$.
The estimator proposed for the conditional correlation is

$$\hat{\rho}_{jk,t} = \frac{q_{jk,t}}{\sqrt{q_{jj,t}\,q_{kk,t}}}$$
Once written in matrix form, the DCC model can be estimated in a two-stage procedure similar to the IFM (inference functions for margins) method that is typical of copula functions. The model is also well suited to generalization beyond the Gaussian dependence structure, applying copula function specifications as suggested in Fengler et al. (2010).
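A minimal Python sketch of the DCC(1,1) recursion may clarify the two-step logic. The standardized returns are simulated here (in practice they come from the first-step univariate GARCH fits), and the values of $\alpha$ and $\beta$ are illustrative rather than estimated.

```python
import numpy as np

# Toy DCC(1,1) correlation recursion, in the spirit of Engle (2002).
rng = np.random.default_rng(1)
T = 1000

# simulate standardized returns with a constant true correlation of 0.5
true_rho = 0.5
L = np.linalg.cholesky(np.array([[1.0, true_rho], [true_rho, 1.0]]))
eps = rng.standard_normal((T, 2)) @ L.T

alpha, beta = 0.05, 0.90
Q_bar = np.corrcoef(eps.T)     # steady-state (unconditional) correlation
Q = Q_bar.copy()
rho_path = []
for t in range(T):
    # matrix form of the autoregressive recursion:
    # Q_{t+1} = (1 - a - b) Q_bar + a eps_t eps_t' + b Q_t
    Q = (1 - alpha - beta) * Q_bar + alpha * np.outer(eps[t], eps[t]) + beta * Q
    # conditional correlation estimator: rho = q_jk / sqrt(q_jj q_kk)
    d = np.sqrt(np.diag(Q))
    rho_path.append(Q[0, 1] / (d[0] * d[1]))

rho_path = np.array(rho_path)
```

Because each $Q_t$ is a non-negative combination of positive semi-definite matrices, the rescaled correlations always lie in $[-1, 1]$, and they fluctuate around the steady-state value of 0.5.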
1.8 COPULAS AND CONVOLUTION
In this book, we propose an original approach to the use of copulas in financial applications. Our main task is to identify the limits that must be imposed on the freedom to select any copula for any application. The motivation behind this project is that the feeling of complete flexibility and freedom, which from the start was considered the main advantage of this tool, in the end turned out to be its major limitation. As we are going to see, there is something of a paradox in this: the major flaw in the use of copulas in finance, which were originally applied to allow for non-linear relationships, is that it is quite difficult to apply them to linear combinations of variables. We could say that a sort of curse of linearity is haunting financial econometrics. To put it in more formal terms, in almost all applications in finance we face the problem of determining the distribution of a sum of random variables. If the random variables are assumed to be independent, this distribution is called the convolution of the distributions of those variables. In a setting in which the variables are not independent, this convolution concept has to be extended. The extension of this concept to the general case of dependent variables is the innovation at the root of this book: we define the C-convolution as the distribution of the sum of variables linked by a copula function $C$. We also show that once this convolution concept is established, we immediately obtain the copula functions that link the distribution of the variables which are part of a sum and the convolution itself. It is our claim that this convolution restriction is the main limitation that must be imposed on the selection of copula functions if one wants to obtain well-founded financial applications.
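As a first numerical taste of the idea, the distribution of a sum of dependent variables can be explored by Monte Carlo: sample the copula, map the uniforms through the margins, and add. The sketch below uses a Gaussian copula with standard normal margins purely because the resulting sum is then exactly normal with variance $2(1+\rho)$, which gives a closed-form check; the C-convolution operator itself is developed formally in the following chapters.

```python
import numpy as np
from scipy import stats

# Monte Carlo sketch of the distribution of X + Y when X and Y are
# linked by a copula. Gaussian copula, standard normal margins.
rng = np.random.default_rng(2)
n, rho = 200_000, 0.6

# sample (u, v) from a Gaussian copula via correlated normals
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u, v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

# map the copula sample back through the margins and form the sum
x = stats.norm.ppf(u)        # margin of X: N(0, 1)
y = stats.norm.ppf(v)        # margin of Y: N(0, 1)
s = x + y

# here the distribution of the sum is N(0, 2(1 + rho)), variance 3.2;
# under independence (rho = 0) the variance would be 2 instead
print(s.var())
```

Changing only the copula, with the margins held fixed, changes the distribution of the sum: this is exactly the consistency issue the book is about.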
Throughout this book we provide examples of this consistency requirement in all typical fields of copula applications, namely multivariate equity and credit derivatives and risk aggregation and allocation.
As a guide to the reader, the book is made up of three parts. Chapter 2 recalls the main concepts of copula function theory and collects the latest results, limiting the review to the standard way in which copula functions are applied in finance. In Chapters 3 and 4 we construct the theory and the econometrics of convolution-based copulas. In Chapters 5, 6 and 7 we show that limiting the selection of copulas to members of this class enables us to solve many of the consistency problems arising in financial applications. In Chapter 8 we build a bridge to future research by addressing the problem of convergence of our copula model, which is developed in a discrete-time setting, to continuous time: the issue remains open, and for the time being we cannot do any better than review existing copula models that were built in continuous time or that are able to deal with the convolution restriction.
To provide further motivation to the reader, we preview here examples in which the convolution restriction arises in applied work, in equity, credit and risk measurement. Consider multivariate equity or FOREX derivatives expiring at different dates. One of the current arguments against the use of copula functions is that they make it difficult to enforce price consistency across different dates. Indeed, both the efficient market hypothesis and martingale-based pricing techniques impose specific restrictions on the dynamics of market prices. These restrictions, which in short mean that price increments must be unpredictable, imply a restriction on the copula functions that may be used to represent the temporal dependence of the price process: more explicitly, future levels cannot be predicted using the dependence between increment and level. This restriction is exactly of the convolution type: the dependence of the price at different points in time must be consistent with the dependence between a price and its increment over the period, and the latter must be designed to fulfill the unpredictability requirement above. Let us now come to credit derivatives, and consider a typical term structure problem: assume you buy insurance on a basket of credit exposures for five years and sell the same insurance on the same basket for seven years. Could the prices of the two insurance contracts be chosen arbitrarily and independently of each other? The answer is clearly no, because the difference in the two prices is the value today of selling insurance for two years starting five years from now: put in other terms, the distribution of losses over seven years is the convolution of the losses in the first five years and those in the following two years; ignoring this convolution restriction could leave room for arbitrage opportunities.
We may conclude with the most obvious risk management application: your firm is made up of a desk that invests in equity products and another one that invests in credit products. For each of the desks you may come up with a risk measure based on the profit and loss distribution. How much capital would you require for the firm as a whole? One more time, the answer to this standard question is convolution.
2
Copula Functions: The State of The Art
This chapter is devoted to a bird's-eye review of the basic concepts of the theory of dependence functions or copula functions, as they are more usually called. Experts in the field may skip this reading, or, if they do not, they are warned it could be boring. People who have never been exposed to concepts such as non-parametric association measures or tail dependence indexes are instead welcome to go through the chapter, and they will find almost all the basic tools they need to deal with the rest of the book. The menu includes a starter of basic concepts on copula functions, with some examples and their basic properties, a main course of the main families of copulas with a discussion of dependence measures on the side, and a dessert of advanced modeling problems of copula functions. This will suffice as an introduction to the meals that will follow in the coming chapters. However, for those who want to undertake a career as a chef and get their own restaurant started, we suggest the classical books by Nelsen (2006) and Joe (1997) as mandatory readings.
2.1 COPULA FUNCTIONS: THE BASIC RECIPE
What ordinary people know about copula functions is that they are used to separate the specification of marginal distributions from the dependence structure. It is not difficult to gain a deeper insight into this result and to become somewhat of an expert in the field. The key result upon which all of this theory is built is the well-known probability integral transformation theorem for continuous random variables. All it says is that if one takes any random variable $X$ with continuous distribution $F_X(x)$ (that is, the function that denotes $\Pr(X \le x)$) and computes $u = F_X(X)$, the variable obtained as a result has uniform distribution in the unit interval (that is, $\Pr(u \le w) = w$ for all $w \in [0,1]$). This is true for every random variable endowed with a continuous distribution (the only kind of distribution that will be addressed throughout this book), such as $Y$ with distribution $F_Y(y)$. So, $v = F_Y(Y)$ will also be uniformly distributed. A question arises quite naturally: what about the joint distribution $F_{X,Y}(x,y)$? Can we apply the probability integral transformation theorem to it? The answer is a resounding no, and one has always to keep in mind that the theorem holds for univariate distributions only. Nevertheless, the theorem may help to rewrite the joint distribution in an interesting way. Assume we may define the inverse function of both $F_X$ and $F_Y$: notice that this can always be done because distribution functions are non-decreasing, even though it may happen that such an inverse is not unique at points where some of the distribution functions are represented by horizontal lines. In order to avoid this problem, we can use the following concept of generalized inverse, defined as

$$F_X^{(-1)}(u) = \inf\{x : F_X(x) \ge u\}$$
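The univariate transformation can be checked empirically in a few lines. The exponential distribution below is an arbitrary continuous example; the code only verifies that $u = F(X)$ has the moments of a Uniform(0, 1) variable.

```python
import numpy as np

# Probability integral transform: for continuous X with cdf F, u = F(X)
# is Uniform(0, 1). Example with X ~ Exponential(scale = 2).
rng = np.random.default_rng(3)
scale = 2.0
x = rng.exponential(scale=scale, size=100_000)
u = 1.0 - np.exp(-x / scale)      # u = F(x), the exponential cdf

# empirical moments should match Uniform(0, 1): mean 1/2, variance 1/12
print(u.mean(), u.var())

# note: the theorem is univariate; for dependent (X, Y), the pair
# (F_X(X), F_Y(Y)) has uniform margins but is not independent:
# its joint distribution is precisely the copula of (X, Y)
```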
