Transform your approach to oprisk modelling with a proven, non-statistical methodology.

Operational Risk Modeling in Financial Services provides risk professionals with a forward-looking approach to risk modelling, based on structured management judgement rather than obsolete statistical methods. Proven over a decade's use in major banks and financial services firms in Europe and the US, the Exposure, Occurrence, Impact (XOI) method of operational risk modelling played an instrumental role in reshaping their oprisk modelling approaches; in this book, the expert team that developed this methodology offers practical, in-depth guidance on XOI use and applications for a variety of major risks. The Basel Committee has dismissed statistical approaches to risk modelling, leaving regulators and practitioners searching for the next generation of oprisk quantification. The XOI method is ideally suited to fill this need: a calculated, coordinated, consistent approach designed to bridge the gap between risk quantification and risk management. This book details the XOI framework and provides essential guidance for practitioners looking to change the oprisk modelling paradigm.

* Survey the range of current practices in operational risk analysis and modelling
* Track recent regulatory trends, including capital modelling, stress testing, and more
* Understand the XOI oprisk modelling method, and transition away from statistical approaches
* Apply XOI to major operational risks, such as disasters, fraud, conduct, legal, and cyber risk

The financial services industry is in dire need of a new standard: a proven, transformational approach to operational risk that eliminates or mitigates the common issues with traditional approaches. Operational Risk Modeling in Financial Services provides practical, real-world guidance toward a more reliable methodology, shifting the conversation toward the future with a new kind of oprisk modelling.
Page count: 563
Year of publication: 2019
Cover
List of Figures
List of Tables
Foreword
Preface
NOTES
PART One: Lessons Learned in 10 Years of Practice
CHAPTER 1: Creation of the Method
1.1 FROM ARTIFICIAL INTELLIGENCE TO RISK MODELLING
1.2 MODEL LOSSES OR RISKS?
NOTE
CHAPTER 2: Introduction to the XOI Method
2.1 A RISK MODELLING DOCTRINE
2.2 A KNOWLEDGE MANAGEMENT PROCESS
2.3 THE EXPOSURE, OCCURRENCE, IMPACT (XOI) APPROACH
2.4 THE RETURN OF AI: BAYESIAN NETWORKS FOR RISK ASSESSMENT
NOTE
CHAPTER 3: Lessons Learned in 10 Years of Practice
3.1 RISK AND CONTROL SELF-ASSESSMENT
3.2 LOSS DATA
3.3 QUANTITATIVE MODELS
3.4 SCENARIOS WORKSHOPS
3.5 CORRELATIONS
3.6 MODEL VALIDATION
NOTES
PART Two: Challenges of Operational Risk Measurement
CHAPTER 4: Definition and Scope of Operational Risk
4.1 ON RISK TAXONOMIES
4.2 DEFINITION OF OPERATIONAL RISK
NOTES
CHAPTER 5: The Importance of Operational Risk
5.1 THE IMPORTANCE OF LOSSES
5.2 THE IMPORTANCE OF OPERATIONAL RISK CAPITAL
5.3 ADEQUACY OF CAPITAL TO LOSSES
CHAPTER 6: The Need for Measurement
6.1 REGULATORY REQUIREMENTS
6.2 NONREGULATORY REQUIREMENTS
NOTES
CHAPTER 7: The Challenges of Measurement
7.1 INTRODUCTION
7.2 MEASURING RISK OR MEASURING RISKS?
7.3 REQUIREMENTS OF A RISK MEASUREMENT METHOD
7.4 RISK MEASUREMENT PRACTICES
PART Three: The Practice of Operational Risk Management
CHAPTER 8: Risk and Control Self-Assessment
8.1 INTRODUCTION
8.2 RISK AND CONTROL IDENTIFICATION
8.3 RISK AND CONTROL ASSESSMENT
NOTES
CHAPTER 9: Losses Modelling
9.1 LOSS DISTRIBUTION APPROACH
9.2 LOSS REGRESSION
NOTES
CHAPTER 10: Scenario Analysis
10.1 SCOPE OF SCENARIO ANALYSIS
10.2 SCENARIO IDENTIFICATION
10.3 SCENARIO ASSESSMENT
NOTES
PART Four: The Exposure, Occurrence, Impact Method
CHAPTER 11: An Exposure-Based Model
11.1 A TSUNAMI IS NOT AN UNEXPECTEDLY BIG WAVE
11.2 USING AVAILABLE KNOWLEDGE TO INFORM RISK ANALYSIS
11.3 STRUCTURED SCENARIOS ASSESSMENT
11.4 THE XOI APPROACH: EXPOSURE, OCCURRENCE, AND IMPACT
CHAPTER 12: Introduction to Bayesian Networks
12.1 A BIT OF HISTORY
12.2 A BIT OF THEORY
12.3 INFLUENCE DIAGRAMS AND DECISION THEORY
12.4 INTRODUCTION TO INFERENCE IN BAYESIAN NETWORKS
12.5 INTRODUCTION TO LEARNING IN BAYESIAN NETWORKS
NOTE
CHAPTER 13: Bayesian Networks for Risk Measurement
13.1 AN EXAMPLE IN CAR FLEET MANAGEMENT
NOTES
CHAPTER 14: The XOI Methodology
14.1 STRUCTURE DESIGN
14.2 QUANTIFICATION
14.3 SIMULATION
CHAPTER 15: A Scenario in Internal Fraud
15.1 INTRODUCTION
15.2 XOI MODELLING
NOTES
CHAPTER 16: A Scenario in Cyber Risk
16.1 DEFINITION
16.2 XOI MODELLING
NOTES
CHAPTER 17: A Scenario in Conduct Risk
17.1 DEFINITION
17.2 TYPES OF MISCONDUCT
17.3 XOI MODELLING
NOTES
CHAPTER 18: Aggregation of Scenarios
18.1 INTRODUCTION
18.2 INFLUENCE OF A SCENARIO ON AN ENVIRONMENT FACTOR
18.3 INFLUENCE OF AN ENVIRONMENT FACTOR ON A SCENARIO
18.4 COMBINING THE INFLUENCES
18.5 TURNING THE DEPENDENCIES INTO CORRELATIONS
NOTE
CHAPTER 19: Applications
19.1 INTRODUCTION
19.2 REGULATORY APPLICATIONS
19.3 RISK MANAGEMENT
NOTES
CHAPTER 20: A Step towards “Oprisk Metrics”
20.1 INTRODUCTION
20.2 BUILDING EXPOSURE UNITS TABLES
20.3 SOURCES FOR DRIVER QUANTIFICATION
20.4 CONCLUSION
Index
End User License Agreement
“Patrick Naim and Laurent Condamin articulate the most comprehensive quantitative and analytical framework that I have encountered for the identification, assessment and management of Operational Risk. I have employed it for five years and found it both usable and effective. I recommend this book as essential reading for senior risk managers.”
–C.S. Venkatakrishnan, CRO, Barclays
“I had the pleasure to work with Laurent and Patrick to implement the XOI approach across a large multinational insurer. The key benefits of the method are to provide an approach to understand, manage and quantify risks and, at the same time, to provide a robust framework for capital modelling. Thanks to this method, we have been able to demonstrate the business benefits of operational risk management. XOI is also well designed to support the Operational Resilience agenda in financial services, which is the new frontier for Op Risk Management.”
–Michael Sicsic, Head of Supervision, Financial Conduct Authority; Ex-Global Operational Risk Director, Aviva Plc
“The approach described in this book was a ‘Eureka!’ moment in my journey on operational risk. Coming from a market risk background, I had the impression that beyond the definition of operational risk, it was difficult to find a book that described a coherent framework for measuring and managing operational risk. Operational Risk Modeling in Financial Services is now filling this gap.”
–Olivier Vigneron, CRO EMEA, JPMorgan Chase & Co
“The XOI methodology provides a structured approach for the modelling of operational risk scenarios. The XOI methodology is robust, forward looking and easy to understand. This book will help you understand the XOI methodology by giving you practical guidance to show how risk managers, risk modellers and scenario owners can work together to model a range of operational risk scenarios using a consistent approach.”
–Michael Furnish, Head of Model Governance and Operational Risk, Aviva Plc
“The XOI approach is a simple framework that allows one to measure operational risk by identifying and quantifying the main loss drivers per risk. This facilitates business and management engagement, as the various drivers are defined in business terms and not in risk management jargon. Further, the XOI approach can be used for risk appetite setting and monitoring. I strongly believe that the XOI approach has the potential to become an industry standard for banks and regulators.”
–Emile Dunand, ORM Scenarios & Stress Testing, Credit Suisse
Founded in 1807, John Wiley & Sons is the oldest independent publishing company in the United States. With offices in North America, Europe, Australia and Asia, Wiley is globally committed to developing and marketing print and electronic products and services for our customers' professional and personal knowledge and understanding.
The Wiley Finance series contains books written specifically for finance and investment professionals as well as sophisticated individual investors and their financial advisors. Book topics range from portfolio management to e-commerce, risk management, financial engineering, valuation, and financial instrument analysis, as well as much more.
For a list of available titles, visit our website at www.WileyFinance.com.
PATRICK NAIM
LAURENT CONDAMIN
This edition first published 2019. © 2019 John Wiley & Sons Ltd.
Registered office: John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Names: Naim, Patrick, author. | Condamin, Laurent, author.
Title: Operational risk modeling in financial services : the exposure, occurrence, impact method / Patrick Naim, Laurent Condamin.
Description: Chichester, West Sussex, United Kingdom : John Wiley & Sons, [2019] | Includes index. |
Identifiers: LCCN 2018058857 (print) | LCCN 2019001678 (ebook) | ISBN 9781119508540 (Adobe PDF) | ISBN 9781119508434 (ePub) | ISBN 9781119508502 (hardcover)
Subjects: LCSH: Financial services industry—Risk management. | Banks and banking—Risk management. | Financial risk management.
Classification: LCC HG173 (ebook) | LCC HG173 .N25 2019 (print) | DDC 332.1068/1—dc23
LC record available at https://lccn.loc.gov/2018058857
Cover Design: Wiley
Cover Images: © Verticalarray/Shutterstock, © vs148/Shutterstock, © monsitj/iStock.com, © vs148/Shutterstock
Figure 2.1 Modelling Approach by Risk Type
Figure 2.2 The Three Actors of the Risk Modelling Process
Figure 3.1 A Risk Matrix
Figure 3.2 Example of a Uniform Correlation Matrix
Figure 4.1 Strategic versus Operational Risks
Figure 4.2 Knightian Uncertainty
Figure 4.3 RIMS Risk Taxonomy
Figure 4.4 AIRMIC, ALARM, and IRM Risk Taxonomy
Figure 5.1 Results from the 2008 LDCE
Figure 5.2 Operational Risk Loss Data, 2011–2016
Figure 5.3 Operational Losses by Year of Public Disclosure
Figure 5.4 Legal Operational Risk Losses, 2002–2016
Figure 5.5 Evolution of Operational Risk Losses, 2002–2016
Figure 5.6 Share of Minimum Required Capital
Figure 5.7 Operational Risk Share of MRC
Figure 6.1 Risk Appetite Matches Risk Distribution
Figure 6.2 Risk Appetite Does Not Match Risk Distribution
Figure 6.3 Efficient Frontier
Figure 6.4 Market Risk Efficient Frontier
Figure 6.5 Credit Risk Efficient Frontier
Figure 6.6 Operational Risk Efficient Frontier
Figure 6.7 Risk Management Causal Graph
Figure 6.8 Risk Management Using Risk Measurement
Figure 7.1 Risk Assessment in the Evaluation Process (ISO)
Figure 8.1 Example of a Simple Business Process View in Retail Banking
Figure 8.2 Example of Business Line Decomposition for Retail Banking
Figure 8.3 Example of Hybrid Decomposition for Asset Management
Figure 8.4 Example of Decomposition for External Fraud Event Category
Figure 8.5 Example of an Asset Management Related Risk in the RCSA
Figure 8.6 Distinction between Risk Identification and Risk Assessment
Figure 8.7 Example of Control Defined for a Cyber Risk
Figure 8.8 Assessment of One Risk in Three Business Units
Figure 8.9 Two Methods to Assess the Inherent and Residual Risks
Figure 9.1 Principle of the Loss Distribution Approach
Figure 9.2 Number of Operational Risk Loss Events for the Banking Industry
Figure 9.3 Distribution of Operational Risk Losses for the Banking Industry
Figure 9.4 Simulation of a Loss Distribution Approach
Figure 9.5 Fitting a Distribution on Truncated Data with No Collection Threshold
Figure 9.6 Fitting a Distribution on Truncated Data Using a Collection Threshold
Figure 9.7 Dependencies between the State of the Economy and Operational Risk
Figure 10.1 Extract of One of the IPCC Scenarios for Gas Emissions
Figure 10.2 Severely Adverse Scenario for 11 of the Domestic Variables
Figure 10.3 Scenario Analysis Process in Operational Risk
Figure 10.4 Scenario Identification
Figure 10.5 Matrix Representation of a Risk Register
Figure 10.6 A Real Risk Register
Figure 10.7 Scenario Identification Using a Severity Threshold
Figure 11.1 The XOI Method and ISO31000
Figure 12.1 A Simple Causal Graph for Risk
Figure 12.2 An Influence Diagram Based on a Simple Risk Model
Figure 12.3 Inference in Bayesian Networks
Figure 12.4 Bayesian Learning in Bayesian Networks
Figure 13.1 A Bayesian Network for Car Accident Risk
Figure 13.2 Car Accident Risk: Introducing a New Dependency to Reduce Risk
Figure 13.3 Marginal Distributions in the Car Accident Risk Bayesian Network
Figure 13.4 Inference in a Bayesian Network (1)
Figure 13.5 Inference in a Bayesian Network (2)
Figure 14.1 Representation of a Scenario as a Bayesian Network
Figure 15.1 Daily Evolution of a Concealed Trading Position
Figure 15.2 Variations of a Concealed Trading Position
Figure 15.3 XOI Model for Rogue Trading Scenario
Figure 15.4 Simulation of the XOI Model for Rogue Trading
Figure 16.1 Evolution of Deposits for JPMorgan Chase and Total FDIC
Figure 16.2 JPMC Share Price in the Period Before and After the Data Compromise
Figure 16.3 JPMC Share Price One Year Before and After the Data Compromise
Figure 16.4 The Cyber Attack Wheel
Figure 16.5 The XOI Graph for the Scenario Cyberattack on Critical Application
Figure 16.6 Simulation of the XOI Model for Cyber Attack
Figure 17.1 Average Conduct Loss as a Function of Bank Revenue (log2)
Figure 17.2 Dispersion of Conduct Loss as a Function of Bank Revenue (log2)
Figure 17.3 The Generic XOI Graph for Conduct Scenarios
Figure 17.4 An XOI Graph for the Mis-Selling Conduct Scenario
Figure 17.5 Simulation of the XOI Model for Mis-selling
Figure 18.1 Factors Used for Scenario Dependency Assessment
Figure 18.2 Scenario Dependencies Paths
Figure 18.3 Serial Paths between Two Scenarios
Figure 18.4 Divergent Paths between Two Scenarios
Figure 19.1 Inferring Regulatory and Economic Capital from a Loss Distribution
Figure 19.2 A Complete Operational Risk Model in MSTAR Tool
Figure 19.3 Building the Potential Loss Distribution Using XOI Models
Figure 19.4 Multiperiod XOI Model for a Cyber Attack Scenario
Figure 19.5 Applying a Macroeconomic Scenario to an XOI Model
Figure 19.6 Selection Method for Stress Testing
Figure 19.7 Enhanced Selection Method for Stress Testing
Figure 19.8 Representation of Controls in a Bow Tie Model
Figure 19.9 Mapping of a Bow-Tie Control Representation to an XOI Model
Figure 19.10 Representation of Barriers in an XOI Model
Table 3.1 Example of Risk Definition
Table 3.2 Basel Loss Event Categories
Table 3.3 Basel Lines of Business
Table 3.4 Scales for Frequency, Severity, and Control Efficiency
Table 3.5 Scales Are Based on Ordinal Numbers
Table 3.6 Relative Severity Scale
Table 3.7 Aggregation of Two Assessments
Table 3.8 Assessment of Loss Data Information Value
Table 3.9 Value of Information for Different Types of Loss Data
Table 3.10 Working Groups for Scenario Assessment
Table 3.11 Loss Equations for Sample Scenarios
Table 3.12 Table of Losses Used for the Correlation Matrix
Table 3.13 Validation of Model Components
Table 4.1 Comparison of Risk Categorizations
Table 4.2 Risk Owners of Risk Categories According to RIMS
Table 4.3 World Economic Forum Taxonomy of Risks
Table 4.4 Basel Event Types and Associated Resources
Table 5.1 Loss Data Collection Exercise, 2008
Table 5.2 ORX Public Losses Statistics (2017)
Table 5.3 Contribution of Operational Risk to Minimum Required Capital
Table 6.1 Micro and Macro Level Risk Assessment in Industry and Finance
Table 7.1 Mapping of Positions and Market Variables in Market Risk
Table 8.1 A Qualitative Scale
Table 8.2 A Semi-quantitative Scale
Table 8.3 Averaging Qualitative Assessments
Table 8.4 A Semi-quantitative Scale for Controls
Table 9.1 Comparison of Binomial and Poisson Distributions
Table 9.2 ORX Reported Number of Events, 2011–2016
Table 9.3 Number of Events for an Average Bank
Table 10.1 The A1 Storyline Defined by the IPCC
Table 10.2 Population and World GDP Evolution
Table 10.3 Methane Emissions
Table 10.4 Key NIC Trends
Table 10.5 Key NIC Drivers
Table 10.6 Storyline for the Severely Adverse Scenario
Table 10.7 Application of the General Definition of a Scenario to Three Examples
Table 10.8 Mapping the Risk Register and the Loss Data Register
Table 10.9 RMBS Cases As of April 2018
Table 10.10 Operating Income of Large US Banks
Table 10.11 Dates Involved in a Multiyear Loss
Table 10.12 Top 10 Operational Risks for 2018, According to risk.net
Table 10.13 Review of Top 10 Operational Risks
Table 10.14 First Step of Scenario Stylisation
Table 10.15 Second Step of Scenario Stylisation
Table 10.16 Stylised Storyline
Table 10.17 Mis-Selling Scenario Summary
Table 10.18 Mis-Selling Scenario Loss Generation Mechanism
Table 10.19 Frequency Assessment
Table 10.20 Assessment of Scenario Percentiles Using a Benchmark Method
Table 10.21 Assessment of Scenario Percentiles Using a Driver Method
Table 10.22 Drivers Assumptions for Different Situations
Table 11.1 Exposure, Occurrence, and Impact for Usual Risk Events
Table 12.1 Probability Table for the Worker Accident Risk
Table 13.1 Variables of the Car Fleet Management Model
Table 13.2 Distribution of Driver and Road Variables
Table 13.3 Distribution of Road Conditional to Driver
Table 13.4 Distribution of Speed Conditional to Road
Table 13.5 Conditional Probability of Accident
Table 13.6 Conditional Cost of Accident
Table 13.7 Table of Road Types Usage
Table 13.8 Table of Distribution of Speed Conditional to Road Type
Table 13.9 Learning from Experts or from Data
Table 14.1 Empirical Assessment of the Probability of Occurrence
Table 14.2 Indicator Characteristics
Table 14.3 Indicator Variability
Table 14.4 Indicator Predictability
Table 14.5 Data Representativeness
Table 14.6 KRI Evaluation Based on Empirical Distribution
Table 15.1 Rogue Trading Cases
Table 15.2 Quantification of Rogue Trading Drivers
Table 15.3 Assessment of Concealed Trading Positions
Table 15.4 Assessment of Time to Detection
Table 16.1 Evolution of JP Morgan Chase Deposits, 2011–2017
Table 16.2 Cyber Attacks: Attackers, Access, and Assets
Table 16.3 Cyber Risk Scenarios
Table 16.4 Quantification of Cyber Attack Drivers
Table 17.1 Mapping of Misconduct Types to EBA Definition
Table 17.2 Quantification of Mis-selling Drivers
Table 18.1 How a Scenario Influences an Environment Factor
Table 18.2 How a Scenario Is Influenced by an Environment Factor
Table 18.3 Scenarios and Factors: Mutual Influences
Table 19.1 Selection of Plausible Scenarios
Table 19.2 Mapping Stress Factors to Scenarios Drivers
Table 19.3 Representation of Controls in XOI Models
Table 20.1 Exposure Units Table for Cyber Attack Scenario
Table 20.2 Exposure Units Table for the Mis-Selling Scenario
Table 20.3 Exposure Units Table for the Rogue Trading Scenario
Table 20.4 Sources for Drivers Quantification
I met Patrick and Laurent at a conference on operational risk in 2014. This meeting was a “Eureka!” moment in my journey on operational risk, which had started a year earlier.
I had been asked to examine operational risk management from a quantitative perspective. Coming from a market risk background, my first impression was that, beyond the definition of operational risk, it was difficult to find a book that described a coherent framework for measuring and managing operational risk. Operational Risk Modeling in Financial Services now fills this gap. In the absence of such a book at the time, I became familiar with the basic elements of operational risk: the risk and control self-assessment process (RCSA), the concept of key risk indicators (KRIs), and the advanced measurement approach (AMA) for capital calculation under Basel II.
In examining the practices of the financial industry, I had the impression that these essential components existed in isolation from each other, without a unifying framework.
The typical RCSA is overwhelming because of the complexity and granularity of the risks it identifies. This makes individual risk assessment largely qualitative and any aggregation of risks problematic.
KRIs were presented as great tools to monitor and control the level of operational risks, but in current practice they appeared to come from heuristics rather than from risk analysis or a risk appetite statement.
Finally, at the extreme end of the quantitative spectrum, all major institutions were relying on risk calculation teams specialising in loss distribution approaches, extreme value theory, or other sophisticated mathematical tools. Financial institutions have fuelled a very sustained activity of researchers extrapolating the 99.9% annual quantile of loss distributions from sparse operational loss data.
As difficult as this capital calculation proved to be, it was generally useless for risk managers and failed to pass the use test, which requires that the risk measurement used for capital also be useful for day-to-day risk management. This failure should not be attributed to the Basel II framework, as the AMA tried to combine qualitative and quantitative methods in an interesting way and introduced the important concept of operational risk scenarios!
In summary, I was confronted with an inconsistent operational risk management framework where the identification, control, and measurement of risks seemed to live on different planets. Each team was aware of the existence of the others, but they did not form a coordinated whole.
This inevitably raised the question of how to bridge the gap between risk management and risk measurement, which was precisely the title of Patrick's speech at the Oprisk Europe 2014 conference! Eureka! Never has a risk conference proven so timely.
The question is fundamental because it creates a bridge between an operational risk appetite statement and KRIs, and establishes a link between major risks, KRIs, and RCSA by leveraging the concept of operational risk scenarios.
The quantification of these risks (the risk measurement) can be compared to the stress testing frameworks used in other risk disciplines such as market risk. It can also be used to build a forward-looking economic capital model.
Once a quantitative risk appetite is formulated, once KRIs are put in place to monitor key risks, and once an economic capital consistent with this risk measure is established, better risk management decisions can be made. Cost-benefit analyses can be conducted to establish new controls to mitigate or prevent risk.
In other words, a useful risk management framework for the business has emerged!
I believe that Operational Risk Modeling in Financial Services is a book that will help at every level, from the seasoned operational risk professional to the new practitioner. To the former, it offers an innovative way to link known concepts into a coherent whole; to the latter, it will serve as a clear and rigorous introduction to the operational risk management discipline.
Olivier Vigneron
Managing Director | Chief Risk Officer, EMEA | JPMorgan Chase & Co.
Thank you for taking the time to read or flip through this book. You probably chose this book because you are working in the area of operational risk, or you will soon be taking a new job in this area. To be perfectly honest, this is not a subject that someone might spontaneously decide to research personally, as can be the case today for climate change, artificial intelligence, or blockchain technologies.
However, we quickly became passionate about this subject when we first started working on it over 10 years ago. The reason for this is certainly that it remains a playground where the need for modelling, that is, a simplified and stylized description of reality, is crucial. Risk modelling presents a particular difficulty because, as the Bank for International Settlements rightly points out in a 2013 discussion paper,1 “Risk is of course unobservable”.
Risks are not observable, and yet everyone can talk about them and offer their own analysis. Risks are not observable, yet their consequences are plainly observable, such as the 2008 financial crisis. It can be said that risks do not exist – only their perceptions and consequences exist.
Risk modelling therefore had to follow one of two paths: modelling perceptions or modelling consequences. In the financial field, quantitative culture has prevailed, and consequence modelling has largely taken precedence over perception modelling. For a banking institution, the consequences of an operational risk are financial losses. The dominant approach has been based on the shortcut that since losses are the manifestation of risks, it is therefore sufficient to model losses.
As soon as we started working on the subject, we considered that this approach was wrong, because losses are the manifestation of past risks, not of the risks we face today. We have therefore worked on the alternative path: understanding the risks, and the mechanisms that can generate adverse events. This approach is difficult because the object of modelling is a set of people, trades, activities, and rules, which must be represented in a simple, useful way in order to consider – but not predict – future events, and at the same time seek ways to mitigate them. This is more difficult than considering that the object of modelling is a loss data file and using mathematical tools to represent it, while, in a totally disconnected way, other people think about the risks and try to control or avoid them. This work on the mechanisms that can lead to major losses bridges the gap between risk quantification and risk management, and is more demanding for both quantification and management, since modellers and business experts must find a common language.
It is only thanks to the many people who have trusted us over these 10 or 15 years that this work has gone beyond the scope of research and has been applied in some of the largest financial institutions in France, the United Kingdom, and the United States. We have worked closely, generally for several years, with the risk teams and business experts of these institutions, and we have accompanied several of them through to the validation of these approaches by the regulatory authorities.
This book is therefore both a look back over these years of practice, to draw a number of the lessons learned, and a presentation of the approach we propose for the analysis and modelling of operational risks in financial institutions. We believe, of course, that this approach can still be greatly improved in its field, and extended to related areas, particularly for enterprise risk management in nonfinancial companies.
This book is not a summary or catalogue of best practices in the area of operational risks, although there are some excellent ones. In any case, we would not be objective on this subject, since even though we have been privileged observers of the practices of the largest institutions and have learned a lot from each of them, we have also tried to transform their practices.
The first part of this book is both a brief presentation of the method we recommend and a summary of the lessons learned during our years of experience on topics familiar to those working in operational risks: RCSA, loss data, quantitative models, scenario workshops, risk correlation analysis, and model validation. In this section, we have adopted a deliberately anecdotal tone to share some of our concrete experiences.
The second part describes the problem, that is, operational risk modelling. We go back to the definition of operational risk and its growing importance for financial institutions. Then we discuss the need to measure it, whether for regulatory requirements such as capital charge calculation and stress tests, or for nonregulatory requirements such as risk appetite and risk management. Finally, we discuss the specific challenges of operational risk measurement.
The third part discusses the three main tools used in operational risk analysis and modelling: RCSA, loss data models, and scenario analyses. We present here the usual methods used by financial institutions, with a critical eye when we think it is necessary. This part of the book is the closest to what could be considered as a best-practice analysis.
Finally, the fourth part presents the XOI method, for Exposure, Occurrence, and Impact. The main argument of our method is that it is possible to define the exposed resource for each operational risk considered. Once the exposed resource is identified – and only under this condition – it becomes possible to describe the mechanism that can generate losses. Once this mechanism is described, it becomes possible to model and quantify it.
The method we present in this book uses Bayesian networks. To put it simply, a Bayesian network is a graph representing causal relationships between variables, with these relationships quantified by probabilities. You go to the doctor in winter with a fever and a strong cough. The doctor knows that these symptoms can be caused by many diseases, but that the season makes some more likely. To eliminate some serious viral infections from the diagnosis, the doctor asks you a few questions about your background, and in particular your recent travels. The following graph can be used to represent the underlying knowledge.
Nodes are the variables of the model, and links are quantified by probabilities. The great advantage of Bayesian networks is that … they are Bayesian; that is, probabilities are interpreted as beliefs, not as objective data. Any probability is the expression of a belief. Even using an observed frequency as a probability is an expression of a belief in the stability of the observed phenomenon.
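To make this concrete, here is a minimal sketch in Python of the doctor example as a Bayesian network. It is not taken from the book, and every probability in it is invented for illustration; the point is only to show how conditional probability tables and evidence combine in an inference.

```python
# A toy Bayesian network for the doctor example above.
# Structure: Season -> Disease <- Travel, Disease -> Fever, Disease -> Cough.
# All probabilities are invented for illustration.

P_TRAVEL = 0.1  # prior belief that the patient recently travelled to a risk area

# P(Disease | Season, Travel): diseases are a common 'flu',
# a serious 'viral' infection, or 'none'.
P_DISEASE = {
    # (winter, travel): {disease: probability}
    (True,  True):  {"flu": 0.30, "viral": 0.15, "none": 0.55},
    (True,  False): {"flu": 0.30, "viral": 0.01, "none": 0.69},
    (False, True):  {"flu": 0.05, "viral": 0.15, "none": 0.80},
    (False, False): {"flu": 0.05, "viral": 0.01, "none": 0.94},
}

# P(symptom present | Disease)
P_FEVER = {"flu": 0.90, "viral": 0.90, "none": 0.05}
P_COUGH = {"flu": 0.80, "viral": 0.70, "none": 0.10}

def posterior(winter=True, travel=None):
    """Belief over diseases given the season and observed fever + cough.
    If 'travel' is None it is unknown and summed out; once the doctor
    asks the question, the answer enters as evidence."""
    joint = {}
    for disease in ("flu", "viral", "none"):
        total = 0.0
        for t in (True, False):
            if travel is not None and t != travel:
                continue  # keep only the observed value of Travel
            p_t = P_TRAVEL if t else 1.0 - P_TRAVEL
            total += (p_t * P_DISEASE[(winter, t)][disease]
                          * P_FEVER[disease] * P_COUGH[disease])
        joint[disease] = total
    z = sum(joint.values())  # normalising constant
    return {d: p / z for d, p in joint.items()}

print(posterior())             # travel unknown: belief in 'viral' stays low
print(posterior(travel=True))  # patient did travel: belief in 'viral' rises
```

The doctor's question about recent travels is exactly an inference step: entering the answer as evidence updates the belief over the diseases, which is the same mechanism that risk models built on Bayesian networks exploit.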
Bayesian networks are considered to have been invented in the 1980s by Judea Pearl of UCLA2 and Steffen Lauritzen3 of the University of Oxford. Judea Pearl, laureate of the Turing Award in 2011, has written extensively on causality. His most recent publication is a nonspecialist book called The Book of Why4. It is a plea for the understanding of phenomena in the era of big data: “Causal questions can never be answered by data alone. They require us to formulate a model of the process that generates the data”.
Pearl suggests that his book can be summarized in a simple sentence: “You are smarter than your data”. We believe this applies to operational risk managers, too.
1. Basel Committee on Banking Supervision, Discussion Paper BCBS258, “The Regulatory Framework: Balancing Risk Sensitivity, Simplicity and Comparability,” July 2013, https://www.bis.org/publ/bcbs258.pdf.
2. Judea Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (San Francisco: Morgan Kaufmann, 1988).
3. R. G. Cowell, P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter, Probabilistic Networks and Expert Systems: Exact Computational Methods for Bayesian Networks (New York: Springer-Verlag, 1999).
4. Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect (New York: Basic Books, 2018).
This first part of the book presents our experiences in operational risk modelling from a subjective point of view over the past 10 years.
We are engineers specializing in artificial intelligence. We have been working together for about 25 years, and early in our careers we spent a lot of time on applications of neural networks, Bayesian networks, and what was called data mining at the time. This was almost a generation before the current popularity of these techniques.
Back in the 1990s, few industries had the means to invest in artificial intelligence and data mining: mostly banking, finance, and the defense sector, as they had identified applications with important stakes. The defense sector usually conducts its own research, and so, quite naturally, we spent a lot of time working in research and development with banks and insurance companies, on applications such as credit rating, financial market forecasting, and portfolio allocation. We were fortunate that the French Central Bank was our first client for this service, and remained one for several years. Thanks to a visionary managing director, the French Central Bank created in the early 1990s an AI team of more than 20 people working on applications ranging from natural language processing to credit scoring.
Our conclusion was mixed. Machine-learning techniques were generally no better than conventional linear techniques. This mediocre performance was not related to the techniques themselves, but to the data. When you try to predict the default of a company from its financial ratios, you will always find several companies with exactly the same profile that do not share the same destiny. This is because the observed data do not include all of the variables that could help predict the future. The talent and pugnacity of the leader, the competitive environment, and so on, are not directly represented in the accounting or financial data. Yet these nonfinancial indicators are the ones that make the difference, all other things being equal. Finally, in rating or classification applications, and whatever the technique used, the rates of false positives or false negatives were usually very close.
This is even more applicable when you are trying to predict the markets. Most of the time we were trying to forecast the return of one particular market at various horizons, using either macroeconomic or technical variables. We would have been largely satisfied with a performance just slightly better than flipping a coin. Again, the performance of nonlinear models was comparable to that of other techniques. In a slightly more subtle way here, the limitation expressed itself through the dilemma between the complexity of a model and the stability of its performance: to get a model with stable performance, the model must be simple. The best compromise is often the linear model.
About 10 years ago, a Head of Operational Risk for a large bank asked us to think about using Bayesian networks to model operational risk, and to seek to evaluate possible extreme events. Not surprisingly, she had been advised to do so by the former managing director of the French Central Bank whom we mentioned previously.
We were immediately intrigued and interested by the subject. We liked the challenge of leaving aside for a while the “big data” analysis to work on models based on “scarce data”! We thought, and continue to think, that the work of a modeller is not to look for mathematical laws to represent data, but to understand the underlying mechanisms and to gain knowledge about them. It is not surprising that one of us wrote his PhD thesis on the translation of a trained neural network into an intelligible set of rules.
Going back to operational risks, or more precisely to one of the requirements of the AMA (advanced measurement approach), the problem was formulated mathematically quite simply, but seemed to require an enormous amount of work.
The mathematical problem was to estimate an amount M such that it would be exceeded with a probability of only 0.1%, regardless of the combination of operational risk events that could be observed in the forthcoming year.
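In symbols (our notation, not the book's): writing L for the aggregate operational loss over the forthcoming year and F_L for its distribution function, the requirement reads:

```latex
\Pr(L > M) = 0.1\%
\quad\Longleftrightarrow\quad
M = F_L^{-1}(0.999)
```

That is, M is the 99.9% quantile, or one-year Value-at-Risk, of the annual aggregate loss distribution.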
In practical terms, this meant answering several questions, each more difficult than the last:
Identification. What are the major events that my institution could be exposed to next year? How to identify them? How to structure them? How to keep only those that are extreme but realistic (that is, how not to quantify a Jurassic Park scenario!).
Evaluation. How to evaluate the probability that one of them will occur? If it occurs, how to evaluate the variability of its consequences?
Interdependencies. All adverse events will not happen at the same time. However, certain events can weaken a business and make other extreme events more likely. For example, a significant natural event can weaken control capabilities and increase the risk of fraud. How to evaluate the correlations between these events?
Once we became acquainted with the problem, we did two things:
As consultants, we studied closely the risk management system of the bank.
As researchers, we studied the state of the art on the question of quantification.
We must admit that while we were impressed by the work done by our client, the same could not be said of the state of the art.
This client, one of the largest French banks, today serves nearly 30 million customers with more than 70,000 employees and covers most banking business lines, even if in that respect it does not compare with the large investment banks in Europe or the United States.
The Head of Operational Risks had put in place a set of risk mappings.
This work was based on a breakdown of the bank's activities. The breakdown did not use processes – as we later found most banks do – but objects. The objects were of different natures: products, people, systems, buildings, and so on. This approach was consistent with the overall risk analysis approach proposed by the ARM method.1 According to this approach, a risk is defined by a combination of Event, Object, and Consequence: a risk is defined by the encounter of an event and a resource likely to be affected by this event. We will come back to this, but this approach, common to ARM and ISO 31000 and shared by most industries and research organizations working on major risks, provides an extremely powerful and fertile structure for modelling.
This mapping was not only a catalogue. For each type of exposed object, the Operational Risk department of this bank had established a working group, consisting of a risk manager and several experts, to identify and assess risks in a simple way. Contrary to what we have since seen in sometimes more prestigious organizations, this work was not merely an expert evaluation obtained during a meeting, but a structured and well-argued document, which could be reviewed and discussed by the internal audit bodies and by the regulator. As a conclusion of each study, each risk identified and considered significant by the working group for the type of object considered was the subject of a quantified evaluation. This assessment took the form of a simple formula that evaluated the cost of risk.
The analysis described a mechanism by which a loss could be observed, and the indicators used made it possible to quantify it. For example, the default or disruption of a supplier could impact the business during the time needed to switch to a backup supplier. The switching time would of course depend on the quality of prior mitigation actions. This helps define the outline of a “Supplier Failure” model: list all the critical suppliers, evaluate for each of them a probability of default, evaluate the impact of the supplier's unavailability on the bank's revenue, and assess the time to return to normal operations. The combination of these different factors, all assessed with a certain degree of uncertainty, made it possible to consider building a model. We have subsequently validated this approach for all types of risks, irrespective of the type of exposed objects: people, buildings, products, stock market orders, applications, databases, suppliers, models, and so on.
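To make the structure concrete, here is a minimal Monte Carlo sketch of such a “Supplier Failure” model. The supplier list and every parameter value are invented for illustration; what matters is the shape: exposure (the critical suppliers), occurrence (a default probability per supplier), and impact (revenue at risk times the switching time).

```python
import random

# A toy version of the "Supplier Failure" mechanism described above.
# Exposure  : the list of critical suppliers.
# Occurrence: each supplier defaults in a given year with some probability.
# Impact    : daily revenue at risk x days needed to switch to a backup.
# All names and numbers are invented for illustration.

SUPPLIERS = [
    # (name, annual default prob., daily revenue at risk, (min, max) switch days)
    ("market data feed",   0.02, 500_000, (1, 10)),
    ("cash transport",     0.05, 100_000, (2, 20)),
    ("core IT outsourcer", 0.01, 800_000, (5, 30)),
]

def simulate_year():
    """Total loss from supplier failures over one simulated year."""
    loss = 0.0
    for _, p_default, daily_impact, (d_min, d_max) in SUPPLIERS:
        if random.random() < p_default:          # occurrence
            days = random.uniform(d_min, d_max)  # time to switch over
            loss += daily_impact * days          # impact
    return loss

losses = sorted(simulate_year() for _ in range(100_000))
print("mean annual loss:", sum(losses) / len(losses))
print("99.9% quantile  :", losses[int(0.999 * len(losses))])
```

Each driver (default probability, revenue at risk, switching time) is a quantity experts can discuss and document, which is precisely what made this bank's risk mappings usable for modelling.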
The other part of our preparatory research concerned the state of the art on modelling. We were surprised to find that the dominant model was called the LDA, for Loss Distribution Approach, and was actually a statistical model of past losses, not a risk model.
The point that surprised us the most was the effort statisticians made to search for laws that would fit the data, without seeking any theoretical justification for choosing the law. We had some theoretical knowledge of the modelling of financial markets, in which the use of a normal law results from the theoretical framework of efficient markets. This framework, proposed by Bachelier, shows that if markets are efficient, then the distribution of returns follows a normal distribution. This theoretical hypothesis is clear and debatable: we can accept or reject the hypothesis of efficient markets, for instance by considering that insider information distorts the markets. This discussion concerns the validity of the model, and is of the same nature as discussions in physics about whether the hypothesis of perfect gases or incompressible fluids is acceptable, and therefore whether the associated equations are applicable.
In operational risk modelling, there was nothing of the kind. The choice of a law did not come from a theoretical discussion, but only from its ability to fit the data, which seemed to us contrary to any modelling logic. Moreover, the data considered in the fit are not all of the same nature.
The principle of the LDA is (1) to assume that the average number of losses observed in one year will also be observed in the following years, although with some variance (this is represented by the use of a frequency law, for example a Poisson law), and (2) to fit a theoretical distribution to the amounts of observed losses.
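For concreteness, the whole mechanism fits in a few lines of simulation. This is a minimal sketch, with invented parameters; a real LDA would calibrate the Poisson intensity on historical loss counts and fit the severity distribution (here an arbitrary lognormal) to observed loss amounts.

```python
import numpy as np

rng = np.random.default_rng(0)

LAM = 50.0             # assumed average number of losses per year (Poisson)
MU, SIGMA = 10.0, 2.0  # assumed lognormal severity parameters

def annual_aggregate_loss():
    n = rng.poisson(LAM)                           # (1) frequency draw
    return rng.lognormal(MU, SIGMA, size=n).sum()  # (2) severity draws

losses = np.array([annual_aggregate_loss() for _ in range(100_000)])
# The capital figure required by the AMA is read off the 99.9% quantile
# of the simulated aggregate annual losses.
print("99.9% quantile:", np.quantile(losses, 0.999))
```

Note that all the variability lies in the number of losses drawn and in their arrangement, which is exactly the limitation discussed next.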
Taken literally, this approach means that the only variability of the losses lies in their number and in their arrangement (an unfavourable year can suffer several significant losses). In other words, randomness would lie only in the realizations, and not in the nature of the risk scenarios. According to this principle, a tsunami would then be only an unexpectedly big wave. Even if fitting a theoretical distribution to the height of waves makes it mathematically possible to calculate the probability of a wave 20 or 30 meters high, it does not account for the difference in nature between the two phenomena: tsunamis are not caused by the same process as waves.
This approach seemed wrong to us for several reasons. Leaving aside the possibility of statistically fitting a law without knowing the theoretical form that this law must take, what is the logic of using past losses to anticipate future losses, even as technologies evolve, risks evolve, and banking activities evolve?
Why use credit card fraud loss history from before the implementation of EMV chips, in a context where EMV chip cards are now widespread? How not to see that the regulatory pressure on risks related to the conduct of banks depends on the political climate? Would the political will to punish the banks for the economic and human disaster of the subprime crisis have been applied with the same rigor if Barack Obama had not been president at the time? What about losses related to sold or obsolete activities? For example, our French client had in its accounts a significant loss related to a model error on market activities, which led the management of the bank to sell these activities: was it then justified to include this loss in the history used to extrapolate future losses?
Of course, we know the argument of the “quants” in banks, which can be summarized in a few words: even if things change, past losses are representative of an institution, its size, and its culture, and therefore they can validly be used, even to predict losses of another nature. In other words, a loss observed on market activities contains information that helps anticipate a possible loss on the use of cryptocurrencies, because the risk profile of a bank has a certain stability, which gives it a certain propensity to take risks, a certain appetite for risk, independent of activities and technologies. This sounds like an attempt to give a soul to a banking institution, one that would remain stable through all the changes. We will not engage on this metaphysical terrain of the soul of organizations, but to consider that this soul would manifest itself through the taking of operational risk seems to us fanciful at best.
1. The method taught by “The Institutes” to obtain the qualification of Associate in Risk Management. See https://www.theinstitutes.org (accessed 5/10/2018).
From these observations and reflections, we have formulated an Operational Risk Modelling doctrine. This doctrine proposes to adopt a statistical method for recurrent risks, and a scenario analysis method for rare risks.
It can be summarized in two sentences.
What has happened quite often will happen again under similar conditions, in the absence of new preventative actions. For what has never happened, or has very rarely occurred, we need to understand how it can happen and unfold, and assess the consequences in the absence of new protective actions.
If we interpret this in the space of risk represented in a usual way on a “Frequency – Severity” map (Figure 2.1), this doctrine is expressed as follows:
FIGURE 2.1 Modelling Approach by Risk Type
Potential losses due to high severity and low frequency risks are addressed through the development of probabilistic scenarios based on the analysis of the loss generation mechanism.
This approach can be extended to frequency risks with a potential for high severity, and for which an in-depth study of the possible evolutions of the risk is necessary (prevention and protection).
Potential losses due to low severity and high or medium frequency risks can be addressed by statistical models. In this context, the use of the LDA is acceptable. In fact, frequent losses can be validly modelled by a statistical law.
We now present this modelling approach in detail, without dwelling on the modelling of frequent risks through the LDA, because this method is usual today and is therefore not specific to our approach. We first present the methodology for the qualification, selection, and quantification of risk scenarios. Then we explain the principle of integration, which produces a valuation of capital for operational risks in each cell of the Basel matrix, from scenario models and historical loss data.
Operational risk modelling should be viewed as a knowledge management process that ensures the continuous transformation of human expertise into a probabilistic model. The model allows us to calculate the distribution of potential losses, identify reduction levers, and perform impact analyses of contextual evolutions and strategic and commercial objectives.
