Manage and mitigate the human side of risk

In Humanizing Rules: Bringing Behavioural Science to Ethics and Compliance, veteran risk adviser and trainer Christian Hunt delivers an incisive and practical discussion of how to mitigate the risk of people doing things they shouldn't or failing to do things they should. In the book, you'll explore effective strategies for achieving compliance that work with - rather than against - the grain of natural human thinking and behaviour. The author challenges existing presumptions about managing risk and shows you practical techniques and examples you can deploy today in your own organisation. You'll also find:

* Strategies for preventing adverse events that go beyond simply assuming that, because someone is employed, they can be told what to do
* Techniques for risk mitigation in environments which are difficult to codify
* Ways to improve positive engagement on the part of the employees critical to risk management

An effective and essential text on managing the human contribution to adverse and negative events, Humanizing Rules is a must-read for compliance professionals, Chief Risk Officers and other risk executives, managers, directors, and other business leaders with an interest in reducing the likelihood and impact of risk.
Page count: 348
Year of publication: 2023
COVER
TITLE PAGE
COPYRIGHT
PREFACE
What Is This Book About?
What Is Human Risk?
Who Is the Book For?
What Does It Cover?
What Doesn't It Cover?
How Is It Structured?
USER MANUAL
How to Use This Book
Terminology
INTRODUCTION
You Had One Job
Hitting the Headlines
The Business of Influencing Human Decision‐Making
The Riskiest Part of Banking Isn't the Banking
Human Risk 1
Human Risk 2
Human Reward
Notes
PART I: INTRODUCING BEHAVIOURAL SCIENCE
CHAPTER 1: A MATTER OF PERSPECTIVE
Gone Fishing
The Employment Contract Fallacy
Attestation
Induction Training
Because We Employ You, We Can Tell You What to Do
Contracts Are for Disputes, Not Relationships
What's In It for Me?
New Perspective, New Job
Notes
CHAPTER 2: RIGHT TOOLS FOR THE JOB
Scissors, But for Pizza
The Right Tools for the Job
CHAPTER 3: THE TRADITIONAL TOOLKIT
What We've Always Done
Back to Childhood
Orders
Rules & Principles
Bribery & Punishment
The Marketing Approach
Blunt Instruments
Escalation Risk
The Behavioural Toolkit
Notes
CHAPTER 4: WHAT IS THE JOB?
There Are Rules, and There Are Rules
COVID Compliance
Training or Box‐Ticking?
Awareness
Understanding
Autonomy
Desired Outcome
Note
CHAPTER 5: INTRODUCING BEHAVIOURAL SCIENCE
Behavioural Science
Nudge
Thinking Fast and Slow
Cognitive Biases and Heuristics
The Human Algorithm
Not at Home to Mr. Logic
Common Cognitive Biases
The Three Drivers
Notes
CHAPTER 6: BEHAVIOURAL DRIVER 1: OUR EXPERIENCE AND KNOWLEDGE
Invisible Ink
The Importance of Experience
Evolution
Confirmation Bias
Agency, or the Coat Problem
Repetition Suppression
The Curse of Knowledge
Sunk Cost Fallacy
Implications for Humanizing Rules
Notes
CHAPTER 7: BEHAVIOURAL DRIVER 2: OTHER PEOPLE
The Shed
The Wisdom of the Crowd
Behavioural Contagion
They Must Know Something I Don't
Social Proof
Blind Carbon Copy
Messengers
Implications for Humanizing Rules
Notes
CHAPTER 8: BEHAVIOURAL DRIVER 3: CONTEXT
Is It a Television, or Is It a Bike?
Context Matters
Passenger Experience
What You See Is All There Is
Framing
Sometimes the Survey Is the Advert
Anchoring
Scarcity
The Video Is the Training
Implications for Humanizing Rules
Notes
PART II: HUMANS
CHAPTER 9: INTRODUCING HUMANS
Why We Need a Framework
What Is a Behavioural Intervention?
Dress Code
HUMANS Overview
Affordances
What the HUMANS Framework Is Not
Notes
CHAPTER 10: H IS FOR HELPFUL
Der Grüne Pfeil
H Is for Helpful
What Is Helpful?
Timing Is Everything
Key Questions
Note
CHAPTER 11: U IS FOR UNDERSTOOD
Save the Surprise
U Is for Understood
The Curse of Dunning and Kruger
Key Questions
CHAPTER 12: M IS FOR MANAGEABLE
See Something, Say Something
M Is for Manageable
When a Fine Is a Fee
Key Questions
Note
CHAPTER 13: A IS FOR ACCEPTABLE
Rebalancing the Marketplace
A Is for Acceptable
Is It Fair?
How Do We Know Something Is Fair?
Key Questions
Notes
CHAPTER 14: N IS FOR NORMAL
You Don't Need a Label
N Is for Normal
Key Questions
Note
CHAPTER 15: S IS FOR SALIENT
I Thought It Would Be Bigger
S Is for Salient
Coffee Shop
Station Announcements
Weather Forecasts
Key Questions
Notes
CHAPTER 16: HOW TO USE HUMANS
Potential Actions
Standing on the Shoulders
Going Dutch
FEAST
Other Frameworks Are Available
Notes
PART III: THE SIX RULES
CHAPTER 17: THE SIX RULES
Introduction
Rule One: Compliance Is an Outcome, Not a Process
Rule Two: 100% Compliance Is Neither Achievable Nor Desirable
Rule Three: When Putting on a Show, Make Sure You Know Your Audiences
Rule Four: Design for the Willing, Not the Wilful
Rule Five: If One Person Breaks a Rule, You've Got a People Problem; If Lots of People Break a Rule, You've Got a Rule Problem
Rule Six: Just Because You Can Doesn't Mean You Should
CHAPTER 18: RULE NUMBER ONE
Beatings Will Continue Until Morale Improves
Outcome Compliance Is Not Compliance
Look Where You Want to Go
The Business of Influencing Human Decision‐Making
The Human Algorithm Is Not Logical
CHAPTER 19: RULE NUMBER TWO
Zero Accidents
Zero Tolerance
People Are People
The World Is Changing
Hiring Humans to Be Human
Recoverable vs Irrecoverable
In Aggregate, Not in Isolation
Back to Zero Accidents
Note
CHAPTER 20: RULE NUMBER THREE
What's the Point of Airport Security?
Welcome to the Theatre
ROK Ready Theatre
Deterrent Theatre
Role Model Theatre
Box‐Ticking Theatre
Backstage Tour
When the Scenery Falls
CHAPTER 21: RULE NUMBER FOUR
Have You Ever…?
Immigration Logic
How “Bad” People React
“Bad” People vs “Good” People
How “Good” People React
Declassify Form I‐94W
Notes
CHAPTER 22: RULE NUMBER FIVE
Ever Stuck
The Bigger Picture
The Power of the Collective
The Wisdom of the Crowd
Ergodicity
Removing “Air Cover”
It's the System, Stupid
CHAPTER 23: RULE NUMBER SIX
Jurassic Park but for Komodo Dragons
Compliance Meets Ethics
COM‐B: How Capability, Opportunity, and Motivation Drive Behaviour
The Streisand Effect
Notes
PART IV: RADAR
CHAPTER 24: INTRODUCING RADAR
Where Do I Begin?
Introducing RADAR
Collective Not Individual
What Is RADAR?
Opposites Attract
How to Use RADAR
What Data Do You Have?
The Armour Goes Where the Bullet Holes Aren't
What We Have (May Be) All There Is
Data Dynamics
Notes
CHAPTER 25: R IS FOR REBELLIOUS
Introduction
Rumsfeldian Analysis
Crime Data
Unknowns
My Word Is My Bond
WYSIATI Again
Practical Application: Radar Inc.
CHAPTER 26: A IS FOR ADAPTIVE
Introduction
Cognitive Dissonance
Rationale
Identifying “Adaptive” Behaviour
Know Your Limits
Near Misses
I've Been Voluntold
Notes
CHAPTER 27: D IS FOR DISSENTING
Introduction
Rationale
Identifying Dissenting Behaviour
Surveys
Ask Disruptively
Feedback Matters
Actions Speak Louder Than Words
Notes
CHAPTER 28: A IS ALSO FOR ANALYTICAL
Introduction
Rationale
Genuine Interest
Fence Testing
FAQ Logic
Expected or Unexpected?
The Hypothetical Actual
Identifying “Analytical” Behaviour
Good News or Bad News?
Teaching Rule‐Breaking
Practical Application: Radar Inc.
Notes
CHAPTER 29: R IS ALSO FOR REMARKABLE
Introduction
Rationale
Two Unspoken Truths
What Is Realistic?
Expectation Management
CHAPTER 30: CONCLUSION
Lessons from a Sexologist
Compliance in the Wild
The Solution
And, Finally
Notes
ACKNOWLEDGEMENTS
ABOUT THE AUTHOR
INDEX
END USER LICENSE AGREEMENT
CHRISTIAN HUNT
Copyright © 2023 by Christian Hunt. All rights reserved.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Christian Hunt to be identified as the author of this work has been asserted in accordance with law.
Registered Office(s)
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
Editorial Office
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging‐in‐Publication Data is Available:
ISBN 9781394177400 (Hardback)
ISBN 9781394187287 (ePDF)
ISBN 9781394187294 (ePub)
Cover Design: Wiley
Cover Image: © Misha Shutkevych/Getty Images
Humanizing Rules is a practical guide for anyone whose job involves managing human risk or who is interested in understanding more about it.
Human risk is “the risk of people doing things they shouldn't, or not doing things they should”. It's an intentionally broad definition that covers the full range of risks posed by human decision‐making. It includes wilful acts such as “I deliberately set out to commit fraud” and human errors such as “I was tired and made a mistake”.
Human risk is the largest risk facing all organisations and society as a whole. When things go wrong, there is always a human component involved. People either cause problems in the first place or make them worse by how they respond.
Humanizing Rules is designed for anyone responsible for influencing human decision‐making to reduce risk or ensure compliance with a set of rules or principles. It can help managers and senior leaders to be more effective in influencing the decisions of their employees and therefore deliver desired business outcomes.
It is also directly relevant to professionals in disciplines such as risk, ethics, legal, and compliance. But it is equally applicable to people in other functions who need to ensure that their fellow employees “comply” with rules, policies, or procedures. This includes but is not limited to functions like audit, legal, human resources, internal comms, procurement, and health & safety. If your role involves mitigating human risk, then Humanizing Rules is for you.
The book is written for a lay audience with little or no previous knowledge of either theoretical or practical behavioural science.
Humanizing Rules suggests an approach to managing human decision‐making that fuses creativity and behavioural science, the study of the drivers of human decision‐making. By bringing behavioural science to “compliance”, we can be more effective, mitigate human risk, reduce employee frustration, and get the best out of people.
Traditional approaches to managing human risk typically rely heavily on two presumptions. The first is that employers have the right to tell their employees what to do and that employees will therefore comply. The second is that compliance can be motivated via incentives and noncompliance can be deterred via punishment. These presumptions are theoretically correct, but often fail in practice. If we really want to influence human behaviour, we need to humanize our rules.
The book is designed to be practical. As we explore human behaviour, I will share stories and case studies from various industries and contexts. The ideas and suggestions I share have all been developed in the field, either when I was working at UBS or in collaboration with my Human Risk clients. All of them are presented not so that you can slavishly copy them – though if you think they might work in your environment, do feel free – but rather to inspire you to come up with your own ideas.
The book is not a step‐by‐step guide to managing human risk. Nor is it a technical guide to behavioural science or risk management. It is designed to inspire you to think differently about how to approach the challenge and to identify new solutions to old problems.
This book is split into four Parts. Part I begins with a story of a minor error with major consequences that, for reasons I explain, helped to inspire the book. I then explore why we need to humanize rules and how behavioural science can help us to do that. I also explore some basic behavioural science principles and highlight why these are relevant.
Part II introduces HUMANS, a simple, practical framework that will help you to deploy behavioural science techniques in your organisation. If you're introducing something new or refreshing your programme, it'll help you think about how your employees will likely react. If something isn't working as expected, HUMANS can help you diagnose why and improve things. Finally, HUMANS can also help you to predict where other parts of your compliance framework might be under stress and are worthy of your attention.
In Part III, I outline six rules. These aren't rules in a traditional sense of “things you need to follow” but principles that help to highlight common misperceptions about compliance and why they can reduce effectiveness and efficiency in our programmes. The six rules build on HUMANS and are there to help you challenge orthodox thinking and provide some guiding principles to help you think differently about solving compliance challenges.
Part IV will help you to understand where to deploy your newfound knowledge. Using the RADAR framework, we can use the behaviour of our employees to help us identify the biggest disconnects between what we would like them to do and what they are likely to do. These will give you pointers on where to focus your behavioural efforts, so you can target low‐hanging fruit, get results quicker, and keep stakeholders on board as you try new things. Finally, I share some thoughts about how you can humanize rules in your organisation. I'll introduce the idea of borrowing from “compliance in the wild” – examples from outside your organisation that you might not immediately think of as relevant – and tell you how to implement the lessons from them.
Before we begin, here's a brief user manual for how to get the most out of the book.
Since the book is called Humanizing Rules, you'll be relieved to hear it isn't a set of rules to be slavishly followed. It contains six specific ones, but don't worry; they're not traditional rules. Like the rest of this book, think of them as you would a travel guide or cookbook. I want to inspire you rather than provide you with an instruction manual to be followed to the letter. Humanizing Rules is written with a general audience in mind which should mean that the majority of dynamics I highlight and ideas I suggest are relevant to your situation. But, occasionally, they might not be. If that happens, feel free to adapt the rules to meet your needs, just as you would a travel itinerary or recipe.
Humanizing Rules isn't an academic book that presents the findings of rigorous academic research that has been tested and peer‐reviewed. None of the ideas suggested in the book has been tested in laboratory conditions. Most have, however, been tested in the real world, while others are still works in progress. In sharing these, I want to suggest new ideas you might not have previously considered. My hope in doing so is to inspire you to think differently about how you go about things.
Some ideas I propose in the book are counter‐intuitive and challenge traditional orthodoxy. There would be little point in writing it if I wasn't going to do that! Sometimes, I will point you towards things that might be challenging to implement, particularly if you are operating in a harsh regulatory environment. My advice is to do what you can. But don't let that be an excuse for not trying new things. Small changes in the right direction are better than no changes. Most importantly, do have fun!
As we try to humanize rules, there is a wide range of applications for the ideas I explore. For example, a technique that helps a team leader make it more likely their team will meet a requirement to fill in timesheets can also be used elsewhere: it can help a Human Resources department encourage managers to attend a training course, or a Compliance function get employees to complete a regulatory return.
I have used standard terms throughout the book to avoid confusion and repetition and deliver consistency and simplicity. To avoid misunderstandings, here are the terms I frequently use and an explanation of what they mean:
Employees are the people in your organisation, the primary “target audience” we seek to influence. However, we can also use the same techniques and ideas to persuade other target audiences in entirely different contexts. For example, your customers, suppliers, regulators, investors, or people applying for jobs in your organisation.
Rules are what you want or need your employees to do or not do. What we might call your “desired outcome”. The most obvious example of a desired outcome is compliance with the rules you have within your organisation. To avoid being repetitious – and because sometimes I need a different word – I occasionally use “Requirement” instead of “Rule”, for example, when I'm referring to the desired outcome of having your employees comply with externally imposed laws or regulations.
Rules or Regulations might also mean following instructions, policies, codes, orders, mandates, or any other instrument or tool used to influence the behaviour of employees. That includes responding in the desired manner to email messages and poster campaigns.
Compliance means the act of complying with something. In other words, doing “what we want them to do” or “what we need them to do”. It also captures concepts such as adherence. It is not the same as “big C” Compliance, which I use to refer to the function within an organisation responsible for ensuring compliance with regulations; though, if compliance comes at the start of a sentence, then I will capitalise it! Finally, for the avoidance of doubt, the term noncompliance means the opposite of compliance.
Framework means the architecture you're using to influence your employees. The most obvious example is a compliance framework, but it can also mean any rules, programmes or systems you're using to deliver your desired outcomes, such as a communications campaign.
Asking your employees to do something means the act of communicating your desired outcome to them. You may prefer to think of it as “telling them what to do”, “reminding them of the rules”, “giving orders”, or “issuing warnings”. The verb “ask” isn't intended to suggest that you have no authority over your employees. I use it not only because it is polite but also as a reminder that, often, we need to work with, rather than against, our employees to achieve our desired outcome.
Sometimes in life, we say things we later come to regret. Very occasionally, those become “famous last words”, something we are likely to regret, if not for the rest of our lives, for a very long time. If we're unlucky, it'll play out in public.
In Brian Cullinan's case, the famous last words were probably something he said in an interview1 in January 2017:
It doesn't sound very complicated, but you have to make sure you're giving the presenter the right envelope.
Cullinan was talking about his forthcoming role in the 89th Academy of Motion Picture Arts & Sciences Annual Awards. You and I know it as “the Oscars”. At the time, Cullinan was a partner at the professional services firm PwC.
The purpose of the – now deleted but very much alive in archive form – interview was to showcase Cullinan and his fellow partner Martha Ruiz's role in supporting the awards.
Unsurprisingly, given the firm they worked for, Cullinan and Ruiz were involved in a simple but crucial logistical exercise. Their job was to count the votes and keep the names of the winners secret until the presenters revealed them during the Oscars ceremony. They would be responsible for handing over red envelopes containing the winner's name to the presenters just before they went on stage. As Cullinan's comment made clear, he had one job to do. In the same interview, he explained that nothing had ever gone wrong.
Until on 26 February 2017, it did. With most of the ceremony completed without incident, only one award remained. The biggest award of the night; the one for Best Picture. You wouldn't want anything to ever go wrong during the ceremony, but if it were going to happen, this would be the worst possible moment. As you'll already know or have worked out by now, this was the precise moment when Cullinan handed out the wrong envelope, and chaos ensued. If you've never seen it, or haven't watched it for some time, do take a moment to search out a clip on YouTube.
After the ceremony, PwC issued an apologetic statement explaining what had happened:
PwC Partner Brian Cullinan mistakenly handed the back‐up envelope for Actress in a Leading Role instead of the envelope for Best Picture to presenters Warren Beatty and Faye Dunaway. Once the error occurred, protocols for correcting it were not followed through quickly enough by Mr Cullinan or his partner.
Thanks to the (in)actions of Cullinan and Ruiz, the winners' names were relegated to being the second biggest story of the night. The Oscars had hit the headlines for all the wrong reasons.
As events unfolded in Los Angeles on Sunday night, I was getting ready to start my working week in London. As I waited for my coffee machine to warm up, I mindlessly scrolled through social media. It didn't take long for my feed to fill with video clips from the ceremony. Transfixed, I watched clip after clip, trying to make sense of it. Unbeknownst to the algorithms feeding my insatiable appetite, I wasn't just enjoying the drama. I had a light‐bulb moment that would change the course of my career.
At the time, I was a Managing Director at the Swiss Bank UBS, responsible for Compliance and Operational Risk for the firm's asset management division and the EMEA2 region of the firm as a whole. Compliance, as the name implies, involves ensuring the firm is compliant with all applicable rules and regulations. Operational or “op” risk is about ensuring the firm minimises and mitigates the impact of “non‐financial” threats such as fraud, cyber, or reputational risk. While Financial Services firms make money from taking calculated financial risks, compliance and operational risks are the kinds you want to avoid at all costs.
Having been in post for a few years, I realised that neither my job title nor my responsibilities reflected the substance of my job. Something was missing. But I couldn't work out what. The envelope incident at the Oscars – simultaneously, a compliance breach in not following protocol and an op risk incident – gave me the answer.
Thanks to Brian Cullinan, it suddenly dawned on me that the businesses of compliance and op risk were influencing human decision‐making. It wasn't just part of my job. It was my job! The only way the firm would be compliant was if we could successfully influence the decision‐making of the people within it. After all, you couldn't tell a company to be compliant and expect it to respond! Similar dynamics applied to op risk. Whenever things go wrong – in organisations or society – there is always a human component involved. People can create problems, for example, by giving out the wrong envelope at an awards ceremony. They can also make them worse by how they respond. Or, in the case of an awards ceremony where the wrong envelope has been handed out, don't respond.
While I didn't expect ever to have to deal with the risks associated with the delivery of award envelopes at a globally televised awards ceremony – though, as ever with risk, never say never – the case of the wrong envelope taught me a valuable lesson. Properly understanding human decision‐making was a vital skill I would need to master. People, it seemed to me, were the most significant driver of risk facing the organisation, and it was my job to help mitigate that.
I wasn't the only person thinking in those terms. Later that year, in an interview with Bloomberg Markets magazine, my ultimate boss, the then CEO of UBS, Sergio Ermotti, said:
The riskiest part of our business nowadays is operational risks. We can have hours of discussion on credit or market risks. But the one thing that really hurt in the last ten years of our industry is op risks, not credit or market risks. If you do something wrong as a bank, or you have people doing bad things within the bank, it costs you much more than any credit risk or market position.3
Which, when you think about it, is quite a statement! He was essentially saying that the riskiest thing about banking isn't the banking. The riskiest thing, Ermotti is saying, is poor decision‐making by the bank or its employees. Notice how he distinguishes between doing “something wrong as a bank” and “people doing bad things within the bank”. You can see something similar in PwC's statement about the Oscars incident. By emphasising a failure to follow protocol, they're seeking to reinforce the fact the organisation didn't sanction the “bad” behaviour.
To understand why Ermotti might choose to differentiate between the firm's and its employees' decisions, we need to go back to the circumstances of his appointment. In 2011, he'd become CEO of UBS following the resignation of his predecessor Oswald Grübel after the discovery of a rogue trader in the firm's investment bank. Between 2008 and 2011, a trader called Kweku Adoboli engaged in unauthorised trading activities that ultimately resulted in $2.3 billion in losses. At one point, Adoboli's trading positions exposed the firm to potential losses of an eye‐watering $11.8 billion.
It's a story I know well because it unfolded on my watch. On September 14, 2011, Adoboli, already subject to a UBS internal investigation into his trading activities, left the office and sent an email admitting what he'd done.
Just ten days earlier, I'd started a new job at the Financial Services Authority, the industry's then regulator, as the head of department responsible for supervising the UK operations of non‐UK banks. Since Adoboli was based in London, it fell to my team to lead the regulatory response. As we investigated what had happened, our work focused on understanding how Adoboli had managed to do what he did and ensuring there couldn't be a repeat. But on a personal level, we couldn't help wondering what had driven him and how he'd justified his actions to himself.
Many of the answers came during Adoboli's trial. I was particularly struck by the words of the judge who, in sentencing him to seven years in prison, told him he was
profoundly unselfconscious of your own failings. There is the strong streak of the gambler in you, borne out by your personal trading. You were arrogant enough to think that the bank's rules for traders did not apply to you. And you denied that you were a rogue trader, claiming that at all times you were acting in the bank's interests, while conveniently ignoring that the real characteristic of the rogue trader is that he ignores the rules designed to manage risk.4
I don't think Adoboli ever deliberately set out to break the rules. But somehow he did, on an astonishing scale.
Equally, Brian Cullinan clearly didn't set out to disrupt the Oscars. Most of us don't come to work to break the rules, make mistakes, or cause problems. Yet we all have the potential, and a few of us do.
On the face of it, Cullinan and Adoboli have absolutely nothing in common. One was a senior partner at a professional services firm who made one simple mistake on a public stage. The other was a mid‐level investment banking trader who repeatedly and wilfully breached limits and incurred losses that could have brought down the bank for whom he worked.
Yet both Cullinan and Adoboli are high‐profile examples of the risks posed to organisations by human decision‐making in the twenty‐first century. Historically, you would have needed to be at Cullinan's level of seniority to generate the reputational damage he did. This is because the risk profile of employees would generally have been closely correlated with their position. Senior people spoke to the media, signed letters on behalf of the organisation, approved financial expenditure and took strategic decisions. There were relatively few chances – though clearly, it was not impossible – for anyone below boardroom level to be in a position to cause material reputational or financial risk to their employer. As rogue traders before Adoboli had demonstrated, banking was one of the few industries where that was occasionally possible.
In many respects, the fact that it was generally only senior people who had access to tools that could cause real damage was a form of risk management. The idea was that senior people would have years of experience that would make them far less likely to make mistakes than a junior employee. It didn't always work, of course.
Readers of a certain age will remember the case of Gerald Ratner, the CEO of a chain of UK jewellers named after his family, who with one speech almost destroyed his until‐then very successful business. In 1991 – well before the advent of social media and smartphones – Ratner gave a speech to what he thought was a private audience, describing the products his business sold as “total crap”. Someone recorded it, and thanks to the mainstream media the story went “viral”. It wiped £500 million – approximately £1.1 billion today, adjusting for inflation – off the value of his business and saw Ratner removed from post.
Since then, technology has begun to democratise that risk. Adoboli wasn't the first rogue trader, nor were the losses his activities incurred the largest ever,5 but it was technology that enabled him to do what he did. His story is an extreme illustration of the fact that almost every employee within an organisation has the potential to pose a material risk to it.
From a reputational risk perspective, employees at all levels of organisations are now handed weapons of mass destruction in the form of laptops and email addresses, to say nothing of the access they are given to systems and data. Thanks to the internet and, in particular, social media, even the most junior employee in an organisation has the potential to cause their employer embarrassment. If someone makes a racist comment on Twitter that goes viral, there's a high likelihood people will link it back to their employer and expect the organisation to respond. Equally, a member of airline check‐in staff who is rude to a customer risks video footage of it appearing online.
While technology is the enabler of this dynamic, human decision‐making ultimately drives it.
The most significant risk facing organisations and society in the twenty‐first century is human, particularly in the Knowledge Economy. If that sounds like a bold statement, consider this: whenever something goes wrong, there's a human either causing the problem or making things worse by how they respond.
I call this “human risk”, which I define as
the risk of people doing things they shouldn't, or not doing things they should.
It's an intentionally broad definition that encompasses everything from “I deliberately set out to commit fraud” to “I was a bit tired, and I made a mistake”. It also captures a crucial dynamic of human behaviour: the significance of inaction.
We often think of bad outcomes as caused by actions; for example, people “taking a risk”, “making mistakes” or “committing crimes”. But inaction can be equally, if not more, dangerous if the thing they're not doing is critical. Forgetting to lock a door, failing to stop at a red light or ignoring evidence of wrongdoing are all things that can lead to bad outcomes.
The definition also includes a word I am often asked about: “things”. It is, people tell me, a surprisingly “loose” word to include in a definition. That's deliberate. When people used to ask me what “operational risk” was about, I'd tell them it involved “trying to stop bad things from happening”. That usually elicited a follow‐up statement, posing as a question: “Why are you only trying to stop them? Shouldn't you actually be stopping them?” My response was that if I knew what they were, of course I'd stop them. Obviously, I didn't. That's why the word “things” is there.
One of the challenges of risk is that we can't always predict exactly what might go wrong. But we know that whatever it is, it will involve humans. We also know that whenever something goes wrong in organisations, someone somewhere knew something that could have helped to prevent it.
In this book, I'm going to help you think about how to humanize rules to mitigate human risk. I'm also going to help you promote human reward.
If human beings pose so much risk, then why do we employ them? The answer is pretty obvious: we need to! Ask any CEO of a successful company what their biggest asset is, and chances are they'll tell you it's their people. The reason organisations spend vast amounts on paying people is that they are seen as a competitive advantage.
That has always been true, though the role they fulfil has changed and is changing. Historically, people were providers of manual labour. Nowadays, the ease with which companies in all sectors can deploy machines that are better and more cost‐effective than humans at both physical and computational tasks means the role people play within organisations is rapidly shifting. We're hiring people to do things the machines can't. At least, not yet. Tasks that involve skills like emotional intelligence, creativity, and judgement. This is when we are at our best. But it is also when we are at our riskiest.
The challenge facing organisations is, therefore, a balancing act between mitigating human risk and capturing what I call “human reward” – getting the best out of people. Note “best”, not “most” – this isn't about exploiting people. Human risk and human reward are interrelated. If you over‐emphasise human risk, you'll miss opportunities for human reward. If you over‐emphasise human reward – by, for example, saying to your employees, “do whatever it takes to hit your sales target” – you'll run unnecessary human risk. That's the balancing act facing companies in the twenty‐first century.
In Humanizing Rules, I will explore how we can manage the tension between those two dynamics. If we hire people because they're smart, then it's probably not a good idea to treat them in a manner that suggests we think the opposite. But equally, as we've seen from Cullinan and Adoboli, intelligent people don't always make wise decisions.
This book isn't about preventing outliers like the next Kweku Adoboli. For that, we'd need to review incentive frameworks, monitoring and surveillance programmes, and disciplinary processes. Instead, it looks at the day‐to‐day mechanisms we deploy to influence every employee: the controls, the processes, the frameworks, the communications campaigns, the training, and, above all, the rules. By humanizing those, we can get the best out of our people while mitigating risk.
We'll begin our journey by looking at perspective. If we want to influence our employees effectively, we need to think less about things from our perspective and more from theirs.
1. The original article has now been deleted, but you can read it via the Wayback Machine here: https://web.archive.org/web/20170213234409/https://medium.com/art-science/what-it-feels-like-to-count-oscar-votes-f89a38efdf1c
2. EMEA is Europe, the Middle East, and Africa. At UBS, this does not include the home market of Switzerland, which is classified as a separate region.
3. https://humanizingrules.link/adoboli
4. https://www.judiciary.uk/wp-content/uploads/JCO/Documents/Judgments/kweku-adoboli-sentencing-remarks-20112012.pdf
5. At the time of writing this book, that “title” is held by Jérôme Kerviel, whose $6.9 billion losses at French bank Société Générale between 2006 and 2008 dwarfed Adoboli's more modest $2.3 billion.
In a short story called “An anecdote on the lowering of productivity at work”,1