A clearer, more accurate performance management strategy

Over the past two decades, performance measurement has profoundly changed societies, organizations and the way we live and work. We can now access incredible quantities of data, display, review and report complex information in real time, and monitor employees and processes in detail. But have all these investments in collecting, analyzing and reporting data helped companies, governments and people perform better?

Measurement Madness is an engaging read, full of anecdotes so peculiar you'll hardly believe them. Each one highlights a performance measurement initiative that went wrong, explains why and, most importantly, shows you how to avoid making the same mistake yourself. The dangers of poorly designed performance measurement are numerous, and even the best how-to guides don't explain how to avoid them. Measurement Madness fills in the gap, showing how to ensure you're measuring the right things, rewarding the behaviours that deserve rewarding, and interpreting results in a way that will improve things rather than complicate them.

This book will help you to recognize, correct and even avoid common performance measurement problems, including:

* Measuring for the sake of measuring
* Assuming that measurement is an instant fix for performance issues
* Comparing sets of data that have nothing in common and hoping to learn something
* Using targets and rewards to promote certain behaviours, and achieving exactly the opposite ones

Reading Measurement Madness will enable you to design a simple, effective performance measurement system, which will have the intended result of creating value in your organization.
Title Page
Copyright
From the Authors
Part I: Introduction
Chapter 1: The Road to Insanity
Chapter 2: Performance and Measurement
What is performance measurement?
What is performance?
What is measurement?
Getting the number or changing the behaviour?
Part II: Performance Measurement
Chapter 3: Measurement for Measurement's Sake
Making things measurable
Measures and more measures
Excessive reliance on measures
Learning points
And finally…
Chapter 4: All I Need is the Right Measure!
How difficult can this be?
How strong are your indicators?
Learning points
And finally…
Chapter 5: Comparing Performance
Apples and pears
Timeliness
Special variation
Choice and relevance
Using data unintended for comparative purposes
Yes, but…
Moving up the rankings
Unintended consequences
Learning points
And finally…
Part III: Performance Management
Chapter 6: Target Turmoil
What are performance targets?
When targets go bad
Are targets so bad?
The main pitfalls
When targets do good
Clarity and commitment
Unexpected benefits
Learning points
And finally…
Chapter 7: Gaming and Cheating
Gaming: what is it?
Gaming and cheating
What drives gaming and cheating?
Types of gaming
Learning points
And finally…
Chapter 8: Hoping for A Whilst Rewarding B
Common management reward follies
Learning points
And finally…
Chapter 9: Failing Rewards and Rewarding Failure
Top rewards for top performers
Rewarding failure
Failing rewards
Measurement, rewards and motivation
When financial rewards backfire
What motivates us?
Learning points
And finally…
Part IV: Conclusions
Chapter 10: Will Measurement Madness Ever Be Cured?
And finally…
References
Index
End User License Agreement
Figure 3.1
Figure 6.1
Figure 7.1
Table 4.1
Table 4.2
Table 5.1
Table 6.1
Table 6.2
Table 6.3
Table 7.1
Table 9.1
Dina Gray
Pietro Micheli
Andrey Pavlov
This edition first published 2015
© 2015 by Dina Gray, Pietro Micheli and Andrey Pavlov
Registered office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Measurement madness : recognizing and avoiding the pitfalls of performance measurement / Dina Gray, Pietro Micheli, Andrey Pavlov
pages cm
Includes bibliographical references and index.
ISBN 978-1-119-97070-5 (hardback) — ISBN 978-1-118-46451-9 (ebk) — ISBN 978-1-119-96051-5 (ebk)
1. Performance—Management. 2. Performance—Measurement. I. Gray, Dina. II. Micheli, Pietro. III. Pavlov, Andrey.
HF5549.5.P35M427 2014
658.4'013—dc23 2014020919
A catalogue record for this book is available from the British Library.
ISBN 978-1-119-97070-5 (hardback)
ISBN 978-1-118-46451-9 (ebk)
ISBN 978-1-119-96051-5 (ebk)
Cover design: Wiley
Over the past 20 years, the world has witnessed a booming interest in performance measurement across all walks of life. Today, the vast majority of organizations employ at least some aspects of a performance measurement system, be it the use of key performance indicators to track progress, the setting of targets to motivate and direct attention, or the use of measurable objectives for appraising and rewarding individual behaviour. In short, performance measurement has profoundly changed societies, organizations and the way we live and work. We can now access incredible quantities of data, display, review and report complex information in real time, and monitor employees and processes in detail. But have all these investments in collecting, analyzing and reporting data helped governments, organizations and people perform better?
Measurement is often associated with the objectivity and neatness of numbers, and performance measurement efforts are typically accompanied by hope, great expectations and promises of change; however, these are often followed by disbelief, frustration and what appears to be sheer madness. Between the three of us, we have spent over four decades working, consulting, researching and teaching across the wide variety of topics associated with measuring and managing performance, and we believe that performance measurement is first and foremost about behaviours. Our involvement with a large variety of organizations has taught us that performance measurement can be rewarding and frustrating, powerful and amusing, simple at times, but usually extremely difficult.
This book is not another manual on how to design a performance measurement system, or a set of steps on how to introduce the “right” set of performance indicators and the “clearest” dashboard for your organization; the business section of any bookshop is full of such manuals. Instead, this book looks at the consequences and behaviours that plague organizations after they introduce a performance measurement system, and we investigate the reasons why such madness occurs and how these pitfalls could be avoided.
Although performance measurement seemed to promise rational and positive actions, we are today surrounded by dysfunctional and perverse behaviours. Whilst the use of measures seemed to imply that the truth might be found by letting the numbers “speak for themselves”, we are now all drowning in a sea of subjective interpretations. Although the introduction of structured reporting systems hoped to encourage openness and transparency in the attainment of social and environmental outcomes, we are currently snowed under by reports compiled mainly for political purposes. And, even though many performance measurement initiatives promised to help managers engage and motivate their people through the use of targets and incentives, our organizations are now rife with cynicism and a lack of commitment.
Of course, the consequences of performance measurement are not all negative, and many organizations have reaped the benefits of introducing and using well-conceived and well-implemented performance measurement systems. However, performance measurement is one of those topics in which the devil is in the detail. It is one thing to design a logical and well-structured system; it is quite another to make it a part of people's everyday lives and ensure that it has a positive impact on performance.
Throughout the chapters of this book we will share with you the various stories and anecdotes that we have accumulated over the years to illustrate the madness that can ensue from the supposedly simple task of measuring performance. Our work and our teaching have taken us across many cultures on all five continents, and we hope that our stories and anecdotes will resonate with you wherever you may be reading this text. In reviewing these stories, what is startling is not just the variety of dysfunctional consequences that performance measurement can generate, but also the scale of the madness – something many of us could never have imagined. However, we do not just describe measurement pitfalls; we also provide practical guidance on how to avoid them. This book is meant to be a light-hearted take on how often and how quickly performance measurement can become absurdly dysfunctional, but it would be remiss of us not to show you how to navigate around the most common pitfalls. Therefore, this book aims to give you a fun and interesting read, whilst helping you make the task of measuring performance saner, simpler and easier.
Dr Dina Gray
Dr Pietro Micheli
Dr Andrey Pavlov
Performance measurement is everywhere: when we look at companies' financial statements, read reports on average waiting times in hospitals, carry out performance appraisals at work, or look at schools' records when deciding where to educate our children. The practice of collecting, analyzing and reporting information on the performance of individuals, groups, organizations, sectors and nations has been around for a long time – but what difference is it making?
You may be intrigued by measurement, enthusiastic about it, or frustrated with the behaviours it engenders and the results it leads to. Perhaps you are a proponent of the old adage that “what gets measured gets done”, or you may take the alternative view that measurement only leads to dysfunction. Maybe you have used a performance report to make an important decision only to be let down by the result, or perhaps you have been pleasantly surprised when customer ratings of restaurants have helped you have an extraordinary dining experience whilst on holiday. Whatever your starting position, this book will not only present real-life stories of madness brought about by measurement, but will also discuss ways to navigate around the pitfalls so that you may, once again, use measurement as a means to improve your organization's performance.
In order to set the scene and outline many of the issues covered in this book, we would like to describe a fictional scenario, which is based on the many real events we have observed in practice, about a manager tasked with implementing a strategic performance measurement system and not fully understanding the consequences of such a programme. Let us introduce you to Mike, his hopes, ambitions and challenges.
My name is Mike, and I am a senior manager in a relatively large organization. Today is an important day for me as we are having our annual managers' meeting and I am leading it. The theme for this year's conference is “Performance Excellence”. All of the directors and the top managers are here, and we are going to have a series of short presentations on the work undertaken over the past year.
But firstly, let me tell you how all of this has come about. Just over 18 months ago the Board recognized that, as competition was becoming more intense and regulations were becoming ever tighter, we had to improve our performance in all areas of the business. The company therefore commissioned an external firm to carry out a review and they concluded that we were lacking a “comprehensive approach to measuring and managing performance”. Essentially, we did not really understand how efficient and productive we were; different units seemed to be run independently from each other; and employees were not clear about the organization's goals.
Shortly after the report was released I was tasked to lead the “Performance Excellence” project, with the aim of introducing a performance measurement and management system throughout the entire organization. It was hoped that the project would completely change the way in which we set and communicate our objectives; how we measure and report performance; and how we appraise and reward people. Today, after a year's work, it is time to check what progress we have made, both in terms of our achievements and in relation to the implementation of the system itself.
At this point, before the conference kicks off, I am feeling a little restless, but quietly confident. The lead-up to today has been somewhat stressful, and, although I've spoken to most of the people attending, I am not entirely sure what each speaker will say. The organization has always promoted a culture of openness and constructive criticism and I am looking forward to hearing my colleagues' presentations.
I kick off with a short ice-breaker to set the scene, explain what has been done over the past 12 months, and outline future steps. I see a lot of people nodding, which is encouraging as this project has been a priority within the company, and everyone is aware of what is going on. Then I hand over to our CEO. She seems positive about our results, but states that we have to do more as other companies are catching us up and we can't afford to be complacent. Referring to the Performance Excellence project, she says that we have made progress but that she is aware of some “question marks” that we should openly address today. I wonder what those “question marks” could be …?
The CEO concludes and it is now the turn of the Chief Financial Officer. Our financial results appear to be in line with forecast and it seems that we have even had a few unexpected successes. Referring to the Performance Excellence project, he reports that most people tend to regard indicators as relevant or highly relevant, which is music to my ears, but, somewhat unexpectedly, although the number of indicators has increased, the extent to which information is used to make decisions appears unchanged. He continues by saying that despite our immense efforts to provide a balanced view of the company through the introduction of more customer- and process-related indicators, financial indicators are still considered to be the most important ones. This is rather disappointing, even though he concludes by adding that it is just a matter of time before we see more profound changes.
I have to say I feel a bit of relief on hearing his conclusions, but my relief is short-lived when, from the floor, one of our regional directors stands up and addresses the executives: “When the Performance Excellence project began we were promised that little effort would be required on our side. Instead, my people have ended up spending a lot of time collecting all sorts of data, yet nobody in headquarters seems to give two hoots about it. I presented our data at the meeting in June, which resulted in us spending half an hour arguing over two irrelevant figures, then about the reliability of the data themselves, and we finally took a decision that bore no relation to the data. What was the point of spending so much time collecting and analyzing them?” Before the CFO can utter a response I intervene, pointing out that this should not be happening and that things are changing. From the look on his face I don't seem to have persuaded the regional director, but at least we can now move on to the next presenter.
The Supply Chain Director is up next, and he is renowned in the organization for his obsession with maximizing efficiency. Trained in Six Sigma and brought up on lean thinking, he has been one of the strongest supporters of the Performance Excellence project. His presentation focuses on operational improvements made in warehousing. After a short introduction he goes through a sequence of histograms and graphs reporting the comparative performance of our warehouses. One after the other, in his monotone voice, he presents figures on the number of cases picked per labour hour, inventory accuracy, warehouse order cycle time and finally concludes with a ranking of all of the warehouses. The league table suddenly wakes everyone up. I was not aware of this ranking, but what harm can it do? If anything, I think to myself, it should spur on a bit of competition among the regional and site directors. However, as he continues on down the list, an ever-increasing hum emanates from the audience. Some people are muttering that it is not clear how the comparisons were made and others question how he calculated the final scores. For me, this is a typical reaction of people who don't want to hear that they are doing worse than others. However, as I consider what he is saying, something starts to bug me: actually not all of the warehouses can be considered to be the same, because we are not using them in the same way. Some of them are inefficient, but that is because they have to handle peak demands; they have to work below capacity for a reason. I make a note that I will have to speak to the Supply Chain Director.
After a short break the conference resumes and the R&D director makes his way to the podium. I should point out that before the Performance Excellence project the organization had unsuccessfully tried to get a grip on this unit, but all attempts to monitor their performance had failed miserably. We had developed a number of measures, such as the number of patents filed, but we had never felt that this was a good indication of the unit's output. What about the R&D workers' productivity? What about forcing them to meet stricter deadlines? Or calculating the unit's financial contribution to the company? I am in no doubt that if we put more effort into more sophisticated measures we will certainly have a more accurate picture and should see an improvement in their performance.
As a bit of background, the R&D director was appointed nine months ago; he was previously in sales, where he achieved great results by introducing individual performance targets and regular, in-depth performance reviews. Last week when I spoke to him he told me that a few of his R&D people were upset with senior management, although he did not elaborate on why, but he was hoping to resolve the issues very soon. So I am hoping for a critical but positive presentation. What I get instead is a long list of complaints. He tells everyone that, shortly after he was appointed, he implemented a similar system to the one he had used in the sales department. However, while some of his new team showed their support, the majority of them demonstrated resistance; one could even say sheer defiance. A typical response was “Our performance cannot be measured, we are not sales people!” While he goes through his slides I can't help thinking that people in R&D have had it too good for far too long, and that this research-oriented culture attaches too little importance to business performance. To my dismay the presentation ends on a fairly disappointing note with the R&D director suggesting that perhaps the performance measurement system should be designed and implemented differently depending on a unit's tasks and its culture. I am flabbergasted: that's madness; we would end up with dozens of different systems.
It is now the turn of the Sales Director for Western Europe. She says she will be brief: one slide and only a five-minute talk, because “the numbers are the numbers, and they speak for themselves”. After a brief preamble, she puts up one chart: her team's improvement is incredible! Sales in her region appear to have doubled over the past year. Can this be right? I look frantically through the draft report I received from Finance two days ago and I can only make out a 5% increase in total company sales over the past three quarters. I don't have a breakdown for each of the geographical areas, but I can't believe we achieved such a positive result in the saturated market that is Western Europe. While the Sales Director congratulates herself and thanks all her team as if they were in the room, I peer at the graph more carefully. It occurs to me that the y axis doesn't start at zero, so I squint a bit more and I can now see some of the blurred numbers: the real increase is less than 10%! This is really annoying, as we had said that we would be using this forum to openly discuss failures and successes, and then people turn up with lame graphs just to show off. I will have to talk to her too.
The podium is now being prepared for our Chief Operating Officer. At the beginning of this year we made the news for a major fault in one of our products: after several customers reported having the same problem, we were forced to undertake the largest recall in the company's history. Not a happy time. Since we couldn't really blame the suppliers, this hiccup triggered a series of internal arguments between our design and production units. Eventually, the incumbent COO was replaced by his deputy. Before leaving, the old COO wrote an angry letter in which he accused the board of introducing a perverse system of targets and rewards, which he labelled “bribes”, and which he claimed had completely corrupted the ethics of his team. According to him, people were aiming to achieve tighter deadlines and reduce lead times, but only because they had been promised a short-term financial benefit. This, rather than incompetence or flaws in the processes, was the main reason for the product recall. To me that letter just felt like he was making excuses, but quite a few people at the senior level of the company showed support for his sentiments; so much so that I thought the whole Performance Excellence project was going to be canned. Thankfully, the new COO appears to have opted for a less controversial theme for his presentation and is showing some general trend data, without mentioning performance targets. Phew! There are still strong feelings in the company about what happened and I wouldn't want to have a heated debate right here and now.
The last presentation before my final wrap-up is delivered by the CEO of our IT subsidiary. Five years ago she set up an IT firm to provide customized software to companies in our sector. This firm proved so successful that, two years ago, we decided to acquire it. Because of differences in our histories, tasks and, supposedly, our company cultures, they have always been given lots of freedom. At the beginning of the Performance Excellence project we discussed whether to introduce elements of the measurement system there, but in the end we decided to run a little experiment instead. In one half of the company things remained the same; in the other half, people capable of achieving “stretch targets” were offered a substantial bonus, up to 30% of their base pay, if they met those targets. In the beginning, a few of the employees seemed unhappy about the introduction of targets; however, since then, sentiment appears to have changed.
Somewhat surprisingly, the presentation starts on a positive note about the Performance Excellence project. Comparative data, recorded in the first two quarters, suggest that during the first six months the people who were striving for a bonus achieved higher levels of performance in terms of both efficiency and quality. This is great; finally we can see that when people are measured, managed and rewarded appropriately they do a better job. However, her second chart shows that this gap has shrunk almost to zero over the past six months. It is almost as if people had put in a lot of effort to gain the reward and then just stopped trying. The rest of the presentation reads like a death sentence for my project. It appears that our IT subsidiary is pulling out of the Performance Excellence project because quite a few episodes of cheating were found, and customer satisfaction and organizational climate have reached all-time lows.
When the last presenter finally leaves the podium, a deathly hush descends upon the audience. I am not only feeling puzzled, but I am also downright demoralized: I did all I could to get this project off the ground and, after a year, the results are abysmal. But what did we do wrong? Is there any way we can rescue the situation? I reach the microphone, thank all the previous presenters, and say that we now need time to reflect and think about how to move forward.
Poor old Mike: he has stumbled across the numerous dangers that are inherent in using performance measures, and he has discovered that measuring organizational performance is not an easy task. What to measure, how to measure, and perceptions of performance are all key to avoiding the performance measurement pitfalls described in Mike's story. Throughout the book we will examine and address the issues Mike encountered. For example, Mike discovered that the data being collected were not used for effective decision-making and that the cost of data collection, analysis and reporting was not outweighed by the real benefits achieved. This is a typical case of measurement for measurement's sake. Although Mike knew the business well, he still believed that it would be beneficial to compare R&D to sales, even though they operated in completely different ways. In contexts where measurement is difficult to undertake, we often resort to hoping for A whilst measuring and rewarding B, and this can cause serious damage to employees' motivation and performance. Also, all managers in the company were aware of the emphasis being put on the numbers, especially with respect to their bonuses, and they therefore exhibited classic behaviours to work around the system, even misrepresenting their data to look as though they were overachieving their targets. Dysfunctional behavioural consequences such as gaming and cheating, which are often driven by the misuse of performance targets and rewards, will be investigated in depth throughout the book.
Indeed, there are identifiable reasons why performance measurement fails, and we will review these in the context of the madness that ensues if the consequences of measurement are not considered. In Chapter 3, “Measurement for Measurement's Sake”, we will focus on one of the most common issues associated with performance measurement: the illusion of control that measurement generates and the resulting drive to measure more and more things, in more and more depth. In this chapter we will not only illustrate the negative consequences of this obsession, but we will also give you several ideas on how you can turn performance measurement from a sophisticated technical exercise into an instrument of effective management.
Chapter 4, “All I Need is the Right Measure!”, presents problems in designing performance indicators and suggests that, in order for a performance measure to be an effective tool for measurement, it needs to be designed with performance management in mind. The chapter will suggest a number of points that you need to think through if your performance indicators are to become a help rather than a hindrance in the task of improving performance.
In Chapter 5, “Comparing Performance”, we present the difficulties of undertaking benchmarking and using league tables in a meaningful way. The learning points will give you tips on determining which data to gather; how to ensure consistency in the collection, analysis and use of performance data; and how to report results without ambiguity.
In Chapter 6, “Target Turmoil”, we examine the dysfunctional behaviours caused by an excessive pressure to achieve performance targets. Here we will look at ways to use targets as a means to motivate employees; provide a greater sense of clarity around goals; and, eventually, improve business performance.
In Chapter 7, “Gaming and Cheating”, we get to the core of measurement madness and explore the depths that people will sink to in order to play the system in their favour. Whilst you may feel powerless when confronted by such uncontrollable and at times unimaginable behaviours – often targeted at improving reported results rather than underlying performance – we will show you how to take back the initiative, using both technical and cultural ammunition.
In Chapter 8, “Hoping for A Whilst Rewarding B”, we describe those situations in which we hope to achieve something, but attain quite the opposite, due to the focus on the wrong measures. At this point we will explore how, by introducing simple systems whose purpose is clear and understood across the organization, you can avoid such unintended consequences.
In Chapter 9, “Failing Rewards and Rewarding Failure”, we will navigate the shark-infested waters of financial rewards. Although measures are commonly linked to incentives, the accompanying effects on behaviour are often undesirable, if not downright destructive. Recounting some well-known examples, the chapter will delve into the complex relationship between rewards, measurement and performance. We will also demonstrate how you can use rewards and recognition systems to more positive effect if you have a better understanding of the links between performance measurement and motivation.
Finally, in Chapter 10, “Will Measurement Madness Ever Be Cured?”, we will discuss what the future holds for performance measurement and whether cross-organizational measurement systems, or the adoption of “Big Data” analytics have the potential to cure any of the madness described in this book.
As you will see throughout this book, there is no end to the list of unintended consequences that spring up in organizations as a result of measurement. We hope that, by emphasizing the outrageous, bizarre and often amusing ways in which people in organizations respond to performance measurement, you will begin to think differently and challenge some of your own assumptions about the utility of your measurement practices. But let us first look at what “performance” and “measurement” really mean.
One of the most puzzling things about performance measurement is that, despite countless negative experiences and a constant stream of similar failures reported in the media, organizations continue to apply the same methods and constantly fall into the same traps. This is because commonly held beliefs about the measurement and management of performance are rarely challenged. However, successful performance measurement is possible. We have worked with numerous organizations that have managed to extract many benefits through the intelligent use of performance measurement. Success stories include businesses gaining an enhanced understanding of their internal and external environment; institutions triggering greater organizational learning; executives having the ability to make more informed decisions; and, ultimately, organizations achieving better financial results and stakeholder satisfaction. However, we are not just relying on our personal experience; a growing body of research evidence has demonstrated that performance measurement systems can be productive and helpful in improving organizational performance.1
Yet, more often than not, getting performance measurement right is a difficult task. The central question of this whole endeavour, therefore, is: how can organizations use measurement systems as positive drivers of performance and change, whilst mitigating the negative behavioural consequences? Before trying to answer this question, and delving into this minefield of traps and tripwires, we would like to introduce some of the main concepts of performance measurement that will be used throughout the book. Understanding what these terms mean is a fundamental step towards avoiding the main pitfalls of performance measurement.
Different people have different views of what is meant by performance measurement. In this book, we define performance measurement as a formal process, which aims to obtain, analyze, and express information about an aspect of a process, an activity or a person.2 For example, if we are looking at “customers”, we could measure such aspects as satisfaction, loyalty or advocacy. Therefore, we need to be very clear about which aspect we are measuring and formalize the particular elements of the measure. As a minimum we need a definition; we need to specify how we will use the data; and we need to work out how we will derive value. If this is not done, data on customer loyalty, for example, may be collected in different ways, analyzed in different ways, and interpreted in different ways. It may therefore be of little use and, at worst, misleading.
We have found that most organizations have a performance measurement system in some form or other, be that a scorecard, a dashboard, or a simple framework. Ideally, such systems should consist of three inter-related elements: individual indicators; targets; and a supporting infrastructure that enables data to be acquired, collated, sorted, analyzed, interpreted and disseminated.3 Importantly, indicators and targets have to be considered not only in terms of how robust they are individually, but also as a collective, because they have to assess the organization as a whole. Hence, the word “system”.
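For technically minded readers, here is a minimal sketch, in Python, of how a formalized measure and a measurement system might be represented. The names and fields (Indicator, MeasurementSystem and so on) are purely illustrative assumptions of ours, not a template taken from the book or from any standard.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Indicator:
    """One formalized measure: a clear definition, the decisions the
    data should inform, and (optionally) a target level."""
    name: str                       # e.g. "customer loyalty" (illustrative)
    definition: str                 # what exactly is measured, and how
    informs: str                    # the decision(s) the data should support
    target: Optional[float] = None  # the level of performance aimed for

@dataclass
class MeasurementSystem:
    """Indicators and targets plus, implicitly, the infrastructure that
    acquires, collates, sorts, analyzes, interprets and disseminates
    the data. Indicators are judged as a collective - hence 'system'."""
    indicators: List[Indicator] = field(default_factory=list)

    def add(self, indicator: Indicator) -> None:
        self.indicators.append(indicator)

# Hypothetical usage:
system = MeasurementSystem()
system.add(Indicator(
    name="customer loyalty",
    definition="share of customers repurchasing within 12 months",
    informs="retention budget allocation",
    target=0.6,
))
```

Writing a measure down even this formally forces the questions raised above: without an agreed definition and a stated use for the data, the same “loyalty” number can be collected, analyzed and interpreted in incompatible ways.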
However, we are already getting ahead of ourselves. There are two distinct elements to performance measurement: the first is performance and the second is measurement. Let us focus on these two terms individually to see if they can give us any real insight into the nature of performance measurement pitfalls, and help us identify opportunities for improving the practice of performance measurement.
Before we measure something we must ask whether we understand what it is that we are trying to measure. This question, however, is so obvious that we often take the answer for granted. It turns out, though, that in the case of organizational performance, defining the very object we are trying to measure is far from simple. Moreover, although everyone talks about performance as if it were a common term, people often mean very different things by it.
If you were to look up the word “performance” in a dictionary, you would come across three relevant definitions.i The first refers to the achievement of a result, sometimes against a set standard or target. The second refers to the process of performing something; in other words, what is being done to achieve the result. If you think about your own organization, it would be interesting to reflect on whether its main performance indicators are related to results, for example return on shareholder value or the number of patients treated; or, to the way in which that output is delivered, such as the productivity of the workforce or the accessibility of the service. The third meaning relates to the act of performing a play, or a piece of music, and how these are perceived. When reflecting on an evening at a concert, members of the audience will report, to their friends, whether they thought it was a good or a bad “performance”. Does this have any relevance for measurement in an organizational context? Absolutely! The organizational performance that we measure and report on is interpreted by our stakeholders, and it is they who judge whether we have performed well or not.
To illustrate this let us use a sporting analogy. Whatever team you support, be it a soccer team such as Manchester United or Bayern Munich, or a baseball team such as the Boston Red Sox or New York Yankees, you surely want to see your team win. Matches won, drawn or lost is an example of an outcome measure. However, as a fan you will want to see entertaining matches too, as you are also concerned with the way in which your team plays; in other words, you care about the process through which the team achieves its results. But this is only the view of the supporter. What if you are a neutral in the crowd? You may only be concerned with seeing an entertaining game. What about the manager of the team? Surely he or she will be focused on the result, as in many cases this will be relevant to their continuing employment. And what about the owners of the team? For them it may be the financial return in terms of attendance, marketing rights and television revenues.
Similarly, let us consider the performance of a public hospital. As taxpayers, we are interested in the efficient use of public money; as patients, we value a responsive and effective service; as members of a patient representative group, we may be interested in openness and the opportunity to collaborate with the hospital; as suppliers, we might want to have clear service level agreements and prompt payment; as employees, we are more likely to identify a good performance with fair wages, security of employment and decent working conditions. So, in all organizations each stakeholder is concerned about that organization's performance, but in different ways. This poses a number of challenges from a performance measurement perspective, because what is being measured is no longer the only factor; who measures and who interprets the data are equally important.
For example, how often have you left a theatre thinking you have just witnessed a great event, only to overhear another opera buff on the train home describe their disappointment at the way the opera had been produced? Likewise, have you ever read the next day's review in the newspaper and wondered whether the opera being described is in fact the same one you went to see? People have different opinions about the same event and outcome: different stakeholders interpret the performance differently. Even before we get into the measurement and management of performance, we can see that we are dealing with something complex, multifaceted and difficult to pin down.
In large, diverse organizations, accurately defining what we mean by performance and creating a common view of “true” performance is not just tricky, it is nigh on impossible. To make sense of organizational performance we try to obtain an indication of important processes and activities that represent performance, and then report those results, hoping that everyone will interpret the data in the same way. But, of course, the data reported will only be as good as the measurement undertaken. In a sporting context it is easy to report on the outcome, the number of goals or baskets scored, much like it is easy to report on the number of widgets produced in a factory. However, judging and adequately reporting the quality of a sports event, or an opera or a concert is much more difficult, just as measuring employee performance or customer satisfaction or social and environmental performance is in a commercial context.
Consider this scenario. A newly married couple, moving into their first home together, are excited by the prospect of decorating and furnishing it. They are both keen to paint all of the downstairs rooms, so they start by measuring the length, breadth and height of each room with a standard tape measure. After recording the measurements, they carefully calculate the surface area to be covered with paint. Having chosen the colours and the paint manufacturer, they look up the volume of paint required to cover a given area. Finally, armed with these data, the young couple visit their local do-it-yourself store and purchase the required volume of paint.
Whilst calculating the volume of paint for a home improvement job is a relatively simple task, the same cannot be said of measuring the performance of individuals in an organizational setting. For example, one of our couple, who works as a design engineer, is asked by senior management to work out how many projects her team can complete in the next year. The company has calculated standard levels of work that a senior engineer and a junior engineer can complete. Counting up the number of engineers at the two levels within her team, our young newlywed makes a number of calculations and confidently predicts an annual output. However, at the end of the year, although her home has been satisfactorily decorated, the number of projects completed has fallen short of her original prediction by 30%. Why is this?
The problem here is, of course, that measures, productivity levels and benchmarks in organizations are not as accurate and precise as those in the physical world. Counting the number of senior engineers does not mean that they all have similar experience or similar capabilities; after all, they are not tins of paint. These typical problems arise in organizations because we often start with simple numbers, such as a headcount of engineers, rather than more complex evaluations of their various abilities. Subsequently, we perform some calculations on those data (for example, setting the annual target for completed projects by multiplying an average rate of performance by the number of senior engineers) and then take the result as an “objective measure” of performance, rather than as an approximation of it. These simplified approaches ignore individual variation in performance and the intricacies inherent in different projects. Even worse, the very introduction of simple performance measures, such as time spent on an activity, could negatively affect the engineers' behaviour, for example by encouraging them to focus on easier projects or to delay the completion of projects until the budgeted time is reached. The problem with measurement in organizations is that we assume the numbers are unassailable, so we confidently make decisions based on them, and are then surprised by the final result – which is not always a perfectly decorated room.
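To see why the paint arithmetic works while the project arithmetic fails, here is a small worked sketch in Python. Every number is hypothetical, chosen only to mirror the story above; the point is that both calculations use the same multiplication, but only one rests on inputs that behave like tins of paint.

```python
# Physical measurement: the paint calculation behaves predictably.
wall_area_m2 = 2 * 2.4 * (5.0 + 4.0)    # 2 * height * (length + breadth), one hypothetical room
coverage_m2_per_litre = 12.0             # stated on the manufacturer's tin
litres_needed = wall_area_m2 / coverage_m2_per_litre
print(f"Paint required: {litres_needed:.1f} litres")         # 3.6 litres

# Organizational measurement: the same arithmetic hides an assumption,
# namely that every engineer performs at the standard average rate.
seniors, juniors = 4, 6                  # headcount: the "simple numbers" we start with
rate_senior, rate_junior = 5, 3          # assumed "standard levels of work" per year
predicted = seniors * rate_senior + juniors * rate_junior    # 38 projects
actual = round(predicted * 0.7)          # the story's 30% shortfall
print(f"Predicted: {predicted} projects, delivered: {actual}")
```

The first calculation is dependable because square metres of wall and litres of paint do not vary with mood, experience or incentives; the second fails precisely because engineers do.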
Since perfect measures of performance do not exist, organizations use proxies – indicators that approximate or represent performance in the absence of perfect measures. For instance, higher education institutions use research publications in highly rated journals as a proxy for research excellence or knowledge creation. This solution, however, generates a further problem. Over time, proxies are perceived to represent true
