Now updated with new measurement methods and new examples, How to Measure Anything shows managers how to inform themselves in order to make less risky, more profitable business decisions.
This insightful and eloquent book will show you how to measure those things in your own business, government agency or other organization that, until now, you may have considered "immeasurable," including customer satisfaction, organizational flexibility, technology risk, and technology ROI.
Written by recognized expert Douglas Hubbard—creator of Applied Information Economics—How to Measure Anything, Third Edition illustrates how the author has used his approach across various industries and how any problem, no matter how difficult, ill-defined, or uncertain, can lend itself to measurement using proven methods.
Page count: 794
Publication year: 2014
Third Edition
DOUGLAS W. HUBBARD
Cover design: Wiley Cover image: © iStockphoto.com (clockwise from the top); © graphxarts, © elly99, © derrrek, © procurator, © Olena_T, © miru5
Copyright © 2014 by Douglas W. Hubbard. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. First edition published by John Wiley & Sons, Inc., in 2007. Second edition published by John Wiley & Sons, Inc., in 2010. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993, or fax (317) 572-4002.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Cataloging-in-Publication Data
Hubbard, Douglas W., 1962– How to measure anything : finding the value of intangibles in business / Douglas W. Hubbard.—Third edition. pages cm Includes bibliographical references and index. ISBN 978-1-118-53927-9 (cloth); ISBN 978-1-118-83644-6 (ebk); ISBN 978-1-118-83649-1 (ebk) 1. Intangible property—Valuation. I. Title. HF5681.I55H83 2014 657'.7—dc23 2013044540
I dedicate this book to the people who are my inspirations for so many things: to my wife, Janet, and to our children, Evan, Madeleine, and Steven, who show every potential for being Renaissance people.
I also would like to dedicate this book to the military men and women of the United States, so many of whom I know personally. I've been out of the Army National Guard for many years, but I hope my efforts at improving battlefield logistics for the U.S. Marines by using better measurements have improved their effectiveness and safety.
Preface to the Third Edition
Acknowledgments
About the Author
PART I: The Measurement Solution Exists
CHAPTER 1: The Challenge of Intangibles
The Alleged Intangibles
Yes, I Mean Anything
The Proposal: It’s about Decisions
A “Power Tools” Approach to Measurement
A Guide to the Rest of the Book
CHAPTER 2: An Intuitive Measurement Habit: Eratosthenes, Enrico, and Emily
How an Ancient Greek Measured the Size of Earth
Estimating: Be Like Fermi
Experiments: Not Just for Adults
Notes on What to Learn from Eratosthenes, Enrico, and Emily
Notes
CHAPTER 3: The Illusion of Intangibles: Why Immeasurables Aren’t
The Concept of Measurement
The Object of Measurement
The Methods of Measurement
Economic Objections to Measurement
The Broader Objection to the Usefulness of “Statistics”
Ethical Objections to Measurement
Reversing Old Assumptions
Notes
Note
PART II: Before You Measure
CHAPTER 4: Clarifying the Measurement Problem
Toward a Universal Approach to Measurement
The Unexpected Challenge of Defining a Decision
If You Understand It, You Can Model It
Getting the Language Right: What “Uncertainty” and “Risk” Really Mean
An Example of a Clarified Decision
Notes
Notes
CHAPTER 5: Calibrated Estimates: How Much Do You Know Now?
Calibration Exercise
Calibration Trick: Bet Money (or Even Just Pretend To)
Further Improvements on Calibration
Conceptual Obstacles to Calibration
The Effects of Calibration Training
Notes
Notes
CHAPTER 6: Quantifying Risk through Modeling
How Not to Quantify Risk
Real Risk Analysis: The Monte Carlo
An Example of the Monte Carlo Method and Risk
Tools and Other Resources for Monte Carlo Simulations
The Risk Paradox and the Need for Better Risk Analysis
Notes
CHAPTER 7: Quantifying the Value of Information
The Chance of Being Wrong and the Cost of Being Wrong: Expected Opportunity Loss
The Value of Information for Ranges
Beyond Yes/No: Decisions on a Continuum
The Imperfect World: The Value of Partial Uncertainty Reduction
The Epiphany Equation: How the Value of Information Changes Everything
Summarizing Uncertainty, Risk, and Information Value: The Pre-Measurements
Notes
PART III: Measurement Methods
CHAPTER 8: The Transition: From What to Measure to How to Measure
Tools of Observation: Introduction to the Instrument of Measurement
Decomposition
Secondary Research: Assuming You Weren’t the First to Measure It
The Basic Methods of Observation: If One Doesn’t Work, Try the Next
Measure Just Enough
Consider the Error
Choose and Design the Instrument
Note
CHAPTER 9: Sampling Reality: How Observing Some Things Tells Us about All Things
Building an Intuition for Random Sampling: The Jelly Bean Example
A Little about Little Samples: A Beer Brewer’s Approach
Are Small Samples Really “Statistically Significant”?
When Outliers Matter Most
The Easiest Sample Statistic Ever
A Biased Sample of Sampling Methods
Notes
Notes
CHAPTER 10: Bayes: Adding to What You Know Now
The Basics and Bayes
Using Your Natural Bayesian Instinct
Heterogeneous Benchmarking: A “Brand Damage” Application
Bayesian Inversion for Ranges: An Overview
The Lessons of Bayes
Notes
PART IV: Beyond the Basics
CHAPTER 11: Preference and Attitudes: The Softer Side of Measurement
Observing Opinions, Values, and the Pursuit of Happiness
A Willingness to Pay: Measuring Value via Trade-Offs
Putting It All on the Line: Quantifying Risk Tolerance
Quantifying Subjective Trade-Offs: Dealing with Multiple Conflicting Preferences
Keeping the Big Picture in Mind: Profit Maximization versus Purely Subjective Trade-Offs
Notes
CHAPTER 12: The Ultimate Measurement Instrument: Human Judges
Homo Absurdus: The Weird Reasons behind Our Decisions
Getting Organized: A Performance Evaluation Example
Surprisingly Simple Linear Models
How to Standardize Any Evaluation: Rasch Models
Removing Human Inconsistency: The Lens Model
Panacea or Placebo?: Questionable Methods of Measurement
Comparing the Methods
Example: A Scientist Measures the Performance of a Decision Model
Notes
CHAPTER 13: New Measurement Instruments for Management
The Twenty-First-Century Tracker: Keeping Tabs with Technology
Prediction Markets: A Dynamic Aggregation of Opinions
Notes
CHAPTER 14: A Universal Measurement Method: Applied Information Economics
Bringing the Pieces Together
Case: The Value of the System That Monitors Your Drinking Water
Case: Forecasting Fuel for the Marine Corps
Case: Measuring the Value of ACORD Standards
Ideas for Getting Started: A Few Final Examples
Summarizing the Philosophy
Notes
APPENDIX: Calibration Tests (and Their Answers)
Index
Appendix
Calibration Survey for Ranges: A
Answers for Calibration Survey for Ranges: A
Calibration Survey for Ranges: B
Answers to Calibration Survey for Ranges: B
Calibration Survey for Binary: A
Answers for Calibration Survey for Binary: A
Calibration Survey for Binary: B
Answers to Calibration Survey for Binary: B
Chapter 4
Exhibit 4.1 IT Security for the Department of Veterans Affairs
Exhibit 4.2 Department of Veterans Affairs Estimates for the Effects of Virus Attacks
Chapter 5
Exhibit 5.1 Sample Calibration Test
Exhibit 5.2 Actual versus Ideal Scores: Initial 10 Question 90% CI Test
Exhibit 5.3 Spin to Win!
Exhibit 5.4 Methods to Improve Your Probability Calibration
Exhibit 5.5 Aggregate Group Performance
Exhibit 5.6 90% Confidence Interval Test Score Distribution after Training (Final 20-Question Test)
Exhibit 5.7 Calibration Experiment Results for 20 IT Industry Predictions in 1997
Chapter 6
Exhibit 6.1 The Normal Distribution
Exhibit 6.2 Simple Monte Carlo Layout in Excel
Exhibit 6.3 Histogram
Exhibit 6.4 The Binary (a.k.a. Bernoulli) Distribution
Exhibit 6.5 The Uniform Distribution
Exhibit 6.6 Optional: Additional Monte Carlo Concepts for the More Ambitious Student
Exhibit 6.7 A Few Monte Carlo Tools
Chapter 7
Exhibit 7.1 Extremely Simple Expected Opportunity Loss Example
Exhibit 7.2 EOL “Slices” for Range Estimates
Exhibit 7.3 Example EVPI Calculation for Segments in a Range (total number of rows in actual table would be 20)
Exhibit 7.4 Example of the Relative Threshold
Exhibit 7.5 Expected Opportunity Loss Factor Chart
Exhibit 7.6 Loss Functions for Decisions on a Continuum
Exhibit 7.7 The Value versus Cost of Partial Information
Exhibit 7.8 The Effect of Time Sensitivity on EVPI and EVI
Exhibit 7.9 Measurement Inversion
Chapter 9
Exhibit 9.1 Simplified t-Statistic. Pick the nearest sample size (or interpolate if you prefer more precision).
Exhibit 9.2 How Uncertainty Changes with Sample Size
Exhibit 9.3 Varying Rates of Convergence for the Estimate of the Mean
Exhibit 9.4 Mathless 90% CI for the Median of Population
Exhibit 9.5 Population Proportion 90% CI for Small Samples
Exhibit 9.6 Example Distributions for Estimates of Population Proportion from Small Samples
Exhibit 9.7 Comparison of World War II German Mark V Tank Production Estimates
Exhibit 9.8 Serial Number Sampling
Exhibit 9.9 Threshold Probability Calculator
Exhibit 9.10 Example for a Customer Support Training Experiment
Exhibit 9.11 Probability of Correct Guesses Out of 280 Trials in Emily Rosa’s Experiment assuming a 50% chance per guess of being correct
Exhibit 9.12 Examples of Correlated Data
Exhibit 9.13 Promotion Period versus Ratings Points for a Cable Network
Exhibit 9.14 Selected Items from Excel’s Regression Tool “Summary Output” Table
Exhibit 9.15 Promotion Time versus Ratings Chart with the “Best-Fit” Regression Line Added
Chapter 10
Exhibit 10.1 Selected Basic Probability Concepts
Exhibit 10.2 The Bayesian Inversion Calculator Spreadsheet
Exhibit 10.3 Probability That the Majority Is Green, Given the First Five Samples*
Exhibit 10.4 Calibrated Subjective Probabilities versus Bayesian
Exhibit 10.5 Confidence versus Information Emphasis
Exhibit 10.6 Customer Retention Example Comparison of Prior Knowledge, Sampling without Prior Knowledge, and Sampling with Prior Knowledge (Bayesian Analysis)
Exhibit 10.7 Summary of Results of the Three Distributions versus Thresholds
Exhibit 10.8 Example Prior Distribution of Ranges (Low Resolution)
Exhibit 10.9 Chance of Each Population Distribution Based on Example of Sampling
Chapter 11
Exhibit 11.1 Partition Dependence Example: How Much Time Will It Take to Put Out a Fire at Building X?
Exhibit 11.2 An Investment Boundary Example
Exhibit 11.3 Hypothetical “Utility Curves”
Chapter 12
Exhibit 12.1 Asch Conformity Experiment
Exhibit 12.2 Effect of Lens Model on Improving Various Types of Estimates
Exhibit 12.3 Lens Model Process
Exhibit 12.4 Nonlinear Example of a Lens Model Variable
Exhibit 12.5 Relative Value of Estimation Methods for Groups of Similar Problems
Chapter 13
Exhibit 13.1 Summary of Available Prediction Markets
Exhibit 13.2 Share Price for “Apple Computer Dies by 2005” on Foresight Exchange
Exhibit 13.3 Performance of Prediction Markets: Price versus Reality
Exhibit 13.4 Comparison of Other Subjective Assessment Methods to Prediction Markets
Chapter 14
Exhibit 14.1 Summary of the AIE Process: The Universal Measurement Approach
Exhibit 14.2 Overview of the Spreadsheet Model for the Benefits of SDWIS Modification
Exhibit 14.3 Summary of Average Effects of Changing Supply Route Variables for a Marine Expeditionary Force (MEF)
Exhibit 14.4 The Information Value Results Extrapolated to the Entire Insurance Industry
I can't speak for all authors, but I feel that a book, especially one based largely on ongoing research, is never really finished. This is precisely what editions are for. In the time since the publication of the second edition of this book, I have continued to come across fascinating published research about the power and oddities of human decision making. And as my small firm continues to apply the methods in this book to real-world problems, I have even more examples I can use to illustrate the concepts. Feedback from readers and my experience explaining these concepts to many audiences have also helped me refine the message.
Of course, if the demand for the book wasn’t still strong six years after the first edition was published, Wiley and I wouldn’t be quite as incentivized to publish another edition. We also found this book, written explicitly for business managers, was catching on in universities. Professors from all over the world were contacting me to say they were using this book in a course they were teaching. In some cases it was the primary text—even though How to Measure Anything (HTMA) was never written as a textbook. Now that we see this growing area of interest, Wiley and I decided we should also create an accompanying workbook and instructor materials with this edition. Instructor materials are available at www.wiley.com.
In the time since I wrote the first edition of HTMA, I’ve written a second edition (2010) and two other titles—The Failure of Risk Management: Why It’s Broken and How to Fix It and Pulse: The New Science of Harnessing Internet Buzz to Track Threats and Opportunities. I wrote these books to expand on ideas I mention in earlier editions of How to Measure Anything and I also combine some of the key points I make in these books into this new edition.
For example, I started writing The Failure of Risk Management because I felt that the topic of risk, on which I could spend only one chapter and a few other references in this book, merited much more space. I argued that a lot of the most popular methods used in risk assessments and risk management don't stand up to the bright light of scientific scrutiny. And I wasn't just talking about the financial industry. I started writing the book well before the financial crisis started. I wanted to make it just as relevant to another Hurricane Katrina, tsunami, or 9/11 as to a financial crisis. My third book, Pulse, deals with what I believe to be one of the most powerful new measurement instruments of the twenty-first century. It describes how the Internet and, in particular, social media can be used as a vast data source for measuring all sorts of macroscopic trends. I've also written several more articles, and I have combined the research from them, from my other books, and from readers' comments on the book's website to create new material for this edition.
This edition also adds more philosophy about different approaches to probabilities, including what are known as the "Bayesian" versus "frequentist" interpretations of probability. These issues may not always seem relevant to a practical "how-to" business book, but I believe they are important as a foundation for better understanding measurement methods in general. For readers not interested in these issues, I've relegated some of the discussion to a series of "Purely Philosophical Interludes" found between some chapters, which the reader is free to study as their interests lead them. Readers who choose to delve into the Purely Philosophical Interludes will discover that I argue strongly for what is known as the subjective Bayesian approach to probability. While not made explicit until this edition, the philosophical position I argue for has always underlain everything I've written about measurement. Some readers who have dug in their heels on the other side of the issue may take exception to some of my characterizations, but I believe I make the case that, for the purposes of decision analysis, Bayesian methods are the most appropriate. I still discuss non-Bayesian methods, both because they are useful by themselves and because they are so widely used that lacking some literacy in them would limit the reader's understanding of the larger issue of measurement.
In total, each of these new topics adds a significant amount of content to this edition. Having said that, the basic message of HTMA is still the same as it has been in the earlier two editions. I wrote this book to correct a costly myth that permeates many organizations today: that certain things can’t be measured. This widely held belief is a significant drain on the economy, public welfare, the environment, and even national security. “Intangibles” such as the value of quality, employee morale, or even the economic impact of cleaner water are frequently part of some critical business or government policy decision. Often an important decision requires better knowledge of the alleged intangible, but when an executive believes something to be immeasurable, attempts to measure it will not even be considered.
As a result, decisions are less informed than they could be. The chance of error increases. Resources are misallocated, good ideas are rejected, and bad ideas are accepted. Money is wasted. In some cases, life and health are put in jeopardy. The belief that some things—even very important things—might be impossible to measure is sand in the gears of the entire economy and the welfare of the population.
All important decision makers could benefit from learning that anything they really need to know is measurable. However, in a democracy and a free-enterprise economy, voters and consumers count among these “important decision makers.” Chances are that your decisions in some part of your life or your professional responsibilities would be improved by better measurement. And it’s virtually certain that your life has already been affected—negatively—by the lack of measurement in someone else’s decisions in business or government.
I’ve made a career out of measuring the sorts of things many thought were immeasurable. I first started to notice the need for better measurement in 1988, shortly after I started working for Coopers & Lybrand as a brand-new MBA in the management consulting practice. I was surprised at how often clients dismissed a critical quantity—something that would affect a major new investment or policy decision—as completely beyond measurement. Statistics and quantitative methods courses were still fresh in my mind. In some cases, when someone called something “immeasurable,” I would remember a specific example where it was actually measured. I began to suspect any claim of immeasurability as possibly premature, and I would do research to confirm or refute the claim. Time after time, I kept finding that the allegedly immeasurable thing was already measured by an academic or perhaps professionals in another industry.
At the same time, I was noticing that books about quantitative methods didn't focus on making the case that everything is measurable, nor did they focus on making the material accessible to the people who really needed it. They started with the assumption that the reader already believes something to be measurable and that it is just a matter of executing the appropriate algorithm. And these books tended to assume that the reader's objective was a level of rigor that would suffice for publication in a scientific journal, not merely a decrease in uncertainty about some critical decision with a method a non-statistician could understand.
In 1995, after years of these observations, I decided that a market existed for better measurements for managers, and I pulled together methods from several fields to create a solution. The wide variety of measurement-related projects I have taken on since 1995 allowed me to fine-tune this method. Not only did every alleged immeasurable turn out not to be so, but the most intractable "intangibles" were often being measured by surprisingly simple methods. It was time to challenge the persistent belief that important quantities were beyond measurement.
In the course of writing this book, I felt as if I were exposing a big secret and that once the secret was out, perhaps a lot of apparently intractable problems would be solved. I even imagined it would be a small “scientific revolution” of sorts for managers—a distant cousin of the methods of “scientific management” introduced a century ago by Frederick Taylor. This material should be even more relevant than Taylor’s methods turned out to be for twenty-first-century managers. Whereas scientific management originally focused on optimizing labor processes, we now need to optimize measurements for management decisions. Formal methods for measuring those things management usually ignores have often barely reached the level of alchemy. We need to move from alchemy to the equivalent of chemistry and physics.
The publisher and I considered several titles. All the titles considered started with “How to Measure Anything” but weren’t always followed by “Finding the Value of ‘Intangibles’ in Business.” I could have used the title of a seminar I give called “How to Measure Anything, But Only What You Need To.” Since the methods in this book include computing the economic value of measurement (so that we know where to spend our measurement efforts), it seemed particularly appropriate. We also considered “How to Measure Anything: Valuing Intangibles in Business, Government, and Technology” since there are so many technology and government examples in this book alongside the general business examples. But the title chosen, How to Measure Anything: Finding the Value of “Intangibles” in Business, seemed to grab the right audience and convey the point of the book without necessarily excluding much of what the book is about.
As Chapter 1 explains further, the book is organized into four parts. The chapters and sections should be read in order because each part tends to rely on instructions from the earlier parts. Part One makes the case that everything is measurable and offers some examples that should inspire readers to attempt measurements even when it seems impossible. It contains the basic philosophy of the entire book, so, if you don’t read anything else, read this section. In particular, the specific definition of measurement discussed in this section is critical to correctly understand the rest of the book.
In Chapter 1, I suggest a challenge for readers, and I will reinforce that challenge by mentioning it here. Write down one or more measurement challenges you have in home life or work, then read this book with the specific objective of finding a way to measure them. If those measurements influence a decision of any significance, then the cost of the book and the time to study it will be paid back many-fold.
How to Measure Anything has an accompanying website at www.howtomeasureanything.com. This site includes practical examples worked out in detailed spreadsheets. We refer to these spreadsheets as “power tools” for managers who need practical solutions to measurement problems which sometimes require a bit more math. Of course, understanding the principles behind these spreadsheets is still important so that they aren’t misapplied, but the reader doesn’t need to worry about memorizing equations. The spreadsheets are already worked out so that the manager can simply input data and get an answer.
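The spreadsheets themselves live on the website, but the flavor of the calculation they automate can be sketched in a few lines of code. The following is a minimal Python sketch, not the book's actual tool, of a simple Monte Carlo simulation of the kind Chapter 6 describes: each uncertain input is given as a calibrated 90% confidence interval, converted to a normal distribution (a 90% interval spans 3.29 standard deviations), and sampled many times. All the revenue and cost figures here are illustrative assumptions, not examples from the book:

```python
import random

def ci_to_normal(lower, upper):
    """Convert a calibrated 90% confidence interval into normal
    distribution parameters: the mean is the midpoint, and the standard
    deviation is the interval width divided by 3.29 (the number of
    standard deviations a 90% interval spans)."""
    mean = (lower + upper) / 2.0
    sd = (upper - lower) / 3.29
    return mean, sd

def simulate_chance_of_loss(trials=10_000, seed=42):
    """Monte Carlo estimate of the chance that profit = revenue - cost
    is negative, with both inputs given as hypothetical 90% CIs."""
    random.seed(seed)
    rev_mu, rev_sd = ci_to_normal(100_000, 200_000)   # illustrative revenue CI
    cost_mu, cost_sd = ci_to_normal(80_000, 160_000)  # illustrative cost CI
    losses = 0
    for _ in range(trials):
        profit = random.gauss(rev_mu, rev_sd) - random.gauss(cost_mu, cost_sd)
        if profit < 0:
            losses += 1
    return losses / trials

print(f"Estimated chance of loss: {simulate_chance_of_loss():.1%}")
```

Like the book's spreadsheets, the idea is that a manager supplies only the interval estimates; the sampling machinery stays fixed.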
The website also includes additional “calibration” tests used for training the reader how to subjectively assign probabilities. There are some tests already in the appendix of the book but the online tests are there for those who need more practice or those who simply prefer to work with electronic files.
For instructors, there is also a set of instructor materials at www.wiley.com. These include additional test bank questions to support the accompanying workbook and selected presentation slides.
So many people contributed to the content of this book through their suggestions and reviews, and as sources of information about interesting measurement solutions. In no particular order, I would like to thank these people:
Freeman Dyson
Pat Plunkett
Robyn Dawes
Peter Tippett
Art Koines
Jay Edward Russo
Barry Nussbaum
Terry Kunneman
Reed Augliere
Skip Bailey
Luis Torres
Linda Rosa
James Randi
Mark Day
Mike McShea
Chuck McKay
Ray Epich
Robin Hanson
Ray Gilbert
Dominic Schilt
Mary Lunz
Henry Schaffer
Jeff Bryan
Andrew Oswald
Leo Champion
Peter Schay
George Eberstadt
Tom Bakewell
Betty Koleson
David Grether
Bill Beaver
Arkalgud Ramaprasad
David Todd Wilson
Julianna Hale
Harry Epstein
Emile Servan-Schreiber
James Hammitt
Rick Melberth
Bruce Law
Rob Donat
Sam Savage
Bob Clemen
Michael Brown
Gunther Eysenbach
Michael Hodgson
Sebastian Gheorghiu
Johan Braet
Moshe Kravitz
Jim Flyzik
Jack Stenner
Michael Gordon-Smith
Eric Hills
Tom Verdier
Greg Maciag
Barrett Thompson
Richard Seiersen
Keith Shepherd
Eike Luedeling
Doug Samuelson
Chris Maddy
Jolene Manning
Special thanks to Dominic Schilt at RiverPoint Group LLC, who saw the opportunities with this approach back in 1995 and has given so much support since then. And thanks to all of my blog readers who have contributed ideas for every edition of this book.
I would also like to thank my staff at Hubbard Decision Research, who pitched in when it really counted.
Doug Hubbard is the president and founder of Hubbard Decision Research and the inventor of the powerful Applied Information Economics (AIE) method. His first book, How to Measure Anything: Finding the Value of Intangibles in Business (John Wiley & Sons, 2007, 2nd ed., 2010, 3rd ed., 2014), has been one of the most successful business statistics books ever written. He also wrote The Failure of Risk Management: Why It’s Broken and How to Fix It (John Wiley & Sons, 2009), and Pulse: The New Science of Harnessing Internet Buzz to Track Threats and Opportunities (John Wiley & Sons, 2011). Over 75,000 copies of his books have been sold in five different languages.
Doug Hubbard’s career has focused on the application of AIE to solve current business issues facing today’s corporations. Mr. Hubbard has completed over 80 risk/return analyses of large critical projects, investments, and other management decisions in the past 19 years. AIE is the practical application of several fields of quantitative analysis including Bayesian analysis, Monte Carlo simulations, and many others. Mr. Hubbard’s consulting experience totals more than 25 years and spans many industries including insurance, banking, utilities, federal and state government, entertainment media, military logistics, pharmaceuticals, cybersecurity, and manufacturing.
In addition to his books, Mr. Hubbard has been published in CIO Magazine, Information Week, DBMS Magazine, Architecture Boston, OR/MS Today, and Analytics Magazine. His AIE methodology has received critical praise from The Gartner Group, The Giga Information Group, and Forrester Research. He is a popular speaker at IT metrics and economics conferences all over the world. Prior to specializing in Applied Information Economics, his experience includes data and process modeling at all levels as well as strategic planning and technical design of systems.
When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of science.
—Lord Kelvin (1824–1907), British physicist and member of the House of Lords
Anything can be measured. If something can be observed in any way at all, it lends itself to some type of measurement method. No matter how “fuzzy” the measurement is, it’s still a measurement if it tells you more than you knew before. And those very things most likely to be seen as immeasurable are, virtually always, solved by relatively simple measurement methods. As the title of this book indicates, we will discuss how to find the value of those things often called “intangibles” in business. The reader will also find that the same methods apply outside of business. In fact, my analysts and I have had the opportunity to apply quantitative measurements to problems as diverse as military logistics, government policy, and interventions in Africa for reducing poverty and hunger.
Like many hard problems in business or life in general, seemingly impossible measurements start with asking the right questions. Then, even once questions are framed the right way, managers and analysts may need a practical way to use tools to solve problems that might be perceived as complex. So, in this first chapter, I will propose a way to frame the measurement question and describe a strategy for solving measurement problems with some powerful tools. The end of this chapter will be an outline of the rest of the book—building further on these initial concepts. But first, let’s discuss a few examples of these so-called intangibles.
There are two common understandings of the word “intangible.” It is routinely applied to things that are literally not tangible (i.e., not touchable, physical objects) yet are widely considered to be measurable. Things like time, budget, patent ownership, and so on are good examples of things that you cannot literally touch though they are observable in other ways. In fact, there is a well-established industry around measuring so-called intangibles such as copyright and trademark valuation. But the word “intangible” has also come to mean utterly immeasurable in any way at all, directly or indirectly. It is in this context that I argue that intangibles do not exist—or, at the very least, could have no bearing on practical decisions.
If you are an experienced manager, you’ve heard of the latter type of “intangibles” in your own organization—things that presumably defy measurement of any type. The presumption of immeasurability is, in fact, so strong that no attempt is even made to make observations that might tell you something surprising about the alleged immeasurable. Here are a few examples:
The “flexibility” to create new products
The value of information
The risk of bankruptcy
Management effectiveness
The forecasted revenues of a new product
The public health impact of a new government environmental policy
The productivity of research
The chance of a given political party winning the White House
The risk of failure of an information technology (IT) project
Quality of customer interactions
Public image
The risk of famine in developing countries
Each of these examples can very well be relevant to some major decision an organization must make. The intangible could even be the single most important determinant of success or failure of an expensive new initiative in either business or government. Yet, in many organizations, because intangibles like these were assumed to be immeasurable, the decision was not nearly as informed as it could have been. For many decision makers, it is simply a habit to default to labeling something as intangible when the measurement method isn’t immediately apparent. This habit can sometimes be seen in the “steering committees” of many organizations. These committees may review proposed investments and decide which to accept or reject. The proposed investments could be related to IT, new product research and development, major real estate development, or advertising campaigns. In some cases I’ve observed, the committees were categorically rejecting any investment where the benefits were “soft.” Important factors with names like “improved word-of-mouth advertising,” “reduced strategic risk,” or “premium brand positioning” were being ignored in the evaluation process because they were considered immeasurable.
It’s not as if the proposed initiative was being rejected simply because the person proposing it hadn’t measured the benefit (which would be a valid objection to a proposal); rather, it was believed that the benefit couldn’t possibly be measured. Consequently, some of the most important strategic proposals were being overlooked in favor of minor cost-saving ideas simply because everyone knew how to measure some things and didn’t know how to measure others. In addition, many major investments were approved with no plans for measuring their effectiveness after they were implemented. There would be no way to know whether they ever worked at all.
In an equally irrational way, an immeasurable may instead be treated as a key strategic principle or “core value” of the organization. In some cases decision makers effectively treat this alleged intangible as a “must have,” so that the question of the degree to which the intangible matters is never considered in a rational, quantitative way. If “improving customer relationships” is considered a core value, and one can make the case that a proposed investment supports it, then the investment is justified—no matter the degree to which customer relationships improve at a given cost.
In some cases, a decision maker might concede that something could be measured in principle but believe that, for various reasons, measuring it is not feasible. This, too, renders the thing, for all practical purposes, another “intangible” in their eyes. For example, perhaps there is a belief that “management productivity” is measurable but that sufficient data is lacking or that gathering the data is not economically feasible. This belief—not usually based on any specific calculation—is as big an obstacle to measurement as any other.
The fact of the matter is that all of the previously listed intangibles are not only measurable but have already been measured by someone (sometimes my own team of analysts), using methods that are probably less complicated and more economically feasible than you might think.
The reader should try this exercise: Before going on to the next chapter, write down those things you believe are immeasurable or, at least, that you are not sure how to measure. My goal is that, after reading this book, you will be able to identify methods for measuring each and every one of them. Don’t hold back. We will be talking about measuring such seemingly immeasurable things as the number of fish in the ocean, the value of a happy marriage, and even the value of a human life. Whether you want to measure phenomena related to business, government, education, art, or anything else, the methods herein apply.
With a title like How to Measure Anything, anything less than an enormous multivolume text would be sure to leave out something. My objective does not explicitly include every area of physical science or economics, especially where measurements are already well developed. Those disciplines have measurement methods for a variety of interesting problems, and the professionals in those disciplines are already much less inclined to apply the label “intangible” to something they are curious about. The focus here is on measurements that are relevant—even critical—to major organizational decisions, and yet don’t seem to lend themselves to an obvious and practical measurement solution.
So, regardless of your area of interest, if I do not mention your specific measurement problem by name, don’t conclude that methods relevant to that issue aren’t being covered. The approach I will talk about applies to any uncertainty that has some relevance to your firm, your community, or even your personal life. This extrapolation is not difficult. For example, when you studied arithmetic in elementary school, you may not have covered the solution to 347 times 79 in particular, but you knew that the same procedures applied to any combination of numbers and operations.
I mention this because I periodically receive emails from someone looking for a specific measurement problem mentioned by name in earlier editions of this book. They may write, “Aha, you didn’t mention X, and X is uniquely immeasurable.” The actual examples I’ve been given by earlier readers included the quality of education and the competency of medical staff. Yet, just as the same procedure in arithmetic applies to multiplying any two numbers, the methods we will discuss are fundamental to any measurement problem regardless of whether it is mentioned by name.
So, if your problem happens to be something that isn’t specifically analyzed in this book—such as measuring the value of better product labeling laws, the quality of a movie script, or the effectiveness of motivational seminars—don’t be dismayed. Just read the entire book and apply the steps described. Your immeasurable will turn out to be entirely measurable.
No matter what field you specialize in and no matter what the measurement problem may be, we start with the idea that if you care about this alleged intangible at all, it must be because it has observable consequences, and usually you care about it because you think knowing more about it would inform some decision. Everything else is a matter of clearly defining what you observe, why you care about it, and some (often surprisingly trivial) math.
Why do we care about measurements at all? There are just three reasons. The first reason—and the focus of this book—is that we should care about a measurement because it informs key decisions. Second, a measurement might also be taken because it has its own market value (e.g., results of a consumer survey) and could be sold to other parties for a profit. Third, perhaps a measurement is simply meant to entertain or satisfy a curiosity (e.g., academic research about the evolution of clay pottery). But the methods we discuss in this decision-focused approach to measurement should be useful on those occasions, too. If a measurement is not informing your decisions, it could still be informing the decisions of others who are willing to pay for the information. If you are an academic curious about what really happened to the woolly mammoth, then, again, I believe this book will have some bearing on how you define the problem and the methods you might use.
Upon reading the first edition of this book, a business school professor remarked that he thought I had written a book about the somewhat esoteric field called “decision analysis” and disguised it under a title about measurement so that people from business and government would read it. I think he hit the nail on the head. Measurement is about supporting decisions, and there are even “micro-decisions” to be made within measurements themselves. Consider the following points.
Decision makers usually have imperfect information (i.e., uncertainty) about the best choice for a decision.
These decisions should be modeled quantitatively because (as we will see) quantitative models have a favorable track record compared to unaided expert judgment.
Measurements inform uncertain decisions.
For any decision or set of decisions, there is a large combination of things to measure and ways to measure them—but perfect certainty is rarely a realistic option.
In other words, management needs a method to analyze options for reducing uncertainty about decisions. Now, it should be obvious that important decisions are usually made under some level of uncertainty. Yet few management consultants, performance metrics experts, or even statisticians approach measurements with the explicit purpose of supporting defined decisions.
Even when a measurement is framed in terms of some decision, that decision might not be modeled in a way that makes good use of measurements. Although subjective judgment informed by real data may be better than intuition alone, choices made entirely intuitively dilute the value of measurement. Instead, measurements can be fed directly into quantitative models so that optimal strategies are computed rather than guessed. Just think of a cost-benefit analysis in a spreadsheet. A manager may calculate benefits based on some estimates and check to see if they exceed the cost. If some input to one of the benefit calculations is measured, there is a place for that information to go and the net value of a choice can be immediately updated. You don’t try to run a spreadsheet in your head.
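To make the spreadsheet idea concrete, here is a minimal sketch of such a model in Python. All figures and variable names are hypothetical, chosen for illustration only: a measured input goes into the benefit calculation, and the net value of the choice updates immediately rather than being weighed in anyone's head.

```python
# A minimal cost-benefit model: measured inputs flow straight into the
# computed net value. All figures are hypothetical.

def net_benefit(units_sold, profit_per_unit, annual_cost):
    """Net annual value of a proposed investment."""
    return units_sold * profit_per_unit - annual_cost

# Initial estimate before any measurement:
before = net_benefit(units_sold=10_000, profit_per_unit=25.0, annual_cost=200_000)
print(before)  # 50000.0 -- looks worthwhile

# A market survey revises the sales estimate downward; the model
# recomputes the decision-relevant number immediately:
after = net_benefit(units_sold=7_000, profit_per_unit=25.0, annual_cost=200_000)
print(after)   # -25000.0 -- the measurement flipped the decision
```

The point is not the arithmetic, which is trivial, but that a quantitative model gives each new measurement a place to go.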
The benefits of modeling decisions quantitatively may not be obvious and may even be controversial to some. I have known managers who simply presume the superiority of their intuition over any quantitative model (this claim, of course, is never itself based on systematically measured outcomes of their decisions). Some have even blamed the 2008 global financial crisis, not on inadequate regulation or shortcomings of specific mathematical models, but on the use of mathematical models in general in business decisions. The overconfidence some bankers, hedge fund managers, and consumers had in their unaided intuition was likely a significant factor as well.
The fact is that the superiority of even simple quantitative models for decision making has been established for many areas normally thought to be the preserve of expert intuition, a point this book will spend some time supporting with citations of several published studies. I’m not promoting the disposal of expert intuition for such purposes—on the contrary, it is a key element of some of the methods described in this book. In some ways expert intuition is irreplaceable, but it has its limits, and decision makers at all levels must know when they are better off just “doing the math.”
When quantitatively modeled decisions are the focus of measurement, then we can address the last item in the list. We have many options for reducing uncertainty, and some are economically preferable to others. It is unusual for analysis in business or government to handle the economic questions of measurement explicitly, even when the decision is big and risky, and even in cultures that otherwise champion quantitative analysis. Computing and using the economic value of measurements to guide the measurement process is, at a minimum, where many business measurement methods fall short.
However, thinking about measurement as another type of choice among multiple strategies for reducing uncertainty is very powerful. If the decision to be analyzed is whether to invest in some new product development, then many intermediate micro-decisions about what to measure (e.g., emergence of competition, market size, project risks, etc.) can make a significant difference in the decision about whether to commit to the new product. Fortunately, in principle, the basis for assessing the value of information for decisions is simple. If the outcome of a decision in question is highly uncertain and has significant consequences, then measurements that reduce uncertainty about it have a high value.
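That principle can be made concrete with the expected value of perfect information (EVPI), a standard quantity from decision analysis: the expected gain from making the best choice in every state of the world, minus the expected value of the best choice you can make now. The numbers below are entirely hypothetical, and this is only a sketch of the general idea, not the specific procedure developed later in the book.

```python
# EVPI for a simple launch/don't-launch bet. Hypothetical numbers:
# the product succeeds with probability 0.25 (payoff $1,000,000) or
# fails (loss $200,000); not launching pays $0 either way.

p_success = 0.25
payoff_success = 1_000_000
payoff_failure = -200_000

# Best you can do WITHOUT more information: pick the action with the
# higher expected value.
ev_launch = p_success * payoff_success + (1 - p_success) * payoff_failure
ev_no_launch = 0.0
ev_without_info = max(ev_launch, ev_no_launch)

# WITH perfect information you would launch only in the success case.
ev_with_info = (p_success * max(payoff_success, 0)
                + (1 - p_success) * max(payoff_failure, 0))

evpi = ev_with_info - ev_without_info
print(ev_without_info, evpi)  # 100000.0 150000.0
```

Here a measurement that resolved the uncertainty completely would be worth up to $150,000, so a market study costing far less than that could easily pay for itself.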
Unless someone is planning on selling the information or using it for entertainment, they shouldn’t care about measuring something if it doesn’t inform a significant bet of some kind. So don’t confuse the proposition that anything can be measured with the notion that everything should be measured. This book supports the first proposition; the second directly contradicts the economics of measurements made to support decisions. Of course, if measurements were free, obvious, and instantaneous, we would have no dilemma about what, how, or even whether to measure. As simple as this seems, the specific calculations tend to surprise those who have relied on intuition to decide whether and what to measure.
So what does a decision-oriented, information-value-driven measurement process look like? This framework happens to be the basis of the method I call Applied Information Economics (AIE). I summarize this approach in the following steps.
Applied Information Economics: A Universal Approach to Measurement
Define the decision.
Determine what you know now.
Compute the value of additional information. (If none, go to step 5.)
Measure where information value is high. (Return to steps 2 and 3 until further measurement is not needed.)
Make a decision and act on it. (Return to step 1 and repeat as each action creates new decisions.)
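As a schematic sketch only—an illustration of the loop structure, not the author’s actual procedure or software—the five steps can be expressed in code. Every function name and number here is a placeholder; in practice each step involves its own methods (calibrated estimates, decision models, information-value calculations) covered in later chapters.

```python
# Schematic outline of the five-step AIE loop described above.
# All function arguments are placeholders for illustration.

def aie(define_decision, assess_uncertainty, info_value, measure, decide,
        min_info_value=0.0):
    decision = define_decision()              # Step 1: define the decision
    state = assess_uncertainty(decision)      # Step 2: what do we know now?
    while True:
        value = info_value(decision, state)   # Step 3: value of more information
        if value <= min_info_value:
            break                             # nothing left worth measuring
        # Step 4: measure where information value is high, then
        # reassess what we know (back to steps 2 and 3).
        state = assess_uncertainty(measure(decision, state))
    return decide(decision, state)            # Step 5: make a decision and act

# Toy run: "uncertainty" is a single number that each measurement
# halves; information value is (hypothetically) equal to the
# remaining uncertainty, and we stop measuring once it reaches 1.0.
result = aie(
    define_decision=lambda: {"uncertainty": 8.0},
    assess_uncertainty=lambda d: d["uncertainty"] if isinstance(d, dict) else d,
    info_value=lambda d, u: u,
    measure=lambda d, u: u / 2,
    decide=lambda d, u: ("act", u),
    min_info_value=1.0,
)
print(result)  # ('act', 1.0)
```

The loop makes the economics explicit: measurement continues only while the value of further information exceeds a threshold.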
Each of these steps will be explained in more detail in chapters to come. But, in short: measure what matters, make better decisions. My hope is that as we raise the curtain on each of these steps in the upcoming chapters, the reader may have a series of small revelations about measurement.
I think it is fair to say that most people have the impression that statistics or scientific methods are not accessible tools for practical use in real decisions. Managers may have been exposed to basic concepts behind scientific measurement in, say, a chemistry lab in high school, but that may have just left the impression that measurements are fairly exact and apply only to obvious and directly observable quantities like temperature and mass. They’ve probably had some exposure to statistics in college, but that experience seems to confuse as many people as it helps. After that, perhaps they’ve dealt with measurement within the exact world of accounting or other areas where there are huge databases of exact numbers to query. What they seem to take away from these experiences is that to use the methods from statistics one needs a lot of data, that the precise equations don’t deal with messy real-world decisions where we don’t have all of the data, or that one needs a PhD in statistics to use any statistics at all.
We need to change these misconceptions. Regardless of your background in statistics or scientific measurement methods, the goal of this book is to help you conduct measurements just like a bona fide real-world scientist usually would. Some might be surprised to learn that most scientists—after college—are not actually required to commit to memory hundreds of complex theorems and master deep, abstract mathematical concepts in order to perform their research. Many of my clients over the years have been PhD scientists in many fields and none of them have relied on their memory to apply the equations they regularly use—honest. Instead, they simply learn to identify the right methods to use and then they usually depend on software tools to convert the data they enter into the results they need.
Yes, real-world scientists effectively “copy/paste” the results of their statistical analyses of data even when producing research to be published in the most elite journals in the life and physical sciences. So, just like a scientist, we will use a “power tools” approach to measurements. Like many of the power tools you use already (I’m including your car and computer along with your power drill) these will make you more productive and allow you to do what would otherwise be difficult or impossible.
Power tools like ready-made spreadsheets, tables, charts, and procedures will allow you to use useful statistical methods without knowing how to derive them all from fundamental axioms of probability theory or even without memorizing equations. To be clear, I’m not saying you can just start entering data without knowing what is going on. It is critical that you understand some basic principles about how these methods work so that you don’t misuse them. However, memorizing the equations of statistics (much less deriving their mathematical proofs) will not be required any more than you are required to build your own computer or car to use them.
So, without compromising substance, we will attempt to make some of the more seemingly esoteric statistics around measurement as simple as they can be. Whenever possible, math will be relegated to Excel spreadsheets or even simpler charts, tables, and procedures. Some simple equations will be shown but, even then, I will usually show them in the form of Excel functions that you can type directly into a spreadsheet. My hope is that some of the methods are so much simpler than what is taught in the typical introductory statistics courses that we might be able to overcome many phobias about the use of quantitative measurement methods. Readers do not need any advanced training in any mathematical methods at all. They just need some aptitude for clearly defining problems.
Some of the power tools referred to in this book are in the form of spreadsheets available for download on this book’s website at www.howtomeasureanything.com. This free online library includes many of the more detailed calculations shown in this book. There are also examples, learning aids, and a discussion board for questions about the book or measurement challenges in general. And, since technologies and measurement topics evolve faster than publishing cycles of books, the site provides a way for me to discuss new issues as they arise.
As mentioned, the chapters are not organized by type of measurement whereby, for example, you could see the entire process for measuring improved efficiency or quality in one chapter. To measure any single thing, you need to understand a sequence of steps that is described across several chapters. For this reason, I do not recommend skipping from chapter to chapter. But a quick review of the entire book will help the reader see when to expect certain topics. I’ve grouped the 14 chapters of this book into four major parts as follows.
Synopsis of the Four Parts of This Book
Part I: The Measurement Solution Exists.
The three chapters of the first section (including this chapter) address broadly the claims of immeasurability. In the next chapter we explore some instructive examples of measurement by focusing on three clever individuals and the approaches they took to solve hard problems (Chapter 2). These examples come from both ancient and recent history and were chosen primarily for what they teach us about measurement in general. Building on this, we then directly address common objections to measurement (Chapter 3). This is an attempt to preempt many of the objections managers or analysts have when considering measurement methods. I never see this treatment in standard college textbooks, but it is important to directly confront the misconceptions that keep powerful methods from being attempted in the first place.
Part II: Before You Measure.
Chapters 4 through 7 discuss important “set up” questions that are prerequisites to good measurement and that coincide with steps 1 through 3 in the previously described “universal” approach to measurement. These steps include defining the decision problem well (Chapter 4). Then we estimate the current level of uncertainty about a problem. This is where we learn how to provide “calibrated probability assessments” to represent our uncertainties quantitatively (Chapter 5). Next, we put those initial estimates of uncertainty together in a model of decision risk (Chapter 6) and compute the value of additional information (Chapter 7). Before we discuss how to measure something, these sequential steps are critical to help us determine what to measure and how much of an effort a measurement is worth.
Part III: Measurement Methods.
Once we have determined what to measure, we explain some basic methods about how to conduct the required measurements in Chapters 8 through 10. This coincides with part of what is needed for step 4 in the universal approach. We talk about the general issue of how to decompose a measurement further, consider prior research done by others, and select and outline measurement instruments (Chapter 8). Then we discuss some basic traditional statistical sampling methods and how to think about sampling in a way that reduces misconceptions about it (Chapter 9). The last chapter of the section describes another powerful approach to sampling based on what are called “Bayesian methods,” contrasts it with other methods, and applies it to some interesting and common measurement problems (Chapter 10).
Part IV: Beyond the Basics.
The final section adds some additional tools and brings it all together with case examples. First, we build on the sampling methods by describing measurement instruments when the object of measurement is human attitudes and preferences (Chapter 11). Then we discuss methods in which refining human judgment can itself be a powerful type of a measurement instrument (Chapter 12). Next, we will explore some recent and developing trends in technology that will provide management with entirely new sources of data, such as using social media and advances in personal health and activity monitoring as measurement devices (Chapter 13). These three chapters also round out the remainder of step 4 and the issues of step 5 in the universal approach. Finally, we explain some case examples from beginning to end of the entire process and help the reader get started on some other common measurement problems (Chapter 14).
Again, each chapter builds on earlier chapters, especially once we get to Part II of the book. The reader might decide to skim later chapters, say, after Chapter 9, or to read them in a different order, but skipping earlier chapters would cause some problems. This applies even to the next two chapters (2 and 3) because, even though they wax somewhat more philosophical, they are important foundations for the rest of the material.
The details might sometimes get complicated, but this is much less complicated than many other initiatives organizations routinely commit to. I know because I’ve helped many organizations apply these methods to really complicated problems: allocating venture capital, reducing poverty and hunger, prioritizing technology projects, measuring training effectiveness, improving homeland security, and more. In fact, humans possess a basic instinct to measure, yet this instinct is suppressed in an environment that emphasizes committees and consensus over making basic observations. It simply won’t occur to many managers that an “intangible” can be measured with simple, cleverly designed observations.
Again, measurements that are useful are often much simpler than people first suspect. I make this point in the next chapter by showing how three clever individuals measured things that were previously thought to be difficult or impossible to measure. Viewing the world as these individuals do—through “calibrated” eyes that see things in a quantitative light—has been a historical force propelling both science and economic productivity. If you are prepared to rethink some assumptions and can put in the effort to work through this material, you will see through calibrated eyes as well.
Success is a function of persistence and doggedness and the willingness to work hard for twenty-two minutes to make sense of something that most people would give up on after thirty seconds.
—Malcolm Gladwell
