Inside the Crystal Ball

Maury Harris
Description

A practical guide to understanding economic forecasts

In Inside the Crystal Ball: How to Make and Use Forecasts, UBS Chief U.S. Economist Maury Harris helps readers improve their own forecasting abilities by examining the elements and processes that characterize successful and failed forecasts. The book:

  • Provides insights from Maury Harris, named among Bloomberg's 50 Most Influential People in Global Finance.
  • Demonstrates "best practices" in the assembly and evaluation of forecasts. Harris walks readers through the real-life steps he and other successful forecasters take in preparing their projections. These valuable procedures can help forecast users evaluate forecasts and forecasters as inputs for making their own specific business and investment decisions.
  • Emphasizes the critical role of judgment in improving projections derived from purely statistical methodologies. Harris explores the prerequisites for sound forecasting judgment—a good sense of history and an understanding of contemporary theoretical frameworks—in readable and illuminating detail.
  • Addresses everyday forecasting issues, including the credibility of government statistics and analyses, fickle consumers, and volatile business spirits. Harris also offers procedural guidelines for special circumstances, such as natural disasters, terrorist threats, gyrating oil and stock prices, and international economic crises.
  • Evaluates major contemporary forecasting issues—including the now commonplace hypothesis of sustained economic sluggishness, possible inflation outcomes in an environment of falling unemployment, and projecting interest rates when central banks implement unprecedented low interest rate and quantitative easing (QE) policies.
  • Brings to life Harris's own experiences and those of other leading economists in his almost four-decade career as a professional economist and forecaster. Dr. Harris presents his personal recipes for long-term credibility and commercial success to anyone offering advice about the future.

The e-book can be read in Legimi apps or any app that supports the following formats:

EPUB
MOBI

Page count: 530

Publication year: 2014




Table of Contents

Title Page

Copyright

Acknowledgments

Introduction: What You Need to Know about Forecasting

Notes

Chapter 1: What Makes a Successful Forecaster?

Grading Forecasters: How Many Pass?

Why It's So Difficult to Be Prescient

Bad Forecasters: One-Hit Wonders, Perennial Outliers, and Copycats

Success Factors: Why Some Forecasters Excel

Does Experience Make Much of a Difference in Forecasting?

Notes

Chapter 2: The Art and Science of Making and Using Forecasts

Judgment Counts More Than Math

Habits of Successful Forecasters: How to Cultivate Them

Judging and Scoring Forecasts by Statistics

Notes

Chapter 3: What Can We Learn from History?

It's Never Normal

Some Key Characteristics of Business Cycles

National versus State Business Cycles: Does a Rising Tide Lift All Boats?

U.S. Monetary Policy and the Great Depression

The Great Inflation Is Hard to Forget

The Great Moderation: Why It's Still Relevant

Why Was There Reduced Growth Volatility during the Great Moderation?

Notes

Chapter 4: When Forecasters Get It Wrong

The Granddaddy of Forecasting Debacles: The Great Depression

The Great Recession: Grandchild of the Granddaddy

The Great Recession: Lessons Learned

The Productivity Miracle and the “New Economy”

Productivity: Lessons Learned

Y2K: The Disaster That Wasn't

The Tech Crash Was Not Okay

Forecasters at Cyclical Turning Points: How to Evaluate Them

Forecasting Recessions

Forecasting Recessions: Lessons Learned

Notes

Chapter 5: Can We Believe What Washington, D.C. Tells Us?

Does the U.S. Government “Cook the Books” on Economic Data Reports?

To What Extent Are Government Forecasts Politically Motivated?

Can You Trust the Government's Analyses of Its Policies' Benefits?

The Beltway's Multiplier Mania

Multiplier Effects: How Real Are They?

Why Government Statistics Keep “Changing Their Mind”

Living with Revisions

Notes

Chapter 6: Four Gurus of Economics: Whom to Follow?

Four Competing Schools of Economic Thought

Minskyites: Should We Keep Listening to Them?

Monetarists: Do They Deserve More Respect?

Supply-Siders: Still a Role to Play?

Keynesians: Are They Just Too Old-Fashioned?

Notes

Chapter 7: The “New Normal”: Time to Curb Your Enthusiasm?

Must Forecasters Restrain Multiyear U.S. Growth Assumptions?

Supply-Side Forecasting: Labor, Capital, and Productivity

Are Demographics Destiny?

Pivotal Productivity Projections

Notes

Chapter 8: Animal Spirits: The Intangibles behind Business Spending

Animal Spirits on Main Street and Wall Street

Can We Base Forecasts on Confidence Indexes?

Business Confidence and Inventory Building

How Do Animal Spirits Relate to Job Creation?

Confidence and Capital Spending: Do They Move in Tandem?

Animal Spirits and Capital Spending

Notes

Chapter 9: Forecasting Fickle Consumers

Making and Spending Money

How Do Americans Make Their Money?

Will We Ever Start to Save More Money?

Why Don't Americans Save More?

More Wealth = Less Saving

Do More Confident Consumers Save Less and Spend More?

Does Income Distribution Make a Difference for Saving and Consumer Spending?

Pent-Up Demand and Household Formation

Notes

Chapter 10: What Will It Cost to Live in the Future?

Whose Prices Are You Forecasting?

Humans Cannot Live on Just Core Goods and Services

Sound Judgment Trumps Complexity in Forecasting Inflation

Should We Forecast Inflation by Money Supply or Phillips Curve?

Hitting Professor Phillips' Curve

A Statistical Lesson from Reviewing Phillips Curve Research

Notes

Chapter 11: Interest Rates: Forecasters' Toughest Challenge

Figuring the Fed

Federal Open Market Committee

What Is the Fed's “Reaction Function”?

Is the Fed “Behind the Curve”?

Can the Fed “Talk Down” Interest Rates?

Bond Yields: How Reliable Are “Rules of Thumb”?

Professor Bernanke's Expectations-Oriented Explanation of Long-Term Interest Rate Determinants

Supply and Demand Models of Interest Rate Determination

When Will OPEC, Japan, and China Stop Buying Our Bonds?

What Will Be the Legacy of QE for Interest Rates?

What Is the Effect of Fed MBS Purchases on Mortgage Rates?

Will Projected Future Budget Deficits Raise Interest Rates?

Notes

Chapter 12: Forecasting in Troubled Times

Natural Disasters: The Economic Cons and Pros

How to Respond to a Terrorist Attack

Why Oil Price Shocks Don't Shock So Much

Market Crashes: Why Investors Don't Jump from Buildings Anymore

Contagion Effects: When China Catches Cold, Will the United States Sneeze?

Notes

Chapter 13: How to Survive and Thrive in Forecasting

Surviving: What to Do When Wrong

Hold or Fold?

Thriving: Ten Keys to a Successful Career

Notes

About the Author

Index

End User License Agreement

Guide

Cover

Table of Contents

Introduction: What You Need to Know about Forecasting

Begin Reading

List of Illustrations

Figure 1.1

Figure 1.2

Figure 1.3

Figure 2.1

Figure 3.1

Figure 4.1

Figure 4.2

Figure 4.3

Figure 4.4

Figure 4.5

Figure 4.6

Figure 5.1

Figure 6.1

Figure 6.2

Figure 6.3

Figure 6.4

Figure 6.5

Figure 6.6

Figure 6.7

Figure 6.8

Figure 6.9

Figure 6.10

Figure 6.11

Figure 7.1

Figure 7.2

Figure 7.3

Figure 7.4

Figure 7.5

Figure 7.6

Figure 7.7

Figure 7.8

Figure 7.9

Figure 7.10

Figure 7.11

Figure 7.12

Figure 8.1

Figure 8.2

Figure 8.3

Figure 8.4

Figure 8.5

Figure 8.6

Figure 8.7

Figure 8.8

Figure 8.9

Figure 8.10

Figure 8.11

Figure 8.12

Figure 8.13

Figure 8.14

Figure 8.15

Figure 8.16

Figure 8.17

Figure 8.18

Figure 8.19

Figure 9.1

Figure 9.2

Figure 9.3

Figure 9.4

Figure 9.5

Figure 9.6

Figure 9.7

Figure 9.8

Figure 9.9

Figure 9.10

Figure 9.11

Figure 9.12

Figure 9.13

Figure 9.14

Figure 10.1

Figure 10.2

Figure 10.3

Figure 10.4

Figure 10.5

Figure 10.6

Figure 10.7

Figure 10.8

Figure 10.9

Figure 10.10

Figure 10.11

Figure 10.12

Figure 11.1

Figure 11.2

Figure 11.3

Figure 11.4

Figure 11.5

Figure 11.6

Figure 11.7

Figure 11.8

Figure 11.9

Figure 12.1

Figure 12.2

Figure 12.3

Figure 12.4

Figure 12.5

Figure 12.6

Figure 12.7

Figure 12.8

Figure 12.9

Figure 12.10

Figure 12.11

Figure 12.12

Figure 12.13

Figure 13.1

Figure 13.2

List of Tables

Table 1.1

Table 1.2

Table 1.3

Table 1.4

Table 1.5

Table 2.1

Table 2.2

Table 2.3

Table 2.4

Table 2.5

Table 2.6

Table 2.7

Table 2.8

Table 3.1

Table 3.2

Table 3.3

Table 3.4

Table 3.5

Table 3.6

Table 4.1

Table 4.2

Table 4.3

Table 4.4

Table 4.5

Table 4.6

Table 4.7

Table 4.8

Table 4.9

Table 5.1

Table 5.2

Table 5.3

Table 5.4

Table 5.5

Table 5.6

Table 5.7

Table 5.8

Table 5.9

Table 5.10

Table 5.11

Table 6.1

Table 7.1

Table 7.2

Table 7.3

Table 7.4

Table 7.5

Table 8.1

Table 8.2

Table 8.3

Table 8.4

Table 8.5

Table 8.6

Table 9.1

Table 9.2

Table 9.3

Table 10.1

Table 11.1

Table 11.2

Table 11.3

Table 11.4

Table 11.5

Table 12.1

Table 12.2

Table 12.3

Table 12.4

Table 12.5

Table 12.6

Table 12.7

Inside the Crystal Ball

How to Make and Use Forecasts

Maury Harris

Cover Image: © iStock.com/wragg

Cover Design: Wiley

Copyright © 2015 by Maury Harris. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey.

Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

The views and opinions expressed in this material are those of the author and are not those of UBS AG, its subsidiaries or affiliate companies (“UBS”). Accordingly, UBS does not accept any liability over the content of this material or any claims, losses or damages arising from the use or reliance on all or any part thereof.

UBS materials on pages 226, 227, 235, 306 and 336, © UBS 2014. Reproduced with permission.

The above mentioned UBS material has no regard to the specific investment objectives, financial situation or particular needs of any specific recipient and is published solely for informational purposes. No representation or warranty, either express or implied, is provided in relation to the accuracy, completeness or reliability of the information contained therein, nor is it intended to be a complete statement or summary of the securities markets or developments referred to in the UBS material. Any opinions expressed in the UBS material are subject to change without notice and may differ or be contrary to opinions expressed by other business areas or groups of UBS as a result of using different assumptions and criteria. UBS is under no obligation to update or keep current the information contained therein. Neither UBS AG nor any of its affiliates, directors, employees or agents accepts any liability for any loss or damage arising out of the use of all or any part of the UBS material.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993, or fax (317) 572-4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Cataloging-in-Publication Data

Harris, Maury.

Inside the crystal ball : how to make and use forecasts / Maury Harris.

pages cm

Includes index.

ISBN 978-1-118-86507-1 (cloth) – ISBN 978-1-118-86517-0 (ePDF) – ISBN 978-1-118-86510-1 (ePub)

1. Economic forecasting. 2. Business cycles. 3. Forecasting. I. Title.

HB3730.H319 2015

330.01′12—dc23

2014027847

Acknowledgments

My long and rewarding career in forecasting owes much to the consistent support and intellectual stimulation provided by my colleagues at the Federal Reserve Bank of New York, the Bank for International Settlements, PaineWebber, and UBS. Senior research management at those institutions rewarded me when I was right and were understanding at times when I was not so right. My colleagues over the years have been a source of inspiration, stimulation, criticism, and encouragement.

Special thanks are addressed to my professional investment clients at PaineWebber and UBS. Thoughtful and challenging questions from them have played a key role in my forming a commercially viable research agenda. Their financial support of my various economics teams via institutional brokerage commissions has always been much appreciated and never taken for granted in the highly competitive marketplace in which economic forecasters practice their trade.

For this book, the efforts on my behalf by my agent Jeffrey Krames, who led me to John Wiley & Sons, were essential. At Wiley, the editorial and publications support provided by Judy Howarth, Tula Batanchiev, Evan Burton, and Steven Kyritz were extremely helpful. And the guidance provided by my editorial consultant Tom Wynbrandt has been absolutely superb, as was the tech savvy contributed by Charles Harris. Also, thanks are due to Leigh Curry, Tom Doerflinger, Samuel Coffin, Drew Matus, Sheeba Joy, Lisa Harris Millhauser, and Greg Millhauser, who reviewed various chapters.

Most importantly, it would have been impossible for me to complete this project without the steady support, encouragement, and editorial acumen provided by Laurie Levin Harris, my wife of 44 years. The year of weekends and weekday nights spent on this book subtracted from quality time we could have spent together. I will always be most grateful for her unwavering confidence in me and her creation of a stimulating home environment essential for my professional accomplishments and those of our two children, Lisa Harris Millhauser and Charles.

Introduction

What You Need to Know about Forecasting

Everybody forecasts—it is an essential part of our lives. Predicting future outcomes is critical for success in everything from investing to careers to marriage. No one always makes the right choices, but we all strive to come close. This book shows you how to improve your decision-making by understanding how and why forecasters succeed—and sometimes fail—in their efforts. We're all familiar with economists' supposed ineptitude as prognosticators, but those who have been successful have lessons to teach us all.

I have been fortunate to have had a long and successful career in the field of economic forecasting, first at the Federal Reserve Bank of New York and the Bank for International Settlements, and then, for the majority of my working life, on Wall Street. Often I am asked about so-called tricks of the trade, of which there are many. People want to know my strategies and tactics for assembling effective forecasts and for convincing clients to trust me, even though no one's forecasts, including my own, are right all of the time. But most often, people ask me to tell them what they need to know in simple and accessible language. They want actionable information without having to wade through dense math, mounds of complicated data, or “inside-baseball” verbiage.

With that need in mind, Inside the Crystal Ball aims to help improve anyone's ability to forecast. It's designed to increase every reader's ability to make and communicate advice about the future to clients, bosses, colleagues, and anyone else whom we need to convince or whom we want to retain as a loyal listener. As such, this book shows you how to evaluate advice about the future more effectively. Its focus on the nonmathematical, judgmental element of forecasting is an ideal practitioners' supplement to standard statistical forecasting texts.

Forecasting in the worlds of business, marketing, and finance often hinges on assumptions about the U.S. economy and U.S. interest rates. Successful business forecasters, therefore, must have a solid understanding of the way the U.S. economy works. And as economic forecasts are a critical input for just about all others, delving deeper into this discipline can improve the quality of predictions in fields such as business planning, marketing, finance, and investments.

In U.S. universities, economics courses have long been among the most popular electives. There is, however, an inevitable division of labor between academicians, who advance theoretical and empirical economic research, and practitioners, who must turn that research into usable predictions.

My professional experience incorporates some of the most significant economic events of the past 40 years. I've “been there, done that” in good times and in bad, in stable environments and in volatile ones. One of the most valuable lessons I learned is that there is no substitute for real-world experience. Experience gives one the ability to address recurring forecasting problems and a history to draw on in making new predictions. And although practice does not make perfect, experienced forecasters generally have more accurate forecasting records than their less seasoned colleagues.

In my career, I have witnessed many forecasting victories and blunders, each of which had a huge impact on the U.S. economy. Every decade saw its own particular conditions—its own forecasting challenges. These events provide more than historical anecdotes: They offer fundamental lessons in forecasting.

At the start of my career as a Wall Street forecaster, I struggled, but I became much better over time. According to a study of interest rate forecasters published by the Wall Street Journal in 1993, I ranked second in accuracy among 34 bond-rate forecasters for the decade of the 1980s.1 MarketWatch, in 2004, 2006, and again in 2008, ranked me and my colleague James O'Sullivan as the most accurate forecasters of week-ahead economic data. In the autumn of 2011, Bloomberg News cited my team at UBS as the most accurate forecasters across a broad range of economic data over a two-year period.2 Earning these accolades has been a long and exciting journey.

When I first peered into the crystal ball of forecasting, I found cracks. I had joined the forecasting team in the Business Conditions Division at the Federal Reserve Bank of New York in 1973—just in time to be an eyewitness to what was then the worst recession since the Great Depression. As the team's rookie, I did not get to choose my assignment, and I was handed the most difficult economic variable to forecast: inventories. It was a trial by fire as I struggled to build models of the most slippery of economic statistics. But it turned out to be a truly great learning experience. Mastering the mechanics of the business cycle is one of the most important steps in forecasting it—in any economy.

A key lesson to be learned from the failures of past forecasters is to avoid being a general fighting the last war. Fed officials were so chastened by their failure to foresee the severity of the 1973–1975 recession and the associated postwar high in the unemployment rate that they determined to do whatever was necessary not to repeat that mistake. But in seeking to avoid it, they allowed real (inflation-adjusted) interest rates to stay too low for too long, thus opening the door to runaway inflation. My ringside seat to this second forecasting fiasco of the 1970s taught me that past mistakes can definitely distort one's view of the future.

By the 1980s, economists knew that the interest-rate fever in the bond market would break when rates rose enough to whack inflation. But hardly anyone knew the “magic rate” at which that would occur. With both interest rates and inflation well above past postwar experience, history was not very helpful. That is, unless the forecaster could start to understand the likely analytics of a high inflation economy—a topic to be discussed in later chapters.

The 1990s started with a credit crunch, which again caught the Fed off guard. A group of U.S. senators, pressed by credit-starved constituents, urged then–Fed Chair Alan Greenspan to recognize, belatedly, just how restrictive credit had become.3,4 That episode taught forecasters how to evaluate the Fed's quarterly Senior Loan Officer Opinion Survey more astutely. Today the Survey remains an underappreciated leading indicator, as we discuss in Chapter 9.

The economy improved as the decade progressed. In fact, growth became so strong that many economists wanted the Fed to tighten monetary policy to head off the possibility of higher inflation in the future. In the ensuing debate about the economy's so-called speed limit, a key issue was productivity growth. Fed Chair Greenspan this time correctly foresaw that a faster pace of technological change and innovation was enhancing productivity growth, even if the government's own statisticians had difficulty capturing it in their official measurements. Out of this episode came some important lessons on what to do when the measurement of a critical causal variable is in question.

A forecasting success story for most economists was to resist becoming involved in the public's angst over Y2K: the fearful anticipation that on January 1, 2000, the world's computers, programmed with two-digit dates, would not be able to understand that we were in a new century and would no longer function. Throughout 1999, in fact, pundits issued ever more dire warnings that, because of this danger, the global economy could grind to a halt even before the New Year's bells stopped ringing. Most economic forecasters, though, better understood the adaptability of businesses to such an unusual challenge. We revisit this experience later, to draw lessons on seeing through media hype and maintaining a rational perspective on what really makes businesses adapt.

Forecasters did not do well in anticipating the mild recession that began in 2001. The tech boom, which helped fuel growth at the end of the previous decade and made Alan Greenspan appear very astute in his predictions on productivity, also set the stage for a capital expenditure (capex) recession. Most economists became so enthralled with the productivity benefits of the tech boom that they lost sight of the inevitable negative consequences of overinvestment in initially very productive fields.

Perhaps the largest of all forecasting blunders was the failure to foresee the U.S. home price collapse that began in 2007. It set into motion forces culminating in the worst recession since the Great Depression—the Great Recession. Such an error merits further consideration in Chapter 4, focusing on specific episodes in which forecasters failed.

By now, it should be clear that experience counts—both for the historical perspective it confers and for having addressed repetitive problems, successfully, over a number of decades. In reading this book, you will live my four decades of experience and learn to apply my hard-learned lessons to your own forecasting.

The book begins by assessing why some forecasters are more reliable than others. I then present my approach to both the statistical and judgmental aspects of forecasting. Subsequent chapters are focused on some long-standing forecasting challenges (e.g., reliance on government information, shifting business “animal spirits,” and fickle consumers) as well as some newer ones (e.g., new normal, disinflation, and terrorism). The book concludes with guidance, drawn from my own experience, on how to have a successful career in forecasting. Throughout this volume, I aim to illustrate how successful forecasting is more about honing qualitative judgment than about proficiency in pure quantitative analysis—mathematics and statistics. In other words, forecasting is for all of us, not just the geeks.

Notes

1. Tom Herman, “How to Profit from Economists' Forecasts,” Wall Street Journal, January 22, 1993.

2. Timothy R. Homan, “The World's Top Forecasters,” Bloomberg Markets, January 2012.

3. Alan Murray, “Greenspan Met with GOP Senators to Hear Concerns About Credit Crunch,” Wall Street Journal, July 11, 1990.

4. Paul Duke Jr., “Greenspan Says Fed Poised to Ease Rates Amid Signs of a Credit Crunch,” Wall Street Journal, July 13, 1990.

Chapter 1: What Makes a Successful Forecaster?

It's tough to make predictions, especially about the future.

—Yogi Berra

It was an embarrassing day for the forecasting profession: Wall Street's “crystal balls” were on display, and almost all of them were busted. A front-page article in the Wall Street Journal on January 22, 1993, told the story. It reported that during the previous decade, only 5 of 34 frequent forecasters had been right more than half of the time in predicting the direction of long-term bond yields over the next six months.1 I was among those five seers who were the exception to the article's smug conclusion that a simple flip of the coin would have outperformed the interest-rate forecasts of Wall Street's best-known economists. Portfolio manager Robert Beckwitt of Fidelity Investments, who compiled and evaluated the data for the Wall Street Journal, had this to say about rate forecasters: “I wouldn't want to have that job—and I'm glad I don't have it.”

Were the industry's top economists poor practitioners of the art and science of economic forecasting? Or were their disappointing performances simply indicative of how hard it is for anyone to forecast interest rates? I would argue the latter. Indeed, in a nationally televised 2012 ad campaign for Ally Bank, the Nobel Prize-winning economist Thomas Sargent was asked if he could tell what certificate of deposit (CD) rates would be two years hence. His simple response was “no.”2

Economists' forecasting lapses are often pounced on by critics who seek to discredit the profession overall. The larger questions, however, are what makes the job so challenging and how we can surmount those obstacles. In this chapter, I explain just why it is so difficult to forecast the U.S. economy. None of us can avoid difficult decisions about the future. However, we can arm ourselves with the knowledge and tools that help us make the best possible business and investment choices. That is what this book is designed to do.

Grading Forecasters: How Many Pass?

If we look at studies of forecast accuracy, we see that economic forecasters have one of the toughest assignments in the academic or workplace world. These studies should remind us how difficult the job is; they shouldn't reinforce a poor opinion of forecasters. If we review the research carefully, we'll see that there's much to learn, both from what works and from what hinders success.

Economists at the Federal Reserve Bank of Cleveland studied the 1983 to 2005 performance of about 75 professional forecasters who participated in the Federal Reserve Bank of Philadelphia's Livingston forecaster survey.3 We examine their year-ahead forecasts of growth rates for real (inflation-adjusted) gross domestic product (GDP) and the consumer price index (CPI). (See Table 1.1.)

Table 1.1 Accuracy of the Year-Ahead Median Economists' Forecasts, 1983–2005

Grade*   Proportion of Forecasts within…   GDP Growth   CPI Inflation
A        0.5 percentage point              30.4%        39.1%
B        0.5–1 percentage point            21.7%        30.4%
C        1–1.5 percentage points           17.4%        21.7%
D        1.5–2 percentage points            8.7%         8.7%
E        2–2.5 percentage points           13.0%         0.0%
F        2.5–3 percentage points            8.7%         0.0%

* Assigned by the author.

Source: Michael F. Bryan and Linsey Molloy, “Mirror, Mirror, Who's the Best Forecaster of Them All?” Federal Reserve Bank of Cleveland, Economic Commentary, March 15, 2007.

If being very accurate is judged as being within half a percentage point of the actual outcome, only around 30 percent of GDP growth forecasts met this test. By the same grading criteria, approximately 39 percent were very accurate in projecting year-ahead CPI inflation. We give these forecasters an “A.” If we award “Bs” for being between one-half and one percentage point of reality, that grade was earned by almost 22 percent of the GDP growth forecasts and just over 30 percent of the CPI inflation projections. Thus, only around half the surveyed forecasters earned the top two grades for their year-ahead real GDP growth outlooks, although almost 7 in 10 earned those grades for their predictions of CPI inflation. (We should note that CPI is less volatile—and thus easier to predict—than real GDP growth.)

Is our grading too tough? Probably not. Consider that real GDP growth over 1983 to 2005 averaged 3.4 percent a year. A half-percentage-point miss was thus plus or minus 15 percent of reality, and misses between one-half and one percentage point could be off by as much as 29 percent. For a business, sales forecast misses of 25 percent or more are likely to be viewed as problematic.
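The arithmetic behind these grades is simple enough to verify. A brief sketch, using only the 3.4 percent average growth figure cited above:

```python
# Express an absolute forecast miss as a share of the actual outcome.
# The 3.4 percent figure is the 1983-2005 average annual real GDP
# growth rate cited in the text.

def relative_miss(miss_pct_points, actual_growth):
    """Return a forecast miss as a fraction of the actual outcome."""
    return miss_pct_points / actual_growth

avg_growth = 3.4  # average annual real GDP growth, 1983-2005

# An "A"-grade miss of 0.5 percentage point is about 15% of reality...
print(round(relative_miss(0.5, avg_growth) * 100))  # 15
# ...while a full 1.0-point miss is about 29% of reality.
print(round(relative_miss(1.0, avg_growth) * 100))  # 29
```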

With that in mind, our “Cs” go to the just more than 17 percent of growth forecasts that missed actual growth by between 1 and 1.5 percentage points, and to the roughly 22 percent of inflation forecasts that missed by the same amounts. The remaining 30 percent of GDP growth forecasts fell below our C grade, but their makers did not necessarily flunk out. The job security of professional economists depends on more than their forecasting prowess—a point that we discuss later.

The CPI inflation part of the test, as we have seen, was not quite as difficult. Over 1983 to 2005, the CPI rose at a 3.1 percent annual rate. Thirty-nine percent of the forecasts were within half a percentage point of reality—a miss of as much as 16 percent. Another 30 percent earned a B, with misses between 0.5 and 1 percentage point, or within 16 to 32 percent of reality. Still, 30 percent of the forecasters did no better than a C.

In forecasting, as in investments, one good year hardly guarantees success in the next. (See Table 1.2.) According to the study, a forecaster who outperformed the median real GDP forecast in one year had only about a 49 percent chance of doing so again the next year. After four straight successes, the chance of outperforming a fifth time fell to 28 percent. For CPI inflation forecasts, the probability of repeating after a single success was 47 percent, and after four consecutive successes it was 35 percent.

Table 1.2 Probability of Repeating as a Good Forecaster

GDP GROWTH

Probability of Remaining Better Than
the Median Forecast after…            Observed (%)   Expected* (%)
One success                           48.7           49.4
Two successes                         44.4           49.4
Three successes                       38.7           48.6
Four successes                        27.6           48.9

INFLATION

Probability of Remaining Better Than
the Median Forecast after…            Observed (%)   Expected* (%)
One success                           46.8           49.3
Two successes                         43.5           49.0
Three successes                       45.9           48.7
Four successes                        35.3           48.6

* Proportion expected assuming random chance.

Source: Michael F. Bryan and Linsey Molloy, “Mirror, Mirror, Who's the Best Forecaster of Them All?” Federal Reserve Bank of Cleveland, Economic Commentary, March 15, 2007.
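The near-50-percent “Expected” benchmark in Table 1.2 can be reproduced with a short simulation: if beating the median were pure luck, the chance of beating it again after any streak of successes would hover around one-half. The sketch below is illustrative only and not from the study; the forecaster count and horizon are arbitrary.

```python
import random

random.seed(0)

# Simulate coin-flip forecasters: each year, a 50/50 chance of beating
# the median. Then measure P(success next year | streak of k successes).
# Under pure chance this conditional probability stays near 50 percent.
N_FORECASTERS, N_YEARS = 20_000, 20
runs = [[random.random() < 0.5 for _ in range(N_YEARS)]
        for _ in range(N_FORECASTERS)]

for k in range(1, 5):  # after one, two, three, four successes
    streaks = continued = 0
    for r in runs:
        for t in range(N_YEARS - k):
            if all(r[t:t + k]):        # k successes in a row
                streaks += 1
                continued += r[t + k]  # did the streak continue?
    print(f"after {k} success(es): {continued / streaks:.3f}")  # near 0.500
```

The observed repeat rates in Table 1.2 sit below this chance benchmark, which is what makes the study's finding notable.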

Similar results have been reported by Laster, Bennett, and In Sun Geoum in a study of the accuracy of real GDP forecasts by economists polled in the Blue Chip Economic Indicators—a widely followed survey of professional forecasters.4 In the 1977 to 1986 period, which included what was until then the deepest postwar recession, only 4 of 38 forecasters beat the consensus. However, in the subsequent 1987 to 1995 period, which included just one mild recession, 10 of 38 forecasters outperformed the consensus. Interestingly, none of the forecasters who outperformed the consensus in the first period were able to do so in the second!

Perhaps even more important than accurately forecasting economic growth rates is the ability to forecast “yes” or “no” on the likelihood of a major event, such as a recession. The Great Recession of 2008 to 2009 officially began in the United States in January of 2008. By then, the unemployment rate had risen from 4.4 percent in May of 2007 to 5.0 percent in December, and economists polled by the Wall Street Journal in January foresaw, on average, a 42 percent chance of recession. (See Figure 1.1.) Three months earlier, the consensus probability had been 34 percent. And it wasn't until we were three months into the recession that the consensus assessed its probability at more than 50 percent.

Figure 1.1 Unemployment and Consensus Recession Probabilities Heading into the Great Recession of 2008–2009

Source: Bureau of Labor Statistics, The Wall Street Journal.
Note: Shaded area represents the recession.

The story was much the same in the United Kingdom (UK). By June of 2008, the recession there had already begun. Despite this, none of the two dozen economists polled by Reuters at the time believed a recession would occur at any point in 2008 or 2009.5

In some instances, judging forecasters by how close they came to a target might be an unnecessarily stringent test. In the bond market, for example, just getting the future direction of rates correct is important for investors; but that can be a tall order, especially in volatile market conditions. Also, those who forecast business condition variables, such as GDP, can await numerous data revisions (to be discussed in Chapter 5) to see if the updated information is closer to their forecasts. Interest-rate outcomes, however, are not revised, thereby denying rate forecasters the opportunity to be bailed out by revised statistics. Let's grade interest rate forecasters, therefore, on a pass/fail basis, where just getting the future direction of rates correct is enough to pass.

Yet even on a pass/fail test, most forecasters have had trouble getting by. As earlier noted, only 5 of the 34 economists participating in 10 or more of the semiannual surveys of bond rates were directionally right more than half the time. And of those five forecasters, only two—Carol Leisenring of Core States Financial Group and I—made forecasts that, if followed, would have outperformed a simple buy-and-hold strategy employing intermediate-term bonds during the forecast periods. According to calculations discussed in the article, “buying and holding a basket of intermediate-term Treasury bonds would have produced an average annual return of 12.5 percent—or 3.7 percentage points more than betting on the consensus.”6

In their study of forecasters' performance in predicting interest rates and exchange rates six months ahead, Mitchell and Pearce found that barely more than half (52.4 percent) of Treasury bill rate forecasts got the direction right. (See Table 1.3.) Slightly less than half (46.4 percent) of the yen/dollar forecasts were directionally correct. And only around a third of the Treasury bond yield forecasts correctly predicted whether the 30-year Treasury bond yield would be higher or lower six months later.

Table 1.3 Percentages of 33 Economists' Six-Month-Ahead Directional Interest Rate and Exchange Rate Forecasts That Were Correct

Forecast Variable     Average (%)   Top Forecaster (%)   Worst (%)   Period
Treasury bill rate    52.4          65.2                 23.8        1982–2002
Treasury bond yield   33.3          65.2                 26.9        1982–2002
Yen/dollar            46.4          66.7                 38.5        1989–2002

Source: Karlyn Mitchell and Douglas K. Pearce, “Professional Forecasts of Interest Rates and Exchange Rates: Evidence from the Wall Street Journal's Panel of Economists,” North Carolina State University Working Paper 004, March 2005.

Although it is easy to poke fun at the forecasting prowess of economists as a group, it is more important to note that some forecasters do a much better job than others. Indeed, the best forecasters of Treasury bill and Treasury bond yields and the yen/dollar were right approximately two-thirds of the time.
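Directional scoring of the kind reported in Table 1.3 is straightforward to compute: compare each forecast's predicted direction of change with the realized direction and count the hits. The sketch below uses invented rate figures, not the study's data.

```python
def directional_hit_rate(current, forecasts, outcomes):
    """Share of forecasts that called the direction of change correctly.

    current:   level known when each forecast was made
    forecasts: predicted future levels
    outcomes:  realized future levels
    """
    hits = sum(
        (f > c) == (o > c)          # both up, or both not-up
        for c, f, o in zip(current, forecasts, outcomes)
    )
    return hits / len(forecasts)

# Hypothetical six-month-ahead T-bill rate calls (illustrative only).
current   = [5.0, 5.2, 4.8, 4.5]
forecasts = [5.3, 5.0, 5.1, 4.2]   # predicted levels
outcomes  = [5.4, 5.5, 4.6, 4.1]   # realized levels
print(directional_hit_rate(current, forecasts, outcomes))  # 0.5
```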

Some economic statistics are simply easier to forecast than others. Since big picture macroeconomic variables encompassing the entire U.S. economy often play a key role in marketing, business, and financial forecasting, it is important to know which macro variables are more reliably forecasted. As a rule, interest rates are more difficult to forecast than nonfinancial variables such as growth, unemployment, and inflation.

To see why, consider economists' track records in forecasting key economic statistics. Table 1.4 illustrates the relative difficulty of forecasting economic growth, inflation, unemployment, and interest rates. In this illustration, year-ahead forecast errors for these variables are compared with the errors of hypothetical, alternative, “naive straw man” projections: no-change forecasts for interest rates and the unemployment rate, and same-change (lagged-value) forecasts for CPI inflation and gross national product (GNP) growth. Displayed in the table are median ratios of surveyed forecasters' errors to the straw man's errors. For example, median errors in forecasting interest rates were 20 percent higher than those of simple no-change forecasts. Errors in forecasting unemployment and GNP growth were about the same for forecasters and their naive opponent. In the case of CPI inflation, however, the forecasters' errors were only around half as large as those of forecasts that simply assumed no change from previously reported inflation.

Table 1.4 Relative Year-Ahead Errors of Forecasters versus “Naive Straw Man”

Forecast Variable          Worst   Best    Median   % of Forecasts Beating Straw Man
Short-term interest rate   1.67%   0.95%   1.20%    92
Long-term interest rate    1.57    0.89    1.20     83
Unemployment rate          2.71    0.63    0.97     31
CPI inflation rate         1.11    0.38    0.54      3
GNP growth                 2.09    0.78    0.99     48

Note: Short-term and long-term interest rates and unemployment rates are relative to a hypothetical no-change straw man forecast. CPI and GNP growth rates are relative to a same-change straw man forecast.

Source: Twelve individual forecasters' interest rate forecasts, 1982–1991; other variables, 29 individual forecasts, 1986–1991, as published in the Wall Street Journal. Stephen K. McNees, “How Large Are Economic Forecast Errors?” New England Economic Review, July/August 1992.
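The straw-man comparison in Table 1.4 boils down to dividing a forecaster's mean absolute error by that of the naive benchmark; a ratio above 1 means the straw man won. The sketch below uses invented unemployment-rate figures purely for illustration.

```python
def mean_abs_error(forecasts, actuals):
    """Average magnitude of forecast misses, ignoring sign."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

def straw_man_ratio(forecasts, actuals, prior):
    """Forecaster MAE relative to a naive 'no change from prior' forecast.
    A ratio above 1 means the naive benchmark would have done better."""
    naive_mae = mean_abs_error(prior, actuals)  # prior level as the forecast
    return mean_abs_error(forecasts, actuals) / naive_mae

# Hypothetical year-ahead unemployment-rate forecasts (illustrative only).
prior     = [6.0, 5.5, 5.0, 5.2]   # rate known when each forecast was made
forecasts = [5.6, 5.2, 5.1, 5.4]
actuals   = [5.5, 5.0, 5.2, 5.6]
# Here the ratio comes out below 1: the forecasters beat the straw man.
print(straw_man_ratio(forecasts, actuals, prior))
```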

There are many more examples of forecaster track records, and we examine some of them in subsequent chapters. While critics use such studies to disparage economists' performances, it's much more constructive to use the information to improve your own forecasting prowess.

Why It's So Difficult to Be Prescient

Because so many intelligent, well-educated economists struggle to produce forecasts that are more often right than wrong, it should be clear that forecasting is difficult. Here are eight of the most important reasons:

It is hard to know where you are, so it is even more difficult to know where you are going.

The economy is subject to myriad influences. At each moment, a world of inputs exerts subtle shifts on its direction and strength. It can be difficult for economists to estimate where the national economy stands in the present, much less where it is headed. Like a ship's navigator in the pre-GPS era, the forecaster faces a difficult challenge simply in fixing a precise current position.

John Maynard Keynes—the father of Keynesian economics—taught that recessions need not automatically self-correct. Instead, turning the economy around requires reactive government fiscal policies—spending increases, tax cuts and at least temporary budget deficits. His “new economics” followers in the 1950s and 1960s took that conclusion a step further, claiming that recessions could be headed off by proactive, anticipatory countercyclical monetary and fiscal policies. But that approach assumed economists could foresee trouble down the road.

Not everyone agreed with Keynes' theories. Perhaps the most visible and influential objections were aired by University of Chicago economics professor Milton Friedman. In his classic address at the 1967 American Economic Association meeting, he argued against anticipatory macroeconomic stabilization policies.7 Why? “We simply do not know enough to be able to recognize minor disturbances when they occur or to be able to predict what their effects will be with any precision or what monetary policy is required to offset their effects,” he said.

Everyday professional practitioners of economics in the real world know the validity of Friedman's observation all too well. Consider, in Figure 1.2, real GDP growth forecasts for a quarter that were made in the third month of that quarter—when the quarter was almost over. In the current decade, such projections were 0.8 percentage point off from what was reported. (Note: This is judged by the mean absolute error—the magnitude of an error without regard to whether the forecast was too high or too low.) Moreover, these “last minute” projections were even farther off in earlier decades.
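The mean absolute error just mentioned is simply the average size of the misses, with signs ignored. A minimal sketch with invented quarterly growth numbers:

```python
def mean_absolute_error(forecasts, actuals):
    """Average magnitude of forecast misses, ignoring sign."""
    misses = [abs(f - a) for f, a in zip(forecasts, actuals)]
    return sum(misses) / len(misses)

# Hypothetical quarterly real GDP growth forecasts vs. reported figures.
forecasts = [2.5, 1.8, 3.0, 2.2]
actuals   = [3.3, 1.0, 3.4, 1.8]
# Misses of 0.8, 0.8, 0.4, and 0.4 points average out to 0.6.
print(mean_absolute_error(forecasts, actuals))
```

Note that a forecaster who is alternately too high and too low by 0.8 point has a zero average error but a 0.8-point mean absolute error, which is why the latter is the fairer yardstick.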

Figure 1.2 In the Final Month of a Quarter, Forecasters' Growth Forecasts for That Quarter Can Still Err Substantially

Source: Federal Reserve Bank of Philadelphia.

Moving forward, we discuss how the various economic “weather reports” can suggest winter and summer on the same day! Let's note, too, that some of the key indicators of tomorrow's business weather are subject to substantial revisions. At times it seems like there are no reliable witnesses, because they all change their testimony under oath. In later chapters we discuss how to address these challenges.

History does not always repeat or even rhyme.

Forecasters address the future largely by extrapolating from the past. Consequently, prognosticators can't help but be historians. And just as the signals on current events are frequently mixed and may be subject to revision, so, too, when discussing a business or an economy, are interpretations of prior events. In subsequent chapters, we discuss how to sift through history and judge what really happened—a key step in predicting, successfully, what will happen in the future.

The initially widely acclaimed book This Time Is Different: Eight Centuries of Financial Folly, by Carmen Reinhart and Kenneth Rogoff, provides a good example of the difficulties in interpreting history in order to give advice about the future.8 Published in 2009, the book and the authors' related research attracted attention from global policymakers with the conclusion that, since World War II, economic growth turned negative when the government debt/GDP ratio exceeded 90 percent. In 2013, other researchers discovered calculation errors in the authors' statistical summary of economic history. Looking for repetitive historical patterns can be tricky!

Statistical crosscurrents make it hard to find safe footing.

Even if the past and present are clear, divining the future remains challenging when potential causal variables (e.g., the money supply and the Federal purchases of goods and services) are headed in opposite directions. However, successful and influential forecasters must avoid being hapless “two-handed economists” (i.e., “on the one hand, but on the other hand”).

Moreover, one's statistical coursework at the college and graduate level does not necessarily solve the problem of what matters most when signals diverge. Yes, readily available multiple-regression software packages can crank out estimated regression (i.e., response) coefficients for independent causal variables. But, alas, even the more advanced statistical courses and textbooks have yet to satisfactorily surmount the multicollinearity problem. That problem arises when two highly correlated independent variables “compete” for historical credit in explaining the dependent variable to be forecast. As a professional forecaster, I have not solved this problem, but I have been coping with it almost daily for decades. As we proceed, you will find some helpful tips on dealing with this challenge.
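The multicollinearity problem is easy to demonstrate. In the sketch below (invented data, using NumPy), two regressors move almost in lockstep; their individual coefficients are poorly pinned down and can differ markedly across subsamples, even though their sum, which is what a combined forecast actually uses, stays well determined.

```python
import numpy as np

rng = np.random.default_rng(0)

# y depends on x1 + x2, but x2 is nearly a copy of x1 (highly collinear).
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)   # correlation close to 1
y = x1 + x2 + rng.normal(scale=0.5, size=n)

def fit(xa, xb, yy):
    """Least-squares fit of yy = b1*xa + b2*xb."""
    X = np.column_stack([xa, xb])
    coef, *_ = np.linalg.lstsq(X, yy, rcond=None)
    return coef

b_first = fit(x1[:100], x2[:100], y[:100])     # first half of sample
b_second = fit(x1[100:], x2[100:], y[100:])    # second half

# Individual coefficients are unstable between subsamples, but their
# sum (the true combined effect, 2.0) is estimated reliably in both.
print(b_first, b_first.sum())
print(b_second, b_second.sum())
```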

Behavioral sciences are inevitably limited.

There have been quantum leaps in the science of public opinion polling since the fiasco of 1948, when President Truman's reelection stunned pollsters. Nevertheless, there continue to be plenty of surprises (“upsets”) on election night. Are there innate limits to humans' ability to understand and predict the behavior of other humans? That was what the well-known conservative economist Henry Hazlitt observed in reaction to all of the hand wringing about “scientific polling” in the aftermath of the 1948 debacle. Writing in the November 22, 1948, issue of Newsweek, Hazlitt noted: “The economic future, like the political future, will be determined by future human behavior and decisions. That is why it is uncertain. And in spite of the enormous and constantly growing literature on business cycles, business forecasting will never, any more than opinion polls, become an exact science.”9

In other words, forecast success or failure can reflect “what we don't know that we don't know” (generalized uncertainty) more than “what we know” (risk).

The most important determinants may not be measurable.

Statistics are all about measurement. But what if you cannot measure what matters? Statisticians often approach this stumbling block with a dummy variable. It is assigned a zero or one in each examined historical period (year, quarter, month, or week) according to whether the statistician believes that the unmeasurable variable was active or dormant in that period. (For example, when explaining U.S. inflation history with a regression model, a dummy variable might be used to identify periods when there were price controls.) If the dummy variable in an estimated multiple regression equation achieves statistical significance, the statistician can then claim that it reflects the influence of the unmeasured, hypothesized causal factor.

The problem, though, is that a statistically significant dummy variable can be credited for anything that cannot be otherwise accounted for. The label attached to the dummy variable may not be a true causal factor useful in forecasting. In other words, there can be a naming contest for a dummy variable that is statistically sweeping up what other variables cannot explain. There are some common sense approaches to addressing this problem, and we discuss them later.
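The dummy-variable approach described above can be sketched in a few lines. The inflation series, the money-growth driver, and the “price controls” flag below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented inflation history: driven by money growth, plus a level shift
# in periods we choose to label "price controls" (the dummy equals 1).
n = 120
money_growth = rng.normal(5.0, 1.0, size=n)
controls = (np.arange(n) >= 40) & (np.arange(n) < 60)   # dummy variable
inflation = 0.8 * money_growth - 2.0 * controls + rng.normal(0, 0.3, n)

# Regress inflation on an intercept, money growth, and the dummy.
X = np.column_stack([np.ones(n), money_growth, controls.astype(float)])
coef, *_ = np.linalg.lstsq(X, inflation, rcond=None)
print(coef)  # roughly [0, 0.8, -2.0]

# Caveat from the text: the dummy "explains" the labeled periods whether
# or not price controls were truly the cause; any omitted factor active
# in those same periods would be swept into this coefficient.
```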

There can be conflicts between the goal of accuracy and the goal of pleasing a forecaster's everyday workplace environment.

Many of the most publicly visible and influential forecasters—especially securities analysts and investment bank economists—have job-related considerations that can influence their advice about the future. It is ironic that financial analysts and economists whose good work has earned them national recognition can find pressures at the top that complicate their ability to give good advice once the internal and external audience enlarges.

Many Wall Street economists, for instance, are employed by fixed-income or currency trading desks. Huge amounts of their firms' and their clients' money are positioned before key economic statistics are reported. This knowledge might understandably make a forecaster reluctant to go against the consensus. And, as we discuss shortly, there can be other work-related pressures not to go against the grain as well.

Are trading desks' economists' forecasts sometimes made to assist their employers' business?

It is hard, if not impossible, to gauge how much and how frequently forecasts are conditioned by an employer's business interests. However, it can be observed that certain types of behavior are consistent with the hypothesis that forecasts are being affected in this manner. For instance, the economist Takatoshi Ito at the University of Tokyo has authored research suggesting that foreign exchange rate projections are systematically biased toward scenarios that would benefit the forecaster's employer. He has attached the label “wishful expectations” to such forecasts.10

What is the effect of the sell-side working environment on stock analysts' performance?

In order to be successful, sell-side securities analysts at brokerage houses and investment banks must, in addition to performing their analytical research, spend time and effort marketing their research to their firms' clients. In buy-side organizations, such as pension funds, mutual funds, and hedge funds, analysts generally do not have these marketing responsibilities. Do the two different work environments make a difference in performance? The evidence is inconclusive.

For instance, one study funded by the Division of Research at the Harvard Business School examined the July 1997 to December 2004 period and reached the following conclusions: “Sell-side firm analysts make more optimistic and less accurate earnings forecasts than their buy-side counterparts. In addition, abnormal returns from investing in their Strong Buy/Buy recommendations are negative and under-perform comparable sell-side recommendations.”11

There is a wide range of performance results within the sell-side analyst universe. For example, one study concluded that sell-side securities analysts ranked well by buy-side users of sell-side research out-performed lesser ranked sell-side analysts.12 (Note: This study, which was sponsored by the William E. Simon Graduate School of Business Administration, reviewed performance results from 1991 to 2000.)

How does media exposure affect forecasters?

To see how the working environment can affect the quality of advice, look at Wall Street's emphasis on “instant analysis.” Wall Street economists often devote considerable time and care to preparing economic-indicator forecasts. Yet within seconds—literally, seconds—after data are released at the customary 8:30 a.m. time, economists are called on to determine the implications of an economic report and announce them to clients.

Investment banks and trading firms want their analysts to offer good advice. But they also want publicity. They're happy to offer their analysts to the cameras for the instant analysis prized by the media. The awareness that a huge national television audience is watching and will know if they err can be stressful to the generally studious and usually thorough persons often attracted to the field of economics. Keep this in mind when deciding whether the televised advice of an investment bank analyst is a useful input for decision making. (Note: Securities firms in the current, more regulation-conscious decade generally scrutinize analysts' published reports, which should make the reports more reliable than televised sound bites.)

Audiences may condition forecasters' perceptions of professional risks.

John Maynard Keynes famously said: “Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist.” Forecasters subconsciously or consciously risk becoming the slaves of their intended audience of colleagues, employers, and clients. In other words, seers often fret about the reaction of their audience, especially if their proffered advice is errant. How the forecaster frames these risks is known as the loss function.

In some situations, such pressures can be constructive. The first trader I met on my first day working as a Wall Street economist had this greeting: “I like bulls and I like bears but I don't like chickens.” The message was clear: No one wants to hear anything from a two-handed economist. That was constructive pressure for a young forecaster embarking on a career.

That said, audience pressures might not be so benign. Yet they are inescapable. The ability to deal with them in a field in which periodic costly errors are inevitable is the key to a long, successful career for anyone giving advice about the future.

Statistics courses are not enough. It takes both math and experience to succeed.

To be sure, many dedicated statistics educators are also scholars working to advance the science of statistics. However, teaching, with its attendant focus on academic research, inevitably leaves less time for building a considerable body of practical experience.

No amount of schooling could have prepared me for what I experienced during my first week as a Wall Street economist in 1980. Neither a PhD in economics from Columbia University nor a half-dozen years as an economist at the Federal Reserve Bank of New York and the Bank for International Settlements in Basel, Switzerland had given me the slightest clue as to how to handle my duties as PaineWebber's Chief Money Market Economist.

At the New York Fed, my ability to digest freshly released labor market statistics, and to write a report about them before the close of business, helped trigger an early promotion for me. But on PaineWebber's New York fixed-income trading floor, I was expected to digest and opine on those same very important monthly data no more than five minutes after they hit the tape at 8:30 a.m.

There were other surprises as well. In graduate school, for example, macroeconomics courses usually skipped national income accounting and measurement. These topics were regarded as simply descriptive and too elementary for a graduate level academic curriculum. Instead, courses focused on the mathematical properties of macroeconomic mechanics and econometrics as the arbiters of economic “truth.” On Wall Street, however, the ability to understand and explain the accounting that underlies any important government or company data report is key to earning credibility with a firm's professional investor clients. In graduate school we did study more advanced statistical techniques. But they were mainly applied to testing hypotheses and studying statistical economic history, not forecasting per se.

In short, when I first peered into my crystal ball, I was behind the eight ball! As in the game of pool, survival would depend on bank shots that combined skill, nerve, and good luck. Fortunately, experience pays: More seasoned forecasters generally do better. (See Figure 1.3. The methodology for calculating the illustrated forecaster scores is discussed in Chapter 2.)

Figure 1.3 More Experienced Forecasters Usually Fare Better

* Number of surveys in which forecaster participated.
Source: Andy Bauer, Robert A. Eisenbeis, Daniel F. Waggoner, and Tao Zha, “Forecast Evaluation with Cross-Sectional Data: The Blue Chip Survey,” Federal Reserve Bank of Atlanta, Second Quarter, 2003.

In summation, then, it is difficult to be prescient because:

Behavioral sciences are inevitably limited.

Interpreting current events and history is challenging.

Important causal factors may not be quantifiable.

Work environments and audiences can bias forecasts.

Experience counts more than statistical courses.

Bad Forecasters: One-Hit Wonders, Perennial Outliers, and Copycats

Some seers do much better than others in addressing the difficulties cited earlier. But what makes these individuals more accurate? The answer is critical for learning how to make better predictions and for selecting needed inputs from other forecasters. We first review some studies identifying characteristics of both successful and unsuccessful forecasters. That is followed in Chapter 2 by a discussion of my experience in striving for better forecasting accuracy throughout my career.

What Is “Success” in Forecasting?

A forecast is any statement regarding the future. With this broad definition in mind, there are several ways to evaluate success or failure. Statistics texts offer a number of conventional gauges for judging how close a forecaster comes to being right over a number of forecast periods. (See an explanation and examples of these measures in Chapter 2.) Sometimes, as in investing, where the direction of change matters more than its magnitude, success can be defined as being right more often than wrong. Another criterion is whether a forecaster is correct about outcomes for which the costs of being wrong and the benefits of being right are especially large (i.e., forecasting the “big one”).

Over a forecaster's career, success will be judged by all three criteria—accuracy, frequency of being correct, and the ability to forecast the big one. And, as we see, it's rare to be highly successful in addressing all of these challenges. The sometimes famous forecasters who nail the big one are often neither accurate nor even directionally correct most of the time. On the other hand, the most reliable forecasters are less likely to forecast rare and very important events.

One-Hit Wonders

Reputations often are based on an entrepreneur, marketer, or forecaster “being really right when it counted most.” Our society lauds and rewards such individuals. They may attain a guru status, with hordes of people seeking and following their advice after their “home run.” However, an impressive body of research suggests that these one-hit wonders are usually unreliable sources of advice and forecasts. In other words, they strike out a lot. There is much to learn about how to make and evaluate forecasts from this phenomenon.

In the decade since it was published in 2005, Philip E. Tetlock's book Expert Political Judgment—How Good Is It? How Can We Know? has become a classic in the development of standards for evaluating expert political judgment.13 In assessing predictions from experts in different fields, Tetlock draws important conclusions for successful business and economic forecasting and for selecting appropriate decision-making/forecasting inputs. For instance:

“Experts” successfully predicting rare events were often wrong both before and after their highly visible success. Tetlock reports that “When we pit experts against minimalist performance benchmarks—dilettantes, dart-throwing chimps, and assorted extrapolation algorithms—we find few signs that expertise translates into greater ability to make either ‘well-calibrated’ or ‘discriminating’ forecasts.”

The one-hit wonders can be like broken clocks. They were more likely than most forecasters to occasionally predict extreme events, but only because they made extreme forecasts more frequently.

Tetlock's “hedgehogs” (generally inaccurate forecasters who manage to correctly forecast some hard-to-forecast rare event) have a very different approach to reasoning than his more reliable “foxes.” For example, hedgehogs often used one big idea or theme to explain a variety of occurrences. However, “the more eclectic foxes knew many little things and were content to improvise ad hoc solutions to keep pace with a rapidly changing world.”

While hedgehogs are less reliable as forecasters, foxes may be less stimulating analysts. The former encourage out-of-the-box thinking. The latter are more likely to be less decisive, two-handed economists.

Tetlock's findings about political forecasts also apply to business and economic forecasts. Jerker Denrell and Christina Fang have provided such illustrations in their 2010 Management Science article titled “Predicting the Next Big Thing: Success as a Signal of Poor Judgment.”14