Competing with High Quality Data

Rajesh Jugulum

Description

Create a competitive advantage with data quality.

Data is rapidly becoming the powerhouse of industry, but low-quality data can actually put a company at a disadvantage. To be used effectively, data must accurately reflect the real-world scenario it represents, and it must be in a form that is usable and accessible. Quality data involves asking the right questions, targeting the correct parameters, and having effective internal systems for management, organization, and access. It must be relevant, complete, and correct, while falling in line with pervasive regulatory oversight programs.

Competing with High Quality Data: Concepts, Tools and Techniques for Building a Successful Approach to Data Quality takes a holistic approach to improving data quality, from collection to usage. Author Rajesh Jugulum is globally recognized as a major voice in the data quality arena, with a high-level background in international corporate finance. In the book, Jugulum provides a roadmap to data quality innovation, covering topics such as:

* The four-phase approach to data quality control
* A methodology that produces data sets for different aspects of a business
* Streamlined data quality assessment and issue resolution
* A structured, systematic, disciplined approach to effective data gathering

The book also contains real-world case studies illustrating how companies across a broad range of sectors have employed data quality systems, whether or not they succeeded, and what lessons were learned. High-quality data increases value throughout the information supply chain, and the benefits extend to the client, employee, and shareholder. Competing with High Quality Data provides the information and guidance necessary to formulate and activate an effective data quality plan today.

Page count: 292

Publication year: 2014


COMPETING WITH HIGH QUALITY DATA:

CONCEPTS, TOOLS, AND TECHNIQUES FOR BUILDING A SUCCESSFUL APPROACH TO DATA QUALITY

Rajesh Jugulum

Cover Design: C. Wallace
Cover Illustration: Abstract Background © iStockphoto/aleksandarvelasevic

This book is printed on acid-free paper.

Copyright © 2014 by John Wiley & Sons, Inc. All rights reserved

Published by John Wiley & Sons, Inc., Hoboken, New Jersey
Published simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor the author shall be liable for damages arising herefrom.

For general information about our other products and services, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Jugulum, Rajesh.
Competing with high quality data: concepts, tools, and techniques for building a successful approach to data quality / Rajesh Jugulum.
pages cm
Includes index.
ISBN 978-1-118-34232-9 (hardback); ISBN 978-1-118-41649-5 (ebk.); ISBN 978-1-118-42013-3 (ebk.); ISBN 978-1-118-84096-2 (ebk.)
1. Electronic data processing—Quality control. 2. Management. I. Title.
QA76.9.E95J84 2014
004—dc23
2013038107

I owe Dr. Genichi Taguchi a lot for instilling in me the desire to pursue a quest for Quality and for all his help and support in molding my career in Quality and Analytics.

CONTENTS

Foreword

Prelude

Preface

Acknowledgments

Chapter 1: The Importance of Data Quality

1.0 Introduction

1.1 Understanding the Implications of Data Quality

1.2 The Data Management Function

1.3 The Solution Strategy

1.4 Guide to This Book

Section I: Building a Data Quality Program

Chapter 2: The Data Quality Operating Model

2.0 Introduction

2.1 Data Quality Foundational Capabilities

2.2 The Data Quality Methodology

2.3 Conclusions

Note

Chapter 3: The DAIC Approach

3.0 Introduction

3.1 Six Sigma Methodologies

3.2 DAIC Approach for Data Quality

3.3 Conclusions

Note

Section II: Executing a Data Quality Program

Chapter 4: Quantification of the Impact of Data Quality

4.0 Introduction

4.1 Building a Data Quality Cost Quantification Framework

4.2 A Trading Office Illustrative Example

4.3 Conclusions

Note

Chapter 5: Statistical Process Control and Its Relevance in Data Quality Monitoring and Reporting

5.0 Introduction

5.1 What Is Statistical Process Control?

5.2 Control Charts

5.3 Relevance of Statistical Process Control in Data Quality Monitoring and Reporting

5.4 Conclusions

Chapter 6: Critical Data Elements: Identification, Validation, and Assessment

6.0 Introduction

6.1 Identification of Critical Data Elements

6.2 Assessment of Critical Data Elements

6.3 Conclusions

Notes

Chapter 7: Prioritization of Critical Data Elements (Funnel Approach)

7.0 Introduction

7.1 The Funnel Methodology (Statistical Analysis for CDE Reduction)

7.2 Case Study: Basel II

7.3 Conclusions

Notes

Chapter 8: Data Quality Monitoring and Reporting Scorecards

8.0 Introduction

8.1 Development of the DQ Scorecards

8.2 Analytical Framework (ANOVA, SPCs, Thresholds, Heat Maps)

8.3 Application of the Framework

8.4 Conclusions

Note

Chapter 9: Data Quality Issue Resolution

9.0 Introduction

9.1 Description of the Methodology

9.2 Data Quality Methodology

9.3 Process Quality/Six Sigma Approach

9.4 Case Study: Issue Resolution Process Reengineering

9.5 Conclusions

Notes

Chapter 10: Information System Testing

10.0 Introduction

10.1 Typical System Arrangement

10.2 Method of System Testing

10.3 MTS Software Testing

10.4 Case Study: A Japanese Software Company

10.5 Case Study: A Finance Company

10.6 Conclusions

Chapter 11: Statistical Approach for Data Tracing

11.0 Introduction

11.1 Data Tracing Methodology

11.2 Case Study: Tracing

11.3 Data Lineage through Data Tracing

11.4 Conclusions

Chapter 12: Design and Development of Multivariate Diagnostic Systems

12.0 Introduction

12.1 The Mahalanobis-Taguchi Strategy

12.2 Stages in MTS

12.3 The Role of Orthogonal Arrays and Signal-to-Noise Ratio in Multivariate Diagnosis

12.4 A Medical Diagnosis Example

12.5 Case Study: Improving Client Experience

12.6 Case Study: Understanding the Behavior Patterns of Defaulting Customers

12.7 Case Study: Marketing

12.8 Case Study: Gear Motor Assembly

12.9 Conclusions

Chapter 13: Data Analytics

13.0 Introduction

13.1 Data and Analytics as Key Resources

13.2 Data Innovation

13.3 Conclusions

Chapter 14: Building a Data Quality Practices Center

14.0 Introduction

14.1 Building a DQPC

14.2 Conclusions

Appendix A

Equations for Signal-to-Noise (S/N) Ratios

Appendix B

Matrix Theory: Related Topics

Appendix C

Some Useful Orthogonal Arrays

Index of Terms and Symbols

References

Referenced Resources

Further Resources

Index

End User License Agreement

List of Tables

Chapter 1

Table 1.1

Chapter 2

Table 2.1

Chapter 3

Table 3.1

Table 3.2

Table 3.3

Table 3.4

Table 3.5

Table 3.6

Chapter 4

Table 4.1

Table 4.2

Table 4.3

Chapter 5

Table 5.1

Chapter 6

Table 6.1

Table 6.2

Table 6.3

Table 6.4

Chapter 7

Table 7.1

Table 7.2

Table 7.3

Table 7.4

Table 7.5

Table 7.6

Chapter 8

Table 8.1

Table 8.2

Chapter 10

Table 10.1

Table 10.2

Table 10.3

Table 10.4

Table 10.5

Table 10.6

Table 10.7

Table 10.8

Table 10.9

Table 10.10

Table 10.11

Table 10.12

Table 10.13

Table 10.14

Table 10.15

Table 10.16

Chapter 11

Table 11.1

Table 11.2

Table 11.3

Chapter 12

Table 12.1

Table 12.2

Table 12.3

Table 12.4

Table 12.5

Table 12.6

Table 12.7

Table 12.8

Table 12.9

Chapter 13

Table 13.1

Appendix C

L4 (2^3) Orthogonal Array

L8 (2^7) Orthogonal Array

L12 (2^11) Orthogonal Array

L16 (2^15) Orthogonal Array

L32 (2^31) Orthogonal Array

L64 (2^63) Orthogonal Array

L128 (2^127) Orthogonal Array

L18 (2^1 × 3^7) Orthogonal Array

L27 (3^13) Orthogonal Array

L36 (2^11 × 3^12) Orthogonal Array

List of Illustrations

Chapter 1

Figure 1.1 Quality Loss Function (QLF)

Figure 1.2 Loss Function for Data Quality Levels (Higher-the-Better Type of Characteristic)

Figure 1.3 Sources of Societal Losses

Figure 1.4 DQ Solution Strategy

Chapter 3

Figure 3.1 Concept of Variation and Sigma Level

Figure 3.2 DMAIC Methodology

Figure 3.3 DFSS Methodology

Figure 3.4 DQ Methodology (DAIC)

Figure 3.5 Steady-State Monitor and Control Operating Environment

Chapter 4

Figure 4.1 Cost Waterfall

Figure 4.2 Cost Implications and Lost Opportunities for Cost of Poor-Quality Data (COPQD)

Figure 4.3 Example of a Process Issue Prioritization Matrix

Figure 4.4 Building DQ Quantification Framework

Chapter 5

Figure 5.1 Source of Variation

Figure 5.2 Distribution of Normal Population

Figure 5.3 Various Components of a Control Chart

Figure 5.4 Preliminary Data Collection to Find Sampling Frequency

Figure 5.5 p-Chart for Defective KYC Records

Figure 5.6 c-Chart for Number of Defects in a Sample with 100 Mortgage Accounts

Figure 5.7 Control Chart for Individuals and the Moving Ranges for Loan Processing Times

Figure 5.8 Selection of Suitable Control Charts

Chapter 6

Figure 6.1 Enterprise CDE Rationalization Matrix

Figure 6.2 Flowchart of Data Quality Assessment

Figure 6.3 Data Quality Scores at Various Levels

Chapter 7

Figure 7.1 CDE Reduction through the Funnel Approach

Figure 7.2 Examples of Variable Pairs with Correlations of 1

Figure 7.3 A Pair of Highly Correlated CDEs

Figure 7.4 Process of Reducing the CDEs from 35 to 15

Chapter 8

Figure 8.1 Analytics Framework for DQ Scorecards

Figure 8.2 SPC Analysis for Determining Thresholds

Figure 8.3 Analysis of Variance (ANOVA)

Figure 8.4 Pareto Analysis for Nonvalid Balances

Figure 8.5 Heat Map for an Enterprise-Level CDE—Records and Balances

Chapter 9

Figure 9.1 Issue Resolution—Linkage of Data Quality and Process Quality

Figure 9.2 Resolution of DQ-Related Issues with DQ Methodology

Figure 9.3 Distribution of 157 Issues

Figure 9.4 Issues Heat Map

Chapter 10

Figure 10.1 P-Diagram for System Testing

Chapter 11

Figure 11.1 Generalized Data Tracing Approach (the CDE numbers in funnel are only for illustration)

Figure 11.2 Pareto Analysis to Prioritize the CDEs

Figure 11.3 Example of an SPC Chart

Figure 11.4 CDE Prioritization Using Pareto Analysis

Figure 11.5 ANOVA to test the significance of main factor effects. The low P-value in both cases indicates that the effects are statistically significant.

Figure 11.6 Control Chart for the Defect Rate

Figure 11.7 Illustrative Example to Show the Reduction in Failure Rate

Chapter 12

Figure 12.1 Importance of Correlations in Multivariate Systems

Figure 12.2 The Gram-Schmidt Process

Figure 12.3 Pattern Information or Diagnostic System Used in MTS

Figure 12.4 Steps in MTS

Figure 12.5 Both U1 and U2 Are Larger-the-Better Type

Figure 12.6 U1—Smaller-the-Better Type and U2—Larger-the-Better Type

Figure 12.7 U1—Larger-the-Better Type and U2—Smaller-the-Better Type

Figure 12.8 Both U1 and U2 Are Smaller-the-Better Type

Figure 12.9 Differentiation between Normals and Abnormals (validation of the scale)

Figure 12.10 Normals and Abnormals after Optimization

Figure 12.11 Validation of Scale—Distance between Green, Yellow, and Red Clients (with 49 variables)

Figure 12.12 Separation between Clients (after Optimization, with 11 Variables)

Figure 12.13 MTS Scale Validation

Figure 12.14 Separation with Optimized Scale

Figure 12.15 Pattern Recognition with a Useful Set of Variables

Figure 12.16 127K27330 Test Fixture

Figure 12.17 Patterns Corresponding to Some Parameters

Figure 12.18 MTS Scale Validation

Figure 12.19 MTS Scale Performance Before and After Optimization

Chapter 13

Figure 13.1 Seven Levers of a Disciplined and Effective Organization

Figure 13.2 Successful Analytics Execution

Figure 13.3 Process of Executing Analytics

Figure 13.4 Importance of the Combination of High-Quality Data and Analytics

Figure 13.5 Big Data Analytics Operating Model

Chapter 14

Figure 14.1 End-to-End Data Quality Management with DQPC

Guide

Cover

Table of Contents

Preface

Foreword

Part

Chapter

Foreword

Over the past few years, there has been a dramatic shift in focus in information technology from the technology to the information. Inexpensive large-scale storage, high-performance computing systems, easy access to cloud computing, and the widespread use of software-as-a-service are all contributing to the commoditization of technology. Organizations are now beginning to realize that their competitiveness will be based on their data, not on their technology, and that their data and information are among their most important assets.

In this new data-driven environment, companies are increasingly utilizing analytical techniques to draw meaningful conclusions from data. However, the garbage-in, garbage-out rule still applies: analytics can only be effective when the data being analyzed is of high quality. Decisions based on conclusions drawn from poor-quality data can lead to equally poor outcomes, resulting in significant losses and strategic missteps for the company. At the same time, the seemingly countless data elements that manifest themselves in the daily processes of a modern enterprise make the task of ensuring high data quality both difficult and complex. A well-grounded data quality program must account for the complete environment of systems, architectures, people, and processes. It must also be aligned with business goals and strategy, and it must understand the intended purposes associated with specific data elements in order to prioritize them, build business rules, calculate data quality scores, and then take appropriate actions. To accomplish all of these things, companies need a mature data quality capability that provides the services, tools, and governance to deliver tangible insights and business value from the data. Firms with this capability will be able to make sounder decisions based on high-quality data. Consistently applied, this discipline can produce a competitive advantage for serious practitioners.
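As a rough illustration of the scoring idea just mentioned (prioritized data elements, business rules, and data quality scores), the following minimal Python sketch applies two hypothetical business rules to a single data element and reports the percentage of passing records as its score. The rule names, the element name account_balance, and the sample values are invented for illustration; they are not taken from the book.

# Minimal sketch (illustrative only): score one data element against
# a few hypothetical business rules and report a data quality score.
from typing import Callable, List, Optional

Rule = Callable[[Optional[str]], bool]  # a rule maps a value to pass/fail

def completeness(value: Optional[str]) -> bool:
    # Completeness: the value must be present and non-empty.
    return value is not None and value.strip() != ""

def validity(value: Optional[str]) -> bool:
    # Validity (illustrative): the value must parse as a number.
    try:
        float(value)
        return True
    except (TypeError, ValueError):
        return False

def dq_score(values: List[Optional[str]], rules: List[Rule]) -> float:
    # Score = percentage of records that pass every rule.
    if not values:
        return 0.0
    passed = sum(1 for v in values if all(rule(v) for rule in rules))
    return 100.0 * passed / len(values)

if __name__ == "__main__":
    balances = ["100.50", "98.20", None, "abc", "250.00"]
    score = dq_score(balances, [completeness, validity])
    print(f"DQ score for account_balance: {score:.1f}%")  # 60.0%

In a full program, scores of this kind would feed the scorecards, thresholds, and heat maps discussed in Chapter 8.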

Those embarking on their journey to data quality will find this book to be a most useful companion. The data quality concepts and approaches are presented in a simple and straightforward manner. The relevant material is organized into two sections: Section I focuses on building an effective data quality program, while Section II concentrates on the tools and techniques essential to the program's implementation and execution. In addition, the book explores the relationship between data analytics and high-quality data in the context of big data and provides other important data quality insights.

The application of the approaches and frameworks described in this book will help improve the level of data quality effectiveness and efficiency in any organization. One of the book's more salient features is the inclusion of case examples. These case studies clearly illustrate how the application of these methods has proven successful in actual instances.

This book is unique in the field of data quality as it comprehensively explains the creation of a data quality program from its initial planning to its complete implementation. I recommend this book as a valuable addition to the library of every data quality professional and business leader searching for a data quality framework that will, at journey's end, produce and ensure high quality data!

John R. Talburt
Professor of Information Science and Acxiom Chair of Information Quality at the University of Arkansas at Little Rock (UALR)

Prelude

When I begin to invest my time in reading a professional text, I wonder to what degree I can trust the material. I question whether it will be relevant to my challenge. And I hope that the author or authors have applied the expertise that makes the pages in front of me worthy of my personal commitment. In a few short paragraphs I will address these questions and describe how this book can best be leveraged.

I am a practicing data management executive, and I had the honor and privilege of leading the author and the contributors to this book through a very large-scale, extremely successful global data quality program design, implementation, and operation for one of the world's great financial services companies. The progressive topics of this book have been born from a powerful combination of academic/intellectual expertise and learning from applied business experience.

I have since moved from financial services to healthcare and am currently responsible for building an enterprise-wide data management program and capability for a global industry leader. I am benefiting greatly from applying the techniques outlined in this book to improve the reliability, usability, accessibility, and relevance of my company's most important enterprise data assets. The foundation for this journey must be formed around a robust and appropriately pervasive data quality program.

Competing with High Quality Data chapter topics, such as how to construct a Data Quality Operating Model, can be raised to fully global levels, but can also provide meaningful lift at a departmental or data domain scale. The same holds true for utilizing Statistical Process Controls, Critical Data Element Identification and Prioritization, and the other valuable capability areas discussed in the book.

The subject areas also lead the reader from the basics of organizing an effort and creating relevance, all the way to utilizing sophisticated advanced techniques such as Data Quality Scorecards, Information System Testing, Statistical Data Tracing, and Developing Multivariate Diagnostic Systems. Experiencing this range of capability is important not only to accommodate readers with different levels of experience, but also because the data quality improvement journey will often need to start with rudimentary base-level improvements that are later pressed forward into finer levels of tuning and precision.

You can have confidence in the author and the contributors. You can trust the techniques, the approaches, and the systematic design brought forth throughout this book. They work. And they can carry you from data quality program inception to pervasive and highly precise levels of execution.

Don Gray
Head of Global Enterprise Data Management at Cigna

Preface

According to Dr. Genichi Taguchi's quality loss function (QLF), there is an associated loss when a quality characteristic deviates from its target value. The loss function concept can easily be extended to the data quality (DQ) world. If the quality levels associated with the data elements used in various decision-making activities are not at the desired levels (also known as specifications or thresholds), then calculations or decisions made based on this data will not be accurate, resulting in huge losses to the organization. The overall loss (referred to as “loss to society” by Dr. Taguchi) includes direct costs, indirect costs, warranty costs, reputation costs, losses due to lost customers, and costs associated with rework and rejection. The results of this loss include system breakdowns, company failures, and company bankruptcies. In this context, everything is considered part of society (customers, organizations, government, etc.). The effect of poor data quality during the global financial crisis that began in 2007 cannot be ignored, because inadequate information technology and data architectures to support the management of risk were considered one of the key factors.
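For orientation, the standard textbook forms of Taguchi's loss function can be written as follows, where y is the observed quality level, m is the target value, and k is a cost coefficient. This is a conventional sketch of the QLF, not a quotation of the book's own equations:

\[
L(y) = k\,(y - m)^2 \qquad \text{(nominal-the-best characteristic)}
\]
\[
L(y) = \frac{k}{y^2} \qquad \text{(higher-the-better characteristic, such as a data quality level)}
\]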

Because of the adverse impacts that poor-quality data can have, organizations have begun to increase their focus on data quality in business in general, and they are viewing data as a critical resource, like others such as people, capital, raw materials, and facilities. Many companies have started to establish a dedicated data management function in the form of the chief data office (CDO). An important component of the CDO is the data quality team, which is responsible for ensuring high quality levels for the underlying data and for ensuring that the data is fit for its intended purpose. The responsibilities of the DQ team should include building an end-to-end DQ program and executing it with appropriate concepts, methods, tools, and techniques.

Much of this book is concerned with describing how to build a DQ program with an operating model that has a four-phase DAIC (Define, Assess, Improve, and Control) approach and showing how various concepts, tools, and techniques can be modified and tailored to solve DQ problems. In addition, discussions on data analytics (including the big data context) and establishing a data quality practices center (DQPC) are also provided.

This book is divided into two sections—Section I: Building a Data Quality Program and Section II: Executing a Data Quality Program—with 14 chapters covering various aspects of the DQ function. In the first section, the DQ operating model (DQOM) and the four-phase DAIC approach are described. The second section focuses on a wide range of concepts, methodologies, approaches, frameworks, tools, and techniques, all of which are required for the successful execution of a DQ program. Wherever possible, case studies or illustrative examples are provided to make the discussion more interesting and to give it a practical context. In Chapter 13, which focuses on data analytics, emphasis is placed on having good-quality data for analytics (even in the big data context) so that benefits can be maximized. The concluding chapter highlights the importance of building an enterprise-wide data quality practices center. This center helps organizations identify common enterprise problems and solve them through a systematic and standardized approach.

I believe that the application of approaches or frameworks provided in this book will help achieve the desired levels of data quality and that such data can be successfully used in the various decision-making activities of an enterprise. I also think that the topics covered in this book strike a balance between rigor and creativity. In many cases, there may be other methods for solving DQ problems. The methods in this book present some perspectives for designing a DQ problem-solving approach. In the coming years, the methods provided in this book may become elementary, with the introduction of newer methods. Before that happens, if the contents of this book help industries solve some important DQ problems, while minimizing the losses to society, then it will have served a fruitful purpose.

I would like to conclude this section with the following quote from Arthur Conan Doyle's The Adventure of the Copper Beeches:

“Data! Data!” I cried impatiently, “I cannot make bricks without clay.”

I venture to modify this quote as follows:

“Good data! Good data!” I cried impatiently, “I cannot make usable bricks without good clay.”

Rajesh Jugulum

Acknowledgments

Writing this book was a great learning experience. The project would not have been completed without help and support from many talented and outstanding individuals.

I would like to thank Joe Smialowski for his support and guidance in reviewing this manuscript and offering valuable suggestions. Joe was very patient in reviewing three versions of the manuscript, and he helped me make sure that the contents were appropriate and made sense. I wish to thank Don Gray for the support he provided from the beginning of this project and for writing the Prelude to the book. I also thank Professor John R. Talburt for writing the Foreword and for his helpful remarks to improve the contents of the book. Thanks are also due to Brian Bramson, Bob Granese, Chuan Shi, Chris Heien, Raji Ramachandran, Ian Joyce, Greg Somerville, and Jagmeet Singh for their help during this project. Bob and Brian contributed to two chapters in this book. Chuan deserves special credit for his efforts in the CDE-related chapters (Chapters 6 and 7) and the sampling discussion in the data tracing chapter (Chapter 11), and thanks to Ian for editing these chapters.

I would like to express my gratitude to Professor Nam P. Suh and Dr. Desh Deshpande for their support in providing quotes for the book.

I am also thankful to Ken Brzozowski and Jennifer Courant for the help provided in data tracing–related activities. Thanks are due to Shannon Bell for help in getting the required approvals for this book project.

I will always be indebted to the late Dr. Genichi Taguchi for what he did for me. I believe his philosophy is helpful not only in industry-related activities, but also in day-to-day human activities. My thanks are always due to Professor K. Narayana Reddy, Professor A. K. Choudhury, Professor B. K. Pal, Mr. Shin Taguchi, Mr. R. C. Sarangi, and Professor Ken Chelst for their help and guidance in my activities.

I am very grateful to John Wiley & Sons for giving me an opportunity to publish this book. I am particularly thankful to Amanda Shettleton and Nancy Cintron for their continued cooperation and support for this project. They were quite patient and flexible in accommodating my requests. I would also like to thank Bob Argentieri, Margaret Cummins, and Daniel Magers for their cooperation and support in this effort.

Finally, I would like to thank my family for their help and support throughout this effort.