Experimentation, Validation, and Uncertainty Analysis for Engineers

Hugh W. Coleman

Description

Helps engineers and scientists assess and manage uncertainty at all stages of experimentation and validation of simulations.

Fully updated from its previous edition, Experimentation, Validation, and Uncertainty Analysis for Engineers, Fourth Edition includes expanded coverage and new examples of applying the Monte Carlo Method (MCM) in performing uncertainty analyses. Presenting the current, internationally accepted methodology from ISO, ANSI, and ASME standards for propagating uncertainties using both the MCM and the Taylor Series Method (TSM), it provides a logical approach to experimentation and validation through the application of uncertainty analysis in the planning, design, construction, debugging, execution, data analysis, and reporting phases of experimental and validation programs. It also illustrates how to use a spreadsheet approach to apply the MCM and the TSM, based on the authors' experience in applying uncertainty analysis in complex, large-scale testing of real engineering systems. Experimentation, Validation, and Uncertainty Analysis for Engineers, Fourth Edition includes examples throughout, contains end-of-chapter problems, and is accompanied by the authors' website, www.uncertainty-analysis.com.

* Guides readers through all aspects of experimentation, validation, and uncertainty analysis

* Emphasizes the use of the Monte Carlo Method in performing uncertainty analysis

* Includes complete new examples throughout

* Features workable problems at the end of chapters

Experimentation, Validation, and Uncertainty Analysis for Engineers, Fourth Edition is an ideal text and guide for researchers, engineers, and graduate and senior undergraduate students in engineering and science disciplines. Knowledge of the material in this Fourth Edition is a must for those involved in executing or managing experimental programs or validating models and simulations.




TABLE OF CONTENTS

COVER

TITLE PAGE

PREFACE

1 EXPERIMENTATION, ERRORS, AND UNCERTAINTY

1-1 EXPERIMENTATION

1-2 EXPERIMENTAL APPROACH

1-3 BASIC CONCEPTS AND DEFINITIONS

1-4 EXPERIMENTAL RESULTS DETERMINED FROM A DATA REDUCTION EQUATION COMBINING MULTIPLE MEASURED VARIABLES

1-5 GUIDES AND STANDARDS

1-6 A NOTE ON NOMENCLATURE

REFERENCES

PROBLEMS

NOTES

2 COVERAGE AND CONFIDENCE INTERVALS FOR AN INDIVIDUAL MEASURED VARIABLE

2-1 COVERAGE INTERVALS FROM THE MONTE CARLO METHOD FOR A SINGLE MEASURED VARIABLE

2-2 CONFIDENCE INTERVALS FROM THE TAYLOR SERIES METHOD FOR A SINGLE MEASURED VARIABLE, ONLY RANDOM ERRORS CONSIDERED

2-3 CONFIDENCE INTERVALS FROM THE TAYLOR SERIES METHOD FOR A SINGLE MEASURED VARIABLE: RANDOM AND SYSTEMATIC ERRORS CONSIDERED

2-4 UNCERTAINTY OF UNCERTAINTY ESTIMATES AND CONFIDENCE INTERVAL LIMITS FOR A MEASURED VARIABLE

REFERENCES

PROBLEMS

NOTES

3 UNCERTAINTY IN A RESULT DETERMINED FROM MULTIPLE VARIABLES

3-1 GENERAL UNCERTAINTY ANALYSIS VS. DETAILED UNCERTAINTY ANALYSIS

3-2 MONTE CARLO METHOD FOR PROPAGATION OF UNCERTAINTIES

3-3 TAYLOR SERIES METHOD FOR PROPAGATION OF UNCERTAINTIES

3-4 DETERMINING MCM COVERAGE INTERVALS AND TSM EXPANDED UNCERTAINTY

3-5 GENERAL UNCERTAINTY ANALYSIS USING THE TSM AND MCM APPROACHES FOR A ROUGH-WALLED PIPE FLOW EXPERIMENT

3-6 COMMENTS ON IMPLEMENTING DETAILED UNCERTAINTY ANALYSIS USING A SPREADSHEET

REFERENCES

PROBLEMS

4 GENERAL UNCERTAINTY ANALYSIS USING THE TAYLOR SERIES METHOD (TSM)

4-1 TSM APPLICATION TO EXPERIMENT PLANNING

4-2 TSM APPLICATION TO EXPERIMENT PLANNING: SPECIAL FUNCTIONAL FORM

4-3 USING TSM UNCERTAINTY ANALYSIS IN PLANNING AN EXPERIMENT

4-4 EXAMPLE: ANALYSIS OF PROPOSED PARTICULATE MEASURING SYSTEM

4-5 EXAMPLE: ANALYSIS OF PROPOSED HEAT TRANSFER EXPERIMENT

4-6 EXAMPLES OF PRESENTATION OF RESULTS FROM ACTUAL APPLICATIONS

REFERENCES

PROBLEMS

5 DETAILED UNCERTAINTY ANALYSIS: OVERVIEW AND DETERMINING RANDOM UNCERTAINTIES IN RESULTS

5-1 USING DETAILED UNCERTAINTY ANALYSIS

5-2 DETAILED UNCERTAINTY ANALYSIS: OVERVIEW OF COMPLETE METHODOLOGY

5-3 DETERMINING RANDOM UNCERTAINTY OF EXPERIMENTAL RESULT

REFERENCES

6 DETAILED UNCERTAINTY ANALYSIS: DETERMINING SYSTEMATIC UNCERTAINTIES IN RESULTS

6-1 ESTIMATING SYSTEMATIC UNCERTAINTIES

6-2 DETERMINING SYSTEMATIC UNCERTAINTY OF EXPERIMENTAL RESULT INCLUDING CORRELATED SYSTEMATIC ERROR EFFECTS

6-3 COMPARATIVE TESTING

6-4 SOME ADDITIONAL CONSIDERATIONS IN EXPERIMENT EXECUTION

REFERENCES

PROBLEMS

7 DETAILED UNCERTAINTY ANALYSIS: COMPREHENSIVE EXAMPLES

7-1 TSM COMPREHENSIVE EXAMPLE: SAMPLE-TO-SAMPLE EXPERIMENT

7-2 TSM COMPREHENSIVE EXAMPLE: USE OF BALANCE CHECKS

7-3 COMPREHENSIVE EXAMPLE: DEBUGGING AND QUALIFICATION OF A TIMEWISE EXPERIMENT

7-4 COMPREHENSIVE EXAMPLE: HEAT EXCHANGER TEST FACILITY FOR SINGLE AND COMPARATIVE TESTS

7-5 CASE STUDY: EXAMPLES OF SINGLE AND COMPARATIVE TESTS OF NUCLEAR POWER PLANT RESIDUAL HEAT REMOVAL HEAT EXCHANGER

REFERENCES

PROBLEMS

8 THE UNCERTAINTY ASSOCIATED WITH THE USE OF REGRESSIONS

8-1 OVERVIEW OF LINEAR REGRESSION ANALYSIS AND ITS UNCERTAINTY

8-2 DETERMINING AND REPORTING REGRESSION UNCERTAINTY

8-3 METHOD OF LEAST SQUARES REGRESSION

8-4 FIRST-ORDER REGRESSION EXAMPLE: MCM APPROACH TO DETERMINE REGRESSION UNCERTAINTY

8-5 REGRESSION EXAMPLES: TSM APPROACH TO DETERMINE REGRESSION UNCERTAINTY

8-6 COMPREHENSIVE TSM EXAMPLE: REGRESSIONS AND THEIR UNCERTAINTIES IN A FLOW TEST

REFERENCES

PROBLEMS

NOTES

9 VALIDATION OF SIMULATIONS

9-1 INTRODUCTION TO VALIDATION METHODOLOGY

9-2 ERRORS AND UNCERTAINTIES

9-3 VALIDATION NOMENCLATURE

9-4 VALIDATION APPROACH

9-5 CODE AND SOLUTION VERIFICATION

9-6 INTERPRETATION OF VALIDATION RESULTS USING E AND uval

9-7 ESTIMATION OF VALIDATION UNCERTAINTY uval

9-8 SOME PRACTICAL POINTS

REFERENCES

ANSWERS TO SELECTED PROBLEMS

APPENDIX A: USEFUL STATISTICS

APPENDIX B: TAYLOR SERIES METHOD (TSM) FOR UNCERTAINTY PROPAGATION

B-1 DERIVATION OF UNCERTAINTY PROPAGATION EQUATION

B-2 COMPARISON WITH PREVIOUS APPROACHES

B-3 ADDITIONAL ASSUMPTIONS FOR ENGINEERING APPLICATIONS

REFERENCES

NOTE

APPENDIX C: COMPARISON OF MODELS FOR CALCULATION OF UNCERTAINTY

C-1 MONTE CARLO SIMULATIONS

C-2 SIMULATION RESULTS

REFERENCES

APPENDIX D: SHORTEST COVERAGE INTERVAL FOR MONTE CARLO METHOD

REFERENCE

APPENDIX E: ASYMMETRIC SYSTEMATIC UNCERTAINTIES

E-1 PROCEDURE FOR ASYMMETRIC SYSTEMATIC UNCERTAINTIES USING TSM PROPAGATION

E-2 PROCEDURE FOR ASYMMETRIC SYSTEMATIC UNCERTAINTIES USING MCM PROPAGATION

E-3 EXAMPLE: BIASES IN A GAS TEMPERATURE MEASUREMENT SYSTEM

REFERENCES

APPENDIX F: DYNAMIC RESPONSE OF INSTRUMENT SYSTEMS

F-1 GENERAL INSTRUMENT RESPONSE

F-2 RESPONSE OF ZERO-ORDER INSTRUMENTS

F-3 RESPONSE OF FIRST-ORDER INSTRUMENTS

F-4 RESPONSE OF SECOND-ORDER INSTRUMENTS

F-5 SUMMARY

REFERENCES

INDEX

END USER LICENSE AGREEMENT

List of Tables

Chapter 2

Table 2.1 Chauvenet's Criterion for Rejecting a Reading

Table 2.2 The Relative Uncertainty in Standard Deviation Calculated with N Measurements from a Gaussian Parent Population

Table 2.3 Uncertainties in Confidence Interval Limits for the Stated Conditions

Chapter 3

Table 3.1 Results for General Uncertainty Analysis Example

Chapter 4

Table 4.1 Results of General Uncertainty Analysis for Steady-State Technique

Table 4.2 Results of General Uncertainty Analysis for Transient Technique

Chapter 5

Table 5.1 Uncertainty Analysis in Experimentation

Table 5.2 Random Uncertainty Values for Cd Using Both the TSM Propagation and the Direct Calculation Methods

Chapter 7

Table 7.1 Zeroth-Order Estimates of Systematic and Random Standard Uncertainties for Variables in Heating Value Determination

Table 7.2 Nominal Values from Previous Test

Table 7.3 Heating Value Results for 26 Lignite Samples

Chapter 8

Table 8.1 Systematic Standard Uncertainty Components for Special Case

Table 8.2 Systematic Standard Uncertainty for Y in Special Case

Table 8.3 Pressure Transducer Calibration Data Set

Table 8.4 Venturi Calibration Data Set

Table 8.5 Uncertainties (95%) for Venturi Calibration

Appendix A

Table A.1 Tabulation of Two-Tailed Gaussian Probabilities

Table A.2 The t Distribution

Table A.3 Factors for Tolerance Interval

Table A.4 Factors for Prediction Interval to Contain the Values of All of 1, 2, 5, 10, and 20 Future Observations at a Confidence Level of 95%

Appendix C

Table C.1 Data Reduction Equations, Hypothetical True Values of Test Variables, and Balanced Case Error Estimates for the Example Experiments

Table C.2 Confidence Level (%) Results for Current Information Study with νr ≈ 9

Table C.3 Confidence Level (%) Results for Previous Information Study with νr ≈ 9

Table C.4 Confidence Level (%) Results for Current Information Study with Each νsi and νbi = 9

Table C.5 Confidence Level (%) Results for Previous Information Study with Each νsi and νbi = 9

Appendix E

Table E.1 Expressions for cX for Gaussian, Rectangular, and Triangular Distributions in Figures E.1 through E.3

Table E.2 Standard Deviation for Gaussian, Rectangular, and Triangular Distributions in Figures E.1 through E.3

List of Illustrations

Chapter 1

Figure 1.1 Analytical approach to solution of a problem.

Figure 1.2 Comparison of model results with experimental data (a) without and (b) with consideration of uncertainty in Y.

Figure 1.3 An uncertainty u defines an interval that is estimated to contain the actual value of an error of unknown sign and magnitude.

Figure 1.4 Measurement of a variable influenced by five error sources.

Figure 1.5 Effect of errors on successive measurements of a variable X.

Figure 1.6 Histogram of temperatures read from a thermometer by 24 students.

Figure 1.7 A population of “identical” voltmeters with their different systematic errors.

Figure 1.8 Uniform distribution of possible systematic errors β.

Figure 1.9 Triangular distribution of possible systematic errors β.

Figure 1.10 Using the MCM approach to combine elemental standard uncertainties.

Figure 1.11 (a) Thermocouple system with its output voltage E. (b) No individual calibration—used with generic temperature–voltage table to “measure” a temperature.

Figure 1.12 (a) Calibration of tc system of Figure 1.8 using a temperature standard. (b) Use of calibrated system to measure a temperature.

Figure 1.13 Region containing lignite deposits and potential power plant site.

Figure 1.14 Experimental determination of resistance characteristics of rough-walled pipe.

Figure 1.15 Rough-walled pipe results of Nikuradse [12].

Figure 1.16 Schematic showing nomenclature for validation approach [14].

Figure 1.17 Overview of validation analysis approach [14].

Chapter 2

Figure 2.1 Monte Carlo Method of combining elemental errors for a single variable.

Figure 2.2 Histogram of measurement of output of a pressure transducer.

Figure 2.3 Distribution of measurements of output of a pressure transducer as the number of measurements approaches infinity.

Figure 2.4 Plot of the Gaussian distribution showing the effect of different values of standard deviation.

Figure 2.5 Graphic representation of the probability Prob(τ1).

Figure 2.6 Probability sketch for Example 2.1(a).

Figure 2.7 Probability sketch for Example 2.1(b).

Figure 2.8 Probability sketch for Example 2.1(c).

Figure 2.9 Probability sketch for Example 2.1(d).

Figure 2.10 The 95% confidence interval about a single measurement from a Gaussian distribution.

Figure 2.11 Graphic representation of Chauvenet's criterion.

Figure 2.12 Distribution of errors for central limit theorem example.

Figure 2.13 Distribution of values of measurement of variable X for (a) case 1 with two error sources and (b) case 2 with eight error sources.

Figure 2.14 Distribution of possible systematic errors β and the interval with a 95% level of confidence of containing a particular β value.

Figure 2.15 Uniform distribution of possible systematic errors β.

Figure 2.16 Triangular distribution of possible systematic errors β.

Figure 2.17 Effective degrees of freedom associated with the relative uncertainty in a systematic uncertainty estimate.

Figure 2.18 Percent coverage of true value Xtrue for a measurement X with various degrees of freedom.

Figure 2.19 Schematic showing uncertainty in limits of uncertainty intervals for N = 20.

Chapter 3

Figure 3.1 Using the MCM in a general uncertainty analysis.

Figure 3.2 Using the MCM in a detailed uncertainty analysis with propagation of random uncertainties of the individual measured variables.

Figure 3.3 Using the MCM in a detailed uncertainty analysis with directly determined random uncertainty of the result.

Figure 3.4 Asymmetric distribution of MCM results.

Figure 3.5 Probabilistically symmetric coverage interval.

Figure 3.6 Histogram of levels of confidence provided by 95% uncertainty models for all experiments where νr ≈ 9 (Appendix C).

Figure 3.7 Histogram of levels of confidence provided by 95% uncertainty models for all experiments (Appendix C).

Figure 3.8 Experimental determination of resistance characteristics of rough-walled pipe.

Figure 3.9 Schematic showing core of MCM general uncertainty analysis.

Figure 3.10 Layout of worksheets in an Excel™ MCM General Uncertainty Analysis Spreadsheet.

Figure 3.11 The “error d (gaussian)” worksheet.

Figure 3.12 The “nominal values & u's” worksheet.

Figure 3.13 The “perturbed variables” worksheet.

Figure 3.14 The “results” worksheet.

Figure 3.15 The “result histogram-stats” worksheet.

Figure 3.16 MCM friction factor distribution for u = 0.5% for all input variables.

Figure 3.17 MCM friction factor distribution for u = 2.5% for all input variables.

Figure 3.18 MCM friction factor distribution for u = 5.0% for all input variables.

Figure 3.19 MCM friction factor distribution for u = 10.0% for all input variables.

Figure 3.20 Comparison of MCM 95% coverage and TSM 95% confidence intervals.

Figure 3.21 Convergence of uf for the case in which u of the inputs = 0.5%.

Figure 3.22 Convergence of uf for the case in which u of the inputs = 10%.

Chapter 4

Figure 4.1 Sketch for Example 4.1.

Figure 4.2 Sketch for Example 4.2.

Figure 4.3 Sketch for Example 4.3.

Figure 4.4 Sketch for Example 4.4.

Figure 4.5 Sketch for Example 4.5.

Figure 4.6 Schematic of a laser transmissometer system for monitoring particulate concentrations in exhaust gases.

Figure 4.7 Uncertainty in the experimental result as a function of the transmittance.

Figure 4.8 Multiple-pass beam arrangement for the transmissometer.

Figure 4.9 Uncertainty in the experimental result as a function of the number of beam passes.

Figure 4.10 Finite-length circular cylinder in crossflow.

Figure 4.11 Behavior of the cylinder temperature in a test using the transient technique.

Figure 4.12 Uncertainty analysis results for the steady-state technique.

Figure 4.13 Uncertainty analysis results for the transient technique.

Figure 4.14 Comparison of uncertainty analysis results for the two proposed techniques.

Figure 4.15 Presentation of UMF results for thermodynamic method of determining turbine efficiency for 24 different set points (from Ref. 7).

Figure 4.16 Presentation of UPC results for thermodynamic method of determining turbine efficiency for 24 different set points (from Ref. 7).

Figure 4.17 Presentation of UMF and UPC results for the specific impulse of a solar thermal absorber/thruster at one set point (from Ref. 8; originally published by American Institute of Aeronautics and Astronautics).

Chapter 5

Figure 5.1 Experimental result determined from multiple measured variables.

Figure 5.2 Variation with time of Y, a measured variable or an experimental result, for a “steady-state” condition.

Figure 5.3 Venturi calibration schematic (not to scale) (from Ref. 1).

Figure 5.4 Normalized venturi inlet and throat pressures for a test (from Ref. 1).

Figure 5.5 Diagram of ambient temperature flow ejector.

Figure 5.6 Ratios of random standard uncertainties from ejector tests.

Figure 5.7 Normalized measurements from a liquid rocket engine ground test.

Figure 5.8 Ratios of random standard uncertainties for average thrust.

Figure 5.9 Ratios of random standard uncertainties for specific impulse.

Chapter 6

Figure 6.1 TSM Detailed uncertainty analysis: determination of systematic uncertainty for experimental result.

Figure 6.2 Departures of low-pressure experimental specific heats from tabulated values for air (from Ref. 2).

Figure 6.3 Departures of low-pressure experimental thermal conductivities from tabulated values for air (from Ref. 2).

Figure 6.4 Schematic of the turbulent heat transfer test facility (THTTF).

Figure 6.5 Schematic of flow over heated test plate.

Figure 6.6 Steady-state energy balance on test plate.

Figure 6.7 Cross-section of the THTTF test section.

Figure 6.8 Schematic of a test plate with heater wire placement shown [4].

Figure 6.9 Conduction model with heater wire [4].

Figure 6.10 Conduction model with heater wire and “thermistor” installed [4].

Figure 6.11 Example of conduction model output for several assumed designs [4].

Figure 6.12 Schematic of the thermistor calibration approach.

Figure 6.13 Cumulative distribution plot of thermistor calibration data.

Figure 6.14 Calibration approach for test plate power wattmeter.

Figure 6.15 Cumulative distribution plot of wattmeter calibration data.

Figure 6.16 MCM approach with correlated systematic error effects.

Figure 6.17 Schematic for truck and lumber example.

Figure 6.18 Load cell calibration schematic.

Figure 6.19 Heat exchanger diagram for example.

Figure 6.20 Schematic of flow loop for friction factor determination.

Figure 6.21 Friction factor versus Reynolds number data for two rough wall pipes: (a) points equispaced in Re; (b) points equispaced in log Re.

Figure 6.22 Data taken in sequential order and including 5% hysteresis effect.

Figure 6.23 Data taken in random order and including 5% hysteresis effect.

Figure 6.24 Schematic of jitter program procedure.

Chapter 7

Figure 7.1 Schematic of oxygen bomb calorimeter.

Figure 7.2 Flow system schematic for balance check example.

Figure 7.3 Debugging check at first-order replication level, u = 67 m/s.

Figure 7.4 Debugging check at first-order replication level, u = 57 m/s. Problem indicated since sSt ≈ 0 was expected.

Figure 7.5 Debugging check at first-order replication level, u = 12 m/s. Problem indicated since sSt ≈ 0 was expected.

Figure 7.6 Classic data set of Reynolds et al. [10], widely accepted as a standard.

Figure 7.7 Qualification check at Nth-order replication level. Successful comparison of new data with the standard set.

Figure 7.8 Heat exchanger flow configuration.

Figure 7.9 MCM diagram for no shared error sources case.

Figure 7.10 MCM diagram showing elemental error sources.

Figure 7.11 Steady state test temperature data for HX1.

Figure 7.12 Steady state test heat transfer rate data for HX1.

Figure 7.13 MCM results for fouling resistance for 2012 HX1 test.

Figure 7.14 MCM results for fouling resistance for 2016 HX1 test.

Figure 7.15 MCM results for difference in fouling resistance in two HX1 tests.

Chapter 8

Figure 8.1 MCM approach for determining uncertainty of a regression expression.

Figure 8.2 TSM approach for determining uncertainty of a regression expression.

Figure 8.3 Expressing regression uncertainty intervals (from Refs. 1 and 2).

Figure 8.4 Schematic showing core analysis for MCM determination of uY.

Figure 8.5 Schematic of experimental apparatus for venturi calibration (from Ref. 1).

Figure 8.6 Differential pressure transducer calibration curve (from Ref. 1).

Figure 8.7 Differential pressure transducer calibration curve uncertainty (from Refs. 1 and 2).

Figure 8.8 Discharge coefficient versus Reynolds number (from Refs. 1 and 2).

Figure 8.9 Uncertainty in venturi flow rate as a function of Reynolds Number (from Refs. 1 and 2).

Chapter 9

Figure 9.1 Schematic of rough-walled pipe flow experiment.

Figure 9.2 Schematic showing nomenclature for validation approach.

Figure 9.3 Overview of validation process with sources of error in ovals.

Figure 9.4 Schematic for V&V example case of combustion gas flow through a duct.

Figure 9.5 Case 1: TSM approach for estimating uval when experiment validation variable (ΔP) is directly measured.

Figure 9.6 Case 1: MCM approach for estimating uval when experiment validation variable (ΔP) is directly measured.

Figure 9.7 Case 2: TSM approach when validation result is defined by data reduction equation that combines variables measured in experiment.

Figure 9.8 Case 2: MCM approach when validation result is defined by data reduction equation that combines variables measured in experiment.

Figure 9.9 Case 3: TSM approach when validation result is defined by data reduction equation that combines variables measured in experiment that share an identical error source.

Figure 9.10 Case 3: MCM approach when the validation result is defined by a data reduction equation that combines variables measured in the experiment that share an identical error source.

Figure 9.11 TSM approach when experimental validation result is defined by data reduction equation that itself is a model (Case 4).

Figure 9.12 MCM approach when experimental validation result is defined by data reduction equation that itself is a model (case 4).

Appendix A

Figure A.1 Graphic representation of the two-tailed Gaussian probability.

Appendix B

Figure B.1 Propagation of systematic errors and random errors into an experimental result.

Figure B.2 Range of variation of sample standard deviation as a function of number of readings in the sample.

Appendix C

Figure C.1 Histogram of confidence levels provided by 99% uncertainty models for all example experiments when νr ≈ 9.

Figure C.2 Histogram of confidence levels provided by 95% uncertainty models for all example experiments when νr ≈ 9.

Figure C.3 Histogram of confidence levels provided by 99% uncertainty models for all example experiments.

Figure C.4 Histogram of confidence levels provided by 95% uncertainty models for all example experiments.

Appendix D

Figure D.1 Concept for determining shortest coverage interval for a 100p% level of confidence.

Appendix E

Figure E.1 Gaussian asymmetric systematic error distribution for β3.

Figure E.2 Rectangular asymmetric systematic error distribution for β3.

Figure E.3 Triangular asymmetric systematic error distribution for β3.

Figure E.4 Thermocouple probe in engine exhaust pipe.

Appendix F

Figure F.1 Step change in input to an instrument.

Figure F.2 Ramp change in input to an instrument.

Figure F.3 Sinusoidally varying input to an instrument.

Figure F.4 Response of a first-order instrument to a step change input versus nondimensional time.

Figure F.5 Response of a first-order instrument to a ramp input versus time.

Figure F.6 Amplitude response of a first-order instrument to a sinusoidal input of frequency ω.

Figure F.7 Phase error in the response of a first-order instrument to a sinusoidal input of frequency ω.

Figure F.8 Response of a second-order instrument to a step change input for various damping factors.

Figure F.9 Amplitude response of a second-order instrument to a sinusoidal input of frequency ω.

Figure F.10 Phase error in the response of a second-order instrument to a sinusoidal input of frequency ω.


EXPERIMENTATION, VALIDATION, AND UNCERTAINTY ANALYSIS FOR ENGINEERS

FOURTH EDITION

 

 

HUGH W. COLEMAN and W. GLENN STEELE

 

 

 

 

 

 

 

This edition first published 2018

©2018 John Wiley & Sons, Inc.

Edition History

John Wiley & Sons (1e, 1989)

John Wiley & Sons (2e, 1999)

John Wiley & Sons (3e, 2009)

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Hugh W. Coleman and W. Glenn Steele to be identified as the authors of this work has been asserted in accordance with law.

Registered Office

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

Editorial Office

The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty

While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data:

Names: Coleman, Hugh W., author. | Steele, W. Glenn, author.

Title: Experimentation, validation, and uncertainty analysis for engineers / by Hugh W. Coleman and W. Glenn Steele.

Other titles: Experimentation and uncertainty analysis for engineers

Description: Fourth edition. | Hoboken, NJ, USA : Wiley, [2018] | First edition entitled: Experimentation and uncertainty analysis for engineers. | Includes bibliographical references and index. | Identifiers: LCCN 2017055307 (print) | LCCN 2017056271 (ebook) | ISBN 9781119417668 (pdf) | ISBN 9781119417705 (epub) | ISBN 9781119417514 (cloth)

Subjects: LCSH: Engineering—Experiments. | Uncertainty.

Classification: LCC TA153 (ebook) | LCC TA153 .C66 2018 (print) | DDC 620.0072—dc23

LC record available at https://lccn.loc.gov/2017055307

Cover Design: Wiley

For our parents
Mable Morrison Coleman and Marion Templeton Coleman
and
Mary Evans Steele and Wilbur Glenn Steele, Sr.,
who were reared and came of age during the Great Depression. Their work ethic, nurturing, support, and respect for the value of education have always been our foundation.

H.W.C.
W.G.S.

PREFACE

When we first agreed to produce a fourth edition, our plan was to update some material on the Monte Carlo Method (MCM) and to add a few examples. However, we quickly realized two things.

First, since we are both now retired from our professor day jobs, this edition is the first where we have the luxury of adequate time.

Second, since publication of the first edition we have taught our two-day short course based on the book to more than 125 classes containing people of almost every conceivable engineering and scientific background. Using what we learned from their questions and suggestions, we adjusted the sequence and logic of the presentation of the basic concepts in the course—and that sequence and logic no longer corresponded to the presentation in the book.

In this fourth edition, we have updated the sequence of presentation of basic ideas by introducing some topics earlier and by expanding the discussion of others. The chapter on uncertainty propagation has been rewritten to stress that the MCM has become the primary method for propagating uncertainties. This revised Chapter 3 also more clearly shows our concepts of “general uncertainty analysis” and “detailed uncertainty analysis” and shows how to use a spreadsheet to apply the MCM even in complex uncertainty propagation situations. The presentation of detailed uncertainty analysis has been divided into three chapters: determining random uncertainties in results, determining systematic uncertainties in results, and comprehensive examples. We have also added a large number of examples drawn from our personal experience, including a comprehensive example that covers all areas of uncertainty analysis.

Perhaps the thing that sets our book apart is its basis on the wide background of our experiences during the combined 100 years of our engineering careers. Our expertise in heat transfer, fluid mechanics, propulsion, energy systems, and uncertainty analysis has been used in applications for land-based, aviation, naval, and space projects, and we have personally worked on testing programs from laboratory-scale to full-scale trials. We have been active participants in professional societies' committees developing experimental uncertainty and validation standards for use by practicing engineers. Our interactions in classes of our short course with participants from industry, laboratories, and academia have been stimulating and informative.

We would like to acknowledge the invaluable contributions of all our students to this work and also the contributions of our university and technical committee colleagues. We are always excited to present the material to classes and to observe how the students quickly see the myriad of applications of applied uncertainty analysis to their specific tests and analyses.

HUGH W. COLEMAN and W. GLENN STEELE

November 2017

1 EXPERIMENTATION, ERRORS, AND UNCERTAINTY

When the word experimentation is encountered, most of us immediately envision someone in a laboratory “taking data.” This idea has been fostered over many decades by portrayals in periodicals, television shows, and movies of an engineer or scientist in a white lab coat writing on a clipboard while surrounded by the piping and gauges in a refinery or by an impressive complexity of laboratory glassware. In recent years, the location is often a control room filled with computerized data acquisition equipment with lights blinking on the racks and panels. To some extent, the manner in which laboratory classes are typically implemented in university curricula also reinforces this idea. Students often encounter most instruction in experimentation as demonstration experiments that are already set up when the students walk into the laboratory. Data are often taken under the pressure of time, and much of the interpretation of the data and the reporting of results is spent on trying to rationalize what went wrong and what the results “would have shown if…”

Experimentation is not just data taking. Any engineer or scientist who subscribes to the widely held but erroneous belief that experimentation is making measurements in the laboratory will be a failure as an experimentalist. The actual data-taking portion of a well-run experimental program generally constitutes a small percentage of the total time and effort expended. In this book we examine and discuss the steps and techniques involved in a logical, thorough approach to the subject of experimentation.

1-1 EXPERIMENTATION

1-1.1 Why Is Experimentation Necessary?

Why are experiments necessary? Why do we need to study the subject of experimentation? The experiments run in science and engineering courses demonstrate physical principles and processes, but once these demonstrations are made and their lessons taken to heart, why bother with experiments? With the laws of physics we know, with the sophisticated analytical solution methods we study, with the increasing knowledge of numerical solution techniques, and with the awesome computing power available, is there any longer a need for experimentation in the real world?

These are fair questions to ask. To address them, it is instructive to consider Figure 1.1, which illustrates a typical analytical approach to finding a solution to a physical problem. Experimental information is almost always required at one or more stages of the solution process, even when an analytical approach is used. Sometimes experimental results are necessary before realistic assumptions and idealizations can be made so that a mathematical model of the real-world process can be formulated using the basic laws of physics. In addition, experimentally determined information is generally present in the form of physical property values and the auxiliary equations (e.g., equations of state) necessary for obtaining a solution. So we see that even in situations in which the solution approach is analytical (or numerical), information from experiments is included in the solution process.

Figure 1.1 Analytical approach to solution of a problem.

From a more general perspective, experimentation lies at the very foundations of science and engineering. Webster's [1] defines science as “systematized knowledge derived from observation, study, and experimentation carried on in order to determine the nature or principles of what is being studied.” In discussing the scientific method, Shortley and Williams [2] state: “The scientific method is the systematic attempt to construct theories that correlate wide groups of observed facts and are capable of predicting the results of future observations. Such theories are tested by controlled experimentation and are accepted only so long as they are consistent with all observed facts.”

In many systems and processes of scientific and engineering interest, the geometry, boundary conditions, and physical phenomena are so complex that it is beyond our present technical capability to formulate satisfactory analytical or numerical models and approaches. In these cases, experimentation is necessary to define the behavior of the systems and/or processes (i.e., to find a solution to the problem).

1-1.2 Degree of Goodness and Uncertainty Analysis

If we are using property data or other experimentally determined information in an analytical solution, we should certainly consider how “good” the experimental information is. Similarly, anyone comparing results of a mathematical model with experimental data (and perhaps also with the results of other mathematical models) should certainly consider the degree of goodness of the data when drawing conclusions based on the comparisons. This situation is illustrated in Figure 1.2. In Figure 1.2a the results of two different mathematical models are compared with each other and with a set of experimental data. The authors of the two models might have a fine time arguing over which model compares better with the data. In Figure 1.2b, the same information is presented, but a range representing the uncertainty (likely amount of error) in the experimental value of Y has been plotted for each data point. It is immediately obvious that once the degree of goodness of the Y value is taken into consideration it is fruitless to argue for the validity of one model over another based only on how well the model results match the data. The “noise level” established by the data uncertainty effectively sets the resolution at which such comparisons can be made.

Figure 1.2 Comparison of model results with experimental data (a) without and (b) with consideration of uncertainty in Y.

We will discuss such “validation” comparisons between simulation results and experimental results in considerable detail as we proceed. At this point, we will note that the experimental values of X will also contain errors, and so an uncertainty should also be associated with X. In addition, the simulation result also has uncertainty arising from modeling errors, errors in the inputs to the model, and possibly errors from the algorithms used to numerically solve the simulation equations.

From this example, one might conclude that even a person with no ambition to become an experimentalist needs an appreciation of the experimental process and the factors that influence the degree of goodness of experimental data and results from simulations.

Whenever the experimental approach is to be used to answer a question or to find the solution to a problem, the question of how good the results will be should be considered long before an experimental apparatus is constructed and data are taken. If the answer or solution must be known within, say, 5% for it to be useful to us, it would make no sense to spend the time and money to perform the experiment only to find that the probable amount of error in the results was considerably more than 5%.

In this book we use the concept of uncertainty to describe the degree of goodness of a measurement, experimental result, or analytical (simulation) result. Schenck [3] quotes S. J. Kline as defining an experimental uncertainty as “what we think the error would be if we could and did measure it by calibration.”

An error δ is a quantity that has a particular sign and magnitude, and a specific error δi is the difference caused by error source i between a quantity (measured or simulated) and its true value. As we will discuss in detail later, it is generally assumed that each error whose sign and magnitude are known has been removed by correction. Any remaining error is thus of unknown sign and magnitude,1 and an uncertainty u is estimated with the idea that ±u characterizes the range containing δ.

Uncertainty u is thus an estimate: a ±u interval2 is an estimate of a range within which we believe the actual (but unknown) value of an error δ lies. This is illustrated in Figure 1.3, which shows an uncertainty interval ±ud that contains the error δd whose actual sign and magnitude are unknown.

Figure 1.3 An uncertainty u defines an interval that is estimated to contain the actual value of an error of unknown sign and magnitude.

Uncertainty analysis (the analysis of the uncertainties in experimental measurements and in experimental and simulation results) is a powerful tool. This is particularly true when it is used in the planning and design of experiments. As we will see in Chapter 4, there are realistic, practical cases in which all the measurements in an experiment can be made with 1% uncertainty yet the uncertainty in the final experimental result will be greater than 50%. Uncertainty analysis, when used in the initial planning phase of an experiment, can identify such situations and save the experimentalist much time, money, and embarrassment.
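To make this amplification concrete, consider a hypothetical illustration (our numbers, not an example from the text). Suppose a result is defined as the small difference of two nearly equal measured values, r = X1 − X2. With uncorrelated uncertainties, the Taylor Series combination discussed in Chapter 3 gives

$$u_r = \sqrt{u_{X_1}^2 + u_{X_2}^2}$$

If X1 = 101 and X2 = 100, each measured with 1% uncertainty (uX1 = 1.01, uX2 = 1.00), then

$$u_r = \sqrt{(1.01)^2 + (1.00)^2} \approx 1.42$$

so the result r = 1 carries an uncertainty of roughly 142%, even though every input was measured to within 1%.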

1-1.3 Experimentation and Validation of Simulations

Over the past several decades, advances in computing power, modeling approaches, and numerical solution algorithms have increased the ability of the scientific and engineering community to simulate real-world processes to the point that it is realistic for predictions from surprisingly detailed simulations to be used to replace much of the experimentation that was previously necessary to develop designs for new systems and bring them to the market. The new systems to which we refer cover the gamut from simple mechanical and structural devices to rocket engine injectors to commercial aircraft to military weapons systems to nuclear power systems.

In the past, it was necessary to test (experimentally determine) subsystem and system performance at numerous set points covering the expected domain of operation of the system. For large, complex systems the required testing program can be prohibitively expensive, if not outright impossible, with available finite resources. The current approach seeks to replace some or much of the experimentation with (cheaper) simulation results that have been validated with experimental results at selected set points—but to do this with confidence one must know “how good” the predictions are at the selected set points. This has led to the emergence of the field called verification and validation (V&V) of simulations (e.g., models, codes).

The verification part refers to application of approaches to determine that the algorithms solve the equations in the model correctly and to estimate the numerical uncertainty if the equations are discretized as, for example, in the finite-difference, finite-element, and finite-volume approaches used in computational mechanics. Verification addresses the question of whether the equations are solved correctly but does not address the question of how well the equations represent the real world. Validation is the process of determining the degree to which a model is an accurate representation of the real world—it addresses the question of how good the predictions are.

Verification is a necessary component of the validation process and will be described briefly with references cited to guide the reader who desires more detail, but more than that is beyond the scope of what we want to cover in this book. Since experimentation and the uncertainties in experimental results and in simulation results are central issues in validation, the details of validation are covered in this book. Basic ideas and concepts are developed as they arise naturally in the discussion of experimental uncertainty analysis—for example, estimating the uncertainty in the simulation result due to the uncertainties in the simulation inputs. The application of the ideas and concepts in validation are covered in Chapter 9 with detailed discussion and examples.

1-2 EXPERIMENTAL APPROACH

In most experimental programs the experimental result (or question to be answered) is not directly measured but is determined by combining multiple measured variables using a data reduction equation (DRE). Examples are determination of the rate of heat transfer within a heat exchanger by measuring flow rates and temperatures and using tabulated fluid properties. Likewise, all of the dimensionless groups such as drag coefficient, Nusselt number, Reynolds number, and Mach number that are used to present the results of a test are themselves DREs. In addition to determining the appropriate DRE(s) for the experimental program, other questions must be answered.
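As a concrete sketch of a DRE, the heat exchanger example might be written as q = ṁ cp (Tout − Tin). The following minimal Python fragment (our illustration; the names, units, and values are hypothetical) shows the result being computed from, rather than measured alongside, the individual variables:

# Minimal sketch of a data reduction equation (DRE): the experimental
# result q is computed from several measured variables rather than
# being measured directly. All names and values are illustrative.

def heat_rate(mdot, cp, t_in, t_out):
    """Heat transfer rate q = mdot * cp * (t_out - t_in), in watts."""
    return mdot * cp * (t_out - t_in)

# Hypothetical measured values: flow rate (kg/s), specific heat
# (J/kg-K) from property tables, and inlet/outlet temperatures (K).
q = heat_rate(mdot=1.2, cp=4186.0, t_in=300.0, t_out=310.0)
print(f"q = {q:.0f} W")  # prints q = 50232 W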

1-2.1 Questions to Be Considered

When an experimental approach is to be used to find a solution to a problem, many questions must be considered. Among these are the following:

What question are we trying to answer? (What is the problem?)

How accurately do we need to know the answer? (How is the answer to be used?)

What physical principles are involved? (What physical laws govern the situation?)

What experiment or set of experiments might provide the answer?

What variables must be controlled? How well?

What quantities must be measured? How accurately?

What instrumentation is to be used?

How are the data to be acquired, conditioned, and stored?

How many data points must be taken? In what order?

Can the requirements be satisfied within the budget and time constraints?

What techniques of data analysis should be used?

What is the most effective and revealing way to present the data?

What unanticipated questions are raised by the data?

In what manner should the data and results be reported?

Although by no means all-inclusive, this list does indicate the range of factors that must be considered by the experimentalist. This might seem to be a discouraging and somewhat overwhelming list, but it need not be. With the aid of uncertainty analysis and a logical, thorough approach in each phase of an experimental program, the apparent complexities often can be reduced and the chances of achieving a successful conclusion enhanced.

A key point is to avoid becoming so immersed in the many details that must be considered that the overall objective of the experiment is forgotten. This statement may sound trite, but it is true nonetheless. We perform an experiment to find the answer to a question. We need to know the answer within some uncertainty, the magnitude of which is usually determined by the intended use of the answer. Uncertainty analysis is a tool that we use to make decisions in each phase of an experiment, always keeping in mind the desired result and uncertainty. Properly applied, this approach will guide us past the pitfalls that are usually not at all obvious and will enable us to obtain an answer with an acceptable uncertainty.

1-2.2 Phases of Experimental Program

There are numerous ways that a general experimental program can be divided into different components or phases. For our discussions in this book, we consider the experimental phases to be planning, design, construction, debugging, execution, data analysis, and reporting of results. There are not sharp divisions between these phases—in fact, there is generally overlap and sometimes several phases will be ongoing simultaneously (as when something discovered during debugging leads to a design change and additional construction on the apparatus).

In the planning phase we consider and evaluate the various approaches that might be used to find an answer to the question being addressed. This is sometimes referred to as the preliminary design phase.

In the design phase we use the information found in the planning phase to specify the instrumentation needed and the details of the configuration of the experimental apparatus. The test plan is identified and decisions made on the ranges of conditions to be run, the data to be taken, the order in which the runs will be made, and so on.

During the construction phase, the individual components are assembled into the overall experimental apparatus, and necessary instrument calibrations are performed.

In the debugging phase, the initial runs using the apparatus are made and the unanticipated problems (which must always be expected!) are addressed. Often, results obtained in the debugging phase will lead to some redesign and changes in the construction and/or operation of the experimental apparatus. At the completion of the debugging phase, the experimentalist should be confident that the operation of the apparatus and the factors influencing the uncertainty in the results are well understood.

During the execution phase, the experimental runs are made and the data are acquired, recorded, and stored. Often, the operation of the apparatus is monitored using checks that were designed into the system to guard against unnoticed and unwanted changes in the apparatus or operating conditions.

During the data analysis phase, the data are analyzed to determine the answer to the original question or the solution to the problem being investigated. In the reporting phase, the data and conclusions should be presented in a form that will maximize the usefulness of the experimental results.

In the chapters that follow we discuss a logical approach for each of these phases. We will find that the use of uncertainty analysis and related techniques (e.g., balance checks) will help to ensure a maximum return for the time, effort, and financial resources invested.

1-3 BASIC CONCEPTS AND DEFINITIONS

There is no such thing as a perfect measurement. All measurements of a variable contain inaccuracies. Because it is important to have an understanding of these inaccuracies if we are to perform experiments (use the experimental approach to answer a question) or if we are simply to use values that have been determined experimentally, we must carefully define the concepts involved. As stated in the previous section, generally a data reduction equation DRE is used to combine multiple measured variables into a test result, so it is necessary to consider errors and uncertainties within the context of a single measured variable and then to consider how those errors and uncertainties propagate through the DRE to produce the errors and uncertainties in the result. There are currently two approaches used to model the propagation.

The first is the Monte Carlo Method (MCM), which samples errors drawn from assumed distributions and simulates running the test many times with a different set of sampled errors each time. The authoritative international guide for this approach is the Joint Committee for Guides in Metrology (JCGM) [4], which we will henceforth refer to as the 2008 GUM and use as the standard reference for the MCM.
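A minimal sketch of the MCM idea, reusing the hypothetical heat-rate DRE above (the distributions, standard deviations, and sample count are assumptions for illustration, not values from the text):

import numpy as np

rng = np.random.default_rng(seed=1)
M = 100_000  # number of simulated "runs" of the test

# Each iteration draws one error per input from its assumed distribution,
# simulating running the test M times with a different error set each time.
mdot = 1.2 + rng.normal(0.0, 0.012, M)      # Gaussian error source
dT = 10.0 + rng.normal(0.0, 0.10, M)        # Gaussian error source
cp = 4186.0 + rng.uniform(-20.0, 20.0, M)   # rectangular error source

q = mdot * cp * dT  # evaluate the DRE once per sampled error set

u_q = np.std(q, ddof=1)                     # standard uncertainty of q
lo, hi = np.percentile(q, [2.5, 97.5])      # 95% coverage interval
print(f"u_q = {u_q:.0f} W; 95% interval = [{lo:.0f}, {hi:.0f}] W")

The spread of the M computed results characterizes the uncertainty in q without any linearization of the DRE.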

The second is the classic Taylor Series Method (TSM) with higher-order terms neglected (thus making it less “exact”). The authoritative international guide for this approach is from the International Organization for Standardization (ISO) [5], which we will henceforth refer to as the 1993 GUM and use as the standard reference for the TSM.
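For a result r = r(X1, …, XJ), the TSM with higher-order terms neglected and uncorrelated errors reduces to the familiar propagation equation (derived in Appendix B; correlated-error terms are treated in later chapters):

$$u_r^2 = \sum_{i=1}^{J} \left( \frac{\partial r}{\partial X_i} \right)^2 u_{X_i}^2$$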

The two propagation methods are discussed in detail beginning in Chapter 3, but some differences in the way they model the errors and uncertainties will be discussed in the following sections and also in Chapter 2 as we consider the concepts of coverage intervals and confidence intervals in cases in which a directly measured variable is the desired experimental result and thus there is no DRE.

1-3.1 Errors and Uncertainties

Consider a variable X in a process that is considered to be steady so that its true value (Xtrue) is constant. Measurements of the variable are influenced by a number of elemental error sources—such as the errors in the standard used for calibration and from an imperfect calibration process; errors caused by variations in ambient temperature, humidity, pressure, vibrations, electromagnetic influences; unsteadiness in the “steady-state” phenomenon being measured; errors due to undesired interactions of the transducer with the environment; errors due to imperfect installation of the transducer; and others.

As an example, suppose that a measurement system is used to make N successive measurements of X and that the measurements are influenced by five significant error sources, as shown in Figure 1.4.

Figure 1.4 Measurement of a variable influenced by five error sources.

The first three of those measurements are given by

$$X_i = X_{\mathrm{true}} + (\delta_1)_i + (\delta_2)_i + (\delta_3)_i + (\delta_4)_i + (\delta_5)_i, \qquad i = 1, 2, 3 \tag{1.1}$$

where δ1 is the value of the error from the first source, δ2 the value of the error from the second source, and so on. Each of the measurements X1, X2, and X3 has a different value since errors from some of the sources vary during the period when measurements are taken and so are different for each measurement while others do not vary and so are the same for each measurement. Using traditional nomenclature, we assign the symbol β (beta) to designate an error that does not vary during the measurement period and the symbol ε (epsilon) to designate an error that does vary during the measurement period. For this example, we will assume that the errors from sources 1 and 2 do not vary and the errors from sources 3, 4, and 5 do vary, so that Eq. (1.1) can be written

$$X_i = X_{\mathrm{true}} + \beta_1 + \beta_2 + (\varepsilon_3)_i + (\varepsilon_4)_i + (\varepsilon_5)_i, \qquad i = 1, 2, 3 \tag{1.2}$$

Since just by looking at the measured values we cannot distinguish between β1 and β2 or among ε3, ε4, and ε5, Eq. (1.3) describes what we actually have,

$$X_i = X_{\mathrm{true}} + \beta + (\varepsilon)_i \tag{1.3}$$

where now

$$\beta = \beta_1 + \beta_2, \qquad (\varepsilon)_i = (\varepsilon_3)_i + (\varepsilon_4)_i + (\varepsilon_5)_i \tag{1.4}$$

This process of making successive measurements of X is shown schematically in Figure 1.5. In Figure 1.5a, the first measurement X1 is shown. The difference between the measured value and the true value is the total error (δ)1, which is the sum of the invariant error β (the combination of all of the errors from the invariant elemental error sources) and the varying error (ε)1 (the combination at the time X1 is measured of all of the errors from the error sources that vary during the period that our N measurements are taken). In Figure 1.5b the second measurement is also shown, and of course the total error (δ)2 differs from (δ)1 because the varying error ε is different for each measurement.

Figure 1.5 Effect of errors on successive measurements of a variable X.

If we continued to acquire additional measurements, we could plot a histogram, which presents the fraction of the N total measurements with values between X and X + ΔX, X + ΔX and X + 2ΔX, X + 2ΔX and X + 3ΔX, and so on, versus X, where ΔX is the bin width. This is shown schematically in Figure 1.5c and allows us to view the distribution of the total of N measured values. This distribution of the sample population of N measurements often tends to have a larger number of the measured values near the mean of the sample and a decreasing number of measured values as one moves away from the mean. A mean value can be calculated, as can a standard deviation s, which is an indicator of the width of the distribution of the X values (the amount of “scatter” of the measurements caused by the errors from the elemental sources that varied during the measurement period).
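A minimal sketch of this binning (our illustration; the readings and bin width are hypothetical):

import numpy as np

# Bin N measurements of X into intervals of width dX and compute the
# fraction of readings falling in each bin, as in Figure 1.5c.
rng = np.random.default_rng(seed=2)
X = 97.2 + rng.normal(0.0, 0.4, size=24)  # illustrative readings only

dX = 0.25                                  # bin width
edges = np.arange(X.min(), X.max() + dX, dX)
counts, edges = np.histogram(X, bins=edges)
fractions = counts / X.size                # fraction of readings per bin

for left, frac in zip(edges[:-1], fractions):
    print(f"[{left:6.2f}, {left + dX:6.2f}): {frac:.3f}")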

As the number of measurements approaches infinity, the parent population distribution would likely appear as shown in Figure 1.5d (although it would not necessarily be exactly symmetric), with the mean μ offset from Xtrue by β, the combination of all of the invariant errors. Of course, we never have an infinite number of measurements, but conceptually the idea of the parent population distribution is very useful to us.

An example of this behavior exhibited by a real set of measurements is shown in Figure 1.6. A thermometer immersed in an insulated container of water was read independently by 24 of our students to the nearest tenth of a degree Fahrenheit. Unknown to the students, the thermometer was biased (“read high”) by a little over a degree, and the “true” temperature of the water was about 96.0°F. The temperatures read by the students are distributed around an average value of about 97.2°F and are biased (offset) from the true value of 96.0°F.

Figure 1.6 Histogram of temperatures read from a thermometer by 24 students.

With such a data sample, what we would like to do is use information from the sample to specify some range (Xbest ± uX) within which we think Xtrue falls. Generally we take Xbest to be equal to the average value of the N measurements (or to the single measured value X if N = 1). The uncertainty uX is an estimate of the interval (±uX) that likely contains the magnitude of the combination of all of the errors affecting the measured value X. Look back at the first measurement of X illustrated in Figure 1.5a and imagine that it is influenced by five significant error sources as in Figure 1.4. Then, recalling Eq. (1.2), the expression for the first measured value X1 is given by

$$X_1 = X_{\mathrm{true}} + \beta_1 + \beta_2 + (\varepsilon_3)_1 + (\varepsilon_4)_1 + (\varepsilon_5)_1 \tag{1.5}$$

To associate an uncertainty with a measured X value, we need to have elemental uncertainty estimates for all of the elemental error sources. That is, u1 is an uncertainty that defines an interval (±u1) within which we think the value of β1 falls, while u3 is an uncertainty that defines an interval (±u3) within which we think the value of ε3 falls.

Using the concepts and procedures in the 2008 GUM [4] and the 1993 GUM [5], a standard uncertainty (u) is defined as an estimate of the standard deviation of the parent population from which a particular elemental error originates. For N measurements of X, the standard deviation sX of the sample distribution shown in Figure 1.5c can be calculated as

$$s_X = \left[ \frac{1}{N-1} \sum_{i=1}^{N} \left( X_i - \bar{X} \right)^2 \right]^{1/2} \tag{1.6}$$

where the mean value of X is calculated from

$$\bar{X} = \frac{1}{N} \sum_{i=1}^{N} X_i \tag{1.7}$$

How can we determine which error sources' influences are included in (the standard uncertainty) sX and which ones are not? There is only one apparent answer—we must determine which of the elemental error sources did not vary during the measurement period and thus produced errors that were the same in each measurement. These are the invariant error sources, and their influence is not included in sX. Conversely, the influences of all of the elemental error sources that varied during the measurement period (whether one knows the number of them or not) are included in sX. To understand and to take into account the effects of all of the significant error sources, then, we must identify two categories—the first category contains all of the invariant sources whose effects are not included in sX, and the second category contains all of the sources that varied during the measurement period and whose effects are included in sX. This leaves the standard uncertainties for the invariant error sources to be estimated before we can determine the standard uncertainty uX to associate with the measured variable X.
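A minimal sketch of Eqs. (1.6) and (1.7) (the readings are hypothetical; note that sX computed this way reflects only the error sources that varied during the measurement period):

import math

def sample_mean(x):
    """Mean of the N measurements, Eq. (1.7)."""
    return sum(x) / len(x)

def sample_std(x):
    """Sample standard deviation sX of the N measurements, Eq. (1.6).
    Captures only the scatter from error sources that varied while the
    readings were taken; invariant (systematic) errors leave no trace here."""
    xbar = sample_mean(x)
    return math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (len(x) - 1))

readings = [97.1, 97.3, 97.0, 97.4, 97.2]  # hypothetical thermometer readings
print(sample_mean(readings), sample_std(readings))  # 97.2, ~0.158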

1-3.2 Categorizing and Naming Errors and Uncertainties

Traditional U.S.: Random/Systematic Categorization. In the U.S., errors have traditionally been categorized by their effect on the measurement. The resulting nomenclature has used the name “random” for errors that varied during the measurement period and the name “systematic” for errors that were invariant during the measurement period. The uncertainties associated with the errors are given similar random/systematic designations. This nomenclature will be used in this book, so we will be referring to random standard uncertainties and to systematic standard uncertainties.

The use of the word “random” is somewhat unfortunate, since the category actually contains all of the errors that vary and many of the variations are not random. Often, in “steady state” tests of engineering systems there is a drift with time that contributes to the observed variability that is definitely not random. We will use the random designation while noting the often incorrect connotation that it implies.

Systematic errors are invariant at a given set point, but may have different values at different set points. Systematic uncertainties that are quoted as a “percent of reading” are a good example of this. This will be covered in detail as we proceed with our discussions.

1993 GUM: Type A/Type B Categorization. The 1993 GUM [5] recommended designating the standard uncertainties for the elemental sources by the way in which they are evaluated. A type A evaluation of uncertainty is defined as a “method of evaluation of uncertainty by the statistical analysis of series of observations,” and the symbol s is used. A type B evaluation of uncertainty is defined as a “method of evaluation of uncertainty by means other than the statistical analysis of series of observations,” and the generic symbol u is used. If, in the case discussed in Sec. 1-3.1, b1 was estimated by a statistical evaluation using calibration data, it would be a type A standard uncertainty and could be designated b1,A. If b2 was estimated using an analytical model of the transducer and its boundary conditions, it would be a type B standard uncertainty and could be designated b2,B. If sX was calculated statistically as described, it would be a type A standard uncertainty and could be designated sX,A.

Engineering Risk Analysis: Aleatory/Epistemic/Ontological Categorization. In the fields of engineering risk, safety, and reliability analysis, an uncertainty categorization is used with the nomenclature aleatory, epistemic, and ontological. Squair [6] presents an interesting discussion of this and compares it with former U.S. Secretary of Defense Rumsfeld's “known knowns, known unknowns, and unknown unknowns.” Aleatory uncertainty is related to variability and can be viewed as a “known known” when the variability is calculated (perhaps as a standard deviation) from multiple measurements in an engineering experiment. Epistemic uncertainty is related to incertitude and corresponds to the “known unknowns.” In Squair's words: “We lack in knowledge, but are aware of our lack.” The ontological uncertainty