Planning and Executing Credible Experiments

Robert J. Moffat
Description

Covers experiment planning, execution, analysis, and reporting

This single-source resource guides readers in planning and conducting credible experiments for engineering, science, industrial processes, agriculture, and business. The text takes experimenters all the way through conducting a high-impact experiment, from initial conception, through execution of the experiment, to a defensible final report. It prepares the reader to anticipate the choices faced during each stage.

Filled with real-world examples from engineering science and industry, Planning and Executing Credible Experiments: A Guidebook for Engineering, Science, Industrial Processes, Agriculture, and Business offers chapters that challenge experimenters at each stage of planning and execution and emphasizes uncertainty analysis as a design tool in addition to its role for reporting results. Tested over decades at Stanford University and internationally, the text employs two powerful, free, open-source software tools: GOSSET to optimize experiment design, and R for statistical computing and graphics. A website accompanies the text, providing additional resources and software downloads.

  • A comprehensive guide to experiment planning, execution, and analysis
  • Leads from initial conception, through the experiment’s launch, to final report
  • Prepares the reader to anticipate the choices faced throughout an experiment
  • Hones the motivating question
  • Employs principles and techniques from Design of Experiments (DoE)
  • Selects experiment designs to obtain the most information from fewer experimental runs
  • Offers chapters that propose questions that an experimenter will need to ask and answer during each stage of planning and execution
  • Demonstrates how uncertainty analysis guides and strengthens each stage
  • Includes examples from real-life industrial experiments
  • Accompanied by a website hosting open-source software

Planning and Executing Credible Experiments is an excellent resource for graduates and senior undergraduates—as well as professionals—across a wide variety of engineering disciplines.

Page count: 625

Year of publication: 2021




Table of Contents

Cover

Title Page

Copyright Page

Dedication Page

About the Authors

Preface

Audience

Accompanying Material

Recommended Companion Texts

Acknowledgments

About the Companion Website

1 Choosing Credibility

1.1 The Responsibility of an Experimentalist

1.2 Losses of Credibility

1.3 Recovering Credibility

1.4 Starting with a Sharp Axe

1.5 A Systems View of Experimental Work

1.6 In Defense of Being a Generalist

References

Homework

2 The Nature of Experimental Work

2.1 Tested Guide of Strategy and Tactics

2.2 What Can Be Measured and What Cannot?

2.3 Beware Measuring Without Understanding: Warnings from History

2.4 How Does Experimental Work Differ from Theory and Analysis?

2.5 Uncertainty

2.6 Uncertainty Analysis

References

Homework

3 An Overview of Experiment Planning

3.1 Steps in an Experimental Plan

3.2 Iteration and Refinement

3.3 Risk Assessment/Risk Abatement

3.4 Questions to Guide Planning of an Experiment

Homework

4 Identifying the Motivating Question

4.1 The Prime Need

4.2 An Anchor and a Sieve

4.3 Identifying the Motivating Question Clarifies Thinking

4.4 Three Levels of Questions

4.5 Strong Inference

4.6 Agree on the Form of an Acceptable Answer

4.7 Specify the Allowable Uncertainty

4.8 Final Closure

Reference

Homework

5 Choosing the Approach

5.1 Laying Groundwork

5.2 Experiment Classifications

5.3 Real or Simplified Conditions?

5.4 Single‐Sample or Multiple‐Sample?

5.5 Statistical or Parametric Experiment Design?

5.6 Supportive or Refutative?

5.7 The Bottom Line

References

Homework

6 Mapping for Safety, Operation, and Results

6.1 Construct Multiple Maps to Illustrate and Guide Experiment Plan

6.2 Mapping Prior Work and Proposed Work

6.3 Mapping the Operable Domain of an Apparatus

6.4 Mapping in Operator's Coordinates

6.5 Mapping the Response Surface

7 Refreshing Statistics

7.1 Reviving Key Terms to Quantify Uncertainty

7.2 The Data Distribution Most Commonly Encountered

The Normal Distribution for Samples of Infinite Size

7.3 Account for Small Samples: The t‐Distribution

7.4 Construct Simple Models by Computer to Explain the Data

7.5 Gain Confidence and Skill at Statistical Modeling Via the R Language

7.6 Report Uncertainty

7.7 Decrease Uncertainty (Improve Credibility) by Isolating Distinct Groups

7.8 Original Data, Summary, and R

References

Homework

8 Exploring Statistical Design of Experiments

8.1 Always Seeking Wiser Strategies

8.2 Evolving from Novice Experiment Design

8.3 Two‐Level and Three‐Level Factorial Experiment Plans

8.4 A Three‐Level, Three‐Factor Design

8.5 The Plackett–Burman 12‐Run Screening Design

8.6 Details About Analysis of Statistically Designed Experiments

8.7 Retrospect of Statistical Design Examples

8.8 Philosophy of Statistical Design

8.9 Statistical Design for Conditions That Challenge Factorial Designs

8.10 A Highly Recommended Tool for Statistical Design of Experiments

8.11 More Tools for Statistical Design of Experiments

8.12 Conclusion

Further Reading

Homework

9 Selecting the Data Points

9.1 The Three Categories of Data

9.2 Populating the Operating Volume

9.3 Example from Velocimetry

9.4 Organize the Data

9.5 Strategies to Select Next Data Points

9.6 Demonstrate Gosset for Selecting Data

9.7 Use Gosset to Analyze Results

9.8 Other Options and Features of Gosset

9.9 Using Gosset to Find Local Extrema in a Function of Several Variables

9.10 Summary

Further Reading

Homework

10 Analyzing Measurement Uncertainty

10.1 Clarifying Uncertainty Analysis

10.2 Definitions

10.3 The Sources and Types of Errors

10.4 The Basic Mathematics

10.5 Automating the Uncertainty Analysis

10.6 Single‐Sample Uncertainty Analysis

References

Further Reading

Homework

11 Using Uncertainty Analysis in Planning and Execution

11.1 Using Uncertainty Analysis in Planning

11.2 Perform Uncertainty Analysis on the DREs

11.3 Using Uncertainty Analysis in Selecting Instruments

11.4 Using Uncertainty Analysis in Debugging an Experiment

11.5 Reporting the Uncertainties in an Experiment

11.6 Multiple‐Sample Uncertainty Analysis

11.7 Coordinate with Uncertainty Analysis Standards

11.8 Describing the Overall Uncertainty in a Single Measurement

11.9 Additional Statistical Tools and Elements

References

Homework

12 Debugging an Experiment, Shakedown, and Validation

12.1 Introduction

12.2 Classes of Error

12.3 Using Time‐Series Analysis in Debugging

12.4 Examples

12.5 Process Unsteadiness

12.6 The Effect of Time‐Constant Mismatching

12.7 Using Uncertainty Analysis in Debugging an Experiment

12.8 Debugging the Experiment via the Data Interpretation Program

12.9 Situational Uncertainty

13 Trimming Uncertainty

13.1 Focusing on the Goal

13.2 A Motivating Question for Industrial Production

13.3 Plackett–Burman 12‐Run Results and Motivating Question #3

13.4 PB 12‐Run Results and Motivating Question #1

13.5 Uncertainty Analysis of Dual Predictive Model and Motivating Question #2

13.6 The PB 12‐Run Results and Individual Machine Models

13.7 Final Answers to All Motivating Questions for the PB Example Experiment

13.8 Conclusions

14 Documenting the Experiment

14.1 The Logbook

14.2 Report Writing

14.3 International Organization for Standardization, ISO 9000 and other Standards

14.4 Never Forget. Always Remember

Appendix A: Distributing Variation and Pooled Variance

A.1 Inescapable Distributions

A.2 Other Common Distributions

A.3 Pooled Variance (Advanced Topic)

Appendix B: Illustrative Tables for Statistical Design

B.1 Useful Tables for Statistical Design of Experiments

B.2 The Plackett–Burman (PB) Screening Designs

Appendix C: Hand Analysis of Two‐Level Factorial Designs

C.1 The General Two‐Level Factorial Design

C.2 Estimating the Significance of the Apparent Factor Effects

C.3 Hand Analysis of a Plackett–Burman (PB) 12‐Run Design

C.4 Illustrative Practice Example for the PB 12‐Run Pattern

C.5 Answer Key: Compare Your Hand Calculations

C.6 Equations for Hand Calculations

Appendix D: Free Recommended Software: Obtain Recommended Free, Open‐Source Software for Your Computer

D.1 Instructions to Obtain the R Language for Statistics

D.2 Instructions to Obtain LibreOffice

D.3 Instructions to Obtain Gosset

D.4 Possible Use of RStudio

Index

End User License Agreement

List of Tables

Chapter 3

Table 3.1 Overview of a research experiment plan.

Table 3.2 Review the program plan. Do risk assessment and plan risk abatement. I...

Table 3.3 Assess the credibility of the program. Do risk assessment and plan ...

Chapter 6

Table 6.1 Stanton number as a function of blowing fraction, F.

Table 6.2 Data format for computer and statistical analysis: text example.

Chapter 7

Table 7.1 Economic experiment example for shoe store.

Table 7.2 t‐Distribution.

Chapter 8

Table 8.1 Example 1, Part 1: codebook of test conditions for a three‐level, t...

Table 8.2 Example 1, Part 2: coding and assignment of trial identification nu...

Table 8.3 Example 1, Part 3: the randomized trial schedule and test results.

Table 8.4 Example 1, Part 4: Analysis of Variance of the modeling terms.

Table 8.5 Example 2, Part 1: Codebook of factors and codings for a six‐factor...

Table 8.6 Example 2, Part 2: 12‐run PB design for a six‐ parameter problem.

Table 8.7 Example 2, Part 3: Analysis of Variance as parsed for each model te...

Chapter 9

Table 9.1 Codebook for life jacket safety spreadsheet (total versus partial r...

Table 9.2 List of factors for industrial test plan example, plus material pro...

Table 9.3 PB‐guided initial 12 runs of industrial example.

Table 9.4 Analysis of Variance of factors for the full first‐order linear mod...

Chapter 11

Table 11.1 Sensitivity coefficients.

Table 11.2 Probability points of the double‐sided student's t‐distribution.

Table 11.3 Estimating sigma from S_N.

Chapter 13

Table 13.1 Candidate factors for factory production.

Table 13.2 Uncertainty of predicted material strength, for each machine, at k...

Table 13.3 Example 8.2 Analysis of Variance (anova) as parsed for each model ...

Table 13.4 R results of dual predictive model, mod9.

Table 13.5 Predicted strengths over the operating map using the dual predicti...

Table 13.6 Uncertainty of dual prediction model material strength, for each m...

Table 13.7 Overall Uncertainty at key operating points. Compare dual and indi...

Table 13.8 Predicted strengths over the operating map using individual machi...

Table 13.9 Side‐by‐side comparison of all model values derived in this chapte...

Appendix A

Table A.1 Factors for various confidence intervals, Student's double‐sided t‐...

Appendix B

Table B.1 Sets of random orders for trials.

Table B.2 Two‐level factorial design logical patterns.

Table B.3 Twelve‐run PB design pattern.

Table B.4 Eight‐run PB design pattern.

Appendix C

Table C.1 Quick hand statistics: means, ranges, variances (Chapter 8, Example...

Table C.2 Legend of worksheet interim values (Chapter 8, Example 1, Part 6).

Table C.3 Computing table for two‐level factorial experiments (≤ five factors...

Table C.4 Computing the factor effects from a two‐level design (Chapter 8, Ex...

Table C.5 Twelve‐run PB worksheet (using data from Chapter 8, Example 2).

Table C.6 Selecting a recommended significance level.

List of Illustrations

Chapter 1

Figure 1.1 The experiment viewed as an instrument. Adjust the instrument by ...

Figure 1.2 The Bundt cake as delivered. A high heat‐transfer coefficient lif...

Chapter 2

Figure 2.1 Rice cooker design trajectory.

Figure 2.2 Is this a single sine wave with some scatter in the data? Does it...

Chapter 5

Figure 5.1 A statistical plan focuses on a region around the central point. ...

Chapter 6

Figure 6.1 Mapping the prior art shows where to work and where to look for a...

Figure 6.2 Positioning the operable domain of the second rig in this way all...

Figure 6.3 The hardware components of the applied thermometry rig.

Figure 6.4 The first description of the operable domain, in the simplest coo...

Figure 6.5 The domain is closed by two constraints: the mass flow limit line...

Figure 6.6 Unit Reynolds number versus velocity, with overlaid lines of cons...

Figure 6.7 The final form of the operable domain map, in unit Reynolds numbe...

Figure 6.8 A map in operator's coordinates allows for speedy and safe change...

Figure 6.9 The data from Table 6.1, plotted in a conventional manner, are no...

Figure 6.10 The data from Table 6.1, plotted in an oblique view, reveals tre...

Chapter 7

Figure 7.1 Model prediction and experimental results.

Figure 7.2 The Normal Distribution.

Figure 7.3 Various plots to inspect results of the shoe store example experi...

Figure 7.4 Initial model of hiloy data.

Figure 7.5 Model 5 provided best “least‐squares” straight line fits for each...

Figure 7.6 Contributions to Uncertainty at 95% confidence for model 5 terms....

Figure 7.7 Compare Uncertainty bands for two models of alloy C.

Chapter 8

Figure 8.1 Sampling locations.

Figure 8.2 The coded unit cube with eight corner points.

Chapter 9

Figure 9.1 We generally can estimate the shape of the Response Surface at on...

Figure 9.2 An estimated Response surface.

Figure 9.3 The Response Surface of a heat transfer experiment on a transpire...

Figure 9.4 (a) Six PB samples on Machine 1. (b) Six PB samples on Machine 2....

Figure 9.5 Strength results from the PB 12‐run screening.

Figure 9.6 (a) Gosset‐style samples on Machine 1. (b) Gosset‐style samples o...

Figure 9.7 (a) Operator conditions on Machine 1 for total experiment plan. (...

Chapter 10

Figure 10.1 Typical heat‐transfer data from a simple situation: a round cyli...

Figure 10.2 (a) Slow sampling on a rapidly changing signal gives the appeara...

Figure 10.3 Cumulative probability distributions for factory and field tests...

Figure 10.4 The measurement chain.

Figure 10.5 Worksheet for estimating fixed and random errors in a thermocoup...

Figure 10.6 (a) Schematic of high‐velocity flow in a duct with probe. (b) Sc...

Figure 10.7 Calibration data with its “scatter band” and two ways to describ...

Figure 10.8 Uncertainty in velocity measured with two different micro‐manome...

Figure 10.9 Flowchart for estimating the Uncertainty in a result by sequenti...

Figure 10.10 (a) A spreadsheet for estimating the Uncertainty in h by the pe...

Chapter 11

Figure 11.1 The test specimen.

Figure 11.2 Semi‐log plot of transient response of a first‐order system.

Figure 11.3 The Uncertainty is largest at low values of h when the test is r...

Figure 11.4 The Relative Uncertainty using the transient method is low at lo...

Figure 11.5 Direct comparison of the Uncertainties allows selection of the b...

Figure 11.6 The experiment as an instrument. Employ Uncertainty Analysis at ...

Chapter 12

Figure 12.1 The general calibration process applies to whole experiments as ...

Appendix C

Figure C.1 The coded unit cube with eight corner points.


Planning and Executing Credible Experiments

A Guidebook for Engineering, Science, Industrial Processes, Agriculture, and Business

Robert J. Moffat

Stanford University, USA

Roy W. Henk, Ph.D., P.E.

Kyoto University, Japan (retired)

This edition first published 2021
© 2021 Robert J. Moffat and Roy W. Henk.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Robert J. Moffat and Roy W. Henk to be identified as the authors of this work has been asserted in accordance with law.

Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of WarrantyWhile the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging‐in‐Publication data applied for
ISBN HB: 9781119532873

Cover Design: Wiley
Cover Image: © NASA

In memory of Lou London. RJM

For Cherrine and Neal and those they love. RWH

Robert J. Moffat
Stanford University, USA

Roy W. Henk
Kyoto University, Japan

About the Authors

Dr. Robert J. Moffat is Professor Emeritus at Stanford University and former President of Moffat Thermosciences, Inc. Professor Moffat started his professional career at the General Motors Research Laboratories in the Gas Turbine Laboratory. He and a small group of engineers designed, built, and tested a high efficiency, two‐spool regenerative gas turbine that, starting from 30 below zero Fahrenheit, could deliver 350 HP in less than 2 minutes. He graduated from the University of Michigan and received his Master of Science at Wayne State University. Enrolling at Stanford University, he earned degrees in Mechanical Engineering as Master of Science, Engineer, and Ph.D. He became a Stanford professor and served as chairman of the Thermosciences Division for 13 years.

Professor Moffat’s research efforts have involved three areas: convective heat transfer in engineering systems, experimental methods in heat transfer and fluid mechanics, and biomedical thermal issues. His largest body of work concerns convective heat transfer. One program focused on gas turbine blade heat transfer. A second program aimed at convective cooling of electronic components, covering forced, free, and mixed convection. Several contributions arose, including invariant descriptors, a new heat transfer coefficient for electronics cooling, and a simple correlation based on turbulence intensity.

His second area of research concerned experimental methods in the thermosciences, namely full‐field imaging techniques for temperature, heat flux, and heat transfer coefficient measurement using thermochromic liquid crystals. He contributed regularly to the theory of uncertainty analysis. Dr. Moffat was an invited lecturer for 40 consecutive years in the Measurement Engineering Series, for more than 20 years in the Instrument Society of America Test Measurements Division, and for ten years in the ASME Professional Development program.

Dr. Moffat worked on biomedical engineering problems, in particular the thermal protection of newborn infants. He jointly developed a self‐contained, portable incubator which provided a neutral thermal environment for the infant while allowing free access by the attending physicians. Used on almost every continent where cold‐weather transport is needed, it received the ASME Holley Medal Award, 1987. He founded Moffat Thermosciences as a vehicle for consulting, research, and teaching in Heat Transfer and Experimental Methods. He delivered short courses in Electronics Cooling, Experimental Methods, and Uncertainty Analysis.

Dr. Roy W. Henk, professor in the Graduate School of Energy Science at Kyoto University, Japan, earned his bachelor’s degree at Virginia Tech and his master’s and doctorate at Stanford University. Professor Henk taught courses within natural and experimental philosophy, currently known as classical physics and mechanical engineering.

Professor Henk’s experience includes industry, government labs, and academia. His work has included wind tunnel tests at Virginia Tech and in Japan’s Mach 5 tunnel at NAL, and water tunnel tests at the U.S. Naval Research Lab and at Stanford. He worked internationally in the aerospace industry, designing and testing advanced engine components with IHI. He spearheaded improvements to the turbine engine design process. A registered Professional Engineer, Dr. Henk is keenly interested in appropriate energy.

Dr. Henk’s recent work focused on experimental methods. He has done diverse experiments, from field work on environmental flows, to tests in pristine laboratories, to materials tests for structures. He founded Royal HanMi to promote international energy science and design of experiments.

Professor Henk has taught at universities, public and private, internationally as well as in the USA. Two of Dr. Henk’s courses, Experiment Design and Statistical Modeling, drew students from the medical school, business, engineering and environmental schools. Researchers learned advanced strategies to select data and how to draw strong defensible conclusions from data.

Together with colleagues at Kyoto University, the Science University of Tokyo, IHI, and Tokyo University, he helped the nation brave the impact of the Fukushima tsunami. Together with colleagues at LeTourneau, the Virginia Military Institute, and Handong University, he trained our future. Dr. Henk has served within the Commonwealth of Virginia’s Governor’s Schools and STEM Academies.

Dr. Henk’s youth science book, UnLock Rocks, explores how rocks and crystals reveal the age of the Earth. Simple experiments, using kitchen ingredients, make concepts tangible and tasty. Two books were published in South Korea.

Titles
UnLock Rocks
Physics 1 Lab with Experiment Design (RoK)
Physics 2 Lab with Experiment Design (RoK)

with R.J. Moffat
Planning and Executing Credible Experiments

Preface

Laboratories (and businesses) need people who can answer difficult questions experimentally. If data are needed for decision‐making, this book is for you.

Science, medicine, the environment, agriculture, and engineering depend heavily on experiments. Business does as well, when surveying the added complexity of human taste and markets. In any of these fields, an experiment must be credible or it wastes time and resources. This book offers a tested guide, developed and used for decades at Stanford University, to equip novice researchers and experienced technicians to plan and execute credible experiments.

This book prepares the reader to anticipate the choices to be faced from the launch of a project to the final report. We come alongside the reader, emphasizing the strategies of experiment planning, execution, and reporting.

The foundation of this book originated in “Planning Experimental Programs,” a set of class notes by Robert Moffat, our first author. Our second author, Roy Henk, is one of thousands whose own experiments benefited from Moffat's notes. This book blends our approaches to developing and planning experimental programs.

Bob Moffat: Our goal for this book was to collect, organize, summarize, and present what we think are important ideas about developing, running, and reporting reproducible, provably accurate experiments – with minimum wasted effort.

My contribution has to do with the process of developing experimental programs that can be trusted to produce trustworthy data. I used the word “developing” because “new” experiments generally require quite a few changes to eliminate unforeseen (but finally recognized) errors before they qualify as “good” experiments. This is called “the shakedown phase.”

This raises a question, however: how can you spot an error in a “new” experiment? The only way I know is to run a test for which you already know the answer. We have the three conservation laws we can trust (conservation of mass, momentum, and energy), but they are a bit hard to work with. I think it generally requires less work to simply stretch the operable domain of the proposed experiment until it includes some conditions that have already yielded data that you trust. Those data become the “qualification test” for your experiment.

I've been designing and running experiments for more than 60 years, and my experience has been that the development of a good experiment is always iterative. It takes a lot of work during the shakedown period to eliminate all the sources of error. Surprisingly, this aspect of experiment planning isn't generally “taught.” That's really what started this book! The suggestions presented here reflect things I learned “on the job” during my first 10 years in industry and then refined during the rest of my career at Stanford.

When I got my BSME from the University of Michigan, I was offered a job in the General Motors (GM) Research Laboratories (GMR). I stayed with GMR almost exactly 10 years, rising from working on little problems like calibrating instruments to developing a high‐precision, high‐temperature wind tunnel for testing aircraft engine thermocouple probes up to 1600 °F, finding what caused the bearings to fail on an experimental engine, and developing high‐effectiveness heat exchangers for gas turbine engines. In the last two years of my stay, I was part of a small group of engineers who designed, built, and tested a high‐efficiency, two‐spool regenerative gas turbine that, starting from 30° below zero Fahrenheit, could deliver 350 HP in less than two minutes. That was developed for GM trucks and buses, but a descendant of that engine powered the US Army's M1A1 battle tank – the one designed for cold weather.

The increasingly interesting problems I encountered in the last three or four years made it clear to me that I needed to go back to school – there was too much that I didn't know.

I applied to Stanford with references from GMR and was offered a scholarship, which led to a PhD, a Stanford professorship, 15 years as head of the mechanical engineering department, and 25 years of research and teaching on engineering problems. In my spare time, I worked with Dr. Alvin Hackel, a Stanford pediatrician, to develop the Stanford Transport Incubator. Dr. Hackel provided the medical insights and I did the engineering. His patients, usually newborn and sometimes premature, often needed intensive medical care during transport between hospitals. That incubator won the American Society of Mechanical Engineers' 1987 Holley Medal for Service for the Benefit of Mankind and is displayed on the cover.

I retired and began to put this book together.

Now a word from my colleague, Roy Henk.

Roy Henk: With the aim to continually improve, experiments can fill one's personal daily life. My experimenting began on a small farm with vegetables and livestock (plus honeybees). Later experiments involved concrete mixing, internal combustion engines, molten metal, and measuring ore. At Virginia Tech, I joined low‐noise precision wind‐tunnel tests on flow transition; I have used wind tunnels at Stanford and in Japan at the National Aerospace Laboratory of Japan (NAL), even joining tests in a Mach 5 tunnel. NAL together with JAXA is Japan’s equivalent to NASA. My research used water tunnels at the US Naval Research Lab and at Stanford, and Bob’s notes. Ten years in industry had me designing advanced components of commercial aerospace engines. Compared to air flow around an airplane, thermo‐fluid flow inside an aerospace engine is about the most challenging area of classical physics. Those engines have flown on aircraft for years without incident.

I have thrilled at experimental successes, yet failures too have provided valuable data and lessons. For example, as a young researcher I did not realize the advantage of randomizing the order of data collection. How did this matter? In the middle of one particularly costly data run, a valve failed and stuck open. Had I randomized the run order, I would have had two valuable datasets at lower resolution; instead, that data had little use. Like ancient wisdom, Design of Experiments has spared me from much learning the hard way.
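Randomizing a run schedule costs almost nothing to set up. As a minimal sketch (our illustration, not one of the book's examples), the following R commands shuffle a hypothetical test matrix before data collection; the factor names and levels are invented.

# Randomize the run order of a planned test matrix (hypothetical factors and levels).
set.seed(42)                          # record the seed in your logbook for reproducibility
plan <- expand.grid(valve_setting = c("low", "mid", "high"),
                    flow_rate     = c(10, 20, 30, 40))   # 12 planned trials
plan$run_order <- sample(nrow(plan))  # random permutation of 1..12
plan <- plan[order(plan$run_order), ] # execute the trials in this shuffled order
print(plan)

If the rig fails partway through such a schedule, the completed runs still span the factor space instead of clustering at one end of it.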

No longer a bother, Uncertainty Analysis now informs every stage, fortifying results, averting disaster. To postpone uncertainty analysis starves one’s experiment.

While teaching Design of Experiments, I asked Bob Moffat about his book. Bob graciously allowed his notes to supplement my course. Over 18 years, I had developed notes on certain powerful, open‐source software for experiment design and analysis. Bob at Stanford and I, now as professor of energy science at Kyoto University, began exploring how to merge our individual works into a single book. This text is the fruit of our efforts.

This book serves as a generalist guide to experiment planning, execution, and analysis. The text also introduces two powerful, free, open‐source software tools: Gosset to optimize experiment design, and R for statistical computing. This book addresses renewed demands by the public and science community that science be credible. Recently the purported reliability of many landmark medical studies has been undermined due to poor experiment design, execution, or analysis. Separate studies could not reproduce their results. Furthermore, a rash of scientific papers have been retracted due to fraud. Credible science needs a solid new footing.

Audience

We encourage readers to use our text while planning and executing an actual experiment. If the reader already has a problem that must be answered by experiment, that provides the best motivation for each chapter. Each chapter proposes questions that an experimenter will need to ask and answer during each stage of planning and execution.

Major portions of this text have been class‐tested at Stanford in graduate and undergraduate mechanical engineering courses for decades (since the late 1970s). This text has also been class‐tested internationally, serving engineering, physics, chemistry, agriculture, industrial processes, medical, and business students. Drafts of this book have been used for a continuing education course by researchers at the National Renewable Energy Laboratory, among other national laboratories and industrial laboratories.

Our book (hereafter referred to as M&H) is designed to be a time‐proven, single‐source guide for an experimentalist, from initial conception of a need, through execution of the experiment, to final report. Our book will stand alone in the lab, yet it also introduces researchers to specialist texts.

Accompanying Material

The open‐source software referenced in this text is free on the internet. The software, along with example data from the text, is also provided on the companion website for this text.

Recommended Companion Texts

Our text forms a close companion with Hugh W. Coleman and W. Glenn Steele, Experimentation, Validation, and Uncertainty Analysis for Engineers, 4th ed., published by Wiley (hereafter referred to as C&S) (2018). Coleman was a student of Moffat's. C&S section 1.2, “Experimental Approach,” outlines our book in one page.

Statistics for Experimenters, 2nd ed., by George E. P. Box, J. Stuart Hunter, and William G. Hunter (hereafter referred to as BH&H), is a Wiley classic (2005).

Response Surface Methodology, 4th ed., by Raymond H. Myers, Douglas C. Montgomery, and Christine M. Anderson‐Cook (hereafter referred to as MM&AC), is also published by Wiley (2016).

How Is This Book Used for Teaching?

Our book has been used in Experiment Design courses as well as in diverse laboratory courses. Students in a variety of fields, including physics, engineering, chemistry, genetics, economics, medicine, and environmental studies, have used this book. Our text has also found a home directly in the lab (independent of classroom instruction) as a guidebook. It is written for self‐study and continuing education; as such, it has been the text for short courses at national labs.

Acknowledgments

We thank N.J.A. Sloane and R.H. Hardin for developing the Gosset Program for Design of Experiments. We deeply appreciate their releasing Gosset to the public domain in 2018. We expect that researchers will benefit from the capabilities of Gosset for decades to come.

We are grateful to the thousands upon thousands of users of the R statistical language, contributors and researchers worldwide who have evolved R into a powerful tool for statistical analysis, modeling, and plotting. Thank you all for promoting this open‐source, free software.

Robert thanks and dedicates this book to Lou London of Stanford University who taught me, inspired me, mentored me, and changed my life, opening doors across the world.

Roy especially thanks Helen L. Reed, William S. Saric, and William C. Reynolds for launching me into this most fascinating field, experimental thermal‐fluid physics. R.J. Hansen and R. Rollins at the U.S. Naval Research Lab introduced me to advanced instruments and signal processing. Stanford colleagues and students ensured that all angles were considered and defensible.

Colleagues at Kyoto University (京都大學), IHI (石川島播磨重工業株式会社), Science University of Tokyo, NAL, and Tokyo University provided a convivial environment while I worked in the aerospace engine industry and energy science. Together we braved the national impact of the Fukushima tsunami. Colleagues and students at Handong (韓東大學校) and LeTourneau provided valued support and feedback.

D.T. Kaplan and his books are outstanding for training statistical modeling.

We thank science editor A. Hunt of Wiley who paved the way for us as authors. S. Benjamin assembled documents. S. Brown ensured protocol. Ashwani Veejai Raj and her team converted diverse formats into typeset proofs. Eagle eyes of Cherrine R., Elaine H. and Lucy K. spotted stray prey. L. Poplawski corralled our project into final form. We thank you all.

Bob Moffat and his wife, Karina, have been constant encouragement as this work progressed. Whether on the other side of the world or in their dining area, they have been most generous.

My family and their enduring support are treasure beyond measure.

About the Companion Website

This book is accompanied by a companion website:

www.wiley.com/go/moffat/Planning

Scan this QR code to visit the companion website.

The website includes:

Problems

Sample data sets

Archived sources to install Gosset and R

Solution Manuals

1 Choosing Credibility

No one believes a theory except its originator.

Everyone believes an experiment except the experimenter.1

The decision to design a credible experiment sets you on a path to research with impact. Along this path you will make many decisions. This book prepares you to anticipate the choices you will face to plan and achieve an experiment that you, the experimentalist, also can believe.

There are two kinds of material to consider with respect to experimental methods: the mechanics of measurement and the strategy of experimentation. This book emphasizes the strategy and tactics of experiment planning.

The fact is that laboratories need people who can answer difficult questions experimentally. We offer this text to answer this need and to promote balanced competence in the field of experimental work. A saying attributed to Abraham Lincoln goes, “Give me six hours to chop down a tree and I will spend the first four sharpening the axe.”2 If there is anything like a “law of compound interest” in experimental work, then the effort spent to improve an experiment plan will return a bigger payoff than the same effort applied to fine‐tuning transducers.

It is relatively easy to deal with sensors, calibrations, and corrections: those are concrete, factual bits of technology. The basic mechanisms are well known and tested. Could this be why there are so many technical references to transducers and so few to experiment planning?

Strategy is not concrete, however; it contains large elements of opinion, experience, and personal preference. One cannot prove that a particular strategy is best! What seems to be a “clever insight” to one person may seem “dull and pedestrian” to another.

The ideas presented in this text were developed over 60 years of teaching and consulting on experimental methods. They provide an outline of a systematic way of designing experiments of provable accuracy. Not the only way, certainly, but at least one way. Each reader who has much experimental experience will have techniques to add to the list – we ask and welcome your feedback.

Novices might be inclined to take this text too literally, as though experiment planning were a quantitative discipline with rules that always worked. That would be a mistake. One must be flexible in the laboratory – following a sound basic philosophy – but taking advantage of the specific opportunities each experiment offers.

1.1 The Responsibility of an Experimentalist

People have always been impressed by data, as though it could never be wrong. As a young experimentalist, I was greatly impressed by the saying that leads this chapter. At first, I thought it was a clever play on words. Then I began to take it more seriously. It is true. People do seem to put more credence in experimental results than in analysis. I cannot tell you how many times I have heard someone say, with a tone of absolute finality, “Well, you can't argue with the data!” Fact is, you can and should.

This places a heavy responsibility on the experimenter. One may respond to such responsibility by taking great pains to establish the credibility of the data before actually taking data “for the record.” Another response is to simply crash ahead and take data on a plausible but not proven experiment, because you can't think of anything you have done wrong. After all, these latter folks seem to think, “If I don't like the data, I will do it again, differently.”

That latter view undermines credibility. If you feel free to ignore data because you don't like it or don't understand it, then you haven't run an experiment, you have just been playing around in the lab.

Many of the suggestions offered in this work are related to establishing credibility before the production data are taken: calibration of instruments, running baseline tests, etc. In this respect, an experiment is like an iceberg – 90% of the effort is “unseen,” whereas only about 10% of the effort is invested in taking the production data.

Practically all of the experimentalists we know regard the saying as a unique privilege and honor, and as an exhortation to produce the very best science. It gives us pause that people may base their design choices, their engineering, or even their lifestyle choices on their trust of our reported results and conclusions. This is a weighty responsibility worthy of doing our very best.

1.2 Losses of Credibility

Sadly, not all reported science is credible. In 2005, the Journal of the American Medical Association published “Contradicted and Initially Stronger Effects in Highly Cited Clinical Research,” by J.P.A. Ioannidis (now at Stanford University) (Ioannidis 2005). Among his findings, “Five of six highly‐cited nonrandomized studies had been contradicted or had found stronger effects” than other, better‐designed studies. In other words, only one in six (17%) of these highly influential medical studies was credible.

Some scientific studies lost credibility due to poorly designed experiments. We offer this text to our readers to help them plan and execute experiments that are credible. There is no excuse for poorly designed experiments.

Retractions of scientific studies are no longer rare. The New York Times reported in 2011 that a “well‐known psychologist…whose work has been published widely in professional journals falsified data and made up entire experiments” (Carey 2011). In his case, more than 50 articles were retracted. A site devoted to retracted science, Retraction Watch, can be found at http://retractionwatch.com.

A whole industry of fact‐checkers has come into existence, purportedly to expose the false and reveal the true. We know, however, even fact‐checkers must be checked. Let this motivate us.

1.3 Recovering Credibility

Science and engineering recently found a champion in Ioannidis.

By careful planning and execution of our own experiments, we too become champions of credibility.

As experimentalists, we must be our own front‐line fact‐checker, tackling errors as they arise. We put effort into uncovering every reason to not believe our method, equipment, results, and especially our modeling. We diligently report the uncertainty of our measurements.

When you have completed an experiment, you must have assembled so much evidence of credibility that, like it or not, you have to believe the data. The experimenter must be on guard all the time, looking for anomalies, looking for ways to challenge the credibility of the experiment. If something unexpected happens, the diligent experimenter will find a way to challenge it and either confirm it or refute it. Let “except the experimenter” spur you to create experiments you can believe and defend.

1.4 Starting with a Sharp Axe

In this text, we reintroduce one of the sharpest tools for designing an experiment, the computer program Gosset. Gosset was developed at AT&T Bell Laboratories in the 1980s. Although a few companies and researchers have used Gosset extensively, experimentalists across the world can benefit from its powerful features. The developers of Gosset released it for free to the public domain in 2018. We demonstrate Gosset in Chapter 9. With skill using Gosset, your axe will be sharper.

We also encourage use of one of the sharpest tools for data analysis, the R language. The R language is open‐source software, available for free. In addition to the basic package, thousands of researchers from around the world have contributed specialized enhancements to R, also available for free. We demonstrate how to use R for data analysis, statistics, model design, and uncertainty analysis in Chapters 7–10 and 13. If you use an alternative commercial package for statistical analysis, you can test and compare.
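As a small foretaste of those chapters, the short R sketch below fits a straight-line model and reports confidence intervals; the data are synthetic, generated inside the script, and are not drawn from any experiment in this book.

# Fit a least-squares line to synthetic data and report its uncertainty.
set.seed(1)
x <- seq(0, 10, by = 0.5)                        # hypothetical factor setting
y <- 2.0 + 0.8 * x + rnorm(length(x), sd = 0.5)  # synthetic response with noise
fit <- lm(y ~ x)                                 # least-squares straight-line fit
summary(fit)                                     # coefficients with standard errors
confint(fit, level = 0.95)                       # 95% confidence intervals on the coefficients
plot(x, y); abline(fit)                          # quick visual check of the fit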

1.5 A Systems View of Experimental Work

An experiment is a system designed to make a measurement. The system consists of the hardware (test rig and specimens), the instruments (including sensors, amplifiers, extension wires, etc.), and the interpretive software (calibration routines, data reduction programs, etc.). The whole system is an instrument designed to make a particular kind of measurement. As such, the system must be designed so that it can be calibrated. The system must be calibrated before it is used to generate new data. It must be possible, using diagnostic tests on the system, to confirm the accuracy of the measurements made with the system.

This view of an experiment is illustrated in Figure 1.1, which also shows some of the necessary features of the system design.

Perhaps the most important feature of this view of experimental work is the important role given to uncertainty analysis. There are uncertainties in every measurement and, therefore, in every parameter calculated using experimental data. When the results of an experiment scatter (i.e. are different on repeated trials), the question always arises, “Is this scatter due to the uncertainties in the input data or is something changing in the experiment?”

Uncertainty analysis provides a proven, credible way to answer that question. By quantifying and reporting the uncertainty of each value, we give our clients grounds for confidence and credence in our results. Figure 1.1 shows the uncertainty analysis as a key part of the data reduction program, although it is too often neglected. Using either Root‐Sum‐Squared estimation or Monte Carlo simulation, the uncertainty in experimental results can be calculated with little additional effort on the part of the experimenter.

Figure 1.1 The experiment viewed as an instrument. Adjust the instrument by analyzing Uncertainty in each Bubble.
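To make the two approaches concrete, here is a minimal R sketch that propagates uncertainty through a hypothetical data reduction equation, electrical power P = V * I, first by the Root-Sum-Squared combination of sensitivity-weighted input uncertainties and then by a simple Monte Carlo simulation. The measured values and their uncertainties are invented for illustration, and the stated uncertainties are treated as one standard deviation for the Monte Carlo draws.

# Propagate measurement uncertainty into a derived result (hypothetical values).
V_volt <- 12.0;  U_V <- 0.2    # measured voltage and its uncertainty
I_amp  <- 1.50;  U_I <- 0.05   # measured current and its uncertainty

# Root-Sum-Squared: U_P^2 = (dP/dV * U_V)^2 + (dP/dI * U_I)^2
dPdV <- I_amp                  # sensitivity of P to V
dPdI <- V_volt                 # sensitivity of P to I
U_P_rss <- sqrt((dPdV * U_V)^2 + (dPdI * U_I)^2)

# Monte Carlo: perturb the inputs and examine the spread of the result
n  <- 1e5
Vs <- rnorm(n, mean = V_volt, sd = U_V)
Is <- rnorm(n, mean = I_amp,  sd = U_I)
U_P_mc <- sd(Vs * Is)

cat("P =", V_volt * I_amp, " U_P (RSS) =", U_P_rss, " U_P (Monte Carlo) =", U_P_mc, "\n")

For a data reduction equation this nearly linear, the two estimates agree closely; the Root-Sum-Squared form is usually the easier one to automate inside the data reduction program.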

1.6 In Defense of Being a Generalist

The last point we wish to make before sending you off into this body of work has to do with the level of expertise one needs to run good experiments.

We think it is more important for an experimentalist to have a working knowledge of many areas than to be a specialist in any one. The lab is a real place; Mother Nature never forgets to apply her own laws. If you are unaware of the Coanda effect, you will wonder why the water runs under the counter instead of falling off the edge. If you haven't heard of Joule–Thomson cooling, you will have a tough time figuring out why you get frost on the valve of a CO2 system.

Accordingly, if you aren't aware of the limitations of statistics, then using a statistical software package may lead you to indefensible conclusions.

It is not necessary to be the world's top authority on any of the mechanisms you encounter in the lab. You simply have to know enough to spot anomalies, to recognize that something unexpected or interesting is happening, and to know where to go for detailed help.

As an experimentalist, always beware of assumptions and presuppositions. See Figure 1.2 and “The Bundt Cake Story” (Panel 1.1). Step forward and predict. Then be ready and humble to course correct.

The lab is a great place for an observant generalist. The things that happen in the lab are real and reflect real phenomena. When something unexpected happens in the lab, if you are alert, you may learn something! As Pasteur said, “Chance favors only the prepared mind” (Pasteur 1854).

Let’s now launch toward planning and executing credible experiments.

Figure 1.2 The Bundt cake as delivered. A high heat‐transfer coefficient lifts the fluid batter like a hot air balloon. But which stagnation point is up, and which is down?

Panel 1.1 The Bundt Cake Story

One night, years ago, my wife baked a Bundt cake (chocolate and vanilla batter layered in a toroidal pan). When she presented me with a slice of that cake for dessert, I was impressed. But, also, I noticed something interesting about the pattern the batter had made as it cooked.

I recognized that the flow pattern, as drawn in Figure 1.2, was related to the heat‐transfer coefficient distribution around the baking pan.

I tried to impress my wife with my knowledge of heat transfer by explaining to her what I thought I saw. “Look,” I said, “see how the batter rose up in the center, and came down on the sides. That means that the batter got hot in the center sooner than it did on the edges. That means that the heat‐transfer coefficient is highest at the bottom center stagnation point for a cylinder in free convection with a negative Grashof number.”

My wife was silent for a minute, then gently corrected me: “I baked the cake upside down.”

Of course, as soon as I learned that, I was able to say with confidence that “The heat‐transfer coefficient is lowest at the bottom center stagnation point and high on the sides, for a cylinder in free convection with a negative Grashof number.”

The Moral of This Story?

It is critically important that you can trust your data before you try to interpret it. Beware! Once we accept our results as valid, how can we avoid constructing or searching for an explanation? Do not the scientific method and our human nature spur us to do so?

References

Carey, B. (2011). Fraud case seen as a red flag for psychology research. http://www.nytimes.com/2011/11/03/health/research/noted‐dutch‐psychologist‐stapel‐accused‐of‐research‐fraud.html.

Ioannidis, J.P.A. (13 July 2005). Contradicted and initially stronger effects in highly cited clinical research. JAMA 294 (2): 218–228. https://doi.org/10.1001/jama.294.2.218. PMID 16014596.

Pasteur, L. (7 December 1854). “Dans les champs de l'observation le hasard ne favorise que les esprits préparés,” translated as “In the fields of observation chance favours only the prepared mind.” Lecture, University of Lille. http://en.wikiquote.org/wiki/Louis_Pasteur.

Homework

1.1

Following the guide in Appendix D, Section D.1, download and install the statistical language R, which is open source and free. Please consider this software tool essential. (A short sketch to verify the installation appears after Problem 1.3 below.)

Following the guide in Appendix D, Section D.2, download and install LibreOffice, open source and free.

1.2 LibreOffice is compatible with msOffice documents. LibreOffice can even read and write antiquated *.doc and *.xls files of obsolete versions of msOffice better than ms does. The interface is more accessible and less bloated than msOffice. Consider this optional, but highly recommended and free.

1.3

Following the guide in Appendix D, Section D.4, consider R‐Studio. Please consider this software tool optional.
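For Homework 1.1, a quick way to confirm that a fresh R installation works is to run a few commands that exercise the interpreter, the graphics device, and basic statistics. This short sketch is only our suggestion and is not part of Appendix D.

# Quick check of a fresh R installation (suggested; not from Appendix D).
sessionInfo()                 # prints the R version and platform
x <- rnorm(100)               # generate 100 random numbers
summary(x)                    # basic descriptive statistics
hist(x, main = "If this histogram appears, graphics work")
# Optional: confirm that packages can be installed from CRAN
# install.packages("ggplot2")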

Notes

1 Variations of the quote are attributed to Albert Einstein and to William Ian Beardmore Beveridge. In The Art of Scientific Investigation (1950), p. 65, “A theory is something nobody believes, except the person who made it. An experiment is something everybody believes, except the person who made it.” http://en.wikiquote.org/wiki/William_Ian_Beardmore_Beveridge.

2 https://quoteinvestigator.com/2014/03/29/sharp‐axe asserts no evidence of Lincoln writing this. Having grown up on a small farm with one chore of clearing hundreds of pines out of our pasture, I (RH) can attest to its advice. Would rail‐splitter Lincoln not agree?