Praise for Common Errors in Statistics (and How to Avoid Them)

"A very engaging and valuable book for all who use statistics in any setting." --CHOICE

"Addresses popular mistakes often made in data collection and provides an indispensable guide to accurate statistical analysis and reporting. The authors' emphasis on careful practice, combined with a focus on the development of solutions, reveals the true value of statistics when applied correctly in any area of research." --MAA Reviews

Common Errors in Statistics (and How to Avoid Them), Fourth Edition provides a mathematically rigorous, yet readily accessible foundation in statistics for experienced readers as well as students learning to design and complete experiments, surveys, and clinical trials. Providing a consistent level of coherency throughout, the highly readable Fourth Edition focuses on debunking popular myths, analyzing common mistakes, and instructing readers on how to choose the appropriate statistical technique to address their specific task. The authors begin with an introduction to the main sources of error and provide techniques for avoiding them. Subsequent chapters outline key methods and practices for accurate analysis, reporting, and model building.

The Fourth Edition features newly added topics, including:

* Baseline data
* Detecting fraud
* Linear regression versus linear behavior
* Case control studies
* Minimum reporting requirements
* Non-random samples

The book concludes with a glossary that outlines key terms, and an extensive bibliography with several hundred citations directing readers to resources for further study. Presented in an easy-to-follow style, Common Errors in Statistics, Fourth Edition is an excellent book for students and professionals in industry, government, medicine, and the social sciences.
Table of Contents
Cover
Title page
Copyright page
Preface
Part I: FOUNDATIONS
Chapter 1 Sources of Error
PRESCRIPTION
FUNDAMENTAL CONCEPTS
SURVEYS AND LONG-TERM STUDIES
AD-HOC, POST-HOC HYPOTHESES
TO LEARN MORE
Chapter 2 Hypotheses: The Why of Your Research
PRESCRIPTION
WHAT IS A HYPOTHESIS?
HOW PRECISE MUST A HYPOTHESIS BE?
FOUND DATA
NULL OR NIL HYPOTHESIS
NEYMAN–PEARSON THEORY
DEDUCTION AND INDUCTION
LOSSES
DECISIONS
TO LEARN MORE
Chapter 3 Collecting Data
PREPARATION
RESPONSE VARIABLES
DETERMINING SAMPLE SIZE
FUNDAMENTAL ASSUMPTIONS
EXPERIMENTAL DESIGN
FOUR GUIDELINES
ARE EXPERIMENTS REALLY NECESSARY?
TO LEARN MORE
Part II: STATISTICAL ANALYSIS
Chapter 4 Data Quality Assessment
OBJECTIVES
REVIEW THE SAMPLING DESIGN
DATA REVIEW
TO LEARN MORE
Chapter 5 Estimation
PREVENTION
DESIRABLE AND NOT-SO-DESIRABLE ESTIMATORS
INTERVAL ESTIMATES
IMPROVED RESULTS
SUMMARY
TO LEARN MORE
Chapter 6 Testing Hypotheses: Choosing a Test Statistic
FIRST STEPS
TEST ASSUMPTIONS
BINOMIAL TRIALS
CATEGORICAL DATA
TIME-TO-EVENT DATA (SURVIVAL ANALYSIS)
COMPARING THE MEANS OF TWO SETS OF MEASUREMENTS
DO NOT LET YOUR SOFTWARE DO YOUR THINKING FOR YOU
COMPARING VARIANCES
COMPARING THE MEANS OF K SAMPLES
HIGHER-ORDER EXPERIMENTAL DESIGNS
INFERIOR TESTS
MULTIPLE TESTS
BEFORE YOU DRAW CONCLUSIONS
INDUCTION
SUMMARY
TO LEARN MORE
Chapter 7 Strengths and Limitations of Some Miscellaneous Statistical Procedures
NONRANDOM SAMPLES
MODERN STATISTICAL METHODS
BOOTSTRAP
BAYESIAN METHODOLOGY
META-ANALYSIS
PERMUTATION TESTS
TO LEARN MORE
Chapter 8 Reporting Your Results
FUNDAMENTALS
DESCRIPTIVE STATISTICS
ORDINAL DATA
TABLES
STANDARD ERROR
P-VALUES
CONFIDENCE INTERVALS
RECOGNIZING AND REPORTING BIASES
REPORTING POWER
DRAWING CONCLUSIONS
PUBLISHING STATISTICAL THEORY
A SLIPPERY SLOPE
SUMMARY
TO LEARN MORE
Chapter 9 Interpreting Reports
WITH A GRAIN OF SALT
THE AUTHORS
COST–BENEFIT ANALYSIS
THE SAMPLES
AGGREGATING DATA
EXPERIMENTAL DESIGN
DESCRIPTIVE STATISTICS
THE ANALYSIS
CORRELATION AND REGRESSION
GRAPHICS
CONCLUSIONS
RATES AND PERCENTAGES
INTERPRETING COMPUTER PRINTOUTS
SUMMARY
TO LEARN MORE
Chapter 10 Graphics
IS A GRAPH REALLY NECESSARY?
KISS
THE SOCCER DATA
FIVE RULES FOR AVOIDING BAD GRAPHICS
ONE RULE FOR CORRECT USAGE OF THREE-DIMENSIONAL GRAPHICS
THE MISUNDERSTOOD AND MALIGNED PIE CHART
TWO RULES FOR EFFECTIVE DISPLAY OF SUBGROUP INFORMATION
TWO RULES FOR TEXT ELEMENTS IN GRAPHICS
MULTIDIMENSIONAL DISPLAYS
CHOOSING EFFECTIVE DISPLAY ELEMENTS
ORAL PRESENTATIONS
SUMMARY
TO LEARN MORE
Part III: BUILDING A MODEL
Chapter 11 Univariate Regression
MODEL SELECTION
STRATIFICATION
FURTHER CONSIDERATIONS
SUMMARY
TO LEARN MORE
Chapter 12 Alternate Methods of Regression
LINEAR VERSUS NONLINEAR REGRESSION
LEAST-ABSOLUTE-DEVIATION REGRESSION
QUANTILE REGRESSION
SURVIVAL ANALYSIS
THE ECOLOGICAL FALLACY
NONSENSE REGRESSION
REPORTING THE RESULTS
SUMMARY
TO LEARN MORE
Chapter 13 Multivariable Regression
CAVEATS
DYNAMIC MODELS
FACTOR ANALYSIS
REPORTING YOUR RESULTS
A CONJECTURE
DECISION TREES
BUILDING A SUCCESSFUL MODEL
TO LEARN MORE
Chapter 14 Modeling Counts and Correlated Data
COUNTS
BINOMIAL OUTCOMES
COMMON SOURCES OF ERROR
PANEL DATA
FIXED- AND RANDOM-EFFECTS MODELS
POPULATION-AVERAGED GENERALIZED ESTIMATING EQUATION MODELS (GEEs)
SUBJECT-SPECIFIC OR POPULATION-AVERAGED?
VARIANCE ESTIMATION
QUICK REFERENCE FOR POPULAR PANEL ESTIMATORS
TO LEARN MORE
Chapter 15 Validation
OBJECTIVES
METHODS OF VALIDATION
MEASURES OF PREDICTIVE SUCCESS
TO LEARN MORE
Glossary
Bibliography
Author Index
Subject Index
Cover photo: Gary Carlsen, DDS
Copyright © 2012 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey
Published simultaneously in Canada
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Good, Phillip I.
Common errors in statistics (and how to avoid them) / Phillip I. Good, Statcourse.com, Huntington Beach, CA, James W. Hardin, Dept. of Epidemiology & Biostatistics, University of South Carolina, Columbia, SC. – Fourth edition.
pages cm
Includes bibliographical references and index.
ISBN 978-1-118-29439-0 (pbk.)
1. Statistics. I. Hardin, James W. (James William) II. Title.
QA276.G586 2012
519.5–dc23
2012005888
Preface
ONE OF THE VERY FIRST TIMES DR. GOOD served as a statistical consultant, he was asked to analyze the occurrence rate of leukemia cases in Hiroshima, Japan following World War II. On August 6, 1945 this city was the target site of the first atomic bomb dropped by the United States. Was the high incidence of leukemia cases among survivors the result of exposure to radiation from the atomic bomb? Was there a relationship between the number of leukemia cases and the number of survivors at certain distances from the atomic bomb’s epicenter?
To assist in the analysis, Dr. Good had an electric (not an electronic) calculator, reams of paper on which to write down intermediate results, and a prepublication copy of Scheffé’s Analysis of Variance. The work took several months and the results were somewhat inconclusive, mainly because he could never seem to get the same answer twice, a consequence of errors in transcription rather than the absence of any actual relationship between radiation and leukemia.
Today, of course, we have high-speed computers and prepackaged statistical routines to perform the necessary calculations. Yet, statistical software will no more make one a statistician than a scalpel will turn one into a neurosurgeon. Allowing these tools to do our thinking is a sure recipe for disaster.
Pressed by management or the need for funding, too many research workers have no choice but to go forward with data analysis despite having insufficient statistical training. Alas, though a semester or two of undergraduate statistics may develop familiarity with the names of some statistical methods, it is not enough to be aware of all the circumstances under which these methods may be applicable.
The purpose of the present text is to provide a mathematically rigorous but readily understandable foundation for statistical procedures. Covered here are such basic concepts in statistics as null and alternative hypotheses, p-values, significance levels, and power. Assisted by reprints from the statistical literature, we reexamine sample selection, linear regression, the analysis of variance, maximum likelihood, Bayes’ Theorem, meta-analysis, and the bootstrap. New to this edition are sections on fraud and on the potential sources of error to be found in epidemiological and case-control studies.
Examples of good and bad statistical methodology are drawn from agronomy, astronomy, bacteriology, chemistry, criminology, data mining, epidemiology, hydrology, immunology, law, medical devices, medicine, neurology, observational studies, oncology, pricing, quality control, seismology, sociology, time series, and toxicology.
More good news: Dr. Good’s articles on women’s sports have appeared in the San Francisco Examiner, Sports Now, and Volleyball Monthly; 22 short stories of his are in print; and you can find his 21 novels on Amazon and zanybooks.com. So, if you can read the sports page, you’ll find this text easy to read and to follow. Lest the statisticians among you believe this book is too introductory, we point out the existence of hundreds of citations in statistical literature calling for the comprehensive treatment we have provided. Regardless of past training or current specialization, this book will serve as a useful reference; you will find applications for the information contained herein whether you are a practicing statistician or a well-trained scientist who just happens to apply statistics in the pursuit of other science.
The primary objective of the opening chapter is to describe the main sources of error and provide a preliminary prescription for avoiding them. The cycle of hypothesis formulation, data gathering, and hypothesis testing and estimation is introduced, and the rationale for gathering additional data before attempting to test after-the-fact hypotheses is detailed.
A rewritten Chapter 2 places our work in the context of decision theory. We emphasize the importance of providing an interpretation of each and every potential outcome in advance of data collection.
A much expanded Chapter 3 focuses on study design and data collection, as failure at the planning stage can render all further efforts valueless. The work of Berger and his colleagues on selection bias is given particular emphasis.
Chapter 4 on data quality assessment reminds us that just as 95% of research efforts are devoted to data collection, 95% of the time remaining should be spent on ensuring that the data collected warrant analysis.
Desirable features of point and interval estimates are detailed in Chapter 5 along with procedures for deriving estimates in a variety of practical situations. This chapter also serves to debunk several myths surrounding estimation procedures.
Chapter 6 reexamines the assumptions underlying testing hypotheses and presents the correct techniques for analyzing binomial trials, counts, categorical data, continuous measurements, and time-to-event data. We review the impacts of violations of assumptions, and detail the procedures to follow when making two- and k-sample comparisons.
Chapter 7 is devoted to the analysis of nonrandom data (cohort and case-control studies), plus discussions of the value and limitations of Bayes’ theorem, meta-analysis, and the bootstrap and permutation tests, and contains essential tips on getting the most from these methods.
A much expanded Chapter 8 lists the essentials of any report that will utilize statistics, debunks the myth of the “standard” error, and describes the value and limitations of p-values and confidence intervals for reporting results. Practical significance is distinguished from statistical significance and induction is distinguished from deduction. Chapter 9 covers much the same material but from the viewpoint of the reader rather than the writer. Of particular importance are sections on interpreting computer output and detecting fraud.
Twelve rules for more effective graphic presentations are given in Chapter 10 along with numerous examples of the right and wrong ways to maintain reader interest while communicating essential statistical information.
Chapters 11 through 15 are devoted to model building and to the assumptions and limitations of a multitude of regression methods and data mining techniques. A distinction is drawn between goodness of fit and prediction, and the importance of model validation is emphasized.
Finally, for the further convenience of readers, we provide a glossary grouped by related but contrasting terms, an annotated bibliography, and subject and author indexes.
Our thanks go to William Anderson, Leonardo Auslender, Vance Berger, Peter Bruce, Bernard Choi, Tony DuSoir, Cliff Lunneborg, Mona Hardin, Gunter Hartel, Fortunato Pesarin, Henrik Schmiediche, Marjorie Stinespring, and Peter A. Wright for their critical reviews of portions of this text. Doug Altman, Mark Hearnden, Elaine Hand, and David Parkhurst gave us a running start with their bibliographies. Brian Cade, David Rhodes, and the late Cliff Lunneborg helped us complete the second edition. Terry Therneau and Roswitha Blasche helped us complete the third edition.
We hope you soon put this text to practical use.
Phillip Good
[email protected]
Huntington Beach, CA

James Hardin
[email protected]
Columbia, SC

May 2012
Part I: FOUNDATIONS
Chapter 1
Sources of Error
Don’t think—use the computer.
Dyke (tongue in cheek) [1997].
“We cannot help remarking that it is very surprising that research in an area that depends so heavily on statistical methods has not been carried out in close collaboration with professional statisticians,” the panel remarked in its conclusions. From the report of an independent panel looking into “Climategate.”1
STATISTICAL PROCEDURES FOR HYPOTHESIS TESTING, ESTIMATION, AND MODEL building are only a part of the decision-making process. They should never be quoted as the sole basis for making a decision (yes, even those procedures that are based on a solid deductive mathematical foundation). As philosophers have known for centuries, extrapolation from a sample or samples to a larger, incompletely examined population must entail a leap of faith.
The sources of error in applying statistical procedures are legion and include all of the following:
But perhaps the most serious source of error lies in letting statistical procedures make decisions for you.
In this chapter, as throughout this text, we offer first a preventive prescription, followed by a list of common errors. If these prescriptions are followed carefully, you will be guided to the correct, proper, and effective use of statistics and avoid the pitfalls.
Statistical methods used for experimental design and analysis should be viewed in their rightful role as merely a part, albeit an essential part, of the decision-making procedure.
Here is a partial prescription for the error-free application of statistics.
Three concepts are fundamental to the design of experiments and surveys: variation, population, and sample. A thorough understanding of these concepts will prevent many errors in the collection and interpretation of data.
If there were no variation, if every observation were predictable, a mere repetition of what had gone before, there would be no need for statistics.
Variation is inherent in virtually all our observations. We would not expect outcomes of two consecutive spins of a roulette wheel to be identical. One result might be red, the other black. The outcome varies from spin to spin.
There are gamblers who watch and record the spins of a single roulette wheel hour after hour hoping to discern a pattern. A roulette wheel is, after all, a mechanical device and perhaps a pattern will emerge. But even those observers do not anticipate finding a pattern that is 100% predetermined. The outcomes are just too variable.
Anyone who spends time in a schoolroom, as a parent or as a child, can see the vast differences among individuals. This one is tall, that one short, though all are the same age. Half an aspirin and Dr. Good’s headache is gone, but his wife requires four times that dosage.
There is variability even among observations on deterministic formula-satisfying phenomena such as the position of a planet in space or the volume of gas at a given temperature and pressure. Position and volume satisfy Kepler’s Laws and Boyle’s Law, respectively (the latter over a limited range), but the observations we collect will depend upon the measuring instrument (which may be affected by the surrounding environment) and the observer. Cut a length of string and measure it three times. Do you record the same length each time?
In designing an experiment or survey we must always consider the possibility of errors arising from the measuring instrument and from the observer. It is one of the wonders of science that Kepler was able to formulate his laws at all given the relatively crude instruments at his disposal.
A phenomenon is said to be deterministic if given sufficient information regarding its origins, we can successfully make predictions regarding its future behavior. But we do not always have all the necessary information. Planetary motion falls into the deterministic category once one makes adjustments for all gravitational influences, the other planets as well as the sun.
Nineteenth century physicists held steadfast to the belief that all atomic phenomena could be explained in deterministic fashion. Slowly, it became evident that at the subatomic level many phenomena were inherently stochastic in nature, that is, one could only specify a probability distribution of possible outcomes, rather than fix on any particular outcome as certain.
Strangely, twenty-first century astrophysicists continue to reason in terms of deterministic models. They add parameter after parameter to the lambda cold-dark-matter model hoping to improve the goodness of fit of this model to astronomical observations. Yet, if the universe we observe is only one of many possible realizations of a stochastic process, goodness of fit offers absolutely no guarantee of the model’s applicability. (See, for example, Good, 2012.)
Chaotic phenomena differ from the strictly deterministic in that they are strongly dependent upon initial conditions. A random perturbation from an unexpected source (the proverbial butterfly’s wing) can result in an unexpected outcome. The growth of cell populations has been described in both deterministic (differential equations) and stochastic terms (birth and death process), but a chaotic model (difference-lag equations) is more accurate.
The population(s) of interest must be clearly defined before we begin to gather data.
From time to time, someone will ask us how to generate confidence intervals (see Chapter 8) for the statistics arising from a total census of a population. Our answer is no, we cannot help. Population statistics (mean, median, and thirtieth percentile) are not estimates. They are fixed values and will be known with 100% accuracy if two criteria are fulfilled: first, that every member of the population is observed; second, that all of the observations are recorded without error.
Confidence intervals would be appropriate if the first criterion is violated, for then we are looking at a sample, not a population. And if the second criterion is violated, then we might want to talk about the confidence we have in our measurements.
Debates about the accuracy of the 2000 United States Census arose from doubts about the fulfillment of these criteria.2 “You didn’t count the homeless,” was one challenge. “You didn’t verify the answers,” was another. Whether we collect data for a sample or an entire population, both these challenges or their equivalents can and should be made.
Kepler’s “laws” of planetary movement are not testable by statistical means when applied to the original planets (Jupiter, Mars, Mercury, and Venus) for which they were formulated. But when we make statements such as “Planets that revolve around Alpha Centauri will also follow Kepler’s Laws,” then we begin to view our original population, the planets of our sun, as a sample of all possible planets in all possible solar systems.
A major problem with many studies is that the population of interest is not adequately defined before the sample is drawn. Do not make this mistake. A second major problem is that the sample proves to have been drawn from a different population than was originally envisioned. We consider these issues in the next section and again in Chapters 2, 6, and 7.
A sample is any (proper) subset of a population. Small samples may give a distorted view of the population. For example, if a minority group comprises 10% or less of a population, a jury of 12 persons selected at random from that population fails to contain any members of that minority at least 28% of the time.
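The 28% figure follows from the binomial model: if each of the 12 jurors is drawn independently from a population that is 10% minority, the chance that the jury contains no minority members at all is 0.9 to the 12th power. A quick sketch:

```python
# Chance that a 12-person jury drawn at random from a population
# that is 10% minority contains no minority members at all.
p_minority = 0.10
jury_size = 12
p_none = (1 - p_minority) ** jury_size
print(round(p_none, 3))  # 0.282, i.e., at least 28% of the time
```

A small sample can thus completely miss a sizable subgroup far more often than intuition suggests.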
As a sample grows larger, or as we combine more clusters within a single sample, the sample will grow to more closely resemble the population from which it is drawn.
How large a sample must be to obtain a sufficient degree of closeness will depend upon the manner in which the sample is chosen from the population.
Are the elements of the sample drawn at random, so that each unit in the population has an equal probability of being selected? Are the elements of the sample drawn independently of one another? If either of these criteria is not satisfied, then even a very large sample may bear little or no relation to the population from which it was drawn.
An obvious example is the use of recruits from a Marine boot camp as representatives of the population as a whole or even as representatives of all Marines. In fact, any group or cluster of individuals who live, work, study, or pray together may fail to be representative for any or all of the following reasons (Cummings and Koepsell, 2002):
A sample consisting of the first few animals to be removed from a cage will not satisfy these criteria either, because, depending on how we grab, we are more likely to select more active or more passive animals. Activity tends to be associated with higher levels of corticosteroids, and corticosteroids are associated with virtually every body function.
Sample bias is a danger in every research field. For example, Bothun [1998] documents the many factors that can bias sample selection in astronomical research.
To prevent sample bias in your studies, before you begin determine all the factors that can affect the study outcome (gender and lifestyle, for example). Subdivide the population into strata (males, females, city dwellers, farmers) and then draw separate samples from each stratum. Ideally, you would assign a random number to each member of the stratum and let a computer’s random number generator determine which members are to be included in the sample.
Being selected at random does not mean that an individual will be willing to participate in a public opinion poll or some other survey. But if survey results are to be representative of the population at large, then pollsters must find some way to interview nonresponders as well. This difficulty is exacerbated in long-term studies, as subjects fail to return for follow-up appointments and move without leaving a forwarding address. Again, if the sample results are to be representative, some way must be found to report on subsamples of the nonresponders and the dropouts.
Formulate and write down your hypotheses before you examine the data.
Patterns in data can suggest, but cannot confirm, hypotheses unless these hypotheses were formulated before the data were collected.
Everywhere we look, there are patterns. In fact, the harder we look the more patterns we see. Three rock stars die in a given year. Fold the United States twenty-dollar bill in just the right way and not only the Pentagon but the Twin Towers in flames are revealed.3 It is natural for us to want to attribute some underlying cause to these patterns, but those who have studied the laws of probability tell us that more often than not patterns are simply the result of random events.
Put another way, finding at least one cluster of events in time or in space has a greater probability than finding no clusters at all (equally spaced events).
How can we determine whether an observed association represents an underlying cause-and-effect relationship or is merely the result of chance? The answer lies in our research protocol. When we set out to test a specific hypothesis, the probability of a specific event is predetermined. But when we uncover an apparent association, one that may well have arisen purely by chance, we cannot be sure of the association’s validity until we conduct a second set of controlled trials.
In the International Study of Infarct Survival [1988], patients born under the Gemini or Libra astrological birth signs did not survive as long when their treatment included aspirin. By contrast, aspirin offered apparent beneficial effects (longer survival time) to study participants from all other astrological birth signs. Szydloa et al. [2010] report similar spurious correlations when hypotheses are formulated with the data in hand.
Except for those who guide their lives by the stars, there is no hidden meaning or conspiracy in this result. When we describe a test as significant at the 5% or one-in-20 level, we mean that one in 20 times we will get a significant result even though the null hypothesis is true. That is, when we test to see if there are any differences in the baseline values of the control and treatment groups, if we have made 20 different measurements, we can expect to see at least one statistically significant difference; in fact, we will see this result almost two-thirds of the time. This difference will not represent a flaw in our design but simply chance at work. To avoid this undesirable result (that is, to avoid attributing statistical significance to an insignificant random event, a so-called Type I error), we must distinguish between the hypotheses with which we began the study and those that came to mind afterward. We must accept or reject our initial hypotheses at the original significance level while demanding additional corroborating evidence for those exceptional results (such as a dependence of an outcome on astrological sign) that are uncovered for the first time during the trials.
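The "almost two-thirds" figure is easy to check: with 20 independent tests each conducted at the 5% level, the chance of at least one spurious significant finding is one minus the chance that all 20 come up negative. A quick sketch:

```python
# Probability of at least one false positive among 20 independent
# tests, each conducted at the 5% significance level, when every
# null hypothesis is in fact true.
alpha = 0.05
n_tests = 20
p_at_least_one = 1 - (1 - alpha) ** n_tests
print(round(p_at_least_one, 3))  # 0.642, i.e., almost two-thirds
```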
No reputable scientist would ever report results before successfully reproducing the experimental findings twice, once in the original laboratory and once in that of a colleague.4 The latter experiment can be particularly telling, as all too often some overlooked factor not controlled in the experiment (such as the quality of the laboratory water) proves responsible for the results observed initially. It is better to be found wrong in private than in public. The only remedy is to attempt to replicate the findings with different sets of subjects, replicate, then replicate again.
Persi Diaconis [1978] spent some years investigating paranormal phenomena. His scientific inquiries included investigating the powers linked to Uri Geller, the man who claimed he could bend spoons with his mind. Diaconis was not surprised to find that the hidden “powers” of Geller were more or less those of the average nightclub magician, down to and including forcing a card and taking advantage of ad-hoc, post-hoc hypotheses (Figure 1.1).
FIGURE 1.1. Photo of Geller.
(Reprinted from German Language Wikipedia.)
When three buses show up at your stop simultaneously, or three rock stars die in the same year, or a stand of cherry trees is found amid a forest of oaks, a good statistician remembers the Poisson distribution. This distribution applies to relatively rare events that occur independently of one another (see Figure 1.2). The calculations performed by Siméon-Denis Poisson reveal that if there is an average of one event per interval (in time or in space), then although more than a third of the intervals will be empty, at least a quarter of the intervals are likely to include multiple events.
FIGURE 1.2. Frequency plot of the number of deaths in the Prussian army as a result of being kicked by a horse (there are 200 total observations).
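Those Poisson probabilities are simple to verify: with a mean of one event per interval, the probability of an empty interval is e to the minus one, about 0.37, and the probability of two or more events is about 0.26. A sketch:

```python
import math

def poisson_pmf(k, mean=1.0):
    """Probability of exactly k events when events occur
    independently at the given mean rate per interval."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

p_empty = poisson_pmf(0)                          # over a third of intervals empty
p_multiple = 1 - poisson_pmf(0) - poisson_pmf(1)  # over a quarter with 2+ events
print(round(p_empty, 3), round(p_multiple, 3))    # 0.368 0.264
```

Clusters, in other words, are the rule rather than the exception for independent rare events.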
Anyone who has played poker will concede that one out of every two hands contains “something” interesting. Do not allow naturally occurring results to fool you or lead you to fool others by shouting, “Isn’t this incredible?”
TABLE 1.1. Probability of finding something interesting in a five-card hand
Hand                Probability
Straight flush      0.0000
4-of-a-kind         0.0002
Full house          0.0014
Flush               0.0020
Straight            0.0039
Three of a kind     0.0211
Two pairs           0.0475
Pair                0.4226
Total               0.4988
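The table's entries can be recovered by direct counting. For instance, a hand with exactly one pair requires choosing the paired rank, the two suits for that pair, three further distinct ranks, and one suit for each of those three cards. A sketch using Python's math.comb:

```python
from math import comb

total_hands = comb(52, 5)  # 2,598,960 possible five-card hands

# Exactly one pair: paired rank, its two suits, three other distinct
# ranks, and one of four suits for each of those three cards.
pair_hands = comb(13, 1) * comb(4, 2) * comb(12, 3) * 4 ** 3
print(round(pair_hands / total_hands, 4))  # 0.4226, matching the table
```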
The purpose of a recent set of clinical trials was to see if blood flow and distribution in the lower leg could be improved by carrying out a simple surgical procedure prior to the administration of standard prescription medicine.
The results were disappointing on the whole, but one of the marketing representatives noted that the long-term prognosis was excellent when a marked increase in blood flow was observed just after surgery. She suggested we calculate a p-value5 for a comparison of patients with an improved blood flow after surgery versus patients who had taken the prescription medicine alone.
Such a p-value is meaningless. Only one of the two samples of patients in question had been taken at random from the population (those patients who received the prescription medicine alone). The other sample (those patients who had increased blood flow following surgery) was determined after the fact. To extrapolate results from the samples in hand to a larger population, the samples must be taken at random from, and be representative of, that population.
The preliminary findings clearly called for an examination of surgical procedures and of patient characteristics that might help forecast successful surgery. But the generation of a p-value and the drawing of any final conclusions had to wait for clinical trials specifically designed for that purpose.
This does not mean that one should not report anomalies and other unexpected findings. Rather, one should not attempt to provide p-values or confidence intervals in support of them. Successful researchers engage in a cycle of theorizing and experimentation so that the results of one experiment become the basis for the hypotheses tested in the next.
A related, extremely common error whose correction we discuss at length in Chapters 13 and 15 is to use the same data to select variables for inclusion in a model and to assess their significance. Successful model builders develop their frameworks in a series of stages, validating each model against a second independent dataset before drawing conclusions.
One reason why many statistical models are incomplete is that they do not specify the sources of randomness generating variability among agents, i.e., they do not specify why otherwise observationally identical people make different choices and have different outcomes given the same choice.
—James J. Heckman
On the necessity for improvements in the use of statistics in research publications, see Altman [1982, 1991, 1994, 2000, 2002]; Cooper and Rosenthal [1980]; Dar, Serlin, and Omer [1994]; Gardner and Bond [1990]; George [1985]; Glantz [1980]; Goodman, Altman, and George [1998]; MacArthur and Jackson [1984]; Morris [1988]; Strasak et al. [2007]; Thorn et al. [1985]; and Tyson et al. [1983].
Brockman and Chowdhury [1997] discuss the costly errors that can result from treating chaotic phenomena as stochastic.
Notes
1 This is from an inquiry at the University of East Anglia headed by Lord Oxburgh. The inquiry was the result of emails from climate scientists being released to the public.
2 City of New York v. Department of Commerce, 822 F. Supp. 906 (E.D.N.Y, 1993). The arguments of four statistical experts who testified in the case may be found in Volume 34 of Jurimetrics, 1993, 64–115.
3 A website with pictures is located at http://www.foldmoney.com/.
4 Remember “cold fusion”? In 1989, two University of Utah professors told the newspapers they could fuse deuterium molecules in the laboratory, solving the world’s energy problems for years to come. Alas, neither those professors nor anyone else could replicate their findings, though true believers abound (see http://www.ncas.org/erab/intro.htm).
5 A p-value is the probability under the primary hypothesis of observing the set of observations we have in hand. We can calculate a p-value once we make a series of assumptions about how the data were gathered. These days, statistical software does the calculations, but it’s still up to us to validate the assumptions.
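The calculation behind footnote 5 can be made concrete with a minimal sketch (ours, not the book's). Under the assumption that a coin is fair (the primary hypothesis) and that tosses are independent, the two-sided p-value for an observed count of heads sums the probabilities of all outcomes no more likely than the one observed; the numbers 61 and 100 here are illustrative.

```python
# Two-sided exact binomial p-value for 61 heads in 100 tosses of a coin
# assumed fair, computed from first principles with the standard library.
from math import comb

n, k, p = 100, 61, 0.5

def pmf(i):
    """Probability of exactly i heads under the primary (null) hypothesis."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Sum the probabilities of all outcomes no more likely than the observed one.
p_value = sum(pmf(i) for i in range(n + 1) if pmf(i) <= pmf(k))
print(f"p-value = {p_value:.4f}")
```

As the footnote stresses, software handles the arithmetic; the assumptions (a fair coin, independent tosses) remain ours to validate.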
Chapter 2
Hypotheses: The Why of Your Research
All who drink of this treatment recover in a short time,
Except those whom it does not help, who all die,
It is obvious, therefore, that it only fails in incurable cases.
—Galen (129–199)
In this chapter, aimed both at researchers who will analyze their own data and at those who will rely on others to assist them in the analysis, we review how to formulate a hypothesis that is testable by statistical means, the appropriate use of the null hypothesis, Neyman–Pearson theory, the two types of error, and the more general theory of decisions and losses.