Learn to assess published research in this best-selling introduction to evidence-based healthcare
Evidence-based practices have revolutionized medical care. Clinical and scientific papers have something to offer practitioners at every level of the profession, from students to established clinicians in medicine, nursing and allied professions. Novices are often intimidated by the idea of reading and appraising the research literature. How to Read a Paper demystifies this process with a thorough, engaging introduction to how clinical research papers are constructed and how to evaluate them. The seventh edition has been fully updated to incorporate new areas of research.
How to Read a Paper is ideal for all healthcare students and professionals seeking an accessible introduction to evidence-based healthcare – particularly those sitting undergraduate and postgraduate exams and preparing for interviews.
Page count: 677
Publication year: 2024
Cover
Table of Contents
Title Page
Copyright Page
Dedication Page
Foreword to the first edition by Professor Sir David Weatherall
Preface to the seventh edition
From Trisha
From Paul
Preface to the first edition: do you need to read this book?
Acknowledgements
Chapter 1: Why read papers at all?
Does ‘evidence‐based medicine’ simply mean ‘reading papers in medical journals’?
Why do people sometimes groan when you mention evidence‐based healthcare?
Before you start: formulate the problem
Exercises based on this chapter
References
Chapter 2: Searching the literature
The information jungle
What are you looking for?
Levels upon levels of evidence
Synthesised sources: systems, summaries and syntheses
Pre‐appraised sources: synopses of systematic reviews and primary studies
Specialised resources
Primary studies: tackling the jungle
One‐stop shopping: federated search engines
Using artificial intelligence to search the literature
Asking for help and asking around
Online tutorials for effective searching
Exercises based on this chapter
References
Chapter 3: Getting your bearings: what is this paper about?
The science of ‘trashing’ papers
Three preliminary questions to get your bearings
What are randomised controlled trials and why do they matter?
What are cohort studies?
What are case–control studies?
What are cross‐sectional surveys?
What are case reports?
The traditional hierarchy of evidence
Exercises based on this chapter
References
Chapter 4: Assessing methodological quality
Was the study original?
Who is the study about?
Was the design of the study sensible?
Was bias avoided or minimised?
Was assessment ‘blind’?
Were preliminary statistical questions addressed?
A note on ethical considerations
Summing up
Exercises based on this chapter
References
Chapter 5: Statistics for the non‐statistician
How can non‐statisticians evaluate statistical tests?
Have the authors set the scene correctly?
Paired data, tails and outliers
Correlation, regression and causation
Probability and confidence
The bottom line (quantifying the chance of benefit and harm)
Summary
Exercises based on this chapter
References
Chapter 6: Papers that report clinical trials of simple interventions
What is a clinical trial?
Drug trials: ‘evidence’ and marketing
Making decisions about therapy
Surrogate endpoints
What information to expect in a paper describing a randomised controlled trial: the CONSORT statement
Getting worthwhile evidence from pharmaceutical representatives
A note on vaccine trials
Exercises based on this chapter
References
Chapter 7: Papers that report trials of complex interventions
Complex interventions
Ten questions to ask about a paper describing a complex intervention
Exercises based on this chapter
References
Chapter 8: Papers that report diagnostic or screening tests
Ten suspects in the dock
Validating diagnostic tests against a gold standard
Ten questions to ask about a paper that claims to validate a diagnostic or screening test
Likelihood ratios
Clinical prediction models
Exercises based on this chapter
References
Chapter 9: Papers that summarise other papers (systematic reviews and meta‐analyses)
When is a review systematic?
Evaluating systematic reviews: five questions to ask
Meta‐analysis for the non‐statistician
Explaining heterogeneity
New approaches to systematic review
Exercises based on this chapter
References
Chapter 10: Papers that advise you what to do (guidelines)
The great guidelines debate
Ten questions to ask about a clinical guideline
Exercises based on this chapter
References
Chapter 11: Papers that estimate what things cost (health economic evaluations)
What is an economic evaluation?
Health economics studies: two key approaches
Costs and benefits of health interventions
Measuring the value of health states
Quality‐adjusted life‐years
Low‐value health: choosing wisely
Twelve questions to ask about a health economic evaluation
Conclusion
Exercises based on this chapter
References
Chapter 12: Papers that go beyond numbers (qualitative research)
What is qualitative research?
Summarising and synthesising qualitative research
Nine questions to ask about a qualitative research paper
Conclusion
Exercises based on this chapter
References
Chapter 13: Papers that report questionnaire research
The rise and rise of questionnaire research
Ten questions to ask about a paper describing a questionnaire study
Exercises based on this chapter
References
Chapter 14: Papers that report quality improvement case studies
What are quality improvement studies and how should we research them?
Ten questions to ask about a paper describing a quality improvement initiative
Conclusion
Exercises based on this chapter
References
Chapter 15: Papers that describe genetic association studies
The three eras of human genetic studies (so far)
What is a genome‐wide association study?
Clinical applications of genome‐wide association studies
Direct‐to‐consumer genetic testing
Mendelian randomisation studies
Epigenetics: a space to watch
Ten questions to ask about a genetic association study
Exercises based on this chapter
References
Chapter 16: Applying evidence with patients
The patient perspective
Patient‐reported outcome measures
Shared decision‐making
Option grids
n‐of‐1 trials and other individualised approaches
Exercises based on this chapter
References
Chapter 17: Papers on artificial intelligence in healthcare
Introduction
Artificial intelligence
Big data
Machine learning
Generative artificial intelligence: large language and multimodal models
Ethical principles for the use of artificial intelligence for health
Appraising artificial intelligence papers: a plethora of checklists
Ten questions to ask about a paper that reports AI studies in healthcare
Summary
Exercises based on this chapter
References
Chapter 18: EBM+: the importance of mechanistic evidence
What is mechanistic evidence? An example
The many types of mechanistic evidence and a preliminary hierarchy
EBM+ means ‘both and’, not ‘either or’
Mechanistic evidence in the COVID‐19 pandemic
Exercises based on this chapter
References
Chapter 19: Papers that report consensus exercises
Why are consensus method papers important?
How do experts choose and reach consensus on a specific topic?
Consensus methods
Ten questions to ask about a paper that reports a consensus statement
Exercises based on this chapter
References
Chapter 20: Criticisms of evidence‐based healthcare
What’s wrong with evidence‐based healthcare when it’s done badly?
What’s wrong with evidence‐based healthcare when it’s done well?
Why is ‘evidence‐based policymaking’ so hard to achieve?
Exercises based on this chapter
References
Appendix 1: Checklists for finding, appraising and implementing evidence
Is my practice evidence‐based? A context‐sensitive checklist for individual clinical encounters (see Chapter 1)
Checklist for searching (see Chapter 2)
Checklist to determine what a paper is about (see Chapter 3)
Checklist for the methods section of a paper (see Chapter 4)
Checklist for the statistical aspects of a paper (see Chapter 5)
Checklist for material provided by a pharmaceutical company representative (see Chapter 6)
Checklist for a paper describing a study of a complex intervention (see Chapter 7)
Checklist for a paper that claims to validate a diagnostic or screening test (see Chapter 8)
Checklist for a systematic review or meta‐analysis (see Chapter 9)
Checklist for a set of clinical guidelines (see Chapter 10)
Checklist for an economic analysis (see Chapter 11)
Checklist for a qualitative research paper (see Chapter 12)
Checklist for a paper describing questionnaire research (see Chapter 13)
Checklist for a paper describing a quality improvement study (see Chapter 14)
Checklist for a paper describing a genetic association study (see Chapter 15)
Checklist for involving patients in clinical decision‐making (see Chapter 16)
Checklist for a paper describing an artificial intelligence study (see Chapter 17)
Checklist for a paper on mechanistic evidence (see Chapter 18)
Checklist for a paper describing a consensus study (see Chapter 19)
Appendix 2: Assessing the effects of an intervention
Acknowledgement
Index
End User License Agreement
Chapter 1
Table 1.1 Examples of harmful practices once strongly supported by ‘expert opinion’
Chapter 4
Table 4.1 Examples of problematic descriptions in the methods section of a ...
Chapter 5
Table 5.1 Some commonly used statistical tests
Table 5.2 Data from a trial of medical therapy versus coronary artery bypas...
Chapter 6
Table 6.1 Checklist for a randomised controlled trial based on the CONSORT ...
Chapter 8
Table 8.1 Features of a diagnostic test that can be calculated by comparing...
Chapter 11
Table 11.1 Types of economic analysis
Table 11.2 Examples of costs and benefits of health interventions
Table 11.3 Cost per quality‐adjusted life year [7–12]
Chapter 12
Table 12.1 Examples of qualitative research methods
Table 12.2 Qualitative versus quantitative research: the overstated dichoto...
Chapter 13
Table 13.1 Examples of research questions for which a questionnaire may not ...
Table 13.2 Types of sampling frame for questionnaire research
Chapter 17
Table 17.1 Terminology and definitions
Table 17.2 Artificial intelligence reporting guidelines
Table 17.3 Simplified DECIDE‐AI checklist
Chapter 18
Table 18.1 A preliminary hierarchy of evidence for mechanistic evidence
Table 18.2 Bradford Hill indicators of causality, showing the importance of...
Chapter 19
Table 19.1 Some consensus methods in healthcare
Chapter 2
Figure 2.1 A simple hierarchy of evidence for assessing the quality of trial...
Chapter 4
Figure 4.1 Sources of bias to check for in a randomised controlled trial.
Chapter 5
Figure 5.1 Example of a normal curve.
Figure 5.2 Example of a skew curve.
Chapter 8
Figure 8.1 2 × 2 table showing outcome of trial for 10 suspects accused of m...
Figure 8.2 2 × 2 table notation for expressing the results of a validation s...
Figure 8.3 2 × 2 table showing results of a validation study of urine glucos...
Figure 8.4 Using likelihood ratios to calculate the post‐test probability ...
Figure 8.5 Leaky prognostic model adoption pipeline. Examples of reasons for...
Chapter 9
Figure 9.1 Method for a systematic review of randomised controlled trials (R...
Figure 9.2 Forest plot showing long‐term effects of cognitive behaviour ther...
Figure 9.3 Cochrane Collaboration logo.
Figure 9.4 Cumulative meta‐analysis of randomised controlled trials of aprot...
Figure 9.5 Reduction in heart disease risk by cholesterol‐lowering strategie...
Chapter 15
Figure 15.1 Spectrum of disease allele effects revealed by GWAS studies.
Figure 15.2 Indirect association of a genetic biomarker with a disease.
Figure 15.3 Example of meaningless finding provided by private direct‐to‐con...
Chapter 16
Figure 16.1 Example of a decision aid: choosing statin in a diabetes patient...
Figure 16.2 Example of an option grid.
Chapter 19
Figure 19.1 Example of an eight‐step consensus exercise.
SEVENTH EDITION
Trisha Greenhalgh
Professor of Primary Care Health Sciences, University of Oxford, Oxford, UK
Paul Dijkstra
Director of Medical Education and Consultant Sport and Exercise Medicine Physician, Aspetar Orthopaedic and Sports Medicine Hospital, Doha, Qatar; Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
This edition first published 2025
© 2025 John Wiley & Sons Ltd
Edition History: John Wiley & Sons Ltd (4e, 2010; 5e, 2014; 6e, 2019)
All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Trisha Greenhalgh and Paul Dijkstra to be identified as the authors of this work has been asserted in accordance with law.
Registered Offices: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA; John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty
The contents of this work are intended to further general scientific research, understanding, and discussion only and are not intended and should not be relied upon as recommending or promoting scientific method, diagnosis, or treatment by physicians for any particular patient. In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of medicines, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each medicine, equipment, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging‐in‐Publication Data Applied for
Paperback ISBN: 9781394206902
Cover Design: Wiley
In November 1995, Trisha’s friend Ruth Holland, book reviews editor of the British Medical Journal, suggested that she write a book to demystify the important but often inaccessible subject of evidence‐based medicine. She provided invaluable comments on the original draft of the manuscript but was tragically killed in a train crash on 8th August 1996. This book is dedicated to her memory.
Not surprisingly, the wide publicity given to what is now called evidence‐based medicine has been greeted with mixed reactions by those who are involved in the provision of patient care. The bulk of the medical profession appears to be slightly hurt by the concept, suggesting as it does that until recently all medical practice was what Lewis Thomas has described as a frivolous and irresponsible kind of human experimentation, based on nothing but trial and error, and usually resulting in precisely that sequence. On the other hand, politicians and those who administrate our health services have greeted the notion with enormous glee. They had suspected all along that doctors were totally uncritical and now they had it on paper. Evidence‐based medicine came as a gift from the gods because, at least as they perceived it, its implied efficiency must inevitably result in cost saving.
The concept of controlled clinical trials and evidence‐based medicine is not new, however. It is recorded that Frederick II, Emperor of the Romans and King of Sicily and Jerusalem, who lived from 1192 to 1250 CE, and who was interested in the effects of exercise on digestion, took two knights and gave them identical meals. One was then sent out hunting and the other ordered to bed. At the end of several hours he killed both and examined the contents of their alimentary canals; digestion had proceeded further in the stomach of the sleeping knight. In the 17th century, Jan Baptista van Helmont, a physician and philosopher, became sceptical of the practice of blood‐letting. Hence he proposed what was almost certainly the first clinical trial involving large numbers, randomisation and statistical analysis. This involved taking 200–500 poor people, dividing them into two groups by casting lots, and protecting one from phlebotomy while allowing the other to be treated with as much blood‐letting as his colleagues thought appropriate. The number of funerals in each group would be used to assess the efficacy of blood‐letting. History does not record why this splendid experiment was never carried out.
If modern scientific medicine can be said to have had a beginning, it was in Paris in the mid‐19th century, where it had its roots in the work and teachings of Pierre Charles Alexandre Louis. Louis introduced statistical analysis to the evaluation of medical treatment and, incidentally, showed that blood‐letting was a valueless form of treatment, although this did not change the habits of the physicians of the time, or for many years to come. Despite this pioneering work, few clinicians on either side of the Atlantic urged that trials of clinical outcome should be adopted, although the principles of numerically based experimental design were enunciated in the 1920s by the geneticist Ronald Fisher. The field only started to make a major impact on clinical practice after the Second World War following the seminal work of Sir Austin Bradford Hill and the British epidemiologists who followed him, notably Richard Doll and Archie Cochrane.
But although the idea of evidence‐based medicine is not new, modern disciples like David Sackett and his colleagues are doing a great service to clinical practice, not just by popularising the idea, but by bringing home to clinicians the notion that it is not a dry academic subject but more a way of thinking that should permeate every aspect of medical practice. While much of it is based on mega‐trials and meta‐analyses, it should also be used to influence almost everything that a doctor does. After all, the medical profession has been brain‐washed for years by examiners in medical schools and royal colleges to believe that there is only one way of examining a patient. Our bedside rituals could do with as much critical evaluation as our operations and drug regimes; the same goes for almost every aspect of doctoring. As clinical practice becomes busier, and time for reading and reflection becomes even more precious, the ability effectively to peruse the medical literature and, in the future, to become familiar with a knowledge of best practice from modern communication systems, will be essential skills for doctors. In this lively book, Trisha Greenhalgh provides an excellent approach to how to make best use of medical literature and the benefits of evidence‐based medicine. It should have equal appeal for first year medical students and grey‐haired consultants, and deserves to be read widely.
With increasing years, the privilege of being invited to write a foreword to a book by one’s ex‐students becomes less of a rarity. Trisha Greenhalgh was the kind of medical student who never let her teachers get away with a loose thought and this inquiring attitude seems to have flowered over the years; this is a splendid and timely book and I wish it all the success it deserves. After all, the concept of evidence‐based medicine is nothing more than the state of mind that every clinical teacher hopes to develop in their students; Dr Greenhalgh’s sceptical but constructive approach to medical literature suggests that such a happy outcome is possible at least once in the lifetime of a professor of medicine.
DJ Weatherall
Oxford
September 1996
When I published the first edition of this book in 1996, I was a young physician in family medicine and a junior lecturer in a university; evidence‐based medicine was still something of an unknown quantity. It’s now 2024; I am approaching retirement (no longer practising clinical medicine but still working as a full‐time professor) and evidence‐based healthcare (no longer ‘medicine’ alone) is a major force in science and clinical practice. This seventh edition is co‐written with new blood in the shape of Paul Dijkstra, a consultant physician and academic who has applied evidence‐based healthcare in rigorous and imaginative ways in his own clinical field (sports medicine).
Back in 1995, when the idea for this book emerged, a handful of academics (including me) were already enthusiastic and had begun running ‘training the trainers’ courses to disseminate what we saw as a highly logical and systematic approach to clinical practice. Others – the majority of clinicians – were convinced that this was a passing fad that was of limited importance and would never catch on. I wrote How to Read a Paper for two reasons. First, students on my own courses were asking for a simple introduction to the principles presented in what was then known as ‘Dave Sackett’s big red book’ (Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical Epidemiology: A basic science for clinical medicine. London: Little, Brown; 1991), an outstanding and inspirational volume that was already in its fourth reprint, but which some novices apparently found a hard read. Second, it was clear to me that many of the critics of evidence‐based medicine did not really understand what they were dismissing and that until they did, serious debate on the clinical, pedagogical and even political place of evidence‐based medicine as a discipline could not begin.
I am of course delighted that How to Read a Paper has become a standard reader in many medical and nursing schools, and that it has so far been translated into over 20 languages, including French, German, Italian, Spanish, Portuguese, Chinese, Polish, Japanese, Czech and Russian. I am also delighted that what was initially dismissed as a fringe subject in academia has been well and truly mainstreamed in clinical service. In the UK, for example, it is now a contractual requirement for all doctors, nurses and pharmacists to practise (and for managers to manage) according to best research evidence.
In the 28 years since the first edition of this book was published, evidence‐based medicine (and, more broadly, evidence‐based healthcare) has waxed and waned in popularity. Hundreds of textbooks and tens of thousands of journal articles now offer different angles on the ‘basics of EBM’ covered briefly in the chapters that follow. An increasing number of these sources point out genuine limitations of evidence‐based healthcare in certain contexts. Others look at evidence‐based medicine and healthcare as a social movement – a ‘bandwagon’ that took off at a particular time (the 1990s) and place (North America) and spread quickly with all sorts of knock‐on effects for particular interest groups.
It has been a delight working with Paul on this latest edition of what has become a classic introductory textbook. I think the new jointly authored text is more vibrant and varied than the previous single‐author editions, and I hope you agree! As ever, we would welcome any feedback that will help make the text more accurate, readable and practical.
When my wife Andrea and I bought our first copy of How to Read a Paper (at the time, I was a young sports medicine doctor and Andrea a masters student in experimental therapeutics at Oxford), I never thought I would one day have the privilege to co‐author edition seven with Trisha Greenhalgh! While Andrea introduced Oxford and the Centre for Evidence‐Based Medicine to me, Trisha opened my eyes to the new world (for me) of evidence‐based healthcare: How to Read a Paper spotlighted shortcomings in my own undergraduate and early graduate training and changed how I practised sports medicine. The book inspired me to think and practise in a more ‘evidence‐based’ way, to embrace patients’ expertise more, to listen and question more, and to read healthcare (and other) papers more critically. Working with Trisha on the seventh edition (and having had her as one of my five DPhil in Evidence‐Based Health Care mentors) was far more than an enlightening experience; it continues to be a joyous and humbling learning journey for which I’m eternally grateful! I am keen to share the lessons from this journey with you too.
When preparing this seventh edition, Trisha and I began with some formal reviews of the previous edition, and also a social media call for suggestions on how to improve it (including ones from students, who are the book’s main target audience). Respondents wanted a wider variety of chapters, updated examples and – the most significant suggestion perhaps – coverage of how the artificial intelligence (AI) revolution changes EBM and EBHC. After all, in these days of ChatGPT, maybe you don’t need to read a paper at all, since your digital assistant could read it for you! We’ve included more examples of big data studies and other AI‐supported research (see, in particular, Chapter 17). We added two more chapters, one on mechanistic evidence (Chapter 18) and another on papers reporting consensus exercises (Chapter 19).
Trisha Greenhalgh
Paul Dijkstra
September 2024
This book is intended for anyone, whether medically qualified or not, who wishes to find their way into the medical and healthcare literature, assess the scientific validity and practical relevance of the articles they find and, where appropriate, put the results into practice. These skills constitute the basics of evidence‐based medicine (if you’re thinking about what doctors do) or evidence‐based healthcare (if you’re looking at the care of patients more widely).
I hope this book will improve your confidence in reading and interpreting papers relating to clinical decision‐making. I hope, in addition, to convey a further message, which is this. Many of the descriptions given by cynics of what evidence‐based healthcare is (the glorification of things that can be measured without regard for the usefulness or accuracy of what is measured, the uncritical acceptance of published numerical data, the preparation of all‐encompassing guidelines by self‐appointed “experts” who are out of touch with real medicine, the debasement of clinical freedom through the imposition of rigid and dogmatic clinical protocols, and the over‐reliance on simplistic, inappropriate, and often incorrect economic analyses), are actually criticisms of what the evidence‐based healthcare movement is fighting against, rather than of what it represents.
Do not, however, think of me as an evangelist for the gospel according to evidence‐based healthcare. I believe that the science of finding, evaluating and implementing the results of clinical research can, and often does, make patient care more objective, more logical, and more cost‐effective. If I didn’t believe that, I wouldn’t spend so much of my time teaching it and trying, as a doctor, to practise it. Nevertheless, I believe that when applied in a vacuum (that is, in the absence of common sense and without regard to the individual circumstances and priorities of the person being offered treatment or to the complex nature of clinical practice and policymaking), ‘evidence‐based’ decision‐making is a reductionist process with a real potential for harm.
Finally, you should note that I am neither an epidemiologist nor a statistician, but a person who reads papers and who has developed a pragmatic (and at times unconventional) system for testing their merits. If you wish to pursue the epidemiological or statistical themes covered in this book, I would encourage you to move on to a more definitive text, references for which you will find at the end of each chapter.
Trisha Greenhalgh
November 1996
We are grateful to the people listed below for help and advice in preparing this book, though we take full responsibility for any inaccuracies.
To the people who, long ago, inspired and supported Trisha to write the first edition of How to Read a Paper, including Ruth Holland, Professor Sir Andy Haines, Professor Dave Sackett and Dr Anna Donald.
To people who have contributed ideas, references, feedback or suggestions to particular chapters for the current edition (those contributing to previous editions are mentioned in the text of the relevant chapter). In sum, they are:
Drs Jason Oke and Mohammed Farooq (Chapter 5)
Professor Mike Clarke (Chapter 9)
Professor Stavros Petrou (Chapter 11)
Dr Lennard Lee (Chapter 15)
Ms Yosra Mekki (Chapter 17)
To the authors and publishers of articles who gave permission to reproduce figures or tables. Details are given in the text.
To various additional advisers and proofreaders who had direct input to this new edition or who advised Trisha on previous editions.
To the many readers, too numerous to mention individually, who took time to write in and point out ambiguities and typographical and factual errors in previous editions.
To our followers on social media who proposed numerous ideas and constructive criticisms. We are @trishgreenhalgh and @drpauldijkstra on X and can also be found on other platforms.
To our partners and families for their unfailing support for our academic work and writing. Shout out to Trisha’s husband Dr Fraser Macfarlane and their sons Rob and Al Macfarlane. Our sons had not long been born when the first edition of this book was being written and are now pursuing their own scientific careers (Rob in marine biology, Al in medicine). Another shout out to Paul’s wife Andrea Dijkstra and their daughters Elisabet and Anne – Elisabet pursuing doctoral studies in music at Guildhall School of Music and Drama in London and Anne well on her way to becoming an architect.
Evidence‐based medicine (EBM), which is part of the broader field of evidence‐based healthcare (EBHC), is much more than just reading papers. According to what is still (more than 25 years after it was written) the most widely quoted definition, it is ‘the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients’ [1]. This definition is useful up to a point, but it misses out a very important aspect of the subject – and that is the use of mathematics. Even if you know almost nothing about EBHC, you probably know it talks a lot about numbers and ratios! A few years ago, Trisha and Anna Donald decided to be upfront about this in their own teaching, and proposed this alternative definition:
Evidence‐based medicine is the use of mathematical estimates of the risk of benefit and harm, derived from high‐quality research on population samples, to inform clinical decision‐making in the diagnosis, investigation or management of individual patients.
The defining feature of EBHC, then, is the use of numbers derived from research on population samples to inform decisions about individuals. This, of course, begs the question ‘What is research?’ – for which a reasonably accurate answer might be ‘Focused, systematic enquiry aimed at generating new knowledge’. In later chapters, we explain how this definition can help you distinguish genuine research (which should inform your practice) from the poor‐quality endeavours of well‐meaning amateurs (which you should politely ignore). (As an aside, it has become fashionable to include qualitative research within EBHC, and we do cover this in Chapter 12, but most people talking about EBM and EBHC are referring to research that generates numbers.)
If you follow an evidence‐based approach to clinical decision‐making, therefore, all sorts of issues relating to your patients (or, if you work in public health medicine, issues relating to groups of people) will prompt you to ask questions about scientific evidence, seek answers to those questions in a systematic way and alter your practice accordingly.
You might ask questions, for example, about a patient’s symptoms (‘In a 34‐year‐old man with left‐sided chest pain, what is the probability that there is a serious heart problem, and, if there is, will it show up on a resting ECG?’), about physical or diagnostic signs (‘In an otherwise uncomplicated labour, does the presence of meconium [indicating fetal bowel movement] in the amniotic fluid indicate significant deterioration in the physiological state of the fetus?’), about the prognosis of an illness (‘If a previously well two‐year‐old has a short fit associated with a high temperature, what is the chance that she will subsequently develop epilepsy?’), about therapy (‘In patients with acute coronary syndrome [heart attack], are the risks associated with thrombolytic drugs [clot busters] outweighed by the benefits, whatever the patient’s age, sex and ethnic origin?’), about cost‐effectiveness (‘Is the cost of this new anti‐cancer drug justified, compared with other ways of spending limited healthcare resources?’), about patients’ preferences (‘In an 87‐year‐old woman with intermittent atrial fibrillation and a recent transient ischaemic attack, do the potential harms and inconvenience of anticoagulant therapy outweigh the risks of not taking it?’) and about a host of other aspects of health and health services.
Professor Sackett, in the opening editorial of the very first issue of the journal Evidence‐Based Medicine, summarised the essential steps in the emerging science of EBM [2]:
Convert our information needs into answerable questions (i.e. to formulate the problem).
Track down the best evidence with which to answer these questions – which may come from the clinical examination, the diagnostic laboratory, the published literature or other sources.
Appraise the evidence critically (i.e. weigh it up) to assess its validity (closeness to the truth) and usefulness (clinical applicability).
Implement the results of this appraisal in our clinical practice.
Evaluate our performance.
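To make the first of these steps concrete: a widely taught device for converting an information need into an answerable question (not named in this chapter, but standard in EBM teaching) is PICO – Patient or Problem, Intervention, Comparison, Outcome. The sketch below is a minimal illustration in Python; the question wording is hypothetical, loosely based on the therapy example earlier in this chapter, not quoted from the book.

```python
from dataclasses import dataclass

@dataclass
class ClinicalQuestion:
    """An answerable clinical question, structured as PICO."""
    patient: str       # P: the patient, population or problem
    intervention: str  # I: the treatment, test or exposure of interest
    comparison: str    # C: the alternative against which it is judged
    outcome: str       # O: the result that matters to the patient

# The therapy question from earlier in this chapter, recast as PICO
# (illustrative wording only).
question = ClinicalQuestion(
    patient="patients with acute coronary syndrome",
    intervention="thrombolytic ('clot-busting') drugs",
    comparison="no thrombolysis",
    outcome="net benefit: deaths prevented versus bleeds caused",
)
print(question)
```

Framing the question this explicitly makes the second step (tracking down the evidence) far easier, because each PICO element suggests search terms.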
Hence, EBHC requires you not only to read papers but to read the right papers at the right time, and then to alter your behaviour (and, what is often more difficult, influence the behaviour of other people) in the light of what you have found. Sometimes, how‐to‐do‐it courses in EBHC concentrate too heavily on the third of these five steps (critical appraisal) to the exclusion of all the others. Yet, if you have asked the wrong question or sought answers from the wrong sources, you might as well not read any papers at all. And all your training in search techniques and critical appraisal will go to waste if you do not put at least as much effort into implementing valid evidence and measuring progress towards your goals as you do into reading the paper. A few years ago, Trisha added three more stages to Sackett’s five‐stage model to incorporate the patient’s perspective; the resulting eight‐stage, context‐sensitive checklist for evidence‐based practice is given (like the other checklists in this book) in Appendix 1.
If we were to be pedantic about the title of this book, these broader aspects of EBHC should not even get a mention here. But we hope you understand that the book would be incomplete without the final section of this chapter (Before you start: formulate the problem), Chapter 2 (Searching the literature), and Chapter 16 (Applying evidence with patients). Chapters 3–15 describe step three of the EBHC process: critical appraisal; that is, what you should do when you actually have the paper in front of you. Chapter 20 deals with common criticisms of EBHC. The challenges of implementation are so complex that they needed a book of their own, How to Implement Evidence‐Based Healthcare [3].
If you want to explore the subject of EBHC on the Internet, you could try the websites listed in Box 1.1 (these were the top suggestions when we asked our X [formerly Twitter] followers which ones they found most useful). If you’re not ready for that yet, don’t worry at this stage, but do put learning to use web‐based resources on your to‐do list. Don’t worry either when you discover that there are over 1000 websites dedicated to EBM and EBHC; they all offer very similar material and you certainly don’t need to visit them all.
BMJ Evidence‐Based Medicine Toolkit: a resource site maintained by this leading UK medical journal containing a wealth of resources and links for EBM, including links to critical appraisal checklists and statistical tools. https://bestpractice.bmj.com/info/toolkit
National Institute for Health and Care Excellence: this UK‐based website, which is also popular outside the UK, links to evidence‐based guidelines and topic reviews. www.nice.org.uk
The A–Z List of Evidence‐Based Medicine Resources: A one‐stop shop for various databases maintained by Dartmouth Libraries at Dartmouth College, Hanover, NH, USA, including PubMed, the Cochrane Database of Systematic Reviews and the Database of Abstracts of Reviews of Effectiveness (DARE): https://www.dartmouth.edu/library/biomed/guides/research/ebm‐az‐list.html
Critics of EBHC might define it as ‘the tendency of a group of young, confident and highly numerate medical academics to belittle the performance of experienced clinicians using a combination of epidemiological jargon and statistical sleight of hand’ or ‘the argument, usually presented with near‐evangelistic zeal, that no health‐related action should ever be taken by a doctor, a nurse, a purchaser of health services or a policymaker unless and until the results of several large and expensive research trials have appeared in print and been approved by a committee of experts’.
Anyone who works face to face with patients knows how often it is necessary to seek new information before making a clinical decision. In general, we don’t put a patient on a drug without evidence that it is likely to work. Apart from anything else, such off‐licence use of medication is, strictly speaking, illegal. Surely we have all been practising EBHC for years?
Well, no, we haven’t. There have been a number of surveys on the behaviour of doctors, nurses and related professionals and, while things seem to be improving, performance still falls short. It was estimated in the 1970s in the United States that only around 10–20% of all health technologies then available (i.e. drugs, procedures, operations, etc.) were evidence‐based; that estimate was revised upwards to 21% in 1990. Studies of the interventions offered to consecutive series of patients suggested that 60–90% of clinical decisions, depending on the specialty, were ‘evidence‐based’ [4]. But such studies had major methodological limitations (in particular, they were done in international centres of excellence and they did not take a particularly nuanced look at whether the patient would have been better off on a different drug or no drug at all).
Evidence‐based decision‐making is more common in some specialties than others. A large survey by an Australian team, for example, looked at 1000 patients treated for the 22 most commonly seen conditions in a primary‐care setting. The researchers found that while 90% of patients received evidence‐based care for coronary heart disease, only 13% did so for alcohol dependence [5]. Furthermore, the extent to which any individual practitioner provided evidence‐based care varied in the sample from 32% of the time to 86% of the time. More recently, a review in BMJ Evidence‐Based Medicine cited studies of the proportion of doctors’ clinical decisions that were based on strong research evidence; the figure varied from 14% (in thoracic surgery) to 65% (in psychiatry); this paper also reported new data on primary healthcare, in which around 18% of decisions were based on ‘patient‐oriented high‐quality evidence’ [6].
The fashion to analyse what proportion of clinical decisions are evidence‐based seems to have waned in recent years. But an online survey of UK general practitioners published by our team in 2020 showed that their knowledge of the quantitative benefits and harms of different treatments for long‐term conditions such as diabetes or heart disease was very poor, and that most of them were aware that they were ignorant in this regard [7].
Let’s take a look at the various approaches that health professionals use to reach their decisions in reality – all of which are examples of what EBHC isn’t.
When Trisha was a medical student, she occasionally joined the retinue of a distinguished professor as he made his daily ward rounds. On seeing a new patient, he would enquire about the patient’s symptoms, turn to the massed ranks of juniors around the bed, and relate the story of a similar patient encountered a few years previously. ‘Ah, yes. I remember we gave her such‐and‐such, and she was fine after that’. He was cynical, often rightly, about new drugs and technologies and his clinical acumen was second to none. Nevertheless, it had taken him 40 years to accumulate his expertise, and the largest medical textbook of all – the collection of cases that were outside his personal experience – was forever closed to him.
Anecdote (storytelling) has an important place in clinical practice [8]. Psychologists have shown that students acquire the skills of medicine, nursing and so on by memorising what was wrong with particular patients, and what happened to them, in the form of stories or ‘illness scripts’. Stories about patients are the unit of analysis (i.e. the thing we study) in grand rounds and teaching sessions. Clinicians glean crucial information from patients’ illness narratives; most crucially, perhaps, what being ill means to the patient. And experienced doctors and nurses rightly take account of the accumulated ‘illness scripts’ of all their previous patients when managing subsequent patients. But that doesn’t mean simply doing the same for patient B as you did for patient A if your treatment worked, and doing precisely the opposite if it didn’t!
The dangers of decision‐making by anecdote are well illustrated by considering the risk–benefit ratio of drugs and medicines. When Trisha was in her first pregnancy, she developed severe vomiting, was given the anti‐sickness drug prochlorperazine, and suffered a very distressing neurological spasm. Two days later, she had recovered fully from this idiosyncratic reaction, but she has never prescribed the drug since, even though the estimated prevalence of neurological reactions to prochlorperazine is only one in several thousand cases. Conversely, it is tempting to dismiss the possibility of rare but potentially serious adverse effects from familiar drugs, such as thrombosis on the contraceptive pill, when one has never encountered such problems in oneself or one’s patients.
We clinicians would not be human if we ignored our personal clinical experiences, but we would be better to base our decisions on the collective experience of thousands of clinicians treating millions of patients, rather than on what we as individuals have seen and felt. Chapter 5 (Statistics for the non‐statistician) describes some more objective methods, such as the number needed to treat, for deciding whether a particular drug (or other intervention) is likely to do a patient significant good or harm.
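For readers who want to see what such an ‘objective method’ looks like in practice, here is a minimal sketch of the arithmetic behind the number needed to treat, using made‐up event rates rather than data from any study cited in this book (Chapter 5 covers the concepts properly):

```python
# Hypothetical trial: 12% of control patients and 8% of treated patients
# experience the bad outcome. All numbers are illustrative only.
control_event_rate = 0.12
treated_event_rate = 0.08

# Absolute risk reduction: the difference the treatment makes to one
# patient's chance of the outcome.
arr = control_event_rate - treated_event_rate   # 0.04

# Relative risk reduction: the same effect expressed proportionally;
# it sounds more impressive but hides the small absolute gain.
rrr = arr / control_event_rate                  # ~0.33

# Number needed to treat: how many patients must be treated for one
# of them to avoid the outcome.
nnt = 1 / arr                                   # 25

print(f"ARR = {arr:.2%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
```

The contrast between a 33% relative risk reduction and a number needed to treat of 25 is exactly the kind of perspective that personal anecdote cannot supply.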
When the EBM movement was still in its infancy, Sackett emphasised that evidence‐based practice was no threat to old‐fashioned clinical experience or judgement [1]. The question of how clinicians can manage to be both ‘evidence based’ (i.e. systematically informing their decisions by research evidence) and ‘narrative based’ (i.e. embodying all the richness of their accumulated clinical anecdotes and treating each patient’s problem as a unique illness story rather than as a ‘case of X’) is a difficult one to address philosophically, and beyond the scope of this book. The interested reader might like to look up two articles by Trisha on this topic [9, 10].
Trisha qualified as a doctor back in 1983, when medical journals were mostly still in paper form. She used to keep a file of papers ripped out of her medical weeklies before binning the less interesting parts. If an article or editorial seemed to have something new to say, she consciously altered her clinical practice in line with its conclusions. One paper, for example, said that all children with suspected urinary tract infections should be sent for scans of the kidneys, so she began referring anyone under the age of 16 with urinary symptoms for specialist investigations. The advice was in print, and it was recent, so it must surely replace what had been standard practice – in this case, referring only the small minority of such children who display ‘atypical’ features.
This approach to clinical decision‐making is still common, although the file of paper cuttings has usually been replaced by online articles that the clinician has bookmarked. How many clinicians do you know who justify their approach to a particular clinical problem by citing the results section of a single published study, even though they could not tell you anything at all about the methods used to obtain those results? Was the trial randomised and controlled (see section ‘What are randomised controlled trials and why do they matter?’ in Chapter 3)? How many patients, of what age, sex and disease severity, were involved (see section ‘Who is the study about?’ in Chapter 4)? How many withdrew from (‘dropped out of’) the study and why (see section ‘Were preliminary statistical questions addressed?’ in Chapter 4)? By what criteria were patients judged cured (see section ‘Surrogate endpoints’ in Chapter 6)? If the findings of the study appeared to contradict those of other researchers, what attempt was made to validate (confirm) and replicate (repeat) them (see section ‘Ten questions to ask about a paper that claims to validate a diagnostic or screening test’ in Chapter 8)? Were the statistical tests that allegedly proved the authors’ point appropriately chosen and correctly performed (see Chapter 5)? Has the patient’s perspective been systematically sought and incorporated via a shared decision‐making tool (see Chapter 16)? Doctors (and nurses, midwives, allied health professionals, medical managers, psychologists, medical students and consumer activists) who like to cite the results of medical research studies have a responsibility to ensure that they first go through a checklist of questions like these (more of which are listed in Appendix 1).
When Trisha wrote the first edition of this book in the mid‐1990s, she was critical of the so‐called ‘GOBSAT’ (good old boys sat around a table) method for producing guidelines. Professor Cindy Mulrow [11], one of the founders of the science of systematic review (see Chapter 9), showed a few years ago that experts in a particular clinical field are less likely to provide an objective review of all the available evidence than a non‐expert who approaches the literature with unbiased eyes, partly because non‐evidence‐based habits may get passed on unquestioningly from seniors to juniors in a specialty. Table 1.1 gives examples of practices that were at one time widely accepted as good clinical practice (and which would have made it into the GOBSAT guideline of the day) but which have subsequently been discredited by high‐quality clinical trials. Indeed, one growth area in EBHC is using evidence to inform disinvestment in practices that were once believed to be evidence based [12].
While you should be wary of the ‘GOBSAT’ approach, there is increasing evidence that ignoring the views of subject experts entirely when constructing guidelines is not a sensible approach, for two reasons. Firstly, the embodied wisdom of people who have managed hundreds of patients with a condition can add great value to a thorough review of the published literature. And secondly, because evidence‐based information is now much more readily available than it used to be, many subject experts these days have both clinical wisdom and up‐to‐date knowledge of the evidence base. Another growth area in EBHC is the science of how to use consensus processes in a systematic and objective manner rather than an opportunistic and partisan one. Chapter 19, new for this edition, explains a relatively new methodology for combining reviews of the evidence with tapping into experts’ clinical wisdom.
Chapter 9 takes you through a checklist for assessing whether a ‘systematic review of the evidence’ produced to support recommendations for practice or policymaking really merits the description, and Chapter 10 discusses the harm that can be done by applying guidelines that are not evidence based.
Table 1.1 Examples of harmful practices once strongly supported by ‘expert opinion’
| Approximate time period | Clinical practice accepted by experts of the day | Year shown to be harmful | Impact on clinical practice |
| --- | --- | --- | --- |
| From 500 BCE | Bloodletting (for just about any acute illness) | 1820 (a) | Bloodletting ceased around 1910 |
| 1957 | Thalidomide for ‘morning sickness’ in early pregnancy led to the birth of over 8000 severely malformed babies worldwide | 1960 | The teratogenic effects of this drug were so dramatic that thalidomide was rapidly withdrawn when the first case report appeared |
| From at least 1900 | Bed rest for acute low back pain | 1986 | Many doctors still advise people with back pain to ‘rest up’ |
| 1960s | Benzodiazepines (e.g. diazepam) for mild anxiety and insomnia were initially marketed as ‘non‐addictive’ but subsequently shown to cause severe dependence and withdrawal symptoms | 1975 | Benzodiazepine prescribing for these indications fell in the 1990s |
| 1970s | Intravenous lignocaine in acute myocardial infarction, with a view to preventing arrhythmias, was subsequently shown to have no overall benefit and in some cases to cause fatal arrhythmias | 1974 | Lignocaine continued to be given routinely until the mid‐1980s |
| Late 1990s | Rofecoxib (one of a new class of non‐steroidal anti‐inflammatory drug introduced for the treatment of arthritis) was later shown to increase the risk of heart attack and stroke | 2004 | Rofecoxib was quickly withdrawn following some high‐profile legal cases in the USA, although new uses for cancer treatment (where risks may be outweighed by benefits) are now being explored |
| 2000s | Glitazones (a new class of drug for type 2 diabetes) were initially believed to produce better blood glucose control and improved cardiovascular risk compared with older classes of oral hypoglycaemic | 2010 | Rosiglitazone, for example, was withdrawn in Europe following post‐marketing surveillance data showing increased risk of heart attack and death |
| 2000s | Hydroxyethyl starch (HES) was standard practice for volume replacement in intensive care units | 2013 | Meta‐analyses showed that not only does HES not improve survival but it is associated with adverse effects including bleeding, kidney damage, damage to organs (liver, lungs, spleen, bone marrow) and severe itching |
| 2010s | Vaginal mesh implants for prolapse (a common complication after childbirth) were initially viewed as more effective and safer than traditional repair | 2018 | A review in the UK in 2018 found that vaginal mesh implants were no more effective than standard repairs; adverse effects in some women required removal and, in some cases, severe complications occurred, including (rare) deaths |
| 2020s | Convalescent plasma was briefly hailed as potentially life‐saving in the treatment of acute severe COVID‐19 in early 2020 on the basis of non‐randomised studies | 2021 | Randomised controlled trials showed that convalescent plasma had no benefit in most patients, except in rare cases where the plasma contained unusually high levels of neutralising antibodies. Some patients came to harm from transfusion reactions |

(a) Interestingly, bloodletting was probably the first practice for which a randomised controlled trial was suggested. The physician van Helmont issued this challenge to his colleagues as early as 1662: ‘Let us take 200 or 500 poor people that have fevers. Let us cast lots, that one half of them may fall to my share, and the others to yours. I will cure them without blood‐letting, but you do as you know – and we shall see how many funerals both of us shall have’ [13]. Thanks to Matthias Egger for this example.
The popular press tends to be horrified when they learn that a treatment has been withheld from a patient for reasons of cost. Managers, politicians and, increasingly, doctors can count on being pilloried when a child with a rare cancer is not sent to a specialist unit in America or an elderly patient is denied a drug to stop her visual loss from macular degeneration. Yet, in the real world, all healthcare is provided from a limited budget and it is increasingly recognised that clinical decisions must take into account the economic costs of a given intervention. As Chapter 11 argues, clinical decision‐making purely on the grounds of cost (‘cost minimisation’ – purchasing the cheapest option with no regard to how effective it is) is generally ethically unjustified and we are right to object vocally when this occurs.
Expensive interventions should not, however, be justified simply because they are new or because they ought to work in theory, or because the only alternative is to do nothing – but because they are very likely to save life or significantly improve its quality. How, though, can the benefits of a hip replacement in a 75‐year‐old be meaningfully compared with those of cholesterol‐lowering drugs in a middle‐aged man or infertility investigations for a couple in their twenties? Somewhat counterintuitively, there is no self‐evident set of ethical principles or analytical tools that we can use to match limited resources to unlimited demand. As you can see in Chapter 11, the much‐derided quality‐adjusted life year or QALY and similar utility‐based units are simply attempts to lend some objectivity to the illogical but unavoidable comparison of apples with oranges in the field of human suffering. In the UK, the National Institute for Health and Care Excellence (www.nice.org.uk) seeks to develop both evidence‐based guidelines and fair allocation of NHS resources.
There is one more reason why some people find the term evidence‐based medicine (or healthcare) unpalatable. This chapter has argued that EBHC is about coping with change, not about knowing all the answers before you start. In other words, it is not so much about what you have read in the past but about how you go about identifying and meeting your ongoing learning needs and applying your knowledge appropriately and consistently in new clinical situations. Doctors who were brought up in the old‐school style of never admitting ignorance may find it hard to accept that a major element of scientific uncertainty exists in practically every clinical encounter, although, in most cases, the clinician fails to identify the uncertainty or to articulate it in terms of an answerable question (see next section). If you are interested in the research evidence on doctors’ [lack of] questioning behaviour, see an excellent review by Swinglehurst [14].
The fact that none of us – not even the cleverest or most experienced – can answer all the questions that arise in the average clinical encounter means that the ‘expert’ is more fallible than they were traditionally cracked up to be. An evidence‐based approach to ward rounds may turn the traditional medical hierarchy on its head when the staff nurse or junior doctor produces new evidence that challenges what the consultant taught everyone last week. For some senior clinicians, learning the skills of critical appraisal is the least of their problems in adjusting to an evidence‐based teaching style!
Having defended EBHC against all the standard arguments put forward by clinicians, we should also acknowledge that a number of legitimate criticisms have been raised by philosophers and social scientists. Such arguments, summarised in Chapter 20, address the nature of knowledge and the question of how much medicine really rests on decisions at all. But please don’t turn to that chapter (which is, philosophically speaking, a ‘hard read’) until you have fully grasped the basic arguments in the first few chapters of this book or you risk becoming confused!