Evidence-Based Chronic Pain Management
Description

A genuine evidence-based text for optimum pain relief in various chronic conditions:
• Contributes an important advance in the practice of pain management, providing the information on which to build more coherent and standardised strategies for relief of patient suffering
• Answers questions about which methods are the most effective, and which are not effective yet continue to be used
• Includes discussion of the positive and the negative evidence, and addresses the grey areas where evidence is ambivalent
• Written by the world's leading experts in evidence-based pain management, this is a seminal text in the field of pain




Contents

List of contributors

Preface

List of abbreviations

Part 1 Understanding evidence and pain

1 Why evidence matters (Andrew Moore and Sheena Derry)

Introduction

Most published research false?

Limitations

Acknowledging limitations

Statistical testing

Multiple statistical testing

Size is everything

How many events?

Subgroup analyses

Trivial differences

Confounding by indication

Adverse events

Safety

Importance of the individual patient

Outcomes

Conclusion

References

2 Clinical trial design for chronic pain treatments (Alec B. O’Connor and Robert H. Dworkin)

Types of clinical trials

Prospective cohort trials

Randomized clinical trials: general considerations

Randomization

Blinding

Parallel group trials

Cross-over trials

Treatment features

Patient selection

Treatment outcomes

Statistical analysis

Interpretation of results

Clinical trial quality and sources of bias

Conclusion

References

3 Introduction to evaluation of evidence (Eija Kalso)

What is evidence-based medicine?

Evaluating adverse effects

Using evidence for the individual patient

Future of evidence?

References

4 Neurobiology of pain (Victoria Harvey and Anthony Dickenson)

Introduction

Peripheral events

Tissue damage

Nerve damage

Central excitatory systems

Central inhibitory systems

Conclusion

References

5 Intractable pain and the perception of time: every patient is an anecdote (David B. Morris)

Acknowledgment

References

6 Psychology of chronic pain and evidence-based psychological interventions (Christopher Eccleston)

A primer in psychology

A definitional interlude

Cognitive psychology and pain: private mental experience and pain

Social psychology and pain: collective experience

Clinical psychology: applications of psychological knowledge

Cognitive behavioral therapy for chronic pain management

The evidence base

Extending the evidence base

Next steps

References

Part 2 Clinical pain syndromes: the evidence

7 Chronic low back pain (Maurits van Tulder and Bart Koes)

Background

The current understanding of relevant pathophysiology

Prevalence figures/epidemiology

Risk factors

Diagnosis

Treatment

Guidelines and implementation

How to produce evidence of effectiveness in the future

Authors’ recommendations

References

8 Chronic neck pain and whiplash (Allan Binder)

Introduction

Background

Risk factors

Therapeutic interventions for neck pain

Future research

Discussion

Author recommendation: a pragmatic approach to the treatment of nonspecific neck pain, including untested measures

Comment

References

9 Pain associated with osteo-arthritis (David L. Scott)

Background

Interventions

Future research to improve management of pain in osteo-arthritis

Author recommendations

References

10 Pain associated with rheumatoid arthritis (Paul Creamer and Sarah Love-Jones)

Background

Pathophysiology

Epidemiology

Risk factors

Assessing pain in rheumatoid arthritis

Management of rheumatoid arthritis pain

Future research into pain associated with rheumatoid arthritis

Personal recommendations for management

Conclusion

References

11 Fibromyalgia (Winfried Häuser, Kati Thieme, Frank Petzke and Claudia Sommer)

Background

Definition and classification

Prevalence

Course of fibromyalgia syndrome

Risk factors

Protective factors

Pathophysiology

Diagnosis

Treatment

Guidelines and implementation

Producing evidence of effectiveness in the future

Authors’ recommendations

References

12 Facial pain (Joanna M. Zakrzewska)

Background

Chronic idiopathic facial pain/persistent facial pain/atypical facial pain

Temporomandibular disorders (TMD)

Trigeminal neuralgia

References

13 Pelvic and perineal pain in women (William Stones and Beverly Collett)

Pathophysiology of chronic pelvic and perineal pain in women

Classification of the causes of pelvic pain

Epidemiology of chronic pelvic and vulval/perineal pain in women

Risk factors

Treatment of pelvic pain

Issues of cost-effectiveness

The future: strategies to improve the evidence base for the treatment of pelvic and vulval/perineal pain in women

Authors’ recommendations

References

14 Perineal pain in males (Andrew P. Baranowski)

Introduction

Background to perineal/pelvic pain syndromes in males

Current understanding of relevant pathophysiology for perineal/pelvic pain syndromes

Prevalence figures/epidemiology

Risk factors

Management of perineal pain

Author’s recommendations

References

15 Pain from abdominal organs (Timothy J. Ness and L. Vandy Black)

Sources of abdominal pain

Evaluation of abdominal symptoms

Visceral pain arising from cancer

Visceral pain arising from the gastrointestinal tract

Visceral pain arising from the hepatobiliary system and pancreas

Visceral pain arising from urologic organs

Visceral pain arising from reproductive organs

Other disorders with abdominal pain as a symptom

Future research priorities

General recommendations related to therapeutics

Conclusion

References

16 Postsurgical pain syndromes (Fred Perkins and Jane Ballantyne)

Background

Prevalence

Definition and timing of postsurgical pain

Common postsurgical pain syndromes

Summary of present evidence and its limitations

Author’s recommendations

References

17 Painful diabetic neuropathy (Christina Daousi and Turo J. Nurmikko)

Introduction

Epidemiology and natural history of chronic painful diabetic neuropathy

Pathogenesis of neuropathic pain in chronic pain diabetic neuropathy

Therapies for painful diabetic neuropathy

Author’s recommendations

Future research directions

Conclusion

References

18 Postherpetic neuralgia (Turo J. Nurmikko)

Introduction

Pathophysiology

Epidemiology of herpes zoster

Prevention of postherpetic neuralgia

Treatment of postherpetic neuralgia

References

19 Phantom limb pain (Lone Nikolajsen)

Introduction

Pathophysiology

Epidemiology

Risk factors

Interventions

Future research

Author recommendations for the management of phantom pain

References

20 Complex regional pain syndrome (Andreas Binder and Ralf Baron)

Introduction

The current understanding of relevant pathophysiology

Epidemiology

Risk factors

Treatment

Cost of treatment and cost-benefit

How to produce evidence of effectiveness in future

Authors' recommendations

References

21 Central pain syndromes (Kristina B. Svendsen, Nanna B. Finnerup, Henriette Klit and Troels Staehelin Jensen)

Introduction

Pathophysiology

Central post-stroke pain

Spinal cord injury pain

Central pain in multiple sclerosis

Side effects of commonly used drugs in central pain

Conclusion – evidence-based treatment of central pain

References

22 Headache (Peer Tfelt-Hansen)

Introduction

Primary headaches

Pathophysiology

Systematic reviews and meta-analyses in migraine

Clinical trials in tension-type headache

Clinical trials in cluster headache

Research needed to improve the evidence-based management of migraine

References

23 Chest pain syndromes (Austin Leach and Michael Chester)

Introduction

Anatomy

Angina pectoris

Chest wall pain

Future directions for research

Authors’ recommendations

References

Part 3 Cancer pain

24 Oncologic therapy in cancer pain (Rita Janes and Tiina Saarto)

Background

Epidemiology

Current understanding of relevant pathophysiology

Radiotherapy

Bisphosphonates

Conclusion

Authors’ recommendations

References

25 Cancer pain: analgesics and co-analgesics (Rae Frances Bell)

Background

Prevalence, epidemiology and risk factors

Interventions supported by evidence (ranked according to evidence level)

The future: how to produce evidence of effectiveness

Conclusion

Author’s recommendations

References

26 Psychologic interventions for cancer pain (Francis J. Keefe, Tamara J. Somers and Amy Abernethy)

Introduction

Conceptual background

Efficacy of psychosocial interventions

Future directions

References

27 Transcutaneous electrical nerve stimulation and acupuncture (Mark I. Johnson)

Introduction

Transcutaneous electrical nerve stimulation

Acupuncture

Conclusion

References

Part 4 Treatment modalities: the evidence

28 Interventional therapies (Anthony Dragovich and Steven P. Cohen)

Introduction

Epidural injections

Facet interventions

Sacroiliac joint interventions

Spinal cord stimulation

Intradiskal electrothermal therapy (IDET)

Continuous neuraxial infusions

Conclusion

References

29 Spinal cord stimulation for refractory angina (Mats Börjesson, Clas Mannheimer, Paulin Andréll and Bengt Linderoth)

Introduction

Treatment of refractory angina

Neurostimulation in ischemic pain

Mechanisms of action

Clinical effects

Conclusion

References

30 Rehabilitative treatment for chronic pain (James P. Robinson, Raphael Leo, Joseph Wallach, Ellen McGough and Michael Schatman)

Introduction

Psychologic therapy

Physical therapy

Combination therapy – physical and psychologic

Multidisciplinary pain rehabilitation

Conclusion

A final word

References

31 Drug treatment of chronic pain (Henry McQuay)

Introduction

Menus and ladders

Choosing drugs to treat nociceptive pain

Choosing drugs to treat neuropathic pain

Conclusion

References

32 Complementary therapies for pain relief (Edzard Ernst)

Introduction

Effectiveness

Risks

Conclusion

References

Index

This edition first published 2010, © 2010 by Blackwell Publishing Ltd

BMJ Books is an imprint of BMJ Publishing Group Limited, used under licence by Blackwell Publishing which was acquired by John Wiley & Sons in February 2007. Blackwell's publishing program has been merged with Wiley's global Scientific, Technical and Medical business to form Wiley-Blackwell.

Registered office: John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial offices: 9600 Garsington Road, Oxford, OX4 2DQ, UK

111 River Street, Hoboken, NJ 07030–5774, USA

The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.

The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

The contents of this work are intended to further general scientific research, understanding, and discussion only and are not intended and should not be relied upon as recommending or promoting a specific method, diagnosis, or treatment by physicians for any particular patient. The publisher and the authors make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of fitness for a particular purpose. In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of medicines, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each medicine, equipment, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. Readers should consult with a specialist where appropriate. The fact that an organization or website is referred to in this work as a citation and/or a potential source of further information does not mean that the authors or the publisher endorse the information the organization or website may provide or recommendations it may make. Further, readers should be aware that internet websites listed in this work may have changed or disappeared between when this work was written and when it is read. No warranty may be created or extended by any promotional statements for this work. Neither the publisher nor the author shall be liable for any damages arising herefrom.

ISBN: 9781405152914

Library of Congress Cataloging-in-Publication Data

Evidence-based chronic pain management/edited by Catherine F. Stannard, Eija Kalso, Jane Ballantyne. p.; cm.

Includes bibliographical references and index.

ISBN 978-1-4051-5291-4

1. Chronic pain. I. Stannard, Catherine F. II. Kalso, Eija, 1955- III. Ballantyne, Jane, 1948-

[DNLM: 1. Pain—therapy. 2. Chronic Disease. 3. Evidence-Based Medicine. WL 704 E925 2010]

RB127.E95 2010

616'.0472—dc22

2009042743

A catalogue record for this book is available from the British Library.

List of contributors

Amy Abernethy MD

Associate Director

Duke Comprehensive Cancer Center

Associate Professor of Medicine

Duke University Medical Center

Durham, NC, USA

Paulin Andréll MD

Pain Centre, Department of Medicine

Sahlgrenska University Hospital/Östra

Göteborg University

Göteborg, Sweden

Andrew P. Baranowski BSc Hons, MBBS, FRCA, MD, FFPMRCA

Consultant and Honorary Senior Lecturer in Pain Medicine

The Pain Management Centre

The National Hospital for Neurology and Neurosurgery, University College London Hospitals, London, UK

Ralf Baron MD

Head, Division of Neurological Pain Research and Therapy

Department of Neurology

Universitaetsklinikum Schleswig-Holstein

Kiel, Germany

Rae Frances Bell MD, PhD

Senior Consultant Anaesthetist

Head of Multidisciplinary Pain Clinic/Research Fellow

Regional Centre of Excellence in Palliative Care

Haukeland University Hospital, Bergen, Norway

Andreas Binder MD

Division of Neurological Pain Research and Therapy

Department of Neurology

Universitaetsklinikum Schleswig-Holstein

Kiel, Germany

Allan Binder

Lister Hospital

E & N Hertfordshire NHS Trust

Stevenage, UK

L. Vandy Black MD

Division of Pediatric Hematology

Johns Hopkins University

Baltimore, MD, USA

Mats Börjesson MD, PhD

Associate Professor, Sahlgrenska University Hospital/Östra

Department of Medicine and Pain Center

Göteborg, Sweden

Michael Chester MBBS, MRCP, MD, FESC

Consultant Cardiologist & Director

National Refractory Angina Centre

Royal Liverpool and Broadgreen University Hospital

Liverpool, UK

Steven P. Cohen MD

Johns Hopkins Medical Institutions

Baltimore, MD

Walter Reed Army Medical Center

Washington, DC, USA

Beverly Collett MB.BS, FRCA, FFPMRCA

Consultant in Pain Medicine

Pain Management Service

University Hospitals of Leicester

Leicester, UK

Paul Creamer MD, FRCP

Consultant Rheumatologist

Southmead Hospital, Bristol, UK

Christina Daousi MRCP, MD

Senior Lecturer and Honorary Consultant Physician in

Diabetes & Endocrinology

University Hospital Aintree

Clinical Sciences Centre

Liverpool, UK

Sheena Derry MA

Senior Research Officer

Nuffield Department of Anaesthetics

University of Oxford

Oxford, UK

Anthony Dickenson PhD

Professor of Neuropharmacology

Department of Pharmacology

University College London

London, UK

Anthony Dragovich MD

Assistant Professor of Anesthesiology

Womack Army Medical Center

Fort Bragg, NC, USA

Robert H. Dworkin PhD

Professor of Anesthesiology, Neurology, Oncology, and

Psychiatry

University of Rochester School of Medicine and

Dentistry

Rochester, NY, USA

Christopher Eccleston PhD

Professor of Psychology & Director

Centre for Pain Research and

Coordinating Editor of Pain Palliative and Supportive Care

Cochrane Review Group

University of Bath, Bath, UK

Edzard Ernst MD, PhD, FMedSci, FSB, FRCP, FRCP (Edin.)

Laing Chair of Complementary Medicine

Peninsula Medical School

Universities of Exeter and Plymouth

Exeter, UK

Nanna B. Finnerup MD, PhD

Associate Professor, Danish Pain Research Center

and Department of Neurology

Aarhus University Hospital, Aarhus, Denmark

Victoria Harvey PhD

Department of Pharmacology

University College London

London, UK

Winfried Häuser MD

Head, Psychosomatic Medicine

Department Internal Medicine 1

Center of Pain Therapy

Klinikum Saarbrücken

Saarbrücken, Germany

Rita Janes MD

Consultant in Oncology

Department of Oncology

Helsinki University Hospital

Helsinki, Finland

Troels Staehelin Jensen MD, DMSc

Professor of Experimental and Clinical Pain Research

Danish Pain Research Center and Department of Neurology

Aarhus University Hospital

Aarhus, Denmark

Mark I. Johnson PhD, BSc

Professor of Pain and Analgesia

Faculty of Health, Leeds Metropolitan University

and Leeds Pallium Research Group

Leeds, UK

Francis J. Keefe PhD

Professor & Director, Pain Prevention and Treatment

Research Program

Department of Psychiatry and Behavioral Sciences

Duke University Medical Center

Durham, NC, USA

Henriette Klit MD

Danish Pain Research Center and Department of Neurology

Aarhus University Hospital

Aarhus, Denmark

Bart Koes PhD

Professor of General Practice

Erasmus MC-University Medical Center

Rotterdam, The Netherlands

Austin Leach FRCA, FFPMRCA

Consultant in Pain Medicine

National Refractory Angina Centre

Royal Liverpool and Broadgreen University Hospital

Liverpool, UK

Raphael Leo MA, MD

Associate Professor, Department of

Psychiatry

School of Medicine and Biomedical Sciences

State University of New York at Buffalo

Buffalo, NY, USA

Bengt Linderoth MD, PhD

Professor & Head; Functional

Neurosurgery

and Applied Neuroscience Research

Program

Karolinska Institutet

Stockholm, Sweden

Sarah Love-Jones

Frenchay Hospital

Bristol, UK

Clas Mannheimer MD

Professor & Head, Multidisciplinary

Pain Center

Department of Medicine

Sahlgrenska University Hospital/Östra

University of Göteborg

Göteborg, Sweden

Ellen McGough PhD, PT

Biobehavioral Nursing and Health Systems

University of Washington

Seattle, WA, USA

Henry McQuay DM, FRCA, FRCP(Edin)

Nuffield Professor of Clinical Anaesthetics

John Radcliffe Hospital

University of Oxford

Oxford, UK

Andrew Moore DSc

Research Director

Nuffield Department of Anaesthetics

University of Oxford, John Radcliffe Hospital

Oxford, UK

David B. Morris PhD

University Professor

University of Virginia

Charlottesville

VA, USA

Timothy J. Ness MD, PhD

Simon Gelman Endowed Professor

Department of Anesthesiology

University of Alabama at Birmingham

Birmingham, AL, USA

Lone Nikolajsen MD, PhD

Consultant, Department of Anaesthesiology

and Danish Pain Research Center

Aarhus University Hospital

Aarhus, Denmark

Turo J. Nurmikko MD, PhD

Professor of Pain Science

Neuroscience Research Unit

School of Clinical Sciences

University of Liverpool

Liverpool, UK

Alec B. O'Connor MD, MPH

Associate Professor of Medicine

University of Rochester School of

Medicine and Dentistry

Rochester, NY, USA

Frederick M. Perkins MD

Chief, Anesthesia

United States Department of Veterans Affairs

White River Junction, VT, USA

Frank Petzke MD

Uniklinik Köln, Department of Anesthesiology

and Postoperative Intensive Care Medicine

University Hospital of Cologne

Köln, Germany

James P. Robinson MD, PhD

Department of Rehabilitation Medicine

University of Washington

Seattle, WA, USA

Tiina Saarto MD, PhD

Consultant in Oncology and

Head, Department of Oncology

Helsinki University Hospital

Helsinki, Finland

Michael Schatman PhD, CPE

Research Director, Pain and Addiction Study Foundation

Bellevue, WA, USA

David L. Scott BSc, MD, FRCP

Professor of Clinical Rheumatology

Department of Rheumatology and Weston Education

Centre

Kings College London School of Medicine

London, UK

Tamara J. Somers PhD

Assistant Professor, Department of Psychiatry

and Behavioral Sciences

Duke University Medical Center

Durham, NC, USA

Claudia Sommer MD

Professor of Neurology

Universität Würzburg

Würzburg, Germany

William Stones MD

Chair, Department of Obstetrics and Gynaecology

Aga Khan University Hospital

Nairobi, Kenya

Kristina B. Svendsen MD, PhD

Danish Pain Research Center and Department of Neurology

Aarhus University Hospital

Aarhus, Denmark

Peer Tfelt-Hansen MD DMSc

Danish Headache Centre

Department of Neurology

University of Copenhagen Glostrup Hospital

Glostrup, Denmark

Kati Thieme PhD

Center for Neurosensory Disorders

Thurston Arthritis Research Center

University of North Carolina

Chapel Hill, NC, USA

Maurits van Tulder PhD

Professor, Department of Health Sciences

and EMGO Institute for Health and Care Research

Faculty of Earth and Life Sciences, VU University

Amsterdam, The Netherlands

Joseph Wallach MD, PhD

Rehabilitation Institute of Chicago

Chicago, IL, USA

Joanna M. Zakrzewska MD, FDSRCS, FFDRCS

Division of Diagnostic, Surgical and Medical Sciences

Eastman Dental Hospital

UCLH NHS Foundation Trust

London, UK

Preface

Evidence-based medicine is now firmly established as a basis for clinical decision making. It is also advocated by national and international institutions and policy makers. Systematic reviews are used for the writing of guidelines and consensus documents relating to clinical practice.

Evidence-based pain management had its start around 15 years ago when the doctoral thesis Meta-Analysis of Randomised Clinical Trials in Pain Relief by Alejandro Jadad-Bechara was approved at the University of Oxford. The first database that was used for Dr Jadad's thesis was compiled from articles that were hand searched and photocopied. Today's meta-analyses are facilitated considerably by advances in electronic database and search engine technology.

In 1998 Oxford University Press published An Evidence-Based Resource for Pain Relief by Henry McQuay and Andrew Moore. This was followed by Bandolier's Little Book of Pain and Making Sense of the Medical Evidence. These books and many original papers based on meta-analyses and systematic reviews have changed the way in which clinical research papers are assessed. Many early studies addressed methodological issues. One of the most obvious consequences of these seminal papers was the improvement in the design of clinical trials in pain relief in line with the developments in other fields of medicine. Randomization, blinding and the appropriate selection of control groups, both active and inactive, were the most important issues. More recently, the CONSORT and QUOROM statements have provided guidance on how these factors should be addressed in clinical trials.

Trial sensitivity and the placebo response are particularly important questions in studies of pain relieving interventions. Trial sensitivity means that there should be enough pain to be relieved. Expectation and conditioning are important in both the placebo effect and in pain relief. Another challenge has been the number of patients needing to be included in a treatment arm in order to provide the study with enough power to produce reliable results. In addition to trial quality, issues of validity have become increasingly important. Validity involves understanding both the clinical condition and the interventions that are studied. This means systematic reviews and meta-analyses need collaboration between contributors who have competence in search and meta-analytical methods and clinicians who are experienced in the clinical field being studied.

Traditional randomized and controlled trials concentrate on the mean effect and what happens to the majority, i.e. the average patient. With increasing understanding of the genetic and environmental effects on individual differences, the average response needs to be considered critically. Evidence-based medicine will provide the basis for treatment choices, but the patient's individual characteristics also need to be considered. Clinical trial methodology must be developed in order to take patient variability into consideration. Performing meta-analyses based on individual patient data could provide new possibilities for understanding the pathophysiology of chronic pain.

During the production of this book a prolific US researcher in the field of pain was shown to have fabricated data in some 21 studies published in peer-reviewed journals. The fraud is believed to be one of the largest known cases of academic misconduct and was widely reported in the American media. Academic dishonesty on this scale produces enormous collateral damage. The papers were withdrawn from the journals (and all relevant references have been removed from this book). All authors and publishers in the field have had to re-examine the fraudulent material and mitigate the influence of these studies. Systematic reviews and meta-analyses containing the data have needed to be recalculated. The episode has brought to the fore discussions regarding academic integrity and probity, and highlights the vigilance with which journal editors, publishers and readers of scientific material must exclude sources of bias and identify data that may mislead, either deliberately or unintentionally.

We have been fortunate to attract international leaders in the field of pain management, as well as experts in systematic analysis, to contribute to this book on evidence-based chronic pain management. The involvement of such individuals is a testament to a shared recognition that a book that consolidates evidence supporting and refuting the many available approaches to managing chronic pain will be a valuable addition to the literature. We hope this book will guide practitioners in their treatment choices by helping them to identify which treatments offer the greatest hope of improving pain for patients, and those therapies which evidence suggests have low likelihood of success, poor cost-effectiveness, or both.

Cathy Stannard

Eija Kalso

Jane Ballantyne

List of abbreviations

ACC anterior cingulate cortex/American College of Cardiology ACE angiotensin-converting enzyme ACEI angiotensin-converting enzyme inhibitor ACR American College of Rheumatology ADR adverse drug reaction AE adverse effects AIMS Arthritis Impact Measurement Scale AL-TENS acupuncture-like TENS ANS autonomic nervous system APF antiproliferative factor ATP adenosine triphosphate BOCF baseline observations carried forward BPS/IC bladder pain syndrome/interstitial cystitis BT behavior therapy CABG coronary artery bypass graft CAD coronary artery disease CAM complementary and alternative medicine CBFV coronary blood flow velocity CBM cannabis-based medicine CBT cognitive behavioral therapy CD Crohn's disease CDLBP chronic diskogenic low back pain CER control event rate CGRP calcitonin gene-related peptide CI confidence interval CNCP chronic noncancer pain CNS central nervous system COMT catecholamine-O-methyltransferase CONSORT Consolidated Standards of Reporting Trials COX cyclo-oxygenase COXIBs COX-2 inhibitors CP central pain CPDN chronic painful diabetic neuropathy CPSP central post-stroke pain CR controlled release CRP C-reactive protein CRPS complex regional pain syndrome CT computed tomography CVA cerebrovascular accident CVD cardiovascular disease CWP chronic widespread pain D double blinded DAS Disease Activity Scale DBS deep brain stimulation DH dorsal horn DHE dihydroergotamine DMARDs disease-modifying antirheumatic drugs DMSO dimethylsulfoxide DP directional preference DRG dorsal root ganglia EBM evidence-based medicine EDSS Expanded Disability Status Scale EECP external enhanced counterpulsation EER experimental event rate EFNS European Federation of Neurological Societies EMDA electromotive drug administration EMG electromyogram ER extended release ERCP endoscopic retrograde cholangiopancreatography ES effect size ESCS electrical spinal cord stimulation ESI epidural steroid injections ESR erythrocyte sedimentation rate FBSS failed back surgery syndrome FBT fentanyl buccal tablet FDA Food and Drug Administration FMS fibromyalgia syndrome FSS functional somatic syndrome GABA γ-aminobutyric acid GI gastrointestinal GLA γ-linolenic acid GM-CSF granulocyte macrophage-colony stimulating factor GnRH gonadotrophin-releasing hormone GTN glyceryl trinitrate HLA human leukocyte antigen HNP herniated nucleus pulposus HPA hypothalamic-pituitary-adrenal axis HRQOL health-related quality of life HZ herpes zoster IAP intermittent acute porphyria IASP International Association for the Study of Pain IBD inflammatory bowel disease IBS irritable bowel syndrome IC interstitial cystitis IDDS intrathecal drug delivery system IDET intradiskal electrothermal therapy IL interleukin IMMPACT Initiative on Methods, Measurement, and Pain Assessment in Clinical Trials IN intranasal ISDN isosorbide dinitrate IT intrathecal ITT intention to treat IV intravenous IVRA intravenous regional anesthesia IVRS intravenous regional sympatholysis LA locus coeruleus/left anterior LBP low back pain LOCF last observation carried forward LP long-term potentiation LUNA laparoscopic uterine nerve ablation LV left ventricle MAOI monoamine oxidase inhibitor MCP metacarpophalangeal MEG magnetoencephalographic MHC major histocompatibility complex MI myocardial infarction MRA magnetic resonance angiography MRI magnetic resonance imaging MS multiple sclerosis MTP metatarsophalangeal NA noradrenaline NAC N-acetylcysteine NGF nerve growth factor NMDA N-methyl-D-aspartate NNH number needed to harm NNT number needed to treat NO nitric oxide 
NRAC National Refractory Angina Centre NSAIDs nonsteroidal anti-inflammatory drugs NSE negative sexual events OA osteo-arthritis ODI Oswestry Disability Index OMT optimal medical therapy OR Odds ratio OTFC oral transmucosal fentanyl citrate PAF primary afferent fibers PAG periaqueductal gray PBS painful bladder syndrome PCI percutaneous coronary intervention PDN painful diabetic neuropathy PEMF pulsed electromagnetic field PENS percutaneous electrical nerve stimulation PET positron emission tomography PHN postherpetic neuralgia PIP proximal interphalangeal PL placebo PMP pain management program PMR percutaneous myocardial laser revascularization PPS pentosanpolysulfate PSN presacral neurectomy PT physical therapist/therapy PTS painful tonic seizures QALY quality-adjusted life-year QST quantitative/qualitative sensory testing QUOROM quality of assessement of systematic reviews RA rheumatoid arthritis RCT randomized clinical/controlled trial RDC Research Diagnostic Criteria RF rheumatoid factor/radiofrequency RR relative risk RVM rostroventral medulla SD standard deviation SCI spinal cord injury SCS spinal cord stimulation SI sacroiliac SIP sympathetically independent pain SLR straight leg raising SMA supplementary motor area SMD standard mean difference SMP sympathetically maintained pain SNRI serotonin and noradrenaline reuptake inhibitor SP substance P SPID sum of pain intensity difference SP-SAP substance P-saporin SRT self-regulatory treatments SSRI Selective serotonin reuptake inhibitors SUNA short-lasting neuralgiform pain with autonomic symptoms SUNCT short-lasting unilateral neuralgiform headaches with conjunctival tearing TCA tricyclic antidepressant TENS transcutaneous electrical nerve stimulation TFESI transforaminal epidural steroid injection TG therapeutic gain THC δ-9-tetrahydrocannabinol TMD temporomandibular disorders TMJ temporomandibular joint TMR transmyocardial myocardial laser revascularization TMS transcranial magnetic stimulation TN trigeminal neuralgia TNF-α tumor necrosis factor α TOTPAR total pain relief TRP transient receptor potential TSE transcutaneous spinal electroanalgesia TTF time to treatment failure TTX tetrodotoxin UC ulcerative colitis VAS visual analog scale VATS video-assisted thoracoscopic sugery VVS vulval vestibulitis syndrome VZV varicella zoster virus WHO World Health Organization WMD weighted mean difference

Part 1

Understanding evidence and pain

CHAPTER 1

Why evidence matters

Andrew Moore and Sheena Derry

Pain Research, Nuffield Department of Anaesthetics, John Radcliffe Hospital, Oxford, UK

Introduction

There are two ways of answering a question about what evidence-based medicine (EBM) is good for or even what it is. One is the dry, formal approach, essentially statistical, essentially justifying a prescriptive approach to medicine. We have chosen, instead, a freer approach, emphasizing the utility of knowing when “stuff” is likely to be wrong and being able to spot those places where, as the old maps would tell us, “here be monsters.” This is the Bandolier approach, the product of the hard knocks of a couple of decades or more of trying to understand evidence.

What both of us (and Henry McQuay and other collaborators over the years), on our different journeys, have brought to the examination of evidence is a healthy dose of skepticism, perhaps epitomized in the birth of Bandolier. It came during a lecture on evidence-based medicine by a public health doctor, who proclaimed that only seven things were known to work in medicine. By known, he meant that they were evidenced by systematic review and meta-analysis. A reasonable point, but there were unreasonable people in the audience. One mentioned thiopentone for induction of anesthesia, explaining that with a syringe and needle anyone, without exception, could be put to sleep given enough of this useful barbiturate; today we would say that it had an NNT of 1. So now we had seven things known to work in medicine, plus thiopentone. We needed somewhere to put the bullet points of evidence; you put bullets in a bandolier (a shoulder belt with loops for ammunition).

The point of this tale is not to traduce well-meaning public health docs, or meta-analyses, but rather to make the point that evidence comes in different ways and that different types of evidence have different weight in different circumstances. There is no single answer to what is needed, and we have often to think outside what is a very large box. Too often, EBM seems to be corralled into a very small box, with the lid nailed tightly shut and no outside thinking allowed.

If there is a single unifying theory behind EBM, it is that, whatever sort of evidence you are looking at, you need to apply the criteria of quality, validity, and size. These issues have been explored in depth for clinical trials, observational studies, adverse events, diagnosis, and health economics [1], and will not be rehearsed in detail in what follows. Rather, we will try to explore some issues that we think are commonly overlooked in discussions about EBM.

We talk to many people about EBM and those not actively engaged in research in the area are frequently frustrated by what they see as an impossibly complicated discipline. Someone once quoted Ed Murrow at us, who, talking about the Vietnam war, said that “Anyone who isn’t confused doesn’t really understand the situation” (Walter Bryan, The Improbable Irish, 1969). We understand the sense of confusion that can arise, but there are good reasons for continuing to grapple with EBM. The first of these is all about the propensity of research and other papers you read to be wrong. You need to know about that, if you know nothing else.

Most published research false?

It has been said that only 1% of articles in scientific journals are scientifically sound [2]. Whatever the exact percentage, a paper from Greece [3], replete with Greek mathematical symbols and philosophy, makes a number of important points which are useful to think of as a series of little laws (some of which we explore more fully later) to use when considering evidence.

• The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
• The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
• The greater the number and the fewer the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
• The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
• The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true. (These might include research grants or the promise of future research grants.)
• The hotter a scientific field (the more scientific teams involved), the less likely the research findings are to be true.

Ioannidis then performs a pile of calculations and simulations and demonstrates the likelihood of us getting at the truth from different typical study types (Table 1.1). This ranges from odds of 2:1 on (67% likely to be true) from a systematic review of good-quality randomized trials, through 1:3 against (25% likely to be true) from a systematic review of small inconclusive randomized trials, to even lower levels for other study architectures.
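For anyone not used to bookmakers' odds, the translation into probabilities is simple: odds of a to b in favor of being true correspond to a probability of a/(a + b). A minimal Python sketch (our own illustration, using the ratios from Table 1.1, not code from the paper) makes the conversion explicit:

def odds_to_probability(true_part: float, not_true_part: float) -> float:
    """Odds of true:not-true converted to the probability of being true."""
    return true_part / (true_part + not_true_part)

examples = {
    "Confirmatory meta-analysis of good-quality RCTs (2:1)": (2, 1),
    "Adequately powered RCT with 1:1 prestudy odds (1:1)": (1, 1),
    "Meta-analysis of small, inconclusive studies (1:3)": (1, 3),
    "Discovery-orientated research with massive testing (1:1000)": (1, 1000),
}

for label, (true_part, not_true_part) in examples.items():
    print(f"{label}: {odds_to_probability(true_part, not_true_part):.1%} likely to be true")
# 2:1 gives 66.7%, 1:1 gives 50.0%, 1:3 gives 25.0%, 1:1000 gives about 0.1%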

There are many traps and pitfalls to negotiate when assessing evidence, and it is all too easy to be misled by an apparently perfect study that later turns out to be wrong or by a meta-analysis with impeccable credentials that seems to be trying to pull the wool over our eyes. Often, early outstanding results are followed by others that are less impressive. It is almost as if there is a law that states that first results are always spectacular and subsequent ones are mediocre: the law of initial results. It now seems that there may be some truth in this.

Three major general medical journals (New England Journal of Medicine, JAMA, and Lancet) were searched for studies with more than 1000 citations published between 1990 and 2003 [4]. This is an extraordinarily high number of citations when you think that most papers are cited once if at all, and that a citation of more than a few hundred times is almost as rare as hens’ teeth.

Of the 115 articles published, 49 were eligible for the study because they were reports of original clinical research (like tamoxifen for breast cancer prevention or stent versus balloon angioplasty). Studies had sample sizes as low as nine and as high as 87,000. There were two case series, four cohort studies, and 43 randomized trials. The randomized trials were very varied in size, though, from 146 to 29,133 subjects (median 1817). Fourteen of the 43 randomized trials (33%) had fewer than 1000 patients and 25 (58%) had fewer than 2500 patients.

Of the 49 studies, seven were contradicted by later research. These seven contradicted studies included one case series with nine patients, three cohort studies with 40,000–80,000 patients, and three randomized trials, with 200, 875 and 2002 patients respectively. So only three of 43 randomized trials were contradicted (7%), compared with half the case series and three-quarters of the cohort studies.

Table 1.1 Likelihood of truth of research findings from various typical study architectures

Example: ratio of true to not true
Confirmatory meta-analysis of good-quality RCTs: 2:1
Adequately powered RCT with little bias and 1:1 prestudy odds: 1:1
Meta-analysis of small, inconclusive studies: 1:3
Underpowered and poorly performed phase I–II RCT: 1:5
Underpowered but well-performed phase I–II RCT: 1:5
Adequately powered exploratory epidemiologic study: 1:10
Underpowered exploratory epidemiologic study: 1:10
Discovery-orientated exploratory research with massive testing: 1:1000

A further seven studies reported effects stronger than those found in subsequent research. One of these was a cohort study with 800 patients. The other six were randomized trials, four with fewer than 1000 patients and two with about 1500 patients.

Most of the observational studies had been contradicted, or subsequent research had shown substantially smaller effects, but most randomized studies had results that had not been challenged. Of the nine randomized trials that were challenged, six had fewer than 1000 patients, and all had fewer than 2003 patients. Of 23 randomized trials with 2002 patients or fewer, nine were contradicted or challenged. None of the 20 randomized studies with more than 2003 patients were challenged.

There is much more in these fascinating papers, but it is more detailed and more complex without becoming necessarily much easier to understand. There is nothing that contradicts what we already know, namely that if we accept evidence of poor quality, without validity or where there are few events or numbers of patients, we are likely, often highly likely, to be misled.

If we concentrate on evidence of high quality, which is valid, and with large numbers, that will hardly ever happen. As Ioannidis also comments, if instead of chasing some ephemeral statistical significance we concentrate our efforts where there is good prior evidence, our chances of getting the true result are better. This may be why clinical trials on pharmaceuticals are so often significant statistically, and in the direction of supporting a drug. Yet even in that very special circumstance, where so much treasure is expended, years of work with positive results can come to naught when the big trials are done and do not produce the expected answer.

Limitations

Whatever evidence we look at, there are likely to be limitations to it. After all, there are few circumstances in which one study, of whatever architecture, is likely to be able to answer all the questions we need to know about an intervention. For example, trials capturing information about the benefits of treatment will not be able to speak to the question of rare, but serious, adverse events.

There are many more potential limitations. Studies may not be properly conducted or reported according to recognized standards, like CONSORT for randomized trials (www.consort-statement.org), QUOROM for systematic reviews, and other standards for other studies. They may not measure outcomes that are useful, or be conducted on patients like ours, or present results in ways that we can easily comprehend; trials may have few events, when not much happens, but make much of not much, as it were. Observational studies, diagnostic studies, and health economic studies all have their own particular set of limitations, as well as the more pervasive sins of significance chasing, or finding evidence to support only preconceptions or idées fixes.

Perfection in terms of the overall quality and extent of evidence is never going to happen in a single study, if only because the ultimate question – whether this intervention will work in this patient and produce no adverse effects – cannot be answered. The average results we obtain from trials are difficult to extrapolate to individuals, and especially the patients in front of us (of which more later).

Acknowledging limitations

Increasingly we have come to expect authors to make some comment about the limitations of their studies, even if it is only a nod in the direction of acknowledging that there are some. This is not easy, because there is an element of subjectivity about this. Authors may also believe, with some reason, that spending too much time rubbishing their own results will result in rejection by journals, and rejection is not appreciated by pointy-headed academics who live or die by publications.

Even so, the dearth of space given over to discussing the limitations of studies is worrying. A recent survey [5] that examined 400 papers from 2005 in the six most cited research journals and two open-access journals showed that only 17% used at least one word denoting limitations in the context of the scientific work presented. Among the 25 most cited journals, only one (JAMA) asks for a comments section on study limitations, and most were silent.

Statistical testing

It is an unspoken belief that to have a paper published, it helps to report some measure with a statistically significant difference. This leads to the phenomenon of significance chasing, in which data are analyzed to death and the aim is to find any test with any data that show significance at the paltry level of 5%. A P value of 0.05, or significance at the 5% level, tells us that there is a 1 in 20 chance that the results occurred by chance. As an aside, you might want to ask yourself how happy you are with 1 in 20; after all, if you throw two dice, double six seems to occur frequently and that is a chance of 1 in 36. If you want to examine evidence with a cold and fishy eye, try recognizing significance only when it is at the 1 in 100 level, or 1%, or a P value of 0.01; it often changes your view of things.
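One way to see why 1 in 20 deserves that cold and fishy eye is to ask how quickly it is crossed by chance once more than one test is run, a point taken up in the next section. The arithmetic is a single line; the sketch below is our own illustration rather than anything from the chapter's references:

def family_wise_error(alpha: float, k: int) -> float:
    """Chance of at least one false-positive result among k independent tests
    of true null hypotheses, each using significance level alpha."""
    return 1 - (1 - alpha) ** k

for alpha in (0.05, 0.01):
    for k in (1, 5, 20):
        print(f"alpha {alpha}, {k} tests: "
              f"chance of at least one spurious 'significant' result = "
              f"{family_wise_error(alpha, k):.2f}")
# At 0.05 with 20 tests the chance is about 0.64; tightening to 0.01 brings it
# down to about 0.18.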

Multiple statistical testing

The perils of multiple statistical testing might have been drummed into us during our education but as researchers, we often forget them in the search for “results,” especially when such testing confirms our pre-existing biases. A large and thorough examination of multiple statistical tests underscores the problems this can pose [6].

This was a population-based retrospective cohort study which used linked administrative databases covering 10.7 million residents of Ontario aged 18–100 years who were alive and had a birthday in the year 2000. Before any analyses, the database was split in two to provide both derivation and validation cohorts, each of about 5.3 million persons, so that associations found in one cohort could be confirmed in the other cohort.

The cohort comprised all admissions to Ontario hospitals classified as urgent (but not elective or planned) using DSM criteria, and ranked by frequency. This was used to determine which persons were admitted within the 365 days following their birthday in 2000, and the proportion admitted under each astrological sign. The astrological sign with the highest hospital admission rate was then tested statistically against the rate for all 11 other signs combined, using a significance level of 0.05. This was done until two statistically significant diagnoses were identified for each astrological sign.

In all, 223 diagnoses (accounting for 92% of all urgent admissions) were examined to find two statistically significant results for each astrological sign. Of these, 72 (32%) were statistically significant for at least one sign compared with all the others combined. The extremes were Scorpio, with two significant results, and Taurus, with 10, with significance levels of 0.0003 to 0.048.

The two most frequent diagnoses for each sign were used to select 24 significant associations in the derivation cohort. These included, for instance, intestinal obstructions and anemia for people with the astrological sign of Cancer, and head and neck symptoms and fracture of the humerus for Sagittarius. Levels of statistical significance ranged from 0.0006 to 0.048, and relative risk from 1.1 to 1.8 (Fig. 1.1), with most being modest.

Protection against spurious statistical significance from multiple comparisons was tested in several ways.

When the 24 associations were tested in the validation cohort, only two remained significant: gastrointestinal haemorrhage and Leo (relative risk 1.2), and fractured humerus for Sagittarius (relative risk 1.4).

Using a Bonferroni correction for 24 multiple comparisons would have set the acceptable level of significance at 0.002 rather than 0.05. In this case, nine of 24 comparisons would have been significant in the derivation cohort, but none would have been significant in both the derivation and validation cohorts. Correcting for all 14,718 comparisons used in the derivation cohort would have meant using a significance level of 0.000003, and no comparison would have been significant in either the derivation or the validation cohort.
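The same phenomenon is easy to reproduce with purely random data. The sketch below is our own illustration with invented numbers (it does not use the Ontario data): it runs a few hundred comparisons in which nothing is really going on, counts how many cross P < 0.05, and then applies a Bonferroni-corrected threshold:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided z-test for equality of two proportions."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = (x1 / n1 - x2 / n2) / se
    return 2 * norm.sf(abs(z))

n_comparisons = 240      # many group-by-diagnosis comparisons, all with no true effect
n_per_group = 5000
true_rate = 0.02         # identical event rate in both groups

p_values = np.array([
    two_proportion_p(rng.binomial(n_per_group, true_rate), n_per_group,
                     rng.binomial(n_per_group, true_rate), n_per_group)
    for _ in range(n_comparisons)
])

print("'significant' at 0.05:", int(np.sum(p_values < 0.05)))           # roughly 5% of 240
print("'significant' at Bonferroni 0.05/240:",
      int(np.sum(p_values < 0.05 / n_comparisons)))                     # usually none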

Figure 1.1 Relative risk of associations between astrological sign and illness for the 24 chosen associations, using a statistical significance of 0.05, uncorrected for multiple comparisons.

This study is a sobering reminder that statistical significance can mislead when we don’t use statistics properly: don’t blame statistics or the statisticians, blame our use of them. There is no biologic plausibility for a relationship between astrological sign and illness, yet many could be found in this huge data set when using standard levels of statistical significance without thinking about the problem of multiple comparisons. Even using a derivation and validation set did not offer complete protection against spurious results in enormous data sets.

Multiple subgroup analyses are common in published articles in our journals, usually without any adjustment for multiple testing. The authors examined 131 randomized trials published in top journals in 6 months in 2004. These had an average of five subgroup analyses, and 27 significance tests for efficacy and safety. The danger is that we may react to results that may have spurious statistical significance, especially when the size of the effect is not large.

Size is everything

The more important question, not asked anything like often enough, is whether any statistical testing is appropriate. Put another way, when can we be sure that we have enough information to be sure of the result, using the mathematical perspective of “sure,” meaning that, to a certain degree of probability, we are not being mucked about by the random play of chance? This is not a trivial question, given that many results, especially concerning rare but serious harm, are driven by very few events.

In a clinical trial of drug A against placebo, the size of the trial is set according to how much better drug A is expected to be. For instance, if it is expected to be hugely better, the trial will be small but if the improvement is not expected to be large, the trial will have to be huge. Big effect, small trial; small effect, big trial; statisticians perform power calculations to determine the size of the trial beforehand. But remember that the only thing being tested here is whether the prior estimate of the expected treatment effect is actually met. If it is, great, but when you calculate the effect size from that trial, using number needed to treat (NNT), say, you probably have insufficient information to do so because the trial was never designed to measure the size of the effect. If it were, then many more patients would have been needed.
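For readers who want to see how the expected effect drives the numbers, the standard normal-approximation formula for comparing two proportions can be sketched as follows. This is a simplified textbook calculation with invented response rates, not the power calculation from any trial discussed here:

import math
from scipy.stats import norm

def patients_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate patients per group needed to detect p1 versus p2
    with a two-sided test at the given alpha and power."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# Big expected effect, small trial: 60% response against 20% on control.
print(patients_per_group(0.60, 0.20))   # about 20 per group
# Small expected effect, big trial: 55% against 50%.
print(patients_per_group(0.55, 0.50))   # about 1560 per group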

In practice, what is important is the size of the effect – how many patients benefit. With individual trials we can be misled. Figure 1.2 shows an example of six large trials (213–575 patients, 2000 in all) of a single oral dose of eletriptan 80 mg for acute migraine, using the outcome of headache relief (mild or no pain) at 2 hours. NNTs measured in the individual trials range from 1.6 to 3.1, an almost twofold difference in the estimate of the size of the effect (overall, the NNT was 2.6). Even with these excellent trials, impeccably conducted, variations in response with eletriptan (between 56% and 69% in individual trials) and placebo (between 21% and 40%) mean that there is uncertainty over the size of the effect. For many treatments and dose/drug/condition combinations, we have much less information, fewer events, and much more uncertainty over the size of the effect.
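The NNT arithmetic itself is just the reciprocal of the difference between the active and control response rates. A minimal sketch with invented rates (not the eletriptan trial data) shows how modest shifts in either rate move the NNT:

def nnt(active_rate: float, control_rate: float) -> float:
    """Number needed to treat = 1 / absolute difference in response rates."""
    return 1.0 / (active_rate - control_rate)

print(round(nnt(0.65, 0.25), 1))   # 40-point difference gives an NNT of 2.5
print(round(nnt(0.60, 0.30), 1))   # 30-point difference gives an NNT of 3.3
print(round(nnt(0.56, 0.36), 1))   # 20-point difference gives an NNT of 5.0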

Consider Figure 1.3, which looks at the variation in the response to placebo in over 50 meta-analyses in acute pain. In all the 12,000 or more patients given placebo, the response rate was 18% (meaning not that placebo caused 18% of people to have at least 50% pain relief over 6 hours, but that 18% of people in trials like these will have at least 50% pain relief over 6 hours if you do nothing at all). With small numbers, the measured effect with placebo varies from 0% to almost 50%. Only when the numbers are large is there greater consistency, and there are many other examples like this of size overcoming variability caused by the random play of chance.

Figure 1.2 Headache response at 2 hours for oral eletriptan 80 mg. Size of symbol is proportional to number of patients in a trial.

Figure 1.3 Percentage of patients with at least 50% pain relief with placebo in 56 meta-analyses in acute pain. Size of symbol is proportional to number of patients given placebo. Vertical line is the overall average.

How many events?

A few older papers keep being forgotten. When looking at the strengths and weaknesses of smaller meta-analyses versus larger randomized trials, a group from McMaster suggested that with fewer than 200 outcome events, research (meta-analyses in this case) may only be useful for summarizing information and generating hypotheses for future research [7]. A different approach using simulations of clinical trials and meta-analyses arrived at pretty much the same conclusion, that with fewer than 200 events, the magnitude and direction of an effect become increasingly uncertain [8].

Just how many events are needed to be reasonably sure of a result when event rates are low (as is the case for rare but serious adverse events) was explored some while ago [9]. That analysis looked at a number of examples, varying event rates in experimental and control groups, using probability limits of 5% and 1%, and with lower and higher power to detect any difference. Higher power, greater stringency in probability values, lower event rates, and smaller differences in event rates between groups all suggest the need for more events and larger numbers of patients in trials. Once event rates fall to about 1% or so, and differences between experimental and control to less than 1%, the number of events needed approaches 100 and the number of patients rises to tens of thousands.

All of which points to the inescapable conclusion that with few events, our ability to make sense of things is highly impaired. As a rule of thumb, we can probably dismiss studies with fewer than 20 events, be very cautious with 20–50 events, and reasonably confident with more than 200 events – if everything else is OK.
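A small simulation makes the rule of thumb concrete: when only a handful of events is expected, the observed relative risk bounces around wildly even though the true effect never changes. The rates below are arbitrary and purely for illustration:

import numpy as np

rng = np.random.default_rng(1)

def simulated_relative_risks(n_per_group, p_control, true_rr, n_trials=2000):
    """Observed relative risks from repeating the same two-arm trial many times."""
    events_control = rng.binomial(n_per_group, p_control, n_trials)
    events_active = rng.binomial(n_per_group, p_control * true_rr, n_trials)
    # Observed RR = active event rate / control event rate; guard against zero control events.
    return events_active / np.maximum(events_control, 1)

for n in (50, 500, 5000):   # roughly 1, 10 and 100 expected control-group events
    rr = simulated_relative_risks(n, p_control=0.02, true_rr=1.5)
    low, high = np.percentile(rr, [2.5, 97.5])
    print(f"{n} per group: 95% of observed relative risks lie between {low:.2f} and {high:.2f}")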

Subgroup analyses

Almost any paper you read, be it analysis of a clinical trial, an observational study or meta-analysis of either, will involve some form of subgroup analysis, such as severity of condition, age or sex. In addition to the problems of multiple testing, subgroup analyses also tend to involve small numbers – because the more you slice and dice the data, the smaller the number of actual events – and, if they are clinical trials, remove the benefits of randomization. They almost always introduce the danger of some unknown confounding.

One of the best examples of the dangers of subgroup analysis, due to unknown confounding, comes from a review article examining the 30-day outcome of death or myocardial infarction from a meta-analysis of platelet glycoprotein inhibitors [10]. Analysis indicated different results for women and men (Fig. 1.4), with benefits in men but not women. Statistically this was highly significant (P<0.0001).

In fact, it was found that men had higher levels of troponins (a marker of myocardial damage) than women and when this was taken into account, the difference between men and women was understandable, with more effect with greater myocardial damage; sex wasn’t the source of the difference.

Figure 1.4 Subgroup analysis in women and men of death or MI with platelet glycoprotein inhibitors (95% confidence interval).

Trivial differences

It is worth remembering what relative risks tell us in terms of raw data (Table 1.2). Suppose we have a population in which 100 events occur with our control intervention, whatever that is. If we have 150 events with an experimental intervention, the relative risk is 1.5. That may be statistically significant, but most of the events are ones that would have occurred anyway. If there were 250 events, the relative risk would be 2.5, and now most events would be occurring because of the experimental intervention.
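The arithmetic behind that example, as a minimal sketch (the 100, 150 and 250 event counts are those used above):

```python
# Fraction of events in the experimental group that are in excess of the
# background (control) rate; with equal denominators this equals (RR - 1) / RR.
def excess_fraction(control_events, experimental_events):
    return (experimental_events - control_events) / experimental_events

print(round(excess_fraction(100, 150), 2))  # RR 1.5 -> 0.33: two-thirds of events would have happened anyway
print(round(excess_fraction(100, 250), 2))  # RR 2.5 -> 0.60: most events are now due to the intervention
```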

Large relative risks may be important even with more limited data. Small relative risks, probably below 2.0 and certainly below about 1.5, should be treated with caution, especially where the number of events is small, and particularly outside the context of a randomized trial.

Table 1.2 Rules of causation

Consistency and unbiasedness of finding: Confirmation of the association by different investigators, in different populations, using different methods.

Strength of association: Two aspects: the frequency with which the factor is found in the disease, and the frequency with which it occurs in the absence of the disease. The larger the relative risk, the more the hypothesis is strengthened.

Temporal sequence: Obviously, exposure to the factor must occur before onset of the disease. In addition, if it is possible to show a temporal relationship, as between exposure to the factor in the population and frequency of the disease, the case is strengthened.

Biologic gradient (dose–response relationship): Finding a quantitative relationship between the factor and the frequency of the disease. The intensity or duration of exposure may be measured.

Specificity: If the determinant being studied can be isolated from others and shown to produce changes in the incidence of the disease, e.g. if thyroid cancer can be shown to have a higher incidence specifically associated with fluoride, this is convincing evidence of causation.

Coherence with biologic background and previous knowledge: The evidence must fit the facts that are thought to be related, e.g. the rising incidence of dental fluorosis and the rising consumption of fluoride are coherent.

Biologic plausibility: The statistically significant association fits well with previously existing knowledge.

Reasoning by analogy: Common sense, especially when you have other similar examples for types of intervention and outcome.

Experimental evidence: This aspect focuses on what happens when the suspected offending agent is removed. Is there improvement? The evidence of remission – or even resolution of significant medical symptoms – following explanation obviously would strengthen the case. It is unethical to do an experiment that exposes people to the risk of illness, but it is permissible and indeed desirable to conduct an experiment, i.e. a randomized controlled trial, on control measures. If fluoride is suspected of causing thyroid dysfunction, for example, the experiment of eliminating or reducing occupational exposure to the toxin and conducting detailed endocrine tests on the workers could help to confirm or refute the suspicion.

The importance of a relative risk of 2.0 has been accepted in US courts [11]: “A relative risk of 2.0 would permit an inference that an individual plaintiff’s disease was more likely than not caused by the implicated agent. A substantial number of courts in a variety of toxic substance cases have accepted this reasoning.” The logic is straightforward: when the relative risk is 2.0, half of the events among those exposed are in excess of the background rate, since (RR − 1)/RR = 0.5, so causation by the exposure in any individual becomes more likely than not.

Confounding by indication

Bias arises in observational studies when patients with the worst prognosis are allocated preferentially to a particular treatment. These patients are likely to be systematically different from those not treated, or treated with something else (paracetamol rather than a nonsteroidal anti-inflammatory drug (NSAID) in people with asthma, for instance).

Confounding, by factors known or unknown, is potentially a big problem, because we do not know what we do not know, and the unknown can have big effects, as with the troponin example above. When relative risks are small, say below about 1.3, the potential bias created by unknown confounding, or by confounding by indication that has been inadequately adjusted for, becomes so great that it makes any conclusion unreliable at best. This is especially important when interpreting observational studies that appear to link a particular intervention with a particular outcome.

Adverse events

Evidence around adverse events is important, complicated, yet often poor. It is impossible to do justice to adverse event evidence in a few paragraphs, so perhaps it is worth sticking to the highlights.

Adverse events are important because the “value” of a particular therapeutic intervention depends on both potential benefit and potential harm in the individual. To assess this trade-off, we need evidence for both, and while evidence about benefit is generally well documented, at least in clinical trials of newer interventions, evidence about harm has been neglected.

Long-term drug therapy is increasingly being used for primary prevention. Asymptomatic patients may be asked to tolerate adverse effects when the likelihood of therapeutic benefit is small. Adverse events are a major influence on compliance and the most common reason for discontinuation in clinical practice. A medicine not taken is one that cannot work. There is an increasing tendency for more openness and accountability in clinical decision making, with patients asking for more information and taking a more active role in their care.

Adverse events occur in the absence of treatment, something to remember when looking at data. Symptoms commonly listed as adverse events in clinical trials happen to all of us at some time. Fortunately most of them are not serious and, even if severe, are reversible. Most are not related to any therapeutic intervention. Groups of medical and nonmedical people in the USA in the 1960s [12], and medical students in Germany in the 1990s [13], who were free of disease and not in any kind of trial or taking any medication, were asked about symptoms. Most participants were in their 20s. They were given a list of symptoms and asked to record whether or not they had experienced any in the previous 3 days. Overall, 83% experienced at least one of the symptoms and only 17% reported none. There were no major differences between medical and nonmedical participants, or between studies carried out 30 years apart. The most common symptom, reported by at least 40%, was fatigue. Having an idea of the background rate of an adverse event in a study population is therefore important: it affects judgments about tolerability, and also how easy it is to establish a causal association with the intervention.

Another example of common adverse events would be constipation, something we worry about a lot when prescribing opioids. Constipation occurs in about 15% of people with chronic pain using weak opioids [14].

The overall average percentage of people with constipation in a systematic review of constipation prevalence in the US was about 15%, or 1 in 7 adults [15]. The range was 1.9–27%, depending to some extent on how constipation was ascertained. Most reports were in the range of 12–19%, with some self-reported prevalences higher and two reports based on face-to-face questioning below 4%. There was a distinctly higher prevalence in women than in men in almost every study, irrespective of the method of ascertainment; prevalence in women was on average about twice that in men. There was also a consistent finding of higher constipation prevalence in non-Caucasian people, by a factor of about 1.4 to 1, though nonwhite racial groups were not subdivided. Other trends were towards lower prevalence in people with the highest income and the highest educational attainment or years of education, though these may well be measuring different aspects of the same phenomenon. Older age, especially over 70 years, was also associated with higher constipation rates.

With any examination of adverse events, it is worth bearing in mind that what we want to establish is causation. The most important aide-mémoire is the Bradford-Hill rules, summarized in Table 1.2. They ask about strength of association, timing, dose–response, and other linking evidence. We need more than association to proceed to causation.

Safety

Unfounded claims about safety are made all too often. To some extent it depends what one means by safety, but members of the public say that they want to know about any adverse event that occurs more frequently than 1 in 100,000 [16]. To be even remotely confident about an adverse event occurring at a rate 10 times more frequent than that (1 in 10,000), we would need information from about 2 million people.
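The arithmetic, as a minimal sketch: treating the 200-event threshold discussed earlier as the target (that linkage is my assumption), the number of people needed is simply the number of events required divided by the event rate.

```python
# How many people are needed to expect a given number of a rare adverse event?
def patients_needed(event_rate, events_required=200):
    return events_required / event_rate

print(f"{patients_needed(1 / 10_000):,.0f}")   # 2,000,000 people for an event occurring in 1 in 10,000
print(f"{patients_needed(1 / 100_000):,.0f}")  # 20,000,000 for the 1 in 100,000 rate the public asks about
```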

Clinical trials, even meta-analyses of clinical trials, will not have this amount of information. Nor will most observational studies, or even meta-analyses of observational studies. Things may be changing, because large databases are beginning to be interrogated to provide data on safety. Caution is still required because of confounding by indication and small numbers of events, so individual studies can give very different results. For instance, in a systematic review of NSAIDs and risk of myocardial infarction, the estimated risk for naproxen compared with non-use of NSAIDs ranged across individual studies from a relative risk of about 0.5 to about 1.5, with a mean of 1.0.

Large database studies may also surprise. A good example of a surprising database result (good as in a good study, as well as a surprising result) indicated that long-term use of proton pump inhibitors significantly increased the risk of hip fracture in older people [17]. It may be that co-prescribing a proton pump inhibitor with an NSAID carries a bigger risk to life from hip fracture than from the gastrointestinal bleed the proton pump inhibitor was meant to protect against.

In any event, claims of absolute safety cannot be made, and we will see more examples of rare but serious adverse events in the future than we ever did in the past.

Importance of the individual patient

The two quotations below come from people who argued vehemently over the role and importance of EBM yet agreed on the importance of the individual within the system.

“Evidence-based medicine is the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.” [18].

“Managers and trialists may be happy for treatments to work on average; patients expect their doctors to do better than that.” [19].

This underlines the importance of looking at information from the point of view of the individual patient. In acute pain, patients have been shown generally to obtain pain relief that is either very good or poor, while the average response to analgesics falls at a point where there are few, if any, actual patients [20]. It is commonly understood that not every patient with a particular condition benefits from treatments known to work (on average). Patients may discontinue therapy because of adverse events as well as lack of efficacy, especially in chronic conditions. A clinical trial may tell us that 50% of patients have pain relief with a drug, compared with 20% with placebo, and we applaud a good NNT of 3.3. Yet that obscures the fact that half the patients do not have pain relief, though they may still have adverse effects.
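The NNT arithmetic behind that example, as a minimal sketch (the 50% and 20% response rates are those quoted above):

```python
# Number needed to treat is the reciprocal of the absolute risk difference.
def number_needed_to_treat(p_active, p_placebo):
    return 1 / (p_active - p_placebo)

print(round(number_needed_to_treat(0.50, 0.20), 1))  # 3.3: treat about three patients
# for each additional responder over placebo; the others either respond as they
# would have on placebo or not at all, yet remain exposed to adverse effects
```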