The Evidence-Based Nursing Series is co-published with Sigma Theta Tau International (STTI). The series focuses on implementing evidence-based practice in nursing and midwifery and mirrors the remit of Worldviews on Evidence-Based Nursing, encompassing clinical practice, administration, research and public policy.
Evaluating the Impact of Implementing Evidence-Based Practice considers the importance of approaches to evaluate the implementation of evidence-based practice.
Outcomes of evidence-based practice can be wide ranging and sometimes unexpected. It is therefore important to evaluate the success of any implementation in terms of clinical outcomes, influence on health status, service users and health policy and long-term sustainability, as well as economic impacts.
This, the third and final book in the series, looks at how best to identify, evaluate and assess the outcomes of implementation, reflecting a wide range of issues to consider and address when planning and measuring outcomes.
Contents
Contributors’ information
Foreword
Preface
1 The importance of addressing outcomes of evidence-based practice
Introduction
Why are outcomes of EBP important?
The development of EBP
What is evidence?
Models and frameworks to support research use
Why is it important to measure/evaluate the impact of EBP?
Why do interventions of unproven benefit continue to be implemented?
The importance of outcomes in policy and politics
Conclusion
References
2 Measuring outcomes of evidence-based practice: Distinguishing between knowledge use and its impact
Introduction
Knowledge use
Measurement considerations
Measuring knowledge use
Evaluating the impact of knowledge use
Measuring outcomes of EBP and return on investment
Conclusion
References
3 Models and approaches to inform the impacts of implementation of evidence-based practice
Introduction
A model typology
Conclusion
Acknowledgment
References
4 An outcomes framework for knowledge translation
Introduction
Purpose of the framework
The focus on outcomes
Practice reflection based on outcomes
Outcomes feedback
Patient preferences for care
Facilitation to support change
Related theoretical models
Conclusion
References
5 Outcomes of evidence-based practice: practice to policy perspectives
Introduction
Background
Building the case for change for EBP
Understanding the environment and current care
Measuring the impact of EBP/care
Limitations of different approaches
Choose outcomes wisely
Conclusion
References
6 Implementing and sustaining evidence in nursing care of cardiovascular disease
Introduction
A brief history of the evolution of cardiology in practice
System approaches to changing delivery of cardiovascular care
Outcomes and their impact
Nursing-focused evidence-informed care for patients with cardiovascular health problems
Conclusion
References
7 Outcomes of implementation that matter to health service users
Introduction
Enhancing the contribution of service users to health service policy and research
Defining EBP priorities and outcomes from a service user perspective
Patient-reported outcome measures
Addressing the challenges and developing the evidence base
Acknowledgments
References
8 Evaluating the impact of implementation on economic outcomes
Introduction
Economics
Economic evaluation of EBP
Barriers to economic evaluation—ethics and research efficiency
Conclusion
References
9 Sustaining evidence-based practice systems and measuring the impacts
Introduction
The need for and concept of “sustainability” of EBP change
Maintaining the integrity of EBP
Sustainability models
Strategies for more sustainable EBP
A suggested development process for indicators that will be sustainable
Criteria for selecting sustainability indicators
Exemplars and issues around the sustainability of EBP systems
Conclusion
References
10 A review of the use of outcome measures of evidence-based practice in guideline implementation studies in Nursing, Allied Health Professions, and Medicine
Method and background on the reviews
Discussion
Conclusion
References
Appendix
Index
This edition first published 2010
© 2010 by Sigma Theta Tau International
Blackwell Publishing was acquired by John Wiley & Sons in February 2007. Blackwell’s publishing programme has been merged with Wiley’s global Scientific, Technical, and Medical business to form Wiley-Blackwell.
Registered office
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom
Editorial offices
9600 Garsington Road, Oxford, OX4 2DQ, United Kingdom
2121 State Avenue, Ames, Iowa 50014-8300, USA
For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.
The right of the author to be identified as the author of this work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.
Library of Congress Cataloging-in-Publication Data
Evaluating the impact of implementing evidence-based practice / edited by Debra Bick and Ian D. Graham.
p. ; cm. – (Evidence based nursing series)
Includes bibliographical references and index.
ISBN 978-1-4051-8384-0 (pbk. : alk. paper)
1. Evidence-based nursing. I. Bick, Debra. II. Graham, Ian D. III. Sigma Theta Tau International. IV. Series: Evidence-based nursing series.
[DNLM: 1. Evidence-Based Nursing. 2. Evidence-Based Medicine. 3. Program Evaluation–methods. 4. Treatment outcome. WY 100.7 E92 2010]
RT84.5.E927 2010
610.73—dc22
2009046229
A catalogue record for this book is available from the British Library.
1 2010
Debra Bick (RM, BA, MedSci, PhD) was appointed Professor of Evidence Based Midwifery Practice at King’s College London in September 2008. Debra’s research interests include maternal physical and psychological morbidity, the content and organization of services for postnatal women and their families, approaches to evidence synthesis and transfer, and factors affecting clinical decision making. She is Editor-in-Chief of “Midwifery: An International Journal,” Visiting Professor at the University of Sao Paulo, and Visiting Fellow at Bournemouth University. Her current research projects include the Hospital to Home postnatal care study and PEARLS, a UK-wide matched-pair cluster trial of a training intervention to enhance perineal trauma outcomes. She is a collaborator on an NIHR RfPB funded trial of diamorphine compared with pethidine for pain relief during labor and an NIHR HTA funded trial of upright compared with supine positions in the second stage of labor in primiparous women who have epidural analgesia. She was also a collaborator on two recently completed NIHR SDO projects on protocol-based care which highlighted a number of important issues for EBP.
Barbara Davies (RN PhD) is Associate Professor, School of Nursing, University of Ottawa, Canada, and is the Co-Director of the Nursing Best Practice Research Unit, a partnership with the Registered Nurses’ Association of Ontario. She is the Site Director at Ottawa of the Ontario Training Centre for Health Services and Policy Research and co-teaches an interprofessional distance graduate course entitled Knowledge Transfer. She holds a Premier’s Research Excellence Award for a program of research entitled Interventions to promote successful sustained research transfer in nursing practice and health care.
Diane Doran (RN, PhD, FCAHS) is a full Professor at the Lawrence S. Bloomberg Faculty of Nursing, University of Toronto where she also holds a Ministry of Health and Long-Term Care Nursing Senior Researcher Award. She is an adjunct professor at the School of Nursing, Queens University, and the School of Nursing, University of Technology, Sydney, Australia. She is Director of the Nursing Health Services Research Unit, University of Toronto. Dr Doran has recognized expertise in outcomes measurement, patient safety, knowledge translation, and e-Health.
Nancy Edwards is a Professor, School of Nursing and Department of Epidemiology and Community Medicine, University of Ottawa. She is an Associate Scientist, Elisabeth-Bruyere Research Institute, a Principal Scientist, Institute of Population Health and a Fellow of the Canadian Academy of Health Sciences. Dr. Edwards is also Scientific Director, Canadian Institutes of Health Research, Institute of Population and Public Health. She holds a Nursing Chair funded by the Canadian Health Services Research Foundation, the Canadian Institutes of Health Research and the Government of Ontario. Her research interests are in the area of multi-strategy and multilevel interventions in community health. She is leading programs of research in Canada and internationally.
Christina Godfrey (BA (Hons) (Psychology), BNSc (Nursing), MSc (Nursing)) is currently enrolled as a PhD candidate in the School of Rehabilitation Science, Queen’s University. Specializing in the methodology of synthesis and integrative research, Christina has received comprehensive training (Cochrane & Joanna Briggs Institute), and is co-author of both quantitative and qualitative systematic reviews. Christina continues in her role of Assistant Director of the Queen’s Joanna Briggs Collaboration and is currently teaching methodology at the graduate level in her role as adjunct faculty, Queen’s University School of Nursing.
Lisa Gold (MA Cantab (Economics), MSc Oxon (Economics for Development)) is a Senior Research Fellow at Deakin Health Economics, Deakin University, Australia. Lisa is a health economist with over 10 years’ experience in the economic evaluation of maternal and child public health and social interventions to improve population health and reduce health inequalities. She has also conducted systematic reviews of evidence and methodological development in economic evaluation in the UK and Australia. Her research interests focus on the evaluation of complex and community-based interventions and the use of stated preference methods to explore individual and community values for such interventions.
Ian Graham obtained a PhD in medical sociology from McGill University. He took leave from his position as an Associate Professor in the School of Nursing at the University of Ottawa to assume the position of Vice President of Knowledge Translation at the Canadian Institutes of Health Research. His research has largely focused on knowledge translation (the process of research use) and conducting applied research on strategies to increase implementation of research findings and EBP. He has studied adaptation, implementation, and quality appraisal of practice guidelines, as well as the uptake of guidelines and decision support tools by practitioners. He has also studied researchers’ and health research funding agencies’ KT activities, the determinants of research use, and theories/models of planned change.
Margaret B. Harrison (RN PhD) is a Professor at the School of Nursing, and cross-appointed with the Department of Community Health and Epidemiology at Queen’s University. She is a Senior Scientist with the Practice and Research in Nursing Group (PRN) at Queen’s University, an innovative academic-practice partnership to advance research at the point of care. She is also Director of the Queen’s Joanna Briggs Collaboration, the first North American partner of the Joanna Briggs Institute, a sister organization to Cochrane, engaged in advancing syntheses of all types of evidence. Dr Harrison is a founding member of the international ADAPTE collaboration, a group that is focused on development and testing a rigorous methodology to develop guidelines for different contexts.
Her research program is focused on continuity of care for complex health populations and EBP—themes which are intertwined. Knowledge translation and implementation of guidelines are important strategies for improving continuity. She has worked with pan-Canadian organizations such as the Stroke Network and the Cancer Partnership, as well as more local and regional bodies reorganizing care based on evidence. Her research crosses community, hospital, and long-term care sectors, and she has received support from provincial (MOHLTC, OHSF) and national funding councils (NHRDP, CIHR, SSHRC).
Neil Johnson (RGN, BA (Hons), PGCTLT, MSc) is a Lecturer in adult nursing at the Robert Gordon University, School of Nursing and Midwifery, Aberdeen. At present Neil teaches within both undergraduate and postgraduate programs and has been involved in a variety of projects exploring the implementation of evidence in practice and impacts from evidence in practice. At present Neil is evaluating the use of practice manuals in nursing.
Anne Sales (RN PhD) is a Professor in the Faculty of Nursing, University of Alberta, Canada Research Chair in Interdisciplinary Healthcare Teams, and Chair in Primary Care Research. She has conducted 19 funded research projects, focusing on improving quality of care, knowledge translation, and implementation of evidence-based best practice, and has over 65 peer-reviewed publications. Her training is in sociology, health economics, econometrics, and general health services research. She is currently conducting two studies of audit with feedback interventions, one in long-term care settings, the other in acute hospital and primary care settings.
Sharon Straus (MD, MSc, FRCPC) is a geriatrician/general internist/clinical epidemiologist. She is an Associate Professor in the Department of Medicine at the University of Toronto. She is the Director of the Knowledge Translation Program, Li Ka Shing Knowledge Institute at St Michael’s Hospital and the University of Toronto. Her research interests include mentorship and the evaluation of interventions to facilitate knowledge translation and promote quality of care.
She holds more than $18 million in peer-reviewed grants from the CIHR, CHSRF, and the Premier’s Research Excellence Award amongst others. She was awarded a Canada Research Chair in Knowledge Translation and a Health Scholar Award from the Alberta Heritage Foundation for Medical Research.
Jacqueline Tetroe has a Masters Degree in developmental psychology and studied cognitive and educational psychology at the Ontario Institute for Studies in Education. She currently works as a senior advisor in Knowledge Translation at the Canadian Institutes of Health Research. Her research interests focus on the process of knowledge translation and on strategies to increase the uptake and implementation of EBP as well as to increase the understanding of the barriers and facilitators that impact on successful implementation. She is a strong advocate of the use of conceptual models to both guide and interpret research.
Dominique Tremblay is a Canadian Health Services Research Foundation Postdoctoral Fellow. Her professional background is in clinical, first-line, and senior management of nursing services. She completed a PhD in nursing administration at the University of Montreal and is currently conducting her postdoctoral research program with Dr Nancy Edwards at the University of Ottawa. Her research interests focus on the translation process of innovative multiple interventions in cancer services using mixed methods research designs. EBP and outcomes of EBP relate to her domain of interest, representing a typical case of an innovative intervention involving multiple actors, multiple strategies, and multiple levels of the health care system.
Joyce E. Wilkinson (PhD, BA, DipCPCouns, RSCN, RGN) is RHV Research Fellow, Social Dimensions of Health Institute, University of St Andrews. Her research interests include RU/KU/EBP implementation processes and evaluation of impacts and outcomes of RU/KU/EBPI.
Peter Wimpenny (RGN, BSc (Hons), Cert Ed, PhD) is the Associate Director of the Joanna Briggs Collaborating Centre in Aberdeen, Scotland, based at the Robert Gordon University. He has had involvement and interest in EBP for some years. He was a member of SIGN (Scottish Intercollegiate Guideline Network) Council from 2001 to 2006 and actively involved in a number of projects exploring guideline development and implementation. At present he is involved in a variety of EBP-related work that includes systematic reviews, evaluation of use of practice manuals, and development of summarized evidence for community staff as part of the Joanna Briggs Institute.
The Evidence-Based Nursing Series
Other titles in the Evidence-Based Nursing Series:
Models and Frameworks for Implementing Evidence-Based Practice: Linking Evidence to Action
Edited by Jo Rycroft-Malone and Tracey Bucknall
ISBN: 978-1-4051-7594-4
Clinical Context for Evidence-Based Nursing Practice
Edited by Bridie Kent and Brendan McCormack
ISBN: 978-1-4051-8433-5
Foreword
The arrival of the evidence-based ‘movement’ emerged at a particular moment in history when faith in technocratic and scientific rationalism in policy circles seemed to reach its apotheosis. What this book demonstrates so compellingly is that this ‘movement’ was the product of the convergence of ideas, individuals, institutions and infrastructure as well as policy drivers. It is hard to imagine the global spread of this endeavour and its embedding within health-care without the use of the internet but, in evaluating the impact of implementing evidence-based practice, this book reveals what a sophisticated science evidence-based practice has become. Most of all, it shows the importance of underpinning implementation of evidence with a change management strategy. The story of EBP is well told – its impetus lies in the desire to improve outcomes for patients by implementing practices that were clinically effective and cost-effective, and eliminating those that did not meet these criteria. The apostle of this new movement was Archie Cochrane, whose name christened the collaboration that now represents arguably the largest and most authoritative methodological evidence synthesis industry globally. Cochrane also advocated the application of EBP to education, social work, criminology and social policy, which is now being taken forward by the Campbell Collaboration. However, under the leadership of Sir Iain Chalmers and colleagues at the National Perinatal Epidemiology Unit in Oxford, it was pregnancy and childbirth that blazed the trail, starting in the late 1970s through the development of stringent standards for the evaluation of evidence and the development of EBP. Fuelled partly by impatience with the vagaries of evidence and its implementation, as well as a sense of moral outrage at persistent inequalities in outcomes, Chalmers and colleagues created an encyclopaedia of evidence that became the landmark reference in the field.
In the early 1990s, Chalmers turned his attention to establishing the international Cochrane Collaboration. A prominent part of the early EBP movement was support from consumers and the user voice continues to be given due prominence by the Collaboration.
The epicentre of the EBP movement in the UK was Oxford in the mid-1980s, where acolytes such as David Sackett congregated and spread the word from cognate initiatives across the Atlantic, Canada in particular. It was not just that the use of evidence and measures of rigour and quality were promulgated, which was itself new; a new urgency also coalesced around the need to consider the clinical- and cost-effectiveness of health-care interventions. Concurrent initiatives within the UK National Health Service, including the NHS Library and Information service founded by Sir Muir Gray, provided infrastructure, while the National Institute for Health and Clinical Excellence and the Centre for Reviews and Dissemination at the University of York were established and tasked with sieving the evidence and turning it into guidelines. David Sackett and Jonathan Lomas from Canada became synonymous with the success of the movement, which swiftly spawned further branches of activity, including research utilization, knowledge transfer and exchange, knowledge translation and implementation science. Similarly, as this book reveals, the embedding of the approach within nursing, specifically cardiovascular nursing, and the allied health professions has had an impact, albeit a patchy one. The value and utility of the effort invested in EBP lies not only in putting evidence into the hands of clinicians and turning that into better outcomes for patients in a cost-effective manner, but also in developing the mechanisms and conditions to do so. This volume pulls together the evidence from key leaders and experts in the field of implementation, both nationally and internationally.
The resulting synthesis summarizes the state of the science but also reveals that outcomes rely upon multi-layered and multi-faceted interventions, and a skill set that stretches from the academic to advocacy and political mobilization. Politics is never far from the surface in this gripping tale. We might explain the rise of EBP as the desire on the part of policy makers to distance themselves from the dilemmas of allocating resources in health-care, and the evidence presented here reveals the unintended consequences and challenges of sustaining impacts along the way. This book has the rare virtue of not taking us round the reflexive loop of only providing the tools to evaluate outcomes of implementation. It advises on what needs to be considered when making change and on what will make that change stick.
Professor Anne Marie Rafferty
BSc MPhil DPhil (Oxon) RGN DN FRCN
Head of School
Florence Nightingale School of Nursing & Midwifery
King’s College London
Preface
As a consequence of evidence-based practice (EBP), many of us reside in countries where there have been tremendous changes in approaches to clinical education and skills training, patient care, and funding of our health services. EBP evolved from recognition that many areas of clinical care were unsupported by evidence of clinical or cost-effectiveness, with continued use of unproven interventions relying in many cases on an assumption of benefit. Once an area of practice had been questioned and evidence of impact became available, it was clear that interventions which did not result in benefit or could lead to harm should be withdrawn. Conversely, interventions associated with improved outcomes should be universally implemented. The role of EBP in informing the provision and content of 21st century health care is now a policy priority, a move which has triggered ongoing public debate about the role of politics in health care.
As researchers, contributors to national policy guidance, members of funding bodies, and educators, we have followed recent debates with interest. What stimulated us to produce this book was recognition of the continuing gap between use of evidence and impact on outcomes. Despite the drive to implement EBP, there is still limited guidance on how to assess whether implementation was effective, the issues which should be considered, or the range of approaches which could be used to measure outcomes. Work to date which has addressed implementation has often measured success in terms of whether a specific clinical outcome was or was not achieved, and has not considered other impacts, some of which may not have occurred immediately (such as whether an intervention was sustained in practice) or may have occurred at other points along the pathway of implementation (such as clinician or patient behavior change). Detrimental practices persist despite a plethora of systematic reviews and tools to assist implementation, such as guidelines and protocols. This may be due to poor implementation of the evidence in the first instance, failure to sustain successful implementation, lack of robust evaluation of implementation processes and outcomes, or failure to consider the range of impacts (intended and unintended) which may have occurred.
Our contributors have all had “hands-on” experience of contributing to or leading on studies of implementation of EBP and bring together a wealth of research experience. We appreciate that work related to evaluation of outcomes is developing and that funding bodies and policy makers are crucial to taking this work forward. In the interim, we hope that this book will stimulate those involved with the development, implementation, and evaluation of EBP to appreciate that equal priority needs to be accorded to how outcomes of implementation are derived, measured, and reported on.
Debra Bick and
Ian D. Graham
Debra Bick and Ian D. Graham
In this chapter, the background to the development of the book is outlined as are some of the reasons why we felt it was timely and appropriate to bring together a text which focuses on the outcomes of implementation of evidence-based practice (EBP). Experts in the field of knowledge translation and EBP invited to contribute chapters to the book were asked to consider how to determine if outcomes of EBP in their areas of expertise were efficacious, how efficacy could be measured, and how to ascertain if the outcomes of interest were the most important from the perspectives of relevant stakeholders. As described by Ian Graham and colleagues in Chapter 2, outcomes of EBP could include change in behavior demonstrating use of evidence in practice and impact of use on outcomes such as better health and more effective use of healthcare resources.
We hope that, by reading the chapters and following the perspectives presented by the authors, it will become apparent that equal priority needs to be accorded to the outcomes of implementation as to all other steps taken to support the use of research in practice. Most of us live in countries where healthcare resources are finite, an issue whether our healthcare is largely funded through our taxes or private insurance schemes. Some readers will reside in countries where healthcare systems face an unprecedented increase in the burden of ill health arising from chronic, non-communicable diseases—for example as a consequence of the epidemic of obesity or an aging population. Others will reside in countries which face epidemics of disease including TB, HIV/AIDS, persistent high maternal and infant mortality and morbidity, or where poor or fractured infrastructure cannot support an effective healthcare system. For those living in developed countries, while there have been unprecedented advances in healthcare technology and year-on-year increases in healthcare funding from government, the increase in resources has not been matched by improvements in health. This is most evident in the US, where it is estimated that healthcare costs for 2009 were $2.7 trillion, the highest level of healthcare spend anywhere in the world, yet life expectancy is lower than in many other developed and middle-income countries, indicating large discrepancies between healthcare costs and outcomes (Institute of Medicine 2009). We also have healthcare systems where, despite a plethora of technology, gaps remain in the quality of data to accurately inform and compare the outcomes of care. In the UK, efforts to gauge whether investment in healthcare following the election of a Labour government in 1997 had resulted in improved health outcomes were hampered by constraints in measures of quality and the need for better measures of output and outcome extending beyond hospital episode data (Lakhani et al. 2005).
For the last two decades, in response to some of the reasons outlined above, greater emphasis has been placed on the need to provide healthcare informed by evidence of effectiveness, the premise being that use of evidence will optimize health outcomes for the service user and maximize use of finite healthcare resources. The main drivers for EBP have come from political and policy initiatives which also instigated the establishment of organizations to develop guidance to inform healthcare, such as the National Institute for Health and Clinical Excellence (NICE) in England and Wales, the Scottish Intercollegiate Guideline Network, and the US Agency for Healthcare Research and Quality. The remit of a national body such as NICE is to make recommendations for care based on best evidence of clinical and cost-effectiveness. Suites of guidelines to inform a range of acute and chronic physical and psychological health conditions, and appraisals of innovations in technology and pharmacology, have been developed and published by NICE which aim to standardize patient care, reduce variation in health outcomes, discourage use of interventions with no proven efficacy, and encourage systematic assessment of patient outcomes. The National Institute for Health Research, which funds research to inform National Health Service (NHS) care in England, requires studies funded across all of its programs to provide evidence of clinical and cost-effectiveness.
The role of NICE in the synthesis and dissemination of evidence to prioritize healthcare interventions has generated criticism that it promotes rationing in healthcare (Maynard et al. 2004), an issue with implications for determining how outcome measures are derived to elicit benefit, and from whose perspective. As Maynard and colleagues (2004) write “…rationing is the inevitable corollary of prioritization, and NICE must fully inform rationing in the NHS,” the issue being not whether but how to ration (p. 227). In the UK, publication of NICE guidance which does not support the use of a particular drug or therapy because the evidence reviewed did not indicate clinical or cost-effectiveness has frequently been challenged by industry (Maynard & Bloor 2009), service user charities, and in media reports of an individual’s experience of being refused treatment which did not comply with NICE recommendations. Recent NICE recommendations which generated criticism about its role include restrictions on the use of drugs for people with early stage Alzheimer’s disease, restrictions on access to fertility treatments, and on the use of drugs to treat kidney cancer. In some instances, the Department of Health was forced to reverse the original NICE recommendation to deflect public criticism, for example over the use of Herceptin for women with early stage breast cancer (Lancet 2005). Nevertheless, this is an interesting juxtaposition—whose outcomes should receive the highest priority when decisions about healthcare interventions and optimal use of finite resources are made? That certain treatments may make a difference to someone’s quality of life will not influence recommendation for use across the NHS if the evidence assessed does not demonstrate clinical or cost-effectiveness at thresholds set by NICE.
The recent introduction of “top up” fees to enable patients to bypass NICE recommendations and purchase drugs not recommended for NHS use reflects the power of today’s informed healthcare consumer (Gubb 2008). Although likely to be utilized by only a small group of people, this raises, as Maynard and Bloor (2009) propose, issues about the role of NICE and regulation of the pharmaceutical industry; how drug prices should be determined; and how, if at all, to deal differently with rare or end-of-life conditions when making resource allocation decisions in healthcare. It also introduces the issue of consumers opting to purchase interventions which they view as likely to provide a better outcome, which could include aspects of physical and/or psychological health and/or well-being.
The development of strategies to encourage use of evidence to inform decisions about healthcare was stimulated initially by what has been referred to as a “movement” for evidence-based medicine (EBM). One of the first people to propose that medical care should be informed by evidence of effectiveness was Archie Cochrane, whose book Effectiveness and Efficiency: Random Reflections on Health Services was published in 1972. Cochrane also advocated that this approach should be applied to education, social work, criminology, and social policy (Cochrane 1972). His work prompted groups such as those led by Gordon Guyatt and David Sackett to develop methods to synthesize and critique evidence to support decisions in clinical practice. In the late 1970s and 1980s, Ian Chalmers at the National Perinatal Epidemiology Unit in Oxford pioneered the methodology for systematically reviewing the evidence related to effective care in pregnancy and childbirth. Building on this work, the Cochrane Centre was established in 1992 and was crucial for the spread of EBM, which in turn stimulated revisions to healthcare education and training, policy development, publication of new journals, and establishment of academic centers. Principles of EBM have subsequently been applied to support the commissioning of healthcare services and recommendations for pharmacological treatments, surgical interventions, diagnostic tests, and medical devices. Of note, although much attention has been paid to the use of “outcome” measures, limited attention has been paid to defining a “good” or “poor” outcome or to its consequences. Reviewers for the Cochrane Pregnancy and Childbirth group define an outcome as an “adverse health event” (Hofmeyr et al. 2008). In a Cochrane review, data from meta-analyses of relevant trials are presented in a forest plot, with a beneficial effect of an intervention shown to the left of the “no effect” line and a harmful effect to the right.
This is an extremely useful way to present outcomes of pooled data, but it is one part of the picture if we are to ensure that outcomes are the most relevant for all concerned. Further exploration of outcomes is required in order that consequences beyond implementation can be considered from a range of perspectives, an important stage in the continuum of research use.
There is ongoing debate as to the definition of “evidence” and what counts as evidence, although it seems consensus has been reached that evidence can come from a number of sources and not just the findings of randomized controlled trials (RCTs). A recent position paper from Sigma Theta Tau describes research evidence as:
methodologically sound, clinically relevant research about the effectiveness and safety of interventions, the accuracy and precision of assessment measures, the power of prognostic markers, the strength of causal relationships, the cost-effectiveness of nursing interventions, and the meaning of illness or patient experiences.
(Sigma Theta Tau International 2005–2007, Research and Scholarship Advisory Committee Position Statement 2008, p. 57)
In a 1996 commentary in the British Medical Journal, Sackett et al. (1996) defined EBM as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients,” and stressed the need for the clinician to use evidence along with their expertise and judgment to make decisions which also reflected the choice of the individual patient. A later British Medical Journal commentary reiterated that evidence alone should not be the main driver to change practice and that preferences and values needed to be explicit in clinical decision making (Guyatt et al. 2004). Of note, the authors highlighted that the biggest future challenge for EBM was knowledge translation (Guyatt et al. 2004). The need to synthesize evidence for use by busy clinicians, to place evidence in a “hierarchy” with the most robust evidence at the top, and to acknowledge that evidence can come from a number of external sources continues to be emphasized (Bellomo & Bagshaw 2006).
When reading any literature which refers to the use of evidence, it is apparent that a number of terms have been used to describe the process, including EBM, EBP, evidence-based clinical decision making, and evidence-informed practice. The term evidence-based practice is more commonly used to describe evidence use by nurses, midwives, and members of the allied health professions (Sigma Theta Tau International Position Statement 2008).
Throughout this book, we refer to EBP in line with the following definition:
the process of shared decision making between practitioner, patient, and others significant to them based on research evidence, the patient’s experiences and preferences, clinical expertise or know-how, and other available robust sources of information.
(Rycroft-Malone et al. 2004)
As we have already indicated, an outcome could reflect behavior change at the individual, team, or organizational level, an improvement in individual health status, or better use of healthcare resources. The increase in access to electronic bibliographic databases, such as the Cochrane Library of Systematic Reviews, and the dissemination strategies originally adopted by groups such as NICE, professional organizations, and healthcare providers were viewed as ways to increase clinician awareness of research, with an assumption that use of research evidence would spontaneously occur and improved patient health outcomes would follow. Studies of dissemination and implementation strategies found that few were effective (Grimshaw et al. 2004). Grimshaw and colleagues (2004) undertook a systematic review of the effectiveness and efficiency of guideline dissemination and implementation strategies. Studies were selected for inclusion if they were RCTs, controlled clinical trials, controlled before-and-after studies, or interrupted time series. A total of 235 studies, covering 309 comparisons, met the inclusion criteria; overall study quality was poor. Multifaceted interventions were addressed in 73% of the comparisons. The majority of comparisons reporting dichotomous outcome data (87%) found some differences in outcomes, with considerable variation in observed effects both within and across interventions. Commonly evaluated single interventions included reminders, dissemination of educational materials, and audit and feedback. The majority of studies reported only treatment costs, and only 25 studies reported the costs of guideline development, dissemination, or implementation; in most cases these data were of low quality and not suitable for extraction for the review.
In conclusion, the authors recommended that decision makers needed to use considerable judgment when making decisions about how best to use limited resources to maximize population health.
A number of models and theoretical frameworks to support research use in practice have been developed—for example, the IOWA model (Titler et al. 2001), the PARiHS framework (Kitson et al. 1998), and the Ottawa Model (Graham & Logan 2004)—which are described further in Chapter 3 of this book and are the focus of Book 1 of this series (Rycroft-Malone & Bucknall 2010). It is now appreciated that implementation is complex, multifaceted, and multilayered, and that interventions need to reflect and take account of context, culture, and facilitation to support and sustain research use. Despite the development of frameworks and models, as Helfrich and colleagues (2009) highlight with respect to PARiHS, there is as yet no pool of validated measures to operationalize the constructs defined in the framework. Work in this area is ongoing, as is other work to support research use, including tools to assess the extent to which an organization is ready to adopt change. An example is the Organizational Readiness to Change Assessment (ORCA) instrument developed by the Veterans Health Administration (VHA) Quality Enhancement Research Initiative for Ischemic Heart Disease (Helfrich et al. 2009). Although still in the developmental stage, this could be a useful approach for future implementation strategies.
As the following chapters illustrate, with examples ranging from evaluation of the outcomes of wound care and cardiac care interventions to the perspectives of service users, evaluating the outcomes of evidence use is essential. It is also apparent that the evaluation of outcomes needs to be subjected to the same level of rigor that the EBP movement applies to other interventions and procedures.
There are many examples in clinical practice of interventions introduced on the assumption of benefit rather than on evaluation of their impact on a range of outcomes from the perspectives of the relevant stakeholders. In maternity care, universal roll-out of interventions such as routine perineal shaving and enemas at the onset of labor, separation of mothers and babies after birth to prevent infection, and routine use of episiotomy occurred with no supporting evidence that immediate or longer-term outcomes were better—it was assumed that they would be. When these interventions were eventually subjected to rigorous evaluation, more often than not there were no differences in outcomes, or there were indications of potential harm (Basevi & Lavender 2008; Carroli & Mignini 2008; Reveiz et al. 2007; Widstrom et al. 1990). The Term Breech Trial (Hannah et al. 2000) provides a useful example of why longer-term outcomes from different stakeholders’ perspectives need to be considered and evaluated before universal change in practice takes place.
A small proportion of women (around 2–3%) will have a baby in breech presentation at term, and studies which had previously considered which mode of birth was optimal for the baby and for the woman had been inconclusive due to methodological issues and small sample sizes. In certain cases, for example a footling breech or a large baby, planned cesarean section (CS) had been considered safer than planned vaginal birth. The Term Breech Trial was designed to provide the ultimate answer to the mode-of-birth debate, with the proviso that study centers would have clinicians with the expertise to support vaginal breech births. The trial took place in 121 centers in 26 countries and recruited over 2,000 women. Women and their babies were initially followed up to 6 weeks post-birth. Primary study outcomes included perinatal and neonatal mortality or serious neonatal morbidity, and maternal mortality or serious maternal morbidity. At 6 weeks, perinatal and neonatal mortality and morbidity were significantly lower in the planned CS group (17 of 1039 [1.6%] versus 52 of 1039 [5.0%]; relative risk 0.33 [95% CI 0.19–0.56]; p < 0.0001). There were no differences in any of the maternal outcomes. The trial was stopped early due to a higher than expected event rate. The authors concluded that planned CS was better than planned vaginal birth. Trial results were fast-tracked for publication by The Lancet (Hannah et al. 2000), despite the need for caution raised by one peer reviewer concerned about the impact of such differential findings on practice and the implications this could have for maternity care in both developed and developing countries (Bewley & Shennan 2007).
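For readers unfamiliar with how a relative risk and its confidence interval are derived, the headline figures above can be reproduced from the raw counts. The following is a minimal illustrative sketch in Python (the function name is our own, and the Wald interval on the log scale is a standard textbook method, not necessarily the exact method used by the trial statisticians):

```python
import math

def relative_risk_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A versus group B, with a Wald
    confidence interval computed on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for two independent proportions
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se)
    upper = math.exp(math.log(rr) + z * se)
    return rr, lower, upper

# Term Breech Trial primary outcome at 6 weeks:
# 17/1039 (planned CS) versus 52/1039 (planned vaginal birth)
rr, lo, hi = relative_risk_ci(17, 1039, 52, 1039)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 0.33 (95% CI 0.19-0.56)
```

Running this reproduces the published figures, showing that the reported relative risk of 0.33 (95% CI 0.19–0.56) follows directly from the event counts in the two arms.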
Contrary to the usually slow uptake of research findings, in this case the trial rapidly changed practice in many countries, with planned CS rates rising steeply following its publication (Alexandersson et al. 2005; Carayol et al. 2007; Molkenboer et al. 2003). In England, planned elective CS is now the preferred mode of birth for women with a diagnosed breech baby at term (Department of Health 2008). Debate about the findings of the Term Breech Trial has continued, particularly following publication of a planned two-year follow-up of women and babies, which showed no differences in outcomes between the study groups (Whyte et al. 2004). Criticisms of the original trial included lack of adherence to the study protocol, variation in standards of care between trial centers, inadequate methods of fetal assessment, and recruitment of women during active labor, when they may not have had a chance to properly consider participation (Glezerman 2006). That women were not supported to birth in upright positions, which could have increased the likelihood of a vaginal birth, was also criticized (Gyte & Frolich 2001). These criticisms have been refuted by the trial team, who defended their position that this was a peer-reviewed trial evaluated in a number of countries and that the criticisms in the main reflected the prior beliefs of clinicians (Ross & Hannah 2006).
The worldwide impact of the study and the rapid implementation of its findings into practice has already had an unplanned outcome: the erosion of clinical skills to support vaginal breech birth (Glezerman 2006). It is therefore important to consider whether immediate and longer-term outcomes, and the responses to the queries raised, should have been assessed prior to publication, and whether the trial outcomes were the most appropriate for all relevant stakeholders. Practice changed globally on the basis of the immediate outcomes, despite criticisms that the results may have been subject to bias arising from problems with the trial protocol (Glezerman 2006); yet the longer-term (two-year) outcomes showed no difference (Whyte et al. 2004), which may have been a more reassuring finding for clinicians and for women. This poses the question of which outcomes, at which time point, should be used to inform practice. In terms of how women were reassured, we would also have to consider the basis for the outcomes on which obstetricians defined their expertise in supporting vaginal breech birth, and whether outcomes would have differed if midwives had also been involved. A further moot point is the prior beliefs of those most likely to implement the change and of those likely to be its recipients, and whether an RCT was the most appropriate research method given the vagaries of maternity practice context, policy, and culture across the globe (Kotaska 2004). The trial which aimed to provide the definitive answer has changed practice when perhaps it should not have done, given the concerns about the protocol and about the presentation and interpretation of outcomes. What is clear is that this trial could never be repeated, owing to the change in routine practice and the loss of clinical skills.