British Medical Association Book Award Winner - President's Award of the Year 2018
From the author of the bestselling introduction to evidence-based medicine, this brand new title makes sense of the complex and confusing landscape of implementation science, the role of research impact, and how to avoid research waste.
How to Implement Evidence-Based Healthcare clearly and succinctly demystifies the implementation process, and explains how to successfully apply evidence-based healthcare to practice in order to ensure safe and effective practice. Written in an engaging and practical style, it includes frameworks, tools and techniques for successful implementation and behavioural change, as well as in-depth coverage and analysis of key themes and topics with a focus on:
Page count: 522
Year of publication: 2017
In memory of Anna Donald. I have finally finished our book.
Trisha Greenhalgh
Professor of Primary Care Health Sciences
Nuffield Department of Primary Care Health Sciences
University of Oxford
Oxford, UK
This edition first published 2018
© 2018 John Wiley & Sons Ltd.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Trisha Greenhalgh to be identified as the author of this work has been asserted in accordance with law.
Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd., The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
Editorial Office
9600 Garsington Road, Oxford, OX4 2DQ, UK
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
The contents of this work are intended to further general scientific research, understanding, and discussion only and are not intended and should not be relied upon as recommending or promoting scientific method, diagnosis, or treatment by physicians for any particular patient. The publisher and the authors make no representations or warranties with respect to the accuracy and completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of fitness for a particular purpose. In view of ongoing research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of medicines, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each medicine, equipment, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. Readers should consult with a specialist where appropriate. The fact that an organisation or website is referred to in this work as a citation and/or potential source of further information does not mean that the author or the publisher endorses the information the organisation or website may provide or recommendations it may make. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. No warranty may be created or extended by any promotional statements for this work. Neither the publisher nor the author shall be liable for any damages arising herefrom.
Library of Congress Cataloging‐in‐Publication data are available
ISBN: 9781119238522
Cover image: Meaden Creative
Cover design by Wiley
Starting to read the new book by Trisha Greenhalgh on the implementation of evidence‐based healthcare (EBHC), I remembered one of the major implementation projects in healthcare in the Netherlands, in which I was involved in the 1990s. A national collaboration between the Department of Health and the GP organisations aimed to improve prevention in primary care by implementing national guidelines for flu vaccination, cervical screening and cardiovascular risk reduction. My team developed the implementation programme and studied the impact. We developed a programme at different levels of healthcare (national, local, practice, professional) using a variety of theories on implementing change, including the setting of evidence‐based guidelines, education for professionals, educational materials for patients, a national steering group, outreach visitors to support practices, a fee for the extra work, etc. The results were great: an enormous increase in vaccination and screening rates in a very short time. However, within 3 years the project stopped, because of a political fight between the GP organisations and the Department of Health over both the aims of the programme (more or less primary prevention) and general policies and honoraria for primary care. It took years before prevention was put on the agenda of primary care again.
There were many lessons in this project that I will never forget, even though it was 20 years ago and I retired from scientific work a while ago. For instance, the impact of political support on effective change, but also the role of professional attitudes towards the change required (many GPs were quite reluctant to engage in primary prevention). Reading this new book by one of our absolute experts in the field of implementation of change in healthcare, I understand the successes and failures of this project even better. EBHC and evidence‐based guidelines do not implement themselves, even if the evidence is sound. Many policy‐makers and professionals have naive ideas about the implementation of change in practice. Ineffective implementation of EBHC is, most of the time, a waste of time and money. Implementation of change in healthcare is, most of the time, only successful when you work at different levels of healthcare, not only at the level of professionals, and when you use sound theories and assumptions to guide your efforts.
These are exactly the messages you will find in this very interesting and useful book. It is rich in ideas and provides a whole range of theories. It makes clear how important it is to use knowledge and theories from different sources, including social and political sciences and humanities. The author wrote the book in memory of her friend, Anna Donald. Together they started with the idea of the book back in the 1990s, but only now has the author been able to work on it and finish it. A great achievement, which demonstrates how this complex field of implementation of EBHC has developed over the years and has come to maturity over the past decades. Please read the book and use it in your programmes and projects.
Professor Richard Grol, Emeritus Professor of Quality of Care; Founder and former Director of the Scientific Institute on Quality of Healthcare
To the late Dr Anna Donald, for beginning this journey with me (see Chapter 1 for the full story).
To my students, for asking the awkward questions that inspired much of this book.
To the many colleagues in – and especially beyond – the world of academia, for introducing me to different perspectives.
To everyone on Twitter who contributed suggestions.
To my family (Fraser, Rob, Al) for being cool about this writing habit of mine – and chipping in occasionally.
To the doctors and nurses who saved my life last year (using NICE guidelines).
To everyone at Wiley who supported the development, preparation, publication and sale of this book.
To Gaby Leuenberger for proofreading the manuscript.
Let me start with a warning: this book is not going to give you a cookbook answer to the question of how to implement evidence‐based healthcare (EBHC). My (more modest) aim is threefold:
To introduce you to different ways of thinking about the evidence, people, organisations, technologies and so on (read the chapter headings) that are relevant to the challenge of implementing EBHC.
To persuade you that implementing EBHC is not an exact science and can never be undertaken in a formulaic, algorithmic way. Rather – and notwithstanding all the things that are known to help or hinder the process – it will always require contextual judgement, rules of thumb, instinct and perhaps a lucky alignment of circumstances.
To promote interest in the social sciences (e.g. sociology, social psychology, anthropology) and humanities (e.g. philosophy, literature/storytelling, design) as the intellectual basis for many of the approaches described in this book.
This book was a long time in gestation. The idea first came to Anna Donald and me in the late 1990s. At the time, we were both working in roles that involved helping people and organisations implement evidence – and it was proving a lot harder than the textbooks of the time implied. That was the decade in which evidence‐based medicine (EBM), which later expanded beyond the exclusive realm of doctors to EBHC (to include the activities of other health professionals, managers and lay people), was depicted as a straightforward sequence of asking a clinical question, searching the literature for relevant research articles, critically appraising those articles and implementing the findings. The last task in the sequence was treated as something that could be ticked off from a checklist.
Anna and I penned an outline for the book (it looked very different then – because most of the research into knowledge translation and implementation cited here had not yet been done). But, tragically, Anna became ill before we got much further and died a few years later, with our magnum opus barely started. Whilst the detail of what is described here is my own work, there is still a sense in which it is Anna’s work too. Even in those early days, before terms like ‘implementation science’, ‘research utilisation’, ‘knowledge translation’ and ‘evidence‐into‐action’ became part of our vocabulary, Anna recognised that we would never be able to produce a set of evidence implementation checklists in the same way as she and I once drew up a set of critical appraisal checklists for our students.
It has taken me nearly 20 years to produce this book, partly because when Anna died, I lost a dear friend as well as a formidable intellectual sparring partner – but also because the question ‘How do you implement EBHC?’ is a good deal too broad for a single book. And yet, one book to scope the field and run a narrative through its many dimensions was exactly what was needed. I have long been convinced that whilst there are definite advantages to asking dozens of different authors, each with different views on the subject, to cover different aspects of this complex and contested field (Sharon Straus and her team did just that, and the book they edited is worth reading [1]), the EBHC community (nay, network of communities) also needs a single‐author textbook whose goal is to achieve some degree of coherence across the disparate topics.
EBM and EBHC have come a long way since the 1990s. The ‘campaign for real EBM’, which I helped establish in 2014, has called for a broadening of EBM’s parameters to include the use of social science methodologies to study the nuances of clinical practice, policymaking and the patient experience – as well as considering the political dimension of conflicts of interest in research funding and industry sponsorship of trials [2]. It is, perhaps, a reflection of the broadening of the EBM/EBHC agenda that implementation science has been established as a separate interdisciplinary field of inquiry (with much internal contestation), with its own suite of journals, research funding panels and conference circuit [3].
One important development in EBHC in recent years is the growing emphasis on value for money in the research process and an emerging evidence base on how little impact research so often has on practice and policy. This overlaps with the expectation on universities (in the United Kingdom at least, via the Research Excellence Framework) to demonstrate that the research they undertake has impact beyond publishing papers in journals read only by other academics. I have reviewed the literature on research impact elsewhere [4].
In 2014, Sir Iain Chalmers led a series in the Lancet that highlighted different aspects of research waste, including waste in the allocation of research funds (too often, we study questions people don’t want answered and fail to study the ones they do) [5]; waste in the conduct of research (studies are underpowered, use the wrong primary endpoints and/or the wrong measurements and so on) [6]; and waste when the findings of research prove ‘unusable’ in practice (because the findings are not presented in ways that could be applied by practitioners or policymakers) [7]. Most recently, John Ioannidis has written a masterly review on ‘Why Most Clinical Research Is Not Useful’ [8]. I look at this last paper in detail in Section 9.1. The bottom line is clear: there is a huge gap between evidence and its implementation – and it’s not easily explained.
The final impetus for me finishing this book was taking up a new job at the University of Oxford in 2015. My new job description included leading (along with Kamal Mahtani) the module ‘Knowledge Into Action’. This was part of the popular and well‐regarded MSc in Evidence‐Based Health Care run by Carl Heneghan and his team from the Centre for Evidence‐Based Medicine. The students on the Knowledge Into Action course were asking for a textbook. Some (the less experienced ones) were looking for checklists and formulae – but many who had worked at the interface between evidence and practice for years knew that the field was not predictable enough to be solved by such things. These more enlightened students wanted a way to get their heads round why implementing EBHC is not an exact science.
In sum, this book looks two ways. Looking retrospectively, it is dedicated to the memory of Anna Donald, who helped inspire it. And looking prospectively, it is dedicated to those who study the implementation of EBHC with a view to improving outcomes for patients. It also seeks to make a contribution to increasing value and reducing waste in research by increasing the proportion of good research that has a worthwhile impact on patients (the sick) and on citizens (including those of us who pay taxes and who may become sick).
This section started life as a blog on the website of the Centre for Evidence Based Health Care at the University of Oxford. I wrote it to set the scene for the Knowledge Into Action MSc module that Kamal Mahtani and I were running in 2016. Our group of students had already completed modules on critical appraisal, randomised controlled trials and other highly rigorous methodological approaches. They perhaps anticipated that ‘rigorous methodology’ would get them through the implementation stage too. To get my excuses in before the course began, I penned this blog entry:
Tools and resources for critical appraisal of research evidence are widely available and extremely useful. Whatever the topic and whatever the study design used to research it, there is probably a checklist to guide you step by step through assessing its validity and relevance.
The implementation challenge is different. Let me break this news to you gently: there is no tooth fairy. Nor is there any formal framework or model or checklist of things to do (or questions to ask) that will take you systematically through everything you need to do to ‘implement’ a particular piece of evidence in a particular setting.
There are certainly tools available [see Appendices], and you should try to become familiar with them. They will prompt you to adapt your evidence to suit a local context, identify local ‘barriers’ and ‘facilitators’ to knowledge use, select and tailor your interventions, and monitor and evaluate your progress. All these aspects of implementation are indeed important.
But here’s the rub: despite their value, knowledge‐to‐action tools cannot be applied mechanistically in the same way as the CONSORT checklist [2] can be applied to a paper describing a randomised controlled trial. This is not because the tools are in some way flawed (in which case, the solution would be to refine the tools, just as people have refined the CONSORT checklist over the years). It is because implementation is infinitely more complex (and hence unpredictable) than a research study in which confounding variables have been (or should have been) controlled or corrected for.
Implementing research evidence is not just a matter of following procedural steps. You will probably relate to that statement if you’ve ever tried it, just as you may know as a parent that raising a child is not just a matter of reading and applying the child‐rearing manual, or as a tennis player that winning a match cannot be achieved merely by knowing the rules of tennis and studying detailed statistics on your opponent’s performance in previous games. All these are examples of complex practices that require skill and situational judgement (which comes from experience) as well as evidence on ‘what works’.
So‐called ‘implementation science’ is, in reality, not a science at all – nor is it an art. It is a science‐informed practice. And just as with child‐rearing and tennis‐playing, you get better at it by doing two things in addition to learning about ‘what works’: doing it, and sharing stories about doing it with others who are also doing it. By reflecting carefully on your own practice and by discussing real case examples shared by others, you will acquire not just the abstract knowledge about ‘what works’ but also the practical wisdom that will help you make contextual judgements about what is likely to work (or at least, what might be tried out to see if it works) in this situation for these people in this organisation with these constraints.
There is a philosophical point here. Much healthcare research is oriented to producing statistical generalisations based on one population sample to predict what will happen in a comparable sample. In such cases, there is usually a single correct interpretation of the findings. In contrast, implementation science is at least partly about using unique case examples as a window to wider truths through the enrichment of understanding (what philosophers of science call ‘naturalistic generalisation’). In such cases, multiple interpretations of a case are possible and there may be no such thing as the ‘correct’ answer (recall the example of raising a child above).
In the Knowledge Into Action module, some of the time will be spent on learning about conceptual tools such as the Knowledge to Action Framework [see Appendix A]. But the module is deliberately designed to expose students to detailed case examples that offer multiple different interpretations. We anticipate that at least as much learning will occur as students not only apply ‘tools’ but also bring their rich and varied life experience (as healthcare professionals, policymakers, managers and service users) to bear on the case studies presented by their fellow students and visiting speakers. Students will also have an opportunity to explore different interpretations of their chosen case in a written assignment.
I hope this blog entry has conveyed the inherent complexity and uncertainty of the field I will be exploring in this book. If you are interested in attending the Knowledge Into Action course, google ‘Oxford MSc in Evidence Based Health Care’ and find it on the list of modules. The residential week usually runs in late spring, when Oxford is at its glorious best – but be warned: the course usually books up several months in advance.
As you can see from the list of chapter titles, each chapter looks at a different level of analysis. Separating the world out into different levels is a useful analytic technique but is in danger of introducing an artificial sense of order. Any attempt to implement EBHC in real life will require you to consider the material from more than one chapter (and ideally all the chapters) in combination.
Chapter 2 looks at evidence. It begins by problematising the very word ‘evidence’ and encourages you to question the provenance, completeness, relevance and ways of interpreting a piece of evidence – even when it is a randomised controlled trial or systematic review that appears to tick all the right methodological boxes. It also explains the term ‘knowledge translation’ and reminds you that different users of evidence (researchers, policymakers, practitioners, managers, patients, citizens) come from different cultural ‘worlds’ and have different values and expectations. It also considers the attributes of evidence (a guideline, for example) that tend to promote its adoption in practice. I offer some tips for generating the kind of evidence that potential users are likely to find useful.
Chapter 3 is about people – all people, since it covers the discipline of psychology, but mainly clinicians, since it relates to the adoption and non‐adoption of evidence‐based guidelines. I offer a highly eclectic selection of theories of human behaviour, notably ‘fast’ and ‘slow’ thinking and the science of heuristics (Kahneman, Gigerenzer); the theory of planned behaviour (Ajzen and Fishbein) and critiques thereof; learning domains of knowledge, skills and attitudes (Bloom); adult learning theory (Kolb, Knowles); social learning theory and self‐efficacy (Bandura); and dynamic or staged theories (e.g. Prochaska and DiClemente’s stages of change, Rogers’ stages of adoption). I also summarise some reviews and empirical studies of why clinicians do not always follow evidence‐based guidelines, including work by Michael Cabana, Susan Michie and Richard Grol. I consider empirical evidence from interventions intended to change clinician behaviour – including interventions that prompt, reward or feed back on behaviour; interventions that seek to improve knowledge; interventions that promote the use of heuristics; interventions that promote adult (on‐the‐job) learning; interventions that promote social influence; and interventions aimed at influencing the stages of change. In a final section, I offer some tips for those who seek to change clinicians’ behaviour.
Chapter 4 is about groups and teams. It emphasises the team‐based nature of much clinical care these days, and presents evidence on what makes a group or team effective (and, by implication, what may make one ineffective). I contrast different models of leadership – including hierarchical, democratic and distributed; and I suggest, provocatively perhaps, that there are ‘male’ and ‘female’ leadership styles (although the former can be adopted by women and the latter by men). I emphasise the importance of facilitation, and introduce organisational learning theory (Argyris and Schön). I give some examples of empirical studies of leadership and facilitation. By way of a summary, I offer tips for leading and facilitating your team to implement best evidence.
Chapter 5 considers organisations. Most of the chapter summarises a systematic review my team published in 2004–05 on the diffusion of innovations in healthcare organisations, which has been widely cited and used. I introduce various components of our diffusion of innovations model in turn, including structural features of the organisation, its propensity to take up new knowledge (absorptive capacity) and the presence or not of a receptive context for change (including things like organisational culture and climate); the organisation’s readiness to adopt a particular innovation (including innovation‐system fit); the process of assimilation (i.e. the organisation’s initial efforts to take up the innovation); how the innovation is implemented within the organisation; the external (‘outer’) context, including the behaviour of other organisations in the same sector; and the dynamic linkage between all these elements. The chapter also includes the findings from a later update to our original diffusion of innovations review, covering the routinisation and sustainability of complex service‐level innovations. I suggest some tips for promoting organisational innovativeness.
Chapter 6 looks at citizens – that is, lay people who are not currently patients. This chapter is about the involvement of citizens in the research process: why it is a good idea to involve them (and why it will help the implementation of best practice); how to avoid tokenism; how to ‘co‐create’ research with citizens and communities; and how to communicate the findings of research to a lay audience. I summarise with some tips on how to improve patient and public involvement in your own research.
Chapter 7 is about patients – that is, all of us when we are sick or in need of care, or believe ourselves to be so. I take a hard look at whether the EBHC community is (or ever has been) ‘biased’ against patients – in the sense that it has (with the best of intentions) served a researcher or clinician agenda at the expense of the needs of the sick patient. I look at the evidence on implementing evidence with patients in the clinical encounter (‘shared decision‐making’), drawing heavily on the work of Glyn Elwyn. I also look at the literature on self‐management of chronic illness and consider two framings of such management (‘biomedical’ and ‘lifeworld’). I look at patient involvement in service improvement efforts. I then offer some tips for improving evidence‐based patient care.
Chapter 8 addresses technology. It begins by trying to bust the myth of technological determinism (i.e. by explaining why technologies do not, in and of themselves, cause change). It looks at the expanding industry of medical apps (downloadable pieces of software intended to help the clinician and/or the patient implement evidence in clinical care). Acknowledging that a high proportion of technology projects in healthcare fail, I spend a lot of time discussing the non‐adoption and abandonment of technologies by both patients and clinicians. I finish with some tips for using technologies to implement evidence.
Chapter 9 is about policy. I take issue with the research tradition of identifying barriers and facilitators to the use of research evidence in policy, arguing that we first need to understand what policymaking is. I describe some theories of how policymaking actually happens (I like to define it as the struggle over ideas). I introduce Carol Weiss’s taxonomy of how evidence is used in this struggle – including the instrumental and tactical use of evidence in the rhetorical game of influencing significant stakeholders. Much of this game is about the use of language and ‘social drama’. I introduce the terms ‘value based healthcare’ (Sir Muir Gray) and ‘values based healthcare’ (Mike Kelly and colleagues), and propose that facts and values are not (as is sometimes assumed in the EBHC world) separate and separable. Rather, the ‘facts’ of EBHC are irredeemably value‐laden. I end with some tips for getting closer alignment between research and policy.
In Chapter 10, I talk about networks. Networks are important because knowledge is more social and more fluid than we often assume. Knowledge (both explicit and tacit) is generated, negotiated, refined and circulated in networks of various kinds. Specifically, I consider social networks and social influence (beginning with Coleman et al.’s classic 1964 study of how Pfizer discovered the power of social influence in drug prescribing); professional communities of practice (and the concept of clinical ‘mindlines’ developed by John Gabbay and Andrée Le May); and patient communities (especially online support groups for chronic illness). I give some tips for improving networks and networking.
Chapter 11 is about systems. It introduces the concept of complex adaptive systems (which Paul Plsek and I wrote about in a BMJ series some years ago). Complex systems are unpredictable and emergent, so they do not lend themselves well to rational planning and rigid milestones. Rather, they need an emergent approach in which there is careful collection of, and response to, emerging data. In this chapter, I also cover realist evaluation and review actor‐networks and multi‐stakeholder health research systems. My final tips are for working effectively with complex systems.
With practical applications in mind, Appendix A provides an overview of frameworks, tools and techniques, including driver diagrams, process mapping, stakeholder mapping, plan–do–study–act cycles and many more. Appendix B details many (although not all) of the different psychological theories of behaviour change.
One final introductory point: this book is not a comprehensive overview of every aspect of implementing EBHC (any more than a manual on child‐rearing could possibly cover every challenge a parent might face). Different authors would have included different topics – and left different ones out. The ones I cover in this book are the ones I personally think are important and the ones I feel confident to cover. I offer it as an introduction to a complex, interdisciplinary and rapidly expanding field of inquiry on which there is (thankfully) no firm consensus. If you want to go beyond one person’s perspective on this field, I recommend that you explore beyond the topics covered in this book. A good place to start might be the journal Implementation Science (www.implementationscience.biomedcentral.com), which is freely available online, or two key books: Knowledge Translation in Health Care, edited by Sharon Straus and colleagues [1], and Improving Patient Care: The Implementation of Change in Health Care [9].
1. Straus, S., Tetroe, J., & Graham, I.D. (2013). Knowledge Translation in Health Care: Moving from Evidence to Practice. Chichester: John Wiley & Sons.
2. Schulz, K.F., Altman, D.G., & Moher, D. (2010). CONSORT 2010 statement: updated guidelines for reporting parallel group randomized trials. Annals of Internal Medicine, 152(11), 726–732.
3. Bauer, M.S., Damschroder, L., Hagedorn, H., Smith, J., & Kilbourne, A.M. (2015). An introduction to implementation science for the non‐specialist. BMC Psychology, 3, 32.
4. Greenhalgh, T., Raftery, J., Hanney, S., & Glover, M. (2016). Research impact: a narrative review. BMC Medicine, 14(1), 78.
5. Chalmers, I., Bracken, M.B., Djulbegovic, B., Garattini, S., Grant, J., Gülmezoglu, A.M., et al. (2014). How to increase value and reduce waste when research priorities are set. Lancet, 383(9912), 156–165.
6. Ioannidis, J.P., Greenland, S., Hlatky, M.A., Khoury, M.J., Macleod, M.R., Moher, D., et al. (2014). Increasing value and reducing waste in research design, conduct, and analysis. Lancet, 383(9912), 166–175.
7. Glasziou, P., Altman, D.G., Bossuyt, P., Boutron, I., Clarke, M., Julious, S., et al. (2014). Reducing waste from incomplete or unusable reports of biomedical research. Lancet, 383(9913), 267–276.
8. Ioannidis, J.P. (2016). Why most clinical research is not useful. PLOS Medicine, 13(6), e1002049.
9. Grol, R., Wensing, M., Eccles, M., & Davis, D. (Eds.). (2013). Improving Patient Care: The Implementation of Change in Health Care. Oxford: John Wiley & Sons.
This chapter is about evidence (by which, for the purposes of this book, I mean research evidence) and its translation. I want to start by problematising the very notion of evidence.
Research findings – published in academic journals, synthesised in systematic reviews and distilled into guidelines – even when they appear to be of the highest quality, are almost always incomplete, ambiguous and contested. And they usually address a problem that is one step removed from the one that needs solving. This is partly because research studies contain numerous methodological flaws [1] – but also because, even when studies are not seriously flawed, science is inherently uncertain.
Take, for example, SPRINT (the Systolic Blood Pressure Intervention Trial), which compared tight versus not‐so‐tight blood pressure control in people at high risk of cardiovascular events but without diabetes. Its preliminary results were published in the New England Journal of Medicine in November 2015 [2]. The trial recruited 9361 people with a mean age of 68 years. The research question (paraphrased) was, ‘Should we aim for a systolic blood pressure target of 120 (intervention arm) rather than 140 (control arm) mmHg in people at high risk of heart attack or stroke?’ Over a median period of 39 months, participants in the intervention arm experienced a significant reduction in death and cardiovascular events compared to those in the control arm – so much so that the trial was stopped early by the data‐monitoring committee.
The SPRINT trial ticked most if not all of the boxes for a methodologically robust clinical trial. It was adequately powered (i.e. big enough) to ensure that if there were a clinically significant effect, differences in outcomes between the groups would be statistically significant. Randomisation was double blind, with identical placebos. Appropriate statistical tests (I was told) were used. And so on. Extrapolating from the findings, commentators concluded that tight control of high blood pressure in people of comparable cardiovascular risk to those in the SPRINT trial could save thousands of lives per year in the United States alone. Coverage in the medical and lay press for the study included such terms as ‘landmark’, ‘groundbreaking’, ‘obviously worthwhile’ (i.e. it was obviously worthwhile to treat blood pressure aggressively in this group) and ‘120 is the new 140’.
Yet, despite strong evidence for significant potential impact on mortality, criticisms of SPRINT emerged within hours of its publication. The authors, said critics, had focused entirely on the alleged benefits of tight blood pressure control without taking full account of the potential harms (significant risk of low blood pressure, fainting and deterioration in kidney function). Lifestyle measures (diet, exercise, weight loss) had not been tried before putting participants on medication. The multiple medications needed to achieve the 120 target in most participants would bring all the well‐known dangers of polypharmacy. Since 90 people needed to be treated to prevent one death, 89 in every 90 would be treated unnecessarily. A previous Cochrane review of comparable (although not identical) studies found no benefit in reducing systolic blood pressure below 140 mmHg [3]. Some statisticians questioned the justification for stopping the trial early. And so on.
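The 'one in ninety' criticism rests on simple arithmetic. The number needed to treat (NNT) is the reciprocal of the absolute risk reduction (ARR) between the trial arms; the sketch below uses the round NNT of 90 quoted by the critics rather than SPRINT's exact event rates:

```latex
% NNT is the reciprocal of the absolute risk reduction (ARR),
% i.e. the difference in event rates between control and
% intervention arms over the follow-up period:
\[
\mathrm{NNT} = \frac{1}{\mathrm{ARR}}
\qquad\Longrightarrow\qquad
\mathrm{ARR} = \frac{1}{\mathrm{NNT}} = \frac{1}{90} \approx 1.1\%
\]
% So an NNT of 90 implies that, over the trial period, the tighter
% target prevented one death per 90 patients treated - the other 89
% took the extra medication without gaining that benefit.
```

The point of the calculation is not that an NNT of 90 is 'bad' (many accepted preventive treatments have larger NNTs), but that it makes the trade-off between population benefit and individual burden explicit.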
As our colleagues in the humanities are fond of telling us, there is no text that is self‐interpreting. That maxim applies as much to a randomised controlled trial as to Shakespeare’s plays. Facts, as I will argue in Section 9.3, are value‐laden. The trade‐off between risks and benefits for an individual patient is always a matter of judgement and preference. That, of course, is why clinicians need wisdom as well as evidence.
In a recent paper on knowledge mobilisation in complex systems (see Chapter 11 for more on those), Bev Holmes and colleagues said this:
The very meaning of evidence is now the subject of lively debate. However defined, the emerging consensus is that evidence is not a thing apart, generated in isolation and then passed on to those who will use it. It is clear that evidence alone does not solve problems, and that myriad elements of context – including different professional, organisational and sectoral cultures and the role of power and politics – are critical considerations. [4]
In sum, even scientifically ‘robust’ research evidence has a history and a context. It may be inherently uncertain, incomplete and open to multiple interpretations. It sits better in some contexts than others. And it competes for our attention with many other issues, some of which are extremely important. As we consider the question of how to implement research evidence, let us bear in mind that such evidence is rarely a set of final and incontrovertible ‘facts’ that simply need to be cascaded into practice. Much of the rest of this book picks up on this central theme.
Let us put aside for now the contestability and inherent uncertainty of research evidence, and assume that there is a set of research findings that are relevant to the issue at hand. The remainder of this chapter considers how to maximise the chances of that evidence being accessed, understood and put into practice by clinicians, managers, policymakers and patients. In other words, I will be addressing the science, art and practice of knowledge translation. Be warned – I base much of this section on an article I co‐authored a few years ago entitled ‘Is it Time to Drop the Knowledge Translation Metaphor?’ [5].
‘Knowledge translation’ is a relatively new term that has come to replace the older concept of knowledge transfer, used to depict the one‐way shift of research knowledge from researchers to – well, just about anyone else. Jonathan Lomas usefully depicted a continuum with three essential processes:
diffusion (a passive phenomenon akin to osmosis);
dissemination (involving active efforts from researchers and intermediaries to raise awareness and promote interest in research findings);
implementation (involving proactive efforts to understand the needs of the research user and follow‐through to achieve a change in behaviour) [6].
In today’s terminology, the first two processes might be considered knowledge transfer; the third, knowledge translation.
Knowledge translation was originally defined at a consensus meeting of the World Health Organization (WHO) in 2005:
the synthesis, exchange and application of knowledge by relevant stakeholders to accelerate the benefits of global and local innovation in strengthening health systems and advancing people’s health. [7]
More recently, this definition was refined by the Canadian Institutes of Health Research (CIHR):
a dynamic and iterative process that includes the synthesis, dissemination, exchange and ethically sound application of knowledge to improve health, provide more effective health services and products and strengthen the healthcare system. [8]
At the original WHO consensus meeting in 2005, successful knowledge translation was conceptualised as dependent on ‘supply’ or ‘push factors’ (availability of evidence; appropriate packaging, e.g. in ‘evidence‐based actionable messages’; credible knowledge brokers and opinion leaders) and ‘demand’ or ‘pull factors’ (e.g. local knowledge champions; political support for implementation of particular research evidence; strategic presence on local decision‐making bodies). Barriers to knowledge translation were likewise divided into push factors (e.g. evidence too complex; prohibitive cost of producing, packaging and distributing evidence; poor local access to relevant evidence) and pull factors (e.g. low demand for scientific evidence by policymakers; political and/or financial reasons for not acting on evidence; ‘paradigm differences’ between researchers, policymakers and practitioners) [7].
Many published analyses of the knowledge translation challenge offer similar taxonomies of problems and solutions. Clinicians, it is lamented, only rarely follow evidence‐based guidelines; managers and policymakers fail to draw consistently on robust evidence when designing services or allocating resources (for more on those challenges, see Sections 3.3 and 9.1 respectively). Solutions to these problems are generally framed in terms of a more efficient ‘evidence pathway’, ‘evidence‐based decision support’, ‘evidence‐based policymaking’ and ‘evidence‐based management’ – all of which entail the controlled supply of research evidence that has been vetted, summarised and made accessible to its intended audience and/or the shaping of demand for this evidence through education, facilitation, financial incentives or inscription of decision pathways into technology.
Three assumptions underpin the knowledge translation metaphor. The first is that ‘knowledge’ equates with objective, impersonal research findings – a form of what Aristotle called episteme and later writers have called explicit knowledge (for more on explicit versus tacit knowledge, see Section 10.1). In basic science, research evidence means consistent and reproducible laboratory findings; in health services research, it means (usually) randomised controlled trials or meta‐analyses; in management, it may mean findings from cognitive psychology about how people assimilate information or what motivates them. In all these cases, knowledge is seen as unproblematically separable from the scientists who generate it and the practitioners who may use it (the ‘objectivist’ approach to knowledge).
The second assumption is that it is useful to conceptualise a ‘know–do gap’ between scientific facts and practice (whether in the clinical encounter, in the management of staff or around the policymaking table). This implies that knowledge and practice can be cleanly separated, both empirically and analytically.
The third assumption is that practice consists more or less of a series of rational decisions on which scientific research findings can be brought to bear.
These three assumptions are widely held within the medical field, but as Sietse Wieringa and I argued in our paper [5], and as I argue in more depth in the remainder of this book, they are widely questioned by scholars outside it.
In brief, knowledge is not so easily separated from the context in which it was generated (or the context in which it has been successfully applied); transferring successful innovations or service models from setting A to setting B is notoriously difficult [9]. The ‘know–do gap’ is appealing in its simplicity, but filling gaps may turn out to be a misleading metaphor in the social science of improving practice. Neither clinical practice nor policymaking is, in reality, an exercise in rational decision science.
Despite these caveats, the metaphor of knowledge translation is neither obsolete nor useless – so long as you remember that it is something of an oversimplification. It applies pretty well in some contexts – but falls very flat in others (see Section 10.1 for more detail on such contexts). In the final section of this chapter (Section 2.5), I offer tips for improving your knowledge translation skills.
If you inhabit the world of research, there is nothing more robust, nothing more meaningful and nothing more exciting than a well‐conducted empirical study or systematic review that has been written up in IMRaD (Introduction, Methods, Results and Discussion) format and published in an academic journal. You may or may not be aware that not everyone inhabits your world. But to convey your research knowledge effectively, you must reflect on the assumptions and priorities of your own world and learn about the very different worlds of non‐researchers.
In the world of research, we value precision, accuracy, logical argument, careful measurement and detailed analysis. Getting an answer correct, and perhaps checking it using more than one set of instruments, is viewed as more important than producing an answer by a certain date or keeping a study within an allocated budget. Research involves questioning, challenging and attempting to replicate (or, indeed, refute) the work of other researchers. Its goal is usually to produce generalisable findings, free of the ephemera of any particular context.
Given these priorities, it is not surprising that the research world is oriented to producing lengthy, pedantic, jargon‐ridden papers that address narrowly defined questions and which centre on abstract variables whilst systematically and carefully excluding (or ‘controlling for’) any local, here‐and‐now contingencies.
In the clinical world, excellence is defined differently. Good clinical practice draws on objective science (including examining the patient, selecting and interpreting tests and finding and applying relevant research evidence), but it also involves contextual judgement and attention to the subjective experience of the particular patient being treated. Above all else, the clinician asks an ethical and uniquely personal question: What is the right thing to do, for this patient in this situation, today? This may include asking whether the patient and the healthcare system can afford the tests or treatments on offer, and whether allocating a particular investigation or treatment to the patient might create a shortfall elsewhere in the system.
For all these reasons, and as John Ioannidis recently argued, research is very often not actually useful to clinicians [10]. A clinically useful research study satisfies the criteria listed in Box 2.1.
A clinically useful research study satisfies the following criteria:
It was designed to address a real and important problem (as opposed to focusing on one that has been concocted by disease mongers or intervention zealots, for example).
It adds substantially and systematically to what we already know (i.e. its authors began with a thorough review of the literature, identified a key knowledge gap and set out to fill it).
Its design was pragmatic (i.e. the study participants and context reflect real‐world patients and circumstances rather than a highly selected ‘clean’ patient sample who are given extra attention and fringe benefits).
It measured outcomes that matter to patients (rather than blood test results, blobs on X‐rays or other surrogate endpoints).
The intervention is good value for money (hence, if it ‘works’, healthcare funders will be able to afford it without axing some other crucial service).
The intervention is feasible and acceptable in the real world.
The study data are available for verification and challenge (as opposed to locked up in a ‘commercial‐in‐confidence’ file kept by company lawyers – in which case most people won’t trust the findings).
Source: Adapted from Ioannidis [10].
I live in hope that the next generation of clinical research will pay more attention to usefulness than the previous generation(s). But right now, the mismatch between what researchers produce and what clinicians want and need can be almost comical.
As I will explain in more detail in Chapter 9, the policy world sings in yet another key [11]. Policymaking is about defining and pursuing the right course of action in a particular context, at a particular time, for a particular group of people and with a particular allocation of resources. Policymaking requires decisions to be timely and to fit with the (usually annual) cycle of resource allocation. Getting something on the table for next Monday’s board meeting is often more important than waiting for researchers to finish analysing their data. And the only evidence policymakers currently want is evidence to address the problems they have defined as the current priorities. Finally, whilst many (although not all) researchers are still focused exclusively on randomised controlled trial evidence, because someone told them it was the ‘gold standard’, policymakers actually need a much wider range of evidence, including broad‐ranging ‘scoping reviews’ of key topic areas, qualitative studies, economic evaluations and policy analyses [12].
Whilst we are exploring the worlds of non‐researchers, spare a thought for the patient. As I wrote with some colleagues in a paper recently:
Even when patients are ‘informed’, ‘empowered’, and ‘health‐literate’ (and especially when they are not), they rarely inhabit a world of controlled experiments, abstracted variables, objective measurement of pre‐defined outcomes, average results, or generalizable truths. Rather, they live in the messy, idiosyncratic, and unpredictable world of a particular person in a particular family context (or, for some, in a context of social isolation and/or abandonment by family) … The clinical encounter, whether patient‐initiated (e.g. to present a symptom or concern) or clinician‐initiated (e.g. an invitation for screening or chronic disease surveillance), has cultural and moral significance and occurs against a complex backdrop of personal sense making, information seeking, and lay consultations. [13]
I will pick up on the ‘worlds’ of clinicians, policymakers and patients in later chapters, but for now, let us just recognise that the conventional outputs of the scientific research community are not usually the right size or shape, nor are they produced to the right timescale, to meet the needs of any of these groups [14]. The tips for knowledge translation in Section 2.5 are designed to better align the outputs of academic research with the needs of people who use (or might use) such research.
Diffusion of innovation theory was developed in relation to individual adopters by Everett Rogers [15] and extended by my own team to encompass the organisational and system context of healthcare innovation (see Chapter 5) [16]. Rogers defined an innovation as ‘an idea, practice or object that is perceived as new’. He certainly included research evidence within that definition (he was a social scientist studying the adoption by American farmers of new farming practices developed by university researchers). Summarising his own research and that of others, Rogers identified a number of features (‘attributes’ as perceived by potential adopters) of innovations that tend to promote their adoption in practice. These are listed in Box 2.2.
An innovation is an idea, practice or object that is perceived as new. An innovation is more likely to be adopted if potential adopters consider that it has the following attributes:
Relative advantage:
The innovation is better or more efficient than whatever is currently used.
Low complexity:
The innovation is simple to understand and use (or, if complex, can be broken down into simpler components).
Compatibility:
The innovation and its use align with prevailing values and ways of working.
Observability:
The effects of the innovation are easily observed and measured, and can be unambiguously attributed to it.
Trialability:
The innovation can be tried out on a small scale before people commit.
Potential for reinvention:
Users can customise the innovation to suit personal preferences and/or local circumstances.
Ease of use (for technologies):
The innovation is easy to use and/or comes with adequate technical support.
Source: Adapted from Rogers [15].
Of all the attributes in Box 2.2, the single most important – for research evidence, as for almost all other innovations – is relative advantage. If a clinician does not believe that following the recommendations from the SPRINT trial would be in the best interests of eligible patients, or does not believe that following these recommendations would be practically possible, he or she will almost certainly not even attempt to follow them. Conversely, if the clinician is convinced that SPRINT represents a new and achievable gold standard of care for people at high risk of cardiovascular events, practice is very likely to change.
I cannot stress this point enough. Far too many papers and books about implementing evidence place too much emphasis on minor details and not enough on the central issue on which the adoption decision turns: Is the clinician (or manager, or policymaker, or patient) persuaded by the evidence and does he or she believe that change is possible?
Many of the other attributes in Rogers’ original list (e.g. compatibility, observability and trialability) and in the numerous lists of attributes that have been demonstrated in empirical studies of guideline adoption are, to a large extent, factors that explain whether potential adopters are likely to be persuaded by the evidence and whether they believe the recommendations will be workable in practice.
Incidentally, if you are hungry for more attributes, try Richard Shiffman’s 10‐point list: decidability, executability, general characteristics, presentation and formatting, measurable outcomes, apparent validity, flexibility, effect on process of care, novelty and computability [17], or even Anna Gagliardi’s 22‐attribute list for guideline implementability, grouped under adaptability, usability, validity, applicability, communicability, accommodation, implementation and evaluation [18]. Personally, I find relative advantage, low complexity and trialability cover most bases when I am asking questions about innovations.
