Practitioner's Guide to Using Research for Evidence-Informed Practice (E-Book)

Allen Rubin

Description

The latest edition of an essential text to help students and practitioners distinguish between research studies that should and should not influence practice decisions 

Now in its third edition, Practitioner's Guide to Using Research for Evidence-Informed Practice delivers an essential and practical guide to integrating research appraisal into evidence-informed practice. The book walks you through the skills, knowledge, and strategies you can use to identify significant strengths and limitations in research. 

The ability to appraise the veracity and validity of research will improve your service provision and practice decisions. By teaching you to be a critical consumer of modern research, this book helps you avoid treatments based on fatally flawed research and methodologies. 

Practitioner's Guide to Using Research for Evidence-Informed Practice, Third Edition offers: 

  • An extensive introduction to evidence-informed practice, including explorations of unethical research and discussions of social justice in the context of evidence-informed practice. 
  • Explanations of how to appraise studies on intervention efficacy, including the criteria for inferring effectiveness and critically examining experiments. 
  • Discussions of how to critically appraise studies for alternative evidence-informed practice questions, including nonexperimental quantitative studies and qualitative studies. 
A comprehensive and authoritative blueprint for critically assessing research studies, interventions, programs, policies, and assessment tools, Practitioner's Guide to Using Research for Evidence-Informed Practice belongs on the bookshelves of students and practitioners of the social sciences. 

Page count: 782

Publication year: 2022




Table of Contents

COVER

TITLE PAGE

COPYRIGHT

PREFACE

Organization and Special Features

Significant Additions to This Edition

ACKNOWLEDGEMENTS

ABOUT THE AUTHORS

ABOUT THE COMPANION WEBSITE

PART 1: OVERVIEW OF EVIDENCE-INFORMED PRACTICE

1 Introduction to Evidence-Informed Practice (EIP)

1.1 Emergence of EIP

1.2 Defining EIP

1.3 Types of EIP Questions

1.4 EIP Practice Regarding Policy and Social Justice

1.5 EIP and Black Lives Matter

1.6 Developing an EIP Practice Process Outlook

1.7 EIP as a Client-Centered, Compassionate Means, Not an End unto Itself

1.8 EIP and Professional Ethics

Key Chapter Concepts

Additional Reading

2 Steps in the EIP Process

2.1 Step 1: Question Formulation

2.2 Step 2: Evidence Search

2.3 Step 3: Critically Appraising Studies and Reviews

2.4 Step 4: Selecting and Implementing the Intervention

2.5 Step 5: Monitor Client Progress

2.6 Feasibility Constraints

2.7 But What about the Dodo Bird Verdict?

Key Chapter Concepts

Additional Reading

3 Research Hierarchies: Which Types of Research Are Best for Which Questions?

3.1 More than One Type of Hierarchy for More than One Type of EIP Question

3.2 Qualitative and Quantitative Studies

3.3 Which Types of Research Designs Apply to Which Types of EIP Questions?

Key Chapter Concepts

Additional Reading

PART 2: CRITICALLY APPRAISING STUDIES FOR EIP QUESTIONS ABOUT INTERVENTION EFFECTIVENESS

4 Criteria for Inferring Effectiveness: How Do We Know What Works?

4.1 Internal Validity

4.2 Measurement Issues

4.3 Statistical Chance

4.4 External Validity

4.5 Synopses of Fictitious Research Studies

Key Chapter Concepts

Exercise for Critically Appraising Published Articles

Additional Reading

5 Critically Appraising Experiments

5.1 Classic Pretest-Posttest Control Group Design

5.2 Posttest-Only Control Group Design

5.3 Solomon Four-Group Design

5.4 Alternative Treatment Designs

5.5 Dismantling Designs

5.6 Placebo Control Group Designs

5.7 Experimental Demand and Experimenter Expectancies

5.8 Obtrusive Versus Unobtrusive Observation

5.9 Compensatory Equalization and Compensatory Rivalry

5.10 Resentful Demoralization

5.11 Treatment Diffusion

5.12 Treatment Fidelity

5.13 Practitioner Equivalence

5.14 Differential Attrition

5.15 Synopses of Research Studies

Key Chapter Concepts

Exercise for Critically Appraising Published Articles

Additional Reading

6 Critically Appraising Quasi-Experiments: Nonequivalent Comparison Groups Designs

6.1 Nonequivalent Comparison Groups Designs

6.2 Additional Logical Arrangements to Control for Potential Selectivity Biases

6.3 Statistical Controls for Potential Selectivity Biases

6.4 Creating Matched Comparison Groups Using Propensity Score Matching

6.5 Pilot Studies

6.6 Synopses of Research Studies

Key Chapter Concepts

Exercise for Critically Appraising Published Articles

Additional Reading

7 Critically Appraising Quasi-Experiments: Time-Series Designs and Single-Case Designs

7.1 Simple Time-Series Designs

7.2 Multiple Time-Series Designs

7.3 Single-Case Designs

7.4 Synopses of Research Studies

Key Chapter Concepts

Exercise for Critically Appraising Published Articles

Additional Reading

8 Critically Appraising Systematic Reviews and Meta-Analyses

8.1 Advantages of Systematic Reviews and Meta-Analyses

8.2 Risks in Relying Exclusively on Systematic Reviews and Meta-Analyses

8.3 Where to Start

8.4 What to Look for When Critically Appraising Systematic Reviews

8.5 What Distinguishes a Systematic Review from Other Types of Reviews?

8.6 What to Look for When Critically Appraising Meta-Analyses

8.7 Synopses of Research Studies

Key Chapter Concepts

Exercise for Critically Appraising Published Articles

Additional Reading

PART 3: CRITICALLY APPRAISING STUDIES FOR ALTERNATIVE EIP QUESTIONS

9 Critically Appraising Nonexperimental Quantitative Studies

9.1 Surveys

9.2 Cross-Sectional and Longitudinal Studies

9.3 Case-Control Studies

9.4 Synopses of Research Studies

Key Chapter Concepts

Exercise for Critically Appraising Published Articles

Additional Reading

10 Critically Appraising Qualitative Studies

10.1 Qualitative Observation

10.2 Qualitative Interviewing

10.3 Other Qualitative Methodologies

10.4 Qualitative Sampling

10.5 Grounded Theory

10.6 Alternatives to Grounded Theory

10.7 Frameworks for Appraising Qualitative Studies

10.8 Mixed Model and Mixed Methods Studies

10.9 Synopses of Research Studies

Key Chapter Concepts

Exercise for Critically Appraising Published Articles

Additional Reading

PART 4: ASSESSMENT AND MONITORING IN EVIDENCE-INFORMED PRACTICE

11 Critically Appraising, Selecting, and Constructing Assessment Instruments

11.1 Reliability

11.2 Validity

11.3 Feasibility

11.4 Sample Characteristics

11.5 Locating Assessment Instruments

11.6 Constructing Assessment Instruments

11.7 Synopses of Research Studies

Key Chapter Concepts

Exercise for Critically Appraising Published Articles

Additional Reading

12 Monitoring Client Progress

12.1 A Practitioner-Friendly Single-Case Design

12.2 Using Within-Group Effect-Size Benchmarks

Key Chapter Concepts

Additional Reading

PART 5: ADDITIONAL ASPECTS OF EVIDENCE-INFORMED PRACTICE

13 Appraising and Conducting Data Analyses in EIP

13.1 Introduction

13.2 Ruling Out Statistical Chance

13.3 What Else Do You Need to Know?

13.4 The .05 Cutoff Point Is Not Sacred!

13.5 What Else Do You Need to Know?

13.6 Calculating Within-Group Effect Sizes and Using Benchmarks

13.7 Conclusion

Key Chapter Concepts

Additional Reading

14 Critically Appraising Social Justice Research Studies

14.1 Introduction

14.2 Evidence-Informed Social Action

14.3 What Type of Evidence?

14.4 Participatory Action Research (PAR)

14.5 Illustrations of Other Types of Social Justice Research

14.6 Conclusion

Key Chapter Concepts

Additional Reading

Note

GLOSSARY

REFERENCES

INDEX

END USER LICENSE AGREEMENT

List of Tables

Chapter 2

Table 2.1 The PICO Framework

Table 2.2 Web Search Example Using the Term EMDR at http://Google.com

Table 2.3 Internet Sites for Reviews and Practice Guidelines

Chapter 3

Table 3.1 Evidentiary Hierarchy for EIP Questions about Effectiveness

Table 3.2 Matrix of Research Designs by Research Questions

Chapter 6

Table 6.1 Guide for Appraising Studies Using a Nonequivalent Comparison Gro...

Table 6.2 Outcome Data for a Fictitious Evaluation of Two Dropout Preventio...

Table 6.3 Outcome Data for a Fictitious Evaluation of Two Dropout Preventio...

Table 6.4 Mean Outcome Scores on a Family Risk Scale for a Fictitious Evalu...

Table 6.5 Mean Pretest and Posttest Scores on a Family Risk Scale for a Fic...

Table 6.6 Participants' Mean Pretest and Posttest Scores before and after R...

Chapter 7

Table 7.1 Four Alternative Data Patterns of Incidents of Police Brutality i...

Table 7.2 Data Points for a Comparison Group over the Same Baseline and Int...

Table 7.3 Questions for Appraising Studies Using Time-Series Designs

Chapter 8

Table 8.1 Questions to Ask When Critically Appraising Systematic Reviews an...

Table 8.2 Interpretive Guidelines for d-Indexes and Correlations

Table 8.3 Types of Questions that Systematic Reviews and Meta-Analyses Seek...

Chapter 9

Table 9.1 Questions to Ask When Critically Appraising Surveys

Chapter 10

Table 10.1 Criteria Emphasized by Alternative Frameworks for Appraising Qua...

Chapter 11

Table 11.1 Internal Consistency Reliability

Table 11.2 Test–Retest Reliability

Table 11.3 Circle the Number that Best Describes the Child's Observed Behav...

Chapter 12

Table 12.1 An Individualized Daily Rating Scale for Depressed Mood

Table 12.2 Frequency Recording Template

Table 12.3 Duration Recording Template

Chapter 13

Table 13.1 Possible Random Assignment Outcomes of Four Research Participant...

Table 13.2 Possible Random Assignment Outcomes of Six Research Participants...

Table 13.3 Some Commonly Used Tests of Statistical Significance

Table 13.4 Within-Group Aggregate Effect-Size of Trauma Symptoms by Treatme...

List of Illustrations

Chapter 1

FIGURE 1.1 Original EIP model.

FIGURE 1.2 Newer EIP model.

FIGURE 1.3 The transdisciplinary model of evidence-informed practice. ...

Chapter 2

FIGURE 2.1 Screen for advanced search option in Google Scholar.

Chapter 5

FIGURE 5.1 Classic pretest-posttest control group design.

FIGURE 5.2 Three sets of results in an imaginary experiment using a Solomon f...

Chapter 7

FIGURE 7.1 A visually significant data pattern for an intervention seeking to...

FIGURE 7.2 A data pattern that lacks visual significance for an intervention ...

FIGURE 7.3 A visually significant data pattern for an intervention ...

FIGURE 7.4 An inconclusive data pattern for an intervention seeking to reduce...

FIGURE 7.5 A visually significant multiple baseline data pattern for an inter...

FIGURE 7.6 An inconclusive multiple baseline data pattern for an intervention...

FIGURE 7.7 A multiple component design data pattern suggesting that the eye m...

FIGURE 7.8 A multiple component design data pattern suggesting that the eye m...

FIGURE 7.9 Multiple baseline results.

FIGURE 7.10 Re-arrest rates for study prison and aggregated for all other med...

Chapter 9

FIGURE 9.1 Circles depicting the results of a fictitious multiple regression ...

FIGURE 9.2 Circles depicting the results of a fictitious multiple regression ...

Chapter 12

FIGURE 12.1 Various possible results using the B+ design.

Guide

Cover Page

Title Page

Copyright

Table of Contents

Preface

Acknowledgements

About the Authors

About the Companion Website

Begin Reading

Glossary

References

Index

WILEY END USER LICENSE AGREEMENT


PRACTITIONER'S GUIDE TO USING RESEARCH FOR EVIDENCE-INFORMED PRACTICE

THIRD EDITION

Allen Rubin

and

Jennifer Bellamy

Copyright © 2022 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey.

Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data

Names: Rubin, Allen, author. | Bellamy, Jennifer, author.

Title: Practitioner’s guide to using research for evidence-informed practice / Allen Rubin, Jennifer Bellamy.

Other titles: Practitioner’s guide to using research for evidence-based practice

Description: Third edition. | Hoboken, NJ : Wiley, 2022. | Preceded by Practitioner’s guide to using research for evidence-based practice / Allen Rubin and Jennifer Bellamy. 2nd ed. c2012. | Includes bibliographical references and index.

Identifiers: LCCN 2021041536 (print) | LCCN 2021041537 (ebook) | ISBN 9781119858560 (paperback) | ISBN 9781119858577 (adobe pdf) | ISBN 9781119858584 (epub)

Subjects: MESH: Evidence-Based Practice | Social Work | Evaluation Studies as Topic | Outcome and Process Assessment, Health Care

Classification: LCC RC337 (print) | LCC RC337 (ebook) | NLM WB 102.5 | DDC 616.89/140072—dc23 LC record available at https://lccn.loc.gov/2021041536

LC ebook record available at https://lccn.loc.gov/2021041537

Cover Design: Wiley

Cover Image: © Dilen/Shutterstock

PREFACE

Approximately a decade has elapsed since the second edition of this book was published. During that time there have been some important developments pertaining to evidence-informed practice (EIP), and those developments spurred us to write this third edition. One such development was the growing preference for the term evidence-informed practice (EIP) over evidence-based practice (EBP). We changed our title to conform to that shift, and in Chapter 1 we explain why the newer term is preferred. The development of effective vaccines to fight the COVID-19 pandemic of 2020–2021 provided an example, cited at the beginning of this book, that we hope will help readers shed any ambivalence they may have had about the relevance of research to helping people.

Another significant change is the growing commitment among social work and other human service practitioners to address social justice issues. Racial injustice, in particular, has become a key focus in our missions, especially in the aftermath of the recent police murders of innocent Black people. Consequently, we added a chapter that focuses exclusively on social justice and how to take an EIP approach to pursuing it. In fact, we have added attention to that issue in our first chapter, which now includes a section on Black Lives Matter and how President Barack Obama took an EIP approach when formulating his policy position regarding how to effectively reduce incidents of police misconduct and violence.

Yet another recent development has been the recognition of how rarely practitioners are able to evaluate their practice with designs that meet all of the criteria for causal inferences. Consequently, we have added much more attention to the degree of certainty needed when making practice decisions, and to how evidence that supports the plausibility of causality can inform practice and policy decisions even when some, but not all, of the criteria for inferring causality are met. In that connection, we have added content on the use of within-group effect-size benchmarks, which can be used to evaluate how adequately practitioners or agencies are implementing evidence-supported interventions.

Organization and Special Features

Part I contains three chapters that provide an overview of evidence-informed practice (EIP) and a backdrop for the rest of the book.

Chapter 1 introduces readers to the meaning of EIP, its history, types of EIP questions, and developing an EIP outlook. New material includes a section on research ethics and a section on EIP regarding social justice and Black Lives Matter.

Chapter 2 covers the steps in the EIP process, including new material on strategies for overcoming feasibility obstacles to engaging in the EIP process.

Chapter 3 delves into research hierarchies and philosophical objections to the traditional scientific method, including a critical look at how some recent politicians have preferred their own “alternative facts” to scientific facts that they did not like.

Part II contains five chapters on critically appraising studies that evaluate the effectiveness of interventions.

Chapter 4 covers criteria for making causal inferences, including material on internal validity, measurement issues, statistical chance, and external validity. Major new additions to this chapter include sections on inferring the plausibility of causality and on the degree of certainty needed in making EIP decisions when ideal experimental outcome studies are not available or not feasible. To illustrate that content, we have added two more study synopses to Chapter 4. Another significant change to this chapter was the removal of several pages on statistical significance, which we moved to a new, penultimate chapter on data analysis. We felt that the removed pages delved too far into the weeds of statistical significance for this early in the book and thus might overwhelm readers.

Chapter 5 helps readers learn how to critically appraise experiments. We were happy with this chapter and made only some minor tweaks to it.

Chapter 6, on critically appraising quasi-experiments, also had few changes, the main one being more attention to the potential value of pilot studies in supporting the plausibility of causality relative to the degree of certainty needed in making practice decisions.

Chapter 7, on critically appraising time-series designs and single-case designs, has been tweaked in various ways that we think will enhance its value to readers. For example, we added several examples of time-series studies that evaluate the impact of police reform policies aimed at reducing incidents of police violence.

Chapter 8 examines how to critically appraise systematic reviews and meta-analyses. The main changes in this chapter include increased coverage of odds ratios and risk ratios.

Part III contains two chapters on critically appraising studies for alternative EIP questions.

Chapter 9 does so regarding nonexperimental quantitative studies, including surveys, longitudinal studies, and case-control studies. A new addition to this chapter discusses how some survey results can have value even when based on nonprobability samples.

Chapter 10 describes qualitative research and frameworks for critically appraising qualitative studies. Additional details on qualitative methods, as well as alternative frameworks to grounded theory, have been added to this chapter.

Part IV contains two chapters on assessment and monitoring in EIP.

Chapter 11 covers critically appraising, selecting, and constructing assessment instruments. In our previous edition, this chapter looked only at appraising and selecting instruments. New in this edition is a section on constructing instruments.

Chapter 12 covers monitoring client progress. New in this edition is more attention to factors that impair the ability of practitioners in service-oriented settings to implement evidence-supported interventions with adequate fidelity and a new section on the use of within-group effect size benchmarks to evaluate that adequacy.

Part V contains two new chapters on additional aspects of EIP not fully covered in the previous sections.

Chapter 13 explains how to appraise and conduct data analysis in the EIP process. Some of the material in this chapter was moved from the previous edition's Chapter 4. Other material appeared in an appendix on statistics in the previous edition. A major new section, which did not appear in our previous edition, shows how to calculate within-group effect sizes and compare them to benchmarks derived from meta-analyses of randomized clinical trials (RCTs) that can show practitioners and agencies whether their treatment recipients appear to be benefiting from treatment approximately as much as recipients in the RCTs.

Chapter 14 examines critically appraising social justice research studies. This is a new chapter, one that emphasizes the importance of being informed by research evidence in making decisions about efforts to promote social justice rather than being guided solely by noble intentions, emotions, or well-meaning ideologies. In addition, much of the content in this chapter examines participatory action research.

Significant Additions to This Edition

Among the changes that have been made in various chapters throughout this edition, the following are the most significant:

  • Connecting EIP to social justice efforts, the Black Lives Matter movement, and reducing incidents of police misconduct and violence. 
  • Replacing evidence-based terminology with evidence-informed terminology. 
  • Increased attention to the value of limited studies that do not permit conclusive causal inferences, but that do provide enough support for the plausibility of causality when practice decisions do not require the degree of certainty associated with eliminating all threats to internal validity. 
  • Expanded coverage of qualitative methods. 
  • Constructing measurement instruments. 
  • A new chapter on data analysis. 
  • A new chapter on social justice. 
  • Calculating within-group effect sizes and comparing them to benchmarks to assess whether practitioners and agencies are implementing, with adequate fidelity, interventions that have strong research support in RCTs. 

ACKNOWLEDGEMENTS

In addition to the various individuals whom we have acknowledged in previous editions of this book, we thank the following people at Wiley who have been particularly helpful in the development of this edition: Monica Rogers (Associate Managing Editor) and Darren Lalonde (Acquisitions Editor).

ABOUT THE AUTHORS

Allen Rubin, Ph.D., holds the Kantambu Latting College Professorship for Leadership and Change at the University of Houston Graduate College of Social Work. He has been teaching courses on research and practice evaluation for more than 42 years, including 34 years at the University of Texas at Austin. He is internationally known for his many published books and articles on research methods and evidence-informed practice, and has received various awards for his distinguished career achievements. He was a founding member of and is a Fellow of the Society for Social Work and Research. He is also a Fellow of the American Academy of Social Work and Social Welfare.

Jennifer Bellamy, Ph.D., is Associate Dean for Research and Faculty Development and Professor at the University of Denver Graduate School of Social Work. Her research focuses on the engagement of fathers in child and family services, child welfare, implementation science, and evidence-based practice. She works with state, federal, and community partners to develop, test, and implement strategies and interventions to better serve fathers in child welfare, home visiting, and community programs. She recently served as Board Member-at-Large for the Society for Social Work and Research (SSWR) and is the Evidence-Based Practice Track Co-Chair for the Council on Social Work Education (CSWE).

ABOUT THE COMPANION WEBSITE

This book is accompanied by a companion website.

www.wiley.com/go/rubin/researchguide3e

This website includes:

Instructor's Manual

Test Banks

PowerPoint Slides

Sample Syllabi

PART 1: OVERVIEW OF EVIDENCE-INFORMED PRACTICE