Improvement Science in Evaluation: Methods and Uses
Description

While improvement science has experienced a surge of interest over the past 30 years, applications of it are rare in the evaluation literature. This issue promotes the cross-fertilization of ideas, techniques, and tools between evaluation and improvement science. There are at least four areas where this cross-fertilization is particularly relevant: learning from error, examining variation, appreciating context, and focusing on systems change. This volume considers:

* the conceptual similarities and distinctions between improvement science and evaluation;
* the intellectual foundations, methods, and tools that collectively comprise improvement science; and
* case chapters that offer an inspiring review of state-of-the-art improvement science applications.

Cutting across all of these applications is a shared grounding in systems thinking, a determination to capture and better understand variation and contextual complexity, and a sustained commitment to generative learning about projects and programs--all issues of great concern to evaluators. The issue offers producers and users of evaluations the potential benefits of a closer engagement with improvement science. This is the 153rd issue in the New Directions for Evaluation series from Jossey-Bass. It is an official publication of the American Evaluation Association.




Number 153 Spring 2017 New Directions for Evaluation

Paul R. Brandon Editor-in-Chief

Improvement Science in Evaluation: Methods and Uses

Christina A. Christie Moira Inkelas Sebastian Lemire

Editors

Improvement Science in Evaluation: Methods and Uses

Christina A. Christie, Moira Inkelas, and Sebastian Lemire (eds.)

New Directions for Evaluation, no. 153

Editor‐in‐Chief: Paul R. Brandon

New Directions for Evaluation (ISSN 1097‐6736; Online ISSN: 1534‐875X) is published quarterly on behalf of the American Evaluation Association by Wiley Subscription Services, Inc., a Wiley Company, 111 River St., Hoboken, NJ 07030‐5774 USA.

Postmaster: Send all address changes to New Directions for Evaluation, John Wiley & Sons Inc., C/O The Sheridan Press, PO Box 465, Hanover, PA 17331 USA.

Copyright and Copying (in any format)

Copyright © 2017 Wiley Periodicals, Inc., a Wiley Company, and the American Evaluation Association. All rights reserved. No part of this publication may be reproduced, stored or transmitted in any form or by any means without the prior permission in writing from the copyright holder. Authorization to copy items for internal and personal use is granted by the copyright holder for libraries and other users registered with their local Reproduction Rights Organisation (RRO), e.g. Copyright Clearance Center (CCC), 222 Rosewood Drive, Danvers, MA 01923, USA (www.copyright.com), provided the appropriate fee is paid directly to the RRO. This consent does not extend to other kinds of copying such as copying for general distribution, for advertising or promotional purposes, for republication, for creating new collective works or for resale. Permissions for such reuse can be obtained using the RightsLink “Request Permissions” link on Wiley Online Library. Special requests should be addressed to: permissions@wiley.com

Information for subscribers

New Directions for Evaluation is published in 4 issues per year. Institutional subscription prices for 2017 are:

Print & Online: US$484 (US), US$538 (Canada & Mexico), US$584 (Rest of World), €381 (Europe), £304 (UK). Prices are exclusive of tax. Asia‐Pacific GST, Canadian GST/HST and European VAT will be applied at the appropriate rates. For more information on current tax rates, please go to www.wileyonlinelibrary.com/tax-vat. The price includes online access to the current and all online back‐files to January 1st 2013, where available. For other pricing options, including access information and terms and conditions, please visit www.wileyonlinelibrary.com/access.

Delivery Terms and Legal Title

Where the subscription price includes print issues and delivery is to the recipient's address, delivery terms are Delivered at Place (DAP); the recipient is responsible for paying any import duty or taxes. Title to all issues transfers FOB our shipping point, freight prepaid. We will endeavour to fulfil claims for missing or damaged copies within six months of publication, within our reasonable discretion and subject to availability.

Back issues: Single issues from current and recent volumes are available at the current single issue price from cs‐journals@wiley.com.

Disclaimer

The Publisher, the American Evaluation Association and Editors cannot be held responsible for errors or any consequences arising from the use of information contained in this journal; the views and opinions expressed do not necessarily reflect those of the Publisher, the American Evaluation Association and Editors, neither does the publication of advertisements constitute any endorsement by the Publisher, the American Evaluation Association and Editors of the products advertised.

Publisher: New Directions for Evaluation is published by Wiley Periodicals, Inc., 350 Main St., Malden, MA 02148‐5020.

Journal Customer Services: For ordering information, claims and any enquiry concerning your journal subscription please go to www.wileycustomerhelp.com/ask or contact your nearest office.

Americas: Email: cs‐journals@wiley.com; Tel: +1 781 388 8598 or +1 800 835 6770 (toll free in the USA & Canada).

Europe, Middle East and Africa: Email: cs‐journals@wiley.com; Tel: +44 (0) 1865 778315.

Asia Pacific: Email: cs‐journals@wiley.com; Tel: +65 6511 8000.

Japan: For Japanese speaking support, Email: cs‐japan@wiley.com.

Visit our Online Customer Help available in 7 languages at www.wileycustomerhelp.com/ask

Production Editor: Meghanjali Singh (email: [email protected]).

Wiley's Corporate Citizenship initiative seeks to address the environmental, social, economic, and ethical challenges faced in our business and which are important to our diverse stakeholder groups. Since launching the initiative, we have focused on sharing our content with those in need, enhancing community philanthropy, reducing our carbon impact, creating global guidelines and best practices for paper use, establishing a vendor code of ethics, and engaging our colleagues and other stakeholders in our efforts. Follow our progress at www.wiley.com/go/citizenship

View this journal online at wileyonlinelibrary.com/journal/ev

Wiley is a founding member of the UN‐backed HINARI, AGORA, and OARE initiatives. They are now collectively known as Research4Life, making online scientific content available free or at nominal cost to researchers in developing countries. Please visit Wiley's Content Access ‐ Corporate Citizenship site: http://www.wiley.com/WileyCDA/Section/id-390082.html

Address for Editorial Correspondence: Editor‐in‐chief, Paul R. Brandon, New Directions for Evaluation, Email: [email protected]

Abstracting and Indexing Services

The Journal is indexed by Academic Search Alumni Edition (EBSCO Publishing); Education Research Complete (EBSCO Publishing); Higher Education Abstracts (Claremont Graduate University); SCOPUS (Elsevier); Social Services Abstracts (ProQuest); Sociological Abstracts (ProQuest); and Worldwide Political Science Abstracts (ProQuest).

Cover design: Wiley

Cover Images: © Lava 4 images | Shutterstock

For submission instructions, subscription and all other information visit:

wileyonlinelibrary.com/journal/ev

New Directions for Evaluation

Sponsored by the American Evaluation Association

EDITOR-IN-CHIEF

Paul R. Brandon

University of Hawai‘i at Mānoa

Associate Editors

J. Bradley Cousins

University of Ottawa

Lois-ellin Datta

Datta Analysis

Editorial Advisory Board

Anna Ah Sam

University of Hawai‘i at Mānoa

Michael Bamberger

Independent consultant

Gail Barrington

Barrington Research Group, Inc.

Fred Carden

International Development Research Centre

Thomas Chapel

Centers for Disease Control and Prevention

Leslie Cooksy

Sierra Health Foundation

Fiona Cram

Katoa Ltd.

Peter Dahler-Larsen

University of Southern Denmark

E. Jane Davidson

Real Evaluation Ltd.

Stewart Donaldson

Claremont Graduate University

Jody Fitzpatrick

University of Colorado Denver

Deborah M. Fournier

Boston University

Jennifer Greene

University of Illinois at Urbana-Champaign

Melvin Hall

Northern Arizona University

George M. Harrison

University of Hawai‘i at Mānoa

Gary Henry

Vanderbilt University

Rodney Hopson

George Mason University

George Julnes

University of Baltimore

Jean King

University of Minnesota

Saville Kushner

University of Auckland

Robert Lahey

REL Solutions Inc.

Miri Levin-Rozalis

Ben Gurion University of the Negev and Davidson Institute at the Weizmann Institute of Science

Laura Leviton

Robert Wood Johnson Foundation

Melvin Mark

Pennsylvania State University

Sandra Mathison

University of British Columbia

Robin Lin Miller

Michigan State University

Michael Morris

University of New Haven

Debra Rog

Westat and the Rockville Institute

Patricia Rogers

Royal Melbourne Institute of Technology

Mary Ann Scheirer

Scheirer Consulting

Robert Schwarz

University of Toronto

Lyn Shulha

Queen's University

Nick L. Smith

Syracuse University

Sanjeev Sridharan

University of Toronto

Monica Stitt-Bergh

University of Hawai‘i at Mānoa

Editorial Policy and Procedures

New Directions for Evaluation, a quarterly sourcebook, is an official publication of the American Evaluation Association. The journal publishes works on all aspects of evaluation, with an emphasis on presenting timely and thoughtful reflections on leading‐edge issues of evaluation theory, practice, methods, the profession, and the organizational, cultural, and societal context within which evaluation occurs. Each issue of the journal is devoted to a single topic, with contributions solicited, organized, reviewed, and edited by one or more guest editors.

The editor‐in‐chief is seeking proposals for journal issues from around the globe about topics new to the journal (although topics discussed in the past can be revisited). A diversity of perspectives and creative bridges between evaluation and other disciplines, as well as chapters reporting original empirical research on evaluation, are encouraged. A wide range of topics and substantive domains are appropriate for publication, including evaluative endeavors other than program evaluation; however, the proposed topic must be of interest to a broad evaluation audience.

Journal issues may take any of several forms. Typically they are presented as a series of related chapters, but they might also be presented as a debate; an account, with critique and commentary, of an exemplary evaluation; a feature‐length article followed by brief critical commentaries; or perhaps another form proposed by guest editors.

Submitted proposals must follow the format found via the Association's website at http://www.eval.org/Publications/NDE.asp. Proposals are sent to members of the journal's Editorial Advisory Board and to relevant substantive experts for single‐blind peer review. The process may result in acceptance, a recommendation to revise and resubmit, or rejection. The journal does not consider or publish unsolicited single manuscripts.

Before submitting proposals, all parties are asked to contact the editor‐in‐chief, who is committed to working constructively with potential guest editors to help them develop acceptable proposals. For additional information about the journal, see the “Statement of the Editor‐in‐Chief” in the Spring 2013 issue (No. 137).

Paul R. Brandon, Editor‐in‐Chief, University of Hawai‘i at Mānoa, College of Education, 1776 University Avenue, Castle Memorial Hall, Rm. 118, Honolulu, HI 96822-2463; e‐mail: [email protected]

CONTENTS

Editors’ Notes

References

1: Understanding the Similarities and Distinctions Between Improvement Science and Evaluation

Theoretical Foundations of Evaluation and Improvement Science

The Use Dimensions of Evaluation and Improvement Science

The Valuing Dimensions of Evaluation and Improvement Science

The Methods Dimensions of Evaluation and Improvement Science

The Way Forward

References

2: The Methods and Tools of Improvement Science

Toward a Definition of Improvement Science

The Model for Improvement—An Operational Framework for Practice

The Control Chart—A Central Tool in the Improvement Science Toolbox

References

3: Timely and Appropriate Healthcare Access for Newborns: A Neighborhood-Based, Improvement Science Approach

The Model for Improvement

Progress, Challenges, and Future Directions

References

4: Improvement for a Community Population: The Magnolia Community Initiative

Introduction of Improvement Science

Conclusion

References

5: Breaking the “Adopt, Attack, Abandon” Cycle: A Case for Improvement Science in K–12 Education

The Case

Methodology

Findings

Implications for Practice and Evaluation

References

6: Online Learning as a Wind Tunnel for Improving Teaching

Improving Teaching: Why It's Hard

Why Labs Settings and Randomized Controlled Trials Aren't the Answer

Improving Systems

Online Learning as a Wind Tunnel

Concluding Thoughts

Notes

References

7: Value and Opportunity for Improvement Science in Evaluation

Value and Opportunity for Improvement Science in Evaluation

Professional Development in Improvement Science

References

Order Form

Index

End User License Agreement

List of Tables

Chapter 3

Table 3.1

List of Illustrations

Chapter 2

Figure 2.1

The Plan–Do–Study–Act Cycle

Figure 2.2

The Interplay of Inductive and Deductive Logic

Figure 2.3

Sequential PDSA Cycle

Figure 2.4

Control Chart for Coverage Scores (by Month)

Chapter 3

Figure 3.1

Monthly Percentage of Avondale Newborns at CCHMC Clinics Attending First Primary Care Visit by 9 Days of Life

Figure 3.2

Monthly Rate of Emergency Department Visits for Nonurgent Conditions Among Avondale Infants < 6 Months Old (Per 100 Infants < 6 Months Old in Avondale Population)

Figure 3.3

Monthly Number of Emergency Department Visits for Nonurgent Conditions by CCHMC Clinic Patients ≤ 60 Days Old from All Zip Codes

Figure 3.4

Simplified Key Driver Diagram for Reduction of Emergency Department Visits for Nonurgent Conditions Among CCHMC Clinic Patients in the First 6 Months of Life

Figure 3.5

Pareto Chart of Reasons for Emergency Department Visits for Nonurgent Conditions Among CCHMC Clinic Patients ≤ 60 Days Old from January 2011 to December 2013

Chapter 4

Figure 4.1

Magnolia Community Initiative Driver Diagram

Figure 4.2

Magnolia Community Initiative Population Dashboard

Figure 4.3

Improvement Example

Chapter 6

Figure 6.1

Overview of the PDSA Cycle


Editors’ Notes

Evaluation is about change. As Carol Weiss reminds us, "Evaluation is a practical craft, designed to help make programs work better and to allocate resources to better programs. Evaluators expect people in authority to use evaluation results to take wise action. They take satisfaction from the chance to contribute to social betterment" (1998, p. 5). Speaking to this central purpose, the evaluation literature has explored a broad range of topics related to utilization, including the conceptualization of evaluation use (Leviton & Hughes, 1981), concepts of research use (Cousins & Shulha, 2006; Hofstetter & Alkin, 2003), and empirical research on utilization (Cousins & Leithwood, 1986; Cousins & Shulha, 2006; Hofstetter & Alkin, 2003). At root, all of these contributions are about how evaluation brings about change, how evaluation contributes to social betterment.

Sharing this overarching purpose, improvement science is an approach to increasing knowledge that leads to an improvement of a product, process, or system (Moen, Nolan, & Provost, 2012). Improvement science has experienced a surge of interest over the past 30 years—especially in the health sciences. Despite the rapidly expanding reach of improvement science in education, criminal justice, and social care, among other fields, published applications of improvement science are close to nonexistent in the evaluation literature. Indeed, many evaluators know little about improvement science. What is improvement science? What does improvement science look like in real-world applications? And what might we, as evaluators, learn from the theory and practice of improvement science? These and other questions are considered in this issue of New Directions for Evaluation.

The primary motivation for the issue is to promote increased cross-talk and perhaps even cross-fertilization of ideas, techniques, and tools between evaluation and improvement science. Speaking directly to this aim, there are at least four areas where this cross-fertilization is particularly relevant: learning from error, examining variation, appreciating context, and focusing on systems change.

Learning from error is both friend and foe in evaluation. To be sure, the idea of trial and error can be traced back to early ideas of social engineering (e.g., Campbell's notion of the "experimenting society"), and the distinction between theory failure and implementation failure is a staple of theory-based evaluation. We learn from error; evaluation is no exception. That said, in many contract-funded evaluations of public programs, the heavy focus on outcomes and the fervent pursuit of "what works" have shrunk the room for error and, in effect, the learning that error makes possible. Error as foe is also evident in the designs and methods often employed in evaluation, which intentionally seek to "control," "rule out," or at least "adjust for" error. A great deal of potential learning is lost. From this perspective, improvement science offers a welcome framework for carving out a learning space for error. The stepwise, piecemeal experimentation central to improvement science reduces the adverse consequences of error and allows for progressive, trial-and-error learning.
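To make that stepwise logic concrete, the minimal sketch below is offered as an illustration only; the change ideas, the run_small_test stand-in, and the adoption rule are all invented and are not drawn from any case in this issue. It expresses a sequence of Plan-Do-Study-Act cycles as small, cheap tests whose results decide whether a change is adopted, adapted, or abandoned, which is why failures remain affordable sources of learning.

```python
# A minimal, hypothetical sketch of sequential Plan-Do-Study-Act cycles.
# The change ideas, run_small_test stand-in, and adoption threshold are invented
# for illustration; they are not taken from the cases in this issue.

import random

def run_small_test(change, n_sites=2):
    """Stand-in for a small-scale trial of a change idea at a few sites."""
    return [random.gauss(change["expected_gain"], 5.0) for _ in range(n_sites)]

change_ideas = [
    {"name": "reminder calls before the first visit", "expected_gain": 8.0},
    {"name": "extended walk-in hours", "expected_gain": 2.0},
]

adopted = []
for change in change_ideas:
    # Plan: state a prediction for the change.
    prediction = change["expected_gain"]
    # Do: try the change at small scale, where failure is cheap.
    observed = run_small_test(change)
    # Study: compare what happened with what was predicted.
    gain = sum(observed) / len(observed)
    # Act: adopt, adapt, or abandon based on the small test.
    if gain >= 0.5 * prediction:
        adopted.append(change["name"])

print("changes worth testing at larger scale:", adopted)
```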

The importance of variation has not been lost on evaluators. Most evaluators agree that programs rarely either work or fail to work outright. Even programs that fail on average to produce positive outcomes across many contexts deliver value in some of them. Programs work for some people, under certain circumstances, and in constant interaction with local conditions. The sustained interest in what works, for whom, and under what conditions speaks to this awareness. Speaking directly to this interest, improvement science offers operational guidance and concrete techniques for examining variation in outcomes and connecting that variation with program changes.
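One such technique is the control chart discussed in Chapter 2. The sketch below shows the calculation behind a Shewhart individuals (XmR) chart, a standard improvement-science tool for separating routine common-cause variation from special-cause signals worth linking to a program change; the monthly percentages and the newborn-visit measure are invented for illustration and are not data reported in this issue.

```python
# A minimal sketch of XmR (individuals) control chart limits.
# The monthly values below are hypothetical, not data from this issue.

from statistics import mean

# Hypothetical monthly outcome: percentage of newborns seen for a first
# primary care visit by day 9 of life.
monthly_pct = [62, 58, 64, 61, 59, 63, 60, 72, 74, 76, 75, 78]

center = mean(monthly_pct)
moving_ranges = [abs(b - a) for a, b in zip(monthly_pct, monthly_pct[1:])]
avg_moving_range = mean(moving_ranges)

# Standard XmR limits: center line plus or minus 2.66 times the average moving range.
upper_limit = center + 2.66 * avg_moving_range
lower_limit = center - 2.66 * avg_moving_range

for month, value in enumerate(monthly_pct, start=1):
    signal = "special cause" if not (lower_limit <= value <= upper_limit) else "common cause"
    print(f"month {month:2d}: {value:5.1f}%  ({signal})")

print(f"center {center:.1f}%, limits [{lower_limit:.1f}%, {upper_limit:.1f}%]")
```

Points falling outside the computed limits, or sustained shifts in the center, are the kind of outcome variation an improvement team would then trace back to specific program changes.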

On a related point, and often as part of what explains variation in program outcomes, the complexity of the contexts in which programs are delivered is of central interest to evaluators. Evaluators work in the contexts in which problems must be understood, and they therefore encounter complex contextual issues that largely determine the success of initiatives. Evaluations that examine program implementation and differences in success lead to better programs, because variability is better understood. Grounded in decades of real-world applications, improvement science offers key insights and practical guidelines for addressing the complexity of context.

Systems thinking has recently received a surge of interest among evaluators. The recognition that programs and the problems they seek to address function within broader systems is difficult to dispute. As such, it is necessary to understand the component processes of a system and how they work together, so as to understand the roots of the problem and generate innovative solutions. Sometimes quality can be improved by merely tweaking the system, that is, making small changes that enable the system to function in context the way it was designed to function. But other times the system must be redesigned from the ground up or major components changed. Motivated by systemic change, improvement science is grounded in a framework for improving systems that has been highly successful in fields as diverse as the automotive industry and health care (Kenney, 2008; Rother, 2009).

With these observations as our backdrop, the chapters in this volume address issues that are critical to both improvement science and evaluation.

Chapter 1 sets the stage by considering some of the conceptual similarities and distinctions between improvement science and evaluation. Chapter 2 provides a general introduction to the intellectual foundations, methods, and tools that collectively comprise improvement science. Chapter 3 provides the purest example of the implementation of improvement science, showcasing how iterative cycles of development and testing can provide solutions to address family- and system-level barriers to primary care. The remaining chapters illustrate improvement science in a variety of contexts and the benefits and challenges of implementing it for evaluative purposes. Chapter 4 illustrates how a network of diverse organizations can use iterative learning cycles to generate promising ideas, test and prototype these ideas, and spread and sustain what is found to work for a community population. Chapter 5 describes the implementation of rapid cycles of evaluation (Plan–Do–Study–Act cycles) to adapt interventions to local school contexts. Chapter 6 considers the potential value of combining improvement science and online learning. Chapter 7 concludes the volume with a set of reflections on the major benefits and implications of integrating improvement science more firmly in evaluation.

Collectively, the case chapters in this volume offer an inspiring review of state-of-the-art improvement science applications, presenting a broad range of analytical strategies, data visualization techniques, and data collection strategies that could be applied in future evaluation contexts. Although the cases do not draw explicit connections to evaluation, several themes cutting across them speak directly to core themes in evaluation: a persistent focus on systems thinking, a determination to capture and better understand variation and contextual complexity, and a sustained commitment to generative learning about projects and programs, all issues of great concern to evaluators. The final chapter connects these themes, among others, with current trends in evaluation.

It is our hope that the volume will promote cross-talk between evaluation and improvement science—a field that continues to gain traction in an increasing range of public policy areas. From this perspective, the issue comes at just the right time to help both producers and users of evaluations to see the potential benefits of a closer engagement with improvement science.

References

Cousins, J. B., & Leithwood, K. A. (1986). Current empirical research on evaluation utilization. Review of Educational Research, 56, 331–364.

Cousins, J. B., & Shulha, L. M. (2006). A comparative analysis of evaluation utilization and its cognate fields of inquiry: Current issues and trends. In I. Shaw, J. Greene, & M. Mark (Eds.), Handbook of evaluation: Program, policy, and practice (pp. 266–291). Thousand Oaks, CA: Sage.

Hofstetter, C. H., & Alkin, M. (2003). Evaluation use revisited. In T. Kellaghan, D. L. Stufflebeam, & L. Wingate (Eds.), International handbook of educational evaluation (pp. 189–196). Boston, MA: Kluwer.

Kenney, C. C. (2008). The best practice: How the new quality movement is transforming medicine. New York, NY: Public Affairs.

Leviton, L. C., & Hughes, E. F. X. (1981). Research on the utilization of evaluations: A review and synthesis. Evaluation Review, 5, 525–549.

Moen, R. D., Nolan, T. W., & Provost, L. P. (2012). Quality improvement through planned experimentation. New York, NY: McGraw-Hill.

Rother, M. (2009). Toyota kata: Managing people for improvement, adaptiveness and superior results. New York, NY: McGraw-Hill Professional.

Weiss, C. H. (1998). Evaluation (2nd ed.). Upper Saddle River, NJ: Prentice-Hall.

 

 

 

Christina A. Christie
Moira Inkelas
Sebastian Lemire
Editors

 

 

 

Christina A. Christie is professor and chair of the Department of Education in the Graduate School of Education and Information Studies, University of California, Los Angeles.

Moira Inkelas is an associate professor in the Department of Health Policy and Management in the UCLA Fielding School of Public Health and assistant director of the Center for Healthier Children, Families and Communities.

Sebastian Lemire is a doctoral candidate in the Social Research Methodology Division in the Graduate School of Education and Information Studies, University of California, Los Angeles.

Christie, C. A., Lemire, S., & Inkelas, M. (2017). Understanding the similarities and distinctions between improvement science and evaluation. In C. A. Christie, M. Inkelas & S. Lemire (Eds.), Improvement Science in Evaluation: Methods and Uses. New Directions for Evaluation, 153, 11–21.

 

1

Understanding the Similarities and Distinctions Between Improvement Science and Evaluation

Christina A. Christie, Sebastian Lemire, Moira Inkelas

Abstract

In this chapter, we discuss the similarities and points of departure between improvement science and evaluation, according to use, valuing, and methods—three dimensions of evaluation theory to which all theorists attend (Christie & Alkin, 2012). Using these three dimensions as a framework for discussion, we show some of the ways in which improvement science and evaluation are similar and how they are different in terms of purposes, goals, and processes. By doing so we frame the illustrative cases of improvement science that follow in this issue. © 2017 Wiley Periodicals, Inc., and the American Evaluation Association.

Improvement science is an approach to increasing knowledge that leads to an improvement of a product, process, or system (Moen, Nolan, & Provost, 2012). Evaluation is a systematic process designed to yield information about the merit, worth, or value of "something"; in the context of this journal issue, that something is assumed to be a program or policy. The two approaches have much in common, but little has been written about the ways in which they are similar or different, or about how they can be used together. In what follows, we consider similarities and distinctions across three dimensions of evaluation theory: use, valuing, and methods. Before advancing this comparison, however, an important distinction concerning the theoretical foundations of improvement science and evaluation is called for.

Theoretical Foundations of Evaluation and Improvement Science