Confirmative Evaluation - Joan C. Dessinger - E-Book

Description

This much-needed book offers trainers, consultants, evaluation professionals, and human resource executives and practitioners a hands-on resource for understanding and applying the proven principles of confirmative evaluation. Confirmative evaluation is a marriage of evaluation and continuous improvement. Unlike other types of evaluation--which are used during the design of a learning program or applied immediately after conducting a program--confirmative evaluation follows several months after the program is implemented. It tests the endurance of outcomes and the return on investment, and establishes the effectiveness, efficiency, impact, and value of the training over time.


Page count: 311

Publication year: 2015




CONTENTS

Cover

Contents

Title Page

Copyright

Dedication

List of Figures, Tables and Performance Support Tools

Acknowledgments

Introduction: Getting the Most from This Resource

PART 1: The Challenge

1 Full-Scope Evaluation: Raising the Bar

Evaluation: The Full Scope

Comparing the Four Types of Evaluation

Evaluation: Full-Scope Model

Challenges to Full-Scope Evaluation

2 Confirmative Evaluation: A Model Guides the Way

Confirmative Evaluation Model

Challenges to Implementing Confirmative Evaluation

Why Bother?

PART 2: Meeting the Challenge

3 Preplan: Assess Training Program Evaluability

When to Plan Confirmative Evaluation

How to Plan a Confirmative Evaluation

Assess Evaluability

Challenges to Evaluability Assessment

4 Plan: The Plan’s the Thing

What’s in a Confirmative Evaluation Plan?

Review, Validate, and Approve the Plan

5 Do: For Goodness’ Sake

Jump-Start Data Collection

Focus Data Collection

Collect the Data

Train the Data Collectors

Store the Data

Manage the Data-Collection Process

6 Analyze: Everything Old Is New Again

Get Ready, Get Set

Prepare the Confirmative Evaluation Data

Now Analyze

Interpret Confirmative Evaluation Results

Make Results-Based Recommendations

Report Confirmative Evaluation Results

7 Improve: Now What?

Focus on Utilization

Assume the Role

Accept the Challenge

Alignment: The Last Word

PART 3: Lessons from Oz

8 Case Study: Lions and Tigers and Bears, Oh My!

The Case Study

Meta Evaluation

Final Thoughts

9 Conclusion: We’re Not in Oz Anymore

Issues That Challenge Confirmative Evaluators

Evaluation as an Emerging Discipline

Improving the Process

Put Yourself in the Picture

Glossary

References

Index

About the Authors

About the Series Editors

About the Advisory Board Members

End User License Agreement

List of Illustrations

Chapter 1: Full-Scope Evaluation: Raising the Bar

Figure 1.1. Dessinger-Moseley Full-Scope Evaluation Model.

Chapter 2: Confirmative Evaluation: A Model Guides the Way

Figure 2.1. Moseley-Dessinger Confirmative Evaluation Model.

Figure 2.2. Ins and Outs of Confirmative Evaluation.

Chapter 4: Plan: The Plan’s the Thing

Figure 4.1. Seven Steps to Successful Evaluation.

Chapter 9: Conclusion: We’re Not in Oz Anymore

Figure 9.1. Sample Qualities of a Stellar Confirmative Evaluator.

List of Tables

Chapter 1: Full-Scope Evaluation: Raising the Bar

Table 1.1. Meta Evaluation: Type, Timing, and Purpose.

Table 1.2. Evaluation Types: Timing, Purpose, and Customers.

Chapter 3: Preplan: Assess Training Program Evaluability

Table 3.1. Proactive or Reactive Planning for Confirmative Evaluation?

Chapter 4: Plan: The Plan’s the Thing

Table 4.1. Overview of Evaluation Approaches.

Table 4.2. Types of Data for Judging Confirmative Evaluation Outcomes.

Table 4.3. Data-Collection Techniques, Tools, and Technology.

Chapter 6: Analyze: Everything Old Is New Again

Table 6.1. Options for Analyzing Quantitative Data.

Chapter 7: Improve: Now What?

Table 7.1. Decisions, Decisions, Decisions.

Chapter 8: Case Study: Lions and Tigers and Bears, Oh My!

Table 8.1. Summary of Docent Training Program Confirmative Evaluation.

Table 8.2. Strengths and Limitations of Docent Training Program Confirmative Evaluation.



Confirmative Evaluation

Practical Strategies for Valuing Continuous Improvement

JOAN CONWAY DESSINGER

JAMES L. MOSELEY

Copyright © 2004 by John Wiley & Sons, Inc.

Published by Pfeiffer

An Imprint of Wiley

989 Market Street, San Francisco, CA 94103-1741 www.pfeiffer.com

Except as noted specifically below, no part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or e-mail: permcoordinator@wiley.com.

Readers should be aware that Internet websites offered as citations and/or sources for further information may have changed or disappeared between the time this was written and when it is read.

Certain pages from this book are designed for use in a group setting and may be reproduced for educational/training activities. These pages are designated by the appearance of the following copyright notice at the foot of the page:

Confirmative Evaluation: Practical Strategies for Valuing Continuous Improvement. Copyright © 2004 by John Wiley & Sons, Inc. Reproduced by permission of Pfeiffer, an Imprint of Wiley. www.pfeiffer.com

This notice must appear on all reproductions as printed.

This free permission is limited to the paper reproduction of such materials for educational/training events. It does not allow for systematic or large-scale reproduction or distribution (more than 100 copies per page, per year), electronic reproduction, or inclusion in any publications offered for sale or used for commercial purposes—none of which may be done without prior written permission of the Publisher.

For additional copies/bulk purchases of this book in the U.S. please contact 800-274-4434.

Pfeiffer books and products are available through most bookstores. To contact Pfeiffer directly call our Customer Care Department within the U.S. at 800-274-4434, outside the U.S. at 317-572-3985, fax 317-572-4002, or visit www.pfeiffer.com.

Pfeiffer also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

ISBN: 0-7879-6500-6

Library of Congress Cataloging-in-Publication Data

Dessinger, Joan Conway.

Confirmative evaluation: practical strategies for valuing continuous improvement / Joan Conway Dessinger and James L. Moseley.

p. cm.

Includes index.

ISBN 0-7879-6500-6 (alk. paper)

1. Employees—Training of—Evaluation. I. Moseley, James L. (James Lee), 1942- II. Title.

HF5549.5.T7D433 2004

658.3'124—dc22

2003018776

Acquiring Editor: Matthew Davis

Director of Development: Kathleen Dolan Davies

Developmental Editor: Susan Rachmeler

Production Editor: Nina Kreiden

Editor: Thomas Finnegan

Manufacturing Supervisor: Bill Matherly

Editorial Assistant: Laura Reizman

Illustrations: Lotus Art

Printed in the United States of America

Printing 10 9 8 7 6 5 4 3 2 1

About This Book

Why is this topic important?

Evaluation, training, and human performance technology (HPT) practitioners are faced with an increasing need to confirm the continuing efficiency, effectiveness, impact, and value of training programs and the continuing competence of learners. Yet within the literature related to instructional technology, educational technology, performance technology, and even evaluation itself, there is a lack of reference to confirmative evaluation as a distinct type of evaluation that goes beyond formative and summative evaluation to measure ongoing behavior, accomplishments (job outputs), and business results. This book is all about confirmative evaluation, an approach to evaluation that values the continuing merit, worth, and value of instruction over time.

What can you achieve with this book?

The purpose of the book is to ground the practice of confirmative evaluation in the literature on the theory and application of evaluation and research. The authors view evaluation as a technology in itself and suggest how to use hard and soft technology techniques and tools to plan and implement confirmative evaluation of training programs.

How is this book organized?

The book consists of nine chapters divided into three parts. Part One, “The Challenge,” contains two chapters, which establish the conceptual framework for the book and present the systems-based procedural framework for confirmative evaluation: the Confirmative Evaluation Model. Part Two, “Meeting the Challenge,” provides both theory and practice to help the reader master the art and science of confirmative evaluation. Each of the five chapters in this part focuses on one part of the process: preplanning, planning, doing, analyzing, and improving. Part Three, “Lessons from Oz,” examines the “lions and tigers and bears” surrounding confirmative evaluation, presents a case study and looks at trends that are likely to have an impact on evaluation. The book concludes with a glossary of terms and a list of references.

About Pfeiffer

Pfeiffer serves the professional development and hands-on resource needs of training and human resource practitioners and gives them products to do their jobs better. We deliver proven ideas and solutions from experts in HR development and HR management, and we offer effective and customizable tools to improve workplace performance. From novice to seasoned professional, Pfeiffer is the source you can trust to make yourself and your organization more successful.

Essential Knowledge Pfeiffer produces insightful, practical, and comprehensive materials on topics that matter the most to training and HR professionals. Our Essential Knowledge resources translate the expertise of seasoned professionals into practical, how-to guidance on critical workplace issues and problems. These resources are supported by case studies, worksheets, and job aids and are frequently supplemented with CD-ROMs, websites, and other means of making the content easier to read, understand, and use.

Essential Tools Pfeiffer’s Essential Tools resources save time and expense by offering proven, ready-to-use materials—including exercises, activities, games, instruments, and assessments—for use during a training or team-learning event. These resources are frequently offered in looseleaf or CD-ROM format to facilitate copying and customization of the material.

Pfeiffer also recognizes the remarkable power of new technologies in expanding the reach and effectiveness of training. While e-hype has often created whizbang solutions in search of a problem, we are dedicated to bringing convenience and enhancements to proven training solutions. All our e-tools comply with rigorous functionality standards. The most appropriate technology wrapped around essential content yields the perfect solution for today’s on-the-go trainers and human resource professionals.

Essential resources for training and HR professionals

ABOUT THE INSTRUCTIONAL TECHNOLOGY AND TRAINING SERIES

This comprehensive series responds to the rapidly changing training field by focusing on all forms of instructional and training technology—from the well-known to the emerging and state-of-the-art approaches. These books take a broad view of technology, which is viewed as systematized, practical knowledge that improves productivity. For many, such knowledge is typically equated with computer applications; however, we see it as also encompassing other nonmechanical strategies such as systematic design processes or new tactics for working with individuals and groups of learners.

The series is also based upon a recognition that the people working in the training community are a diverse group. They have a wide range of professional experience, expertise, and interests. Consequently, this series is dedicated to two distinct goals: helping those new to technology and training become familiar with basic principles and techniques, and helping those seasoned in the training field become familiar with cutting-edge practices. The books for both groups are rooted in solid research, but are still designed to help readers readily apply what they learn.

The Instructional Technology and Training Series is directed to persons working in many roles, including trainers and training managers, business leaders, instructional designers, instructional facilitators, and consultants. These books are also geared for practitioners who want to know how to apply technology to training and learning in practical, results-driven ways. Experts and leaders in the field who need to explore the more advanced, high-level practices that respond to the growing pressures and complexities of today’s training environment will find indispensable tools and techniques in this groundbreaking series of books.

Rita C. Richey

Kent L. Gustafson

William J. Rothwell

M. David Merrill

Timothy W. Spannaus

Allison Rossett

Series Editors

Advisory Board

OTHER INSTRUCTIONAL TECHNOLOGY AND TRAINING SERIES TITLES

Instructional Engineering in Networked Environments

Gilbert Paquette

Learning to Solve Problems:

An Instructional Design Guide

David H. Jonassen

Thank you . . .

To all the evaluation, training, and HPT practitioners whose shared wisdom, experience, and humor fired the creation of this book

To family and friends who adapted to my schedule during the creation process

Joan Conway Dessinger

To my teachers, who inspire me . . .

To my students, who challenge me . . .

To my friends and fellow practitioners, who make evaluation work . . .

And to Midnite Moseley, for unconditional love, the fond and funny memories, and the once-in-a-lifetime friendship he shared . . .

My sincere thanks

James L. Moseley

LIST OF FIGURES, TABLES, AND PERFORMANCE SUPPORT TOOLS

Figures

1.1 Dessinger-Moseley Full-Scope Evaluation Model

2.1 Moseley-Dessinger Confirmative Evaluation Model

2.2 Ins and Outs of Confirmative Evaluation

4.1 Seven Steps to Successful Evaluation

9.1 Sample Qualities of a Stellar Confirmative Evaluator

Tables

1.1 Meta Evaluation: Type, Timing, and Purpose

1.2 Evaluation Types: Timing, Purpose, and Customers

3.1 Proactive or Reactive Planning for Confirmative Evaluation?

4.1 Overview of Evaluation Approaches

4.2 Types of Data for Judging Confirmative Evaluation Outcomes

4.3 Data-Collection Techniques, Tools, and Technology

6.1 Options for Analyzing Quantitative Data

7.1 Decisions, Decisions, Decisions

8.1 Summary of Docent Training Program Confirmative Evaluation

8.2 Strengths and Limitations of Docent Training Program Confirmative Evaluation

Performance Support Tools (PSTs)

1.1 When to Conduct a Confirmative Evaluation

3.1 Confirmative Evaluation Planning Process Flowchart

3.2 Confirmative Evaluation Evaluability Assessment Form for Training Programs

3.3 Steps in Negotiating Stakeholder Information Needs

3.4 Good-Better-Best Dialogue

3.5 From Needs to Outcomes to Questions

4.1 Getting Started on a Confirmative Evaluation Plan

4.2 Confirmative Evaluation Plan Outline

5.1 Matrix to Focus and Plan Data Collection

5.2 Form for Recording Information During Extant Data Analysis

5.3 Checklist for Evaluating the Effectiveness of a Survey or Questionnaire

6.1 Checklist for Interpreting Confirmative Evaluation Results

6.2 Outline for a Formal Confirmative Evaluation Final Report

7.1 Testing Alignment to Build a Foundation for Continuous Improvement

9.1 Self-Assessment: Qualities of a Stellar Confirmative Evaluator

ACKNOWLEDGMENTS

THE AUTHORS WISH to acknowledge the following people:

Instructional Technology and Training series editors Rita Richey, William Rothwell, and Tim Spannaus

Pfeiffer editors Matt Davis, Kathleen Dolan Davies, Susan Rachmeler, Nina Kreiden, and Tom Finnegan

Joyce Wilkins, who assisted with production

David Solomon and April Davis, who supported and inspired us professionally

Kim Sneden and colleagues Paulla Wissel and Sara Weertz who walked us through Oz

Tore Stellas and Pete Jr., who introduced us to the concept “what you said would happen has happened”

INTRODUCTION

HAVE YOU EVER . . .

Helped an employee maintain or continue to improve performance long after initial training or learning occurred?

Found that new contexts or new performance standards mandated a change in performance?

Experienced ineffective skill-building programs that had to be discarded, repurposed, or replaced?

Needed to determine how critical a particular performance factor was to organizational success?

Needed to establish that your training program has measurably improved business results?

If you answered yes to any of these questions, then read on . . .

This book is all about confirmative evaluation, “a new paradigm for continuous improvement” (Moseley and Solomon, 1997, p. 12). Confirmative evaluation verifies the continuing merit, worth, and value of instruction over time. Evaluation, training, and HPT (human performance technology) practitioners are faced with an increasing need to confirm the continuing efficiency, effectiveness, impact, and value of training programs and the continuing competence of learners. Yet within the literature related to instructional technology, educational technology, performance technology, and even evaluation itself, there is a lack of reference to confirmative evaluation as a distinct type of evaluation that goes beyond formative and summative evaluation to measure ongoing behavior, accomplishments (job outputs), and business results. Training practitioners themselves, when asked whether they have any experience with confirmative evaluation, tend to respond “Is that one of the four levels?” They are referring to Kirkpatrick’s four levels of evaluation (Kirkpatrick, 1959, 1994).

Purpose

Confirmative Evaluation: Practical Strategies for Valuing Continuous Improvement sets out to fill the gap and provide a well-referenced and highly practical book for practitioners in training, evaluation, and HPT on why, when, and how to plan and conduct confirmative evaluation of training programs. The purpose of the book is to ground the practice of confirmative evaluation in the literature on the theory and application of evaluation and research. The Instructional Technology and Training Series focuses on instructional technology and training, so we view evaluation as a technology in itself and suggest how to use hard and soft technology techniques and tools to plan and implement confirmative evaluation of training programs.

Scope

This book presents an overview of full-scope evaluation (formative, summative, confirmative, and meta) using the Dessinger-Moseley Full-Scope Evaluation Model. The model also illustrates how confirmative evaluation fits within the current typology of evaluation. After a close-up look at full-scope evaluation, we present and discuss the Moseley-Dessinger Confirmative Evaluation Model. The remainder of the book concentrates on how to use hard and soft technologies to plan and conduct an effective and efficient confirmative evaluation. We also suggest future directions for utilization of confirmative evaluation as an integral part of the technology of training and learning.
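The full-scope typology outlined above can be summarized in a small lookup table. The sketch below is purely illustrative and is not part of the book; the timing descriptions paraphrase how the text characterizes each evaluation type:

```python
# Illustrative summary of the four evaluation types in full-scope
# evaluation (formative, summative, confirmative, meta).
# Timing wording is paraphrased from the text; the code itself is hypothetical.
FULL_SCOPE_EVALUATION = {
    "formative":    "during design and development of the program",
    "summative":    "during or immediately after implementation",
    "confirmative": "months after implementation, to confirm enduring value",
    "meta":         "throughout, evaluating the evaluation itself",
}

def evaluation_timing(evaluation_type: str) -> str:
    """Look up when a given evaluation type is typically conducted."""
    return FULL_SCOPE_EVALUATION[evaluation_type.lower()]

rate_description = evaluation_timing("Confirmative")
print(rate_description)
```

The point of the lookup is the contrast in timing: only confirmative evaluation deliberately waits months after implementation.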

The focus of the Instructional Technology and Training Series is training. However, the theory and practice of confirmative evaluation applies to the evaluation of all performance improvement interventions, instructional and noninstructional. Therefore, we ask the reader to make a quantum leap whenever necessary to adapt the practical strategies in this book to noninstructional interventions such as incentive and reward programs, suggestion systems, career development initiatives, and so forth.

Audience

The audience for this series is a broad one. It goes beyond training to encompass all human performance improvement (HPI), evaluation, human resource development (HRD), management, and quality practitioners who are on the cutting edge of continuous improvement efforts. The audience also includes researchers and university professors or instructors in evaluation, instructional technology (IT), human performance technology (HPT), HRD, management, and related fields.

How This Book Is Organized

The book consists of nine chapters divided into three parts: “The Challenge,” “Meeting the Challenge,” and “Lessons from Oz.” Each chapter is enhanced with figures, tables, and performance support tools. Real-world examples of confirmative evaluation are difficult to find; however, we use examples whenever possible to clarify concepts and offer on-the-job guidance for planning and conducting confirmative evaluation of training programs. We also include a glossary of terms and a list of references at the end of the book.

Part One: The Challenge

The first part contains two chapters. These opening chapters challenge the reader to take a risk and commit to full-scope evaluation. We encourage evaluators, training and HPT practitioners, and others to go beyond traditional formative and summative evaluation and add confirmative evaluation to their repertoire of knowledge and skills.

Chapter One: Full-Scope Evaluation: Raising the Bar

This chapter establishes the conceptual framework for the book and challenges evaluators and other professionals to raise the evaluation bar to include full-scope evaluation—formative, summative, confirmative, and meta.

Chapter Two: Confirmative Evaluation: A Model Guides the Way

The second chapter presents the systems-based procedural framework for confirmative evaluation, the Confirmative Evaluation Model. We walk the reader through the model using the inputs, processes, outputs, and outcomes of confirmative evaluation as a guide, and we also look into the heart of the model: meta evaluation. We end Part One with a discussion of the purpose and challenge of confirmative evaluation and how to justify using time, money, and human resources to plan and implement confirmative evaluation.

Part Two: Meeting the Challenge

The second part of this book lays out both theory and practice to help the reader master the art and science of confirmative evaluation. This part contains five chapters on the process components of the Confirmative Evaluation Model. Chapters Three and Four present plan as a two-step process: preplanning or evaluability assessment, and developing a confirmative evaluation plan. Chapters Five through Seven focus on the other process components of the Confirmative Evaluation Model: do, analyze, and improve. Chapters Three through Seven each contain a toolbox of additional references to help the evaluation, training, or HPT practitioner gain additional knowledge and skills related to the chapter topic.

Chapter Three: Preplan: Assess Training Program Evaluability

The first chapter in Part Two looks at the preplanning step in the confirmative evaluation planning process and stresses the importance of assessing the evaluability of the training program. We introduce a confirmative evaluation planning process flowchart and discuss the difference between proactive and reactive planning. Then we help the reader learn how to use the process flowchart plus a rating form and other performance support techniques and tools to assess the evaluability of a training program on the basis of criteria such as program life cycle, organization-specific requirements, stakeholder information needs, and intended evaluation outcomes.

Chapter Four: Plan: The Plan’s the Thing

Chapter Four continues the discussion of the confirmative evaluation planning process by focusing on how to develop a confirmative evaluation plan and how to monitor the training program and maintain the plan if planning is proactive. We present two performance support tools: “Getting Started on a Confirmative Evaluation Plan” and a confirmative evaluation plan outline to help readers develop a complete, accurate, and useful confirmative evaluation plan. The chapter also discusses what happens after the plan is approved: reactive planners begin the confirmative evaluation, whereas proactive planners must maintain the plan for several months or more until it is time to conduct the confirmative evaluation. Planning a confirmative evaluation and preparing the confirmative evaluation plan require general project management skills, evaluation skills, analysis skills, and knowledge of how to evaluate learning and instruction technologies. So we give you a toolbox at the end of the chapter, a list of resources to help you increase your knowledge and skills in these areas.

Chapter Five: Do: For Goodness’ Sake

Goodness is a term used by the military and others to indicate the degree to which people, places, situations, or things meet stated or implicit standards for excellence and integrity. In this chapter, we discuss how to use selected hard and soft technologies to conduct an efficient and effective confirmative evaluation. The topics include developing data-collection instruments, collecting the data, and documenting the process and the findings. Of course, there are challenges to face at every step, but we give you another toolbox of resources to meet those challenges.

Chapter Six: Analyze: Everything Old Is New Again

In Chapter Six, we focus again on hard and soft technologies, this time to analyze and interpret the data and communicate the results of the confirmative evaluation. This chapter contains practical suggestions and guidelines on how to analyze and interpret data and communicate the confirmative evaluation results. We differentiate between quantitative and qualitative data analysis, focus on analyzing and interpreting the confirmative evaluation results, spend some time outlining what constitutes an effective confirmative evaluation report, and present another toolbox—this time containing professional books and software packages to jump-start the analysis and communication process.
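As a rough illustration of the kind of quantitative comparison the chapter describes, a confirmative evaluation might compare performance scores captured immediately after training (summative) with scores collected months later. The data, the metric name, and the tooling below are all hypothetical; the book prescribes no particular software:

```python
from statistics import mean

def retention_rate(summative_scores, confirmative_scores):
    """Percentage of the average immediate post-training score still
    present months later. A hypothetical metric for illustration only."""
    return 100.0 * mean(confirmative_scores) / mean(summative_scores)

# Hypothetical learner scores: immediately after training vs. six months later
summative = [88, 92, 85, 90]
confirmative = [80, 85, 78, 84]

rate = retention_rate(summative, confirmative)
print(f"Average retention after six months: {rate:.1f}%")
```

A result near 100 percent would suggest the training's outcomes have endured; a sharply lower figure would prompt the improvement questions taken up in Chapter Seven.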

Chapter Seven: Improve: Now What?

This chapter presents the ultimate challenge: continuous quality improvement, assurance, and control. Once more the reader is encouraged to use the appropriate hard and soft technologies to support, implement, assure, and control the continuous quality improvement of the learners, the organization, and the global community. Resources in the toolbox at the end of the chapter include practical ways to apply the theory and practice of utilization-focused evaluation to confirmative evaluation and a self-assessment.

Part Three: Lessons from Oz

In the third part, we take a trip to Oz via a metropolitan zoo to examine the lions and tigers and bears surrounding confirmative evaluation; then we rub our crystal ball and acknowledge we’re not in Oz anymore. Organizations, whether local or global, need full-scope evaluation to enable and support their continuous quality improvement efforts.

Chapter Eight: Case Study: Lions and Tigers and Bears, Oh My!

Pardon our whimsical side, but in this chapter we draw a parallel between Dorothy’s journey to Oz and the development and evaluation of a training program or other instructional or noninstructional performance improvement interventions. There are even live lions and tigers and bears lurking in the shadows as we perform a meta evaluation of a confirmative evaluation of a training program for docents at a metropolitan zoo.

Chapter Nine: Conclusion: We’re Not in Oz Anymore

The final chapter continues the journey to Oz as we look at trends and other challenges that affect program evaluation, continuous quality improvement, and technology. We also discuss evaluation as an emerging discipline, how to improve the confirmative evaluation process, and the qualities that make a stellar confirmative evaluator.

How to Use This Book

We foresee that evaluation, training, and HPT practitioners will use Confirmative Evaluation: Practical Strategies for Valuing Continuous Improvement as a desktop reference and that professors or instructors may use this book as a reference manual in the classroom. Even seasoned practitioners will find new insights and rules of thumb in its comprehensive presentations.

There are several approaches to using this book. Choose one or more of these suggestions to guide your understanding of and skill in using confirmative evaluation:

Use this book as a just-in-time learning tool or as a performance support tool (PST). Skim the book to familiarize yourself with its layout. Each chapter builds on the preceding chapter or chapters. Look at the tables and figures. Review and use the PSTs. Refer to the Glossary for terminology with which you are unfamiliar. Know where you can find the information you need when you need it. Use the toolboxes at the end of Chapters Three through Seven to find additional, practical resources.

Use the book as a primer. Learn about confirmative evaluation as a new evaluation paradigm, a process for ensuring and verifying the continuous improvement of instructional technology and training initiatives.

View the book as a reference on the systemic approach to evaluation. It presents confirmative evaluation as a series of interrelated inputs, processes, outputs, and outcomes. Outputs of one event become the inputs of another event as the confirmative evaluation process moves toward the final outcome: continuous quality improvement of the learners, training program, work group, business, organization, or global community.

Use this book to learn about full-scope evaluation and how to use proactive or reactive strategies for planning and conducting confirmative evaluation. Read the chapters as they are presented. Chapters One and Two set the tone and give an overview of full-scope evaluation and confirmative evaluation. Chapters Three through Seven are how-to guides for planning and conducting confirmative evaluation. Use the PSTs, and then discuss the outcomes with your team members for verification and validation. Explore the resources in the toolboxes for Chapters Three through Seven.

Finally, just use this book for its value-added impact. The material in Confirmative Evaluation will benefit your organization (whether you represent business, industry, government, health care, or education) and you as an evaluation, training, or HPT practitioner.

Joan Conway Dessinger, Ed.D., CPT
The Lake Group
St. Clair Shores, Michigan

James L. Moseley, Ed.D., CPT
Wayne State University
Detroit, Michigan

October 2003

PART 1The Challenge

It’s time to raise the bar on evaluation . . . and confirm that what we said would happen has happened.

1Full-Scope Evaluation: Raising the Bar

SEELS AND RICHEY (1994, p. 52) call evaluation “a commonplace human activity” and indicate that as far back as the 1930s instructional designers, evaluators, and other training/HPT (human performance technology) practitioners discussed, wrote about, and sometimes implemented evaluation activities to measure the value of training and learning. The evaluation bar was raised in 1967 when Scriven suggested that exemplary instructional designers and evaluators plan and conduct two types of evaluation: formative evaluation, to improve instructional programs or products during the development phase; and summative evaluation, to measure the effectiveness of education, training, and learning during or immediately after implementation. The terms formative and summative have “not only served the field well in providing a usable language to describe important uses of evaluation, but have also been a rich conceptual seedbed for the sprouting of many proposed refinements and extensions to the field” (Worthen, Sanders, and Fitzpatrick, 1997, p. 18). Now it’s time to raise the bar again.

We challenge evaluation, training, and HPT practitioners to add confirmative evaluation to their repertoire of knowledge and skills. Confirmative evaluation goes beyond formative and summative evaluation to judge the continuing merit, value, or worth of a long-term training program. More specifically, we challenge training and evaluation practitioners to consistently use full-scope evaluation: formative, summative, confirmative, and meta. Confirmative evaluation encourages and supports continuous improvement efforts within organizations. Meta evaluation evaluates evaluation and adds credibility to evaluation activities. However, meta evaluation is another story and another book. Meanwhile, we need to focus on confirmative evaluation.

In this chapter, we set the stage for confirmative evaluation. First, we introduce the concept of full-scope evaluation as an integrated plan that uses four types of evaluation—formative, summative, confirmative, and meta—to judge the continuing merit and worth of long-term training programs. We use models to illustrate how the four types of evaluation work together and how full-scope evaluation fits into the instructional system design (ISD) process. Then we discuss the challenges faced by individuals and organizations that commit to full-scope evaluation.

One issue that arose when we began writing this book is that although there is common evaluation vocabulary, there is limited shared meaning. When discussing evaluation, the literature uses the words types, roles, stages, phases, and forms of evaluation. For consistency, we use the word type when referring to formative, summative, and confirmative evaluation.

After reading this chapter, you will be able to:

Explain the concept of full-scope evaluation

Describe and compare the components of full-scope evaluation (formative, summative, confirmative, and meta evaluation)

Explain how full-scope evaluation turns ADDIE into ADDI/E (more on this later; also, see the Glossary at the end of the book)

Recognize the challenges associated with committing to full-scope evaluation

Evaluation: The Full Scope

Full-scope evaluation systematically judges the merit and worth of a long-term training program before, during, and after implementation. Full-scope evaluation is appropriate only for training programs that are designed to run for one year or more; it is not appropriate for a one-time training event, such as a single-session workshop to introduce a new product to sales representatives.

Full-scope evaluation integrates four types of program evaluation—formative, summative, confirmative, and meta—into the training program evaluation plan (see Chapter Three). Working together, the four types of evaluation help to determine the value of a long-term training program and develop the business case or rationale for maintaining, changing, discarding, or replacing the program. We describe all four types of evaluation here.

Formative Evaluation

Formative evaluation is the oldest type of evaluation. Scriven (1967) was the first to use the term; however, the concept and practice of evaluating instruction during development predated both the term and the ISD movement (Tessmer, 1994). Thiagarajan (1991) defines and describes formative evaluation from a quality perspective as “a quality control method to improve, not prove, instructional effectiveness” (p. 22) and “a continuous process incorporated into different stages of development” (p. 26). Dick and King (1994) add that formative evaluation is a way to “… facilitate the transfer of learning from the classroom to the performance context” (p. 8).

Formative evaluation is usually conducted by the designer or developer; however, large organizations sometimes call on the services of a practitioner evaluator. Van Tiem, Moseley, and Dessinger (2000) describe four basic strategies for conducting formative evaluation:

Expert review using an individual or group familiar with the content and need

One-to-one evaluation involving the designer or evaluator and a learner or performer

Live or virtual small-group evaluation

Field testing or piloting either segments or all of the program or product (pp. 164–167)

The outputs and outcomes of formative evaluation mold the training program and set the stage for summative evaluation of immediate program results. Therefore the primary customers of formative evaluation are the instructional designers and developers who are responsible for selecting or developing the instructional performance support system or training package.

Summative Evaluation

Summative evaluation “involves gathering information on adequacy and using this information to make decisions about utilization” (Seels and Richey, 1994, p. 57). Summative evaluation is conducted during or immediately after implementation. There is also a purposeful difference between formative and summative evaluation: “If the purpose of evaluation is to improve … then it is formative evaluation. (In contrast, if the purpose is to prove, justify, certify, make a ‘go/no-go’ decision, or validate … then it is summative evaluation.)” (Thiagarajan, 1991, p. 22).

The primary customers are the decision makers who need to approve installation of the instructional performance support system, or in the case of a one-time offering put a final seal of approval on the instructional package. These decision makers may or may not participate in earlier instructional design and development activities. In either case, they need immediate feedback from the first session or the first several sessions: How well did the training meet the stated instructional objectives? How well did it meet expectations of the instructor(s) and participants?

During summative evaluation, “any aspect of the total education or training system can be evaluated: the student, the instructor, instructional strategies, the facilities, even the training organization itself” (Smith and Brandenburg, 1991, p. 35). The designer/developer or evaluator may select from or blend a number of strategies for conducting summative evaluation: cost-benefit analysis, attitude ratings (student, instructor, client, and other stakeholders), testing (pre-, post-, embedded, and performance tests), surveys, observation, interviews, focus groups, and statistical analysis. The focus is on immediate results; in a situation involving a long-term program, the outputs and outcomes of summative evaluation become inputs for the next step, confirmative evaluation.

Confirmative Evaluation

Confirmative evaluation goes beyond formative and summative evaluation; it moves traditional evaluation a step closer to full-scope evaluation. During confirmative evaluation, the evaluation, training, or HPT practitioner collects, analyzes, and interprets data related to behavior, accomplishment, and results in order to determine “the continuing competence of learners or the continuing effectiveness of instructional materials” (Hellebrandt and Russell, 1993, p. 22) and to verify the continuous quality improvement of education and training programs (Mark and Pines, 1995).

The concept of going beyond formative and summative evaluation is not new. The first reference to confirmative evaluation came in the late 1970s: “The formative-summative description set ought to be expanded to include a third element, confirmative evaluation” (Misanchuk, 1978, p. 16). Eight years later, Beer and Bloomer (1986) from Xerox suggested a limited strategy for going beyond the formative and summative distinctions in evaluation by focusing on three levels for each type of evaluation:

Level one: evaluate programs while they are still in draft form, focusing on the needs of the learners and the developers