Evaluating Public and Community Health Programs

Muriel J. Harris

Description

A practical introduction to participatory program evaluation

Evaluating Public and Community Health Programs provides a comprehensive introduction to the theory and practice of evaluation, with a participatory model that brings stakeholders together for the good of the program. Linking community assessment, program implementation, and program evaluation, this book emphasizes practical, ongoing evaluation strategies that connect theory with application. This updated second edition includes new discussion on planning policy change programs using logic models and theory of change, plus expanded coverage of processes, outcomes, data collection, and more. Each chapter includes classroom activities and group discussion prompts, and the companion website provides worksheets, lecture slides, and a test bank for instructors. Mini cases help illustrate the real-world applications of the methods described, and expanded case studies allow students to dig deeper into practice and apply what they've learned.

Accurate and effective evaluation is the key to a successful program. This book provides a thorough introduction to all aspects of this critical function, with a wealth of opportunities to apply new concepts.

  • Learn evaluation strategies that involve all program stakeholders
  • Link theory to practice with new mini cases and examples
  • Understand the uses, processes, and approaches to evaluation
  • Discover how ongoing evaluation increases program effectiveness

Public and community health programs are a vital part of our social infrastructure, and the more effective they are, the more people they can serve. Proper planning is important, but continued evaluation is what keeps a program on track for the long term. Evaluating Public and Community Health Programs provides clear instruction and insightful discussion on the many facets of evaluation, with a central focus on real-world service.




Table of Contents

Cover

Title Page

Copyright Page

Dedication

Preface

Acknowledgments

CHAPTER 1: AN INTRODUCTION TO PUBLIC AND COMMUNITY HEALTH EVALUATION

Overview of Evaluation

Levels of Evaluation

Preassessment Evaluations

The Participatory Approach to Evaluation

The Participatory Model for Evaluation

The Precursors to Program Evaluation

Cultural Considerations in Evaluation

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 2: THE COMMUNITY ASSESSMENT: AN OVERVIEW

Theoretical Considerations

The Ecological Model

Data Collection

Data Sources

Reviewing the Scientific Literature

The Report

Stakeholders’ Participation in Community Assessments

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 3: DEVELOPING INITIATIVES: AN OVERVIEW

The Organization’s Mission

Planning the Initiative

Incorporate Theory

Goals and Objectives

The Initiative’s Activities

Use Existing Evidence‐Based Programs

The Program’s Theory of Change

The Logic Model Depicting the Theory of Change

Criteria for Successful Initiatives

Stakeholders’ Participation in Planning and Developing Initiatives

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 4: PLANNING FOR EVALUATION: PURPOSE AND PROCESSES

The Timing of the Evaluation

The Purpose of Evaluation

The Contract for Evaluation

The Evaluation Team

Evaluation Standards

Managing the Evaluation Process

Factors That Influence the Evaluation Process

Planning for Ethical Program Evaluation

Involving Stakeholders

Creating and Maintaining Effective Partnerships

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 5: DESIGNING THE EVALUATION: PART 1: DESCRIBING THE PROGRAM OR POLICY

The Context of the Initiative

The Social, Political, and Economic Environment

The Organizational Structure and Resources

The Initiative and Its Relationship to the Organization

The Stage of Development of the Initiative

Data Access and Availability

The Program Initiative

The Policy Initiatives

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 6: DESIGNING THE EVALUATION: PART 2A: PROCESS EVALUATION

Purposes of Process Evaluation

Key Issues in Process Evaluation

Selecting Questions for Process Evaluation

Resources for Evaluation

Measuring Resources, Processes, and Outputs

Tools for Process Evaluation

Ethical and Cultural Considerations

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 7: DESIGNING THE EVALUATION: PART 2B: OUTCOME EVALUATION

The Relationship Between Process and Outcome Evaluation

Sorting and Selecting Evaluation Questions

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 8: COLLECTING THE DATA: QUANTITATIVE

Quantitative Data

Factors Influencing Data Collection

Using Surveys

Designing Survey Instruments

Pilot Testing

Triangulation

Institutional Review and Ethics Boards

The Data Collection Team

Managing and Storing Data

Stakeholder Involvement

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 9: COLLECTING THE DATA: QUALITATIVE

Qualitative Data

Ensuring Validity and Reliability

Interview‐Format Approaches

Document and Record Review

Observational Approaches

Case Reviews

Digital Approaches

Geographic Information Systems

Training Data Collectors

Managing and Storing Qualitative Data

Stakeholder Involvement

The Data Collection Team

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 10: ANALYZING AND INTERPRETING QUANTITATIVE AND QUALITATIVE DATA: QUANTITATIVE (PART 1)

Analyzing and Reporting Quantitative Data

Reaching Conclusions

Stakeholder Involvement

Summary

Discussion Questions and Activities

QUALITATIVE (PART 2)

Analyzing Qualitative Data

Interpreting the Data and Reaching Conclusions

The Role of Stakeholders

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 11: REPORTING EVALUATION FINDINGS

The Content of the Report

The Audience for the Report

The Timing of the Report

The Format of the Report

Summary

Discussion Questions and Activities

Key Terms

CHAPTER 12: CASE STUDY: THE COMMUNITY ASSESSMENT

Background

Establish a Team

Determine the Availability of Data

Decide on the Data‐Collection Approaches and Methods

Training

Resource Procurement

Analysis and Interpretation of the Data

Summary of Findings

The Intervention

Design the Evaluation

Collect the Data

Analyze and Interpret the Data

Report the Results

Discussion Questions and Activities

CHAPTER 13: CASE STUDY: PROCESS EVALUATION

Background

Theoretical Framework

Community Assessment Findings

The Evaluation Plan

Answering the Evaluation Question

Reporting the Results

Discussion Questions and Activities

APPENDIX A: MODEL AGREEMENT BETWEEN EVALUATION CONSULTANT AND TEAM MEMBERS

APPENDIX B: MODEL PREAMBLE FOR ADULT INDIVIDUAL INTERVIEWS

APPENDIX C: MODEL DEMOGRAPHIC SHEET

APPENDIX D: MODEL FIELD NOTES REPORT

APPENDIX E: MODEL INTERVIEW REFUSAL REPORT

APPENDIX F: DATA COLLECTION TRAINING MANUAL TEMPLATE

Conducting the Interview

APPENDIX G: GUIDELINES FOR COMPLETING AN EVALUATION REPORT

Methodology

REFERENCES

INDEX

End User License Agreement

List of Tables

CHAPTER 2: THE COMMUNITY ASSESSMENT: AN OVERVIEW

Table 2.1 Commonly Used Theories and Models in Public Health

Table 2.2 Risk Factor, Protective Factors, and Level of Influence

Table 2.3 The Community Assessment Team

Table 2.4 Example of a Data‐Collection Plan

CHAPTER 3: DEVELOPING INITIATIVES: AN OVERVIEW

Table 3.1 Incorporating Theory in Program Planning using the Health Belief Model

Table 3.2 Using Multiple Theories Within an Ecological Framework

Table 3.3 Goal Analysis

Table 3.4 Example of the Application of the SMART Attributes to an Objective

Table 3.5 Writing Time Lines into Objectives

CHAPTER 4: PLANNING FOR EVALUATION: PURPOSE AND PROCESSES

Table 4.1 A Basic Action Plan

Table 4.2 An Action Plan for Team Recruitment and Development

Table 4.3 Sample Budget

CHAPTER 6: DESIGNING THE EVALUATION: PART 2A: PROCESS EVALUATION

Table 6.1 Activity Objective: Expected Output and Actual Output

Table 6.2 Components in Formulating Policy

Table 6.3 Workplan for Identifying Questions for Process Evaluation

Table 6.4 Logic Model Components: Questions, Indicators, and Data Sources

Table 6.5 Assessing the Context for Policy Change Using a Table Format in a Survey

Table 6.6 Log Frame

Table 6.7 Alignment of Evaluation Research Questions (EQ) With Specific Project Objectives (SPO)

Table 6.8 Process Evaluation Work Plan With Funding From July Year 1 to March Year 3

CHAPTER 7: DESIGNING THE EVALUATION: PART 2B: OUTCOME EVALUATION

Table 7.1 Tools for Outcome Evaluation (What you use to answer the questions)

Table 7.2 Assessing Effectiveness of Policy Change from the Perspective of Stakeholders

CHAPTER 8: COLLECTING THE DATA: QUANTITATIVE

Table 8.1 Questions and Response Formats

CHAPTER 9: COLLECTING THE DATA: QUALITATIVE

Table 9.1 Template for Document Review

CHAPTER 10: ANALYZING AND INTERPRETING QUANTITATIVE AND QUALITATIVE DATA: QUANTITATIVE (PART 1)

Table 10.1 Single‐Variable Table Format for Presenting Quantitative Data

Table 10.2 Multiple‐Variable Table Format for Presenting Quantitative Data

CHAPTER 11: REPORTING EVALUATION FINDINGS

Table 11.1 List of Acronyms

CHAPTER 12: CASE STUDY: THE COMMUNITY ASSESSMENT

Table 12.1 Potential Contributions of Team Members

Table 12.2 Program Goals, Expected Outcomes, and Intervention Approaches

Table 12.3 Outcome Objectives and Initiative Activities for Healthy‐Weight Goal

Table 12.4 Outcome Objectives and Initiative Activities for Affordable‐Produce Goal

Table 12.5 The Logic‐Model Components and Healthy Soon Project Activities

Table 12.6 Evaluation Question

Table 12.7 Indicators and Data Sources

Table 12.8 Addressing Threats to Internal Validity

Table 12.9 Sample Site‐Visit Report

Table 12.10 Sample Attendance Sheet

Table 12.11 Site‐Visit Report at 2 Weeks

Table 12.12 Intervention Group and Control Group at Baseline on Knowledge About Physical Activity and Nutrition

Table 12.13 Intervention Group and Control Group at Baseline on Attitude Toward Diabetes

CHAPTER 13: CASE STUDY: PROCESS EVALUATION

Table 13.1 Summary of Community Assessment Results

Table 13.2 Interventions of the Leighster County Youth Initiative and Equivalent Levels of Influence

Table 13.3 Overarching and Subquestions in the Evaluation

Table 13.4 Evaluation Questions, Indicators, Data Type, Data Collection Approach, and the Time Line for Data Collection

Table 13.5 Baseline Level of Services and Participation in the Cinapsih Wellness Center Activities

Table 13.6a Personnel Salaries and Annual Cost

Table 13.6b Volunteers and Effort

Table 13.7 Survey Questions From the Section on Recruitment in Assessing Levels of Participation in the Cinapsih Wellness Center Programs and Activities

List of Illustrations

CHAPTER 1: AN INTRODUCTION TO PUBLIC AND COMMUNITY HEALTH EVALUATION

Figure 1.1 Components for Preassessment of Program's Readiness for Evaluation

Figure 1.2 Framework for Program Evaluation in Public Health

Figure 1.3 The Participatory Model for Evaluation

Figure 1.4 Evaluation in Context

Figure 1.5 Valuable Take‐Aways

CHAPTER 2: THE COMMUNITY ASSESSMENT: AN OVERVIEW

Figure 2.1 The Community Assessment as a Component of the Participatory Model for Evaluation

Figure 2.2 The Community Assessment, the Initiative, and the Evaluation

Figure 2.3 The Ecological‐Model Framework

Figure 2.4 Core Components of the Community Assessment Process

Figure 2.5 Major Factors Influencing Data Collection in the Community Assessment Process

Figure 2.6 Major Components of a Community Assessment

Figure 2.7 Summarizing Steps 1–10

Figure 2.8 Framework for Summarizing a Literature Review

Figure 2.9 Valuable Take‐Aways

CHAPTER 3: DEVELOPING INITIATIVES: AN OVERVIEW

Figure 3.1 Outcomes of a Community Assessment Process

Figure 3.2 The Hierarchical Relationship of Goals and Objectives

Figure 3.3 Verbs for Writing Objectives

Figure 3.4 Components of Evidence‐Based Programs

Figure 3.5 Classification System for Evidence‐Based HIV/AIDS Interventions

Figure 3.6 Basic Framework for a Logic Model for a Health‐Related Program

Figure 3.7 Basic Framework for a Logic Model for a Policy Related Initiative

Figure 3.8 Valuable Take‐Aways

CHAPTER 4: PLANNING FOR EVALUATION: PURPOSE AND PROCESSES

Figure 4.1 Poster from the Revised Media Campaign

Figure 4.2 Evaluation Standards

Figure 4.3 Nurturing Strong Evaluation Partnerships

Figure 4.4 Table of Contents for an Inception Report

Figure 4.5 Valuable Take‐Aways

CHAPTER 5: DESIGNING THE EVALUATION: PART 1: DESCRIBING THE PROGRAM OR POLICY

Figure 5.1 The Participatory Model for Evaluation: Design the Evaluation

Figure 5.2 Components for Reviewing the Program or Policy Initiative

Figure 5.3 Components for Understanding the Context of the Initiative

Figure 5.4 Organizational Chart

Figure 5.5 Tips for Drawing a Logic Model

Figure 5.6 Basic Framework for Completing a Logic Model

Figure 5.7 The Ecological‐Model Framework Showing P and p

Figure 5.8 Policy Logic Model

Figure 5.9 Valuable Take‐Aways

CHAPTER 6: DESIGNING THE EVALUATION: PART 2A: PROCESS EVALUATION

Figure 6.1 The Participatory Model for Evaluation: Design the Evaluation

Figure 6.2 Primary Purposes of Process Evaluation

Figure 6.3 Relationship Between Outcome and Activity Objectives and Outputs

Figure 6.4 Logic Model Framework for Process Evaluation

Figure 6.5 Two‐by‐Two Table

Figure 6.6 Changes in an Indicator Across Four Observations of Three Samples

Figure 6.7 Logic Model for the Wesleyan Church Public Health Ministry

Figure 6.8 Valuable Take‐Aways

CHAPTER 7: DESIGNING THE EVALUATION: PART 2B: OUTCOME EVALUATION

Figure 7.1 Logic‐Model Framework for Identifying Evaluation Questions

Figure 7.2 Logic Model–Inspired Questions

Figure 7.3 Two‐by‐Two Table

Figure 7.4 Changes in Conditions from the First to the Fourth Quarter

Figure 7.5 Pre (O1) and Post (O2) Test Scores

Figure 7.6 Valuable Take‐Aways

CHAPTER 8: COLLECTING THE DATA: QUANTITATIVE

Figure 8.1 The Participatory Framework for Evaluation: Collect the Data

Figure 8.2 Valuable Take‐Aways

CHAPTER 9: COLLECTING THE DATA: QUALITATIVE

Figure 9.1 Uses of Focus Groups

Figure 9.2 Requirements for a Focus Group Discussion

Figure 9.3 Mini Focus Group Interview

Figure 9.4 Research Process for the Interview Instrument

Figure 9.5 Assessing Incivilities in the Park

Figure 9.6 Distribution of Grocery and Corner Stores

Figure 9.7 Valuable Take‐Aways

CHAPTER 10: ANALYZING AND INTERPRETING QUANTITATIVE AND QUALITATIVE DATA: QUANTITATIVE (PART 1)

Figure 10.1 The Participatory Framework for Evaluation: Analyze and Interpret the Data

Figure 10.2 Bar‐Chart Format for Presenting Quantitative Data

Figure 10.3 Pie‐Chart Format for Presenting Quantitative Data

Figure 10.4 Screen Shot of Qualitative‐Data Coding from NVivo® QSR

Figure 10.5 Valuable Take‐Aways

CHAPTER 11: REPORTING EVALUATION FINDINGS

Figure 11.1 The Participatory Model for Evaluation—Report the Findings

Figure 11.2 Criteria for an Overall Assessment of the Initiative

Figure 11.3 Valuable Take‐Aways

CHAPTER 12: CASE STUDY: THE COMMUNITY ASSESSMENT

Figure 12.1 Theory of Change

Figure 12.2 Two‐by‐Two Table

CHAPTER 13: CASE STUDY: PROCESS EVALUATION

Figure 13.1 Model of the Ecological Model interacting With the Health‐belief Model and the Social Cognitive Theory

Figure 13.2 Leighster County Intervention Logic Model

Figure 13.3 Levels of Collaboration


EVALUATING PUBLIC AND COMMUNITY HEALTH PROGRAMS

MURIEL J. HARRIS

Copyright © 2017 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the Web at www.copyright.com. Requests to the publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online at www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Readers should be aware that Internet Web sites offered as citations and/or sources for further information may have changed or disappeared between the time this was written and when it is read.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold with the understanding that the publisher is not engaged in rendering professional services. If legal, accounting, medical, psychological or any other expert assistance is required, the services of a competent professional should be sought.

For general information on our other products and services, please contact our Customer Care Department within the U.S. at 800-956-7739, outside the U.S. at 317-572-3986, or fax 317-572-4002.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Names: Harris, Muriel J., 1955- author.
Title: Evaluating public and community health programs / Muriel J. Harris.
Description: 2nd edition. | Hoboken, New Jersey : Jossey-Bass & Pfeiffer Imprints, Wiley, [2017] | Includes bibliographical references and index.
Identifiers: LCCN 2016025319 (print) | LCCN 2016026276 (ebook) | ISBN 9781119151050 (pbk.) | ISBN 9781119151074 (epdf) | ISBN 9781119151081 (epub)
Subjects: MESH: Community Health Services-standards | Program Evaluation-methods | Data Collection-methods | Evaluation Studies as Topic | Community-Based Participatory Research
Classification: LCC RA440.4 (print) | LCC RA440.4 (ebook) | NLM WA 546.1 | DDC 362.1072-dc23
LC record available at https://lccn.loc.gov/2016025319

Cover Design: Wiley
Cover Photo: © Mats Anda/Getty Images, Inc.

This edition is dedicated to the memory of my father, Dr. Evelyn C. Cummings.

PREFACE

You may not yet know what the term evaluation means, and, like me all those years ago and many of my students now, you are probably still a little wary of the term and wondering where this is all leading. No matter where you are in your understanding of program and policy evaluation, my hope is that, whether you are a practitioner, a student, or both, you will find this book helpful on your journey and on your path to understanding. Just as I did many years ago, you probably evaluate what you do all the time without giving it a name. Evaluation is often an unconscious activity, carried out both informally and formally, before choosing among one or many options. Informal evaluations range from selecting a restaurant for dinner to choosing dishes from the menu. All the decisions you make along the way have implications for the success or failure of the outing. At the end of the evening, you go over the steps you took and decide whether the trip was worth it. If it wasn't, you may decide never to go to that restaurant again. So it is with program evaluation. We assess the resources and activities that went into a program, and then we determine whether the program or policy achieved what was intended and whether it was worth it to those who experienced it and to those who funded it.

Evaluation activities occur in a range of work‐related settings including community‐based organizations, coalitions and partnerships, government‐funded entities, the pharmaceutical industry, and the media. Program evaluations assess how an event or activity was conducted, how well it was conducted, and whether it achieved its goal. Evaluation determines the merit of a program or policy, and it forms the basis for evidence‐based decision‐making.

Evaluation is the cornerstone of program improvement and must be carefully planned and executed to be effective. It helps make the task of assessing the appropriateness of a public health intervention or the success of a program or policy explicit by using appropriate research methods. In evaluation, a plan is developed to assess the achievement of program objectives. The plan states the standards against which the intervention will be assessed, the scope of the evaluation, and appropriate tools and approaches for data collection and analysis.

There are many opportunities to conduct an evaluation during the life of an intervention, and the approaches to conducting the evaluation in each case will differ. The methods and tools for an evaluation that is conducted during the first few months of a program are different from those used when the program or participation in the program ends and the effectiveness of the program or policy is being assessed. In addition, during the life of the program, evaluation tools and approaches can be used to record program and policy participation and progress.

This book presents a model for evaluation and describes the approaches and methods for evaluating community health program and policy interventions. It is aimed at public health and community health students as well as practitioners who are new to program and policy evaluation, and it assumes no prior knowledge of evaluation. The approach to evaluation that is presented allows for the development of simple or complex evaluation plans while focusing on practical approaches. It encourages critical thinking and a reflective approach, with the full involvement of multiple stakeholders throughout the evaluation process. This book provides learners with a systematic, step-by-step approach to program evaluation.

The book is organized into 13 chapters. It discusses the community assessment and the development of the public health initiative as the precursors to the four-step participatory model for evaluation, with stakeholders at the center of each component. It frames program evaluation in the context of community-based participatory research. This edition also includes a chapter on process evaluation. Two case studies help the reader experience virtual evaluations, and mini case studies and opportunities to “Think About It” allow the reader to reflect on the material and improve critical-thinking skills. Valuable Take-Aways provide simple reminders of important concepts covered in each chapter. The appendixes provide additional resources for evaluation.

ACKNOWLEDGMENTS

This edition is dedicated to the memory of my father, Dr. Evelyn C. Cummings. My sincere appreciation also goes to my mother and all the members of my family in the diaspora for their support over the years. To all the friends who have been a part of my amazing journey and have inspired me to explore the world and follow my passion, thank you. I have had the pleasure of working and teaching in Liberia, Sierra Leone, South Africa, the United Kingdom, the United States, and most recently, in Ghana as a Fulbright Scholar, from where I draw much of my inspiration. I would, however, be remiss if I did not also remember the person who gave me the opportunity to write this book, Andy Pasternak. Sadly, he passed away just as we started working on this edition. Dad, Andy, and all the departed, continue to rest in perfect peace.

CHAPTER 1AN INTRODUCTION TO PUBLIC AND COMMUNITY HEALTH EVALUATION

LEARNING OBJECTIVES

Identify the uses and approaches of evaluation.

Describe preassessment evaluation.

List the principles of participatory evaluation.

Describe the links among community assessment, program implementation, and program evaluation.

Explain the ethical and cultural issues in evaluation.

Describe the value and role of stakeholders in evaluation.

Public health may be assessed by the impact it has on improving the quality of life of people and communities through the elimination or reduction of the incidence, prevalence, and rates of disease and disability. An additional aspect of public health is to create social and physical environments that promote good health for all. Healthy People 2020 describes health as being produced at multiple levels: households, neighborhoods, and communities. In addition, it describes the importance of social and economic resources for health, with a new focus on the social determinants of health. Its overarching goals are as follows:

Attain high quality, longer lives free of preventable disease, disability, injury, and premature death.

Achieve health equity, eliminate disparities, and improve the health of all groups.

Create social and physical environments that promote good health for all.

Promote quality of life, healthy development, and healthy behaviors across all life stages. (http://www.healthypeople.gov/2020/About-Healthy-People)

Public health, therefore, has an obligation to improve conditions and access to appropriate and adequate resources for healthy living for all people; this obligation extends to education, nutrition, exercise, and social environments. Public health programs and policies may be instituted at the local, state, national, or international level.

The Committee for the Study of the Future of Public Health defines the mission of public health as “fulfilling society's interest in assuring conditions in which people can be healthy” (Institute of Medicine, 2001, p. 7). Public and community health programs and initiatives exist in order to “do good” and to address social problems or to improve social conditions (Rossi, Lipsey, & Freeman, 2004, p. 17). Public health interventions address social problems or conditions by taking into consideration the underlying factors and core causes of the problem. Within this context, program evaluation determines whether public health program and policy initiatives improve health and quality of life.

Evaluation is often referred to as applied research. Using the word applied in the definition lends it certain characteristics that allow it to differ from traditional research in significant ways.

Evaluation is about a particular initiative. It is generally carried out for the purposes of assessing the initiative, and the results are not generalizable. However, with the scaling up of programs to reach increasingly large segments of the population, and with common outcome expectations and common measures, evaluations can increase their generalizability. Research traditionally aims to produce results that are generalizable to a whole population, place, or setting in a single experiment.

Evaluations are designed to improve an initiative and to provide information for decision‐making at the program or policy level; research aims to prove whether there is a cause‐and‐effect relationship between two entities in a controlled situation.

Evaluation questions are generally related to understanding why and how well an intervention worked, as well as to determining whether it worked. Research is much more focused on the end point, on whether an intervention worked and much less on the process for achieving the end result.

Evaluation questions are identified by the stakeholders in collaboration with the evaluators; research questions are usually dictated by the researcher's agenda.

Some approaches to evaluation, such as those that rely on determining whether goals and objectives are achieved, assess the effects of a program; the judicial approach asks for arguments for and against the program; and program accreditation seeks ratings of programs based on a professional judgment of their quality, usually preceded by a self-study. Consumer-oriented approaches are responsive to stakeholders and encourage their participation. This book focuses on the evaluation of public health programs primarily at the community and program level.

OVERVIEW OF EVALUATION

Rossi et al. (2004) describe evaluation as “the use of social research methods to systematically investigate the effectiveness of social intervention programs in ways that are adapted to their political and organizational environments and are designed to inform social action to improve social conditions” (p. 16). In addition, these authors caution that evaluation provides the best information possible under conditions that involve a political process of balancing interests and reaching decisions (p. 419).

Evaluation is the cornerstone for improving public health programs and is conducted for the purpose of making a judgment of a program's worth or value. Evaluation incorporates steps that specify and describe the activities and the process of evaluation; the initiative and why it is being evaluated; the measures needed to assess the inputs, outputs, and outcomes; and the methodology for collecting the information (data). In addition, an evaluation analyzes data and disseminates results in ways that ensure that the evaluation is useful.
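To make these elements concrete, here is a minimal sketch in Python; the class name, fields, and example values are illustrative assumptions, not a structure prescribed by this book. It simply records, in one object, the pieces an evaluation specifies: the initiative and why it is being evaluated, the measures for inputs, outputs, and outcomes, and the data-collection methodology.

from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    initiative: str          # what is being evaluated and why
    purpose: str             # the judgment the evaluation will support
    measures: dict = field(default_factory=dict)  # inputs, outputs, outcomes
    methods: list = field(default_factory=list)   # data-collection methodology

plan = EvaluationPlan(
    initiative="After-school physical fitness program",
    purpose="Judge the program's worth and guide improvement",
    measures={"inputs": ["staff hours", "budget"],
              "outputs": ["sessions held", "youth reached"],
              "outcomes": ["weekly minutes of physical activity"]},
    methods=["attendance logs", "participant survey"],
)
print(plan)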

This definition of evaluation, as adopted by the social sciences and public health, reflects a long tradition of evaluation approaches applied across a wide range of fields. Each approach has its own criteria, and evaluators choose the one that best suits their field, their inclination, or the purpose for which the evaluation is being conducted.

The next section provides a brief overview of the most widely used approaches: the consumer-based, decision-based, goal-free, participatory, expertise-oriented, and objectives-based approaches.

Consumer‐Based Approach

In the consumer‐based evaluation approach, the needs of the consumer are the primary focus and the role of the evaluator is to develop or select criteria against which the initiative or product is judged for its worth. The focus of this evaluation is on the cost, durability, and performance of the initiative or product being evaluated.

Decision‐Based Approach

This approach adopts a framework for conducting evaluation that includes the context, inputs, process, and product. It is also referred to as the context, input, process, and product (CIPP) approach. In including the context in the evaluation, this approach considers both the problem that is being addressed and the intervention that addresses it. In the context of public health, adopting this model requires understanding the public health problem being addressed and the program or policy intended to address it. The community or needs assessment forms the basis for developing the intervention. The input components of the evaluation assess the relationship between the resources available for the program and the activities identified to address the problem. Process evaluation, which is the third component of this model, asks the question, “Is the program being implemented as planned?” The last component, the product, assesses the extent to which goals and objectives have been met.
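As a concrete illustration, the short sketch below pairs each CIPP component with the kind of question it asks. The wording of the questions is paraphrased loosely from this paragraph and is illustrative only.

# The four CIPP components as guiding evaluation questions (illustrative)
cipp_questions = {
    "context": "What public health problem is being addressed, and what "
               "does the community or needs assessment show?",
    "input":   "Do the available resources match the activities planned "
               "to address the problem?",
    "process": "Is the program being implemented as planned?",
    "product": "To what extent have the goals and objectives been met?",
}

for component, question in cipp_questions.items():
    print(f"{component.upper():<8} {question}")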

Goal‐Free Approach

A goal‐free approach to evaluation is just that. The evaluation does not start out with any predefined goals or objectives related to the initiative being evaluated. It is expected that the initiative will have many outcomes that are not necessarily related to the objectives that may have been crafted when the initiative was initially conceived and started. Therefore, not having defined objectives allows the evaluator to explore a wide range of options for evaluation.

Participatory Approach

The participatory approach to evaluation values and integrates stakeholders into the process. Stakeholders in this process are the beneficiaries of the initiative's interventions. In this case, the evaluator serves as a technical advisor, allowing the stakeholders to take responsibility for most aspects of the evaluation process. The aim of this approach is to transfer skills in a co-learning setting and to empower stakeholders to become evaluators of their own initiatives.

Expertise‐Oriented Approach

The expertise-oriented approach expects the evaluator to be a content expert who draws on his or her experience to judge a program's worth. It may or may not be accompanied by clearly defined and explicit criteria. This approach is often used in judging competitions and, in public health and other fields, in accreditation. However, in accreditation, such as the accreditation of schools of public health, although the institution provides the self-study narrative based on predefined criteria, the judgment of the program's merits and the decision to grant accreditation are made by the accrediting body.

Objectives‐Based Approach

The objectives-based approach is the most commonly used in public health practice, especially now that calls for proposals for funding invariably require applicants to include objectives. The objectives for an initiative are developed following the community assessment and form the basis on which the initiative is developed, focusing on risk or protective factors expected to have an impact on the problem being addressed. Additional objectives that address concerns of the evaluator or the implementing team may be written as necessary to guide the evaluation and to provide the framework on which the evaluation questions and the evaluation design are built.
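As a rough sketch of how the SMART attributes used for objectives in this book can be made explicit, an evaluator might record each attribute alongside the objective so that a missing one is immediately visible. The objective and all field values below are invented for illustration.

objective = ("By June of year 2, increase from 40% to 60% the proportion "
             "of participants reporting 30 minutes of daily activity.")

smart_attributes = {
    "specific":      "proportion reporting 30 minutes of daily activity",
    "measurable":    "baseline 40%, target 60%",
    "achievable":    "supported by pilot results",
    "realistic":     "within current program resources",
    "time_oriented": "by June of year 2",
}

# Flag any attribute that has no supporting note
missing = [name for name, note in smart_attributes.items() if not note]
print("Missing SMART attributes:", missing if missing else "none")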

LEVELS OF EVALUATION

Evaluation at the Project and Program Level

Evaluation may be conducted at the project or program level. Public health organizations and agencies may achieve their overall mission through a number of stand-alone projects that together make up a program. For example, a local service organization may have activities that address many of the determinants of health, such as low literacy, lack of health insurance, low levels of physical activity, and poor nutrition. Addressing each of these determinants may occur within a department of health promotion, yet each may have an independent set of activities to achieve an overall goal to improve the health of minority, low-income populations within a jurisdiction. At the project level, process evaluation may be concerned with how each set of activities is being implemented and the extent to which it follows a previously established plan. The links among literacy, lack of insurance, nutrition, and physical activity are fairly well understood, so it is assumed that, in combination, sets of activities at the project level will, over a specified time, address common objectives, such as reducing the percentage of individuals diagnosed with heart disease or increasing the number of individuals with diabetes who achieve HbA1c levels below 7%. This evaluation takes place in the context of a carefully selected set of activities based on a theoretically sound community assessment, which provides the framework for an intervention designed to achieve a stated set of goals and objectives.
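An objective such as the HbA1c target above reduces, at evaluation time, to simple arithmetic over participant data. A minimal sketch, assuming a small set of hypothetical HbA1c readings:

# Hypothetical HbA1c readings (%) for program participants with diabetes
hba1c = [6.4, 7.2, 6.8, 8.1, 6.9, 7.5, 6.5, 6.2]

at_target = [r for r in hba1c if r < 7.0]     # objective: HbA1c below 7%
pct = 100 * len(at_target) / len(hba1c)

print(f"{len(at_target)} of {len(hba1c)} participants "
      f"({pct:.0f}%) achieved HbA1c < 7%")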

Evaluation at the Organization Level

Evaluation may be concerned not only with the projects and programs an organization runs but also with the organization's own development needs as it works to provide services. Evaluating the organization may involve assessing the extent to which it is able to implement its strategic plan, achieve its stated mission, and reach the populations it intends to serve. It may also assess organizational capacity and relationships with others. Organizational development components that may be assessed include the capacity of the staff to address present and emerging health problems in the community and the extent to which projects and programs are institutionalized for long-term sustainability. Organizational culture, climate, and competency to deal effectively with the populations served may also be foci of evaluation. Policy development and implementation at the organizational level may likewise form the basis for evaluation.

Evaluation at the Community Level

Evaluation at the community level may examine community engagement in projects and programs, the provision of services, and the social norms of the community. Using community organization theory as the basis for the evaluation, the extent to which communities have embraced new ideas, the extent of social networking, and the level of social capital may be critical components of an evaluation. The empowerment continuum described by Rissel (1994) assesses individual and community capacity to act in ways that bring about change and ultimately engage in collective political and social action.

Evaluation of Local, State, and National Level Policies

The evaluation of local, state, and national level policies is generally carried out by organizations that have the capacity to coordinate, collect, and analyze large amounts of data from across jurisdictions. At a local or state health department, a research unit may have the responsibility to collect statewide data for the purpose of evaluating the impact of community-wide efforts. The Behavioral Risk Factor Surveillance System (BRFSS) survey, supported by the Centers for Disease Control and Prevention (CDC), serves the purpose of continual assessment of Healthy People goals by tracking risk factors and disease-related outcomes. When state and national level policies are enacted, the BRFSS may serve to monitor changes at the population level, in addition to other forms of data collection that may be required for evaluation. For example, when the State Children's Health Insurance Program (SCHIP) was enacted by Congress in 1997 and Title XXI of the Social Security Act was created, the aim was to expand health insurance to uninsured low-income children under 19 years of age. The evaluation plan was designed with seven assessment components:

Analysis of SCHIP enrollment patterns

Analysis of trends and rates of uninsured children

Synthesis of published and unpublished literature on retention, substitutions, and access to care

Special studies on outreach and access to care

Analysis of outreach and enrollment effectiveness

Case study of program implementation

Analysis of SCHIP performance measures

(https://www.cms.gov)

As with all evaluation, state and national level evaluations focus on the effect on the larger population (impact) rather than on the more limited outcome level of risk factors. However, special studies, as evidenced by the SCHIP evaluation, may focus on program implementation and on assessing changes in risk factors (for example, assessing enrollment effectiveness) rather than only on trends and rates of uninsured children or on the four core health measures: well-child visits in the first 15 months of life and at ages 3–6, the use of appropriate medication for asthma, and visits to primary care providers.

PREASSESSMENT EVALUATIONS

One major assumption in evaluating an initiative is that it was well planned and fully implemented. This, however, is not always the case, and the evaluation team may find it must balance the expense of undertaking the evaluation against the likely value of its results. If the evaluation is unlikely to provide information that is useful to the organization, it may be expedient to consider an alternative use of resources, such as answering a different question. The question becomes, “Will this evaluation provide information that is useful to the stakeholders for decision-making or program improvement?” This contrasts with the kinds of questions that precede a full evaluation of the initiative, which are, “Is the initiative being implemented according to the plan?” and “Did the initiative have an effect on the beneficiaries?” If the evaluator is unable to provide the stakeholders with information that is useful for decision-making, program improvement, or replication, consultation may be necessary with regard to the type of evaluation that is required. The decision about the approach to the evaluation is made in consultation with the stakeholders. A decision to conduct a preassessment recognizes the need to assess the initiative's readiness to be evaluated rather than the initiative's implementation (process evaluation) or outcomes (outcome evaluation).

Components of a feasibility evaluation may include:

Assessing the readiness of executives, staff, and stakeholders to support an evaluation and to use the results.

Determining whether the stated goals and objectives are clear and reflect the intended direction of the organization.

Assessing the logic of the program and its ability to achieve the stated goal and objectives given the initiative's activities and resources.

Assessing whether data collected on the program's implementation activities are likely to be suitable for showing the effects of the program.

Assessing whether processes exist or can be developed to provide sufficient information to assess the program's activities, outputs, and outcomes.

Assessing access to program participants, program staff, and other stakeholders.

Assessing the logistics and resources available to conduct an evaluation.

One of the detailed tasks in carrying out a preassessment is to work with the organization to understand the epidemiological and community-data-based rationale for the initiative; its interventions; the resources for the intervention; and the social, political, economic, and cultural context in which it operates. In assessing the interventions, the evaluator identifies the intervention components, understands the initiative's theory of change, and creates a logic model. The logic model shows the relationship between the activities implemented to achieve the objectives and the resources devoted to them. The preassessment determines the existence (or nonexistence) of specific, measurable, achievable, realistic, and time-oriented short-term, intermediate, and long-term outcome objectives.
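A logic model can be thought of as a structured mapping from inputs through activities and outputs to short-term, intermediate, and long-term outcomes. The sketch below shows one way to hold such a mapping in code; every entry is hypothetical.

# A hypothetical logic model: resources -> activities -> outputs -> outcomes
logic_model = {
    "inputs":     ["funding", "trained staff", "community partners"],
    "activities": ["weekly nutrition classes", "neighborhood walking groups"],
    "outputs":    ["24 classes delivered", "150 residents reached"],
    "outcomes": {
        "short_term":   "increased knowledge of healthy eating",
        "intermediate": "more days per week with 30+ minutes of activity",
        "long_term":    "reduced incidence of type 2 diabetes",
    },
}

for stage, entries in logic_model.items():
    print(f"{stage}: {entries}")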

Whether the preassessment is completed formally or informally, the result may be either that the evaluation is able to go ahead or that it has to be delayed until various conditions are met. Meeting the conditions for evaluation varies from one organization to the next. One organization may not have a document detailing its structure or processes with regard to its interventions, and it may require the evaluation team to work with it on developing documents describing the community assessment findings; the goals and objectives; the theory undergirding the intervention; the activities to address the problem and achieve the goals and objectives; or the tools for carrying out an evaluation. Another organization may only require data-management and evaluation tools that allow for appropriate and adequate data collection, whereas yet another may need help ensuring that plans for data analysis are developed. On analysis of the existing documents, it may become clear that the initiative requires restructuring to ensure that it uses a best-practice approach and has the capacity to get to outcomes. Such actions ensure that in the future the organization and the intervention have the components and tools essential for undertaking an appropriate and meaningful evaluation. Components for preassessment of a program's readiness for evaluation are depicted in Figure 1.1.

FIGURE 1.1 Components for Preassessment of Program's Readiness for Evaluation

THE PARTICIPATORY APPROACH TO EVALUATION

A participatory model for evaluation views evaluation as a team effort that involves people internal and external to the organization with varying levels of evaluation expertise in a power‐sharing and co‐learning relationship. Patton (2008, p. 175) identifies nine principles of participatory evaluation:

The process involves participants in learning skills.

Participants own the evaluation and are active in the process.

Participants focus the evaluation on what they consider important.

Participants work together as a group.

The whole evaluation process is understandable and meaningful to the participants.

Accountability to oneself and to others is valued and supported.

The perspectives and expertise of all persons are recognized and valued.

The evaluator facilitates the process and is a collaborator and a resource for the team.

The status of the evaluator relative to the team is minimized (to allow equitable participation).

A participatory model for evaluation embraces the stakeholders in the process and utilizes approaches to help the organization develop the capacity to evaluate its own programs and institute program improvement (Fetterman, Kaftarian, & Wandersman, 1996). The community‐based participatory‐research (CBPR) approach (Israel, Eng, Schulz, & Parker, 2005) proposes nine guiding principles that support effective research, which are easily incorporated into participatory program evaluation of public health initiatives. CBPR principles require that researchers

Acknowledge community as a unit of identity in which people have membership; it may be identified as a geographical area or a group of individuals.

Build on strengths and resources of the community and utilize them to address the needs of the community.

Facilitate a collaborative, equitable partnership in all phases of research, involving an empowering and power‐sharing process that attends to social inequalities with open communication among all partners and an equitable share in the decision‐making.

Foster co‐learning and capacity building among all partners with a recognition that people bring a variety of skills, expertise, and experience to the process.

Integrate and achieve a balance between knowledge generation and intervention for the mutual benefit of all partners with the translation of research findings into action.

Focus on the local relevance of public health problems from an ecological perspective that addresses the multiple determinants of health including biological, social, economic, cultural, and physical factors.

Involve systems development using a cyclical and iterative process that includes all the stages of the research process from assessing and identifying the problem to action.

Disseminate results to all partners and involve them in the wide dissemination of results in ways that are respectful.

Involve a long-term process and commitment to sustainability in order to build trust and be able to address multiple determinants of health over an extended period. (Israel et al., 2005, pp. 7–9)

Important outcomes of CBPR approaches are building community infrastructure and community capacity, knowledge, and skills (O'Fallon & Dearry, 2002). The participatory model, through its engagement of stakeholders throughout the process, fosters the ideals of cooperation, collaboration, and partnerships, and ensures co‐learning and empowerment.

THE PARTICIPATORY MODEL FOR EVALUATION

The Framework for Program Evaluation developed by Milstein, Wetterhall, and the Evaluation Group (2000) has six evaluation steps: Step 1, engage stakeholders; Step 2, describe the program; Step 3, focus the evaluation design; Step 4, collect credible evidence; Step 5, justify conclusions; and Step 6, ensure use and share lessons learned. The framework is associated with four standards: utility, feasibility, propriety, and accuracy (Figure 1.2). It has been adopted and used in the evaluation of public health programs since its development and its subsequent publication as a monograph by the Centers for Disease Control and Prevention. The participatory model for evaluation that is introduced and expounded in this book builds on this approach. Like the framework for program evaluation, the participatory model uses an objectives-based approach to evaluation and draws on concepts from the other approaches outlined earlier.

FIGURE 1.2 Framework for Program Evaluation in Public Health

Source: Milstein, Wetterhall, and the Evaluation Group (2000).

The participatory model for evaluation incorporates community-based participatory research principles (Israel et al., 2005) and supports a collaborative, equitable partnership in all phases of the evaluation process. It fosters co-learning and capacity building while acknowledging and utilizing existing experience and expertise. It incorporates all the elements of the evaluation process but does so in a flexible and simplified way. It recognizes the often iterative and integrative nature of evaluation in designing the evaluation; collecting, analyzing, and interpreting the data; and reporting the findings. It links the evaluation process to community assessment and program planning and implementation in a deliberative and iterative way. Stakeholders' active participation in the entire process provides flexibility in the evaluation and allows it to be customized to the users' needs. Because conducting an evaluation depends on a thorough knowledge and understanding of a program's development and implementation, this book provides an overview of these critical precursors to evaluation: the community assessment and the development and implementation of programs. The model recognizes the dynamic nature of programs and the changing needs of the evaluation over time; hence the cyclical nature of the process.

The participatory model for evaluation consists of four major steps:

Design the evaluation.

Collect the data.

Analyze and interpret the data.

Report the findings.

The participatory model for evaluation (Figure 1.3), which is used to evaluate public health program or policy initiatives and is the focus of this book, acknowledges the participatory nature of evaluation, recognizes that the community assessment and the public health initiative are precursors to an evaluation, and adopts an objectives-based approach to evaluation. In this model, stakeholders who have a vested interest in the program's development, implementation, or results are part of the evaluation team and are involved in each step of the evaluation process. In addition to acknowledging the inclusion of stakeholders as good practice in evaluation, the Public Health Leadership Society (2002) recognizes their inclusion as being ethical. Its third principle states that public health “policies, programs, and priorities should be developed and evaluated through processes that ensure an opportunity for input from community members” (p. 4). Stakeholders provide multiple perspectives and a deep understanding of the cultural context in which an initiative is developed and an evaluation is conducted.

FIGURE 1.3 The Participatory Model for Evaluation

THE PRECURSORS TO PROGRAM EVALUATION

When a community or individual identifies a public health problem among a population, steps are taken to understand the problem. These steps constitute community assessments, which define the problem using qualitative and quantitative measures. They assess the extent of the problem, who is most affected, and the individual and environmental factors that may be contributing to and exacerbating the problem. Community assessments determine the activities that will potentially lead to change in the factors that put the population at risk of disease and disability. Programs are planned and implemented based on the findings of the community assessment and the resources available. The Merriam-Webster dictionary describes a community as “a unified body of individuals” who have common interests; a common history; or common social, economic, and political interests. The unified body of individuals may be found in homes, workplaces, or houses of worship, and community assessments may be expanded to include assessing the organizational structures through which initiatives are developed and implemented.

The terms initiative and intervention are used in this book to refer to a program or policy that addresses a health or social concern identified by the community assessment. The health or social concern may be influenced by a variety of factors at the individual, interpersonal, community, organizational, or policy level. Details about conducting a community assessment and developing initiatives are discussed in Chapters 2 and 3. Examples of initiatives are a program for low‐income families to increase their knowledge and skills with regard to accessing health care and an after‐school program to improve physical fitness. Initiatives may also be based on the development of a public or organizational policy that also addresses a public health concern. Programs may modify the environment to improve access to conditions that support health, such as improving conditions for walking in a community or improving access to fresh produce. At the organizational level, factors that influence access to services may be subject to development and training and related initiatives. Initiatives can also develop or change public policy so that more people can have health insurance and improved access to health care. Another policy that you are no doubt familiar with is the seat‐belt policy that was enacted to reduce the risk of injury and mortality associated with vehicular accidents.

An initiative or intervention may have multiple components such as activities, programs, or policies associated with it. One example is preventing the onset of diabetes, which requires a multipronged intervention for those at risk. Individual components of the initiative may include physical activity, diet control, outreach education, and policies that increase the availability of fresh produce and access to opportunities for physical activity. In addition, access to health care, which ensures that screening, case management, and care are available when needed, is a critical component of assuring health. Evaluating a multipronged initiative requires assessing both the process and the outcomes of each component as well as the overall effect of the initiative on preventing diabetes among the target population.

Evaluation activities may occur at multiple points on a continuum, from planning the initiative, through implementation, to assessing its effect on the populations served and on meeting the goals outlined in the Healthy People objectives (U.S. Department of Health and Human Services, 2020). The Healthy People documents identify the most significant preventable threats to health and establish national goals to reduce these threats. Individuals, groups, and organizations are encouraged to integrate the Healthy People objectives into the development of initiatives. Businesses can use the framework to build worksite health‐promotion activities; schools and colleges can undertake programs and activities to improve the health of students and staff; health care providers can encourage their patients to pursue healthy lifestyles; and community‐based, civic, and faith‐based organizations can develop initiatives to address health issues in a community, especially among hard‐to‐reach populations, and to ensure that everybody has access to information and resources for healthy living. Determining how effectively programs and policies are implemented and what impact such initiatives have on the populations they reach is the task of program‐ or policy‐evaluation activities. Although evaluation activities may use different approaches, their function is similar across disciplines. Formative evaluation is the appropriate approach during the planning and development phase of an initiative; process monitoring and evaluation are useful during the implementation phase and when the aim of the evaluation is to understand what went into the program and how well it is being implemented.

Outcome evaluations are carried out after programs have been in place for a time and are considered stable; such evaluations assess the effect of a program or policy on individuals or a community. Outcome evaluation aims to understand whether a program was effective and achieved what it set out to accomplish. Impact evaluation is the last stage of the evaluation continuum. It is used when multiple programs and policy initiatives affect the quality of life of a large population over a long period. The combined effect of multiple interventions on the population or a subpopulation is assessed through changes in quality of life and in the incidence and prevalence of disease or disability within a jurisdiction. A jurisdiction is a legally defined unit overseen by political and administrative structures. Discussions of impact evaluation may be found in other texts. Figure 1.4 illustrates the context of evaluation; the specific kinds of evaluation are discussed in detail in the next section.

FIGURE 1.4 Evaluation in Context
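Because impact evaluation tracks population‐level measures, it may help to recall how incidence and prevalence are conventionally defined; the following is a minimal sketch using hypothetical figures, not data from any program discussed in this book.

\[
\text{Prevalence} = \frac{\text{existing cases at a point in time}}{\text{total population}}
\qquad
\text{Incidence rate} = \frac{\text{new cases during a period}}{\text{population at risk during that period}}
\]

For example, in a hypothetical jurisdiction of 50,000 residents in which 2,500 people are living with diagnosed diabetes, the prevalence is 2,500 / 50,000, or 5 percent; if 300 new cases are diagnosed over the following year among the 47,500 residents at risk, the annual incidence rate is 300 / 47,500, or about 6.3 cases per 1,000 persons at risk.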

The Evaluation Team

The evaluation team is led by an experienced evaluator who may be internal or external to the organization. Historically, the evaluator has been an outsider who comes in to give an independent, “unbiased” review of the initiative. This approach has limited evaluation to a few institutions, and specifically to occasions when funding for evaluation is available. Such evaluators may hold titles more akin to those of researchers and may be associated with a local university or community college. More recently, agencies and large nonprofit organizations have hired in‐house evaluators or modified staff roles to provide evaluation services and thereby strengthen the overall capacity of the organization. A significant advantage is that the agency may be able to sustain evaluation over time at a lower cost. Irrespective of the approach used, participatory models include stakeholders in the evaluation's design and implementation in order to facilitate the use of the findings.

There are advantages and disadvantages to choosing an internal or an external evaluator. An internal evaluator who has the expertise to conduct an evaluation and who knows the program well may also have easy access to materials, logistics, resources, and data. However, internal evaluators are often too busy, may be less objective than those external to the organization, and may have limited expertise to conduct a full and complete evaluation. Even so, an internal evaluator is an important and valuable resource for an external evaluator who may be contracted to conduct the evaluation.

An external evaluator is often viewed as more credible and more objective than someone from inside the organization, and as better able to offer additional insights for the development of the program and to serve as a facilitator. An external evaluator may also provide human and material resources and expertise that are not available within the organization. On the other hand, external evaluators may not know the program, policies, and procedures of the organization; may not understand the program context; and may be perceived as adversarial or as an imposition. This may be particularly true because, in the process of conducting the evaluation, the evaluator will require access to staff and other stakeholders. The participatory approach to evaluation encourages and supports all stakeholders' engagement in the process from start to finish, and the relationship between an internal evaluator and the external evaluator may be the difference between an evaluation considered useful and credible and one that is dismissed and left on the shelf to gather dust. It is important, therefore, to nurture this relationship should an internal evaluator be involved.

Whether an evaluator is internal or external, the person who has the primary responsibility for the evaluation should have these essential competencies:

Know and maintain professional norms and values, including evaluation standards and principles.

Use expertise in the technical aspects of evaluation such as design, measurement, data analysis, interpretation, and sharing results.

Use situational analysis to understand and attend to the contextual and political issues of an evaluation.

Understand the nuts and bolts of evaluation, including contract negotiation, budgeting, and identifying and coordinating needed resources for a timely evaluation.

Be reflective regarding one's practice and be aware of one's expertise as well as the need for professional growth.

Have interpersonal competence in written communication and the cross‐cultural skills needed to work with diverse groups of stakeholders. (Ghere, King, Stevahn, & Minnema, 2006; King, Stevahn, Ghere, & Minnema, 2001)

In addition, five ethical principles of program evaluation were adopted and ratified by the American Evaluation Association. These principles reflect the fundamental ethical principles of autonomy, nonmaleficence, beneficence, justice, and fidelity (Veatch, 1997) and as such provide an ethical compass for action and decision‐making throughout the evaluation process. These principles are the following:

Systematic inquiry: Evaluators conduct systematic, data‐based inquiries. They adhere to the highest technical standards; explore the shortcomings and strengths of evaluation questions and approaches; communicate the approaches, methods, and limitations of the evaluation accurately; and make it possible for others to understand, interpret, and critique their work.

Competence: Evaluators provide competent performance to stakeholders. They ensure that the evaluation team possesses the knowledge, skills, and experience required; that it demonstrates cultural competence; that it practices within its limits; and that it continuously provides the highest level of performance.

Integrity/honesty: Evaluators display honesty and integrity in their own behavior and attempt to ensure the honesty of the entire evaluation process. They negotiate honestly and disclose any conflicts of interest or values as well as any sources of financial support. They disclose changes to the evaluation, resolve any concerns, accurately represent their findings, and attempt to prevent any misuse of those findings.

Respect for people: Evaluators respect the security, dignity, and worth of respondents, program participants, clients, and other stakeholders. They understand the context of the evaluation, abide by ethical standards, and conduct the evaluation and communicate its results in a way that respects the stakeholders' dignity and worth, fosters social equity, and takes all persons into account.