Internal Evaluation in the 21st Century (E-Book)

Description

Nowadays, a considerable amount of evaluation work is implemented internally, both nationally and across the world. As such, it is exceedingly important for evaluators and organizations to be aware of the issues in designing and implementing internal evaluation to realize its potential for enhancing organizational growth, competitive advantage, and social impact. This issue includes perspectives on internal evaluation from experienced evaluation practitioners from different fields and organizations who share theoretical and practical examples and case studies in promoting and conducting internal evaluation. The chapters:

* Highlight societal and organizational changes that have shaped the current trends in internal evaluation
* Discuss foundational issues in internal evaluation
* Provide rich illustrations of internal evaluation practice in different settings with diverse foci (customer-driven vision and a results-based orientation for evaluation, accountability and development, and building evaluation capacity).

This is the 132nd volume of the Jossey-Bass quarterly report series New Directions for Evaluation, an official publication of the American Evaluation Association.

Page count: 238

Year of publication: 2011




Contents

Editors’ Notes

Chapter 1: Internal Evaluation a Quarter-Century Later: A Conversation With Arnold J. Love

Chapter 2: Internal Evaluation, Historically Speaking

The 1960s and the Expansion of Evaluation Practice

The 1980s and Evaluation for Efficiency and Cost-Effectiveness

Internal Evaluation and Decision Making

The 1990s and the New Public Management Influence on Evaluation

The Potential Contradictory Consequences of NPM on Program Evaluation

“Is Internal or External Evaluation Better?” Is the Wrong Question

Future Trends for Internal Evaluation

Chapter 3: Beyond Being an Evaluator: The Multiplicity of Roles of the Internal Evaluator

Defining Internal Evaluation

Expanding the Traditional Evaluator Role

Essential Roles Associated With Internal Evaluation

Change Agent

Educator About Evaluation

ECB Practitioner

(Management) Decision-Making Supporter

Consultant

Researcher/Technician

Advocate

Organizational Learning Supporter

Conclusion

Chapter 4: Predicament and Promise: The Internal Evaluator as Ethical Leader

The Predicament of the Internal Evaluator

The Promise of the Internal Evaluator: The View From Nowhere

The Promise of the Internal Evaluator: Public Ethics and Public Deliberation

New Directions in Public Ethics

Chapter 5: Evaluation Survivor: How to Outwit, Outplay, and Outlast as an Internal Government Evaluator

A Brief History of Evaluation in the Bureau of Educational and Cultural Affairs

The Challenge and the Change

The Conquest

The Conclusions

Evaluation Kill: Shelf-Life or Half-Life

The Counsel: Implications for Internal Evaluation

Chapter 6: Internal Evaluation in American Public School Districts: The Importance of Externally Driven Accountability Mandates

A Brief History of Internal Educational Evaluation in the United States

Components of Internal Evaluation in U.S. School Districts

One District’s Story

What Distinguishes Internal Evaluation in American School Districts?

Implications for Internal Educational Evaluation

Chapter 7: Designing Internal Evaluation for a Small Organization With Limited Resources

Building Internal Evaluation: A Lesson From Capacity Building

Scenario Example: The Army Inspector General (AIG)

Evaluating Capacity-Building Efforts

Keys to Success of Capacity Building by an Internal Evaluator in a Small Organization

Conclusion

Chapter 8: Issues in Internal Evaluation: Implications for Practice, Training, and Research

Contextual Factors

Situational Responsiveness

Use of Information Technology

Ethical Issues

Evaluator Credibility

Evaluation Training

Research on Internal Evaluation

Index

Internal Evaluation in the 21st Century

Boris B. Volkov, Michelle E. Baron (eds.)

New Directions for Evaluation, no. 132

Sandra Mathison, Editor-in-Chief

Copyright © 2011 Wiley Periodicals, Inc., A Wiley Company, and the American Evaluation Association. All rights reserved. No part of this publication may be reproduced in any form or by any means, except as permitted under sections 107 and 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher or authorization through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923; (978) 750-8400; fax (978) 646-8600. The copyright notice appearing at the bottom of the first page of a chapter in this journal indicates the copyright holder’s consent that copies may be made for personal or internal use, or for personal or internal use of specific clients, on the condition that the copier pay for copying beyond that permitted by law. This consent does not extend to other kinds of copying, such as copying for general distribution, for advertising or promotional purposes, for creating collective works, or for resale. Such permission requests and other permission inquiries should be addressed to the Permissions Department, c/o John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030; (201) 748-6011, fax (201) 748-6008, www.wiley.com/go/permissions.

Microfilm copies of issues and articles are available in 16mm and 35mm, as well as microfiche in 105mm, through University Microfilms Inc., 300 North Zeeb Road, Ann Arbor, MI 48106-1346.

New Directions for Evaluation is indexed in Cambridge Scientific Abstracts (CSA/CIG), Contents Pages in Education (T & F), Higher Education Abstracts (Claremont Graduate University), Social Services Abstracts (CSA/CIG), Sociological Abstracts (CSA/CIG), and Worldwide Political Sciences Abstracts (CSA/CIG).

New Directions for Evaluation (ISSN 1097-6736, electronic ISSN 1534-875X) is part of The Jossey-Bass Education Series and is published quarterly by Wiley Subscription Services, Inc., A Wiley Company, at Jossey-Bass, One Montgomery Street, Suite 1200, San Francisco, CA 94104-4594.

Subscriptions cost $89 for U.S./Canada/Mexico; $113 international. For institutions, agencies, and libraries, $295 U.S.; $335 Canada/Mexico; $369 international. Prices subject to change.

Editorial correspondence should be addressed to the Editor-in-Chief, Sandra Mathison, University of British Columbia, 2125 Main Mall, Vancouver, BC V6T 1Z4, Canada.

www.josseybass.com

New Directions for Evaluation

Sponsored by the American Evaluation Association

Editor-in-Chief
Sandra Mathison, University of British Columbia

Associate Editors
Saville Kushner, University of the West of England
Patrick McKnight, George Mason University
Patricia Rogers, Royal Melbourne Institute of Technology

Editorial Advisory Board
Michael Bamberger, Independent consultant
Gail Barrington, Barrington Research Group Inc.
Nicole Bowman, Bowman Consulting
Huey Chen, University of Alabama at Birmingham
Lois-ellin Datta, Datta Analysis
Stewart I. Donaldson, Claremont Graduate University
Michael Duttweiler, Cornell University
Jody Fitzpatrick, University of Colorado at Denver
Gary Henry, University of North Carolina, Chapel Hill
Stafford Hood, Arizona State University
George Julnes, Utah State University
Jean King, University of Minnesota
Nancy Kingsbury, U.S. Government Accountability Office
Henry M. Levin, Teachers College, Columbia University
Laura Leviton, Robert Wood Johnson Foundation
Richard Light, Harvard University
Linda Mabry, Washington State University, Vancouver
Cheryl MacNeil, Sage College
Anna Madison, University of Massachusetts, Boston
Melvin M. Mark, The Pennsylvania State University
Donna Mertens, Gallaudet University
Rakesh Mohan, Idaho State Legislature
Michael Morris, University of New Haven
Rosalie T. Torres, Torres Consulting Group
Elizabeth Whitmore, Carleton University
Maria Defino Whitsett, Austin Independent School District
Bob Williams, Independent consultant
David B. Wilson, University of Maryland, College Park
Nancy C. Zajano, Learning Point Associates

Editorial Policy and Procedures

New Directions for Evaluation, a quarterly sourcebook, is an official publication of the American Evaluation Association. The journal publishes empirical, methodological, and theoretical works on all aspects of evaluation. A reflective approach to evaluation is an essential strand to be woven through every issue. The editors encourage issues that have one of three foci: (1) craft issues that present approaches, methods, or techniques that can be applied in evaluation practice, such as the use of templates, case studies, or survey research; (2) professional issues that present topics of import for the field of evaluation, such as utilization of evaluation or locus of evaluation capacity; (3) societal issues that draw out the implications of intellectual, social, or cultural developments for the field of evaluation, such as the women’s movement, communitarianism, or multiculturalism. A wide range of substantive domains is appropriate for New Directions for Evaluation; however, the domains must be of interest to a large audience within the field of evaluation. We encourage a diversity of perspectives and experiences within each issue, as well as creative bridges between evaluation and other sectors of our collective lives.

The editors do not consider or publish unsolicited single manuscripts. Each issue of the journal is devoted to a single topic, with contributions solicited, organized, reviewed, and edited by a guest editor. Issues may take any of several forms, such as a series of related chapters, a debate, or a long article followed by brief critical commentaries. In all cases, the proposals must follow a specific format, which can be obtained from the editor-in-chief. These proposals are sent to members of the editorial board and to relevant substantive experts for peer review. The process may result in acceptance, a recommendation to revise and resubmit, or rejection. However, the editors are committed to working constructively with potential guest editors to help them develop acceptable proposals.

Sandra Mathison, Editor-in-Chief

University of British Columbia

2125 Main Mall

Vancouver, BC V6T 1Z4

Canada

e-mail: [email protected]

Editors’ Notes

The growth of internal evaluation is both remarkable and timely. Internal evaluation can be considered a key factor for the overall success of the evaluation field—considering that what happens within an organization is in many ways reflective of the overall attention and trust given to evaluation. Gradually, evaluators and their organizations are bringing evaluative thinking to an internal level, strategically focusing on organizational development and improvement. The internal evaluators are using diverse evaluation tools to conduct evaluations and make their results useful, while at the same time building organizational capacity for integrating evaluation into daily activities.

Given the growing focus on evidence-based policies, organizational accountability, and program improvement, internal evaluation has increasingly become an important subfield, or specialty area, in the field of evaluation. A recent memorandum to the heads of executive departments and agencies from the director of the Office of Management and Budget (OMB), Peter Orszag, is titled "Increased Emphasis on Program Evaluations." The essence of the document is that, in collaboration with other key agencies, OMB is planning to reconstitute an interagency working group of evaluation experts under the Performance Improvement Council. The objectives of the working group include helping build agency evaluation capacity; creating effective evaluation networks that draw on the best expertise inside and outside the federal government; and sharing best practices from agencies with strong, independent evaluation offices. Agencies are encouraged to propose pertinent changes or reforms and request funding to strengthen their internal evaluation expertise and processes.

This fact, among others, supports the timeliness of this issue. A considerable amount of evaluation work is implemented internally, both nationally and across the world. Further evidence is the considerable number of sessions and presentations related to internal evaluation offered during the annual conferences of the American Evaluation Association (AEA). The foci of these sessions have taken hold of the field as well, with an emphasis on organizational interrelations, the mainstreaming of evaluation, building evaluation capacity, and creating dynamic evaluation cultures. The heightened interest and activity in the arena of internal evaluation also resulted in the formation of the AEA's Internal Evaluation TIG (topical interest group) in 2010, which currently has around 400 members. Also important is the fact that the last, and only, edition of NDE dedicated to the topic of internal evaluation was published close to three decades ago (see "Developing Effective Internal Evaluation" by Love, 1983). It is a new century now, and a thoughtful discussion of the state of the field of internal evaluation is overdue.

It is in this context that we introduce this issue on internal evaluation. The issue includes evidence-based perspectives on internal evaluation from a number of experienced evaluation practitioners from different fields and organizations, who share practical examples and case studies of their work promoting and conducting internal evaluation in different areas of social programming. The expected readership is a diverse audience of internal and external evaluators, organization development practitioners interested in program evaluation, and the many stakeholders who are engaged, or are thinking about becoming engaged, in evaluation and who share a commitment to accountability and improvement. This includes professionals in different areas of social programming and specialty evaluation practice (e.g., education, government, and nonprofit).

The issue has the following structure. The first two chapters highlight societal and organizational changes that have influenced the evaluation field and shaped the current trends in internal evaluation. Serving as a springboard for the rest of the issue, Chapter 1 contains an interview by Boris B. Volkov with Arnold J. Love, an internationally recognized internal evaluation expert and author. The choice of the interviewee was by no means accidental. One of the most cited authors on internal evaluation, Arnold J. Love was the editor of the “Developing Effective Internal Evaluation” issue of New Directions for Program Evaluation. He shares his experiences and understanding of the development of internal evaluation. Chapter 2 by Sandra Mathison presents an overview of the historical context of internal evaluation from the 1960s to the present, arguing that the growth of the internal evaluation function in organizations has been mainly due to its perceived importance.

The next two chapters are concerned with foundational issues in internal evaluation. Chapter 3 by Boris B. Volkov offers an overview, grounded in the evaluation literature, of the essential internal evaluator roles from a macrolevel perspective. The systematic advancing of evaluation capacity, evaluative thinking, and learning in organizations is suggested as one of the future directions for the internal evaluator's progressively changing and expanding roles. In Chapter 4, Francis J. Schweigert focuses on the ethical aspects of internal evaluation practice. The internal evaluator's ethical promise lies in his or her unique position as a co-worker, within the organization, viewing the organization's work and results with the eye of an impartial spectator.

Three more chapters provide rich illustrations of internal evaluation practice in different settings (federal government, public education, military, as well as small organizations) with specific foci (customer-driven vision and a results-based orientation for evaluation, accountability and development, and building evaluation capacity).

In Chapter 5, Ted Kniker addresses the key challenges and opportunities faced by internal government evaluators. His case study is drawn from his experiences as the chief of evaluation for public diplomacy at the U.S. Department of State and as a consultant assisting federal agencies to enhance internal evaluation functions. His internal evaluation office was deemed a best practice by the State Department Office of Inspector General and was recommended to be a model for other U.S. government evaluation units by the Office of Management and Budget.

Chapter 6 by Jean A. King and Johnna A. Rohmer-Hirt discusses the processes of internal evaluation in public education in the United States in general and also illuminates a 10-year reflective case study of internal evaluation in the largest district in Minnesota. The authors believe that the form and viability of internal evaluation are shaped by the unique requirements of the educational sector and that finding resources to sustain meaningful evaluation efforts over time remains a formidable challenge in American public education.

In Chapter 7, Michelle E. Baron presents a case study of her experiences as an inspector general with the military and outlines strategies to develop and maintain internal evaluation systems for small organizations at the early, midterm, and seasoned levels of evaluation capacity. The author believes that internal evaluation can thrive in organizations regardless of their size or resource limitations.

The closing Chapter 8 by Boris B. Volkov and Michelle E. Baron offers a summary reflection on the key issues and perspectives that emerged in the preceding chapters and other evaluation literature, with suggestions for future directions in internal evaluation research, practice, and training. For example, the directions for practice include the difficult task of continuously building evaluation capacity across the entire organization while cultivating the independence and credibility of internal evaluation. Collaboration between internal and external evaluators from different organizations, as well as the sharing of best practices and lessons learned, are among the advantageous practices. Future research can benefit the field by identifying an appropriate, comprehensive set of competencies and skills required to be a successful internal evaluator. Such a set could be used in university and other professional development training to buttress the cadre of current and aspiring internal evaluation practitioners.

The current steady societal movement advocating program accountability, monitoring, and improvement at all organizational levels has significant implications for the entire field of evaluation and for internal evaluation specifically, and will impact both publicly and privately funded programs and organizations. Such a movement makes it exceedingly important for both internal and external evaluators to be aware of the burning issues in conducting evaluation internally and the implications for practice. We hope that this volume will prompt further interest in and research on internal evaluation—with both researchers and communities of practice engaged in dialogue around the issues mentioned in this volume and beyond.

Reference

Love, A.J. (Ed.). (1983). Developing effective internal evaluation [Special issue]. New Directions for Program Evaluation, 20.

Boris B. Volkov

Michelle E. Baron

Editors

Boris B. Volkov is an assistant professor of evaluation studies with the Center for Rural Health and Department of Family and Community Medicine at the University of North Dakota School of Medicine and Health Sciences.

Michelle E. Baron is an independent evaluation strategist based in Arlington, Virginia.

Chapter 1

Internal Evaluation a Quarter-Century Later: A Conversation With Arnold J. Love

Boris B. Volkov

Abstract

This chapter features a recent conversation with Dr. Arnold J. Love, a long-time proponent of internal evaluation and one of the most cited internal evaluation authors. In 1983, Love edited the first issue of New Directions for Program Evaluation on the topic of internal evaluation. He is the author of the book Internal Evaluation: Building Organizations from Within (1991), editor of a special issue of the Canadian Journal of Program Evaluation about internal evaluation, and the author of a chapter on internal evaluation in the Encyclopedia of Evaluation (2005). Currently working as an independent evaluation consultant, Love has more than 25 years of experience in evaluation. Based in Toronto, Canada, and a founding member of the American Evaluation Association, he also brings an important international perspective to our discussion of the status of internal evaluation. © Wiley Periodicals, Inc., and the American Evaluation Association.

BORIS VOLKOV: Arnold, I would like to start our conversation by thanking you for your willingness to share your thoughts in this New Directions for Evaluation issue and by asking you about your personal story of being involved with internal evaluation.

ARNOLD LOVE: It is my great pleasure to speak with you, Boris, about a topic so close to the heart of my career as an evaluator. Before I answer your question, I would like to set the record straight about my position regarding internal evaluation. Because my name is associated so closely with internal evaluation, there is often the misperception that I am promoting internal evaluation as the preferred alternative to external evaluation. Nothing could be further from my own position. I feel that internal evaluation is a valuable form of evaluation, but the choice of any particular form (internal or external) depends on the purpose for the evaluation and a careful consideration of who is in the best position to conduct the evaluation. In some cases it is internal evaluators, but in other cases it is external evaluators.

In terms of my own story, in graduate school I developed an interest in applied research, especially assessing the effectiveness of public and nonprofit policies and programs. The term evaluation research was just being coined to describe this form of research, primarily the application of rigorous research methodology to the assessment of the process and outcomes of programs. I was fortunate that I learned a wide range of research and measurement approaches, including quantitative methods and qualitative approaches, behavioral analysis and scale construction, complex systems and organizational analyses, and European phenomenological investigation.

To keep the story short, I was hired by a large multiservice agency in Toronto that wanted to build evaluation capacity into their organization. I was very fortunate that the executive director and senior staff practiced leading-edge management approaches that were very much in line with Aaron Wildavsky’s concept of “self-evaluating organizations.” One of Wildavsky’s ideas was that internal evaluation was a key way for organizations to set their own directions, foster change, and know if they were achieving results. So when I saw the notice in a journal that an Evaluation Research Society (ERS) was being formed in the USA, I eagerly attended the first meetings. At those meetings I met kindred spirits—in fact, the Canadian Evaluation Society (CES) began its life as a chapter of the ERS. A few years later, we formed the CES and then provincial chapters. The regional structure of CES encouraged working groups on various topics (similar to AEA’s TIG structure), including the interests of internal evaluators.

This was an important step forward, because internal evaluation usually was not seen to be legitimate evaluation at all. A practical consequence was that internal evaluators were generally excluded from conferences, meetings, and conversations with other people who considered themselves to be “real” evaluators. In my experience, the situation was more critical in the United States. There the evaluation field was heavily populated by doctoral level academics and consultants who were external evaluators. They defined the field. In Canada, on the other hand, the average evaluator held a master’s degree and tended to work for government, nonprofit organizations, or for private-sector organizations. In both countries, however, many internal evaluators often carried only part-time evaluation responsibilities, lacked doctoral degrees, and conducted evaluations that served the limited purposes of their organizations. In working with my colleagues and attending evaluation conferences, I was confronted by a paradox: It seemed that many more evaluators were doing internal evaluations, but their needs were ignored. Little was known about doing evaluation effectively within organizations. In a nutshell, that is how I came to become a student of internal evaluation.

BORIS VOLKOV: It is hard to quantify the scope of internal evaluation’s growth; however, we know that it is on the rise in the U.S. and across the world. What is your perception of the contemporary history of internal evaluation?

ARNOLD LOVE: International surveys and estimates by those who study internal evaluation show considerable variation across countries and cultures. In Canada, at the time that the Canadian Evaluation Society was formed in the late 1970s, our federal government deliberately decided that internal evaluation would be a major model for evaluation. The Royal Commission on Government Organization recommended that government “should be run more like a business” by adopting methods that proved effective in the private sector. Under the slogan “Let the managers manage!” internal evaluation addressed the need for accountability together with systematic program development and quality improvement. Without in-house evaluation and evaluators who were subject-matter experts, there was the fear that reforming government and controlling expenditures was like “conducting an operation on a man carrying a piano upstairs.” As you can see, the focus on organizational reform and learning in Canada meant that internal evaluation was not only accepted, but it was promoted as an important form of evaluation.

The history in the U.S. is quite different. In my mind, the watershed point was that infamous “crisis of relevance” in evaluation in the early 1970s. A study commissioned by the Comptroller General’s Office concluded that the vast majority of evaluations were not relevant—they took too long, were hamstrung by methodological caveats, reported results long after decisions were made, and were incomprehensible. This report had a dramatic negative impact on the evaluation field and it went into sharp decline. In my opinion, good field research rescued the U.S. evaluation field by carefully examining the small percentage of evaluations that were considered relevant. This led to different approaches to evaluation, such as Michael Q. Patton’s utilization-focused evaluation (UFE) model that ensures relevance by building use right into the evaluation process. These new approaches to evaluation gave legitimacy to identifying stakeholders, understanding evaluation needs, participation, selection of appropriate and feasible methods, and the importance of communicating evaluation findings that were already at the heart of the internal evaluation process. I think that the second watershed in the U.S. was the acceptance of internal evaluation as an integral part of the management reforms in the 1980s and 1990s—the notion that evaluation could be a legitimate tool for managing organizations and that organizational learning was as important as accountability. At that point in time, an estimated 60% of the evaluations in the U.S. were internal evaluations and for the next decade that percentage continued to increase.

In some countries, the percentage is far higher. For example, 5 years ago the Japanese Evaluation Society estimated that 99% of evaluations were internal. This appears to be the situation for most Asian countries, where external evaluation is mistrusted. I find myself in the unusual position of promoting the potential benefits of external evaluation, although my audience cannot imagine how an external evaluator—a perfect stranger, who does not have firsthand knowledge of the program, its politics, its people, its limitations, and its values—can do an evaluation in a relatively short period of time and produce findings that are meaningful to anybody. It is so foreign to cultures which value learning circles or total quality management groups that deeply involve all relevant parties over a long period of time.

In other parts of the world, there is a fair mix of internal and external evaluation. Estimates indicate that some of the northern European countries are least likely to use internal evaluation. This may be traced to their traditional focus on supreme audit organizations, where evaluation and auditing are seen as similar and complementary to each other. Even that situation appears to be changing.

BORIS VOLKOV: More than a quarter-century ago, you wrote: “the notion of the self-evaluating organization that uses program evaluation as the basis for program development and change remains largely an ideal . . .” (Love, 1983, p. 5). What is your current view of the self-evaluating organization? Is it still a dream yet to come true?

ARNOLD LOVE: This vision comes from Aaron Wildavsky (1979), who is best known for his book Speaking Truth to Power: The Art and Craft of Policy Analysis. One of his ideas in terms of the self-evaluating organization would be to have people within the organization actively using evaluation information to shape and transform it. Although Wildavsky’s dream is not completely realized, it is more a possibility now than ever before. Over the last few decades, we have become much more aware of the way organizations are designed and supported. The advent of affordable computer systems today gives internal evaluators enormous power to collect, analyze, and communicate information. Demands for evaluation from funders, board members, program managers, and partners have made the pressure for evaluation across all sorts of organizations very real, so I believe that we are seeing much more internal evaluation now than in 1983.

In other words, organizations are using evaluation much more, but the second part of Wildavsky’s dream of a self-evaluating organization was to have staff that he called “evaluator–manager.” In other words, when people became managers, they also had expertise as evaluators and they used that evaluation expertise to actively manage the organization. Over the years, I have taught evaluation to managers of business administration, public administration, and nonprofit administration. I usually teach just one basic graduate course in evaluation to help further the “evaluator–manager” concept. In most MBA programs, there is no training in evaluation whatsoever. Although the pure vision of Wildavsky has not been fully actualized, it’s been partially actualized and it has emerged in new varieties, such as networks using evaluation together to learn, and leading-edge practitioners and theorists of evaluation are engaged in that today.

BORIS VOLKOV: Some evaluation pundits insist that practicing internal evaluation in organizations presents unique ethical dilemmas. What major ethical issues do you think should be recognized and how can they be dealt with when practicing internal evaluation?

ARNOLD LOVE: The number one issue is the credibility of internal evaluation. To improve credibility, you have to reduce the perception that internal evaluation is biased evaluation. Everyone recognizes that when someone is an employee of an organization there may be pressures, subtle or not, to report the desired results. In my internal evaluation courses, I educate evaluators about a variety of proven strategies for reducing bias and increasing the credibility of internal evaluation. One is to apply the AEA Program Evaluation Standards and ethical guidelines so that internal evaluators and their internal clients know them, subscribe to them, and practice them. Another is to have one or more external evaluators periodically review a sample of internal evaluation studies and give feedback about their quality and about potential areas where bias could be an issue. Yet another is to have an evaluation steering committee guide the evaluation, even for internal evaluation. You might include a client representative, or a student or parent representative if it is in education, as well as perhaps someone from a partner organization who is at arm's length. This provides, in other words, additional eyes and ears to ensure that issues around credibility and potential bias are being addressed.

The last issue concerns where the evaluation unit is located structurally in an organization. The higher in the organization the evaluation unit is located, the more it is perceived to be independent. Again, the ideal is that the evaluation unit would report directly to the CEO or executive director of the organization, and, if not to that person, to a senior vice-president of the organization. Below that in the organization, internal evaluation is seen as subject to the pressures of managers and colleagues. Likewise, if there is a problem that you identify in an organization, by reporting to the highest level you are in a much better position to shape change.

BORIS VOLKOV: You have been busy working as an independent evaluation consultant internationally. What is your perception of the differences between the ways internal evaluation is practiced in North America and in the rest of the world?