Assessing Organizational Performance in Higher Education - Barbara A. Miller - E-Book


Barbara A. Miller

Description

The book provides a full complement of assessment technologies that enable leaders to measure and evaluate performance using qualitative and quantitative performance indicators and reference points in each of seven areas of organizational performance. While these technologies are not new, applying them in a comprehensive assessment of the performance of both academic and administrative organizations in higher education is a true innovation. Assessing Organizational Performance in Higher Education defines four types of assessment user groups, each of which has a unique interest in organizational performance. This offers a new perspective on who uses performance results and why they use them. These varied groups underscore that assessment results must be tailored to the needs of specific groups, and that “one-size-fits-all” does not apply in assessment. An assessment process must be robust and capable of delivering the right information at the right time to the right user group.


Page count: 376

Publication year: 2016




Table of Contents

Cover

Title

Copyright

TABLES, FIGURES, EXHIBITS, AND WORKSHEETS

FOREWORD

PREFACE

Organization of the Book

Acknowledgments

ABOUT THE AUTHOR

CHAPTER 1: Purpose of Assessment

Assessment User Groups

External Assessment User Groups

Internal Assessment User Groups

Summary

CHAPTER 2: Organizations as Systems

Systems Thinking

Internal System Elements

Summary

CHAPTER 3: Organizations as Systems

External System Elements

Summary

CHAPTER 4: Assessment Methods and Terminology

Measurement

Evaluation

Data Collection

Dissemination of Assessment Findings

Summary

CHAPTER 5: Defining and Measuring Organizational Performance

Areas of Organizational Performance

Effectiveness

Productivity

Quality

Customer and Stakeholder Satisfaction

Efficiency

Innovation

Financial Durability

Critical Success Factors

Assessment Users’ Preferred Areas of Organizational Performance

Interrelationships in Organizational Performance

Summary

CHAPTER 6: Creating and Maintaining Assessment Programs

Building Assessment Programs

Deploying Assessment Programs

Assessing Assessment Programs

Summary

GLOSSARY

REFERENCES

INDEX

End User License Agreement

List of Tables

CHAPTER 4: Assessment Methods and Terminology

Table 4.1 Gap Analysis: Cycle Time for the Registrar’s Posting of End-of-Term Grades (Number of Days After Last Scheduled Final Exam)

List of Illustrations

CHAPTER 2: Organizations as Systems

Figure 2.1 The Organization as a System

Figure 2.2 Internal Elements of an Organizational System

Figure 2.3 Leadership Systems

Figure 2.4 Inputs

Figure 2.5 Key Work Processes

Figure 2.6 Outputs

Figure 2.7 Outcomes

CHAPTER 3: Organizations as Systems

Figure 3.1 External Elements of an Organizational System

Figure 3.2 Upstream Systems

Figure 3.3 Customers

Figure 3.4 Students as Inputs, Customers, and Stakeholders

Figure 3.5 Stakeholders

CHAPTER 5: Defining and Measuring Organizational Performance

Figure 5.1 The Seven Areas of Organizational Performance

Figure 5.2 Effectiveness

Figure 5.3 Productivity

Figure 5.4 Quality

Figure 5.5 Customer and Stakeholder Satisfaction

Figure 5.6 Efficiency

Figure 5.7 Innovation

Figure 5.8 Financial Durability

Assessing Organizational Performance in Higher Education

BARBARA A. MILLER

Copyright © 2007 by John Wiley & Sons, Inc. All rights reserved.

Published by Jossey-Bass

A Wiley Imprint

989 Market Street, San Francisco, CA 94103-1741 www.josseybass.com

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the Web at www.copyright.com. Requests to the publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Readers should be aware that Internet Web sites offered as citations and/or sources for further information may have changed or disappeared between the time this was written and when it is read.

Jossey-Bass books and products are available through most bookstores. To contact Jossey-Bass directly call our Customer Care Department within the U.S. at 800-956-7739, outside the U.S. at 317-572-3986, or fax 317-572-4002.

Jossey-Bass also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Cataloging-in-Publication Data

Miller, Barbara A., 1943-

Assessing organizational performance in higher education / Barbara A. Miller ; foreword by Suzanne Swope.

p. cm.

Includes bibliographical references and index.

ISBN-13: 978-0-7879-8640-7 (pbk.)

ISBN-10: 0-7879-8640-2 (pbk.)

1. Universities and colleges--United States--Evaluation.

2. Universities and colleges--United States--Administration. I. Title.

LB2331.63.M55 2007

378.1'07--dc22

2006022688

TABLES, FIGURES, EXHIBITS, AND WORKSHEETS

TABLES

4.1 Gap Analysis: Cycle Time for the Registrar’s Posting of End-of-Term Grades (Number of Days After Last Scheduled Final Exam)

FIGURES

2.1 The Organization as a System

2.2 Internal Elements of an Organizational System

2.3 Leadership Systems

2.4 Inputs

2.5 Key Work Processes

2.6 Outputs

2.7 Outcomes

3.1 External Elements of an Organizational System

3.2 Upstream Systems

3.3 Customers

3.4 Students as Inputs, Customers, and Stakeholders

3.5 Stakeholders

5.1 The Seven Areas of Organizational Performance

5.2 Effectiveness

5.3 Productivity

5.4 Quality

5.5 Customer and Stakeholder Satisfaction

5.6 Efficiency

5.7 Innovation

5.8 Financial Durability

EXHIBITS

1.1 Performance Indicators and Reference Points for the Strategic Goal “Increase Enrollment”

1.2 Performance Indicators and Reference Points for the Strategic Subgoal “Increase Retention”

2.1 Excerpts from the Mission Statement, Vision Statement, Guiding Principles, Strategic Goals, and Organizational Structure for an Academic Department

2.2 Excerpts from the Mission Statement, Vision Statement, Guiding Principles, Strategic Goals, and Organizational Structure for Information Services

2.3 Examples of Inputs for an Academic Department

2.4 Examples of Inputs for Information Services

2.5 Examples of Key Work Processes for an Academic Department

2.6 Examples of Key Work Processes for Information Services

2.7 Examples of Outputs for an Academic Department

2.8 Examples of Outputs for Information Services

2.9 Examples of Intended Outcomes for an Academic Department

2.10 Examples of Intended Outcomes for Information Services

3.1 Examples of Upstream Supplier Systems for an Academic Department

3.2 Examples of Upstream Supplier Systems for Information Services

3.3 Examples of Upstream Constraining Systems for an Academic Department

3.4 Examples of Upstream Constraining Systems for Information Services

3.5 Examples of Upstream Service Partner Systems for an Academic Department

3.6 Examples of Upstream Service Partner Systems for Information Services

3.7 Examples of Internal and External Customers for an Academic Department

3.8 Examples of Internal and External Customers for Information Services

3.9 Examples of Internal and External Stakeholders for an Academic Department

3.10 Examples of Internal and External Stakeholders for Information Services

4.1 Examples of Critical Success Factors for an Academic Department

4.2 Examples of Critical Success Factors for Information Services

4.3 Examples of Reference Points

5.1 Examples of Performance Indicators for Effectiveness in an Academic Department

5.2 Examples of Performance Indicators for Effectiveness in Information Services

5.3 Examples of Performance Indicators for Productivity in an Academic Department

5.4 Examples of Performance Indicators for Productivity in Information Services

5.5 Examples of Performance Indicators for Q1: Quality of Upstream Systems in an Academic Department

5.6 Examples of Performance Indicators for Q1: Quality of Upstream Systems in Information Services

5.7 Examples of Performance Indicators for Q2: Quality of Inputs in an Academic Department

5.8 Examples of Performance Indicators for Q2: Quality of Inputs in Information Services

5.9 Examples of Performance Indicators for Q3: Quality of Key Work Processes in an Academic Department

5.10 Examples of Performance Indicators for Q3: Quality of Key Work Processes in Information Services

5.11 Examples of Performance Indicators for Q4: Quality of Outputs in an Academic Department

5.12 Examples of Performance Indicators for Q4: Quality of Outputs in Information Services

5.13 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Follower and Stakeholder Perceptions and External Relations

5.14 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Mission

5.15 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Vision

5.16 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Guiding Principles

5.17 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Strategic Goals

5.18 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Organizational Structure: Design and Governance

5.19 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Resource Acquisition and Allocation

5.20 Examples of Performance Indicators for Q5: Quality of Leadership Systems: Costs and Benefits

5.21 Examples of Performance Indicators for Q6: Quality of Worklife

5.22 Examples of Performance Indicators for Customer and Stakeholder Satisfaction in an Academic Department

5.23 Examples of Performance Indicators for Customer and Stakeholder Satisfaction in Information Services

5.24 Examples of Performance Indicators for Efficiency in an Academic Department

5.25 Examples of Performance Indicators for Efficiency in Information Services

5.26 Examples of Creative Changes Supporting Innovation

5.27 Examples of Performance Indicators for Financial Durability in an Academic Department

5.28 Examples of Performance Indicators for Financial Durability in Information Services

6.1 Examples of Mission, Vision, Guiding Principles, Strategic Goals, and Organizational Structure for an Assessment Program

6.2 Examples of Direct and Indirect Assessment Costs

6.3 Internal and External Assessment Program Elements

6.4 Examples of Critical Success Factors for an Institutional Assessment Program

6.5 Examples of Performance Indicators for Measuring Assessment Program Performance

WORKSHEETS

1.1 Assessment User Group Analysis

2.1 Mission Analysis

2.2 Vision Analysis

2.3 Guiding Principles Analysis

2.4 Strategic Goals Analysis

2.5 Organizational Design Analysis

2.6 Organizational Governance Analysis

2.7 Inputs Analysis

2.8 Key Work Processes, Outputs, and Outcomes Analysis

3.1 Upstream Systems Analysis

3.2 Customers Analysis

3.3 Stakeholders Analysis

4.1 Critical Success Factor Analysis

4.2 Assessment Report Schedule

5.1 Assessing Effectiveness

5.2 Assessing Productivity

5.3 Assessing Q1: Quality of Upstream Systems

5.4 Assessing Q2: Quality of Inputs

5.5 Assessing Q3: Quality of Key Work Processes

5.6 Assessing Q4: Quality of Outputs

5.7 Assessing Q5: Quality of Leadership Systems: Follower Satisfaction and External Relations

5.8 Assessing Q5: Quality of Leadership Systems: Direction and Support

5.9 Assessing Q6: Quality of Worklife

5.10 Assessing Customer Satisfaction

5.11 Assessing Stakeholder Satisfaction

5.12 Assessing Efficiency

5.13 Assessing Innovation

5.14 Assessing Financial Durability

5.15 Assessing Critical Success Factors

5.16 Organizational Performance Areas Important to Assessment Users

6.1 Communication Planning

FOREWORD

Anyone interested in the survival of higher education realizes that the industry is going through a profound change. Just like manufacturing and health care before it, higher education must face the reality that costs, new technologies, and changing customer expectations create pressures on the industry. Anyone who works in colleges or has a stake in their success will find this book of great interest. Quality education in all its manifestations is crucial to the survival of democracy, as well as to the industry itself.

Peter Drucker, a longtime authority in management, proposed in Management Challenges for the 21st Century (1999) that we may need to stop thinking from a perspective of managing the work of people and begin managing for performance. To be effective, we must define customers’ values and their decision-making processes regarding their income distribution. Management must organize and evaluate the entire operational process, focusing on results and performance.

No one knows these principles better as they relate to higher education than Barbara A. Miller (formerly Lembcke). She has served as an administrative leader, teacher, researcher, and consultant in private and public universities. Her breadth of perspective and knowledge about systems—how they are defined, measured, evaluated, and changed—are extensive. Miller’s broadly based higher education background, combined with her teaching and administrative experience, makes her insights and analysis extremely valuable for those of us serving a variety of roles in the institution as well as those in evaluation positions as stakeholders outside the organization.

Assessing Organizational Performance in Higher Education embraces assessment at the organizational, program, and process levels and evaluates the work from a perspective rooted in systems thinking. Readers will be able to identify major work processes, the significance of these processes in producing quality outcomes, and the strategies necessary for continuous improvement. The book complements the body of literature on assessment, providing both an in-depth theoretical framework and techniques useful for implementation. The information in it is pertinent to everyone from the boardroom to the individual faculty or staff member and will serve as a set of tools to improve the work of the institution. Readers who fully understand the message Miller presents and who work through the exercises as they apply to the institution or program they are assessing will have done a great service to their constituencies—to the students whom they so gratefully serve and to others, both staff and faculty, who care about the quality of their work and the important role they play in this society.

Suzanne Swope, Ed.D.

Vice President for Enrollment and Student Affairs

Emerson College, Boston

PREFACE

I wrote this book to meet the needs of two important groups associated with assessment in higher education: assessors and assessment users. The first group, assessors, consists of persons engaged in day-to-day assessment work. They are faculty, staff, and administrators with part-time or full-time, temporary or permanent responsibilities for assessment. The second group, assessment users, are persons who evaluate or judge performance results measured and conveyed by assessors. I see assessment users as the end users or customers of assessment programs.

Assessors seek avenues for measuring the performance results required by assessment users; assessment users seek appropriate contexts for evaluating assessment findings measured and conveyed by assessors. Often assessors and assessment users are actually the same persons. However, I choose to differentiate the roles for purposes of discussion, assessor referring exclusively to persons exploring matters of measurement, and assessment user referring exclusively to persons engaged in evaluation. I describe various groups of external and internal assessment users and explain how each group uses assessment findings to support a wide range of decisions that have a potential impact on an organization’s capacity to perform.

My purpose in writing this book is to strengthen the knowledge, skills, and abilities of assessors and assessment users in higher education whether they are novices or experts. I define assessment as the measurement of organizational performance that assessment users evaluate in relation to reference points for the purpose of supporting their requirements and expectations.

The premise of this book is that assessors in higher education must go beyond assessment of student learning outcomes and institutional effectiveness and into assessment of performance of whole organizations, programs, and processes. This raises two questions: why? and how?

Why assess performance at the organization, program, and process levels? For a variety of reasons:

Colleges and universities are open, interdependent systems in which the performance of one organization depends on and affects the performance of other organizations.

Colleges and universities account for performance at the organizational level to many powerful external assessment users, including governing boards, governmental agencies, and organizations that affirm accreditation, rank, and classification.

All aspects of performance should be assessed within the context of an organization’s mission, goals, and requirements and the expectations of the people it serves.

Performance at the organization, program, and process levels is complex and requires a holistic view of how one area of performance affects another area of performance within the same unit of analysis.

How is performance assessed at the organizational level?

Performance is assessed for a designated unit of analysis whose boundaries, mission, and goals are clearly defined. For example, a unit of analysis can be the institution as a whole, a college such as the College of Arts and Sciences, a school such as the Law School or Medical School, a department such as the Chemistry Department, a program such as General Education or Writing Across the Curriculum, or an administrative office such as the Admissions Office, Development Office, or Registrar’s Office. A unit of analysis can also be a key work process (such as teaching or research) or a cross-functional process (such as payroll).

Performance is measured through performance indicators in seven interrelated areas of organizational performance, each of which is linked to specific organizational elements. The seven areas of organizational performance are effectiveness, productivity, quality (including quality of leadership systems, of inputs, of key work processes, of programs and services, and of worklife), customer and stakeholder satisfaction, efficiency, innovation, and financial durability.

Performance is measured in selected areas of performance deemed critical to the unit’s performance success.

Performance is evaluated within the context of the unit’s mission and goals.

Performance is evaluated against specific performance requirements and expectations of the organization’s powerful external and internal assessment users, other important stakeholders, and the people the unit serves.
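The steps above amount to comparing measured performance indicators against reference points for a chosen unit of analysis, in the spirit of the book's gap analysis of cycle time for the registrar's posting of end-of-term grades (Table 4.1). The sketch below illustrates that comparison in Python; the `Indicator` class, indicator names, and all values are hypothetical illustrations, not the book's method or data.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    """A performance indicator paired with a reference point for evaluation."""
    name: str
    measured: float        # value produced by the assessor's measurement
    reference: float       # reference point: a goal, standard, or benchmark
    higher_is_better: bool = True

    def gap(self) -> float:
        """Signed gap: >= 0 means performance meets or exceeds the reference."""
        diff = self.measured - self.reference
        return diff if self.higher_is_better else -diff

# Hypothetical indicators modeled loosely on the book's examples:
# grade-posting cycle time (fewer days is better) and a retention rate.
cycle_time = Indicator("grade-posting cycle time (days)",
                       measured=9, reference=7, higher_is_better=False)
retention = Indicator("first-year retention rate (%)",
                      measured=84.0, reference=80.0)

for ind in (cycle_time, retention):
    status = "meets" if ind.gap() >= 0 else "misses"
    print(f"{ind.name}: {status} its reference point (gap {ind.gap():+g})")
```

Separating the measured value from the reference point mirrors the book's division of labor: assessors supply `measured`, while assessment users choose the `reference` against which it is evaluated.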

The book’s focus on performance at the organization, program, and process levels complements and advances the many published works available today on assessment of student learning outcomes and institutional effectiveness. This focus helps readers understand the interdependence of organizations in higher education and complexities inherent in organizational performance. I believe that this understanding is fundamental to the practice and scholarship of assessment.

For assessors, the book offers a conceptual framework to guide the measurement of organizational performance in all seven areas of organizational performance. The conceptual framework applies to both academic and administrative units of analysis at any level within the hierarchical structure of educational institutions; it also applies to important programs and key work processes that operate within single organizations or across several organizations or functions within an institution.

What is most exciting about this book is its examination of assessment in several new and different areas of organizational performance—areas that include but go beyond institutional effectiveness, student learning outcomes, and input quality. The following are some of the new areas of performance that assessors can measure:

Quality of an organization’s leadership system as a measure of quality of direction and support it provides to the unit under review

Quality of organizational structure as a measure of how organizational design and governance hinder or enhance organizational performance

Quality assurance of partnerships with important upstream systems that supply, constrain, and serve the units under review

Quality of worklife as a measure of employee attitudes and perceptions about the quality of their work experiences and workplace

Quality of key work processes as a measure of cycle time, cost, rework, waste, and scrap that characterize key work processes

Organizational innovation as a measure of an organization’s learning culture and a measure of creative changes put in place to improve organizational performance

Efficiency as a measure of how well organizations use their scarce and critical resources, as well as a measure of the costs and benefits of quality management

Customer and stakeholder satisfaction as a measure of the extent to which organizations meet the needs of the people they serve

Financial durability as a measure of the financial health and well-being of the units under review

For external assessment users such as governing boards, governmental agencies, and organizations that affirm accreditation, classification, rank, and eligibility, the book is designed to expand knowledge of the nature and complexity of organizational performance in higher education—knowledge that will, ideally, enhance the ability to frame appropriate accountability questions of educational leaders.

For internal assessment users, such as senior leaders, administrators, and faculty and staff, the book is designed to expand knowledge of the internal workings and interdependence of organizations both inside and outside the institution, complexities inherent in organizational performance, and important links among organizational system elements, areas of organizational performance, and assessment. This knowledge will enhance their ability, as assessment users, to frame better performance questions that lead to better assessments of organizational performance.

Finally, the book offers educational leaders specific recommendations on how to build, deploy, and evaluate assessment programs in ways that provide the right information, at the right time, in the right format to meet ever-changing needs of important external and internal assessment users. The book presents many examples and worksheets to help assessors describe their unit’s organizational system elements and measure complex and interdependent areas of organizational performance using performance indicators and reference points appropriate to the organization’s mission, vision, strategic goals, and critical success factors.

Organization of the Book

The book is organized into six chapters. Chapter One describes external and internal assessment user groups in higher education. It explains what types of organizational performance results assessment users want to know, how they typically use assessment findings in their decision-making processes, and what is at stake for organizations whose performance is under review. A worksheet is provided to help assessors identify assessment information required by important external and internal assessment user groups.

Chapter Two introduces systems thinking and explains the benefits of viewing organizations as open, living, unique systems with a purpose. It begins with a discussion of interdependent system elements that make up organizations, programs, and processes in higher education and explains how each system element presents opportunities for assessment. Chapter Two describes five internal system elements: leadership systems, inputs, key work processes, outputs, and outcomes. Many examples are provided for academic and administrative organizations. Worksheets are also provided to help assessors identify and describe internal system elements of units whose performance they intend to measure.

Chapter Three continues the discussion of system elements and their link to assessment. It describes three external system elements: upstream systems, customers, and stakeholders. Again, many examples are provided for academic and administrative organizations. Worksheets are also provided to help assessors identify and describe external system elements of units whose performance they intend to measure.

Chapter Four is a discussion of how to assess organizational performance. It summarizes assessment methods and terminology. The chapter begins by differentiating the work of measurement from evaluation in assessment. It explains how to clarify units of analysis and the proper ways to use time frames, critical success factors, performance indicators, and reference points. It describes methods for collecting assessment data and disseminating performance results. Worksheets are provided to help assessors identify critical success factors and build an assessment report schedule for units whose performance they intend to measure.

Chapter Five is a discussion of what to assess in organizational performance. It covers the seven operational definitions of organizational performance noted earlier in this Preface: effectiveness, productivity, quality, customer and stakeholder satisfaction, efficiency, innovation, and financial durability. Many examples of performance indicators in each area are provided for academic and administrative organizations. Worksheets are provided to help assessors identify and describe performance indicators and reference points in all seven areas (including critical success factors) and link performance areas to specific assessment user needs and preferences.

Finally, Chapter Six is about how to build, deploy, and assess new or more formalized campuswide assessment programs. It offers suggestions about the importance of clarifying purpose, identifying important assessment user groups, and ensuring two-way, ongoing communication about assessment. It explains how to create and sustain a supportive organizational culture for assessment and how to build a leadership structure that ensures program success. It describes direct and indirect costs of assessment. It presents external and internal system elements of an assessment program as well as examples of indicators for measuring performance in areas deemed critical to program success. A worksheet is provided to help assessment leaders build an assessment communication plan.

Acknowledgments

This book reflects many years of work with friends and colleagues who helped me frame and apply this conceptual model for assessing performance of organizations in higher education. In particular, I would like to thank my husband and longtime friend and colleague, Louie Miller III, who not only served as my sounding board throughout the development of this book but also provided patient guidance and expertise resulting from his long and successful professional career as a tenured professor in sociology and senior executive in information services. I would also like to thank my friend and colleague Suzanne Swope, currently vice president for enrollment and student affairs at Emerson College, for her advice and collaboration over the many years we worked together at George Mason University. I would also like to thank my longtime friend Sandra Everett at Lorain County Community College for sharing her expertise in the area of quality management and helping me understand and apply those principles in the context of organizations in higher education. Finally, I would like to thank Scott Sink, Tom Tuttle, and Carl Thor, whose early works inspired the formation of this conceptual framework for assessing performance of organizations in higher education.

Greencastle, Indiana

June 2006

Barbara A. Miller

ABOUT THE AUTHOR

Barbara A. Miller (formerly Lembcke) is an experienced administrator in higher education and has served as a director of institutional planning and research, a senior planning and policy analyst, and an internal management consultant specializing in organizational development and continuous quality improvement. She is also an experienced faculty member who has taught courses in management, leadership theory, organizational development, and communication. Her expertise in assessment results from thirty years of experience in large public research institutions, large and medium-sized two-year comprehensive community colleges, and small liberal arts institutions. She served for two years as an examiner for the Malcolm Baldrige National Quality Award Program and one year as an evaluator in the Baldrige pilot program in education, where documentation of performance results is critical.

Miller earned her bachelor of arts degree in sociology at the University of California, Berkeley; her master of arts degree in higher education and student personnel administration at Syracuse University; and her doctorate in higher education administration at the University of Florida in Gainesville. She has also taken M.B.A. courses at the University of North Florida, Jacksonville.

Miller lives in Greencastle, Indiana, where she serves as guest scholar at DePauw University and coordinates her consulting service.

CHAPTER 1: Purpose of Assessment

Assessment in higher education has a long history in the United States. According to Victor Borden and Karen Bottrill (1994), college reputational ranking studies began in 1910, followed by peer comparisons of faculty workload and salary guidelines. Resource allocation measures emerged in the 1960s, and activity-based costing methods for generating financial performance information and benchmarking projects began in the 1990s. Finally, student outcomes assessment and process reengineering surfaced in the late 1980s and 1990s.

This book extends higher education’s experience with assessment into the arena of performance of whole organizations, programs, and processes within the framework of systems thinking. For the purpose of this book, assessment of organizational performance is defined as the measurement of organizational performance that assessment users evaluate in relation to reference points for the purpose of supporting their requirements and expectations.

The discussion begins with an explanation of assessment’s purpose as seen through the lens of those who use assessment results. It explores how groups inside and outside the institution use assessment, what assessment information they seek, and the potential impact they have on an organization’s capacity to perform. Since assessment users are the “end users” of the assessment program, they represent the program’s “customers.” Indeed, it is their needs, preferences, and requirements that drive the development, deployment, and evaluation of assessment programs.

Assessment User Groups

Anthropology Department at a Large State-Supported Research University

The call came early one morning, just before class. He remembers it well because it upset him so much that he had difficulty preparing for class. He had been chair of the Cultural Anthropology Department for nearly three years and was finally getting to understand, or so he thought, the politics of this large, state-supported institution. To be honest, he never really thought it was possible that the dean would seriously consider dropping the department. After all, who ever heard of a high-quality university without a cultural anthropology department?

It all began about ten years earlier when PBS filmed a program on DNA in the new DNA lab. Everyone considered DNA the answer to many of life’s baffling questions. The lab catapulted the discipline of physical anthropology to the top of the dean’s “list of favorites.” Unlike cultural anthropology, which has been around since the beginning of time (or so it seemed), physical anthropology was a growing discipline (thanks to DNA) replete with its own professional association and refereed journals.

At this institution, national ranking was everything. Unfortunately, the Cultural Anthropology Department was ranked unacceptably low. The chair defended his department to the dean by explaining that it was extremely difficult to get published in the refereed journals because there were so many distinguished scholars in the field. He also explained that their salaries were below those in other disciplines, which made recruitment nearly impossible. And because so many positions remained unfilled, he was forced to use adjunct faculty, which, of course, contributed to a lower ranking.

This vignette exemplifies the power that external assessment users—in this case, organizations that rank academic programs—have over organizations in higher education. Their decisions have a staggering impact on an organization’s capacity to perform. It is therefore very important for educational leaders to clarify for assessors (1) who their important external assessment users are, (2) the types of assessment information they need, (3) the types of decisions they make based on assessment results, and (4) the potential impact those decisions have on the organization’s capacity to perform. High-quality assessment programs are robust and capable of providing the right information at the right time in the right format to meet ever-changing needs of all the organization’s important assessment user groups.

There are two types of groups who use assessment results in higher education: external and internal. External user groups are governing boards; governmental agencies; potential students, donors, employees, and contractors; organizations that affirm; and external academic peers. Based on their evaluation of assessment findings, these groups make important decisions that greatly affect the following organizational aspects:

Operating and capital resources

Research grants and contracts

Program mix and pricing structures

Student financial aid

Sanctions for noncompliance

Accreditation

Rank

Eligibility

Censure

Future enrollments

Future workforce

Donations and gifts

Access to contractors

Workforce strikes and slowdowns

Internal user groups exist inside the institution. There are three types of internal user groups: senior leaders, administrators and managers, and faculty and staff. Internal user groups use assessment for the following purposes:

To account to others

To manage strategy

To allocate resources

To manage and control quality of processes and organizational culture

To improve programs and services

To support personnel decisions

To advocate causes

This chapter explores external and internal user groups typical in higher education. It is intended that this discussion will help assessors broaden their own analysis of the assessment user groups important to their organizations.

External Assessment User Groups

External user groups, by definition, reside outside the institution. Each group has a unique interest in assessment based on its function and relationship with the organization. As noted earlier, the major external assessment user groups in higher education discussed in this chapter are governing boards; governmental agencies; potential students, donors, employees, and contractors; organizations that affirm; and external academic peers.

Governing Boards

For assessment purposes, governing boards are defined as bodies that govern, coordinate, and advise institutions and programs at the local and state levels. Using this definition, local governing boards and statewide boards of regents are all considered governing boards because they use assessment for similar purposes. The discussion begins with local governing boards.

Local Governing Boards

Local governing boards typically use assessment results to hold senior leaders accountable for the overall performance of the institution or program. They seek assessment findings that answer the following accountability questions, among others:

Is the organization clear in its purpose and do members of the organizational community share a vision of excellence?

Is the institution achieving its mission (outcomes performance)?

To what extent do members of the organizational community practice the organization’s values and beliefs?

Does the organization offer high-quality programs and services? How does the organization assess its academic programs and services, and how does it use assessment findings for improvement?

What is the role of sponsored and unsponsored research as defined by the institution’s mission and strategic plans? What types of research are taking place? Who are the major sponsors?

What are the funding patterns, overhead rates, budgetary consequences, and other financial considerations, both now and in the future?

Does the organization have clear policies regarding intellectual property rights and publication of results of research sponsored by corporations?

Who are the faculty, and what do they do?

To what extent are students, alumni, faculty, staff, and other partners satisfied?

Who graduates, and what do they end up doing?

Is the organization efficiently using its critical resources?

Does the organization have adequate and reliable revenues and expenditures that ensure financial durability?

Do the organization’s costs and service quality compare favorably with those of comparable institutions?

What is the organization’s overall return on investment?

Statewide Governing Boards

Statewide governing boards seek answers to the same accountability questions as local boards, as well as additional questions pertaining to specific issues important to the state. For example, in 2005, the State Council of Higher Education for Virginia (SCHEV) established performance standards to “certify” state-supported four-year public research institutions and two- and four-year public nonresearch institutions (see State Council, 2005). For certification, SCHEV seeks answers to the following accountability questions:

Access

Does the institution provide access to higher education for all citizens throughout the state, including underrepresented populations?

Does the institution meet its enrollment projections?

Does the institution meet its degree estimates?

Affordability

Does the institution ensure that higher education remains affordable, regardless of individual or family income? What are the costs, and are they reasonable?

Does the institution conduct periodic assessment of the likely impact of tuition and fee levels net of financial aid on applications, enrollment, and student indebtedness?

Academic Offerings

Does the institution offer a broad range of undergraduate and (where appropriate) graduate programs?

Does the institution regularly assess the extent to which the institution’s curricula and degree programs address the state’s need for sufficient graduates in particular shortage areas as determined by the state?

Academic Standards

Does the institution maintain high academic standards by undertaking continual review and improvement of academic programs?

Is the institution decreasing the number of lower-division students denied enrollment in introductory courses?

Is the institution maintaining or increasing the ratio of degrees conferred per FTE faculty member?

Student Progress and Success

Is the institution improving its student retention and progression rates?

Is the ratio of degrees awarded increasing as the number of degree-seeking undergraduates increases?

Articulation

Does the institution develop articulation agreements that have uniform application to all state colleges?

Does the institution provide additional opportunities for associate degree graduates to be admitted and enrolled?

Does the institution offer dual enrollment programs in cooperation with high schools?

Economic Development

Does the institution actively contribute to efforts that stimulate the economic development of the state, and if so, in what ways?

Research

Has the institution increased its level of externally funded research conducted at the institution?

How does the institution facilitate the transfer of technology from university research centers to private sector companies?

K–12 Enhancement

Does the institution enhance K–12 student achievement, upgrade teachers’ knowledge and skills, and strengthen leadership skills of school administrators? If so, in what ways?

All Governing Boards

Governing boards also often seek answers to a variety of accountability questions pertaining to the institution’s past performance problems, hot political and economic issues, and important local, statewide, and national strategic initiatives. Governing boards typically prefer assessment findings presented within the context of past performance or comparable institutions through benchmarking (or both). Governing boards that operate under so-called sunshine laws are restricted in their use of assessment findings. Governing boards, like other important assessment user groups, make many important decisions on a scheduled and somewhat predictable time frame based on annual academic and fiscal cycles. The assessment program should be able to anticipate these cycles and provide reports in a timely manner.

Based on their evaluation of assessment findings, governing boards make many important policy decisions that influence an institution’s mission, financial resources, physical plant expansion and renovation, program mix, and pricing structures. They also make personnel decisions about the institution’s leadership system.

Governmental Agencies

For assessment purposes, governmental agencies are defined as federal, state, and local governmental and quasi-governmental organizations, commissions, task forces, and legislative delegations. For discussion purposes, this definition excludes state governing and coordinating boards defined earlier as governing boards.

Governmental agencies, like governing boards, use assessment results to hold organizational leaders accountable for some or all of an organization’s performance results. In addition, they use assessment to determine the extent to which institutions and programs help the government achieve its goals and objectives such as workforce development and creation and transfer of knowledge and technology. They use assessment to determine institutional eligibility for grants, contracts, and student financial aid. Finally, state and federal auditors and inspectors use assessment to ensure compliance with tax codes, labor and civil rights laws, disability laws, safety (fire) and security standards, standards for the use of human subjects and animals in research, environmental regulations, accounting standards, civil rights, affirmative action, Title IX, health and food services standards, and so forth.

In general, governmental agencies seek answers to the following questions:

Does the organization offer high-quality programs and services in areas important to the government? How do these programs and services compare with those offered by other organizations?

Does the organization have adequate and reliable revenues and expenditures that ensure financial durability?

Does the organization comply with laws, regulations, and research guidelines?

Does the organization use its critical resources efficiently?

Does the organization meet eligibility requirements to receive grants, contracts, and student financial aid?

Based on their evaluation of assessment findings, governmental agencies make important decisions that greatly affect an organization’s capacity to perform. For example, they use assessment to support decisions to award grants, contracts, and student financial aid. They also use assessment to support decisions to impose sanctions and penalties for noncompliance.

An important federal agency that collects institutional data often used in assessment for comparisons and benchmarks is the National Center for Education Statistics (NCES), part of the U.S. Department of Education and the Institute of Education Sciences. NCES is the primary federal entity for collecting and analyzing data related to education. The center collects data related to higher education through its program called the Integrated Postsecondary Education Data System (IPEDS). IPEDS is a system of survey components designed to collect data from postsecondary educational institutions that receive federal dollars through aid, grants, and contracts (see National Center, 2005).

IPEDS collects and reports data on institutional characteristics, completions, enrollment, graduation rates, student financial aid, employees by assigned position, fall staff, salaries, and finance. An important mission of NCES is to make statistics collected through IPEDS available to the public. NCES disseminates IPEDS data in several formats, including peer analyses, data sets, predetermined data tables, and a searchable Web site providing current statistics on a broad range of topics.

Potential Students (Including Parents), Donors (Including Alumni), Employees, and Contractors

A third type of external assessment user group represents potential students and their parents, potential donors including alumni, potential employees, and potential contractors. This group uses assessment to support “choice” decisions.

According to Daniel Seymour (1993), potential students and their parents consider an organization’s academic quality an important factor in making choice decisions. He recommends that academic leaders use assessment to “tell the quality story” to these important stakeholders. However, for leaders to use assessment findings effectively in marketing materials, they must first understand what quality means to the market and match market needs with organizational resources, vision, and competitive position to determine and communicate the organization’s competitive advantage.

To support their choice decisions, these assessment users seek answers to the following questions pertaining to academic quality:

What is this institution’s quality of programs and services? What is the quality of housing and athletic facilities? How does this compare with the quality at other comparable organizations?

How satisfied are students, faculty, and staff? What percentage of students complete their educational goals (retention, transfer admission, graduation, placement, graduate school admission)?

What is the cost of attending this institution in relation to the quality of its educational offerings? How does it compare with the cost at other comparable organizations?

Does the organization have adequate resources to maintain quality in its programs and services?

What reputation, national ranking, and accreditation status does this institution and its programs have?

Potential employees seek answers to the same quality questions; however, they are also concerned with the quality of faculty and staff, the quality of teaching and research facilities, and the competitiveness of compensation and benefits. Potential contractors are concerned with the reliability of organizational revenues that ensure financial durability and the organization’s track record for making promised payments in a timely manner.

Based on their evaluation of organizational performance, this group of assessment users makes important choice decisions that greatly affect an institution’s future enrollment, donations, gifts, workforce, and the willingness of qualified service providers to bid for and contract with the institution.

Organizations That Affirm