While the term benchmarking is commonplace nowadays in institutional research and higher education, less common is a solid understanding of what it really means and how it has been, and can be, used effectively. This volume begins by defining benchmarking as "a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to identify opportunities for improvement." Building on this definition, the chapters provide a brief history of the evolution and emergence of benchmarking in general and in higher education in particular. The authors apply benchmarking to:

* Enrollment management and student success
* Institutional effectiveness
* The potential economic impact of higher education institutions on their host communities

They look at the use of national external survey data in institutional benchmarking and selection of peer institutions, introduce multivariate statistical methodologies for guiding that selection, and consider a novel application of baseball sabermetric methods. The volume offers a solid starting point for those new to benchmarking in higher education and provides examples of current best practices and prospective new directions.

This is the 156th volume of this Jossey-Bass series. Always timely and comprehensive, New Directions for Institutional Research provides planners and administrators in all types of academic institutions with guidelines in such areas as resource coordination, information analysis, program evaluation, and institutional management.
Page count: 230
Publication year: 2012
Table of Contents
Title page
Copyright page
About AIR
Editors’ Notes
Chapter 1: How Benchmarking and Higher Education Came Together
The Quality Movement and Benchmarking
Benchmarking in Higher Education
Conclusions
Chapter 2: Internal Benchmarking for Institutional Effectiveness
Internal Benchmarking in Higher Education
Are You Ready to Benchmark?
Initial Steps
How to Benchmark
Conclusions
Chapter 3: Benchmarking and Enrollment Management
Overview
Some Notes on Benchmarking
Scope of Enrollment Management
Challenges in Interpretation of Benchmarks
What’s Not Available for Comparison with Comparable Institutions?
Conclusion
Chapter 4: Using Institutional Survey Data to Jump-Start Your Benchmarking Process
Defining Benchmarking
Relationships between Self-Assessment and Benchmarking
Institutional Researchers’ Role in Benchmarking
Relevant Institutional Survey Data for Benchmarking
Topics to be Addressed by Using Institutional Survey Data
Example of Using Institutional Survey Data for Benchmarking
Known Limitations and Issues
Concluding Remarks
Chapter 5: Learning How to Play Ball: Applying Sabermetric Thinking to Benchmarking in Higher Education
What Is Sabermetrics?
Some Common Sabermetrics
A Modest Application of Sabermetric Thinking to Higher Education Benchmarking
Conclusions
Chapter 6: Selecting Peer Institutions with IPEDS and Other Nationally Available Data
Integrated Postsecondary Education Data System (IPEDS)
Association for Institutional Research and IPEDS
Library Statistics Program
CUPA-HR Surveys and Data on Demand
Carnegie Classification of Institutions of Higher Education
UNC System Peer Requirement
Chapter 7: Benchmarking Tier-One Universities: “Keeping Up with the Joneses”
Literature Review
Methodology
Data Analyses
Results of Analyses
Conclusion
Appendix A: Variables Used in Study
Appendix B
Appendix C
Appendix D
Appendix E
Chapter 8: Taming Multivariate Data: Conceptual and Methodological Issues
People-Processing Institutions
Who’s in the Universe?
Clarifying the Question: What Is the Purpose and Who Wants to Know?
Variable Selection and Specification Issues
Proxy Variables and Weighting
Taming Multivariate Data
Technical Appendix
Chapter 9: Conclusions and Future Directions
Where It All Began
Practical Uses of Higher Education Benchmarking
This Could Really Work
Finding Benchmark Peers
Benchmarking Town and Gown
The Path Less Traveled
Index
OTHER TITLES AVAILABLE IN THE NEW DIRECTIONS FOR INSTITUTIONAL RESEARCH SERIES
BENCHMARKING IN INSTITUTIONAL RESEARCH
Gary D. Levy and Nicolas A. Valcik (eds.)
New Directions for Institutional Research, no. 156
Paul D. Umbach, Editor-in-Chief
Copyright © 2012 Wiley Periodicals, Inc., A Wiley Company
All rights reserved. No part of this publication may be reproduced in any form or by any means, except as permitted under section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher or authorization through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923; (978) 750-8400; fax (978) 646-8600. The code and copyright notice appearing at the bottom of the first page of an article in this journal indicate the copyright holder’s consent that copies may be made for personal or internal use, or for personal or internal use of specific clients, on the condition that the copier pay for copying beyond that permitted by law. This consent does not extend to other kinds of copying, such as copying for general distribution, for advertising or promotional purposes, for creating collective works, or for resale. Such permission requests and other permission inquiries should be addressed to the Permissions Department, c/o John Wiley & Sons, Inc., 111 River St., Hoboken, NJ 07030; (201) 748-8789, fax (201) 748-6326, http://www.wiley.com/go/permissions.
NEW DIRECTIONS FOR INSTITUTIONAL RESEARCH (ISSN 0271-0579, electronic ISSN 1536-075X) is part of The Jossey-Bass Higher and Adult Education Series and is published quarterly by Wiley Subscription Services, Inc., A Wiley Company, at Jossey-Bass, One Montgomery Street, Suite 1200, San Francisco, California 94104-4594 (publication number USPS 098-830). Periodicals Postage Paid at San Francisco, California, and at additional mailing offices. POSTMASTER: Send address changes to New Directions for Institutional Research, Jossey-Bass, One Montgomery Street, Suite 1200, San Francisco, California 94104-4594.
INDIVIDUAL SUBSCRIPTION RATE (in USD): $89 per year US/Can/Mex, $113 rest of world; institutional subscription rate: $297 US, $337 Can/Mex, $371 rest of world. Single copy rate: $29. Electronic only–all regions: $89 individual, $297 institutional; Print & Electronic–US: $98 individual, $342 institutional; Print & Electronic–Canada/Mexico: $98 individual, $382 institutional; Print & Electronic–Rest of World: $122 individual, $416 institutional.
EDITORIAL CORRESPONDENCE should be sent to Paul D. Umbach, Leadership, Policy and Adult and Higher Education, North Carolina State University, Poe 300, Box 7801, Raleigh, NC 27695-7801.
New Directions for Institutional Research is indexed in Academic Search (EBSCO), Academic Search Elite (EBSCO), Academic Search Premier (EBSCO), CIJE: Current Index to Journals in Education (ERIC), Contents Pages in Education (T&F), EBSCO Professional Development Collection (EBSCO), Educational Research Abstracts Online (T&F), ERIC Database (Education Resources Information Center), Higher Education Abstracts (Claremont Graduate University), Multicultural Education Abstracts (T&F), Sociology of Education Abstracts (T&F).
Microfilm copies of issues and chapters are available in 16mm and 35mm, as well as microfiche in 105mm, through University Microfilms, Inc., 300 North Zeeb Road, Ann Arbor, Michigan 48106-1346.
www.josseybass.com
ISBN: 9781118608838
ISBN: 9781118640791 (epdf)
ISBN: 9781118641040 (epub)
ISBN: 9781118641026 (mobi)
THE ASSOCIATION FOR INSTITUTIONAL RESEARCH was created in 1966 to benefit, assist, and advance research leading to improved understanding, planning, and operation of institutions of higher education. Publication policy is set by its Publications Committee.
PUBLICATIONS COMMITTEE
Gary R. Pike (Chair)
Indiana University–Purdue University Indianapolis
Gloria Crisp
University of Texas at San Antonio
Paul Duby
Northern Michigan University
James Hearn
University of Georgia
Terry T. Ishitani
University of Memphis
Jan W. Lyddon
San Jacinto Community College
John R. Ryan
The Ohio State University
EX-OFFICIO MEMBERS OF THE PUBLICATIONS COMMITTEE
John Muffo (Editor, Assessment in the Disciplines), Ohio Board of Regents
John C. Smart (Editor, Research in Higher Education), University of Memphis
Richard D. Howard (Editor, Resources in Institutional Research), University of Minnesota
Paul D. Umbach (Editor, New Directions for Institutional Research), North Carolina State University
Marne K. Einarson (Editor, AIR Electronic Newsletter), Cornell University
Gerald W. McLaughlin (Editor, AIR Professional File/IR Applications), DePaul University
Richard J. Kroc II (Chair, Forum Publications Committee), University of Arizona
Sharron L. Ronco (Chair, Best Visual Presentation Committee), Florida Atlantic University
Randy Swing (Staff Liaison)
For information about the Association for Institutional Research, write to the following address:
AIR Executive Office
1435 E. Piedmont Drive
Suite 211
Tallahassee, FL 32308-7955
(850) 385-4155
http://airweb.org
Editors’ Notes
Benchmarking serves several different purposes in higher education. Some institutions use it internally, to gauge the performance of their own departments and units. Others use it externally, comparing their institution with peer organizations to improve particular performance measures in pursuit of a specific goal (such as increasing research expenditures or improving the institution's national ranking).
The definition that the authors relied on in this volume of New Directions for Institutional Research is the following:
Benchmarking is a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to identify opportunities for improvement.
In some of the chapters of this volume (Chapters 2, 3, 4, and 5), benchmarking is used in assessment work to measure different aspects of the institution's mission. For example, benchmarking can be used to gauge the effectiveness of certain aspects of university or college instruction by carrying out assessment surveys from year to year among students or alumni. Several institutions also participate in services such as the National Survey of Student Engagement (NSSE), which provides data on surveyed students that can be utilized for benchmarking to help institutions improve aspects of their organization (NSSE, 2011). This volume examines benchmarking from a wide variety of perspectives that institutional research offices can draw on for strategic planning and institutional research purposes.
Chapter 1 provides historical context and background on how the use of benchmarking in higher education institutions has evolved over time. Chapters 2 through 5 delve into how institutions can use benchmarking internally. Chapter 2 discusses institutions' use of comparative analysis of different processes, results, and procedures to obtain useful metrics for improving the performance of institutional programs. Chapter 3 discusses how benchmarking is used in enrollment management to attain efficiencies in admissions, enrollment, and financial aid.
Chapter 4 discusses how institutions can use surveys completed by different entities as starting points for external benchmarking. By comparing similar survey information, an institution can gauge certain performance indices against those of peer institutions.
Chapter 5 shows how sabermetrics, the statistical techniques developed to analyze baseball performance, can be applied to higher education benchmarking, using faculty productivity as an example.
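To make the analogy concrete, here is a toy sketch, not drawn from Chapter 5 itself, of one core sabermetric habit applied to faculty productivity: favoring rate statistics (output per opportunity) over raw counts, much as batting average normalizes hits by at-bats. All names and figures are invented.

```python
# Hypothetical illustration only: rate statistics for faculty productivity,
# in the spirit of sabermetrics' preference for per-opportunity measures.
faculty = [
    {"name": "Prof. A", "pubs": 12, "years": 10, "grants_won": 3, "grants_submitted": 9},
    {"name": "Prof. B", "pubs": 9, "years": 5, "grants_won": 2, "grants_submitted": 4},
]

for f in faculty:
    pubs_per_year = f["pubs"] / f["years"]                    # rate, not raw count
    grant_success = f["grants_won"] / f["grants_submitted"]   # like a batting average
    print(f"{f['name']}: {pubs_per_year:.1f} pubs/year, "
          f"grant success rate {grant_success:.3f}")
```

On raw counts, Prof. A looks more productive; on a per-year rate, Prof. B does. That shift in perspective is exactly what sabermetric thinking encourages.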
Chapters 6, 7, and 8 focus primarily on external benchmarking for institutional analysis. Chapters 6 and 8 present different methodologies for selecting peer institutions. Chapter 6 relates how institutions can use the Integrated Postsecondary Education Data System (IPEDS) for external benchmarking purposes, and discusses the advantages of using standardized data to assist in the selection of peer institutions.
Chapter 7 presents a research project examining whether a top-tier-ranked public institution's linkages with its host municipality had any impact on its rankings in U.S. News and World Report. The same methodology was then applied to public "emerging universities" benchmarked against the top-tier public institutions. The authors sought to determine whether any common characteristics of these linkages could be leveraged to improve rankings in U.S. News and World Report; in the end, the research revealed findings the authors had not initially anticipated.
Chapter 8 takes a very different approach from Chapter 7, developing a statistical model to assist in identifying institutional peers. The authors worked with their system colleagues to minimize the number of variables used in the model in order to answer a specific benchmarking question; their goal was to use variables that were neither interdependent nor weighted, either of which could skew the results. The chapter walks the reader step by step through how the statistical model was developed to address a specific research question.
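As a rough illustration of the general idea, distance-based peer selection on a deliberately small set of standardized variables, the sketch below z-scores a few institutional measures and ranks candidates by their distance from the home institution. This is a generic approach, not the authors' actual Chapter 8 model; all institution names, variable choices, and figures are invented.

```python
# Generic sketch of distance-based peer selection (not the Chapter 8 model).
# A small set of variables is standardized so no single scale dominates,
# then candidates are ranked by Euclidean distance from the home institution.
import pandas as pd

data = pd.DataFrame(
    {
        "enrollment": [28000, 31000, 9000, 27500, 45000],
        "research_exp_musd": [310.0, 295.0, 12.0, 330.0, 780.0],
        "pct_graduate": [0.22, 0.25, 0.08, 0.21, 0.30],
    },
    index=["Home U", "Candidate A", "Candidate B", "Candidate C", "Candidate D"],
)

z = (data - data.mean()) / data.std()           # standardize each variable
dist = ((z - z.loc["Home U"]) ** 2).sum(axis=1) ** 0.5
print(dist.drop("Home U").sort_values())        # nearest candidates first
```

Keeping the variable list short and avoiding highly correlated measures matters here: a redundant variable effectively double-counts one dimension of similarity, implicitly weighting it in the distance.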
Chapter 9 provides a summary of the volume and takes a look into how benchmarking may be used in higher education in the future.
The editors would like to thank all of the authors and personnel who worked so hard and diligently on this volume. Without researchers and support staff, this volume—and journal series, for that matter—would not be possible.
Gary D. Levy
Nicolas A. Valcik
Editors
Reference
NSSE. (2011). About NSSE. National Survey of Student Engagement. Retrieved May 5, 2011, from http://nsse.iub.edu/html/about.cfm.
GARY D. LEVY is the Associate Provost for Academic Resources & Planning and a professor of psychology at Towson University.
NICOLAS A. VALCIK is the Associate Director of the Office of Strategic Planning and Analysis and a clinical assistant professor in the Program of Public Affairs at the University of Texas at Dallas.
1
How Benchmarking and Higher Education Came Together
Gary D. Levy, Sharron L. Ronco
This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes.
The precise origin of the term benchmarking is unknown. However, it may have originated from the ancient Egyptian practice of using the surface of a workbench to make dimensional measurements of an object, from the mark cut into a stone or wall by surveyors measuring the altitude of a tract of land, or from cobblers measuring people's feet for shoes. Whatever its origins, implicit in the concept of benchmarking is the use of standards or references by which other objects or actions can be measured, compared, or judged. Modern commercial benchmarking has come to refer to the process of identifying the best methods, practices, and processes of an organization and implementing them to improve one's own industry, company, or institution. For the purposes of this chapter (and this volume), we define benchmarking as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to identify opportunities for improvement.
The practice of benchmarking in American business is widely considered to have originated in the late 1970s. The literature generally acknowledges Xerox Corporation as the first American business organization to formally apply comprehensive benchmarking techniques. In his seminal book on benchmarking, Camp (1989), a former Xerox employee, described how U.S. businesses, smug in their superiority, were blindsided by the invasion of the U.S. marketplace by less expensive and often higher-quality Japanese-produced goods. Noting that Americans had no equivalent for the Japanese term dantotsu, which means striving to be the "best of the best," Camp speculated that Americans were caught off guard by always assuming that they were the best.
Once Xerox realized that Japanese competitors were able to sell their copiers for about what it was costing Xerox to make its own, it undertook a thorough examination of competitors' processes, operation by operation. Applying the lessons learned allowed Xerox to increase design and production efficiency, resulting in reduced manufacturing costs (Yasin, 2002). The concept soon spread to health care, human resource management, the financial service sector, telecommunications, education, and the aerospace industry (Doerfel and Ruben, 2002; Zairi, 1996). IBM, Motorola, and 3M were also early adopters of benchmarking.
Xerox also led the way in another innovative approach, termed cross-industry benchmarking. Xerox looked to L.L. Bean, a non-competitor with superior warehousing operations, to address inefficiencies in its own warehousing function. Nissan/Infiniti benchmarked against Disney, McDonald's, Nordstrom, and Ritz-Carlton to improve its human resources and customer service (Yasin, 2002). Southwest Airlines looked to the pit crew of an Indianapolis 500 race car team, and the staff of a hospital emergency room learned how to get customer information quickly from Domino's Pizza (Epper, 1999).
Before benchmarking, most management processes simply projected future performance from past practices, without consideration of targets or superior functional practices in outside organizations. What was innovative about benchmarking was that it established a structured process of comparison that emphasized practices over metrics (Birnbaum, 2001; Camp, 1989). That an organization might be recognized for process excellence instead of just product or service excellence was a radical concept for many businesses (Spendolini, 1992).
The quality movement of the 1990s in the United States (Mouradian, 2002) ushered in a new emphasis on the use of benchmarking as a tool to gauge and improve organizational quality. Benchmarking, viewed as a process-based means of measuring and enhancing some aspect of an organization's performance, is a fundamental tool in varied approaches to quality enhancement (Yasin, 2002), including Total Quality Management (TQM), Continuous Quality Improvement (CQI), and the Malcolm Baldrige framework.
Total Quality Management (TQM) arose largely in the 1980s and drew heavily on the works of Deming (1986; also see Seymour, 1993). Benchmarking is often noted as a practice that grew out of the TQM movement (Achtemeier and Simpson, 2005), in part because of increased calls for accountability and performance measurement from federal and state governments. Accordingly, benchmarking processes were viewed as a means of improving performance, delivering quality to varied "customers" (a term often spurned by those in higher education), and identifying opportunities for improvement and greater efficiency (Shafer and Coate, 1992).
The Continuous Quality Improvement (CQI) approach was derived in part from aspects of the TQM model. CQI approaches focus on organizational processes and systems, and on continuously improving, rather than simply maintaining, organizational quality. Within CQI, benchmarking processes are used to assess an organization's current quality and to identify targets for future improvements.
Benchmarking was given a boost in 1988 when the United States Congress passed legislation creating the Malcolm Baldrige National Quality Award (named after a former Secretary of Commerce), partly as a response to Japan's establishment of the Deming Prize to recognize quality in industry, but also to encourage U.S. businesses to enhance their international competitiveness through the identification of best practices and improvements in quality. The Baldrige framework set forth guidelines for organizational excellence and incorporated the benchmarking process as an important award criterion.
In short, the Malcolm Baldrige approach is based on several criteria, or foci, used to move toward performance excellence: leadership, strategic planning, customer focus, measurement, analysis, and knowledge management (where benchmarking resides), workforce focus, and operations focus (www.nist.gov/baldrige/publications/upload/2011_2012_Education_Criteria.pdf). By the mid-1990s, benchmarking was seen as a winning business strategy, surpassing other techniques of the quality movement (e.g., TQM, CQI, and Business Process Reengineering [BPR]). The news evidently spread: by 1997, 86 percent of companies claimed to use benchmarking in some form (Rigby, 1995).
Benchmarking was optimistically greeted by higher education in the early 1990s as a positive endeavor that would help overcome resistance to change, provide a structure for external evaluation, and create new networks of communication between institutions (Alstete, 1995). Marchese (1995) included benchmarking on the list of “What’s In” in Change magazine. In 1996 the Houston-based American Productivity and Quality Center (APQC) began facilitating benchmarking studies in higher education in cooperation with the State Higher Education Executive Officers (SHEEO) and other organizations supporting higher education. At this point, benchmarking—and its focus on measuring performance—was becoming entrenched in many areas of higher education (Achtemeier and Simpson, 2005). In 2001 the University of Wisconsin–Stout became the first Malcolm Baldrige National Quality Award winner in higher education.
Over the past two decades, both the National Association of College and University Business Officers (NACUBO) and the American Council on Education (ACE) have developed awards for higher education institutions based largely on the Baldrige model. Ruben (2007), in association with NACUBO, developed a model titled “Excellence in Higher Education” that is based largely on the Baldrige criteria. More recently, the California State University system embraced this model (calstate.edu/qi/ehe/), along with dozens of individual institutions (Doerfel and Ruben, 2002).
Benchmarking activities in higher education are not limited to the United States, however. Fueled by governmental and public concerns for standards and cost-effectiveness, many nations adopted a range of approaches to higher education benchmarking. Jackson and Lund (2000) describe the substantial performance measurement and benchmarking activities undertaken in the United Kingdom since the 1980s. Similar endeavors are evident in Australia, New Zealand, Canada, Germany, and other European nations (Farquhar, 1998; Lund, 1998; Lund and Jackson, 2000; Massaro, 1998; Schreiterer, 1998). More recently, benchmarking of disciplinary learning outcomes was an integral part of the Bologna Process in an effort to create comparable and compatible quality assurance and academic degree standards across several continents (Adelman, 2009).
Despite the potential benefits, full-scale benchmarking has been undertaken by only a relatively small number of higher education institutions (in one example, Penn State undertook a comprehensive benchmarking process against other Big Ten schools; Secor, 2002). More commonly, institutions seek to improve operations more informally by identifying best practices elsewhere and then attempting to adopt those practices. Most of this activity goes unreported in the literature, and so it is impossible to determine how widespread it is.
Birnbaum (2001) asserted that what ultimately found a home in higher education was not benchmarking at all, but a "half-sibling called performance indicators and a kissing cousin called performance funding." With the rush for greater accountability, the thoughtful processes envisioned by the concept of benchmarking were quickly replaced by a lust for quick measurement and data. As attractive as this option may be, the sole use of performance indicators ignores a basic and powerful premise of benchmarking—namely, that improvement requires identifying the processes behind the benchmarks or metrics, processes whose refinement leads to improved performance that is subsequently demonstrated by those same benchmarks or metrics.
Like most practices, benchmarking is actually a collection of approaches and techniques that can be conceptualized as a classification scheme or a continuum of self-evaluation activities (Jackson and Lund, 2000). To begin with, benchmarking can be internally or externally focused. Internal benchmarking may be appropriate where similar operations, functions, or activities are performed within the same organization. For colleges and universities, these could be academic, administrative, or student service units and involve processes like admissions, hiring, assessment of student learning, or delivery of online instruction. Internal benchmarking may be an end in itself or the starting point for understanding processes that will be externally benchmarked. See Chapter 2 for a more detailed discussion of internal benchmarking.
External benchmarking seeks best practices outside the organization. In competitive benchmarking, the products, services, and processes of an organization are compared with those of direct competitors; by contrast, functional benchmarking examines similar functions in institutions that are not direct competitors. Best-in-class or generic benchmarking seeks new and innovative practices across multiple industries to uncover the "best of the best."
Benchmarking can also be categorized by its approach: metric, process, or diagnostic (Yarrow and Prabhu, 1999). Almost all benchmarking in higher education can be characterized as metric or performance benchmarking, which compares selected indicators or metrics among similar institutions to evaluate relative performances (Smith, Armstrong, and Brown, 1999). Metric benchmarking is limited to superficial manifestations of business practices (Doerfel and Ruben, 2002) and is restricted to those characteristics that can be quantified.
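A minimal example of what metric benchmarking amounts to in practice is placing one institution's indicator within a peer-group distribution. The graduation-rate figures below are invented for illustration.

```python
# Minimal metric-benchmarking sketch: compare one indicator (six-year
# graduation rate) against a peer group. All figures are invented.
grad_rate = {
    "Home U": 0.62, "Peer A": 0.58, "Peer B": 0.71,
    "Peer C": 0.66, "Peer D": 0.60, "Peer E": 0.75,
}

home = grad_rate["Home U"]
peers = sorted(v for k, v in grad_rate.items() if k != "Home U")

share_below = sum(r < home for r in peers) / len(peers)
median = peers[len(peers) // 2]
print(f"Home U graduation rate {home:.0%}: above {share_below:.0%} of peers "
      f"(peer median {median:.0%})")
```

As the following paragraphs note, such figures only locate an institution relative to its peers; they say nothing about the practices behind a higher-performing peer's results.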
Process benchmarking, on the other hand, involves a comprehensive comparison of specific business practices with the intention of identifying those aspects of best practice that can lead to improved performance. Process benchmarking is often time consuming and expensive; few who have set out to do it have capitalized fully on its potential (Bender, 2002; Yarrow and Prabhu, 1999).
Diagnostic benchmarking explores both practice and performance, functioning as a continuous “health check” where practices that need to be changed are identified and improvement approaches are devised (Yarrow and Prabhu, 1999). Diagnostic benchmarking may have found its way into higher education via the continuous improvement processes expected by accreditors to document institutional effectiveness.
Since the early 1990s, several large and high-profile benchmarking studies have been conducted within higher education (Jackson and Lund, 2000; Shafer and Coate, 1992). Perhaps the best-known benchmarking studies in American higher education are those conducted by NACUBO in the early 1990s (Alstete, 1995; NACUBO, 1995). The overall goals of these studies were to develop a standard set of accepted indicators and benchmarks that could be used to improve operational quality and performance, along with relevant cost information. The initial and wide-ranging goal of the NACUBO study was to help higher education institutions identify "best practices" across varied core functional areas such as admissions, advancement/development, payroll, and so on. Groundbreaking when it was initiated, the NACUBO study produced many data points and calculations that are now routine. Moreover, some of its ideas and financial calculations (e.g., ratios) are now included in many standard reports from the Department of Education's National Center for Education Statistics (NCES) Integrated Postsecondary Education Data System (IPEDS) (www.nacubo.org/Research/Benchmarking_Resources/Data_Resource_Details.html).
Another classic benchmarking study in higher education is the University of Delaware's National Study of Instructional Costs and Productivity (better known as "the Delaware Study"; www.udel.edu/IR/cost/). The study originated in the early 1990s from the University of Delaware's need to evaluate academic programs for curtailment during a series of difficult budget years (Middaugh, 1994), but it now serves as a means for academic leaders and managers to improve program quality and efficiency. Since then, the study has evolved and grown into the definitive benchmarking study of instructional costs and productivity (Middaugh, 2001). Results provide opportunities to benchmark both internally (across colleges and departments) and externally (across institutions and Carnegie classifications).
Seybert, Weed, and Bers (2012) provide a succinct summary of other large higher education benchmarking endeavors that offer useful information for institutional researchers, including the National Community College Benchmark Project (NCCBP), the Voluntary System of Accountability (VSA; www.voluntarysystem.org), the IDEA (Individual Development and Educational Assessment) Center (www.theideacenter.org), and the many peer comparison tools available from the National Center for Education Statistics web page (nces.ed.gov). In addition, EBI (Educational Benchmarking, Inc.; www.webebi.com) offers a variety of benchmarking services covering many academic (for example, student success, first-year experience, diversity) and non-academic (for example, auxiliary services, residence halls, dining services, student unions) facets of higher education. Seybert and colleagues (2012) also provide a detailed account of the NCCBP.
As Seybert and colleagues (2012) note, benchmarking has also been commonly used in survey research. A large range of surveys sampling student and faculty perceptions within higher education provide benchmarking opportunities. Some of the largest of these include the National Survey of Student Engagement (NSSE) and its many offshoots (for example, the Faculty Survey of Student Engagement [FSSE], the Beginning College Survey of Student Engagement [BCSSE], and the Community College Survey of Student Engagement [CCSSE]; see nsse.iub.edu/), as well as ACT's many student surveys (www.act.org/highered).
Whereas benchmarking has become a staple tool for process improvement in business and industry (Southard and Parente, 2007; Stapenhurst, 2009), examples of full-scale benchmarking in higher education are scarce in the literature. It is not hard to identify reasons why benchmarking has failed to gain traction in higher education.
Benchmarking has been co-opted by benchmarks
The push for accountability and transparency in higher education has created an industry of rankings, ratings, and comparison mechanisms. Institutions can populate dashboards with a vast number of data indicators and compare them over time within the institution, or against comparison groups, using a growing number of tools and consortia. As mesmerizing as these data may be, none of them reveal how the superior performance identified by the benchmarks is achieved. True benchmarking leads to understanding the processes that create best practices and creatively adapting those practices.
We are unique (just like everybody else)
American higher education rightly prides itself on its diversity and complexity, as reflected in its myriad governing and financing structures, types of students and faculty, curricula and pedagogies, student life, and many other characteristics. Benchmarking assumes that there are practices or processes in other institutions or organizations, comparable to those in one's own, that are superior and worthy of emulation. There is a tendency to view benchmarking as copying, and a resistance to imposing practices that do not fit the institution. Rather than copying, however, benchmarking means identifying and assimilating those aspects of best practice that are good for one's own institution.
The higher education environment does not support benchmarking
Benchmarking was an easier "sell" in business, where motives to improve profitability and gain competitive advantage are naturally embraced and outcomes are simpler to define and measure. Higher education has traditionally resisted characterizing its stakeholders as customers or itself as a business, and it is notoriously suspicious of rapid change. Hoffman and Holzhuter (2012) argue that all of the key components needed to support the benchmarking process—acknowledgment of weaknesses, open and honest communication, readiness and flexibility to implement change—are stymied by the models entrenched in the higher education system.
Benchmarking is complex, expensive, and labor intensive
Practitioners agree that, done correctly, benchmarking requires a significant investment of time and effort. Deciding on the scope of the study, identifying best-practice organizations, traveling to those sites, deciding on and capturing best practices, and reporting and disseminating features that can be transferred—these are just some of the considerations in planning a benchmarking study. There is no guarantee of a return on investment if the identified best practice ends up being incompatible with, or even counterproductive to, practices in the target institution.