Assessment Clear and Simple

Barbara E. Walvoord

Description

The first edition of Assessment Clear and Simple quickly became the essential go-to guide for anyone who participates in the assessment process in higher education. With the increased pressure to perform assessment to demonstrate accountability, Assessment Clear and Simple is needed more than ever. This second edition of the classic resource offers a concise, step-by-step guide that helps make assessment simple, cost-efficient, and useful to an institution. It contains effective strategies for meeting the requirements of accreditation agencies, legislatures, review boards, and others, while emphasizing and showing how to move from data to actions that improve student learning. This thoroughly revised and updated edition includes many new or expanded features, including:

* Illustrative examples drawn from the author's experience consulting with more than 350 institutions
* A basic, no-frills assessment plan for departments and for general education
* Tips on how to integrate portfolios and e-portfolios into the assessment process
* Suggestions for using rubrics and alternatives to rubrics, including doing assessment for multidisciplinary work
* Clear instructions on how to construct a coherent institution-wide assessment system and explain it to accreditors
* Ideas for assigning responsibility for general education assessment
* Strategies for gathering information about departmental assessment while keeping the departmental workload manageable
* Information on how to manage assessment in times of budgetary cutbacks

Praise for the Second Edition of Assessment Clear and Simple

"Walvoord's approach to assessment is wonderfully straightforward; it is also effective in facilitating faculty engagement in assessment. We've applied a number of her methods to our campus assessment efforts with success. This book makes assessment both manageable and useful in improving and enhancing student learning."--Martha L. A. Stassen, director of assessment, University of Massachusetts, Amherst, and president, New England Educational Assessment Network (NEEAN)

"Walvoord's work clearly presents the basics for getting started in assessment of student learning while honestly addressing the complexities of assessment when driven by faculty passion for student learning. This book is a valuable resource for the novice as well as the developing experts who are leading their institutions in academic assessment."--Bobbi Allen, faculty assessment director, Delta College

Page count: 209

Publication year: 2010




Table of Contents

Cover

Title

Copyright

Dedication

Foreword to the First Edition

About the Author

Chapter 1: For Everyone: The Basics of Assessment

The Purpose of This Book

The Organization of This Book

Themes of the Book

What Is Assessment?

Concerns About Assessment

Benefits of Assessment

Communicating About Assessment

General Guidelines for the Three Steps

Okay, So What Should We Do?

Chapter 2: For Institution-Wide Leaders and Planners

Establish Vision, Audience, Purpose, Goals

Analyze Your Overall Assessment System

Make Improvements to the Assessment System

Documenting Assessment for Accreditors and Others

Budgeting for Assessment

Chapter 3: For Departments and Programs

The Basic No-Frills Departmental Assessment System

Case Studies and Examples

Guidelines for Departmental Assessment

Special Circumstances for Assessment

Reporting Departmental Assessment

Chapter 4: For General Education

Establish Vision, Responsibilities, Audiences, and Goals

The Basic No-Frills System

Guidelines for General Education Assessment

Reporting General Education Assessment

Appendix A: Curriculum Map

Appendix B: Student Consent Form

Appendix C: Analyzing Audiences for Assessment

Appendix D: Sample Rubrics

Example 1: Rubric for Essay of Literary Analysis

Example 2: Rubric for Scientific Experiment in Biology Capstone Course

Resources: A Short List

References

Index

End User License Agreement


List of Illustrations

Chapter 1: For Everyone: The Basics of Assessment

FIGURE 1.1 Evaluating Student Classroom Work: Two Options

Chapter 2: For Institution-Wide Leaders and Planners

FIGURE 2.1 A Problematic Assessment System

FIGURE 2.2 An Ideal Assessment System

Chapter 4: For General Education

FIGURE 4.1 System of General Education Assessment

List of Tables

Chapter 3: For Departments and Programs

Table 3.1. Class Average Rubric Scores for Science Reports


Assessment Clear and Simple

A Practical Guide for Institutions, Departments, and General Education

Second Edition

Barbara E. Walvoord

Foreword by Trudy W. Banta

Copyright © 2010 by John Wiley & Sons, Inc. All rights reserved.

Published by Jossey-Bass

A Wiley Imprint

989 Market Street, San Francisco, CA 94103-1741—www.josseybass.com

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the Web at www.copyright.com. Requests to the publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online at www.wiley.com/go/permissions.

Readers should be aware that Internet Web sites offered as citations and/or sources for further information may have changed or disappeared between the time this was written and when it is read.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Jossey-Bass books and products are available through most bookstores. To contact Jossey-Bass directly call our Customer Care Department within the U.S. at 800-956-7739, outside the U.S. at 317-572-3986, or fax 317-572-4002.

Jossey-Bass also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Cataloging-in-Publication Data

Walvoord, Barbara E. Fassler, 1941-

Assessment clear and simple : a practical guide for institutions, departments, and general education/Barbara E. Walvoord; foreword by Trudy W. Banta.—2nd ed.

p. cm.

Includes bibliographical references and index.

ISBN 978-0-470-54119-7 (pbk.)

1. Educational evaluation. I. Title.

LB2822.75W35 2010

379.1’58—dc22

2009049497

SECOND EDITION

To the wonderful people at more than 350 institutions where I have consulted and led workshops—people who have been generous hosts in every way, who have not only listened but also shared their own creative insights and practices, and who have not only sought to learn from my experience but also taught me, challenged me, made me think, and provided me with the examples that fill this book

Foreword to the First Edition

IN OUR RESPECTIVE travels around the country talking with faculty about assessment, Barbara Walvoord and I have heard this question many times: “How do we get started in assessment?” That is often followed with a plea, “Isn’t there a simple step-by-step guide we can follow?” Until this point, we have had to say no to that entreaty. But now Barbara has stepped forward to present Assessment Clear and Simple, and all of us—the novices who seek help and experienced practitioners who try to provide it—are indebted to her.

In clear, persuasive prose that reflects her grounding in the discipline of English, Barbara brings us a straightforward definition of assessment that emphasizes the use of carefully considered evidence to improve learning. True to her promise in the subtitle to keep her message short, Barbara defines her audience narrowly and then tells each of three groups that they need to read just two of the book’s four chapters! There is an introductory chapter for everyone, then a special chapter each for institution-wide planners and administrators, faculty concerned with assessment at the department or program level, and faculty and staff charged with the responsibility of assessing the general education experience.

Despite promising to keep things simple, Barbara Walvoord is never simplistic in her presentation. She acknowledges the complexity of learning and of the assessment that must match its features. While endeavoring to keep assessment “simple, cost efficient, and useful,” she encourages faculty to set ambitious goals for student learning, even if they may seem ambiguous in terms of their measurement potential. She urges that we not fall into the trap of discarding goals like preparing students to become ethical decision makers and good citizens just because these abilities seem difficult to measure. Even today we can employ questionnaires and interviews to ask current students and recent graduates if they perceive that they have experienced growth in these areas as a result of their college experiences, and in future years we can operationalize these concepts and develop more direct measures of associated behaviors.

When faculty are confronted with the necessity of creating an assessment initiative to satisfy a state or board of trustees’ mandate or the requirements of an accreditor, they often respond—quite rightly—“Aren’t we already assessing student learning? After all, we evaluate student work and give grades.” One of the many features of this work that I admire is Barbara Walvoord’s willingness to identify and respond to legitimate concerns about outcomes assessment. In this case, she not only acknowledges that faculty and student affairs staff on every campus are engaged in assessment but includes in every chapter the vital step of completing an audit of all the assessment activities already in place and asking how the use of the data from these activities to improve student learning could be enhanced. In her prior presentations and publications, Barbara has become well known for her advocacy of the use of rubrics to make meaning of grades in the outcomes assessment process. In this volume, we are treated to new examples of rubric construction and of the use of classroom assessment techniques in the quest for data that can help us improve instruction and ultimately learning.

In reviewing such a brief work, my greatest concern was related to the limited ability to provide context for the steps to be taken in inaugurating and sustaining an assessment initiative. Assessment approaches are unique, due primarily to the diverse organizational structures and background experiences, expertise, and personalities of instructors and student affairs staff that constitute the environments on different campuses. Barbara has addressed this concern by providing examples and options for proceeding in a variety of contexts, and in the appendices, specific illustrations designed for a variety of institutions.

Barbara Walvoord gives us detailed examples of reporting formats applicable at department and institution-wide levels. She urges that responses to assessment findings be based on the best current theories of student and organizational growth and development, then cites references that can be helpful in the search for such theories.

I could say more, but I am reminded of Barbara’s emphasis on brevity. My overview, then, is designed simply to whet your appetite for the rich educational experience that lies in the pages ahead. Happy reading!

Trudy W. Banta

About the Author

BARBARA E. WALVOORD, PH.D., is Concurrent Professor Emerita at the University of Notre Dame. She has consulted and led workshops on assessment, effective teaching, and writing across the curriculum at more than 350 institutions of higher education. She coordinated Notre Dame’s re-accreditation self-study. She founded and directed four college and university faculty development centers, each of which won national recognition. She taught English and interdisciplinary humanities courses for more than thirty years and was named Maryland English Teacher of the Year for Higher Education in 1987. Her publications include Effective Grading: A Tool for Learning and Assessment in College, 2nd ed. (with V. J. Anderson; Jossey-Bass, 2010); Teaching and Learning in College Introductory Religion Courses (Blackwell/Jossey-Bass, 2008); Academic Departments: How They Work, How They Change (with others; ASHE ERIC Higher Education Reports, Jossey-Bass, 2000); In the Long Run: A Study of Faculty in Three Writing-Across-the-Curriculum Programs (with L. L. Hunt, H. F. Dowling, Jr., and J. D. McMahon; National Council of Teachers of English, 1997); and Thinking and Writing in College: A Naturalistic Study of Students in Four Disciplines (with L. P. McCarthy in collaboration with V. J. Anderson, J. R. Breihan, S. M. Robison, and A. K. Sherman; National Council of Teachers of English, 1990).

Chapter 1: For Everyone: The Basics of Assessment

YOU PROBABLY ARE reading this book because you are an administrator, department chair, assessment director, general education committee member, or faculty member involved in assessment. I wrote this book after serving in several of those administrative and faculty roles myself and serving as a consultant on assessment for more than 350 institutions, public and private, large and small, traditional and nontraditional. I have written this book for all those people and their institutions.

The Purpose of This Book

This book provides a short, clear, no-nonsense guide to assessment. The book examines how assessment can serve departmental and institutional goals—not merely external mandates—and how assessment can be conducted effectively and efficiently with ordinary people’s available time, expertise, and resources. This book aims to make assessment simple, cost efficient, and useful for student learning, while meeting the assessment requirements of accreditation agencies, legislatures, review boards, and others.

Relation to Other Resources

I have emphasized brevity and practicality. Other types of resources for assessment include collections of case studies such as Banta, Jones, and Black (2009) and longer books such as Suskie (2009), a very thorough guide at more than 300 pages. My and Anderson’s Effective Grading (2010) focuses on classroom assessment, including establishing goals, designing assignments, encouraging student motivation, designing the course, communicating with students about their work, and saving time in the grading process. It forms a kind of Venn diagram with this book, because its final section discusses how to use student classroom work, as well as other measures, for assessment in departments or general education programs.

The Organization of This Book

This book is organized in the following way:

This chapter, which everyone should read. It defines assessment, answers common concerns, and lays the groundwork for each of the following chapters.

Chapter Two, for institution-wide leaders and planners: assessment directors and committees, provosts, deans, and anyone who wants to see the “big picture” for the institution.

Chapter Three, for department members and chairs.

Chapter Four, for general education leaders and faculty.

Themes of the Book

The following themes recur throughout this book:

Assessment is a natural, scholarly act that can bring important benefits.

Assessment is composed of three steps: goals, information, action.

The end of assessment is action.

Assessment involves communicating across cultures, within and outside the institution.

You need not only individual data collection, but systems for feeding data into decision making.

Build on what you’re already doing.

Use students’ classroom work, evaluated by faculty, as a valuable source of information about learning.

Keep it simple!

What Is Assessment?

Assessment is the systematic collection of information about student learning, using the time, knowledge, expertise, and resources available, in order to inform decisions that affect student learning.

Assessment as a Natural, Scholarly Act

Assessment is a natural, inescapable, human, and scholarly act. When we spend time teaching students how to shape an argument or solve an equation, we naturally ask, “Well, did they learn it?” Our academic training urges us to look for evidence to support claims, so when the college catalogue claims that students learn to be critical thinkers, we ask, “Well, do they?”

We’re Already Doing Assessment

Assessment is so natural we have been doing it all along. Whenever a department or program says, “Students’ senior capstone projects showed that, as a group, they are not doing well on X. Maybe we could …”—that’s assessment. It happens all the time in responsible departments and programs.

Assessment as a Reform Movement

Assessment is a powerful national reform movement. The movement draws from public dissatisfaction with the perceived shortcomings of college graduates. Proponents of assessment believe that higher education should examine what students have learned, not just what the institution or department did that supposedly resulted in learning. The movement has become a mandate, imposed by accreditors and by some boards and state legislatures. Issues of accountability and public disclosure have become conflated with assessment (Ewell, 2004). It’s a complicated scene. To follow the national movement, consult Ewell (2008) and the pages of the monthly newsletter Assessment Update, especially the columns by Ewell (www.interscience.wiley.com).

Movements and mandates may present both opportunities and dangers. Faculty often voice fears that appropriate faculty control over what is taught and how it is tested will be curtailed; results of assessment will be used irresponsibly; standardized tests will drive instruction; the goals of higher education will be dumbed down to what is measurable only in a narrow sense; higher education will be held responsible for things it can’t control, such as the students’ previous education or their lack of motivation; or educators will be forced to create costly and time-consuming bureaucratic systems that comply with accreditors’ demands for assessment but that do not really result in improved student learning. These are real dangers. But the answer is not to ignore assessment, resist it, or leave it to others. Instead, we must improve our assessment systems so that they help us enhance student learning, draw upon the best aspects of academic culture, and are sustainable in terms of time and resources. Then we need to explain our assessment systems clearly and without arrogance to our various constituencies. I believe that we and our students can profit from assessment while minimizing the dangers. The purpose of this book is to show how.

The Three Steps of Assessment

The good news is that accreditors ask us to follow three steps that are natural and scholarly:

1. Goals. What do we want students to be able to do when they complete our courses of study? (Goals may also be called “outcomes” or “objectives.” Issues of language are discussed later in this chapter.)

2. Information. How well are students achieving these goals, and what factors influence their learning? (Information may be called “measures” or “evidence.”)

3. Action. How can we use the information to improve student learning? (Using the information may be called “closing the loop.”)

Sometimes an additional step is added between 2 and 3: identifying where in the curriculum the goals are addressed (sometimes called “curriculum mapping”; see example in Appendix A). This step is not assessment per se, because it focuses on what the institution or department does to bring about student learning, not on what the students learned. Nevertheless, curriculum mapping is useful to identify goals that are not being consistently addressed. The three steps of assessment are discussed in detail within this chapter and the other chapters in this book.
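As a loose illustration of what a curriculum map is, here is a minimal Python sketch that records which courses claim to address which goals and flags goals that are rarely addressed; the course names and goals are hypothetical, and the book's own sample map is the one in Appendix A.

    from collections import Counter

    # A curriculum map records which courses address which learning goals.
    # The courses and goals below are hypothetical, for illustration only.
    curriculum_map = {
        "BIO 101": {"scientific literacy"},
        "BIO 210": {"scientific literacy", "written communication"},
        "BIO 495 (capstone)": {"written communication", "ethical reasoning"},
    }
    program_goals = ["scientific literacy", "written communication",
                     "ethical reasoning", "quantitative reasoning"]

    # Count how many courses address each goal; goals addressed rarely or
    # not at all are candidates for curricular attention.
    counts = Counter(goal for goals in curriculum_map.values() for goal in goals)
    for goal in program_goals:
        print(f"{goal}: addressed in {counts.get(goal, 0)} course(s)")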

Classroom Assessment and Program Assessment

Classroom assessment takes place within the confines of a single class. The instructor examines student work, talks with students about what worked for them, and then makes changes to his or her pedagogy or classroom activities.

Program assessment involves the department, program, general education, or institution examining student learning within those larger arenas and then taking action. For example, a department may examine a sample of capstone research projects from its senior undergraduate majors, as well as results from a senior student survey, in order to determine where the department can improve students’ learning within the program as a whole. A general education committee may examine student work from a sample of general education courses, not to evaluate each teacher’s performance but to assess how well the general education program as a whole is meeting its goals.

The End of Assessment Is Action

The goal of assessment is information-based decision making. To put it another way, the end of assessment is action. Assessment helps the organization determine how well it is achieving its goals and suggests effective steps for improvement.

That means you should conduct assessment for yourselves and your students, not just for compliance with accreditors. You don’t need to build a whole superstructure of assessment bureaucracy; it’s much more important to incorporate good assessment into all the institution’s core decision-making processes that are already in place: departmental decision making, committee deliberations, administrative policies, budgeting, and planning. You don’t need to collect data you don’t use; it’s much more important to collect a small amount of useful data than to proliferate data that sit in a drawer or on a computer file. If you are collecting information you are not using, either start using it or stop collecting it. Instead of focusing on compliance, focus on the information you need for wise action. Remember that when you do assessment, whether in the department, the general education program, or at the institutional level, you are not trying to achieve the perfect research design; you are trying to gather enough data to provide a reasonable basis for action. You are looking for something to work on.

The Most Common Actions Resulting from Assessment

Three common actions that result from assessment in the department, in general education, and in the institution are these:

Changes to curriculum, requirements, programmatic structures, or other aspects of the students’ course of study

Changes to the policies, funding, and planning that support learning

Faculty development

Sometimes the first action from an assessment is to gather additional information.

Pitfalls of Assessment

Common pitfalls of assessment include

Mere compliance with external demands

Gathering data no one will use

Making the process too complicated

Section Summary

Assessment is a natural, scholarly act, asking, “Are students learning what we want them to?” and “How can we better help them learn?”

Assessment is also a national movement that poses both potential dangers and great promise for improving student learning.

Assessment has three steps: goals, information, and action.

The purpose of assessment is informed decision making.

Assessment can go wrong when it focuses on compliance or on complex data gathering without using the information for decision making.

Concerns About Assessment

Aren’t Grades Assessment?

Yes. But grades by themselves have limited use for program assessment. A department might know that the average grade on student senior research projects was 3.6, but that doesn’t tell them much. It’s not enough to say that we know students learned X if they got a grade of C or better in such-and-such a course. Instead, the department needs more specific, diagnostic information: students were strong in X and Y, but weak in Q and R. That detailed information tells the department what to work on. Such detailed information may emerge as faculty are grading student work, but then it must be aggregated and analyzed at the department or general education level, as each chapter in this book explains.
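As one hypothetical illustration of what aggregating and analyzing classroom evidence can look like, the short Python sketch below averages rubric scores criterion by criterion across a handful of student projects; the criteria, the 1-5 scale, and the scores are all invented, not taken from the book.

    from statistics import mean

    # Each dictionary holds one student's scores (1-5) on the rubric criteria
    # for a capstone project; criteria and numbers are invented for illustration.
    project_scores = [
        {"thesis": 4, "evidence": 3, "methods": 5, "citation": 2},
        {"thesis": 5, "evidence": 4, "methods": 4, "citation": 2},
        {"thesis": 4, "evidence": 3, "methods": 5, "citation": 3},
    ]

    # Averaging by criterion, rather than reporting one overall grade, shows
    # where the group is strong and where it is weak.
    for criterion in project_scores[0]:
        avg = mean(scores[criterion] for scores in project_scores)
        print(f"{criterion}: class average {avg:.1f}")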

Sometimes grades can be used as a red flag. In particular, departments may want to monitor the grade distribution in introductory courses.

Example: Uncomfortable with the proportion of D and F grades and withdrawals from the introductory General Chemistry course at the University of Notre Dame, faculty members, led by Professor Dennis Jacobs, began a more extensive assessment. Faculty analyzed students’ performance on the common chemistry exams and students’ math Scholastic Aptitude Test (SAT) scores; they conducted interviews and focus groups with students; and they examined the research literature on how students most effectively learn in science. The grades were a red flag; the faculty used other data to expand their understanding of what was happening. Their findings and actions led to significant improvement in student learning (Jacobs, 2000, and Jacobs’s Web site at www.nd.edu/~djacobs).
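A grade-distribution red-flag check can be as simple as the following sketch, which computes the share of D and F grades and withdrawals in a single course section; the grades are invented for illustration and are not Notre Dame's data.

    # Proportion of D grades, F grades, and withdrawals (W) in one course
    # section; the grade list is hypothetical.
    grades = ["A", "B", "C", "D", "F", "W", "B", "C", "D", "W", "A", "F"]

    dfw_rate = sum(g in {"D", "F", "W"} for g in grades) / len(grades)
    print(f"DFW rate: {dfw_rate:.0%}")  # a high rate signals the need for a closer look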

How Can We Assess Complex Learning?

Assessment can and should be applied to the learning that the department, program, or institution most values, including the inclination to question assumptions, sensitivity to poverty and injustice, scientific literacy, the ability to work effectively with people of diverse backgrounds and cultures, or the development of ethical reasoning and action (for one list of liberal learning outcomes, see www.aacu.org/leap/vision.cfm).

We can’t fully assess such ineffable qualities, but we can get indications. We are not caught between “objectivity” (in the sense that all judges of a student performance will agree on its quality) and “subjectivity” in the sense of individual whim. Between those two poles stands informed judgment of work in our fields. As professionals, we assess our colleagues’ work all the time. Assessing students’ work is part of that responsibility. In assessing student work, not all judges of a single piece of student work will agree on its quality, but that’s how disciplines move forward. If raters disagree, assessors can use established methods: take the average score, ask another rater to break the tie, or have raters discuss the student work to see whether they can come to agreement.
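The reconciliation options just named can be expressed in a few lines; in this hypothetical sketch the scores, the one-point disagreement threshold, and the third rater's score are illustrative assumptions, not procedures prescribed by the book.

    from statistics import mean, median

    # Two raters scored the same piece of student work on a 1-4 rubric scale;
    # the scores and the threshold below are placeholders.
    scores = [3, 4]

    # Option 1: take the average of the two ratings.
    print("average of two raters:", mean(scores))

    # Option 2: if the raters differ by more than one point, add a third
    # rating to break the tie and report the median of all three.
    if max(scores) - min(scores) > 1:
        scores.append(2)  # the third rater's score (placeholder)
    print("reconciled score:", median(scores))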

To get indications about how well our students are achieving ineffable goals, we must rely on student work or student actions that may offer only a small window into the ineffable quality. For example, suppose you want students to develop “ethical reasoning and action,” which is one of the essential liberal learning outcomes identified by the LEAP (Liberal Education and America’s Promise) project of the Association of American Colleges and Universities (www.aacu.org/leap/vision.cfm). To assess whether your students are developing this quality, you might rely on two methods:

Ask them in surveys whether they believe your program helped them develop ethical reasoning and action.

Evaluate something they do.

Under these two headings, many options are available. For example, Gelmon, Holland, Driscoll, Spring, and Kerrigan (2001) compare and contrast a variety of methods for assessment of aspects such as “awareness of community” and “sensitivity to diversity” that may result from students’ service learning.

Example: Columbus State Community College faculty asked students to write about a scenario; the writings were evaluated for ability to “value diversity” (Hunt, 2000).

• • •

Example: