What factors contribute to students' lasting success? Much research has explored the impact of the first year of college on student retention and success. With new performance-based funding initiatives, institutional administrators are taking a laser-focused approach to aligning retention and success strategies with first-year student transition points. This volume enlightens the discussion and highlights new directions for assessment and research practices within the scope of the first-year experience. Administrators, faculty, and data scientists provide a conceptual and analytical approach to investigating the first-year experience for entry-level and seasoned practitioners alike. The emerging research throughout this volume suggests that while many first-year programs and services have significant benefits across a number of success outcomes, these benefits may not be universal for all students. This volume:

* Examines sophisticated empirical models
* Provides critical assessment practices and implications
* Examines both the four-year college and the two-year institution, where the first year is just as critical

This is the 161st volume of this Jossey-Bass quarterly report series. Timely and comprehensive, New Directions for Institutional Research provides planners and administrators in all types of academic institutions with guidelines in such areas as resource coordination, information analysis, program evaluation, and institutional management.
New Directions for Institutional Research
John F. Ryan EDITOR-IN-CHIEF
Gloria Crisp ASSOCIATE EDITOR
Ryan D. Padgett
EDITOR
Number 160
Jossey-Bass
San Francisco
Emerging Research and Practices on First-Year Students Ryan D. Padgett (ed.) New Directions for Institutional Research, no. 160 John F. Ryan, Editor-in-Chief Gloria Crisp, Associate Editor
Copyright © 2014 Wiley Periodicals, Inc., A Wiley Company
All rights reserved. No part of this publication may be reproduced in any form or by any means, except as permitted under section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher or authorization through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923; (978) 750-8400; fax (978) 646-8600. The code and copyright notice appearing at the bottom of the first page of an article in this journal indicate the copyright holder's consent that copies may be made for personal or internal use, or for personal or internal use of specific clients, on the condition that the copier pay for copying beyond that permitted by law. This consent does not extend to other kinds of copying, such as copying for general distribution, for advertising or promotional purposes, for creating collective works, or for resale. Such permission requests and other permission inquiries should be addressed to the Permissions Department, c/o John Wiley & Sons, Inc., 111 River St., Hoboken, NJ 07030; (201) 748-8789, fax (201) 748-6326, http://www.wiley.com/go/permissions.
New Directions for Institutional Research (ISSN 0271-0579, electronic ISSN 1536-075X) is part of The Jossey-Bass Higher and Adult Education Series and is published quarterly by Wiley Subscription Services, Inc., A Wiley Company, at Jossey-Bass, One Montgomery Street, Suite 1200, San Francisco, California 94104-4594 (publication number USPS 098-830). POSTMASTER: Send address changes to New Directions for Institutional Research, Jossey-Bass, One Montgomery Street, Suite 1200, San Francisco, California 94104-4594.
Individual Subscription Rate (in USD): $89 per year US/Can/Mex, $113 rest of world; institutional subscription rate: $317 US, $357 Can/Mex, $391 rest of world. Single copy rate: $29. Electronic only--all regions: $89 individual, $317 institutional; Print & Electronic--US: $98 individual, $365 institutional; Print & Electronic--Canada/Mexico: $98 individual, $405 institutional; Print & Electronic--Rest of World: $122 individual, $439 institutional.
Editorial Correspondence should be sent to John F. Ryan at [email protected].
New Directions for Institutional Research is indexed in Academic Search (EBSCO), Academic Search Elite (EBSCO), Academic Search Premier (EBSCO), CIJE: Current Index to Journals in Education (ERIC), Contents Pages in Education (T&F), EBSCO Professional Development Collection (EBSCO), Educational Research Abstracts Online (T&F), ERIC Database (Education Resources Information Center), Higher Education Abstracts (Claremont Graduate University), Multicultural Education Abstracts (T&F), Sociology of Education Abstracts (T&F).
Microfilm copies of issues and chapters are available in 16mm and 35mm, as well as microfiche in 105mm, through University Microfilms, Inc., 300 North Zeeb Road, Ann Arbor, Michigan 48106-1346.
www.josseybass.com
The Association for Institutional Research (AIR) is the world's largest professional association for institutional researchers. The organization provides educational resources, best practices, and professional development opportunities for more than 4,000 members. Its primary purpose is to support members in the process of collecting, analyzing, and converting data into information that supports decision making in higher education.
Editor's Notes
Reference
1: Conceptual Considerations for First-Year Assessment
Objectives and Outcomes
Quantitative, Qualitative, and Mixed Methodologies
National Surveys and Locally Developed Assessment Instruments
Direct and Indirect Measures
Conclusion
References
2: High-Impact Practices and the First-Year Student
Service Learning
Learning Communities
Undergraduate Research
Methods
Conclusions
References
Appendix: Dependent Variables: Scales and Component Items
Approaches to Deep Learning
Self-Reported Gains
Student Satisfaction
3: Good Practices for Whom? A Vital Question for Understanding the First Year of College
Methods
Findings
Discussion
References
4: Programs and Practices That Retain Students From the First to Second Year: Results From a National Study
Introduction
Research Background
Research Objective
Methodology
Results and Discussion
Conclusion and Implications
Note
References
5: The First-Year Experience in Community Colleges
Introduction
Characteristics of First-Year Experience Programs and the Extant Research
Core Components of FYE Programs
National Initiatives
Strategies for Integrating Research and Program Delivery and Using Data
Challenges of Delivering and Conducting Research
Conclusion
References
Index
Wiley End User License Agreement
Chapter 2
Table 2.1
Table 2.2
Table 2.3
Chapter 3
Table 3.1
Table 3.2
Table 3.3
Table 3.4
Table 3.5
Table 3.6
Chapter 4
Table 4.1
Table 4.2
Table 4.3
Appendix 4.1
Appendix 4.2
Chapter 5
Table 5.1
Research on the first-year experience is as ubiquitous as the components of “the first-year experience.” The myriad of research and assessment efforts, both national and local, has produced a sort of internal debate within higher education as to the consistency of the findings and the success of their application to practice. Stated differently, initiatives that produce results on one campus may not on another. This inconsistency has prompted researchers to be more conscientious about assessing first-year experiences, as evidenced by the increase in high-quality data, complex statistical models, and assessment strategies within the literature.
Yet despite higher education's best efforts, pinpointing the exact means through which students succeed or persist on a four-year graduation track is as elusive as ever. With greater diversity in student demographics, decreasing public appropriations, and calls for more stringent accountability, research and assessment on the first year continue to be prevalent and relevant. In the face of these challenges, “institutions of higher education have increasingly embraced their obligation for assisting students with the transition to the college learning environment” (Swing, 2004, p. ix). To this end, this volume continues the examination of the first year and the factors that impact student success and persistence.
Together, the chapters within this volume provide a template for researchers on the statistical methods that need to be considered when conducting assessment on first-year experiences. Chapter 1 provides a comprehensive blueprint outlining the foundational understandings of first-year assessment. Grounded in the conceptual framework of sound assessment, Chapter 1 provides novice and intermediate researchers with a firm understanding of the decisions to consider prior to assessment. Chapters 2–4 utilize large, national surveys to examine the impact of the first year on a variety of learning and retention outcomes. Chapter 2 highlights three first-year programs (i.e., high-impact practices) that significantly increase student learning. Chapter 3 expands upon the programs of the first year by estimating the effects of vetted good practices across two psychosocial measures. In addition, Chapter 3 provides a sound argument for the need to disaggregate data across student characteristics to more accurately assess the impact of these practices across groups. Chapter 4 provides a comprehensive predictive model, illustrating the importance of using control and covariate measures when estimating student persistence. Finally, Chapter 5 expands our understanding of the first-year experience by documenting the challenges of applying these experiences within two-year institutions.
Before practitioners can utilize data-driven processes, sound empirical evidence must be collected. The question as to “where to begin?” is thoroughly discussed in Chapter 1. Jennifer R. Keup and Cindy A. Kilgo provide one of the more compelling and comprehensive examinations of assessment techniques. Supported by recent research, Keup and Kilgo provide a road map for emerging researchers on first-year assessment as well as considerations to help support key assessment decisions.
Over the last decade, research on high-impact practices has become as prevalent as research on first-year experiences. Yet despite the overwhelmingly positive evidence, few have isolated the effects of high-impact practices within the first year. Using data from the National Survey of Student Engagement, Malika Tukibayeva and Robert M. Gonyea estimate the impact of first-year participation in service learning, learning communities, and research with faculty on student learning. Critical to this evaluation is the snapshot of participation broken down by student and institutional characteristics.
The assessment movement within higher education has relied heavily on longitudinal studies to accurately estimate student learning and development. Using data from the Wabash National Study of Liberal Arts Education, Kathleen M. Goodman presents a strong argument for the necessity to disaggregate data by student characteristics when measuring the effects of a program/practice on any outcome. If the ultimate goal for a researcher is to accurately measure the impact of participation and engagement, Goodman suggests that disaggregating data or accounting for conditional effects must become routine.
There is no magic potion for fully understanding student persistence. However, the soundness of the analyses and the statistical model is vital to producing accurate estimates of college impact. Linda DeAngelo illustrates how to construct a strong and accurate model for prediction by accounting for prior research and utilizing valid controls and covariates. Using data from the Cooperative Institutional Research Program's 2007 Freshman Survey and 2008 Your First College Year survey, DeAngelo walks through the creation of a prediction model and how the findings can influence future models and practice.
The overwhelming majority of research on first-year experiences has been limited to four-year institutions. Trudy Bers and Donna Younger examine the first-year experience as it applies within the two-year setting. In addition to providing an overview of the literature, Bers and Younger discuss how the research can be integrated within community colleges, as well as the challenges of delivering and assessing these programs. The inclusion of this chapter advances the argument that first-year experiences are just as important and influential on two-year campuses as they are on four-year campuses.
By no means does this collection of chapters on first-year experiences signal the end of such research. Arguably, the culmination of this research has generated practical debate on how best to serve and support student learning, development, and success. Together, these chapters serve to enlighten the discussion and highlight new directions for assessment and research practices within the scope of the first-year experience.
Ryan D. Padgett
Editor
Swing, R. L. (2004). Proving and improving, Volume II: Tools and techniques for assessing the first college year (Monograph No. 37). Columbia: University of South Carolina, National Resource Center for The First-Year Experience and Students in Transition.
Ryan D. Padgett, PhD, is the assistant vice president for student success and assessment in the Division of Student Affairs at Northern Kentucky University.
Supported by emerging research and practice, this chapter provides a comprehensive conceptual framework for first-year assessment.
Jennifer R. Keup, Cindy A. Kilgo
For decades, issues surrounding student access and success have been of perennial interest to college educators and researchers, and the first year of college has been recognized as both the springboard for student achievement and success and a significant leakage point in the educational pipeline. Recently, the early success of first-year students has taken on even greater importance due to changes in the higher education landscape, including demands from regional accrediting agencies for more accountability, shifting demographics, differential success rates among new student populations, and a realization on the part of institutions about the importance—both financially and in meeting their commitment to students—of retaining their currently enrolled undergraduates. As such, institutional budget officers, policy makers, and others who invest in first-year student success are searching for research and resources to help inform data-driven decision making about promising practices to support the adjustment and success of first-year students and the effective use of high-impact educational experiences and practices in first-year experience programs.
Institutional assessment activities focused on first-year students have been both an impetus for and a response to this emphasis on first-year student success and first-year experience programs. Assessment data collected from first-year students can serve a wide range of purposes. They can provide an understanding of the background, characteristics, and needs of the student cohort; gauge students' satisfaction with their college choice and experiences; provide perceptions of campus climate from the newest members of the campus community; evaluate the impact and cost-effectiveness of first-year programs and initiatives; measure student learning outcomes and program outcomes; and create benchmarks against comparable institutions, an aspirant group, or nationally accepted standards (Schuh, 2005; Siegel, 2003; Swing, 2004; Upcraft, 2004). Further, empirical data collected from students throughout their first year in college have great utility with respect to their timing in the trajectory of student performance and success. For example, first-year assessment activity can serve as a posttest, generating follow-up data on outreach and admissions efforts. At the same time, it can provide baseline data, or a pretest, for the curricular and cocurricular experiences in students' first year and beyond that are intended to enhance the cognitive, affective, and interpersonal development of college undergraduates.
And yet, the power of first-year assessment data to enhance programs, pedagogies, and policies for first-year students is dependent upon effective strategies and decisions within the data collection, analysis, and dissemination process. While in certain instances there are clear “right” and “wrong” choices, more often, higher education assessment activity comprises subtle, nuanced judgments that are dependent upon a number of other factors. In sum, assessment of students, programs, and outcomes represents a web of decisions. This chapter attempts to address a few of the more common “forks in the assessment pathway” in an effort to provide a foundational understanding of first-year assessment activities as well as some considerations to help support key decisions within that process. National data on institutional assessment practices for first-year initiatives punctuate the discussion, illustrate the national trends and issues with respect to first-year assessment, and provide a broader context for institutional assessment decisions. However, the overall goal of this chapter is to provide a conceptual framework for first-year assessment as a scaffold for more sophisticated statistical and methodological aspects of assessment practice. In particular, this chapter addresses (a) the match between objectives and outcomes, (b) quantitative, qualitative, and mixed methodologies, (c) national surveys and locally developed assessment instruments, and (d) direct and indirect measures.
First-year assessment activities can take a number of forms and pursue a range of purposes. However, the rise in national attention to accountability in higher education, institutional efforts for data-driven decision making, and return on resource investment have brought greater attention to outcome assessment. As such, the evaluation of impact on outcomes is often the primary focus of assessment strategies in undergraduate education overall and the first-year experience in particular. Such endeavors attempt to address “What happened?” and “What mattered?” (Ewell, 2001, p. 3), leading ultimately to “the most fundamental question of all: Is what we are doing [for first-year students] having any effect, and is that effect the intended one?” (Upcraft, 2004, p. 478).
For most outcome assessment processes to be effective, they must draw from clearly defined purposes and goals of the initiative under evaluation (Huba & Freed, 2000; Maki, 2002; Upcraft, Crissman Ishler, & Swing, 2004). Much like a journey without a destination is likely to be inefficient and ineffective, without identifying the desired outcomes of a student success effort, assessment results are likely to reveal a lack of cohesiveness and impact. Common objectives for first-year student initiatives represent longstanding and perennial outcomes of interest to higher education such as retention, satisfaction, and grade point average. However, these measures are more frequently complemented by broader learning objectives that are “developmental or emergent over time,” “more complex and sophisticated,” and focused on fostering “robust learning” skills rather than sole reliance upon specific subject-knowledge acquisition, personal satisfaction, or persistence (Rhodes, 2010, p. 1). These “21st century learning outcomes” represent “a powerful core of knowledge and capacities that all students should acquire” to become “intentional learners, self-aware about the reasons for their studies, adaptable in using knowledge, and able to connect seemingly disparate experiences” (Leskes & Miller, 2006, p. 2). For example, national data on first-year program and assessment outcomes include academic, personal, civic, and interpersonal engagement and skill development; clearer understanding of the purposes of higher education, general education, and liberal arts; intercultural competence and global citizenship; and appreciation of interdisciplinary perspectives, in addition to more traditional measures such as persistence to the second year; improved academic performance and achievement; sense of belonging; satisfaction with the institution, classes, faculty, and peer connections; and knowledge and use of campus resources (Barefoot & Koch, 2011; Padgett & Keup, 2011).
The identification and expansion of first-year outcomes to more current definitions of learning and development represent a promising practice in first-year experience programming and assessment. However, progress is hampered by a disconnect between these inclusive outcomes and the measures employed in assessment strategies. Multifarious measures that directly gauge knowledge and skills, such as rubrics, portfolios, and capstone projects, are influential in effectively assessing learning objectives. Too often, program objectives articulate broad learning goals, but the assessment strategy relies upon transactional measures that do not adequately capture progress toward and achievement of student learning and program goals.
Data from the 2009 National Survey of First-Year Seminars, administered by the National Resource Center for The First-Year Experience and Students in Transition, provide an example of this disconnect between learning/program objectives and assessment outcomes (Padgett & Keup, 2011). Eight hundred and fifty-seven institutions provided information about their first-year seminars, which included their most important course objectives. The top objectives were:
Develop academic skills
Develop a connection with the institution
Provide orientation to campus resources and services
Self-exploration/personal development
Create common first-year experience
Develop support network/friendships
However, information collected about the measures and outcomes of assessment activities revealed a very different picture of priorities. Almost three quarters of these survey respondents reported that they measured first-to-second year persistence as an outcome of first-year seminars, but only 15.5% of respondents identified this measure as one of their three most important course objectives. It is also interesting to note that more than 70% of institutions participating in the national survey measured satisfaction with faculty as a seminar outcome, but fewer than 20% prioritized increased student/faculty interaction among their top course objectives. Similarly, there was a 15.1 percentage-point gap between the proportion of survey respondents that measured satisfaction with the institution as a first-year seminar outcome (65.3%) and the percent that reported that developing a connection with the institution was an important course objective (50.2%).
In order to truly inform the relevance and excellence of first-year student success initiatives such as first-year seminars, it is critical that assessment strategies measure the stated objectives of the program. While not conclusive evidence, the statistics generated by the 2009 National Survey of First-Year Seminars suggest institutional overreliance upon easily acquired assessment outcomes, such as retention rates and satisfaction measures, regardless of their alignment with the stated objectives for the seminar or their ability to truly measure student development and learning.
Another key decision in a first-year assessment pertains to the selection of methodology and the resultant type of data. All inquiry, including assessment and research, draws from two methodological approaches: qualitative and quantitative. These assessment perspectives are often discussed as if they represent an “either/or” scenario. However, each approach has its own strengths, is appropriate to the study of first-year students, and can be used in combination as a part of mixed-methodology studies.
As a review of its most basic tenets, qualitative methodology collects data about the meaning of events and activities to the people involved. This assessment approach is concerned with understanding the individual within a particular context, and thus the findings generated from these data are not intended to be generalizable. Instead, qualitative inquiry seeks to “achieve an understanding of how people make sense out of their lives, to delineate the process (rather than the outcome or product) of meaning making, and to describe how people interpret what they experience” (Merriam & Simpson, 2000, p. 97). Most often, the data collected by these means are narrative and commonly gathered via individual interviews, focus groups (i.e., group interviews), written responses to open-ended survey items, document analyses, and journals. However, qualitative data can also be gathered visually via observation of student behavior in classrooms, student life, and performances or captured in portfolios, reflective photography, or other forms of artistic expression. These data are then analyzed to find themes and nonstatistical relationships and patterns. While certainly informed by the assessment outcomes, the process of analysis and data distillation in qualitative assessment is typically inductive in nature and builds meaning and theory from the research procedure itself.
The purpose of quantitative methodology is to describe what is occurring; test relationships between individuals, programs, environments, and outcomes; and determine causality of events and effects upon the outcome of interest. In short, it seeks to answer the questions: (a) what is happening? and (b) what caused it? Quantitative data are numerical and most frequently drawn from surveys and analyses of existing data such as those drawn from the admissions office, campus registrar, utilization statistics, placement testing, advising and counseling offices, and course/program evaluation processes. Because quantitative methodology seeks to identify potentially causal relationships between variables and to test certain conditions and outcomes, these assessment procedures are typically deductive and product-oriented, rather than process-oriented. Quantitative analysis uses statistics to test assumptions, to predict behavior and outcomes, and to determine the impact of programs, experiences, and intentional interventions.
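The following minimal sketch, which is not drawn from this volume, illustrates the kind of quantitative impact analysis described above: a logistic regression estimating the association between first-year seminar participation and first-to-second-year persistence while adjusting for background covariates. The data file, variable names, and choice of covariates are illustrative assumptions only.

```python
# Hypothetical sketch (not from this volume): estimating the impact of
# first-year seminar participation on first-to-second-year persistence,
# adjusting for background covariates that prior research links to persistence.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed student-level file: one row per first-year student, with a 0/1
# persistence flag, a 0/1 seminar participation flag, and entering characteristics.
df = pd.read_csv("first_year_cohort.csv")

# Logistic regression of persistence on seminar participation plus controls
# (high school GPA, first-generation status, Pell eligibility, campus residence).
model = smf.logit(
    "persisted ~ seminar + hs_gpa + first_gen + pell_eligible + lives_on_campus",
    data=df,
).fit()

print(model.summary())

# Odds ratios are usually easier to interpret than raw logit coefficients.
print(np.exp(model.params))
```

In a model of this kind, the coefficient on the participation indicator (here, the hypothetical seminar flag) is interpreted net of the included covariates, which is the role that control and covariate measures play in the predictive modeling emphasized in Chapter 4.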
