Online Student Ratings of Instruction

Description

This volume examines the development and growing use of online student ratings and the potential impact online rating systems will have on the future of students' evaluations of teaching. The contributors demonstrate how the preference for online evaluation is growing, even amidst challenges and doubt. Sharing their first-hand experience as researchers and administrators of online systems, they explore major concerns regarding online student ratings and suggest possible solutions.

D. Lynn Sorenson and Christian M. Reiner review existing online-rating systems that have been developed independently across the globe. Kevin Hoffman presents the results of a national survey that tracks the increased use of the Internet for student ratings of instruction. At Northwestern University, Nedra Hardy demonstrates how ongoing research about online student evaluations is helping to dispel common misperceptions.

Application of online rating systems can present institutions with new challenges and obligations. Trav D. Johnson details a case study based on five years of research on response rates for one university's online evaluation system and suggests strategies to increase student participation. Reviewing online reporting of results of online student ratings, Donna C. Llewellyn explores the emerging issues of security, logistics, and confidentiality.

Other chapters explore existing online systems, highlighting their potential benefits for institution and instructor alike. Beatrice Tucker, Sue Jones, Leon Straker, and Joan Cole analyze Course Evaluation on the Web (CEW), a comprehensive online system for instructional feedback and improvement. Cheryl Davis Bullock reviews the Evaluation Online (EON) system and its successful role in facilitating midcourse student feedback.

The fate of online rating may rest in the unique advantages it may - or may not - have over traditional ratings systems. Debbie E. McGhee and Nana Lowell compare online and paper-based methods through mean ratings, inter-rater reliabilities, and factor structure of items. Comparing systems from another angle, Timothy W. Bothell and Tom Henderson examine the fiscal costs and benefits of implementing an online evaluation system over paper-based systems.

Finally, Christina Ballantyne considers the prominent issues and thought-provoking ideas for the future of online student ratings raised in this volume. Together, the contributors bring insight and understanding to the processes involved in researching and initiating innovations in online-rating systems.

This is the 96th issue of the quarterly journal New Directions for Teaching and Learning.




Contents

Chapter 1: Charting the Uncharted Seas of Online Student Ratings of Instruction

Institutions Using Online Student Ratings

Context

Why Consider an Online Student Ratings System?

Challenges for Online Course Ratings

Organizational Issues and Suggestions

Collaboration and Conclusion

Chapter 2: Online Course Evaluation and Reporting in Higher Education

Method

Survey Results

Conclusion

Chapter 3: Online Ratings: Fact and Fiction

Background

Student Rating Scores

Student Comments

Conclusion

Questions for Further Investigation

Chapter 4: Psychometric Properties of Student Ratings of Instruction in Online and On-Campus Courses

Instructional Assessment System

IAS Online

Method

Results

Discussion

Conclusion

Chapter 5: Online Student Ratings: Will Students Respond?

Response Rates for Online Ratings

Studies of Response Rates at Brigham Young University

Possible Reasons for Increase in Response Rates over Time

Response Rates Under Various Conditions

Response Rate and Length of Student Rating Forms

Students Completing Rating Forms for All Their Courses

Response Rate for Open-Ended Comments

Why Some Students Did Not Respond

Possible Bias When Response Rates Are Low

Strategies to Increase Response Rates

Response Rates for Campuswide Online Student Ratings

Discussion

Conclusion

Chapter 6: Online Reporting of Results for Online Student Ratings

Background

Logistics of Online Reporting

Benefits of Online Reporting

Concerns Related to Online Reporting

Unresolved Issues and Questions

Chapter 7: Do Online Ratings of Instruction Make $ense?

Importance of the Cost Comparisons

Related Research

Student Ratings of Instruction at BYU

Cost Categories

Actual and Estimated Costs

Cost Comparison

Personnel Time Expended in Processing Paper-Based Surveys

Other Issues Beyond Costs

Conclusions

Chapter 8: Course Evaluation on the Web: Facilitating Student and Teacher Reflection to Improve Learning

Traditional Feedback Systems

A Broader View of Evaluation

CEW Feedback Instrument

CEW Process

Closing the Feedback Loop

Benefits of Online Evaluation

Conclusion

Chapter 9: Online Collection of Midterm Student Feedback

Review of the Literature

EON System

Results

Summary and Conclusions

Implications for Future Research

Chapter 10: Online Evaluations of Teaching: An Examination of Current Practice and Considerations for the Future

Background

Online Surveys of Courses at Murdoch University

Issues Related to Online Surveys

The Future

Conclusion

Index

Online Student Ratings of Instruction

D. Lynn Sorenson, Trav D. Johnson (eds.)

New Directions for Teaching and Learning, no. 96

Marilla D. Svinicki, Editor-in-Chief

R. Eugene Rice, Consulting Editor

Copyright © 2003 Wiley Periodicals, Inc., A Wiley Company. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, c/o John Wiley & Sons, Inc., 111 River St., Hoboken, NJ 07030; (201) 748-8789, fax (201) 748-6326, http://www.wiley.com/go/permissions.

Microfilm copies of issues and articles are available in 16mm and 35mm, as well as microfiche in 105mm, through University Microfilms Inc., 300 North Zeeb Road, Ann Arbor, Michigan 48106-1346.

ISSN 0271-0633; electronic ISSN 1536-0768

New Directions for Teaching and Learning is part of The Jossey-Bass Higher and Adult Education Series and is published quarterly by Wiley Subscription Services, Inc., A Wiley Company, at Jossey-Bass, 989 Market Street, San Francisco, California 94103-1741. Periodicals postage paid at San Francisco, California, and at additional mailing offices. Postmaster: Send address changes to New Directions for Teaching and Learning, Jossey-Bass, 989 Market Street, San Francisco, California 94103-1741.

New Directions for Teaching and Learning is indexed in College Student Personnel Abstracts, Contents Pages in Education, and Current Index to Journals in Education (ERIC).

Subscriptions cost $80 for individuals and $160 for institutions, agencies, and libraries. Prices subject to change.

Editorial correspondence should be sent to the editor-in-chief, Marilla D. Svinicki, The Center for Teaching Effectiveness, University of Texas at Austin, Main Building 2200, Austin, TX 78712-1111.

Cover photograph by Richard Blair/Color & Light © 1990.

www.josseybass.com

From the Series Editor

About This Publication. Since 1980, New Directions for Teaching and Learning (NDTL) has brought a unique blend of theory, research, and practice to leaders in postsecondary education. NDTL sourcebooks strive not only for solid substance but also for timeliness, compactness, and accessibility.

The series has four goals: to inform readers about current and future directions in teaching and learning in postsecondary education, to illuminate the context that shapes these new directions, to illustrate these new directions through examples from real settings, and to propose ways in which these new directions can be incorporated into still other settings.

This publication reflects the view that teaching deserves respect as a high form of scholarship. We believe that significant scholarship is conducted not only by researchers who report results of empirical investigations but also by practitioners who share disciplined reflections about teaching. Contributors to NDTL approach questions of teaching and learning as seriously as they approach substantive questions in their own disciplines, and they deal not only with pedagogical issues but also with the intellectual and social context in which these issues arise. Authors deal on the one hand with theory and research and on the other with practice, and they translate from research and theory to practice and back again.

About This Volume. Technology is having an impact on every aspect of higher education, from class management to grade delivery to student-faculty interactions. Each of these uses has its pluses and minuses, its critics and supporters. It is not surprising to find technology being explored as the answer to some of the problems of students’ evaluation of teaching. This issue of NDTL tackles that very question and, as with most examinations of technology, finds that the answers are not simple—but they are encouraging.

Marilla D. Svinicki

Editor-in-Chief

Marilla D. Svinicki is director of the Center for Teaching Effectiveness at the University of Texas at Austin.

Chapter 1

Charting the Uncharted Seas of Online Student Ratings of Instruction

D. Lynn Sorenson, Christian Reiner

Are online student ratings the “wave of the future”? This chapter introduces numerous advantages and challenges of adopting an online system for student evaluation of teaching; in it, the authors preview the research of the other authors of this volume and suggest areas that universities can investigate when determining the desirability of initiating an online ratings system for student evaluation of instruction.

In attempting to “chart uncharted seas,” it is sometimes helpful to look back at earlier journeys that were once uncharted but are now well traveled. Consider that, in the 1970s, it seemed unlikely that word processing would be useful anywhere except in a typing pool. Now it is ubiquitous, and typing pools, as such, have ceased to exist. Then, in the 1980s, when the Internet made its arcane and awkward entrance onto the world’s stage, it appeared to be a fun toy for playful “techies” or, perhaps, a serious communication device for NASA scientists. It seemed unlikely that it would affect much of anything in the real world or in most of academe. Now, time has revealed its irreplaceable value to all of academe, to business, to government, and even to isolated villagers in newly named countries. In a word, the world will never be the same.

Today nearly every function in society can be—and is—performed online: online shopping, online reservations, online chat rooms, online music, online movies, online dating, online counseling, online birthing instruction, and online funeral planning. And, of course, academe has embraced the Web for a myriad of functions: online admissions, online registration, online grades, online libraries, online databases, online research, online teaching, online testing, online conferences, and online universities! Is it such a far reach to imagine the Internet supplanting cumbersome paper systems for the student ratings of instruction in higher education—slowly now at first, and rapidly, even completely, in the future? Will paper ratings go the way of typing pools and slide rules?

The idea of an online student-rating system is a “cutting-edge” proposition (in comparison to a traditional paper-based system). An electronic system can provide nearly instantaneous recording of data, reduced processing time and costs, more accurate data collection and reporting, easy administration, faster completion for students, and longer, more thoughtful student comments. Dozens of colleges and universities have initiated online ratings of instruction for face-to-face classes—usually creating the systems in isolation, as “islands” unto themselves. Often they have been unaware of “neighboring islands” engaged in the same intense work of developing an online rating system. This volume endeavors to initiate communication and exchange among some “early adopters” in the United States and Australia. Who are the early adopters? How many institutions of higher education have implemented online student ratings of instruction?

Institutions Using Online Student Ratings

Until the publication of this volume, the study reported by Hmieleski and Champagne (2000) stood as the only available data on the number of institutions using online student evaluations. At that time, they found a meager 2 percent of the surveyed U.S. institutions reporting the campuswide use of online student ratings of instruction. As might be expected, many more institutions evaluate online courses through the Web now.

Current Survey Research

Kevin M. Hoffman (Chapter Two in this volume) provides more recent data about the pervasiveness of online ratings through 2002. Of the hundreds of campuses he surveyed, 17 percent of the responding institutions “reported using the Internet in some capacity to collect student evaluation data for face-to-face courses.” Another “10 percent indicated that their institutions planned to initiate Internet evaluations of face-to-face courses in 2003.” Still another 18 percent reported that their institutions were “in the process of reviewing Internet options.” In other words, nearly half of the institutions responding to Hoffman’s survey had initiated some degree of online ratings collection or were considering doing so.

Internet Resources

In an informal search of the World Wide Web in the summer of 2003, Susan J. Clark of Brigham Young University found some three dozen university Web sites with information about their institutions’ use of online student ratings to evaluate face-to-face classes, either for entire campuses or for entire divisions, colleges, schools, or departments (see the Appendix at the end of this chapter). An additional twenty-five institutions’ Web sites indicated that their campuses were using online ratings solely for online courses. The number of postsecondary education institutions implementing online student ratings is growing. (For updated information on institutions using online student ratings or to share information about an institution’s use of online student ratings, go to the Web site for Online Student Evaluation of Teaching (OnSET), http://OnSET.byu.edu.)

This volume can serve as a guidebook for travelers exploring these “islands” of online ratings “sprinkled across the globe.” Riding a wave of the future, the authors have braved uncharted seas to research and create systems where the Internet pervades the process of student evaluation of instruction.

Other travelers who wish to explore these islands of innovation must engage in some important preparation before embarking on the journey. That is, they must first contextualize online ratings within the framework of student evaluation of instruction, in general, and then within the even larger context of the teaching-evaluation process in higher education.

Context

Student evaluations of teaching began in the fifties and sixties. Through the years, they have been driven by many factors: accountability, teaching improvement, legal considerations, and budget concerns, to name a few (Ory, 2000). Student ratings of instruction are “arguably the largest single area of research in postsecondary education” (Theall and Franklin, 1990). In 1996, researchers at the University of Michigan estimated that more than two thousand articles about student ratings of instruction had been printed over the previous fifty years (McKeachie and Kaplan, 1996).

This intense scrutiny, research, and publication have continued; for example, New Directions for Teaching and Learning (NDTL) has published three volumes related to the evaluation of teaching within a recent two-year period: Evaluating Teaching in Higher Education: A Vision for the Future (K. E. Ryan, editor, 2000); Fresh Approaches to the Evaluation of Teaching (C. Knapper and P. Cranton, editors, 2001); and Techniques and Strategies for Interpreting Student Evaluations (K. G. Lewis, editor, 2001). An earlier NDTL can serve as an excellent resource: Student Ratings of Instruction: Issues for Improving Practice (M. Theall and J. Franklin, editors, 1990). In addition, New Directions for Institutional Research issued another important resource, The Student Ratings Debate: Are They Valid? How Can We Best Use Them? (M. Theall, P. C. Abrami, and L. A. Mets, editors, 2001). All of these New Directions publications provide excellent resources for academics and administrators to review the important contextual issues of teaching evaluation and improvement (of which online ratings of instruction have become a part).

Michael Theall, respected researcher, practitioner, and author on the evaluation of teaching, has suggested a context for good practice in teaching evaluation (regardless of whether ratings are collected online or on paper). In addition to emphasizing that student ratings are an important part of evaluation, Theall (2002) suggests a number of guidelines for an effective teaching evaluation process and system:

- Establish the purposes of the evaluation and who the users will be.
- Include stakeholders in decisions about evaluation process and policy.
- Keep in mind a balance between individual and institutional needs.
- Publicly present clear information about the evaluation criteria, process, and procedures.
- Be sure to provide resources for improvement and support of teaching and teachers.
- Build a coherent system for evaluation, rather than a piecemeal process.
- Establish clear lines of responsibility and reporting for those who administer the system.
- Invest in the superior evaluation system and evaluate it regularly.
- Use, adapt, or develop instruments suited to institutional and individual needs.
- Use multiple sources of information for evaluation decisions.
- Collect data on ratings and validate the instrument(s) used.
- Produce reports that can be easily and accurately understood.
- Educate the users of rating results to avoid misuse and misinterpretation.
- Keep formative evaluation confidential and separate from summative decision making.
- In summative decisions, compare teachers on the basis of data from similar teaching situations.
- Consider the appropriate use of evaluation data for assessment and other purposes.
- Seek expert outside assistance when necessary or appropriate.

(See the Web site http://www.byu.edu/fc/pages/tchlrnpages/focusnewsletters/Focus_Fall_2002.pdf.)

As the possibility of Web-based ratings has arisen within this context of teaching evaluation, some innovators have sought wider support for new online student evaluation systems. Given that the initiation of an online ratings system is a sizable endeavor—involving seemingly “a cast of thousands” and substantial resources—why would any institution want to sail into this “uncharted sea”?

Why Consider an Online Student Ratings System?

A closer look at some of the possible advantages of an online rating system helps explain why colleges are considering and initiating the use of the Internet as an alternative to the traditional paper-pencil rating medium. Any discussion of the advantages of online course ratings necessarily involves a comparison of online and paper-pencil rating systems, because online ratings usually replace or supplement paper ratings.

Time

An online course-rating system frees up valuable class time because students can complete their ratings outside of class. Teachers are not the only ones who value this advantage; several studies have shown that students, too, tend to perceive saved class time as a benefit (Dommeyer, Baum, and Hanna, 2002; Johnson, 2001; Layne, DeCristoforo, and McGinty, 1999). In Chapter Seven of this volume, Timothy Bothell and Tom Henderson discuss, among other things, the use of class time for student ratings.

The class-time-saving advantages of online ratings come with some possible problems. Some students are concerned that they and their peers may be less likely to complete their course ratings if they must do them outside of class in their free time, rather than doing them in class (Hardy, 2002; Johnson, 2001; Layne, DeCristoforo, and McGinty, 1999). In Chapter Five of this volume, Trav Johnson reports some student concerns and suggestions on this topic.

Besides freeing up valuable class time, online course ratings give students a longer period in which to complete their ratings. When filling out forms in class, students must finish in a few minutes; an online student-rating system extends this time span because ratings are completed outside of class. With more time available, the quantity and quality of students' written responses may increase: comments may be longer and more thoughtful because students can provide feedback when they feel ready to do so and have sufficient time to write all they want to write. Some research has shown that students completing online ratings tend to provide more and longer written comments than students using the traditional paper-pencil process. In Chapter Three of this volume, Nedra Hardy compares students' written comments collected through each medium, and Trav Johnson addresses this issue in Chapter Five.

An online course-rating system also addresses one of the major weaknesses of paper-pencil course ratings: slow turnaround time (that is, the time required for instructors to receive reports of results after students have submitted their ratings). Of the 105 colleges responding to Hmieleski's previously mentioned survey (2000), 65 percent reported that, on average, it takes three weeks to two months before teachers receive the results of their course ratings. An online ratings system can substantially shorten the time to receive ratings reports, thereby enabling teachers to consider and act on student feedback in a more timely manner.

An online system also eases the administration of course ratings considerably. An automated Web-based system saves much of the time spent printing and distributing rating forms, cleaning up the returned forms and running them through a scanner, and distributing the results. Moreover, an online rating system frees up time for department secretaries and others who currently spend hours transcribing handwritten student feedback to ensure students' anonymity.

Flexibility

In some online rating systems, instructors are given the flexibility to adapt and personalize the rating forms. They can easily change or add questions (or both) to elicit feedback according to their individual needs. Of course, most institutions with online rating systems do not allow unlimited “teacher tinkering” with the form. The system has to ensure that the mandated items cannot be changed or eliminated by instructors.
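To make the constraint concrete, here is a minimal sketch, in Python, of how a rating form might be modeled so that mandated items stay locked while instructors append their own questions. It is a hypothetical illustration, not the design of any system described in this volume.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass(frozen=True)
    class RatingItem:
        text: str
        mandated: bool = False  # mandated items cannot be changed or removed

    @dataclass
    class RatingForm:
        items: List[RatingItem] = field(default_factory=list)

        def add_instructor_item(self, text: str) -> None:
            # Instructors may append custom questions to suit their needs.
            self.items.append(RatingItem(text=text, mandated=False))

        def remove_item(self, index: int) -> None:
            # Removal is refused for institutionally mandated items.
            if self.items[index].mandated:
                raise PermissionError("Mandated items cannot be changed or removed.")
            del self.items[index]

    # Example: one locked institutional item plus one custom question.
    form = RatingForm(items=[RatingItem("Overall, this was an effective course.", mandated=True)])
    form.add_instructor_item("Did the weekly labs support the lectures?")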

Another benefit of the online system is the flexibility it provides in accessing reports. In most cases, as long as instructors have access to computers and the Internet, they can look at and print their online rating results at their own convenience.

In addition to having personalized rating forms and easy access to reports, teachers can use an online system to obtain midterm and ongoing feedback from students in addition to the required end-of-course ratings. In a study of an online rating system that allowed for ongoing feedback, students indicated that they liked the availability of such a system even if they did not take advantage of it often; to them, it was good to know that they could give feedback if they desired (Ravelli, 2000). The Curtin University School of Physiotherapy has developed a comprehensive system for online feedback, reported in Chapter Eight of this volume by Beatrice Tucker, Sue Jones, Leon Straker, and Joan Cole. In addition, Cheryl Davis Bullock outlines a midcourse online evaluation system in Chapter Nine of this volume.

As mentioned earlier in the “Time” section of this chapter, an asset of online systems is the flexibility afforded to student respondents when completing their course ratings. Students gain flexibility as to when and where they complete the rating form, provided they have access to a computer and the Internet. Enabling students to complete the form at their own convenience increases the likelihood that responding students will have the time they need to consider their ratings and write all that they want to say in the comments section.

Quantity and Quality of Written Comments

Research indicates that students provide more and longer responses online than they do using a traditional paper-pencil system (Hardy, 2002; Hmieleski and Champagne, 2000; Johnson, 2001; Layne, DeCristoforo, and McGinty, 1999). The greater length and frequency of written responses may be due to students being less rushed in giving feedback, students feeling that typing their responses is easier and faster than writing them, and students believing that their handwriting can no longer be used to identify them (Johnson, 2001; Layne, DeCristoforo, and McGinty, 1999). Students have also reported that online course ratings allow them more time to consider their answers and provide more thoughtful written responses (Johnson, 2001; Ravelli, 2000; see also Hardy, Chapter Three, and Johnson, Chapter Five in this volume, for their studies about students' written comments).

Reporting

Having used online course ratings for several years now, the Georgia Institute of Technology has experienced several benefits from the electronic reporting of course-rating results. Specialized reports are fairly easy to create and make available to all users; reports can be accessed from a personal computer; the rating results are more accessible to a broader group of individuals (for example, researchers); data are more readily available for analysis across different types of classes and different course sections; and perhaps most important, reports are available almost immediately (Llewellyn, 2002; Donna C. Llewellyn amplifies and updates these earlier studies in Chapter Six of this volume).

The crucial difference between Web-based and paper-based evaluation reporting appears to lie in the time it takes to get the data into the system for processing. Once data from a paper-pencil system are entered into an electronic system, the same reporting benefits could be realized as those the Georgia Institute of Technology experienced with online ratings. Still, online course ratings retain an edge over paper-pencil ratings in turnaround time, because they avoid the delay of collecting paper forms and entering their data.

Costs

Online student-rating systems are generally perceived as less expensive than paper-pencil rating systems. Automating the course-rating process eliminates paper costs and reduces the personnel costs of processing rating forms; human involvement in collecting, entering, and reporting course-rating data is minimized. One study suggests that conducting course ratings online leads to savings of 97 percent over the traditional paper-pencil method (see Hmieleski and Champagne, 2000). However, Theall (2000) has questioned the generalizability of this study because it “present[ed] the best case for electronic data processing and the worst case for paper-based systems.” Bothell and Henderson (Chapter Seven in this volume) have undertaken a more rigorous cost study. They found the overall costs for online systems substantially lower than those for paper-based systems.
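The economic logic behind such comparisons is straightforward: paper systems carry substantial per-form costs (printing, scanning, transcription), whereas online systems concentrate their costs in development and hosting. The following sketch illustrates that cost structure with invented placeholder figures; the actual values belong to Bothell and Henderson's analysis in Chapter Seven, not to this example.

    def per_form_cost(fixed: float, variable_per_form: float, n_forms: int) -> float:
        # Average cost per rating form: fixed costs spread over all forms,
        # plus the marginal cost of each individual form.
        return (fixed + variable_per_form * n_forms) / n_forms

    n = 20_000  # hypothetical number of rating forms processed per term

    # Paper: modest fixed costs, but printing, scanning, and transcription per form.
    paper = per_form_cost(fixed=15_000.0, variable_per_form=0.55, n_forms=n)

    # Online: higher fixed development and hosting costs, near-zero marginal cost.
    online = per_form_cost(fixed=25_000.0, variable_per_form=0.02, n_forms=n)

    print(f"paper: ${paper:.2f}/form, online: ${online:.2f}/form")

At these hypothetical volumes the online system is already cheaper per form, and its advantage grows with enrollment because its marginal cost per form is close to zero.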

Challenges for Online Course Ratings

Online student evaluations of teaching present a number of challenges. Some difficulties are overstated during early preconception (or misperception) stages; others are unforeseen until the implementation (or maintenance) stages. This section outlines some of the common challenges of online student-rating systems.

Response Rates

Response rates are one of the most frequently raised issues in discussions of online student ratings of instruction; they are also becoming the area most often studied (for example, Cummings, Ballantyne, and Fowler, 2001; Dommeyer, Baum, and Hanna, 2002; Hmieleski, 2000; Johnson, 2002; Hardy, 2002; McGourty, Scoles, and Thorpe, 2002a). Some Web-based ratings have yielded lower response rates than paper-based systems. Researchers have suggested possible explanations for the lower response rates: perceived lack of anonymity of responses, lack of compulsion to complete ratings online, student apathy, inconvenience, technical problems, and required time for completing the ratings (Ballantyne, 2000; Dommeyer, Baum, and Hanna, 2002).

Several studies have shown that it is possible to raise response rates, even to 80 percent and higher (Cummings, Ballantyne, and Fowler, 2001; Goodman and Campbell, 1999; Ha and Marsh, n.d.; Hardy, 2002; Hmieleski, 2000; Johnson, 2002; McGourty, Scoles, and Thorpe, 2002a). In Chapter Ten of this volume, Christina Ballantyne elaborates on these issues; see also Hardy, Chapter Three, and Johnson, Chapter Five.

Response Biases

Some faculty are also concerned about response bias, which they perceive as linked to response rates. They wonder to what degree the group of responding students is representative of the whole class and to what degree the results are generalizable. For example, some studies have shown that students with higher grade-point averages (GPAs) tend to be more likely to complete online student ratings than students with lower GPAs (Layne, DeCristoforo, and McGinty, 1999; McGourty, Scoles, and Thorpe, 2002a). Researchers have also studied a number of other possible rating biases, with mixed or inconclusive results: gender biases (Dommeyer, Baum, and Hanna, 2002; Layne, DeCristoforo, and McGinty, 1999); year-in-school biases (Layne, DeCristoforo, and McGinty, 1999; McGourty, Scoles, and Thorpe, 2002a); and department, discipline, or course biases (Goodman and Campbell, 1999; Layne, DeCristoforo, and McGinty, 1999; Thorpe, 2002). More research on response bias, especially in regard to online student ratings of instruction, is needed to determine if, how, and to what degree online student ratings may favor responses from certain groups of students.
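One simple way to probe the representativeness question is to compare a background variable, such as GPA, between respondents and nonrespondents. The sketch below uses SciPy's two-sample t-test on invented illustrative data; a real analysis would draw on actual course records and more careful methods.

    from scipy import stats  # SciPy's two-sample t-test

    respondent_gpas = [3.6, 3.2, 3.8, 3.9, 3.4, 3.7, 3.5]  # hypothetical data
    nonrespondent_gpas = [2.9, 3.1, 2.7, 3.3, 3.0, 2.8]    # hypothetical data

    t_stat, p_value = stats.ttest_ind(respondent_gpas, nonrespondent_gpas)
    if p_value < 0.05:
        print(f"Respondents differ in mean GPA (p = {p_value:.3f}); "
              "ratings may over-represent higher-GPA students.")
    else:
        print(f"No significant GPA difference detected (p = {p_value:.3f}).")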