Methods in Educational Research - Marguerite G. Lodico - E-Book


Marguerite G. Lodico

Description

Methods in Educational Research

Methods in Educational Research is designed to prepare students for the real world of educational research. It focuses on scientifically based methods, school accountability, and the professional demands of the twenty-first century, empowering researchers to take an active role in conducting research in their classrooms, districts, and the greater educational community. Like the first edition, this edition helps students, educators, and researchers develop a broad and deep understanding of research methodologies. It includes substantial new content on the impact of No Child Left Behind legislation, school reform, quantitative and qualitative methodologies, logic modeling, action research, and other areas. Special features to assist the teaching and learning processes include vignettes illustrating research tied to practice, suggested readings at the end of each chapter, and discussion questions to reinforce chapter content.

Praise for the Previous Edition

"A new attempt to make this subject more relevant and appealing to students. Most striking is how useful this book is because it is really grounded in educational research. It is very well written and quite relevant for educational researchers or for the student hoping to become one." -PsycCRITIQUES/American Psychological Association

"I applaud the authors for their attempt to cover a wide range of material. The straightforward language of the book helps make the material understandable for readers." -Journal of MultiDisciplinary Evaluation


Page count: 825

Publication year: 2010




Table of Contents
Title Page
Copyright Page
Table of Exhibits
Table of Figures
List of Tables
Dedication
PREFACE
Acknowledgments
THE AUTHORS
CHAPTER ONE - INTRODUCTION TO EDUCATIONAL RESEARCH
EDUCATIONAL ACCOUNTABILITY AND EDUCATIONAL RESEARCH
CONDUCTING EDUCATIONAL RESEARCH
PHILOSOPHICAL FRAMEWORKS FOR EDUCATIONAL RESEARCH
RESEARCH ETHICS
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SUGGESTED READINGS
CHAPTER TWO - TYPES OF EDUCATIONAL RESEARCH DESIGNS AND RELATED MAJOR CONCEPTS
TYPES OF APPROACHES USED IN EDUCATIONAL RESEARCH
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SUGGESTED READINGS
CHAPTER THREE - DESCRIPTIVE STATISTICS
CHARACTERISTICS OF DATA
SUMMARIZING DATA USING DESCRIPTIVE STATISTICS
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SUGGESTED READINGS
CHAPTER FOUR - EDUCATIONAL MEASUREMENT
MEASUREMENT IN EDUCATION
EVALUATING THE QUALITY OF STANDARDIZED INSTRUMENTS: RELIABILITY AND VALIDITY
ISSUES IN FINDING AND USING STANDARDIZED INSTRUMENTS
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SUGGESTED READINGS
CHAPTER FIVE - QUALITATIVE MEASURES AND PROCEDURES
CHARACTERISTICS OF QUALITATIVE MEASUREMENT
SAMPLING IN QUALITATIVE RESEARCH
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SUGGESTED READINGS
CHAPTER SIX - QUALITATIVE RESEARCH
RESEARCH VIGNETTE
UNDERSTANDING QUALITATIVE RESEARCH
STEPS IN DESIGNING QUALITATIVE RESEARCH
EVALUATING NARRATIVE INQUIRY AND PHENOMENOLOGICAL RESEARCH
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SAMPLES OF QUALITATIVE RESEARCH STUDIES
SUGGESTED READINGS
CHAPTER SEVEN - ORGANIZATION AND ANALYSIS OF QUALITATIVE DATA
ANALYSIS OF QUALITATIVE DATA
STEPS IN ANALYZING QUALITATIVE DATA
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SUGGESTED READINGS
CHAPTER EIGHT - DESCRIPTIVE SURVEY RESEARCH
RESEARCH VIGNETTE
CHARACTERISTICS OF DESCRIPTIVE SURVEY RESEARCH
STEPS IN CONDUCTING DESCRIPTIVE SURVEY RESEARCH
EVALUATING DESCRIPTIVE SURVEY RESEARCH
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SAMPLES OF DESCRIPTIVE SURVEY STUDIES
SUGGESTED READINGS
CHAPTER NINE - EXPERIMENTAL RESEARCH
RESEARCH VIGNETTE
UNDERSTANDING EXPERIMENTAL RESEARCH
STEPS IN PLANNING AND CONDUCTING EXPERIMENTAL RESEARCH
THREATS TO EXPERIMENTAL VALIDITY
SINGLE-SUBJECT RESEARCH DESIGNS
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SAMPLE GROUP EXPERIMENTAL STUDY
SUGGESTED READINGS
CHAPTER TEN - NONEXPERIMENTAL APPROACHES
RESEARCH VIGNETTE
CAUSAL-COMPARATIVE RESEARCH
CORRELATIONAL RESEARCH
MULTIPLE REGRESSION STUDIES
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SAMPLE CAUSAL-COMPARATIVE STUDY
SAMPLE CORRELATIONAL STUDIES
SUGGESTED READINGS
CHAPTER ELEVEN - INFERENTIAL STATISTICS
BEYOND DESCRIPTIVE STATISTICS: INFERENTIAL STATISTICS
STEPS IN ANALYZING DATA USING INFERENTIAL TESTS
DESIGNS WITH MORE THAN ONE INDEPENDENT OR DEPENDENT VARIABLE
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SUGGESTED READINGS
CHAPTER TWELVE - ACTION RESEARCH
RESEARCH VIGNETTE
UNDERSTANDING ACTION RESEARCH
STEPS IN CONDUCTING ACTION RESEARCH
DATA SOURCES FOR ACTION RESEARCH
EVALUATION OF ACTION RESEARCH
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SAMPLE ACTION RESEARCH STUDIES
SUGGESTED READINGS
CHAPTER THIRTEEN - PROGRAM EVALUATION IN EDUCATION
RESEARCH VIGNETTE
WHAT IS PROGRAM EVALUATION?
TYPES, APPROACHES, AND MODELS OF PROGRAM EVALUATION
STEPS IN DESIGNING PROGRAM EVALUATION
PROGRAM EVALUATION DATA AND CRITIQUING EVALUATION REPORTS
TRAINING AND CAREERS IN PROGRAM EVALUATION
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS AND ACTIVITIES
SAMPLE PROGRAM EVALUATIONS
SUGGESTED READINGS
CHAPTER FOURTEEN - IDENTIFYING AND RESEARCHING A TOPIC
GETTING STARTED
IDENTIFY A RESEARCH TOPIC
REFINE YOUR TOPIC AS YOU SEARCH
SEARCH THE LITERATURE
IDENTIFY AND SUMMARIZE KEY INFORMATION FROM ARTICLES
WHEN DO I HAVE ENOUGH?
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SUGGESTED READING
CHAPTER FIFTEEN - THE RESEARCH PROPOSAL
PREPARING A RESEARCH PROPOSAL
SUMMARY
KEY CONCEPTS
DISCUSSION QUESTIONS OR ACTIVITIES
SUGGESTED READINGS
REFERENCES
APPENDIX A
APPENDIX B
INDEX
Table of Figures
FIGURE 3.1 Sample Frequency Polygon
FIGURE 3.2 Sample Histogram
FIGURE 3.3 Examples of Positively Skewed and Negatively Skewed Distributions
FIGURE 3.4 Grades in Distance Education Classes
FIGURE 3.5 Distributions of Test Scores for Two School Districts
FIGURE 3.6 Normal Curve
FIGURE 3.7 “Box-and-Whisker” Graphs
FIGURE 3.8 Relationship Between High-Jump and Long-Jump Scores
FIGURE 3.9 Scatterplot of Children’s Weight and Amount of Physical Exercise
FIGURE 3.10 Scatterplot with Prediction Line
FIGURE 3.11 Scatterplot Representations of Correlation Coefficients of Different Sizes
FIGURE 3.12 Scatterplot of Relationship Between Achievement Test Scores and Test Anxiety
FIGURE 3.13 Relationship Between Achievement and Self-Esteem
FIGURE 4.1 Normal Curve with Test Scores
FIGURE 4.2 Overview of the Process in Developing Standardized Instruments
FIGURE 7.1 Steps Involved in Analyzing Qualitative Data
FIGURE 9.1 Random Selection and Random Assignment
FIGURE 9.2 Number of Words Spoken by a Child with Autism During Baseline Period
FIGURE 9.3 Sample A-B-A Design
FIGURE 9.4 A-B-A-B Design
FIGURE 9.5 Sample Multiple-Baseline Design
FIGURE 9.6 Group Experimental and Single-Subject Research Add to Professional Knowledge and Decision Making
FIGURE 10.1 Television Watching and Body Weight
FIGURE 10.2 Predicted Correlations Among Four Variables
FIGURE 10.3 Scatterplot Showing Restriction of Range in Amount of Physical Exercise
FIGURE 10.4 Correlation Matrix for Study of Bullying
FIGURE 10.5 Example of Intervening Variable
FIGURE 11.1 A Distribution of 100 Scores
FIGURE 11.2 Normal Curve
FIGURE 11.3 Graph Showing Interaction of Gender and Computer Use
FIGURE 12.1 Action Research Process
FIGURE 12.2 Rating Scale Using Nonverbal Responses
FIGURE 12.3 Photographs as Data Sources in Action Research
FIGURE 12.4 Use of Concept Map to Depict Relationships Among Themes
FIGURE 12.5 Use of a Histogram to Display Results from a Parent Survey
FIGURE 13.1 The Summative and Formative Process in Program Evaluation
FIGURE 13.2 Overview of a Logic Model
FIGURE 14.1 Ways to Generate a Research Topic
FIGURE 14.2 Development of a Research Question
FIGURE 15.1 How Research Proposals Aid the Development of Research
List of Tables
TABLE 1.1 Sample Value-Added Data for Two School Districts
TABLE 1.2 Qualitative and Quantitative Approaches: The Scientific Process
TABLE 1.3 Frameworks and Assumptions Underlying Educational Research
TABLE 3.1 Scales of Measurement
TABLE 3.2 Frequency Table of Number of Books Read by 30 Students
TABLE 3.3 Sample Grouped Frequency Table
TABLE 3.4 Calculation of Mean from a Frequency Table
TABLE 3.5 Means and Standard Deviations for the Number of Words Written
TABLE 3.6 Sample Table Showing Relationship Between Two Variables
TABLE 3.7 Student Math and ELA Scores
TABLE 3.8 Correlations Among Five Variables
TABLE 5.1 Characteristics of Qualitative Measurement
TABLE 5.2 Advantages and Disadvantages of Surveys Versus Interviews
TABLE 5.3 Types of Interview Questions and Probes
TABLE 5.4 Summary of Purposeful Sampling Strategies
TABLE 6.1 Types of Narrative Inquiry
TABLE 6.2 Theoretical Assumptions Underlying Narrative Inquiry
TABLE 6.3 Theoretical Assumptions Underlying Phenomenological Research
TABLE 6.4 Types of Ethnographic Studies
TABLE 6.5 Steps in Conducting Qualitative Research
TABLE 6.6 Criteria for Evaluating Qualitative Research Studies
TABLE 7.1 Common Code Categories and Examples of Code Names from a Life Story of a Latino Youth
TABLE 7.2 Report Formats for Presenting Findings from Qualitative Research
TABLE 8.1 Types of Descriptive Survey Designs
TABLE 8.2 Simple Random Selection
TABLE 8.3 Criteria for Evaluating Descriptive Survey Research
TABLE 9.1 Experimental Research
TABLE 9.2 Hypotheses in Experimental Research
TABLE 9.3 Factorial Design
TABLE 9.4 Math Scores from Johanna’s Research Study
TABLE 9.5 Threats to Internal Validity
TABLE 9.6 Threats to External Validity
TABLE 10.1 Common Statistical Tests for Examining Relationships Between Variables
TABLE 10.2 Practical Interpretations of Correlation Coefficients
TABLE 10.3 Multiple Regression Analysis for Study of Reading
TABLE 11.1 Commonly Used Inferential Statistical Tests
TABLE 12.1 Action Research Pioneers
TABLE 12.2 Benefits of Action Research to Practitioners and the Field of Education
TABLE 12.3 A Triangulation Matrix
TABLE 12.4 Question for Identifying Patterns in an Action Research Study
TABLE 13.1 Components of a Logic Model
TABLE 13.2 Example of an Evaluation Matrix
TABLE 13.3 Data from School Records for Fifth-Grade Students in Math
TABLE 14.1 Commonly Used Databases in Education
TABLE 14.2 General and Academically Focused Search Engines and Their Web Sites
TABLE 15.1 Sample Design
TABLE 1. Subquestions and Methods of Data Collection
Table of Exhibits
EXHIBIT 3.1
EXHIBIT 5.1
EXHIBIT 5.2
EXHIBIT 5.3
EXHIBIT 8.1
EXHIBIT 8.2
EXHIBIT 8.3
EXHIBIT 8.4
EXHIBIT 11.1
EXHIBIT 11.2
EXHIBIT 11.3
EXHIBIT 12.1
EXHIBIT 12.2
EXHIBIT 12.3
EXHIBIT 12.4
EXHIBIT 12.5
EXHIBIT 12.6
EXHIBIT 12.7
EXHIBIT 12.8
EXHIBIT 12.9
EXHIBIT 12.10
EXHIBIT 12.11
EXHIBIT 13.1
EXHIBIT 14.1
EXHIBIT 14.2
EXHIBIT 15.1
EXHIBIT 15.2
EXHIBIT 15.3
EXHIBIT 15.4
EXHIBIT 15.5
EXHIBIT 15.6
Copyright © 2010 by John Wiley & Sons, Inc. All rights reserved.
Published by Jossey-Bass, A Wiley Imprint, 989 Market Street, San Francisco, CA 94103-1741 (www.josseybass.com)
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the Web at www.copyright.com. Requests to the publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online at www.wiley.com/go/permissions.
Readers should be aware that Internet Web sites offered as citations and/or sources for further information may have changed or disappeared between the time this was written and when it is read.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Jossey-Bass books and products are available through most bookstores. To contact Jossey-Bass directly call our Customer Care Department within the U.S. at 800-956-7739, outside the U.S. at 317-572-3986, or fax 317-572-4002.
Jossey-Bass also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
Library of Congress Cataloging-in-Publication Data
Lodico, Marguerite G.
p. cm.
Includes bibliographical references and index.
eISBN : 978-0-470-58869-7
1. Education—Research—Methodology. I. Spaulding, Dean T. II. Voegtle, Katherine H. III. Title.
LB1028.L586 2010
370.72—dc22
2009051930
PB Printing
In loving memory of our colleagues Huey Bogan and Mark Ylvisaker. Huey inspired his students in Teacher Education to be reflective practitioners who continually strive to improve their practice. Mark exemplified the integration of research and clinical practice in his numerous publications and inspired his students in Communication Sciences and Disorders to become scholar practitioners. We miss them both dearly.
PREFACE
Four years ago we wrote the first edition of Methods in Educational Research: From Theory to Practice with many expectations. As educational psychologists, we wanted to write a textbook from the fundamental perspective of how one learns in general and, more specifically, how one learns through conducting research. In addition, we wanted to create a book that would both reflect the techniques and instructional practices underlying good teaching and teach readers about educational research. We wanted to pay close attention to the metacognitive processes associated with learning about research and with becoming an active participant in the educational research community.
We believe the purpose of this second edition is still to assist students, primarily graduate students who are practitioners in education or related fields (administration, school psychology, or school counseling), in using educational research so that they can become more effective educators. Specifically, the purpose of this book is to help students develop a broad and deep understanding of research methodologies that can be used to analyze and improve their practices. Overall, we believe that we accomplished much of what we set out to do with the first edition of this book; however, in using it for the past three years, we realized that some areas needed to be expanded. This second edition expands on previously discussed areas and adds several new chapters. We hope you enjoy the additions and wish you the best in your educational research pursuits.
In chapter 1, we have updated information on No Child Left Behind (NCLB) and school reform as well as on the knowledge and research skills needed by educators in the 21st century. Chapter 2 has been expanded to provide more in-depth coverage of key concepts that we believe students need in order to read and understand the research on a particular topic or issue. Chapter 3 focuses specifically on descriptive statistics and provides students with detailed examples of basic statistical computations and how results are displayed. Chapter 4 has an expanded section on archival data; the material on descriptive statistics has been moved to chapter 3. Chapters 5 through 7 have been expanded, focusing on the different quantitative and qualitative types of research, with specific data exercises embedded in each chapter to give students a realistic framework for the types of data and possible analyses they may use when conducting research. Sampling strategies for descriptive survey research are now embedded in chapter 8, and the types of qualitative research covered have been narrowed to those most relevant to practitioners. Chapters 9 through 11 are similar to those in the original edition. Perhaps the biggest change we have made to our book is chapter 12, which focuses entirely on action research. The chapter includes a data activity to give students a sense of the action research process. Although the action research chapter is placed late in the book, there are no concepts in it that preclude coverage earlier in the semester. Chapter 13, program evaluation, features a section on logic modeling. Finally, chapters 14 and 15 focus on generating ideas and researchable topics and on preparing the research proposal. We moved these chapters to the end of the book to give instructors more flexibility in when they are assigned.
The book includes special features designed to assist the teaching and learning processes:
• Research vignettes, illustrating research that is tied to practice and used to make decisions about educational practices, open each chapter on research approaches and are discussed throughout those chapters.
• The book includes extensive discussion of research issues and concepts relevant to the accountability movement and using data to make decisions in educational settings.
• Developmental processes involved in researching and writing a research proposal are emphasized.
• Research proposals using both an action and a descriptive survey approach are included in appendices, because we feel these approaches are useful to practitioners. The appendices also include criteria for evaluating proposals using these approaches.
• Key concepts students should know are set in bold letters in each chapter.
• Suggested readings are provided at the end of each chapter to extend the discussion of general issues raised in the chapter and provide citations for sample studies that illustrate the type of research discussed.
• Discussion questions or activities are provided to stimulate thinking about the issues raised in the chapter or encourage students to apply the concepts presented.
• We did not include sample studies in the book; however, each chapter discussing a specific research approach (for example, descriptive survey, action, experimental) includes a list of studies that may be used for class discussion or assignments. Since many studies are available as full-text documents, we decided to decrease our carbon footprint by encouraging students to access these online.
ACKNOWLEDGMENTS
Any textbook on educational research owes a debt to the numerous people who have built the rich and varied literature in this field. In some sense, this book grew out of the conversations and relationships we have enjoyed with colleagues over many years, especially at meetings of the American Educational Research Association and the American Evaluation Association. Although we cannot name all of these persons, we certainly could not have begun to think about this book without the stimulation of many people in these vibrant educational communities.
However, many people closer to home also made this book possible. The College of Saint Rose and especially our dean, Margaret M. Kirwin, provided substantial support for our work by granting us sabbatical leaves and making available to us capable graduate assistants. The revisions of the book were informed by feedback from faculty members who were using it and from students in our classes. Members of our department consistently encouraged us in our writing, and our department chair, Richard Brody, always managed to get people to cover courses as needed each semester. Our colleagues who are practicing educational researchers—James Allen, Aviva Bower, Donna Burns, David DeBonis, Ron Dugan, Margaret McLane, Heta-Maria Miller, Travis Plowman, and Ismael Ramos—each contributed his or her own special expertise and pedagogical ideas to the book.
Moira DeSanta, our graduate assistant, carefully read and edited chapters from our book. We are also certainly grateful to the staff at Jossey-Bass, including Kelsey McGee, Andy Pasternack, and Seth Schwartz, for their continued support.
We also thank the students from our educational research classes who patiently read through often-imperfect drafts of the book, providing feedback and suggestions. Many of them allowed us to include samples of their work in this book to help us fulfill our goal of making courses on educational research more comprehensible, relevant, and useful to future generations of preservice educators. In particular, we thank Robert Dexter and Megan Rempe, whose research proposals are included as Appendix A and Appendix B in our book.
Finally, on a personal level, we thank our partners, Phil Lodico, Evan Seiden, and Jim Fahey, who kept us sane, well fed, and entertained throughout the often hectic job of revising this book.
M.G.L., D.T.S., and K.H.V.
THE AUTHORS
Marguerite G. Lodico received her EdD from the University of Houston and is a professor at the College of Saint Rose, where she teaches child development and educational research. In addition to her teaching responsibilities, she has served as chair of the Educational and School Psychology Department, as interim dean, and as director of an after-school mentoring program. She has conducted research on students in urban environments and school-to-college collaborations. She is currently involved in a professional development school initiative with a local elementary school. She is the coauthor of the casebook Child and Adolescent Life Stories: Perspectives from Youth, Parents, and Teachers, published in 2004.
Dean T. Spaulding received his PhD from State University of New York at Albany in 2001 and is currently an associate professor at the College of Saint Rose where he teaches educational research and program evaluation. His research and program evaluation work has focused on technology, after-school and enrichment programs, educational leadership, and environmental and science education in K-12 settings and in higher education. He is the author of Program Evaluation in Practice: Core Concepts and Examples for Discussion and Analysis published in 2008.
Katherine H. Voegtle received her PhD from the University of Cincinnati and is currently a professor at the College of Saint Rose, where she teaches courses in human development, educational research, and educational psychology. She is coauthor of Child and Adolescent Life Stories: Perspectives of Youth, Parents and Teachers and has conducted qualitative and quantitative research projects on creative language development, school-based ally groups, mentoring programs, and arts-based educational programs. She is currently working on several action research projects that are part of her college’s ongoing efforts to build a professional development school.
CHAPTER ONE
INTRODUCTION TO EDUCATIONAL RESEARCH
CHAPTER OBJECTIVES
• Become familiar with the recent history of the educational accountability movement and describe the role of research in accountability
• Understand the role of action research in improving teaching and learning
• Explain value-added assessment
• Describe key aspects of the No Child Left Behind Act
• Explain the differences between inductive and deductive reasoning
• Articulate the key differences between knowledge-oriented philosophical frameworks for educational research (scientific realism and social constructivism) and action-oriented approaches (advocacy or liberatory and pragmatism) and begin to define your own framework
• Explain the differences among and provide a simple example of quantitative and qualitative methods of data collection and basic and applied educational research
• Understand the essentials of research ethics and how ethics apply to research questions and methodology

EDUCATIONAL ACCOUNTABILITY AND EDUCATIONAL RESEARCH

At the beginning of the 21st century, the educational research community is again responding to the call for increased accountability in our nation’s schools. This call for accountability comes from both within and outside the educational community. Educators, parents, students, communities, and politicians are hopeful that the new accountability will result in increased achievement for America’s students.
BOX 1.1
Educational Reform and the No Child Left Behind Act
In 1965, the Elementary and Secondary Education Act (ESEA) was passed by the U.S. Congress to achieve three major goals. These goals included the desire to improve Scholastic Aptitude Test (SAT) scores, increase academic proficiency, and close the achievement gap that separated students of color and low-income students from White and more affluent students (Nichols & Berliner, 2007). ESEA provided funding to schools (labeled “Title I” schools) with high poverty levels and large numbers of students of color. In 1983, eighteen years after the passage of ESEA, the National Commission on Excellence in Education published a report entitled A Nation at Risk: The Imperative for Educational Reform. Troubling to all, the report stated that ESEA had failed to achieve its goals and that academic proficiency of U.S. students remained low. A Nation at Risk called for additional reforms to increase parental and community involvement, improve achievement, enhance the quality of teachers, and close the achievement gap. While A Nation at Risk drew attention from educators, parents, and legislators, it resulted in little change or reform. It was not until 1994, under the administration of President Bill Clinton, that serious educational reform came under increased scrutiny. This occurred with another reauthorization of ESEA entitled Goals 2000, which focused greater attention on school accountability. As part of this legislation, schools that developed annual testing practices received financial incentives.
Goals 2000 provided a skeletal foundation for the next iteration of ESEA, the No Child Left Behind Act (NCLB). NCLB was passed by Congress in 2001 and signed into law by President George W. Bush. The rationale, in part, was that in spite of spending more than $300 billion since 1965 to educate youth from low-income families, only 32% of fourth graders could read at grade level, and most of those who could not read were ethnic minorities (U.S. Department of Education, 2005b). Believing that the money spent was not improving education, lawmakers designed NCLB to increase the accountability of individual schools and states and ultimately to reform education.
The legislation significantly increased the role of the federal government in education and set into place regulations that reached into nearly all public schools in this country. In short, the legislation requires (U.S. Department of Education, 2005a):
1. Annual testing. By the 2005-2006 school year, states were required to test reading and math annually in Grades 3-8. By 2007-2008, states were required to develop tests to measure science achievement at least once in elementary school, middle school, and high school. All tests must be aligned with state standards and be reliable and valid measures. Additionally, a sample of the fourth and eighth grades must participate in the National Assessment of Educational Progress testing program every other year in the content areas of reading and math.
2. Academic progress. States are responsible for bringing all students up to a level of proficiency by the 2013-2014 academic school year. Each year, every school must demonstrate adequate yearly progress (AYP) toward this goal. If a school fails to meet this goal for two years in a row and receives Title I funding (federal dollars), the state must provide technical assistance and families must be allowed a choice of other public schools (assuming there is available space and that the other schools are making adequate progress). If a school fails to meet the defined level of proficiency for three years in a row, it must offer students supplemental educational services, which could include tutoring.
3. Report cards. All states must prepare individual school report cards on all schools. These report cards must be made public and must demonstrate progress in reaching the state standards.
4. Teaching quality. Currently, the federal government provides money to states and school districts to improve the quality of their teaching forces. Under the NCLB legislation, the federal government has indicated that it will provide greater flexibility in the spending of that federal money.
5. Reading First. NCLB offers competitive grants called Reading First that will help states and school districts set up scientific and reliable research-based reading programs for children in kindergarten through Grade 3. School districts in high-poverty areas will be given priority for these grants.
According to the U.S. Department of Education (2005a), the key characteristics of reliable research are
1. A study that uses the scientific method, which includes a research hypothesis, a treatment group, and a control group
2. A study that can be replicated and generalized
3. A study that meets rigorous standards in design, methods used, and interpretation of the results
4. A study that produces convergent findings; that is, findings that are consistent across various approaches
These guidelines have significant implications for the way research is conducted in education. Specifically, the legislation calls for researchers to conduct studies with scientific rigor. According to Neuman (2002), NCLB’s definition of scientific rigor is consistent with randomized experimental designs—study designs in which persons are randomly assigned to groups that are treated differently. Randomized studies are one approach for establishing causality but may not be appropriate for all research questions. Nearly everyone agrees that research studies should be rigorous and scientific. However, the narrow definition of scientific rigor as randomized experimental studies has the potential for greatly limiting the scope of educational research. Furthermore, according to Davies (2003), “Devoting singular attention to one tool of scientific research jeopardizes inquiry efforts into a range of problems best addressed by other scientific methods” (pp. 4-5).
A school’s failure to meet its AYP has serious consequences; these consequences become more severe the longer it takes schools to reach their defined benchmarks. (Benchmarks are predetermined performance levels set by state or federal officials.) For example, a school that fails to make its AYP two years in a row is labeled as a school in need of improvement or a SINI school. The SINI school must then develop an improvement plan that describes the necessary changes that will result in meeting its AYP. SINI schools must offer public choice to their students, allowing transfer in-district to a school in good standing or to a nearby charter school (charter schools are public schools funded with tax dollars that permit some flexibility regarding some state education regulations). Schools that fail to make AYP for three years must provide and pay for supplemental educational services for eligible students. This often includes tutoring services offered by approved providers. Those SINI schools that continue to fail to meet AYP for four consecutive years must take “corrective action” in addition to the sanctions noted. This action could include replacing administrative staff, hiring outside consultants to run the school, implementing a new curriculum, and extending the school year, to name a few. Schools that fail to make AYP for five consecutive years must develop a restructuring plan that may result in a state takeover or new governance for the school.
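Because the sanctions are cumulative, the timeline above can be read as a simple tier lookup keyed by consecutive years of missed AYP. The following Python sketch is purely illustrative: the tier descriptions paraphrase this section rather than the statute itself, and the names (`SANCTIONS`, `consequences`) are our own, not part of any official system.

```python
# Illustrative sketch of the escalating sanctions for a Title I school that
# misses Adequate Yearly Progress (AYP) in consecutive years. Tier texts
# paraphrase the discussion above, not the NCLB statute; names are hypothetical.

SANCTIONS = {
    2: "Labeled a school in need of improvement (SINI); must develop an "
       "improvement plan and offer public school choice.",
    3: "Must provide and pay for supplemental educational services, "
       "such as tutoring by approved providers.",
    4: "Must take 'corrective action': e.g., replace administrative staff, "
       "hire outside consultants, adopt a new curriculum, extend the year.",
    5: "Must develop a restructuring plan; possible state takeover or new "
       "governance for the school.",
}

def consequences(years_missed: int) -> list[str]:
    """Return every sanction tier in effect after `years_missed`
    consecutive years of failing to make AYP (tiers are cumulative)."""
    return [text for year, text in sorted(SANCTIONS.items())
            if years_missed >= year]

# One missed year triggers no sanctions; three missed years trigger two tiers.
assert consequences(1) == []
assert len(consequences(3)) == 2
```

The point of the sketch is simply that each new year of failure adds a tier while all earlier tiers remain in force, which is why the consequences "become more severe" over time.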
Accountability and educational reform are by no means new in education (see Box 1.1). The newest accountability legislation, No Child Left Behind (NCLB), holds schools accountable for monitoring and reporting student progress based on test scores. Monies for schools are made available for programs that are scientific and reliable (see Box 1.1, number 5), although the federal government’s definition of scientific research is very narrow (Neuman, 2002).
NCLB requirements and other accountability measures make knowledge of educational research an essential component of professional preparation for all educators. However, to promote creative, innovative, yet sound solutions to current educational problems, future educators must become knowledgeable about a multitude of research approaches that reach beyond those techniques defined as reliable under the NCLB legislation. It is our hope that this book will enable you to participate in ongoing debates about the status and future of education at both the national and local levels. We also hope that you will develop skills and knowledge to take part in a much longer and broader tradition: using scientific research to identify, develop, and assess effective educational practices. Furthermore, this knowledge will better enable you to make informed decisions based on data and evidence collected in your practice, an approach often referred to as evidence-based practice.
It is our belief that practitioners can have a major role in influencing positive change in their classrooms, schools, and districts if they actively engage in the research process. This does not mean that practitioners must become involved in large-scale research projects. We are all aware that teachers and other educational professionals have very heavy workloads. In spite of this, many practitioners currently conduct small-scale research projects to evaluate their own practices. This type of research is often referred to as action research or practitioner research, which is discussed in depth in Chapter 12. Briefly, action research (see Box 1.2 for an example) is research conducted by a practitioner in order to improve teaching and learning. Action research is conducted by teachers, counselors, school psychologists, speech-language pathologists, administrators, or any educational professionals looking to improve their practice. It is often done in a collaborative environment in which practitioners engage in a cycle of reflection and action to gain knowledge about ways to improve their practices. More specifically, action research provides practitioners with a process that involves reflection or assessment of needs, systematic inquiry, collection and analysis of data, and informed decision making.
Action researchers strive to find solutions that can bring immediate change and facilitate improvement in student learning. One might ask why practitioners are increasingly involved in action research. The answer is quite simple. Schools and school districts are involving practitioners to a greater degree in the operation of schools, and practitioners are being held accountable for student learning. These factors have increased the level of participation of the practitioner beyond his or her traditional responsibilities. Practitioners are assessing their own practices and, where appropriate, modifying them. Most important, engaging in action research empowers practitioners: they can identify their own practical research problems and set in motion immediate plans to improve practice. This immediacy is attractive to practitioners who are looking to make quick yet responsible and defensible changes to improve the learning of their students. As you read this book, we hope that you will appreciate the importance of considering ways in which the practitioner can use research to make a difference in the quality of our educational systems.
BOX 1.2
Action Research Example
Ms. Lovett, a first-year teacher, is teaching a ninth-grade biology unit on the parts of the human respiratory system. On the first quiz, which covered the initial part of the unit, 50% of her students failed. She reviews the quiz and finds it to be fair. Her next step is to reflect on the strategies she used to cover the content. She realizes that her primary instructional strategies were lecture, multimedia presentations, and student note taking. After talking to colleagues and researching best practices, she decides to develop an alternative instructional approach. Her plan of action involves introducing students to a new biology computer software program that allows them to see, through computer animation, the functions of each part of the respiratory system. She decides that for the next section in the respiratory unit, she will take her students to the computer lab. While in the lab, students will spend half the class working with the new software; for the second half of class she will continue to use lecture, multimedia presentations, and note taking. Ms. Lovett administers a second quiz after two weeks of study. This time only 10% of the students fail. She decides that she will continue to incorporate computer time in the next unit and to monitor and assess all students, with a special focus on those who did not improve their performance.

Results of NCLB and New Directions in Accountability

NCLB has both supporters and critics. Those who support the legislation believe strongly that its regulations and accountability through standardized testing will increase student achievement and close the achievement gap (goals of the original ESEA and each of its iterations). However, there are many strong and very vocal critics. Much of the criticism focuses on the use of standardized testing as a single measure of accountability, and the results of current research suggest that NCLB is not achieving its goals.
For example, Lee (2006) conducted a comprehensive study and systematic trend analysis of national- and state-level public school achievement in math and reading during the pre-NCLB years (1990-2001) and post-NCLB years (2002-05). The study analyzed achievement across socioeconomic and racial groups with an eye on determining whether the gap in achievement was closing and whether all groups were on target to meet the goals of NCLB (100% proficiency by 2014). Primarily utilizing the National Assessment of Educational Progress (NAEP) test data, the study determined the following:
• NCLB has not improved student achievement in reading. NAEP reading scores were essentially flat across the pre- and post-NCLB periods, indicating neither growth nor decline in achievement. While there was a slight increase in math scores immediately following the implementation of NCLB, scores quickly returned to pre-NCLB levels.
• The gap in achievement between racial and socioeconomic groups persists.
• While state assessments in reading and math show some improvement (these trends in many cases began prior to NCLB), these improvements are not demonstrated in the only national test of achievement (NAEP).
The debate over NCLB and its use of yearly standardized assessments has led some educators to call for changes in the way students are assessed and schools are held accountable. According to Doran and Fleischman (2005), “The NCLB approach rests on the assumption that assessment data can provide credible information to gauge how effectively schools and teachers are serving their students” (p. 85). While assessment data may in fact be able to achieve such a goal, a concern under NCLB is the kind of data collected and how adequate yearly progress is calculated and then used as a measure of school effectiveness. AYP is the way that states measure the yearly progress schools are making toward the goal of 100% student proficiency in at least reading/language arts and math. It sets a benchmark, or minimum level of proficiency, that students must achieve on yearly tests of achievement. It should be noted that this process puts at a disadvantage schools serving many students whose beginning achievement levels are far below those of students in more affluent schools. AYP is much easier to meet in a school where only 10% of students are performing below state benchmarks than in a school where 90% of students are below state benchmarks.
Fundamentally, AYP is determined by comparing student performance on a single standardized test administered from year to year. For example, suppose that Green Elementary School’s third graders fail to meet the AYP in math in 2007. The method used to determine such failure required the school to compare the performance of the 2006 third graders to that of the 2007 third graders, two different cohorts. The question asked by educators and administrators is “What does this comparison tell us about individual student growth and teacher effectiveness when it involves comparing different groups of students?” The answer is that it provides little, if any, information about the progress of individual students. As a result, many in the field of education, and more recently the federal government (see U.S. Department of Education, 2006a & b), are calling for value-added assessment systems. Unlike the way student progress is monitored under NCLB, value-added assessment allows educators “to examine and assess their [student] learning trajectories as they progress over time through different classrooms taught by different teachers in different schools and districts” (Amerein-Beardsley, 2008, p. 65). In the value-added model of assessment, teachers and administrators are held accountable through examination of how much value, or improvement, they have contributed to an individual student’s learning. For example, in schools using value-added assessment, the growth of individual students can be tracked across teachers and subjects from year to year. The gains or losses of these individual students are then summed to provide a picture of a school’s or school district’s progress. Table 1.1 displays data for school district A and school district B. On this assessment, students scoring at level 1 or 2 are not meeting learning standards; students scoring at level 3 or 4 are meeting or exceeding learning standards.
Based on the data provided in Table 1.1, which school district added more value according to a value-added assessment system? If you said, “district A,” you are correct. While 80% of the students in district A did not meet learning standards in year 2, the level of growth within this group was greater than in district B.
TABLE 1.1 Sample Value-Added Data for Two School Districts

District A
Criteria   Year 1 (third grade 2006)   Year 2 (third grade 2007)
Level 1    30%                         20%
Level 2    50%                         60%
Level 3    10%                         15%
Level 4    10%                         5%

District B
Criteria   Year 1 (third grade 2006)   Year 2 (third grade 2007)
Level 1    7%                          6%
Level 2    8%                          7%
Level 3    63%                         65%
Level 4    22%                         22%

While there are multiple value-added assessment models currently utilized by school districts across the country, all models recognize that children come into the educational system with a wide variety of backgrounds and skills. Given this fact, examination of a yearly standardized test score does not accurately identify effective and ineffective teachers and schools. According to Doran and Fleischman (2005, p. 85), “The idea behind value-added modeling is to level the playing field by using statistical procedures that allow direct comparisons between schools and teachers even when those schools are working with quite different populations of students.” Value-added assessment measures individual student achievement on a yearly basis and calculates a gain score. The gain score is then used as a fairer assessment of effective schooling. The models utilize complex statistical techniques (well beyond the scope of this book) to estimate teacher and curricular effects on students. A major question, however, is whether the gain score obtained from a value-added assessment can be attributed to teaching effectiveness. Is it really possible to determine, even using careful statistical procedures, the relative influence of a wide range of variables (such as socioeconomic status, ongoing after-school reinforcement, preschool attendance) and conclude that the gains are due to a teacher, teaching method, or curricular effects? The answer is not yet clearly known. Research studies are being conducted to determine whether these sophisticated statistical models can separate the differential effects of the many variables that influence student progress.
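The gain-score idea behind Table 1.1 can be illustrated with a deliberately simplified, hypothetical calculation. The sketch below treats the four performance levels as an ordinal scale and compares the mean level from year 1 to year 2; real value-added models track individual students and use far more sophisticated statistics, so this is only a rough stand-in for the reasoning.

```python
# Hypothetical sketch of a value-added comparison using the Table 1.1 data.
# Levels 1-4 are treated as an ordinal scale; the "gain" is the change in
# the mean level between years. Real value-added models follow individual
# students and control for many background variables.

def mean_level(distribution):
    """Weighted mean performance level for a {level: share} distribution."""
    return sum(level * share for level, share in distribution.items())

district_a = {
    "year1": {1: 0.30, 2: 0.50, 3: 0.10, 4: 0.10},
    "year2": {1: 0.20, 2: 0.60, 3: 0.15, 4: 0.05},
}
district_b = {
    "year1": {1: 0.07, 2: 0.08, 3: 0.63, 4: 0.22},
    "year2": {1: 0.06, 2: 0.07, 3: 0.65, 4: 0.22},
}

for name, district in [("A", district_a), ("B", district_b)]:
    gain = mean_level(district["year2"]) - mean_level(district["year1"])
    print(f"District {name} gain: {gain:+.2f} levels")
```

Run this way, district A shows a larger mean-level gain than district B even though far more of its students remain below standard, which is exactly the point the text makes about value added versus absolute proficiency.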
The value-added assessment system is consistent with the way many action researchers evaluate interventions. Action researchers might ask questions like, “How much has Samuel improved following the use of math manipulatives?” and “How does that compare with how much Louisa improved?”

CONDUCTING EDUCATIONAL RESEARCH

Recent accountability initiatives are certainly not the first effort to apply scientific methods to educational practices. Since the beginning of formalized education, research has been used to help improve education and to determine how education works in a wide range of situations.

The Scientific Method

Through scientific research, educators hope to obtain accurate and reliable information about important issues and problems that face the educational community. Scientific research as applied to education is defined as the application of systematic methods and techniques that help researchers and practitioners understand and enhance the teaching and learning process.
Much like research in other fields, research in education uses two basic types of reasoning: inductive reasoning and deductive reasoning. Inductive reasoning is often referred to as a “bottom-up” approach to knowing, in which the researcher uses observations to build an abstraction or to describe a picture of the phenomenon being studied. Inductive reasoning usually leads to inductive methods of data collection through which the researcher (1) systematically observes the phenomena under investigation, (2) searches for patterns or themes in the observations, and (3) develops a generalization from the analysis of those themes. The researcher proceeds from specific observations to general statements—a type of discovery approach to knowing. For example, suppose a researcher is interested in determining the nature of the interactions that occur between students with disabilities and regular education students who are educated together in a preschool setting. The researcher spends two days a week for six months observing and interviewing the preschoolers. She specifically focuses on the types of activities the two populations engage in together during the course of the school day. She then gathers the notes from her observations and interviews and concludes that the students with disabilities and regular education students play together, eat lunch together, and express positive attitudes toward each other.
In contrast, deductive reasoning uses a “top-down” approach to knowing. Educational researchers use one aspect of deductive reasoning by first making a general statement or prediction and then seeking evidence that would support or disconfirm that statement. This type of research employs what is known as the hypothetic-deductive method, which begins by forming a hypothesis—a tentative explanation that can be tested by collecting data. For example, one might hypothesize that small classes would result in a greater amount of student learning than large classes. This hypothesis would be based on a theory or a knowledge base composed of the results of previous research studies. A theory is a well-developed explanation of how some aspect of the world works, using a framework of concepts, principles, and other hypotheses. For example, a humanistic theory of education might emphasize strong teacher-student relationships as part of effective learning, and previous research studies may have shown that such relationships are more common in small classes. Based on the humanistic theory and these previous studies, the researcher in our example might hypothesize that small class sizes will result in better student learning. The next step in the hypothetic-deductive approach is to collect data to see whether the hypothesis should be supported or rejected. The researcher might compare student learning in classrooms of 15 or fewer students with that in classrooms of 25 or more students. If students in the smaller classes show a greater amount of learning, the hypothesis is supported. If they do not, then by deductive reasoning, the hypothesis is shown to be false.
To summarize, the researcher (1) began with a theory and a knowledge base and used them to form a hypothesis, (2) collected data, and (3) made a decision based on the data to either accept or reject the hypothesis or prediction.
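The three steps just summarized can be sketched in code. The learning gains below are invented purely for illustration, and the decision rule is reduced to a bare comparison of group means; a real study would apply a formal statistical test before accepting or rejecting the hypothesis.

```python
# Illustrative sketch of the hypothetic-deductive cycle described above.
from statistics import mean

# Step 1: theory and prior research lead to a hypothesis --
# students in small classes learn more than students in large classes.

# Step 2: collect data (hypothetical learning gains for two groups).
small_class_gains = [12, 15, 11, 14, 13, 16]   # classes of 15 or fewer
large_class_gains = [10, 9, 12, 8, 11, 10]     # classes of 25 or more

# Step 3: decide whether the data support or disconfirm the hypothesis.
# (A simple comparison of means stands in for a proper statistical test.)
hypothesis_supported = mean(small_class_gains) > mean(large_class_gains)
print("Hypothesis supported:", hypothesis_supported)
```

The point of the sketch is the ordering of the steps, not the arithmetic: the hypothesis and the data-collection plan exist before any data are examined, which is what distinguishes the hypothetic-deductive route from the inductive one.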
The inductive and hypothetic-deductive approaches to knowing represent two general routes to knowledge used in educational research. Inductive reasoning is most closely associated with qualitative research (see Table 1.2), which collects and summarizes data using primarily narrative or verbal methods: observations, interviews, and document analysis. Qualitative researchers are often said to take inductive approaches to data collection because they formulate hypotheses only after they begin to make observations, interview people, and analyze documents. These hypotheses are examined and modified by further data collection rather than being accepted or rejected outright. Qualitative researchers believe that full understanding of phenomena depends on the context, and so they use theories primarily after data collection to help them interpret the patterns observed. Ultimately, qualitative researchers do attempt to make claims about the truth of a set of hypotheses, although they may confirm these hypotheses primarily for a given setting or context.
The hypothetic-deductive method is most closely associated with quantitative research, which summarizes data using numbers. Hypotheses and methods of data collection in quantitative research (see Table 1.2) are created before the research begins. Hypotheses are then tested, and supportive results add evidence for the underlying theory. Over time, supportive findings with different groups in different settings increase the generalizability of the theory or hypothesis. Quantitative researchers may also use inductive reasoning as they look for similar experiences and results and form new ideas, concepts, or theories.

Basic Versus Applied Research Approaches

Research often strikes many students as abstract and distant from real life. This is especially true if research aims primarily at knowledge creation through theory building. The goal of basic research is to design studies that can test, refine, modify, or develop theories. As an example of basic research, Marcia’s (1966) research on adolescent identity led to a refinement of one stage of Erik Erikson’s psychosocial theory of development. Marcia’s goal was not to create a program to address practical ways to help adolescents but rather to extend and support the theory. In contrast, applied research does try to apply knowledge to actual practice. Applied research studies examine the effectiveness and usefulness of particular educational practices. Here the goal is to determine the applicability of educational theory and principles by testing hypotheses within specific settings. For example, Schmitt-Rodermund and Vondracek (1998) examined whether parenting behaviors predicted the amount of adolescent identity exploration as described by Marcia. The results of their study have implications for how parents and adolescents interact.

TABLE 1.2 Qualitative and Quantitative Approaches: The Scientific Process

Step 1: Ask a general question
  Qualitative research: Observation, reflection, and a review of research lead to a question.
  Quantitative research: A review of research and theory leads to a question.

Step 2: Generate more specific questions or a research hypothesis
  Qualitative research: Inductive reasoning leads to more specific yet flexible research questions.
  Quantitative research: Deductive reasoning leads to a research hypothesis.

Step 3: Collect data to answer the question or test the hypothesis
  Qualitative research: Data are in narrative or image form, collected through methods such as interviews or observations.
  Quantitative research: Numerical data are collected through methods such as tests, checklists, and surveys.

Step 4: Analyze the data
  Qualitative research: Identify patterns or themes.
  Quantitative research: Conduct statistical analysis.

Step 5: Interpret findings
  Qualitative research: Draw conclusions based on themes and patterns.
  Quantitative research: Reject or accept the hypothesis based on statistical results.
Both basic and applied methods of research have their places in the educational research field. To some degree, the approach selected depends on whether the findings are utilized and result in a change in practice. In basic research, the overarching goal is to develop and modify theory. These theory-based studies, while critical to the formulation of applied research, often have low utilization and do not result in systemwide change. Although the goal of applied research is to demonstrate the usefulness of theories in practice, the reality is that applied research studies often take many years to stimulate change, even when the findings are disseminated to large groups of individuals through applied research journals. Two approaches that do result in more immediate change are program evaluation and action research.

PHILOSOPHICAL FRAMEWORKS FOR EDUCATIONAL RESEARCH

Educational research today is beginning to move away from a hard and fast distinction between qualitative and quantitative research methods. In fact, many researchers combine both approaches in order to gather a breadth of data and to validate their results. Researchers can, however, be separated into groups based on their philosophical frameworks, identified by the assumptions they make about the nature of the reality being studied, claims about what we can and cannot know, and the ways in which they utilize theories and findings. Each framework makes assumptions about whether qualitative or quantitative methods are most appropriate for extending our knowledge about education. As a beginning researcher, it is important that you consider which approach best captures your own assumptions about how the world works.

Scientific Realism

Scientific realism is a term applied to the framework used by most researchers who take a purely quantitative approach to research. Quantitative research is characterized by a desire to answer research questions by producing numerical data that represent various constructs and variables. A construct is a hypothetical concept that is typically developed from a theoretical framework. Although constructs are names for things that cannot be seen (for example, intelligence, motivation, self-esteem), they are assumed to be real characteristics that influence educational outcomes. When constructs are measured in educational research, they are known as variables. Like the constructs they represent, variables are defined as attributes, qualities, and characteristics of persons, groups, settings, or institutions, such as gender, social skills, socioeconomic status, exclusiveness, or achievement. Scientific realists strive to establish cause-and-effect relationships when possible, using data collection methods such as questionnaires, tests, and observational checklists to produce quantitative data.
The philosophical underpinnings of the scientific realism approach can be found in the arguments of philosophers known as positivists who have primarily tried to describe knowledge generation in the physical sciences. The first assumption made by scientific realists is that there is a real social and psychological world that can be accurately captured through research. In other words, there is an objective reality that research aims to describe. Scientific realists further assume that the social and psychological world can be studied in much the same way as the natural world, by breaking complex phenomena and problems into smaller parts. The major job for the researcher is to identify the most important parts or variables and accurately describe how these are related to each other in the real world. However, because humans are fallible and social scientists study human characteristics, reporting that reality must be done with a certain degree of probability. Scientific realists see knowledge as conjectural (Phillips & Burbules, 2000) and therefore subject to possible revision. All hypotheses are tested using statistical tests that establish the level of confidence that one can have in the results obtained. Scientific realists do recognize that because educators study human behaviors and characteristics, research may be influenced by the investigator. For an investigator to maintain clear objectivity, he or she must play a detached role through which there is little opportunity for interaction with the participants under study. Scientific realists believe that inquiry can be value-free and that a researcher who strives to eliminate any personal bias can reliably determine findings. Although they borrow rigorous scientific techniques from the natural sciences, they recognize that, in education and psychology, true scientific experiments are not always possible. 
Scientific realists concede that different persons might have different perceptions of reality; however, they assume that experiences overlap to a large degree and that a good researcher can take these different perceptions into account in providing the best possible explanation of reality.

Social Constructivism

Traditionally, purely qualitative research is often done by persons who hold a framework referred to as interpretive, constructivist, or naturalistic. (We use the term social constructivism to refer to this approach.) Social constructivists challenge the scientific realist assumption that reality can be reduced to its component parts. Instead, they argue that phenomena must be understood as complex “wholes” that are inextricably bound up with the historical, socioeconomic, and cultural contexts in which they are embedded. Therefore, social constructivists attempt to understand social phenomena from a context-specific perspective.
Social constructivists view scientific inquiry as value-bound and not value-free. According to Lincoln and Guba (1985), this means that the process of inquiry is influenced by the researcher and by the context under study. This philosophical perspective argues that reality is socially constructed by individuals and this social construction leads to multiple meanings. Different persons may bring different conceptual frameworks to a situation based on their experiences, and this influences what they perceive in a particular situation. In other words, there is no one true reality, nor can one assume that the experiences that people have had will overlap to a large degree. Rather, we construct reality in accord with the concepts most appropriate to our personal experiences. Therefore, the researcher must attempt to understand the complex and often multiple realities from the perspectives of the participants. The acceptance of the existence of multiple realities leads social constructivists to insist that a set of initial questions asked in a study will likely change or be modified as these multiple realities are uncovered or reconstructed during the process of conducting research. The only true way to accomplish this understanding is for the researcher to become involved in the reality of the participants and interact with them in deeply meaningful ways. This provides an opportunity for mutual influence and allows the researcher to see the world through the eyes of the participants. “The inquirer and the object of inquiry interact to influence one another; knower and known are inseparable” (Lincoln & Guba, p. 37). This approach, then, requires that researchers use data collection methods that bring them closer to the participants using techniques such as in-depth observations, life histories, interviews, videos, and pictures.

Advocacy-Liberatory Framework

Researchers taking an advocacy-liberatory framework for research also assume that there are multiple possible realities that are dependent on social, political, and economic contexts. However, these researchers go beyond the social constructivist claim that researchers’ values can influence research by insisting that moral values should form the impetus for research and that research should seek to improve the lives of persons who have little social power and have been marginalized by more powerful groups in their societies. In essence, the goal of advocacy or liberatory researchers is liberation through knowledge gathering. Paulo Freire (1921-1997), a literacy worker from South America and author of Pedagogy of the Oppressed (1970), based his philosophy of research on these principles and argued that research should provide freedom from oppression and debilitating living environments. Working on literacy skills with poor and oppressed Chilean workers in the 1960s and 1970s, Freire asserted that research should be conducted in a collaborative manner, with community members participating in the selection and analysis of themes during data analysis. This collaboration requires that the researcher engage in respectful dialogue with the study participants and understand reality from the perspectives of the community. According to Freire and other advocacy-liberatory investigators, research should not only use inductive processes to gather information but also engage in research as a form of social advocacy in which participants identify the types of changes sought. Whereas this type of research usually uses qualitative methods of data collection, it might use quantitative methods constructed in collaboration with participants if these data will help people achieve social changes in their society. The type of data collected depends less on philosophical assumptions than on its potential to illuminate experiences and facilitate action to achieve a better life.
In other words, research should be used not only to educate and produce knowledge but also to empower people to take political action and use their political voice to change and improve their place in society.

Pragmatism

Pragmatism is the framework that has been most developed by American philosophers. Unlike the other frameworks, pragmatism is not concerned with whether research is describing either a real or socially constructed world. Instead, for pragmatists, research simply helps us to identify what works. Of course, we might ask our pragmatists what they mean by what works. They are likely to reply that knowledge arises from examining problems and determining what works in a particular situation. It does not matter if there is a single reality or multiple realities, as long as we discover answers that help us do things that we want to do. A pragmatist might insist that a good theory is one that helps us accomplish a specific goal (or set of goals) or one that reduces our doubt about the outcome of a given action. Most pragmatic researchers use a mixed-methods approach to research; for example, they use both qualitative and quantitative methods to answer their research questions. Pragmatic researchers propose that even within the same study, quantitative and qualitative methods can be combined in creative ways to more fully answer research questions. Campbell and Fiske (1959) are often thought to be among the first researchers to introduce the notion of using both qualitative and quantitative techniques to study the same phenomena. In current research, pragmatic frameworks are used by both professional researchers and researchers who are primarily practitioners (for example, teachers, counselors, administrators, school psychologists).
The assumptions underlying the philosophical frameworks described previously are summarized in Table 1.3.

RESEARCH ETHICS

Regardless of the type of research conducted, research ethics is an important consideration. Most professional organizations have their own codes of ethics (see the American Psychological Association and the American Sociological Association for examples). In addition, colleges, universities, and other institutions that conduct research have institutional review boards (IRBs) whose members review proposals for research to determine if ethical issues have been considered. If you are conducting research in a noncollege setting, such as an elementary or secondary school or a community organization, there may not be a committee called an “IRB.” In this case, you will need to find out who will review your proposal and the procedures you will need to follow to obtain approval.
TABLE 1.3 Frameworks and Assumptions Underlying Educational Research

Knowledge-Oriented Approaches

Scientific Realism
• Research aims to describe an objective reality that most or all people would agree is real
• Educational settings and problems can be studied by empirical analysis of component parts
• Research should be value-free
• Researchers should be detached from participants and strive to be objective
• Theories and hypotheses are formed and then confirmed or disconfirmed through collection of data

Social Constructivism
• Reality is historically and culturally constructed so there are multiple possible realities
• Educational settings and problems must be understood as complex wholes
• Researchers must continually strive to be aware of and control their values
• Researchers should become actively involved with participants in order to understand their perspectives
• Theories and hypotheses are generated during data collection and achieve meaning through human interactions

Action-Oriented Approaches

Advocacy-Liberatory
• Reality is socially constructed and influenced by social, political, and cultural inequalities
• Although qualitative methods are preferred, educational settings and problems can be studied using any methods that truly represent the experiences of the participants
• Research must be based in values and should empower marginalized groups to improve their lives
• Researchers should collaborate with participants as equal partners
• Theories and hypotheses should provide action plans to achieve a better life

Pragmatism
• The immediate reality of solving educational problems should be the focus of educational research
• Educational settings and problems can be studied using any method that accurately describes or solves a problem
• Research should strive to find ways to make education better
• Researchers should collaborate with participants to fully understand what works
• Theories and hypotheses are useful tools in helping to improve education
For the most part, issues of ethics focus on establishing safeguards that will protect the rights of the participants. The traditional and often dominant issues that emerge when considering research ethics involve obtaining informed consent from participants, protecting them from harm, and ensuring confidentiality. Informed consent means that participants have been given information about procedures and risks involved in the study and have been informed that their participation is voluntary and that they have the right to withdraw from the study without repercussions. IRB committees typically scrutinize research proposals for these issues and will weigh any potential risk to the participants against any possible gains for science. Keep in mind that the process for addressing ethical issues might change the participants in your study. This may happen because some of the people that you have selected will not agree to be in the study, or the IRB may not give you permission to use those participants. Even if you use a random sampling procedure, the final participants in your study are volunteers. All researchers should consider how this might change the representativeness of the sample or exclude key informants.
The members of the IRB review proposals and examine the methods described in the proposal to ensure that all ethical considerations have been addressed and that sufficient detail of the actions to be taken by the researcher is provided. As already mentioned, most institutions whose students, professors, and staff conduct research have their own IRB committees that provide specific guidelines for study approval. IRB committees are mandated by national legislation (National Research Act, Public Law 93-348). To be well versed in specific requirements and procedures, you should contact your own university or college’s IRB committee. IRB committees typically require that the researcher prepare a document that includes the following:
• A cover page. In this the researcher identifies the principal investigator and his or her qualifications and contact information, the project title, and the type of research that is being proposed.
• A detailed description of the study