Research Methods in Psychology For Dummies - Martin Dempster - E-Book


Martin Dempster

Description

Your hands-on introduction to research methods in psychology

Looking for an easily accessible overview of research methods in psychology? This is the book for you! Whether you need to get ahead in class, you're pressed for time, or you just want a take on a topic that's not covered in your textbook, Research Methods in Psychology For Dummies has you covered. Written in plain English and packed with easy-to-follow instruction, this friendly guide takes the intimidation out of the subject and tackles the fundamentals of psychology research in a way that makes it approachable and comprehensible, no matter your background. Inside, you'll find expert coverage of qualitative and quantitative research methods, including surveys, case studies, laboratory observations, tests and experiments – and much more.

Serves as an excellent supplement to course textbooks

Provides a clear introduction to the scientific method

Presents the methodologies and techniques used in psychology research

Written by the authors of Psychology Statistics For Dummies

If you're a first- or second-year psychology student and want to supplement your doorstop-sized psychology textbook – and boost your chances of scoring higher at exam time – this hands-on guide breaks down the subject into easily digestible bits and propels you towards success.


Page count: 520

Year of publication: 2015




Research Methods in Psychology For Dummies®

Published by: John Wiley & Sons, Ltd., The Atrium, Southern Gate, Chichester, www.wiley.com

This edition first published 2015

© 2016 John Wiley & Sons, Ltd., Chichester, West Sussex.

Registered office

John Wiley & Sons Ltd., The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book, please see our website at www.wiley.com.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.

LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: WHILE THE PUBLISHER AND AUTHOR HAVE USED THEIR BEST EFFORTS IN PREPARING THIS BOOK, THEY MAKE NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS BOOK AND SPECIFICALLY DISCLAIM ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. IT IS SOLD ON THE UNDERSTANDING THAT THE PUBLISHER IS NOT ENGAGED IN RENDERING PROFESSIONAL SERVICES AND NEITHER THE PUBLISHER NOR THE AUTHOR SHALL BE LIABLE FOR DAMAGES ARISING HEREFROM. IF PROFESSIONAL ADVICE OR OTHER EXPERT ASSISTANCE IS REQUIRED, THE SERVICES OF A COMPETENT PROFESSIONAL SHOULD BE SOUGHT.

For general information on our other products and services, please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at (001) 317-572-3993, or fax 317-572-4002. For technical support, please visit www.wiley.com/techsupport.

A catalogue record for this book is available from the British Library.

ISBN 978-1-119-03508-4 (paperback) ISBN 978-1-119-03510-7 (ebk)

ISBN 978-1-119-03512-1 (ebk)

Research Methods in Psychology For Dummies®

Visit www.dummies.com/cheatsheet/researchmethodsinpsych to view this book's cheat sheet.

Table of Contents

Cover

Introduction

About This Book

Foolish Assumptions

Icons Used in This Book

Beyond the Book

Where to Go from Here

Part I: Getting Started with Research Methods

Chapter 1: Why Do Research in Psychology?

What Is Research?

Why Do Psychologists Need to Do Research?

Doing Psychological Research

Exploring Research Methods

Chapter 2: Reliability and Validity

Evaluating Study Validity

Taking a Look at Study Reliability

Focusing on the Reliability and Validity of Tests

Chapter 3: Research Ethics

Understanding Ethics

Doing No Harm

Looking at Research Ethics with Human Participants

Maintaining Scientific Integrity

Applying for Ethical Approval

Part II: Enhancing External Validity

Chapter 4: Survey Designs and Methods

Checking Out Survey Designs

Reviewing Survey Methods

Keeping Your Study Natural

Chapter 5: Sampling Methods

Looking at Samples and Populations

Understanding Your Sampling Options

Preventing a Good Sample Going Bad

Chapter 6: Questionnaires and Psychometric Tests

Measuring Psychological Variables

Choosing Existing Questionnaires

Designing a Questionnaire

Individual Versus Group Responses

Part III: Enhancing Internal Validity

Chapter 7: Basic Experimental Designs

Understanding Experimental Designs

Taking a Look at Basic Experimental Designs

Considering Repeated Measures Design (or Why You Need a Pre-Test)

Looking at Independent Groups Design

Getting the Best of Both Worlds: Pre-Test and Comparison Groups Together

Using Randomised Controlled Trials

Treading Carefully with Quasi-Experimental Designs

Chapter 8: Looking at More Complex Experimental Designs

Using Studies with More than Two Conditions

Addressing Realistic Hypotheses with Factorial Designs

Understanding Covariates

Using a Pre-Test Can Be Problematic

Chapter 9: Small Experiments

Conducting Experiments Using Small Sample Sizes

Interrupted Time Series Designs

Introducing Multiple Baseline Designs

Analysing Small Experiments

We’re Small, but We’re Not Experiments

Part IV: Qualitative Research

Chapter 10: Achieving Quality in Qualitative Research

Understanding Qualitative Research

Sampling in Qualitative Research

Collecting Qualitative Data

Transcribing Qualitative Data

Chapter 11: Analysing Qualitative Data

Principles for Analysing Qualitative Data

Looking at an Example: Thematic Analysis

Chapter 12: Theoretical Approaches and Methodologies in Qualitative Research

Experiential Versus Discursive Approaches

Exploring Interpretative Phenomenological Analysis

Understanding Grounded Theory

Part V: Reporting Research

Chapter 13: Preparing a Written Report

Coming Up with a Title

Focusing on the Abstract

Putting Together the Introduction

Mastering the Method Section

Rounding Up the Results

Delving In to the Discussion

Turning to the References

Adding Information in Appendices

Chapter 14: Preparing a Research Presentation

Posters Aren’t Research Reports

Presenting Your Poster at a Plenary Session

Creating and Delivering Effective and Engaging Presentations

Chapter 15: APA Guidelines for Reporting Research

Following APA Style

Discovering the Why, What and When of Referencing

Citing References in Your Report

Laying Out Your Reference Section

Reporting Numbers

Part VI: Research Proposals

Chapter 16: Finding Research Literature

Deciding Whether to Do a Literature Review

Finding the Literature to Review

Obtaining Identified Articles

Storing References Electronically

Chapter 17: Sample Size Calculations

Sizing Up Effects

Obtaining an Effect Size

Powering Up Your Study

Estimating Sample Size

Chapter 18: Developing a Research Proposal

Developing an Idea for a Research Project

Determining the Feasibility of a Research Idea

Writing a Research Proposal

Part VII: The Part of Tens

Chapter 19: Ten Pitfalls to Avoid When Selecting Your Sample

Random Sampling Is Not the Same as Randomisation

Random Means Systematic

Sampling Is Always Important in Quantitative Research

It’s Not All about Random Sampling

Random Sampling Is Always Best in Quantitative Research (Except When It’s Not)

Lack of a Random Sample Doesn’t Always Equal Poor Research

Think Random Sampling, Think Big

Bigger Is Better for Sampling, but Know Your Limits

You Can’t Talk Your Way Out of Having a Small Sample

Don’t State the Obvious

Chapter 20: Ten Tips for Reporting Your Research

Consistency Is the Key!

Answer Your Own Question

Tell a Story …

Know Your Audience

Go with the Flow

It’s Great to Integrate!

Critically Evaluate but Do Not Condemn

Redundancy Is, Well, Redundant

Double-Check Your Fiddly Bits

The Proof Is in the Pudding

About the Authors

Cheat Sheet

Advertisement Page

Connect with Dummies

End User License Agreement


Introduction

We know that research methods isn’t every psychology student’s favourite subject. In fact, we know that some students see conducting research as a ‘necessary evil’ when completing their psychology qualification. Why is this? Well, we think it’s because people who are interested in studying psychology are interested in examining the thoughts, behaviours and emotions of others, and that’s what they want to find out more about – thoughts, behaviours and emotions. They’d rather not spend time thinking about how to design a research project or how to recruit participants. But it’s important to reflect on how you come to know what you know about psychology: it’s because of the research that psychologists and others have conducted into these topics. Without research, psychology (like many other disciplines) would be non-existent or, at best, relegated to being a set of opinions with no credibility.

Therefore, research is essential to psychology. It’s the lifeblood of psychology! Without robust, rigorous research, we wouldn’t know (among many other things) that people’s quality of life can be improved by finding effective ways to facilitate change in their thoughts that result in beneficial emotional and behavioural changes. Research, therefore, is responsible for improving the psychological wellbeing of countless people over the years.

But, note that we highlight the important role of robust and rigorous research. In other words, good quality research. To conduct any other type of research won’t advance the discipline of psychology, is probably a waste of everyone’s time, and may raise some ethical issues. As a result, every student of psychology requires a firm grasp on how to conduct good quality research. And that’s what this book aims to deliver.

We’ve written this book in a clear and concise manner to help you design and conduct good quality research. We don’t assume any previous knowledge of research. We hope that this book will excite you about conducting psychological research (as much as it’s possible to do so) and that your research will contribute to improving psychology for the benefit of others in the years to come.

About This Book

The aim of this book is to provide an easily accessible reference guide, written in plain English, that allows students to readily understand, carry out, interpret and report on psychological research. While we have targeted this book at psychology undergraduate students, we hope that it will be useful for all social science and health science students, and that it may also act as a reminder for those of you who haven’t been students for some time!

You don’t need to read the chapters in this book in order, from start to finish. We’ve organised the book into different parts, which broadly address the different types of research designs that you’re likely to encounter in psychology and the different ways of reporting research. This makes it easy to find the information you need quickly. Each chapter is designed to be self-contained and doesn’t necessarily require any previous knowledge.

You’ll find that the book covers a wide range of research designs that are seldom found together in a single book. We deal with survey designs, experimental designs, single case designs and qualitative designs. We also provide clear guidance on how to write and develop a research proposal, and how to prepare information for a research paper or a conference presentation. Therefore, this book provides a comprehensive introduction to the main topics in psychological research.

We’ve deliberately tried to keep our explanations concise and to the point, but you’ll still find a lot of information contained in this book. Occasionally, you may see a Technical Stuff icon. This highlights rather technical information that we regard as valuable for understanding the concept under discussion, but not crucial. You can skip these sections and still understand the topic in question. Likewise, you may come across sidebars (grey boxes) where we elaborate on a topic with an interesting aside (well, we think they’re interesting!). If you’re in a hurry, you can skip these sections without missing out on any essential information.

Foolish Assumptions

For better or worse, we made some assumptions while writing this book. We assumed that:

You’re familiar with the type of research that’s conducted in psychology. You may be a psychology undergraduate, or studying a related subject (in another social or health science).

You’re a novice when it comes to conducting a research study; that is, you’ve never conducted your own research study before, or you have only done this once or twice previously.

You refer to a statistics book to help you understand some of the statistical concepts we discuss. We highlight when you need to do this in the text. We also recommend that you have Psychology Statistics For Dummies (also authored by us and published by Wiley) to hand to refer to when you’re trying to make sense of some of the trickier statistical concepts that we can’t cover in detail in this book.

Icons Used in This Book

As with all For Dummies books, you’ll notice icons in the margin that signify that the accompanying information is something special:

This icon points out a helpful hint designed to save you time (or cognitive effort).

This icon is important! It indicates a piece of information that you should bear in mind even after you’ve closed the book.

This icon highlights a common misunderstanding or error that we don’t want you to make.

This icon contains a more detailed discussion or explanation of a topic; you can skip this material if you’re in a rush.

Beyond the Book

The world of research methods is full of areas to explore – and we’ve crammed all the important stuff into this book. But then we thought of some other things that you may find useful, or that may add to your understanding of research methods in psychology:

Cheat sheet. This summarises the key points from this book. It gives you a ready reference to the important things to remember when you’re designing or conducting a research study in psychology. You can find it at www.dummies.com/cheatsheet/researchmethodsinpsych.

Dummies.com online articles. These articles add to the information contained in the book. They allow us an opportunity to expand on and emphasise the points that we think are important and that we think you may benefit from knowing a little more about. The online articles delve into topics from different parts of the book, so they’re varied as well as interesting (we hope!). You can find these at www.dummies.com/extras/researchmethodsinpsych.

Where to Go from Here

You can read this book from start to finish (and we hope that you’d enjoy it), but it’s not like a novel. Rather, we have designed the book so that you can easily find the information you’re looking for without needing to read lots of related but separate detail.

If you’re completely new to conducting research, we suggest that you start with Chapter 1, which provides an overview of the book and introduces you to some of the important concepts. If you’re familiar with research but need some information on developing and writing a research proposal, we recommend that you turn to Part VI. If you want to look at moving away from quantitative data to focus on qualitative data, we advise that you flip to Part IV. For any other information you may be looking for, we suggest that you use the table of contents or the index to guide you to the right place.

Research is an important area in the development of psychology. With this book in hand, you’ll be able to start investigating this fascinating discipline, with its many and varied implications for life. We hope you enjoy the book and your research, and maybe even make an important contribution to the discipline – which we’ll get to read about in years to come!

Part I

Getting Started with Research Methods

Visit www.dummies.com for free access to great Dummies content online.

In this part …

Get an overview of what it means to do research in psychology.

Find out what the terms ‘validity’ and ‘reliability’ mean and why they’re so important when conducting or evaluating research studies.

Discover the five key ethical principles of conducting research and how to go about making sure your studies meet these standards.

Chapter 1

Why Do Research in Psychology?

In This Chapter

Finding out what research is and why psychologists do it

Discovering the various stages of a research study

Understanding the different research methods used to gather information

In this chapter, we introduce you to the main research methods, designs and components that you encounter during your psychology course, and we signpost you to relevant chapters in this book where you can find more information – and discover how to become a research methods maestro (or at least pass the course!).

What Is Research?

Research is a systematic way of collecting information (or data) to test a hypothesis.

A hypothesis is just a testable (or falsifiable) statement. For example, a good hypothesis is that ‘you see a statistically significant difference in self-esteem mean scores between male and female psychology students’. A poor hypothesis is hard to test (or falsify) – for example, ‘gender differences in self-esteem develop in the womb for some individuals’. How can you possibly collect data to refute this statement?
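To make the first hypothesis concrete, here is a minimal Python sketch of comparing two group means with Welch's t statistic. The self-esteem scores are invented purely for illustration (the book itself contains no code), and in practice you would compare the statistic against a t distribution or use a statistics package:

```python
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic for the difference between two group means."""
    na, nb = len(group_a), len(group_b)
    # Standard error of the mean difference, allowing unequal variances
    se = (variance(group_a) / na + variance(group_b) / nb) ** 0.5
    return (mean(group_a) - mean(group_b)) / se

# Hypothetical self-esteem scores for two groups (illustrative data only)
male = [31, 28, 35, 30, 27, 33, 29, 32]
female = [30, 29, 34, 31, 28, 33, 30, 32]

print(round(welch_t(male, female), 2))
```

A t value this close to zero would not support the hypothesis of a difference; whether any value counts as statistically significant depends on the sample sizes and the significance threshold you set in advance.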

No single research study sets out to conclusively ‘prove’ a hypothesis. Over time, research studies generate, test, refine and retest hypotheses, and build up a body of knowledge and evidence. Research is more of a process than a single thing.

You need to have the skills to conduct your own research study, but you also need to be able to review and critically evaluate existing research studies.

Why Do Psychologists Need to Do Research?

We could tell you that you do research in your psychology course because it’s fun, because you can discover something new that no-one else has found and because you develop insights into fascinating areas of the discipline and develop many transferable skills along the way too – but we’re biased, and you probably won’t believe us.

Instead, we’ll tell you that psychologists do research for two main reasons. The first is to expand the knowledge base of the discipline and to explain psychological phenomena. The second is to apply this new-found knowledge and use it to help individuals and society. Generating a reliable evidence base allows psychologists to describe and explain behaviour, establish cause-and-effect relationships and predict outcomes. Applying research findings can help policy-makers, clinicians and individuals.

Consider a clinical psychologist who meets a client suffering from depression for the first time and wants to recommend a course of therapy:

How do they know that ‘depression’ as a construct actually exists?

How do they know that the questionnaire or interview used to assess depression actually measures it?

How do they know that an intervention to reduce depression actually works?

How do they know if one intervention is better than another?

How do they know the possible causes of the depression?

The answer to all of these questions is the same: research.

Doing Psychological Research

Carrying out a research project can be a complex process. Consider these stages you have to go through (no skipping any of them!):

First you have to have a comprehensive and viable plan that involves coming up with an idea and developing a research proposal.

You have to decide if you want to measure and quantify the things you are interested in (quantitative research) or collect information on people’s experiences and opinions using their own words (qualitative research).

You then have to choose a research design that is most appropriate for your proposed project.

Finally, you have to disseminate your research findings through a written report, a research poster or an oral/verbal presentation.

The stages of a research project are not always separate and distinct. You may have to tackle the question of quantitative vs. qualitative research at the same time you’re weighing different research designs. As you read through the book, you see that there may be overlap between stages.

The following sections outline each of these stages and point you to the relevant chapters of the book to help you complete a successful research project.

Planning research

When we task students with conducting and writing up a research study, they’re often keen to begin and see the planning stage as a frustrating delay. However, it’s impossible to carry out a good research study without good planning – and this takes time.

First, you need to identify your idea. To do this, you review the literature in the area you’re interested in. A good literature review demonstrates to your supervisor that you’re aware of existing published research in the area and that you’re familiar with its strengths and weaknesses. It ensures that your proposed study hasn’t been done before. It may also inform you of ways that you can improve your research idea (for example, by using a novel methodology or including a related variable that you haven’t yet considered).

Conducting a comprehensive literature review takes time. Don’t underestimate how much time you need to explore electronic search engines to find relevant sources, track down these sources and write up your literature review. You find plenty of information on how to conduct a literature review in Chapter 16.

When you’ve settled on a research idea and defined your research question, you need to draft your research proposal. This document outlines the research that you intend to do and why you intend to do it. You need to submit your research proposal in order to obtain ethical permission to carry out your study (Chapter 3 covers research ethics and how to apply for ethical approval).

Your proposal should comprise two sections:

An introduction containing your literature review and your research questions or hypotheses.

A well-defined research protocol, which is a detailed plan of your design and methodology (we look at research designs in more detail in the later section, ‘Choosing a research design’). Your protocol clearly states what you intend to do and how it addresses your research questions or hypotheses. You include details of how you intend to analyse your data and a timetable specifying how long each stage of the research process takes.

Chapter 18 guides you step by step through the process of developing a solid research proposal.

A good research proposal helps you (the researcher) and your supervisor establish whether your project is feasible – that is, if your research project is practical, realistic and possible to carry out. You may have a brilliant idea for a research project (and we’re confident that you do!), but can it be completed on time, with the resources you have available, with the participants you have access to and in an ethical manner?

When you’re writing your research proposal, you need to specify the sample size that you intend to recruit. Calculating the required sample size is essential at this stage. It impacts the time and resources that you require for your study. Also, if you can’t achieve the required sample size, you may be unable to detect statistically significant effects in the data – which may mean that you reach the wrong conclusions. Chapter 17 discusses sample size calculations in more detail and covers how to calculate the required sample size for your research proposal.
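The logic behind such a calculation can be sketched with the widely used normal-approximation formula for comparing two group means. This is a generic illustration, not the book's own procedure; dedicated power software uses the t distribution and gives slightly larger answers:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect an effect of
    size d (Cohen's d) with a two-tailed two-group comparison, using the
    normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2.
    """
    z = NormalDist().inv_cdf  # standard normal quantile function
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# A 'medium' effect (d = 0.5) needs far fewer people than a 'small' one (d = 0.2)
print(n_per_group(0.5), n_per_group(0.2))
```

The key practical lesson survives the approximation: halving the expected effect size roughly quadruples the sample you need, which is exactly why the calculation belongs in the proposal, before recruitment starts.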

Deciding between quantitative and qualitative research

A lot of research in psychology attempts to quantify psychological constructs by giving a number to them – for example, the level of depression or an IQ score. This is known as quantitative research.

Quantitative research normally uses statistics to analyse numerical data. If you need help analysing this type of data, we recommend you consult a statistics book such as Psychology Statistics For Dummies (authored by us and published by Wiley).

Qualitative research is an umbrella term used to signify that the data you collect is in words, not numbers. It focuses on gaining detailed information about people’s experiences, often at the expense of representativeness and internal validity.

You normally collect qualitative data during face-to-face interactions – for example, by conducting a semi-structured interview. You can also collect data using focus groups, existing transcripts, social media or many other novel sources.

The information you obtain through qualitative research is based on the interaction between you (as the researcher) and the participant. Your assumptions and biases can and will affect the data you collect. You must acknowledge this influence and reflect upon the impacts of this in any qualitative study.

Qualitative research uses different sets of guidelines from quantitative research. It often requires smaller sample sizes, employs different sampling techniques and differs in how you interpret and analyse data. We explore qualitative research in detail in Part IV: we share guidelines for conducting qualitative research in Chapter 10, we offer advice on analysing qualitative data in Chapter 11 and we examine some different theoretical approaches and methodologies in Chapter 12.

Choosing a research design

As part of your research proposal, you need to decide how you can address your research questions or hypotheses. The most appropriate research design for your study depends on the nature of these questions and hypotheses. In the following sections, we look at some potential research designs that may be appropriate.

Survey designs and external validity

You use survey designs to collect naturally occurring information. You don’t attempt to control or manipulate any variables (which you do with experimental designs – see the later section, ‘Experimental designs and internal validity’ for more on these). You can use surveys to collect any type of information (for example, intelligence, personality, attitudes, sexual behaviour and so on) – this may be quantitative (through the use of closed questions) or qualitative (using open-ended questions). Researchers can then investigate the relationships between variables that exist in a population – for example, the relationship between intelligence and personality, or the relationship between attitudes to risk and sexual behaviour.

Good survey designs can be a time- and cost-effective way of collecting data from a large representative sample of participants.

Plan your survey design carefully. It’s very easy to have a poor survey design if you don’t plan it properly!

Good survey designs investigate the relationships between naturally occurring variables using large sample sizes. As a result, they tend to have high external validity. External validity refers to the extent that you can generalise from the findings of the study. You find more information on external validity in Chapter 2.

Exploring types of survey designs

You can conduct survey designs in three main ways:

Cross-sectional survey designs:

You collect data from each individual at one occasion or at one time point. It doesn’t matter how long this time point actually lasts (it can last two minutes or take all day) or how many people participate at the time point (it can be one individual or a classroom full of children). Each individual participant only contributes information once.

Longitudinal survey designs:

You collect data from the same participants over multiple time points. You may be interested in how one variable changes over time – for example, you may want to see how self-esteem develops in adolescents by measuring self-esteem in the same group of participants every month over a period of years. Alternatively, you may be interested in how one variable can predict another variable at a later time point – for example, you may want to see if intelligence in children can predict earnings as an adult. To do this, you decide to measure intelligence scores in a group of participants as children and then measure earnings in the same participants when they’re adults.

Successive independent sample designs:

This type of design is really a mix of cross-sectional and longitudinal designs. You use it to examine changes over time when it’s not possible to use a longitudinal design. In this design, you measure a sample of people on one or more variables at one time point (as in cross-sectional designs) and then you measure the same variables at subsequent time points but using a different sample of participants. For example, you may want to know if attitudes to attention deficit hyperactivity disorder (ADHD) are changing over time in entrants to the teaching profession. You can measure attitudes to ADHD in a sample of first-year trainee teachers each year for a period of five years. This approach includes longitudinal elements because you’re measuring the same variable over time, but it also has cross-sectional elements because you have to measure a different cohort of first-year trainee teachers each year.

You can find out more about these types of survey designs in Chapter 4.

Selecting a survey method

Your research question or hypotheses dictate the type of survey design that you need to use. Once you’ve decided on your survey design, you need to decide on your data-collection method – your survey method.

The main methods for collecting survey data are

Postal surveys

Face-to-face surveys

Telephone surveys

Online surveys

You can find out more about these survey methods and the advantages and disadvantages of each approach in Chapter 4.

Experimental designs and internal validity

In experimental designs you manipulate (at least) one variable in some way to see whether it has an effect on another variable. For example, you may manipulate the amount of caffeine that participants consume to see whether this affects their mood. This approach differs from survey designs, where you simply look at the relationship between participants’ natural caffeine consumption levels and their mood (refer to the earlier section, ‘Survey designs and external validity’ for more on survey designs).

By manipulating a variable (and attempting to hold everything else constant) experimental designs can establish cause-and-effect relationships. Experimental studies endeavour to maximise internal validity. Internal validity refers to the extent that you can demonstrate causal relationship(s) between the variables in your study. You find more information on internal validity in Chapter 2.

In experimental designs, the variable that you manipulate or have control over is called the independent variable. The outcome variable that changes due to the manipulation is called the dependent variable. In the preceding example, caffeine is the independent variable and mood is the dependent variable. Figure 1-1 shows the relationship between the variables.

© John Wiley & Sons, Inc.

Figure 1-1: An example of independent and dependent variables.

Two main experimental designs underpin all other types of experiments:

Independent groups design:

Different groups of participants take part in different experimental conditions (or levels). Each participant is tested only once. You make comparisons between different groups of participants, which is why it is also known as a between-groups design. For example, if you want to see the effect of caffeine on mood, you assign participants to three different groups. One group consumes no caffeine, the second group is given 100 milligrams of caffeine and the third group is given 200 milligrams of caffeine. You can then compare mood between these three groups.

Repeated measures design:

The same participants take part in all the experimental conditions (or levels). Each participant is tested multiple times. You’re looking for changes within the same group of people under different conditions, which is why it is also known as a within-groups design. For example, if you want to see the effect of caffeine on mood, participants consume no caffeine one day, 100 milligrams of caffeine another day and 200 milligrams of caffeine at another time. You can then look at the changes in mood when the same people consume different amounts of caffeine.

You can also use more complex experimental designs, such as:

Factorial designs

Mixed between–within designs

Randomised controlled trials (RCTs)

Solomon four group design

Chapters 7 and 8 explain each of these experimental designs and outline their strengths and weaknesses. They also address techniques that you can use to help minimise weaknesses in your experimental design, including counterbalancing, random allocation, blinding, placebos and using matched pairs designs.
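To make two of these techniques concrete, here is a minimal sketch in Python of random allocation (for an independent groups design) and counterbalancing (for a repeated measures design). The participant codes and caffeine doses are invented for illustration and don’t come from any real study:

```python
import random
from itertools import permutations

# Hypothetical caffeine study: three conditions (doses in milligrams).
conditions = [0, 100, 200]
participants = ["P01", "P02", "P03", "P04", "P05", "P06"]

# Random allocation for an independent groups design:
# shuffle the participants, then deal them into equally sized groups.
random.seed(42)  # fixed seed so the allocation is reproducible
shuffled = random.sample(participants, k=len(participants))
groups = {dose: shuffled[i::len(conditions)] for i, dose in enumerate(conditions)}

# Counterbalancing for a repeated measures design:
# cycle through every possible ordering of the conditions so that
# order effects (practice, fatigue) are spread evenly across people.
orders = list(permutations(conditions))
schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

print(groups)
print(schedule)
```

With three conditions there are six possible orders, so six participants cover every order exactly once – a complete counterbalance; with more participants the orders simply repeat in rotation.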

Reporting research

You carry out your study – well done! All that planning must have paid off. But before you start to celebrate, you need to think about disseminating your findings – after all, what’s the point of carrying out your research if you don’t share your findings?

You can disseminate or present your research findings in different formats, but you always include the same main sections:

Introduction:

Your introduction provides an overview of the current area of your research by reviewing the existing research. You then outline your rationale for the study. This flows logically from the literature review because it outlines what you intend to do in your study and how this fits into the literature you’ve reviewed. Finally, you report your research questions or hypotheses.

Method:

Your method section tells a reader exactly what you did, with enough detail to allow someone to replicate your study. A good method section contains the following subheadings:

Design

Participants

Materials

Procedure

Analysis

Results: Your results section describes the main findings from your study. The results that you report need to address the research questions or hypotheses that you state in the introduction.

You only report findings in this section – you don’t attempt to interpret or discuss them in terms of hypotheses or previous literature.

Discussion:

Your discussion, like other sections, has several different parts. First, you need to take each hypothesis in turn, state to what extent your findings support it and compare your findings to the previous literature that you discuss in your introduction. You then need to consider the implications of your findings, analyse the strengths and limitations of the study, and suggest how your work can be built on by recommending ideas for future research studies.

The most common way of disseminating your research findings is in a written report – similar to the kind of report that you read in psychological journals. You can find a detailed guide to writing research reports in Chapter 13. You may also be asked to present your findings in the form of a research poster or an oral presentation. Chapter 14 guides you through the process to help you prepare the perfect poster or presentation.

Reports, posters and presentations share similar information, but they tend to present it in different ways – so you need to be aware of the differences.

Whichever format you present your research in, it must be appropriate and consistent with universal psychological standards. Chapter 15 discusses the American Psychological Association (APA) standards, outlines tips on how to report numbers and, importantly, gives you guidelines for correct referencing procedures. Failure to reference correctly means you can be accused of plagiarism – which is a serious academic offence! Find out what plagiarism is and how to avoid inadvertently committing plagiarism in Chapter 15.

Exploring Research Methods

Research methods are the methods you use to collect data for your research study. You won’t find a ‘right’ or ‘correct’ research method for your study. Each method has its own set of advantages and disadvantages. Some methods are more suitable for investigating specific hypotheses or research questions – and any method can be performed poorly. For example, if you want to find out about the experience of living with bone cancer, an interview may be more suitable than a questionnaire; however, a well-designed and validated questionnaire is far better than a poorly planned and badly executed interview.

The following sections consider some potential data-collection methods that you may consider for your research study.

Questionnaires and psychometric tests

Most of the things psychologists are interested in are hard to measure. If you want to measure someone’s height or weight, however, it’s relatively straightforward. When you can directly measure something, it’s known as an observed variable (or sometimes a manifest variable) – like height or weight.

But what about attitudes, emotional intelligence or memory? You can’t see or weigh these constructs. Variables that you can’t directly and easily measure are known as latent variables.

Psychologists have developed various questionnaires and tests to measure latent variables. If the measure is good, the observed scores that you get from the questionnaire or test reflect the latent variable that you’re trying to assess.

Questionnaires usually consist of a number of items (or questions) that each require a brief response (or answer) from participants. Psychometric tests are similar but they may also include other tasks – for example, completing a puzzle within a set time period.

Often the terms ‘questionnaire’ and ‘test’ are used interchangeably.

The scores you get from your questionnaire are only useful if they accurately assess the latent construct they are designed to measure. If they’re a poor measure, the scores that you get out (and any conclusions you base on these scores) may be worthless. You need to consider carefully the validity and reliability of any questionnaire or test that you use in your research study (read more about reliability and validity in Chapter 2).

Chapter 6 discusses how to select questionnaires for your research study and how to appropriately use the data you obtain.

Sometimes, you can’t find an existing questionnaire that directly addresses the things you want to measure. In these cases, you may decide to design your own tailored questionnaire specifically for your research study. Chapter 6 comes to the rescue again and provides you with guidelines for designing your own measure.

Interviews

You can use interviews to collect quantitative data, but you normally use interviews to collect qualitative data. Interviews typically consist of an interviewer (the researcher) asking questions to an individual participant. The interview style can vary from quite structured (where the interviewer asks closed questions requiring short specific answers from the participant) to very unstructured (where the interview is more like a free-flowing conversation about a topic with no specific questions).

The most common interview style in psychological research is the semi-structured interview. The interviewer prepares a list of open-ended questions (that can’t have a simple ‘yes’ or ‘no’ answer) and a list of themes that he wants to explore; this list is known as the interview schedule. It takes considerable and skilful piloting of the interview schedule, as well as interviewing experience, to allow the participant the flexibility to discuss important issues and also to keep the interview focused on the area of interest.

You need to record (having received permission from the participant) and transcribe your interviews. Transcription is the labour-intensive process of accurately writing up a detailed account of the interview. Interviewers must also reflect on their role in the process to consider how they may have influenced the responses and direction of the interaction.

Students can sometimes think that interviews are an easy way of collecting information, but they require careful planning and preparation. Don’t ask value-laden or judgemental questions. The rapport between the interviewer and the interviewee, the participants’ expectations and the location of an interview can all have an impact on interview outcomes. However, if they’re performed correctly, interviews can result in rich and complex information that is hard to access using any other methodology.

You can find out more information about using interviews as a research method in Chapter 10.

Focus groups

Focus groups consist of a researcher (sometimes two) and a small group of people (usually around three to ten people). The researcher’s role is to lead the group discussion and keep the conversation flowing through the use of an interview schedule (refer to the previous section for more on these). You may be interested in the content of the discussion generated by the group or the behaviours of the participants (which is why involving a second researcher to take notes can be useful).

Focus groups are a different methodology from interviews, and you collect a different type of information. The discussions and behaviours generated in focus groups are due to the interactions of the different group members. They’re very useful when you want to explore the shared experience of a group as opposed to an individual’s experience. The make-up of the group is an important consideration and influences the type of interactions that occur, so you need to decide whether you want to include people with similar or different experiences.

Participants can often feel that focus groups are more natural and informal than one-to-one interviews. They can also generate huge amounts of data (which is both an advantage and a disadvantage). However, they’re not suitable for exploring all topics (sometimes people won’t want to discuss personal or embarrassing issues), and inexperienced researchers can find them hard to control and lead.

You discover more about focus groups in Chapter 10.

Observational methods

Instead of giving people questionnaires or interviewing them you can simply observe how they normally behave. However, human behaviour is varied and complex, so it’s impossible to accurately observe everything – even in a short space of time. To get around this, you record samples of the behaviour of an individual or group. Psychologists use a number of specific techniques when observing behaviour to help make the data more manageable. These include:

Time sampling:

Observing behaviour at specific or random intervals – for example, recording behaviour every 10 minutes during the school day.

Event sampling:

Recording behaviour only when a specific event occurs – for example, when a new child joins the class.

Situation sampling:

Observing behaviour in different situations or locations – for example, playing in the classroom under the supervision of the teacher, or playing unsupervised in the playground.
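To make the idea of time sampling concrete, here is a minimal Python sketch that generates an observation schedule at fixed and at random intervals across a school morning. The times and window are made up for the example:

```python
import random

# Hypothetical observation window: a school morning from 09:00 to 12:00,
# expressed in minutes since 09:00.
window_minutes = 180

# Fixed-interval time sampling: observe every 10 minutes.
fixed_schedule = list(range(0, window_minutes, 10))

# Random-interval time sampling: the same number of observation points,
# but drawn at random so behaviour at predictable moments isn't over-sampled.
random.seed(1)
random_schedule = sorted(random.sample(range(window_minutes), len(fixed_schedule)))

def to_clock(minutes):
    """Convert minutes since 09:00 into an HH:MM clock time."""
    return f"{9 + minutes // 60:02d}:{minutes % 60:02d}"

print([to_clock(m) for m in fixed_schedule])
print([to_clock(m) for m in random_schedule])
```

Either schedule gives you a manageable set of observation points; the choice between fixed and random intervals depends on whether the behaviour you’re studying might itself follow a regular rhythm.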

Observation can be overt when the participants are aware that they’re being observed – for example, by a researcher sitting in a classroom. Conversely, covert observation is when the participants are unaware that their behaviour is being observed – for example, by a researcher sitting behind a one-way mirror.

In addition, you can observe a group when you join them and actively participate in their activities; this is known as participant observation. Alternatively, you can passively observe behaviour or even record it without interfering in the participants’ behaviour; this is known as nonparticipant observation.

Observational methods can have very high external validity (see Chapter 2) because you can capture and record natural behaviours. They’re most useful for describing behaviours rather than explaining behaviours.

Observational methods aren’t suitable for certain research questions (for example, how can you observe intelligence or personality?), and they can also raise ethical questions. Chapter 3 considers the ethical issues you may find when planning your psychological study.

You can read more about observational methods in Chapter 4.

Psychophysical and psychophysiological methods

You use psychophysical methods to explore the relationship between physical stimuli and the subsequent psychological experiences that they cause. The physical stimuli may be noise, brightness, smell or anything else that leads to a sensory response. It’s a method for investigating how humans detect, measure and interpret sensory information.

You may use psychophysical methods to examine thresholds – for example, a high-pitched tone may be increased or decreased in intensity until a participant can just about detect it, determining his absolute threshold for that tone. Alternatively, you may conduct a scaling study that can, for example, aim to create a rating scale for unpleasant odours.
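The increase-and-decrease procedure for finding a threshold can be sketched as a simple up-down staircase. The sketch below simulates a participant rather than testing a real one: the true threshold, noise level, starting intensity and step size are all invented assumptions for the example.

```python
import random

# A minimal sketch of a simple up-down staircase for estimating an
# absolute threshold. The 'participant' is simulated: assume a
# hypothetical true threshold of 20 dB with trial-to-trial noise.
random.seed(0)
TRUE_THRESHOLD = 20.0

def detects(intensity):
    """Simulated response: detection is more likely above threshold."""
    return intensity + random.gauss(0, 2) > TRUE_THRESHOLD

intensity = 40.0   # start clearly audible
step = 2.0
reversals = []     # intensities at which the response direction flipped
last_response = None

while len(reversals) < 8:
    response = detects(intensity)
    if last_response is not None and response != last_response:
        reversals.append(intensity)
    # Detected: make it quieter. Missed: make it louder.
    intensity += -step if response else step
    last_response = response

# Estimate the threshold as the mean of the reversal points.
threshold_estimate = sum(reversals) / len(reversals)
print(round(threshold_estimate, 1))
```

The staircase homes in on the intensity where detection flips between yes and no, which is exactly the ‘just about detect it’ point described above.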

You use psychophysiological methods to explore the relationship between physiological variables and psychological variables. Attempts to create lie detectors (or polygraphs) are a good example of psychophysiological methodology: when people are stressed or aroused (psychological variables), this tends to cause changes in pupil dilation, heart rate and breathing behaviour (physiological variables).

Psychophysiological methods often employ specialised equipment. Examples of common non-invasive techniques include:

Electroencephalography (EEG) to record electrical brain activity

Galvanic skin response (or electro-dermal activity) to measure skin conductivity or resistance

Eye-tracking to observe eye movement and attention

These are sometimes called direct measures because they don’t require participants to think about a response. Confounding variables may have less of an effect on data collected this way compared to other methods. For example, you can directly and accurately measure how quickly participants notice an alcohol-related stimulus (for example, a picture of a bottle of beer) and how long they focus their attention on it (gaze duration), rather than asking them to complete a questionnaire.

Psychophysical and psychophysiological methods tend to be very specific to a particular study and often require the use of dedicated equipment. It’s not possible to generalise about these techniques in an introductory research methods textbook. If you intend to use these methods in your research study, you need the specialised knowledge and support of your supervisor.

Chapter 2

Reliability and Validity

In This Chapter

Understanding internal and external study validity

Being aware of threats to study validity

Introducing test validity

Assessing test reliability via test–retest reliability and internal consistency

Arguably, reliability and validity are the most important concepts when conducting or evaluating any research study. If your study procedure and the tests you use are not reliable and valid, your findings (and any conclusion or recommendations based on them) may not be correct. You need to consider these concepts when designing or evaluating any research to ensure that your conclusions are based on firm foundations. You also need to ensure that you evaluate both the reliability and validity of the study itself and the individual tests (or measures).

This chapter covers study validity (internal and external) and study reliability, as well as test reliability and validity.

Evaluating Study Validity

Study validity simply refers to the extent to which the findings and conclusions of a piece of research are both accurate and trustworthy. Therefore, if a study has high study validity, it accurately addresses its research question and interprets its results appropriately.

When your tutor asks you to critically evaluate a piece of research, they’re asking you to assess the validity of the study. A good place to start assessing validity is to ask some questions about the research, such as:

Does the study have clearly defined and operationalised hypotheses (or research questions)?

Were the sampling, methodology and sample appropriate for the aims of the research?

Was the data analysed and interpreted correctly?

Are there any other alternative explanations for the research findings?

Threats to study validity

Study validity is a very important concept when evaluating existing research or designing your own research project. You need to be aware of the main threats to study validity, which include:

Biased sampling:

Appropriate sampling techniques help to ensure your results are valid (you can read all about sampling techniques in Chapter 5). Imagine that you conduct a longitudinal study to investigate how self-esteem changes throughout adolescence. You measure the self-esteem of school children every three months over several years. Schools may allow you access to only the best students (who they presume have high self-esteem), or perhaps only students with high self-esteem consent to take part. In either case, you have a biased sample of children with high self-esteem, which may affect your conclusions.

History effects:

This indicates the unique impact a confounding variable or change can have on your study. Using the preceding example, the children’s teacher may change in the class where you collect your data. If a new, inspirational and motivational teacher replaces a particularly cynical and disparaging one, you may see an improvement in the children’s self-esteem; however, this unique influence isn’t part of how self-esteem usually develops in children.

Maturation effects:

Changes in your participants between each measurement session as your study progresses are known as maturation effects. Returning to the preceding example, if you look at how self-esteem develops over several years, you expect this variable to change. However, other variables demonstrate maturation effects that may influence your results. For example, the reading ability and concentration levels of children may change, so as they get older they may more fully understand all your study questions and be able to concentrate on the entire questionnaire, which perhaps they weren’t able to do before (alternatively, the questions may no longer be age-appropriate and the children may start to disengage with your questions).

Sample attrition:

This is also called drop-out or, rather morbidly, mortality. It simply reflects the fact that if you’re conducting longitudinal studies, you often lose participants throughout the process. All studies tend to suffer sample attrition, but this becomes a threat to validity when you have differential attrition – that is, certain characteristics mean that some participants are more likely to drop out than others. In the preceding example, participants with low self-esteem or whose self-esteem decreases for a particular reason may be less likely to complete your measures as the study progresses; your results may then reflect a rise in mean self-esteem levels due to sample attrition as opposed to any developmental effect.

Testing effects:

The very fact that you repeatedly measure a construct or variable may change the participants’ responses. Using the preceding example, children may reflect more on the self-esteem questions over multiple sessions, and change their responses. Maybe they become fatigued or bored with responding to the same questions, causing them to disengage with the process, or perhaps they simply remember and repeat answers, even if these no longer reflect how they truly feel.

You can improve the validity of your study in many ways – for example, by:

Employing randomisation and suitable sampling (which can control for biased selection; see Chapter 5 for more details)

Recruiting an appropriate sample size (to ensure that you have the required statistical power and to anticipate attrition; see Chapter 17 for more information)

Using blind or double-blind testing procedures (so you can minimise demand characteristics and experimenter bias if the participants, or both the experimenter and the participants, are unaware of the critical aspects and aims of the study; see Chapter 7 to investigate this further)

Adhering to strict standardised procedures

The exact methods you employ are dependent on the research design and your research questions.

Internal and external validity

Study validity refers to both how the study has accurately addressed its own internal research question, and how it has interpreted the results externally (beyond the participants in the study). Study validity is often considered in terms of internal and external validity – which is how we’re going to consider them now!

Internal validity

Internal validity refers to the extent that you can demonstrate causal relationships between the variables in your study. In other words, can you be confident that the effects that you find are due to the variables you manipulate in your study, or can these effects be due to something else, such as confounding variables?

As an example, suppose you decide to see if taking cod liver oil can improve mathematical ability in young children. You recruit a classroom of students, where you measure their initial maths ability, and then ask them to take cod liver oil for 90 days. You then return to re-measure their maths ability. How can you be sure that any changes in the children’s maths ability are due to consuming cod liver oil? Maybe they’re a result of the normal progression in maths ability expected from the three months of school work they’ve been completing alongside your study? If you can’t be confident that it was only the cod liver oil (and nothing else) that had an effect on mathematical ability, your study has poor internal validity.

You can often improve the internal validity of a study by including a control group (see Chapter 7 for more information on control groups in experimental designs). In the preceding example, this enables you to see if maths ability increases in just the intervention group (children who took cod liver oil) or if it also increases in the control group (children who took a placebo instead of cod liver oil).

External validity

External validity refers to the extent that you can generalise from the findings of your study. You can generalise a study with high external validity to the wider population. A study with low external validity may be of less interest to the psychological community if you can’t generalise the results beyond the study participants in your specific study setting.

One way of testing if your study has high external validity is to check if the results can be replicated across different groups of people or different settings. You can break external validity down into two types: population validity and ecological validity.

Population validity:

A study has high population validity if you can generalise the findings from the participants to the wider population of interest. For example, you may be interested in attitudes to dissociative identity disorder, and recruit a large number of psychology postgraduates to complete your study. It’s unlikely that you can generalise these findings to the general public, as psychology students may have more interest in, knowledge of and experience with dissociative identity disorder than the general population: all factors that may influence someone’s attitudes towards this condition.

Ecological validity:

A study has high ecological validity if you can generalise the results from the setting of the study to everyday life. For example, can a study observing social interaction in schoolchildren that takes place in a lab with a one-way mirror be generalised to the school playground? Ecological validity doesn’t necessarily mean the research study needs to be as realistic as the everyday scenario; just because a laboratory setting may be unrealistic or more simplistic, it doesn’t mean that all associated findings lack validity. If similar results can be replicated across different settings, the results demonstrate ecological validity.

Although internal and external validity both signify study validity, maximising both in the same study can be difficult. You can sometimes increase internal validity by using a tightly controlled experimental design, where you carefully manipulate individual variables in a controlled fashion within a very specific population, but this may, in turn, decrease external validity.

Taking a Look at Study Reliability

Reliability is necessary to ensure that your study is valid and robust. Would you consider taking a pain relief tablet that had unreliable effects? For example, it may relieve your headache most of the time, but perhaps it occasionally also makes it worse, or even makes all your hair fall out!

Study reliability refers to the extent that findings are replicable. If you replicate a research study several times but fail to find the same (or very similar) effects, the original study may lack reliability – the original findings may be a fluke occurrence. To avoid this, you ensure that your study methods are documented in enough detail to allow replication.

Common sources of unreliability in studies include ambiguous measuring or scoring of items, and inconsistent procedures when conducting research. For example, you may find very different results if you look at the relationship between self-esteem and body weight, depending on whether body weight is a self-reported estimate or objectively measured by a researcher.

Focusing on the Reliability and Validity of Tests

In this section, we focus on the reliability and validity of the tests, measures or questionnaires that you use to collect data or information in a study. (We use ‘test’ to refer to any psychological measure, such as attitude and personality questionnaires, or cognitive and diagnostic tests.)

You can easily define test reliability and validity:

A test is reliable if it’s consistent; that is, it’s self-consistent (all parts of the test measure the same thing) and provides similar scores from one occasion to another. For example, extraversion is a fairly stable trait, so any test should give very similar scores if you administer the test to the same people at different times; additionally, all items on the test should measure extraversion (this is explained in more detail in the ‘Types of test reliability’ section in this chapter).

A test is valid if it measures what it claims to measure. Therefore, a measure of extraversion needs to measure extraversion (and not mood, social desirability or reading ability).

A test is reliable if it is consistent (across time and with itself). A test is valid if it measures what it claims to measure.

Reliability is a prerequisite for test validity. In other words, a valid test is a reliable test; however, just because a test is reliable, it doesn’t mean it’s valid.
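As a small numerical sketch of the ‘similar scores from one occasion to another’ idea, test–retest reliability is commonly quantified as the Pearson correlation between scores from the two occasions. The extraversion scores below are made up for the example:

```python
# Hypothetical extraversion scores for six people, measured twice,
# a month apart. Test-retest reliability is the Pearson correlation
# between the two occasions: close to 1 means a stable, reliable test.
time1 = [12, 25, 31, 18, 27, 22]
time2 = [14, 24, 30, 17, 29, 21]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

print(round(pearson_r(time1, time2), 2))  # prints 0.97
```

A correlation this high suggests the rank order of people’s scores is stable over time; a test–retest correlation near zero would suggest the scores are largely noise.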