Untangle statistics and make correct, dependable conclusions
Psychology Statistics For Dummies, 2nd Edition makes statistics accessible to psychology students, covering all the content in a typical undergraduate psychology statistics class. Built on a foundation of jargon-free explanations and real-life examples, this book focuses on information and techniques that psychology students need to know (and nothing more). You'll learn to use the popular SPSS statistics software to calculate statistics and look for patterns in psychological data. And, this helpful guide offers a brief introduction to using the R programming language for statistical analysis—an increasingly important skill for the digital age. You'll also find hands-on practice exercises and examples using recent, real datasets.
This guide is perfect to use as a readable supplement to psychology textbooks and overall coursework. Students in other social and behavioral sciences can also benefit from this stellar primer on statistics.
Page count: 462
Publication year: 2025
Cover
Table of Contents
Title Page
Copyright
Introduction
About This Book
Foolish Assumptions
Icons Used in This Book
Beyond the Book
Where to Go from Here
Part 1: Describing Data
Chapter 1: Statistics? I Thought This Was Psychology!
Knowing Your Variables
What Is SPSS?
Descriptive Statistics
Inferential Statistics
Research Designs
Getting Started
Chapter 2: Dealing with Different Types of Data
Understanding Discrete and Continuous Variables
Looking at Levels of Measurement
Determining the Role of Variables
Chapter 3: Inputting Data and Labels in SPSS
Working in the Variable View Window
Entering Data in the Data View Window
Viewing the Output Window
Chapter 4: Measures of Central Tendency
Defining Central Tendency
The Mode
The Median
The Mean
Choosing between the Mode, Median, and Mean
Chapter 5: Measures of Dispersion
Defining Dispersion
The Range
The Interquartile Range
The Standard Deviation
Choosing between the Range, Interquartile Range, and Standard Deviation
Chapter 6: Generating Graphs and Charts
The Histogram
The Bar Chart
The Pie Chart
Part 2: Understanding Statistical Significance
Chapter 7: Understanding Probability and Inference
Examining Statistical Inference
Making Sense of Probability
Chapter 8: Testing Hypotheses
Understanding Null and Alternative Hypotheses
Understanding Statistical Inference Errors
Looking at One- and Two-Tailed Hypotheses
Chapter 9: What’s Normal about the Normal Distribution?
Understanding the Normal Distribution
Determining Skewness
Looking at the Normal Distribution and Inferential Statistics
Chapter 10: Standardized Scores
Knowing the Basics of Standardized Scores
Using Z-Scores in Statistical Analyses
Chapter 11: Effect Sizes and Power
Distinguishing between Effect Size and Statistical Significance
Exploring Effect Size for Correlations
Comparing Differences between Two Sets of Scores
Comparing Differences between More Than Two Sets of Scores
Understanding Statistical Power
Part 3: Analyzing Relationships between Variables
Chapter 12: Correlations
Assessing Relationships by Using Scatterplots
Understanding the Correlation Coefficient
Examining Shared Variance
Using the Pearson Correlation
Using the Spearman Correlation
Using the Kendall Correlation
Using Partial Correlation
Chapter 13: Linear Regression
Getting to Grips with the Basics of Regression
Using Simple Regression
Working with Multiple Variables: Multiple Regression
Checking Assumptions of Regression
Chapter 14: Associations between Discrete Variables
Summarizing Results in a Contingency Table
Calculating Chi-Square
Measuring the Strength of Association between Two Variables
Part 4: Analyzing Independent Groups Research Designs
Chapter 15: Independent t-Tests and Mann–Whitney Tests
Understanding Independent Groups Design
Using the Independent t-Test
Using the Mann–Whitney Test
Chapter 16: Between-Groups ANOVA
One-Way Between-Groups ANOVA
Two-Way Between-Groups ANOVA
Kruskal–Wallis Test
Mixed ANOVA
Chapter 17: Post-Hoc Tests and Planned Comparisons for Independent Groups Designs
Post-Hoc Tests for Independent Groups Designs
Planned Comparisons for Independent Groups Designs
Part 5: Analyzing Repeated Measures Research Designs
Chapter 18: Paired t-Tests and Wilcoxon Tests
Understanding Repeated Measures Design
Paired t-Test
The Wilcoxon Test
Chapter 19: Within-Groups ANOVA
One-Way Within-Groups ANOVA
Two-Way Within-Groups ANOVA
The Friedman Test
Chapter 20: Post-Hoc Tests and Planned Comparisons for Repeated Measures Designs
Understanding Post-Hoc Tests and Planned Comparisons
Post-Hoc Tests for Repeated Measures Designs
Planned Comparisons for Within-Groups Designs
Examining Differences between Conditions: The Bonferroni Correction
Part 6: The Part of Tens
Chapter 21: Ten Tips for Inferential Testing
Statistical Significance Is Not the Same as Practical Significance
Fail to Prepare, Prepare to Fail
Don’t Fish for a Significant Result
Check Your Assumptions
My p Is Not Bigger Than Your p
Differences and Relationships Are Not Opposing Trends
Find Missing Post-hoc Tests
Don’t Categorize Continuous Data
Be Consistent
Get Help!
Chapter 22: Ten Tips for Writing a Results Section
Report the Exact p-Value
Report Numbers and Symbols Correctly
Remember the Descriptive Statistics
Don't Overuse the Mean
Report Effect Sizes and Directionality
Acknowledge Missing Participants
Be Careful with Your Language
Beware Correlations and Causality
Answer Your Own Question
Add Some Structure
Index
About the Authors
Connect with Dummies
End User License Agreement
Chapter 4
TABLE 4-1 Ordered Depression Scores
Chapter 5
TABLE 5-1 Ordered Depression Scores
TABLE 5-2 Depression Scores and Their Deviation from the Mean
TABLE 5-3 Depression Scores and Their Squared Deviations from the Mean
Chapter 6
TABLE 6-1 Students’ Level of Interest in Anti-Drunk-Driving Messages
Chapter 7
TABLE 7-1 Reaction Time and Age of 100 Babies
Chapter 8
TABLE 8-1 Drawing Conclusions from Inferential Statistical Tests
Chapter 9
TABLE 9-1 Longevity as Estimated by People Who Smoke
Chapter 11
TABLE 11-1 Positive Mood Scores with and without Chocolate
Chapter 13
TABLE 13-1 Regression Model with Exam Score as the Criterion Variable
Chapter 14
TABLE 14-1 Contingency Table for Profession and Accurate Recall of Telephone Num...
TABLE 14-2 Contingency Table Percentaged on the Row Totals*
TABLE 14-3 Contingency Table Percentaged on the Column Totals*
TABLE 14-4 Contingency Table with Odds Based on the Columns
Chapter 16
TABLE 16-1 Two-Way ANOVA Results
Chapter 19
TABLE 19-1 Noting the Order in Which the Variables Were Named
TABLE 19-2 Reporting a Two-Way ANOVA Result
Chapter 20
TABLE 20-1 Interpretation of the Pairwise Comparison Table
Chapter 3
FIGURE 3-1: Variable view in SPSS.
FIGURE 3-2: Inserting variable names in SPSS.
FIGURE 3-3: Selecting the variable type.
FIGURE 3-4: Variable labels in SPSS.
FIGURE 3-5: The Value Labels dialog box.
FIGURE 3-6: Adding value labels in SPSS.
FIGURE 3-7: Specifying missing values in SPSS.
FIGURE 3-8: Choosing a variable’s level of measurement.
FIGURE 3-9: The data view window in SPSS.
FIGURE 3-10: The structure of data in SPSS.
FIGURE 3-11: Inserting a variable in SPSS.
FIGURE 3-12: The Chart Editor window for an output file.
FIGURE 3-13: Changing the color or pattern of a chart.
Chapter 4
FIGURE 4-1: Choosing the Frequencies command to generate a measure of central t...
FIGURE 4-2: Selecting a variable to generate descriptive statistics.
FIGURE 4-3: Choosing the Mode option.
FIGURE 4-4: Displaying the mode in SPSS.
FIGURE 4-5: Finding the median in a set of ordered scores.
FIGURE 4-6: Finding the median for separate groups.
FIGURE 4-7: Displaying the median in SPSS.
FIGURE 4-8: Displaying the mean in SPSS.
Chapter 5
FIGURE 5-1: Choosing the range.
FIGURE 5-2: The range.
FIGURE 5-3: Finding the upper and lower quartiles in a set of ordered scores.
FIGURE 5-4: Finding the upper and lower quartiles for separate groups.
FIGURE 5-5: Obtaining the quartiles.
FIGURE 5-6: The quartiles.
FIGURE 5-7: Obtaining the standard deviation.
FIGURE 5-8: The standard deviation.
Chapter 6
FIGURE 6-1: Histograms showing the possible range of scores (top chart) and the...
FIGURE 6-2: Histogram with bars representing single scores rather than a range ...
FIGURE 6-3: Histogram with amended vertical axis.
FIGURE 6-4: Choosing the frequencies command to generate charts.
FIGURE 6-5: Choosing a variable to generate a chart.
FIGURE 6-6: Choosing the histogram.
FIGURE 6-7: A histogram in SPSS.
FIGURE 6-8: Bar chart displaying the frequency of transport categories.
FIGURE 6-9: A bar chart in SPSS.
FIGURE 6-10: A pie chart in SPSS.
Chapter 9
FIGURE 9-1: The normal distribution.
FIGURE 9-2: Choosing the Kolmogorov-Smirnov test in SPSS.
FIGURE 9-3: Choosing the appropriate objective.
FIGURE 9-4: Choosing a variable for the Kolmogorov-Smirnov test.
FIGURE 9-5: Choosing the Kolmogorov-Smirnov test.
FIGURE 9-6: The Kolmogorov-Smirnov test, as displayed by SPSS.
FIGURE 9-7: Skewness due to outliers (left) and inherent skewness (right).
FIGURE 9-8: Moderate and severe skewness as seen in a histogram.
FIGURE 9-9: The Frequencies command in SPSS.
FIGURE 9-10: Choosing a variable for which to calculate the skewness statistic.
FIGURE 9-11: Obtaining the skewness statistic in SPSS.
FIGURE 9-12: The skewness statistic as displayed by SPSS.
FIGURE 9-13: Probability under a normal distribution.
Chapter 10
FIGURE 10-1: Choosing the Descriptives option.
FIGURE 10-2: Standardizing variables.
FIGURE 10-3: A standardized variable in SPSS.
FIGURE 10-4: The standard normal distribution.
FIGURE 10-5: Probability under two t-score distributions.
Chapter 11
FIGURE 11-1: Comparing means for independent groups.
FIGURE 11-2: Choosing the variables for the calculation of eta squared.
FIGURE 11-3: Selecting the effect size for more than two independent groups.
FIGURE 11-4: Eta squared for independent groups in SPSS.
FIGURE 11-5: Selecting a repeated measures analysis.
FIGURE 11-6: Defining the number of repeated measurements to be analyzed.
FIGURE 11-7: Selecting the repeated measures variables for analysis.
FIGURE 11-8: Selecting the effect size in SPSS for more than two repeated measu...
FIGURE 11-9: Eta squared for repeated measurements as displayed by SPSS.
Chapter 12
FIGURE 12-1: A scatterplot demonstrating a perfect linear relationship.
FIGURE 12-2: A scatterplot demonstrating a strong positive linear relationship.
FIGURE 12-3: A scatterplot demonstrating a strong negative linear relationship.
FIGURE 12-4: A strong positive relationship in a large data set.
FIGURE 12-5: A plot illustrating a strong positive relationship when one variab...
FIGURE 12-6: Choosing the scatterplot option.
FIGURE 12-7: Choosing a simple scatterplot.
FIGURE 12-8: Specifying the variables for a scatterplot.
FIGURE 12-9: Always check your scatterplot for outliers!
FIGURE 12-10: A scatterplot of class test scores and revision hours.
FIGURE 12-11: Obtaining a bivariate correlation.
FIGURE 12-12: Specifying the Pearson correlation.
FIGURE 12-13: The Pearson correlation table produced by SPSS.
FIGURE 12-14: A scatterplot of therapy success scores against intention to cont...
FIGURE 12-15: Specifying the Spearman correlation.
FIGURE 12-16: The Spearman correlation table produced by SPSS.
Chapter 13
FIGURE 13-1: A scatterplot of revision hours and exam score.
FIGURE 13-2: A scatterplot of revision hours and exam score with regression lin...
FIGURE 13-3: An illustration of residuals.
FIGURE 13-4: Obtaining a linear regression.
FIGURE 13-5: Specifying a simple linear regression.
FIGURE 13-6: Variables Entered/Removed table.
FIGURE 13-7: Model summary table.
FIGURE 13-8: ANOVA table.
FIGURE 13-9: Coefficients table.
FIGURE 13-10: Multiple regression model.
FIGURE 13-11: Obtaining a linear regression in SPSS.
FIGURE 13-12: Specifying a simple linear regression.
FIGURE 13-13: Variables Entered/Removed table.
FIGURE 13-14: Model summary table.
FIGURE 13-15: ANOVA table.
FIGURE 13-16: Coefficients table.
FIGURE 13-17: Obtaining a histogram of the residuals.
FIGURE 13-18: Interpreting the histogram of the residuals.
FIGURE 13-19: Obtaining partial plots.
FIGURE 13-20: A partial plot illustrating a linear relationship.
FIGURE 13-21: A partial plot when the predictor variable has only two levels.
FIGURE 13-22: Examples of outliers by distance and by influence.
FIGURE 13-23: Obtaining the Casewise Diagnostics table.
FIGURE 13-24: Casewise Diagnostics table.
FIGURE 13-25: Obtaining Cook’s distances and leverage values.
FIGURE 13-26: Residuals Statistics table.
FIGURE 13-27: Checking Cook’s distance and the leverage value in your data file...
FIGURE 13-28: Obtaining collinearity figures.
FIGURE 13-29: Interpreting collinearity figures from the coefficients table.
FIGURE 13-30: Obtaining a plot to assess homoscedasticity.
FIGURE 13-31: Example of an acceptable plot.
FIGURE 13-32: Example of an unacceptable plot.
Chapter 14
FIGURE 14-1: Choosing the Crosstabs procedure.
FIGURE 14-2: Selecting the variables to be presented in a contingency table.
FIGURE 14-3: Choosing ways of percentaging the contingency table.
FIGURE 14-4: A contingency table produced by SPSS, with percentages based on ro...
FIGURE 14-5: Obtaining chi-square for a contingency table.
FIGURE 14-6: Chi-square results as presented by SPSS.
FIGURE 14-7: Phi coefficient and Cramer’s V results as presented by SPSS.
FIGURE 14-8: Odds ratio as presented by SPSS.
Chapter 15
FIGURE 15-1: Obtaining an independent t-test.
FIGURE 15-2: Specifying the variables for an independent t-test.
FIGURE 15-3: Defining groups.
FIGURE 15-4: Group Statistics table.
FIGURE 15-5: Independent Samples Test table.
FIGURE 15-6: Independent Samples Effect Sizes table.
FIGURE 15-7: Scatterplot illustrating two groups with differing variances.
FIGURE 15-8: Differing variances mean the t-test cannot be meaningfully inter...
FIGURE 15-9: Selecting a Mann–Whitney test.
FIGURE 15-10: Specifying the variables for a Mann–Whitney test.
FIGURE 15-11: Defining the groups for a Mann–Whitney test.
FIGURE 15-12: Ranks table for the Mann-Whitney test.
FIGURE 15-13: Test Statistics table for the Mann-Whitney test.
Chapter 16
FIGURE 16-1: Choosing a one-way between-groups ANOVA.
FIGURE 16-2: Selecting variables for a one-way ANOVA.
FIGURE 16-3: Obtaining descriptive statistics and a homogeneity of variance tes...
FIGURE 16-4: Obtaining residuals for a between-groups ANOVA.
FIGURE 16-5: One-way ANOVA output.
FIGURE 16-6: Choosing two independent variables for a two-way between-groups AN...
FIGURE 16-7: Obtaining an interaction plot for a two-way between-groups ANOVA.
FIGURE 16-8: Descriptive statistics and Levene’s test output from a two-way bet...
FIGURE 16-9: ANOVA output from the two-way ANOVA procedure.
FIGURE 16-10: An interaction plot for a two-way between-groups ANOVA.
FIGURE 16-11: Choosing a Kruskal–Wallis test.
FIGURE 16-12: Selecting variables for a Kruskal–Wallis test.
FIGURE 16-13: Defining the range of the independent variable as part of the Kru...
FIGURE 16-14: Kruskal–Wallis test output.
Chapter 17
FIGURE 17-1: Choosing the between-groups ANOVA procedure.
FIGURE 17-2: Selecting variables for a between-groups ANOVA.
FIGURE 17-3: Selecting the Tukey HSD post-hoc test.
FIGURE 17-4: Obtaining descriptive statistics as part of a between-groups ANOVA...
FIGURE 17-5: ANOVA output from the between-groups ANOVA procedure.
FIGURE 17-6: Tukey post-hoc test.
FIGURE 17-7: Choosing the Dunnett test.
FIGURE 17-8: ANOVA output from the between-groups ANOVA procedure.
FIGURE 17-9: Dunnett test results.
Chapter 18
FIGURE 18-1: Obtaining a paired t-test.
FIGURE 18-2: Specifying the variables for a paired t-test.
FIGURE 18-3: Paired Samples Statistics table.
FIGURE 18-4: Paired Samples Correlations table.
FIGURE 18-5: Paired Samples Test table.
FIGURE 18-6: Paired Samples Effect Sizes table.
FIGURE 18-7: Computing a new variable.
FIGURE 18-8: Specifying a new variable using the Compute function.
FIGURE 18-9: Obtaining the Wilcoxon test.
FIGURE 18-10: Specifying the variables for a Wilcoxon test.
FIGURE 18-11: Wilcoxon Ranks table.
FIGURE 18-12: Wilcoxon Test Statistics table.
Chapter 19
FIGURE 19-1: Obtaining a within-groups ANOVA.
FIGURE 19-2: Specifying the number of levels in your independent variable.
FIGURE 19-3: Specifying the within-groups ANOVA.
FIGURE 19-4: Obtaining Descriptive Statistics for the within-groups ANOVA.
FIGURE 19-5: Within-Subjects Factors table.
FIGURE 19-6: Descriptive Statistics table.
FIGURE 19-7: Multivariate Tests table.
FIGURE 19-8: Mauchly’s Test of Sphericity table.
FIGURE 19-9: Tests of Within-Subjects Effects table.
FIGURE 19-10: Tests of Within-Subjects Contrasts table.
FIGURE 19-11: Tests of Between-Subjects Effects table.
FIGURE 19-12: Specifying the number of levels in each of the independent variab...
FIGURE 19-13: Specifying the within-groups ANOVA.
FIGURE 19-14: Obtaining plots for the within-groups ANOVA.
FIGURE 19-15: Within-Subjects Factors table.
FIGURE 19-16: Descriptive Statistics table.
FIGURE 19-17: Multivariate Tests table.
FIGURE 19-18: Mauchly’s Test of Sphericity table.
FIGURE 19-19: Tests of Within-Subjects Effects table.
FIGURE 19-20: Tests of Within-Subjects Contrast table.
FIGURE 19-21: Tests of Between-Subjects Effects table.
FIGURE 19-22: Interaction plot.
FIGURE 19-23: Choosing a Friedman test in SPSS.
FIGURE 19-24: Selecting variables for a Friedman test.
FIGURE 19-25: Mean Ranks table.
FIGURE 19-26: Test Statistics table.
Chapter 20
FIGURE 20-1: Defining your independent variable.
FIGURE 20-2: Selecting variables for a within-groups ANOVA.
FIGURE 20-3: Specifying a post-hoc test for a within-groups ANOVA.
FIGURE 20-4: The Pairwise Comparisons table: post-hoc tests.
FIGURE 20-5: The Within-Subjects Factors table.
FIGURE 20-6: Selecting variables for a within-groups ANOVA.
FIGURE 20-7: Specifying the type of contrast for the within-groups ANOVA.
FIGURE 20-8: Selecting the Reference Category for Within-Groups ANOVA.
FIGURE 20-9: Tests of Within-Subjects Contrasts table for planned comparisons.
Psychology Statistics For Dummies®, 2nd Edition
Published by: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, www.wiley.com
Copyright © 2026 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial technologies or similar technologies.
Media and software compilation copyright © 2026 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial technologies or similar technologies.
Published simultaneously in Canada
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
The manufacturer’s authorized representative according to the EU General Product Safety Regulation is Wiley-VCH GmbH, Boschstr. 12, 69469 Weinheim, Germany, e-mail: [email protected].
Trademarks: Wiley, For Dummies, the Dummies Man logo, Dummies.com, Making Everything Easier, and related trade dress are trademarks or registered trademarks of John Wiley & Sons, Inc. and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: THE PUBLISHER AND THE AUTHOR MAKE NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS WORK AND SPECIFICALLY DISCLAIM ALL WARRANTIES, INCLUDING WITHOUT LIMITATION WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE. NO WARRANTY MAY BE CREATED OR EXTENDED BY SALES OR PROMOTIONAL MATERIALS. THE ADVICE AND STRATEGIES CONTAINED HEREIN MAY NOT BE SUITABLE FOR EVERY SITUATION. THIS WORK IS SOLD WITH THE UNDERSTANDING THAT THE PUBLISHER IS NOT ENGAGED IN RENDERING LEGAL, ACCOUNTING, OR OTHER PROFESSIONAL SERVICES. IF PROFESSIONAL ASSISTANCE IS REQUIRED, THE SERVICES OF A COMPETENT PROFESSIONAL PERSON SHOULD BE SOUGHT. NEITHER THE PUBLISHER NOR THE AUTHOR SHALL BE LIABLE FOR DAMAGES ARISING HEREFROM. THE FACT THAT AN ORGANIZATION OR WEBSITE IS REFERRED TO IN THIS WORK AS A CITATION AND/OR A POTENTIAL SOURCE OF FURTHER INFORMATION DOES NOT MEAN THAT THE AUTHOR OR THE PUBLISHER ENDORSES THE INFORMATION THE ORGANIZATION OR WEBSITE MAY PROVIDE OR RECOMMENDATIONS IT MAY MAKE. FURTHER, READERS SHOULD BE AWARE THAT INTERNET WEBSITES LISTED IN THIS WORK MAY HAVE CHANGED OR DISAPPEARED BETWEEN WHEN THIS WORK WAS WRITTEN AND WHEN IT IS READ.
For general information on our other products and services, please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993, or fax 317-572-4002. For technical support, please visit https://hub.wiley.com/community/support/dummies.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Control Number: 2025946114
ISBN 978-1-394-29531-9 (pbk); ISBN 978-1-394-29532-6 (ebk); ISBN 978-1-394-29533-3 (ebk)
We collected data from psychology students across 31 universities regarding their attitudes towards statistics; 51 percent of the students did not realize statistics would be a substantial component of their course and the majority had negative attitudes or anxiety towards the subject. If this sounds familiar, take comfort in the fact that you're not alone!
You might ask why, when you've enrolled in psychology, you're being forced to study statistics. Psychology is an empirical discipline, which means we use evidence to decide between competing theories, interventions, and approaches. Collecting quantitative information allows us to represent this evidence in an objective and easily comparable format. This information must be summarized and analyzed (after all, pages of raw numbers aren’t that meaningful) so we can draw conclusions and make decisions.
Understanding statistics allows you not only to conduct and analyze your own research but also to read and critically evaluate previous research. Statistics are also important in psychology because psychologists use their statistical knowledge in their day-to-day work. Consider a psychologist working with clients exhibiting depression, anxiety, and self-harm. They must decide which therapy would be most useful, whether anxiety is related to (or can predict) self-harm, and whether clients who self-harm differ in their levels of depression. Statistical knowledge is a crucial tool in any psychologist’s job.
The statistics component you have to complete for a psychology degree is not impossible and shouldn’t be grueling. If you can cope with cognitive psychology theories and understand psychobiological models, you should have no difficulty. The computer will run the complex number crunching for you.
We provide an easily accessible guide, written in plain English, that will allow you to readily understand, carry out, interpret, and report all types of statistical procedures required for your course. While we've targeted this book at psychology undergraduate students, we think it will be useful to all social science and health science students.
The book starts with basic concepts and progresses to more complex techniques. Note, though, that you aren't expected to read the book from cover to cover. Instead, each chapter (and each statistical technique) is self-contained and does not necessarily require previous knowledge. For example, if you were to look up the independent t-test, you would find a clear, jargon-free explanation of the technique followed by an example with step-by-step instructions demonstrating how to perform the technique in SPSS, how to interpret the output, and how to report the results appropriately. Each statistical procedure in the book follows the same framework, enabling you to quickly find the technique of interest, run the required analysis, and write up the results.
As we know — both from research we have conducted and our own experience of teaching — statistics is often a psychology student’s least favorite subject and causes anxiety in the majority of psychology students. We therefore deliberately steer clear of complex mathematical formulas as well as superfluous and rarely used techniques. Instead, we concentrate on producing a clear and concise guide illustrated with practical examples. We don’t assume any previous knowledge of statistics, and in return we ask that you relinquish any negative attitudes you may have!
Rightly or wrongly, we have made some assumptions when writing this book. We assume that
You have SPSS installed and are familiar with using a computer. We do not assume that you've used SPSS before; Chapter 3 gives an introduction to this program, and we provide step-by-step instructions for each procedure.
You are not a mathematical genius but do have a basic understanding of numbers. If you know what we mean by squaring a number (multiplying a number by itself; squaring 5 gives 25) or taking a square root — the opposite of squaring (the square root of a number is the value that, when squared, gives the original number; the square root of 25 is 5) — you will be fine. Remember that the computer will be doing the calculations for you.
You do not need to conduct complex multivariate statistics. This is an introductory book, and we limit our discussion to the types of analyses commonly required in undergraduate syllabuses.
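That squaring and square-root idea really is the extent of the arithmetic involved. As a quick illustrative sketch — shown here in Python purely for demonstration, since the book itself relies on SPSS to do the sums:

```python
import math

# Squaring a number: multiplying it by itself
print(5 ** 2)         # squaring 5 gives 25

# Taking a square root: finding the value that, when squared,
# gives back the original number
print(math.sqrt(25))  # the square root of 25 is 5.0
```

The same two operations underlie the standard deviation later in the book: deviations are squared, averaged, and then square-rooted.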
As with all Dummies books, icons in the margin signify that there's something special about a piece of information.
The tip icon points out a helpful hint designed to save you time or spare you from thinking harder than you have to.
This one is important and indicates a piece of information that you should bear in mind even after you close the book.
The warning icon highlights a common misunderstanding or error that we don’t want you to make.
This icon contains a more detailed discussion or explanation of a topic. You can skip this material if you are in a rush.
This book has been designed to cover the majority of the topics and statistics you will encounter in your undergraduate courses. On the book's companion website, you can access additional complementary material that we hope you will use and find helpful. Go to www.dummies.com/go/psychstatsfd2e and click the Downloads link.
We included all the SPSS data files used in the book. This means you could, for example, download the Between-Groups ANOVA SPSS file and follow along with the instructions in the book to ensure that you can run the analysis confidently.
For chapters that outline descriptive or inferential tests (for example, measures of central tendency or t-tests), we provide data sets so you can practice running the tests and reporting the results. For chapters that don’t focus on a specific test, we provide multiple-choice questions so you can check your understanding of the material.
Some chapters have advanced content that we simply couldn’t fit in the book! For example, after reading about regression, you may want to learn about fancier techniques, so we’ve provided text on how to dummy-code categorical variables and conduct stepwise or hierarchical regression. This additional content is signposted in the relevant chapters in the book. The website also contains a bonus chapter on how to conduct and report a mixed ANOVA.
Finally, we have created a cheat sheet to help you with the often complicated language that students must use if they want to successfully conquer their statistics course! To access the cheat sheet, go to www.dummies.com and type Psychology Statistics For Dummies Cheat Sheet in the Search box.
We designed the book so you can easily find the topics you are interested in and get the information you want without having to read pages of mathematical formulas or descriptions of every option in SPSS. If you're new to this area, we suggest that you start with Chapter 1. Need some help navigating SPSS for the first time? Turn to Chapter 3. If you're not quite sure what a p-value or an effect size is, see Part 2. For other information, use the table of contents or index to guide you to the right place.
Remember, you can’t make the computer (or your head) explode, so with book in hand, it’s time to start analyzing that data!
Part 1
IN THIS PART …
Understand the role of statistics in psychology.
Become familiar with the terminology and types of variables in quantitative analysis.
Learn how to input and label data in SPSS.
Calculate and appropriately use the following measures of central tendency: mode, median, and mean.
Calculate and appropriately use the following measures of dispersion: range, interquartile range, and standard deviation.
Generate histograms, bar charts, and pie charts to illustrate your data.
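As a preview of the summary measures listed above, here is a small illustrative sketch in Python (the book itself performs these calculations in SPSS; the scores are invented for demonstration):

```python
import statistics

# Hypothetical depression scores (made-up data for illustration)
scores = [2, 4, 4, 5, 7, 9, 11]

# Measures of central tendency
print(statistics.mode(scores))    # mode: most frequent score -> 4
print(statistics.median(scores))  # median: middle ordered score -> 5
print(statistics.mean(scores))    # mean: arithmetic average -> 6

# Measures of dispersion
print(max(scores) - min(scores))  # range: highest minus lowest -> 9
print(statistics.stdev(scores))   # sample standard deviation
```

Each of these measures gets its own chapter in Part 1, including when each is (and isn't) the appropriate summary to report.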
Chapter 1
IN THIS CHAPTER
Understanding variables
Introducing SPSS
Outlining descriptive and inferential statistics
Differentiating between parametric and non-parametric statistics
Explaining research designs
When we tell our initially fresh-faced and enthusiastic first-year students that statistics is a substantial component of their course, approximately half of them are genuinely shocked. “We came to study psychology, not statistics,” they shout. Presumably they thought they would be spending the next three or four years ordering troubled individuals to “lie down on the couch and tell me about your mother.” We tell them there is no point running for the exits because statistics is part of all undergraduate psychology courses and, if they plan to undertake post-graduate studies or work in this area, they'll be using these techniques for a long time to come. (Besides, we were expecting this reaction and have locked the exits.)
Then we hear the cry, “But I’m not a mathematician. I'm interested in people and behavior.” We don’t expect students to be mathematicians. If you have a quick scan through this book, you won’t be confronted with pages of scary looking equations. Software packages such as SPSS do all the complex calculations for us.
We tell them that psychology is a scientific discipline. If they want to learn about people, they have to objectively collect information, summarize it, and analyze it. Summarizing and analyzing allow you to interpret the information and give it meaning in terms of theories and real-world problems. Summarizing and analyzing information is statistics; it is a fundamental and integrated component of psychology.
The aim of this chapter is to give you a roadmap of the main statistical concepts you'll encounter during your undergraduate psychology studies, and to signpost the relevant chapters where you can learn how to become a statistics superhero (or at least scrape by).
All quantitative research in psychology involves collecting information (called data) that can be represented by numbers. For example, levels of depression can be represented by depression scores obtained from a questionnaire, and a person’s country of birth can be represented by a number (say, 1 for Afghan and 2 for Zambian). The characteristics you're measuring are known as variables because they vary! They can vary over time in the same person (depression scores can vary over a person’s lifetime) or vary between different individuals.
Variables can be continuous or discrete, have different levels of measurement, and can be independent or dependent. A discrete variable takes values from a set of distinct categories (for example, 1 for Afghan and 2 for Zambian), whereas a continuous variable can take scores anywhere along a continuum (for example, depression scores may lie anywhere between 0 and 63 if measured by the Beck Depression Inventory).
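This distinction is easy to see in code. The following Python sketch (with invented values, not data from this book) shows a discrete variable that takes only a fixed set of codes and a continuous variable that can take any value, fractions included, along its range:

```python
# Hypothetical illustration of discrete versus continuous variables.
# Discrete: country of birth takes one of a fixed set of numeric codes.
COUNTRY_CODES = {1: "Afghan", 2: "Zambian"}

# Continuous: a Beck Depression Inventory score can fall anywhere from
# 0 to 63, including fractional values.
def valid_bdi(score: float) -> bool:
    return 0 <= score <= 63

print(COUNTRY_CODES[2])  # -> Zambian (only the listed codes are meaningful)
print(valid_bdi(21.5))   # -> True (a fractional score is perfectly valid)
print(valid_bdi(70))     # -> False (outside the 0-63 continuum)
```

A fractional country code (say, 1.5) has no meaning, but a fractional depression score does; that asymmetry is exactly what separates the two types of variable.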
Variables also differ in their measurement properties. Four levels of measurement exist:
In a nominal level of measurement, a numerical value is applied arbitrarily. This measurement level contains the least amount of information. Country of birth is an example of a nominal variable because it makes no sense to say one country is greater or less than another.
In an ordinal level of measurement, values are ranked. Rankings on a class test are an example of an ordinal level of measurement because we can order participants from the highest to the lowest score but don't know how much better the first person did compared to the second person. (The difference between actual scores could be 1 mark or 20 marks.)
In an interval level of measurement, the difference between each point is equal. IQ scores are measured at the interval level, which means we can order the scores, and the difference between 95 and 100 is the same as the difference between 115 and 120.
In a ratio level of measurement, the scores can be ordered, the difference between each point on the scale is equal, and the scale also has a true absolute zero. Weight, for example, is measured at the ratio level. Having a true absolute zero means a weight of zero signifies an absence of any weight and also allows you to make proportional statements, such as “10 kg is half the weight of 20 kg.”
You also need to classify the variables in your data as independent or dependent, and that classification will depend on the research question you're asking. For example, if you're investigating the difference in depression scores between Afghans and Zambians, country of birth is the independent variable (the variable you think is predicting a change) and depression scores is the dependent variable (the outcome variable where the scores depend on the independent variable).
These terms, which we cover in more detail in the next chapter, may seem bewildering. But having a good understanding of them is important because they dictate the statistical analyses that are available and appropriate for your data.
SPSS, or Statistical Package for the Social Sciences, is a program for storing, manipulating, and analyzing your data. In this book, we assume that you will be using SPSS to analyze your data. SPSS is probably the most commonly used statistics package in the social sciences, but other similar packages exist, as well as packages designed for more specialized analyses.
The normal format for entering data is that each column represents a variable (for example, country of birth or depression) and each row represents one person. Therefore, if you collected and entered information on the country of birth and depression scores of 10 people, you would have 2 columns and 10 rows in SPSS data view. SPSS allows you to enter numeric data and string data (which is non-numeric data, such as names) and also assign codes (for example, 1 for Afghan and 2 for Zambian).
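If it helps to see that layout outside SPSS, here is a rough Python sketch of the same idea (the data, variable names, and labels are all invented): each row is one person, each column is a variable, and value labels decode the numeric codes back into categories, just as SPSS displays them in the Data View.

```python
# A sketch of the SPSS Data View layout in plain Python.
columns = ["country", "depression"]
value_labels = {"country": {1: "Afghan", 2: "Zambian"}}

rows = [
    [1, 12],  # person 1: code 1 (Afghan), depression score 12
    [2, 30],  # person 2: code 2 (Zambian), depression score 30
    [1, 7],   # person 3
]

# Decode the "country" column the way SPSS shows value labels.
country_col = columns.index("country")
decoded = [value_labels["country"][row[country_col]] for row in rows]
print(decoded)  # -> ['Afghan', 'Zambian', 'Afghan']
```

With 10 people and two variables you would have 10 rows and 2 columns, exactly as described above.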
Once you enter your data, you can run a variety of analyses by using drop-down menus. Hundreds of analyses and options are available, but in this book we explain only the statistical procedures necessary for your course. After you select the analyses you want to conduct, your results appear in the output window; your job then is to read and interpret the relevant information.
In addition to using the pull-down menus, you can also program SPSS by using a simple syntax language. This approach can be useful if you need to repeat the same analyses on many different data sets, but explaining how to do it is beyond the scope of this introductory text.
SPSS was released in 1968 and has been through many versions and upgrades. At the time of writing this chapter, the most recent version was SPSS 30.0, released in 2024. In 2010, SPSS was purchased by IBM, and the program now appears in your computer’s menu under the name IBM SPSS Statistics. (And no, we don’t know why the extra Statistics is necessary either!)
Once you’ve collected your data, you need to communicate your findings to other people (your tutor, boss, or supervisor). Let’s imagine you collect data from 100 people on their levels of coulrophobia (fear of clowns). Simply producing a list of 100 scores in SPSS won’t be useful or easy to comprehend for your audience. Instead, you need a way to describe your data set in a concise and repeatable format. The standard way to do this is through descriptive statistics. In this section, we introduce the following types of descriptive statistics: central tendency, dispersion, graphs, and standardized scores.
There are several types of central tendency, but they all attempt to give a single number that represents your variable. The most common measure, which you probably know as the average, is more correctly called the arithmetic mean. The common measures of central tendency are covered in Chapter 4.
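As a quick preview of Chapter 4, here is how the three common measures look when computed in plain Python (the coulrophobia scores are invented; in practice you would get these numbers from SPSS):

```python
import statistics

# Hypothetical coulrophobia scores for eight participants.
scores = [2, 4, 4, 5, 7, 8, 9, 25]

mean_score = statistics.mean(scores)      # arithmetic mean
median_score = statistics.median(scores)  # middle value when sorted
mode_score = statistics.mode(scores)      # most frequent value

print(mean_score, median_score, mode_score)  # -> 8 6.0 4
```

Notice how the single extreme score of 25 pulls the mean (8) well above the median (6.0); that sensitivity to extreme scores is one reason the choice between measures matters (see Chapter 4).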
Several measures of dispersion exist, and each aims to give a single number that represents the spread or variability of your variable. Chapter 5 describes important measures of dispersion, including standard deviation, variance, range, and interquartile range.
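Similarly, as a preview of Chapter 5, the measures of dispersion can be sketched in plain Python (invented scores again; note that different packages, SPSS included, use slightly different conventions for computing quartiles, so interquartile ranges may not match exactly across programs):

```python
import statistics

# Hypothetical scores for eight participants.
scores = [2, 4, 4, 5, 7, 8, 9, 25]

data_range = max(scores) - min(scores)          # range: highest minus lowest
q1, q2, q3 = statistics.quantiles(scores, n=4)  # quartile cut points
iqr = q3 - q1                                   # interquartile range
sd = statistics.stdev(scores)                   # sample standard deviation
variance = statistics.variance(scores)          # the standard deviation squared

print(data_range, iqr, round(sd, 2), round(variance, 2))
```

Each of these numbers summarizes how spread out the scores are; the range uses only the two most extreme values, whereas the standard deviation uses every score.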
Another way of displaying your data is to provide a visual representation in the form of a graph. Graphs are important for another reason: the type of statistical analysis you can conduct with variables depends on the distribution of your variables, which you will need to assess by using graphs. Chapter 6 outlines the common types of graphs used in psychology and how to generate each of them in SPSS.
Imagine you measured a friend’s extraversion level with the Revised NEO Personality Inventory and told them they obtained a score of 164. It's likely they will want to know how this score compares to other people’s scores. Is it high or low? They also might want to know how it compares to the psychoticism score of 34 they received last week from the Eysenck Personality Questionnaire. Simply reporting raw scores often isn’t informative. You need to be able to compare scores to other people’s scores and compare scores measured on different scales. The good news is that converting a raw score into a standardized score to make these comparisons is easy. We cover standardization in more detail in Chapter 10.
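The most common standardized score is the z-score: subtract the mean of the comparison group from the raw score and divide by the group's standard deviation. A minimal Python sketch (the comparison scores here are invented, not real NEO PI-R norms):

```python
import statistics

# Invented extraversion scores for a small comparison group.
group_scores = [120, 135, 142, 150, 158, 164, 170, 181]
raw = 164  # your friend's raw score

mean = statistics.mean(group_scores)
sd = statistics.stdev(group_scores)

# z tells you how many standard deviations the raw score
# sits above (positive) or below (negative) the group mean.
z = (raw - mean) / sd
print(round(z, 2))
```

Because z-scores are on a common scale, the same conversion lets you compare an extraversion score with a psychoticism score from a completely different questionnaire (see Chapter 10).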
Descriptive statistics are useful in summarizing the properties of your sample (that is, the participants you've collected data from), but most of the time you'll be more interested in the properties of the population (that is, all possible participants of interest). For example, if you're interested in attitudes to sectarianism among children enrolled in schools in Northern Ireland, your population is all Northern Irish schoolchildren. It is unrealistic to recruit all the children in Northern Ireland (in terms of time, money, and consent), so instead you would measure sectarianism in a small subset, or sample, of the children. (You examine the differences between samples and populations in Chapter 7.)
The inferential statistic you conduct will tell you about the probability of your result occurring in the population (that is, does the difference in your sample really exist in the population or did you obtain this result by chance), but it doesn't tell you anything about the size of the difference. For instance, you may find that older children are more likely to show sectarian attitudes than younger children, but this isn’t interesting if the effect is tiny. Effect sizes indicate the strength of the relationship or difference between your variables and should always be reported with any inferential statistic. (We cover effect sizes in Chapter 11.)
Before you commence any study, you should have a hypothesis, or a specific testable statement, that reflects the aim of your study. We outline hypothesis testing in Chapter 8 and explain why we always start with the assumption that the data demonstrates no effect, difference, or relationship.
When you're addressing a hypothesis, you can conduct two main types of statistical analysis: a parametric test or the non-parametric equivalent test. Parametric statistics assume that the data approximates a certain distribution, such as the normal distribution (see Chapter 9). This allows us to make inferences, which makes this type of statistics powerful and capable of producing accurate results. (See Chapter 11 for a discussion of power.)
However, because parametric statistics are based on certain assumptions, you must check your data to ensure that it adheres to these assumptions. (We explain how to do this for each individual statistic in the book.) If you fail to check the assumptions, you risk performing inappropriate analyses, which means your results, and therefore conclusions, may be incorrect.
By comparison, non-parametric statistics make fewer assumptions about the data, which means they can be used to analyze a more diverse range of data. Non-parametric tests tend to be less powerful than their parametric equivalents, so you should always attempt to use the parametric version unless the data violates the assumptions of that test.
The choice of research design is influenced by the question you want answered or your hypothesis. Research designs can be broadly classified as correlational design or experimental design. Experimental design can be further broken down into independent groups design or repeated measures design. To determine the type of statistical analyses you should conduct, you must know your study's design.
In correlational design, you're interested in the relationships or associations between two or more variables. Unlike experimental design, correlational design makes no attempt to manipulate the variables; instead, you're investigating existing relationships between the variables.
For example, suppose you're conducting a study to look at the relationship between the use of illegal recreational drugs and visual hallucinations; in this case, you need to recruit participants with varying levels of existing drug use and measure their experience of hallucinations. The ethics panel of your department may have serious misgivings if you try to conduct an experimental study, which would mean manipulating your variables by handing out various amounts of illegal drugs to your participants.
Part 3 deals with inferential statistics that assess relationships or associations between variables; these normally relate to correlational designs. (Please note our use of normally. There are always exceptions.)
Experimental design differs from correlational design because it may involve manipulating the independent variable. Correlational design focuses on the relationship between existing variables. In experimental design, you change the independent variable (directly or indirectly) and assess whether this change has an effect on the outcome variable. For example, you may hypothesize that ergophobia (fear of work) in psychology students increases throughout their courses. You could use one of two experimental designs to test this hypothesis: independent groups design or repeated measures design.
When you employ an independent groups design, you're looking for differences on a variable between separate groups of people, such as differences on ergophobia scores between first-year and second-year psychology students. In this scenario, you can employ either a parametric or a non-parametric test. We explain these tests in Chapters 15, 16, and 17.
When you employ a repeated measures design, you're looking for differences on a variable in the same group of people at different times. For example, you could measure ergophobia levels when students first start their psychology course, and then 12 months later test the same group to see if the scores have changed. If you test the same participants more than once, you should use the tests outlined in Chapters 18, 19, and 20.
The critical stage of any research study is always the start. Remember the following:
Specify a hypothesis that is testable and addresses the question you're interested in (see Chapter 8). Your hypothesis must be informed by theory and previous research.
Consider how you will analyze the data by deciding on the appropriate statistic; this decision will help you decide how to measure your variables. Deciding on the appropriate statistical analysis also allows you to calculate the sample size you will need (see Chapter 11). If you do not recruit enough participants, you're unlikely to discover a significant effect in your data even if one exists in the population, and your efforts will be a waste of time.
When you're preparing your SPSS file, take time to label your data and assign values that are easy to read and will make sense when you re-visit them months later.
The best time to consult a statistical advisor is when you're designing your study. Your advisor will be able to offer advice on the type of data to collect, the analyses to conduct, and the required sample size. Asking for help after the data has been collected may be too late!
Chapter 2
IN THIS CHAPTER
Distinguishing between discrete and continuous variables
Understanding nominal, ordinal, interval, and ratio levels of measurement
Knowing the difference between independent and dependent variables and covariates
A variable is something you measure that can have a different value from person to person or across time, such as age, self-esteem, and weight. Data is the information you gather about a variable. For example, if you gather information about the age of a group of people, the list of their ages is your research data. (Not everything that you can measure is a variable, though, as you can read in the “Constantly uninteresting” sidebar later in the chapter.)
The data you collect on all the variables of interest in a research study is called a data set — a collection of information about different types of variables. In statistical analysis, the first question you need to address is, “What type of variables do I have?” Therefore, you need to know how to distinguish between variables before you can attempt anything else in statistics. If you can get a handle on variables, statistics will be a lot less confusing.
You can classify a variable in psychological research by
Type: Discrete or continuous
Level of measurement: Nominal, ordinal, interval, or ratio
Its role in the research study: Independent, dependent, or covariate
In this chapter, we discuss each of these ways of classifying a variable. To practice what you learn in this chapter, go to this book’s web page at www.dummies.com/go/psychstatsfd2e and click the Downloads link and then the Chapter 2 link.
Discrete variables, sometimes called categorical variables, contain separate and distinct categories. For example, in research studies, marital status might be described as never married, married, divorced, or widowed. The marital status variable is a categorical (discrete) variable because it consists of categories (four in this case).
Suppose you're collecting information about the age of a group of people (as part of a research study rather than general nosiness). You could simply ask people to record their age in years on a questionnaire. Age is an example of a continuous variable because it’s not separated into distinct categories (time proceeds continuously), it has no breaks, and you can place it along a continuum. Therefore, someone might record their age as 21 years old; another person might record their age as 21.5 years old; another person might record their age as 21.56 years old. The last two people in the example might appear a bit weird, but they’ve given a valid answer. They’ve just used a different level of accuracy in placing themselves on the age continuum.
Here's a trick to help you remember the difference between the two types of variables. Generally, fractions are meaningful with a continuous variable but are not meaningful with a discrete variable, which can take only specific values. In the examples, someone could provide their age in the form of a fraction but would provide their marital status as one of four possible answers.
Everything that you can measure you can classify as either a constant or a variable; that is, its value is always the same (constant) or its value varies (variable). Psychological research is interested only in variables — how changes in one variable are associated with changes in another variable. Constants aren’t interesting because you already know their value and can do nothing with it.
Whether you record a variable as discrete or continuous depends on how you measure it. For example, you can’t say that age is a continuous variable without knowing how age has been measured in the context of a research study. If you ask people to record their age and give them the options less than 25, 25 to 40, and older than 40, you’ve created a discrete variable. In this case, the person can choose only one of three possible answers and anything in between these answers (any fraction) doesn’t make sense. Therefore, you need to examine how you measured a variable before classifying it as discrete or continuous.
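The age example can be sketched in a few lines of Python (the numeric codes 1 to 3 are our own invention for illustration): a continuous age collapses into a discrete category, and any fraction within a category is lost.

```python
def age_category(age_years: float) -> int:
    """Collapse a continuous age into a discrete code:
    1 = less than 25, 2 = 25 to 40, 3 = older than 40."""
    if age_years < 25:
        return 1
    elif age_years <= 40:
        return 2
    else:
        return 3

ages = [21.56, 25.0, 40.0, 63.2]
print([age_category(a) for a in ages])  # -> [1, 2, 2, 3]
```

Once recorded this way, the fractional detail (21.56 versus 24.9, say) is gone, which is why you must check how a variable was measured before classifying it as discrete or continuous.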
When you record variables on a data sheet, you usually record the values of the variables as numbers to facilitate statistical analysis. However, the numbers can have different measurement properties, and these properties determine the types of analyses you can perform with the numbers. The variable’s level of measurement is a classification system that tells you the measurement properties of a variable's values.
The values in a variable can possess the following measurement properties:
Magnitude
Equal intervals
True absolute zero
These three measurement properties enable you to classify the level of measurement of a variable into one of four types:
Nominal
Ordinal
Interval
Ratio
The three measurement properties outlined in this section are hierarchical. In other words, you can’t have equal intervals unless a variable also has magnitude, and you can’t have a true absolute zero point unless a variable also has magnitude and equal intervals.
The property of magnitude means that you can order the values in a variable from highest to lowest. For example, suppose you're measuring age using the following categories: less than 25, 25 to 40, and older than 40. In your research study, you give a score of 1 on the age variable to people who report being less than 25; a score of 2 to anyone who reports being between 25 to 40; and a score of 3 to anyone who reports being older than 40. Therefore, your age variable contains three values: 1, 2, and 3. These numbers have the property of magnitude because you can say that those who obtained a value of 3 are older than those who obtained a value of 2 and those who obtained a value of 1. In this way, you can order the scores.
The property of equal intervals means that a unit difference on the measurement scale is the same regardless of where that unit difference occurs on the scale. For example, in a temperature variable, the difference between 10 degrees Celsius and 11 degrees Celsius is 1 degree Celsius (one unit on the scale). Equally, the difference between 11 degrees Celsius and 12 degrees Celsius is also 1 degree Celsius. This one-unit difference is the same and means the same regardless of where on the scale it occurs.
This isn’t true for the example of the age variable in the preceding section. In that case, the difference between a value of 1 and a value of 2 is 1 (one unit) and the difference between the value of 2 and the value of 3 is also 1. However, these differences aren’t equal and, in fact, don’t make sense. Effectively, we’re asking, “Is the difference between ‘less than 25' and ‘25 to 40' the same as the difference between ‘25 to 40' and ‘older than 40'?” The question doesn’t make sense, which should tell you that this variable doesn't have the property of equal intervals.
The property of a true absolute zero point means that at the zero point on the measurement scale, nothing of the variable exists and, therefore, no scores less than zero exist. For example, when measuring weight in kilograms, at 0 kilograms you would consider the thing that you’re measuring to have no weight, and there is no weight less than 0 kilograms.