Advances in Comparative Survey Methods
Description

Covers the latest methodologies and research on international comparative surveys, with contributions from noted experts in the field.

Advances in Comparative Survey Methods examines the most recent advances in methodology and operations, as well as the technical developments, in international survey research. With contributions from a panel of international experts, the text includes information on the use of Big Data in concert with survey data, collecting biomarkers, the human subject regulatory environment, innovations in data collection methodology and sampling techniques, use of paradata across the survey lifecycle, metadata standards for dissemination, and new analytical techniques. This important resource:

* Contains contributions from key experts in their respective fields of study from around the globe
* Highlights innovative approaches in resource-poor settings and innovative approaches to combining survey and other data
* Includes material that is organized within the total survey error framework
* Presents extensive and up-to-date references throughout the book

Written for students, academic survey researchers, and market researchers engaged in comparative projects, this text represents a unique collaboration that features the latest methodologies and research on global comparative surveys.

Page count: 2110

Publication year: 2018




Table of Contents

Cover

Preface

Section I: Introduction

1 The Promise and Challenge of 3MC Research

1.1 Overview

1.2 The Promise

1.3 The Challenge

1.4 The Current Volume

References

2 Improving Multinational, Multiregional, and Multicultural (3MC) Comparability Using the Total Survey Error (TSE) Paradigm

2.1 Introduction

2.2 Concept of Total Survey Error

2.3 TSE Interactions

2.4 TSE and Multiple Surveys

2.5 TSE Comparison Error in Multinational Surveys

2.6 Components of TSE and Comparison Error

2.7 Obtaining Functional Equivalence and Similarity in Comparative Surveys

2.8 Challenges of Multinational Survey Research

2.9 Language

2.10 Structure

2.11 Culture

2.12 Resources for Developing and Testing Cross‐national Measures

2.13 Designing and Assessing Scales in Cross‐national Survey Research

2.14 TSE and the Multilevel, Multisource Approach

2.15 Documentation

2.16 Conclusion

References

3 Addressing Equivalence and Bias in Cross‐cultural Survey Research Within a Mixed Methods Framework

3.1 Introduction

3.2 Equivalence and Comparability: Supporting Validity of the Intended Interpretations

3.3 A Comprehensive Approach to Bias Analysis in 3MC Surveys Within a Mixed Methods Research Framework

3.4 Closing Remarks

References

Section II: Sampling Approaches

4 Innovative Sample Designs Using GIS Technology

4.1 Introduction

4.2 Cluster Selection Stage

4.3 Household Stage

4.4 Discussion

References

GIS and Remote Sensing Data Resources

5 Within‐household Selection of Respondents

5.1 Introduction

5.2 Within‐household Respondent Selection Methods

5.3 Within‐household Selection Methods in Cross‐national Surveys: The Case of ESS

5.4 A Few Practical Challenges of Within‐household Sampling in Cross‐national Surveys

5.5 Summary and Recommendations

References

Section III: Cross‐cultural Questionnaire Design and Testing

6 Overview of Questionnaire Design and Testing

6.1 Introduction

6.2 Review of Questionnaire Design and Testing in a Comparative Context

6.3 Advances in Questionnaire Design and Testing

6.4 Conclusions

References

7 Sensitive Questions in Comparative Surveys

7.1 Sensitivity Issues in a Comparative Context

7.2 The Definition of Sensitivity

7.3 Approaches to Minimizing the Effect of Sensitivity

7.4 Measuring Sensitivity in Cross‐national Contexts

7.5 New Evidence of Cross‐national Sensitivity: SQS Project

7.6 Understanding Sensitivity

7.7 Summary

References

8 Implementing a Multinational Study of Questionnaire Design

8.1 Introduction

8.2 Scope of the MSQD

8.3 Design of the MSQD

8.4 Experiments Implemented in the MSQD

8.5 Translation Requirements and Procedures

8.6 Findings on Feasibility and Limitations Due to Translations and Required Adaptations

8.7 Example Results

8.8 Conclusion

Acknowledgments

References

9 Using Anchoring Vignettes to Correct for Differential Response Scale Usage in 3MC Surveys

9.1 Introduction

9.2 Reporting Heterogeneity

9.3 Anchoring Vignettes: Design and Analysis

9.4 Validity of the Model Assumptions

9.5 Practical Issues

9.6 Empirical Demonstration of the Anchoring Vignette Method

9.7 Sensitivity Analysis: Number of Vignettes and Choices of Vignette Intensity

9.8 Discussion and Conclusion

References

10 Conducting Cognitive Interviewing Studies to Examine Survey Question Comparability

10.1 Introduction

10.2 Cognitive Interviewing as a Study in Validity

10.3 Conducting a Comparative Cognitive Interviewing Study

10.4 Real‐World Application

10.5 Conclusion

References

11 Setting Up the Cognitive Interview Task for Non‐English‐speaking Participants in the United States

11.1 Introduction

11.2 Differences in Communication Styles Across Languages and Cultures

11.3 Implications of Cross‐cultural Differences in Survey Pretesting

11.4 Setting up the Cognitive Interview Task for Non‐English‐speaking Participants

11.5 Discussion and Recommendations for Future Studies

Disclaimer

Acknowledgment

References

12 Working Toward Comparable Meaning of Different Language Versions of Survey Instruments

12.1 Introduction

12.2 Review of the Literature

12.3 Motivation for the Current Study: US Census Bureau Spanish Usability Testing

12.4 The Monolingual and Bilingual Cognitive Testing Study

12.5 Results of the Cognitive Testing

12.6 Summary and Conclusions

12.7 Future Research

Disclaimer

Acknowledgment

References

13 Examining the Comparability of Behavior Coding Across Cultures

13.1 Introduction

13.2 Methods

13.3 Results

13.4 Discussion

Acknowledgments

References

Section IV: Languages, Translation, and Adaptation

14 How to Choose Interview Language in Different Countries

14.1 Introduction

14.2 The Issue of Multilingualism

14.3 Current Practice of Language Choice in Comparative Surveys

14.4 Using a Language Survey for Decisions About Language Choice for an Interview: Example of Post‐Soviet Region

14.5 The Choice of Interview Language on the Level of Individual Respondent

14.6 Summary

References

15 Can the Language of Survey Administration Influence Respondents’ Answers?

15.1 Introduction

15.2 Language, Cognition, and Culture

15.3 Language of Administration in Surveys of Bilingual Bicultural Respondents

15.4 Data and Methods

15.5 Results

15.6 Discussion and Conclusions

References

16 Documenting the Survey Translation and Monitoring Process

16.1 Introduction

16.2 Key Concepts

16.3 Case Study: The ESENER‐2 Study

16.4 Translation Documentation from a Project Management Perspective

16.5 Translation Documentation from the Perspective of Translation Teams

16.6 Translation Documentation from the Perspective of Applied Translation Research

16.7 Translation Documentation from the Perspective of Data Analysts

16.8 Summary and Outlook

References

17 Preventing Differences in Translated Survey Items Using the Survey Quality Predictor

17.1 Introduction

17.2 Equivalence in Survey Translation

17.3 Cross‐cultural Survey Translation and Translation Assessment

17.4 Formal Characteristics of a Survey Item

17.5 Using SQP: A Five‐step Procedure for Comparing Item Characteristics Across Languages

17.6 Questions Evaluated in the ESS Round 5, Round 6, and Round 7

17.7 Discussion

References

Section V: Mixed Mode and Mixed Methods

18 The Design and Implementation of Mixed‐mode Surveys

18.1 Introduction

18.2 Consequences of Mixed‐mode Design

18.3 Designing for Mixed Mode

18.4 Auxiliary Data for Assessing and Adjusting Mode Effects

18.5 Conclusions

Acknowledgment

References

19 Mixed‐mode Surveys

19.1 Introduction

19.2 Methods

19.3 Results

19.4 Discussion and Conclusions

References

20 Mixed Methods in a Comparative Context

20.1 Introduction

20.2 Mixed Methods Data Collection Redefined

20.3 Considerations about Alternate Sources of Data

20.4 Examples of Social Science Research Using New Technologies

20.5 Linking Alternative and Survey Data

20.6 Mixed Methods with Technologically Collected Data in the 3MC Context

20.7 Conclusions

Acknowledgments

References

Section VI: Response Styles

21 Cross‐cultural Comparability of Response Patterns of Subjective Probability Questions

21.1 Introduction

21.2 State‐of‐the‐art Application of Subjective Probability Questions in Surveys

21.3 Policy Relevance of Subjective Probability Questions

21.4 Measurement Mechanism for Subjective Probability Questions

21.5 Data and Methods

21.6 Results

21.7 Discussion

References

22 Response Styles in Cross‐cultural Surveys

22.1 Introduction

22.2 Data and Measures

22.3 OLS Regression Analysis

22.4 Confirmatory Factor Analysis

22.5 Latent Class Analysis

22.6 Multidimensional Unfolding Model

22.7 Discussion and Conclusion

References

23 Examining Translation and Respondents’ Use of Response Scales in 3MC Surveys

23.1 Introduction

23.2 Data and Methods

23.3 Results

23.4 Discussion

References

Section VII: Data Collection Challenges and Approaches

24 Data Collection in Cross‐national and International Surveys

24.1 Introduction

24.2 Recent Developments in Survey Data Collection

24.3 Data Collection Challenges Faced in Different Regions of the World

24.4 Future Directions

References

25 Survey Data Collection in Sub‐Saharan Africa (SSA)

25.1 Introduction

25.2 Overview of Common Challenges and Solutions in Data Collection in Sub‐Saharan Africa

25.3 Strategies and Opportunities

25.4 Future Developments

References

26 Survey Challenges and Strategies in the Middle East and Arab Gulf Regions

26.1 Introduction

26.2 Household and Within‐household Sampling

26.3 Interviewer–Respondent Gender Matching

26.4 Nationality‐of‐interviewer Effects

26.5 Response Scale Heterogeneity

26.6 Conclusion: Outstanding Challenges and Future Directions

References

27 Data Collection in Cross‐national and International Surveys

27.1 Introduction

27.2 Survey Research in the Latin America and Caribbean Region

27.3 Confronting Challenges with Effective Solutions

27.4 New Opportunities

27.5 Conclusion

References

28 Survey Research in India and China

28.1 Introduction

28.2 Social Science Surveys in India and China

28.3 Organizational Structure of Surveys

28.4 Sampling for Household Surveys

28.5 Permission and Approvals

28.6 Linguistic Issues

28.7 Future Directions: New Modes of Data Collection

References

29 Best Practices for Panel Maintenance and Retention

29.1 Introduction

29.2 Retention Rates

29.3 Panel Maintenance Strategies

29.4 Study Development and the Harmonization of Field Practices

29.5 Conclusion

References

30 Collection of Biomeasures in a Cross‐national Setting

30.1 Introduction

30.2 Background

30.3 Types of Biomeasures Collected

30.4 Logistic Considerations

30.5 Quality Assurance Procedures

30.6 Ethical and Legal Issues Across Countries

30.7 Summary and Conclusions

Acknowledgments

References

31 Multinational Event History Calendar Interviewing

31.1 Introduction

31.2 EHC Interviews in a Multinational Setting

31.3 EHC Interview Administration

31.4 EHC Interviewer Training

31.5 Interviewer Monitoring in an International Survey

31.6 Coding Procedures

31.7 Evaluation of Interviewer Behavior

31.8 Feedback Processing Speed

31.9 Effects of Feedback and Interviewer Effects Across Countries

31.10 Use of Different Cross‐checks Across Countries

31.11 Discussion

References

32 Ethical Considerations in the Total Survey Error Context

32.1 Introduction

32.2 Ethical Considerations and the TSE Framework

32.3 Origins and Framework of Human Subjects Protection Standards

32.4 The Belmont Report and the Components of Human Subjects Protection

32.5 Final Remarks

Acknowledgment

References

33 Linking Auxiliary Data to Survey Data

33.1 Introduction

33.2 Ethical Guidelines and Legal Framework

33.3 What Constitutes Personal Data?

33.4 Confidentiality

33.5 Consent

33.6 Concluding Remarks

References

Section VIII: Quality Control and Monitoring

34 Organizing and Managing Comparative Surveys

34.1 Introduction

34.2 Background

34.3 Factors That Impact 3MC Survey Organization and Management

34.4 General Considerations and Survey Quality When Applying Project Management to 3MC Surveys

34.5 The Application of Project Management to 3MC Surveys

34.6 Conclusion

References

35 Case Studies on Monitoring Interviewer Behavior in International and Multinational Surveys

35.1 Introduction

35.2 Case Studies

35.3 Conclusion

References

36 New Frontiers in Detecting Data Fabrication

36.1 Introduction

36.2 Standard Approaches to Detecting Data Falsification

36.3 Approaches to Preventing Falsification

36.4 Additional Challenges

36.5 New Frontiers in Detecting Fraud

36.6 A Way Forward

References

Section IX: Nonresponse

37 Comparing Nonresponse and Nonresponse Biases in Multinational, Multiregional, and Multicultural Contexts

37.1 Introduction

37.2 Harmonization

37.3 Data Collection Factors

37.4 Assessment of Risk of Nonresponse Bias

37.5 Post‐survey Adjustment

37.6 Conclusion

References

38 Geographic Correlates of Nonresponse in California

38.1 Introduction

38.2 Data and Methods

38.3 Results

38.4 Discussion and Limitations

References

39 Additional Languages and Representativeness

39.1 Introduction

39.2 Data

39.3 Methods

39.4 Results

39.5 Summary and Conclusion

References

Section X: Multi‐group Analysis

40 Measurement Invariance in International Large‐scale Assessments

40.1 Introduction

40.2 Measurement Invariance Review

40.3 Advances in Measurement Invariance

40.4 The Stepwise Procedure

40.5 Evaluation Criteria

40.6 An Example

40.7 Conclusion

References

41 Approximate Measurement Invariance

41.1 Introduction

41.2 The Multigroup Confirmatory Factor Analysis

41.3 Illustration

41.4 Discussion and Conclusion

Acknowledgment

References

Section XI: Harmonization, Data Documentation, and Dissemination

42 Data Harmonization, Data Documentation, and Dissemination

Reference

43 Basic Principles of Survey Data Recycling

43.1 Introduction

43.2 The Process of Survey Data Recycling

43.3 The Logic of SDR

43.4 Using SDR in Constructing the Harmonized Dataset

43.5 Conclusions

Acknowledgments

References

44 Survey Data Harmonization and the Quality of Data Documentation in Cross‐national Surveys

44.1 Introduction

44.2 Standards for Describing the Survey Process from Sampling to Fieldwork

44.3 Basis of Quality Assessment in the SDR Project

44.4 Results

44.5 Concluding Remarks

References

45 Identification of Processing Errors in Cross‐national Surveys

45.1 Introduction

45.2 Data and Methods

45.3 Results

45.4 Conclusions

Acknowledgments

References

46 Item Metadata as Controls for Ex Post Harmonization of International Survey Projects

46.1 Introduction

46.2 Harmonization Controls and Item Quality Controls

46.3 The Case for Using Item Metadata

46.4 Application: Trust in Parliament and Participation in Demonstrations

46.5 Harmonization Controls

46.6 On the Impact of Harmonization Controls

46.7 Item Quality Controls

46.8 Summary and Conclusions

Acknowledgments

References

47 The Past, Present, and Future of Statistical Weights in International Survey Projects

47.1 Introduction

47.2 Weighting as a Procedure of Improving Data Quality

47.3 Availability of Weights and Weight Types in International Survey Projects

47.4 Quality of Statistical Weights and Consequences of Errors

47.5 Comparability of Weights or Weighted Data

47.6 Summary

Acknowledgments

References

Section XII: Looking Forward

48 Prevailing Issues and the Future of Comparative Surveys

48.1 Introduction

48.2 Examples of 3MC Surveys

48.3 Data Quality and Some Special Features of 3MC Surveys

48.4 Roger Jowell’s Ten Golden Rules for Cross‐national Studies

48.5 Quality Management

48.6 A Changing Survey Landscape

48.7 Big Data

48.8 Summary of Prevailing Problems

48.9 Endnote

References

Wiley Series In Survey Methodology

Index

End User License Agreement

List of Tables

Chapter 02

Table 2.1 Typology of surveys by mode and medium.

Table 2.2 Categorizing nonresponse error.

Chapter 03

Table 3.1 Evaluation of MMR core characteristics in 3MC bias studies.

Table 3.2 Approaches to integration for a 3MC mixed methods validation study.

Chapter 04

Table 4.1 Summary of sampling approaches.

Chapter 07

Table 7.1 Reported sensitivity of different topics in a cross‐national perspective.

Chapter 08

Table 8.1 The MSQD implementation across participating organizations.

Table 8.2 Overview of the experiments.

Table 8.3 Question order experiment on attitudes toward abortion for a married woman.

Chapter 09

Table 9.1 Sample distributions by country.

Table 9.2 Estimates from ordered probit model and the self‐assessment component of the HOPIT model predicting respondent’s pain level (1 none–5 extreme).

Table 9.3 Design of the sensitivity analysis using CHARLS, SHARE, and HRS data.

Table 9.4 Estimates from the ordered probit model and the self‐assessment component of the HOPIT models predicting respondents’ pain level (1 = none through 5 = extreme): Comparison results between models with different numbers and choices of vignettes.

Chapter 10

Table 10.1 First round summary notes: Is your child too sick to play?

Table 10.2 Second round summary notes.

Chapter 11

Table 11.1 Frequency of Spanish preinterview interactions: experimental versus conventional interviews.

Table 11.2 Initial procedure used in the round 1 interviews.

Table 11.3 Enhanced introduction and practice.

Chapter 12

Table 12.1 Respondent characteristics.

Table 12.2 Concepts monolinguals misunderstood more frequently than bilinguals.

Table 12.3 Concepts bilinguals misunderstood more frequently than monolinguals.

Chapter 13

Table 13.1 Behavior codes employed to identify respondent comprehension and mapping difficulties and interviewer question reading problems.

Table 13.2 Problematic and nonproblematic survey questions examined.

Table 13.3 Cross‐classified HLM model estimates of respondent, response, and question level characteristics on respondent comprehension difficulty.

Table 13.4 Cross‐classified HLM model estimates of respondent, response, and question level characteristics on respondent mapping difficulty.

Chapter 14

Table 14.1 Documentation of language choice in different comparative projects.

Table 14.2 The choice of interview language in the survey cycle.

Table 14.3 Different aspects of multilingualism (% of total population).

Table 14.4 Language knowledge of the “native language” (if the first official language in each country – also called “title language” – is named as the native language).

Table 14.5 Types of multilingualism in language usage in countries of the former Soviet Union (% from multilingual respondents).

Chapter 15

Table 15.1 Relative risk for the English language compliant group versus Spanish across measures with different language effect expectations reflecting stratification by propensity strata (NLAAS, n = 220).

Table 15.2 Means for the English and Spanish language compliant groups across measures with different language effect expectations reflecting stratification by propensity strata (NLAAS, n = 220).

Table 15.3 Relative risk for the English language group versus Spanish across measures by expectations of language effect (NIS, n = 632).

Chapter 17

Table 17.1 Summary of characteristics inventoried by SQP.

Table 17.2 Categories for differences in the SQP codes for two languages.

Chapter 19

Table 19.1 Frequencies and percentages of persons by data collection mode in the countries that implemented mixed‐mode survey design, ISSP 2011.

Table 19.2 Differences in proportion of persons reporting excellent or very good health status, mode effect estimates, and standard errors using the propensity score matching method, ISSP 2011.

Table 19.3 Estimated proportion of persons reporting excellent or very good health status and standard errors by response mode, ISSP 2011.

Table 19.4 Estimated proportion of persons reporting excellent or very good health status and standard errors using unadjusted data, calibration, and multiple imputation methods, ISSP 2011.

Table 19.5 Estimated differences in the proportion of persons reporting excellent or very good health status and standard errors by estimation method, ISSP 2011.

Chapter 21

Table 21.1 Path coefficients from structural equation model of item nonresponse and unsure response (item nonresponse and unsure 50 response) of subjective life expectancy question, the 2010 Health and Retirement Study.

Chapter 22

Table 22.1 Question wordings, mean, and standard deviation (SD) of the two rating scales, 2012 American National Election Studies.

Table 22.2 Estimated logistic regression coefficients and standard errors of race or ethnicity and control variables on extreme response style (ERS) and acquiescent response style (ARS), 2012 American National Election Studies (weighted).

Table 22.3 Model fit statistics of confirmatory factor analysis of ARS.

Table 22.4 Estimation of Model 4 with two content factors and one ARS factor (weighted).

Table 22.5 Estimated regression coefficients and standard errors of race or ethnicity and control variables on acquiescent response style (ARS based on the CFA), 2012 American National Election Studies (weighted).

Table 22.6 LCA model fit statistics, 2012 American National Election Studies (weighted).

Table 22.7 Estimated regression coefficients and standard errors of ERS and ARS on the Likert scale items, 2012 American National Election Studies (LCA Model 6b) (weighted).

Table 22.8 Estimated logistic regression coefficients and standard errors of race or ethnicity and control variables on extreme response style (ERS) and acquiescent response style (ARS) based on the LCA model, 2012 American National Election Studies (weighted).

Table 22.9 MUM overall goodness of fit statistics for moral traditionalism and position of Blacks in society using common or group‐specific shifting and scaling threshold parameters, 2012 American National Election Studies.

Table 22.10 MUM estimates of parameters and variances of interest for the response style in group‐specific shifting and scaling parameter model, 2012 American National Election Studies.

Table 22.11 MUM and CFA mean score estimates, 2012 American National Election Studies.

Chapter 23

Table 23.1 Scale labels used in self‐rated health in five studies (and sample sizes).

Table 23.2 Logistic regression models predicting likelihood to report “fair.”

Chapter 24

Table 24.1 Dimensions of survey context.

Chapter 25

Table 25.1 Kenya case study – strategies and opportunities.

Chapter 30

Table 30.1 Country‐specific consent rates to biomeasure collections in SHARE Wave 6.

Chapter 31

Table 31.1 Overview of behavior coding variables.

Table 31.2 Overview of data collection by country.

Table 31.3 Percentage of audio‐recorded interviews by country.

Table 31.4 Feedback processing speed: number of days between interview and VU University feedback.

Table 31.5 Feedback processing speed across countries in days.

Table 31.6 Interviewer performance across feedback periods per country.

Table 31.7 Frequency (absolute and relative) of cueing and cross‐checking variables.

Chapter 34

Table 34.1 Aspects of survey production lifecycle typically handled by sponsor.

Table 34.2 Aspects of survey production lifecycle typically handled by coordinating center and local teams.

Table 34.3 Project management process descriptions by subject groups included in ISO 21500.

Chapter 35

Table 35.1 SNMHS quality control indicators by sources of errors.

Table 35.2 Example of a summary sheet showing the flag status (1 flagged, 0 not flagged) for each interviewer on single occurrence indicators.

Table 35.3 Summary of intra‐interviewer correlations over 48 survey items for 36 countries in six ESS rounds.

Table 35.4 Laptop‐level statistics on quality indicators and flagged underperforming interviewers (in bold) for SHARE wave 6.

Chapter 37

Table 37.1 Incentives in ESS round 6 (2012/2013).

Chapter 38

Table 38.1 Cultural and household resistance constructs mapped to variable sources.

Table 38.2 Univariate distributions of model predictors and bivariate associations with nonresponse (unweighted).

Table 38.3 Screener interview response predicted by community‐level cultural dimensions and sample unit resistance (unweighted logistic mixed model, n = 884 199).

Table 38.4 Adult interview response predicted by community‐level cultural dimensions and sample unit resistance (unweighted logistic mixed model, n = 152 134).

Chapter 39

Table 39.1 Coverage, contact, cooperation, and language problems: predicted probabilities (%) from a multivariate logit model using the SHP data in the household recruitment phase.

Table 39.2 Coverage, contact rates, cooperation, and language problems: predicted probabilities from a multivariate logit model using the SHP data in the person recruitment phase, given the household participates.

Table 39.3 Coverage, contact rates, cooperation, and language problems: predicted probabilities from a multivariate logit model using ESS/MOSAiCH‐ISSP data.

Table 39.4 Percentages of population mastering one language of different combinations by sociodemographic characteristics.

Table 39.5 Standard deviation of predicted probabilities for sociodemographic characteristics by language proficiency from a multivariate logit model.

Chapter 40

Table 40.1 Item wordings, means (variances) on the diagonal, and covariances on the off‐diagonal.

Table 40.2 LEGPROT items descriptive statistics – mean (standard deviation) – across 38 countries.

Table 40.3 Parameter specifications and model‐data fits for the exact invariance using the traditional approach and unconstrained and constrained invariance using the Bayesian approach.

Chapter 41

Table 41.1 RMSEA and CFI differences between the configural, metric, and scalar models.

Table 41.2 The influence of prior variance on parameter differences.

Table 41.3 Alteration of the intercept values of dataset 1.

Chapter 43

Table 43.1 Selected international survey projects.

Chapter 44

Table 44.1 Quality indicators by survey project.

Table 44.2 Changes in quality indicators between first and last wave for selected survey projects.

Table 44.3 Average quality by country/territory and time span of survey coverage.

Chapter 45

Table 45.1 Description of the survey projects and sources of documentation.

Table 45.2 Source variables available per target variable.

Table 45.3 Example of illegitimate variable values.

Table 45.4 Example of misleading variable values and illegitimate variable values.

Table 45.5 Example of contradictory variable values.

Table 45.6 Example of variable values discrepancy and lack of variable value labels.

Table 45.7 Example of lack of variable value labels.

Table 45.8 Distribution of errors and their types per pool of source variables for given target variable.

Chapter 46

Table 46.1 Availability of items on trust in parliament and participation in demonstrations in 22 international survey projects.

Table 46.2 Comparison of translation of “trust” and “confidence” in languages of the European Social Survey (ESS) and the European Values Survey (EVS).

Table 46.3 Diversity of response scales in items about trust in the parliament.

Table 46.4 Diversity of response scales in items about trust in parliament.

Table 46.5 Harmonization controls pertaining to the phrasing of the source items on participation in demonstrations.

Table 46.6 Variation of the time frame in questions about participation in demonstrations.

Table 46.7 Effect of harmonization controls on two target variables: trust in parliament and participation in demonstrations.

Table 46.8 Comparison of the quality of questions on trust in parliament in questionnaires from Poland, using the Survey Quality Predictor 2.1.

Table 46.9 Item nonresponse to questions on trust in the national parliament and participation in demonstrations by international survey projects.

Chapter 47

Table 47.1 Percentages of weights containing particular weighting variables in time periods.

Table 47.2 Quality of survey weights by survey project.

Chapter 48

Table 48.1 Third‐party presence during interviews for the 2011 World Mental Health Survey.

Table 48.2 Response rates in PIAAC Cycle 1.

List of Illustrations

Chapter 02

Figure 2.1 Total survey error.

Figure 2.2 Total survey error: Comparison error.

Chapter 03

Figure 3.1 Conceptual framework for 3MC validation studies.

Chapter 04

Figure 4.1 Years since last population count.

Figure 4.2 Illustration of grid method.

Figure 4.3 Illustration of manual cluster creation method.

Figure 4.4 Nighttime lights, east coast of South America.

Figure 4.5 Satellite photo of residential neighborhood in Mogadishu, Somalia.

Figure 4.6 Illustration of Qibla method.

Figure 4.7 Photo taken by UAV of possible housing unit [53].

Chapter 05

Figure 5.1 Over‐/underrepresentation of females, by type of sample (ESS 1–6).

Figure 5.2 Over‐/underrepresentation of females, by type of sample and within‐household selection method (ESS 1–6).

Chapter 07

Figure 7.1 Types of sensitive questions and the impact on survey response in cross‐national comparative surveys.

Chapter 09

Figure 9.1 Illustration of reporting heterogeneity for cross‐national studies. The horizontal lines with arrows indicate the continuum scales of the domain (pain level). The short vertical lines indicate the cutoff points respondents use to answer the self‐assessment question. The vertical dashed line indicates respondents’ answers to self‐assessment questions. For those whose pain level falls on that line it is an indication they have the same true pain level.

Figure 9.2 Comparison of self‐assessments between two respondents from two countries. The horizontal lines with arrows indicate the continuum scales of the domain (pain level). The short vertical lines indicate the cutoff points respondents use to answer the self‐assessment question. The vertical dashed line indicates respondents’ answers to self‐assessment questions. Those whose level of true pain falls on that line have the same true pain levels. The vertical dash–dot lines indicate respondents’ answers to different vignette questions. V1–V3 represent responses to three vignette questions. SR refers to respondents’ ratings on the self‐assessment question.

Figure 9.3 Distribution of self‐rated pain for Sweden, the United States, and China.

Chapter 10

Figure 10.1 Visual representation of construct schema.

Figure 10.2 Visual representation of construct schema with two interpretations. To eliminate the unintended interpretation of “listening,” the question was reworded as “Does your child have difficulty hearing sounds like people’s voices or music?”

Figure 10.3 Visual representation of construct schema with three interpretations.

Figure 10.4 Emerging schema from concurrent analysis.

Figure 10.5 Q‐Notes project home screen.

Figure 10.6 Q‐Notes data entry screen.

Figure 10.7 Q‐Notes analysis page.

Chapter 11

Figure 11.1 Individualism scores across countries [19].

Figure 11.2 Spanish‐speaking respondent discomfort: experimental versus conventional interviews.

Chapter 12

Figure 12.1 Self‐reported English‐speaking ability in screener and debriefing interviews.

Figure 12.2 Understanding of the foster child concept by English‐speaking ability. Data regarding understanding of this concept were unavailable for two respondents out of 39.

Figure 12.3 Understanding of “housemate/roommate” by English‐speaking ability. Data regarding understanding of this concept were unavailable for three respondents out of 39. Again, the number of respondents in our study was very small. We provide this table only for illustrative purposes and want to note that responses from a different small group of cognitive interview respondents might look somewhat different.

Chapter 13

Figure 13.1 Comprehension difficulties by race/ethnicity/language and question type.

Figure 13.2 Mapping difficulties by race/ethnicity/language and question type.

Figure 13.3 Interviewer question reading problems by race/ethnicity/language and question type.

Chapter 16

Figure 16.1 The translation process in ESENER‐2, mapped against the documentation classification.

Figure 16.2 Translation and adaptation notes (extract).

Chapter 17

Figure 17.1 Nonequivalence across linguistic groups.

Figure 17.2 Equivalence across linguistic groups.

Chapter 18

Figure 18.1 Example carousel question in Dutch. First question in a series of five (see navigation bar). Previous and Next buttons are disabled (gray). Seven‐point response scale (totally agree – totally disagree).

Figure 18.2 Confounding of wanted mode selection and unwanted mode measurement effects in estimating the survey statistic of interest.

Chapter 19

Figure 19.1 Percentage of the population that were Internet users for the countries that participated in the Wave 6 World Values Survey.

Figure 19.2 Self‐reported health status distribution by survey mode in the ISSP 2011.

Chapter 21

Figure 21.1 Structural equation model of response patterns of subjective probability questions (for simplicity, correlations across traits and paths from demographics to personality are included in the model but not presented in the figure).

Figure 21.2 Item nonresponse rates and 95% confidence intervals of subjective life expectancy by country, race, ethnicity, and interview language in the 2010 Health and Retirement Study, wave 1 of the English Longitudinal Study of Ageing and wave 1 of the Survey of Health, Ageing and Retirement in Europe.

Figure 21.3 Detailed response patterns of subjective life expectancy question by race, ethnicity, and interview language, the 2010 Health and Retirement Study.

Chapter 22

Figure 22.1 Latent class analysis model of acquiescent response style (ARS), extreme response style (ERS), and content latent class variables (F1: moral traditionalism, F2: position of Blacks in society), with covariates.

Figure 22.2 Histogram of unadjusted confirmatory factor analysis scores of moral traditionalism by groups. (The dashed line represents the group mean score. More positive total scores correspond to more favorable attitudes toward moral traditionalism.)

Figure 22.3 Histogram of unadjusted confirmatory factor analysis scores of position of Blacks in today’s society by groups. (The dashed line represents the group mean score. More positive total scores correspond to more favorable attitudes toward position of Blacks in today’s society.)

Chapter 23

Figure 23.1 Weighted response distributions of answers to self‐rated health using the unbalanced scale.

Figure 23.2 Weighted response distributions of answers to self‐rated health using both unbalanced and balanced scales.

Figure 23.3 Weighted mean health scores by SRH responses using unbalanced scale, by surveys.

Figure 23.4 Weighted mean health scores by SRH responses using balanced scale, by surveys.

Figure 23.5 Predicted probability of reporting “fair” using the unbalanced scale, by health scores and surveys.

Figure 23.6 Predicted probability of reporting “fair” using the balanced scale, by health scores and surveys.

Chapter 24

Figure 24.1 Worldwide Devex tenders by year.

Chapter 25

Figure 25.1 Map of sub‐Saharan Africa statistical capacity by country.

Chapter 29

Figure 29.1 Wave‐on‐wave reinterview rates, household panel surveys. BHPS, British Household Panel Survey; HILDA, Household, Income and Labour Dynamics in Australia Survey; PSID, Panel Study of Income Dynamics (US); SHP, Swiss Household Panel; SOEP, Socio‐economic Panel (Germany); UKHLS, UK Household Longitudinal Study. Figures in parentheses indicate the year in which interviewing commenced. The PSID response rates are calculated at the family level, while the rates for all other studies are calculated at the individual level. The rates for the SOEP, BHPS, SHP, and HILDA Survey exclude deaths and moves abroad from the denominator; the rates for the PSID only exclude deaths.

Figure 29.2 Reinterview rates, wave 1 respondents. (a) Household panel surveys. (b) Child cohort studies. (c) Youth cohort studies. (d) Older aged cohort studies. BCS, British Cohort Study; BHPS, British Household Panel Survey; CVFPS, Chitwan Valley Family Panel Study (Nepal); ELSA, English Longitudinal Study of Ageing; HILDA, Household, Income and Labour Dynamics in Australia Survey; HRS, Health and Retirement Study (US); IFLS, Indonesian Family Life Survey; KLIPS, Korean Labor and Income Panel Study; LSAC, Longitudinal Survey of Australian Children; LSAY, Longitudinal Study of Australian Youth; LSYPE, Longitudinal Survey of Young People in England; MCS, Millennium Cohort Study (UK); NCDS, National Child Development Study (Great Britain); NLSCY, National Longitudinal Survey of Children and Youth (Canada); NLSY, National Longitudinal Survey of Youth (US); NLYOM, National Longitudinal Surveys of Older Males (US); SHARE, Survey of Health, Ageing and Retirement in Europe; SHP, Swiss Household Panel; SLID, Survey of Labour and Income Dynamics (Canada); SOEP, Socio‐economic Panel (Germany); WLS, Wisconsin Longitudinal Study (US); YCS, Youth Cohort Study (England). Figures in parentheses indicate the year in which interviewing commenced. Generally the response rates are calculated excluding deaths and moves abroad. The rates for the following studies only exclude deaths: IFLS, BCS, WLS, NLSY, YCS, and HRS. The following studies do not exclude either: SLID, LSAC, and SHARE. The IFLS includes an expansion of the eligible respondents in selected households in Year 7. The CVFPS and KLIPS response rates are calculated at the household level (and the CVFPS is averaged across months).

Chapter 31

Figure 31.1 Design of the CAPI EHC on laptops.

Figure 31.2 Display of behavior coding program.

Chapter 34

Figure 34.1 The survey production lifecycle.

Chapter 35

Figure 35.1 Head office, verification center, audit team, field branches, and field teams connectivity. HH, household; HO, head office; HRD, human resource development; ORV, offline real‐time verification center.

Figure 35.2 Viewership trend for three competitor channels in market A.

Figure 35.3 Interviewer‐level random intercepts (EBLUPs) for channel XYZ.

Figure 35.4 Example of “question not read” report showing interviewers flagged on any single occurrence of the “not read” indicator and the drill‐down capability by date, sample ID, and question field.

Figure 35.5 Data flow during ongoing fieldwork of SHARE.

Chapter 37

Figure 37.1 Response rates in round 6 of the ESS (2012/2013).

Figure 37.2 Response rates and average number of calls per sample unit in the ESS round 6 (2012/2013).

Figure 37.3 Response rate and the use of respondent incentives in the ESS round 6 (2012/2013).

Figure 37.4 Survey administration in the ESS round 6 (2012/2013).

Chapter 38

Figure 38.1 Cultural Ecosystems Nonresponse (CENR) model.

Chapter 39

Figure 39.1 Representation of households and persons by nationality due to reasons for nonobservation.

Chapter 40

Figure 40.1 Illustrations of (a) uniform measurement noninvariance and (b) nonuniform measurement noninvariance.

Figure 40.2 Differences between groups in the parameter estimates (e.g. differences in the item intercept, ∆τ) using the exact invariance of the ML approach (top) and the Bayesian approach (bottom). The mean of the differences between groups is 0 in both approaches. Under exact invariance, the group means are exactly the same, so there is no variation in the distribution (i.e. ∆τ = 0 and variance = 0). In the Bayesian approach with approximate invariance, the group means are not exactly the same but approximately so: the average of the differences between groups is 0, but there is some variation between groups (i.e. ∆τ ≈ 0 and variance ≠ 0). The differences between groups might be relatively large yet close to zero, for example, with a variance of 0.10 as in the bottom left plot, or rather small and close to zero, for example, with a variance of 0.01 as in the bottom right plot.

Figure 40.3 (a) Country means between scores computed from the exact scalar invariance and parsimonious invariance procedures. (b) Correlations between scores computed from the exact scalar invariance and parsimonious invariance procedures.

Chapter 41

Figure 41.1 Response functions (lines) for different groups (grayscale) under exact (a) vs. approximate (b) measurement invariance models.

Figure 41.2 Mplus input file containing the population parameter values for the intercepts, factor loadings, latent means, and latent variances.

Figure 41.3 Mplus output of the MGCFA chi‐square comparisons. The scalar equivalence model fits significantly worse than the metric equivalence model; hence exact measurement equivalence does not hold.

Figure 41.4 Input file in Mplus for the Bayesian approximate measurement equivalence test.

Figure 41.5 Visualization of the estimation of the intercept y1 in group 1 and group 2.

Figure 41.6 Traceplots to judge the convergence of intercepts Y1–Y4 in groups 1 and 2. Note that only the last 50 000 iterations (after the gray vertical line) are used for the parameter estimates.

Figure 41.7 Part of the Mplus output resulting from the input file in Figure 41.4.

Chapter 43

Figure 43.1 General schema of survey data recycling.

Chapter 44

Figure 44.1 Changes in survey quality over time.

Chapter 45

Figure 45.1 Distribution of processing errors per survey project wave. Note: Survey project waves not mentioned in the figure did not contain processing errors.

Figure 45.2 Changes of the data processing quality over time.

Chapter 46

Figure 46.1 Transformation of source values into the 0–10 scale of trust in parliament.

Chapter 47

Figure 47.1 Percentage of studies using weights per year.

Figure 47.2 Types of survey weightings across countries. Darker shade, tendency to use design over post‐stratification weights; lighter shade, tendency to use post‐stratification over design weights. White, no data available.

Figure 47.3 Distribution of the percentage of incorrectly calculated weights across countries. White, no data.

Figure 47.4 Distribution of standard deviation across countries. Darker shade, higher standard deviation; lighter shade, lower standard deviation; white, no data.

Chapter 48

Figure 48.1 Example of process control chart.

Figure 48.2 The ESS governance scheme.

74

75

76

77

78

79

80

81

82

83

84

85

86

87

88

89

90

91

92

93

94

95

96

97

98

99

100

101

102

103

104

105

106

107

108

109

110

111

113

115

116

117

118

119

120

121

122

123

124

125

126

127

128

129

130

131

132

133

134

135

136

137

139

140

141

142

143

144

145

146

147

148

149

150

151

152

153

154

155

156

157

158

159

160

161

162

163

164

165

166

167

168

169

170

171

172

173

174

175

176

177

178

179

181

182

183

184

185

186

187

188

189

190

191

192

193

194

195

196

197

198

199

200

201

203

204

205

206

207

208

209

210

211

212

213

214

215

216

217

218

219

220

221

222

223

224

225

227

228

229

230

231

232

233

234

235

236

237

238

239

240

241

242

243

244

245

246

247

248

249

251

252

253

254

255

256

257

258

259

260

261

262

263

264

265

266

267

268

269

271

272

273

274

275

276

277

278

279

280

281

282

283

284

285

286

287

288

289

290

291

293

295

296

297

298

299

300

301

302

303

304

305

306

307

308

309

310

311

312

313

314

315

316

317

318

319

320

321

322

323

325

326

327

328

329

330

331

332

333

334

335

336

337

338

339

340

341

342

343

344

345

346

347

348

349

350

351

352

353

354

355

357

358

359

360

361

362

363

364

365

366

367

368

369

370

371

372

373

374

375

376

377

378

379

380

381

382

383

384

385

387

388

389

390

391

392

393

394

395

396

397

398

399

400

401

402

403

404

405

406

407

408

409

410

411

412

413

414

415

416

417

418

419

420

421

422

423

424

425

426

427

428

429

430

431

432

433

434

435

436

437

438

439

440

441

442

443

444

445

446

447

448

449

450

451

452

453

454

455

457

458

459

460

461

462

463

464

465

466

467

468

469

470

471

472

473

474

475

477

478

479

480

481

482

483

484

485

486

487

488

489

490

491

492

493

494

495

496

497

498

499

501

502

503

504

505

506

507

508

509

510

511

512

513

514

515

516

517

518

519

521

522

523

524

525

526

527

528

529

530

531

532

533

534

535

536

537

538

539

540

541

542

543

544

545

546

547

548

549

550

551

552

553

554

555

556

557

558

559

560

561

562

563

564

565

566

567

568

569

570

571

572

573

574

575

576

577

578

579

580

581

582

583

584

585

586

587

588

589

590

591

592

593

594

595

596

597

598

599

600

601

602

603

604

605

606

607

608

609

610

611

612

613

614

615

616

617

618

619

620

621

623

624

625

626

627

628

629

630

631

632

633

634

635

636

637

638

639

640

641

643

644

645

646

647

648

649

650

651

652

653

654

655

656

657

658

659

660

661

662

663

664

665

666

667

668

669

670

671

672

673

674

675

676

677

678

679

680

681

682

683

684

685

686

687

688

689

690

691

692

693

694

695

696

697

698

699

700

701

702

703

704

705

707

708

709

710

711

712

713

714

715

716

717

718

719

720

721

722

723

724

725

726

727

728

729

731

732

733

734

735

736

737

738

739

740

741

742

743

744

745

746

747

748

749

750

751

752

753

754

755

756

757

758

759

760

761

762

763

764

765

766

767

768

769

770

771

772

773

774

775

776

777

778

779

780

781

782

783

784

785

786

787

788

789

790

791

792

793

794

795

796

797

798

799

800

801

802

803

804

805

807

809

810

811

812

813

814

815

816

817

818

819

820

821

822

823

824

825

826

827

828

829

830

831

832

833

835

836

837

838

839

840

841

842

843

844

845

846

847

848

849

850

851

852

853

854

855

856

857

859

860

861

862

863

864

865

866

867

868

869

870

871

872

873

874

875

876

877

879

881

882

883

884

885

886

887

888

889

890

891

892

893

894

895

896

897

898

899

900

901

902

903

904

905

906

907

908

909

910

911

912

913

914

915

916

917

918

919

920

921

922

923

924

925

926

927

928

929

931

933

934

935

936

937

938

939

940

941

942

943

944

945

946

947

948

949

950

951

952

953

954

955

956

957

958

959

960

961

962

963

964

965

966

967

968

969

970

971

972

973

974

975

976

977

978

979

980

981

982

983

984

985

986

987

988

989

990

991

992

993

994

995

996

997

998

999

1000

1001

1002

1003

1004

1005

1006

1007

1008

1009

1010

1011

1012

1013

1014

1015

1016

1017

1018

1019

1020

1021

1022

1023

1024

1025

1026

1027

1028

1029

1030

1031

1032

1033

1035

1036

1037

1038

1039

1040

1041

1042

1043

1044

1045

1046

1047

1048

1049

1050

1051

1052

1053

1055

1056

1057

1058

1059

1060

1061

1062

1063

1064

1065

1066

1067

1068

1069

1070

1071

1072

1073

1074

1075

1076

1077

1078

1079

1080

1081

1082

1083

1084

1085

1086

1087

1088

1089

1090

1091

1092

1093

1094

1095

1096

1097

1098

1099

1100

1101

1102

1103

1104

WILEY SERIES IN SURVEY METHODOLOGY

Established in Part by WALTER A. SHEWHART AND SAMUEL S. WILKS

Editors: Mick P. Couper, Graham Kalton, J. N. K. Rao, Norbert Schwarz, Christopher Skinner, Lars Lyberg

Editor Emeritus: Robert M. Groves

A complete list of the titles in this series appears at the end of this volume.

Advances in Comparative Survey Methods

Multinational, Multiregional, and Multicultural Contexts (3MC)

Edited by

Timothy P. Johnson, Beth‐Ellen Pennell, Ineke A.L. Stoop, and Brita Dorer

This edition first published 2019
© 2019 John Wiley & Sons, Inc.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Timothy P. Johnson, Beth‐Ellen Pennell, Ineke A.L. Stoop, and Brita Dorer to be identified as the editors of this work has been asserted in accordance with law.

Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

Editorial Office
111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty
The publisher and the authors make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of fitness for a particular purpose. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for every situation. In view of on‐going research, equipment modifications, changes in governmental regulations, and the constant flow of information relating to the use of experimental reagents, equipment, and devices, the reader is urged to review and evaluate the information provided in the package insert or instructions for each chemical, piece of equipment, reagent, or device for, among other things, any changes in the instructions or indication of usage and for added warnings and precautions. The fact that an organization or website is referred to in this work as a citation and/or potential source of further information does not mean that the author or the publisher endorses the information the organization or website may provide or recommendations it may make. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. No warranty may be created or extended by any promotional statements for this work. Neither the publisher nor the author shall be liable for any damages arising herefrom.

Library of Congress Cataloging‐in‐Publication Data

Names: Johnson, Timothy P., editor. | Pennell, Beth‐Ellen, editor. | Stoop, Ineke A.L., editor. | Dorer, Brita, editor.
Title: Advances in Comparative Survey Methods: Multinational, Multiregional, and Multicultural Contexts (3MC) / edited by Timothy P. Johnson, Beth‐Ellen Pennell, Ineke A.L. Stoop, and Brita Dorer.
Description: Hoboken, NJ : John Wiley & Sons, Inc., 2018. | Series: Wiley series in survey methodology | Includes bibliographical references and index.
Identifiers: LCCN 2018016232 (print) | LCCN 2018016961 (ebook) | ISBN 9781118884966 (Adobe PDF) | ISBN 9781118885017 (ePub) | ISBN 9781118884980 (hardcover)
Subjects: LCSH: Social surveys–Methodology.
Classification: LCC HM538 (ebook) | LCC HM538 .A28 2018 (print) | DDC 300.72/3–dc23
LC record available at https://lccn.loc.gov/2018016232

Cover image: Wiley
Cover design: Courtesy of Jennifer Kelley

Preface

This book is the product of a multinational, multiregional, and multicultural (3MC) collaboration. It summarizes work initially presented at the Second International 3MC Conference that was held in Chicago during July 2016. The conference drew participants from 78 organizations and 32 countries. We are thankful to them all for their contributions. We believe the enthusiasm on display throughout the 2016 Conference has been captured in these pages and hope it can serve as a useful platform for providing direction to future advancements in 3MC research over the next decade.

The conference follows from the Comparative Survey Design and Implementation Workshops held yearly since 2003 (see https://www.csdiworkshop.org/). These workshops provide a forum and platform for those involved in research relevant to comparative survey methods.

We have many colleagues to thank for their efforts in support of this monograph. In particular, we are grateful to multiple staff at the University of Michigan, including Jamal Ali, Nancy Bylica, Kristen Cibelli Hibben, Mengyao Hu, Julie de Jong, Lawrence La Ferté, Ashanti Harris, Jennifer Kelley, and Yu‐chieh (Jay) Lin.

We are particularly indebted to Lars Lyberg, who pushed us to make every element of this book as strong as possible and provided detailed comments on the text.

We also thank the various committees that helped to organize the conference:

Conference Executive Committee
Beth‐Ellen Pennell (chair), University of Michigan
Timothy P. Johnson, University of Illinois at Chicago
Lars Lyberg, Inizio
Peter Ph. Mohler, COMPASS and University of Mannheim
Alisú Schoua‐Glusberg, Research Support Services
Tom W. Smith, NORC at the University of Chicago
Ineke A.L. Stoop, Institute for Social Research/SCP and the European Social Survey
Christof Wolf, GESIS‐Leibniz‐Institute for the Social Sciences

Conference Organizing Committee
Jennifer Kelley (chair), University of Michigan
Nancy Bylica, University of Michigan
Ashanti Harris, University of Michigan
Mengyao Hu, University of Michigan
Lawrence La Ferté, University of Michigan
Yu‐chieh (Jay) Lin, University of Michigan
Beth‐Ellen Pennell, University of Michigan

Conference Fundraising Committee
Peter Ph. Mohler (chair), COMPASS and University of Mannheim
Rachel Caspar, RTI International
Michele Ernst Staehli, FORS
Beth‐Ellen Pennell, University of Michigan
Evi Scholz, GESIS‐Leibniz‐Institute for the Social Sciences
Yongwei Yang, Google, Inc.

Conference Monograph Committee
Timothy P. Johnson (chair), University of Illinois at Chicago
Brita Dorer, GESIS‐Leibniz‐Institute for the Social Sciences
Beth‐Ellen Pennell, University of Michigan
Ineke A.L. Stoop, Institute for Social Research/SCP and the European Social Survey

Conference Short Course Committee
Alisú Schoua‐Glusberg (chair), Research Support Services
Brita Dorer, GESIS‐Leibniz‐Institute for the Social Sciences
Yongwei Yang, Google, Inc.

Support for the Second 3MC Conference was also multinational, and we wish to acknowledge and thank the following organizations for their generosity in helping to sponsor the Conference:

American Association for Public Opinion Research (AAPOR)

cApStAn

Compass, Mannheim, Germany

D3 Systems, Inc.

Data Documentation Initiative

European Social Survey

FORS

GESIS‐Leibniz‐Institute for the Social Sciences

Graduate Program in Survey Research, Department of Public Policy, University of Connecticut

ICPSR, University of Michigan

IMPAQ International

International Statistical Institute

Ipsos Public Affairs

John Wiley & Sons

Joint Program in Survey Methodology, University of Maryland

Mathematica Policy Research

ME/Max Planck Institute for Social Law and Social Policy

Nielsen

NORC at the University of Chicago

Oxford University Press

Program in Survey Methodology, University of Michigan

Research Support Services, Inc.

RTI International

Survey Methods Section, American Statistical Association

Survey Research Center, Institute for Social Research, University of Michigan

Survey Lab, University of Chicago

WAPOR

Westat

In addition, we owe a special debt of gratitude to the University of Michigan’s Institute for Social Research for their exceptional support during the several years it has taken to organize and prepare this monograph.

We also thank the editors at Wiley, Divya Narayanan, Jon Gurstelle, and Kshitija Iyer, who have provided us with excellent support throughout the development and production process. We also thank our editors at the University of Michigan, including Gail Arnold, Nancy Bylica, Julie de Jong, and Mengyao Hu, for all of their hard work and perseverance in formatting this book. Finally, the book cover was designed by Jennifer Kelley, who created a word cloud from the 2016 3MC Conference program.

This monograph is dedicated to the late Dr. Janet Harkness, who helped organize and lead the 3MC movement for many years. We have worked hard to make this contribution something she would be proud of.

8 June 2017

Timothy P. Johnson
Beth‐Ellen Pennell
Ineke A.L. Stoop
Brita Dorer

Notes on Contributors

Yasmin Altwaijri, King Faisal Specialized Hospital and Research Center, Riyadh, Kingdom of Saudi Arabia

Anna V. Andreenkova, Institute for Comparative Social Research (CESSI), Moscow, Russia

Dorothée Behr, GESIS – Leibniz Institute for the Social Sciences, Mannheim, Germany

Isabel Benitez, Department of Psychology, Universidad Loyola Andalucía, Seville, Spain

Annelies G. Blom