Is political polarization on the rise? Do various "populist" movements have anything in common? Is the opposition between left and right becoming obsolete and, if so, what might replace it? Many of the most pressing questions about contemporary politics involve public opinion. This incisive sociological introduction considers the formation of opinions as not just a matter of individual responses to external conditions, but as a social process in which people influence and are in turn influenced by others. David L. Weakliem illustrates how changes in economic and social conditions affect public opinion and how the distribution of opinions is shaped by the structure of interaction among people. He applies this approach to discuss topics such as political polarization, long-term trends in public opinion, and the prospects for democracy. Combining theory with up-to-date information on public opinion, the book will be of interest to researchers and students alike in sociology, political science, and communication studies.
Page count: 322
Year of publication: 2020
Cover
Front Matter
1. What is Public Opinion?
The Rise of Public Opinion
Public Opinion Surveys
Accuracy of Opinion Surveys
The Effects of Public Opinion
Interpreting Survey Responses
Change in Opinions
Other Ways of Measuring Public Opinion
Public Opinion and Popular Opinion
Sociology and Public Opinion
Summary and Conclusions
2. The Social Bases of Public Opinion
The Formation of Group Differences
Factors Affecting Group Differences
The Direction of Group Differences
Persistence and Change in Group Differences
The Combination of Group Memberships
Industrialization and the Rise of Class
From Industrial to Post-Industrial Society
Changes in Voting Patterns
The Relative Importance of Social and Economic Opinions
Education and Economic Opinions
Summary and Conclusions
3. Ideology
Recognition and Understanding of Ideological Terms
Symbolic and Operational Ideology
Attitude Constraint
Polarization
Explanations of the Rise in Polarization
Research on the Rise in Polarization
Dimensions of Ideology
Influences on Social and Economic Opinions
Influences on the Relative Importance of the Dimensions
Two Lefts and Two Rights?
Summary and Conclusions
4. Short-Term Change in Public Opinion
Individual and Aggregate Change
Economic Conditions and Public Opinion
Economic Inequality and Public Opinion
Parallel Publics
Perceptions of Facts
Trends in Public Opinion
Is there a Liberal Trend in Public Opinion?
Backlash
Social Movements and Public Opinion
Framing
Summary and Conclusions
5. Long-Term Change in Public Opinion
Socio-Demographic Change and Public Opinion
Modernization and Public Opinion
Modernization and Nationalism
Summary and Conclusions
6. Public Opinion and Liberal Democracy
Changes in Europe
Changes in the United States
The Results of Popular Discontent
Populism
Public Opinion and Elite Opinion
Explaining Ideological Divergence in the United States
Constitutional Originalism
Democratic Government in Modern Society
Continued Growth of Representative Democracy?
Summary and Conclusions
References
Index
End User License Agreement
Chapter 2
Figure 2.1
Correlation of Opinions with Democratic/Republican Voting, 1972–2016
Figure 2.2
Effects of Income and College Degree on Opinions about Redistribution
Chapter 3
Figure 3.1
Percent in Favor of Legal Abortion under Different Circumstances
Figure 3.2
Percent Rating Democratic and Republican Parties at Zero
Chapter 4
Figure 4.1
Homicide Rate and Support for the Death Penalty
Figure 4.2
An Example of “Parallel Publics”
Figure 4.3
Percent in Favor of Law Against Discrimination in Home Sales
Figure 4.4
Opinions About How Courts Treat Criminals
Figure 4.5
Estimates of General Public Opinion
Chapter 5
Figure 5.1
Agree that a Gay Man Should be Allowed to Teach in College
Figure 5.2
Modernization and Opinions on Moral Issues
Chapter 6
Figure 6.1
Confidence in Government, 1952–2016
Figure 6.2
Ratings of Democracy as a Form of Government
Figure 6.3
Ratings of Rule by the Army as a Form of Government
Chapter 3
Table 3.1
Correlations Involving Selected Social and Economic Opinions
William T. Armaline, Davita Silfen Glasberg, and Bandana Purkayastha, The Human Rights Enterprise: Political Sociology, State Power, and Social Movements
Daniel Béland, What is Social Policy? Understanding the Welfare State
Miguel A. Centeno and Elaine Enriquez, War & Society
Cedric de Leon, Party & Society: Reconstructing a Sociology of Democratic Party Politics
Nina Eliasoph, The Politics of Volunteering
Hank Johnston, States & Social Movements
Richard Lachmann, States and Power
Siniša Malešević, Nation-States and Nationalisms: Organization, Ideology and Solidarity
Andrew J. Perrin, American Democracy: From Tocqueville to Town Halls to Twitter
John C. Scott, Lobbying and Society: A Political Sociology of Interest Groups
John Stone and Polly Rizova, Racial Conflict in Global Society
David L. Weakliem, Public Opinion
David L. Weakliem
polity
Copyright © David L. Weakliem 2020
The right of David L. Weakliem to be identified as Author of this Work has been asserted in accordance with the UK Copyright, Designs and Patents Act 1988.
First published in 2020 by Polity Press
Polity Press, 65 Bridge Street, Cambridge CB2 1UR, UK
Polity Press, 101 Station Landing, Suite 300, Medford, MA 02155, USA
All rights reserved. Except for the quotation of short passages for the purpose of criticism and review, no part of this publication may be reproduced, stored in a retrieval system or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher.
ISBN-13: 978-1-5095-2949-0
A catalogue record for this book is available from the British Library.
Library of Congress Cataloging-in-Publication Data
Names: Weakliem, David L., author.
Title: Public opinion / David L. Weakliem.
Description: Cambridge, UK ; Medford, MA : Polity Press, 2020. | Series: Political sociology series | Includes bibliographical references and index. | Summary: “Why your opinions are not necessarily your own”--Provided by publisher.
Identifiers: LCCN 2020000114 (print) | LCCN 2020000115 (ebook) | ISBN 9781509529469 (hardback) | ISBN 9781509529476 (paperback) | ISBN 9781509529490 (epub)
Subjects: LCSH: Public opinion.
Classification: LCC HM1236 .W43 2020 (print) | LCC HM1236 (ebook) | DDC 303.3/8--dc23
LC record available at https://lccn.loc.gov/2020000114
LC ebook record available at https://lccn.loc.gov/2020000115
The publisher has used its best endeavours to ensure that the URLs for external websites referred to in this book are correct and active at the time of going to press. However, the publisher has no responsibility for the websites and can make no guarantee that a site will remain live or that the content is or will remain appropriate.
Every effort has been made to trace all copyright holders, but if any have been overlooked the publisher will be pleased to include any necessary credits in any subsequent reprint or edition.
For further information on Polity, visit our website: politybooks.com
In memory of Margaret F. Weakliem (1928–2017)
In a sense, this book began when I was a fellow at the Center for Advanced Study in the Behavioral Sciences in 1996–7. During that year, I began to think seriously about public opinion and explore some of the issues discussed in this book. I thank the Center for providing an excellent setting for research and reflection.
I also thank the students in my class in Public Opinion and Mass Communication at the University of Connecticut, who listened to me work out my ideas. I especially thank the students in the Fall 2019 class who gave me comments on drafts of chapters.
At Polity Press, Jonathan Skerrett suggested that I write a book on this topic and provided useful suggestions on how to approach it. Karina Jákupsdóttir encouraged me to keep going, even as I repeatedly fell behind schedule. Finally, six reviewers provided excellent comments on the manuscript.
I have drawn heavily on the collections of the Roper Center for Public Opinion Research. Unless otherwise indicated, survey questions mentioned in this book were obtained from the Roper Center’s iPoll database. I also made use of SDA (Survey Documentation and Analysis) for analyses using the General Social Survey and American National Election Studies, and the World Values Survey website for analyses using the WVS.
Finally, I thank my wife Judith Milardo and stepdaughter Laura Spalding for putting up with many distracted silences while I worked on this book.
It seems obvious that in a democracy, the government should be guided by public opinion: after all, democracy means rule by the people. It is difficult, however, to say exactly what public opinion is: “to speak with precision of public opinion is a task not unlike coming to grips with the Holy Ghost” (Key 1961, p. 8). There are polls and surveys on many different topics, but these attempts to measure public opinion raise a number of questions. One is simple accuracy: can a survey given to 1,000 people tell us about the opinions of the whole nation? Others involve the interpretation of answers. Sometimes different surveys seem to point to different conclusions. In other cases, the results seem clear, but the underlying issues are complicated ones about which most people are not well informed. For example, recent surveys in the United States show widespread support for raising the minimum wage to $15.00 an hour, but many economists think that this would lead to a substantial increase in unemployment, which people presumably do not want. Suppose that these economists are correct: should we then conclude that the public does not really favor an increase, because it does not understand what effects it would have? Or should we say that the public thinks that an increase in the wages of low-wage workers is more important than any effect on employment, or that it favors some increase in the minimum wage, but is not committed to an exact number?
This chapter will address questions of measuring and interpreting public opinion. Subsequent chapters consider the relations between society and public opinion. There are often differences of average opinion among groups, such as social classes, racial and ethnic groups, or residents of cities and rural areas. Chapter 2 considers the reasons that such differences appear, persist, and change. Chapter 3 discusses the organization of opinions, particularly the distinction between left and right (liberal and conservative). Chapters 4 and 5 consider change in opinions: Chapter 4 focuses on the period for which survey data are available – from the 1930s to the present – while Chapter 5 takes a longer view and asks whether there are trends that extend over centuries. Finally, Chapter 6 considers how public attitudes toward government are changing, and what these changes mean for the future of democracy.
The expression “public opinion” first appeared in the late 1700s, and soon came into wide use. During the nineteenth century, many observers spoke of the power of public opinion. In 1823, Lord John Russell, a Member of Parliament and future Prime Minister of Great Britain, wrote: “it is the fashion to point out the increased and increasing influence of public opinion” (Russell 1823, p. 429). A few years later, a popular American college textbook called public opinion “the sense and sentiment of the community, necessarily irresistible, showing its sovereign power everywhere” (Lieber 1839, p. 253). At about the same time, Alexis de Tocqueville (1850 [1969], p. 124) wrote that public opinion was the “directing power” in both the United States and France, despite the differences in their forms of government: “in America it works through elections and decrees, in France by revolutions.”
The appearance and growth of “public opinion” resulted from a change in the relationship between government and society. Plamenatz (1975, p. 345) defines public opinion as “opinions about the government and its policies current in circles outside the [government] hierarchy and yet close enough to it to acquire such opinions and to bring them to bear on it.” For most of history, only a small elite group could regularly participate in government; most people had no way to even become aware of what policies or actions the government was considering. Ordinary people sometimes tried to influence the government through collective protests, but these usually involved objections to the conduct of local officials or landowners rather than attempts to change government policy (Tilly 1983). This situation started to change with the Industrial Revolution, as more people became aware of the government and acquired more means to influence it. The sources of this change included the spread of literacy, the appearance of newspapers and magazines, the growth of cities, and the expansion of the “middle class” – people with enough knowledge and leisure time to pay attention to public affairs. These developments meant that news could spread more quickly and be discussed more widely, so that people could form opinions about government policy and organize to influence it.
At first, public opinion was often understood to mean middle-class opinion, but the range of the “public” expanded as time went on. The weakening of property restrictions on voting and the general adoption of the secret ballot in the late nineteenth century were important parts of this process, since voting gave ordinary people an easy and inexpensive way to influence the government (Rokkan 1961). At the same time, rising educational levels and the growth of the mass media made it easier for people to be informed about public affairs. Today, the “public” is generally understood to include the entire adult population, and people have become accustomed to offering opinions on all kinds of topics.
Although nineteenth-century observers agreed that public opinion was important, they found it difficult to be sure of what that opinion was on any given question. In an election, voters merely choose a party or a person: they do not get to vote on specific policy proposals. Moreover, candidates often give different messages to different audiences or use ambiguous language that can be interpreted in a variety of ways. As a result, we know that voters preferred something about the winner, but do not know exactly what that was. People can express opinions more precisely by letters and petitions to government officials, or by public rallies and demonstrations, but only a small fraction of the public engages in such actions, and it is possible that the opinions of the people who do not are very different from the opinions of those who do. Moreover, the opinions of the people who do not participate cannot safely be ignored: they might resist a policy after it is enacted or vote against the government at the next election. The positions taken by organizations and the opinions expressed in the media can also be taken as indicators of public opinion, but these have a similar limitation: the members of an organization might not share the views of the leaders, and the readers of a publication might not share the views of the writers.
In the 1930s, a new form of measurement appeared which transformed the study of public opinion: the survey. The basic procedure of an opinion survey is to ask a standard set of questions to a group of people who are supposed to represent the public (the “sample”). Surveys are conducted in the same way that elections are: participants (also known as “respondents”) answer the questions in private and are assured that their individual answers will not be disclosed. Interviewers are instructed to be neutral – to simply record answers without raising objections or expressing their own opinions. Usually the participants choose their answer from a standard list, rather than responding in their own words. In effect, every survey question can be regarded as a small-scale referendum conducted by secret ballot. George Gallup, who founded the first survey organization, saw surveys as a way to improve the operation of democracy by giving political leaders a more accurate and detailed picture of public opinion (Gallup 1938).
In addition to giving information on opinions in the public as a whole, surveys can include background questions on characteristics such as race, gender, and educational level, making it possible to distinguish the opinions of different kinds of people. Moreover, most surveys ask for opinions on a number of topics, so it is also possible to examine the relationships among different opinions, or between opinions and voting choices.
The first surveys were conducted by commercial organizations and focused on predicting elections and measuring opinion on issues of the day (surveys of this kind are often known as “polls”). If an issue remained prominent, the polls sometimes repeated questions that they had previously used. For example, the Gallup Poll asked “are you in favor of labor unions?” in July 1936, and asked the same question again in 1937, twice in 1938, three times in 1939, and twice in 1940. When questions are repeated, it is possible to examine changes in opinion, both in the general public and in specific groups. For example, in June 1937, 70 percent said that they were in favor of labor unions and 22 percent that they were not; in October 1938, 58 percent were in favor and 28 percent were not.
Academic researchers soon began to conduct surveys, and sometimes made systematic efforts to repeat questions on a regular basis. The American National Election Studies, which began in 1948, focused on voting and political opinions, while the General Social Survey began in 1972 and covered a wide range of topics. At the same time, other polls and surveys continue to repeat questions more or less frequently. As a result, there is now a record of change in public opinion, which on some topics extends over a period of more than eighty years.
Surveys were soon adopted in several other nations, including Canada, France, Australia, and Great Britain. National survey organizations sometimes agreed to include the same question, or sometimes several questions, in their polls, making it possible to compare opinion across nations. Academic researchers followed with more systematic efforts to develop “comparative surveys,” in which an entire survey was translated into the local language and administered in different nations. Because of the expense and organizational effort necessary to carry out comparative surveys, only a few were conducted until the 1970s. Since that time, however, they have become more common, and some of them are part of continuing series that repeat questions over time. The Eurobarometer, which began in 1974, includes nations in the European Union. The World Values Survey began in 1981, when it included ten nations; the seventh wave, conducted in 2017–2020, will include about eighty nations from all parts of the world. Since 1985, the International Social Survey Programme has conducted an annual survey focusing on a particular topic: examples include the role of government in 2016, social networks in 2017, and religion in 2018.
The accumulation of survey data has increased the range of research that is possible: analysts are able to compare not only different kinds of people, but also different places or times, or all of these levels at once. For example, Brooks, Nieuwbeerta, and Manza (2006) used data from 112 election surveys to compare the effects of gender, class, and religion on voting choices in six nations between 1964 and 1998.
Another important recent development has been the increased use of survey experiments, in which different respondents in a survey are randomly selected to receive different information or different forms of a question. Simple experiments have been used since the beginnings of survey research, but when surveys are given over the internet, it is possible to have more complex designs and use a wide variety of cues – for example, asking respondents to read a passage or watch a video before they answer a question.
A basic problem in survey research is how to obtain a representative sample – a sample that is like the population in all respects, except that it includes a smaller number of people. At first, most surveys sought to achieve this goal by “quota samples.” In this method, interviewers were given quotas for certain characteristics that were thought to be important for opinion, such as gender, race, and age, and were otherwise left free to choose respondents in any way they saw fit. For example, an interviewer might be instructed to obtain twenty interviews, which would include ten men and ten women, seventeen whites and three blacks, four people aged 18 to 29, twelve people aged 30 to 64, and four people aged 65 and above (see Berinsky 2006 for a more detailed description of the procedures used in early surveys). Although this method guaranteed that the sample would be representative in terms of the characteristics for which there were quotas, it did not necessarily make it representative in other respects. For example, if interviewers obtained their samples by approaching people in public places, then people who rarely left their homes or who worked unusual hours would be underrepresented.
An alternative way of obtaining a representative sample is a random (probability) sample, in which every person is assigned a definite chance of being chosen for the sample, and the decision of whether to include them is made at random – in effect, by a lottery. There is no guarantee that a particular random sample will be exactly representative, but it is likely to be close to the population. Suppose that a random sample of 1,000 people is drawn from a population in which 50 percent are women. It is likely (about an 80% chance) that between 48 and 52 percent of the people in the sample will be women, and almost certain (about a 99.9% chance) that between 45 and 55 percent will be women.
Random samples have three major advantages over quota samples. The first is that they will be approximately representative in terms of all characteristics, not just the ones that are included in the quotas. The second is that as a random sample becomes larger, the distribution of characteristics in the sample tends to come closer to the distribution in the population. This means that the accuracy of the sample estimates can be increased by increasing the size of the sample. For example, suppose that the population is 50 percent female. The chance that the sample will be within two percent of that figure – that is, between 48 and 52 percent female – rises from about 80 percent in a random sample of 1,000 to about 97 percent in a sample of 3,000 and 99.5 percent in a sample of 5,000. In contrast, with other methods of sampling, increasing the size of the sample will not necessarily bring it closer to the population. The third advantage of a random sample is related to the second: given the size of the sample, it is possible to calculate a “margin of error” – that is, to estimate how much difference there might be between the sample and the population.
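The probabilities quoted in the last two paragraphs follow from the normal approximation to the sampling distribution of a proportion. The short Python sketch below is illustrative only (the function name and rounding are mine, not the author's), but it reproduces the figures in the text:

```python
import math

def prob_within(n, p, tol):
    """Chance that a random sample of size n yields a proportion
    within +/- tol of the true population proportion p,
    using the normal approximation to the binomial."""
    se = math.sqrt(p * (1 - p) / n)    # standard error of the sample proportion
    z = tol / se
    return math.erf(z / math.sqrt(2))  # P(|Z| <= z) for a standard normal Z

# Population 50 percent female, sample of 1,000:
print(prob_within(1000, 0.5, 0.02))  # about 0.79 -- "between 48 and 52 percent"
print(prob_within(1000, 0.5, 0.05))  # about 0.998 -- "between 45 and 55 percent"

# Larger samples bring the sample closer to the population:
print(prob_within(3000, 0.5, 0.02))  # about 0.97
print(prob_within(5000, 0.5, 0.02))  # about 0.995
```

The same standard-error formula, inverted, gives the “margin of error” that survey organizations report for a sample of a given size.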
In the United States, most surveys shifted from quota samples to random samples after about 1950. This change was facilitated by the growth of telephone ownership, which made it easier to obtain a random sample. If everyone has a telephone, then a random sample of the public can be obtained by simply dialing randomly generated telephone numbers. Quota sampling lasted longer in many other countries, but random sampling has come to be the standard method. Experience shows that quota sampling often was reasonably effective, and statistical adjustments can be applied to produce a closer match to a representative sample (Berinsky, Powell, Schickler, and Yohai 2011). Nevertheless, random sampling is preferred because it has a firm theoretical foundation – we know that a random sample is likely to be accurate within limits determined by the size of the sample.
In principle, random sampling provides a definitive solution to the problem of choosing a representative sample. In practice, however, researchers do not have complete control over the sample. Survey researchers can attempt to contact a random sample of the public, but some of the people in that sample will not be at home, and others will refuse to participate. If the people who do not participate are different from those who do – for example, more suspicious or less interested in public affairs – then the people who actually answer the questions will not be a representative sample of the population. The proportion of people who do not respond has been increasing in recent decades, partly because it has become easier for people to screen telephone calls, and partly because people have simply become more reluctant to participate in surveys. Obtaining high response rates is still possible – the response rate to the 2016 General Social Survey was over 60 percent – but it is time-consuming and expensive. Typical telephone surveys today have response rates of less than ten percent (Keeter et al. 2017). Unlike sampling error, the error resulting from non-response will not become smaller as the sample size increases.
The rising rates of non-response have led to concerns that surveys are becoming less accurate. The results of the British referendum on membership in the European Union in June 2016 and the American presidential election in November 2016 surprised many observers and led to a good deal of criticism of the polls. However, closer examination shows that the polls were not far off in either case. In the referendum, they indicated that the vote would be close: in the final polls before the vote, an average of 51 percent said that they would vote to remain (Hanretty 2016). Moreover, the margin seemed to narrow as the referendum approached, and several polls taken in the month before the vote showed a majority in favor of leaving (Hobolt 2016, p. 1262). Because the race was close, a relatively small error in the polls was enough to make a difference in the outcome of the referendum. In the United States, the average of surveys taken just before the election showed Hillary Clinton with 46.8 percent of the vote and Donald Trump with 43.6 percent (RealClear Politics 2016). In fact, Clinton received 48.2 percent and Trump received 46.2 percent. That is, Clinton’s lead in the vote was 2.0 percent, only slightly smaller than the 3.2 percent average of the surveys. A review by the American Association for Public Opinion Research (AAPOR 2017) concluded that the national polls were somewhat more accurate in 2016 than in most recent elections, although some of the state polls were less accurate.
A comprehensive study by Jennings and Wlezien (2018) including elections from 45 nations over 75 years found no evidence that survey predictions are becoming less accurate. However, the decline in response rates means that “random” samples now are in much the same position as quota samples: they seem to represent the population fairly well in practice, but there is no guarantee that they will continue to do so. Some survey organizations have turned to selecting respondents from a pool of people who have agreed to participate in surveys, and then weighting the sample to match the population in terms of various characteristics. In a sense, this approach is a more sophisticated kind of quota sampling (see Hillygus 2016, pp. 39–43 for further discussion).
The comparison of election results to survey predictions offers some general lessons for the interpretation of surveys. First, the people who do not respond are generally less interested and less engaged than those who do. Surveys regularly over-estimate voter turnout, and it is reasonable to assume that they also over-estimate general levels of political knowledge and interest. They may also over-estimate confidence in social institutions and under-estimate levels of alienation and general discontent. Second, there is some unpredictable variation in survey results beyond what could be expected from sampling error. Given the number of surveys taken before American presidential elections, the expected sampling error in the average of all predictions is very small, less than one tenth of one percent. However, there is often a difference of about two or three percent between the average survey prediction and the actual results (Traugott 2005). For example, if an average of all polls shows a candidate getting 52 percent of the vote, it would not be too unusual for that candidate to lose narrowly, or to get 54 or 55 percent of the vote. This error does not have any consistent direction – sometimes the Democrats do better than predicted, and sometimes the Republicans do (AAPOR 2017, p. 11). The general implication is that when considering the possibility of change in public opinion, we should focus on large or sustained movements – small differences from year to year could be illusory, even if they are statistically significant.
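The gap between expected and observed error can be made concrete. If all pre-election polls were pooled as one large random sample, the sampling error of their average would shrink with the total number of respondents. The poll counts below are illustrative assumptions of mine, not figures from the text:

```python
import math

def se_of_poll_average(n_polls, n_per_poll, p=0.5):
    """Standard error of the average of independent polls, treated as
    one pooled random sample of n_polls * n_per_poll respondents."""
    return math.sqrt(p * (1 - p) / (n_polls * n_per_poll))

# A few hundred polls of roughly 1,000 respondents each (assumed figures):
print(se_of_poll_average(300, 1000))  # about 0.0009, i.e. under a tenth of a percent
```

Since the average prediction nevertheless misses the result by two or three percentage points, most of the observed error must come from sources other than sampling.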
Even if a sample accurately represents the population, the survey is an unusual kind of interaction – an anonymous interview with a stranger, in which people are not challenged or asked to explain or justify their opinions. Do the opinions expressed in this special situation correspond to what people would do or say in other settings? Experience shows that surveys are useful in predicting elections, but surveys are designed on the model of an election, in which a person casts a vote in private. This raises the question of whether the opinions measured in surveys can predict other kinds of behavior. The noted historian Arthur M. Schlesinger, Jr. maintained that the kind of public opinion measured in surveys is of little interest: “public opinion polling … elicits essentially an irresponsible expression of opinion – irresponsible because no action is intended to follow the expression … it is responsible opinion – opinion when the chips are down, opinion which issues directly in decision and action – which is relevant to the historical process and of primary interest to the historian” (Schlesinger 1962). That is, opinions which do not lead to action do not affect history.
However, since Schlesinger wrote, there has been a good deal of research on the relationship between public opinion and government policy, and the conclusion is that the opinions measured in surveys do have an influence. Erikson, MacKuen, and Stimson (2002) and Soroka and Wlezien (2010) examine changes in public opinion and government policies over time and find that when public opinion moves in one direction, policy generally follows it. This connection holds even after taking account of the party in power: for example, when American public opinion became more conservative in the late 1970s, policies did as well, although there was a Democratic president and Democratic majorities in both houses of Congress. Gilens (2012) compares proposed laws, and finds that those which get more support in surveys are more likely to be enacted, although he also finds that the opinions of affluent people have more influence than those of ordinary people. Borch (2007) compares states, and finds that those in which public opinion is more liberal tend to have more liberal policies. Brooks and Manza (2008) compare nations and find that public opinion predicts spending on social welfare programs, while Crutchfield and Pettinicchio (2009) find that nations with a higher “taste for inequality” have more income inequality and higher rates of imprisonment.
One possible reason that the opinions measured in surveys influence policy is that public officials pay attention to surveys, presumably because they believe that enacting popular policies will help them to win re-election. Another is that the opinions expressed in surveys often result in action, perhaps not the dramatic kind that Schlesinger spoke of, but in various kinds of everyday behavior. For example, attitudes toward other racial and ethnic groups can be expressed in interactions with neighbors, friends, and co-workers. If this is the case, public opinion may have a direct effect on social conditions, apart from government policy. Weakliem, Andersen, and Heath (2005) find that income inequality is higher in nations where more people say that more productive workers deserve to be paid more than their less productive colleagues. Their explanation is that employers have to pay attention to popular beliefs about fairness when setting wages. People have ways of communicating these beliefs through behavior: for example, by working harder when they think they are being paid fairly or being more likely to quit when they think that they are not.
Nevertheless, Schlesinger makes an important point. What he calls “opinion which issues directly in decision and action” may not be the only kind that matters, but it is likely to have more effect than opinions which are expressed in private. Moreover, collective action is not just a spontaneous expression of opinion, but requires organization and resources. The implication is that social movements can have an independent influence on policy, apart from general public opinion. Even so, a favorable climate of public opinion will help a movement to survive and grow, since it means that there are more potential supporters and fewer potential opponents. For their part, social movements often try to influence general public opinion.
Another concern about surveys is whether they can represent opinions on complex issues. Different questions that seem to have the same meaning sometimes produce substantially different responses. For example, in August 1953, when asked “as things stand now, do you feel that the war in Korea has been worth fighting, or not?” only 27 percent said that it had. A month later, another survey asked “As you look back on the Korean war, do you think the United States did the right thing in sending troops to stop the Communist invasion, or should we have stayed out of it entirely?”: 64 percent said the United States did the right thing (Mueller 1973). There is no reason to think that opinions about the war changed dramatically during this time, so the apparent difference of opinion must have been due to the difference in the questions.
A contemporary example is provided by three recent survey questions on the death penalty. When asked “Would you like to see the death penalty abolished nationwide, or not,” 31 percent said that it should be abolished. When asked, “Which punishment do you prefer for people convicted of murder? … The death penalty, life in prison with no chance of parole,” 47 percent chose the death penalty and 52 percent life without parole. Finally, when asked “In general, what do you think should be the punishment for people convicted of murder? Death penalty, life in prison with no chance of parole, depends on the circumstances,” 80 percent chose the death penalty or “depends on the circumstances,” and only 19 percent chose a life sentence without parole. The answers to the different questions suggest that support for keeping the death penalty might be as low as 47 percent or as high as 80 percent.
Sometimes differences of this kind can be interpreted by supposing that some people have views that are too complex to be represented by either question alone. A person who agreed with the decision to send troops to Korea but disagreed with the subsequent conduct of the war might say the war had not been worth fighting; someone who believed that the death penalty is sometimes justified in principle but did not trust the criminal justice system to apply it fairly might favor abolishing it. However, there are cases in which it is not possible to reconcile the answers to different questions in this way. For example, in December 2013 a survey asked “The federal minimum wage is now $7.25. Do you think the federal minimum wage should be raised, lowered, or should it remain the same?” 71 percent said that the minimum wage should be raised, 25 percent that it should remain the same, and only two percent said that it should be lowered. In January 2014 another survey asked: “As you may know, the federal government sets the national minimum wage – the lowest rate in dollars per hour that most workers should be paid – which is now set at seven dollars and twenty-five cents an hour. Which of the following comes closest to your view on how the federal government should handle the minimum wage? … The government should raise the minimum wage because it would help lots of people pay their bills. The government should not raise the minimum wage because it would cause businesses to cut jobs. There shouldn’t be a minimum wage because government shouldn’t tell businesses what to pay their employees.” In response to this question, 56 percent were in favor of an increase, 25 percent said it should stay the same, and 15 percent said there should not be a minimum wage. That is, one survey found that only two percent thought that the minimum wage should be less than $7.25, while the other found that 15 percent thought that it should be abolished completely. The only plausible explanation for this difference is that some people were persuaded when they heard the argument that “the government shouldn’t tell businesses what to pay their employees.”
Sometimes the same question will get different responses depending on the questions that precede it in the survey. In one dramatic example from the early 1970s, 37 percent said they were in favor of allowing Soviet journalists in the United States, but when the question followed one about whether the Soviet Union should allow American journalists, 73 percent were in favor (Schuman and Presser 1981).
Zaller (1992) proposed a model that helps to make sense of the effects of question wording and question order on responses. In his view, even people who are well informed and interested in public affairs do not have definite “positions” on most issues in the way that a candidate for public office might. Rather, people have a variety of relevant feelings, thoughts, and pieces of information. When they answer survey questions, people “make up attitude reports as best they can as they go along,” based on
