An Introduction to Science and Technology Studies - Sergio Sismondo - E-Book


Sergio Sismondo

Description

An Introduction to Science and Technology Studies, Second Edition reflects the latest advances in the field while continuing to provide students with a road map to the complex interdisciplinary terrain of science and technology studies.

  • Distinctive in its attention to both the underlying philosophical and sociological aspects of science and technology
  • Explores core topics such as realism and social construction, discourse and rhetoric, objectivity, and the public understanding of science  
  • Includes numerous empirical studies and illustrative examples to elucidate the topics discussed
  • Now includes new material on political economies of scientific and technological knowledge, and democratizing technical decisions
  • Other features of the new edition include improved readability, updated references, chapter reorganization, and more material on medicine and technology


Page count: 474

Year of publication: 2011




Contents

Preface

1 The Prehistory of Science and Technology Studies

A View of Science

A View of Technology

A Preview of Science and Technology Studies

2 The Kuhnian Revolution

Incommensurability: Communicating Among Social Worlds

Conclusion: Some Impacts

3 Questioning Functionalism in the Sociology of Science

Structural-functionalism

Ethos and Ethics

Is the Conduct of Science Governed by Mertonian Norms?

Interpretations of Norms

Norms as Resources

Boundary Work

The Place of Norms in Science?

4 Stratification and Discrimination

An Efficient Meritocracy or an Inefficient Old Boy’s Network?

Contributions to Productivity

Discrimination

Getting In, Staying In, and Getting On

Conclusion

5 The Strong Programme and the Sociology of Knowledge

The Strong Programme

Interest Explanations

Knowledge, Practices, Cultures

6 The Social Construction of Scientific and Technical Realities

What Does “Social Construction” Mean?

Richness in Diversity

7 Feminist Epistemologies of Science

Can There Be a Feminist Science and Technology?

The Technoscientific Construction of Gender

From Feminist Empiricism to Standpoint Theory

From Difference Feminism to Anti-Essentialism

Gender, Sex, and Cultures of Science and Technology

8 Actor-Network Theory

Actor-Network Theory: Relational Materialism

Some Objections to Actor-Network Theory

Conclusions

9 Two Questions Concerning Technology

Is Technology Applied Science?

Does Technology Drive History?

10 Studying Laboratories

The Idea of the Laboratory Study

Learning to See

Tinkering, Skills, and Tacit Knowledge

Creating Orderly Data

Crystallization of Formal Accounts

Culture and Power

Extensions

11 Controversies

Opening Black Boxes Symmetrically

Reasonable Disagreements

Experimenters’ Regress

Interests and Rhetoric

Technological Controversies

The Resolution of Controversies

How to Understand Controversy Studies

Captives of Controversies: The Politics of STS

12 Standardization and Objectivity

Getting Research Done

Absolute Objectivity

Formal Objectivity

What About Interpretive Flexibility?

A Tentative Solution

Conclusions

13 Rhetoric and Discourse

Rhetoric in Technical Domains?

The Strength of Arguments

The Scope of Claims

Rhetoric in Context

Reflexivity

Metaphors and Politics

Conclusions

14 The Unnaturalness of Science and Technology

The Status of Experiments

Local Knowledge and Delocalization

The Unnaturalness of Experimental Knowledge

The Unnaturalness of Theoretical Knowledge

A Link to Technology?

The Order of Nature?

15 The Public Understanding of Science

The Shape of Popular Science and Technology

The Dominant Model and Its Problems

The Deficit Model

Lingering Deficits

16 Expertise and Public Participation

Problems with Expertise

Public Participation in Technical Decisions

Citizen Science and Technology

17 Political Economies of Knowledge

Commercialization of Research

STS and Global Development

References

Index

Praise for the first edition

“This book is a wonderful tool with which to think. It offers an expansive introduction to the field of science studies, a rich exploration of the theoretical terrains it comprises and a sheaf of well-reasoned opinions that will surely inspire argument.”

Geoffrey C. Bowker, University of California, San Diego

“Sismondo’s Introduction to Science and Technology Studies . . . for anyone of whatever age and background starting out in STS, must be the first-choice primer: a resourceful, enriching book that will speak to many of the successes, challenges, and as-yet-untackled problems of science studies. If the introductory STS course you teach does not fit his book, change your course.”

Jane Gregory, ISIS, 2007

This second edition first published 2010 © 2010 Sergio Sismondo

Edition history: Blackwell Publishing Ltd (1e, 2004)

Blackwell Publishing was acquired by John Wiley & Sons in February 2007. Blackwell’s publishing program has been merged with Wiley’s global Scientific, Technical, and Medical business to form Wiley-Blackwell.

Registered Office

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ,

United Kingdom

Editorial Offices

350 Main Street, Malden, MA 02148-5020, USA

9600 Garsington Road, Oxford, OX4 2DQ, UK

The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, for customer services, and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com/wiley-blackwell.

The right of Sergio Sismondo to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

Sismondo, Sergio.

An introduction to science and technology studies / Sergio Sismondo. - 2nd ed.

p. cm.

Includes bibliographical references and index.

ISBN 978-1-4051-8765-7 (pbk. : alk. paper)

1. Science-Philosophy. 2. Science-Social aspects. 3. Technology-Philosophy. 4. Technology-Social aspects. I. Title.

Q175.S5734 2010

501-dc22 2009012001

Preface

Science & Technology Studies (STS) is a dynamic interdisciplinary field, rapidly becoming established in North America and Europe. The field is a result of the intersection of work by sociologists, historians, philosophers, anthropologists, and others studying the processes and outcomes of science, including medical science, and technology. Because it is interdisciplinary, the field is extraordinarily diverse and innovative in its approaches. Because it examines science and technology, its findings and debates have repercussions for almost every understanding of the modern world.

This book surveys a group of terrains central to the field, terrains that a beginner in STS should know something about before moving on. For the most part, these are subjects that have been particularly productive in theoretical terms, even while other subjects may be of more immediate practical interest. The emphases of the book could have been different, but they could not have been very different while still being an introduction to central topics in STS.

An Introduction to Science and Technology Studies should provide an overview of the field for any interested reader not too familiar with STS’s basic findings and ideas. The book might be used as the basis for an upper-year undergraduate, or perhaps graduate-level, course in STS. But it might also be used as part of a trajectory of more focused courses on, say, the social study of medicine, STS and the environment, reproductive technologies, science and the military, or science and public policy. Because anybody putting together such courses would know how those topics should be addressed - or certainly know better than does the author of this book - these topics are not addressed here.

However the book is used, it should almost certainly be alongside a number of case studies, and probably alongside a few of the many articles mentioned in the book. The empirical examples here are not intended to replace rich detailed cases, but only to draw out a few salient features. Case studies are the bread and butter of STS. Almost all insights in the field grow out of them, and researchers and students still turn to articles based on cases to learn central ideas and to puzzle through problems. The empirical examples used in this book point to a number of canonical and useful studies. There are many more among the references to other studies published in English, and a great many more in English and in other languages that are not mentioned.

This second edition makes a number of changes. The largest is reflected in a tiny adjustment of abbreviation. In the first edition, the field’s name was abbreviated S&TS. The ampersand was supposed to emphasize the field’s name as Science and Technology Studies, rather than Science, Technology, and Society, the latter of which was generally known as STS in the 1970s and 1980s. When the ampersand seemed important, the two STSs differed considerably in their approaches and subject matters: Science and Technology Studies was a philosophically radical project of understanding science and technology as discursive, social, and material activities; Science, Technology, and Society was a project of understanding social issues linked to developments in science and technology, and how those developments could be harnessed to democratic and egalitarian ideals. When the first edition of this book was written, the ampersand seemed valuable for identifying its terrain. However, the fields of STS (with or without ampersand) have expanded so rapidly that the two STSs have blended together. The first STS (with ampersand) became increasingly concerned with issues about the legitimate places of expertise, about science in public spheres, about the place of public interests in scientific decision-making. The other STS (without) became increasingly concerned with understanding the dynamics of science, technology, and medicine. Thus, many of the most exciting works have joined what would once have been seen as separate. This edition, then, increases attention to work being done on the politics of science and technology, especially where STS treats those politics in more theoretical and general terms. As a result, the public understanding of science, democracy in science and technology, and political economies of knowledge each get their own chapters in this edition, expanding the scope of the book.

Besides this large change, there is considerable updating of material from the first edition, and there are some reorganizations. In particular, the chapter on feminist epistemologies of science has been brought forward, to put it in better contact with the chapters on social constructivism and the strong programme. The four chapters on laboratories, controversies, objectivity, and creating order have been reorganized into three.

I hope that these additions and changes make the book more useful to students and teachers of STS than was the first. It is to all teachers and students in the field, and especially my own, that I dedicate this book.

Sergio Sismondo

1

The Prehistory of Science and Technology Studies

A View of Science

Let us start with a common picture of science. It is a picture that coincides more or less with where studies of science stood some 50 years ago, that still dominates popular understandings of science, and even serves as something like a mythic framework for scientists themselves. It is not perfectly uniform, but instead includes a number of distinct elements and some healthy debates. It can, however, serve as an excellent foil for the discussions that follow. At the margins of science, and discussed in the next section, is technology, typically seen as simply the application of science.

In this picture, science is a formal activity that creates and accumulates knowledge by directly confronting the natural world. That is, science makes progress because of its systematic method, and because that method allows the natural world to play a role in the evaluation of theories. While the scientific method may be somewhat flexible and broad, and therefore may not level all differences, it appears to have a certain consistency: different scientists should perform an experiment similarly; scientists should be able to agree on important questions and considerations; and most importantly, different scientists considering the same evidence should accept and reject the same hypotheses. The result is that scientists can agree on truths about the natural world.

Within this snapshot, exactly how science is a formal activity is open. It is worth taking a closer look at some of the prominent views. We can start with philosophy of science. Two important philosophical approaches within the study of science have been logical positivism, initially associated with the Vienna Circle, and falsificationism, associated with Karl Popper. The Vienna Circle was a group of prominent philosophers and scientists who met in the early 1930s. The project of the Vienna Circle was to develop a philosophical understanding of science that would allow for an expansion of the scientific worldview - particularly into the social sciences and into philosophy itself. That project was immensely successful, because positivism was widely absorbed by scientists and non-scientists interested in increasing the rigor of their work. Interesting conceptual problems, however, caused positivism to become increasingly focused on issues within the philosophy of science, losing sight of the more general project with which the movement began (see Friedman 1999; Richardson 1998).

Logical positivists maintain that the meaning of a scientific theory (and anything else) is exhausted by empirical and logical considerations of what would verify or falsify it. A scientific theory, then, is a condensed summary of possible observations. This is one way in which science can be seen as a formal activity: scientific theories are built up by the logical manipulation of observations (e.g. Ayer 1952 [1936]; Carnap 1952 [1928]), and scientific progress consists in increasing the correctness, number, and range of potential observations that its theories indicate.

For logical positivists, theories develop through a method that transforms individual data points into general statements. The process of creating scientific theories is therefore an inductive one. As a result, positivists tried to develop a logic of science that would make solid the inductive process of moving from individual facts to general claims. For example, scientists might be seen as creating frameworks in which it is possible to uniquely generalize from data (see Box 1.1).

Positivism has immediate problems. First, if meanings are reduced to observations, there are many “synonyms,” in the form of theories or statements that look as though they should have very different meanings but do not make different predictions. For example, Copernican astronomy was initially designed to duplicate the (mostly successful) predictions of the earlier Ptolemaic system; in terms of observations, then, the two systems were roughly equivalent, but they clearly meant very different things, since one put the Earth in the center of the universe, and the other had the Earth spinning around the Sun. Second, many apparently meaningful claims are not systematically related to observations, because theories are often too abstract to be immediately cashed out in terms of data. Yet surely abstraction does not render a theory meaningless. Despite these problems and others, the positivist view of meaning taps into deep intuitions, and cannot be entirely dismissed.

Even if one does not believe positivism’s ideas about meaning, many people are attracted to the strict relationship that it posits between theories and observations. Even if theories are not mere summaries of observations, they should be absolutely supported by them. The justification we have for believing a scientific theory is based on that theory’s solid connection to data. Another view, then, that is more loosely positivist, is that one can by purely logical means make predictions of observations from scientific theories, and that the best theories are ones that make all the right predictions. This view is perhaps best articulated as falsificationism, a position developed by (Sir) Karl Popper (e.g. 1963), a philosopher who was once on the edges of the Vienna Circle.

Box 1.1 The problem of induction

Among the asides inserted into the next few chapters are a number of versions of the “problem of induction.” These are valuable background for a number of issues in Science and Technology Studies (STS). At least as stated here, these are theoretical problems that only occasionally become practical ones in scientific and technical contexts. While they could be paralyzing in principle, in practice they do not come up. One aspect of their importance, then, is in finding out how scientists and engineers contain these problems, and when they fail at that, how they deal with them.

The problem of induction arose with David Hume’s general questions about evidence in the eighteenth century. Unlike classical skeptics, Hume was interested not in challenging particular patterns of argument, but in showing the fallibility of arguments from experience in general. In the sense of Hume’s problem, induction extends data to cover new cases. To take a standard example, “the sun rises every 24 hours” is a claim supposedly established by induction over many instances, as each passing day has added another data point to the overwhelming evidence for it. Inductive arguments take n cases, and extend the pattern to the n+1st. But, says Hume, why should we believe this pattern? Could the n+1st case be different, no matter how large n is? It does no good to appeal to the regularity of nature, because the regularity of nature is at issue. Moreover, as Ludwig Wittgenstein (1958) and Nelson Goodman (1983 [1954]) show, nature could be perfectly regular and we would still have a problem of induction. This is because there are many possible ideas of what it would mean for the n+1st case to be the same as the first n. Sameness is not a fully defined concept.

It is intuitively obvious that the problem of induction is insoluble. It is more difficult to explain why, but Karl Popper, the political philosopher and philosopher of science, makes a straightforward case that it is. The problem is insoluble, according to him, because there is no principle of induction that is true. That is, there is no way of assuredly going from a finite number of cases to a true general statement about all the relevant cases. To see this, we need only look at examples. “The sun rises every 24 hours” is false, says Popper, as formulated and normally understood, because in Polar regions there are days in the year when the sun never rises, and days in the year when it never sets. Even cases taken as examples of straightforward and solid inductive inferences can be shown to be wrong, so why should we be at all confident of more complex cases?

For Popper, the key task of philosophy of science is to provide a demarcation criterion, a rule that would allow a line to be drawn between science and non-science. This he finds in a simple idea: genuine scientific theories are falsifiable, making risky predictions. The scientific attitude demands that if a theory’s prediction is falsified the theory itself is to be treated as false. Pseudo-sciences, among which Popper includes Marxism and Freudianism, are insulated from criticism, able to explain and incorporate any fact. They do not make any firm predictions, but are capable of explaining, or explaining away, anything that comes up.

This is a second way in which science might be seen as a formal activity. According to Popper, scientific theories are imaginative creations, and there is no method for creating them. They are free-floating, their meaning not tied to observations as for the positivists. However, there is a strict method for evaluating them. Any theory that fails to make risky predictions is ruled unscientific, and any theory that makes failed predictions is ruled false. A theory that makes good predictions is provisionally accepted - until new evidence comes along. Popper’s scientist is first and foremost skeptical, unwilling to accept anything as proven, and willing to throw away anything that runs afoul of the evidence. On this view, progress is probably best seen as the successive refinement and enlargement of theories to cover increasing data. While science may or may not reach the truth, the process of conjectures and refutations allows it to encompass increasing numbers of facts.

Like the central idea of positivism, falsificationism faces some immediate problems. Scientific theories are generally fairly abstract, and few make hard predictions without adopting a whole host of extra assumptions (e.g. Putnam 1981); so on Popper’s view most scientific theories would be unscientific. Also, when theories are used to make incorrect predictions, scientists often - and quite reasonably - look for reasons to explain away the observations or predictions, rather than rejecting the theories. Nonetheless, there is something attractive about the idea that (potential) falsification is the key to solid scientific standing, and so falsificationism, like logical positivism, still has adherents today.

For both positivism and falsificationism, the features of science that make it scientific are formal relations between theories and data, whether through the rational construction of theoretical edifices on top of empirical data or the rational dismissal of theories on the basis of empirical data. There are analogous views about mathematics; indeed, formalist pictures of science probably depend on stereotypes of mathematics as a purely logical activity.

Box 1.2 The Duhem–Quine thesis

The Duhem-Quine thesis is the claim that a theory can never be conclusively tested in isolation: what is tested is an entire framework or a web of beliefs. This means that in principle any scientific theory can be held in the face of apparently contrary evidence. Though neither of them put the claim quite this baldly, Pierre Duhem and W.V.O. Quine, writing in the beginning and middle of the twentieth century respectively, showed us why.

How should one react if some of a theory’s predictions are found to be wrong? The answer looks straightforward: the theory has been falsified, and should be abandoned. But that answer is too easy, because theories never make predictions in a vacuum. Instead, they are used, along with many other resources, to make predictions. When a prediction is wrong, the culprit might be the theory. However, it might also be the data that set the stage for the prediction, or additional hypotheses that were brought into play, or measuring equipment used to verify the prediction. The culprit might even lie entirely outside this constellation of resources: some unknown object or process that interferes with observations or affects the prediction.

To put the matter in Quine’s terms, theories are parts of webs of belief. When a prediction is wrong, one of the beliefs no longer fits neatly into the web. To smooth things out - to maintain a consistent structure - one can adjust any number of the web’s parts. With a radical enough redesign of the web, any part of it can be maintained, and any part jettisoned. One can even abandon rules of logic if one needs to!
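The bare logical point behind the web-of-belief picture can be sketched in a few lines of Python (a toy illustration only; the truth-table framing and variable names are mine, not Duhem's or Quine's): if an observation O is predicted only by a theory T together with auxiliary assumptions A, then a failed observation refutes just the conjunction, leaving several consistent ways to repair the web.

```python
from itertools import product

# Toy Duhem-Quine sketch: a prediction O follows from theory T plus
# auxiliary assumptions A. Suppose O turns out false. Logic then tells
# us only that not (T and A): we may blame T, blame A, or blame both.
surviving_webs = [
    (T, A)
    for T, A in product([True, False], repeat=2)
    if not (T and A)  # the failed prediction rules out only the conjunction
]

for T, A in surviving_webs:
    print(f"theory={T}, auxiliaries={A}")
```

Only the assignment in which both the theory and the auxiliaries are true is eliminated; the data alone cannot choose among the three remaining options, which is the thesis's point.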

When Newton’s predictions of the path of the moon failed to match the data he had, he did not abandon his theory of gravity, his laws of motion, or any of the calculating devices he had employed. Instead, he assumed that there was something wrong with the observations, and he fudged his data. While fudging might seem unacceptable, we can appreciate his impulse: in his view, the theory, the laws, and the mathematics were all stronger than the data! Later physicists agreed. The problem lay in the optical assumptions originally used in interpreting the data, and when those were changed Newton’s theory made excellent predictions.

Does the Duhem-Quine thesis give us a problem of induction? It shows that multiple resources are used (not all explicitly) to make a prediction, and that it is impossible to isolate for blame only one of those resources when the prediction appears wrong. We might, then, see the Duhem-Quine thesis as posing a problem of deduction, not induction, because it shows that when dealing with the real world, many things can confound neat logical deductions.

But there are other features of the popular snapshot of science. These formal relations between theories and data can be difficult to reconcile with an even more fundamental intuition about science: Whatever else it does, science progresses toward truth, and accumulates truths as it goes. We can call this intuition realism, the name that philosophers have given to the claim that many or most scientific theories are approximately true.

First, progress. One cannot but be struck by the increases in precision of scientific predictions, the increases in scope of scientific knowledge, and the increases in technical ability that stem from scientific progress. Even in a field as established as astronomy, calculations of the dates and times of astronomical events continue to become more precise. Sometimes this precision stems from better data, sometimes from better understandings of the causes of those events, and sometimes from connecting different pieces of knowledge. And occasionally, the increased precision allows for new technical ability or theoretical advances.

Second, truths. According to realist intuitions, there is no way to understand the increase in predictive power of science, and the technical ability that flows from that predictive power, except in terms of an increase of truth. That is, science can do more when its theories are better approximations of the truth, and when it has more approximately true theories. For the realist, science does not merely construct convenient theoretical descriptions of data, or merely discard falsified theories: When it constructs theories or other claims, those generally and eventually approach the truth. When it discards falsified theories, it does so in favor of theories that better approach the truth.

Real progress, though, has to be built on more or less systematic methods. Otherwise, there would only be occasional gains, stemming from chance or genius. If science accumulates truths, it does so on a rational basis, not through luck. Thus, realists are generally committed to something like formal relations between data and theories.

Turning from philosophy of science, and from issues of data, evidence, and truth, we see a social aspect to the standard picture of science. Scientists are distinguished by their even-handed attitude toward theories, data, and each other. Robert Merton’s functionalist view, discussed in Chapter 3, dominated discussions of the sociology of science through the 1960s. Merton argued that science served a social function, providing certified knowledge. That function structures norms of scientific behavior, those norms that tend to promote the accumulation of certified knowledge. For Merton, science is a well-regulated activity, steadily adding to the store of knowledge.

Box 1.3 Underdetermination

Scientists choose the best account of data from among competing hypotheses. This choice can never be logically conclusive, because for every explanation there are in principle an indefinitely large number of others that are exactly empirically equivalent. Theories are underdetermined by the empirical evidence. This is easy to see through an analogy.

Imagine that our data is the collection of points in the graph on the left (Figure 1.1). The hypothesis that we create to “explain” this data is some line of best fit. But what line of best fit? The graph on the right shows two competing lines that both fit the data perfectly.

Clearly there are infinitely many more lines of perfect fit. We can do further testing and eliminate some, but there will always be infinitely many more. We can apply criteria like simplicity and elegance to eliminate some of them, but such criteria take us straight back to the first problem of induction: how do we know that nature is simple and elegant, and why should we assume that our ideas of simplicity and elegance are the same as nature’s?

When scientists choose the best theory, then, they choose the best theory from among those that have been seriously considered. There is little reason to believe that the best theory so far considered, out of the infinite numbers of empirically adequate explanations, will be the true one. In fact, if there are an infinite number of potential explanations, we could reasonably assign to each one a probability of zero.
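The curve-fitting analogy can be made concrete in a short sketch (the data points and rival curves here are hypothetical, chosen only for illustration): two hypotheses can agree on every observation collected so far and still disagree about the next case.

```python
# Hypothetical observations: three data points.
data = [(0, 0), (1, 1), (2, 4)]

def theory_a(x):
    # A "simple" curve through the data: y = x^2
    return x ** 2

def theory_b(x):
    # An empirically equivalent rival: the extra term vanishes at
    # every observed x, so both theories fit the data perfectly.
    return x ** 2 + 5 * x * (x - 1) * (x - 2)

assert all(theory_a(x) == y for x, y in data)
assert all(theory_b(x) == y for x, y in data)

# Yet the two theories diverge on the n+1st case:
print(theory_a(3), theory_b(3))  # 9 39
```

Collecting a fourth point at x = 3 could eliminate theory_b, but a further rival whose extra term also contains a factor of (x - 3) would survive the new test; there are always infinitely many more.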

The status of underdetermination has been hotly debated in philosophy of science. Because of the underdetermination argument, some philosophers (positivists and their intellectual descendants) argue that scientific theories should be thought of as instruments for explaining and predicting, not as true or realistic representations (e.g. van Fraassen 1980). Realist philosophers, however, argue that there is no way of understanding the successes of science without accepting that in at least some circumstances evaluation of the evidence leads to approximately true theories (e.g. Boyd 1984; see Box 6.2).

On Merton’s view, there is nothing particularly “scientific” about the people who do science. Rather, science’s social structure rewards behavior that, in general, promotes the growth of knowledge; in principle it also penalizes behavior that retards the growth of knowledge. A number of other thinkers hold that position, such as Popper (1963) and Michael Polanyi (1962), who both support an individualist, republican ideal of science, for its ability to progress.

Common to all of these views is the idea that standards or norms are the source of science’s success and authority. For positivists, the key is that theories can be no more or less than the logical representation of data. For falsificationists, scientists are held to a standard on which they have to discard theories in the face of opposing data. For realists, good methods form the basis of scientific progress. For functionalists, the norms are the rules governing scientific behavior and attitudes. All of these standards or norms are attempts to define what it is to be scientific. They provide ideals that actual scientific episodes can live up to or not, standards to judge between good and bad science. Therefore, the view of science we have seen so far is not merely an abstraction from science, but is importantly a view of ideal science.

A View of Technology

Where is technology in all of this? Technology has tended to occupy a secondary role, for a simple reason: it is often thought, in both popular and academic accounts, that technology is the relatively straightforward application of science. We can imagine a linear model of innovation, from basic science through applied science to development and production. Technologists identify needs, problems, or opportunities, and creatively combine pieces of knowledge to address them. Technology combines the scientific method with a practically minded creativity.

As such, the interesting questions about technology are about its effects: Does technology determine social relations? Is technology humanizing or dehumanizing? Does technology promote or inhibit freedom? Do science’s current applications in technologies serve broad public goals? These are important questions, but as they take technology as a finished product they are normally divorced from studies of the creation of particular technologies.

If technology is applied science then it is limited by the limits of scientific knowledge. On the common view, then, science plays a central role in determining the shape of technology. There is another form of determinism that often arises in discussions of technology, though one that has been more recognized as controversial. A number of writers have argued that the state of technology is the most important cause of social structures, because technology enables most human action. People act in the context of available technology, and therefore people’s relations among themselves can only be understood in the context of technology. While this sort of claim is often challenged - by people who insist on the priority of the social world over the material one - it has helped to focus debate almost exclusively on the effects of technology.

Lewis Mumford (1934, 1967) established an influential line of thinking about technology. According to Mumford, technology comes in two varieties. Polytechnics are “life-oriented,” integrated with broad human needs and potentials. Polytechnics produce small-scale and versatile tools, useful for pursuing many human goals. Monotechnics produce “megamachines” that can increase power dramatically, but by regimenting and dehumanizing. A modern factory can produce extraordinary material goods, but only if workers are disciplined to participate in the working of the machine. This distinction continues to be a valuable resource for analysts and critics of technology (see, e.g., Franklin 1990, Winner 1986).

In his widely read essay “The Question Concerning Technology” (1977 [1954]), Martin Heidegger develops a similar position. For Heidegger, distinctively modern technology is the application of science in the service of power; this is an objectifying process. In contrast to the craft tradition that produced individualized things, modern technology creates resources, objects made to be used. From the point of view of modern technology, the world consists of resources to be turned into new resources. A technological worldview thus produces a thorough disenchantment of the world.

Through all of this thinking, technology is viewed as simply applied science. For both Mumford and Heidegger modern technology is shaped by its scientific rationality. Even the pragmatist philosopher John Dewey (e.g. 1929), who argues that all rational thought is instrumental, sees science as theoretical technology (using the word in a highly abstract sense) and technology (in the ordinary sense) as applied science. Interestingly, the view that technology is applied science tends toward a form of technological determinism. For example, Jacques Ellul (1964) defines technique as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development)” (quoted in Mitcham 1994: 308). A society that has accepted modern technology finds itself on a path of increasing efficiency, allowing technique to enter more and more domains. The view that a formal relation between theories and data lies at the core of science informs not only our picture of science, but of technology.

Concerns about technology have been the source of many of the movements critical of science. After the US use of nuclear weapons on Hiroshima and Nagasaki in World War II, some scientists and engineers who had been involved in developing the weapons began The Bulletin of the Atomic Scientists, a magazine alerting its readers to major dangers stemming from military and industrial technologies. Starting in 1955, the Pugwash Conferences on Science and World Affairs responded to the threat of nuclear war, as the United States and the Soviet Union armed themselves with nuclear weapons.

Science and the technologies to which it contributes often result in very unevenly distributed benefits, costs, and risks. Organizations like the Union of Concerned Scientists and Science for the People recognized this uneven distribution. Altogether, the different groups that made up the Radical Science Movement engaged in a critique of the idea of progress, with technological progress as their main target (Cutliffe 2000).

Parallel to this in the academy, “Science, Technology and Society” became, starting in the 1970s, the label for a diverse group united by progressive goals and an interest in science and technology as problematic social institutions. For researchers on Science, Technology and Society the project of understanding the social nature of science has generally been seen as continuous with the project of promoting a socially responsible science (e.g. Ravetz 1971; Spiegel-Rösing and Price 1977; Cutliffe 2000). The key issues for Science, Technology and Society are about reform, about promoting disinterested science, and about technologies that benefit the widest populations. How can sound technical decisions be made democratically (Laird 1993)? Can and should innovation be democratically controlled (Sclove 1995)? To what extent, and how, can technologies be treated as political entities (Winner 1986)? Given that researchers, knowledge, and tools flow back and forth between academia and industry, how can we safeguard pure science (Dickson 1988; Slaughter and Leslie 1997)? This is the other “STS,” which has played a major role in Science and Technology Studies, the former being both an antecedent of and now a part of the latter.

A Preview of Science and Technology Studies

Science and Technology Studies (STS) starts from an assumption that science and technology are thoroughly social activities. They are social in that scientists and engineers are always members of communities, trained into the practices of those communities and necessarily working within them. These communities set standards for inquiry and evaluate knowledge claims. There is no abstract and logical scientific method apart from evolving community norms. In addition, science and technology are arenas in which rhetorical work is crucial, because scientists and engineers are always in the position of having to convince their peers and others of the value of their favorite ideas and plans - they are constantly engaged in struggles to gain resources and to promote their views. The actors in science and technology are also not mere logical operators, but instead have investments in skills, prestige, knowledge, and specific theories and practices. Even conflicts in a wider society may be mirrored by and connected to conflicts within science and technology; for example, splits along gender, race, class, and national lines can occur both within science and in the relations between scientists and non-scientists.

STS takes a variety of antiessentialist positions with respect to science and technology. Neither science nor technology is a natural kind, having simple properties that define it once and for all. The sources of knowledge and artifacts are complex and various: there is no privileged scientific method that can translate nature into knowledge, and no technological method that can translate knowledge into artifacts. In addition, the interpretations of knowledge and artifacts are complex and various: claims, theories, facts, and objects may have very different meanings to different audiences.

For STS, then, science and technology are active processes, and should be studied as such. The field investigates how scientific knowledge and technological artifacts are constructed. Knowledge and artifacts are human products, and marked by the circumstances of their production. In their most crude forms, claims about the social construction of knowledge leave no role for the material world to play in the making of knowledge about it. Almost all work in STS is more subtle than that, exploring instead the ways in which the material world is used by researchers in the production of knowledge. STS pays attention to the ways in which scientists and engineers attempt to construct stable structures and networks, often drawing together into one account the variety of resources used in making those structures and networks. So a central premise of STS is that scientists and engineers use the material world in their work; it is not merely translated into knowledge and objects by a mechanical process.

Clearly, STS tends to reject many of the elements of the common view of science. How and in what respects are the topics of the rest of this book.

2

The Kuhnian Revolution

Thomas Kuhn’s The Structure of Scientific Revolutions (1970, first published in 1962) challenged the dominant popular and philosophical pictures of the history of science. Rejecting the formalist view with its normative stance, Kuhn focused on the activities of and around scientific research: in his work science is merely what scientists do. Rejecting steady progress, he argued that there have been periods of normal science punctuated by revolutions. Kuhn’s innovations were in part an ingenious reworking of portions of the standard pictures of science, informed by rationalist emphases on the power of ideas, by positivist views on the nature and meaning of theories, and by Ludwig Wittgenstein’s ideas about forms of life and about perception. The result was novel, and had an enormous impact.

One of the targets of The Structure of Scientific Revolutions is what is known (since Butterfield 1931) as “Whig history,” history that attempts to construct the past as a series of steps toward (and occasionally away from) present views. Especially in the history of science there is a temptation to see the past through the lens of the present, to see moves in the direction of what we now believe to be the truth as more rational, more natural, and less in need of causal explanation than opposition to what we now believe. But since events must follow their causes, a sequence of events in the history of science cannot be explained teleologically, simply by the fact that they represent progress. Whig history is one of the common buttresses of too-simple progressivism in the history of science, and its removal makes room for explanations that include more irregular changes.

According to Kuhn, normal science is the science done when members of a field share a recognition of key past achievements in their field, beliefs about which theories are right, an understanding of the important problems of the field, and methods for solving those problems. In Kuhn’s terminology, scientists doing normal science share a paradigm. The term, originally referring to a grammatical model or pattern, draws particular attention to a scientific achievement that serves as an example for others to follow. Kuhn also assumes that such achievements provide theoretical and methodological tools for further research. Once they were established, Newton’s mechanics, Lavoisier’s chemistry, and Mendel’s genetics each structured research in their respective fields, providing theoretical frameworks for and models of successful research.

Box 2.1 The modernity of science

Many commentators on science have felt that it is a particularly modern institution. By this they generally mean that it is exceptionally rational, or exceptionally free of local contexts. While science’s exceptionality in either of these senses is contentious, there is a straightforward sense in which science is, and always has been, modern. As Derek de Solla Price (1986 [1963]) has pointed out, science has grown rapidly over the past three hundred years. In fact, by any of a number of indicators, science’s growth has been steadily exponential. Science’s share of the US gross national product has doubled every 20 years. The cumulative number of scientific journals founded has doubled every 15 years, as has the membership in scientific institutes, and the number of people with scientific or technical degrees. The numbers of articles in many sub-fields have doubled every 10 years. These patterns cannot continue indefinitely - and in fact have not continued since Price did his analysis.
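Price’s doubling times can be compounded to make his point about unsustainability concrete. The sketch below is illustrative only (the function name and framing are ours, not Price’s): doubling every 15 years over three centuries implies roughly million-fold growth, which no society could sustain indefinitely.

```python
# Illustrative sketch: compound the doubling times Price reports to see
# why steady exponential growth in science cannot continue indefinitely.

def growth_factor(years: float, doubling_time: float) -> float:
    """Total multiplication after `years`, doubling every `doubling_time` years."""
    return 2 ** (years / doubling_time)

# Journals founded, doubling every 15 years, over three hundred years:
journals = growth_factor(300, 15)   # 2**20, about a million-fold increase

# Articles in a sub-field, doubling every 10 years, over one century:
articles = growth_factor(100, 10)   # 2**10, about a thousand-fold increase

print(f"journals: {journals:,.0f}-fold; articles: {articles:,.0f}-fold")
```

The arithmetic, not the precise indicator, carries the argument: any fixed doubling time produces growth that must eventually outstrip the resources (people, money, journals) available to science.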

A feature of this extremely rapid growth is that between 80 and 90 percent of all the scientists who have ever lived are alive now. For a senior scientist, between 80 and 90 percent of all the scientific articles ever written were written during his or her lifetime. For working scientists the distant past of their fields is almost entirely irrelevant to their current research, because the past is buried under masses of more recent accomplishments. Citation patterns show, as one would expect, that older research is considered less relevant than more recent research, perhaps having been superseded or simply left aside. For Price, a “research front” in a field at some time can be represented by the network of articles that are frequently cited. The front continually picks up new articles and drops old ones, as it establishes new problems, techniques, and solutions. Whether or not there are paradigms as Kuhn sees them, science pays most attention to current work, and little to its past. Science is modern in the sense of having a present-centered outlook, leaving its past to historians.

Rapid growth also gives science the impression of youth. At any time, a disproportionate number of scientists are young, having recently entered their fields. This creates the impression that science is for the young, even though individual scientists may make as many contributions in middle age as in youth (Wray 2003).

Although it is tempting to see it as a period of stasis, normal science is better viewed as a period in which research is well structured. The theoretical side of a paradigm serves as a worldview, providing categories and frameworks into which to slot phenomena. The practical side of a paradigm serves as a form of life, providing patterns of behavior or frameworks for action. For example, Lavoisier’s ideas about elements and the conservation of mass formed frameworks within which later chemists generated further ideas. The importance he attached to measurement instruments, and the balance in particular, shaped the work practices of chemistry. Within paradigms research goes on, often with tremendous creativity - though always embedded in firm conceptual and social backdrops.

Kuhn talks of normal science as puzzle-solving, because problems are to be solved within the terms of the paradigm: failure to solve a problem usually reflects badly on the researcher, rather than on the theories or methods of the paradigm. With respect to a paradigm, an unsolved problem is simply an anomaly, fodder for future researchers. In periods of normal science the paradigm is not open to serious question. This is because the natural sciences, on Kuhn’s view, are particularly successful at socializing practitioners. Science students are taught from textbooks that present standardized views of fields and their histories; they have lengthy periods of training and apprenticeship; and during their training they are generally asked to solve well-understood and well-structured problems, often with well-known answers.

Nothing good lasts forever, and that includes normal science. Because paradigms can only ever be partial representations and partial ways of dealing with a subject matter, anomalies accumulate, and may eventually start to take on the character of real problems, rather than mere puzzles. Real problems cause discomfort and unease with the terms of the paradigm, and this allows scientists to consider changes and alternatives to the framework; Kuhn terms this a period of crisis. If an alternative is created that solves some of the central unsolved problems, then some scientists, particularly younger scientists who have not yet been fully indoctrinated into the beliefs and practices or way of life of the older paradigm, will adopt the alternative. Eventually, as older and conservative scientists become marginalized, a robust alternative may become a paradigm itself, structuring a new period of normal science.

Box 2.2 Foundationalism

Foundationalism is the thesis that knowledge can be traced back to firm foundations. Typically those foundations are seen as a combination of sensory impressions and rational principles, which then support an edifice of higher-order beliefs. The central metaphor of foundationalism, of a building firmly planted in the ground, is an attractive one. If we ask why we hold some belief, the reasons we give come in the form of another set of beliefs. We can continue asking why we hold these beliefs, and so on. Like bricks, each belief is supported by more beneath it (there is a problem here of the nature of the mortar that holds the bricks together, but we will ignore that). Clearly, the wall of bricks cannot continue downward forever; we do not support our knowledge with an infinite chain of beliefs. But what lies at the foundation?

The most plausible candidates for empirical foundations are sense experiences. But how can these ever be combined to support the complex generalizations that form our knowledge? We might think of sense experiences, and especially their simplest components, as like individual data points. Here we have the earlier problems of induction all over again: as we have seen, a finite collection of data points cannot determine which generalizations to believe.

Worse, even beliefs about sense impressions are not perfectly secure. Much of the discussion around Kuhn’s The Structure of Scientific Revolutions (1970 [1962]) has focused on his claim that scientific revolutions change what scientists observe (Box 2.3). Even if Kuhn’s emphasis is wrong, it is clear that we often doubt what we see or hear, and reinterpret it in terms of what we know. The problem becomes more obvious, as the discussion of the Duhem-Quine thesis (Box 1.2) shows, if we imagine the foundations to be already-ordered collections of sense impressions.

On the one hand, then, we cannot locate plausible foundations for the many complex generalizations that form our knowledge. On the other hand, nothing that might count as a foundation is perfectly secure. We would do best, then, to abandon the metaphor of solid foundations on which our knowledge sits.

According to Kuhn, it is in periods of normal science that we can most easily talk about progress, because scientists have little difficulty recognizing each other’s achievements. Revolutions, however, are not progressive, because they both build and destroy. Some or all of the research structured by the pre-revolutionary paradigm will fail to make sense under the new regime; in fact Kuhn even claims that theories belonging to different paradigms are incommensurable – lacking a common measure - because people working in different paradigms see the world differently, and because the meanings of theoretical terms change with revolutions (a view derived in part from positivist notions of meaning). The non-progressiveness of revolutions and the incommensurability of paradigms are two closely related features of the Kuhnian account that have caused many commentators the most difficulty.

If Kuhn is right, science does not straightforwardly accumulate knowledge, but instead moves from one more or less adequate paradigm to another. This is the most radical implication found in The Structure of Scientific Revolutions: Science does not track the truth, but creates different partial views that can be considered to contain truth only by people who hold those views!

Kuhn’s claim that theories within paradigms are incommensurable has a number of different roots. One of those roots lies in the positivist picture of meaning, on which the meanings of theoretical terms are related to observations they imply. Kuhn adopts the idea that the meanings of theoretical terms depend upon the constellation of claims in which they are embedded. A change of paradigms should result in widespread changes in the meanings of key terms. If this is true, then none of the key terms from one paradigm would map neatly onto those of another, preventing a common measure, or even full communication.

Secondly, in The Structure of Scientific Revolutions, Kuhn takes the notion of indoctrination quite seriously, going so far as to claim that paradigms even shape observations. People working within different paradigms see things differently. Borrowing from the work of N. R. Hanson (1958), Kuhn argues there is no such thing, at least in normal circumstances, as raw observation. Instead, observation comes interpreted: we do not see dots and lines in our visual fields, but instead see more or less recognizable objects and patterns. Thus observation is guided by concepts and ideas. This claim has become known as the theory-dependence of observation. The theory-dependence of observation is easily linked to Kuhn’s historical picture, because during revolutions people stop seeing one way, and start seeing another way, guided by the new paradigm.

Finally, one of the roots of Kuhn’s claims about incommensurability is his experience as an historian that it is difficult to make sense of past scientists’ problems, concepts, and methods. Past research can be opaque, and aspects of it can seem bizarre. It might even be said that if people find it too easy to understand very old research in present terms they are probably doing some interpretive violence to that research - Isaac Newton’s physics looks strikingly modern when rewritten for today’s textbooks, but looks much less so in its originally published form, and even less so when the connections between it and Newton’s religious and alchemical research are drawn (e.g. Dobbs and Jacob 1995). Kuhn says that “In a sense that I am unable to explicate further, the proponents of competing paradigms practice their trades in different worlds” (1970 [1962]: 150).

The case for semantic incommensurability has attracted a considerable amount of attention, mostly negative. Meanings of terms do change, but they probably do not change so much and so systematically that claims in which they are used cannot typically be compared. Most of the philosophers, linguists, and others who have studied this issue have come to the conclusion that claims for semantic incommensurability cannot be sustained, or even that it is impossible (Davidson 1974) to make sense of such radical change in meaning (see Bird 2000 for an overview).

This leaves the historical justification for incommensurability. That problems, concepts, and methods change is uncontroversial. But the difficulties that these create for interpreting past episodes in science can be overcome - the very fact that historical research can challenge present-centered interpretations shows the limits of incommensurability.

Claims of radical incommensurability appear to fail. In fact, Kuhn quickly distanced himself from the strongest readings of his claims. Already by 1965 he insisted that he meant by “incommensurability” only “incomplete communication” or “difficulty of translation,” sometimes leading to “communication breakdown” (Kuhn 1970a). Still, on these more modest readings incommensurability is an important phenomenon: even when dealing with the same subject matter, scientists (among others) can fail to communicate.

If there is no radical incommensurability, then there is no radical division between paradigms, either. Paradigms must be linked by enough continuity of concepts and practices to allow communication. This may even be a methodological or theoretical point: complete ruptures in ideas or practices are inexplicable (Barnes 1982). When historians want to explain an innovation, they do so in terms of a reworking of available resources. Every new idea, practice, and object has its sources; to assume otherwise is to invoke something akin to magic. Thus many historians of science have challenged Kuhn’s paradigms by showing the continuity from one putative paradigm to the next.

For example, instruments, theories, and experiments change at different times. In a detailed study of particle detectors in physics, Peter Galison (1997) shows that new detectors are initially used for the same types of experiments and observations as their immediate predecessors had been, and fit into the same theoretical contexts. Similarly, when theories change, there is no immediate change in either experiments or instruments. Discontinuity in one realm, then, is at least generally bounded by continuity in others. Science gains strength, an ad hoc unity, from the fact that its key components rarely change together. Science maintains stability through change by being disunified, like a thread as described by Wittgenstein (1958): “the strength of the thread does not reside in the fact that some one fibre runs through its whole length, but in the overlapping of many fibres.” If this is right then the image of complete breaks between periods is misleading.

Box 2.3 The theory-dependence of observation

Do people’s beliefs shape their observations? Psychologists have long studied this question, showing how people’s interpretations of images are affected by what they expect those images to show. Hanson and Kuhn took the psychological results to be important for understanding how science works. Scientific observations, they claim, are theory-dependent.

For the most part, philosophers, psychologists, and cognitive scientists agree that observations can be shaped by what people believe. There are substantial disagreements, though, about how important this is for understanding science. For example, a prominent debate about visual illusions and the extent to which the background beliefs that make them illusions are plastic (e.g. Churchland 1988; Fodor 1988) has been sidelined by a broader interpretation of “observation.” Scientific observation has been and is rarely equivalent to brute perception, experienced by an isolated individual (Daston 2008). Much scientific data is collected by machine, and then is organized by scientists to display phenomena publicly (Bogen and Woodward 1992). If that organization amounts to observation, then it is straightforward that observation is theory-dependent.

Theory and practice dependence is broader even than that: scientists attend to objects and processes that background beliefs suggest are worth looking at, they design experiments around theoretically inspired questions, and they remember and communicate relevant information, where relevance depends on established practices and shared theoretical views (Brewer and Lambert 2001).

Incommensurability: Communicating Among Social Worlds

Claims about the incommensurability of scientific paradigms raise general questions about the extent to which people across boundaries can communicate.

In some sense it is trivial that disciplines (or smaller units, like specialties) are incommensurable. The work done by a molecular biologist is not obviously interesting or comprehensible to an evolutionary ecologist or a neuropathologist, although with some translation it can sometimes become so. The meaning of terms, ideas, and actions is connected to the cultures and practices from which they stem. Disciplines are “epistemic cultures” that may have completely different orientations to their objects, social units of knowledge production, and patterns of interaction (Knorr Cetina 1999). However, people from different areas interact, and as a result science gains a degree of unity. We might ask, then, how interactions are made to work.

Simplified languages allow parties to trade goods and services without concern for the integrity of local cultures and practices. A trading zone (Galison 1997) is an area in which scientific and/or technical practices can fruitfully interact via these simplified languages or pidgins, without requiring full assimilation. Trading zones can develop at the contact points of specialties, around the transfer of valuable goods from one to another. In trading zones, collaborations can be successful even if the cultures and practices that are brought together do not agree on problems or definitions.

The trading zone concept is flexible, perhaps overly so. We might look at almost any communication as taking place in a trading zone and demanding some pidgin-like language. For example, Richard Feynman’s diagrams of particle interactions, which later became known as Feynman diagrams, were successful in part because they were simple and could be interpreted in various ways (Kaiser 2005). They spread widely during the 1950s, carried by visiting postdoctoral fellows and researchers. But different schools, working with different theoretical frameworks, picked them up, adapted them, and developed local styles of using them. Despite their variety, they remained important ways of communicating among physicists, and also tools that were productive of theoretical problems and insights. It would seem to stretch the “trading zone” concept to say that Feynman diagrams were parts of pidgins needed for theoretical physicists to talk to each other, yet that is what they look like.

A different, but equally flexible, concept for understanding communication across barriers is the idea of boundary objects