A unique resource exploring the nature of computers and computing, and their relationships to the world.
Philosophy of Computer Science is a university-level textbook designed to guide readers through an array of topics at the intersection of philosophy and computer science. Accessible to students from either discipline, or complete beginners to both, the text brings readers up to speed on a conversation about these issues, so that they can read the literature for themselves, form their own reasoned opinions, and become part of the conversation by contributing their own views.
Written by a highly qualified author in the field, the book looks at some of the central questions in the philosophy of computer science, including:
A companion website contains annotated suggestions for further reading and an instructor’s manual.
Philosophy of Computer Science is a must-have for philosophy students, computer scientists, and general readers who want to think philosophically about computer science.
Page count: 1313
Publication year: 2023
Cover
Title Page
Copyright
Dedication
List of Figures
Preface
Acknowledgments
About the Companion Website
Part I: Philosophy and Computer Science
1 An Introduction to the Philosophy of Computer Science
1.1 What This Book Is About
1.2 What This Book Is Not About
2 Philosophy: A Personal View
2.1 Introduction
2.2 A Definition of ‘Philosophy’
2.3 What Is Truth?
2.4 Searching for the Truth
2.5 What Is “Rational”?
2.6 Philosophy as a Personal Search
2.7 Philosophies of Anything and Everything
2.8 Philosophy and Computer Science
2.9 Appendix: Argument Analysis and Evaluation
Notes
Part II: Computer Science, Computers, and Computation
3 What Is Computer Science?
3.1 Introduction
3.2 Naming the Discipline
3.3 Why Ask What CS Is?
3.4 What Does It Mean to Ask What Something Is?
3.5 CS as the Science of Computers
3.6 CS Studies Algorithms
3.7 Physical Computers vs. Abstract Algorithms
3.8 CS Studies Information
3.9 CS as a Mathematical Science
3.10 CS as a Natural Science of Procedures
3.11 CS as an Empirical Study
3.12 CS as Engineering
3.13 Science xor Engineering?
3.14 CS as “Both”
3.15 CS as “More”
3.16 CS as “Neither”
3.17 Summary
3.18 Questions for the Reader
Notes
4 Science
4.1 Introduction
4.2 Science and Non‐Science
4.3 Science as Systematic Study
4.4 The Goals of Science
4.5 Instrumentalism vs. Realism
4.6 Scientific Theories
4.7 “The” Scientific Method
4.8 Falsifiability
4.9 Scientific Revolutions
4.10 Other Alternatives
4.11 CS and Science
4.12 Questions to Think About
Notes
5 Engineering
5.1 Defining ‘Engineering’
5.2 Engineering as Science
5.3 A Brief History of Engineering
5.4 Conceptions of Engineering
5.5 What Engineers Do
5.6 The Engineering Method
5.7 Software Engineering
5.8 CS and Engineering
5.9 Questions to Think About
Notes
6 Computers: A Brief History
6.1 Introduction
6.2 Would You Like to Be a Computer?
6.3 Two Histories of Computers
6.4 The Engineering History
6.5 The Scientific History
6.6 The Histories Converge
6.7 What Is a Computer?
Notes
7 Algorithms and Computability
7.1 Introduction
7.2 Functions and Computation
7.3 ‘Algorithm’ Made Precise
7.4 Five Great Insights of CS
7.5 Structured Programming
7.6 Recursive Functions
7.7 Non‐Computable Functions
7.8 Summary
7.9 Questions for the Reader
Notes
8 Turing's Analysis of Computation
8.1 Introduction
8.2 Slow and Active Reading
8.3 Title: “The Entscheidungsproblem”
8.4 Paragraph 1
8.5 Paragraph 2
8.6 Section 1, Paragraph 1: “Computing Machines”
8.7 Section 9: “The Extent of the Computable Numbers”
8.8 “Computing Machines”
8.9 Section 2: “Definitions”
8.10 Section 3: “Examples of Computing Machines”
8.11 Section 4: “Abbreviated Tables”
8.12 Section 5: “Enumeration of Computable Sequences”
8.13 Section 6: “The Universal Computing Machine”
8.14 The Rest of Turing's Paper
Notes
9 Computers: A Philosophical Perspective
9.1 What Is a Computer?
9.2 Informal Definitions
9.3 Computers, Turing Machines, and Universal Turing Machines
9.4 John Searle's “Pancomputationalism”: Everything Is a Computer
9.5 Patrick Hayes: Computers as Magic Paper
9.6 Gualtiero Piccinini: Computers as Digital String Manipulators
9.7 What Else Might Be a Computer?
9.8 Conclusion
9.9 Questions for the Reader
Notes
Part III: The Church‐Turing Computability Thesis
10 Procedures
10.1 Introduction
10.2 The Church‐Turing Computability Thesis
10.3 What Is a Procedure?
10.4 Carol Cleland: Some Effective Procedures Are Not Turing Machines
10.5 Beth Preston: Recipes, Algorithms, and Specifications
10.6 Summary
10.7 Questions for the Reader
Notes
11 Hypercomputation
11.1 Introduction
11.2 Generic Computation
11.3 Non‐Euclidean Geometries and “Non‐Turing Computations”
11.4 Hypercomputation
11.5 “Newer Physics” Hypercomputers
11.6 Analog Recurrent Neural Networks
11.7 Objections to Hypercomputation
11.8 Interactive Computation
11.9 Oracle Computation
11.10 Trial‐and‐Error Computation
11.11 Summary
11.12 Questions for the Reader
Notes
Part IV: Computer Programs
12 Software and Hardware
12.1 The Nature of Computer Programs
12.2 Programs and Algorithms
12.3 Software, Programs, and Hardware
12.4 Moor: Software Is Changeable
12.5 Suber: Software Is Pattern
12.6 Colburn: Software Is a Concrete Abstraction
12.7 Summary
12.8 Questions for the Reader
Notes
13 Implementation
13.1 Introduction
13.2 Implementation as Semantic Interpretation
13.3 Chalmers's Theory of Implementation
Notes
14 Computer Programs as Scientific Theories
14.1 Introduction
14.2 Simulations
14.3 Computer Programs Are Theories
14.4 Computer Programs Aren't Theories
Notes
15 Computer Programs as Mathematical Objects
15.1 Introduction
15.2 Theorem Verification
15.3 Program Verification
15.4 The Fetzer Controversy
15.5 The Program‐Verification Debate: Summary
15.6 Program Verification, Models, and the World
Notes
16 Programs and the World
16.1 Introduction
16.2 Internal vs. External Behavior: Some Examples
16.3 Two Views of Computation
16.4 Inputs, Turing Machines, and Outputs
16.5 Are Programs Teleological?
16.6 Algorithms Do Need a Purpose
16.7 Algorithms Don't Need a Purpose
16.8 Algorithms and Goals
16.9 Computing with Symbols or with Their Meanings
16.10 Syntactic, Internal, and Indigenous Semantics
16.11 Content and Computation
16.12 Summary
16.13 Questions for the Reader
Notes
Part V: Computer Ethics and Artificial Intelligence
17 Computer Ethics I: Should We Trust Computers?
17.1 Introduction
17.2 Decisions and Computers
17.3 Are Computer Decisions Rational?
17.4 Should Computers Make Decisions for Us?
17.5 Should Computers Make Decisions with Us?
17.6 Should We Trust Decisions Computers Make?
17.7 Are There Decisions Computers Must Make for Us?
17.8 Are There Decisions Computers Shouldn't Make?
17.9 Questions for the Reader
Notes
18 Philosophy of Artificial Intelligence
18.1 Introduction
18.2 What Is AI?
18.3 The Turing Test
18.4 Digression: The “Lovelace Objection”
18.5 Digression: Turing on Intelligent Machinery
18.6 The Chinese Room Argument
18.7 The Argument from Biology
18.8 The Argument from Semantics
18.9 Leibniz's Mill and Turing's “Strange Inversion”
18.10 A Better Way
18.11 Questions for Discussion
Notes
19 Computer Ethics II: Should We Build Artificial Intelligences?
19.1 Introduction
19.2 Is AI Possible in Principle?
19.3 What Is a Person?
19.4 Rights
19.5 Responsibilities
19.6 Personal AIs and Morality
19.7 Are We Personal AIs?
19.8 Questions for the Reader
Notes
Part VI: Closing Remarks
20 Computer Science: A Personal View
20.1 Introduction
20.2 Computer Science and Elephants
20.3 Five Central Questions of CS
20.4 Wing's Five Questions
20.5 Conclusion
Notes
Bibliography
Index
End User License Agreement
Figure 1 CALVIN AND HOBBES ©2015 Watterson.
Chapter 2
Figure 2.1 How to evaluate an argument from premises ...
Chapter 3
Figure 3.1 Artificial vs. Natural.
Figure 3.2 We're awesome at teaching.
Chapter 4
Figure 4.1 World, Observations, Theory.
Figure 4.2 Fields arranged by purity.
Chapter 5
Figure 5.1 Malpas's engineering method (Malpas, 2000, p. 35).
Chapter 6
Figure 6.1 1892 computer ad.
Chapter 7
Figure 7.1 BABY BLUES ©2004 Baby Blues Bros LLC. Dist....
Figure 7.2 A function “machine” that transforms input into output.
Figure 7.3 A real‐life example of an ambiguous instruction. Whose head shoul...
Chapter 9
Figure 9.1
Chapter 13
Figure 13.1 A pictorial representation of Chalmers's analysis of implementat...
Chapter 15
Figure 15.1 2D photographic model of a real house.
Figure 15.2 Source: From Colburn et al., 1993, p. 283. Reprinted with permis...
Figure 15.3 A cognitive agent looking at a real‐world object that the agent ...
Chapter 16
Figure 16.1 LUANN ©2015 GEC Inc.
Chapter 18
Figure 18.1 Syntax, semantics, and syntactic semantics.
Figure 18.2 How a computational cognitive agent perceives the world.
Figure 18.3 Homunculi from an exhibit at the Buffalo Museum of Science(!).
Chapter 20
Figure 20.1 CALVIN AND HOBBES ©1986 Watterson.
An Introduction to the Issues and the Literature
William J. Rapaport
Department of Computer Science and Engineering
Department of Philosophy, Department of Linguistics and Center for Cognitive Science
University at Buffalo, The State University of New York
Buffalo, NY
This edition first published 2023
© 2023 John Wiley & Sons, Inc.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of William J. Rapaport to be identified as the author of this work has been asserted in accordance with law.
Registered Office
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.
Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging‐in‐Publication Data
Names: Rapaport, William J., author.
Title: Philosophy of computer science : an introduction to the issues and
the literature / William J. Rapaport.
Description: Hoboken, NJ : Wiley-Blackwell, 2023. | Includes
bibliographical references and index.
Identifiers: LCCN 2022039093 (print) | LCCN 2022039094 (ebook) | ISBN
9781119891901 (paperback) | ISBN 9781119891918 (adobe pdf) | ISBN
9781119891925 (epub)
Subjects: LCSH: Computer science‐Philosophy.
Classification: LCC QA76.167 .R37 2023 (print) | LCC QA76.167 (ebook) |
DDC 004‐dc23/eng/20220824
LC record available at https://lccn.loc.gov/2022039093
LC ebook record available at https://lccn.loc.gov/2022039094
Cover design: Wiley
Cover image: © 3Dsculptor/Shutterstock
This book is dedicated to my family:
Mary, Michael, Sheryl, Makayla, Laura, William, Allyson, Lexie, Rob, and Robert.
If you begin with Computer Science, you will end with Philosophy.1
1
“Clicking on the first link in the main text of an English Wikipedia article, and then repeating the process for subsequent articles, usually leads to the Philosophy article. In February 2016, this was true for 97% of all articles in Wikipedia, an increase from 94.52% in 2011” (“Wikipedia:Getting to Philosophy,” http://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosophy).
On 9 August 2021, if you began with “Computer Science,” you would end with “Philosophy” in 11 links: computer science → algorithm → mathematics → quantity → counting → number → mathematical object → concept → abstraction → rule of inference → philosophy of logic → philosophy.
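The link‐chasing procedure behind this observation is itself a simple algorithm: repeatedly follow the first link of the current article until you reach “Philosophy” (or hit a dead end or a loop). Here is a minimal sketch; the `FIRST_LINK` table is a hypothetical, hard‐coded stand‐in for “first link of each article” (a real experiment would fetch and parse live Wikipedia pages).

```python
# Toy link graph standing in for "first link of each Wikipedia article".
# These entries mirror the 11-link chain quoted above; they are illustrative,
# not live data.
FIRST_LINK = {
    "Computer science": "Algorithm",
    "Algorithm": "Mathematics",
    "Mathematics": "Quantity",
    "Quantity": "Counting",
    "Counting": "Number",
    "Number": "Mathematical object",
    "Mathematical object": "Concept",
    "Concept": "Abstraction",
    "Abstraction": "Rule of inference",
    "Rule of inference": "Philosophy of logic",
    "Philosophy of logic": "Philosophy",
}

def chain_to_philosophy(start, first_link):
    """Follow first links until reaching 'Philosophy', a dead end, or a cycle."""
    path, seen = [start], {start}
    while path[-1] != "Philosophy":
        nxt = first_link.get(path[-1])
        if nxt is None or nxt in seen:  # dead end or loop: give up
            return path, False
        path.append(nxt)
        seen.add(nxt)
    return path, True

path, reached = chain_to_philosophy("Computer science", FIRST_LINK)
print(len(path) - 1, reached)  # → 11 True
```

The `seen` set is what makes the procedure terminate even on articles whose first links form a cycle, which is the usual way such chains fail to reach “Philosophy.”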
This is a university‐level introduction to the philosophy of computer science based on a course that I created at the University at Buffalo in 2004 and taught from 2004 to 2010 (I retired in 2012). At the time I created the course, there were few other such courses and virtually no textbooks (only a few monographs and anthologies). Although there are now more such courses, there are only a very few introductory textbooks in the area. My retirement project was to turn my lecture notes into a book that could be used as an introduction to the issues and serve as a guide to the original literature; this book is the result.
The course is described in Rapaport 2005c. The syllabus, readings, assignments, and website for the last version of the course are online at http://www.cse.buffalo.edu/∼rapaport/584/. The Online Resources contain suggested further readings, in‐class exercises (arguments for analysis, in addition to the questions at the ends of some of the chapters), term-paper suggestions, a sample final exam, advice to the instructor on peer‐editing for the exercises, and a philosophy of grading.
Many of the books and articles I discuss are available on the Web. Rather than giving Web addresses (URLs) for them, I urge interested readers to try a Google (or other) search for the documents. Books and journal articles can often be found either by visiting the author's website (e.g. most of my papers are at https://cse.buffalo.edu/∼rapaport/papers.html) or by using a search string consisting of the last name(s) of the author(s) followed by the title of the document enclosed in quotation marks (for example, to find Rapaport 2005c, search for “rapaport "philosophy of computer science"”). URLs that I give for Web‐only items (or other hard‐to‐find items) were accurate at the time of writing. Some, however, will change or disappear. Documents that have disappeared can sometimes be found at the Internet Archive's Wayback Machine (https://archive.org/web/). Some documents with no public URLs may eventually gain them. And, of course, readers should search the Internet or Wikipedia for any unfamiliar term or concept.
Sidebars: Sprinkled throughout the book are sidebars in boxes, like this one. Some are Digressions that clarify or elaborate on various aspects of the text. Some are suggestions for Further Reading. Others are Questions for the reader to consider at that point in the text. Additional suggested readings, along with student assignments and an instructor's manual, are in the Online Resources.
Figure 1 CALVIN AND HOBBES ©2015 Watterson.
Reprinted with permission of ANDREWS MCMEEL SYNDICATION. All rights reserved.
For comments on, suggestions for, or corrections to earlier versions, my thanks go especially to
Peter Boltuc, Jonathan Bona, Selmer Bringsjord, Jin‐Yi Cai, Timothy Daly, Edgar Daylight, Peter Denning, Eric Dietrich, William D. Duncan, J. Michael Dunn, Frank Fedele, Albert Goldfain, James Graham Maw, Carl Hewitt, Robin K. Hill, Johan Lammens, Cliff Landesman, Nelson Pole, Thomas M. Powers, Michael I. Rapaport, Stuart C. Shapiro, Aaron Sloman, Mark Staples, Matti Tedre, and Victoria G. Traube;
as well as to
Russ Abbott, Khaled Alshammari, S.V. Anbazhagan, S. Champailler, Arnaud Debec, Roger Derham, Gabriel Dulac‐Arnold, Mike Ferguson, Pablo Godoy, David Miguel Gray, Nurbay Irmak, Patrick McComb, Cristina Murta, Alexander Oblovatnyi, [email protected], Andres Rosa, Richard M. Rubin, Seth David Schoen, Stephen Selesnick, Dean Waters, Nick Wiggershaus, and Sen Zhang;
and
the University at Buffalo Department of Computer Science Information Technology staff for help with LaTeX; and my editors at Wiley: Will Croft, Rosie Hayden, and Tiffany Taylor.
This book is accompanied by a companion website:
https://cse.buffalo.edu/∼rapaport/OR/
This website includes:
An annotated list of further readings for each chapter
Sample “position paper” assignments for argument analysis
Sample term‐paper topics
A sample final exam
An instructor's manual, with information on:
how to use the position‐paper assignments
how to grade, including:
a “triage philosophy of grading”
suggested analyses and grading rubrics for the position papers
a discussion of William Perry's scheme of cognitive development and its application to the final exam.
Part I is an introduction to both philosophy and the philosophy of computer science.
Philosophy is often thought of as an activity, which may have considerable theoretical interest, but which is of little practical importance. Such a view of philosophy is … profoundly mistaken. … [P]hilosophical ideas and some kind of philosophical orientation are necessary for many quite practical activities. … [L]ooking at the general question of how far philosophy has influenced the development of computer science[, m]y own view is that the influence of philosophy on computer science has been very great.
—Donald Gillies (2002)
Who would have guessed that the arcane research done by the small set of mathematicians and philosophers working on formal logic a century ago would lead to the development of computing, and ultimately to completely new industries, and to the reconfiguring of work and life across the globe?
—Onora O'Neill (2013, p. 8)
There is no such thing as philosophy‐free science, just science that has been conducted without any consideration of its underlying philosophical assumptions.
—Daniel C. Dennett (2013a, p. 20)
My mind does not simply receive impressions. It talks back to the authors, even the wisest of them, a response I'm sure they would warmly welcome. It is not possible, after all, to accept passively everything even the greatest minds have proposed. One naturally has profound respect for … [the] heroes of the pantheon of Western culture; but each made statements flatly contradicted by views of the others. So I see the literary and philosophical tradition of our culture not so much as a storehouse of facts and ideas but rather as a hopefully endless Great Debate at which one may be not only a privileged listener but even a modest participant.
—Steve Allen (1989, p. 2), as cited in Madigan, 2014, p. 46.
As [the logician] Harvey Friedman has suggested, every morning one should wake up and reflect on the conceptual and foundational significance of one's work.
—Robert Soare (1999, p. 25)
This book looks at some of the central issues in the philosophy of computer science. It is not designed to answer all (or even any) of the philosophical questions that can be raised about the nature of computing, computers, and computer science. Rather, it is designed to “bring you up to speed” on a conversation about these issues – to give you some background knowledge – so that you can read the literature for yourself and perhaps become part of the conversation by contributing your own views.
This book is intended for readers who might know some philosophy but no computer science, readers who might know some computer science but no philosophy, and even readers who know little or nothing about either! So, although most of the book will be concerned with computer science, we will begin by asking, what is philosophy?
Then, in Part II, we will begin our inquiry into the philosophy of computer science by asking, what is computer science? To answer this, we will need to consider a series of questions, each of which leads to another: is computer science a science, a branch of engineering, some combination of them, or something else altogether? And to answer those questions, we will need to ask, what is science? and what is engineering?
We next ask, what does computer science study? Computers? If so, then what is a computer? Or does it study computation? If so, then what is computation? Computations are said to be algorithms, so what is an algorithm? And what is the Turing Machine model of algorithmic computation?
In Part III, we will explore the Church‐Turing Computability Thesis. This is the proposal that our intuitive notion of computation is completely captured by the formal notion of Turing Machine computation. But some have claimed that there are ordinary procedures (such as recipes) that are not computable by Turing Machines and that hence refute the Computability Thesis. So, what is a procedure? (And, for that matter, what is a recipe?) Others have claimed that the intuitive notion of computation goes beyond Turing Machine computation; so, what is such “hypercomputation”?
In Part IV, we explore the nature of computer programs. Computations are expressed in computer programs, which are executed by computers, so what is a computer program? Are computer programs “implementations” of algorithms? If so, then what is an implementation? Programs typically have real‐world effects, so how are programs and computation related to the world? Some programs, especially in the sciences, are designed to model or simulate or explain some real‐world phenomenon, so can programs be considered (scientific) theories? Programs are usually considered “software,” and computers are usually considered “hardware,” but what is the difference between software and hardware? Computer programs are notorious for having “bugs,” which are often only found by running the program, so can computer programs be logically verified before running them?
Finally, in Part V, we look at two topics. The first is the philosophy of artificial intelligence (AI): what is AI? What is the relation of computation to cognition? Can computers think? Alan Turing, one of the creators of the field of computation, suggested that the best way to deal with that question was by using what is now called the Turing Test. The Chinese Room Argument is a thought experiment devised by the philosopher John Searle, which (arguably) shows that the Turing Test won't work.
The other topic is computer ethics. We'll look at two questions that were not much discussed at the turn of the century but are now at the forefront of computational ethical debates: (1) should we trust decisions made by computers? (Moor, 1979) – a question made urgent by the advent of automated vehicles and by “deep learning” algorithms that might be biased; and (2) should we build “intelligent” computers? Do we have moral obligations toward robots? Can or should they have moral obligations toward us?
Computer Science Students Take Note: Along the way, we will look at how philosophers reason and evaluate logical arguments. ACM/IEEE Computer Science Curricula 2020 (CC2020) covers precisely these sorts of argument‐analysis techniques under the headings of Discrete Structures and Analytical and Critical Thinking. Many other CC2020 topics also overlap those in the philosophy of computer science. See https://www.acm.org/binaries/content/assets/education/curricula-recommendations/cc2020.pdf.
Have I left anything out? Yes! This book is not an attempt to be an encyclopedic, up‐to‐the‐minute survey of every important issue in the philosophy of computer science. Rather, the goal is to give you the background to enable you to fruitfully explore those issues and to join in the conversation.
The questions raised earlier and discussed in this book certainly do not exhaust the philosophy of computer science. They are merely a series of questions that arise naturally from our first question: what is computer science? But there are many other issues in the philosophy of computer science. Some are included in a topic sometimes called philosophy of computing. Here are some examples: consider the ubiquity of computing – your smartphone is a computer; your car has a computer in it; even some refrigerators and toasters contain computers. Perhaps someday your bedroom wall will contain (or even be) a computer! How will our notion of computing change because of this ubiquity? Will this be a good or bad thing? Another topic is the role of the Internet. For instance, Tim Berners‐Lee, who created the World Wide Web, has argued that “Web science” should be its own discipline (Berners‐Lee et al., 2006; Lohr, 2006). And there are many issues surrounding the social implications of computers in general and social media on the Internet (and the World Wide Web) in particular.
Other issues in the philosophy of computer science more properly fall under the heading of the philosophy of AI. As noted, we will look at some of these in this book, but there are many others that we won't cover, even though the philosophy of AI is a proper subset of the philosophy of computer science.
Another active field of investigation is the philosophy of information. As we'll see in Section 3.8, computer science is sometimes defined as the study of how to process information, so the philosophy of information is clearly a close cousin of the philosophy of computer science. But I don't think either is included in the other; they merely have a non‐empty intersection. If this is a topic you wish to explore, take a look at some of the books and essays cited at the end of Section 3.8.
And we will not discuss (except in passing; see, for example, Section 9.6.1) analog computation. If you're interested in this, see the Online Resources for suggested readings.
Finally, there are a number of philosophers and computer scientists who have discussed topics related to what I am calling the philosophy of computer science whom we will not deal with at all (such as the philosophers Martin Heidegger and Hubert L. Dreyfus (Dreyfus and Dreyfus, 1980; Dreyfus, 2001) and the computer scientist Terry Winograd (Winograd and Flores, 1987)). An Internet search (e.g. “Heidegger "computer science"”) will help you track down information on these thinkers and others not mentioned in this book. (One philosopher of computer science [personal communication] calls them the “Dark Side philosophers” because they tend not to be sympathetic to computational views of the world!)
But I think the earlier questions will keep us busy for a while as well as prepare you for examining some of these other issues. Think of this book as an extended “infomercial” to bring you up to speed on the computer‐science–related aspects of a philosophical conversation that has been going on for over 2500 years, to enable you to join in the conversation.
Let's begin …
Further Reading: In 2006, responding to a talk that I gave on the philosophy of computer science, Selmer Bringsjord (a philosopher and cognitive scientist who has written extensively on the philosophy of computer science) said that philosophy of computer science was in its infancy. This may have been true at the time as a discipline so called, but there have been philosophical investigations of computer science and computing since at least Turing, 1936 (which we'll examine in detail in Chapter 8), and the philosopher James H. Moor's work goes back to the 1970s (we'll discuss some of his writings in Chapters 12 and 17).
In an early undergraduate computer science textbook, my former colleague Tony Ralston (1971, Section 1.2D, pp. 6–7) discussed “the philosophical impact of computers”: he said that questions about such things as the nature of thinking, intelligence, emotions, intuition, creativity, consciousness, the relation of mind to brain, and free will and determinism “are serious questions, that the advent of computers has, philosophically speaking, reopened some of these questions and thrown new light on others, and finally, that the philosophical significance of these questions provides a worthy motivation for the study of computer science.”
On social implications, see, especially, Weizenbaum, 1976 and Simon, 1977, the penultimate section of which (“Man's View of Man”) can be viewed as a response to Weizenbaum. See also Dembart, 1977 for a summary and general discussion. For a discussion of the social implications of the use of computers and the Internet, be sure to read E.M. Forster's classic short story “The Machine Stops” (Forster, 1909), which predicted the Internet and email! (You can easily find versions of it online.)
See the Online Resources for more on the philosophy of computer science.
[T]here are those who have knowledge and those who have understanding. The first requires memory, the second philosophy. … Philosophy cannot be taught. Philosophy is the union of all acquired knowledge and the genius that applies it …
—Alexandre Dumas (1844, The Count of Monte Cristo, Ch. 17, pp. 168–169)
Philosophy is the microscope of thought.
—Victor Hugo (1862, Les Misérables, Vol. 5, Book Two, Ch. II, p. 1262)
Philosophy … works against confusion.
—John Cleese (2012), “[Twenty‐First] Century,” https://www.apaonline.org/resource/resmgr/John_Cleese_statements/19_Century.mp3
Consider majoring in philosophy. I did. … [I]t taught me how to break apart arguments, how to ask the right questions.
—NPR reporter Scott Simon, quoted in Keith 2014
To the person with the right turn of mind, … all thought becomes philosophy.
—Eric Schwitzgebel (2012)
Philosophy can be any damn thing you want!
—John Kearns (personal communication, 7 November 2013)
[W]e're all doing philosophy all the time. We can't escape the question of what matters and why: the way we're living is itself our implicit answer to that question. A large part of a philosophical training is to make those implicit answers explicit, and then to examine them rigorously.
—David Egan (2019)
“What is philosophy?” is a question that is not a proper part of the philosophy of computer science. But because many readers may not be familiar with philosophy, I want to begin our exploration with a brief introduction to how I think of philosophy and how I would like non‐philosophical readers who are primarily interested in computer science to think of it. So, in this chapter, I will give you my definition of ‘philosophy’ and examine the principal methodology of philosophy: the evaluation of logical arguments.
Note on Quotation Marks: Many philosophers have adopted a convention that single quotes are used to form the name of a word or expression. So, when I write this:
‘philosophy’
I am not talking about philosophy! Rather, I am talking about the 10‐letter word spelled p‐h‐i‐l‐o‐s‐o‐p‐h‐y. This use of single quotes enables us to distinguish between a thing that we are talking about and the name or description that we use to talk about the thing. This is the difference between Paris (the capital of France) and ‘Paris’ (a five‐letter word). The technical term for this is the ‘use‐mention distinction’ (http://en.wikipedia.org/wiki/Use-mention_distinction): we use ‘Paris’ to mention Paris. It is also the difference between a number (a thing that mathematicians talk about) and a numeral (a word or symbol that we use to talk about numbers).
I will use double quotes (1) when I am directly quoting someone, (2) as “scare quotes” to indicate that I am using an expression in a special or perhaps unusual way (as I just did), and (3) to indicate the meaning of a word or other expression (as in, ‘bachelor’ means “marriageable male”) (Cole, 1999).
However, in both cases, some publishers (including the present one) follow a (slightly illogical) style according to which some punctuation (usually periods and commas), whether part of the quoted material or not, must appear inside the quotation marks. I will leave it as an exercise for the reader to determine which punctuation marks that appear inside quotation marks logically belong there! (As a warm‐up exercise, is this sentence, which obeys the publisher's style, true?)
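Programmers meet the use‐mention distinction constantly: a string is a name we can use to mention a thing, and properties of the name (its length, its spelling) are not properties of the thing named. A minimal Python sketch of the idea (the city facts used here are ordinary illustrative knowledge, not from the text):

```python
# Use vs. mention: the string 'Paris' is a name we use to mention the city;
# the string is not the city itself.
city_name = 'Paris'

# Properties of the *name*: 'Paris' is a five-letter word.
assert len(city_name) == 5

# Properties of the *thing* named are not properties of its name:
# the city is the capital of France, but no string is a capital.
city_facts = {'Paris': {'country': 'France', 'capital': True}}
assert city_facts[city_name]['capital'] is True
```

The dictionary lookup makes the distinction vivid: `city_name` is used to mention an entry about the city, while `len(city_name)` is a fact about the name alone.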
When ‘philosophy’ is used informally, in everyday conversation, it can mean an “outlook,” as when someone asks you what your “philosophy of life” is. The word ‘philosophical’ can also mean something like “calm,” as when we say that someone takes bad news “very philosophically” (i.e. very calmly). Traditionally, philosophy is the study of “Big Questions” (Section 2.7) such as metaphysics (what exists?), epistemology (how can we know what exists?), and ethics (what is “good”?).
In this chapter, I want to explicate the technical sense of modern, analytic, Western philosophy – a kind of philosophy that has been done since at least the time of Socrates. ‘Modern philosophy’ is itself a technical term that usually refers to the kind of philosophy that has been done since the time of René Descartes (1596–1650, about 400 years ago) (Nagel, 2016). It is “analytic” in the sense that it is primarily concerned with the logical analysis of concepts (rather than literary, poetic, or speculative approaches). And it is “Western” in the sense that it has been done by philosophers working primarily in Europe (especially in Great Britain) and North America – although, of course, there are very many philosophers who do analytic philosophy in other areas of the world (and there are many other kinds of philosophy; see Adamson 2019).
Western philosophy began in ancient Greece. Socrates (470–399 BCE,1 i.e. around 2500 years ago) was opposed to the Sophists, a group of teachers who can be caricatured as an ancient Greek version of “ambulance‐chasing” lawyers, “purveyors of rhetorical tricks” (McGinn, 2012b). For a fee, the Sophists were willing to teach anything (whether it was true or not) to anyone, or to argue anyone's cause (whether their cause was just or not).
Like the Sophists, Socrates also wanted to teach and argue, but only to seek wisdom: truth in any field. In fact, the word ‘philosophy’ comes from Greek roots meaning “love of [philo] wisdom [sophia].” The reason Socrates only sought wisdom rather than claiming that he had it (as the Sophists did) was that he believed he didn't have it: he claimed that he knew he didn't know anything (and that, therefore, he was actually wiser than those who claimed that they did know things but who really didn't). As the contemporary philosopher Kwame Anthony Appiah said, in reply to the question “How do you think Socrates would conduct himself at a panel discussion in Manhattan in 2019?”:
You wouldn't be able to get him to make an opening statement, because he would say, “I don't know anything.” But as soon as anybody started saying anything, he'd be asking you to make your arguments clearer – he'd be challenging your assumptions. He'd want us to see that the standard stories we tell ourselves aren't good enough. (Libbey and Appiah, 2019)
Socrates's student Plato (430–347 BCE), in his dialogue Apology, describes Socrates as playing the role of a “gadfly,” constantly questioning (and annoying!) people about the justifications for, and consistency among, their beliefs, in an effort to find out the truth for himself from those who considered themselves to be wise (but who really weren't).
Plato defined ‘philosopher’ (and, by extension, ‘philosophy’) in Book V of his Republic (line 475c):
The one who feels no distaste in sampling every study, and who attacks the task of learning gladly and cannot get enough of it, we shall justly pronounce the lover of wisdom, the philosopher. (Plato, 1961b, p. 714, my emphasis)
Adapting this, I define ‘philosophy’ as the personal search for truth, in any field, by rational means. This raises several questions:
What is “truth”?
Why is philosophy only the search for truth? (Can't the search be successful?)
What counts as being “rational”?
Why only “personal”? (Why not “universal”?)
What does ‘any field’ mean? (Is philosophy really the study of anything and everything?)
The rest of this chapter explores these questions.2
The study of the nature of truth is another “Big Question” of philosophy. I cannot hope to do justice to it here, but two theories of truth will prove useful to keep in mind on our journey through the philosophy of computer science: the correspondence theory of truth and the coherence theory of truth.
According to the Oxford English Dictionary (OED; http://www.oed.com/view/Entry/206884), ‘true’ originally meant “faithful.” Faithfulness requires two things, X and Y, such that X is faithful to Y. On a correspondence theory, truth is faithfulness of a representation of some part of reality to the reality that it is a representation of. On the one hand, there are beliefs (or propositions, or sentences); on the other hand, there is “reality”: a belief (or a proposition, or a sentence) is true if and only if (“iff”) it corresponds to reality, i.e. iff it is faithful to, or “matches,” or accurately represents or describes reality.
Terminological Digression: A “belief,” as I am using that term here, is a mental entity, “implemented” (in humans) by certain neuron firings. A “sentence” is a grammatical string of words in some language. And a “proposition” is the meaning of a sentence. These are all rough‐and‐ready characterizations; each of these terms has been the subject of much philosophical analysis. For further discussion, see Schwitzgebel 2021 on belief, https://en.wikipedia.org/wiki/Sentence_(linguistics) on sentences, and McGrath and Frank 2020 on propositions.
To take a classic example, the three‐word English sentence ‘Snow is white.’ is true iff the stuff in the real world that precipitates in certain winter weather (i.e. snow) has the same color as milk (i.e. iff it is white). Put somewhat paradoxically (but correctly – recall the use‐mention distinction!), ‘Snow is white.’ is true iff snow is white.
How do we determine whether a sentence (or a belief, or a proposition) is true? Using a correspondence theory, in principle, we would have to compare the parts of the sentence (its words plus its grammatical structure, and maybe even the context in which it is thought, uttered, or written) with parts of reality, to see if they correspond. But how do we access “reality”? How can we do the “pattern matching” between our beliefs and reality? One answer is by sense perception (perhaps together with our beliefs about what we perceive). But sense perception is notoriously unreliable (think about optical illusions). And one of the issues in deciding whether our beliefs are true is deciding whether our perceptions are accurate (i.e. whether they match reality).
So we seem to be back to square one, which gives rise to coherence theories.
According to a coherence theory of truth, a set of propositions (or beliefs, or sentences) is true iff (1) they are mutually consistent, and (2) they are supported by, or consistent with, all available evidence. That is, they “cohere” with each other and with all evidence. Note that observation statements (i.e. descriptions of what we observe in the world around us) are among the claims that must be mutually consistent, so this is not (necessarily) a “pie‐in‐the‐sky” theory that doesn't have to relate to the way things really are. It just says that we don't have to have independent access to “reality” in order to determine truth.
Which theory is correct? Well, for one thing, there are more than two theories: there are several versions of each kind of theory, and there are other theories of truth that don't fall under either category. The most important of the other theories is the “pragmatic” theory of truth (see Glanzberg 2021, Section 3; Misak and Talisse 2019). Here is one version:
[T]he pragmatic theory of truth … is that a proposition is true if and only [if] it is useful [i.e. “pragmatic,” or practical] to believe that proposition. (McGinn, 2015a, p. 148)
Fortunately, the answer to which kind of theory is correct (i.e. which kind of theory is – if you will excuse the expression – true) is beyond our present scope! But note that the propositions that a correspondence theory says are true must be mutually consistent (if “reality” is consistent!), and they must be supported by all available evidence; i.e. a correspondence theory must “cohere”. Moreover, if you include both propositions and “reality” in one large, highly interconnected network (as we will consider in Sections 16.10.4 and 18.8.3), that network must also “cohere,” so the propositions that are true according to a coherence theory of truth should “correspond to” (i.e. cohere with) reality.
Let's return to the question raised in Section 2.3.1: how can we decide whether a statement is true? One way we can determine its truth is syntactically (i.e. in terms of its grammatical structure only, not in terms of what it means), by trying to prove it from axioms via rules of inference. It is important to keep in mind that when you prove a statement this way, you are not proving that it is true! You are simply proving that it follows logically from certain other statements: i.e. that it “coheres” in a certain way with those statements. But if the starting statements – the axioms – are true (note that I said “if they are true”; I haven't told you how to determine their truth value yet), and if the rules of inference “preserve truth,” then the statement you prove by means of them – the “theorem” – will also be true.
Another way we can determine whether a statement is true is semantically: i.e. in terms of what it means. We can use truth tables to determine that axioms are true. This, by the way, is the only way to determine whether an axiom is true, since, by definition, an axiom cannot be inferred from any other statements. If it could be so inferred, then it would be those other statements that would be the real axioms.
But to determine the truth of a statement semantically is also to use syntax (i.e. symbol manipulation): we semantically determine the truth value of a complex proposition by symbol manipulation (via truth tables) of its atomic constituents. (For more on syntax and semantics, see Section 18.8.3.) How do we determine the truth value of an atomic proposition? By seeing if it corresponds to reality. But how do we do that? By comparing the proposition with reality: i.e. by seeing if the proposition coheres with reality.3
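The truth‐table method just described can be mechanized: evaluating a complex proposition by symbol manipulation over all possible truth values of its atomic constituents. Here is a minimal Python sketch (my own illustration, not from the text) that enumerates every assignment and uses it to check that modus ponens, written as the formula (P and (P implies Q)) implies Q, is a tautology:

```python
from itertools import product

def truth_table(variables, formula):
    """Evaluate a propositional formula under every assignment of
    truth values to its atomic variables; return the full table."""
    table = []
    for values in product([True, False], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        table.append((assignment, formula(assignment)))
    return table

def is_tautology(variables, formula):
    """A formula is a tautology iff it is true under every assignment."""
    return all(result for _, result in truth_table(variables, formula))

# Modus ponens as a formula: (P and (P implies Q)) implies Q.
# 'A implies B' is encoded as 'not A or B'.
mp = lambda a: not (a['P'] and (not a['P'] or a['Q'])) or a['Q']
print(is_tautology(['P', 'Q'], mp))                        # True
print(is_tautology(['P', 'Q'], lambda a: a['P'] and a['Q']))  # False
```

Note what this does and does not show: the table settles the truth value of the complex formula purely syntactically, given truth values for the atoms; whether an atomic proposition itself is true is the correspondence question the surrounding text raises.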
Digression: What Is a Theorem? When you studied geometry, you may have studied a version of Euclid's original presentation of geometry via a modern interpretation as an axiomatic system. Most branches of mathematics (and, according to some philosophers, most branches of science) can be formulated axiomatically. One begins with a set of “axioms”: statements that are assumed to be true (or are considered so obviously true that their truth can be taken for granted). Then there are “rules of inference” that tell you how to logically infer other statements from the axioms in such a way that the inference procedure is “truth preserving”: if the axioms are true (which they are, by assumption), then whatever logically follows from them according to the rules of inference is also true. (Truth is “preserved” throughout the inference.) Such statements are called ‘theorems.’
Do truth and proof coincide? A logical system for which they do is said to be (semantically) “complete”: all truths are theorems, and all theorems are true. Two such systems are propositional logic and first‐order logic. Propositional logic is the logic of sentences, treating them “atomically” as simply being either true or false and not having any “parts.” First‐order predicate logic can be thought of as a kind of “sub‐atomic” logic, treating sentences as being composed of terms standing in relations. (See Rapaport, 1992a,b.) However, if you add axioms for arithmetic to first‐order logic, the resulting system is not complete; see the Digression on Gödel's Incompleteness Theorem. (See Sections 2.9, 6.5, 7.4.3.2, 13.2.2, 15.1, and 15.2.1 for more details.)
There are also second‐order logics, modal logics, relevance logics, and many more (not to mention varieties of each). Is one of them the “right” logic? Tharp 1975 asks that question, which can be expressed as a “thesis” analogous to the Church‐Turing Computability Thesis: where the Computability Thesis asks if the formal theory of Turing Machine computability entirely captures the informal, pre‐theoretic notion of computability, Tharp asks if there is a formal logic that entirely captures the informal, pre‐theoretic notion of logic. We'll return to some of these issues in Chapter 11.
Digression: Gödel's Incompleteness Theorem: Can any proposition (or its negation) be proved? Given a proposition P, we know that either P is true or else P is false (i.e. that not‐P is true). So, whichever one is true should be provable. Is it? Not necessarily!
First, there are propositions whose truth value we don't know yet. For one example, no one knows (yet) if Goldbach's Conjecture is true. Goldbach's Conjecture says that every even integer greater than 2 is the sum of two primes; for example, 28 = 5 + 23. For another example, no one knows (yet) if the Twin Prime Conjecture is true. The Twin Prime Conjecture says that there are infinitely many “twin” primes: i.e. pairs of primes p and q such that q = p + 2; e.g. 3 and 5, 5 and 7, 11 and 13, 17 and 19, etc.
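Both conjectures are easy to test computationally for small cases, even though no finite amount of testing could prove either of them. A minimal Python sketch (my own illustration, not from the text):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_pair(n):
    """Return a pair of primes summing to the even number n > 2,
    or None if none exists (no counterexample has ever been found)."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

def twin_primes_up_to(limit):
    """List the twin-prime pairs (p, p + 2) with p + 2 <= limit."""
    return [(p, p + 2) for p in range(2, limit - 1)
            if is_prime(p) and is_prime(p + 2)]

print(goldbach_pair(28))       # (5, 23)
print(twin_primes_up_to(20))   # [(3, 5), (5, 7), (11, 13), (17, 19)]
```

Such checks illustrate the epistemic gap the digression is about: a program can verify every instance we try, yet the universal claims remain unproved.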
Second – and much more astounding than our mere inability so far to prove or disprove any of these conjectures – there are propositions that are known to be true but that we can prove that we cannot prove! This is the essence of Gödel's Incompleteness Theorem. Stated informally, it asks us to consider proposition G, which is a slight variation on the Liar Paradox (i.e. the proposition “This proposition is false”: if it's false, then it's true; if it's true, then it's false):
(G) This proposition (G) is true but unprovable.
We can assume that G is either true or false. So, suppose it is false. Then it was wrong when it said that it was unprovable; so, it is provable. But any provable proposition has to be true (because valid proofs are truth‐preserving). That's a contradiction, so our assumption that it was false was wrong: it isn't false. But if it isn't false, then it must be true. But if it's true, then – as it says – it's unprovable. End of story; no paradox!
So, G (more precisely, its formal counterpart) is an example of a true proposition that cannot be proved. Moreover, the logician Kurt Gödel showed that some such propositions are true in the mathematical system consisting of first‐order predicate logic plus Peano's axioms for the natural numbers (see Section 7.6.1); i.e. they are true propositions of arithmetic! For more information on Gödel and his proof, see Gödel 1931; Nagel et al. 2001; Hofstadter, 1979; Franzén 2005; Goldstein 2006.
We'll return to this question, also known as the “Decision Problem,” beginning in Section 6.5.
Thinking is, or ought to be, a coolness and a calmness …
—Herman Melville (1851, Moby‐Dick, Ch. 135, p. 419)
Thinking is the hardest work there is, which is the probable reason why so few engage in it.
—Henry Ford (1928, p. 481)
Thinking does not guarantee that you will not make mistakes. But not thinking guarantees that you will.
—Leslie Lamport (2015, p. 41)
Let's turn to the second question: why is philosophy only the search for truth? Can't we find the truth? Perhaps not.
How does one go about searching for the truth, for answering questions? There are basically two complementary methods: (1) thinking hard and (2) empirical investigation. We'll look at the second of these in Section 2.5. First, let's focus on thinking hard.
Some have claimed that philosophy is just thinking really hard about things (Popova, 2012). Such hard thinking requires “rethinking explicitly what we already believe implicitly” (Baars, 1997, p. 187). In other words, it's more than merely expressing one's opinion. It's also different from empirical investigation:
Philosophy is thinking hard about the most difficult problems that there are. And you might think scientists do that too, but there's a certain kind of question whose difficulty can't be resolved by getting more empirical evidence. It requires an untangling of presuppositions: figuring out that our thinking is being driven by ideas we didn't even realize that we had. And that's what philosophy is. (David Papineau, quoted in Edmonds and Warburton 2010, p. xx)
But we may not be able to find the truth, either by thinking hard or by empirical investigation. The philosopher Colin McGinn (1989, 1993) discusses the possibility that limitations of our (present) cognitive abilities may make it as impossible for us to understand the truth about certain things (such as the mind‐body problem or the nature of consciousness) as an ant's cognitive limitations make it impossible for it to understand calculus. But we may not have to find the truth. G.E. Lessing (1778, my italics)4 said,
The true value of a man [sic] is not determined by his possession, supposed or real, of Truth, but rather by his sincere exertion to get to the Truth. It is not possession of the Truth, but rather the pursuit of Truth by which he extends his powers …
Digression: ‘[sic]’: The annotation ‘[sic]’ (which is Latin for “thus” or “so”) is used when an apparent error or odd usage of a word or phrase is to be blamed on the original author and not on the person (in this case, me!) who is quoting the author. For example, here I want to indicate that it is Lessing who said “the true value of a man,” where I would have said “the true value of a person.”
In a similar vein, the mathematician Carl Friedrich Gauss (1808) said, “It is not knowledge, but the act of learning, not possession but the act of getting there, which grants the greatest enjoyment.”
Questions, questions. That's the trouble with philosophy: you try and fix a problem to make your theory work, and a whole host of others then come along that you have to fix as well.
—Helen Beebee (2017)
One reason the search for truth will never end (which is different from saying that it will not succeed) is that you can always ask “Why?”; i.e. you can always continue inquiring. This is
the way philosophy – and philosophers – are[:] Questions beget questions, and those questions beget another whole generation of questions. It's questions all the way down. (Cathcart and Klein, 2007, p. 4)
You can even ask why “Why?” is the most important question (Everett, 2012, p. 38)! “The main concern of philosophy is to question and understand very common ideas that all of us use every day without thinking about them” (Nagel, 1987, p. 5). This is the reason, perhaps, that the questions children often ask (especially, “Why?”) are often deeply philosophical.
The physicist John Wheeler pointed out that the more questions you answer, the more questions you can ask: “We live on an island surrounded by a sea of ignorance. As our island of knowledge grows, so does the shore of our ignorance” (https://en.wikiquote.org/wiki/John_Archibald_Wheeler). And “Philosophy patrols the border [e.g. the shore], trying to understand how we got there and to conceptualize our next move” (Soames, 2016). The US economist and social philosopher Thorstein Veblen said, “The outcome of any serious research can only be to make two questions grow where only one grew before” (Veblen, 1908, p. 396).
Asking “Why?” is the principal part of philosophy's “general role of critically evaluating beliefs” (Colburn, 2000, p. 6) and “refusing to accept any platitudes or accepted wisdom without examining it” (Donna Dickenson, in Popova 2012). As the humorist George Carlin put it,
[It's] not important to get children to read. Children who wanna read are gonna read. Kids who want to learn to read [are] going to learn to read. [It's] much more important to teach children to QUESTION what they read. Children should be taught to question everything. (https://georgecarlin.net/bogus/question.html)
Whenever you have a question, either because you do not understand something or because you are surprised by it or unsure of it, you should begin to think carefully about it. And one of the best ways to do this is to ask “Why?”: Why did the author say that? Why does the author believe it? Why should I believe it? We can call this “looking backward” toward reasons. And a related set of questions are these: What are its implications? What else must be true if that were true? And should I believe those implications? Call this “looking forward” to consequences. Because we can always ask these backward‐ and forward‐looking questions, we can understand why …
… we should never rest assured that our view, no matter how well argued and reasoned, amounts to the final word on any matter. (Goldstein, 2014, p. 396)
This is why philosophy must be argumentative. … Only in this way can intuitions that have their source in societal or personal idiosyncrasies be exposed and questioned. (Goldstein, 2014, p. 39)
The arguments are argued over, typically, by challenging their assumptions. It is rare that a philosophical argument will be found to be invalid (i.e. logically incorrect).5 The most interesting arguments are valid ones, so that the only concern is over the truth of their “premises”: the reasons for the conclusion. An argument that is found to be invalid is usually a source of disappointment – unless the invalidity points to a missing premise or reveals a flaw in the very nature of logic itself (an even rarer, but not unknown, occurrence).
