Detailed review of optimization from first principles, supported by rigorous math and computer science explanations and various learning aids
Supported by rigorous math and computer science foundations, Combinatorial and Algorithmic Mathematics: From Foundation to Optimization provides a from-scratch understanding of the field of optimization. It discusses 70 algorithms with roughly 220 illustrative examples and 160 nontrivial end-of-chapter exercises with complete solutions, ensuring readers can apply the appropriate theories, principles, and concepts when required, and it includes MATLAB code that solves specific problems. This book helps readers to develop mathematical maturity, including skills such as handling increasingly abstract ideas, recognizing mathematical patterns, and generalizing from specific examples to broad concepts.
Starting from first principles of mathematical logic, set-theoretic structures, and analytic and algebraic structures, this book covers both combinatorics and algorithms in separate sections, then brings the material together in a final section on optimization. This book focuses on topics essential for anyone wanting to develop and apply their understanding of optimization to areas such as data structures, algorithms, artificial intelligence, machine learning, data science, computer systems, networks, and computer security.
Combinatorial and Algorithmic Mathematics includes discussion on:
Combinatorial and Algorithmic Mathematics is an ideal textbook resource on the subject for students studying discrete structures, combinatorics, algorithms, and optimization. It also caters to scientists across diverse disciplines that incorporate algorithms, as well as academics and researchers who wish to better understand modern optimization methodologies.
Page count: 713
Publication year: 2024
Cover
Table of Contents
Title Page
Copyright
Dedication
About the Author
Preface
Acknowledgments
About the Companion Website
Part I: Foundations
1 Mathematical Logic
1.1 Propositions
1.2 Logical Operators
1.3 Propositional Formulas
1.4 Logical Normal Forms
1.5 The Boolean Satisfiability Problem
1.6 Predicates and Quantifiers
1.7 Symbolizing Statements of the Form “All P Are Q”
Exercises
Notes and Sources
References
Notes
2 Set-Theoretic Structures
2.1 Induction
2.2 Sets
2.3 Relations
2.4 Partitions
2.5 Functions
Exercises
Notes and Sources
References
Note
3 Analytic and Algebraic Structures
3.1 Sequences
3.2 Summations and Series
3.3 Matrices, Subspaces, and Bases
3.4 Convexity, Polyhedra, and Cones
3.5 Farkas’ Lemma and Its Variants
Exercises
Notes and Sources
References
Notes
Part II: Combinatorics
4 Graphs
4.1 Basic Graph Definitions
4.2 Isomorphism and Properties of Graphs
4.3 Eulerian and Hamiltonian Graphs
4.4 Graph Coloring
4.5 Directed Graphs
Exercises
Notes and Sources
References
Notes
5 Recurrences
5.1 Guess-and-Confirm
5.2 Recursion-Iteration
5.3 Generating Functions
5.4 Recursion-Tree
Exercises
Notes and Sources
References
Notes
6 Counting
6.1 Binomial Coefficients and Identities
6.2 Fundamental Principles of Counting
6.3 The Pigeonhole Principle
6.4 Permutations
6.5 Combinations
Exercises
Notes and Sources
References
Notes
Part III: Algorithms
7 Analysis of Algorithms
7.1 Constructing and Comparing Algorithms
7.2 Running Time of Algorithms
7.3 Asymptotic Notation
7.4 Analyzing Decision-Making Statements
7.5 Analyzing Programs Without Function Calls
7.6 Analyzing Programs with Function Calls
7.7 The Complexity Class NP-Complete
Exercises
Notes and Sources
References
Notes
8 Array and Numeric Algorithms
8.1 Array Multiplication Algorithms
8.2 Array Searching Algorithms
8.3 Array Sorting Algorithms
8.4 Euclid’s Algorithm
8.5 Newton’s Method Algorithm
Exercises
Notes and Sources
References
Note
9 Elementary Combinatorial Algorithms
9.1 Graph Representations
9.2 Breadth-First Search Algorithm
9.3 Applications of Breadth-First Search
9.4 Depth-First Search Algorithm
9.5 Applications of Depth-First Search
9.6 Topological Sort
Exercises
Notes and Sources
References
Note
Part IV: Optimization
10 Linear Programming
10.1 Linear Programming Formulation and Examples
10.2 The Graphical Method
10.3 Standard Form Linear Programs
10.4 Geometry of Linear Programming
10.5 The Simplex Method
10.6 Duality in Linear Programming
10.7 A Homogeneous Interior-Point Method
Exercises
Notes and Sources
References
Notes
11 Second-Order Cone Programming
11.1 The Second-Order Cone and Its Algebraic Structure
11.2 Second-Order Cone Programming Formulation
11.3 Applications in Engineering and Finance
11.4 Duality in Second-Order Cone Programming
11.5 A Primal-Dual Path-Following Algorithm
11.6 A Homogeneous Self-Dual Algorithm
Exercises
Notes and Sources
References
Notes
12 Semidefinite Programming and Combinatorial Optimization
12.1 The Cone of Positive Semidefinite Matrices
12.2 Semidefinite Programming Formulation
12.3 Applications in Combinatorial Optimization
12.4 Duality in Semidefinite Programming
12.5 A Primal–Dual Path-Following Algorithm
Exercises
Notes and Sources
References
Appendix A: Solutions to Chapter Exercises
References
Note
Bibliography
Index
End User License Agreement
Chapter 1
Table 1.1 A truth table for .
Table 1.2 The precedence order of logical operations.
Table 1.3 A tautology, contradiction, and contingency.
Table 1.4 A truth table for the propositional formulas of Example 1.20.
Table 1.5 A truth table verifying the first DeMorgan’s law.
Table 1.6 A truth table verifying that the conditional statement in (1.4) is...
Table 1.7 A truth table verifying .
Table 1.8 A list of the most common logical equivalences.
Table 1.9 A truth table defining a Boolean function on three variables.
Table 1.10 The truth table of Example 1.28.
Table 1.11 The truth table of item in Example 1.29.
Table 1.12 The truth table of item in Example 1.29.
Table 1.13 The truth table of Example 1.31.
Table 1.14 The truth and falsity of quantifiers.
Chapter 2
Table 2.1 A list of the most common laws of algebra of sets. Here, , and ...
Table 2.2 Some relations on the natural numbers.
Chapter 3
Table 3.1 Some useful summation formulas.
Chapter 6
Table 6.1 Numbers of ways to do the tasks of going over simple and for state...
Table 6.2 Permutations and combinations with and without repetition.
Chapter 7
Table 7.1 Some common time complexities.
Table 7.2 Some common recurrence formulas and their complexities.
Chapter 8
Table 8.1 A concrete implementation of the binary search algorithm.
Table 8.2 A concrete implementation of the selection sort algorithm.
Table 8.3 Numerical results of Example 8.2.
Chapter 9
Table 9.1 Comparison between the adjacency list and adjacency matrix represe...
Chapter 10
Table 10.1 The answer of Example 10.13
Table 10.2 Correspondences between the basic feasible solutions in the stand...
Table 10.3 The simplex tableau of Example 10.34
Table 10.4 The simplex tableau of Example 10.35
Table 10.5 Correspondence rules between primal and dual linear programs
Table 10.6 Possibilities for the primal and the dual linear programs
Chapter 11
Table 11.1 The algebraic notions and concepts associated with the second-ord...
Chapter 12
Table 12.1 Comparison of gradient and Hessian derivatives of the logarithmic...
Chapter 1
Figure 1.1 Conceptual relationships among problem domain, modeling, and solu...
Chapter 2
Figure 2.1 Falling dominoes.
Figure 2.2 Venn diagrams for combined sets obtained from two sets and .
Figure 2.3 The year’s months (a), and their partitions by seasons (b) and by...
Figure 2.4 Different relations from to , some of them are functions from
Figure 2.5 The graphs of the circle and the function .
Figure 2.6 Range versus codomain.
Figure 2.7 The graph of the integer-valued cubic function with domain .
Figure 2.8 The graphs of the function with two different domains.
Figure 2.9 Examples of different types of functions.
Chapter 3
Figure 3.1 A graphical illustration for the subspace in Example 3.11 and aff...
Figure 3.2 Illustration of the definition of a convex function on .
Figure 3.3 A convex set versus a nonconvex set in .
Figure 3.4 Four convex hulls; three in and one in .
Figure 3.5 The conic and convex hulls of some points in .
Chapter 4
Figure 4.1 Directed edges forming a directed graph (a) and undirected edges ...
Figure 4.2 Examples of graphs and non-graphs.
Figure 4.3 Simple and nonsimple paths.
Figure 4.4 Cycle graphs.
Figure 4.5 Acyclic and (simple and nonsimple) cycle graphs.
Figure 4.6 The Petersen graph and some subgraphs of .
Figure 4.7 Connected and disconnected graphs.
Figure 4.8 A graph with three connected components.
Figure 4.9 A graph with seven connected components.
Figure 4.10 A tree graph (a) and a forest graph (b).
Figure 4.11 A graph with spanning trees.
Figure 4.12 Complete graphs.
Figure 4.13 Bipartite and complete bipartite graphs.
Figure 4.14 A planar graph (a) and a nonplanar graph (b).
Figure 4.15 A graph (a) and its complement (b).
Figure 4.16 Graphs with/without bridges.
Figure 4.17 An isomorphism between two graphs.
Figure 4.18 A pair of isomorphic graphs.
Figure 4.19 Graph of Example 4.6 .
Figure 4.20 The graph of Königsberg bridge problem has four vertices , and
Figure 4.21 An Eulerian path and cycle.
Figure 4.22 A graph with a Hamiltonian path and graphs with Hamiltonian cycl...
Figure 4.23 Some bipartite graphs. Among them, only satisfies the hypothes...
Figure 4.24 Scheduling final exams for 12 classes by coloring a graph of 12 ...
Figure 4.25 Proper graph coloring (a) versus improper graph coloring (b).
Figure 4.26 A simple digraph (a) versus a nonsimple digraph (b).
Figure 4.27 Balanced digraphs.
Figure 4.28 A valid directed path from to (a) versus a “nonvalid path” f...
Figure 4.29 A directed cycle (a), a directed tree (b), and a directed forest...
Figure 4.30 A directed tree (a), a DAG (b), and a non-DAG (c).
Figure 4.31 A weakly connected digraph (a) versus a strongly connected digra...
Figure 4.32 A digraph with three strongly connected components.
Figure 4.33 A digraph with five strongly connected components.
Figure 4.34 Digraph of Exercise 4.10.
Chapter 5
Figure 5.1 Constructing a recursion-tree for .
Figure 5.2 The recursion-tree for .
Figure 5.3 The recursion-tree for .
Chapter 6
Figure 6.1 Pascal’s triangle.
Figure 6.2 The possible routes that one can take to get from Jerash city to ...
Figure 6.3 Illustrating the pigeonhole principle: There are 10 pigeons but o...
Figure 6.4 Daily round-trips between Columbus and Toronto, with connections ...
Figure 6.5 Five different kinds of ice cream in five boxes (containers).
Chapter 7
Figure 7.1 The graph of .
Figure 7.2 Flowcharts showing basic statements. (a) If-statement in a flowch...
Figure 7.3 versus .
Figure 7.4 .
Figure 7.5 .
Figure 7.6 .
Figure 7.7 Relationships among Big-Oh, Big-Omega, and Big-Theta notations.
Figure 7.8 The running time complexity of an if-statement.
Figure 7.9 The running time complexity of a for-statement.
Figure 7.10 The running time complexity of a while statement.
Figure 7.11 The running time complexity of a do-while statement.
Figure 7.12 The running time complexity of a block.
Figure 7.13 The graph structure with time complexity for the code in Algorit...
Figure 7.14 The tree structure for Algorithm 7.24 with running time complexi...
Figure 7.15 The graph structure of Algorithm 7.25.
Figure 7.16 A basic scheme of the program shown in Algorithm 7.27.
Figure 7.17 Graphical relationships among the complexity classes P, NP, NP-h...
Figure 7.18 A subgraph structure of Algorithm 7.38.
Chapter 8
Figure 8.1 Linear search graph structure with running time complexity.
Figure 8.2 A concrete implementation of the insertion sort algorithm.
Figure 8.3 Selection sort tree structure with running time complexity.
Figure 8.4 A concrete implementation of the merge sort algorithm.
Figure 8.5 The geometry of Newton’s method.
Chapter 9
Figure 9.1 The breadth-first search scheme.
Figure 9.2 The graphs of Example 9.3.
Figure 9.3 Illustrating the progress of breadth-first search on the graph of...
Figure 9.4 Applying the breadth-first search on the digraph of Example 9.3
Figure 9.5 A graph and its breadth-first tree.
Figure 9.6 Finding shortest paths by running breadth-first searches.
Figure 9.7 Illustrating the progress of breadth-first search on testing bipa...
Figure 9.8 The operation of breadth-first search (a) versus that of depth-fi...
Figure 9.9 The depth-first search scheme.
Figure 9.10 Visualization of the progress of the depth-first search method o...
Figure 9.11 A graph and its depth-first forest.
Figure 9.12 Detecting a cycle by running a depth-first search.
Figure 9.13 A directed graph (a) and its topological ordering (b).
Figure 9.14 A house construction order, indicating which to execute first, i...
Figure 9.15 Visualization of the progress of Workflow 9.4 steps to compute a...
Figure 9.16 The DAG in Example 9.6 and its topological ordering (shown to ...
Chapter 10
Figure 10.1 Graphical solution of the LP problem in Example 10.9
Figure 10.2 Graphical solution of the optimization problem in Example 10.10
Figure 10.3 Graphical solution of the optimization problem in Example 10.10
Figure 10.4 Graphical solution of the optimization problem in Example 10.11
Figure 10.5 Graphical solution of the optimization problem in Example 10.11
Figure 10.6 Graphical solution of the optimization problem in Example 10.12
Figure 10.7 Graphical solution of the optimization problem in Example 10.12
Figure 10.8 Graphical solution of the optimization problem in Example 10.14....
Figure 10.9 Vertices (’s) versus nonvertices (’s).
Figure 10.10 Extreme points (’s) versus nonextreme points (’s).
Figure 10.11 The polyhedron given in Example 10.18 with four corners.
Figure 10.12 Basic solutions and basic feasible solutions.
Figure 10.13 The polyhedron in Example 10.20.
Figure 10.14 Pointed polyhedron (a) versus nonpointed polyhedron (b).
Figure 10.15 The duality gap between the primal and dual LP problems.
Chapter 11
Figure 11.1 A Venn diagram of different classes of optimization problems.
Chapter 12
Figure 12.1 A Venn diagram of different classes of optimization problems.
Figure 12.2 A 3D plot of the boundary of the cone of the positive semidefini...
Appendix A
Figure A.1 The strongly connected components of the digraph of Exercise 4.10...
Figure A.2 A Venn diagram for Exercise 6.11.
Figure A.3 The graph structure of Algorithm 7.38.
Figure A.4 A basic scheme of the program shown in Algorithm 8.14.
Figure A.5 A basic scheme of the program shown in Algorithm 8.15.
Figure A.6 Graphical solution of the optimization problem in Exercise 10.5....
Figure A.7 Graphical solution of the optimization problem in Exercise 10.6....
Figure A.8 Graphical solution of the optimization problem in Exercise 10.7
Figure A.9 Graphical solution of the optimization problem in Exercise 10.7
Figure A.10 Graphical solution of the optimization problem in Exercise 10.7
Figure A.11 Graphical solution of the optimization problem in Exercise 10.8
Figure A.12 Graphical solution of the optimization problem in Exercise 10.8
Figure A.13 Graphical solution of the optimization problem in Exercise 10.8
Figure A.14 Graphical solution of the optimization problem in Exercise 10.9
Figure A.15 Graphical solution of the optimization problem in Exercise 10.9
Baha Alzalg
The University of Jordan
Amman, Jordan
This edition first published 2024
© 2024 John Wiley and Sons, Ltd
All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.
The right of Baha Alzalg to be identified as the author of this work has been asserted in accordance with law.
Registered Office(s): John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA; John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK
For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.
Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
Library of Congress Cataloging-in-Publication Data
Names: Alzalg, Baha, author.
Title: Combinatorial and algorithmic mathematics : from foundation to optimization / Baha Alzalg, the University of Jordan, Amman, Jordan.
Description: Hoboken, NJ : Wiley, 2024. | Includes bibliographical references and index.
Identifiers: LCCN 2024011181 (print) | LCCN 2024011182 (ebook) | ISBN 9781394235940 (hardback) | ISBN 9781394235957 (adobe pdf) | ISBN 9781394235964 (epub)
Subjects: LCSH: Combinatorial analysis. | Algorithms. | Mathematical optimization.
Classification: LCC QA164.A4693 2024 (print) | LCC QA164 (ebook) | DDC 511/.6–dc23/eng/20240412
LC record available at https://lccn.loc.gov/2024011181
LC ebook record available at https://lccn.loc.gov/2024011182
Cover Design: Wiley
Cover Image: © Creative-Touch/Getty Images
In fond memory of
Ala M. Alzaalig
1983–2021
Also dedicated to
my father Mahmoud,
my mother Aysheh,
and my wife Ayat
Baha Alzalg is a professor in the Department of Mathematics at the University of Jordan in Amman, Jordan. He has also held the post of visiting associate professor in the Department of Computer Science and Engineering at the Ohio State University in Columbus, Ohio. His research interests include topics in optimization theory, applications, and algorithms, with an emphasis on interior-point methods for cone programming.
Alzalg is a PhD graduate of Washington State University in Pullman, Washington. After graduation, he was a postdoctoral fellow at the University of California, Davis. During his career, he published numerous articles in leading journals in mathematical optimization and mentored several PhD and Master’s students in this field. To view his online bio, which includes links to his past published articles, visit him at sites.ju.edu.jo/sites/alzalg
While there has been a proliferation of high-quality books in the areas of combinatorics, algorithms, and optimization in recent years, there has remained a notable absence of a comprehensive reference work accessible to students that covers all essential aspects of these three fields, starting from their foundations. Our motivation for writing this book was to supply a clear and easily digestible resource designed specifically for traditional courses in combinatorial and algorithmic mathematics with optimization. These courses are typically pursued by students in mathematics, computer science, and engineering following their introductory calculus course. The book’s origins trace back to a series of lecture notes developed for courses offered at the University of Jordan and the Ohio State University. Its primary target audience is students studying discrete structures, combinatorics, algorithms, and optimization, but it also caters to scientists across diverse disciplines that incorporate algorithms. The book is crowned with modern optimization methodologies. Without the optimization part, the book can be used as a textbook in a one- or two-term undergraduate course in combinatorial and algorithmic mathematics. The optimization part can be used in a one-term high-level undergraduate course, or a low- to medium-level graduate course.
The book is divided into four major parts, and each part is divided into three chapters. Part I is devoted to studying mathematical foundations. This includes mathematical logic and basic structures. This part is divided into three chapters (Chapters 1–3). In Chapter 1, we study the propositional logic and the predicate logic. In Chapter 2, we study basic set-theoretic structures such as sets, relations, and functions. In Chapter 3, we study basic analytic and algebraic structures such as sequences, series, subspaces, convex structures, and polyhedra.
Part II is devoted to studying combinatorial structures. Discrete mathematics is the study of countable structures. Combinatorics is an area of discrete mathematics that studies how to count these objects using various representations. Part II, which is divided into three chapters (Chapters 4–6), studies recursion techniques, counting methods, permutations, combinations, arrangements of objects and sets, and graphs. Specifically, Chapter 4 introduces graph basics and properties, Chapter 5 presents some recurrence-solving techniques, and Chapter 6 introduces some counting principles, permutations, and combinations.
Part III is devoted to studying algorithmic mathematics, which is a branch of mathematics that deals with the design and analysis of algorithms. Analyzing algorithms allows us to determine and express their efficiency. This part is divided into three chapters (Chapters 7–9). In Chapter 7, we discuss the asymptotic notations, one of the important tools in algorithmic analysis. Then we dive more into determining the computational complexity of various algorithms. In Chapter 8, we present and analyze standard integer, array and numeric algorithms. In Chapter 9, we present elementary combinatorial algorithms.
Part IV is devoted to studying linear and conic optimization problems, and is divided into three chapters (Chapters 10–12). Chapter 10 introduces linear optimization and studies its geometry and duality. This chapter also studies simplex and non-simplex algorithms for linear optimization. Second-order cone programming is linear programming over vectors belonging to second-order cones. Semidefinite programming is linear programming over positive semidefinite matrices. One of the chief attractions of these conic optimization problems is their diverse applications, many in engineering. In Chapter 11, we introduce second-order cone optimization applications and algorithms. In Chapter 12, we introduce semidefinite optimization applications (especially combinatorial applications) and algorithms.
Each chapter commences with a mini table of contents. This provides what readers can expect to find in the chapter. At the end of every chapter, readers will find a set of exercises aimed at applying their knowledge of the chapter’s concepts and enriching their understanding. Solutions to all of these exercises can be found in Appendix A. The book is also accompanied by a companion website that connects students and instructors. This website includes binary and multiple choice questions together with their solutions. Additionally, to further aid readers, we have placed a list of references at the end of each chapter.
Columbus, OH
April 2022
Baha Alzalg
I hold a deep sense of gratitude and appreciation toward the authors of some of the early books in this discipline, as their works have played a pivotal role in shaping the content of this book. Throughout the writing process, I have made every effort to diligently cite all references, ensuring the accuracy and integrity of the content. However, I must humbly acknowledge the potential for unintentional oversights. If, in any instance, you encounter an unattributed or inadequately cited statement, table, figure, example, or exercise within this book, I sincerely apologize. It was never my intent to omit proper attribution, and I am committed to rectifying any such omissions in future editions. Your understanding and support are greatly valued.
I am grateful to the Department of Mathematics at my home institution, the University of Jordan, for granting me the sabbatical and unpaid leaves I used to work on this book. I also thank the Department of Computer Science and Engineering at the Ohio State University for hosting me during the academic years 2019/2020 to 2021/2022. The author is pleased to thank his host at OSU, Rephael Wenger. Despite being busy as an associate chair of the department, Rephael found time to read certain chapters of the book and offered valuable feedback, for which the author is also appreciative.
Some of the author’s students at the Ohio State University have contributed by reading parts of the manuscript, pointing out misprints, and working on the exercises: Ron Chen, Luke Evers, Sery Gunawardena, Yiqing Li, Sarthak Mohanty, and Kishore Prakash Sailaja. The author’s PhD student at the University of Jordan, Asma Gafour, has also read some chapters from the optimization part and pointed out some misprints. The author is also grateful for fruitful discussions with a number of individuals, including Kenneth Supowit and Doreen Close of OSU and Fuad Kittaneh of UJ. Great people and a wonderful experience.
I am also profoundly thankful to my father, Mahmoud N. Alzalg, my mother, Aysheh H. Alzoubi, my brother, Lewa Alzaleq, and my beloved children, Heba, Rami, Alma, and Mahmoud, as well as my sisters, for their blessings and continuous inspiration that have buoyed my spirit throughout this journey. Their encouragement and unwavering support have been invaluable. And of course, I would like to thank my wife, Ayat Ababneh, for her patience and belief in making this book a reality. She gave me support and help and discussed with me some of its ideas. I am also immensely appreciative of her dedication in managing our family responsibilities during a significant portion of the book’s development.
A preliminary version of this book was fully written and independently published on September 20, 2022 (ISBN-13: 979-8353826934). This self-publishing was done before the launch of ChatGPT on November 30, 2022. Nevertheless, it is important to acknowledge that certain insights and content in this book version may have been influenced by ChatGPT. It is worth noting that we have meticulously verified any information or data obtained from ChatGPT that has been integrated into this book edition in order to maintain its quality and reliability.
The copyright for this book is held by Wiley, who has generously agreed to allow us to keep the preliminary edition of the book available on the web, and we acknowledge their kind permission.
Finally, I offer my sincere gratitude and praise to Allah (Alhamdulillah) for granting me the strength, resilience, and ability to undertake the work involved in this book. His divine guidance and blessings have been an unshakable foundation upon which this project has been built.
Baha Alzalg
This book is accompanied by a companion website:
www.wiley.com/go/alzalg
This website includes:
Binary Choice Questions
Multiple Choice Questions
Solutions
1.1 Propositions
1.2 Logical Operators
1.3 Propositional Formulas
1.4 Logical Normal Forms
1.5 The Boolean Satisfiability Problem
1.6 Predicates and Quantifiers
1.7 Symbolizing Statements of the Form “All P Are Q”
Exercises
Notes and Sources
References
The term “logic” has a broad definition, and countless logics have been explored by philosophers, mathematicians, and computer scientists. For most individuals, “logic” typically refers to either propositional logic or predicate logic. Propositional logic, a classical form, involves two truth values (true and false). Predicate logic, on the other hand, expands upon propositional logic by allowing explicit discussion of objects and their properties.
In this chapter, we first study propositional logic, which is the simplest and most abstract logic we can study. After that, we study predicate logic. We start by introducing the notion of a proposition.
Throughout this chapter (and the entire book), ℕ denotes the set of all natural numbers, ℤ denotes the set of all integers, and ℚ denotes the set of all rational numbers. For example, 2 and are rational numbers, but and are irrational numbers. We also let ℝ denote the set of all real (rational and irrational) numbers.
In this section, we define a proposition and introduce two different types of propositions with examples. To begin with, we have the following definition.
Definition 1.1 A proposition is a statement that can be either true or false.
It is essential to emphasize that the proposition must hold either a true or false value, and it cannot simultaneously possess both. We give the following example.
Example 1.1
Identify each of the following statements as a proposition or not and provide any required clarifications.
.
.
Hillary Clinton is a former president of the United States.
Bill Clinton is a former president of the United States.
Be careful.
Is Abraham Lincoln the greatest president that the United States has ever had?
.
“” is a proposition (a true proposition).
“” is a proposition (a false proposition).
“Hillary Clinton is a former president of the United States” is a proposition (a false proposition).
“Bill Clinton is a former president of the United States” is a proposition (a true proposition).
“Be careful” is not a proposition.
“Is Abraham Lincoln the greatest president that the United States has ever had?” is not a proposition.
“” is not a proposition. In fact, the values of the variables have not been assigned. So, we do not know the values of and , and hence it is neither true nor false.
▄
Sometimes a sentence does not provide enough information to determine whether it is true or false, so it is not a proposition. An example is, “Your answer to Question 13 is incorrect.” The sentence does not tell us who we are talking about. If we identify the person, say “Adam’s answer to Question 13 is incorrect,” then the sentence becomes a proposition.
Note that, for a given statement, being not able to decide whether it is true or false (due to the lack of information) is different from being not able to know how to verify whether it is true or false. Consider Goldbach’s conjecture1: “Every even integer greater than 2 can be written as the sum of two primes.”2 Despite its origin dating back to 1742 and computational data suggesting its validity, this conjecture remains an open problem in number theory. It is impossible for this sentence to be true sometimes, and false at other times. As of now, no one has proved or disproved this claim, classifying it as a proposition with an undetermined truth value because it cannot simultaneously be both true and false, and its resolution may await future mathematical breakthroughs.
There are some sentences that cannot be determined to be either true or false. For example, the sentence: “This statement is false.” This is not a proposition because we cannot decide whether it is true or false. In fact, if the sentence: “This statement is false” is true, then by its meaning it is false. On the other hand, if the sentence: “This statement is false” is false, then by its meaning it is true. Therefore, the sentence: “This statement is false” can have neither true nor false for its truth value. This type of sentence is referred to as a paradox.3 The study of paradoxes has played a fundamental role in the development of modern mathematical logic.
Note that the sentence “This statement is false,” which is self-contradicting, is different from the sentence “This statement is true,” which is self-consistent on either choice. Nevertheless, neither is a proposition. The latter type of sentence is referred to as an anti-paradox.4 The question that arises now is: Why is the sentence “This statement is true” not a proposition? This question is left as an exercise for the reader.
We now define atomic (or primitive) propositions. Intuitively, these are the set of smallest propositions.
Definition 1.2 An atomic proposition is one whose truth or falsity does not depend on the truth or falsity of any other proposition.
According to Definition 1.2, we find that all the propositions in items – of Example 1.1 are atomic.
Definition 1.3 A compound proposition is a proposition that involves the assembly of multiple atomic propositions.
The following connectives allow us to build up compound propositions:
Example 1.2 The following proposition is compound.
Note that
compound proposition 1 = (atomic proposition 1) (atomic proposition 2),
compound proposition 2 = (compound proposition 1) (atomic proposition 3).
▄
In the realm of natural language, such as English, ambiguity is a recurring challenge that writers and readers encounter. The following remark says that slight variations in context could drastically alter the intended message.
Remark 1.1 Sentences in natural languages can often be ambiguous and words can have different meanings in the context in which they are used.
To illustrate Remark 1.1, we consider the following sentences:
The sentence “You can download WhatsApp or Skype to ring friends” could mean that you can download only one of the applications, or it could mean that you can download one or both.
The sentence “I smelled a chlorine-like odor and felt ill” implies that the odor of chlorine made you sick, but the sentence “I am majoring in CS and minoring in Math” does not imply that majoring in CS caused you to minor in Math.
The sentence “I went to Chicago and took a plane” could mean that you took a plane to travel to Chicago or it could mean that you went to Chicago and then took a plane from Chicago to another destination, such as Las Vegas.
In mathematics and computer science, it is important to avoid ambiguity and for sentences to have a precise meaning. This is why people have invented artificial languages such as Java.
Rather than writing out propositions in full, we will abbreviate them by using propositional variables: P, Q, R, etc.
Example 1.3 (Example 1.2 revisited)
In Example 1.2, let P be the proposition “It is raining,” Q be the proposition “You are outside,” and R be the proposition “You get wet.” Then we can abbreviate the compound proposition
▄
In Section 1.2, we will learn more about the connectives AND, OR, NOT, IF-THEN, and IFF. This will enable us to gain a comprehensive understanding of how these connectives function within the context of formal logic and the role they play in constructing logical statements and arguments.
In this section, we study the following logical operators or connectives: Negation, conjunction, disjunction, exclusive disjunction, implication, and double implication.
Before starting with the logical operators, we introduce the truth tables which help us understand how such operators work by calculating all of the possible return values of atomic propositions.
Definition 1.4 A truth table is a mathematical table used to show the truth or falsity of a compound proposition depending on the truth or falsity of the atomic propositions from which it is constructed.
Examples of truth tables will be seen very frequently throughout this chapter.
This part is devoted to introducing the logical operators: Negation (NOT), conjunction (AND), disjunction (OR), and exclusive disjunction (XOR).
The logical operator “NOT” transforms a proposition into its opposite truth value via a negation.
Definition 1.5 If P is an arbitrary proposition, then the negation of P is written ¬P (NOT P), which is true when P is false and is false when P is true.
The truth table for “¬P” is shown below.

P   ¬P
T   F
F   T
Example 1.4 If P is the proposition “It is raining,” then ¬P is the proposition “It is not raining.”
▄
Remark 1.2 Negation does not mean opposite. For instance, if is a real number and is the proposition “ is not positive,” then you cannot conclude that is the proposition “ is negative” because could be 0, which is neither positive nor negative.
In real numbers, two negative signs cancel each other out. Similarly, in propositional logic, two negations also cancel each other out. The double negation of P is ¬(¬P), or ¬¬P. The truth table for “¬¬P” is shown below.

P   ¬P   ¬¬P
T   F    T
F   T    F
Logical equivalence is a type of relationship between two propositional formulas in propositional logic.
Definition 1.6 When the truth values for two propositional formulas A and B are the same, the propositional formulas are called logically equivalent, and this is denoted as A ≡ B or A ⇔ B.
For instance, from the truth table for “¬¬P,” we have ¬¬P ≡ P. This equivalence is called the double negation law.
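Logical equivalence can also be checked mechanically by comparing truth values over all assignments. The following minimal Python sketch (an illustration of ours, not the book’s MATLAB code; the helper name equivalent is hypothetical) verifies the double negation law this way.

    from itertools import product

    def equivalent(f, g, num_vars):
        # Two formulas are logically equivalent when they agree on every truth assignment.
        return all(f(*vals) == g(*vals)
                   for vals in product([True, False], repeat=num_vars))

    # Double negation law: NOT (NOT P) has the same truth value as P.
    print(equivalent(lambda p: not (not p), lambda p: p, 1))  # True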
The negation connective is unary as it only takes one argument. The upcoming connectives are binary as they take two arguments.
The logical operator “AND” connects two or more propositions via a conjunction.
Definition 1.7 If P and Q are arbitrary propositions, then the conjunction of P and Q is written P ∧ Q (P AND Q), which is true when both P and Q are true.
The truth table for “P ∧ Q” is shown below.

P   Q   P ∧ Q
T   T   T
T   F   F
F   T   F
F   F   F
Example 1.5 If P is the proposition “It is raining” and Q is the proposition “I have an umbrella,” then P ∧ Q is the proposition “It is raining and I have an umbrella.”
▄
The logical operator “OR” connects two or more propositions via a disjunction.
Definition 1.8 If P and Q are arbitrary propositions, then the disjunction of P and Q is written P ∨ Q (P OR Q), which is true when either P is true, or Q is true, or both P and Q are true.
The truth table for “P ∨ Q” is shown below.

P   Q   P ∨ Q
T   T   T
T   F   T
F   T   T
F   F   F
Example 1.6 If P is the proposition “I get married” and Q is the proposition “I live alone,” then P ∨ Q is the proposition “I get married or I live alone.”
▄
In English, the word “or” can be used in two different ways: inclusively or exclusively. For example, a computer science course might require that a student be able to program in either Java or C++ before enrolling in the course. In this case, “or” is used inclusively: a student who can program in both Java and C++ would be eligible to take the course. On the other hand, consider the statement: I get married or stay single. In this case, “or” is used exclusively: you can either get married or stay single, but not both.
Definition 1.9 If P and Q are arbitrary propositions, then the exclusive disjunction of P and Q is written P ⊕ Q (P XOR Q), which is true when exactly one of P and Q is true and false otherwise.
The truth table for “P ⊕ Q” is shown below.

P   Q   P ⊕ Q
T   T   F
T   F   T
F   T   T
F   F   F
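To make the inclusive/exclusive distinction concrete, the short Python sketch below (ours, not from the book) tabulates conjunction, inclusive disjunction, and exclusive disjunction over all four assignments; in Python, the keywords and/or are the inclusive connectives, while != on Booleans behaves like XOR.

    from itertools import product

    # Tabulate AND, OR, and XOR for every pair of truth values.
    for p, q in product([True, False], repeat=2):
        print(p, q, "AND:", p and q, "OR:", p or q, "XOR:", p != q)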
In this part, we study the following logical operators: implication (IF-THEN) and double implication (IFF).
The logical operator “IF-THEN” connects two or more propositions via an implication, thus forming a conditional proposition.
Definition 1.10 Let P and Q be arbitrary propositions. The implication P implies Q (also called the conditional of P and Q) is written P → Q (IF P THEN Q), which is false when P is true and Q is false, and true otherwise. Here, P is called the hypothesis and Q is called the conclusion.
Example 1.7 If P is the proposition “You live in Russia” and Q is the proposition “You live in the coldest country in the world,” then P → Q is the proposition “If you live in Russia, then you live in the coldest country in the world.”
▄
We have five different ways to read P → Q, as is seen in the following remark.
Remark 1.3 Let P and Q be two atomic propositions. The following statements have the same meaning as the conditional statement “If P then Q.”
Q if P.
P is sufficient for Q.
Q is necessary for P.
P only if Q.
Example 1.8 Consider the implication:
If a person is a president of the United States, then s/he is at least 35 years old.
According to Remark 1.3, the above implication can be restated in the following equivalent ways:
A person is at least 35 years old if s/he is president of the United States.
Being president of the United States is sufficient for being at least 35 years old.
Being at least 35 years old is necessary for being President of the United States.
A person is president of the United States only if s/he is at least 35 years old.
▄
The truth table for “P → Q” is shown below.

P   Q   P → Q
T   T   T
T   F   F
F   T   T
F   F   T
Therefore, the implication is true if we are on the lines (tt), (ft), or (ff) only, and it is false if we are on the line (tf). We now justify the truth table for “P → Q.” Suppose that Sara is a math teacher and that she told her students in a class before the first midterm exam the statement that “If everyone in the class gets an A on the midterm, then I will make cookies for the class.” Note that this statement says nothing about what will happen if not everyone gets an A. So, if a student did not get an A, Sara is free to make cookies or not and she will have told the truth in either case. Thus, the only case where Sara did not tell the truth (i.e., the implication is false) is the case that everyone got an A in the midterm (i.e., the hypothesis is true), but Sara did not make cookies for the class (i.e., the conclusion is false). This is the case that gives the (tf)-line in the implication truth table.
Example 1.9 Let denote the set of real numbers and .5 Decide whether each of the following implications is true or false. Justify your answer.
If , then .
If , then .
If , then .
If , then .
If the Riemann hypothesis is true, then .
6
The very first look tells us that the proposition in item is true, and those in items and are false, but we might be uncertain about the propositions in items and . Since all these propositions have the form, we shall analyze each item according to the truth table for “.” As mentioned earlier, the conditional statement is true if we are on the lines (tt), (ft), or (ff) only, and it is false if we are on the line (tf).
Since the hypothesis is and the conclusion is , we are
So, we cannot be on the (tf)-line for any . Thus, the proposition is true.
Since the hypothesis is and the conclusion is , we are
So, there is that makes us on the (tf)-line. Thus, the proposition is false.
Take , then , but . So, we are on the (tf)-line. Hence, the proposition is false.
is always false, so we cannot be on the (tf)-line. Hence, the proposition is true.
The Riemann hypothesis is an unsolved conjecture in mathematics. That is, no one knows if it is true or false at this time. But this does not prevent us from answering the question for this item. In fact, as is always true, we cannot be on the (tf)-line. Hence, the proposition is true.
▄
Example 1.10 Use a truth table to show that P → Q ≡ ¬P ∨ Q. This equivalence is called the implication law.
We prove that two propositional formulas are logically equivalent by showing that their truth values are the same. A truth table that contains the truth values for P → Q and ¬P ∨ Q is shown below and completes the proof.
P   Q   P → Q   ¬P   ¬P ∨ Q
T   T   T       F    T
T   F   F       F    F
F   T   T       T    T
F   F   T       T    T
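For readers who like to double-check such equivalences by machine, here is a small Python sketch (an illustration of ours, not the book’s MATLAB code; the helper name implies is hypothetical) that defines the conditional by its truth table and confirms the implication law on all four assignments.

    from itertools import product

    def implies(p, q):
        # False exactly on the (tf)-line of the truth table, true otherwise.
        return not (p and not q)

    # Implication law: P -> Q is logically equivalent to (NOT P) OR Q.
    print(all(implies(p, q) == ((not p) or q)
              for p, q in product([True, False], repeat=2)))  # True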
Implications are important in mathematics because many mathematical theorems can be restated in the IF-THEN form, which enables us to prove them. See, for instance, the theorem that is stated and proved in the following example.
Example 1.11 Prove the result of the following theorem.
To prove the theorem, we restate it in IF-THEN form:
Proof: If and are even, then each of and is the product of 2 and an integer. That is,
where denotes the set of integers. Then . Since the sum of any two integers is an integer, this means that is the product of 2 and an integer, and hence is even. The proof is complete.
▄
The contrapositive of the implication P → Q is the implication ¬Q → ¬P.
Example 1.12 The contrapositive of the proposition “If today is Monday, then Adam has a class today” is “If Adam does not have a class today, then today is not Monday.”
▄
The truth table for the contrapositive is given below.

P   Q   ¬Q   ¬P   ¬Q → ¬P
T   T   F    F    T
T   F   T    F    F
F   T   F    T    T
F   F   T    T    T
From the truth table for the contrapositive and that for the implication, we conclude that P → Q ≡ ¬Q → ¬P. This is called the law of contrapositive.
The contrapositive is important and useful in mathematics for two reasons:
Once a theorem in IF-THEN form is proven, there is no need to prove the contrapositive of the theorem. A proof by contrapositive, also known as contraposition, is a fundamental inference technique employed in proofs. It entails deriving a conditional statement from its contrapositive counterpart.
If we are trying to prove a theorem in IF-THEN form, sometimes it is easier to prove its contrapositive.
Example 1.13 supports the first reason and Example 1.14 supports the second reason.
Example 1.13 The Pythagorean theorem states that in any right triangle, the square of the length of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the lengths of the legs of the right triangle. The proof of this theorem is beyond the scope of our discussion.
In IF-THEN form, the Pythagorean theorem can be rewritten as: “If a triangle is right with hypotenuse c and legs a and b, then a² + b² = c²,” or equivalently: “If a triangle is right-angled, then its three sides satisfy a Pythagorean triple,” where a Pythagorean triple is a set of positive integers a, b, and c that fits the rule a² + b² = c².
Since the Pythagorean theorem was proven by Pythagoras, its contrapositive: “If triangle sides do not satisfy a Pythagorean triple, then the triangle is not right-angled” is obtained for free.
▄
Example 1.14 Let be a positive integer. Prove the result of the following theorem.
The implication in the theorem statement is equivalent to its contrapositive:
which is easier to prove. Let be an even integer, then can be written in the form , for some positive integer . It follows that . Thus, is even.
▄
Note that a proof by contraposition is different from the so-called direct proof. In a direct proof, we write a sequence of statements which are either evident or evidently follow from previous statements, and whose last statement is the desired conclusion (the one to be proved). For instance, in Example 1.14, when we proved the statement that “” we used a direct proof.
The converse of the implication P → Q is the implication Q → P. The truth table for the converse is given below.
P   Q   P → Q   Q → P
T   T   T       T
T   F   F       T
F   T   T       F
F   F   T       T
From the above truth table, it is clear that P → Q and Q → P are not logically equivalent.
Example 1.15 The converse of the proposition “If today is Monday, then Adam has a class today” is “If Adam has a class today, then today is Monday.” Suppose, though, that all Adam’s classes are on Mondays, Wednesdays, and Fridays; then the original implication is true while its converse is false. This illustrates why we found that P → Q and Q → P are not logically equivalent.
▄
The inverse of the implication P → Q is the implication ¬P → ¬Q. The truth table for the inverse is given below.
P   Q   ¬P   ¬Q   ¬P → ¬Q
T   T   F    F    T
T   F   F    T    T
F   T   T    F    F
F   F   T    T    T
It is clear that a conditional statement is not logically equivalent to its inverse. That is, P → Q and ¬P → ¬Q are not logically equivalent. It is also clear, from the truth table for the inverse and that for the converse, that the inverse and converse of any conditional statement are logically equivalent. That is, ¬P → ¬Q ≡ Q → P.
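These relationships can also be confirmed by brute force. The Python sketch below (ours, not from the book; the helper name implies is hypothetical) checks that an implication matches its contrapositive but not its converse or inverse, and that the converse and inverse match each other.

    from itertools import product

    def implies(p, q):
        return (not p) or q

    rows = list(product([True, False], repeat=2))

    print(all(implies(p, q) == implies(not q, not p) for p, q in rows))  # True: contrapositive
    print(all(implies(p, q) == implies(q, p) for p, q in rows))          # False: converse
    print(all(implies(p, q) == implies(not p, not q) for p, q in rows))  # False: inverse
    print(all(implies(q, p) == implies(not p, not q) for p, q in rows))  # True: converse vs. inverse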
Example 1.16 The inverse of the proposition “If today is Monday, then Adam has a class today” is “If today is not Monday, then Adam does not have a class today.”
▄
We now give the last logical operator, which is the double implication (IFF, which is read “if and only if”).
The following definition defines a double implication as the combination of an implication and its converse.
Definition 1.11 Let P and Q be arbitrary propositions. The double implication (also called the biconditional) of P and Q is written P ↔ Q (P IFF Q), which is true precisely when either P and Q are both true or P and Q are both false.
Example 1.17 If P is the proposition “You live in Russia” and Q is the proposition “You live in the coldest country in the world,” then P ↔ Q is the proposition “You live in Russia iff you live in the coldest country in the world.”
▄
The truth table for “P ↔ Q” is given below.

P   Q   P ↔ Q
T   T   T
T   F   F
F   T   F
F   F   T
Note that P ↔ Q means that P is both necessary and sufficient for Q. One of the chapter exercises asks to prove that .
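A standard equivalence of this kind, assumed here only for illustration, is that the biconditional agrees with the conjunction of an implication and its converse. The Python sketch below (ours, not from the book; the helper names iff and implies are hypothetical) checks this by brute force.

    from itertools import product

    def implies(p, q):
        return (not p) or q

    def iff(p, q):
        # True precisely when P and Q carry the same truth value.
        return p == q

    # P <-> Q agrees with (P -> Q) AND (Q -> P) on every assignment.
    print(all(iff(p, q) == (implies(p, q) and implies(q, p))
              for p, q in product([True, False], repeat=2)))  # True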
Table 1.1 A truth table for (P ∧ Q) ↔ ¬R.

P   Q   R   ¬R   P ∧ Q   (P ∧ Q) ↔ ¬R
T   T   T   F    T       F
T   T   F   T    T       T
T   F   T   F    F       T
T   F   F   T    F       F
F   T   T   F    F       T
F   T   F   T    F       F
F   F   T   F    F       T
F   F   F   T    F       F
To prove an “if and only if” statement, you have to prove two directions. To disprove an “if and only if” statement, you only have to disprove one of the two directions. For instance, letting be an integer, to prove that iff , we have to prove the two directions: First, we prove that if then . And second, we prove that if then (the proofs are trivial). To disprove iff , we only disprove the direction: If then (the disproof is trivial: take ).
Example 1.18 Construct a truth table for the compound proposition (P ∧ Q) ↔ ¬R.
Since we have three propositional variables, P, Q, and R, the number of rows in our truth table is eight (see Remark 1.4). The truth table for (P ∧ Q) ↔ ¬R is given in Table 1.1.
▄
We end this section with the following remark.
Remark 1.4 The number of rows in a truth table for a propositional formula with n variables is 2^n rows.
For instance, the numbers of rows in the truth tables for the propositional formulas and are , and rows, respectively.
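As a quick computational illustration of Remark 1.4 (a Python sketch of ours, not taken from the book), the 2^n truth assignments can be enumerated directly:

    from itertools import product

    n = 3
    # All 2**n truth assignments for n propositional variables (here n = 3, so 8 rows).
    for assignment in product([True, False], repeat=n):
        print(assignment)
    print("number of rows:", 2 ** n)  # 8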
In this section, we study special propositional formulas, which are tautologies, contradictions, and contingencies. Then we study how to negate compound propositions and show how to derive propositional formulas. Before we dive into these, we illustrate the order in which the logical operators will be applied.
Table 1.2 The precedence order of logical operations.

Operation(s)                Operator(s)   Precedence
Parentheses                 ( )           1
Negation (NOT)              ¬             2
Conjunction (AND)           ∧             3
Disjunction (OR, XOR)       ∨, ⊕          4
Implication (IF-THEN)       →             5
Double implication (IFF)    ↔             6
In arithmetic, multiplication has higher precedence than addition. Hence, the value of the expression is not 18, but 14. Similarly, in propositional logic, logical operators have operator precedence the same as arithmetic operators. We preserve the precedence order specified in Table 1.2.
Example 1.19 Determine the value of the propositional formula if the propositional variables , and have values of false, true, and false, respectively.
From Table 1.2, the operator has higher precedence than the operator . So . Plugging in the values of the variables, we have , which has the value of false.
▄
Note that, for longer expressions, we recommend using parentheses to group expressions and control which ones are evaluated first. For example, the propositional formula is written as , and the propositional formula is written as . Note also that if we have two logical operators with the same precedence, then their associativity is from left to right. For example, is written as , and is written as .
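As a generic illustration of why grouping matters (a Python sketch of ours, not a formula from the text), note that Python’s not/and/or follow the same relative precedence as ¬, ∧, and ∨ in Table 1.2, so different parenthesizations of the same variables can yield different values.

    p, q, r = True, False, True

    a = (not p and q) or r    # how "NOT p AND q OR r" groups under the precedence rules
    b = not (p and (q or r))  # a different reading forced by parentheses
    print(a, b)               # True False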
Tautologies, contradictions, and contingencies are propositional formulas of particular interest in propositional logic. We have the following definition.
Definitions 1.12 A propositional formula is called
a tautology if it is always true,
a contradiction if it is always false,
a contingency if it is neither a tautology nor a contradiction.
Let P be an arbitrary proposition. From Table 1.3, it is seen that P ∨ ¬P is a tautology, P ∧ ¬P is a contradiction, and P → ¬P is a contingency.
Table 1.3 A tautology, contradiction, and contingency.

P   ¬P   P ∨ ¬P   P ∧ ¬P   P → ¬P
T   F    T        F        F
F   T    T        F        T
Table 1.4 A truth table for the propositional formulas of Example 1.20.
T
T
T
T
T
T
T
F
F
T
T
T
F
T
F
T
T
F
F
F
F
F
T
T
Example 1.20 Use a truth table to determine if each of the following propositional formulas is a tautology, contradiction, or contingency.
.
.
.
From Table 1.4, it is seen that is a tautology and is a contingency. This answers items and . Item is left as an exercise for the reader.
▄
Last, but not least, it is worth mentioning that Definition 1.12 suggests an alternative definition of the notion of logical equivalence: Two propositions P and Q are logically equivalent if P ↔ Q is a tautology.
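The three classes of Definition 1.12 can be detected by exhaustive evaluation. The following Python sketch (ours, not from the book; the helper name classify is hypothetical) applies this brute-force test to three small formulas.

    from itertools import product

    def classify(formula, num_vars):
        # Evaluate the formula on all 2**num_vars assignments and classify the outcome.
        values = [formula(*vals) for vals in product([True, False], repeat=num_vars)]
        if all(values):
            return "tautology"
        if not any(values):
            return "contradiction"
        return "contingency"

    print(classify(lambda p: p or not p, 1))      # tautology
    print(classify(lambda p: p and not p, 1))     # contradiction
    print(classify(lambda p, q: p and not q, 2))  # contingency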
A proof by contradiction is a common technique of proving mathematical statements. In a proof by contradiction, to prove that a statement S is true, we begin by assuming that S is false and show that this assumption leads to a contradiction; this contradiction tells us that our original assumption is false, and hence the statement S is true. The following example illustrates how this proof technique is used.
Example 1.21 Prove that √2 is an irrational number.
Suppose, in the contrary, that is rational. It follows that there are two integers, say and , such that . Without loss of generality, we can assume that and have no common factors (otherwise, we can reduce the fraction and write it in its simplest form). Multiplying both sides of the equation by and squaring, we get . This means that is even. From the theorem stated in Example 1.14, itself must be even. Thus, for some integer . It follows that
and hence, after dividing by 2, we have . This means that . By the same theorem stated in Example 1.14, itself must be even. We have shown that both and are even, and hence they are both multiples of 2. This contradicts the fact that and have no common factors. This contradiction tells us that our original assumption is false, and hence must not be rational. Thus, is irrational. The proof is complete.
▄
In this part, we study how to negate negations, conjunctions, disjunctions, and implications.
Saying “Results of the numerical experiment are not inconclusive” means “Results of the numerical experiment are conclusive.” So, if you negate a negation, the two negations effectively cancel each other out. This leads us to the double negation law ¬¬P ≡ P, which was previously stated in this chapter.
Suppose your roommate made the statement “I will get an A in Software I and an A in Foundations I.” If your roommate’s prediction is not correct, then s/he did not get an A in Software I or did not get an A in Foundations I. This leads us to the first DeMorgan’s law: ¬(P ∧ Q) ≡ ¬P ∨ ¬Q.
Table 1.5 verifies the first DeMorgan’s law.
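The same verification can be done by exhaustive evaluation; the short Python sketch below (ours, not the book’s MATLAB code) confirms the first DeMorgan’s law on all four assignments.

    from itertools import product

    # First DeMorgan's law: NOT (P AND Q) is logically equivalent to (NOT P) OR (NOT Q).
    print(all((not (p and q)) == ((not p) or (not q))
              for p, q in product([True, False], repeat=2)))  # True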
Suppose your roommate made the statement “I will get an A in Software I or an A in Foundations I.” If your roommate’s prediction is not correct, then s/he did not get an A in Software I and did not get an A in Foundations I. This leads us to the second DeMorgan’s law: ¬(P ∨ Q) ≡ ¬P ∧ ¬Q.