Accessible and practical framework for machine learning applications and solutions for civil and environmental engineers
This textbook introduces engineers and engineering students to the applications of artificial intelligence (AI), machine learning (ML), and machine intelligence (MI) in relation to civil and environmental engineering projects and problems, presenting state-of-the-art methodologies and techniques to develop and implement algorithms in the engineering domain.
Through real-world projects like analysis and design of structural members, optimizing concrete mixtures for site applications, examining concrete cracking via computer vision, evaluating the response of bridges to hazards, and predicting water quality and energy expenditure in buildings, this textbook offers readers in-depth case studies with solved problems that are commonly faced by civil and environmental engineers.
The approaches presented range from simplified to advanced methods, incorporating coding-based and coding-free techniques. Professional engineers and engineering students will find value in the step-by-step examples that are accompanied by sample databases and codes for readers to practice with.
Written by a highly qualified professional with significant experience in the field, Machine Learning for Civil & Environmental Engineers includes valuable information on the topics listed in the table of contents below.
This textbook is a must-have reference for undergraduate/graduate students to learn concepts on the use of machine learning, for scientists/researchers to learn how to integrate machine learning into civil and environmental engineering, and for design/engineering professionals as a reference guide for undertaking MI design, simulation, and optimization for infrastructure.
Page count: 817
Year of publication: 2023
M. Z. Naser, PhD, PE
School of Civil and Environmental Engineering & Earth Sciences (SCEEES) and Artificial Intelligence Research Institute for Science and Engineering (AIRISE), Clemson University, Clemson, SC, USA
Copyright © 2023 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data
Names: Naser, M. Z., author. | John Wiley & Sons, publisher.
Title: Machine learning for civil & environmental engineers : a practical approach to data-driven analysis, explainability, and causality / M. Z. Naser.
Description: Hoboken, New Jersey : Wiley, [2023] | Includes bibliographical references and index. | Summary: “Synopsis: The theme of this textbook revolves around how machine learning (ML) can help civil and environmental engineers transform their domain. This textbook hopes to deliver the knowledge and information necessary to educate engineering students and practitioners on the principles of ML and how to integrate these into our field. This textbook is about navigating the realm of data-driven ML, explainable ML, and causal ML from the context of education, research, and practice. In hindsight, this textbook augments ML into the heart of engineering. Together, we will go over the big ideas behind ML. We will ask and answer questions such as, what is ML? Why is ML needed? How does ML differ from statistics, physical testing, and numerical simulation? Can we trust ML? And how can we benefit from ML, adapt to it, adopt it, wield it, and leverage it to overcome many, many of the problems that we may face? This book is also about showing you, my dear reader, how to amplify your engineering knowledge with a new tool. A tool that is yet to be formally taught in our curriculum. A tool that many civil and environmental engineering departments and schools may not fully appreciate; yet are eager to know more about!”-- Provided by publisher.
Identifiers: LCCN 2023003262 (print) | LCCN 2023003263 (ebook) | ISBN 9781119897606 (hardback) | ISBN 9781119897620 (pdf)
Subjects: LCSH: Machine learning. | Civil engineering--Data processing. | Environmental engineering--Data processing.
Classification: LCC Q325.5 .N37 2023 (print) | LCC Q325.5 (ebook) | DDC 006.3/1--dc23/eng20230506
LC record available at https://lccn.loc.gov/2023003262
LC ebook record available at https://lccn.loc.gov/2023003263
Cover Image: © DALL·E 2/OpenAI
Cover Design: Wiley
Set in 9.5/12.5pt STIXTwoText by Integra Software Services Pvt. Ltd, Pondicherry, India
To the future of civil and environmental engineering
Cover
Title page
Copyright page
Dedication
Preface
About the Companion Website
1 Teaching Methods for This Textbook
1.1 Education in Civil and Environmental Engineering
1.2 Machine Learning as an Educational Material
1.3 Possible Pathways for Course/Material Delivery
1.3.1 Undergraduate Students
1.3.2 Graduate Students and Post-docs
1.3.3 Engineers and Practitioners
1.3.4 A Note
1.4 Typical Outline for Possible Means of Delivery
Chapter Blueprint
Questions and Problems
References
2 Introduction to Machine Learning
2.1 A Brief History of Machine Learning
2.2 Types of Learning
2.3 A Look into ML from the Lens of Civil and Environmental Engineering
2.4 Let Us Talk a Bit More about ML
2.5 ML Pipeline
2.5.1 Formulating a Hypothesis
2.5.2 Database Development
2.5.3 Processing Observations
2.5.4 Model Development
2.5.5 Model Evaluation
2.5.6 Model Optimization
2.5.7 Model Deployment
2.5.8 Model Management (Monitoring, Updating, Etc.)
2.6 Conclusions
Definitions
Chapter Blueprint
Questions and Problems
References
3 Data and Statistics
3.1 Data and Data Science
3.2 Types of Data
3.2.1 Numerical Data
3.2.2 Categorical Data
3.2.3 Footage
3.2.4 Time Series Data*
3.2.5 Text Data*
3.3 Dataset Development
3.4 Diagnosing and Handling Data
3.5 Visualizing Data
3.6 Exploring Data
3.6.1 Correlation-based and Information-based Methods
3.6.2 Feature Selection and Extraction Methods
3.6.3 Dimensionality Reduction
3.7 Manipulating Data
3.7.1 Manipulating Numerical Data
3.7.2 Manipulating Categorical Data
3.7.3 General Manipulation
3.8 Manipulation for Computer Vision
3.9 A Brief Review of Statistics
3.9.1 Statistical Concepts
3.9.2 Regression
3.10 Conclusions
Definitions
Chapter Blueprint
Questions and Problems
References
4 Machine Learning Algorithms
4.1 An Overview of Algorithms
4.1.1 Supervised Learning
4.1.2 Unsupervised Learning
4.2 Conclusions
Definitions
Chapter Blueprint
Questions and Problems
References
5 Performance Fitness Indicators and Error Metrics
5.1 Introduction
5.2 The Need for Metrics and Indicators
5.3 Regression Metrics and Indicators
5.4 Classification Metrics and Indicators
5.5 Clustering Metrics and Indicators
5.6 Functional Metrics and Indicators*
5.6.1 Energy-based Indicators
5.6.2 Domain-specific Metrics and Indicators
5.6.3 Other Functional Metrics and Indicators
5.7 Other Techniques (Beyond Metrics and Indicators)
5.7.1 Spot Analysis
5.7.2 Case-by-Case Examination
5.7.3 Drawing and Stacking
5.7.4 Rational Vetting*
5.7.5 Confidence Intervals*
5.8 Conclusions
Definitions
Chapter Blueprint
Questions and Problems
Suggested Metrics and Packages
References
6 Coding-free and Coding-based Approaches to Machine Learning
6.1 Coding-free Approach to ML
6.1.1 BigML
6.1.2 DataRobot
6.1.3 Dataiku
6.1.4 Exploratory
6.1.5 Clarifai
6.2 Coding-based Approach to ML
6.2.1 Python
6.2.2 R
6.3 Conclusions
Definitions
Chapter Blueprint
Questions and Problems
References
7 Explainability and Interpretability
7.1 The Need for Explainability
7.1.1 Explainability and Interpretability
7.2 Explainability from a Philosophical Engineering Perspective
7.3 Methods for Explainability and Interpretability
7.3.1 Supervised Machine Learning
7.3.2 Unsupervised Machine Learning
7.4 Examples
7.4.1 Surrogates*
7.4.2 Global Explainability
7.4.3 Local Explainability
7.5 Conclusions
Definitions
Questions and Problems
Chapter Blueprint
References
8 Causal Discovery and Causal Inference
8.1 Big Ideas Behind This Chapter
8.2 Re-visiting Experiments
8.3 Re-visiting Statistics and ML
8.4 Causality
8.4.1 Definition and a Brief History
8.4.2 Correlation and Causation
8.4.3 The Causal Rungs
8.4.4 Regression and Causation
8.4.5 Causal Discovery and Causal Inference
8.4.6 Assumptions Required to Establish Causality
8.4.7 Causal Graphs and Graphical Methods
8.4.8 Causal Search Methods and ML Packages
8.4.9 Causal Inference and ML Packages
8.4.10 Causal Approach
8.5 Examples
8.5.1 Causal Discovery
8.5.2 Causal Inference
8.5.3 DAG from CausalNex
8.5.4 Modifying CausalNex’s DAG with Domain Knowledge
8.5.5 A DAG Similar to a Regression Model
8.6 A Note on Causality and ML
8.7 Conclusions
Definitions
Questions and Problems
Chapter Blueprint
References
9 Advanced Topics (Synthetic and Augmented Data, Green ML, Symbolic Regression, Mapping Functions, Ensembles, and AutoML)
9.1 Synthetic and Augmented Data
9.1.1 Big Ideas
9.1.2 Conservative Interpolation
9.1.3 Synthetic Minority Over-sampling Technique (SMOTE)
9.1.4 Generative Adversarial Networks (GANs) and Triplet-based Variational Autoencoder (TVAE)
9.1.5 Augmented Data
9.1.6 A Note
9.2 Green ML
9.2.1 Big Ideas
9.2.2 Example
9.2.3 Energy Perspective
9.2.4 A Note
9.3 Symbolic Regression
9.3.1 Big Ideas
9.3.2 Examples
9.3.3 Eureqa
9.3.4 TuringBot
9.3.5 HeuristicLab
9.3.6 GeneXproTools*
9.3.7 Online Interface by MetaDemoLab
9.3.8 Python
9.3.9 Eureqa
9.3.10 MetaDemoLab
9.3.11 Python
9.3.12 GeneXproTools*
9.3.13 Eureqa
9.3.14 MetaDemoLab
9.3.15 HeuristicLab
9.3.16 A Note
9.4 Mapping Functions
9.4.1 Big Ideas
9.4.2 Concept of Mapping Functions
9.4.3 Approach to Mapping Functions
9.4.4 Example
9.4.5 A Note
9.5 Ensembles
9.5.1 Big Ideas
9.5.2 Examples
9.6 AutoML
9.6.1 Big Ideas
9.6.2 The Rationale and Anatomy of CLEMSON
9.6.3 Example
9.6.4 A Note
9.7 Conclusions
Definitions
Questions and Problems
Chapter Blueprint
References
10 Recommendations, Suggestions, and Best Practices
10.1 Recommendations
10.1.1 Continue to Learn
10.1.2 Understand the Difference between Statistics and ML
10.1.3 Know the Difference between Prediction via ML and Carrying Out Tests and Numerical Simulations
10.1.4 Ask if You Need ML to Address the Phenomenon on Hand
10.1.5 Establish a Crystal Clear Understanding of Model Assumptions, Outcomes, and Limitations
10.1.6 Remember that an Explainable Model Is Not a Causal Model
10.1.7 Master Performance Metrics and Avoid the Perception of False Goodness
10.1.8 Acknowledge that Your Model Is Likely to Be Biased
10.1.9 Consult with Experts and Re-visit Domain Knowledge to Identify Suitable Features
10.1.10 Carefully Navigate the Trade-offs
10.1.11 Share Your Data and Codes
10.2 Suggestions
10.2.1 Start Your Analysis with Simple Algorithms
10.2.2 Explore Algorithms and Metrics
10.2.3 Be Conscious of Data Origin
10.2.4 Emphasize Model Testing
10.2.5 Think Beyond Training and Validation
10.2.6 Trace Your Model Beyond Deployment
10.2.7 Convert Your ML Models into Web and Downloadable Applications
10.2.8 Whenever Possible, Include Physics Principles in ML Models
10.3 Best Practices
10.3.1 Avoid the Use of “Small” and Low Quality Data
10.3.2 Be Aware of the Most Commonly Favored ML Algorithms
10.3.3 Follow the Most Favored Model Development Procedures
10.3.4 Report Statistics on Your Dataset
10.3.5 Avoid Blackbox Models in Favor of Explainable and Causal Models (Unless the Goal Is to Create a Blackbox Model)
10.3.6 Integrate ML into Your Future Works
Definitions
Questions and Problems
References
11 Final Thoughts and Future Directions
11.1 Now
11.2 Tomorrow
11.2.1 Big, Small, and Imbalanced Data
11.2.2 Learning ML
11.2.3 Benchmarking ML
11.2.4 Standardizing ML
11.2.5 Unboxing ML
11.2.6 Popularizing ML
11.2.7 Engineering ML
11.3 Possible Ideas to Tackle
11.4 Conclusions
References
Index
CHAPTER 01
Table 1.1 Possible course...
CHAPTER 03
Table 3.1 Example of various...
Table 3.2 Statistical insights...
Table 3.4 Results of Spearman...
Table 3.5 Results of Spearman...
Table 3.6 Common filtering methods...
Table 3.7 Normalization vs. standardization...
CHAPTER 05
Table 5.1 List of commonly...
Table 5.2 List of commonly...
Table 5.3 List of commonly...
Table 5.4 List of commonly...
Table 5.5 Comparison of metrics...
Table 5.6 Suggested metrics and...
CHAPTER 07
Table 7.1 Understandability in common...
Table 7.2 Properties of a...
Table 7.3 Splits of algorithms...
CHAPTER 08
Table 8.1 Causality based on...
Table 8.2 Algorithmic and ML...
Table 8.3 Metrics used for...
Table 8.4 Algorithmic and ML...
Table 8.5 Threshold values of...
Table 8.6 Characteristics of the...
Table 8.7 Results of analysis...
Table 8.8 Results of analysis...
Table 8.9 Results of analysis...
Table 8.10 Data compiled by...
CHAPTER 09
Table 9.1 Comparison between real...
Table 9.2 Comparison between real...
Table 9.3 Evaluation of ML...
Table 9.4 Performance of the...
Table 9.5 Comparison against the...
Table 9.6 List of selected...
CHAPTER 02
Figure 2.1 Infographic displaying...
Figure 2.2 Publication trends...
Figure 2.3 Publication trends...
Figure 2.4 Pyramid of...
Figure 2.5 Typical ML...
Figure 2.6 Illustration of...
Figure 2.7 Typical response...
Figure 2.8 Illustration of...
Figure 2.9 Sample of...
Figure 2.10 Illustration of...
CHAPTER 03
Figure 3.1 Data types...
Figure 3.2 (Cont’...
Figure 3.3 Demonstration of...
Figure 3.4 (Cont’...
Figure 3.5 Demonstration of...
Figure 3.6 (Cont’...
Figure 3.7 Demonstration of...
Figure 3.8 Correlation analysis...
Figure 3.9 Demonstration of...
Figure 3.10 Illustration of...
Figure 3.11 Visualizing the...
Figure 3.12 Steps to...
Figure 3.13 Logistic function...
CHAPTER 04
Figure 4.1 A snippet...
Figure 4.2 Layout of...
Figure 4.3 Layout of...
Figure 4.4 (Cont’...
Figure 4.5 Illustration of...
Figure 4.6 Illustration of...
Figure 4.7 TensorFlow neural...
Figure 4.8 (Cont’...
Figure 4.9 Demonstration of...
Figure 4.10 (Cont’...
Figure 4.11 Illustration of...
Figure 4.12 Illustration of...
Figure 4.13 Comparison between...
Figure 4.14 Illustration of...
Figure 4.15 Algorithms falling...
Figure 4.16 (Cont’...
Figure 4.17 Demonstration of...
Figure 4.18 Illustration of...
Figure 4.19 Sketch of...
Figure 4.20 Illustration of...
Figure 4.21 Illustration of...
Figure 4.22 Illustrations of...
Figure 4.23 Sample of...
Figure 4.24 Demonstration of...
Figure 4.25 Illustration of...
Figure 4.26 (Cont’...
Figure 4.27 Flowchart of...
Figure 4.28 Flowchart of...
Figure 4.29 Flowchart of...
Figure 4.30 (Cont’...
Figure 4.31 (Cont’...
CHAPTER 05
Figure 5.1 Comparison between...
Figure 5.2 Anatomy of...
Figure 5.3 (Cont’...
Figure 5.4 Illustration of...
Figure 5.5 Sample of...
Figure 5.6 Sample of...
Figure 5.7 Illustration of...
CHAPTER 06
Figure 6.1 Snippet of...
Figure 6.2 Snippet of...
Figure 6.3 Snippet of...
Figure 6.4 Snippet of...
Figure 6.5 Clarifai (July...
Figure 6.6 Snippet of...
Figure 6.7 Snippet of...
CHAPTER 07
Figure 7.1 Illustration of...
Figure 7.2 Illustration of...
Figure 7.3 (Cont’...
Figure 7.4 Comparison between...
Figure 7.5 Comparison between...
Figure 7.6 (Cont’...
Figure 7.7 Comparison between...
Figure 7.8 Comparison between...
Figure 7.9 Comparison between...
Figure 7.10 Feature importance...
Figure 7.11 Summary plot...
Figure 7.12 Partial dependence...
Figure 7.13 Force plot...
Figure 7.14 LIME visualization...
Figure 7.15 Explainable tree...
Figure 7.16 Explainable tree...
Figure 7.17 Comparison of...
CHAPTER 08
Figure 8.1 Sample DAG...
Figure 8.2 Samples of...
Figure 8.3 Examples of...
Figure 8.4 Marginal vs...
Figure 8.5 Difference between...
Figure 8.6 Illustration of...
Figure 8.7 Illustration of...
Figure 8.8 Graphs...
Figure 8.10 Causal model...
Figure 8.11 CausalNex model...
Figure 8.12 Domain-knowledge...
Figure 8.13 Simplified model...
Figure 8.14 DAGs for...
CHAPTER 09
Figure 9.1 Our data...
Figure 9.2 Illustration of...
Figure 9.3 (Cont’...
Figure 9.4 Insights into...
Figure 9.5 (Cont’...
Figure 9.6 Comparison between...
Figure 9.7 Comparison of...
Figure 9.8 Karva-expression...
Figure 9.9 Comparison between...
Figure 9.10 (Cont’...
Figure 9.11 (Cont’...
Figure 9.12 Illustration of...
Figure 9.13 Typical response...
Figure 9.14 Outline of...
Figure 9.15 Validation...
Figure 9.17 Feature importance...
Figure 9.18 Comparison between...
Figure 9.19 Flowchart of...
Figure 9.20 Comparison via...
Figure 9.21 (Cont’...
Figure 9.22 GUI of...
CHAPTER 10
Figure 10.1 Space of...
Figure 10.2 Examples of...
Figure 10.3 Insights into...
Figure 10.4 Frequent ML...
Figure 10.5 Frequently used...
Hello, friend.
Elliot Alderson (Mr. Robot)1
The theme of this textbook revolves around how machine learning (ML) can help civil and environmental engineers transform their domain. This textbook hopes to deliver the knowledge and information necessary to educate engineering students and practitioners on the principles of ML and how to integrate these into our field. This textbook is about navigating the realm of data-driven ML, explainable ML, and causal ML in the context of education, research, and practice. Ultimately, this textbook aims to bring ML into the heart of engineering.
Together, we will go over the big ideas behind ML. We will ask and answer questions such as, what is ML? Why is ML needed? How does ML differ from statistics, physical testing, and numerical simulation? Can we trust ML? And how can we benefit from ML, adapt to it, adopt it, wield it, and leverage it to overcome many, many of the problems that we may face?
This textbook is also about showing you, my dear reader, how to amplify your engineering knowledge with a new tool. A tool that is yet to be formally taught in our curriculum. A tool that many civil and environmental engineering departments and schools may not fully appreciate;2 yet are eager to know more about!
I have come to the conclusion that ML is one of those ideas that we constantly hear about but do not really seem to fully understand, know what it is, what it does, or why we need it. Here is a prime example: our tablets and phones have plenty of ML built into them. More often than not, you will hear someone say, “Oh, I hate the new update. They changed the algorithm. Again!”3 I have always struggled4 with the following question: how do you write about something [ML] that your audience “thinks” it knows but, in reality, does not seem to grasp? This made writing this book an equally challenging and unique experience.
By now, you might have noticed that I am breaking free from the typical and expected formality, as well as the high technicality, associated with engineering writing. Frankly speaking, my goal was not to write a heavy technical textbook5 per se. I felt a smoother tone might be more inviting for an engineering resource with a particular theme on ML that is designed to serve as a textbook for its audience. While I admittedly do so, I still maintain an above-average degree of formality, as I believe that such formality helps convey key ideas and discussions.
A technical tone is also great when describing facts, methodologies, and results. Such a tone sounds reassuring – even in the muddy waters of ML. Personally, I wanted the experience of navigating the pages of this textbook to feel more like we are learning this material together, for the first time, fostering an atmosphere of trust and positivity.
In some places, you may find that I have struck a balanced tone and rhythm, while in others, my alter ego may prevail, and I will try to walk you through my thinking process and then answer any questions that I imagine might be lurking in the shadows of your subconscious.6 In the latter, you will likely see my reliance on footnotes.7 I like footnotes, as I feel they offer a place where I can extend the conversation8 with a bit less formality.
I also like quotes,9 and I have purposefully used some across my chapters to help deliver my thoughts or as anchors to introduce and solidify concepts. I will also be adding some short storylines that I believe will come in handy to explain some notions and ideas. You might find these at the beginning of chapters to help reinforce their themes.10 These are true stories that I have had the pleasure of experiencing personally or seeing firsthand.
I would like this textbook to be a journey. I have designed it as one that more or less mimics the way I learned and researched ML.11 I believe that we need to learn ML as a methodology and philosophy rather than as an applied science or a coding exercise. The journey I am anticipating is conversational, spanning intriguing questions and covering the quest to answer them.
This seems like a good spot to start this conversation. I am M.Z. Naser,12 an assistant professor at the School of Civil and Environmental Engineering & Earth Sciences and the Artificial Intelligence Research Institute for Science and Engineering (AIRISE) at the College of Engineering, Computing and Applied Sciences at Clemson University.13 I was fortunate enough to complete my undergraduate14 and MSc degrees at the American University of Sharjah, Sharjah, United Arab Emirates, and my PhD and post-doc at Michigan State University, East Lansing, USA. Surprisingly, my research area started with structural fire engineering15 and then propagated into ML.16 I have had the pleasure of authoring/co-authoring over 150 papers and four books – most of which revolve around structural fire engineering and ML.17 I actively serve on a number of international editorial boards and conferences, as well as codal building committees (in the American Society of Civil Engineers, American Concrete Institute, Precast/Prestressed Concrete Institute, and International Federation for Structural Concrete,18 to name a few). At the time of this writing, I am humbled to chair the ASCE Advances in Technology committee.
Around 2018, I was working on developing a new and simple method to predict the fire resistance of reinforced concrete (RC) columns.19 I was hoping to develop a unified method that, unlike existing methods that do not seem to converge,20 works in a one-step approach.21 The objective was simple: to create a method that does not suffer from the same symptoms and limitations of existing methods.
I started exploring and, of course, crossed paths with ML. I built an ML model, and upon examination, this model outperformed every existing method (whether codal, accepted, or published) I tested it against. This was intriguing. How does the ML model perform so well? Why does it perform so well?
It made sense to pursue and investigate how and why the model works well.22 My rationale is simple. Knowing the how and why allows us to see what the model sees, and if we knew the answers to these two questions, then it is plausible that we could come up with a new theory to predict the fire resistance of RC columns! Or perhaps to distill new knowledge that we have not seen before. I was excited.
The answer to the how is simple. Many ML models are nonparametric and are designed to identify hidden patterns in data.23 Those patterns are the something that the ML model sees that we, as well as our traditional methods, may not see well.
The answer to the why is not so simple. It turns out that there is a whole field dedicated to explaining how ML models predict the way they do. So, if we could learn the reasoning behind model predictions, we could start to better grasp the hidden patterns.24 Even better, start to see such patterns without the need for ML! To get closer to the essence25 of why the phenomena we are interested in occur!
Surprisingly, explanations from ML models may or may not agree with our domain knowledge or physics! This is both concerning and problematic! Thus, we ought to know if a model’s how and why are correct; if so, it is possible that the model understands the true cause and effect behind the patterns it sees.26 Guess what? There is a whole domain dedicated to establishing causality, too. We will visit this domain as well.
By now, I hope you can appreciate the overarching goal behind this textbook. This book is about converging on a practical approach to the above elemental questions.
This textbook comprises a series of chapters. Each chapter covers one big idea about ML. The chapters are organized in a manner that resembles the most common thought process for adopting ML into a problem within civil and environmental engineering. These chapters also contain examples and tutorials. Some of the examples are short, while others may span a couple of chapters. You will find that I tend to favor illustrative examples of simple components (i.e., beams and columns), as these are easy to visualize and can be thought of as metaphors that are likely to extend to other sub-fields as well.27 Collectively, this book draws inspiration and examples from most of the sub-areas belonging to civil and environmental engineering.
Since this is a textbook, it made sense to include sections on definitions, blueprints, as well as questions and problems at the end of each chapter. The definitions section summarizes the main terms highlighted in each chapter, and the blueprints lay out a visual representation of each chapter’s concepts.
The questions can be answered by going over the discussion presented in each corresponding chapter.28 On the other hand, the problems may require you to build ML models via coding-free/coding-less platforms or via simple coding. For these, I have provided you with ample codes (scripts) and databases that can ease you into building models, programming, and coding. These can be conveniently accessed through my website.29 My goal from these questions and problems is to invite you to learn and practice ML a bit more.
Speaking of programming and coding: I did not intend for this book to be another source on learning ML by coding.30 As such, I did not dedicate much space to specifically teaching you how to program/code. My thinking is that 1) I would like you to learn about ML as a methodology – not as a coding exercise, and 2) there are plenty of resources out there that teach coding. This book is about ML in civil and environmental engineering; it is not about coding civil and environmental engineering problems using ML. This distinction, while subtle, is necessary and warranted to make clear.
Now, let us talk about each chapter.
Chapter 1 describes a series of strategies, approaches, and pathways for the teaching methods I envision/propose for this textbook. This chapter also hopes to navigate the persistent challenge of modernizing the civil and environmental engineering curriculum. I am setting this chapter as a foundation for students and faculty who are interested in ML but may not yet be proficient in it. This chapter could be interesting to advanced ML users as well.
Chapter 2 formally introduces us to ML. This chapter starts with a brief history of ML from computer science as well as civil and environmental engineering points of view. We will also be going over a systematic framework (pipeline) to build and tailor proper ML models.
Chapter 3 is where we discuss the role of data and statistics in ML. Here, we will chat a bit about where to get data, what to do once we collect such data, which data handling techniques a typical ML pipeline requires, and so on.
Chapter 4 is all about the different ML algorithms. These are the main tools that will help us apply and create ML models to solve many of our problems. This chapter presents key theoretical and practical concepts behind supervised and unsupervised algorithms.
Chapter 5 is perhaps the last theoretical chapter before we get to practice31 ML. In this chapter, we cover principles and notions behind ML performance fitness indicators and error metrics needed to quantify the adequacy of our models.
Chapter 6 is where we practice ML! We will be introduced to the realm of coding-free and coding-based ML. We will also explore a few exciting platforms, problems, and analyses. You will get to see and enjoy plenty of examples and tutorials.
Chapter 7 goes beyond traditional blackbox ML modeling to showcase the role of explainable and interpretable ML. In this chapter, we first dive into explainability methods and then revisit the examples and tutorials we have seen and practiced in Chapter 6, adding the necessary tools (such as SHAP and partial dependence plots) to unbox the blackbox.
Chapter 8 introduces causality, along with its principles from the lens of ML and civil and environmental engineering. This chapter also goes over methods and algorithms to identify the data generating process, the process of creating causal ML models, and causally inferring knowledge from such models.
Chapter 9 covers advanced topics. These topics span a multitude of concepts that could extend a bit beyond the duration of a course designed for beginners. I see these topics as more applicable to research projects, or to a guest lecture toward the end of a typical course, to attract and retain students interested in further exploring ML’s other big ideas.
Chapter 10 presents recommendations, suggestions, and best practices that can be of aid in ML-based investigations. I find these informative and applicable to many, many of our problems. They can also give you a glimpse into the state of our domain and how our colleagues are utilizing and adopting ML.
The main body of the textbook concludes with Chapter 11. This chapter is designed to display some of the possible future trends that could potentially benefit from ML. Some of these ideas may be trimmed and tuned to be tackled in future research problems and projects.
The Appendix is divided into sub-sections where I list and share what I feel and believe will be some valuable information, sources, techniques, and additional examples that I did not get to cover within the main chapters of this book.
A few section titles are marked with the snowflake32 symbol *. These sections are advanced and may be beneficial for well-versed users. I opted to keep them here to push the envelope a bit and, perhaps, intrigue you. Feel free to skim through these to get a feel for what is available out there.
Here are a few things that I would like you to try.
Please feel free to reach out to me with your thoughts on this book, ideas for future examples/problems, or chapters. Do not be afraid to share some of the typos or inconsistencies you might come across, and do share the codes you have developed.
Advancements in ML are consistently rapid. It is amazing to see the number of publications that hypothesize, create, or apply new ML methods. Follow up with the authors of the cited references – one thing about academics is that many would love to chat about their research. Many ML enthusiasts and developers have Twitter and LinkedIn accounts.33 Ask them questions too. Follow trending topics on Google Scholar and social media. There are many YouTube channels that stay up to date on this front.34
Subscribing to scholarly journal updates is another effective way to stay in touch with the latest ML-based papers in one’s field, especially if you happen to be a student whose school’s library offers access to the publishers of such journals.35 In parallel, seek to attend conferences and stay up-to-date with society updates.
While you do not need to become an advanced coder/programmer, remember that it is never too late to learn something new. Wondering where to start? Start at any point! Pick a programming language.36 There are plenty of free resources or paid services that you can use. These might include webbooks, forums, or MOOCs.37
I fought the temptation to present ML from the context of Natural Language Processing (NLP)38 or heavy-duty computer vision in this edition.39 I am confident that these topics, along with many others, will find a home in future editions, as well as in other books dedicated to civil and environmental engineering.
To keep our conversation going, I may recall a few of my dialogues, events, or correspondences and use these to guide some of my current discussions. I may even recall a few examples of good and poor uses of ML that I came across over the past few years.
Many individuals40 and organizations41 were kind enough to keep me motivated, knowingly or unknowingly. You will find those cited throughout the pages of this book. I thank you.
Much of my pursuit hopes to follow in the footsteps of my mentors,42 Rami Hawileh, Jamal Abdalla, Hayder Rasheed, and of course, the guru, Venkatesh Kodur.43 I wish I could be their student again. Life was much easier and simpler back then. If you happen to cross their path one day, my advice is to learn from them as much as you can.
To my students: Haley Hostetter, Deanna Craig, Mohammad (Moe) Al-Bashiti, Arash Tapeh, Aditya Daware, Alireza Ghasemi, and Mohammad Abedi. Your hard work, support, dedication, publications, and help in putting some of the scripts and charts together made this journey brighter and are evident in this book. You are brilliant, and I thank you!
To Clemson University and my home department and faculty (many of whom are mentioned within the pages of this book). Thank you for your endless support!
To John Wiley & Sons, Inc. This book would not have seen the light of day were it not for your belief in this project and your assistance along the way. My thanks go to Kalli Schultea, Amy Odum, Sarah Lemore, Britta Ramaraj, and Isabella Proietti.
Last but not least. My family, parents, siblings, nieces, and nephews – thank you for your unconditional love (despite my many, many shortcomings). My thanks also go to Pirate and Snow, my pets, for keeping me company during this journey. Thanks to Ena for making life a lil bit more interesting (and, possibly, less dramatic)! Special thanks go to my wife, Jasmine. You are my anchor and gym partner. You constantly remind me that life is not just about research or writing papers; as you say, “life is a lil bit more than that!” You are always supportive. I continue to learn from you. You are the source of my inspiration and motivation!
And to those forgotten, yet intentionally or unintentionally not listed here: I thank you.
Finally, this book is not an individual effort. It is a community effort. We learn together. We rise together. We advance together.
1
Huntington-Klein, N. (2021). The Effect: An Introduction to Research Design and Causality. Boca Raton: Chapman and Hall/CRC. https://doi.org/10.1201/9781003226055.
2
Cunningham, S. (2021). Causal Inference: The Mixtape. Yale University Press. ISBN-13: 978-0300251685.
3
Alterio, M. and McDrury, J. (2003). Learning Through Storytelling in Higher Education: Using Reflection & Experience to Improve Learning. https://doi.org/10.4324/9780203416655.
1
eps1.0_hellofriend.mov is the title of the pilot episode of Mr. Robot.
2
At this moment in time, only a handful of departments/schools cover ML in a civil and environmental engineering curriculum. I do not know why. I can only speculate! However, a few years from now, I expect that ML will be a cornerstone in our curriculum.
3
This reminds me of the following: “I have a cue light I can use to show you when I’m joking, if you like.” – TARS (Interstellar, 2014).
4
More often than not, especially whenever I work on research proposals with a ML component.
5
I have had the pleasure of writing some and reading/collecting many myself.
6
I was new to ML not too long ago! I still have many burning questions that I would like to answer.
7
Hello!
8
Shout out to Nick Huntington-Klein, the author of The Effect: An Introduction to Research Design and Causality [1].
9
Those extracted from scenes from movies/shows/songs/poems (especially those in Arabic). I did not know that these could be added into textbooks until I read Causal Inference: The Mixtape by Scott Cunningham [2].
10
Some might call this learning by storytelling [3]!
11
Of course, with much less focus on failures.
12
I mainly go by my last name (thanks to the good folks at Michigan State University who started this trend). M.Z. is for Mohannad Zeyad. Note the presence of the two N’s in my first name.
13
In Clemson, South Carolina, 29634.
14
I usually start every one of my courses with a short introduction and then add that I graduated with a GPA of 2.69 and had many semesters where my GPA dropped below 2.20 (I was repeatedly ranked at the bottom of my class). If you happen to be an above C student, then congratulations on your hard work! For those of you who happen to share my fortune, I truly understand. Do not give up!
15
A niche area of structural engineering where we focus on how fire (thermal effects) affects materials and structures. A real conversation starter and icebreaker!
16
I must acknowledge Prof. Ghassan Abu Lebdah for introducing me to this area.
17
As always, I am thankful for all of my collaborators and students, who without them, many of these works would not have seen the light.
18
Or simply, Fib (Fédération internationale du béton) – thanks to Thomas Gernay!
19
At that point, a few tools existed – as you will see in an example covered in a later chapter.
20
We know why this happens. It is because each method is built on certain assumptions and has a list of specific conditions. Therefore, a convergence across existing methods is not guaranteed, nor expected.
21
One-step approach is a procedure that allows the prediction/evaluation of an outcome in one step.
22
To be frank, the question I had in mind then was, what does the ML model see that engineers/other methods do not see?
23
This is the data-driven part of the title.
24
This is the explainability part of the title.
25
Formally, this is called the data generating process.
26
Here is the causality part of the title.
27
Two more reasons, 1) I am a visual learner and value visual examples, and 2) some examples were part of my research work which, as we have learned, stems from structural fire engineering.
28
In some rare instances, I may refer you to a scholarly article or a website.
29
www.mznaser.com, as well as the companion website to this book: www.wiley.com/go/Naser/MachineLearningforCivilandEnvironmentalEngineers
30
More on this in the first chapter.
31
Here is the practical approach part of the title.
32
The snowflake is a throwback to Michigan!
33
They are there for you to follow. I do!
34
This is 2023, so things might change a bit by the time you get your hands on this textbook. We are barely recovering from a couple of years of pandemic.
35
I cannot offer an exhaustive list of recommended journals. The following are some that come to mind: Computer-Aided Civil and Infrastructure Engineering, Automation in Construction, Water Research, ASCE Journal of Computing in Civil Engineering, Engineering Structures, Journal of Building Engineering, Fire Technology, and many more.
36
If you are looking for a sign, here is one. Pick R or Python.
37
Massive open online courses. I have a list of recommended reads in the Appendix.
38
Can come in very handy in many sub-fields, especially in construction management and planning.
39
If you are wondering how this book came to be, then let me walk you through the following short story. Kalli Schultea reached out to me regarding a seminar I was presenting at the 2021 ASCE convention titled, Modernizing Civil Engineering through Machine Intelligence. She was wondering if I was interested in writing a book on ML, and I said, absolutely yes (since at the same time, I was actually working on developing a course on ML at Clemson University to be delivered in the Spring of 2023). The timing made perfect sense! The actual project took off from there.
40
Talk about a healthy competition! Whether by reading their publications or answering my/their questions. Special thanks go to those who invited me to present about ML (e.g., Wojciech Węgrzyński, Nan Hu, Negar Elhami Khorasani, Barbara Lane, Ting Lin, etc.). Also, thanks to the creators of many of the tools that I have showcased throughout this book. They were kind enough to lend me their tools, including but not limited to: Eduardo García Portugués, Victor Powell, Lewis Lehe, Minsuk Kahng, Naftali Harris, Adam Harley, Jonas Greitemann, Stephanie Yee, Sasha Luccioni, etc.
41
American Concrete Institute, American Society of Civil Engineers (on many occasions!), National Fire Protection Association, International Association for Fire Safety Science, BigML, DataRobot, Dataiku, Exploratory, Clarifai, and others.
42
It is not surprising to note that we are still in direct touch after all those years!
43
Remember my GPA story? Well, I was rejected from the graduate program at AUS and then again at MSU, and Prof. Hawileh and Prof. Kodur were kind enough to override the school’s decision. Thank you!
This book is accompanied by a companion website:
www.wiley.com/go/Naser/MachineLearningforCivilandEnvironmentalEngineers
This website includes:
Appendix
Datasets and codes
PowerPoint slides
Lecture notes
Homework answers
Deciding is for machines because it’s computational. Choosing is very human.
Joseph Weizenbaum
This brief chapter outlines some suggestions for adopting this textbook into a civil and environmental engineering curriculum. I try to remain flexible in describing my suggestions1 as I understand the complexity of integrating new courses and/or modules into a dense curriculum – just like ours. I hope these can be of aid.2
There is a rich body of literature that tackles a variety of teaching and learning methods targeting engineering students [1–3]. If we are to distill this literature into its fundamental facets, we arrive at the elemental conclusion that many engineering students can be described as active learners. Such learners are likely to be experimentalists and tend to value cause-and-effect demonstrations. This presents an exciting challenge to drafting this book – and to be quite honest, it remains a challenge that I try to overcome every time I present ML to a new or junior audience. Before jumping into some possible means of delivery, please allow me to walk you through the following.
Foundational methods within our domain are developed systematically and with methodical consistency. Most of the time, such methods have a clear approach. We start from first principles and work our way up to a goal (i.e., a solution). Suppose we think of designing a system (say, a load-bearing element or a transportation network). This design exercise requires knowing the first principles associated with this particular system. Such principles often entail a set of provisions/guidelines, formulae, or tables/charts that can be applied to realize a proper design.3
We value first principles because they are authentic and transparent. They describe the logic that governs a problem (say, the design of aforenoted systems). They are compact and easy to use – for the most part. They could also, and perhaps most importantly, be verified via laboratory testing or some other means [4–6]. All of these qualities are essential to establish the merit and validity of adopting new methods into our domain.4
Take a look at Equation 1.1. This is a fundamental equation that ties the moment capacity (M, or simply resistance) of a W-shaped steel beam to its primary factors (i.e., the plastic section modulus, Z, and the yield strength, fy, of the grade of the structural steel used in the same beam).
Equation 1.1 (M = Z × fy) visually represents the main parameters governing the sectional capacity of a W-shaped steel beam under bending. Equation 1.1 also expresses the functional relationship between M, Z, and fy (i.e., the resistance equals the product of Z and fy). More importantly, this equation shows a perceived causal look into the sectional capacity; that is, an increase in either Z and/or fy is expected to increase the resistance of steel beams.
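For readers who prefer to see the numbers, here is a minimal Python sketch of Equation 1.1; the section property and yield strength below are illustrative values for a hypothetical W-section, not ones taken from a design table.

# Plastic moment capacity of a W-shaped steel beam, M = Z * fy (Equation 1.1).
# Illustrative values only; consistent units (mm^3 and MPa) give M in N*mm.
Z = 2.06e6                        # plastic section modulus of a hypothetical W-section, mm^3
fy = 345.0                        # yield strength of the steel grade, MPa
M = Z * fy                        # moment capacity (resistance), N*mm
print(f"M = {M / 1e6:.1f} kN*m")  # converts N*mm to kN*m (710.7 kN*m for these inputs)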
While the discussion on Equation 1.1 may seem trivial (especially for many of the advanced readers of this book), extending the same discussion to more complex problems is not as straightforward.5
Let us think about the collapse of buildings as a phenomenon. At this point in time, we do not have an elegant equation/formula that can represent such a phenomenon. First of all, this is a hefty problem with a vast number of involved parameters (some of which we know and many, many others that we do not). If we are to speculate on the form of an equation that can represent this phenomenon, then opting for a formula with a highly nonlinear form would not be a far stretch. Evidently, complex problems are often represented with complex procedures. I hope you can now appreciate my emphasis on visualization.6 It is much easier to deal with problems once you visualize/describe them elegantly.
In the grand scheme of our domain, we lack equations similar to Equation 1.1 for countless problems. On a more positive note, we have other means to visualize and tackle complex phenomena.
When I was a graduate student, I was introduced to numerical simulations. I was fortunate enough to learn ANSYS [7], with which I was able to model and simulate complex problems. If you take a minute to think about it, finite element (FE) models are similar to Equation 1.1. Such models visually present and describe the logic that governs phenomena. They are compact, intuitive, and can also be verified via laboratory testing, as well as differential equations and other means of verification.
Numerical models are a form of visualization that can be applied to simple problems (e.g., moment capacity of a beam) and to more complex problems (i.e., the collapse of buildings). These models implicitly retain our preference for first principles and the notion of transparent and verifiable methods. These are some of the qualities that are elemental to the success of ML adoption and education in our domain.
When developing a new course, or educational material, a necessary question often arises: what are the learning objectives for such a course? Overall, the learning objectives are statements of what we intend to cover and instill during an educational experience.
I believe that the objectives7 of an accompanying ML-themed course to this textbook can be grouped under three components: 1) introduce the principles of ML and contrast those against traditional methods favored by civil and environmental engineers (i.e., the scientific method, statistical, numerical, and empirical analysis, etc.), 2) present case studies to pinpoint the merit space where ML can be most impactful,8 and 3) provide a platform for our students to practice, collaborate, develop, and create ML solutions for civil and environmental engineering problems.
A deep dive into the above shows that I did not mention the terms coding or programming as I do not see ML as a coding/programming subject or exercise. In my eyes, ML is both an instrument and a philosophy and hence needs to be taught as one.
I believe that ML is best taught in a manner similar to our foundational courses, such as statics or mechanics. Thus, for the most part, I do not support the notion that the majority of our students or engineers are expected to become programmers – just as I do not support the push to convert our students into FE experts or modelers. On the contrary, I believe that the majority of our engineers are expected to be familiar with ML, just as much as they are familiar with setting up physical tests, statistical methods, and numerical models.
I once heard a wise person say that a good way to forecast where one area is trending is to look at its analog in different domains. A look into the recent rise of interest in big and small data research, and the success of ML in parallel engineering fields that often grow in line with civil and environmental engineering (e.g., mechanical engineering, industrial engineering, etc.), implies that ML is soon to find a permanent residence in our domain. As such, providing elementary and advanced educational material covering ML to civil and environmental engineering undergraduates and graduate students, as well as practicing engineers, will be essential to the growth of our domain.
From a practical perspective, adding a new course or a series of specialized courses to an already heavy civil engineering curriculum can be challenging.9 However, if such a course is offered as an elective, it would ease the burden on faculty, students, and curriculum development committees.
Thus, the path of least resistance to creating a simple means of delivery for ML-based education material is to introduce a new course. This course can build upon a faculty member's experiences with ML. In such a pathway, a faculty member can tailor the layout of a course plan with modules and chapters, each of which is dedicated to a specific ML concept. The table of contents and chapters of this textbook could potentially serve as an initial guide for such a course.10
At its surface, this textbook is designed to be a companion for a course on ML. The early chapters of this book cover introductory ML themes, and later chapters go further to outline specifics of how ML can be adopted and applied as a solution to some of the common problems faced in our domain.11
Beyond the above simple solution, I also believe there are multiple ways to deliver an ML-themed course.12 I will cover three herein. The first is through what I refer to as teaching ML by programming, the second is teaching ML by application, and the third is a hybrid of the two.
Teaching ML by programming relies heavily on teaching students how to code and program ML algorithms and scripts. You will notice that I am intentionally avoiding this practice of solely teaching ML by programming. This decision stems from two viewpoints: 1) my hope to change the inertia toward treating ML as a coding experience as opposed to an engineering experience,13 and 2) the rise of new platforms that provide coding-free ML solutions (e.g., BigML, DataRobot, Dataiku, and Exploratory).
Such platforms allow users to create, manipulate, and apply data and ML algorithms via friendly interfaces (i.e., without any programming experience).14 These platforms are driven by the desire to teach15 ML by application and provide free access to students/faculty. Hence, they are valuable for the many of us who struggle with the lack of programming in our curricula. I foresee these platforms as the future of our domain, so I am inclined to present them in this book.16 The good news is that we can also create our own platform, as I will show you in a later chapter.
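To contrast the coding-free route with its coding-based counterpart, here is a minimal sketch of what a coding-based workflow can look like in Python. It assumes scikit-learn is available and uses a small synthetic dataset in place of any specific example from this book.

# A minimal coding-based ML workflow: build a dataset, split it, fit a model, evaluate it.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.random((200, 4))                                            # 200 synthetic observations, 4 features
y = X @ np.array([1.5, -2.0, 0.7, 3.0]) + rng.normal(0, 0.1, 200)   # synthetic target with mild noise

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)                                         # train on the training split
print("R2 on the test split:", round(r2_score(y_test, model.predict(X_test)), 3))

Coding-free platforms carry out essentially these same steps, only behind a graphical interface.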