Understand the real power of AI and its ability to shape the future for the better. AI For Social Good: Using Artificial Intelligence to Save the World bridges the gap between the current state of reality and the incredible potential of AI to change the world. From humanitarian and environmental concerns to advances in art and science, every area of life stands poised to make a quantum leap into the future. The problem? Too few of us really understand how AI works and how to integrate it into our policies and projects. In this book, Rahul Dodhia, Deputy Director of Microsoft's AI for Good Research Lab, offers a nontechnical exploration of artificial intelligence tools: how they're built, what they can and can't do, and the raw material that teaches them what they "know." Readers will also find an inventory of common challenges they might face when integrating AI into their work. You'll also learn about:

* The potential for AI to solve longstanding issues and improve lives
* How to tap into the power of AI, regardless of the size of your organization
* How AI works, and how to communicate with AI scientists to create new solutions
* The real risks of implementing AI, and how to avoid potential pitfalls
* Real-life examples and stories that demonstrate how teams of AI specialists, project managers, and subject matter experts can build remarkable products

Written for anyone who is curious about AI, and especially useful for policymakers, project managers, and leaders who work alongside AI, AI For Social Good explains how AI scientists create artificially intelligent systems and how AI can be used ethically (or unethically) to transform society. You'll also find a discussion of how governments can become more flexible, helping regulations keep up with the fast pace of change in technology.
Page count: 371
Publication year: 2024
Cover
Table of Contents
Title Page
Copyright
Dedication
Acknowledgments
About the Author
Introduction
1 A Brief History of Artificial Intelligence
How Innovators Throughout History Paved the Way for Modern AI: From Babbage to Turing
The Emergence of Modern AI
From Optimism to Pessimism: The Story of the AI Winter
The Rise of Expert Systems
AI Revival: A Fitful Resurgence
The Birth of Modern AI
AI Today
Driver of the 21st Century Economy
Final Thoughts
References
2 AI Explained: A Non-Technical Guide
Definition of AI
Machine Learning
How Machines Learn
Neural Networks
Common Deep Learning Models
Final Thoughts
References
3 AI for Good
Responding to Natural Disasters
Food and Water Security
Medicine
Education
Final Thoughts
References
4 AI for Good: Pursuit of Scientific Knowledge
Biodiversity
Proteomics
Astronomy
Final Thoughts
References
5 When Good AI Goes Bad
The Surveillance Society
Magnifying Societal Ills
Amplifying Discrimination and Social Biases
Final Thoughts
References
6 Putting Safeguards Around AI
The Need for Ethical Development
Safety and Security
Accountability and Transparency
Data Protection and Privacy
Balancing Innovation and Regulation
Economic and Social Impact
AI Governance
Final Thoughts
References
7 Getting the Best Out of Your AI Team
Roles in an AI Team
A Three-Way Conversation
Setting Expectations About AI
Case Study: Breast Cancer Example
Project Scoping
The Reality of Running AI: Cost, Connectivity, and Context
Understanding the Role of Environmental Context in AI Deployment
Technology Resources
Data: Quantity and Quality, Annotations, Biases
Modeling
Final Thoughts
References
8 The Future
New Technologies
AI Teams in the Near Future
AI-Specific Jobs
Societal Change
Final Thoughts
References
Index
End User License Agreement
Chapter 1
Figure 1.1 Drawing of Charles Babbage
Figure 1.2 Ada Lovelace, watercolor painting, possibly by Alfred Edward Chal...
Figure 1.3 John von Neumann
Figure 1.4 Alan Turing
Figure 1.5 From left to right: Yann LeCun, Geoffrey Hinton, Yoshua Bengio. A...
Chapter 2
Figure 2.1 John Watson conditioning Little Albert to be afraid of furry crea...
Figure 2.2 Ramon y Cajal's drawing of neurons in the cerebellum, 1899...
Chapter 3
Figure 3.1 Satellite image of farms in The Nature Conservancy's CHEF region...
Figure 3.2 Photo of Matebe hydroelectric powerplant that was built and is ru...
Figure 3.3 MRI images of breast cancer
Chapter 4
Figure 4.1 Camera trap image of an ocelot
Figure 4.2 Camera trap image of a wild turkey
Figure 4.3 Camera trap images of hard-to-detect birds in the Amazon rainfore...
Figure 4.4 Camera trap images of hard-to-detect birds, with bounding boxes s...
Figure 4.5 An example of the complex structure of proteins. The 3D structure...
Chapter 5
Figure 5.1 An example of a drone using thermal imagery to detect people
Figure 5.2 The image was created on Midjourney by the artist Jason Allen and...
Figure 5.3 Deepfake of the Pope wearing a fashionable Balenciaga jacket and ...
Chapter 6
Figure 6.1 Example of UL Marks that may be affixed on electrical devices, th...
Chapter 7
Figure 7.1 Example of breast cancer images. The top row shows MRI scans that...
An inspiring overview of what machine learning and artificial intelligence can already do to make the world better, and what can be done to use these tools more effectively.
—Andrew Gelman, Professor of Statistics and Political Science, Columbia University
Our evolution has been an ascent toward increasing consciousness, with tools, fire, domestication of animals, and agriculture as stepping-stones along the way. Computers, the internet, and now AI have emerged rapidly to become a major part of the technological landscape. Rahul Dodhia's book AI for Social Good is a comprehensive exploration of this transformative field, which, for technologically challenged non-experts like myself, brilliantly demystifies this exciting field. It leaves you with the hope that AI will be harnessed for the good of the planet and used ethically and responsibly. This book is a must-read for anyone interested in understanding AI's past and present, as well as its profound influence on the future of humanity on planet Earth.
—Louise Leakey, Paleoanthropologist, Turkana Basin Institute, Kenya
In his important new book, leading AI practitioner Rahul Dodhia takes us on a highly accessible whirlwind tour of how AI works, what it can and cannot do, and why it sometimes goes off the rails. You will find inspirational stories on how AI can be used for good and cautionary tales that temper your hubris. Bringing his considerable experience to bear, Dodhia's compelling nuts-and-bolts discussion of how to set up teams that make the most of this potentially transformative technology is a must-read for anyone leading AI-based projects. This book has something in it for everyone seeking to understand and make the most of this rapidly evolving tool.
—Jacob N. Shapiro, Director, Empirical Studies of Conflict Project, Princeton University
Just as the dawn of the nuclear age simultaneously shaped our hopes and our greatest fears for the future of the planet in the last century, so too for artificial intelligence in our century. The world's deeply vulnerable environment and its communities are at a crossroads: one path leading to ecosystem collapse, triggering extreme poverty and violence, the other toward balance and recovery. Our ability and determination to choose the right path are profoundly linked to the choices we make on the use of this nascent technology. In his writing, Rahul provides the critical thinking to channel our choices on the use of AI into a force for long overdue positive change.
—Emmanuel de Merode, Director of Virunga National Park
AI for Social Good is a compelling book that explores the responsible use of AI as a force for positive and transformative change. It offers a valuable guide for those interested in leveraging AI to tackle the urgent challenges of our time. Against the backdrop of our rapidly changing world and the unprecedented threats we face, the book provides concrete examples of how AI is already playing a crucial role in enhancing our understanding of, preparedness for, and response to global challenges. These examples range from the development of early warning systems for droughts and rapid disaster response programs to aiding decision-making in support of food and water security and the creation of innovative medical solutions.
Rahul, a prominent voice in the emerging AI for Social Good movement, underscores the potential of this rapidly advancing technology as an “essential ingredient in our efforts to create a better world for future generations.”
His book offers an insightful overview of AI and its evolution, providing tangible examples of its diverse applications and ability to drive positive change. It underscores the critical importance of ethics and regulation in the field of AI and provides a glimpse into the future technologies that will further propel its applications and impact.
In an era where AI is both celebrated and met with significant apprehension, AI for Social Good serves as an excellent guide, offering practical advice, real-world examples, and a compelling vision for harnessing AI to address our most pressing challenges. It is an invaluable resource for anyone navigating the rapidly evolving AI landscape in the pursuit of societal betterment. It serves as a resounding call to action, encouraging individuals to become part of the movement for positive change.
—Inbal Becker-Reshef, Program Director of NASA Harvest
RAHUL DODHIA
Copyright © 2024 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Names: Dodhia, Rahul, author.
Title: AI for social good / Rahul Dodhia.
Description: Hoboken, New Jersey : Wiley, [2024] | Includes index.
Identifiers: LCCN 2023047408 (print) | LCCN 2023047409 (ebook) | ISBN 9781394205783 (cloth) | ISBN 9781394205837 (adobe pdf) | ISBN 9781394205790 (epub)
Subjects: LCSH: Artificial intelligence—Moral and ethical aspects.
Classification: LCC Q334.7 .D634 2024 (print) | LCC Q334.7 (ebook) | DDC 174/.90063—dc23/eng/20231103
LC record available at https://lccn.loc.gov/2023047408
LC ebook record available at https://lccn.loc.gov/2023047409
Cover Design: C. Wallace
Cover Image: © EpicEtch / Adobe Stock
Author Photo: Courtesy of the Author
Dedicated to my late mother, whose memory still guides me, and my father, who taught me compassion and to care for the world.
MANY OF THE examples in the book come from projects led by past and present members of the AI for Good Research Lab. Their intelligence and dedication to improving the world around them inspire me every day. For the work they have done, thanks to Anthony Ortiz, Zhongqi Miao, Caleb Robinson, Meghana Kshirsagar, Simone Fobi Nsutezo, Juan Lavista Ferres, Shahrzad Gholami, Felipe Oviedo, Thomas Roca, Akram Zaytar, Gilles Hacheme, Lucas Meyer, Girmaw Abebe Tadesse, Md Nasir, Mayana Pereira, Yixi Xu, Darren Tanner, Amrita Gupta, Will Fein, Tammy Glazer, Anusua Trivedi, Siyu Yang, Ming Zhong, Hyojin Song, Sumit Mukherjee, and John Kahan.
Thanks also to Dan Morris, who personifies the AI for Good ethos, and Cameron Birge, who helped bring about several of the projects discussed in the book. And special thanks to the larger AI for Good Lab at Microsoft, of which the Research Lab is a part, for continually fueling the momentum of the AI for Social Good movement.
Finally, thanks are also due to my daughter, Arya, for her inquisitive nature that keeps me on my toes and her frequent wrestling matches, providing much-needed screen breaks. I also owe gratitude to my wife, Annette de Soto, who reviewed this book and offered questions, revisions, and a close reading of the text honed from too many years at the University of Chicago.
RAHUL DODHIA HEADS the AI for Good Research Lab at Microsoft, based in Redmond, Washington. He leads a team of AI researchers dedicated to addressing global challenges using artificial intelligence. His work focuses on sustainability, humanitarian action, and health issues, paying special attention to climate adaptation in the Global South.
Prior to his current role, he led machine learning teams at several corporations, including eBay, Amazon, and Expedia. He also served at the NASA Ames Research Center, where he applied foundational research on human memory to address safety concerns in general aviation and space flight.
Rahul earned a BA in Mathematics, summa cum laude, from Brandeis University. His journey into the world of artificial intelligence began during his graduate studies in the psychology department at Columbia University, where he conducted extensive research on human memory and decision-making models, ultimately earning his PhD.
Rahul grew up in Thika, Kenya, a place that has seen profound ecological change. In addition to his research interests, he was a competitive sheepherder with his beloved Border collie, Artoo Dogtoo.
IN 2023, THE world was horrified by the earthquake that devastated Turkey and Syria. Like many people around the world, my team at Microsoft, the AI for Good Research Lab, wondered how we could help from so far away. Having previously utilized satellite imagery to identify areas of destruction, the Lab sprang into action, providing maps of areas in need to the authorities. When the historic town of Lahaina in Hawaii was engulfed in flames later that year, we supported the American Red Cross with maps containing localized estimates of destruction, enabling them to disburse aid in record time to those most in need. Meanwhile, in drought- and locust-stricken Kenya, we collaborated with The Nature Conservancy to identify smallholder farms and devise irrigation solutions. In the United States, as disinformation endangered lives and democracy, we developed tools to assess and trace the origins of false information. These initiatives all had in common new computing tools developed within the last few years: artificial intelligence that mimicked the neuronal processes of living brains.
At Microsoft's AI for Good Research Lab, my team dedicates itself daily to tackling humanity's global challenges using artificial intelligence. Despite numerous instances of AI being employed for positive purposes, many remain unaware of this side of the story of AI. Inspired by the work of the Lab, I wrote AI for Social Good for those looking to grasp the basics of AI and its real-world applications that effect positive change in society. The book clarifies AI concepts and offers a lucid and direct explanation of the technology and its numerous applications for positive impact. Whether you are new to the AI world or already working with AI, I hope this book will enhance your understanding and spark innovative applications of AI for the greater good.
Interest in artificial intelligence surged in 2023, catalyzed by the remarkable launch of ChatGPT. For generations that grew up with narratives of robots and computers with human-like intelligence, it appeared as if the future had finally arrived. However, admiration for large language models like ChatGPT has been dampened by their inclination to lead people astray. Concerns about AI's rapid, unchecked development have become louder, and respected AI researchers and leaders in technology have joined in with warnings that technology is advancing at a pace greater than our ability to absorb it. The speed at which AI is evolving makes it difficult to accurately predict its outcomes, underscoring the urgent need for a comprehensive set of guidelines to navigate this uncharted territory. Many of us are now advocating for the incorporation of ethical principles at the heart of AI development.
All of this is unfolding against a backdrop of significant transformation in the global ecosystem. Beyond perennial issues such as employment and livelihoods, exacerbated by fears of AI usurping them, we are now also confronted with the challenges posed by climate change. Natural disasters may be growing more devastating, and food and water insecurity are rearing their ugly heads. This multitude of problems can seem overwhelming, but AI offers some hope. We may be on the verge of discovering new solutions to these problems.
A movement that can be termed AI for Social Good has arisen to counter the dystopian narrative of AI that builds on fears of economic setbacks and global war. It manifests in various ways, from nonprofit organizations to private sector projects, from academic conferences to online communities. It is not an organized movement where members pay dues and have newsletters. But it captures the spirit of people who are troubled by what they see coming in the future, and it has been embraced by dedicated young people with a burning desire to be a part of the solution.
The book is structured to be read sequentially, but each chapter stands on its own so readers with particular interests can jump around. Here is a brief summary of each chapter.
Chapter 1 traces a brief history of artificial intelligence, how it arose from the early days of computing in the 19th century to its emergence, in fits and starts, within the last few decades. This foundation for understanding AI's development highlights key individual achievements while acknowledging the collective efforts of their peers.
Chapter 2 is a textbook-style exposition of the components that constitute AI. It introduces the reader to terms commonly used by practitioners of AI, such as neural networks, machine learning, and large language models. The history of AI from the previous chapter is supplemented with more stories of how the technical aspects of AI came into being.
Chapter 3 highlights AI's potential to drive positive change, helping the reader envision novel ways in which AI can be harnessed to address the pressing issues of our time. Several examples of how AI is used for social good are given, with an emphasis on humanitarian and environmental issues. It explores how newly available data, such as satellite and drone images, and recent advancements, like foundation models for language, create opportunities for breakthroughs in the challenges plaguing society.
Chapter 4 continues the discussion of AI for social good but focuses more on scientific endeavors. By showcasing the real-world applications and implications of AI in these crucial scientific domains, the chapter aims to enlighten the reader on the indispensable role of AI in addressing contemporary scientific challenges and advancing human knowledge. Examples from biodiversity, astronomy, and proteomics illustrate this impact. The reader is not expected to have prior knowledge of these fields, and Chapter 4 introduces their significance.
Chapter 5 dispels the notion of an AI utopia. It addresses the potential pitfalls of AI and explains the fears raised by prominent technologists, again with several examples. We look particularly at how AI can supercharge propaganda and disinformation and how societal biases are mirrored in AI, a reflection of our own inclinations and actions. The chapter aims to foster a more nuanced understanding of the potential repercussions of AI, urging the reader to approach its development and deployment with a balanced perspective and a critical eye.
Chapter 6 elaborates on one of the book's central themes: AI development should be based on a core of ethics and agreed-upon standards. Regulation is necessary to mitigate the negative implications of AI. History shows us the need for reining in the more negative aspects of humanity, a sort of societal superego to balance the id's baser instincts. This chapter explores the nuances of regulating AI by examining case studies and global approaches. It calls for international collaboration to establish guidelines protecting individual rights while allowing controlled experimentation. Core themes include transparency, consent, data security, algorithmic fairness, and human oversight for high-stakes decisions. Though an imperfect process, mindful governance of AI via laws, industry standards, and social norms is vital to realizing its benefits without unacceptable risks.
In Chapter 7, I draw on my experience running AI teams to offer practical advice for constructing effective teams, bridging knowledge gaps, and aligning technical capabilities with real-world utility. Developing impactful AI requires a team with diverse expertise, effective collaboration, and core roles like the project manager, domain expert, and AI expert who each contribute unique perspectives. Frequent communication and feedback loops ensure the AI model matches real-world requirements. However, challenges inevitably arise regarding data quality, model accuracy, and ethical implications. A thoughtful, human-centric approach is crucial, with human oversight playing a pivotal role in deploying reliable AI.
Chapter 8, the last chapter, looks ahead to future technologies and the immense changes that AI might wreak on our society. We are merely at the beginning of our journey with a new form of intelligence, with technologies already in the pipeline, such as quantum computing and DNA storage, that could radically redefine what it means to be human.
This book aspires to disseminate innovative ideas and serve as a source of inspiration for those eager to harness the power of AI to address some of the most critical challenges facing society today. If this book leaves you eager for more, a follow-up volume will go deeper into the topics covered here. Authored by several members of the AI for Good Research Lab, it will be a non-technical but in-depth discussion of the projects the lab has undertaken.
“Artificial intelligence is growing up fast, as are robots whose facial expressions can elicit empathy and make your mirror neurons quiver.”
– Diane Ackerman
“The science of today is the technology of tomorrow.”
– Edward Teller
IN 1997, IBM's Deep Blue computer famously defeated world chess champion Garry Kasparov in a six-game match. This event marked a major milestone in the development of AI, as it demonstrated that a machine could outthink a human in a complex game with countless possible moves. The jubilation felt on achieving such a feat was mixed with hand-wringing that the age of machines was about to eclipse the age of humankind. Kasparov himself could not believe a machine could have defeated him and insisted this was a modern version of the Mechanical Turk, an 18th-century con in which a small person hid inside a supposed automaton and played chess.1,2 Despite these expressions of disbelief, the match captured the world's attention. Chess was, after all, an ancient game highly revered as an expression of human mental ability. This event sparked a new interest in machines that could think, adapt, and even outshine humans.
Nearly seven decades since the prefix “artificial” was attached to intelligence, we live on the cusp of one of the largest disruptions in human society. When the CEO of Google, Sundar Pichai, calls AI one of humanity's most profound inventions,3 and other tech luminaries such as Bill Gates argue, “The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone,”4 and Elon Musk goes so far as to deem it potentially more dangerous than nuclear weapons,5 it is hard to dismiss the furor around this new technology as hyperbole. We may indeed be living in a time of profound change.
Artificial intelligence's rise and awesome potential have been a topic of discussion among tech insiders for quite some time. Now, with the emergence of ChatGPT, a much greater slice of humanity is witnessing firsthand the impact of this technology in their daily lives. If there are skeptics questioning the impact and abilities of artificial intelligence, their doubts are certainly being challenged.
AI manifests in our lives in the form of self-driving cars, virtual assistants such as Alexa and Siri, and unprecedented information via search engines. It is even more prevalent behind the scenes, powering medical assistants, farming, and disaster response. AI developments are quickly transforming the way we work, communicate, and even think. The invention of the automobile changed landscapes and economies, while radio and telephone transformed communications and society. AI is poised to join these ranks of major disruptors in the coming years. We are witnessing the birth of a transformative force that will change how we make decisions and perceive the world around us.
However, the implications of this technological transformation are not without their challenges. There are concerns over privacy, security, and job displacement. Evidence shows that AI reflects some of society's worst habits, such as racial and societal bias. As AI continues to become more sophisticated and more integral to our lives, individuals and society must carefully consider its ethical implications. With the proper safeguards in place, the undeniable benefits of AI could usher in a new era of progress and prosperity for all.
Artificial intelligence was long the province of fiction, fantasy, folklore, and myth. Inanimate objects developing human-like intelligence and abilities beyond our own are common in the stories we share. From figures such as mystical golems in Jewish tales and enigmatic homunculi of the Middle Ages to the evil computer HAL in 2001: A Space Odyssey and the iconic droids in Star Wars, these legends reflect our curiosity and desire to create intelligence in our image.
Next, we trace the broad outlines of AI's emergence, from early conceptualizations of universal calculating machines to the first manifestations of what we today call AI.
The first practical steps toward AI happened in the last 200 years. Charles Babbage (Figure 1.1) emerged as a seminal figure in the history of AI, revered by many as the progenitor of this field. Babbage, a brilliant mathematician and inventor, possessed an indomitable spirit, a penchant for spectacle, and an insatiable curiosity that led him to his brilliant achievements in computing.6,7 His fascination with automatons mimicking human intelligence was sparked at age eight when his mother whisked him away to a museum of scientific artifacts and wonders. There, he saw an artful creation—a dancer cradling a bird—so exquisitely crafted that it appeared lifelike. From that moment forward, Babbage's destiny was irrevocably entwined with the pursuit of crafting machines capable of emulating human behavior.
Figure 1.1 Drawing of Charles Babbage
Credit: The Illustrated London News / Wikimedia Commons / Public Domain.
In his late 20s in the early 1800s, Babbage designed the first mechanical computer, the Difference Engine. This groundbreaking machine could perform complex mathematical calculations, such as producing tables of logarithms.8,9 Indulging his showman tendencies, Babbage delighted in donning extravagant attire as he showcased his creation to the venerable Royal Society in London and other esteemed venues across England. Tales of his eccentricities, ranging from chasing musicians away from his abode when they impinged on his concentration to his fastidious craftsmanship, where gears and tools personally ground by him remained in use long after his death, embellished the legend of this extraordinary man.
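The Difference Engine's underlying trick was the method of finite differences: for any polynomial, the differences between successive table values eventually become constant, so the table can be extended indefinitely using additions alone, exactly what Babbage's gear trains were built to do. The principle can be sketched in a few lines of modern Python (a present-day illustration of the idea, not Babbage's own notation):

```python
def difference_table(values, n_more):
    """Extend a table of a degree-d polynomial using only additions,
    the principle behind Babbage's Difference Engine.
    `values` must hold at least d+1 initial entries, enough for the
    successive differences to become constant."""
    # Build the columns of differences until a single constant remains.
    diffs = [list(values)]
    while len(diffs[-1]) > 1:
        prev = diffs[-1]
        diffs.append([b - a for a, b in zip(prev, prev[1:])])
    # Extend the table: add each column's entry into the one above it.
    table = list(values)
    lead = [col[-1] for col in diffs]  # rightmost entry of each column
    for _ in range(n_more):
        for i in range(len(lead) - 2, -1, -1):
            lead[i] += lead[i + 1]
        table.append(lead[0])
    return table

# Squares: from 0, 1, 4 the engine-style additions yield 9, 16, 25, ...
print(difference_table([0, 1, 4], 3))
```

Tables of logarithms are not polynomials, but over short ranges they are well approximated by one, which is why the method sufficed for the navigational and astronomical tables of Babbage's day.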
The Difference Engine was never completed during Babbage's lifetime. It wasn't until the 1990s that it was finally built according to Babbage's design. It is on display at the London Science Museum, and a second one remains in the possession of a private donor who financed its creation.
Although Babbage never saw his design take life, it inspired his later, more audacious creation, the Analytical Engine. This was a much more ambitious endeavor, surpassing the Difference Engine in its versatility. Babbage intended it to be a general-purpose computing machine that could be instructed to perform any type of calculation. He envisioned it formulating tables of mathematical values that would in turn inform calculations such as the dates of eclipses. Crucially, the Analytical Engine encompassed the fundamental duality of modern computers: the ability to both store and process vast troves of data.
Regrettably, quarrels with his engineers and the drying up of funding meant that the Analytical Engine, like the earlier Difference Engine, was never built. It nevertheless stands as a major milestone in the history of computing. It was the first machine designed to be truly programmable. And it also helped to popularize the idea of artificial intelligence.
Now recognized as the world's first computer programmer, Ada Lovelace (Figure 1.2) collaborated with Charles Babbage on his prototypes. In the history of science and technology, the contributions of women have often been overlooked or underrepresented. But Ada Lovelace, daughter of the Romantic poet Lord Byron and Anne Isabella Milbanke, left her mark as indelibly as any male pioneer. Born in the 19th century, when women's opportunities were limited, she defied societal norms and fervently pursued her passion for mathematics and science. Her mother was responsible in large part for Ada's education: seeking to shelter Ada from her father's infamous instabilities, she ensured that Ada got a firm grounding in logic and mathematics.10
Figure 1.2 Ada Lovelace, watercolor painting, possibly by Alfred Edward Chalon in 1840
Credit: Science Museum Group / Wikimedia Commons / Public Domain.
When she was 17, Ada Lovelace met Charles Babbage at the house of Mary Somerville, the Scottish scientist and mathematician. Somerville had recognized a keen scientific intelligence in Lovelace and consciously brought about this intellectual match. Lovelace and Babbage became collaborators.
Her insight into Babbage's Analytical Engine went beyond his own ideas. She recognized that the machine could do more than crunch numbers: it could be a tool for creativity, capable of generating complex outputs. Her notes included an algorithm for calculating Bernoulli numbers, which is widely regarded as the world's first computer program.
Unfortunately, like many other brilliant minds, she died young, succumbing to illness at age 36. But her legacy in computer science guides researchers and engineers to this day.11
John von Neumann (Figure 1.3) is another of the most prominent figures who laid the foundations of computer science. Hailing from Budapest, Hungary, von Neumann was a child prodigy, a versatile intellect with a hunger for mathematics and physics. His unconventional, multidisciplinary approach to study made many skeptical of his seriousness, and, like his predecessor Charles Babbage, he gained a reputation as a maverick.12
Figure 1.3 John von Neumann
Credit: Los Alamos National Laboratory / Wikimedia Commons / Public Domain.
Von Neumann's extraordinary intellect carried him to a degree in chemical engineering from ETH Zurich and a doctorate in mathematics from the University of Budapest. Legend has it that when professors found his doctoral work so profound and complex that they could not fully understand it, they asked him to simplify it; with characteristic conviction, he firmly declined. To his mind, if they failed to comprehend the magnitude of his ideas, they lacked the qualification to pass judgment upon them. His dissertation nevertheless significantly impacted the field of mathematics and was later published as a monograph.13
Von Neumann moved on to the University of Berlin, where he continued to baffle his peers and students. Many stories of his time there illustrate his brilliance. Once, a student in a statistics lecture asked him a challenging question about a complex mathematical calculation. Without skipping a beat, von Neumann proceeded to solve the problem mentally and provided the answer within seconds. His lectures were often marked by brilliant expositions, which the students would then spend hours deciphering amongst themselves.
In the 1930s, he landed a teaching appointment at Princeton University. There, his genius would shine most brilliantly, and his pioneering contributions would forever transform the field of computing. Today, we take for granted the CPU as the brain of a computer and memory where computer programs are stored. Von Neumann was the genius who formulated these concepts and helped make them a reality in early machines such as UNIVAC, one of the first computers ever built.14
For decades, the Turing test was held up as the holy grail of computing and artificial intelligence. It was an answer to the question of how we would know when machines had become intelligent. Mathematician Alan Turing (Figure 1.4) proposed his eponymous test, though he called it the Imitation Game.15,16 The test consists of questions posed to both a machine and a human. If the answers are indistinguishable, so that one cannot tell which came from the machine, then the machine has won the game and passed the test.
Figure 1.4 Alan Turing
Credit: Dunk/Flickr/Public domain.
Until the early 2000s, passing the test seemed a nearly impossible task. This seemingly insurmountable challenge imbued the Turing test with an aura of mystery and intrigue, and it became a symbol of the quest for artificial intelligence.
The Turing test had profound philosophical implications. If a machine's responses are indistinguishable from a human's, what does that say about human intelligence? What can it tell us about consciousness? It had practical implications as well, which we're now seeing firsthand. ChatGPT and DALL-E by OpenAI have taken the world by storm, and there's no doubt that ChatGPT can pass the Turing test.
The Turing test did, and still does, have its skeptics, who saw it as a limited indicator of machine intelligence. They argued that the ability to mimic human speech patterns says little about general intelligence. Now that the test has effectively been passed, it's unclear whether this holy grail was as significant as we thought. ChatGPT is undoubtedly very human-like in its responses, but it is clearly still a non-conscious machine.
The eponym of this test, Alan Turing, was an Englishman who led the successful effort to break the German codes during World War II, and then developed his theories of computing at the National Physical Laboratory. While Babbage's work was foundational for computing, and von Neumann architected influential computer designs, Turing was a pioneer of theoretical computer science and artificial intelligence. His central notion was of a universal machine, now known as a Turing machine, that could compute anything, given a set of instructions. If this sounds like Babbage's Analytical Engine, it's because fundamentally they both rested on the same underlying idea of a flexible computing machine. Turing's abstract mathematical concept, though, laid the theoretical foundations for the development of real computers.
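Turing's universal machine can be made concrete in a few lines of code. The toy simulator below is an illustrative sketch, not a reconstruction of Turing's own formalism: the rule format, names, and example machine are my own, with each rule mapping (state, symbol) to (symbol to write, head move, next state).

```python
def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=1000):
    """Run a table of rules over a tape of symbols; blank cells are '_'."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    # Read the tape back in order, dropping surrounding blanks
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: flip every bit until the first blank, then halt.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flip_bits, "1011"))  # prints 0100
```

The point of the abstraction is that the rule table is just data: a single simulator like this one can run any machine you feed it, which is the sense in which Turing's machine is "universal."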
The history of artificial intelligence is populated by thousands of mathematicians, engineers, psychologists, and scientists. However, among this vast sea of contributors, these four pioneers serve as human faces for the early development of artificial intelligence.
From the 1950s onward, the story of AI has taken on a certain canonical shape, which will be sketched here. Like its older, more venerable cousin, theoretical physics, the field's coming of age has become a story we tell ourselves and each other, a narrative that shapes our collective understanding. The story begins with Babbage and Lovelace, continues with Turing and von Neumann, and then reaches a turning point in the 1950s.
In the summer of 1956, a group of researchers gathered on the campus of Dartmouth College to discuss a new field whose name had just been coined by one of the organizers: John McCarthy put “artificial intelligence” in the name of the conference and in the proposal for its funding.17 The Dartmouth Conference was a gathering of some of the leading researchers in computer science, mathematics, philosophy, and psychology, and it would come to be seen as AI's genesis moment.
The organizers were old friends. John McCarthy and Marvin Minsky had been roommates at Princeton University and had remained close friends ever since. Nathaniel Rochester and Claude Shannon were former colleagues from Bell Labs and collaborated on the development of computer languages and hardware. By most accounts, the gathering was somewhat chaotic, with a loose flow of ideas, brilliant minds each pursuing their own agendas, and people coming and going as they wished. Marvin Minsky brought his electric guitar and played late into the night, entertaining his colleagues with his musical skills.
It seemed at first that nothing would come of this gathering. Turing and von Neumann, who might otherwise have been major figures at the conference, were absent: Turing had died two years earlier, and von Neumann was ailing.18,19 But in the years that followed the Dartmouth Conference, many of the participants went on to become leaders in the field of AI. John McCarthy, for instance, went on to develop the Lisp programming language, which became a vital tool in AI research. Marvin Minsky co-founded the MIT Artificial Intelligence Laboratory and became one of the most influential figures in the field. Nathaniel Rochester continued to work at IBM, overseeing the development of some of the earliest computer systems. Two of the attendees, John Nash and Herbert Simon, went on to win Nobel prizes for other endeavors.
Minsky continued the work he had begun during his doctoral research and, with his colleague Seymour Papert, shaped the research direction for the new field. Psychologists had long been interested in how the brain worked and had tried to model the behavior of individual neurons. Frank Rosenblatt put together what would become the most famous neural network of all, the perceptron.20 Incredibly simple compared to the monumental edifices that AI scientists now build, it was nevertheless an astounding demonstration of how artificial neurons could exhibit intelligent-seeming behavior. Minsky wrote his doctoral thesis on neural networks, and his book with Seymour Papert, Perceptrons, made the titular neural network famous.21 Rather than celebrating Rosenblatt's perceptron, however, the book argued that the network was too simple: consisting of just a single layer of artificial neurons, it could solve only very basic problems. The authors discussed multilayer perceptrons, neural networks with multiple layers that could handle more sophisticated tasks, but there was then no practical way to build one. Ironically, the book's harsh critique of the perceptron, combined with that gap, was an early, unintended salvo that crashed the enthusiasm for AI.
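Both the perceptron itself and the limitation Minsky and Papert identified fit in a few lines of code. The sketch below is illustrative (the function names, learning rate, and epoch count are my own choices, not from the book): a single layer can learn a linearly separable function like AND, but no setting of its weights can capture XOR.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for inputs (x1, x2) -> 0/1 label with the perceptron rule."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            # Perceptron learning rule: nudge weights toward the correct answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def accuracy(samples, w, b):
    correct = sum(
        (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label
        for (x1, x2), label in samples
    )
    return correct / len(samples)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_data = [(x, int(x[0] and x[1])) for x in inputs]  # linearly separable
xor_data = [(x, int(x[0] != x[1])) for x in inputs]   # not linearly separable

w, b = train_perceptron(and_data)
print("AND accuracy:", accuracy(and_data, w, b))  # learns AND perfectly: 1.0

w, b = train_perceptron(xor_data)
print("XOR accuracy:", accuracy(xor_data, w, b))  # never reaches 1.0
```

The XOR failure is not a matter of tuning: a single layer draws one straight line through the input space, and no single line separates XOR's classes. That, in essence, was Minsky and Papert's critique.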
The 1960s were characterized by optimism and a focus on fundamental research. In comparison, the 1970s were a more challenging time for AI research, with a shift toward applied research and the development of expert systems. The enthusiasm of the 1950s and 1960s was exemplified by statements that promised human-level intelligence within a few years. Life magazine published this quote from an interview with Marvin Minsky: “[In] three to eight years, we will have a machine with the general intelligence of an average human being.”22 Herbert Simon, one of the creators of the world's first artificial intelligence program, had famously declared in 1965 that “machines will be capable, within twenty years, of doing any work a man can do.”23
These predictions turned out to be wildly optimistic. The power of computers at the time wasn't enough to make their dreams a reality. Sure, computers had come a long way since the code-breaking machines of World War II, but the theory quickly outpaced the hardware. It was like trying to build a skyscraper with just hammers and nails.
The pendulum swung toward pessimism. In 1973, mathematician James Lighthill published a report for the British government that criticized the state of AI research at the time, arguing that progress had been “grossly exaggerated” and that the field was unlikely to deliver significant results in the near future.24 He argued that the combinatorial explosion of choices facing most decision processes would never be overcome. A year earlier, philosopher Hubert Dreyfus had argued in his book “What Computers Can't Do”25 that AI was fundamentally flawed because it rested on a flawed understanding of human intelligence. Dreyfus's book became a bestseller and helped popularize the view that AI was over-hyped and unlikely to succeed in the near future.
And so, the curtain fell on the first act of the artificial intelligence saga. Reflecting the mood of the times, the flow of research funds from the US government, primarily through the Defense Advanced Research Projects Agency (DARPA), dried up. The reduction in funding in turn reinforced the perception that AI was over-hyped, creating a vicious cycle.