Dive into the intelligence that powers artificial intelligence
Artificial intelligence is swiftly moving from a sci-fi future to a modern reality. This edition of Artificial Intelligence For Dummies keeps pace with the lightning-fast expansion of AI tools that are overhauling every corner of reality. This book demystifies how artificial intelligence systems operate, giving you a look at the inner workings of AI and explaining the important role of data in creating intelligence. You'll get a primer on using AI in everyday life, and you'll also get a glimpse into possible AI-driven futures. What's next for humanity in the age of AI? How will your job and your life change as AI continues to evolve? How can you take advantage of AI today to make your life easier? This jargon-free Dummies guide answers all your most pressing questions about the world of artificial intelligence.
Artificial Intelligence For Dummies is the ideal starting point for anyone seeking a deeper technological understanding of how artificial intelligence works and what promise it holds for the future.
Page count: 648
Year of publication: 2024
Cover
Title Page
Copyright
Introduction
About This Book
Icons Used in This Book
Beyond the Book
Where to Go from Here
Part 1: Introducing AI
Chapter 1: Delving into What AI Means
Defining the Term AI
Understanding the History of AI
Considering AI Uses
Avoiding AI Hype and Overestimation
Connecting AI to the Underlying Computer
Chapter 2: Defining Data’s Role in AI
Finding Data Ubiquitous in This Age
Using Data Successfully
Manicuring the Data
Considering the Five Mistruths in Data
Defining the Limits of Data Acquisition
Considering Data Security Issues
Chapter 3: Considering the Use of Algorithms
Understanding the Role of Algorithms
Discovering the Learning Machine
Chapter 4: Pioneering Specialized Hardware
Relying on Standard Hardware
Using GPUs
Working with Deep Learning Processors (DLPs)
Creating a Specialized Processing Environment
Increasing Hardware Capabilities
Adding Specialized Sensors
Integrating AI with Advanced Sensor Technology
Devising Methods to Interact with the Environment
Part 2: Understanding How AI Works
Chapter 5: Crafting Intelligence for AI Data Analysis
Defining Data Analysis
Defining Machine Learning (ML)
Considering How to Learn from Data
Chapter 6: Employing Machine Learning in AI
Taking Many Different Roads to Learning
Exploring the Truth in Probabilities
Growing Trees That Can Classify
Chapter 7: Improving AI with Deep Learning
Shaping Neural Networks Similar to the Human Brain
Mimicking the Learning Brain
Introducing Deep Learning
Detecting Edges and Shapes from Images
Part 3: Recognizing How We Interact with AI Every Day
Chapter 8: Unleashing Generative AI for Text and Images
Getting an Overview of Generative AI
Discovering the Magic Smooth Talk of AI
Working with Generative AI
Understanding the Societal Implications of Generative AI
Deciding What Makes a Good Generative AI App
Commercializing Generative AI
Chapter 9: Seeing AI Uses in Computer Applications
Introducing Common Application Types
Seeing How AI Makes Applications Friendlier
Performing Corrections Automatically
Making Suggestions
Considering AI-Based Errors
Chapter 10: Automating Common Processes
Developing Solutions for Boredom
Working in Industrial Settings
Creating a Safe Environment
Chapter 11: Relying on AI to Improve Human Interaction
Developing New Ways to Communicate
Exchanging Ideas
Using Multimedia
Embellishing Human Sensory Perception
Part 4: AI Applied in Industries
Chapter 12: Using AI to Address Medical Needs
Implementing Portable Patient Monitoring
Making Humans More Capable
Addressing Special Needs
Completing Analysis in New Ways
Relying on Telepresence
Devising New Surgical Techniques
Performing Tasks Using Automation
Combining Robots and Medical Professionals
Considering Disruptions AI Causes for Medical Professionals
Chapter 13: Developing Robots
Defining Robot Roles
Assembling a Basic Robot
Chapter 14: Flying with Drones
Acknowledging the State of the Art
Defining Uses for Drones
Reviewing Privacy and Data Protection in Drone Operations
Chapter 15: Utilizing the AI-Driven Car
Examining the Short History of SD Cars
Understanding the Future of Mobility
Getting into a Self-Driving Car
Overcoming Uncertainty of Perceptions
Part 5: Getting Philosophical About AI
Chapter 16: Understanding the Nonstarter Application – Why We Still Need Humans
Using AI Where It Won’t Work
Considering the Effects of AI Winters
Creating Solutions in Search of a Problem
Chapter 17: Engaging in Human Endeavors
Keeping Human Beings Popular
Living and Working in Space
Creating Cities in Hostile Environments
Making Humans More Efficient
Fixing Problems on a Planetary Scale
Chapter 18: Seeing AI in Space
Integrating AI into Space Operations
Performing Space Mining
Exploring New Places
Building Structures in Space
Part 6: The Part of Tens
Chapter 19: Ten Substantial Contributions of AI to Society
Considering Human-Specific Interactions
Developing Industrial Solutions
Creating New Technology Environments
Working with AI in Space
Chapter 20: Ten Ways in Which AI Has Failed
Understanding
Discovering
Empathizing
Index
About the Authors
Connect with Dummies
End User License Agreement
Chapter 1
TABLE 1-1 The Kinds of Human Intelligence and How AIs Simulate Them
Chapter 5
TABLE 5-1 Machine Learning Real-World Applications
Chapter 1
FIGURE 1-1: An overview of the history of AI.
Chapter 2
FIGURE 2-1: With the present AI solutions, more data equates to more intelligen...
Chapter 3
FIGURE 3-1: A tree may look like its physical counterpart or have its roots poi...
FIGURE 3-2: Graph nodes can connect to each other in myriad ways.
FIGURE 3-3: A glance at min-max approximation in a tic-tac-toe game.
Chapter 6
FIGURE 6-1: A naïve Bayes model can retrace evidence to the right outcome.
FIGURE 6-2: A Bayesian network can support a medical decision.
FIGURE 6-3: A visualization of the decision tree built from the play-tennis dat...
Chapter 7
FIGURE 7-1: Example of a perceptron in simple and challenging classification ta...
FIGURE 7-2: A neural network architecture, from input to output.
FIGURE 7-3: This neural network playground lets you see how modifying a neural ...
FIGURE 7-4: Using translation invariance, a neural network spots the dog and it...
FIGURE 7-5: A convolution scanning an image.
Chapter 8
FIGURE 8-1: A sample of images created with the DALL-E 3 technology from OpenAI...
FIGURE 8-2: How a GAN network works, oscillating between generator and discrimi...
FIGURE 8-3: How a diffusion model first adds noise and then learns to remove it...
FIGURE 8-4: A schema of how an agent and an environment interact in RL.
Chapter 13
FIGURE 13-1: The uncanny valley.
Chapter 14
FIGURE 14-1: A quadcopter flies by opportunely spinning its rotors in the right...
Chapter 15
FIGURE 15-1: An overall, schematic view of the systems working in an SD car.
FIGURE 15-2: A schematic representation of exteroceptive sensors in an SD car.
FIGURE 15-3: A Kalman filter estimates the trajectory of a bike by fusing radar...
Artificial Intelligence For Dummies®, 3rd Edition
Published by: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, www.wiley.com
Copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies.
Media and software compilation copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies.
Published simultaneously in Canada
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Trademarks: Wiley, For Dummies, the Dummies Man logo, Dummies.com, Making Everything Easier, and related trade dress are trademarks or registered trademarks of John Wiley & Sons, Inc. and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: THE PUBLISHER AND THE AUTHOR MAKE NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS WORK AND SPECIFICALLY DISCLAIM ALL WARRANTIES, INCLUDING WITHOUT LIMITATION WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE. NO WARRANTY MAY BE CREATED OR EXTENDED BY SALES OR PROMOTIONAL MATERIALS. THE ADVICE AND STRATEGIES CONTAINED HEREIN MAY NOT BE SUITABLE FOR EVERY SITUATION. THIS WORK IS SOLD WITH THE UNDERSTANDING THAT THE PUBLISHER IS NOT ENGAGED IN RENDERING LEGAL, ACCOUNTING, OR OTHER PROFESSIONAL SERVICES. IF PROFESSIONAL ASSISTANCE IS REQUIRED, THE SERVICES OF A COMPETENT PROFESSIONAL PERSON SHOULD BE SOUGHT. NEITHER THE PUBLISHER NOR THE AUTHOR SHALL BE LIABLE FOR DAMAGES ARISING HEREFROM. THE FACT THAT AN ORGANIZATION OR WEBSITE IS REFERRED TO IN THIS WORK AS A CITATION AND/OR A POTENTIAL SOURCE OF FURTHER INFORMATION DOES NOT MEAN THAT THE AUTHOR OR THE PUBLISHER ENDORSES THE INFORMATION THE ORGANIZATION OR WEBSITE MAY PROVIDE OR RECOMMENDATIONS IT MAY MAKE. FURTHER, READERS SHOULD BE AWARE THAT INTERNET WEBSITES LISTED IN THIS WORK MAY HAVE CHANGED OR DISAPPEARED BETWEEN WHEN THIS WORK WAS WRITTEN AND WHEN IT IS READ.
For general information on our other products and services, please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993, or fax 317-572-4002. For technical support, please visit https://hub.wiley.com/community/support/dummies.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Control Number is available from the publisher.
ISBN 978-1-394-27071-2 (pbk); ISBN 978-1-394-27073-6 (ebk); ISBN 978-1-394-27072-9 (ebk)
You can hardly avoid hearing about AI these days. You see AI in the movies, in books, in the news, and online. AI is part of robots, self-driving (SD) cars, drones, medical systems, online shopping sites, and all sorts of other technologies that affect your daily life in innumerable ways.
Many pundits are burying you in information (and disinformation) about AI, too. Much of the hype about AI originates from the excessive and unrealistic expectations of scientists, entrepreneurs, and businesspersons. Artificial Intelligence For Dummies, 3rd Edition, is the book you need if you feel as though you truly don’t know anything about a technology that purports to be an essential element of your life.
Using various media as a starting point, you might notice that most of the useful technologies are almost boring. Certainly, no one gushes over them. AI is like that: so ubiquitous as to be humdrum. You’re using AI in some way today; in fact, you probably rely on AI in many different ways — you just don’t notice it because it’s so mundane. This book makes you aware of these very real and essential uses of AI. A smart thermostat for your home may not sound exciting, but it’s an incredibly practical use for a technology that has some people running for the hills in terror.
This book also covers various cool uses of AI. For example, you may not realize that a medical monitoring device can now predict when you might have a heart problem — but such a device exists. AI powers drones, drives cars, and makes all sorts of robots possible. You see AI used now in all sorts of space applications, and AI figures prominently in all the space adventures humans will have tomorrow.
In contrast to many books on the topic, Artificial Intelligence For Dummies, 3rd Edition, also tells you the truth about where and how AI can’t work. In fact, AI will never be able to engage in certain essential activities and tasks, and it will be unable to engage in other ones until far into the future. One takeaway from this book is that humans will always be important. In fact, if anything, AI makes humans even more important because AI helps humans excel in ways that you frankly might not be able to imagine.
Artificial Intelligence For Dummies, 3rd Edition starts by helping you understand AI, especially what AI needs to work and why it has failed in the past. You also discover the basis for some of the issues with AI today and how those issues might prove to be nearly impossible to solve in certain cases. Of course, along with the issues, you discover the fixes for various problems and consider where scientists are taking AI in search of answers. Most important, you discover where AI is falling short and where it excels. You likely won’t have a self-driving car anytime soon, and that vacation in space will have to wait. On the other hand, you find that telepresence can help people stay in their homes when they might otherwise need to go to a hospital or nursing home.
This book also contains links to external information because AI has become a huge and complex topic. Follow these links to gain additional information that just won’t fit in this book — and to gain a full appreciation of just how astounding the impact of AI is on your daily life. If you’re reading the print version of this book, you can type the URL provided into your browser; e-book readers can simply click the links. Many other links use what’s called a TinyURL (tinyurl.com), in case the original link is too long and confusing to type accurately into a search engine. To check whether a TinyURL is real, you can use the preview feature by adding the word preview as part of the link, like this: preview.tinyurl.com/pd88943u.
AI has a truly bright future because it has become an essential technology. This book also shows you the paths that AI is likely to follow in the future. The various trends discussed in this book are based on what people are actually trying to do now. The new technology hasn’t succeeded yet, but because people are working on it, it does have a good chance of success at some point.
To make absorbing the concepts even easier, this book uses the following conventions:
Web addresses appear in monofont. If you're reading a digital version of this book on a device connected to the Internet, note that you can click the web address to visit that website, like this: www.dummies.com. Many article titles of additional resources also appear as clickable links.
Words in italics are defined inline as special terms you should remember. You see these words used (and sometimes misused) in many different ways in the press and other media, such as movies. Knowing the meaning of these terms can help you clear away some of the hype surrounding AI.
As you read this book, you see icons in the margins that indicate material of interest (or not, as the case may be). This section briefly describes each icon in this book.
Tips are gratifying because they help you save time or perform a task without creating a lot of extra work. The tips in this book are time-saving techniques or pointers to resources that you should try in order to gain the maximum benefit from learning about AI. Just think of them as extras that we're providing to reward you for reading our book.
If you get nothing else out of a particular chapter or section, remember the material marked by this icon. This text usually contains an essential process or a bit of information that you must know to interact with AI successfully.
We don’t want to sound like angry parents or some kind of maniacs, but you should avoid doing anything marked with a Warning icon. Otherwise, you might find that you engage in the sort of disinformation that now has people terrified of AI.
Whenever you see this icon, think “advanced tip or technique.” You can fall asleep from reading this material, and we don’t want to be responsible for that. However, you might find that these tidbits of useful information contain the solution you need in order to create or use an AI solution. Skip these bits of information whenever you like.
Every book in the For Dummies series comes supplied with an online Cheat Sheet. You remember using crib notes in school to make a better mark on a test, don’t you? You do? Well, a cheat sheet is sort of like that. It provides you with some special notes about tasks that you can do with AI that not everyone else knows about. You can find the cheat sheet for this book by going to www.dummies.com and typing Artificial Intelligence For Dummies Cheat Sheet in the search box. The cheat sheet contains neat-o information, such as the meaning of all those strange acronyms and abbreviations associated with AI, machine learning, and deep learning.
It’s time to start discovering AI and see what it can do for you. If you know nothing about AI, start with Chapter 1. You may not want to read every chapter in the book, but starting with Chapter 1 helps you understand the AI basics that you need when working through other places in the book.
If your main goal in reading this book is to build knowledge of where AI is used today, start with Chapter 5; the materials in Part 2 can help you see these uses.
If you have a bit more advanced knowledge of AI, you can start with Chapter 9. Part 3 of this book contains the most advanced material that you’ll encounter. If you don’t want to know how AI works at a low level (not as a developer but simply as someone interested in AI), you might decide to skip this part of the book.
Okay, so you want to know the super fantastic ways in which people are either using AI today or will use AI in the future. If that’s the case, start with Chapter 12. All of Parts 4 and 5 show you the incredible ways in which AI is used without forcing you to deal with piles of hype as a result. The information in Part 4 focuses on hardware that relies on AI, and the material in Part 5 focuses more on futuristic uses of AI.
Part 1
IN THIS PART …
Discover what AI can actually do for you.
Consider how data affects the use of AI.
Understand how AI relies on algorithms to perform useful work.
See how using specialized hardware makes AI perform better.
Chapter 1
IN THIS CHAPTER
Defining AI and its history
Using AI for practical tasks
Seeing through AI hype
Connecting AI with computer technology
Common apps, such as Google Assistant, Alexa, and Siri, have all of us using artificial intelligence (AI) every day without even thinking about it. Productivity and creative apps such as ChatGPT, Synthesia, and Gemini help us focus on the content rather than on how to get there. The media floods our entire social environment with so much information and disinformation that many people see AI as a kind of magic (which it most certainly isn't). So the best way to start this book is to define what AI is, what it isn't, and how it connects to computers today.
Of course, the basis for what you expect from AI is a combination of how you define AI, the technology you have for implementing AI, and the goals you have for AI. Consequently, everyone sees AI differently. This book takes a middle-of-the-road approach by viewing AI from as many different perspectives as possible. We don’t buy into the hype offered by proponents, nor do we indulge in the negativity espoused by detractors. Instead, we strive to give you the best possible view of AI as a technology. As a result, you may find that you have expectations somewhat different from those you encounter in this book, which is fine, but it’s essential to consider what the technology can actually do for you — rather than expect something it can’t.
Before you can use a term in any meaningful and useful way, you must have a definition for it. After all, if nobody agrees on a meaning, the term has none; it’s just a collection of characters. Defining the idiom (a term whose meaning isn’t clear from the meanings of its constituent elements) is especially important with technical terms that have received more than a little press coverage at various times and in various ways.
Saying that AI is an artificial intelligence doesn’t tell you anything meaningful, which is why people have so many discussions and disagreements over this term. Yes, you can argue that what occurs is artificial, not having come from a natural source. However, the intelligence part is, at best, ambiguous. Even if you don’t necessarily agree with the definition of AI as it appears in the sections that follow, this book uses AI according to that definition, and knowing it will help you follow the text more easily.
People define intelligence in many different ways. However, you can say that intelligence involves certain mental processes composed of the following activities:
Learning: Having the ability to obtain and process new information
Reasoning: Being able to manipulate information in various ways
Understanding: Considering the result of information manipulation
Grasping truths: Determining the validity of the manipulated information
Seeing relationships: Divining how validated data interacts with other data
Considering meanings: Applying truths to particular situations in a manner consistent with their relationship
Separating fact from belief: Determining whether the data is adequately supported by provable sources that can be demonstrated to be consistently valid
The list could easily grow quite long, but even this list is relatively prone to interpretation by anyone who accepts it as viable. As you can see from the list, however, intelligence often follows a process that a computer system can mimic as part of a simulation:
1. Set a goal (the information to process and the desired output) based on needs or wants.
2. Assess the value of any known information in support of the goal.
3. Gather additional information that could support the goal. The emphasis here is on information that could support the goal rather than on information you know will support the goal.
4. Manipulate the data such that it achieves a form consistent with existing information.
5. Define the relationships and truth values between existing and new information.
6. Determine whether the goal is achieved.
7. Modify the goal in light of the new data and its effect on the probability of success.
8. Repeat Steps 2 through 7 as needed until the goal is achieved (found true) or the possibilities for achieving it are exhausted (found false).
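The goal-seeking process just described can be sketched as a short Python illustration. This is a toy sketch only; every function and parameter name here is hypothetical (invented for this example, not taken from any AI library), and real systems replace each step with far more sophisticated machinery:

```python
def seek_goal(goal, known, gather, integrate, achieved, revise, max_rounds=100):
    """Toy sketch of the goal-seeking loop: gather, integrate, check, revise.

    goal      -- the current goal (any value the callbacks understand)
    known     -- list of information gathered so far
    gather    -- returns new candidate information, or None when exhausted
    integrate -- merges new information into what's known
    achieved  -- predicate: is the goal met, given what's known?
    revise    -- produces a (possibly) modified goal after new data arrives
    """
    for _ in range(max_rounds):            # Step 8: repeat as needed
        if achieved(goal, known):          # Step 6: goal achieved (found true)
            return True, known
        new = gather(goal, known)          # Steps 2-3: assess and gather info
        if new is None:                    # possibilities exhausted (found false)
            return False, known
        known = integrate(known, new)      # Steps 4-5: manipulate and relate data
        goal = revise(goal, known)         # Step 7: modify the goal if needed
    return False, known

# Toy usage: "find a number >= 10" by gathering numbers one at a time.
stream = iter([3, 7, 12])
ok, facts = seek_goal(
    goal=10,
    known=[],
    gather=lambda g, k: next(stream, None),
    integrate=lambda k, n: k + [n],
    achieved=lambda g, k: any(x >= g for x in k),
    revise=lambda g, k: g,                 # this toy never changes its goal
)
print(ok, facts)   # prints: True [3, 7, 12]
```

The point of the sketch is the shape of the loop, not the callbacks: the computer mechanically cycles through gathering, integrating, and checking, with no understanding of what the goal means.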
Even though you can create algorithms and provide access to data in support of this process within a computer, a computer’s capability to achieve intelligence is severely limited. For example, a computer is incapable of understanding anything because it relies on machine processes to manipulate data using pure math in a strictly mechanical fashion. Likewise, computers can’t easily separate truth from mistruth (as described in Chapter 2). In fact, no computer can fully implement any of the mental activities described in the earlier list that describes intelligence.
As part of deciding what intelligence actually involves, categorizing intelligence is also helpful. Humans don't use just one type of intelligence; rather, they rely on multiple intelligences to perform tasks. Howard Gardner, a Harvard psychologist, has defined a number of these types of intelligence (for details, see the article "Multiple Intelligences" from Project Zero at Harvard University, https://pz.harvard.edu/resources/the-theory-of-multiple-intelligences), and knowing them helps you relate them to the kinds of tasks a computer can simulate as intelligence. (See Table 1-1 for a modified version of these intelligences with additional description.)
TABLE 1-1 The Kinds of Human Intelligence and How AIs Simulate Them
Bodily kinesthetic (simulation potential: Moderate to High; human tools: specialized equipment and real-life objects): Body movements, such as those used by a surgeon or a dancer, require precision and body awareness. Robots commonly use this kind of intelligence to perform repetitive tasks, often with higher precision than humans, but sometimes with less grace. It's essential to differentiate between human augmentation, such as a surgical device that provides a surgeon with enhanced physical ability, and true independent movement. The former is simply a demonstration of mathematical ability in that it depends on the surgeon for input.

Creative (simulation potential: None; human tools: artistic output, new patterns of thought, inventions, new kinds of musical composition): Creativity is the act of developing a new pattern of thought that results in unique output in the form of art, music, or writing. A truly new kind of product is the result of creativity. An AI can simulate existing patterns of thought and even combine them to create what appears to be a unique presentation but is in reality just a mathematically based version of an existing pattern. In order to create, an AI would need to possess self-awareness, which would require intrapersonal intelligence.

Interpersonal (simulation potential: Low to Moderate; human tools: telephone, audioconferencing, videoconferencing, writing, computer conferencing, email): Interacting with others occurs at several levels. The goal of this form of intelligence is to obtain, exchange, give, or manipulate information based on the experiences of others. Computers can answer basic questions because of keyword input, not because they understand the question. The intelligence occurs while obtaining information, locating suitable keywords, and then giving information based on those keywords. Cross-referencing terms in a lookup table and then acting on the instructions provided by the table demonstrates logical intelligence, not interpersonal intelligence.

Intrapersonal (simulation potential: None; human tools: books, creative materials, diaries, privacy, time): Looking inward to understand one's own interests and then setting goals based on those interests is now a human-only kind of intelligence. As machines, computers have no desires, interests, wants, or creative abilities. An AI processes numeric input using a set of algorithms and provides an output; it isn't aware of anything it does, nor does it understand anything it does.

Linguistic, often divided into oral, aural, and written (simulation potential: Low; human tools: games, multimedia, books, voice recorders, spoken words): Working with words is an essential tool for communication because spoken and written information exchange is far faster than any other form. This form of intelligence includes understanding oral, aural, and written input, managing the input to develop an answer, and providing an understandable answer as output. Discerning just how capable computers are in this form of intelligence is difficult in light of AIs such as ChatGPT because it's all too easy to create tests where the AI produces nonsense answers.

Logical mathematical (simulation potential: High, potentially higher than humans; human tools: logic games, investigations, mysteries, brainteasers): Calculating results, performing comparisons, exploring patterns, and considering relationships are all areas in which computers now excel. When you see a computer defeat a human on a game show, this is the only form of intelligence you're seeing, out of eight kinds of intelligence. Yes, you might see small bits of other kinds of intelligence, but this is the focus. Basing an assessment of human-versus-computer intelligence on just one area isn't a good idea.

Naturalist (simulation potential: None; human tools: identification, exploration, discovery, new tool creation): Humans rely on the ability to identify, classify, and manipulate their environment to interact with plants, animals, and other objects. This type of intelligence informs you that one piece of fruit is safe to eat though another is not. It also gives you a desire to learn how things work or to explore the universe and all that is in it.

Visual spatial (simulation potential: Moderate; human tools: models, graphics, charts, photographs, drawings, 3D modeling, video, television, multimedia): Physical-environment intelligence is used by people like sailors and architects (among many others). To move around, humans need to understand their physical environment — that is, its dimensions and characteristics. Every robot or portable computer intelligence requires this capability, but the capability is often difficult to simulate (as with self-driving cars) or less than accurate (as with vacuums that rely as much on bumping as they do on moving intelligently).
As described in the previous section, the first concept that’s important to understand is that AI has little to do with human intelligence. Yes, some AI is modeled to simulate human intelligence, but that’s what it is: a simulation. When thinking about AI, notice an interplay between goal seeking, data processing used to achieve that goal, and data acquisition used to better understand the goal. AI relies on algorithms to achieve a result that may or may not have anything to do with human goals or methods of achieving those goals. With this in mind, you can categorize AI in four ways:
Acting humanly
Thinking humanly
Thinking rationally
Acting rationally
When a computer acts like a human, it best reflects the Turing test, in which the computer succeeds when you can't differentiate between the computer and a human. (For details, see "The Turing test" at the Alan Turing Internet Scrapbook, www.turing.org.uk/scrapbook/test.html.) This category also reflects what most media would have you believe AI is all about. You see it employed for technologies such as natural language processing, knowledge representation, automated reasoning, and machine learning. To pass the Turing test, an AI must have all four of these technologies and, possibly, integrate other solutions (such as expert systems).
The original Turing test didn’t include any physical contact. Harnad’s Total Turing Test does include physical contact, in the form of perceptual ability interrogation, which means that the computer must also employ both computer vision and robotics to succeed. Here’s a quick overview of other Turing test alternatives:
Reverse Turing test:
A human tries to prove to a computer that the human is not a computer (for example, the Completely Automated Public Turing Test to Tell Computers and Humans Apart, or CAPTCHA).
Minimum intelligent signal test:
Only true/false and yes/no questions are given.
Marcus test:
A computer program simulates watching a television show, and the program is tested with meaningful questions about the show's content.
Lovelace test 2.0:
A test detects AI by examining its ability to create art.
Winograd schema challenge:
This test asks multiple-choice questions in a specific format.
Current discussions about the Turing test have researchers Philip Johnson-Laird, a retired psychology professor from Princeton University, and Marco Ragni, a researcher at the Germany-based Chemnitz University of Technology, asking whether the test is outdated: if AI is making the Turing test obsolete, what might replace it? They identify several problems with the Turing test and offer a potential solution in the form of a psychology-style evaluation. These tests would use the following three-step process to better test AIs, such as Google's LaMDA and OpenAI's ChatGPT:
Use tests to check the AI’s underlying inferences.
Verify that the AI understands its own way of reasoning.
Examine the underlying source code, when possible.
Modern techniques include the idea of achieving the goal rather than mimicking humans completely. For example, the Wright brothers didn’t succeed in creating an airplane by precisely copying the flight of birds; rather, the birds provided ideas that led to the study of aerodynamics, which eventually led to human flight. The goal is to fly. Both birds and humans achieve this goal, but they use different approaches.
A computer that thinks like a human performs tasks that require intelligence (as contrasted with rote procedures) from a human to succeed, such as driving a car. To determine whether a program thinks like a human, you must have some method of determining how humans think, which the cognitive modeling approach defines. This model relies on these three techniques:
Introspection:
Detecting and documenting the techniques used to achieve goals by monitoring one’s own thought processes.
Psychological testing:
Observing a person’s behavior and adding it to a database of similar behaviors from other persons given a similar set of circumstances, goals, resources, and environmental conditions (among other factors).
Brain imaging:
Monitoring brain activity directly through various mechanical means, such as computerized axial tomography (CAT), positron emission tomography (PET), magnetic resonance imaging (MRI), and magnetoencephalography (MEG).
After creating a model, you can write a program that simulates the model. Given the amount of variability among human thought processes and the difficulty of accurately representing these thought processes as part of a program, the results are experimental at best. This category of thinking humanly is often used in psychology and other fields in which modeling the human thought process to create realistic simulations is essential.
Studying how humans think using an established standard enables the creation of guidelines that describe typical human behaviors. A person is considered rational when following these behaviors within certain levels of deviation. A computer that thinks rationally relies on the recorded behaviors to create a guide to how to interact with an environment based on the data at hand.
The goal of this approach is to solve problems logically, when possible. In many cases, this approach would enable the creation of a baseline technique for solving a problem, which would then be modified to actually solve the problem. In other words, the solving of a problem in principle is often different from solving it in practice, but you still need a starting point.
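To make the idea of solving problems logically a little more concrete, here is a minimal Python sketch of forward chaining, one classic way a "thinking rationally" system derives new conclusions from recorded facts and rules. The rules and fact names here are invented purely for illustration; they don't come from any real system.

```python
# A minimal forward-chaining sketch: derive new facts by repeatedly
# applying rules of the form (premises -> conclusion) until nothing
# new can be concluded.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)  # the rule fires; record the conclusion
                changed = True
    return facts

# Hypothetical driving rules, for illustration only.
rules = [
    (("light_is_red",), "must_stop"),
    (("must_stop", "car_behind_close"), "brake_gently"),
]
print(forward_chain({"light_is_red", "car_behind_close"}, rules))
```

Note how the second rule can fire only after the first one has produced "must_stop": the baseline solution in principle is built up step by step from the rules at hand.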
Studying how humans act in given situations under specific constraints enables you to determine which techniques are both efficient and effective. A computer that acts rationally relies on the recorded actions to interact with an environment based on conditions, environmental factors, and existing data.
As with rational thought, rational acts depend on a solution in principle, which may not prove useful in practice. However, rational acts do provide a baseline on which a computer can begin negotiating the successful completion of a goal.
Human processes differ from rational processes in their outcome. A process is rational if it always does the right thing based on the current information, given an ideal performance measure. In short, rational processes go by the book and assume that the book is correct. Human processes involve instinct, intuition, and other variables that don’t necessarily reflect the book and may not even consider the existing data. As an example, the rational way to drive a car is to always follow the law. However, traffic isn’t rational. If you follow the law precisely, you end up stuck somewhere because other drivers aren’t following the law precisely. To be successful, a self-driving car must therefore act humanly rather than rationally.
The categories used to define AI offer a way to consider various uses or ways to apply AI. Some of the systems used to classify AI by type are arbitrary and indistinct. For example, some groups view AI as either strong (generalized intelligence that can adapt to a variety of situations) or weak (specific intelligence designed to perform a particular task well).
The problem with strong AI is that it doesn’t perform any task well, whereas weak AI is too specific to perform tasks independently. Even so, just two type classifications won’t do the job, even in a general sense. The four classification types promoted by Arend Hintze form a better basis for understanding AI:
Reactive machines:
The machines you see defeating humans at chess or playing on game shows are examples of reactive machines. A reactive machine has no memory or experience on which to base a decision. Instead, it relies on pure computational power and smart algorithms to re-create every decision every time. This is an example of a weak AI used for a specific purpose.
Limited memory:
A self-driving (SD) car or an autonomous robot can’t afford the time to make every decision from scratch. These machines rely on a small amount of memory to provide experiential knowledge of various situations. When the machine sees the same situation, it can rely on experience to reduce reaction time and provide more resources for making new decisions that haven’t yet been made. This is an example of the current level of strong AI.
Theory of mind:
A machine that can assess both its required goals and the potential goals of other entities in the same environment has a kind of understanding that is feasible to some extent today, but not in any commercial form. However, for SD cars to become truly autonomous, this level of AI must be fully developed. An SD car would need to not only know that it must move from one point to another but also intuit the potentially conflicting goals of drivers around it and react accordingly. (Robot soccer, at www.cs.cmu.edu/~robosoccer/main and www.robocup.org, is another example of this kind of understanding, but at a simple level.)
Self-awareness:
This is the sort of AI you see in movies. However, it requires technologies that aren’t even remotely possible now because such a machine would have a sense of both self and consciousness. In addition, rather than merely intuit the goals of others based on environment and other entity reactions, this type of machine would be able to infer the intent of others based on experiential knowledge.
For more on these classification types, check out “Understanding the four types of AI, from reactive robots to self-aware beings” at theconversation.com/understanding-the-four-types-of-ai-from-reactive-robots-to-self-aware-beings-67616. It’s several years old but still pertinent.
Earlier sections of this chapter help you understand intelligence from the human perspective and see how modern computers are woefully inadequate for simulating such intelligence, much less actually becoming intelligent themselves. However, the desire to create intelligent machines (or, in ancient times, idols) is as old as humans. The desire not to be alone in the universe, to have something with which to communicate without the inconsistencies of other humans, is a strong one. Of course, a single book can’t contemplate all of human history, so Figure 1-1 provides a brief, pertinent overview of the history of modern AI attempts.
FIGURE 1-1: An overview of the history of AI.
Figure 1-1 shows you some highlights, nothing like a complete history of AI. One thing you should notice is that the early years were met with a lot of disappointment from overhyping what the technology would do. Yes, people can do amazing things with AI today, but that’s because the people creating the underlying technology just kept trying, no matter how often they failed.
You can find AI used in a great many applications today. The only problem is that the technology works so well that you don’t know it even exists. In fact, you might be surprised to find that many home devices already make use of AI. For example, some smart thermostats automatically create schedules for you based on how you manually control the temperature. Likewise, voice input that is used to control certain devices learns how you speak so that it can better interact with you. AI definitely appears in your car and most especially in the workplace. In fact, the uses for AI number in the millions — all safely out of sight even when they’re quite dramatic in nature. Here are just a few of the ways in which you might see AI used:
Fraud detection:
You receive a call from your credit card company asking whether you made a particular purchase. The credit card company isn’t being nosy; it’s simply alerting you to the fact that someone else might be making a purchase using your card. The AI embedded within the credit card company’s code detected an unfamiliar spending pattern and alerted someone to it.
Resource scheduling:
Many organizations need to schedule the use of resources efficiently. For example, a hospital may have to determine which room to assign a patient to based on the patient’s needs, the availability of skilled experts, and the length of time the doctor expects the patient to be in the hospital.
Complex analysis:
Humans often need help with complex analysis because there are literally too many factors to consider. For example, the same set of symptoms might indicate more than one illness. A doctor or another expert might need help making a timely diagnosis to save a patient’s life.
Automation:
Any form of automation can benefit from the addition of AI to handle unexpected changes or events. A problem with some types of automation is that an unexpected event, such as an object appearing in the wrong place, can cause the automation to stop. Adding AI to the automation can allow the automation to handle unexpected events and continue as if nothing happened.
Customer service:
The customer service line you call may not even have a human behind it. The automation is good enough to follow scripts and use various resources to handle the vast majority of your questions. After hearing good voice inflection (provided by AI as well), you may not even be able to tell that you’re talking with a computer.
Safety systems:
Many of the safety systems now found in machines of various sorts rely on AI to take over operation of the vehicle in a time of crisis. For example, many anti-lock braking systems (ABS) rely on AI to stop the car based on all the inputs a vehicle can provide, such as the direction of a skid. Computerized ABS is, at 40 years, relatively old from a technology perspective.
Machine efficiency:
AI can help control a machine to obtain maximum efficiency. The AI controls the use of resources so that the system avoids overshooting speed or other goals. Every ounce of power is used precisely as needed to provide the desired services.
Content generation:
When people consider content generation, they often think about ChatGPT because it’s in the public eye. However, content generation can exist deep within an application to provide specific functionality. For example, given a photo of the user, how will a new outfit look?
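To make the fraud-detection idea above concrete, here is a deliberately simplified Python sketch (not how any real credit card company works): it flags a purchase as suspicious when the amount deviates sharply from the cardholder's historical spending pattern. The purchase amounts and the three-standard-deviations threshold are invented for this example.

```python
import statistics

def is_suspicious(history, amount, threshold=3.0):
    """Flag a purchase whose amount lies more than `threshold`
    standard deviations away from the cardholder's mean spend."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(amount - mean) > threshold * stdev

past_purchases = [12.50, 40.00, 23.75, 31.20, 18.99, 27.40]
print(is_suspicious(past_purchases, 25.00))   # a typical amount
print(is_suspicious(past_purchases, 950.00))  # a wildly atypical amount
```

Real systems learn far richer patterns (merchant type, location, time of day), but the core idea is the same: model the familiar pattern and alert on deviations from it.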
You’ve no doubt seen and heard lots of hype about AI and its potential impact. If you’ve seen movies such as Her and Ex Machina, you might be led to believe that AI is further along than it is. The problem is that AI is actually in its infancy, and any sort of application such as those shown in the movies is the creative output of an overactive imagination. The following sections help you understand how hype and overestimation are skewing the goals you can achieve using AI today.
You may have heard of a concept called the singularity, which is responsible for the potential claims presented in the movies and other media. The singularity (the point at which computer intelligence surpasses human intelligence) is essentially a master algorithm that encompasses all five “tribes” of learning used within machine learning. To achieve what these sources are telling you, the machine must be able to learn as a human would — as specified by the eight kinds of intelligence discussed in the section “Discerning intelligence,” early in this chapter. Here are the five tribes of learning:
Symbolists:
The origin of this tribe is in logic and philosophy. It relies on inverse deduction to solve problems.
Connectionists:
This tribe’s origin is in neuroscience, and the group relies on backpropagation to solve problems.
Evolutionaries:
The Evolutionaries’ tribe originates in evolutionary biology, relying on genetic programming to solve problems.
Bayesians:
This tribe’s origin is in statistics and relies on probabilistic inference to solve problems.
Analogizers:
The origin of this tribe is in psychology. The group relies on kernel machines to solve problems.
The ultimate goal of machine learning is to combine the technologies and strategies embraced by the five tribes to create a single algorithm (the master algorithm) that can learn anything. Of course, achieving that goal is a long way off. Even so, scientists such as Pedro Domingos at the University of Washington are working toward that goal.
To make things even less clear, the five tribes may not be able to provide enough information to actually solve the problem of human intelligence, so creating master algorithms for all five tribes may still not yield the singularity. At this point, you should be amazed at just how little people know about how they think or why they think in a certain manner.
Any rumors you hear about AI taking over the world or becoming superior to people are just plain false.
Many sources of AI hype are out there. Quite a bit of the hype comes from the media and is presented by people who have no idea what AI is all about, except perhaps from a sci-fi novel they read a few years back. So it’s not just movies or television that cause problems with AI hype — it’s all sorts of other media sources as well. You can often find news reports presenting AI as being able to do something it can’t possibly do because the reporter doesn’t understand the technology. Oddly enough, many news articles are now written entirely by AI tools such as ChatGPT, so what you end up with is a recycling of the incorrect information.
Some products should be tested much more before being placed on the market. The article “2020 in Review: 10 AI Failures” at SyncedReview.com (syncedreview.com/2021/01/01/2020-in-review-10-ai-failures/) discusses ten products, hyped by their developers, that fell flat on their faces. Some of these failures are huge and reflect badly on the ability of AI to perform tasks as a whole. However, something to consider with a few of these failures is that people may have interfered with the device using the AI. Obviously, testing procedures need to start considering the possibility of people purposely tampering with the AI as a potential source of errors. Until that happens, the AI will fail to perform as expected because people will continue to fiddle with the software in an attempt to cause it to fail in a humorous manner.
Another cause of problems stems from asking the wrong person about AI — not every scientist, no matter how smart, knows enough about AI to provide a competent opinion about the technology and the direction it will take in the future. Asking a biologist about the future of AI in general is akin to asking your dentist to perform brain surgery — it simply isn’t a good idea. Yet many stories appear with people like these as the information source.
To discover the future direction of AI, ask a computer scientist or data scientist with a strong background in AI research.
Because of hype (and sometimes laziness or fatigue), users continually overestimate the ability of AI to perform tasks. For example, a Tesla owner was recently found sleeping in his car while the car zoomed along the highway at 90 mph (see “Tesla owner in Canada charged with ‘sleeping’ while driving over 90 mph”). However, even with the user significantly overestimating the ability of the technology to drive a car, it does apparently work well enough (at least, for this driver) to avoid a complete failure.
Be aware that there are also cases in which the self-driving feature failed and people were killed. (See the article at www.washingtonpost.com/technology/interactive/2023/tesla-autopilot-crash-analysis.)
However, you need not be speeding down a highway at 90 mph to encounter user overestimation. Robot vacuums can also fail to meet expectations, usually because users believe they can just plug in the device and then never think about vacuuming again. After all, movies portray the devices working precisely in this manner, but unfortunately, they still need human intervention. Our point is that most robots eventually need human intervention because they simply lack the knowledge to go it alone.
To see AI at work, you need to have some sort of computing system, an application that contains the required software, and a knowledge base. The computing system can be anything with a chip inside; in fact, a smartphone does just as well as a desktop computer for certain applications. Of course, if you’re Amazon and you want to provide advice on a particular person’s next buying decision, the smartphone won’t do — you need a big computing system for that application. The size of the computing system is directly proportional to the amount of work you expect the AI to perform.
The application can also vary in size, complexity, and even location. For example, if you’re a business owner and you want to analyze client data to determine how best to make a sales pitch, you might rely on a server-based application to perform the task. On the other hand, if you’re a customer and you want to find products on Amazon to complement your current purchase items, the application doesn’t even reside on your computer; you access it via a web-based application located on Amazon’s servers.
The knowledge base (a database that holds information about the facts, assumptions, and rules that the AI can use) varies in location and size as well. The more complex the data, the more insight you can obtain from it, but the more you need to manipulate the data as well. You get no free lunch when it comes to knowledge management. The interplay between location and time is also important: A network connection affords you access to a large knowledge base online but costs you in time because of the latency of network connections. However, localized databases, though fast, tend to lack details in many cases.
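At its simplest, a knowledge base is just a store of facts plus a way to query them. The toy Python sketch below illustrates the idea in miniature; the class, the triple format, and the sample facts are all invented for this example, and a real knowledge base would be vastly larger and live in a proper database.

```python
# A toy in-memory knowledge base: facts are (subject, relation, object)
# triples, and a query matches a pattern where None acts as a wildcard.
class KnowledgeBase:
    def __init__(self):
        self.facts = set()

    def add(self, subject, relation, obj):
        self.facts.add((subject, relation, obj))

    def query(self, subject=None, relation=None, obj=None):
        # Return every stored triple compatible with the pattern.
        return [f for f in self.facts
                if (subject is None or f[0] == subject)
                and (relation is None or f[1] == relation)
                and (obj is None or f[2] == obj)]

kb = KnowledgeBase()
kb.add("thermostat", "controls", "temperature")
kb.add("thermostat", "learns_from", "manual_adjustments")
print(kb.query(subject="thermostat"))
```

Even this tiny store shows the trade-off the chapter mentions: a local structure like this is fast to query but holds little detail, whereas a large online knowledge base holds far more but costs you network latency.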
Chapter 2
IN THIS CHAPTER
Seeing data as a universal resource
Obtaining and manipulating data
Looking for mistruths in data
Defining data-acquisitions limits
Considering data security
There is nothing new about data. Every interesting application ever written for a computer has data associated with it. Data comes in many forms — some organized, some not. What has changed is the amount of data. Some people find it almost terrifying that we now have access to so much data that details nearly every aspect of most people’s lives, sometimes to a level that even the person doesn’t realize. In addition, the use of advanced hardware and improvements in algorithms make data now the universal resource for AI.
To work with data, you must first obtain it. Today, data is collected manually, as done in the past, and also automatically, using new methods. However, it’s not a matter of just one or two data collection techniques: Collection methods take place on a continuum from fully manual to fully automatic. You also find a focus today on collecting this data ethically — for example, not collecting data that a person hasn’t granted permission for. This chapter explores issues surrounding data collection.
Raw data doesn’t usually work well for analysis purposes. This chapter also helps you understand the need for manipulating and shaping the data so that it meets specific requirements. You also discover the need to define the truth value of the data to ensure that analysis outcomes match the goals set for applications in the first place.
Interestingly, you also have data-acquisition limits to deal with. No technology currently exists for grabbing thoughts from someone’s mind by telepathic means. Of course, other limits exist, too — most of which you probably already know about but may not have considered. It also doesn’t pay to collect data in a manner that isn’t secure. The data must be free of bias, uncorrupted, and from a source you know. You find out more about acquisition limits and data security in this chapter.
Big data is more than just a buzz phrase used by vendors to propose new ways to store and analyze data. The big data revolution is an everyday reality and a driving force of our times. You may have heard big data mentioned in many specialized scientific and business publications, and you may have even wondered what the term really means. From a technical perspective, big data refers to amounts of computer data so large and intricate that applications can’t deal with the data by simply adding storage or increasing computer power.
Big data implies a revolution in data storage and manipulation. It affects what you can achieve with data in more qualitative terms (meaning that in addition to doing more, you can perform tasks better). From a human perspective, computers store big data in different data formats (such as database files and .csv files), but regardless of storage type, the computer still sees data as a stream of ones and zeros (the core language of computers). You can view data as being one of two types, structured and unstructured, depending on how you produce and consume it. Some data has a clear structure (you know exactly what it contains and where to find every piece of data), whereas other data is unstructured (you have an idea of what it contains, but you don't know exactly how it is arranged).
Typical examples of structured data are database tables, in which information is arranged into columns, and each column contains a specific type of information. Data is often structured by design. You gather it selectively and record it in its correct place. For example, you might want to place a count of the number of people buying a certain product in a specific column, in a specific table, or in a specific database. As with a library, if you know what data you need, you can find it immediately.
Unstructured data consists of images, videos, and sound recordings. You may use an unstructured form for text so that you can tag it with characteristics, such as size, date, or content type. Usually, you don’t know exactly where data appears in an unstructured dataset, because the data appears as sequences of ones and zeros that an application must interpret or visualize.
Transforming unstructured data into a structured form can cost lots of time and effort and can involve the work of many people. Most of the data of the big data revolution is unstructured and stored as is, unless someone renders it structured.
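The structured/unstructured distinction is easy to see in code. In the illustrative Python sketch below (the table columns and the free-text sentence are invented for this example), the structured data can be addressed directly by column name, whereas the unstructured text must first be parsed into structure before it can be used:

```python
import csv
import io
import re

# Structured: a CSV table -- every field has a known name and position,
# so you can look up a value directly, as in a library catalog.
table = io.StringIO("product,units_sold\nwidget,42\ngadget,17\n")
rows = list(csv.DictReader(table))
print(rows[0]["units_sold"])  # direct lookup by column name

# Unstructured: free text -- the same information must be extracted
# with a pattern before an application can use it.
note = "We moved 42 widgets last week and roughly 17 gadgets."
counts = re.findall(r"(\d+)\s+(widget|gadget)", note)
print(counts)
```

The regular expression stands in for the "lots of time and effort" the text mentions: every new phrasing of the unstructured note may require new extraction logic, while the CSV table needs none.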
This copious and sophisticated data store didn’t appear suddenly overnight. It took time to develop the technology to store this amount of data. In addition, it took time to spread the technology that generates and delivers data — namely, computers, sensors, smart mobile phones, and the Internet and its World Wide Web services. The following sections help you understand what makes data a universal resource today.
Scientists have long needed more powerful computers than the average person to run their experiments, and they began dealing with impressive amounts of data years before anyone coined the term big data. At that point, the Internet wasn’t producing the vast sums of data that it does today.