This third edition provides a comprehensive, accessible presentation of AI, including examples, applications, full-color images, and human-interest boxes. New chapters on deep learning, AI security, and AI programming keep the content cutting-edge. Topics like neural networks, genetic algorithms, natural language processing, planning, and complex board games are covered.
The book starts with an AI overview, moving through uninformed search, intelligent search methods, and game-based strategies. It delves into logic in AI, knowledge representation, production systems, uncertainty in AI, and expert systems. Middle chapters cover machine learning, neural networks, and deep learning. It continues with nature-inspired search methods, natural language processing, and automated planning, ending with robotics and advanced computer games.
These AI concepts are crucial for developing sophisticated AI applications. This book transitions you from novice to proficient AI practitioner, equipped with practical skills and comprehensive knowledge. Companion files with resources, simulations, and figures enhance learning. By the end, you'll understand AI principles and applications, ready to tackle real-world challenges.
Page count: 1378
Publication year: 2024
ARTIFICIAL INTELLIGENCE IN THE 21ST CENTURY
THIRD EDITION
LICENSE, DISCLAIMER OF LIABILITY, AND LIMITED WARRANTY
By purchasing or using this book (the “Work”), you agree that this license grants permission to use the contents contained herein, but does not give you the right of ownership to any of the textual content in the book or ownership to any of the information or products contained in it. This license does not permit uploading of the Work onto the Internet or on a network (of any kind) without the written consent of the Publisher. Duplication or dissemination of any text, code, simulations, images, etc. contained herein is limited to and subject to licensing terms for the respective products, and permission must be obtained from the Publisher or the owner of the content, etc., in order to reproduce or network any portion of the textual material (in any media) that is contained in the Work.
MERCURY LEARNING AND INFORMATION (“MLI” or “the Publisher”) and anyone involved in the creation, writing, or production of the companion disc, accompanying algorithms, code, or computer programs (“the software”), and any accompanying Web site or software of the Work, cannot and do not warrant the performance or results that might be obtained by using the contents of the Work. The author, developers, and the Publisher have used their best efforts to ensure the accuracy and functionality of the textual material and/or programs contained in this package; we, however, make no warranty of any kind, express or implied, regarding the performance of these contents or programs. The Work is sold “as is” without warranty (except for defective materials used in manufacturing the book or due to faulty workmanship).
The author, developers, and the publisher of any accompanying content, and anyone involved in the composition, production, and manufacturing of this work will not be liable for damages of any kind arising out of the use of (or the inability to use) the algorithms, source code, computer programs, or textual material contained in this publication. This includes, but is not limited to, loss of revenue or profit, or other incidental, physical, or consequential damages arising out of the use of this Work.
The sole remedy in the event of a claim of any kind is expressly limited to replacement of the book, and only at the discretion of the Publisher. The use of “implied warranty” and certain “exclusions” vary from state to state, and might not apply to the purchaser of this product.
Companion files may be downloaded from the publisher by writing to [email protected].
ARTIFICIAL INTELLIGENCE IN THE 21ST CENTURY
THIRD EDITION
Stephen Lucci, Ph.D.
The City College of New York, CUNY
Sarhan M. Musa, Ph.D.
Prairie View A&M University
Danny Kopec, Ph.D.
Brooklyn College, CUNY
MERCURY LEARNING AND INFORMATION
Dulles, Virginia / Boston, Massachusetts / New Delhi
Copyright © 2022 by MERCURY LEARNING AND INFORMATION LLC. All rights reserved.
This publication, portions of it, or any accompanying software may not be reproduced in any way, stored in a retrieval system of any type, or transmitted by any means, media, electronic display or mechanical display, including, but not limited to, photocopy, recording, Internet postings, or scanning, without prior permission in writing from the publisher.
Publisher: David Pallai
MERCURY LEARNING AND INFORMATION
22841 Quicksilver Drive
Dulles, VA
[email protected]
(800) 232-0223
S. Lucci, S. Musa, and D. Kopec. ARTIFICIAL INTELLIGENCE IN THE 21ST CENTURY, Third Edition. ISBN: 978-1-68392-223-0
The publisher recognizes and respects all marks used by companies, manufacturers, and developers as a means to distinguish their products. All brand names and product names mentioned in this book are trademarks or service marks of their respective companies. Any omission or misuse (of any kind) of service marks or trademarks, etc. is not an attempt to infringe on the property of others.
Library of Congress Control Number: 2022932251
22 23 24 3 2 1    This book is printed on acid-free paper in the United States of America.
Our titles are available for adoption, license, or bulk purchase by institutions, corporations, etc. For additional information, please contact the Customer Service Dept. at (800) 232-0223 (toll free).
All of our titles are available in digital format at academiccourseware.com and other digital vendors. Companion disc files for this title are available by contacting [email protected]. The sole obligation of MERCURY LEARNING AND INFORMATION to the purchaser is to replace the disc, based on defective materials or faulty workmanship, but not based on the operation or functionality of the product.
To the memory of my parents, Louis and Connie Lucci, both of whom always encouraged my education.
Stephen Lucci
To the memory of my parents, Mahmoud Musa and Fatmeh Hussein, both of whom are always on my mind and forever in my heart.
Sarhan Musa
Contents
Preface
Acknowledgments (Three Editions)
Credits for the 3rd Edition
PART I: INTRODUCTION
Chapter 1 Overview of Artificial Intelligence
1.0 Introduction
1.0.1 What is Artificial Intelligence?
1.0.2 What is Thinking? What is Intelligence?
1.1 The Turing Test
1.1.1 Definition of the Turing Test
1.1.2 Controversies and Criticisms of the Turing Test
1.2 Strong AI versus Weak AI
1.3 Heuristics
1.3.1 The Diagonal of a Rectangular Solid: Solving a Simpler, but Related Problem
1.3.2 The Water Jug Problem: Working Backward
1.4 Identifying Problems Suitable for AI
1.5 Applications and Methods
1.5.1 Search Algorithms and Puzzles
1.5.2 Two-Person Games
1.5.3 Automated Reasoning
1.5.4 Production Rules and Expert Systems
1.5.5 Cellular Automata
1.5.6 Neural Computation
1.5.7 Genetic Algorithms
1.5.8 Knowledge Representation
1.5.9 Uncertainty Reasoning
1.6 Early History of AI
1.6.1 Logicians and Logic Machines
1.7 Recent History of AI to the Present
1.7.1 Games
1.7.2 Expert Systems
1.7.3 Neural Computing
1.7.4 Evolutionary Computation
1.7.5 Natural Language Processing
1.7.6 Bioinformatics
1.8 AI in the New Millennium
1.9 Chapter Summary
PART II: FUNDAMENTALS
Chapter 2 Uninformed Search
2.0 Introduction: Search in Intelligent Systems
2.1 State-Space Graphs
2.1.1 The False Coin Problem
2.2 Generate-and-Test Paradigm
2.2.1 Backtracking
2.2.2 The Greedy Algorithm
2.2.3 The Traveling Salesperson Problem
2.3 Blind Search Algorithms
2.3.1 Depth First Search
2.3.2 Breadth First Search
2.4 Implementing and Comparing Blind Search Algorithms
2.4.1 Implementing a Depth First Search Solution
2.4.2 Implementing a Breadth First Search Solution
2.4.3 Measuring Problem-Solving Performance
2.4.4 Comparing dfs and bfs
2.5 Chapter Summary
Chapter 3 Informed Search
3.0 Introduction
3.1 Heuristics
3.2 Informed Search Algorithms (Part 1) – Finding Any Solution
3.2.1 Hill Climbing
3.2.2 Steepest-Ascent Hill Climbing
3.3 The Best-First Search
3.4 The Beam Search
3.5 Additional Metrics for Search Algorithms
3.6 Informed Search (Part 2) – Finding an Optimal Solution
3.6.1 Branch and Bound
3.6.2 Branch and Bound with Underestimates
3.6.3 Branch and Bound with Dynamic Programming
3.6.4 The A* Search
3.7 Informed Search (Part 3) – Advanced Search Algorithms
3.7.1 Constraint Satisfaction Search
3.7.2 AND/OR Trees
3.7.3 The Bidirectional Search
3.8 Chapter Summary
Chapter 4 Search Using Games
4.0 Introduction
4.1 Game Trees and Minimax Evaluation
4.1.1 Heuristic Evaluation
4.1.2 Minimax Evaluation of Game Trees
4.2 Minimax with Alpha-Beta Pruning
4.3 Variations and Improvements to Minimax
4.3.1 Negamax Algorithm
4.3.2 Progressive Deepening
4.3.3 Heuristic Continuation and the Horizon Effect
4.4 Games of Chance and the Expectiminimax Algorithm
4.5 Game Theory
4.5.1 The Iterated Prisoner’s Dilemma
4.6 Chapter Summary
Chapter 5 Logic in Artificial Intelligence
5.0 Introduction
5.1 Logic and Representation
5.2 Propositional Logic
5.2.1 Propositional Logic – Basics
5.2.2 Arguments in the Propositional Logic
5.2.3 Proving Arguments in the Propositional Logic Valid – A Second Approach
5.3 Predicate Logic – Introduction
5.3.1 Unification in the Predicate Logic
5.3.2 Resolution in the Predicate Logic
5.3.3 Converting a Predicate Expression to Clause Form
5.4 Several Other Logics
5.4.1 Second Order Logic
5.4.2 Non-Monotonic Logic
5.4.3 Fuzzy Logic
5.4.4 Modal Logic
5.5 Chapter Summary
Chapter 6 Knowledge Representation
6.0 Introduction
6.1 Graphical Sketches and The Human Window
6.2 Graphs and The Bridges of Königsberg Problem
6.3 Representational Choices
6.4 Production Systems
6.5 Object Orientation
6.6 Frames
6.7 Scripts and the Conceptual Dependency System
6.8 Semantic Networks
6.9 Associations
6.10 More Recent Approaches
6.10.1 Concept Maps
6.10.2 Conceptual Graphs
6.10.3 Baecker’s Work
6.11 Agents: Intelligent or Otherwise
6.11.1 A Little Agent History
6.11.2 Contemporary Agents
6.11.3 The Semantic Web
6.11.4 The Future – According to IBM
6.11.5 Author’s Perspective
6.12 Chapter Summary
Chapter 7 Production Systems
7.0 Introduction
7.1 Background
7.2 Basic Examples
7.3 The CarBuyer System
7.3.1 Advantages of Production Systems
7.4 Production Systems and Inference Methods
7.4.1 Conflict Resolution
7.4.2 Forward Chaining
7.4.3 Backward Chaining
7.5 Production Systems and Cellular Automata
7.6 Stochastic Processes and Markov Chains
7.7 Chapter Summary
PART III: KNOWLEDGE-BASED SYSTEMS
Chapter 8 Uncertainty in AI
8.0 Introduction
8.1 Fuzzy Sets
8.2 Fuzzy Logic
8.3 Fuzzy Inferences
8.4 Probability Theory and Uncertainty
8.5 Chapter Summary
Chapter 9 Expert Systems
9.0 Introduction
9.1 Background
9.1.1 Human and Machine Experts
9.2 Characteristics of Expert Systems
9.3 Knowledge Engineering
9.4 Knowledge Acquisition
9.5 Classic Expert Systems
9.5.1 Dendral
9.5.2 Mycin
9.5.3 Emycin
9.5.4 Prospector
9.5.5 Fuzzy Knowledge and Bayes’ Rule
9.6 Methods for Efficiency
9.6.1 Demon Rules
9.6.2 The Rete Algorithm
9.7 Case-Based Reasoning
9.8 Other Expert Systems
9.8.1 Systems for Improving Employment Matching
9.8.2 An Expert System for Vibration Fault Diagnosis
9.8.3 Automatic Dental Identification
9.8.4 More Expert Systems Employing Case-Based Reasoning
9.9 Chapter Summary
Chapter 10 Machine Learning: Part I – Neural Networks
10.0 Introduction
10.1 Machine Learning: A Brief Overview
10.2 The Role of Feedback in Machine Learning Systems
10.3 Inductive Learning
10.4 Learning with Decision Trees
10.5 Problems Suitable for Decision Trees
10.6 Entropy
10.7 Constructing a Decision Tree With ID3
10.8 Issues Remaining
10.9 Rudiments of Artificial Neural Networks
10.10 McCulloch-Pitts Network
10.11 The Perceptron Learning Rule
10.12 The Delta Rule
10.13 Backpropagation
10.14 Implementation Concerns
10.14.1 Pattern Analysis
10.14.2 Training Methodology
10.15 Discrete Hopfield Networks
10.16 Application Areas
10.17 Chapter Summary
Chapter 11 Machine Learning: Part II – Deep Learning
11.0 Introduction
11.1 Deep Learning Applications: A Brief Overview
11.2 Deep Learning Network Layers
11.3 Deep Learning Types
11.3.1 Multilayer Neural Network
11.3.2 Convolutional Neural Network (CNN)
11.3.3 Recurrent Neural Network (RNN)
11.3.4 Long Short-Term Memory Network (LSTM)
11.3.5 Recursive Neural Network (RvNN)
11.3.6 Stacked Autoencoders
11.3.7 Extreme Learning Machine (ELM)
11.4 Chapter Summary
Chapter 12 Search Inspired by Mother Nature
12.0 Introduction
12.1 Simulated Annealing
12.2 Genetic Algorithms
12.3 Genetic Programming
12.4 Tabu Search
12.5 Ant Colony Optimization
12.6 Chapter Summary
PART IV: ADVANCED TOPICS
Chapter 13 Natural Language Understanding
13.0 Introduction
13.1 Overview: The Problems and Possibilities of Language
13.1.1 Ambiguity
13.2 History of Natural Language Processing (NLP)
13.2.1 Foundations (1940s and 1950s)
13.2.2 Symbolic vs. Stochastic Approaches (1957–1970)
13.2.3 The Four Paradigms: 1970–1983
13.2.4 Empiricism and Finite-State Models
13.2.5 The Field Comes Together: 1994–1999
13.2.6 The Rise of Machine Learning
13.3 Syntax and Formal Grammars
13.3.1 Types of Grammars
13.3.2 Syntactic Parsing: The CYK Algorithm
13.4 Semantic Analysis and Extended Grammars
13.4.1 Transformational Grammar
13.4.2 Systemic Grammar
13.4.3 Case Grammars
13.4.4 Semantic Grammars
13.4.5 Schank’s Systems
13.5 Statistical Methods in NLP
13.5.1 Statistical Parsing
13.5.2 Machine Translation (Revisited) and IBM’s Candide System
13.5.3 Word Sense Disambiguation
13.6 Probabilistic Models for Statistical NLP
13.6.1 Hidden Markov Models
13.6.2 The Viterbi Algorithm
13.7 Linguistic Data Collections for Statistical NLP
13.7.1 The Penn Treebank Project
13.7.2 WordNet
13.7.3 Models of Metaphor in NLP
13.8 Applications: Information Extraction and Question Answering Systems
13.8.1 Question Answering Systems
13.8.2 Information Extraction
13.9 Present and Future Research (according to Charniak)
13.10 Speech Understanding
13.10.1 Speech Understanding Techniques
13.11 Applications of Speech Understanding
13.11.1 Dragon’s NaturallySpeaking System and Windows’ Speech Recognition System
13.11.2 CISCO’s Voice System
13.12 Chapter Summary
Chapter 14 Automated Planning
14.0 Introduction
14.1 The Problem of Planning
14.1.1 Planning Terminology
14.1.2 Examples of Planning Applications
14.2 A Brief History and a Famous Problem
14.2.1 The Frame Problem
14.3 Planning Methods
14.3.1 Planning as Search
14.3.2 Partially Ordered Planning
14.3.3 Hierarchical Planning
14.3.4 Case-Based Planning
14.3.5 A Potpourri of Planning Methods
14.4 Early Planning Systems
14.4.1 Strips
14.4.2 Noah
14.4.3 Nonlin
14.5 More Modern Planning Systems
14.5.1 O-Plan
14.5.2 Graphplan
14.5.3 A Potpourri of Planning Systems
14.5.4 A Planning Approach to Learning Systems
14.5.5 The SCI Box Automated Planner
14.6 Chapter Summary
PART V: THE PRESENT AND FUTURE
Chapter 15 Robotics
15.0 Introduction
15.1 History: Serving, Emulating, Enhancing, and Replacing Man
15.1.1 Robot Lore
15.1.2 Early Mechanical Robots
15.1.3 Robots in Film and Literature
15.1.4 Twentieth-Century Robots
15.2 Technical Issues
15.2.1 Robot Components
15.2.2 Locomotion
15.2.3 Path Planning for a Point Robot
15.2.4 Mobile Robot Kinematics
15.3 Applications: Robotics in the Twenty-First Century
15.4 Chapter Summary
Chapter 16 Advanced Computer Games
16.0 Introduction
16.1 Checkers: From Samuel to Schaeffer
16.1.1 Heuristic Methods for Learning in the Game of Checkers
16.1.2 Rote Learning and Generalization
16.1.3 Signature Table Evaluations and Book Learning
16.1.4 World Championship Checkers with Schaeffer’s Chinook
16.1.5 Checkers is Solved
16.2 Chess: The Drosophila of AI
16.2.1 Historical Background of Computer Chess
16.2.2 Programming Methods
16.2.3 Beyond the Horizon
16.2.4 Deep Thought and Deep Blue against Grandmaster Competition: 1988–1995
16.3 Contributions of Computer Chess to Artificial Intelligence
16.3.1 Search in Machines
16.3.2 Search in Man vs. Machine
16.3.3 Heuristics, Knowledge, and Problem-Solving
16.3.4 Brute Force: Knowledge vs. Search; Performance vs. Competence
16.3.5 Endgame Databases and Parallelism
16.3.6 Author Contributions
16.4 Other Games
16.4.1 Othello
16.4.2 Backgammon
16.4.3 Bridge
16.4.4 Poker
16.5 Go: The New Drosophila of AI?
16.5.1 The Stars of Advanced Computer Games
16.6 Chapter Summary
Chapter 17 Reprise
17.0 Introduction
17.1 Recapitulation—PART I
17.2 Prometheus Redux
17.3 Recapitulation—PART II: Present AI Accomplishments
17.4 IBM Watson-Jeopardy Challenge
17.5 AI in the 21st Century
17.6 Chapter Summary
PART VI: SECURITY AND PROGRAMMING (OPTIONAL)
Chapter 18 Artificial Intelligence in Security (Optional)
18.0 Introduction
18.1 Internet Protocol Security (IPSec)
18.2 Security Association (SA)
18.3 Security Policies
18.3.1 Security Policy Database (SPD)
18.3.2 Security Association Selectors (SA Selectors)
18.3.3 Combining of Security Associations
18.3.4 IPSec Protocol Modes
18.3.5 Anti-Replay Window
18.4 Secure Electronic Transactions
18.4.1 Business Requirements of SET
18.5 Intruders
18.6 Intrusion Detection
18.6.1 Intrusion Detection Techniques
18.7 Malicious Programs
18.7.1 Different Phases in the Lifetime of a Virus
18.8 Anti-Virus Scanners
18.8.1 Different Generations of Anti-Virus Scanners
18.9 Worms
18.10 Firewalls
18.10.1 Firewall Characteristics
18.10.2 Firewall Techniques to Control Access
18.10.3 Types of Firewalls
18.11 Trusted Systems
18.12 Chapter Summary
Chapter 19 Artificial Intelligence Programming Tools (Optional)
19.1 Programming in Logic (Prolog)
19.1.1 Differences between C/C++ and Prolog
19.1.2 How Does Prolog Work?
19.1.3 Milestones in Prolog Language Development
19.1.4 Clauses
19.1.5 Robinson’s Resolution Rule
19.1.6 Parts of a Prolog Program
19.1.7 Queries to a Database
19.1.8 How Does Prolog Solve a Query?
19.1.9 Compound Queries
19.1.10 The _ Variable
19.1.11 Recursion in Prolog
19.1.12 Data Structures in Prolog
19.1.13 Head and Tail of a List
19.1.14 Print all the Members of the List
19.1.15 Print the List in Reverse Order
19.1.16 Appending a List
19.1.17 Find Whether the Given Item is a Member of the List
19.1.18 Finding the Length of the List
19.1.19 Controlling Execution in Prolog
19.1.20 Cut Predicate
19.1.21 About Turbo Prolog
19.2 Python
19.2.1 Running Python
19.2.2 Pitfalls
19.2.3 Features of Python
19.2.4 Functions as First-Class Objects
19.2.5 Useful Libraries
19.2.6 Utilities
19.2.7 Testing Code
19.3 MATLAB
19.3.1 Getting Started and Windows of MATLAB
19.3.2 Using MATLAB in Calculations
19.3.3 Plotting
19.3.4 Symbolic Computation
19.3.5 MATLAB for Python Users
Appendices (The appendices are located on the companion files)
Appendix A. Example with CLIPS: The Expert System Shell
Appendix B. Implementation of the Viterbi Algorithm for Hidden Markov Chains
Appendix C. The Amazing Walter Shawn Browne
Appendix D. Applications and Data
Appendix E. Solutions to Selected Odd Exercises
Index
Preface
Perspective and Needs
Our view is that AI comprises people, ideas, methods, machines, and outcomes. First, it is people who make up AI. People have ideas, and these ideas become methods. Those ideas can be represented by algorithms, heuristics, procedures, or systems that are the backbone of computation; and finally, we have the production of those machines (programs), which we can call “outcomes.” Every outcome can be measured for its value, effectiveness, efficiency, etc.
We have found that existing AI books are often lacking in one or more of these areas. Without people there is no AI. Hence we have decided that it is important to “showcase” the people who have made AI successful through the human-interest boxes which are sprinkled throughout the text. From people come the ideas and the development of the methods that we present over the nineteen chapters of this book. AI and computer science are relatively young fields, compared to other sciences such as mathematics, physics, chemistry, and biology. Yet, AI is a discipline that is truly interdisciplinary, combining elements of many other fields. Machines/computers are the tools of AI researchers, allowing them to experiment, learn, and improve methods for problem-solving that can be applied in many interesting domains beneficial to humankind. And finally, not least, there are the outcomes, measurable as the results of applying AI to various problems and disciplines, which remind us that AI must also be accountable. In a number of places in our text you will find discussions of the distinction between “performance” and “competence.” Both are needed as the field of AI matures and advances.
Furthermore, students need hands-on experience with problem solving; that is, students need to get their hands “dirty” with the fundamentals of search techniques, fully explained in Chapters 2 through 4; the methods of logic in Chapter 5; and the role of knowledge representation in AI (Chapter 6). Chapter 7 sets the stage for learning about fuzzy logic (Chapter 8) and expert systems (Chapter 9).
Advanced methods such as neural networks, genetic algorithms, and deep learning are thoroughly presented in Chapters 10 through 12. Advanced topics such as natural language processing, automated planning, robotics, and advanced computer games are covered in Chapters 13 through 16. Chapter 17, Reprise, summarizes where you’ve been on your journey with us through AI, with a view to the future. Chapter 18 provides an overview and perspective on AI in security, while Chapter 19 provides a basic introduction to three common AI programming tools: Prolog, Python, and MATLAB.
Instructor Resources
The presentation is enhanced with several hundred fully worked examples, as well as over 300 figures and images, many in color. Companion files include extra material related to AI and students will also benefit from the significant number of solutions to exercises that have been provided. In addition, instructor’s resources include PowerPoint slides, additional solutions, sample exams, and a sample syllabus.
A Shared Vision
Writing this book has not been an overnight process. We also believe that our approaches to writing and developing material, while different in many respects, are complementary.
We believe that the composite work should provide a strong foundation for anyone interested in AI with sufficient opportunity to accrue knowledge, experience, and competence in the methods that define the field. We are fortunate that the authors and the publisher, David Pallai, president and founder of Mercury Learning and Information, have shared the same goals and vision for this book. Fundamental to this effort has been the agreement that the book should be balanced with theory and applications, accurate, pedagogically sound, and reasonably priced.
We hope that you will enjoy and learn from our efforts.
Stephen Lucci, The City College of New York
Sarhan Musa, Prairie View A&M University
Danny Kopec (Late), Brooklyn College (The City University of New York)
Acknowledgments (Three Editions)
Development of a book like this is not just a job. In some sense, it is representative of AI itself: like putting together the pieces of a large, complex puzzle.
During the spring and summer of 2010, Ms. Debra Luca was particularly helpful, adding some much-needed energy for the preparation and completion of our manuscript. In 2011, Ms. Sharon Vanek helped us with obtaining the rights for images and also provided the requisite energy for manuscript completion. Two graduate students in the computer science department at Brooklyn College, Shawn Hall and Sajida Noreen, were particularly helpful to us in the summer of 2011.
David Kopec was especially helpful in a number of critical junctures where he successfully and efficiently solved software problems.
We would like to mention students who contributed to this project in various capacities, roughly in the order in which we perceive their contributions: Dennis Kozobov, Daniil Agashiyev, Julio Heredia, Olesya Yefimzhanova, Oleg Yefimzhanov, Pjotr Vasilyev, Paul Wong, Georgina Oniha, Marc King, Uladzimir Aksionchykau, and Maxim Titley.
We would like to thank the administrative staff at the Brooklyn College Computer Science Department, including Camille Martin, Natasha Dutton, Audrey Williams, Lividea Jones, and Mr. Lawrence Goetz, our computer systems administrator for being particularly helpful. We are also grateful to Professor Graciela Elizalde-Utnick for allowing us to continue to work at the Center for Teaching. Mark Gold, VP of Information Technology, was particularly helpful in providing computer equipment.
We would also like to thank all the AI researchers who co-operated with us by giving us the right to use their images for this book. We apologize in advance if anyone has been left out of our acknowledgment.
Professor Dragomir Radev, at the Department of Electrical Engineering and Computer Science at the University of Michigan, was particularly helpful in suggesting topics that should be included in Chapter 12, Natural Language Processing.
Noteworthy assistance came from the following individuals: Harun Iftikhar, by developing a few sections of Chapter 12; Dr. Christina Schweikert, St. John’s University, with the preparation and editing of Chapter 13, Automated Planning; Erdal Kose of Brooklyn College, for Section 3.7.3 on the Bidirectional Search; and Edgar Troudt for material in Chapter 6 on Baecker’s work (Section 6.11.3).
Other colleagues at Brooklyn College who have been supportive include Professors Keith Harrow, James Cox, Neng-Fa Zhou, Gavriel Yarmish, Noson Yanofsky, David Arnow, Ronald Eckhardt, and Myra Kogen. Professor Jill Cirasella of the Brooklyn College Library has always been very helpful with research assistance and development of a history of computer games. We also thank Professor Aaron Tenenbaum, chairman of the Department of Computer Science at Brooklyn College for many years, for hiring the late Danny Kopec, encouraging this book, and providing some crucial advice. Professor Yedidyah Langsam, chairman of the Department of Computer Science at Brooklyn College, is acknowledged for enabling the teaching and working conditions whereby this book could be completed. Likewise, Professors James Davis and Paula Whitlock are thanked for encouraging him to be the director of the Center for Teaching at Brooklyn College between 2008 and 2010, facilitating completion of the text.
Stephen Lucci would like to acknowledge the excellent education I received from The City College of New York and later from The CUNY Graduate School and University Center. Many years ago my academic mentor, Prof. Michael Anshel, displayed almost infinite patience in guiding my dissertation research. He taught me the importance of “thinking outside the box” – i.e., there is often a relationship between seemingly unrelated topics in computer science. Prof. Gideon Lidor was a teacher who taught me, early in my career, the value of excellence in the classroom. Prof. Valentin Turchin always respected my competence. I think of Prof. George Ross as my administrative mentor. He helped to provide my first employment in academe as an instructor before having obtained my Ph.D. I served as deputy chair of the CCNY Computer Science Department for many years under his auspices, work experience that prepared me later for my six-year stint as department chairperson. He was always supportive in my career advancement. I would also like to thank Prof. Izidor Gertner who always had respect for the quality of my writing. I would also like to acknowledge Dr. Gordon Bassen and Dr. Dipak Basu who have been close friends and colleagues ever since our doctoral student days. And a heartfelt “Thank You” is extended to the many students in my classrooms who have inspired and educated me during these past years.
Writing a textbook is both a challenging and (at times) an exasperating enterprise. Along the way numerous individuals have provided much-appreciated assistance. It is my pleasure at this juncture to thank those individuals.
Tayfun Pay supplied technical expertise early in our work. The chessboards in Chapter 2 along with the many search trees in Chapters 2 and 4 were drawn by him and the Three Wise Men figure in Chapter 5 benefitted from his artistic flair.
Later in the text Jordan Tadeusz provided a much-needed infusion of talent and hard work. He is responsible for many of the excellent figures in Chapters 10 and 11. He also “worked magic” with the scores of vector equations in Chapter 10.
Junjie Yao, Rajesh Kamalanathan, and Young Joo Lee helped us meet an early deadline. Nadine Bennett was instrumental in putting the finishing touches on Chapters 4 and 5. Ashwini Harikrishnan (Ashu) lent technical assistance later in this project. Ashu also helped “finesse” some of the figures during the editing process. The following students also contributed their time and talent to this text: Anuthida Intamon, Shilpi Pandey, Moni Aryal, Ning Xu, and Ahmet Yuksel. And finally, I wish to thank my sister Rosemary for her friendship.
In the second edition, we acknowledge Daniil Agashiyev for his contributions to sections in Chapters 13 and 14 on metaphor and SCIBox, respectively. Sona Brambhatt permitted use of segments of her Master’s thesis on speech understanding that were revised and condensed by Mimi-Lin Gao. She also contributed sections on robotic applications (ASIMO) and the Lovelace Project. Peter Tan helped develop sections on robotic applications including Big Dog, Cog, and Google Car. He also obtained rights for many images which appear in the new edition. Oleg Tosic prepared the applications box on the CISCO Voice System. Chris Pileggi indirectly helped with a number of new exercises. We also wish to thank the following students: Alejandro Morejon, Juan Emmanuel Sanchez, Ebenezer Reyes, and Feiyu Chen for typing Chapter 10 on Machine Learning under great time constraints. Additionally, Alan Mendez drew the figures for A Robotic Classroom and Umbrella Balancing in that chapter.
In the third edition, Sarhan Musa would like to express his deepest thanks to his family: his wife Lama and his children Mahmoud, Ibrahim, and Khalid, for their constant support, love, and patience. In addition, we are very pleased that David Pallai, founder and president of Mercury Learning, Inc., encouraged and supported our efforts to produce another revision of this text. We are also fortunate to be surrounded by a number of faculty and students who assisted in identifying errors in our first edition and in the development of new content.
Credits for the 3rd Edition
Chapter 1
Turing Statue © Guy Erwood/ Shutterstock.com
Fig. 1.0 Genetic Code © Bruce Riff/Shutterstock.com
Fig. 1.8 Human brain by X-rays © Dim Dimich/ Shutterstock
Fig. 1.23 Analytical Engine © Mirko Tobias, Utrecht University
Fig. 1.25 Robot Car Assembly © Boykov/ Shutterstock.com
John McCarthy (Stanford University, CIS Department)
John Conway (Courtesy Thane Plambeck)
Sherry Turkle (Courtesy Sherry Turkle, Director MIT Initiative on Technology and Self)
Chapter 2
Edsger Dijkstra (Courtesy Hamilton Richards, CIS Department, University of Texas at Austin)
Donald Knuth (Case Alumni Association and Foundation 2010, Flickr, and http://www.casealum.org/view.image?Id=1818)
Chapter 3
Fig. 3.0 The Climber © Nubephoto
Fig. 3.1 Brooklyn College to Yankee Stadium (Mapquest.com)
Fig. 3.2 Brooklyn College to Yankee Stadium (Yahoo Maps)
Fig. 3.6 (a, b, c) Hill Climbing: (a) Foothills Problem, (b) Plateau Problem, (c) Ridge Problem (Designed by Oleg Yefimzhanov, Brooklyn College)
Figs. 3.28 and 3.29 Bidirectional Search (Designed by Erdal Kose, Brooklyn College)
Judea Pearl (Courtesy Judea Pearl, UCLA, Computer Science Department)
Chapter 4
Two Game Figurines © Melanie Kintz
Fig. 4.0 Prisoner’s Dilemma © Vladmir V. Georgievskiy
Fig. 4.1 Chinatown Chicken © DenisNata and Tham Ying Kit
Richard Korf (Courtesy Richard E. Korf, UCLA, Computer Science Department)
Dana Nau (Courtesy Dana S. Nau, University of Maryland, Computer Science Department)
Chapter 5
Fig. 5.0 Puzzle © aroas
Nils Nilsson (Courtesy Nils Nilsson, Stanford University, Computer Science Department)
Chapter 6
Figs. 6.0, 6.20(a,b,c,d) (Courtesy IROBOT Corporation)
Figs 6.12a and 6.12b (Courtesy Roger Schank, CEO/Founder Socratic Arts)
Figs. 6.5 (repeated as Fig. 15.2) and 6.6 based on http://mathworld.wolfram.com/KoenigsbergBridgeProblem.html (Kåhre, J., “Königsberg”), January 15, 2008.
Fig. 6.13 Ross Quillian’s Semantic Networks (in Minsky, Marvin, ed., Semantic Information Processing, Fig.: “The semantic networks for the word plant”, © 1969 Massachusetts Institute of Technology, by permission of The MIT Press.)
Donald Michie (Courtesy Jack Walas, from The University of Maine 1988 Spring Symposium on Artificial Intelligence and Intelligent Tutoring Systems)
Hubert Dreyfus (Courtesy Genevieve Dreyfus)
Marvin Minsky (Ntávkav, Flickr, July 11, 2009)
Rodney Brooks (Courtesy Rodney Brooks, MIT)
Text Passage on Agents: (http://www.cs.ubc.ca/labs/lci.home.html) 3/1/2011 (Permission Jim Little, Laboratory for Computational Intelligence, University of British Columbia)
Chapter 7
Fig. 7.0 What If Problem Solving © marekuliasz
Table 7.1 The CarBuyer Database (Designed by Oleg Yefimzhanov)
Herbert Simon (Courtesy James L. McClelland, Stanford University, Department of Psychology)
John Conway (Courtesy John H. Conway, Princeton University)
Chapter 8
Fig. 8.0 Car with Automatic Traction Control © Matthew Jacques/ Shutterstock.com
Lotfi Zadeh (Courtesy Lotfi Zadeh, University of California, Berkeley)
Fig. 8.1 Lotfi Zadeh (Flickr by Tingandgang)
Chapter 9
Fig. 9.1 Brain on Integrated Circuit © takito
Figs. 9.5a and 9.5b Pseudo-examples from MYCIN (Courtesy Edward H. Shortliffe, President and CEO, AMIA)
Fig. 9.8 Satellite Image of an Oil Slick (NASA Satellite Image)
Edward Feigenbaum (Courtesy Edward Feigenbaum, Kumagai Professor of Computer Science Emeritus, Stanford University)
Douglas Lenat (Courtesy Douglas Lenat, President and CEO, Cycorp, Inc.)
Edward H. Shortliffe (Courtesy Edward H. Shortliffe, President and CEO, AMIA)
Janet L. Kolodner (Courtesy Janet L. Kolodner, Georgia Tech University)
Chapter 10
Fig 10.0 Classroom (Alan Mendez)
Fig 10.2 Balancing an Umbrella (Alan Mendez)
Fig. 10.12 Colored Ticker Board on Black © AshDesign
Figs 10.71, 10.72 (Courtesy James A. Ackles, President, LBS Capital Management, Inc.)
John Hopfield (Courtesy John Hopfield, Professor Emeritus, Carl Icahn Laboratory, Princeton University)
Donald Michie (Courtesy University of Edinburgh, School of Informatics)
Chapter 11
Fig. 11.1 Colored Ticker Board on Black © AshDesign
Fig. 11.13 Human inspectors © cognex
Fig. 11.14 In-sight D900 © cognex
Figs 11.60, 11.61 (Courtesy James A. Ackles, President, LBS Capital Management, Inc.)
John Hopfield (Courtesy John Hopfield, Professor Emeritus, Carl Icahn Laboratory, Princeton University)
Donald Michie (Courtesy University of Edinburgh, School of Informatics)
Chapter 12
Fig. 12.0 Business Team holding different currency symbols © Ioannis Pantzi
Fig. 12.1a Simulated Annealing © sima
Fig. 12.1b Molecular Structure © Master3D
Tabu Image 1 Giant Tiki http://flickr.com/photos/hibiscus/131769533/ © Mariko Matsumura
Tabu Image 2 kapu_ki’i at National Historic Park Hawaii http://flickr.com/photos/15104132@N08/1570594321 © Guy A. Hebert
Painting of HMS Beagle at Tierra del Fuego by Conrad Martens (1831–1836), from The Illustrated Origin of Species by Charles Darwin, abridged and illustrated by Richard Leakey, ISBN 0-571-14586-8.
John H. Holland (Courtesy of John H. Holland, University of Michigan, Computer Science and Psychology Departments)
Chapter 13
Fig. 13.0 Communication Icons © artizarus
Fig. 13.1 The Chomsky Hierarchy, based on Jurafsky and Martin (2008, p. 530), Speech and Language Processing, 2nd edition, Prentice Hall.
Fig. 13.3 From Charniak, E. 2006. Why natural-language processing is now statistical natural language processing. In Proceedings of AI at 50, Dartmouth College, July 13–15.
MARGIE and SAM Examples (with permission Roger Schank, Founder / CEO Socratic Arts)
Figs. 13.4 a,b,c Interactions with EasyAsk (Courtesy Larry Harris, Founder/CEO EasyAsk Corp.)
Section 13.7.3 Models of Metaphor in NLP (Daniil Agashiyev)
Section 13.10 Speech Understanding (Sona Brahmbhatt and Mimi-Lin Gao)
Section 13.11.2 Dragon Systems and Windows Speech Recognition System (Sona Brahmbhatt and Mimi-Lin Gao)
Section 13.11.3 CISCO Voice System (Oleg Tosic)
Eugene Charniak (Courtesy Eugene Charniak, Brown University, Computer Science Department)
Roger Schank (Courtesy Roger Schank, Founder / CEO Socratic Arts)
James Maisel (Courtesy James Maisel, Retina Group of New York, Director, ZYDOC)
Larry Harris (Courtesy Jack Walas, from The University of Maine 1988 Spring Symposium on Artificial Intelligence and Intelligent Tutoring Systems)
Chapter 14
Fig. 14.1 Futuristic space station © Andreas Meyer
Fig. 14.3 Smith, S. J., Nau, D., and Throop, T. Computer bridge: A big win for AI planning. AI Magazine 19(2), 1998, Fig. 5, page 99. American Association for Artificial Intelligence, Menlo Park, CA. With permission to reprint.
Fig. 14.5 Designed by Kendel Campbell (Brooklyn College)
Section 14.5.5 SCIBox Automated Planner (Oleg Tosic)
Fig. 14.6 Maintainability of a Pipe Motion (permission Liangjun Zhang, Stanford University)
From Zhang, L., Huang, X., Kim, Y. J., and Manocha, D. 2008. D-plan: Efficient collision-free path computation for part removal and disassembly. Computer-Aided Design & Applications 5(1–4).
Fig. 14.7 Robot Car Assembly Line © Rainer Plendl
Fig. 14.8 Negotiating a dynamic environment (permission M. Lau)
From Lau, M. and Kuffner, J. J. 2005. Behavior planning for character animation. In Proceedings of the 2005 ACM Siggraph/Eurographics symposium on computer animation, 271–280. Los Angeles, California, July 29–31. New York, NY: ACM.
Fig. 14.9 Snapshot of the Blocks World (Redrawn by Christina Schweikert, St. John’s University)
Fig. 14.14 Practical Applications for O-Plan (Designed by Christina Schweikert)
Austin Tate (Courtesy Austin Tate, Director, AIAI, University of Edinburgh)
Frederick Hayes-Roth (Courtesy Frederick Hayes-Roth, Naval Postgraduate School)
Chapter 15
Fig 15.0 Urban Robot (Courtesy JPL, NASA)
Fig 15.11 Spiderbot (Courtesy JPL, NASA)
Fig 15.14 ASIMO (Courtesy Honda)
Fig 15.15 Jaemi The Humanoid Robot with Kids (Lisa-Joy Zgorski, National Science Foundation)
Fig. 15.17 New England Scene (Watercolor by Magdalena Kopec)
Table 15.1 Summary of Robotics from 1960–2010 (Avinash Jairam, Parker Rappaport, Yusif Alomeri, Mohammed Amin, Brian Ramdin)
Application Boxes Big Dog and Cog (Peter Tan)
Application Box ASIMO (Mimi-Lin Gao)
Sebastian Thrun (Courtesy Sebastian Thrun)
Chapter 16
Fig. 16.0 The Card Player (Oil Painting by Magdalena Kopec)
Fig. 16.3 Alpha-Beta Tree for Checkers Position (Erdal Kose, Brooklyn College)
Fig. 16.4 Second Series of Generalization Tests (Fig. 4 from Samuel, A. (1967). Some studies in machine learning using the game of checkers: Recent progress. IBM Journal of Research and Development 11, 601–617).
Figs. 16.5 and 16.6 Checkers is Solved (Permission Jonathan Schaeffer, Vice-Provost and Associate Vice-President, Information Technology, University of Alberta)
Fig. 16.19 Chess Rating vs. Depth of Search (Adapted from Hsu, F. H., Anantharaman, T., Campbell, M., & Nowatzyk, A. (1990). A Grandmaster Chess Machine. Scientific American, Vol. 263, No. 4, October 1990, 44–50).
Table 16.2 Olyesa Yefimzhanova, Brooklyn College
Garry Kasparov vs. Deeper Blue (Courtesy Murray Campbell, IBM)
Jonathan Schaeffer (Courtesy Jonathan Schaeffer, University of Alberta)
Chinook vs. Marion Tinsley, Checkers World Championship Match, 1992 (Courtesy Jonathan Schaeffer)
Hans Berliner (Courtesy Hans Berliner, Carnegie Mellon University)
Monty Newborn (Courtesy Monty Newborn, McGill University)
David Levy and Jaap van den Herik (Courtesy Frederic Friedel, www.chessbase.com)
Chapter 17
Fig. 17.0 Wlodek Zadrozny at CCNY addressing faculty (Courtesy Jerry Moy, IBM)
Fig. 17.1 The Wright Brothers © Brad Whitsitt
Fig 17.5 Google Car (Steve Jurvetson via Wikimedia)
Fig. 17.6 The IBM WATSON Team at CCNY (Courtesy Jerry Moy, IBM)
Fig. 17.7 Wlodek Zadrozny at CCNY (Courtesy Jerry Moy, IBM)
Fig. 17.8 Jerry Moy (Courtesy IBM)
Fig 17.9 Singularity from KurzweilAI.net homepage / Source: Shutterstock
Fig 17.10 Ray Kurzweil (Courtesy of Kurzweil Technologies, Inc.)
Fig 17.11 Moore’s Law (Courtesy Kurzweil.net)
David Ferrucci (Courtesy IBM)
Application Box Google Car (Peter Tan)
Part Openers
Part I
Cyber Love © photobank.kiev.ua
Part II
Technology head © Bruce Rolff
Part III
Military Robot from http://militarya.wikispaces.com/file/view/artificial-intelligence-45.jpg/149410871/artificial-intelligence
Part IV
Orange Automatic Robot © Maksim Dubinsky
Part V
Humans Dream Abstract © IR Stone
Appendix A
Clips Example (Olyesa Yefimzhanova, Brooklyn College)
Appendix B
Hidden Markov Chain Code in Java (Harun Iftikhar, Columbia University)
Appendix C
Walter Browne (Courtesy United States Chess Federation)
Solutions to Exercises
Chapter 12 Exercises 1, 3, 7, 9, 13 Daniil Agashiyev
Chapter 13 Exercises 3, 7 Pjotr Vasiljev
PART I
INTRODUCTION
Chapter 1 Overview of Artificial Intelligence
This chapter sets the stage for all that follows. It presents the history and early motivations behind Artificial Intelligence (AI) stemming from the 1956 Dartmouth Conference.
Notions of thinking and intelligence lead to a discussion of the Turing Test and the various controversies and criticisms surrounding it. This sets the stage for distinguishing Strong AI from Weak AI. Integral to any classical view is interest in how humans solve problems and the heuristics they use. From this background and perspective, it becomes feasible to identify problems suitable for AI. Various recognized disciplines and approaches to AI, such as search, neural computation, fuzzy logic, automated reasoning, and knowledge representation, are then presented. This discussion transitions to a review of the early history of AI, to more recent domains and problems, and to considerations of what lies ahead.
CHAPTER 1
OVERVIEW OF ARTIFICIAL INTELLIGENCE
1.0 Introduction
1.1 The Turing Test
1.2 Strong AI versus Weak AI
1.3 Heuristics
1.4 Identifying Problems Suitable for AI
1.5 Applications and Methods
1.6 Early History of AI
1.7 Recent History of AI to the Present
1.8 AI in the New Millennium
1.9 Chapter Summary
Alan Turing (Photo ©Guy Erwood/Shutterstock.com)
Early man had to contend with nature using such tools and weapons as the wheel and fire. In the 15th century, Gutenberg’s invention of the printing press brought widespread changes to people’s lives. In the 19th century, the Industrial Revolution exploited natural resources to develop power, which facilitated manufacturing, transportation, and communication. The 20th century evidenced man’s continuing advancement through air and space exploration, the invention of the computer, and microminiaturization leading to personal computers, the Internet, the World Wide Web, and smart phones. The last 60 years have witnessed the nascence of a world with an abundance of data, facts, and information that must be converted into knowledge (an instance being the data contained in the genetic code for humans; see Figure 1.0). This chapter provides the conceptual framework for the discipline of Artificial Intelligence, its successful application areas and methods, recent history, and future prospects.
Figure 1.0
Genetic code (Photo ©Bruce Rolff/Shutterstock.com)
1.0 INTRODUCTION
Artificial intelligence (AI) means different things to different people. Some believe that AI is synonymous with any form of intelligence achieved by nonliving systems; they maintain that it is not important if this intelligent behavior is not arrived at via the same mechanisms on which humans rely. For others, AI systems must be able to mimic human intelligence. No one would argue with the premise that to study AI or to implement AI systems, it is helpful if we first understand how humans accomplish intelligent behavior; that is, we must understand activities that are deemed intelligent in an intellectual, scientific, psychological, and technical sense. For example, if we want to build a robot capable of walking like a human, then we must first understand the process of walking from each of those perspectives. People, however, do not accomplish locomotion by constantly stating and following a prescribed set of formal rules that explain how to take steps. In fact, the more human experts are asked to explain how they achieve their level of performance in any discipline or endeavor, the more they are likely to fail. For example, when Israeli fighter pilots were asked to explain their prowess for flying, their performance actually declined. 1 Expert performance stems not from constant, conscious analysis but from the subconscious levels of the mind. Imagine trying to drive on an expressway during rush hour and needing to consciously weigh each vehicle-control decision.
Consider the story of the professor of mechanics and the unicyclist. 2 If the professor is asked to cite principles of mechanics as he attempts to ride the unicycle and bases his success on the unicycle on knowing those principles, he is doomed to failure. Likewise, if the unicyclist attempts to learn the laws of mechanics and apply them while he performs his craft, he, too, is destined for failure and perhaps a tragic accident. Human skill and expertise in many disciplines seems to be developed and stored in the subconscious, rather than being available upon explicit request from memory or first principles.
1.0.1 What is Artificial Intelligence?
In everyday parlance, the term artificial means synthetic (i.e., man-made) and generally has a negative connotation as being a lesser form of a real thing. Artificial objects are often superior to real or natural objects, however. Consider, for example, an artificial flower, an object made of silk and wire and arranged to resemble a bud or blossom. This kind of flower has the advantage of not requiring sunshine or water for its sustenance, so it provides practical decoration for a home or business. Its feel and fragrance are arguably inferior to those of a natural flower, but an artificial flower can look very much like the real thing.
Another example is artificial light produced by candles, kerosene lanterns, or electric light bulbs. Artificial light is superior to natural light because it is always accessible. Sunlight, obviously, is available only when the sun appears in the sky.
Finally, consider the advantages provided by artificial motion devices—such as automobiles, trains, planes, and bicycles—in terms of speed and durability when compared with running, walking, and other natural forms of transportation, such as horseback riding. The advantages of artificial forms of transportation are tempered by stark drawbacks—our planet is paved with ubiquitous highways, our atmosphere is laden with vehicle exhaust, and our peace of mind (and often our sleep) is interrupted by the din of aircraft. 3
Like artificial light, flowers, and transportation, AI is not natural but man-made. To identify the advantages and drawbacks of AI, you must first understand and define intelligence.
1.0.2 What is Thinking? What is Intelligence?
A definition for intelligence is perhaps more elusive than a definition for the term artificial. R. Sternberg, in a text on human consciousness, provided the following useful definition:
Intelligence is the cognitive ability of an individual to learn from experience, to reason well, to remember important information, and to cope with the demands of daily living. 4
We are all familiar with questions on standardized tests asking us to provide the next number in a given sequence, as in: 1, 3, 6, 10, 15, 21, __ ?
You probably noticed that the gap between successive numbers increases by one; for example from 1 to 3, the increase is two, whereas from 3 to 6, it is three, and so on. The correct response is 28. Such questions are designed to measure our proficiency at identifying salient features in patterns. We can detect patterns by learning from experience.
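The rule just described, that each successive gap grows by one, can be checked with a short Python sketch (the function name and its inputs are illustrative, not from the text):

```python
def next_term(seq):
    """Predict the next term of a sequence whose successive gaps grow by one."""
    gap = seq[1] - seq[0]               # initial gap (2 for 1, 3, 6, ...)
    for prev, cur in zip(seq, seq[1:]):
        assert cur - prev == gap        # verify the pattern holds so far
        gap += 1                        # each gap is one larger than the last
    return seq[-1] + gap                # extend the pattern one more step

print(next_term([1, 3, 6, 10, 15, 21]))  # prints 28
```

Detecting such a pattern from examples, rather than being told the rule, is a small instance of learning from experience.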
Try your luck with the following:
Now that we have settled on a definition for intelligence, you might ask the following questions:
1.How do you decide if someone (something?) is intelligent?
2.Are animals intelligent?
3.If animals are intelligent, how do you measure their intelligence?
Most of us can answer the first question easily. We gauge the intelligence of other people many times each day by interacting with them — by making comments or posing questions, and then observing their responses. Although we have no direct access to someone else’s mind, we feel confident that this indirect view provided by questions and answers gives us an accurate assessment of internal cerebral activity.
If we adhere to this conversational approach to measuring intelligence, how do we address the question of animal intelligence? If you have a pet, you have probably answered this question for yourself. Dogs seem to remember people they haven’t seen for a month or two, and can find their way home after getting lost. Cats often show excitement during the opening of cans at dinner time. Is this simply a matter of a Pavlovian reflex or do cats consciously associate the sound of cans opening with the impending pleasure of dinner?
An intriguing anecdote regarding animal intelligence is that of Clever Hans, a horse in Berlin, Germany, circa 1900, which was purportedly proficient in mathematics (see Figure 1.1).
Figure 1.1
Clever Hans—A horse doing calculus?
Audiences were transfixed as Hans added numbers or calculated square roots. Later, people observed that Hans did not perform as well without an audience. In fact, Hans was talented in identifying human emotions, not mathematics. Horses generally have acute hearing. As Hans came closer to a correct answer, audience members became more excited and their heart rates accelerated. Perhaps it was Hans’ uncanny ability to detect these changes that enabled him to obtain correct answers. 5 You might be reluctant to attribute Clever Hans’ actions to intelligence, but consult Sternberg’s earlier definition before reaching a conclusion.
Some creatures are intelligent only in groups. For example, ants are simple insects and their isolated behavior would hardly warrant inclusion in a text on AI. Ant colonies, however, exhibit extraordinary solutions to complex problems, such as finding the optimal route from a nest to a food source, carrying heavy objects, and forming bridges. A collective intelligence arises from effective communication among individual insects. More will be said about emergent intelligence and swarm intelligence in our discussion of advanced search methods in Chapter 12, “Search Inspired by Mother Nature.”
Brain mass and brain-to-body mass ratios are often regarded as indications of animal intelligence. Dolphins compare favorably with humans in both metrics. Breathing in dolphins is under voluntary control, which could account for excess brain mass and for the interesting fact that alternate halves of the dolphin brain take turns sleeping. Dolphins score well on animal self-awareness tests such as the mirror test, in which they recognize that the image in the mirror is actually their image. They can also perform complex tricks, as visitors to Sea World can testify. This illustrates the ability of dolphins to remember and perform complex sequences of physical motions. The use of tools is another litmus test for intelligence and is often used to separate Homo erectus from earlier ancestors of human beings. Dolphins share this trait with humans. For example, dolphins use deep-sea sponges to protect their snouts while foraging for food. It becomes clear that intelligence is not an attribute possessed by humans alone. Many living forms possess intelligence to some extent. You should ask yourself: “Do you believe that being alive is a necessary precondition for possessing intelligence?” and “Is it possible for inanimate objects, for example, computers, to be intelligent?” The declared goal of Artificial Intelligence is to create computer software and/or hardware systems that exhibit thinking comparable to that of humans, in other words, to display characteristics usually associated with human intelligence. A pivotal question is, “Can machines think?” More generally, you might ask, “Does a person, animal, or machine possess intelligence?”
At this juncture, it is wise to underscore the distinction between thinking and intelligence. Thinking is the facility to reason, analyze, evaluate, and formulate ideas and concepts. Not every being capable of thinking is intelligent. Intelligence is perhaps akin to efficient and effective thinking. Many people approach this issue with biases, saying, “Computers are made of silicon and power supplies, and therefore are not capable of thinking,” or, at the other extreme, “Computers perform much faster than humans and therefore must be more intelligent than humans.” The truth is most likely somewhere between these two extremes.
As we have discussed, different animal species possess intelligence to varying degrees. We will expound on software and hardware systems that have been developed by the Artificial Intelligence community that also possess intelligence to varying degrees. We are not sufficiently concerned with measuring animal intelligence to develop standardized IQ tests for animals. However, we are interested in a test to ascertain the existence of machine intelligence.
Perhaps Raphael 6 put it best:
Artificial Intelligence is the science of making machines do things that would require intelligence if done by man.
1.1 THE TURING TEST
The first two questions posed in the last section have already been addressed: How do you determine intelligence, and are animals intelligent? The answer to the second question is not necessarily “yes” or “no” — some people are smarter than others and some animals are smarter than others. The question of machine intelligence is equally problematic.
Alan Turing sought to answer the question of intelligence in operational terms. He wanted to separate functionality (what something does) from implementation (how something is built).
SIDEBAR
Abstraction
Abstraction is a strategy in which the implementation (e.g., internal workings) of an object or concept are ignored so that you gain a clearer picture of the artifact and its relation to the outside world. In other words, you can treat something as a black box and be concerned only with inputs and outputs from that object (see Figure 1.2).
Figure 1.2
Inputs to and outputs from a black box.
Abstraction is often a useful (and necessary) tool. For example, if you want to learn how to drive, it is probably a good idea to treat the car as a black box. Rather than beginning your efforts with a course in automatic transmissions and power trains, you concentrate on the workings of the system’s inputs—the gas pedal, brakes, and turn signals, for example—and its outputs—moving forward, stopping, and turning left and right.
Courses on data structures also use abstraction: if you wish to understand how a stack behaves, you concentrate on basic stack operations such as pop (deleting an item) and push (inserting an item) rather than becoming bogged down in the details of how a list is constructed (i.e., as a linear or circular list, or with linked vs. contiguous allocation of space).
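That stack abstraction can be sketched in a few lines of Python; the class below is a minimal illustration, and the list-based internal representation is our choice, not prescribed by the text:

```python
class Stack:
    """A stack treated as a black box: only push and pop are exposed."""

    def __init__(self):
        self._items = []          # internal representation, hidden from the user

    def push(self, item):
        """Insert an item on top of the stack."""
        self._items.append(item)

    def pop(self):
        """Delete and return the top item."""
        return self._items.pop()

s = Stack()
s.push("a")
s.push("b")
print(s.pop())  # prints b: last in, first out
```

Whether `_items` is a Python list, a linked list, or a fixed array is invisible to the caller, which is exactly the point of the abstraction.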
1.1.1 Definition of the Turing Test
Alan Turing 7 proposed two imitation games. In an imitation game, one person or entity behaves as if he were another. In the first, a person (called an interrogator) is in a room with a curtain that runs across the center of the room. On the other side of the curtain is a person, and the interrogator must determine whether it is a man or a woman. The interrogator (whose gender is irrelevant) accomplishes this task by asking a series of questions. The game assumes that the man will perhaps lie in his responses, but the woman is always truthful. In order that the interrogator cannot determine gender from voice, communication is via computer rather than through spoken words (see Figure 1.3). If it is a man on the other side of the curtain, and he is successful in deceiving the interrogator, then he wins the imitation game. In Turing’s original format for this test, both a man and a woman were seated behind a curtain and the interrogator had to identify both correctly (Turing might have based this test on a game that was popular during this period. This same game could also have been the impetus behind his machine intelligence test).
Figure 1.3
The first Turing imitation game.
Erich Fromm wrote 8 that men and women are equal, but not necessarily the same. For instance, the genders might differ in their knowledge of colors, flowers, or the amount of time spent shopping.
What does distinguishing a man from a woman have to do with the question of intelligence? Turing understood that there might be different types of thinking, and it is important to both understand these differences and to be tolerant of them. Figure 1.4 shows the second version of the Turing test.
Figure 1.4
The second Turing imitation game (drawing by the authors).
This second game is more appropriate to the study of AI. Once again, an interrogator is in a room with a curtain. This time, a computer or a person is behind the curtain. Here, the machine plays the role of the male and could find it convenient on occasion to lie. The person, on the other hand, is consistently truthful. The interrogator asks questions, and then evaluates the responses to determine whether she is communicating with a person or a machine. If the computer is successful in deceiving the interrogator, it passes the Turing test, and is thereby considered intelligent. As we all know, machines are many times faster than humans in performing arithmetic calculations. A human would have little trouble in discerning that a computer, rather than a person, is behind the curtain, if the result of a Taylor Series approximation to a trigonometric function is provided within microseconds. Naturally, by mere chance, the computer could successfully deceive the interrogators during any Turing test; to be a valid barometer for intelligence, this test should be executed many times. Again, in Turing’s original version of this test, both a person and a computer were behind the curtain, and the interrogator had to identify each correctly.
SIDEBAR
The Turing Test
No computer system has ever passed the Turing test. However, in 1990, the philanthropist Hugh Gene Loebner underwrote a contest designed to implement the Turing test. The Loebner Prize of $100,000 and a gold medal is to be awarded to the first computer to pass the Turing test. Meanwhile, each year a prize of approximately $2000 and a bronze medal is awarded to the computer that performs best in the contest.
What questions would you propose for the Turing test? Consider the following examples:
•What is (1,000,017)½? Calculations such as this one are probably not a good idea. Recall that the computer attempts to deceive the interrogator. Rather than responding in fractions of a microsecond with the correct answer, it would intentionally take longer and perhaps make a mistake, “knowing” that people are not adept at these sorts of calculations.
•What are the current weather conditions? You might be tempted to ask about the weather, assuming that a computer cannot peek outside the window. Computers are usually connected to the Web, however, and can connect to weather websites before responding.
•Are you afraid of dying? Because it is difficult for a computer to feign human emotions, you might propose this question and others such as “How does the dark make you feel?” or “What does it feel like to be in love?” Recall, however, that you are trying to determine intelligence, and human emotions might not be a valid barometer for intelligence.
Turing anticipated many objections to the idea of machine intelligence in his original paper. 7 One is the so-called “head-in-the-sand” objection. It was believed that mankind’s ability to think placed humans at the apex of creation, and admitting the possibility of computer thought would challenge this lofty perch. Turing believed that this concern was more cause for consolation than for refutation. Another objection he anticipated is the theological objection. Many believed that it is a person’s soul that enables thinking, and that we would be usurping God’s authority by creating machines with this capability. Turing rebutted this argument by proposing that we would merely be carrying out God’s will by preparing vessels awaiting endowment of souls. Finally, we mention the objection of Lady Lovelace (often referred to in the literature as the first computer programmer). Commenting on the Analytical Engine, she argued that it would be impossible for this mere machine to surprise us. She was reiterating the belief held by many that a computer is not capable of performing any activity for which it is not preprogrammed. Turing countered that machines surprised him all the time, and he maintained that proponents of this objection subscribe to the belief that human minds can instantaneously deduce all consequences of a given fact or action. The reader is referred to Turing’s original paper 7 for a collection of these objections, as well as several others. The next section covers additional noteworthy criticisms of the Turing test for intelligence.
1.1.2 Controversies and Criticisms of the Turing Test
Block’s Criticism of the Turing Test
Ned Block argued that English text is encoded in ASCII, in other words, as a series of 0s and 1s inside a computer. 9 Hence, a particular Turing test, which is a series of questions and answers, can be stored as a very large number. For instance, assuming an upper bound on the length of a Turing test, the first three characters in tests that begin “Are you afraid of dying?” are stored as binary numbers, as shown in Figure 1.5.
Figure 1.5
Storing the beginning of a Turing test in ASCII code.
Suppose that a typical Turing test lasts an hour, and that about 50 questions are asked and 50 responses given during this time. The binary number corresponding to a test would be very long. Now suppose that a large database stores all Turing tests consisting of 50 questions or fewer and the reasonable responses. Passing the test could then be accomplished by table lookup. Granted, a computer system that can handle such a huge collection of data does not yet exist. But if it did, Block asks, “Would you feel comfortable ascribing intelligence to such a machine?” In other words, Block’s criticism is that a computer could pass the Turing test by the mechanical means of a lookup table, not through intelligence.
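Block’s observation, that a transcript is ultimately one very large binary number, can be made concrete with a few lines of Python (a sketch; the sample question comes from the text above):

```python
text = "Are you afraid of dying?"

# Encode each character as 8 ASCII bits; the whole transcript then
# reads as a single long binary string, i.e., one very large integer.
bits = "".join(format(ord(ch), "08b") for ch in text)
number = int(bits, 2)

print(bits[:24])  # first three characters 'A', 'r', 'e': 010000010111001001100101
```

Block’s hypothetical database would index canned responses by such numbers, so “passing” the test would reduce to table lookup.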
Searle’s Criticism: The Chinese Room
John Searle’s criticism of the Turing test is more fundamental. 10 Imagine an interrogator who asks questions as expected—this time, however, in Chinese. In a separate room is someone who does not know Chinese, but does have a detailed rulebook. Although the Chinese questions appear as a series of squiggles, the person in the room consults the rulebook, processes the Chinese characters according to the rules, and responds with answers written using Chinese characters (see Figure 1.6).
Figure 1.6
The Chinese Room argument.
The interrogator is obtaining syntactically correct and semantically reasonable responses to the questions. Does this mean that the person inside the room knows Chinese? If you answer “No,” does the combination of the person and the Chinese rule book know Chinese? No—the person is not learning or understanding Chinese, but is only processing symbols. In the same way, a computer running a program receives, processes, and responds with symbols without learning or understanding what the symbols themselves mean.
Instead of a single person with a rulebook, Searle also asks us to envision a gymnasium of people who pass notes to one another. When a person receives a note, the rulebook determines whether they should produce an output or merely pass the note on to another individual in the gymnasium (see Figure 1.7).
Figure 1.7
Variation of the Chinese Room argument.
Now, where does the knowledge of Chinese reside? With the ensemble of people? Or with the gymnasium?
Consider a final example. Picture the brain of a person who does indeed know Chinese, as shown in Figure 1.8. This person can receive questions asked in Chinese, interpret them accurately, and then respond in Chinese.
Figure 1.8
Chinese speaker receiving and responding to questions in Chinese.
Again, where does the knowledge of Chinese reside? With an individual neuron? Or, does it reside with a collection of these neurons? (It must reside somewhere!)
The crux of both Block’s and Searle’s criticisms of the Turing test is that it is not possible to gain insight into the internal state of some entity from external observations alone. That is, we should not expect to learn anything new about intelligence by treating an agent possessing intelligence (man or machine) as a black box. This is not always true, however. In the early 20th century, the physicist Ernest Rutherford correctly deduced the internal structure of matter (that atoms consist mostly of empty space) by bombarding thin gold foil with alpha particles. He expected these high-energy particles to pass through the foil with, at most, slight deflections. Instead, a small fraction were deflected through large angles, some bouncing almost straight back. This surprising outcome led him to propose the nuclear model of the atom: a tiny, dense core surrounded by orbiting electrons. This is essentially the model of the atom with which many of us became acquainted in high school chemistry. Rutherford successfully gained insight into the internal state of the atom through external observations alone.
In summary, it is difficult to define intelligence. It is precisely because of this difficulty in both defining intelligence and determining whether an agent possesses this attribute that Turing developed the Turing test. Implicit in his treatise is that any agent capable of passing the Turing test would invariably possess the “cerebral wherewithal” to cope with any reasonable intellectual challenge on a level commensurate with that of a person widely accepted to be intelligent. 11
ALAN TURING
