A First Course in Artificial Intelligence - Osondu Oguike - E-Book

Description

The importance of Artificial Intelligence cannot be over-emphasised in current times, when automation is already an integral part of industrial and business processes.
A First Course in Artificial Intelligence is a comprehensive textbook for beginners that covers all the fundamentals of Artificial Intelligence. Seven chapters (divided into thirty-three units) introduce the student to the key concepts of the discipline in simple language, including expert systems, natural language processing, machine learning, machine learning applications, sensory perception (computer vision, tactile perception) and robotics. Each chapter provides information, in separate units, about relevant history, applications, algorithms and programming, with relevant case studies and examples. The simplified approach to the subject enables beginners in computer science with a basic knowledge of Java programming to easily understand the contents. The text also introduces the basics of the Python programming language, with demonstrations of natural language processing, and introduces readers to the Waikato Environment for Knowledge Analysis (WEKA) as a tool for machine learning.
The book is suitable for students and teachers involved in introductory undergraduate and diploma level courses that have modules on artificial intelligence.

The e-book can be read in Legimi apps or any app that supports the following format:

EPUB

Page count: 404

Year of publication: 2021




Table of Contents
BENTHAM SCIENCE PUBLISHERS LTD.
End User License Agreement (for non-institutional, personal use)
Usage Rules:
Disclaimer:
Limitation of Liability:
General:
PREFACE
CONSENT FOR PUBLICATION
CONFLICT OF INTEREST
ACKNOWLEDGEMENT
Introduction to Artificial Intelligence
Abstract
1. DEFINITION OF ARTIFICIAL INTELLIGENCE
1.1. Artificial Intelligence
1.1.1. Explanation of Artificial Intelligence
1.1.2. Turing Test Model – Acting Like Human
1.1.3. Cognitive Model – Thinking Like Human
1.1.4. Rational Agent Model – Acting Rationally
1.1.5. Law of Thought – Thinking Rationally
1.2. Foundational Discipline in Artificial Intelligence
1.2.1. Philosophy
1.2.2. Mathematics
1.2.3. Psychology
1.2.4. Computer Engineering
1.2.5. Linguistics
1.2.6. Biological Science and Others
1.3. Conclusion
1.4. Summary
2. HISTORY OF ARTIFICIAL INTELLIGENCE AND PROJECTION FOR THE FUTURE
2.1. The Birth of Artificial Intelligence
2.1.1. Alan Turing (1912 – 1954)
2.1.2. Other Significant Contributors Prior to Birth of AI
2.2. Historical Development of Other Artificial Intelligence Systems
2.2.1. Expert System (1950s – 1970s)
2.2.2. First Artificial Intelligence Winter (1974 – 1980)
2.2.3. Second Artificial Intelligence Winter (1987 – 1993)
2.2.4. Intelligent Agent (1993 – Date)
2.3. Projections into the Future of Artificial Intelligence
2.3.1. Virtual Personal Assistants
2.4. Conclusion
2.5. Summary
3. EMERGING ARTIFICIAL INTELLIGENCE APPLICATIONS
3.1. Artificial Intelligence Applied Technologies
3.1.1. Blockchain Technology
3.1.1.1. Bitcoin: First Application of Artificial Intelligence to Blockchain
3.1.1.1.1. Bitcoin Wallet
3.1.1.1.2. Peer-to-peer
3.1.1.1.3. Miners
3.1.1.1.4. Transaction
3.1.1.1.5. Earning Reward
3.1.1.2. Applications of Artificial Intelligence to Blockchain Technology
3.1.1.2.1. Smart Computing Power
3.1.1.2.2. Analyses Diverse Data
3.1.1.2.3. Analyses Protected Data
3.1.1.2.4. Monetizes Data
3.1.1.2.5. Decision Making
3.1.2. Internet of Things (IoT)
3.1.2.1. History of Internet of Things
3.1.3. Data Science, Big Data and Data Analytics
3.2. Artificial Intelligence Products
3.2.1. IBM Watson
3.2.2. Self-Driving/Autonomous Cars
3.2.3. Face Recognition System
3.3. Conclusion
3.4. Summary
CONCLUDING REMARKS
REFERENCES
Expert System
Abstract
1. EXPERT SYSTEM BASICS
1.1. Components of Expert System
1.1.1. Human Expert
1.1.2. Knowledge Engineer
1.1.3. Knowledge Base
1.1.4. Inference Engine
1.1.5. User Interface
1.1.6. Non-Expert User
1.2. Knowledge Acquisition
1.2.1. Knowledge Elicitation
1.2.2. Intermediate Representation
1.2.3. Executable Form Representation
1.3. Characteristics of Expert System
1.4. Examples of Expert System
1.4.1. Medical Diagnosis System
1.4.2. Game System
1.4.3. Financial Forecast/Advice System
1.4.4. Identification System
1.4.5. Water/Oil Drilling System
1.4.6. Car Engine Diagnosis System
1.5. Importance of Expert Systems
1.6. Conclusion
1.7. Summary
2. KNOWLEDGE ENGINEERING
2.1. Foundations of Knowledge Engineering
2.1.1. Knowledge Engineering Processes
2.1.1.1. Knowledge Acquisition
2.1.1.2. Knowledge Representation
2.1.1.3. Knowledge Verification and Validation
2.1.1.4. Inferencing
2.1.1.5. Explanation and Justification
2.1.2. Sources and Types of Knowledge
2.1.3. Levels and Categories of Knowledge
2.1.3.1. Shallow Level
2.1.3.2. Deep Level
2.1.3.3. Declarative Knowledge
2.1.3.4. Procedural Knowledge
2.1.3.5. Meta-knowledge
2.2. Knowledge Acquisition Methods
2.2.1. Knowledge Modelling Methods
2.3. Knowledge Verification and Validation
2.4. Knowledge Representation
2.4.1. Production Rules
2.4.2. Semantic Network
2.4.3. Frames
2.5. Inferencing
2.5.1. Common Sense Inferencing/Reasoning
2.5.2. Rule Base Inferencing/Reasoning
2.6. Explanation and Meta-knowledge
2.7. Inferencing with Uncertainty
2.8. Expert System Development Environment
2.8.1. Expert System Shells
2.8.2. Programming Languages
2.8.3. Hybrid Environment
2.9. Conclusion
2.10. Summary
3. PROPOSITIONAL LOGIC
3.1. Propositional Logic as Knowledge Representation Formalism
3.2. Syntax of Propositional Logic Connectives
3.3. Semantics of Propositional Logic
3.4. Automating Logical Reasoning
3.5. Uncertainty in Logical Reasoning
3.6. Automating Uncertain Propositional Logic
3.7. Conclusion
3.8. Summary
CONCLUDING REMARKS
REFERENCES
Natural Language Processing
Abstract
1. FUNDAMENTALS OF NATURAL LANGUAGE PROCESSING
1.1. Applications of Natural Language Processing
1.2. The Future of Natural Language Processing
1.3. Conclusion
1.4. Summary
2. TEXT PRE-PROCESSING
2.1. Text Normalization
2.2. Tokenization
2.3. Stop Words Removal
2.4. Stemming
2.5. Lemmatization
2.6. Conclusion
2.7. Summary
3. TEXT REPRESENTATION
3.1. Bags of Words
3.2. Lookup Dictionary
3.3. One-Hot Encoding
3.4. Word Embedding
3.5. Conclusion
3.6. Summary
4. PARTS OF SPEECH TAGGING
4.1. Fundamentals of Parts of Speech
4.2. Importance of Parts of Speech Tagging
4.2.1. Word Pronunciation in Text to Speech Conversion
4.2.2. Word Sense Disambiguation
4.2.3. Stemming as Text Pre-processing Task
4.3. Computational Methods for Parts of Speech Tagging
4.3.1. Rule Based Tagging Method/Algorithm
4.3.2. Stochastic Based Tagging Method/Algorithm
4.3.3. Transformation Based Tagging
4.4. Conclusion
4.5. Summary
5. TEXT TAGGING/TEXT CLASSIFICATION
5.1. Approaches to Text Classification
5.1.1. Rule Based Text Classification
5.1.2. Machine Learning Based Text Classification
5.1.3. Rule and Machine Learning Based Text Classification
5.2. Machine Learning Algorithms for Text Classification
5.2.1. Naïve Bayes Text Classification Machine Learning Algorithm
5.2.2. Decision Tree Text Classification Machine Learning Algorithm
5.3. Conclusion
5.4. Summary
6. TEXT SUMMARIZATION
6.1. Brief History of Automatic Text Summarization
6.2. Approaches to Text Summarization
6.2.1. Extractive Text Summarization
6.2.2. Abstractive Text Summarization
6.3. Frequency Based Technique
6.4. Feature Based Technique
6.5. Text Rank Algorithm
6.6. Conclusion
6.7. Summary
7. SENTIMENT ANALYSIS
7.1. Types of Sentiment Analysis
7.1.1. Fine Grained Sentiment Analysis
7.1.2. Emotion Detection Sentiment Analysis
7.1.3. Aspects Based Sentiment Analysis
7.1.4. Multi-Lingual Sentiment Analysis
7.1.5. Intent Detection Sentiment Analysis
7.2. Applications of Sentiment Analysis
7.2.1. Social Media Sentiment Analysis
7.2.2. Internet Sentiment Analysis
7.2.3. Sentiment Analysis on Customer Feedback
7.2.4. Sentiment Analysis on Customer Services
7.3. Approaches to Sentiment Analysis
7.3.1. Rule Based Approach
7.3.2. Machine Learning Based Approach
7.3.3. Hybrid Approach
7.4. Conclusion
7.5. Summary
8. NLP, USING PYTHON PROGRAMMING LANGUAGE
8.1. Fundamentals of NLP Using Python
8.1.1. Natural Language ToolKit (NLTK)
8.1.2. Getting Started with NLP Using Python
8.1.3. Using List in Python for NLP
8.1.4. Manipulating String in Python
8.1.5. Using Python Text Editor
8.2. Using Control Structures in Python for NLP
8.2.1. Selective Control Structure
8.2.2. Repetitive/Looping Control Structure
8.3. Accessing Text Corpora in Python
8.3.1. Gutenberg Corpus
8.3.2. Web and Chat Text
8.3.3. Brown Corpus
8.3.4. Reuters Corpus
8.3.5. Inaugural Address Corpus
8.4. Conclusion
8.5. Summary
CONCLUDING REMARKS
REFERENCES
Machine Learning
Abstract
1. INTRODUCTION TO MACHINE LEARNING
1.1. Fundamentals of Machine Learning
1.1.1. Definition of Machine Learning
1.1.2. Types of Learning
1.1.3. Basic Terminologies in Machine Learning
1.1.4. Components of a Machine Learning System
1.2. Input to Machine Learning System
1.3. Characteristics of Input Data
1.4. Output from Machine Learning System
1.4.1. Regression Equation
1.4.2. Regression Trees
1.4.3. Table
1.4.4. Cluster Diagram
1.4.5. Decision Tree
1.4.6. Classification Rule
1.5. Conclusion
1.6. Summary
2. DATA PREPARATION
2.1. Fundamentals of Data Preparation
2.1.1. Data Selection
2.1.2. Data Pre-processing
2.1.3. Data Transformation
2.2. Data Transformation Techniques
2.2.1. Feature Engineering
2.2.2. Feature Scaling
2.3. Conclusion
2.4. Summary
3. SUPERVISED MACHINE LEARNING
3.1. Prediction Based Machine Learning Algorithm
3.1.1. Simple Linear Regression Algorithm
3.1.1.1. Least Square Method of Simple Linear Regression
3.1.1.2. Simple Linear Regression Algorithm Based on Least Square Method
3.1.1.3. Illustrating the Use of Linear Regression Algorithm
3.1.2. Multiple Linear Regression Algorithm
3.1.2.1. Least Square Method for Linear Multiple Regression
3.1.2.2. Multiple Linear Regression Algorithm Based on Least Square Method
3.2. Classification Based Machine Learning Algorithm
3.2.1. Naïve Bayes Machine Learning Algorithm
3.2.1.1. Bayes Theorem
3.2.1.2. Illustrating the Use of Naïve Bayes Algorithm to Solve Classification Problem
3.2.2. Decision Tree Machine Learning Algorithm
3.2.2.1. Basic Terminologies Used in Decision Tree Algorithm
3.2.2.2. Outline of the Decision Tree Algorithm
3.2.2.3. Determining the Most Information Gain of Attributes by Visualization
3.2.2.4. Illustrating with Example
3.2.2.5. Computation of the Information Gain of Attribute by Formula
3.2.2.6. Illustrating the Computation of Information Gain with Example
3.2.2.7. Using Decision Tree to Solve Classification Problem
3.3. Conclusion
3.4. Summary
4. SIMPLE REGRESSION ALGORITHMS FOR NON-LINEAR RELATIONSHIPS
4.1. Types of Simple Non-Linear Relationships
4.1.1. Simple Non-Linear Relationships
4.1.2. Polynomial of Degree 2 with Minimum Point
4.1.3. Polynomial of Degree 2 with Maximum Point
4.1.4. Polynomial of Degree 3 with Minimum Point on the Right
4.1.5. Polynomial of Degree 3 with Maximum Point on the Right
4.2. Regression Algorithm for Non-Linear Relationships
4.2.1. Regression Algorithm for Simple Non-Linear Relationships
4.2.1.1. Example Illustrating the Use of the Regression Algorithm for Simple Non-Linear Relationship
4.2.2. Regression Algorithm for Polynomial of Degree 2 with Minimum Point
4.2.2.1. Example Illustrating the Use of Regression Algorithm for Polynomial of Degree 2 with Minimum Point
4.2.3. Regression Algorithm for Polynomial of Degree 2 with Maximum Point
4.2.3.1. Example Illustrating the Use of Regression Algorithm for Polynomial of Degree 2 with Maximum Point
4.2.4. Regression Algorithm for Polynomial of Degree 3 with Minimum Point on the Right
4.2.5. Regression Algorithm for Polynomial of Degree 3 with Maximum Point on the Right
4.3. Conclusion
4.4. Summary
5. UNSUPERVISED MACHINE LEARNING ALGORITHMS
5.1. Clustering Algorithms
5.1.1. K-means Clustering Algorithm
5.1.2. Using K-means Algorithm to Perform Clustering on Dataset
5.1.3. Choosing the Number of K Clusters
5.1.4. Using WEKA to Perform K-means Clustering on Dataset
5.2. Data Visualization
5.2.1. Visualizing Two Dimensional Linear Dataset Using Scatter Plot
5.2.2. Visualizing Probability Distribution of Dataset Using Scatter Plot
5.2.2.1. Binomial Probability Distribution Function
5.2.2.2. Poisson Probability Distribution Function
5.2.2.3. Exponential Probability Distribution Function
5.2.2.4. Normal Probability Distribution Function
5.3. Conclusion
5.4. Summary
6. WAIKATO ENVIRONMENT FOR KNOWLEDGE ANALYSIS, WEKA
6.1. Data Representation in WEKA
6.2. Getting Started with WEKA
6.2.1. Loading CSV Files in the WEKA Explorer
6.3. Using WEKA to Solve Machine Learning Problems
6.4. Using WEKA to Solve Simple Linear Regression Problem
6.5. Using WEKA to Solve Linear Regression on CPU.arff Dataset
6.6. Using WEKA to do Naïve Bayes Classification on Nominal Weather.arff Dataset
6.7. Conclusion
6.8. Summary
7. NEURAL NETWORK
7.1. Biological Neurons
7.1.1. How the Biological Neuron Works
7.2. Artificial Neural Network
7.2.1. Feedforward Multi-Layer Perceptron
7.2.2. Effect of Noise and Hardware Failure on the Artificial Neuron
7.2.3. Continuous Input and Output Signals of Artificial Neuron
7.2.4. Probabilistic Output Signal of Artificial Neuron
7.2.5. Training the Artificial Neural Network
7.2.5.1. Threshold Logic Unit as a Linear Classifier
7.2.5.2. Representing Logic Function/Gate Using Perceptron
7.2.5.3. Threshold Logic Unit as a Generalized Linear Classifier
7.2.5.4. Increasing the Dimension of the Input and Weight Vector by 1
7.2.5.5. The Perceptron Learning Algorithm
7.2.5.6. Gradient Descent Technique and Delta Rule
7.3. Back Propagation Algorithm
7.4. Using WEKA to Solve Artificial Neural Network Problem
7.5. Conclusion
7.6. Summary
8. DEEP LEARNING
8.1. Deep Feedforward Network
8.2. Application of Deep Feedforward Network
8.2.1. Application of Deep Learning to Logic Function Evaluation
8.3. Deep Convolutional Neural Network
8.3.1. Layers of Deep Convolutional Neural Network
8.3.1.1. Convolutional Layer
8.3.1.2. Pooling Layer
8.3.1.3. Full Connect Layer
8.4. Deep Recurrent Neural Network
8.5. Conclusion
8.6. Summary
9. REINFORCEMENT LEARNING
9.1. Introduction to Reinforcement Learning
9.2. Features of Reinforcement Learning
9.2.1. Trade-off between Exploitation and Exploration
9.2.2. Holistic Approach to Problem Solving
9.2.3. Goal of Agent is Central in Reinforcement Learning
9.2.4. Fruitful Interaction with Other Discipline
9.2.5. Evaluative Feedbacks
9.3. Elements of Reinforcement Learning
9.3.1. Agent
9.3.2. Environment
9.3.3. Action
9.3.4. Environment State
9.3.5. Policy
9.3.6. Reward Signal
9.3.7. Value Function
9.3.8. Time Step
9.3.9. Model of the Environment
9.4. History of Reinforcement Learning
9.5. Conclusion
9.6. Summary
CONCLUDING REMARKS
REFERENCES
Machine Learning Applications
Abstract
1. ANALYZING TERRORISM DATASET USING CLASSIFICATION BASED ALGORITHMS
1.1. Introduction
1.2. Methodology for Collection and Analysis of Terrorism Dataset
1.3. Design of the Two Machine Learning Algorithms
1.4. Naïve Bayes Algorithm
1.5. The Decision Tree Algorithm
1.6. Simulation, Results and Discussion
2. ANALYZING TERRORISM DATASET USING PROBABILITY DISTRIBUTION FUNCTIONS
2.1. Methodology for Collection and Visualization of Terrorism Dataset
2.2. Theory of the Probability Distribution Functions
2.2.1. Binomial Probability Distribution Function
2.2.2. Poisson Probability Distribution Function
2.2.3. Exponential Probability Distribution Function
2.2.4. Normal Probability Distribution Function
2.3. Simulations, Results and Discussion
2.3.1. Result of Simulated Models for Binomial Probability Distribution Function
2.3.2. Result of Simulated Models for Poisson Probability Distribution Function
2.3.3. Result of Simulated Models for Normal Probability Distribution Function
2.4. Conclusion
3. POLYNOMIAL REGRESSION ALGORITHM FOR ANALYSING COVID-19 DATASET
3.1. Generalized Ordinary Least Square Method
3.2. Literature Review
3.3. Development of the Polynomial Regression Algorithm
3.3.1. Polynomial of Degree 2 with Minimum Point
3.3.2. Polynomial of Degree 2 with Maximum Point
3.3.3. Polynomial Dataset of Degree 3 with Minimum Point on the Right
3.3.4. Polynomial Dataset of Degree 3 with Maximum Point on the Right
3.3.5. Polynomial Dataset of Degree n with Minimum Point on the Right
3.3.6. Polynomial Dataset of Degree n with Maximum Point on the Right
3.4. Simulation and Discussion of Results
3.5. Conclusion
CONCLUDING REMARKS
REFERENCES
Sensory Perception
Abstract
1. COMPUTER VISION
1.1. Fundamentals of Computer Vision
1.2. Applications of Computer Vision
1.2.1. Vehicle Driver Assistance and Traffic Management
1.2.2. Eye and Head Tracker
1.2.3. Film and Video for Sports Analysis
1.2.5. Gesture Recognition
1.2.6. General-Purpose Vision System
1.2.7. Industrial Automation and Inspection for Electronic Industry
1.2.8. Industrial Automation and Inspection for Agriculture Industry
1.3. History of Computer Vision
1.4. Image Formation
1.4.1. Geometry of Image
1.4.1.1. Two and Three-Dimensional Geometry
1.4.1.2. Two and Three-Dimensional Transformations
1.4.1.3. Types of Two and Three-Dimensional Transformations
1.4.1.4. Combined Transformation
1.5. Image Recognition
1.5.1. Object/Face Detection
1.5.1.1. Features Based Face Detection Technique
1.5.1.2. Appearance-Based Approach
1.5.1.3. Clustering and PCA
1.5.1.4. Deep Neural Network
1.5.1.5. Support Vector Machine
1.5.1.6. Boosting
1.5.2. Pedestrian Detection
1.5.3. Face Recognition
1.5.4. Instance Recognition
1.6. Use of Computer Vision in Motion
1.7. Conclusion
1.8. Summary
2. SPEECH RECOGNITION
2.1. Basics of Speech Recognition
2.2. Basic Components of Speech Recognition System
2.3. Signal Processing
2.4. Uncertainties in Speech Recognition
2.5. Historical Development of Speech Recognition
2.6. Applications of Speech Recognition System
2.6.1. Cloud-based Call Center/IVR (Interactive Voice Response)
2.6.2. PC-Based Dictation/Command and Control
2.6.3. Device-Based Embedded Command Control
2.7. Conclusion
2.8. Summary
3. TACTILE SENSING
3.1. Tactile Sensing Explained
3.2. Justification for Tactile Sensing
3.3. Types of Tactile Sensors
3.4. Conclusion
3.5. Summary
CONCLUDING REMARKS
REFERENCES
Robotics
Abstract
1. FOUNDATIONS OF ROBOTICS
1.1. Robot Explained
1.2. Asimov's Laws of Robotics
1.3. Characteristics of Robot
1.4. User Level Applications of Robot
1.5. Types of Robots
1.6. Components of Robots
1.7. Conclusion
1.8. Summary
2. HUMANOID ROBOTS
2.1. Motivations for Humanoid Robots
2.2. Historical Development of Humanoid Robots
2.3. Current Trends in Humanoid Robots
2.4. Locomotion in Humanoid Robots
2.5. Manipulation in Humanoid Robots
2.6. Communication in Humanoid Robots
2.7. Conclusion
2.8. Summary
3. AUTONOMOUS/ROBOTIC VEHICLES
3.1. Levels of Vehicle Automation
3.2. How Autonomous Vehicle Technology Works
3.3. History of Autonomous Vehicles
3.4. Benefits of Autonomous Vehicles
3.5. Development and Deployment of Autonomous Vehicles
3.6. Planning Implications for Autonomous Vehicles
3.7. Conclusion
3.8. Summary
4. METRICS FOR ASSESSING THE PERFORMANCE OF ROBOTS
4.1. Metrics for Navigational Tasks
4.2. Metrics for Perception Tasks
4.3. Metrics for Management Tasks
4.4. Metrics for Manipulation Tasks
4.5. Metrics for Social Tasks
4.6. Conclusion
4.7. Summary
CONCLUDING REMARKS
REFERENCES
A First Course in Artificial Intelligence
Authored By
Osondu Oguike
Department of Computer Science
University of Nigeria, Nsukka
Enugu State
Nigeria

BENTHAM SCIENCE PUBLISHERS LTD.

End User License Agreement (for non-institutional, personal use)

This is an agreement between you and Bentham Science Publishers Ltd. Please read this License Agreement carefully before using the ebook/echapter/ejournal (“Work”). Your use of the Work constitutes your agreement to the terms and conditions set forth in this License Agreement. If you do not agree to these terms and conditions then you should not use the Work.

Bentham Science Publishers agrees to grant you a non-exclusive, non-transferable limited license to use the Work subject to and in accordance with the following terms and conditions. This License Agreement is for non-library, personal use only. For a library / institutional / multi user license in respect of the Work, please contact: [email protected].

Usage Rules:

1. All rights reserved: The Work is the subject of copyright and Bentham Science Publishers either owns the Work (and the copyright in it) or is licensed to distribute the Work. You shall not copy, reproduce, modify, remove, delete, augment, add to, publish, transmit, sell, resell, create derivative works from, or in any way exploit the Work or make the Work available for others to do any of the same, in any form or by any means, in whole or in part, in each case without the prior written permission of Bentham Science Publishers, unless stated otherwise in this License Agreement.
2. You may download a copy of the Work on one occasion to one personal computer (including tablet, laptop, desktop, or other such devices). You may make one back-up copy of the Work to avoid losing it.
3. The unauthorised use or distribution of copyrighted or other proprietary content is illegal and could subject you to liability for substantial money damages. You will be liable for any damage resulting from your misuse of the Work or any violation of this License Agreement, including any infringement by you of copyrights or proprietary rights.

Disclaimer:

Bentham Science Publishers does not guarantee that the information in the Work is error-free, or warrant that it will meet your requirements or that access to the Work will be uninterrupted or error-free. The Work is provided "as is" without warranty of any kind, either express or implied or statutory, including, without limitation, implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the results and performance of the Work is assumed by you. No responsibility is assumed by Bentham Science Publishers, its staff, editors and/or authors for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products instruction, advertisements or ideas contained in the Work.

Limitation of Liability:

In no event will Bentham Science Publishers, its staff, editors and/or authors, be liable for any damages, including, without limitation, special, incidental and/or consequential damages and/or damages for lost data and/or profits arising out of (whether directly or indirectly) the use or inability to use the Work. The entire liability of Bentham Science Publishers shall be limited to the amount actually paid by you for the Work.

General:

Any dispute or claim arising out of or in connection with this License Agreement or the Work (including non-contractual disputes or claims) will be governed by and construed in accordance with the laws of the U.A.E. as applied in the Emirate of Dubai. Each party agrees that the courts of the Emirate of Dubai shall have exclusive jurisdiction to settle any dispute or claim arising out of or in connection with this License Agreement or the Work (including non-contractual disputes or claims).

Your rights under this License Agreement will automatically terminate without notice and without the need for a court order if at any point you breach any terms of this License Agreement. In no event will any delay or failure by Bentham Science Publishers in enforcing your compliance with this License Agreement constitute a waiver of any of its rights.

You acknowledge that you have read this License Agreement, and agree to be bound by its terms and conditions. To the extent that any other terms and conditions presented on any website of Bentham Science Publishers conflict with, or are inconsistent with, the terms and conditions set out in this License Agreement, you acknowledge that the terms and conditions set out in this License Agreement shall prevail.

Bentham Science Publishers Ltd. Executive Suite Y - 2 PO Box 7917, Saif Zone Sharjah, U.A.E. Email: [email protected]

PREFACE

The importance of Artificial Intelligence cannot be over-emphasized; as a result, Artificial Intelligence occupies a central place in Computer Science curricula at both undergraduate and postgraduate levels, where at least one or two Artificial Intelligence courses are normally present. Many universities now also offer Artificial Intelligence as a degree programme, leading to a Bachelor's or Master's degree in Artificial Intelligence. This book covers all the main aspects of Artificial Intelligence: expert systems, natural language processing, machine learning, machine learning applications, sensory perception (computer vision, tactile perception), and robotics. For each of these areas, the book focuses on history, applications, algorithms, and programming, with relevant case studies and examples. It adopts a simplified approach so that every beginner can easily understand the contents. It assumes basic knowledge of the Java programming language, introduces the Python programming language and uses it for natural language processing, and introduces the Waikato Environment for Knowledge Analysis (WEKA) as a tool for machine learning. The book is organized into seven main chapters, and each chapter is further organized into units; across the seven chapters, there are thirty-three units.

CONSENT FOR PUBLICATION

Not applicable.

CONFLICT OF INTEREST

The author declares no conflict of interest, financial or otherwise.

ACKNOWLEDGEMENT

Declared none.

Osondu Oguike
Department of Computer Science, University of Nigeria, Nsukka, Enugu State, Nigeria.

Introduction to Artificial Intelligence

Osondu Oguike

Abstract

Every beginner in any subject needs a good foundation in order to understand the subject. Such a foundation is provided by a thorough definition of the subject and a detailed description of the fundamental models on which it is based, and Artificial Intelligence is no exception. Furthermore, the history of Artificial Intelligence helps the beginner to know where the field is coming from, the journey so far, and its likely future development, while its applications help us to appreciate the use of Artificial Intelligence in our daily lives. This chapter presents a detailed definition of Artificial Intelligence, its history, and its emerging applications.

Keywords: Acting like a human, Acting rationally, Artificial Intelligence winter, Autonomous cars, Bitcoin, Blockchain technology, Cognitive model, Data Science, IBM Watson, Internet of things, Turing test model.

1. DEFINITION OF ARTIFICIAL INTELLIGENCE

The definition of Artificial Intelligence helps us to understand what Artificial Intelligence focuses on, the various aspects of Artificial Intelligence, and the various concepts, techniques, ideas, and viewpoints of other disciplines that Artificial Intelligence uses.

1.1. Artificial Intelligence

Many authors in the literature have attempted to define Artificial Intelligence from different perspectives. In this book, a broad and general definition of Artificial Intelligence will be provided. Artificial Intelligence can be defined as a field of study that deals with the design of systems that act like a human, think like a human, act rationally and think rationally [1-4].

This definition covers the definitions of Artificial Intelligence found across the literature. It identifies four different faces of Artificial Intelligence, which will be explained in the next section. This means that Artificial Intelligence programs/systems are programs/systems that act like a human, think like a human, act rationally and think rationally.

1.1.1. Explanation of Artificial Intelligence

Each of the four faces of Artificial Intelligence provided in the definition will be explained using an appropriate model: one model each for acting like a human, thinking like a human, acting rationally, and thinking rationally.

1.1.2. Turing Test Model – Acting Like Human

The Turing test model explains what acting like a human means. In 1950, Alan Turing proposed a test aimed at helping people understand what acting like a human means. In the test, a human interrogator poses questions to a computer via a teletype; the computer passes the test if the interrogator cannot tell whether the answers came from a machine or from a human. The total Turing test additionally includes a video signal, which tests the perceptual abilities of the subject, and the exchange of physical objects between the interrogator and the subject. Alan Turing, therefore, equated acting like a human with behaving intelligently: a machine or human that behaves intelligently is one that achieves human-level performance on cognitive tasks. Making a computer achieve human-level intelligence therefore means that the computer must possess the following abilities or requirements [1, 3].

The ability to communicate in a natural language, like English, French, etc.
The ability to store information before or during the interrogation.
The ability to use the stored information to answer questions and draw new conclusions. This is called automated reasoning in Artificial Intelligence.
The ability to adapt to new circumstances: given new data, it discovers patterns in the data and makes appropriate decisions.

Furthermore, passing the total Turing test requires additional abilities, which are:

The ability to perceive with the sense organs of hearing, tasting, seeing, feeling, and smelling.
The ability to move objects. This is called robotics in Artificial Intelligence.

From the above requirements or abilities of an intelligent system, we can identify the various aspects or tasks that an Artificial Intelligence system can perform. The following are those tasks.

Natural Language Processing: This task allows an Artificial Intelligence system to communicate in a natural language, like English.
Knowledge Representation: This task allows an Artificial Intelligence system to use a particular method/formalism to store knowledge about a particular domain. This is called the knowledge base of an expert system.
Automated Reasoning: This task allows an Artificial Intelligence system to query the stored knowledge with the aim of answering the user's query. This is called the inference engine of an expert system.
Machine Learning: This task allows an Artificial Intelligence system to solve a problem using a set of data called training data.
Sensory Perception: This task allows the Artificial Intelligence system to solve a problem using the sensory perceptions of vision, touch, hearing, taste, smell, etc.
Robotics: This aspect of Artificial Intelligence allows the Artificial Intelligence system to solve a problem by moving itself or objects from one place to another.
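As a toy illustration of the knowledge representation and automated reasoning tasks described above, the sketch below stores facts and simple if-then rules in a small knowledge base and answers a query by forward chaining. All the facts, rule names, and the medical flavour are hypothetical, invented purely for illustration; this is a minimal sketch, not a method from the book.

```python
# Minimal sketch: a knowledge base of facts and if-then rules,
# queried by forward chaining. All names are illustrative.

facts = {"has_fever", "has_cough"}

# Each rule is a pair: (set of premises, conclusion).
rules = [
    ({"has_fever", "has_cough"}, "may_have_flu"),
    ({"may_have_flu"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Repeatedly fire every rule whose premises are all known facts,
    until no new conclusion can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

kb = forward_chain(facts, rules)
print("recommend_rest" in kb)  # the query succeeds
```

Separating the rules (knowledge base) from the chaining loop (inference engine) mirrors the expert-system structure described later in the book.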

1.1.3. Cognitive Model – Thinking Like Human

If we are going to say that a given program thinks like a human, we must have some ways of determining how humans think. We need to get inside the actual workings of human minds. There are two ways to do this: through introspection — trying to catch our own thoughts as they go by — or through psychological experiments. Once we have a sufficiently precise theory of the mind, it becomes possible to express the theory as a computer program. Cognitive science brings together computer models from Artificial Intelligence and experimental techniques from psychology to try to construct precise and testable theories of the workings of the human mind [1, 5-8].

1.1.4. Rational Agent Model – Acting Rationally

An agent is something that perceives and acts. It acts in order to achieve its goal. Therefore, acting rationally means acting like an agent. Artificial Intelligence is therefore considered as the study and construction of rational agents. One of the ways to act rationally is to make a correct inference, using the law of thought [1].

1.1.5. Law of Thought – Thinking Rationally

The law of thought helps to explain what thinking rationally means. It means right thinking, i.e., given correct premises (facts), it always produces a correct conclusion. The law of thought originated with the Greek philosopher Aristotle. It marked the beginning of logic, which is fundamental to Artificial Intelligence.
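As a simple illustration of right thinking, consider Aristotle's classic syllogism: all humans are mortal; Socrates is human; therefore, Socrates is mortal. The following sketch (the fact and rule representations here are invented purely for illustration) shows how a correct conclusion follows mechanically from correct premises:

```python
# Premise 1 (a general rule): "X is human" implies "X is mortal".
rules = {"human": "mortal"}

# Premise 2 (a fact): Socrates is human.
facts = {("Socrates", "human")}

def infer(facts, rules):
    """Apply each rule to each fact, deriving new conclusions."""
    derived = set(facts)
    for individual, category in facts:
        if category in rules:
            derived.add((individual, rules[category]))
    return derived

conclusions = infer(facts, rules)
print(("Socrates", "mortal") in conclusions)  # True: Socrates is mortal
```

Given correct premises, this mechanical procedure can only derive correct conclusions, which is the essence of the law of thought.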

1.2. Foundational Discipline in Artificial Intelligence

Artificial Intelligence is a young field of study, but it uses many ideas, viewpoints, and techniques from various old disciplines. In this section, the various ideas, viewpoints, and techniques that Artificial Intelligence borrows from the various old disciplines will be considered. They form the foundation upon which Artificial Intelligence stands.

1.2.1. Philosophy

The theories of reasoning and learning that Artificial Intelligence uses emerged from the discipline of Philosophy. It started with the writings of Plato, his teacher, Socrates, and his student, Aristotle. Socrates wanted to know the characteristics of piety, so that he could be informed about standards he could use to judge his actions and the actions of other people. In other words, he was asking for an algorithm that could be used to distinguish between piety and non-piety. In response, Aristotle formulated the laws governing the rational part of the mind. He developed the informal system of syllogism for proper reasoning, which allows one to generate conclusions from given initial premises. However, Aristotle did not believe all parts of the mind were governed by logical processes; he also had a notion of intuitive reasoning.

Philosophy therefore established the tradition that the mind can be conceived of as a physical device, operating principally by reasoning over the knowledge it contains. On the theory of knowledge, which Artificial Intelligence uses, Philosophy identified the sources of knowledge with the following principles. The principle of induction states that general rules are acquired by exposure to repeated associations between their elements. This principle was refined by the principle of logical positivism, which states that all knowledge can be characterized by logical theories connected, ultimately, to observation sentences that correspond to sensory inputs [1, 12].

Still on the philosophical picture of the mind, Philosophy also established a connection between knowledge and action. Artificial Intelligence is interested in the form this connection takes, and in how particular actions can be justified. By understanding how actions are justified, it becomes possible to build an Artificial Intelligence agent whose actions are justifiable [1, 2, 13].

1.2.2. Mathematics

Artificial Intelligence uses mathematical tools as formal tools in three main areas of mathematics: computation, logic and probability. Computation can be expressed as a formal algorithm in Artificial Intelligence, while logic has remained a formal language for representing knowledge in Artificial Intelligence. Probability allows us to reason logically and to measure the level of certainty or uncertainty in our reasoning [1, 14].
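As an illustration of measuring certainty with probability, the following sketch applies Bayes' rule to a made-up diagnostic scenario (all the numbers are assumptions chosen for the example, not real statistics):

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# Illustrative, assumed numbers for a diagnostic test.

p_disease = 0.01            # prior: 1% of patients have the disease
p_pos_given_disease = 0.95  # test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

# Total probability of a positive test result.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: our certainty that the patient is ill, given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # 0.161
```

Even with a positive result from a fairly accurate test, the posterior certainty is only about 16%, because the disease is rare; this is the kind of quantified uncertainty that probability contributes to Artificial Intelligence.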

1.2.3. Psychology

The principle of cognitive psychology states that the brain possesses and processes information. The theory of human behavior in Psychology states that its valid components are beliefs, goals and reasoning steps. However, for most of the early history of Artificial Intelligence and Cognitive Science, no significant distinction was drawn between the two fields, and it was common to see Artificial Intelligence programs described as psychological results without any claim as to the exact human behavior they were modeling. In the last decade or so, however, the methodological distinctions have become clearer, and most work now falls into one field or the other [1, 15].

1.2.4. Computer Engineering

In reality, Artificial Intelligence belongs to the field of computer science or computer engineering. If it is to stand on its own as a discipline, then ideas, viewpoints and techniques from the discipline of computer science or computer engineering must be used for Artificial Intelligence to succeed. Artificial Intelligence programs must be written to run on an appropriate computer architecture.

1.2.5. Linguistics

Much of the early work on knowledge representation (the study of how to put knowledge into a form that a computer can reason with) was tied to language and informed by research in linguistics. Modern Linguistics and Artificial Intelligence were “born” at about the same time, so Linguistics does not play a large foundational role in the growth of Artificial Intelligence. Instead, the two grew up together, intersecting in a hybrid field called Computational Linguistics or natural language processing, which concentrates on the problem of language use [1].

1.2.6. Biological Science and Others

Since Artificial Intelligence is a field of study that deals with the design of systems that act like humans, and most human actions are based on human biological processes, Artificial Intelligence uses some of these biological processes to design intelligent systems. Such processes include biological neurons and the biological sensory perceptions (vision, touch, hearing, taste and smell).

In a similar manner, some human actions are based on economic processes, political processes, etc.; therefore, concepts from Economics and Political Science will be used to design Artificial Intelligence systems that act like humans, economically and politically. In general, any discipline that determines the actions of humans will be useful in developing Artificial Intelligence systems.

1.3. Conclusion

Artificial Intelligence has been defined and explained in this unit. This definition and explanation will enable us to easily identify Artificial Intelligence systems. Artificial Intelligence has been identified as an inter-disciplinary subject that has something in common with other subject areas, like Mathematics, Psychology, Philosophy, Linguistics, Computer Engineering etc.

1.4. Summary

Having defined and explained what Artificial Intelligence is, every chapter of this book will focus on the various aspects of Artificial Intelligence that have been identified in this unit. Furthermore, discussion on the history and applications of Artificial Intelligence will be helpful in appreciating the usefulness of Artificial Intelligence.

2. HISTORY OF ARTIFICIAL INTELLIGENCE AND PROJECTION FOR THE FUTURE

The history of Artificial Intelligence focuses on the people who made significant contributions towards the development of Artificial Intelligence, their contributions, and the dates of those contributions. This is very useful because it helps us to recognize those contributors. On the other hand, future projections look into the new and future research directions of Artificial Intelligence.

2.1. The Birth of Artificial Intelligence

John McCarthy, in 1956, was the first to coin the term Artificial Intelligence, at a conference titled Artificial Intelligence, which was held at Dartmouth College, Hanover, New Hampshire. One of the participants at the conference who was very optimistic about the future of Artificial Intelligence was Marvin Minsky of MIT. However, before that time, several research efforts had taken place that contributed to the birth of Artificial Intelligence. One such effort was undertaken by Vannevar Bush in 1945. Another was carried out by Alan Turing in 1950, which has helped us understand what an intelligent system means. Turing's 1950 research led to the popular Turing test model, which helped to explain what it means to act like a human. Therefore, the birth of Artificial Intelligence cannot be complete without considering the life of Alan Turing, who made significant contributions that led to it [9-11].

2.1.1. Alan Turing (1912 – 1954)

Alan Turing was a British mathematician who, although he lived for a short period of time, made significant contributions towards the development of computing in general, and Artificial Intelligence in particular. In 1936, he designed a universal calculator, known as the Turing machine. He proved that this calculator is capable of solving any problem, as long as the problem can be represented and solved as an algorithm. A few years later, the first digital computers were built. Turing's electro-mechanical computer was used to break the code used by the German submarines in the Atlantic, which contributed to the British victory during World War II. In 1950, Alan Turing created a test to determine if a machine was intelligent. This test has been captioned the Turing test model, and it has been used by the Artificial Intelligence community to explain what it means to act like a human.

2.1.2. Other Significant Contributors Prior to Birth of AI

The following are other significant contributions that were made prior to the birth of Artificial Intelligence in 1956:

Ebruİz Bin Rezzaz Al Jezeri, one of the pioneers of cybernetic science, made water-operated, automatically controlled machines in 1206.
Karel Capek first introduced the robot concept in the theatre play Rossum's Universal Robots (R.U.R.) in 1923.
The first artificial intelligence programs for the Mark 1 device were written in 1951.

2.2. Historical Development of Other Artificial Intelligence Systems

After the birth of Artificial Intelligence in 1956, different Artificial Intelligence systems have been developed, which can be classified according to the following eras:

2.2.1. Expert System (1950s – 1970s)

Expert systems, as a subset of AI, emerged in the early 1950s when the Rand-Carnegie team developed the General Problem Solver to deal with theorem proving, geometric problems and chess playing [2]. About the same time, LISP, the later dominant programming language in Artificial Intelligence and expert systems, was invented by John McCarthy at MIT [3]. During the 1960s and 1970s, expert systems were increasingly used in industrial applications. Some of the famous applications during this period were DENDRAL (a chemical structure analyzer), XCON (a computer hardware configuration system), MYCIN (a medical diagnosis system), and ACE (AT&T's cable maintenance system). PROLOG, an alternative to LISP for logic programming, was created in 1972 and designed to handle computational linguistics, especially natural language processing [9-11].

2.2.2. First Artificial Intelligence Winter (1974 – 1980)

Due to lack of funding, there was no significant development in Artificial Intelligence research between 1974 and 1980. This period in the history of Artificial Intelligence is regarded as the first AI winter. It ended with the commercial introduction of expert systems.

2.2.3. Second Artificial Intelligence Winter (1987 – 1993)

Between 1987 and 1993, there were significant cuts in Artificial Intelligence funding; as a result, there were no significant contributions to Artificial Intelligence research. This period is regarded as the second Artificial Intelligence winter. In some literature, the first and second winter periods are combined into a single Artificial Intelligence winter, lasting from 1974 to 1993.

2.2.4. Intelligent Agent (1993 – Date)

At the end of the second AI winter, research in Artificial Intelligence shifted its focus to what are called intelligent agents. An agent can be regarded as anything that perceives and acts; it acts in order to achieve its goal. An agent can therefore be a piece of software that retrieves and presents information from the internet, does online shopping, etc. Intelligent agents are also called agents or bots, and with the emergence of Big Data programs, they have evolved into personal digital assistants.

2.3. Projections into the Future of Artificial Intelligence

The following are the future projections that show the directions of research in Artificial Intelligence.

2.3.1. Virtual Personal Assistants

Current and future research in Artificial Intelligence aims to develop virtual personal assistants, like Facebook M, Microsoft Cortana or Apple Siri. In the area of natural language processing, such a personal assistant will be capable of communicating with the user in natural language. In robotics, it will be capable of moving from place to place, providing physical personal assistance. In the area of Big Data, it will be capable of making informed business decisions based on available massive data. In machine learning, it will be capable of performing complex tasks.

2.4. Conclusion

In this unit, you have learnt the historical development of Artificial Intelligence and its future direction. The historical development of Artificial Intelligence has been divided into two phases: the developments before the birth of Artificial Intelligence in 1956, and the developments after it.

2.5. Summary

The historical development of Artificial Intelligence defines the applications of Artificial Intelligence. This is because Artificial Intelligence research, past and present, determines the Artificial Intelligence products that will be used for specific applications.

3. EMERGING ARTIFICIAL INTELLIGENCE APPLICATIONS

The historical development of Artificial Intelligence identified the past, present and future development of Artificial Intelligence. Artificial Intelligence depends on different technologies in order to develop appropriate applications. This unit identifies and describes emerging technologies that Artificial Intelligence depends on with the aim of developing the various Artificial Intelligence products, which can be used for different applications.

3.1. Artificial Intelligence Applied Technologies

Artificial Intelligence systems are built on different technologies. Each technology has different Artificial Intelligence systems that it supports. Some of the different technologies that apply Artificial Intelligence systems will be described in detail in this unit.

3.1.1. Blockchain Technology

A blockchain can be defined as a series of immutable records of data (blocks), which are time-stamped, secured using cryptographic principles and managed by a collection of computers that are not owned by any single entity (the chain). The cryptographic principles that are used to secure the series of data (blocks) involve the processes of encryption and decryption. The secured data of the blockchain technology are analysed for decision making using Artificial Intelligence systems. The blockchain technology does not have any centralized control; it is decentralized. It was the ingenious invention of a person or group known as Satoshi Nakamoto. It was originally invented for Bitcoin, a cryptocurrency, but now has many uses in other areas. The collection/cluster of computers that manage the blocks of data forms a blockchain network. The blocks of data are shared among all the computers in the blockchain network, which means that all the computers have access to the blocks of data, and they are updated across the network every ten minutes. The blocks of data are stored in a shared database, which is stored on each of the computers on the blockchain network. Blockchain is a simple way of sharing information between computers in a safe and automated manner. The process is initiated by one party, who creates a block of data to be shared. The data is verified by thousands or millions of other computers on the internet. The verified data is added to a chain, which cannot easily be falsified [16-18].
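The hash-linking idea described above can be sketched in a few lines of Python (a toy illustration, not a real blockchain implementation; the field names and helper functions are invented for this example):

```python
import hashlib
import json
import time

# A toy block chain: each block stores data, a timestamp, and the hash
# of the previous block, so tampering with any block breaks the chain.

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    body = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return block

def is_valid(chain):
    """Every block must point at the actual hash of its predecessor."""
    return all(chain[i]["prev_hash"] == chain[i - 1]["hash"]
               for i in range(1, len(chain)))

chain = [make_block("genesis", "0")]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

print(is_valid(chain))          # True: the chain is intact
chain[1]["hash"] = "tampered"   # altering any record...
print(is_valid(chain))          # ...is detected: False
```

Because each block commits to the hash of the one before it, changing any record invalidates every later link, which is why blockchain records are described as immutable.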

3.1.1.1. Bitcoin: First Application of Artificial Intelligence to Blockchain

Bitcoin remains the first use of the blockchain technology. It is a digital currency, which was created in 2009. It is a payment system that offers a lower processing fee than traditional online payment systems. Bitcoin does not appear as a physical coin; there are only balances that appear in a public ledger in the cloud, together with all Bitcoin transactions. Bitcoin balances are kept using public and private keys. The public and private keys are long strings of numbers and letters, linked by the mathematical encryption algorithm that is used to create them. The public key can be likened to a bank account number: it is the address that is published to the world, to which others send bitcoins. On the other hand, the private key can be likened to an ATM PIN, which is known only by the owner of the public key. It is used to authorize Bitcoin transactions. The following terms will be useful in understanding Bitcoin [18]:

3.1.1.1.1. Bitcoin Wallet

It is a physical electronic device or a piece of software that is used for Bitcoin trading. It allows users to track ownership of coins.

3.1.1.1.2. Peer-to-peer

This is the technology that is used to facilitate instant payments. It involves the exchange of data and information between parties without the involvement of a central authority.

3.1.1.1.3. Miners

They are the individuals or companies that contribute the governing computing power and participate in the Bitcoin network. Rewards and transaction fees are used to motivate them. They can be regarded as the decentralized authorities that enforce the credibility of the Bitcoin network. They also make sure that Bitcoin is not duplicated. Mining, therefore, is the process of verifying the individual Bitcoin transactions or adding a block of Bitcoin transactions to the blockchain.

3.1.1.1.4. Transaction

This is the process of making a purchase or payment using Bitcoin. Each transaction forms a piece of data/record. Transactions are collected together and managed in blocks. Transactions in a block are secured in a network of computers (the chain), using advanced cryptography. Bitcoin miners add a new block of transactions to the blockchain, and the miners ensure that the transactions are accurate.

3.1.1.1.5. Earning Reward

This is the process of earning Bitcoin by Bitcoin miners, either by verifying Bitcoin transactions or by adding a block of transactions to the blockchain. The former is quite simple; it involves verifying 1 MB of transactions. The latter involves solving a complex computational mathematical problem, called proof of work.
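The proof-of-work idea can be sketched as follows (a deliberately simplified illustration, not real Bitcoin mining; the difficulty value and function names are invented for this example):

```python
import hashlib

# Simplified proof of work: find a nonce so that the SHA-256 hash of
# (block data + nonce) starts with a required number of zeros. Checking
# a claimed answer is cheap; finding one requires brute-force search.

def mine(block_data, difficulty=4):
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 5 BTC")
print(digest.startswith("0000"))  # True: a valid proof of work was found
```

The asymmetry is the point: the miner must try many nonces, but anyone on the network can verify the answer with a single hash computation.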

Therefore, Bitcoin, the first application of the blockchain technology, allows individuals to secure their personal information, while allowing agents to generate economic value at smaller economic scales. The data generated through the use of Bitcoin on blockchain technology depends on Artificial Intelligence for its analysis [19].

3.1.1.2. Applications of Artificial Intelligence to Blockchain Technology

The following describes different areas where Artificial Intelligence can be applied to the blockchain technology, in particular Bitcoin technology.

3.1.1.2.1. Smart Computing Power

Operating a blockchain requires generating encrypted data (blocks), which need a large amount of processing power to analyse. For example, hashing algorithms are used to mine the Bitcoin blocks (data). Such algorithms involve systematically enumerating all the possible candidates for the solution, then checking whether each candidate satisfies the problem's statement before verifying the transaction. With Artificial Intelligence, however, this task can be accomplished in a more intelligent and efficient way, through the use of machine learning based algorithms with appropriate training data.

3.1.1.2.2. Analyses Diverse Data

Though Bitcoin is the first application of the blockchain technology, blockchain is currently used in a number of industries to create decentralization of data and networks. For example, SingularityNET specifically uses blockchain technology to encourage a broader distribution of data and algorithms. With the diverse applications of blockchain technology, Artificial Intelligence can be used to analyse the diverse data that the technology generates [19].

3.1.1.2.3. Analyses Protected Data

The success of the machine learning tool of Artificial Intelligence, which is used to analyse blockchain data, depends on the amount of training data. The training data of the blockchain technology are protected and secured, since blockchain allows encrypted data to be stored on a distributed ledger.

3.1.1.2.4. Monetizes Data

Monetization of data is the process of deciding how data is to be sold in order to make a profit for businesses. Blockchain, on its own, monetizes data by cryptographically protecting our data so that we can use it as we want. In the same way, Artificial Intelligence uses the cryptographically protected data by first buying it through data marketplaces.

3.1.1.2.5. Decision Making

Blockchain technology provides the processes for the generation of secured data, while Artificial Intelligence uses its machine learning tools to analyse the data for the purpose of making informed decisions.

3.1.2. Internet of Things (IoT)

This is another technology that depends on the application of Artificial Intelligence. The Internet of Things is the collection of billions of devices all over the world that are connected together for the purpose of collecting and sharing data. This technology is made possible by the availability of cheap, powerful microprocessors and wireless networks. Any object, therefore, can be connected. Connecting these objects together with sensors, which enable them to collect and communicate/share real-time data without human intervention, makes them intelligent devices. Any object can become an intelligent IoT device once it can be connected to the internet with sensors for collecting and sharing information. Internet of Things devices are devices that would not normally have an internet connection; therefore, personal computers and mobile devices are not considered Internet of Things devices, whereas cars and domestic electrical appliances (fridges, vending machines, etc.) can be considered IoT devices.

3.1.2.1. History of Internet of Things

It was in the 1980s and 1990s that the idea of adding sensors and intelligence to objects around us was first discussed. However, before this time, there were some early projects, such as an internet-enabled vending machine. This project was hindered for the following reasons: the technology that the vending machine would use was not ready, the chips were too big and bulky, and communication between objects at that time was not possible.

3.1.3. Data Science, Big Data and Data Analytics

The amount of digital data that is created is growing every day at a high rate. It has been estimated that by the year 2020, about 1.7 Mbytes of data will be created for every human being per second. Some of these data are structured, while some are unstructured. Data Science, therefore, deals with the cleansing, preparation and analysis of this enormous amount of data. Data Science combines the following disciplines: statistics, mathematics, programming, problem solving, data capturing, and Artificial Intelligence, with the aim of gaining insight into large amounts of data. It is an umbrella of techniques that are used when trying to obtain insight from large amounts of digital data.

Big Data refers to an enormous amount of data which cannot be processed using existing traditional applications. The research firm Gartner defines Big Data as “high-volume, high-velocity or high-variety information assets that demand cost-effective, innovative forms of information processing that enable insight, decision making, and process automation”. It is high-volume because it is enormous, high-velocity because of the high rate at which the data is created, and high-variety because of the various types of digital data that can be generated.

Data Analytics, on the other hand, is the process of examining datasets, using specialized systems and software, in order to draw conclusions about the information they contain. It is used in commercial industries to enable organisations to make informed business decisions, and by scientists and researchers to verify or disprove scientific models, theories and hypotheses.

All these terms, Data Science, Big Data and Data Analytics, are related to Machine Learning, which is an aspect of Artificial Intelligence that uses training data to make informed decisions.

3.2. Artificial Intelligence Products

There are many Artificial Intelligence products/systems that use the various technologies. These emerging Artificial Intelligence products/systems will be described in this section.

3.2.1. IBM Watson