Artificial Intelligence By Example

Denis Rothman

Description

Understand the fundamentals and develop your own AI solutions in this updated edition packed with many new examples




Key Features



  • AI-based examples to guide you in designing and implementing machine intelligence


  • Build machine intelligence from scratch using artificial intelligence examples


  • Develop machine intelligence from scratch using real-world AI applications, from chatbots to blockchain and IoT



Book Description



AI has the potential to replicate humans in every field. Artificial Intelligence By Example, Second Edition serves as a starting point for you to understand how AI is built, with the help of intriguing and exciting examples.







This book will make you an adaptive thinker and help you apply concepts to real-world scenarios. Using some of the most interesting AI examples, ranging from a simple chess engine to cognitive chatbots, you will learn how to tackle the machine you are competing with. You will study some of the most advanced machine learning models, understand how to apply AI to blockchain and the Internet of Things (IoT), and develop the emotional quotient of chatbots using neural networks such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs).







This edition also includes new examples covering hybrid neural networks that combine reinforcement learning (RL) and deep learning (DL), chained algorithms that combine unsupervised learning with decision trees, random forests, the combination of DL and genetic algorithms, conversational user interfaces (CUI) for chatbots, neuromorphic computing, and quantum computing.







By the end of this book, you will understand the fundamentals of AI and have worked through a number of examples that will help you develop your AI solutions.




What you will learn



  • Apply k-nearest neighbors (KNN) to language translations and explore the opportunities in Google Translate


  • Understand chained algorithms combining unsupervised learning with decision trees


  • Solve the XOR problem with feedforward neural networks (FNN) and build its architecture to represent a data flow graph


  • Learn about meta learning models with hybrid neural networks


  • Create a chatbot and overcome its emotional intelligence deficiencies with tools such as Small Talk and data logging


  • Build conversational user interfaces (CUI) for chatbots


  • Write genetic algorithms that optimize deep learning neural networks


  • Build quantum computing circuits



Who this book is for



This book is for developers and anyone interested in AI who wants to understand the fundamentals of artificial intelligence and implement them practically. Prior experience with Python programming and statistical knowledge is essential to make the most of this book.

You can read this e-book in Legimi apps or in any app that supports the following format:

EPUB

Page count: 718

Year of publication: 2020




Artificial Intelligence By Example

Second Edition

Acquire advanced AI, machine learning, and deep learning design skills

Denis Rothman

BIRMINGHAM - MUMBAI

Artificial Intelligence By Example

Second Edition

Copyright © 2020 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Producer: Tushar Gupta

Acquisition Editor – Peer Reviews: Divya Mudaliar

Content Development Editor: Dr. Ian Hough

Technical Editor: Saby D'silva

Project Editor: Kishor Rit

Proofreader: Safis Editing

Indexer: Tejal Daruwale Soni

Presentation Designer: Pranit Padwal

First published: May 2018

Second edition: February 2020

Production reference: 1270220

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham B3 2PB, UK.

ISBN 978-1-83921-153-9

www.packt.com

packt.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

  • Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

  • Learn better with Skill Plans built especially for you

  • Get a free eBook or video every month

  • Fully searchable for easy access to vital information

  • Copy and paste, print, and bookmark content

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.Packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.Packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks.

Contributors

About the author

Denis Rothman graduated from Sorbonne University and Paris-Diderot University, writing one of the very first word2matrix embedding solutions. He began his career authoring one of the first AI cognitive natural language processing (NLP) chatbots, applied as a language teacher for Moët et Chandon and other companies. He authored an AI resource optimizer for IBM and apparel producers. He then authored an advanced planning and scheduling (APS) solution used worldwide.

"I want to thank the corporations who trusted me from the start to deliver artificial intelligence solutions and share the risks of continuous innovation. I also thank my family, who believed I would make it big at all times."

About the reviewers

Carlos Toxtli is a human-computer interaction researcher who studies the impact of artificial intelligence on the future of work. He studied for a Ph.D. in Computer Science at the University of West Virginia and holds a master's degree in Technological Innovation and Entrepreneurship from the Monterrey Institute of Technology and Higher Education. He has worked for international organizations such as Google, Microsoft, Amazon, and the United Nations. He has also created companies that use artificial intelligence in the financial, educational, customer service, and parking industries. Carlos has published numerous research papers, manuscripts, and book chapters for different conferences and journals in his field.

"I want to thank all the editors who helped make this book a masterpiece."

Kausthub Raj Jadhav graduated from the University of California, Irvine, where he specialized in intelligent systems and founded the Artificial Intelligence Club. In his spare time, he enjoys powerlifting, rewatching Parks and Recreation, and learning how to cook. He solves hard problems for a living.

Table of Contents

Preface

Who this book is for

What this book covers

To get the most out of this book

Get in touch

Getting Started with Next-Generation Artificial Intelligence through Reinforcement Learning

Reinforcement learning concepts

How to adapt to machine thinking and become an adaptive thinker

Overcoming real-life issues using the three-step approach

Step 1 – describing a problem to solve: MDP in natural language

Watching the MDP agent at work

Step 2 – building a mathematical model: the mathematical representation of the Bellman equation and MDP

From MDP to the Bellman equation

Step 3 – writing source code: implementing the solution in Python

The lessons of reinforcement learning

How to use the outputs

Possible use cases

Machine learning versus traditional applications

Summary

Questions

Further reading

Building a Reward Matrix – Designing Your Datasets

Designing datasets – where the dream stops and the hard work begins

Designing datasets

Using the McCulloch-Pitts neuron

The McCulloch-Pitts neuron

The Python-TensorFlow architecture

Logistic activation functions and classifiers

Overall architecture

Logistic classifier

Logistic function

Softmax

Summary

Questions

Further reading

Machine Intelligence – Evaluation Functions and Numerical Convergence

Tracking down what to measure and deciding how to measure it

Convergence

Implicit convergence

Numerically controlled gradient descent convergence

Evaluating beyond human analytic capacity

Using supervised learning to evaluate a result that surpasses human analytic capacity

Summary

Questions

Further reading

Optimizing Your Solutions with K-Means Clustering

Dataset optimization and control

Designing a dataset and choosing an ML/DL model

Approval of the design matrix

Implementing a k-means clustering solution

The vision

The data

The strategy

The k-means clustering program

The mathematical definition of k-means clustering

The Python program

Saving and loading the model

Analyzing the results

Bot virtual clusters as a solution

The limits of the implementation of the k-means clustering algorithm

Summary

Questions

Further reading

How to Use Decision Trees to Enhance K-Means Clustering

Unsupervised learning with KMC with large datasets

Identifying the difficulty of the problem

NP-hard – the meaning of P

NP-hard – the meaning of non-deterministic

Implementing random sampling with mini-batches

Using the LLN

The CLT

Using a Monte Carlo estimator

Trying to train the full training dataset

Training a random sample of the training dataset

Shuffling as another way to perform random sampling

Chaining supervised learning to verify unsupervised learning

Preprocessing raw data

A pipeline of scripts and ML algorithms

Step 1 – training and exporting data from an unsupervised ML algorithm

Step 2 – training a decision tree

Step 3 – a continuous cycle of KMC chained to a decision tree

Random forests as an alternative to decision trees

Summary

Questions

Further reading

Innovating AI with Google Translate

Understanding innovation and disruption in AI

Is AI disruptive?

AI is based on mathematical theories that are not new

Neural networks are not new

Looking at disruption – the factors that are making AI disruptive

Cloud server power, data volumes, and web sharing of the early 21st century

Public awareness

Inventions versus innovations

Revolutionary versus disruptive solutions

Where to start?

Discover a world of opportunities with Google Translate

Getting started

The program

The header

Implementing Google's translation service

Google Translate from a linguist's perspective

Playing with the tool

Linguistic assessment of Google Translate

AI as a new frontier

Lexical field and polysemy

Exploring the frontier – customizing Google Translate with a Python program

k-nearest neighbor algorithm

Implementing the KNN algorithm

The knn_polysemy.py program

Implementing the KNN function in Google_Translate_Customized.py

Conclusions on the Google Translate customized experiment

The disruptive revolutionary loop

Summary

Questions

Further reading

Optimizing Blockchains with Naive Bayes

Part I – the background to blockchain technology

Mining bitcoins

Using cryptocurrency

Part II – using blockchains to share information in a supply chain

Using blockchains in the supply chain network

Creating a block

Exploring the blocks

Part III – optimizing a supply chain with naive Bayes in a blockchain process

A naive Bayes example

The blockchain anticipation novelty

The goal – optimizing storage levels using blockchain data

Implementation of naive Bayes in Python

Gaussian naive Bayes

Summary

Questions

Further reading

Solving the XOR Problem with a Feedforward Neural Network

The original perceptron could not solve the XOR function

XOR and linearly separable models

Linearly separable models

The XOR limit of a linear model, such as the original perceptron

Building an FNN from scratch

Step 1 – defining an FNN

Step 2 – an example of how two children can solve the XOR problem every day

Implementing a vintage XOR solution in Python with an FNN and backpropagation

A simplified version of a cost function and gradient descent

Linear separability was achieved

Applying the FNN XOR function to optimizing subsets of data

Summary

Questions

Further reading

Abstract Image Classification with Convolutional Neural Networks (CNNs)

Introducing CNNs

Defining a CNN

Initializing the CNN

Adding a 2D convolution layer

Kernel

Shape

ReLU

Pooling

Next convolution and pooling layer

Flattening

Dense layers

Dense activation functions

Training a CNN model

The goal

Compiling the model

The loss function

The Adam optimizer

Metrics

The training dataset

Data augmentation

Loading the data

The testing dataset

Data augmentation on the testing dataset

Loading the data

Training with the classifier

Saving the model

Next steps

Summary

Questions

Further reading and references

Conceptual Representation Learning

Generating profit with transfer learning

The motivation behind transfer learning

Inductive thinking

Inductive abstraction

The problem AI needs to solve

The Γ gap concept

Loading the trained TensorFlow 2.x model

Loading and displaying the model

Loading the model to use it

Defining a strategy

Making the model profitable by using it for another problem

Domain learning

How to use the programs

The trained models used in this section

The trained model program

Gap – loaded or underloaded

Gap – jammed or open lanes

Gap datasets and subsets

Generalizing the Γ (the gap conceptual dataset)

The motivation of conceptual representation learning metamodels applied to dimensionality

The curse of dimensionality

The blessing of dimensionality

Summary

Questions

Further reading

Combining Reinforcement Learning and Deep Learning

Planning and scheduling today and tomorrow

A real-time manufacturing process

Amazon must expand its services to face competition

A real-time manufacturing revolution

CRLMM applied to an automated apparel manufacturing process

An apparel manufacturing process

Training the CRLMM

Generalizing the unit training dataset

Food conveyor belt processing – positive pγ and negative nγ gaps

Running a prediction program

Building the RL-DL-CRLMM

A circular process

Implementing a CNN-CRLMM to detect gaps and optimize

Q-learning – MDP

MDP inputs and outputs

The optimizer

The optimizer as a regulator

Finding the main target for the MDP function

A circular model – a stream-like system that never starts nor ends

Summary

Questions

Further reading

AI and the Internet of Things (IoT)

The public service project

Setting up the RL-DL-CRLMM model

Applying the model of the CRLMM

The dataset

Using the trained model

Adding an SVM function

Motivation – using an SVM to increase safety levels

Definition of a support vector machine

Python function

Running the CRLMM

Finding a parking space

Deciding how to get to the parking lot

Support vector machine

The itinerary graph

The weight vector

Summary

Questions

Further reading

Visualizing Networks with TensorFlow 2.x and TensorBoard

Exploring the output of the layers of a CNN in two steps with TensorFlow

Building the layers of a CNN

Processing the visual output of the layers of a CNN

Analyzing the visual output of the layers of a CNN

Analyzing the accuracy of a CNN using TensorBoard

Getting started with Google Colaboratory

Defining and training the model

Introducing some of the measurements

Summary

Questions

Further reading

Preparing the Input of Chatbots with Restricted Boltzmann Machines (RBMs) and Principal Component Analysis (PCA)

Defining basic terms and goals

Introducing and building an RBM

The architecture of an RBM

An energy-based model

Building the RBM in Python

Creating a class and the structure of the RBM

Creating a training function in the RBM class

Computing the hidden units in the training function

Random sampling of the hidden units for the reconstruction and contrastive divergence

Reconstruction

Contrastive divergence

Error and energy function

Running the epochs and analyzing the results

Using the weights of an RBM as feature vectors for PCA

Understanding PCA

Mathematical explanation

Using TensorFlow's Embedding Projector to represent PCA

Analyzing the PCA to obtain input entry points for a chatbot

Summary

Questions

Further reading

Setting Up a Cognitive NLP UI/CUI Chatbot

Basic concepts

Defining NLU

Why do we call chatbots "agents"?

Creating an agent to understand Dialogflow

Entities

Intents

Context

Adding fulfillment functionality to an agent

Defining fulfillment

Enhancing the cogfilmdr agent with a fulfillment webhook

Getting the bot to work on your website

Machine learning agents

Using machine learning in a chatbot

Speech-to-text

Text-to-speech

Spelling

Why are these machine learning algorithms important?

Summary

Questions

Further reading

Improving the Emotional Intelligence Deficiencies of Chatbots

From reacting to emotions, to creating emotions

Solving the problems of emotional polysemy

The greetings problem example

The affirmation example

The speech recognition fallacy

The facial analysis fallacy

Small talk

Courtesy

Emotions

Data logging

Creating emotions

RNN research for future automatic dialog generation

RNNs at work

RNN, LSTM, and vanishing gradients

Text generation with an RNN

Vectorizing the text

Building the model

Generating text

Summary

Questions

Further reading

Genetic Algorithms in Hybrid Neural Networks

Understanding evolutionary algorithms

Heredity in humans

Our cells

How heredity works

Evolutionary algorithms

Going from a biological model to an algorithm

Basic concepts

Building a genetic algorithm in Python

Importing the libraries

Calling the algorithm

The main function

The parent generation process

Generating a parent

Fitness

Display parent

Crossover and mutation

Producing generations of children

Summary code

Unspecified target to optimize the architecture of a neural network with a genetic algorithm

A physical neural network

What is the nature of this mysterious S-FNN?

Calling the algorithm cell

Fitness cell

ga_main() cell

Artificial hybrid neural networks

Building the LSTM

The goal of the model

Summary

Questions

Further reading

Neuromorphic Computing

Neuromorphic computing

Getting started with Nengo

Installing Nengo and Nengo GUI

Creating a Python program

A Nengo ensemble

Nengo neuron types

Nengo neuron dimensions

A Nengo node

Connecting Nengo objects

Visualizing data

Probes

Applying Nengo's unique approach to critical AI research areas

Summary

Questions

References

Further reading

Quantum Computing

The rising power of quantum computers

Quantum computer speed

Defining a qubit

Representing a qubit

The position of a qubit

Radians, degrees, and rotations

The Bloch sphere

Composing a quantum score

Quantum gates with Quirk

A quantum computer score with Quirk

A quantum computer score with IBM Q

A thinking quantum computer

Representing our mind's concepts

Expanding MindX's conceptual representations

The MindX experiment

Preparing the data

Transformation functions – the situation function

Transformation functions – the quantum function

Creating and running the score

Using the output

Summary

Questions

Further reading

Answers to the Questions

Chapter 1 – Getting Started with Next-Generation Artificial Intelligence through Reinforcement Learning

Chapter 2 – Building a Reward Matrix – Designing Your Datasets

Chapter 3 – Machine Intelligence – Evaluation Functions and Numerical Convergence

Chapter 4 – Optimizing Your Solutions with K-Means Clustering

Chapter 5 – How to Use Decision Trees to Enhance K-Means Clustering

Chapter 6 – Innovating AI with Google Translate

Chapter 7 – Optimizing Blockchains with Naive Bayes

Chapter 8 – Solving the XOR Problem with a Feedforward Neural Network

Chapter 9 – Abstract Image Classification with Convolutional Neural Networks (CNNs)

Chapter 10 – Conceptual Representation Learning

Chapter 11 – Combining Reinforcement Learning and Deep Learning

Chapter 12 – AI and the Internet of Things

Chapter 13 – Visualizing Networks with TensorFlow 2.x and TensorBoard

Chapter 14 – Preparing the Input of Chatbots with Restricted Boltzmann Machines (RBMs) and Principal Component Analysis (PCA)

Chapter 15 – Setting Up a Cognitive NLP UI/CUI Chatbot

Chapter 16 – Improving the Emotional Intelligence Deficiencies of Chatbots

Chapter 17 – Genetic Algorithms in Hybrid Neural Networks

Chapter 18 – Neuromorphic Computing

Chapter 19 – Quantum Computing

Other Books You May Enjoy

Index
