A hands-on roadmap to using Python for artificial intelligence programming
In Practical Artificial Intelligence Programming with Python: From Zero to Hero, veteran educator and photophysicist Dr. Perry Xiao delivers a thorough introduction to one of the most exciting areas of computer science in modern history. The book demystifies artificial intelligence and teaches readers its fundamentals from scratch in simple and plain language and with illustrative code examples.
Divided into three parts, the book explains artificial intelligence in general, machine learning, and deep learning. It tackles a wide variety of useful topics, from classification and regression in machine learning to generative adversarial networks. The author also includes:
This hands-on AI programming guide is perfect for anyone with a basic knowledge of programming—including familiarity with variables, arrays, loops, if-else statements, and file input and output—who seeks to understand foundational concepts in AI and AI development.
Perry Xiao
The year 2020 was a year of turmoil, conflicts, and division. The most significant event was no doubt the COVID-19 pandemic, which was, and still is, raging in more than 200 countries and affecting the lives of hundreds of millions of people. I spent a good part of the year working from home. There are many disadvantages of remote working; however, it does have at least one advantage: it saved me at least two hours a day traveling to and from work. This gave me more time to think about, to plan, and to propose this book.
I am absolutely fascinated with artificial intelligence, and I have read many artificial intelligence books. But most of the books are heavily focused on the mathematics of artificial intelligence, which makes them difficult to understand for people without mathematics or computer science backgrounds. I have always wanted to write a book that could make it easier to get into the artificial intelligence field for beginners—people from all different disciplines. Thanks to the countless researchers and developers around the world and their open source code, particularly Python-based open source code, it is much easier to use artificial intelligence now than 10 years ago. Through this book, you will find that you can do amazing things with just a few lines of code, and in some cases, you don't need to code at all.
I am a big fan of open source, and for a research field as controversial as artificial intelligence, it is better for everyone to work together. So, I want to express my ultimate gratitude to those who made their work available for the benefit of others.
We are living in an era of digital revolutions and digital technologies such as artificial intelligence, the Internet of Things, Industry 4.0, 5G technologies, digital twin, cybersecurity, big data, cloud computing, blockchains, and, on the horizon, quantum computing. They are all being developed at a breathtaking speed. In the future, the Internet of Things will provide a means to connect all things around us and to use sensors to collect data. The industry version of the Internet of Things is called Industry 4.0, which will connect all sorts of things for manufacturers. Digital twin is a digital representation of a process, product, or service updated from real-time data. With digital twin, we can predict problems before they even occur, prevent downtime, and develop new opportunities for the future through simulations. 5G technologies will provide a means for fast and low-latency communication of the data. Cybersecurity will provide a means to protect the data. Big data will provide a means to analyze the data in large quantities. Cloud computing will provide the storage, display, and analysis of the data remotely, in the cloud. Blockchains will provide traceability to the data through distributed ledgers. Quantum computing will make some of the computation faster, in fact, many orders of magnitude faster. Artificial intelligence will be right at the heart of all these technologies, allowing us to analyze the data intelligently. As you can see, all these digital technologies are going to become intertwined to make us work better and live smarter.
That is why I have always said to my students, you can change your future. Your future is in your hands. The key is learning, even after graduation. Learning is a lifelong mission. In today's ever-evolving world, with all the quickly developing digital technologies, you need to constantly reinvent yourself; you need to be ready to learn anything and everything. The disadvantage of fast-changing technologies is that you need to keep learning all the time, but the advantage is that no one has a head start on you; you are on the same starting line as everyone else. The rest is up to you!
I believe artificial intelligence will be just a tool for everyone in the future, just like software coding is today. Artificial intelligence will no doubt affect every aspect of our lives and will fundamentally change the way we live, how we work, and how we socialize. The more you know about artificial intelligence and the more involved you are in artificial intelligence, the better you can transform your life.
Many successful people are lifelong learners. American entrepreneur and business magnate Elon Musk is a classic example. One of the world's richest men, he taught himself many things, from computer programming, the Internet, and finance to building cars and rockets. British comedian Lee Evans once said that if, by the end of the day, you have learned something new, then it has been a good day. I hope you will have a good day every day and enjoy reading this book!
Professor Perry Xiao
July 2021, London
Artificial intelligence (AI) is no doubt one of the hottest buzzwords at the moment. AI has penetrated into many aspects of our lives. Knowing AI and being able to use AI will bring enormous benefits to our work and lives. However, learning AI is a daunting task for many people, largely due to the complex mathematics and sophisticated coding behind it. This book aims to demystify AI and teach readers about AI from scratch, by using plain language and simple, illustrative code examples. It is divided into three parts.
In Part I, the book gives an easy-to-read introduction to AI, including its history, the types of AI, the current status, and possible future trends. It then introduces AI development tools and Python, the most widely used programming language for AI.
In Part II, the book introduces the machine learning and deep learning aspects of AI. Machine learning topics include classification, regression, and clustering; the ever-popular reinforcement learning is also covered. Deep learning topics include convolutional neural networks (CNNs) and long short-term memory (LSTM) networks.
In Part III, the book introduces AI case studies; topics include image classifications, transfer learning, recurrent neural networks, and the latest generative adversarial networks. It also includes the state of the art of GPUs, TPUs, cloud computing, and edge computing. This book is packed with interesting and exciting examples such as pattern recognitions, image classifications, face recognition (most controversial), age and gender detection, voice/speech recognition, chatbot, natural language processing, translation, sentiment analysis, predictive maintenance, finance and stock price analysis, sales prediction, customer segmentation, biomedical data analysis, and much more.
This book is divided into three parts. Part I introduces AI. Part II covers machine learning and deep learning. Part III covers the case studies, or the AI application projects. R&D developers as well as students will be interested in Part III.
Part I
Chapter 1: Introduction to AI
Chapter 2: AI Development Tools
Part II
Chapter 3: Machine Learning
Chapter 4: Deep Learning
Part III
Chapter 5: Image Classifications
Chapter 6: Face Detection and Recognition
Chapter 7: Object Detections and Image Segmentations
Chapter 8: Pose Detection
Chapter 9: GAN and Neural-Style Transfer
Chapter 10: Natural Language Processing
Chapter 11: Data Analysis
Chapter 12: Advanced AI Computing
All the example source code is available on the website that accompanies this book.
This book is intended for university/college students, as well as software and electronic hobbyists, researchers, developers, and R&D engineers. It assumes readers understand the basic concepts of computers and their main components, such as CPUs, RAM, hard drives, network interfaces, and so forth. Readers should be able to use a computer competently; for example, they should be able to switch the computer on and off, log in and out, run programs, copy/move/delete files, and use terminal software such as the Microsoft Windows command prompt.
It also assumes that readers have some basic programming experience, ideally in Python, but experience in other languages such as Java, C/C++, Fortran, MATLAB, C#, BASIC, or R will also do. Readers should know the basic syntax, the different types of variables, standard input and output, conditional statements, loops, and subroutines.
Finally, it assumes readers have a basic understanding of computer networks and the Internet and are familiar with some of the most commonly used Internet services such as the Web, email, file download/upload, online banking/shopping, etc.
This book can be used as a core textbook as well as for background reading.
This book is not meant for learning the Python programming language itself; there are already many good Python programming books on the market. However, to appeal to a wider audience, Chapter 2 provides a basic introduction to Python and how to get started with Python programming, so even if you have never programmed in Python before, you can still use this book.
If you want to learn all the technical details of Python, please refer to the following suggested prerequisite reading list and resources.
Absolute Beginner's Guide to Computer Basics (Absolute Beginner's Guides (Que)), 5th Edition, Michael Miller, QUE, 1 Sept. 2009.
ISBN-10: 0789742535
ISBN-13: 978-0789742537
Computers for Beginners (Wikibooks)
https://en.wikibooks.org/wiki/Computers_for_Beginners
Python Crash Course (2nd Edition): A Hands-On, Project-Based Introduction to Programming, Eric Matthes, No Starch Press, 9 May 2019.
ISBN-10 : 1593279280
ISBN-13 : 978-1593279288
Learn Python 3 the Hard Way: A Very Simple Introduction to the Terrifyingly Beautiful World of Computers and Code, 3rd Edition, Zed A. Shaw, Addison-Wesley Professional; 10 Oct. 2013.
ISBN-10 : 0321884914
ISBN-13 : 978-0321884916
Head First Python: A Brain-Friendly Guide, 2nd Edition, Paul Barry, O'Reilly, 16 Dec. 2016.
ISBN-10 : 1491919531
ISBN-13 : 978-1491919538
Think Python: How to Think Like a Computer Scientist, 2nd Edition, Allen B. Downey, O'Reilly, 25 Dec. 2015.
ISBN-10 : 1491939362
ISBN-13 : 978-1491939369
Python Pocket Reference: Python in Your Pocket, 5th edition, Mark Lutz, O'Reilly Media, 9 Feb. 2014.
ISBN-10 : 1449357016
ISBN-13 : 978-1449357016
A Beginner's Python Tutorial (Wikibooks)
https://en.wikibooks.org/wiki/A_Beginner%27s_Python_Tutorial
Python Programming (Wikibooks)
https://en.wikibooks.org/wiki/Python_Programming
Introduction to Machine Learning with Python: A Guide for Data Scientists, Andreas C. Müller and Sarah Guido, O'Reilly Media, 25 May 2016.
ISBN-10 : 1449369413
ISBN-13 : 978-1449369415
Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, 2nd Edition, Aurélien Géron, O'Reilly, 14 Oct. 2019.
ISBN-10 : 1492032646
ISBN-13 : 978-1492032649
Deep Learning with Python, Francois Chollet, Manning Publications, 30 Nov. 2017.
ISBN-10 : 9781617294433
ISBN-13 : 978-1617294433
Deep Learning (Adaptive Computation and Machine Learning Series), Illustrated Edition, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, MIT Press, 3 Jan. 2017.
ISBN-10 : 0262035618
ISBN-13 : 978-0262035613
Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2, 3rd Edition, Sebastian Raschka, Vahid Mirjalili, Packt Publishing, 12 Dec. 2019.
ISBN-10 : 1789955750
ISBN-13 : 978-1789955750
Machine Learning Yearning (Andrew Ng's free ebook)
https://www.deeplearning.ai/machine-learning-yearning/
Dive into Deep Learning (Free ebook)
https://d2l.ai/
For this book, you will need the following:
A standard personal computer with at least a 250 GB hard drive, 8 GB of RAM, and a 2 GHz Intel or AMD processor, running Windows (Vista/7/8/10, with Internet Explorer 9 or later, the latest Edge browser, or Google Chrome) or Linux (such as Ubuntu 16.04 or newer). You can also use a Mac (macOS 10.13 or later, administrator privileges for installation, and a 64-bit browser).
Python software
https://www.python.org/downloads/
Text editors and Python IDEs (see Chapter 2)
Raspberry Pi (optional)
https://www.raspberrypi.org/
Arduino NANO 33 BLE Sense (optional)
https://www.arduino.cc/en/Guide/NANO33BLESense
This book is accompanied by bonus content! The following extra elements can be downloaded from www.wiley.com/go/aiwithpython:
MATLAB for AI Cheat Sheets
Python for AI Cheat Sheets
Python Deep Learning Cheat Sheet
Python Virtual Environment
Jupyter Notebook, Google Colab, and Kaggle
Chapter 1: Introduction to AI
Chapter 2: AI Development Tools
Part I gives a bird’s-eye overview of artificial intelligence (AI) and AI development resources.
“There is no reason and no way that a human mind can keep up with an artificial intelligence machine by 2035.”
—Gray Scott (American futurist)
1.1 What Is AI?
1.2 The History of AI
1.3 AI Hypes and AI Winters
1.4 The Types of AI
1.5 Edge AI and Cloud AI
1.6 Key Moments of AI
1.7 The State of AI
1.8 AI Resources
1.9 Summary
1.10 Chapter Review Questions

Artificial intelligence (AI) is no doubt one of the hottest buzzwords right now. It is in the news all the time. So, what is AI, and why is it important? When you talk about AI, the image that probably pops into most people's heads is of a human-like robot that can do complicated, clever things, as shown in Figure 1.1. AI is actually more than that.
AI is an area of computer science that aims to make machines do intelligent things, that is, learn and solve problems, similar to the natural intelligence of humans and animals. In AI, an intelligent agent receives information from the environment, performs computations to decide which action to take in order to achieve its goal, and takes that action autonomously. An AI system can improve its performance through learning.
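To make this perceive-decide-learn loop concrete, here is a minimal, purely illustrative Python sketch (not taken from the book's own examples; the class and names such as SimpleAgent are hypothetical placeholders, not a specific library API):

class SimpleAgent:
    def __init__(self):
        self.experience = {}                                  # what the agent has learned so far

    def decide(self, observation):
        # Prefer actions that earned good feedback before; otherwise rely on
        # the scores carried by the current observation from the environment.
        scored = {a: self.experience.get(a, s) for a, s in observation.items()}
        return max(scored, key=scored.get)

    def learn(self, action, feedback):
        self.experience[action] = feedback                    # improve performance with learning

agent = SimpleAgent()
observation = {"turn_left": 0.3, "turn_right": 0.7}           # information from the environment
action = agent.decide(observation)                            # decide which action to take
agent.learn(action, feedback=1.0)                             # feedback from the environment
print("Chosen action:", action)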
For more information, see John McCarthy's 2004 paper titled, “What Is Artificial Intelligence?”
https://homes.di.unimi.it/borghese/Teaching/AdvancedIntelligentSystems/Old/IntelligentSystems_2008_2009/Old/IntelligentSystems_2005_2006/Documents/Symbolic/04_McCarthy_whatisai.pdf
Figure 1.1: The common perception of AI
(Source: https://commons.wikimedia.org/wiki/File:Artificial_Intelligence:%26_AI_%26_Machine_Learning_-_30212411048.jpg)
You may not be aware that AI has already been widely used in many aspects of our lives. Personal assistants such as Amazon's Alexa, iPhone's Siri, Microsoft's Cortana, and Google Assistant all rely on AI to understand what you have said and follow the instructions to perform tasks accordingly.
Online entertainment services such as Spotify and Netflix also rely on AI to figure out what you might like and to recommend songs and movies. Other services such as Google, Facebook, Amazon, and eBay analyze your online activities to deliver targeted advertisements. My wife once searched for Arduino boards at work during the day, and in the evening, after she got home, no matter which websites she visited, ads for Arduino boards kept popping up!
Have you ever used the SwiftKey program on your phone or Grammarly on your computer? They are also AI.
AI has also been used in healthcare, manufacturing, driverless cars, finance, agriculture, and more. In a recent study, researchers from Google Health and Imperial College London developed an algorithm that outperformed six human radiologists in reading mammograms for breast cancer detection. Groupe Renault is collaborating with Google Cloud to combine its AI and machine learning capabilities with automotive industry expertise to increase efficiency, improve production quality, and reduce the carbon footprint. Driverless cars use AI to identify roads, pedestrians, and traffic signs. The finance industry uses AI to detect fraud and predict future growth. Agriculture is also turning to AI for healthier crops, pest control, monitoring of soil and growing conditions, and so on.
AI can affect our jobs. According to the BBC, 35 percent of today's jobs are at risk of disappearing in the next 20 years. You can use the following BBC website to find out how safe your job is:
https://www.bbc.co.uk/news/technology-34066941
AI can be traced back to the 1940s, during World War II, when Alan Turing, a British mathematician and computer scientist, developed a code-breaking machine called the Bombe at Bletchley Park, United Kingdom, that deciphered German Enigma–encrypted messages (see Figure 1.2). The Hollywood movie The Imitation Game (2014) vividly captured this period of history. Turing's work helped the Allies defeat the Nazis and is estimated to have shortened the war by more than two years and saved more than 14 million lives.
Figure 1.2: The bombe machine (left) and the Enigma machine (right)
(Source: https://en.wikipedia.org/wiki/Cryptanalysis_of_the_Enigma)
In October 1950, while working at the University of Manchester, Turing published a paper entitled “Computing Machinery and Intelligence” in the journal Mind (Oxford University Press). In this paper, he proposed an experiment that became known as the famous Turing test. The Turing test is often described as a three-person game called the imitation game, as illustrated in Figure 1.3, in which player C, the interrogator, tries to determine which player—A or B—is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination. The Turing test has since been used to judge whether a machine's intelligence is equivalent to a human's. To date, no computer has passed the Turing test.
Figure 1.3: The famous Turing test, also called the imitation game. Player C, the interrogator, is trying to determine which player—A or B—is a computer and which is a human.
AI as a research discipline was established at a workshop at Dartmouth College in 1956, organized by John McCarthy, a young assistant professor of mathematics at the college (http://raysolomonoff.com/dartmouth/). The workshop lasted about six to eight weeks and was essentially an extended brainstorming session. There were about 11 attendees, mainly mathematicians and scientists, including Marvin Minsky, Allen Newell, Arthur Samuel, and Herbert Simon; they are widely recognized as the founding fathers of AI. John McCarthy chose the term artificial intelligence for the new research field.
The history of AI can be divided into three stages, as illustrated in Figure 1.4.
1950s–1970s, neural networks (NNs): During this period, neural networks, also called artificial neural networks (ANNs), were developed; they are modeled on the human brain and mimic biological neural networks. An NN usually has three layers: an input layer, a hidden layer, and an output layer. To use an NN, you need to train it with a large amount of data. After training, the NN can then be used to predict results for unseen data. NNs attracted a lot of attention during this period. After the 1970s, when NNs failed to live up to their promise, a situation known as AI hype, funding and research activities were dramatically cut. This was called an AI winter.
1980s–2010s, machine learning (ML): This is the period when machine learning flourished. ML is a subset of AI and consists of a set of mathematical algorithms that can automatically analyze data. Classic ML can be divided into supervised learning and unsupervised learning. Supervised learning examples include speech recognition and image recognition. Unsupervised learning examples include customer segmentation, defect detection, and fraud detection. Classic ML algorithms include support vector machines (SVMs), K-means clustering, decision trees, naïve Bayes, and so on (a minimal code sketch follows Figure 1.4).
2010s–present, deep learning (DL): This is the period when deep learning (DL) was developed. DL is a special type of neural network that has more than one hidden layer; this became feasible only with the increase in computing power, especially graphical processing units (GPUs), and with improved algorithms. DL is a subset of ML. DL has so far outperformed many other algorithms on large datasets. But is DL hype or reality? That remains to be seen.
Figure 1.4: The history of AI at the NVidia website
(Source: https://developer.nvidia.com/deep-learning)
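As a concrete taste of the classic ML algorithms listed above, the following minimal sketch trains a support vector machine to classify flowers. It assumes the scikit-learn library is installed; the built-in Iris dataset and the parameter choices are purely illustrative.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)                     # features and labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)             # hold out 30% of the data for testing

model = SVC(kernel="rbf")                             # a support vector classifier
model.fit(X_train, y_train)                           # "training" means fitting the model to the data
print("Test accuracy:", model.score(X_test, y_test))  # predict on unseen data and score the result

The same fit/score pattern applies to the other classic scikit-learn algorithms, such as decision trees and naïve Bayes.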
AI is often confused with data science, big data, and data mining. Figure 1.5 shows the relationships between AI, machine learning, deep learning, data science, and mathematics. Both mathematics and data science are related to AI but are different from AI. Data science mainly focuses on data, which includes big data and data mining. Data science can use machine learning and deep learning when processing the data.
Figure 1.5: The relationships between AI, machine learning, deep learning, data science, and mathematics
Figure 1.6 shows an interesting website that explains the lifecycle of data science. It includes business understanding, data mining, data cleaning, data exploration, feature engineering, predictive modeling, and data visualization.
Figure 1.6: The lifecycle of data science
(Source: http://sudeep.co/data-science/Understanding-the-Data-Science-Lifecycle/)
In summary:
AI means enabling a machine to do intelligent things to mimic humans. The two important aspects of AI are machine learning and deep learning.
Machine learning is a subset of AI and consists of algorithms that can automate data analysis.
Deep learning is a subset of machine learning. It is a neural network with more than one hidden layer.
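To show what "more than one hidden layer" looks like in code, here is a minimal sketch of a small deep neural network. It assumes TensorFlow/Keras is installed; the layer sizes and the random dummy data are arbitrary choices for illustration only, not a recipe from this book.

import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(20,)),                   # input layer: 20 features
    keras.layers.Dense(64, activation="relu"),         # hidden layer 1
    keras.layers.Dense(32, activation="relu"),         # hidden layer 2: this makes the network "deep"
    keras.layers.Dense(10, activation="softmax"),      # output layer: 10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

X = np.random.rand(100, 20)                            # dummy training data
y = np.random.randint(0, 10, size=100)                 # dummy integer class labels
model.fit(X, y, epochs=3, batch_size=16, verbose=0)    # train for a few epochs
model.summary()                                        # prints the stacked layers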
Like many other technologies, AI goes through hype cycles, as shown in Figure 1.7. A hype cycle can be divided into several stages. In the first stage (1950s–1970s), called the Technology Trigger, AI developed quickly, with increased funding, research activity, enthusiasm, optimism, and high expectations. In the second stage (1970s), AI reached the peak, called the Peak of Inflated Expectations. After the peak, in the third stage (1970s–1980s), when AI failed to deliver on its promises, AI reached the bottom, called the Trough of Disillusionment. This is the point at which an AI winter occurred. After the trough, AI slowly recovered; this is the fourth stage (1980s–present), which we are in now, called the Slope of Enlightenment. Finally, AI will reach the fifth stage, the Plateau of Productivity, where AI development becomes more stable.
Figure 1.7: The technology hype cycle
(Source: https://en.wikipedia.org/wiki/Hype_cycle)
AI winter refers to a period of time during which public interest and research activities in artificial intelligence are significantly reduced. There have been two AI winters in history, one in the late 1970s and one in the late 1980s.
From the 1950s to the 1970s, artificial neural networks attracted a lot of attention. But from the late 1960s onward, after many disappointments and criticisms, funding and research activities were significantly reduced; this was the first AI winter. A famous case was the failure of machine translation in 1966. After spending $20 million to fund a research project, the National Research Council (NRC) concluded that machine translation was more expensive, less accurate, and slower than human translation, so the NRC ended all support. The careers of many people were destroyed, and the research ended.
In 1973, the British Parliament commissioned Professor Sir James Lighthill to assess the state of AI research in the United Kingdom. His report, the famous Lighthill Report, criticized the utter failure of AI and concluded that nothing being done in AI could not be done in other sciences. The report also pointed out that many of AI's most successful algorithms would not work on real-world problems. The report was contested in a debate that aired on the BBC series Controversy in 1973, pitting Lighthill against the team of Donald Michie, John McCarthy, and Richard Gregory. The Lighthill Report effectively led to the dismantling of AI research in England in the 1970s.
In the 1980s, a form of AI program called the expert system became popular around the world. The first commercial expert system was developed at Carnegie Mellon for Digital Equipment Corporation. It was an enormous success and saved the company millions of dollars. Companies around the world began to develop and deploy their own expert systems. However, by the early 1990s, most commercial expert system companies had failed.
Another example is the Fifth Generation project. In 1981, the Japanese Ministry of International Trade and Industry invested $850 million for the Fifth Generation computer project to build machines that could carry on conversations, translate languages, interpret pictures, and reason like humans. By 1991, the project was discontinued, because the goals penned in 1981 had not been met. This is a classic example of expectations being much higher than what an AI project was actually capable of.
At the time of writing this book, in 2020, deep learning is developing at a fast pace, attracting a great deal of activity and funding, with exciting developments every day. Is deep learning hype? When will deep learning peak, and will there be a deep learning winter? Those are billion-dollar questions.
According to many resources, AI can be divided into three categories.
Narrow AI, also called weak AI or artificial narrow intelligence (ANI), refers to AI that is used to solve a specific problem. Almost all AI applications we have today are narrow AI. For example, image classification, object detection, speech recognition (such as Amazon's Alexa, iPhone's Siri, Microsoft's Cortana, and Google Assistant), translation, natural language processing, weather forecasting, targeted advertisements, sales predictions, email spam detection, fraud detection, face recognition, and computer vision are all narrow AI.
General AI, also called strong AI or artificial general intelligence (AGI), refers to AI that can solve general problems. It is more like a human being, able to learn, think, invent, and solve complicated problems of many kinds. The singularity, also called the technological singularity, is the point at which AI overtakes human intelligence, as illustrated in Figure 1.8. According to Google's Ray Kurzweil, an American author, inventor, and futurist, AI will pass the Turing test in 2029 and reach the singularity point in 2045. Narrow AI is what we have achieved so far, and general AI is what we expect in the future.
Super AI, also called superintelligence, refers to AI after the singularity point. Nobody knows what will happen with super AI. One vision is human and machine integration through a brain-chip interface; in August 2020, Elon Musk, the famous American entrepreneur, demonstrated a pig with a chip implanted in its brain. While some people are pessimistic about the future of AI, others are more optimistic. We cannot predict the future, but we can prepare for it.
Figure 1.8: The human intelligence and technological singularity
For more details about the types of AI, see the following resources:
https://azure.microsoft.com/en-gb/overview/what-is-artificial-intelligence/
https://www.ubs.com/microsites/artificial-intelligence/en/new-dawn.html
https://doi.org/10.1016/B978-0-12-817024-3.00008-8
This book will mainly cover the machine learning and deep learning aspects of AI, which belong to narrow AI or weak AI.
AI applications can run either on large remote servers, called cloud AI, or on local machines, called edge AI. The advantages of cloud AI are that you don't need to purchase expensive hardware, and you can upload large training datasets and fully utilize the vast computing power provided by the cloud. The disadvantages are that it may require more bandwidth, suffer higher latency, and raise security issues. The top three cloud AI service providers are as follows:
Amazon AWS Machine Learning
AWS has the largest market share, the longest history, and more cloud services than any other provider. But it is also the most expensive.
https://aws.amazon.com/machine-learning/
Microsoft Azure
Azure has the second largest market share and also provides many services. Azure can be easily integrated with Windows and many other software applications, such as .NET.
https://azure.microsoft.com/
Google Cloud Platform
