Understanding Artificial Intelligence provides students across majors with a clear and accessible overview of new artificial intelligence technologies and applications.

Artificial intelligence (AI) is broadly defined as computers programmed to simulate the cognitive functions of the human mind. In combination with neural networks (NN), big data (BD), and the Internet of Things (IoT), artificial intelligence has transformed everyday life: self-driving cars, delivery drones, digital assistants, facial recognition devices, autonomous vacuum cleaners, and mobile navigation apps all rely on AI to perform tasks. With the rise of artificial intelligence, the job market of the near future will be radically different: many jobs will disappear, yet new jobs and opportunities will emerge. Understanding Artificial Intelligence: Fundamentals and Applications covers the fundamental concepts and key technologies of AI while exploring its impact on the future of work. Requiring no previous background in artificial intelligence, this easy-to-understand textbook addresses AI challenges in healthcare, finance, retail, manufacturing, agriculture, government, and smart city development. Each chapter includes simple computer laboratories that teach students how to develop artificial intelligence applications and integrate software and hardware for robotic development.
In addition, this text:

* Focuses on artificial intelligence applications in different industries and sectors
* Traces the history of neural networks and explains popular neural network architectures
* Covers AI technologies, such as Machine Vision (MV), Natural Language Processing (NLP), and Unmanned Aerial Vehicles (UAV)
* Describes various artificial intelligence computational platforms, including Google Tensor Processing Unit (TPU) and Kneron Neural Processing Unit (NPU)
* Highlights the development of new artificial intelligence hardware and architectures

Understanding Artificial Intelligence: Fundamentals and Applications is an excellent textbook for undergraduates in business, humanities, the arts, science, healthcare, engineering, and many other disciplines. It is also an invaluable guide for working professionals wanting to learn about the ways AI is changing their particular field.
Page count: 201
Publication year: 2022
Cover
Series Page
Title Page
Copyright Page
Dedication Page
List of Figures
Preface
Acknowledgments
Author Biographies
1 Introduction
1.1 Overview
1.2 Development History
1.3 Neural Network Model
1.4 Popular Neural Network
1.5 Neural Network Classification
1.6 Neural Network Operation
1.7 Application Development
Exercise
2 Neural Network
2.1 Convolutional Layer
2.2 Activation Layer
2.3 Pooling Layer
2.4 Batch Normalization
2.5 Dropout Layer
2.6 Fully Connected Layer
Exercise
3 Machine Vision
3.1 Object Recognition
3.2 Feature Matching
3.3 Facial Recognition
3.4 Gesture Recognition
3.5 Machine Vision Applications
Exercise
4 Natural Language Processing
4.1 Neural Network Model
4.2 Natural Language Processing Applications
Exercise
5 Autonomous Vehicle
5.1 Levels of Driving Automation
5.2 Autonomous Technology
5.3 Communication Strategies
5.4 Law Legislation
5.5 Future Challenges
Exercise
6 Drone
6.1 Drone Design
6.2 Drone Structure
6.3 Drone Regulation
6.4 Applications
Exercise
7 Healthcare
7.1 Telemedicine
7.2 Medical Diagnosis
7.3 Medical Imaging
7.4 Smart Medical Device
7.5 Electronic Health Record
7.6 Medical Billing
7.7 Drug Development
7.8 Clinical Trial
7.9 Medical Robotics
7.10 Elderly Care
7.11 Future Challenges
Exercise
8 Finance
8.1 Fraud Prevention
8.2 Financial Forecast
8.3 Stock Trading
8.4 Banking
8.5 Accounting
8.6 Insurance
Exercise
9 Retail
9.1 E‐Commerce
9.2 Virtual Shopping
9.3 Product Promotion
9.4 Store Management
9.5 Warehouse Management
9.6 Inventory Management
9.7 Supply Chain
Exercise
10 Manufacturing
10.1 Defect Detection
10.2 Quality Assurance
10.3 Production Integration
10.4 Generative Design
10.5 Predictive Maintenance
10.6 Environment Sustainability
10.7 Manufacturing Optimization
Exercise
11 Agriculture
11.1 Crop and Soil Monitoring
11.2 Agricultural Robot
11.3 Pest Control
11.4 Precision Farming
Exercise
12 Smart City
12.1 Smart Transportation
12.2 Smart Parking
12.3 Waste Management
12.4 Smart Grid
12.5 Environmental Conservation
Exercise
13 Government
13.1 Information Technology
13.2 Human Service
13.3 Law Enforcement
13.4 Homeland Security
13.5 Legislation
13.6 Ethics
13.7 Public Perspective
Exercise
14 Computing Platform
14.1 Central Processing Unit
14.2 Graphics Processing Unit
14.3 Tensor Processing Unit
14.4 Neural Processing Unit
Exercise
Appendix A: Kneron Neural Processing Unit
Appendix B: Object Detection – Overview
B.1 Kneron Environment Setup
B.2 Python Installation
B.3 Library Installation
B.4 Driver Installation
B.5 Model Installation
B.6 Image/Camera Detection
B.7 Yolo Class List
Appendix C: Object Detection – Hardware
C.1 Library Setup
C.2 System Parameters
C.3 NPU Initialization
C.4 Image Detection
C.5 Camera Detection
Appendix D: Hardware Transfer Mode
D.1 Serial Transfer Mode
D.2 Pipeline Transfer Mode
D.3 Parallel Transfer Mode
Appendix E: Object Detection – Software (Optional)
E.1 Library Setup
E.2 Image Detection
E.3 Video Detection
References
Index
End User License Agreement
Chapter 5
Table 5.1 Sensor technology comparison.
Chapter 14
Table 14.1 Tensor processing unit comparison.
Appendix D
Table D.1 Tiny Yolo v3 performance comparison.
Chapter 1
Figure 1.1 Fourth industrial revolution [3].
Figure 1.2 Artificial intelligence.
Figure 1.3 Neural network development timeline.
Figure 1.4 ImageNet challenge.
Figure 1.5 Human neuron and neural network comparison.
Figure 1.6 Convolutional neural network.
Figure 1.7 Recurrent neural network.
Figure 1.8 Reinforcement learning.
Figure 1.9 Regression.
Figure 1.10 Clustering.
Figure 1.11 Application development cycle.
Figure 1.12 Artificial intelligence applications.
Chapter 2
Figure 2.1 Convolutional neural network architecture.
Figure 2.2 AlexNet feature map evolution.
Figure 2.3 Image convolution.
Figure 2.4 Activation function.
Figure 2.5 Pooling layer.
Figure 2.6 Dropout layer.
Chapter 3
Figure 3.1 Object recognition examples [19].
Figure 3.2 Object recognition.
Figure 3.3 Object detection/instance segmentation [18].
Figure 3.4 Object detection/semantic segmentation.
Figure 3.5 Feature extraction/matching [18].
Figure 3.6 Facial recognition [21].
Figure 3.7 Emotion recognition [22].
Figure 3.8 Gesture recognition [23].
Figure 3.9 Medical diagnosis [24].
Figure 3.10 Retail applications.
Figure 3.11 Airport security [26].
Chapter 4
Figure 4.1 Natural language processing market.
Figure 4.2 Convolutional neural network.
Figure 4.3 Recurrent neural network.
Figure 4.4 Long short‐term memory network.
Figure 4.5 Recursive neural network.
Figure 4.6 Reinforcement learning.
Figure 4.7 IBM Watson assistant.
Figure 4.8 Google translate.
Figure 4.9 Medical transcription [36].
Chapter 5
Figure 5.1 Autonomous vehicle [39].
Figure 5.2 Levels of driving automation.
Figure 5.3 Autonomous technology.
Figure 5.4 Computer vision technology [45].
Figure 5.5 Radar technology [45].
Figure 5.6 Localization technology [47].
Figure 5.7 Path planning technology [48].
Figure 5.8 Tesla traffic‐aware cruise control.
Figure 5.9 Vehicle‐to‐vehicle communication.
Figure 5.10 Vehicle to infrastructure communication.
Figure 5.11 Vehicle‐to‐pedestrian communication.
Figure 5.12 Autonomous vehicle law legislation.
Chapter 6
Figure 6.1 Unmanned aerial vehicle design.
Figure 6.2 Drone structure.
Figure 6.3 Six degrees of freedom.
Figure 6.4 Infrastructure inspection and maintenance [57].
Figure 6.5 Civil construction [58].
Figure 6.6 Agricultural drone [59].
Figure 6.7 Search and rescue drone [60].
Chapter 7
Figure 7.1 Telehealth/telemedicine.
Figure 7.2 Medical diagnosis [66].
Figure 7.3 Radiology analysis.
Figure 7.4 Smart medical device [71].
Figure 7.5 Electronic health record.
Figure 7.6 Medical billing [74].
Figure 7.7 Drug development.
Figure 7.8 Clinical trial [76].
Figure 7.9 Medical robot [78].
Figure 7.10 Elderly care [80].
Chapter 8
Figure 8.1 Fraud detection [84].
Figure 8.2 MasterCard decision intelligence solution [85].
Figure 8.3 Financial forecast [88].
Figure 8.4 Amazon forecast.
Figure 8.5 Stock trading [91].
Figure 8.6 Stock portfolio comparison.
Figure 8.7 Banking AI product.
Figure 8.8 Bank of America chatbot: Erica [97].
Figure 8.9 Accounting [100].
Figure 8.10 Insurance claims [104].
Chapter 9
Figure 9.1 Worldwide retail industry artificial intelligence benefits.
Figure 9.2 E‐commerce.
Figure 9.3 E‐commerce product recommendation.
Figure 9.4 Home improvement.
Figure 9.5 Virtual fitting.
Figure 9.6 Product promotion [115].
Figure 9.7 AmazonGo Store management [116].
Figure 9.8 Softbank pepper robot. https://softbankrobotics.com/emea/en/pepper.
Figure 9.9 Amazon warehouse management.
Figure 9.10 Amazon Prime Air Drone [122].
Figure 9.11 Walmart inventory management.
Figure 9.12 Supply chain [127].
Chapter 10
Figure 10.1 Artificial intelligence total manufacturing revenue [128].
Figure 10.2 Artificial intelligence manufacturing opportunity.
Figure 10.3 Defect detection.
Figure 10.4 Quality assurance [130].
Figure 10.5 Collaborative robot (Cobot) [136].
Figure 10.6 Generative design.
Figure 10.7 Predictive maintenance.
Figure 10.8 Sustainability [130].
Figure 10.9 Manufacture optimization [136].
Chapter 11
Figure 11.1 Smart agriculture worldwide market.
Figure 11.2 Crop and soil monitoring.
Figure 11.3 Corn leaves chemical maps.
Figure 11.4 Agricultural robot.
Figure 11.5 Greenhouse farming robot.
Figure 11.6 Pest control.
Figure 11.7 Precision farming [145].
Chapter 12
Figure 12.1 Smart city [151].
Figure 12.2 Smart transportation.
Figure 12.3 Smart parking [153].
Figure 12.4 Smart waste management.
Figure 12.5 Smart grid [159].
Figure 12.6 Renewable energy source.
Figure 12.7 Air pollution map (WHO) [160].
Chapter 13
Figure 13.1 Country national AI strategy.
Figure 13.2 The power of data [166] .
Figure 13.3 Cybersecurity.
Figure 13.4 Caseworkers support.
Figure 13.5 Virtual assistant.
Figure 13.6 Criminal recognition.
Figure 13.7 Crime spot prevention.
Figure 13.8 Risk assessment [178].
Figure 13.9 GTAS integrated workflow.
Figure 13.10 Apex screening at speed program.
Figure 13.11 Data privacy.
Figure 13.12 AI ethics.
Figure 13.13 AI support with use case.
Figure 13.14 AI support with trust among different countries.
Figure 13.15 AI support with various age groups and geographical locations.
Figure 13.16 AI support with employment.
Chapter 14
Figure 14.1 Two‐socket configuration.
Figure 14.2 Four‐socket ring configuration.
Figure 14.3 Four‐socket crossbar configuration.
Figure 14.4 Eight‐socket configuration.
Figure 14.5 Intel AVX‐512_VNNI FMA operation (VPDPWSSD).
Figure 14.6 Nvidia GPU Turing architecture.
Figure 14.7 Tensor core performance comparison [188].
Figure 14.8 NVLink2 Eight GPUs configuration.
Figure 14.9 NVLink2 four GPUs configuration.
Figure 14.10 NVLink2 two GPUs configuration.
Figure 14.11 NVLink single GPUs configuration.
Figure 14.12 High bandwidth memory architecture.
Figure 14.13 Systolic array matrix multiplication.
Figure 14.14 Brain floating point format.
Figure 14.15 TPU v3 pod configuration.
Figure 14.16 System reconfigurability.
Figure 14.17 Kneron system architecture.
Figure 14.18 Kneron edge AI configuration.
Appendix A
Figure A.1 Kneron neural processing unit (NPU) [199].
Appendix B
Figure B.1 Git package [200].
Figure B.2 Git menu [200].
Figure B.3 Python website [201].
Figure B.4 Python package release [201].
Figure B.5 Python installation menu [201].
Figure B.6 Python optional features menu [201].
Figure B.7 Python advanced options menu [201].
Figure B.8 Windows PowerShell [202].
Figure B.9 Driver installation menu [203].
Figure B.10 Image detection [199].
Figure B.11 Camera detection [199].
Appendix C
Figure C.1 Kneron system library [199].
Figure C.2 System parameters [199].
Figure C.3 NPU initialization source code [199].
Figure C.4 Image inference setup source code [199].
Figure C.5 Object class label and bounding box [199].
Figure C.6 Image detection [199].
Figure C.7 Camera inference setup source code [199].
Figure C.8 Camera detection [199].
Appendix D
Figure D.1 Serial transfer source code [199].
Figure D.2 Serial transfer operation [199].
Figure D.3 Pipeline transfer source code [199].
Figure D.4 Pipeline transfer operation [199].
Figure D.5 Parallel transfer source code [199].
Figure D.6 Parallel transfer operation [199].
Appendix E
Figure E.1 PyTorch installation menu [204].
Figure E.2 yolov5 object detection [205].
Figure E.3 Image detection [205].
Figure E.4 Video detection [205].
IEEE Press
445 Hoes Lane
Piscataway, NJ 08854
IEEE Press Editorial Board
Sarah Spurgeon,
Editor in Chief
Jón Atli Benediktsson, Anjan Bose, Adam Drobot, Peter (Yong) Lian,
Andreas Molisch, Saeid Nahavandi, Jeffrey Reed, Thomas Robertazzi,
Diomidis Spinellis, Ahmet Murat Tekalp
Albert Chun Chen Liu
Kneron Inc,
San Diego, USA
Oscar Ming Kin Law
Kneron Inc,
San Diego, USA
Iain Law
University of California,
San Diego, USA
Copyright © 2022 by The Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our website at www.wiley.com.
Library of Congress Cataloging‐in‐Publication Data
Names: Liu, Albert Chun Chen, author. | Law, Oscar Ming Kin, author. | Law, Iain, author.
Title: Understanding artificial intelligence : fundamentals and applications / Albert Chun Chen Liu, Oscar Ming Kin Law, Iain Law.
Description: Hoboken, New Jersey : Wiley‐IEEE Press, [2022] | Includes bibliographical references and index.
Identifiers: LCCN 2022017564 (print) | LCCN 2022017565 (ebook) | ISBN 9781119858331 (cloth) | ISBN 9781119858348 (adobe pdf) | ISBN 9781119858386 (epub)
Subjects: LCSH: Artificial intelligence.
Classification: LCC Q335 .L495 2022 (print) | LCC Q335 (ebook) | DDC 006.3–dc23/eng20220718
LC record available at https://lccn.loc.gov/2022017564
LC ebook record available at https://lccn.loc.gov/2022017565
Cover Design: Wiley
Cover Image: © Blue Planet Studio/Shutterstock
Education is not the learning of facts, but the training of the mind to think.
Albert Einstein