Enterprise Artificial Intelligence Transformation
AI is everywhere. From doctor's offices to cars and even refrigerators, AI technology is quickly infiltrating our daily lives. AI can transform simple tasks into technological feats performed at a human level. This will change the world, plain and simple. That's why AI mastery is such a sought-after skill for tech professionals.
Author Rashed Haq is a subject matter expert on AI, having developed AI and data science strategies, platforms, and applications for Publicis Sapient's clients for over 10 years. He shares that expertise in the new book, Enterprise Artificial Intelligence Transformation. The first of its kind, this book grants technology leaders the insight to create and scale their AI capabilities and bring their companies into the new generation of technology.
As AI continues to grow into a necessary capability for many businesses, more and more leaders are interested in harnessing the technology within their own organizations. In this new book, leaders will learn to master AI fundamentals, grow their career opportunities, and gain confidence in machine learning.
Enterprise Artificial Intelligence Transformation covers a wide range of topics, including:
* Real-world AI use cases and examples
* Machine learning, deep learning, and semantic modeling
* Risk management of AI models
* AI strategies for development and expansion
* AI Center of Excellence creation and management
If you're an industry, business, or technology professional who wants to attain the skills needed to grow your machine learning capabilities and effectively scale the work you're already doing, you'll find what you need in Enterprise Artificial Intelligence Transformation.
Page count: 492
Publication year: 2020
Cover
Foreword: Artificial Intelligence and the New Generation of Technology Building Blocks
Prologue: A Guide to This Book
Part I: A Brief Introduction to Artificial Intelligence
Chapter 1: A Revolution in the Making
The Impact of the Four Revolutions
AI Myths and Reality
The Data and Algorithms Virtuous Cycle
The Ongoing Revolution – Why Now?
AI: Your Competitive Advantage
Notes
Chapter 2: What Is AI and How Does It Work?
The Development of Narrow AI
The First Neural Network
Machine Learning
Supervised, Unsupervised, and Semisupervised Learning
Making Data More Useful
Semantic Reasoning
Applications of AI
Notes
Part II: Artificial Intelligence in the Enterprise
Chapter 3: AI in E-Commerce and Retail
Digital Advertising
Marketing and Customer Acquisition
Cross-Selling, Up-Selling, and Loyalty
Business-to-Business Customer Intelligence
Dynamic Pricing and Supply Chain Optimization
Digital Assistants and Customer Engagement
Notes
Chapter 4: AI in Financial Services
Anti-Money Laundering
Loans and Credit Risk
Predictive Services and Advice
Algorithmic and Autonomous Trading
Investment Research and Market Insights
Automated Business Operations
Notes
Chapter 5: AI in Manufacturing and Energy
Optimized Plant Operations and Assets Maintenance
Automated Production Lifecycles
Supply Chain Optimization
Inventory Management and Distribution Logistics
Electric Power Forecasting and Demand Response
Oil Production
Energy Trading
Notes
Chapter 6: AI in Healthcare
Pharmaceutical Drug Discovery
Clinical Trials
Disease Diagnosis
Preparation for Palliative Care
Hospital Care
Notes
Part III: Building Your Enterprise AI Capability
Chapter 7: Developing an AI Strategy
Goals of Connected Intelligence Systems
The Challenges of Implementing AI
AI Strategy Components
Steps to Develop an AI Strategy
Some Assembly Required
Moving Ahead
Notes
Chapter 8: The AI Lifecycle
Defining Use Cases
Collecting, Assessing, and Remediating Data
Feature Engineering
Selecting and Training a Model
Managing Models
Testing, Deploying, and Activating Models
Conclusion
Chapter 9: Building the Perfect AI Engine
AI Platforms versus AI Applications
What AI Platform Architectures Should Do
Some Important Considerations
AI Platform Architecture
Notes
Chapter 10: Managing Model Risk
When Algorithms Go Wrong
Mitigating Model Risk
Model Risk Office
Notes
Chapter 11: Activating Organizational Capability
Aligning Stakeholders
Organizing for Scale
AI Center of Excellence
Structuring Teams for Project Execution
Managing Talent and Hiring
Data Literacy, Experimentation, and Data-Driven Decisions
Conclusion
Notes
Part IV: Delving Deeper into AI Architecture and Modeling
Chapter 12: Architecture and Technical Patterns
AI Platform Architecture
Technical Patterns
Conclusion
Chapter 13: The AI Modeling Process
Defining the Use Case and the AI Task
Selecting the Data Needed
Setting Up the Notebook Environment and Importing Data
Cleaning and Preparing the Data
Understanding the Data Using Exploratory Data Analysis
Feature Engineering
Creating and Selecting the Optimal Model
Note
Part V: Looking Ahead
Chapter 14: The Future of Society, Work, and AI
AI and the Future of Society
AI and the Future of Work
Regulating Data and Artificial Intelligence
The Future of AI: Improving AI Technology
And This Is Just the Beginning
Notes
Further Reading
General
Society
Work
Acknowledgments
About the Author
Index
End User License Agreement
Chapter 2
Figure 2.1 Examples of functions f(x) that can be estimated by using machine...
Figure 2.2 Using training data for customers 1 to m to estimate f that will ...
Figure 2.3 Using the machine-learning model (f) to predict if customer numbe...
Figure 2.4 An example of a deep neural network.
Figure 2.5 Example of a type of knowledge graph.
Figure 2.6 Types of AI systems.
Chapter 5
Figure 5.1 Heuristic showing different failure rates during equipment compon...
Figure 5.2 Demand forecasting using historical sales and new data sources.
Figure 5.3 Energy trading scenario.
Chapter 7
Figure 7.1 Different types of third-party data that is available commerciall...
Chapter 8
Figure 8.1 The workflow for AI, machine learning, and data science projects....
Figure 8.2 A sample map of use case objectives, modeling tasks to support th...
Figure 8.3 Graph showing use cases by value and complexity.
Figure 8.4 Process for training and validating the model.
Figure 8.5 Underfitting and overfitting for regression models (top) and for ...
Figure 8.6 Training error versus testing error.
Figure 8.7 The confusion matrix setup.
Figure 8.8 Receiver operating characteristics (ROC) curve and the area under...
Figure 8.9 Comprehensive model management spans four types of configurations...
Figure 8.10 AI DevOps process.
Chapter 9
Figure 9.1 Impact of using an AI platform.
Figure 9.2 Summary of benefits of using an AI platform.
Figure 9.3 Types of users of an AI platform (vertical axis) and how they eng...
Figure 9.4 Batch versus real time for data, model training, and model infere...
Figure 9.5 The different patterns of batch or streaming data, model training...
Chapter 10
Figure 10.1 Approximating a polynomial function using simpler linear functio...
Figure 10.2 An example of how surrogate models can help with interpretabilit...
Chapter 11
Figure 11.1 Centralized, decentralized, and federated operating models for A...
Figure 11.2 Key functions within an AI center of excellence.
Chapter 12
Figure 12.1 Architecture components for an AI platform.
Figure 12.2 Question-and-answer systems built on knowledge modeling.
Figure 12.3 Leveraging multiple models for hyperpersonalization.
Figure 12.4 Orchestrating personalization interactions.
Figure 12.5 Activities for anomaly detection.
Figure 12.6 Interaction pattern for IoT and edge devices.
Figure 12.7 RPA-based digital workforce architecture.
Chapter 13
Figure 13.1 Importing relevant libraries that will be used.
Figure 13.2 Importing the data for customer churn.
Figure 13.3 Looking at the top few rows of the data.
Figure 13.4 Heatmap of missing values. If there were any, they would show as ...
Figure 13.5 Transforming categorical text data to numerical values.
Figure 13.6 One-hot encoding of US states.
Figure 13.7 Plotting frequency of datasets.
Figure 13.8 Frequency distribution of data of some of the columns.
Figure 13.9 Heatmap of the correlations of some of the key columns with each...
Figure 13.10 Looking for outliers.
Figure 13.11 Imbalance in label or target data.
Figure 13.12 Scaling the relevant data columns.
Figure 13.13 Visualizing the data distribution before scaling (left) and aft...
Figure 13.14 Dropping individual charge columns and adding the total charge ...
Figure 13.15 Analyzing churn rate by state.
Figure 13.16 Splitting data for training and testing in the ratio of 75:25....
Figure 13.17 Set up a logistic regression model for binary classification.
Figure 13.18 Percentage of customers that did not churn in the validation da...
Figure 13.19 Looking at the confusion matrix and precision, recall, and F1 s...
Figure 13.20 Receiver operating characteristic (ROC) curve and area under th...
Figure 13.21 Augmenting the minority data.
Figure 13.22 Trying a different algorithm – only lines 2 and 3 in the first ...
Figure 13.23 ROC curve and AUC using XGBoost.
Figure 13.24 Feature importance for the top 10 features in the model.
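The Chapter 13 captions above trace a complete churn-modeling workflow: import and clean the data, scale the relevant columns, split 75:25 for training and testing, fit a logistic regression for binary classification, and evaluate with a confusion matrix and ROC curve. As a rough illustration only (the book's actual notebook and customer-churn dataset are not reproduced here), a minimal sketch of those steps using scikit-learn and synthetic data might look like:

```python
# Hypothetical sketch of the workflow in Figures 13.1-13.20; the data here
# is synthetic, standing in for the book's customer-churn dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report, roc_auc_score

# Stand-in for the imported churn data, with imbalanced labels
# (cf. Figure 13.11 on label imbalance).
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.85, 0.15], random_state=42)

# Scale the relevant columns (cf. Figure 13.12).
X = StandardScaler().fit_transform(X)

# Split for training and testing in the ratio 75:25 (cf. Figure 13.16).
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Logistic regression for binary classification (cf. Figure 13.17).
model = LogisticRegression().fit(X_train, y_train)

# Confusion matrix, precision/recall/F1, and ROC AUC (cf. Figures 13.19-13.20).
y_pred = model.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The book then goes further, augmenting the minority class and swapping in XGBoost (Figures 13.21-13.24); those steps follow the same pattern, replacing the estimator above.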
Rashed Haq
Copyright © 2020 by Rashed Haq. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media such as a CD or DVD that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Cataloging-in-Publication Data
Names: Haq, Rashed, author.
Title: Enterprise artificial intelligence transformation / Rashed Haq.
Description: First Edition. | Hoboken : Wiley, 2020. | Includes index.
Identifiers: LCCN 2019056747 (print) | LCCN 2019056748 (ebook) | ISBN 9781119665939 (hardback) | ISBN 9781119665861 (adobe pdf) | ISBN 9781119665977 (epub)
Subjects: LCSH: Business enterprises—Technological innovations. | Artificial intelligence—Economic aspects. | Organizational learning. | Organizational effectiveness.
Classification: LCC HD45 .H327 2020 (print) | LCC HD45 (ebook) | DDC 006.3068—dc23
LC record available at https://lccn.loc.gov/2019056747
LC ebook record available at https://lccn.loc.gov/2019056748
Cover Design: Wiley
Cover Image: © Darius Griffin Haq
We are on the brink of the algorithmic enterprise. Today's generation of business and technology leaders can have a metamorphic impact on humanity by catalyzing applied AI for every enterprise.
This book is dedicated to Abbu and Ammi, who encouraged me to pursue my dreams; to Tayyba, who lovingly supported me through life; and to Darius and Athena, who are my endless inspiration.
Over the past few years, I have been fortunate to discuss artificial intelligence (AI) with C-suite executives from the largest companies in the world, along with developers and entrepreneurs just getting started in this area. These interactions impressed on me how quickly the conversation is becoming commonplace for business executives, even though AI in business is still in its infancy. As you pick up this book, I hope you realize what an incredible time we live in and how transformative having computers that can mimic cognitive abilities will be in the coming years and decades. Digital transformation is becoming a commodity play as organizations shift into the cloud, and business leaders must plan for and utilize a new set of technology building blocks to help differentiate their companies. Of these, the single biggest impact, in my opinion, will come from AI.
When I entered the software industry over 25 years ago, everything we built revolved around three core elements: computing, storage, and networking, all of which were evolving at an incredible rate:
Computing – 286 to 386 to 486 to Pentium, and more
Storage – 5¼″ floppy to 3½″ floppy, to Iomega drives, to thumb drives, and more
Networking – corporate network to dial-up modem to DSL, to 2G, 3G, 4G, 5G, and more
The evolution of these building blocks created new computing infrastructure modes (client computing to client-server, to Internet, to cloud and mobile, to today's intelligent cloud and intelligent edge) that allowed technology to better support business and consumer needs. Core computing paradigms and the infrastructure models we all rely on have continued to advance over the past three decades, taking us on a journey in terms of how computer technology is used in business and in our personal lives. Today's conversations have shifted away from traditional technology infrastructures and onto digital transformation and what it means for each business and industry. While the backbone of digital transformation is based on computing, storage, and networking, the next generation is beginning with an entirely new set of building blocks.
These new elements consist of things we read about every day and use in various ways within our organizations: Internet of Things (IoT), blockchain, mixed reality, artificial intelligence, and at some point in the future, quantum computing. The next generation of employees will be natively familiar with these building blocks and be able to harness them to more broadly and dramatically redefine every industry. It is entirely possible that future changes will eclipse the advent of the PC, mobile, and the current round of cloud-driven digital transformation.
Although these building blocks are powerful, AI provides the most potential of any tool to impact businesses and industries. Unlike the other elements, which apply to clearly defined use patterns, AI can be leveraged in every area of the business. This includes product development, finance, operations, employee management, and supplier/partner/channel alignment. AI can be used to impact both top-line growth and bottom-line efficiencies and leveraged at any point in a business or product lifecycle. Given the breadth of opportunities and the importance of a balanced approach to your organization's AI journey, this book provides a critical reference for business leaders on how to think about your company's – as well as your personal – AI plan.
Each organization will undergo its own AI journey in line with its business strategy and needs. Much like the Internet when it first came along, the excitement and energy for AI is incredibly high and the long-term opportunities immense. Knowing that we are still in the beginning stages of real AI implementations allows us to be more thoughtful and prudent in how we approach this area. Furthermore, the tools and data needed for AI are also on their own journey and continue to evolve at an incredibly high rate.
This move toward production-ready AI is based on three core advancements:
Global-scale infrastructure – computing, storage, and networking at scale based on the cloud, which ultimately enables any developer or data scientist, anywhere on the planet, to work with the data and tools necessary to enable AI solutions.
Data – the growth of raw data, both machine- and device-driven (PCs, phones, IoT sensors, etc.), and human generated (web search, social media, etc.) provides the fuel for creating AI models.
Reusable algorithms – the advancement of reusable models or algorithms for basic cognitive functions (speech, understanding, vision, natural language processing, etc.) democratizes access to AI.
By combining these three elements at scale, any organization or developer can work with AI. Organizations can choose to work at the most fundamental levels creating their own models and algorithms or take advantage of prebuilt models or tools to build on. The challenge then becomes where to start and what to focus on.
Today, we are seeing a set of patterns start to emerge within organizations across a broad set of industries. These include:
Virtual agents, which interact with employees, customers, and partners on behalf of a company. These agents can help answer questions, provide support, and become a proactive representative of your company and your brand over time.
Ambient intelligence, which focuses on tracking people and objects in a physical space. In many ways, this is using AI to map activity in a physical space to a digital space, and then allowing actions on top of the digital graph. Many people will think about “pick up and go” retail shopping experiences as a prime example, but this pattern is also applicable to safety, manufacturing, construction scenarios, business meetings, and more.
AI-assisting professionals, which can be used to help almost any professional be more effective. For example, they can help the finance department with forecasting, lawyers with writing contracts, sellers with opportunity mapping, and more. We also see AI assisting doctors in areas such as genomics and public health.
Knowledge management, which takes a custom set of information (e.g., a company's knowledge base) and creates a custom graph that allows the data to be navigated much like the web today. People will get custom answers to their questions, versus a set of links to data. This is a powerful tool for businesses.
Autonomous systems, which include not only self-driving cars but also robotic process automation and network protection. Threats to a network can be hard to identify as they occur, and the lag before responding can result in considerable damage. Having the network respond automatically as a threat is happening can minimize risk and free the team to focus on other tasks.
Although these patterns are evolving and do not apply to every business or industry, it is important to note that AI is being used across a variety of business scenarios. So, as a business leader, where do you start? The intent and power of this book are to help business leaders answer this and many other important questions. In this book, Rashed Haq leverages his 20-plus years of experience helping companies navigate large-scale AI and analytics transformations to help you plot your journey and identify where to spend your energy.
A few things to keep in mind as you read this book. The first is that data is the fuel for AI; without data there is no AI, so you must consider which unique data assets your organization has. Is that data accessible and well managed? Do you have a data pipeline for the future? And are you treating data like an asset in your business? The next thing to remember is AI is a tool, and like any other tool it should be applied in areas that help you differentiate as a company or business. Just because you can build a virtual agent, or a knowledge management system, does not mean you should. Will that work help you with your core differentiation? Where do you have unique skills around data, machine learning, or AI? Where should you combine your unique data and skills to enhance your organization's differentiation? At the same time, should you be looking for partners or software providers to infuse their solutions with AI, so you can focus your energy on the things you do uniquely? If you think you have new business opportunities based on AI or data, think about them carefully and whether you can effectively execute against them. Finally, what is your policy around AI and ethics? Have you thought about the questions you will be asked from employees, partners, and customers?
The AI opportunity is both real and a critical part of your current and future planning processes. At the same time, it is still a fast-moving space, and will evolve considerably in the next 5 to 10 years. That means it is critical as a business leader to understand the basics of what AI is, the opportunities it offers, and the right questions to ask your team and partners. This book provides you with the background you need to help you understand the broader AI journey and blaze your own path.
As you begin thinking more deeply about AI and your company's journey, keep this simple thought in mind: “It's too early to do everything … it's too late to do nothing” – so leverage this book to help you figure out where to start!
Steve Guggenheimer
Corporate Vice President for AI, Microsoft
More business leaders are recognizing the value of leveraging artificial intelligence (AI) within their organizations and moving toward analytical clairvoyance: a state in which they can preemptively assess what situations are likely to arise in their company or business environments and determine how best to respond. The potential for enterprise AI adoption to transform existing businesses to help their customers and suppliers is vast, and there is little question today that AI is an increasingly necessary tool in business strategy. We are on the cusp of creating what has been called the algorithmic enterprise: an organization that has industrialized the use of its data and complex mathematical algorithms, such as those used in AI models, to drive competitive advantage by improving business decisions, creating new product lines and services, and automating processes.
However, the whole field of artificial intelligence is both immensely complex and continually evolving. Many businesses are running into challenges incorporating AI within their operating models. The problems come in many forms – change management, technical and algorithmic issues, hiring and talent management, and other organizational challenges. There is emerging legislation designed to protect both data privacy and fair use of algorithms that can prevent an AI solution from being deployed or may create legal problems for companies related to potential discrimination against minorities, women, or other classes of individuals.
Due to these roadblocks, few companies have successfully built AI into an enterprise-scale capability, and many have not moved beyond the proof-of-concept phase. Scaling AI is a nontrivial proposition. But despite all this, AI is becoming a mainstream business tool. Many startups and the large technology companies are using AI to create new paradigms, business models, and products to benefit everyone. However, the greatest impact from AI will be unleashed when most large or medium-sized companies go through an enterprise AI transformation to improve the lives of their billions of customers. It is an exciting time for today's generation of business and technology leaders because they can have a metamorphic impact on humanity by overcoming the scaling challenges to lead this transformation in their businesses.
I have been lucky to work and talk with leaders in many large organizations as they journey toward incorporating AI across their businesses. The challenges they face are very different from the problems of digitally native companies because they have well-established and successful organizational structures, sales channels, supply chains, and the associated culture. I found that there is a widespread desire for reliable information about applying AI within these organizations but very little literature available that gives a clear, pragmatic guide to building an enterprise AI capability as well as possible business applications. There is no playbook to follow to understand and then address the opportunities of AI. I decided to write this book so that more of today's leaders will understand the appropriate and necessary steps for jump-starting a scalable, enterprise-wide AI strategy capable of transforming their business while avoiding the challenges mentioned earlier. This book is the guidebook to help you understand, strategize for, and compete on the AI playing field. This knowledge will help you not only participate in but play a leading role in your company's AI transformation.
The book is a practical guide for business and technology leaders who are passionate about using AI to solve real-world business problems at scale. Executive officers, board members, operations managers, product managers, growth hackers, business strategy managers, product marketing managers, project managers, other company leaders, and anyone else interested in this growing and exciting field will benefit from reading it. No prior knowledge of AI is required. The book will also be useful to the AI practitioner, academic, data analyst, data scientist, and analytics manager who wants to understand how she can deliver AI solutions in the business world and what challenges she needs to address in the process.
I have organized the book into five parts.
In Part I, “A Brief Introduction to Artificial Intelligence,” I discuss the different types of AI, such as machine learning, deep learning, and semantic reasoning, and build an understanding of how they work. I also cover the history of AI and what is different now.
In Part II, “Artificial Intelligence in the Enterprise,” I cover AI use cases in a variety of industries, from banking to industrial manufacturing. These examples will help you gain an understanding of how AI is already in use today, how it is affecting different business functions, and which of these may apply to your own business to get the most out of your investment. This is not meant to be a comprehensive blueprint of all potential uses within these industries, nor a view of what is possible in the near future.
In Part III, “Building Your Enterprise AI Capability,” you will learn what it takes to define and implement an enterprise-wide AI strategy and how to lead successful AI projects to deliver on that strategy. Topics include creating a robust data strategy, understanding the AI lifecycle, knowing what makes a good AI platform architecture, approaches to managing AI model risk and bias, and building an AI center of excellence.
Part IV, “Delving Deeper into AI Architecture and Modeling,” will provide a more in-depth description of the architecture, various technical patterns for applications that will be useful as you move further toward implementations, and how AI modeling works using a detailed example.
Finally, Part V, “Looking Ahead,” will look at the future of AI and what it might mean for society and work.
Feel free to jump around, reading what you need when you need it. For example, if you are already familiar with AI and understand your use cases, start at Part III. If you are looking for ideas for use cases, take a look at Part II. When you are ready to implement your first set of projects, you can come back to Part IV.
Incorporating AI into your business can be easier than you might think once you have a roadmap, and this book provides you with the right information you need to succeed.
The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.
Edsger W. Dijkstra, professor of computer science at the University of Texas
Since the 1940s, dramatic technological breakthroughs have not only made computers an essential and ubiquitous part of our lives but also made the development of modern AI possible – in fact, inevitable. All around us, AI is in use in ways that fundamentally affect the way we function. It has the power to save a great deal of money, time, and even lives. AI is likely to impact every company's interactions with its customers profoundly. An effective AI strategy has become a top priority for most businesses worldwide.
Successful digital personal assistants such as Siri and Alexa have prompted companies to bring voice-activated helpers to all aspects of our lives, from streetlights to refrigerators. Companies have built AI applications of a wide variety and impact, from tools that help automatically organize photos to AI-driven genomic research breakthroughs that have led to individualized gene therapies. AI is becoming so significant that the World Economic Forum [1] is calling it the fourth industrial revolution.
The first three industrial revolutions had impacts well beyond the work environment. They reshaped where and how we live, how we work, and to a large extent, how we think. The World Economic Forum has proposed that the fourth revolution will be no less impactful.
During the first industrial revolution in the eighteenth and nineteenth centuries, the factory replaced the individual at-home manufacturer of everything from clothing to carriages, creating the beginnings of organizational hierarchies. The steam engine was used to scale up these factories, starting the mass urbanization process, causing most people to move from a primarily agrarian and rural way of life to an industrial and urban one.
From the late nineteenth into the early twentieth century, the second industrial revolution was a period in which preexisting industries grew dramatically, with factories transitioning to electric power to enhance mass production. The rise of the steel and oil industries at this time also helped scale urbanization and transportation, with oil replacing coal for the world's navies and global shipping.
The third industrial revolution, also referred to as the digital revolution, was born when technology moved from the analog and mechanical to the digital and electronic. This transition began in the 1950s and is still ongoing. New technology included the mainframe and the personal computer, the Internet, and the smartphone. The digital revolution drove the automation of manufacturing, the creation of mass communications, and a scaling up of the global service industry.
The shift in emphasis from standard information technology (IT) to artificial intelligence is likely to have an even more significant impact on society. This fourth revolution includes a fusion of technologies that blurs the lines between the physical, digital, and biological spheres2 and is marked by breakthroughs in such fields as robotics, AI, blockchain, nanotechnology, quantum computing, biotechnology, the Internet of Things (IoT), 3D printing, and autonomous vehicles, as well as the combinatorial innovation3 that merges several of these technologies into sophisticated business solutions. Like electricity and IT, AI is considered a general-purpose technology – one that can be applied broadly in many situations that will ultimately affect an entire economy.
In his book The Fourth Industrial Revolution, World Economic Forum founder and executive chairman Klaus Schwab says, “Of the many diverse and fascinating challenges we face today, the most intense and important is how to understand and shape the new technology revolution, which entails nothing less than a transformation of humankind. In its scale, scope, and complexity, what I consider to be the fourth industrial revolution is unlike anything humankind has experienced before.”4 This fourth revolution is creating a whole new paradigm that is poised to dramatically change the way we live and work, altering everything from making restaurant reservations to exploring the edges of the universe.
It is also causing a significant shift in the way we do business. Changes over the past 10 years have made this shift inevitable. Companies need to be proactive to stay competitive; those that are not will face more significant hurdles than ever before. And things are happening more quickly than many people realize. The pace of each industrial revolution has dramatically accelerated from that of the previous one, and the AI revolution is no exception. Even Google, which led the mobile-first world, has substantially shifted gears to stay ahead. As Google CEO Sundar Pichai vowed, “We will move from a mobile-first to an AI-first world.”5
Richard Foster, of the Yale School of Management, has said that because of new technologies an S&P company is now being replaced almost every two weeks, and the average lifespan of an S&P company has dropped by 75% to 15 years over the past half-century.6 Even more intriguing is that regardless of how well a company was doing, its prior successes did not afford protection unless it jumped on the technology innovations of the times.
Along similar lines, McKinsey found that the fastest-growing B2B companies “are using advanced analytics to radically improve their sales productivity and drive double-digit sales growth with minimal additions in their sales teams and cost base.”7 In another paper, they estimated that in 2016, $26 billion to $39 billion was invested in AI, and that number is growing.8 McKinsey posits the reason for this: “Early evidence suggests that AI can deliver real value to serious adopters and can be a powerful force for disruption.”9 Early AI adopters, the study goes on, have higher profit margins, and the gap between them and firms that are not adopting AI enterprise-wide is expected to widen in the future.
All this is good news for businesses that embrace innovation. The changeover to an AI-driven business environment will create big winners among those willing to embrace the AI revolution.
To most people, AI can seem almost supernatural. But at least for the present, despite its extensive capabilities, AI is more limited than that. Currently, computer scientists group AI into two categories: weak or narrow AI and strong AI, also known as artificial general intelligence (AGI). AGI is defined as AI that can replicate the full range of human cognitive abilities and can apply intelligence to any given problem as opposed to just one. Narrow AI can only focus on a specific and narrow task.
When Steven Spielberg created the movie AI, he visualized humanoid robots that could do almost everything human beings could. In some instances, they replaced humans altogether. AGI of this type is only hypothetical at this point, and it is unclear if or when we will develop it. Scientists even debate whether AGI is actually achievable and whether the gap between machine and human intelligence can ever be closed. Reasoning, planning, self-awareness: these are characteristics developed by humans when they are as young as two or three; but they remain elusive goals for any modern computer.
No computer in existence today can think like a human, and probably no computer will do so in the near future.10 Despite the media attention, there is no reason to be concerned that a simulacrum of HAL,11 from Stanley Kubrick's film 2001, will turn your corporate life upside-down. On the other hand, artificial intelligence is no longer the stuff of science fiction, and there is already a large variety of successful and pragmatic applications, some of which are covered in Part II. The majority of these are narrow AI, and some, at best, are broad AI. We define broad AI as a combination of several narrow AI solutions that together provide a stronger capability, such as an autonomous vehicle. None of these are AGI applications.
So how are companies using AI to succeed in this ever-changing world?
More companies are recognizing that in today's evolving business climate, they will soon be valued not just for their existing businesses but also for the data they own and their algorithmic use of it. Algorithms give data its extrinsic value, and sometimes even its intrinsic value – for example, IoT data is often so voluminous that without complex algorithms, it has no inherent value.
Humans have been analyzing data since the first farmer sold or bartered the first sheaf of grain to her first customer. Individuals, and then companies, continued to generate analytics on their data through the first three industrial revolutions. Data analysis to improve businesses became even more indispensable starting around 1980, when companies began to use their data to improve daily business processes. By the late 1980s, organizations were beginning to measure most business and engineering processes. This inspired Motorola engineer Bill Smith to create a formal technique for measurement in 1986. His technique became known as Six Sigma.
Companies used Six Sigma to identify and optimize variables in manufacturing and business to improve the quality of the output of a process. Relevant data about operations were collected, analyzed to determine cause-and-effect relationships, and then processes were enhanced based on the data analysis. Using Six Sigma meant collecting large amounts of data, but that did not stop an impressive number of companies from doing it. In the 1990s, GE management made Six Sigma central to its business strategy, and within a few years, two-thirds of the Fortune 500 companies had implemented a Six Sigma strategy.
The more data there was, the more people wanted to use it to improve their business processes. The more it helped, the more they were willing to collect data. This feedback loop created a virtuous cycle. This virtuous cycle is how AI works within a data-driven business—collect the data, create models that give insights, and then use these insights to optimize the business. The improved company allows more data collection – for example, from the additional customers or transactions enabled by the more optimized business – allowing more sophisticated and more accurate AI models, which further optimizes the business.
Although AI has been around since the 1950s, it is only in the last few years that it has started to make meaningful business impacts. This is due to a particular confluence of Internet-driven data, specialized computational hardware, and maturing algorithms.
The idea of connecting computers over a wide-area network, or Internet, had been born in the 1950s, simultaneous with the electronic computer itself. In the 1960s, one of these wide-area networks was funded and developed by the US Department of Defense and refined in computer science labs located in universities around the country. The first message on one of these networks was sent across what was then known as the ARPANET12 in 1969, traveling from the University of California, Los Angeles, to Stanford University. Commercial Internet service providers (ISPs) began to emerge in the late 1980s. Protocols for what would become the World Wide Web were developed in the 1980s and 1990s. In 1995, the World Wide Web took off, and online commerce emerged. Companies online started collecting more data than they knew how to utilize.
Businesses had always used internally generated data for data analytics. However, since the beginning of the Internet, broadband adoption in homes, and the emergence of social media and the smartphone, our digital interactions have grown exponentially, creating the era of user-generated data. A proliferation of sensors, such as those that measure vibrations in machines in an industrial setting or the temperature in consumer products such as coffeemakers, added to this data trove. It is estimated that there are currently over 100 sensors per person, all enabled to collect data. This data became what we refer to as big data.
Big data encompasses an extraordinary amount of digital information, collected in forms usable by computers: data such as images, videos, shopping records, social network information, browsing profiles, and voice and music files. These vast datasets have resulted from the digitization of additional processes, such as social media interactions and digital marketing. New paradigms had to be developed to handle this Internet-scale data: MapReduce was first used by Google in 2004 and Hadoop by Yahoo in 2006 to store and process these large datasets. Using this data to train AI models has enabled us to get more significant insights at a faster pace, vastly increasing the potential for AI solutions.
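The MapReduce pattern mentioned above can be illustrated with a toy, in-memory sketch (the helper names here are hypothetical, and real MapReduce and Hadoop distribute these phases across many machines): a map phase turns each record into key-value pairs, and a reduce phase folds together all values that share a key.

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply map_fn to every record, emitting (key, value) pairs."""
    for record in records:
        yield from map_fn(record)

def reduce_phase(pairs, reduce_fn):
    """Group values by key, then fold each group with reduce_fn."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return {key: reduce_fn(values) for key, values in groups.items()}

# The classic word-count example: each line of text is a record.
lines = ["the cat sat", "the cat ran", "a dog sat"]
pairs = map_phase(lines, lambda line: [(word, 1) for word in line.split()])
counts = reduce_phase(pairs, sum)
# counts["the"] == 2, counts["sat"] == 2, counts["dog"] == 1
```

Because each map call and each per-key reduction is independent, this structure is what lets a cluster process Internet-scale datasets in parallel.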
As the volume of available data soared, storage costs plummeted, providing AI with all the raw material it needed to make sophisticated predictions. In the early 2000s, Amazon introduced cloud-based computing and storage, making high-performance computation on large datasets available to the IT departments of many businesses. By 2005, the price of storage had dropped 300-fold in 10 years, from approximately $300 to about $1 per gigabyte. In 2010, Microsoft and Google helped further expand storage capacity with their cloud storage and computing releases: Microsoft Azure and Google Cloud Platform.
In the 1960s, Intel co-founder Gordon Moore predicted that the processing power of computer chips would double approximately every year. Known as Moore's Law, this observation referred to the exponential growth of computational power. In the 1990s, hardware breakthroughs such as the development of the graphics processing unit (GPU) increased computational processing power more than a million-fold,13 thanks to the GPU's ability to execute computations in parallel. Initially used for graphics rendering, the GPU would later make it possible to train and run sophisticated AI algorithms that required enormous datasets. More recently, Google introduced the tensor processing unit (TPU), an AI accelerator chip for deep learning computations.
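The compound growth that Moore's Law describes can be made concrete with a line of arithmetic (an illustrative sketch, not a figure from the book): doubling once per period compounds into a power of two.

```python
def moore_factor(years, period=2.0):
    """Growth factor after `years`, doubling once per `period` years."""
    return 2 ** (years / period)

# Doubling every two years for two decades compounds to a 1,024x increase.
factor = round(moore_factor(20))  # 2**10 == 1024
```

This is why even a modest-sounding doubling period produces the dramatic generational leaps in computing power the chapter describes.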
In addition to the hardware itself, advances in parallel computing were leveraged to parallelize the training of AI models. Cloud access to these services from Amazon, Microsoft, and Google made it far easier for companies to venture into this space; many would have been much more tentative had each needed to build its own large-scale, parallel processing infrastructure.
Breakthrough techniques in artificial intelligence14 have been occurring since the 1950s, when early work on AI began to accelerate. Models based on theoretical ideas of how the human brain works, known as neural networks, were developed, followed by a variety of other attempts to teach computers to learn for themselves. These machine learning (ML) algorithms15 enabled computers to recognize patterns from data and make predictions based on those patterns, as did the increasingly complex, multilayered neural nets that are used in the type of machine learning known as deep learning.16 Another breakthrough came in the 1980s when the method of back-propagation was used to train artificial neural networks, enabling the network to optimize itself without human intervention. Through the 1990s and early 2000s, scientists developed more approaches to building neural networks to solve different types of problems such as image recognition, speech to text, forecasting, and others.
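The idea behind back-propagation – the chain rule applied to a network's errors – can be seen in miniature in a single sigmoid neuron trained by gradient descent (an illustrative sketch with made-up data, not any production framework). A full neural network repeats this forward-then-backward pattern across many layers.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Learn the logical AND function from its four labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.5  # learning rate

for _ in range(5000):
    for (x1, x2), y in data:
        p = sigmoid(w1 * x1 + w2 * x2 + b)  # forward pass: prediction
        grad = p - y                        # error gradient (cross-entropy loss)
        w1 -= lr * grad * x1                # backward pass: chain rule
        w2 -= lr * grad * x2                # pushes each weight to reduce error
        b -= lr * grad

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
# After training, predictions match the labels: [0, 0, 0, 1]
```

No human ever tells the neuron the rule for AND; the weights optimize themselves from the data, which is the essence of the self-adjusting training the paragraph describes.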
In 2009, computer scientist Andrew Ng, then at Stanford University, trained a neural network with 100 million parameters on graphics processing units (GPUs), showing that what might take weeks on CPUs could now be computed in just days. This implementation showed that powerful algorithms could utilize large available datasets and process them on specialized hardware to train complex machine learning and deep learning models.
The progress in algorithms and technologies has continued, leading to startling advances in the computer's ability to perform complex tasks, ably demonstrated when the program AlphaGo beat the world's top human Go player in 2016.17 The game of Go has incredibly simple rules, but it is more complicated to play than chess, with more possible board positions than atoms in the universe. This complexity made it impossible to program AlphaGo with decision trees or rules about which move to make when it was in any given board position. To win, AlphaGo had to learn from observing professional games and playing against itself.
The thorny problem of speech recognition was another hard-to-solve need. The infinite variety of human accents and timbres had previously sunk an array of attempts to make speech comprehensible to computers. However, rather than programming for every conceivable scenario, engineers fed terabytes of data (such as speech samples) to the networks behind advanced voice-recognition learning algorithms. The machines were then able to use these examples to transcribe speech. This approach has enabled breakthroughs like Google Translate, which can currently translate over 100 languages. Google has also released headphones that can translate 40 languages in real time.
Beyond speech recognition, companies have now “taught” computers how to both ascertain exactly what a person wants and address that need, all so that Alexa can understand that you want to listen to Bryan Adams, not Ryan Adams, or distinguish between the two Aussie bands Dead Letter Circus and Dead Letter Chorus. Virtual assistants like these can be even more useful, doing everything from taking notes for a physician while she's interacting with a patient to sorting through vast amounts of research data and recommending options for a course of therapy.
Even as technology flashes forward, existing AI techniques are continuing to provide exceptional value, enabling new and exciting ways to conduct tasks such as analyzing images. With digital and smartphone cameras, it is easier than ever to upload pictures to social networks such as Facebook, Pinterest, and Instagram. These images are becoming a larger and larger portion of big data. Their power can be illustrated by research done by Fei-Fei Li, professor of computer science at Stanford University and, until recently, head of machine learning at Google Cloud.
Li, who specializes in computer vision and machine learning, was instrumental in creating the labeled database ImageNet. In 2017, she used labeled data to accurately predict how different neighborhoods would vote based merely on the cars parked on their streets.18 To do so, she took labeled images of cars from car-sales website Edmunds.com, and using Google Street View, taught a computer to identify which cars were parked on which streets. By comparing this to labeled data from the American Community Survey and presidential election voting data, she and her colleagues were able to find a predictive correlation among cars, demographics, and political persuasion.
Research in AI and its application is growing exponentially. Universities and large technology companies are doing higher volumes of research to advance AI's capabilities and to understand better why AI works as well as it does. The student population studying AI technologies has grown proportionately, and even businesses are setting up AI research groups and multiyear internship programs, such as the AI residency program at Shell.19 All these investments are continuing to drive the evolution of AI.
This revolution has not yet slowed down. In the past five years, there has been a 300,000× increase in the computational power of AI models.20 This growth is exponentially faster than Moore's Law, which itself is exponential. However, this revolution is no longer just in the hands of academia and a set of large technology companies. The transition from research to applications is well under way. The combination of current computational power; the enormous storehouse of data that is the Internet; and multiple free, open-source programming frameworks, as well as the availability of easy-to-use software from Google, Microsoft, Amazon, and others is encouraging increasing numbers of businesses to explore AI.
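A back-of-the-envelope check makes the 300,000× figure vivid (illustrative arithmetic, assuming smooth exponential growth): it implies that the compute behind the largest AI models doubled roughly every three to four months, versus roughly two years for Moore's Law.

```python
import math

# 300,000x growth over 5 years: how many doublings, and how often?
doublings = math.log2(300_000)               # about 18.2 doublings
months_per_doubling = 5 * 12 / doublings     # about 3.3 months each
```

At that pace, the capability gap between one model generation and the next opens up faster than most corporate planning cycles can track.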
Getting value from AI is not just about cutting-edge models or powerful algorithms: it is about deploying these algorithms effectively and getting business adoption for their use. AI is not yet a plug-and-play technology. Although data is a plentiful resource, extracting value from it can be a costly proposition. Businesses must pay for its collection, hosting, cleaning, and maintenance. To take advantage of data, companies need to pay the salaries of data engineers, AI scientists, analysts, and lawyers and security experts to deal with concerns such as the risk of a breach. The upsides, however, can be enormous.
Before AI, phone companies used to look at metrics such as how long it took to install a private line. Hospitals estimated how much money they would bill that would never be collected. Any company that sold something studied its sales cycles – for instance, how long did it take each of their salespeople to close a deal? Using AI, companies can look at data differently. Firms that used to ask “What is our average sales cycle?” are now able to ask “What are the characteristics of the customer or the sales rep who has a shorter sales cycle? What can we predict about the sales cycle for a given customer?” This depth of knowledge brings with it enormous business advantages.
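The shift from asking for an average to asking for a prediction can be sketched with a tiny least-squares model (the deal data and the single feature here are hypothetical; a real system would use many features and far more records).

```python
# Hypothetical history: (deal size in $ thousands, sales cycle in days).
deals = [(10, 20), (50, 45), (80, 70), (120, 95), (200, 160)]

# The old question: what is our average sales cycle?
avg_cycle = sum(days for _, days in deals) / len(deals)  # 78 days

# The new question: what predicts the cycle for a *given* customer?
# Fit cycle = a * size + b by ordinary least squares.
n = len(deals)
sx = sum(size for size, _ in deals)
sy = sum(days for _, days in deals)
sxx = sum(size * size for size, _ in deals)
sxy = sum(size * days for size, days in deals)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Predicted cycle for a new $150k deal, instead of one blanket average.
predicted = a * 150 + b
```

Even this one-feature toy shows the difference in kind: the average answers one question about the past, while the model answers a question about each future customer.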
There are undoubtedly potential downsides to using AI applications widely. Building an AI application is complicated, and much of what is built is used without anyone genuinely understanding exactly how it arrives at its decisions. Given this lack of transparency (often called the black box problem), it can be difficult to tell whether an AI engine is making correct and unbiased judgments. Currently, the most prominent black box concerns involve AI-driven decisions that appear to treat factors such as race or gender unfairly.
A study by ProPublica21 of an algorithm designed to predict recidivism (repeated offenses) in prison populations found that black prisoners were far more likely than white prisoners to be flagged as being at high risk of reoffending. However, when these predictions were compared to the actual rates that occurred over two years in Broward County, Florida, it turned out that the algorithm had frequently been wrong. This discrepancy pointed out a real problem: not only could an algorithm make the wrong predictions, but the lack of algorithmic transparency could make it impossible to determine why. Accountability can also be a problem. It is far too easy for people to assume that if information came from a computer, it must be true. At the same time, if an AI algorithm makes a wrong decision, whose fault is it? Moreover, if you do not think a result is fair or accurate, what is your recourse? These are issues that must be addressed to achieve the benefits of using AI.
JP Morgan's use of AI is an impressive example of how efficient AI can be. The financial giant uses AI software to conduct tasks such as interpreting commercial loan agreements and performing simple, repetitive functions like granting access to software systems and responding to IT requests, and it has plans to automate complex legal filings. According to Bloomberg Markets,22 this software “does in seconds what took lawyers 360,000 hours.”
On the other hand, multinational trading company Cargill is beginning to incorporate AI into its business strategy. In early 2018, the Financial Times reported that Cargill was hiring data scientists to figure out how to better utilize the increasing amount of available data. According to the Times, “the wider availability of data – from weather patterns to ship movements – has diminished the value of inside knowledge of commodity markets.”23
Cargill's action illustrates two critical points. Your business strategy may well benefit from using AI, even if you have not yet worked out how to do so. Moreover, given the vast amounts of available data, the current and growing sophistication of AI algorithms, and the track records of successful companies that have adopted AI, there will never be a better time than now to both determine your AI strategy and begin to implement it. This book is designed to help you do both. To begin, we will discuss what AI is and how AI algorithms work.
1. The World Economic Forum is a Swiss nonprofit foundation best known for an annual meeting that brings together thousands of top business and political leaders, academics, celebrities, and journalists to discuss the most pressing issues facing the world.
2. World Economic Forum (January 14, 2016). The Fourth Industrial Revolution: What It Means, How to Respond. https://www.weforum.org/agenda/2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/ (accessed September 26, 2019).
3. McKinsey & Company (January 2009). Hal Varian on How the Web Challenges Managers. https://www.mckinsey.com/industries/high-tech/our-insights/hal-varian-on-how-the-web-challenges-managers (accessed September 26, 2019).
4. World Economic Forum (November 27, 2017). The Rise of the Political Entrepreneur and Why We Need More of Them. https://www.weforum.org/agenda/2017/11/the-rise-of-the-political-entrepreneur-and-why-we-need-more-of-them/ (accessed September 26, 2019).
5. VentureBeat (May 18, 2017). Google Shifts from Mobile-first to AI-first World. https://venturebeat.com/2017/05/18/ai-weekly-google-shifts-from-mobile-first-to-ai-first-world (accessed September 26, 2019).
6. Innosight (2018). 2018 Corporate Longevity Forecast: Creative Destruction Is Accelerating. https://www.innosight.com/insight/creative-destruction/ (accessed September 26, 2019).
7. McKinsey & Company (January 2018). What the Future Science of B2B Sales Growth Looks Like. https://www.mckinsey.com/business-functions/marketing-and-sales/our-insights/what-the-future-science-of-b2b-sales-growth-looks-like (accessed September 26, 2019).
8. McKinsey & Company (June 2017). Artificial Intelligence: The Next Digital Frontier. www.mckinsey.com/~/media/McKinsey/Industries/Advanced%20Electronics/Our%20Insights/How%20artificial%20intelligence%20can%20deliver%20real%20value%20to%20companies/MGI-Artificial-Intelligence-Discussion-paper.ashx (accessed September 26, 2019).
9. ComputerWeekly (June 19, 2017). AI Research Finds Slender User Adoption Outside Tech. www.computerweekly.com/news/450421003/McKinsey-AI-research-finds-slender-user-adoption-outside-tech (accessed September 26, 2019).
10. VentureBeat (December 17, 2018). AGI Is Nowhere Close to Being a Reality. https://venturebeat.com/2018/12/17/geoffrey-hinton-and-demis-hassabis-agi-is-nowhere-close-to-being-a-reality/ (accessed September 26, 2019).
11. HAL, incidentally, is a reference to IBM. Each letter in the name of the villainous computer falls right before the letters in the famous tech company.
12. Advanced Research Projects Agency Network.
13. Soft Computing 15, no. 8 (August 2011): 1657–1669. Graphics Processing Units and Genetic Programming: An Overview. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.187.1823&rep=rep1&type=pdf (accessed September 26, 2019).
14. American scientist John McCarthy coined the term in 1955.
15. American scientist Arthur Samuel coined the term in 1958.
16. American scientist Rina Dechter coined the term in the context of machine learning in 1986.
17. The documentary AlphaGo (2017) shows how the teams competed in the seven-day tournament in Seoul.
18. Stanford News (November 28, 2017). An Artificial Intelligence Algorithm Developed by Stanford Researchers Can Determine a Neighborhood's Political Leanings by Its Cars. https://news.stanford.edu/2017/11/28/neighborhoods-cars-indicate-political-leanings/ (accessed September 26, 2019).
19. Shell. AI Residency Programme – Advancing the Digital Revolution. https://www.shell.com/energy-and-innovation/overcoming-technology-challenges/digital-innovation/artificial-intelligence/advancing-the-digital-revolution.html (accessed September 26, 2019).
20. OpenAI Blog (May 16, 2018). AI and Compute. https://openai.com/blog/ai-and-compute/ (accessed September 26, 2019).
21. ProPublica (May 23, 2016). How We Analyzed the COMPAS Recidivism Algorithm. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm (accessed September 26, 2019).
22. Bloomberg (February 28, 2017). JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours. https://www.bloomberg.com/news/articles/2017-02-28/jpmorgan-marshals-an-army-of-developers-to-automate-high-finance (accessed September 26, 2019).
23. Financial Times (January 28, 2018). Cargill Hunts for Scientists to Use AI and Sharpen Trade Edge. https://www.ft.com/content/72bcbbb2-020d-11e8-9650-9c0ad2d7c5b5 (accessed September 26, 2019).
Early AI was mainly based on logic. You're trying to make computers that reason like people. The second route is from biology: You're trying to make computers that can perceive and act and adapt like animals.
Geoffrey Hinton, professor of computer science at the University of Toronto
The concept of AI is not new. Humans have imagined machines that can compute since ancient times, and the idea has persisted through the Middle Ages and beyond. In 1804, Joseph-Marie Jacquard actually created a loom that was “programmed” to create woven fabrics using up to 2,000 punch cards. The machine could not only replace weavers, but also make patterns that might take humans months to complete, and it could replicate them perfectly.
However, it was not until the late twentieth century that AI began to look like an achievable goal. Even today, artificial intelligence is not a precisely defined term. In an article published on February 14, 2018, Forbes offered six definitions, the first derived from The English Oxford Living Dictionary: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”1 This is a reasonable place to start because the examples in the definition are the type of AI that is currently being utilized: weak or narrow AI.
