The AI Value Playbook

How to make AI work in the real world

Lisa Weaver-Lambert

In Praise of

The AI Value Playbook

“Unlock the secrets of rapid AI implementation with this indispensable guide by Lisa Weaver-Lambert. Drawing from expert insights, this essential resource reveals how to harness storytelling and alignment, create an engaging environment for communication, build trust, invest in continual learning, and focus on strategic alignment. With a writing style that’s both concise and compelling, Lisa shows you how to turn AI into a game-changer for your organization – not just a buzzword. Dive in and discover the formula for success.”

— John Winn, Managing Director at Blackstone

“I cannot think of another source where so many AI examples are compiled into one book. I think this is a must read for anyone who wants to learn the history, creation, and, more importantly, ideas of AI to apply to the reader’s own journey. As a leader in the technology industry, I’ve seen firsthand the transformative potential of AI, but also the critical importance of using it responsibly and securely. The AI Value Playbook offers a roadmap to maximize its potential. We can only adapt and change when we listen, and in this book there are many voices across geographies and industries. I highly recommend it.”

— Kelly Bissell, Corporate Vice President, Microsoft

“The AI Value Playbook is valuable for organisational leaders who are seeking to improve productivity by driving value from data and AI. The AI Value Playbook offers a powerful guide to help companies master AI concepts, overcome the inevitable obstacles, and learn from real-world examples of applied AI.”

— Soumitra Dutta, Peter Moores Dean and Professor of Management, Saïd Business School, University of Oxford

“In an era where data is the lifeblood of success, The AI Value Playbook stands out as the essential guide for companies navigating the transition to an AI operating model. Busy users can leverage chapters for quick reference, while even advanced users will find it valuable as a source of clear explanations for colleagues and friends. Lisa unpacks success strategies, weaving insights with captivating case studies that provide executives with actionable mental models. The book offers a structured approach for initiating, measuring, and adapting AI projects, acknowledging the iterative nature of ROI. Executives seeking AI-driven growth will find it a must-read, with practical guidance and a strategic framework for implementation, including the critical development of talent. It breaks the mold of the typical executive-focused reads. It’s an indispensable ally for business leaders that will leave you inspired and informed.”

— Bogdan Crivat, Corporate Vice President, Azure Data Analytics, Microsoft

“At Race Capital, we invest in transformative technologies. The AI Value Playbook isn’t just about technology. It’s about empowering people. You’ll discover how to integrate AI to future-proof your business and empower your workforce. The AI Value Playbook is your practical guide and roadmap to unlocking the power of AI for your business, inspired by real-world case studies.”

— Alfred Chuang, General Partner at Race Capital, Silicon Valley’s most passionate CEO Coach, Co-Founder, CEO, and Chairman of BEA Systems

“The AI Value Playbook offers a clear-eyed view of how AI is transforming industries. Lisa’s actionable framework, combined with real-world case studies, empowers leaders to navigate the complexities of AI integration and release its potential for sustainable growth. A great read for business leaders looking to develop a strategic roadmap to leverage AI for increased efficiency and competitiveness.”

— Pegah Ebrahimi, Co-Founder and Managing Partner at FPV Ventures

“In today’s rapidly evolving landscape, AI is no longer a future possibility, but a present necessity. The AI Value Playbook offers a clear roadmap for business leaders to navigate the exciting world of AI and transform it into a strategic advantage for our organizations. Lisa’s insights and real-world examples across industries and geographies are invaluable for anyone looking for practical techniques to implement AI and drive positive results. If you’ve questioned your company’s ability to innovate - or stay relevant - in an era of exponential technological growth, this book is a must-read.”

— Guglielmo Angelozzi, CEO, Lottomatica Group

“The AI Value Playbook is a roadmap for CEOs to leverage AI to future-proof their organization. Lisa combines insightful analysis with practical case studies, demonstrating the importance of data management and how AI can drive efficiency, accelerate innovation, and unlock new markets. This book is a critical resource for any organization looking to scale with AI and secure a competitive advantage in the years to come.”

— Sajjad Jaffer, GrowthCurve Capital; Ex Co-Founder, Two Six Capital

“The AI Value Playbook equips board members with the knowledge, practical tools, and competitive edge needed to provide the vital leadership required to successfully navigate the transformative potential of AI. By providing insights from real-world case studies spanning hugely diverse organizations, the book reveals how AI can disrupt and redefine industries, driving increased productivity, cost optimization, and long-term growth. It also tackles the gap between AI’s promise and its practical impact, addressing internal organizational barriers. Most importantly, the book equips boards with a practical framework distilled from practitioners that have the bruises and scars from deploying AI in businesses, guiding board members to ask the right questions of their organizations throughout the AI transformation.”

— Sanjeevan Bala, Independent Non-executive Director (Bakkavor, SThree) | Group Chief Data & AI Officer | AI Board Advisor

“In the fast-paced world of private markets in which I work, staying ahead of the curve is paramount. The AI Value Playbook offers a clear-eyed view of how AI is transforming industries. Lisa’s actionable framework, combined with real-world case studies provides a much needed personal, story-telling approach to a topic that’s understandably exciting yet for many, can feel overwhelming to truly understand. Her deeply humanistic and practical guide empowers business leaders to identify high-potential AI applications. This book is a valuable guide on the challenges and transformative power of AI that speaks directly to the non-technical business leader.”

— Alice Murray, Founder, Editor-in-Chief, The Line. Head of Content and Communications, Alpha Group

“Lisa is an ex-colleague of mine at Microsoft and I respect her opinions. Also, I suspect like a lot of you, I have mixed feelings about the current AI boom: I can see the value in AI but I can also see the vast amount of hype and the obviously ridiculous claims being made. More than anything I see senior executives talking confidently about a subject I’m sure they don’t understand, and that is clearly a big problem. This book aims to help solve that problem by providing a practical guide to AI for non-technical leaders, in the form of a series of case studies and interviews with entrepreneurs and C-level people in the AI space. This is a very readable book – Lisa has talked to a lot of interesting, knowledgeable people – and the format makes it a lot more palatable for the target audience of your boss’s boss’s boss than your average tech book. As a technical person who isn’t by any means an AI expert I also enjoyed reading it.”

— Chris Webb, Principal Program Manager at Microsoft

The AI Value Playbook

Copyright © 2024 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Niranjan Naikwadi

Publishing Product Manager: Nitin Nainani

Book Project Manager: Shambhavi Mishra

Senior Editor: David Sugarman

Copy Editor: Safis Editing

Indexer: Manju Arasan

Production Designer: Vijay Kamble

Senior DevRel Marketing Coordinator: Vinishka Kalra

First published: August 2024

Production reference: 4310125

Published by Packt Publishing Ltd.

Grosvenor House

11 St Paul’s Square

Birmingham

B3 1RB, UK

ISBN 978-1-83546-175-4

www.packtpub.com

About the Author

Lisa Weaver-Lambert has been leveraging data to solve complex commercial challenges for over two decades through her roles at Microsoft, Accenture, and in capital and public markets. In addition, she has held executive line management positions and served at board level. Her work in regions including Europe, the US, APAC, and Africa has brought her a global perspective.

Lisa’s early career in financial services, working on acquisition, retention, and fraud detection programs, provided a valuable foundation in data-driven decision making. This experience solidified her understanding of data quality, governance, and its commercial potential. She witnessed firsthand the power of large datasets and how working with data could solve real-world business problems.

Since then, she has helped some of the world’s best-known brands achieve success, earning her recognition as a leading woman in technology in the EMEA investment field. Her extensive experience in strategy and business transformation has led her to collaborate with organizations of different sizes and maturity levels to accelerate performance.

Recognizing the transformative potential of AI, Lisa has made it her mission to equip non-technical leaders with the knowledge and confidence to leverage AI for their organizations. She recently founded Oxford AI Studio, an initiative that aims to bridge industry with researchers and knowledge institutes to tackle real-world business challenges and drive the development and application of responsible AI innovation.

Lisa earned her degrees from the University of Lancaster and ESCP Business School in Paris, strengthening her qualifications through executive programs at MIT Sloan School of Management and UC Berkeley School of Information.

LinkedIn: lisa-weaver-lambert

Acknowledgements

This book, like the discussions it seeks to facilitate, is the culmination of countless conversations and collaborations. I am deeply grateful to all the interviewees, and their teams, whose insights, stories, and lessons are the essence of this work. Thank you for your time and generosity of spirit.

I am deeply grateful to the following colleagues and friends who have enriched this book in invaluable ways:

Giuseppe Saltini brought his twenty-five years of experience in the data and information architecture space. Giuseppe’s insightful feedback and edits not only enhanced the manuscript’s accuracy but, I trust, will also make it more relevant to readers.

David Goodchild lent his creative expertise to the book’s visual identity. From the captivating cover design to the clear internal layout and informative diagrams, David’s contribution significantly improved the overall presentation.

Kate Willis for her editorial contributions to the interview drafts, and Lara Lambert for her meticulous proofreading.

Insight Partners for their expertise in identifying high-growth generative AI companies.

The Packt team, whose keen eyes ensure the book’s clarity and professionalism.

About the Reviewer

Tanmaya Gaur is a Principal Architect at T-Mobile US, Inc., with over 15 years of experience building enterprise systems. He is a technical expert in the architecture, development, and deployment of advanced software and infrastructure for enhanced user support in telecom applications. He is passionate about utilizing composable architecture strategies to aid creation and management of reusable components across the entire spectrum of web development, from front-end UX to backend coordination and content management. His current focus is building Telecom CRM tools that leverage micro-front-end, artificial intelligence, and machine learning algorithms to boost efficiency and improve the product experience.

Contents

Introduction

Overview of AI Concepts and Technology Stack

Sam Liang, CEO of Otter.ai

Amr Awadallah, Founder and CEO at Vectara

Philipp Heltewig, Co-Founder and CEO at Cognigy

Miao Song, Chief Information Officer at GLP

Ruben Ortega, General Partner at Enjoy the Work

Joshua Rubin, Principal AI Scientist at Fiddler AI

Nadine Thomson, Global Chief Technology Officer at GroupM (WPP)

Sarvarth Misra, Co-Founder and CEO of ContractPodAi

Edward Fine, AI and Data Science Consultant, Technologist, and Instructor

Sanjeevan Bala, Group Chief Data and AI Officer at ITV

Nathalie Gaveau, AI Tech Entrepreneur and Board Member

Phil Harvey, Applied AI Architect

Elizabeth Ajayi, Director, Intelligent Industry at Capgemini Invent

Louis DiCesari, Head of Data, Analytics and AI at Levi Strauss & Co.

Vickey Rodrigues, CTO/CDO in Insuretech, Payments, and Healthtech

Sean McDonald, Former Global Chief Innovation Officer at McCann Worldgroup

Julie Gray, Head of Data and Internal Systems at Agilio

Peter Jackson, Chief Data and Technology Officer at Outra

Mark Beckwith, Director of Data Governance and Architecture at the Financial Times

Kshira Saagar, Data Science at Wolt and Doordash

Joe Romata, Global Head of Customer Experience at a Multinational Energy Company

Tomasz Ullman, Former Global Head of Data Science and Strategy at Ford Pro

Oz Krakowski, Chief Business Development Officer at Deepdub

Case Study, LLMs and RAG Enable Hyper-Personalized Education for Healthcare Technicians

Case Study, AI Personalization Increases Engagement in Nascent Tech Communities

Case Study, AI-Powered Virtual Agent Augments Service Efficiency

Case Study, Generative AI Creates a Paradigm Shift in Innovation Processes

Case Study, Unlocking Profit Potential – Leveraging Enterprise Data for Customer Profitability

Case Study, Minimizing Customer Churn with AI

Case Study, Enhancing Marketing Strategies Through the Power of LLMs

Case Study, Multimodal LLMs Redefine Software Development and Customer Innovation

Where We’ve Got To and What’s Next

Author’s Note

Writing about AI for non-technical executives and board members requires striking a balance between rigor and readability, making sure to be succinct without sacrificing nuance. I have drawn on my own experience working with leadership teams to find the sweet spot on the continuum, getting the substance right while keeping the book accessible.

My personal access to leading minds in today’s business landscape has given me a unique perspective. Through our compelling conversations, I have aimed to bridge the gap between the technical world of AI and the strategic considerations of leadership. My interview approach maintains the authenticity of the dialogue. I hope you find that I have achieved this, and that these insights inspire you.

1

Introduction

Sitting across from an astute CEO I had been working closely with, I could sense his growing understanding of AI’s potential to accelerate the growth of his business. His plan to integrate two market-leading businesses was complex yet also a significant opportunity. His questions were increasingly insightful, delving deeper into his organization’s processes and people. He was envisioning the future – one powered by AI.

It was a chance to rethink the operating model of his company. As the weeks passed, he began networking with other external leaders, who were all harnessing data as a strategic asset, leveraging it to gain insights, make informed decisions, and drive growth. 

But, as I witnessed this shift, new questions began to form: How could I scale this understanding through human stories? How could these practices and learnings be applied to other organizations? This marked the beginning of The AI Value Playbook, a practical guide to AI, aimed at supporting non-technical executives and board members to quickly formulate a perspective on how to integrate AI into their businesses. AI is no longer new, but extracting value from it remains a challenge for many of us.

Driving business value with AI 

Today, the expectation that AI will inevitably create value, foster innovation, and secure competitive advantage is prevalent in everyday life. Our daily interactions with products and tools from leading technology companies – Amazon, Apple, Facebook, Google, Microsoft, and Netflix – create an assumption that AI will flick a switch and deliver results almost instantly.

But this is not the reality for most organizations. The tech giants have been built from the bottom up with data as their bedrock and with AI integrated into core processes for nearly a decade. Technological innovation is intrinsic to their business models and growth strategies, and they face few of the cash, storage, or resource challenges common to others.

However, the transformative potential of AI is not exclusive to these tech giants. AI promises to redefine businesses, scaling across every sector of our economy. There’s clear evidence that integrating AI into business operations measurably enhances efficiency while reducing costs. It is now among the top five value drivers on CEO agendas.

The growing cost of capital, unstable inflation, and pressure on operating expenses are all compounding stresses on organizations today. The ability to manage and control processes therefore underpins growth, regardless of the sector. Goldman Sachs’ research asserts that AI has the potential to boost annual global GDP by 7% (or almost $7 trillion) over the coming decade (packt.link/f5sli). AI thus promises to boost productivity and fuel long-term economic expansion.

In light of aging populations and other economic growth headwinds, productivity becomes even more critical. Without a sustained rise in productivity, economies will face a growth ceiling that will impact all businesses.

Sustained productivity growth therefore depends, in part, on how effectively AI solutions are integrated across sectors. However, the speed and scale of this integration remain uncertain. We’re at the early stages of this journey.

Where we are

History teaches us that there are inherent delays with any transformative technology. The Solow paradox, coined by economist Robert Solow in 1987, claimed that while you could see computers everywhere, their influence was not then evident in productivity statistics. This paradox was resolved in the 1990s when large sectors of the economy, such as retail and wholesale, finally adopted technologies such as client-server architectures and ERP (enterprise resource planning) systems. These innovations helped revamp business processes, particularly in supply chain and distribution center efficiency, leading to noticeable gains in productivity.

Today seems to mirror the patterns of the past. Despite the fact that AI is ubiquitous, it hasn’t yet translated into significant productivity growth. This discrepancy exists primarily because large sectors of our economy have yet to integrate AI into their businesses. While the tech sector is hinging its future strategies on the continued development and application of AI, many other sectors are still navigating their path.

Governments worldwide are turning to AI as a potential engine for both economic growth and competitive advantage. Discussions at this level frequently revolve around innovation, regulation, talent, and specifically the systemic barriers hindering progress.

As technological advancements have brought down costs, the financial hurdles associated with AI adoption have significantly reduced. So, as cost and access are no longer obstacles, the systemic challenge lies in developing AI capabilities within organizations and strengthening the integration muscle. This is a process that demands time and commitment.

The crucial question remains – what’s the optimal way for businesses to develop and integrate AI?

Who this book is for 

This book is a practical guide to AI, aimed at supporting non-technical executives and board members to quickly formulate a perspective on how to integrate AI into their businesses. It includes which levers and processes to consider to future-proof value.

The emphasis is on questions that are frequently posed, such as, “How can businesses adapt to these emerging technologies? How can they start building and deploying AI as a strategic asset to drive efficiency? What risks or threats need to be considered? How quickly can value be created?”

This book is a response to those demands. In a series of in-depth and wide-ranging conversations with an elite group of practitioners, from CEOs leading new generative AI-based companies to data scientists and CFOs working in more traditional companies, we hear how they have succeeded in building AI solutions in businesses of all sizes across the globe.

Our focus has been on how to integrate AI into organizations to drive value. However, as businesses are at varying levels of maturity, the conversations in this book cover a variety of applications from cutting-edge generative AI to the essential fundamentals of data management.

Real-world conversations 

A diverse set of interviews and case studies provides honest reflections and insights from their protagonists. We discover the reasons behind their choices, the obstacles they’ve faced, how they’ve persevered, and the lessons learned from their experiences. Their thoughts and conclusions deserve attention and make for inspiring reading.

I have distilled core principles and actionable strategies in a practical playbook. Yet, it goes without saying that not every tool or practice is universally effective; what works for one organization might not be effective for another. However, there are consistent principles that can be learned from others, even if the details will differ within each organization. The future application of AI technologies is likely to be an amalgamation of different models and practices, with the deployment of solutions that respond to evolving business circumstances and needs.

A map describes an environment, but it doesn’t tell you where to go. In the same way, The AI Value Playbook provides a roadmap to navigate your route forward, based on the stories of individuals who have made meaningful progress.

Starting to work with AI doesn’t mean immediately overhauling your current operating model or handing everything over to data scientists. Instead, it means working in alignment with your business objectives and key challenges, and identifying those that could be better solved with AI. This involves making business opportunities visible through method, through process, and through the work of multi-disciplinary teams. Incorporating AI is fundamentally about empowering people to do their best work and make the best next decision.

If you are evaluating how to build AI capabilities to future-proof your company, this book will provide you with the necessary guidance. Through insightful interviews, real-life case studies, and actionable strategies, The AI Value Playbook will help you better organize and harness the value of AI.

How this book is organized

As a versatile resource, this book can serve as your AI companion, relevant to wherever you are on your AI journey. It allows you to access sections or examples that directly address the specific challenges you face.

It’s structured into four distinct sections for easy navigation:

An overview of AI concepts and technology stack provides relevant context to engage with and learn from for the subsequent conversations and case studies.

In-depth conversations with experienced practitioners and leaders across various sectors, sharing insights on their data and AI journey. These dialogues are designed to be read in any order and offer a broad perspective on the implementation and challenges of using AI. Some businesses are mature in their use, others are just beginning their journey, and many are at various stages in between.

Case studies exploring the specifics of real-world applications. These present detailed analyses of practical scenarios, offering a closer look at the application and impact of AI.

The AI Value Playbook is the practical framework I have distilled from the people I’ve spoken with and my own professional experience for successful AI implementation.

2

Overview of AI Concepts and Technology Stack

To give context to the interviews with leaders and practitioners that follow in this book, it is important to establish a foundational understanding of the core concepts and taxonomy of AI. However, AI terminology is challenging today because the field’s rapid evolution keeps producing new terms and redefinitions. The focus here is therefore on the terms that will benefit a reader looking to understand how businesses are using AI to drive value. This chapter will help you engage with and learn from the interviews and case studies.

This chapter is presented in two key sections:

2.a AI concepts: This section delves into the core principles that underpin AI and the different types of AI, such as machine learning, deep learning, neural networks, and generative AI. It also highlights the strengths and limitations of AI technologies, helping you identify which ones are most relevant to your business needs.

2.b AI technology stack: This section provides a brief overview of the set of technologies, tools, and services used together to build and run an AI application, establishing and ensuring trustworthy AI systems and AI as a Service (AIaaS).

2.a AI Concepts

The term “artificial intelligence” has been used in computer science since the 1950s, with the Dartmouth conference in 1956 usually cited as its official birthplace.

Since then, the field of AI has been through a series of cycles: periods of strong support and investment followed by periods of criticism and reduced investment, the latter referred to as AI winters. These cycles throughout the ’70s, ’80s, and ’90s materially slowed consistent progress.

This situation fundamentally changed with the advent of big data and cloud computing. The affordability and power of data storage and CPU time increased significantly, making it possible to train models more effectively. As a result, AI models that were once theoretical and unattainable due to resource constraints became practical and efficient.

In the last five years, the largest technology organizations in the world have tackled and started to solve problems that were completely intractable only ten years before. We are at the start of a period of high investment and rapid progress that is bringing new enthusiasm among a larger population. However, the outcome of this period remains uncertain.

AI is a broad term for a range of software applications. As a branch of computer science, AI can be thought of as a smart computer that can perform tasks usually done by people: it understands language, recognizes patterns, solves problems, makes decisions, and learns from experience. It uses vast amounts of data and follows sets of instructions called algorithms to automate tasks.

To compare AI with traditional software, consider a program that must detect dogs in pictures. Writing conventional software for this task is complex and costly; AI is far better suited to it. The process involves feeding an AI system thousands of pictures, some containing dogs and others not. The AI learns from these examples and develops an internal configuration that can detect whether a random image contains a dog. This illustrates how AI can solve complex tasks that are near impossible to tackle with traditional software. It is a fundamental shift from traditional programming, which is designed to produce deterministic results from a set of inputs: AI learns from the data and makes predictions or decisions without being explicitly programmed to perform the task.
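The contrast between hand-written rules and learned behavior can be sketched in a few lines of Python. The two "features" and the nearest-centroid learner below are illustrative placeholders, not a real computer-vision pipeline; the point is only that the learned version derives its decision rule from labeled examples rather than from a programmer's hard-coded thresholds.

```python
import numpy as np

# Hypothetical setup: each "image" is reduced to two made-up numeric
# features (say, a fur-texture score and an ear-shape score).

def rule_based_is_dog(features):
    # Traditional software: the decision boundary is hard-coded by a
    # programmer, so it is deterministic but brittle.
    return features[0] > 0.5 and features[1] > 0.5

# Learned approach: fit the decision rule from labeled examples.
rng = np.random.default_rng(0)
dogs = rng.normal(loc=[0.8, 0.8], scale=0.1, size=(50, 2))      # label 1
not_dogs = rng.normal(loc=[0.2, 0.2], scale=0.1, size=(50, 2))  # label 0
X = np.vstack([dogs, not_dogs])
y = np.array([1] * 50 + [0] * 50)

# "Training": the model's internal configuration (two class centroids)
# is computed from the data, not written by hand.
centroid_dog = X[y == 1].mean(axis=0)
centroid_other = X[y == 0].mean(axis=0)

def learned_is_dog(features):
    # Classify by whichever centroid is nearer.
    return (np.linalg.norm(features - centroid_dog)
            < np.linalg.norm(features - centroid_other))

print(learned_is_dog(np.array([0.75, 0.85])))  # near the "dog" cluster
```

Real image classifiers replace the two hand-picked features with raw pixels and the centroid rule with a deep neural network, but the division of labor is the same: examples in, decision rule out.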

Types of AI

At a high level, AI can be categorized into two main types: narrow AI and general AI.

Narrow AI, also known as weak AI, is designed to perform a specific task, such as voice recognition or image analysis. These solutions operate under a limited set of constraints within a limited context and are only “intelligent” within their specific domain. Examples are frequently found in finance: scaling commercial contract reviews, augmenting collection prioritization, forecasting, streamlining invoice approvals, and anomaly detection. The interviews in this book provide numerous use case demonstrations of narrow AI.

General AI, also known as strong AI or Artificial General Intelligence (AGI), refers to systems that possess the ability to perform any intellectual task that a human can. They can understand, learn, adapt, and apply knowledge across different domains. While this type of AI is not currently available, prominent AI researchers forecast that the first AGI will be developed within the next decade. Nevertheless, it is at the core of many philosophical debates: while the advancement of AI may be inevitable, its ultimate destination is not clear, and its trajectory remains complex and uncertain. Some Large Language Models (LLMs) have demonstrated capabilities beyond narrow AI; however, there remain substantial differences between these AI systems and human cognitive systems.

AI encompasses various interconnected areas:

Machine Learning (ML) is a subset of AI and the foundation that includes the algorithms and statistical models underpinning it. It refers to approaches for getting computers to learn and improve over time in an autonomous fashion, by feeding them data in the form of observations and real-world interactions.

Neural Networks (NNs) are a subset of machine learning, modeled after the human brain’s architecture and designed to emulate its processing style. These networks consist of interconnected nodes, analogous to the brain’s neurons, organized in layers. Data enters through an input layer, is processed through one or more hidden layers where computation and transformation occur, and culminates in an output layer that delivers the final results. The nodes are linked by weights – numerical parameters adjusted during training – enabling the network to discern complex patterns and “learn” from data.

Deep Learning (DL) is an advanced subset of NNs characterized by its use of many hidden layers, which add the capacity to perform complex tasks. Deep learning approaches excel at processing and learning from extensive datasets and fuel many of the latest innovations in AI, including generative AI models, computer vision, and self-driving software. The advantage is the ability to handle and learn from very large datasets, recognizing complex patterns and making intelligent decisions. The drawback is the almost complete absence of explainability: the system cannot explain why a dog is not recognized in a picture when one is clearly present, or why it has denied credit to a specific person.

Generative AI (GenAI) is a subset of deep learning that generates multi-modal content such as text, videos, and images in response to user interactions. It is trained on vast datasets to detect patterns and creates outputs without explicit instruction. Unlike databases, these models don’t query and return 100% accurate historical information. Instead, they mimic their inputs and produce new content that is likely to be correct, based on probabilistic techniques.

AI training techniques

Training a custom model frequently involves the use of multiple techniques, as shown in the following diagram.

Figure 2.1 – A variety of AI training techniques

Some common techniques include:

Supervised Learning, a traditional approach in machine learning where a model is trained on a labeled dataset. In this context, “labeled” means that the dataset contains the answers to the questions we want the model to be able to answer.

For example, if we want to train an ML model to predict customer churn, we can feed the model historical data about our customers, including age, gender, geographical location, purchasing behavior, and product usage. Crucially, we augment this dataset with the churn outcome for each customer. Churn, in this context, refers to a customer stopping use of a product or service. Once trained, the model can estimate how likely, and how soon, a particular customer is to churn based on the available data for that customer.
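A minimal sketch of this supervised setup, using invented data and a single feature (monthly usage hours) with a single learned threshold in place of a real model:

```python
# Minimal supervised-learning sketch: learn a usage threshold that best
# separates churned from retained customers. All figures are invented.
customers = [
    # (monthly_usage_hours, churned)
    (2, True), (3, True), (1, True), (4, True),
    (10, False), (12, False), (9, False), (15, False),
]

def train_stump(data):
    # Try every observed usage value as a threshold; keep the one that
    # misclassifies the fewest labeled examples.
    best_t, best_err = None, len(data) + 1
    for t, _ in data:
        err = sum((usage <= t) != churned for usage, churned in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

threshold = train_stump(customers)

def predict_churn(usage):
    # Low-usage customers are predicted to churn.
    return usage <= threshold
```

The labels (the churn column) are what make this supervised: the training step searches for the rule that best reproduces the known answers.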

Reinforcement Learning (RL) addresses a key limitation of supervised and unsupervised learning: their inability to discover novel mechanisms beyond the relationships in the data. Reinforcement Learning algorithms such as deep Q-learning or actor-critic methods enhance effectiveness by exploring strategies not represented in the available data. This approach is akin to “A/B testing on steroids,” where millions of different scenarios are automatically generated, and the feedback loop is integral to the learning process.

For example, the machine might experiment with varying discount levels to identify the most effective discount at each stage of the customer journey. The RL system might issue random discounts to thousands of customers and then measure the churn rate. This process refines the discount strategy to optimal levels based on the knowledge of “what works or not” for specific customer groups. For example, the system might learn that customers with children value a free month in August, while affluent customers in rural areas respond well to bundle discounts. In this way, reinforcement learning enables the discovery of effective, tailored strategies that were not initially apparent in the data.
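The experiment-and-learn loop described above can be sketched as an epsilon-greedy bandit: try discount levels, observe whether customers are retained, and keep running estimates. The customer responses here are simulated with invented probabilities, not real data:

```python
# Epsilon-greedy sketch of the RL discount example. The "environment"
# (customer response) is a hypothetical simulation, invented so the
# loop has something to learn from.
import random

random.seed(0)
discounts = [0, 10, 20]                  # percent off
estimates = {d: 0.0 for d in discounts}  # estimated retention per discount
counts = {d: 0 for d in discounts}

def simulated_retention(discount):
    # Hypothetical: deeper discounts retain more customers.
    return random.random() < 0.4 + discount * 0.02

for step in range(2000):
    if random.random() < 0.1:                        # explore occasionally
        d = random.choice(discounts)
    else:                                            # exploit best estimate
        d = max(discounts, key=lambda x: estimates[x])
    reward = 1.0 if simulated_retention(d) else 0.0  # feedback loop
    counts[d] += 1
    estimates[d] += (reward - estimates[d]) / counts[d]  # running mean

best = max(discounts, key=lambda x: estimates[x])
```

After enough trials the system settles on the discount that works best, without anyone having labeled the "right" answer in advance, which is the essential difference from supervised learning.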

Unsupervised Learning refers to an ML approach where models are trained on unlabeled data. Unlike supervised learning, where we have explicit labels for our data, unsupervised learning involves discovering underlying structures without predefined categories. In this scenario, there aren’t specific targets to predict, but we aim to uncover meaningful patterns, similarities, or clusters in the data.

For example, by feeding customer data into the ML solution, we might discover that customers without children exhibit similar behavior across different geographical locations and income levels. Armed with this insight, specific marketing campaigns can be developed to target this particular customer segment effectively.
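Clustering is the classic unsupervised technique for this kind of segment discovery. A compact k-means sketch over two invented, unlabeled customer features:

```python
# Tiny k-means (k=2) grouping customers by two unlabeled features.
# No labels are given; structure is discovered from the data itself.
# All values are invented for illustration.
def kmeans(points, k=2, iters=10):
    centers = points[:k]                  # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        centers = [
            [sum(p[d] for p in cl) / len(cl) for d in range(len(points[0]))]
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# (weekly_visits, avg_basket_size) for eight customers
points = [(1, 10), (2, 12), (1, 11), (2, 9),
          (8, 40), (9, 42), (8, 41), (9, 39)]
centers, clusters = kmeans(points)
```

The two groups that emerge (occasional small-basket shoppers versus frequent large-basket shoppers) were never labeled; they fall out of the data, which is exactly the insight a marketing team would then act on.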

AI’s limitations

The current limitations of AI systems include hallucinations, bias, explainability, and scalability:

Hallucinations: AI hallucinations are inaccurate or misleading results generated by AI models. These errors can arise from various factors, including limited training data, faulty assumptions made by the model, or biases present in the training data.

Bias: AI systems are trained on large datasets created by humans. Consequently, these systems can inadvertently learn and perpetuate the biases present in the training data. Common biases include:

Cognitive biases: Unconscious errors in human judgment that can influence AI training and outcomes.

Implicit bias: Unconscious stereotypes that can be reflected in AI if present in training data.

Gender, racial, and age bias: AI decisions can unfairly disadvantage people based on these factors.

Sampling bias: Occurs when training data isn’t representative of the population.

Temporal bias: Occurs when training data is time-specific and may not reflect current or future states.

Over-fitting to training data: Occurs when AI performs well on training data but poorly on new data. This happens when a model captures noise – external factors that alter the data points but are not part of the system we want to model – along with the underlying pattern in the training data.

Algorithmic bias: Prejudiced assumptions made during algorithm development can lead to biased outcomes.

Explainability: AI models, including LLMs, are often referred to as “black boxes” because it’s challenging to understand how they arrive at a particular output. This lack of transparency applies in particular to neural networks and deep learning systems, whereas simple ML algorithms like decision trees allow an external user to understand how the AI came to a specific result.

Scalability: AI systems that perform at scale in production settings can become very expensive and hard to maintain, for a number of related reasons:

Data sourcing and ingestion: In general, large AI models perform better when more good-quality data is used to train them. Large, good-quality datasets are difficult and expensive to handle and store.

Compute cost to train: Most deep learning systems, and LLMs in particular, require vast amounts of compute power to train, with periodic retraining to maintain acceptable performance or to incorporate more recent data.

Compute cost to operate: Running a model that serves many concurrent users can become complex, with costs that grow steeply with scale.

Copyright and AI

A complex, critical issue linked to current limitations of AI is that of copyright infringement. As I write this book, in 2024, it is still unclear how the issues of consent, credit, and compensation can be addressed and resolved. Most authors argue that before an LLM is trained using their original work, they should be consulted to give permission. Then, when their work contributes to an LLM output, they should be credited and compensated.

Most AI companies argue that it’s impossible to correctly attribute credit since LLMs are not explainable and it’s impossible to determine what data source is used to generate arbitrary output. They also argue that LLMs and generative AI in general are not dissimilar to artists who draw inspiration and build on the work of those before them. It’s difficult to predict where the copyright law is going to settle, but recently large publishing companies and AI organizations have signed contracts to regulate the usage of copyright-protected material. It seems likely that more of those deals will be struck to try and create a viable business model.

The issue of copyright is gaining momentum. However, there are conflicting and contradictory regulations across different regions. As an area of ongoing change and development, business leaders are advised to refer to the latest regulations in the regions where their organizations operate.

Trustworthy AI

Trustworthy AI is also an emerging field originating from collaborative international efforts, privacy-enhancing technologies, and ethical principles, aimed at reliable, transparent, and privacy-respecting AI systems. It has become an integral part of the AI system lifecycle to ensure beneficial and reliable outcomes. It requires governance and regulatory compliance, from ideation to design, development, deployment, and machine learning operations.

Generative AI and LLMs in more detail

LLMs, a subset of generative AI, are good at summarizing context and language, which is particularly effective in text-based writing assistance, content creation, and reasoning over structured and unstructured data. Companies are applying LLMs across different sectors to enhance their operations and we have in-depth case studies where we discuss these applications.

Common emerging applications of generative AI include streamlining operations, enabling new forms of customer engagement and care, empowering advisors and market researchers, building co-pilots that help customers find knowledge faster, improving marketing effectiveness, and accelerating product development.

Foundational components of LLMs

Tokens and vectors, embeddings, and transformers are all interconnected concepts and play crucial roles in processing and understanding natural language data. They help LLMs to grasp the intricacies of human language and perform a wide range of Natural Language Processing (NLP) tasks. Here’s a brief explanation of each.

Tokens and Vectors

In the first stage of the process of understanding human language, the text data is broken down into smaller units called tokens, which can be words, parts of words, or punctuation marks. These tokens are then transformed into vectors, which are numerical representations of these words. This transformation process allows the model to understand the meaning of words and how they relate to each other.
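This first stage can be sketched with a deliberately simplified tokenizer. Real LLM tokenizers use subword schemes (such as byte-pair encoding); here whole words map to ids and each id to a small made-up vector, purely to show the shape of the pipeline:

```python
# Toy tokenization and vectorization. The vocabulary and embedding
# values are invented for illustration; production tokenizers split
# into subwords, not whole words.
text = "the dog chased the ball"
tokens = text.split()                       # toy word-level tokenizer
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
token_ids = [vocab[tok] for tok in tokens]  # repeated "the" reuses its id

# A toy embedding table: one small vector per vocabulary entry.
embedding_table = {i: [0.1 * (i + 1), 0.2 * (i + 1)] for i in vocab.values()}
vectors = [embedding_table[i] for i in token_ids]
```

Both occurrences of "the" get the same id and therefore the same vector, which is how the model can relate repeated words across a text.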

Embeddings

The vectors created from tokens are also known as embeddings, which make text, images, and audio digestible for AI transformers. Embeddings are numerical vectors that capture, in a machine-readable and language-independent format, the semantic context of the input, designed to retain as much meaning as possible. For example, the machine can parse the token “Florence” as a city (the Italian city in Tuscany) or as a first name (as in “Florence Nightingale”). This allows the machine to retain the implied knowledge that a specific text is describing the architecture of a city as opposed to the founder of modern nursing.

A key feature of embeddings is that similar concepts have similar values. This feature enables search, comparison, and manipulation beyond simple full-text queries, thereby enhancing the capabilities of AI systems.
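"Similar concepts have similar values" is usually measured with cosine similarity. A sketch with hand-made three-dimensional vectors (real embeddings have hundreds or thousands of dimensions; these values are invented):

```python
# Cosine similarity between toy embeddings. The vectors are invented
# stand-ins for the two senses of "Florence" discussed above.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

florence_city = [0.9, 0.1, 0.8]   # hypothetical "Italian city" embedding
tuscany       = [0.8, 0.2, 0.9]   # hypothetical "region" embedding
nightingale   = [0.1, 0.9, 0.2]   # hypothetical "historical nurse" embedding

sim_city_region = cosine(florence_city, tuscany)
sim_city_nurse = cosine(florence_city, nightingale)
```

Because the city sense sits near "Tuscany" and far from "Nightingale" in this space, semantic search can rank and compare meanings rather than just matching words.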

Transformers

The transformer is the core engine of LLMs. Its primary function is to convert an input embedding into an output embedding, which is then “translated” into human language. The transformer leverages all its training to determine the most probable output given the user’s input, generating its output by selecting the most likely token at each step. This process allows the transformer to produce coherent and contextually appropriate responses and is a key component in the effectiveness of LLMs in understanding and generating human-like text.
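The "most likely token at each step" selection can be shown in isolation. The scores (logits) below are invented stand-ins for what a trained model would emit; softmax turns them into probabilities and the highest one wins:

```python
# One decoding step in miniature. The vocabulary and logits are
# invented; a real model produces logits over tens of thousands of
# tokens, conditioned on the whole preceding context.
import math

vocab = ["dog", "cat", "ball", "ran"]
logits = [2.1, 0.3, -1.0, 1.2]           # hypothetical model scores

def softmax(xs):
    m = max(xs)                           # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]  # greedy selection
```

Production systems often sample from these probabilities instead of always taking the maximum, which is why the same prompt can yield different responses.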

The evolution and impact of transformers in AI

In 2017, researchers at Google published a paper titled “Attention Is All You Need,” introducing the concept of transformers and kick-starting an entirely new era of AI. Transformers now form the backbone of many cutting-edge AI applications, enabling the scaling of LLMs like those that power ChatGPT and Gemini.

While LLMs focus primarily on language, generative AI models can serve other purposes, transforming data between multiple modalities, including text, images, video, and audio. Regardless of the purpose, each model interprets the underlying meaning of its input and generates the most likely output according to the training data.

The real power of these models extends far beyond language. The latest AI solutions are multimodal, meaning they can use a text prompt to generate an image or video. Examples of multimodal systems include:

DALL-E, an OpenAI system that can create images from text descriptions, using a large neural network trained on a dataset of text-image pairs.

Midjourney, a generative AI program and service that can create images from text descriptions (prompts).

Stable Diffusion, an open-source AI model that can generate realistic images from text descriptions or other inputs. It uses a technique called latent diffusion, which involves iteratively removing “noise” – random variations or distortions – from an image until it matches the desired output.

Sora, an OpenAI system that generates videos from text prompts, due for launch in 2024.

Model Customization – refining existing models

The current LLMs available on the market, like GPT, are pre-trained on a diverse range of internet text and can serve many users at the same time. However, they are not designed to learn or adapt to new information after training. This means that while they can generate creative text based on the input provided, they cannot be adapted to a specific task without going through a further training process.

For instance, if you have a specific set of documents on a specialized topic and want to teach an LLM to understand and generate text about it, you cannot do this simply by prompting a hosted model like ChatGPT. Instead, you would need to take a general-purpose LLM and train it further on your specific dataset. This process, known as fine-tuning, adapts the model to your specific needs. However, fine-tuning a model is technologically complex and expensive. The investment is typically justified by the potential return or by concerns such as security and privacy that require the model to have a deep understanding of a particular domain. A practical example of a fine-tuned LLM is a chatbot trained on the FAQ of a large enterprise. This chatbot can answer specific questions about the company’s products or processes, providing valuable assistance to both customers and employees.

Grounded Generation/Retrieval Augmented Generation (RAG)

Grounded Generation, or Retrieval Augmented Generation (RAG), is a secure, cost-effective alternative to fine-tuning LLMs. It reduces hallucination levels and uses semantic retrieval techniques to provide context, and it can supply citations, enhancing the credibility of its outputs. RAG extracts, encodes, and indexes intelligence from various sources, which is then retrieved and ranked for each user query. It generates relevant content by considering unique contexts and historical materials. The choice between Grounded Generation and fine-tuning depends on the task requirements; some projects may benefit from a combination of RAG and fine-tuning, enhancing model performance and reliability. This trade-off is explained in depth in the Vectara case study (Chapter 26).
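The retrieve-then-generate flow can be sketched end to end. Here word overlap stands in for semantic embedding search, and the document store and query are invented; a real system would embed both and rank by vector similarity:

```python
# Minimal RAG sketch: score a tiny document store against a query,
# then place the best match into the prompt sent to the LLM.
# Documents and scoring are toy stand-ins for an embedding index.
documents = {
    "doc1": "our refund policy allows returns within 30 days",
    "doc2": "the support line is open monday to friday",
    "doc3": "shipping is free on orders over 50 euros",
}

def score(query, text):
    # Toy relevance score: shared-word count. A production system
    # would use embedding similarity here instead.
    return len(set(query.lower().split()) & set(text.split()))

def retrieve(query, k=1):
    ranked = sorted(documents,
                    key=lambda d: score(query, documents[d]),
                    reverse=True)
    return ranked[:k]

query = "what is the refund policy for returns"
top = retrieve(query)
prompt = (f"Answer using only this context: {documents[top[0]]}\n\n"
          f"Question: {query}")
```

Because the model is told to answer only from the retrieved context, its output is grounded in known material and can cite `doc1` as its source, which is what reduces hallucination.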

The commercial opportunities of the democratization of LLMs

Moving forward, as demonstrated by the companies in this book, the expectation is that businesses will take novel approaches to problems previously labeled unsolvable. Many people who do not come from AI research domains, and who until now lacked access to this capability, can now bring their creativity to solving new problems.

Multiple companies are competing to establish and develop environments that can work effectively with foundation models, including GPT, Claude, and Gemini, as well as open-source models such as Llama. The availability of open models is leading to diverse and innovative applications, as individuals and organizations take them in unique and interesting directions, as we will see in the following interviews and case studies.

The pace of development in this area is likely to be rapid, with companies striving to create use cases that offer some form of differentiation. This competitive landscape is expected to drive investment, innovation, and progress in the field.

How heavily regulated your industry is will strongly influence how applicable the models are. In areas of very high regulation, such as finance and healthcare, where being correct rather than close enough is paramount, these models are less applicable without considerable investment in customization.

2.b AI Technology Stack

An AI technology structure, often referred to as “the stack,” is a set of tools and technologies that are used to develop, deploy, and manage AI applications. It consists of different layers that perform various functions such as data collection, data processing, machine learning, model training, model deployment, and user interface. An AI tech stack can vary depending on the type and complexity of the AI project.

To understand the components of the tech stack, let’s use an example of an infrastructure investment firm wanting to forecast the value of their assets at exit. To accomplish this, the firm will need the following:

Data sources: Both internal (for example, asset depreciation history) and external (for example, macroeconomic factors, demographic data) sources to provide the necessary data.

Data pipeline layer: Automated processes to ingest data from various sources and store it consistently in a central data store.

Data storage: A system to efficiently store and potentially transform the data to meet enterprise needs.

Data quality: Checks to ensure datasets are of good enough quality to properly train the AI model.

AI algorithm: Uses the data from storage to train a forecasting model that predicts asset value at exit. Past asset valuations and market conditions contribute to the training of the algorithm.

AI platform: Provides a toolset to efficiently build, test, and deploy AI algorithms in an enterprise, shared environment. The toolset includes Integrated Development Environments (IDEs), code editors, debuggers, testing tools, version control systems, and documentation tools.

APIs (Application Programming Interfaces): Make forecast data accessible programmatically through an internal system. APIs allow authorized third-party systems to access the data in a controlled and monitored way.

AI solution: A fully integrated solution that allows business users to explore and analyze forecasted asset valuations as well as “what if” scenarios to anticipate various outcomes. It often includes a web-based interface for interaction and a dashboard for standardized reporting.
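The layers above can be traced in a deliberately tiny end-to-end sketch: ingest, store, "train" a trivial trend model, and expose a forecast function that an API or solution layer could wrap. Asset names and figures are invented:

```python
# Toy walk-through of the stack layers for the asset-forecasting
# example. The single internal source, asset, and values are invented;
# a real pipeline would handle many sources, quality checks, and a
# proper forecasting model.
raw_sources = {
    # (asset, year, book_value) rows from a hypothetical internal system
    "internal": [("asset_a", 2020, 100), ("asset_a", 2021, 95),
                 ("asset_a", 2022, 90)],
}

def ingest(sources):
    # Data pipeline layer: pull rows from every source into one store.
    store = []
    for rows in sources.values():
        store.extend(rows)
    return store

def train(store):
    # "AI algorithm" layer reduced to its simplest form: the average
    # year-on-year change in book value.
    values = [v for _, _, v in sorted(store, key=lambda r: r[1])]
    deltas = [b - a for a, b in zip(values, values[1:])]
    return sum(deltas) / len(deltas)

def forecast(last_value, years, trend):
    # What an API / AI-solution layer would call per user request.
    return last_value + trend * years

store = ingest(raw_sources)
trend = train(store)
```

Even in this toy form the separation of concerns is visible: ingestion knows nothing about the model, and the forecast function could be served behind an API without exposing the training step.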

Data as a universal resource

Without data, there is no AI. The emerging imperative for businesses is to not only adopt AI models but to also establish robust data management practices. Historically, some companies may have managed without a proper data architecture, but to leverage AI effectively, a solid foundation is essential.

Despite the buzz around AI and machine-learning techniques, their applications are still maturing, with the majority of effort and time spent on the data foundations needed to enable data usage. We will encounter this challenge in honest interviews with practitioners, particularly in more established companies.

Companies seldom question the value of their data. However, when they invest in structuring it and using it effectively, there is a significant shift: proper data management can streamline operations, improve decision-making, and ultimately drive growth.

Data comes in many forms, all of which are valuable for AI systems. The following are examples of data types that often serve initial use cases:

Transactional data includes data about sales, purchases, and other customer transactions. It’s often used for forecasting and personalization.

Financial data includes sales figures, revenue, costs, and other financial metrics. It’s often used for forecasting and financial planning.

Customer data includes demographic information, purchase history, and other information about customer interactions. It’s often used for personalization and churn prediction.

Customer support data is generated from customer interactions with support services, including call logs, emails, and chat transcripts. It’s used to improve customer service, identify common issues, and enhance product and service quality.

Operational data covers business operations such as inventory, suppliers, manufacturing processes and efficiency, shipments (timelines and costs), procurement, and compliance. It’s often used for forecasting and optimization.

Web analytics data includes data about website usage, such as page views, click-through rates, and time spent on site. It’s often used for personalization and understanding customer behavior.

In addition to the typical data inputs, several other types of data are widely used to support various use cases such as personalizing customer journeys, B2B acquisition, understanding customer sentiment to inform marketing strategies, optimizing supply chains, and new product development. These include:

Sociodemographic data, which includes information about individuals such as age, gender, and education. It’s essential for market research, customer segmentation, and tailoring products and services to specific demographic groups.

User journey data, which is generated by tracking the path customers take through a digital channel (website, application, chatbot, or conversational interface). It includes data points like page views, clicks, page browsing times, questions asked, sentiment (answer was useful versus not), basket adds, and purchases.

Firmographic data, which relates to organizational characteristics and typically includes specifics such as industry classification, company size gauged by revenue and employees, geographical locations, investment profiles, fundraising history, and credit ratings. It’s used for market segmentation, targeted marketing, and competitive analyses.

Social media data, which includes user-generated content on B2B and B2C social media platforms, including posts, likes, shares, comments, and engagement with content.

Product telemetry data, which is collected from products or devices, providing insights into usage patterns, performance issues, and user interactions. This data is typically used for product development, quality assurance, and customer experience improvement.

Data readiness

Data needs to be prepared or pre-processed to be suitable for the specific AI model. Data readiness refers to the extent to which a business’s data is prepared, available, and suitable for use in AI applications. However, achieving data readiness is often challenging due to data assets scattered across different departments and systems, poor quality, lack of standardization, and accessibility as well as gaps in processes for compliance and security measures.

Lack of data readiness is often the primary factor contributing to delays or abandonment of AI initiatives. Therefore, ensuring data readiness is crucial for successful AI implementation.

AI as a Service (AIaaS)

Building and then maintaining AI solutions can be costly and time-consuming to satisfy compliance standards and regulations. AIaaS enables customers to access and use cloud-based AI capabilities without having to invest in the hardware or software needed to develop and deploy AI applications. It provides scalability, allowing clients to adjust their resources according to their needs without worrying about infrastructural limitations. AIaaS ensures flexibility as customers aren’t required to have expertise in AI or machine learning and can select from an array of pre-built or tailored solutions. Furthermore, it enables access to the latest technologies and best practices in the field.

3

Sam Liang

CEO of Otter.ai

We want to build a large spoken language model that can generate realistic conversations. Eventually the OtterPilot will have a really natural conversation with one person, or even with multiple people, and really understand the dynamics at play. In a few years, it’s conceivable that OtterPilot will actively participate in meetings… Quite fascinating, isn’t it?

LinkedIn: samliang

Sam Liang, CEO of Otter.ai, followed his initial studies in computer science with a PhD at Stanford University in computer-distributed systems. Instrumental in the development of the Google Maps location platform, Sam engineered the back-end system that powers Google Mobile Maps, used by billions of users worldwide. In 2010, he ventured into the world of entrepreneurship, dedicating his time and knowledge to building startups.

His innovative next venture in Palo Alto, Silicon Valley, initially focused on the development of a mobile analytics system. This caught the attention of Alibaba, leading to an acquisition in 2016. In that same year, undeterred by the fact that conversational AI was in a nascent form, he launched Otter.ai – a speech recognition system that is now a transcription service used worldwide.

LISA: How does your AI-powered system enhance business collaboration?

SAM: Our AI has high accuracy and speed, operating in real time, recording and processing information as it happens. The end product is live-streamed, providing users with immediate access to the data. The AI’s output appears on your computer almost instantly. But that’s not all. In the background, it’s also comprehending the context and understanding the nuances of the conversation. Meanwhile, our AI is also generating a real-time summary.

We have been working with AI for seven years now. During this time, we’ve been diligently building an AI-powered collaboration system for enterprises. Working in a large enterprise requires you to spend a ton of time in meetings – people forget how much time they are talking and listening on a daily basis. So, a tremendous amount of investment, both in terms of time and resources, is allocated to meetings. We’re trying to disrupt the current system, to optimize meetings, and reduce unnecessary ones. Using Otter, many large meetings can be bypassed as it swiftly provides a content summary, eliminating the need to sit through 60-minute meetings. This is a whole new world. We’re really excited about ChatGPT and other large language models and we’re building our own AI to enhance the value of conversations.

LISA: What was the motive behind the origin of Otter.ai? Could you share the story behind its inception?

SAM: There were two motivations. The first is my deep interest in data as a computer scientist. Think about the fact that humans have been around for about 100,000 years, and for most of that time, we’ve communicated verbally. But no voice data was saved, so that’s a huge loss of human knowledge and intelligence. It was only after Thomas Edison invented the audio recorder that we began to capture voice, but even then, 99.9% of human conversation is never recorded. So, I realized that there’s a big loss of knowledge there.

On a practical level, as a startup founder working with businesses and having to go to so many meetings, I found it hard to remember who said what or even recall what I had promised others, and this was back in 2016. I found it frustrating that I could use Gmail to search for anything from 20 years ago, but I couldn’t search for something I had heard just three hours ago. This led me to think, why not build a product application that enables this?

So, I had two motivations. One was about wanting to collect as much voice data as possible. The second is the practical need to help people remember the information they discussed and help teams share the information relevant to a broader audience outside the meeting. So, how do we share this information effectively? We wanted to solve that problem.

LISA: What are you focused on now to build out the services?

SAM: We’re building a horizontal product to help people with their meetings across different domains. This includes investment meetings, board meetings, internal team meetings, and even customer support or sales meetings.

We’ve recently launched a new product called OtterPilot for Sales. It’s a vertical solution built on top of our horizontal platform. This solution offers specific features tailored to the sales use case, for when a sales rep speaks to the customer.

In a typical sales interaction, the sales rep needs to listen very carefully to the customer’s requirements and pain points, the competitors they’re considering, their budget, and pricing requirements. Traditionally, the sales rep would need to carefully take notes and then, after the meeting, manually enter this information into a CRM system. Now, with AI OtterPilot for Sales, we completely automate this process because AI can listen, with the participants’ permission, transcribe and save everything, and then identify the important insights that need to be saved into the CRM. The AI extracts this information and instantly enters it into the CRM, saving the sales rep valuable time.

The AI provides transparency. The sales call information is available to the sales manager as well. The sales manager’s job is to monitor how all the sales calls go and to keep coaching the sales rep. Traditionally, the sales managers actually don’t have visibility on how all the sales calls go but with Otter, they can access every call, all the recordings, transcripts, summaries, and even get a score for the call. This automation and insight generation can really help both junior sales reps and sales managers, potentially uncovering insights they might have missed.

LISA: Can you bring this to life with a customer case study?

SAM: Certainly. We worked with a US-based IT automation services company facing productivity challenges due to the cumbersome process of manual note taking during sales calls. Otter helped them to:

Streamline the note-taking process to allow reps to focus on customer interactions.

Automatically record, transcribe, and store meeting notes to improve efficiency.

Easily tag and assign action items to participants to increase collaboration and move deals quickly.

The impact was significant: Otter greatly improved workflow, saving the sales team approximately one-third of their time and increasing productivity.

Another use case was the work we did with a global financial media company to address various challenges, including those arising from the increase in remote work and the need to ensure inclusivity within their workforce. The media company decided Otter was the best company-wide solution for its tens of thousands of employees due to its:

Speed and accuracy over other AI-powered transcription tools.
Searchability feature, which allows media journalists to easily find what they're looking for.
Ability to save time in meetings, give journalists confidence in their notes, and provide accountability.
Potential to foster a more inclusive workplace, as Otter offers employees with disabilities new options for accessing their work and communicating in the workplace. For instance, Otter offers employees with hearing impairments an essential solution.

LISA: Could you elaborate on the technology stack behind Otter?

SAM: Our technology is designed to understand and transcribe conversations in real time with high accuracy and speed. Most products you see today don't provide live transcripts, or don't handle accents, names, specialized terminology, or medical terms if you're in the healthcare domain. All of these are hard problems for AI. We've also developed technology to understand each person's voice print and identify who's talking. Separating one person's voice from another is natural for humans but challenging for machines.
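Speaker identification of this kind is commonly built on voice-print embeddings. The sketch below is a hypothetical illustration, not Otter's actual pipeline: it assumes each speech segment has already been converted into a numeric embedding vector by some speaker encoder (the toy 2-D vectors here stand in for that), and matches it against enrolled speakers by cosine similarity.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_speaker(segment_embedding, enrolled, threshold=0.7):
    """Match a segment's voice-print embedding against enrolled speakers.

    Returns the best-matching speaker name if its similarity exceeds the
    threshold, otherwise 'unknown'. `enrolled` maps name -> embedding.
    """
    best_name, best_sim = "unknown", threshold
    for name, emb in enrolled.items():
        sim = cosine(segment_embedding, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name
```

A real diarization system would additionally segment the audio stream by speaker turns before matching; this sketch only shows the matching step.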

The raw transcript could have thousands of words from one meeting, so to make it more consumable we summarize and condense the information into a few bullet points and identify the key topics. You can then search for topics and find a few bullet points for each one.
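As a rough illustration of condensing a long transcript into a few bullet points and key topics, here is a toy extractive approach based on word-frequency scoring. This is a minimal sketch under simple assumptions, not Otter's production method, which relies on trained language models; the stopword list here is an arbitrary stand-in.

```python
import re
from collections import Counter

# Small illustrative stopword list; a real system would use a proper one.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "we", "is",
             "it", "for", "on", "that", "this", "with", "our", "be", "they"}

def _terms(text):
    """Lowercased non-stopword tokens in the text."""
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

def summarize(transcript, max_bullets=3):
    """Score each sentence by the corpus frequency of its terms and keep
    the top-scoring sentences, in their original order, as bullets."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", transcript) if s.strip()]
    freq = Counter(_terms(transcript))
    ranked = sorted(sentences, key=lambda s: sum(freq[w] for w in _terms(s)), reverse=True)
    top = set(ranked[:max_bullets])
    return [s for s in sentences if s in top]

def key_topics(transcript, n=3):
    """Most frequent content words, as a crude stand-in for topic extraction."""
    return [w for w, _ in Counter(_terms(transcript)).most_common(n)]
```

For example, feeding in a short sales-call transcript would yield the two or three sentences whose vocabulary recurs most often, plus the most frequent content words as topic labels.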

Moving forward, we’re building a large spoken language model, which is different from the models built by OpenAI and others. Our focus is on training with conversational data, as spoken language and human conversations have different characteristics compared to written documents.

The most prominent differences involve multiple speakers who, as they take turns, will each speak with specific intonation and emotion. This dynamic is special to conversations and differs from written documents, which are usually formal and have a well-designed structure. Conversations, on the other hand, are free-flowing and unstructured, and the presence of multiple speakers makes things even more complicated.

The next step is to actually generate conversations. We want to build a large spoken language model that can generate realistic conversations. Eventually the OtterPilot can have a really natural conversation with one person, or even with multiple people, and really understand the dynamics at play. In a few years, it’s conceivable that OtterPilot will actively participate in meetings, using advanced AI analysis to identify key topics, actions, decisions made, and discussion sentiment. Additionally, it could provide multilingual support. Quite fascinating, isn’t it?

LISA: Is OtterPilot being developed only in English or in multiple languages?

SAM: At present, we are focusing solely on English. However, we have plans to extend our reach to all other major languages in the future.

LISA