Marketing with AI For Dummies - Shiv Singh - E-Book

Description

Stay ahead in the marketing game by harnessing the power of artificial intelligence

Marketing with AI For Dummies is your introduction to the revolution that’s occurring in the marketing industry, thanks to artificial intelligence tools that can create text, images, audio, video, websites, and beyond. This book captures the insight of leading marketing executive Shiv Singh on how AI will change marketing, helping new and experienced marketers tackle AI marketing plans, content, creative assets, and localized campaigns. You’ll also learn to manage SEO and customer personalization with powerful new technologies.

  • Peek at the inner workings of AI marketing tools to see how you can best leverage their capabilities
  • Identify customers, create content, customize outreach, and personalize customer experience with AI
  • Consider how your team, department, or organization can be retooled to thrive in an AI-enabled world
  • Learn from valuable case studies that show how large organizations are using AI in their campaigns


This easy-to-understand Dummies guide is perfect for marketers at all levels, as well as those who only wear a marketing hat occasionally. Whatever your professional background, Marketing with AI For Dummies will usher you into the future of marketing.

Page count: 638

Publication year: 2024




Marketing with AI For Dummies®

To view this book's Cheat Sheet, simply go to www.dummies.com and search for “Marketing with AI For Dummies Cheat Sheet” in the Search box.

Table of Contents

Cover

Title Page

Copyright

Introduction

About This Book

Foolish Assumptions

Icons Used in This Book

Beyond the Book

Where to Go from Here

Part 1: Getting Started with Marketing with AI

Chapter 1: A Brief History of AI

Early Technological Advances

Alan Turing and Machine Intelligence

The Dartmouth Conference of 1956

Machine Learning and Expert Systems Emerge

An AI Winter Sets In

The Stanford Cart: From the ’60s to the ’80s

More AI Developments in the 1980s

Rapid Advancements of AI in the 1990s and Beyond

Chapter 2: Exploring AI Business Use Cases

Automating Customer Service

Enhancing Product and Technology with AI

Accelerating Research and Development

Giving Marketing an AI Boost

Optimizing Sales with AI

Adding AI to Legal Activities

Chapter 3: Launching into the AI Marketing Era

Ready or Not: AI Is Your New Marketing Copilot

Watching AI Upend the Corporate World

Taking Foundational Steps Toward AI Marketing

Adopting a Strategic Framework for Entering the AI Era

Part 2: Exploring Fundamental AI Structures and Concepts

Chapter 4: Collecting, Organizing, and Transforming Data

Defining Data in the Context of AI

Choosing Data Collection Methods for Marketing with AI

Putting Your Marketing Data in Its Place

Understanding Data via Manual and Automated Systems

Preparing the Data for Use by AI Algorithms and Models

Chapter 5: Making Connections: Machine Learning and Neural Networks

Examining the Process of Machine Learning

Understanding Neural Networks

Supervised and Unsupervised Learning

Exploring Reinforcement Learning

Mastering Sequences and Time Series

Developing Vision and Image Processing in AI

Tools for Machine Learning and Neural Networks

Chapter 6: Adding Natural Language Processing and Sentiment Analysis

Demystifying the Backbone of NLP

Elevating NLP with Machine Learning

Examining Transformers and Attention Mechanisms

Unpacking Sentiment Analysis

Challenges for NLP and Sentiment Analysis

Engaging Best Practices for Using NLP and Sentiment Analysis

Chapter 7: Collaborating via Predictions, Procedures, Systems, and Filtering

Understanding Predictive Analytics

Putting AI Procedures into Practice

The AI System Development Lifecycle

Understanding Filtering in AI

Chapter 8: Getting Comfortable with Generative AI

Changing the Game with Generative AI

Getting to Know GPT Models

Creating New Text, Images, and Video

Introducing Major Consumer-Facing Generative AI Models

Addressing the Challenges of Using Generative AI Models

Part 3: Using AI to Know Customers Better

Chapter 9: Segmentation and Persona Development

Exploring Behavioral Segmentation Elements

Sourcing the Right Customer Data

Seeing How AI Performs Segmentation

Refining, Validating, and Enhancing Segmentation Models

Aligning Persona Development

Leveraging AI Personas for All Business Efforts

Employing Synthetic Customer Panels

Chapter 10: Lead Scoring, LTV, and Dynamic Pricing

Working Together: Three Core Concepts

Scoring Leads with the Help of AI

Calculating Lifetime Value to Affect Lead Scoring

Turning Lead Scoring and LTV Insights into Dynamic Pricing

Chapter 11: Churn Modeling and Measurement with AI

Getting the Scoop on Churn Modeling

Ramping Up Your Measurement Operations

Checking Out Tools for Churn Modeling and Measurement Operations

Part 4: Transforming Brand Content and Campaign Development

Chapter 12: Using AI for Ideation and Planning

Engaging AI to Ideate on Behalf of Human Beings

Deciding whether AI Hallucinations Are a Feature or a Bug

Following Practical Steps for Idea Generation with AI

Deciding on AI Ideation Tools to Use

Chapter 13: Perfecting Prompts for Conversational Interfaces

Reviewing Use Cases for Conversational Interfaces

Writing Strong Prompts to Guide AI Responses

Good and Bad Marketing Prompt Design Examples

Refining and Iterating Strong Prompts

Fighting AI Bias in Prompt Writing

Using Prompt Design Apps

Chapter 14: Developing Creative Assets

Trying Out an AI-Generated Where’s Waldo? Illustration

Exploring an Approach for Creating Visual Assets with AI

Enhancing Existing Creative Assets

Fine-Tuning Creativity with AI Tools and Techniques

Choosing AI Tools for Creating Visual Assets

Chapter 15: Search Engine Optimization (SEO) in the AI Era

Describing Search Generative Experiences (SGEs)

Strategies for SEO Success in the AI Era

Enhancing the User Experience with AI

Maximizing Your SEO Efforts

Knowing the AI Tools to Use with SEO

Chapter 16: Performing A/B Testing with AI

Examining the Fundamentals of A/B Testing

Surveying A/B Testing Extensions

Gathering AI Tools for A/B Testing

Chapter 17: Fine-Tuning Content with Localization and Translation

Exploiting AI for Localization and Translation

Adopting Core Strategies for Localization

Examining Real-Time Localization and Translation Solutions

Part 5: Targeting Growth Marketing and Customer Focus with AI

Chapter 18: Applying AI to Performance Marketing

Examining Google Performance Max

Exploring Meta Advantage+ Campaigns

Inspecting Amazon Ads

Taking Stock of TikTok Advertising

AI Tools for Performance Marketing

Chapter 19: E-mail and SMS Marketing with AI

Tracking E-mail and SMS Marketing

Adding the Power of AI to E-mail and SMS Marketing

AI-Powered E-mail and SMS Marketing Tools

Chapter 20: Diving into Personalized Marketing

Adapting Marketing to Meet Consumer Personalization Preferences

Examining Personalization Concepts

Unlocking the Deeper Value of Personalization with Generative AI

Making Personalization Operational with AI

AI Tools to Help with Personalization

Chapter 21: Leading Your Business in the AI Era

Following Steps for Integrating AI into Your Business

Building AI Capability within Marketing

Integrating Marketing with the Rest of the Enterprise

Organizing for the Future

Chapter 22: Addressing Ethical, Legal, and Privacy Concerns with AI

Operating Principles for Ethical AI

Using All Data Responsibly

Fighting Bias in Data and Results

Protecting Copyright and Intellectual Property

Facing the Deepfake Problem

Saving Human Beings from Artificial Intelligence

Part 6: The Part of Tens

Chapter 23: Ten Pitfalls to Avoid When Marketing with AI

Ignoring Qualitative Insights

Depending Solely on Generated Personas

Relying Only on AI for Creative Briefs

Bypassing Human Creativity

Losing Your Brand Voice

Neglecting Emerging Media Channels

Over-Optimizing for Short-Term Goals

Creeping Customers Out

Ignoring the Value of the Human Touch

Relying Solely on AI for ROI Analysis

Chapter 24: Ten Future AI Developments to Watch For

Quantum Computing–Aided AI

Autonomous Creative Campaigns

Cognitive AI Systems for Deep Insights

AI-Driven Virtual Reality Experiences

Neural Interface for Marketing Insights

AI-Curated Personal Digital Realities

Synthetic Media for Dynamic Content

Predictive World Modeling

AI as a Customer Behavior Simulator

Molecular-Level Product Customization

Index

About the Author

Connect with Dummies

End User License Agreement

List of Tables

Chapter 4

TABLE 4-1 Data Structures and Handling

Chapter 5

TABLE 5-1 Specifics of Time Series in Neural Networks

Chapter 7

TABLE 7-1 Creating Successful Predictive Models

Chapter 8

TABLE 8-1 Generative AI Concepts and Structures

TABLE 8-2 Major Generative AI Models

Chapter 9

TABLE 9-1 Behavioral Segmentation Data

Chapter 10

TABLE 10-1 Companies Offering AI in Lead-Scoring Solutions

TABLE 10-2 Companies with AI-Based LTV Solutions

TABLE 10-3 Companies with AI in Dynamic Pricing Solutions

Chapter 11

TABLE 11-1 AI Tools for Churn Modeling and Measurement

Chapter 12

TABLE 12-1 AI Tools for Idea Generation

Chapter 13

TABLE 13-1 Elements of Good Prompt Designs

TABLE 13-2 Bad Marketing Prompt Designs

TABLE 13-3 How to Refine Prompts

Chapter 14

TABLE 14-1 AI-Generated Visual Asset Tools

Chapter 15

TABLE 15-1 AI Tools That Help with SEO

Chapter 16

TABLE 16-1 AI Tools for A/B Testing

Chapter 17

TABLE 17-1 LLMs for Localization and Translation

TABLE 17-2 Consumer AI Tools for Translation

TABLE 17-3 Enterprise AI Tools for Translation

Chapter 18

TABLE 18-1 Marketing Platforms that Use AI

Chapter 19

TABLE 19-1 AI Tools for E-mail and SMS Marketing

Chapter 20

TABLE 20-1 Tools that Use AI for Personalization



Marketing with AI For Dummies®

Published by: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, www.wiley.com

Copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies.

Media and software compilation copyright © 2025 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies.

Published simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Trademarks: Wiley, For Dummies, the Dummies Man logo, Dummies.com, Making Everything Easier, and related trade dress are trademarks or registered trademarks of John Wiley & Sons, Inc. and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: THE PUBLISHER AND THE AUTHOR MAKE NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS WORK AND SPECIFICALLY DISCLAIM ALL WARRANTIES, INCLUDING WITHOUT LIMITATION WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE. NO WARRANTY MAY BE CREATED OR EXTENDED BY SALES OR PROMOTIONAL MATERIALS. THE ADVICE AND STRATEGIES CONTAINED HEREIN MAY NOT BE SUITABLE FOR EVERY SITUATION. THIS WORK IS SOLD WITH THE UNDERSTANDING THAT THE PUBLISHER IS NOT ENGAGED IN RENDERING LEGAL, ACCOUNTING, OR OTHER PROFESSIONAL SERVICES. IF PROFESSIONAL ASSISTANCE IS REQUIRED, THE SERVICES OF A COMPETENT PROFESSIONAL PERSON SHOULD BE SOUGHT. NEITHER THE PUBLISHER NOR THE AUTHOR SHALL BE LIABLE FOR DAMAGES ARISING HEREFROM. THE FACT THAT AN ORGANIZATION OR WEBSITE IS REFERRED TO IN THIS WORK AS A CITATION AND/OR A POTENTIAL SOURCE OF FURTHER INFORMATION DOES NOT MEAN THAT THE AUTHOR OR THE PUBLISHER ENDORSES THE INFORMATION THE ORGANIZATION OR WEBSITE MAY PROVIDE OR RECOMMENDATIONS IT MAY MAKE. FURTHER, READERS SHOULD BE AWARE THAT INTERNET WEBSITES LISTED IN THIS WORK MAY HAVE CHANGED OR DISAPPEARED BETWEEN WHEN THIS WORK WAS WRITTEN AND WHEN IT IS READ.

For general information on our other products and services, please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993, or fax 317-572-4002. For technical support, please visit https://hub.wiley.com/community/support/dummies.

Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.

Library of Congress Control Number: 2024944407

ISBN 978-1-394-23719-7 (pbk); ISBN 978-1-394-23721-0 (ePDF); ISBN 978-1-394-23720-3 (epub)

Introduction

Technology can revolutionize our lives in unimaginable ways. Many people don’t remember life before e-mail, the World Wide Web, mobile phones, and video streaming. Work routines often rely heavily on laptops, wireless Internet, and search engines. The transformations driven by artificial intelligence (AI) fall into this same category of technological shifts but, arguably, will be more dramatic than any of the other shifts that came before.

When ChatGPT (initially powered by the GPT-3.5 model) launched in November 2022, AI moved to the forefront of everyday technology use. ChatGPT quickly became one of the fastest-growing apps in history, marking a pivotal shift in the use of AI in everyday life.

Every marketing sub-function — annual planning, strategy, research, campaign development, ad production, media planning, analytics, CRM — stands poised for a transformation with the advent of AI. Marketers will copilot every activity with AI, leading to more insightful, creative, personalized, and impactful marketing than ever before.

About This Book

Discussing technological transformations in broad terms can feel abstract. In this book, you can find out how AI’s impact on everyday lives is becoming increasingly tangible and personal, and what that means for your work in marketing.

Marketing with AI For Dummies breaks down the implications of using AI for marketing into digestible pieces, making the subject accessible to any marketer. It provides definitions, frameworks, concepts, case studies, and practical guidance to translate AI’s vast potential into actionable strategies for your business.

And although the world of AI is changing rapidly, the pace at which it gets incorporated into the marketing ecosystem is slower, meaning that the core concepts, strategies, frameworks, and practical guidance are more timeless than you may initially think.

Here are some conventions that I use throughout this book and what they mean:

  • Italicized words or phrases are terms that I define for you in the surrounding text.
  • Web addresses appear in monofont. If you're reading a digital version of this book on a device connected to the Internet, note that you can click the web address to visit that website, like this: www.dummies.com.
  • In several chapters, I point out what I consider to be best marketing practices with the words Best Marketing Practice in bold and italics.

To make the content of Marketing with AI For Dummies more accessible, I divided it into six parts:

  • Part 1: Getting Started with Marketing with AI. This part lays the historical and contextual foundation for AI. It traces the evolution of AI from its mythological roots to modern-day applications, covering significant milestones such as the development of the Turing test, machine learning, and generative AI.
  • Part 2: Exploring Fundamental AI Structures and Concepts. In this part, I identify some of the best use cases for AI in marketing, evaluate various tools, and introduce some of the risks you may face when integrating AI into your workflow.
  • Part 3: Using AI to Know Customers Better. This part discusses AI’s ability to deliver personalized experiences to customers, tailoring content and advertisements to individual consumers, and enhancing customer engagement. You can examine AI-driven technologies, such as chatbots, and how they can contribute to enhanced customer satisfaction.
  • Part 4: Transforming Brand Content and Campaign Development. This part explores the role of AI in generating creative content. It discusses how to prompt AI tools to create content effectively and identifies which tools can help you produce high-quality content efficiently and at scale. You can read about AI’s impact on advertising, including how to run effective A/B testing with the latest AI technologies, develop stronger SEO programs, and localize content using AI.
  • Part 5: Targeting Growth Marketing and Customer Focus with AI. This part covers AI's integration into growth marketing, focusing on optimizing campaigns, improving customer experiences, and enhancing operational efficiency. It also addresses ethical, legal, and privacy concerns, providing principles for responsible AI use and its strategic integration into business operations.
  • Part 6: The Part of Tens. In this part, you can find ten pitfalls to avoid in AI marketing and ten developments that I predict are coming for the marketing world as it adopts AI more widely.

Foolish Assumptions

Whether you’re a chief marketing officer at a Fortune 500 company, a junior marketer in a small business, an agency executive working with marketers, or wearing several hats (including the marketing hat) in your business, this book is for you. The only real assumptions I make about you are that you’re interested in AI and how it can be used in marketing, and some best practices for doing so.

Icons Used in This Book

Throughout this book, icons in the margins highlight certain types of valuable information that call out for your attention. Here are the icons that you may encounter and a brief description of each.

The Tip icon marks tips and shortcuts that you can use to make working with AI in your marketing efforts easier.

Remember icons mark the information that’s especially important to know. To siphon off the most important information in each chapter, just skim through these icons.

The Technical Stuff icon marks information of a highly technical nature that you can normally skip over unless you want to get some nonessential info on the subject.

The Warning icon tells you to watch out! It marks important information that may save you headaches, including issues such as ethical missteps to avoid or common mistakes in execution that you can steer clear of.

Beyond the Book

In addition to all the AI-marketing information and guidance that you can find in this book itself, you get access to even more help and information online at Dummies.com. Check out this book’s online Cheat Sheet by going to www.dummies.com/ and searching for “Marketing with AI For Dummies Cheat Sheet.”

Where to Go from Here

The chapters in this book cover all the critical facets of marketing with AI. Each part builds on the previous one, providing a comprehensive road map for navigating the AI-driven transformation of the marketing landscape. However, you don’t have to read the book from cover to cover. You can dip into chapters that address different AI-related questions that you have while you incorporate AI into your marketing efforts. Check out the Table of Contents to identify the subjects most important to you, and dive in!

Part 1

Getting Started with Marketing with AI

IN THIS PART …

Trace AI’s evolution from myth to modern business tool.

Discover how businesses have applied AI in marketing, customer service, legal, and other functions.

Consider frameworks for integrating AI into your marketing efforts.

Chapter 1

A Brief History of AI

IN THIS CHAPTER

Tracking AI from conception to fruition

Watching machines fool people and beat the experts

Seeing advanced AI capabilities in everyday life

To fully grasp the role of artificial intelligence (AI) in business, I begin by helping you trace its fascinating history. This background exploration not only illuminates AI’s vast advancements, but also highlights its utility in business and marketing.

The earliest conceptions of artificial intelligence date back to Greek mythology, where Talos — an 8-foot-tall giant constructed of bronze — stood guard over the island of Crete to protect it from pirates and other invaders. Talos would throw boulders at ships and patrol the island each day. As the legend goes, Talos was eventually defeated when a plug near his foot was removed, allowing the ichor (blood of the gods) to flow out from the single vein in his body.

From that point forward, tales of automated entities flourished in mythology, captivating the minds of scientists, mathematicians, and inventors. Modern science and technology have realized some of these mythological concepts through recent advancements. In this chapter, I introduce you to those advancements, including the Turing test, machine learning, expert systems, and generative AI.

Early Technological Advances

Scientists trace the dawn of automation back to the 17th century and the invention of the pascaline, a mechanical calculator. Constructed by French inventor Blaise Pascal between 1642 and 1644, this groundbreaking device featured a controlled carry mechanism that facilitated the arithmetic operations of addition and subtraction by effectively carrying the 1 to the next column. This calculator worked especially efficiently when dealing with large numbers. Following in Pascal’s footsteps, Gottfried Wilhelm Leibniz, a German mathematician, invented a calculator in 1694 that expanded upon the concept of the pascaline by enabling all four basic arithmetic operations: addition, subtraction, multiplication, and division. These devices first offered a glimpse into the potential for mechanical reasoning.
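The pascaline's controlled carry can be modeled in a few lines of code. This is a toy sketch of column-by-column addition with an explicit carry, not a historical specification of Pascal's gearing; the function name and digit representation are my own.

```python
def pascaline_add(a: list[int], b: list[int]) -> list[int]:
    """Add two numbers given as lists of decimal digits, least significant
    digit first, the way the pascaline sums one column at a time."""
    result, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = a[i] if i < len(a) else 0
        db = b[i] if i < len(b) else 0
        total = da + db + carry
        result.append(total % 10)  # the digit that stays in this column
        carry = total // 10        # the 1 carried into the next column
    if carry:
        result.append(carry)       # overflow into a new leftmost column
    return result

# 927 + 385 = 1312, digits listed least-significant first
print(pascaline_add([7, 2, 9], [5, 8, 3]))  # [2, 1, 3, 1]
```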

Fast-forward to the early 1800s, and you encounter the Jacquard system, developed by Joseph-Marie Jacquard of France, which used interchangeable punched cards to dictate the weaving of cloth and the design of intricate patterns. These punched cards laid the groundwork for future developments in computing. Near the mid-1800s, British inventor Charles Babbage unveiled the first computational device known as the analytical engine. Employing punch cards, this machine could perform a variety of calculations involving multiple variables, and it featured a reset function when it completed its task. Importantly, it also incorporated temporary data storage for more advanced computations — a feature crucial for any artificial intelligence (AI) system.

By the late 1880s, the development of the tabulating machine — designed by American inventor Herman Hollerith specifically to process data for the 1890 U.S. Census — helped the development of AI reach another milestone. This electro-mechanical device utilized punched cards to store and aggregate data, effectively enhancing the analytical engine’s storage capabilities through the inclusion of an accumulator. Remarkably, modified iterations of the tabulating machine remained operational until as recently as the 1980s.

Alan Turing and Machine Intelligence

Many people regard Alan Turing, a British mathematician, logician, and computer scientist, as the founding father of theoretical computer science, and he paved the way for further AI breakthroughs. During World War II, he served at Bletchley Park, the United Kingdom’s codebreaking establishment; and he played a pivotal role in decrypting messages encoded by the German Enigma machine (a code-generating device). Scholars and historians credit his work at Bletchley Park with both shortening the war and saving millions of lives.

Turing’s key innovation at Bletchley was the development of the Bombe, a machine that significantly accelerated the code-breaking process used to decode messages from the Enigma machine. The Enigma used a series of rotating disks to transform plain text messages into encrypted cipher text. The complexity of this encryption device and the coded messages it generated came in part from the fact that Enigma users changed the machine’s settings daily. The United Kingdom and all the Allies found cracking the code within the 24-hour window — before the settings were altered again — exceedingly difficult. The Bombe automated the process of identifying Enigma settings, sorting through various potential combinations far more rapidly than any human could. This automation enabled the British to regularly decode German communications.

Although the details of this code-breaking device remained classified for many years, the Bombe stands as one of the earliest examples of technology outperforming humans in tasks that traditionally required human intelligence, executing them more efficiently and accurately.

The Turing Test in 1950

Soon after World War II, in a paper published in 1950 titled “Computing Machinery and Intelligence,” Turing introduced the idea of defining a standard by which we can call a machine intelligent. He designed the experiment (now called the Turing test) to answer the question, “Can machines think?” The fundamental premise of the experiment said that if a computer can participate in a dialogue with a human in such a way that an observer can’t tell which participant is human and which is computer, then you can consider that computer intelligent.

Turing’s test proposed that a human evaluator assess dialogues between a human and a machine that was designed to generate human-like responses. The evaluator knows that one of the participants is a machine, but not which one. To eliminate any bias from vocal cues, Turing proposed that the test giver limit the interactions to a text-only medium. If the evaluator found it challenging to distinguish between the machine and the human participant, the machine passed the test. The evaluation didn’t focus on the correctness of the machine’s answers, but on how indistinguishable its responses were from a human’s. In fact, the test’s criteria didn’t make any reference to the accuracy of the answers.

The Turing test: 1960s and beyond

In 1966, well after Alan Turing’s death, German-American scientist Joseph Weizenbaum created ELIZA, the first program that some say appeared to pass the Turing test. Many sources refute that it could pass the Turing test, but it was technically capable of making some humans believe that they were talking to human operators. The program worked by studying a user’s typed comments for keywords and then executing a rule that transformed the user’s comments, resulting in the program returning a new sentence. In effect, ELIZA, like many programs since then, mimicked an understanding of the world without actually possessing any real-world knowledge.
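The keyword-and-rule mechanism ELIZA used can be sketched in a few lines. The rules below are invented for illustration, not Weizenbaum's original DOCTOR script; each one pairs a keyword pattern with a template that folds the user's own words back into a reply.

```python
import re

# Illustrative ELIZA-style rules: (keyword pattern, reply template).
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(comment: str) -> str:
    """Scan the comment for a keyword rule and transform it into a reply."""
    for pattern, template in RULES:
        match = pattern.search(comment)
        if match:
            return template.format(*match.groups())
    # No keyword matched: fall back to a content-free prompt, mimicking
    # understanding without possessing any real-world knowledge.
    return "Please go on."

print(respond("I feel anxious about work"))  # Why do you feel anxious about work?
```

Notice that the program never interprets anything: it only pattern-matches and echoes, which is exactly why it could seem human in short exchanges while knowing nothing at all.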

Taking this development a step further, in 1972, Kenneth Colby, an American psychiatrist, created PARRY, which he described as ELIZA with attitude. Experienced psychiatrists tested PARRY in the early 1970s by using a variation of the Turing test. They analyzed text from real patients and from computers running PARRY. The psychiatrists correctly identified the patients only 52 percent of the time, a statistic consistent with random guessing.

Even to this day, the Turing test gives the world a concise, easily understandable method of assessing whether a piece of technology has intelligence or not. By limiting the test to text-based interactions that require natural language query (conversational English), anyone could easily understand the nature of the test when Turing first introduced it. And by separating out the accuracy of the response from the question of identification, it focused the test on evaluating what truly makes humans more human.

Computers have advanced by leaps and bounds since the time that Alan Turing first proposed the Turing test. But consider this timeline regarding the ongoing development of intelligent technology:

  • As recently as 2021, chatbots that much of the world had access to struggled to pass the Turing test consistently. Services such as Siri from Apple, Alexa from Amazon, and Google Assistant could speak to us in natural language but would quickly get stumped by some of the most basic questions. For example, the question “Describe yourself using only colors and shapes” might prompt the answer “Okay, I found this on the web for describing colors and shapes… .”
  • As of 2023, major chat interfaces from the likes of OpenAI, Google, and others can pass the Turing test. This quick change shows how technological advancements in the field of AI happen in fits and starts, with so much having changed dramatically in just 24 months.

The Dartmouth Conference of 1956

The academic community often considers the Dartmouth Conference of 1956 as the birth of artificial intelligence (AI) as a distinct field of research. Held during the summer of that year at Dartmouth College in Hanover, New Hampshire, the conference brought together luminaries from various disciplines — computer science, cognitive psychology, mathematics, and engineering — under one roof for an extended period of six to eight weeks. Organized by computer scientists John McCarthy, Marvin Minsky, and Nathaniel Rochester, and mathematician Claude Shannon, the conference aimed to explore “every aspect of learning or any other feature of intelligence,” as stated in the original proposal for the conference.

The Dartmouth Conference of 1956 was groundbreaking for several reasons. It was more than just a summer gathering of intellectuals; it was a seminal event that shaped the trajectory of AI as we know it today. It provided the name, the initial community, the research directions, and the momentum that have fueled decades of innovation in AI.

Specifically, the conference

Coined the term artificial intelligence (AI): The conference gave a name to a field that had been, up until that point, loosely defined and interdisciplinary across mathematics, computer science, engineering, and related fields. John McCarthy, one of the organizers, was credited with introducing the term, which helped shape the future direction of research by providing a focal point around which scholars could rally.

Served as a catalyst for future research: It set the research agenda for decades to come. During the conference, participants engaged in deep discussions, brainstorming sessions, and even early-stage experiments on foundational topics in the AI field. The participants aimed to discover whether they could program machines to simulate aspects of human intelligence, with research topics such as

Problem-solving

Symbolic reasoning

Neural networks

Language understanding

Learning machines

They designed programs to play chess, prove mathematical theorems, and generate rather simplistic sentences.

Provided a collaborative platform for interdisciplinary research: Researchers who may not have otherwise crossed paths now engaged in meaningful dialogues, forging relationships that would lead to significant collaborations in the years and decades to come. This interdisciplinary nature was crucial for tackling the complex problem of simulating human intelligence, which requires insights from various fields such as psychology, neuroscience, linguistics, operations research, economics, and more.

Attracted critical funding and attention to the developing field of AI: The visibility and credibility gained from this event led to increased investment in AI research from both governmental and private sectors. This financial backing was essential for the development of labs, academic programs, and research projects that propelled the field forward.

Machine Learning and Expert Systems Emerge

Following the Dartmouth Conference (see the preceding section), two key subfields emerged that became the cornerstones of artificial intelligence — machine learning and expert systems. The expert systems were rule-based methods that drew upon predefined sets of instructions established by human beings. Machine learning (initially referred to as self-teaching computers) represented a radical shift in approach that aimed to build systems that learned from data, rather than by following scripted rules.

Meeting machine learning

Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, officially coined the term machine learning in 1959. Unlike traditional computing methods that relied on explicit instructions for every operation, machine learning focused on developing algorithms capable of producing results from existing data. These algorithms use statistical techniques to identify patterns, make decisions, or predict future outcomes based on those patterns.

In the 1960s, the Raytheon Company made a significant contribution to the field by developing an early learning machine system that could analyze various types of data, including sonar signals, electrocardiograms, and speech patterns. The machine used a form of reinforcement learning, a subset of machine learning in which the algorithm identifies optimal actions through trial and error. In essence, the system was rewarded for correct decisions and punished for incorrect ones. Humans operated and fine-tuned the system, and those humans pushed a goof button to flag and correct any errors. These corrections enabled the machine to adapt and improve its performance over time.
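The reward-and-punish loop behind systems like Raytheon’s can be caricatured in a few lines of Python. This is a toy sketch only: the signal names, labels, and scoring rule are invented for illustration and aren’t drawn from the original machine.

```python
# Toy reinforcement-style learner: it guesses a label for each signal,
# and a "goof button" correction nudges its stored preferences.
def make_learner():
    # Preference scores for each (signal, label) pair, all starting at zero.
    scores = {}

    def guess(signal, labels):
        # Pick the label with the highest learned score for this signal.
        return max(labels, key=lambda lb: scores.get((signal, lb), 0))

    def feedback(signal, guessed, correct):
        # Reward the right answer; punish a wrong guess (the "goof button").
        scores[(signal, correct)] = scores.get((signal, correct), 0) + 1
        if guessed != correct:
            scores[(signal, guessed)] = scores.get((signal, guessed), 0) - 1

    return guess, feedback

guess, feedback = make_learner()
labels = ["sonar-echo", "speech"]
# Each correction strengthens the association between signal and label.
for _ in range(3):
    g = guess("ping", labels)
    feedback("ping", g, "sonar-echo")
print(guess("ping", labels))
```

The point of the sketch is the shape of the loop, not the scoring details: the system acts, a human flags errors, and the corrections accumulate so that future guesses improve.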

Critical standout features of machine learning include the following:

Adaptability: Instead of relying on humans to manually code solutions to problems, machine learning enables computers to come up with their own solutions by examining large sets of data. This freedom has led to groundbreaking applications across various sectors. For example, machine learning algorithms power large language models and computer vision systems that enable computers to identify and understand objects and people in images and videos.

These systems can

Generate human-like text.

Recognize thousands of objects and filter spam e-mails with incredible accuracy.

Transcribe and translate human speech in real time.

I discuss each of these topics in detail in subsequent chapters (Chapters 4 and 5, for example).

Efficient and scalable solutions: Because developing specific algorithms for each recognition, filtering, or generating task would be both costly and time-consuming, machine learning offers a far more efficient and scalable solution (which means that the solution can perform tasks on huge data sets without having a corresponding increase in costs). The data-driven approach to finding solutions has revolutionized the way technologists approach and solve problems, and it has automated complex tasks (such as reviewing social media content for hate speech) that computer scientists once considered beyond the reach of computers.

As machine learning continues to evolve, experts expect its impact and relevance across various fields to continue to grow. See Chapter 2 for examples of the effects on areas of business.

Examining expert systems

In the late 1960s, many researchers focused on capturing domain-specific knowledge, which laid the foundation for expert systems, meaning technology systems or computers that played the role of experts in a specific domain such as drug discovery. Those expert systems were the precursors to modern-day AI systems. By the 1970s, researchers created some of the first expert systems, including DENDRAL (designed for chemical mass spectrometry) and MYCIN (aimed at diagnosing bacterial infections). These expert systems captured knowledge and reasoning capabilities from human experts to offer advice on topics as diverse as simple medical diagnoses and exploration strategies for mineral mining.
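The rule-based idea at the heart of these systems can be sketched in a few lines of Python. The rules below are invented for illustration (they are not drawn from MYCIN or DENDRAL); the sketch only shows the general mechanism of chaining predefined if-then rules over known facts:

```python
# Minimal forward-chaining rule engine: facts in, conclusions out.
# Each rule fires when all of its conditions are already known facts.
RULES = [
    ({"fever", "cough"}, "possible-infection"),
    ({"possible-infection", "positive-culture"}, "bacterial-infection"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:  # keep applying rules until nothing new is learned
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "positive-culture"}))
```

Note that every rule here was written by a human expert in advance; the system can chain rules together but can’t learn new ones, which is exactly the limitation described next.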

The systems worked well in narrow subject domains, but the cost and difficulty of maintaining and scaling their rule-based knowledge effectively limited their usefulness. Research and development of expert systems went something like this:

In the late 1970s, a thawing of the AI winter (see the following section) supported the broader adoption of expert systems in various industries, including healthcare, finance, and manufacturing. During this period, computer scientists developed specific tools to help expand their expert systems, and those systems’ usefulness grew exponentially.

By the 1990s, the limitations of expert systems became very evident, particularly their inability to learn from their processing experiences or strengthen their performance without external programming. This shortcoming led to a decline in the development of stand-alone expert systems, and computer scientists began to integrate them into larger, more complex computer systems.

More recently, ideas at the heart of expert systems have seen a resurgence of sorts, although they often appear in hybrid forms that incorporate machine learning (see the preceding section) and other data-driven techniques. Although not many corporations create and use stand-alone expert systems (after their limitations on explicit knowledge and brittleness became more apparent), the core concepts of capturing and applying human expertise in computational models remain integral to AI. And broader AI solutions incorporate expert systems as a complement to other advanced methods (such as machine learning and natural language processing, or NLP; see the section “More AI Developments in the 1980s,” later in the chapter, for more).

The introduction of expert systems was an important moment in the history of artificial intelligence. Expert systems development pioneered knowledge engineering techniques that computer scientists still use to train AI systems today. But most AI tools now depend more on machine learning (which is much more scalable, or easily expanded), rather than explicitly programmed rules that require human involvement.

An AI Winter Sets In

After the hype surrounding artificial intelligence in the 1960s and early 1970s, the limitations of early AI became clear, leading to a period of reduced funding and interest that came to be known as the AI winter. The Lighthill report, compiled for the British Science Research Council and originally published in 1973, helped bring about this AI winter. The report criticized the lack of practical applications and questioned the potential of AI research. These criticisms led to reduced government funding in several countries, including the United Kingdom.

But even during this period of reduced funding, research continued that advanced core technical capabilities such as probabilistic reasoning, neural networks, and intelligent agents. Even in this period of reduced optimism, diligent computer scientists still drove key advancements before machine learning unlocked its next era of rapid progress in the 1980s.

The lessons of the AI winter of the 1970s have continued to inform the ethics debate around realistic versus overhyped claims in the AI world. This debate matters more than ever while differing opinions on the promise and perils of AI collide around the world.

The Stanford Cart: From the ’60s to the ’80s

You can’t have a conversation about the history of artificial intelligence (AI) without discussing the story of the Stanford Cart, a remote-controlled, four-wheeled cart first developed in the 1960s that later came equipped with a camera and onboard computer for vision and control. This seminal project in the history of AI and robotics was one of the earliest attempts to create a self-driving vehicle. The cart, which was developed over a 20-year period, served as a platform for research into computer vision, path planning, and autonomous navigation.

The Stanford Cart project not only mirrored the evolution of AI and robotics over its 20-year span but also helped shape the field’s trajectory. The project remains a testament to the enduring impact of focused research and iterative development in AI.

The stages of the Stanford Cart’s evolution include

Remote control: In the 1960s, the first version of the cart simply allowed for remote control capabilities. Starting the cart’s development this way made perfect sense because the cart served as a research platform for investigating the problem of controlling a Moon rover remotely from Earth.

Self-navigation: The early 1970s saw the cart get a camera and an onboard computer, which allowed it to navigate an obstacle course by taking photographs and then computing the best path forward based on those images. Later in the 1970s, more advanced computer vision algorithms allowed the cart to navigate complex environments more quickly as its image-processing capabilities accelerated.

Real-time complex navigation: By the 1980s, the cart could follow roads and avoid obstacles in real time, largely due to improvements in both hardware and software, especially broad increases in computer processing power. This capability marked a significant milestone in the development of autonomous vehicles, which entered commercial production decades later. Increased processing power allowed for faster and more complex computations, while advanced algorithms enabled the cart to make split-second decisions.

As one of the first practical applications of AI in robotics, the Stanford Cart demonstrated how computers could interact with the real world. The computer components that allowed visual input and analysis demonstrated the potential benefits of sophisticated image recognition and scene interpretation. And today’s robotics and autonomous systems for path planning and obstacle avoidance use various algorithmic techniques that the Stanford Cart first introduced.

More AI Developments in the 1980s

Arguably, the 1980s stand as a critical decade in the development of artificial intelligence, characterized by groundbreaking advancements in various subfields, especially in machine learning, neural networks, and natural language processing. This period saw foundational advancements that set the stage for the AI technologies of today.

This decade’s significant developments include

Backpropagation: The introduction and popularization of the backpropagation algorithm for training neural networks. Before backpropagation, training complex neural networks took a lot of computational power and was less effective. The backpropagation algorithm streamlined the training process by efficiently calculating the error between predicted and actual outcomes, and then distributing this error back through the network to adjust the internal weights (which effectively transform the input data within the network’s hidden layers). This innovation facilitated the training of multi-layer neural networks and paved the way for more complex architectures and applications.

Deep learning: A subfield of machine learning that uses neural networks that have three or more layers. Researchers such as Geoffrey Hinton, Yann LeCun, and Yoshua Bengio (operating at various universities) were instrumental during this period because they laid the groundwork for this subfield. These layered neural networks found use in a range of applications, from image and voice recognition to natural language understanding, which would later fuel innovations in automating various business processes.

Natural language processing (NLP): Initially, programmers largely based NLP systems on handcrafted rules. However, the 1980s saw a significant shift toward statistical models, making these systems more robust and scalable. The decade set the stage for machine learning–based approaches that have come to dominate the NLP landscape, enabling more complex applications such as chatbots, translation services, and sentiment analysis tools.

Robotics: The decade also marked the beginning of significant advancements in robotics, much of which was built on the foundational concepts of AI. The Stanford Cart project, for example (see the preceding section), served as a crucial catalyst for research into autonomous systems.
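For readers who want to see the mechanics behind the backpropagation item at the start of this list, here is a deliberately tiny sketch: a one-weight “network” trained by computing the error between prediction and target and adjusting the weight against the error’s gradient. Real backpropagation distributes such gradients backward through many layers; the data and learning rate here are invented for illustration.

```python
# One-weight "network": prediction = w * x. Backpropagation-style training
# computes how the squared error changes with w, then nudges w the other way.
w = 0.0
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets follow y = 2x

for _ in range(200):  # training epochs
    for x, target in data:
        prediction = w * x
        error = prediction - target
        gradient = 2 * error * x   # derivative of error^2 with respect to w
        w -= 0.01 * gradient       # step against the gradient

print(round(w, 2))  # converges near 2.0, the slope hidden in the data
```

The same error-driven weight adjustment, repeated across millions of weights and many layers, is what made the multi-layer networks of the 1980s trainable.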

Rapid Advancements of AI in the 1990s and Beyond

The remarkable journey of artificial intelligence (AI) goes from its mythological inspirations (Talos, the bronze giant in Greek mythology who protected Crete) to groundbreaking inventions such as Pascal’s calculator (discussed in the section “Early Technological Advances,” earlier in this chapter) and projects such as the Stanford Cart (see the section “The Stanford Cart: From the ’60s to the ’80s,” earlier in this chapter). The progress made since the early 2010s alone has transformed the AI landscape and altered the way people think about technology’s role in various domains, including business and society at large.

Beginning in the 1990s, rapid advancements in existing branches of AI research brought expansion of capabilities to machine learning and deep learning. Other advancements in AI research brought new depth to the capability of AI to demonstrate seemingly intuitive thinking and to generate human-like original content.

Watching machine learning grow up

Between the 1990s and the early 2000s, machine learning emerged as a dominant force in AI development. (See the section “Meeting machine learning,” earlier in this chapter, for an introduction to machine learning.) This field of AI uses algorithms to analyze huge data sets to uncover patterns and make predictions without built-in, explicitly programmed rules. Spurred on by significant increases in computing power and data availability, machine learning delivered new use cases in the realm of computer vision (where computers derive information from images, videos, and other input) and recommender systems (information filtering systems that suggest items most pertinent to the user).

These AI advancements came about in part because the AI engines had access to large data sets. The models used to analyze these data sets mimicked more human-like pattern recognition and decision making by using statistical relationships between the data objects. These developments illustrated how quickly an AI system could learn (extrapolate) from data on its own, rather than having a programmer code specific and explicit instructions for that system. Machine learning is at the heart of AI to this day.

Playing a pivotal chess match

The 1990s saw a pivotal moment in the history of AI that captured the imagination of people around the world. IBM’s Deep Blue, a chess-playing computer, defeated the reigning world chess champion, Garry Kasparov, in 1997. Even though Deep Blue didn’t have the benefit of a modern neural network at the time and instead relied on brute-force heuristic search techniques and specialized chess algorithms, it did incorporate basic machine learning techniques to evaluate board positions and enhance its game play. Deep Blue’s chess win was another momentous advance for AI and machine learning; it

Proved that a machine can outperform a human in a task that required complex decision-making over many steps.

Triggered huge debates about the future of AI and its potential impact on all facets of life. Those debates have only accelerated today with the much more recent introduction of generative artificial intelligence (see the section “Creating content with generative AI,” later in the chapter).

Supported Kasparov’s perspective that machines and humans working together can accomplish much more than either working alone. He introduced the term advanced chess for a form of chess in which humans partner with computer systems, emphasizing that human intuition and machine calculation together were an almost unbeatable combination.

Kasparov’s idea of advanced chess had a lasting impact on how we think about AI today, and many AI researchers consider advanced chess a precursor to modern theories around AI serving as an assistant to a human operator in various domains. (Satya Nadella, Microsoft CEO, has referred to this assistance more popularly as AI co-piloting.) In subsequent chapters, I delve into the role of AI as a complementary tool for humans in the realm of business and marketing, and in those discussions, you can clearly trace the philosophical roots of this cooperative approach to Kasparov’s insights.

Tracking the deep learning revolution

In recent years, the advent of deep learning has significantly elevated the capabilities and accuracy of AI systems. Building on the foundations laid by traditional machine learning, deep learning employs neural networks that have multiple layers — often referred to as deep neural networks — to achieve unprecedented levels of accuracy in tasks such as image classification, speech recognition, and natural language processing.

What sets deep learning apart from earlier AI technologies is the advancement in computational power, the availability of massive data sets, and the use of intricate algorithms that optimize neural networks with more than just a few layers. This multi-layered architecture enables the system to model complex relationships in the data, leading to remarkably precise results.

Deep learning–enabled systems

Stand as the engine powering an extensive range of AI applications in use today.

Revolutionize automation by enabling systems to perform complex analytical and predictive tasks with little or no human intervention. Whether you use digital voice assistants such as Siri or Alexa, voice-activated TV remotes, or advanced driver-assistance systems in modern automobiles, deep learning acts as the key technology underpinning many of these innovations.

Promise to offer even more cross-domain intelligence in their next generation. These future systems will likely require less data for effective learning, operate more efficiently on increasingly sophisticated processors, and employ even more advanced algorithms. People developing AI technologies want to bring artificial intelligence closer to mimicking the complexities and capabilities of the human brain.

Although scientists and programmers may still be decades away from achieving artificial general intelligence — a state where AI possesses reasoning, learning, and common sense akin to human cognition — deep learning undeniably serves as a significant step toward that lofty goal.

Demonstrating intuition in the age of AI

The Turing test raised the seminal question, “Can machines think?” People began to ponder whether humans could distinguish between a machine and a human during a text-based interaction. (See the section “Alan Turing and Machine Intelligence,” earlier in this chapter, for info about the Turing test.) This question appeared to find a definitive answer in the groundbreaking 2016 victory of AlphaGo over Lee Sedol in a game of Go.

AlphaGo was the brainchild of DeepMind, a British AI company that Google later bought. Unlike conventional AI programs, AlphaGo was purpose-built to master the game of Go, an ancient board game that boasts a complexity far surpassing that of chess. Although the game has simple rules, the sheer number of possible moves adds astronomical complexity. Top Go players — such as Lee Sedol, a leading figure in the world of Go — are revered for their intuition, creativity, and analytical skills.

In preparation for its 2016 face-off with Lee Sedol, AlphaGo underwent rigorous training, using a combination of machine learning methodologies, including deep learning, along with other algorithms such as the probability-based Monte Carlo tree search. The program analyzed thousands of historical Go matches and, perhaps more impressively, honed its skills by playing countless matches against itself. This self-play allowed AlphaGo to simulate various strategies and tactics, thereby enhancing its own game-playing capabilities.

When AlphaGo beat Lee Sedol in a five-game series, the global AI community sat up and took notice of two startling realizations:

The unexpected display of AI ingenuity: AlphaGo’s ability to make apparently creative and intuitive strategic choices — qualities that many assumed were the exclusive domain of human cognition. Sergey Brin of Google — whose company acquired DeepMind — was in Seoul for the third game and said, “When you watch really great Go players play, it is like a thing of beauty. So I am very excited that we have been able to instill that kind of beauty in our computers.”

The profound capabilities and future potential of AI: AlphaGo’s win provided more than just a technological milestone; it created a paradigm shift that raised awareness among leaders across various sectors — from scientists and politicians to business leaders and the general public.

This historic event, in which AlphaGo beat a consummate human Go player, served as an irrefutable testament to the advancements in deep learning, indicating that AI can indeed perform tasks that many people previously thought only human intelligence could do.

Creating content with generative AI

Advancements in AI after 2010 saw dramatic innovation, particularly in the development of generative models (which can generate new synthetic data such as text or images). And by the 2020s, generative models found applications in a variety of fields ranging from art and entertainment to scientific research and drug discovery. Two specific developments provided the necessary foundation for generative models’ advancement:

Generative adversarial networks (GANs): Introduced by computer research scientist Ian Goodfellow and his colleagues in 2014, GANs had the capability to generate incredibly realistic images, text, and other types of data. This significant leap forward provided a robust framework for generating intricate, high-quality digital assets. Subsequent advancements in GANs led to models such as StyleGAN, which can generate high-resolution, highly realistic images.

The Transformer architecture: Initially designed for natural language processing, the Transformer architecture was adapted for generative tasks later in the same decade. This adaptation culminated in models such as OpenAI’s GPT series, which can generate human-like text.

I cover generative AI models extensively in Chapter 8.

Chapter 2

Exploring AI Business Use Cases

IN THIS CHAPTER

Serving customers with AI applications

Assessing and validating products and technologies

Enhancing innovation in research and development

Personalizing and managing sales and marketing efforts

Analyzing and streamlining legal tasks

Since the 1980s, artificial intelligence (AI) has been steadily permeating organizations, and each advancement in AI innovation opens up new possibilities for its application in business. Given that technology is the backbone of a multitude of business processes (from accounting to inventory management), every advancement in AI implies that people can automate more processes, or they can enrich existing automated processes in more inventive ways.

Generative AI — a branch of artificial intelligence focused on creating new content (such as text, images, and music) by learning from existing data — stands out as the most significant development in the history of AI. It gives rise to a plethora of business applications, many of which no one could even imagine until recently. These applications, including drug discovery and video creation, span all operations and functions within organizations. The more sophisticated ones (for example, finding cures for specific disease states) necessitate deep, industry-specific knowledge and access to unique data sets. Indeed, the uniqueness and completeness of the data set used to train an AI tool allow the AI to perform increasingly intricate tasks.

A recent study by McKinsey & Company (a global management consulting firm) highlighted the potential of generative AI to enhance workplace productivity and inject trillions of dollars in value into the global economy. Specifically, the study estimated that generative AI could annually contribute between $2.6 trillion and $4.4 trillion in value across 63 distinct use cases (specific, narrow business applications or implementations of the technology within a company). To contextualize these figures, they’re roughly equivalent to the entire GDP of the United Kingdom in 2021.

This study doesn’t even account for the additional productivity benefits gained by integrating generative AI into existing software for tasks beyond those 63 defined use cases. Notably, the study indicated that three-quarters of the value that generative AI can create would come from customer operations, marketing and sales, software development, and research and development. You can read this McKinsey study yourself online. Just go to www.mckinsey.com, select the Search icon, enter “economic potential of generative ai” in the text box, and click the Search icon again. Select the first search result to access the study.

In this chapter, you can find a pragmatic approach to AI use that examines various functions in a business — from marketing and sales to product development and legal departments — to help you determine which major activities or workflows you can enhance, or even replace, with AI. Because an abundance of generative AI solutions are on the horizon at the time of writing (thanks to the groundbreaking work of OpenAI, Google, Anthropic, and others), this chapter examines the most common use cases by business function, offering insights into the decisions that managers should contemplate when they incorporate more AI into their initiatives.

Automating Customer Service

A logical starting point for using AI in business involves the customer service department, given its extensive history of leveraging technology to automate numerous tasks, all aimed at reducing costs and enhancing business efficiencies. Because customer service typically comes at a significant expense for companies (especially in a post-COVID era, when customer expectations have only increased), it’s a natural place to identify opportunities for efficiencies. According to a Harvard Business Review article published in August of 2023, AI-driven automations in customer service, such as personalized recommendations and improved support, ultimately lead to increased customer satisfaction and loyalty.

Serving customers by using chatbots

One of the most fundamental use cases for the application of artificial intelligence in customer service falls into the realm of chatbots, which are applications designed to simulate human-like conversations based on user input. This technology proliferated across companies in the last decade as a way to handle real-time customer service tasks (such as changing flight bookings) more efficiently and cost-effectively than by having a human being available, whether on the phone, at a desk, or through a computer.

Historically, technology limitations resulted in most of the chatbot and virtual assistant use cases being rather rudimentary. Companies and systems integrators (technology consultants) built these chatbots as rule-based decision trees: A live chatbot asked customers a series of questions, and those customers could choose from options in a predefined list of responses. Based on their specific response, the chatbot would ask another question that had another set of predefined responses; or if the customer reached the end of a decision tree without their query being resolved, the chatbot would point them to a web page that may have the answer to their original query.
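A rule-based decision tree of the kind described above amounts to little more than a nested menu. The Python sketch below makes that concrete; the airline-style options and answers are invented for illustration:

```python
# A rule-based chatbot as a nested dictionary: each node maps a
# predefined customer choice to another node or to a canned answer.
TREE = {
    "prompt": "What do you need help with? (flights/baggage)",
    "flights": {
        "prompt": "Change or cancel? (change/cancel)",
        "change": "Visit the Manage Booking page to change your flight.",
        "cancel": "Visit the Manage Booking page to cancel your flight.",
    },
    "baggage": "See our baggage policy page.",
}

def answer(choices):
    node = TREE
    for choice in choices:
        node = node.get(choice, "Sorry, please see our FAQ page.")
        if isinstance(node, str):  # reached a canned answer (or a dead end)
            return node
    return node["prompt"]  # still mid-tree: ask the next question

print(answer(["flights", "cancel"]))
```

The brittleness is visible in the sketch: any question that doesn’t match a predefined branch dead-ends at the generic fallback, which is exactly the experience that frustrated customers of early chatbots.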

Generative AI has changed this structure by allowing customers to write their questions in their own words. In turn, natural language processing (NLP) enables the chatbot to read the question, analyze the query, and then identify the intent and associated entities (articles, web pages, and so on) that it can use to provide a useful response in natural language. These more advanced chatbots use technologies such as OpenAI’s ChatGPT plug-ins to analyze and understand the conversation that a human being is having with it. The chatbot then responds by drawing insights from the company’s internal customer service database and presenting them in everyday conversational English.
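To see what “identifying the intent” of a free-text question means in the simplest possible terms, here is a toy Python sketch. Real generative chatbots use large language models rather than keyword overlap, and the intents, keywords, and answers below are invented for illustration:

```python
# Toy intent matcher: score each known intent by keyword overlap with
# the customer's free-text question, then answer from a small "database".
INTENTS = {
    "change-booking": {"change", "reschedule", "move", "flight"},
    "baggage-policy": {"bag", "baggage", "luggage", "allowance"},
}
ANSWERS = {
    "change-booking": "You can change your flight on the Manage Booking page.",
    "baggage-policy": "Each passenger may check one bag up to 23 kg.",
}

def reply(question):
    words = set(question.lower().replace("?", "").split())
    # Pick the intent sharing the most keywords with the question.
    intent = max(INTENTS, key=lambda i: len(INTENTS[i] & words))
    if not INTENTS[intent] & words:
        return "Sorry, I couldn't find an answer to that."
    return ANSWERS[intent]

print(reply("How do I change my flight?"))
```

Unlike the decision-tree approach, the customer here phrases the question freely; the system’s job is to map that phrasing onto an intent it knows how to answer, which is the same job an NLP-powered chatbot does with far more sophistication.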

FINE-TUNING CUSTOMER INTERACTIONS WITH GENERATIVE AI CHATBOTS