An essential resource on artificial intelligence ethics for business leaders
In Trustworthy AI, award-winning executive Beena Ammanath offers a practical approach for enterprise leaders to manage business risk in a world where AI is everywhere, by understanding the qualities of trustworthy AI and the essential considerations for its ethical use within the organization and in the marketplace. The author draws on her extensive experience in data, analytics, and AI across different industries and sectors, the latest research and case studies, and the pressing questions and concerns business leaders have about the ethics of AI.
Filled with deep insights and actionable steps for enabling trust across the entire AI lifecycle, the book examines the qualities of trustworthy AI, including fairness, robustness, transparency, explainability, security, safety, privacy, accountability, and responsibility.
Written to inform executives, managers, and other business leaders, Trustworthy AI breaks new ground as an essential resource for all organizations using AI.
Page count: 263
Publication year: 2022
Cover
Title Page
Copyright
Foreword
Preface
Acknowledgments
Introduction
Chapter 1: A Primer on Modern AI
The Road to Machine Intelligence
Basic Terminology in AI
Types of AI Models and Use Cases
New Challenges for the Modern AI Era
Notes
Chapter 2: Fair and Impartial
A Longstanding Ethical Question
The Nature of Bias in AI
Tradeoffs in Fairness
Leading Practices in Promoting Fairness
Toward a Fairer Future in AI
Notes
Chapter 3: Robust and Reliable
Robust vs Brittle AI
Developing Reliable AI
The Challenge of Generalizable Deep Learning
Factors Influencing AI Reliability
Robustness and Bad Actors
Consequences Worth Contemplating
Leading Practices in Building Robust and Reliable AI
Driving Toward Robust and Reliable Tools
Notes
Chapter 4: Transparent
Defining the Nature of Transparency in AI
The Limits of Transparency
Weighing the Impact on the Stakeholders
Taking Steps into Transparency
Trust from Transparency
Notes
Chapter 5: Explainable
The Components of Understanding AI Function
The Value in Explainable AI
Factors in Explainability
Technical Approaches to Fostering Explainability
Leading Practices in Process
The Explainable Imperative
Notes
Chapter 6: Secure
What Does AI Compromise Look Like?
How Unsecure AI Can Be Exploited
The Consequences from Compromised AI
Leading Practices for Shoring‐Up AI Security
Securing the Future with AI
Notes
Chapter 7: Safe
Understanding Safety and Harm in AI
Aligning Human Values and AI Objectives
Technical Safety Leading Practices
Seeking a Safer Future with AI
Notes
Chapter 8: Privacy
Consent, Control, Access, and Privacy
The Friction Between AI Power and Privacy
Beyond Anonymization or Pseudonymization
Privacy Laws and Regulations
Leading Practices in Data and AI Privacy
The Nexus of AI Trust and Privacy
Notes
Chapter 9: Accountable
Accountable for What and to Whom?
Balancing Innovation and Accountability
Laws, Lawsuits, and Liability
Leading Practices in Accountable AI
Accounting for Trust in AI
Notes
Chapter 10: Responsible
Corporate Responsibility in the AI Era
Motivating Responsible AI Use
Balancing Good, Better, and Best
Leading Practices in the Responsible Use of AI
Trust Emerging from Responsibility
Notes
Chapter 11: Trustworthy AI in Practice
Step 1 – Identify the Relevant Dimensions of Trust
Step 2 – Cultivating Trust Through People, Processes, and Technologies
Guidelines for Action on Trustworthy AI
Taking the Next Steps
Note
Chapter 12: Looking Forward
Note
Index
End User License Agreement
Beena Ammanath
Copyright © 2022 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per‐copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750‐8400, fax (978) 750‐4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748‐6011, fax (201) 748‐6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762‐2974, outside the United States at (317) 572‐3993 or fax (317) 572‐4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging‐in‐Publication Data is Available:
ISBN: 9781119867920 (Hardback)
ISBN: 9781119867968 (ePDF)
ISBN: 9781119867951 (epub)
Cover Design: Wiley
Cover Image: Sunrise © titoOnz/Getty Images
There are two kinds of organizations: those that are fueled by artificial intelligence and those that will become fueled by AI. Eventually, all organizations, public and private, will be AI organizations. It is a twenty‐first‐century fact that efficiency, agility, competitiveness, and growth hinge on the successful use of AI. This is all to the good, and the value is measured in trillions of dollars.
The potential, however, comes with a caveat. We cannot create world‐changing solutions and use AI for all the beneficial purposes we can imagine unless this technology rests on a firm ethical foundation. What we do today to align AI function and use with human values sets the trajectory for this transformative technology for decades to come.
I have had wonderful opportunities to speak with government and business leaders from countries around the world. Questions about AI ethics and trust are being debated in boardrooms, business halls, legislative chambers, and the public square. What I've learned is that, on the one hand, there is a growing awareness of how ethics and trust affect the use of AI. On the other hand, this awareness is causing some wariness about whether AI should be used at all. This speaks to valid concerns about the technical function and social impact of AI, but it also reveals a more fundamental need on the part of organizations deploying AI.
People want to trust AI, if only they could.
The good news is we can get there, and it will take important decisions and actions to set not just individual businesses but the entire world on the path to trustworthy AI.
More and more, leaders are realizing that we, as a global population, must act purposefully on AI ethics – and we must do so now. If we do, we can minimize the risks, maximize trust in the technology, and build toward the brightest future AI can facilitate.
Some might speculate that prioritizing ethics and trust could impede innovation at the moment when the power of AI is finally brought to fruition. I submit that the reverse is true. The most powerful and useful AI innovations are those that align with our ethics and values. What we do not trust, we will not use. We need to be able to trust cognitive tools to move forward with AI.
While the imperative is clear, the tactics and knowledge to address trust are somewhat less so. What ethics are relevant for AI use? What does it mean to trust a machine intelligence? Where are the hurdles, the pitfalls, and the great opportunities to build ethics into novel AI? The chorus of voices asking these questions and others like them is growing louder, and the refrain is the same: How do we get to a place where our AI tools are simultaneously powerful, valuable, and trustworthy?
For better or worse, there is not just one answer to that question. There are many possible answers. Every organization operates within a society, and communities, nations, and regions can have very different views and laws on morality, ethics, and standards for technology application. What is considered trustworthy in one place may not hold in another. The priority ethics in one industry may be secondary or tertiary matters in a different field. No two businesses are the same and so no two frameworks for AI ethics and trust will be identical.
Fortunately, however, we do not need to determine universal rules for trustworthy AI. What we do need is a clear and precise framework that lays out key questions and priorities, defines essential terms and concepts, sets waypoints and guideposts throughout the AI lifecycle, and orients the business strategy and the workforce toward supporting ethical AI.
This book is an asset for your efforts. Its author, Beena Ammanath, is a consummate expert on AI. An AI optimist, she shares lessons and insights born of her rich experience as a technologist and an executive working across numerous industries. In this book, Beena helps you cut through the noise and discover the knowledge and practical steps your organization needs for a trustworthy future with AI.
In the pages that follow, you will discover the many dimensions of AI ethics and trust, the questions they prompt, the priorities for the organization, and some of the best practices that contribute to trustworthy AI governance. One lesson that permeates this thoughtful investigation of trust is that enterprise leaders should resist the clarion call of “everyone uses AI, so must you.” Fools rush in. Instead, AI application requires a close consideration of trust, with questions focused not just on whether the organization can use AI but whether it should, and if it does, how it can do so ethically.
As you read this book, consider how the lessons and insights can help your enterprise develop a plan for using AI. Think through goals, use cases, application, management, training, and policies and how the dimensions of trust influence and are influenced by these qualities. This book helps you conceive of a strong ethical foundation for AI and identify the plans and processes that engender confidence and trust in these powerful tools.
The capabilities and application of intelligent machines are evolving fast – even faster than most might realize. To capture the most benefit and mitigate the most challenges, we must take up the banner of trustworthy AI and carry it with us as we charge ahead into this bold new era with AI.
Kay Firth‐Butterfield
Head of Artificial Intelligence and Machine Learning
World Economic Forum
There are clear turning points in human history, and we are in the midst of one. Artificial intelligence is changing the world before our eyes at a breathtaking pace. There is no segment of society or slice of the market that will go untouched by AI, and this transformation has the potential to deliver the most positive impact of any technology we have yet devised. This is cause for optimism and excitement. The era of AI has arrived.
Today, we can predict jet engine failure to improve safety and prevent disruption. In medicine, we can detect diseases earlier and increase the chances of patient recovery. Autonomous transportation is evolving across land, sea, air, and outer space. And every aspect of doing business is gaining valuable, powerful new solutions. Faster customer service, real‐time planning adjustments, supply chain efficiency, even AI innovation itself – all have radically changed and improved with the cognitive tools now deployed at scale.
There has arguably never been a more exciting time in AI. Alongside the arrival of so much promise and potential, however, the attention placed on AI ethics has been relatively slight. What passes for public scrutiny is too often just seductive, clickbait headlines that fret over AI bias and point to a discrete use case. There's a lot of noise on AI ethics and trust, and it does not move us closer to clarity or consensus on how we keep trust in AI commensurate with its power.
Anyone who has worked in an enterprise understands the challenges inherent in integrating new technology. The tech implementation, training, equipment investments, process adjustments – seizing value with technology is no simple matter. How much more challenging then is it to simultaneously drive toward nebulous concepts around ethics and trust?
Yet, the challenge notwithstanding, enterprises do need to contend with these matters. Fortunately, there is every cause for optimism. We are not necessarily late in addressing trust and ethics in AI, but it is time for organizations to get moving. That recognition was the catalyst for this book.
This is not the first time humanity has stood at the doorstep of innovation and been confronted with ethical unknowns. We should have confidence that we can devise methods for aligning technology, ethics, and our need for trust in the tools we use. The solutions are waiting for us to find them.
But there will never be just one solution, no one‐size‐fits‐all answer to the question of trustworthy AI. From an organizational perspective, whether you are developing AI or simply using it, every company has to identify what trustworthy AI means for the enterprise and then design, develop, and deploy to that vision.
When we consider all that AI can do (and will be able to do), it can be hard to temper our enthusiasm. When we think about how poor AI ethics could lead to bad outcomes, it can be difficult to see beyond our concerns. The path forward with AI is between these extremes – working toward the greatest benefit AI can enable while taking great care to ensure the tools we use reflect human values.
One shortcoming of the way AI ethics is frequently debated is that the discussion is seldom pertinent to the priorities of business leaders. We have all read a lot about racist chatbots and highly speculative fears about an imagined general AI. In place of this, we need a rich discussion of AI trust as it relates to business decision making and complex enterprise functions. The AI models used in businesses are far more varied than what is commonly discussed, and there are numerous stakeholders across business units, each of whom has different needs, goals, and concerns around AI.
So that we are not talking in the abstract, let's anchor our reading journey on a company that performs high‐precision manufacturing – this is an imaginary company that exists only in the pages of this book, of course. The enterprise, called BAM Inc., is headquartered in the United States, runs manufacturing plants in three regions and six countries, and does about $4 billion in business annually. Like leaders in large companies in the real world, the executives at BAM Inc. get value from AI but also face uncertainties around trustworthy AI.
Each business unit aspires to greater productivity and success, and as AI tools are deployed, the problems they create require executive leadership to decide how to prevent issues before they occur and how to correct them when they do. By looking through the lenses of business leaders, we can probe the challenging nuances that every organization encounters during its maturation into an AI‐fueled enterprise.
In the investigation of trust and ethics in the following chapters, we use BAM Inc. as a laboratory for exploring the challenges with trustworthy AI in the business environment. As we follow the company's AI saga, remember that the issues the business faces are arising in enterprises around the world. There are all‐too‐real boardroom conversations where leadership is facing an AI challenge but may lack the tools to find solutions.
The solutions are waiting to be discovered, and this book is a companion in the journey to finding them. Whether you are an executive, technologist, ethicist, engineer, user, or indeed anyone who touches the AI lifecycle, the ensuing chapters can equip you with the perspective, questions, and next steps for cultivating your future with trustworthy AI.
This book is the product of decades of professional experience, research, and AI application in a diverse set of industries, and by virtue of that, there are many people to whom I owe gratitude and acknowledgment for their valuable insights and support.
First, I thank Wiley for their readiness to publish this important work and their dedication to seeing it come to fruition.
I also offer my sincere thanks and appreciation to my colleagues, Nitin Mittal, Irfan Saif, Kwasi Mitchell, Dave Couture, Matt David, Costi Perricos, Kate Schmidt, Anuleka Ellan Saroja, David Thogmartin, Jesse Goldhammer, David Thomas, Sanghamitra Pati, Catherine Bannister, Masaya Mori, Gregory Abisror, and Michael Frankel.
I owe great thanks to the insights and discussions with colleagues and friends over the past several years – Lovleen Joshi, Archie Deskus, Dr. Abhay Chopada, Jana Eggers, Prajesh Kumar, Dr. Sara Terheggen, Jim Greene, Colin Parris, Dr. Amy Fleischer, Vince Campisi, Rachel Trombetta, Justin Hienz, Tony Thomas, Deepa Naik, Tarun Rishi, Marcia Morales‐Jaffe, Mike Dulworth, and Jude Schramm – who have helped shape my thinking on the myriad dimensions of technology.
This book and all of my life's work would not have been possible without the love and support of my parents, Kamalam and Kumar, my husband, Nikhil, and my sons, Neil and Sean. It is my hope that Trustworthy AI helps make this world a better place with all the benefits of AI and without many of the negative impacts. This is for my children – and yours, too.
Finally, thank you, reader, for taking this opportunity to explore trustworthy AI and seek ways to contribute to the ethical development of this powerful technology. We all share the responsibility to create and use AI for the greatest global benefit. Thank you for joining this important journey to a trustworthy future.
We can only see a short distance ahead, but we can see plenty there that needs to be done.
– Alan Turing
The most significant factor that will impact our future with artificial intelligence is trust.
Our human society depends on trust – trust in one another, in our economic and governmental systems, and in the products and services we purchase and use. Without trust, we are plagued with uncertainty, suspicion, reticence, and even fear. Trust can be fragile, and once broken, nearly impossible to repair. And all the while, trust receives only passing notice as a vital part of our lives. We take it for granted and presume it is present, until it isn't.
This vital and uniquely human need to trust is today colliding with the powerful forces of technology innovation. For decades, AI existed primarily in research labs, developed for novel experiments that incrementally moved the field forward. This has changed. Advancing at a near-exponential clip, AI tools are being developed and deployed in huge numbers. These technologies touch nearly every part of our connected lives.
We are commonly aware of the AI behind self‐driving cars and chatbots that mimic human speech, but AI is much more pervasive. It fuels machine automation and predictive analysis across a range of functions. It powers back office operations and customer‐facing communication. It leads to new products and services, new business models, and truly new ways of working and living. In short, the age of AI has arrived, and every citizen and organization must contend with what that means for our trust in the tools we use.
It is a question not just of what can be done with AI but how it should be done – or whether it should be done at all. Fundamentally, because AI has been developed to this level of maturity, we must now grapple with some of the more philosophical considerations around AI.
What does it mean for AI use to be ethical? How do we know if we can trust the AI tools we use? How do we know if we should?
It may be simultaneously intimidating and motivating that these questions, for the most part, have not been answered. Indeed, it is the absence of answers that is driving increasing focus among data scientists, business leaders, and government authorities alike on how we ensure this new era with AI is one we can trust and one we want.
History will record the early twenty‐first century as a watershed moment in human civilization. Few inventions even come close to the potential of AI, though there are standout examples, such as the printing press, the internal combustion engine, and the CPU. AI is already yielding social and economic changes on par with (if not exceeding) the transformative impact of those innovations. It is impossible to overstate just how much AI is changing everything, and for that reason, we are obligated to think through the ethics of how these tools should be developed and used.
There is no corpus of literature and scholarship that defines ethics and trust in AI to a granular degree. There is no checklist that, if satisfied, yields trustworthy AI. We are not there yet. On the road to universal agreement on the qualities of ethical AI that can be trusted, those using AI have an important role to play. A data science background is not a prerequisite for participating in this journey. To the contrary, the rules, leading practices, considerations, and philosophies that can support the next decades with AI, as it fully matures into ever more powerful technologies, require the input of people from all fields and all walks of life.
Most simply, if an organization is using AI, everyone in that organization is already participating in shaping the AI era. And given that, there is a moral (if not strategic) imperative to equip people with the knowledge, processes, and teamwork they need to contribute meaningfully and responsibly to how the use of AI continues to unfold. Such is the purpose of this book.
The effort and attention required to ensure AI is trustworthy is significant, even daunting, but this is not the first time humanity has stood at the doorstep of technological revolution. History has lessons for us. Take the birth and mass adoption of personal automobiles. When gas‐powered cars were first unveiled and public adoption grew fast, there were few of the rules, technologies, and standards that we have today.
When the first Model Ts rolled out of the factory, there were no speed limits or automated lights at intersections. There were no standard signs to guide traffic. Pedestrians had to learn to watch out for the nearly one‐ton machines rattling down the road. Drivers had to learn how to safely operate the car. Legislators had to pass new laws governing the use of personal cars, and courts were pressed to hear cases where there was no precedent. Indeed, the consumer car became commonplace and because of that, a sociotechnical system evolved around it to govern how the world used this transformative technology.
To continue the analogy, AI is rolling out of the factory en masse, and we have few speed limits or seatbelts. These tools are powerful, but their power is meager relative to what is coming, which points to our current challenge. We must dedicate our efforts to developing the commensurate traffic lights, speed limits, and consumer regulations that can nurture a sociotechnical system that guides AI to its greatest, most trustworthy potential. How?
We can conceive of this task along three streams: research, application, and trust and ethics. Research is the province of data science and AI engineering. Visionaries and innovators can be found not just in academic labs but increasingly in private enterprise, where they push the envelope of AI capabilities. This stream has characterized much of the history of AI to date.
For several decades and accelerating, we have seen AI application in growing volume. This is more than the automation of repetitive tasks. AI can find patterns in vast datasets, it can accurately predict real‐world conditions before they arrive, and it can engage with humans across all aspects of their lives. It is impacting every industry. Innovation is the byword of the day, and we are right to be excited. This is a fascinating time to be alive, to see such a technology and its impact become manifest.
The potential raises the increasingly important third stream – determining how to use this technology in an ethical way such that we can trust it. There is a growing consensus throughout the AI community that we must meet this challenge and do so now, when modern AI is in its relative infancy. That task falls on every organization using AI, and the onus for action falls first on the leaders of these organizations. The sociotechnical system that will dictate how AI is used for years to come is being built today by enterprise leaders, regardless of whether they realize it.
Solving for trust in AI is not just a virtuous endeavor that is now necessary. In business, it has a real impact on the bottom line, as well as on how customers view and engage with the organization. The trust we place in a company is an extension of our trust in how it operates, and that includes the tools it employs. There is no shortage of stories (some humorous, others troubling) covered in the popular press about an AI with unintended outcomes. The more AI is deployed, however, and the more powerful it becomes, the more these stories will register with the broader public. Concerns will likely deepen, and an enterprise is well served by considering today how its AI endeavors will be guided such that they are deserving of customer trust.
To get there, we must go deeper than mere handwringing over nebulous ethical ideas. We need to get into the weeds of the components of trustworthy technology. And then, we must contemplate which elements of trust are most important in AI for our respective purposes.
A credit scoring AI tool that yields biased outputs is undeserving of trust, but fairness as an ethical concept does not apply to all AI tools. If a cognitive machine is trained to process invoices and remit payment, fairness and bias are irrelevant to the AI's proper function. Likewise, a credit scoring tool presents no real threat to personal safety, whereas the actions of a fast‐moving robot arm in a factory are vitally important for safety. This is the landscape of trustworthy AI, and every organization must navigate it according to their needs and applications.
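To make the fairness dimension concrete, here is a minimal illustrative sketch of one common bias check, comparing approval rates across demographic groups (a demographic parity check). This example is not drawn from the book; the decisions, groups, and the 10-percentage-point tolerance are invented assumptions for illustration only.

```python
# Minimal illustrative sketch of a demographic parity check for a
# hypothetical credit scoring model. All data and thresholds here are
# invented; real fairness reviews use richer metrics and real outcomes.

def approval_rate(decisions):
    """Return the fraction of applicants approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75.0% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

gap = abs(approval_rate(group_a) - approval_rate(group_b))
print(f"Approval-rate gap between groups: {gap:.2%}")

# Assumed tolerance: flag the model for human review if the gap
# exceeds 10 percentage points.
if gap > 0.10:
    print("Potential disparate impact - escalate for review.")
```

By contrast, running such a check on the invoice‐processing machine described above would be meaningless, which is precisely the point: each dimension of trust must be matched to the use case at hand.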
Thus, a point‐for‐point how‐to on trustworthy AI is the red herring of AI application. Every business is entering this age with its own goals, strategies, technical capabilities, and risk tolerance. Legislators and regulators are increasingly wading into these ethical waters. And consumers are only now beginning to appreciate just how embedded AI is throughout every aspect of daily life. Every stakeholder on this frontier is charting a path to a bright horizon while attempting to anticipate the challenges in the way.
We can be sure that there will be missteps and blind spots. The ethical use of technology, like innovation itself, is not a straight line. Without serious consideration of all the potential outcomes, hubris and shortsightedness can be ingredients for unintended consequences. In 1922, Henry Ford wrote:
We are entering an era when we shall create resources which shall be so constantly renewed that the only loss will be not to use them. There will be such a plenteous supply of heat, light and power, that it will be a sin not to use all we want.2
Now, a century later, such a statement is sobering in its error. It points to the question, what will be the outcomes from AI in the decades to come? Should we sprint into this future with the sentiment of using “all we want,” or are we better served by dedicating effort now to guiding the trustworthiness of a technology we already know will change the world? As it is said, an ounce of prevention is worth a pound of cure. We have today an essential opportunity not just to prevent harm but to extract the greatest possible good from AI.
We have a grand opportunity to seize the moment and work toward the sociotechnical system that makes trustworthy AI possible. Given that so much of AI is developed and deployed by private enterprise, those who can effect the most positive impact are today's business leaders. From the boardroom of Fortune 100 companies to the local office of a small business, much of the future of AI rests in the capable hands of the private sector.
