Description

Responsible AI in the Enterprise is a comprehensive guide to implementing ethical, transparent, and compliant AI systems in an organization. With a focus on understanding key concepts of machine learning models, this book equips you with techniques and algorithms to tackle complex issues such as bias, fairness, and model governance.
Throughout the book, you’ll gain an understanding of Fairlearn and InterpretML, along with the Google What-If Tool, ML Fairness Gym, IBM’s AI Fairness 360 toolkit, and Aequitas. You’ll uncover various aspects of responsible AI, including model interpretability, monitoring and management of model drift, and compliance recommendations. You’ll gain practical insights into using AI governance tools to ensure fairness, bias mitigation, explainability, and privacy compliance in an enterprise setting. Additionally, you’ll explore interpretability toolkits and fairness measures offered by major cloud AI providers such as IBM, Amazon, Google, and Microsoft, while discovering how to use Fairlearn for fairness assessment and bias mitigation. You’ll also learn to build explainable models using global and local feature summaries, local surrogate models, Shapley values, anchors, and counterfactual explanations.
By the end of this book, you’ll be well-equipped with tools and techniques to create transparent and accountable machine learning models.




Responsible AI in the Enterprise

Practical AI risk management for explainable, auditable, and safe models with hyperscalers and Azure OpenAI

Adnan Masood, PhD

Heather Dawe, MSc

BIRMINGHAM—MUMBAI

Responsible AI in the Enterprise

Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the authors, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Publishing Product Manager: Ali Abidi

Book Project Manager: Kirti Pisat

Content Development Editor: Joseph Sunil

Technical Editor: Kavyashree K S

Copy Editor: Safis Editing

Proofreader: Safis Editing

Indexer: Pratik Shirodkar

Production Designer: Prashant Ghare

DevRel Marketing Coordinator: Vinishka Kalra

First published: July 2023

Production reference: 2050923

Published by Packt Publishing Ltd.

Grosvenor House

11 St Paul’s Square

Birmingham

B3 1RB, UK.

ISBN 978-1-80323-052-8

www.packtpub.com

We would like to acknowledge the contributions of all the people who have dedicated their careers to advancing the field of AI and its responsible development. Their work continues to inspire us and provides a foundation for the discussions and recommendations in this book. We would like to extend our sincere gratitude to the following individuals for their invaluable contributions to the ethical AI space and for inspiring us to create this book: Cathy O’Neil, Timnit Gebru, Marzyeh Ghassemi, Joy Buolamwini, Anima Anandkumar, Cynthia Dwork, Margaret Mitchell, Kate Crawford, Rumman Chowdhury, Stephanie Tuszynski, Andrew Ng, Yoshua Bengio, Stuart Russell, Joanna Bryson, Zeynep Tufekci, Francesca Rossi, Virginia Dignum, Ayanna Howard, Thomas Dietterich, Solon Barocas, Arvind Narayanan, Patrick Hall, Navrina Singh, Rayid Ghani, and many others. Their tireless efforts and groundbreaking research have driven the development of ethical AI practices and governance frameworks. It is our hope that this book will contribute to their ongoing efforts and help promote responsible and accountable AI development.

We would like to thank the reviewers who provided valuable feedback on the advance draft of this book. Their insights and suggestions have helped us to improve the quality and clarity of the book. We are especially grateful to Dr. Tatsu Hashimoto, Dr. Alla Abdella, Geethi Nair, Dr. Shani Shalgi, David Lazar, Omar Siddiqi, Akhil Seth, and many others who took the time to read and provide insightful comments. We would also like to thank the organizations and individuals who have provided support and resources for the research and writing of this book. Their contributions have been invaluable in helping us to understand the complex and rapidly evolving field of AI ethics.

Last but not least, our heartfelt gratitude to Ali Abidi, David Sugarman, Joseph Sunil, Kirti Pisat, and the amazing Packt team who supported us throughout this endeavor.

Foreword

I know what you’re thinking: “This book is full of quotes about AI. The last thing we need is a foreword full of even more quotes about AI!”

You know, I couldn’t agree more. But that’s what you’re going to get! Life is full of disappointments, so I might as well help with that. You know, so that you get used to it.

“I am telling you, the world’s first trillionaires are going to come from somebody who masters AI and all its derivatives and applies it in ways we never thought of.” — Mark Cuban

Let’s assume Mark knows what he’s talking about. He made Shark Tank famous; he owns the Dallas Mavericks, a production company, a lot of tiny companies that people pitch him on television, and an entire town in Texas. Plus, he’s worth over 4 billion dollars. (Oh, and he got RKO’d by Randy Orton on WWE.)

So maybe he’s smart enough to be right, and he knows that the future trillionaire is going to be in the AI business. What if the first major founder of our future AI is already the richest man in the world? Well, he won’t be. But it seemed possible for a few years...

In 1995, Elon Musk co-founded Zip2 (the first zip worked fine for me). In 1999, he co-founded X.com, which merged with Confinity in 2000 to become PayPal (things move fast in Musk-time) and got bought by eBay in 2002 (Musk was the largest shareholder and made the most from the sale). Musk started SpaceX in 2002 (their Starlink satellites came later, in 2019), co-founded Tesla in 2004(ish), Neuralink in 2016 (robot brains!), and The Boring Company in 2017. And he bought Twitter in 2022.

But what about AI? Musk started by investing in Vicarious and DeepMind. Meanwhile, Google wanted to differentiate in the Cloud business (it launched GCP in 2008) by being the best in AI. Both of those AI companies got acquired: Google bought DeepMind in 2014 (now folded into Google DeepMind), and Alphabet’s Intrinsic later acquired Vicarious for robotics/AI in 2022. Meanwhile, in 2015 (after selling his stake in DeepMind to Google), Musk co-founded OpenAI (which led to ChatGPT and Microsoft’s New Bing). He eventually left the OpenAI board to focus on businesses that made money (like Tesla’s AI for cars and robotics). So, while OpenAI (which, despite its name, is no longer open source) and big companies (like Google, Microsoft, and Meta) do the big company thing with their AI business opportunities, Musk and others still left the door wide open for Mark Cuban’s smart AI entrepreneur to walk through and claim the crown of being the first trillionaire.

Okay. Where does that leave Musk (other than applying what he’s learned to his money-making businesses)? It leaves him very concerned. He saw something with DeepMind, Vicarious, OpenAI, and his Tesla AI endeavors that he simply can’t unsee. (Also, SpaceX uses AI for its satellites, and Neuralink seeks to bridge human intelligence and artificial intelligence.)

That concern is what we’re starting to see with our online stories of how corrupt, evil, or just plain wrong AI can be, whether we’re poking fun at ChatGPT, Bard, New Bing, or other AI attempts. There’s a danger there, and in our humanity, people (often journalists, which is not a coincidence) are determined to uncover that danger. (By the way, GPT stands for Generative Pre-trained Transformer. Don’t ask.)

And that brings us to the first of many quotes from Mr. Musk (yes, I’ve been building up to this for a reason). Just consider yourself forewarned (that might be a foreword joke; I’m starting a new joke genre).

“AI will be the best or worst thing ever for humanity.” — Elon Musk

Here we have a very concerned individual. Musk has embraced AI and contributed to it far beyond what most (almost all) of us will ever be capable of. Yet he fears AI.

“Robots will be able to do everything better than us...I am not sure exactly what to do about this. This is really the scariest problem to me.” — Elon Musk

It really scares him. I’ve got an idea of what you can do about this, Elon: stop making Tesla robots and cars with your AI built into them! And maybe stop putting AI in people’s brains. It’s just a thought. Especially since you think we might recreate the Terminator franchise in our real life (thanks go out to James Cameron for all those nightmares and warnings, by the way):

“In the movie Terminator they didn’t expect some sort of Terminator-like outcome.” — Elon Musk

That’s right, Elon. Cyberdyne (the fictional representation of dozens of our actual companies) thought they were helping people with prosthetics and AI, but when their Skynet AI became self-aware, humans tried to destroy Skynet, and so it retaliated against the humans (and I won’t even mention the Matrix).

(Oh, wait, I just mentioned the Matrix. Moving on...)

But Elon, how could this happen?

“If AI has a goal and humanity just happens to be in the way, it will destroy humanity as a matter of course without even thinking about it...It’s just like, if we’re building a road and an anthill just happens to be in the way, we don’t hate ants, we’re just building a road.” — Elon Musk

That’s true. We don’t even think about the ants. But the ants still die, regardless of our intentions.

How serious is Musk about this?

“If you’re not concerned about AI safety, you should be. Vastly more risk(y) than [another country].” — Elon Musk

Part of Musk’s fear is that people get replaced and things change, but with AI, that paradigm shifts...

“The least scary future I can think of is one where we have at least democratized AI...[also] when there’s an evil dictator, that human is going to die. But for an AI, there would be no death. It would live forever. And then you’d have an immortal dictator from which we can never escape.” — Elon Musk

And then Musk starts to get to the point...

“Mark my words, AI is far more dangerous than nukes...why do we have no regulatory oversight?” — Elon Musk

That’s what Musk is driving at... what’s the solution? How do we prevent the destruction of mankind?

“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.” — Elon Musk

I’m increasingly inclined to think that you’re just starting to realize that you’re reading a bunch of Elon Musk quotes (which you obviously never signed up for). Meh. I’ve done worse...

Anyway, we’d rather not summon a demon (at least I assume we’re all on the same page on that one). Tell us more about this solution, Elon:

“I am not normally an advocate of regulation and oversight...I think one should generally err on the side of minimizing those things...but this is a case where you have a very serious danger to the public.” — Elon Musk

Okay. That’s the last quote. I promise. Here’s the thing though. In summer 2022, it was Elon Musk who warned his companies that layoffs were coming, and that the economy was heading in a not-so-great direction. Of course, everybody responded like he was crazy and wrong. But then it hit all the big tech companies in late 2022 and going into 2023... they were the biggest tech company layoffs that the world has ever seen. Musk was correct, and he wasn’t afraid to tell everyone that something bad was coming (which he might have contributed to when he bought Twitter and made other interesting life choices).

So, even if you’re not a fan of the Musk, you might want to consider that he could be right about this.

Hopefully, he’s putting some of that oversight and regulation into his own AI endeavors, but without AI democratized and regulated (as he mentioned), AI won’t be consistently safe, even if Musk manages to keep it safe for his companies.

In other words, we’d love for you to be the first trillionaire, which Cuban foresees, but please do so without wiping out mankind (that’s my polite request to you). (Especially while we wait for that systematic oversight and regulation.)

And that brings me to this book! (Oh, you thought I’d never mention it?)

In Responsible AI in the Enterprise, Dr. Adnan Masood (that’s a PhD; he’s not a dentist or anything) goes into practical detail about how you can manage the risk of AI. What is explainable AI (and can it even be explained?), what is ethical AI, and how does AI bias work? You’ll dig into algorithms, how to monitor your operational model, and how to govern it (with compliance standards and recommendations), and you’ll even get your own starter kit to build transparency for your AI solution! He then goes into the toolkits that you’ll find from AWS, Google Cloud, Microsoft Azure, and more. You’ll learn from some real-world case studies!

He explains how to leverage Microsoft Fairlearn, how to do your own fairness assessments (and bias mitigation), and he ends on a fun note with foundational models (and how to look for bias in them). This is an exhaustive book! It’s a great resource.

If you’re even thinking about building an AI solution, please pick up this book (which you’re already reading, so keep reading it), put on your metaphorical bib, and devour this meal. And then put it in the fridge (maybe on a bookshelf) and come back to it for seconds, whenever you’re thinking about leveraging AI. Maybe thirds and fourths too (it’s a lot to consume all at once, just like a Thanksgiving meal).

Bon appétit! (That’s French for “Enjoy your meal!”)

Oh, before I go... I do have one other plug on this topic. This book will open your eyes so wide that you’ll wonder how you never noticed they were closed. Once you’re done (and you’re thinking about these concepts some more), check out some supplementary online content that I helped publish and curate over at Microsoft, https://aka.ms/Responsibility. Follow the left navigation table of contents to read about Responsible Innovation, AI, and the ML context. And with that, I’ll leave you alone. Please read this book, and please don’t accidentally design the future generations of Terminator robots, the Matrix, or Ultron. We’re good. We don’t need dictator robots.

- Ed Price, Manager of Architecture Content at Google Cloud; former Senior Program Manager of architectural publishing at Microsoft Azure; co-author of seven books, including Meg the Mechanical Engineer, Hands-On Cognitive Services (co-written with Dr. Adnan Masood), The Azure Cloud Native Architecture Mapbook (Packt), and ASP.NET Core 5 for Beginners (Packt).

Foreword 2

Responsible AI in the Enterprise offers an extensive overview of responsible AI, providing insights into AI and machine learning model governance. It takes a deep dive into explainable AI, ethical AI, AI bias, model interpretability, model governance, data governance, and AI-based upskilling. The content, delivered in an easy-to-understand style, makes it an invaluable resource for professionals at different levels of expertise dealing with AI. The book takes you on a journey through 10 chapters, each offering in-depth knowledge on a unique aspect of responsible AI. From understanding the basics of ethical AI and interpreting black-box models, through ongoing model validation and monitoring, to understanding governance and compliance standards, the book covers it all. It also offers practical guidelines on implementing AI fairness, trust, and transparency in an enterprise setting and presents an overview of various interpretability toolkits.

In a world where AI and machine learning are transforming our society and businesses, Responsible AI in the Enterprise emerges as an indispensable guide for professionals navigating this landscape. The book masterfully deciphers complex concepts such as explainable AI, model governance, and AI ethics, making them accessible to novices and seasoned practitioners alike. Its emphasis on practical tools, through valuable examples, empowers you to implement responsible AI in your organizations. In the rapidly evolving field of AI, this book stands out with its solid foundation, insightful analysis, and commitment to making AI more ethical, responsible, and accessible. I highly recommend Responsible AI in the Enterprise to any AI enthusiast, data scientist, IT professional, or business stakeholder who seeks a robust understanding of AI and machine learning model governance. Reading this book is not just an investment in your professional development; it’s a step toward a more equitable AI future. I hope it brings you the same pleasure as it did to me.

- Dr. Ehsan Adeli, Stanford Artificial Intelligence Lab

Contributors

About the authors

Adnan Masood, PhD is a visionary leader and practitioner in the field of AI, with over 20 years of experience in financial technology and large-scale systems development. He drives his firm’s digital transformation, machine learning, and AI strategy. Dr. Masood collaborates with renowned institutions such as Stanford AI Lab and MIT CSAIL, holds several patents in AI and machine learning, and is recognized by Microsoft as an AI MVP and Regional Director. In addition to his work in the technology industry, Dr. Masood is a published author, international speaker, STEM robotics coach, and diversity advocate.

Heather Dawe, MSc is a renowned data science and AI thought leader with over 25 years of experience in the field. Heather has innovated with data and AI throughout her career; highlights include building the first data science team in the UK public sector and leading the development of early machine learning and AI assurance processes for the National Health Service (NHS). Heather currently works with large UK enterprises, innovating with data and technology to improve services in the health, local government, retail, manufacturing, and finance sectors. A STEM ambassador and multi-disciplinary data science pioneer, Heather also enjoys mountain running, rock climbing, painting, and writing. She served as a jury member for the 2021 Banff Mountain Book Competition and guest edited the 2022 edition of The Himalayan Journal. Heather is the author of several books inspired by mountains and has written for national and international print publications including The Guardian and Alpinist.

About the reviewers

Jaydip Sen has over 28 years of experience in research, teaching, and industry. Currently, he is a professor of Data Science and Artificial Intelligence at Praxis Business School, Kolkata, India. His research areas include information security and privacy, machine learning, deep learning, and artificial intelligence. He has published over 75 papers in indexed journals, edited 11 volumes, and authored 4 books published by internationally reputed publishers. He has figured among the top 2% of scientists worldwide for four consecutive years (2019-2022), as per studies conducted by Stanford University. He is the editor of Springer’s journal Knowledge Decision Support Systems in Finance and a senior member of IEEE and ACM, USA.

Geethi Gopinathan Nair is an accomplished data scientist with over two decades of experience in the field of information technology, with a specific focus on data science for several years. As a subject matter expert in healthcare, she brings a unique perspective to her reviews. Recognized for her attention to detail and commitment to delivering actionable insights, Geethi Nair has consistently received accolades for her insightful and well-researched reviews. With a passion for staying up-to-date with the latest advancements in data science and artificial intelligence, Geethi Nair is admired for providing valuable recommendations to both technical and non-technical audiences.

Dr. Alla Abdella is an expert in Natural Language Processing (NLP), machine learning, deep learning, and advanced AI technologies such as generative AI, LLMs, synthetic agents, and conversational AI. He excels at translating complex business problems into practical, actionable solutions. With many years of experience in the telecom, healthcare, and communication domains, Dr. Abdella brings a wealth of knowledge to his work. He has numerous journal and conference publications to his name, as well as two filed patents. His experience ranges from his role as Lead Data Scientist at UST to his current position as Chief AI Officer at Yobi.app. Dr. Abdella consistently pushes the boundaries of AI’s potential impact on industry, innovation, and technological progression. In collaboration with a Stanford professor, he has co-created open-domain QA systems and worked to combat algorithmic bias, leaving a distinct mark on the global AI landscape.

Table of Contents

Preface

Part 1: Bigot in the Machine – A Primer

1

Explainable and Ethical AI Primer

The imperative of AI governance

Key terminologies

Explainability

Interpretability

Explicability

Safe and trustworthy

Fairness

Ethics

Transparency

Model governance

Enterprise risk management and governance

Tools for enterprise risk governance

AI risk governance in the enterprise

Perpetuating bias – the network effect

Transparency versus black-box apologetics – advocating for AI explainability

The AI alignment problem

Summary

References and further reading

2

Algorithms Gone Wild

AI in hiring and recruitment

Facial recognition

Bias in large language models (LLMs)

Hidden cost of AI safety – low wages and psychological impact

AI-powered inequity and discrimination

Policing and surveillance

Social media and attention engineering

The environmental impact

Autonomous weapon systems and military

The AIID

Summary

References and further reading

Part 2: Enterprise Risk Observability Model Governance

3

Opening the Algorithmic Black Box

Getting started with interpretable methods

The business case for explainable AI

Taxonomy of ML explainability methods

SHapley Additive exPlanations

How is SHAP different from Shapley values?

A working example of SHAP

Local Interpretable Model-Agnostic Explanations

A working example of LIME

Feature importance

Anchors

PDPs

Counterfactual explanations

Summary

References and further reading

4

Robust ML – Monitoring and Management

An overview of ML attacks and countermeasures

Model and data security

Privacy and compliance

Attack prevention and monitoring

Ethics and responsible AI

The ML life cycle

Adopting an ML life cycle

MLOps and ModelOps

Model drift

Data drift

Concept drift

Monitoring and mitigating drift in ML models

Simple data drift detection using Python data drift detector

Housing price data drift detection using Evidently

Analyzing data drift using Azure ML

Summary

References and further reading

5

Model Governance, Audit, and Compliance

Policies and regulations

United States

European Union

United Kingdom

Singapore

United Arab Emirates

Toronto Declaration – protecting the right to equality in ML

Professional bodies and industry standards

Microsoft’s Responsible AI framework

IEEE Global Initiative for Ethical Considerations in AI and Autonomous Systems

ISO/IEC’s standards for AI

OECD AI Principles

The University of Oxford’s recommendations for AI governance

PwC’s Responsible AI Principles/Toolkit

Alan Turing Institute guide to AI ethics

Technology toolkits

Microsoft Fairlearn

IBM’s AI Explainability 360 open source toolkit

Credo AI Lens toolkit

PiML – the integrated Python toolbox for interpretable ML

FAT Forensics – algorithmic fairness, accountability, and transparency toolbox

Aequitas – the Bias and Fairness Audit Toolkit

AI trust, risk, and security management

Auditing checklists and measures

Datasheets for datasets

Model cards for model reporting

Summary

References and further reading

6

Enterprise Starter Kit for Fairness, Accountability, and Transparency

Getting started with enterprise AI governance

AI STEPS FORWARD – AI governance framework

Implementing AI STEPS FORWARD in an enterprise

The strategic principles of AI STEPS FORWARD

AI STEPS FORWARD in enterprise governance

The AI STEPS FORWARD maturity model

Risk management in AI STEPS FORWARD

Measures and metrics of AI STEPS FORWARD

AI STEPS FORWARD – taxonomy of components

Salient capabilities for AI Governance

The indispensable role of the C-suite in fostering responsible AI adoption

An AI Center of Excellence

The role of internal AI boards in enterprise AI governance

Healthcare systems

Retail and e-commerce systems

Financial services

Predictive analytics and forecasting

Cross-industry applications of AI

Establishing repeatable processes, controls, and assessments for AI systems

Ethical AI upskilling and education

Summary

References and further reading

Part 3: Explainable AI in Action

7

Interpretability Toolkits and Fairness Measures – AWS, GCP, Azure, and AIF 360

Getting started with hyperscaler interpretability toolkits

Google Vertex Explainable AI

Model interpretability in Vertex AI – feature attribution and example-based explanations

Integration with Google Colab and other notebooks

Simplified deployment

Explanations are comprehensive and multimodal

AWS SageMaker Clarify

Azure Machine Learning model interpretability

Azure’s responsible AI offerings

Responsible AI scorecards

Open source offerings – the responsible AI toolbox

Open source toolkits and lenses

IBM AI Fairness 360

Aequitas – Bias and Fairness Audit Toolkit

PETs

Differential privacy

Homomorphic encryption

Secure multiparty computation

Federated learning

Data anonymization

Data perturbation

Summary

References and further reading

8

Fairness in AI Systems with Microsoft Fairlearn

Getting started with fairness

Fairness metrics

Fairness-related harms

Getting started with Fairlearn

Summary

References and further reading

9

Fairness Assessment and Bias Mitigation with Fairlearn and the Responsible AI Toolbox

Fairness metrics

Demographic parity

Equalized odds

Simpson’s paradox and the risks of multiple testing

Bias and disparity mitigation with Fairlearn

Fairness in real-world scenarios

Mitigating correlation-related bias

The Responsible AI Toolbox

The Responsible AI dashboard

Summary

References and further reading

10

Foundational Models and Azure OpenAI

Foundation models

Bias in foundation models

The AI alignment challenge – investigating GPT-4’s power-seeking behavior with ARC

Enterprise use of foundation models and bias remediation

Biases in GPT-3

Azure OpenAI

Access to Azure OpenAI

The Code of Conduct

Azure OpenAI Service content filtering

Use cases and governance

What not to do – limitations and potential risks

Data, privacy, and security for Azure OpenAI Service

AI governance for the enterprise use of Azure OpenAI

Getting started with Azure OpenAI

Consuming the Azure OpenAI GPT-3 model using the API

Azure OpenAI Service models

Code generation models

Embedding models

Summary

References and further reading

Index

Other Books You May Enjoy

Part 1: Bigot in the Machine – A Primer

This section introduces the importance of Explainable AI (XAI) and its challenges. It is a primer on XAI and ethical AI for model risk management, defining key concepts and terms. The section also presents several stories that highlight the dangers of unexplainable and biased AI, emphasizing the need for different approaches to address similar problems. Overall, this section shows why you should ensure that the AI developed and used within your enterprise is explainable and effectively governed so that it is auditable, and it provides you with a deeper understanding of XAI and how to integrate it into your AI model development and deployment strategies.

This section comprises the following chapters:

Chapter 1, Explainable and Ethical AI Primer

Chapter 2, Algorithms Gone Wild

1

Explainable and Ethical AI Primer

“The greatest thing by far is to be a master of metaphor; it is the one thing that cannot be learnt from others; and it is also a sign of genius, since a good metaphor implies an intuitive perception of the similarity in the dissimilar.”

– Aristotle

“Ethics is in origin the art of recommending to others the sacrifices required for cooperation with oneself.”

– Bertrand Russell

“I am in the camp that is concerned about super intelligence.”

– Bill Gates

“The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”

– Nick Bilton, tech columnist for The New York Times

This introductory chapter presents a detailed overview of the key terms related to explainable and interpretable AI that paves the way for further reading.

In this chapter, you will get familiar with safe, ethical, explainable, robust, transparent, auditable, and interpretable machine learning terminology. This should both provide a solid overview for novices and serve as a reference for experienced machine learning practitioners.

This chapter covers the following topics:

Building the case for AI governance

Key terminologies – explainability, interpretability, fairness, explicability, safety, trustworthiness, and ethics

Automating bias – the network effect

The case for explainability and black-box apologetics

Artificial intelligence (AI) and machine learning have significantly changed the course of our lives. The technological advancements they enable have a deep impact on our society, economy, politics, and virtually every spectrum of our lives. COVID-19, the de facto chief agent of transformation, has dramatically increased the pace at which automation shapes our modern enterprises. It would be both an understatement and a cliché to say that we live in unprecedented times.

The increased speed of transformation, however, doesn’t come without its perils. Handing things over to machines has its inherent costs and challenges; some of these are quite obvious, while other issues become apparent as a given AI system is used, and some, possibly many, have yet to be discovered. The evolving future of the workplace is not only based on automating mundane, repetitive, and dangerous jobs but also on taking away the power of human decision-making. Automation is rapidly becoming a proxy for human decision-making in a variety of ways. From providing movie, news, book, and product recommendations to deciding who can get paroled or admitted to college, machines are slowly taking away things that used to be considered uniquely human. Ignoring the typical doomsday elephants in the room (insert your favorite dystopian cyborg movie plot here), the biggest threat of these technological black boxes is the amplification and perpetuation of systemic biases through AI models.

Typically, when a human bias gets introduced, perpetuated, or reinforced among individuals, there are, for the most part, opposing factors and corrective actions within society to bring some sort of balance and limit the wide-scale spread of such unfairness or prejudice. While carefully avoiding the tempting traps of social sciences, politics, and ethical dilemmas, purely from a technical standpoint, it is safe to say that we have not seen experimentation at this scale in human history. The narrative can be subtle, nudged by models optimizing their cost functions, and then perpetuated by either reinforcing ideas or sheer utility. We have repeatedly seen that humans will trade privacy for convenience – anyone who has accepted an End User License Agreement (EULA) without ever reading it, feel free to put your hands down.

While some have called for a pause in the advancement of cutting-edge AI while governments, industry, and other relevant stakeholders globally seek to ensure AI is fully understood and accordingly controlled, this does not help those in an enterprise who wish to benefit from less contentious AI systems. As enterprises mature in the data and AI space, it is entirely possible for them to ensure that the AI they develop and deploy is safe, fair, and ethical. We believe that, as policymakers, executives, managers, developers, ethicists, auditors, technologists, designers, engineers, and scientists, it is crucial for us to internalize the opportunities and threats presented by modern-day digital transformation aided by AI and machine learning. Let’s dive in!

The imperative of AI governance

“Starting Jan 1st, 2029, all manually and semi-autonomously operated vehicles will be prohibited on highways. This restriction is in addition to the existing bans on pedestrians, bicycles, motorized bicycles, and non-motorized vehicle traffic. Only fully autonomous land vehicles compliant with the intelligent traffic grid will be allowed on highways.”

– Hill Valley Telegraph, June 2028

Does this headline look futuristic? It probably would have a decade ago, but today, you could see it becoming a reality in 5 to 10 years. At the current speed of automation, putting humans behind the wheel of vehicles weighing thousands of pounds may well sound irresponsible within the next 10 years. Human driving could quickly become a novelty sport, as thousands of needless vehicle crash deaths caused by human mistakes can be avoided, thanks to self-driving vehicles.

Figure 1.1: The upper row shows an image from the validation set of Cityscapes and its prediction. The lower row shows the image perturbed with universal adversarial noise and the resulting prediction. Image courtesy of Metzen et al., Universal Adversarial Perturbations Against Semantic Image Segmentation – source: https://arxiv.org/pdf/1704.05712.pdf

As we race toward delegating decision-making to algorithms, we need to ask ourselves whether we have the capability to clearly understand and justify how an AI model works and predicts. It might not be important to fully interpret how your next Netflix movie was recommended, but in critical areas of human concern such as healthcare, recruitment, higher education admissions, legal proceedings, commercial aircraft collision avoidance, financial transactions, autonomous vehicles, or the control of massive power generation or chemical manufacturing plants, these decisions are critical. It is self-evident that if we can understand what algorithms do, we can debug, improve, and build upon them more easily. Therefore, we can extrapolate that in order to build an ethical AI – an AI that is congruent with our current interpretation of ethics – explainability is a must-have feature. Decision transparency, or understanding why an AI model predicts what it predicts, is critical to building a trustworthy and reliable AI system. In the preceding figure, you can see how an adversarial input can change the way an autonomous vehicle sees (or does not see) pedestrians. If there is an accident, the algorithm’s action must be explainable given the state in which the input was received – in an auditable, repeatable, and reproducible manner.
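To make the adversarial threat concrete, here is a minimal sketch (not from the book) of the fast gradient sign method (FGSM), one of the simplest techniques in the same family as the universal adversarial perturbations shown in Figure 1.1. The model choice, the random stand-in input, and the epsilon value are all illustrative assumptions:

```python
# A minimal FGSM sketch: craft a small input perturbation that flips a
# classifier's prediction. Model and input are illustrative placeholders.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturb(image: torch.Tensor, true_label: int, epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of image (shape [1, 3, H, W])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss most
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed camera frame
x_adv = fgsm_perturb(x, true_label=0)
print("clean prediction:", model(x).argmax(1).item())
print("perturbed prediction:", model(x_adv).argmax(1).item())
```

A perturbation this small is typically imperceptible to a human observer, which is precisely why safety-critical systems need auditable decision logs and explainability tooling rather than trust in raw accuracy alone.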

AI governance and model risk management