
Artificial Intelligence E-Book

IT Governance Publishing

Description

This book offers an in-depth exploration of Artificial Intelligence (AI), from its origins to the ethical and societal challenges it presents today. It provides a comprehensive understanding of AI’s impact on human interaction, collaboration, privacy, and security. Through analyzing both opportunities and risks, the book emphasizes the ethical concerns surrounding AI, such as bias, privacy violations, and security threats.
Chapters explore AI’s transformative role in cybersecurity, misinformation, and human-machine collaboration, highlighting its implications for job markets and human relationships. Real-world examples illustrate how AI can drive progress or cause harm. The ethical dilemmas around AI, including its use in surveillance and decision-making, are thoroughly examined, presenting challenges central to modern technology.
Looking ahead, the book offers a forward-thinking perspective on AI’s future, discussing emerging trends and the need for responsible policy-making. It concludes by addressing how society can prepare for AI’s continued growth, offering strategies for navigating the evolving landscape. With practical insights and deep analysis, this book helps readers grasp AI’s profound implications for our future.





Artificial Intelligence

Ethical, social, and security impacts for the present and the future

Second edition


DR JULIE E. MEHAN

Every possible effort has been made to ensure that the information contained in this book is accurate at the time of going to press, and the publisher and the author cannot accept responsibility for any errors or omissions, however caused. Any opinions expressed in this book are those of the author, not the publisher. Websites identified are for reference only, not endorsement, and any website visits are at the reader’s own risk. No responsibility for loss or damage occasioned to any person acting, or refraining from action, as a result of the material in this publication can be accepted by the publisher or the author.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form, or by any means, with the prior permission in writing of the publisher or, in the case of reprographic reproduction, in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publisher at the following address:

IT Governance Publishing Ltd

Unit 3, Clive Court

Bartholomew’s Walk

Cambridgeshire Business Park

Ely, Cambridgeshire

CB7 4EA

United Kingdom

www.itgovernancepublishing.co.uk

© Dr Julie E. Mehan 2022, 2024.

The author has asserted the rights of the author under the Copyright, Designs and Patents Act, 1988, to be identified as the author of this work.

First published in the United Kingdom in 2022 by IT Governance Publishing.

ISBN 978-1-78778-372-0

Second edition published in the United Kingdom in 2024 by IT Governance Publishing.

ISBN 978-1-78778-514-4

FOREWORD

“Throughout history, new technologies have disrupted society in different ways – some positively and some negatively – from steam-powered engines and electricity, to the Internet, and now again with artificial intelligence (AI); generative AI in particular in this instance. The creation of art, journalism, education, and the very truth itself have all been tested by the use of ChatGPT and other generative AIs.”

Markkula Center for Applied Ethics

In the interests of full disclosure, I am a Professor of Digital Ethics at the University of Maryland and not a researcher or developer of Artificial Intelligence (AI). In addition to digital ethics, I’ve also taught courses in Cyberterrorism, Information Technology, and Information Systems Management. Although I possess no formal training in the mathematical and statistical underpinnings of AI, my longstanding engagement with digital ethics, and more recently the ethics of AI and its associated technologies, has given me a front-row seat to many of the efforts to assess and define the potential impact of AI on individuals, our society, and our ethical foundation.

Over the past few years, the topic of AI, or essentially algorithmic1 systems, has played a greater role in my Digital Ethics classes. AI, I began to sense, is much more than just a simple tool powering our smartphones or allowing us to ask Alexa about the latest movie schedules. AI is a technology that is, in very subtle but unmistakable ways, exerting an ever-increasing influence over our lives – and in somewhat unpredictable ways. And the more we use it, the more AI is altering our lives and our environment.

And it is not just that humans are spending more and more time staring at a smartphone screen; their habits and routines are also being affected by the services offered by AI-powered applications and tools. AI is changing the way our brains work, how we relate to other individuals, how we get and understand our news, how well we retain information or not, and even how we define what it is to be “human.”

For a long time, artificial intelligence (AI) seemed like a problem that we would have to tackle tomorrow – until it wasn’t! Unless you’ve had your head in the sand, you’ve likely been hearing a lot about artificial intelligence. It is a big deal right now. In late 2022, OpenAI released a generative AI2 called ChatGPT and the AI playing field was irreversibly changed. Initially, most of the discussion about this new technology centered around how students would be using ChatGPT to write their homework assignments (which it does quite well, by the way), but more serious implications of the capabilities of generative AI have become apparent. And overall, we’ve realized that AI and the associated systems are evolving much more quickly than anticipated.

I wrote the first edition of this book on AI and ethics in 2022 – and already it requires updating to reflect the new developments in AI and ethics, particularly in terms of generative AI. While the first edition remains relevant, there is increased interest in AI ethics, and organizations operating in the AI space are starting to take AI ethics seriously, as indicated by the growing proportion of peer-reviewed AI ethics papers and proposed legislation.

Humans are insatiably curious. And they are now engaged in a quest to build an AI that can do everything a human can do – and throughout 2022 and 2023, new large-scale AI models were released almost every month. These capabilities, such as ChatGPT, Stable Diffusion, Whisper, and DALL-E 2,3 are able to perform an increasingly broad range of tasks, from text manipulation and analysis, to image generation, to unprecedentedly good speech recognition.

Along with the new changes in AI technologies, there is an increased interest in AI and digital ethics. Policy- and lawmakers are talking about AI and the need for better controls and ethical approaches more than ever before. Industry leaders who have effectively incorporated AI into their business models are seeing tangible cost and revenue benefits. And the general public is becoming more informed (and concerned) about AI and which elements they like or dislike.

It is certain that AI will continue to evolve and, as such, become more integrated into our daily lives. Given the increased deployment of powerful AI technologies and the potential for massive disruption, we should all think more critically about exactly HOW we want AI to be developed and deployed. We should also ask important questions about who is developing and deploying it to avoid AI being increasingly defined by the actions of a small set of private-sector actors, rather than a broader range of societal actors.

An article published online in the AI and Ethics journal (SpringerLink) in November 2022 proposed this definition of AI ethics: “AI ethics is a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and use of AI technologies.”4 This definition captures the true essence of the goals of this book on AI and ethics.

As AI continues to evolve and becomes increasingly integrated into our daily lives, it is crucial that we continue to examine its ethical and societal implications. For this reason, I’ve updated the book with a new chapter dedicated to looking at some of the most recent trends in AI and their ethical and societal implications, such as the ethics of using generative AI, the treatment of increasingly sentient machines, the potential for bias, how AI could affect human decision-making capabilities, and how these new AI technologies may change the workforce. This chapter focuses on the moral imperative to address the challenges posed by the highly disruptive technologies evolving through AI.

So far, it seems as if 2024 and beyond will be the years when there is an international push for stronger legislation and more ethical approaches to the development and deployment of AI-based technologies. It is a good time to take a close look at some of these technologies and how they are, or could be, affecting our society and our application of digital ethics.

Unless stated otherwise, all figures are the author’s own. Some figures have been created using resources from the Creative Commons organization:

https://creativecommons.org/.

___________________________

1 An algorithm is a set of instructions for how a computing system should accomplish a particular task.

2 Generative AI, also referred to as GenAI, allows users to input a variety of prompts to generate new content, such as text, images, videos, sounds, code, 3D designs, and other media. It ‘learns’ and is trained on documents and artifacts that already exist online. Generative AI evolves as it continues to train on more data. It operates on AI models and algorithms that are trained on large unlabeled data sets, which require complex math and lots of computing power to create. These data sets train the AI to predict outcomes in the same ways humans might act or create on their own.

3 Stable Diffusion is an open-source machine learning model that can generate images from text, modify images based on text, or fill in details on low-resolution or low-detail images. Whisper is a multi-task model that is capable of speech recognition in many languages, voice translation, and language detection. DALL-E 2 is an AI system that creates realistic images and art from a description in natural language.

4 Rees, C. & Muller, B. (November 16, 2022). “All that glitters is not gold: trustworthy and ethical AI principles”. Published online in the AI and Ethics Journal, SpringerLink. Available at https://link.springer.com/article/10.1007/s43681-022-00232-x.

ABOUT THE AUTHOR

Dr Julie E. Mehan is semi-retired, but still serves as a professor at the University of Maryland Global Campus (UMGC), where she teaches Digital Ethics, Cyberterrorism, and Information Systems in organizations. It was the students in her Digital Ethics and Computer Science classes that inspired this book.

Until her semi-retirement to Florida, she was the founder and president of JEMStone Strategies and a principal in a strategic consulting company in the state of Virginia.

Dr Mehan has been a career government service employee, a strategic consultant, and an entrepreneur – which either demonstrates her flexibility or her inability to hold on to a steady job! She has led business operations, as well as information technology governance and cybersecurity-related services, including designing and leading white-hat and black-hat penetration testing exercises, certification and accreditation, systems security engineering process improvement, and cybersecurity strategic planning and program management. During her professional years, she delivered cybersecurity and related privacy services to senior Department of Defense staff, Federal Government, and commercial clients working in Italy, Australia, Canada, Belgium, Germany, and the United States.

She has served on the President’s Partnership for Critical Infrastructure Security, Task Force on Interdependency and Vulnerability Assessments. Dr Mehan was chair for the development of criteria for the International System Security Engineering Professional (ISSEP) certification, a voting board member for development of the International Systems Security Professional Certification Scheme (ISSPCS), and Chair of the Systems Certification Working Group of the International Systems Security Engineers Association.

Dr Mehan graduated summa cum laude with a PhD from Capella University, with dual majors in Information Technology Management and Organizational Psychology. Her research was focused on success and failure criteria for Chief Information Security Officers (CISOs) in large government and commercial corporations, and development of a dynamic model of Chief Security Officer (CSO) leadership. She holds an MA with honors in International Relations Strategy and Law from Boston University, and a BS in History and Languages from the University of New York.

Dr Mehan was elected 2003 Woman of Distinction by the women of Greater Washington and has published numerous articles including Framework for Reasoning About Security – A Comparison of the Concepts of Immunology and Security; System Dynamics, Criminal Behavior Theory and Computer-Enabled Crime; The Value of Information-Based Warfare to Affect Adversary Decision Cycles; and Information Operations in Kosovo: Mistakes, Missteps, and Missed Opportunities, released in Cyberwar 4.0.

Dr Mehan is the author of several books published by ITGP: Insider Threat published in 2016; Cyberwar, CyberTerror, CyberCrime, and CyberActivism, 2nd Edition published in 2014; and The Definitive Guide to the Certification & Accreditation Transformation published in 2009. She is particularly proud of her past engagement as pro-bono President of Warrior to Cyber Warrior (W2CW), a non-profit company which was dedicated to providing cost-free cybersecurity career transition training to veterans and wounded warriors returning from the various military campaigns of recent years.

Dr Mehan is fluent in German, has conversational skills in French and Italian, and is working on learning Croatian and Irish.

She can be contacted at [email protected].

ACKNOWLEDGEMENTS

This is not my first book, but I have to admit that the process of writing this book has been both more difficult and more gratifying than I could ever have imagined.

I have to begin by thanking my awesome partner, John. From reading the very first chapters to giving me advice on things to consider, he was as important to this book being written as I was. He was the one who had to put up with me and my frequent hour-long absences to the office – and he did so without strangling or shooting me and dropping me into the St. Johns River. Thank you so much.

Next, my sincere appreciation goes to John’s oldest son, also John Deasy. As a computer scientist and physicist, he provided me with critical insight into the concepts around AI from the perspective of a scientist. Without some of these insights, this book would have been missing some key elements.

It was the students in my digital ethics course that inspired my interest in AI and its impact on the world we live in. Without them, the concepts for this book would never have evolved.

This section would definitely not be complete without acknowledging the superb support from the entire ITGP team. This includes Nicola Day, publications manager; Vicki Utting, managing executive; copy editor Susan Dobson; Jonathan Todd, senior copy editor at GRC International Group PLC; and Jo Ace, the book cover designer. Their assistance and patience during the process from start to publication has been exemplary.

I would also like to thank Chris Evans; Christopher Wright; and Adam Seamons, information security manager at GRC International Group PLC; for their helpful comments during the production of this book.

CONTENTS

Introduction

Chapter 1: AI defined and common depictions of AI – Is it a benevolent force for humanity or an existential threat?

A very brief history of AI – and perceptions of AI

What exactly is AI?

AI learning styles

Back to the original question

Chapter 2: Is AI really ubiquitous – Does it or will it permeate everything we do?

Ubiquitous AI

What is allowing AI to become almost a “common utility”?

What makes people refer to AI as ubiquitous?

Chapter 3: Human-machine collaboration – Can we talk to them in English (or any other language)?

The “Age of With™”

The five Ts of human-machine collaboration

Types of human-machine collaboration

How can humans and machines communicate?

Challenges to human-machine collaboration

Chapter 4: AI, ethics, and society – Are they compatible?

How do ethics relate to AI?

Potential positive ethical and security/safety impacts of AI

AI in the time of COVID

Potential negative ethical and security/safety impacts of AI

The reasons it is difficult for humans to fully address these potential challenges from AI

Chapter 5: Bias in AI – Why is it important and potentially dangerous?

Where does AI bias come from?

Real-life examples of bias in AI

Can AI’s decisions be less biased than human ones?

Identifying and removing bias in AI

Chapter 6: AI and cybersecurity – AI as a sword or a shield?

AI as a sword

AI as a shield

It’s more than just AI tools

Chapter 7: AI and our society – Will the result be professional, ethical, and social deskilling, or upskilling?

Professional deskilling and upskilling

Ethical deskilling and upskilling

Social deskilling and upskilling

Chapter 8: AI and privacy – Is anything really private anymore?

Privacy and security

Data privacy

Monitoring and surveillance

Privacy laws and regulations in the age of AI

Chapter 9: Misinformation, disinformation, fake news, manipulation, and deepfakes – Do we know how to think critically about information?

Misinformation

Disinformation

User manipulation

Thinking critically about information and information sources

Become a digital detective

Chapter 10: AI and social media – How is it affecting us?

What is the relationship between social media and AI?

The unforeseen effects of AI and social media

Chapter 11: “The measure of a man” – Can AI become more like a human?

What does it mean to be “human”?

Applying human characteristics to AI

But can AI be considered alive?

AI – A new species

Chapter 12: What’s next in AI – Less artificial and more intelligent?

The future of AI – What comes next?

A primer for integrating ethics and safety into new AI developments

Policy and legal considerations

Chapter 13: Ethical challenges of AI 2024 and beyond: The moral imperative of navigating this new terrain

2023 was a tipping point for AI

How do we set safeguards for AI, especially generative AI?

Misinformation, disinformation, and malinformation through the use of generative AI

Weaponization of AI

Copyright, intellectual property, and other legal risks

Privacy, surveillance, and social media

AI-induced loss of jobs

AI in education

Environmental impact of increased AI

AI and changes in human relationships

2024 and beyond – the years of AI legislation

Chapter 14: Final thoughts

Further reading

FIGURES

Figure 1-1: Gynoid Maria from the movie Metropolis

Figure 1-2: AI is multidisciplinary

Figure 1-3: Definitions of AI, ML, and DL

Figure 1-4: AI learning styles

Figure 1-5: Dog vs. not a dog

Figure 1-6: Dogs

Figure 1-7: Feedback loop in reinforced learning

Figure 1-8: Deep neural network representation

Figure 1-9: The DL process

Figure 2-1: AI is integrated into many areas

Figure 2-2: Use of AI in entertainment

Figure 2-3: Jimmy Fallon meets Sophia the robot

Figure 3-1: Human-machine communication

Figure 4-1: The potential positive impacts of AI

Figure 4-2: AI uses during COVID

Figure 4-3: The potential negative impacts of AI

Figure 4-4: Loneliness and social isolation

Figure 4-5: Equality vs. fairness

Figure 5-1: Bias in environment, data, design and use

Figure 5-2: Six steps to identify and mitigate AI bias

Figure 5-3: EqualAI checklist to identify bias

Figure 6-1: Manipulation of an AI-trained view

Figure 6-2: AI as a shield

Figure 7-1: Will a robot replace your job?

Figure 7-2: Micro, meso and macro socio-ethical skills

Figure 8-1: Why individuals feel they do not control personal data

Figure 8-2: Privacy vs. security

Figure 8-3: Good practices for user data privacy

Figure 8-4: Process of facial recognition

Figure 9-1: Seven types of misinformation and disinformation

Figure 9-2: Facebook engagements for Top 20 election stories (Fake News)

Figure 9-3: How to make a deepfake

Figure 9-4: Facts don’t matter

Figure 9-5: Five types of media bias

Figure 9-6: Forms of media bias

Figure 9-7: AllSides Media Bias Chart

Figure 10-1: Most popular social media sites as of October 2021

Figure 10-2: Instagram engagement calculator

Figure 10-3: Signs of social media addiction

Figure 10-4: Attitudes toward Facebook among US adults

Figure 11-1: Comparison of human and AI

Figure 11-2: The concept of autonomy

Figure 12-1: Constructs to assist AI developers in integrating ethics into AI

Figure 12-2: The SUM values

Figure 12-3: The FAST Principles

Figure 12-4: Transparency in AI

Figure 13-1: Cheapfake of an Australian soldier

Figure 13-2: Liquid democracy

Figure 13-3: Jason Allen’s winning AI-generated artwork

Figure 13-4: Share of industry affected by AI automation: US

Figure 13-5: Share of industry affected by AI automation: EU

Figure 13-6: Robot and human love

TABLES

Table 4-1: Comparison Between Ethics and Morals

Table 4-2: Quantitative and Qualitative Fairness Tools

Table 7-1: Potential Negative and Positive Effects of AI On the Workforce

Table 13-1: Rare-earth Elements Used in Computers and Technology

INTRODUCTION

Let’s start by saying that this book is not a guide on how to develop AI. There are plenty of those – and plenty of YouTube videos providing introductions to machine learning (ML) and AI. Rather, the intent is to provide an understanding of AI’s foundations and its actual and potential social and ethical implications – though by no means ALL of them, as we are still in the discovery phase. Although it is not technically focused, this book can provide essential reading for engineers, developers, and statisticians in the AI field, as well as computer scientists, educators, students, and organizations seeking to enhance their understanding of how AI can change, and is changing, the world we live in.

An important note: throughout this book, the term AI will be used as an overarching concept encompassing many of the areas and sub-areas of AI, ML, and deep learning (DL). So, readers, allow some latitude for a certain degree of inaccuracy in using the overarching AI acronym in reference to all of its permutations.

It is essential to begin by defining and describing AI, all the while bearing in mind that there is no single accepted definition. This is partly because intelligence itself is difficult to define. As Massachusetts Institute of Technology (MIT) Professor Max Tegmark pointed out, “There’s no agreement on what intelligence is even among intelligent intelligence researchers.”5

In fact, few concepts are less clearly defined than AI. The term AI itself is polysemous – having multiple meanings and interpretations. Indeed, it appears that there are as many perceptions and definitions of AI as there are proliferating applications. Although there are multiple definitions of AI, let’s look at a really simple one: AI is intelligence exhibited by machines, where a machine can learn from information (data) and then use that learned knowledge to do something.

According to a 2017 Rand Study,

“algorithms and artificial intelligence (AI) agents (or, jointly, artificial agents) influence many aspects of our lives: the news articles we read, the movies we watch, the people we spend time with, our access to credit, and even the investment of our capital. We have empowered them to make decisions and take actions on our behalf in these and many other domains because of the efficiency and speed gains they afford.”6

AI faults in social media may have only a minor impact, such as pairing someone with an incompatible date. But a misbehaving AI used in defense, infrastructure, or finance could represent a potentially high and global risk. A “misbehaving” algorithm refers to an AI whose processing results lead to incorrect, prejudiced, or simply dangerous consequences. The market’s “Flash Crash” of 20107 is a painful example of just how vulnerable our reliance on AI can make us. The recent evolutions in AI, especially in generative AI, are showing us just how great the impact can be on our lives. Melvin Kranzberg8 wrote as early as 1986 that “Many of our technology-related problems arise because of the unforeseen consequences where apparently benign technologies are employed on a massive scale.” And this is becoming the case with generative AI. As with other technologies, a messy period of behavioral, societal, and legislative adaptation will certainly have to follow.

As an international community, we need to address the more existential concerns. For example, where will continued innovation in AI ultimately lead us? Will today’s more narrow applications of AI make way for fully intelligent AI? Will the result be a continuous acceleration of innovation resulting in exponential growth in which super-intelligent AI will develop solutions for humanity’s problems, or will future AI intentionally or unintentionally destroy humanity – or even more likely, be distorted and abused by humanity? These are the immediate and long-term concerns arising from the increased development and deployment of AI in so many facets of our society.

But there is a counter to this argument that runs central to this book, and it could not be better expressed than in the words of Kevin Kelly, founder of Wired magazine:

“But we haven’t just been redefining what we mean by AI – we’ve been redefining what it means to be human. Over the past 60 years, as mechanical processes have replicated behaviors and talents that we once thought were unique to humans, we’ve had to change our minds about what sets us apart … In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science – although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.”9

___________________________

5 Tegmark, M. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. London: Penguin Books.

6 Osonde A. Osoba, and William Welser IV. (2017). An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. RAND Corporation.

7 On May 6, 2010, Wall Street experienced its worst stock plunge in several decades, wiping almost a trillion dollars in wealth out in a mere 20 minutes. Other so-called flash crashes have occurred since, and most were a result of a misbehaving algorithm.

8 From the Six Laws of Technology, written in 1986 by Melvin Kranzberg, a professor of the History of Technology at Georgia Tech. Published in July 1986 in “Technology and Culture”, Vol. 27, No. 3. Available at https://www.jstor.org/stable/i356080.

9 Kelly, Kevin. (October 27, 2014). The Three Breakthroughs That Have Finally Unleashed AI on the World. Wired magazine online. Available at www.wired.com/2014/10/future-of-artificial-intelligence/.

CHAPTER 1: AI DEFINED AND COMMON DEPICTIONS OF AI – IS IT A BENEVOLENT FORCE FOR HUMANITY OR AN EXISTENTIAL THREAT?

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”

Eliezer Yudkowsky10

“OK! AI will destroy humans!”

This statement sums up some of the common (mis-) perceptions held by humans about AI. In truth, we are at no near-term (or even long-term) risk of being destroyed by intelligent machines.

Elon Musk, the noted tech tycoon, begs to differ, with his claim that “AI is a fundamental risk for the existence of human civilization.”11 Musk made this statement based on his observations that the development and deployment of AI is far outpacing our ability to manage it safely.

Narratives about AI play a key role in the communication and shaping of ideas about AI. Both fictional and non-fictional narratives have real-world effects. In many cases, public knowledge about AI and its associated technology is limited. Perceptions and expectations are therefore usually informed by personal experiences using existing applications, by film and books, and by the voices of prominent individuals talking about the future. This informational disconnect between the popular narratives and the reality of the technology can have potentially significant negative consequences.

Narratives that are focused on utopian extremes could create unrealistic expectations that the technology is not yet able to meet. Other narratives focused on the fear of AI may overshadow some of the real challenges facing us today. With real challenges, such as wealth distribution, privacy, and the future of work facing us, it’s important for public and legislative debate to be founded on a better understanding of AI. Bad regulation is another potential consequence of misleading narratives, because they influence policymakers: policymakers either respond to these narratives because they resonate with the public, or because they are themselves swayed by them. AI may develop too slowly and not meet expectations, or it may evolve so fast that it is not aligned with legal, social, ethical, and cultural values.

A very brief history of AI – and perceptions of AI

Whether AI is a potential threat or not may be debatable, but before entering the debate, let’s look at the history of AI. AI is not a new term. In fact, it was first introduced in 1956 by John McCarthy, an assistant professor at Dartmouth College, at the Dartmouth Summer Research Project. His definition of AI was the “science and engineering of making intelligent machines,” or getting machines to work and behave like humans.

But the concept of AI was not first conceived with the term in 1956. Although it is not surprising that AI grew rapidly post-computers, what is surprising is how many people thought about AI-like capabilities hundreds of years before there was even a word to describe what they were thinking about. In fact, something similar to AI can be found as far back as Greek mythology and Talos. Talos was a giant bronze automaton warrior said to have been made by Hephaestus to protect Europa, Zeus’s consort, from pirates and invaders who might want to kidnap her.

Between the fifth and fourteenth centuries, or the “Dark Ages,” there were a number of mathematicians, theologians, philosophers, professors, and authors who contemplated mechanical techniques, calculating machines, and numeral systems that ultimately led to the idea that mechanized “human” thought might be possible in non-human beings.

Leonardo da Vinci designed an automaton (a mechanical knight) in 1495, although it was never realized.

Jonathan Swift’s novel Gulliver’s Travels, published in 1726, described an apparatus called “the engine.” This device’s supposed purpose was to improve knowledge and mechanical operations to a point where even the least talented person would seem to be skilled – all with the assistance and knowledge of a non-human mind.

Inspired by engineering and evolution, Samuel Butler wrote an essay in 1863 entitled Darwin Among the Machines wherein he predicted that intelligent machines would come to dominate:

“… the machines are gaining ground upon us; day by day we are becoming more subservient to them […] that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.”12

Fast forward to the 1900s, where concepts related to AI took off at full tilt and there was the first use of the term “robot.” In 1921, Karel Čapek, a Czech playwright, published a play entitled Rossum’s Universal Robots (English translation), which featured factory-made artificial people – the first known reference to the word.

One of the first examples in film was Maria, the “Maschinenmensch” or “machine-human,” in the Fritz Lang-directed German movie Metropolis, made in 1927. Set in a dystopian future, the gynoid13 Maria was designed to resurrect Hel, the deceased love of the inventor, Rotwang, but Maria evolved to seduce, corrupt, and destroy. In the end, her fate was to be destroyed by fire. Many claims have been made that this movie spawned the trend of futurism in the cinema. Even if we watch it today in its 2011 restoration, it is uncanny to see how many shadows of cinema yet to come it already contains.

Figure 1-1: Gynoid Maria from the movie Metropolis14

In 1950, Alan Turing published “Computing Machinery and Intelligence,” which proposed the idea of The Imitation Game – this posed the question of whether machines could actually think. It later became known as The Turing Test, a way of measuring machine (artificial) intelligence. This test became an important component in the philosophy of AI, which addresses intelligence, consciousness, and ability in machines.

In his novel, Dune, published in 1965, Frank Herbert describes a society in which intelligent machines are so dangerous that they are banned by the commandment “Thou shalt not make a machine in the likeness of a human mind.”15

Fast forward to 1969, and the “birth” of Shakey – the first general purpose mobile robot. Developed at the Stanford Research Institute (SRI) from 1966 to 1972, Shakey was the first mobile robot to reason about its actions. Its playground was a series of rooms with blocks and ramps. Although not a practical tool, it led to advances in AI techniques, including visual analysis, route finding, and object manipulation. The problems Shakey faced were simple and only required basic capability, but this led to the researchers developing a sophisticated software search algorithm called “A*” that would also work for more complex environments. Today, A* is used in applications, such as understanding written text, figuring out driving directions, and playing computer games.
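To give a flavor of what A* actually does, here is a minimal, hedged sketch in Python (not SRI’s original implementation): the search always expands the route whose cost so far, plus an estimate of the remaining distance, is lowest. The grid, the Manhattan-distance heuristic, and all function names are invented purely for illustration.

```python
import heapq

def a_star(start, goal, walkable):
    """Minimal A* over a 4-connected grid of unit-cost moves.
    `walkable` is a set of (x, y) cells that may be entered."""
    def heuristic(a, b):
        # Manhattan distance: an optimistic estimate of the remaining cost
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    frontier = [(heuristic(start, goal), 0, start, [start])]   # (f, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                     # cheapest route found
        if node in visited:
            continue
        visited.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in walkable and nxt not in visited:
                heapq.heappush(frontier,
                               (g + 1 + heuristic(nxt, goal), g + 1, nxt, path + [nxt]))
    return None                             # no route exists

# Usage: find a route across a small open 3x3 grid
grid = {(x, y) for x in range(3) for y in range(3)}
print(a_star((0, 0), (2, 2), grid))         # e.g. [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```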

1997 saw IBM’s Deep Blue, a chess-playing computer, become the first system to play chess against the reigning world champion, Garry Kasparov, and win. This was a huge milestone in the development of AI and the classic plot we’ve seen so often of man versus machine. Deep Blue was programmed to solve the complex, strategic problems presented in the game of chess, and it enabled researchers to explore and understand the limits of massively parallel processing. It gave developers insight into ways they could design a computer to tackle complex problems in other fields, using deep knowledge to analyze a higher number of possible solutions. The architecture used in Deep Blue has been applied to financial modeling, including marketplace trends and risk analysis; data mining – uncovering hidden relationships and patterns in large databases; and molecular dynamics, a valuable tool for helping to discover and develop new drugs.

From 2005 onwards, AI has shown enormous progress and increasing pervasiveness in our everyday lives. From the first rudimentary concepts of AI in 1956, today we have speech recognition, smart homes, autonomous vehicles (AVs), and so much more. What we are seeing here is a real compression of time in terms of AI development. But why? Blame it on the increase in data, or “big data.” Although we may not hear this exact term as often, big data hasn’t disappeared. In fact, data has just got bigger. This increase in data has left us with a critical question: Now what? As in: We’ve got all this stuff (that’s the technical term for it!) and it just keeps accumulating – so what do we do with it? AI has become the set of tools that can help an organization aggregate and analyze data more quickly and efficiently. Big data and AI are merging into a synergistic relationship, where AI is useless without data, and mastering today’s ever-increasing amount of data is an insurmountable task without AI.

So, if we have really entered the age of AI, why doesn’t our world look more like The Jetsons, with autonomous flying cars, jetpacks, and intelligent robotic housemaids? Oh, and in case you aren’t old enough to be familiar with The Jetsons – well, it was a 1960s TV cartoon series that became the single most important piece of twentieth-century futurism. And though the series was “just a Saturday morning cartoon,” it was based on very real expectations for the future.

In order to understand where AI is today and where it might be tomorrow, it’s critical to know exactly what AI is, and, more importantly, what it is not.

What exactly is AI?

In many cases, AI has been perceived as robots doing some form of physical work or processing, but in reality, we are surrounded by AI doing things that we take for granted. We are using AI every time we do a Google search or look at our Facebook feeds, as we ask Alexa to order a pizza, or browse Netflix movie selections.

There is, however, no straightforward, agreed-upon definition of AI. It is perhaps best understood as a branch of computer science that endeavors to replicate or simulate human intelligence in a machine, so machines can efficiently – or even more efficiently – perform tasks that typically require human intelligence. Some programmable functions of AI systems include planning, learning, reasoning, problem solving, and decision-making.

In effect, AI is multidisciplinary, incorporating human social science, computing science, and systems neuroscience,16 each of which has a number of sub-disciplines.17

Figure 1-2: AI is multidisciplinary18

Computer scientists and programmers view AI as “algorithms for making good predictions.” Unlike statisticians, they are not too interested in how we got the data or in models as representations of some underlying truth. For them, AI is black boxes making predictions.

Statisticians understand that it matters how data is collected, that samples can be biased, that rows of data need not be independent, and that measurements can be censored or truncated. In reality, the majority of AI is just applied statistics in disguise. Many techniques and algorithms used in AI are either fully borrowed from or heavily rely on the theory of statistics.

And then there’s mathematics. The topics at the heart of mathematical analysis – continuity and differentiability – are also what is at the foundation of most AI/ML algorithms.

All AI systems – real and hypothetical – fall into one of three types:

1. Artificial narrow intelligence (ANI), which has a narrow range of abilities;

2. Artificial general intelligence (AGI), which is on par with human capabilities; or

3. Artificial superintelligence (ASI), which is more capable than a human.

ANI is also known as “weak AI” and involves applying AI only to very specific and defined tasks, e.g. facial recognition or speech recognition/voice assistants. These capabilities may seem intelligent; however, they operate under a narrow set of constraints and limitations. Narrow AI doesn’t mimic or replicate human intelligence; it merely simulates human behavior based on a narrow and specified range of parameters and contexts. Examples of narrow AI include:

• Siri by Apple, Alexa by Amazon, Cortana by Microsoft, and other virtual assistants;

• IBM’s Watson;

• Image/facial recognition software;

• Disease mapping and prediction tools;

• Manufacturing and drone robots; and

• Email spam filters/social media monitoring tools for dangerous content.

AGI is also referred to as “strong” or “deep AI,” or intelligence that can mimic human intelligence and/or behaviors, with the ability to learn and apply its intelligence to solve any problem. AGI can think, understand, and act in a way that is virtually indistinguishable from that of a human in any given situation. Although there has been considerable progress, AI researchers and scientists have not yet been able to achieve a fully functional strong AI. To succeed would require making machines conscious and programming a full set of cognitive abilities. Machines would have to take experiential learning to the next level, not just improving efficiency on singular tasks, but gaining the ability to apply experiential knowledge to a wide and varied range of problems. The physicist Stephen Hawking stated that there is the potential for strong AI to “… take off on its own and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, could not compete, and would be superseded.”19

One of the most frightening examples of AGI is HAL (Heuristically programmed ALgorithmic computer) in 2001: A Space Odyssey. HAL 9000, the sentient computer at the heart of 2001, remains one of the most memorable “characters” in the film. Faced with the prospect of disconnection after an internal malfunction, HAL eventually turns on the Discovery 1 astronaut crew, killing one, before being manually shut down by the other crew member. HAL continues to represent a common fear of future AI, in which man-made technology could turn on its creators as it evolves in knowledge and consciousness.

ASI is still only a hypothetical capability. It is AI that doesn’t just mimic or understand human intelligence and behavior; ASI represents the point at which machines become self-aware and may even surpass the capacity of human intelligence and ability. ASI means that AI has evolved to be so similar to human emotions and experiences that it doesn’t just understand them; it even develops emotions, needs, beliefs, and desires of its own.

A possible example of ASI is the android Data, who appeared in the TV show Star Trek: The Next Generation. In one episode, “The Measure of a Man,” Data becomes an object of study, threatened with having his memory removed and then being deactivated and disassembled in order to learn how to create more Data-like androids. The scientist argues that Data is purely a machine; Data claims that he will lose himself, as his identity consists of a complex set of responses to the things he has experienced and learned over time, making him unique. And if other androids were created, they would be different from him for precisely this reason. The possibility of new androids does not make him worry about his own identity; rather, it is the possibility that he will be reverted to something like a blank slate, which would then no longer be him. In the end, it came down to the question: “What is human?” Can humanity be defined by something like sentience, self-awareness, or the capacity for self-determination (autonomy), and how are these determined? It appears that these questions could not even be fully answered for humans, much less for Data, the android.

As AI continues to evolve, however, these may become the most salient questions.

Before we talk any further about AI, it’s critical to understand that AI is an overarching term. People tend to think that AI, ML, and DL are the same thing, since they have common applications. The distinctions between them are important – but this book will continue to use AI as the primary term that reaches across all of these subsets.

Figure 1-3: Definitions of AI, ML, and DL

Let’s take a deeper look at each of these. A machine is said to have AI if it can interpret data, potentially learn from the data, and use that knowledge to achieve specific goals, or perform specific tasks. It is the process of making machines “smart,” using algorithms20 that allow computers to solve problems that used to be solved only by humans.

AI technologies today are brilliant at analyzing vast amounts of data to learn to complete a particular task or set of tasks – this is ML. The main goal of ML is to develop machines with the ability to learn entirely or almost entirely by themselves, without the need for anyone to perfect their algorithms. The objective is to be so much like the human mind that these machines can independently improve their own processes and perform the tasks that have been entrusted to them with an ever-greater degree of precision. However, in order for ML to function, humans must ideally supply the machine with information, either through files loaded with a multitude of data, or by enabling the machine to gather data through its own observations and even to interact with the world outside itself.

AI learning styles

AI has a variety of learning styles and approaches that enable its ability to solve problems or execute desired tasks. These learning styles fall mostly into the category of ML or DL.

Figure 1-4: AI learning styles

ML

ML is at the core of AI, because it has allowed machines to advance in capability from relatively simple tasks to more complex ones. A lot of the present anticipation surrounding the possibility of AI is derived from the enormous promise of ML. ML encompasses supervised learning, unsupervised learning, and reinforcement learning.

Supervised learning

Supervised learning feeds the machines with existing information so that they have specific, initial examples and can expand their knowledge over time. It is usually done by means of labels, meaning that when we program the machines, we pass them properly labeled elements so that later they can continue labeling new elements without the need for human intervention. For example, we can pass the machine pictures of a car, tell it that each of these pictures represents a car, and specify how we want them to be interpreted. Using these specific examples, the machine generates its own supply of knowledge so that it can continue to assign labels when it recognizes a car. Using this type of ML, however, the machines are not limited to being trained from images, but can use other data types. For example, if the machine is fed with sounds or handwriting data sets, it can learn to recognize voices or detect written patterns and associate them with a particular person. The capability evolves entirely from the initial data that is supplied to the machine.

Figure 1-5: Dog vs. not a dog

As humans, we consume a lot of information, but often don’t notice these data points. When we see a photo of a dog,21 for example, we instantly know what the animal is based on our prior experience. But the machine can only recognize an image as a dog if it has been fed the examples and told that these images represent a dog.
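As a hedged, minimal illustration of this idea (not an example from the book), the sketch below trains a tiny scikit-learn classifier on hand-made “dog vs. not a dog” feature vectors; the features, numbers, and labels are invented purely to show the supervised workflow of fitting on labeled data and then predicting labels for new data.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented feature vectors: [weight_kg, barks (1/0), prominent_whiskers (1/0)]
X_train = [[30, 1, 0], [8, 1, 0], [4, 0, 1], [5, 0, 1]]
y_train = ["dog", "dog", "not a dog", "not a dog"]   # the labels a human supplies

model = DecisionTreeClassifier()
model.fit(X_train, y_train)                  # the "supervision" step

# The trained model can now label a new, unseen example on its own
print(model.predict([[20, 1, 0]]))           # expected: ['dog']
```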

Unsupervised learning

In unsupervised learning, the developers do not provide the machine with any kind of previously labeled information about what it should recognize, so the machine does not have an existing knowledge base. Rather, it is provided with data regarding the characteristics of the thing it is meant to identify, and then has to learn to recognize those characteristics on its own. Essentially, this type of learning algorithm requires the machine to develop its own knowledge base from a limited data set.

This form of ML is actually closest to the way the human mind learns and develops. The machine learns to analyze groups using a method known as clustering. This is nothing more than grouping the elements according to a series of characteristics they have in common.

Figure 1-6: Dogs

In unsupervised learning, a data scientist provides the machine with, for example, a photo of a group of dogs, and it’s the system’s responsibility to analyze the data and conclude whether they really are dogs or something else, like a cat or a trombone.

Unsupervised learning problems can be classified into clustering and association problems.
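A minimal clustering sketch, again assuming scikit-learn and invented measurements, shows the key difference from supervised learning: no labels are supplied, and the algorithm groups the data points purely by the characteristics they share.

```python
from sklearn.cluster import KMeans

# Unlabeled data: [height_cm, weight_kg] for an imaginary mix of small and large dogs
animals = [[25, 4], [30, 6], [28, 5], [70, 35], [75, 40], [68, 32]]

# No labels are given; k-means groups the points into two clusters by similarity alone
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(animals)
print(kmeans.labels_)      # e.g. [0 0 0 1 1 1] - two groups discovered by the machine
```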

Semi-supervised learning

Semi-supervised ML is a mixture of both supervised and unsupervised learning. It uses a small amount of labeled data and a larger amount of unlabeled data. This delivers the benefits of unsupervised and supervised learning, while sidestepping the issues associated with finding a large amount of labeled data. It means that AI can be trained to label data without having to apply as much labeled training data.
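The sketch below illustrates one common semi-supervised approach, self-training, using scikit-learn’s SelfTrainingClassifier; the data points are invented, and the unlabeled examples are marked with -1, which is the convention that class expects.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# A handful of labeled points (0 or 1) plus several unlabeled points marked as -1
X = np.array([[1.0], [1.2], [8.0], [8.3], [1.1], [7.9], [1.3], [8.1]])
y = np.array([0, 0, 1, 1, -1, -1, -1, -1])

# The model first trains on the labeled points, then confidently labels some of the
# unlabeled ones itself and retrains, mixing supervised and unsupervised data
model = SelfTrainingClassifier(LogisticRegression()).fit(X, y)
print(model.predict([[1.15], [8.2]]))        # expected: [0 1]
```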

Reinforced learning

Using reinforced learning, systems or machines are designed to learn from acquired experiences. In these cases, when humans program the algorithm, they define what the final result should be without indicating the best way to achieve it. Consequently, the machine discovers for itself how to achieve its goal. The machine is in charge of carrying out a series of tests in which it obtains success or failure, learning from its successes and discarding actions that led to failure. In short, it detects patterns of success that it repeats over and over again to become increasingly efficient. In simple words, the machine learns from its mistakes – much like humans do – without pre-programming and largely without human intervention.

Reinforcement learning is how most of us learn. Even our dogs. Nikita, my dog, is a Siberian Husky. Like other dogs, she doesn’t understand any human language, but she picks up intonation and human body language with surprisingly good accuracy.

So, I can’t really tell her what to do, but I can use treats to persuade her to do something. It could be anything as simple as sitting or shaking hands. If I can get Nikita to shake hands, she gets a treat. If she doesn’t shake hands, she doesn’t get her treat.

After a few of these interactions, Nikita realizes that all she needs to do is raise her paw at the “shake” command and she gets a treat.

In the case of reinforcement learning, the goal is to identify an appropriate action model that will maximize the total reward of the agent. The figure below depicts the typical action-reward feedback loop of a generic reinforced learning model.

Figure 1-7: Feedback loop in reinforced learning

Just looking at the figure above might be a bit overwhelming, so let’s take a quick look at the terminology and definitions:

• Agent: The AI system that undergoes the learning process. Also called the learner or decision maker. The algorithm is the agent.

• Action: The set of all possible moves an agent can make.

• Environment: The world through which the agent moves and receives feedback. The environment takes the agent’s current state and action as inputs, and then outputs the reward and the next state.

• State: An immediate situation in which the agent finds itself. It can be a specific moment or position in the environment. It can also be a current as well as a future situation. In simple words, it’s the agent’s state in the environment.

• Reward: For every action made, the agent receives a reward from the environment. A reward could be positive or negative, depending on the action.

This can perhaps be better explained using the example of a computer game. Let’s use Pac-Man. In the grid world of Pac-Man, the goal of the Agent (Pac-Man) is to devour the food on the grid while avoiding the ghosts that might get in its way. The grid world is the Environment in which the Agent acts. The Agent is rewarded for getting the food and is punished if killed by a ghost. The State is the location of the Agent, and the total cumulative Reward is the Agent winning the game.

Autonomous vehicles are also a good example of this type of learning algorithm. Their task is very clear: take passengers safely to their intended destination. As the cars make more and more journeys, they discover better routes by identifying shortcuts, roads with fewer traffic lights, etc. This allows them to optimize their journeys and, therefore, operate more efficiently.
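To make the agent, environment, state, action, and reward loop concrete, here is a minimal tabular Q-learning sketch on an invented one-dimensional “corridor” world; the rewards, hyperparameters, and the world itself are illustrative assumptions rather than material from the book.

```python
import random

# Tiny corridor: states 0..4, with the reward (the "treat") at state 4
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # move left or move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount factor, exploration rate

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}   # the agent's learned values

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise exploit the best known action (ties broken randomly)
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: adjust the value estimate using the reward from the environment
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy should simply be "move right" in every state
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])   # expected: [1, 1, 1, 1]
```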

Ensemble learning

Ensemble learning is a form of ML that merges several base models in order to produce one optimal predictive model. In ML, the AI equivalent of crowd wisdom can be attained through ensemble learning. The result obtained from ensemble learning, which combines a number of ML models, can be more accurate than that of any single AI model. Ensemble learning can work in two ways: using different algorithms (e.g. linear regression, support vector machine, regression decision tree, or neural network) with the same data set, or training the same algorithm on different data sets.
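As a small illustrative sketch of the first approach – different algorithms trained on the same data set – the example below combines three scikit-learn models with a majority vote; the synthetic data set simply stands in for real training data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# A synthetic two-class data set stands in for real training data
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Three different base models trained on the same data, combined by majority vote
ensemble = VotingClassifier(estimators=[
    ("logreg", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
])
ensemble.fit(X, y)
print(ensemble.predict(X[:3]), "vs. true labels", y[:3])
```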

From ML, we progress to the next level – DL.

DL

DL is the next generation of ML, inspired by the functionality of human brain cells, or neurons, which evolved into the concept of an artificial neural network. DL differentiates itself through the way it solves problems. ML requires a domain expert to identify most applied features, i.e. someone to tell the machine that a dog is a dog. DL, on the other hand, learns features incrementally, thus eliminating the need for domain expertise. As a result, DL algorithms take much longer to train than ML algorithms, which only need a few seconds to a few hours. But the reverse is true during processing: DL algorithms take much less time to run than ML algorithms, whose run time increases along with the size of the data.
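To close with a very small, hedged illustration of the artificial neural networks that underpin DL (again, not an example from the book), the sketch below trains a tiny multi-layer network on the classic XOR problem – something no single linear model can learn, but which a network of hidden “neurons” picks up easily.

```python
from sklearn.neural_network import MLPClassifier

# XOR: the output is 1 only when exactly one input is 1 - not linearly separable
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]

# A small network with two hidden layers of artificial "neurons"
net = MLPClassifier(hidden_layer_sizes=(8, 8), activation="tanh",
                    solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict(X))    # expected: [0 1 1 0]
```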