Explore the fascinating field of Artificial Intelligence and its profound impact on the industry in this insightful book. With a focus on the positive potentials of AI, the author navigates through the subtleties of this transformative technology. This book aims to empower readers to engage in informed discussions about AI and contribute to business decisions involving its application. The author's collaboration with ChatGPT, a generative AI tool currently gaining a lot of attention, resulted in a well-structured and comprehensive exploration of AI & The Industry. Being a proof of concept of effective human-AI collaboration itself, the book provides a comprehensive AI overview, both readable and informative. Stripped of technical jargon, algorithms, and intricate mathematical formulae, the narrative unfolds in pure English language, making it accessible to a broad audience. This book is an ideal starting point for readers who want to comprehend the versatility of Artificial Intelligence itself as well as its applications in industry. Join the exploration, gain insights, and make AI a joyful part of your reading adventure.
Page count: 197
Year of publication: 2023
Thorsten Bohnenberger
AI & The Industry
An Overview
Imprint
© 2023 by Thorsten Bohnenberger
Published by Neopubli GmbH,
Köpenicker Straße 154a, 10997 Berlin, Germany
www.epubli.com
All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law.
Title: AI & The Industry
Author: Thorsten Bohnenberger
First Edition: December 2023
ISBN: 978-3-758441-82-0
Cover Design: Thorsten Bohnenberger
Book Design and Layout: Thorsten Bohnenberger
Printed by epubli – a service of Neopubli GmbH, Berlin
For inquiries, please contact:
Thorsten Bohnenberger
c/o AutorenServices.de
Birkenallee 24
36037 Fulda
“I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
Alan Turing
Content
Introduction
AI, an Academic Discipline
Weak and Strong AI
Turing Test & Chinese Room
Symbolic and Subsymbolic AI
AI-Technologies
Knowledge Representation
Game-Playing
Semantic Networks & Ontologies
Formal Logic & Automated Reasoning
Uncertain Knowledge & Reasoning
Knowledge-Based Systems
Expert Systems
Automated Planning
Natural Language Processing
Computer Vision
Neural Networks
Machine Learning
Deep Learning
Generative AI
ChatGPT & Bing Chat Enterprise
Robotics
Autonomous Driving
Distributed AI & Multiagent Systems
AI in Commerce
Recommender Systems
AI in Manufacturing Industry
AI in Research & Development
Industrial Robots
Industry 4.0
Big Data in Industry
Internet of Things
Digital Twin
Blockchain in Manufacturing Industry
Cloud Computing
SAP Cloud-Services
Palantir
Palantir Gotham
Palantir Foundry
Further AI Potentials in Industry
Digital Transformation
Singularity
Symbolic and Subsymbolic AI Revisited
AI Threats & Opportunities
Epilog: AI & The Industry
ChatGPT’s Acknowledgements
Thorsten’s Acknowledgements
AI Programming Languages
AI Text Books
AI Movies
Introduction
Artificial Intelligence (AI) is one of the great topics of our time. Some consider it a fantastic opportunity; others are rather scared by it and consider it a huge threat. In this book, I will focus on the upsides of AI and its potential for the industry, though without ignoring its threats, be they justified or not.
Fear is often a result of lack of knowledge. The more you know about a technology like AI, the less frightening it will appear to you in most regards—not in all. The mere fact is: AI is unavoidable. It is here to stay. Like the knife that can peel the apple or stab the foe. We will have to deal with it.
Many people might want to discuss AI but notice that they do not know enough to talk about it properly. This applies to private life just as much as to business. The purpose of this book is to cover the broad field of Artificial Intelligence at a level that allows you to join such discussions in an informed manner. In business, the book shall help you contribute to decisions involving the application of AI. This is why the focus lies on AI & The Industry.
In October 2023, I started to familiarize myself with ChatGPT{1}, one of the most prominent AI tools these days. It does not take much: you register and start posing questions to test ChatGPT’s capabilities. It is always tempting to ask questions that you do not expect ChatGPT to answer in a reasonable way. Yet you might be impressed by how well ChatGPT can hide the weaknesses that it certainly has. Conversation with ChatGPT is rarely boring and can sometimes leave you with a smile on your lips, even when the system was not able to answer your question satisfactorily.
On the last weekend of October, I was wondering what I might reasonably use ChatGPT for myself. I had noticed before that an outstanding strength of ChatGPT is summarizing topics in a comprehensive and nicely structured way, in smooth language. What could I achieve if I made extensive use of these strengths?
This book is the result of my collaboration with ChatGPT over a rather short period of time. It took only one day, on the first weekend of November, to script a complete draft of this book. All I needed to do for this initial step was to ask ChatGPT the right questions.
This does not mean that the rest was a no-brainer. I still had to rephrase many statements, eliminate redundancies, restate my questions to ChatGPT to get deeper insights, put everything into a reasonable order, and develop the central theme and narrative of the book. Finally, I used Scribbr{2} to make sure that I did not produce plagiarism. The plagiarism check revealed that ChatGPT did not generate any one-to-one “copy & paste” text modules at all, but created “fresh text” instead. Similarities to existing texts were limited to general and commonly used statements. But be cautious: this might not be true in other contexts.
I needed about four weekends in total to complete this text about AI & The Industry. At the beginning of December, the result was an AI overview that I considered readable and yet informative enough to be worth publishing. And this was the objective of my exercise: I wanted to present a book that you can read within a few days, during vacation or at night in bed, as a good starting point for your journey to AI & The Industry. The book is free of algorithms written in a programming language, tables, diagrams, mathematical formulae, or other figures with detailed explanations. Pure English language.
All this is not to say that ChatGPT is awesome, nor that I am awesome myself. Rather, it is supposed to showcase how effective the collaboration between a human being and an AI system can be.
Many people could have written this book together with ChatGPT. But in order to ask ChatGPT the right questions, you need a broad background in the topic of the book to be written, in this case both in AI and in industry. Those who have this background might as well ask their questions to ChatGPT directly, and thereby explore the wonderful world of AI and its broad variety of applications in a more interactive way. Readers who already have a background in industry but not yet in AI should benefit most from this book.
I trust that the combination of my own experiences and ChatGPT’s expertise will provide you with valuable new insights and wish you some joyful hours of reading.
AI, an Academic Discipline
Artificial Intelligence is an interdisciplinary field that draws knowledge and techniques from various academic disciplines. The foundation of AI is computer science, which provides the fundamental principles for designing and implementing AI algorithms{3} and systems{4}. Computer science, in turn, is a subfield of mathematics. Therefore, if you decide to study AI, you should expect to do a lot of computer programming that in one way or another performs complex calculations. Mathematics, in particular statistics, plays a critical role in AI, especially in areas like machine learning—maybe the most prominent AI area at present. But many other concepts from linear algebra, calculus, probability theory, and discrete mathematics are essential for understanding AI algorithms and models.
Many other academic disciplines had and still have a strong influence on AI. This is not surprising if you consider the following definition of artificial intelligence, which is very simple yet to the point: AI is the science that tries to enable machines to do things that humans can, so far, do better. So, what are the academic disciplines that explore the outstanding capabilities of human beings?
Cognition is the capability that many people consider as humans’ most distinct characteristic. Cognitive science is the academic discipline that explores the principles of human cognition. The insights about human cognition gained by cognitive science can inform the development of AI systems that mimic or enhance human-like intelligence.
Neuroscience research can provide insights into how the human brain processes information and learns. Some AI approaches, like neural networks, are inspired by the structure and function of biological neural networks.
The same holds for cognitive psychology. Psychological research on human perception, memory, decision-making, and problem-solving can inform the design of AI systems that interact with humans and understand human behavior.
Linguistics is particularly important for natural language processing and understanding in AI systems, helping machines to “understand”, process, and generate human language effectively.
Engineering disciplines, such as electrical engineering and mechanical engineering, are relevant for AI applications in robotics, hardware design for AI accelerators, and control systems.
Data science is of tremendous importance for the more recent successes in AI. It involves the extraction of insights from large datasets, which is a crucial aspect of AI, particularly in machine learning and data analysis.
In some areas of AI—and you will see that AI is indeed a very broad discipline—economics plays an important role. Economic principles are applied in AI for areas like game theory, auction mechanisms, and the study of AI's impact on the labor market and economy.
In other areas, control theory and physics are crucial for the development of AI systems. Control theory is relevant in robotics and autonomous systems, enabling AI-driven control of physical devices. Physics concepts are essential in areas like computer vision, where understanding the physical properties of objects and light is critical.
Often underestimated is the role of Human-Computer Interaction (HCI), which focuses on the design and usability of AI systems, ensuring that they are user-friendly and align with human needs and preferences. A well-designed human-machine interface is crucial for the acceptance of any computer system and AI systems in particular. A good human-machine interface might not even be noticed. A bad one will be noticed in any case.
Sociological research turns out to be important for approaches of distributed AI, where several AI entities interact with each other. But it can also provide insights into how AI technologies affect society, culture, and human behavior.
Business and management are two disciplines heavily impacted by AI, because it has significant implications for business strategy and innovation. Knowledge of business principles is important for AI adoption in the corporate world.
Philosophical discussions are essential in addressing ethical and moral considerations related to AI, as well as questions about consciousness, self-awareness, and the nature of intelligence.
Finally, ethics and law address the ethical and legal implications of AI, including issues related to privacy, liability, and fairness.
This overview gives a notion of how interdisciplinary the field of AI is. The applications of AI span a wide range of domains. Researchers and practitioners often collaborate across these disciplines to advance AI technology and its practical applications. We will get to know many such applications in the course of this book. But first, let us clarify some fundamentals of AI.
Weak and Strong AI
Weak AI and Strong AI are two important concepts that describe the level of artificial intelligence in a system. They are also known as Narrow AI and General AI, respectively.
Weak AI, also known as Narrow AI, refers to artificial intelligence systems that are designed and trained for a specific task or a narrow range of tasks. These systems are not capable of generalizing their knowledge or skills to tasks outside their specific domain. Weak AI systems are highly specialized and do not possess “true” intelligence or even consciousness. They simulate human intelligence in a very limited, predefined way. Examples of weak AI include virtual personal assistants like Siri or chatbots used for customer support, which can perform specific tasks but lack general problem-solving abilities.
Strong AI, also known as General AI or AGI (Artificial General Intelligence), refers to artificial intelligence systems that have the ability to understand, learn, and apply knowledge across a wide range of tasks and domains. Strong AI possesses human-like general intelligence and cognitive abilities, including reasoning, problem-solving, creativity, and self-awareness. These systems are not limited to predefined tasks and can adapt to new, unfamiliar situations.
Strong AI remains a theoretical concept. So far, no system or technology has achieved true strong AI. Researchers continue to work toward this goal, but it presents significant challenges and ethical considerations.
In summary, the key difference between weak AI and strong AI lies in the scope of their capabilities. Weak AI is specialized and task-specific, while strong AI, if realized, would have the capacity for general intelligence and the ability to perform a wide range of cognitive tasks as a human can.
The relevance of strong AI (General AI) and weak AI (Narrow AI) in industry can be understood by examining the specific applications and implications of each.
Weak AI systems excel at automating specific, well-defined tasks in various industries. This can lead to increased efficiency, cost reduction, and improved accuracy.
Manufacturing and logistics industries use weak AI for process optimization and predictive maintenance. Machine learning algorithms can help optimize supply chains and reduce equipment downtime, improving operational efficiency.
Weak AI can be used to create personalized experiences for users, customers, and clients. Recommender systems in e-commerce and content platforms, as well as targeted advertising, are good examples of how personalization can enhance user engagement and drive business growth.
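As a concrete illustration, one common approach is to recommend items that users with similar tastes have liked. The following minimal Python sketch, built on a toy rating table with invented users and items, captures the idea; it is an illustrative sketch, not the method of any particular platform.

import math

# Toy user-item ratings (purely hypothetical data for illustration).
ratings = {
    "alice": {"laptop": 5, "mouse": 4, "monitor": 3},
    "bob":   {"laptop": 4, "mouse": 5},
    "carol": {"monitor": 5, "desk": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(user, k=3):
    """Suggest items that similar users liked but the target user has not rated yet."""
    scores = {}
    for other, other_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], other_ratings)
        for item, rating in other_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['desk'], driven by the overlap with carol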
Industries such as healthcare and finance benefit from weak AI systems that provide decision support by analyzing data and offering insights to professionals. Medical diagnosis support systems and algorithmic trading platforms are examples.
Moreover, weak AI-powered virtual assistants and chatbots improve customer service by providing quick responses and assistance, reducing the need for human intervention and enabling 24/7 support.
Weak AI is crucial for processing and analyzing vast amounts of data, making it valuable for industries such as marketing, finance, and cybersecurity.
In all the described areas, AI has achieved great successes in the past years. In light of these successes, AI can already be considered well established in many commercial and industrial contexts. Weak AI is the standard when it comes to AI in industry. What could strong AI add?
Most importantly, strong AI would mean true autonomy. In that sense, strong AI could potentially revolutionize industries by enabling systems to perform a wide range of tasks with human-like cognitive abilities. This could lead to fully autonomous robots, self-learning systems that can learn in general rather than in narrow areas of application, and truly creative problem solvers, impacting fields like robotics, research, and creative arts.
Strong AI could assist in complex decision-making scenarios where general human expertise is required. These could be general medical diagnoses rather than diagnoses in narrow subfields of medicine, general scientific research with a system’s ability to decide by itself in which directions to explore, and general strategy development that considers all kinds of relevant context information, just as an intelligent human would. Moreover, strong AI would be able to adapt to new tasks and domains, making it highly valuable in dynamic industries that require rapid adjustments and learning.
Strong AI could tremendously accelerate scientific discovery and innovation, leading to unforeseeable breakthroughs in various fields. At the same time, achieving strong AI would raise ethical and safety concerns, including questions about the rights and responsibilities of such systems and the potential for misuse.
While strong AI remains a theoretical goal and faces significant technical and ethical challenges, weak AI is already transforming industries by enhancing automation, personalization, decision support, and data analysis. Both forms of AI have their roles and implications in industry, with weak AI being by far more prevalent and practical in the current technological landscape. Strong AI, if ever realized, would introduce a new era of possibilities and challenges for a wide range of industries.
In the early years of AI, there was also a big debate about what it means for a machine to be intelligent and how such intelligence might be measured or tested. Let us summarize some key aspects of this debate in the upcoming chapter, before starting our review of the main areas and technologies of Artificial Intelligence.
Turing Test & Chinese Room
The first natural language processing systems that I encountered as a student were simple chatbots like ELIZA and Parry. Both ELIZA and Parry were early attempts to create conversational agents, and they highlighted the challenges and limitations of early AI systems.
ELIZA operated by recognizing certain keywords in user input and generating responses based on pre-programmed patterns. It mimicked a psychotherapist, engaging users in conversation about their feelings and thoughts.
Parry engaged in text-based conversations, simulating the behavior of someone with paranoid delusions. It responded to user inputs with answers that reflected a paranoid thought process. Parry's design aimed to showcase the limitations and challenges of using AI in understanding complex human emotions and behaviors.
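To give a flavor of the mechanism that both programs relied on, the following minimal Python sketch matches keywords with regular expressions and fills canned response templates; the patterns and responses are invented for illustration and are not taken from the original programs.

import re
import random

# Illustrative keyword patterns and response templates (not ELIZA's original script).
PATTERNS = [
    (r"\bI feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bmy (mother|father)\b", ["Tell me more about your {0}.", "How do you get along with your {0}?"]),
    (r"\byes\b", ["I see.", "Can you elaborate on that?"]),
]
DEFAULT = ["Please go on.", "What does that suggest to you?"]

def respond(user_input):
    """Return a canned response based on the first matching keyword pattern."""
    for pattern, responses in PATTERNS:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULT)

print(respond("I feel tired of work"))  # e.g. "Why do you feel tired of work?"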
ELIZA and Parry were developed in the mid-1960s and early 1970s, respectively. A few decades later, commercial chatbots have become almost ubiquitous. Considering the rather short period of time that has elapsed since then, we can recognize how fast the progress in the field of AI really is.
However, even ten years ago, I did not foresee the successes achieved with systems like ChatGPT today, despite my close connection with AI for almost four decades. This is what makes me humble when it comes to predictions about what AI is yet to achieve in the upcoming years. I remain ambivalent about what we will experience. On the one hand, I am deeply convinced that the speed of development in the area of AI will increase rather than decrease; on the other hand, I still anticipate that true general AI will remain out of reach for some period of time, a period that I do not yet dare to quantify. I anticipate that we will see incredible abilities in AI systems, while the same systems will still fail at some tasks that the most parochial mind can solve with ease. The versatility of the human mind might still exceed the abilities of AI systems for a while.
The Turing Test was proposed by the British mathematician and computer scientist Alan Turing{5} in 1950. It is a test of a machine's ability to exhibit intelligent behavior indistinguishable from that of a human. The Turing Test is a benchmark for evaluating a machine's capability to demonstrate human-like or human-level intelligence, particularly in natural language understanding and conversation. It works as follows.
The test setup involves three participants: a human "interrogator" (often referred to as the "judge"), a human respondent (the "hidden" human), and a machine respondent (the "hidden" machine). The interrogator engages in text-based conversations with both the human and the machine, but does not know which one is which. The conversations typically occur through a computer terminal to avoid physical cues.
The goal of the test is for the machine to exhibit behavior so convincing that the interrogator cannot reliably distinguish between the machine and the human based on their responses. If the machine can consistently "fool" the interrogator, it is said to have passed the Turing Test.
According to Turing's original formulation, if the machine is mistaken for the human in at least 30% of the interactions during a predefined time frame, it can be considered to have passed the Turing Test.
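To make the blind setup concrete, the following minimal Python sketch simulates repeated trials; the judge and the two hidden respondents are purely hypothetical stand-ins, not real systems.

import random

def run_blind_trial(judge, human_reply, machine_reply, questions):
    """One blind trial: the judge questions hidden respondents 'A' and 'B'."""
    roles = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(roles)  # the judge does not know which label hides the machine
    labels = dict(zip("AB", roles))
    transcripts = {lab: [(q, fn(q)) for q in questions] for lab, (_, fn) in labels.items()}
    guess = judge(transcripts)  # the judge names the label believed to be the machine
    actual = next(lab for lab, (role, _) in labels.items() if role == "machine")
    return guess != actual  # True means the machine fooled the judge this time

# Hypothetical stand-ins so the sketch runs end to end.
human_reply = lambda q: "Well, " + q.lower().replace("?", ".")
machine_reply = lambda q: "That is an interesting question."
judge = lambda transcripts: random.choice("AB")  # a judge who cannot tell the two apart

trials = [run_blind_trial(judge, human_reply, machine_reply, ["How was your day?"]) for _ in range(1000)]
print(sum(trials) / len(trials))  # about 0.5 for this guessing judge; Turing's informal threshold was roughly 0.3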
However, it is important to note that the Turing Test is a behavioral test and does not require the machine to have actual understanding, consciousness, or self-awareness. The machine only needs to mimic human-like conversation effectively.
The Turing Test has been influential in the field of Artificial Intelligence and is often used as a benchmark for assessing the progress of AI systems, particularly in the area of natural language processing and chatbots. However, it has also been the subject of criticism and debate. Some argue that passing the Turing Test does not necessarily indicate true intelligence or understanding but rather the ability to simulate it convincingly. The test also leaves room for subjectivity, as the judge's judgment may vary.
An important annual competition in the field of Artificial Intelligence and natural language processing (NLP) is the Loebner Prize, also known as the Loebner Prize Turing Test. It was established by Hugh Loebner in 1990 with the goal of furthering the development of AI and evaluating the progress of conversational AI systems in terms of human-like conversation.
The central focus of the Loebner Prize is a variation of the Turing Test that typically involves chatbots or conversational AI programs that compete against each other in natural language conversation. A panel of judges engages in conversations with the AI systems, and the AI that is considered to be the most convincing or human-like is awarded the Loebner Prize.
The Loebner Prize has played a role in advancing the development of chatbots and conversational AI over the years. While it has not definitively determined whether any AI has truly passed a Turing Test in the sense originally proposed by Alan Turing, it has provided a platform for evaluating the state of AI technology and promoting research in the field.
Just as the Turing Test itself, the Loebner Prize and its variations have been the subject of debate and criticism within the AI community, as some argue that they do not truly capture the complexity of human intelligence and conversation. Nevertheless, the competition continued to be a noteworthy event in the AI calendar until 2019 and contributed to ongoing discussions about AI capabilities and limitations in natural language understanding and generation.
The Chinese Room is an important philosophical thought experiment and argument proposed by philosopher John Searle in 1980 to challenge the idea that a computer, even one that passes the Turing Test, can truly understand, despite whatever intelligent-appearing behavior it might be able to display. It is a critique of the "strong AI" hypothesis, which suggests that a computer program with the right algorithms could exhibit genuine understanding and even consciousness. The thought experiment is as follows.
Imagine a person who does not understand Chinese but is placed inside a room. This person has a set of rules, a book, or a program that allows them to manipulate Chinese symbols according to a set of predefined instructions. These instructions are in English, a language the person understands. People outside the room pass messages in Chinese through a slot in the door. The person inside the room follows the rules to manipulate the Chinese symbols, create responses, and send them back out through the slot.
From the perspective of those outside the room, it may appear that the person inside understands Chinese because they can provide coherent responses to Chinese input. However, in reality, the person inside the room has no comprehension of Chinese; they are merely following syntactic rules without understanding the meaning of the symbols.
Searle's argument, known as the "Chinese Room Argument," is that a computer program, like the person in the room, can manipulate symbols and generate responses based on predefined rules, but it does not have genuine understanding or even consciousness. In other words, syntax (the manipulation of symbols) is not equivalent to semantics (true understanding of meaning). Searle's argument challenges the notion that a computer running the right software can have true understanding or consciousness, as it only processes symbols according to rules without grasping the meaning behind them.
The Chinese Room has been a significant point of debate in the philosophy of AI and consciousness, and it raises questions about the nature of artificial intelligence, the limits of computational processes, and the possibility of machines achieving genuine understanding. Critics argue that it does not necessarily prove that AI cannot achieve understanding but highlights the importance of distinguishing between syntax and semantics in AI and cognition.
Symbolic and Subsymbolic AI
There are two important schools in AI: Symbolic AI and Subsymbolic AI. They represent two different approaches to artificial intelligence, each with distinct characteristics and methodologies. The following paragraphs describe the key differences between them.
