AI in the Social and Business World: A Comprehensive Approach offers an in-depth exploration of the transformative impact of Artificial Intelligence (AI) across a wide range of sectors. This edited collection features 13 chapters, each penned by field experts, providing a comprehensive understanding of AI's theoretical foundations, practical applications, and societal implications.
Beginning with an overview of AI's historical evolution, the book navigates through its diverse applications in healthcare, social welfare, business intelligence, and more. Chapters systematically explore AI's role in enhancing healthcare delivery, optimizing business operations, and fostering social inclusion through innovative technologies like AI-based sign recognition and IoT in agriculture.
With strategic insights, case studies, and discussions of ethical considerations and future trends, this book is an essential resource for researchers, practitioners, and anyone interested in understanding AI's multifaceted influence across the social and business landscapes. It is designed to foster informed discussion and strategic decision-making in navigating the evolving landscape of AI in today's dynamic world.
Readership: Undergraduate/Graduate Students, Professionals.
You can read the e-book in Legimi apps or in any app that supports the following format:
Number of pages: 422
Year of publication: 2024
This is an agreement between you and Bentham Science Publishers Ltd. Please read this License Agreement carefully before using the book/echapter/ejournal (“Work”). Your use of the Work constitutes your agreement to the terms and conditions set forth in this License Agreement. If you do not agree to these terms and conditions then you should not use the Work.
Bentham Science Publishers agrees to grant you a non-exclusive, non-transferable limited license to use the Work subject to and in accordance with the following terms and conditions. This License Agreement is for non-library, personal use only. For a library / institutional / multi user license in respect of the Work, please contact: [email protected].
Bentham Science Publishers does not guarantee that the information in the Work is error-free, or warrant that it will meet your requirements or that access to the Work will be uninterrupted or error-free. The Work is provided "as is" without warranty of any kind, either express or implied or statutory, including, without limitation, implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the results and performance of the Work is assumed by you. No responsibility is assumed by Bentham Science Publishers, its staff, editors and/or authors for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, advertisements or ideas contained in the Work.
In no event will Bentham Science Publishers, its staff, editors and/or authors, be liable for any damages, including, without limitation, special, incidental and/or consequential damages and/or damages for lost data and/or profits arising out of (whether directly or indirectly) the use or inability to use the Work. The entire liability of Bentham Science Publishers shall be limited to the amount actually paid by you for the Work.
Bentham Science Publishers Pte. Ltd. 80 Robinson Road #02-00 Singapore 068898 Singapore Email: [email protected]
In this era of rapid technological growth, the incorporation of Artificial Intelligence (AI) has emerged as a revolutionary force, dramatically influencing both the social fabric and the economic environment. "AI in the Social and Business World: A Comprehensive Approach" is an edited book that draws on the knowledge of a wide range of authors to give a nuanced and comprehensive examination of AI's multidimensional role.
This collaborative effort unfolds across 13 chapters, each authored by experts in their respective domains. Together, these chapters form a comprehensive tapestry that not only elucidates the theoretical foundations of AI but also delves into its practical applications across various sectors.
As editors, our intention is to present a holistic view of AI, addressing its societal implications and strategic relevance for businesses. The journey begins with an introduction to the historical evolution of AI, setting the stage for a deeper exploration into its impact on our social structures and cultural dynamics.
The subsequent chapters navigate the intricate terrain of AI in business, offering strategic insights, case studies, and a critical analysis of its integration. From enhancing customer experiences to reshaping human resources and marketing strategies, the chapters weave together a narrative that reflects the diverse and dynamic nature of AI applications.
We extend our gratitude to the contributing authors whose expertise and insights have enriched this collection. Their collective knowledge forms the backbone of this book, providing readers with a valuable resource for understanding the complexities and possibilities that AI brings to our social and business environments.
We urge you to explore the many viewpoints offered on these pages, whether you are a researcher, practitioner, or enthusiast interested in understanding the significant implications of AI. May this book serve as a guiding light in traversing the vast expanse of artificial intelligence, promoting intelligent debate and educated decision-making in the ever-changing world of technology.
Artificial intelligence is a field of computer science that focuses on building human-like intelligence into machines. AI is advancing in many areas, increasing the efficiency, accuracy, and speed of decision-making. The chapters of this book provide a detailed overview of the AI journey and give readers insights to deepen their knowledge of AI, covering the evolution of artificial intelligence and the techniques used to create it. As AI continues to evolve and integrate into our daily lives, the chapters also discuss its ethical and social implications and its rapid, often unpredictable growth and impact on society. This chapter closes with thoughts on the future of artificial intelligence, which has the potential to transform business, drive innovation, solve complex problems, and address social and governance issues. Overall, this chapter equips the reader with a complete picture of artificial intelligence.
Artificial intelligence (AI) refers to the branch of computer science that centers on building intelligent machines that can perform tasks that typically require human intelligence. AI develops and improves all areas of society, introducing new solutions, increasing productivity, and improving the overall quality of life. The current relevance of AI lies in its ability to solve complex problems, derive insights from large volumes of data, and augment human capabilities in many areas (Biersmith et al., 2022). The rapid deployment of AI applications has led to increased scrutiny and monitoring in various sectors, including infrastructure, consumer products, and home applications. Policymakers often lack the technical knowledge to assess the safety and effectiveness of emerging AI technologies. This work provides an overview of AI legislation, directives, professional standards, and technology-society initiatives, serving as a resource for policymakers and stakeholders. Moreover, these chapters look into the future: a future where artificial intelligence becomes an agent of change. They demonstrate the potential of AI to drive business transformation, take innovation to new heights, solve complex problems, and bring fairness to competition and regulation. Every sentence and every chapter are tied together to show the complexity of intelligent systems and create a sound understanding in the minds of the readers.
The AI revolution has infused machines with intellectual abilities that reflect the complexity of the human mind. As algorithms evolve from lines of code into virtual minds capable of understanding, learning, and reasoning, the possibilities are expanding in surprising ways. This revolution has proved effective in various fields around the world. Fig. (1) shows the evolution of AI in various fields.
Fig. (1)) The revolution of AI in various fields.
The diagram shows the revolution of AI in various fields, such as medicine, education, and research.
The development of artificial intelligence (AI) has been an exciting journey with major advances and breakthroughs (O'Leary et al., 1995). Understanding the history of AI applications, from its key conferences onward, is important for several reasons: it provides insight into the field's scientific and non-academic pioneers. This chapter is an introduction to the history of artificial intelligence applications since the 1940s. Here is a summary of the important stages in the development of artificial intelligence; Fig. (2) shows the roadmap of AI evolution.
Fig. (2)) Roadmap of AI evolution.
The McCulloch-Pitts neuron is one of the earliest mathematical models of a neuron. Introduced by Warren McCulloch and Walter Pitts in 1943, it is a binary threshold unit that mimics the basic behavior of biological neurons: it takes binary inputs and produces a binary output according to a predefined threshold; a minimal sketch follows.
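To make the unit concrete, here is a minimal, hedged Python sketch of a McCulloch-Pitts neuron; the threshold value and inputs are illustrative choices, not taken from the 1943 paper:

```python
# McCulloch-Pitts neuron: binary inputs, a hard threshold, binary output.
def mcp_neuron(inputs, threshold):
    """Fire (return 1) if the sum of binary inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With threshold 2, the two-input unit behaves like an AND gate.
print(mcp_neuron([1, 1], threshold=2))  # 1
print(mcp_neuron([1, 0], threshold=2))  # 0
```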
The Turing test, proposed by the English mathematician and computer scientist Alan Turing in 1950, is designed to determine whether a machine exhibits intelligent behavior indistinguishable from that of a human. The central idea of the Turing test is that a human judge, communicating through text, must decide whether they are conversing with a machine or with another human.
The 1956 Dartmouth conference, organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, is widely regarded as the birth of artificial intelligence. The conference brought together scientists who shared the goal of creating intelligent machines.
ELIZA is an early natural language processing program developed by Joseph Weizenbaum at the MIT artificial intelligence lab in the 1960s. It is considered one of the earliest examples of a chatbot.
WABOT, short for Waseda robot, is one of the first humanoid robots designed to exhibit intelligent behavior, developed at Waseda University. WABOT was designed to simulate human-like movement and interaction with its environment.
The term “AI winter” refers to periods in the 1970s and 1980s when interest in and funding for artificial intelligence (AI) declined. Ambition and support for AI research and development waned during these periods.
Deep Blue is a chess-playing computer developed by IBM. It gained international recognition in 1997 by defeating world chess champion Garry Kasparov in a six-game match. Deep Blue is a milestone in the fields of artificial intelligence and computer chess.
Roomba is a popular brand of robotic vacuum cleaners manufactured by iRobot. Released in 2002, the Roomba changed the way people clean their homes by introducing a robot vacuum cleaner that can navigate and clean floors without human intervention.
Siri is a virtual assistant developed by Apple Inc. It was first released on the iPhone 4S on 4 October 2011 and has since been integrated into many Apple devices, including the iPhone, iPad, Mac computers, Apple Watch, and HomePod smart speaker.
Watson is an artificial intelligence platform developed by IBM. It gained widespread recognition when it competed against human contestants on the quiz show Jeopardy! and won in 2011. Watson is designed to process and analyze big data to generate insights and provide intelligent solutions.
Alexa is a virtual assistant created by Amazon. It is often associated with the Amazon Echo smart speaker and other Amazon devices. Alexa uses advanced natural language processing and speech recognition to interact with users and perform various tasks.
Tay is an AI chatbot developed by Microsoft. It was launched on Twitter in March 2016 as an experiment in conversational AI. Tay was designed to engage users, learn from its interactions, and improve its responses over time.
AlphaGo is an AI program developed by Google's DeepMind team. AlphaGo gained international attention in March 2016 when it defeated world Go champion Lee Sedol in a five-game match.
ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI. It is designed for interactive, dynamic communication, and uses deep learning techniques to generate human-like responses from context and conversation history.
The above diagram was created with reference to the Evolution of Artificial Intelligence presentation from the Slideshare site, which displays how AI has evolved from around the 1940s until 2022.
Artificial intelligence draws on the architecture of the human brain and on findings from brain research; the outcome of these ideas is the development of intelligent software and systems. One of the biggest benefits of AI is that it can reduce errors and increase accuracy and precision. Artificial intelligence has a huge impact on every aspect of society. Here are some key areas where AI is effective:
AI-powered systems automate repetitive or mundane tasks, allowing workers to focus on complex and creative tasks. This increases efficiency and productivity in business areas such as manufacturing, shipping, and customer service.
AI has the potential to revolutionize healthcare by improving diagnostics, drug discovery, personalized medicine, and patient care. AI-powered robots and virtual assistants are also used in healthcare facilities to support patient care and assist doctors.
NLP enables machines to understand and generate human language. NLP is used in applications such as sentiment analysis, information extraction, and translation.
Artificial intelligence plays an important role in the creation of autonomous vehicles, enabling them to perceive their surroundings, make decisions, and navigate safely.
AI-driven algorithms are widely used in the financial industry for tasks such as fraud detection, risk analysis, and algorithmic trading.
AI has played an important role in strengthening cybersecurity defenses. It can detect and respond to cyber threats in real time by analyzing large amounts of data, identifying vulnerabilities, and predicting potential attacks.
AI enables organizations to extract insights from big data. Machine learning algorithms can analyze huge amounts of information, recognize patterns, and make forecasts, helping businesses make data-driven decisions and improve operations.
The rapid development of artificial intelligence creates ethical dilemmas and social consequences. Algorithmic bias, privacy concerns, automation-driven unemployment, and the impact of AI on inequality must be addressed to ensure accountability and equitable use of AI technology.
Machine learning is a branch of artificial intelligence that enables machines to learn and improve through experience (Dubey et al., 2023; Jawad et al., 2021). Machine learning is the study of statistical models and algorithms that computers use to complete tasks without external guidance or explicit programming. It is widely used in applications such as search engines, where systems improve their performance based on prior knowledge or sample data. ML algorithms are used for data collection, pre-processing, visualization, prediction, and decision-making. The main advantage of machine learning is that it learns how to handle data independently. Fig. (3) shows the process involved in machine learning.
Fig. (3)) Process involved in ML.
The diagram shows the process involved in machine learning; it was created with reference to the topic “What is machine learning?” from the site Scribbr.
There are many types of machine learning algorithms, each suitable for different types of problems and data:
In this type of learning, the algorithm is trained on labeled examples, that is, inputs paired with their desired outputs. The algorithm learns the mapping from input to output by generalizing from the labeled data, and this labeled data is what we use to train the machine. A pictorial representation of supervised learning can be seen in Fig. (4). There are several common types of supervised learning algorithms, listed below; a minimal code sketch follows the list:
Fig. (4)) Pictorial representation of supervised learning.
Linear regression: the model predicts a continuous output from input features by fitting a linear equation to the data.
Logistic regression: for binary classification, the model estimates the probability of a binary outcome.
Decision trees: build feature-based, tree-like decision models to make predictions.
Random forest: an ensemble model that combines multiple decision trees to increase accuracy.
Support vector machines (SVMs): construct a separating hyperplane that divides the different classes in the data.
The above picture represents the workings of a supervised learning model; the diagram was created by referring to the topic of supervised learning in the referenced source.
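To make the idea concrete, here is a minimal, hedged sketch of supervised learning with scikit-learn. The dataset is synthetic, and the two models shown, logistic regression and a random forest, are illustrative choices rather than the chapter's prescribed method:

```python
# Minimal supervised-learning sketch: train on labeled data, then
# predict labels for unseen data. The dataset here is synthetic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    model.fit(X_train, y_train)       # learn the input-to-output mapping
    preds = model.predict(X_test)     # predict labels for unseen inputs
    print(type(model).__name__, accuracy_score(y_test, preds))
```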
Here, the algorithm learns from unlabeled data. The algorithm is designed to find patterns or relationships in the data without explicit guidance. This may include grouping similar items, reducing the dimensionality of the data, or finding associations between different items; we train the system using unlabeled, undirected information. A pictorial representation of unsupervised learning can be seen in Fig. (5). There are several types of unsupervised learning, listed below; a minimal code sketch follows the list:
Fig. (5)) Pictorial representation of unsupervised learning.
Clustering: groups similar data points according to their characteristics.
Hierarchical clustering: creates a hierarchy of clusters by recursively merging or splitting groups.
Principal component analysis (PCA): reduces the dimensionality of data by finding its principal components.
Association rule mining: finds interesting relationships between variables in large datasets.
The above picture represents the workings of an unsupervised learning model. The diagram was created by referring to the topic of unsupervised learning in ML from the source, Techvidvan.
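As a concrete illustration, here is a minimal, hedged sketch of two of the techniques above, k-means clustering and PCA, using scikit-learn; the blob dataset and parameter values are illustrative assumptions:

```python
# Minimal unsupervised-learning sketch: cluster unlabeled points and
# reduce their dimensionality. No labels are used anywhere.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, _ = make_blobs(n_samples=300, centers=3, n_features=5, random_state=42)

labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X)
X_2d = PCA(n_components=2).fit_transform(X)  # 5 features -> 2 components

print("cluster assignments:", labels[:10])
print("reduced shape:", X_2d.shape)
```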
This type of learning involves an agent learning through interaction with its environment. The agent receives feedback in the form of rewards or penalties based on its actions. By exploring and learning from the outcomes, the agent learns to take the most rewarding actions over time. Fig. (6) shows a pictorial representation of reinforcement learning, and a minimal sketch of the agent-environment loop appears after the figure notes below.
Fig. (6)) Pictorial representation of reinforcement learning.
There are two common types of reinforcement learning algorithms:
Q-learning: a model-free algorithm in which the agent learns by trial and error which actions generate reward.
Deep Q-learning: combines Q-learning with deep neural networks.
The above diagram represents the workings of the reinforcement learning model. The diagram was created by referring to the topic, Reinforcement Learning Principles, from the source, PST.
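To make the agent-environment loop concrete, here is a minimal, hedged sketch in Python. The two-state toy environment and its reward rule are invented for illustration; a real agent would choose actions from learned values rather than at random:

```python
# Minimal agent-environment loop: act, receive a reward, repeat.
import random

def step(state, action):
    """Toy dynamics: action 1 taken in state 1 earns a reward of 1."""
    reward = 1 if (state == 1 and action == 1) else 0
    next_state = random.choice([0, 1])
    return next_state, reward

state, total_reward = 0, 0
for _ in range(100):
    action = random.choice([0, 1])       # placeholder policy (random)
    state, reward = step(state, action)  # environment feedback
    total_reward += reward

print("reward collected by the random policy:", total_reward)
```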
Deep learning algorithms are a family of machine learning methods designed to learn and extract meaningful representations from large amounts of data [5, 6]. Work in this area spans techniques including particle swarm algorithms, image-matching algorithms, and removal strategies; these methods support pattern recognition and semantic understanding, help ensure image integrity, and process large amounts of stored information. Deep learning models are inspired by the structure and function of the human brain, especially the way neurons are connected. A pictorial representation of deep learning can be seen in Fig. (7).
Fig. (7)) Pictorial representation of deep learning.
The diagram represents the workings of a deep learning model; it was created by referring to the topic Deep Learning vs. Machine Learning from the source Zendesk.
There are many types of deep learning algorithms that are widely used in many applications.
CNNs are widely used for computer vision and are designed to work with data that has a grid-like structure, such as images. CNNs use convolutional layers to learn local patterns and spatial hierarchies within the input data, making them ideal for tasks such as image classification, object detection, and image segmentation (a minimal sketch appears after this list).
RNNs are designed to handle sequential data such as time series or natural language. Unlike feedforward neural networks, RNNs have feedback loops that allow them to retain information about earlier inputs and process sequences of varying length. RNNs are often used for tasks such as speech recognition, language modeling, and machine translation.
LSTMs are a special type of RNN that addresses the vanishing gradient problem and is better at modeling long-term dependencies. LSTMs have a memory cell that can store information for long periods of time, making them useful for tasks that require an understanding of context and memory, such as speech recognition, sentiment analysis, and language modeling.
A GAN comprises two neural networks, a generator and a discriminator, trained together in competition. While the discriminator tries to distinguish real data from fake data, the generator tries to create synthetic data that resembles the real data. GANs are commonly used for tasks such as image-to-image translation, data augmentation, and style transfer.
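As an illustration of the CNN idea above, here is a minimal, hedged Keras sketch. The 28x28 grayscale input shape and 10 output classes are illustrative assumptions (an MNIST-like setting), not a model from the chapter:

```python
# Minimal CNN definition in Keras: convolution + pooling layers learn
# local spatial patterns; the final dense layer outputs class scores.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),           # 28x28 grayscale images
    layers.Conv2D(16, 3, activation="relu"),   # learn local patterns
    layers.MaxPooling2D(),                     # downsample: spatial hierarchy
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),    # 10-class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```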
Natural language processing (NLP) is an important area of artificial intelligence research, including knowledge representation, logical reasoning, and constraint satisfaction (Kumar et al., 2023). Over the past decade, NLP research has shifted to the large-scale application of statistical methods such as machine learning and data mining, leading to the development of learning and optimization methods such as genetic algorithms and neural networks.
NLP is what enables voice assistants like Siri, Google Assistant, and Alexa to understand human speech and respond to commands. Applications of NLP can be seen in Fig. (8). Here are a few essential techniques used in NLP; a minimal code sketch follows the list:
Tokenization is the process of breaking text into separate words or tokens. Tokenization is usually the first step in NLP tasks and enables further analysis and processing.
Fig. (8)) Applications of NLP.
Stop-word removal: common words like “like”, “is”, and “and” carry little semantic value and can be removed to reduce noise and improve computational efficiency.
Stemming reduces words to their base form (e.g., “eating” becomes “eat”), whereas lemmatization reduces words to their canonical form (e.g., “better” becomes “good”). These techniques help normalize text and reduce word variations.
Part-of-speech (POS) tagging assigns grammatical labels to each word in a sentence, such as noun, verb, adjective, etc. POS tagging is useful for understanding the syntactic structure of a sentence.
Named entity recognition (NER) identifies and classifies named entities in text, such as personal names, locations, dates, and organizations. NER helps extract structured information from unstructured text.
Text classification assigns predefined categories or labels to text documents. It is used in tasks such as spam detection, sentiment analysis, topic classification, and more.
Information extraction identifies and extracts structured information from unstructured text, such as relationships between entities.
Machine translation converts text from one language to another. Techniques include statistical methods, rule-based approaches, and, more recently, neural machine translation.
The above diagram shows real-world applications of natural language processing; it was created with reference to the source Data Science Dojo.
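Here is a minimal, hedged NLTK sketch of several of the techniques above: tokenization, stop-word removal, stemming, lemmatization, and POS tagging. The sample sentence is invented, and the NLTK data package names may vary slightly across NLTK versions:

```python
# Minimal NLP preprocessing sketch with NLTK.
import nltk
for pkg in ("punkt", "stopwords", "wordnet", "averaged_perceptron_tagger"):
    nltk.download(pkg, quiet=True)   # fetch required NLTK data once

from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

text = "The cats are eating better meals"
tokens = word_tokenize(text)                                   # split into tokens
filtered = [t for t in tokens
            if t.lower() not in stopwords.words("english")]    # drop stop words
stems = [PorterStemmer().stem(t) for t in filtered]            # "eating" -> "eat"
lemmas = [WordNetLemmatizer().lemmatize(t) for t in filtered]  # "cats" -> "cat"
tags = nltk.pos_tag(filtered)                                  # POS labels

print(filtered, stems, lemmas, tags, sep="\n")
```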
Reinforcement learning is a leading field of artificial intelligence research that focuses on learning a mapping from environmental states to actions (Dubey & Tiwari, 2023). Recent achievements, such as AlphaGo's use of deep reinforcement learning, have drawn attention to the field. This section introduces classic and deep reinforcement learning methods, discusses state-of-the-art work, and addresses challenges the field faces. Fig. (9) shows the terms used in reinforcement learning.
Fig. (9)) Terms used in reinforcement learning.
Q-learning is a popular model-free reinforcement learning algorithm for learning optimal policies without a model of the environment. It belongs to the family of value-based methods, built on the idea of estimating the value of state-action pairs.
State-action-reward-state-action (SARSA) is another popular reinforcement learning algorithm, similar to Q-learning. SARSA is an on-policy approach, meaning it learns Q-values based on the policy the agent is currently following.
DQN is a deep reinforcement learning algorithm that combines Q-learning with deep neural networks to handle high-dimensional state spaces. DQN has achieved strong results on many complex tasks.
The above picture shows the popular terms used in reinforcement learning; a minimal tabular Q-learning sketch follows.
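To ground the Q-learning idea described above, here is a minimal, hedged tabular sketch. The one-dimensional corridor environment, learning rate, discount factor, and exploration rate are all illustrative assumptions:

```python
# Tabular Q-learning on a toy corridor: states 0..4, actions 0 (left)
# and 1 (right); reaching state 4 yields reward 1 and ends the episode.
import random

n_states = 5
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(2000):                   # episodes
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:
            a = random.choice((0, 1))   # explore
        else:
            a = Q[s].index(max(Q[s]))   # exploit best known action
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print("agent prefers 'right' in every state:",
      all(q[1] > q[0] for q in Q[:-1]))
```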
The Python programming language was created by Guido van Rossum and first released in 1991. Python is a widely used programming language that has gained popularity alongside C++ and Java (Lyu et al., 2022). Python is a strong choice for data science and machine learning, with mature libraries that promote quality and productivity. This section explores the relationship between Python, data science, and machine learning algorithms. Data science techniques such as data summarization and exploratory data analysis were applied to the Iris dataset, resulting in optimized models for prediction with significant accuracy; a minimal sketch of such a workflow appears at the end of this section. The advantages of Python in AI can be seen in Fig. (10). Here are some commonly used Python libraries in AI:
Scikit-learn is known for its simple and consistent API, making it accessible to both beginners and experienced practitioners.
TensorFlow is an open-source library created by Google for numerical computation and machine learning.
PyTorch is an open-source library created by Facebook's AI Research lab. PyTorch provides a flexible framework for building and training machine learning models, particularly deep neural networks.
Keras emphasizes simplicity and ease of use, allowing users to quickly prototype and build deep learning models. It provides a wide range of pre-built neural network layers and models.
XGBoost is an optimized gradient-boosting library that focuses on decision tree models.
LightGBM is designed to handle large-scale datasets and supports various machine learning tasks, including classification, regression, and ranking.
CatBoost is an open-source gradient-boosting library developed by Yandex. It is designed to handle categorical features efficiently, making it suitable for tasks with high-cardinality categorical data.
H2O is an open-source, scalable machine-learning platform that provides a user-friendly interface for building and deploying machine-learning models.
Caffe provides a simple and expressive architecture for designing deep neural networks and has a large collection of pre-trained models available in its model zoo.
Theano is a deep learning library that allows users to define, optimize, and evaluate mathematical expressions. Theano provides a low-level interface for building and training deep neural networks and has a strong focus on numerical computation.
CNTK provides a rich set of tools and APIs for building and training deep neural networks and has seamless integration with Microsoft Azure for cloud-based deep learning.
Fig. (10)) Advantages of Python in AI.
The diagram shows the advantages of Python in the field of artificial intelligence.
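As promised above, here is a minimal, hedged sketch of the Iris workflow with scikit-learn. The chapter does not specify which model was used, so the k-nearest-neighbors classifier here is an illustrative choice:

```python
# Minimal Iris workflow: load the dataset, split it, fit a model,
# and report accuracy on the held-out test set.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # high on this dataset
```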
Cloud-based AI services refer to artificial intelligence capabilities and resources that are provided over the Internet through a cloud computing platform (Mohamed et al., 2023). Cloud computing enables efficient delivery of computing services, including software, storage, analytics, and intelligence. AI enhances data security and business collaboration, and works with smart devices and computer vision models to improve the effectiveness of the public cloud. Fig. (11) shows the advantages of the cloud. Some popular cloud-based AI services include the following; a brief code sketch follows the list:
AWS provides a comprehensive suite of AI services, including Amazon Rekognition for image and video analysis, Amazon Polly for text-to-speech conversion, and Amazon SageMaker for building and deploying machine learning models.
Fig. (11)) Advantages of cloud.
Google Cloud offers various AI services, such as the Google Cloud Vision API for image recognition, the Google Cloud Natural Language API for NLP tasks, and Google Cloud AutoML for custom model development.
Microsoft Azure offers AI services such as Azure Cognitive Services, which include vision, speech, language, and search capabilities. Azure Machine Learning allows users to create, deploy, and manage machine learning models.
IBM Watson offers a range of AI services, including Watson Assistant for building conversational chatbots, Watson Visual Recognition for image analysis, and Watson Natural Language Understanding for NLP tasks.
The above diagram shows the advantages of cloud computing, which enables the efficient delivery of computer services, including software, storage, analytics, and intelligence.
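As a taste of how such services are consumed, here is a minimal, hedged Python sketch calling Amazon Rekognition via boto3. The bucket and image names are placeholders, and valid AWS credentials with the appropriate permissions are assumed:

```python
# Minimal cloud AI call: label detection on an image stored in S3.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")
response = client.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket",  # placeholder
                        "Name": "photo.jpg"}},          # placeholder
    MaxLabels=5,
)
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```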
“Building a machine learning model for the prediction of underwater rocks vs. mines”.
Building a machine learning model for the prediction of underwater rocks vs. mines through a complete analysis of the sonar dataset.
• To create a model that can efficiently classify targets in rocks and mines with acceptable accuracy, aiding in safe mine location and identification.
• Using SONAR technology, which uses sound waves to detect objects.
• Training a model using machine learning algorithms to effectively identify targets, thus alerting the administration in case of any mine detection.
Naval forces use mines for security, but at the same time mines pose a serious threat to life at sea, to ships, and to submarines. For this reason, it is necessary to build a system, using the sonar dataset available on GitHub, to train machine learning models that can classify such objects and provide accurate results.
Classifying sonar returns as mines or rocks is a binary classification problem addressed with machine learning techniques, with practical applications in underwater mine detection, naval operations, and marine exploration. A minimal model-building sketch follows.
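Here is a minimal, hedged sketch of the rocks-vs-mines classifier described above. The local filename sonar.csv is a placeholder for the GitHub-hosted sonar dataset (60 numeric sonar-energy features per row, last column labelled “R” for rock or “M” for mine, no header), and logistic regression is one reasonable model choice rather than the chapter's prescribed one:

```python
# Minimal rocks-vs-mines classifier on the sonar dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

data = pd.read_csv("sonar.csv", header=None)        # placeholder path
X, y = data.iloc[:, :-1], data.iloc[:, -1]          # 60 features, R/M label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=1)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
print("first test prediction:", model.predict(X_test.iloc[[0]])[0])
```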