This book describes the Throughput Model methodology that can enable individuals and organizations to better identify, understand, and use algorithms to solve daily problems. The Throughput Model is a progressive model intended to advance the artificial intelligence (AI) field since it represents symbol manipulation in six algorithmic pathways that are theorized to mimic the essential pillars of human cognition, namely, perception, information, judgment, and decision choice. The six AI algorithmic pathways are (1) Expedient Algorithmic Pathway, (2) Ruling Algorithmic Guide Pathway, (3) Analytical Algorithmic Pathway, (4) Revisionist Algorithmic Pathway, (5) Value Driven Algorithmic Pathway, and (6) Global Perspective Algorithmic Pathway.
As AI is increasingly employed for applications where decisions require explanations, the Throughput Model offers business professionals the means to look under the hood of AI and comprehend how those decisions are attained by organizations.
Key Features:
- Covers general concepts of artificial intelligence and machine learning
- Explains the importance of dominant AI algorithms for business and AI research
- Provides information about 6 unique algorithmic pathways in the Throughput Model
- Provides information to create a roadmap towards building architectures that combine the strengths of the symbolic approaches for analyzing big data
- Explains how to understand the functions of an AI algorithm to solve problems and make good decisions
- Informs managers who are interested in employing ethical and trustworthy features in systems.
Dominant Algorithms to Evaluate Artificial Intelligence: From the view of Throughput Model is an informative reference for all professionals and scholars who are working on AI projects to solve a range of business and technical problems.
This is an agreement between you and Bentham Science Publishers Ltd. Please read this License Agreement carefully before using the book/echapter/ejournal (“Work”). Your use of the Work constitutes your agreement to the terms and conditions set forth in this License Agreement. If you do not agree to these terms and conditions then you should not use the Work.
Bentham Science Publishers agrees to grant you a non-exclusive, non-transferable limited license to use the Work subject to and in accordance with the following terms and conditions. This License Agreement is for non-library, personal use only. For a library / institutional / multi user license in respect of the Work, please contact: [email protected].
Bentham Science Publishers does not guarantee that the information in the Work is error-free, or warrant that it will meet your requirements or that access to the Work will be uninterrupted or error-free. The Work is provided "as is" without warranty of any kind, either express or implied or statutory, including, without limitation, implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the results and performance of the Work is assumed by you. No responsibility is assumed by Bentham Science Publishers, its staff, editors and/or authors for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products instruction, advertisements or ideas contained in the Work.
In no event will Bentham Science Publishers, its staff, editors and/or authors, be liable for any damages, including, without limitation, special, incidental and/or consequential damages and/or damages for lost data and/or profits arising out of (whether directly or indirectly) the use or inability to use the Work. The entire liability of Bentham Science Publishers shall be limited to the amount actually paid by you for the Work.
Bentham Science Publishers Pte. Ltd. 80 Robinson Road #02-00 Singapore 068898 Singapore Email: [email protected]
“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We're nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.” —Larry Page
I think we're going to need artificial assistance to make the breakthroughs that society wants. Climate, economics, disease -- they're just tremendously complicated interacting systems. It's just hard for humans to analyze all that data and make sense of it.
Artificial intelligence (AI) systems are already transforming how individuals and organizations function in today’s world. AI can automate repetitive tasks, analyze large volumes of data, recommend content, translate languages, and even play games. Further, AI and related technologies are progressively ubiquitous in business and society. For example, AI increasingly finds its way into everything from advanced quantum computing systems, automobiles, household appliances, and leading-edge medical diagnostic systems to consumer electronics and “smart” personal assistants. AI tools are also employed in virtual reality and augmented reality, as well as in making IoT devices and services smarter and more secure.
Nonetheless, the current scope of things that AI can accomplish is relatively narrow. Some experts say the technology is far from becoming so-called artificial general intelligence, or AGI. That is, AGI is the capability to understand or learn any intellectual task that a human being can.
Furthermore, others have noted that even in its current, narrow proficiencies, AI provokes a series of ethical and trustworthiness questions. These questions represent issues such as whether the data fed into AI programs are without bias, and whether AI can be held accountable if something goes wrong.
To construct ethical and trusted AI systems, there needs to be cooperation among nations and various stakeholders. Experts have previously warned that inherently biased AI programs can present momentous problems and may erode people’s trust in those systems. For example, facial recognition software may incorporate accidental racial and gender bias, which may pose a threat to a particular group of individuals.
Therefore, this book provides a methodology described as the Throughput Model that can enable individuals and organizations to better identify, understand, and use algorithms to solve daily problems. Moreover, the Throughput Model can further the AI field since it represents symbol manipulation in six algorithmic pathways that capture what seem to be the essential pillars of human cognition, namely, perception, information, judgment, and decision choice. Finally, the Throughput Model provides the first steps towards building architectures that combine the strengths of the symbolic approaches that can be adapted for machine learning/deep learning, and towards developing better techniques for extracting and generalizing abstract knowledge from large, often noisy data sets.
As AI is employed more and more for applications where decisions require explanations, the Throughput Model offers the means to look under the hood of AI and comprehend how those decisions are attained by organizations. This is particularly important for building ethical and trustworthy systems. Hence, Throughput Modelling ought to be considered from the start, as it will inform the design of an AI system. Building trusted and ethical AI systems, and the governance around them, may potentially become a competitive strength for organizations.
Not applicable.
The authors declare no conflict of interest, financial or otherwise.
I would like to express my gratitude to the many people who saw me through this book; to all those who provided support, talked things over, read, wrote, offered comments, allowed me to quote their remarks, and assisted in the editing, proofreading and design of the book. Further, I would like to thank those who allowed the use of the artificial intelligence figures in the book.
Further, I would like to thank my students. Learning is a collaborative activity when it is happening at its best. We work together, using each other's strengths to meet our challenges and to develop our thinking and problem-solving skills. Therefore, the relationship we develop with our students at every age is one that is to be respected, nurtured, and admired.
Last but not least: I request the forgiveness of individuals who have been with me over the course of the years and whose names I have failed to mention.
Abstract
“We have seen AI providing conversation and comfort to the lonely; we have also seen AI engaging in racial discrimination. Yet the biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly larger than before. As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive”.
---Andrew Ng, Co-founder and lead of Google Brain
The AI of the past used brute-force computing to analyze data and present them in a way that seemed human. The programmer supplied the intelligence in the form of decision trees and algorithms. Imagine that you were trying to build a machine that could play tic-tac-toe. You would give it specific rules on what move to make, and it would follow them. Today's AI uses machine learning in which you give it examples of previous games and let it learn from the examples. The computer is taught what to learn and how to learn and makes its decisions. What's more, the new AIs are modeling the human mind itself using techniques similar to our learning processes.
---Vivek Wadhwa
Google's work in artificial intelligence ... includes deep neural networks, networks of hardware and software that approximate the web of neurons in the human brain. By analyzing vast amounts of digital data, these neural nets can learn all sorts of useful tasks, like identifying photos, recognizing commands spoken into a smartphone, and, as it turns out, responding to Internet search queries. In some cases, they can learn a task so well that they outperform humans. They can do it better. They can do it faster. And they can do it at a much larger scale.
---Cade Metz
Abstract
The Fourth Industrial Revolution has ushered in extremely sophisticated digital apparatuses that have taken the place of manual processing to ensure higher automation and sophistication. Artificial Intelligence (AI) provides the tools to exhibit human-like behaviors while adjusting to newly given inputs and accommodating change in the environment. Moreover, tech giants such as Amazon, Apple, IBM, Facebook, Google, Microsoft, and many others are investing in generating AI-driven products to facilitate the market demands for sophisticated automation. AI will continually influence areas such as job opportunities, environmental protection, healthcare, and other areas in economic and social systems.
The development of artificial intelligence (AI) has transformed our economic, social, and political way of life. Tedious and time-consuming tasks can now be delegated to AI tools that can complete the work in a matter of minutes, if not seconds. Within the world of business, this has significantly decreased the time required to conclude transactions. Nonetheless, there is always the fear of a person being replaced by AI tools for the sake of cost and time efficiency. Although these fears are valid in some arenas, AI is not developed enough to completely replace a human’s judgment or expertise in a variety of situations. Moreover, AI can be considered a tool that should be fully embraced to improve an individual’s or organization’s efficiency and effectiveness when performing a task. Within the human resource department, machines can be used throughout the entire process.
This book presents a decision-making model described as the “Throughput Model,” which houses six dominant algorithmic pathways for AI use. This modeling process may better guide individuals, organizations, and society in general to assess the overall algorithmic architecture that is guiding AI systems. Moreover, the Throughput Modelling approach can address values and ethics that are often not baked into the digital systems that assemble individuals’ decisions for them. Finally, the Throughput Model specifies six major algorithms (to be discussed later) that may augment human capacities by countering people's deepening dependence on machine-driven networks that can erode their abilities to think for themselves, act independently of automated systems, and interact effectively with others [1]. The Throughput Model’s six dominant algorithms can be utilized as a platform for an enhanced understanding of the erosion of traditional sociopolitical structures and the possibility of great loss of life due to the accelerated growth of autonomous military applications. Further, the model may assist in understanding the use of weaponized information, lies, and propaganda to dangerously destabilize human groups.
AI is the ability of a computer, machine, or robot controlled by software to do tasks that are typically performed by humans because they require human intelligence and discernment. In other words, AI can simulate humans’ style of living and work rules, as well as transform people’s thinking and actions into systematized operations. Scientists have discovered more about the brain in the last 10 years than in all prior centuries due to the accelerating pace of research in neurological and behavioral science and the development of new digital research techniques [2].
In addition, neurological brain research experts have found that the human brain has approximately 86 billion neurons, and each neuron is divided into multiple layers [2]. There are more than 100 synapses on each neuron, the connections between neurons are communicated by synapses, and this transmission mode establishes a complex neural network [3]. AI mimics this neural operation to analyze and compute, distributing work across a neural network piece by piece to complete various activities. This immensely augments people’s work efficiency and saves the corresponding labor force, thus reducing many labor costs and helping enterprises to develop better [1].
Furthermore, digital life is augmenting human capacities and disrupting eons-old human activities. Algorithm-driven systems have spread to more than half of the world’s inhabitants in encompassing information and connectivity, proffering previously unimagined opportunities. AI programs are adept at mimicking, and can even do better than, human brains in many tasks [1]. The rise of AI will make most individuals and organizations better off over the years to come. AI will become dominant in most, if not all, aspects of decision-making in the foreseeable future. The utilization of algorithms is rapidly rising as substantial amounts of data are being created, captured, and analyzed by governments, businesses, and public bodies. The opportunities and risks accompanying the utilization of algorithms in decision-making depend on the kind of algorithm, and an understanding of the context in which an algorithm functions will be essential for public acceptance and trust [1]. Likewise, whether an AI system acts as a primary decision maker, or as an important decision aid and support to an individual decision maker, will suggest different regulatory approaches.
Fundamentally, the goal of an algorithm is to solve a specific problem, usually defined by someone as a sequence of steps. In machine learning or deep learning, an algorithm is a set of rules given to an AI program to help it learn on its own. Machine learning is a set of algorithms that enables software to update and “learn” from prior results without the need for programmer intervention. In addition, machine learning can get better at completing tasks over time based on the labeled data it ingests. Also, deep learning can be depicted as a field related to machine learning that is concerned with algorithms inspired by the structure and function of the human brain, called artificial neural networks [1].
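To make the contrast concrete, the short sketch below is a purely illustrative example (the transaction data and threshold task are assumptions, not taken from the book): the first function is a traditional, hand-written rule, while the second “learns” its rule from labelled examples, in the spirit of machine learning.

```python
# Illustrative sketch (not from the book): the same task solved two ways.
# Task: decide whether a transaction amount is "large".

# (1) A traditional, rule-based algorithm: the programmer supplies the rule.
def is_large_rule_based(amount, threshold=1000.0):
    return amount > threshold

# (2) A minimal "machine learning" version: the threshold is learned from
#     labelled examples instead of being hard-coded.
def learn_threshold(amounts, labels):
    # Try the midpoint between every adjacent pair of sorted amounts and
    # keep the cut-off that classifies the training examples best.
    candidates = sorted(amounts)
    best_threshold, best_correct = candidates[0], -1
    for a, b in zip(candidates, candidates[1:]):
        t = (a + b) / 2
        correct = sum((x > t) == y for x, y in zip(amounts, labels))
        if correct > best_correct:
            best_threshold, best_correct = t, correct
    return best_threshold

train_amounts = [120, 250, 400, 900, 1500, 2200, 3100]
train_labels  = [False, False, False, False, True, True, True]

learned_t = learn_threshold(train_amounts, train_labels)
print("learned threshold:", learned_t)
print("rule-based decision:", is_large_rule_based(1800))
print("learned decision   :", 1800 > learned_t)
```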
For many years, AI was housed in data centers, where there was satisfactory computing power to achieve processor-demanding cognitive chores. Today, AI has made its way into software, where predictive algorithms have changed the nature of how these systems support organizations. AI technologies, from machine learning and deep learning to natural language processing (NLP) and computer vision, are precipitously spreading throughout the world. NLP is a subfield of linguistics, computer science, and AI that is concerned with the interfaces between computers and human language. In addition, it involves programming computers to process and analyze large amounts of natural language data. Computer vision, in turn, is an interdisciplinary scientific field that deals with how computers can achieve high-level understanding from digital images or videos. From the viewpoint of engineers and computer scientists, it pursues understanding and automating tasks that the human visual system can do.
NLP applications are in use hundreds of times per day. For example, predictive text on mobile phones typically implements NLP. Furthermore, searching for something on Google utilizes NLP. Finally, a voice assistant application such as Alexa or Siri utilizes NLP when you ask it a question.
Machine learning is a branch of AI that enables computers to self-learn from data and harness that learning without human intervention. When confronted with a circumstance in which a solution is hidden in a large data set, machine learning performs admirably well [1]. Furthermore, machine learning does extremely well at processing that data, extracting patterns from it in a fraction of the time a human would take, and generating otherwise unattainable insight.
Deep learning is a tool for classifying information through layered neural networks, a rudimentary replication of how the human brain works. Neural networks have a set of input units, where raw data is supplied. This can be pictures, sound samples, or written text. The inputs are then mapped to the output nodes, which determine the category to which the input information belongs. For instance, a network can determine that a supplied picture contains a dog, or that a short sound sample was the word “Goodbye”.
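A minimal sketch of the forward pass just described might look as follows; the weights, bias values, and input are arbitrary assumptions for illustration, not a trained model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single-layer network: 3 input units fully connected to 2 output units.
# Weight and bias values here are arbitrary assumptions for illustration.
weights = np.array([[0.4, -0.6],
                    [0.1,  0.8],
                    [-0.5, 0.3]])   # shape (inputs, outputs)
bias = np.array([0.05, -0.1])

x = np.array([0.9, 0.2, 0.7])       # one input example (e.g., pixel features)
activations = sigmoid(x @ weights + bias)

# The output unit with the highest activation is the predicted category.
print("output activations:", activations)
print("predicted class:", int(np.argmax(activations)))
```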
Deep learning can be depicted as a subset of machine learning, and machine learning is a subset of AI, which is an umbrella term for any computer program that does something intelligent [1]. Deep learning models operate in a manner that draws from the pattern recognition capabilities of neural networks (Fig. 1.1). These so-called “narrow” AIs are ubiquitous, embedded in people’s GPS systems and Amazon recommendations. Nevertheless, the goal is artificial general intelligence, a self-teaching system that can outperform humans across a wide range of disciplines [1].
Fig. (1.1) Artificial single layer neural network. Source: Adapted by the author.
The enormous digitization of data, as well as the emerging technologies that implement it, are disrupting most economic sectors, including transportation, retail, advertising, energy, and other areas [4]. Further, AI is also having an influence on democracy and governance as computerized systems are being adopted to enhance accuracy and drive objectivity in government operations. Nonetheless, the risks are also considerable and conceivably present tremendous governance challenges. These consist of labor displacement, inequality, an oligopolistic global market structure, reinforced totalitarianism, shifts and volatility in national power, strategic instability, and an AI race that sacrifices safety and other values.
AI tools are progressively expanding and elevating decision-making capabilities through such means as coordinating data delivery, analyzing data trends, providing forecasts, developing data consistency, quantifying uncertainty, anticipating the user's data needs, and delivering timely information to the decision makers. Moreover, decision-making is essential to individuals and organizations, and AI algorithms are progressively being utilized in our daily decision choices. AI can be depicted as a group of technologies used to solve specific problems [1]. AI is typically pitched around delivering a data-based answer or offering a data-fueled prediction. Beyond that, features and elements begin to diverge. For instance, natural language processing (NLP) may be used to automate incoming emails, machine vision to assess quality on the product line, or advanced analytics to predict a failure of an organization's network [5].
Computer algorithms are widely employed throughout our economy and society to make decisions that have far-reaching impacts, including their applications for education, access to credit, healthcare, and employment. The ubiquity of algorithms in our everyday lives is an important reason to focus on addressing challenges associated with the design and technical aspects of algorithms and preventing bias from the onset. That is, algorithms gradually mold our news, economic options, and educational trajectories.
Traditional algorithms are rule-based: they represent a set of logical rules created from expected inputs and outputs. Algorithms often depend upon the analysis of considerable amounts of personal and non-personal data to infer correlations or, more generally, to derive information regarded as beneficial for making decisions. Moreover, the decision-making processes of such algorithms can easily be explained, and the process is typically transparent. Nonetheless, machine learning and/or deep learning algorithms generated by AI create rules internally and are therefore very difficult to make transparent. This also implies that the workings of some machine learning and deep learning algorithms are encapsulated in a so-called “black box”. Hence, AI-produced algorithms are problematic, since how these black boxes arrive at their decision choices is extremely difficult to explain.
In other words, decision-making on the quintessential characteristics of digital life is automatically relinquished to code-driven, black-box tools. Individuals lack input and do not learn the context about how the technology operates in practice. Further, society sacrifices independence, privacy, and power over choice, with no control over these processes. This effect may expand as automated systems become more prevalent and complex.
In addition, human involvement in the decision-making may vary, and humans may be entirely out of the loop in operating systems. For example, the influence of the decision on individuals can be sizeable, such as access to credit, employment, medical treatment, or judicial sentences, among other issues. Entrusting algorithms to make or to sway such decisions produces an assortment of ethical, political, legal, or technical issues, where careful consideration must be taken to study and address them properly. If they are ignored, the anticipated benefits of these algorithms may be invalidated by an array of risks for individuals (e.g., discrimination, unfair practices, loss of autonomy, etc.), the economy (e.g., unfair practices, limited access to markets, etc.), and society (e.g., manipulation, threats to democracy, etc.). These systems are globally networked and not easy to regulate or rein in.
In sum, AI’s foremost improvement over humans lies in its capability to detect faint patterns within large quantities of data and to learn from them. While a commercial loan officer will look at several measures when deciding whether to grant an organization a loan (i.e., liquidity, profitability, and risk factors), an AI algorithm will learn from thousands of minor variables (e.g., factors covering character dispositions, social media, etc.). Taken alone, the predictive power of each of these is small, but taken together, they can produce a far more accurate prediction than the most discerning loan officers are capable of comprehending.
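The loan-officer contrast can be illustrated with a toy simulation (all numbers below are synthetic assumptions, not lending data): combining many individually weak signals tracks the underlying quality more closely than a handful of stronger ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n_applicants = 5000
true_quality = rng.normal(size=n_applicants)       # unobserved creditworthiness

def noisy_signals(n_signals, noise_scale):
    # Each signal = true quality + independent noise.
    return true_quality[:, None] + rng.normal(scale=noise_scale,
                                              size=(n_applicants, n_signals))

strong = noisy_signals(3, noise_scale=1.0).mean(axis=1)     # few, fairly strong
weak   = noisy_signals(300, noise_scale=5.0).mean(axis=1)   # many, individually weak

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print("few strong signals, correlation with quality:", round(corr(strong, true_quality), 3))
print("many weak signals,  correlation with quality:", round(corr(weak, true_quality), 3))
```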
An algorithm is only as good as the data it works with in a system. Data is often imperfect in ways that permit these algorithms to inherit the predispositions of previous decision makers. In other cases, data may merely replicate the pervasive biases that persevere in society at large. In other applications, data mining can uncover regularities that look advantageous but are just preexisting patterns of exclusion and inequality. The arena of data mining is somewhat contemporary and in a state of evolution. Data mining is the study of collecting, cleaning, processing, analyzing, and gaining useful insights from data [6].
Further, data mining is the process of extricating beneficial information from huge amounts of data. In addition, data mining is the technique of uncovering meaningful correlations, patterns and trends by filtering through substantial amounts of data gathered in repositories. Data mining utilizes pattern recognition technologies, as well as statistical and mathematical techniques.
For example, an e-mail spam filter depends, in part, on rules that a data mining algorithm has learned from scrutinizing millions of e-mail messages that have been catalogued as spam or not spam. Moreover, real-time data mining techniques enable Web-based merchants to inform customers that “customers who purchased product A are also likely to purchase product B”. In addition, data mining assists banks in ascertaining the types of applicants that are more likely to default on loans, supports tax authorities in pinpointing the types of tax returns that are most likely to be fraudulent, and aids catalog merchants in targeting those customers that are most likely to purchase [7].
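The “customers who purchased product A are also likely to purchase product B” pattern can be expressed as a simple association-rule calculation; the basket data below are invented solely for illustration.

```python
# Illustrative market-basket data; each set is one customer's purchases.
baskets = [
    {"A", "B", "C"},
    {"A", "B"},
    {"A", "D"},
    {"B", "C"},
    {"A", "B", "D"},
]

def rule_stats(baskets, antecedent, consequent):
    n = len(baskets)
    with_a = [b for b in baskets if antecedent in b]
    with_both = [b for b in with_a if consequent in b]
    support = len(with_both) / n                 # P(A and B)
    confidence = len(with_both) / len(with_a)    # P(B | A)
    return support, confidence

support, confidence = rule_stats(baskets, "A", "B")
print(f"rule A -> B: support={support:.2f}, confidence={confidence:.2f}")
# With this toy data, 3 of the 4 baskets containing A also contain B,
# so the merchant would recommend B to buyers of A.
```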
Flourishing organizations are making effective use of the abundance of data to which they have access, producing better forecasts, enhanced strategies, and improved decision choices. Nevertheless, in a world where algorithms are fixtures of organizations and, by extension, peoples’ lives, the issue of biased training data is increasingly consequential. In addition, AI insurance could emerge as a new revenue stream for insurance companies indemnifying organizations.
Increasingly, AI systems known as deep learning neural networks are relied upon to inform decisions essential to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at identifying patterns in large, complex datasets to facilitate decision-making.
Moreover, AI algorithms and robotics are digital technologies that will have momentous influence on the development of humanity in the near future. Ethical issues have been raised regarding what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these apparatuses.
The focus comes as AI research progressively deals more with controversies surrounding the application of its technologies. This is especially the case in the use of biometrics such as iris and facial recognition. Issues pertain to grasping and understanding how biases in algorithms may reflect existing patterns of perceptual framing in data. There is no such thing as a neutral technological platform, since algorithms can influence human beliefs.
AI is the leading technology of the Fourth Industrial Revolution. AI denotes technological advances, from biotechnology to big data, which are precipitously reshaping the global community. The First Industrial Revolution utilized water and steam power to industrialize production. Next, the Second Industrial Revolution employed electric power for mass production. Thereafter, the Third Industrial Revolution exercised electronics and information technology to computerize production. The Fourth Industrial Revolution is assembled on the Third. That is, the digital revolution has been transpiring since the middle of the last century. It is characterized by a merging of technologies that has blurred the lines between the physical, digital, and biological spheres.
Furthermore, AI represents a family of tools where algorithms uncover or learn associations of predictive power from data. An algorithm is depicted as a step-by-step procedure for solving a problem. The most palpable form of AI is machine learning, which comprises a family of techniques called deep learning that rely on multiple layers of representation of data and are therefore able to embody complex relationships between inputs and outputs. Nonetheless, learned representations are difficult for humans to interpret, which is one of the drawbacks of deep learning neural networks.
Algorithms have been cultivated into more complex structures; however, certain challenges still emerge. That is, AI can aid in identifying and reducing the influence of human biases. Nonetheless, it can also make the problem worse by baking in and deploying biases in sensitive application areas, such as profiling people in facial recognition apparatuses. It is not the machines that have biases; an AI tool does not ‘want’ something to be true or false for reasons that cannot be explained through logic. Unfortunately, human bias exists in machine learning from the creation of an algorithm to the interpretation of data. Further, until now hardly anyone has tried to solve this huge problem.
The potential of AI rests on a transition that differentiates AI’s past, grounded in symbol processing and syntax, from its future, constructed on learning and on semantics grounded in sensory experience.
Machine learning is the field most frequently associated with the current explosion of AI. Machine learning is a set of techniques and algorithms that can be implemented to “train” a computer program to automatically identify patterns in a set of data.
Machine learning can be encapsulated as a research field that is proficient at recognizing patterns in data and at developing systems that will learn from them. More specifically, supervised machine learning trains systems using examples classified (labelled) by individuals: for example, these transactions are deceptive; those transactions are not deceptive. Grounded in the features of that classified data, the system learns what the underlying patterns of those kinds are, and then can predict which new transactions are highly likely to be deceptive. Unsupervised machine learning, in contrast, can uncover patterns in large quantities of unlabeled data. This procedure endeavors to unearth a fundamental structure of its own accord, such as by clustering cases that are similar to one another and formulating associations [1].
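One compact way to see the supervised/unsupervised distinction is sketched below, assuming the scikit-learn library and synthetic transaction-like data rather than anything described in the book.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic "transactions": two features, e.g., scaled amount and hour of day.
legit = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
fraud = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(20, 2))
X = np.vstack([legit, fraud])

# Supervised: labels ("deceptive" or not) are provided by people.
y = np.array([0] * len(legit) + [1] * len(fraud))
clf = LogisticRegression().fit(X, y)
print("supervised prediction for a new transaction:", clf.predict([[2.8, 3.2]])[0])

# Unsupervised: no labels; the algorithm groups similar transactions itself.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes found without labels:", np.bincount(clusters))
```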
Many diverse tools fall under the umbrella of “machine learning”. Typically, machine learning uses “features” or “variables” (e.g., the location of fire departments in a city, data from surveillance cameras, attributes of criminal defendants) procured from a set of “training data” to learn patterns without explicitly being told what those patterns are by humans. Machine learning has come to comprise techniques that have historically been described more simply as “statistics”. Machine learning is the tool at the heart of new automated AI systems, making it challenging for people to comprehend the logic behind those systems.
There is typically a trade-off between performance and explainability for machine learning, deep learning, or neural networks. Machine learning will often be more advantageous when the situation resembles a black-box scenario, with multifaceted elements and many intermingling influences. As a result, these systems will more than likely be held accountable via post hoc monitoring and evaluation. For example, if the machine learning algorithm’s decision choices are significantly biased, then something regarding the system or the data it is trained on may need to change.
Algorithms are not inherently biased. In other words, algorithmic decision choices are predicated on several aspects, including how the software is deployed and the quality and representativeness of the underlying data. Further, it is important to ensure that data transparency, review, and remediation are considered throughout algorithmic engineering processes. Yet the increasing use of algorithms in decision-making also brings to light important issues about governance, accountability, and ethics.
While organizations today make widespread use of complex algorithms, algorithmic accountability persists as an elusive ideal because of the opacity and fluidity of algorithms. Machines may not suffer from the same biases that we humans have, but they have their own problems. Machine learning procedures may aggravate bias in decision-making due to poorly conceived models. Moreover, the occurrence of unrecognized biases in training data, or of disparate sample sizes across subgroups, can cause problems.
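A simple form of the post hoc monitoring mentioned above is to compare a system's favourable-outcome rate across subgroups. The sketch below works on hypothetical logged decisions and an arbitrary review threshold, both assumptions made for illustration.

```python
# Hypothetical post hoc audit of logged decisions: (group, approved?) pairs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rate(records, group):
    outcomes = [approved for g, approved in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")
rate_b = approval_rate(decisions, "group_b")
gap = abs(rate_a - rate_b)

print(f"approval rate, group_a: {rate_a:.2f}")
print(f"approval rate, group_b: {rate_b:.2f}")
if gap > 0.2:   # review threshold chosen arbitrarily for illustration
    print("large disparity detected - review the model and its training data")
```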
A common principle of AI ethics is explainability [8]. The risk of producing AI that reinforces societal biases has stimulated calls for greater transparency about algorithmic or machine learning decision processes, and for means to understand and audit how an AI agent arrives at its decision choices or classifications. As the utilization of AI systems flourishes, being able to explain how a given model or system works will be imperative, particularly for those used by industry, governments, or public sector agencies.
AI algorithms entail a computational process, including those derived from machine learning, deep learning, statistics, data processing, or related tools, that makes a decision choice or contributes to human decision making and that influences users such as consumers. Employed across industries, AI algorithms can unlock smartphones using facial recognition, make driving decisions in autonomous vehicles, and recommend entertainment based on user preferences. Further, AI applications can support the process of pharmaceutical advancement, ascertain the creditworthiness of potential homebuyers, and screen applicants for job interviews. In addition, AI automates, accelerates, and improves data processing by locating patterns in the data, acclimating to new data, and learning from experience.
Algorithmic accountability appeals to the following related remedies.
- Transparency. Decision makers cannot use the intricacies and proprietary nature of many algorithmic models as a shield against inquiry.
- Explanation. Certify that algorithmic decisions, as well as any data driving those decisions, can be explained to end-users and other stakeholders in non-technical terms. At a minimum, there is a “right to explanation” of the nature and construction of algorithms.
- Audits. Algorithmic techniques should be examined by an internal auditor and/or an independent third party. In addition, interested third parties should be able to inquire into, understand, and check the nature of the algorithm through disclosure of information that facilitates monitoring, checking, or criticism, including through provision of detailed documentation, technically suitable APIs, and accommodating terms of use. In other words, make available externally discernable avenues of redress for adverse individual or societal effects of an algorithmic system.
- Fairness. Verify that algorithmic decision choices do not produce discriminatory or unjust effects when differentiating across different demographics (e.g., race, sex, etc.). The issues of unfairness and bias may be confronted by building fairness requirements into the algorithms themselves.
To reduce the risks in algorithms, intrinsic and extrinsic requirements can apply to any algorithmic properties, such as safety, security, or privacy [9]. Intrinsic requirements, such as fairness, absence of bias, or non-discrimination, can be articulated as properties of the algorithm itself in its application framework. ‘Fairness’ can be construed as the ‘absence of undesirable bias.’ In addition, ‘discrimination’ can be depicted as a particular type of unfairness associated with the utilization of distinctive types of data (such as ethnic origin, political opinions, gender, etc.) [8]. Extrinsic requirements are related to ‘understandability,’ which is the possibility of providing understandable information about the connection between the input and the output of the algorithms. The two foremost forms of understandability are deemed to be “transparency” and “explainability” [9].
Algorithmic transparency is openness about the purpose, structure, and fundamental actions of the algorithms employed to search for, process, and deliver information. Transparency is delineated as the availability of the algorithmic code together with its design documentation, its parameters, and, when the algorithm relies on machine learning or deep learning tools, the learning dataset. Transparency does not necessarily imply availability to the public. It also embodies situations in which the code is made known only to selected actors, for example for audit or certification. Indeed, a common method utilized to offer transparency and ensure algorithmic accountability is the use of third-party audits.
Decision choices formulated by algorithms can be opaque for technical and social reasons. Furthermore, algorithms may be deliberately opaque to protect intellectual property. For example, the algorithms may be too multifaceted to explain, or efforts to illuminate the algorithms might necessitate the utilization of data that infringes a country's privacy regulations.
Explainability is described as the availability of explanations about AI algorithms. In contrast to transparency, explainability necessitates the delivery of information beyond the AI algorithms themselves [9]. Explanations can be of diverse kinds (i.e., operational, logical, or causal). Further, they can be either global (about the whole algorithm) or local (about specific results); and they can take distinctive forms (decision trees, histograms, picture or text highlights, examples, counterexamples, etc.). The strengths and weaknesses of each explanation method should be evaluated in relation to the recipients of the explanation (e.g., a professional or a prospective employee), their level of expertise, and their objectives (to challenge a decision, take actions to obtain a decision, verify compliance with legal obligations, etc.).
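One common post-modelling explanation technique, in line with the “global” explanations mentioned above, is to fit a small interpretable surrogate model to a black-box model's predictions. The sketch below assumes scikit-learn and synthetic data; it is not the procedure prescribed by the book.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))                     # three illustrative features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # synthetic ground truth

# An opaque "black box" model.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Global surrogate: a depth-2 tree trained to imitate the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

print(export_text(surrogate, feature_names=["f0", "f1", "f2"]))
agreement = float((surrogate.predict(X) == black_box.predict(X)).mean())
print("surrogate agrees with black box on", round(agreement, 3), "of training cases")
```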
The next section highlights explainability in terms of a model described as the Throughput Model [10]. This model emphasizes “explainability” by considering stages of AI development, namely, pre-modelling, model development, and post-modelling. The majority of AI explainability literature targets illuminating a black-box model that is already developed, namely, post-modelling explainability. The Throughput Model theory is suggested to resolve these issues.
The centrality of, and concerns about, algorithmic decision-making are increasing daily. Legal, policy, and ethical challenges arise from algorithmic power in media production and consumption, commerce, and education. Moreover, a case is often made that we are looking to a future in which decision-making based on automated processing of large datasets becomes increasingly common. Big data, machine learning, algorithmic decision-making, and similar technologies have the capacity to bring substantial advantage to individuals, groups, and society. They could also produce new injustices and entrench old ones in ways that permit them to be strongly reproduced across national and international networks. The Throughput Model allows us to view the design of the algorithms, which in effect is looking inside of the black box (see Fig. 1.2).
Fig. (1.2) Throughput Modelling Diagram, where P = perception, I = information, J = judgment, and D = decision choice. Source [11].
Further, the Throughput Model outlines the steps and strategies that decision makers need to determine before making a decision. The daily decision-making process depicted in the Throughput Model that affects the activities of individuals and organizations involves different algorithmic paths among four factors, which are “perception (P)”, “information (I)”, “judgment (J)” and “decision choice (D)”.
As shown in Fig. (1.2), these four components link to six algorithmic decision-making routes. The first of these components is “perception” of the environment and framework within which an individual or organization operates, and of how relevant “information,” specifically facts or details related to the issue under review, should be considered for use. Perception can be influenced by biases and heuristics on the part of decision makers, their previous experience, and other external and internal factors, all of which will affect the way information is processed. Among these, the double arrows in Fig. (1.2) indicate the consistency between perception and information. Also, this relationship is like a neural network in that information updates perception, and perception influences the selection of information [12]. “Information” affects and reshapes individuals’ or organizations’ perception and decision choice. Rodgers [13] concluded that a lack of coherence between perception and information by decision makers will lead to a loss of cognition. The process of “judgment” includes weighing existing information and making an objective assessment, while decision-making is the final element of an executive’s action plan. In the Throughput Model, the six different algorithmic paths available to decision makers are:
1. P→D,
2. P→J→D,
3. I→J→D,
4. I→P→D,
5. P→I→J→D, and
6. I→P→J→D.
In contrast to the Throughput Model approach, the black box approach analyses the behavior of systems without ‘opening the hood’ of the vehicle, that is, without any knowledge of the system's code. Explanations are constructed from observations of the relationships between the inputs and outputs of the system. This is the only possible approach when the operator or provider of the system is uncooperative (does not agree to disclose the code).
The Throughput Modelling approach, in contrast to the black box approach, assumes that analysis of a system's code is possible. Further, this approach provides a design for systems built on dominant algorithms that assist in explainability. This is possible by (1) relying on six dominant algorithmic pathways, which by design provide sufficient accuracy, and (2) enhancing precise algorithms with explanation, so that the system can generate, in addition to its nominal results (e.g., a classification), a faithful and intelligible explanation for those results.
In addition, the Throughput Model and its algorithmic pathways uncover the strategies used by individuals or organizations in working through a problem [14]. This model is useful since AI systems are primed by human intelligence. Moreover, interestingly enough, the Throughput Model is closely related to machine learning. Machine learning relates to computer systems that can perform autonomous learning without specific programming. For example, in cloud computing and cloud storage, a system can automatically incorporate massive data into the original function with the help of the Throughput Model, which can reduce the required data reserve [1]. At the same time, it can enhance the computing power to automatically improve itself. The Throughput Model can be depicted as a theoretical system to address the adoption of up-and-coming tools and technologies (such as deep learning components, digitization, neural networks, etc.) and to describe the development capabilities needed to effectively address the challenges of this century.
The Throughput Model draws our attention to cognitive tasks by depicting the process steps that individuals or organizations take to arrive at a decision choice. This algorithmic model highlights that they can continuously learn from new data and perform better depending upon the selected algorithmic pathway. In its entirety, the Throughput Model suggests that not only one algorithmic pathway is considered; parallel algorithmic paths can also be considered based on the perceptual and information sets that are processed through the algorithms. Together, this allows the Throughput Model to place countless tasks across society into algorithmic format, such as driving a car, diagnosing a disease, or providing customer support.
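As a purely illustrative encoding (not software accompanying the book), the sketch below represents the six pathways as ordered sequences of the four stages and walks one selected pathway through stub stage functions.

```python
# Illustrative encoding of the Throughput Model's six algorithmic pathways.
# P = perception, I = information, J = judgment, D = decision choice.
PATHWAYS = {
    1: ["P", "D"],
    2: ["P", "J", "D"],
    3: ["I", "J", "D"],
    4: ["I", "P", "D"],
    5: ["P", "I", "J", "D"],
    6: ["I", "P", "J", "D"],
}

# Stub stage functions: each stage appends a step to the running "trace" of
# the decision process. Real systems would replace these with actual models.
STAGES = {
    "P": lambda trace: trace + ["framed the problem (perception)"],
    "I": lambda trace: trace + ["gathered relevant facts (information)"],
    "J": lambda trace: trace + ["weighed the evidence (judgment)"],
    "D": lambda trace: trace + ["selected an action (decision choice)"],
}

def run_pathway(pathway_id):
    trace = []
    for stage in PATHWAYS[pathway_id]:
        trace = STAGES[stage](trace)
    return trace

for step in run_pathway(6):   # the I -> P -> J -> D pathway
    print("-", step)
```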
The coming of AI will eventually produce occupations and positions we cannot even describe at the present time. Examples today comprise AI engineers, data scientists, data-labelers, and robot mechanics. Integrating AI optimization and human cognitive theories (i.e., decision-making) may reinvent many occupations and generate even more opportunities. AI will handle routine tasks in concert with people, who will perform the tasks that necessitate consideration and compassion in specific cases. For example, future doctors will still lead as a patient’s primary overseer; however, they can depend on AI diagnostic tools to determine the best treatment. This can transform the doctor’s responsibility into that of a supportive caregiver, providing them more time with their patients.
Opportunities are now in progress for research developers. These people interview workers, conduct surveys, and build tools that provide a more quantitative perspective on what is happening on various Internet platforms. There is great optimism pertaining to AI’s future impact on health care. Further, this impact extends to possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. In addition, AI may be a factor in broad public-health programs constructed around enormous amounts of data that may be attained in the coming years, covering everything from personal genomes to nutrition. AI’s impact extends also to long-expected transformations in formal and informal education systems.
Quite a few projects involve labeling data, such as image data (i.e., facial recognition). This data can be supplied to supervised or unsupervised machine learning models. Further, other projects involve transcribing audio. For example, when you talk to Google Assistant or Amazon Alexa, they are capable of answering questions as well as executing smart home commands. These voice recognition algorithms learn to understand speech better. Moreover, organization workers can label websites that might be filled with hate speech or pedophilia. This process will reduce the possibility of the user being exposed to such websites.
AI has made fantastic strides to date, but it often needs huge amounts of data and computing power to arrive at a decision. Researchers are squeezing powerful AI algorithms onto simple, low-power computer chips that can run for months on a battery. These new developments could help bring more advanced AI capabilities, like image and voice recognition, to home appliances and wearable devices, along with medical gadgets and government, commercial, and industrial sensors. This technology could also assist in keeping data private and secure by diminishing the need to send anything to the cloud.
Microcontrollers are moderately simple, low-cost, low-power computer chips located inside billions of products, comprising automobile engines, power tools, TV remotes, and medical implants. These tools can host deep learning algorithms and neural network programs that loosely mimic the manner in which neurons connect and fire in the human brain. Deep learning algorithms normally run on dedicated computer chips that apportion the parallel computations required to train and operate the network more effectively.
For example, chatbots have recently emerged as a new communications conduit for consumer brands and customers. Understanding language is one factor in perfecting chatbots. Another factor is employing empathy (see www.wired.com/wiredinsider/2018/04/ai-future-work/). A new upsurge of startups is inserting emotional intelligence into chatbot-based communication. This is an AI component of natural language processing (NLP), which is the ability of a computer program to understand human language as it is spoken.
Another type of AI technology is computer vision, which works on enabling computers to see, recognize and process images in a similar way that human vision does, and then provide appropriate output [1].
Computer vision is an interdisciplinary field that deals with how computers can be made to achieve high-level understanding from digital images or videos. From the perspective of engineering, it seeks to automate tasks that the human visual system can do. Therefore, as NLP is to speech, computer vision is to sight. Further, it is a representation of imparting human intelligence and instincts to a computer. Nonetheless, it is an arduous task to empower computers to recognize images of different objects.
Although early computer vision attempts date back to the 1950s, the convergence of hardware and software improvements, along with an inflow of new visual data from mobile devices and other cameras, is driving a computer vision resurgence. Moreover, as AI proficiencies have developed, they can empower machines to assess items that individuals cannot. As a result, computer vision can learn to view and interpret the visual world in much the same way humans process it through their vision.
Computer vision's objective is not only to see, but also to process and deliver useful results based on the observation. For example, a computer could generate a 3D image from a 2D image, such as those in automobiles, and furnish critical data to the automobile and/or driver.
Automobiles fitted with computer vision could be able to identify and discriminate objects on and around the road, such as traffic lights, pedestrians, and traffic signs, and act appropriately to the situation. This AI device could offer inputs to the driver or even make the automobile stop if there is an unexpected obstacle on the road. Finally, computer vision capabilities can process, categorize, and understand images and video at a scale and speed that would otherwise be unattainable for humans.
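A minimal sketch in the spirit of this paragraph is shown below. It assumes the OpenCV library and a placeholder image file named road_scene.jpg, and it uses OpenCV's stock HOG-based people detector rather than the perception systems used in production vehicles.

```python
import cv2

# Placeholder path; substitute any street-scene image.
image = cv2.imread("road_scene.jpg")
if image is None:
    raise SystemExit("could not read road_scene.jpg")

# OpenCV's built-in HOG descriptor with its default pedestrian detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Detect pedestrians; each detection is a bounding box (x, y, width, height).
boxes, weights = hog.detectMultiScale(image, winStride=(8, 8))

for (x, y, w, h) in boxes:
    cv2.rectangle(image, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)

print(f"detected {len(boxes)} pedestrian candidate(s)")
cv2.imwrite("road_scene_annotated.jpg", image)
```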
By utilizing NLP as well, computer vision technology may be able not only to encapsulate, index, store, and extract information from visual data, but also to curate, normalize, and understand content from images or documents [15]. Computer vision technology assists healthcare providers in assessing and managing conditions, and fuels automated driving solutions like Google’s Waymo and Tesla’s Autopilot. Amazon Web Services invented a programmable deep learning-enabled camera and kits that organizations can implement to develop their own computer vision applications.
Other future commercial applications could include smart glasses and augmented reality devices that continuously run object detection. Another application will be sensors that are designed to predict problems with industrial machinery. Currently, such sensors need to be wirelessly networked so that the computation can be done remotely, on a more powerful system. Finally, another important application could be in medical devices that use machine learning to continuously monitor blood pressure.
Nonetheless, there are limits to the capabilities of today’s AI. Although AI is prodigious at optimizing for an exceedingly narrow objective, it is incapable of selecting its own goals or of thinking creatively. And while AI is phenomenal in the ruthless world of numbers and data, it is deficient in social skills and empathy. The capability to make another individual feel understood and cared for is presently lacking from AI apparatuses. Correspondingly, in the domain of robotics, AI is capable of operating many rudimentary tasks like stocking goods or driving automobiles; however, it lacks the delicate dexterity required to attend to elderly people or toddlers.
There has been intensifying interest in robotics and automation, both in accounting and in the financial markets' use of big data. Robotics can deliver other benefits, such as improved compliance, faster turnaround times, and higher quality.
Financial robots are based on robotic process automation technology. Robotic process automation mainly imitates a user’s manual operations, such as the automatic generation of accounting data and the simulation of accounting decision-making risk. The goal of robotic process automation is to substitute automation for human workers. Furthermore, robotic process automation is a technical means to automate human labor by executing repetitive instructions based on data programming and rules [15-20].
According to the definition of robotic process automation, the Institute for Robotic Process Automation and Artificial Intelligence [15] designates it a technology application that enables organizational employees to configure computer software, or ‘robots,’ to capture and interpret existing applications in order to process transactions, manipulate data, trigger responses, and communicate with other digital systems [16]. The financial robot is the application of robotic process automation in accounting and finance that relies on big data, the Internet, and AI.
In May 2017, Deloitte Touche Tohmatsu took the lead in launching financial robot products, which had an immediate impact in financial circles [21]. KPMG, PricewaterhouseCoopers and Ernst & Young, which with Deloitte make up the four largest international accounting firms, have also successively launched their own financial robots and financial robot solutions [22-26].
Robotic process automation means that more and more retail financial consumers interact with financial service providers through a financial robot driven by algorithms or other mathematical models [24]. The financial robot is based on robotic process automation technology, which is itself based on computer coding and rule-based software. In addition, it can automate manual activities by performing repeated rule-based tasks [25, 16].
The financial robot performs the same tasks as a human by issuing program commands from a computer. For example, some of the tasks it performs include but are not limited to data entry, analysis, report generation and other work in accordance with the computer command in an orderly manner. Further, the accounting robot can work with other software on an existing desktop. The accounting robot can trigger the record button to generate a script robot when the user wants to perform an automated task. With some programs, the script robot can browse e-mails, open files, identify useful information, and enter data into the system. Also, the financial robot can monitor the progress of the program in real time, send emails to managers and report abnormal data.
Financial robots require precise programming commands to perform tasks; imprecise program commands can skew data collection [17]. It has been noted that the task instructions performed by the financial robot are targeted, that is, they automate commonly understood work [17]. If a task is being performed for the first time, the financial robot is not suitable for it; its use presupposes the manager's prejudgment that there can be no uncertain results.
Traditional accounting work deals with the after-the-fact records of business activities, including the accounting of economic results, the entry of vouchers, the statistics of data, and the accounting process behind financial statements. These basic accounting tasks require accountants to summarize and report. Moreover, basic accounting work is simple and has a high repetition rate, yet it requires a lot of time and energy to complete. To a certain extent, it ties up very valuable human resources of an organization. Nonetheless, the implementation of financial robots can liberate accountants from such simple and repetitive work and enable them to devote their energy to higher-end financial management and pre-decision-making work.
As for the industry, accounting firms are vigorously developing financial robots to aid their work. From the vantage point of auditing, coordination, internal control testing, and other labor-intensive audit tasks can be completed through financial robots. Hence, the role of auditors is changing from that of yesteryear's data collectors, processors, and analysts to that of evaluators of audit procedures. Auditors are gradually assigning the parts of the audit process that can be automated to the financial robot for implementation. For the financial bots' program commands to run as expected, they require particularized instructions for performing specific tasks. Consider, for example, executing a “check unread messages” command, for which an individual would need many pre-embedded conditions to perform the same task: the financial robot needs to open an Internet browser, enter the relevant password to log in to the organization's mailbox, and check the unread mail [17].
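The "check unread messages" workflow can be sketched as a small script. The version below uses Python's standard imaplib and smtplib modules instead of driving a browser, and every host name, address, and password shown is a placeholder; it only illustrates the kind of pre-embedded steps a financial robot executes.

```python
import imaplib
import smtplib
from email.message import EmailMessage

# Placeholder connection details for illustration only.
IMAP_HOST = "imap.example-firm.com"
SMTP_HOST = "smtp.example-firm.com"
ROBOT_USER = "finance-robot@example-firm.com"
ROBOT_PASSWORD = "replace-me"
MANAGER = "manager@example-firm.com"

def count_unread_messages():
    # Pre-embedded steps: log in to the organization's mailbox and check unread mail.
    with imaplib.IMAP4_SSL(IMAP_HOST) as mailbox:
        mailbox.login(ROBOT_USER, ROBOT_PASSWORD)
        mailbox.select("INBOX")
        status, data = mailbox.search(None, "UNSEEN")
        return len(data[0].split()) if status == "OK" else 0

def report_to_manager(unread_count):
    # Final step: report progress (or abnormal data) to the manager by e-mail.
    msg = EmailMessage()
    msg["From"], msg["To"] = ROBOT_USER, MANAGER
    msg["Subject"] = "Daily mailbox check"
    msg.set_content(f"The robot found {unread_count} unread message(s).")
    with smtplib.SMTP(SMTP_HOST) as server:
        server.send_message(msg)

if __name__ == "__main__":
    report_to_manager(count_unread_messages())
```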
The Public Company Accounting Oversight Board [18, 19] emphasizes that revenue is an area with high audit risk, which indicates that there is a chance for additional audit work. Research has indicated that financial robots can improve audit quality by testing the overall situation of income transactions. In addition, financial robots can allow auditors to more accurately assess and deal with the risk of significant misstatement of income [17]. The promotion of financial robots does not imply that there is no need for accountants. That is, financial robots are good at data integration and report generation, but they cannot evaluate the economic environment or make scientific and reasonable decisions. This requires traditional accounting practitioners to make management decisions through professional knowledge [20].
There are also risks in the use of financial robots. Because the financial robot needs to manage data, which is generally stored in the cloud, there will be privacy and security issues. In the information age, it is not uncommon for networks to be breached by hackers [21]. Deloitte Touche Tohmatsu, one of the most famous accounting firms, suffered a serious network security attack in 2017: hackers broke into the cloud-based Deloitte e-mail system and illegally obtained customer records [27].
Financial robots work with big data. Barocas and Selbst [28] suggested five mechanisms by which Big Data and the algorithms that process it may unfairly affect different groups:
1. Target variables. A proxy is selected when the goal of “quality” is not directly accessible. For instance, how does one identify a promising prospective employee? If performance reviews are designated as a measure, then any bias in an organization will be transmitted to the hiring algorithm. Likewise, longevity with an employer and other combinations of measures each have their own uncertainties.
2. Training data. Just as target variables can inherit ingrained bias, so may the data used to train the model. Using social media data builds in other sources of bias. There is no easy escape.
3. Feature selection. Features are the variables or attributes that an organization may assemble into a model. Should the algorithm include the reputation of the applicant’s university in the score for a job applicant? Or should the algorithm include the zip code of their home address? Both may show a relationship with categories such as race, class, or socioeconomic status.
4. Proxies. Criteria that are genuinely pertinent in making rational and well-informed decision choices may also happen to operate as reliable proxies for membership in a particular group. Employers may find that members of certain groups consistently receive disadvantageous treatment, since the criteria that establish the desirability of employees happen to be held at systematically lower rates by members of these groups (a numerical sketch of this proxy effect follows the list).
5. Masking. All of the above mechanisms can occur unintentionally; nonetheless, they can also transpire with intent if the employer has erroneous preconceived notions, and the algorithm may then serve to mask that bias.
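The proxy mechanism can be made concrete with a few lines of simulation. In the sketch below, the "zip code" feature, the group assignment, and all parameters are synthetic assumptions; a selection rule that never sees the protected attribute still produces different selection rates across groups because it leans on the proxy.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Protected attribute (never shown to the selection rule).
group = rng.integers(0, 2, size=n)              # 0 or 1

# "Zip code" proxy: correlated with group but not with true ability.
zip_score = np.where(group == 1,
                     rng.normal(0.8, 1.0, n),   # group 1 tends to score higher
                     rng.normal(0.0, 1.0, n))

true_ability = rng.normal(0.0, 1.0, n)          # independent of group

# A screening rule that mixes a legitimate signal with the proxy.
screen_score = 0.5 * true_ability + 0.5 * zip_score
selected = screen_score > np.quantile(screen_score, 0.8)   # keep the top 20%

for g in (0, 1):
    rate = selected[group == g].mean()
    print(f"selection rate for group {g}: {rate:.2%}")
# Even though 'group' never enters the rule, the zip-code proxy
# produces systematically different selection rates across groups.
```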