AI Act compact - Peter Hense - E-Book

Description

The EU AI Act is here, and contrary to popular opinion, it is not just Europe's problem. As the first comprehensive law to regulate AI systems, the AI Act attempts to establish a global framework, setting limits on dynamic technological developments and creating new legal responsibilities for organisations worldwide. The AI Act's definition of AI systems is expansive, covering a wide range of technologies, even those that, until recently, were considered traditional machine learning models. This makes understanding and preparing for compliance even more critical, because if your business involves AI, the AI Act is now your business.

"AI Act Compact" is your go-to tool for tackling the challenges imposed by the Act. Written by Tea Mustac and Peter Hense, experienced legal experts and hosts of the podcast "RegInt: Decoding AI Regulation," this book provides a deep dive into the AI Act's key provisions, processes, and real-world implications.

The AI Act introduces a new risk-based framework, establishing compliance assessments and relying on harmonized standards. It imposes obligations for data governance, data quality management, accuracy and robustness, risk management, explainability, non-discrimination, accountability, liability, human controllability and more. Implementing these requirements presents significant practical challenges, especially given their broad application to numerous actors along the AI supply chain. Drawing heavily on international technical standards from CEN/CENELEC, ISO, IEC, and IEEE, the authors provide a practical toolkit for managing AI risks and ensuring compliance. Whether you're a lawyer, data scientist, or machine learning engineer, this book offers clear, actionable strategies for staying compliant and competitive in this fast-evolving landscape.

The e-book can be read in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Number of pages: 585

Year of publication: 2024




AI Act Compact

Compliance, management & use cases in corporate practice

by

Peter Hense

Rechtsanwalt, Leipzig

and

Tea Mustać

Mag. iur., Leipzig

  

Fachmedien Recht und Wirtschaft | dfv Mediengruppe | Frankfurt am Main

Bibliographic information of the German National Library

The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de.

ISBN: 978-3-8005-1960-6

© 2025 Deutscher Fachverlag GmbH, Fachmedien Recht und Wirtschaft, Mainzer Landstr. 251, 60326 Frankfurt am Main, [email protected], www.ruw.de

This work and all its individual parts are protected by copyright law. Any unauthorized use outside the narrow limits laid down by copyright law (Urheberrechtsgesetz) is unlawful and liable to prosecution. This applies in particular to reproduction, editing, translation, microfilming, and storage and processing in electronic systems.

Printing: WIRmachenDRUCK GmbH, 71522 Backnang

Preface

The first question you should ask yourself when building an AI system is: can we do it without AI?

– Common Sense in Machine Learning

Our book is an early work, yet it encapsulates the essence of nearly three years of intensive engagement with the EU AI Act. The insights and understandings that are succinctly gathered in the following pages of AI Act Compact stem from our long-standing involvement with machine learning projects in research and development. From dynamic pricing and biometric identification to social CRMs and systems for speech and emotion recognition – we have dissected all these technologies at our desks. That is the technical side. However, since 2021, we have focused on the specific European regulation of machine learning, which is today commonly referred to as “artificial intelligence.”

The early organization of compliance along the AI supply chain for companies and organizations worldwide compelled us to address topics such as model disgorgement by the FTC, bias audits in New York, and the first comprehensive AI law, the Colorado AI Act, long before the official introduction of the AI Act. Without our close network of colleagues from various disciplines around the globe, we would not have been able to comprehend and respond to these developments, especially in light of the boom in generative AI systems. But we were fortunate to count on good friends.

Partly out of curiosity, partly on assignment, but always with enthusiasm, we have been involved with the IEEE in standardizing Algorithmic Bias Consideration (P7003) since 2017, conducted Algorithmic Impact Assessments as early as 2018, drafted the first Fundamental Rights Impact Assessments just before the pandemic in 2020, and assisted with the first bias audits of automated employment decision tools under New York’s Local Law 144 at the end of 2022. In 2023, we reviewed banks’ automated credit scoring algorithms to ensure discrimination-free practices, and today we are pioneers in implementing AI management systems in accordance with ISO 42001. Yet, faced with the sheer volume and diversity of the topic “artificial intelligence,” we often feel like beginners. More than once, we had to revise our assumptions and adjust our evaluations – technology law often lags behind the technology itself.

This book is not about legal hair-splitting – for that, we felt the time was too precious. Instead, we aim to create understanding and present practical solutions for all those who design, develop, offer, and deploy high-risk AI systems. Contrary to some wishful thinking, that includes quite a number of organizations. With AI Act Compact, you receive a concise, informative, and personally tailored insight into what we derive from the AI Act. It incorporates thousands of hours of work and exchanges with colleagues from data science, statistics, ML engineering, cybersecurity, sociology, standardization, and law. We do not claim to know everything in every detail. However, our readers can rely on the fact that what we write is well-considered and often refined through discussions with other experts.

The AI Act is not a well-crafted law. It is riddled with peculiarities, linguistic confusions, repetitions, gaps, and contradictions. But the main lines are recognizable, and we have made every effort not to turn every linguistic quirk into a legal elephant but to provide practical guidance that allows providers and deployers to take the right path.

And we have enjoyed the work – just as we enjoy our podcast “RegInt: Decoding AI Regulation,” where since 2023 we have been informing and entertaining those interested in regulation, technology, and the background of AI on a monthly basis. We have reviewed the available literature from recent years and incorporated current research findings from 2020 to 2024 where they appeared relevant to understanding legislative terms and intentions. The AI Act did not fall from the sky; it is based largely on these research findings and societal developments – the lobbying of companies as well as the engagement of civil society NGOs. Additionally, the ongoing downstream integration of AI technologies continues to shape and refine regulatory approaches worldwide.

We see significant added value in another focus area of our book: the standards, technical reports, and guidelines from ISO, IEC, IEEE, and CEN/CENELEC. We even provide insights into those still in the non-public draft stage. Why? Because without the specialized knowledge condensed in these standards, the core requirements of the AI Act are difficult to understand. Standardization has the advantage that legal, scientific, and humanities stakeholders can rely on a common terminology and shared understanding. Believe us: Knowledge of and from interdisciplinary standards is significantly more important in the practical implementation of AI Act compliance than reading the latest law journals.

And now, enjoy reading!

Leipzig, September 2024

Tea and Peter

Table of Contents

Preface

I. On the Scope

1. The Scope of the AI Act

a. On AI Systems

(1) Introduction

(2) A Deeper Dive: Qualitative and Quantitative Aspects of AI Systems

(3) On Ends and Beginnings: AI Infrastructure and AI Systems

(4) Solutions From the Field of AI Safety, Computation and UML

(5) Conclusion

b. Risk-based Approach

c. Personal Scope

(1) Providers

(2) Product Manufacturers

(3) Deployers

(4) Importers and Distributors

d. Territorial Scope

e. On AI Literacy

(1) Operationalizing AI Literacy

i. Objectives

ii. Training Needs, Including Target Groups and Levels of Expertise

iii. Training Content

iv. Training Methods

v. Training Frequency

vi. Evaluation

(2) Final Remarks on AI Literacy

2. The Scope of the Book

II. Design

1. Idea

2. Legal Requirements Engineering

3. Intended Purpose

a. Usability Engineering

b. Human-factor Engineering

4. Initial Risk Categorization and Assessment

III. Development

1. AI System Requirements

a. Quality Management System

(1) Types of Quality Management

(2) Structure and Main Components

(3) Software and AI System Quality Models

(4) Operationalizing the Legal Requirements Under Article 17

i. First Line

ii. Second Line

iii. Third Line

b. Risk Management

(1) Risk Identification

(2) Risk Assessment

i. Factors for Estimating Likelihood

ii. Factors for Estimating Impact

(3) Risk Mitigation

(4) Risk Acceptance

(5) Risk Tolerance

(6) Establishing Controlling, Monitoring and Incident Response Processes

i. Controlling

ii. Monitoring

iii. Incident Response Process

c. Human Oversight

d. Accuracy, Robustness, and Cybersecurity of AI Systems (Article 15 AI Act)

(1) Overview

i. Guiding Principles for the AI System Life Cycle (Article 15(1))

ii. Benchmarking and Standardization, Article 15(2)

iii. Documentation Obligations, Article 15(3)

iv. Resilience, Article 15(4)

v. Cybersecurity, Article 15(5)

(2) Detailed Explanations

i. Significance of Harmonized Norms and Standards in the AI Act

ii. Consistent Performance Throughout the AI System Life Cycle: Universal Lessons in Machine Learning

iii. What Are Feedback Loops and How Should They Be Addressed?

iv. “Engineering Safety in Machine Learning”

(a) Standardization: Accumulated Experience

(b) AI Engineering Best Practices

(c) Case Study: Safety and Robustness of AI Systems in the Automotive Sector

(d) Inherently Safe Design in Machine Learning

(e) A Framework for Testing AI Systems

(f) AI Red Teaming

(g) NIST Test Platform “Dioptra”

2. Overview of Data Governance & Data Management (Article 10)

a. Machine Learning and Training Data in a Nutshell

b. Mandatory Quality Criteria for Training, Validation, and Testing of AI Systems, Article 10(1)

c. Data Governance & Data Management, Article 10(2)

d. Standardization of Data (Quality) Management

e. Definitions and Metrics

(1) Training Data, Article 3(29)

(2) Validation Data, Article 3(30)

(3) Validation Dataset, Article 3(31)

(4) Testing Data, Article 3(32)

f. The Individual Elements

(1) Relevant Design Choices, Article 10(2)(a)

(2) Data Collection Processes and the Origin of Data, and in the Case of Personal Data, the Original Purpose of Data Collection, Article 10(2)(b)

i. Data Sources in Machine Learning Practice

ii. Datasheets for Datasets

iii. Data Protection Law

(3) Relevant Data-Preparation Processing Operations, such as Annotation, Labeling, Cleaning, Updating, Enrichment, and Aggregation, Article 10(2)(c)

i. Annotation and Labeling

ii. Data Cleaning

iii. Updating

iv. Enrichment

v. Aggregation

(4) Formulation of Assumptions, Particularly Regarding the Information that the Data are Intended to Measure and Represent, Article 10(2)(d)

(5) Assessment of the Availability, Quantity, and Suitability of the Necessary Data Sets, Article 10(2)(e)

(6) Examination of Possible Biases Affecting Health and Safety, Fundamental Rights, or Leading to Discrimination Prohibited Under Union Law Article 10(2)(f) and Appropriate Measures to Detect, Prevent, and Mitigate Possible Biases Identified According to Article 10(2)(g)

(7) Identification of Relevant Data Gaps or Shortcomings Preventing Compliance and Appropriate Mitigation Measures, Article 10(2)(h)

g. Combating Bias and Discrimination, Articles 10(3), (4), and (5)

(1) Recitals and Ethics Guidelines for Trustworthy AI by the High-Level Expert Group on Artificial Intelligence (HLEG AI, 2019)

(2) Fundamental Rights Agency (FRA), LIBE, HLEG AI and the Toronto Declaration

(3) Research and Science: It’s Not Just the Data, Stupid!

(4) International Standardization

(5) Relevant, Sufficiently Representative, Free of Errors and Complete in view of the Intended Purpose

i. The Purpose of the System

ii. Relevance

iii. Representativeness

iv. Error-Free

v. Completeness

(6) Balanced Statistical Characteristics in Datasets

(7) Geographically, Contextually, Behaviorally, or Functionally Typical Datasets

i. Geographical Setting

ii. Contextual Setting

iii. Behavioural Setting

iv. Functional Setting

(8) Processing of Sensitive Data for the Analysis and Mitigation of Biases

(9) AI, Bias, and European Anti-Discrimination Law: An Overview

i. Legal Framework

ii. Scope of Application and Protected Characteristics

iii. Direct and Indirect Discrimination

iv. Justification

v. Positive Measures

vi. Reversal of Burden of Proof

vii. Legal Consequences

viii. Respondents

3. Testing & Compliance

a. Sandboxes

(1) Sandboxes in the AI Act

(2) Establishment and Operation of AI Sandboxes

(3) A Case for Participating in AI Sandboxes?

(4) Sandbox Plan

(5) Exit Reports

(6) Consequences

(7) Processing of Data Within the Sandbox

b. Testing in Real World Conditions Outside AI Regulatory Sandboxes

c. Conformity Assessment

(1) What?

(2) When?

(3) Who and How?

(4) Exceptions Make The Rule

(5) Why?

d. Harmonised Standards, Common Specifications & Presumption of Conformity

(1) Harmonised Standards

(2) Presumption of Conformity

(3) Common Specifications

(4) Codes of Conduct and Guidelines

e. Placing on the Market

4. Technical Documentation

IV. Deployment

1. Providers

a. The Obvious

b. Documentation Keeping (Article 18) and Automatically Generated Logs (Article 19)

c. Risk Management

d. Human Oversight

e. Transparency and Provision of Information to Deployers and/or End Users

f. Post-market Monitoring (Article 72), Corrective Action & Duty of Information (Article 20), Reporting Serious Incidents (Article 73) and Cooperation with the Authorities

(1) Goal and Purpose

(2) Key Features

(3) Post-market Monitoring Plan

(4) Exit Report

(5) Interplay with Other Systems and Processes

2. Deployers

a. The Obvious: Due Diligence, Use According to Instructions for Use (Logs, Human Oversight), Transparency, Monitoring and Duty of Information, Reporting

(1) Due Diligence: The Legal Deep-dive

(2) Use According to Instructions for Use (Logs, Human Oversight)

(3) Transparency

(4) Monitoring and Duty of Information

b. Data Governance

c. Fundamental Rights Impact Assessment (FRIA)

(1) Who?

(2) What?

(3) Why?

i. Make Adjustments and Implement Additional Measures

ii. To Assess the Acceptability of Residual Risks

(4) End Result

d. Consulting the Works Council

e. Data Protection Impact Assessment

f. Article 25 Obligations Along the AI Value Chain

V. Special Considerations

1. IP, TDM and GenAI in the AI Act

a. General Purpose AI in the AI Act

(1) AI Model Providers and AI System Providers

(2) General-purpose AI Model Provider’s Obligations

(3) General-purpose AI System Provider’s Obligations

(4) Copyright and General-purpose AI Models

b. Training and Training Data

(1) GenAI Needs Data (and It Needs a Lot of It)

(2) Memorization and Overfitting

(3) Text and Data Mining

i. The Copyright in the Digital Single Market Directive (CDSM Directive)

ii. TDM and Transformer Models

iii. Three-Step Test

iv. Conclusion

c. Use, Inputs and Outputs

(1) Use and Inputs

(2) Outputs

(3) The Curious Case of Retrieval-Augmented Generation (RAG)

2. AI in Financial Institutions

a. Two Important Caveats

b. One Additional (Major) Burden

c. Where It Gets Easier

3. Biometric and Emotion Recognition Systems

a. Biometric Identification

b. Biometric Verification

c. Biometric Categorisation

d. Emotion Recognition

I. On the Scope

For, usually and fitly, the presence of an introduction is held to imply that there is something of consequence and importance to be introduced.

– Arthur Machen

Before we can begin discussing the AI Act and its expected practical consequences for anyone developing or deploying an “AI system”, it is essential to clarify the scope of the AI Act as opposed to the scope of this book. This distinction is crucial because the Act itself is far too broad to be covered comprehensively, at least not without writing several sequels of the AI Act Compact book series.

Furthermore, it is essential to emphasize that this book serves as a practical handbook. Every single chapter in this book could merit its own doctoral thesis if we were to explore the theoretical and philosophical nuances, as well as legal and technical discussions associated with the underlying topics. That is not our goal here. Instead, we aim to transform something highly abstract into something graspable, understandable, and, most importantly, implementable. Therefore, though some nuances and discussions will be mentioned, that is not the primary purpose of this book. Finally, the views presented here are not (nor do they aim to be) comprehensive. Now that we have clarified this, we can begin.

1. The Scope of the AI Act

a. On AI Systems

(1) Introduction

After multiple conceptual changes to the definition, the AI Act was finally based on the OECD definition of an AI system. This is problematic for several reasons, starting with the fact that adopting this definition, intentionally or not, broadens the scope of an already very broad definition. Furthermore, the OECD definition was never intended to serve as a legal definition but rather as a programmatic statement typical of public policymaking, which makes it per se too vague to serve as one. Nonetheless, Article 3(1) of the AI Act now defines an “AI system” as:

– A machine-based system

– Designed to operate with varying levels of autonomy

– That may exhibit adaptiveness after deployment,

– That infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions, and

– That can influence physical or virtual environments.
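For orientation only, the cumulative structure of this definition can be read as a checklist. The function below is a hypothetical sketch of that legal test, not an authoritative classifier; all names are our own invention, and note that adaptiveness is deliberately left out of the conjunction, since a system only “may” exhibit it.

```python
# Illustrative sketch of the cumulative Article 3(1) criteria.
# All names are hypothetical; the actual legal assessment is, of course,
# far more nuanced than a boolean checklist.

def is_ai_system(machine_based: bool,
                 some_autonomy: bool,
                 infers_outputs: bool,
                 influences_environment: bool,
                 adaptive: bool = False) -> bool:
    """Return True if the mandatory definitional criteria are all met.

    'adaptive' is accepted but ignored: under the AI Act a system
    "may exhibit adaptiveness", so it is not a required element.
    """
    return (machine_based
            and some_autonomy
            and infers_outputs
            and influences_environment)

# A classic spam filter: machine-based, sorts mail without human
# intervention, classifies its inputs, and changes what reaches the inbox.
print(is_ai_system(machine_based=True, some_autonomy=True,
                   infers_outputs=True, influences_environment=True))  # True
```

As the ignored `adaptive` parameter signals, even a non-learning system can satisfy every mandatory element of the definition.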

Recital 12 attempts to provide some clarity on the matter by stating that, firstly, autonomy is to be understood as some degree of independence of the AI system from the human operator. However, this clarification fails to consider that most systems today possess at least the minimum degree of independence associated with process automation. Just think of your spam filter. Yes, of course, we can go check the spam filter and see what the algorithm has sorted out as “spam”. We can also choose to override the algorithmic label. However, the main point is that the algorithm independently sorted an incoming email into the spam folder, which also means you saw the email five days later than you would have otherwise. Not that anyone is complaining, as this situation is still preferable to receiving all the spam emails in our regular folder. Still, if any degree of autonomy is sufficient to satisfy this criterion, then it may very well be the case that even very simple programs we have been using for years, or even decades, fulfil it.

Secondly, in terms of adaptiveness, Recital 12 clarifies that it refers to self-learning capabilities, which allow the system to change while in use. Here, one might be tempted to sigh in relief, as many systems do not have such capabilities. However, this is where the AI Act definition crucially deviates from the OECD’s, making the material scope of the AI Act virtually unlimited. While the OECD definition demands that AI systems be adaptive, the AI Act merely states that these systems “may exhibit adaptiveness”, meaning that they do not necessarily have to. To continue with our previous example, this implies that even our old-school spam filters, which do not improve over time but have been sorting our emails automatically all along, still fall within the definition, as adaptiveness is apparently not a decisive factor. The third criterion is also clarified in the Recital. Inferences should be interpreted in light of development techniques that enable inference, which include “machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved.” Furthermore, inferences are not a specific feature of artificial intelligence, but a general process used in many fields of science, philosophy, and daily life, such as in statistical calculations or medical diagnoses. Inferences, according to the international conception of the term,1 are used to draw conclusions from data and models, for example, in predicting outcomes or classifying data. In machine learning, the inference phase is when the trained model is used to make new predictions or decisions. While this specific application is technical, the underlying process of reasoning exists in many other scientific and practical disciplines. Unfortunately, this again fails to serve as a distinctive criterion between many traditional systems used since the nineties and an “AI system”.
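The difference between the training phase and the inference phase mentioned above can be made concrete with a deliberately tiny sketch (all messages and word statistics below are invented for illustration): a word-frequency “spam model” is first learned from labelled examples and then, at inference time, applied to new input.

```python
from collections import Counter

# Training phase: learn which words are indicative of spam from
# labelled examples (all data here is made up for illustration).
spam_msgs = ["win money now", "free money offer", "win a free prize"]
ham_msgs = ["meeting at noon", "project status update", "lunch at noon"]

spam_counts = Counter(w for m in spam_msgs for w in m.split())
ham_counts = Counter(w for m in ham_msgs for w in m.split())

def spam_score(message: str) -> float:
    """Inference phase: apply the learned word statistics to new input."""
    words = message.split()
    spam_hits = sum(spam_counts[w] for w in words)  # Counter returns 0 for unseen words
    ham_hits = sum(ham_counts[w] for w in words)
    total = spam_hits + ham_hits
    return spam_hits / total if total else 0.5  # 0.5 = no evidence either way

print(spam_score("free money"))    # 1.0 -> likely spam
print(spam_score("noon meeting"))  # 0.0 -> likely ham
```

The point of the example is that the rule “free money is spam” was nowhere written down by a human; it was inferred from the training data, which is precisely the third definitional criterion.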

The fourth component, which involves influencing the AI system’s environment, is not further clarified. However, it is fairly safe to say that integrating any kind of system into anything will necessarily influence that system’s environment. The environment is to be understood as “the contexts in which the AI systems operate, whereas outputs generated by the AI system reflect different functions performed by AI systems”. This can encompass anything, because as soon as we implement a system into existing processes, we do so precisely to influence those processes by making them more efficient, simpler, faster, more user-friendly, etc. Furthermore, when humans use an AI system for any given purpose, the AI system will at the very least steer the human thought process, thus also exerting influence over it. Since the condition comes with no further threshold, such as the influence being decisive or even just major, this criterion can be considered fulfilled by any system automating any part of any process, as it will always influence at least that one part of the process.

One straw to grasp at here is the part of Recital 12 stating that AI systems are and should be viewed separately from simpler traditional software systems or programming approaches, meaning that the definition should not cover systems based on rules defined solely by natural persons to automatically execute operations. At least here one can argue that our spam filter is not AI so long as someone hardcoded all the “trigger words” that result in an email being designated as “spam”. Needless to say, no one actually does that anymore.
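By contrast, the carved-out, purely rule-based variant can be sketched in a few lines: every rule below is defined solely by a natural person, and nothing is inferred from data (the trigger words are, needless to say, invented).

```python
# A hand-written rule-based filter: no learning, no inference from data,
# so under Recital 12 it should fall outside the "AI system" definition.
TRIGGER_WORDS = {"lottery", "inheritance", "jackpot"}  # hypothetical list

def is_spam(message: str) -> bool:
    """Flag a message iff it contains a hardcoded trigger word."""
    return any(word in TRIGGER_WORDS for word in message.lower().split())

print(is_spam("claim your jackpot now"))  # True
print(is_spam("see you at lunch"))        # False
```

The fragility of such filters (a single unlisted synonym slips through) is exactly why, in practice, hardcoded rules have long been replaced by learned models, which pulls real-world spam filters back into the definition.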

Finally, there has been a lot of theoretical discussion about this definition and its delimitation. Some authors present novel and somewhat creative criteria for differentiating between traditional systems and AI systems, while others reinterpret parts of the previously analysed definition to allow for a reasonable delimitation from a business perspective, or even include use-case examples and their thorough examinations.2 While all these contributions are useful, we present a different and much more efficient approach:

Whenever you have to break your head over whether you are dealing with an AI system in the sense of the AI Act, you most likely are.

Many may be unhappy with or surprised by our simple interpretation. However, we consider it useful for several reasons:

– Any guidelines from the Commission based on Article 96 of the AI Act or from authorities on what is or is not an AI system will take a while to become available.

– Not all AI systems are subject to all the obligations under the AI Act, so wasting too much time on this question might be counterproductive.

– Designating your system as an AI system will also allow you to market your system as an “AI system”, which tends to sell better than a plain old traditional software system.

– Furthermore, there are already other, very concise and standardized definitions of what an “AI system” is from an engineering perspective that support our view (more on this below).

Finally, there are also AI systems that, although they fall under the definition of an AI system, are not regulated under the AI Act. These exceptions are mentioned in Article 2 and include:

– AI systems that are neither placed on the market nor put into service in the Union, if their outputs in the Union are used exclusively for military, defense, or national security purposes (Article 2(3)),

– AI systems used by authorities in third countries or international organizations as part of international cooperation or international agreements in the field of law enforcement and judicial cooperation with the Union or with one or more Member States (Article 2(4)),

– AI systems or AI models, including their outputs, developed and deployed solely for the purpose of conducting scientific research (Article 2(6)).

(2) A Deeper Dive: Qualitative and Quantitative Aspects of AI Systems

After this brief introduction to the topic, we would like to contribute some necessary and helpful impulses to the current debate, derived from legal systematicity, technology, and business process modelling. The limitations that Recital 12 supposedly places on the unclear wording of the term “AI system” must be viewed critically, not only from a temporal perspective, where they read like an appeal to the legislator to change what has already been done, but also in light of the clear case law of the CJEU. Most recently, in case C-307/22 (judgment of 26 October 2023 – DW v. FT, para. 44), the court held that “the preamble to an act of EU law has no binding legal force and cannot be relied on either as a ground for derogating from the actual provisions of the act in question or for interpreting those provisions in a manner that is clearly contrary to their wording.”

In light of this clear limitation on the normative force of recitals, Recital 12 seems ambitiously formulated. This raises the question of whether the trilogue intentions of some stakeholders, aimed at counteracting an (arguably) overly broad definition of the term “AI system,” can have any practical relevance. In particular, the negative differentiation from “traditional systems”, which are themselves not further described in the recitals, provides a strong indication that, upon closer inspection, the definition in Article 3(1) encompasses a surprisingly wide range of existing systems, including applications and processes that would hardly claim the rather glamorous title of “Artificial Intelligence” for themselves.3 Therefore, instead of puzzling over the incoherent result of a rushed legislative process, one should focus on the systematic context of the norm when interpreting the term. This has also recently been confirmed by the CJEU in its judgment of 22 June 2023 (C-579/21 – Pankki S, para. 38), stating that “the interpretation of a provision of EU law requires that account be taken not only of its wording, but also of its context and the objectives and purpose pursued by the act of which it forms part.”

Based on this systematic approach, the CJEU has also previously been willing to close identified gaps in legal protection in order to maintain a high level of protection for citizens subject to the law. For example, in its judgment of 7 December 2023 (C-634/21 – OQ v. Land Hessen and Schufa, para. 61), the CJEU stated that “in circumstances such as those at issue in the main proceedings, in which three stakeholders are involved, there would be a risk of circumventing Article 22 of the GDPR and, consequently, a lacuna in legal protection if a restrictive interpretation of that provision was retained...” Furthermore, this decision also makes apparent the parallels between protection against automated decision-making and the various protective considerations embedded in the AI Act. For example, the AI Act explicitly demands a high level of protection for safety, health, and all fundamental rights enshrined in the Charter of Fundamental Rights of the EU. This goal is to be achieved through the use of trustworthy AI, as described by the Independent High-Level Expert Group on Artificial Intelligence (HLEG AI) in the “Ethics Guidelines for Trustworthy AI” from 2019, which were also highlighted in Recital 27 of the AI Act. Therefore, the term “AI system” should also be interpreted in light of the protective intention of the AI Act, especially since “Artificial Intelligence” means many things to many people.

The HLEG AI, in addition to their Ethics Guidelines, published a paper in April 2019,4 which addressed the question of what constitutes an AI system. Although this publication may only be considered as illustrative of the context of the legislative process, it can be assumed, given the legislator’s clear reference to the work of this expert commission in Recital 27, that their findings will have some influence on the judicial interpretation of the term “AI system”. Even more so as the system definition provided by the HLEG AI forms the foundation for the “Principles for Trustworthy and Ethical AI” adopted by the legislator in Recital 27.

The mentioned publication refers to the wording used by the European Commission in its Communication on AI:

“Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems), or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones, or Internet of Things applications).”5

For the purposes of our handbook, we will forgo a detailed analysis of the psychological and cognitive foundations of what constitutes “intelligence,” leaving such considerations to experts in the field.6 Instead, we will focus solely on the outcome, namely the definition of Artificial Intelligence Systems by the HLEG AI:

“Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information derived from this data, and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors, and actuators, as well as the integration of all other techniques into cyber-physical systems).”

This definition has a particular narrative charm, standing in stark contrast to the cumbersome, almost mechanical sequence of words found in Article 3(1) of the AI Act. It provides an alternative, multidimensional, interdisciplinary approach to the concept of “Artificial Intelligence,” and unlike the one that became law, it also displays a logical and hierarchical structure. The composition of the High-Level Expert Group on AI apparently ensured a level of technical quality that was sacrificed to more trivial interests during the legislative process.

On the other hand, artificial intelligence has also been a topic for engineering and standardization organizations like ISO (International Organization for Standardization), IEC (International Electrotechnical Commission), and IEEE (Institute of Electrical and Electronics Engineers) long before it garnered the attention of legal professionals. With the sobriety and precision typical of practicing scientists, ISO/IEC 22989:2022 (Information technology – Artificial intelligence – Artificial intelligence concepts and terminology) defines the term “AI system” under 3.1.4 as an “engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives.” Keeping things simple seems to be the general consensus among the experts involved in standard-setting. Focusing on the essentials and on the technological core of the debate reveals the strength of this definition. Further considerations, which are relevant but not essential to the definition itself, can be found in the “notes to entry”:

Note 1 to entry:

“The engineered system can use various techniques and approaches related to artificial intelligence to develop a model to represent data, knowledge, processes, etc., which can be used to conduct tasks.”

Note 2 to entry:

“AI systems are designed to operate with varying levels of automation.”

At this point, attentive readers will pause and recall that this sentence appears in a similar form in both the OECD definition and Article 3(1) of the AI Act, but with one very important difference. Instead of “varying levels of autonomy,” which is the wording of the AI Act definition, the norm refers to “automation.” Why? A possible and not necessarily flattering answer for those involved in the legislative process can be found in the previously cited ISO/IEC 22989 under section 5.13, “Autonomy, heteronomy and automation,” and the note to that section:

“In jurisprudence, autonomy refers to the capacity for self-governance. In this sense, ‘autonomous’ is also a misnomer as applied to automated AI systems, because even the most advanced AI systems are not self-governing. Rather, AI systems operate based on algorithms and otherwise obey the commands of operators. For these reasons, this document does not use the popular term autonomous to describe automation.”

Autonomy, often featured in works of science fiction as self-determined actions of technical systems, does not exist in this form in technological reality. However, there are various levels and forms of automation that can influence processes and decisions to varying degrees. In light of these technological realities, one can reasonably assume that a responsible legislator is committed not to fiction, but to shaping the real world. It is therefore reasonable to interpret the term “autonomy” in the context of the AI Act as “automation.”

This rational interpretation is particularly supported by the fact that the legislator demands a high level of protection for any form of automation of decisions concerning people, as soon as these have practical relevance. The CJEU extensively discussed this in the aforementioned case C-634/21, significantly expanding the scope of the prohibition on automated decision-making under Article 22 of the GDPR. A special focus was given to multi-layered cases, where multiple subprocesses lead to a final decision and individual partial scores sequentially influenced each other.7 This applies, for example, to automated creditworthiness assessments, performance optimization, and load balancing of cloud-based applications by microservices, or even recommendation engines of streaming services and social networks.

Once the decisions made by these automated (sub-)systems concerning natural persons exceed a certain threshold of significance and fall within the scope of Article 22 of the GDPR, a decision-making process unfolds that is not passively integrated but actively impacts all elements relevant within this process. In the context of such systems, legal practitioners working with the AI Act face significant logical and practical challenges, particularly since creditworthiness assessments are typically classified as “High-Risk AI systems” and therefore must be developed, implemented, and monitored with great care.
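The multi-layered constellation described above can be made tangible with a short sketch. All subsystem names, weights, and thresholds below are entirely invented for illustration; the point is only that each partial score feeds into the next stage and is recorded, so that the influence of every subsystem on the final decision remains traceable:

```python
from dataclasses import dataclass, field

# Hypothetical illustration: a credit decision built from chained
# sub-scores, where each partial score influences the next stage --
# the constellation discussed by the CJEU in C-634/21.

@dataclass
class DecisionTrace:
    steps: list = field(default_factory=list)

    def record(self, subsystem: str, score: float) -> float:
        self.steps.append((subsystem, score))
        return score

def payment_history_score(data: dict, trace: DecisionTrace) -> float:
    score = 1.0 - min(data["missed_payments"], 5) / 5
    return trace.record("payment_history", score)

def income_stability_score(data: dict, prior: float, trace: DecisionTrace) -> float:
    # The prior partial score shifts the weighting of this stage:
    # the sub-scores sequentially influence each other.
    score = min(data["years_employed"], 10) / 10 * (0.5 + 0.5 * prior)
    return trace.record("income_stability", score)

def final_decision(data: dict) -> tuple[bool, DecisionTrace]:
    trace = DecisionTrace()
    s1 = payment_history_score(data, trace)
    s2 = income_stability_score(data, s1, trace)
    approved = (0.6 * s1 + 0.4 * s2) >= 0.5
    return approved, trace

approved, trace = final_decision({"missed_payments": 1, "years_employed": 8})
# The trace makes visible which subsystem "significantly directed" the outcome.
```

Each entry in the trace corresponds to one automated subsystem, which is precisely the unit whose legal classification the following question concerns.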

In this complex landscape, a central question arises that is of utmost importance for the affected parties: Must each subsystem that contributes to the final decision through automated preliminary processes, and possibly even “significantly directs” it (CJEU C-634/21, para. 62), be classified as an independent “AI system”? Or rather, should the pre-, main-, and post-processing be considered as a unified, overarching system that connects the various subsystems into an organic whole – essentially a technical collective – whose individual parts collectively produce the final decision, which, according to the chosen definition in Article 3(1), can affect physical or virtual environments?

The consequences of the answer to this question are of considerable significance. If each subsystem were to be classified as an independent AI system, each would also fall under the comprehensive obligations of Chapter 3 of the AI Act, including stringent conformity assessments and reporting obligations. This would also mean that each subsystem would need to undergo individual evaluation, significantly increasing compliance efforts and costs.

If, on the other hand, one adopts a functional understanding of the term “AI system” based on its final purpose, a single overarching conformity assessment could suffice. However, in this case, particularly in processes organized by a stringent division of labor – such as those discussed by the CJEU in case C-634/21 – significant technical, organizational, and legal coordination between the involved parties would be necessary to achieve and demonstrate compliance. The involved parties would need to harmonize their actions and (sub-)systems, and also share responsibility to ensure that the decision-making process, as a unified and integrated system, meets the requirements of the AI Act. It is now the task of science, jurisprudence, and practice to find a meaningful balance between a view that is too fragmented and one that is incredibly complex in its functional scope.

Finally, regardless of the answer to the question of whether something is one or several AI systems, and regardless of the technical complexity of that system, what matters most is the purpose for which it is used. Classifying a system as an AI system therefore does not inherently mean that all is lost and compliance is impossible. It merely serves as an entry ticket to further classification rounds prescribed by the AI Act. Of particular importance here is Article 6, which imposes strict quality requirements on high-risk AI systems. These stringent regulations are intended to ensure that such systems serve the overarching goal: the establishment of trustworthy AI “made in Europe.” There is little to object to in this goal.

Practical Example: Microservice Architecture

To illustrate the levels of complexity that can be reached in IT systems, it is worth looking at the visual representations of the “Death Stars of Microservices,” which depict the architectures of Amazon and Netflix.

These two now almost historical (first published in 2015) and widely circulated visualizations illustrate the tremendous density and interconnection of the microservice architectures used by both companies. The term “Death Star” refers to the overwhelming number of interconnected services, which are linked in a tightly knit web of dependencies. A look at these architectures reveals the immense challenge in managing and monitoring such systems. Each service communicates with many others and contributes to the overall success of an organization, or at least a given business process. However, this multitude of interactions also necessitates a very precise organization and meticulous monitoring of the processes within these networks to ensure their efficiency and stability. What initially appears to be a tangled mess is, in fact, the expression of an advanced approach that allows organizations to serve millions of customers simultaneously while continuously integrating innovations into their systems.

Microservice architecture (or simply Microservices) is a modern paradigm in software development that aims to break down previously monolithic applications into smaller, independent services.8 These microservices are loosely coupled, act autonomously, are automated, and each fulfill a specific, well-defined function. They communicate via defined Application Programming Interfaces (APIs) and can be developed, deployed, and scaled independently. This modern software architecture has become particularly prevalent in large-scale distributed systems, as it promotes flexibility, scalability, and agility.

A key advantage of microservice architecture lies in the ability to update or scale individual services independently, without impacting the entire system. This enables continuous delivery while facilitating the introduction of new features. The decentralized application logic increases fault tolerance, as failures of individual services do not have significant effects on the overall system performance. Essentially, these functionalities also play a relevant role in AI systems and in the AI Act under the keyword “Robustness” (Article 15). At the same time, this architecture presents challenges, particularly in tracking decision-making processes that include numerous interdependent microservices. The complex interactions and dependencies of the services make it difficult to trace and control the exact path of a decision-making process through the system. The underlying question is: What if one, several, or all of these microservices, individually or in their entirety, constitute an AI system? How can we even determine whether a specific microservice has a relevant impact on data flows or overall functionality?
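The tracing problem just described can be sketched in a few lines. The service names and routing logic below are purely hypothetical; the point is only that a shared correlation ID makes the decision path through loosely coupled services reconstructable, which is the prerequisite for asking whether any one of them has a relevant impact on the outcome:

```python
# Minimal sketch (hypothetical service names) of tracing a decision
# path through loosely coupled microservices: each service is a
# callable, and a shared correlation ID records which services
# touched a given request.

import uuid

CALL_LOG: list[tuple[str, str]] = []  # (correlation_id, service_name)

def service(name):
    """Decorator that logs every invocation of a service."""
    def wrap(fn):
        def inner(payload, correlation_id):
            CALL_LOG.append((correlation_id, name))
            return fn(payload, correlation_id)
        return inner
    return wrap

@service("extract")
def extract(payload, cid):
    return {"text": payload["raw"].strip()}

@service("score")
def score(payload, cid):
    # Stand-in for an ML-based sentiment component.
    return {"sentiment": 1.0 if "good" in payload["text"] else -1.0}

@service("route")
def route(payload, cid):
    return "escalate" if payload["sentiment"] < 0 else "auto-reply"

def handle(raw: str) -> tuple[str, list[str]]:
    cid = str(uuid.uuid4())
    decision = route(score(extract({"raw": raw}, cid), cid), cid)
    path = [name for c, name in CALL_LOG if c == cid]
    return decision, path
```

In a real architecture this role is played by distributed tracing and BPM tooling rather than a global list, but the legal question is identical: which of the services on the recorded path qualifies as an AI system?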

Another example demonstrating this complexity is the typical vendor application process, often partly automated and partly manual and relying on third-party tools at various points: OpenAI’s GPT might be used to extract master data and perform sentiment analysis, while verification services from the Google Maps Connector or an interface to a communication application like Slack are used to make the application more user-friendly. Even at this superficial level, it becomes clear how complex seemingly simple automated workflows can be and how important it is to understand the underlying structure and dynamics of such processes. Another important point is the question of responsibility for the system-wide decisions that connect all components, which demands a precise answer.

There is no need to despair (yet). At least that is how we feel after several years of experience in this field. For example, Business Process Modeling (BPM) is often used in practice to systematically determine the influence of individual microservices on the decision-making process. In the IT context, BPM focuses on the representation and optimization of data flows and automated processes within a system. It helps visualize, analyze, and improve IT-supported business processes. Furthermore, it also enables transparent mapping and control of the various interactions between system components, data sources, and applications. Software-based BPM solutions enable tracking process flows based on various microservices within a company. This makes the overarching business logic transparent and understandable. Companies can see how their distributed architectures operate in real-time and make adjustments or optimizations as needed. This type of monitoring and control is particularly important in complex systems, where many interactions and dependencies need to be efficiently coordinated.

The Business Process Model and Notation (BPMN™) is a graphical notation for specifying business processes in a Business Process Diagram. The term “notation” refers to a standardized system of symbols and rules used to clearly and uniformly represent business processes. The goal of BPMN is to provide a system of symbols that is both understandable for management and able to convey complex technical details to developers. BPMN has essentially become the standard for business process diagrams and is used by stakeholders who design, manage, and implement business processes. It allows for the creation of diagrams that are precise enough to be translated into software components. The flowchart-like notation is easy to use and independent of specific implementation environments. The main goal of BPMN is to maintain a notation that is understandable for all users: from business analysts designing the processes, through developers implementing them in technology, to individuals monitoring, controlling, or even legally evaluating these processes. By doing this, BPMN effectively builds a bridge between process design and process implementation.

ISO/IEC 19510:2013 (Information Technology – Object Management Group Business Process Model and Notation), on the other hand, goes a step further by formally standardizing BPMN worldwide. It defines not only the notation but also the underlying semantics and best practices, ensuring that BPMN diagrams are applied consistently and correctly. This standard guarantees that the representation and interpretation of business processes based on BPMN remain consistent across different industries, professions, and countries, effectively creating a common language for process modeling.
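For illustration, a minimal BPMN fragment in its XML serialization might look like the following. The process, task labels, and IDs are invented; the element names and the namespace follow the OMG BPMN 2.0 schema:

```xml
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL"
             targetNamespace="http://example.org/credit-check">
  <process id="creditCheck" isExecutable="true">
    <startEvent id="start"/>
    <serviceTask id="preScore" name="Compute partial score"/>
    <userTask id="review" name="Human review"/>
    <endEvent id="end"/>
    <sequenceFlow id="f1" sourceRef="start" targetRef="preScore"/>
    <sequenceFlow id="f2" sourceRef="preScore" targetRef="review"/>
    <sequenceFlow id="f3" sourceRef="review" targetRef="end"/>
  </process>
</definitions>
```

Even this toy process makes the legally relevant distinction visible: the automated `serviceTask` and the manual `userTask` are separate, named elements, so the automated contribution to the decision can be identified and assessed on its own.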

Finally, Business Process Modeling offers another significant advantage. It facilitates the fulfillment of transparency obligations for high-risk AI systems under Article 86(1), particularly the subjective right to explanation of individual decision-making concerning the “main elements of the decision taken.” In the case of decisions relating to natural persons within the scope of Article 22 GDPR, it allows for the provision of meaningful information about the logic involved and the significance and effects of such processing according to Articles 13(2)(f), 14(2)(g), and 15(1)(h) GDPR. In this context, it is also worth noting the conclusions of Advocate General Richard de la Tour in Case C-203/22 – CK v. Dun & Bradstreet Austria GmbH/Magistrate of the City of Vienna. According to the AG Opinion, the following requirements must be met for information provided on automated decision-making to be considered meaningful: (1) it must include all methods and criteria used, (2) it must be precise, easily accessible, and presented in clear and simple language, (3) it must be complete and context-specific, and (4) tailored to the individual’s level of understanding, so that the person (5) can verify the correctness of the information provided, along with (6) establishing a verifiable causal link between methods, criteria, and results. Since trade secret protection already plays a subordinate role under the GDPR, it is highly unlikely that this will change under the AI Act and that “black boxes” will suddenly start holding up as arguments in court. On the contrary, Richard de la Tour opens avenues for supervisory authorities and courts to examine trade secrets and decide to which extent these can ever be used to limit the priority of the right to access information.

(3)On Ends and Beginnings: AI Infrastructure and AI Systems

As should be clear by this point, the engineering-based identification and delineation of AI systems is, in our view, far more relevant than the rather pointless debate about when existing IT systems and applications legally qualify as AI systems, particularly where they form an integral part of a larger overall architecture, such as the embedded AI explicitly referenced in Recital 12. A sound and reliable answer to such questions requires extensive technical expertise, especially regarding system architecture, algorithm functionality, underlying data flows, and the modeling logic applied. Only by analyzing the system’s core components can we make a well-founded delineation. In contrast, a superficial evaluation, whether based on marketing materials, user manuals, or a quick glance at the underlying data flows, as often exercised in data protection and IT security, will rarely lead to a satisfactory answer to the complex question of system classification. A robust assessment instead requires a thorough examination of the technical specifications and the entire developmental process of the system. In doing so, we can reliably determine what technically constitutes an AI system under the AI Act and what does not.

A solid starting point, especially for newcomers, is glancing into the world of technical standards. ISO/IEC 23053:2022 (Framework for Artificial Intelligence (AI) Systems Using Machine Learning), for example, provides a comprehensive overview of what constitutes a machine learning system (ML system). When reading the standard, it quickly becomes clear that the designs of these systems can vary greatly. The provided framework offers a structured approach to the development and implementation of ML systems as part of AI systems by helping define the essential components and processes of such systems. This framework is particularly useful for acquiring a preliminary understanding of the complexity of the various requirements and applications of an ML system:

Elements of an ML System as Part of an AI system:

The table outlines the central elements of an ML system as part of a broader AI system. The first step involves defining the task, which outlines the specific problem the system is intended to solve. Typical tasks in this context include regression, classification, clustering, or anomaly detection. This problem definition is the basis for all subsequent steps.

Once we have appropriately defined the problem, we can begin with the development and training of the model tailored to the respective task. In this part of the process, machine learning models are trained, tested, and, if necessary, continuously retrained with new data or for specific use cases. Data is of critical importance here, and it is essential to differentiate between various data types involved, such as training data, validation data, test data, and finally, production data, which are used at different stages of model training and deployment.

To finally develop the model and optimize its performance, various tools and techniques are applied. These include methods for data preparation, algorithm selection and optimization, and comprehensive model evaluation. Key techniques include neural networks, decision trees, gradient descent as an optimization method, and evaluation metrics such as precision and the F1 score, which are used to assess model quality. These four core areas, task, model, data, and tools, are closely interconnected and mutually dependent. Only through their interaction does a functional ML system emerge, capable of solving complex problems and being integrated into a larger architecture of an AI system.
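The interplay of the four core areas can be illustrated with a deliberately tiny, dependency-free sketch: a binary classification task, a nearest-centroid model “trained” on separate training data and evaluated on held-out test data, and precision, recall, and F1 as the evaluation tools. All numbers are invented for illustration:

```python
# Task: binary classification. Model: nearest centroid. Data: split
# into training and test sets. Tools: precision, recall, F1 score.

# Data: (feature, label) pairs, separated by lifecycle stage.
train = [(1.0, 0), (1.2, 0), (0.8, 0), (3.0, 1), (3.2, 1), (2.9, 1)]
test  = [(1.1, 0), (3.1, 1), (2.6, 1), (0.9, 0)]

def fit(data):
    """'Training' here just averages the features of each class."""
    cents = {}
    for label in {y for _, y in data}:
        xs = [x for x, y in data if y == label]
        cents[label] = sum(xs) / len(xs)
    return cents

def predict(cents, x):
    """Assign the class whose centroid is closest to x."""
    return min(cents, key=lambda label: abs(x - cents[label]))

def f1_score(pairs):
    """F1 for the positive class, from (true, predicted) pairs."""
    tp = sum(1 for y, p in pairs if y == 1 and p == 1)
    fp = sum(1 for y, p in pairs if y == 0 and p == 1)
    fn = sum(1 for y, p in pairs if y == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

model = fit(train)
pairs = [(y, predict(model, x)) for x, y in test]
f1 = f1_score(pairs)
```

Real ML systems replace each of these stand-ins with far heavier machinery (neural networks, gradient descent, validation sets), but the four-way division of task, model, data, and tools remains the same.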

To add a bit of flesh to the bones of our, until now, mostly theoretical considerations, let us examine the dynamics of AI system architectures using the example of autonomous driving. In such a context, machine learning systems play a central role as they are at the core of critical functions such as object recognition, path planning, and decision-making. The architecture of an autonomous vehicle includes various sensors (e.g., cameras, LIDAR, radar) that continuously collect data. This data is processed in real time to analyze the environment, understand traffic situations, and make appropriate decisions. Various components, such as sensor data processing, model training, real-time data evaluation, and decision algorithms, work together to ensure that the vehicle navigates safely and accurately. The requirements for the system architecture are highly dynamic and variable, as the system must function reliably under changing environmental conditions and in complex traffic situations. A standardized framework like ISO/IEC 23053:2022 helps structure this complex architecture.

Figure 1:Stolte et al., Towards Automated Driving: Unmanned Protective Vehicle for Highway Hard Shoulder Road Works, 2020

In Stolte et al.,9 the following, rather technical description is provided:

“For the environment perception, a sensor set is deployed consisting of long- and mid-range radar as well as a camera system. The raw data are fed into the model based filtering processes which in parallel detect and track the lane boundaries and objects. Following, this information is used to generate and update the environment model containing the boundaries of the drivable area and the objects in front of the protective vehicle.

Simultaneously to the environment model, a self-model of the system is generated. The self-model is the result of self-perception, which combines sensor values to a model based representation of the vehicle guidance system. All kinds of available vehicle sensors are utilized to improve the self-representation. Among others, this includes gyroscopes and accelerometers but also fuel level or tire pressure sensors. Derived from the sensor raw data, the motion estimation and generation of additional state variables are conducted. The motion estimation is additionally utilized for the detection of other vehicles colliding with the protective vehicle from behind. Connecting the self-perception with the environment perception, the motion estimation is also relevant for tracking lane boundaries and objects.

Based on both types of perception, the current mission is accomplished. On the tactical level a decision logic determines which discrete action shall be performed. Besides the information gained by both types of perception, commands stemming from the human-machine interface in the road maintenance vehicle are considered. In particular, this affects state changes triggered by the human operator and the change of parameters of the operating modes, like distances or maximum speeds. The discrete action comprises the motion of the vehicle on the one hand, but also the usage of turn signals, window wiper etc. on the other hand.

Next, the discrete action is transferred to the stabilization level, where a target trajectory is generated. The target trajectory is then fed into the vehicle dynamic controller which finally controls the vehicle’s actuators.”

Translated to plain English, the cited section describes how a vehicle perceives its surroundings using radar and camera sensors and processes the collected data to create a model of the road and surrounding objects, as well as to generate a self-model of the vehicle. Based on these models, the system makes decisions and controls the vehicle, for instance, by tracking the lane or activating turn signals.

What is instructive about the above description is that the system architecture is hierarchically structured into different levels fulfilling different functions in order to serve the overarching goal of autonomous control. Each level has a specific task, and the flow of information moves from the lowest level, the sensors, up to the control level, with the data being refined and processed at each stage.

– The lowest level, the System Context in Relation to the Vehicle, forms the foundation of the architecture and includes all sensors capturing data about the environment and the vehicle’s condition. These sensors include, among others, cameras, radar systems, and accelerometers, which provide raw data that serves as the basis for all subsequent processing steps.

– At the Operational Level, the sensor data is processed using algorithms for filtering and modeling to convert raw data into usable information. This includes detecting and tracking lane boundaries and objects, as well as state estimation and motion prediction. This level creates a detailed perception of the environment and vehicle condition, which is a necessary prerequisite for further decision-making.

– At the Tactical Level, actual decision-making takes place. At this level, world models derived from sensor data are used to avoid collisions and calculate optimal driving routes. Based on the conducted analysis, the system finally controls the vehicle by making decisions about acceleration, braking, steering, etc.

– The Strategic Level monitors the achievement of higher-level goals. It ensures that the system meets its long-term objectives, such as reaching a destination, while taking into account all relevant environmental influences and vehicle parameters. These strategic decisions provide the guidelines and direct lower-level tactical decisions.

– Finally, there is the Communication Level, which ensures information exchange. This occurs through human-machine interaction (HMI), where the system provides feedback and warnings to the driver, and through Vehicle-to-Vehicle Communication (V2V), which enables the exchange of information between vehicles ensuring coordinated and safe traffic movements.

The connection between the levels follows a logical sequence. First the collected sensor data is filtered and modeled at the operational level to be used for decision-making at the tactical level. The strategic level provides overarching action guidelines, while the communication level ensures both internal and external information exchange. The flow of information is clearly structured, moving through the various processing levels, until the decisions are finally translated into actions.

The multilayered structure allows for a clear separation of the different functions of an AI system, simplifying its development, maintenance, and scaling. This modular architecture enables individual components to be improved or adjusted independently and without affecting the entire system. Additionally, it ensures a logical progression of processing steps from data collection to decision-making and communication, enhancing the system’s efficiency and flexibility. This structure is generally well suited for a wide range of AI use cases beyond the field of assistance systems and robotics.
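The layered flow described above can be condensed into a sketch: raw sensor readings move upward through the operational, tactical, and strategic levels, each refining the data before the next acts on it. All function names, thresholds, and speeds are invented for illustration:

```python
# Hypothetical sketch of the hierarchical levels: each function is
# one level, and information flows strictly upward through them.

def operational_level(raw_readings: list[float]) -> dict:
    # Filter raw sensor data into a usable environment model
    # (here: a crude average standing in for real sensor fusion).
    smoothed = sum(raw_readings) / len(raw_readings)
    return {"obstacle_distance_m": smoothed}

def tactical_level(env: dict, target_speed: float) -> dict:
    # Decide a concrete driving action from the environment model.
    if env["obstacle_distance_m"] < 10.0:
        return {"action": "brake", "speed": 0.0}
    return {"action": "cruise", "speed": target_speed}

def strategic_level(decision: dict, destination_reached: bool) -> dict:
    # Guard the long-term goal; it may override the tactical output.
    if destination_reached:
        return {"action": "stop", "speed": 0.0}
    return decision

def drive_step(raw_readings, target_speed=27.8, destination_reached=False):
    env = operational_level(raw_readings)
    decision = tactical_level(env, target_speed)
    return strategic_level(decision, destination_reached)
```

The modularity claimed above is visible even here: the tactical logic can be replaced without touching the operational or strategic levels, as long as the interface between them (the environment model and the decision dictionary) stays stable.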

However, even after explaining the system’s functionality at a high school level, it unfortunately remains unclear whether, and if so, how many AI systems are present in the provided description. Answering this question requires not only a deep understanding of the technical documentation but also meaningful communication and collaboration with all project stakeholders. Otherwise, getting a clear picture of the system integrations and components or even the number of AI systems present in the described scenario, would be impossible. This would make fulfilling the associated legal obligations equally impossible.

In practice, these types of system explanations are the exception. On the contrary, organizations are much more frequently confronted with vague marketing statements from service providers, often going something like the following:

Experience our next-generation intelligent driving comfort! Our vehicles use state-of-the-art radar and camera sensors to capture their surroundings. By using advanced technology, the cars create detailed road models and instantly detect all surrounding objects. Based on this information, our innovative systems make smart decisions, getting you to your desired destination safely, comfortably and reliably by smoothly following lanes or automatically activating the blinkers. With our technology, you will enjoy maximum safety, comfort, and an unparalleled driving experience. Step in and discover the future of autonomous driving!

At this point, it should already be clear that interdisciplinary teams and access to critical information are essential already for defining the object of investigation. That is, for determining which elements belong to an AI system and which can potentially be excluded from further legal assessment. This is not a trivial task, and our experience from the past months of analyzing complex systems in relation to the applicability of the AI Act shows that this process should be approached with great diligence and an open mind. For legal professionals, it is worth noting that, due to liability concerns, premature independent determinations of the object of investigation should be avoided.

(4)Solutions From the Field of AI Safety, Computation and UML

To lay the groundwork for a reliable analysis at a technical level, it is worth taking a look at ISO/DPAS 8800 (“Road vehicles – Safety and artificial intelligence”), which, as the designation “Draft Publicly Available Specification (DPAS)” suggests, is not yet finalized. In the specification, stretching over approximately 180 pages, the ISO presents a standard reference for AI and safety in road vehicles, which can serve as a model for other industries. The document addresses the topic of an “AI system” in a structured and informative manner, suitable even for laypersons. Some basic knowledge of information technology is, of course, strongly recommended. We are convinced that readers who have followed our discussion this far are sufficiently prepared for the standard as well as for the rest of the discussion.

An enlightening starting point is the initial visualization and breakdown of a typical AI system as consisting of at least one AI model and additional AI elements, which introduces the fundamental components of an AI system. In the following illustration, the AI system is shown in the grey-shaded area. The components shaded in dark are defined as AI components (ISO/DPAS 8800, 6.4 Example Architecture for an AI System).

The provided example depicts an AI system that receives its input data from the source, performs its tasks based on the inputs and the control signals, and then provides its output data to the consumer. The AI system itself consists of three AI components: (1) AI pre-processing, (2) AI model, and (3) AI post-processing. AI post-processing uses the data from the previous processing steps (i.e., AI pre-processing and AI model) in combination with the original input data for monitoring purposes.

The depicted control element is often not a part of the AI system and is therefore illustrated as an external element fulfilling a separate task. While the AI system focuses on data processing, decision-making, and predictions, the control element manages and coordinates the overall process. It regulates the data flow, activates the AI system when necessary, and integrates its results into the overall system. This functional separation allows for flexible adaptation of the AI system and ensures that it can be controlled and operated safely in real time. The control element is thus responsible for system logic and monitoring, while the AI system takes care of the data-driven tasks. However, even though the control element may technically be considered external to the AI system, it could still fall within the scope of the AI Act’s definition. Therefore, the specific circumstances and functionalities of any given control or other infrastructural element must be carefully examined to arrive at a legally sound conclusion. For this, it is important to consider both the technical interaction between the respective elements and other infrastructural elements, as well as the protective intent of the AI Act. Depending on the application area and system architecture, a control element may take on different tasks, which influences the assessment of its relevance and legal implications within the overall system. A helpful approach to structuring technical details is found in ISO/DPAS 8800, as it provides a framework for evaluating system landscapes.

The following chart illustrates the structure and logic of an AI system based on the approach of ISO DPAS 8800. An AI system exists only when it fulfills a specific task, the so-called AI task. The AI system is a technical element that utilizes one or more AI models to accomplish this AI task. Furthermore, an AI system can include various AI components, defined as functional building blocks in ISO/IEC 22989:2022. These components include not only the AI models responsible for the data processing but also pre-processing and post-processing components: the pre-processing components prepare the data for the model, while the post-processing components further process the results or prepare them for specific purposes. The chart shows a clear separation of the individual functional elements of an AI system and illustrates how they interact to fulfill a specific AI task.
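The functional separation described above can be made tangible in a few lines of code. The following sketch is purely illustrative: the class and function names are our own assumptions, not normative terms from any standard. It shows an AI system composed of three AI components (pre-processing, AI model, post-processing) fulfilling one AI task, gated by a control element that sits outside the system boundary.

```python
from typing import Callable, Optional

class AISystem:
    """Illustrative AI system: three AI components fulfilling one AI task.
    Component names are hypothetical, chosen to mirror the text above."""

    def __init__(self,
                 preprocess: Callable[[dict], dict],
                 model: Callable[[dict], dict],
                 postprocess: Callable[[dict, dict], dict]):
        self.preprocess = preprocess    # AI component 1: prepares input data
        self.model = model              # AI component 2: the AI model itself
        self.postprocess = postprocess  # AI component 3: shapes/monitors results

    def run(self, input_data: dict) -> dict:
        features = self.preprocess(input_data)
        prediction = self.model(features)
        # Post-processing may combine intermediate results with the
        # original input data, e.g. for monitoring purposes.
        return self.postprocess(prediction, input_data)

class ControlElement:
    """External to the AI system: decides *when* to activate it and
    integrates its output into the overall process."""

    def __init__(self, ai_system: AISystem):
        self.ai_system = ai_system

    def step(self, input_data: dict, activate: bool) -> Optional[dict]:
        if not activate:        # control logic: gate the AI system
            return None
        return self.ai_system.run(input_data)

# Example wiring (all functions are toy stand-ins):
system = AISystem(
    preprocess=lambda d: {"x": d["raw"] / 100},
    model=lambda f: {"score": f["x"] * 2},
    postprocess=lambda p, d: {"score": p["score"], "source": d["raw"]},
)
control = ControlElement(system)
result = control.step({"raw": 50}, activate=True)  # {"score": 1.0, "source": 50}
```

Note that whether the `ControlElement` falls under the AI Act's definition cannot be read off this code; the sketch only clarifies which element performs which function, which is precisely the factual groundwork a legal assessment needs.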

This structure is a reasonable first step for legal professionals wanting to conduct plausible legal assessments of AI systems, even though it represents only a small portion of what has already been established in technical standards.

For applying the same concepts to AI hardware, it is also advisable to look at the latest Technical Report ISO/IEC TR 17903:2024 (Information technology – Artificial intelligence – Overview of machine learning computing devices), which is based on ISO/IEC 19505-1:2012 (Information technology – Object Management Group Unified Modeling Language (OMG UML) – Part 1: Infrastructure). This report enables a step-by-step analysis of AI systems that is both technically sound and able to facilitate a legal analysis of the infrastructure. We have already encountered the Object Management Group in the practical example of microservices as the source of the Business Process Model and Notation (BPMN), and here we encounter the same approach of finding a standardized language suitable for all stakeholders, this time for hardware. The following chart shows the hierarchy of AI system components proposed in Annex A of ISO/IEC TR 17903:2024, where the respective terms (device, entity, unit, etc.) are defined.

–ML computing device: computing device that can be used for accelerating machine learning computing,

–AI computing device: computing device that can be used for accelerating some or all of artificial intelligence computing,

–Computing device: functional unit that can perform substantial computations, including numerous arithmetic operations and logic operations with or without human intervention,

–Functional unit: entity of hardware or software, or both, capable of accomplishing a specific purpose,

–Unit: lowest level of hardware assembly for which acceptance and qualification tests are required,

–Software: all or part of the programs, procedures, rules, and associated documentation of an information processing system,

–Entity: any concrete or abstract thing that exists, did exist, or might exist, including associations among these things,

–Hardware: all or part of the physical components of an information processing system,

–Component: entity with discrete structure, such as an assembly or software module, within a system considered at a particular level of analysis.
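One plausible way to read these definitions is as a specialization chain: every ML computing device is an AI computing device, every AI computing device is a computing device, every computing device is a functional unit, and every functional unit is an entity. The following sketch encodes that reading as a Python class hierarchy. The class names mirror the defined terms, but the modeling choice (inheritance rather than composition) is our own assumption, made for illustration only.

```python
class Entity:
    """Any concrete or abstract thing that exists, did exist,
    or might exist."""

class FunctionalUnit(Entity):
    """Hardware and/or software capable of accomplishing a
    specific purpose."""

    def __init__(self, purpose: str):
        self.purpose = purpose

class ComputingDevice(FunctionalUnit):
    """Functional unit that can perform substantial computations,
    with or without human intervention."""

class AIComputingDevice(ComputingDevice):
    """Computing device that can accelerate some or all of
    artificial intelligence computing."""

class MLComputingDevice(AIComputingDevice):
    """Computing device that can accelerate machine learning
    computing."""

# A GPU used as an ML accelerator sits at the bottom of the chain
# and therefore satisfies every definition above it:
gpu = MLComputingDevice("accelerating ML matrix operations")
```

The payoff for a legal assessment is that obligations attaching to a broader term (e.g. "entity" or "functional unit") automatically reach every narrower term below it, which is exactly how the Annex A hierarchy is meant to be traversed.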

By combining ISO/IEC 19510:2013, introduced in the microservices example above, a detailed and precise standard for business process modeling, with the equally comprehensive ISO/IEC 19505-1:2012, which describes system infrastructure with the elegant rigor of the Unified Modeling Language, we obtain a powerful tool in the hands of a skilled user group. With this hybrid approach, meeting the diverse legal requirements of explainability and transparency of AI systems in a traceable and efficient manner finally becomes possible.

(5) Conclusion

To conduct a well-founded legal assessment of an AI system, its boundaries, interdependencies, and overlaps, a detailed technical analysis