The new AI Act will have significant consequences for civil society and the economy. The text entered into force in the summer of 2024, and the first provisions apply just six months later. The aim of the European legislator was to provide better protection against the dangers of AI while at the same time promoting innovation. In parallel with the beginning implementation of the AI Act, this practical guide elaborates and assesses the regulation systematically. It provides legal practitioners with an initial but reliable orientation when using the technology. In addition to the basics of the AI Act itself, the practice-relevant areas listed in the annexes on high-risk AI systems are classified. The use cases range from the biometric identification of natural persons and questions of education and labour law to use in the judiciary for law enforcement and the administration of justice. Special attention is paid to communicating the complex technical interrelationships in the use of artificial intelligence in a comprehensible way. In addition, the questions arising in business practice concerning the relationship to the EU's other digital and data law, above all the GDPR, are assessed. This becomes particularly relevant, for example, for transparency requirements, technical data protection and risk assessment. Finally, the work addresses practice-relevant liability issues and presents enforcement by the supervisory regime.
At a glance:
- Classification of concrete use cases under the AI Act
- The AI Act as distinguished from the GDPR (e.g. transparency requirements, technical data protection and risk assessment)
- Comprehensible presentation of the technical context when implementing legal obligations
- Liability for AI
- Checklists for the use of AI in the company
Professor Dr. Rolf Schwartmann is head of the Cologne Research Centre for Media Law at TH Köln (University of Applied Sciences) and chairman of the Society for Data Protection and Data Security (GDD e.V.), as well as editor and author of numerous specialist publications on data protection and digital law. Professor Dr. Tobias Keber is the State Commissioner for Data Protection and Freedom of Information in Baden-Württemberg and previously conducted research on artificial intelligence at the Stuttgart Media University. Kai Zenner is Head of Office for MEP Axel Voss, EPP rapporteur for the AI Act, and has been intensively involved in the negotiations on the adoption of the law. He is also a member of the OECD Network of Experts on AI (ONE AI) and advises the High-Level Advisory Body on AI (HLAB-AI) of the United Nations. In June 2023 he received the MEP Award for best Accredited Parliamentary Assistant for his work and dedication in the European Parliament.
A practical guide
edited by
Prof. Dr. Rolf Schwartmann
Prof. Dr. Tobias O. Keber
Dipl.-Jur. Kai Zenner, M.Sc.
written by
Kristin Benedikt Judge at the Administrative Court of Regensburg, member of the board of the Society for Data Protection and Data Security (GDD e.V.)
Dr. Jonas Ganter Legal counsel
Tobias Haar, LL.M., MBA General Counsel Aleph Alpha
Dr. h.c. Marit Hansen State Commissioner for Data Protection and Freedom of Information Schleswig-Holstein
Markus Hartmann Chief Public Prosecutor at the Cologne General Prosecutor’s Office and Director of the Central Cybercrime Department of North Rhine-Westphalia
Dr. Clarissa Henning Personal Assistant to the State Commissioner for Data Protection and Freedom of Information in Baden-Württemberg
Prof. Dr. Tobias O. Keber State Commissioner for Data Protection and Freedom of Information Baden-Württemberg
Prof. Dr. Martin Kessen LL.M. (Univ. of Texas) Judge at the Federal Court of Justice
Moritz Köhler Cologne Research Center for Media Law, TH Köln (University of Applied Sciences)
Sascha Kremer Lawyer, certified specialist in IT law
Sonja Kurth Formerly Cologne Research Center for Media Law, TH Köln (University of Applied Sciences)
Daniel Maslewski Consultant at the State Commissioner for Data Protection and Freedom of Information Baden-Württemberg
Dr. Kristof Meding, LL.M. Group Leader Computational Law, CZS Institute for Artificial Intelligence and Law at the Universität Tübingen
Dr. Robin Lucien Mühlenbeck Ass. iur.
Dr. Peter Nägele Consultant at the State Commissioner for Data Protection and Freedom of Information Baden-Württemberg
Eva-Maria Pottkämper, LL.M. Cologne Research Center for Media Law, TH Köln (University of Applied Sciences)
Johannes Rembold, LL.M. Consultant at the State Commissioner for Data Protection and Freedom of Information Baden-Württemberg
Dr. Jessica Sänger Lawyer and in-house counsel, Director for European and International Affairs, Börsenverein des Deutschen Buchhandels e.V.
Prof. Dr. Rolf Schwartmann Head of the Cologne Research Center for Media Law at TH Köln (University of Applied Sciences), Chairman of the Society for Data Protection and Data Security (GDD e.V.)
Dr. Anne Steinbrück Consultant at the State Commissioner for Data Protection and Freedom of Information Baden-Württemberg
David Wasilewski, LL.B. Cologne Research Center for Media Law, TH Köln (University of Applied Sciences)
Dr. Markus Wünschelbaum Policy & Data Strategy Advisor Hamburg Commissioner for Data Protection and Freedom of Information
Dipl.-Jur. Kai Zenner, M.Sc. Office manager and digital policy advisor to MEP Axel Voss
Translated in collaboration with
Henry Simwinga Legal Consultant at the Society for Data Protection and Data Security (GDD e.V.)
www.cfmueller.de
Bibliographic information of the Deutsche Nationalbibliothek (German National Library)
The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data are available on the Internet at https://portal.dnb.de.
ISBN 978-3-8114-6412-4
E-Mail: [email protected]
Phone: +49 6221 1859 599
Fax: +49 6221 1859 598
www.cfmueller.de
© 2025 C.F. Müller GmbH, Waldhofer Straße 100, 69123 Heidelberg
Publisher's note on copyright and digital rights management (DRM)
This work, including all of its parts, is protected by copyright. With the purchase of this e-book, the publisher grants you the right to use its contents within the scope of applicable copyright law.
Any use outside the narrow limits of copyright law without the consent of the publisher is prohibited and punishable by law. This applies in particular to reproduction, translation, microfilming, and storage and processing in electronic systems.
The publisher protects its e-books against misuse of copyright by means of digital rights management. Information on this DRM can be found on the pages of the respective providers.
The German version of this handbook was published almost simultaneously with the entry into force of the AI Act on 1 August 2024. The requirements of the new law regulating the placing on the market and the putting into service of AI models and AI systems will apply alongside existing EU legal acts, such as the Digital Services Act (DSA) for the protection of democracy and the GDPR for the protection of personal data. In addition, the provisions of copyright law and a wide range of other European and Member State laws apply.
The complexity of the AI Act begins with its focal point, the AI system, which is characterised in particular by opacity and unpredictability, but also by autonomy and, to that extent, a lack of controllability. This last characteristic, which in the conventional sense of the word is attributed solely to humans, is extended by definition in the AI Act to include software that is capable of inferring how to generate outputs without human predictability or even controllability. Because the new generation of AI systems makes use of human communication in word, writing, sound and image, it can simulate human creativity according to the rules of probability theory. Since the market launch of ChatGPT in November 2022, this technology has been available in a wide range of forms to people of all ages.
This handbook approaches the classification of this technology and its regulatory framework in three parts. Part 1 deals with the fundamentals, from the time frame of the AI Act, to definitions and the differentiation of the AI Act from neighbouring legal areas, to the complex technical and economic classification of AI.
Part 2 is dedicated to the regulation of AI and is divided into three chapters. Legal obligations for general-purpose AI models (GPAI models) on the one hand and for high-risk AI systems on the other are categorised on the basis of practical examples and guidelines. The first chapter also addresses the duties and responsibilities in relation to AI in general and when using AI along the value chain, and it covers standardisation issues as well as testing in regulatory sandboxes and under real-world conditions. Since the AI Act supplements existing rules and does not conclusively regulate the use of AI, there remains room for national laws on AI, which is explained in the second chapter of this part. The complex relationship between the AI Act and the regulation of artificial intelligence in other legal acts of the Union and the Member States is described in the concluding third chapter. The comments there on data law, copyright, personal rights, and labour and employment law are of particular relevance for public authorities and companies that want to implement AI systems in their workflows.
Part 3 of the book is dedicated to the enforcement of the law. After dealing with the supervision of AI models and AI systems at the European and national level, liability in connection with the use of AI systems is elaborated. Further sections are dedicated to the sanctioning of breaches of the AI Act and to the presentation of enforcement activities under the GDPR.
This guide is written primarily by practitioners for practitioners. The book's particular focus is on a comprehensible and practice-oriented presentation of the new law. The list of authors comprises dedicated representatives from universities and data protection authorities, and it also brings in the perspective of the European legislator.
The editors Rolf Schwartmann and Tobias Keber have their (common) scientific background not only in international law, but also in media and data protection law. Both editors pay particular attention to practice-oriented research. Rolf Schwartmann, as head of the Cologne Research Centre for Media Law at the Cologne University of Applied Sciences (TH Köln), conducts research in the field of media, data and artificial intelligence law and teaches in particular in the master's programme in media law and media economics there. He is the chairman of the Gesellschaft für Datenschutz und Datensicherheit (GDD) e.V. and was a member of the Data Ethics Commission of the German Federal Government. Tobias Keber has been the State Commissioner for Data Protection and Freedom of Information for Baden-Württemberg since July 2023. He heads a large national data protection authority and is thus also responsible for the supervision of a number of major tech companies with a focus on AI. Prior to this, he had already conducted extensive research on the use of AI. Kai Zenner has been the Head of Office of MEP Axel Voss since 2017 and has played a key role in the negotiations for the AI Act, particularly at the technical level. He is also part of expert groups and networks at the level of the OECD, the WEF and the United Nations (UN). His contributions to this book reflect his personal views and not those of the European Parliament.
The authors of this book have been chosen with particular consideration of the practical requirements of the new law on AI. It was important to us to put together a group of people from academia and practice who contribute their extensive expertise from different areas of law and practice. The team consists of authors from academia as well as members of the legal profession, judges and public prosecutors, representatives of data protection authorities, relevant associations and companies. In addition to legal contributors, technicians and computer scientists are also involved. Case law and literature as well as other sources were taken into account until August 2024.
The present book is one of the first to be published on the AI Act; that was our ambitious goal. The volume was written in the spring of 2024, after the AI Act had been given its essentially final version in February 2024. The book must and will be measured against its claim not only to present the law of AI in a practice-oriented way, but also to develop and present, in an understandable manner, solutions that make the European Union a safe and attractive location for AI for the benefit of people.
As the editors, we would like to thank everyone involved, and in particular the publisher C.F. Müller, for smoothly implementing this ambitious project in a short period of time. Special thanks go also to Moritz Köhler from the Cologne Research Centre for Media Law at the Cologne University of Applied Sciences (TH Köln), who was in charge of coordinating the contributions.
The first edition of the guide, published in July 2024, was so quickly out of print that a new edition was needed in August 2024. We have taken into account the literature published since July as part of a minor update and smoothed out minor inaccuracies.
This English version is based on the 2nd edition and was translated from German into English with the help of DeepL. The authors adjusted their texts and were supported in doing so by the publisher C.F. Müller and Henry Simwinga. This approach was planned and very much necessary: the program translated names such as “Wünschelbaum” as “Wisheltree”, and it rendered the term “Verkehrssicherungspflicht” as “road safety obligation”. The latter was changed by the author to “duty to safeguard the public in general”, with the “Verkehrssicherungspflicht” added in brackets. We cannot rule out that other, similarly odd creations were overlooked. If so, they illustrate that the cooperation between machine and human is still immature, but also that the machine is not at fault. The editors are, which is why we are responsible for all remaining flaws.
We are grateful for criticism and notifications of errors, which should be addressed to the Cologne Research Centre for Media Law at the Cologne University of Applied Sciences (TH Köln) ([email protected]).
Rolf Schwartmann, Tobias Keber and Kai Zenner
Cologne, Stuttgart and Brussels, December 2024
Preface to the English translation
List of abbreviations
List of literature
Part I Basics
Chapter 1 Timeline of the AI Act
(Schwartmann/Kurth)
Chapter 2 Definitions
(Keber/Zenner/Hansen/Schwartmann)
A.Artificial intelligence1
B.Symbolic AI (1st Wave)2 – 5
C.Machine Learning (2nd Wave)6
D.GPAI Models and Generative AI7
E.Future prognosis (3rd Wave)8
Chapter 3 Differentiation from other fields of action
(Benedikt/Keber)
A.Differentiation from legal tech and automation1
B.Differentiation from robotics2
Chapter 4 Technical and economic classification of artificial intelligence
A.How AI applications work (Meding)1 – 10
B.Overview of AI application areas and use cases11 – 69
I.GPAI models and generative AI as co-pilots (Schwartmann/Mühlenbeck)11, 12
II.GPAI Models and Generative AI for Image and Text Processing (Schwartmann/Mühlenbeck)13 – 15
III.Open-source and generative AI (Wünschelbaum)16 – 22
IV.Legal Tech and Smart Contracts (Schwartmann/Mühlenbeck)23
V.AI in legal, tax and management consulting (Kremer)24 – 32
1.AI as a product in its own right, as a service or integration into consulting services24 – 27
2.Practical examples of the use of AI in consulting services28
3.Short assessment in compliance with the AI Act29 – 32
VI.AI applications in sport and health sector (Schwartmann/Mühlenbeck)33, 34
VII.AI in the public sector35 – 54
1.Criminal prosecution (Hartmann)35 – 41
a)Specific use cases of law enforcement37, 38
b)General Use Cases of Law Enforcement39 – 41
2.Educational institutions (Schwartmann)42 – 46
a)Starting point43, 44
b)Assessment45, 46
3.Authorities (Wünschelbaum)47 – 54
VIII.AI in the context of employment (Wünschelbaum)55 – 60
IX.Risk of circumvention by legacy systems (Wünschelbaum)61 – 69
Part II Regulation of AI
Chapter 1 AI Act
A.History of the law (Zenner)1 – 6
B.Scope of application7 – 50
I.Material scope of application (Keber/Zenner)10 – 17
1.Concept of artificial intelligence in the AI Act11 – 16
a)AI system12 – 15
b)GPAI models16
2.Protected legal interests 17
II.Personal scope of application (Keber/Zenner)18 – 24
1.Obligated parties19 – 23
2.Protected parties24
III.Territorial scope of application (Keber/Zenner)25
IV.Scope exceptions26 – 50
1.Free and open-source licence (FOSS) (Keber/Zenner)26 – 33
a)Development27
b)AI systems, not models28
c)Free and open-source licence (FOSS)29, 30
d)Provided31 – 33
2.GDPR (Maslewski)34, 35
3.Research (Keber/Zenner)36 – 38
4.Military (Keber/Zenner)39 – 43
5.Further exceptions (Schwartmann/Köhler)44 – 50
C.Risk-based approach51 – 477
I.Basics (Schwartmann/Köhler)51 – 54
II.Measures for AI literacy (Art. 4) (Schwartmann/Köhler)55, 56
III.Prohibited AI systems57 – 124
1.Subliminal influence (Art. 5 para. 1 letter a) (Schwartmann/Pottkämper)57 – 63
a)Purpose and scope of application57
b)Definition and scope for interpretation58, 59
c)Conditions for the prohibition60, 61
d)Limitations and future considerations62, 63
2.Exploitation of vulnerabilities and other weaknesses (Art. 5 para. 1 letter b) (Schwartmann/Pottkämper)64 – 68
a)Aim and scope64, 65
b)Restrictions and effects66
c)Future developments67, 68
3.Evaluation based on social behaviour (Art. 5 para. 1 letter c) (Schwartmann/Pottkämper)69 – 75
a)Prohibition and its conditions70
b)Objective and challenges71
c)Specification of the prohibition72
d)Scope of application and future considerations73 – 75
4.Predictive policing (Art. 5 para. 1 letter d) (Schwartmann/Keber/Köhler)76 – 85
a)Scope of application78 – 80
b)Risks of predictive policing81, 82
c)Differentiation from automated data analysis83 – 85
5.Untargeted collection and use of facial images (Art. 5 para. 1 letter e) (Schwartmann/Keber/Pottkämper)86 – 88
6.Emotion recognition systems (Art. 5 para. 1 letter f) (Schwartmann/Keber/Steinbrück)89 – 93
7.Biometric categorisations (Art. 5 para. 1 letter g)94 – 99
8.Biometric real-time remote identification (Art. 5 para. 1 letter h) (Ganter/Rembold)100 – 122
a)Prohibition of use100 – 104
b)The authorised purposes of the use of RBI105 – 109
aa)Search for victims of criminal offences and missing persons (Art. 5 para. 1 subpara. 1 letter h point i)105
bb)Defence against serious threats (Art. 5 para. 1 subpara. 1 letter h point ii)106
cc)Investigation of criminal offences (Art. 5 para. 1 subpara. 1 letter h point iii)107, 108
dd)Purposes other than law enforcement (Art. 5 para. 1 subpara. 2)109
c)Further requirements for the RBI (Art. 5 para. 2)110 – 113
aa)Situation impact assessment (Art. 5 para. 2 subpara. 1 letter a)111
bb)Intervention impact assessment (Art. 5 para. 2 subpara. 1 letter b)112
cc)Appropriate restrictions, fundamental rights impact assessment and registration (Art. 5 para. 2 subpara. 2)113
d)Authorisation of the RBI by national authorities (Art. 5 para. 3)114 – 116
e)Notification to market surveillance authority and data protection authority (Art. 5 para. 4)117
f)Implementation of the RBI in national law (Art. 5 para. 5)118, 119
g)Annual report of the market surveillance authorities and data protection authorities on the use of RBI (Art. 5 para. 6)120, 121
h)Annual report of the Commission on the use of RBI (Art. 5 para. 7)122
9.Relationship to other Union law (Art. 5 para. 8)123, 124
IV.High-risk AI systems125 – 449
1.Basics (Schwartmann/Köhler)125 – 151
a)Classification according to Art. 6 para. 1127
b)Classification according to Art. 6 para. 2128 – 145
aa)Exceptions to the high-risk classification (Art. 6 para. 3)129 – 142
bb)Right of the Commission to amend the relevant criteria143 – 145
c)Right of the Commission to amend Annex III146 – 151
2.High-risk AI systems pursuant to Art. 6 para. 1 (Annex I) (Schwartmann/Pottkämper)152 – 164
a)Ways to be categorised as a high-risk AI system pursuant to Art. 6 para. 1 in conjunction with Annex I155
b)High-risk AI systems in product safety law156
c)Product regulation under the old legal framework157 – 159
d)Product regulation under the new legal framework160, 161
e)Consequences of classification as a high-risk system162
f)Interim conclusion163, 164
3.High-risk AI systems pursuant to Art. 6 para. 2 AI Act (Annex III)165 – 265
a)Biometric identification, categorisation and emotion recognition of natural persons (Hansen/Nägele/Steinbrück)165 – 171
b)Management and operation of critical infrastructures (Nägele)172 – 175
c)Education and training176 – 201
aa)High-risk classification according to Annex III no. 3 (Schwartmann/Keber/Henning/Köhler)177 – 184
bb)Guidance for educational institutions (Schwartmann)185 – 201
d)Employment, workers' management and access to self-employment (Wünschelbaum)202 – 221
aa)Sectoral scope of application204 – 209
bb)Covered employment areas210 – 217
cc)Exemptions from high-risk regulation218 – 221
e)Accessibility and utilisation of basic private and public services and benefits (Schwartmann/Köhler)222 – 231
aa)Access to public support and services223, 224
bb)Creditworthiness and credit score225 – 228
cc)Life and health insurance229
dd)Emergency calls230, 231
f)Prosecution (Benedikt/Hartmann)232 – 241
g)Migration, asylum and border control (Benedikt)242 – 245
h)Administration of justice and democratic processes246 – 265
aa)High-risk classification according to Annex III no. 8 (Schwartmann/Benedikt/Köhler)247 – 260
bb)Structured party submissions – a case for the AI Act? (Schwartmann/Köhler)261 – 265
4.Requirements for high-risk AI systems266 – 363
a)Compliance with the requirements for high-risk AI systems (Art. 8) (Schwartmann/Keber/Köhler/Zenner)266 – 276
aa)Genesis267
bb)Addressees of the obligations268
cc)Modalities of compliance269, 270
dd)Special features for AI systems within the meaning of Art. 6 para. 1271, 272
ee)Use case and competition with other obligations under the AI Act273 – 276
b)Risk management system (Art. 9) (Hansen)277 – 296
aa)Management systems for dealing with risks278 – 280
bb)Requirements of Art. 9 for a risk management system281 – 292
cc)Proof of fulfilment of Art. 9 of the AI Act293 – 296
c)Data and data governance (Art. 10) (Schwartmann/Keber/Köhler/Nägele)297 – 308
aa)Regulation of training, validation and test data sets (Art. 10 para. 1–4 AI Act)301 – 304
bb)Legal basis for the processing of special categories of personal data (Art. 10 para. 5 AI Act)305 – 308
d)Technical documentation (Art. 11 AI Act) (Hansen)309 – 318
e)Record-keeping obligations (Art. 12) (Schwartmann/Keber/Köhler)319 – 328
aa)Logging functions to ensure traceability320 – 324
bb)Special logging functions for biometric remote identification systems325
cc)Addressee and retention obligation326
dd)Conflict with judicial independence327, 328
f)Transparency and provision of information to deployers (Art. 13) (Schwartmann/Keber/Köhler)329 – 338
aa)Importance of transparency in the use of high-risk AI systems330, 331
bb)Transparency332 – 334
cc)Operating instructions335 – 338
g)Human oversight (Art. 14) (Schwartmann/Keber/Köhler)339 – 352
aa)Understanding of functioning and risks (Art. 14 para. 4 letters a–c)343 – 346
bb)Interference with the business (Art. 14 para. 4 letters d and e)347 – 350
cc)Verification when using biometric remote identification systems351, 352
h)Accuracy, robustness and cybersecurity (Art. 15 AI Act) (Nägele/Steinbrück)353 – 363
5.Obligations in dealing with high-risk AI systems (Kremer/Haar)364 – 449
a)Obligations of providers of high-risk AI systems (Art. 16–22 AI Act)365 – 398
aa)General obligations of providers (Art. 16 AI Act)366, 367
bb)Existence of a quality management system (Art. 17 AI Act)368 – 372
cc)Documentation keeping (Art. 18 AI Act)373 – 377
dd)Retention of the logs (Art. 19 AI Act)378 – 380
ee)Taking corrective actions and duty of information (Art. 20 AI Act)381 – 387
ff)Cooperation with competent authorities (Art. 21 AI Act)388 – 392
gg)Appointment of an authorised representative (Art. 22 AI Act)393 – 398
b)Obligations of importers of high-risk AI systems (Art. 23 AI Act)399 – 408
aa)Inspection obligations (para. 1, para. 2)400 – 402
bb)Obligations from placing on the market (para. 3–5)403 – 405
cc)Cooperation with the competent authorities (para. 6, para. 7)406 – 408
c)Obligations of distributors of high-risk AI systems (Art. 24 AI Act)409 – 415
aa)Inspection obligations (para. 1, para. 2)410, 411
bb)Obligations from making available on the market (para. 3–4)412 – 414
cc)Cooperation with the competent national authorities (para. 5, para. 6)415
d)Obligations of deployers of high-risk AI systems (Art. 26 AI Act)416 – 441
aa)Obligations in connection with use (para. 1–3)417 – 420
bb)Appropriate and representative input data (para. 4)421 – 424
cc)Monitoring obligations (para. 5)425 – 427
dd)Obligation to retain automatically generated logs (para. 6)428
ee)Information obligation for employers (para. 7)429
ff)Registration (para. 8)430
gg)Carrying out the data protection impact assessment (para. 9)431 – 433
hh)High-risk AI system for post-remote biometric identification (para. 10)434 – 439
ii)Duty to provide information to natural persons (para. 11)440
jj)Cooperation with the competent national authorities (para. 12)441
e)Fundamental rights impact assessment for high-risk AI systems (Art. 27 AI Act)442 – 449
aa)Deployers obliged to carry out the fundamental rights impact assessment (para. 1)443
bb)Timing, frequency of implementation and relationship to the data protection impact assessment (para. 1, para. 2, para. 4)444 – 446
cc)Subject of the fundamental rights impact assessment (para. 1)447, 448
dd)Notification to the market surveillance authority (para. 3, para. 5)449
V.AI systems with special transparency obligations (Art. 50 AI Act) (Kremer/Haar)450 – 476
1.Time of fulfilment of the transparency obligations (para. 5)453 – 455
2.Conformity with the applicable accessibility requirements (para. 5)456
3.Special transparency obligations457 – 474
a)AI systems intended to interact directly with natural persons (para. 1)458 – 461
b)AI systems generating synthetic audio, image, video or text content (para. 2)462 – 465
c)Emotion recognition systems or biometric categorisation systems (para. 3)466 – 468
d)AI systems for the generation of deep fakes (para. 4 case 1)469 – 472
e)AI system for generating or manipulating texts to inform the public (para. 4 case 2)473, 474
4.Relationship of the special transparency obligations to the Digital Services Act475, 476
VI.Simple AI systems (Art. 2 in conjunction with Art. 112, Art. 95) (Kremer/Haar)477
D.Responsibilities along the AI value chain (Zenner/Schwartmann/Hansen)478 – 524
I.The conceptual problems with the Commission's regulatory approach478 – 484
II.Overview of the final approach after the trialogue485 – 524
1.GPAI models with systemic risk (Level 1) (Art. 51 and 52 AI Act) 487 – 501
a)Genesis488 – 491
b)Definitions of GPAI models (Art. 3 no. 63–65)492
c)Classification of systemic GPAI models (Art. 51 and 52 AI Act)493
d)GPAI models with systemic risk (Art. 51 AI Act)494
e)Notification obligation under Art. 52 para. 1 AI Act495
f)Information and documentation obligations for systemic GPAI models under Art. 53 of the AI Act496
g)Specific obligations for GPAI models with systemic risk (Art. 55 AI Act)497
h)Codes of Practice (Art. 56 AI Act)498 – 501
2.GPAI models and AI components (Level 2) (Art. 53 and Art. 25 para. 2 AI Act) 502 – 509
3.Provider (Level 3)510 – 520
a)Definition511, 512
b)Provider change through actions listed in Art. 25 AI Act ("upgrade with consequences")513 – 520
aa)Trademark (Art. 25 para. 1 letter a)515
bb)Technical modification (Art. 25 para. 1 letter b)516
cc)Change of the intended purpose (Art. 25 para. 1 letter c)517, 518
dd)Cooperation (Art. 25 para. 2 AI Act)519, 520
4.Deployer (Level 4)521, 522
5.Affected person (level 5)523, 524
E.Harmonised standards and common specifications (Art. 40 et seq. AI Act) (Hansen)525 – 539
I.European standardisation527 – 529
II.Role of standardisation in the AI Act530 – 533
III.Harmonised standards and standardisation documents (Art. 40 AI Act)534 – 537
IV.Common specifications (Art. 41 AI Act)538, 539
F.Regulatory sandboxes (Art. 57 et seq. AI Act) (Keber/Hansen/Nägele)540 – 563
I.General541 – 545
II.Aims of the regulatory sandboxes546, 547
III.Implementing acts of the Commission548, 549
IV.Authorisation for further processing of personal data550, 551
V.Final report552
VI.Liability553
VII.Testing under real conditions outside of AI regulatory sandboxes554 – 558
VIII.Promoting innovation, especially among SMEs and start-ups559 – 561
IX.National Regulatory Sandbox Act562
X.Applicable regulations for health research with AI563
Chapter 2 National regulation
(Schwartmann/Köhler)
A.Scope of application of the AI Act3
B.Opening clauses4 – 8
C.Permissibility of the use of AI systems9 – 11
D.Use case: AI use in the media12 – 18
I.Differentiation according to automation levels13 – 16
II.Special problem: Ensuring diversity in the use of AI in the media17, 18
Chapter 3 Relationship with other areas of law
A.Data law3 – 105
I.GDPR3 – 70
1.Scope of application and responsibility (Schwartmann/Köhler)3 – 15
a)Data categories in dealing with AI systems4 – 6
b)Applicability of the GDPR7 – 11
c)Responsibility under data protection law12 – 15
2.Concept of artificial intelligence in the GDPR (Maslewski)16 – 18
3.Prohibition with reservation of authorisation pursuant to Art. 6 para. 1 and Art. 9 para. 1 GDPR (Keber)19 – 32
a)Consent (Art. 6 para. 1 letter a GDPR)20
b)Necessary for the performance of a contract (Art. 6 para. 1 letter b GDPR)21, 22
c)Necessary for compliance with a legal obligation (Art. 6 para. 1 letter c GDPR)23
d)Public interest or public authority (Art. 6 para. 1 letter e GDPR)24
e)Legitimate interests (Art. 6 para. 1 letter f GDPR)25 – 31
f)Change of purpose (Art. 6 para. 4 GDPR)32
4.Special categories of personal data (Keber/Steinbrück)33, 34
5.Prohibition of automated individual decision-making pursuant to Art. 22 GDPR (Schwartmann/Benedikt/Köhler)35 – 43
6.Transparency and information obligations pursuant to Art. 5 para. 1 letter a and 12 et seq. GDPR (Schwartmann/Köhler)44 – 47
7.Technical and organisational measures (Art. 24, 25, 32 GDPR) (Hansen/Keber)48 – 70
a)Requirements of the GDPR50 – 58
aa)Objective of the technical and organisational measures in the GDPR51 – 53
bb)Requirements of Art. 25 GDPR: Organisation of the processing54 – 56
cc)Requirements of Art. 32 GDPR: Security of processing57, 58
b)Significance for AI systems59 – 68
aa)General59 – 61
bb)Implementation of Art. 25 GDPR in relation to AI systems62 – 66
cc)Implementation of Art. 32 GDPR in relation to AI systems67, 68
c)Guidelines and standards69
d)DSK guidance for the data protection-compliant use of AI applications70
II.BDSG and German federal states' data protection laws (Schwartmann/Köhler)71 – 75
1.German Federal Data Protection Act (BDSG)72 – 74
2.Federal states' data protection laws75
III.Telecommunication Digital Services Data Protection Act (TDDDG) (Benedikt)76 – 80
IV.Data Act (DA) (Schwartmann/Wasilewski)81 – 86
1.General considerations between AI Act and DA82, 83
2.Data and data governance (Art. 10 AI Act)84 – 86
V.Data Governance Act (DGA) (Schwartmann/Pottkämper)87 – 92
1.Addressees88, 89
2.Main contents of the legal acts90 – 92
a)AI Act: Safety and responsibility in the use of AI90
b)Data Governance Act: data availability and exchange91
c)Complementary relationship between AI Act and DGA92
VI.European Data Spaces (Schwartmann/Wasilewski)93 – 105
1.European Health Data Space (EHDS)95 – 102
2."GreenData4All"103 – 105
B.Relationship with other European legal acts (Schwartmann/Wasilewski)106 – 111
I.European Media Freedom Act (EMFA)106 – 110
II.Other legal acts111
C.Copyright (Sänger)112 – 135
I.The input side112 – 124
1.Copyright in training data112 – 122
2.Prompts123, 124
II.The output side125 – 135
1.No copyright for the AI application125, 126
2.Copyright of the provider127
3.Copyright of the user128, 129
4.Related property rights130 – 135
D.Personal rights (Schwartmann/Köhler)136 – 144
I.Violations of personal rights through AI-generated content138 – 141
II.Approaches to solutions in development142, 143
III.Consequences for users144
E.Exam regulation (Schwartmann/Köhler/Kurth)145 – 154
I.Use of AI systems on the part of the candidates146, 147
II.Use on the part of the examiners148 – 154
F.Labour law: AI in the employment context (Wünschelbaum)155 – 199
I.Employment-specific AI prohibitions156 – 159
II.Employer as provider and deployer160 – 163
III.Employees as deployers164
IV.AI opening clause in the employment context165 – 191
1.Purpose166
2.Personal scope: Employees and partly solo self-employed persons167, 168
3.Material scope of regulation: user sphere169 – 171
4.Permissible regulatory instruments172 – 178
a)Laws172
b)Collective agreements173
c)Concept of "more favourable" regulation174 – 178
aa)Multidimensional regulatory objectives175 – 177
bb)More favourable means advantages without falling below the level of protection178
5.Legal scope for regulation179 – 187
a)Specification of high-risk systems in accordance with Annex III no. 4180, 181
b)Implementation of the Platform Directive182
c)AI Officer183
d)Clarify the employer's deployer status184
e)Strengthening the rights of employees185
f)Sandboxing options and regulatory sandboxes for deployers in the employment context186
g)Cross-border use of AI systems187
6.Scope for regulation in collective agreements188 – 191
a)Use of AI in the Group190
b)Concretisation of requirements: Participation and information rights of the AI Act191
V.Regulatory competition in Union law: Platform Directive192
VI.Competing regulations in German labour law193 – 199
Part III Enforcement
Chapter 1 Governance
A.Introduction (Zenner)1 – 4
B.Overview of governance structures5 – 59
I.Commission (AI Office, Art. 64 et seq.) (Zenner)5, 6
II.European Artificial Intelligence Board (Art. 65–66 AI Act) (Zenner)7 – 9
III.Advisory forum (Art. 67 AI Act) (Zenner)10, 11
IV.Scientific Panel of Independent Experts (Art. 68–69 AI Act) (Zenner)12, 13
V.European Data Protection Supervisor (EDPS) (Zenner)14
VI.Evaluation (Zenner)15
VII.National authorities (Schwartmann/Hansen/Keber)16 – 29
1.Structure17 – 24
a)National authorities under the AI Act18 – 20
b)Assignment to national authorities21 – 24
2.Obligations of the Member States25
3.Modalities of the supervision26 – 29
a)Independence, impartiality and without bias26
b)Confidentiality27
c)Exchange of experience28
d)Guidance and advice29
VIII.Catalogue of measures (Schwartmann/Keber/Zenner/Hansen)30 – 50
1.Market surveillance authorities31 – 42
a)Investigative powers32 – 35
b)Supervisory measures in the event of violations of the AI Act36 – 38
c)Other tasks of the market surveillance authorities39 – 42
2.Other national authorities (Art. 77 AI Act)43
3.GPAI systems and models44 – 50
IX.Notification procedure and notified bodies (Schwartmann/Köhler)51 – 55
X.Legal remedies (Schwartmann/Köhler)56 – 59
1.Right to lodge a complaint with a competent market surveillance authority (Art. 85 AI Act)57
2.Right to explanation of the individual decision-making (Art. 86 AI Act)58
3.Whistleblower directive59
Chapter 2 Liability
A.Basics (Kessen)1 – 25
I.Contractual liability7
II.Tortious liability8 – 25
1.Liability of the user/operator8 – 16
2.Liability of manufacturers and suppliers17 – 25
a)Product Liability Act19, 20
b)Producer liability21 – 23
c)Liability according to Section 823 para. 2 BGB24, 25
B.Special issues of national law26 – 38
I.Liability for copyright infringements (Sänger)26 – 31
1.Inputs27
2.Outputs28 – 31
II.Liability for violations of personal rights (Schwartmann/Köhler)32 – 38
Chapter 3 Sanctions and other enforcement measures
(Schwartmann/Köhler)
A.Fines2 – 13
I.General conditions3 – 6
II.Amount of the fines7 – 10
III.Special features for GPAI models and Union bodies11, 12
IV.Fines under the GDPR13
B.Other sanctions and enforcement measures14
Chapter 4 Official orders under the GDPR
(Keber/Maslewski)
AfP
Zeitschrift für Medien- und Kommunikationsrecht
AI
Artificial Intelligence
AI Act
Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts
Art.
Article/Articles
BayObLG
Bayerisches Oberstes Landesgericht (Bavarian Appeal Court)
BB
Betriebs-Berater
BDSG
Bundesdatenschutzgesetz (Federal Data Protection Act)
BGB
Bürgerliches Gesetzbuch (German Civil Code)
BGH
Bundesgerichtshof (Federal Court of Justice)
BR-Drucks.
Bundesratsdrucksache
BT-Drucks.
Bundestagsdrucksache
BVerfG
Bundesverfassungsgericht (Federal Constitutional Court)
cf.
confer
ch./chap.
chapter
CR
Computer und Recht
DA
Data Act
DB
Der Betrieb
DGA
Data Governance Act
DRiZ
Deutsche Richterzeitung
DSA
Digital Services Act
DSB
Datenschutz-Berater
DSK
Data Protection Conference
DSM-RL
Directive (EU) 2019/790 of the European Parliament and of the Council on copyright and related rights in the Digital Single Market
DuD
Datenschutz und Datensicherheit
ECJ
European Court of Justice
ed.
edition
EDPB
European Data Protection Board
e.g.
for example
EMFA
European Media Freedom Act
esp.
especially
et al.
and others
etc.
et cetera
et seq.
and the following
EU
European Union
EuR
Europarecht (Zeitschrift)
EuZW
Europäische Zeitschrift für Wirtschaftsrecht
f.
following
F.A.Z.
Frankfurter Allgemeine Zeitung
ff.
and following
Fn.
footnote
FOSS
free and open-source software
FS
Festschrift
GDPR
General Data Protection Regulation
GG
Grundgesetz (Basic Law for the Federal Republic of Germany)
GPAI model
general purpose AI model
GPAI system
general purpose AI system
GRCh
Charter of Fundamental Rights of the European Union
GRUR
Gewerblicher Rechtsschutz und Urheberrecht
HmbBfDI
The Hamburg Commissioner for Data Protection and Freedom of Information
HR
Human Resources
Hrsg.
Herausgeber (editor)
i.e.
that is
in fine
in conclusion
JI Directive
Directive on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA
KIR
Künstliche Intelligenz und Recht (Zeitschrift)
K&R
Kommunikation und Recht
LfDI BW
The State Data Protection Authority of Baden-Württemberg
lit.
litera, letter
LTO
Legal Tribune Online
LTZ
Zeitschrift für digitale Anwendung
MedR
Medizinrecht (Zeitschrift)
ML
machine learning
MMR
Multimedia und Recht
MüKo
Münchener Kommentar
NJW
Neue Juristische Wochenschrift
NLF
New Legislative Framework
no.
number/numbers
Nr.
Nummer (engl.: number)
NVwZ
Neue Zeitschrift für Verwaltungsrecht
NZA
Neue Zeitschrift für Arbeitsrecht
NZBau
Neue Zeitschrift für Baurecht und Vergaberecht
OECD
Organisation for Economic Co-operation and Development
p.
page
para.
paragraph/paragraphs
PharmR
Fachzeitschrift für das gesamte Arzneimittelrecht
RBI
remote biometric identification
RBÜ
Berne Convention for the Protection of Literary and Artistic Works
RDi
Recht Digital
RDV
Recht der Datenverarbeitung
Rec
Recital
SME
small and medium-sized enterprises
subpara.
subparagraph/subparagraphs
TFEU
Treaty on the Functioning of the European Union
TOM
technical and organisational measures
UrhG
Urheberrechtsgesetz (Act on Copyright and Related Rights)
UWG
Gesetz gegen den unlauteren Wettbewerb (Act against Unfair Competition)
VersR
Versicherungsrecht (Zeitschrift)
VK:KIWA
Virtual Competence Centre VK:KIWA. AI and Scientific Work (DE)
W3C
World Wide Web Consortium
WRP
Wettbewerb in Recht und Praxis
ZD
Zeitschrift für Datenschutz
ZfDR
Zeitschrift für Digitalisierung und Recht
ZfPC
Zeitschrift für Product Compliance
ZRP
Zeitschrift für Rechtspolitik
ZUM
Zeitschrift für Urheber- und Medienrecht
Assion/Willecke Neue Regelungen zu vernetzten Produkten und Diensten: EU Data Act, MMR 2023, 805
Bai et al. Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback, arXiv:2204.05862 2022
Bauckhage/Föhr/Loitz/Marten Fünf Thesen zur Bedeutung von Künstlicher Intelligenz in der Wirtschaftsprüfung, DB 2023, 2065
Baumann Generative KI und Urheberrecht – Urheber und Anwender im Spannungsfeld, NJW 2023, 3673
Baumgartner/Brunnbauer/Cross Anforderungen der DS-GVO an den Einsatz von Künstlicher Intelligenz, MMR 2023, 543
Becker Der Kommissionsentwurf für eine KI-Verordnung – Gefahr für die Wissenschaftsfreiheit? Eine Analyse des KI-VOE im Licht von Art. 13 GRC, ZfDR 2023, 164
Becker/Feuerstack Der neue Entwurf des EU-Parlaments für eine KI-Verordnung, MMR 2024, 22
Becker/Toprak/Beyerer Explainable Artificial Intelligence for Interpretable Data Minimization, 2023 IEEE International Conference on Data Mining Workshops (ICDMW), 2023, 885
Bennek Synergie von Large Language Models und Knowledge Graphen im Rechtsbereich: Anwendungen, Herausforderungen und Zukunftsperspektiven, DSRITB 2023, 573
Bertelsmann Automatisch erlaubt? Fünf Anwendungsfälle algorithmischer Systeme auf dem juristischen Prüfstand, 2020
Berz/Engel/Hacker Generative KI, Datenschutz, Hassrede und Desinformation – Zur Regulierung von KI-Meinungen, ZUM 2023, 586
Binder/Egli Umgang mit Hochrisiko-KI-Systemen in der KI-VO, MMR 2024, 626
Birnbaum ChatGPT und Prüfungsrecht, NVwZ 2023, 1127
Bode Contesting Use of Force Norms Through Technological Practices ZaöRV 2023, 39
Bomhard/Merkle Europäische KI-Verordnung – Der aktuelle Kommissionsentwurf und praktische Auswirkungen, RDi 2021, 276
Bomhard/Siglmüller AI Act – das Trilogergebnis, RDi 2024, 45
Borges/Keil Rechtshandbuch Big Data, 2024
Botta Die Förderung innovativer KI-Systeme in der EU, ZfDR 2022, 391
Braegelmann Zuhilfenahme Künstlicher Intelligenz bei der Erstellung von Texten für die Universität, RDi 2024, 188
Brodowski/Hartmann/Sorge Legal Tech, KI und eine „hybride Cloud“ im Einsatz gegen Kindesmissbrauch, NJW 2023, 583
Brown et al. Language Models are Few-Shot Learners, arXiv:2005.14165 2020, 1
Callies/Ruffert EUV/AEUV, 6. Aufl. 2022
Cole/Schiedermair/Wagner Die Entfaltung von Freiheit im Rahmen des Rechts, Festschrift für Dieter Dörr zum 70. Geburtstag, 2022 (cited as: author in FS Dörr)
Däubler Die KI-Verordnung der Europäischen Union: Überblick und Konsequenzen im Arbeitsrecht, SR 2024, 110
Denga Die Nutzungsgovernance im European Health Data Space als Problem eines Immaterialgütermarkts, EuZW 2023, 25
Der Hamburgische Beauftragte für Datenschutz und Informationsfreiheit Diskussionspapier: Large Language Models und personenbezogene Daten, https://datenschutz-hamburg.de/fileadmin/user_upload/HmbBfDI/Datenschutz/Informationen/240715_Diskussionspapier_HmbBfDI_KI_Modelle.pdf
Dierks European Health Data Space – Anforderungen und Chancen für die pharmazeutische Industrie, PharmR 2023, 369, 370 f.
Ebers/Heinze/Krügel/Steinrötter (Hrsg.) Künstliche Intelligenz und Robotik, 2020 (cited as: Ebers/Heinze/Krügel/Steinrötter/author § Rn. )
Ebers/Hoch/Rosenkranz/Ruschemeier/Steinrötter Der Entwurf für eine EU-KI-Verordnung: Richtige Richtung mit Optimierungsbedarf, RDi 2021, 531
Ebers/Hoch/Rosenkranz/Ruschemeier/Steinrötter The European Commission‘s Proposal for an Artificial Intelligence Act – A Critical Assessment by Members of the Robotics and AI Law Society (RAILS), Multidisciplinary Scientific Journal 2021, 589
Ebert/Spiecker gen. Döhmann Der Kommissionsentwurf für eine KI-Verordnung der EU, NVwZ 2021, 1188
Ehring/Taeger Produkthaftungs- und Produktsicherheitsrecht, 2022
Eichelberger Arzthaftung beim Einsatz von KI und Robotik, ZfPC 2023, 209
Engel Generative KI, Foundation Models und KI-Modelle mit allgemeinem Verwendungszweck in der KI-VO, KIR 2024, 21
Engeler Der Konflikt zwischen Datenmarkt und Datenschutz: Eine ökonomische Kritik der Einwilligung, NJW 2022, 3398
Europäische Kommission Artificial Intelligence – Questions and Answers, https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683
Ferreau Europas Digital- und Medienpolitik ist nicht (immer) die Lösung, sondern das Problem, K&R Editorial 11/2023
Feuerstack/Becker/Hertz Die Entwürfe des EU-Parlaments und der EU-Kommission für eine KI-Verordnung im Vergleich: Eine Bewertung mit Fokus auf Regeln zu Transparenz, Forschungsfreiheit, Manipulation und Emotionserkennung, ZfDR 2023, 433
Fischer/Jeremias/Dieterich Prüfungsrecht, 8. Aufl. 2022
Foerste/Graf von Westphalen Produkthaftungshandbuch, 4. Aufl. 2024 (cited as: Foerste/Graf von Westphalen/author)
Forgó Der European Health Data Space im Kontext der MMR, MMR 2023, 3
Frank/Heine KI-Einsatz im Betrieb unter der KI-Verordnung, NZA 2023, 1281
Franzen Beschäftigtendatenschutz: Was wäre besser als der Status quo?, RDV 2014, 201
Friedewald/Roßnagel/Heesen/Krämer/Lamla Künstliche Intelligenz, Demokratie und Privatheit, Der KI-Verordnungsentwurf und biometrische Erkennung: Ein großer Wurf oder kompetenzwidrige Symbolpolitik?, 2022 (cited as: Friedewald/Roßnagel/Heesen/Krämer/Lamla/author)
Furch Stimmlokalisierung von Games, MMR 2024, 728
Geminn Die Regulierung Künstlicher Intelligenz, ZD 2021, 354
Gerdemann Harmonisierte Normen und ihre Bedeutung für die Zukunft der KI, MMR 2024, 614
Gless/Seelmann Intelligente Agenten und das Recht, 2016
Gola/Heckmann Datenschutz-Grundverordnung, Bundesdatenschutzgesetz – DS-GVO/BDSG, 3. Aufl. 2022 (cited as: Gola/Heckmann/author DS-GVO/BDSG, Art./§ Rn.)
Gonscherowski/Hansen/Rost Resilienz – eine neue Anforderung aus der Datenschutz-Grundverordnung, DuD 2018, 442
Götz Rechtsdurchsetzung von „meldenden Personen“ gegenüber Online-Plattformen nach dem DSA — Zur abschließenden Regelung der Rechtsbehelfe durch den DSA, CR 2023, 450
Graichen Die Automatisierung der Justiz, 2022
von der Groeben/Schwarze/Hatje Europäisches Unionsrecht, 7. Aufl. 2015
Grützmacher Die zivilrechtliche Haftung für KI nach dem Entwurf der geplanten KI-VO, CR 2021, 433
Gudarzi Vielfaltssicherung in sozialen Netzwerken: Eine verfassungsrechtliche Betrachtung des Schutzes der Meinungsvielfalt und der Meinungsbildungsfreiheit, 2022
Günter/Gerigk/Berger Von Algorithmen und Arbeitnehmern: Die europarechtliche Regulierung von KI im arbeitsrechtlichen Kontext, NZA 2024, 234
Hacker A Legal Framework for AI Training Data – From First Principles to the Artificial Intelligence Act, Law Innovation and Technology 2021, 257
Hacker/Berz Der AI Act der Europäischen Union – Überblick, Kritik und Ausblick, ZRP 2023, 226
Hahn Die Regulierung biometrischer Fernidentifizierung in der Strafverfolgung im KI-Verordnungsentwurf der EU-Kommission, ZfDR 2023, 142
Hanloser Cookies 2.0 – TTDSG und nun? ZD 2021, 399
Hansen/Probst Souveräne Sicherheit, Zero Trust und das Datenschutzrecht, DuD 2023, 623
Hartmann KI & Recht kompakt, 2020
Hartmann ChatGPT & Co. in der Strafjustiz – Einsatzszenarien großer KI-Sprachmodelle in der Strafverfolgung, RDV 2023, 300 ff.
Hartmann/Cipierre/Beek Datamining in der Strafjustiz? RDV 2023, 147 ff.
Hartmann/Prinz Immaterialgüterrechtlicher Schutz von Systemen Künstlicher Intelligenz, DSRITB 2018, 769
Hartung Smartlaw, ChatGPT und das RDG, RDi 2023, 209
Heckmann/Rachut Rechtsichere Hochschulprüfungen mit und trotz generativer KI, OdW 2024, 65
Heinze Daten, Plattformen und KI als Dreiklang unserer Zeit, DSRITB 2022, 187
Heinze/Sorge/Specht-Riemenschneider Das Recht der Künstlichen Intelligenz, KIR 2024, 11
Hennemann/Steinrötter Der Data Act, NJW 2024, 1
Hentsch/Rodenhausen Einsatzfelder von KI in Games, MMR 2024, 714
Hilgendorf/Roth-Isigkeit Die neue Verordnung der EU zur Künstlichen Intelligenz, 2023
Hoeren/Sieber/Holznagel Handbuch Multimedia-Recht, 60. Aufl. 2024, Loseblattwerk (cited as: author in Hoeren/Sieber/Holznagel, Multimedia-Recht)
Holthausen Big Data, People Analytics, KI und Gestaltung von Betriebsvereinbarungen – Grund-, arbeits- und datenschutzrechtliche An- und Herausforderungen, RdA 2021, 19
Holthausen Einsatz künstlicher Intelligenz im HR-Bereich und Anforderungen an die „schöne neue Arbeitswelt X.0“, RdA 2023, 361
Hornung KI-Regulierung im Mehrebenensystem, DuD 2022, 561
Hornung/Schallbruch IT-Sicherheitsrecht, 2. Aufl. 2024 (im Erscheinen)
Ji et al. Survey of hallucination in natural language generation, ACM Computing Surveys, Vol. 55, Issue 12, Article No. 248, 3
Käde/von Maltzahn Die Erklärbarkeit von Künstlicher Intelligenz (KI). Entmystifizierung der Black Box und Chancen für das Recht, InTeR 2020, 66
Kahneman Schnelles Denken, langsames Denken, 2012
Kalbfus/Schöberle ArbRAktuell, Arbeitsrechtliche Fragen beim Einsatz von KI am Beispiel von ChatGPT 2023, 251
Karaboga/Frei/Ebbers/Rovelli/Friedewald Emotions- und Krankheitserkennung mittels biometrischer Verfahren, DuD 2023, 553
Kastl-Riemann Algorithmen und Künstliche Intelligenz im Äußerungsrecht ZUM 2023, 578
Kaulartz/Braegelmann Rechtshandbuch Artificial Intelligence und Machine Learning, 2020
Keber/Maslewski Rechtsgrundlagen für das Training von Systemen Künstlicher Intelligenz nach der DS-GVO, RDV 2023, 273
Kessen/Reif/Burkhardt Datenschutz- und zivilrechtliche Anforderungen an „Service gegen Daten“-Geschäftsmodelle RDV 2022, 64
Kipker Handbuch Cybersecurity, 2. Aufl. 2023
Klagge/Üge KI und Geschäftsgeheimnisrecht in der Games-Branche, MMR 2024, 733
Klindt Kommentar Produktsicherheitsgesetz, 3. Aufl. 2021
Kluth Künstliche Intelligenz als Auslöser neuer Regelungsbedarfe im Architekten- und Ingenieurrecht, NZBau 2023, 283
Köhler Zur Unionsrechtskonformität von § 3 BDSG: Erkenntnisse aus EuGH, Urt. v. 30.3.2023 – C-34/21, RDV 2023, 307
Kraetzig Deliktsschutz gegen KI-Abbilder – Teil 1: Täuschende Deepfakes, CR 2024, 207
Kraetzig Europäische Medienregulierung – Freiheit durch Aufsicht?, NJW 2023, 1485
Kramme Vertragsrecht für digitale Produkte, RDi 2021, 20
Krause Auf dem Weg zur unionsrechtlichen Regelung von Plattformtätigkeiten, NZA 2022, 522
Krizhevsky/Sutskever/Hinton Advances in Neural Information Processing Systems 25, 2012, 1
Krone Urheberrechtlicher Schutz von ChatGPT-Texten? RDi 2023, 117
Kühling/Buchner Datenschutz-Grundverordnung, Bundesdatenschutzgesetz: DS-GVO/BDSG, 4. Aufl. 2024 (cited as: Kühling/Buchner/author Art./§ Rn. )
Kulick/Goldhammer (Hrsg.) Der Terrorist als Feind?, 2020 (cited as: Kulick/Goldhammer/author)
Lang/Reinbach Künstliche Intelligenz im Arbeitsrecht NZA 2023, 1273
Laue/Nink/Kremer Betriebliches Datenschutzrecht in der Praxis, 3. Aufl. 2024
Linardatos Auf dem Weg zu einer europäischen KI-Verordnung – ein (kritischer) Blick auf den aktuellen Kommissionsentwurf, GPR 2022, 60
Macher/Graf Ballestrem Der neue EU Data Act: Zugang zu Daten – und Geschäftsgeheimnissen?, GRUR-Prax 2023, 661
Malferrari Europäische Kommission: Wettbewerb: Kommission bittet um Rückmeldungen zur Marktdefinition, EuZW 2022, 1083
Martini Blackbox Algorithmus, 2019
Martini/Botta KI-Aufsicht im föderalen Staat, MMR 2024, 630
Mayrhofer Produkthaftungsrechtliche Verantwortlichkeit des „Trainer-Nutzers“ von KI-Systemen, RDi 2023, 20
Meyer Künstliche Intelligenz im Personalmanagement und Arbeitsrecht, NJW 2023, 1841
Mitsching/Rauda/Sach Die sieben wichtigsten KI-Anwendungsfälle in der Games-Branche, MMR 2024, 718
Müller-Peltzer/Tanczik Künstliche Intelligenz und Daten, RDi 2023, 452
Muller/Schöppl/Avramidou/Talvitie/Peñalver AIA in-depth, #3a High-Risk AI Classification
Münchener Kommentar zum Bürgerlichen Gesetzbuch BGB, 8. Aufl. 2020 (cited as: MüKo BGB/author Art. Rn. )
Nebel Werbe-Tracking nach Inkrafttreten des TTDSG – Zum einwilligungsfreien Browser- und Device-Fingerprinting, CR 2021, 666
Nemitz Künstliche Intelligenz und Demokratie, MMR 2024, 603
Niemann/Kevekordes Machine Learning und Datenschutz (Teil 1), CR 2020, 17
Nordemann Generative Künstliche Intelligenz: Urheberrechtsverletzungen und Haftung, GRUR 2024, 1
Oechsler Die Haftungsverantwortung für selbstlernende KI-Systeme, NJW 2022, 2713
Ory Medienfreiheit – Der Entwurf eines European Media Freedom Act, ZRP 2023, 26
Oster/Walter/Zaouras Festschrift Gounalakis (cited as: author in FS Gounalakis)
Pesch/Böhme Verarbeitung personenbezogener Daten und Datenrichtigkeit bei großen Sprachmodellen, MMR 2023, 917
Pfeffer Stille Europäisierung – Wie europäisch wird das deutsche Polizeirecht?, NVwZ 2022, 294
Piltz Das neue TTDSG aus Sicht der Telemedien, CR 2021, 555
Plath DS-GVO/BDSG/TTDSG, 4. Aufl. 2023 (cited as: Plath/author Art./§ Rn. )
Rachut Kein Zugang zum Masterstudium wegen Vorlage eines mittels KI erstellten Essays, Anmerkung zu VG München, Beschluss vom 28.11.2023 – M 3 E 23.4371, NJW 2024, 1052
Raji Datenräume in der Europäischen Datenstrategie am Beispiel des European Health Data Space, ZD 2023, 3
Rath Überlastung der Gerichte durch Firmen wie Flightright?, DRiZ 2019, 288
Reusch KI und Software im Kontext von Produkthaftung und Produktsicherheit, RDi 2023, 152
Riehm/Abold Rechtsbehelfe von Verbrauchern bei Verträgen über digitale Produkte — Einführung in das neue Gewährleistungsrecht für die Digitalisierung, CR 2021, 530
Roos/Weitz Hochrisiko-KI-Systeme im Kommissionsentwurf für eine KI-Verordnung, MMR 2021, 844
Roß European Media Freedom Act und Kernfragen der europäischen Integration, EuR 2023, 450
Roßnagel Datenschutz in der Forschung, ZD 2019, 157
Roth-Isigkeit Grundstrukturen der geplanten KI-Aufsichtsbehörden – KI-Bürokratie?, ZRP 2022, 187
Roth-Isigkeit Der risikobasierte Ansatz als Paradigma des Digitalverwaltungsrechts, MMR 2024, 621
Roth-Isigkeit Der neue Rechtsrahmen für Künstliche Intelligenz in der Europäischen Union, KIR 2024, 15
Rüthers/Fischer/Birk Rechtstheorie, 12. Aufl. 2022
Schaub Nutzung von Künstlicher Intelligenz als Pflichtverletzung?, NJW 2023, 2145
Schmidhuber Deep Learning in Neural Networks: An Overview, Neural Networks 2015, 85
Schneider KI-unterstütztes Coding in der Spieleentwicklung, MMR 2024, 724
Schucht Die neue Architektur im europäischen Produktsicherheitsrecht nach New Legislative Framework und Alignment Package, EuZW 2014, 848
Schumacher/Sydow/von Schönfeld Cookie Compliance, quo vadis?, MMR 2021, 603
Schüßler/Zöll EU-Datenschutz-Grundverordnung und Beschäftigtendatenschutz, DuD 2013, 639
Schwartmann Dialektik versus Stochastik, Wissenschaft darf nicht im mathematischen Bermuda-Dreieck der KI verschwinden, Forschung & Lehre 2023, 132
Schwartmann Autonom wie ein Tier – KI in Hochschullehre und Prüfung, Forschung & Lehre 2024, 352 f.
Schwartmann/Benedikt/Reif Datenschutz bei Websites – Aktuelle Rechtslage und Ausblick auf das TTDSG, RDV 2020, 231
Schwartmann/Benedikt/Reif Datenschutz im Internet (im Erscheinen)
Schwartmann/Benedikt/Reif Entwurf zum TTDSG: Für einen zeitgemäßen Online-Datenschutz?, MMR 2021, 99
Schwartmann/Benedikt/Stelkens Advocatus Diaboli im Rechtsstaat, RDV 2023, 296
Schwartmann/Jaspers/Thüsing/Kugelmann DS-GVO/BDSG, 3. Aufl. 2024 (cited as: HK DS-GVO/BDSG/author Art./§ Rn. )
Schwartmann/Kessen/Hartmann/Benedikt „Richter-Maschine-Richter“ – Kontrolle des Einsatzes generativer KI, DRiZ 2023, 388
Schwartmann/Kurth/Köhler Der Einsatz von KI an Hochschulen – eine rechtliche Betrachtung, OdW 2024, 161
Selbst Disparate Impact in Big Data Policing, Georgia Law Review 2017, 109
Siglmüller/Gassner Softwareentwicklung durch Open-Source-trainierte KI – Schutz und Haftung, RDi 2023, 124
Simitis/Hornung/Spiecker gen. Döhmann Datenschutzrecht, 2019 (cited as: Datenschutzrecht/author Art./§ Rn. )
Singelnstein Predictive Policing: Algorithmenbasierte Straftatprognosen zur vorausschauenden Kriminalintervention, NStZ 2018, 1
Skitka/Mosier/Burdick Does automation bias decision-making?, International Journal of Human-Computer Studies 51, 991
Steege Definition von Künstlicher Intelligenz in Art. 3 Nr. 1 KI-VO-E, MMR 2022, 926
Steen Ableitungen als wesentliche Fähigkeit von KI-Systemen nach der KI-VO, KIR 2024, 7
Steinrötter Verhältnis von Data Act und DS-GVO, GRUR 2023, 216
Stephen et al. Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback, arXiv:2307.15217 2023, 1, 1 ff.
Taeger/Gabel DSGVO – BDSG – TTDSG, 4. Aufl. 2022 (cited as: Taeger/Gabel/author Art./§ Rn. )
Ukrow EU: Einigung zum European Media Freedom Act erzielt, MMR-Aktuell 2024, 01294
Ukrow Künstliche Intelligenz als Herausforderung für die positive Medienordnung, 2022
Vaswani et al. Attention is All you Need, Advances in Neural Information Processing Systems 30 (NIPS 2017), 1
Veale/Borgesius Demystifying the Draft EU Artificial Intelligence Act, CRI 2021, 108
Wagner Verantwortlichkeit im Zeichen digitaler Techniken, VersR 2020, 717
Wendehorst/Nessler/Aufreiter/Aichinger Der Begriff des „KI-Systems“ unter der neuen KI-VO, MMR 2024, 605
Wendt/Wendt Das neue Recht der Künstlichen Intelligenz, 2024
Werry Generative KI-Modelle im Visier der Datenschutzbehörden, MMR 2023, 911
Wiebe Produktsicherheitsrechtliche Betrachtung des Vorschlags für eine KI-Verordnung, BB 2022, 899
Wolff Algorithmen als Richter, Band 3, 2022
Wünschelbaum Kollektivautonomer Datenschutz, 2022
Wünschelbaum IT-Mitbestimmung bei generativen KI-Systemen: Bestandsaufnahme und Handlungsleitfaden, RDV 2024, 140
Zech/Hünefeld Einsatz von KI in der Medizin: Haftung und Versicherung, MedR 2023, 1
Zenner KI-Regulierung: Ein Trilog unter großem Zeitdruck, RDV 2023, 340
Zentrum für vertrauenswürdige Künstliche Intelligenz Vertrauenswürdige KI-Systeme im Journalismus, Ausgabe 8 v. 29.11.2023
Zimmerling/Brehm Prüfungsrecht, 2. Aufl. 2001