Build AI skills and meet the legal requirements of the new AI regulation: This book provides a thorough overview of what executives, entrepreneurs, freelancers, users, and HR developers need to know and implement now - packed with numerous tips, examples, and tools for quick adoption!
Since 1st August 2024, the new AI regulation has been in force: As the world's first comprehensive AI law, the European AI Act presents new challenges for companies in Germany and Austria too. Artificial Intelligence is no longer just an IT matter. It fundamentally transforms how businesses operate, make decisions, and organise processes - across various industries, departments, and individual roles, from manufacturing and medicine to marketing and human resources. Yet, the European AI regulation introduces a host of rules, obligations, and prohibitions: Ignoring them risks fines in the millions, legal consequences, and competitive disadvantages. The competence obligation under Article 4 of the AI Act requires companies to train their employees in the safe use of AI systems and ensure ongoing development.
This book is your compass through the complex world of AI - offering practical knowledge, clearly explained and immediately applicable!
Artificial Intelligence Made Simple: The technical, legal, and ethical foundations of AI - from machine learning to generative models - are presented in an accessible way, perfect for beginners and decision-makers.
Inspiring Practice: Numerous examples and solutions demonstrate how companies can successfully implement AI and build AI skills.
Ready to Use: Over 100 actionable tips, along with practical examples of checklists and templates, assist with risk management, training measures, and documentation.
In an environment of rapid developments and complex regulations, this book provides the guidance you need. It equips you with the essential knowledge and tools to understand the key requirements of the European AI regulation and turn them into opportunities: From AI risk classification to data protection, copyright, liability issues, technical functionality, and ethical standards - a comprehensive overview to train your staff and future-proof your business.
Page count: 261
Year of publication: 2025
Foreword
1 Introduction
1.1 AI: Opportunities, Risks and Challenges
1.2 The AI Act and Its Significance
1.3 AI Competence Requirement
2 Fundamentals of Artificial Intelligence
2.1 Definitions and Concepts: What Is AI?
2.2 Legal Definition of AI
2.3 Types of AI Systems
2.4 Data: The Fuel for AI Systems
2.5 Machine Learning: The Engine of AI
2.6 Deep Learning and Neural Networks
2.7 Transformers and Large Language Models
2.8 Hardware and Infrastructure
3 The European AI Act
3.1 Objectives and Risk-Based Approach
3.2 Prohibited AI Systems with Unacceptable Risk
3.3 High-Risk AI Systems
3.4 AI Systems with Limited Risk
3.5 AI Systems with Minimal Risk
3.6 Competence Obligations: Knowledge as a Prerequisite
3.7 Enforcement and Sanctions
3.8 Schedule and Important Deadlines
3.9 Scope and International Perspective
4 Legal Framework
4.1 Data Protection Considerations for AI Applications
4.2 Copyright and Intellectual Property
4.3 Liability Issues
4.4 International Dimensions of AI Regulation
4.5 Consumer Protection in the Context of AI
5 Risk Management for AI Systems
5.1 Introduction to AI Risks
5.2 Technical AI Risks and Vulnerabilities
5.3 Cyber Security: AI as a Target
5.4 Cyber Security: AI as an Attack Tool
5.5 Social and Economic Risks
5.6 Ecological and Resource-Related Risks
5.7 Tactical Security Measures
5.8 AI Risk Management and Governance
5.9 Establishment of AI Guidelines
6 Building AI Competence
6.1 Fundamentals of AI Competence
6.2 Staged Model for AI Training
6.3 Context-Dependent AI Training
6.4 Monitoring and Evaluation of AI Competence
6.5 Challenges and Solutions
6.6 Best Practices for Competence Building
7 Success Factors for AI Implementation
7.1 Selection and Integration of AI Systems
7.2 Secure Data Processing
7.3 Training and Optimisation of AI Models
7.4 Validation and Verification of AI Results
7.5 Scaling and Maintaining AI Systems
8 Generative AI
8.1 Introduction to Generative AI
8.2 Key Technologies of Generative AI
8.3 Text Generation using AI
8.4 Multimodal AI
8.5 Image Generation and Text to Image
8.6 Text-to-Video
8.7 Text-to-Music
8.8 Text-to-Speech (TTS)
8.9 Text-to-3D
9 Current Developments and Trends
9.1 Autonomous AI Agents
9.2 Artificial General Intelligence (AGI)
9.3 Further Concepts and Developments
9.4 Strategies to Prepare for the AI Future
9.5 Concluding Thoughts
Appendix: Working Documents
Compliance Checklists
Practical Templates
AI Glossary
Welcome to The European AI Act: Competence + Compliance.
I am delighted to guide you through the fascinating realm of artificial intelligence – a field that is acquiring new frameworks and opportunities through the European AI Act. This book is intended for entrepreneurs, employees, managers, and human resource professionals – indeed, anyone keen to understand how AI is transforming our working lives and what steps we must take to actively shape the future.
My aim is to elucidate the intricate requirements of the AI Act, which came into effect on 1st August 2024, and to offer practical solutions. This landmark EU regulation mandates that companies and organisations prepare their teams for AI implementation – a requirement that not only ensures legal compliance but also unlocks significant potential for innovation.
Within these pages, I will demonstrate how to navigate the legal obligations, equip your teams with the necessary skills, and harness the opportunities presented by this technology.
Structure of the book
This book is crafted to engage both novices and experienced readers. Chapters 1 to 5 establish a foundational understanding for all: they explore the opportunities and challenges posed by AI, outline the provisions of the AI Act, and address legal, ethical, and security considerations. This groundwork underscores why the AI Act represents not merely an obligation, but also a valuable opportunity.
Chapter 6 is tailored specifically for managers and HR professionals. It details the development of AI competencies, offering a step-by-step model, best practices, and solutions to common obstacles. Chapter 7, directed at IT and project managers, highlights critical success factors for AI initiatives, from system selection to scaling.
A personal highlight is Chapter 8, which focuses on Generative AI. Beyond the introductory and core chapters – designed to remain broadly relevant and enduring – I delve into the realms of text, image, and music generation, introducing you to the most effective tools available. In Chapter 9, we examine current and emerging trends, such as autonomous AI agents and the concept of Artificial General Intelligence (AGI).
The appendix provides practical working documents to inspire and assist you in applying these insights: checklists, templates, and a concise AI glossary for daily reference.
I have structured the book to serve as a readily accessible resource. Key concepts and challenges are, wherever possible, explained within the context of their respective sections, with cross-references to other parts of the text for further detail where appropriate.
Your benefit
This book seeks to deliver comprehensive knowledge in an actionable format, igniting enthusiasm for engaging with AI. The AI Act is both a responsibility and an invitation to future-proof your organisation. By mastering its regulations and training your teams, you can unlock pathways to innovation and growth.
Let this book inspire you – it is your toolkit for shaping the AI revolution.
Yours sincerely,
Markus M. Kirchmair
Artificial intelligence enhances processes, boosts efficiency, facilitates more accurate diagnoses, and produces creative content that consistently astonishes us with its quality. Yet it also poses significant challenges and raises intricate ethical dilemmas. For instance, how can we prevent algorithms from perpetuating existing biases, or guard against emerging threats such as deepfakes designed to deceive us?
The pace of artificial intelligence’s evolution is captivating – yet it simultaneously tests the resilience of our social and legal frameworks. How does one responsibly govern a technology whose innovation cycles outstrip the pace of political decision-making?
The European Union addresses this very challenge through the AI Act. Enacted on 1st August 2024, this regulation adopts a risk-based approach, aiming to safeguard fundamental rights, security, and ethical principles while fostering innovation.
Its scope extends beyond technical stipulations – such as those concerning documentation, transparency, and security – to include a mandate for cultivating AI competencies within organisations. For the EU, the responsible deployment of AI systems hinges on first achieving a thorough understanding of their nature and implications.
In the opening chapter, we offer a succinct overview of AI’s primary opportunities, risks, and challenges. We explore how the AI Act fundamentally tackles these issues, setting the stage for a deeper examination of specific functionalities, regulations, their practical applications, and potential solutions in the chapters that follow.
Artificial intelligence (AI) is transforming numerous facets of life by transferring capabilities such as learning, problem-solving, and decision-making from humans to technology. It is reshaping business, science, and daily routines, yet it also introduces a host of technical, ethical, and societal challenges.
Artificial intelligence harnesses vast datasets to tackle complex problems, revolutionising a range of industries.
In manufacturing, it streamlines production processes; in healthcare, it enables early disease detection through precise image analysis. Within the mobility sector, AI-powered assistance systems enhance safety by processing real-time sensor data and adapting through wireless updates. In logistics, AI demonstrates its potential by optimising transport routes with algorithms and managing supply chains efficiently through the intelligent integration of inventory, weather, and traffic information.
Generative AI is redefining creative domains – from crafting marketing content to designing lifelike products. Businesses also benefit from AI-driven personalised services, which unlock new markets and bolster customer loyalty.
Moreover, AI systems reveal insights previously hidden in data, with sophisticated analyses driving breakthroughs in research.
The International AI Safety Report 2025¹ characterises this unprecedented momentum as exponential scaling, propelled by computing power that has doubled every six months since 2012, alongside larger datasets and refined algorithms. Deep learning and transformer models deliver context-sensitive responses, produce realistic designs, and generate program code – advancements that enhance efficiency and open entirely new avenues of application.
Despite its vast potential, artificial intelligence is not a universal solution. Its effectiveness relies heavily on the quality and volume of available data. Inaccurate or incomplete data can yield biased or discriminatory outcomes, such as in AI-assisted lending decisions. Furthermore, AI systems are not inherently impartial; they often mirror human biases and errors.
Technical risks stem from AI’s sensitivity to shifts in underlying data. Unforeseen changes can destabilise autonomous vehicles or drones, leading to critically flawed decisions.
Additionally, the opacity of many modern AI models – so-called black boxes – renders their decision-making processes obscure, particularly in sensitive fields like healthcare and justice.
Socio-ecological risks emerge from predictable shifts in employment due to rising automation, while energy-intensive AI data centres contribute significantly to environmental degradation.
Security threats include realistic manipulations, such as deepfakes, which provide criminals with new tools, and the potential misuse of autonomous systems for military purposes. The integrity of public discourse is increasingly jeopardised by manipulated media.
Additional risks include dependence on AI systems, rendering critical infrastructure more susceptible to cyberattacks, alongside ethical concerns surrounding AI-supported surveillance and decision-making. The rapid evolution of AI threatens to outpace existing regulatory frameworks, raising pressing ethical questions:
Who bears responsibility for errors in autonomous systems?
How can we mitigate social inequalities exacerbated by automation?
These opportunities and challenges underscore the need for a sustainable, responsible approach to AI deployment.
1 See https://www.gov.uk/government/publications/international-ai-safety-report-2025
Given its profound opportunities and risks, artificial intelligence presents complex challenges for societies and regulators. To address these effectively, the European Union introduced the AI Act – officially termed EU Regulation 2024/1689, or the "Artificial Intelligence Regulation." Enacted on 1st August 2024, it stands as the world’s first comprehensive legislation governing AI.
The AI Act seeks to regulate the development and application of AI systems, fostering innovation while safeguarding fundamental rights and ethical standards.
The European Union employs a risk-based framework, categorising AI systems into four tiers to establish tailored regulatory requirements:
Unacceptable risk
Certain AI systems, such as social scoring or widespread biometric real-time surveillance, are generally prohibited because they could significantly endanger fundamental human rights. Special regulations are only provided for in clearly defined exceptional cases, such as combating terrorism.
High-risk systems
Applications with a significant impact on security and individual rights, such as medical diagnostic tools, AI-supported personnel selection processes or autonomous vehicles, are subject to strict requirements regarding documentation, transparency and security.
Limited risk
Systems where the risk is manageable – for example chatbots – only need to communicate transparently that they are based on AI in order to prevent users from being deceived.
Minimal risk
AI applications that have little critical impact, such as game recommendations or simple tools for personal entertainment, remain largely unregulated.
This risk-based approach aims to nurture innovation while building trust in AI systems through clear standards for traceability, security, and ethical accountability. Breaches of these standards may incur severe penalties, including substantial fines.
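To make the four tiers above tangible in day-to-day compliance work, a simple internal inventory can record each AI system together with its presumed tier. The following Python sketch is purely illustrative: the system names, the record structure, and the example classifications are assumptions, and the AI Act itself prescribes no particular format or tooling.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (Regulation 2024/1689)."""
    UNACCEPTABLE = "prohibited"         # e.g. social scoring
    HIGH = "strict obligations"         # e.g. medical diagnostics, recruitment
    LIMITED = "transparency duties"     # e.g. chatbots must disclose AI use
    MINIMAL = "largely unregulated"     # e.g. game recommendations

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI inventory."""
    name: str
    purpose: str
    tier: RiskTier

# Illustrative entries – the classifications are assumptions, not legal findings
inventory = [
    AISystemRecord("CV screening assistant", "personnel selection", RiskTier.HIGH),
    AISystemRecord("Website support chatbot", "customer service", RiskTier.LIMITED),
]

for record in inventory:
    print(f"{record.name}: {record.tier.name} ({record.tier.value})")
```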
Concurrently, the EU pursues strategic objectives with the AI Act. As with the General Data Protection Regulation (GDPR), uniform regulatory standards for AI technologies are intended to enhance Europe's competitiveness and innovative capacity.
The AI Act offers businesses planning certainty, yet it imposes significant duties: companies must ensure their AI systems are equitable, secure, and transparent, and that their staff are adequately trained.
Tip 1
Conduct a risk assessment of your AI systems as early as possible to determine their classification and plan the timely implementation of compliance measures.
Through the AI Act, the European Union provides a robust regulatory response to AI’s opportunities and challenges, striving to balance progress with the protection of fundamental rights. This interplay between innovation and safeguarding is a recurring theme, explored in depth in Chapter 3.
Beyond risk-specific measures, the AI Act explicitly mandates that providers, operators, and users of AI systems develop and maintain appropriate AI competencies. The following section examines why this requirement is strategically vital and how businesses can meet it.
The European Commission does not regard AI implementation as a task confined to IT departments, but as a pivotal responsibility spanning entire organisations. A cornerstone of the AI Act, enshrined in Article 4, stipulates that all individuals involved in developing, operating, or using AI systems must possess suitable AI skills.
This regulation aims to ensure the safe and effective management of AI applications, stimulate innovation, and robustly uphold ethical values and fundamental rights. It presents businesses with an opportunity to responsibly leverage technological potential within the digitalisation landscape while minimising risks.
The regulatory objectives can be distilled as follows:
Managing AI Complexity
AI systems are often intricate and opaque. Staff with deep AI expertise are essential for ensuring transparency, critically evaluating decisions, and identifying risks promptly.
Fostering Innovation and Competitiveness
Investment in AI skills enables companies to accelerate the development of new applications, enhance technological adaptability, and gain competitive edges.
Safeguarding Fundamental Rights and Ethical Values
AI profoundly affects sensitive domains like data protection and automated decision-making. Skilled employees can detect and avert breaches of ethical and legal standards.
Effective Risk Mitigation
Training diminishes the likelihood of erroneous decisions or misuse of AI, which could lead to legal repercussions or reputational harm.
Sustainable Digitalisation
Widespread AI competence is vital for rendering digitalisation secure, efficient, and sustainable over the long term.
In practical terms, this competence requirement mandates that all employees interacting with AI systems possess a foundational understanding of their operation. This includes the ability to swiftly pinpoint errors and evaluate the ethical and legal ramifications of AI-driven decisions.
To achieve these aims effectively, a structured and tiered approach is advised:
Basic Training for All
Employees encountering AI systems require introductory training on their functionality, risks, and ethical considerations, encompassing basic awareness of the GDPR and the AI Act.
In-Depth, Specialised Training
For critical applications – such as in healthcare, finance, or law – advanced knowledge is essential. These staff members should thoroughly grasp regulatory demands, safety protocols, and the mechanics of AI systems’ decision-making processes (see Section 3.6).
Continuous and Documented Learning
Given the swift pace of technological progress, ongoing updates and meticulous documentation of training content are imperative. This ensures staff remain current and legal obligations are consistently met.
Tip 2
Diligently record the planning and execution of training initiatives. This enhances internal quality assurance and supports compliance with legal standards.
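One way to follow Tip 2 is to keep even a minimal, machine-readable record of who completed which training and when a refresher is due. The sketch below is only an assumption of what such a record might look like; neither the field names nor the refresh interval is prescribed by the AI Act.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    """Hypothetical minimal record for documenting AI competence training."""
    employee: str
    role: str           # e.g. "HR", "IT", "Management"
    topic: str          # e.g. "GDPR and AI Act basics"
    level: str          # e.g. "basic" or "specialised"
    completed_on: date
    refresh_due: date   # supports continuous, documented learning

records = [
    TrainingRecord("A. Example", "HR", "AI Act basics (Article 4)",
                   "basic", date(2025, 3, 1), date(2026, 3, 1)),
]

# Simple overview for internal quality assurance
for r in records:
    print(f"{r.employee} ({r.role}): {r.topic} – refresh due {r.refresh_due}")
```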
For successful implementation, a comprehensive training strategy engaging all relevant departments is recommended. Chapter 6 elaborates on how businesses can systematically cultivate AI competencies with lasting impact.
Artificial intelligence is a concept that both sparks the imagination and ignites spirited debates – from visions of futuristic robots capable of independent thought to everyday companions such as voice assistants on our smartphones. AI is already woven into the fabric of our daily lives, and it holds the potential to reshape our entire society in the years to come. But what lies beneath these notions? And how much of this is already part of our reality?
In this chapter, we’ll embark on a journey through the realm of artificial intelligence together: we’ll define its technical underpinnings and chart its evolution from early concepts to contemporary breakthroughs. Step by step, we’ll tackle essential questions: What sets AI apart, and how does it differ from conventional software? What varieties of AI systems exist, and which ones might you encounter daily without even noticing? We’ll lift the curtain to explore the vital components that breathe life into every AI – machine learning algorithms and data.
How does an AI system sift through vast troves of data to identify patterns, make predictions, or reach decisions? How do neural networks and cutting-edge AI models, such as Transformers, function – enabling feats like generating text or crafting images? Whether it’s recognising faces in photographs, streamlining supply chains, or aiding doctors with diagnoses, data serves as the unseen fuel powering these capabilities.
My aim is to demonstrate, in a clear and practical manner, just how fascinating, comprehensible, and approachable the world of AI can be. The pages that follow will establish a foundation not only for understanding AI more deeply but also for harnessing it strategically and responsibly.
Artificial intelligence refers to technologies and systems capable of performing tasks traditionally regarded as manifestations of human intelligence. These encompass learning from experience, identifying intricate patterns, and making autonomous decisions.
As a pivotal branch of computer science, AI concentrates on crafting systems that emulate or automate intelligent behaviour. The roots of artificial intelligence trace back to the mid-20th century. In 1950, Alan Turing, through his renowned Turing Test,² posed the seminal question: could machines replicate human behaviour so convincingly that it becomes indistinguishable from that of a real person? A few years later, in 1956, the Dartmouth Conference saw John McCarthy coin the term "Artificial Intelligence" (AI), defining it as the science and technology of creating intelligent machines and programmes.³
Early AI systems relied on fixed rules and logical algorithms. However, these methods soon revealed their limitations in more complex or unpredictable situations. It was only with the advent of vast datasets and significant leaps in computational power in recent decades that AI evolved into flexible, self-learning systems.
Though approaches have evolved considerably since then – from rule-based, symbolic AI to data-driven techniques like neural networks and deep learning (see Section 2.6) – McCarthy's foundational definition still holds true today.⁴
The operation of artificial intelligence hinges on four core concepts:
Intelligence:
Here, intelligence denotes the capacity to absorb information, discern patterns within it, and draw meaningful conclusions or decisions. AI systems can thus automate intricate tasks, such as pattern recognition or forecasting.
Learning:
Unlike conventional software with static programming, AI leverages machine learning (ML). By analysing vast datasets, systems independently identify relationships, learn patterns, and enhance their performance over time.
Problem-Solving:
AI employs techniques like heuristic search algorithms, logical reasoning, or optimisation methods to tackle complex challenges – think of chess computers generating novel moves autonomously.
Decision-Making:
AI bases decisions on data and algorithms, calculating probabilities and systematically addressing uncertainties (probabilistic models).
Unlike traditional software, which operates on rigid rules and predefined processes, AI stands out for its adaptability and dynamic learning capacity:
Traditional Software:
Follows set rules, yielding predictable outcomes. If external conditions – such as legal requirements – shift, manual updates are required.
Example 1: A conventional accounting programme computes profit by subtracting expenses from income.
AI Systems:
Learn autonomously from data and adapt to new circumstances without intervention. For instance, an AI tax-forecasting system can account for regional variations or legal changes automatically, without reprogramming.
Example 2: An AI system might independently detect regional disparities, seasonal trends, or new legislation from historical tax data, generating future tax burden forecasts without programmer input.
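To make the contrast between Example 1 and Example 2 concrete, the following sketch places a fixed-rule profit calculation next to a deliberately simple learned trend forecast. The figures and the naive linear-trend helper are invented for illustration and do not represent any real accounting or tax software.

```python
# Traditional software: a fixed, pre-programmed rule (cf. Example 1)
def profit(income: float, expenses: float) -> float:
    return income - expenses

# Learning approach: fit a naive linear trend to historical data (cf. Example 2, greatly simplified)
def fit_trend(history: list[float]) -> float:
    """Estimate the next value by extrapolating a simple linear trend."""
    n = len(history)
    xs = range(n)
    mean_x, mean_y = sum(xs) / n, sum(history) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return intercept + slope * n      # one step beyond the observed data

print(profit(120_000, 95_000))                       # always applies the same rule
print(fit_trend([18_000, 19_500, 21_000, 22_400]))   # adapts to the data it is given
```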
AI’s applications are remarkably wide-ranging, underscoring its transformative potential across industries:
Image Processing:
AI systems autonomously learn to identify objects in images, enabling uses like facial recognition or security monitoring.
Autonomous Vehicles:
AI processes real-time sensor data to make decisions, navigating vehicles safely and efficiently through traffic.
Speech Recognition:
Digital voice assistants analyse spoken commands, refining their performance via machine learning.
Medical Diagnostics:
AI examines complex medical data, detects subtle disease patterns, and aids doctors in delivering more accurate diagnoses and treatments.
Marketing:
Firms harness AI to craft personalised advertising campaigns, tailoring content to customer behaviour and preferences.
Current AI research explores optimising machine learning, natural language processing (NLP), and explainable AI for enhanced transparency. Ethical and ecological concerns, such as substantial resource use, are also gaining prominence.
For further insights and a glimpse into AI’s future, see Chapter 9.
2 Cf. Turing, 1950, https://academic.oup.com/mind/article/LIX/236/433/986238
3 Cf. McCarthy, J., 1956, "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence", https://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
4 See Russell, S., & Norvig, P., 2021, "Artificial Intelligence: A Modern Approach", https://aima.cs.berkeley.edu/
Article 3(1) of the European AI Act defines an AI system as a
“machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”⁵
This definition clearly and pragmatically delineates the scope of the AI Act. It deliberately distinguishes itself from academic definitions by focusing on specific characteristics with regulatory relevance.
For businesses, understanding this definition is of paramount importance: only if a deployed system meets these criteria do the corresponding regulatory requirements regarding documentation, transparency, and safety come into play.
A misinterpretation could lead companies to apply inadequate controls to their AI systems, thereby risking significant legal consequences (see Section 3.7).
To enhance clarity, the key aspects of the definition are explained individually below:
Machine-Based System:
AI systems rely on physical or virtual infrastructure – e.g., servers, cloud services, or sensors – excluding purely manual processes. For instance, software-driven quality control using cameras and AI analysis falls under the Act, while paper-based checklists do not.
Varying Degrees of Autonomy:
This spans systems with minimal autonomy (e.g., basic chatbots with preset responses) to highly autonomous ones (e.g., self-driving cars making complex decisions). Autonomy levels often determine risk classification – highly autonomous systems are typically deemed high-risk.
Adaptability Post-Deployment:
A hallmark of AI is its capacity to learn and adjust after deployment. Take a spam filter that evolves to detect new phishing tactics automatically.
Derivation from Inputs:
Unlike fixed instructions, AI generates outputs by analysing input data and drawing conclusions. For example, an AI weather forecasting tool uses temperature, humidity, and wind data to produce predictions independently.
Generation of Outputs or Decisions:
Systems produce specific results – forecasts (e.g., sales predictions), content (e.g., text), recommendations, or automated decisions (e.g., credit approvals) – highlighting the breadth of regulated applications.
Impact on Environments:
AI actively shapes its surroundings, whether physical (e.g., a robotic arm moving objects) or virtual (e.g., a website showing tailored content), amplifying its regulatory and ethical significance.
Unlike John McCarthy’s academic focus on crafting intelligent machines, the AI Act offers a pragmatic, regulation-specific lens. While traditional software relies on fixed rules, AI systems excel in learning and adapting. A calculator, for instance, falls outside the AI Act’s scope due to its lack of learning ability. Similarly, static expert systems with rigid "if-then" rules are exempt, despite appearing "intelligent" superficially.
The table below outlines the distinctions:
Criterion | AI system (AI Act) | Traditional software
Basis | Data-driven, learns from inputs | Fixed rules, pre-programmed
Adaptability | Dynamic after deployment | Static, no learning ability
Output | Predictions, decisions, etc. | Calculations, fixed results
Regulation | Subject to AI Act (depending on risk) | Mostly not regulated by AI Act
To determine if a system falls under the AI Act, consider three key questions:
Does it learn independently from data?
Does it adapt automatically after deployment?
Does it actively influence its environment?
Examples:
An ERP system calculating stock levels with fixed rules is exempt from the AI Act.
AI-driven warehouse management that analyses order histories and dynamically predicts stock levels falls under the AI Act and is classified as at least minimal risk.
A basic chatbot with fixed responses does not qualify as an AI system, but one with learning components falls under the regulation.
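The three questions above can be turned into a rough, first-pass screening aid, as in the hypothetical sketch below. It is intended only for internal triage of obvious cases and does not replace a legal assessment against the Article 3(1) definition.

```python
def likely_in_ai_act_scope(learns_from_data: bool,
                           adapts_after_deployment: bool,
                           influences_environment: bool) -> bool:
    """Rough first-pass check against the three questions above (not legal advice)."""
    return learns_from_data and adapts_after_deployment and influences_environment

# ERP stock calculation with fixed rules: outside the definition
print(likely_in_ai_act_scope(False, False, True))   # False

# Learning warehouse management that predicts stock levels dynamically
print(likely_in_ai_act_scope(True, True, True))     # True
```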
Tip 3
Conduct an early, systematic risk analysis to ensure full compliance with regulatory demands (see Section 5.8).
5 Cf. https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L202401689
AI systems differ fundamentally in their functionality, applicability, and level of intelligence. Understanding these distinctions is crucial for selecting appropriate AI technologies to meet specific business needs while effectively complying with regulatory frameworks such as the AI Act. This section explores how AI systems can be differentiated, provides practical examples of their use, and outlines the legal requirements that arise from them.
The level of intelligence indicates the capability and scope of application of an AI system:
Weak AI (Narrow AI)
Narrow AI refers to specialised systems designed to perform specific tasks efficiently and accurately. These systems lack general intelligence and are confined to a defined domain. Examples include recommendation systems such as Netflix's or quality assurance tools in manufacturing. Their strength lies in targeted applications, though they require clearly defined objectives and high-quality data.
Example 3: A company employs a specialised AI for quality control in manufacturing, inspecting components for defects based on pre-provided image data.
Strong AI (General AI, AGI)
General AI denotes a theoretical form of AI capable of replicating human intelligence and adaptability across a wide range of domains. AGI remains a future vision, raising significant ethical and regulatory challenges.
Many experts predict its practical realisation is decades away. For businesses, Narrow AI systems are currently the most relevant, though it’s worth monitoring long-term developments towards AGI (see also Section 9.2).
AI systems also vary in how they operate and make decisions:
Rule-Based Systems (Symbolic AI)
These systems rely on pre-programmed rules and logical structures to make decisions. They are transparent, predictable, and easily controllable but lack flexibility in unfamiliar situations or with new data.
Example 4: A customer service chatbot that identifies complaints and automatically routes them to the appropriate department based on predefined keywords and rules.
Learning Systems (Machine Learning)
These systems adapt dynamically by learning from data and independently identifying patterns. This makes them flexible and powerful, though often opaque (“black box”), posing regulatory challenges related to transparency and accountability.
Example 5: An AI system analysing sales data independently detects seasonal trends and generates automated forecasts for future sales volumes.
Hybrid Systems
Combining symbolic logic with statistical learning methods, hybrid systems are particularly suited to highly regulated sectors, offering both adaptability and transparency.
Example 6: Medical diagnostic systems that identify patterns in patient data while transparently documenting rule-based diagnostic steps.
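The rule-based chatbot from Example 4 can be pictured as little more than a keyword lookup, as the toy sketch below shows; the keywords and departments are invented. A learning system in the sense of Example 5 would replace these hand-written rules with a classifier trained on past tickets.

```python
# Hypothetical keyword rules for the symbolic, rule-based routing chatbot of Example 4
ROUTING_RULES = {
    "invoice": "Accounting",
    "refund": "Accounting",
    "broken": "Technical Support",
    "delivery": "Logistics",
}

def route_complaint(message: str) -> str:
    """Route a complaint to a department using fixed, transparent keyword rules."""
    text = message.lower()
    for keyword, department in ROUTING_RULES.items():
        if keyword in text:
            return department            # first matching rule wins
    return "General Service"             # fallback when no rule matches

print(route_complaint("My invoice is wrong and I want a refund"))  # Accounting
```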
The application areas of AI highlight its versatility and relevance across various industries:
Generative AI
This type generates original content such as text, images, or videos and is commonly used in marketing and creative fields. Its creative freedom presents specific regulatory challenges (see Chapter 8).
Natural Language Processing (NLP)
Systems like chatbots or translation software analyse and generate human language to support communication. Under the AI Act, they are often classified as limited-risk systems (see Section 3.4).
Computer Vision
Image processing systems analyse visual data, for instance, in quality control or traffic monitoring. Due to their potential impact on safety, they are frequently categorised as high-risk systems.
Robotics
AI systems integrated into robots use sensor data to perform physical tasks, such as warehouse operations or surgical procedures. Given their safety implications, they are often deemed high-risk systems.
Autonomous Systems
Self-driving cars or drones make independent decisions based on real-time data. These systems are subject to stringent regulatory requirements and are typically classified as high-risk (see Section 3.3).
Each of these application areas demands specific data sources, technical implementations, and tailored compliance measures in line with the AI Act’s requirements.
Tip 4
Select the appropriate type of AI system based on your specific task. Opt for rule-based AI for simple, transparent processes and learning AI for complex forecasting.
Data forms the foundation of every artificial intelligence system. Without high-quality data, AI systems would be as useless as an engine without fuel. Particularly for data-driven AI approaches, such as machine learning (see Section 2.5), data is critical, serving as the basis for learning, adaptation, and evaluation of these systems.
AI systems learn by recognising patterns within data. The quality and quantity of this data significantly determine the success of an AI system: the more comprehensive and high-quality the data foundation, the more precise and reliable the outcomes.
Data Quality
High-quality data must be accurate, complete, up-to-date, and representative. Flawed or unrepresentative data leads to distorted results and inefficient AI models.
Data Quantity
AI models often require vast amounts of data to perform effectively. However, care should be taken to avoid unnecessary or irrelevant data ("data clutter") slowing down the process.
Modern AI models, such as GPT (see Section 8.3), are trained on enormous datasets to master complex tasks like natural language generation.
Example 7: If a retailer trains an AI solely with urban data, the model may fail to make accurate predictions for rural branches.
Throughout the AI lifecycle, data plays various roles:
Training Data
forms the foundation: it is used to develop a model – for instance, sales data training an AI to forecast future revenue.
Validation Data
helps optimise the model during training, ensuring it doesn’t simply memorise the training data (overfitting).
Test Data
assesses final performance using new, unseen data – a crucial step to ensure robustness.
Example 8: A logistics company uses sensor data (driving speed, fuel consumption, traffic) and historical delivery times to optimise routes. The model is trained with this data and later tested with fresh data to validate its accuracy.
Each type of data must be carefully selected and maintained, as errors at this stage directly impact results. In real-world applications, real-time data also comes into play, continuously enhancing the system – think of a chatbot learning from customer queries.
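The three data roles described above are usually realised by splitting the available data before training. The following sketch uses a common 70/15/15 split; the proportions and the helper function are illustrative conventions, not requirements.

```python
import random

def split_dataset(records: list, seed: int = 42) -> tuple[list, list, list]:
    """Shuffle and split data into training, validation and test sets (70/15/15)."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.70)
    n_val = int(n * 0.15)
    train = shuffled[:n_train]                      # used to fit the model
    validation = shuffled[n_train:n_train + n_val]  # used for tuning and to detect overfitting
    test = shuffled[n_train + n_val:]               # held back for the final evaluation
    return train, validation, test

train, validation, test = split_dataset(list(range(100)))
print(len(train), len(validation), len(test))   # 70 15 15
```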
Data comes in various forms and formats, each requiring different processing methods. The key distinctions are:
Structured Data:
Clearly organised, such as tables containing sales figures or customer records. These are easily analysable and ideal for AI applications in sales or financial forecasting.
Unstructured Data:
Text, images, audio, and video data are more complex but often provide deeper, more comprehensive insights, such as in sentiment analysis of customer reviews.
Example 9: A car manufacturer combines structured sensor data (e.g., speed) with unstructured camera images to thoroughly train driver assistance systems. This combination enhances the AI system’s robustness.
Careful collection and processing of data are fundamentally important:
Data Collection
Strategically chosen data sources ensure representativeness. Raw data from CRM systems, IoT devices (Internet of Things), or public datasets often forms the basis.
Data Annotation or Labelling
Especially for images or medical data, careful manual or semi-automated labelling is essential.
Example 10: A diagnostic model requires annotated images to accurately distinguish between benign and malignant skin conditions.
Data Processing
Raw data is prepared for use through:
Cleaning duplicates or errors