An easy-to-follow guide to demystifying Agentic AI, the next step in the evolution of artificial intelligence
Agentic AI is the next big leap in artificial intelligence. Agentic systems don't just respond to commands. They set goals, make decisions, and take initiative without direct human interaction. Sound like a lot to wrap your head around? Fortunately, Agentic AI For Dummies is here to help you get a solid grasp of this fast-advancing technology.
Written by the author of ChatGPT For Dummies and Generative AI For Dummies, this easy-to-understand tech guide helps you take your first steps into Agentic AI. Get insight into the technologies driving Agentic AI, a road map for shifting from legacy systems to Agentic systems, and a tour of real-world use cases for Agentic AI. This book arms you with the understanding you need to make better decisions about how and when to use Agentic AI technologies.
Inside the book:
Perfect for business owners, entrepreneurs, managers, executives, professionals and team leaders in the private sector, Agentic AI For Dummies is a hands-on toolkit and strategy guide for using autonomous AI solutions to solve hard problems in your organization.
Page count: 520
Publication year: 2025
Chapter 2
TABLE 2-1 Interacting with GenAI versus Agentic AI
TABLE 2-2 Comparing GenAI and Agentic AI
Chapter 3
TABLE 3-1 ANP at a Glance: Pros and Cons
TABLE 3-2 A2A at a Glance: Pros and Cons
TABLE 3-3 ACP at a Glance: Pros and Cons
TABLE 3-4 Agentic System-Building Frameworks
Chapter 4
TABLE 4-1 Challenges in Context Engineering for Agentic AI
Chapter 5
TABLE 5-1 Comparison of GenAI and Agentic AI
Chapter 11
TABLE 11-1 GenAI Chatbot versus Agentic AI Systems
Chapter 13
TABLE 13-1 Agentic AI versus AI Swarm Systems
Chapter 1
FIGURE 1-1: A screenshot of ChatGPT chatbot user interface.
FIGURE 1-2: A screenshot of the Godmode interface.
FIGURE 1-3: A screenshot of an AI agent pixie that offers a free landing-page g...
Chapter 4
FIGURE 4-1: A side-by-side comparison of prompt engineering and context enginee...
FIGURE 4-2: A comparison chart illustrating the user experience shift from trad...
FIGURE 4-3: Comparison chart of app vs agent internet processes.
FIGURE 4-4: Flowchart demonstrating how agentic AI coordinates across services ...
FIGURE 4-5: A visual concept of an Agentic AI swarm, showing how multiple speci...
Chapter 5
FIGURE 5-1: A decision flowchart for choosing a pilot project.
FIGURE 5-2: A diagram that shows how the Agentic AI architecture fits together....
FIGURE 5-3: A run-measure-refine cycle.
Chapter 6
FIGURE 6-1: Agentic AI for upskilling robotic surgery comparison chart.
FIGURE 6-2: A learning moment showing how context signals feed into the AI tuto...
Chapter 8
FIGURE 8-1: The aiApply interface and its claims.
FIGURE 8-2: Screenshot of Pine AI, showing easy buttons to get you started.
Chapter 12
FIGURE 12-1: A learning feedback loop system embedded in the 70-20-10 learning ...
FIGURE 12-2: A five-year (and beyond) learning plan for future-proofing your ca...
Chapter 13
FIGURE 13-1: A screenshot from Anthropic’s internal evaluation of its multi-age...
FIGURE 13-2: Visualizing a swarm of AI agents, AI style.
Agentic AI For Dummies®
Published by: John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, www.wiley.com
Copyright © 2026 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies.
Media and software compilation copyright © 2026 by John Wiley & Sons, Inc. All rights reserved, including rights for text and data mining and training of artificial intelligence technologies or similar technologies.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without the prior written permission of the Publisher or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.
The manufacturer’s authorized representative according to the EU General Product Safety Regulation is Wiley-VCH GmbH, Boschstr. 12, 69469 Weinheim, Germany, e-mail: [email protected].
Trademarks: Wiley, For Dummies, the Dummies Man logo, Dummies.com, Making Everything Easier, and related trade dress are trademarks or registered trademarks of John Wiley & Sons, Inc. and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
LIMIT OF LIABILITY/DISCLAIMER OF WARRANTY: THE PUBLISHER AND THE AUTHOR MAKE NO REPRESENTATIONS OR WARRANTIES WITH RESPECT TO THE ACCURACY OR COMPLETENESS OF THE CONTENTS OF THIS WORK AND SPECIFICALLY DISCLAIM ALL WARRANTIES, INCLUDING WITHOUT LIMITATION WARRANTIES OF FITNESS FOR A PARTICULAR PURPOSE. CERTAIN AI SYSTEMS HAVE BEEN USED IN THE CREATION OF THIS WORK. NO WARRANTY MAY BE CREATED OR EXTENDED BY SALES OR PROMOTIONAL MATERIALS. THE ADVICE AND STRATEGIES CONTAINED HEREIN MAY NOT BE SUITABLE FOR EVERY SITUATION. THIS WORK IS SOLD WITH THE UNDERSTANDING THAT THE PUBLISHER IS NOT ENGAGED IN RENDERING LEGAL, ACCOUNTING, OR OTHER PROFESSIONAL SERVICES. IF PROFESSIONAL ASSISTANCE IS REQUIRED, THE SERVICES OF A COMPETENT PROFESSIONAL PERSON SHOULD BE SOUGHT. NEITHER THE PUBLISHER NOR THE AUTHOR SHALL BE LIABLE FOR DAMAGES ARISING HEREFROM. THE FACT THAT AN ORGANIZATION OR WEBSITE IS REFERRED TO IN THIS WORK AS A CITATION AND/OR A POTENTIAL SOURCE OF FURTHER INFORMATION DOES NOT MEAN THAT THE AUTHOR OR THE PUBLISHER ENDORSES THE INFORMATION THE ORGANIZATION OR WEBSITE MAY PROVIDE OR RECOMMENDATIONS IT MAY MAKE. FURTHER, READERS SHOULD BE AWARE THAT INTERNET WEBSITES LISTED IN THIS WORK MAY HAVE CHANGED OR DISAPPEARED BETWEEN WHEN THIS WORK WAS WRITTEN AND WHEN IT IS READ.
For general information on our other products and services, please contact our Customer Care Department within the U.S. at 877-762-2974, outside the U.S. at 317-572-3993, or fax 317-572-4002. For technical support, please visit https://hub.wiley.com/community/support/dummies.
Wiley publishes in a variety of print and electronic formats and by print-on-demand. Some material included with standard print versions of this book may not be included in e-books or in print-on-demand. If this book refers to media that is not included in the version you purchased, you may download this material at http://booksupport.wiley.com. For more information about Wiley products, visit www.wiley.com.
Library of Congress Control Number is available from the publisher.
ISBN 978-1-394-37960-6 (pbk); ISBN 978-1-394-37962-0 (ebk); ISBN 978-1-394-37961-3 (ebk)
As of early 2025, industry consensus estimates put the number of people worldwide using artificial intelligence (AI) daily at somewhere between 115 million and 180 million, and you’ve probably encountered it yourself. Maybe you asked a chatbot to write an e-mail for you, or you used an AI image generator to make a funny picture of your dog in a spacesuit or to touch up a photo you took with your phone. That kind of AI — which generates text, images, or sound from nothing more than your spoken or typed command — is called Generative AI (GenAI), and it’s been all the rage since late 2022.
But AI is constantly evolving, and the new wave is called Agentic AI. The term agentic simply denotes AI that doesn’t just sit there waiting for you to enter a command in the prompt bar. Instead, it can take action of its own accord. It follows the goals and framework that you set, but it finds its own path to getting there based on its own reasoning and decision-making. Agents can even work with other AI agents to form a whole team of digital coworkers who never need coffee breaks. A group of agents working together is called an Agentic AI system.
Think of the difference between GenAI and Agentic AI this way:
GenAI is like a calculator.
You push the buttons, and it gives you an answer.
Agentic AI is like a junior assistant.
You tell it what outcome you want, and it figures out the steps to get there. Sometimes, it asks other human or AI assistants for help along the way.
Welcome to the age of Agentic AI. Buckle up. It’s going to be an interesting ride!
The shift from AI output (GenAI) to AI action (Agentic AI) is a huge technological feat on par with the autonomous cars available now, and these technologies share many similar risks and opportunities. Agentic AI can schedule tasks, run experiments, optimize business decisions, or help you shop online, for example. And it can do all of that without you having to hold its hand every step of the way.
But the addition of partial or complete autonomy that Agentic AI possesses comes with a new set of challenges. In an accounting scenario, how do you keep agents from going off track and deleting an entire spreadsheet or database? In a medical situation, how do you trust Agentic AI to work safely and not hurt a patient? If you love the idea of an AI agent working as your personal online shopper, how do you keep that agent from buying things with your credit card that you didn’t intend for it to buy? And how do you tell the difference between a useful agent and one that’s more hype than help?
This book is here to answer some of the practical questions about Agentic AI and explain the scope of applications that it can touch. By the time that you finish reading this book, you’ll be able to talk about Agentic AI with confidence, spot it when you see it, and know how to make it work for you, instead of the other way around.
Some typical conventions that you may find in this book include the following:
If you see a term in italics, you can usually find a definition or explanation for the term close by in the text.
Web addresses and programming code appear in monofont. If you’re reading a digital version of this book on a device connected to the internet, you can click the web address to visit that website, like this: www.dummies.com. Some web addresses break across two lines of text. If you’re reading this book in print and want to visit one of these web pages, simply type the address exactly as it appears in the text, pretending the line break doesn’t exist.
To make the content more accessible, I divide it into five parts:
Part 1: Understanding Agentic AI: In this part, you find out what Agentic AI is and how it works.
Part 2: Getting Started on the Agentic AI Path: Check out this part to get a good grasp on where and when to use Agentic AI, as well as the ethics involved.
Part 3: Agentic AI in the Real World: Here you find out how Agentic AI will likely change your work and your world — and what you can do to make sure no one is harmed in the process.
Part 4: Exploring Myths and Realities: Here are the facts about what Agentic AI is and isn’t, whether it has agency, and what amount of upskilling you need to keep pace.
Part 5: The Part of Tens: This part gets right to the point with unexpected surprises now and 10 years from now, and it gives you a list of things that Agentic AI is absolutely terrible at doing.
I wrote this book for anyone who wants to understand and use AI agents and Agentic AI systems in their work and daily life, as well as to prepare for inevitable changes that this technology will introduce. This book is written for professionals, not programmers. To get value from this book, you do not need
A degree in computer science, math, or engineering
Years of coding experience
A deep knowledge of AI research papers and technical protocols
If you can read, think critically, and apply new ideas in your work, you already have the background that you need.
But I do make certain assumptions about the book’s audience (you) as a practical matter. For instance, I assume that
You possess at least a limited understanding of GenAI and are in hot pursuit of leveling up your skills to now understand and work with autonomous AI agents.
You have at least a basic level of comfort and skill in working with computing devices, browsers, and web applications.
You’re smart and pressed for time, so you want all meat and no fluff in a fast and easy read. (I hope I hit that mark for you with this For Dummies book.)
Throughout this book, icons in the margins highlight certain types of valuable information that call out for your attention. Here are the icons that you might encounter and a brief description of each.
The Tip icon marks tips and shortcuts that you can use to make building, tasking, or using AI agents easier or simply more fun.
Remember icons mark the information that’s especially important to know. To siphon off the most important information in each chapter, just skim through until you find these icons.
The Technical Stuff icon marks information of a highly technical nature that you can normally skip over. Unless, of course, you came for the technical stuff — in which case, it’s now earmarked for you.
This icon warns you of a stumbling block or danger that may not be obvious to you until it’s too late. Please make careful note of warnings.
In addition to the abundance of information and guidance related to Agentic AI in this book, you get access to even more help and information online. Check out this book’s online Cheat Sheet: Just go to www.dummies.com and enter “Agentic AI For Dummies Cheat Sheet” in the Search text box. Press Enter, and a link to the Cheat Sheet pops up in the results.
This is a reference book, so you don’t have to read it cover to cover (unless you want to soak in all the new information all at once). Also, feel free to read the chapters in any order. Each chapter is designed to stand alone, meaning you don’t have to know the material in preceding chapters to understand the chapter that you’re reading. Start anywhere and finish when you feel you have all the information that you need for whatever task you have on hand.
Here are a few specific tips on where to find the info that’s particularly interesting or useful to you:
Check out the Table of Contents at the front of the book or the Index at the back to find a topic of interest.
If you simply want to understand what AI agents are and how they work, both alone and together, read Part 1.
If you have business interests, Chapter 5 guides you through making a plan so that your investments in Agentic AI can deliver, both on the mission that you give it and a bankable return on investment. And Chapter 6 offers a peek into how first-adopter companies and industries are using Agentic AI at the time of writing.
Chapter 7 poses all the hard questions that no one wants to grapple with — but also that no one can escape. Here, I blow away the hype and present the facts and obstacles that keep this tech from being a plug-and-play miracle.
The chapters in Part 3 give you a good look at how Agentic AI technology is reshaping work, economies, and safety for humankind.
Do you wonder if and when people should think about AI having agency and autonomy? Chapter 13 explores the issues of consciousness, intent, and motive as they apply to AI agency in sharper detail.
Part 1
IN THIS PART …
Find out what Agentic AI is all about.
Get a look at how Agentic AI learns, reasons, and remembers.
Discover Agentic AI’s core functionalities and multiple interaction points.
Direct Agentic AI with prompt and context engineering.
Chapter 1
IN THIS CHAPTER
Identifying Agentic AI and its connection to traditional AI
Embracing reasoning as an Agentic AI trait
Distinguishing AI agents from Agentic AI
Evolving prompting to direct Agentic AI
Examining Agentic AI impacts on the internet and commerce
Agentic AI represents a significant shift in the evolution of AI. Its capabilities are leaps and bounds beyond those of Generative AI (GenAI) and other traditional forms of AI, such as voice assistants like Siri and Alexa, or the technologies that drive autonomous cars. The most distinguishing feature that puts Agentic AI in a league of its own is autonomy (its capacity to make decisions and carry out a set of actions toward a goal without specific instruction at each step).
Put in a simpler way, GenAI is all talk, and Agentic AI is all action.
This chapter explores what defines an AI as agentic, how it differs from other AI types, and the foundational elements required to recognize such a system. Further, this chapter presents two examples of profound disruptions that Agentic AI brings: the rise of the AI web and the shift from e-commerce to A-commerce.
Agentic AI is a type of artificial intelligence that can act on its own to achieve goals, instead of just waiting for prompts from a human. It doesn’t just respond to a human’s commands at every step. It can decide what steps to take, make plans, change its approach if something doesn’t work, keep track of what it learns along the way, and reflect on its performance so that it can improve.
The word agentic comes from agent, meaning that the AI behaves like an agent on your behalf, an intelligent helper, or a problem-solver that has a degree of independence. Think of Agentic AI as AI that not only completes tasks, but also figures out how to complete them.
This isn’t the AI of scary science fiction stories. However, technology experts widely view Agentic AI as a critical stepping stone on the path toward artificial general intelligence (AGI; the form of AI depicted in scary science fiction stories) and possibly the technological singularity — or simply the singularity — a hypothetical future point at which artificial intelligence surpasses human intelligence in a way that leads to unpredictable, rapid, and irreversible changes in society and technology.
By enabling systems to reason, plan, reflect, and take initiative across changing environments, Agentic AI helps bridge the gap between today’s highly specialized models and the broad, self-directed intelligence that systems need to realize AGI.
In the broader vision of the hypothetical singularity, exponentially advancing artificial intelligence could emerge through self-improving, interconnected agentic systems. Such systems might continually refine their own architectures and methods, collaborate with other agents (even across different networks or domains), and pursue complex goals with diminishing need for direct human oversight.
However, Agentic AI and more autonomous systems bring us a step closer to the kind of self-directed intelligence imagined in singularity scenarios, and they also introduce new risks of unintended consequences and demand additional safeguards, including
Strong alignment with human values: training and guiding models by using data, feedback, and objectives that reflect humanity’s shared ethical and social principles.
Robust guardrails that provide clear, operational boundaries and fail-safes that define what the AI can and cannot do, even as it learns or acts independently.
Ethical oversight to maintain human accountability in how developers design, deploy, and monitor agentic systems throughout their lifecycle.
Agentic AI is not only a technical milestone in the evolution toward AGI, but also a pivotal moment in which to decide what kind of AI future humanity will build.
Developers often design Agentic AI to handle complex, multi-step processes, such as managing projects, conducting research, or solving technical problems. These systems can include tools such as memory (to track what’s already been done), reasoning engines (to decide what makes the most sense), and planning modules (to map out steps and sequences). Although these systems aren’t really human-like or conscious, they’re moving closer to becoming trusted aides to the people using them.
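For readers who like to peek under the hood, here is a minimal Python sketch of how those pieces (memory, a reasoning step, and a plan) might fit together in a simple agent loop. The call_llm function, the sample goal, and the step limit are hypothetical stand-ins for this illustration, not part of any particular product or framework.

    # A minimal, illustrative agent loop: plan, act, remember, reflect.
    # call_llm() is a hypothetical stand-in for whatever language model or
    # reasoning engine a real Agentic AI system would call.

    def call_llm(prompt: str) -> str:
        """Placeholder for a real model call; returns a canned reply here."""
        return f"(model response to: {prompt[:40]}...)"

    def run_agent(goal: str, max_steps: int = 3) -> list:
        memory = []                                              # tracks what's already been done
        plan = call_llm(f"Break this goal into steps: {goal}")   # planning module
        memory.append(f"Plan: {plan}")

        for step in range(max_steps):
            # Reasoning step: decide the next action, given the goal and memory so far
            action = call_llm(f"Goal: {goal}\nDone so far: {memory}\nWhat is the next action?")
            result = call_llm(f"Carry out this action: {action}")   # tool use would happen here
            memory.append(f"Step {step + 1}: {result}")

            # Reflection: check whether the goal appears to be met before continuing
            verdict = call_llm(f"Goal: {goal}\nProgress so far: {memory}\nIs the goal met? yes/no")
            if verdict.strip().lower().startswith("yes"):
                break
        return memory

    if __name__ == "__main__":
        for entry in run_agent("Summarize this week's sales numbers"):
            print(entry)

Real frameworks add far more machinery (tool calling, error handling, guardrails), but the plan-act-remember-reflect cycle shown here is the basic shape of the loop.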
Despite the advances made so far in the development of Agentic AI, it’s an emerging technology; people will likely take some time before they accept this technology as a routine entity in their work and daily lives.
Whether people consciously recognize it (or not), instinct often drives them to fear an AI’s ability to reason independently. This fear stems from the understanding that the capacity to reason has long defined humanity’s unique position at the top of the natural order.
If machines can also reason, the thinking goes, they could eventually become intellectually superior to humans, creating an unnatural hierarchy of reason. That perceived loss of uniqueness threatens the intrinsic value of being human and may also feel like a challenge to humanity’s place as both the observer and steward of the natural world around them.
The instinctive fear of being knocked from nature’s top spot is rooted deep in humanity’s collective history. Across time, people have moved through this line of thinking:
Recognizing the importance of reason:
Classical Greek philosophers introduced the idea that reason is the defining characteristic separating humans from all other creatures, and is the source of our unique position in the natural order of the world.
Using reason:
Reasoning rose to be the centerpiece of Western philosophy and later of Western science. Although Western thought also includes faith and empirical observation, reason still serves as its guiding foundation: the method by which truth and knowledge are pursued.
Defining humanity with reason: Western philosophers view the ability to reason as both a noble pursuit of truth and a uniquely human capacity. Reasoning, as reason has it, is what makes humans — well, humans.
Expanding reasoning capabilities:
Because reason (philosophically) is held as a noble pursuit, many people feel driven to improve upon how, how fast, and how much they can reason — which leads to the creation of tools to extend human reasoning. These tools range from mathematics and logic to computing — and now AI.
I generalize and minimize the description of philosophy because this is a book about AI’s ability to reason — or lack thereof — and not about philosophy. But I bring philosophy to your attention so that you can see the root of humanity’s unease. AI threatens not just to imitate human reason but to redefine it. That’s the source of both the fear and the fascination, and the devilishly desirable urge to build something that can reason as we do, perhaps even better.
In the beginning — which is to say, a few years ago, when ChatGPT became a household word, and not decades ago, when Generative AI (GenAI) became a thing — the public wasn’t sure what to make of the tool. But thoughts immediately turned to the deduction that if AI is intelligent, then it must be like (or even better than) humans, and therefore both capable of reasoning and inherently evil. Right? Never mind that various levels of intelligence, cleverness, and reasoning exist, some of which barely blip on the scale between instinct and thinking.
Although creators and marketers of GenAI models have heavily promoted their systems’ supposed reasoning abilities, many industry observers and technology journalists, including myself, have long expressed skepticism about these claims.
In June 2025, Apple (a technology giant) published a research paper from its Machine Learning Research team titled (in short) “The Illusion of Thinking,” which systematically evaluated large language models (LLMs) and large reasoning models (LRMs) by using controlled puzzle environments and specialized datasets. The findings felt like an earthquake under the feet of AI designers. The groundbreaking study revealed critical limits in the reasoning capabilities of today’s AI systems. In short, the researchers found that LLMs and LRMs don’t really “think.” They simply recognize patterns and imitate the steps of reasoning rather than truly understanding what they’re doing.
The takeaway: While developers now have access to many AI models and tools, none of them can truly reason in a humanlike way. Still, some developers are working to create Agentic AI systems that can reason on their own with little to no human intervention. Is the effort doomed from the start? No. But serious challenges remain. For example, AI models tend to break down when tasks become too complicated or require deep, step-by-step logic. Apple’s report points to a few ways researchers might get past these hurdles, such as creating smarter test environments that actually measure real reasoning, and teaching AI to check its own work as it goes along.
The path to truly autonomous AI agents is still blocked by basic limitations (such as needing more high-quality, well-structured data and lacking true common sense). Apple’s research findings (see the sidebar “Experiencing the Illusion of Thinking,” in this chapter, and the section “Differentiating between AI Agents and Agentic AI,” later in this chapter) highlight a critical issue: Even the most advanced AI systems today are great at mimicking thought but not at genuinely understanding. They often rely on techniques like chain-of-thought prompting (a method that walks the model through a problem one step at a time) to appear more thoughtful, but they’re still missing the logical depth and common-sense reasoning needed for full independence.
When Apple researchers talk about a reasoning collapse, they mean that AI models can look brilliant on simple tasks but start to fall apart as problems get harder. The models don’t understand logic, but instead, they follow patterns they’ve seen before. Once those patterns no longer fit, their “thinking” breaks down.
Don’t confuse reasoning behavior with reasoning ability. When an AI explains its steps or thinks aloud, it’s not actually reasoning. It’s replaying patterns from training data that look like reasoning. True reasoning means understanding cause and effect, spotting breaks in logic, and adapting on the fly. These are things today’s AI still can’t do (at least not yet).
Beyond reasoning limitations, Agentic AI systems face substantial technical and operational challenges that complicate real-world deployment. Agentic AI architectures are complex and require robust infrastructure, sound data governance, and seamless interoperability with existing systems to function at scale. (Flip to Chapter 3 for more about the components that Agentic AI requires.)
Orchestrating multi-agent workflows, maintaining accuracy, and managing variability remain major technical hurdles. For Agentic AI systems to reach their full potential, designers and developers must overcome both reasoning gaps and practical constraints. Anything short of genuine autonomy is likely to appear as a connected series of specialized (but conventional) AI models, coordinated to perform tasks together rather than truly acting on their own.
The two terms AI agents and Agentic AI systems are often used interchangeably in discussions about artificial intelligence, but they actually describe different concepts that share overlapping features. Understanding their differences can help clarify the different types of AI that exist today and where they might be headed as they evolve toward more capable and autonomous systems:
AI agents:
Software entities designed to perform specific tasks autonomously within defined parameters. They operate based on programmed rules (rule-based agents), machine learning models (learning-based agents), or large language models (LLM-based agents) to achieve particular objectives by perceiving their environment, making decisions, and taking actions. Examples include customer-service chatbots, game-playing bots, web navigation agents, and recommendation systems.
Agentic AI systems:
Also software entities, but they extend basic task automation into the realm of complex, multi-step process management. These systems can plan their own actions, coordinate multiple tools or agents, and adapt their workflows to reach broader objectives, often across less predictable environments.
Apple’s 2025 research study, “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity” (which you can read about in the sidebar “Experiencing the Illusion of Thinking,” in this chapter), focuses primarily on large language models (LLMs) and large reasoning models (LRMs), and not explicitly on Agentic AI. However, because Agentic AI systems are typically built on top of similar LLM foundations, the study’s findings apply by extension. The research confirms that both AI agents and, by association, Agentic AI systems rely heavily on pattern matching, rather than genuine reasoning. This dependence limits their performance on tasks that demand abstract, logical, or novel reasoning.
For Agentic AI systems, Apple’s study points toward several potential directions for improvement. These include hybrid neurosymbolic architectures (which combine neural networks’ pattern-recognition strengths with symbolic reasoning’s logic and structure) and the use of advanced datasets such as GSM-Symbolic to better evaluate and train reasoning skills. The study also introduces Twisted Sequential Monte Carlo (TSMC), a technique for improving multi-step reasoning and inference. It is an approach that could be particularly valuable for agentic systems striving for higher autonomy, whether that autonomy is designed in or gradually self-emerges through adaptation.
Examples of AI agents include chatbots such as ChatGPT (www.chatgpt.com; see Figure 1-1), recommendation systems that suggest what to watch or buy, or simple robotic agents, such as a Roomba vacuum cleaner. These agents typically have a narrow scope and focus on well-defined tasks with limited adaptability to new contexts.
FIGURE 1-1: A screenshot of ChatGPT chatbot user interface.
An Agentic AI system goes further than a chatbot such as ChatGPT. Agentic systems often combine multiple AI agents and add capabilities such as goal-setting, planning, reasoning, and task monitoring over extended periods. One example is Godmode (www.godmode.space), shown in Figure 1-2, an online interface that lets you launch and manage autonomous AI agents such as AutoGPT (www.agpt.co) and BabyAGI (www.babyagi.org). Godmode doesn’t crowdsource from these projects; instead, it provides a user-friendly control panel that connects to and coordinates open-source agent frameworks behind the scenes.
FIGURE 1-2: A screenshot of the Godmode interface.
If you’d like to try these systems directly, you can access a hosted version of BabyAGI in your browser (with no local setup needed) at https://babyagi-ui.vercel.app. For open-source enthusiasts, both AutoGPT and BabyAGI also offer their own graphical user interfaces (GUIs) through GitHub:
AutoGPT UI:
https://github.com/neuronic-ai/autogpt-ui
BabyAGI UI:
https://github.com/miurla/babyagi-ui
Other examples of Agentic AI systems include autonomous supply chain management platforms that optimize logistics in real time, and AI-driven research assistants that design experiments, gather data, and summarize findings. Another emerging example is GPTConsole (www.gptconsole.ai), which uses autonomous agents such as Pixie to generate complete, production-ready apps and websites, from simple landing pages to full data dashboards. Figure 1-3 shows the free landing-page generator interface that you can try yourself.
http://landingpages.gptconsole.ai
FIGURE 1-3: A screenshot of Pixie, an AI agent that offers a free landing-page generator.
Although I define AI agents and Agentic AI systems earlier in this chapter (see the section “Differentiating between AI Agents and Agentic AI”), it’s worth restating for clarity’s sake that the term AI agent is broader than either Agentic AI or GenAI tools. Using the term doesn’t automatically imply that an AI agent is agentic or generative; in fact, it could be neither and instead be a different type of AI altogether. In this book, AI agent means any software entity that can perceive its environment, make decisions, and take actions toward a goal, with or without direct human control.
Both AI agents and Agentic AI systems operate with some degree of autonomy so that they can make at least some decisions without constant human intervention. For instance, a chatbot (AI agent) responds to user queries independently, while an Agentic AI system that manages a logistics network adjusts delivery truck routes in direct response to traffic data, weather conditions, road conditions, and/or real-time supply updates.
By design, both AI agents and Agentic AI systems can
Perceive and interact with their environments.
A recommendation system (an AI agent) analyzes user behavior and preferences, while an Agentic AI system in healthcare might monitor patient data and adjust treatment plans based on vital signs, lab results, or medication responses.
Achieve specific objectives.
A virtual assistant (an AI agent) aims to answer questions accurately, while an Agentic AI system for urban planning seeks to optimize traffic flow across an entire city.
Leverage AI technologies, such as machine learning, natural language processing (NLP), and/or rule-based systems. Large language models (LLMs), for example, can power both a conversational AI agent and components of an Agentic AI system.
Although AI agents and Agentic AI systems have similarities (see the preceding section), you can find marked differences, as well. Here are some examples:
Their strengths:
AI agents:
Offer simplicity and efficiency because they’re highly optimized for repetitive, well-defined tasks, and deliver fast response times. They’re also cost-effective and generally reliable in their narrow domains and tasks.
Agentic AI systems:
Excel in adaptability and scalability in handling complex, multi-step problems that require coordination across multiple agents and tools. They are also showing emerging potential for advanced reasoning and collective decision-making.
Their weaknesses:
AI agents:
Have limited scope of data and functionality, reasoning deficits (they may fail to grasp the full intent of a prompt), brittle performance (producing nonsensical results in complex real-world situations), and low capacity for generalization (applying training to new data).
Agentic AI systems:
Present complexity for developers and users, reasoning limitations (because of limitations of available data and training), high development and operational costs, and their still-experimental nature.
Their ideal use cases:
AI agents:
Work well as customer support chatbots, shopping and other recommendation systems, simple automation (of data entry or appointment scheduling, for example), and robotic process automation (RPA).
Agentic AI systems:
Can optimize supply chains (to increase efficiency and reduce costs), support healthcare decision-making (providing patient data analysis and actionable insights for treatment), do scientific research (for handling large datasets and predicting outcomes), and manage smart cities (including optimizing transportation services and urban planning).
The path from prompt engineering to AI autonomy doesn’t involve outright replacement of one by the other, but an evolving relationship (which I talk about in Chapter 4). You’ll still need foundational prompting skills to work with increasingly intelligent systems. While AI becomes more agentic (that is, autonomous), the role of human input shifts in its focus, but not in its value.
In my opinion, people who now claim that AI users don’t need to learn how to prompt the systems that they use are (at best) foolishly misguided. In my previous book, Generative AI For Dummies (Wiley), I discuss how this technology enables machines to speak like humans, but to effectively use them, humans must think like machines. By that, I mean people who use AI need to think in a logical, step-by-step progression, in the same way that developers do when they write computer programs. Prompting skills teach you how to think like a developer.
Further, prompts will remain the primary interface for you (and other humans) to communicate with AI agents — especially in Agentic AI systems. The reason is simple: You can more easily command the AI in your own language than in computer code; then a large language model (LLM) understands your command and parses the necessary instructions to different AI agents to perform the feat that you commanded. Chapter 4 goes into detail about how to interact with Agentic AI.
Even advanced agentic systems use prompting under the hood to pass instructions, context, and goals between modules or agents. Despite their complexity and apparent autonomy, much of the system’s internal coordination still happens through prompt-based messages. And prompting isn’t the only way agents communicate. Some frameworks also use function or API calls, shared memory, or symbolic reasoning layers to exchange data and results.
Simply put, prompting isn’t just for humans’ use when talking to AI; it’s one of the key ways that AI components use to talk to each other inside modern Agentic AI systems.
Prompt engineering involves crafting precise, contextual instructions for LLM-based agents and tools so that you get the results you want. You can use it in all kinds of applications, from content generation and coding, to research assistance, data transformation, and even tool automation. Prompt engineering teaches you how to use structured language to interact with and direct powerful GenAI systems. Think of good prompting like giving instructions to a very smart assistant who understands your language but still needs clarity and direction.
As developers began experimenting with more complex interactions, they started chaining prompts — using the output from one prompt as the input for the next. Frameworks such as LangChain and AutoGPT made this easier by allowing AI systems to simulate multi-step reasoning or tool use. Chained prompts connect actions such as using memory, making decisions, and calling application programming interfaces (APIs) or plugins. This approach moves AI from answering one-off queries toward executing full workflows. In that sense, prompt engineering at this level is less like writing a single question and more like designing a system or scripting a process. It is the creative foundation of Agentic AI behavior.
For example, an Agentic AI system for customer support might use a prompt pipeline (chain) to classify queries, retrieve relevant data, and generate responses, with prompts ensuring tone, context, and accuracy. Although Agentic AI systems can set sub-goals, evaluate their own outputs, and operate in open-ended, dynamic environments, they still rely heavily on well-crafted prompt templates, safety guardrails, and human review cycles. Even autonomous agents are largely built atop prompt scaffolding, which is a prompt engineering technique that uses a series of prompts to guide an AI system.
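Here is a rough Python sketch of what such a prompt pipeline could look like. The call_llm helper and the tiny policy lookup are invented for illustration; a real system would plug in an actual model call and a proper retrieval step.

    # An illustrative prompt chain for customer support: classify the query,
    # retrieve relevant policy text, and then generate a reply.
    # call_llm() is a hypothetical stand-in for a real language-model call.

    def call_llm(prompt: str) -> str:
        return f"(model response to: {prompt[:40]}...)"

    POLICIES = {                                   # toy data standing in for real retrieval
        "billing": "Refunds are processed within 5 business days.",
        "shipping": "Standard shipping takes 3 to 7 business days.",
        "other": "Route the customer to a human support agent.",
    }

    def handle_query(query: str) -> str:
        # Step 1: classify the query (this prompt's output feeds the next step)
        category = call_llm(
            f"Classify this support query as billing, shipping, or other: {query}"
        ).strip().lower()
        if category not in POLICIES:
            category = "other"

        # Step 2: retrieve the policy text for that category
        context = POLICIES[category]

        # Step 3: generate the reply, with the prompt setting tone and accuracy rules
        return call_llm(
            "Write a polite, accurate reply to the customer. Cite only the policy given.\n"
            f"Customer query: {query}\nRelevant policy: {context}"
        )

    if __name__ == "__main__":
        print(handle_query("Where is my package?"))

Notice that each step is just another prompt: the chain is held together by passing one prompt’s output into the next prompt’s input.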
Even AI that developers see as autonomous has no will, no self-awareness, and no inherent motivation to do anything. Humans must give AI a reason to act, even the over-industrious Agentic AI systems. Without a call to action and a cause to complete, any AI will sit idle for centuries.
Prompting gives you a way to guide and influence unpredictable systems and perhaps stop unsafe actions (which you can read more about in Chapter 7). Effective prompting can reduce hallucinations, align outputs with your intent, and promote safer, more ethical system behavior. As AI agents grow in autonomy, troubleshooting their behavior often involves analyzing their internal prompts and instruction chains to understand what went wrong. Users must have the necessary prompting skills to do this kind of work. Think of it as AI quality assurance or cognitive debugging.
In enterprise environments, prompt engineering goes even further. Organizations often build custom prompt libraries, design domain-specific prompts, and develop fine-tuned instruction sets for areas such as law, medicine, or customer service. These libraries are growing quickly because well-crafted prompts are often more scalable and cost-effective than retraining or fine-tuning models from scratch.
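As a rough picture of what such a library can look like, here is a small Python sketch with a few made-up, domain-specific templates. The wording, field names, and policy ID are invented for this example; real enterprise libraries are far larger and carefully reviewed by subject-matter experts.

    # A tiny, illustrative prompt library with domain-specific templates.
    # Template text and placeholder names are invented for this example.

    PROMPT_LIBRARY = {
        "legal_summary": (
            "You are a paralegal assistant. Summarize the following contract "
            "clause in plain English and flag any unusual terms:\n{clause}"
        ),
        "medical_intake": (
            "You are a clinical intake assistant. List follow-up questions for "
            "this patient note. Do not offer a diagnosis:\n{note}"
        ),
        "support_reply": (
            "Write a friendly reply to this customer, citing policy {policy_id} "
            "where relevant:\n{message}"
        ),
    }

    def build_prompt(name: str, **fields: str) -> str:
        """Fill the named template with the caller's values."""
        return PROMPT_LIBRARY[name].format(**fields)

    if __name__ == "__main__":
        print(build_prompt("support_reply",
                           policy_id="RET-12",
                           message="My order arrived damaged."))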
In short, the ability to frame questions, shape outputs, and design workflows through prompts is a creative and strategic skill, not just a technical one, which remains critical in human-AI collaboration, no matter how autonomous AI becomes.
Ultimately, prompting acts as a core design layer in AI interaction and development. In the same way that knowing SQL doesn’t become irrelevant when databases get more complex, prompt engineering won’t disappear while AI becomes more autonomous. It will simply evolve into AI literacy and collaborative creativity for individual AI users.
At the maker and enterprise levels, prompt engineers will likely evolve into AI system architects, agent behavior designers, and human-AI interaction specialists. The essential skill of translating human intentions into AI-understandable instructions remains fundamental, even as it operates at higher levels of abstraction and complexity. Chapter 8 dives into the potential changes to how people will work when they have Agentic AI in the picture.
Today, the internet mostly reacts to what people ask it to do. You click a link, fill out a form, or type a question, and something responds. But in the near future, I predict that AI agents will start to do these things on their own — behind the scenes — based on what they know about your goals, habits, desires, and preferences.
The idea of the Agentic AI web is the next big shift in how people use the internet. Essentially, the internet will transform into an environment that people rarely visit but where smart AI assistants work together and get things done for people. You can see that beginning to happen now as more people use AI rather than search engines to find information and rarely bother to go to the website or original source. Chapter 9 looks at the role Agentic AI will play in the online marketplace.
In the future, the AI agents won’t just live in one app or service; they’ll connect across websites, platforms, and tools. One agent might handle your schedule, another could manage your online shopping by acting as your personal shopper, and another might track job openings or negotiate a better deal with your ISP or streaming service on your behalf. The various agents can talk to each other, use tools, search the web, write e-mails, fill out forms, and even call APIs. They’ll take action largely on their own initiative and without needing you to hover over them.
So you won’t go to the internet to watch videos, listen to music, shop, or do research. Instead of going to websites directly, you’ll rely on a network of smart helpers working quietly in the background. Instead of having to log in to ten different websites to get something done, you might just tell your agent what you want to do (for example, compare the prices of an item offered by three vendors), and it takes care of the rest. You’re still in charge, but the agents handle the details for you.
For example, suppose you ask an AI assistant to plan a trip. As part of an Agentic AI system, the assistant could pass tasks to other AI agents — one agent books flights, another checks traffic, and a third picks activities that it thinks you’d love to do. The AI agents work together smoothly to get the job done without further input from you.
A broader span of agent authority in the future could also scale up to big stuff, such as Agentic AI systems running cities, managing energy, or speeding up the impact of science by sharing discoveries globally. These AI agents could learn on their own, teaming up for large tasks, such as disaster relief or prevention, perhaps by using drone data and weather updates to save lives from a disastrous storm.
For the internet to morph into this AI-run version of itself, it needs a few key things to come together:
A shared language that enables agents and systems to talk to each other:
Several emerging standards already work toward that goal, and one of the most promising is the Model Context Protocol (MCP). MCP acts like a USB-C cable for AI: a universal connector that allows different AI systems to plug into tools, apps, and data sources without needing a custom setup each time. It solves the problem of one-off integrations by creating a standard way for AI to access external resources such as files, databases, or web services. (For a rough feel of what a standardized tool description looks like, see the sketch after this list.)
Secure ways to share knowledge between systems and agents:
For example, healthcare systems (such as hospitals and clinics) need to swap patient data, treatment plans, or medical advice without exposing private information. An AI system working in this environment must manage multiple tasks simultaneously, maintain strict data security, and comply with privacy regulations such as HIPAA. To support this, some developers explore using trust-enhancing technologies, including blockchain or other tamper-evident record systems, to securely log transactions, verify data sources, and track information access across agents and systems.
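The following Python sketch shows a made-up, MCP-style tool description and a pretend dispatcher. It is not the actual protocol specification; it only illustrates the core idea that when tools describe themselves in one shared format, any agent that understands that format can discover and call them without custom integration work. The tool name and fields are invented for the example.

    # A made-up, MCP-style tool description (not the real protocol spec).
    # The point: a shared format lets any agent discover and call the tool
    # the same way, with no one-off integration code.

    CALENDAR_TOOL = {
        "name": "get_free_slots",
        "description": "List open time slots on the user's calendar.",
        "input_schema": {                       # what the tool expects to receive
            "type": "object",
            "properties": {
                "date": {"type": "string", "description": "YYYY-MM-DD"},
                "duration_minutes": {"type": "integer"},
            },
            "required": ["date"],
        },
    }

    def call_tool(tool: dict, arguments: dict) -> dict:
        """Pretend dispatcher: a real connector would route this to the service."""
        missing = [k for k in tool["input_schema"]["required"] if k not in arguments]
        if missing:
            return {"error": f"missing required fields: {missing}"}
        return {"result": f"(open slots for {arguments['date']} would be returned here)"}

    if __name__ == "__main__":
        print(call_tool(CALENDAR_TOOL, {"date": "2026-03-14", "duration_minutes": 30}))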
The key components of an AI-run internet are still in their early days, but developers are addressing the issues of shared language and secure data while increasingly setting the stage for huge advances in autonomous AI operation.
You probably won’t just get up one day to find that Agentic AI systems have completely restructured the internet into some alien digital landscape. Instead, you’ll likely notice subtle changes in your online interactions first, followed by bigger leaps of disruption. Indeed, you’ve likely noticed a few changes online already, especially in how you search for and consume information.
Online search used to be simple: You typed a few keywords into a search bar, got a list of links to source websites in return, and then had to dig through websites and pages to find what you needed. GenAI tools such as ChatGPT, Google’s Gemini, Microsoft’s Copilot in Bing, and Perplexity AI (the first major AI-native search engine that doesn’t rely on results from traditional providers) have changed that experience dramatically.
Now, instead of just returning links to source websites and a few paid ads, search engines serve up concise answers from AI responses. Today, you can commonly get conversational, direct answers to your query from an AI tool with nary a source cited.
Having AI provide concise answers can be dangerous because many users think all AI results are dependable, when in truth, they’re not. Even Perplexity AI, which routinely serves up sources with its narrative results, can lead you astray. Any AI can make up sources as easily as it can make up facts. Yes, I mean AI can outright lie to you about who said what and where and when. Source citations often aren’t worth the pixels used to display them.
Any AI outputs, even the summaries given in search engine results, need to be fact-checked before you accept and use them. First, click through the source links the AI provides (if it does provide them) to ensure that the source exists and that it is the original source of that particular information. You can also scroll down and use the search engine results that appear below the AI summary to find reputable sources to peruse so that you can validate and verify the answer that the AI gave you as a summary.
All told, that doesn’t make for much of a time saver or a convenient service, now does it? Still, it’s a step in the direction of A-commerce.
Online search results are changing fast. Even traditional search engines now place AI-generated summaries at the top of the page, ahead of the familiar list of website links that formerly defined search. The reason is simple: Competition for use and user trust is fierce. Tech companies are racing to make their AI systems the first stop for everything people want to know. As users increasingly accept these instant summaries instead of clicking through to source websites, overall web traffic is dropping across the board.
This shift can drastically limit how many page viewers a news article gets, how many online sales a merchant makes, how many students a university attracts, and so on. In other words, e-commerce and content-driven industries alike suffer in innumerable ways as AI tools become the new intermediaries between information and its audience.
Adding insult to injury, while AI-powered summaries gather more eyeballs, the classic search engine optimization (SEO) strategy of ranking high on a results page matters less if users no longer click through to websites. If organizations can’t improve their website’s visibility and number of click-throughs via SEO and search result ad strategies, then why bother having a website? And how can those organizations survive and prosper without direct traffic?
The tipping point will come when AI systems, not human users, handle most product discovery and purchasing decisions online. That moment marks the shift from e-commerce (electronic commerce) to A-commerce (short for autonomous commerce), where AI agents search, compare, negotiate, and even complete transactions on behalf of users.
Although e-commerce sees people browsing websites, comparing prices, and clicking a Buy button, A-commerce flips that model completely. In the A-commerce scenario, AI agents will act as your personal shoppers, deal hunters, and decision-makers by handling the shopping process for you based on your goals, desires, preferences, and budget. Agents may even purchase the items for you — using your credit card, of course — and have the item sent to you. All of that shopping and buying may happen with you doing little more than expressing an interest or intent to buy an item or service.
To better help you imagine how this AI shopping agent will work, consider the following examples:
You need a new pair of running shoes.
Your agent knows your size, preferred brands, past purchases, and current deals across the web. It finds three great options, checks reviews, and presents you with the best one — or just buys it if you previously gave it permission.
You're planning a trip.
Instead of jumping between flight, hotel, and rental car sites, your travel agent AI bundles the whole thing, rebooks if prices drop, and gives you one neat itinerary.
You run a small business.
Your AI handles supplier orders, monitors inventory, and even renegotiates contracts when better options appear.
You have an important event on your calendar.
Your AI notes the event and the appropriate attire, and it searches for the best deals on each item of the perfect outfit — from shoes to hat. Everything in the outfit fits your size and list of preferences. The AI can then ask you to choose between outfits or items and purchase your pick. Or, if it has your permission, it can simply buy the outfit and have it shipped to your home or to the hotel where you’ll be staying during the event.
In short, A-commerce is the next evolution in online commerce, where AI agents handle purchasing decisions, product research, and transaction completion on behalf of you, your family, or your company. This shift will create profound changes in how SEO strategies and websites must evolve.
Traditional SEO and websites target traditional search engine rankings to increase visibility among people who are interested in buying their products or services, but A-commerce requires optimizing for AI agents that make purchasing decisions without a person ever looking at any of their wares. So site designers need to structure content for AI comprehension, rather than solely for human readability. Sites need to present product information in formats that AI agents can easily parse, compare, and evaluate.
Consider these factors that can reshape the content of commerce-related websites:
Structure and focus of website content:
AI agents rely heavily on structured data to make decisions. For example
E-commerce sites will need to prioritize schema markup, product APIs, and machine-readable formats over traditional content optimization. The focus shifts from keyword density, which makes a website visible to human searchers, to data clarity and accessibility for AI agents (as illustrated in the sketch that follows).
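To make that concrete, here is a hedged Python sketch of a product record shaped along the lines of schema.org Product markup. On a real page this information would typically be embedded as JSON-LD, and the product name, price, and rating values below are invented for the example.

    # An illustrative product record shaped along the lines of schema.org
    # "Product" markup, built as a Python dict and printed as JSON.
    # All product details are invented for this example.
    import json

    product = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Trail Runner 3 Running Shoes",
        "brand": {"@type": "Brand", "name": "ExampleBrand"},
        "offers": {
            "@type": "Offer",
            "price": "89.99",
            "priceCurrency": "USD",
            "availability": "https://schema.org/InStock",
        },
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": "4.6",
            "reviewCount": "212",
        },
    }

    # An AI shopping agent can parse and compare fields such as price and
    # rating directly, instead of scraping prose written for human readers.
    print(json.dumps(product, indent=2))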
