The idea of autonomous systems that can make choices based on their capacity to experience, apprehend and assess their environment is becoming a reality. These systems are capable of auto-configuration and self-organization.
This book presents a model for the creation of autonomous systems based on a complex substratum, made up of multiple electronic components that deploy a variety of specific features.
This substratum consists of multi-agent systems that act continuously and autonomously to collect information from the environment, which they then feed into the global system, allowing it to generate discerning and concrete representations of its surroundings.
These systems are able to construct a so-called artificial corporeity, which gives them a sense of self and thus allows them to behave autonomously, in a way reminiscent of living organisms.
Cover
Title
Copyright
Introduction
List of Algorithms
1 Systems and their Design
1.1. Modeling systems
1.2. Autonomous systems
1.3. Agents and multi-agent systems
1.4. Systems and organisms
1.5. The issue of modeling an autonomous system
2 The Global Architecture of an Autonomous System
2.1. Introduction
2.2. Reactivity of a system
2.3. The basic structure of an autonomous system: the substratum
2.4. The membrane of autonomous systems
2.5. Two types of proactivity and the notion of artificial organ
2.6. Autonomy and current representation
2.7. The unifying system that generates representations
3 Designing a Multi-agent Autonomous System
3.1. Introduction
3.2. The object layer on the substratum
3.3. The agent representation of the substratum: interface agents, organs and the notion of sensitivity
3.4. The interpretation system and the conception agents
3.5. Aggregates of conception agents
3.6. The intent and the activity of conception agents
3.7. Agentifying conception agents
3.8. Activity of a conception agent
3.9. The three layers of conceptual agentification and the role of control
3.10. Semantic lattices and the emergence of representations in the interpretation system
3.11. The general architecture of the interpretation system
3.12. Agentification of knowledge and organizational memory
3.13. Setting up the membrane network of an autonomous system
3.14. Behavioral learning of the autonomous system
4 Generation of Current Representation and Tendencies
4.1. Introduction
4.2. Generation of current representation and semantic lattices
4.3. The cause leading the system to choose a concrete intent
4.4. Presentation of artificial tendencies
4.5. Algorithm for the generation of a stream of representations under tendencies
5 The Notions of Point of View, Intent and Organizational Memory
5.1. Introduction
5.2. The notion of point of view in the generation of representations
5.3. Three organizational principles of the interpretation system for leading the intent
5.4. Algorithms for intent decisions
5.6. Organizational memory and the representation of artificial life experiences
5.7. Effective autonomy and the role of the modulation component
5.8. Degree of organizational freedom
6 Towards the Minimal Self of an Autonomous System
6.1. Introduction
6.2. The need for tendencies when leading the system
6.3. Needs and desires of the autonomous system
6.4. A scaled-down autonomous system: the artificial proto-self
6.5. The internal choice of expressed tendencies and the minimal self
6.6. The incentive to produce representations
6.7. Minimal self affectivity: emotions and sensations
6.8. Algorithms for tendency activation
6.9. The feeling of generating representations
7 Global Autonomy of Distributed Autonomous Systems
7.1. Introduction
7.2. Enhancement of an autonomous system by itself
7.3. Communication among autonomous systems in view of their union
7.4. The autonomous meta-system composed of autonomous systems
7.5. The system generating autonomous systems: the meta-level of artificial living
Conclusion
Bibliography
Index
End User License Agreement
1 Systems and their Design
Figure 1.1. Peer-to-peer organization around a network
2 The Global Architecture of an Autonomous System
Figure 2.1. Diagram of a reactive system
Figure 2.2. The system and its functional substratum
Figure 2.3. The three layers of strongly autonomous systems
Figure 2.4. General architecture of the interpreting system of an autonomous system
3 Designing a Multi-agent Autonomous System
Figure 3.1. Functionalities and effective activities of elements
Figure 3.2. The two organizations of agents, based on ontologies
Figure 3.3. The internal macro automaton that structures the action of a conception agent
Figure 3.4. Stage 1: structuring agents wrap conception agents
Figure 3.5. Stage 2: structuring agents organize to extract the structure of the emerging representation with elements that become prominent
Figure 3.6. List of the main agents of the system
Figure 3.7. The four organizations of the representation producing system
Figure 3.8. Interpretation system and organizational memory
4 Generation of Current Representation and Tendencies
Figure 4.1. Structure of a tendency agent
5 The Notions of Point of View, Intent and Organizational Memory
Figure 5.1. The three tendencies in determining intents under the synchronizing action of the meta-modulation component
Figure 5.2. Meta-component: the autonomous system and the central modulation component synchronizing all the system’s proactive components
6 Towards the Minimal Self of an Autonomous System
Figure 6.1. The main strongly coactive components of the autonomous system
Figure 6.2. The tendencies and needs component in the general system of representation
Figure 6.3. The proto-self of the system formed by coactivity of the tendency generation component and the component intentionally generating representations
Figure 6.4. The architecture of a minimal self formed by the coactivity of the components of tendency, intentions, and emotions generation, relying on the organizational memory, the whole being synchronized by the modulation component
Reliability of Multiphysical Systems Set
coordinated by Abdelkhalak El Hami
Volume 1
Alain Cardon
Mhamed Itmi
First published 2016 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2016
The rights of Alain Cardon and Mhamed Itmi to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2016933400
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-935-9
In this book, we present the results of our research on the modeling and design of a software layer that endows systems with a very strong, intentional autonomy. We are operating in the context of systems built on a complex substratum made up of multiple electronic components that deploy a variety of specific features. Such electronic systems are being developed on a very large scale in the current technological climate, allowing technological components to be built and used in all areas. These systems, however, still have an autonomy that is limited to the use of their functional capabilities, as is the case of automated robotic systems used in various industrial, economic and cultural fields. They require human operators to control them, as is the case for drones. The main problem is to provide these types of systems with a computing level that allows for an intentional autonomy driving their behaviors.
We present a complete model that gives these systems a very strong behavioral autonomy, providing them with the ability to make behavioral decisions based on desires, to have their own intentions and even to be aware of their autonomy. We will therefore present how to give these systems the ability to intentionally generate artificial representations of the things they perceive and conceive, so that they behave in the way they want, within the limits, of course, of a common sociality. The idea is, indeed, to develop a proto-self.
We believe that a truly autonomous system, whose substrate is composed of many distributed mechanical and electronic components, can be unified by the development of a meta-software layer that considers this substrate to be its corporeity. With this understanding of corporeity, the system can generate its own internal representations of its situation: representations of its condition and its posture, allowing it to develop its actions intentionally. This meta-software layer must enable total self-regulation of the system’s substrate, without any external control, and it needs to be reliable. It must continuously generate what we call representations: complex constructs composed of many software agents that activate and aggregate to create shapes and images expressing all the semantic aspects of the situation. These representations should indicate what the system apprehends in its environment, based on the knowledge it has acquired but also on its tendencies and desires, the system feeling these representations in order to deepen them. This software layer will allow the system to continuously manage its own action plans, evaluate them and memorize them in order to improve and evolve. In this work, we therefore describe a new model of the autonomy of artificial systems, an autonomy strongly inspired by higher living organisms.
We present the computable concepts for the perception of objects situated in a system’s environment, the notion of a representation of something, and the system’s concerns that lead it to be interested in one thing rather than another. To this end, we give a specific definition of the computing architecture of the layer generating the representations, with all the elements necessary for the system to develop tendencies, desires and needs. We also develop a new concept of control in massive multi-agent systems, to manage, in real time, aggregations of agents carrying multiple semantic indications.
We also show that such systems inherently communicate with each other, to the point of tending to unite into a very large metasystem. As these models are perfectly implementable today, it will be up to the scientific community to decide whether or not to create them and whether or not to put them at the disposal of the public. We hope such developments will be applied in highly ethical fields.
2.1. General functioning of the system
3.1. Development of a representation on the basis of an intent
3.2. Activity of the conception agents that produce the current representation
3.3. Learning reactions to typical cases
3.4. Self-learning of a new case and evaluation
3.5. Improvement of case recognition on the basis of an intervention by the system designer
4.1. Generation of a representation by using an upper bound
4.2. Adaptive generation of a representation
4.3. Determining a new concrete intent
4.4. Action of a tendency
4.5. Production of a stream of representations under tendency
5.1. Expression of a point of view
5.2. Choice of intent under the supervision of the modulation component
6.1. Deployment of a tendency
6.2. Functioning of the central modulation component
6.3. Activation of a tendency in the minimal self
6.4. Deployment of a tendency in the minimal self
6.5. Bifurcation of tendencies in the minimal self
6.6. Expression of the sense of self
7.1. Enhancement of an autonomous system by an external functional system
7.2. Simple deployment of two autonomous systems
A system is designed to provide one or more services. It is made up of hardware, software and human resources, with the aim of satisfying a precise, well-defined need. Such systems abound in the history of science. Thanks to accumulated experience, technological progress and ever-improving modeling approaches, the methods used to develop them are constantly gaining in efficiency. The description of a system potentially involves various notions about its components, their aggregation and their interactions with each other and with the system’s environment.
A system usually consists of a set of interdependent entities whose functions are fully specified. The system is completely characterized according to an equational or functional approach, in an iterative top-down or bottom-up process. The process is top-down in an analytical approach whereby each part can be broken down into smaller subparts that are complete sub-systems themselves. Conversely, when the approach consists of building a system up from the basis of simpler sub-systems, the iterative process is called bottom-up. The system’s realization and potential evolution are predetermined in a strict, narrow field, and its functionalities can pertain to various applicative areas such as electricity, electronics, computer science, mechanics, etc.
Because of the advances being made in system design as well as in information and communication technologies, there is a tendency to design ever larger systems that involve an increasing number of strongly connected elements and which handle large volumes of data.
Systems can be categorized according to various typologies. Here, we will only focus on two classes: conventional systems and complex systems.
Systems said to be individual or conventional have their inputs and outputs fully specified, in the sense that everything is already designed for them in the early stages of their conception. The vast majority of the systems we interact with belong to this class. Management applications, scientific computation programs and musical creation aids are all examples of conventional systems. The constitutive elements of such systems are defined and organized precisely to accomplish the tasks for which the system was formatted. They process inputs and produce actions or results that are the essential goals of the system, i.e. its “raison d’être”. Even if it continues to evolve while operational, as long as it remains under the responsibility of a project manager a system belongs to the class of conventional systems, for which everything is delimited by a tight framework. An automatic teller machine (ATM) is a good example of such a system. Every single use case must have been clearly defined, modeled and tested so that the machine is able to perform its duties reliably and respond accurately to its users (the customers and the bank). Operation in a degraded mode or in the event of unforeseen circumstances must also have been considered.
Conventional systems benefit from the development of computer networks, which expand their access to resources and their ability to interact. They also tend to become more complex, but they remain essentially conventional systems. Let us consider the example of service-oriented architectures (SOA) with, for instance, the recent development of cloud computing services. The great variety of services offered entails an intricate organization of many different subsystems within one global cloud. The architecture nevertheless remains a conventional system as long as the services offered can be deduced from the sum of the services provided by its subsystems. Integrating new systems in order to add new services will create a larger system that remains conventional because of its functional description. In such systems, the management of malfunctions is usually also built in.
Among the many types of systems detailed in the literature, complex systems receive particular attention because of their unpredictable behavior. The notion of a complex system usually applies to subjects in which a multidisciplinary approach is an essential part of any understanding: economics, neuroscience, insect sociology, etc.
Authors broadly agree in defining a complex system as a system composed of a large number of interacting entities and whose global behavior cannot be inferred from the behaviors of its parts. Hence the concept of emergence: a complex system has an emergent behavior, which cannot be inferred from any of its constituent systems. Size is not what qualifies a system as complex: if its parts have been designed and arranged so that they interact in a known or predictable way, then it is not a complex system. However, a non-complex system becomes complex as soon as it integrates a human being as one of its constituents.
Many behavioral features of complex systems are subject to intense research and scrutiny: self-organization, emergence, non-determinism, etc. To study complex systems, researchers usually resort to simulations, which enable them to grasp an idea, albeit incomplete, of the behavior of a system. In fact, complex systems exhibit some behavioral autonomy, a notion that will be detailed further on, when we relate it to the concept of proactivity.
Any information system that includes functional elements, takes human decisions and actions into account and handles multiple perspectives is a complex system whose components are set at various levels of a multi-scale organization.
The concept of system of systems (SoS) [JAM 08] was introduced into the research community without being characterized by a clear, stable definition. Several approaches to refine the concept can be found in the literature. It primarily implies that several systems operate together [ZEI 13]. Architectures that ultimately fall back into the conventional system class, where a centralized mechanism fully regulates the behavior, as in families of systems, are not considered to be SoS. Examples of SoS can be found in super-systems based on independent complex components that cooperate towards a common goal, or in large-scale systems of distributed, competing systems.
The most common type of SoS [MAI 99] is that which is made of a number of systems that are all precisely specified and regulated so as to provide their own individual services but that do not necessarily report to the global system. To qualify as an SoS, the global system must also exhibit an emergent behavior, taking advantage of the activities of its subsystems to create its own. The number of subsystems can not only be large, but it can also change, as subsystems are able to quit or join the global system at any moment. This description highlights the absence of any predefined goal and underlines the essentially different mode of regulation of such an SoS. In other words, the general goal of an SoS need not be defined a priori.
The SoS can evolve constantly by integrating new systems, whether it be for financial reasons or because of technological breakthroughs. An SoS can thus gain or lose parts “live” [ABB 06]. This shows that an SoS cannot be engineered in a conventional manner, neither with a top-down nor with a bottom-up construction process.
This approach demands a specific architecture whose functioning implies some level of coordination/regulation as well as a “raison d’être”, manifesting itself by a drive towards one or several goals. This raises several issues about autonomy, the reasons for such an organization in autonomous systems, behavioral consistency, orientation of activity and regulation of such systems.
To approximate the behavior of an SoS, one can use distributed simulations. These simulations are similar to peer-to-peer simulations except that additional tools are required to apprehend emergent behaviors (see Figure 1.1).
Figure 1.1. Peer-to-peer organization around a network
The concept of an autonomous system (within the field of robotics) implies a system able to act by itself in order to perform the necessary steps towards the achievement of predefined goals, taking into account stimuli that, in robotics for example, come from sensors. In the literature, the perspectives on the notion of autonomy are diverse because the capacity to act by oneself can have various aspects and defining features, depending on whether it is applied to, for example, an automaton, a living being, or even a system able to learn in order to improve its activity.
Just as the notion of an autonomous system goes beyond that of a non-autonomous system, the notion of intelligent regulation goes beyond that of ordinary regulation. Intelligent regulation calls upon algorithmic notions as well as upon linguistics and mathematics applied to systems and processes [SAR 85]. The regulation of hierarchical systems is often described by three-level models that are widely documented in the literature. The following briefly recalls the basics of this modeling approach, which can be studied in more detail in the original paper by Saridis [SAR 85]. The three levels are:
– the organizational level;
– the coordination level;
– the executive level.
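To make the decomposition concrete, here is a minimal sketch of such a three-level hierarchy. The class and method names are our own hypothetical illustrations, not Saridis’s formalism; the point is only that commands flow strictly downwards:

```python
class ExecutiveLevel:
    """Lowest level: carries out concrete control actions."""
    def execute(self, command: str) -> str:
        # A real controller would drive actuators here.
        return f"executed:{command}"


class CoordinationLevel:
    """Middle level: turns a plan into a sequence of executive commands."""
    def __init__(self, executive: ExecutiveLevel):
        self.executive = executive

    def coordinate(self, plan: list) -> list:
        return [self.executive.execute(step) for step in plan]


class OrganizationalLevel:
    """Top level: chooses the plan; information only flows downwards."""
    def __init__(self, coordinator: CoordinationLevel):
        self.coordinator = coordinator

    def organize(self, goal: str) -> list:
        plan = [f"{goal}-step{i}" for i in range(3)]  # a priori, fixed plan
        return self.coordinator.coordinate(plan)


system = OrganizationalLevel(CoordinationLevel(ExecutiveLevel()))
print(system.organize("grasp"))  # the top level dictates everything below
```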
The first level seeks to mimic human functions, with a tendency towards analytical approaches. The following remarks can be formulated about this approach:
– the proposed model is hierarchical (top-down) and therefore describes a machine submitted to the diktat of the organizational level (the question remains of how information is communicated upwards);
– the approach relies heavily on computation and ignores any work on knowledge representation. Processing is therefore done in a “closed world”, which seems likely to prevent any adaptation to multidisciplinarity;
– the detailed definitions of each of these levels worsen this separation: for example, the first two levels do not even take into account notions such as organization and emergence;
– integrating two systems seems impossible in Saridis’s approach. Since there is absolutely no notion of proactivity in that approach, integrating a new proactive system is not plausible. Working on a priori knowledge means that regulation is determined in advance, whereas a proactive element cannot be strictly regulated;
– the lack of the notion of perspective, or point of view, is another significant shortcoming, as this notion is essential to our approach. In fact, one of our fundamental assumptions is that knowledge depends on perspective, which makes it relative. In our approach, knowledge is therefore subjective, and we do not assume any absolute truth.
In this work, we propose a biology-inspired model of autonomous systems. It differs from the model described above. Our approach will show that we do not address the same issues as those addressed by strictly analytical approaches.
In order for the system to behave like an autonomous organism, its architecture must be made of elements that can be considered artificial organs. More importantly, the most elementary levels of the system must be made of informational components that themselves have some level of autonomy, even a minimal one, that are sensitive to their environment and that alter themselves merely by activating and operating.
The concept of agents is used in various areas, and definitions differ according to the area to which the notion is applied. In economics, for instance, agents are defined as selfish human entities, which is not pertinent for the computer science field. In the specific field this work focuses on, an agent is defined as [NEW 82]:
An active, autonomous entity that is able to accomplish specific tasks. This definition derives from A. Newell’s rational agent, in which the knowledge level is set above the symbolic level. A rational agent’s knowledge is made up not only of what it knows, but also of its goals and of its means of action and communication.
More precisely, an agent is:
– an intelligent entity that acts rationally and intentionally towards a goal, according to the current state of its knowledge;
– a high-level entity, although slave to the global system, which acts continuously and autonomously in an environment where processes take place and where other agents exist.
Furthermore, in order to specify the bounds of the concept, M. Wooldridge and N.R. Jennings introduced the strong and weak notions of agent [WOO 94].
An agent pertaining to the weak notion of agent must exhibit the following features:
– it must be able to act without any intervention from any third party (human or agent) and it must be able to regulate its own actions as well as its internal state, using predefined rules;
– it must be endowed with some sociality, in other words, it must be able to interact with other (software or human) agents when the situation demands it, in order to accomplish its tasks or help other agents accomplish theirs;
– it must be proactive, in other words, it must exhibit an opportunistic behavior and an ability to make its own decisions.
The two authors define agents pertaining to the strong notion as having, in addition to the abilities of weak agents, the following features:
– beliefs: what the agent knows and interprets of its environment;
– desires: the goals of the agent, defined according to its motives;
– intentions: in order to realize its desires, the agent performs actions that manifest its intentions.
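The weak/strong distinction can be summarized in a minimal interface sketch. The class and method names below are our own illustration of the features listed above, not an API from [WOO 94]:

```python
from abc import ABC, abstractmethod


class WeakAgent(ABC):
    """Weak notion: autonomy, sociality and proactivity."""

    @abstractmethod
    def act(self) -> None:
        """Act without third-party intervention, using predefined rules."""

    @abstractmethod
    def interact(self, other: "WeakAgent", message: str) -> None:
        """Sociality: exchange messages with other agents when needed."""

    @abstractmethod
    def take_initiative(self) -> bool:
        """Proactivity: decide opportunistically whether to act."""


class StrongAgent(WeakAgent, ABC):
    """Strong notion: a weak agent extended with beliefs, desires, intentions."""

    def __init__(self):
        self.beliefs = {}     # what the agent knows and interprets of its environment
        self.desires = []     # its goals, defined according to its motives
        self.intentions = []  # the desires it has committed to act upon
```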
This strong notion qualifies agents as truly autonomous complex systems rather than as the usual software agents that constitute a system which might, as a whole, be complex. The three features are non-trivial because they are inspired by human psychology, which Artificial Intelligence (AI) specialists can hardly model on the basis of classical knowledge representation formalisms. In this work, we will not use the strong notion; we will instead focus on systems based on architectures of numerous agents in the weak sense. We assume that beliefs, desires and intentions can only exist at the global level of the whole architecture, emerging as patterns from the coordinated, organized behavior of the agents.
Computer science initially saw agents in two different ways. The first one, called “cognitive”, considers agents as intelligent entities that are able to solve problems by themselves. Any such agent can rely on a limited knowledge base, some strategies and some goals to plan and accomplish its tasks. These entities, which we can qualify as “intelligent”, will necessarily have to cooperate and communicate with each other. In order to study this collaborative feature of cognitive agents, researchers rely on sociological work to address issues related to the coordination of social agents.
The second perspective on agents is called “reactive”. In this perspective, the intelligent behavior of the system is considered to emerge from the interactions of the various behaviors of its agents, behaviors that are much simpler than those of cognitive agents. In this framework, agents are designed with neither complex cognitive representations nor fine-grained reasoning mechanisms. They only have mechanisms that enable them to react in various manners to the events they perceive.
Nowadays, agents are widely considered to have cognitive abilities that, albeit limited, are effective because they are specified with rules and meta-rules implemented in the agent’s structure as early as the design stage. The central issue is thus how to make such agents relate to each other and interact, and how some agents can establish themselves as hegemonic. These issues need to be addressed in order to understand how, on the basis of the set of active agents and according to the current situation, the most appropriate and efficient behavior can emerge in the global system. This approach therefore focuses not on the notion of the individual agent but rather on notions such as agent organization. Such organizations will be constituted of very large numbers of agents whose interactions will have to be used and regulated. This leads us to the notion of multi-agent systems: well-organized sets of agents that perform various actions which, when combined, constitute the system’s behavior.
Let us nevertheless give a minimal definition of agents, in the constructionist perspective of systems modeling. Agents considered as conceptual entities should have, according to J. Ferber [FER 99], the following properties:
– ability to act in a planned manner, within its environment;
– skills and services to offer;
– resources of its own;
– ability to perceive its environment, although in a limited manner because it can only build a partial representation of that environment;
– ability to communicate directly with other agents through links called relations of acquaintance;
– willingness to act in order to reach or optimize individual goals according to a satisfaction function, or even a survival function;
– intentional behavior towards reaching its goals, taking into account its resources and skills as well as what it perceives and communications it receives.
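As a toy rendering of these properties, a conceptual agent might be sketched as follows. All names are illustrative assumptions of ours, not an implementation from [FER 99]:

```python
from dataclasses import dataclass, field


@dataclass
class ConceptualAgent:
    """Toy sketch of an agent with Ferber-style properties."""
    skills: set = field(default_factory=set)           # skills and services offered
    resources: dict = field(default_factory=dict)      # resources of its own
    acquaintances: list = field(default_factory=list)  # relations of acquaintance
    sensitivity: set = field(default_factory=set)      # what it is able to perceive
    percepts: dict = field(default_factory=dict)
    goals: set = field(default_factory=set)
    inbox: list = field(default_factory=list)

    def perceive(self, environment: dict) -> None:
        # Partial representation: keep only the features it is sensitive to.
        self.percepts = {k: v for k, v in environment.items()
                         if k in self.sensitivity}

    def satisfaction(self) -> float:
        # Individual satisfaction function: fraction of goals visibly reached.
        return len(self.goals & set(self.percepts)) / (len(self.goals) or 1)

    def communicate(self, message: str) -> None:
        # Direct communication along relations of acquaintance.
        for other in self.acquaintances:
            other.inbox.append(message)

    def act(self, environment: dict) -> None:
        # Intentional behavior: act only while goals remain unsatisfied.
        if self.satisfaction() < 1.0 and self.skills:
            environment[next(iter(self.skills))] = True
```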
A multi-agent system (MAS) is made of many agents that constitute an organization, i.e. an identified system that reorganizes itself through its actions and through the relations between its elements. It configures and reconfigures itself in order to realize its action on the environment. Systems that are developed in AI simulate, in a specific domain, some human reasoning abilities on the basis of inference-based reasoning mechanisms that operate on knowledge representation structures. On the contrary, MAS are designed and implemented as sets of agents that interact in modes involving cooperation, concurrence or negotiation and continuously reconfigure themselves in order to always set up the most efficient organization.
An MAS is thus defined by the following features:
– each of its constitutive agents has limited information and problem-solving abilities; its knowledge and understanding are partial and local with respect to the general problem that the MAS must process and solve;
– there is no global, centralized control system in the MAS. This is essential;
– the data the system relies upon is also distributed. Some interface agents gather data and manage its distribution as well as timing issues;
– the problem-solving computation that the MAS must perform each time it is solicited, i.e. its actual functioning, emerges from the asynchronous coordination of its constitutive agents. This emergence selects a limited number of agents that are in charge of realizing the action/solution to the problem (see the sketch after this list).
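The following toy sketch illustrates the features above, in particular the absence of central control: agents with purely local competence are activated asynchronously, and the group in charge of the solution emerges from their activity levels. Everything here (the names, the activity measure) is a simplifying assumption of ours:

```python
import random


class LocalAgent:
    """An agent holding only a local fragment of the global problem."""

    def __init__(self, name: str, competence: set):
        self.name = name
        self.competence = competence  # partial, local knowledge
        self.activity = 0.0

    def step(self, request: set) -> None:
        # Activity grows with the overlap between the request and local skills.
        self.activity += len(self.competence & request)


def solve(agents: list, request: set, rounds: int = 10) -> list:
    # No global, centralized control: agents act in a random, asynchronous order.
    for _ in range(rounds):
        for agent in random.sample(agents, k=len(agents)):
            agent.step(request)
    # The group in charge of the solution emerges from the most active agents.
    return [a.name for a in agents if a.activity > 0]


agents = [LocalAgent("a1", {"sense"}), LocalAgent("a2", {"plan"}),
          LocalAgent("a3", {"move"})]
print(solve(agents, request={"sense", "move"}))  # e.g. ['a1', 'a3']
```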
The MAS can also be seen as a set of agents situated in an environment made of other agents and of objects, which are different from agents. Agents use the objects of the environment. These objects, in a strictly functional, computer science sense, are purely reactive entities that provide information and produce functional actions. Agents can interpret both the information that the objects’ methods provide and the behavior of other agents, with the delays this necessarily incurs. In other words, agents use objects and communicate with other agents in order to reach their goals. This model enables us to distinguish the accurate gathering of information, which objects produce systematically (this defines the role of objects), from its analysis and multi-level conceptual interpretation, which the organization of agents produces (this defines the role of that organization).
In reactive MAS, the constitutive agents are considered to be merely reactive. A range of reflex methods is programmed so that the agents can react to any event that might occur. Actions are broken down into elementary behavioral actions that are distributed among agents. The efficient synchronization of these distributed actions then becomes the issue to address. Each agent is in charge of a so-called stimulus–action link that it must manage with accurate timing, taking the state of the environment into account. Globally, the system analyzes any stimulus via its apprehension by agents whose nature is to be sensitive to it. It then finds the appropriate reflex methods in the appropriate agents, provided they exist, and responds by making the agents and methods found act with as much synchronization as possible. Such systems may seem intelligent when they operate exactly as expected, but since they do not attach any meaning to their action, they remain purely functional. Strictly speaking, coordinating agents does not go beyond the issue of functional regulation in order to optimize efficiency. Moreover, such systems have often been designed to operate within a very specific range of situations, making them very vulnerable to unforeseen events.
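A stimulus–action link can be pictured as a simple dispatch table from stimulus labels to reflex methods. The sketch below is a minimal illustration under our own naming assumptions, including the vulnerability to unforeseen events:

```python
class ReactiveAgent:
    """Purely reactive agent: a table of stimulus-action (reflex) links."""

    def __init__(self, name: str):
        self.name = name
        self.reflexes = {}  # stimulus label -> programmed reflex method

    def on(self, stimulus: str, reflex) -> None:
        self.reflexes[stimulus] = reflex

    def react(self, stimulus: str, state: dict):
        # React only if a reflex was programmed for this stimulus; an
        # unforeseen stimulus is simply ignored, hence the fragility.
        reflex = self.reflexes.get(stimulus)
        return reflex(state) if reflex else None


agent = ReactiveAgent("obstacle_watcher")
agent.on("obstacle", lambda state: "stop" if state["speed"] > 0 else "idle")
print(agent.react("obstacle", {"speed": 2.0}))   # 'stop'
print(agent.react("new_event", {"speed": 2.0}))  # None: not handled
```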
Reactive agent-based MAS that exhibit behavioral emergence nonetheless remain among the best examples of successful reactive systems. They are especially well-known for computer applications applied to specific, well-delimited fields.
Multi-agent systems of the cognitive type, on the other hand, are able to discriminate and interpret information coming from their external environment, thanks to cognitive symbolization processes based on various predefined features implemented in the structures of the agents. They apprehend the semantic features of information that is initially received as data and distinguish its unifying meaning according to their subjective situation. A perceptive system considers a perceived event as a complex fact. It transforms it into a series of interrelated symbolic features that are organized by groups of agents. These groups of agents have the necessary knowledge to elaborate various possible interpretations. Each active group of agents then constitutes a semantic pattern that symbolizes the perceived event. The various active semantic patterns, in turn, construct a multi-scale categorization of the represented facts. When, in this work, we detail this type of multi-agent system, the central issue will be to understand how the system accurately builds this semantic categorization of any event it apprehends.
To design the mechanism that will enable the system to interpret its situation in the current environment, we will use a massive multi-agent system in which each entity has some level of proactivity. Let us define this important notion: an agent is a proactive concept-based element if it is active when it needs to be and if it uses its knowledge according to its internal state and to its situation in the environment, responding or not to the solicitations of other agents.
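To fix ideas, here is a minimal sketch of such a proactive element, under our own simplifying assumptions: an “interest” level stands in for the internal state, and a threshold stands in for “when it needs to be”:

```python
class ProactiveElement:
    """Active when it needs to be; may or may not answer solicitations."""

    def __init__(self, threshold: float):
        self.interest = 0.0         # internal state, built up from the situation
        self.threshold = threshold  # internal activation threshold

    def observe(self, relevance: float) -> None:
        # The situation in the environment feeds the internal state.
        self.interest = 0.8 * self.interest + relevance

    def wants_to_act(self) -> bool:
        # Activates itself only when it needs to be active.
        return self.interest >= self.threshold

    def solicited(self, urgency: float) -> bool:
        # A solicitation from another agent may be declined if it does not
        # match the internal state: the element responds "or not".
        return urgency + self.interest >= self.threshold
```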
So, the two main reasons for using organizations of agents to model autonomous systems are:
– agents can dynamically reify any specific item of knowledge by relating it to knowledge represented in other agents. This means that specific items of knowledge can be considered as aspects of a large relational organization. This organization is what expresses, with continuously updated dynamical constructs, the appropriate causal relations, and the relevant global perspective of the system on its current situation;
– the proactive as well as very communicative behavior of agents enables the constitution of aggregates of agents acting and communicating with each other. Such aggregates can, to some extent, be seen as analogous to the sociological notion of “social groups”. Because relations evolve continuously, aggregates with a higher activity will become distinguishable (a toy sketch of this follows the list). The combination of the specific features of each more or less active aggregate will outline a shared feature, a common perspective according to which the knowledge is organized. Beyond the mere resolution of a well-defined optimization problem with functions and variables in a fully determined space, the challenge consists of making cognitive patterns emerge from the communication of many agents, so that these cognitive patterns represent the multiple aspects of the system’s functionality as well as decisions that are truly relevant to a complex and ever-changing situation.
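How more active aggregates might become distinguishable can be illustrated with a deliberately naive computation over a communication log. The traffic threshold and the merging rule are assumptions of ours, not the book’s mechanism:

```python
from collections import Counter


def distinguish_aggregates(messages: list, min_traffic: int = 3) -> list:
    """Toy detection of 'social groups': pairs of agents that exchange
    many messages outline the aggregates with the highest activity."""
    traffic = Counter(frozenset(pair) for pair in messages)
    busy_pairs = [set(pair) for pair, n in traffic.items() if n >= min_traffic]
    # Merge overlapping busy pairs into larger aggregates.
    aggregates = []
    for pair in busy_pairs:
        for aggregate in aggregates:
            if aggregate & pair:
                aggregate |= pair
                break
        else:
            aggregates.append(pair)
    return aggregates


log = [("a1", "a2")] * 4 + [("a2", "a3")] * 3 + [("a4", "a5")]
print(distinguish_aggregates(log))  # [{'a1', 'a2', 'a3'}]
```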
These two rich features are specific to organizations of agents. Objects of object-oriented languages are entities that are perfectly fit for the rational design of a priori well-defined structures whose possible actions are all anticipated and whose overall behavior is fully planned. Of course, the agents are to be built with objects, processes, remote objects and threads, but they will be able to alter their own attributes and to create new objects and processes; at the conceptual level, they will blend activities, knowledge representation, migration and the creation of new instances and classes.
In the following, we will focus on open systems, i.e. systems that interact with their environment. Such systems are to be understood as groups of elements that are in relation with each other and whose coordinated actions are organized to produce the system’s action on the environment. These systems are, therefore, defined both by the set of their elements and by all the continuous relations that make them exist and act on their environments.
An organism, in biology, is defined as the set of organs of a living being. “Organ” is a biological term that denotes several tissues that perform one or a few specific physiological functions. An organ is thus a constitutive element of a biological system that performs all the functions pertaining to a specific area. Organs and their relations are represented by anatomical diagrams or charts that depict their organization within the unified framework that constitutes the living organism. The organism can thus be identified with the living being.
Some artificial systems can be seen as analogous to natural organisms, in so far as one analyzes them in terms of their constitutive elements and underlying relations between these elements. Relations between elements of a system can be seen as information processing. To this end, let us consider a two-level organization:
– the level of physical elements, made of basic elements and their aggregates;
– the level of information processing and exchange between the various physical elements.
Here, we take an approach that transposes fundamental features of living organisms into the field of artificial systems. Such an approach demands a novel design strategy and requires that very specific building blocks be used.
Artificial corporeity results from an organization of distributed electronic and informational elements that, although they have well-defined functions and are locally controlled by information processors, act as a unified whole that endows all their relations and individual actions with meaning by continuously coordinating them.
Within this framework, an artificial organ is a particular element composed of a specific electronic system that activates electromechanical parts and of an informational control system that associates these various parts and represents their specific functions in order to use them in a very precisely coordinated manner. The organ is situated within a corporeity of multiple other organs and is managed, together with the other organs, as a strongly coactive element.
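Put in software terms, an artificial organ pairs electromechanical parts with their informational control, and the corporeity coordinates the organs as one whole. The sketch below is only an illustration of this pairing, with invented names:

```python
class ArtificialOrgan:
    """An electromechanical part together with its informational control,
    managed as a strongly coactive element of the corporeity."""

    def __init__(self, name: str, functions: set):
        self.name = name
        self.functions = functions  # the specific functions it represents
        self.state = {}             # local state of its electromechanical parts

    def actuate(self, function: str, value: float) -> None:
        if function in self.functions:  # local, precisely delimited control
            self.state[function] = value


class Corporeity:
    """Unifies distributed organs and continuously coordinates their actions."""

    def __init__(self, organs: list):
        self.organs = organs

    def coordinate(self, function: str, value: float) -> None:
        # Coordinated use endows individual actions with a shared meaning:
        # every organ implementing the function acts in a unified manner.
        for organ in self.organs:
            organ.actuate(function, value)
```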
Two essential concepts will guide the definition of the complex architecture of the artificial organism we intend to design:
– the first one is the concept of corporeity, which means that the physical components of the system, in order to be considered as organs, must fall under a very precise and elaborate organization;
– the second major concept is that of an interpreting system. It will continuously manage the behavioral state of the system, as well as process and interpret any gathered information in the light of the whole of its knowledge. The interpreting system will enable the artificial organism to continuously generate, with intentionality, series of representations derived from what it apprehends, conceives, believes or desires, and to thus engage in continuously intentional and interpreted actions.
The goal here is to provide the system with a generator of series of clear representations, so that it can express its intentions, wills and desires while experiencing sensations. The design of such a system, which would fully use its corporeity and apprehend itself as an organism, is key to the concept of autonomy presented here.
The interpreting system, key to the autonomy of the global system, will make series of representations emerge from what is apprehended and desired by the system at any time. Such a system, set at a purely informational level, can be seen as a proto-self. Knowledge representation in such a system is very specific. Further on in this work, we will detail our proposal to use swarms of active software agents. The challenge will then consist of being able to orient them towards making representations emerge from what is apprehended. Our suggestion is to use a self-regulation mechanism to apply incentive regulation, an approach that has so far not been developed.
This is what a truly autonomous system will be. It will not merely use various knowledge bases to produce predetermined, appropriate responses to more or less complicated situations. It will cognitively and sensitively interpret the reality it apprehends in order to fully deploy and situate its own identity within it. The physical level will be immersed in a computational system, the essential component of the artificial autonomous organism. In the following, we detail the architecture of this computational system.
We consider a system made of numerous elements pertaining to various fields, in a state of continuous reorganization. The elements that compose the system interact strongly with each other, conforming to rules that specify the local and global actions of the system on itself and its environment. Such a system is considered open because it communicates with its environment systematically, gathering and expressing information continuously. The essential function of some of its components is therefore to communicate back and forth with the environment. In this chapter, we begin by presenting the physical, hardware layer of an autonomous system. It is made of electronic or mechanical elements that constitute the system’s corporeity. Some of them can also be specific informational applications. The global system assesses their situations so that it is able to organize them into structures at another scale and consider them as organs or parts of organs.