From Complexity in the Natural Sciences to Complexity in Operations Management Systems

Jean-Pierre Briffaut

Description

Although complexity makes up the very fabric of our daily lives and has been addressed, to varying degrees, in a wide variety of fields of knowledge, the approaches developed in the Natural Sciences and the results obtained over the past century have not yet permeated the Management Sciences to any great extent. The main features of the phenomena that the Natural Sciences deal with are non-linear behavior, self-organization and chaos. They are analyzed within the framework of what is called "systems thinking", popularized by the mindset of cybernetics. All the pioneers of systems thinking had direct or indirect connections with Biology, the discipline regarded by the public as complex par excellence. When these concepts are applied to Operations Management Systems, and organizations are modeled by BDI (Beliefs, Desires, Intentions) agents, the lack of predictability in conducting change management, which is prone to bifurcations (tipping points) in organizational structures and in forecasting future activities, reveals how deeply such systems are ingrained in the interplay of complexity and chaos.


Page count: 309

Publication year: 2019




Table of Contents

Cover

Preface

Dedication

1 Complexity and Systems Thinking

1.1. Introduction: complexity as a problem

1.2. Complexity in perspective

1.3. System-based current methods proposed for dealing with complexity

1.4. Systems thinking and structuralism

1.5. Biodata of two figureheads in the development of cybernetics

1.6. References

2 Agent-based Modeling of Human Organizations

2.1. Introduction

2.2. Concept of agenthood in the technical world

2.3. Concept of agenthood in the social world

2.4. BDI agents as models of organization agents

2.5. Patterns of agent coordination

2.6. Negotiation patterns

2.7. Theories behind the organization theory

2.8. Organizations and complexity

2.9. References

3 Complexity and Chaos

3.1. Introduction

3.2. Complexity and chaos in physics and chemistry

3.3. Order out of chaos

3.4. Chaos in organizations – the certainty of uncertainty

3.5. References

Conclusion

Appendices

Appendix 1: Notions of Graph Theory for Analyzing Social Networks

Appendix 2: Time Series Analysis with a View to Deterministic Chaos

Index

End User License Agreement

List of Illustrations

Chapter 1

Figure 1.1. Dynamics of reactions with the intermediary formation of a catalyst

Figure 1.2. Relationship between signs, data, information, knowledge and opinion...

Figure 1.3. Examples of relations between objects, classes of objects and attrib...

Figure 1.4. A frame for a long-term loan

Figure 1.5. Activation (output) computed for a single cell

Figure 1.6. Contrasting the concepts of first-order cybernetics and second-order...

Figure 1.7. Relationship between observed and observing systems

Chapter 2

Figure 2.1. Product structures from a manufacturing point of view

Figure 2.2. Gantt chart for scheduling machining and assembly activities for del...

Figure 2.3. Connection network matrix for the example of six children and three ...

Figure 2.4. Bipartite graph of the connection network matrix for the example of ...

Figure 2.5. BDI agent architecture

Figure 2.6. A situation S and its two possible worlds

Figure 2.7. Coordination between proprietary information systems through a colla...

Figure 2.8. Systematic approach to derive the Aufbau- und Ablauforganisation com...

Figure 2.9. A human agent at the interface of two universes

Figure 2.10. Description of a context shared by a set of actors interacting betw...

Figure 2.11. Set of actors interacting between themselves via their environment

Figure 2.12. Conceptualized processing steps for diagnosing messages and events

Chapter 3

Figure 3.1. Graphical paths to derive the iterates step by step from the initial...

Figure 3.2. Structures of time lags in a business control loop

Figure 3.3. Examples of time-lagged response to a step increase in target value

Figure 3.4. Effects of time lags on an oscillating production system

Figure 3.5. Effect of positive feedback on an oscillating production system

Figure 3.6. Effect of negative feedback on an oscillating production system

Figure 3.7. Three vertices (game points) of the game board and some iterations o...

Figure 3.8. A Sierpinski gasket

Figure 3.9. Two iterations of a circle with an MRCM on the basis of a triangle p...

Figure 3.10. Composite elements of a baseline feedback machine

Figure 3.11. Phased steps of analytics processes

Figure 3.12. Bifurcation and change in organizational patterns

Figure 3.13. Role of information in controlling business resources in a manufact...

Figure 3.14. An example of layered business system architecture

Figure 3.15. Layered information system architecture for a manufacturing company

Figure 3.16. Hierarchical architecture of application systems for a manufacturin...

Figure 3.17. System of insight building blocks to manage incoming data flows fro...

Figure 3.18. Articulation between controlled process, information system and con...

Figure 3.19. Articulation between controlled process, information systems and co...

Figure 3.20. Building blocks of the computer-integrated manufacturing (CIM) conc...

Appendix A1

Figure A1.1. Three illustrative networks and their two representative formalisms

Figure A1.2. Contrasting isomorphism and homomorphism between two sets of elemen...


Systems of Systems Complexity Set

coordinated by Jean-Pierre Briffaut

Volume 1

From Complexity in the Natural Sciences to Complexity in Operations Management Systems

Jean-Pierre Briffaut

First published 2019 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd

27-37 St George’s Road

London SW19 4EU

UK

www.iste.co.uk

John Wiley & Sons, Inc.

111 River Street

Hoboken, NJ 07030

USA

www.wiley.com

© ISTE Ltd 2019

The rights of Jean-Pierre Briffaut to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2019930116

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-78630-368-4

Preface

The word “complex” is used in many contexts, be it in the social sciences, biology, chemistry and physics or in our professional and private environments. Whenever we cannot understand a situation, we try to escape the challenge of feeling doubt and uncertainty, because we have the impression that we lack the methods and techniques (in a word, the capabilities) to address the issues involved. Within this framework, we decide to give up and convince ourselves that we are right to do so because we are overwhelmed by “complexity”. Complexity is an idea woven into the very fabric of our daily experience.

When a phenomenon seems simple to us, it is because we perceive that one object and one action are involved, even though reality may be much more intricate. This simplification is enough to let us “cognize” the ins and outs of the situation we experience. In contrast, when a great number of interacting elements are involved, we perceive the situation as complex.

Economic systems and human relationships are complex. Macroscopic situations may appear “simple” because the underlying microscopic states are hidden. We perceive “averages” without knowing the detailed states of the components of a whole.
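As a toy illustration of this point (the values below are invented for the example, not taken from the book), two “wholes” can share exactly the same macroscopic average while their microscopic states differ completely; only a finer statistic reveals the hidden detail:

```python
a = [50, 50, 50, 50]   # homogeneous microscopic states
b = [0, 100, 5, 95]    # heterogeneous microscopic states

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Both wholes present the same macroscopic "average" ...
assert mean(a) == mean(b) == 50.0
# ... but the variance exposes the very different microscopic detail.
assert var(a) == 0.0
assert var(b) > 0.0
```
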

During the second half of the 20th Century, developments in the thermodynamic theory of irreversible processes, the theory of dynamical systems and classical mechanics converged to show that the chasm between simple and complex, order and disorder, is far narrower than previously thought.

Biology is acknowledged as complex, as it is associated with living organisms whose chemical functioning relies on the interactions of many subsystems. The idea of complexity has undergone a paradigm shift: no longer restricted to biology, it is pervading the physical as well as the social sciences.

The purpose of this book is to describe the main results reached in the natural sciences (physics, chemistry and biology) in coming to terms with complexity during the second half of the 20th Century, and to show how these results can be adapted to help understand and conduct management operations.

It is divided into three main chapters, namely “Complexity and Systems Thinking”, “Agent-based Modeling of Human Organizations” and “Complexity and Chaos”.

The purpose of the first chapter, “Complexity and Systems Thinking”, is to give an overview of the way the concept of system has been instrumental in interpreting phenomena observed in the natural sciences, as well as in the emotional behaviors of human beings, since a human being is a system in itself.

The second chapter is devoted to complexity and human organizations. Analyzing existing organizations is a difficult exercise because human relations are intricate. Making them explicit requires the help of a relevant model that includes cognitive features. The BDI (Beliefs, Desires, Intentions) agent model elaborated there meets this requirement.

The third chapter deals with complexity and chaos, which is commonly associated with disorder. We will examine how the concepts developed in the physical sciences can be used in the field of human organizations to understand their behavioral evolution, especially when change management is pushed by fast-evolving technologies and their consequences for the collaboration and cooperation between human actors.

Jean-Pierre BRIFFAUT

January 2019

Dedication

This book has been written on the occasion of the fortieth anniversary of the foundation of the Institut Frederik Bull (IFB) by Bull. The Institut operates as a think tank, with working groups studying the societal impacts of informatics and of the digitalization of the economy. The working group linked to this book investigates the complexity of Systems of Systems (SoS).

IFB has developed collaborations with management and engineering schools (EMLV, ESILV, IIM) located at Le Pôle Universitaire Leonard de Vinci in Paris-La Défense.

1 Complexity and Systems Thinking

1.1. Introduction: complexity as a problem

Our perception of the world brings about the feeling that it is a giant conundrum with dense connections among what we view as its parts. As human beings with limited cognitive capabilities, we cannot cope with it in that form and are forced to reduce it to separate areas that we can study independently.

Our knowledge is thus split into different disciplines, and over the course of time, these disciplines evolve as our understanding of the world changes. Because our education is conducted in terms of this division into different subject matters, it is easy not to be aware that the divisions are man-made and somewhat arbitrary. It is not nature that divides itself into physics, chemistry, biology, sociology, psychology and so on. These “silos” are so ingrained in our thinking processes that we often find it difficult to see the unity underlying these divisions.

Given our limited cognitive capabilities, our knowledge has been arranged by classifying it according to some rational principle. In the 19th Century, Auguste Comte (1880) proposed a classification following the historical order of emergence of the sciences and their increasing degrees of complexity. Comte did not mention psychology as a science linking biology and the social sciences, and he did not regard mathematics as a science but as a language that any science may use. He produced a classification of the experimental sciences into the following sequence of increasing complexity: physics, chemistry, biology, psychology and the social sciences.

Physics is the most basic science, being concerned with the most general concepts such as mass, motion, force, energy, radiation and atomic particles. Chemical reactions clearly entail the interplay of these concepts in a way that is intuitively more intricate than isolated physical processes. A biological phenomenon such as the growth of a plant or an embryo brings in a still higher level of complexity. Psychology and the social sciences belong to the highest degree of human-felt complexity. In fact, we find it convenient to tackle the hurdles we are confronted with by establishing a hierarchy of separate sciences. Biology is of special interest to us as human beings because it studies the very fabric of our existence.

In physics, the scientific method inspired by Descartes’ reductionist rules has proved successful for gaining knowledge. Chemistry and biology can rely on physics to explain chemical and biological reactions; however, they are left with their own autonomous problems. K. Popper shares this point of view in his “intellectual autobiography” (Popper 1974):

“I conjecture that there is no biological process which cannot be regarded as correlated in detail with a physical process or cannot be progressively analysed in physiochemical terms. But no physiochemical theory can explain the emergence of a new problem… the problems of organisms are not physical: they are neither physical things, nor physical laws, nor physical facts. They are specific biological realities; they are ‘real’ in the sense that their existence may be the cause of biological effects”.

1.2. Complexity in perspective

1.2.1. Etymology and semantics

The noun “complexity” and the adjective “complex” are currently used in many oral and written contexts when situations, facts or events cannot be described and explained with a straightforward line of thought.

It is always interesting to investigate the formation of a word and how its meaning has evolved over time, in order to gain a better understanding of its current usage.

“Complex” is derived from the Latin complexus, made of interlocked elements; complectere means to fold and to intertwine. The word appeared in the 16th Century to describe what is composed of heterogeneous entities, and gained acceptance in logic and mathematics (complex number) circa 1652. At the turn of the 20th Century, it moved closer to “complicated” and came to be used in chemistry (organic complexes), economics (1918) and psychology (Oedipus complex, inferiority complex – Jung and Freud, 1909/1910). “Complicated” is derived from the Latin complicare, to fold and roll up, and was used in its original meaning at the end of the 17th Century. Its current usage is analogous to “complex”: what is complex – a theory, concept, idea, event, fact or situation – is something difficult to understand, relative to human cognitive and computational capabilities, which are both limited.

A telling instance is the meaning given to the word “complex” in psychology: a related group of repressed ideas that causes an abnormal behavior or mental state. It is implicitly assumed that the relations between these repressed ideas are intricate and difficult to explain to outside observers.

In the Oxford dictionary, “complex” is described by three attributes: consisting of parts, composite, and complicated. These descriptors are conducive to exploring the relationships of the concept of complexity with two well-established fields of knowledge, systems thinking and structuralism. That will be done in the following sections.

1.2.2. Methods proposed for dealing with complexity from the Middle Ages to the 17th Century, and their present-day legacy

Complexity is not a new issue in the quest for what is knowable to humans about the world they live in.

Two contributors, from the Middle Ages and the Renaissance respectively, will be considered here in their endeavors to come to terms with complexity: William of Ockham and René Descartes. Their ideas are still perceptible today.

1.2.2.1. Ockham’s razor and its legacy

William of Ockham (circa 1285–1347), known as the “More than Subtle Doctor”, was an English scholastic philosopher who entered the Franciscan order at an early age and studied at Oxford. Ockham’s razor (also called the principle of parsimony) is the name commonly given to the principle formulated in Latin as “entia non sunt multiplicanda praeter necessitatem” (entities should not be multiplied beyond what is necessary). This formulation, although commonly attributed to William of Ockham, has not been traced to any of his known writings. It can be interpreted as an ontological principle to the effect that one should believe in the existence of the smallest possible number of general kinds of objects: there is no need to postulate inner objects in the mind, only particular thoughts, or states of mind, whereby the intellect is able to conceive of objects in the world (Cottingham 2008). It can also be translated into a methodological principle: the explanation of any given fact should appeal to the smallest number of factors required to explain it. Opponents contended that this methodological principle commends a bias towards simplicity.

Ockham also wrote a book on logic, the Sum of Logic, in which he stated two logical rules now named De Morgan’s laws. As rules or theorems, the two laws belong to standard propositional logic:

1) [Not (p And q)] is equivalent to [Not p Or Not q].

2) [Not (p Or q)] is equivalent to [Not p And Not q].

Not, And and Or are logic connectors; p and q are propositions.

In other words, the negation of a conjunction implies, and is implied by, the disjunction of the negated conjuncts. And the negation of a disjunction implies, and is implied by, the conjunction of the negated disjuncts.
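Because p and q are propositions, the two equivalences can be checked exhaustively over all four truth assignments. A minimal Python sketch (the variable names are chosen here for illustration):

```python
from itertools import product

# Enumerate every truth assignment of the propositions p and q
# and check both of the laws stated above.
for p, q in product([True, False], repeat=2):
    # Law 1: Not (p And q)  is equivalent to  (Not p) Or (Not q)
    assert (not (p and q)) == ((not p) or (not q))
    # Law 2: Not (p Or q)   is equivalent to  (Not p) And (Not q)
    assert (not (p or q)) == ((not p) and (not q))

print("Both laws hold for all four truth assignments.")
```

Exhaustive enumeration works here precisely because propositional logic over two variables has only finitely many models.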

One can imagine that Ockham, who was involved in many disputations, felt the need to use the minimum number of classes of objects in order to articulate his arguments in an efficiently debatable way on the basis of predicate logic. Predicate logic allows entangled ideas and arguments to be clarified, producing a “rational” chain of conclusions that can be understood by a wide spectrum of informed people.

Several later schools of thought can be viewed as heirs to the principle of Ockham’s razor, among them ontological theory and Lévi-Strauss’ structuralism.

The word “ontology” was coined in the early 17th Century to avoid some of the ambiguities of “metaphysics”. Leibniz was the first philosopher to adopt the word. The terminology introduced in the 18th Century came to be widely adopted: ontology is the general theory of being as such, and forms the general part of metaphysics. In the usage of 20th-Century analytical philosophy, ontology is the general theory of what there is (MacIntyre 1967).

Ontological questions revolve around:

– the existence of abstract entities (numbers);

– the existence of imagined entities such as golden mountains or square circles;

– the very nature of what we seek to know.

In the field of organization theories, ontology deals with the nature of human actors and their social interactions. In other more abstract words, ontology aims to establish the nature of entities involved and their relationships. Ontology and knowledge go hand in hand because our conception of knowledge depends on our understanding of the nature of the knowable.

The ontological commitment of a theory is twofold:

– assumptions about what there is and what kinds of entities can be said to exist (numbers, classes, properties);

– when these commitments are paraphrased into a canonical form in predicate logic, they are the domains over which the bound variables of the theory range.

When it comes to complexity, the ontological description of an entity should refer to both its structure (structural complexity) and its organization (organizational complexity). This is in line with the German-culture mindset of describing a set of entities by two concepts, Aufbau (structure) and Ablauf (flows of interactions inside the structure). An entity can be a proxy that represents our perception of the world. Depending on the purpose, a part of the world can be perceived in different ways and may turn out to be modeled by different sets of ontological building blocks.
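The Aufbau/Ablauf split can be sketched as a tiny data model; this is a hypothetical illustration (the class and field names are not from the source), showing one entity described by its parts and by the flows of interactions among those parts:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """An entity described by its structure (Aufbau) and by the
    flows of interactions inside that structure (Ablauf)."""
    parts: list = field(default_factory=list)         # Aufbau: the components
    interactions: list = field(default_factory=list)  # Ablauf: (source, target) flows

# A hypothetical workshop modeled as one such entity.
workshop = Entity(
    parts=["machine", "operator", "conveyor"],
    interactions=[("operator", "machine"), ("machine", "conveyor")],
)

# A basic well-formedness check: every interaction links known parts.
assert all(a in workshop.parts and b in workshop.parts
           for a, b in workshop.interactions)
```

The same slice of the world could of course be modeled with a different set of building blocks, as the paragraph above notes.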

Lévi-Strauss was a Belgian-born French social anthropologist and a leading exponent of structuralism, a name applied to the analysis of cultural systems in terms of the structural relationships among their elements. Lévi-Strauss’ structuralism was an effort to classify the enormous amount of information about cultural systems and reduce it to ontological entities. He viewed cultures as systems of communication, and constructed models based on structural linguistics, information theory and cybernetics to interpret them. Structuralism is a school of thought that evolved first in linguistics (de Saussure 1960) and at first disseminated little outside the French-speaking intellectual ecosystem.

1.2.2.2. René Descartes

René Descartes (1596–1650), French philosopher and mathematician, was very influential in theorizing the reductionistic approach to analyzing complex objects. It consists of the view that a whole can be fully understood in terms of its isolated parts or an idea in terms of simple concepts.

This attitude is closely connected to a crucial issue that science faces: its ability to cope with complexity. Descartes’ second rule for “properly conducting one’s reason” divides the problem being examined into separate parts. This principle, central to scientific practice, assumes that the division will not dramatically distort the phenomenon under study: that the components of the whole behave the same when examined in isolation as when playing their part in the whole, or that the principles governing the assembly of the components into the whole are themselves straightforward.

The best-known application of this mindset is the decomposition of a human being into the body and the mind localized in the brain. It is striking that Descartes’ approach to understanding what a human being is and how (s)he is organized remains an issue discussed by philosophers of our time. The issue of mind–body interaction, with the contributions of the neurosciences, will be developed in a later section.

The argument supporting this approach is that it reduces the complexity of an entity by reducing the number of variables to be analyzed concomitantly. This methodology can be helpful as a first step, but understanding how the isolated parts interact to produce the properties of the whole cannot be avoided, and that exercise can prove very tricky. This way of approaching complexity contrasts with holism. Holism consists of two complementary views. The first is that an account of all the parts of a whole and of their interrelations is inadequate as an account of the whole. For example, an account of the parts of a watch and of their interactions would be incomplete as long as nothing is said about the role of the watch as a whole. The complementary view is that an interpretation of a part is impossible, or at least inadequate, without reference to the whole to which it belongs.

In the philosophy of science, holism is a name given to views like the Duhem–Quine thesis, according to which it is whole theories, rather than single hypotheses, that are accepted or rejected. For instance, the single hypothesis that the earth is round is confirmed when a ship disappears from view below the horizon. However, this presupposes a whole theory, one which includes the assumption that light travels in straight lines. Under a theory in which light rays are curved, the disappearance of the ship can equally be taken to confirm that the earth is flat. The Duhem–Quine thesis thus implies that a failed prediction does not necessarily refute the hypothesis it is derived from, since it may be preferable to maintain the hypothesis and instead revise some background assumptions.

The term holism was coined by Jan Smuts (1870–1950), the South African statesman and philosopher, who used it in the title of his book Holism and Evolution (Smuts 1926). In the social sciences, holism is the view that the proper objects of these sciences are systems and structures that cannot be reduced to individual social agents, in contrast with individualism.

As a mathematician, Descartes developed what is called analytical geometry. Geometric figures (lines, circles, ellipses, hyperbolas, etc.) are defined by analytical functions, and their properties are described in terms of equations in “Cartesian” coordinates measured from intersecting straight axes. This is an implicit way of facilitating the analysis of complex properties of geometric forms along different spatial directions.
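For instance, a circle is captured entirely by one Cartesian equation, which can then be tested pointwise; a minimal sketch (the function name and tolerance are illustrative choices, not from the source):

```python
import math

def on_circle(x, y, cx=0.0, cy=0.0, r=1.0, tol=1e-9):
    """The circle as the Cartesian equation (x - cx)^2 + (y - cy)^2 = r^2."""
    return math.isclose((x - cx) ** 2 + (y - cy) ** 2, r ** 2, abs_tol=tol)

assert on_circle(1.0, 0.0)                      # a point on the unit circle
assert on_circle(math.cos(0.7), math.sin(0.7))  # any (cos t, sin t) lies on it
assert not on_circle(1.0, 1.0)                  # (1, 1) does not
```

The geometric object is thus reduced to an algebraic test along the two coordinate axes, which is exactly the analytic move described above.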

1.3. System-based current methods proposed for dealing with complexity

1.3.1. Evolution of system-based methods in the 20th Century

All current methods used to deal with complexity evolved during the 20th Century within the framework of what is called systems theory.

The system concept is not a new idea. It was already defined in the encyclopedia by Diderot and d’Alembert, published in the 18th Century in Amsterdam, to describe different fields of knowledge. In astronomy, a system is a certain arrangement of the various parts that make up the universe: in Ptolemy’s system, supported by Aristotle and Hipparchus, the earth is the center of the world; in Copernicus’ system, the motionless sun is the center of the universe. In the art of warfare, a system is the layout of forces on a battlefield or the provision of defensive works, according to the concepts of a general or a military engineer respectively. The project by Law, around 1720, to introduce paper money for market transactions was called Law’s system.

1.3.1.1. The systems movement from the 1940s to the 1970s

A revived interest in the concept of systems emerged in the 1940s, in the wake of first-order cybernetics, whose seminal figure is Norbert Wiener (1894–1964). His well-known book Cybernetics: Or Control and Communication in the Animal and the Machine (Wiener 1948), published in 1948, is considered a landmark in the field of controlled mechanisms (servo-mechanisms). The word “animal” in the title reflects Wiener’s collaboration with the Mexican physiologist Arturo Rosenblueth (1900–1970) of the Harvard Medical School, who worked on transmission processes in nervous systems and favored teleological, non-mechanistic models of living organisms.

Cybernetics is the science that studies the abstract principles of control and regulation in complex organizational structures. It is concerned not so much with what systems consist of as with how they function and articulate. Cybernetics is applied to design and manufacture purpose-focused systems of non-self-reorganizable components. By design, such mechanistic systems can sustain a certain range of constraints from the environment (never forget that the surroundings in which a system is embedded are part of the system) through feedback and/or feed-forward loops, as well as the failure of some of their components; this last situation is generally handled at the design stage by securing a “graceful” degradation of operations. This first-order cybernetics clearly refers to Descartes’ school of thought: courses of action controlled by memorized instructions.
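The feedback loop at the heart of first-order cybernetics can be sketched as a toy negative-feedback regulator; this is a minimal illustration (the function name, gain and step count are assumptions made for the example), in which each step applies a correction proportional to the deviation from a setpoint:

```python
def regulate(target, state, gain=0.5, steps=20):
    """Negative feedback: at each step, correct a fraction (gain)
    of the error between the target (setpoint) and the current state."""
    for _ in range(steps):
        error = target - state
        state += gain * error  # the correction opposes the deviation
    return state

# The error shrinks geometrically by a factor (1 - gain) per step,
# so the state converges towards the setpoint.
final = regulate(target=100.0, state=20.0)
```

With gain = 0.5, the initial error of 80 is halved twenty times, leaving a residual error well below one thousandth; this is the sense in which the mechanism sustains deviations imposed by its environment.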

Shortcomings were revealed when the corpus of cybernetics concepts was applied to non-technical fields, especially the social and management sciences. Second-order cybernetics was worked out under the impetus of Heinz von Foerster (1978). The core idea was to distinguish the object (the system) from the subject (the system designer and controller). This delineation focuses on the role of the subject, which decides on the rules and means with which a given set of interacting entities (the object) has to operate. The subject can itself be a complex system: it may consist of different parts, a human being or group and technical proxies thereof, encapsulating the rules chosen by the human entity. These rules are subject to dynamic changes driven by evolving constraints and by set targets that are fulfilled or not. The presence of human cognitive capabilities in the control loop helps secure the sustainability of systems in evolving ecosystems.

Another important stakeholder – not to say the founding father – in the conception and dissemination of the system paradigm is the biologist Ludwig von Bertalanffy (1901–1972). It is relevant to describe his scientific contribution to systems thinking, a contribution that goes far beyond his well-known book General System Theory, published in 1968 and often referred to as GST (von Bertalanffy 1968). In 1954, with the economist Kenneth Boulding, R.W. Gerard and the biomathematician A. Rapoport, he founded a think tank (the Society for General Systems Theory) whose objectives were to define a corpus of concepts and rules relevant to system design, analysis and control. GST was conceived by Ludwig von Bertalanffy as a tool to design models in all domains where a “scientific” approach can be secured. In contrast to the mathematical approach of Norbert Wiener, Ludwig von Bertalanffy describes models in a non-formal language, striving to translate relations between objects and phenomena into sets of interacting components, the environment being a full part of the system. These interacting components match an organizational structure with an inner dynamical assembling device, like living organisms. Contrary to the feedback and feed-forward mechanisms of Norbert Wiener’s cybernetics, actions in Ludwig von Bertalanffy’s view are not only applied to objectively given things but can result in the self-(re)organization of system structures to reach and/or maintain a certain state, as happens in living organisms. In a world where data travels at the speed of light, the response time needed to adjust a human organization to an evolving environment is critical. The environment of any system is a source of uncertainty, because it is generally outside the control of the system’s designer and operator.

1.3.1.2. The systems movement in the 1980s: complexity and chaos

The systems movement of the 1980s became aware of two facts that had until then gone largely unnoticed: that complexity, cutting across all the scientific disciplines from physics to biology and economics, is a subject matter in itself, and that researchers in a limited number of fields had independently pioneered the investigation of chaotic behaviors in systems. The Santa Fe Institute (SFI) played, and still plays, a major leading role in the interdisciplinary approach to complexity.

The SFI initiative gives a telling insight into the realization, shared in the 1980s by scholars of different disciplines, that they faced the same issue, complexity, and that interdisciplinary discussions could help tackle this common stumbling block and achieve progress in knowledge.

SFI was founded in Santa Fe (New Mexico) in 1984 by scientists (including several Nobel laureates) mainly coming from the Los Alamos National Laboratory. It is a non-profit organization and was created as a visiting institution with no permanent positions. It consists of a small number of resident faculty, post-doctoral researchers and a large group of external faculty. Funding comes from private donors, grant-making foundations, government science agencies and companies affiliated with its business network. Its budget in 2014 was about 14 million US dollars. Its primary focus is theoretical research on wide-ranging models and theories of complexity. Educational programs are also run, from undergraduate to professional level.

As viewed by SFI, “complexity science is the mathematical and computational study of evolving physical, biological, social, cultural and technological systems” (SFI website). Research themes and initiatives “emerge from the multidisciplinary collaboration of (their) research community” (SFI website).

SFI’s current fields of research which are described on their website demonstrate the wide spectrum of subject matters considered relevant today:

– complex intelligence: natural, artificial and collective (measuring and comparing unique and species-spanning forms of intelligence);

– complex time (can a theory of complex time explain aging across physical and biological systems?);

– invention and innovation (how does novelty – both advantageous and unsuccessful – define evolutionary processes in technological, biological and social systems?).

M. Mitchell Waldrop (1992) chronicles the events that took place at SFI from its foundation to the early 1990s. It is outside the scope of the present discussion to survey all the interdisciplinary workshops run during this period. I will elaborate on the contributions of John H. Holland and W. Brian Arthur.

The lecture “The Global Economy as an Adaptive Process” delivered by John H. Holland, Professor of Psychology and Professor of Computer Science and Engineering at the University of Michigan, at a workshop held on September 8, 1987 contains the following main points that are of general application:

– The economy is the model “par excellence” of what are called “complex adaptive systems” (CAS), a term coined at the SFI. These systems share crucial properties and are found both in the natural world (brains, immune systems, cells, developing embryos, etc.) and in the human world (political parties, business organizations, etc.). Each of these systems is a network of agents acting in parallel and interacting. This view implies that the environment of any agent is produced by the other acting and reacting agents. The control of this type of system is highly distributed, as long as no agent turns out to be a controlling supervisor.

CAS have to be contrasted with “complex physical systems” (CPS). CPS follow fixed physical laws, usually expressed by differential equations – Newton’s law of gravitation and Maxwell’s laws of electromagnetism are cases in point. In CPS, neither the laws nor the nature of the elements change over time; only the states of the elements change, according to the relevant laws. The variables of the differential equations describe the element states. CPS will be investigated in section 1.3.1.3.

– Complex adaptive systems have a layered architecture. Agents in lower layers deliver services to agents in higher layers. Furthermore, all agents engineer changes to cope with the environmental requirements perceived through incoming signals.

– Complex adaptive systems have capabilities for anticipation and prediction encoded in their genes.
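Holland’s picture of a CAS – a network of agents acting in parallel, where each agent’s environment is just the other agents and no central supervisor exists – can be illustrated with a minimal simulation. The sketch below is purely illustrative and not drawn from Holland’s lecture: the agents, their “resource” variable, the toy exchange rule and the adaptation rule are all invented for the example.

```python
import random

random.seed(42)

class Agent:
    """A minimal adaptive agent: its only 'environment' is the other agents."""
    def __init__(self):
        self.resource = 10.0
        self.offer_rate = random.uniform(0.1, 0.5)  # fraction offered in an exchange

    def interact(self, other):
        # Each agent puts a fraction of its resource into a common pot,
        # which is then split evenly (an arbitrary toy rule that conserves
        # the total resource of the pair).
        pot = self.resource * self.offer_rate + other.resource * other.offer_rate
        self.resource += pot / 2 - self.resource * self.offer_rate
        other.resource += pot / 2 - other.resource * other.offer_rate

    def adapt(self, population_mean):
        # Local adaptation: nudge the offer rate up if doing better than
        # average, down otherwise -- no central controller is involved.
        delta = 0.01 if self.resource > population_mean else -0.01
        self.offer_rate = min(0.9, max(0.05, self.offer_rate + delta))

agents = [Agent() for _ in range(50)]
for step in range(200):
    a, b = random.sample(agents, 2)   # pairwise local interactions
    a.interact(b)
    mean = sum(ag.resource for ag in agents) / len(agents)
    a.adapt(mean)
    b.adapt(mean)

# The total resource is conserved while strategies and individual
# holdings keep shifting -- order at the aggregate level, ceaseless
# adaptation at the agent level.
print(round(sum(ag.resource for ag in agents), 1))
```

The point of the sketch is structural, not economic: control is distributed, each agent reacts only to local signals and to the population it is embedded in, and the global pattern emerges from those interactions.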

The Irish-born W. Brian Arthur, who shifted from operations research to economics when joining the SFI, has kept working in the field of complexity and economics. He produced an SFI working paper in 2013 (Arthur 2013) summarizing his ideas about complexity, economics and complexity economics, a term he first used in 1999.

Here are the main features of W. Brian Arthur’s positions:

– “Complexity is not a theory but a movement in the sciences that studies how the interacting elements in a system create overall patterns and how those overall patterns in turn cause the interacting elements to change or adapt … Complexity is about formation – the formation of structures – and how this formation affects the objects causing it” (p. 4). This means that most systems experience feedback loops of some sort, which makes nonlinear behavior intrinsic to them.

– An economic system cannot be in equilibrium. “Complexity economics sees the economy as in motion, perpetually ‘computing’ itself – perpetually constructing itself anew” (p. 1). Equilibrium, studied by neoclassical theory, is an idealized limiting case of non-equilibrium and does not reflect the behavior of the real economy; the simplification is made because it keeps the mathematical equations tractable.

This point of view is in line with the dissipative systems studied by Ilya Prigogine: an economic system, like a living organism, operates far from equilibrium.
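Arthur’s remark that feedback loops make nonlinear behavior intrinsic can be illustrated with the textbook logistic map, the simplest system in which an output fed back as the next input produces qualitatively different regimes, including chaos. This example is mine, not Arthur’s; the parameter values are the standard ones from the dynamical-systems literature.

```python
def logistic(r, x, steps):
    """Iterate x -> r*x*(1-x); each output feeds back as the next input."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# One and the same feedback rule, three qualitatively different regimes:
print(round(logistic(2.8, 0.2, 1000), 4))  # converges to the fixed point 1 - 1/r
x1 = logistic(3.2, 0.2, 1000)
x2 = logistic(3.2, 0.2, 1001)
print(round(abs(x1 - x2), 2))              # period-2 oscillation of amplitude ~0.29
# At r = 4 the map is chaotic: two nearby starting points typically end
# far apart after a few dozen iterations (sensitive dependence on
# initial conditions -- the hallmark of chaos).
print(abs(logistic(4.0, 0.2, 50) - logistic(4.0, 0.2000001, 50)))
```

No external shock is involved: the bifurcations from equilibrium to oscillation to chaos come entirely from the feedback built into the rule, which is Arthur’s point about why equilibrium is the exception rather than the norm.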

Many distinguished scholars made significant contributions to SFI activities; it is outside the scope of the present discussion to describe them all.

1.3.1.3. Ilya Prigogine: “the future is not included in the present”

Ilya Prigogine and his research team played a major role in the study of irreversible processes through the approach of thermodynamics. His reputation outside the scientific realm comes from his collaboration with Isabelle Stengers, with whom he co-authored seminal books on the epistemology of science (Prigogine 1979; Prigogine and Stengers 1979, 1984).

He received the 1977 Nobel Prize in Chemistry for his contributions to non-equilibrium thermodynamics, particularly the theory of dissipative structures, which lie at the core of many phenomena in nature. He started his research at a time when non-equilibrium thermodynamics was not considered worthwhile, since all thermodynamic systems were supposed to reach equilibrium and stability sooner or later as time passes. Only equilibrium and near-equilibrium thermodynamic systems were subjects of academic research.

The pioneering work of Ilya Prigogine proved that non-equilibrium systems are widespread in nature: many natural organic processes evolve with time without any possible time reversal. This clearly holds true for biological processes. Science cannot yet reverse aging!