Information Organization of the Universe and Living Things

Alain Cardon

Description

The universe is considered as an expansive informational field subjected to a general organizational law. The organization of its deployment results in the emergence of an autonomous organization of spatial and material elements endowed with permanence, generated on an informational substratum where an organizational law is exercised at all scales. The initial action of a generating informational element produces a quantity of basic informational elements that multiply to form other informational elements, which are either neutral, constituting the basic spatial elements, or active, forming quantum elements. The neutral basic elements form space by continuous aggregation and represent the substrate of the informational links, allowing the active informational elements to communicate in order to aggregate and organize themselves. Every active element is immersed in an informational envelope, allowing it to continue its organization through constructive communications. The organizational law leads the active quantum elements to aggregate and produce new, more complex quantum elements, then molecular elements, massive elements, suns and planets. Gravity is then the force of attraction exerted by the informational envelopes of the aggregates, depending on their mass, developing them through the acquisition of new aggregates. The organizational communication between the informational envelopes of all the physical material elements on Earth enables the organization of living things, with reproduction managed by communications between the informational envelopes of the elements, realizing a continuous and powerful evolution.




Table of Contents

Cover

Title Page

Copyright

Introduction

Part 1. Informational Generation of the Universe

1 The Computable Model, Computer Science and Physical Concepts

1.1. The Turing model

1.2. Computer science

1.3. Formation of the Universe in physical sciences

2 The Informational Components and the Organizational Law of the Formation of Space and the Elements of the Universe

2.1. Informational model of universe generation and organizational law

2.2. The notion of generating information in the Universe

2.3. The generative informational component and the informational energy of the substrate of the Universe

2.4. The formation process of the Universe from the informational components

3 An Agent Model to Represent Informational Components

3.1. Informational and control agents representing the components

3.2. The generation of atoms and molecules in the informational agent model

3.3. The formation of a hydrogen atom agent with informational agents

3.4. Formation of a helium-type atomic agent

4 The Generation of the First Molecules in the Agent Approach

4.1. The informational characteristics of the system forming the molecules

4.2. Formation of simple molecules of helium hydride and dihydrogen

5 The Formation of Physical Elements in the Agent Approach

5.1. The notion of aggregate mass

5.2. The formation of stars and galaxies by the general action of the organizational law

5.3. The informational program for the design of the universal system

Part 2. Life Produced by the Organizational Law

Introduction to Part 2

6 The Characteristics of Scientific Theories of Life

6.1. Evolution and selection: Charles Darwin’s theory of gradual evolution and the biochemical approach

6.2. The constitution of life, from DNA to developmental biology

6.3. Genes and their expression: an open problem

7 The Informational Interpretation of the Living

7.1. Origin of the living and bifurcation of the organizational law

7.2. Evolutionary reproduction

7.3. Informational action of reproduction of life with morphological patterns

7.4. The application of the organizational law in the reproduction process

7.5. The continuous evolution of life

7.6. The human species in the organizational evolution of life

7.7. The informational envelope of the planet Earth today

Conclusion

References

Index

End User License Agreement

List of Illustrations

Chapter 1

Figure 1.1. The Turing machine

Chapter 2

Figure 2.1. An element made up of informational components forming its internal ...

Chapter 7

Figure 7.1. Organizational influence of dominant morphological patterns on the r...

Guide

Cover

Table of Contents

Title Page

Copyright

Introduction

Begin Reading

Conclusion

References

Index

Other titles from ISTE in Science, Society and New Technologies

End User License Agreement


Digital Sciences Set

coordinated by

Abdelkhalak El Hami

Volume 3

Information Organization of the Universe and Living Things

Generation of Space, Quantum and Molecular Elements, Coactive Generation of Living Organisms and Multiagent Model

Alain Cardon

First published 2021 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd

27-37 St George’s Road

London SW19 4EU

UK

www.iste.co.uk

John Wiley & Sons, Inc.

111 River Street

Hoboken, NJ 07030

USA

www.wiley.com

© ISTE Ltd 2021

The rights of Alain Cardon to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2021941901

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-78630-746-0

Introduction

The Universe is a huge expanse of space containing innumerable quantum, molecular and material elements. The physical sciences have discovered space-time and quantum particles, and it is possible to produce scientific theories about the creation of the Universe and the reasons for its state, specifying the generation, the role and the action of all the elements that it contains. We will propose a model of the generation of the Universe by considering that it is formed through the expansion of an informational substratum possessing great informational energy and applying everywhere a self-organizing law to produce, by emergence, the space and all the material elements in continuous organization.

To understand what the Universe is, we scientifically observe a great many elements at multiple scales, which compels us to ask three fundamental questions and to provide precise answers:

– What was the initial state of generation of this universe, a state where there was neither space nor time and where nothing physical existed?

– What is the space of the Universe, how did the space that allows the generation and deployment of all physical elements come to be created?

– What is this organization of the Universe that allowed the creation of atoms, molecules, clouds of physical components, then stars, galaxies and planets with physical and temporal stability, and what is it that allowed the creation of the extraordinary development of life on Earth, up to humans who ask existential questions?

To answer these questions, we can define a truly unifying model for the notions of generation of space, of quantum elements, of all material elements and also of all living organisms. Rather than simply positing that an empty space was available upon the creation of the Universe, with particles then placed in it, developing and unfolding at random using energy, we need an answer with a conceptual unification of the creation of the space of the Universe, of the quantum particles, of the molecules and of the material aggregates, and we need to define what the energy was that enabled the creation of all of this and, especially, to specify why the physical elements aggregated into the aggregates leading to the suns, to the planets and to life. In fact, it is necessary to make a unifying analysis of the generation of the Universe, and not only multiple local analyses in bottom-up and top-down approaches. It is necessary to define an organizational law that drives the organization of the Universe in a continuous way. We will pose that the unification was realized by an informational substrate of the Universe, below the quantum level, the Universe being then a self-organized emergence on this substrate.

In order to define a unifying model of the generation of the Universe, we must first define a very singular initial state which triggered the creation of all space and all elements according to a generative hyper-process, then specify how the generative processes unfolded in a continuous and self-organized way. We posit that this singular initial state generating the Universe is a localized informational system, that it was defined elsewhere and that it produced a quantity of informational elements which in turn produced others of the same kind. It was thus a generating system producing elements which themselves generate. We will then develop an informational model in which all the structure and all the elements of the Universe are based on informational fields of communication, fields that allow the generation of structures with temporal permanence, forming both informational and material elements. Furthermore, the notion of information used will be different from the one defined by the Turing machines that run programs. We will use the notion of organizational information, which relies on informational fields of communication comparable, for example, to a photon field, but more intense and totally invisible because it operates at a speed much higher than the speed of light, and which will produce a space of dense communicational links everywhere between all the created elements. This space of informational links will be the substrate producing the space of the Universe. There is therefore an informational substrate that performs incentive control at multiple scales, continuously producing an emergence that is space-time with all the physical elements of the Universe being generated. It will be necessary to introduce a fundamental organizational law regulating all the formations and aggregations of generated elements, so that the system does not randomly generate a set that becomes chaotic, with very little structural coherence.

We assume that the generating element is an autonomous and complex informational element which will generate many informational elements that will communicate intensely among themselves. This generating element will use informational energy given to it by its builders, which it will diffuse to the elements that it generates. The informational energy will therefore be the force that activates all the informational fields and all the generated informational elements. The elements produced by the generating element will be of two types, as will their production: either they will be structural elements, forming the cells of empty space, which constitute themselves on the informational links of the communications and unify to generate the space of the Universe; or they will be elements of activity, endowed with informational energy, whose actions will be to communicate in order to form aggregates constituting the material elements of the Universe, and which will duplicate themselves by producing other elements of activity or structure to realize the expansion of the Universe.

The generated elements of activity will contain fermions and will produce, by their informational communications, the quantum-type particles. So, from the initial generating element, the production of space begins in the form of empty cells and elements of activity that constantly produce others, themselves elements of activity or structure, thus continuing the generation of space and quantum elements. All the elements of activity, which are of the type of informational fields, will be able to communicate to generate aggregates, that is, to produce atoms, molecules and then all the physical elements of the Universe. Thus the Universe generates itself, without ceasing its process of organizational deployment, driven by the incentive of the organizational substrate, which is its incentive controller at the informational level, because it imposes the application of an organizational law allowing the generation of structured aggregations.

We therefore propose a model considering the whole universe as a fundamentally informational and self-organizing system, composed of what we will call informational fields making up the elements of activity in the generated space, the Universe being in constant expansion. The structure of any produced element of activity will thus be considered as both material and informational, and the organizing informational fields will exist at all scales of physical elements. The Universe will thus be an organizational emergence on a substrate of informational energy that produces incentive control, which will be the application of the organizational law that we will define precisely.

In the model describing the generation of the Universe and all its elements, including the living organisms on Earth, by explaining the communicative action of genes, we therefore pose a central hypothesis which specifies that the Universe has an informational substratum, that everything is based on the generation and action of informational fields which use a basic informational energy and that this activity is subject to a law of multiscale incentive organization. We will show that the formation of the Universe can be represented by an informational program which generates its elements and uses a considerable dynamic memory for the control of the operations and which will be its informational substratum.
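To make this generative idea slightly more concrete, here is a purely illustrative toy sketch, in Python, of what such an "informational program" could look like at the smallest possible scale. Every name, probability and rule in it (the generate function, the space_cells counter, the duplication rule, the pairing "organizational law") is an assumption invented for illustration; it is not the author's model, only a hint of the kind of self-generating process described above.

# Purely illustrative toy sketch of a generative "informational program";
# every name and rule here is an invented assumption, not the author's model.
import random

random.seed(0)

def generate(steps=5, budget=200):
    """Each step, every active element spends informational energy to
    produce either a neutral element (a space cell) or a new active one."""
    space_cells = 0          # neutral elements forming the substrate/space
    active = [2]             # active elements, each with an energy level
    for _ in range(steps):
        new_active = []
        for energy in active:
            if energy <= 0 or budget <= 0:
                continue
            budget -= 1
            if random.random() < 0.5:
                space_cells += 1                    # neutral: extends space
                new_active.append(energy - 1)
            else:
                new_active.extend([energy - 1, 2])  # active: duplication
        active = new_active
    # A crude "organizational law": active elements aggregate in pairs.
    aggregates = len(active) // 2
    return space_cells, len(active), aggregates

if __name__ == "__main__":
    cells, actives, aggs = generate()
    print(f"space cells: {cells}, active elements: {actives}, aggregates: {aggs}")

Running it simply shows the three quantities growing together from a single generating element, which is the only point of the sketch.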

We will develop the reasons for the creation and evolution of life on Earth. The model will be based on the notion of communication between all the generating and generated elements, enveloped by what we will call physical and informational membranes, which exchange information carrying incentive values that specify morphological modifications in these fields and thus in the generated organs. We will show that the organizational law can produce specific local bifurcations and that this is the case on the planet Earth, which produced all living beings. We will show how the control that allows the modification of the organizations is carried out by introducing the notion of a morphological pattern, an informational element that alters the communications, that allows the action of the genomes to be modified and thus evolution to occur, and that allows the psychic systems of human beings to have fundamental drives and tendencies. This will make it possible to show how and why terrestrial life was constituted, how and why sexuality was formed for the reproduction of organisms and how and why innumerable very sophisticated and intercommunicating species were formed, making up the Gaia system. We can then say that all life on Earth is based on an informational organization, and that it is desirable that we understand this type of information well in its domains, so that humans can position themselves in evolving social structures that are truly shared through the use of fundamental communications, moving towards ethical, cultural and peaceful societies.

We will therefore present what this generating information is, how the informational fields operate in communications and how the informational envelopes of quantum and molecular elements are formed. We will present the informational law that allows the substrate of informational energy to incite the realization of material aggregates, then of stars and planets. We will see that we can use the notion of agent, which has been extensively developed in computer science, by defining informational agents representing physical elements in the Universe. Then, in the second part, we will present how the generation of life on Earth was achieved, explaining why and how there was a continuous evolution of the formation of all species, and how reproduction permits the generation of new organisms forming groups and then new species.

The informational model presented is an attempt to unify many scientific fields analyzing all the elements that make up the Universe, starting from the quantum elements and going all the way to the living organisms on Earth.

Part 1. Informational Generation of the Universe

1 The Computable Model, Computer Science and Physical Concepts

We will first specify the foundations of computer science, considered as the science of the calculable, and then present the general physical theories on the state of the elements of the Universe.

Computer science, as a science, is based on the computable model of functions and compositions of functions, which is the Turing model. In its applied aspects, computer science today has considerable technological applications that pervade all types of production in the world and that have transformed the use of communications as well as the manipulation and processing of knowledge. We will present the fundamental model on which the calculable functions are based, and we will see that we must move towards another model of information manipulation in order to conceive, at the informational level, the generation of the space of the Universe and of the elements constituting it.

1.1. The Turing model

Mathematicians and computer scientists have been interested in the classes of functions that can be calculated with algorithms, which are automatic calculation processes understood as sequences of instructions defining the values that the variables of the activated functions take. An algorithm is therefore a sequence of instructions that calculates the value of various specific functions, and is defined by its various steps.

The mathematician Alan Turing, in 1936, before the invention of the first computers, posited the existence of an abstract machine capable of calculating, by an automatic process, all the values of any computable function defined on the integers. Such a machine consists of an infinite tape for storing data, which plays two roles – one part is used to read new data and the other to store them. These data are numbers, interpreted by a reading system and written by a writing system, which transmits the read values to the instructions of the machine; the machine uses them to produce the result, another sequence of numbers placed in the memorizing part of the tape (see Figure 1.1). There is thus a reading–writing head which makes it possible to specify the actions for processing the instructions. At each step of the calculation, the machine is in a state represented by a numerically indexed symbol, and it is given a precise number of these states during its construction. The program of the machine is a set of instructions processing the read values to produce a numerical result. Each instruction has two parts: a trigger that activates it and an action that processes the value read from the tape. The correct instruction is activated by the trigger; it reads and processes the digital data currently under the reading head. Its action is to use and rewrite this value in the same cell of the tape, so that it can be used by other machines, then to move the reading head by one cell or leave it in place, and possibly to change the state of the machine to a new one.

An elementary instruction of the Turing machine thus has the form of the following quadruplet (qi, Sj, Sk, qs) with:

– qi is the current state of the machine;

– Sj is the piece of data that is read by the reading head;

– Sk is the numeric character that will replace Sj;

– qs is the new current state of the machine after the replacement.

However, an instruction can also have one of the following two forms, where D and G are the actions of simply moving the read head to the right or to the left, respectively, without writing anything on the read–write tape:

(qi, Sj, D, qs)

(qi, Sj, G, qs)

This machine is totally automatic, and it is the most elementary possible with regard to the calculations to be carried out. It is the most important universal machine in the history of computation. All algorithms use sequences of instructions and are therefore sophisticated compositions of Turing machines; after its execution, an instruction can request that control pass to another instruction to be executed, through the famous “Go to”, and this repeatedly, until the “End” instruction marking the end of the execution of the algorithm’s instructions is reached.
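As an aside, the quadruple machine just described is easy to simulate. The following minimal sketch in Python (the state names, symbols and the example program are our own choices, not taken from the book) runs a program given as a set of quadruples (qi, Sj, action, qs), where the action is either a symbol to write in the read cell or a move D/G of the head.

# A minimal simulator for the quadruple-instruction machine described above
# (a sketch; state names, symbols and the example program are illustrative).

def run(program, tape, state="q0", pos=0, blank="B", max_steps=1000):
    """program: dict mapping (state, read_symbol) -> (action, next_state),
    where action is a symbol to write, or "D"/"G" to move right/left."""
    cells = dict(enumerate(tape))                 # sparse, "infinite" tape
    for _ in range(max_steps):
        symbol = cells.get(pos, blank)
        if (state, symbol) not in program:        # no instruction: halt
            break
        action, state = program[(state, symbol)]
        if action == "D":
            pos += 1                              # move the head right
        elif action == "G":
            pos -= 1                              # move the head left
        else:
            cells[pos] = action                   # overwrite the read cell
    left, right = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(left, right + 1))

# Example: a unary successor machine; it scans right over the 1s and
# appends one more 1, computing n + 1 on a tape holding n in unary.
successor = {
    ("q0", "1"): ("D", "q0"),   # (q0, 1, D, q0): skip over the 1s
    ("q0", "B"): ("1", "q1"),   # (q0, B, 1, q1): write a 1, then halt in q1
}
print(run(successor, "111"))    # -> "1111"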

The functions that Turing machines compute are called recursive functions, and they operate on sequences of natural numbers. They are obtained from basic functions, such as the identity, the projections and the successor function, using composition, primitive recursion and minimization, and they are executed in association. By associating machines, they define all the usual arithmetic functions, like power functions, products and sorting, and they are thus the basic model of what can be defined in mathematics to operate on sequences of integers.
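As a small illustration of this recursion scheme, the sketch below (an illustrative toy under our own conventions, not the book's formalism) builds addition and multiplication from the successor function and a primitive recursion operator.

# A small sketch of the recursion scheme just described (our own phrasing):
# addition and multiplication built only from zero, successor and recursion.

def zero(_n):             # basic function: constant zero
    return 0

def succ(n):              # basic function: successor
    return n + 1

def rec(base, step):
    """Primitive recursion: f(0, y) = base(y); f(n+1, y) = step(f(n, y), n, y)."""
    def f(n, y):
        acc = base(y)
        for i in range(n):
            acc = step(acc, i, y)
        return acc
    return f

add = rec(lambda y: y, lambda acc, _i, _y: succ(acc))   # x + y
mul = rec(zero,        lambda acc, _i, y: add(acc, y))  # x * y

print(add(3, 4), mul(3, 4))   # -> 7 12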

Figure 1.1. The Turing machine

All the calculations of the values of numerical functions are thus based on the design of Turing machines. We define a general function on a given problem and its effective calculation amounts to defining the set of Turing machines which will represent it.

However, we can proceed in a much more general way by using the differential equations of mathematics. In this framework, we first define functions and constants specifying the related elements of a physical system describing a natural phenomenon, and we place these functions in differential equations representing the spatio-temporal relations between the observable elements of the phenomenon. We then seek the solutions of these differential equations, giving the values of the functions and thus the solution of the problem of the relations and movements, and we compare them with the results of physical observation to validate the equations. However, this approach does not always provide a good solution in fundamental physics, which uses differential equations and partial differential equations representing the relations between the functions that characterize the studied problem, because the solution of these equations, when they are indeed calculable, is not always in agreement with the experimental measurements.

This approach is characterized as bottom-up (ascending), because we start from the observation of the phenomenon and try to represent it by variables and functions that describe its evolution. It is therefore assumed that a space is available and that the physical phenomena occurring in this space, with structured elements, must be precisely described in order to measure their evolution.

1.2. Computer science

At the base of everything that is done and calculated by computers, there is a very general and far-reaching scientific problem. To understand it well, we must approach the science of the calculable. Let us consider the problem of the mathematical calculation of integer functions. We are interested here in what can be done with the integers, that is, those going from zero to infinity and used, for example, to count objects. This set is denoted N, the set of natural numbers. We know, in mathematics, that we can calculate many things using integers, and we can define a great many functions from the set of integers to itself. Indeed, we can code everything that is symbolic, everything that is cognitive, with natural integers. Let us recall that any integer can be represented, if necessary, in base two, that is, with the two digits 0 and 1. We can generalize this and be interested in functions whose argument is formed by a sequence of n integers, the value of the function being another sequence of integers. In this case, we are interested in vectors of integers. We have thus defined all the functions with n integer arguments whose values are sequences of integers. We are in the domain of integer mathematics, where any formula takes the form of a sequence of signs. We can encode this sequence of signs by integers and thus represent any mathematical formula by a sequence of vectors of integers. Any mathematical proof is, in the same way, a sequence of signs that can also be coded by integers. We can therefore see that the study of integer functions is a fundamental problem of the representation of mathematical language. The question is the following: since the formulas and the mathematical proofs of the domain of integers are finite sequences of signs, can we represent the proofs by programs? The answer will be yes, and for a very large set of functions and proofs.

Computer science as a science of the calculable appears here. All these integer functions, all that mathematicians can define on these integers in the form of various equations, are equivalent to computer programs of abstract machines. It has been shown that for any function from a sequence of integers to another sequence of integers to make mathematical sense, to be coherent, there must exist some abstract machine, an “abstract computer”, with instructions that allow it to be calculated. The existence of a mathematical function on integers has meaning if the computable allows it to have one, and vice versa. This very powerful theoretical result is the famous thesis of Alonzo Church, dating from 1936. It amounts to saying that for a function on integers to have a mathematical meaning, to be coherent, we need only define the program of a theoretical machine which can calculate it. If there is no such program, the function does not exist and is not logically admissible. Today, the fields of application of computer science are considerable, making it possible to represent practically all structurable knowledge in all fields and to direct quantities of electronic devices in real time.

In the usual approach, computer science deals with the processing of information related to multiple calculations, including those established from a great number of functions, with systems using as a basic element what are called state machines. A state machine is an abstract machine passing through strictly determined states, choosing them one after the other from a set defined as available, and in which precise elementary instructions are executed. We use the automaton by starting from an initial state and reaching a final state, which is the expected result of the successive calculations. This notion of state automaton is fully used when dealing with problems that can be decomposed into many sub-problems, all very well defined, the whole forming a perfectly structured set in which what is to be computed at each step is well specified and determines the continuation of what is to be computed. Such problems belong to the class of what are called well-decomposable problems. At the electronic level, computer science deals with binary information encoding elementary instructions, which form programs composed of sequences of calculations carried out by these instructions. The length of programs and the number of instructions in a program can be considerable, and several programs can easily be executed at the same time and communicate information to each other at the right time, allowing well-synchronized calculations. But, in a classical way, any program remains a sequence of calculations which, step by step, calculation step after calculation step, passes through a finite number of predefined states until its final state.
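A minimal state machine of the kind described here can be written in a few lines. In the following sketch (the states, alphabet and transition table are illustrative assumptions), the automaton starts in an initial state, consumes one input symbol per well-defined step and ends in a final state that is the result of the run.

# A minimal state machine of the kind described above (states and the
# transition table are illustrative). It decides whether a binary string
# contains an even number of 1s.

TRANSITIONS = {            # (current state, input symbol) -> next state
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def run_automaton(inputs, start="even", accepting=("even",)):
    state = start
    for symbol in inputs:                      # one well-defined step per symbol
        state = TRANSITIONS[(state, symbol)]
    return state in accepting                  # final state = result of the run

print(run_automaton("10110"))   # three 1s -> False
print(run_automaton("1001"))    # two 1s  -> True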

This locally mechanistic vision of the computing process has evolved a great deal. Today, we know how to make very many programs communicate, based on state machines, which run in parallel and, above all, which modify their own machines during their operation by communicating to synchronize themselves, even though the basis of each program is still the state machine. We have therefore shifted from the framework of the regular automation of programs to the notion of autonomy. We know how to build programs made of many sub-programs which have their own behavior, which can communicate, synchronize and modify themselves, and which can especially generate new programs, breaking the order given by the initially conceived state machines. These systems are called adaptive multiprocess systems, and they are the ones that run on current computer clusters. Indeed, this is the case for any networked operating system that manages the simultaneously active resources and applications of a desktop computer, which is so common today. The notion of multiprocessing is important, and it has been a basic notion for the conception of artificial consciousness (Cardon 2018), because it places the consideration of programs at the level of autonomous software entities that are active, carry out precise local actions and, above all, are highly communicative with each other in order to form dynamic structures that are constantly changing.
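The following sketch gives a very rough flavor of such communicating processes (a toy illustration only, using Python's standard multiprocessing module; unlike the adaptive systems described above, it does not modify or generate new programs): several processes run in parallel, exchange messages through queues and synchronize on a stop signal.

# A small sketch of processes that run in parallel and communicate
# (illustrative only; the worker behavior is an invented placeholder).
from multiprocessing import Process, Queue

def worker(name, inbox, outbox):
    """Each process reads messages, reacts locally and reports back."""
    while True:
        msg = inbox.get()
        if msg is None:               # synchronization: stop signal
            break
        outbox.put((name, msg * 2))   # a precise local action

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    procs = [Process(target=worker, args=(f"p{i}", inbox, outbox)) for i in range(3)]
    for p in procs:
        p.start()
    for n in range(6):                # messages consumed by whichever process is free
        inbox.put(n)
    for _ in procs:
        inbox.put(None)               # tell every process to stop
    for _ in range(6):
        print(outbox.get())           # collect the results as they arrive
    for p in procs:
        p.join()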

We will use the following two different notions of the term process:

– The notion of a functional process, which is seen as a vast movement of components exchanging information and energy, and producing the state of a certain system, as is the case of brains producing representations. We can thus speak of the process of emergence of a form of thought about something focused, from a trigger generating intention.

– The much more precise notion of computer process, which is a small program wrapped in utilities and processed in a computer system that handles quantities of them simultaneously. We will then speak of swarms of processes to designate very numerous computer processes running in competition, this notion of swarm of processes being then close to the other notion of functional process.

Generally speaking, there are two categories of programs in computer science:

– The category of programs whose purpose is to calculate a given function that is precise and well defined in advance, strictly developing the calculations of all the necessary steps, which amounts to the execution of a structured set of state machines.

– The category of autonomous programs composed of multiple swarms of processes that will run in parallel, that will capture internal and external information, that will confront each other at certain times to exchange information, that will modify themselves, generate others and thus create new processes, to finally produce a global result that will be the most adapted to the situation that has evolved from its initial phase.

The second category is, for example, the state of all Internet users’ programs over a given period of time, when these programs themselves constantly consult and modify highly interactive Web sites. There are no permanent elements in this case and the problem cannot be based on an a priori decomposition into permanent elements.

The difference between the two categories of problems proposed lies in a fundamental point. There are programs in both cases, but in the second case, they will have to be constantly modified, rewritten and evolve, whereas in the first case, the software is used as it was conceived, with its initial capacities well-defined and non-variable during use. The second case must make one think of a certain form of autonomous, very abstract, artificial behavior.

1.3. Formation of the Universe in physical sciences

In mathematics there exists the domain of real numbers, which is a complete Archimedean field, whose use allows the definition of sophisticated and very powerful equations: the differential equations and the partial differential equations. This domain of real numbers, used with differential equations, has allowed physicists to define very fine theories of the evolution of matter in the Universe. To define a differential equation, we first define the variables that characterize the observed system, and then we define the functions and their relations that should allow us to predict the evolution of the values of the variables, taking into account the values of certain constants. All this is put into a differential equation which must determine the functions and thus give the characteristics of the evolution of a system starting from an initial state. This will be used in a very important way.
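As a trivial illustration of this procedure (the equation, the constant H and all the values are assumptions chosen only for the example), the sketch below defines one variable, one constant and the differential equation da/dt = H * a, then computes the evolution of the variable from its initial state by a simple Euler integration.

# An illustrative sketch of the procedure just described: choose a variable,
# a constant and a differential equation, then compute its evolution from an
# initial state (the equation da/dt = H * a and the values are assumptions).

H = 0.07          # constant of the model (an arbitrary expansion rate)
a = 1.0           # initial value of the observed variable at t = 0
dt = 0.01         # integration step

t = 0.0
while t < 10.0:
    a += H * a * dt       # explicit Euler step for da/dt = H * a
    t += dt

print(f"predicted value after t = 10: {a:.3f}")   # close to exp(0.7), about 2.01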

The physical sciences have been working for a very long time on a generation model for the Universe. The main model of physical theory describing the creation of the Universe is the Big Bang model, proposed by Alexandre Friedmann in 1922 and by Edwin Hubble in 1929, then very widely developed thereafter. This model posits the existence of a primordial nucleosynthesis, a very singular set of quantum elements at a considerable temperature that inhibits the propagation of the photons which continuously interact with the quantum particles. After about a hundred seconds, the photons lose energy and the protons and neutrons are able to associate in a durable way to generate the first complex nuclei of the elements. From this initial set, the Universe developed by a considerable dilation while its temperature remained very high. The initial temperature of the Universe fell to reach 3,000 degrees after 380,000 years, producing light by the emission of quantities of photons. This light, the cosmological background radiation (Peebles 1980), is observable today, when the temperature of the Universe is only 2.7 degrees above absolute zero. The light formed by the photons therefore propagated out of this initial set, constituting the cosmological background; it propagated in a space whose formation and origin the theory does not specify (Klein 2010). The classical model posits that the generic elements of the galaxies were created by this very particular initial set and that the Universe was constituted by continuous dilation, its expansion.

The Big Bang theory thus posits that the Universe originated from a singular point 13.7 billion years ago, but there is no precise notion of the formation of space or time, only a notion of considerable energy at this initial moment, in the form of photon fields and the activity of quantum elements. Physically observable matter appeared from this initial state at a very high temperature, with neutrons, protons and electrons in considerable numbers, the protons being assumed to be seven times more numerous than the neutrons. Nuclei of hydrogen and helium were then formed by combinations. This set of elements did not cease to expand very strongly in space, thus lowering its considerable temperature and allowing the activation of the physical processes that formed the nuclei of many atoms.

All this is defined by differential equations with constants, variables and functions, and it is these equations that give us all the characteristics of this expanding universe. According to these equations formulating the notion of energy, the temperature of several billion kelvin drops in a few seconds to one billion kelvin. Then, protons and neutrons are able to associate to form heavy hydrogen nuclei while releasing photons. Some of these nuclei are able to attract an electron to form a hydrogen atom. Some hydrogen atoms are each able to attract a proton, a neutron and an electron from their neighborhood to form helium nuclei, the helium atom being four times heavier than the hydrogen atom. It is deduced in this theory that 99% of the mass of the Universe is formed during this process, being thus constituted mainly of hydrogen and helium.

It is indeed well observed that hydrogen is very abundant in the Universe, and this leads to the following result: a quarter of the mass of the Universe is formed of helium and three quarters of hydrogen atoms.
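This quarter can be recovered from the 7:1 proton-to-neutron ratio mentioned above by a short back-of-the-envelope calculation (a standard bookkeeping argument, sketched here under the simplifying assumption that essentially all neutrons end up bound in helium-4 nuclei of two protons and two neutrons):

# A short check of that bookkeeping, assuming 7 protons per neutron and that
# essentially all neutrons end up in helium-4 nuclei (2 protons + 2 neutrons).

protons, neutrons = 14, 2            # 7:1 ratio, smallest whole numbers
helium_nuclei = neutrons // 2        # each He-4 uses 2 neutrons and 2 protons
leftover_hydrogen = protons - 2 * helium_nuclei

helium_mass = 4 * helium_nuclei      # masses counted in nucleon units
hydrogen_mass = 1 * leftover_hydrogen
fraction = helium_mass / (helium_mass + hydrogen_mass)

print(fraction)                      # -> 0.25, i.e. about a quarter helium by mass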

The scientific theory states that matter represents 5% of the Universe compared to the empty space, which is considerable, but it also states that there is dark matter, about six times more abundant than the visible matter, which maintains the conformation of galaxies and of their structures, which otherwise would not cease to dilate and therefore could not exist. However, this dark matter has not been physically found or observed. The theory posits that the Universe contains about 25% dark matter and 70% dark energy, which are still undetectable with current observational equipment. In our model, we will pose a hypothesis specifying precisely what dark matter is. It is posited that the variations in the density of matter, subjected to gravitational force, allow structures made up of stars and planets to form in the galaxies.

This description, explaining how the structures of material elements of all sizes are generated, gives a very important place to chance. There is the chance of communications between protons and neutrons forming the first atoms, the chance of collisions between these atoms generating photons and constituting other atoms, then the chance of the movements of atoms aggregating and forming small masses. There is finally the chance of the meetings of these small masses forming bigger masses, the very big masses aggregating the smaller ones by the force of gravity, attracting the small ones to themselves and thus forming the stars and the planets.

In this general theory, everything is based on the atomic elements, which can combine according to the chance formation of aggregates of atoms that form structured elements having permanence, like the stars and planets, positioned in the available empty space of the Universe. But there are theoretical contradictions between quantum physics and the theory of gravity, as these two approaches to reality do not manage to agree: they model domains of the Universe at totally different scales, defined by equations that are not compatible.

The theory of the generation of our universe states that it is expanding. The theory is based on the notions of energy and expansive dilation, but it gives no explanation of what the space in which the particles are located is, or of how this space is formed. The great fundamental question is then the following:

What is space, where does it come from, how and why is it generated by allowing physical elements to take place in it?

We are then going to specify that all that exists in the Universe is in the form of particular informational fields, deployed in a space that is the dynamic reification of the activity of these informational fields. These fields unceasingly propagate innumerable communications in order to realize, by confrontation, the emergence of aggregations constituting elements that have structure, and they continue to realize the changes of structure of the elements so as to generate the complexity of the organization. We will thus pose that the Universe is an autonomous system based on a dynamic informational space, treating in a continuous way the specific informational communications between its elements to make them exist and continue to exist by becoming complex. Space will thus be the major constituent of the Universe, which will lead us to conceive an informational system different from the functional information of the Turing machine.