Evolutionary Algorithms for Food Science and Technology

Evelyne Lutton

Description

Researchers and practitioners in food science and technology routinely face several challenges, related to sparseness and heterogeneity of data, as well as to the uncertainty in the measurements and the introduction of expert knowledge in the models. Evolutionary algorithms (EAs), stochastic optimization techniques loosely inspired by natural selection, can be effectively used to tackle these issues. In this book, we present a selection of case studies where EAs are adopted in real-world food applications, ranging from model learning to sensitivity analysis.
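To give a concrete feel for the kind of algorithm the book builds on, the following is a minimal sketch, in Python, of a generic evolutionary loop (random initialization, parent selection, crossover, mutation, survivor selection). It is purely illustrative: the function names, parameter values and operator choices are assumptions, and it is not one of the specific algorithms (NSGA-II, CMA-ES, genetic programming, cooperative co-evolution) used in the book's case studies.

```python
# Illustrative sketch of a generic evolutionary algorithm for a real-valued
# minimization problem. All names and settings are hypothetical examples.
import random

def evolve(fitness, dim=5, pop_size=20, generations=100,
           mutation_sigma=0.1, bounds=(0.0, 1.0)):
    lo, hi = bounds
    # Random initial population of real-valued vectors
    population = [[random.uniform(lo, hi) for _ in range(dim)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            # Binary tournament selection of two parents (lower fitness wins)
            p1 = min(random.sample(population, 2), key=fitness)
            p2 = min(random.sample(population, 2), key=fitness)
            # Uniform crossover followed by Gaussian mutation on each gene
            child = [random.choice(genes) + random.gauss(0, mutation_sigma)
                     for genes in zip(p1, p2)]
            child = [min(hi, max(lo, g)) for g in child]  # keep within bounds
            offspring.append(child)
        # (mu + lambda) survivor selection: keep the best pop_size individuals
        population = sorted(population + offspring, key=fitness)[:pop_size]
    return min(population, key=fitness)

# Example usage: minimize the sphere function
best = evolve(lambda x: sum(g * g for g in x))
```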




Table of Contents

Cover

Title

Copyright

Acknowledgments

Preface

1 Introduction

1.1. Evolutionary computation in food science and technology

1.2. A panorama of the current use of evolutionary algorithms in the domain

1.3. The purpose of this book

2 A Brief Introduction to Evolutionary Algorithms

2.1. Artificial evolution: Darwin’s theory in a computer

2.2. The source of inspiration: evolutionism and Darwin’s theory

2.3. Darwin in a computer

2.4. The genetic engine

2.5. Theoretical issues

2.6. Beyond optimization

3 Model Analysis and Visualization

3.1. Introduction

3.2. Results and discussion

3.3. Conclusions

3.4. Acknowledgments

4 Interactive Model Learning

4.1. Introduction

4.2. Background

4.3. Proposed approach

4.4. Experimental setup

4.5. Analysis and perspectives

4.6. Conclusion

5 Modeling Human Expertise Using Genetic Programming

5.1. Cooperative co-evolution

5.2. Modeling agrifood industrial processes

5.3. Phase estimation using GP

5.4. Bayesian network structure learning using CCEAs

5.5. Conclusion

Conclusion

Bibliography

Index

End User License Agreement

List of Tables

3 Model Analysis and Visualization

Table 3.1.

Glossary that will be used in this chapter

Table 3.2.

Milk gel data used for training (Database 1, L1 to L7) and validation (Database 2, V1 to V4)

Table 3.3.

Intervals of validity for each parameter in the optimization problem considered. The parameters’ values are obtained from literature and expertise on the subject

Table 3.4.

Parameters of the two EAs used during the experiments. μ is the size of the population, and λ is the size of the offspring generated at each iteration. While NSGA-II is terminated after 100 iterations (or generations), CMA-ES stops when a stagnation condition is reached (when the difference in fitness value between all solutions in the population is under a user-defined threshold). For CMA-ES, initial points in the middle of the search space are specified for each dimension, and the initial standard deviation used to generate solutions is set; the algorithm will self-adapt the standard deviation during the run. For NSGA-II, P(operator) represents the probability of applying a specific genetic operator when a new solution is produced. η_operator is the distribution index of the genetic operator, regulating how much the child solutions will differ from the parents

Table 3.5.

Total effects of the parameters on the variations in the outputs of the model. Meaning of symbols: 0, no or very low impact (S_Ti ≤ 0.1); +, low impact (0.1 < S_Ti ≤ 0.3); ++, average impact (0.3 < S_Ti ≤ 0.6); +++, high impact (0.6 < S_Ti ≤ 1); ++++, very high impact (S_Ti > 1.0)

Table 3.6.

Average errors for four- and five-parameter solutions for both training and validation sets. The relative error for each data point is computed as |p_i − e_i| / (e_max − e_min), where p_i is the value predicted by the model, e_i is the experimental value, and e_max and e_min are the maximum and minimum experimental values, respectively

5 Modeling Human Expertise Using Genetic Programming

Table 5.1.

Probabilities of point mutation operators

Table 5.2.

Parameters of the GP methods

Table 5.3.

Parameters of the three strategies

Table 5.4.

Experimental results of the three strategies

Table 5.5.

P-values

Table 5.6.

Parameters of IMPEA. Values are chosen within their typical range depending on the size of the network and the desired computation time

Table 5.7.

Averaged results of the PC algorithm after 100 runs

Table 5.8.

Averaged results of the P-IMPEA algorithm after 100 runs

Table 5.9.

Number of edges detected for all algorithms



Metaheuristics Set

coordinated by Nicolas Monmarché and Patrick Siarry

Volume 7

Evolutionary Algorithms for Food Science and Technology

Evelyne Lutton

Nathalie Perrot

Alberto Tonda

First published 2016 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK

www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.wiley.com

© ISTE Ltd 2016

The rights of Evelyne Lutton, Nathalie Perrot and Alberto Tonda to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2016950824

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-84821-813-0

Acknowledgments

We would like to express our gratitude to all those who provided support, read, wrote, offered comments, allowed us to quote their remarks and assisted us in the editing and proofreading of this book. In particular:

– our co-workers, who contributed to some chapters of this book: Sébastien Gaucel, Julie Foucquier, Alain Riaublanc (Chapter 3); André Spritzer (Chapter 4); Olivier Barrière, Cédric Baudrit, Bruno Pinaud, Mariette Sicard, and Pierre-Henri Wuillemin (Chapter 5);

– Nathalie Godefroid for our exciting discussions and her valuable help in writing the preface of this book;

– Corinne Dale for her kind, patient and erudite proofreading;

– our editors Nicolas Monmarché and Patrick Siarry for their friendly insistence that helped us to finish this book.

Preface

This book, which focuses on the domain of food science, is an excellent opportunity to consider various issues related to optimization. Optimality, in any domain, is an open question, raising various complex issues. Questioning the purpose of optimization and its ability to answer important real-life questions is, in essence, an intellectual exercise: are we able to address the appropriate issues with the help of modern computational tools? Do we believe too much in computation? Are we able to address the right issues with the right tools?

These questions have been considered with the help of a philosopher, Nathalie Godefroid1, and this preface is the result of our conversations on this vast subject.

The sources

The idea of optimization has its roots in what has been called “modernity” since the 16th and 17th Centuries, based on a fundamental change in the perception of man within nature. During Antiquity and the Middle Ages, nature was considered a “cosmos”, that is, a great whole, symbolic, sacred, and a respected hierarchy (each creature had its own position). “Modernity”, however, initiated a neutral standpoint, from which symbolism is progressively removed. The universe is infinite, without purpose, and nature is just a set of physical laws that can be understood and controlled, and therefore submitted to human needs and desires. Descartes’ project is to become “owner and master” of nature (Discours de la méthode, [DES 37]): managing and predicting natural phenomena becomes an attractive and reachable challenge. This control of nature is based on mathematics: “nature is a book written in mathematical language” (Galilée, [MAR 02]); the aim is to depart from mystery and contingency using rationality and mathematics.

This humanist project (knowledge and progress must benefit human beings, their freedom and their happiness) is, however, the source of other troubles, as highlighted by Heidegger. Technique progressively becomes unlimited, mandatory and placed above human beings, their projects and their activities. Everything then becomes a product, a consumer good, including humans: everything behaves in a way that is “computable”. Heidegger shows that technique is no longer an instrument at the service of humans, but an end in itself. The search for performance and optimization is a technical ideal. Rationality in technique relies on a value system (an evaluation function) connected with economic interests. The technical means thus dictate a value to the user: efficacy. Efficacy has gained supremacy everywhere: economy, pedagogy, sport, research, social organization, politics, sex, everyday life, etc. “The technical phenomenon is the concern of the immense majority of men today, that is to search in everything for the most efficient method” [ELL 77].

Technique, power and language

With technique, men control nature using a complex and evolving set of means. “Not merely its application, but technique itself is domination – over nature and over men: methodical, clairvoyant domination. The aims and interests of domination are not additional or dictated to technique from above – they enter into the construction of the technical apparatus itself. For example, technique is a social and historical project: into it is projected what a society and its ruling interests decide to make of man and things. The aims of domination are substantive, and belong to the form of technical reason itself” [MAR 64].

Another hazard caused by technique, a major one according to Heidegger, is the modification of language: technique triggers the ideal of a non-ambiguous communication language. This ideal language, dedicated to information encoding, is non-hermeneutic2, in contrast to natural language, which predates and is external to technique, like poetic language. This impact of technique on language, with its purely utilitarian approach, is a threat to human essence, as it discards philosophical and religious thought, meditation and contemplation, which are typically disinterested, non-measurable activities.

According to the sociologist Philippe Breton (L’Utopie de la communication, [BRE 92]), a true social utopia has been built since the Second World War around cybernetics and the work of Norbert Wiener (an American mathematician and philosopher, deemed to be the originator of cybernetics). Considering that everything is information and information sharing, living organisms and machines are placed on the same level: the brain is a computer, thinking is computing… Even if this viewpoint forms the groundwork for artificial intelligence, it is a posttraumatic utopia that emerged after the Second World War, with the intention of discarding such horrors forever. Its main values are transparency, consensus and information circulation, as opposed to entropy and chaos. Machines would be more efficient and rational than human beings in making decisions, particularly in politics.

Technical developments thus answer the desire for full control in an uncertain and complex world. But complexity is the essence of life and of the human brain’s creativity. Randomness and unpredictability are major characteristics of many systems, including living organisms, populations and ecosystems. The creativity of life may remain outside the scope of mathematical modeling. In Ancient Greek philosophy, “opportunity” is a recurrent topic: man is the one who knows, or should know, how to exploit opportunity in a world where nothing is perfectly predictable. Intuition and improvisation are “human” capabilities (in particular in the musical domain, as highlighted by Jankélévitch). Even if computer science and artificial intelligence have made huge progress, the question of the respective roles of man and machine remains.

The human factor in computer science

The human body is synonymous with imperfection (Plato); flesh is a source of corruption: emotions, illness, death. Medical and technical progress aim at repairing, improving and augmenting the human body. But in the scientific domain, the body is often considered a neutral material, a source of information, of unpredictable data or signals, emptied of its symbolic meaning. Embodiment, the humility of the human condition, and finally the fear of death are at the source of the modern fantasy of abolishing the body. This idea also comes from modernity, from Descartes and the first anatomists: humanity is thought; the body, a hindrance. The sociologist David Le Breton ([LE 99]) draws a parallel with the fact that we now use our body less and less in everyday life (cars, lifts, sitting position for working, the Internet, the virtual world, etc.): the body has atrophied. This restriction of physical and sensorial activities changes our perception of the world, limits our impact on reality and weakens our actual knowledge of things.

Norbert Wiener was one of the first to blur the line between machine and life. The brain is an intelligent machine that can be mimicked with a computer. The body is inessential, and we may dream of downloading a spirit into a computer, as in some science fiction films…

But according to Hubert Dreyfus, artificial intelligence rests on some erroneous metaphysics3 [DRE 79]:

– a biological assumption: “The brain processes information in discrete operations by way of some biological equivalent of on/off switches”;

– a psychological assumption: “The mind can be viewed as a device operating on bits of information according to formal rules”;

– an epistemological assumption: “All knowledge can be formalized”;

– an ontological assumption: “The world consists of independent facts that can be represented by independent symbols”.

According to this critique, the current understanding of the human mind was based on engineering principles and problem-solving techniques related to management science. Modern artificial intelligence research is now more open to issues that have become important in modern European philosophy and psychology, such as situatedness, embodiment, perception and gestalt.

For the moment, we can still claim that there are fundamental differences between man and machine: the computer has no “marginal consciousness”, the awareness that makes man sensitive to the many facets of his environment. The computer is not able to use a context and bring ambiguous words or situations into perspective, thus making them intelligible; it does not distinguish between what is essential and what is auxiliary using intuition. The computer is designed for precise tasks; it is not as versatile and adaptive as the human brain. And finally, the computer has no body (except perhaps if we consider robots).

The role of the body in the implementation of intelligence is thus a major question. Humans do not face a world made up of parameters to be recorded and processed. Understanding relies on a symbolic system: language and body. Language is not a code made up of unambiguous signs. It is based on culture and history (except perhaps for a common core that can be found in all languages; see Noam Chomsky’s theories). Words always impart more than their definition; they have an evocative power (at times magical and religious). The body is a measure of the world: through his/her body, an individual interprets his/her environment and acts on it according to influences related to his/her habits and education.

Perception is at the origin of meaning and of the creation of value: a symbolic comprehension of the world, a deciphering that creates meaning. The viewpoint of a human being is full of feelings and emotions. Intelligence is always embedded in a state that cannot be considered independently of a singular and carnal existence. Human thought is emotional. A computer is a language tool, but not a language subject. It serves a definite purpose.