Description

Optimal control is a branch of applied mathematics that engineers need in order to optimize the operation of systems and production processes. Its application to concrete examples is often considered to be difficult because it requires a large investment to master its subtleties.

The purpose of Optimal Control in Bioprocesses is to provide a pedagogical perspective on the foundations of the theory and to support the reader in its application, first by using academic examples and then by using concrete examples in biotechnology. The book is thus divided into two parts, the first of which outlines the essential definitions and concepts necessary for understanding Pontryagin’s maximum principle – or PMP – while the second presents applications specific to the world of bioprocesses.

This book is unique in that it focuses on the arguments and geometric interpretations of the trajectories provided by the application of PMP.

Table of Contents

Cover

Introduction

PART 1: Learning to use Pontryagin’s Maximum Principle

1 The Classical Calculus of Variations

1.1. Introduction: notations

1.2. Minimizing a function

1.3. Minimization of a functional: Euler–Lagrange equations

1.4. Hamilton’s equations

1.5. Historical and bibliographic observations

2 Optimal Control

2.1. The vocabulary of optimal control theory

2.2. Statement of Pontryagin’s maximum principle

2.3. PMP without terminal constraint

3 Applications

3.1. Academic examples (to facilitate understanding)

3.2. Regular problems

3.3. Non-regular problems and singular arcs

3.4. Synthesis of the optimal control, discontinuity of the value function, singular arcs and feedback

3.5. Historical and bibliographic observations

PART 2: Applications in Process Engineering

4 Optimal Filling of a Batch Reactor

4.1. Reducing the problem

4.2. Comparison with Bang–Bang policies

4.3. Optimal synthesis in the case of Monod

4.4. Optimal synthesis in the case of Haldane

4.5. Historical and bibliographic observations

5 Optimizing Biogas Production

5.1. The problem

5.2. Optimal solution in a well-dimensioned case

5.3. The Hamiltonian system

5.4. Optimal solutions in the underdimensioned case

5.5. Optimal solutions in the overdimensioned case

5.6. Inhibition by the substrate

5.7. Singular arcs

5.8. Historical and bibliographic observations

6 Optimization of a Membrane Bioreactor (MBR)

6.1. Overview of the problem

6.2. The model proposed by Benyahia et al.

6.3. The model proposed by Cogan and Chellam

6.4. Historical and bibliographic observations

Appendices

Appendix 1: Notations and Terminology

A1.1. Notations

A1.2. Definitions

A1.3. Concepts from topology

Appendix 2: Differential Equations and Vector Fields

A2.1. Existence and uniqueness of solutions

A2.2. Variational equations

A2.3. Solutions for differential equations with time-dependent discontinuities

A2.4. Vector fields

A2.5. Historical and bibliographic observations

Appendix 3: Outline of the PMP Demonstration

A3.1. Principle of demonstration

A3.2. Small elementary variation and the cone Γ

A3.3. The interior of the cone Γ does not intersect the half-line Π

A3.4. Introduction of the adjoint vector at time t

A3.5. Historical and bibliographic comments

Appendix 4: Demonstration of PMP without a Terminal Target

A4.1. Demonstration of theorem 2.5

A4.2. Demonstration of the corollary in section 2.1

Appendix 5: Problems that are Linear in the Control

A5.1. Definition of the problem

A5.2. Phase portrait of the Hamilton equations

A5.3. The relationship with singular arcs

Appendix 6: Calculating Singular Arcs

A6.1. The general case for dimension n

A6.2. The case of one-dimensional systems

A6.3. The case of two-dimensional systems

References

Index

End User License Agreement


Chemostat and Bioprocesses Set

coordinated by Claude Lobry

Volume 3

Optimal Control in Bioprocesses

Pontryagin’s Maximum Principle in Practice

Jérôme Harmand

Claude Lobry

Alain Rapaport

Tewfik Sari

First published 2019 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd

27–37 St George’s Road

London SW19 4EU

UK

www.iste.co.uk

John Wiley & Sons, Inc.

111 River Street

Hoboken, NJ 07030

USA

www.wiley.com

© ISTE Ltd 2019

The rights of Jérôme Harmand, Claude Lobry, Alain Rapaport and Tewfik Sari to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2018966075

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-78630-045-4

Introduction

Applying optimal control theory to concrete examples is often considered a difficult task, as mastering the nuances of this theory requires considerable investment. In the literature in this field, there are many books that discuss optimal control theory (e.g. [LEE 67, VIN 00]) illustrated using examples (e.g. [BRY 75] or [TRÉ 05]), and books dedicated to families of applied problems (e.g. [LIM 13]). The objective of the current book is to present a pedagogical view of the fundamental tenets of this theory, a little in the style of Liberzon (see [LIB 12]), and to guide the reader in the application of the theory, first using academic examples (the swing problem and the driver in a hurry, also known as the “double integrator” or the “moon landing” problem), and then moving on to concrete examples in biotechnology, which form the central part of the book. Special emphasis is placed on the geometric arguments and interpretations of the trajectories given by Pontryagin’s maximum principle (PMP).

While this book studies optimal control, it is not, strictly speaking, a book on optimal control. It is, first and foremost, an introduction – and only an introduction – to PMP, which is one of the tools used in optimal control theory. Optimal control aims to determine a control signal (or action signal) that minimizes (or maximizes) an integral performance criterion involving the state of a dynamic system (with constraints if required), either over a fixed time period or with a free terminal time. In many situations, applying PMP allows us to comprehensively characterize the properties of this control, understand all the nuances of its “synthesis” and even obtain the value of the control to be applied at any instant as a function of the system state.
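To fix ideas, the type of problem considered can be sketched in generic notation (sign conventions and the handling of terminal costs or constraints vary between references; the precise statements are those given in Chapter 2):

\[
\min_{u(\cdot)\in\,\mathcal{U}} \ \int_{0}^{T} \ell\big(x(t),u(t)\big)\,dt
\qquad \text{subject to} \qquad
\dot{x}(t) = f\big(x(t),u(t)\big), \quad x(0)=x_0,
\]

where the terminal time \(T\) may be fixed or free, and constraints on \(x(T)\) may be imposed. PMP introduces an adjoint vector \(p(t)\) and a Hamiltonian, here written in the normal case as \(H(x,p,u)=p\cdot f(x,u)-\ell(x,u)\); along an optimal trajectory, \(\dot{p}=-\partial H/\partial x\) and \(u(t)\) maximizes \(H\) at almost every instant.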

At a time when a basic computer makes it possible to envisage the use of so-called direct1 optimization techniques for a large number of problems encountered in engineering, it is reasonable to ask what is to be gained by turning to a method that yields analytical optimal solutions. On the one hand, to do so would be to forget that using a numerical optimization procedure requires taking into account the specific initial conditions of the dynamic system under consideration, which limits how generic the computed control can be. On the other hand, when an analytical optimal control is available, it makes it possible to compute the minimal (or maximal) value of the optimization criterion, which is not possible with a numerical approach (except in some very particular cases). This, independently of the practical constraints that may lead a user to apply a control deviating, however slightly, from the theoretically optimal one, gives us a means of quantifying the distance between the theoretically optimal trajectories and those observed in reality in experiments carried out on the real system.
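As an illustration of what is meant here by a direct approach, the following sketch (not taken from the book; the system, cost and numerical values are arbitrary illustrative choices) discretizes the control on a time grid and hands the resulting finite-dimensional problem to a generic numerical optimizer:

```python
# Illustrative "direct" approach on a toy problem: discretize the control,
# integrate the dynamics with Euler steps, and optimize numerically.
import numpy as np
from scipy.optimize import minimize

T, N = 2.0, 40            # horizon and number of control intervals
dt = T / N
x0 = 1.0                  # a specific initial condition: the answer depends on it

def cost(u):
    """Integral of (x^2 + u^2) along a trajectory of dx/dt = -x + u."""
    x, J = x0, 0.0
    for uk in u:
        J += (x**2 + uk**2) * dt
        x += (-x + uk) * dt
    return J

res = minimize(cost, np.zeros(N), method="L-BFGS-B",
               bounds=[(-1.0, 1.0)] * N)   # box constraints on the control
print("approximate optimal cost:", res.fun)
```

The computed sequence res.x is only meaningful for this particular x0 and this discretization, whereas a PMP analysis characterizes the structure of the optimal control, and the optimal value of the criterion, for any initial condition.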

Over the past few years, the control of bioprocesses has seen startling growth; this is notably due to the extraordinary development in the world of sensors. Until quite recently, only physical quantities (such as temperature, pressure or flow rates) could be precisely measured. Today, however, it is possible to take online measurements of system variables that can be called functional, such as the substrate concentration or the concentration of bacteria in the reaction medium. Consequently, many technologists will state that control in biological systems – which often consists of keeping certain values constant – no longer poses a major problem. However, in our opinion, this view tends to forget that control theory not only seeks to stabilize a system and reject disturbances, but also tries to determine the set-point trajectory. In other words, it attempts to establish around what state the system must be operated, both in terms of optimality and so that it can be effectively controlled, keeping the values of the variables of interest as close as possible to this set-point.

The title of the first part of the book, “Learning to use Pontryagin’s Maximum Principle”, indicates that it offers an approach based on learning the procedures for solving the equations involved (rather than on a theoretical discussion of fundamental results), procedures that are usually rather difficult to access in the existing literature. In Chapter 1, we revisit concepts as basic as the minimization of a function, which, by extension, leads to the minimization of a functional through the calculus of variations. After presenting the limitations, which relate specifically to the function classes to which the control must belong, Chapter 2 presents the terminology used in optimal control and states PMP. Chapter 3 presents several academic applications and problems that highlight some nuances of PMP, especially the importance that must be accorded to questions of regularity of the control.
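As a reminder of the kind of classical result revisited in Chapter 1, the calculus of variations characterizes a minimizer of a functional \(J(x)=\int_{t_0}^{t_1} L\big(t,x(t),\dot{x}(t)\big)\,dt\) (under standard smoothness assumptions; the notation here is generic rather than that of the book) through the Euler–Lagrange equation:

\[
\frac{d}{dt}\,\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 .
\]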

The second part of the book, “Applications in Process Engineering”, comprises three distinct chapters that focus on problems specific to process engineering and biotechnology. In Chapter 4, we describe a problem of optimal startup of a biological reactor. We will see that in order to maximize the performance of the bioreactor (that is, to minimize the time in which a resource – here, a pollutant – reaches a given threshold), the control depends strongly on the type of growth function under consideration.
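For reference, the two growth functions considered in Chapter 4 are usually written in the following standard forms (the parameter names below are the conventional ones, not necessarily those used in the book):

\[
\mu_{\mathrm{Monod}}(s) = \frac{\mu_{\max}\, s}{K_s + s},
\qquad
\mu_{\mathrm{Haldane}}(s) = \frac{\bar{\mu}\, s}{K_s + s + s^2/K_i}.
\]

The Monod kinetics is monotonically increasing in the substrate concentration \(s\), whereas the Haldane kinetics accounts for inhibition by the substrate at high concentrations; it is precisely this difference in shape that changes the structure of the optimal control.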

In Chapter 5, we go on to examine the optimization of biogas production. More specifically, we propose – for given initial conditions of the system, which is two-dimensional – a solution to the problem of maximizing biogas production over a given time range. We show that the constraints on the control (typically, its minimum and maximum admissible values) greatly constrain the form of the proposed solution.
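In standard chemostat notation, this type of two-dimensional problem can be sketched as follows (a sketch only; the yield coefficients and the exact criterion are those defined in Chapter 5):

\[
\dot{s} = -\,k\,\mu(s)\,x + u\,(s_{\mathrm{in}} - s),
\qquad
\dot{x} = \mu(s)\,x - u\,x,
\]

where \(s\) is the substrate concentration, \(x\) the biomass concentration and the dilution rate \(u\) is the control. The biogas flow rate is typically taken to be proportional to the microbial activity \(\mu(s)\,x\), so that the quantity to be maximized over the time range \([0,T]\) is of the form \(\int_0^T \mu\big(s(t)\big)\,x(t)\,dt\).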

Finally, in Chapter 6, we will discuss the optimization of a membrane filtration system. These systems are being used more and more in biotechnology. Filtering through a membrane consists of maintaining a pressure difference, called the transmembrane pressure (TMP), across a membrane immersed in a fluid. The force created results in the movement of fluid from the side where the pressure is higher to the side where it is lower. As this happens, elements in the fluid that are larger than the pore size are retained by the membrane and thus filtered out of the fluid. Over time, these elements clog up the membrane. At this point, we must either increase the TMP to maintain a constant flow across the membrane, or accept a continuous decrease in the flow across the membrane until all the pores are clogged. To limit this phenomenon, the membrane can be unclogged at regular intervals, for example using a backwash fluid. If the membrane’s performance is defined as the quantity of fluid filtered over time, the question arises as to which backwash strategy is most appropriate in order to maximize the quantity of fluid filtered over a given time period. In practice, this amounts to determining at what time instants, and how often, the backwash must be applied, keeping in mind that clean water is used during the backwash; the volume of this water is then subtracted from the performance criterion. We are thus faced with an inevitable compromise: unclogging is essential to keep the membrane as clean as possible, but it must be carried out as infrequently as possible so that the filtration performance is not degraded. If there is no model of the installation, we have little choice but to proceed by trial and error: we take a grid of time instants and fix the duration of the washes, and the backwash policy is then tuned by carrying out experiments, keeping in mind that the initial state of the membrane may play a large role here.
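As a purely illustrative sketch of this compromise (the model, parameter values and backwash representation below are hypothetical and much cruder than the models discussed in Chapter 6), one can simulate a fouling-dependent flux and compare the net volume filtered for different backwash periods:

```python
# Hypothetical toy model of membrane filtration with periodic backwashing.
# Fouling m reduces the flux; a backwash removes part of the fouling but
# consumes clean water, which is subtracted from the net filtered volume.
import numpy as np

def net_volume(period, T=100.0, dt=0.01, wash_cost=0.5):
    """Net filtered volume over [0, T] for a given backwash period (arbitrary units)."""
    m, vol, t_last = 0.0, 0.0, 0.0
    for t in np.arange(0.0, T, dt):
        flux = 1.0 / (1.0 + m)      # flux decreases as fouling builds up
        vol += flux * dt
        m += 0.05 * flux * dt       # fouling grows with the volume filtered
        if t - t_last >= period:    # periodic backwash
            m *= 0.2                # removes most (not all) of the fouling
            vol -= wash_cost        # clean water spent during the backwash
            t_last = t
    return vol

for period in (5.0, 10.0, 20.0, 40.0):
    print(f"backwash every {period:5.1f}: net volume = {net_volume(period):.1f}")
```

Sweeping the backwash period in this way is exactly the kind of trial-and-error search mentioned above; the point of Chapter 6 is to replace it with a characterization of the optimal filtration/backwash policy obtained via PMP.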

If we are able to obtain a model of the filtration membrane, we can then ask whether control tools can be used. It is important to note here that this type of model is generally nonlinear. With a direct approach we may – for the given initial conditions – obtain a specific filtration/backwash sequence to be applied in order to maximize the system’s performance. But how can we know whether the solution returned by the algorithm we are using is a global one? As the model used is not linear, it is entirely possible that another policy would yield identical, or even better, performance. In this book, we will see that characterizing the optimal control using PMP makes it possible to resolve this same problem completely, even if applying the optimal solution may pose practical problems that must then be addressed. Indeed, while the real solution requires computing the time instants at which the backwash must be applied, applying PMP requires the controls to be constrained to belong to much larger sets in order to guarantee the existence of this optimal control. As a result, these controls may take values that make no physical sense. However, this is not the point to focus on here, as, in practice, several strategies allow us to find approximations of these values (see, for instance, the theory proposed in [KAL 17]). The essential result to keep in mind is that the precise values of the control to be applied can only be found by using PMP.

1. By this we mean a purely numerical approach, as distinguished from indirect optimization approaches in which we first write the optimality conditions, which are then solved analytically and/or numerically.

PART 1: Learning to use Pontryagin’s Maximum Principle