Model Predictive Control

Baocang Ding

Description
Model Predictive Control: Understand the practical side of controlling industrial processes. Model Predictive Control (MPC) is a method for controlling a process according to given parameters, derived in many cases from empirical models. It has been widely applied in industrial units to increase revenue and promote sustainability. Systematic overviews of this subject, however, are rare, and few draw on direct experience in industrial settings. Assuming basic knowledge of the relevant mathematical and algebraic modeling techniques, the book combines foundational theories of MPC with a thorough sense of its practical applications in an industrial context. The result is a presentation uniquely suited to rapid incorporation in an industrial workplace. Model Predictive Control readers will also find: a two-part organization to balance theory and applications; a selection of topics directly driven by industrial demand; and an author with decades of experience in both teaching and industrial practice. This book is ideal for industrial control engineers and researchers looking to understand MPC technology, as well as advanced undergraduate and graduate students studying predictive control and related subjects.

Page count: 326

Publication year: 2024



Table of Contents

Cover

Table of Contents

Title Page

Copyright

About the Authors

Preface

Acronyms

Introduction

1 Concepts

1.1 PID and Model Predictive Control

1.2 Two‐Layered Model Predictive Control

1.3 Hierarchical Model Predictive Control

2 Parameter Estimation and Output Prediction

2.1 Test Signal for Model Identification

2.2 Step Response Model Identification

2.3 Prediction Based on Step Response Model and Kalman Filter

3 Steady‐State Target Calculation

3.1 RTO and External Target

3.2 Economic Optimization and Target Tracking Problem

3.3 Judging Feasibility and Adjusting Soft Constraint

4 Two‐Layered DMC for Stable Processes

4.1 Open‐Loop Prediction Module

4.2 Steady‐State Target Calculation Module

4.3 Dynamic Calculation Module

4.4 Numerical Example

5 Two‐Layered DMC for Stable and Integral Processes

5.1 Open‐Loop Prediction Module

5.2 Steady‐State Target Calculation Module

5.3 Dynamic Calculation Module

5.4 Numerical Example

6 Two‐Layered DMC for State‐Space Model

6.1 Artificial Disturbance Model

6.2 Open‐Loop Prediction Module

6.3 Steady‐State Target Calculation Module

6.4 Dynamic Calculation Module

6.5 Numerical Example

7 Offset‐Free, Nonlinearity and Variable Structure in Two‐Layered MPC

7.1 State Space Steady‐State Target Calculation with Target Tracking

7.2 QP‐Based Dynamic Control and Offset‐Free

7.3 Static Nonlinear Transformation

7.4 Two‐Layered MPC with Varying Degree of Freedom

7.5 Numerical Example with Output Collinearity

8 Two‐Step Model Predictive Control for Hammerstein Model

8.1 Two‐Step State Feedback MPC

8.2 Stability of Two‐Step State Feedback MPC

8.3 Region of Attraction for Two‐Step MPC: Semi‐Global Stability

8.4 Two‐Step Output Feedback Model Predictive Control

8.5 Generalized Predictive Control: Basics

8.6 Two‐Step Generalized Predictive Control

8.7 Region of Attraction for Two‐Step Generalized Predictive Control

9 Heuristic Model Predictive Control for LPV Model

9.1 A Heuristic Approach Based on Open‐Loop Optimization

9.2 Open‐Loop MPC for Unmeasurable State

10 Robust Model Predictive Control

10.1 A Cornerstone Method

10.2 Invariant Set Trap

10.3 Prediction Horizon: Zero or One

10.4 Variant Feedback MPC

10.5 About Optimality

11 Output Feedback Robust Model Predictive Control

11.1 Model and Controller Descriptions

11.2 Characterization of Stability and Optimality

11.3 General Optimization Problem

11.4 Solutions to Output Feedback MPC

References

Index

End User License Agreement

List of Tables

Chapter 2

Table 2.1 Analogy of (2.77) and (2.78) to (2.47) and (2.48).

Table 2.2 Analogy of (2.84) and (2.85) to (2.47) and (2.48).

Chapter 3

Table 3.1 Lookup table for adjusting the soft constraints and choosing weigh...

Table 3.2 Lookup table for adjusting the soft constraints and choosing weigh...

Chapter 4

Table 4.1 Parameters of multi‐priority‐rank SSTC.

Chapter 5

Table 5.1 Parameters of multi‐priority‐rank SSTC (types 1 and 3 iCV).

Table 5.2 Parameters of multi‐priority‐rank SSTC (types 2 and 4 iCV).

Chapter 6

Table 6.1 Parameters of multi‐priority‐rank SSTC.

Chapter 7

Table 7.1 Parameters of multi‐priority‐rank SSTC.

Table 7.2 Parameters of multi‐priority‐rank SSTC.

Table 7.3 Parameters of multi‐priority‐rank SSTC.

Chapter 10

Table 10.1 For several simple cases of , find the largest values of

Table 10.2 For several simple cases of (), find the largest values of

List of Illustrations

Chapter 1

Figure 1.1 Control system using PID.

Figure 1.2 Control system using PID+manual.

Figure 1.3 Control system using MPC based on PID.

Figure 1.4 Control system manipulating “manual” by MPC.

Figure 1.5 Control system using MPC+PID.

Figure 1.6 Control system using MPC+PID+manual.

Figure 1.7 In the control system using MPC+PID+manual, a controlled object o...

Figure 1.8 Control system using MPC.

Figure 1.9 Control system using MPC with SSTC.

Figure 1.10 Control system using RTO+MPC including SSTC.

Figure 1.11 The contents of hierarchical MPC and two‐layered MPC.

Figure 1.12 Two‐layered structure composed of SSTC and DC.

Figure 1.13 Transformation of system structure of RTO+two‐layered MPC.

Figure 1.14 The system structure of RTO+two‐layered MPC with open‐loop predi...

Chapter 2

Figure 2.1 Expected coverage by MPC model.

Figure 2.2 Step test.

Figure 2.3 A white noise.

Figure 2.4 Spectral density of GBN with .

Figure 2.5 Spectral density of GBN with .

Figure 2.6 Test platform on Simulink of MATLAB.

Figure 2.7 Sampled data for IndepV (a) and DepV (b).

Figure 2.8 FIR and FSR obtained when data length is 1200 and model horizon i...

Figure 2.9 FIR and FSR obtained when data length is 1200 and model horizon i...

Figure 2.10 FIR and FSR obtained when data length is 1800 and model horizon ...

Figure 2.11 FSRs for case 1.

Figure 2.12 FSRs for case 2.

Figure 2.13 FSRs for case 3.

Chapter 3

Figure 3.1 The heavy oil fractionator.

Figure 3.2 Time‐varying trajectories of ss (LHS without, but RHS with, distu...

Figure 3.3 The time‐varying trajectories of ss when there is a minimum‐move ...

Figure 3.4 Feasible region surrounded by multiple constraints.

Figure 3.5 The feasible region for adjusting the soft constraints, but there...

Figure 3.6 The feasible region for adjusting the soft constraints, and there...

Chapter 4

Figure 4.1 The control result

Chapter 5

Figure 5.1 The control result of type 1 iCV.

Figure 5.2 The control result of type 2 iCV.

Figure 5.3 The control result of type 3 iCV.

Figure 5.4 The step response curve including the pseudo iCV.

Figure 5.5 The control result of type 4 iCV.

Chapter 6

Figure 6.1 The control result.

Chapter 7

Figure 7.1 The control result.

Figure 7.2 The control result.

Figure 7.3 The control result.

Figure 7.4 The control result.

Figure 7.5 Hammerstein–Wiener nonlinear model.

Figure 7.6 The schematic diagram of the nonlinear transformation method.

Figure 7.7 The control result of CV.

Figure 7.8 The control result of MV.

Figure 7.9 The control result of CV.

Figure 7.10 The control result of MV.

Figure 7.11 The control result of CV.

Figure 7.12 The control result of MV.

Figure 7.13 The control result of CV.

Figure 7.14 The control result of MV.

Chapter 8

Figure 8.1 Curve .

Figure 8.2 The closed‐loop state trajectories when has no eigenvalue outsi...

Figure 8.3 The closed‐loop state trajectories when has an eigenvalue outsi...

Figure 8.4 The static nonlinear feedback form.

Figure 8.5 The sketch map of nonlinear item

Figure 8.6 The block diagram of TSGPC

Figure 8.7 The uncertain system representation of TSGPC

Figure 8.8 RoA and closed‐loop state trajectory of TSGPC

Chapter 9

Figure 9.1 The state responses.

Figure 9.2 The control input.

Figure 9.3 Response of state .

Figure 9.4 Response of state .

Figure 9.5 Control input signal .

Figure 9.6 Response of state .

Figure 9.7 Response of state .

Figure 9.8 Control input signal .

Chapter 10

Figure 10.1 The networked control system.

Figure 10.2 The state responses of the closed‐loop systems.

Figure 10.3 The control moves.

Figure 10.4 Comparisons of RoAs between parameter‐dependent open‐loop MPC an...

Figure 10.5 Closed‐loop state trajectories.

Figure 10.6 RoAs between the problems (10.168) and (10.173).

Figure 10.7 RoAs between the problems 10.168 and the partial feedback MPC (

Chapter 11

Figure 11.1 The disturbance utilized in the simulation.

Figure 11.2 The state trajectories.

Figure 11.3 The control input signal.

Figure 11.4 The performance index .

Figure 11.5 The regions of attraction, set (i).

Figure 11.6 The regions of attraction, set (ii).

Figure 11.7 The evolutions of .

Figure 11.8 The state trajectories of closed‐loop system.

Figure 11.9 The control input signals.



Model Predictive Control

 

 

Baocang Ding and Yuanqing Yang

Chongqing University of Posts & Telecommunications

 

 

 

 

This edition first published 2024. © 2024 John Wiley & Sons Ltd

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Baocang Ding and Yuanqing Yang to be identified as the authors of this work has been asserted in accordance with law.

Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging‐in‐Publication Data:

Names: Ding, Baocang, author. | Yang, Yuanqing (College teacher), author.
Title: Model predictive control / Baocang Ding and Yuanqing Yang.
Description: Hoboken, NJ : Wiley-IEEE Press, 2024. | Includes bibliographical references and index.
Identifiers: LCCN 2023049301 (print) | LCCN 2023049302 (ebook) | ISBN 9781119471394 (cloth) | ISBN 9781119471424 (adobe pdf) | ISBN 9781119471318 (epub)
Subjects: LCSH: Predictive control.
Classification: LCC TJ217.6 .D55 2024 (print) | LCC TJ217.6 (ebook) | DDC 629.8–dc23/eng/20240206
LC record available at https://lccn.loc.gov/2023049301
LC ebook record available at https://lccn.loc.gov/2023049302

Cover Design: Wiley
Cover Image: © Abstract Aerial Art/Getty Images

About the Authors

Baocang Ding, PhD, teaches model predictive control (MPC) to both undergraduate and graduate students in the School of Automation, Chongqing University of Posts and Telecommunications, China. His research interests include MPC, control of power networks, process control, and control software development.

Yuanqing Yang, PhD, teaches MPC to both undergraduate and graduate students in the School of Automation, Chongqing University of Posts and Telecommunications, China. His research interests include MPC, fuzzy control, networked control, and distributed control systems.

Preface

As a class of model‐based control algorithms, model predictive control (MPC) has been extensively researched and applied to numerous real processes and equipment. Among the various MPC approaches applied in practice, dynamic matrix control (DMC) is perhaps the best representative. DMC has developed from the single‐layered form to the two‐layered form. In the single‐layered MPC, the steady‐state targets (generally called setpoints in control) are tracked. The single‐layered MPC is the lower layer of the two‐layered MPC; in the upper layer, the steady‐state targets are calculated. Among the various studies on MPC theory, robust MPC for models with uncertainties is one of the best representatives. In the community of robust MPC, the approaches for the linear parameter‐varying (LPV) model have had a deep impact.

We began our research on MPC early in 1997. We have not only applied MPC in petrochemical/chemical processes but also published over 200 MPC papers. In this book, we talk about MPC but concentrate on our own contributions and ideas. Chapter 1 tells how MPC developed: beginning as a substitute for, or an upper level of, proportional integral derivative (PID) control, moving from the single‐layered to the two‐layered form, and serving as a lower level of real‐time optimization (RTO). In general, the two‐layered DMC has three modules: open‐loop prediction, steady‐state target calculation (SSTC), and dynamic control (DC). Chapter 2 concerns the identification of the basic model for MPC but emphasizes the finite step response (FSR) model and the open‐loop prediction based on it. Chapter 3 explains the general steps of SSTC. Chapters 4 and 5 concern the two‐layered DMC when there are, respectively, only stable controlled variables (CVs) and both stable and integral CVs. Chapter 6 extends the ideas in Chapters 4 and 5 to the state‐space model, also named two‐layered DMC. The open‐loop prediction in Chapter 2 can be seen as a bridge between DMC for the FSR model and DMC for the state‐space model, where the Kalman filter (KF) is the link. Chapter 7 discusses some important issues in two‐layered MPC: the offset‐free property, static nonlinearity, and variable structure. Hammerstein nonlinearity is an important type of static nonlinearity, and it incurs necessary conditions for retrieving closed‐loop stability. In Chapter 8, the two‐step MPCs for the Hammerstein model, including the two‐step MPC with state feedback, the two‐step MPC with dynamic output feedback, and the two‐step generalized predictive control (GPC), are given with stability analyses. In the first step of two‐step MPC, the control law or the control move, based on the linear model, is given. In the second step, the Hammerstein nonlinearity is handled by the inaccurate inversion.
Chapter 9 gives two heuristic approaches of MPC for LPV model, with state feedback and dynamic output feedback, respectively. The controllers cannot guarantee closed‐loop stability due to their “heuristic” nature, like DMC. Chapter 10 retrieves closed‐loop stability (as compared with Chapter 9) with state‐feedback for LPV model. This was a hot topic in MPC. Chapter 11 retrieves closed‐loop stability (as compared with Chapter 9) with the dynamic output feedback for LPV model. Chapter 11 shows our unique contribution to robust MPC.

This book may have missed citing some important materials, and we sincerely apologize for that.

 

Baocang Ding

College of Automation
Chongqing University of Posts and Telecommunications
Chongqing, P. R. China

Acronyms

CARIMA

controlled auto‐regressive integral moving average

CARMA

controlled auto‐regressive moving average

CCA

cone‐complementary approach

CLTVQR

constrained linear time‐varying quadratic regulation

CRHPC

constrained receding horizon predictive control

CSTR

continuous stirred tank reactor

CV

controlled variable

CVET

external target of controlled variable

CVss

ss of controlled variable

DbCC

double convex combination

DC

dynamic control, dynamic control module, dynamic calculation

DepV

dependent variable

DMC

dynamic matrix control

DV

disturbance variable

ET

external target

FIR

finite impulse response

FSR

finite step response

GBN

generalized binary noise

GPC

generalized predictive control

HHL

high high limit, upper engineering limit

HL

high limit, upper operating limit

ICCA

iterative cone‐complementary approach

iCV

first‐order integral controlled variable

iDepV

first‐order integral dependent variable

IndepV

independent variable

IRV

ideal resting value

KBM

Kothare–Balakrishnan–Morari

KF

Kalman filter

LA

Lu–Arkun

LGPC

linear generalized predictive control

LHS

left‐hand side

LL

low limit, lower operating limit

LLL

low low limit, lower engineering limit

LMI

linear matrix inequality

LP

linear programming

LPV

linear parameter‐varying

LQ

linear quadratic

LQR

linear quadratic regulator

LS

least square

LTI

linear time‐invariant

MIMO

multiple‐input multiple‐output

MISO

multiple‐input single‐output

MPC

model predictive control, multivariable predictive controller

MV

manipulated variable

MVET

external target of manipulated variable

MVss

ss of manipulated variable

NLGPC

nonlinear generalized predictive control

NSGPC

nonlinear separation generalized predictive control

OFRMPC

output feedback robust MPC

ol

open‐loop

PID

proportional integral derivative

PRBS

pseudo‐random binary sequence

QB

quadratic boundedness

QP

quadratic programming

RHC

receding horizon control

RHS

right‐hand side

RoA

region of attraction

RTO

real‐time optimization

sCV

stable controlled variable

sDepV

stable dependent variable

SIORHC

stabilizing input/output receding horizon control

SISO

single‐input single‐output

sp

setpoint

ss

steady‐state target, steady‐state targets

ssKF

steady‐state Kalman filter

SSTC

steady‐state target calculation

SVD

singular value decomposition

TSGPC

two‐step generalized predictive control

TSMPC

two‐step MPC, two‐step state feedback MPC

TSOFMPC

two‐step output feedback MPC

TTSS

time to steady‐state

Introduction

This book discusses the two‐layered dynamic matrix control (DMC) for finite step response (FSR) and state‐space models, the two‐step model predictive control (MPC) for the state‐space and the input–output models with Hammerstein static nonlinearity, and the robust MPC for the linear parameter‐varying (LPV) model with/without bounded disturbance. The topics are linked by the Kalman filter and the state‐space equivalence. The book covers both real applied algorithms and theoretical results, and represents the authors' main contributions to MPC, with appropriate extensions.

1 Concepts

When we talk about model predictive control (MPC), we should know that MPC has other names, e.g., receding horizon control (RHC). What are the differences between the two names? In industrial circles it is usually called MPC. When the state‐space paradigm is applied to study MPC with a stability guarantee, it is sometimes called RHC, emphasizing the feature of receding‐horizon optimization. On the application side, besides receding‐horizon optimization, MPC has other features, such as model‐based prediction and feedback correction. If MPC has no feedback correction, and its prediction is naturally obtained from the model, it is named RHC. Thus, RHC is often used in academic/theoretical research.

1.1 PID and Model Predictive Control

It is said that in industry, more than of automatic control loops utilize proportional integral derivative (PID) control. Some say that this number should be or even . The percentage cannot be very authoritative. The PID control strategy is widely used not only in civil industry but also in aerospace, military, and electromechanical devices. The use of PID is shown in Figure 1.1, which contains the PID controller, the actuator, the controlled process (controlled device) {plant}, the controlled output {}, the sp (setpoint) of the controlled output {}, and a plus and a minus sign; the measured output is fed back. A measurement (meter) block could be added; however, theoretical research studies usually assume that the measurement (meter) is included in the plant.
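The PID loop of Figure 1.1 can be sketched in a few lines of code. This is a minimal discrete-time illustration, not taken from the book: the first-order plant and the gains Kp, Ki, Kd are assumptions chosen only so the loop runs.

```python
# Minimal discrete PID loop for the structure of Figure 1.1.
# The first-order plant dy/dt = -y + u and the gains are illustrative
# assumptions, not values from the text.

def pid_step(err, state, Kp=2.0, Ki=1.0, Kd=0.1, dt=0.1):
    """One PID update; state = (integral of error, previous error)."""
    integ, prev = state
    integ += err * dt
    deriv = (err - prev) / dt
    u = Kp * err + Ki * integ + Kd * deriv
    return u, (integ, err)

def simulate(sp=1.0, steps=200, dt=0.1):
    """Close the loop: PID output drives the plant, output is fed back."""
    y, state = 0.0, (0.0, 0.0)
    for _ in range(steps):
        u, state = pid_step(sp - y, state, dt=dt)
        y += dt * (-y + u)            # Euler step of the plant
    return y

print(simulate())                     # settles close to the setpoint 1.0
```

The integral term is what removes the steady-state error; with only Kp and Kd the output would settle below the setpoint.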

For many factories, it is optimistic to apply PID for above , since many actuators are manually operated where PID is not operable. The manual operation of actuators is shown in Figure 1.2. Since there are many PIDs, s is added, and both and are vectors.

What is the situation for MPC? According to the statistics, as compared with PID, MPC occupies about 10–15% of the automatic control loops in process control. We should not count based on the upper bounds (PID , MPC ), since other control strategies (different from PID and MPC) would then misleadingly seem useless. In many factories, 80–90% of the actuators are manually operated, i.e., with neither PID nor MPC. Consider a modern factory with high‐level automation and the immense courage to accept advanced control strategies like MPC; according to the statistics in such factories, PID occupies approximately and MPC approximately , while other control strategies are definitely non‐mainstream.

Figure 1.1 Control system using PID.

Figure 1.2 Control system using PID+manual.

Figure 1.3 Control system using MPC based on PID.

How does MPC play its role? Sometimes there is misunderstanding. Based on Figure 1.1, MPC acts as in Figure 1.3. In MPC, . MPC lies before PID. is in front of MPC, which is called ss (the steady‐state target) of , i.e., sp of MPC. The measurement of is sent to MPC.

In the process control, the controllable input is called manipulated variable (MV); the controlled output is called controlled variable (CV); the measurable disturbance is called disturbance variable (DV), sometimes called the feedforward variable. These names are conventional in the industrial MPC.

Can the “manual” of Figure 1.2 become automatic? If MPC is well applied and “manual” is also given by MPC, then of MPC includes both and manual, as shown in Figure 1.4. Before applying MPC, “manual” implies that the operator directly operates the valves; PID implies using the PID algorithm to manipulate the valve, and the operator has to operate the PID sp. By applying MPC, MPC manipulates both valve and PID sp. MPC primarily manipulates PID sps, and secondarily directly manipulates some valves.

Figure 1.4 is not a general situation. In practice, some projects cannot utilize MPC on all PID sps (setpoints), i.e., some PID sps are still manually adjusted, as shown in Figure 1.5. Some PID sps are manipulated by MPC, denoted as ; the other PID sps, denoted as , are not manipulated by MPC.

Figure 1.4 Control system manipulating “manual” by MPC.

Figure 1.5 Control system using MPC+PID.

Figure 1.6 Control system using MPC+PID+manual.

Figure 1.5 is still not a general situation. Some “manuals” may not be manipulated by MPC, as shown in Figure 1.6. Applying MPC for all “manuals” represents a high level of control, but not all projects can achieve it.

In industry, not all of MPC are actuator positions; some are actuator positions, and the others are PID sps. There have been misunderstandings about this fact.

What does the controlled object of MPC become? It is shown in the dashed box in Figure 1.7. Hence, applying MPC to a real system requires establishing a mathematical model of the object in the dashed box rather than merely building the “plant” model. The model ready for MPC must take PID into account, i.e., the model includes the role of PID. The “manual” in Figure 1.7 becomes DV of MPC, and so is .

Figure 1.7 In the control system using MPC+PID+manual, a controlled object of MPC is in dashed box.

Figure 1.8 Control system using MPC.

Suppose the model in the dashed box is obtained, including the portions for both DV‐to‐CV and MV‐to‐CV. In the literature, given , most researchers are concerned with studying

(1) how to optimize , i.e., the algorithms, which were dominating in the 1970s and 1980s;

(2) whether or not the sequence ( from 0 to ) converges, i.e., stability, which is dominating in the academic theory;

(3) whether or not , i.e., the offset‐free, which is not mainstream but has some papers.

In the 1990s, there were many mature results on stability. The offset‐free property is only valid after assuming stability. The research studies on offset‐free seem to have fewer patterns than those on stability.

Let us abbreviate the controlled object of MPC (often referred to as a generalized object in process control), in the dashed box in Figure 1.7, as PLANT, which is capitalized. Then, Figure 1.7 reduces to Figure 1.8. Thus, all three types of studies mentioned earlier are for Figure 1.8.
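The receding-horizon optimization in studies (1)–(3) can be made concrete with a small numerical sketch (an illustration, not an algorithm from the book): at each sampling instant a finite-horizon quadratic cost is minimized over a scalar model, only the first move is applied, and the optimization is repeated. The model parameters, horizon, and weights below are assumptions.

```python
import numpy as np

# Receding-horizon sketch for a scalar model x+ = a*x + b*u.
# At each step: optimize N future inputs, apply only the first one.

a, b, N, r = 1.2, 1.0, 5, 0.1      # unstable open loop (a > 1); assumed values

def plan(x0):
    # Predictions: x_k = a^k x0 + sum_{j<k} a^(k-1-j) b u_j, k = 1..N.
    Phi = np.array([[a**(k - j - 1) * b if j < k else 0.0
                     for j in range(N)] for k in range(1, N + 1)])
    f = np.array([a**k * x0 for k in range(1, N + 1)])
    # Minimize sum_k x_k^2 + r*u_k^2 as a stacked least-squares problem.
    A = np.vstack([Phi, np.sqrt(r) * np.eye(N)])
    rhs = np.concatenate([-f, np.zeros(N)])
    u, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return u

x = 1.0
for _ in range(30):
    u = plan(x)                    # re-solve the horizon problem
    x = a * x + b * u[0]           # apply only the first move
print(abs(x) < 1e-3)               # the receding-horizon loop stabilizes x
```

Even though each plan is open-loop over a finite horizon, re-solving at every step turns it into feedback, which is exactly what the stability studies in (2) analyze.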

1.2 Two‐Layered Model Predictive Control

Are the aforementioned three types of studies closely consistent with state‐of‐the‐art industrial applications? The answer is negative. There are more issues to tackle. When MPC is applied as in Figure 1.8, it is only called dynamic control (DC) or dynamic tracking, or, often in industrial software, dynamic move calculation.

The biggest issue is where comes from. Before using MPC, both the valve and the PID sp are manually operated. By applying MPC, if is again manually operated, do we have enough knowledge to operate it well? If we cannot operate the PID sps well, we might not gain big benefits by operating in Figure 1.8. If obtaining is not automatic, enhancing MPC efficiency requires a lot of operating experience and a high‐level engineer. Hence, automating the calculation of is a key to simplifying MPC operation.

Let us take an example, where “PLANT” takes a simple form, i.e., the transfer function model . According to the final‐value theorem, we obtain , where is the steady‐state gain matrix. In applying PID, for every being controlled, there must be a sp. MPC manipulates not only PID sps but also some valves. Note that the numbers of MVs and CVs may be unequal. When they are unequal, for any , is there a satisfying ? By setting a arbitrarily, does MPC necessarily drive the CV to ? Obviously not. The equation, with as the unknown, does not necessarily have a solution for any . Uniqueness of the solution is rare: in most cases, either there is no solution, or there are infinitely many solutions. For industrial applications, MPC should automatically calculate not only but also . The term compatibility refers to whether, for the given , there is a solution . The term uniqueness refers to whether, for the given , there is a unique solution . In the case of multiple solutions, a principle for choosing should be given.
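These compatibility and uniqueness notions can be checked numerically through the rank of the steady-state gain matrix. The sketch below is illustrative (the gain matrices and targets are made-up assumptions, not from the text):

```python
import numpy as np

# Classify the steady-state equation y_ss = G @ u_ss:
# compatible = a solution u_ss exists; unique = it is the only one.

def classify(G, y_ss):
    u, _, rank, _ = np.linalg.lstsq(G, y_ss, rcond=None)
    compatible = np.allclose(G @ u, y_ss, atol=1e-6)
    unique = rank == G.shape[1]          # full column rank => unique
    return compatible, unique, u

# More CVs than MVs: generally no exact solution (incompatible).
G_tall = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
print(classify(G_tall, np.array([1.0, 1.0, 1.0]))[0])   # False

# More MVs than CVs: infinitely many solutions; lstsq returns the
# minimum-norm one, so an extra selection principle is needed.
G_wide = np.array([[1.0, 0.5, 0.2]])
comp, uniq, u = classify(G_wide, np.array([1.0]))
print(comp, uniq)                                        # True False
```

The minimum-norm choice made by `lstsq` is only one possible "principle to choose"; an industrial SSTC would instead pick the solution by economic criteria and priorities.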

In industrial operations, it is evident that both and may be related to the economy. Any variable related to the economy could be involved in an optimization. In MPC, economy is a broad concept; it may include, e.g., increasing profit, reducing energy consumption, reducing exhaust gas emission, and reducing pollutants.

Figure 1.8 can be modified. Besides , there is , satisfying . For any , it may fail to find a . For any , there is a unique as long as is a real matrix. Since PLANT is , a failure to satisfy brings trouble. The small trouble could be a steady‐state error (non‐offset‐free), and the big one could be dynamic instability. can easily cause dynamic instability.

In summary, it is important to automatically calculate . MPC in Figure 1.8 is renamed DC. In order to provide a set of to DC, a so‐called steady‐state target calculation (SSTC) is needed in front of DC, as shown in Figure 1.9. The term SSTC is somewhat academic; its alias in industry is steady‐state optimization.

Figure 1.9 Control system using MPC with SSTC.

Figure 1.10 Control system using RTO+MPC including SSTC.

Recall that there is an expected value for PID and an expected value for DC. Is there an expected value for SSTC? Yes, there are before SSTC (some have , i.e., ; some have , i.e., ), as shown in Figure 1.10. are called the ideal values or external targets (ETs). SSTC calculates based on these ideal values and some other factors. is ss of the actuator position or PID sp. Therefore, ss is calculated, which is the steady‐state PID sp or actuator position. Then, DC calculates the dynamic PID sp and actuator position, which moves the physical devices.

Recall that manually giving might be difficult. By introducing , will the difficulty turn to manually operating ? It depends on the key difference between and . (satisfying ) is related to control, which is lower‐layered. In a factory, calculating sp of the controller is low‐layered work. , on the other hand, is related to the optimization, or related to economy. The relationship that satisfies is written as

(1.1)

where denotes some other variables. Then, is obtained by solving

(1.2)

where is the performance index related to economy. Equation (1.2) is usually called real‐time optimization (RTO) in industry. Here, “real‐time” mainly reflects that some physical parameters in , relating to the real system, are updated in real time. There is a large body of research on RTO in both industrial and academic circles, and RTO technology can be even more difficult than MPC.
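To make the idea of (1.1)–(1.2) concrete, the following is a minimal, hypothetical sketch: the steady‐state model is taken to be a linear relation, and an economic index prices the input against the output. The model, the cost coefficients, and the bounds are all invented for illustration; a real RTO would solve a much larger constrained nonlinear program.

```python
# Hypothetical illustration of RTO as in (1.2): minimize an economic
# index J subject to a steady-state model. Here the model is the
# (invented) linear relation y = K*u + b, and J prices the input
# (energy cost) against the output (product revenue). A simple grid
# search over the admissible input range stands in for a real solver.

def rto(K, b, price_in, price_out, u_min, u_max, n=1000):
    """Grid-search the admissible input range for the economic optimum."""
    best_u, best_J = None, float("inf")
    for i in range(n + 1):
        u = u_min + (u_max - u_min) * i / n
        y = K * u + b                      # steady-state model
        J = price_in * u - price_out * y   # economic performance index
        if J < best_J:
            best_u, best_J = u, J
    return best_u, best_J

u_ss, J_ss = rto(K=2.0, b=1.0, price_in=3.0, price_out=2.0,
                 u_min=0.0, u_max=10.0)
print(u_ss, J_ss)  # → 10.0 -12.0 (revenue outweighs cost, so push u up)
```

The point of the sketch is only the structure: the optimum is a steady‐state input, which is then handed down to the lower layers as an ideal value.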

Remark 1.1 The nature of (1.1) is steady‐state. What is the difference between (1.1) and ? In general, they are nonequivalent. Some research works linearize to yield . Academically, such a linearization should be taken; otherwise, the linear and nonlinear models can be inconsistent. In theory, if linearizing does not give , the model is not well built. In industrial practice, however, we might not worry about this inconsistency. While RTO is developed by one group of technicians, MPC can be developed by a different group. The people for RTO are engaged in system dynamics, energy technology, or technological processes, while the group for MPC works in control techniques. The two groups are not necessarily consistent; cooperation is encouraged, but they should maintain some independence.

In other words, has an advanced source, coming from thoughtful people. A thoughtful RTO module provides , which may be better than a from the control technical circle.

Let us make a visual analogy. The Ministry of Education assigns some assessment indices to a university; the university partitions these indices among its colleges, and each college then assigns tasks to its departments. The college corresponds to the MPC level, and the department to the PID level. If each PID corresponds to a supervisor or a team, then each department has several PIDs. The target at the PID level, or the college level, will deviate somewhat from the plans of the higher levels (university, Ministry of Education). Indeed, what the PIDs and colleges should do is carry out the higher‐level optimums as closely as possible. is driven by the higher‐level .

1.3 Hierarchical Model Predictive Control

Let us enclose MPC together with SSTC in a box; this is referred to as the two‐layered MPC in academia. If RTO is included, it is called the hierarchical MPC. appears in Figure 1.11, including the previous and ; in addition, some DVs denoted as , not mentioned above, are neither manual nor . in Figure 1.11 should contain , mainly . Why is it called a hierarchy? Because it is a high‐to‐low layered framework.

Actually, the two‐layered MPC alone does not have a hierarchical framework. Two‐layered, also named dual‐module, implies that at each control interval, the ss is first calculated and then tracked; see Figure 1.12.
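The dual‐module sequence can be sketched in a few lines. The plant model, gain, target, and rate limit below are all invented; the sketch only shows the order of operations at each control interval: SSTC inverts the steady‐state model to get a target, and DC then takes a rate‐limited dynamic step toward it.

```python
# Hypothetical sketch of the two-layered (dual-module) structure of
# Fig. 1.12: at each control interval, SSTC first computes the
# steady-state target, then DC computes a dynamic move tracking it.
# The scalar plant model y = K*u and all numbers are invented.

K = 2.0          # assumed steady-state gain
y_target = 8.0   # external target for the CV

def sstc(y_t):
    """Steady-state target calculation: invert the steady-state model."""
    return y_t / K                          # u_ss such that K*u_ss = y_t

def dc(u, u_ss, rate_limit=0.5):
    """Dynamic control: move toward u_ss under a rate constraint."""
    du = max(-rate_limit, min(rate_limit, u_ss - u))
    return u + du

u = 0.0
for _ in range(20):          # 20 control intervals
    u_ss = sstc(y_target)    # layer 1: SSTC
    u = dc(u, u_ss)          # layer 2: DC
print(round(u, 6), round(K * u, 6))  # → 4.0 8.0
```

Note the design choice: SSTC runs again at every interval, so if the target or the model parameters change, the dynamic layer immediately tracks the new steady state.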

In summary, for industrial MPC, actuator positioning is not a complete understanding. We need to change our mindset in two respects:

(1) MPC mainly controls PID sp, i.e., ;

(2) is also automatically calculated.

Then, we can elevate from the theoretical MPC (as seen in much of the literature) to the industrial MPC.

In order to highlight the hierarchy, Figure 1.11 is reshaped as Figure 1.13. Within the dashed box is MPC; below and above it are the real industrial process and RTO, respectively. In fact, RTO may do more than give , but we only care about its role with respect to MPC. Other without , and without , are handled by SSTC. In order to calculate the ss, it is necessary to set a standard, which will be detailed later.

Figure 1.11 The contents of hierarchical MPC and two‐layered MPC.

In “model predictive control”, what does “model predictive” represent? MPC depends on a model, which implies building a model for the part below the dashed box in Figure 1.13. MPC could be renamed prediction‐based control. How is the prediction given? Not only the basic KF (Kalman filter), but also the extended KF, the unscented KF, the information‐fusion KF, and the particle filter can be used; as long as there is a causality model, we can make the prediction. Since the dynamic is developed by one module and the ss is calculated by another, it is better to prepare a separate module for the prediction, as shown in Figure 1.14. Figure 1.13 sends to DC. In Figure 1.14, since is sent to the prediction module, DC does not have to receive .
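As a flavor of what such a prediction module does, here is a minimal scalar Kalman filter sketch. The model coefficients and noise variances (a, b, q, r) and the measurement sequence are invented; the point is only that the module fuses the causality model with measurements to produce the state estimate from which predictions start.

```python
# Minimal sketch of a one-dimensional Kalman filter serving as the
# prediction module: it fuses the model prediction with the latest
# measurement. All parameters (a, b, q, r) and data are invented.

def kf_step(x, P, u, y, a=0.9, b=0.5, q=0.01, r=0.1):
    """One predict+update cycle of the scalar Kalman filter."""
    # predict with the causality model x+ = a*x + b*u
    x_pred = a * x + b * u
    P_pred = a * P * a + q
    # update with the measurement y (here the CV equals the state)
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (y - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for y in [0.6, 0.7, 0.65, 0.7]:   # hypothetical measurements, u held at 0.2
    x, P = kf_step(x, P, u=0.2, y=y)
print(round(x, 3), round(P, 3))   # estimate converges toward the data
```

Any of the more elaborate estimators named above (EKF, UKF, particle filter) would slot into the same place in Figure 1.14.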

Figure 1.12 Two‐layered structure composed of SSTC and DC.

There are two types of predictions. One is called the dynamic prediction, which refers to predicting over a period of time, i.e.,

(1.3)

where the superscript ol represents open‐loop; when the control move is unchanged, the prediction is ol. DC does not send information back to the prediction module; the prediction module unidirectionally sends information to DC and is unaware of the immediate result of DC. A prediction without feedback is called ol. By applying as much disturbance information as is known, while assuming (i.e., all future control moves equal , i.e., the control moves remain unchanged), the prediction is ol. If the future series is known for a period of time, then that series is applied for the prediction; if only the current value is known, then only this value is applied.
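The open‐loop dynamic prediction can be sketched as follows, for a hypothetical scalar state‐space model x⁺ = a·x + b·u, y = c·x (all coefficients invented): the model is propagated over N future intervals with the control move frozen at its last value.

```python
# Hypothetical sketch of an open-loop (ol) dynamic prediction as in
# (1.3): propagate a scalar state-space model over N future intervals
# while holding the control move at its last value. Model coefficients
# (a, b, c) are invented.

def ol_prediction(x0, u_last, N, a=0.8, b=0.4, c=1.0):
    """Predict the CV over N steps with the control move unchanged."""
    x, ys = x0, []
    for _ in range(N):
        x = a * x + b * u_last   # open loop: u frozen at u_last
        ys.append(c * x)         # predicted CV trajectory
    return ys

traj = ol_prediction(x0=2.0, u_last=0.5, N=5)
print([round(y, 4) for y in traj])  # trajectory decays toward steady state
```

If a future disturbance or input series were known, the frozen `u_last` would simply be replaced by that series inside the loop.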

The second type of prediction is called the steady‐state prediction. The steady‐state prediction is also ol, denoted as . SSTC uses the ol prediction, so from SSTC is no longer ol. Should the prediction module give ? This deserves consideration. Usually , but sometimes .
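For a hypothetical stable scalar model x⁺ = a·x + b·u, y = c·x (coefficients invented, |a| < 1), the steady‐state ol prediction is simply the value the dynamic ol prediction settles at when the control move is held forever:

```python
# Hypothetical sketch of the steady-state ol prediction for the scalar
# model x+ = a*x + b*u, y = c*x with |a| < 1: holding u at u_last, the
# state settles at x_ss = b/(1-a)*u_last, so y_ss = c*b/(1-a)*u_last.
# Coefficients are invented for illustration.

def ol_steady_state(u_last, a=0.8, b=0.4, c=1.0):
    """Steady-state open-loop prediction (u held at u_last forever)."""
    return c * b / (1.0 - a) * u_last

print(ol_steady_state(0.5))  # → 1.0
```

This is the quantity SSTC consumes: it needs where the plant would end up if nothing more were done, before deciding where it should end up instead.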

Figure 1.13 Transformation of system structure of RTO+two‐layered MPC.

Figure 1.14 The system structure of RTO+two‐layered MPC with open‐loop prediction module.

Various methods can serve the prediction module, as long as it gives the dynamic and steady‐state ol predictions. Since the prediction is a completely independent module, neither academic research nor industrial application is restricted to the traditional prediction methods.

Why is drawn in Figure 1.14? For some theoretical research, it may be unnecessary to inform the controller of the value , as the controller should know it as the value sent for implementation at . For real control, however, the given at may not necessarily equal . For example, suppose the control period is 1 minute. At , a PID sp is given; this sp may not be realized until . In real industry, a PID sp is limited to be “slowly varying”. If the PID sp given at varies significantly (as compared with the instantaneous value of the PID sp at ), it may be clipped (i.e., not realized at ). We can consider this limitation in MPC, i.e., not allow a significant variation, but this still may not avoid . Take another example, where directly applies to the actuator: it cannot be guaranteed that the actuator position sp given by is achieved at . In this situation, at , it is necessary to read the actuator position again, in order to get the true position value. In both theory and simulation, since is sent in the previous control interval and the computer program runs accordingly, there is no need to doubt whether it is achieved in the next control interval. Computers do not make such mistakes, but physical plants may fail to react as quickly.
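The clipping mechanism described above can be illustrated with a tiny, hypothetical sketch: the lower layer enforces a "slowly varying" limit on the sp, so the realized value can differ from the commanded one, which is exactly why the previous value must be read back.

```python
# Hypothetical illustration of why the previously implemented value is
# read back in Fig. 1.14: the lower layer clips the sp change to a
# "slowly varying" limit, so the realized sp can differ from the
# commanded one. The limit value is invented.

def realize_sp(sp_prev, sp_cmd, max_step=0.2):
    """Lower layer clips the sp change to +/- max_step per interval."""
    step = max(-max_step, min(max_step, sp_cmd - sp_prev))
    return sp_prev + step

realized = realize_sp(sp_prev=1.0, sp_cmd=2.0)
print(realized)  # → 1.2, not the commanded 2.0
```

A simulation that assumed the commanded 2.0 had been implemented would start the next prediction from the wrong place; reading the value back avoids that.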

The dashed box in Figure 1.14 is called MPC, which is in line with the literature; strictly, it should be called the two‐layered MPC, which is often referred to as constrained multivariable control in industry. Why is there such a big difference between the two names? In industry, PID accounts for around of the automatic control loops, while MPC accounts for around , and other control algorithms are rare. Thus, constrained multivariable control is generally referred to as MPC. So far, MPC is the only method that can systematically handle constraints and be applied in industry; hence MPC and constrained multivariable control are almost equivalent names.

This book uses to denote DV (other literature may use ). In mathematics, is generally applied as a function, while this book uses it a little differently: denotes DV, denotes MV, denotes CV, and denotes the state when the state‐space model is applied.

This book includes three types of two‐layered MPC. The ordinary two‐layered dynamic matrix control (DMC) is given in Chapter 4, which is the usual DMC for open‐loop stable systems. Chapter 5 treats the special complexity of the two‐layered DMC with integral controlled variables (iCV). Chapter 6 adopts the state‐space model for the two‐layered DMC, which also has its own complexity. The basic two‐layered DMC in Chapter 4 is relatively easier, at least as compared with those for iCV and the state‐space model.