An Introduction to Self-adaptive Systems

Danny Weyns
Description

A concise and practical introduction to the foundations and engineering principles of self-adaptation

Though it has recently gained significant momentum, the topic of self-adaptation remains largely under-addressed in academic and technical literature. This book changes that. Using a systematic and holistic approach, An Introduction to Self-adaptive Systems: A Contemporary Software Engineering Perspective provides readers with an accessible set of basic principles, engineering foundations, and applications of self-adaptation in software-intensive systems.

It places self-adaptation in the context of techniques like uncertainty management, feedback control, online reasoning, and machine learning while acknowledging the growing consensus in the software engineering community that self-adaptation will be a crucial enabling feature in tackling the challenges of new, emerging, and future systems.

The author combines cutting-edge technical research with basic principles and real-world insights to create a practical and strategically effective guide to self-adaptation. He includes features such as:

  • An analysis of the foundational engineering principles and applications of self-adaptation in different domains, including the Internet-of-Things, cloud computing, and cyber-physical systems
  • End-of-chapter exercises at four different levels of complexity and difficulty
  • An accompanying author-hosted website with slides, selected exercises and solutions, models, and code

Perfect for researchers, students, teachers, industry leaders, and practitioners in fields that directly or peripherally involve software engineering, as well as those in academia involved in a class on self-adaptivity, this book belongs on the shelves of anyone with an interest in the future of software and its engineering.




Table of Contents

Cover

Title Page

Copyright

Dedication

Foreword

Acknowledgments

Acronyms

Introduction

1 Basic Principles of Self‐Adaptation and Conceptual Model

1.1 Principles of Self‐Adaptation

1.2 Other Adaptation Approaches

1.3 Scope of Self‐Adaptation

1.4 Conceptual Model of a Self‐Adaptive System

1.5 A Note on Model Abstractions

1.6 Summary

1.7 Exercises

1.8 Bibliographic Notes

2 Engineering Self‐Adaptive Systems: A Short Tour in Seven Waves

2.1 Overview of the Waves

2.2 Contributions Enabled by the Waves

2.3 Waves Over Time with Selected Work

2.4 Summary

2.5 Bibliographic Notes

3 Internet‐of‐Things Application

3.1 Technical Description

3.2 Uncertainties

3.3 Quality Requirements and Adaptation Problem

3.4 Summary

3.5 Exercises

3.6 Bibliographic Notes

4 Wave I: Automating Tasks

4.1 Autonomic Computing

4.2 Utility Functions

4.3 Essential Maintenance Tasks for Automation

4.4 Primary Functions of Self‐Adaptation

4.5 Software Evolution and Self‐Adaptation

4.6 Summary

4.7 Exercises

4.8 Bibliographic Notes

5 Wave II: Architecture‐based Adaptation

5.1 Rationale for an Architectural Perspective

5.2 Three‐Layer Model for Self‐Adaptive Systems

5.3 Reasoning about Adaptation using an Architectural Model

5.4 Comprehensive Reference Model for Self‐Adaptation

5.5 Summary

5.6 Exercises

5.7 Bibliographic Notes

6 Wave III: Runtime Models

6.1 What is a Runtime Model?

6.2 Causality and Weak Causality

6.3 Motivations for Runtime Models

6.4 Dimensions of Runtime Models

6.5 Principal Strategies for Using Runtime Models

6.6 Summary

6.7 Exercises

6.8 Bibliographic Notes

7 Wave IV: Requirements‐driven Adaptation

7.1 Relaxing Requirements for Self‐Adaptation

7.2 Meta‐Requirements for Self‐Adaptation

7.3 Functional Requirements of Feedback Loops

7.4 Summary

7.5 Exercises

7.6 Bibliographic Notes

8 Wave V: Guarantees Under Uncertainties

8.1 Uncertainties in Self‐Adaptive Systems

8.2 Taming Uncertainty with Formal Techniques

8.3 Exhaustive Verification to Provide Guarantees for Adaptation Goals

8.4 Statistical Verification to Provide Guarantees for Adaptation Goals

8.5 Proactive Decision‐Making using Probabilistic Model Checking

8.6 A Note on Verification and Validation

8.7 Integrated Process to Tame Uncertainty

8.8 Summary

8.9 Exercises

8.10 Bibliographic Notes

9 Wave VI: Control‐based Software Adaptation

9.1 A Brief Introduction to Control Theory

9.2 Automatic Construction of SISO Controllers

9.3 Automatic Construction of MIMO Controllers

9.4 Model Predictive Control

9.5 A Note on Control Guarantees

9.6 Summary

9.7 Exercises

9.8 Bibliographic Notes

10 Wave VII: Learning from Experience

10.1 Keeping Runtime Models Up‐to‐Date Using Learning

10.2 Reducing Large Adaptation Spaces Using Learning

10.3 Learning and Improving Scaling Rules of a Cloud Infrastructure

10.4 Summary

10.5 Exercises

10.6 Bibliographic Notes

11 Maturity of the Field and Open Challenges

11.1 Analysis of the Maturity of the Field

11.2 Open Challenges

11.3 Epilogue

Bibliography

Index

End User License Agreement

List of Tables

Chapter 2

Table 2.1 Summary of the state‐of‐the‐art before each wave with motivation, t...

Chapter 4

Table 4.1 Key insights of Wave I: Automating Tasks.

Chapter 5

Table 5.1 Key insights of Wave II: Architecture‐based Adaptation.

Chapter 6

Table 6.1 Key insights of Wave III: Runtime Models.

Table 6.2 Messages generated by the motes in the simple IoT system.

Table 6.3 SNR (dB) along the links in the simple IoT system.

Chapter 7

Table 7.1 Example operators to handle uncertainty requirements of self‐adapti...

Table 7.2 Types of awareness requirements.

Table 7.3 Evolution requirement operators with effects on managed and managin...

Table 7.4 Key insights of Wave IV: Requirements‐Driven Adaptation.

Chapter 8

Table 8.1 Sources of uncertainty.

Table 8.2 Illustration of characteristic values for quality properties of con...

Table 8.3 Determining the utilities of a small subset of configurations shown in...

Table 8.4 Key insights of Wave V: Guarantees under uncertainty

Chapter 9

Table 9.1 Key insights of Wave VI: Control‐based Software Adaptation.

Chapter 10

Table 10.1 Firing levels of the rules at time t1 in the auto‐scaling scenario....

Table 10.2 Excerpt of Q‐table at time t1 in the auto‐scaling scenario.

Table 10.3 Sample data for fuzzy Q‐learning scenario.

Table 10.4 Firing levels of the rules at time t2 in the auto‐scaling scenario....

Table 10.5 Performance results of fuzzy Q‐learning.

Table 10.6 Key insights of Wave VII: Learning from Experience.

List of Illustrations

Chapter 1

Figure 1.1 Architecture of a simple service‐based health assistance system

Figure 1.2 Conceptual model of a self‐adaptive system

Figure 1.3 Conceptual model applied to a self‐adaptive service‐based health ...

Chapter 2

Figure 2.1 Seven waves of research on engineering self‐adaptive systems

Figure 2.2 Main periods of activity of each wave over time with representati...

Chapter 3

Figure 3.1 Geographical deployment of the DeltaIoT network

Figure 3.2 Part of the DeltaIoT network architecture with Gateway and Manage...

Figure 3.3 Uncertainty due to interference for one of the communication link...

Chapter 4

Figure 4.1 Left: DeltaIoT system extended with feedback loop. Top right: exc...

Figure 4.2 Utility preferences for failure rate and cost in the example.

Figure 4.3 Self‐optimization scenario for DeltaIoT. The table shows the expe...

Figure 4.4 Self‐healing scenario for DeltaIoT. The table summarizes the res...

Figure 4.5 Self‐protection scenario for DeltaIoT. The table shows a subset ...

Figure 4.6 Self‐configuration scenario for DeltaIoT. Left: initial configura...

Figure 4.7 Reference model of a managing system.

Figure 4.8 Essential models managed by the knowledge of a managing system.

Figure 4.9 Basic workflow of the Monitor function.

Figure 4.10 Basic workflow of the Analyzer function.

Figure 4.11 Basic workflow of the Planner function.

Figure 4.12 Basic workflow of the Executor function.

Figure 4.13 Basic artifacts and activities of evolution management.

Figure 4.14 Basic artifacts and activities of self‐adaptation management.

Figure 4.15 Integration of evolution management and self‐adaptation manageme...

Figure 4.16 Two principal classes of interaction between evolution managemen...

Chapter 5

Figure 5.1 Three‐layer model for self‐adaptive systems.

Figure 5.2 Three‐layer model applied to DeltaIoT with a scenario that illust...

Figure 5.3 Runtime architecture of an architecture‐based adaptation approach...

Figure 5.4 Layered architecture of a self‐adaptive Web‐based client‐server s...

Figure 5.5 Reflection perspective of the encompassing reference model for se...

Figure 5.6 MAPE‐K perspective of the encompassing reference model for self‐a...

Figure 5.7 Distribution perspective of the encompassing reference model for ...

Figure 5.8 DeltaIoT scenario with distributed self‐adaptation. The network c...

Figure 5.9 Example of the Packet‐World [204].

Chapter 6

Figure 6.1 Illustration of a structural model (left) versus a behavioral mod...

Figure 6.2 Illustration of a simple functional model for DeltaIoT.

Figure 6.3 Illustration of a simple Markov model for DeltaIoT.

Figure 6.4 Illustration of a simple queuing model for DeltaIoT (flow from le...

Figure 6.5 Example of a formal model to predict the failure rate of a servic...

Figure 6.6 High‐level overview of the three strategies for using runtime mod...

Figure 6.7 Illustration of the strategy where MAPE components share runtime ...

Figure 6.8 Illustration of the strategy where MAPE components exchange runti...

Figure 6.9 Example architecture of the strategy where MAPE models share runt...

Figure 6.10 Illustration of the strategy where MAPE models share runtime mod...

Figure 6.11 Simple IoT network.

Figure 6.12 Example of messages generated by Mote [2].

Chapter 7

Figure 7.1 Illustration of goal models for DeltaIoT. Left: original goal mod...

Figure 7.2 Excerpt of a goal model for DeltaIoT with three awareness require...

Figure 7.3 Examples of feedback loop models for DeltaIoT: Monitor (top) and ...

Figure 7.4 Overview of a runtime architecture that realizes the third approa...

Chapter 8

Figure 8.1 Work flow of formal analysis of adaptation options.

Figure 8.2 Work flow of the selection of the best adaptation option.

Figure 8.3 Architecture of a feedback loop that uses exhaustive verification...

Figure 8.4 Excerpt of DTMC model for the service‐based health assistance sys...

Figure 8.5 Excerpt of verification results for the service‐based system.

Figure 8.6 Detail of some of the verification results for the service‐based ...

Figure 8.7 Architecture of a feedback loop that uses statistical verificatio...

Figure 8.8 Example of a formal model to predict energy consumption of config...

Figure 8.9 Excerpt of the verification results for DeltaIoT.

Figure 8.10 Architecture of a feedback loop that uses proactive decision‐mak...

Figure 8.11 Schedule of model execution with probabilistic model checking.

Figure 8.12 MDP that models the environment of RUBiS.

Figure 8.13 The four stages of the integrated process to tame uncertainty.

Figure 8.14 Model of the simple IoT network.

Chapter 9

Figure 9.1 Block diagram of a basic feedback control loop.

Figure 9.2 Overview of control properties for a response to a step change in...

Figure 9.3 SISO control system with its construction and operation phases.

Figure 9.4 Illustration of model building.

Figure 9.5 Illustration of incremental model updating (left) and model rebui...

Figure 9.6 Closed feedback loop system.

Figure 9.7 Experimental results of the SISO controller realization.

Figure 9.8 Operation phase of automatically generated MIMO control system.

Figure 9.9 Experimental results of the MIMO controller realization.

Figure 9.10 Operation phase of automatically constructed MPC control schema....

Figure 9.11 Experimental results of the automatic MPC controller realization...

Figure 9.12 High‐level overview of discrete event controller synthesis.

Chapter 10

Figure 10.1 General overview of feedback loop functions that can be supporte...

Figure 10.2 Reliability model of the service‐based health assistance system ...

Figure 10.3 Monitor enhanced with Bayesian estimator to keep a quality model...

Figure 10.4 Simulated estimations of failure rate for link … to … of the DTM...

Figure 10.5 Extended version of DeltaIoT with 37 nodes.

Figure 10.6 Left: adaptation space of DeltaIoT.v2 at some point in time. Rig...

Figure 10.7 Work flow of incremental classification.

Figure 10.8 Analyzer enhanced with a classifier to determine relevant adapta...

Figure 10.9 Left: relevant and explored adaptation options with classifier i...

Figure 10.10 Feedback loop architecture for the fuzzy learning approach.

Figure 10.11 Fuzzy membership functions for the variables of the auto‐scalin...

Figure 10.12 Excerpts of the evolution of q‐values.

Figure 10.13 Workload patterns for experiments with fuzzy Q‐learning.

Figure 10.14 Fuzzy membership functions used in the experiments with the aut...

Chapter 11

Figure 11.1 Evolution of the maturity of the field of self‐adaptation. Gray ...

Figure 11.2 Schematic overview of the Code of Ethics of IEEE/ACM; circles re...



An Introduction to Self-Adaptive Systems

A Contemporary Software Engineering Perspective

Danny Weyns

Katholieke Universiteit Leuven, Belgium


This edition first published 2021

© 2021 John Wiley & Sons Ltd

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Danny Weyns to be identified as the author of this work has been asserted in accordance with law.

Registered Offices

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office

The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty

While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging‐in‐Publication Data applied for

Hardback ISBN: 9781119574941

Cover Design: Wiley

Cover Image: © Takeshi.K/Getty Images

To Frankie

Foreword

From the earliest days of computing, theorists recognized that one of the most striking aspects of computation is its potential ability to change itself: rather than presenting users with a fixed set of computations determined at deployment, the system could at runtime modify both what it computes and how it computes it. However, while “self‐modification” was perhaps interesting from a theoretical point of view, few programming systems and engineering methods embraced this capability – the advantages of doing so were not obvious given the additional complexity of reasoning about system behavior and the potential for inadvertently making a really big mess of things.

Over the past decade, however, self‐adaptive systems have emerged as a fundamental element of modern software systems. Virtually all enterprise systems have built‐in adaptive mechanisms to handle faults, resource management, and attacks. Increasingly, systems are taking over tasks otherwise performed by humans in transportation (automated driving), medicine (assisted diagnosis), environmental control (smart buildings), and many other domains.

In the broad field of software engineering these changes have been mirrored in a number of seismic shifts. The first has been a shift in focus from development time to runtime. For most of its history, software engineering primarily focused on getting things “right” before a system was deployed. When problems were encountered, systems were taken off‐line and fixed before redeployment. This made sense because few systems required non‐stop availability and hence they could be taken down for “scheduled maintenance.” But today almost all public facing systems must be continuously available, requiring that systems be modifiable (either automatically or by developers) while they continue to operate.

A second shift has been the increasing level of uncertainty that accompanies modern systems. In the past, software was typically developed for a known environment using components that were largely under the control of the developers. Today systems work in much more uncertain contexts: loads can change dramatically; resources (such as network bandwidth) can vary substantially for mobile computing; faults can arise from interaction with other systems and resources outside the control of the developer; and attacks can emerge in unexpected ways.

A third shift has been an interest in automation to reduce the cost of operations. In the 1980s it was recognized that while the cost of acquiring or developing increasingly complex computing systems was steadily declining, the overall cost of ownership was rising. The reason for this was the need for system administration, which was taking up a larger and larger fraction of the IT operational budget. By automating many of the routine functions performed by administrators, systems would become more affordable. Moreover, arguably, for today's complex software systems, complete human oversight and control is simply not possible.

A fourth shift has been the commoditization of AI. Whereas for much of its existence AI had largely been relegated to special niches (e.g., robotics), the increasing availability of planners, machine learning, genetic algorithms, and game‐theoretic decision systems has made it possible to harness sophisticated reasoning and learning mechanisms in support of automation.

All of these shifts have led to a set of critical challenges for software engineers. What are the fundamental unifying principles that underlie self‐adaptive systems? How should one go about engineering them in a way that allows us to assure the system matches its requirements even as those requirements change after deployment? How can we provide safeguards against adaptation‐gone‐awry? How do we engender trust in systems where human oversight has been delegated to the machine? How do we decompose the engineering effort of self‐adaptive systems into manageable subtasks? How can we reuse elements of one adaptive system when constructing another?

The software engineering discipline of self‐adaptive systems attempts to answer these questions. It seeks to provide the principles, practices, and tools that will allow engineers to harness the vast potential of adaptation for engineering today's systems.

In doing this, the field of self‐adaptive systems has much to draw on from other disciplines: from control theory, techniques for maintaining a system's envelope of behavior within desired ranges; from biology and ecology, the ability of organisms and populations to respond to environmental changes; from immunology, organic mechanisms of self‐healing; from software architecture, patterns of structuring systems to enable predictable construction of adaptive systems; from fault tolerance, techniques for detecting and responding to faults; from AI, mechanisms that support autonomy. And many others.

All of this can lead to a rather confusing landscape of concepts and techniques, making it difficult for a software engineer to apply what we know about self‐adaptation to the building of software systems. This book by Danny Weyns provides exactly the right introduction for software engineers to navigate this fascinating, complex, and evolving area. It identifies foundational principles of the discipline and covers the broad terrain of the field. Through its “waves” approach, it nicely highlights the structure of the field and the influences and perspectives from other disciplines, without losing the fundamental focus on software engineering and applications. Additionally, the waves help to highlight the important research areas that have contributed synergistically to our current understanding of the field and that position us for further advancement.

Taken as a whole, this book provides the first comprehensive treatment of self‐adaptive systems targeted at software engineering students, practitioners, and researchers, and provides essential reading for each of these. For someone who is approaching this field for the first time it will provide a broad view of what is now known and practiced. For the experienced professional, it will provide concrete examples and techniques that can be put into practice. For the researcher, it will provide a structured view of the important prior work and of the open challenges facing the field.

February 2020

David Garlan

Professor, School of Computer Science

Carnegie Mellon University

Acknowledgments

This book has been developed in two stages over the past three years. The following colleagues provided me with particularly useful feedback that helped me to improve preliminary versions of this book: Carlo Ghezzi (Politecnico di Milano), Jeff Kramer (Imperial College London), Bradley Schmerl (Carnegie Mellon University, Pittsburgh), Thomas Vogel (Humboldt University Berlin), Gabriel A. Moreno (Carnegie Mellon University), Martina Maggio (Lund University), Antonio Filieri (Imperial College London), Marin Litoiu (York University), Vitor E. Silva Souza (Federal University of Espírito Santo), Radu Calinescu (University of York), Jeff Kephart (IBM T. J. Watson Research Center), Betty H.C. Cheng (Michigan State University), Nelly Bencomo (Aston University), Javier Camara Moreno (University of York), John Mylopoulos (University of Toronto), Sebastian Uchitel (University of Buenos Aires), Pooyan Jamshidi Dermani (University of South Carolina), Simos Gerasimou (University of York), Kenji Tei (Waseda University), Dimitri Van Landuyt (Katholieke Universiteit Leuven), Panagiotis (Panos) Patros (University of Waikato), Raffaela Mirandola (Polytechnic University of Milan), and Paola Inverardi (University of L'Aquila). These people have suggested improvements and pointed out mistakes. I thank everyone for providing me with very helpful comments.

I thank the members of the imec‐Distrinet research group for their support. I am particularly thankful to my colleague Danny Hughes, his former student Gowri Sankar Ramachandran, and the members of the Network task force for sharing their expertise on the Internet‐of‐Things. I want to express my sincere appreciation to my colleagues at Linnaeus University, in particular Jesper Andersson, for their continuous support.

I thank M. Usman Iftikhar, Stepan Shevtsov, Federico Quin, Omid Gheibi, Sara Mahdavi Hezavehi, Angelika Musil, Juergen Musil, Nadeem Abbas, and the other students I worked with at KU Leuven and Linnaeus University for their inspiration and collaboration.

I express my sincere appreciation to the monks of the abbeys of West‐Vleteren, Westmalle, Tongerlo, and Orval for their hospitality during my stays when working on this book.

Finally, I express my gratitude to Wiley for their support with the publication of this manuscript.

Danny Weyns

Acronyms

24/7

24 hours a day, seven days a week: all the time

A‐LTL

Adapt operator‐extended Linear Temporal Logic

ActivFORMS

Active FOrmal Models for Self‐adaptation

Amazon EC2

Amazon Elastic Compute Cloud

AMOCS‐MA

Automated Multi‐objective Control of Software with Multiple Actuators

AP

Atomic Propositions

C

Coulomb

CD‐ROM

Compact Disk Read Only Memory

CPU

Central Processing Unit

CTL

Computation Tree Logic

dB

deciBel

DCRG

Dynamic Condition Response Graph

DeltaIoT.v2

Advanced version of DeltaIoT

DeltaIoT

IoT application for building security monitoring

DiVA

Dynamic Variability in complex Adaptive systems

DTMC

Discrete‐Time Markov Chain

ENTRUST

ENgineering of TRUstworthy Self‐adaptive sofTware

EUREMA

ExecUtable RuntimE MegAmodels

F1‐score

Score that combines precision and recall to evaluate a classifier

FLAGS

Fuzzy Live Adaptive Goals for Self‐adaptive systems

FORMS

FOrmal Reference Model for Self‐adaptation

FQL4KE

Fuzzy Q‐Learning for Knowledge Evolution

FUSION

FeatUre‐oriented Self‐adaptatION

GDPR

General Data Protection Regulation

GORE

Goal‐Oriented Requirements Engineering

IBM

International Business Machines Corporation

IEEE

Institute of Electrical and Electronics Engineers

IoT

Internet‐of‐Things

ISO

International Organization for Standardization

KAMI

Keep Alive Models with Implementation

KAOS

Knowledge Acquisition in Automated Specification

LTS

Labeled Transition System

MAPE‐K

Monitor‐Analyze‐Plan‐Execute‐Knowledge

MARTAS

Models At Runtime And Statistical techniques

mC

milli Coulomb

MDP

Markov Decision Process

MIMO system

Multiple‐Input Multiple‐Output control system

MIT

Massachusetts Institute of Technology

MJ

Mega Joules

MoRE

Model‐Based Reconfiguration Engine

MPC

Model Predictive Control

NATO

North Atlantic Treaty Organization

OSGi

OSGi Alliance, formerly known as the Open Service Gateway Initiative

PCTL

Probabilistic Computation Tree Logic

PI controller

Proportional‐Integral controller

PLTS

Probabilistic Labeled Transition System

PRISM

PRobabIlistic Symbolic Model checker

Q‐learning

Classic reinforcement learning algorithm

QoS

Quality of Service

QoSMOS

Quality of Service Management and Optimization of Service‐based systems

RELAX

Requirements specification language for dynamically adaptive systems

RFID

Radio‐Frequency IDentification

RUBiS

Open source auction site prototype modeled after eBay.com

SASO

Stability, Accuracy, Settling time, Overshoot control properties

SAVE

Self‐Adaptive Video Encoder

SIMCA

Simplex Control Adaptation

SISO system

Single‐Input Single‐Output control system

SNR

Signal to Noise Ratio

SSIM

Structural Similarity Index Metric

UML

Unified Modeling Language

Uppaal

Integrated model checking suite for networks of automata

US

United States

UUV

Unmanned Underwater Vehicle

WiFi

Wireless networking technology based on the IEEE 802.11 standards

Z‐transform

Transformation of a discrete time to a frequency domain representation

ZNN.com

News service that serves multimedia news content to customers

Introduction

Back in 1968, the North Atlantic Treaty Organization (NATO) organized the first conference on Software Engineering, in Garmisch, Germany. At the time, managers and software engineers perceived a “software crisis,” referring to the manageability problems of software projects and software that was not delivering its objectives. One of the key identified causes for the crisis was the growing gap between the rapidly increasing power of computing systems and the ability of programmers to effectively exploit the capabilities of these systems. The crisis was reflected in projects running over‐budget and over‐time, software of low quality that did not meet requirements, and code that was difficult to maintain. This crisis triggered the development of novel programming paradigms, methods and processes to assure software quality. While today large and complex software projects remain vulnerable to unanticipated problems, the causes that underlay this first software crisis are now relatively well under the control of project managers and software engineers.

About 35 years later, in 2001, IBM released a manifesto that referred to another “looming software crisis,” this time caused by the increasing complexity of installing, configuring, tuning, and maintaining computing systems. New emerging computing systems at that time went beyond company boundaries into the Internet, introducing new levels of complexity that could hardly be managed, even by the most skilled system administrators. The complexity resulted from various internal and external factors, causing uncertainties that were difficult to anticipate before deployment. Examples are the scale of the system; inherent distribution of the software system, which may span administrative domains; dynamics in the availability of resources and services; external threats to systems; faults that may be difficult to predict; and changes in user goals during operation. A consensus grew that self‐management was the only viable option to tackle the problems that caused this complexity crisis. Self‐management refers to computing systems that can adapt autonomously to achieve their goals based on high‐level objectives. Such computing systems are usually called self‐adaptive systems.

From the outset in the early 2000s, there was a common understanding among researchers and engineers that realizing the full potential of self‐adaptive systems would take a long‐term and worldwide effort across a diversity of fields. Over the past two decades, communities of different fields have put extensive efforts in understanding the foundational principles of self‐adaptation as well as devising techniques and methods to engineer self‐adaptive systems. This text aims at providing a comprehensive overview of the field of self‐adaptation by consolidating key knowledge obtained from these efforts.

Introducing self‐adaptive systems is challenging given the diversity of research topics, engineering methods, and application domains that are part of this field. To tackle this challenge, this text is based on six pillars.

First, we lay a foundation for what constitutes a self‐adaptive system by introducing two generally acknowledged, but complementary basic principles. These two principles enable us to characterize self‐adaptive systems and distinguish them from other related types of systems. From the basic principles, a conceptual model of a self‐adaptive system is derived, which offers a basic vocabulary that we use throughout the text.

Second, the core of the text, which focuses on how self‐adaptive systems are engineered, is partitioned into convenient chunks driven by research and engineering efforts over time. In particular, the text approaches the engineering of self‐adaptive systems in seven waves. These waves put complementary aspects of engineering self‐adaptive systems in focus that synergistically have contributed to the current body of knowledge in the field. Each wave highlights a trend of interest in the research community. Some of the earlier waves have stabilized now and resulted in common knowledge in the community. Other more recent waves are still very active and the subject of debate; the knowledge of these waves has not been fully consolidated yet.

Third, throughout the text we use a well‐thought‐out set of applications to illustrate the material with concrete examples. We use a simple service‐based application to illustrate the basic principles and the conceptual model of self‐adaptive systems. Before the core part of the text that zooms in on the seven waves of research on engineering self‐adaptive systems, we introduce a practical Internet‐of‐Things application that we use as the main case to illustrate the characteristics of the different waves. In addition, we use a variety of cases from different contemporary domains to illustrate the material, including a client‐server system, a mobile service, a geo‐localization service, unmanned vehicles, video compression, different Web applications, and a Cloud system.

Fourth, each core chapter of the book starts with a list of learning outcomes at different orders of thinking (from understanding to synthesis) and concludes with a series of exercises. The exercises are defined at four different levels of complexity, characterized by four letters that refer to the expected average time required for solving the exercises. Level H requires a basic understanding of the material of the chapter; these exercises should be solvable in a number of person‐hours. Level D requires an in‐depth understanding of the material of the chapter; these exercises should be solvable within person‐days. Level W requires the study of some additional material beyond the material in the chapter; these exercises should be solvable within person‐weeks. Finally, level M requires the development of novel solutions based on the material provided in the corresponding chapter; these exercises require an effort of person‐months. The final chapter discusses the maturity of the field and outlines open challenges for research in self‐adaptation, which can serve as further inspiration for future research endeavors, for instance as a starting point for PhD projects.

Fifth, each chapter concludes with bibliographic notes. These notes point to foundational research papers of the different parts of the chapter. In addition, the notes highlight some characteristic work and provide pointers to background material. The material referred to in the bibliographic notes is advised for further reading.

Sixth, supplementary material is freely available for readers, students, and teachers at the book website: https://introsas.cs.kuleuven.be/. The supplementary material includes slides for educational purposes, selected example solutions of exercises, models and code that can be used for the exercises, and complementary material that elaborates on specific material from the book.

As such, this manuscript provides a starting point for students, researchers, and engineers who want to familiarize themselves with the field of self‐adaptation. The text aims to offer a solid basis for those who are interested in self‐adaptation to obtain the required skill set to understand the fundamental principles and engineering methods of self‐adaptive systems.

The principles of self‐adaptation have their roots in software architecture, model‐based engineering, formal specification languages, and principles of control theory and machine learning. It is expected that readers are familiar with the basics of these topics when starting with our book, although some basic aspects are introduced in the respective chapters.

1 Basic Principles of Self‐Adaptation and Conceptual Model

Modern software‐intensive systems are expected to operate under uncertain conditions, without interruption. Possible causes of uncertainties include changes in the operational environment, dynamics in the availability of resources, and variations of user goals. Traditionally, it is the task of system operators to deal with such uncertainties. However, such management tasks can be complex, error‐prone, and expensive. The aim of self‐adaptation is to let the system collect additional data about the uncertainties during operation in order to manage itself based on high‐level goals. The system uses the additional data to resolve uncertainties and, based on its goals, reconfigures or adjusts itself to satisfy the changing conditions.

Consider as an example a simple service‐based health assistance system as shown in Figure 1.1. The system takes samples of vital parameters of patients; it also enables patients to invoke a panic button in case of an emergency. The parameters are analyzed by a medical service that may invoke additional services to take actions when needed; for instance, a drug service may need to notify a local pharmacy to deliver new medication to a patient. Each service type can be realized by one of multiple service instances provided by third‐party service providers. These service instances are characterized by different quality properties, such as failure rate and cost. Typical examples of uncertainties in this system are the patterns in which particular paths in the workflow are invoked, which depend on the health conditions of the users and their behavior. Other uncertainties are the available service instances, their actual failure rates, and the costs to use them. These parameters may change over time, for instance due to changing workloads or unexpected network failures.

Figure 1.1 Architecture of a simple service‐based health assistance system

Anticipating such uncertainties during system development, or letting system operators deal with them during operation, is often difficult, inefficient, or too costly. Moreover, since many software‐intensive systems today need to be operational 24/7, the uncertainties necessarily need to be resolved at runtime when the missing knowledge becomes available. Self‐adaptation is about how a system can mitigate such uncertainties autonomously or with minimum human intervention.

The basic idea of self‐adaptation is to let the system collect new data (that was missing before deployment) during operation when it becomes available. The system uses the additional data to resolve uncertainties, to reason about itself, and based on its goals to reconfigure or adjust itself to maintain its quality requirements or, if necessary, to degrade gracefully.

In this chapter, we explain what a self‐adaptive system is. We define two basic principles that determine the essential characteristics of self‐adaptation. These principles allow us to define the boundaries of what we mean by a self‐adaptive system in this book, and to contrast self‐adaptation with other approaches that deal with changing conditions during operation. From the two principles, we derive a conceptual model of a self‐adaptive system that defines the basic elements of such a system. The conceptual model provides a basic vocabulary for the remainder of this book.

LEARNING OUTCOMES

To explain the basic principles of self‐adaptation.

To understand how self‐adaptation relates to other adaptation approaches.

To describe the conceptual model of a self‐adaptive system.

To explain and illustrate the basic concepts of a self‐adaptive system.

To apply the conceptual model to a concrete self‐adaptive application.

1.1 Principles of Self‐Adaptation

There is no general agreement on a definition of the notion of self‐adaptation. However, there are two common interpretations of what constitutes a self‐adaptive system.

The first interpretation considers a self‐adaptive system as a system that is able to adjust its behavior in response to the perception of changes in the environment and the system itself. The self prefix indicates that the system decides autonomously (i.e. without or with minimal human intervention) how to adapt to accommodate changes in its context and environment. Furthermore, a prevalent aspect of this first interpretation is the presence of uncertainty in the environment or the domain in which the software is deployed. To deal with these uncertainties, the self‐adaptive system performs tasks that are traditionally done by operators. Hence, the first interpretation takes the stance of the external observer and looks at a self‐adaptive system as a black box. Self‐adaptation is considered as an observable property of a system that enables it to handle changes in external conditions, availability of resources, workloads, demands, and failures and threats.

The second interpretation contrasts traditional “internal” mechanisms that enable a system to deal with unexpected or unwanted events, such as exceptions in programming languages and fault‐tolerant protocols, with “external” mechanisms that are realized by means of a closed feedback loop that monitors and adapts the system behavior at runtime. This interpretation emphasizes a “disciplined split” between two distinct parts of a self‐adaptive system: one part that deals with the domain concerns and another part that deals with the adaptation concerns. Domain concerns relate to the goals of the users for which the system is built; adaptation concerns relate to the system itself, i.e. the way the system realizes the user goals under changing conditions. The second interpretation takes the stance of the engineer of the system and looks at self‐adaptation from the point of view of how the system is conceived.

Hence, we introduce two complementary basic principles that determine what a self‐adaptive system is:

External principle:

A self‐adaptive system is a system that can handle changes and uncertainties in its environment, the system itself, and its goals autonomously (i.e. without or with minimal required human intervention).

Internal principle:

A self‐adaptive system comprises two distinct parts: the first part interacts with the environment and is responsible for the domain concerns – i.e. the concerns of users for which the system is built; the second part consists of a feedback loop that interacts with the first part (and monitors its environment) and is responsible for the adaptation concerns – i.e. concerns about the domain concerns.

Let us illustrate how the two principles of self‐adaptation apply to the service‐based health assistance system. Self‐adaptation would enable the system to deal with dynamics in the types of services that are invoked by the system as well as variations in the failure rates and costs of particular service instances. Such uncertainties may be hard to anticipate before the system is deployed (external principle). To that end, the service‐based system could be enhanced with a feedback loop. This feedback loop tracks the paths of services that are invoked in the workflow, as well as the failure rates of service instances and the costs of invoking service instances that are provided by the service providers. Taking this data into account, the feedback loop adapts the selection of service instances by the workflow engine such that a set of adaptation concerns is achieved. For instance, services are selected that keep the average failure rate below a required threshold, while the cost of using the health assistance system is minimized (internal principle).
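To make this illustration more tangible, the following sketch shows one way a feedback loop could select service instances based on monitored failure rates and costs. It is a minimal sketch in Python; the service names, quality values, threshold, and the per‐instance check (the scenario in the text uses the average failure rate) are assumptions made purely for illustration.

# Sketch of the adaptation concern for the health assistance example.
# All identifiers, quality values, and the threshold are illustrative assumptions.

FAILURE_RATE_THRESHOLD = 0.05  # required maximum failure rate (assumed value)

# Knowledge gathered by monitoring: per service type, the candidate instances
# with their currently estimated failure rate and invocation cost.
monitored_instances = {
    "medical": [{"id": "MS1", "failure_rate": 0.02, "cost": 8},
                {"id": "MS2", "failure_rate": 0.07, "cost": 5}],
    "drug":    [{"id": "DS1", "failure_rate": 0.01, "cost": 3},
                {"id": "DS2", "failure_rate": 0.04, "cost": 2}],
    "alarm":   [{"id": "AS1", "failure_rate": 0.03, "cost": 6}],
}

def select_configuration(instances, threshold):
    """Per service type, pick the cheapest instance whose estimated failure
    rate satisfies the threshold; fall back to the least costly instance
    if none satisfies it (graceful degradation)."""
    selection = {}
    for service_type, candidates in instances.items():
        admissible = [c for c in candidates if c["failure_rate"] <= threshold]
        chosen = min(admissible or candidates, key=lambda c: c["cost"])
        selection[service_type] = chosen["id"]
    return selection

print(select_configuration(monitored_instances, FAILURE_RATE_THRESHOLD))
# {'medical': 'MS1', 'drug': 'DS2', 'alarm': 'AS1'}

Note how the workflow engine keeps delivering health assistance unchanged (domain concern); only the binding of service instances is adjusted (adaptation concern).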

1.2 Other Adaptation Approaches

The ability of a software‐intensive system to adapt at runtime in order to achieve its goals under changing conditions is not exclusive to self‐adaptation; it can also be realized in other ways.

The field of autonomous systems has a long tradition of studying systems that can change their behavior during operation in response to events that may not have been anticipated fully. A central idea of autonomous systems is to mimic human (or animal) behavior, which has been a source of inspiration for a very long time. The area of cybernetics founded by Norbert Wiener at MIT in the mid‐twentieth century led to the development of various types of machines that exposed seemingly “intelligent” behavior similar to biological systems. Wiener's work contributed to the foundations of various fields, including feedback control, automation, and robotics. The interest in autonomous systems has expanded significantly in recent years, with high‐profile application domains such as autonomous vehicles. While these applications have enormous potential, their successes so far have also been accompanied by some dramatic failures, such as the accidents caused by first‐generation autonomous cars. The consequences of such failures demonstrate the real technical difficulties associated with realizing truly autonomous systems.

An important sub‐field of autonomous systems is multi‐agent systems, which studies the coordination of autonomous behavior of agents to solve problems that go beyond the capabilities of single agents. This study involves architectures of autonomous agents, communication and coordination mechanisms, and supporting infrastructure. An important aspect is the representation of knowledge and its use to coordinate autonomous behavior of agents. Self‐organizing systems emphasize decentralized control. In a self‐organizing system, simple reactive agents apply local rules to adapt their interactions with other agents in response to changing conditions in order to cooperatively realize the system goals. In such systems, the global macroscopic behavior emerges from the local interactions of the agents. However, emergent behavior can also appear as an unwanted side effect, for example in the form of oscillations. Designing decentralized systems that expose the required global behavior while avoiding unwanted emergent phenomena remains a major challenge.

Context‐awareness is another traditional field that is related to self‐adaptation. Context‐awareness puts the emphasis on handling relevant elements in the physical environment as first‐class citizens in system design and operation. Context‐aware computing systems are concerned with the acquisition of context (e.g. through sensors to perceive a situation), the representation and understanding of context, and the steering of behavior based on the recognized context (e.g. triggering actions based on the actual context). Context‐aware systems typically have a layered architecture, where a context manager or dedicated middleware is responsible for sensing and dealing with context changes. Self‐aware computing systems contrast with context‐aware computing systems in the sense that these systems capture and learn knowledge not only about the environment but also about themselves. This knowledge is encoded in the form of runtime models, which a self‐aware system uses to reason at runtime, enabling it to act in accordance with higher‐level goals.

1.3 Scope of Self‐Adaptation

Autonomous systems, multi‐agent systems, self‐organizing systems, and context‐aware systems are families of systems that apply classical approaches to deal with change at runtime. However, these approaches do not align with the combined basic principles of self‐adaptation. In particular, none of these approaches complies with the second principle, which makes an explicit distinction between a part of the system that handles domain concerns and a part that handles adaptation concerns. Nevertheless, the second principle of self‐adaptation can be applied to each of these approaches – i.e. these systems can be enhanced with a feedback loop that deals with a set of adaptation concerns. This book is concerned with self‐adaptation as a property of a computing system that is compliant with the two basic principles of self‐adaptation.

Furthermore, self‐adaptation can be applied at different levels of the software stack of computing systems, from the underlying resources and low‐level computing infrastructure to middleware services and application software. The challenges of self‐adaptation at these different levels are different. For instance, the space of adaptation options of higher‐level software entities is often multi‐dimensional, and software qualities and adaptation goals usually have a complex interplay. These characteristics are less applicable to the adaptation of lower‐level resources, where there is often a more straightforward relation between adaptation actions and software qualities. In this book, we consider self‐adaptation applied at different levels of the software stack of computing systems, from virtualized resources up to application software.

1.4 Conceptual Model of a Self‐Adaptive System

Starting from the two basic principles of self‐adaptation, we define a conceptual model for self‐adaptive systems that describes the basic elements of such systems and the relationship between them. The basic elements are intentionally kept abstract and general, but they are compliant with the basic principles of self‐adaptation. The conceptual model introduces a basic vocabulary for the field of self‐adaptation that we will use throughout this book. Figure 1.2 shows the conceptual model of a self‐adaptive system.

Figure 1.2 Conceptual model of a self‐adaptive system

The conceptual model comprises four basic elements: environment, managed system, feedback loop, and adaptation goals. The feedback loop together with the adaptation goals form the managing system. We discuss the elements one by one and illustrate them for the service‐based health assistance application.
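The relations between these elements can be summarized in a short sketch. The class and method names below are assumptions chosen for illustration; they merely fix the vocabulary used in the remainder of the chapter and are not notation from the book.

# Sketch of the conceptual model of a self-adaptive system.
# Class and method names are illustrative assumptions.

class Environment:
    """Part of the external world the system interacts with; it can be sensed
    and effected, but it is not under the control of the system's engineer."""
    def sense(self): ...
    def effect(self, action): ...

class ManagedSystem:
    """Application software that realizes the domain concerns for the users."""
    def __init__(self, environment):
        self.environment = environment
    def probe(self): ...                     # sensors used by the managing system
    def adapt(self, new_configuration): ...  # effectors used by the managing system

class ManagingSystem:
    """Feedback loop plus adaptation goals: concerns about the domain concerns,
    i.e. about how the managed system realizes the user goals."""
    def __init__(self, managed_system, adaptation_goals):
        self.managed_system = managed_system
        self.adaptation_goals = adaptation_goals  # may themselves evolve
    def monitor(self): ...  # observe the managed system and the environment
    def adapt(self): ...    # reconfigure the managed system when goals are violated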

1.4.1 Environment

The environment refers to the part of the external world with which a self‐adaptive system interacts and in which the effects of the system will be observed and evaluated. The environment can include users as well as physical and virtual elements. The distinction between the environment and the self‐adaptive system is made based on the extent of control. The environment can be sensed and effected through sensors and effectors, respectively. However, as the environment is not under the control of the software engineer of the system, there may be uncertainty in terms of what is sensed by the sensors or what the outcomes of the effectors will be.

Applied to the service‐based health assistance system example, the environment includes the patients that make use of the system; the application devices with the sensors that measure vital parameters of patients and the panic buttons; the service providers with the service instances they offer; and the network connections used in the system, which may all affect the quality properties of the system.

1.4.2 Managed System

The managed system comprises the application software that realizes the functions of the system for its users. Hence, the concerns of the managed system are concerns over the domain, i.e. the environment of the system. Different terminology has been used to refer to the managed system, such as managed element, system layer, core function, base‐level system, and controllable plant. In this book, we systematically use the term managed system. To realize its functions for the users, the managed system senses and effects the environment. To support adaptations, the managed system needs to be equipped with sensors to enable monitoring and effectors (also called actuators) to execute adaptation actions. Safely executing adaptations requires that actions applied to the managed system do not interfere with the regular system activity. In general, they may affect ongoing activities of the system – for instance, scaling a Cloud system might require bringing down a container and restarting it.

A classic approach to realizing safe adaptations is to apply adaptation actions only when a system (or the parts that are subject to adaptation) is in a quiescent state. A quiescent state is a state where no activity is going on in the managed system or the parts of it that are subject to adaptation so that the system can be safely updated. Support for quiescence requires an infrastructure to deal with messages that are invoked during adaptations; this infrastructure also needs to handle the state of the adapted system or the relevant parts of it to ensure its consistency before and after adaptation. Handling such messages and ensuring consistency of state during adaptations are in general difficult problems. However, numerous infrastructures have been developed to support safe adaptations for particular settings. A well‐known example is the OSGi (Open Service Gateway Initiative) Java framework, which supports installing, starting, stopping, and updating arbitrary components (bundles in OSGi terminology) dynamically.
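The following sketch illustrates quiescence in miniature: a service binding is replaced only when no invocation is in flight. The bookkeeping shown here is an assumption made for illustration and is far simpler than what real infrastructures such as OSGi provide.

import threading
import time

# Sketch of adapting a service binding only in a quiescent state.
# The invocation counting below is an illustrative assumption.

class ServiceBinding:
    def __init__(self, instance_id):
        self.instance_id = instance_id
        self.ongoing_invocations = 0
        self._lock = threading.Lock()

    def invoke(self, request):
        with self._lock:
            self.ongoing_invocations += 1
        try:
            return f"{self.instance_id} handled {request}"  # call the real service here
        finally:
            with self._lock:
                self.ongoing_invocations -= 1

    def replace_when_quiescent(self, new_instance_id, poll_interval=0.01):
        """Swap the bound instance only when no invocation is in flight,
        so the state of the adapted part stays consistent."""
        while True:
            with self._lock:
                if self.ongoing_invocations == 0:  # quiescent: safe to adapt
                    self.instance_id = new_instance_id
                    return
            time.sleep(poll_interval)

binding = ServiceBinding("MS1")
print(binding.invoke("vital-parameter sample"))
binding.replace_when_quiescent("MS2")
print(binding.invoke("vital-parameter sample"))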

The managed system of the service‐based health assistance system consists of a service workflow that realizes the system functions. In particular, a medical service receives messages from patients with values of their vital parameters. The service analyzes the data and either invokes a drug service to notify a local pharmacy to deliver new medication to the patient or to change the dose of medication, or it invokes an alarm service in case of an emergency to notify medical staff to visit the patient. The alarm service can also be invoked directly by a patient via a panic button. To support adaptation, the workflow infrastructure offers sensors to track the relevant aspects of the system and the characteristics of service instances (failure rate and cost). The infrastructure allows the selection and use of concrete instances of the different types of services that are required by the system. Finally, the workflow infrastructure needs to provide support to change service instances in a consistent manner by ensuring that a service is only removed and replaced when it is no longer involved in any ongoing service invocation of the health assistance system.
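As a rough impression of the domain logic only, the workflow could be sketched as below. The vital‐parameter conditions, thresholds, and function names are invented for illustration and are not taken from the book.

# Sketch of the managed system's workflow for the health assistance example.
# Conditions, thresholds, and names are illustrative assumptions.

def medical_service(sample):
    """Analyze a vital-parameter sample and invoke a follow-up service if needed."""
    if sample["panic_button"] or sample["heart_rate"] > 120:  # assumed emergency
        return alarm_service(sample["patient"])
    if sample["glucose"] > 180:                               # assumed medication case
        return drug_service(sample["patient"], dose="adjusted")
    return "no action needed"

def drug_service(patient, dose):
    return f"pharmacy notified to deliver {dose} medication to {patient}"

def alarm_service(patient):
    return f"medical staff notified to visit {patient}"

print(medical_service({"patient": "P1", "panic_button": False,
                       "heart_rate": 80, "glucose": 200}))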

1.4.3 Adaptation Goals

Adaptation goals represent concerns of the managing system over the managed system; adaptation goals relate to quality properties of the managed system. In general, four principal types of high‐level adaptation goals can be distinguished: self‐configuration (i.e. systems that configure themselves automatically), self‐optimization (systems that continually seek ways to improve their performance or reduce their cost), self‐healing (systems that detect, diagnose, and repair problems resulting from bugs or failures), and self‐protection (systems that defend themselves from malicious attacks or cascading failures).

Since the system uses the adaptation goals to reason about itself during operation, the goals need to be represented in a machine‐readable format. Adaptation goals are often expressed in terms of the uncertainty they have to deal with. Example approaches are the specification of quality of service goals using probabilistic temporal logics that allow for probabilistic quantification of properties, the specification of fuzzy goals whose satisfaction is represented through fuzzy constraints, and a declarative specification of goals (in contrast to enumeration), which introduces flexibility in how goals are specified. Adaptation goals can themselves be subject to change, which is represented in Figure 1.2 by means of the evolve interface. Adding new goals or removing goals during operation requires updates of the managing system, and often also updates of probes and effectors.
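A minimal sketch of machine‐readable goals for the health assistance example is given below. The class names and threshold value are assumptions made for illustration; richer representations such as probabilistic temporal logics are discussed later in the book.

# Sketch of machine-readable adaptation goals; names and values are assumptions.

class ThresholdGoal:
    """Self-healing style goal: keep a quality property below a threshold."""
    def __init__(self, quality, threshold):
        self.quality, self.threshold = quality, threshold
    def satisfied(self, estimates):
        return estimates[self.quality] <= self.threshold
    def evolve(self, new_threshold):
        # goals may themselves change during operation (evolve interface)
        self.threshold = new_threshold

class OptimizationGoal:
    """Self-optimization style goal: prefer configurations with a lower value."""
    def __init__(self, quality):
        self.quality = quality
    def better(self, estimates_a, estimates_b):
        return estimates_a[self.quality] <= estimates_b[self.quality]

failure_goal = ThresholdGoal("failure_rate", 0.05)
cost_goal = OptimizationGoal("cost")
print(failure_goal.satisfied({"failure_rate": 0.03, "cost": 10}))  # True
failure_goal.evolve(0.02)  # a stakeholder tightens the threshold at runtime
print(failure_goal.satisfied({"failure_rate": 0.03, "cost": 10}))  # False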

In the health assistance application, the system dynamically selects service instances under changing conditions to keep the failure rate over a given period below a required threshold (self‐healing goal), while the cost is minimized (optimization goal). Stakeholders may change the threshold value for the failure rate during operation, which may require just a simple update of the corresponding threshold value. On the other hand, adding a new adaptation goal, for instance to keep the average response time of invocations of the assistance service below a required threshold, would be more invasive and would require an evolution of the adaptation goals and the managing system.
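As a minimal sketch of what a machine-readable representation of these goals could look like, the code below expresses the self-healing goal as a simple threshold check and the optimization goal as a cost comparison over candidate configurations. This is deliberately simpler than the probabilistic or fuzzy specification approaches mentioned above; the Goal interface and the SystemConfiguration type are illustrative assumptions, not part of the book's reference implementation.

```java
// Illustrative, threshold-based representation of the adaptation goals.
interface Goal {
    boolean isSatisfied(SystemConfiguration config);
}

class SystemConfiguration {
    double estimatedFailureRate;  // expected failure rate over the given period
    double estimatedCost;         // expected cost of the selected service instances
}

// Self-healing goal: keep the failure rate below a threshold that
// stakeholders may update at runtime.
class FailureRateGoal implements Goal {
    volatile double threshold;

    FailureRateGoal(double threshold) { this.threshold = threshold; }

    public boolean isSatisfied(SystemConfiguration config) {
        return config.estimatedFailureRate < threshold;
    }
}

// Optimization goal: among the configurations that satisfy the threshold
// goals, prefer the one with minimal cost.
class CostGoal {
    SystemConfiguration selectCheapest(java.util.List<SystemConfiguration> candidates) {
        return candidates.stream()
                .min(java.util.Comparator.comparingDouble(c -> c.estimatedCost))
                .orElseThrow();
    }
}
```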

1.4.4 Feedback Loop

The adaptation of the managed system is realized by the managing system. Different terms are used in the literature for the concept of managing system, such as autonomic manager, adaptation engine, reflective system, and controller. Conceptually, the managing system realizes a feedback loop that manages the managed system. The feedback loop comprises the adaptation logic that deals with one or more adaptation goals. To realize the adaptation goals, the feedback loop monitors the environment and the managed system, and adapts the latter when necessary. With a reactive policy, the feedback loop responds to a violation of the adaptation goals by adapting the managed system to a new configuration that complies with the adaptation goals. With a proactive policy, the feedback loop tracks the behavior of the managed system and adapts the system to anticipate a possible violation of the adaptation goals.
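The skeleton below sketches a reactive feedback loop of this kind: it repeatedly monitors the managed system, checks the adaptation goals, and plans and executes an adaptation when a violation is detected. The Probe, Effector, Snapshot, and Configuration abstractions are assumptions introduced for illustration rather than a concrete framework API, and the analysis and planning steps are left as stubs.

```java
// Minimal sketch of a reactive feedback loop that manages a managed system.
interface Probe { Snapshot collect(); }              // senses system and environment
interface Effector { void apply(Configuration c); }  // adapts the managed system
class Snapshot { /* monitored data about the system and its environment */ }
class Configuration { /* a candidate configuration of the managed system */ }

class ReactiveFeedbackLoop implements Runnable {
    private final Probe probe;
    private final Effector effector;

    ReactiveFeedbackLoop(Probe probe, Effector effector) {
        this.probe = probe;
        this.effector = effector;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            Snapshot snapshot = probe.collect();                   // monitor
            if (goalsViolated(snapshot)) {                         // analyze
                Configuration next = planConfiguration(snapshot);  // plan
                effector.apply(next);                              // execute
            }
            pause();                                               // wait for the next cycle
        }
    }

    private boolean goalsViolated(Snapshot s) { return false; /* stub: check adaptation goals */ }
    private Configuration planConfiguration(Snapshot s) { return new Configuration(); /* stub */ }
    private void pause() {
        try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```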

An important requirement for a managing system is support for fail-safe operating modes. When a condition is detected that calls for such a mode, the managing system can switch to a fall-back or degraded mode during operation. An example of a condition that may require the managing system to switch to a fail-safe configuration is the inability to find, within the time window available to make an adaptation decision, a new configuration of the managed system that achieves the adaptation goals. Note that instead of falling back to a fail-safe configuration when the goals cannot be achieved, the managing system may also offer a stakeholder the possibility to decide on the action to take.
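A minimal sketch of such a fail-safe mechanism is shown below, reusing the illustrative Configuration type from the previous sketch: the planner searches for a configuration that satisfies the goals within a given time budget and falls back to a predefined fail-safe configuration when the budget is exhausted (alternatively, it could escalate the decision to a stakeholder at that point). The FailSafePlanner type and its parameters are assumptions introduced for illustration.

```java
// Illustrative planner with a time budget and a predefined fail-safe configuration.
class FailSafePlanner {
    private final Configuration failSafeConfiguration;

    FailSafePlanner(Configuration failSafeConfiguration) {
        this.failSafeConfiguration = failSafeConfiguration;
    }

    Configuration plan(java.util.List<Configuration> candidates,
                       java.util.function.Predicate<Configuration> satisfiesGoals,
                       long timeBudgetMillis) {
        long deadline = System.currentTimeMillis() + timeBudgetMillis;
        for (Configuration candidate : candidates) {
            if (System.currentTimeMillis() > deadline) {
                break;                       // time window for the adaptation decision exhausted
            }
            if (satisfiesGoals.test(candidate)) {
                return candidate;            // a configuration that meets the adaptation goals
            }
        }
        return failSafeConfiguration;        // degraded but safe operating mode
    }
}
```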

The managing system may consist of a single level that conceptually consists of one feedback loop with a set of adaptation goals, as shown in Figure 1.2. However, the managing system may also have a layered structure, where each layer conceptually consists of a feedback loop with its own goals. In this case, each layer manages the layer beneath it – i.e. layer n manages layer n-1, and layer 1 manages the managed system. In practice, most self-adaptive systems have a managing system that consists of just one layer. In systems where additional layers are applied, the number of additional layers is usually limited to one or two. For instance, a managing system may have two layers: the bottom layer may react quickly to changes and adapt the managed system when needed, while the top layer may reason over long-term strategies and adapt the underlying layer accordingly.

The managing system can operate completely automatically without intervention of stakeholders, or stakeholders may be involved in support for certain functions realized by the feedback loop; this is shown in Figure 1.2 by means of the generic support interface. We already gave an example above where a stakeholder could support the system with handling a fail‐safe situation. Another example is a managing system that detects a possible threat to the system. Before activating a possible reconfiguration to mitigate the threat, the managing system may check with a stakeholder whether the adaptation should be applied or not.

The managing system can be subject to change itself, which is represented in Figure 1.2 with the evolve interface. On‐the‐fly changes of the managing systems are important for two main reasons: (i) to update a feedback loop to resolve a problem or a bug (e.g. add or replace some functionality), and (ii) to support changing adaptation goals, i.e. change or remove an existing goal or add a new goal. The need for evolving the feedback loop model is triggered by stakeholders either based on observations obtained from the executing system or because stakeholders want to change the adaptation goals.

The managing system of the service-based health assistance system comprises a feedback loop that is added to the service workflow. The task of the feedback loop is to ensure that the adaptation goals are realized. To that end, the feedback loop monitors the system behavior and the quality properties of service instances, and tracks whether the system violates the adaptation goals. With a reactive policy, the feedback loop will select alternative service instances that ensure the adaptation goals are met in the event that goal violations are detected. If no configuration can be found that complies with the adaptation goals within a given time (fail-safe operating mode), the managing system may involve a stakeholder to decide on the adaptation action to take.

The feedback loop that adapts the service instances to ensure that the adaptation goals are realized may be extended with a second layer that adapts the underlying method that makes the adaptation decisions. For instance, this second layer may track the quality properties of service instances over time and identify patterns. The second layer can then use this knowledge to instruct the underlying feedback loop to give preference to particular service instances or to avoid the selection of certain instances. For instance, services that expose a high level of failures during particular periods of the day may temporarily be excluded from selection to avoid harming the trustworthiness of the system.

As explained above, when a new adaptation goal is added to the system, for instance to keep the average response time of invocations of the assistance service below a required threshold, the managing system will need to be updated. In this case, the managing system will need to be updated such that it makes adaptation decisions based on three adaptation goals instead of two.
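The following sketch illustrates how the adaptation decision and the second layer's influence on it might look for this example: the selector picks, per service type, the cheapest instance whose observed failure rate meets the threshold, and the second layer can temporarily exclude instances it considers untrustworthy. The ServiceInstance and ServiceSelector types and their fields are assumptions introduced for illustration.

```java
// Illustrative adaptation decision for the health assistance example.
class ServiceInstance {
    final String id;
    final double failureRate;   // observed failure rate of this instance
    final double cost;          // cost per invocation
    ServiceInstance(String id, double failureRate, double cost) {
        this.id = id; this.failureRate = failureRate; this.cost = cost;
    }
}

class ServiceSelector {
    private final java.util.Set<String> excluded = new java.util.HashSet<>();

    // Invoked by the second layer to temporarily exclude untrustworthy instances.
    void exclude(String instanceId) { excluded.add(instanceId); }
    void readmit(String instanceId) { excluded.remove(instanceId); }

    // Reactive adaptation decision: cheapest instance that meets the failure-rate goal.
    java.util.Optional<ServiceInstance> select(java.util.List<ServiceInstance> instances,
                                               double failureRateThreshold) {
        return instances.stream()
                .filter(i -> !excluded.contains(i.id))
                .filter(i -> i.failureRate < failureRateThreshold)
                .min(java.util.Comparator.comparingDouble(i -> i.cost));
    }
}
```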

1.4.5 Conceptual Model Applied

Figure 1.3 summarizes how the conceptual model maps to the self-adaptive service-based health assistance system. The operator in this particular instance is responsible for supporting the self-adaptive system with handling fail-safe conditions (through the support interface). In this example, we do not consider the evolution of adaptation goals and the managing system.

Figure 1.3 Conceptual model applied to a self‐adaptive service‐based health assistance system

1.5 A Note on Model Abstractions

It is important to note that the conceptual model for self-adaptive systems abstracts away from distribution – i.e. the deployment of the software to hardware that is connected via a network. Whereas a distributed self-adaptive system consists of multiple software components that are deployed on multiple nodes connected via some network, from a conceptual point of view such a system can be represented as one managed system (that deals with the domain concerns) and one managing system (that deals with the adaptation concerns of the managed system). The conceptual model also abstracts away from how adaptation decisions in a self-adaptive system are made and potentially coordinated among different components. In particular, the conceptual model is invariant to whether the adaptation decisions are made by a single centralized entity or by multiple coordinating entities in a decentralized way. In a concrete setting, the composition of the components of a self-adaptive system, the concrete deployment of these components to hardware elements, and the degree of decentralization of the adaptation decision making will have a deep impact on how such self-adaptive systems are engineered.

1.6 Summary

Dealing with uncertainties in the operating conditions of a software‐intensive system that are difficult to predict is an important challenge for software engineers. Self‐adaptation is about how a system can mitigate such uncertainties.

There are two common interpretations of what constitutes a self‐adaptive system. The first interpretation considers a self‐adaptive system as a system that is able to adjust its behavior in response to changes in the environment or the system itself. The second interpretation contrasts traditional internal mechanisms that enable a system to deal with unexpected or unwanted events with external mechanisms that are realized by means of feedback loops.

These interpretations lead to two complementary basic principles that determine what a self-adaptive system is. The external principle states that a self-adaptive system can handle change and uncertainties autonomously (or with minimal human intervention). The internal principle states that a self-adaptive system consists of two distinct parts: one part that interacts with the environment and deals with the domain concerns, and a second part that interacts with the first part and deals with the adaptation concerns.

Other traditional approaches to dealing with change at runtime include autonomous systems, multi-agent systems, self-organizing systems, and context-aware systems. These approaches differ from self-adaptation, in particular with respect to the second basic principle. However, the second principle can be applied to these approaches by adding a managing system that realizes self-adaptation.