Advanced Testing of Systems-of-Systems, Volume 2

Bernard Homès

Description

As a society today, we are so dependent on systems-of-systems that any malfunction has devastating consequences, both human and financial. Their technical design, functional complexity and numerous interfaces justify a significant investment in testing in order to limit anomalies and malfunctions. Based on more than 40 years of practice, this book goes beyond the simple testing of an application – already extensively covered by other authors – to focus on methodologies, techniques, continuous improvement processes, workload estimates, metrics and reporting, all illustrated by a case study. It also discusses several challenges for the near future. Pragmatic and clear, this book offers many examples and references that will help you improve the quality of your systems-of-systems efficiently and effectively, and lead you to identify the impact of upstream decisions and their consequences. Advanced Testing of Systems-of-Systems 2 deals with the practical implementation and use of the techniques and methodologies proposed in the first volume.




Table of Contents

Cover

Title Page

Copyright Page

Dedication and Acknowledgments

Preface: Implementation

1 Test Project Management

1.1. General principles

1.2. Tracking test projects

1.3. Risks and systems-of-systems

1.4. Particularities related to SoS

1.5. Particularities related to SoS methodologies

1.6. Particularities related to teams

2 Testing Process

2.1. Organization

2.2. Planning

2.3. Control of test activities

2.4. Analysis

2.5. Design

2.6. Implementation

2.7. Test execution

2.8. Evaluation

2.9. Reporting

2.10. Closure

2.11. Infrastructure management

2.12. Reviews

2.13. Adapting processes

2.14. RACI matrix

2.15. Automation of processes or tests

3 Continuous Process Improvement

3.1. Modeling improvements

3.2. Why and how to improve?

3.3. Improvement methods

3.4. Process quality

3.5. Effectiveness of improvement activities

3.6. Recommendations

4 Test, QA or IV&V Teams

4.1. Need for a test team

4.2. Characteristics of a good test team

4.3. Ideal test team profile

4.4. Team evaluation

4.5. Test manager

4.6. Test analyst

4.7. Technical test analyst

4.8. Test automator

4.9. Test technician

4.10. Choosing our testers

4.11. Training, certification or experience?

4.12. Hire or subcontract?

4.13. Organization of multi-level test teams

4.14. Insourcing and outsourcing challenges

5 Test Workload Estimation

5.1. Difficulty of estimating the workload

5.2. Evaluation techniques

5.3. Test workload overview

5.4. Understanding the test workload

5.5. Defending our test workload estimate

5.6. Multi-tasking and crunch

5.7. Adapting and tracking the test workload

6 Metrics, KPI and Measurements

6.1. Selecting metrics

6.2. Metrics precision

6.3. Product metrics

6.4. Process metrics

6.5. Definition of metrics

6.6. Validation of metrics and measures

6.7. Measurement reporting

7 Requirements Management

7.1. Requirements documents

7.2. Qualities of requirements

7.3. Good practices in requirements management

7.4. Levels of requirements

7.5. Completeness of requirements

7.6. Requirements and agility

7.7. Requirements issues

8 Defects Management

8.1. Defect management, MOA and MOE

8.2. Defect management workflow

8.3. Triage meetings

8.4. Specificities of TDDs, ATDDs and BDDs

8.5. Defects reporting

8.6. Other useful reporting

8.7. Don’t forget minor defects

9 Configuration Management

9.1. Why manage configuration?

9.2. Impact of configuration management

9.3. Components

9.4. Processes

9.5. Organization and standards

9.6. Baseline or stages, branches and merges

9.7. Change control board (CCB)

9.8. Delivery frequencies

9.9. Modularity

9.10. Version management

9.11. Delivery management

9.12. Configuration management and deployments

10 Test Tools and Test Automation

10.1. Objectives of test automation

10.2. Test tool challenges

10.3. What to automate?

10.4. Test tooling

10.5. Automated testing strategies

10.6. Test automation challenge for SoS

10.7. Typology of test tools and their specific challenges

10.8. Automated regression testing

10.9. Reporting

11 Standards and Regulations

11.1. Definition of standards

11.2. Usefulness and interest

11.3. Implementation

11.4. Demonstration of compliance – IADT

11.5. Pseudo-standards and good practices

11.6. Adapting standards to needs

11.7. Standards and procedures

11.8. Internal and external coherence of standards

12 Case Study

12.1. Case study: improvement of an existing complex system

13 Future Testing Challenges

13.1. Technical debt

13.2. Systems-of-systems specific challenges

13.3. Correct project management

13.4. DevOps

13.5. IoT (Internet of Things)

13.6. Big Data

13.7. Services and microservices

13.8. Containers, Docker, Kubernetes, etc.

13.9. Artificial intelligence and machine learning (AI/ML)

13.10. Multi-platforms, mobility and availability

13.11. Complexity

13.12. Unknown dependencies

13.13. Automation of tests

13.14. Security

13.15. Blindness or cognitive dissonance

13.16. Four truths

13.17. Need to anticipate

13.18. Always reinvent yourself

13.19. Last but not least

Terminology

References

Index

Summary of Volume 1

Other titles from iSTE in Computer Engineering

End User License Agreement

List of Tables

Chapter 6

Table 6.1

Average defects per function points

List of Illustrations

Chapter 1

Figure 1.1

Different risk tolerance

Figure 1.2

Inherited and imposed risks

Chapter 2

Figure 2.1

Test processes.

Chapter 3

Figure 3.1

The four phases of CTP

Figure 3.2

Example of CTP radar reporting

Chapter 4

Figure 4.1

Example of skills assessment table © RBCS

Chapter 5

Figure 5.1

Test points calculation (TPA) © RBCS

Figure 5.2

Converting to test hours © RBCS

Figure 5.3

Bathtub curve

Chapter 6

Figure 6.1

Progress chart

Figure 6.2

Graph of changes during refactoring

Figure 6.3

Time-on-time diagram

Chapter 8

Figure 8.1

Example of defect management workflow

Figure 8.2

Defect detection and correction graph.

Chapter 9

Figure 9.1

Version branches and merges

Figure 9.2

Example of test levels with multiple different configurations

Figure 9.3

Test Levels in systems-of-systems

Chapter 10

Figure 10.1

Multi-applications test framework

Chapter 12

Figure 12.1

Case study: project organization

Figure 12.2

Daily assignment tracking (see www.iste.co.uk/homes/systems2.zip)

Chapter 13

Figure 13.1

Images of automobiles

Guide

Cover Page

Title Page

Copyright Page

Dedication and Acknowledgments

Preface

Table of Contents

Begin Reading

Terminology

References

Index

Summary of Volume 1

Other titles from iSTE in Computer Engineering

Wiley End User License Agreement


Advanced Testing of Systems-of-Systems 2

Practical Aspects

Bernard Homès

First published 2022 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2022

The rights of Bernard Homès to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.

Library of Congress Control Number: 2022944148

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-750-7

Dedication and Acknowledgments

Inspired by a dedication from Boris Beizer1, I dedicate these two books to the many very bad software and systems-of-systems development projects where I had the opportunity to – for a short time – act as a consultant. These projects taught me multiple lessons on the difficulties that these books try to identify, and led me to realize the need for this book. Their failure could have been prevented; may they rest in peace.

I would also like to thank the many managers and colleagues I had the privilege of meeting during my career. Some, too few, understood that quality is really everyone’s business. We will lay a modest shroud over the others.

Finally, paraphrasing Isaac Newton, if I was able to reach this level of knowledge, it is thanks to all the giants who came before me and on whose shoulders I could stand. Among these giants, I would like to mention (in alphabetical order) James Bach, Boris Beizer, Rex Black, Frederick Brooks, Hans Buwalda, Ross Collard, Elfriede Dustin, Avner Engel, Tom Gilb, Eliyahu Goldratt, Dorothy Graham, Capers Jones, Paul Jorgensen, Cem Kaner, Brian Marick, Edward Miller, John Musa, Glenford Myers, Bret Pettichord, Johanna Rothman, Gerald Weinberg, James Whittaker and Karl Wiegers.

After 15 years in software development, I have had the opportunity to focus on software testing for over 25 years. Specialized in testing process improvement, I founded or participated in the creation of multiple associations focused on software testing: AST (Association for Software Testing), ISTQB (International Software Testing Qualifications Board), CFTL (Comité Français des Tests Logiciels, the French software testing committee) and GASQ (Global Association for Software Quality). I also dedicate these books to you, the reader, so that you can improve your testing competencies.

Note

1. Beizer, B. (1990). Software Testing Techniques, 2nd edition. ITP Media.

Preface: Implementation

In the first part of these two books on systems-of-systems testing, we identified the impacts of software development cycles, testing strategies and methodologies, and we saw the benefit of using a quality referential and the importance of test documentation and reporting. We identified the impact of test levels and test techniques, whether static or dynamic. We ended with an approach to test project management that allowed us to identify that human actors and their interactions are essential elements that must be considered.

In this second part of the book on systems-of-systems testing, we will focus on more practical aspects such as managing test projects, testing processes and how to improve them continuously. We will look at the additional but necessary processes, such as the management of requirements, defects and configurations, and we will work through a case study that raises several useful questions. We will end with a perilous prediction exercise, listing the challenges that testing will have to face in the years to come.

August 2022

1 Test Project Management

We do not claim to replace the many contributions of illustrious authors on good practices in project management. Standards such as PMBOK (PMI 2017) or CMMI and methodologies such as ITIL and PRINCE2 comprehensively describe the tasks, best practices and other activities recommended to properly manage projects. We focus on certain points associated with the testing of software, components, products and systems within systems-of-systems projects.

At the risk of writing a tautology, the purpose of project management is to manage projects, that is, to define the tasks and actions necessary to achieve the objectives of these projects. The purpose, the ultimate objective of the project, takes precedence over any other aspect, even if the budgetary and time constraints are significant. To limit the risks associated with systems-of-systems, the quality of the deliverables is very important and therefore tests (verifications and validations that the object of the project has been achieved) are necessary.

Project management must ensure that development methodologies are correctly implemented (see Chapter 2) to avoid inconsistencies. Similarly, project management must provide all stakeholders with an image of the risks and the progress of the system-of-systems, its dependencies and the actions to be taken in the short and medium term, in order to anticipate the potential hazards.

1.1. General principles

Management of test projects, whether on components, products, systems or systems-of-systems, has a particularity that other projects do not have: their deadlines, scope and level of quality depend on other parts of the projects, namely the development phases. Requirements are often unstable, information arrives late, deadlines shrink because they depend on developments that evolve and overrun, the scope initially considered increases, the quality of the input data – requirements, components to be tested, interfaces – is often lower than expected, and the number of faults or anomalies is greater than anticipated. All of this happens under tighter budgetary and calendar constraints because, even if the developments take longer than expected, the production launch date is rarely postponed.

The methodologies offered by ITIL, PRINCE2, CMMI, etc., bring together a set of good practices that can be adapted – or not – to our system-of-systems project. CMMI, for example, does not have test-specific elements (only IV&V), and it may be necessary to supplement it with test-specific tasks and actions such as those offered by TMM and TMMi.

Let us see the elements specific to software testing projects.

1.1.1. Quality of requirements

Any development translates requirements (needs or business objectives) into a component, product or system that will implement them. In an Agile environment, requirements are defined in the form of User Stories, Features or Epics. The requirements can be described in so-called specification documents (e.g. General Specifications Document or Detailed Specifications Document). Requirements are primarily functional – they describe expected functionality – but can be technical or non-functional. We can classify the requirements according to the quality characteristics they cover as proposed in Chapter 5 of Volume 1 (Homès 2022a).

Requirements are provided to development teams as well as test teams. Production teams – design, development, etc. – use these requirements to develop components, products or systems and may propose or request adaptations of these requirements. Test teams use requirements to define, analyze and implement, or even automate, test cases and test scenarios to validate these requirements. These test teams must absolutely be informed – as soon as possible – of any change in the requirements to proceed with the modifications of the tests.

The requirements must be SMART, that is:

– Specific: the requirements must be clear, there must be no ambiguity and the requirements must be simple, consistent and with an appropriate level of detail.

– Measurable: it must be possible, when the component, product or system is designed, to verify that the requirement has been met. This is directly necessary for the design of tests and metrics to verify the extent to which requirements are met.

– Achievable: it must be possible to physically demonstrate the requirements under given conditions. If a requirement is not achievable (e.g. the system will have 100% reliability and 100% availability), the result will be that the component, product or system will never be accepted or will be cost-prohibitive. Achievability also implies that the requirement can be developed within a specific time frame.

– Realistic: in the context of software development – and testing – is it possible to achieve the requirement for the component, product or system, taking into account the constraints in which the project is developed? We add to this aspect the notion of time: are the requirements achievable in a realistic time?

– Traceable: requirements traceability is the ability to follow a requirement from its design to its specification, its realization and its implementation to its test, as well as in the other direction (from the test to the specification). This helps to understand why a requirement was specified and to ensure that each requirement has been correctly implemented.
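To make bidirectional traceability concrete, here is a minimal sketch in Python – the language and the requirement/test case identifiers are hypothetical choices for illustration – of the kind of check a test team can automate: every requirement must trace to at least one test case, and every test case must trace back to at least one requirement.

# Minimal bidirectional traceability check (illustrative sketch;
# requirement and test case identifiers are hypothetical).
requirements = {"REQ-001", "REQ-002", "REQ-003"}

# Each test case traces back to the requirements it validates.
trace = {
    "TC-101": {"REQ-001"},
    "TC-102": {"REQ-001", "REQ-002"},
    "TC-103": set(),  # orphan test case: traces to no requirement
}

covered = set().union(*trace.values())
print("Requirements without tests:", sorted(requirements - covered))
print("Test cases without requirements:",
      [tc for tc, reqs in trace.items() if not reqs])

Run on this example, the check reports REQ-003 as uncovered and TC-103 as an orphan – exactly the two directions of traceability described above.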

1.1.2. Completeness of deliveries

The completeness of the software, components, products, equipment and systems delivered for the tests is obviously essential. If the elements delivered are incomplete, it will be necessary to come back to them to modify and complete them, which will increase the risk of introducing anomalies.

This aspect of completeness is ambiguous in incremental and iterative methodologies. On the one hand, it is recommended to deliver small increments; on the other hand, losses should be eliminated. Small increments imply partial deliveries of functionality, and thus the generation of “losses” in both delivery and testing (e.g. regression testing) – in fact, in all the activities related to these multiple deliveries and multiple test runs to be performed on these components. Any evolution within an iteration will lead to a modification of the functionalities, and therefore to a divergence from the results obtained during the previous iterations.

1.1.3. Availability of test environments

The execution of the tests is carried out in different test environments according to the test levels envisaged. It will therefore be necessary to ensure the availability of environments for each level.

The test environment is not limited to a machine on which the software component is executed. It also includes the settings necessary for the proper execution of the component, the test data and other applications – in the appropriate versions – with which the component interacts.

Test environments, as well as their data and the applications they interface with, must be properly synchronized with each other. This implies an up-to-date definition of the versions of each system making up the system-of-systems, and of the interfaces and messages exchanged between them.

Automating backups and restores of test environments allows testers to self-manage their environments so that they are not a burden on production systems management teams.

In DevOps environments, it is recommended to enable the automatic creation of environments so that builds can be tested as soon as they are created by developers. As proposed by Kim et al. (2016), it is necessary to be able to recreate the test environments automatically rather than trying to repair them. This automatic creation ensures a test environment identical to that of the previous version, which facilitates regression testing.
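As a minimal illustration of this recreate-rather-than-repair principle – assuming, purely for this sketch, that the environment is described in a Docker Compose file named docker-compose.test.yml – the following Python script destroys the current environment and rebuilds it from its versioned description:

# Sketch: recreate the test environment from scratch. Assumes the
# environment is described in docker-compose.test.yml (hypothetical).
import subprocess

def recreate_test_environment() -> None:
    compose = ["docker", "compose", "-f", "docker-compose.test.yml"]
    # Destroy the current environment, including its data volumes...
    subprocess.run(compose + ["down", "--volumes"], check=True)
    # ...then rebuild it identically from the versioned description.
    subprocess.run(compose + ["up", "-d", "--build"], check=True)

if __name__ == "__main__":
    recreate_test_environment()

Because the environment is rebuilt from a versioned description, every campaign starts from a known, reproducible state instead of an environment patched by hand.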

1.1.4. Availability of test data

It is obvious that the input test data of a test case and the expected data at the output of a test case are necessary, and it is also important to have a set of other data that will be used for testing:

– data related to the users who will run the tests (e.g. authorization level, hierarchical level, organization to which they are attached, etc.);

– information related to the test data used (e.g. technical characteristics, composition, functionalities present, etc.) and which are grouped in legacy systems interfaced with the system-of-systems under test;

– historical information allowing us to make proposals based on this historical information (e.g. purchase suggestions based on previous purchases);

– information based on geographical positioning (e.g. GPS position), supply times and consumption volumes to anticipate stock replenishment needs (e.g. need to fill the fuel tank according to the way to drive and consume fuel, making it possible to offer – depending on the route and GPS information – one or more service stations nearby);

– etc.

The creation and provision of quality test data is necessary before any test campaign. Designing and updating this data, ensuring that it is consistent, is extremely important because it must – as far as possible – simulate the reality of the exchanges and information of each of the systems of the system-of-systems to be tested. We will therefore need to generate data from monitoring systems (from sensors, via IoT systems) and ensure that their production respects the expected constraints (e.g. every n seconds, in order to identify connection losses or deviations from nominal operating ranges).

Test data should be realistic and consistent over time. That is, it must either simulate a reference period – with each campaign ensuring that the systems have reset their reference date (e.g. use a fixed range of hours and reset the systems at the beginning of this range) – or be consistent with the time of execution of the test campaign. This second solution requires generating the test data during the execution of the test campaign, in order to verify the consistency of the data against expectations (e.g. identification of duplicate messages, sequencing of messages, etc.) and therefore the proper functioning of the system-of-systems as a whole.
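As an illustration of generating test data during the campaign, here is a sketch – the message fields and sensor identifier are hypothetical – that emits sensor messages every n seconds and checks them for duplicate or missing sequence numbers:

# Sketch: generate time-consistent sensor messages during the campaign
# and check their sequencing. Message field names are hypothetical.
import time

def generate_messages(sensor_id: str, period_s: float, count: int):
    for seq in range(count):
        yield {"sensor": sensor_id, "seq": seq, "ts": time.time()}
        time.sleep(period_s)

def check_sequence(messages):
    """Report duplicated or missing sequence numbers."""
    seen, issues, expected = set(), [], 0
    for msg in messages:
        if msg["seq"] in seen:
            issues.append(f"duplicate message seq={msg['seq']}")
        elif msg["seq"] > expected:
            issues.append(f"gap before seq={msg['seq']}")
        seen.add(msg["seq"])
        expected = max(expected, msg["seq"] + 1)
    return issues

print(check_sequence(generate_messages("TANK-1", 0.1, 5)))  # expect []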

1.1.5. Compliance of deliveries and schedules

Development and construction projects are associated with often strict delivery dates and schedules. The late delivery of a component generates cascading effects on the delivery of the system and of the system-of-systems. Timely delivery, with the expected features and the desired level of quality, is therefore very important. In some systems-of-systems, the completeness of the functionalities and their level of quality matter more than respecting the delivery date. In others, respecting the schedule is crucial in order to meet hard imperatives (e.g. the launch window for a rocket aiming for another planet).

Test projects depend on the delivery of requirements and components to be tested within a specific schedule. Indeed, testers can only design tests based on the requirements, user stories and features delivered to them and can only run tests on the components, products and systems delivered to them in the appropriate test environments (i.e. including the necessary data and systems). The timely delivery of deliverables (contracts, requirements documents, specifications, features, user stories, etc.) and components, products and systems in a usable state – that is, with information or expected and working functionality – is crucial, or testers will not be able to perform their tasks properly.

This involves close collaboration between the test manager and the project managers in charge of the design and production of the components, products or systems to be tested, as well as the managers in charge of test environments and the supply of test data.

In the context of Agile and Lean methods, any delay in deliveries and any non-compliance with schedules is a “loss of value” and should be eliminated. It is however important to note that the principles of agility propose that it is the development teams that define the scope of the functionalities to be delivered at each iteration.

1.1.6. Coordinating and setting up environments

Depending on the test level, environments will include more and more components, products and systems, which must be coordinated so that the test environments are representative of real life. Each environment includes one or more systems, components and products, as well as interfaces, ETLs and communication equipment (wired, wireless, satellite, optical networks, etc.) of increasing complexity. The design of these various environments quickly becomes a full-time job, especially since it is necessary to ensure that the versions of all the software are correctly synchronized, and that all the data, files, database contents and interfaces are synchronized and validated, in order to allow the correct execution of the tests on the environment.

The activity of coordinating and setting up environments interacts strongly with all the other projects participating in the realization of the system-of-systems. Some test environments will only be able to simulate part of the target environment (e.g. simulation of space vacuum and sunlight with no ability to simulate zero gravity), and therefore there may be, for the same test level, several test execution campaigns, each on different technical or functional domains.

1.1.7. Validation of prerequisites – Test Readiness Review (TRR)

Testing activities can start effectively and efficiently as soon as all their prerequisites are present. Otherwise, the activities will have to stop and start again each time a missing prerequisite is provided, which generates significant wasted time, not to mention everyone’s frustration. Before starting any test task, we must make sure that all the prerequisites are present, or at the very least that they will arrive on time and with the desired level of quality. The prerequisites include, among others, the requirements, the environment, the datasets, the component to be tested, the test cases with their expected data, as well as the testers, the tools and procedures for managing tests and anomalies, and the KPIs and metrics allowing the reporting of test progress.

One solution to ensure the presence of the prerequisites is to set up a TRR (Test Readiness Review) milestone, a review of the start of the tests. The purpose of this milestone is to verify – depending on the test level and the types of test – whether or not the prerequisites are present. If prerequisites are missing, it is up to the project managers to decide whether or not to launch the test activity, taking into account the identified risks.

In Agile methods, such a review can be informal and apply to only one user story at a time, under the acronym DoR (Definition of Ready).
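As an illustration, a TRR can be supported by an explicit checklist. The sketch below – whose prerequisite labels are examples, not an exhaustive list – simply verifies that every prerequisite is satisfied before authorizing the start of test execution:

# Sketch of a Test Readiness Review (TRR) checklist. The prerequisite
# labels are illustrative examples, not an exhaustive list.
trr_checklist = {
    "requirements baselined": True,
    "test environment available": True,
    "test data delivered": False,
    "test cases reviewed": True,
    "defect management tool installed": True,
}

missing = [item for item, ok in trr_checklist.items() if not ok]
if missing:
    # It is up to the project managers to decide whether to start anyway.
    print("TRR: not ready –", ", ".join(missing))
else:
    print("TRR: ready – test execution may start")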

1.1.8. Delivery of datasets (TDS)

The delivery of test datasets (TDS) is not limited to the provision of files or databases with information usable by the component, product or system. This also includes – for the applications, components, products or systems with which the component, product or system under test interacts – a check of the consistency and synchronization of the data with each other. It will be necessary to ensure that the interfaces are correctly described, defined and implemented.

Backup of datasets or automation of dataset generation processes may be necessary to allow testers to generate the data they need themselves.

The design of coherent and complete datasets is a difficult task requiring a good knowledge of the entire information system and the interfaces between the component, product or system under test on the one hand and all the other systems of the test environment on the other hand. Some components, products or systems may be missing and replaced by “stubs” that will simulate the missing elements. In this case, it is necessary to manage these “stubs” with the same rigor as if they were real components (e.g. evolution of versions, data, etc.).
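As a minimal sketch of such a stub – here an HTTP stub standing in for a missing stock management system, with a hypothetical endpoint and payload – note that the stub carries a version identifier so that it can be configuration-managed with the same rigor as a real component:

# Sketch: HTTP stub replacing a missing system. The payload is
# hypothetical; the stub is versioned like a real component.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STUB_VERSION = "1.2.0"  # managed in configuration, like a real system

class StockSystemStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Always answer with a fixed, known stock level for the tests.
        body = json.dumps({"stub_version": STUB_VERSION, "stock_level": 42})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), StockSystemStub).serve_forever()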

1.1.9. Go-NoGo decision – Test Review Board (TRB)

A Go-NoGo meeting is used to analyze the risks associated with moving to the next step in a process of designing and deploying a component, product, system or system-of-systems, and to decide whether to proceed to the next step.

This meeting is sometimes split over time into two reviews:

– A TRB (Test Review Board) meeting analyzes the results of the tests carried out at the level and determines the actions to take based on these results. This technical meeting ensures that the planned objectives have been achieved for the level.

– A management review to obtain – from the hierarchy, the other stakeholders, the MOA and the customers – a decision (“Go” or “NoGo”) accepted by all, taking into account business risks, marketing considerations, etc.

The Go-NoGo meeting includes representatives from all business stakeholders, such as operations managers, deployment teams, production teams and marketing teams.

In an Agile environment, the concepts of Go-NoGo and TRB are reflected in the concept of DoD (Definition of Done) for each of the design actions.

1.1.10. Continuous delivery and deployment

The concepts of continuous integration and continuous delivery (CI/CD) are interesting and deserve to be considered in software-intensive systems-of-systems. However, such concepts carry particular constraints that must be studied, beyond the use of an Agile design methodology.

1.1.10.1. Continuous delivery

The continuous delivery practices mentioned in Kim et al. (2016) focus primarily on the continuous delivery and deployment of software, which depend on automated testing to ensure that developers have quick (immediate) feedback on the defects, performance, security and usability concerns of the components placed under configuration management. In addition, the principle is to have a limited number of configuration branches.

In the context of systems-of-systems, where hardware components and subsystems including software must be physically installed – and tested on physical test benches – the ability to deliver daily and ensure the absence of regressions becomes more complex, if not impossible, to implement. This is all the more true since the systems-of-systems are not produced in large quantities and the interactions are complex.

1.1.10.2. Continuous testing

On-demand execution of tests as part of continuous delivery is possible for unit testing and static testing of code. Software integration testing could be considered, but anything involving end-to-end (E2E) testing becomes more problematic, because installing the software on the hardware component generates a change in the configuration reference of that hardware component.

Among the elements to consider, we have an ambiguity of terminology: the term ATDD (Acceptance Test-Driven Development) relates to the acceptance of the software component alone – not to its integration, nor to the acceptance of the system-of-systems, subsystem or equipment.

Another aspect to consider is the need for test automation, with (1) the continuous increase in the number of tests to be executed, which means increasing test execution time, and (2) the need to ensure that the test classes embedded in the software (in the case of TDD and BDD) are correctly removed from the versions used in integration and system tests (illustrated below).
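As a small illustration of point (2) in a Python context – an assumption for this sketch, since the book is technology-agnostic – a component’s packaging can exclude test code so that the versions delivered to the integration and system test levels do not contain the TDD/BDD test classes:

# Sketch (Python/setuptools): exclude test modules from the packaged
# component delivered to integration and system test levels.
from setuptools import setup, find_packages

setup(
    name="component-under-test",  # hypothetical component name
    version="1.0.0",
    packages=find_packages(exclude=["tests", "tests.*"]),
)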

One of the temptations associated with testing in a CI/CD or DevOps environment is to pool the tests of the various software components into a single test batch for the release, instead of processing the tests separately for each component. This solution makes it possible to pool the regression tests of software components, but poses a difficult practical problem for the qualification of systems-of-systems, as mentioned in Sacquet and Rochefolle (2016).

1.1.10.3. Continuous deployment

Continuous deployment depends on continuous delivery and therefore automated validation of tests, and the presence of complete documentation – for component usage and administration – as well as the ability to run end-to-end on an environment representative of production.

According to Kim et al. (2016), in companies like Amazon and Google, the majority of teams practice continuous delivery and some practice continuous deployment. There is wide variation in how to perform continuous deployment.

1.2. Tracking test projects

Monitoring test projects requires monitoring the progress of each test activity for each system of the system-of-systems, on each test environment of each test level of each of these systems. It is therefore important that the progress information of each test level is aggregated and summarized for each system, and that the test progress information of each system is aggregated at the system-of-systems level. This involves defining the elements that must be measured (the progress), the benchmark against which they must be measured (the reference), and the impacts (dependencies) that this can generate. Reporting similar indicators for each of the systems will facilitate understanding; automated feedback of this information will facilitate its retrieval.
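A minimal sketch of such an aggregation – with hypothetical system names, test levels and counts – rolls per-level passed/planned figures up to each system, then up to the system-of-systems:

# Sketch: aggregate test progress per level and per system, then at
# system-of-systems level. Names and figures are illustrative.
progress = {  # system -> level -> (tests passed, tests planned)
    "flight-control": {"unit": (950, 1000), "integration": (400, 520)},
    "ground-station": {"unit": (300, 300), "system": (80, 200)},
}

def summarize(progress):
    sos_passed = sos_planned = 0
    for system, levels in progress.items():
        passed = sum(p for p, _ in levels.values())
        planned = sum(t for _, t in levels.values())
        sos_passed += passed
        sos_planned += planned
        print(f"{system}: {passed}/{planned} ({100 * passed / planned:.0f}%)")
    print(f"system-of-systems: {sos_passed}/{sos_planned} "
          f"({100 * sos_passed / sos_planned:.0f}%)")

summarize(progress)

Reporting the same indicator shape for every system is what makes this roll-up – and its automation – possible.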

1.3. Risks and systems-of-systems

Systems-of-systems projects are subject to more risks than other systems, in that they may inherit risks from upstream levels, and tolerance for risk may vary by organization and by delivered product. In Figure 1.1, we can see that the further we advance in the design and production of components by the various organizations, the more risks accumulate, and organizations with a low risk tolerance are more strongly impacted than others.

Figure 1.1. Different risk tolerance

In Figure 1.2, we can identify that an organization will be impacted by all the risks it can inherit from upstream organizations and that it will impose risks on all downstream organizations.

Figure 1.2. Inherited and imposed risks

We realize that risk management in systems-of-systems is significantly more complex than in the case of complex systems and may need to be managed at multiple levels (e.g. interactions between teams, between managers of the project or between the managers – or leaders – of the organizations).

1.4. Particularities related to SoS

According to Firesmith (2014), several pitfalls should be avoided in the context of systems-of-systems, including:

– inadequate system-of-systems test planning;

– unclear responsibilities, including liability limits;

– inadequate resources dedicated to system-of-systems testing;

– lack of clear systems-of-systems planning;

– insufficient or inadequate systems-of-systems requirements;

– inadequate support of individual systems and projects;

– inadequate cross-project defect management.

To this we can add:

– different quality requirements according to the participants/co-contractors, including regarding the interpretation of regulatory obligations;

– the need to take into account long-term evolutions;

– the multiplicity of level versions (groupings of software working and delivered together), multiple versions and environments;

– the fact that systems-of-systems are often unique developments.

1.5. Particularities related to SoS methodologies

Development methodologies generate different constraints and opportunities. Sequential developments have demonstrated their effectiveness, but involve rigidity and a lack of responsiveness if contexts change. Agility offers better responsiveness, at the expense of a more restricted analysis phase and an organization that does not guarantee that all the requirements will be developed. The choice of a development methodology will imply adaptations in the management of the project and in the testing of the components of the system-of-systems.

Iterative methodologies involve rapid delivery of components or parts of components, followed by refinement phases if necessary. That implies that:

– The planned functionalities are not fully provided before the last delivery of the component. Validation by the business may be delayed until the final delivery of the component. This reduces the time for detecting and correcting anomalies and can impact the final delivery of the component, product or system, or even the system-of-systems.

– Side effects may appear on other components, so it will be necessary to retest all components each time a component update is delivered. This retesting can be limited to the components interacting directly with the modified component(s) or extended to the entire system-of-systems, and it is recommended to automate it.

– The interfaces between components may not be developed simultaneously, and therefore the tests of these interfaces may be delayed.

Sequential methodologies (e.g. V-cycle, Spiral, etc.) focus on a single delivery, so any evolution – or need for clarification – of the requirement will have an impact on lead time and workload, both in terms of development (redevelopment or adaptation of components, products or systems) and in terms of testing (design and execution of tests).

1.5.1. Components definition

Within the framework of sequential methodologies, the principle is to define the components and deliver them finished and validated at the end of their design phase. This involves a complete definition of each product or system component and the interactions it has with other components, products or systems. These exhaustive definitions will be used both for the design of the component, product or system and for the design of the tests that will validate them.

1.5.2. Testing and quality assurance activities

It is not possible to envisage retesting all the combinations of data and actions of the components of a level of a system-of-systems; this would generate a workload disproportionate to the expected benefits. One solution is to verify that the design and test processes have been correctly carried out, that the proofs of execution are available and that the test activities – static and dynamic – have correctly covered the objectives. These verification activities are the responsibility of the quality assurance teams and are mainly based on available evidence (paper documentation, execution logs, anomaly dashboards, etc.).

1.6. Particularities related to teams

In a test project, whether software testing or systems-of-systems testing, one element to take into account is the management of team members and their relationships with each other and with the outside. This information is grouped into what NASA calls CRM (Crew Resource Management). Developed in the 1970s–1980s, CRM is a mature discipline that applies to complex projects and is well suited to decision-making processes in project management.

It is essential to:

– recognize the existence of a problem;

– define what the problem is;

– identify probable solutions;

– take the appropriate actions to implement a solution.

Although CRM is mainly used where human error can have devastating effects, it is important to take into account the lessons that CRM can bring to the implementation of decision-making processes. Contrary to a common belief, people with responsibilities (managers and decision-makers) or with the most experience are sometimes blinded by their vision of a solution and do not take alternative solutions into account. Among the points to keep in mind are communication between the different members of the team, mutual respect – which entails listening to the information provided – and then measuring the results of the solutions implemented in order to ensure their effectiveness. All team members can communicate important information that will help the project succeed.

The specialization of the members of the project team, the confidence that we have in their skills and that they have in their experience, the management methods and the constraints – contractual or otherwise – mean that the decision-making method and the decisions made can be negatively impacted in the absence of this CRM technique. CRM has been successfully implemented in aeronautics and space, and its lessons can be applied successfully to complex projects.

2 Testing Process

Test processes are nested within the set of processes of a system-of-systems. More specifically, they prepare and provide evidence to substantiate compliance with requirements, and they provide feedback to project management on the progress of test process activities. If CMMI is used, process areas other than VER (Verification) and VAL (Validation) will be involved: PPQA (Process and Product Quality Assurance), PMC (Project Monitoring and Control), REQM (Requirements Management), CM (Configuration Management), TS (Technical Solution), MA (Measurement and Analysis), etc.

These processes will all be involved to some degree in the testing processes. Indeed, the test processes will break down the requirements – whether or not they are defined in documents describing the conformity needs – and the way in which these requirements will be demonstrated (the type of IADT evidence: Inspection, Analysis, Demonstration, Test). They will split the types of demonstration according to the test and integration levels (system, subsystem, sub-subsystem, component, etc.), and according to whether they are static (analysis and inspection for static checks during design) or dynamic (demonstration and test during the integration and test levels of subsystems, systems and systems-of-systems). Similarly, the activities of the test process will report information and progress metrics to project management (the CMMI PMC, Project Monitoring and Control, process) and will be impacted by the decisions descending from this management.

The processes described in this chapter apply to a test level and should be repeated at each test level, for each piece of software or each element containing software. Any modification in the interfaces and/or the performance of a component interacting with the component(s) under test will involve an analysis of the impacts and, if necessary, an adaptation of the test activities and of the evidence to be provided to show the conformity of the component (or system, subsystem, equipment or software) to its requirements. Each test level should coordinate with the other levels to limit the repeated execution of tests on the same requirements.

The ISO/IEC/IEEE 29119-1 standard describes the following processes, grouping them into organizational processes (in yellow), management processes (pink) and dynamic processes (green).

Figure 2.1. Test processes

Defined test processes are repeatable at each test level of a system-of-systems:

– the general organization of the test level;

– planning of level testing activities;

– control of test activities;

– analysis of needs, requirements and user stories to be tested;

– design of the test cases applicable to the level;

– implementation of test cases with automation and provision of test data;

– execution of designed and implemented test cases, including the management of anomalies;

– evaluation of test execution results and exit criteria;

– reporting;

– closure of test activities, including feedback and continuous improvement actions;

– infrastructure and environment management.

An additional process can be defined: the review process, which can be carried out several times on a test level, on the one hand, on the input deliverables, and on the other hand, on the deliverables produced by each of the processes of the level. Review activities can occur within each defined test process.

The proposed test processes are applicable regardless of the development mode (Agile or sequential). In the case of an Agile development mode, the testing processes must be repeated for each sprint and for each level of integration in a system-of-systems.

The processes must complement each other and – even if they may partially overlap – it must be ensured that the processes are completed successfully.

2.1. Organization

Objectives:

– develop and manage organizational needs, in accordance with the company’s test policy and the test strategies of higher levels;

– define the players at the level, their responsibilities and organizations;

– define deliverables and milestones;

– define quality targets (SLA, KPI, maximum failure rate, etc.);

– ensure that the objectives of the test strategy are addressed;

– define a standard RACI matrix (see the sketch after the actor list below).

Actor(s):

– CPI (R+A), CPU/CPO (I), developers (C+I);

– an experienced test manager acting as the test project lead (R).
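To make the RACI notation used above concrete (R = Responsible, A = Accountable, C = Consulted, I = Informed), here is a minimal sketch of how such a matrix could be encoded and queried; the role names follow the actor list above and the activities are illustrative:

# Sketch: RACI matrix for the organization process (R = Responsible,
# A = Accountable, C = Consulted, I = Informed). Activities are examples.
raci = {
    "define level organization": {
        "CPI": "RA", "CPU/CPO": "I", "developers": "CI", "test manager": "R",
    },
    "define deliverables and milestones": {
        "CPI": "RA", "CPU/CPO": "I", "test manager": "R",
    },
}

def responsible_for(activity: str):
    """Return the roles holding an R for the given activity."""
    return [role for role, codes in raci[activity].items() if "R" in codes]

print(responsible_for("define level organization"))  # ['CPI', 'test manager']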

Prerequisites/inputs:

– calendar and budgetary constraints defined for the level;

– actors and subcontractors envisaged or selected;

– repository of lessons learned from previous projects.

Deliverables/outputs:

– organization of level tests;

– high-level WBS with the main tasks to be carried out;

– initial definition of test environments.

Entry criteria:

– beginning of the organization phase.

Exit criteria:

– approved organizational document (ideally kept to a few pages).

Indicators:

1) efficiency: writing effort;

2) coverage: traceability to the quality characteristics identified in the project test strategy.

Points of attention:

– ensure that the actors and meeting points (milestones and level of reporting) are well defined.

2.2. Planning

Objective:

– plan test activities for the project, level, iteration or sprint considering existing issues, risk levels, constraints and objectives for testing;

– define the tasks (durations, objectives, incoming and outgoing, responsibilities, etc.) and sequencing;

– define the exit criteria (desired quality level) for the level;

– identify the prerequisites, resources (environment, personnel, tools, etc.) necessary;

– define measurement indicators and frequencies, as well as reporting.

Actor(s):

– CPI (R+A), CPU/CPO (I), developers (C+I);

– an experienced test manager acting as the test project lead (R);

– testers (C+I).

Prerequisites/inputs:

– information on the volume, workload and deadlines of the project;

– information on available environments and interfaces;

– objectives and scope of testing activities.

2.2.1. Project WBS and planning

Objective:

– plan test activities for the project or level, iteration or sprint considering existing status, risk levels, constraints and objectives for testing;

– define the tasks (durations, objectives, incoming and outgoing, responsibilities, etc.) and sequencing;

– define the exit criteria (desired quality level) for the level;

– identify prerequisites, resources (environment, personnel, tools, etc.) necessary;

– define measurement indicators and frequencies, as well as reporting.

Actor(s):

– CPI (R+A), CPU/CPO (I), developers (C+I);

– an experienced test manager acting as the test project lead (R);

– testers (C+I).

Prerequisites/inputs:

– REAL and project WBS defined in the investigation phase;

– lessons learned from previous projects (repository of lessons learned).

Deliverables/outputs:

– master test plan, level test plan(s);

– level WBS (or TBS, Test Breakdown Structure), detailing – for the applicable test level(s) – the tasks to be performed;

– detailed Gantt chart of the test projects – for each level – with dependencies;

– initial definition of test environments.

Entry criteria:

– start of the investigation phase.

Exit criteria:

– test plan approved, all sections of the test plan template are completed.

Indicators:

1) efficiency: writing effort vs. completeness and size of the deliverables provided;

2) coverage: coverage of the quality characteristics selected in the project Test Strategy.

Points of attention:

– ensure that test data (for interface tests, environment settings, etc.) will be well defined and provided in a timely manner;

– collect lessons learned from previous projects.


2.3. Control of test activities

Objective:

– throughout the project: adapt the test plan, processes and actions, based on the hazards and indicators reported by the test activities, so as to enable the project to achieve its objectives;

– identify changes in risks, implement mitigation actions;

– provide periodic reporting to the CoPil (steering committee) and the CoSuiv (monitoring committee);

– escalate issues if needed.

Actor(s):

– CPI (A+I), CPU/CPO (I), developers (I);

– test manager with a test project manager role (R);

– testers (C+I) [provide indicators];

– CoPil, CoSuiv (I).

Prerequisites/inputs:

– risk analysis, level WBS, project and level test plan.

Deliverables/outputs:

– periodic indicators and reporting for the CoPil and CoSuiv;

– updated risk analysis;

– modification of the test plan and/or activities to allow the achievement of the “project” objectives.

Entry criteria:

– project WBS, level WBS.

Exit criteria:

– end of the project, including end of the software warranty period.

Indicators:

– dependent on testing activities.

2.4. Analysis

Objective:

– analyze the repository of information (requirements, user stories, etc. usable for testing) to identify the test conditions to be covered and the test techniques to be used. A risk or requirement can be covered by more than one test condition. A test condition is something – a behavior or a combination of conditions – that may be interesting or useful to test.

Actor(s):

– testers, test analysts, technical test analysts.

Prerequisites/inputs:

– initial definition of test environments;

– requirements and user stories (depending on the development method);

– acceptance criteria (if available);

– analysis of prioritized project risks;

– level test plan with the characteristics to be covered and the level test environment.

Deliverables/outputs:

– detailed definition of the level test environment;

– test file;

– prioritized test conditions;

– requirements/risks traceability matrix – test conditions.

Entry criteria:

– validated and prioritized requirements;

– risk analysis.

Exit criteria:

– each requirement is covered by the required number of test conditions, depending on the RPN (Risk Priority Number) of the requirement; see the sketch after the indicators below.

Indicators:

1) Efficiency:

- number of prioritized test conditions designed,

- updated traceability matrix for extension to test conditions.

2) Coverage:

- percentage of requirements and/or risks covered by one or more test conditions,

- for each requirement or user story analyzed, implementation of traceability to the planned test conditions,

- percentage of requirements and/or risks (by risk level) covered by one or more test conditions.
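As a sketch of the exit criterion above – assuming, purely for illustration, a rule mapping RPN bands to a minimum number of test conditions – the check below flags requirements whose coverage is insufficient for their risk level:

# Sketch: verify that each requirement has enough test conditions for
# its RPN (Risk Priority Number). Bands and figures are illustrative.
def required_conditions(rpn: int) -> int:
    if rpn >= 200:  # high risk
        return 3
    if rpn >= 100:  # medium risk
        return 2
    return 1        # low risk

rpn_by_requirement = {"REQ-001": 240, "REQ-002": 120, "REQ-003": 40}
conditions_per_requirement = {"REQ-001": 2, "REQ-002": 2, "REQ-003": 1}

for req, rpn in rpn_by_requirement.items():
    needed = required_conditions(rpn)
    actual = conditions_per_requirement.get(req, 0)
    if actual < needed:
        print(f"{req}: {actual}/{needed} test conditions (RPN={rpn})")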

2.5. Design

Objective:

– convert test conditions into test cases and identify test data to be used to cover the various combinations. A test condition can be converted into one or more test cases.

Actor(s):

– testers, test technicians.

Prerequisites/inputs:

– prioritized test conditions;

– requirements/risks traceability matrix – test conditions.

Deliverables/outputs:

– prioritized test cases, definition of test data for each test case (input and expected);

– prioritized test procedures, taking into account the execution prerequisites;

– requirements/risks traceability matrix – test conditions – test cases.

Entry criteria:

– test conditions defined and prioritized;

– risk analysis.

Exit criteria:

– each test condition is covered by one or more test cases (according to the RPN);

– partitions and typologies of test data defined for each test;

– defined test environments.

Indicators:

1) Efficiency:

- number of prioritized test cases designed,

- updated traceability matrix for extension to test cases.

2) Coverage:

- percentage of requirements and/or risks covered by one or more test cases designed.

2.6. Implementation

Objective:

– describe the test cases in detail, if necessary;

– define the test data for each of the test cases generated by the test design activity;

– automate the test cases that need to be automated;

– set up the test environments.

Actor(s):

– testers, test automators, data and systems administrators.

Prerequisites/inputs:

– prioritized test cases;

– risk analysis.

Deliverables/outputs:

– automated or non-automated test scripts, test scenarios, test procedures;

– test data (input data and expected data for comparison);

– traceability matrix of requirements to risks – test conditions – test cases – test data.

Entry criteria:

– prioritized test cases, defined with their data partitions.

Exit criteria:

– test data defined for each test;

– test environments defined, implemented and verified.

Indicators:

1) Efficiency:

- number of prioritized test cases designed with test data,

- updated traceability matrix for extension to test data,

- number of test environments defined, implemented and verified vs. number of environments planned in the test strategy.

2) Coverage:

- percentage of test environments ready and delivered,

- coverage of requirements and/or risks by one or more test cases with data,

- coverage of requirements and/or risks by one or more automated test cases.

2.7. Test execution

Objective:

– execute the test cases (on the elements of the application to be tested) delivered by the development teams;

– identify defects and write anomaly sheets;

– report monitoring and coverage information.

Actor(s):

– testers, test technicians.

Prerequisites/inputs:

– system to be tested available and delivered under configuration management, accompanied by a delivery sheet.

Deliverables/outputs:

– anomaly sheets filled in for any identified defect;

– test logs.

Entry criteria:

– testing environment and resources (including testing tools) available for the level, and tested;

– anomaly management tool available and installed;

– test cases and test data available for the level;

– component or application to be tested available and delivered under configuration management;

– delivery sheet provided.

Exit criteria:

– coverage of all test cases for the level.

Indicators:

1) Efficiency: