As a society today, we are so dependent on systems-of-systems that any malfunction has devastating consequences, both human and financial. Their technical design, functional complexity and numerous interfaces justify a significant investment in testing in order to limit anomalies and malfunctions. Based on more than 40 years of practice in the development and testing of systems, including safety-critical systems, this book discusses development models, testing methodologies and techniques, and identifies their advantages and disadvantages. Pragmatic and clear, this book displays many examples and references that will help you improve the quality of your systems-of-systems efficiently and effectively, and lead you to identify the impact of upstream decisions and their consequences. Advanced Testing of Systems-of-Systems 1 is complemented by a second volume dealing with the practical implementation and use of the techniques and methodologies proposed here.
Cover
Title Page
Copyright Page
Dedication and Acknowledgments
Preface
1 Introduction
1.1. Definition
1.2. Why and for whom are these books?
1.3. Examples
1.4. Limitations
1.5. Why test?
1.6. MOA and MOE
1.7. Major challenges
2 Software Development Life Cycle
2.1. Sequential development cycles
2.2. Incremental development cycles
2.3. Agile development cycles
2.4. Acquisition
2.5. Maintenance
2.6. OK, what about reality?
3 Test Policy and Test Strategy
3.1. Test policy
3.2. Test strategy
3.3. Selecting a test strategy
4 Testing Methodologies
4.1. Risk-based tests (RBT)
4.2. Requirement-based tests (TBX)
4.3. Standard-based (TBS) and systematic tests
4.4. Model-based testing (MBT)
4.5. Testing in Agile methodologies
4.6. Selecting a multi-level methodology
4.7. From design to delivery
5 Quality Characteristics
5.1. Product quality characteristics
5.2. Quality in use
5.3. Quality for acquirers
5.4. Quality for suppliers
5.5. Quality for users
5.6. Impact of quality on criticality and priority
5.7. Quality characteristics demonstration
6 Test Levels
6.1. Generic elements of a test level
6.2. Unit testing
6.3. Component integration testing
6.4. Component tests
6.5. Component integration tests
6.6. System tests
6.7. Acceptance tests or functional acceptance
6.8. Particularities of specific systems
7 Test Documentation
7.1. Objectives for documentation
7.2. Conformity construction plan (CCP)
7.3. Articulation of the test documentation
7.4. Test policy
7.5. Test strategy
7.6. Master test plan (MTP)
7.7. Level test plan
7.8. Test design documents
7.9. Test case specification
7.10. Test procedure specification
7.11. Test data specifications
7.12. Test environment specification
7.13. Reporting and progress reports
7.14. Project documentation
7.15. Other deliverables
8 Reporting
8.1. Introduction
8.2. Stakeholders
8.3. Product quality
8.4. Cost of defects
8.5. Frequency of reporting
8.6. Test progress and interpretation
8.7. Progress and defects
8.8. Efficiency and effectiveness of test activities
8.9. Continuous improvement
8.10. Reporting attention points
9 Testing Techniques
9.1. Test typologies
9.2. Test techniques
9.3. CRUD
9.4. Paths (PATH)
9.5. Equivalence partitions (EP)
9.6. Boundary value analysis (BVA)
9.7. Decision table testing (DTT)
9.8. Use case testing (UCT)
9.9. Data combination testing (DCOT)
9.10. Data life cycle testing (DCYT)
9.11. Exploratory testing (ET)
9.12. State transition testing (STT)
9.13. Process cycle testing (PCT)
9.14. Real life testing (RLT)
9.15. Other types of tests
9.16. Combinatorial explosion
10 Static Tests, Reviews and Inspections
10.1. What is static testing?
10.2. Reviews or tests?
10.3. Types and formalism of reviews
10.4. Implementing reviews
10.5. Reviews checklists
10.6. Defects taxonomies
10.7. Effectiveness of reviews
10.8. Safety analysis
Terminology
References
Index
Summary of Volume 2
Other titles from iSTE in Computer Engineering
End User License Agreement
Chapter 1
Figure 1.1
Complex system
Figure 1.2
System-of-systems
Figure 1.3
Simple–complicated–complex–chaotic
Figure 1.4
ISTQB foundation versus advanced testers.
Chapter 2
Figure 2.1
Waterfall
Figure 2.2
Systems-of-systems V-cycle
Figure 2.3
Standard V-cycle
Figure 2.4
Reclining L cycle
Figure 2.5
Hybrid V-cycle and Agile
Figure 2.6
Spiral development cycle
Figure 2.7
Agile cycle – Scrum
Figure 2.8
Scrum framework
Figure 2.9
Nexus framework
Figure 2.10
SAFe organization
Figure 2.11
DevOps representation
Figure 2.12
Redundancy and production platforms
Chapter 3
Figure 3.1
Development and test environments
Figure 3.2
Test execution velocity per sprint.
Figure 3.3
Shift left
Figure 3.4
Necessary test iterations.
Chapter 4
Figure 4.1
Example of test effort based on RPN [© RBCS]
Figure 4.2
Periodic risk assessment
Figure 4.3
Integration, delivery or continuous deployment
Figure 4.4
Design – delivery and feedback
Chapter 6
Figure 6.1
Test levels in a system-of-systems.
Chapter 8
Figure 8.1
Cost of fixing defects
Figure 8.2
Evolution of risks by risk category.
Figure 8.3
Comparison of risks by projects.
Figure 8.4
Defects per components.
Figure 8.5
Coverage of component or functionality.
Figure 8.6
Defect discovery and correction.
Figure 8.7
Average and cumulative defect fix duration.
Figure 8.8
Number of defect openings
Figure 8.9
Example of burn down.
Figure 8.10
Example of burn up.
Figure 8.11
Example of KANBAN board
Figure 8.12
Defects found versus fixed curves.
Figure 8.13
Defects per component
Figure 8.14
Defects versus test coverage.
Chapter 9
Figure 9.1
Swimlanes
Figure 9.2
Flow diagram
Figure 9.3
Expanded decision table
Figure 9.4
Simplified decision table
Figure 9.5
MS Word font options
Figure 9.6
Classification tree (partial)
Chapter 10
Figure 10.1
Shift left and costs identification
Bernard Homès
First published 2022 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.iste.co.uk
www.wiley.com
© ISTE Ltd 2022
The rights of Bernard Homès to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.
Library of Congress Control Number: 2022943899
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-749-1
Inspired by a dedication from Boris Beizer1, I dedicate these two books to the many very bad software and systems-of-systems development projects where I had the opportunity to act – for a short time – as a consultant. These taught me multiple lessons on the difficulties that these books try to identify, and led me to realize the need for this book. Their failure could have been prevented; may they rest in peace.
I would also like to thank the many managers and colleagues I had the privilege of meeting during my career. Some, too few, understood that quality is really everyone’s business. We will lay a modest shroud over the others.
Finally, paraphrasing Isaac Newton: if I was able to reach this level of knowledge, it is thanks to all the giants who came before me, on whose shoulders I could stand. Among these giants, I would like to mention (in alphabetical order) James Bach, Boris Beizer, Rex Black, Frederick Brooks, Hans Buwalda, Ross Collard, Elfriede Dustin, Avner Engel, Tom Gilb, Eliyahu Goldratt, Dorothy Graham, Capers Jones, Paul Jorgensen, Cem Kaner, Brian Marick, Edward Miller, John Musa, Glenford Myers, Bret Pettichord, Johanna Rothman, Gerald Weinberg, James Whittaker and Karl Wiegers.
After 15 years in software development, I have had the opportunity to focus on software testing for over 25 years. Specializing in test process improvement, I founded or participated in the creation of multiple associations focused on software testing: AST (Association for Software Testing), ISTQB (International Software Testing Qualifications Board), CFTL (Comité Français des Tests Logiciels, the French software testing board) and GASQ (Global Association for Software Quality). I also dedicate these books to you, the reader, so that you can improve your testing competencies.
1. Beizer, B. (1990). Software Testing Techniques, 2nd edition. ITP Media.
The breadth of the subject justifies splitting this work into two books. Part 1, this book, covers the general aspects applicable to systems-of-systems testing, among them the impact of the development life cycle, test strategy and methodology, the added value of quality reference frameworks, and test documentation and reporting. We also identify the impact of the various test levels and test techniques, whether static or dynamic.
In the second book, we will focus on project management, identifying human interactions as primary elements to consider, and we will continue with practical aspects such as testing processes and their iterative and continuous improvement. We will also see additional but necessary processes, such as requirement management, defects management and configuration management. In a case study, we will be able to ask ourselves several useful questions. We will finish this second book with a rather perilous prospective exercise by listing the challenges that testing will need to face in the coming years.
These two books make a single coherent and complete work building on more than 40 years of experience by the author. The main aspect put forward is the difference between the traditional vision of software testing – focused on one system and one version – and the necessary vision when multiple systems and multiple versions of software must be interconnected to provide a service that needs to be tested thoroughly.
August 2022
There are many definitions of what a system-of-systems (or SoS) is. We will use the following one: “A system-of-systems is a set of systems, software and/or hardware, developed by organizations that are not under the same management, which collaborate to provide a service”. This simple definition entails challenges and adaptations that we will identify and study.
A system-of-systems can be considered from two points of view: on the one hand, from the global systemic level (we could take the image of a company information system) and, on the other hand, from the unitary application system (which we may call a subsystem, application system or application, software-predominant equipment or component). At the upper level, we will thus have a system-of-systems, which could be an “information system” made up of multiple systems that we will call subsystems. For example, a company may have in its information system an accounting system, a CRM, a human resource management system, a stock management system, etc. These different systems are most likely developed by different software vendors, and their interaction provides a service to the company. Other examples of systems-of-systems are air traffic systems, aircraft and satellite systems, vehicles and craft. In these systems-of-systems, the service is provided to the users when all subsystems work, correctly and quickly exchanging data between them.
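The composition described above can be sketched as a minimal data model – purely illustrative, not taken from the book – in which a system-of-systems is a set of subsystems, owned by different organizations, that collaborate to deliver a service. The vendor and subsystem names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Subsystem:
    """A unitary application system, developed by one organization."""
    name: str
    owner: str  # the organization under whose management it is developed

@dataclass
class SystemOfSystems:
    """A set of collaborating subsystems providing one service."""
    service: str
    subsystems: list[Subsystem] = field(default_factory=list)

    def is_system_of_systems(self) -> bool:
        # Per the definition used in this chapter: the subsystems must
        # not all be under the same management.
        owners = {s.owner for s in self.subsystems}
        return len(owners) > 1

info_system = SystemOfSystems(
    service="company information system",
    subsystems=[
        Subsystem("accounting", owner="VendorA"),
        Subsystem("CRM", owner="VendorB"),
        Subsystem("HR management", owner="VendorC"),
        Subsystem("stock management", owner="VendorB"),
    ],
)
print(info_system.is_system_of_systems())  # → True (three different owners)
```

A complex system built entirely by one organization would, by this criterion, return False: the distinguishing feature is management, not size.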
Systems-of-systems, even if they are often complex, are intrinsically different from complex systems: a complex system, such as an operating system, may be developed by a single organization (see Figure 1.1) and thus does not exactly match the definition, since its subsystems are developed under the same hierarchy. Having diverse organizations and management structures (see Figure 1.2) implies technical, economic and financial objectives that may diverge between the parties, and thus multiple separate systems which, when put together, create a system-of-systems. A more exhaustive description is presented in ISO 21840 (2019).
Figure 1.1Complex system
Figure 1.2System-of-systems
Usually, a system-of-systems tends to have:
– multiple levels of stakeholders, sometimes with competing interests;
– multiple and possibly contradictory objectives and purposes;
– disparate management structures whose limits of responsibility are not always clearly defined;
– multiple life cycles with elements implemented asynchronously, resulting in the need to manage obsolescence of subsystems;
– multiple owners – depending on subsystems – making individual resource and priority decisions.
It is important to note that the characteristics differ between systems and systems-of-systems and are not mutually exclusive.
Why a book on the testing of systems-of-systems? Systems-of-systems are part of our everyday life, but they are not addressed in software testing books, which focus on only one piece of software at a time, without taking into account either the physical systems required to execute it or the interactions between systems, which increase the difficulty and combinatorial complexity of testing. Ensuring quality for a system-of-systems means ensuring, for each subsystem (and sub-subsystem), the quality of the design process for each of the systems, subsystems, components, software, etc., that make it up.
Frequently, actors on a system-of-systems project focus only on their own activity, respecting contractual obligations without considering the requirements of the overall system-of-systems or the impact their system may have on it. This focus also applies when developing software to be used in a company’s information system: the development teams seldom exchange with the teams in charge of support or production. This is slowly changing with the introduction of DevOps in some environments, but the gap between IT and business domains remains large.
As more projects become increasingly complex, connected to one another in integrated systems-of-systems, books on advanced level software testing in the frame of these kinds of systems become necessary.
Most books on software testing focus on testing one piece of software for one organization, where those who define the requirements, design the software and test it are in the same organization or – at least – under the same hierarchy. There is thus a common point for decisions. In a system-of-systems, there are at least two sets of organizations: the client and the contractors. A contractual relationship exists and directs the exchanges between these organizations.
Many specific challenges are associated with these contractual relationships:
– Are requirements and specifications correctly defined and understood by all parties?
– Are functionalities and technical characteristics coherent with the rest of the system-of-systems with which the system will be merged?
– Have evolutions, replacements and possible obsolescence been considered for the whole duration of the system-of-systems being developed?
In a system-of-systems, interactions with other systems are more numerous than in a simple system. Verifying these numerous exchanges between components and systems will thus be a heavier load than for other software. In case of a defect, it will be necessary to identify which party has to implement the fix, and each actor will prefer to shift the responsibility onto the others. These decisions may be influenced by economic factors (it may be cheaper to fix one system instead of another), regulatory factors (conformance may be easier to demonstrate on one system instead of another), or contractual or technical factors (one system may be simpler to change than another).
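As a sketch of what verifying such exchanges can look like in practice, the following checks each message passed between two subsystems against an agreed interface contract, so that a nonconformity can be attributed to the producing or consuming side. The field names and rules are hypothetical, not taken from any real interface.

```python
# Agreed contract for one message type exchanged between two subsystems.
# Illustrative only: real interface control documents define many more rules
# (value ranges, encodings, protocol framing, timing, etc.).
REQUIRED_FIELDS = {"order_id": int, "amount_cents": int, "currency": str}

def validate_message(msg: dict) -> list[str]:
    """Return the list of contract violations (empty means conformant)."""
    errors = []
    for name, expected_type in REQUIRED_FIELDS.items():
        if name not in msg:
            errors.append(f"missing field: {name}")
        elif not isinstance(msg[name], expected_type):
            errors.append(f"wrong type for {name}")
    return errors

# The producer emits; the consumer validates before accepting.
good = {"order_id": 42, "amount_cents": 1999, "currency": "EUR"}
bad = {"order_id": "42", "currency": "EUR"}

print(validate_message(good))  # []
print(validate_message(bad))   # ['wrong type for order_id', 'missing field: amount_cents']
```

Checking conformance at the interface, on both sides, is one way of producing the evidence needed to decide which party must implement a fix.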
Responsibilities are different between the client and the organization that executes the development. The impact is primarily felt by the client, and it is up to the development organization to ensure the quality of the developments.
The increase in the complexity of IT solutions forces us to envisage more efficient management of the specific challenges linked to systems-of-systems, on which we are increasingly dependent.
The design of software, systems and systems-of-systems requires interaction between many individuals, each with different objectives and different points of view. The notion of “quality” of a deliverable will vary and depend on the relative position of each party. This book tries to cover each point of view and shows the major differences between what is described in many other books – the design and testing of a single software application – and the complexity and reality of systems-of-systems. The persons who could benefit from reading this book are as follows:
– design organization project managers who must ensure that the needs of users, their customers and their clients are met and therefore that the applications, systems and systems-of-systems are correctly developed and tested (i.e. verified and validated);
– by extension, within the design organization, assistant project managers, who will have to ensure that the overall objectives of the designing organization are correctly checked and validated, especially taking into account the needs of the users – forever changing given the length of systems-of-systems projects – and that the evidence provided to justify a level of quality is real;
– customer project managers, whether for physical (hardware) production or for digital (software) production, and specifically those responsible for programs, development projects or test projects, in order to ensure that the objectives of design organizations are correctly understood, deduced and implemented in the solutions they put in place;
– test managers in charge of quality and system-of-systems testing (at design organization level), as well as test managers in charge of quality and system testing (at design and at client level) of the applications and software-predominant components entering into the composition of systems-of-systems, with the particularity that the so-called “end-to-end” (E2E) tests are not limited to a single application or system, but cover all the systems making up the system-of-systems;
– testers, test analysts and technical test analysts wishing to obtain a more global and general vision of their activities, and to understand how to apply their skills and knowledge to further develop their careers;
– anyone wishing to develop their knowledge of testing and its impact on the quality of complex systems and systems-of-systems.
These books are part of a series of three books on software testing:
– the first book (Fundamentals of Software Testing, ISTE and Wiley, 2012) focuses on the ISTQB Foundation level tester certification and is an aid to obtaining this certification; it was ranked third best software testing book of all time by BookAuthority.org;
– this present book on the general aspects of systems-of-systems testing;
– a third book on practical implementation and case studies showing how to implement tests in a system-of-systems, Advanced Testing of Systems-of-Systems 2: Practical Aspects (ISTE and Wiley, 2022).
The last two books complement each other and form a single work. They are independent of the first.
We are in contact with and use systems-of-systems of all sizes every day: a car, an orchestra, a control-command system, a satellite telecommunications system, an air traffic control management system, an integrated defense system, a multimodal transport system, a company, all are examples of systems-of-systems. There is no single organizational hierarchy that oversees the development of all the components integrated into these systems-of-systems; some components can be replaced by others from alternative sources.
In this book, we will focus primarily on software-intensive systems. We use them every day: a company uses many applications (payroll, inventory management, accounting, etc.) developed by different companies, but which must work together. This company information system is thus a system-of-systems.
Our means of transportation are also systems-of-systems: the manufacturers (of metros, cars, planes, trains, etc.) are mainly assemblers integrating hardware and software designed by others.
Operating systems – for example, open source – integrating components from various sources are also systems-of-systems. The developments are not carried out under the authority of a single organization, and there is frequently integration of components developed by other structures.
The common elements of systems-of-systems – mainly software-intensive systems – are the provision of a service, under defined conditions of use, with expected performance, providing a measurable quality of service. It is important to think “systems” at the level of all processes, from design to delivery to the customer(s) of the finished and operational system-of-systems.
Often, systems-of-systems include, within the same organization, software of various origins. For example, CRM software such as SAP, a Big-Data type data analysis system, vehicle fleet management systems, accounting monitoring or analysis of various origins, load sharing systems (load balancing), etc.
The examples in this book come from the experience of the author during his career. We will therefore have examples in space, military or civil aeronautics, banking systems, insurance and manufacturing.
To fully understand what a system-of-systems is in our everyday life, let us take the example of connecting your mobile phone to your vehicle. First of all, we have your vehicle and its operating system, which interacts with your phone via a Bluetooth connection. Then, we have your phone, which has an operating system version that evolves separately from your car; then, we have the version of the software app, available on a store, which provides the services to your phone. Finally, we have the subscription that your car manufacturer provides to ensure the connection between your vehicle and your phone. This subscription is certainly supported by a series of mainframes and legacy applications, and these must also be accessible via the Web. The information reported by your vehicle will certainly be included in a repository (Big Data, data lake, etc.) where it can be aggregated, enabling maintenance of your vehicle as well as improvement in the maintenance of vehicles of your type. This maintenance information will allow your dealer to warn you if necessary (e.g. a failure identified while the vehicle is not at the garage, and the need to go to a garage quickly). You can easily identify all the systems that need to communicate correctly so that you – the user – are satisfied with the solution offered (vehicle + mobile + application + subscription + information reported + emergency assistance + vehicle monitoring + preventive or corrective maintenance + … etc.).
This book will focus primarily on systems-of-systems and software-intensive systems, and how to test such systems. The identified elements can be extrapolated to physical systems-of-systems.
As we will focus on testing, the view we take of systems-of-systems will be that of test managers: either the person in charge of testing for the client or for the design organization, or the person in charge of testing a component, product or subsystem of a system-of-systems, so as to identify the information to be provided within the framework of the system-of-systems. We will also use this view of the quality of systems and systems-of-systems to propose improvements to the teams in charge of implementation (e.g. software development teams, developers, etc.).
This work is not limited to the aspects of testing – verification and validation – of software systems, but also includes the point of view of those in charge of improving the quality of components – software or hardware – and processes (design, maintenance, continuous improvement, etc.).
As part of this book, we will also discuss the delivery aspects of systems-of-systems in the context of DevOps.
The necessity of testing software, components, products or systems before using or marketing them is evident, known and recognized as useful. The objectives of testing can be seen through five successive phases, as proposed by Beizer (1990):
– testing and debugging are related activities in that it is necessary to test in order to be able to debug;
– the purpose of the test is to show the proper functioning of the software, component, product or system;
– the purpose of the test is to show that the software, component, product or system does not work;
– the objective of the test is not to prove anything, but to reduce the perceived risk of non-operation to an acceptable value;
– the test is not an action; it is a mental discipline resulting in software, components, products or systems having little risk, without too much testing effort.
Each of these five phases builds on the previous ones and should be internalized by all stakeholders on the project. Any difference in the understanding of “why we test” will lead to tensions over the strategic choices (e.g. level of investment, prioritization of anomalies and their criticality, level of urgency, etc.) associated with testing.
A sixth answer to the question “why test?” adds a dimension of improving software quality and testing processes to identify anomalies in products comprising software such as systems-of-systems. This involves analyzing the causes of each failure and implementing processes and procedures to ensure the non-reproducibility of this type of failure. In critical safety areas (e.g. aeronautics), components are added to the systems to keep information on the operating status of the systems in the event of a crash (the famous “black boxes”). The analysis of these components is systematic and makes it possible to propose improvements in procedures or aircraft design, so as to make air travel even more reliable.
Adding such an approach to development methods is what sprint retrospectives (in the Agile Scrum methodology) and, more generally, feedback activities are designed for. This involves objectively studying anomalies or failures and improving processes to ensure that they cannot recur.
When talking about systems-of-systems, it is common (in France) to use the terms client project management (MOA) and designer project management (MOE). These acronyms, inherited from cathedral building, have been taken up in the world of software engineering. They are purely French, and represent two different views of the same things:
– the client project owner (abbreviated MOA) represents the end users who have the need and who define the objectives, schedule and budget; the MOA is responsible for the needs of the company, of the users and their customers, of the principals, sponsors or stakeholders, and of the business of the company. There is usually only one MOA;
– the designer project manager (abbreviated MOE) represents the person (or company) who designs and controls the production of an element, or a set of elements, making up the system-of-systems; it comprises all the production teams, with constraints and objectives often different from those of the company and the principals. There may be multiple MOEs.
In a system-of-systems, we therefore must take into account this separation between MOA (client) and MOE (supplier) and therefore the two separate views of each of these major players.
When we deal with systems-of-systems testing, we will speak of “test manager”, but these can be assigned to a single test level (e.g. for a software subsystem) or cover several levels (e.g. the manager responsible for testing at the project management level).
Recent statistics1 show that only 6% of large IT projects are successful, while 52% run over budget or schedule, or lack some of the expected functionalities. The remaining 42% are cancelled before delivery, becoming losses for their organizations.
We can conclude that the most appropriate development and testing processes should be implemented to minimize, as much as possible, the risks associated with systems-of-systems. Compared with complex systems, systems-of-systems present their test managers with many more challenges to face and master.
Systems-of-systems are generally more complex and larger than complex systems developed by a single entity. We must consider:
– interfaces and interoperability of systems with each other, both logical (messages exchanged, formats, coding, etc.) and physical (connectors, protections against EMP, length of connectors, etc.);
– development life cycles of the organizations and their evolutions;
– obsolescence of components of the system-of-systems, as well as their versions and compatibilities;
– integration of simulation and decision support tools, as well as the representativeness of these tools with regard to the components they simulate;
– governance and applicable standards – as well as their implementation – for both process and product aspects;
– design architecture and development process frameworks;
– the quality of requirements and specifications, as well as their stability or evolution over time;
– the duration of the design process to develop and integrate all the components, compatibility of these with each other, as well as their level of security and the overall security of the entire system-of-systems;
– organizational complexity resulting from the integration of various organizations (e.g. following takeovers or mergers) or the decision to split the organizations, to call on relocated external subcontracting (offshore) or not;
– the complexity of development cycles stemming from the desire to change the development model, which implies the coexistence of more or less incompatible models with each other for fairly long periods.
Figure 1.3Simple–complicated–complex–chaotic
We could use the CYNEFIN2 model (see Figure 1.3, simple–complicated–complex–chaotic) to better understand the evolution between simple systems (most software developments), complicated systems (e.g. IT systems), complex systems (the majority of systems-of-systems) and chaotic systems, where the number of interactions is such that it is difficult (impossible?) to reproduce and/or simulate all the conditions of execution and operation of the system-of-systems.
To determine if the system is simple, complicated, complex or chaotic, we can focus on the predictability of effects and impacts. We also have the “disorder” state which is the initial position from which we will have to ask ourselves questions to determine which model of system we should turn to.
If the causes and effects are well known and predictable, the problem is said to be “simple”. The steps can be broken down into sensing, categorizing and then responding. We can look at the applicable “best practices” and select the one(s) that is(are) appropriate, without needing to think too much.
An environment will be said to be “complicated” when the causes and effects are understandable but require a certain expertise. The domain of practices – including software testing practices – is that of “good practices”, known to experts and consultants and making it possible to reach a predefined final target.
In the realm of the “complex”, the causes and effects are difficult to identify, to understand, to isolate and to define. It seems difficult, if not impossible, to reason about the problem directly. We move here from the field of “best practices” to that of solutions emerging little by little, without an a priori identification of the final target. We are no longer in a posture of expertise, but in that of a coach who asks questions, offers insights through reflection and helps the actors build understanding.
In a so-called “chaotic” system, we are unable to distinguish the links between causes and their effects. At this level, the reaction will often be an absence of reaction, a kind of paralysis. When you are in chaos, the only thing you can do is get out of it as quickly as possible, by any means imaginable. Given the exceptional nature of what is happening, there are no best practices to apply. You will not have the time to consult experts who would take weeks to analyse the situation in detail and finally advise you on the right course of action. You will certainly not have the time to run a few harmless experiments to let an original solution emerge. The urgency is to take shelter: the urgency is to act first.
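The sense-making logic of the four domains can be sketched as a small helper (an illustrative simplification; the domain names follow the Cynefin model, while the predictability labels are our own, not part of any standard):

```python
def cynefin_domain(predictability: str) -> str:
    """Map the predictability of cause-effect links to a Cynefin domain."""
    mapping = {
        "obvious": "simple",        # causes and effects well known: best practice
        "knowable": "complicated",  # predictable, but only with expert analysis
        "emergent": "complex",      # solutions emerge little by little
        "none": "chaotic",          # no discernible cause-effect links: act first
    }
    # Anything we cannot yet classify is the initial "disorder" state
    return mapping.get(predictability, "disorder")

# A classic IT system: understandable, but only with expertise
print(cynefin_domain("knowable"))  # complicated
```

Starting from “disorder” and asking how predictable the effects are, the helper reproduces the decision described above.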
Most systems-of-systems are large – even very large – projects. Measured in function points (e.g. IFPUG or SNAP), these projects easily exceed 10,000 function points and even reach 100,000 function points. Capers Jones (2018a) tells us that on average these projects have a 31–47% probability of failure. The Chaos Report in 2020 confirms this trend with 19% of projects failing and 50% seriously off budget, off deadline or lacking in quality.
Since causes of failure compound one another, it is critical to implement multiple quality improvement techniques throughout the project, from its start. The choice of these techniques should be based on their measured and demonstrated effectiveness (i.e. not on the statements or opinions of one or more individuals). A principle applicable to QA and testing is “prevention is better than cure”: it is better to detect a defect early and avoid introducing it into any deliverable (requirements, code, test cases, etc.) than to discover it late. This principle also applies to tests: reviews and inspections have demonstrated their effectiveness in avoiding the introduction of defects (measured effectiveness greater than 50%), while test suites generally have an effectiveness of less than 35%. This is the basis of the “shift left” concept, which encourages finding defects as early as possible (to the left in the task schedule). This justifies providing stakeholders with information on the level of quality of systems-of-systems from the start of design, as well as measurable information for each of the subsystems that compose them. Implementing metrics and systematic reporting of measures is therefore necessary to prevent dangerous drifts from appearing and leading the project to failure.
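Assuming the techniques act independently – each removing its fraction of the defects that remain – their combined effect can be computed as follows, using the effectiveness figures quoted above (around 50% for reviews, 35% for test suites):

```python
def cumulative_dre(efficiencies):
    """Fraction of defects removed when several techniques are applied
    in sequence, each removing its fraction of what remains (assumes
    the techniques act independently)."""
    remaining = 1.0
    for e in efficiencies:
        remaining *= (1.0 - e)
    return 1.0 - remaining

# Reviews (~50%) followed by a test suite (~35%), figures from the text
combined = cumulative_dre([0.50, 0.35])
print(f"{combined:.3f}")  # 0.675 -> far better than either technique alone
```

This is why stacking several complementary techniques from the start of the project pays off: no single technique reaches that level on its own.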
Since systems-of-systems are large projects involving several organizations, it is difficult to have complete and detailed visibility into all the components and their interactions with each other. It will be necessary to use documentation – paper or electronic via tools – to transmit the information. In this type of development, these activities will sometimes be taken over by those in charge of Quality Assurance. Test Managers belong to Quality Assurance, focusing mainly on the execution of tests to verify and validate requirements and needs.
The Test Manager will thus have to:
– analyse information coming from lower levels and related to the level of quality of the subsystems or components which are developed and tested there;
– provide information to higher levels, related to the level of quality of the subsystems or components tested at its level.
Each subdivision level of the system-of-systems must therefore receive the level of information necessary to carry out its activities, and be informed of developments that may impact it. This involves a two-way traceability of information from requirements to test results.
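As an illustration, two-way traceability can be reduced to a pair of mappings – requirements to test cases, and the derived reverse mapping so that a test result can be traced back to the requirements it covers (the identifiers are invented for the example):

```python
from collections import defaultdict

# Forward traceability: each requirement maps to the test cases covering it
req_to_tests = {
    "REQ-001": ["TC-010", "TC-011"],
    "REQ-002": ["TC-011", "TC-020"],
}

# Derive the reverse mapping so a failed test points at impacted requirements
test_to_reqs = defaultdict(list)
for req, tests in req_to_tests.items():
    for tc in tests:
        test_to_reqs[tc].append(req)

failed = "TC-011"
print(sorted(test_to_reqs[failed]))  # ['REQ-001', 'REQ-002']
```

Maintaining both directions is what allows each subdivision level to see which requirements a local failure impacts, and which tests a requirement change invalidates.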
Complex information systems comprising numerous software components of various origins and natures can also be considered systems-of-systems. For example:
– a production management system based on an ERP (e.g. SAP);
– a customer relationship management system (e.g. SalesForce);
– legacy applications based on multiple systems and technologies;
– applications gathering commercial or other data for statistical purposes (Big Data type);
– applications managing websites, etc.
Applications often come from many external – and internal – sources with different release cadences. A management system (ERP or CRM type) may release one or two versions per year with a sequential development cycle, while an Internet application (catalog and online sales type) may use a DevOps methodology and deliver every week.
These applications interact and exchange information to create a complete and complex information system. Mastering such a system-of-systems requires having a global functional vision to identify the impacts of a change on the entire information system. We mean here “change” both at the functional level (modification of a management rule, addition of a new commercial offer) and at the technical level (e.g. addition of a new system, modification of the security rules or data transfer, implementation of new tools, etc.).
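For illustration, such an impact analysis can be sketched as reachability in a directed graph of inter-application flows (the application names and flows are invented for the example):

```python
from collections import deque

# Directed data flows between applications (illustrative names)
flows = {
    "ERP": ["CRM", "BigData"],
    "CRM": ["Web"],
    "Web": ["BigData"],
    "BigData": [],
}

def impacted(changed: str) -> set:
    """All applications reachable from the changed one via data flows."""
    seen, queue = set(), deque([changed])
    while queue:
        app = queue.popleft()
        for nxt in flows.get(app, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(impacted("ERP")))  # ['BigData', 'CRM', 'Web']
```

A change to the ERP potentially impacts everything it feeds, directly or transitively; this is the “global functional vision” reduced to its simplest form.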
Systems-of-systems are developed by different companies with no common development policy. Some of the components used in systems-of-systems may exist before the design of the system-of-systems; other components are developed specifically for the system-of-systems. Lifetime, design mode and component criticality will certainly be different. The responsibilities of the stakeholders as well as the confidentiality of the information will also be different. All these elements should be considered. Let us see them in a little more detail.
Components – as well as the systems and subsystems – making up the system-of-systems have different life spans; it is common for some components to become obsolete or no longer be supported by their manufacturer even before the system-of-systems is placed on the market. Consider, for example, the number of PCs that still run Windows XP within systems-of-systems such as those of defense or large financial organizations, although Microsoft ended support for XP on April 8, 2014, or the number of legacy systems developed in COBOL nearly 40 years ago and still in operation.
Likewise, companies often have different design styles and different testing requirements. Design models vary from sequential development to Agile development, through all the different styles (V, Iterative, Incremental, RUP, Scrum, SAFe, etc.), each with a different level of documentation and verification/validation activities.
The design mode impacts documentation (in terms of volume and evolution) as well as the frequencies of delivery of work products (punctual deliveries or continuous deliveries).
Often, the level of testing of a software component will vary based on the criticality defined by its publisher. A non-critical component – probably tested with less rigor than a critical one – may later be introduced into a system-of-systems of high criticality. For example, a critical system might transmit an intrusion notification over the cellular network (GSM) or over the TCP/IP network. Neither of these networks – otherwise very reliable – is, on its own, reliable enough to guarantee that the notification will always be transmitted.
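This is why redundant channels are combined: assuming the channels fail independently, a notification is lost only if all of them fail at once. A rough sketch with invented failure rates:

```python
def notification_loss(p_fail_each: list) -> float:
    """Probability that a notification is lost when sent over several
    independent channels: all channels must fail simultaneously."""
    p = 1.0
    for pf in p_fail_each:
        p *= pf
    return p

# Illustrative figure: each network loses 1 message in 1,000
single = notification_loss([1e-3])
dual = notification_loss([1e-3, 1e-3])  # GSM and TCP/IP combined
print(single, dual)  # 0.001 vs 1e-06: three orders of magnitude better
```

Neither channel alone meets a high-criticality target, but the combination may – provided the failure modes really are independent, which the tests must verify.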
In systems-of-systems, the responsibility for testing at the system-of-system level is the responsibility of a Test Manager that we will call Product Test Manager (TMP). The tests of the various subsystems are delegated to other Test Managers, who report information to the Product Test Manager. The Product Test Manager may therefore have to combine and synthesize the results of the tests carried out on each of the systems making up the system-of-systems.
Industrial organization may involve subcontractors to develop certain systems or components. Each subcontractor must ensure that the deliverables provided are of good quality, that they comply with applicable requirements. Subcontractors sometimes wish to limit the information provided to their client (including the level of tests executed and their results), thus limiting the reporting burden (of measuring, summarizing and processing information). The corollary is that the Product Test Manager is unable to identify tests already carried out and their level of quality.
As the system-of-systems produced will be associated with a brand, it is important for the Product Test Manager to ensure that each of the systems composing it is of good quality.
Example: a system-of-systems produced by the company AAA is made up of many systems developed by other companies, for example, XXX, YYY and ZZZ. If the interaction of the systems produced by the companies XXX and YYY leads to a malfunction of the component produced by ZZZ, it will always be the name of company AAA that is mentioned, and not those of the other companies XXX, YYY or ZZZ. It is therefore important for the Product Test Manager of company AAA to ensure that the products delivered by companies XXX, YYY and ZZZ function correctly within the AAA system-of-systems.
Each company has its own processes, techniques and methods which represent its added value, its know-how. In many cases, these processes are confidential and undisclosed. This confidentiality also applies to the testing activities of the products designed.
However, this confidentiality of the processes should be lifted for customers, to allow them to verify the technical relevance of the existing processes. The provision of numerical references and representative statistics, as well as comparisons or references with respect to recognized technical standards, allows an assessment of the relevance of these processes.
In a system-of-systems, there are many levels of testing. We could have seven test levels (such as for airborne systems):
– testing at the developer level, in a software development environment (see unit test);
– testing the software alone (see section 6.4 of this book), in a separate test environment;
– testing the software integrated with other software in a separate test environment (see integration test and system test within the meaning of ISTQB, system tests within the meaning of TMap-Next);
– testing the software installed on the hardware equipment (see hardware–software integration test);
– testing the equipment in connection with other equipment to form the system, but on a test bench;
– testing the system on the aircraft, on the ground;
– testing the system on the aircraft, in flight (see acceptance test).
If we take another type of system-of-systems – for example, a manufacturing company that markets its products and uses outsourcing – the test levels (here eight levels) may be different:
– testing at the developer level within the development organization (component or unit testing);
– testing of the embedded software, in a separate test environment, within the development organization;
– functional acceptance test of the integrated software components, by the client organization – or its representatives – in the development organization test environment (factory acceptance test or system test as per ISTQB terminology, validation according to TMap-Next terminology);
– acceptance testing of the software integrated with the hardware equipment (by the client organization) in the environment of the development organization (Hardware–Software Integration Testing);
– test of the software installed in the hardware and software environment of acceptance of the customer company including the technical verification of the flows between the various applications of the system-of-systems (integration test and verification of the inter-application flows);
– software testing in an environment representative of production to ensure performance and correct operation for users (acceptance testing);
– testing the software and the system-of-systems on a limited perimeter (pilot phase), but with functional verification of all flows;
– the test of the system-of-systems during a running-in phase (acceptance test).
Each level focuses on certain types of potential defects. Between the various levels, we could have intermediate levels, such as the FAI (First Article Inspection), which checks the first equipment – or system – delivered to the assembly line. Each level has its own requirements, its own development and test methods and environments, and its own development and test teams.
The number of test levels, and separate test teams, increases the risks:
– of redundant tests, already executed at one test level and re-executed at another level, which impacts test efficiency;
– of reduced effectiveness of the test activities, because each level focuses on its own objectives and assumes that the tests of lower levels need not be re-executed. This can be mitigated by feeding the test coverage information of each level back to the higher levels.
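Feeding coverage information upwards can be as simple as comparing the sets of requirements covered at each level (the identifiers are invented for the example):

```python
# Requirement coverage reported by two test levels (illustrative data)
level_coverage = {
    "component": {"REQ-001", "REQ-002", "REQ-003"},
    "system": {"REQ-002", "REQ-003", "REQ-004"},
}

# Requirements tested at both levels: candidates for redundant re-execution
redundant = level_coverage["component"] & level_coverage["system"]
# Requirements only covered below: the system level relies on lower-level tests
uncovered_at_system = level_coverage["component"] - level_coverage["system"]

print(sorted(redundant))  # ['REQ-002', 'REQ-003']
```

The intersection flags potentially redundant tests to prune; the difference shows where the higher level is implicitly trusting the results reported from below.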
For business information systems and systems-of-systems, it is also necessary to consider other levels of acceptance of applications or systems:
– Does the software integrate and communicate well with other software? A systems integration testing level may be required.
– Has the distribution and production of the application been checked? Is it easy, is it possible to rollback – without losses – in the event of problems?
– Have the training activities been carried out to allow optimal application use?
– Are metrics and measurements of user satisfaction with the application considered to ensure customer satisfaction?
One of the usual challenges for a Test Manager and their test team is to be effective and efficient, that is, not to perform unnecessary tasks. A task that does not add value should be avoided. When certain tests are carried out by a development team (outsourced or not) and the same tests are carried out at another level, we lose efficiency. It is therefore essential to have good communication between the teams and identification of what is being done at each level of testing.
Defect detection and remediation costs increase over time; teams should be directed to perform testing as soon as possible to limit these costs. Each design activity should be followed by an activity focused on identifying the defects that might have been introduced in it. Phase containment can be measured easily by analysing the phases in which defects are introduced and detected. The ODC (Orthogonal Defect Classification) technique is based on this principle to identify the processes to be improved (see also section 13.3.1.3 of Volume 2).
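A minimal sketch of such a phase containment measurement (the defect data are invented; a real analysis would draw on the defect tracking repository):

```python
# Each defect is recorded with the phase where it was introduced and the
# phase where it was detected; a defect is "contained" when both match.
defects = [  # (introduced_in, detected_in) -- illustrative data
    ("requirements", "requirements"),
    ("requirements", "system test"),
    ("design", "design"),
    ("design", "system test"),
    ("coding", "coding"),
]

def containment_rate(phase: str) -> float:
    """Fraction of defects introduced in a phase that were found there."""
    introduced = [d for d in defects if d[0] == phase]
    contained = [d for d in introduced if d[1] == phase]
    return len(contained) / len(introduced)

print(containment_rate("requirements"))  # 0.5: half escaped downstream
```

A low containment rate for a phase points at the verification activity to strengthen immediately after that phase, which is exactly how ODC-style analysis directs process improvement.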
In subcontracted and/or outsourced development and testing, tests should be carried out at each level, both by the subcontractor’s design team and by the client teams. The tests executed by the design team are often more detailed and more technical than the tests executed by the customer team (which rather executes functional tests). The lack of feedback from the design team to the customer team prevents the implementation of an optimized multi-level test strategy.
The principle of a system-of-systems is that the development of components, products and systems and their testing are not done under the authority of a single management, but under different organizations. It will be necessary to define the contractual relations between the partners and between the systems of the system-of-systems.
Contracts should include synchronization points or information feedback allowing the main contractor – the project owner – to have a global and sufficiently detailed view of the progress of the realization of the system-of-systems. The feedback information is not only information on deadlines or costs, but should also cover the quality of the components, their performance, reliability level and relevance regarding the objectives of the system-of-systems. As mentioned earlier, each system may require multiple levels of testing. The results of each of these test levels must be compiled and integrated in order to have a global view of the project.
Each development and each test level can be the subject of a specific contract, including objectives in terms of service quality (SLA; Service Level Agreement) or product quality, metrics and progress indicators.
The definition of the metrics, and the measures necessary to ensure the level of quality of each product, must be included in each contract, as well as the frequency of measurement and the reporting method. Non-compliance with or failure to achieve these SLAs may result in contractual penalties.
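A minimal sketch of such an automated SLA check (the metric names and thresholds are invented for the example, not contractual standards):

```python
# Contractual thresholds agreed in the SLA (illustrative values)
slas = {
    "defect_leakage_pct": 5.0,   # max % of defects escaping a test level
    "test_pass_rate_pct": 95.0,  # min % of tests passing at delivery
}

def sla_breaches(report: dict) -> list:
    """Return the list of SLA clauses breached by a reported measurement."""
    breaches = []
    if report["defect_leakage_pct"] > slas["defect_leakage_pct"]:
        breaches.append("defect_leakage_pct")
    if report["test_pass_rate_pct"] < slas["test_pass_rate_pct"]:
        breaches.append("test_pass_rate_pct")
    return breaches

# A delivery that leaks too many defects, even though its tests pass
print(sla_breaches({"defect_leakage_pct": 7.2, "test_pass_rate_pct": 97.0}))
```

Checking every delivery against the agreed thresholds, at the agreed frequency, is what turns the contractual metrics into an early warning rather than a post-mortem.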
It must be realized that penalties are double-edged swords: if the level of penalty reaches a threshold that the subcontractor cannot bear, the latter may decide to stop the contract and wait for a judicial resolution of the dispute. Even if the litigation ends with a ruling against the subcontractor, this does not solve the problem at the system-of-systems level. The principal will be forced to find another subcontractor to replace the defaulting one, and there is no guarantee that this new subcontractor will accept binding penalty clauses that do not suit them. We have seen subcontractors who, having reached the maximum level of penalties provided for by the contract, decided to suspend performance of the contract until the penalties were waived.
The metrics and measures must be adapted to each level of the system-of-systems, to each level of progress of the project, and according to the software development life cycles (SDLC) selected for the realization.
The measures must cover the product, its level of quality, as well as the processes – production and testing – and their level of efficiency. In terms of reporting, information must be reported with sufficient frequency to make relevant decisions and anticipate problems.
To ensure that the information is unbiased, it is important to measure the same data in several ways, or at least to verify its relevance. For example, considering that a software product is of good quality because no defect was found during a test period is only valid if, during that period, the tests were actually carried out on the application – that is, the application was delivered and working, the test cases and their test data were executed correctly, the expected test results were obtained, the desired level of requirements and code coverage was achieved, manual testing was performed by competent testers, etc.
Frequently, outsourcing teams are tempted to hide negative information by disguising it or glossing over it, hoping that delays can be made up for later. Therefore, it is essential to carefully follow the reported measurements and investigate inconsistencies in the ratios.
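The cross-check described above can be sketched as follows: a “zero defects found” claim is only credible if tests actually ran with adequate coverage (the field names and coverage threshold are assumptions for the example):

```python
def quality_signal_is_trustworthy(report: dict) -> bool:
    """Reject a 'no defects found' claim unsupported by test activity."""
    if report["defects_found"] == 0:
        # Zero defects is only meaningful if tests ran and covered enough
        return (report["tests_executed"] > 0
                and report["requirement_coverage_pct"] >= 90.0)  # assumed threshold
    return True  # defects were found, so tests evidently ran

# Zero defects but nothing executed: the reported measure is not credible
print(quality_signal_is_trustworthy(
    {"defects_found": 0, "tests_executed": 0, "requirement_coverage_pct": 0.0}))
```

Inconsistent ratios – suspiciously good results with no supporting activity – are exactly the kind of signal that warrants investigation before accepting a status report.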
In a system-of-systems, exchanges between systems and between applications are important. These exchanges are tested during integration tests. In the absence of components, it may be necessary to design mock objects (stubs, emulators or simulators) that will generate the expected messages.
These objects may vary depending on the test levels. They will have to evolve according to the evolution of the specifications of the components they replace, and may have to be scrapped when the components they replace are made available.
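As an illustration, a stub can be a very small object that returns the messages the absent component would be expected to send, while recording the exchange for verification (the component and message format are invented for the example):

```python
class PositioningSystemStub:
    """Stands in for a positioning component not yet delivered."""

    def __init__(self, canned_position=(48.8566, 2.3522)):
        self.canned_position = canned_position
        self.calls = 0  # lets the test verify the exchange actually happened

    def get_position(self):
        """Return the message the real component would send."""
        self.calls += 1
        return {"type": "POSITION",
                "lat": self.canned_position[0],
                "lon": self.canned_position[1]}

stub = PositioningSystemStub()
msg = stub.get_position()
print(msg["type"], stub.calls)  # POSITION 1
```

When the specification of the replaced component evolves, the canned messages must evolve with it; and once the real component is delivered, the stub is retired, exactly as described above.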
The main concerns associated with these objects are: