Fundamentals of Software Testing

Bernard Homès

Description

Software testing has greatly evolved since the first edition of this book in 2011. Testers are now required to work in "agile" teams and focus on automating test cases. It has thus been necessary to update this work, in order to provide fundamental knowledge that testers should have to be effective and efficient in today's world.

This book describes the fundamental aspects of testing in the different software life cycles, and how to implement and benefit from reviews and static analysis. Multiple other techniques are covered, such as equivalence partitioning, boundary value analysis, use case testing, decision tables and state transitions.

This second edition also covers test management, test progress monitoring and incident management, in order to ensure that the testing information is correctly provided to the stakeholders.

This book provides detailed course-study material for the 2023 version of the ISTQB Foundation level syllabus, including sample questions to help prepare for exams.





Table of Contents

Cover

Table of Contents

Title Page

Copyright Page

Preface

Glossary

1 Fundamentals of Testing

1.1. What is testing?

1.2. Why is testing necessary?

1.3. Paradoxes and main principles

1.4. Test activities, testware and test roles

1.5. Roles in testing

1.6. Essential skills and “good practices” in testing

1.7. Testers and code of ethics (FL 1.6)

1.8. Sample exam questions

2 Testing Throughout the Software Life Cycle

2.1. Testing through the software development life cycle

2.2. Test levels and test types

2.3. Types of tests

2.4. Test and maintenance

2.5. Oracles

2.6. Process improvements

2.7. Specific cases

2.8. Sample exam questions

3 Static Testing

3.1. Static techniques and the test process

3.2. Review process

3.3. Static analysis by tools

3.4. Added value of static activities

3.5. Sample exam questions

4 Test Design Techniques

4.1. The test development process

4.2. Categories of test design techniques

4.3. Black-box techniques

4.4. Structure-based techniques

4.5. Experience-based technique

4.6. Collaboration-based test approaches

4.7. Choosing test techniques

4.8. Sample exam questions

5 Test Management

5.1. Test organization

5.2. Test planning and estimation

5.3. Test progress monitoring and control (FL 5.3)

5.4. Reporting

5.5. Transverse processes and activities

5.6. Risk management (FL 5.2)

5.7. Defect management (FL 5.5)

5.8. Sample exam questions

6 Tools Support for Testing

6.1. Types of test tools

6.2. Assumptions and limitations of test tools

6.3. Selecting and introducing tools in an organization

6.4. Sample exam questions

7 Mock Exam

8 Templates and Models

8.1. Master test plan

8.2. Test plan

8.3. Test design document

8.4. Test case

8.5. Test procedure

8.6. Test log

8.7. Defect report

8.8. Test report

9 Answers to the Questions

9.1. Answers to the end-of-chapter questions

9.2. Correct answers to the sample paper questions

References

Index

Other titles from ISTE in Computer Engineering

End User License Agreement

List of Tables

Chapter 2

Table 2.1. Development cycle comparison

Chapter 3

Table 3.1. Comparison of review types

Table 3.2. Defects in data flow analysis

Chapter 4

Table 4.1. Valid and invalid equivalence partitions

Table 4.2. Expanded decision table

Table 4.3. Reduced decision table

Table 4.4. State transition table representation

Chapter 5

Table 5.1. Risk likelihood

Table 5.2. Impact severity of the risks

Table 5.3. Risks by severity and likelihood

List of Illustrations

Chapter 1

Figure 1.1. Origin and impacts of defects

Figure 1.2. Fundamental test processes

Chapter 2

Figure 2.1. Waterfall model

Figure 2.2. V-model

Figure 2.3. W-model.

Figure 2.4. Iterative model

Figure 2.5. Spiral model

Figure 2.6. Incremental model

Figure 2.7. Scrum development model

Figure 2.8. ISO 25010 quality characteristics

Figure 2.9. Example of transaction budget.

Figure 2.10. Impact analysis.

Figure 2.11. Bathtub curve

Chapter 3

Figure 3.1. Static and dynamic techniques and defects

Figure 3.2. Types of objectives per level of review

Figure 3.3. Static analysis graph

Figure 3.4. Optimized website architecture.

Figure 3.5. Architecture that is not optimized.

Figure 3.6. Control flow

Figure 3.7. Control flow and data flow

Figure 3.8. Example of data flow for three variables.

Chapter 4

Figure 4.1. Traceability and coverage follow-up

Figure 4.2. Test techniques

Figure 4.3. Example of fields.

Figure 4.4. Expense reimbursement for kilometers traveled in France in 2010 fo...

Figure 4.5. State transition diagram for a telephone

Figure 4.6. State transition diagram

Figure 4.7. Representation of a decision

Figure 4.8. Representation of instructions

Figure 4.9. Code of function “Factorial”.

Figure 4.10. Control flow for function “factorial”.

Figure 4.11. Instruction coverage

Figure 4.12. Control flow – horizontal representation

Figure 4.13. Example of alternatives

Figure 4.14. Linked conditions

Figure 4.15. Control flow for linked conditions

Figure 4.16. Branch coverage

Figure 4.17. Control flow instruction “case of”

Figure 4.18. Control flow with two conditions.

Figure 4.19. Boolean decision table

Figure 4.20. Decision and condition coverage.

Chapter 5

Figure 5.1. Uniform test distribution.

Figure 5.2. Non-uniform test distribution.

Figure 5.3. Documentary architecture IEEE 829 v1998 and v2008.

Figure 5.4. Example of integration.

Figure 5.5. Defect detection and correction ((♦) opened; (△) closed)....

Figure 5.6. Quality of a milestone

Figure 5.7. Quality indices of the reviews.

Figure 5.8. Dashboard.

Figure 5.9. Time-over-time diagram.

Figure 5.10. Table of risks: occurrence × impact

Figure 5.11. Evolution of the risks throughout time



Revised and Updated 2nd Edition

Fundamentals of Software Testing

Bernard Homès

First edition published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc., © ISTE Ltd 2011.

This edition published 2024 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK

www.iste.co.uk

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.wiley.com

© ISTE Ltd 2024

The rights of Bernard Homès to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s), contributor(s) or editor(s) and do not necessarily reflect the views of ISTE Group.

Library of Congress Control Number: 2024932622

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-78630-982-2

Preface

“My eldest brother sees the spirit of sickness and removes it before it takes shape, so his name does not get out of the house. My elder brother cures sickness when it is still extremely minute, so his name does not get out of the neighborhood. As for me, I puncture veins, prescribe potions, and massage skin, so from time to time my name gets out and is heard among the lords”.

– Sun Tzu, The Art of War

I often turn to the above quote, replacing “sickness” by “defects”, and applying it to software instead of humans. I have seen few eldest brothers, a number of elder ones, and am perhaps in the last category of practitioners.

Why this book?

As we know, software testing is increasingly important in the industry, reflecting the growing importance of software quality in today’s world. Since 2011, when the first edition of this book was published, there have been evolutions in the software industry that have impacted software testing. This new – revised – edition will help testers adopt more up-to-date fundamental practices.

Due to the lack of formal and recognized training in software testing, a group of specialist consultants gathered together in 2002 and founded the International Software Testing Qualifications Board (ISTQB). They defined the minimal set of methodological and technical knowledge that testers should master, depending on their experience, and gathered it into what is called a syllabus. The foundation level syllabus was revised in 2023 and has been the basis of an international certification scheme that has already been obtained by more than 1,000,000 testers worldwide. This book can serve as reference material for testers preparing for the ISTQB foundation level exam, and for any beginner tester. It references the 2023 version of the ISTQB Certified Tester Foundation Level syllabus.

This book follows the order and chapters of the syllabus, which should help you to successfully complete the certification exam. It is a one-stop reference book offering you:

more detailed explanations than those found in the ISTQB syllabus;

definitions of the terms (i.e. the Glossary) used in the certification exams;

practice questions similar to those encountered during the certification exam;

a sample exam.

For testers who want to acquire a good understanding of software and system tests, this book provides the fundamental principles as described by the ISTQB and recognized experts.

This book provides answers and areas of discussion to enable test leaders and managers to:

improve their understanding of testing;

have an overview of process improvement linked to software testing;

increase the efficiency of their software development and tests.

Throughout this book, you will find learning objectives (denoted as FL-…) that represent the ISTQB foundation level syllabus learning objectives. These are the topics that certification candidates should know and that are examined in certification exams.

Prerequisite

Software testing does not require specific prerequisites, but a basic understanding of data processing and software will allow you to better understand software testing.

The reader with software development knowledge, whatever the programming language, will understand certain aspects faster, but simple experience as a software user should be enough to understand this book.

ISTQB and national boards

The ISTQB is a not-for-profit international association grouping national software testing boards covering over 50 countries. These national boards are made up of software testing specialists, consultants and experts, and together they define the syllabi and examination directives for system and software testers.

A number of prominent authors of software testing books participated in the creation of the initial syllabi, ensuring that they reflect what a tester should know depending on their level of experience (foundation, advanced, expert) and their objectives (test management, functional testing and test techniques, specialization in software security or performance testing, etc.).

Glossary, syllabus and business outcomes

The ISTQB is aware of the broad diversity of terms used and the associated diversity of interpretation of these terms depending on the customers, countries and organizations. A common glossary of software testing terms has been set up and national boards provide translation of these terms in national languages to promote better understanding of the terms and the associated concepts. This becomes increasingly important in a context of international cooperation and offshore sub-contracting.

The syllabi define the basis of the certification exams; they also help to define the scope of training and are applicable at three levels of experience: foundation level, advanced level and expert level. This book focuses on the foundation level.

The foundation level, made up of a single module, is detailed in the following chapters.

Expected business outcomes, as stated by the ISTQB, are as follows for foundation level testers:

understand what testing is and why it is beneficial;

understand the fundamental concepts of software testing;

identify the test approach and activities to be implemented depending on the context of testing;

assess and improve the quality of documentation;

increase the effectiveness and efficiency of testing;

align the test process with the software development life cycle;

understand test management principles;

write and communicate clear and understandable defect reports;

understand the factors that influence the priorities and efforts related to testing;

work as part of a cross-functional team;

know the risks and benefits related to test automation;

identify the essential skills required for testing;

understand the impact of risk on testing;

effectively report on test progress and quality.

As we can see, the work of a tester impacts many different aspects of software development, from evaluating the quality of input documentation (specifications, requirements, user stories, etc.), to reporting on progress and risks, to test automation and to interacting with the development teams to understand what to test and to explain the defects that are identified.

ISTQB certification

The ISTQB proposes software tester certifications, which are recognized as equivalent by all ISTQB member boards throughout the world. The level of difficulty of the questions and exams is based on identical criteria (defined in the syllabi) and terminology (defined in the Glossary).

The certification exams proposed by the ISTQB enable candidates to validate their knowledge, and assure employers or potential customers of a minimum level of knowledge from their testers, whatever their origin. Training providers deliver courses to help participants succeed in the certification exams; however, much of this training consists of cramming sessions and does not ensure that participants have the level of autonomy required to succeed in the profession. This book attempts to identify the necessary skills, as well as provide the reader with a distillation of more than 40 years of practice in the field of software quality and testing.

ISTQB certifications are recognized as equivalent throughout the world, enabling international cross-recognition.

Key for understanding the content

To help you use it efficiently, this book has the following characteristics:

FL-xxx: text that starts with FL-xxx is a reminder of the learning objectives present in the ISTQB foundation level syllabus for certified testers. Those objectives are expanded in the paragraphs following this tag.

The titles of the chapters correspond to those of the ISTQB foundation level syllabus, version 2023. This is also often the case for the section heads; the syllabus reference is provided in the form (FL x.y), where x.y stands for the chapter and section head of the ISTQB foundation level syllabus.

A synopsis closes each of the chapters, summarizing the aspects covered and identifying the terms in the glossary that should be known for the certification exam. Sample exam questions are also provided at the end of each chapter. These questions were developed by applying the same criteria as for the creation of real exam questions.

The sample questions provided in Chapters 1–6 have been reproduced with the kind permission of © Bernard Homès 2011.

March 2024

Glossary

The definitions listed below have been extracted from the International Software Testing Qualifications Board (ISTQB) Standard Glossary of Terms used in Software Testing. Only the terms used for the Foundation Level certification exams are mentioned, so as not to drown the reader in terms that are used at other levels or in other syllabi.

Acceptance criteria: The criteria that a component or system must satisfy in order to be accepted by a user, customer or other authorized entity (from ISO 24765).

Acceptance test-driven development: A collaboration-based test-first approach that defines acceptance tests in the stakeholders’ domain language. Abbreviation: ATDD.

Acceptance testing: Formal testing with respect to user needs, requirements and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entities to determine whether or not to accept the system. See also user acceptance testing.

ACM: Association for Computing Machinery, a professional and scientific association for the advancement of information technology.

Alpha testing: Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed as a form of internal acceptance testing.

Anomaly: A condition that deviates from expectation (from ISO 24765).

Attack: Directed and focused attempt to evaluate the quality, and especially the reliability, of a test object by attempting to force specific failures to occur.

Beta testing: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing in order to acquire feedback from the market.

Black-box technique: See also black-box testing.

Black-box test technique: A test technique based on an analysis of the specification of a component or system. Synonyms: black-box test design technique, specification-based test technique.

Black-box testing: Testing, either functional or nonfunctional, based on an analysis of the specification of the component or system. Synonym: specification-based testing.

Boundary value analysis: A black-box test technique in which test cases are designed based on boundary values. See also boundary values.
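
As a minimal sketch (using an assumed valid input range of 1 to 100, not an example from the syllabus), boundary value analysis would select each boundary and its closest invalid neighbor as test inputs:

# Hypothetical example in Python: the valid range is assumed to be 1..100 inclusive.
def is_valid(value):
    return 1 <= value <= 100

boundary_tests = [0, 1, 100, 101]   # just below minimum, minimum, maximum, just above maximum
for value in boundary_tests:
    print(value, is_valid(value))   # expected output: False, True, True, False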

Branch coverage: The coverage of branches in a control flow graph (percentage of branches that have been exercised by a test suite). One hundred percent branch coverage implies both 100% decision coverage and 100% statement coverage.

Bug: See also defect.

Checklist-based testing: An experience-based test technique in which test cases are designed to exercise the items of a checklist.

Code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, for example, statement coverage, decision coverage or condition coverage.

Collaboration-based test approach: An approach to testing that focuses on defect avoidance by collaborating among stakeholders.

Commercial off-the-shelf software (COTS): See also off-the-shelf software.

Compiler: A software tool that translates programs expressed in a high-order language into their machine language equivalents.

Complexity: The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. See also cyclomatic complexity.

Component integration testing: The testing executed to identify defects in the interfaces and interactions between integrated components. Synonyms: module integration testing, unit integration testing.

Component testing: A test level that focuses on individual hardware or software components. Synonyms: module testing, unit testing.

Configuration control: An element of configuration management, consisting of the evaluation, coordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.

Configuration item: An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process.

Configuration management: A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.

Confirmation testing: A type of change-related testing performed after fixing a defect to confirm that a failure caused by that defect does not reoccur. Synonym: re-testing.

Control flow: An abstract representation of all possible sequences of events (paths) in the execution through a component or system.

Coverage: The degree to which specified coverage items are exercised by a test suite, expressed as a percentage. Synonym: test coverage.

Coverage item: An attribute or combination of attributes derived from one or more test conditions by using a test technique. See also coverage criteria.

Coverage measurement tool: See also coverage tool.

Coverage tool: A tool that provides objective measures of what structural elements, for example, statements, branches, have been exercised by the test suite.

Cyclomatic complexity: The number of independent paths through a program. Cyclomatic complexity is defined as: L – N + 2P (a brief worked example follows the symbol definitions below), where:

L = the number of edges/links in a graph;

N = the number of nodes in a graph;

P = the number of disconnected parts of the graph (e.g. a calling graph and a subroutine).
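
As a brief worked illustration (an assumed control flow graph, not taken from the glossary), consider a function containing a single if–else decision; the following short Python sketch applies the formula:

# Hypothetical control flow graph of a function with a single if-else decision.
edges = 4   # L: decision->then, decision->else, then->exit, else->exit
nodes = 4   # N: decision node, then block, else block, exit node
parts = 1   # P: one connected graph, no separate subroutine
cyclomatic_complexity = edges - nodes + 2 * parts
print(cyclomatic_complexity)   # 2, i.e. two independent paths (then path and else path)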

Data-driven testing: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data-driven testing is often used to support the application of test execution tools such as capture/playback tools. See also keyword-driven testing.
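
A minimal sketch of this scripting technique, assuming Python and a hypothetical function under test named square: a single control script reads input/expected-result pairs from a table and runs the same steps for each row:

# Hypothetical data-driven test: the rows could equally be read from a CSV file
# or a spreadsheet; an in-line table is used here for brevity.
def square(x):   # function under test (assumed for this sketch)
    return x * x

test_data = [
    # (input value, expected result)
    (2, 4),
    (3, 9),
    (10, 100),
]

for value, expected in test_data:
    actual = square(value)
    assert actual == expected, f"square({value}) returned {actual}, expected {expected}"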

Data flow: An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object can be creation, usage or destruction.

Debugging: The process of finding, analyzing and removing the causes of failures in a component or system.

Debugging tool: A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step-by-step, halt a program at any program statement and set and examine program variables.

Decision coverage: The percentage of decision outcomes that have been exercised by a test suite. One hundred percent decision coverage implies both 100% branch coverage and 100% statement coverage.

Decision table testing: A black-box test technique in which test cases are designed to exercise the combinations of conditions inputs and/or stimuli (causes) shown in a decision table.

Defect: An imperfection or deficiency in a work product, which can cause the component or system to fail to perform its required function, for example, an incorrect statement or data definition (from ISO 24765). A defect, if encountered during execution, may cause a failure of the component or system. Synonyms: bug, fault.

Defect density: The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, for example, lines-of-code, number of classes or function points).

Defect management: The process of recognizing, recording, classifying, investigating, resolving and disposing of defects. It involves recording defects, classifying them and identifying the impact.

Defect management tool: See also incident management tool.

Defect report: Documentation of the occurrence, nature and status of a defect. Synonym: bug report.

Driver: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.

Dynamic analysis tool: A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic, and monitor the allocation, use and de-allocation of memory and highlight memory leaks.

Dynamic testing: Testing that involves the execution of the test item/software of a component or system (from ISO 29119-1). See also static testing.

Entry criteria: The set of generic and specific conditions that permit a process to proceed with a defined task (from Gilb and Graham), for example, a test phase. The purpose of entry criteria is to prevent a task from starting if it would entail more (wasted) effort than the effort needed to remove the failed entry criteria. See also exit criteria.

Equivalence partitioning: A black-box test technique in which test conditions are equivalence partitions exercised by one representative member of each partition (from ISO 29119-1). Synonym: partition testing.

Error: A human action that produces an incorrect result (from ISO 24765). Synonym: mistake.

Error guessing: A test design technique in which tests are derived on the basis of the tester’s knowledge of past failures, or general knowledge of failure modes, in order to anticipate the defects that may be present in the component or system under test as a result of errors made, and design tests specifically to expose them (from ISO 29119-1).

Exhaustive testing: A test approach in which the test suite comprises all combinations of input values and preconditions.

Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, that permit a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task that have not been finished. Exit criteria are used by testing to report against and plan when to stop testing (after Gilb and Graham). Synonyms: test completion criteria, completion criteria. See also entry criteria.

Experience-based test technique: A test technique based on the tester’s experience, knowledge and intuition. Synonyms: experience-based test design technique, experience-based technique.

Exploratory testing: An approach to testing in which the testers dynamically design and execute tests based on their knowledge, exploration of the test item and the results of previous tests. This is used to design new and better tests (from ISO 29119-1). See also test charter.

Failure: An event in which a component or system does not perform a required function within specified limits (from ISO 24765). Actual deviation of the component or system from its expected delivery, service or result (according to Fenton). The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered [EUR 00].

Failure rate: The ratio of the number of failures of a given category to a given unit of measure, for example, failures per unit of time, failures per number of transactions and failures per number of computer runs.

Fault attack: See also attack.

Field testing: See also beta testing.

Finite state testing: See also state transition testing.

Formal review: A type of review that follows a defined process with a formally documented output, for example, inspection (from ISO 20246).

Functional requirement: A requirement that specifies a function that a component or system must perform.

Functional testing: Testing performed to evaluate if a component or system satisfies functional requirements (from ISO 24765). See also black-box testing.

Horizontal traceability: The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification).

IEEE: Institute of Electrical and Electronics Engineers, a professional, not-for-profit association for the advancement of technology, based on electrical and electronic technologies. This association is active in the design of standards and provides publications that are useful for software testers; there is also a French chapter of this association.

Impact analysis: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.

Incident: Any event occurring during testing which requires investigation.

Incident management tool: A tool that facilitates the recording and status tracking of incidents found during testing. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities. See also defect management tool.

Incident report: A document reporting on any event that occurs during the testing which requires investigation.

Incremental development model: A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each sub-project follows a “mini V-model” with its own design, coding and testing phases.

Independence of testing: Separation of responsibilities, which encourages the accomplishment of objective testing.

Informal review: A type of review that does not follow a defined process and has no formally documented output.

Inspection: A type of formal review that relies on visual examination of documents to detect defects, for example, violations of development standards and non-conformance to higher-level documentation, and uses defined team roles and measurements to identify defects in a work product and improve the review and software development processes. The most formal review technique and, therefore, always based on a documented procedure (from ISO 20246). See also peer review.

Intake test: A special instance of a smoke test to decide whether the component or system is ready for detailed and further testing. An intake test is typically carried out at the start of the test execution phase. See also smoke test.

Integration: The process of combining components or systems into larger assemblies.

Integration testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. See also component integration testing, system integration testing.

Interoperability testing: The process of testing to determine the interoperability of a software product.

ISTQB: International Software Testing Qualifications Board, a nonprofit association developing international certification for software testers.

Keyword-driven testing: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test. See also data-driven testing.

Maintainability testing: The process of testing to determine the maintainability of a software product.

Maintenance testing: Testing the changes to an operational system or the impact of a changed environment on an operational system.

Master test plan: See also project test plan.

Metric: A measurement scale and the method used for measurement.

Mistake: See also error.

Modeling tool: A tool that supports the validation of models of the software or system.

Moderator: The leader and main person responsible for an inspection or other review process.

Non-functional testing: Testing performed to evaluate whether a component or system complies with nonfunctional requirements.

N-switch coverage: The percentage of sequences of N+1 transitions that have been exercised by a test suite.

N-switch testing: A form of state transition testing in which test cases are designed to execute all valid sequences of N+1 transitions (Chow). See also state transition testing.

Off-the-shelf software: A software product that is developed for the general market, that is, for a large number of customers, and that is delivered to many customers in identical format.

Oracle: See also test oracle.

Peer review: See also technical review.

Performance testing: The process of testing to determine the performance of a software product.

Performance testing tool: A tool to support performance testing, that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.

Portability testing: The process of testing to determine the portability of a software product.

Probe effect: The effect on the component or system when it is being measured, for example, by a performance testing tool or monitor. For example, performance may be slightly worse when performance testing tools are being used.

Product risk: A risk impacting the quality of a product and directly related to the test object. See also risk.

Project risk: A risk related to management and control of the (test) project, for example, lack of staffing, strict deadlines, changing requirements, etc., that impacts project success. See also risk.

Project test plan: A test plan that typically addresses multiple test levels. See also master test plan.

Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations (from IREB).

Quality assurance: Activities focused on providing confidence that quality requirements will be fulfilled. Abbreviation: QA (from ISO 24765). See also quality management.

RAD: Rapid Application Development, a software development model.

Regression testing: A type of change-related testing to detect whether defects have been introduced or uncovered in unchanged areas of the software. It is performed when the software or its environment is changed.

Reliability testing: The process of testing to determine the reliability of a software product.

Requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification or other formally imposed document.

Requirement management tool: A tool that supports the recording of requirements, attributes of requirements (e.g. priority, knowledge responsible), and annotation, and facilitates traceability through layers of requirements and requirement change management. Some requirement management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirement rules.

Re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

Review: A type of static testing in which a work product or process is evaluated by one or more individuals to detect defects or provide improvements. Examples include management review, informal review, technical review, inspection and walk-through.

Review tool: A tool that provides support to the review process. Typical features include review planning, tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.

Reviewer: The person involved in the review who identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.

Risk: A factor that could result in future negative consequences; usually expressed as impact and likelihood. See also product risk, project risk.

Risk analysis: The overall process of risk identification and risk assessment.

Risk assessment: The process to examine identified risks and determine the risk level.

Risk-based testing: Testing in which the management, selection, prioritization and use of testing activities and resources are based on corresponding risk types and risk levels. This approach is used to reduce the level of product risks and inform stakeholders about their status, starting in the initial stages of a project (from ISO 29119-1).

Risk control: The overall process of risk mitigation and risk monitoring.

Risk identification: The process of finding, recognizing and describing risks. (from ISO 31000).

Risk level: The measure of a risk defined by risk impact and risk likelihood. Synonym: risk exposure.

Risk management: The process for handling risks (from ISO 24765).

Risk mitigation: The process through which decisions are reached and protective measures are implemented to reduce risks or maintain them at specified levels.

Risk monitoring: The activity that checks and reports the status of known risks to stakeholders.

Robustness testing: Testing to determine the robustness of the software product.

Root cause: A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed (from CMMI).

SBTM: Session-based test management, an ad hoc and exploratory test management technique, based on fixed length sessions (from 30 to 120 minutes), during which testers explore a part of the software application.

Scribe: The person who has to record each defect mentioned and any suggestions for improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.

Scripting language: A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/replay tool).

Security testing: Testing to determine the security of the software product.

Shift left: An approach to perform testing and quality assurance activities as early as possible in the software development life cycle.

Site acceptance testing: Acceptance testing by users/customers on site, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes, normally including hardware as well as software.

SLA: Service-level agreement, service agreement between a supplier and their client, defining the level of service a customer can expect from the provider.

Smoke test: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, but not bothering with finer details. A daily build and smoke test is among industry best practices.

State transition: A transition between two states of a component or system.

State transition testing: A black-box test design technique in which test cases are designed to exercise elements of a state transition model, and execute valid and invalid state transitions. Synonym: finite state testing. See also N-switch testing.

Statement coverage: The percentage of executable statements that have been exercised by a test suite.

Static analysis: The process of evaluating a component or system without executing it, based on its form, structure, content or documentation, for example, requirements or code, carried out without execution of these software artifacts (from ISO 24765).

Static code analyzer: A tool that carries out static code analysis. The tool checks the source code for certain properties, such as conformance to coding standards, quality metrics or data flow anomalies.

Static testing: Testing of a component or system at the specification or implementation level without execution of that software, for example, reviews or static code analysis. See also dynamic testing.

Stress testing: A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources, such as access to memory or servers. See also performance testing, load testing.

Stress testing tool: A tool that supports stress testing.

Structural testing: See also white-box testing.

Stub: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.

System integration testing: Testing the integration of systems and packages; testing the interfaces of external organizations (e.g. electronic data interchange, the Internet).

System testing: A test level that focuses on verifying that a system as a whole meets specified requirements.

Technical review: A formal peer group discussion/review by technical experts who examine the quality of a work product and identify discrepancies from specifications and standards. See also peer review.

Test: A set of one or more test cases.

Test analysis: The activity that identifies test conditions by analyzing the test basis.

Test approach: The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

Test automation: The use of software to perform or support test activities.

Test basis: All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of a formal amendment procedure, then the test basis is called a frozen test basis (from TMap).

Test case: A set of execution preconditions, input values, actions (where applicable), expected results and post-conditions, developed based on test conditions, such as to exercise a particular program path or verify compliance with a specific requirement. See also test step.

Test case specification: A document specifying a set of test cases (objective, inputs, test actions, expected results and execution preconditions) for a test item.

Test comparator: A test tool to perform automated test comparison.

Test completion: The activity that makes testware available for later use, leaves the test environments in a satisfactory condition and communicates the results of testing to relevant stakeholders.

Test completion report: A type of test report produced at completion milestones that provides an evaluation of the corresponding test items against exit criteria. Synonym: test summary report.

Test condition: A testable aspect of a component or system identified that could be verified by one or more test cases, for example, a function, transaction, quality attribute or structural element (from ISO 29119-1). Synonyms: test situation, test requirement.

Test control: A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. See also test management.

Test coverage: See also coverage.

Test data: Data that exists/is needed (e.g. in a database) before a test is executed, and that affects or is affected by the component or system under test. Synonym: test dataset.

Test data preparation tool: A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing.

Test design: The process of transforming general testing objectives into tangible test conditions and test cases. See also test design specification.

Test design specification: A document that specifies the test conditions (coverage items) for a test item, the detailed test approach, and identifies the associated high-level test cases.

Test design technique: A method used to derive or select test cases.

Test design tool: A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, for example, the requirements management tool, or from specified test conditions held in the tool itself.

Test-driven development: Agile development method, where the tests are designed and automated before the code (from the requirements or specifications), then a minimal amount of code is written to successfully pass the test. This iterative method ensures that the code continues to fulfill requirements via test execution.

Test environment: An environment containing hardware, instrumentation, simulators, software tools and other support elements needed to conduct a test.

Test execution: The process of running a test using the component or system under test, producing actual results.

Test execution schedule: A scheme for the execution of test procedures. The test procedures are included in the test execution schedule in their context and in the order in which they are to be executed.

Test execution tool: A type of test tool that is able to execute other software using an automated test script, for example, capture/playback.

Test harness: A test environment composed of stubs and drivers needed to conduct a test.

Test implementation: The activity that prepares the testware needed for test execution based on test analysis and design.

Test leader: See also test manager.

Test level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels: component test, integration test, system test and acceptance test. Synonym: test stage.

Test log: A chronological record of relevant details about the execution of tests.

Test management: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

Test manager: The person responsible for testing and evaluating a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.

Test monitoring: A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the results with what was expected. See also test management.

Test object: The work product to be tested. Synonym: test item.

Test objective: A reason or purpose for designing and executing a test. Synonym: test goal.

Test oracle: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual or an individual’s specialized knowledge, but should not be the code (from Adrion). Synonym: oracle.

Test plan: A document describing the scope, approach, resources and schedule of intended test activities. Among others, it identifies test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the test measurement techniques to be used, the rationale for their choice and any risks requiring contingency planning. It is a record of the test planning process (from ISO 29119-1). See also master test plan, level test plan, test scope.

Test planning: The activity of establishing or updating a test plan.

Test policy: A high-level document describing the principles, approach and major objectives of the organization regarding testing.

Test procedure: A sequence of test cases in execution order, and any associated actions that may be required to set up the initial preconditions and any wrap-up activities post execution (from ISO 29119-1). See also test procedure specification.

Test procedure specification: A document specifying a sequence of actions for the execution of a test; also known as the test script or manual test script.

Test process: The set of interrelated activities comprised of test planning, test monitoring and control, test analysis, test design, test implementation, test execution and test completion.

Test progress report: A type of periodic test report that includes the progress of test activities against a baseline, risks and alternatives requiring a decision. Synonym: test status report.

Test pyramid: A graphical model representing the amount of testing per level, with more at the bottom than at the top.

Test report: See also test summary report.

Test result: The consequence/outcome of the execution of a test. Synonyms: outcome, test outcome, result.

Test script: Commonly used to refer to a test procedure specification, especially an automated one.

Test strategy: A high-level document defining the test levels to be performed and the testing within those levels for a program (one or more projects).

Test suite: A set of several test cases for a component or system under test, where the post-condition of one test is often used as the precondition for the next one.

Test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.

Test technique: A procedure used to define test conditions, design test cases and specify test data. Synonym: test design technique.

Test type: A group of test activities based on specific test objectives aimed at specific characteristics of a component or system (from TMap).

Tester: A technically skilled professional who is involved in the testing of a component or system.

Testing: The process within the software development life cycle that evaluates the quality of a component or system and related work products. See also quality control.

Testing quadrants: A classification model of test types/test levels in four quadrants, relating them to two dimensions of test objectives: supporting the product team versus critiquing the product, and technology-facing versus business-facing.

Testware: Artifacts produced during the test process required to plan, design and execute tests, such as documentation, scripts, inputs, expected results, setup and clear-up procedures, files, databases, environment and any additional software or utilities used in testing (from ISO 29119-1).

Thread testing: A version of component integration testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by levels of a hierarchy.

Traceability: The ability to identify related items in documentation and software, such as requirements with associated tests (from IREB). See also horizontal traceability, vertical traceability.

Usability testing: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

Use case testing: A black-box test design technique in which test cases are designed to execute user scenarios.

User acceptance testing: See also acceptance testing.

Validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled (from IREB).

Verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled (from ISO 9000).

Version control: See also configuration control.

Vertical traceability: The tracing of requirements through the layers of development documentation to components.

V-model: A framework to describe the software development life cycle activities from requirement specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

Walkthrough: A step-by-step presentation by the author of a document in order to gather information and establish a common understanding of its content. This may lead members of the review to ask questions and make comments about possible issues (from ISO 20246). Synonym: structured walkthrough. See also peer review.

White-box test technique: A test technique only based on the internal structure of a component or system. Synonyms: white-box test design technique, structure-based test technique.

White-box testing: Testing based on an analysis of the internal structure of the component or system. Synonyms: clear-box testing, code-based testing, glass-box testing, logic-coverage testing, logic-driven testing, structural testing, structure-based testing.

1Fundamentals of Testing

1.1. What is testing?

In our everyday life, we are dependent on the correct execution of software, whether it is in our equipment (cell phones, washing machines, car engine injection systems, etc.), in the transactions we undertake each day (credit or debit card purchases, fund transfers, Internet usage, electronic mail, etc.), or even in software that is hidden from view (back office software for transaction processing); software simplifies our daily lives. When it goes awry, the impact can be devastating.

Some defects are fairly innocuous, and some only show themselves under very unusual circumstances. Others, however, can have very serious consequences and can lead to serious failures. In some cases, the effect of a failure is financial loss or inconvenience; in other cases, physical consequences, such as injury or death, can occur.

When you hear the word “testing”, you may think of someone sitting in front of a computer, entering information, observing the results and reporting defects; i.e. executing tests. That is certainly part of testing, and often the most visible to outside stakeholders. However, as we will see later, there are multiple activities associated with testing, some focused on avoiding introduction of defects and some focused on identifying – and later removing – defects.

1.1.1. Software and systems context

Testing software and systems is necessary to avoid failures visible to customers and avoid bad publicity for the organizations involved. This is the case for service companies responsible for the development or testing of third-party software, because the customer might not renew the contract, or might sue for damages.

We can imagine how millions of Germans felt on January 1, 2010, when their credit cards failed to work properly. There was no early warning, and they found themselves, the day after New Year celebrations, with an empty fridge, totally destitute, without the possibility of withdrawing cash from ATMs or purchasing anything from retail outlets. Those most pitied were probably those who took advantage of the holiday period to go abroad; they did not even have the possibility to go to their bank to withdraw cash.

On November 20, 2009, during its first week of commercial operation on the Paris to New York route, the Airbus A380, the pride of the Air France fleet, suffered a software failure in its autopilot function and was forced to return to New York. The passengers were dispatched to other flights. Such a software problem could have been a lot more serious.

Software problems can also have an impact on an individual’s rights and freedom, be it in the United States, where voting machines failed during the presidential elections, which prevented a large number of votes from being included [THE 04]; or in France where, during local elections in 2008, a candidate from the Green party obtained 1,765 votes from 17,656 registered voters, and the software of the Ministry of the Interior initially allowed the candidate to stand in the next stage of the election because the 10% threshold appeared to have been reached. However, the software did not compute three digits after the decimal point: an unfortunate rounding up to 10% occurred while the candidate had in fact obtained only 9.998% of the registered voters. The end result was that the candidate was not allowed to participate in the next stage of the election [BOU].

Software defects such as those listed above can be the root cause of accidents and even fatalities. This happened with the Therac-25 radiotherapy system [LEV 93, pp. 18–41], which accidentally delivered massive overdoses of radiation on six occasions between 1985 and 1987, leading to the death of three patients. In the case of the Therac-25, the root causes of the software failures – and of the death of the patients – were determined to be:

a lack of code reviews by independent personnel;

software design methods that were not adapted and thus incorrectly implemented for safety critical systems;

lack of awareness regarding system reliability for the evaluation of software defects;

ambiguous error messages and usability problems in the software;

a lack of full acceptance tests for the complete system (hardware and software).

Other examples of software failures that have caused major incidents have occurred in the space industry, such as:

the first flight of the Ariane 5 launcher, where a component that was developed and used reliably on the Ariane 4 launchers was used outside its normal operational context; this led to the loss of the launcher and all the satellites it carried;

NASA’s (National Aeronautics and Space Administration) Mars Climate Orbiter mission, where a unit conversion problem – the navigation software expected metric units (newton-seconds) while the spacecraft contractor’s software supplied values in US customary units (pound-force seconds) – led to the loss of the spacecraft and the full mission;

NASA’s Mars Polar Lander, where a speck of dust led to an incorrect response from one of the three landing legs, and a lack of software testing led to the shutdown of the probe’s engine some 40 meters above the surface; this led to the loss of the probe and the mission.

These three examples each cost hundreds of millions of euros or US dollars, even with the high level of quality assurance and testing applied to such systems. Every year, software failures generate financial losses estimated at hundreds of millions of euros. Correct testing of software is necessary to avoid the frustration, wasted expenditure, damage to property, and even deaths that result from software failures. This implies effective and efficient processes throughout the development life cycle of the software and the products it operates.

1.1.2. Causes of software defects

FL-1.2.3 (K2) Distinguish between root cause, error, defect and failure

There is a causal link between errors and defects, and between defects and the failures they generate. The initial cause – the root cause – of defects is often the action (or lack of action) of humans:

misunderstanding of the specifications by functional analysts, resulting in a software design or architecture that prevents the goals and objectives stated by the customers from being reached;

mistakes, such as replacing a greater-than sign with a greater-than-or-equal-to sign, resulting in abnormal behavior when both values are equal (see the sketch below).
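
The following minimal sketch (in Python, with hypothetical names and a hypothetical threshold of 100.00) illustrates such a mistake; the resulting defect only produces a failure when the two values compared are equal, which is why it can easily escape casual testing:

# Hypothetical example: a discount should apply only when the order total
# strictly exceeds the threshold.
THRESHOLD = 100.00

def is_discount_applicable(order_total):
    # Defect: the programmer wrote ">=" where the specification requires ">".
    return order_total >= THRESHOLD

print(is_discount_applicable(99.99))    # False, as expected
print(is_discount_applicable(100.00))   # True, although the specification expects False
print(is_discount_applicable(100.01))   # True, as expected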

Some failures are not directly caused by human action, but are caused by the interactions between the test object and its environment:

software malfunctions when electronic components overheat abnormally due to dust;

electrical or electronic interferences produced by power cables near unshielded data cables;

solar storms or other activities generating ionizing radiation that impacts on electronic components (this is important for satellite and airborne equipment);

impact of magnets or electromagnetic fields on data storage devices (magnetic disks or tapes, etc.).

Many terms describe the incorrect behavior of software: bug, error, failure, defect, fault, mistake, etc. These terms are sometimes considered equivalent, which may generate misunderstandings. In this book, just as for the International Software Testing Qualifications Board (ISTQB), we will use the following terms and definitions:

error: a human action at the root of a defect;

defect: the result, present in the test object, of a human action (i.e. an error);

failure: the result of the execution of a defect by a process (whether the process is automated or not).

These terminological differences are important and will result in different activities to limit their occurrence and/or impact. See also section 5.2.