Test Automation Fundamentals

Manfred Baumgartner

Description

Concepts, methods, and techniques—supported with practical, real-world examples

  • The first book to cover the ISTQB® Certified Test Automation Engineer syllabus
  • With real-world project examples
  • Suitable as a textbook, as a reference book for ISTQB® training courses, and for self-study

This book provides a complete overview of how to design test automation processes and integrate them into your organization or existing projects. It describes functional and technical strategies and goes into detail on the relevant concepts and best practices. The book's main focus is on functional system testing. Important new aspects of test automation, such as automated testing for mobile applications and service virtualization, are also addressed as prerequisites for creating complex but stable test processes. The text also covers the increase in quality and potential savings that test automation delivers.

The book is fully compliant with the ISTQB® syllabus and, with its many explanatory examples, is equally suitable for preparation for certification, as a concise reference book for anyone who wants to acquire this essential skill, or for university-level study.




About the Authors

Manfred Baumgartner has more than 30 years of experience in software testing and quality assurance. Since 2001, he has established and expanded the QA consulting and training services at Nagarro, a leading software testing services company. He is a board member of the Association for Software Quality and Further Education (ASQF) and the Association for Software Quality Management Austria (STEV). He is also a member of the Austrian Testing Board (ATB). He shares his extensive experience at numerous conferences and in his articles and books on software testing.

Thomas Steirer is a test automation architect, test manager and trainer, and leads Nagarro’s global test automation unit. He qualified as an ISTQB® Certified Tester - Full Advanced Level in 2010. He is a lecturer for test automation in the master’s program for software engineering at the UAS Technikum in Vienna, and does research into the use of artificial intelligence for increasing efficiency in test automation.

Marc-Florian Wendland is a research associate at the Fraunhofer FOKUS institute in Berlin. He has more than 10 years’ experience in national and international, cross-domain research and industrial projects that involve the design and execution of test automation. He is a member of the German Testing Board (GTB) and a trainer for various ISTQB® programs.

Stefan Gwihs is a passionate software developer and tester, and is a test automation architect at Nagarro, where he currently focuses on test automation for agile software development and DevOps.

Julian Hartner is based in New York City. He is an ISTQB® certified quality engineer and a passionate software developer and test automation engineer. He currently focuses on streamlining manual and automated testing for CRM applications.

Richard Seidl has seen and tested a lot of software in the course of his career: good and bad, big and small, old and new, wine and water. His guiding principle is: “Quality is an attitude”. If you want to create excellent software, you have to think holistically and include people, methods, tools, and mindset in the development process. As a consultant and coach, he supports companies in their efforts to turn agility and quality into reality, and to make them part of corporate DNA.

Manfred Baumgartner · Thomas Steirer · Marc-Florian Wendland · Stefan Gwihs · Julian Hartner · Richard Seidl

Test Automation Fundamentals

A Study Guide for the Certified Test Automation Engineer Exam

Advanced Level Specialist

ISTQB® Compliant

Manfred Baumgartner · [email protected]

Thomas Steirer · [email protected]

Marc-Florian Wendland · [email protected]

Stefan Gwihs · [email protected]

Julian Hartner · [email protected]

Richard Seidl · [email protected]

Editor: Christa Preisendanz

Editorial Assistant: Julia Griebel

Copyediting: Jeremy Cloot

Layout and Type: Frank Heidt, Veronika Schnabel

Production Editor: Stefanie Weidner

Cover Design: Helmut Kraus, www.exclam.de

Printing and Binding: mediaprint solutions GmbH, 33100 Paderborn, and Lightning Source®, Ingram Content Group.

Bibliographic information published by the Deutsche Nationalbibliothek (DNB)

The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data can be found on the Internet at http://dnb.dnb.de.

ISBN dpunkt.verlag:

Print: 978-3-86490-931-3
PDF: 978-3-96910-870-3
ePUB: 978-3-96910-871-0
mobi: 978-3-96910-872-7

ISBN Rocky Nook:

Print: 978-1-68198-981-5
PDF: 978-1-68198-982-2
ePUB: 978-1-68198-983-9
mobi: 978-1-68198-984-6

1st edition 2022

Copyright © 2022 dpunkt.verlag GmbH

Wieblinger Weg 17

69123 Heidelberg

Title of the German Original: Basiswissen Testautomatisierung

Aus- und Weiterbildung zum ISTQB® Advanced Level Specialist – Certified Test Automation Engineer

3rd revised and updated edition, 2021

ISBN 978-3-86490-675-6

Distributed in the UK and Europe by Publishers Group UK and dpunkt.verlag GmbH.

Distributed in the U.S. and all other territories by Ingram Publisher Services and Rocky Nook, Inc.

Many of the designations in this book used by manufacturers and sellers to distinguish their products are claimed as trademarks of their respective companies. Where those designations appear in this book, and dpunkt.verlag was aware of a trademark claim, the designations have been printed in caps or initial caps. They are used in editorial fashion only and for the benefit of such companies; they are not intended to convey endorsement or other affiliation with this book.

No part of the material protected by this copyright notice may be reproduced or utilized in any form, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission of the copyright owner. While reasonable care has been exercised in the preparation of this book, the publisher and author assume no responsibility for errors or omissions, or for damages resulting from the use of the information contained herein.

This book is printed on acid-free paper.

Printed in Germany and in the United States.

5 4 3 2 1 0

Preface

“Automatically better through test automation!?”

One hundred percent test coverage, a four-hundred percent increase in efficiency, significantly reduced risk, faster time to market, and robust quality—these were, and still are, the promises made by test automation; or rather by those who make their living with test automation tools and consulting services. Since the publication of our first book on the subject in 2011, test automation has been on the to-do list of almost all companies that produce or implement software. However, the promised and expected goals are rarely achieved. In fact, there is a significant discrepancy between the potential achievements presented in the tool vendors' glossy brochures and the uncertainty in many companies regarding the successful and sustainable use of test automation.

This book provides a broad-based and practical introduction that serves as a comprehensive guide to test automation for a variety of roles in the field. In the fast-moving IT market, test automation has developed rapidly in recent years, both technically and as a discipline in its own right. Scalable agility, continuous deployment, and DevOps make test automation a mission-critical component of virtually all software development.

These dynamics also affect all test automation tools, whether commercial or open source. Therefore, this book doesn't go into detail on specific tools, as any functional evaluation would surely be outdated by the time it goes to print. Additionally, there are so many great open source and commercial tools available that picking favorites would be unfair to the other manufacturers and communities. Instead, we list tools suited to the test automation architectures and solutions discussed in each chapter. Tool comparisons and market research are quick and easy to find on the internet, although you have to remember that these are often not updated regularly.

The importance of test automation has also been confirmed by the international testing community. In 2016, the first English-language version of the ISTQB® Advanced Level Syllabus Test Automation Engineer was published—a milestone for the profession of test automation engineers. In late 2019, the German version of the syllabus was released [ISTQB: CT-TAE], which was an important step for the German-speaking ("DACH") countries. This makes test automation more than ever an indispensable core component of software testing in general and provides it with its own certification and educational syllabus.

Previous editions of this book were always ahead of the published syllabus, but we felt the time had come to align ourselves with this established international standard, which is designed to support knowledge sharing and a common test automation language. Furthermore, the book introduces you to the contents of the syllabus and helps you to prepare for the certification exam. The syllabus is highly detailed and is a reference book on its own. However, this book adds significant value by providing a practical context, an easy-to-read format, and real-world examples that make it much easier to gain a firm grasp of the subject matter than you can by studying the syllabus alone.

In short, this book not only prepares you for the certification exam, it also teaches you practical, hands-on test automation.

The contents of the curriculum (currently the 2016 version) are presented in a different order and with different emphases than in the syllabus itself. We also supplement the syllabus content with other important topics, which are clearly marked as excursus.

Please note that the certification exam is always based on the current version of the official syllabus.

In addition to reading this book, we recommend that you attend an appropriate training course and use the current version of the syllabus [ISTQB: CT-TAE] to prepare for the exam.

Covering the curriculum is only one of several major points that we address in this book. Aside from this, our three main goals are as follows:

Firstly, we want to help you avoid disappointment due to overblown expectations. Test automation is not a question of using specific tools and is not a challenge to implement the marketing buzzwords used by software manufacturers, but rather a resource that enables you to better cope with the constantly growing demands made by software testing.

Secondly, we give you guidance on how to make best use of this resource. We focus on the long-term view, future return on investment, and the real-world business value it provides. These aspects cannot be measured using metrics such as code coverage or the number of test scripts, but rather by the total cost of ownership of application development, evolution, and benefits, as well as user feedback in the marketplace.

Thirdly, we have incorporated key aspects of the test automation process, such as the role of test automation in the context of artificial intelligence (AI) systems and in the DevOps environment.

Does test automation automatically make things better? Certainly not! A manufacturing machine that is set up incorrectly will produce only junk; if it is operated badly, it will produce random, useless results; if it is not properly maintained, it will break down or perhaps even become unusable. Appropriately trained employees, sustainable concepts, a responsible approach, and the awareness that test automation is an essential production factor are the prerequisites for realizing the potential and the real-world benefits of this technology. In most cases, test automation is indispensable for delivering robust quality in agile project environments, making it critical to the success of a project. It is also essential for keeping pace with the speed of modern continuous delivery processes while ensuring the long-term economic viability of software development projects.

We wish you every possible success implementing test automation at your company.

Manfred Baumgartner

Thomas Steirer

Marc-Florian Wendland

Stefan Gwihs

Julian Hartner

Richard Seidl

May 2022

Acknowledgements

For their hands-on support we would like to thank Michael Hombauer, Sonja Baumgartner, Dominik Schildorfer, Anita Bogner, Himshikha Gupta, Christian Mastnak, Roman Rohrer, Martin Schweinberger, Stefan Denner, Stephan Posch, Yasser Aranian, Georg Russe, Vincent Bayer, Andreas Lenich, Cayetano Lopez-Leiva, Bernhard König, Jürgen Pointinger, and everyone at Nagarro.

This book is dedicated to Himshikha Gupta, who worked tirelessly to create the figures and diagrams it contains, and who passed away much too early, shortly before it was finished in early 2022. She will be sorely missed.

Foreword by Armin Metzger

The second wave is here! I believe we are in the middle of the second wave of test automation. The first big wave clearly took place in the early 2000s, and the projects involved were initially very successful in terms of improving the effectiveness and the efficiency of test processes in some specific areas. However, in line with the Gartner hype cycle, the "trough of disillusionment" was quickly reached and, in my view, most projects didn't actually reach the "plateau of productivity".

What I observed at the time were projects that expended enormous effort over several years to work their way to a high degree of test automation. Then came technology changes such as the switch to .NET platforms, or process changes such as the switch to agile development methodology. A lot of the test automation frameworks didn’t survive those transitions. Back then I liked to give talks with provocative titles such as Test Automation Always Fails.

We saw two core problems: firstly, companies failed to scale isolated successes to the entire project or organization, and secondly, test automation platforms were not sufficiently flexible to absorb disruptive changes in the technology base.

It is therefore no surprise that, over time, test automation began to lose acceptance. Management aspects also play a supporting role here. In the long run, the great economic expectations of a one-time investment intended to significantly reduce regression efforts were often simply not met.

Since the middle of the 2010s, we have seen a new wave of test automation in large projects. Will test automation once again fall short of expectations? I don't think so. Both the overall test automation environment and the expectations test automation raises have changed. Test automation has now re-established itself as an indispensable factor for the success of projects in current technological scenarios. What changed?

With the introduction of agile processes, highly automated, tool-supported development has evolved significantly and has now become standard practice. Continuous integration concepts are constantly being refined into DevOps processes to create a seamless platform for the integration of automated project steps—all the way from the initial idea to final production and operation. The end-to-end automation of processes naturally forms an excellent basis for integrating test automation into the overall development process. Additionally, agile processes have helped process scaling to reach a new, higher level of importance. This development is an essential factor for the successful introduction and long-term establishment of test automation solutions.

However, a key factor in the importance (and necessity) of test automation is the current technological platform on which we operate. Disruptive technologies such as IoT (Internet of Things) and AI (artificial intelligence) are rapidly pushing their way out of their decades-old niche existence and into our products. With this comes a significant shift of priorities for the quality attributes we have to test. While 20 years ago ninety percent of all tests were functional tests, the importance of non-functional tests for usability, performance, IT security, and so on is slowly but surely gaining ground. The number of test cases required to assess product quality is therefore increasing rapidly, and only automated tests can effectively safeguard quality characteristics such as performance.

The development and maintenance of products takes place in increasingly short cycles. Due to the increasing variance in hardware and software configurations, entire and partial systems need to be tested in an increasing number of variants. Non-automated regression testing thus becomes an increasing burden, and it becomes more and more difficult to achieve the required test coverage while retaining an adequate level of effort.

And—fortunately—we have also learned a lot about methodology: test architectures are one of the most important factors (if not the most important factor) influencing the quality and maintainability of automated tests. In fact, test architectures are so well established that the dedicated role of test architect is now being introduced in many organizations. This is just one example of such changes.

But beware: using the right approach and having knowledge of the pitfalls and best practices involved in introducing and maintaining test automation are key to long-term success. Introducing appropriate expertise into projects and organizations is not always easy. This is where the Certified Tester certification scheme—long established as an industry standard with a common glossary—can help. The Test Automation Engineer training and certification covered in this book are intended for advanced testers and translate the focus and factors that influence the long-term success of test automation into a structured canon of collected expertise—for example, on the subject of test automation architectures. This book clearly shows that these skills are constantly evolving.

We are better equipped than ever and I believe we have taken a significant step forward in the field of test automation. I wish you every success and plenty of creative fun using test automation as a key factor for your professional success!

Dr. Armin Metzger

Managing Director of the German Testing Board, 2022

Overview

1 An Introduction to Test Automation and Its Goals

2 Preparing for Test Automation

3 Generic Test Automation Architecture

4 Deployment Risks and Contingencies

5 Reporting and Metrics

6 Transitioning Manual Testing to an Automated Environment

7 Verifying the Test Automation Solution

8 Continuous Improvement

9 Excursus: Looking Ahead

APPENDICES

A Software Quality Characteristics

B Load and Performance Testing

C Criteria Catalog for Test Tool Selection

D Glossary

E Abbreviations

F References

Index

Contents

1 An Introduction to Test Automation and Its Goals

1.1 Introduction

1.1.1 Standards and Norms

1.1.2 The Use of Machines

1.1.3 Quantities and Volumes

1.2 What is Test Automation?

1.3 Test Automation Goals

1.4 Success Factors in Test Automation

1.4.1 Test Automation Strategy

1.4.2 Test Automation Architecture (TAA)

1.4.3 Testability of the SUT

1.4.4 Test Automation Framework

1.5 Excursus: Test Levels and Project Types

1.5.1 Test Automation on Different Test Levels

1.5.2 Test Automation Approaches for Different Types of Projects

2 Preparing for Test Automation

2.1 SUT Factors that Influence Test Automation

2.2 Tool Evaluation and Selection

2.2.1 Responsibilities

2.2.2 Typical Challenges

2.2.3 Excursus: Evaluating Automation Tools

2.2.4 Excursus: Evaluation Made Easy

2.3 Testability and Automatability

3 Generic Test Automation Architecture

3.1 Introducing Generic Test Automation Architecture (gTAA)

3.1.1 Why is a Sustainable Test Automation Architecture Important?

3.1.2 Developing Test Automation Solutions

3.1.3 The Layers in the gTAA

3.1.4 Project Managing a TAS

3.1.5 Configuration Management in a TAS

3.1.6 Support for Test Management and Other Target Groups

3.2 Designing a TAA

3.2.1 Fundamental Questions

3.2.2 Which Approach to Test Case Automation Should Be Supported?

3.2.3 Technical Considerations for the SUT

3.2.4 Considerations for Development and QA Processes

3.3 TAS Development

3.3.1 Compatibility between the TAS and the SUT

3.3.2 Synchronization between the TAS and the SUT

3.3.3 Building Reusability into the TAS

3.3.4 Support for Multiple Target Systems

3.3.5 Excursus: Implementation Using Different Approaches and Methods

4 Deployment Risks and Contingencies

4.1 Selecting a Test Automation Approach and Planning Deployment/Rollout

4.1.1 Pilot Project

4.1.2 Deployment

4.2 Risk Assessment and Mitigation Strategies

4.2.1 Specific Risks During the Initial Rollout

4.2.2 Specific Risks During Maintenance Deployment

4.3 Test Automation Maintenance

4.3.1 Types of Maintenance Activities and What Triggers Them

4.3.2 Considerations when Documenting Automated Testware

4.3.3 The Scope of Maintenance Activities

4.3.4 Maintenance of Third-Party Components

4.3.5 Maintaining Training Materials

4.3.6 Improving Maintainability

4.4 Excursus: Application Areas According to System Types

4.4.1 Desktop Applications

4.4.2 Client-Server Systems

4.4.3 Web Applications

4.4.4 Mobile Applications

4.4.5 Web Services

4.4.6 Data Warehouses

4.4.7 Dynamic GUIs: Form Solutions

4.4.8 Cloud-Based Systems

4.4.9 Artificial Intelligence and Machine Learning

5 Reporting and Metrics

5.1 Metrics and Validity

5.2 Metrics Examples

5.3 Precise Implementation and Feasibility Within a TAS

5.3.1 TAS and SUT as Sources for Logs

5.3.2 Centralized Log Management and Evaluation

5.3.3 Implementing Logging in a TAS

5.4 Test Automation Reporting

5.4.1 Quality Criteria for Reports

6 Transitioning Manual Testing to an Automated Environment

6.1 Criteria for Automation

6.1.1 Suitability Criteria for the Transition to Automated Testing

6.1.2 Preparing for the Transition to Automated Testing

6.2 Steps Required to Automate Regression Testing

6.3 Factors to Consider when Automating Testing for New or Changed Functionality

6.4 Factors to Consider when Automating Confirmation Testing

7 Verifying the Test Automation Solution

7.1 Why Quality Assurance Is Important for a TAS

7.2 Verifying Automated Test Environment Components

7.3 Verifying the Automated Test Suite

8 Continuous Improvement

8.1 Ways to Improve Test Automation

8.2 Planning the Implementation of Test Automation Improvement

9 Excursus: Looking Ahead

9.1 Challenges Facing Test Automation

9.1.1 Omnipresent Connectivity

9.1.2 Test Automation in IT Security

9.1.3 Test Automation in Autonomous Systems

9.2 Trends and Potential Developments

9.2.1 Agile Software Development Is Inconceivable without Test Automation

9.2.2 New Outsourcing Scenarios for Automation

9.2.3 Automating Automation

9.2.4 Training and Standardization

9.3 Innovation and Refinement

APPENDICES

A Software Quality Characteristics

A.1 Functional Suitability

A.2 Performance Efficiency

A.3 Compatibility

A.4 Usability

A.5 Reliability

A.6 Security

A.7 Maintainability

A.8 Portability

B Load and Performance Testing

B.1 Types of Load and Performance Tests

B.2 Load and Performance Testing Activities

B.3 Defining Performance Goals

B.4 Identifying Transactions and/or Scenarios

B.5 Creating Test Data

B.6 Creating Test Scenarios

B.7 Executing Load and Performance Tests

B.8 Monitoring

B.9 Typical Components of Performance/Load Testing Tools

B.10 Checklists

C Criteria Catalog for Test Tool Selection

D Glossary

E Abbreviations

F References

F.1 Literature

F.2 Norms and Standards

F.3 URLs

Index

1 An Introduction to Test Automation and Its Goals

Software development is rapidly becoming an independent area of industrial production. The increasing digitalization of business processes and the increased proliferation of standardized products and services are key drivers for the use of increasingly efficient and effective methods of software testing, such as test automation. The rapid expansion of mobile applications and the constantly changing variety of end-user devices also have a lasting impact.

1.1 Introduction

A key characteristic of the industrialization of society that began at the end of the 18th Century has been the mechanization of energy- and time-consuming manual activities in virtually all production processes. What began more than 200 years ago with the introduction of mechanical looms and steam engines in textile mills in England has become the goal and mantra of all today’s manufacturing industries, namely: the continuous increase and optimization of productivity. The aim is always to achieve the desired quantity and quality using the fewest possible resources in the shortest possible time. These resources include human labor, the use of machines and other equipment, and energy.

Software development and software testing on the way to industrial mass production

In the pursuit of continuous improvement and survival in the face of global competition, every industrial company has to constantly optimize its manufacturing processes. The best example of this is the automotive industry, which has repeatedly come up with new ideas and approaches in the areas of process control, production design and measurement, and quality management. The auto industry continues to innovate, influencing other branches of industry too. A look at a car manufacturer’s factories and production floor reveals an impressive level of precision in the interaction between man and machine, as well as smooth, highly automated manufacturing processes. A similar pattern can now be seen in many other production processes.

The software development industry is, however, something of a negative exception. Despite many improvements in recent years, it is still a long way from the quality of manufacturing processes found in other industries. This is surprising and perhaps even alarming, as software is the technology that has probably had the greatest impact on social, economic, and technical change in recent decades. This may be because the software industry is still relatively young and hasn’t yet reached the maturity of other branches of industry. Perhaps it is because of the intangible nature of software systems, and the technological diversity that makes it so difficult to define and consistently implement standards. Or maybe it is because many still see software development in the context of the liberal, creative arts rather than as an engineering discipline.

Software development has also had to establish itself in the realm of international industrial standards. For example, Revision 4 of the International Standard Industrial Classification of All Economic Activities (ISIC), published in August 2008, includes the new section J Information and Communication, whereas the previous version hid software development services away at the bottom of the section called Real estate, renting and business activities ([ISIC 08], [NACE 08]).

Software development as custom manufacturing

Although the “young industry” argument is losing strength as time goes on, software development is still often seen as an artistic rather than an engineering activity, and is therefore valued differently to the production of thousands of identical door fittings. However, even if software development is not a “real” mass production process, today it can surely be viewed as custom industrial manufacturing.

But what does “industrial” mean in this context? An industrial process is characterized by several features: by the broad application of standards and norms, the intensive use of mechanization, and the fact that it usually involves large quantities and volumes. Viewed using these same attributes, the transformation of software development from an art to a professional discipline is self-evident.

1.1.1 Standards and Norms

Since the inception of software development there have been many and varied attempts to find the ideal development process. Many of these approaches were expedient and represented the state of the art at the time. Rapid technical development, the exponential increase in technical and application-related complexity, and constantly growing economic challenges require continuous adaptation of the procedures, languages, and process models used in software development—waterfall, V-model, iterative and agile software development; ISO 9001:2008, ISO 15504 (SPICE), CMMI, ITIL; unstructured, structured, and object-oriented programming; ISO/IEC/IEEE 29119 software testing—and that's just the tip of the iceberg. Software testing has also undergone major changes, especially in recent years. Since the establishment of the International Software Testing Qualifications Board (ISTQB) in November 2002 and the standardized training it offers for various Certified Tester skill levels, the profession and the role of software testers have evolved and are now internationally established [URL: ISTQB]. The ISTQB® training program is continuously expanded and updated and, as of 2021, comprises the following portfolio:

Fig. 1–1 The ISTQB® training product portfolio, as of 2022

Nevertheless, software testing is still in its infancy compared to other engineering disciplines with their hundreds, or even thousands, of years of tradition and development. This relative lack of maturity applies to the subject matter and its pervasiveness in teaching and everyday practice.

One of the main reasons many software projects are still doomed to large-scale failure, despite the experience enshrined in the industry's standards, is that the best practices involved in software development are largely nonbinding. Anyone ordering software today cannot count on a product made using a verifiable manufacturing standard.

Not only do companies generally decide individually whether to apply certain product and development standards; perpetuating the nonbinding nature of those standards is itself standard practice at many companies. After all, every project is different. The "Not Invented Here" syndrome remains a constant companion in software development projects [Katz & Allen 1982].

Norms and standards are often missing in test automation

Additionally, in the world of test automation, technical concepts are rarely subject to generalized standards. It is the manufacturers of commercial tools or open source communities who determine the current state of the art. However, these parties are less concerned with creating a generally applicable standard or implementing collective ideas than they are with generating a competitive advantage in the marketplace. After all, standards make tools fundamentally interchangeable—and which company likes to have its market position affected by the creation of standards? One exception to this rule is the Testing and Test Control Notation (TTCN-3) published by the European Telecommunications Standards Institute (ETSI) [URL: ETSI]. In practice, however, the use of this standard is essentially limited to highly specific domains, such as the telecommunications and automotive sectors.

For a company implementing test automation, this usually means committing to a single tool manufacturer. Even in the foreseeable future, it won’t be possible to simply transfer a comprehensive, automated test suite from one tool to another, as both the technological concepts and the automation approaches may differ significantly. This also applies to investment in staff training, which also has a strongly tool-related component.

Nevertheless, there are some generally accepted principles in the design, organization, and execution of automated software testing. These factors help to reduce dependency on specific tools and optimize productivity during automation.

The ISTQB® Certified Tester Advanced Level Test Automation Engineer course and this book, which includes a wealth of hands-on experience, introduce these fundamental aspects and principles, and provide guidance and recommendations on how to implement a test automation project.

1.1.2 The Use of Machines

Another essential aspect of industrial manufacturing is the use of machines to reduce and replace manual activities. In software development, software itself is such a machine—for example, a development environment that simplifies or enables the creation and management of program code and other software components. However, these “machines” are usually just editing and management systems with certain additional control mechanisms, such as those performed by a compiler. The programs themselves still need to be created by human hands and minds. Programming mechanization is the goal of the model-based approaches, where the tedious work of coding is performed by code generators. The starting point for code generation is a model of the software system in development written, for example, in UML notation. In some areas this technology is already used extensively (for example, in the generation of data access routines) or where specifications are available in formal languages (for example, in the development of embedded systems). On a broad scale, however, software development is still pure craftsmanship.

Mechanization in Software Testing

Use of tools for test case generation and test execution

One task of the software tester is the identification of test conditions and the design of corresponding test cases. Analogous to model-based development approaches, model-based testing (MBT) aims to automatically derive and generate test cases from existing model descriptions of the system under test (SUT). Possible starting points include object models, use case descriptions, or flow graphs written in various notations. By applying a set of semantic rules, domain-oriented test cases are derived based on written specifications. Corresponding parsers can also generate abstract test cases from the source code itself, which are then refined into concrete test cases. A variety of suitable test management tools are available for managing these test cases, and such tools can be integrated into different development environments. Like the generation of code from models, the generation of test cases from test models is not yet common practice. One reason for this is that the outcome (i.e., the generated test case) depends to a high degree on the model's quality and the suitability of its description details. In most cases, these factors are not a given.
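As a simple illustration of this idea, the following sketch derives abstract test cases from a small state-machine model of a hypothetical login/shopping-cart SUT. The model, the event names, and the all-transitions coverage criterion are our own illustrative assumptions, not the notation of any specific MBT tool.

```python
# A minimal model-based testing sketch: derive abstract test cases
# (event sequences) until every transition of the model is covered.
from collections import deque

# Illustrative SUT model: state -> {event: next_state}
model = {
    "logged_out": {"login": "logged_in"},
    "logged_in": {"logout": "logged_out", "open_cart": "cart"},
    "cart": {"checkout": "logged_in", "logout": "logged_out"},
}

def all_transition_paths(start="logged_out", max_depth=6):
    """Breadth-first search that collects one event sequence per transition."""
    uncovered = {(s, e) for s, events in model.items() for e in events}
    paths, queue = [], deque([(start, [])])
    while uncovered and queue:
        state, path = queue.popleft()
        for event, nxt in model[state].items():
            new_path = path + [event]
            if (state, event) in uncovered:
                uncovered.discard((state, event))
                paths.append(new_path)  # one abstract test case
            if len(new_path) < max_depth:
                queue.append((nxt, new_path))
    return paths

for case in all_transition_paths():
    print(" -> ".join(case))
```

Each generated sequence is only an abstract test case; as noted above, its value depends entirely on the quality of the model, and it still has to be refined with concrete test data and expected results.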

Another task performed by software testers is the execution and reporting of test cases. Here, a distinction must be made between tests performed at a technical interface level (on system components, modules, or methods) and functional, user-oriented tests performed via the user interface. For the former, technical tools such as test frameworks, test drivers, unit test frameworks, and utility programs are already in widespread use. These tests are mostly performed by "technicians" who can provide their own "mechanical tools". Functional testing, on the other hand, is largely performed manually by employees from the corresponding business units or by dedicated test analysts. In this area, tools are also available that support and simplify manual test execution, although using them involves corresponding costs and learning effort. This is one of the reasons why, in the past, the use of test automation tools was not generally accepted. However, in recent years, further development of these tools has led to a significant improvement in their cost-benefit ratio. The increasing separation of business logic from technical implementation has simplified automated test case creation and maintenance to the point where automation can pay off the first time a complex manual test is automated, rather than only when huge numbers of test cases need to be executed or the nth regression test needs to be repeated.

1.1.3 Quantities and Volumes

While programming involves the one-time development of a limited number of programs or objects and methods that, at best, are then adapted or corrected, testing involves a theoretically unlimited number of test cases. In real-world situations, the number of test cases usually runs into hundreds or thousands. A single input form or processing algorithm that has been developed once must be tested countless times using different input and dialog variations or, for a data-driven test, by entering hundreds of contracts using different tariffs. However, these tests aren’t created and executed just once. With each change to the system, regression tests have to be performed and adjusted to prove the system’s continuing functionality. To detect the potential side effects of changes, each test run should provide the maximum possible test coverage. However, experience has shown that this is not usually feasible due to cost and time constraints.

The required scope of testing can only be effectively handled with the help of mechanization

This requirement for the management of large volumes and quantities screams out for the use of industrial mechanization—i.e., test automation solutions. And, if the situation doesn’t scream, the testers do! Unlike machines, testers show human reactions such as frustration, lack of concentration, or impatience when performing the same test case for the tenth time. In such situations, individual prioritization may lead to the wrong, mission-critical test case being dropped.

In view of these factors, it is surprising that test automation hasn't long been in universal use. A lack of standardization, unattractive cost-benefit ratios, and the limited capabilities of the available tools may have been reasons for this. Today, however, there is simply no alternative to test automation. Increasing complexity in software systems and the resulting need for testing, increasing pressure on time and costs, the widespread adoption of agile development approaches, and the rise of mobile applications are forcing companies to rely on ongoing test automation in their software development projects.

1.2 What is Test Automation?

The ISTQB® definition of test automation is: “The use of software to perform or support test activities”. You could also say: “Test automation is the execution of otherwise manual test activities by machines”. The concept thus includes all activities for testing software quality during the development process, including the various development phases and test levels, and the corresponding activities of the developers, testers, analysts, and users involved in the project.

Accordingly, test automation is not just about executing a test suite, but rather encompasses the entire process of creating and deploying all kinds of testware. In other words, all the work items required to plan, design, execute, evaluate, and report on automated tests.

Relevant testware includes:

Software

Various tools (automation tools, test frameworks, virtualization solutions, and so on) are required to manage, design, implement, execute, and evaluate automated test suites. The selection and deployment of these tools is a complex task that depends on the technology and scope of the SUT and the selected test automation strategy.

Documentation

This not only includes the documentation of the test tools in use, but also all available business and technical specifications, and the architecture and the interfaces of the SUT.

Test cases

Test cases, whether abstract or specific, form the basis for the implementation of automated tests. Their selection, prioritization, and functional quality (for example: functional relevance, functional coverage, accuracy) as well as the quality of their description have a significant influence on the long-term cost-benefit ratio of a test automation solution (TAS) and thus directly on its long-term viability.

Test data

Test data is the fuel that drives test execution. It is used to control test scenarios and to calculate and verify test results. It provides dynamic input values, fixed or variable parameters, and (configuration) data on which processing is based. The generation, production, and recovery of existing and process data for and by test automation processes require special attention. Incorrect test data (like faulty test scripts) leads to incorrect test results and can severely hinder testing progress. On the other hand, test data provides the opportunity to fully leverage the potential of test automation. The importance and complexity of efficient and well-organized test data management is reflected in the GTB Certified Tester Foundation Level Test Data Specialist training course [GTB: TDS] (available in German only).
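To make the role of test data concrete, here is a minimal data-driven sketch in pytest; the tariff rule and its values are hypothetical stand-ins for an SUT calculation, and each data record effectively becomes its own test case:

```python
# A minimal data-driven test sketch with pytest: the same test logic is
# executed once per test data record. The tariff rule is illustrative.
import pytest

def premium(base_rate: float, age: int) -> float:
    """Stand-in for the SUT's tariff calculation."""
    return base_rate * (1.5 if age < 25 else 1.0)

@pytest.mark.parametrize("base_rate, age, expected", [
    (100.0, 20, 150.0),   # young-driver surcharge applies
    (100.0, 25, 100.0),   # boundary: surcharge no longer applies
    (100.0, 60, 100.0),
])
def test_premium(base_rate, age, expected):
    assert premium(base_rate, age) == expected
```

Executing hundreds of contract variations then only requires extending the data table, not the test logic.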

Test environments

Setting up test environments is usually a highly complex task and is naturally dependent on the complexity of the SUT as well as on the technical and organizational environment at the company. It is therefore important to discuss general operation, test environment management, application management, and so on, with all stakeholders in advance. It is essential to clarify who is responsible for providing the SUT, the required third-party systems, the databases, and the test automation solution within the test environment, and for granting the necessary access rights and monitoring execution.

If possible, the test automation solution should be run separately from the SUT to avoid interference. Embedded systems are an exception because the test software needs to be integrated with the SUT.

Although the term “test automation” refers to all activities involved in the testing process, in practice it is commonly associated with the automated execution of tests using specialized tools or software.

In this process, one or more tasks, defined in the same way as for the execution of dynamic tests [Spillner & Linz 21], are executed based on the previously mentioned testware (see the sketch after this list):

Implement the automated test cases based on the existing specifications, the business test cases and the SUT, and provide them with test data.

Define and control the preconditions for automated execution.

Execute, control, and monitor the resulting automated test suites.

Log and interpret the results of execution—i.e., compare actual to expected results and provide appropriate reports.
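The following minimal sketch shows these tasks in a few lines of code; the suite format, logging setup, and sample checks are illustrative, not a real test framework:

```python
# A minimal sketch of the execution tasks listed above: run each automated
# test case, compare actual to expected results, and log a verdict.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

def run_suite(test_cases):
    """test_cases: iterable of (name, callable_returning_actual, expected)."""
    verdicts = {}
    for name, execute, expected in test_cases:
        try:
            actual = execute()                        # execute and monitor
            verdicts[name] = "PASS" if actual == expected else "FAIL"
        except Exception as exc:                      # SUT (or TAS) error
            verdicts[name] = f"ERROR ({exc})"
        logging.info("%s: %s", name, verdicts[name])  # log and report
    return verdicts

run_suite([
    ("upper_case", lambda: "abc".upper(), "ABC"),
    ("sum", lambda: sum([1, 2, 3]), 6),
])
```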

From a technical point of view, the implementation of automated tests can take place on different architectural levels. When replacing manual test execution, automation accesses the graphical user interface (GUI testing) or, depending on the type of application, the command line interface of the SUT (CLI testing). One level deeper, automation can be implemented through the public interfaces of the SUT’s classes, modules, and libraries (API testing) and also through corresponding services (service testing) and protocols (protocol testing). Test cases implemented at this lower architectural level have the advantage of being less sensitive to frequent changes in the user interfaces. In addition to being much easier to maintain, this approach usually has a significant performance advantage over GUI-based automation. Valuable tests can be performed before the software is deployed to a runtime environment—for example, unit tests can be used to perform automated testing of individual software components for each build before these components are fully integrated and packaged with the software product. The test automation pyramid popularized by Mike Cohn illustrates the targeted distribution of automated tests based on their cost-benefit efficiency over time [Cohn 2009].

Fig. 1–2 The test automation pyramid
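The following pytest sketch contrasts two of these levels for the same business rule. The `net_price` function, the REST endpoint, and the JSON shape are illustrative assumptions; the point is that the lower-level test runs instantly on every build, while the API-level test already needs a deployed SUT (and a GUI-level variant would be slower and more fragile still):

```python
# A minimal sketch contrasting test levels, assuming a hypothetical
# price calculation component and REST endpoint (names are illustrative).
import pytest

def net_price(gross: float, discount_pct: float) -> float:
    """Business rule under test (stand-in for a real SUT component)."""
    return round(gross * (1 - discount_pct / 100), 2)

# Unit level: fast, stable, runs on every build.
def test_net_price_unit():
    assert net_price(100.0, 15) == 85.0

# API level: slower, needs a deployed SUT; skipped here by default.
@pytest.mark.skip(reason="requires a running SUT instance")
def test_net_price_api():
    import requests  # endpoint below is hypothetical
    r = requests.get("http://sut.example/api/price",
                     params={"gross": 100, "discount": 15})
    assert r.status_code == 200
    assert r.json()["net"] == 85.0
```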

1.3 Test Automation Goals

The implementation of test automation is usually associated with several goals and expectations. In spite of all its benefits, automation is not (and will never be) an end in itself. The initial goal is to improve test efficiency and thus reduce the overall cost of testing. Other important factors are the reduction of test execution time, shorter test cycles, and the resulting chance to increase the frequency of test executions. This is especially important for the DevOps and DevTestOps approaches to testing. Continuous integration, continuous deployment, and continuous testing can only be effectively implemented using a properly functioning test automation solution.

In addition to reducing costs and speeding up the test execution phase, maintaining or increasing quality is also an important test automation goal. Quality can be achieved by increasing functional coverage and by implementing tests that can only be performed manually using significant investments in time and resources. Examples include testing a very large number of relevant data configurations or variations, testing for fault tolerance (i.e., test execution at the API/service level with faulty input data to evaluate the stability of the SUT), or performance testing in its various forms. Also, the uniform and repeated execution of entire test suites against different versions of the SUT (regression testing) or in different environments (different browsers and versions on a variety of mobile devices) is only economically feasible if the tests involved are automated.

Benefits of Test Automation

One of the greatest benefits of test automation results from building an automated regression test suite that enables increasing numbers of test cases to be executed per software release. Manual regression testing very quickly reaches the limits of feasibility and cost-effectiveness. It also ties up valuable manual resources and becomes less effective with every execution, mainly due to the testers’ unavoidable decline in concentration and motivation. In contrast, automated tests run faster, are less susceptible to operational errors and, once they have been created, complex test scenarios can be repeated as often as necessary. Manual test execution requires great effort to understand the increasing complexity of the test sequences involved and to execute them with consistent quality.

Certain types of tests are barely feasible in a manual test environment, whereas the implementation and execution of distributed and parallel tests—for example, load, performance, and stress tests—is relatively simple to automate. Real-time tests—for example, in control systems technology—also require appropriate tools.
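For illustration, even a rudimentary load-test sketch like the following only needs a thread pool; the URL, worker count, and request volume are arbitrary assumptions, and real load and performance testing is better served by the dedicated tools discussed in Appendix B:

```python
# A minimal load-test sketch, assuming a hypothetical HTTP endpoint.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://sut.example/health"  # illustrative endpoint

def timed_call(_):
    """Issue one request; return (success, elapsed seconds)."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return ok, time.perf_counter() - start

# 20 parallel virtual users issuing 200 requests in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(timed_call, range(200)))

errors = sum(1 for ok, _ in results if not ok)
latencies = sorted(duration for _, duration in results)
print(f"errors: {errors}/{len(results)}")
print(f"p95 latency: {latencies[int(0.95 * len(latencies))]:.3f}s")
```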

Since automated test cases and test scenarios are created within a defined framework and (in contrast to manual test cases) are formally described in a uniform way, they do not allow any room for interpretation, and thus increase test consistency and repeatability as well as the overall reliability of the SUT.

From the overall project point of view there are also significant advantages to using test automation. Immediate feedback regarding the quality of the SUT significantly accelerates the project workflow. Existing problems are identified within hours instead of days or weeks and can be fixed before the effort required for correction increases even further.

Test automation also enables more efficient and effective use of testing resources. This applies not only to technical infrastructures, but also to testers in IT and business units, especially through the automation of regression testing. As a result, these testers can devote more time to finding defects—for example, through exploratory testing or the targeted use of various dynamic manual testing procedures.

Drawbacks of Test Automation

As well as advantages, test automation has drawbacks too, and these need to be considered in advance to avoid unpleasant surprises later on.

Automating processes always involves additional costs, and test automation is no exception. The initial investments required to set up and launch a test automation solution include tools (for example, for test execution) that have to be purchased or developed; workplace equipment for test automation engineers (TAE) (which usually includes several development and execution PCs/screens); test environment upgrades; the establishment of new processes and work steps that become necessary for developing the test scripts; additional configuration management and versioning systems; and so on.

In addition to investing in additional technologies or processes, time and money need to be invested in expanding the test team’s skills. This includes training to become an ISTQB® Test Automation Engineer, further training in software development, and training in the use of the test automation solution and its tools.

The effort required to maintain a test automation solution and its automated testware—first and foremost, of course, the test scripts—is also frequently underestimated. Ultimately, test automation itself generates software that needs to be maintained. An unsuitable architecture, noncompliance with conventions, inadequate documentation, and a lack of configuration management all have dramatic effects as soon as the automated test suite reaches a level at which changes and enhancements take place constantly. The user interface, processes, technical aspects, and business rules in the SUT change too, and these changes have a direct and immediate impact on the test automation solution and the automated testware.

It is not uncommon for a test automation engineer to find out about such changes “in production” when a discrepancy occurs during test execution. This discrepancy is then reported and rejected by the developer as a defect in the TAS (a so-called “false positive” result). But this is not the only scenario in which the TAS leads to failures—as previously mentioned, a TAS is also just software, and software is always prone to defects.

For this reason, test automation engineers often focus too much on the technical aspects of the TAS and get distracted from the underlying qualitative test objectives that are necessary for the required coverage of the SUT.

Once a TAS is established and working well, testers are tempted to automate everything, such as extensive end-to-end testing, intertwined dialog sequences, or complicated workflows. This sounds like a great thing to do, but you must be aware of the effort involved in implementing and maintaining automated tests. Just creating and maintaining consistent test data across multiple systems for extensive end-to-end testing is a major challenge.

The Limitations of Test Automation

Test automation also has its limits. While the technical options are manifold, sometimes the cost of automating certain manual tests is not proportional to the benefit.

A machine can only check real, machine-interpretable results and to do so requires a “test oracle” which also needs to be automated in some way. The main strength of test automation lies in the precise comparison of expected and actual behavior within the SUT, while its weakness lies in the validation of the system and the evaluation of its suitability for its intended use. Faults in requirement definition or incorrect interpretation of requirements are not detected by the test automation solution. A test automation solution cannot “read between the lines” or apply creativity, and therefore cannot completely replace (manual) structured dynamic testing or exploratory testing. The SUT needs to achieve a certain level of stability and freedom from defects at its user and system interfaces for test sequences to be usefully automated without being subjected to constant changes.
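A trivial sketch illustrates the point: an automated oracle can deliver a verdict only on values that the machine can compute and compare (the tolerance-based comparison below is our own illustrative example):

```python
# A minimal automated "test oracle": a precise, machine-interpretable
# comparison of actual vs. expected values.
def oracle(actual: float, expected: float, tol: float = 1e-6) -> bool:
    """Return a pass/fail verdict; no judgment or interpretation involved."""
    return abs(actual - expected) <= tol

# Strength: precise comparison, immune to floating-point rounding noise.
assert oracle(0.1 + 0.2, 0.3)
# Limitation: whether 0.3 is the *right* requirement is a validation
# question that no automated check can answer.
```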

1.4 Success Factors in Test Automation

To achieve the set goals, to meet expectations in the long term, and to keep obstacles to a minimum, the following success factors are of particular importance for ongoing test automation projects. The more these are fulfilled, the greater the probability that the test automation project will be a success. In practice, it is rare that all these criteria are fulfilled, and it is not absolutely necessary that they are. The general project framework and success criteria need to be examined before the project starts and continuously analyzed during the project’s lifetime. Each approach has its own risks in the context of a specific project, and you have to be aware of which success factors are fulfilled and which are not. Accordingly, the test automation strategy and architecture need to be continuously adapted to changing conditions.

Please note: in the following sections we won’t go into any further detail on success factors for piloting test automation projects.

1.4.1 Test Automation Strategy

The test automation strategy is a high-level plan for achieving the long-term goals of test automation under given conditions and constraints. Statements concerning the test automation strategy can be included in a company’s testing policy and/or in its organizational test strategy. The latter defines the generic requirements for testing in one or more projects within an organization, including details on how testing should be performed, and is usually aligned with overall testing policy.

Every test automation project requires a pragmatic and consistent test automation strategy that is aligned with the maintainability of the test automation solution and the consistency of the SUT.

Because the SUT itself can consist of various old and new functional and technical areas, and because it includes applications and components run on different platforms, it is likely that specific strategies have to be defined in addition to the existing baseline strategy. The costs, benefits, and risks involved in applying the strategy to the various areas of the SUT must be considered.

Another key requirement of the test automation strategy is to ensure the comparability of test results from automated test cases executed through the SUT’s various interfaces (for example, the API and the GUI).

You will gain experience continuously in the course of a project. The SUT will change, and the project goals can be adapted accordingly. Correspondingly, the test strategy needs to be continuously adapted and improved too. Improvement processes and structures therefore have to be defined as part of the strategy.

Excursus: The Test Automation Manifesto

Fundamental principles for test automation in projects or companies can be articulated to serve as a mindset and guide when tackling various issues. The diagram below shows an example from the authors' own project environment:

Fig. 1–3 The Test Automation Manifesto

Transparency over Comfort

Test automation is characterized by risk calculation and risk avoidance, similar to the safety net used by a high-wire act. This means that if everything works out correctly, regression-testing output (i.e., the number of detected defects) is minimal. However, this doesn't mean that test automation does not add value. It is important to position test automation and its results and functions clearly and visibly within the organization. It also means that any problems with test automation are clearly and instantly visible. We believe this to be a strength, not a weakness.

Collaboration over Independence

A typical situation occurs when a test automation tool is purchased and handed over to a tester who is then responsible for its implementation and use. Often, the tester in question will enter “experimental mode” and try to implement automated test cases under pressure. A typical behavior pattern in this context is: “Me vs. tool vs. the product”—i.e., a tendency to want to solve or work around problems and challenges alone. Instead, we recommend actively engaging with other roles. For example, if it is difficult to display a particular table, reach out to the developers, ask the community, or simply call vendor support.

Quality over Quantity

A typical metric for the value and progress of test automation is the degree of automation of a test suite, measured either as a percentage or the absolute number of automated test cases. However, this does not reflect the additional value generated by the maintainability and robustness of the automated tests. A guiding principle in this context is: “Ten meaningful, stably automated tests are worth more than a thousand unstable and untraceable test cases”. Ergo, a small regression test suite is often more useful than a huge test portfolio that is difficult to maintain.

Flexibility over Continuity

Test automation is like a twin of the systems it tests and is often a tool for ensuring the successful execution of business processes. It delivers the greatest added value when it can be used over a long period of time with little maintenance. During this time, technologies, tools, personnel, and even business processes can change significantly. To remain effective, test automation requires a high degree of flexibility in the face of change. This is both a strategic and process-related problem as well as a technological/architectural one, which is also addressed by the generic test automation architecture described in detail in later chapters.

A test automation strategy also needs to be tailored to the type of project it is used in. Additionally, the different test levels and test types that are to be supported through automation may also require different approaches.

Section 1.5 on test levels and project types, as well as Appendices A and B, provide an introduction to this topic in the form of an excursus (i.e., they are not part of the official ISTQB® CT-TAE syllabus).

1.4.2 Test Automation Architecture (TAA)

The architecture of a test automation solution is crucial to its acceptance, its existence, and its long-term use. The design of a suitable TAA is also a core topic of the Test Automation Engineer training. It requires a certain amount of experience to implement architectural requirements in the best possible way. For this reason, many test automation projects have a test automation architect who, like a software development architect, supports the project in its initial stages and in the case of major modifications.

Fig. 1–4 Schematic representation of the layers of a generic test automation architecture (gTAA)

Test Automation Architecture Requirements

The architecture of a test automation solution is closely related to the architecture of the SUT. The individual components, user interfaces, dialogs, interfaces, services and technical concepts, languages used, and so on, must all be addressed.

The test and test automation strategy should clearly define which functional and non-functional requirements of the SUT are to be addressed and supported by test automation, and thus by the test automation architecture. These will usually be the most important requirements for the software product.

Appendix A provides an overview of software quality characteristics according to ISO 25010 (part of the ISO/IEC 25000:2014 series of standards).

However, the functional and non-functional requirements of a test automation solution also have to be considered. In particular, the requirements covering maintainability, performance, and learnability are in focus during the design of a test automation architecture. The SUT is subject to continuous development, so a high degree of modifiability and extensibility is essential. Using modular concepts or separating functional and technical implementation layers are ways to ensure this.
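
The following minimal sketch illustrates such a separation of layers (all class, method, and locator names are illustrative, not taken from a specific tool):

    # Illustrative sketch: the functional layer (business-level keywords
    # used by test cases) is separated from the technical layer (the
    # adapter that actually drives the SUT). If the GUI technology
    # changes, only the adapter is replaced; keywords and tests stay stable.

    class GuiAdapter:
        """Technical layer: encapsulates all direct tool/SUT interaction."""
        def click(self, locator: str): ...
        def type_text(self, locator: str, text: str): ...
        def read_text(self, locator: str) -> str: ...

    class LoginKeywords:
        """Functional layer: business actions composed from adapter calls."""
        def __init__(self, adapter: GuiAdapter):
            self.adapter = adapter

        def login(self, user: str, password: str):
            self.adapter.type_text("id=username", user)
            self.adapter.type_text("id=password", password)
            self.adapter.click("id=login-button")

    # A test case only speaks the functional vocabulary:
    def test_login_succeeds():
        keywords = LoginKeywords(GuiAdapter())
        keywords.login("alice", "secret")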

As the size of the automated test suite increases, the performance of the test automation solution becomes an important issue. Increased testing via the API interfaces rather than the GUI can lead to significant improvements in efficiency. Additionally, the test automation solution should not be treated as a mystery that is only accessible to a chosen few experts. Understandability and learnability are therefore also important factors. It is also worth looking at the quality characteristics listed in Appendix A and evaluating them for their potential use within the test automation architecture.

Collaboration with the software developers and architects is essential to develop the best possible architecture for a test automation solution in a given context. This is because a deep understanding of the SUT architecture is required to meet the requirements mentioned above.

1.4.3 Testability of the SUT

Testability or, more precisely, the automated testability of the SUT, is also a key success factor. The test automation tools must have access to the objects and elements of the various user and system interfaces, as well as to system architecture components and services, to identify and leverage them.

Test automation tools provide a range of automation adapters based on a wide variety of technologies and platforms. Whether .NET, Java, SAP, web, desktop, or mobile solutions; Windows, Linux, or Android/iOS; Google Chrome, Internet Explorer, Microsoft Edge, Mozilla Firefox, or Safari: the range is huge.

Manufacturers align their solutions with the common standards used by these technologies and platforms. Problems often arise when the SUT contains implementations and concepts that deviate from these standards. It is therefore necessary to determine the basic automation capability of the SUT during a proof of concept, and to find the most suitable automation solution. In such cases, there are typically three options, all of which can be tricky and/or expensive: persuading the manufacturer of an automation tool to modify their product to fit your ideas; convincing the development department to adapt the architecture of the SUT and exchange in-house class libraries for others; or finding a workaround using complex constructs within the test automation solution.

However, as the use of test automation becomes more widespread, especially in agile development scenarios, the ability to automate test execution may gain importance as a new quality metric for software applications.

For example, for automated testing via the GUI, the interaction elements and data should be decoupled from their layout as far as possible. For API testing, corresponding interfaces (classes, modules/components, or the command line) can be provided publicly or developed separately.
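
The following sketch, assuming Selenium WebDriver and a purely illustrative page, shows the difference between a layout-independent locator and a layout-dependent one:

    # Layout-independent element access with Selenium WebDriver. The first
    # locator survives layout changes because it targets a stable,
    # dedicated test attribute; the commented-out alternative breaks as
    # soon as the page structure changes. (URL and attributes are examples.)
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.org/orders")

    # Robust: decoupled from layout via a stable identifier
    submit = driver.find_element(By.CSS_SELECTOR, "[data-testid='submit-order']")

    # Fragile: coupled to the current layout and DOM structure
    # submit = driver.find_element(
    #     By.XPATH, "/html/body/div[2]/div/form/table/tr[5]/td[2]/button")

    submit.click()
    driver.quit()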

For each SUT there are areas (classes, modules, functional units) that are easy to automate and areas where automation can be very time-consuming. Potential showstoppers should already have been addressed during tool evaluation and selection. Because an important success factor is the easiest possible implementation and distribution of automated test scripts, the initial focus should be on test areas that can be easily automated. The proof of successful automated test execution helps the project along and supports investment in the expansion of test execution. However, if you dive too deep into critical areas too early, you may deliver few results and thus add less value to the project.

1.4.4 Test Automation Framework

A test automation framework (TAF) must be easy to use, well documented and, above all, easy to maintain. The foundations for these attributes are laid in the test automation architecture. The test automation framework should also ensure a consistent approach to test automation.

The following factors are especially important:

Implementing reporting facilities

Test reports provide information about the quality of the SUT (passed/failed/faulty/not executed/aborted, statistical data, and so on) and should present this information in an appropriate format for the various stakeholders (testers, test managers, developers, project managers, and so on).
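
A minimal sketch of such a reporting facility (the result structure and status names are illustrative):

    # Raw results are aggregated into the status categories mentioned
    # above and rendered as a short summary for stakeholders.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class TestResult:
        name: str
        status: str  # "passed", "failed", "faulty", "not executed", "aborted"

    def summarize(results: list) -> str:
        counts = Counter(r.status for r in results)
        lines = [f"Test run summary ({len(results)} test cases):"]
        for status in ("passed", "failed", "faulty", "not executed", "aborted"):
            lines.append(f"  {status:>12}: {counts.get(status, 0)}")
        return "\n".join(lines)

    print(summarize([
        TestResult("login", "passed"),
        TestResult("checkout", "failed"),
        TestResult("report export", "not executed"),
    ]))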

Support for easy troubleshooting and debugging

In addition to test execution and logging, a test automation framework should provide an easy way to troubleshoot failed tests. The following are some typical reasons for failures; ideally, the framework will classify them in a way that supports failure analysis (a minimal classification sketch follows the list):

Failures found in the SUT

Failures found in the test automation solution (TAS)

Problems with the tests themselves (for example, flawed test cases)

Problems with the test environment (for example, non-functioning services, missing test data, and so on)
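
A minimal sketch of such a classification, with purely illustrative heuristics:

    # A framework hook inspects a failed test and attaches one of the
    # categories listed above, so that failure analysis can be filtered
    # and routed to the right role.
    from enum import Enum

    class FailureCategory(Enum):
        SUT_DEFECT = "Failure found in the SUT"
        TAS_DEFECT = "Failure found in the test automation solution"
        TEST_DEFECT = "Problem with the test itself"
        ENVIRONMENT = "Problem with the test environment"

    def classify(error: Exception) -> FailureCategory:
        # Heuristics are illustrative; real frameworks use richer signals
        # (log patterns, exception types, environment health checks, ...).
        message = str(error).lower()
        if "connection refused" in message or "timeout" in message:
            return FailureCategory.ENVIRONMENT
        if "element not found" in message:
            return FailureCategory.TAS_DEFECT
        if "assertion" in message:
            return FailureCategory.SUT_DEFECT
        return FailureCategory.TEST_DEFECT

    print(classify(RuntimeError("connection refused by test database")))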

Correct setup of the test environment

Automated test execution requires a dedicated test environment that integrates the various test tools in a consistent manner. If the automated test environment or the test data cannot be manipulated or configured, the test scripts might not be set up and executed according to the test execution requirements. This in turn may lead to unreliable, misleading, or even incorrect test results (false positive or false negative results). A false positive test result means that a problem is detected (i.e., the automated test fails), even if there is no defect in the SUT. A false negative test result indicates that a test is successful (i.e., the automated test does not encounter a failure), even though the system is faulty.
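
A minimal sketch of an environment guard, assuming pytest and an illustrative host name: if a required service is unreachable, the affected tests are reported as skipped rather than producing misleading false positives.

    # Before each test, verify that the test environment is ready; if it
    # is not, skip the test (reported as "not executed") instead of
    # letting it fail for reasons unrelated to the SUT.
    import socket
    import pytest

    def service_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    @pytest.fixture(autouse=True)
    def require_test_environment():
        # host name is illustrative
        if not service_reachable("test-db.example.org", 5432):
            pytest.skip("test environment not ready: database unreachable")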

Documentation of automated test cases

The goals of test automation must be clearly defined and described. Which parts of the software should be tested and to what extent? Which test automation approach should be used? Which (functional and non-functional) properties of the SUT are to be automatically tested? Furthermore, the documentation of the automated test cases (or test case sets) must make it clear which test objective they cover.

Traceability of automated testing

The functional test scenarios covered by automated test suites are sometimes exceedingly hard to understand, let alone discover. This frequently results in the creation and implementation of new, redundant test scripts, and thus in a fundamental lack of transparency and clarity. Therefore, the test automation framework must also support traceability between the automated test case steps and the corresponding functional test cases and test scenarios.
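
One lightweight way to establish such traceability is to annotate each automated test with the ID of the functional test case it covers. The following sketch assumes pytest; the marker name and IDs are illustrative:

    # Register the marker in pytest.ini to document it:
    #
    #   [pytest]
    #   markers = functional_case(id): links a test to a functional test case
    import pytest

    @pytest.mark.functional_case("TC-ORDER-017")
    def test_order_with_invalid_iban_is_rejected():
        ...  # test body omitted in this sketch

    @pytest.mark.functional_case("TC-ORDER-018")
    def test_order_confirmation_mail_is_sent():
        ...  # test body omitted in this sketch

With such annotations in place, coverage against the functional test portfolio can be reported automatically, and redundant scripts become easier to spot.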

High maintainability

One of the biggest risks for the success of a test automation project is the maintenance effort it involves. Ideally, the effort required to maintain existing test scripts should be a small percentage of the overall test automation effort. In addition, the effort required to customize the test automation solution should be in a healthy proportion to the scope of the changes to the SUT. If test automation becomes more expensive than the development of the SUT, the goal of reducing costs using test automation will probably not be achieved. Automated test cases should therefore be easy to analyze, change, and extend. A good modular design tailored to the SUT allows a high degree of reusability for individual components and thus reduces the number of artifacts that have to be adapted when changes become necessary.

Keeping test cases up to date

Some test cases fail because changes are made to the business or technical requirements that are not yet addressed in the test scripts, rather than due to an application defect. The affected test cases should not simply be discarded but rather adapted accordingly. It is therefore essential that the test automation engineer receives all relevant information about changes to the SUT through appropriate processes, documentation, and tools, and can thus update the test suite in a timely fashion.

Software deployment planning

The test automation framework should also support the version and configuration management built into the test automation solution, which in turn needs to be kept in sync with the current version of the SUT, again through the appropriate use of tools and standards. The deployment, modification, and redeployment of test scripts must be kept as simple as possible.

Retiring automated tests

When certain automated test sequences are no longer needed, the test automation framework needs to support their structured removal from the test suite. In most cases, it is not sufficient to simply delete scripts. To maintain the consistency of the test automation solution, all dependencies between the components involved must be easy to edit and resolve. As you do when developing software, you should always avoid producing dead code.

SUT monitoring and recovery

Normally, to be able to continuously execute tests, the SUT needs to be constantly monitored. If a fatal failure occurs in the SUT (a crash, for example), the test automation framework must be able to skip the current test case, return the SUT to a consistent state, and proceed with the execution of the next test case.
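
As a minimal sketch (the sut object and its methods are assumed for illustration and are not part of any specific framework), a recovering test runner loop might look like this:

    # If a test crashes the SUT, the run is not aborted: the SUT is
    # restored to a consistent state and execution continues with the
    # next test case.
    def run_suite(tests, sut):
        results = {}
        for test in tests:
            if not sut.is_alive():           # continuous SUT monitoring
                sut.restart()                # return SUT to a consistent state
            try:
                test.run(sut)
                results[test.name] = "passed"
            except Exception as error:
                results[test.name] = f"failed: {error}"
                sut.reset_to_known_state()   # e.g., restore base test data
        return results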

Maintaining Test Automation Code