Master high quality software development driven by unit tests
Page count: 245
Year of publication: 2015
Copyright © 2015 Packt Publishing
All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.
Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing, and its dealers and distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.
Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.
First published: August 2015
Production reference: 1240815
Published by Packt Publishing Ltd.
Livery Place
35 Livery Street
Birmingham B3 2PB, UK.
ISBN 978-1-78216-660-3
www.packtpub.com
Author
Frank Appel
Reviewers
Stefan Birkner
Jose Muanis Castro
John Piasetzki
Acquisition Editor
Sonali Vernekar
Content Development Editor
Merwyn D'souza
Technical Editor
Humera Shaikh
Copy Editors
Sarang Chari
Sonia Mathur
Project Coordinator
Nikhil Nair
Proofreader
Safis Editing
Indexer
Monica Ajmera Mehta
Graphics
Jason Monteiro
Production Coordinator
Nilesh R. Mohite
Cover Work
Nilesh R. Mohite
Frank Appel is a stalwart of agile methods and test-driven development in particular. He has over two decades of experience as a freelancer and understands software development as a type of craftsmanship. Having adopted the test-first approach over a decade ago, he applies unit testing to all kinds of Java-based systems and arbitrary team constellations. He serves as a mentor, provides training, and blogs about these topics at codeaffine.com.
I'd like to thank the reviewers, John Piasetzki, Stefan Birkner, and Jose Muanis Castro, and the editors, Sonali Vernekar, Merwyn D'souza, and Humera Shaikh, who spent time and effort to point out my errors, omissions, and sometimes unintelligible writing. In particular, I would like to thank my friend Holger Staudacher, who helped in the reviewing process of the book.
Thanks to all of you for your valuable input and support!
Stefan Birkner has a passion for software development. He has a strong preference for beautiful code, tests, and deployment automation. Stefan is a contributor to JUnit and maintains a few other libraries.
Jose Muanis Castro holds a degree in information systems. Originally from sunny Rio de Janeiro, he now lives in Brooklyn with his wife and kids. At The New York Times, he works with recommendation systems on the personalization team. Previously, he worked on CMS and publishing platforms at Globo.com in Brazil.
Jose is a seasoned engineer with hands-on experience in several languages. He's passionate about continuous improvement, agile methods, and lean processes. With a lot of experience in automation, from testing to deploying, he constantly switches hats between development and operations. When he's not coding, he enjoys riding around on his bike. He was a reviewer on the 2014 book, Mastering Unit Testing Using Mockito and JUnit, Packt Publishing. His Twitter handle is @muanis.
I'm thankful to my wife, Márcia, and my kids, Vitoria and Rafael, for understanding that I couldn't be there sometimes when I was reviewing this book.
John Piasetzki has over 15 years of professional experience as a software developer. He started out doing programming jobs when he was young and obtained a bachelor of science degree in computer engineering. John was fortunate enough to get his start in programming by contributing to WordPress. He continued by working at IBM on WebSphere while getting his degree. Since then, he has moved on to smaller projects. John has worked with technologies such as Python, Ruby, and most recently, AngularJS. He's currently working as a software developer at OANDA, a foreign exchange company.
For support files and downloads related to your book, please visit www.PacktPub.com.
Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.PacktPub.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at <[email protected]> for more details.
At www.PacktPub.com, you can also read a collection of free technical articles, sign up for a range of free newsletters and receive exclusive discounts and offers on Packt books and eBooks.
https://www2.packtpub.com/books/subscription/packtlib
Do you need instant solutions to your IT questions? PacktLib is Packt's online digital book library. Here, you can search, access, and read Packt's entire library of books.
If you have an account with Packt at www.PacktPub.com, you can use this to access PacktLib today and view 9 entirely free books. Simply use your login credentials for immediate access.
Testing with JUnit is a skill that presents much harder challenges than you might expect at first sight. This is because, despite its temptingly simple API, the tool plays ball with profound and well-conceived concepts. Hence, it's important to acquire a deep understanding of the underlying principles and practices. This avoids ending up in gridlocked development due to messed-up production and testing code.
Mastering high-quality software development driven by unit tests is about following well-attuned patterns and methods as a matter of routine, rather than reinventing the wheel on a daily basis. If you have a good perception of the conceptual requirements and a wide-ranging arsenal of solution approaches, they will empower you to continuously deliver code that achieves excellent ratings with respect to the usual quality metrics out of the box.
To impart these capabilities, this book provides you with a well-thought-out, step-by-step tutorial. Foundations and essential techniques are elaborated, guided by a golden thread along the development demands of a practically relevant sample application. The chapters and sections build on one another, each starting with in-depth considerations about the current topic's problem domain and concluding with an introduction to and discussion of the available solution strategies.
At the same time, care has been taken to back up all thoughts with illustrative images and expressive listings, supplemented by adequate walkthroughs. For the best possible understanding, the complete source code of the book's example app is hosted at https://github.com/fappel/Testing-with-JUnit. This allows you to comprehend the various aspects of JUnit testing from within a more complex development context and facilitates an exchange of ideas using the repository's issue tracker.
Chapter 1, Getting Started, opens with a motivational section about the benefits of JUnit testing and warms up with a short coverage of the toolchain used throughout the book. After these preliminaries, the example project is kicked off, and writing the first unit test offers the opportunity to introduce the basics of the test-driven development paradigm.
Chapter 2, Writing Well-structured Tests, explains why the four-phase test pattern is perfectly suited to testing a unit's behavior. It elaborates on several fixture initialization strategies, shows how to deduce what to test, and concludes by elucidating different test-naming conventions.
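To give a first impression, here is a minimal sketch of that four-phase structure in JUnit 4; the list-based fixture is purely illustrative and not taken from the book's sample application.

import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class NumberListTest {

  private List<Integer> numbers; // the fixture shared by the tests of this class

  @Before
  public void setUp() {                // phase 1: fixture setup
    numbers = new ArrayList<>();
    numbers.add(1);
  }

  @Test
  public void addIncreasesSize() {
    numbers.add(2);                    // phase 2: exercise the unit

    assertEquals(2, numbers.size());   // phase 3: verify the outcome
  }

  @After
  public void tearDown() {             // phase 4: teardown
    numbers.clear();
  }
}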
Chapter 3, Developing Independently Testable Units, shows you how to decompose big requirements into small and separately testable components and illustrates the impact of collaboration dependencies on testing efforts. It explains the importance of test isolation and demonstrates the use of test doubles to achieve it.
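The following sketch hints at the idea of isolation with a hand-written test double; the TimestampProvider collaborator and the Timeline unit are hypothetical and not part of the book's sample application.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TimelineTest {

  // hypothetical collaborator: the real implementation would depend on the system clock
  interface TimestampProvider {
    long getTimestamp();
  }

  // the unit under test, decomposed so that the collaborator can be replaced
  static class Timeline {
    private final TimestampProvider provider;

    Timeline(TimestampProvider provider) {
      this.provider = provider;
    }

    long captureTimestamp() {
      return provider.getTimestamp();
    }
  }

  @Test
  public void captureTimestampDelegatesToProvider() {
    TimestampProvider stub = () -> 42L; // deterministic test double replacing the real clock
    Timeline timeline = new Timeline(stub);

    long actual = timeline.captureTimestamp();

    assertEquals(42L, actual);
  }
}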
Chapter 4, Testing Exceptional Flow, discusses the pros and cons of various exception capture and verification techniques. Additionally, it explains the meaning of the fail fast strategy and outlines how it intertwines with tests on particular boundary conditions.
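As a taster, here are two common JUnit 4 capture styles, sketched with an illustrative empty-list example that is not taken from the book.

import static org.junit.Assert.fail;

import java.util.ArrayList;

import org.junit.Test;

public class ExceptionCaptureTest {

  @Test(expected = IndexOutOfBoundsException.class)
  public void getOnEmptyList() {
    new ArrayList<String>().get(0); // the annotation only checks the exception's type
  }

  @Test
  public void getOnEmptyListWithTryCatch() {
    try {
      new ArrayList<String>().get(0);
      fail("expected IndexOutOfBoundsException");
    } catch (IndexOutOfBoundsException expected) {
      // the try-catch variant additionally allows verifying the caught exception's state
    }
  }
}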
Chapter 5, Using Runners for Particular Testing Purposes, presents JUnit's pluggable test processor architecture that allows us to adjust test execution to highly diverse demands. It covers how to write custom runners and introduces several useful areas of application.
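As a foretaste of the runner concept, the built-in Parameterized runner shows how @RunWith plugs a non-default test processor into JUnit; the squaring example is illustrative only.

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class SquareTest {

  private final int input;
  private final int expected;

  public SquareTest(int input, int expected) {
    this.input = input;
    this.expected = expected;
  }

  @Parameters
  public static Collection<Object[]> data() {
    // the runner instantiates and runs the test once per entry
    return Arrays.asList(new Object[][] { { 2, 4 }, { 3, 9 }, { -4, 16 } });
  }

  @Test
  public void square() {
    assertEquals(expected, input * input);
  }
}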
Chapter 6, Reducing Boilerplate with JUnit Rules, unveils the test interception mechanism behind the rule concept, which allows you to provide powerful, test-related helper classes. After deepening the knowledge by writing a sample extension, the chapter continues with the tool's built-in utilities and concludes by inspecting useful third-party vendor offerings.
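The sketch below, with purely illustrative logging, hints at that interception mechanism: a TestRule wraps the test Statement and can execute code before and after it.

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class LoggingRule implements TestRule {

  @Override
  public Statement apply(final Statement base, final Description description) {
    return new Statement() {
      @Override
      public void evaluate() throws Throwable {
        System.out.println("before: " + description.getDisplayName());
        try {
          base.evaluate(); // execute the wrapped test
        } finally {
          System.out.println("after: " + description.getDisplayName());
        }
      }
    };
  }
}

A test class would activate such a rule through a public field annotated with @Rule, for example @Rule public LoggingRule logging = new LoggingRule();.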
Chapter 7, Improving Readability with Custom Assertions, teaches the writing of concise verifications that reveal the expected outcome of a test clearly. It shows how domain-specific assertions help you to improve readability and discusses the assets and drawbacks of the built-in mechanism, Hamcrest, and AssertJ.
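To illustrate the idea independently of any particular library, here is a hand-rolled, domain-specific assertion; the Range type and the fluent assertThat entry point are hypothetical and not part of the book's sample application.

import static org.junit.Assert.assertTrue;

public class RangeAssert {

  // minimal domain type, defined here only to keep the sketch self-contained
  public static class Range {
    final int lowerBound;
    final int upperBound;

    public Range(int lowerBound, int upperBound) {
      this.lowerBound = lowerBound;
      this.upperBound = upperBound;
    }
  }

  private final Range actual;

  private RangeAssert(Range actual) {
    this.actual = actual;
  }

  public static RangeAssert assertThat(Range actual) {
    return new RangeAssert(actual);
  }

  public RangeAssert contains(int value) {
    assertTrue("expected range to contain " + value,
               actual.lowerBound <= value && value <= actual.upperBound);
    return this; // returning this allows chaining further verifications fluently
  }
}

A verification then reads assertThat(new Range(1, 5)).contains(3);, which states the expected outcome far more clearly than a bare boolean check.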
Chapter 8, Running Tests Automatically within a CI Build, concludes the example project with important considerations of test-related architectural aspects. Finally, it rounds out the book with an introduction to continuous integration, which complements the test-first approach and establishes short feedback cycles efficiently by means of automation.
Appendix, References, lists all the bibliographic references used throughout the chapters of this book.
For better understanding and deepening of the knowledge acquired, it's advisable to work through the examples within a local workspace on your computer. As JUnit is written in Java, the most important thing you need is a Java Development Kit (JDK). The sample code requires at least Java 8, which can be downloaded from http://www.oracle.com/technetwork/java/index.html.
Although it's possible to compile and run the listings from the command line, the book assumes you're working with a Java IDE, such as Eclipse (http://www.eclipse.org/), IntelliJ IDEA (https://www.jetbrains.com/idea/), or NetBeans (https://netbeans.org/). The sample application was developed using Eclipse, which is also the IDE shown in the screenshots.
As mentioned in the preceding paragraph, the book's code sources are hosted at GitHub, so you can clone your local copy using Git (https://git-scm.com/). The chapter and sample app projects are based on Maven (https://maven.apache.org/) with respect to their structure and dependency management, which makes it easy to get the sample solutions up and running. This allows a thorough live inspection and debugging of passages that are not fully understood.
Due to this availability of comprehensive sources, the listings in the chapters are stripped down, using static imports wherever appropriate or ellipses to denote class content that is unrelated to the topic at hand. This keeps the snippets small and focused on the important parts.
Apart from that, in the course of the book, several Java libraries are introduced. They can all be declared as Maven dependencies and can be downloaded automatically from the publicly available Maven Central Repository (http://search.maven.org/). For some examples, you can refer to the pom.xml files of the sample application. An overview of the testing toolset is given in Chapter 1, Getting Started.
No matter what your specific background as a Java developer is, whether you're simply interested in building up a safety net to reduce the regression of your desktop application or in improving your server-side reliability based on robust and reusable components, unit testing is the way to go. This book provides you with a comprehensive but concise introduction, advancing your knowledge step by step to a professional level.
Feedback from our readers is always welcome. Let us know what you think about this book—what you liked or disliked. Reader feedback is important for us as it helps us develop titles that you will really get the most out of.
To send us general feedback, simply e-mail <[email protected]>, and mention the book's title in the subject of your message.
If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide at www.packtpub.com/authors.
Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.
You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.
We also provide you with a PDF file that has color images of the screenshots/diagrams used in this book. The color images will help you better understand the changes in the output. You can download this file from: https://www.packtpub.com/sites/default/files/downloads/6603OS_Graphics.pdf.
Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in the text or the code—we would be grateful if you could report this to us. By doing so, you can save other readers from frustration and help us improve subsequent versions of this book. If you find any errata, please report them by visiting http://www.packtpub.com/submit-errata, selecting your book, clicking on the Errata Submission Form link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata will be uploaded to our website or added to any list of existing errata under the Errata section of that title.
To view the previously submitted errata, go to https://www.packtpub.com/books/content/support and enter the name of the book in the search field. The required information will appear under the Errata section.
Piracy of copyrighted material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide us with the location address or website name immediately so that we can pursue a remedy.
Please contact us at <[email protected]> with a link to the suspected pirated material.
We appreciate your help in protecting our authors and our ability to bring you valuable content.
If you have a problem with any aspect of this book, you can contact us at <[email protected]>, and we will do our best to address the problem.
Accomplishing the evolving objectives of a software project on time and within budget on a long-term basis is a difficult undertaking. In this opening chapter, we're going to explain why unit testing can play a vital role in meeting these demands. We'll illustrate the positive influence on the defect rate, code quality, development pace, specification density, and team morale. All of this makes it worthwhile to acquire a broad understanding of the various testing techniques. To get started, you'll learn to arrange the tool set around JUnit and organize the project infrastructure properly. You'll be familiarized with the definition of unit tests and the basics of test-driven development. This will prepare us for the following chapters, where you'll come to know about more advanced testing practices.
Since you are reading this, you likely have a reason to consider unit testing as an additional development skill to learn. Whether you are motivated by personal interest or driven by external stimulus, you probably wonder if it will be worth the effort. But properly applied unit testing is perhaps the most important technique the agile world has to offer. A well-written test suite is usually half the battle for a successful development process, and the following section will explain why.
The most obvious reason to write unit tests is to build up a safety net to guard your software from regression. There are various grounds for changing the existing code, whether it be to fix a bug or to add supplemental functionality. But understanding every aspect of the code you are about to change is difficult to achieve. So, a new bug sneaks in easily. And it might take a while before it gets noticed.
Think of a method returning some kind of sorted list that works as expected. Due to additional requirements, such as filtering the result, a developer changes the existing code. Inadvertently, these changes introduce a bug that only surfaces under rare circumstances. Hence, simple sanity tests may not reveal any problems and the developer feels confident to check in the new version. If the company is lucky, the problem will be detected by the quality assurance team, but chances are that it slips through to the customer. Boom!
This is because it's hardly possible to check all the corner cases of nontrivial software from a user's point of view, let alone manually. Besides an annoyed customer, this leads to a costly turnaround consisting of, for example, filing a bug report, reproducing and debugging the problem, scheduling it for repair, implementing the fix, testing, delivering, and, finally, deploying the corrected version. But who will guarantee that the new version won't introduce another regression?
Sounds scary? It is! I have seen teams that were barely able to deliver new functionality as they were about to drown in a flood of bugs. And hot fixes produced to resolve blocking situations on the customer side introduced additional regression all the time. Sounds familiar? Then, it might be time for a change.
Good unit tests can be written with a small development overhead and verify, in particular, all the corner case behavior of a component. Thus, the developer's mistake described above would have been captured by a test at the earliest possible point in time and at the lowest possible price. But humans make mistakes: what if a corner case is overlooked and a bug turns up? Even then, you are better off, because fixing the issue sustainably simply means writing an additional test that reproduces the problem by a failing verification. Change the code until all tests pass and you get rid of the fault forever.
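A hypothetical, minimal reproduction of the scenario sketched above might look like the following: a sort-and-filter method whose corner case gets pinned down by a dedicated test so that the regression cannot return unnoticed. The names are illustrative and not taken from the book's sample application.

import static java.util.Arrays.asList;
import static org.junit.Assert.assertEquals;

import java.util.List;
import java.util.stream.Collectors;

import org.junit.Test;

public class SortedFilterTest {

  // illustrative production method: sorts the input and filters out non-positive values
  private static List<Integer> sortedPositives(List<Integer> input) {
    return input.stream()
                .filter(value -> value > 0)
                .sorted()
                .collect(Collectors.toList());
  }

  @Test
  public void resultRemainsSortedAfterFiltering() {
    List<Integer> actual = sortedPositives(asList(3, -1, 2));

    // once this verification passes, the corner case stays covered for good
    assertEquals(asList(2, 3), actual);
  }
}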
The influence a consistent testing approach will have on the code quality is less apparent. Once you have a safety net in place, changing the existing code to make it more readable, and hence easier to enhance, isn't risky anymore. If you introduce a regression, your tests will tell you immediately. So, the code morphs from a never-touch-a-running-system shrine into a lively, change-embracing place.
Matured test-first practices will implicitly improve your code with respect to most of the common quality metrics. Testing first is geared to produce small, coherent, and loosely coupled components combined with a high coverage and verification of the component's behavior. The production of clean code is an inherent step of the test-driven development mantra explained further ahead.
The following image shows two screenshots of measurements taken from a small, real-world project of the Xiliary GitHub repository (https://github.com/fappel/xiliary). The project was developed completely driven by tests, and we paid no attention to its metrics before writing this chapter. But, not very surprisingly, the numbers look quite okay.
Don't worry if you're not familiar with the meaning of the metrics. All you need to know at the moment is that they would appear in red if exceeding the tool's default thresholds.
So, in case you wonder about the three red spots with low coverage numbers, note that two of those classes are covered by particular integration tests as they are adapters to third-party functionality (a more detailed explanation of integration tests follows in the upcoming Understanding the nature of a unit test section). The remaining class is at an experimental or prototypical stage and will be replaced in the future.
Note that we'll deepen our knowledge of code coverage in Chapter 2, Writing Well-structured Tests, and in Chapter 8, Running Tests Automatically within a CI Build.
Metrics of a TDD project
Programs built on good code quality stand out from systems that merely run, because they are easier to maintain and usually impress with a higher feature evolution rate.
At first glance, the math seems to be simple. Writing additional testing code means more work, which consumes more time, which leads to lower development speed. Right? But would you like to drive a car whose individual parts did not undergo thorough quality assurance? And what would be gained if the car had to spend most of its lifetime in the service shop rather than on the road, let alone the possibility of a life-threatening accident?
The initial production speed might be high, but the overall outcome would be poor and might ruin the car manufacturer in the end. It is not that much different with the development of nontrivial software systems. We have already elaborated on the costs of bugs that manage to sneak through to the customer. So, calculating development speed like that is a naïve assessment.
As a developer, you stand between two contradictory goals: on the one hand, you have to be quick on the draw to meet your deadlines. On the other hand, you must not commit too many sins to be able to also meet subsequent deadlines. The term sin refers to work that should be done before a particular job can be considered complete or proper. This is also denoted as technical debt [TECDEP]. And here comes the catch. Keeping the balance often does not work out, and once the technical debt gets too high, the system collapses. From that point in time, you won't meet any deadlines again.
So, yes, writing tests causes an overhead. But if done well, it ensures that subsequent deadlines are not endangered by technical debt. The development pace might be initially at a slightly lower rate with testing, but it won't decrease and is, therefore, higher when watching the overall picture.
By the way, if you know your tools and techniques, the overhead isn't that much at all. At least, I am usually not hired for being particularly slow. When you think of it, running a component's unit tests is done in the blink of an eye. On the flip side, checking its behavior manually involves launching the application, clicking through to the point where your code actually gets involved, and then clicking and typing your way through the scenarios you consider important. Does the latter sound like an efficient modus operandi?
A good test suite at hand can be an additional source of information about what your system components are really capable of, and one that doesn't become outdated, unlike design documents, which usually do. Of course, this is a kind of low-level specification that only a developer is apt to write. But if done well, a test's name tells you about the functionality under test with respect to specific initial conditions, and its verifications tell you about the expected outcome produced by executing this functionality.
This way, a developer who is about to change an existing component will always have the chance to check the accompanying tests to understand what a component is really all about. So, the truth is in the tests! But this underscores that tests have to be treated as first-class citizens and have to be written and adjusted with care. A poorly written test might confuse a programmer and hinder progress significantly.
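For illustration, assuming a trivial queue example that is not part of the book, a name like the following documents the initial condition, the functionality under test, and the expected outcome at a glance.

import static org.junit.Assert.assertNull;

import java.util.LinkedList;
import java.util.Queue;

import org.junit.Test;

public class QueueContractTest {

  @Test
  public void pollOnEmptyQueueReturnsNull() {
    Queue<String> queue = new LinkedList<>(); // initial condition: an empty queue

    // expected outcome: poll yields null instead of throwing an exception
    assertNull(queue.poll());
  }
}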
Everybody likes to be in a winning team. But once you are stuck in a bug trail longer than the Great Wall of China and a technical debt higher than Mount Everest, fear creeps in. At that point, the implementation of new features can cause avalanches of collateral damage, and developers become reluctant to change anything. What follows are debates about consolidation phases or even rewriting large parts of the system from scratch before anyone dares to think about new functionality. Of course, this is an economic horror scenario from the management's point of view, and that's how the development team members' confidence and courage say goodbye.
Again, this does not happen as easily with a team that has built its software upon components backed up by well-written unit tests. We learned earlier why unit-tested systems have neither many bugs nor too much technical debt. Introducing additional functionality is possible without expecting too much collateral damage since the existing tests guard against regressions. Combined with module-spanning integration tests, you get a rock-solid foundation in which developers learn to trust.
I have seen more than once how restructuring requirements in nontrivial systems were met without doing any harm to dependent components. All that was necessary was to take care not to break existing tests and to cover changed code passages with new or adjusted tests. So, if you are, unluckily, more or less familiar with some of the scenarios described in this section, you should read on and learn how to get confidence and courage back into your team.