Description

Experienced developers understand the importance of designing a comprehensive testing strategy to ensure efficient shipping and maintaining services in production. This book shows you how to utilize test-driven development (TDD), a widely adopted industry practice, for testing your Go apps at different levels. You’ll also explore challenges faced in testing concurrent code, and learn how to leverage generics and write fuzz tests.
The book begins by teaching you how to use TDD to tackle various problems, from simple mathematical functions to web apps. You’ll then learn how to structure and run your unit tests using Go’s standard testing library, and explore two popular testing frameworks, Testify and Ginkgo. You’ll also implement test suites using table-driven testing, a popular Go technique. As you advance, you’ll write and run behavior-driven development (BDD) tests using Ginkgo and Godog. Finally, you’ll explore the tricky aspects of implementing and testing TDD in production, such as refactoring your code and testing microservices architecture with contract testing implemented with Pact. All these techniques will be demonstrated using an example REST API, as well as smaller bespoke code examples.
By the end of this book, you’ll have learned how to design and implement a comprehensive testing strategy for your Go applications and microservices architecture.




Test-Driven Development in Go

A practical guide to writing idiomatic and efficient Go tests through real-world examples

Adelina Simion

BIRMINGHAM—MUMBAI

Test-Driven Development in Go

Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Gebin George

Publishing Product Manager: Pooja Yadav

Content Development Editor: Rosal Colaco

Technical Editor: Jubit Pincy

Copy Editor: Safis Editing

Project Coordinator: Deeksha Thakkar

Proofreader: Safis Editing

Indexer: Sejal Dsilva

Production Designer: Prashant Ghare

Developer Relations Marketing Executives: Sonia Chauhan and Rayyan Khan

First published: March 2023

Production reference: 2190623

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80324-787-8

www.packtpub.com

To my family and my partner, who are always in my corner.

– Adelina Simion

Foreword

I consult many companies on various topics, whether architecture, data pipelines, optimization, or one of many others. And every time, I find myself writing tests, sometimes a lot of tests. Needless to say, I think tests are important.

The main factor in writing tests is the cost of error. If you are writing an internal web server for lunch orders, you can fix bugs as they appear. However, if you write software that controls railroad semaphores, you will invest more time in upfront testing.

You can never test enough: SQLite has 608 lines of tests for each line of code (https://www.sqlite.org/testing.html), and it still has bugs. This means you need to be smart about the time and effort you invest in writing tests. You need to know what kind of tests give you the most “value for money.” It might be integration tests, end-to-end tests, or even fuzzing. Probably a combination of several of these methods. You can be data-driven by marking bugs with what kind of test would have caught them – after a while, you will know where to invest your time. Remember that there is no “one size fits all” here; every project and every team will find a different set of tests more effective for them. And things will change during a project life cycle. Make sure to update your testing priorities accordingly.

No matter how much you test, bugs will get to production. NASA has a very thorough development process (https://www.fastcompany.com/28121/they-write-right-stuff), and they still manage to ship bugs to Mars. However, they also triage and fix (https://futurism.com/the-byte/nasa-mars-lander-hit-itself-shovel) bugs on Mars, which I find mind-boggling. For you, this means metrics and logging; you need to know when and why your code fails.

I could ramble about tests for a very long time; just ask my students. But you came here to hear about the book you are about to read. My advice is, read it!

Adelina does a great job of showing you the ropes with several practical examples. She will walk you through the bigger picture of testing, explaining TDD, “red, green, refactor,” and more terms, and then get into the nuts and bolts of writing tests. You will learn about the difference between t.Error and t.Fatal (I use the latter), how to write table-driven tests, and many other best practices and tools. You will even learn how to use Docker Compose for running end-to-end tests, use external libraries that automagically create mocks, and much more. And, even though this book is about testing, you will learn a lot of Go along the way.

It shows that Adelina has a lot of experience writing tests and that she is passionate about the subject. There are many tips sprinkled throughout the book: for example, how to name tests so that test files will show next to the code they are testing. It might seem like a trivial matter, but in large code bases, this tip will save you a lot of time.

I encourage you to dive in, and as you surface from time to time, head over to your code and implement what you read.

Happy hacking!

Miki Tebeka,

Founder and CEO of 353Solutions

I’m thrilled Adelina has written this book. By reading it and following along with the examples, you’ll become confident with the practices of Test-Driven Development (TDD) and will hopefully adopt it as your primary method of engineering.

The first two chapters provide a solid foundation that will give you everything you need to get started. Chapter 3, Mocking and Assertion Frameworks, gets stuck into practices that enable you to write tests that touch multiple systems, which are explored further in the second part of the book, starting with Chapter 5, Performing Integration Testing. The final part of the book, Part 3, Advanced Testing Techniques, covers many of the more difficult areas of testing in Go: concurrency, fuzzing, and generics.

I almost exclusively follow the TDD approach and have seen it pay dividends again and again, allowing me to write very high-quality code and get it to production sooner. Strong testing skills can make the difference between a good engineer and a great one. This is why a few friends and I have created testing libraries such as Testify (https://github.com/stretchr/testify), my minimalist alternative, Is (https://github.com/matryer/is), and Moq (https://github.com/matryer/moq) for generating mocks from interfaces in Go.

When I first heard about TDD many years ago, I spat my coffee out, looked directly at the camera, and pulled a very confused face indeed. How can you test something that doesn’t yet exist?

Maintaining software often costs more than the initial build, especially in successful projects that have a long life, so it makes sense to invest time and energy in making maintenance easier. Tests are one of our most powerful tools for this mission. If we get tests right and they all pass, we know our code is safe to deploy. If we find a bug, we can prove it with a test (it’s important to see that test fail) before fixing the issue and adding that new test to our suite so that we never see that same bug again. Tests allow us to make bold changes with confidence. Adelina dives into this in more detail in Chapter 7, Refactoring in Go.

This all applies regardless of when you write the tests, so why do we write the tests first?

One big misconception with TDD is that you should write all of your tests up front, before writing any “real code.” In fact, the approach should be much more iterative. Write the smallest amount of test code that moves the story along, run the test and see that it fails, and then make that test pass. Rinse and repeat. Chapter 2, Unit Testing Essentials, covers this nicely and gives you some practice.

The whole Web 2.0 movement in the early twenty-first century taught us that the user experience matters, perhaps more than everything else. If our interfaces are clear and slick, with an elegant design, people are more likely to use our products. It’s important to realize that this applies to our code as well; we have users, and there are interfaces (not just code interfaces, but our functions, structs, method names, arguments, return types – everything somebody, including ourselves, will use).

When we write tests, we become our first user. Writing tests up front pulls the interface design considerations into sharp focus. How should people use this new thing we’re writing? What do they think about the problem? How would they (or we) expect to interact with these types and methods? Starting with the tests gets us thinking about this early, and can drive us towards a simpler design.

Once you’ve got the happy path working, you can explore the edges. Chapter 10, Testing Edge Cases, provides insights into property-based testing and fuzzing, which you can use to increase the robustness of your code.

I have twice mentioned that it’s important to see a test fail, and there’s a very good reason for this. Consider a test that can’t fail:

is.True(true) // true should be true

A test that can’t fail is useless and might as well be deleted. This is a trivial example, but seeing a test fail proves to you and others that it is saying something meaningful about your code. If you write a test and make it pass before seeing it fail, how can you be sure that the test is actually interacting with the code it’s testing? This isn’t just theoretical; I have plenty of examples throughout my career where I have made a mistake like this, and it’s worse than not having a test at all because it gave me false confidence. The first chapter of this book, Chapter 1, Getting to Grips with Test-Driven Development, will give you a solid understanding of the advantages of TDD.

It is, of course, possible to achieve good test coverage without TDD, but it’s harder. How do you know you haven’t missed something important? How are those reviewing your PR supposed to know that you’ve got the testing right? Without checking out the code and deliberately breaking things themselves, they don’t. With TDD, we do know because we saw that test fail. We know it’s covered.

One side-effect of TDD that I find particularly helpful is when it gives me a TODO list. I start with a user-centric perspective, and I am then delivered a series of errors and failures from the tooling. First, I get compiler errors (Sorry, that method doesn’t exist) and, later, I get assertion errors (Sorry, that string result wasn’t what we expected). This guides my work and helps me focus. At the end of the day, I will often leave my morning self a failing test so I can jump right back into things. I’m faster when following TDD than without it because of the mistakes I avoid, and the clarity I have earlier in the process.

If TDD feels unusual and slow to you initially, I urge you to stick with it. You’ll get better. You’ll get better at designing software, and as Chris James (author of Learn Go with Tests: https://quii.gitbook.io/learn-go-with-tests/) points out, TDD is the feedback loop for validating your design.

I hope you enjoy this book and your journey through TDD with Adelina. She will walk you through it all in detail, taking you deep into the rationale behind the approach and giving you lots of practical and actionable advice along the way.

I’d love to hear about your experiences. Please tweet me, @matryer, and share your perspective.

Mat Ryer

Engineering director at Grafana Labs

Contributors

About the author

Adelina Simion is a technology evangelist at Form3. She is a polyglot engineer and developer relations professional, with a decade of technical experience at multiple start-ups in London. She started her career as a Java backend engineer, converted later to Go, and then transitioned to a full-time developer relations role. She has published multiple online courses about Go on the LinkedIn Learning platform, helping thousands of developers upskill with Go. She has a passion for public speaking, having presented on cloud architectures at major European conferences. Adelina holds an M.Sc. in mathematical modeling and computing.

I want to thank all those who have supported me in this project, especially my technical reviewers, Stuart Murray and Dimas Prawira, without whom this project would not have been possible.

About the reviewer

Stuart Murray is an engineer with 10 years of experience across Go, Rust, Java, TypeScript, and Python. He has worked in a variety of industries including fintech, healthtech, insurtech, and marketing.

Dimas Yudha Prawira is a Go backend engineer, speaker, tech community leader, and mentor with a love for all things Go, open source, and software architecture. He spends his days developing Go microservices and new features, and improving observability, testing, and best practices. Dimas explores thought leadership avenues, including reviewing Go textbooks, speaking at Go community events, and leading the Go and Erlang software engineering communities. Dimas holds a master’s degree in Digital Enterprise Architecture from Pradita University and a bachelor’s degree in Information Technology from UIN Syarif Hidayatullah.

In his spare time, he likes to contribute to open source projects, read books, watch movies or play with his kids.

I’d like to express my deepest gratitude to my caring, loving, and supportive wife. Your encouragement when times got rough is much appreciated.

A heartfelt thanks for the comfort and relief of knowing that you were willing to manage our household activities while I focused on my work. To my kids, Khaira, Farensa, and Salah, thank you and I love you.

To the memory of my father, who always believed in me. You are gone but your belief in me has made this journey possible.

Lastly, to all my friends, thank you for all the support and encouragement you give me, and the patience and unwavering faith you have in me.

Table of Contents

Preface

Part 1: The Big Picture

1

Getting to Grips with Test-Driven Development

Exploring the world of TDD

Introduction to the Agile methodology

Types of automated tests

The iterative approach of TDD

TDD best practices

Understanding the benefits and use of TDD

Pros and cons of using TDD

Use case – the simple terminal calculator

Alternatives to TDD

Waterfall testing

Acceptance Test-Driven Development

Understanding test metrics

Important test metrics

Code coverage

Summary

Questions

Further reading

Answers

2

Unit Testing Essentials

Technical requirements

The unit under test

Modules and packages

The power of Go packages

Test file naming and placement

Additional test packages

Working with the testing package

The testing package

Test signatures

Running tests

Writing tests

Use case – implementing the calculator engine

Test setup and teardown

The TestMain approach

init functions

Deferred functions

Operating with subtests

Implementing subtests

Code coverage

The difference between a test and a benchmark

Summary

Questions

Further reading

3

Mocking and Assertion Frameworks

Technical requirements

Interfaces as dependencies

Dependency injection

Implementing dependency injection

Use case – continued implementation of the calculator

Exploring mocks

Mocking frameworks

Generating mocks

Verifying mocks

Working with assertion frameworks

Using testify

Asserting errors

Writing testable code

Summary

Questions

Further reading

4

Building Efficient Test Suites

Technical requirements

Testing multiple conditions

Identifying edge cases

External services

Error-handling refresher

Table-driven testing in action

Step 1 – declaring the function signature

Step 2 – declaring a structure for our test case

Step 3 – creating our test-case collection

Step 4 – executing each test

Step 5 – implementing the test assertions

Step 6 – running the failing test

Step 7 – implementing the base cases

Step 8 – expanding the test case collection

Step 9 – expanding functional code

Parallelization

Advantages and disadvantages of table-driven testing

Use case – the BookSwap application

Testing BookService

Summary

Questions

Further reading

Part 2: Integration and End-to-End Testing with TDD

5

Performing Integration Testing

Technical requirements

Supplementing unit tests with integration tests

Limitations of unit testing

Implementing integration tests

Running integration tests

Behavior-driven testing

Fundamentals of BDD

Implementing BDD tests with Ginkgo

Understanding database testing

Useful libraries

Spinning up and tearing down environments with Docker

Fundamentals of Docker

Using Docker

Summary

Questions

Further reading

6

End-to-End Testing the BookSwap Web Application

Technical requirements

Use case – extending the BookSwap application

User journeys

Using Docker

Persistent storage

Running the BookSwap application

Exploring Godog

Implementing tests with Godog

Creating test files

Implementing test steps

Running the test suite

Using database assertions

Seed data

Test cases and assertions

Summary

Questions

Further reading

7

Refactoring in Go

Technical requirements

Understanding changing dependencies

Code refactoring steps and techniques

Technical debt

Changing dependencies

Relying on your tests

Automated refactoring

Validating refactored code

Error verification

Custom error types

Splitting up the monolith

Key refactoring considerations

Summary

Questions

Further reading

8

Testing Microservice Architectures

Technical requirements

Functional and non-functional testing

Performance testing in Go

Implementing performance tests

Contract testing

Fundamentals of contract testing

Using Pact

Breaking up the BookSwap monolith

Production best practices

Monitoring and observability

Deployment patterns

The circuit breaker pattern

Summary

Questions

Further reading

Part 3: Advanced Testing Techniques

9

Challenges of Testing Concurrent Code

Technical requirements

Concurrency mechanisms in Go

Goroutines

Channels

Applied concurrency examples

Closing once

Thread-safe data structures

Waiting for completion

Issues with concurrency

Data races

Deadlocks

Buffered channels

The Go race detector

Untestable conditions

Use case – testing concurrency in the BookSwap application

Summary

Questions

Further reading

10

Testing Edge Cases

Technical requirements

Code robustness

Best practices

Usages of fuzzing

Fuzz testing in Go

Property-based testing

Use case – edge cases of the BookSwap application

Summary

Questions

Further reading

11

Working with Generics

Technical requirements

Writing generic code in Go

Generics in Go

Exploring type constraints

Table-driven testing revisited

Step 1 – defining generic test cases

Step 2 – creating test cases

Step 3 – implementing a generic test run function

Step 4 – putting everything together

Step 5 – running the test

Test utilities

Extending the BookSwap application with generics

Testing best practices

Development best practices

Testing best practices

Culture best practices

Summary

Questions

Further reading

Assessments

Chapter 1, Getting to Grips with Test-Driven Development

Chapter 2, Unit Testing Essentials

Chapter 3, Mocking and Assertion Frameworks

Chapter 4, Building Efficient Test Suites

Chapter 5, Performing Integration Testing

Chapter 6, End-to-End Testing the BookSwap Web Application

Chapter 7, Refactoring in Go

Chapter 8, Testing Microservice Architectures

Chapter 9, Challenges of Testing Concurrent Code

Chapter 10, Testing Edge Cases

Chapter 11, Working with Generics

Index

Other Books You May Enjoy

Preface

At the beginning of my career as a software engineer, I was focused on understanding technical concepts and delivering functionality as fast as I could. As I advanced in my career and matured my code-writing craft, I started to understand the importance of code quality and maintainability. This is especially important for Go developers since the language is designed around the values of efficiency, simplicity, and safety.

This book aims to provide you with all the tools you need to elevate the quality of your own Go code, through the industry-standard development methodology of Test-Driven Development (TDD). It provides a comprehensive introduction to the principles and practices of TDD, helping you get started without any prior knowledge. It also demonstrates how to apply this methodology to Go, which continues to gain popularity as a development language.

Throughout this book, we will explore how to leverage the benefits of TDD demonstrated with a variety of code examples, including building a demo REST API. This practical approach will teach you how to design testable code and write efficient Go tests, using the standard testing library as well as popular open source third-party libraries in the Go development ecosystem.

This book introduces the practices of TDD and teaches you how to use them in the development of Go applications using practical examples. It demonstrates how to leverage the benefits of TDD in applications at every level, ensuring that they deliver functional and non-functional requirements. It also touches on important principles of how to design and implement testable code, such as containerization, database integrations, and microservice architectures.

I hope you will find this book helpful in your journey to becoming a better engineer. In its pages, I have included all the knowledge that I wish I had when I first started out with Go development, which I hope will help make writing well-tested code easier for you.

Happy reading!

Who this book is for

This book is aimed at developers and software testing professionals who want to deliver high-quality and well-tested Go projects. If you are just getting started with TDD, you will learn how to adopt its practices in your development process. If you already have some experience, the code examples will help you write more efficient testing suites and teach you new testing practices.

What this book covers

Chapter 1, Getting to Grips with Test-Driven Development, introduces the principles and benefits of TDD, establishing the motivation for continuing to learn about it.

Chapter 2, Unit Testing Essentials, teaches us the essential knowledge for beginning our journey with test writing. It covers the test pyramid, how to write unit tests with Go’s standard testing library, and how to run the tests in our projects.

Chapter 3, Mocking and Assertion Frameworks, builds upon the knowledge from previous chapters and teaches us how to write tests for code that has dependencies. It covers the usage of interfaces, how to write better assertions, and the importance of generating and using mocks to write tests with isolated scope.

Chapter 4, Building Efficient Test Suites, explores how to group tests into test suites (which cover a variety of scenarios) using the popular Go testing technique of table-driven design.

Chapter 5, Performing Integration Testing, expands the scope of the tests we write to include the interactions between components using integration testing. It also introduces Behavior-Driven Development (BDD), which is an extension of TDD.

Chapter 6, End-to-End Testing the BookSwap Web Application, focuses on building the REST API application, which is the main demonstration tool of the book. It covers containerization using Docker, database interactions, and end-to-end testing.

Chapter 7, Refactoring in Go, discusses techniques for code refactoring, which is a significant part of the development process. It covers how the process of changing dependencies is facilitated by the use of interfaces and the common process of splitting up monolithic applications into microservice architectures.

Chapter 8, Testing Microservice Architectures, explores the testing challenges of microservice architectures, which change at a rapid pace. It introduces contract testing, which can be used to verify the integration between services.

Chapter 9, Challenges of Testing Concurrent Code, introduces Go’s concurrency mechanisms of goroutines and channels, including the challenges of verifying concurrent code. It also explores the usage and limitations of the Go race detector.

Chapter 10, Testing Edge Cases, expands the testing of edge cases and scenarios by making use of fuzz testing and property-based testing. It also explores code robustness, which allows us to write code that can handle a variety of inputs.

Chapter 11, Working with Generics, concludes our exploration of TDD in Go by exploring the usage and testing of generic code. It also discusses how to write table-driven tests for generic code, as well as how to create custom test utilities.

To get the most out of this book

You will need Go version 1.19 or later installed on your computer. All code examples have been tested using Go 1.19 on macOS. From Chapter 6 onward, running the BookSwap demo application will require you to have PostgreSQL installed or to run it using Docker.

Software covered in the book and its operating system requirements:

Go 1.19: Windows, macOS, or Linux

PostgreSQL 15: Windows, macOS, or Linux

Docker Desktop 4.17: Windows, macOS, or Linux

Postman 10 (optional): Windows, macOS, or Linux

The GitHub repository describes the configuration required for running the BookSwap application locally, which includes setting some local environment variables.

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

You will get the most out of reading this book if you are already familiar with the fundamentals and syntax of Go. If you are completely new to Go, you can complete a tour of Go here: https://go.dev/tour/list.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Test-Driven-Development-in-Go. If there’s an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/KFZWx.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “The Go toolchain provides the single go test command for running all the tests that we have defined.”

A block of code is set as follows:

func (e *Engine) Add(x, y float64) float64 {
    return x + y
}

When we wish to draw your attention to a particular part of a code block, the relevant lines or items are set in bold:

func TestAdd(t *testing.T) {
    e := calculator.Engine{}
    x, y := 2.5, 3.5
    want := 6.0

    got := e.Add(x, y)
    if got != want {
        t.Errorf("Add(%.2f,%.2f) incorrect, got: %.2f, want: %.2f", x, y, got, want)
    }
}

Any command-line input or output is written as follows:

$ go test -run TestDivide ./chapter04/table -v

Bold: Indicates a new term, an important word, or words that you see onscreen. For instance, words in menus or dialog boxes appear in bold. Here is an example: “Select System info from the Administration panel.”

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you’ve read Test-Driven Development in Go, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere?

Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don’t stop there; you can get exclusive access to discounts, newsletters, and great free content in your inbox daily.

Follow these simple steps to get the benefits:

Scan the QR code or visit the link below

https://packt.link/free-ebook/9781803247878

Submit your proof of purchase

That’s it! We’ll send your free PDF and other benefits to your email directly.

Part 1: The Big Picture

This part begins our journey into the world of Test-Driven Development (TDD) and provides us with all the essentials we need to start using it for unit testing our code. We begin with an introduction to the principles and practices of TDD, including how it fits into Agile development. Then, we focus our attention on how to apply these practices to write Go unit tests, exploring the fundamentals of test writing and running in Go. Based on these essentials, we explore how to write isolated tests with mocks and simplify our assertions. We learn how to use the third-party open source assertion libraries, Ginkgo and Testify, which supplement Go’s standard testing library. Finally, we learn how to implement and leverage the popular technique of table-driven testing to easily write tests that cover a variety of scenarios and extend the scope of our tests. In this section, we also begin the implementation of our demo REST API, the BookSwap web application.

This part has the following chapters:

Chapter 1, Getting to Grips with Test-Driven Development

Chapter 2, Unit Testing Essentials

Chapter 3, Mocking and Assertion Frameworks

Chapter 4, Building Efficient Test Suites

1

Getting to Grips with Test-Driven Development

Programs and software have never been more complex than they are today. From my experience, the typical tech startup setup involves deployment to the cloud, distributed databases, and a variety of software integrations from the very beginning of the project. As we use software and consume data at unprecedented rates, the expectation of high availability and scalability has become standard for all the services we interact with.

So, why should we care about testing when we are so busy delivering complex functionality in fast-paced, high-growth environments? Simply put, to verify and prove that the code you write behaves and performs to the expectations and requirements of your project. This is important to you as the software professional, as well as to your team and product manager.

In this chapter, we’ll look at the Agile technique of Test-Driven Development (TDD) and how we can use it to verify production code. TDD puts test writing before implementation, ensuring that tests cover the requirements and evolve with them. Its techniques allow us to deliver quality, well-tested, and maintainable code. The task of software testing is a necessity for all programmers, and TDD seamlessly incorporates test writing into the code delivery process.

This chapter begins our exploration into the world of testing. It will give you the required understanding of TDD and its main techniques. Defining and setting these fundamentals firmly in our minds will set the stage for the later implementation of automated testing in Go.

In this chapter, we’ll cover the following main topics:

The world and fundamentals of TDD

The benefits and use of TDD

Alternatives to TDD

Test metrics

Exploring the world of TDD

In a nutshell, TDD is a technique that allows us to write automated tests with short feedback loops. It is an iterative process that incorporates testing into the software development process, allowing developers to use the same techniques for writing their tests as they use for writing production code.

TDD was created as an Agile working practice, as it allows teams to deliver code in an iterative process, consisting of writing functional code, verifying new code with tests, and iteratively refactoring new code, if required.

Introduction to the Agile methodology

The precursor to the Agile movement was the waterfall methodology, which was the most popular project management technique. This process involves delivering software projects in stages, with work starting on each stage once the stage before it is completed, just like water flows downstream. Figure 1.1 shows the five stages of the waterfall methodology:

Figure 1.1 – The five stages of the waterfall methodology

Intuition from manufacturing and construction projects might suggest that it is natural to divide the software delivery process into sequential phases, gathering and formulating all requirements at the start of the project. However, this way of working poses three difficulties when used to deliver large software projects:

Changing the course of the project or requirements is difficult. A working solution is only available at the end of the process, requiring verification of a large deliverable. Testing an entire project is much more difficult than testing small deliverables.

Customers need to decide all of their requirements in detail at the beginning of the project. The waterfall allows for minimal customer involvement, as they are only consulted in the requirements and verification phases.

The process requires detailed documentation, which specifies both requirements and the software design approach. Crucially, the project documentation includes timelines and estimates that the clients need to approve prior to project initiation.

The waterfall model is all about planning work

Project management with the waterfall methodology allows you to plan your project in well-defined, linear phases. This approach is intuitive and suitable for projects with clearly defined goals and boundaries. In practice, however, the waterfall model lacks the flexibility and iterative approach required for delivering complex software projects.

A better way of working named Agile emerged, which could address the challenges of the waterfall methodology. TDD relies on the principles of the Agile methodology. The literature on Agile working practices is extensive, so we won’t be looking at Agile in detail, but a brief understanding of the origins of TDD will allow us to understand its approach and get into its mindset.

Agile software development is an umbrella term for multiple code delivery and project planning practices such as SCRUM, Kanban, Extreme Programming (XP), and TDD.

As implied by its name, it is all about the ability to respond and adapt to change. One of the main disadvantages of the waterfall way of working was its inflexibility, and Agile was designed to address this issue.

The Agile manifesto was written and signed by 17 software engineering leaders and pioneers in 2001. It outlines the 4 core values and 12 central principles of Agile. The manifesto is available freely at agilemanifesto.org.

The four core Agile values highlight the spirit of the movement:

Individuals and interactions over processes and tools: This means that the team involved in the delivery of the project is more important than their technical tools and processes.

Working software over comprehensive documentation: This means that delivering working functionality to customers is the number one priority. While documentation is important, teams should always focus on consistently delivering value.

Customer collaboration over contract negotiation: This means that customers should be involved in a feedback loop over the lifetime of the project, ensuring that the project and work continue to deliver value and satisfy their needs and requirements.

Responding to change over following a plan: This means that teams should be responsive to change over following a predefined plan or roadmap. The team should be able to pivot and change direction whenever required.

Agile is all about people

The Agile methodology is not a prescriptive list of practices. It is all about teams working together to overcome uncertainty and change during the life cycle of a project. Agile teams are interdisciplinary, consisting of engineers, software testing professionals, product managers, and more. This ensures that the team members with a variety of skills collaborate to deliver the software project as a whole.

Unlike the waterfall model, the stages of the Agile software delivery methodology repeat, focusing on delivering software in small increments or iterations, as opposed to the big deliverables of waterfall. In Agile nomenclature, these iterations are called sprints.

Figure 1.2 depicts the stages of Agile project delivery:

Figure 1.2 – The stages of Agile software delivery

Let’s look at the cyclical stages of Agile software delivery:

We begin with the Plan phase. The product owner discusses project requirements that will be delivered in the current sprint with key stakeholders. The outcome of this phase is the prioritized list of client requirements that will be implemented in this sprint.

Once the requirements and scope of the project are settled, the Design phase begins. This phase involves both technical architecture design, as well as UI/UX design. This phase builds on the requirements from the Plan phase.

Next, the Implement phase begins. The designs are used as the guide from which we implement the scoped functionality. Since the sprint is short, if any discrepancies are found during implementation, then the team can easily move to earlier phases.

As soon as a deliverable is complete, the Test phase begins. The Test phase runs almost concurrently with the Implement phase, as test specifications can be written as soon as the Design phase is completed. A deliverable cannot be considered finished until its tests have passed. Work can move back and forth between the Implement and Test phases, as the engineers fix any identified defects.

Finally, once all testing and implementation are completed successfully, the Release phase begins. This phase completes any client-facing documentation or release notes. At the end of this phase, the sprint is considered closed. A new sprint can begin, following the same cycle.

The customer gets a new deliverable at the end of each sprint, enabling them to see whether the product still suits their requirements and inform changes for future sprints. The deliverable of each sprint is tested before it is released, ensuring that later sprints deliver new functionality without breaking existing functionality. The scope and effort of the testing performed are limited to exercising the functionality developed during the sprint.

One of the signatories of the Agile manifesto was software engineer Kent Beck. He is credited with having rediscovered and formalized the methodology of TDD.

Since then, Agile has been highly successful for many teams, becoming an industry standard because it enables them to verify functionality as it is being delivered. It combines testing with software delivery and refactoring, removing the separation between the code writing and testing process, and shortening the feedback loop between the engineering team and the customer requirements. This shorter loop is the principle that gives flexibility to Agile.

We will focus on learning how to leverage its process and techniques in our own Go projects throughout the chapters of this book.

Types of automated tests

Automated test suites use tools and frameworks to verify the behavior of software systems. They provide a repeatable way of performing the verification of system requirements. They are the norm for Agile teams, who must test their systems after each sprint and release to ensure that new functionality is shipped without disrupting old/existing functionality.

All automated tests define their inputs and expected outputs according to the requirements of the system under test. We will divide them into several types of tests according to three criteria:

The amount of knowledge they have of the system

The type of requirement they verify

The scope of the functionality they cover

Each test we will study will be described according to these three traits.

System knowledge

As you can see in Figure 1.3, automated tests can be divided into three categories according to how much internal knowledge they have of the system they test:

Figure 1.3 – Types of tests according to system knowledge

Let’s explore the three categories of tests further:

Black box tests are run from the perspective of the user. The internals of the system are treated as unknown by the test writer, as they would be to a user. Tests and expected outputs are formulated according to the requirement they verify. Black box tests tend not to be brittle if the internals of the system change.

White box tests are run from the perspective of the developer. The internals of the system are fully known to the test writer, most likely a developer. These tests can be more detailed and potentially uncover hidden errors that black box testing cannot discover. White box tests are often brittle if the internals of the system change.

Gray box tests are a mixture of black box and white box tests. The internals of the system are partially known to the test writer, as they would be to a specialist or privileged user. These tests can verify more advanced use cases and requirements than black box tests (for example, security or certain non-functional requirements) and are usually more time-consuming to write and run as well.

Requirement types

In general, we should provide tests that verify both the functionality and usability of a system.

For example, we could have all the correct functionality on a page, but if it takes 5+ seconds to load, users will abandon it. In this case, the system is functional, but it does not satisfy your customers’ needs.

We can further divide our automated tests into two categories, based on the type of requirement that they verify:

Functional tests: These tests cover the functionality of the system under test added during the sprint, with functional tests from prior sprints ensuring that there are no regressions in functionality in later sprints. These kinds of tests are usually black box tests, as they should be written and run according to the functionality that a typical user has access to.

Non-functional tests: These tests cover all the aspects of the system that are not covered by functional requirements but affect the user experience and functioning of the system. They cover aspects such as performance, usability, and security. These kinds of tests are usually white box tests, as they usually need to be formulated according to implementation details.

Correctness and usability testing

Tests that verify the correctness of the system are known as functional tests, while tests that verify the usability and performance of the system are known as non-functional tests. Common non-functional tests are performance tests, load tests, and security tests.

The testing pyramid

An important concept of testing in Agile is the testing pyramid. It lays out the types of automated tests that should be included in the automated testing suites of software systems. It provides guidance on the sequence and priority of each type of test to perform in order to ensure that new functionality is shipped with a proportionate amount of testing effort and without disrupting old/existing functionality.

Figure 1.4 presents the testing pyramid with its three types of tests: unit tests, integration tests, and end-to-end tests:

Figure 1.4 – The testing pyramid and its components

Each type of test can then be further described according to the three established traits of system knowledge, requirement type, and testing scope.

Unit tests

At the bottom of the testing pyramid, we have unit tests. They are presented at the bottom because they are the most numerous. They have a small testing scope, covering the functionality of individual components under a variety of conditions. Good unit tests exercise their components in isolation from other components, so that we can fully control the test environment and setup.

Since the number of unit tests increases as new features are added to the code, they need to be robust and fast to execute. Typically, test suites are run with each code change, so they need to provide feedback to engineers quickly.

Unit tests have been traditionally thought of as white-box tests since they are typically written by developers who know all the implementation details of the component. However, Go unit tests usually only test the exported/public functionality of the package. This brings them closer to gray-box tests.
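For instance, here is a minimal hedged sketch of a unit test placed in an external _test package, so it can only exercise the calculator package’s exported API, much as any other consumer would (the import path is a hypothetical placeholder):

package calculator_test

import (
    "testing"

    "github.com/example/bookswap/calculator" // hypothetical import path
)

// This test lives in the external calculator_test package, so it can only
// call exported identifiers of the calculator package.
func TestEngineAddExported(t *testing.T) {
    e := calculator.Engine{}

    if got := e.Add(2, 3); got != 5 {
        t.Errorf("Add(2, 3) = %v, want 5", got)
    }
}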

We will explore unit tests further in Chapter 2, Unit Testing Essentials.

Integration tests

In the middle of the testing pyramid, we have integration tests. They are an essential part of the pyramid, but they should not be as numerous and should not be run as often as unit tests, which are at the bottom of the pyramid.

Unit tests verify that a single piece of functionality is working correctly, while integration tests extend the scope and test the communication between multiple components. These components can be external or internal to the system – a database, an external API, or another microservice in the system. Often, integration tests run in dedicated environments, which allows us to separate production and test data as well as reduce costs.

Integration tests could be black-box tests or gray-box tests. If the tests cover external APIs and customer-facing functionality, they can be categorized as black-box tests, while more specialized security or performance tests would be considered gray-box tests.

We will explore integration tests further in Chapter 5, Performing Integration Testing.

End-to-end tests

At the top of the testing pyramid, we have end-to-end tests. They are the least numerous of all the tests we have seen so far. They test the entire functionality of the application (as added during each sprint), ensuring that the project deliverables are working according to requirements and can potentially be shipped at the conclusion of a given sprint.

These tests can be the most time-consuming to write, maintain, and run since they can involve a large variety of scenarios. Just like integration tests, they are also typically run in dedicated environments that mimic production environments.

There are a lot of similarities between integration tests and end-to-end tests, especially in microservice architectures where one service’s end-to-end functionality involves integration with another service’s end-to-end functionality.

We will explore end-to-end tests further in Chapter 5, Performing Integration Testing, and Chapter 8, Testing Microservice Architectures.

Now that we understand the different types of automated tests, it’s time to look at how we can leverage the Agile practice of TDD to implement them alongside our code. TDD will help us write well-tested code that delivers all the components of the testing pyramid.

The iterative approach of TDD

As we’ve mentioned before, TDD is an Agile practice that will be the focus of our exploration. The principle of TDD is simple: write the unit tests for a piece of functionality before implementing it.

TDD brings the testing process together with the implementation process, ensuring that every piece of functionality is tested as soon as it is written, making the software development process iterative, and giving developers quick feedback.

Figure 1.5 demonstrates the steps of the TDD process, known as the red, green, and refactor process:

Figure 1.5 – The steps of TDD

Let’s have a look at the cyclical phases of the TDD working process:

We start at the red phase. We begin by considering what we want to test and translating this requirement into a test. Some requirements may be made up of several smaller requirements: at this point, we test only the first small requirement. This test will fail until the new functionality is implemented, giving a name to the red phase. The failing test is key because we want to ensure that the test will fail reliably regardless of what code we write.

Next, we move to the green phase. We swap from test code to implementation, writing just enough code as required to make the failing test pass. The code does not need to be perfect or optimal, but it should be correct enough for the test to pass. It should focus on the requirement tested by the previously written failing test.

Finally, we move to the refactor phase. This phase is all about cleaning up both the implementation and the test code, removing duplication, and optimizing our solution.

We repeat this process until all the requirements are tested and implemented and all tests pass. The developer frequently swaps between testing and implementing code, extending functionality and tests accordingly.

That’s all there is to doing TDD!

TDD is all about developers

TDD is a developer-centric process where unit tests are written before implementation. Developers first write a failing test. Then, they write the simplest implementation to make the test pass. Finally, once the functionality is implemented and working as expected, they can refactor the code and test as needed. The process is repeated as many times as necessary. No piece of code or functionality is written without corresponding tests.
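To make the cycle concrete, here is a minimal hedged sketch of one red-green iteration, assuming a calculator Engine similar to the one used in this book’s examples (the Multiply requirement is chosen purely for illustration; in a real project, the test and the implementation would live in separate files):

package calculator

import "testing"

// Engine is the unit under test.
type Engine struct{}

// Red: this test is written first. It fails (it does not even compile)
// until Multiply is implemented.
func TestMultiply(t *testing.T) {
    e := Engine{}

    if got := e.Multiply(2, 4); got != 8 {
        t.Errorf("Multiply(2, 4) = %v, want 8", got)
    }
}

// Green: the simplest implementation that makes the failing test pass.
// The refactor step then cleans up both the code and the test as needed.
func (e Engine) Multiply(x, y float64) float64 {
    return x * y
}

Running go test after each step provides the short feedback loop that the red, green, and refactor cycle relies on.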

TDD best practices

The red, green, and refactor approach to TDD is simple, yet very powerful. While the process itself is straightforward, there are some recommendations and best practices for writing components and tests that can more easily be delivered with TDD.

Structure your tests

We can formulate a shared, repeatable, test structure to make tests more readable and maintainable. Figure 1.6 depicts the Arrange-Act-Assert (AAA) pattern that is often used with TDD:

Figure 1.6 – The steps of the Arrange-Act-Assert pattern

The AAA pattern describes how to structure tests in a uniform manner:

We begin with the Arrange step, which is the setup part of the test. This is when we set up the Unit Under Test (UUT) and all of the dependencies that it requires during setup. We also set up the inputs and the preconditions used by the test scenario in this section.

Next, the Act step is where we perform the actions specified by the test scenario. Depending on the type of test that we are implementing, this could simply be invoking a function, an external API, or even a database function. This step uses the preconditions and inputs defined in the Arrange step.

Finally, the Assert step is where we confirm that the UUT behaves according to requirements. This step compares the output from the UUT with the expected output, as defined by the requirements.

If the Assert step shows that the actual output from the UUT is not as expected, then the test is considered failed and the test is finished.

If the Assert step shows that the actual output from the UUT is as expected, then we have two options: one option is that if there are no more test steps, the test is considered passed and the test is finished. The other option is that if there are more test steps, then we go back to the Act step and continue.

The Act and Assert steps can be repeated as many times as necessary for your test scenario. However, you should avoid writing lengthy, complicated tests. This is described further in the best practices throughout this section.

Your team can leverage test helpers and frameworks to minimize setup and assertion code duplication. Using the AAA pattern will help to set the standard for how tests should be written and read, minimizing the cognitive load of new and existing team members and improving the maintainability of the code base.
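As a hedged sketch of the AAA pattern applied to a Go unit test (reusing the calculator Engine from this book’s conventions example; the test name and import path are illustrative):

package calculator_test

import (
    "testing"

    "github.com/example/bookswap/calculator" // hypothetical import path
)

func TestAddTwoPositiveNumbers(t *testing.T) {
    // Arrange: set up the unit under test, its inputs, and preconditions.
    e := calculator.Engine{}
    x, y := 2.5, 3.5
    want := 6.0

    // Act: perform the action specified by the test scenario.
    got := e.Add(x, y)

    // Assert: compare the actual output with the expected output.
    if got != want {
        t.Errorf("Add(%v, %v) = %v, want %v", x, y, got, want)
    }
}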

Control scope

As we have seen, the scope of your test depends on the type of test you are writing. Regardless of the type of test, you should strive to restrict the functionality of your components and the assertions of your tests as much as possible. This is possible with TDD, which allows us to test and implement code at the same time.

Keeping things as simple as possible immediately brings some advantages:

Easier debugging in the case of failures

Easier to maintain and adjust tests when the Arrange and Assert steps are simple

Faster execution time of tests, especially with the ability to run tests in parallel

Test outputs, not implementation

As we have seen from the previous definitions of tests, they are all about defining inputs and expected outputs. As developers who know implementation details, it can be tempting to add assertions that verify the inner workings of the UUT.

However, this is an anti-pattern that results in a tight coupling between the test and the implementation. Once tests are aware of implementation details, they need to be changed together with code changes. Therefore, when structuring tests, it is important to focus on testing externally visible outputs, not implementation details.
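To make the contrast concrete, the hedged sketch below asserts only on the value returned by Add; the commented-out check shows the kind of implementation-coupled assertion to avoid (lastResult is a hypothetical internal field, not part of the book’s code):

package calculator_test

import (
    "testing"

    "github.com/example/bookswap/calculator" // hypothetical import path
)

func TestAddReturnsTheSum(t *testing.T) {
    e := calculator.Engine{}

    got := e.Add(1.5, 2.5)

    // Assert on the externally visible output only.
    if got != 4.0 {
        t.Errorf("Add(1.5, 2.5) = %v, want 4.0", got)
    }

    // Anti-pattern: reaching into internal state couples the test to the
    // implementation and forces test changes whenever internals change.
    // if e.lastResult != 4.0 { t.Error("unexpected internal state") }
}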

Keep tests independent

Tests are typically organized in test suites, which cover a variety of scenarios and requirements. While these test suites allow developers to leverage shared functionality, tests should run independently of each other.

Tests should start from a pre-defined and repeatable starting state that does not change with the number of runs and order of execution. Setup and clean-up code ensures that the starting point and end state of each test is as expected.

It is, therefore, best that tests create their own UUT against which to run modifications and verifications, as opposed to sharing one with other tests. Overall, this will ensure that your test suites are robust and can be run in parallel.
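The following hedged sketch illustrates the idea: each test builds its own UUT through a hypothetical newTestEngine helper, registers clean-up with t.Cleanup, and can therefore run in any order or in parallel:

package calculator_test

import (
    "testing"

    "github.com/example/bookswap/calculator" // hypothetical import path
)

// newTestEngine is a hypothetical helper that gives each test its own Engine
// in a known starting state and registers clean-up to run when the test ends.
func newTestEngine(t *testing.T) calculator.Engine {
    t.Helper()
    e := calculator.Engine{}
    t.Cleanup(func() {
        // Release any resources created for this test, restoring the
        // environment to its pre-defined starting state.
    })
    return e
}

func TestAddUsesItsOwnEngine(t *testing.T) {
    t.Parallel() // safe because no state is shared with other tests

    e := newTestEngine(t)

    if got := e.Add(1, 2); got != 3 {
        t.Errorf("Add(1, 2) = %v, want 3", got)
    }
}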

Adopting TDD and its best practices allows Agile teams to deliver well-tested code that is easy to maintain and modify. This is one of many benefits of TDD, which we will continue to explore in the next section.

Understanding the benefits and use of TDD

With the fundamentals and best practices of TDD in mind, let us have a more in-depth look at the benefits of adopting it as a practice in your teams. As Agile working practices are the industry standard, we will discuss TDD usage in Agile teams going forward. Incorporating TDD in the development process immediately allows developers to write and maintain their tests more easily, enabling them to detect and fix bugs more easily too.

Pros and cons of using TDD

Figure 1.7 depicts some of the pros and cons of using TDD:

Figure 1.7 – Pros and cons of using TDD

We can expand on these pros and cons highlights:

TDD allows the development and testing process to happen at the same time, ensuring that all code is tested from the beginning. While TDD does require writing more code upfront, the written code is immediately covered by tests, and bugs are fixed while relevant code is fresh in developers’ minds. Testing should not be an afterthought and should not be rushed or cut if the implementation is delayed.

TDD allows developers to analyze project requirements in detail at the beginning of the sprint. While it does require product managers to establish the details of what needs to be built as part of sprint planning, it also allows developers to give early feedback on what can and cannot be implemented during each sprint.

Well-tested code that has been built with TDD can be confidently shipped and changed. Once a code base has an established test suite, developers can confidently change code, knowing that existing functionality will not be broken because test failures would flag any issues before changes are shipped.

Finally, the most important pro is that it gives developers ownership of their code quality by making them responsible for both implementation and testing. Writing tests at the same time as code gives developers a short feedback loop on where their code might be faulty, as opposed to shipping a full feature and hearing about where they missed the mark much later.

In my opinion, the most important advantage of using TDD is the increased ownership by developers. The immediate feedback loop allows them to do their best work, while also giving them peace of mind that they have not broken any existing code.

Now that we understand what TDD and its benefits are, let us explore the basic application of TDD to a simple calculator example.

Use case – the simple terminal calculator

This use case will give you a good understanding of the general process we will undertake when testing more advanced examples.

The use case we will look at is the simple terminal calculator. The calculator will run in the terminal and use the standard input to read its parameters. The calculator will only handle two operands and the simple mathematical operations you see in Figure 1.8:

Figure 1.8 – The simple calculator runs in the terminal

This functionality is simple, but the calculator should also be able to handle edge cases and other input errors.

Requirements

Agile teams typically write their requirements from the user’s perspective. The requirements of the project are written first in order to capture customer needs and to guide the test cases and implementation of the entire simple calculator project. In Agile teams, requirements go through multiple iterations, with engineering leadership weighing in early to ensure that the required functionality can be delivered.

Users should be able to do the following:

Input positive, negative, and zero values using the terminal input. These values should be correctly transformed into numbers.