Effective Concurrency in Go

Burak Serdar
Description

The Go language has been gaining momentum due to its treatment of concurrency as a core language feature, making concurrent programming more accessible than ever. However, concurrency is still an inherently difficult skill to master, since it requires the development of the right mindset to decompose problems into concurrent components correctly. This book will guide you in deepening your understanding of concurrency and show you how to make the most of its advantages.

You’ll start by learning what guarantees are offered by the language when running concurrent programs. Through multiple examples, you will see how to use this information to develop concurrent algorithms that run without data races and complete successfully. You’ll also find out all you need to know about multiple common concurrency patterns, such as worker pools, asynchronous pipelines, fan-in/fan-out, scheduling periodic or future tasks, and error and panic handling in goroutines.

The central theme of this book is to give you, the developer, an understanding of why concurrent programs behave the way they do, and how they can be used to build correct programs that work the same way on all platforms.

By the time you finish the final chapter, you’ll be able to develop, analyze, and troubleshoot concurrent algorithms written in Go.





Effective Concurrency in Go

Develop, analyze, and troubleshoot high-performance concurrent applications with ease

Burak Serdar

BIRMINGHAM—MUMBAI

Effective Concurrency in Go

Copyright © 2023 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

Group Product Manager: Gebin George

Publishing Product Manager: Pooja Yadav

Senior Editor: Kinnari Chohan

Technical Editor: Jubit Pincy

Copy Editor: Safis Editing

Project Coordinator: Manisha Singh

Proofreader: Safis Editing

Indexer: Hemangini Bari

Production Designer: Shankar Kalbhor

Developer Relations Marketing Executives: Sonia Chauhan and Rayyan Khan

First published: April 2023

Production reference: 1240323

Published by Packt Publishing Ltd.

Livery Place

35 Livery Street

Birmingham

B3 2PB, UK.

ISBN 978-1-80461-907-0

www.packtpub.com

To Berrin, Selen, and Ersel

– Burak Serdar

Contributors

About the author

Burak Serdar is a software engineer with over 30 years of experience in designing and developing distributed enterprise applications that scale. He’s worked for several start-ups and large corporations, including Thomson and Red Hat, as an engineer and technical lead. He’s one of the co-founders of Cloud Privacy Labs, where he works on semantic interoperability and privacy technologies for centralized and decentralized systems. Burak holds BSc and MSc degrees in electrical and electronic engineering, and an MSc degree in computer science.

About the reviewer

Tan Quach is an experienced software engineer with a career spanning over 25 years in exotic locations such as London, Canada, Bermuda, and Spain. He has worked with a wide variety of languages and technologies for companies such as Deutsche Bank, Merrill Lynch, and Progress Software and loves diving deep into experimenting with new ones.

Tan’s first foray into Go began in 2017 with a proof-of-concept application built over a weekend and productionized and released three weeks later. Since then, Go has been his language of choice when starting any project.

When he can be torn away from the keyboard, Tan enjoys cooking meat over hot coals and open flames and making his own charcuterie boards.

Table of Contents

Preface

1

Concurrency – A High-Level Overview

Technical Requirements

Concurrency and parallelism

Shared memory versus message passing

Atomicity, race, deadlocks, and starvation

Summary

Question

Further reading

2

Go Concurrency Primitives

Technical Requirements

Goroutines

Channels

Mutex

Wait groups

Condition variables

Summary

Questions

3

The Go Memory Model

Why a memory model is necessary

The happened-before relationship between memory operations

Synchronization characteristics of Go concurrency primitives

Package initialization

Goroutines

Channels

Mutexes

Atomic memory operations

Map, Once, and WaitGroup

Summary

Further reading

4

Some Well-Known Concurrency Problems

Technical Requirements

The producer-consumer problem

The dining philosophers problem

Rate limiting

Summary

5

Worker Pools and Pipelines

Technical Requirements

Worker pools

Pipelines, fan-out, and fan-in

Asynchronous pipeline

Fan-out/fan-in

Fan-in with ordering

Summary

Questions

6

Error Handling

Error handling

Pipelines

Servers

Panics

Summary

7

Timers and Tickers

Technical Requirements

Timers – running something later

Tickers – running something periodically

Heartbeats

Summary

8

Handling Requests Concurrently

Technical Requirements

The context, cancelations, and timeouts

Backend services

Distributing work and collecting results

Semaphores – limiting concurrency

Streaming data

Dealing with multiple streams

Summary

9

Atomic Memory Operations

Technical Requirements

Memory guarantees

Compare and swap

Practical uses of atomics

Counters

Heartbeat and progress meter

Cancellations

Detecting change

Summary

10

Troubleshooting Concurrency Issues

Technical Requirements

Reading stack traces

Detecting failures and healing

Debugging anomalies

Summary

Further reading

Index

Other Books You May Enjoy

Preface

Languages shape the way we think. How we approach problems and formulate solutions for them depends on the concepts we can express using language. This is also true for programming languages. Given a problem, the programs written to solve it may differ from one language to another. This book is about writing programs by expressing concurrent algorithms in the Go language, and about understanding how these programs behave.

Go differs from many popular languages by its emphasis on comprehensibility. This is not the same as readability. Many programs written in easy-to-read languages are not understandable. In the past, I also fell into the trap of writing well-organized programs using frameworks that make programming easy. The problem with that approach is that once writing is over, the program starts a life of its own, and others take over its maintenance. The tribal knowledge that evolved during the development phase is lost, and the team is left with a program that they cannot understand without the help of the last person remaining from the original development team. Developing a program is not that much different from writing a novel. A novel is written so that it can be read by others. So are programs. If nobody can understand your program, it will not age well.

This book will attempt to explain how to think in the Go language using concurrency constructs so that, given a piece of code, you can understand how the program will behave, and others can understand what you produce. It starts with a high-level overview of concurrency and Go’s treatment of it. It will then work through several data processing problems using concurrent algorithms. After all, programs are written to deal with data. I hope that seeing how concurrency patterns develop organically while solving real-life problems will help you acquire the skills to use the language efficiently and effectively. Later chapters will work through more examples involving timing, periodic tasks, server programming, streaming, and practical uses of atomics. The last chapter talks about troubleshooting, debugging, and additional instrumentation useful for scalability.

It is impossible to cover all topics related to concurrency in a single book. There are many areas left unexplored. However, I am confident that once you work through the examples, you will have more confidence in solving problems using concurrency. Everybody says concurrency is hard. Using the language correctly makes it easier to produce correct programs. The rule of thumb you should always remember is that correctness comes before performance. So, make it work right first, then you can make it work faster.

Who this book is for

If you are a developer with a basic knowledge of the Go language who is looking to gain expertise in highly concurrent backend application development, this is the book for you. This book will also appeal to Go developers of all experience levels who want to make their backend systems more robust and scalable.

What this book covers

Chapter 1, Concurrency: A High-Level Overview, talks about what concurrency is and what it isn’t – in particular, how it relates to parallelism. Shared memory and message-passing paradigms, and common concurrency concepts such as race, atomicity, liveness, and deadlock are also introduced in this chapter.

Chapter 2, Go Concurrency Primitives, introduces Go language primitives for concurrent programming – namely, goroutines, channels, mutexes, wait groups, and condition variables.

Chapter 3, The Go Memory Model, talks about the visibility guarantees of memory operations. It introduces the happened-before relationship that allows you to reason about concurrent behavior, then gives the memory visibility guarantees of concurrency primitives and some of the standard library facilities.

Chapter 4, Some Well-Known Concurrency Problems, studies the well-known producer/consumer problem, the dining philosophers problem, and rate-limiting.

Chapter 5, Worker Pools and Pipelines, first studies worker pools, which is a common way to process large amounts of data with limited concurrency. Then, it develops several concurrent data pipeline implementations for efficient data processing applications.

Chapter 6, Error Handling, explores how to deal with errors and panics in a concurrent program, and how to pass errors around.

Chapter 7, Timers and Tickers, shows how to do things periodically and how to do things some time later.

Chapter 8, Handling Requests Concurrently, mostly talks about server programming, but many of the concepts discussed in this chapter are broadly about handling requests, so they can be applied in a wide range of scenarios. It describes how to use context effectively, how to distribute work and collect results, how to limit concurrency, and how to stream data.

Chapter 9, Atomic Memory Operations, covers atomic memory operations, their memory guarantees, and their practical uses.

Chapter 10, Troubleshooting Concurrency Issues, talks about the underrated but essential skill of reading stack traces, and how to detect failures and heal them at runtime.

To get the most out of this book

You need to have a basic understanding of the Go language and a running Go development environment for your operating system. This book does not rely on any other third-party tools or libraries. Use the code editor you are most comfortable with. All examples and code samples can be built and run using the Go build system.

If you are using the digital version of this book, we advise you to type the code yourself or access the code from the book’s GitHub repository (a link is available in the next section). Doing so will help you avoid any potential errors related to the copying and pasting of code.

Download the example code files

You can download the example code files for this book from GitHub at https://github.com/PacktPublishing/Effective-Concurrency-in-Go. If there’s an update to the code, it will be updated in the GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots and diagrams used in this book. You can download it here: https://packt.link/3rxJ9.

Conventions used

There are a number of text conventions used throughout this book.

Code in text: Indicates code words in text, database table names, folder names, filenames, file extensions, pathnames, dummy URLs, user input, and Twitter handles. Here is an example: “The net/http package implements a Server type that handles each request in a separate goroutine.”

A block of code is set as follows:

chn := make(chan bool) // Create an unbuffered channel
go func() {
    chn <- true // Send to channel
}()
go func() {
    var y bool
    y = <-chn // Receive from channel
    fmt.Println(y)
}()

Any command-line input or output is written as follows:

{"row":65,"height":172.72,"weight":97.61} {"row":64,"height":195.58,"weight":81.266} {"row":66,"height":142.24,"weight":101.242} {"row":68,"height":152.4,"weight":80.358} {"row":67,"height":162.56,"weight":104.87400000000001}

Tips or important notes

Appear like this.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, email us at [email protected] and mention the book title in the subject of your message.

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata and fill in the form.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Share Your Thoughts

Once you’ve read Effective Concurrency in Go, we’d love to hear your thoughts! Please click here to go straight to the Amazon review page for this book and share your feedback.

Your review is important to us and the tech community and will help us make sure we’re delivering excellent quality content.

Download a free PDF copy of this book

Thanks for purchasing this book!

Do you like to read on the go but are unable to carry your print books everywhere? Is your eBook purchase not compatible with the device of your choice?

Don’t worry, now with every Packt book you get a DRM-free PDF version of that book at no cost.

Read anywhere, any place, on any device. Search, copy, and paste code from your favorite technical books directly into your application.

The perks don’t stop there; you can also get exclusive access to discounts, newsletters, and great free content in your inbox daily.

Follow these simple steps to get the benefits:

Scan the QR code or visit the link below

https://packt.link/free-ebook/9781804619070

Submit your proof of purchase

That’s it! We’ll send your free PDF and other benefits to your email directly

1

Concurrency – A High-Level Overview

For many who don’t work with concurrent programs (and for some who do), concurrency means the same thing as parallelism. In colloquial speech, people don’t usually distinguish between the two. But there are some clear reasons why computer scientists and software engineers make a big deal out of differentiating concurrency and parallelism. This chapter is about what concurrency is (and what it is not) and some of the foundational concepts of concurrency.

Specifically, we’ll cover the following main topics in this chapter:

Concurrency and parallelism

Shared memory versus message passing

Atomicity, race, deadlocks, and starvation

By the end of this chapter, you will have a high-level understanding of concurrency and parallelism, basic concurrent programming models, and some of the fundamental concepts of concurrency.

Technical Requirements

This chapter requires some familiarity with the Go language. Some of the examples use goroutines, channels, and mutexes.

Concurrency and parallelism

There was probably a time when concurrency and parallelism meant the same thing in computer science. That time is long gone now. Many people will tell you what concurrency is not: “concurrency is not parallelism.” But when it comes to telling what concurrency is, a simple definition is usually elusive. Different definitions of concurrency capture different aspects of the concept, because concurrency is not how the real world works. The real world works with parallelism. I will try to summarize some of the core ideas behind concurrency, hoping you can understand its abstract nature well enough to apply it to solve practical problems.

Many things around us act independently at the same time. There are probably people around you minding their own business, and sometimes, they interact with you and with each other. All these things happen in parallel, so parallelism is the natural way of thinking about multiple independent things interacting with each other. If you observe a single person’s behavior in a group of people, things are much more sequential: that person does things one after the other and may interact with others in the group, all in an orderly sequence. This is quite similar to multiple programs interacting with each other in a distributed system, or multiple threads of a program interacting with each other in a multi-threaded program.

In computer science, it is widely accepted that the study of concurrency started with the work of Edsger Dijkstra – in particular, the one-page 1965 paper titled Solution of a Problem in Concurrent Programming Control. This paper deals with a mutual exclusion problem involving N computers sharing memory. The wording is clever and highlights the difference between concurrency and parallelism: the paper talks about “concurrent programming” and “parallel execution.” Concurrency relates to how programs are written. Parallelism relates to how programs run.

Even though this was mostly an academic exercise at the time, the field of concurrency grew over the years and branched into many different but related topics, including hardware design, distributed systems, embedded systems, databases, cloud computing, and more. It is now one of the necessary core skills for every software engineer, thanks to the advances in hardware technology. Nowadays, multi-core processors are the norm, which are essentially multiple processors packed on a single chip, usually sharing memory. These are used in data centers that power cloud-based applications in which someone can provision hundreds of computers connected via a network within minutes, and destroy them after a workload has been completed. The same concurrency principles apply to applications running on multiple machines on a distributed system, to applications running on a multi-core processor in a laptop, and to applications that run on a single-core system. Thus, any serious software developer must be knowledgeable of these principles to develop correct and safe programs that can scale.

Over the years, several mathematical models have been developed to analyze and validate the behavior of concurrent systems. Communicating Sequential Processes (CSP) is one such model that influenced the design of Go. In CSP, systems are composed of multiple sequential processes running in parallel. These processes can communicate with each other synchronously, which means that a process sending a message can only continue once the receiving process receives it (this is exactly how unbuffered channels behave in Go).
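As a minimal illustration of this rendezvous behavior, in the following sketch the sending goroutine blocks at the send until the main goroutine reaches the receive:

package main

import (
    "fmt"
    "time"
)

func main() {
    ch := make(chan string) // Unbuffered channel
    go func() {
        ch <- "hello" // Blocks here until main is ready to receive
    }()
    time.Sleep(time.Second) // The goroutine stays blocked at the send
    fmt.Println(<-ch)       // The send and the receive complete together
}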

The validation aspect of such formal frameworks is most intriguing because they are developed with the promise of proving certain properties of complex systems. These properties can be things such as “can the system deadlock?”, which may have life-threatening implications for mission-critical systems. You don’t want your auto-pilot software to stop working mid-flight. Most validation activities boil down to proving properties about the states of the program. That’s what makes proving properties about concurrent systems so difficult: when multiple systems run together, the possible states of the composite system grow exponentially.

The state of a sequential system captures the history of the system at a certain point in time. For a sequential program, the state can be defined as the values in memory together with the current execution location of that program. Given these two, you can determine what the next state will be. As the program executes, it modifies the values of variables and advances the execution location so that the program changes its state. To illustrate this concept, look at the following simple program written in pseudo-code:

1: increment x
2: if x<3 goto 1
3: terminate

The program starts with loc=1 and x=0. When the statement at location 1 is executed, x becomes 1 and location becomes 2. When the statement at location 2 is executed, x stays the same, but the location goes back to 1. This goes on, incrementing x every time the statement at location 1 runs until x reaches 3. Once x is 3, the program terminates. The sequence in Figure 1.1 shows the states of the program:

Figure 1.1 – Sequence of the states of the program
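For reference, a direct Go rendering of this pseudo-code might look as follows; the comments map each statement to its location in the pseudo-code:

package main

import "fmt"

func main() {
    x := 0
    for { // The state of this program is the pair (location, x)
        x++        // Location 1: increment x
        if x < 3 { // Location 2: if x<3 goto 1
            continue
        }
        break
    }
    fmt.Println(x) // Location 3: terminate; x is now 3
}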

When multiple processes are running in parallel, the state of the whole system is the combination of the states of its components. For example, if there are two instances of this program running, then there are two instances of the x variable, which are x1 and x2, and two locations, loc1 and loc2, pointing to the next line to run. At every state, the possible next states branch based on which copy of the system runs first. Figure 1.2 illustrates some of the states of this system:

Figure 1.2 – States of the parallel program

In this diagram, the arrows are labeled with the index of the process that runs in that step. A particular run of the composite program is one of the paths in the diagram. Several observations can be made about these diagrams:

Each sequential process has seven distinct states.

Each sequential process goes through the same sequence of states at every run, but the states of the two instances of the program interleave in different ways on each path.

In a particular run, the two processes together pass through 14 distinct process states (seven each). Any path from the start state to the end state in the composite state diagram consists of 12 steps (six per process), interleaved in some order.

Every run of the composite system goes through one of the possible paths.

There are 49 distinct states in the composite system. (For each state of system 1, system 2 can be in 7 distinct states, so 7 × 7 = 49.)

No matter which path is taken, the end state is the same.

In general, for a system with n states, m copies of that system running in parallel will have n^m distinct states.
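As a quick sanity check, a short program can enumerate the reachable composite states of two copies of the earlier three-line program using a breadth-first search. A minimal sketch, which prints 49:

package main

import "fmt"

// state is the (location, x) pair of one sequential process.
type state struct{ loc, x int }

// step executes the statement at the current location, as in the pseudo-code.
func step(s state) state {
    switch s.loc {
    case 1: // increment x
        return state{loc: 2, x: s.x + 1}
    case 2: // if x<3 goto 1
        if s.x < 3 {
            return state{loc: 1, x: s.x}
        }
        return state{loc: 3, x: s.x}
    }
    return s // Location 3: terminated, no further transitions
}

func main() {
    type composite struct{ p1, p2 state }
    start := composite{state{loc: 1, x: 0}, state{loc: 1, x: 0}}
    seen := map[composite]bool{start: true}
    frontier := []composite{start}
    for len(frontier) > 0 {
        c := frontier[0]
        frontier = frontier[1:]
        // At each state, either process may take the next step.
        for _, next := range []composite{{step(c.p1), c.p2}, {c.p1, step(c.p2)}} {
            if !seen[next] {
                seen[next] = true
                frontier = append(frontier, next)
            }
        }
    }
    fmt.Println(len(seen)) // 49: 7 states per process, 7 × 7 combinations
}

Even for this toy program, the n^m growth means that adding more copies quickly makes such enumeration infeasible.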

That’s one of the reasons why it is so hard to analyze concurrent programs: independent components of a concurrent program can run in any order, making it practically impossible to do state analysis.

It is now time to introduce a definition of concurrency:

“Concurrency is the ability of different parts of a program to be executed out-of-order or in partial order without affecting the result.”

This is an interesting definition, especially for those who are new to the field of concurrency. For one thing, it does not talk about doing multiple things at the same time, but about executing algorithms “out-of-order.” The phrase “doing multiple things at the same time” defines parallelism. Concurrency is about how the program is written, so, according to Rob Pike, one of the creators of the Go language, it is about “dealing with multiple things at the same time.”
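To make the “without affecting the result” part concrete, consider this minimal sketch: the two halves of the sum can run in either order, or in parallel, and the total is always the same:

package main

import (
    "fmt"
    "sync"
)

func main() {
    nums := []int{1, 2, 3, 4, 5, 6, 7, 8}
    var left, right int
    var wg sync.WaitGroup
    wg.Add(2)
    go func() { // Sum the first half
        defer wg.Done()
        for _, n := range nums[:4] {
            left += n
        }
    }()
    go func() { // Sum the second half
        defer wg.Done()
        for _, n := range nums[4:] {
            right += n
        }
    }()
    wg.Wait()
    fmt.Println(left + right) // Always 36, regardless of execution order
}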

Now, a few words on “ordering” things. There are “total orders,” such as the less-than relationship for integers. Given any two integers, you can compare them using the less-than relationship. For sequential programs, we can define a “happened-before relationship,” which is a total order: for any two distinct events that happen within a sequential process, one event happens before the other. If two events happen in different processes, how can a happened-before relationship be defined? A globally synchronized clock could be used to order events happening in isolated processes. However, a clock with sufficient precision does not usually exist in typical distributed systems. Another possibility is to use causal relationships between processes: if a process sends a message to another, then anything that happened in the sender before the message was sent also happened before the receiver received it. This is illustrated in Figure 1.3:

Figure 1.3 – a and b happened before c

Here, event a happened before c, and b happened before c, but nothing can be said about a and b. They happened “concurrently.” In a concurrent program, not every pair of events is comparable; thus, the happened-before relationship is a partial order.
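In Go, a channel operation creates exactly this kind of causal edge (the guarantees behind this are covered in Chapter 3). A minimal sketch:

package main

import "fmt"

func main() {
    var data string
    done := make(chan struct{})
    go func() {
        data = "written before the send" // This write happens before the send
        done <- struct{}{}               // The send...
    }()
    <-done            // ...happened before this receive completes
    fmt.Println(data) // Guaranteed to observe the write
}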

Let’s revisit the famous “dining philosophers problem” to explore the ideas of concurrency, parallelism, and out-of-order execution. This was first formulated by Dijkstra but later brought to its final form by C.A.R. Hoare. The problem is defined as follows: five philosophers are dining together at the same round table. There are five plates, one in front of each philosopher, and one fork between each plate, five forks total. The dish they are eating from requires them to use both forks, one on their left side, and the other on their right side. Each philosopher thinks for a random interval and then eats for a while. To eat, a philosopher must acquire both forks – one on the left side and one on the right side of the philosopher’s plate:

Figure 1.4 – Dining philosophers’ problem – some of the possible states

The goal is to devise a concurrent framework that keeps the philosophers well-fed while allowing them to think. We will revisit this problem in detail later. For this chapter, we are interested in the possible states, some of which are illustrated in Figure 1.4. From left to right, the first figure shows all philosophers thinking. The second figure shows two philosophers who picked up the fork on their left-hand side, so one of the philosophers is waiting for the other to finish. The third figure shows the state in which one of the philosophers is eating while the others are thinking. The philosophers next to the one that’s eating are waiting for their turn to use the fork. The fourth figure shows the state in which two philosophers are eating at the same time. You can see that this is the maximum number of philosophers that can eat at the same time because there are not enough resources (forks) for one more philosopher to eat. The last figure shows the state where each philosopher has one fork, so they are all waiting to acquire the second fork to eat. This situation will only be resolved if at least one of the philosophers gives up and puts the fork back onto the table so that another one can pick it up.
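That last state is a deadlock. A minimal sketch of the naive strategy – one mutex per fork, each philosopher locking the left fork and then the right – shows how it can arise; if all five goroutines pick up their left fork at about the same time, the program never finishes:

package main

import (
    "sync"
    "time"
)

func main() {
    var forks [5]sync.Mutex
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            left, right := i, (i+1)%5
            forks[left].Lock()           // Pick up the left fork
            time.Sleep(time.Millisecond) // Think a little; this invites the deadlock
            forks[right].Lock()          // Pick up the right fork
            // Eat, then put both forks back
            forks[right].Unlock()
            forks[left].Unlock()
        }(i)
    }
    wg.Wait() // With the sleep in place, this will very likely deadlock
}

Chapter 4 revisits this problem and its solutions in detail.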

Now, let’s change the problem setup a little bit. Instead of five philosophers sitting at the table, suppose we have a single philosopher who prefers walking when she is thinking. When she gets hungry, she randomly chooses a plate, places the adjacent forks on that plate one by one, and then starts eating. When she is done, she places the forks back on the table one by one and goes back to thinking while walking around the table. She may, however, get distracted during this process and get up at any point, neglecting to put one or both forks back on the table.

When the philosopher chooses a plate, one of the following is possible:

Both forks are on the table. Then, she picks them up and starts eating.

One of the forks is on the table, and the other one is on the next plate. Realizing that she cannot eat with a single fork, she gets up and chooses another plate. She may or may not put the fork back on the table.

One of the forks is on the table, and the other one is on the selected plate. She picks up the second fork and starts eating.

None of the forks are on the table, because they are both on adjacent plates. Realizing that she cannot eat without a fork, she gets up and chooses another plate.

Both forks are on the selected plate. She starts eating.

Even though the modified problem has only one philosopher, the possible states of the modified problem are identical to those of the original. The five states depicted in the preceding figure are still some of the possible states of the modified problem. The original problem, where there are five processors (philosophers) performing a computation (eating and thinking) using shared resources (forks) illustrates the parallel execution of a concurrent program. In the modified program, there is only one processor (philosopher) performing the same computation using shared resources by dividing her time (time sharing) to fulfill the roles of the missing philosophers. The underlying algorithms (behavior of the philosopher(s)) are the same. So, concurrent programming is about organizing a problem into computational units that can run using time sharing or that can run in parallel. In that sense, concurrency is a programming model like object-oriented programming or functional programming. Object-oriented programming divides a problem into logically related structural components that interact with each other. Functional programming divides a problem into functional components that call each other. Concurrent programming divides a problem into temporal components that send messages to each other, and that can be interleaved or run in parallel.

Time-sharing means sharing a computing resource among multiple users or processes. In concurrent programming, the shared resource is the processor itself. When multiple threads of execution are created by a program, the processor runs one thread for some time, then switches to another thread, and so on. This is called context switching. The context of an execution thread contains its stack and the state of the processor registers when that thread stopped. This way, the processor can quickly switch from stack to stack, saving and restoring the processor state at each switch. The exact location in the code where the processor does that switch depends on the underlying implementation. In preemptive threading, a running thread can be stopped at any time during that thread’s execution. In non-preemptive (or cooperative) threading, a running thread voluntarily gives up execution by performing a blocking operation, a system call, or something else.

For a long time (until Go version 1.14), the Go runtime used a cooperative scheduler. That meant that in the following program, once the first goroutine started running, there was no way to stop it. If you build this program with a Go version older than 1.14 and run it with a single OS thread (GOMAXPROCS=1) multiple times, some runs will print hello, while others will not. This is because if the first goroutine starts working before the second one, it will never let the second goroutine run:

package main

import "fmt"

func main() {
    ch := make(chan bool)
    go func() {
        for { // Busy loop that never blocks or yields
        }
    }()
    go func() {
        fmt.Println("hello")
    }()
    <-ch // Block forever
}

This is no longer the case for more recent Go versions. Now, the Go runtime uses a preemptive scheduler that can run other goroutines, even if one of them is trying to consume all processor cycles.

As a developer of concurrent systems, you have to be aware of how threads/goroutines are scheduled. This understanding is the key to identifying the possible ways in which a concurrent system can behave. At a high level, the states a thread/goroutine can be in are shown in the state diagram in Figure 1.5:

Figure 1.5 – Thread state diagram

When a thread is created, it is in the Ready state. When the scheduler assigns it to a processor, it moves to the Running state and starts running. A running thread can be preempted and moved back into the Ready state. When the thread performs an I/O operation or blocks waiting for a lock or channel operation, it moves to the Blocked state. When the I/O operation completes, the lock is unlocked, or the channel operation is completed, the thread moves back to the Ready state, waiting to be scheduled.

The first thing you should notice here is that a thread waiting for something to happen in a blocked state may not immediately start running when it is unblocked. This fact is usually overlooked when designing and analyzing concurrent programs. What is the meaning of this for your Go programs? It means that unlocking a mutex doesn’t mean one of the goroutines waiting for that mutex will start running immediately. Similarly, writing to a channel does not mean the receiving goroutine will immediately start running. They will be ready to run, but they may not be scheduled immediately.

You will see different variations of this thread state diagram. Every operating system and every language runtime has different ways of scheduling its execution threads. For example, a threading system may differentiate between being blocked by an I/O operation and being blocked by a mutex. This is only a high-level depiction that almost all thread implementations share.

Shared memory versus message passing

If you have been developing with Go for some time, you have probably heard the phrase “Do not communicate by sharing memory. Instead, share memory by communicating.”