Hands-On Parallel Programming with C# 8 and .NET Core 3 - Shakti Tanwar - E-Book

Description

Enhance your enterprise application development skills by mastering parallel programming techniques in .NET and C#




Key Features



  • Write efficient, fine-grained, and scalable parallel code with C# and .NET Core


  • Experience how parallel programming works by building a powerful application


  • Learn the fundamentals of multithreading by working with IIS and Kestrel



Book Description



In today's world, nearly every computer ships with a multi-core CPU. However, unless your application implements parallel programming, it will fail to utilize the hardware's full processing capacity. This book will show you how to write modern software on the optimized and high-performing .NET Core 3 framework using C# 8.






Hands-On Parallel Programming with C# 8 and .NET Core 3 covers how to build multithreaded, concurrent, and optimized applications that harness the power of multi-core processors. Once you've understood the fundamentals of threading and concurrency, you'll gain insights into the data structure in .NET Core that supports parallelism. The book will then help you perform asynchronous programming in C# and diagnose and debug parallel code effectively. You'll also get to grips with the new Kestrel server and understand the difference between the IIS and Kestrel operating models. Finally, you'll learn best practices such as test-driven development, and run unit tests on your parallel code.






By the end of the book, you'll have developed a deep understanding of the core concepts of concurrency and asynchrony to create responsive applications that are not CPU-intensive.




What you will learn



  • Analyze and break down a problem statement for parallelism


  • Explore the APM and EAP patterns and how to move legacy code to Task


  • Apply reduction techniques to get aggregated results


  • Create PLINQ queries and study the factors that impact their performance


  • Solve concurrency problems caused by producer-consumer race conditions


  • Discover the synchronization primitives available in .NET Core


  • Understand how the threading model works with IIS and Kestrel


  • Find out how you can make the most of server resources



Who this book is for



If you want to learn how task parallelism is used to build robust and scalable enterprise architecture, this book is for you. Whether you are a beginner to parallelism in C# or an experienced architect, you'll find this book useful to gain insights into the different threading models supported in .NET Standard and .NET Core. Prior knowledge of C# is required to understand the concepts covered in this book.




Hands-On Parallel Programming with C# 8 and .NET Core 3

 

 

Build solid enterprise software using task parallelism and multithreading

 

 

 

 

 

 

 

 

Shakti Tanwar

 

 

 

 

 

 

 

 

 

 

BIRMINGHAM - MUMBAI

Hands-On Parallel Programming with C# 8 and .NET Core 3

Copyright © 2019 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing or its dealers and distributors, will be held liable for any damages caused or alleged to have been caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

 

Commissioning Editor: Richa Tripathi
Acquisition Editor: Alok Dhuri
Content Development Editor: Digvijay Bagul
Senior Editor: Rohit Singh
Technical Editor: Pradeep Sahu
Copy Editor: Safis Editing
Project Coordinator: Francy Puthiry
Proofreader: Safis Editing
Indexer: Priyanka Dhadke
Production Designer: Jyoti Chauhan

First published: December 2019

Production reference: 1191219

Published by Packt Publishing Ltd. Livery Place 35 Livery Street Birmingham B3 2PB, UK.

ISBN 978-1-78913-241-0

www.packt.com

 

To my wife, Kirti Tanwar, and my son, Shashwat Singh Tanwar, for being my life support and for keeping me motivated to excel in all walks of life.
 

Packt.com

Subscribe to our online digital library for full access to over 7,000 books and videos, as well as industry leading tools to help you plan your personal development and advance your career. For more information, please visit our website.

Why subscribe?

Spend less time learning and more time coding with practical eBooks and Videos from over 4,000 industry professionals

Improve your learning with Skill Plans built especially for you

Get a free eBook or video every month

Fully searchable for easy access to vital information

Copy and paste, print, and bookmark content

Did you know that Packt offers eBook versions of every book published, with PDF and ePub files available? You can upgrade to the eBook version at www.packt.com and as a print book customer, you are entitled to a discount on the eBook copy. Get in touch with us at [email protected] for more details.

At www.packt.com, you can also read a collection of free technical articles, sign up for a range of free newsletters, and receive exclusive discounts and offers on Packt books and eBooks. 

Contributors

About the author

Shakti Tanwar is the CEO of Techpro Compsoft Pvt Ltd, a global provider of consulting in information technology services. He is a technical evangelist and software architect with more than 15 years of experience in software development and corporate training. Shakti is a Microsoft Certified Trainer and has been conducting training in association with Microsoft in the Middle East. His areas of expertise include .NET; Azure Machine Learning; artificial intelligence; applications of pure functional programming to build fault-tolerant, reactive systems; and parallel computing. His love for teaching led him to start a special "train the professors" program for the betterment of colleges in India.

This book would not have been possible without the sacrifices of my wife, Kirti, and son, Shashwat. They stood by me through every struggle and every success. It was their smiles and motivation during tough times that kept me going. I’m also eternally grateful to my parents and my siblings, who always motivated me to scale new heights of success. Many thanks to my friends, mentors, and team Packt, who guided me throughout this journey.

 

 

About the reviewers

Alvin Ashcraft is a developer living near Philadelphia. He has spent his 23-year career building software with C#, Visual Studio, WPF, ASP.NET, and more. He has been awarded the Microsoft MVP title nine times. You can read his daily links for .NET developers on his blog, Morning Dew. He works as a principal software engineer for Allscripts, building healthcare software. He has previously been employed by software companies, including Oracle. He has reviewed other titles for Packt Publishing, such as Mastering ASP.NET Core 2.0, Mastering Entity Framework Core 2.0, and Learning ASP.NET Core 2.0.

I would like to thank my wonderful wife, Stelene, and our three amazing daughters for their support. They were very understanding when I was reading and reviewing these chapters on evenings and weekends to help deliver a useful, high-quality book for .NET developers.

 

Vidya Vrat Agarwal is an avid reader, speaker, published author for Apress, and technical reviewer of over a dozen books for Apress, Packt, and O'Reilly. He is a hands-on architect with 20 years of experience in architecting, designing, and developing distributed software solutions for large enterprises. At T-Mobile as a principal architect, he has worked with B2C and B2B teams where he continues to partner with other domain architects to establish the solution vision and architecture roadmaps for various T-Mobile initiatives to positively impact millions of T-Mobile customers. He sees software development as a craft, and he is a big proponent of software architecture and clean code practices.

 

 

 

 

Packt is searching for authors like you

If you're interested in becoming an author for Packt, please visit authors.packtpub.com and apply today. We have worked with thousands of developers and tech professionals, just like you, to help them share their insight with the global tech community. You can make a general application, apply for a specific hot topic that we are recruiting an author for, or submit your own idea.

Table of Contents

Title Page

Copyright and Credits

Hands-On Parallel Programming with C# 8 and .NET Core 3

Dedication

About Packt

Why subscribe?

Contributors

About the author

About the reviewers

Packt is searching for authors like you

Preface

Who this book is for

What this book covers

To get the most out of this book

Download the example code files

Download the color images

Conventions used

Get in touch

Reviews

Section 1: Fundamentals of Threading, Multitasking, and Asynchrony

Introduction to Parallel Programming

Technical requirements

Preparing for multi-core computing

Processes

Some more information about the OS

Multitasking

Hyper-threading

Flynn's taxonomy

Threads

Types of threads

Apartment state

Multithreading

Thread class

Advantages and disadvantages of threads

The ThreadPool class

Advantages, disadvantages, and when to avoid using ThreadPool

BackgroundWorker

Advantages and disadvantages of using BackgroundWorker

Multithreading versus multitasking

Scenarios where parallel programming can come in handy

Advantages and disadvantages of parallel programming 

Summary

Questions

Task Parallelism

Technical requirements

Tasks

Creating and starting a task

The System.Threading.Tasks.Task class

Using lambda expressions syntax

Using the Action delegate

Using delegate

The System.Threading.Tasks.Task.Factory.StartNew method

Using lambda expressions syntax

Using the Action delegate

Using delegate

The System.Threading.Tasks.Task.Run method

Using lambda expressions syntax

Using the Action delegate

Using delegate

The System.Threading.Tasks.Task.Delay method

The System.Threading.Tasks.Task.Yield method

The System.Threading.Tasks.Task.FromResult<T> method

The System.Threading.Tasks.Task.FromException and System.Threading.Tasks.Task.FromException<T> methods

The System.Threading.Tasks.Task.FromCanceled and System.Threading.Tasks.Task.FromCanceled<T> methods

Getting results from finished tasks

How to cancel tasks

Creating a token

Creating a task using tokens

Polling the status of the token via the IsCancellationRequested property 

Registering for a request cancellation using the Callback delegate

How to wait on running tasks

Task.Wait

Task.WaitAll

Task.WaitAny

Task.WhenAll

Task.WhenAny

Handling task exceptions

Handling exception from single tasks

Handling exceptions from multiple tasks

Handling task exceptions with a callback function

Converting APM patterns into tasks

Converting EAPs into tasks

More on tasks

Continuation tasks

Continuing tasks using the Task.ContinueWith method

Continuing tasks using Task.Factory.ContinueWhenAll and Task.Factory.ContinueWhenAll<T>

Continuing tasks using Task.Factory.ContinueWhenAny and Task.Factory.ContinueWhenAny<T>

Parent and child tasks

Creating a detached task

Creating an attached task

Work-stealing queues

Summary

Implementing Data Parallelism

Technical requirements

Moving from sequential loops to parallel loops

Using the Parallel.Invoke method

Using the Parallel.For method

Using the Parallel.ForEach method

Understanding the degree of parallelism

Creating a custom partitioning strategy

Range partitioning

Chunk partitioning

Canceling loops

Using the Parallel.Break method

Using ParallelLoopState.Stop

Using CancellationToken to cancel loops

Understanding thread storage in parallel loops

Thread local variable

Partition local variable

Summary

Questions

Using PLINQ

Technical requirements

LINQ providers in .NET

Writing PLINQ queries

Introducing the ParallelEnumerable class

Our first PLINQ query

Preserving order in PLINQ while doing parallel executions

Sequential execution using the AsUnordered() method

Merge options in PLINQ

Using the NotBuffered merge option

Using the AutoBuffered merge option

Using the FullyBuffered merge option

Throwing and handling exceptions with PLINQ

Combining parallel and sequential LINQ queries

Canceling PLINQ queries

Disadvantages of parallel programming with PLINQ

Understanding the factors that affect the performance of PLINQ (speedups)

Degree of parallelism

Merge option

Partitioning type

Deciding when to stay sequential with PLINQ

Order of operation

ForAll versus calling ToArray() or ToList()

Forcing parallelism

Generating sequences

Summary

Questions

Section 2: Data Structures that Support Parallelism in .NET Core

Synchronization Primitives

Technical requirements

What are synchronization primitives?

Interlocked operations 

Memory barriers in .NET

What is reordering? 

Types of memory barriers

Avoiding code reordering using constructs

Introduction to locking primitives

How locking works

Thread state

Blocking versus spinning

Lock, mutex, and semaphore

Lock

Mutex

Semaphore

Local semaphore

Global semaphore

ReaderWriterLock

Introduction to signaling primitives

Thread.Join

EventWaitHandle

AutoResetEvent

ManualResetEvent

WaitHandles

Lightweight synchronization primitives

Slim locks

ReaderWriterLockSlim

SemaphoreSlim

ManualResetEventSlim

Barrier and countdown events

A case study using Barrier and CountdownEvent

SpinWait

SpinLock

Summary

Questions

Using Concurrent Collections

Technical requirements

An introduction to concurrent collections

Introducing IProducerConsumerCollection<T>

Using ConcurrentQueue<T>

Using queues to solve a producer-consumer problem

Solving problems using concurrent queues

Performance consideration – Queue<T> versus ConcurrentQueue<T>

Using ConcurrentStack<T>

Creating a concurrent stack

Using ConcurrentBag<T>

Using BlockingCollection<T>

Creating BlockingCollection<T>

A multiple producer-consumer scenario

Using ConcurrentDictionary<TKey,TValue>

Summary

Questions

Improving Performance with Lazy Initialization

Technical requirements

Introducing lazy initialization concepts

Introducing System.Lazy<T>

Construction logic encapsulated inside a constructor

Construction logic passed as a delegate to Lazy<T>

Handling exceptions with the lazy initialization pattern

No exceptions occur during initialization

Random exception while initialization with exception caching

Not caching exceptions

Lazy initialization with thread-local storage

Reducing the overhead with lazy initializations

Summary

Questions

Section 3: Asynchronous Programming Using C#

Introduction to Asynchronous Programming

Technical requirements

Types of program execution

Understanding synchronous program execution

Understanding asynchronous program execution

When to use asynchronous programming

Writing asynchronous code

Using the BeginInvoke method of the Delegate class

Using the Task class

Using the IAsyncResult interface

When not to use asynchronous programming

In a single database without connection pooling

When it is important that the code is easy to read and maintain

For simple and short-running operations

For applications with lots of shared resources

Problems you can solve using asynchronous code

Summary

Questions

Async, Await, and Task-Based Asynchronous Programming Basics

Technical requirements

Introducing async and await

The return type of async methods

Async delegates and lambda expressions

Task-based asynchronous patterns

The compiler method, using the async keyword

Implementing the TAP manually

Exception handling with async code

A method that returns Task and throws an exception

An async method from outside a try-catch block without the await keyword

An async method from inside the try-catch block without the await keyword

Calling an async method with the await keyword from outside the try-catch block

Methods returning void

Async with PLINQ

Measuring the performance of async code

Guidelines for using async code

Avoid using async void

Async chain all the way

Using ConfigureAwait wherever possible

Summary

Questions

Section 4: Debugging, Diagnostics, and Unit Testing for Async Code

Debugging Tasks Using Visual Studio

Technical requirements

Debugging with VS 2019 

How to debug threads

Using Parallel Stacks windows

Debugging using Parallel Stacks windows

Threads view

Tasks view

Debugging using the Parallel Watch window

Using Concurrency Visualizer

Utilization view

Threads view

Cores view

Summary

Questions

Further reading 

Writing Unit Test Cases for Parallel and Asynchronous Code

Technical requirements

Unit testing with .NET Core

Understanding the problems with writing unit test cases for async code

Writing unit test cases for parallel and async code

Checking for a successful result

Checking for an exception result when the divisor is 0

Mocking the setup for async code using Moq

Testing tools

Summary

Questions 

Further reading

Section 5: Parallel Programming Feature Additions to .NET Core

IIS and Kestrel in ASP.NET Core

Technical requirements

IIS threading model and internals

Starvation Avoidance

Hill Climbing

Kestrel threading model and internals

ASP.NET Core 1.x

ASP.NET Core 2.x

Introducing the best practices of threading in microservices

Single thread-single process microservices

Single thread-multiple process microservices

Multiple threads-single process

Asynchronous services

Dedicated thread pools

Introducing async in ASP.NET MVC core

Async streams 

Summary

Questions

Patterns in Parallel Programming

Technical requirements

The MapReduce pattern

Implementing MapReduce using LINQ

Aggregation

The fork/join pattern

The speculative processing pattern

The lazy pattern

Shared state pattern

Summary

Questions

Distributed Memory Management

Technical requirements

Introduction to distributed systems

Shared versus distributed memory model

Shared memory model

Distributed memory model

Types of communication network

Static communication networks

Dynamic communication networks

Properties of communication networks

Topology

Routing algorithms

Switching strategy

Flow control

Exploring topologies

Linear and ring topologies

Linear arrays

Ring or torus

Meshes and tori

2D mesh

2D torus

Programming distributed memory machines using message passing

Why MPI?

Installing MPI on Windows

Sample program using MPI

Basic send/receive use

Collectives

Summary

Questions

Assessments

Chapter 1 – Introduction to Parallel Programming

Chapter 3 – Implementing Data Parallelism

Chapter 4 – Using PLINQ

Chapter 5 – Synchronization Primitives

Chapter 6 – Using Concurrent Collections

Chapter 7 – Improving Performance with Lazy Initialization

Chapter 8 – Introduction to Asynchronous Programming

Chapter 9 – Async, Await, and Task-Based Asynchronous Programming Basics

Chapter 10 – Debugging Tasks Using Visual Studio

Chapter 11 – Writing Unit Test Cases for Parallel and Asynchronous Code

Chapter 12 – IIS and Kestrel in ASP.NET Core

Chapter 13 – Patterns in Parallel Programming

Chapter 14 – Distributed Memory Management

Other Books You May Enjoy

Leave a review - let other readers know what you think

Preface

Packt first contacted me about writing this book nearly a year ago. It's been a long journey, harder than I anticipated at times, and I've learned a lot. The book you hold now is the culmination of many long days, and I'm proud to finally present it.

Having written this book about C# means a lot to me as it has always been a dream of mine to write about the language that I started my career with. C# has really grown in leaps and bounds since it was first introduced. .NET Core has actually enhanced the power and reputation of C# within the developer community. 

To make this book meaningful to a wide audience, we will cover both the classic threading model and the Task Parallel Library (TPL), using code to explain them. We'll first look at the basic concepts of the OS that make it possible to write multithreaded code. We'll then look closely at the differences between classic threading and the TPL.

In this book, I take care to approach parallel programming in the context of modern-day best programming practices. The examples have been kept short and simple so as to ease your understanding. The chapters have been written in a way that makes the topics easy to learn even if you don't have much prior knowledge of them.

I hope you enjoy reading this book as much as I enjoyed writing it.

Who this book is for

This book is for C# programmers who want to learn multithreading and parallel programming concepts and want to use them in enterprise applications built using .NET Core. It is also designed for students and professionals who simply want to learn about how parallel programming works with modern-day hardware.

It is assumed that you already have some familiarity with the C# programming language and some basic knowledge of how OSes work.

What this book covers

Chapter 1, Introduction to Parallel Programming, introduces the important concepts of multithreading and parallel programming. This chapter includes coverage of how OSes have evolved to support modern-day parallel programming constructs.

Chapter 2, Task Parallelism, demonstrates how to divide your program into tasks for the efficient utilization of CPU resources and high performance.

Chapter 3, Implementing Data Parallelism, focuses on implementing data parallelism using parallel loops. This chapter also covers extension methods to help in achieving parallelism, as well as partitioning strategies.

Chapter 4, Using PLINQ, explains how to take advantage of PLINQ support. This includes ordering queries and canceling queries, as well as the pitfalls of using PLINQ.

Chapter 5, Synchronization Primitives, covers the synchronization constructs available in C# for working with shared resources in multithreaded code.

Chapter 6, Using Concurrent Collections, describes how to take advantage of concurrent collections available in .NET Core without worrying about the effort of manual synchronization coding.

Chapter 7, Improving Performance with Lazy Initialization, explores how to implement built-in constructs utilizing lazy patterns.

Chapter 8, Introduction to Asynchronous Programming, explores how to write asynchronous code in earlier versions of .NET.

Chapter 9, Async, Await, and Task-Based Asynchronous Programming Basics, covers how to take advantage of the new constructs in .NET Core to implement asynchronous code.

Chapter 10, Debugging Tasks Using Visual Studio, focuses on the various tools available in Visual Studio 2019 that makes debugging parallel tasks easier.

Chapter 11, Writing Unit Test Cases for Parallel and Asynchronous Code, covers the various ways to write unit test cases in Visual Studio and .NET Core.

Chapter 12, IIS and Kestrel in ASP.NET Core, introduces the concepts of IIS and Kestrel. The chapter also looks at support for asynchronous streams.

Chapter 13, Patterns in Parallel Programming, explains the various patterns that are already implemented in the C# language. This also includes custom pattern implementations.

Chapter 14, Distributed Memory Management, explores how memory is shared in distributed programs.

To get the most out of this book

You need to have Visual Studio 2019 installed on your system along with .NET Core 3.1. Basic knowledge of C# and OS concepts is recommended as well.

Download the example code files

You can download the example code files for this book from your account at www.packt.com. If you purchased this book elsewhere, you can visit www.packtpub.com/support and register to have the files emailed directly to you.

You can download the code files by following these steps:

1. Log in or register at www.packt.com.
2. Select the Support tab.
3. Click on Code Downloads.
4. Enter the name of the book in the Search box and follow the onscreen instructions.

Once the file is downloaded, please make sure that you unzip or extract the folder using the latest version of:

WinRAR/7-Zip for Windows

Zipeg/iZip/UnRarX for Mac

7-Zip/PeaZip for Linux

The code bundle for the book is also hosted on GitHub at https://github.com/PacktPublishing/Hands-On-Parallel-Programming-with-C-8-and-.NET-Core-3. In case there's an update to the code, it will be updated on the existing GitHub repository.

We also have other code bundles from our rich catalog of books and videos available at https://github.com/PacktPublishing/. Check them out!

Download the color images

We also provide a PDF file that has color images of the screenshots/diagrams used in this book. You can download it here: https://static.packt-cdn.com/downloads/9781789132410_ColorImages.pdf.

Get in touch

Feedback from our readers is always welcome.

General feedback: If you have questions about any aspect of this book, mention the book title in the subject of your message and email us at [email protected].

Errata: Although we have taken every care to ensure the accuracy of our content, mistakes do happen. If you have found a mistake in this book, we would be grateful if you would report this to us. Please visit www.packtpub.com/support/errata, selecting your book, clicking on the Errata Submission Form link, and entering the details.

Piracy: If you come across any illegal copies of our works in any form on the internet, we would be grateful if you would provide us with the location address or website name. Please contact us at [email protected] with a link to the material.

If you are interested in becoming an author: If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, please visit authors.packtpub.com.

Reviews

Please leave a review. Once you have read and used this book, why not leave a review on the site that you purchased it from? Potential readers can then see and use your unbiased opinion to make purchase decisions, we at Packt can understand what you think about our products, and our authors can see your feedback on their book. Thank you!

For more information about Packt, please visit packt.com.

Section 1: Fundamentals of Threading, Multitasking, and Asynchrony

In this section, you will become familiar with the concepts of threading, multitasking, and asynchronous programming.

This section comprises the following chapters:

Chapter 1, Introduction to Parallel Programming

Chapter 2, Task Parallelism

Chapter 3, Implementing Data Parallelism

Chapter 4, Using PLINQ

Introduction to Parallel Programming

Parallel programming has been supported in .NET since the start, and it has gained a strong footing since the introduction of the Task Parallel Library (TPL) in .NET Framework 4.0.

Multithreading is a subset of parallel programming and is one of the least understood aspects of programming; it's one that many new developers struggle with. C# has evolved significantly since its inception. It has very strong support not only for multithreading but also for asynchronous programming. Multithreading in C# goes all the way back to C# 1.0. C# is primarily synchronous, but with the strong async support that has been added from C# 5.0 onward, it has become a first choice for application programmers. Whereas multithreading deals only with how to parallelize work within a process, parallel programming also covers inter-process communication scenarios.

Prior to the introduction of the TPL, we relied on Thread, BackgroundWorker, and ThreadPool for multithreading capabilities. In the days of C# 1.0, developers relied on threads to split up work and free the user interface (UI), thereby allowing them to build responsive applications. This model is now referred to as classic threading. With time, it made way for another programming model, the TPL, which relies on tasks while still using threads internally.
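As a minimal sketch of the two models (a console application; the class and method names here are illustrative, not from the book), the following contrasts starting work on a classic Thread with queuing the same work as a TPL task:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThreadVsTask
{
    static void Main()
    {
        // Classic threading model: create and start a thread manually.
        var thread = new Thread(() => Console.WriteLine("Running on a classic thread"));
        thread.Start();
        thread.Join(); // wait for the thread to finish

        // TPL model: express the work as a task; the scheduler decides
        // which thread-pool thread actually runs it.
        Task task = Task.Run(() => Console.WriteLine("Running as a TPL task"));
        task.Wait();
    }
}
```

Note the shift in abstraction: with the Thread class you manage the thread yourself, whereas with Task.Run you describe the work and leave scheduling to the runtime.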

In this chapter, we will learn about various concepts that will help you learn about writing multithreaded code from scratch.

We will cover the following topics:

Basic concepts of multi-core computing, starting with an introduction to the concepts and processes related to the operating system (OS)

Threads and the difference between multithreading and multitasking

Advantages and disadvantages of writing parallel code and scenarios in which parallel programming is useful

Technical requirements

All the examples demonstrated in this book have been created in Visual Studio 2019 using C# 8. All the source code can be found on GitHub at https://github.com/PacktPublishing/Hands-On-Parallel-Programming-with-C-8-and-.NET-Core-3/tree/master/Chapter01.

Preparing for multi-core computing

In this section, we will introduce the core concepts of the OS, starting with the process, which is where threads live and run. Then, we will consider how multitasking evolved with the introduction of hardware capabilities, which make parallel programming possible. After that, we will try to understand the different ways of creating a thread with code. 

Processes

In layman's terms, the word process refers to a program in execution. In terms of the OS, however, a process is an address space in the memory. Every application, whether it is a Windows, web, or mobile application, needs processes to run. Processes provide security for programs against other programs that run on the same system so that data that's allocated to one cannot be accidentally accessed by another. They also provide isolation so that programs can be started and stopped independently of each other and independently of the underlying OS.
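For instance, you can inspect the process your own code runs in through the System.Diagnostics.Process class (a minimal console sketch; the output values will vary by machine):

```csharp
using System;
using System.Diagnostics;

class ProcessInfo
{
    static void Main()
    {
        // Every running program lives inside a process with its own address space.
        Process current = Process.GetCurrentProcess();
        Console.WriteLine($"Process name: {current.ProcessName}");
        Console.WriteLine($"Process id:   {current.Id}");
        Console.WriteLine($"Threads:      {current.Threads.Count}");
    }
}
```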

Some more information about the OS

The performance of applications largely depends on the quality and configuration of the hardware. This includes the following:

CPU speed

Amount of RAM

Hard disk speed (5400/7200 RPM)

Disk type, that is, HDD or SSD

Over the last few decades, we have seen huge jumps in hardware technology. Microprocessors used to have a single core, that is, a chip with one central processing unit (CPU). Around the turn of the century, we saw the advent of multi-core processors: chips with two or more cores, each with its own cache.

Multitasking

Multitasking refers to the ability of a computer system to run more than one process (application) at a time. The number of processes that a system can execute simultaneously is equal to the number of cores in that system: a single-core processor can execute only one task at a time, a dual-core processor two, and a quad-core processor four. If we add CPU scheduling to this, the CPU appears to run many more applications at a time by switching between them according to its scheduling algorithm.

Hyper-threading

Hyper-threading (HT) is a proprietary technology developed by Intel that improves the parallelization of computations performed on x86 processors. It was first introduced in Xeon server processors in 2002. With HT enabled, each physical core appears to the OS as two virtual (logical) cores and is capable of executing two tasks at a time. The following diagram shows the difference between single- and multi-core chips:

The following are a few examples of processor configurations and the number of tasks that they can perform:

A single processor with a single-core chip: One task at a time

A single processor with an HT-enabled single-core chip: Two tasks at a time

A single processor with a dual-core chip: Two tasks at a time

A single processor with an HT-enabled dual-core chip: Four tasks at a time

A single processor with a quad-core chip: Four tasks at a time

A single processor with an HT-enabled quad-core chip: Eight tasks at a time

The following is a screenshot of a CPU resource monitor for an HT-enabled quad-core processor system. On the right-hand side, you can see that there are eight available CPUs:
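We can query the same number from .NET itself. The following one-line sketch uses Environment.ProcessorCount, which reports the logical processors visible to the OS; on an HT-enabled quad-core machine, it would report 8:

```csharp
using System;

class CoreCountDemo
{
    static void Main()
    {
        // Reports logical (not physical) processors, so HT cores count double.
        Console.WriteLine($"Logical processors: {Environment.ProcessorCount}");
    }
}
```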

You might be wondering how much you can improve the performance of your computer simply by moving from a single-core to a multi-core processor. At the time of writing, most of the fastest supercomputers are built on the Multiple Instruction, Multiple Data (MIMD) architecture, which was one of the classifications of computer architecture proposed by Michael J. Flynn in 1966.

Let's try to understand this classification.

Flynn's taxonomy

Flynn classified computer architectures into four categories based on the number of concurrent instruction (or control) streams and data streams:

Single Instruction, Single Data (SISD): In this model, there is a single control unit and a single instruction stream. These systems can only execute one instruction at a time, without any parallel processing. All single-core processor machines are based on the SISD architecture.

Single Instruction, Multiple Data (SIMD): In this model, we have a single instruction stream and multiple data streams. The same instruction is applied to multiple data elements in parallel. This is handy for data-parallel workloads, such as applying the same arithmetic operation to every element of an array, and it is the model used by vector processing units and GPUs.
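As a minimal sketch, .NET exposes SIMD-style operations through System.Numerics.Vector&lt;T&gt;. The following applies a single addition instruction to a whole vector of integers at once (the lane count depends on the hardware, so the arrays are sized accordingly):

```csharp
using System;
using System.Numerics;

class SimdDemo
{
    static void Main()
    {
        // Vector<int>.Count is the number of int lanes the hardware can
        // process with one instruction (for example, 8 with AVX2).
        int lanes = Vector<int>.Count;

        var a = new int[lanes];
        var b = new int[lanes];
        for (int i = 0; i < lanes; i++)
        {
            a[i] = i;
            b[i] = i * 10;
        }

        // One addition instruction applied to all lanes at once.
        Vector<int> sum = new Vector<int>(a) + new Vector<int>(b);

        Console.WriteLine($"Lanes: {lanes}, accelerated: {Vector.IsHardwareAccelerated}");
        Console.WriteLine(sum);
    }
}
```

On hardware without vector support, the same code still runs correctly; it simply falls back to scalar operations.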

Multiple Instructions, Single Data (MISD): In this model, multiple instruction streams operate on one data stream. Therefore, multiple operations can be applied in parallel to the same data source. This is generally used for fault tolerance, for example, in space shuttle flight control computers.

Multiple Instructions, Multiple Data (MIMD): In this model, as the name suggests, we have multiple instruction streams and multiple data streams. This allows us to achieve true parallelism, where each processor can run different instructions on different data streams. Nowadays, this architecture is used by most computer systems.

Now that we've covered the basics, let's move our discussion to threads.

Threads

A thread is a unit of execution inside a process. At any point in time, a program may consist of one or more threads to achieve better performance. GUI-based Windows applications, such as legacy Windows Forms (WinForms) or Windows Presentation Foundation (WPF), have a dedicated thread for managing the UI and handling user actions. This thread is also called the UI thread, or the foreground thread. It owns all the controls that are created as part of the UI.
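Every .NET application already runs on at least one thread. As a quick illustrative sketch, the following inspects the main thread through Thread.CurrentThread:

```csharp
using System;
using System.Threading;

class MainThreadDemo
{
    static void Main()
    {
        // The thread executing Main is the application's default (main) thread.
        Thread main = Thread.CurrentThread;
        main.Name = "MainThread";

        Console.WriteLine($"Name:              {main.Name}");
        Console.WriteLine($"Managed thread ID: {main.ManagedThreadId}");
        Console.WriteLine($"Is background:     {main.IsBackground}");
    }
}
```

Note that a thread's Name can only be assigned once; attempting to rename it later throws an InvalidOperationException.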

Types of threads

There are two different types of managed threads, that is, a foreground thread and a background thread. The difference between these is as follows:

Foreground threads: These have a direct impact on an application's lifetime. The application keeps running as long as at least one foreground thread is alive.

Background threads: These have no impact on the application's lifetime. When the application exits, all the background threads are killed.

An application may comprise any number of foreground or background threads. While active, a foreground thread keeps the application running; that is, the application's lifetime depends on the foreground thread. The application stops completely when the last foreground thread is stopped or aborted. When the application exits, the system stops all the background threads.
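The difference can be demonstrated with the Thread.IsBackground property. The following is a minimal sketch; note that without the final Join call, the process could exit before the background worker completes:

```csharp
using System;
using System.Threading;

class BackgroundThreadDemo
{
    static void Main()
    {
        var worker = new Thread(() =>
        {
            Console.WriteLine("Background worker running...");
            Thread.Sleep(200); // simulate some work
            Console.WriteLine("Background worker finished.");
        });

        // A background thread does not keep the application alive:
        // if Main returned now, the worker would be killed mid-work.
        worker.IsBackground = true;
        worker.Start();

        // Join here only so that the worker's output is visible
        // before the process exits.
        worker.Join();
    }
}
```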

Multithreading

Parallel execution of code in .NET is achieved through multithreading. A process (or application) can utilize any number of threads, depending on its hardware capabilities. Every application, including console, legacy WinForms, WPF, and even web applications, starts with a single thread by default. We can easily achieve multithreading by creating more threads programmatically as and when they are required.
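As a minimal sketch of creating threads programmatically, the following starts two additional worker threads alongside the main thread:

```csharp
using System;
using System.Threading;

class MultithreadingDemo
{
    static void Main()
    {
        // Create two additional threads; together with the main thread,
        // the process now contains three threads.
        var t1 = new Thread(() => Console.WriteLine("Hello from thread 1"));
        var t2 = new Thread(() => Console.WriteLine("Hello from thread 2"));

        t1.Start();
        t2.Start();

        // Wait for both worker threads to finish before exiting.
        t1.Join();
        t2.Join();
    }
}
```

The order of the two messages is not guaranteed; it depends on how the OS schedules the threads.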

Multithreading typically relies on a scheduling component known as a thread scheduler, which keeps track of the active threads inside a process and decides when each of them should run. Every thread that's created is assigned a System.Threading.ThreadPriority, which can have one of the following values; Normal is the default priority assigned to any thread:

Highest

AboveNormal

Normal

BelowNormal

Lowest
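The following sketch shows how a priority can be assigned before a thread is started. Note that priority is only a hint to the OS scheduler, not a guarantee of execution order:

```csharp
using System;
using System.Threading;

class PriorityDemo
{
    static void Main()
    {
        var worker = new Thread(() =>
            Console.WriteLine($"Worker priority: {Thread.CurrentThread.Priority}"));

        // Priority is a hint to the scheduler, not a guarantee of ordering.
        worker.Priority = ThreadPriority.BelowNormal;
        worker.Start();
        worker.Join();

        // Threads are created with ThreadPriority.Normal unless changed.
        Console.WriteLine($"Main thread priority: {Thread.CurrentThread.Priority}");
    }
}
```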

Every thread that runs inside a process is assigned a time slice by the OS based on the thread priority scheduling algorithm. Every OS can have a different scheduling algorithm for running threads, so the order of execution may vary across operating systems, which makes troubleshooting threading errors more difficult. The most common scheduling approach is as follows:

Find the threads with the highest priority and schedule them to run.