C# 2008 and 2005 Threaded Programming

Gastón C. Hillar

In Detail

Most modern machines have dual core processors. This means that multitasking is built right into your computer's hardware. Using both cores means your applications can process data faster and be more responsive to users. But to fully exploit this in your applications, you need to write multithreading code, which means learning some challenging new concepts.

This book will guide you through everything you need to start writing multithreaded C# applications. You will see how to use processes and threads in C#, .NET Framework features for concurrent programming, sharing memory space between threads, and much more. The book is full of practical, interesting examples and working code.

The book begins with fundamental concepts such as processes, threads, mono-processor systems, and multi-processor systems. As it progresses, readers gain a clear understanding of starting, joining, pausing, and restarting threads, and of the simple techniques associated with parallelism. Short exercises at the end of every chapter let readers put each topic into practice.

The book also includes several practical parallelism algorithms and data structures used for illustration, and best practices and practical topics like debugging and performance.

A practical guide to developing responsive multi-threaded and multi-process applications in C#

Approach

This is a concise, practical guide that will help you learn threaded programming in C#, with plenty of examples and clear explanations. It is packed with screenshots to aid your understanding of the process.

Who this book is for

Whether you are a beginner to working with threads or an old hand who is looking for a reference, this book should be on your desk. It will help you to build scalable, high-performance software using parallel programming techniques.

Students learning introductory threaded programming in C# will also benefit from this book.




Table of Contents

C# 2008 and 2005 Threaded Programming
Credits
About the Author
Acknowledgement
About the Reviewers
Preface
What this book covers
What you need for this book
Who is this book for
Conventions
Time for action — Uploading a document
What just happened?
Pop quiz
Have a go hero
Reader feedback
Customer support
Downloading the example code for the book
Errata
Piracy
Questions
1. Taking Advantage of Multiprocessing and Multiple Cores
Mono-processor systems: The old gladiators
Single core: Only one warrior to fight against everybody
Doing a tiny bit of each task
The performance waterfall
Have a go hero - Researching micro-architectures and applications
Multi-processor systems: Many warriors to win a battle
Have a go hero - Multi-processing systems
Estimating performance improvements
Have a go hero - Calculating an estimated performance improvement
Avoiding bottlenecks
Have a go hero - Detecting bottlenecks
Taking advantage of multiple execution cores
Have a go hero - Counting cores
Scalability
Have a go hero - Detecting scalability problems
Load balancing: Keeping everybody happy
Have a go hero - Thinking about load balancing
Operating systems and virtual machines
Parallelism is here to stay
Have a go hero - Preparing minds for parallelism
Summary
2. Processes and Threads
Processes—any running program
Time for action — Coding a simple CPU-intensive loop
What just happened?
Time for action — Changing the cores available for a process
What just happened?
Relating processes to cores
Time for action — Changing a process priority
What just happened?
Linear code problems in multiprocessing systems
Time for action — Running many processes in parallel
What just happened?
Time for action — Testing parallelism capabilities with processes
What just happened?
Time for action — Using the Process Explorer
Threads—Independent parts of a process
Time for action — Listing threads with Process Explorer
Have a go hero - Searching multithreaded applications
Time for action — Analyzing context switches with Process Explorer
What just happened?
Multiple threads in servers
Multiple threads in clients
Have a go hero - Redesigning algorithms using pseudo-code
Summary
3. BackgroundWorker—Putting Threads to Work
RTC: Rapid thread creation
Time for action — Breaking a code in a single thread
What just happened?
Time for action — Defining the work to be done in a new thread
What just happened?
Have a go hero - Adding UI elements and monitoring the application
Asynchronous execution
Time for action — Understanding asynchronous execution step-by-step
What just happened?
Synchronous execution
Showing the progress
Time for action — Using a BackgroundWorker to report progress in the UI
What just happened?
Have a go hero - Reporting progress in many ways
Cancelling the job
Time for action — Using a BackgroundWorker to cancel the job
What just happened?
Time for action — Using a BackgroundWorker to detect a job completed
What just happened?
Time for action — Working with parameters and results
What just happened?
Have a go hero - Enhancing the application
Working with multiple BackgroundWorker components
Time for action — Using many BackgroundWorker components to break the code faster
What just happened?
Have a go hero - Monitoring and enhancing the application
BackgroundWorker and Timer
BackgroundWorker creation on the fly
Time for action — Creating BackgroundWorker components in run-time
What just happened?
Have a go hero - Enhancing the code
Pop quiz
Summary
4. Thread Class—Practical Multithreading in Applications
Creating threads with the Thread class
Time for action — Defining methods for encryption and decryption
What just happened?
Time for action — Running the encryption in a new thread using the Thread class
What just happened?
Decoupling the UI
Creating a new thread
Retrieving data from threads
Sharing data between threads
Time for action — Updating the UI while running threads
What just happened?
Sharing some specific data between threads
A BackgroundWorker helping a Thread class
Time for action — Executing the thread synchronously
What just happened?
Main and secondary threads
Have a go hero - Concurrent encryption algorithms
Passing parameters to threads
Time for action — Using lists for thread creation on the fly I
What just happened?
Time for action — Using lists for thread creation on the fly II
What just happened?
Creating as many threads as the number of cores
Receiving parameters in the thread method
Have a go hero - Concurrent UI feedback
Pop quiz
Summary
5. Simple Debugging Techniques with Multithreading
Watching multiple threads
Time for action — Understanding the difficulty in debugging concurrent threads
What just happened?
Debugging concurrent threads
Time for action — Finding the threads
What just happened?
Understanding the information shown in the Threads window
Time for action — Assigning names to threads
What just happened?
Identifying the current thread at runtime
Debugging multithreaded applications as single-threaded applications
Time for action — Leaving a thread running alone
What just happened?
Freezing and thawing threads
Viewing the call stack for each running thread
Have a go hero - Debugging and enhancing the encryption algorithm
Showing partial results in multithreaded code
Time for action — Explaining the encryption procedure
What just happened?
Showing thread-safe output
Time for action — Isolating results
What just happened?
Understanding thread information in tracepoints
Have a go hero - Concurrent decryption
Pop quiz
Summary
6. Understanding Thread Control with Patterns
Starting, joining, pausing, and restarting threads
Time for action — Defining methods for counting old stars
What just happened?
Avoiding conflicts
Splitting image processing
Understanding the pixels' color compositions
Time for action — Running the stars counter in many concurrent threads
What just happened?
Creating independent blocks of concurrent code
Using flags to enhance control over concurrent threads
Rebuilding results to show in the UI
Testing results with Performance Monitor and Process Explorer
Time for action — Waiting for the threads' signals
What just happened?
Using the AutoResetEvent class to handle signals between threads
Using the WaitHandle class to check for signals
Have a go hero - Pausing and restarting threads with flags
Pop quiz
Summary
7. Dynamically Splitting Jobs into Pieces—Avoiding Problems
Running split jobs many times
Time for action — Defining new methods for running many times
What just happened?
Time for action — Running a multithreaded algorithm many times
What just happened?
Using classes, methods, procedures, and functions with multithreading capabilities
Time for action — Analyzing the memory usage
What just happened?
Understanding the garbage collector with multithreading
Time for action — Collecting the garbage at the right time
What just happened?
Controlling the system garbage collector with the GC class
Avoiding garbage collection problems
Avoiding inefficient processing usage problems
Have a go hero - Queuing threads and showing progress
Retrieving the total memory thought to be allocated
Generalizing the algorithms for segmentation with classes
Time for action — Creating a parallel algorithm piece class
What just happened?
Time for action — Using a generic method in order to create pieces
What just happened?
Creating the pieces
Time for action — Creating a parallel algorithm coordination class
What just happened?
Starting the threads associated to the pieces
Accessing instances and variables from threads' methods
Time for action — Adding useful classic coordination methods
What just happened?
Have a go hero - Splitting algorithms specializing classes
Pop quiz
Summary
8. Simplifying Parallelism Complexity
Specializing the algorithms for segmentation with classes
Time for action — Preparing the parallel algorithm classes for the factory method
What just happened?
Defining the class to instantiate
Preparing the classes for inheritance
Time for action — Creating a specialized parallel algorithm piece subclass
What just happened?
Creating a complete piece of work
Writing the code for a thread in an instance method
Time for action — Creating a specialized parallel algorithm coordination subclass
What just happened?
Creating simple constructors
Time for action — Overriding methods in the coordination subclass
What just happened?
Programming the piece creation method
Programming the results collection method
Time for action — Defining a new method to create an algorithm instance
What just happened?
Forgetting about threads
Time for action — Running the Sunspot Analyzer in many concurrent independent pieces
What just happened?
Optimizing and encapsulating parallel algorithms
Achieving thread affinity
Avoiding locks and many synchronization nightmares
Have a go hero - Avoiding side-effects
Pop quiz
Summary
9. Working with Parallelized Input/Output and Data Access
Queuing threads with I/O operations
Time for action — Creating a class to run an algorithm in an independent thread
What just happened?
Time for action — Putting the logic into methods to simplify multithreading
What just happened?
Avoiding Input/Output bottlenecks
Using concurrent streams
Controlling exceptions in threads
Time for action — Creating the methods for queuing requests
What just happened?
Using a pool of threads with the ThreadPool class
Managing the thread queue in the pool
Time for action — Running concurrent encryptions on demand using a pool of threads
What just happened?
Converting single-threaded tasks to a multithreaded pool
Encapsulating scalability
Thread affinity in a pool of threads
Have a go hero - Managing the pool of threads
Parallelizing database access
Have a go hero - Creating a parallelized data access algorithm
Pop quiz
Summary
10. Parallelizing and Concurrently Updating the User Interface
Updating the UI from independent threads
Time for action — Creating a safe method to update the user interface
What just happened?
Creating delegates to make cross-thread calls
Figuring out the right thread to make the call to the UI
Avoiding UI update problems with a delegate
Retrieving results from a synchronous delegate invoke
Time for action — Invoking a user interface update from a thread
What just happened?
Providing feedback when the work is finished
Time for action — Identifying threads and giving them names
What just happened?
Time for action — Understanding how to invoke delegates step-by-step
What just happened?
Decoding the delegates and concurrency puzzle
Time for action — Creating safe counters using delegates and avoiding concurrency problems
What just happened?
Taking advantage of the single-threaded UI to create safe counters
Have a go hero - Implementing a Model-View-Controller design
Reporting progress to the UI from independent threads
Time for action — Creating the classes to show a progress bar column in a DataGridView
What just happened?
Time for action — Creating a class to hold the information to show in the DataGridView
What just happened?
Time for action — Invoking multiple asynchronous user interface updates from many threads
What just happened?
Creating a delegate without parameters
Invoking a delegate asynchronously to avoid performance degradation
Time for action — Updating progress percentages from worker threads
What just happened?
Providing feedback while the work is being done
Have a go hero - Creating a parallelized user interface
Pop quiz
Summary
11. Coding with .NET Parallel Extensions
Parallelizing loops using .NET extensions
Time for action — Downloading and installing the .NET Parallel Extensions
What just happened?
No silver bullet
Time for action — Downloading and installing the imaging library
What just happened?
Time for action — Creating an independent class to run in parallel without side effects
What just happened?
Counting and showing blobs while avoiding side effects
Time for action — Running concurrent nebula finders using a parallelized loop
What just happened?
Using a parallelized ForEach loop
Coding with delegates in parallelized loops
Working with a concurrent queue
Controlling exceptions in parallelized loops
Time for action — Showing the results in the UI
What just happened?
Combining delegates with a BackgroundWorker
Retrieving elements from a concurrent queue in a producer-consumer scheme
Time for action — Providing feedback to the UI using a producer-consumer scheme
What just happened?
Creating an asynchronous task combined with a synchronous parallel loop
Changing the threads' names while debugging
Time for action — Invoking a UI update from a task
What just happened?
Providing feedback when each job is finished
Using lambda expressions to simplify the code
Parallelizing loops with ranges
Parallelizing queries
Time for action — Parallelized counter
What just happened?
Parallelizing LINQ queries with PLINQ
Specifying the degree of parallelism for PLINQ
Parallelizing statistics and multiple queries
Have a go hero - Creating a parallelized user interface
Pop quiz
Summary
12. Developing a Completely Parallelized Application
Joining many different parallelized pieces into a complete application
Time for action — Creating an opacity effect in an independent thread
What just happened?
Running code out of the UI thread
Time for action — Creating a safe method to change the opacity
What just happened?
Blocking the UI—Forbidden with multithreading code
Time for action — Creating a class to run a task in an independent thread
What just happened?
Time for action — Putting the logic into methods to simplify running tasks in a pool of threads
What just happened?
Time for action — Queuing requests, running threads, and updating the UI
What just happened?
Combining threads with a pool of threads and the UI thread
Time for action — Creating a specialized parallel algorithm piece subclass to run concurrently with the pool of threads
What just happened?
Time for action — Creating a specialized parallel algorithm coordination subclass to run concurrently with the pool of threads
What just happened?
Time for action — Overriding methods in the brightness adjustment coordination subclass
What just happened?
Time for action — Starting new threads in a new window
What just happened?
Creating threads inside other threads
Time for action — Showing new windows without blocking the user interface
What just happened?
Multiple windows and one UI thread for all of them
Rationalizing multithreaded code
Have a go hero - Improving the application and solving bugs
Have a go hero - Creating parallel, multithreaded applications using the C# programming language
Pop quiz
Summary
Index

C# 2008 and 2005 Threaded Programming

Gastón C. Hillar

C# 2008 and 2005 Threaded Programming

Beginner's Guide

Copyright © 2009 Packt Publishing

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews.

Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, Packt Publishing, nor its dealers or distributors will be held liable for any damages caused or alleged to be caused directly or indirectly by this book.

Packt Publishing has endeavored to provide trademark information about all the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

First published: January 2009

Production Reference: 1200109

Published by Packt Publishing Ltd.

32 Lincoln Road

Olton

Birmingham, B27 6PA, UK.

ISBN 978-1-847197-10-8

www.packtpub.com

Cover Image by Vinayak Chittar (<[email protected]>)

Credits

Author

Gastón C. Hillar

Reviewers

Bogdan Brinzarea-Iamandi

Jerry L. Spohn

Ron Steckly

Senior Acquisition Editor

David Barnes

Development Editor

Shilpa Dube

Technical Editor

Rakesh Shejwal

Editorial Team Leader

Akshara Aware

Copy Editor

Sumathi Sridhar

Project Team Leader

Lata Basantani

Project Coordinator

Rajashree Hamine

Project Editorial Manager

Abhijeet Deobhakta

Indexer

Monica Ajmera

Proofreaders

Chris Smith

Ron Steckly

Production Coordinator

Rajni R. Thorat

Cover Work

Rajni R. Thorat

About the Author

Gastón C. Hillar has been working with computers since he was eight. He began programming with the legendary Texas Instruments TI-99/4A and Commodore 64 home computers in the early 80s.

He holds a Bachelor's degree in Computer Science, from which he graduated with honors, and an MBA (Master in Business Administration), from which he graduated with an outstanding thesis.

He has worked as a developer, an architect, and a project manager for many companies in Buenos Aires, Argentina. For several years, he was a project manager at one of the most important mortgage loan banks in Latin America. Now, he is an independent IT consultant working for several Spanish, German, and Latin American companies, and a freelance author. He is always looking for new adventures around the world.

He also works with electronics (he is an electronics technician). He is always researching new technologies and writing about them. He owns an IT and electronics laboratory with many servers, monitors, and measuring instruments.

He is the author of more than 40 books in Spanish about computer science, modern hardware, programming, systems development, software architecture, business applications, balanced scorecard applications, IT project management, Internet, and electronics, published by Editorial HASA and Grupo Noriega Editores.

He regularly writes articles for the leading Spanish magazines "Mundo Linux", "Solo Programadores", and "Resistor".

He lives with his wife, Vanesa, and his son, Kevin. When not tinkering with computers, he enjoys developing and playing with wireless virtual reality devices and electronics toys with his father, his son, and his nephew Nico.

You can reach him at <[email protected]>

Acknowledgement

When writing this book, I was fortunate to work with an excellent team at Packt Publishing Ltd, whose contributions vastly improved the presentation of this book. David Barnes helped me to transform the idea into the final book and to take my first steps with the Beginner's Guide format. Rajashree Hamine made everything easier with her incredible time management. Shilpa Dube helped me realize my vision for this book and provided many sensible suggestions regarding the text, the format, and the flow. The reader will notice her excellent work. Rakesh Shejwal made the sentences, the paragraphs, and the code easier to read and to understand. He has added great value to the final drafts.

I would like to thank my technical reviewers Bogdan Brinzarea-Iamandi, Jerry L. Spohn, and Ron Steckly and proofreaders Chris Smith and Ron Steckly, for their thorough reviews and insightful comments. I was able to incorporate some of the knowledge and wisdom they have gained in their many years in the software development industry. The examples and the code include the great feedback provided by Bogdan Brinzarea. Bogdan helped me a lot to include better and shorter code to simplify the learning process.

I wish to acknowledge Hector A. Algarra, who always helped me to improve my writing.

Special thanks go to my wife, Vanesa S. Olsen, my son Kevin, my nephew, Nicolas, my father, Jose Carlos, who acted as a great sounding board and participated in many hours of technical discussions, my sister, Silvina, who helped me when my grammar was confusing, and my mother Susana. They always supported me during the production of this book.

About the Reviewers

Bogdan Brinzarea-Iamandi has a strong background in Computer Science, holding Master's and Bachelor's degrees from the Automatic Control and Computers Faculty of the Politehnica University of Bucharest, Romania, as well as an Auditor diploma from the Computer Science department at Ecole Polytechnique, Paris, France. His main interests cover a wide area, ranging from embedded programming to distributed and mobile computing and new web technologies.

Currently, he is employed as Supervisor within the team of the Alternative Channels Sector of the IT Division in Banca Romaneasca, a Member of the National Bank of Greece. He is Project Manager for Internet Banking and he coordinates other projects related to new technologies and applications to be implemented in the banking area.

Bogdan is also the author of two AJAX books, the popular AJAX and PHP: Building Responsive Web Applications and Microsoft AJAX Library Essentials, also published by Packt.

Jerry Spohn is a Manager of Development for a medium-sized software development firm in Exton, Pennsylvania. His responsibilities include managing a team of developers and assisting in architecting a large multi-lingual, multi-currency loan account system, written in COBOL and Java. He is also responsible for maintaining and tracking a system-wide program and database documentation web site, for which he uses DotNetNuke as the portal.

Jerry is also the owner of Spohn Software LLC, a small consulting firm that helps small businesses in the area with all aspects of maintaining and improving their business processes. This includes helping with the creation and maintenance of web sites, general office productivity issues, and computer purchasing and networking. Spohn Software, as a firm, prefers to teach its clients how to solve their problems internally, rather than acquiring a long-term contract, thereby making the business more productive and profitable in the future.

Jerry currently works and resides in Pennsylvania, with his wife, Jacqueline, and his two sons, Nicholas and Nolan.

Ron Steckly has been developing on various platforms for the past several years, recently adopting .NET as his platform of choice. He graduated from U.C. Berkeley with highest distinction in 2004. He recently moved back to sunny Northern California after living for several years in Manhattan. He is currently working as a Web Application Engineer at Empirical Education in Palo Alto, CA, and authoring a book on using MySQL with .NET for Packt Publishing. In his spare time, he enjoys studying mathematics (particularly combinatorics), statistics, economics, and new programming languages.

I would like to thank my good friends Johannes Castner, Josh Brandt-Young, and David Aaron Engle for all their kindness and patience over the years.

To my son, Kevin

Preface

Most machines today have multi-core processors; to make full use of these, applications need to support multithreading. This book will take your C# development skills to the next level. It includes best practices alongside theory, and will help you learn the various aspects of parallel programming, thereby helping you to build your career. The book covers everything from planning and designing, and preparing algorithms and analytical models, up to specific parallel programming systems. It will help you learn C# threaded programming, with numerous examples and clear explanations packed with screenshots to aid your understanding of every process. The complete example code is bundled in .zip files for easy download and use.

What this book covers

Chapter 1 outlines the advantages of parallel programming with C# for the coming years. It also elaborates on the challenges associated with parallel processing and programming.

Chapter 2 focuses on the fundamentals of the operating system scheduler and how a single application can be divided into multiple threads or different processes. It also explains the different ways of using threads to work in clients and servers.
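
To get a feel for what this looks like in code, here is a minimal console sketch (separate from the book's own examples) that uses the System.Diagnostics.Process class to list the threads running inside the current process; the output format is illustrative only:

using System;
using System.Diagnostics;

class ProcessInfoDemo
{
    static void Main()
    {
        // Get a reference to the process this code is running in.
        Process current = Process.GetCurrentProcess();

        Console.WriteLine("Process: {0} (Id {1})", current.ProcessName, current.Id);
        Console.WriteLine("Priority class: {0}", current.PriorityClass);

        // Every process owns at least one thread; list them all.
        foreach (ProcessThread thread in current.Threads)
        {
            Console.WriteLine("Thread Id {0}, state {1}", thread.Id, thread.ThreadState);
        }
    }
}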

Chapter 3 shows how to develop applications that create background threads, start and cancel them, and launch multiple threads, using BackgroundWorker components. It also discusses the differences between running multiple threads with BackgroundWorker components and with Timers.
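
As a hedged preview of this pattern (not one of the book's own examples), a minimal BackgroundWorker job with cancellation might be sketched as follows; the Thread.Sleep loop is only a placeholder for real work:

using System;
using System.ComponentModel;
using System.Threading;

class BackgroundWorkerDemo
{
    static void Main()
    {
        BackgroundWorker worker = new BackgroundWorker();
        worker.WorkerSupportsCancellation = true;

        // The DoWork handler runs on a thread taken from the thread pool.
        worker.DoWork += delegate(object sender, DoWorkEventArgs e)
        {
            for (int i = 0; i < 10; i++)
            {
                if (worker.CancellationPending) { e.Cancel = true; return; }
                Thread.Sleep(100); // Simulated CPU- or I/O-intensive step.
            }
            e.Result = "Done";
        };

        // RunWorkerCompleted fires when the background job ends.
        worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
        {
            Console.WriteLine(e.Cancelled ? "Cancelled" : (string)e.Result);
        };

        worker.RunWorkerAsync();   // Start the job asynchronously.
        Console.ReadLine();        // Keep the console alive while the job runs.
    }
}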

Chapter 4 introduces the powerful Thread class, which lets us create independent and very flexible threads. It also discusses the differences between running multiple threads with BackgroundWorker components and with the Thread class, and ways to create high-performance applications.
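
A minimal sketch of the Thread class in action could look like the following; the CountPrimes worker method is a hypothetical stand-in for the encryption routines the chapter actually uses:

using System;
using System.Threading;

class ThreadDemo
{
    // Hypothetical worker method: counts primes up to a given limit.
    static void CountPrimes(object upperLimit)
    {
        int limit = (int)upperLimit;
        int count = 0;
        for (int n = 2; n <= limit; n++)
        {
            bool isPrime = true;
            for (int d = 2; d * d <= n; d++)
            {
                if (n % d == 0) { isPrime = false; break; }
            }
            if (isPrime) count++;
        }
        Console.WriteLine("{0}: {1} primes up to {2}",
            Thread.CurrentThread.Name, count, limit);
    }

    static void Main()
    {
        // Create an independent thread and pass it a parameter.
        Thread worker = new Thread(new ParameterizedThreadStart(CountPrimes));
        worker.Name = "Prime counter";
        worker.Start(100000);

        Console.WriteLine("Main thread keeps running while the worker counts...");
        worker.Join(); // Wait for the worker to finish before exiting.
    }
}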

Chapter 5 focuses on debugging applications with many concurrent threads and coordinating the entire debugging process. It also explains the differences between single-threaded and multithreaded debugging, for threads created with BackgroundWorker components as well as with the Thread class, along with many tricks that help simplify the debugging process.

Chapter 6 takes a closer look at working with independent blocks of code when concurrency is not allowed, managing and coordinating those using new techniques different from the ones offered by the Thread class. It also explains how to apply parallel algorithms to image processing, and the solutions to the most common problems when working with components not enabled for multithreading.

Chapter 7 shows how to improve memory usage in heavily multithreaded applications, managing and coordinating the garbage collection service, and using an object-oriented approach for splitting jobs into well-managed pieces, easily and dynamically. It also covers developing highly optimized multithreaded algorithms.
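
As a rough illustration of the GC class calls the chapter discusses, consider this sketch; the allocation sizes are arbitrary and exist only to make the numbers visible:

using System;

class GarbageCollectionDemo
{
    static void Main()
    {
        // Ask the runtime for an estimate of the managed memory currently allocated.
        long before = GC.GetTotalMemory(false);

        byte[][] buffers = new byte[100][];
        for (int i = 0; i < buffers.Length; i++)
        {
            buffers[i] = new byte[1024 * 1024]; // Allocate roughly 100 MB in total.
        }

        long after = GC.GetTotalMemory(false);
        Console.WriteLine("Allocated approximately {0} MB",
            (after - before) / (1024 * 1024));

        buffers = null;      // Drop the references...
        GC.Collect();        // ...and explicitly request a collection.
        GC.WaitForPendingFinalizers();

        Console.WriteLine("After collection: {0} MB thought to be allocated",
            GC.GetTotalMemory(true) / (1024 * 1024));
    }
}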

Chapter 8 elaborates on using object-oriented capabilities offered by the C# programming language, using design patterns for simplifying the parallelism complexity, and avoiding synchronization pains. It also covers the principles of thread affinity, and how to avoid the undesirable side effects related to concurrent programming.

Chapter 9 takes a closer look at using object-oriented capabilities offered by the C# programming language for achieving great scalability in converting single-threaded algorithms to multithreaded scalable jobs, while avoiding the pains of multithreading. It emphasizes the use of pools and parallelized input/output operations in many ways.
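
A minimal sketch of queuing work items to a pool of threads with ThreadPool.QueueUserWorkItem might look like this; the ProcessRequest method is a hypothetical placeholder for the chapter's encryption requests:

using System;
using System.Threading;

class ThreadPoolDemo
{
    static void ProcessRequest(object state)
    {
        // Hypothetical work item; the chapter queues encryption requests instead.
        int requestNumber = (int)state;
        Console.WriteLine("Processing request {0} on pooled thread {1}",
            requestNumber, Thread.CurrentThread.ManagedThreadId);
        Thread.Sleep(200); // Simulated I/O-bound work.
    }

    static void Main()
    {
        // Queue several work items; the pool decides how many threads to use.
        for (int i = 1; i <= 10; i++)
        {
            ThreadPool.QueueUserWorkItem(new WaitCallback(ProcessRequest), i);
        }

        Console.WriteLine("Requests queued; press Enter to exit.");
        Console.ReadLine();
    }
}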

Chapter 10 focuses on providing a more responsive user interface, using synchronous and asynchronous delegates. It explains how to combine parallelized operations with precise user interface feedback while avoiding some multithreading pains. It also shows how to combine a pool of threads with a responsive user interface.
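
The heart of the safe-update pattern the chapter builds can be outlined as follows. This Windows Forms sketch, with its hypothetical ProgressForm and statusLabel names, only illustrates the InvokeRequired/Invoke technique; Control.BeginInvoke is the asynchronous counterpart discussed later in the chapter:

using System;
using System.Threading;
using System.Windows.Forms;

public class ProgressForm : Form
{
    private Label statusLabel = new Label();
    private volatile bool closing;

    public ProgressForm()
    {
        statusLabel.Dock = DockStyle.Fill;
        Controls.Add(statusLabel);
        FormClosing += delegate { closing = true; };

        // Start the work in an independent thread when the form loads.
        Load += delegate
        {
            Thread worker = new Thread(new ThreadStart(DoWork));
            worker.IsBackground = true; // Do not keep the process alive after the form closes.
            worker.Start();
        };
    }

    private void DoWork()
    {
        for (int i = 1; i <= 100 && !closing; i++)
        {
            Thread.Sleep(50); // Simulated work.
            UpdateStatus(i + "% completed");
        }
    }

    // Safe method: marshals the call to the UI thread when necessary.
    // A production version would also handle the race between the form
    // closing and a pending Invoke call.
    private void UpdateStatus(string text)
    {
        if (closing) return;
        if (statusLabel.InvokeRequired)
        {
            statusLabel.Invoke(new Action<string>(UpdateStatus), text);
        }
        else
        {
            statusLabel.Text = text;
        }
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new ProgressForm());
    }
}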

Chapter 11 walks through parallelizing the execution of code, taking advantage of the .NET Parallel Extensions. It explains how to combine different execution techniques with automatically parallelized structures that will be available in Visual Studio 2010. The chapter also shows how to transform a single-threaded imaging library into a parallelized algorithm, and how to combine the .NET Parallel Extensions with a responsive user interface.
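
For orientation, a parallelized loop and a PLINQ query can be sketched as follows. The namespaces shown are those of the Task Parallel Library as it later shipped in .NET Framework 4; the CTP releases the book installs expose very similar Parallel.For and AsParallel calls, although the assemblies and namespaces differ slightly. The prime-counting body is just a stand-in for real per-element work:

using System;
using System.Linq;
using System.Threading.Tasks;

class ParallelExtensionsDemo
{
    static void Main()
    {
        int[] numbers = Enumerable.Range(2, 200000).ToArray();

        // Parallelized loop: iterations are distributed among the available cores.
        Parallel.For(0, numbers.Length, i =>
        {
            if (!IsPrime(numbers[i])) numbers[i] = 0;
        });

        // PLINQ: the query is partitioned and executed in parallel.
        int primeCount = numbers.AsParallel().Count(n => n != 0);
        Console.WriteLine("Primes found: {0}", primeCount);
    }

    static bool IsPrime(int n)
    {
        for (int d = 2; d * d <= n; d++)
        {
            if (n % d == 0) return false;
        }
        return true;
    }
}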

Chapter 12 helps you create a whole application from scratch, with completely multithreaded code offering a responsive user interface for every event. It demonstrates how to join all the pieces into a complete application and parallelize the execution as much as possible to offer great scalability, impressive performance, and an incredibly responsive user interface. It shows how to combine different parallelized tasks with multiple-window UIs, always offering the best possible performance and the most responsive UI.

What you need for this book

You need prior knowledge of the C# programming language and the .NET Framework, as this book helps developers find out how to improve their applications' performance and responsiveness. However, you do not need to be a C# guru to understand the book. In order to execute the code included in most chapters, you need Visual C# 2005, 2008, or 2010 (CTP). Nevertheless, in order to run the examples in the last two chapters, you need Visual C# 2008 or 2010 (CTP), as you will be using many features available only in these newer releases.

You can use Visual C# Express Editions for most of the exercises. However, the Threads Window and many multithreading debugging features are not available in these editions. Thus, you will not be able to run some debugging exercises included in the book. Therefore, you are encouraged to use at least a Trial or Standard Edition, instead of working with the Express Editions.

You need a computer with at least two cores (dual-core) or two microprocessors installed in order to achieve significant results in most of the exercises, as we will be focusing on multi-core development. You can run the exercises on single-core microprocessors, but you will not be able to appreciate the improvements you are achieving.
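
A quick way to check how many logical processors your machine exposes is the Environment.ProcessorCount property, as in this tiny sketch:

using System;

class CoreCountCheck
{
    static void Main()
    {
        // Reports the number of logical processors the operating system exposes.
        Console.WriteLine("Logical processors available: {0}",
            Environment.ProcessorCount);
    }
}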

Who is this book for

Whether you are a beginner to working with threads or an old hand who is looking for a reference, this book should be on your desk.

This book is for people who are interested in working with C#. This book will help you to build scalable, high performance software using parallel programming techniques.

The book will prove beneficial to C++ programmers who are interested in moving to C#, and to beginner programmers who are interested in learning C#. Students learning introductory threaded programming in C# will also benefit from this book.

Time for action — Uploading a document

Action 1
Action 2
Action 3

When instructions need some extra explanation so that they make sense, they are followed with...

What just happened?

... which explains how the task or instructions you just completed work, so that you understand what is going on as you complete useful activities.

You will also find some other learning aids in the book, including:

Pop quiz

These are short questions intended to help you test your own understanding.

Have a go hero

These set practical challenges and give you ideas for experimenting with what you have learned.

Reader feedback

Feedback from our readers is always welcome. Let us know what you think about this book, what you liked or may have disliked. Reader feedback is important for us to develop titles that you really get the most out of.

To send us general feedback, simply drop an email to <[email protected]>, making sure to mention the book title in the subject of your message.

If there is a book that you need and would like to see us publish, please send us a note in the SUGGEST A TITLE form on www.packtpub.com or email <[email protected]>.

If there is a topic that you have expertise in and you are interested in either writing or contributing to a book, see our author guide on www.packtpub.com/authors.

Customer support

Now that you are the proud owner of a Packt book, we have a number of things to help you to get the most from your purchase.

Downloading the example code for the book

Visit http://www.packtpub.com/files/code/7108_Code.zip to directly download the example code.

The downloadable files contain instructions on how to use them.

Errata

Although we have taken every care to ensure the accuracy of our contents, mistakes do happen. If you find a mistake in one of our books—maybe a mistake in text or code—we would be grateful if you would report this to us. By doing this you can save other readers from frustration, and help to improve subsequent versions of this book. If you find any errata, report them by visiting http://www.packtpub.com/support, selecting your book, clicking on the let us know link, and entering the details of your errata. Once your errata are verified, your submission will be accepted and the errata added to the list of existing errata. The existing errata can be viewed by selecting your title from http://www.packtpub.com/support.

Piracy

Piracy of copyright material on the Internet is an ongoing problem across all media. At Packt, we take the protection of our copyright and licenses very seriously. If you come across any illegal copies of our works in any form on the Internet, please provide the location address or web site name immediately so we can pursue a remedy.

Please contact us at <[email protected]> with a link to the suspected pirated material.

We appreciate your help in protecting our authors, and our ability to bring you valuable content.

Questions

You can contact us at <[email protected]> if you are having a problem with some aspect of the book, and we will do our best to address it.

Chapter 1. Taking Advantage of Multiprocessing and Multiple Cores

We already know how to develop applications using the C# programming language. However, modern computers are prepared for running many operations in parallel, concurrently. C# is an advanced programming language. Thus, users and our bosses expect a C# application to offer great performance and a responsive user interface.

So, let's take our C# development skills to the next level. We want to take full advantage of modern hardware. For that reason, the first thing we have to do is try and understand how modern computers differ from older computers. Let's understand the parallelization revolution. The only requirement to be able to develop parallelized C# applications is to understand the basics of the C# programming language and the Visual Studio IDE. We will cover the rest of the requirements in our journey through the parallel programming world!

We must understand some fundamentals related to the multiprocessing capabilities offered by modern computers. We will have to consider them in order to develop applications that take full advantage of parallel processing features. In this chapter, we will cover many topics to help us understand the new challenges involved in parallel programming with modern hardware. Upon reading it and following the exercises we shall:

Begin a paradigm shift in software design
Understand the techniques for developing a new generation of applications
Have an idea of the performance increases we can achieve using parallel programming with C#
Perform accurate response time estimation for critical processes

Mono-processor systems: The old gladiators

Mono-processor systems use an old-fashioned, classic computer architecture. The microprocessor receives an input stream, executes the necessary processing, and sends the results in an output stream that is distributed to the indicated destinations. The following diagram represents a mono-processor system (one processor with just one core) with one user, and one task running:

This working scheme is known as IPO (Input, Processing, Output) or SISD (Single Instruction, Single Data). This basic design represents the von Neumann machines, developed by the outstanding mathematician John von Neumann in 1952.

Single core: Only one warrior to fight against everybody

These days, systems with a single processing core, with just one logical processor, are known as single core.

When there is only one user running an application in a mono-processor machine and the processor is fast enough to deliver an adequate response time in critical operations, the model will work without any major problems.

For example, consider a robotic servant in the kitchen having just two hands to work with. If you ask him to do one task that requires both his hands, such as washing up, he will be efficient. He has a single processing core.

However, suppose that you ask him to do various tasks—wash up, clean the oven, make your lunch, mop the floor, cook dinner for your friends, and so on. You give him the list of tasks, and he works down the list. But since there is so much washing up, it's 2 p.m. before he even starts making your lunch—by which time you are very hungry and make it yourself. You need more robots when you have multiple tasks. You need multiple execution cores, many logical processors.

Each task performed by the robot is a critical operation, because you and your friends are very hungry!

Let's consider another case. We have a mono-processor computer with many users connected, requesting services that the computer must process. In this case, we have many input streams and many output streams, one for each connected user. As there is just one microprocessor, there is only one input channel and only one output channel. Therefore, the input streams are enqueued (multiplexed) for processing, and the same happens, in reverse, with the output streams, as shown in the following diagram:

Doing a tiny bit of each task

Why does the robot take so long to cook dinner for you and your friends? The robot does a tiny bit of each task, and then goes back to the list to see what else he should be doing. He has to keep going back to the list, reading it, and then starting a new task. The time it takes to complete the list is much longer because he is not fast enough to finish so many tasks in the required time. That's multiplexing, and the delay is called von Neumann's bottleneck. Multiplexing takes additional time because you have just one robot to do everything you need in the kitchen.

The systems with concurrent access by multiple users are known as multi-user systems.

If the processor is not fast enough to deliver an adequate response time in every critical operation requested by each connected user, a bottleneck will be generated in the processor's input queue. This is well known in computer architecture as von Neumann's bottleneck.

There are three possible solutions to this problem, each consisting of upgrading or increasing one of the following:

The processor's speed, by using a faster robot. He will need less time to finish each task.
The processor's capacity to process instructions concurrently (in parallel), that is, adding more hands to the robot and the capability to use his hands to do different jobs.
The number of installed processors or the number of processing cores, that is, adding more robots. They can all focus on one task, but everything gets done in parallel. All tasks are completed faster and you get your lunch on time. That is multitasking.

No matter which option we pick, we must consider other factors that depend particularly on the kind of operations performed by the computer and which could generate additional bottlenecks. In some cases, the main memory access speed could be too slow (the robot takes too much time to read each task). In some other cases, the disk subsystem could have bad response times (the robot takes too much time to memorize the tasks to be done), and so on. It is important to make a detailed analysis of these topics before deciding how to troubleshoot bottlenecks.

Moreover, sometimes the amount of data that needs to be processed is too large and the problem is the transfer time between the memory and the processor. The robot is too slow to move each hand. Poor robot! Why don't you buy a new model?

In the last few years, every new micro-architecture developed by microprocessor manufacturers has focused on improving the processors' capacity to run instructions in parallel (a robot with more hands). Some examples of these are the continuous duplication of processing structures like the ALU (Arithmetic and Logic Unit) and the FPU (Floating Point Unit), and the growing number of processing cores that are included in one single physical processor. Hence, you can build a super robot with many independent robots and many hands. Each sub-robot can be made to specialize in a specific task, thus parallelizing the work.

Computers used as servers, with many connected users and running applications, take greater advantage of modern processors' capacity to run instructions in parallel as compared to those computers used by only one user. We will learn how to take full advantage of those features in the applications developed using the C# programming language. You want the robot to get your lunch on time!

The performance waterfall

Considering all the analysis we have done so far to develop new algorithms for the applications of critical processes, we can conceive the performance waterfall shown in the following image:

Note

FSB (Front Side Bus)

The FSB is a bus that transports the data between the CPU and the outside world. When the CPU needs data from memory or from the I/O subsystem, the FSB is the highway used for that information interchange.

This performance waterfall will help us understand how we can take full advantage of modern multiprocessing. The topmost part of the waterfall represents the best performance. Hence, we lose speed as we go down each step. It is not a linear relationship, and the hardware infrastructure in which the application runs will determine the exact performance loss at each step represented in the above figure. However, the cascade is the same in every case, regardless of the kind of application being developed or the hardware being used.

We must design our algorithms to keep trips down the performance waterfall to a minimum; we should go downstairs as an exception, not as a rule. For example, a good decision consists of recovering all the necessary information from the disk subsystem or the network in one pass. Then, we can take everything to memory and begin processing without having to search for the data in every iteration.
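
A minimal sketch of this idea, assuming a hypothetical input file of numeric values, reads everything from disk in a single call and then iterates entirely in memory:

using System;
using System.IO;

class OnePassReadDemo
{
    static void Main()
    {
        // Hypothetical input file; replace with your own data source.
        string path = @"C:\data\measurements.txt";

        // Recover everything from the disk subsystem in one pass...
        string[] lines = File.ReadAllLines(path);

        // ...and then iterate entirely in memory, near the top of the waterfall.
        double total = 0;
        foreach (string line in lines)
        {
            double value;
            if (double.TryParse(line, out value))
            {
                total += value;
            }
        }
        Console.WriteLine("Sum of {0} lines: {1}", lines.Length, total);
    }
}

Reading line by line inside the processing loop would instead send us down the waterfall to the disk subsystem on every iteration.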

A small performance problem in a mono-processing application multiplies its effects when the application is translated to a concurrent model. Therefore, we must pay attention to these details.

As a rule or as a design pattern, the best approach when optimizing a critical process consists of running its tasks in the higher steps of the performance waterfall most of the time. It should visit main memory in some iterations, but as rarely as possible. Each step down from the top means losing a small portion of the performance.

Let's draw an example of this. We have a state-of-the-art laser printer capable of printing 32 pages per minute. It is in an office on the sixth floor, but the paper ream stays on the first one. When the printer finishes with a page, a person must step down the six floors to take another sheet of paper and put it in the printer's paper feed tray. It takes this person about five to ten minutes to bring each sheet of paper to the printer: he goes downstairs, walks back upstairs, spends some time talking to a neighbor on the way, and then arrives back at the office with the sheet of paper. In addition, he could feel thirsty and go for a drink. As we can see, he wasted the state-of-the-art printer's performance (the execution core) because the paper tray was not fed quickly enough. The problem is that he brings a small quantity each time he arrives at the office (the hard disk and the I/O subsystem).

The printer's work would be more efficient if the person could feed it with a paper ream containing 500 sheets. The person could bring another paper ream with 500 sheets from the first floor when the printer's paper feed tray has only 50 sheets left (bringing it to the cache memories L1, L2, or L3).

What happens if we have eight printers working in parallel instead of only one? In order to take full advantage of their performance and their efficient printing process, all of them must have a good number of sheets in their respective paper feed trays. This is the goal we must accomplish when we plan an algorithm for parallelism.

Note

In the rest of the book, we will consider the performance waterfall for many examples and will try to achieve optimal results. We will not leave behind the necessary pragmatism in order to improve performance within a reasonable developing time.

Have a go hero - Researching micro-architectures and applications

A group of researchers needs the consulting services of an IT professional specializing in parallel computing. They are not very clear when explaining the kind of research they are doing. However, you decide to help them.

They want to find the best computer micro-architecture needed to parallelize an application.

Research the new micro-architectures that are being prepared by leading PC microprocessor manufacturers, and the schedules for their release, particularly on these topics:

Are they increasing the processors' speed?
Do they mention upgrades to the processor's capacity to process instructions concurrently (in parallel)?
Are they talking about increasing the number of processing cores?