A First Course in Probability and Markov Chains

Giuseppe Modica

Description

Provides an introduction to basic structures of probability with a view towards applications in information technology

A First Course in Probability and Markov Chains presents an introduction to the basic elements of probability and focuses on two main areas. The first part explores notions and structures in probability, including combinatorics, probability measures, probability distributions, conditional probability, inclusion-exclusion formulas, random variables, dispersion indexes, independent random variables, as well as the weak and strong laws of large numbers and the central limit theorem. The second part of the book focuses on Discrete Time Discrete Markov Chains, addressed together with an introduction to Poisson processes and Continuous Time Discrete Markov Chains. The book uses measure-theoretic notation that unifies the whole presentation, in particular avoiding the separate treatment of continuous and discrete distributions.

A First Course in Probability and Markov Chains:

  • Presents the basic elements of probability.
  • Explores elementary probability with combinatorics, uniform probability, the inclusion-exclusion principle, independence and convergence of random variables.
  • Features applications of the Law of Large Numbers.
  • Introduces Bernoulli and Poisson processes as well as discrete and continuous time Markov Chains with discrete states.
  • Includes illustrations and examples throughout, along with solutions to problems featured in this book.

The authors present a unified and comprehensive overview of probability and Markov Chains aimed at educating engineers working with probability and statistics as well as advanced undergraduate students in sciences and engineering with a basic background in mathematical analysis and linear algebra.




Table of Contents

Title Page

Copyright

Preface

Chapter 1: Combinatorics

1.1 Binomial Coefficients

1.2 Sets, Permutations and Functions

1.3 Drawings

1.4 Grouping

Chapter 2: Probability measures

2.1 Elementary Probability

2.2 Basic Facts

2.3 Conditional Probability

2.4 Inclusion–Exclusion Principle

Chapter 3: Random variables

3.1 Random Variables

3.2 A Few Discrete Distributions

3.3 Some Absolutely Continuous Distributions

Chapter 4: Vector valued random variables

4.1 Joint Distribution

4.2 Covariance

4.3 Independent Random Variables

4.4 Sequences of Independent Random Variables

Chapter 5: Discrete time Markov chains

5.1 Stochastic Matrices

5.2 Markov Chains

5.3 Some Characteristic Parameters

5.4 Finite Stochastic Matrices

5.5 Regular Stochastic Matrices

5.6 Ergodic Property

5.7 Renewal Theorem

Chapter 6: An introduction to continuous time Markov chains

6.1 Poisson Process

6.2 Continuous Time Markov Chains

Appendix A: Power series

A.1 Basic Properties

A.2 Product of Series

A.3 Banach Space Valued Power Series

Appendix B: Measure and integration

B.1 Measures

B.2 Measurable Functions and Integration

B.3 Product Measures and Iterated Integrals

B.4 Convergence Theorems

Appendix C: Systems of linear ordinary differential equations

C.1 Cauchy Problem

C.2 Efficient Computation of eQt

C.3 Continuous Semigroups

References

Index

This edition first published 2013

© 2013 John Wiley & Sons, Ltd

Registered office

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.

The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book. This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

Modica, Giuseppe.

A first course in probability and Markov chains / Giuseppe Modica and Laura Poggiolini.

pages cm

Summary: “A first course in Probability and Markov Chains presents an introduction to the basic elements in statistics and focuses in two main areas”— Provided by publisher.

Includes bibliographical references and index.

ISBN 978-1-119-94487-4 (hardback)

1. Markov processes. I. Poggiolini, Laura. II. Title.

QA274.7.M63 2013

519.2′33— dc23

2012033463

A catalogue record for this book is available from the British Library.

ISBN: 978-1-119-94487-4

Preface

This book collects topics covered in introductory courses in probability delivered by the authors at the University of Florence. It aims to introduce the reader to typical structures of probability with a language appropriate for further advanced reading. The attention is mainly focused on basic structures.

There is a well established tradition of studies in probability due to the wide range of possible applications of related concepts and structures in science and technology. Therefore, an enormous amount of literature on the subject is available, including treatises, lecture notes, reports, journal papers and web pages. The list of references at the end of this book is obviously incomplete and includes only references used directly in writing the following pages. Throughout this book we adopt the language of measure theory (relevant notions are recalled in the appendices).

The first part of the book deals with basic notions of combinatorics and probability calculus: counting problems and uniform probability, probability measures, probability distributions, conditional probability, the inclusion–exclusion principle, random variables, dispersion indexes, independence and the law of large numbers. The central limit theorem is presented without proof. Only a basic knowledge of linear algebra and mathematical analysis is required.

In the second part we discuss, as a first example of stochastic processes, Markov chains with discrete time and discrete states, including the Markov chain Monte Carlo method, and we introduce the Poisson process and continuous time Markov chains with finite states. For this part, further notions in mathematical analysis (summarized in the appendices) are required: the Banach fixed point theorem, systems of linear ordinary differential equations, and powers and power series of matrices.

We wish to thank all the students who have attended our courses. We also wish to thank our colleagues Matteo Focardi, Mariano Giaquinta, Paolo Maria Mariano, Andrea Mennucci, Marco Romito and Enrico Vicario who helped us with suggestions and comments. Special gratitude goes to Enrico Vicario for many helpful discussions on the applications.

Our special thanks also go to the editorial staff of Wiley for the excellent quality of their work.

We have tried to avoid misprints and errors. However, we would be very grateful to be notified of any errors or misprints and would be glad to receive any criticism or comments. Our e-mail addresses are:

We will try to maintain an errata corrige at the following web pages:

http://www.dma.unifi.it/∼modica
http://www.dma.unifi.it/∼poggiolini
http://www.dmi.unifi.it/∼modica
http://www.dmi.unifi.it/∼poggiolini

Chapter 1

Combinatorics

Combinatorics deals with the cardinality of classes of objects. The first example that comes to mind is computing how many triplets can be drawn from 90 different balls. In this chapter and the next we are going to compute the cardinality of several classes of objects.

1.1 Binomial Coefficients

1.1.1 Pascal Triangle

Binomial coefficients are defined as

$$ \binom{n}{k} := \frac{n!}{k!\,(n-k)!}\,, \qquad 0 \le k \le n, $$

and $\binom{n}{k} := 0$ for k > n ≥ 0.

Binomial coefficients are usually grouped in the infinite matrix

$$ C = \left( \binom{i}{j} \right)_{i, j \ge 0}, $$

called the Pascal triangle given the triangular arrangement of the nonzero entries, see Figure 1.1. Here and throughout the book we denote the entries of a matrix (finite or infinite) as $A^i_j$, where the superscript i and the subscript j mean the ith row and the jth column, respectively. Notice that the entries of each row of C are zero if the column index is large enough, namely $\binom{i}{j} = 0$ whenever j > i ≥ 0. We also recall the Newton binomial formula,

$$ (a+b)^n = \sum_{k=0}^{n} \binom{n}{k}\, a^k\, b^{n-k}\,. $$

This formula can be proven with an induction argument on n or by means of the Taylor formula.

Figure 1.1 Pascal matrix of binomial coefficients $\binom{n}{k}$, k, n ≥ 0.
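
The entries of the Pascal matrix can be generated row by row from the recurrence $\binom{i}{j} = \binom{i-1}{j} + \binom{i-1}{j-1}$. The following minimal Python sketch (our addition, not part of the original text; the helper name pascal_matrix is ours) builds the top-left block of C and checks it against the closed formula.

```python
from math import comb  # math.comb(n, k) is the binomial coefficient (Python 3.8+)

def pascal_matrix(size):
    """Top-left `size` x `size` block of the Pascal matrix C, with C[i][j] = binom(i, j)."""
    C = [[0] * size for _ in range(size)]
    for i in range(size):
        C[i][0] = 1
        for j in range(1, i + 1):
            # Pascal/Stifel recurrence: binom(i, j) = binom(i-1, j) + binom(i-1, j-1)
            C[i][j] = C[i - 1][j] + C[i - 1][j - 1]
    return C

C = pascal_matrix(6)
assert all(C[i][j] == comb(i, j) for i in range(6) for j in range(6))
for row in C:
    print(row)
```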

1.1.2 Some Properties of Binomial Coefficients

Many formulas are known on binomial coefficients. In the following proposition we collect some of the simplest and most useful ones.

Proposition 1.1
The following hold.
i., 0 ≤ k ≤ n.
ii., 1 ≤ k ≤ n.
iii.Stifel formula, 1 ≤ k ≤ n.
iv., 0 ≤ k ≤ j ≤ n.
v., 0 ≤ k ≤ n.
vi.the mapachieves its maximum at.
vii..
viii.
ix..

Estimate (ix) in Proposition 1.1 can be made more precise. For instance, from the Stirling asymptotic estimate of the factorial,

$$ n! \sim \sqrt{2\pi n}\, \left(\frac{n}{e}\right)^{n} \qquad \text{as } n \to \infty, $$

one obtains a sharper, asymptotically exact, form of estimate (ix).
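
As a quick numerical illustration of the Stirling estimate (our addition, not from the text; the helper name stirling is ours), the ratio between n! and √(2πn)(n/e)^n tends to 1 as n grows:

```python
import math

def stirling(n):
    # Stirling approximation: n! ~ sqrt(2*pi*n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (1, 5, 10, 20, 50):
    ratio = math.factorial(n) / stirling(n)
    print(f"n={n:2d}  n!/stirling(n) = {ratio:.6f}")  # approaches 1 (roughly 1 + 1/(12 n))
```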

1.1.3 Generalized Binomial Coefficients and Binomial Series

For $\alpha \in \mathbb{R}$ we define the sequence of generalized binomial coefficients as

$$ \binom{\alpha}{0} := 1, \qquad \binom{\alpha}{k} := \frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!}\,, \quad k \ge 1. $$

Notice that $\binom{\alpha}{k} = 0$ for $k > \alpha$ if $\alpha$ is a non-negative integer, and $\binom{\alpha}{k} \neq 0$ for every k if it is not. The power series

$$ \sum_{k=0}^{\infty} \binom{\alpha}{k}\, z^k \qquad (1.2) $$

is called the binomial series.

Proposition 1.2 (Binomial series)
The binomial series converges if |z| < 1 and

$$ \sum_{k=0}^{\infty} \binom{\alpha}{k}\, z^k = (1+z)^{\alpha}, \qquad |z| < 1. $$
Proposition 1.3
Let . The following hold.
i..
ii..
iii..
Proof
The proofs of (i) and (ii) are left to the reader. Proving (iii) is a matter of computation:
A few negative binomial coefficients are shown in Figure 1.2.

Figure 1.2 A few negative binomial coefficients.
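
To make the binomial series concrete, here is a small numerical sketch (ours, not from the book; gen_binom is an assumed helper, not the book's notation) comparing a truncated series $\sum_k \binom{\alpha}{k} z^k$ with $(1+z)^{\alpha}$ for a sample z with |z| < 1.

```python
def gen_binom(alpha, k):
    """Generalized binomial coefficient binom(alpha, k) for real alpha."""
    out = 1.0
    for j in range(k):
        out *= (alpha - j) / (j + 1)
    return out

def binomial_series(alpha, z, terms=60):
    # Partial sum of the binomial series; it converges to (1+z)**alpha for |z| < 1
    return sum(gen_binom(alpha, k) * z**k for k in range(terms))

alpha, z = -0.5, 0.3
print(binomial_series(alpha, z))  # ~ 0.877058
print((1 + z) ** alpha)           # (1.3)**(-0.5) ~ 0.877058
```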

1.1.4 Inversion Formulas

Theorem 1.4
The matrix $C = \left(\binom{i}{j}\right)_{i,j \ge 0}$ of binomial coefficients is invertible, and its inverse is the matrix $B$ with entries $B^i_j = (-1)^{i-j} \binom{i}{j}$, $i, j \ge 0$.
Proof
Let $B := \left((-1)^{i-j}\binom{i}{j}\right)_{i,j \ge 0}$, so that both B and CB are lower triangular, i.e. their entries with row index smaller than the column index vanish. Moreover, (iv) and (viii) of Proposition 1.1 yield, for any n ≥ k,

A few entries of the inverse of the matrix of binomial coefficients are shown in Figure 1.3. As a consequence of Theorem 1.4 the following inversion formulas hold.

Figure 1.3 The inverse of the matrix of binomial coefficients.

Corollary 1.5
Two sequences $\{a_n\}_{n \ge 0}$ and $\{b_n\}_{n \ge 0}$ satisfy

$$ b_n = \sum_{k=0}^{n} \binom{n}{k}\, a_k \qquad \text{for every } n \ge 0 $$

if and only if

$$ a_n = \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k}\, b_k \qquad \text{for every } n \ge 0. $$

Similarly,

Corollary 1.6
Two N-tuples of real numbers $(a_0, \dots, a_N)$ and $(b_0, \dots, b_N)$ satisfy

$$ b_n = \sum_{k=n}^{N} \binom{k}{n}\, a_k \qquad \text{for every } n,\ 0 \le n \le N, $$

if and only if

$$ a_n = \sum_{k=n}^{N} (-1)^{k-n} \binom{k}{n}\, b_k \qquad \text{for every } n,\ 0 \le n \le N. $$

1.1.5 Exercises

Exercise 1.8
Differentiating the power series, see Appendix A, prove the formulas in Figure 1.4.

Figure 1.4 The sum of a few series related to the geometric series.

Solution
Differentiating the identity $\sum_{k=0}^{\infty} z^k = \frac{1}{1-z}$, |z| < 1, we get for |z| < 1

$$ \sum_{k=1}^{\infty} k\, z^{k-1} = \frac{1}{(1-z)^2}\,. $$

Moreover, for any non-negative integer n, differentiating the identity n times we get, for any |z| < 1,

$$ \sum_{k=n}^{\infty} \frac{k!}{(k-n)!}\, z^{k-n} = \frac{n!}{(1-z)^{n+1}}\,, \qquad \text{i.e.} \qquad \sum_{k=n}^{\infty} \binom{k}{n}\, z^{k-n} = \frac{1}{(1-z)^{n+1}}\,. $$
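
One family of identities obtained by repeated differentiation of the geometric series is $\sum_{k \ge n} \binom{k}{n} z^{k-n} = (1-z)^{-(n+1)}$; the quick spot-check below (our addition, not part of the text) compares truncated sums with the closed form for a sample z.

```python
from math import comb

def truncated_sum(n, z, terms=200):
    # Truncated sum of binom(k, n) * z**(k - n) over k = n, n+1, ...
    return sum(comb(k, n) * z ** (k - n) for k in range(n, n + terms))

z = 0.4
for n in range(4):
    print(n, truncated_sum(n, z), 1 / (1 - z) ** (n + 1))  # the two columns agree to many digits
```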

1.2 Sets, Permutations and Functions

1.2.1 Sets

We recall that a finite set A is an unordered collection of pairwise different objects. For example, the collection A whose objects are 1, 2, 3 is a finite set which we denote as $\{1, 2, 3\}$; the collection 1, 2, 2, 3 is not a finite set, and $\{1, 2, 3\}$ and $\{3, 2, 1\}$ are the same set.

Proposition 1.9
Let A be a finite set with n elements, n ≥ 1. There are $\binom{n}{k}$ subsets of A with k elements, 0 ≤ k ≤ n.

1.2.2 Permutations

Let N be a finite set and let n be its cardinality. Without loss of generality, we can assume $N = \{1, 2, \dots, n\}$. A permutation of N is an injective (and thus bijective) mapping π: N → N. Since composing bijective maps yields another bijective map, the family of permutations of a set N is a group with respect to the composition of maps; the unit element is the identity map; this group is called the group of permutations of N. It is denoted as $S_n$. Notice that $S_n$ is not a commutative group if n ≥ 3.

For instance, the permutation π of $\{1, \dots, 6\}$ with π(1) = 2, π(2) = 3, π(3) = 1, π(4) = 4, π(5) = 6, π(6) = 5 can be written as

$$ \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 3 & 1 & 4 & 6 & 5 \end{pmatrix} $$

or, in brief, with its image-word 231465.

The set $S_n$ of permutations of $\{1, \dots, n\}$ has n! elements,

$$ \# S_n = n!\,. $$

In fact, the image π(1) of 1 can be chosen among n possible values, then the image π(2) of 2 can be chosen among n − 1 possible values and so on. Hence

$$ \# S_n = n \,(n-1) \cdots 2 \cdot 1 = n!\,. $$

1.2.2.1 Derangements

We now compute the cardinality dn of the set of permutations without fixed points, also called derangements.

Proposition 1.10
The cardinality $d_n$ of the set of derangements of $\{1, \dots, n\}$ is

$$ d_n = n!\, \sum_{k=0}^{n} \frac{(-1)^k}{k!}\,. $$
Corollary 1.11
The number dn of derangements of n points is the nearest integer to n!/e.

Figure 1.5 contains the first elements of the sequence {dn}.

Figure 1.5 From the left, the numbers d0, d1, d2, d3, … of derangements of 0, 1, 2, 3, … points.
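
The formula of Proposition 1.10 and the 'nearest integer to n!/e' characterization of Corollary 1.11 are easy to cross-check in a few lines; this sketch is our addition and the helper name derangements is ours.

```python
from math import factorial, e

def derangements(n):
    # d_n = n! * sum_{k=0}^{n} (-1)^k / k!, computed in exact integer arithmetic
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))

for n in range(8):
    print(n, derangements(n), round(factorial(n) / e))  # the last two columns agree for n >= 1
```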

1.2.3 Multisets

Another interesting structure is an unordered list of elements, possibly repeated, taken from a given set A. This structure is called a multiset on A. More formally, a multiset on a set A is a couple (A, a) where A is a given set and $a: A \to \mathbb{N}$ is the multiplicity function which counts ‘how many times’ an element x ∈ A appears in the multiset. Clearly, each set is a multiset where each object has multiplicity 1. We denote as $\{a, a, b, b, c, c, c, c, c\}$ or $a^2 b^2 c^5$ the multiset on $\{a, b, c\}$ where a and b have multiplicity 2 and c has multiplicity 5. The cardinality of a multiset (A, a) is $\sum_{x \in A} a(x)$ and is denoted by |(A, a)| or #(A, a). For instance, the cardinality of $a^2 b^2 c^5$ is 9.

If B is a subset of A, then B can also be regarded as the multiset (A, a) on A where a(x) = 1 if x ∈ B and a(x) = 0 otherwise.

Given two multisets (B, b) and (A, a), we say that (B, b) is included in (A, a) if B ⊂ A and b(x) ≤ a(x) for every x ∈ B.

1.2.4 Lists and Functions

Given a set A, a list of k objects from the set A, or a k-word with symbols in A, is an ordered k-tuple of objects of A. For instance, if $A = \{1, 2, 3\}$, then the 6-tuples (1,2,3,3,2,1) and (3,2,1,3,2,1) are two different 6-words of objects in A. In these lists, or words, repetitions are allowed and the order of the objects is taken into account. If A has n elements, since each element of the list can be chosen independently of the others, there are n possible choices for each object in the list. Hence, the following holds.

Proposition 1.13
The number of k-lists of objects from a set A of cardinality n is $n^k$.
Proposition 1.14
The number of functions from $\{1, \dots, k\}$ to $\{1, \dots, n\}$ is $n^k$.

1.2.5 Injective Functions

Proposition 1.15
The cardinality of the set of injective maps from $\{1, \dots, k\}$ to $\{1, \dots, n\}$ is

$$ n\,(n-1)\cdots(n-k+1) = \frac{n!}{(n-k)!} \quad \text{for } k \le n, \qquad \text{and } 0 \text{ for } k > n. $$

Some of these cardinalities are shown in Figure 1.6.

Figure 1.6 The cardinality of the set of injective maps for n, k ≥ 0.
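
For small k and n the counts in Propositions 1.13-1.15 can be confirmed by brute force; the sketch below (our addition) enumerates all maps f: {1,…,k} → {1,…,n} with itertools and compares with the closed formulas.

```python
from itertools import product
from math import factorial

def count_maps(k, n, predicate=lambda f: True):
    # Enumerate all maps {1,...,k} -> {1,...,n}, encoded as k-tuples of values,
    # and count those satisfying `predicate`
    return sum(1 for f in product(range(1, n + 1), repeat=k) if predicate(f))

k, n = 3, 5
assert count_maps(k, n) == n ** k                                                        # all maps
assert count_maps(k, n, lambda f: len(set(f)) == k) == factorial(n) // factorial(n - k)  # injective maps
print("all maps:", n ** k, "injective maps:", factorial(n) // factorial(n - k))
```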

1.2.6 Monotone Increasing Functions

Fix k ≤ n and consider the set of strictly monotone increasing functions from $\{1, \dots, k\}$ to $\{1, \dots, n\}$. The image-list of any such function is an ordered k-tuple of strictly increasing, hence pairwise distinct, elements of $\{1, \dots, n\}$. The k-tuple is thus identified by the subset of the elements of $\{1, \dots, n\}$ appearing in it, so that we have the following.

Proposition 1.16
The cardinality of the set of strictly monotone increasing functions from $\{1, \dots, k\}$ to $\{1, \dots, n\}$, k ≤ n, is $\binom{n}{k}$.

1.2.7 Monotone Nondecreasing Functions

Let us now consider the class of monotone nondecreasing functions from $\{1, \dots, k\}$ to $\{1, \dots, n\}$. The image-list of any such function is a nondecreasing ordered k-tuple of elements of $\{1, \dots, n\}$, so that elements can be repeated. The functions in this class are as many as the multisets with cardinality k included in a multiset (A, a), where $A = \{1, \dots, n\}$ and a(x) ≥ k for every x ∈ A. Thus, see Proposition 1.12, we have the following.

Proposition 1.17
The cardinality of the class of monotone nondecreasing functions from $\{1, \dots, k\}$ to $\{1, \dots, n\}$ is $\binom{n+k-1}{k}$.
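
Similarly, the counts in Propositions 1.16 and 1.17 can be verified by brute force for small k and n; this sketch is our addition, not part of the text.

```python
from itertools import product
from math import comb

def count_monotone(k, n, strict):
    # Count maps f: {1,...,k} -> {1,...,n}, encoded as k-tuples (f(1),...,f(k)),
    # that are strictly increasing (strict=True) or nondecreasing (strict=False).
    less = (lambda a, b: a < b) if strict else (lambda a, b: a <= b)
    return sum(1 for f in product(range(1, n + 1), repeat=k)
               if all(less(f[i], f[i + 1]) for i in range(k - 1)))

k, n = 3, 5
assert count_monotone(k, n, strict=True) == comb(n, k)           # Proposition 1.16
assert count_monotone(k, n, strict=False) == comb(n + k - 1, k)  # Proposition 1.17
print(comb(n, k), comb(n + k - 1, k))
```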

1.2.8 Surjective Functions

The computation of the number of surjective functions is more delicate. Let $s_{k,n}$ denote the number of surjective functions from $\{1, \dots, k\}$ onto $\{1, \dots, n\}$.

Every function from $\{1, \dots, k\}$ to $\{1, \dots, n\}$ is surjective onto its image. There are exactly $\binom{n}{j}$ subsets of $\{1, \dots, n\}$ with cardinality j, and there are $s_{k,j}$ different surjective functions onto each of these sets. Thus, grouping the $n^k$ functions from $\{1, \dots, k\}$ to $\{1, \dots, n\}$ according to their image, we get

$$ n^k = \sum_{j=0}^{n} \binom{n}{j}\, s_{k,j}\,. \qquad (1.5) $$

Therefore, applying the inversion formula in Corollary 1.5 we conclude the following.

Proposition 1.18
The cardinality of the set of surjective functions from $\{1, \dots, k\}$ onto $\{1, \dots, n\}$ is

$$ s_{k,n} = \sum_{j=0}^{n} (-1)^{n-j} \binom{n}{j}\, j^k = \sum_{j=0}^{n} (-1)^{j} \binom{n}{j}\, (n-j)^k\,. $$

We point out that the equality holds also if k < n, in which case both sides vanish, so that $s_{k,n} = 0$ whenever k < n.

Another useful formula for $s_{k,n}$ is an inductive one, obtained starting from $s_{k,k} = k!$ and $s_{k,n} = 0$ for any k and n with k < n.

Proposition 1.19
We have

$$ s_{k,n} = n\,\big(s_{k-1,n} + s_{k-1,n-1}\big)\,. \qquad (1.6) $$

Proof
Assume n ≥ 1 and k ≥ 1 and let f be a surjective function from $\{1, \dots, k\}$ onto $\{1, \dots, n\}$. Let A be the class of such functions f whose restriction to $\{1, \dots, k-1\}$ is surjective and let B be the class of the remaining ones. The cardinality of A is $n\, s_{k-1,n}$ because there are $s_{k-1,n}$ surjective maps from $\{1, \dots, k-1\}$ onto $\{1, \dots, n\}$ and there are n possible choices for f(k). Since the maps in B, restricted to $\{1, \dots, k-1\}$, have a range of (n − 1) elements, we infer that there are $n\, s_{k-1,n-1}$ maps of this kind. In fact, there are n subsets E of $\{1, \dots, n\}$ of cardinality n − 1 and there are $s_{k-1,n-1}$ surjective functions from $\{1, \dots, k-1\}$ onto E. Therefore,

$$ s_{k,n} = n\, s_{k-1,n} + n\, s_{k-1,n-1}\,, $$

i.e. (1.6).

Some of the values $s_{k,n}$ are shown in Figure 1.7.

Figure 1.7 The cardinality of the set of surjective maps for n, k ≥ 0.
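
The inclusion-exclusion formula of Proposition 1.18 and the recurrence (1.6) must give the same values; the sketch below (our addition, writing s(k, n) for the number of surjective functions from {1,…,k} onto {1,…,n}) checks them against each other and against brute-force enumeration for small k and n.

```python
from itertools import product
from math import comb
from functools import lru_cache

def surj_formula(k, n):
    # Proposition 1.18: s_{k,n} = sum_{j=0}^{n} (-1)^j binom(n, j) (n - j)^k
    return sum((-1) ** j * comb(n, j) * (n - j) ** k for j in range(n + 1))

@lru_cache(maxsize=None)
def surj_recurrence(k, n):
    # Recurrence (1.6): s_{k,n} = n * (s_{k-1,n} + s_{k-1,n-1}), with s_{0,0} = 1,
    # s_{k,0} = 0 for k >= 1 and s_{0,n} = 0 for n >= 1
    if k == 0:
        return 1 if n == 0 else 0
    if n == 0:
        return 0
    return n * (surj_recurrence(k - 1, n) + surj_recurrence(k - 1, n - 1))

def surj_brute(k, n):
    # Count the maps {1,...,k} -> {1,...,n} whose image has exactly n elements
    return sum(1 for f in product(range(n), repeat=k) if len(set(f)) == n)

for k in range(5):
    for n in range(5):
        assert surj_formula(k, n) == surj_recurrence(k, n) == surj_brute(k, n)
print("all three computations agree for k, n < 5")
```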

1.2.9 Exercises

Exercise 1.20
How many diagonals are there in a polygon having n edges?

1.3 Drawings

A drawing or selection of k objects from a population of n is the choice of k elements among the n available ones. We want to compute how many of such selections are possible. In order to make this computation, it is necessary to be more precise, both on the composition of the population and on the rules of the selection as, for instance, if the order of selection is relevant or not. We consider a few cases:

  • The population is made of pairwise different elements, as in a lottery: in other words, the population is a set.
  • The population is a multiset (A, a). In this case, we say that we are dealing with a drawing from A with repetitions.
  • The selected objects may be given an order. In this case we say that we consider an ordered selection. Unordered selections are also called simple selections.

Some drawing policies simply boil down to the previous cases:

  • In the lottery game, numbers are drawn one after another, but the order of drawings is not taken into account: it is a simple selection of objects from a set.
  • In ordered selections the k elements are selected one after another and the order is taken into account.
  • A drawing with replacement, i.e. a drawing from a set where each selected object is put back into the population before the next drawing, is equivalent to a drawing with repetitions, i.e. to drawing from a multiset where each element has multiplicity larger than the total number of selected objects.

1.3.1 Ordered Drawings

Ordered drawings of k objects from a multiset (A, a) are k-words with symbols taken from A.

1.3.1.1 Ordered Drawings from a Set

Ordered drawings of k objects from a set A of cardinality n are k-words with pairwise different symbols in A. By Proposition 1.15 there are

$$ n\,(n-1)\cdots(n-k+1) = \frac{n!}{(n-k)!} $$

different k-words with pairwise different symbols.

1.3.1.2 Ordered Drawings from a Multiset

Proposition 1.21
The number of ordered drawings of k elements from a multiset (A, a), where $\#A = n$ and $a(x) \ge k$ for every $x \in A$, is $n^k$.

In particular, the number of ordered drawings with replacement of k elements from A is $n^k$.

1.3.2 Simple Drawings

1.3.2.1 Drawings from a Set

The population from which we make the selection is a set A. To draw k objects from A is equivalent to selecting a subset of k elements of A: we do not distinguish selections that contain the same objects with a different ordering.

Proposition 1.22
The number of possible drawings of k elements from a set of cardinality n is $\binom{n}{k}$.

1.3.2.2 Drawings from a Multiset

The population is now a multiset (A, a). A simple drawing of k objects from (A, a) is an unordered selection S of k objects, possibly repeated, taken from A, i.e. S is a multiset of k elements included in (A, a) (cf. Proposition 1.17).

Proposition 1.23
The number of simple drawings of k elements from a multiset (A, a) is $\binom{n+k-1}{k}$, provided $\#A = n$ and $a(x) \ge k$ for every $x \in A$.

1.3.3 Multiplicative Property of Drawings

The previous results on drawings can also be obtained from the following combinatorial properties of drawings.

Theorem 1.24
For each non-negative integer k let $a_k$ and $b_k$ be the numbers of drawings of k objects from the multisets (A, a) and (B, b) made according to policies P1 and P2, respectively. If A and B are disjoint, then the number $c_k$ of drawings of k elements from the population obtained by the union of (A, a) and (B, b), made according to policies P1 and P2 for the drawings from (A, a) and (B, b), respectively, is

$$ c_k = \sum_{j=0}^{k} a_j\, b_{k-j}\,. $$

Proof
A drawing of k objects from the union of the two populations contains, say, j elements from (A, a) and k − j elements from (B, b), where j is an integer, 0 ≤ j ≤ k. The j elements drawn from (A, a) can be chosen in $a_j$ different ways, while the k − j elements drawn from (B, b) can be chosen in $b_{k-j}$ different ways, and the two choices are independent. Thus,

$$ c_k = \sum_{j=0}^{k} a_j\, b_{k-j}\,. $$

A similar result holds for ordered drawings.

Theorem 1.25
For each non-negative integer k let $a_k$ and $b_k$ be the numbers of ordered drawings of k elements from the multisets (A, a) and (B, b) made according to policies P1 and P2, respectively. If A and B are disjoint, then the number $c_k$ of ordered drawings of k elements from the population union of (A, a) and (B, b), made according to policy P1 for the elements of (A, a) and according to policy P2 for the elements of (B, b), is

$$ c_k = \sum_{j=0}^{k} \binom{k}{j}\, a_j\, b_{k-j}\,. $$

Proof
An ordered drawing of k elements from the union of the two populations contains j elements from (A, a) and k − j elements from (B, b) for some integer j, 0 ≤ j ≤ k. The j elements from (A, a) can be chosen in $a_j$ different ways, the k − j elements drawn from (B, b) can be chosen in $b_{k-j}$ different ways, and the two chosen groups are independent. Finally, there are $\binom{k}{j}$ ways to interleave the two ordered selections into a single ordered drawing. Thus,

$$ c_k = \sum_{j=0}^{k} \binom{k}{j}\, a_j\, b_{k-j}\,. $$
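
Theorems 1.24 and 1.25 say that unordered counts combine by plain convolution and ordered counts by binomial convolution. The small sketch below (our addition) illustrates this in the special case of drawings without repetition from two disjoint sets, where the counting sequences are known explicitly: binomial coefficients for simple drawings and falling factorials for ordered drawings.

```python
from math import comb, factorial

def falling(n, k):
    # Number of ordered drawings without replacement of k elements from a set of n (0 if k > n)
    return factorial(n) // factorial(n - k) if k <= n else 0

m, n = 4, 6  # cardinalities of two disjoint sets A and B
for k in range(m + n + 1):
    # Theorem 1.24: simple drawings combine by plain convolution
    simple = sum(comb(m, j) * comb(n, k - j) for j in range(k + 1))
    assert simple == comb(m + n, k)
    # Theorem 1.25: ordered drawings combine by binomial convolution
    ordered = sum(comb(k, j) * falling(m, j) * falling(n, k - j) for j in range(k + 1))
    assert ordered == falling(m + n, k)
print("convolution identities verified for k = 0, ...,", m + n)
```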

1.3.4 Exercises

Exercise 1.26
A committee of 7 people has to be chosen among 11 women and 8 men. In each of the following cases compute how many different committees can be chosen:
  1. No constraint is imposed.
  2. At least two women and at least one man must be present.
  3. There must be more women than men.
  4. At least two women and no more than three men must be present.

1.4 Grouping

Many classical counting problems amount to a collocation or grouping problem: how many different arrangements of k objects in n boxes are there? Putting it another way, how many different ways of grouping k objects into n groups are there? Also in this case a definite answer cannot be given: we must be more precise on the population to be arranged, on the rules (or policy) of the procedure, and on the way the groups are evaluated. For example, one must say whether the objects to be arranged are pairwise different or not, whether the order of the objects in each box must be taken into account or not, whether the boxes are pairwise distinct or not, and whether further constraints are imposed. Here we deal with a few cases, all referring to collocation or grouping in pairwise different boxes. We consider the formed groups as a list instead of as a set: for instance, if we start with the objects a and b, then the two arrangements in two boxes (a in the first box and b in the second) and (b in the first box and a in the second) are considered to be different.

1.4.1 Collocations of Pairwise Different Objects

Arranging k distinct objects in n pairwise different boxes is the act of deciding the box in which each object is going to be located. Since both the objects and the boxes are pairwise distinct, we may identify the objects and the boxes with the sets $\{1, \dots, k\}$ and $\{1, \dots, n\}$, respectively. Each arrangement then corresponds to a grouping map $f: \{1, \dots, k\} \to \{1, \dots, n\}$ that puts the object j into the box f(j).

1.4.1.1 No Further Constraint

In this case the set of possible locations is in a one-to-one correspondence with the set of all maps $f: \{1, \dots, k\} \to \{1, \dots, n\}$. Therefore, there are $n^k$ different ways to locate k different objects in n boxes.

If, instead, we prescribe the numbers $i_1, \dots, i_n \ge 0$ of objects to be placed in the boxes, with $i_1 + i_2 + \dots + i_n = k$, then there are $\binom{k}{i_1}$ choices for the elements in the first box, $\binom{k-i_1}{i_2}$ choices for the elements in the second box, and so on, down to $\binom{k - i_1 - \dots - i_{n-1}}{i_n}$ different choices for the elements in the nth box. Thus the different possible arrangements are

$$ \binom{k}{i_1}\binom{k-i_1}{i_2}\cdots\binom{k - i_1 - \dots - i_{n-1}}{i_n} = \frac{k!}{i_1!\, i_2! \cdots i_n!}\,; \qquad (1.7) $$

the ratio in (1.7) is called the multinomial coefficient and is denoted as

$$ \binom{k}{i_1\ i_2\ \cdots\ i_n} := \frac{k!}{i_1!\, i_2! \cdots i_n!}\,. $$

From (1.7) we infer that the possible collocations of k pairwise different objects in n pairwise different boxes are

$$ \sum_{\substack{i_1, \dots, i_n \ge 0 \\ i_1 + \dots + i_n = k}} \frac{k!}{i_1!\, i_2! \cdots i_n!} = n^k\,. $$
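
A short sketch of the multinomial coefficient in (1.7) and of the fact that these coefficients, summed over all $(i_1, \dots, i_n)$ with $i_1 + \dots + i_n = k$, give $n^k$; this is our addition and the function name multinomial is an assumed helper.

```python
from math import factorial
from itertools import product

def multinomial(k, parts):
    # k! / (i_1! i_2! ... i_n!) for a tuple `parts` with sum(parts) == k
    assert sum(parts) == k
    out = factorial(k)
    for i in parts:
        out //= factorial(i)
    return out

print(multinomial(4, (2, 1, 1)))  # 12 ways to place 4 distinct objects with box sizes 2, 1, 1

k, n = 5, 3
total = sum(multinomial(k, parts)
            for parts in product(range(k + 1), repeat=n) if sum(parts) == k)
print(total, n ** k)  # both equal 243 = 3**5
```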

1.4.1.2 At Least One Object in Each Box

We now want to compute the number of different arrangements with at least one object in each box. Assuming we have k objects and n boxes, collocations of this type are in a one-to-one correspondence with the class of surjective maps from $\{1, \dots, k\}$ onto $\{1, \dots, n\}$, thus there are

$$ s_{k,n} = \sum_{j=0}^{n} (-1)^{j} \binom{n}{j}\, (n-j)^k $$

collocations of k pairwise different objects into n pairwise different boxes that place at least one object in each box.

Alternatively, by (1.7) there are

$$ \frac{k!}{i_1!\, i_2! \cdots i_n!} \qquad (1.8) $$

ways to arrange k different objects in n boxes with $i_j$ objects in the box j. Thus the number of arrangements with no empty box is

$$ \sum_{\substack{i_1, \dots, i_n \ge 1 \\ i_1 + \dots + i_n = k}} \frac{k!}{i_1!\, i_2! \cdots i_n!}\,. \qquad (1.9) $$

1.4.1.3 At Most One Object in Each Box

We now impose a different constraint: each box may contain at most one object. Assuming we have k objects and n boxes, collocations of this type are in a one-to-one correspondence with the class of injective grouping maps from $\{1, \dots, k\}$ into $\{1, \dots, n\}$, thus there are

$$ n\,(n-1)\cdots(n-k+1) = \frac{n!}{(n-k)!} $$

collocations of this type.

1.4.1.4 Grouping Into Lists

Here we want to compute the number of ways of grouping k pairwise different objects in n pairwise different boxes when the order of the objects in each box matters. In other words, we want to compute how many different ways exist to group k objects into a list of n lists of objects.

1.4.2 Collocations of Identical Objects

We want to compute the number of ways to arrange k identical objects in n pairwise different boxes. In this case each arrangement is characterized by the number of elements in each box, that is by the map $i: \{1, \dots, n\} \to \mathbb{N}$ which counts how many objects are in each box. Obviously, $i_1 + i_2 + \dots + i_n = k$. If the k objects are copies of the number ‘1’, then each arrangement is identified by the binary sequence

$$ \underbrace{1\cdots1}_{i_1}\ 0\ \underbrace{1\cdots1}_{i_2}\ 0\ \cdots\ 0\ \underbrace{1\cdots1}_{i_n} \qquad (1.10) $$

of k ones and n − 1 zeros, where the number ‘0’ denotes the fact that we are changing box.

1.4.2.1 No Further Constraint

Let us compute the number of such arrangements with no further constraint. There is a one-to-one correspondence between such arrangements and the set of all binary sequences of the type (1.10). Therefore, see Proposition 1.12, the number of different collocations of k identical objects in n pairwise different boxes is

$$ \binom{n+k-1}{k}\,. \qquad (1.11) $$

1.4.2.2 At Least One in Each Box

We add now the constraint that each box must contain at least one object. If k < n no such arrangement is possible. If k ≥ n, we first place one object in each box so that the constraint is satisfied. The remaining k − n objects can now be collocated without constraints. Therefore, cf. (1.11), there are

$$ \binom{n + (k-n) - 1}{k-n} = \binom{k-1}{k-n} = \binom{k-1}{n-1} $$

ways to arrange k identical objects in n boxes so that no box remains empty.

1.4.2.3 At Most One in Each Box

We consider arrangements of k identical objects in n pairwise different boxes that place at most one object into each box. In this case, each arrangement is completely characterized by the subset of filled boxes. Since we can choose them in $\binom{n}{k}$ different ways, we conclude that the number of collocations of k identical objects in n pairwise different boxes with at most one object per box is $\binom{n}{k}$.
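
The three counts for identical objects (no constraint, at least one per box, at most one per box) can be confirmed by enumerating the occupation vectors $(i_1, \dots, i_n)$; the brute-force sketch below is our addition, for small k and n.

```python
from itertools import product
from math import comb

def occupations(k, n):
    # All occupation vectors (i_1, ..., i_n) of non-negative integers summing to k
    return [v for v in product(range(k + 1), repeat=n) if sum(v) == k]

k, n = 5, 3
occ = occupations(k, n)
assert len(occ) == comb(n + k - 1, k)                            # no constraint, cf. (1.11)
assert sum(1 for v in occ if min(v) >= 1) == comb(k - 1, n - 1)  # at least one object per box

k2, n2 = 3, 5
assert sum(1 for v in occupations(k2, n2) if max(v) <= 1) == comb(n2, k2)  # at most one per box
print(comb(n + k - 1, k), comb(k - 1, n - 1), comb(n2, k2))
```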

1.4.3 Multiplicative Property

Multiplicative properties analogous to those established for drawings hold for collocations as well.

Theorem 1.27
For each non-negative integer k, let $a_k$ and $b_k$ be the number of collocations of k pairwise different objects in two sets $S_1$ and $S_2$ of pairwise different boxes with policies P1 and P2, respectively. If $S_1 \cap S_2 = \emptyset$, then the number of different collocations of the k objects in $S_1 \cup S_2$ following policy P1 for collocations in boxes of $S_1$ and policy P2 for collocations in boxes of $S_2$ is

$$ c_k = \sum_{j=0}^{k} \binom{k}{j}\, a_j\, b_{k-j}\,. $$

Proof
Let j objects, 0 ≤ j ≤ k, be collocated in the boxes of the set $S_1$ and let the other k − j objects be collocated in the boxes of $S_2$. There are $a_j$ different ways of placing j objects in the boxes of $S_1$ and $b_{k-j}$ different ways of placing (k − j) objects in the boxes of $S_2$. Moreover, there are $\binom{k}{j}$ different ways to choose which objects are collocated in the boxes of $S_1$. Hence,

$$ c_k = \sum_{j=0}^{k} \binom{k}{j}\, a_j\, b_{k-j}\,. $$
A similar result holds for the collocations of identical objects.
Theorem 1.28
For each non-negative integer k, let $a_k$ and $b_k$ be the number of collocations of k identical objects in two sets $S_1$ and $S_2$ of pairwise different boxes with policies P1 and P2, respectively. If $S_1 \cap S_2 = \emptyset$, then the number of collocations of the k objects in the boxes of $S_1 \cup S_2$ made according to policy P1 for the collocations in the boxes of $S_1$ and according to policy P2 for the collocations in the boxes of $S_2$ is

$$ c_k = \sum_{j=0}^{k} a_j\, b_{k-j}\,. $$

Proof
Let j objects, 0 ≤ j ≤ k, be collocated in the boxes of the set $S_1$ and let the other k − j objects be collocated in the boxes of $S_2$. There are $a_j$ ways of placing j objects in the boxes of $S_1$ and $b_{k-j}$ different ways of placing (k − j) objects in the boxes of $S_2$. Since the objects are identical, there is no choice to be made as to which j objects are placed in the boxes of $S_1$. Then the possible different collocations are

$$ c_k = \sum_{j=0}^{k} a_j\, b_{k-j}\,. $$

1.4.4 Collocations in Statistical Physics

In statistical physics, each ‘particle’ is allowed to be in a certain ‘state’; an ‘energy level’ is associated with each state. The total energy of a system of particles depends on how many particles are in each of the possible states; the mean value of the energy depends on the probabilities that particles stay in a certain state. Thus, the number of possible collocations of the particles in the available states must be evaluated.

1.4.4.1 Maxwell-Boltzmann Statistics

This is the case of classical statistical physics: the particles are distinct and no constraint is imposed on their distribution in different states. The number of possible collocations of k particles in n states is thus the number of collocations of k pairwise different objects in n pairwise different boxes, i.e. $n^k$.

1.4.4.2 Bose–Einstein Statistics

The particles are indistinguishable and no constraint is imposed on their distribution in different states. Particles with this behaviour are called bosons. The number of collocations of k particles in n states is then the number of collocations of k identical objects in n pairwise different boxes, i.e. $\binom{n+k-1}{k}$.

1.4.4.3 Fermi–Dirac Statistics

The particles are indistinguishable and each state can be occupied by at most one particle (Pauli exclusion principle). Particles following this behaviour are called fermions. Then the number of collocations of k particles in n states is the number of possible choices for the states to be occupied, i.e. $\binom{n}{k}$. Obviously, the Pauli exclusion principle implies n ≥ k.

1.4.5 Exercises

Exercise 1.29
A group of eight people sits around a table with eight seats. How many different ways of sitting are there?
Exercise 1.32
For any non-negative integer n ≥ 0, define
Prove that

1.12

[Hint. Take advantage of (iii) of Proposition 1.3.]
Exercise 1.33
n players participate in a single-elimination tennis tournament. How many matches shall be played?
Exercise 1.34
You are going to place 4 mathematics books, 3 chemistry books and 2 physics books on a shelf. Compute how many different arrangements are possible. What if you want to keep books of the same subject together?
Exercise 1.35
A committee of 4 is to be chosen from a group of 20 people. How many different choices are there for the roles of president, chairman, secretary, and treasurer?
Exercise 1.36
How many binary words of N digits are there with more zeroes than ones?
Exercise 1.37
There are 40 cabinet ministers sitting around a circular table. One seat is reserved for the Prime Minister. In how many ways can the other ministers seat themselves?
Exercise 1.38
Thirty people meet after a long time and they all shake hands. How many handshakes are there?
Exercise 1.40
Find the number of non-negative integer solutions (x, y, z, t) of x + y + z + t ≤ 6.

Chapter 2

Probability measures

The correspondence between Blaise Pascal (1623–1662) and Pierre de Fermat (1601–1665) on some matters related to card games suggested by the nobleman and gambler Chevalier de Méré is considered to be the birth of probability. The first treatise, De Ratiociniis in ludo aleae, by Christiaan Huygens (1629–1695) was published in 1657. Other publications followed in the eighteenth century: at the beginning of the century, the Ars conjectandi by Jacob Bernoulli (1654–1705) and the Doctrine of Chances by Abraham de Moivre (1667–1754) were published in 1713 and 1718, respectively. Then the Théorie analytique des probabilités by Pierre-Simon Laplace (1749–1827), which appeared in 1812, summarized the knowledge on probability of the whole century.

The definition of probability given by Pascal for games of chance is the so-called classical interpretation: the probability of an event E is the ratio between the number of successes (the cases in which the event E happens) and the number of all possible cases:

$$ P(E) = \frac{\#\{\text{successes}\}}{\#\{\text{possible cases}\}}\,. \qquad (2.1) $$

Clearly, the definition can be criticized: it cannot define the probability of those events for which we do not know a priori the number of successes and it does not make sense when there are an infinite number of possible cases. Generally speaking, the definition does not apply to a multitude of cases which we would like to speak of in probability terms.

Another definition, given by Richard von Mises (1883–1953), is the frequentist interpretation: the probability of an event is the limit of the frequencies of successes as the number of trials goes to infinity:

$$ P(E) = \lim_{n \to \infty} \frac{\#\{\text{successes in the first } n \text{ trials}\}}{n}\,. \qquad (2.2) $$

This definition can also be easily criticized: how can the limit of a sequence be computed if the sequence itself is not known? Moreover, the limit may not exist and in many situations, such as in economics, experiments cannot be repeated.

A further definition, first given by Jacob Bernoulli (1654–1705), is the subjectivist interpretation: the probability of an event E is a number that measures the degree of belief that E may happen, expressed by a coherent individual. The ‘coherence’ is expressed by the fact that if the individual measures as p ∈ [0, 1] the degree of belief that E happens, then 1 − p is the degree of belief that E does not happen.

Another way to describe the subjectivist interpretation is via betting: the probability of E is the price that an individual, according to the information in his possession, regards as fair to pay (or to be paid) in order to receive (or pay) one if the event E happens and nothing if E does not happen. The coherence lies in a fair evaluation of the probability that E happens: the individual may be either the gambler or the bookmaker, but the evaluation remains the same. The following example is from [1].

2.1 Elementary Probability

In the classic definition by Pascal, the probability