Delayed and Network Queues

Aliakbar Montazer Haghighi

Description

Presents an introduction to differential equations, probability, and stochastic processes with real-world applications of queues with delay and delayed network queues

Featuring recent advances in queueing theory and modeling, Delayed and Network Queues provides the most up-to-date theories in queueing model applications. Balancing both theoretical and practical applications of queueing theory, the book introduces queueing network models as tools to assist in the answering of questions on cost and performance that arise throughout the life of a computer system and signal processing. Written by well-known researchers in the field, the book presents key information for understanding the essential aspects of queues with delay and networks of queues with unreliable nodes and vacationing servers.

Beginning with simple analytical fundamentals, the book contains a selection of realistic and advanced queueing models that address current deficiencies. In addition, the book presents the treatment of queues with delay and networks of queues, including possible breakdowns and disruptions that may cause delay. Delayed and Network Queues also features:
  • Numerous examples and exercises with applications in various fields of study such as mathematical sciences, biomathematics, engineering, physics, business, health industry, and economics
  • A wide array of practical applications of network queues and queueing systems, all of which are related to the appropriate stochastic processes
  • Up-to-date topical coverage such as single- and multiserver queues with and without delays, along with the necessary fundamental coverage of probability and difference equations
  • Discussions on queueing models such as single- and multiserver Markovian queues with balking, reneging, delay, feedback, splitting, and blocking, as well as their role in the treatment of networks of queues with and without delay and network reliability

Delayed and Network Queues is an excellent textbook for upper-undergraduate and graduate-level courses in applied mathematics, queueing theory, queueing systems, probability, and stochastic processes. The book is also an ideal reference for academics and practitioners in mathematical sciences, biomathematics, operations research, management, engineering, physics, business, economics, health industry, and industrial engineering.

Aliakbar Montazer Haghighi, PhD, is Professor and Head of the Department of Mathematics at Prairie View A&M University, USA, as well as founding Editor-in-Chief of Applications and Applied Mathematics: An International Journal (AAM). His research interests include probability, statistics, stochastic processes, and queueing theory. Among his research publications and books, Dr. Haghighi is the coauthor of Difference and Differential Equations with Applications in Queueing Theory (Wiley, 2013).

Dimitar P. Mishev, PhD, is Professor in the Department of Mathematics at Prairie View A&M University, USA. His research interests include differential and difference equations and queueing theory. The author of numerous research papers and three books, Dr. Mishev is the coauthor of Difference and Differential Equations with Applications in Queueing Theory (Wiley, 2013).

Page count: 582

Publication year: 2016



Table of Contents

Cover

Title Page

Copyright

Dedication

Preface

Chapter 1: Preliminaries

1.1 Basics of Probability

1.2 Discrete Random Variables and Distributions

1.3 Discrete Moments

1.4 Continuous Random Variables, Density, and Cumulative Distribution Functions

1.5 Continuous Random Vector

1.6 Functions of Random Variables

1.7 Continuous Moments

1.8 Difference Equations

1.9 Methods of Solving Linear Difference Equations with Constant Coefficients

Exercises

Chapter 2: Stochastic Processes

2.1 Introduction and Basic Definitions

2.2 Markov Chain

2.3 Markov Process

2.4 Random Walk

2.5 Up-and-Down Biased Coin Design as a Random Walk

Exercises

Chapter 3: Birth and Death Processes

3.1 Overviews of the Birth and Death Processes

3.2 Finite B–D Process

3.3 Pure Birth Process (Poisson Process)

3.4 Pure Death Process (Poisson Death Process)

Exercises

Chapter 4: Standard Queues

4.1 Introduction of Queues (General Birth and Death Process)

4.2 Remarks on Non-Markovian Queues

4.3 Stationary M/M/1 Queueing Process

4.4 A Parallel M/M/C/K with Balking and Reneging

4.5 Stationary M/M/1/K Queueing Process

4.6 Busy Period of an M/M/1/K Queue

4.7 Stationary M/M/1 and M/M/1/K Queueing Processes with Feedback

4.8 Queues with Bulk Arrivals and Batch Service

4.9 A Priority Queue with Balking and Reneging

4.10 Discrete Time M/M/1 Queueing Process, Combinatorics Method (Lattice Paths)

4.11 Stationary M/M/C Queueing Process

Exercises

Chapter 5: Queues with Delay

5.1 Introduction

5.2 A Queuing System with Delayed Service

5.3 An M/G/1 Queue with Server Breakdown and with Multiple Working Vacation

5.4 A Bulk Queuing System Under N-Policy with Bilevel Service Delay Discipline and Start-Up Time

5.5 Interrelationship between N-Policy M/G/1/K and F-Policy G/M/1/K Queues with Start-Up Time

5.6 A Transient M/M/1 Queue Under (M, N)-Policy, Lattice Path Method

5.7 Stationary M/M/1 Queuing Process with Delayed Feedback

5.8 Single-Server Queue with Unreliable Server and Breakdowns with an Optional Second Service

5.9 A Bulk Arrival Retrial Queue with Unreliable Server

5.10 Multiserver Queue with Retrial Feedback Queuing System with Two Orbits

5.11 Steady-State Stability Condition of a Retrial Queuing System with Two Orbits, Reneging, and Feedback

5.12 Batch Arrival Queue with General Service in Two Fluctuating Modes and Reneging During Vacation and Breakdowns

Exercises

Chapter 6: Networks of Queues with Delay

6.1 Introduction to Networks of Queues

6.2 Historical Notes on Networks of Queues

6.3 Jackson's Network of Queues

6.4 Robustness of Networks of Queues

6.5 A MAP Single-Server Queueing System with Delayed Feedback as a Network of Queues

6.6 Unreliable Networks of Queueing System Models

6.7 Assessment of Reliability of a Network of Queues

6.8 Effect of Network Service Breakdown

Exercises

References

Index

End User License Agreement


List of Illustrations

Chapter 2: Stochastic Processes

Figure 2.1 One-dimensional random walk.

Figure 2.2 Treatment rule, biased coin design I (BCD I).

Chapter 3: Birth and Death Processes

Figure 3.1 Transition diagram of a birth and death process. Arrows represent the possible transitions between states and the labels on the arrows represent the state transition rates between states.

Figure 3.2 Transition diagram of the dual process of the birth–death process of Figure 3.1. Arrows represent the possible transitions between states and the labels on the arrows represent the state transition rates between states.

Figure 3.3 Transition diagram of a birth and death process with staying put allowed. Arrows represent the possible transitions between states and the labels on the arrows represent the state transition rates between states.

Figure 3.4 Transition diagram of the dual birth–death process depicted in Figure 3.3. It is an absorbing B–D at states −1 and K. Arrows represent the possible transitions between states and the labels on the arrows represent the state transition rates between states.

Chapter 4: Standard Queues

Figure 4.1 Single-server Markovian queueing process, exponential interarrival times, and exponential service times; λ = average arrival rate, μ = average service rate, 1/μ = average service time.

Figure 4.2 Lattice path for the ballot problem with eight and five votes for candidates A and B, respectively

Figure 4.3 Lattice path representation of random walk.

Chapter 5: Queues with Delay

Figure 5.1 Probability of busy period approaching 1 as time becomes large; λ = 1, μ = 2.

Figure 5.2 Probability of busy period approaching 1 as time becomes large; λ = 1, μ = 10.

Figure 5.3 Difference between busy period of model discussed and standard M/M/1; λ = 1, μ = 2, τ = 0.09.

Figure 5.4 Difference between busy period of model discussed and standard M/M/1; λ = 1, μ = 10, τ = 0.08.

Figure 5.5 Cumulative probabilities approaching 1, regardless of values of τ; λ = 1, μ = 2.

Figure 5.6 Cumulative probabilities approaching 1, regardless of values of τ; λ = 2, μ = 6.

Figure 5.7 Cumulative probabilities approaching 1, regardless of values of τ; λ = 1, μ = 10.

Figure 5.8 Cumulative probabilities approaching 1, regardless of values of τ; λ = 0.02, μ = 2.

Figure 5.9 Single processor with splitting and delayed feedback

Figure 5.10 The mean orbit size, versus δ.

Figure 5.11 The idle probability decreases as the service loss probability increases; versus θ.

Figure 5.12 The surface displays an upward trend; versus λ and δ.

Figure 5.13 An increase in the number of vacations and a decrease in the probability of idle cause an increase in the probability of idle during retrial time and in the probability that the server is on vacation; versus p and b.

Figure 5.14 An increase in the repair rate and in the probability that the server is idle causes a decrease in the mean orbit size and in the probability that the server is under repair; versus α and γ.

Figure 5.15 Decrease of mean orbit size when the first type probability increases; versus p1 and .

Figure 5.16 A queuing model with two redial and two reconnect orbits.

Figure 5.17 A queuing model with two orbits, abandonment, and feedback.

Chapter 6: Networks of Queues with Delay

Figure 6.1 Cyclic queueing system.

Figure 6.2 Busy period process state transition for station 1.

Figure 6.3 Basic operational model of a hospital system.

Figure 6.4 Jennings and Véricourt's model.

Figure 6.5 The IW model as a semi-open queueing network.

Figure 6.6 IW model as a closed Jackson network.

Figure 6.7 Single processor with infinite server-buffer, splitting and delay-general batch feedback.

Figure 6.8 Density and distribution functions of a busy period with small values of h.

Figure 6.9 Graph of the joint probability distribution of the number of tasks in the system.

Figure 6.10 Graph of the cumulative joint probability distribution of the number of tasks in the system.

Figure 6.11 Three-dimensional graph of the cumulative joint probability distribution of the number of tasks in the system.

Figure 6.12 Three-dimensional graph of differences between two consecutive probabilities of the queue length with τ = 700 and 600.

Figure 6.13 A CoginfoCom scenario.

Figure 6.14 A retrial queue with components.

Figure 6.15 Mean orbit size versus service's failure rate.

Figure 6.16 Probability that the server is in limited state versus service's failure rate.

Figure 6.17 Probability that the server fails versus service's failure rate.

Figure 6.18 Mean response time of intelligent entities versus service's failure rate.

Figure 6.19 Wasted time of intelligent entities versus service's failure rate.

Figure 6.20 Queueing network of Exercise 6.7.

List of Tables

Chapter 1: Preliminaries

Table 1.1 pmf

Table 1.2 pX,Y = P(X = x and Y = y)

Table 1.3 Laplace Transforms of Some Elementary Functions

Chapter 2: Stochastic Processes

Table 2.1 First Passage Time Probabilities

Table 2.2 Response Function Values Under Logistic CDF

Table 2.3 Target Quantile Values, μ, Under Logistic CDF

Table 2.4 Target Quantile Values, μ, Under Weibull CDF, α = 5.5, and β = 1.5

Table 2.5 Target Quantile Values, μ, Under Weibull CDF, α = 7.0, and β = 3

Chapter 3: Birth and Death Processes

Table 3.1 Transient Probability Distribution of the Number of Tasks in the System with K = 39 and c = 3

Table 3.2 Transient Probability Distribution of the Number of Tasks in the System with K = 7 and c = 3

Table 3.3 Transient Probability Distribution of the Number of Tasks in the System by Randomization Method with K = 7, c = 3, and m Being Preassigned

Table 3.4 Execution Time to Obtain the Steady-State Solution for the Present Method for Various Values of N with c = 3

Chapter 4: Standard Queues

Table 4.1 Distribution of a Busy Period with N = 7 and c = 3

Table 4.2 Summary of Values for Case M1 = 1 and M2 = 7

Table 4.3 Calculation of n(x, y, 1) for 0 ≤ y ≤ x ≤ 10

Table 4.4 Calculation of B(x, y) for x, y ≤ 10

Table 4.5 Probability that Throughout the Counting the Number of Votes Registered for Candidate A, x, is Always Greater than the Number of Votes Registered for Candidate B, y, for x, y ≤ 10

Chapter 5: Queues with Delay

Table 5.1 Arrival Rate, λ, versus Mean System Size, Lb, in Regular Service Period

Table 5.2 Arrival Rate, λ, versus Mean Waiting Time, Wb, in Regular Service Period

Table 5.3 Arrival Rate, λ, versus Mean System Size, Lν, in Regular Service Period

Table 5.4 Arrival Rate, λ, versus Mean Waiting Time, Wν, in Regular Service Period

Table 5.5 Interrelationship between the N-Policy and F-Policy Problems

Table 5.6 qs = 0, qo = 0.6, pf = 0.4, a = qoμ, b = pfμ, d = 0.6, and r = b/μ = 0

Table 5.7 Influence of the Reliability Factors α1 and α2

Table 5.8 Effect of Negative Arrival Probability, δ, on P0, Lq, and FLoss

Table 5.9 Effect of Service Loss Probability, θ, on P0, Lq, and P

Table 5.10 Effect of Failure Probability, , on P0, Lq, and P

Table 5.11 Effect of Number of Vacations, J, on P0, P, and Ω

Table 5.12 Effect of Repair Rate of FTS, ξ1, on P0, Lq, and R1

Chapter 6: Networks of Queues with Delay

Table 6.1 Expected Value and Variance of Length of a Busy Period for Three Sets of Data Points

Table 6.2 Number of Busy Periods for Example 6.5.2

Table 6.3 Expected Value and Variance of Length of a Busy Period for Three Sets of Data Points

Table 6.4 Distribution of the Number of Tasks in the System (Both Stations), the First 18 Rows and 12 Columns of the Probability Distribution Matrix

Table 6.5 Differences between Two Consecutive pdfs of the Queue with τ = 700 and τ = 600

Table 6.6 Maximum Differences between Two Consecutive Probabilities in the Distribution of Queue Size

Table 6.7 Data Used for Numerical Example

Delayed and Network Queues

 

Aliakbar Montazer Haghighi

Department of Mathematics
Prairie View A&M University
Member of the Texas A&M University System
Prairie View, Texas, USA

 

Dimitar P. Mishev

Department of Mathematics
Prairie View A&M University
Member of the Texas A&M University System
Prairie View, Texas, USA

 

 

 

Copyright © 2016 by John Wiley & Sons, Inc. All rights reserved

Published by John Wiley & Sons, Inc., Hoboken, New Jersey

Published simultaneously in Canada

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Names: Haghighi, Aliakbar Montazer, author. | Mishev, D. P. (Dimiter P.), author.

Title: Delayed and network queues / Aliakbar Montazer Haghighi, Prairie View A&M University, member of Texas A&M University System, Prairie View, Texas, USA, Dimitar P. Mishev, Prairie View A&M University, member of Texas A&M University System, Prairie View, Texas.

Description: Hoboken, New Jersey : John Wiley & Sons, 2016. | Includes bibliographical references and index.

Identifiers: LCCN 2016015749 (print) | LCCN 2016020734 (ebook) | ISBN 9781119022138 (cloth) | ISBN 9781119022145 (pdf) | ISBN 9781119022152 (epub)

Subjects: LCSH: Routing (Computer network management)-Mathematics. | Computer networks--Mathematical models. | Telecommunication--Traffic. | Queuing networks (Data transmission)

Classification: LCC TK5105.5487 .H34 2016 (print) | LCC TK5105.5487 (ebook) | DDC 004.601/51982-dc23

LC record available at https://lccn.loc.gov/2016015749

I have always appreciated my lovely wife, Shahin Hamidi, for putting up with me during our roughly 45 years of marriage. A significant part of what was supposed to be our "quality time" together I have squandered on research and publication. Fully aware of her involuntary sacrifice, I have always devoted my scholarly work and achievements to her, along with others.

This volume (my fourth in the English language) is also in her honor, which she has graciously consented to share with my late parents, Roghayyeh Jafari and Mohammad Ali Montazer-Haghighi. I am confident, yet continue to pray, that both remain in the peaceful and merciful hands of the Almighty Allah, in whom they lived their lives.

“To Allah we belong and to HIM we will return”

Aliakbar Montazer Haghighi
Houston, Texas, USA
December 31, 2015

I would like to dedicate the book to my wife, Tatiana, for her patience and encouragement, to my children Elizabeth, Antoan and Ilia, and to my brother Iordan and his family.

Dimitar P. Mishev
Houston, Texas, USA
December 31, 2015

Preface

Queueing theory is considered a part of Operations Research, in which the concept of stochastic processes is widely applied. With advancements in manufacturing and production systems, public transportation, logistics, and health care, in this era of technological development made possible by wireless communication and the Internet, performance evaluation of such systems is best accomplished by modeling the system as a network of queues. A network of queues is applied to model a real-world system when a set of resources is shared by some components of the system. Thus, queueing network modeling is a methodology for the analysis of systems such as computer systems. Many examples of networks of queues that appear in our day-to-day lives sometimes go unnoticed. The assembly of an automobile in a car manufacturing plant is a simple example.

In the 1960s, the computer industry was not very aware of the need for queueing network analysis; instead, the data processing industry relied on the idea of time sharing. Networks became fashionable only by the end of that decade. New challenges developed in the 1970s and continued as wireless communication systems came under study in the 1980s, and they persist to the present time. By now, the algorithms for evaluating queueing network models are well developed, and deep mathematical sophistication is no longer essential for analyzing systems such as computer systems.

For a long time, however, network models rested on the assumption of reliable nodes. Node breakdowns soon proved this assumption unrealistic, both for the Internet and wireless systems and for industries such as production lines and automobile manufacturing. Such breakdowns are, indeed, a source of delay in the performance analysis of networks of queues, and this delay is different from the standard waiting time in a service line. Because this delay is important, affecting not only response times but also, for instance, the economics of a production line, we have allocated part of Chapter 6 to discussing the unreliability of networks.

This book is an attempt to present some queueing models and networks of queues with delay, in a style that is between a monograph and a textbook. We also hope that it will satisfy the need for a handbook for researchers and practitioners.

The principal audience of the book consists of senior undergraduates, master's and doctoral students, and professionals and researchers in the mathematical sciences, stochastic processes, operations research, and engineering. The material included in the book should also be of interest to those involved in computer network performance analysis. It shows how to apply queueing network models as tools to help answer questions on cost and performance that arise throughout the life of a computer system. Our intention is to provide enough information that readers can fully understand the essential aspects of queues with delay and networks of queues with unreliable nodes that may cause delay. We have omitted numerical examples for some models, as the development of the fundamentals of those models is sufficiently detailed to understand them.

We have included results of some recent research studies to ensure that the contents of the book are current.

Particular advantages of this text are as follows:

1. Treatment of queues with delay and networks of queues with possible breakdowns and disruptions that may cause delay.

2. Inclusion of recently published material on the subject, so that the book is topical.

3. Complementation by modern technology (including current software packages).

We have divided the book into six chapters. In Chapter 1, basics of probability and some preliminary materials are discussed as mind refreshers and background material for later use. Chapter 2 contains topics in stochastic processes that are the basis for queueing theory. Birth and death processes are discussed, though not in detail, in Chapter 3. Some standard queues, such as single-server and multiserver queues and queues with feedback, with some variations and generalizations, are presented in Chapter 4. Chapter 5 is devoted to queues with delay; different types of models that may cause delay in performance and service are discussed in this chapter. In the final chapter, Chapter 6, networks of queues with delay and models with reliable and unreliable nodes, particularly Jackson's types, are discussed. This chapter also includes discussions of the assessment of reliability of a network of queues and the effect of network service breakdown. The splitting (relatively new and still developing) and blocking features are discussed in both Chapters 5 and 6.

In addition to calculus and differential equations (to the level taught in most undergraduate engineering programs), some knowledge of difference equations is necessary to better comprehend the contents of this book. Related questions are included in the "Exercises" section at the end of each chapter.

It is our wish that readers of this text have an exciting experience going through this lavish collection of queueing models. We take one final opportunity to express our gratitude to John Wiley & Sons, Inc., New Jersey, for offering and awarding us the contract to write this book, as well as to those who helped us in this work.

Aliakbar Montazer Haghighi
Dimitar P. Mishev

Houston, Texas, USA
January 25, 2016

Chapter 1
Preliminaries

1.1 Basics of Probability

1.1.1 Introduction

In this chapter, we introduce some basics of probability that will be needed in later chapters. We also take the liberty of stating some theorems without presenting proofs, and we emphasize that the contents of this chapter by no means represent all topics of probability that deserve detailed discussion.

A chance or random experiment is an experiment whose outcomes are uncertain. A set of outcomes is called an event. The set of all possible outcomes is referred to as the sample space, denoted by Ω. Thus, an event is a subset of the sample space, and an element of the sample space is a sample point.

Two events A and B are referred to as mutually exclusive if their intersection is empty. A set of events {B1, B2, …, Bn} is called a partition of Ω if the events are mutually exclusive and B1 ∪ B2 ∪ ⋯ ∪ Bn = Ω.

The probability of an event E, denoted by P(E), is a number between 0 and 1 (inclusive) describing the likelihood of occurrence of the event E. Two particular events deserve mention: an event with probability 1 (referred to as an almost sure event) and one with probability 0 (referred to as a null or impossible event).

For a finite sample space with n elements, if all outcomes have the same chance of occurring, each member is assigned a probability of 1/n and the sample space is called equiprobable. In the case of an infinite sample space, elements are assigned a uniform measure.

If a chance experiment is repeated, the chance of occurrence of an outcome is approximated by the ratio of the number of occurrences of the outcome to the total number of repetitions, referred to as the relative frequency. Hence, for a sample space with n equiprobable points, the relative frequency approximates the probability of an event with k points, which is k/n. In other words, for a sample space with equiprobable sample points, if E is an event with k points, the probability of the event E is given by

1.1 P(E) = k/n.

The number of elements of the event E is referred to as the "size" of E. Thus, the probability of the event E may be defined as

1.2 P(E) = size of E / size of Ω.
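The relative-frequency interpretation behind (1.1) is easy to see by simulation. The sketch below (illustrative Python, not from the text) repeats a die-rolling experiment many times and compares the observed frequency of the event "the roll is even" with its classical probability k/n = 3/6:

```python
import random

random.seed(42)  # fixed seed so repeated runs are reproducible

# Event E: the roll is even, i.e., E = {2, 4, 6} in the sample space {1, ..., 6}.
# Classical probability: k/n = 3/6 = 0.5.
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) % 2 == 0)

relative_frequency = hits / trials
print(relative_frequency)  # close to 0.5 for a large number of trials
```

As the number of repetitions grows, the relative frequency settles near the classical probability, which is the sense in which (1.1) describes long-run behavior.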

The triplet (Ω, ℱ, P) is called the probability space associated with the random experiment, where

a. Ω is the sample space, that is, the set of all outcomes of the random experiment.

b. ℱ is the collection of all possible events drawn from Ω, which has the structure of a σ-field. This means that ℱ satisfies the following conditions:

i. the empty set is in ℱ,

ii. if E ∈ ℱ, then the complement of E is also in ℱ, and

iii. if E1, E2, … ∈ ℱ, then E1 ∪ E2 ∪ ⋯ ∈ ℱ.

c. P is the probability (measure) of an event. In fact, P is a function that associates a number P(E) with each element E of ℱ, with the following properties (called axioms of probability):

Axiom 1. 0 ≤ P(E) ≤ 1 for each event E in ℱ.

Axiom 2. P(Ω) = 1.

Axiom 3. For any sequence E1, E2, … of mutually exclusive events (disjoint sets, that is, Ei ∩ Ej = ∅ if i ≠ j) in ℱ,

P(E1 ∪ E2 ∪ ⋯) = P(E1) + P(E2) + ⋯.
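The axioms can be checked concretely on a small finite space. The sketch below (illustrative Python, not from the text) builds the equiprobable space for one roll of a fair die, defines P as in (1.2), and verifies the three axioms for a pair of disjoint events:

```python
from fractions import Fraction

# Equiprobable sample space for one roll of a fair die.
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    """P(E) = size of E / size of omega for an equiprobable space."""
    assert event <= omega, "an event must be a subset of the sample space"
    return Fraction(len(event), len(omega))

low = {1, 2}    # roll is 1 or 2
high = {5, 6}   # roll is 5 or 6 (disjoint from `low`)

# Axiom 1: probabilities lie in [0, 1].
assert 0 <= prob(low) <= 1
# Axiom 2: the whole sample space has probability 1.
assert prob(omega) == 1
# Axiom 3 (finite case): additivity over mutually exclusive events.
assert prob(low | high) == prob(low) + prob(high)

print(prob(low | high))  # 2/3
```

Using exact fractions rather than floats keeps the additivity check free of rounding artifacts.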

1.1.2 Conditional Probability

For the probability space (Ω, ℱ, P), let B be an event (that is, B ∈ ℱ) with P(B) > 0. Then, for any event A, the conditional probability of A given B, denoted by P(A | B), is given by

1.3 P(A | B) = P(A ∩ B)/P(B).

If P(B) = 0, then P(A | B) is not defined.

It should be noted that conditional probability exhibits properties similar to ordinary probability, but restricted to a smaller space.
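Conditional probability as a ratio of event sizes can be computed directly on a finite equiprobable space. A minimal Python sketch (the events and helper names are illustrative, not from the text):

```python
from fractions import Fraction

# Sample space: ordered results of two tosses of a fair coin.
omega = {("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")}

def prob(event):
    """Equiprobable space: P(E) = |E| / |omega|."""
    return Fraction(len(event & omega), len(omega))

def cond_prob(a, b):
    """P(A | B) = P(A and B) / P(B); undefined when P(B) = 0."""
    if prob(b) == 0:
        raise ValueError("conditional probability is not defined when P(B) = 0")
    return prob(a & b) / prob(b)

a = {w for w in omega if w[0] == "H"}   # first toss is heads
b = {w for w in omega if "H" in w}      # at least one head

print(cond_prob(a, b))  # 2/3
```

Restricting attention to B shrinks the sample space from four outcomes to three, which is why the answer is 2/3 rather than 1/2.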

One of the concepts often needed is the “independence” of events. We offer the definition for two events that can be easily expanded. Hence, we have the following:

Two events A and B are independent if and only if

1.4 P(A ∩ B) = P(A)P(B).

In other words, the occurrence of one does not affect the chance of occurrence of the other. Relation (1.4) can be extended to an arbitrary number of events: an arbitrary family of events {Ai, i ∈ I}, where I is a subset of the set of natural numbers, is independent if

1.5 P(Ai1 ∩ Ai2 ∩ ⋯ ∩ Ain) = P(Ai1)P(Ai2) ⋯ P(Ain)

for every finite subset of indices {i1, i2, …, in} ⊆ I.
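The product criterion (1.4) can be tested numerically. In the sketch below (illustrative Python, not from the text), the results of two fair coin tosses are independent, while "first toss is heads" and "both tosses are heads" are not:

```python
from fractions import Fraction
from itertools import product

# Sample space: ordered outcomes of two fair coin tosses.
omega = set(product("HT", repeat=2))

def prob(event):
    return Fraction(len(event), len(omega))

def independent(a, b):
    """Two events are independent iff P(A and B) = P(A) P(B)."""
    return prob(a & b) == prob(a) * prob(b)

first_heads = {w for w in omega if w[0] == "H"}
second_heads = {w for w in omega if w[1] == "H"}
both_heads = {("H", "H")}

print(independent(first_heads, second_heads))  # True
print(independent(first_heads, both_heads))    # False
```

For the second pair, P(A ∩ B) = 1/4 while P(A)P(B) = 1/2 · 1/4 = 1/8, so the product criterion fails.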

As a consequence of Equation (1.4), it can easily be proved that if two events A and B are independent and P(B) > 0, then

1.6 P(A | B) = P(A),

and conversely, if P(B) > 0 and Equation (1.6) holds, then A and B are independent.

As another application of conditional probability, the following is called the multiplicative law. Although we state it for only two events, it can be extended to any finite number of events. For any two events A and B, as long as P(A) or P(B) is positive, we have

1.7 P(A ∩ B) = P(A | B)P(B) = P(B | A)P(A).

Another easily verified property of conditional probability, called the law of total probability or total probability theorem, is the following: if {B1, B2, …, Bn} is a partition of the sample space Ω, that is, n mutually exclusive events whose union is Ω, then for any arbitrary event E, we have

1.8 P(E) = P(E | B1)P(B1) + P(E | B2)P(B2) + ⋯ + P(E | Bn)P(Bn).

Using Equation (1.8) and the conditional probability, a very important theorem, called Bayes' theorem, can be proved. It can be stated as follows: if E is an event with P(E) > 0 and {B1, B2, …, Bn} is a partition of the sample space Ω, then

1.9 P(Bj | E) = P(E | Bj)P(Bj) / [P(E | B1)P(B1) + ⋯ + P(E | Bn)P(Bn)].

Relation (1.9) is referred to as Bayes' formula. The conditional probability P(Bj | E) is called the posterior probability. The original probability P(Bj) is called the prior probability of Bj.
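Formulas (1.8) and (1.9) are easy to apply numerically. The Python sketch below uses a two-machine factory example; the priors and defect rates are made-up numbers for illustration only:

```python
# A factory has two machines: machine 1 produces 60% of the items,
# machine 2 the remaining 40% (prior probabilities of the partition).
prior = {1: 0.6, 2: 0.4}

# Probability that an item is defective, given which machine made it
# (assumed values for the example).
p_defect_given = {1: 0.02, 2: 0.05}

# Law of total probability (1.8): P(defective).
p_defect = sum(p_defect_given[m] * prior[m] for m in prior)

# Bayes' formula (1.9): posterior probability that a defective item
# came from machine 2.
posterior_2 = p_defect_given[2] * prior[2] / p_defect

print(round(p_defect, 4))     # 0.032
print(round(posterior_2, 4))  # 0.625
```

Note how the posterior 0.625 exceeds the prior 0.4: observing a defect shifts belief toward the machine with the higher defect rate.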

1.2 Discrete Random Variables and Distributions

Although sample points are numerical in many practical problems, they are nonnumerical in many others. For practical purposes, a numerical sample space is more desirable. The tool for quantifying a sample space into a numerical one is the random variable.

Before defining a random variable, we define a countable set. A set is called countable if it has the same number of elements (cardinality) as some subset of the set of natural numbers ℕ. If that subset of ℕ is finite, the set is sometimes referred to as finitely countable or at most countable. If it has the same cardinality as ℕ itself, it is sometimes referred to as countably infinite. A set that is not countable is called uncountable. Throughout the book, we may use any of these terms as may be appropriate. Thus, for us, the set of natural numbers ℕ = {1, 2, 3, …}, the set of natural numbers including zero {0, 1, 2, …}, the set of integers ℤ, and the set of rational numbers (fractions) ℚ are all countably infinite sets, while a finite set such as {1, 2, …, n} is at most countable or finitely countable. We note that all countably infinite sets are of the same size, infinite. However, the set of real numbers ℝ and any interval of positive length are not countable. The latter was first proved by Georg Cantor using his diagonal argument.

Thus, a random variable is defined as a function (mapping) that assigns a numerical value (or a set of values) to each sample point. Hence, if X is a random variable, it assigns a value X(ω) to each outcome ω in Ω. If the function X maps an at most countable sample space Ω into the set of real numbers ℝ, the random variable X is called a discrete random variable. Thus, a random variable is discrete if it takes at most countably many values; in other words, if there is an at most countable set of real numbers, say A, such that P(X ∈ A) = 1.

For the sake of convenience, we may allow a discrete random variable to assume positive infinity as one of its values; an example is the waiting time for an event to occur for the first time, since the event may never occur.

We leave it as an exercise for the reader to prove the following properties of discrete random variables: if X and Y are two discrete random variables, then X + Y, XY, and X/Y are also discrete random variables, the last one provided Y is never zero.

The probability distribution of a discrete random variable indicates the assignment of probabilities over the entire set of values of the random variable.

It is important to note that probabilities are nonnegative real numbers and that the total assignment of probabilities must be 1. These two simple properties establish the two conditions for a function to be a probability distribution function.

Let the discrete random variable X be defined on a sample space Ω, with a typical value x. Then, the probability mass function (pmf) of X, denoted by p_X(x), is defined as p_X(x) = P(X = x). If no specific value is given, we denote the pmf by p_X. The pmf is sometimes described by a table or matrix. For instance, if X is a random variable with general value x and specific values x_1, x_2, …, x_n that are assigned probabilities p_1, p_2, …, p_n, then the pmf can be written as in Table 1.1.

Table 1.1 pmf

  x        x_1   x_2   …   x_n
  p_X(x)   p_1   p_2   …   p_n

It is important to note that, according to the second axiom of probability, Σ_x p_X(x) = 1, where x varies over all possible values of X.

When two random variables X and Y have the same distribution, that is, p_X(x) = p_Y(x) for all x, we say that X and Y are equally distributed.

Example 1.2.1 Bernoulli Distribution

The simplest chance experiment (that is, an experiment whose outcomes are determined by chance, also called a trial) is an experiment with only two outcomes. Such an experiment is called a Bernoulli trial. Independent repetitions of such trials are referred to as Bernoulli trials. The random variable representing a Bernoulli trial is called a Bernoulli random variable. Thus, the sample space for a Bernoulli random variable has two sample points, referred to as success and failure. Denoting the probability of a success by p, 0 < p < 1, the probability of a failure will be 1 − p. Thus, if X is a Bernoulli random variable with values 1 and 0, corresponding to success and failure, respectively, then the distribution of X is given by

P(X = 1) = p,  P(X = 0) = 1 − p,    (1.10)

with 0 < p < 1.

Relation (1.10) can also be expressed as

p_X(x) = p^x (1 − p)^{1−x},  x = 0, 1.    (1.11)

To see that Equation (1.11) defines a distribution function, in this case the Bernoulli probability distribution function, we note that p^x (1 − p)^{1−x} ≥ 0 for x = 0, 1 and that p + (1 − p) = 1.

Example 1.2.2 The Indicator Function

For an event A in the probability space, the indicator function of A is defined as the random variable

I_A(ω) = 1 if ω ∈ A,  I_A(ω) = 0 if ω ∉ A.    (1.12)

Hence, P(I_A = 1) = P(A) and P(I_A = 0) = 1 − P(A). In other words, the distribution function of an indicator function is a Bernoulli distribution with parameter p = P(A). Because of this fact, a Bernoulli random variable is sometimes called an indicator random variable. The random variable I_A is an indicator of the event A: when I_A(ω) = 1, ω is in the event A, the first part of Equation (1.12). For example, suppose two dice are rolled. Let X_1 and X_2 be random variables representing the numbers shown on the first and second die, respectively. Thus, the sample space for each of these discrete random variables is {1, 2, 3, 4, 5, 6}, and the sample space for the pair (X_1, X_2) is the cross-product {1, 2, …, 6} × {1, 2, …, 6}, containing 36 elements, which are ordered pairs. Suppose that we are interested in the event that the sum of X_1 and X_2 equals 7. Let the indicator of this event be denoted by Z, that is,

Z = I_{{X_1 + X_2 = 7}}.    (1.13)

Thus, we have to compute P(Z = 1). Now, for the sum to equal 7, we must obtain one of the ordered pairs (1,6), (2,5), (3,4), (4,3), (5,2), and (6,1). The probability of obtaining each of these pairs is 1/36. Thus, P(Z = 1) = 6/36 = 1/6.
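The two-dice computation above can be verified by brute-force enumeration; the following Python sketch lists the 36 equally likely outcomes and evaluates the indicator of the event {X_1 + X_2 = 7} of Equation (1.13).

```python
from itertools import product

# Enumerate the 36 equally likely outcomes of rolling two dice and
# pick out those where the indicator Z of {X1 + X2 = 7} equals 1.
omega = list(product(range(1, 7), repeat=2))
favorable = [pair for pair in omega if sum(pair) == 7]

p_z_equals_1 = len(favorable) / len(omega)
print(favorable)     # [(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)]
print(p_z_equals_1)  # 6/36 = 1/6
```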

Example 1.2.3 Binomial Distribution

Consider Example 1.2.1. Let us assume that a Bernoulli trial is independently repeated n times, and let the random variable X represent the number of successes in the n trials. Obviously, the sample space in this case has 2^n sample points. The number of ways k successes can occur in n trials is the binomial coefficient C(n, k) = n!/(k!(n − k)!). Requiring k successes in n trials leaves n − k failures, and with the trials being independent, the probability of any particular succession of k successes and n − k failures is the product p^k (1 − p)^{n−k}. Thus, the pmf of X, which is called a binomial random variable, is given by

p_X(k) = C(n, k) p^k (1 − p)^{n−k},  k = 0, 1, …, n.    (1.14)

Relation (1.14) is, indeed, a probability distribution function, called the binomial distribution function with parameters n and p. We leave the proof as an exercise for the reader.
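A minimal Python sketch of Equation (1.14), with a numeric check that the probabilities over k = 0, …, n sum to 1 (the parameter values n = 10, p = 0.3 are chosen only for illustration):

```python
from math import comb

def binomial_pmf(k, n, p):
    """Binomial pmf (1.14): C(n, k) p^k (1 - p)^(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Sanity check that (1.14) is a probability distribution:
n, p = 10, 0.3
total = sum(binomial_pmf(k, n, p) for k in range(n + 1))
print(total)  # 1.0 (up to floating-point rounding)
```

The check mirrors the binomial theorem: Σ_k C(n, k) p^k (1 − p)^{n−k} = (p + (1 − p))^n = 1.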

Example 1.2.4 Poisson Distribution

The random variable X with probability distribution function (or pmf)

p_X(k) = e^{−λ} λ^k / k!,  k = 0, 1, 2, …,    (1.15)

where λ > 0 is a constant, is called a Poisson random variable, and Equation (1.15) is called the Poisson distribution function (or Poisson pmf) with parameter λ. We leave it as an exercise to show that Equation (1.15) is, indeed, a distribution function.
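A short Python sketch of the Poisson pmf (1.15); since the support is infinite, the check truncates the series far into the tail, where the partial sum is as close to 1 as desired (λ = 4 is an arbitrary illustrative value):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Poisson pmf (1.15): e^(-lambda) lambda^k / k!."""
    return exp(-lam) * lam**k / factorial(k)

lam = 4.0
partial = sum(poisson_pmf(k, lam) for k in range(100))
print(partial)  # approximately 1.0
```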

The discrete joint pmf of the random variables X_1, X_2, …, X_n, with real values x_1, x_2, …, x_n, denoted by p_{X_1, …, X_n}(x_1, …, x_n), is defined by

p_{X_1, …, X_n}(x_1, …, x_n) = P(X_1 = x_1, X_2 = x_2, …, X_n = x_n),    (1.16)

where p_{X_1, …, X_n} is nonnegative and sums to 1 over all n-tuples (x_1, …, x_n).

From Equation (1.16), each individual pmf, p_{X_i}(x_i), is called a marginal mass function; that is,

p_{X_i}(x_i) = Σ p_{X_1, …, X_n}(x_1, …, x_n),    (1.17)

where the summation is over all possible n-tuples with the ith component held fixed at a specific value x_i.

We list two properties of a random variable.

Property 1.2.1

We consider two random variables X and Y, with respective pmfs p_X(x) and p_Y(y), and denote their joint mass function by p_{X,Y}(x, y). Then, the conditional mass function of X given Y is defined by

p_{X|Y}(x | y) = p_{X,Y}(x, y) / p_Y(y),    (1.18)

provided that p_Y(y) > 0.

As observed in the definition of the conditional mass function (1.18), random variables can be substituted for events. Thus, Equation (1.18) produces a different probability distribution for each value of y.

The pmf of the conditional random variable X | Y = y (read as “X given Y = y”) is given by

p_{X|Y=y}(x) = P(X = x | Y = y) = p_{X,Y}(x, y) / p_Y(y)    (1.19)

for values x of X.

Example 1.2.5 Conditional pmf

Let X and Y be jointly distributed as shown in Table 1.2.

Table 1.2 p_{X,Y}(x, y) = P(X = x and Y = y)

  Y \ X           2      4      6    Marginal pmf of Y
  0             0.10   0.03   0.04        0.17
  1             0.01   0.16   0.07        0.24
  2             0.15   0.06   0.00        0.21
  3             0.05   0.09   0.24        0.38
  Marginal
  pmf of X      0.31   0.34   0.35        1.00

From Table 1.2, p_Y(1) = 0.01 + 0.16 + 0.07 = 0.24. Then, the conditional mass function of X given Y = 1 is

p_{X|Y}(2 | 1) = 0.01/0.24,  p_{X|Y}(4 | 1) = 0.16/0.24,  p_{X|Y}(6 | 1) = 0.07/0.24.    (1.20)
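The computation in Equation (1.20) can be reproduced directly from the joint pmf of Table 1.2, as the following Python sketch shows:

```python
# Joint pmf from Table 1.2: (x, y) -> P(X = x and Y = y)
joint = {
    (2, 0): 0.10, (4, 0): 0.03, (6, 0): 0.04,
    (2, 1): 0.01, (4, 1): 0.16, (6, 1): 0.07,
    (2, 2): 0.15, (4, 2): 0.06, (6, 2): 0.00,
    (2, 3): 0.05, (4, 3): 0.09, (6, 3): 0.24,
}

# Marginal pmf of Y at y = 1 (Equation (1.17)).
p_y1 = sum(p for (x, y), p in joint.items() if y == 1)

# Conditional mass function (1.18): p(x | 1) = p(x, 1) / p_Y(1).
cond = {x: joint[(x, 1)] / p_y1 for x in (2, 4, 6)}
print(p_y1)  # 0.24
print(cond)  # {2: 0.01/0.24, 4: 0.16/0.24, 6: 0.07/0.24}
```

As expected, the conditional probabilities sum to 1, since for fixed y = 1 they form a pmf in x.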

Keeping Equation (1.5) for events in mind, we leave it as an exercise to prove that two random variables X and Y, with respective pmfs p_X(x) and p_Y(y) and joint mass function p_{X,Y}(x, y), are independent if and only if

p_{X,Y}(x, y) = p_X(x) p_Y(y)  for all x and y.    (1.21)

Let X, Y, and Z be discrete random variables. Then, X and Y are said to be conditionally independent, given Z, if

P(X = x, Y = y | Z = z) = P(X = x | Z = z) P(Y = y | Z = z)    (1.22)

for all x, y, and z such that P(Z = z) > 0. As a consequence of Equation (1.22), if X and Y are conditionally independent given Z, then

P(X = x | Y = y, Z = z) = P(X = x | Z = z)    (1.23)

for all x, y, and z such that P(Y = y, Z = z) > 0.

Property 1.2.2 Functions of a Random Variable

Let X be a discrete random variable with pmf p_X(x), where x is a real number representing the values of X. Let y = g(x) be a real-valued function of the real variable x. Hence, g transforms the random variable X into the random variable Y = g(X), whose values are denoted by y. Thus, as X maps a sample point s to a real number, so does g(X). This indicates that the cumulative distribution function (cdf) of Y depends on g and the cdf of X.

We note that the domain of g should contain the range of X. Hence, the pmf of Y can be expressed as

p_Y(y) = Σ_x p_X(x) I_{{g(x) = y}},    (1.24)

where I is the indicator function.

1.3 Discrete Moments

For k integers n_1, n_2, …, n_k, the arithmetic average, denoted by n̄, is defined as the sum of the integers divided by k. In other words, the arithmetic average of the k integers is obtained by multiplying each number by 1/k and adding, that is,

n̄ = (1/k)(n_1 + n_2 + ⋯ + n_k).    (1.25)

Similarly, let X be a random variable with values x_1, x_2, …, x_n and corresponding probabilities p_1, p_2, …, p_n. Then, the mean (or weighted average, expected value, expectation, or mathematical expectation) of X, denoted by E(X), is defined as

E(X) = Σ_{i=1}^{n} x_i p_i.    (1.26)

Note that each p_i is the weight for the corresponding value x_i.

Example 1.3.1 The Indicator Function

We leave it as an exercise to prove that

E(I_A) = P(A).    (1.27)

Relation (1.26) may be extended to a random variable with infinitely many values. In other words, let X be a random variable with values x_1, x_2, … and respective probabilities p_1, p_2, …, that is, p_i = P(X = x_i), i = 1, 2, …. Then, the mean (or weighted average, expected value, expectation, or mathematical expectation) of X is defined as

E(X) = Σ_{i=1}^{∞} x_i p_i,    (1.28)

provided that the infinite series in Equation (1.28) converges absolutely; otherwise, the expected value of X does not exist.
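As a concrete instance of Equation (1.26), the following sketch computes the expected value of a fair six-sided die (an illustrative example, not one from the text):

```python
# Expected value of a discrete random variable (Equation (1.26)):
# E(X) = sum_i x_i * p_i, illustrated with a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

mean = sum(x * p for x, p in zip(values, probs))
print(mean)  # 3.5
```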

The following are some of the properties of Equation (1.28):

1. The only way Equation (1.28) would not exist is when

Σ_{i=1}^{∞} |x_i| p_i = ∞.    (1.29)

2. Let X and Y be random variables and a, b, and c constants. Then,

E(aX + bY + c) = a E(X) + b E(Y) + c.    (1.30)

3. Let X and Y be two independent random variables with marginal pmfs p_X and p_Y, respectively, and assume that E(X) and E(Y) exist. Then,

E(XY) = E(X) E(Y).    (1.31)

Relation (1.31) can be extended to a finite number of independent random variables.

4. For random variables X and Y,

E(X | Y = y) = Σ_x x p_{X|Y}(x | y),    (1.32)

where x and y are values of X and Y, respectively.

It is important to note that Equation (1.32) is a function of y, which implies that E(X | Y) is a random variable in its own right.

Example 1.3.2 Conditional Expected Value

Consider Example 1.2.5 and Table 1.2. In Example 1.2.5, we found the conditional pmf of X given Y = 1. Now we want to find the expected value of this conditional random variable. From Table 1.2 and Equation (1.20), we have

E(X | Y = 1) = 2(0.01/0.24) + 4(0.16/0.24) + 6(0.07/0.24) = 1.08/0.24 = 4.5.
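The conditional expectation of Example 1.3.2 can be checked numerically from the entries of Table 1.2, combining Equations (1.18) and (1.32):

```python
# Conditional expected value E(X | Y = 1) from Table 1.2:
# p(x | 1) = p(x, 1) / P(Y = 1), then E(X | Y = 1) = sum_x x * p(x | 1).
p_y1 = 0.01 + 0.16 + 0.07                         # P(Y = 1) = 0.24
cond = {2: 0.01 / p_y1, 4: 0.16 / p_y1, 6: 0.07 / p_y1}

e_x_given_y1 = sum(x * p for x, p in cond.items())
print(e_x_given_y1)  # (0.02 + 0.64 + 0.42) / 0.24 = 4.5
```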

5. For two random variables X and Y, we have the law of total expectation:

E(X) = E(E(X | Y)).    (1.33)

6. For a random variable X, with x representing its values, the nth moment, denoted by E(X^n), is defined as

E(X^n) = Σ_x x^n p_X(x),    (1.34)

where n is a nonnegative integer.

a. If n = 1, then Equation (1.34) reduces to the expected value, or the first moment, of X. With this note in mind, the variance of the random variable X, denoted by Var(X), can be defined as

Var(X) = E[(X − E(X))²] = Σ_x (x − E(X))² p_X(x),    (1.35)

where x represents the values of X. The following are some properties of the variance (relation (1.35)):

b. Let X be a random variable and a and c constants. Then,

Var(aX + c) = a² Var(X),    (1.36)

and, if X and Y are independent random variables and b is another constant, Var(aX + bY + c) = a² Var(X) + b² Var(Y).

c. For n = 2, Equation (1.34) reduces to the second moment, which in turn leads to the variance of X, that is,

Var(X) = E(X²) − [E(X)]².    (1.37)

7. Central Moment. The expected value of the random variable X is denoted by μ, that is, μ = E(X). The nth moment of the random variable X − μ, that is, E[(X − μ)^n], is called the nth central moment of X. The random variable X − μ measures the deviation of X from its expectation. This deviation can be positive or negative depending on the values of X. Hence, its absolute value, |X − μ|, gives an absolute measure of deviation of the random variable X from its mean. Yet a better measure, the mean square deviation, that is, E[(X − μ)²], called the second central moment of X, is, indeed, the variance of X. Thus, the variance measures the average deviation or dispersion of the random variable from its mean. As the variance measures the deviation from the mean by squares, in order to adjust for this squaring process, the square root is used. The positive square root of the variance of a random variable X is called the standard deviation.

8. Generating Function. Let X be a discrete random variable with nonnegative integer values, with p_k = P(X = k) as the pmf of X. The probability generating function (pgf) of X, denoted by G(z), where z is a complex number, is defined as

G(z) = E(z^X) = Σ_{k=0}^{∞} p_k z^k,    (1.38)

where the power series in Equation (1.38) converges absolutely at least for all z such that |z| ≤ 1. The idea of applying the generating function to probability is to encapsulate all the information about the random variable.

a. When X takes k as its value, z^X takes z^k as its value.

b. The power series in Equation (1.38) follows all the rules of convergence of power series with nonnegative coefficients.

c. The product of two generating functions A(z) = Σ_{k=0}^{∞} a_k z^k and B(z) = Σ_{k=0}^{∞} b_k z^k is given by

A(z) B(z) = Σ_{k=0}^{∞} c_k z^k,    (1.39)

where

c_k = Σ_{j=0}^{k} a_j b_{k−j}.    (1.40)

The sequence {c_k} defined in Equation (1.40) is called the convolution of the two sequences {a_k} and {b_k}. In probability, the convolution arises from two independent random variables X and Y with pmfs p_X and p_Y, respectively. It is defined as follows: let X and Y be two independent discrete random variables with pmfs p_X and p_Y, respectively. Then, the convolution of p_X and p_Y is the pmf p_Z such that

p_Z(k) = Σ_j p_X(j) p_Y(k − j).    (1.41)

The pmf defined in Equation (1.41) is the distribution function of the random variable Z = X + Y.
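The convolution formula (1.41) is easy to sketch in Python; using two fair dice as the illustrative case, the pmf of the sum Z = X + Y should assign probability 6/36 to the value 7, matching Example 1.2.2.

```python
# Convolution (1.41) of the pmfs of two independent discrete random
# variables: returns the pmf of Z = X + Y.
def convolve(p_x, p_y):
    """p_x, p_y: dicts mapping value -> probability."""
    p_z = {}
    for x, px in p_x.items():
        for y, py in p_y.items():
            p_z[x + y] = p_z.get(x + y, 0.0) + px * py
    return p_z

die = {k: 1 / 6 for k in range(1, 7)}  # fair six-sided die
p_sum = convolve(die, die)
print(p_sum[7])  # 6/36, about 0.1667
```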

d. For a random variable X that takes 0, 1, 2, … as its values, the pmf of X is recovered from the derivatives of the generating function as follows:

p_k = G^{(k)}(0) / k!,  k = 0, 1, 2, ….    (1.42)

e. If {p_k} is the pmf of a random variable X, then

E(X) = Σ_{k=0}^{∞} k p_k = G′(1).    (1.43)

f. If z = 1, then

G(1) = Σ_{k=0}^{∞} p_k = 1.    (1.44)

g. For a random variable X,

E(X²) = G″(1) + G′(1),    (1.45)
Var(X) = G″(1) + G′(1) − [G′(1)]².    (1.46)

h. The nth factorial moment, denoted by E[(X)_n], is defined as

E[(X)_n] = E[X(X − 1)(X − 2) ⋯ (X − n + 1)],    (1.47)

and for a random variable X, it is given by

E[(X)_n] = G^{(n)}(1),    (1.48)

where

G^{(n)}(1) = (d^n/dz^n) G(z) evaluated at z = 1.    (1.49)

All the moments of X can be obtained from Equation (1.48). For instance, if n = 1, then we obtain Equation (1.43). If n = 2, we obtain E[X(X − 1)] = G″(1), which leads to the variance given in Equation (1.46).
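The route from factorial moments to the variance can be checked numerically. In this Python sketch, the pmf on {0, 1, 2, 3} is a made-up example; G′(1) and G″(1) are the first two factorial moments of Equation (1.48), and the variance obtained from them agrees with the direct computation via Equation (1.37).

```python
# Hypothetical pmf for illustration only.
pmf = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}

g1 = sum(k * p for k, p in pmf.items())            # G'(1) = E(X)
g2 = sum(k * (k - 1) * p for k, p in pmf.items())  # G''(1) = E[X(X-1)]
var_from_pgf = g2 + g1 - g1**2                     # variance via (1.46)

# Direct computation via (1.37): Var(X) = E(X^2) - [E(X)]^2.
var_direct = sum(k**2 * p for k, p in pmf.items()) - g1**2
print(var_from_pgf, var_direct)  # both approximately 0.84
```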

9. The moment generating function of a random variable X, denoted by M_X(t), with pmf p_X, is defined as

M_X(t) = E(e^{tX}) = Σ_x e^{tx} p_X(x) = G(e^t),    (1.50)

where G is the generating function of X and t is a nonnegative real number.

It is important to note the following points:

a. It is possible to take t to be a complex number with nonnegative real part.

b. The moment generating function generates all the moments of X, as does the generating function.

c. If t = 0, then

M_X(0) = 1.    (1.51)

Recall that for the generating function, we had relation (1.44), that is, if z = 1, then G(1) = 1.

d. The nth moment of the random variable X can be obtained from the moment generating function as

E(X^n) = M_X^{(n)}(0) = (d^n/dt^n) M_X(t) evaluated at t = 0.    (1.52)

Recall that for the generating function, we had the similar relation (1.49); that is, to obtain the nth factorial moment of a random variable X, we evaluate the nth derivative of the generating function at 1. Hence, for moments higher than the second, the moment generating function yields direct results.

e. For computational purposes, it is more convenient to use the following relations:

E(X) = M′_X(0)    (1.53)

and

Var(X) = M″_X(0) − [M′_X(0)]².    (1.54)
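Relations (1.53) and (1.54) can be illustrated numerically: for a small made-up pmf, the sketch below forms M(t) = E(e^{tX}) directly and approximates M′(0) and M″(0) by finite differences, recovering the mean and variance.

```python
from math import exp

# Hypothetical pmf for illustration only.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def M(t):
    """Moment generating function (1.50): E(e^{tX})."""
    return sum(p * exp(t * x) for x, p in pmf.items())

h = 1e-5
m1 = (M(h) - M(-h)) / (2 * h)          # central difference for M'(0) = E(X)
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2  # second difference for M''(0) = E(X^2)
var = m2 - m1**2                       # relation (1.54)

print(m1)   # approximately 1.1
print(var)  # approximately 0.49
```

The exact values here are E(X) = 0.5 + 0.6 = 1.1 and Var(X) = 1.7 − 1.21 = 0.49, which the finite-difference estimates reproduce to several decimal places.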

1.4 Continuous Random Variables, Density, and Cumulative Distribution Functions

A sample space Ω is called continuous if the outcomes of a chance experiment can assume real values, that is, Ω is the entire real number line or a part of it. However, in practice, a smaller collection of subsets can be taken to contain all events of our interest. The smallest such collection, denoted by 𝔅, is referred to as the Borel σ-field, and its members are the Borel sets. For instance, the set of all rational numbers in the interval [0, 1] is a Borel set in [0, 1]. The Borel σ-field leads to a new probability space (Ω, 𝔅, P), where P is the probability of events in the Borel σ-field.

We define the function f_X on the set of real numbers ℝ such that f_X(x) ≥ 0 for all real x and the total integral of f_X over ℝ is 1. Then, f_X is called a continuous probability density function (pdf) on ℝ. Let Ω be a continuous sample space. A random variable X defined on such an Ω is called a continuous random variable. In other words, a continuous random variable maps Ω to a subset of the real line; that is, X is a real-valued function defined on Ω. In terms of the pdf f_X, we have the following properties:

f_X(x) ≥ 0  for all real x,    (1.55)
∫_{−∞}^{∞} f_X(x) dx = 1,    (1.56)
P(a ≤ X ≤ b) = ∫_{a}^{b} f_X(x) dx.    (1.57)

Suppose X is a continuous random variable defined on a sample space Ω with probability density function f_X. Then, the cdf of X, denoted by F_X, is defined as

F_X(x) = P(X ≤ x) = ∫_{−∞}^{x} f_X(t) dt.    (1.58)

As can be seen, if the density function is continuous, then F′_X(x) = f_X(x).

We note that the distribution F_X is nondecreasing, right continuous, and exhibits the following properties:

lim_{x→−∞} F_X(x) = 0  and  lim_{x→+∞} F_X(x) = 1.    (1.59)

If there is no danger of confusion, we will suppress the subscript X from f_X and F_X. Equation (1.57) can be rewritten using Equation (1.58) as

P(a ≤ X ≤ b) = F(b) − F(a).    (1.60)
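Equations (1.58) and (1.60) can be illustrated numerically. The sketch below uses the exponential density f(x) = λe^{−λx}, x ≥ 0 (an illustrative choice, not one from the text), approximates the cdf by a midpoint Riemann sum, and compares it with the closed form F(x) = 1 − e^{−λx}.

```python
from math import exp

def f(x, lam=2.0):
    """Exponential pdf, used here only as an illustrative density."""
    return lam * exp(-lam * x) if x >= 0 else 0.0

def F_numeric(x, n=100_000):
    """cdf (1.58) by a midpoint Riemann sum of the integral from 0 to x."""
    if x <= 0:
        return 0.0
    h = x / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

print(F_numeric(1.0))                   # about 1 - exp(-2) = 0.8647
print(F_numeric(2.0) - F_numeric(1.0))  # P(1 <= X <= 2) via (1.60)
```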

It is important to note that