This book aims to help engineers, Masters students and young researchers understand and gain a general knowledge of logistics systems optimization problems and techniques, such as system design, layout, stock management, quality management, lot-sizing and scheduling. It summarizes the evaluation and optimization methods used to solve the most frequent problems, emphasizes some recent and interesting scientific developments, and presents industrial applications and solved instances drawn from real-life cases. Performance evaluation tools (Petri nets, Markov processes, discrete-event simulation, etc.) and optimization techniques (branch-and-bound, dynamic programming, genetic algorithms, ant colony optimization, etc.) are presented first. New optimization methods are then presented to solve system design problems, layout problems and buffer-sizing optimization. Forecasting methods, inventory optimization, packing problems, lot-sizing, quality management and scheduling are presented with examples in the final chapters.
Table of Contents
Introduction
Chapter 1. Modeling and Performance Evaluation
1.1. Introduction
1.2. Markovian processes
1.3. Petri nets
1.4. Discrete-event simulation
1.5. Decomposition method
Chapter 2. Optimization
2.1. Introduction
2.2. Polynomial problems and NP-hard problems
2.3. Exact methods
2.4. Approximate methods
2.5. Multi-objective optimization
2.6. Simulation-based optimization
Chapter 3. Design and Layout
3.1. Introduction
3.2. The different types of production system
3.3. Equipment selection
3.4. Line balancing
3.5. The problem of buffer sizing
3.6. Layout
Chapter 4. Tactical Optimization
4.1. Introduction
4.2. Demand forecasting
4.3. Stock management
4.4. Cutting and packing problems
4.5. Production and replenishment planning, lot-sizing methods
4.6. Quality management
Chapter 5. Scheduling
5.1. Introduction
5.2. Scheduling problems
Bibliography
Index
This book is dedicated to the memory of Mohamad Chehade.
It is also dedicated to our respective families, who supported and encouraged us:
– Sara, Lehna and the whole Yalaoui family;
– Farah, Lana and the whole Chehade family;
– Stéphanie, Laura, Aurélien and the whole Amodeo family.
We thank everyone for their participation in proofreading this work, and for their support and invaluable assistance: Frédéric Dugardin, Romain Watier, Stéphanie Dussolier, Julie Rubaszewski, Yassine Ouazene, Andrès Felipe Bernate Lara, Naïm Yalaoui, Farah Belmecheri, Atefeh Moghaddam and Slim Daoud.
Finally, we thank all our colleagues as well as our academic and industrial partners.
First published 2012 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com

© ISTE Ltd 2012
The rights of Alice Yalaoui, Hicham Chehade, Farouk Yalaoui and Lionel Amodeo to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2012946446
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-424-8
To remain competitive, businesses must design, produce and distribute their products and services while taking account of increasingly tight deadlines and increasingly restrictive quality requirements. To succeed, they put in place new technologies and methods to continually improve their production and distribution tools. Such a system is called a production system, or more commonly a logistics system. These systems combine the set of processes through which a business produces a product or service, allowing it to satisfy a demand.
The components of a logistics system can be divided into two broad categories, thereby defining a conceptual model [AMO 99]: the physical system and the production control system (Figure I.1).
The role of the physical system is to transform the raw materials and components into finished products. It is composed of human resources, machines, transport systems, storage systems and numerous other elements. The major decisions at the control system level may be classified into three categories: strategic, tactical and operational decisions [PAP 93]. Design and management problems (Figure I.2) are divided into three distinct but interacting stages: long-term, medium-term and short-term decisions.
For long-term decisions, a time frame of several years may be considered. Decisions are to be made regarding the initial design of the system as well as the functional modifications to be made. These decisions may be preliminary or detailed. Preliminary decisions are characterized by approximate data, which requires testing a significant number of solutions in order to obtain the best design. Detailed decisions consist of evaluating the most promising solutions as the data become more precise.
Figure I.1. Conceptual model of a production system [AMO 99]
Figure I.2. Position of design and management problems [PAP 93]
Medium-term decisions are distributed across a time frame ranging from several weeks to several months. They essentially consist of determining the types of products and the planning of their production in certain time frames, taking into consideration several parameters such as production targets, constraints on resource capacity and costs.
Regarding the short-term decisions, the time frame varies from several hours to several days. These decisions concern the distribution of resources in the system, scheduling production tasks, reactions to unforeseen events, etc. In the context of automated systems, most short-term decisions, with the possible exception of scheduling, are made by the system’s control devices.
The study of logistics systems is inherently complex. Despite the numerous models and solutions already developed, the set of constraints and specificities that characterize these systems (availability of resources, random phenomena, interactions between the components, etc.) are not all taken into consideration. It is very often impossible to model such a system in its entirety; hence, in most cases, the provided solutions are not optimal.
The study of logistics systems generally involves several phases. The objective of the first phase is the acquisition of information, which consists of identifying the different parameters influencing the system. The second, very important, phase concerns the development of a model that must integrate all of the identified parameters. The third phase addresses the evaluation and analysis of the system’s performance using the initial model. This performance is understood through its quantitative and qualitative aspects. However, the model may be large and difficult to use. An additional simplification phase may then reduce the order of the system, while retaining the characteristics and properties of the initial model. Finally, an optimization phase is regularly used to improve the performance of the system.
The aim of this book is, first, to focus specifically on techniques for modeling logistics systems and to present some single and multiple criteria optimization tools that can be used to solve decision, design and management problems. Second, some new approaches and methods are presented to provide new answers and solutions to some of these problems, such as the design of production lines, their layout, production planning, quality management, stocking and scheduling. Numerous examples are also presented.
The book is organized as follows. Chapter 1, entitled “Modeling and Performance Evaluation”, introduces modeling tools such as Markov chains, Petri nets, expansion methods and discrete-event simulation. Chapter 2 discusses optimization methods such as mathematical programming and exact methods (branch-and-bound procedures and dynamic programming). Approximate methods and multi-objective optimization are also discussed in the chapter. Chapter 3 deals with the design of logistics systems and their arrangement. It introduces the notions of line-balancing, buffer sizing, and the selection of production equipment. The physical layout of production lines is also introduced. Chapter 4 focuses on tactical management, introducing notions of production planning, replenishment, stock management and quality management. Operational management of production scheduling is the subject of Chapter 5.
A system, be it logistic or otherwise, may be considered as a set of interacting entities, capable of handling other entities that are internal or external to it. A model of a system is a logical, mathematical representation of its real behavior in a given context and for a given problem. A model is a decision-support tool that allows the study of a complex system through one or more simpler systems, replacing it in a scientific analysis, providing information about the studied system or, for example, predicting the behavior of the initial system in various conditions.
We may distinguish between different types of model based on different characteristics. A model may be analogical (such as a scale model of a machine) or abstract (without physical representation). If time is not considered in the study, the model is static, and it is described as dynamic if the state of the system it represents evolves over time. If its evolution involves an element of chance, it is described as stochastic (as opposed to deterministic). The notions of deterministic and stochastic models are directly related to uncertainties. These uncertainties are, for example, variations in operating times, variations in machine preparation time, etc. If the model is expressed by formal equations, it is described as mathematical; it is called numerical if it is based on a simulation.
The aim of modeling is therefore to best reproduce the actual operation of the studied system. Hence, the physical and technical characteristics of the system must be taken into consideration. This information constitutes a model’s input data, and allows the determination, in the most precise possible manner, of output data known as the performance indicators of the system.
In this chapter, we present a non-exhaustive list of methods and tools for modeling and evaluating the performance of logistics systems, such as Markov chains, Petri nets, the Gershwin decomposition method and discrete-event simulation.
Probability theory is a mathematical science that began in the 17th Century with the work of Galileo on physical measurement errors. It was only later in the 19th Century that A. Markov (1856–1922) defined the basis of the theory of stochastic processes, creating the model that carries his name. Markov chains occupy an important place among stochastic models and are currently used in numerous applications (economic, meteorological, military, computational, etc.). Their use in production systems is significant (queuing networks, maintenance policy, fault detection, etc.). A Markov model is well adapted to the study and analysis of production systems because it provides a simple graphical representation while retaining its powerful analytical properties.
Numerous authors have introduced stochastic processes in their works. We cite the work of [FEL 68].
DEFINITION 1.1.– A stochastic process is a family of random variables ξt:
[1.1] {ξt, t ∈ T}
where the parameter t ranges over the set T. T may be the set of natural numbers, the set of real numbers, etc.
If T is discrete and countable, {ξt} forms a stochastic sequence.
If T is a finite or infinite interval, {ξt} forms a continuous process.
Note that T typically represents an interval of time.
Let X be the state space. X is the set from which the variables ξt take their values. X may be discrete or continuous.
From the definition of random processes [1.1], we distinguish four types of Markov process according to the discrete or continuous nature of the state space and of the set T (see Table 1.1).
Table 1.1. The different Markov processes

                  T discrete                                    T continuous
X discrete        Markov chains with a discrete state space     Markov processes with a discrete state space
X continuous      Markov chains with a continuous state space   Markov processes with a continuous state space
The analysis of certain systems, for example in the field of reliability, imposes continuous-time modeling. We use Markov processes when the transition probabilities at a precise time are replaced by transition rates, equivalent to a number of events per unit time. These characteristics are essential as they allow the direct introduction of machine failure and repair rates into the model.
In this section, we present the basic notions of Markov processes and their applications to production systems modeling.
DEFINITION 1.2.– A Markov process is a family of random variables {ξt}, whose parameter belongs to a finite or an infinite continuous interval. The set T with index t is continuous in this case. The family {ξt} has the Markov property, that is:
[1.2] Pr{ξt ≤ x | ξt1 = x1, ξt2 = x2, …, ξtn = xn} = Pr{ξt ≤ x | ξtn = xn}, for all t1 < t2 < … < tn < t
The distribution of the process at a future instant depends only on its present state, and not on its past evolution. The process is memoryless.
For such processes, the probability of a transition from a state i to a state j occurring at a precise instant is zero. Instead of a transition probability πij, we therefore consider the transition rate μij, defined as the limit of the ratio of the probability of a transition from state i to state j during a small interval Δt starting at the instant t, to the length of this interval as it tends to zero:
[1.3] μij(t) = limΔt→0 πij(t, t + Δt) / Δt
Let us consider the case of a homogeneous Markov process that possesses a finite number of states (1, 2, …, r). To describe the random process that takes place in this system, state probabilities (π1(t), π2(t), …, πr(t)) are used, where πi(t) is the probability that the system is in state i at time t:
[1.4] πi(t) = Pr{ξt = i}, i = 1, 2, …, r
It is clear that for all t, the state probabilities satisfy the property of stochasticity:
[1.5] Σi=1…r πi(t) = 1, for all t
To calculate the state probabilities [1.4], it is necessary to solve the system of differential equations known as the Chapman–Kolmogorov equations:
[1.6] dπi(t)/dt = Σj≠i μji πj(t) − πi(t) Σj≠i μij, i = 1, 2, …, r
where dπi(t)/dt, also written π̇i(t), corresponds to the rate of variation of the probability associated with state i.
These differential equations [1.6] constitute the state equations of the system. Using matrices, we obtain:
[1.7] (π̇1(t), π̇2(t), …, π̇r(t)) = (π1(t), π2(t), …, πr(t)) A
[1.8] A = (αij), i, j = 1, 2, …, r
Matrix A is called the transition rate matrix, or in certain works, the infinitesimal generator of the Markov process.
αij is the transition rate μij of state i to state j (i ≠ j). αii is equal to the negative of the sum of transition rates from state i to all other states.
The elements of A have the following properties:
[1.9] αij = μij ≥ 0 for i ≠ j; αii = −Σj≠i μij; Σj αij = 0 for all i
The sum of the elements in each row is zero, and the determinant of A is therefore zero. Equation [1.6] may be written in a more condensed form with π̇(t), the derivative of the state probability vector at time t:
[1.10] π̇(t) = π(t) A
This equation perfectly defines the transient state of the system. To compute the probability distribution π(t) at every instant t, it is necessary to solve the system of differential equations [1.6]. When the Markov process is homogeneous, the solution of the system is expressed in the form:
[1.11] π(t) = π(0) e^(At)
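As an illustration, the transient solution [1.11] can be computed numerically with a matrix exponential. A minimal sketch in Python; the two-state generator and its failure/repair rates below are hypothetical, chosen for illustration rather than taken from this book:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical two-state model: state 0 = machine up, state 1 = machine down.
# Illustrative rates only: failure rate 0.1, repair rate 0.5 per unit time.
A = np.array([[-0.1,  0.1],
              [ 0.5, -0.5]])

pi0 = np.array([1.0, 0.0])   # initial distribution: machine starts "up"

def transient(t):
    # pi(t) = pi(0) e^(At)   (equation [1.11])
    return pi0 @ expm(A * t)

for t in (0.0, 1.0, 5.0, 50.0):
    p = transient(t)
    print(t, p, p.sum())     # each distribution still sums to 1
```

Because the rows of the generator sum to zero, e^(At) is a stochastic matrix, so the computed distribution remains a probability vector at every t.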
The existence of limiting state probabilities requires that the Markov process be ergodic. In other words, the limit of Pr{ξtx} when t tends to infinity exists regardless of the initial distribution. Thus:
[1.12] limt→∞ πi(t) = πi, i = 1, 2, …, r
In the limiting case, the variation π̇(t) of the state probabilities is zero. Equation [1.6] becomes:
[1.13] Σj≠i μji πj = πi Σj≠i μij, i = 1, 2, …, r
The quantity μij πi(t) is called the probability flow of the transition from state i to state j.
Equation [1.13] is called the balance equation. This corresponds to an equilibrium between the total probability flow entering state i and the one leaving it.
To solve the system of linear equations, it is necessary to add an equation corresponding to the stochasticity of the steady-state probabilities:
[1.14] π A = 0, with Σi=1…r πi = 1
The solution to equation [1.14] gives us the limiting distribution of the state probabilities:
[1.15] π = (π1, π2, …, πr)
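Numerically, system [1.14] is a small linear solve: the stochasticity constraint is appended to the transposed generator. A sketch, assuming a hypothetical three-state generator with illustrative rates only:

```python
import numpy as np

# Hypothetical three-state generator; each row sums to zero (property [1.9]).
A = np.array([[-0.3,  0.2,  0.1],
              [ 0.4, -0.4,  0.0],
              [ 0.0,  0.5, -0.5]])

# Solve pi A = 0 together with sum(pi) = 1 (system [1.14]):
# stack the constraint row under A^T and take a least-squares solution.
r = A.shape[0]
M = np.vstack([A.T, np.ones(r)])
b = np.zeros(r + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(M, b, rcond=None)
print(pi)   # limiting distribution [1.15]
```

The least-squares solve is convenient because the stacked system is overdetermined but consistent; any solver that handles the rank-deficient balance equations plus the normalization would do.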
The observation of a workstation in a workshop shows that over time, each element of the productive activity moves successively in a deterministic or random way through a number of different phases referred to as elementary states. These elements are categorized into three groups that are found in every production facility:
– the products;
– the means of production;
– the production operators.
Generally, we may say that each element (product, means, and operator) is successively found in one of two primary states according to whether or not it is participating in the production process. We may call these elementary states:
– an operating or active state (participation in production);
– an idle or inactive state (non-participation in production).
Random phenomena that disrupt the proper operation of the production facility depend on different elements, for example:
– products: absence, defect, etc.;
– means: failure (mechanical, electrical, hydraulic, etc.), waiting, calibration, repair, etc.;
– operator: pause, absence, incident, etc.
The different elementary states of the production system are modeled using Markov processes, where each state corresponds to a vertex on the graph. Next, the different links that exist between the vertices of the graph must be determined. Each link is assigned a rate of transition between states per unit time. This rate may correspond, for example to a machine’s rate of failure or repair. This characteristic makes Markov processes crucial in operations research, as they serve as a starting point for the development of several models of queuing, renewal, equipment maintenance and stock management. Here is an example of a production system model.
EXAMPLE 1.1.– Let us consider a production cell composed of two workstations: a manufacturing machine and an assembly machine.
The behavior of the cell may be decomposed into four elementary states:
– state 1: production;
– state 2: breakdown of the manufacturing machine;
– state 3: breakdown of the assembly machine;
– state 4: breakdown of the cell (the manufacturing and assembly machines).
The transition rates between the different states are determined from the failure and repair rates of the two machines. The numerical values of these two rates are given in Table 1.2. Figure 1.1 shows the graph of the Markov process associated with the production cell.
Table 1.2. Characteristics of the machines
Figure 1.1. Graph of the Markov process associated with the production cell
From this graph, the generator A that contains the set of transition rates between the four states is formulated:
[1.16]
The transient state of the system is obtained from the eigenvalues of the generator A.
From the system of linear equations [1.14], we calculate the steady-state probabilities.
Figure 1.2. Evolution of the state probabilities
DEFINITION 1.3.– A Markov chain {ξn} is a discrete-time Markov process whose state space X is a finite or countably infinite set. ξn may be considered as the result of trial n.
The Markov property states that the future behavior of the sequence of events depends exclusively on the knowledge of the present state. Thus:
[1.18] Pr{ξn+1 = j | ξ0 = i0, ξ1 = i1, …, ξn = i} = Pr{ξn+1 = j | ξn = i}
DEFINITION 1.4.– The Markov chain is said to be homogeneous in time if its transition probabilities are independent of the considered time n. The transition probabilities are then stationary, and satisfy:
[1.19] Pr{ξn+1 = j | ξn = i} = πij, for all n
These transition probabilities satisfy the following properties:
[1.20] 0 ≤ πij ≤ 1, for all i, j
[1.21] Σj πij = 1, for all i
Let πi(n) be the probability that the system is in state i at time n:
[1.22] πi(n) = Pr{ξn = i}
Grouping these probabilities into a vector, called the state probability vector at time n, we obtain:
[1.23] π(n) = (π1(n), π2(n), …, πr(n))
[1.24] π(0) = (π1(0), π2(0), …, πr(0))
The state probabilities have the following properties:
[1.25] 0 ≤ πi(n) ≤ 1, for all i
[1.26] Σi πi(n) = 1
The state probabilities at time (n + 1) are given by the law of total probability:
[1.27] πj(n + 1) = Σi Pr{ξn+1 = j | ξn = i} πi(n)
[1.28] πj(n + 1) = Σi πij(n) πi(n)
or in vectorial form:
[1.29] π(n + 1) = π(n) τ(n)
We arrive at the matrix equation, called the fundamental equation, of a Markov chain:
[1.30] π(n + 1) = π(n) τ(n)
[1.31] τ(n) = (πij(n)), i, j = 1, 2, …, r
This equation gathers all the information necessary to pass from the state probability distribution at time n to the state probability distribution at time n + 1.
The matrix τ is called the transition matrix at time n. It gathers together the set of transition probabilities. This matrix is stochastic because its elements satisfy properties [1.20] and [1.21].
In the case of a homogeneous Markov chain, equation [1.30] is written as:
[1.32] π(n + 1) = π(n) τ
As a result, if π(0) is the initial distribution of the state probabilities, the distribution at time n is:
[1.33] π(n) = π(0) τ^n
Formula [1.33] allows us to define the asymptotic properties of τ, that is, it allows us to calculate the steady-state probabilities.
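Formula [1.33] can be evaluated by simply iterating the fundamental equation. A sketch with a hypothetical three-state transition matrix (illustrative probabilities, not taken from this book):

```python
import numpy as np

# Hypothetical transition matrix tau; each row sums to 1
# (properties [1.20] and [1.21]).
tau = np.array([[0.5, 0.3, 0.2],
                [0.1, 0.8, 0.1],
                [0.2, 0.2, 0.6]])

pi = np.array([1.0, 0.0, 0.0])   # initial distribution pi(0)

# pi(n) = pi(0) tau^n, computed by repeated application of [1.32].
for n in range(200):
    pi = pi @ tau

print(pi)   # close to the limiting distribution of this ergodic chain
```

After enough iterations the vector no longer changes, which is exactly the ergodicity property discussed below: the same limit is reached whatever π(0) is chosen.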
The graph may be weighted by the transition probabilities πij associated with the arcs connecting state i to state j.
The graphical model of Markov chains is based, of course, on graph theory to obtain its properties, such as the asymptotic properties of π(n).
First, let us define the ergodicity property.
PROPERTY 1.1.– A Markov chain is ergodic if, in its asymptotic behavior, the system tends to a unique limit in distribution, independent of the initial conditions:
[1.34] limn→∞ π(n) = π
The study of state probabilities in ergodic, steady-state Markov chains is mainly carried out using one of two methods: one direct and the other indirect. The indirect method consists of writing the transition matrix τ in diagonal form using a modal transformation. Here, we develop the direct method, which is the simplest and, above all, the fastest.
When we make n tend to infinity in the fundamental equation [1.32], we obtain:
[1.35] π = π τ
then:
[1.36] π (τ − I) = 0
Equation [1.36] is then written:
[1.37] π D = 0, with D = τ − I
The matrix D is known as the dynamic matrix.
Using the stochasticity property of the state probability vector, the linear system of equations is written:
[1.38] π D = 0 and Σi πi = 1
Solving the above equation gives us the limit in distribution of the state probabilities.
[1.39] π = (π1, π2, …, πr)
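The direct method described above reduces to a small linear system. A sketch, again under the assumption of a hypothetical transition matrix with illustrative values:

```python
import numpy as np

tau = np.array([[0.5, 0.3, 0.2],
                [0.1, 0.8, 0.1],
                [0.2, 0.2, 0.6]])   # hypothetical ergodic chain

D = tau - np.eye(3)                 # dynamic matrix D = tau - I  [1.37]

# pi D = 0 plus the stochasticity constraint sum(pi) = 1 (system [1.38]).
M = np.vstack([D.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(M, b, rcond=None)
print(pi)                           # limit in distribution [1.39]
```

This is the direct counterpart of the iterative computation of π(n): one linear solve instead of many matrix-vector products.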
The representation of production systems in the form of state models corresponds exactly to the model offered by Markov chains because each vertex of the graph represents a state of the system. It suffices to determine the transition probabilities between the different states.
Generally, numerous industrial examples have been studied using Markov chains, such as queuing networks and problems in operational research.
It is undeniable that Markov chains are very efficient tools for the study of production systems.
In addition to a simple graphical model, the mathematical model associated with Markov chains allows the calculation of numerous performance indices, such as reliability, availability and even the maintainability of a system.
EXAMPLE 1.2.– During its life, each piece of equipment passes through different degrees of wear. We associate a state in a Markov chain with each of these degrees of wear.
With each instance of repair or maintenance work carried out, the equipment moves to a lower degree of wear. Let us take as an example a production machine (lathe, milling machine, etc.) having four degrees of wear. The transition matrix takes the following form:
[1.40]
From this transition matrix, we formulate the state graph of the Markov chain (Figure 1.3).
Figure 1.3. The Markov chain associated with a system of wear
With each degree of wear i, we associate a state i of the graph. The links between states correspond to the transition probabilities between the degrees of wear.
Using the fundamental equation [1.32], we may calculate the state probabilities at different times n.
Figure 1.4 represents this evolution:
To calculate the steady-state probabilities, we solve the system of linear equations [1.41], which consists of four equations with four variables:
[1.41]
Thus, we obtain
Figure 1.4. The evolution of the state probabilities
In steady state, the system spends one-sixth of the time in each of states 1 and 2, and one-third of the time in each of states 3 and 4.
A Petri net (PN) is a graphical and mathematical modeling tool used in a large number of domains, such as automation, operations management and computer science. It allows the description of events and of concurrent, shared and synchronized behavior.
Petri nets were created by Carl Adam Petri in 1962. Numerous research teams currently work with this tool, developing simulation software for different applications such as production systems and information systems.
In this section, we recall the principal definitions of autonomous Petri nets and their particular structures, explaining the static and dynamic aspects as well as their structural properties.
DEFINITION 1.5.– A Petri net is a bipartite directed graph whose nodes are places and transitions.
A Petri net may be defined as a four-tuple:
[1.42] PN = 〈P, T, Pre, Post〉
where:
– P = {P1, …, Pn} is a finite set of places;
– T = {T1, …, Tm} is a finite set of transitions;
– Pre: (P × T) → ℕ is the forward incidence mapping;
– Post: (P × T) → ℕ is the backward incidence mapping;
– the matrices Pre and Post are n × m matrices;
– Pre(Pi, Tj) is the weight of the arc linking Pi to Tj;
– Post(Pi, Tj) is the weight of the arc linking Tj to Pi.
Graphically, places are represented by circles, and transitions are represented by bars.
The arcs linking places to transitions and transitions to places are oriented.
There is always an alternation between places and transitions (two places or two transitions cannot be linked) as in the example in Figure 1.5.
Figure 1.5. An example of a Petri net
Notation:
– °Tj (Tj°) is the set of input (output) places of the transition Tj;
– °Pi (Pi°) is the set of input (output) transitions of the place Pi.
The incidence matrix W is defined by:
[1.43] W = Post − Pre
DEFINITION 1.6.– A marked Petri net is a pair (P N, M), where:
– M: P → ℕ is a marking function.
The initial marking of the Petri net is written as M0. The marking of the Petri net is carried out by tokens. Graphically, the tokens (or marks) are represented by black disks and are placed in the center of the places.
The dynamics of Petri nets are obtained by inserting tokens or marks into places. The tokens will thus circulate from place to place by crossing different alternating transitions. The crossing (or “firing”) of a transition must follow certain rules. A transition Tj may fire (be crossed) for a marking M if and only if, for every place Pi ∈ °Tj: M(Pi) ≥ Pre(Pi, Tj).
The firing of a transition Tj consists of taking Pre(Pi, Tj) tokens from each place Pi ∈ °Tj and adding Post(Pi, Tj) tokens to each place Pi ∈ Tj°. Thus, from marking M, the firing of a transition Tj leads to a new marking M′ defined as follows: M′(Pi) = M(Pi) − Pre(Pi, Tj) + Post(Pi, Tj), for every place Pi.
A crossable transition does not necessarily immediately fire, and only one firing is possible at a given moment. Figure 1.6 shows the firing of transition T1. In this example, the number of tokens in the Petri net is not conserved. Indeed, the total number of tokens is four before and five after T1 fires. The marking obtained after T1 fires is the following:
hence:
Figure 1.6. The firing of a transition
A loop (Pi, Tj) is such that Pi ∈ °Tj and Pi ∈ Tj° (Figure 1.7(a)). In the incidence matrix W, there is a loss of information regarding the loop.
A source transition is a transition without an input place (it may always fire). An example of a source transition is given in Figure 1.7(b). A sink transition is, for its part, a transition without an output place. An example of a sink transition is given in Figure 1.7(c). Loops are very important in production systems modeling because they limit the number of products that a machine may process simultaneously.
Two transitions T1 and T2 are in structural conflict if and only if they have at least one input place Pi in common. An example of structural conflict is shown in Figure 1.8(a). Two transitions T1 and T2 are in effective conflict for a marking M if and only if they have at least one input place Pi in common such that: M(Pi) < Pre(Pi, T1) + Pre(Pi, T2).
Figure 1.7. Some specific structures: a loop, a source, and a sink
Figure 1.8. Examples of (a) structural and (b) effective conflict
An example of effective conflict is shown in Figure 1.8(b).
Conflicts are also found in production systems modeling. In a scheduling problem, they allow the modeling of the choice between two tasks to be carried out, or the choice between two machines.
Inhibitor arcs make it possible to fire a transition when a place is empty or when its marking is less than a set value. Figure 1.9 shows that the transition T1 may fire if there is a minimum of one token in P1 and a maximum of three tokens in P3. The inhibitor arc linking P3 to T1 works in the opposite way to a normal arc. Note that an inhibitor arc always connects a place to a transition, and not vice versa.
Figure 1.9. An example of a Petri net with an inhibitor arc
REMARK 1.2.– An inhibitor arc does not remove tokens from its input place.
Inhibitor arcs are also very important in the study of production systems. They are widely used to model inventory policies because they can trigger replenishment when the stock level is less than a certain value.
The main methods of Petri net analysis may be classified into three groups:
– methods based on the enumeration of all the accessible states: marking trees and graphs and covering trees and graphs;
– methods based on linear algebra;
– Petri net reduction methods.
The tree of reachable markings defines all the markings that may be attained starting from M0. This is a tree in which:
– The nodes are the markings attainable from M0.
– Each arc represents the firing of a transition.
– The root of the tree corresponds to M0.
The reachability graph is deduced from the reachability tree by merging the nodes with the same marking. Note that the number of reachable markings may be infinite (the non-boundedness property); in this case, it is impossible to build the reachability tree.
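For a bounded net, the reachability graph can be built by a breadth-first enumeration of markings. A sketch for a small hypothetical net (a two-place cycle of our own, not one of the book's figures); the enumeration terminates only because this net is bounded:

```python
from collections import deque
import numpy as np

# Hypothetical net: P1 --T1--> P2 --T2--> P1 (a two-marking cycle).
Pre  = np.array([[1, 0],
                 [0, 1]])
Post = np.array([[0, 1],
                 [1, 0]])
M0 = (1, 0)

def successors(M):
    M = np.array(M)
    for j in range(Pre.shape[1]):
        if (M >= Pre[:, j]).all():                      # Tj enabled under M
            yield j, tuple(M - Pre[:, j] + Post[:, j])  # marking after firing Tj

seen, queue, arcs = {M0}, deque([M0]), []
while queue:
    M = queue.popleft()
    for j, M2 in successors(M):
        arcs.append((M, j, M2))        # one arc of the reachability graph
        if M2 not in seen:
            seen.add(M2)
            queue.append(M2)

print(sorted(seen))   # the two reachable markings of this cycle
```

Merging duplicate markings during the search is exactly what turns the reachability tree into the reachability graph.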
The spanning tree (or covering tree) limits the size of the tree when the number of states is not finite. For this, a new symbol ω is introduced, which corresponds to infinity. The state equation is defined by:
[1.44] M′ = M + W · s̄
where M is the initial marking, s̄ is the characteristic vector of the firing sequence s, whose j-th component corresponds to the number of firings of transition Tj in the sequence s, and M′ is the new marking obtained from M after the firing sequence s has been executed.
Note that the state equation does not guarantee the firability of the sequence s (i.e. whether or not the sequence may be realized), but only gives the final marking obtained after a firing sequence.
This state equation may be used without much difficulty in the creation of a computer program (in Excel, Matlab®, C or other programming languages) to simulate the behavior of an autonomous Petri net.
The following algorithm allows the step-by-step, firing-by-firing simulation of the evolution of a Petri net. This algorithm guarantees that at each step, the transition firing sequence is valid.
At the end of the simulation, we may simply trace the evolution of the markings on a graph. Here, we are concerned with discrete-event simulation with Petri nets.
Algorithm 1.1 Autonomous Petri net simulation algorithm
Data : M0, Pre, Post
Repeat
STEP 1: From the current marking M, seek the enabled transitions. Hence deduce the characteristic firing vector s.
STEP 2: Fire the enabled transitions and determine the new marking M′ using the relation M′ = M + W · s̄.
Until the required number of firings is reached.
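Algorithm 1.1 can be sketched in a few lines. The net below is a hypothetical example of our own, and the policy of firing each enabled transition once per step (rechecking enabledness before each firing) is one admissible choice; the algorithm itself leaves the firing policy open:

```python
import numpy as np

# Hypothetical net: 3 places, 2 transitions; P3 forms a loop on T2.
Pre  = np.array([[1, 0],
                 [0, 1],
                 [0, 1]])
Post = np.array([[0, 1],
                 [1, 0],
                 [0, 1]])
W = Post - Pre                      # incidence matrix [1.43]
M = np.array([1, 0, 1])             # initial marking M0

def enabled(M):
    # Tj is enabled iff M(Pi) >= Pre(Pi, Tj) for every place Pi
    return [j for j in range(Pre.shape[1]) if (M >= Pre[:, j]).all()]

for step in range(6):
    for j in enabled(M):            # STEP 1: seek the enabled transitions
        if (M >= Pre[:, j]).all():  # still enabled after earlier firings?
            M = M + W[:, j]         # STEP 2: one firing, M' = M + W.s
    print(step, M)
```

Tracing the successive markings printed at each step is precisely the discrete-event view mentioned above: the state of the net only changes at firing instants.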
DEFINITION 1.7.– The determination of the reachability of a marking M′ from a marking M consists of verifying that there is a sequence of transitions s that allows the marking M′ to be attained from the marking M.
DEFINITION 1.8.– A place Pi in a marked Petri net 〈P N, M0〉 is k-bounded if there exists a positive integer k such that for every marking M that is reachable from the initial marking M0, we have M(Pi) ≤ k.
A place Pi is bounded if there exists a positive integer k such that Pi is k-bounded.
A marked Petri net 〈P N, M0〉 is k-bounded if all of its places are k-bounded.
A Petri net is bounded if there exists a positive integer k such that it is k-bounded. The net is said to be safe if it is 1-bounded.
DEFINITION 1.9.– A transition Tj of a Petri net 〈P N, M0〉 is live if, for any marking M that is reachable from the marking M0, there exists a firing sequence which, from M, leads to a marking that allows the transition Tj to fire.
DEFINITION 1.10.– A Petri net 〈P N, M0〉 is live if all its transitions are live.
These various properties have applications to production systems. The boundedness property means that the number of parts in the system cannot grow without limit. This may correspond to the finite capacity of a stock: boundedness of the net ensures that the stock level never exceeds a certain value. The liveness property indicates that there are no deadlocks. It also guarantees that the production system can keep producing correctly: there is no risk of it stopping after a certain operating time.
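For small nets, the boundedness property can be checked mechanically by enumerating the reachability set. The following Python sketch is illustrative: the two-place cycle is a made-up example, and the exploration cap is an arbitrary safeguard against unbounded nets:

```python
from collections import deque
import numpy as np

def reachable_markings(M0, Pre, Post, limit=10_000):
    """Breadth-first enumeration of the reachability set.
    Returns None if more than `limit` markings are found, which
    suggests the net may be unbounded."""
    W = Post - Pre
    seen = {tuple(M0)}
    queue = deque([np.array(M0, dtype=int)])
    while queue:
        M = queue.popleft()
        for j in range(Pre.shape[1]):
            if np.all(M >= Pre[:, j]):          # transition j enabled
                Mp = M + W[:, j]
                if tuple(Mp) not in seen:
                    seen.add(tuple(Mp))
                    queue.append(Mp)
                    if len(seen) > limit:
                        return None
    return seen

# Hypothetical two-place cycle: P1 --T1--> P2 --T2--> P1, one token.
Pre  = np.array([[1, 0], [0, 1]])
Post = np.array([[0, 1], [1, 0]])
R = reachable_markings([1, 0], Pre, Post)
k = max(max(m) for m in R)   # smallest k such that the net is k-bounded
```

Here k = 1, so the cycle is safe (1-bounded): the single token circulates and no place ever holds more than one mark.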
In a non-autonomous Petri net, evolution depends not only on the state of the net, but also on the environment in which it is included. Thus, the evolution may be time-dependent. Several Petri nets, such as timed, temporal, stochastic and continuous Petri nets, take this into consideration. This evolution may also depend on external data, and Petri nets that integrate these data are called high-level Petri nets. Examples of this category include colored, object or predicate-transition Petri nets. The following sections give the definitions and operation of the main Petri nets. More detail is given regarding continuous and stochastic Petri nets, which are efficient models and are widely used in the study of industrial systems.
Timed Petri nets allow the description of systems whose operation depends on time (the duration between the start and the end of an operation). The introduction of time may be done either in transitions or in places. Thus, we distinguish between two types of nets: P-timed and T-timed Petri nets. Only T-timed Petri nets are discussed in this book.
DEFINITION 1.11.– A T-timed Petri net is a pair 〈PN, Temp〉 where:
– PN: a marked Petri net;
– Temp: a mapping from the set of transitions T into ℝ+, which associates a time delay with each transition.
Marks have two states:
– reserved while the firing of the transition is decided;
– not reserved.
The validation of a transition does not imply the immediate reservation of the mark.
At the initial time, all marks are not reserved. The calculation of performance indices is carried out in steady state, i.e. when the evolution of the markings is constant and stable. Three major performance indices emerge readily. By simply counting the number of tokens that reside in the places or that cross the transitions in a fixed interval, we obtain:
– the average number of marks in a place, denoted by M∗;
– the mean firing frequency of the transitions, denoted by f∗;
– the mean waiting time of a mark in a place, denoted by t∗.
If we apply the previously defined performance indices to production systems, we obtain the following:
– the average number of items produced per unit time;
– the mean utilization rate of the machines;
– the average number of items in stock;
– the mean sojourn time of an item in a stock.
In our case, we are interested in the mean number and mean throughput of pallets in the different parts of the system. Measures of performance (mean place markings and mean firing frequencies of the transitions) are only calculated in the steady state of the system, i.e. on marking evolution curves in the time interval [10, +∞[.
Figure 1.10. A T-timed Petri net model of the flow of pallets
Algorithm 1.2 Timed Petri net simulation algorithm
Repeat
Until the required number of firings is reached.
Trace the evolution of markings as a function of time.
Thus, we obtain:
The evolution curves clearly show that the marking of P1 evolves between 0 and 1 and that of P2 evolves between 9 and 10, with the same sojourn time (0.5 time units), which gives, on average, 0.5 pallets in stock and 9.5 pallets in the machine.
The mean frequencies are identical and equal to one pallet per unit time. In a production line, it is always the slowest component (here the production unit) that gives the mean throughput rate of the whole line.
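The bottleneck observation can be stated numerically: each station's maximum rate is the inverse of its processing time, and the line rate is the minimum of these rates. A minimal sketch with hypothetical station delays:

```python
# Hypothetical processing times (time units per item) for three
# stations in series; these are illustrative values, not the book's.
delays = [0.5, 1.0, 0.8]

# Each station's maximum rate is 1/delay; the line throughput is
# limited by the slowest station (the bottleneck).
throughput = min(1.0 / d for d in delays)
```

With these values the middle station (delay 1.0) is the bottleneck, so the whole line produces at most one item per time unit, whatever the other stations can do.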
Figure 1.11. The evolution of the markings
Continuous Petri nets are used when the number of marks is very large; it is then impossible to graphically represent all of the marks in the places. The discrete marks are replaced by a non-negative real number, and the time delays are replaced by firing speeds. An example of a continuous Petri net is shown in Figure 1.12.
Figure 1.12. A continuous Petri net
The operation of a continuous Petri net is initially defined as an extension of the operation of T-timed Petri nets. Tokens are divided into parts until an infinity of subtokens is obtained. The time delay associated with a transition is then replaced by a firing speed. This speed corresponds to the quantity of tokens fired per unit time. We may compare the operation of a transition and an upstream place with an hourglass, where the sand grains are tokens. We may also use the analogy of a fluid that varies between zero and infinity in a continuous manner, i.e. a place may have a marking of 0.6. From the notion that we may have a marking of less than 1, two approximations are possible:
– vj(t) = Vj: the firing speed is constant and equal to the maximal speed Vj. The Petri net is called a constant speed continuous Petri net (CCPN);
– vj(t) = Vj · min(1, mi(t)), where mi(t) is the marking of the upstream place Pi. The Petri net is called a variable speed continuous Petri net (VCPN).
In the first approximation, the speed is constant and does not take into account the marking of the upstream places. In the second approximation, the speed is variable, as it depends on the marking of the upstream places, but only when this marking is less than 1. On the other hand, both types of continuous Petri net, CCPNs and VCPNs, have exactly the same behavior when m1 ≥ 1.
The evolution of a marking in time is given by this fundamental equation:

dM(t)/dt = W · v(t)

where
– M(t): the marking vector at time t;
– W : the incidence matrix;
– v(t): the firing speed vector at time t.
This fundamental equation gives a system of first-order differential equations. Solving these equations yields marking evolutions that are linear or exponential, increasing or decreasing.
Performance evaluation with continuous Petri nets is carried out by solving the system of first-order differential equations obtained from the fundamental equation defined in section 1.3.4.1. We then obtain the different equations for the evolution of markings in the net. Note:
– The evolution equations are only valid in well-specified intervals of time.
– Markings are always positive or zero: ∀i, ∀t, mi(t) ≥ 0.
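A numerical solution of the fundamental equation can be sketched by forward-Euler integration. The switching rule used below (a transition's speed drops to zero as soon as an input place is empty) is a simplification of the exact CCPN behavior, and the net, speeds, horizon and step size are illustrative choices, not values from the book:

```python
import numpy as np

def euler_ccpn(M0, Pre, Post, V, horizon, dt=1e-3):
    """Forward-Euler integration of dM/dt = W . v(t) for a CCPN.
    v_j = V_j while every input place of T_j is marked, else 0
    (a simplification of the exact CCPN switching rule)."""
    W = Post - Pre
    M = np.array(M0, dtype=float)
    t = 0.0
    while t < horizon:
        v = np.array([V[j] if np.all(M[Pre[:, j] > 0] > 1e-12) else 0.0
                      for j in range(len(V))])
        # Clamp at zero: markings are always positive or zero.
        M = np.maximum(M + dt * (W @ v), 0.0)
        t += dt
    return M

# Hypothetical closed loop: P1 --T1--> P2 --T2--> P1, 10 pallets,
# T1 faster than T2, so P1 should drain toward zero.
Pre  = np.array([[1, 0], [0, 1]])
Post = np.array([[0, 1], [1, 0]])
M = euler_ccpn([10.0, 0.0], Pre, Post, V=[2.0, 1.0], horizon=20.0)
```

Because the net is a closed loop, the total marking should stay (approximately) conserved, and in the long run almost all of the fluid accumulates in front of the slower transition T2.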
Let us take again the example used for timed Petri nets – the study of a closed flow of pallets in a stock and a production machine. The continuous Petri net model of this system is shown in Figure 1.13. In the case of a CCPN the firing speeds are as follows:
Figure 1.13. Continuous Petri net model of the flow of pallets
From the definition of firing speeds, we obtain the marking evolution equations for places P1 and P2:
This evolution is shown in Figure 1.14(a). For comparison, the marking evolution curves of places P1 and P2 are given for the case of a T-timed Petri net (Figure 1.11). For a VCPN, the firing speeds of the transitions are as follows:
According to whether the value of the markings m1(t) and m2(t) is less than or greater than one, we obtain three phases of evolution: two exponential phases and one linear phase.
Figure 1.14. Marking evolution curves [DAV 89]
The marking evolution equations for the first and second phases are given below:
Thus, after solving the differential equations, we obtain:
Thus, after integrating both derivatives, we obtain two linear equations:
When the elements being modeled are of different types, there are two possibilities: create as many Petri nets as there are types of element, or have a single Petri net with different types of tokens. Colored Petri nets provide the latter option by coloring the tokens to differentiate between them. The operation of the net remains identical to autonomous Petri nets, but with firings that depend on the colors.
DEFINITION 1.12.– A colored Petri net is a six-tuple:

(P, T, C, Pre, Post, M0)

where
– P : the set of places;
– T : the set of transitions;
– C : the set of colors;
– Pre, Post: the forward and backward incidence functions, defined on the colors;
– M0: the initial marking.
In colored Petri nets, the tokens are assigned different colors, which allows us to distinguish them (for example, components A and B). Functions are also associated with the arcs. Finally, places may contain tokens of different colors, and transition firings depend on the colors. Colored Petri nets thus offer the advantage of using a single Petri net graph rather than one graph per type of token: a single net may receive different types of marks. In the case of a production machine that fabricates three types of components with different operating times, we have a single Petri net containing three tokens of different colors instead of three Petri nets, each with a single type of token. A transition is validated with respect to a given color. Once the color is defined, the validation is identical to that of autonomous Petri nets:
As for the validation of transitions, firing occurs according to a set color. The firing rule is identical to that of autonomous Petri nets:
An example of transition firing is shown in Figure 1.15.
Figure 1.15. An example of firing in a colored Petri net [DAV 89]
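The color-dependent validation and firing rules described above can be sketched as follows, with the marking of each place held as a multiset of colored tokens. The net, place names and colors are hypothetical, not those of Figure 1.15:

```python
from collections import Counter

# Marking of each place: a multiset of colored tokens.
M = {"P1": Counter({"a": 2, "b": 1}), "P2": Counter()}

# Pre/Post for a single hypothetical transition T1, per color:
# T1 moves one token of the chosen color from P1 to P2.
Pre  = {"T1": {"P1": 1}}
Post = {"T1": {"P2": 1}}

def enabled(M, Pre, t, color):
    """T is enabled for `color` if every input place holds
    enough tokens of that color."""
    return all(M[p][color] >= n for p, n in Pre[t].items())

def fire(M, Pre, Post, t, color):
    """Fire t for a given color: the rule is that of an autonomous
    Petri net, restricted to tokens of the chosen color."""
    assert enabled(M, Pre, t, color)
    for p, n in Pre[t].items():
        M[p][color] -= n
    for p, n in Post[t].items():
        M[p][color] += n

fire(M, Pre, Post, "T1", "a")   # one 'a' token moves from P1 to P2
```

Note that T1 remains disabled for any color absent from P1, which is exactly the color-dependent enabling condition.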
EXAMPLE 1.4.– In Figure 1.15, give the markings obtained after the firing of transition T1 according to the colors b and c. Give the marking graph of this Petri net with:
Stochastic Petri nets (SPNs) were developed by Florin in 1978 to solve computational problems concerning dependability. They are regularly used to model random phenomena such as breakdowns, faults or accidents. On the basis of Florin’s initial work, numerous classes of SPNs have been created. This diversity provides a way to meet the many demands of modeling (production systems, logistic chains, information systems) by integrating different types of transition (immediate, deterministic and stochastic). SPNs are undoubtedly the most widely used Petri nets in the study of production systems, as they may combine stochastic and deterministic behavior in the same model.
DEFINITION 1.13.– A stochastic Petri net is a six-tuple:

(P, T, Pre, Post, M0, μ)
where
– P : the set of places;
– T : the set of transitions;
– Pre, Post: forward and backward incidence mappings;
– μ: the set of firing rates;
– M0: the initial marking.
A time di, which is a random variable with an exponential probability distribution, is associated with each transition Ti (Figure 1.16):

Fi(t) = Prob(di ≤ t) = 1 − exp(−μi t)

which is the probability that the firing of Ti takes place before t.
Figure 1.16. The cumulative distribution function of an exponential distribution
The mean value of the firing time is written as di*, and is defined in the following way:

di* = 1/μi

where μi is called the firing rate of transition Ti.
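The relation di* = 1/μi can be checked empirically by sampling exponential firing times, as in this short Python sketch (the rate value is arbitrary):

```python
import random

random.seed(42)
mu = 2.0                                   # hypothetical firing rate of Ti
# Sample many firing times di ~ Exp(mu).
samples = [random.expovariate(mu) for _ in range(100_000)]
mean_d = sum(samples) / len(samples)       # should approach di* = 1/mu = 0.5
```

With 100,000 samples, the empirical mean lies very close to the theoretical value 1/μ = 0.5.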
The choice policy is applied when two or more transitions are firable at the same time. In this case, it is necessary to determine which transition will be fired. Two policies are possible:
Probabilistic selection: this consists of preselecting the transition before selecting its firing time. The probability distribution is made on the subset of firable transitions. This is followed by a random selection of the transition to be fired. Once the transition is chosen, a new random selection of the firing time is made.
Random selection: the choice of transition is made after selecting the firing delays of all the firable transitions. The transition that has the smallest delay is chosen and fired with the corresponding delay. If there is an equal delay between transitions after the selection, we turn to probabilistic selection.
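Random selection (the "race") can be sketched directly: sample a delay for every firable transition and fire the one with the smallest delay. A known property of this exponential race is that each transition wins with probability proportional to its rate; the two rates below are hypothetical:

```python
import random

def race_select(rates, rng):
    """Random selection ('race'): sample an exponential delay for every
    firable transition and fire the one with the smallest delay."""
    delays = {t: rng.expovariate(mu) for t, mu in rates.items()}
    winner = min(delays, key=delays.get)
    return winner, delays[winner]

rng = random.Random(0)
rates = {"T1": 3.0, "T2": 1.0}   # hypothetical firing rates
wins = {"T1": 0, "T2": 0}
for _ in range(10_000):
    t, _ = race_select(rates, rng)
    wins[t] += 1
# T1 should win with probability about 3 / (3 + 1) = 0.75.
share_T1 = wins["T1"] / 10_000
```

Ties between sampled delays have probability zero for continuous distributions, which is why the fallback to probabilistic selection mentioned above is only needed for distributions that can produce equal delays.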
The service policy is applied when a transition is q-enabled, i.e. the transition may be simultaneously fired q times. As a result of this, three cases are possible:
– If a single selection is made, then there is one service at a time.
– If q selections are made, then there are q services at once.
– The number of selections is equal to min(q, deg(T)), where deg(T) is the maximum number of simultaneous firings allowed by transition T. This degree is explicitly indicated by attaching to the transition a self-loop containing deg(T) tokens.
The memory policy is used after the firing of a transition at time t: what happens to the firing times already selected for the other firable transitions? Four possibilities are available, and the choice between them depends on the modeling of the studied system:
– Forgetting the selection (resampling memory): in this case, a new random selection of the firing times is carried out on the enabled transitions. This may model a breakdown phenomenon.
– Remembering the selection (enabling memory): the firing times of the transitions that remain enabled are kept, decremented by t; the selections of the transitions that have become disabled are forgotten. This may model the continuity of a phenomenon.
– Keeping the initial selection: the initial firing times (not decremented by time t) are kept, even those for transitions that have become disabled. The transitions keep this time for their next firing. This may model a job that was aborted, and then restarted from the beginning.
– Keeping the selection decremented by t (age memory): the firing times, decremented by t, are kept for all transitions, including those that have become disabled; each transition resumes from its remaining time at its next enabling. This may model a job that is suspended and later resumed.
