Combinatorial optimization is a multidisciplinary scientific area, lying at the interface of three major scientific domains: mathematics, theoretical computer science and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area. These topics deal as much with fundamental notions and approaches as with several classical applications of combinatorial optimization. Concepts of Combinatorial Optimization is divided into three parts: - On the complexity of combinatorial optimization problems, presenting basics about worst-case and randomized complexity; - Classical solution methods, presenting the two best-known methods for solving hard combinatorial optimization problems, namely Branch-and-Bound and Dynamic Programming; - Elements from mathematical programming, presenting fundamentals of mathematical programming-based methods that have been at the heart of Operations Research since the origins of this field.
Table of Contents
Preface
Part I Complexity of Combinatorial Optimization Problems
Chapter 1 Basic Concepts in Algorithms and Complexity Theory
1.1. Algorithmic complexity
1.2. Problem complexity
1.3. The classes P, NP and NPO
1.4. Karp and Turing reductions
1.5. NP-completeness
1.6. Two examples of NP-complete problems
1.7. A few words on strong and weak NP-completeness
1.8. A few other well-known complexity classes
1.9. Bibliography
Chapter 2 Randomized Complexity
2.1. Deterministic and probabilistic algorithms
2.2. Lower bound technique
2.3. Elementary intersection problem
2.4. Conclusion
2.5. Bibliography
Part II Classical Solution Methods
Chapter 3 Branch-and-Bound Methods
3.1. Introduction
3.2. Branch-and-bound method principles
3.3. A detailed example: the binary knapsack problem
3.4. Conclusion
3.5. Bibliography
Chapter 4 Dynamic Programming
4.1. Introduction
4.2. A first example: crossing the bridge
4.3. Formalization
4.4. Some other examples
4.5. Solution
4.6. Solution of the examples
4.7. A few extensions
4.8. Conclusion
4.9. Bibliography
Part III Elements from Mathematical Programming
Chapter 5 Mixed Integer Linear Programming Models for Combinatorial Optimization Problems
5.1. Introduction
5.2. General modeling techniques
5.3. More advanced MILP models
5.4. Conclusions
5.5. Bibliography
Chapter 6 Simplex Algorithms for Linear Programming
6.1. Introduction
6.2. Primal and dual programs
6.3. The primal simplex method
6.4. Bland’s rule
6.5. Simplex methods for the dual problem
6.6. Using reduced costs and pseudo-costs for integer programming
6.7. Bibliography
Chapter 7 A Survey of some Linear Programming Methods
7.1. Introduction
7.2. Dantzig’s simplex method
7.3. Duality
7.4. Khachiyan’s algorithm
7.5. Interior methods
7.6. Conclusion
7.7. Bibliography
Chapter 8 Quadratic Optimization in 0–1 Variables
8.1. Introduction
8.2. Pseudo-Boolean functions and set functions
8.3. Formalization using pseudo-Boolean functions
8.4. Quadratic pseudo-Boolean functions (qpBf)
8.5. Integer optimum and continuous optimum of qpBfs
8.6. Derandomization
8.7. Posiforms and quadratic posiforms
8.8. Optimizing a qpBf: special cases and polynomial cases
8.9. Reductions, relaxations, linearizations, bound calculation and persistence
8.10. Local optimum
8.11. Exact algorithms and heuristic methods for optimizing qpBfs
8.12. Approximation algorithms
8.13. Optimizing a quadratic pseudo-Boolean function with linear constraints
8.14. Linearization, convexification and Lagrangian relaxation for optimizing a qpBf with linear constraints
8.15. ε-Approximation algorithms for optimizing a qpBf with linear constraints
8.16. Bibliography
Chapter 9 Column Generation in Integer Linear Programming
9.1. Introduction
9.2. A column generation method for a bounded variable linear programming problem
9.3. An inequality to eliminate the generation of a 0–1 column
9.4. Formulations for an integer linear program
9.5. Solving an integer linear program using column generation
9.6. Applications
9.7. Bibliography
Chapter 10 Polyhedral Approaches
10.1. Introduction
10.2. Polyhedra, faces and facets
10.3. Combinatorial optimization and linear programming
10.4. Proof techniques
10.5. Integer polyhedra and min–max relations
10.6. Cutting-plane method
10.7. The maximum cut problem
10.8. The survivable network design problem
10.9. Conclusion
10.10. Bibliography
Chapter 11 Constraint Programming
11.1. Introduction
11.2. Problem definition
11.3. Decision operators
11.4. Propagation
11.5. Heuristics
11.6. Conclusion
11.7. Bibliography
General Bibliography
List of Authors
Index
Summary of Volume 2
Summary of Volume 3
First edition published 2010 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
© ISTE Ltd 2010
Second edition published 2014 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2014
The rights of Vangelis Th. Paschos to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2014942903
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-656-3
Preface
What is combinatorial optimization? There are, in my opinion, as many definitions as there are researchers in this domain, each one as valid as the other. For me, it is above all the art of understanding a real, natural problem, and being able to transform it into a mathematical model. It is the art of studying this model in order to extract its structural properties and the characteristics of the solutions of the modeled problem. It is the art of exploiting these characteristics in order to determine algorithms that calculate the solutions but also to show the limits in economy and efficiency of these algorithms. Lastly, it is the art of enriching or abstracting existing models in order to increase their strength, portability, and ability to describe mathematically (and computationally) other problems, which may or may not be similar to the problems that inspired the initial models.
Seen in this light, we can easily understand why combinatorial optimization is at the heart of the junction of scientific disciplines as rich and different as theoretical computer science, pure and applied, discrete and continuous mathematics, mathematical economics, and quantitative management. It is inspired by these, and enriches them all.
This revised and updated second edition of Concepts of Combinatorial Optimization is the first volume in a series of books entitled Combinatorial Optimization. It tries, along with the other volumes in the set, to embody the idea of combinatorial optimization. The subjects of this volume cover themes that are considered to constitute the hard core of combinatorial optimization. The book is divided into three parts:
In the first part, Chapter 1 introduces the fundamentals of the theory of (deterministic) complexity and of algorithm analysis. In Chapter 2, the context changes and we consider algorithms that make decisions by “tossing a coin”. At each stage of the resolution of a problem, several alternatives have to be considered, each one occurring with a certain probability. This is the context of probabilistic (or randomized) algorithms, which is described in this chapter.
In the second part some methods are introduced that make up the great classics of combinatorial optimization: branch-and-bound and dynamic programming. The former is perhaps the most well known and the most popular when we try to find an optimal solution to a difficult combinatorial optimization problem. Chapter 3 gives a thorough overview of this method as well as of some of the most well-known tree search methods based upon branch-and-bound. What can we say about dynamic programming, presented in Chapter 4? It has considerable reach and scope, and very many optimization problems have optimal solution algorithms that use it as their central method.
The third part is centered around mathematical programming, considered to be the heart of combinatorial optimization and operational research. In Chapter 5, a large number of linear models and an equally large number of combinatorial optimization problems are set out and discussed. In Chapter 6, the main simplex algorithms for linear programming, such as the primal simplex algorithm, the dual simplex algorithm, and the primal–dual simplex algorithm are introduced. Chapter 7 introduces some classical linear programming methods, while Chapter 8 introduces quadratic integer optimization methods. Chapter 9 describes a series of resolution methods currently widely in use, namely column generation. Chapter 10 focuses on polyhedral methods, almost 60 years old but still relevant to combinatorial optimization research. Lastly, Chapter 11 introduces a more contemporary, but extremely interesting, subject, namely constraint programming.
This book is intended for novice researchers, or even Master’s students, as much as for senior researchers. Master’s students will probably need a little basic knowledge of graph theory and mathematical (especially linear) programming to be able to read the book comfortably, even though the authors have been careful to give definitions of all the concepts they use in their chapters. In any case, to improve their knowledge of graph theory, readers are invited to consult a great, flagship book from one of our gurus, Claude Berge: Graphs and Hypergraphs, North Holland, 1973. For linear programming, there is a multitude of good books that the reader could consult, for example V. Chvátal, Linear Programming, W.H. Freeman, 1983, or M. Minoux, Programmation mathématique: théorie et algorithmes, Dunod, 1983.
In this revised second edition I would like to thank again the authors who, despite their many responsibilities and commitments (which is the lot of any university academic), agreed to participate in the book by writing chapters in their areas of expertise and, at the same time, to take part in a very tricky exercise: writing chapters that are both educational and high-level science at the same time.
The second edition of this book could never have come into being without the original proposal of Jean-Charles Pomerol, President of the scientific board at ISTE, and Sami Ménascé and Raphaël Ménascé, the President and Vice-President of ISTE, respectively. I once again give my warmest thanks to them for their insistence and encouragement.
Vangelis Th. PASCHOS
June 2014
Chapter 1
Basic Concepts in Algorithms and Complexity Theory
Vangelis Th. PASCHOS
In algorithmic theory, a problem is a general question to which we wish to find an answer. This question usually has parameters or variables the values of which have yet to be determined. A problem is posed by giving a list of these parameters as well as the properties to which the answer must conform. An instance of a problem is obtained by giving explicit values to each of the parameters of the instanced problem.
An algorithm is a sequence of elementary operations (variable assignments, tests, forks, etc.) that, when given an instance of a problem as input, gives the solution of this problem as output after execution of the final operation.
The two most important parameters for measuring the quality of an algorithm are: its execution time and the memory space that it uses. The first parameter is expressed in terms of the number of instructions necessary to run the algorithm. The use of the number of instructions as a unit of time is justified by the fact that the same program will use the same number of instructions on two different machines but the time taken will vary, depending on the respective speeds of the machines. We generally consider that an instruction equates to an elementary operation, for example an assignment, a test, an addition, a multiplication, a trace, etc. What we call the complexity in time or simply the complexity of an algorithm gives us an indication of the time it will take to solve a problem of a given size. In reality this is a function that associates an order of magnitude1 of the number of instructions necessary for the solution of a given problem with the size of an instance of that problem. The second parameter corresponds to the number of memory units used by the algorithm to solve a problem. The complexity in space is a function that associates an order of magnitude of the number of memory units used for the operations necessary for the solution of a given problem with the size of an instance of that problem.
There are several sets of hypotheses concerning the “standard configuration” that we use as a basis for measuring the complexity of an algorithm. The most commonly used framework is the one known as “worst-case”. Here, the complexity of an algorithm is the number of operations carried out on the instance that represents the worst configuration, amongst those of a fixed size, for its execution; this is the framework used in most of this book. However, it is not the only framework for analyzing the complexity of an algorithm. Another framework often used is “average analysis”. This kind of analysis consists of finding, for a fixed size (of the instance) n, the average execution time of an algorithm on all the instances of size n; we assume that for this analysis the probability of each instance occurring follows a specific distribution pattern. More often than not, this distribution pattern is considered to be uniform. There are three main reasons for the worst-case analysis being used more often than the average analysis. The first is psychological: the worst-case result tells us for certain that the algorithm being analyzed can never have a level of complexity higher than that shown by this analysis; in other words, the result we have obtained gives us an upper bound on the complexity of our algorithm. The second reason is mathematical: results from a worst-case analysis are often easier to obtain than those from an average analysis, which very often requires mathematical tools and more complex analysis. The third reason is “analysis portability”: the validity of an average analysis is limited by the assumptions made about the distribution pattern of the instances; if the assumptions change, then the original analysis is no longer valid.
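To make this distinction concrete, here is a minimal Python sketch (names of our own choosing, not taken from the chapter) that counts the comparisons made by a linear search: the worst-case instance of size n forces n comparisons, while the average over uniformly drawn target positions is close to (n + 1)/2.

import random

def linear_search(items, target):
    # Scans the list from left to right and counts elementary comparisons.
    comparisons = 0
    for index, value in enumerate(items):
        comparisons += 1
        if value == target:
            return index, comparisons
    return -1, comparisons

n = 1000
items = list(range(n))

# Worst-case configuration: the target sits in the last position.
_, worst_case = linear_search(items, n - 1)

# Average analysis under a uniform distribution of the target position.
trials = 10000
average = sum(linear_search(items, random.randrange(n))[1] for _ in range(trials)) / trials

print(worst_case)        # 1000
print(round(average))    # about 500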
The definition of the complexity of an algorithm can be easily transposed to problems. Informally, the complexity of a problem is equal to the complexity of the best algorithm that solves it (this definition is valid independently of which framework we use).
Let us take a size n and a function f(n). Thus:
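As a reminder, the standard asymptotic notations used throughout this chapter can be stated in the usual form (which may differ slightly in wording from the list given here): for two functions f and g from the non-negative integers to the non-negative integers,
- f(n) = O(g(n)) if there exist a constant c > 0 and an integer n0 such that f(n) ≤ c·g(n) for every n ≥ n0;
- f(n) = Ω(g(n)) if there exist a constant c > 0 and an integer n0 such that f(n) ≥ c·g(n) for every n ≥ n0;
- f(n) = Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)).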
We can now specify the following general classes of complexity:
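Stated informally, and in the usual form, the classes referred to in the next paragraph are the following:
- P is the class of problems that can be solved by an algorithm whose worst-case complexity is bounded by a polynomial in the size n of the instance;
- EXPTIME is the class of problems that can be solved by an algorithm of complexity O(2^p(n)) for some polynomial p;
- PSPACE is the class of problems that can be solved by an algorithm using an amount of memory bounded by a polynomial in n.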
With respect to the classes that we have just defined, we have the following relations: P ⊆ PSPACE ⊆ EXPTIME and P ⊂ EXPTIME. Knowing whether the inclusions of the first relation are strict or not is still an open problem.
The definition of any algorithmic problem (and even more so in the case of any combinatorial optimization problem) comprises two parts. The first gives the instance of the problem, that is the type of its variables (a graph, a set, a logical expression, etc.). The second part gives the type and properties to which the expected solution must conform. In the complexity theory case, algorithmic problems can be classified into three categories:
Decision problems are questions concerning the existence, for a given instance, of a configuration such that this configuration itself, or its value, conforms to certain properties. The solution to a decision problem is an answer to the question associated with the problem. In other words, this solution can be either yes, if such a configuration exists for the instance in question, or no, if it does not.
Let us now consider the MIN TSP problem, defined as follows: given a complete graph Kn over n vertices for which each edge e ∈ E(Kn) has a value d(e) > 0, we are looking for a Hamiltonian cycle H ⊂ E (a connected partial graph of Kn in which every vertex has degree 2) that minimizes the quantity ∑e∈H d(e). Let us assume that for this problem we have, as well as the complete graph Kn and the vector d⃗ of the costs on the edges of Kn, a constant K, and that we are looking not to determine the smallest (in terms of total cost) Hamiltonian cycle, but rather to answer the following question: “Does there exist a Hamiltonian cycle of total distance less than or equal to K?”. Here, once more, the solution is either yes if such a cycle exists, or no if it does not.
For optimum value calculation problems, we are looking to calculate the value of the optimum solution (and not the solution itself).
In the case of MIN TSP, for example, the associated optimum value calculation problem comes down to calculating the cost of the smallest Hamiltonian cycle, and not the cycle itself.
Optimization problems, which are naturally of interest to us in this book, are those for which we are looking to establish the best solution amongst those satisfying certain properties given by the very definition of the problem. An optimization problem may be seen as a mathematical program of the form:

opt{v(x⃗) : x⃗ ∈ CI}

where x⃗ is the vector describing the solution3, v(x⃗) is the objective function, CI is the problem’s constraint set, set out for the instance I (in other words, CI sets out both the instance and the properties of the solution that we are looking to find for this instance), and opt ∈ {max, min}. An optimum solution of I is a vector x⃗* ∈ argopt{v(x⃗) : x⃗ ∈ CI}. The quantity v(x⃗*) is known as the objective value, or value, of the problem. A solution x⃗ ∈ CI is known as a feasible solution.
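For instance, MIN TSP fits this form directly (a sketch in the notation just introduced): for an instance I given by Kn and the distances d, CI is the set of edge sets H ⊆ E(Kn) that form a Hamiltonian cycle, v(H) = ∑e∈H d(e), and opt = min, so that the problem reads

min{∑e∈H d(e) : H a Hamiltonian cycle of Kn}.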
The solution to an optimization problem includes an evaluation of the optimum value. Therefore, an optimum value calculation problem can be associated with an optimization problem. Moreover, optimization problems always have a decisional variant as shown in the MIN TSP example above.
Let us consider a decision problem Π. If for any instance I of Π a solution (that is a correct answer to the question that states Π) of I can be found algorithmically in polynomial time, that is in O(|I|^k) stages for some constant k, where |I| is the size of I, then Π is called a polynomial problem and the algorithm that solves it a polynomial algorithm (let us remember that polynomial problems make up the class P).
For reasons of simplicity, we will assume in what follows that the solution to a decision problem is either the answer yes accompanied by a certificate proving it, or simply the answer no.
In other words, if, to solve a problem, we could consult an “oracle”, it would provide us with an answer of not just a yes or no but also, in the first case, a certificate proving the veracity of the yes. This certificate is simply a solution proposal that the oracle “asserts” as being the real solution to our problem.
Let us consider the decision problems for which the validity of the certificate can be verified in polynomial time. These problems form the class NP.
DEFINITION 1.1.– A decision problem Π is in NP if the validity of all solutions of Π is verifiable in polynomial time.
For example, the SAT problem belongs to NP. Indeed, given the assignment of the values 0, 1 to the variables of an instance ϕ of this problem, we can, with at most nm applications of the connector ∨, decide whether the proposed assignment is a model for ϕ, that is whether it satisfies all the clauses.
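As an illustration of this verification (a minimal sketch with hypothetical names; a clause is written as a list of signed variable indices, +i for xi and −i for its negation), checking a proposed assignment clause by clause takes time proportional to the total number of literals:

def is_model(clauses, assignment):
    # A clause (a disjunction of literals) is satisfied as soon as one of its
    # literals is true under the assignment; the formula is a model if every
    # clause is satisfied.
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False
    return True

# phi = (x1 or not x2) and (x2 or x3), with x1 = 1, x2 = 0, x3 = 1
print(is_model([[1, -2], [2, 3]], {1: True, 2: False, 3: True}))   # True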
Therefore, we can easily see that the decisional variant of MIN TSP seen previously also belongs to NP.
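Concretely, a certificate for the decisional MIN TSP is a proposed tour; the sketch below (hypothetical names, with the distances stored in a dictionary indexed by edges) checks in polynomial time that the tour visits every vertex exactly once and that its total distance does not exceed K:

def verify_tour(n, d, K, tour):
    # tour is a proposed ordering of the vertices 0, ..., n-1; d maps an edge
    # (as a frozenset of its two endpoints) to its distance.
    if sorted(tour) != list(range(n)):
        return False                      # not a Hamiltonian cycle
    total = sum(d[frozenset((tour[i], tour[(i + 1) % n]))] for i in range(n))
    return total <= K

# Complete graph K3 with all distances equal to 1: the tour 0-1-2-0 has cost 3.
d = {frozenset((0, 1)): 1, frozenset((1, 2)): 1, frozenset((0, 2)): 1}
print(verify_tour(3, d, 3, [0, 1, 2]))    # True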
Definition 1.1 can be extended to optimization problems. Let us consider an optimization problem Π and let us assume that each instance I of Π conforms to the three following properties:
Thus, Π belongs to the class NPO. In other words, the class NPO is the class of optimization problems for which the decisional variant is in NP. We can therefore define the class PO of optimization problems for which the decisional variant belongs to the class P. In other words, PO is the class of problems that can be optimally solved in polynomial time.
We note that the class NP has been defined (see definition 1.1) without explicit reference to the optimum solution of its problems, but by reference to the verification of a given solution. Evidently, since the condition of belonging to P is stronger than that of belonging to NP (what can be solved in polynomial time can be verified in polynomial time), we have the obvious inclusion P ⊆ NP (Figure 1.1).
“What is the complexity of the problems in NP \ P?”. The best general result on the complexity of the solution of problems from NP is as follows [GAR 79].
THEOREM 1.1.– For any problem Π ∈ NP, there is a polynomial pΠ such that each instance I of Π can be solved by an algorithm of complexity O(2^pΠ(|I|)).
In fact, theorem 1.1 merely gives an upper limit on the complexity of problems in NP, but no lower limit. The diagram in Figure 1.1 is nothing more than a conjecture, and although almost all researchers in complexity are completely convinced of its veracity, it has still not been proved. The question “Is P equal to or different from NP?” is the biggest open question in computer science and one of the best known in mathematics.
Figure 1.1. P and NP (under the assumption P ≠ NP)
As we have seen, problems in NP \ P are considered to be algorithmically more difficult than problems in P. A large number of problems in NP \ P are very strongly bound to each other through the concept of polynomial reduction.
The principle of reducing a problem Π to a problem Π′ consists of considering the problem Π as a specific case of Π′, modulo a slight transformation. If this transformation is polynomial, and if we know how to solve Π′ in polynomial time, then we will also be able to solve Π in polynomial time. Reduction is thus a means of transferring the result of solving one problem to another; in the same way, it is a tool for classifying problems according to the level of difficulty of their solution.
We will start with the classic Karp reduction (for the class NP) [GAR 79, KAR 72]. This links two decision problems by the possibility of their simultaneous solution in polynomial time. In the following, given a problem Π, let ℐΠ denote the set of its instances (we assume that each instance I ∈ ℐΠ is identifiable in polynomial time in |I|). Let 𝒴Π ⊆ ℐΠ be the subset of ℐΠ for which the solution is yes; 𝒴Π is also known as the set of yes-instances (or positive instances) of Π.
DEFINITION 1.2.– Given two decision problems Π1 and Π2, a Karp reduction (or polynomial transformation) is a function f : ℐΠ1 → ℐΠ2, computable in polynomial time, such that, given a solution for f(I), we are able to find a solution for I in polynomial time in |I| (the size of the instance I).
A Karp reduction of a decision problem Π1 to a decision problem Π2 implies the existence of an algorithm 𝒜1 for Π1 that uses an algorithm 𝒜2 for Π2. Given any instance I1 ∈ ℐΠ1, the algorithm 𝒜1 constructs an instance I2 = f(I1) ∈ ℐΠ2; it executes the algorithm 𝒜2, which calculates a solution on I2, then transforms this solution into a solution for Π1 on I1. If 𝒜2 is polynomial, then 𝒜1 is also polynomial.
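Schematically (a sketch with hypothetical names for the three ingredients), the algorithm 𝒜1 just described composes three polynomial-time steps:

def solve_pi1(instance_1, f, solve_pi2, recover):
    # f: the Karp transformation, maps an instance of Pi1 to an instance of Pi2
    # solve_pi2: an algorithm for Pi2
    # recover: turns a solution of f(instance_1) back into a solution of Pi1
    instance_2 = f(instance_1)
    solution_2 = solve_pi2(instance_2)
    return recover(solution_2, instance_1)

# If f, solve_pi2 and recover all run in polynomial time, so does solve_pi1.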
Following on from this, we can state another reduction, known in the literature as the Turing reduction, which is better adapted to optimization problems. In what follows, we define a problem Π as a pair (ℐΠ, SolΠ), where SolΠ is the set of solutions for Π (we denote by SolΠ(I) the set of solutions for the instance I ∈ ℐΠ).
DEFINITION 1.3.– A Turing reduction of a problem Π1 to a problem Π2 is an algorithm 𝒜1 that solves Π1 by using (possibly several times) an algorithm 𝒜2 for Π2 in such a way that if 𝒜2 is polynomial, then 𝒜1 is also polynomial.
The Karp and Turing reductions are transitive: if Π1 reduces (by one of these two reductions) to Π2 and Π2 reduces to Π3, then Π1 reduces to Π3. We can therefore see that both reductions preserve membership of the class P, in the sense that if Π reduces to Π′ and Π′ ∈ P, then Π ∈ P.
For more details on both the Karp and Turing reductions refer to [AUS 99, GAR 79, PAS 04]. In Garey and Johnson [GAR 79] (Chapter 5) there is also a very interesting historical summary of the development of the ideas and terms that have led to the structure of complexity theory as we know it today.
From the definition of the two reductions in the preceding section, if Π′ reduces to Π, then Π can reasonably be considered as at least as difficult as Π′ (regarding their solution by polynomial algorithms), in the sense that a polynomial algorithm for Π would suffice to solve not only Π itself, but equally Π′. Let us confine ourselves to the Karp reduction. By using it, we can highlight a problem Π* ∈ NP such that any other problem Π ∈ NP reduces to Π* [COO 71, GAR 79, FAG 74]. Such a problem is, as we have mentioned, the most difficult problem of the class NP. Therefore we can show [GAR 79, KAR 72] that there are other problems Π′ ∈ NP such that Π* reduces to Π′. In this way we expose a family of problems such that any problem in NP reduces (remembering that the Karp reduction is transitive) to one of its problems. This family has, of course, the following properties:
The problems from this family are NP-complete problems and the class of these problems is called the NP-complete class.
DEFINITION 1.4.– A decision problem Π is NP-complete if, and only if, it fulfills the following two conditions:
Of course, a notion of NP-completeness very similar to that of definition 1.4 can also be based on the Turing reduction.
Let us assume that there is a polynomial algorithm 𝒜2 for the problem Π2. In this case, the algorithm 𝒜1 built from the Karp reduction as described above is a (polynomial) Turing reduction.
A problem that fulfills condition 2 of definition 1.4 (without necessarily checking condition 1) is called NP-hard5. It follows that a decision problem Π is NP-complete if and only if it belongs to NP and it is NP-hard.
Let us denote by NP-intermediate the class NP \ (P ∪ NP-complete). Informally, this concerns the class of problems of intermediate difficulty, that is problems that are more difficult than those in P but easier than those that are NP-complete. More formally, for two complexity classes C and C′ such that C′ ⊂ C, and a reduction ℛ preserving membership of C′, a problem is C-intermediate if it is neither C-complete under ℛ, nor belongs to C′. Under the Karp reduction, the class NP-intermediate is not empty [LAD 75].
Let us note that the idea of NP-completeness goes hand in hand with decision problems. When dealing with optimization problems, the appropriate term, used in the literature, is NP-hard6. A problem of NPO is NP-hard if and only if its decisional variant is an NP-complete problem.
Figure 1.2. P, NP and NP-complete (under the assumption P ≠ NP)
The problem SAT was the first problem shown to be NP-complete (the proof of this important result can be found in [COO 71]). The reduction used (often called the generic reduction) is based on the theory of recursive languages and Turing machines (see [HOP 79, LEW 81] for more details and depth on the Turing machine concept; the language–problem correspondence is also very well described in [GAR 79, LEW 81]). The general idea of the generic reduction, also often called the “Cook–Levin technique (or theorem)”, is as follows: for a generic decision problem (language) belonging to NP, we describe, using a conjunctive normal form, the working of a non-deterministic algorithm (Turing machine) that solves (decides) this problem (language).
The second problem shown to be NP-complete [GAR 79, KAR 72] was the variant of SAT, written 3SAT, where no clause contains more than three literals. The reduction here is from SAT [GAR 79, PAP 81]. More generally, for all k ≥ 3, the kSAT problems (that is the problems defined on conjunctive normal forms where each clause contains no more than k literals) are all NP-complete.
It must be noted that the problem 2SAT, where all clauses of the conjunctive normal form contain at most two literals, is polynomial [EVE 76]. It should also be noted that in [KAR 72], where there is a list of the first 21 NP-complete problems, the problem of linear programming in real numbers was mentioned as a probable member of the class NP-intermediate. It was shown, seven years later ([KHA 79]; see also [ASP 80], an English translation of [KHA 79]), that this problem is in P.
The reference on NP-completeness is the volume by Garey and Johnson [GAR 79]. Its appendix, A list of NP-complete problems, contains a long list of NP-complete problems, with several commentaries for each one and for its restricted versions. For many years, this list was regularly updated by Johnson in the Journal of Algorithms. This update, supplemented by numerous commentaries, appeared under the title The NP-completeness column: an ongoing guide.
“What is the relationship between optimization and decision for NP-complete problems?”. The following theorem [AUS 95, CRE 93, PAZ 81] attempts to give an answer.
THEOREM 1.2.– Let Π be a problem in NPO and let us assume that the decisional version of Π, written Πd, is NP-complete. It follows that a polynomial Turing reduction exists between Πd and Π.
In other words, the decision version (such as those we have considered in this chapter) and the optimization version of an NP-complete problem are of equivalent algorithmic difficulty. However, the question of the existence of a problem in NPO for which the optimization version is more difficult to solve than its decisional counterpart remains open.
Given a problem Π, the most conventional way to show its NP-completeness consists of making a Turing or Karp reduction of an NP-complete problem Π′ to Π. In practical terms, the proof of NP-completeness for Π is divided into three stages: showing that Π belongs to NP, choosing a problem Π′ that is known to be NP-complete, and constructing a polynomial reduction from Π′ to Π.
In the following, we show that MIN VERTEX COVER(G(V, E), K) and MAX STABLE(G(V, E), K), the decisional variants of MIN VERTEX COVER and of MAX STABLE7, respectively, are NP-complete. These two problems are defined as follows. MIN VERTEX COVER(G(V, E), K): given a graph G and a constant K ≤ |V|, does there exist in G a transversal V′ ⊆ V of size less than or equal to K? MAX STABLE(G(V, E), K): given a graph G and a constant K ≤ |V|, does there exist in G a stable set V′ ⊆ V of size greater than or equal to K?
Figure 1.3. The graph associated with the expression ϕ
We will now show that G allows a transversal of size less than or equal to n + 2m if and only if the expression ϕ can be satisfied.
The proof of membership in NP of MAX STABLE(G(V, E), K) is so simple that it is omitted here.
Let us consider a graph G(V, E) of order n having m edges and let us denote by A its incidence matrix8. Let us also consider the expression of MIN VERTEX COVER as an integer linear program and the transformations that follow:
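A standard way of writing this chain of programs (a sketch in the usual notation, which may differ from the exact display intended here) uses a 0–1 variable per vertex and the m × n incidence matrix A of footnote 8, every row of which contains exactly two 1s, one per endpoint of the corresponding edge:

min{1⃗·x⃗ : A·x⃗ ≥ 1⃗, x⃗ ∈ {0, 1}^n}   (MIN VERTEX COVER)
= min{1⃗·(1⃗ − y⃗) : A·(1⃗ − y⃗) ≥ 1⃗, y⃗ ∈ {0, 1}^n}   (substituting y⃗ = 1⃗ − x⃗)
= n − max{1⃗·y⃗ : A·y⃗ ≤ 1⃗, y⃗ ∈ {0, 1}^n}   (since A·1⃗ = 2·1⃗, each row of A having exactly two 1s)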
We note that the last program in the series is nothing more than the integer linear program of MAX STABLE. Furthermore, this series of equivalences indicates that if a solution vector y⃗ for MAX STABLE is given, then the vector 1⃗ − y⃗ (that is the vector where we interchange the “1” and the “0” entries of y⃗) is a feasible solution for MIN VERTEX COVER. Furthermore, if y⃗ contains at least K “1”s (that is the size of the stable set is at least equal to K), then the solution vector 1⃗ − y⃗ deduced for MIN VERTEX COVER contains at most n − K “1”s (that is the size of the transversal is at most equal to n − K). Since the function y⃗ ↦ 1⃗ − y⃗ is computable in polynomial time, the described transformation is also polynomial.
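The corresponding transformation between solution vectors is immediate to implement (a sketch with hypothetical names):

def stable_to_transversal(stable_vector):
    # Interchange the "1" and "0" entries: the complement of a stable set
    # is a transversal.
    return [1 - value for value in stable_vector]

def is_transversal(edges, cover_vector):
    # Every edge must have at least one endpoint in the proposed transversal.
    return all(cover_vector[u] == 1 or cover_vector[v] == 1 for u, v in edges)

# Path on 4 vertices: {0, 2} is a stable set of size 2, so {1, 3} is a
# transversal of size 4 - 2 = 2.
edges = [(0, 1), (1, 2), (2, 3)]
cover = stable_to_transversal([1, 0, 1, 0])
print(cover, is_transversal(edges, cover))   # [0, 1, 0, 1] True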
Let Π be a problem and I an instance of Π of size |I|. We denote by max(I) the highest number that appears in I. Let us note that max(I) can be exponential in |I|. An algorithm for Π is known as pseudo-polynomial if it is polynomial in |I| and max(I) (if max(I) is exponential in |I|, then this algorithm is exponential in |I|).
DEFINITION 1.5.– An optimization problem is NP-complete in the strong sense (strongly NP-complete) if the problem is NP-complete because of its structure and not because of the size of the numbers that appear in its instances. A problem is NP-complete in the weak sense (weakly NP-complete) if it is NP-complete because of its valuations (that is max(I) affects the complexity of the algorithms that solve it).
In Chapter 4 (see also Volume 2, Chapter 8), we find a dynamic programming algorithm that solves this problem in linear time for the highest value of ai and in logarithmic time for the highest value of ci. This means that if this value is a polynomial of n, then the algorithm is also polynomial, and if the value is exponential in n, the algorithm itself is of exponential complexity.
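Assuming for illustration that the problem in question is the knapsack problem, with weights ai, values ci and capacity b (an assumption on our part about which problem is meant), a classic dynamic programming sketch runs in time O(n·b): polynomial in n and in the largest number appearing in the instance, but exponential in the number of bits needed to write b, hence pseudo-polynomial.

def knapsack(weights, values, capacity):
    # best[w] = best total value achievable with total weight at most w.
    best = [0] * (capacity + 1)
    for a, c in zip(weights, values):
        for w in range(capacity, a - 1, -1):   # downwards: each item taken at most once
            best[w] = max(best[w], best[w - a] + c)
    return best[capacity]

# Three items, capacity 6: taking the items of weight 4 and 2 gives value 65.
print(knapsack([3, 4, 2], [30, 50, 15], 6))    # 65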
In this section, we briefly present a few supplementary complexity classes that we will encounter in the following chapters (for more details, see [BAL 88, GAR 79, PAP 94]). Introductions to some complexity classes can also be found in [AUS 99, VAZ 01].
Let us consider a decision problem Π and a generic instance I of Π defined on a data structure S (a graph, for example) and a decision constant K. From the definition of the class NP, we can deduce that if there is a solution giving the answer yes for Π on I, and if this solution is submitted for verification, then the answer for any correct verification algorithm will always be yes. On the other hand, if such a solution does not exist, then any solution proposal submitted for verification will always bring about a no answer from any correct verification algorithm.
Let us consider the following decisional variant of MIN TSP, denoted by CO-MIN TSP: “given Kn, the distance vector d⃗ and K, is it true that there is no Hamiltonian cycle of total distance less than or equal to K?”. How can we guarantee that the answer for an instance of this problem is yes? This question leads on to that of this problem’s membership of the class NP. We come across the same situation for a great many problems in NP \ P (assuming that P ≠ NP).
Figure 1.4. P, NP and co-NP (under the conditions P ≠ NP and NP ≠ co-NP)
Obviously, for a decision problem Π ∈ P, the complementary problem also belongs to P; as a result, P ⊆ NP ∩ co-NP.
A decision problem Π belongs to RP if there is a polynomial p and an algorithm 𝒜 such that, for any instance I: if I ∈ 𝒴Π, then the algorithm 𝒜 gives a decision in polynomial time for at least half of the candidate solutions (certificates) S, with |S| ≤ p(|I|), that are submitted to it for verification of their feasibility; if, on the other hand, I ∉ 𝒴Π (that is if I ∈ ℐΠ \ 𝒴Π), then for any proposed solution S with |S| ≤ p(|I|), 𝒜 rejects S in polynomial time. A problem Π belongs to co-RP if and only if its complementary problem belongs to RP, the complement being defined as before. We have the following relationship between P, RP and NP: P ⊆ RP ⊆ NP. For a very simple and intuitive proof of this relationship, see [VAZ 01].
[ASP 80] ASPVALL B., STONE R.E., “Khachiyan’s linear programming algorithm”, J. Algorithms, vol. 1, p. 1–13, 1980.
[AUS 95] AUSIELLO G., CRESCENZI P., PROTASI M., “Approximate solutions of NP optimization problems”, Theoret. Comput. Sci., vol. 150, p. 1–55, 1995.
[AUS 99] AUSIELLO G., CRESCENZI P., GAMBOSI G., KANN V., MARCHETTI-SPACCAMELA A., PROTASI M., Complexity and Approximation. Combinatorial Optimization Problems and their Approximability properties, Springer-Verlag, Berlin, 1999.
[BAL 88] BALCÀZAR J.L., DIAZ J., GABARRÒ J., Structural Complexity, vol. 1 and 2, Springer-Verlag, Berlin, 1988.
[COO 71] COOK S.A., “The complexity of theorem-proving procedures”, Proc. STOC’71, p. 151–158, 1971.
[CRE 93] CRESCENZI P., SILVESTRI R., “Average measure, descriptive complexity and approximation of minimization problems”, Int. J. Foundations Comput. Sci., vol. 4, p. 15–30, 1993.
[EVE 76] EVEN S., ITAI A., SHAMIR A., “On the complexity of timetable and multicommodity flow problems”, SIAM J. Comput., vol. 5, p. 691–703, 1976.
[FAG 74] FAGIN R., “Generalized first-order spectra and polynomial-time recognizable sets”, KARP R.M., Ed., Complexity of Computations, p. 43–73, American Mathematics Society, 1974.
[GAR 79] GAREY M.R., JOHNSON D.S., Computers and Intractability. A Guide to the Theory of NP-completeness, W. H. Freeman, San Francisco, 1979.
[HOP 79] HOPCROFT J.E., ULLMAN J.D., Introduction to Automata Theory, Languages and Computation, Addison-Wesley, 1979.
[KAR 72] KARP R.M., “Reducibility among combinatorial problems”, MILLER R.E., THATCHER J.W., Eds., Complexity of computer computations, p. 85–103, Plenum Press, New York, 1972.
[KHA 79] KHACHIAN L.G., “A polynomial algorithm for linear programming”, Dokladi Akademiy Nauk SSSR, vol. 244, p. 1093–1096, 1979.
[LAD 75] LADNER R.E., “On the structure of polynomial time reducibility”, J. Assoc. Comput. Mach., vol. 22, p. 155–171, 1975.
[LEW 81] LEWIS H.R., PAPADIMITRIOU C.H., Elements of the Theory of Computation, Prentice-Hall, New Jersey, 1981.
[MOT 95] MOTWANI R., RAGHAVAN P., Randomized Algorithms, Cambridge University Press, Cambridge, 1995.
[PAP 81] PAPADIMITRIOU C.H., STEIGLITZ K., Combinatorial Optimization: Algorithms and Complexity, Prentice-Hall, New Jersey, 1981.
[PAP 94] PAPADIMITRIOU C.H., Computational Complexity, Addison-Wesley, 1994.
[PAS 04] PASCHOS V.TH., Complexité et approximation polynomiale, Hermes, Paris, 2004.
[PAZ 81] PAZ A., MORAN S., “Non deterministic polynomial optimization problems and their approximations”, Theoret. Comput. Sci., vol. 15, p. 251–277, 1981.
[VAZ 01] VAZIRANI V., Approximation Algorithms, Springer-Verlag, Berlin, 2001.
2. We associate two literals, x and x̄, with a Boolean variable x, that is the variable itself and its negation; a clause is a disjunction of literals.
3. For the combinatorial optimization problems that concern us, we can assume that the components of this vector are 0 or 1 or, if need be, integers.
4. Given a graph G(V, E) of order n, in the MIN VERTEX COVER problem we are looking for a smallest transversal of G, that is a set V′ ⊆ V of minimum size such that for every edge (u, v) ∈ E, either u ∈ V′ or v ∈ V′; we denote by MIN WEIGHTED VERTEX COVER the version of MIN VERTEX COVER where a positive weight is associated with each vertex and the objective is to find a transversal of G that minimizes the sum of the vertex weights.
5. These are the starred problems in the appendices of [GAR 79].
6. There is a clash with this term when it is used for optimization problems and when it is used in the sense of property 2 of definition 1.4, where it means that a decision problem Π is harder than any other decision problem Π′ ∈ NP.
7. Given a graph G(V, E) of order n, in the MAX STABLE problem we are trying to find a stable set of maximum size, that is a set V′ ⊆ V of maximum size such that no two vertices of V′ are joined by an edge of E.
8. This matrix is of dimensions m × n.
Chapter 2
Randomized Complexity
Jérémy BARBAY
A deterministic algorithm is “a method of solution, by the application of a rule, a finite number of times” (Chapter 1). This definition extends to probabilistic algorithms by allowing probabilistic rules, or by giving a probability distribution over a set of deterministic algorithms. By definition, probabilistic algorithms are potentially more powerful than their deterministic counterparts: the class of probabilistic algorithms contains the class of deterministic algorithms. It is therefore natural to consider probabilistic algorithms for solving the hardest combinatorial optimization problems, all the more so because probabilistic algorithms are often simpler than their deterministic counterparts.
