Concepts of Combinatorial Optimization, Volume 1

Description

Combinatorial optimization is a multidisciplinary scientific area lying at the interface of three major scientific domains: mathematics, theoretical computer science and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area, from fundamental notions and approaches to several classical applications of combinatorial optimization. Concepts of Combinatorial Optimization is divided into three parts:

– on the complexity of combinatorial optimization problems, which presents basics about worst-case and randomized complexity;

– classical solution methods, which presents the two best-known methods for solving hard combinatorial optimization problems, namely Branch-and-Bound and Dynamic Programming;

– elements from mathematical programming, which presents fundamentals of mathematical-programming-based methods that have been at the heart of Operations Research since the origins of this field.

Number of pages: 672

Publication year: 2012




Table of Contents

Preface

PART I. Complexity of Combinatorial Optimization Problems

Chapter 1. Basic Concepts in Algorithms and Complexity Theory

1.1. Algorithmic complexity

1.2. Problem complexity

1.3. The classes P, NP and NPO

1.4. Karp and Turing reductions

1.5. NP-completeness

1.6. Two examples of NP-complete problems

1.7. A few words on strong and weak NP-completeness

1.8. A few other well-known complexity classes

1.9. Bibliography

Chapter 2. Randomized Complexity

2.1. Deterministic and probabilistic algorithms

2.2. Lower bound technique

2.3. Elementary intersection problem

2.4. Conclusion

2.5. Bibliography

PART II. Classical Solution Methods

Chapter 3. Branch-and-Bound Methods

3.1. Introduction

3.2. Branch-and-bound method principles

3.3. A detailed example: the binary knapsack problem

3.4. Conclusion

3.5. Bibliography

Chapter 4. Dynamic Programming

4.1. Introduction

4.2. A first example: crossing the bridge

4.3. Formalization

4.4. Some other examples

4.5. Solution

4.6. Solution of the examples

4.7. A few extensions

4.8. Conclusion

4.9. Bibliography

PART III. Elements from Mathematical Programming

Chapter 5. Mixed Integer Linear Programming Models for Combinatorial Optimization Problems

5.1. Introduction

5.2. General modeling techniques

5.3. More advanced MILP models

5.4. Conclusions

5.5. Bibliography

Chapter 6. Simplex Algorithms for Linear Programming

6.1. Introduction

6.2. Primal and dual programs

6.3. The primal simplex method

6.4. Bland’s rule

6.5. Simplex methods for the dual problem

6.6. Using reduced costs and pseudo-costs for integer programming

6.7. Bibliography

Chapter 7. A Survey of some Linear Programming Methods

7.1. Introduction

7.2. Dantzig’s simplex method

7.3. Duality

7.4. Khachiyan’s algorithm

7.5. Interior methods

7.6. Conclusion

7.7. Bibliography

Chapter 8. Quadratic Optimization in 0–1 Variables

8.1. Introduction

8.2. Pseudo-Boolean functions and set functions

8.3. Formalization using pseudo-Boolean functions

8.4. Quadratic pseudo-Boolean functions (qpBf)

8.5. Integer optimum and continuous optimum of qpBfs

8.6. Derandomization

8.7. Posiforms and quadratic posiforms

8.8. Optimizing a qpBf: special cases and polynomial cases

8.9. Reductions, relaxations, linearizations, bound calculation and persistence

8.10. Local optimum

8.11. Exact algorithms and heuristic methods for optimizing qpBfs

8.12. Approximation algorithms

8.13. Optimizing a quadratic pseudo-Boolean function with linear constraints

8.14. Linearization, convexification and Lagrangian relaxation for optimizing a qpBf with linear constraints

8.15. ε-Approximation algorithms for optimizing a qpBf with linear constraints

8.16. Bibliography

Chapter 9. Column Generation in Integer Linear Programming

9.1. Introduction

9.2. A column generation method for a bounded variable linear programming problem

9.3. An inequality to eliminate the generation of a 0–1 column

9.4. Formulations for an integer linear program

9.5. Solving an integer linear program using column generation

9.6. Applications

9.7. Bibliography

Chapter 10. Polyhedral Approaches

10.1. Introduction

10.2. Polyhedra, faces and facets

10.3. Combinatorial optimization and linear programming

10.4. Proof techniques

10.5. Integer polyhedra and min–max relations

10.6. Cutting-plane method

10.7. The maximum cut problem

10.8. The survivable network design problem

10.9. Conclusion

10.10. Bibliography

Chapter 11. Constraint Programming

11.1. Introduction

11.2. Problem definition

11.3. Decision operators

11.4. Propagation

11.5. Heuristics

11.6. Conclusion

11.7. Bibliography

List of Authors

Index

Summary of Other Volumes in the Series

First published 2010 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc. Adapted and updated from Optimisation combinatoire volumes 1 to 5 published 2005-2007 in France by Hermes Science/Lavoisier © LAVOISIER 2005, 2006, 2007

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George’s Road, London SW19 4EU, UK

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2010

The rights of Vangelis Th. Paschos to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data

Combinatorial optimization / edited by Vangelis Th. Paschos.

v. cm.

Includes bibliographical references and index.

Contents: v. 1. Concepts of combinatorial optimization

ISBN 978-1-84821-146-9 (set of 3 vols.) -- ISBN 978-1-84821-147-6 (v. 1)

1. Combinatorial optimization. 2. Programming (Mathematics)

I. Paschos, Vangelis Th.

QA402.5.C545123 2010

519.6'4--dc22

2010018423

British Library Cataloguing-in-Publication Data

A CIP record for this book is available from the British Library

ISBN 978-1-84821-146-9 (Set of 3 volumes)

ISBN 978-1-84821-147-6 (Volume 1)

Preface

What is combinatorial optimization? There are, in my opinion, as many definitions as there are researchers in this domain, each one as valid as the other. For me, it is above all the art of understanding a real, natural problem, and being able to transform it into a mathematical model. It is the art of studying this model in order to extract its structural properties and the characteristics of the solutions of the modeled problem. It is the art of exploiting these characteristics in order to determine algorithms that calculate the solutions but also to show the limits in economy and efficiency of these algorithms. Lastly, it is the art of enriching or abstracting existing models in order to increase their strength, portability, and ability to describe mathematically (and computationally) other problems, which may or may not be similar to the problems that inspired the initial models.

Seen in this light, we can easily understand why combinatorial optimization is at the heart of the junction of scientific disciplines as rich and different as theoretical computer science, pure and applied, discrete and continuous mathematics, mathematical economics, and quantitative management. It is inspired by these, and enriches them all.

This book, Concepts of Combinatorial Optimization, is the first volume in a set entitled Combinatorial Optimization. It tries, along with the other volumes in the set, to embody the idea of combinatorial optimization. The subjects of this volume cover themes that are considered to constitute the hard core of combinatorial optimization. The book is divided into three parts:

– Part I: Complexity of Combinatorial Optimization Problems;

– Part II: Classical Solution Methods;

– Part III: Elements from Mathematical Programming.

In the first part, Chapter 1 introduces the fundamentals of the theory of (deterministic) complexity and of algorithm analysis. In Chapter 2, the context changes and we consider algorithms that make decisions by “tossing a coin”. At each stage of the resolution of a problem, several alternatives have to be considered, each one occurring with a certain probability. This is the context of probabilistic (or randomized) algorithms, which is described in this chapter.

In the second part some methods are introduced that make up the great classics of combinatorial optimization: branch-and-bound and dynamic programming. The former is perhaps the most well known and the most popular when we try to find an optimal solution to a difficult combinatorial optimization problem. Chapter 3 gives a thorough overview of this method as well as of some of the most well-known tree search methods based upon branch-and-bound. What can we say about dynamic programming, presented in Chapter 4? It has considerable reach and scope, and very many optimization problems have optimal solution algorithms that use it as their central method.

The third part is centered on mathematical programming, considered to be the heart of combinatorial optimization and operational research. In Chapter 5, a large number of linear models and an equally large number of combinatorial optimization problems are set out and discussed. In Chapter 6, the main simplex algorithms for linear programming are introduced, such as the primal simplex algorithm, the dual simplex algorithm and the primal–dual simplex algorithm. Chapter 7 surveys some classical linear programming methods, while Chapter 8 introduces quadratic integer optimization methods. Chapter 9 describes a resolution method currently in wide use, namely column generation. Chapter 10 focuses on polyhedral methods, almost 60 years old but still relevant to combinatorial optimization research. Lastly, Chapter 11 introduces a more contemporary, but extremely interesting, subject, namely constraint programming.

This book is intended for novice researchers, or even Master’s students, as much as for senior researchers. Master’s students will probably need a little basic knowledge of graph theory and mathematical (especially linear) programming to be able to read the book comfortably, even though the authors have been careful to give definitions of all the concepts they use in their chapters. In any case, to improve their knowledge of graph theory, readers are invited to consult a great, flagship book from one of our gurus, Claude Berge: Graphs and Hypergraphs, North Holland, 1973. For linear programming, there is a multitude of good books that the reader could consult, for example V. Chvátal, Linear Programming, W.H. Freeman, 1983, or M. Minoux, Programmation mathématique: théorie et algorithmes, Dunod, 1983.

Editing this book has been an exciting adventure, and all my thanks go, firstly, to the authors who, despite their many responsibilities and commitments (which is the lot of any university academic), have agreed to participate in the book by writing chapters in their areas of expertise and, at the same time, to take part in a very tricky exercise: writing chapters that are both educational and high-level science at the same time.

This work could never have come into being without the original proposal of Jean-Charles Pomerol, Vice President of the scientific committee at Hermes, and Sami Ménascé and Raphaël Ménascé, the heads of publications at ISTE. I give my warmest thanks to them for their insistence and encouragement. It is a pleasure to work with them as well as with Rupert Heywood, who has ingeniously translated the material in this book from the original French.

Vangelis Th. PASCHOS

June 2010

PART I

Complexity of Combinatorial Optimization Problems

Chapter 1

Basic Concepts in Algorithms and Complexity Theory

1.1. Algorithmic complexity

In algorithmic theory, a problem is a general question to which we wish to find an answer. This question usually has parameters or variables whose values have yet to be determined. A problem is posed by giving a list of these parameters as well as the properties to which the answer must conform. An instance of a problem is obtained by giving explicit values to each of the parameters of the problem.

An algorithm is a sequence of elementary operations (variable assignments, tests, branches, etc.) that, when given an instance of a problem as input, produces the solution of this problem as output after execution of its final operation.
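As a minimal illustration (the problem and function name below are ours, not the book’s), consider the problem “given a non-empty list of integers, find the largest one”: each concrete list is one instance, and the algorithm is a finite sequence of elementary operations mapping the instance to its solution.

```python
def maximum(values):
    """Return the largest element of a non-empty list of integers.

    Each call receives one *instance* of the general problem
    "find the maximum of a list of integers"; the body is a
    finite sequence of elementary operations.
    """
    best = values[0]              # variable assignment
    for v in values[1:]:
        if v > best:              # elementary test
            best = v              # variable assignment
    return best                   # the solution of this instance

print(maximum([3, 17, 5, 11]))    # → 17
```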

The two most important parameters for measuring the quality of an algorithm are its execution time and the memory space that it uses.

The first parameter is expressed in terms of the number of instructions necessary to run the algorithm. Using the number of instructions as a unit of time is justified by the fact that the same program will use the same number of instructions on two different machines, while the time taken will vary depending on the respective speeds of the machines. We generally consider that an instruction equates to an elementary operation, for example an assignment, a test, an addition, a multiplication, a trace, etc. What we call the complexity in time, or simply the complexity, of an algorithm gives an indication of the time it will take to solve a problem of a given size. Formally, it is a function that associates with the size of an instance an order of magnitude of the number of instructions necessary to solve that instance.

The second parameter corresponds to the number of memory units used by the algorithm to solve a problem. The complexity in space is a function that associates with the size of an instance an order of magnitude of the number of memory units used by the operations necessary to solve that instance.
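The instruction-counting view can be made concrete with a small sketch (the counting scheme and names are ours): two algorithms are instrumented to report how many elementary comparisons they perform, and the counts grow like n and n(n−1)/2 respectively, which is exactly the order of magnitude that the complexity function records.

```python
def count_linear(values, target):
    """Membership test by linear scan; returns the number of comparisons made."""
    ops = 0
    for v in values:
        ops += 1                  # one comparison per element examined
        if v == target:
            return ops
    return ops

def count_pairs(values):
    """Examines every unordered pair; returns the number of comparisons made."""
    ops = 0
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            ops += 1              # one comparison per pair (i, j)
    return ops

n = 100
print(count_linear(list(range(n)), -1))   # n comparisons → 100
print(count_pairs(list(range(n))))        # n(n-1)/2 comparisons → 4950
```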

There are several sets of hypotheses concerning the framework within which we measure the complexity of an algorithm. The most commonly used framework is the one known as worst-case. Here, the complexity of an algorithm is the number of operations carried out on the instance that represents, among those of a fixed size, the worst configuration for its execution; this is the framework used in most of this book. It is not, however, the only framework for analyzing the complexity of an algorithm. Another framework often used is average-case analysis. This kind of analysis consists of finding, for a fixed instance size n, the average execution time of an algorithm over all the instances of size n; for this analysis we assume that the instances occur according to a specific probability distribution, more often than not taken to be uniform.

There are three main reasons why worst-case analysis is used more often than average-case analysis. The first is psychological: the worst-case result guarantees that the algorithm being analyzed can never exhibit a level of complexity higher than that shown by this analysis; in other words, the result obtained gives an upper bound on the complexity of the algorithm. The second reason is mathematical: results from a worst-case analysis are often easier to obtain than those from an average-case analysis, which very often requires more complex mathematical tools. The third reason is portability: the validity of an average-case analysis is limited by the assumptions made about the distribution of the instances; if the assumptions change, the original analysis is no longer valid.
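A small experiment (ours, not from the chapter) contrasts the two frameworks for linear search on instances of size n, with the target drawn uniformly at random: the worst case costs n comparisons, while the average under the uniform model is (n + 1)/2.

```python
import random

def comparisons(values, target):
    """Number of comparisons a linear search makes before finding target."""
    for i, v in enumerate(values, start=1):
        if v == target:
            return i
    return len(values)

n = 50
instance = list(range(n))

# Worst-case configuration: target sits in the last position.
worst = comparisons(instance, n - 1)      # n comparisons → 50

# Average-case estimate: target uniform over the n positions.
random.seed(0)
avg = sum(comparisons(instance, random.randrange(n))
          for _ in range(20000)) / 20000  # close to (n + 1) / 2 = 25.5

print(worst)                              # → 50
print(round(avg, 1))
```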

1.2. Problem complexity

The definition of the complexity of an algorithm can be easily transposed to problems. Informally, the complexity of a problem is equal to the complexity of the best algorithm that solves it (this definition is valid independently of which framework we use).

Let us take a size n and a function f(n). Thus:

– TIME(f(n)) is the class of problems for which the complexity (in time) of an instance of size n is in O(f(n)).

– SPACE(f(n)) is the class of problems that can be solved, for an instance of size n, by using a memory space in O(f(n)).

We can now specify the following general classes of complexity:

– P, the class of problems solvable in time polynomial in the size of the instance, that is, P = ⋃k≥1 TIME(n^k);

– PSPACE, the class of problems solvable in space polynomial in the size of the instance, that is, PSPACE = ⋃k≥1 SPACE(n^k);

– EXPTIME, the class of problems solvable in time exponential in the size of the instance, that is, EXPTIME = ⋃k≥1 TIME(2^(n^k)).

With respect to the classes that we have just defined, we have the following relations: P ⊆ PSPACE ⊆ EXPTIME and P ⊊ EXPTIME. Knowing whether the inclusions of the first relation are strict or not is still an open problem.

Continue reading in the full edition!
