
Table of Contents

Cover

Preface to the Second Edition

Preface to the First Edition

Intended Audience

What and How

Abbreviations and Notation

About the Companion Website

1 Formulations

1.1 Introduction

1.2 What Is an Integer Program?

1.3 Formulating IPs and BIPs

1.4 The Combinatorial Explosion

1.5 Mixed Integer Formulations

1.6 Alternative Formulations

1.7 Good and Ideal Formulations

1.8 Notes

1.9 Exercises

2 Optimality, Relaxation, and Bounds

2.1 Optimality and Relaxation

2.2 Linear Programming Relaxations

2.3 Combinatorial Relaxations

2.4 Lagrangian Relaxation

2.5 Duality

2.6 Linear Programming and Polyhedra

2.7 Primal Bounds: Greedy and Local Search

2.8 Notes

2.9 Exercises

3 Well‐Solved Problems

3.1 Properties of Easy Problems

3.2 IPs with Totally Unimodular Matrices

3.3 Minimum Cost Network Flows

3.4 Special Minimum Cost Flows

3.5 Optimal Trees

3.6 Submodularity and Matroids

3.7 Two Harder Network Flow Problems*

3.8 Notes

3.9 Exercises

4 Matchings and Assignments

4.1 Augmenting Paths and Optimality

4.2 Bipartite Maximum Cardinality Matching

4.3 The Assignment Problem

4.4 Matchings in Nonbipartite Graphs

4.5 Notes

4.6 Exercises

5 Dynamic Programming

5.1 Some Motivation: Shortest Paths

5.2 Uncapacitated Lot‐Sizing

5.3 An Optimal Subtree of a Tree

5.4 Knapsack Problems

5.5 The Cutting Stock Problem*

5.6 Notes

5.7 Exercises

6 Complexity and Problem Reductions

6.1 Complexity

6.2 Decision Problems, and Classes NP and P

6.3 Polynomial Reduction and the Class NPC

6.4 Consequences of P = NP or P ≠ NP

6.5 Optimization and Separation

6.6 The Complexity of Extended Formulations

6.7 Worst‐Case Analysis of Heuristics

6.8 Notes

6.9 Exercises

7 Branch and Bound

7.1 Divide and Conquer

7.2 Implicit Enumeration

7.3 Branch and Bound: an Example

7.4 LP‐Based Branch and Bound

7.5 Using a Branch‐and‐Bound/Cut System

7.6 Preprocessing or Presolve

7.7 Notes

7.8 Exercises

8 Cutting Plane Algorithms

8.1 Introduction

8.2 Some Simple Valid Inequalities

8.3 Valid Inequalities

8.4 A Priori Addition of Constraints

8.5 Automatic Reformulation or Cutting Plane Algorithms

8.6 Gomory's Fractional Cutting Plane Algorithm

8.7 Mixed Integer Cuts

8.8 Disjunctive Inequalities and Lift‐and‐Project

8.9 Notes

8.10 Exercises

9 Strong Valid Inequalities

9.1 Introduction

9.2 Strong Inequalities

9.3 0–1 Knapsack Inequalities

9.4 Mixed 0–1 Inequalities

9.5 The Optimal Subtour Problem

9.6 Branch‐and‐Cut

9.7 Notes

9.8 Exercises

10 Lagrangian Duality

10.1 Lagrangian Relaxation

10.2 The Strength of the Lagrangian Dual

10.3 Solving the Lagrangian Dual

10.4 Lagrangian Heuristics

10.5 Choosing a Lagrangian Dual

10.6 Notes

10.7 Exercises

11 Column (and Row) Generation Algorithms

11.1 Introduction

11.2 The Dantzig–Wolfe Reformulation of an IP

11.3 Solving the LP Master Problem: Column Generation

11.4 Solving the Master Problem: Branch‐and‐Price

11.5 Problem Variants

11.6 Computational Issues

11.7 Branch‐Cut‐and‐Price: An Example*

11.8 Notes

11.9 Exercises

12 Benders' Algorithm

12.1 Introduction

12.2 Benders' Reformulation

12.3 Benders' with Multiple Subproblems

12.4 Solving the Linear Programming Subproblems

12.5 Integer Subproblems: Basic Algorithms*

12.6 Notes

12.7 Exercises

13 Primal Heuristics

13.1 Introduction

13.2 Greedy and Local Search Revisited

13.3 Improved Local Search Heuristics

13.4 Heuristics Inside MIP Solvers

13.5 User‐Defined MIP heuristics

13.6 Notes

13.7 Exercises

14 From Theory to Solutions

14.1 Introduction

14.2 Software for Solving Integer Programs

14.3 How Do We Find an Improved Formulation?

14.4 Multi‐item Single Machine Lot‐Sizing

14.5 A Multiplexer Assignment Problem

14.6 Integer Programming and Machine Learning*

14.7 Notes

14.8 Exercises

References

Index

Integer Programming: 2nd Edition

Solutions to Certain Exercises in IP Book

Solutions to Certain Exercises in Chapter 1

Solutions to Certain Exercises in Chapter 2

Solutions to Certain Exercises in Chapter 3

Solutions to Certain Exercises in Chapter 4

Solutions to Certain Exercises in Chapter 5

Solutions to Certain Exercises in Chapter 6

Solutions to Certain Exercises in Chapter 7

Solutions to Certain Exercises in Chapter 8

Solutions to Certain Exercises in Chapter 9

Solutions to Certain Exercises in Chapter 10

Solutions to Certain Exercises in Chapter 11

Solutions to Certain Exercises in Chapter 12

Solutions to Certain Exercises in Chapter 13

Solutions to Certain Exercises in Chapter 14

End User License Agreement

List of Tables

Chapter 1

Table 1.1 Some typical functions.

Chapter 3

Table 3.1 Matrices that are not TU.

Table 3.2 Matrices that are TU.

Chapter 5

Table 5.1 Dynamic programming values for a 0–1 knapsack problem.

Table 5.2 Dynamic programming values for an integer knapsack problem.

List of Illustrations

Chapter 1

Figure 1.1 LP and IP solutions.

Figure 1.2 Subtours.

Figure 1.3 Fixed cost function.

Figure 1.4 Either/or constraints.

Figure 1.5 Two different formulations of an IP.

Figure 1.6 The ideal formulation.

Figure 1.7 TSP Instance.

Chapter 2

Figure 2.1 Bounds for IP.

Figure 2.2 Matching and cover by nodes.

Figure 2.3 Weak duality for matching.

Figure 2.4 Extreme points and rays.

Figure 2.5 (a) Matching instance. (b) Stable set instance.

Chapter 3

Figure 3.1 Digraph for minimum cost network flow.

Figure 3.2 Graph for optimal weight tree.

Figure 3.3 (a) Shortest path instance. (b) Network instance.

Figure 3.4 (a) Tree instance, (b) fixed charge network flow instance.

Figure 3.5 Steiner tree instance.

Chapter 4

Figure 4.1 An augmenting path.

Figure 4.2 Bipartite matching.

Figure 4.3 Bipartite matching 2.

Figure 4.4 Primal step.

Figure 4.5 Maximum weight bipartite matching.

Figure 4.6 (a) Matching to be augmented and (b) maximum cardinality matching...

Figure 4.7 Weighted bipartite graph.

Chapter 5

Figure 5.1 Shortest path.

Figure 5.2 Network for lot‐sizing problem.

Figure 5.3 Shortest path for ULS.

Figure 5.4 Rooted tree with node weights.

Figure 5.5 Knapsack longest path problem.

Figure 5.6 Decomposition of integer flow of 7 units.

Figure 5.7 Rooted tree with node weights.

Chapter 6

Figure 6.1 Node cover for 3-SAT.

Figure 6.2 Triangle Inequality.

Figure 6.3 Optimal tour longer than two matchings.

Chapter 7

Figure 7.1 Binary enumeration tree.

Figure 7.2 TSP enumeration tree.

Figure 7.3 Pruned by optimality.

Figure 7.4 Pruned by bound.

Figure 7.5 No pruning possible.

Figure 7.6 Partial branch‐and‐bound tree 1.

Figure 7.7 Partial branch‐and‐bound tree 2.

Figure 7.8 Complete branch‐and‐bound tree.

Figure 7.9 Division of the feasible region.

Figure 7.10 Branch‐and‐bound flow chart.

Figure 7.11 Piecewise linear approximation.

Figure 7.12 (a) Standard branching. (b) Orbital branching.

Figure 7.13 Enumeration tree (min).

Chapter 8

Figure 8.1 Mixed integer inequality.

Figure 8.2 Gomory cutting planes.

Figure 8.3 The basic mixed integer inequality.

Figure 8.4 Two split cuts. The region cut off is shaded.

Figure 8.5 Disjunctive inequality.

Chapter 9

Figure 9.1 Two examples of dominated inequalities.

Figure 9.2 Facets and faces of a polyhedron.

Figure 9.3 Single node flow set.

Figure 9.4 Fractional optimal subtour solution.

Figure 9.5 Branch‐and‐cut.

Figure 9.6 Branch‐and‐cut tree.

Figure 9.7 Fractional solution of STSP.

Figure 9.8 Fractional solution of CFL.

Chapter 10

Figure 10.1 Optimal tour for STSP.

Figure 10.2 (a) A convex function. (b) Form of the dual problem.

Figure 10.3 Optimal 1‐tree for (a), (b), and (c).

Chapter 11

Figure 11.1 Branching for 0–1 column generation: (a) original and (b) column...

Figure 11.2 A branching scheme for a partitioning problem.

Chapter 12

Figure 12.1 Branch‐and‐cut tree for Benders' algorithm.

Figure 12.2 Branch‐and‐cut tree for Benders' algorithm.

Chapter 13

Figure 13.1 2‐Exchange for STSP.

Chapter 14

Figure 14.1 First ULS solution.

Figure 14.2 Second ULS solution.

Figure 14.3 Third ULS solution.

Figure 14.4 CLS solution.

Figure 14.5 Representation of solution with changeovers.



Integer Programming

Laurence A. Wolsey

UCLouvain

 

 

 

 

Second Edition

 

 

 

 

This edition first published 2021

Copyright © 2021 by John Wiley & Sons, Inc. All rights reserved.

Edition History

John Wiley & Sons (1e, 1998)

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Laurence A. Wolsey to be identified as the author of this work has been asserted in accordance with law.

Registered Offices

John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office

111 River Street, Hoboken, NJ 07030, USA

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty

While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging‐in‐Publication Data Applied for

ISBN: 9781119606536

Cover Design: Wiley

Cover Images: Concept of applied astronomy © Jackie Niam/Shutterstock,

LowPoly Trendy Banner © Isniper/Shutterstock

To Marguerite.

Preface to the Second Edition

There has been remarkable progress in the development of mixed integer programming solvers in the past 22 years since the first edition of this book appeared. Though branch‐and‐cut has replaced branch‐and‐bound as the basic computational tool, the underlying theory has changed relatively little and most of the cutting planes were already known at that time, but not yet implemented in the solvers. The other most significant developments in the solvers are much improved preprocessing/presolving and many new ideas for primal heuristics. The result has been a speed‐up of several orders of magnitude in the codes. The other major change has been the widespread use of decomposition algorithms, in particular column generation (branch‐(cut)‐and‐price) and Benders' decomposition.

The changes in this edition reflect these developments. Chapter 11 on column generation has been almost completely rewritten, Chapter 12 on Benders' algorithm is new and the material on preprocessing (Section 7.6), heuristics (Chapter 13), and branch‐and‐cut (Section 9.6) has been expanded. Other minor changes are the inclusion of a few more basic and extended formulations (for instance on fixed cost network flows in Section 3.7) and brief subsections on other topics, such as nonbipartite matching, the complexity of extended formulations or a good linear program for the implementation of lift‐and‐project.

Most of the exercises on formulations and algorithms are very small and typically are solved at the top node by the latest mixed integer programming (MIP) solvers. In such cases, the idea is to understand the algorithm and essentially solve the problem manually, just working with a linear programming solver as a black box, or alternatively to program an algorithm for the problem. A small number of sections/subsections have a star (*) indicating that they may be of interest, but are not essential.

I am most grateful to a variety of colleagues and friends for making suggestions and/or commenting on different chapters, including Karen Aardal, Daniele Catanzaro, Michele Conforti, Bernard Fortz, Martine Labbé, Andrea Lodi, Fabrizio Rossi, Stefano Smriglio, Eduardo Uchoa, and Dieter Weninger. Thanks again go to Fabienne Henry for her help with my LaTeX difficulties. As before, the errors are all my own.

Preface to the First Edition

Intended Audience

The book is addressed to undergraduates and graduates in operations research, mathematics, engineering, and computer science. It should be suitable for advanced undergraduate and Masters level programs. It is also aimed at users of integer programming who wish to understand why some problems are difficult to solve, how they can be reformulated so as to give better results, and how mixed integer programming systems can be used more effectively.

The book is essentially self‐contained, though some familiarity with linear programming is assumed, and a few basic concepts from graph theory are used.

The book provides material for a one‐semester course of two to three hours per week.

What and How

Integer Programming is about ways to solve optimization problems with discrete or integer variables. Such variables are used to model indivisibilities, and 0/1 variables are used to represent on/off decisions to buy, invest, hire, and so on. Such problems arise in all walks of life, whether in developing train or aircraft timetables, planning the work schedule of a production line or a maintenance team, or planning nationally or regionally the daily or weekly production of electricity.

The last 10 years have seen a remarkable advance in our ability to solve to near optimality difficult practical integer programs. This is due to a combination of

Improved modeling

Superior linear programming software

Faster computers

New cutting plane theory and algorithms

New heuristic methods

Branch‐and‐cut and integer programming decomposition algorithms

Today many industrial users still build an integer programming model and stop at the first integer feasible solution provided by their software. Unless the problem is very easy, such solutions can be 5%, 10%, or even 100% away from optimal, resulting in losses running into mega‐dollars. In many cases, it is now possible to obtain solutions that are proved to be optimal, or proved to be within 0.1%, 1%, or 5% of optimal, in a reasonable amount of computer time. There is, however, a cost: better models must be built, and either specially tailored algorithms must be constructed, or better use must be made of existing commercial software.

To make such gains, it is necessary to understand why some problems are more difficult than others, why some formulations are better than others, how effective different algorithms can be, and how integer programming software can be best used. The aim of this book is to provide some such understanding.

Chapter 1 introduces the reader to various integer programming problems and their formulation and introduces the important distinction between good and bad formulations. Chapter 2 explains how it is possible to prove that feasible solutions are optimal or close to optimal.

Chapters 3–5 study integer programs that are easy. The problems and algorithms are interesting in their own right, but also because the algorithmic ideas can often be adapted so as to provide good feasible solutions for more difficult problems. In addition, these easy problems must often be solved repeatedly as subproblems in algorithms for the more difficult problems. We examine when linear programs automatically have integer solutions, which is in particular the case for network flow problems. The greedy algorithm for finding an optimal tree, the primal‐dual algorithm for the assignment problem, and a variety of dynamic programming algorithms are presented, and their running times examined.

In Chapter 6, we informally address the question of the apparent difference in difficulty between the problems presented in Chapters 3–5 that can be solved rapidly, and the “difficult” problems treated in the rest of the book.

The fundamental branch‐and‐bound approach is presented in Chapter 7. Certain features of commercial integer programming systems based on branch‐and‐bound are discussed. In Chapters 8 and 9, we discuss valid inequalities and cutting planes. The use of inequalities to improve formulations and obtain tighter bounds is the area in which probably the most progress has been made in the last 10 years. We give examples of the cuts and also the routines to find cuts that are being added to the latest systems.

In Chapters 10 and 11, two important ways of decomposing integer programs are presented. The first is by Lagrangian relaxation and the second by column generation. It is often very easy to implement a special‐purpose algorithm based on Lagrangian relaxation, and many applications are reported in the literature. Integer programming column generation, which is linear programming based, is more recent, but several recent applications suggest that its importance will grow.

Whereas the emphasis in Chapters 7–11 is on obtaining “dual” bounds (upper bounds on the optimal value of a maximization problem), the need to find good feasible solutions that provide “primal” (lower) bounds is addressed in Chapter 12. We present the basic ideas of various modern local search metaheuristics, introduce briefly the worst‐case analysis of heuristics, and also discuss how an integer programming system can be used heuristically to find solutions of reasonable quality for highly intractable integer programs.

Finally, in Chapter 13 we change emphasis. By looking at a couple of applications and asking a series of typical questions, we try to give a better idea of how theory and practice converge when confronted with the choice of an appropriate algorithm, and the question of how to improve a formulation, or how to use a commercial mixed integer programming system effectively.

In using the book for a one‐semester course, the chapters can be taken in order. In any case, we suggest that the basics consisting of Chapters 1, 2, 3, 6, 7 should be studied in sequence. Chapter 4 is interesting for those who have had little exposure to combinatorial optimization. Chapter 5 can be postponed, and parts of Chapter 12 can be studied at any time after Chapter 2. There is also no difficulty in studying Chapter 10 before Chapters 8 and 9. The longer Chapters 7, 8, 9, and 11 contain starred sections that are optional. The instructor may wish to leave out some material from these chapters, or alternatively devote more time to them. Chapter 13 draws on material from most of the book, but can be used as motivation much earlier.

I am sincerely grateful to the many people who have contributed in some way to the preparation of this book. Marko Loparic has voluntarily played the role of teaching assistant in the course for which this material was developed. Michele Conforti, Cid de Souza, Eric Gourdin, and Abilio Lucena have all used parts of it in the classroom and provided feedback. John Beasley, Marc Pirlot, Yves Pochet, James Tebboth, and François Vanderbeck have criticized one or more chapters in detail, and Jan Karel Lenstra has both encouraged and provided rapid feedback when requested. Finishing always takes longer than expected, and I am grateful to my colleagues and doctoral students at Core for their patience when they have tried to interest me in other more pressing matters, to the Computer Science Department of the University of Utrecht for allowing me to finish off the book in congenial and interesting surroundings, and to Esprit program 20118, MEMIPS, for support during part of the 1997–1998 academic year. Sincere thanks go to Fabienne Henry for her secretarial help over many years, and for her work in producing the final manuscript.

Scientifically, I am deeply indebted to the many researchers with whom I have had the good fortune and pleasure to collaborate. Working with George Nemhauser for many years has, I hope, taught me a little about writing. His earlier book on integer programming with R. Garfinkel provided an outstanding model for an undergraduate textbook. Yves Pochet is a considerate and stimulating colleague, and together with Bob Daniel, who has always been ready to provide a “practical problem per day,” they provide a constant reminder that integer programming is challenging both theoretically and practically. However, the bias and faults that remain are entirely my own.

Abbreviations and Notation

BIP:

Binary or zero‐one integer program/programming

BM:

Benders' Master problem

BR:

Linear programming relaxation of Benders' Master problem

B^n:

The set of n‐dimensional 0,1 vectors

C‐G:

Chvátal–Gomory

CLS:

Capacitated lot‐sizing problem

conv(X):

The convex hull of X

COP:

Combinatorial optimization problem

D:

Dual problem

DP:

Dynamic programming

DSP:

LP dual of column generation or cut generation subproblem

e_j:

The jth unit vector

x^S:

The characteristic vector of S

E(S):

All edges with both endpoints in the node set S

FCNF:

Fixed charge network flow problem

GAP:

Generalized assignment problem

GSEC:

Generalized subtour elimination constraint

GUB:

Generalized upper bound

IKP:

Integer knapsack problem

IP:

Integer program/programming

LD:

Lagrangian dual problem

lhs:

Left‐hand side

LP:

Linear program/programming

LM:

Linear programming relaxation of column generation Master problem

l(X):

Length of the input of a problem instance X

M:

A large positive number

M:

Column generation Master problem

MIP:

Mixed integer program/programming

MIR:

Mixed integer rounding

N:

Generic set

NP:

Class of NP problems

NPC:

Class of NP‐complete problems

P:

Generic problem class, or polyhedron

P:

Class of polynomially solvable problems

2^N:

Set of subsets of N

rhs:

Right‐hand side

RLM:

Restricted linear programming Master problem

R^n:

The n‐dimensional real numbers

R^n_+:

The n‐dimensional nonnegative real numbers

S:

Feasible region of IP, or subset of N

SOS:

Special ordered set

SP:

Column generation or cut generation separation/subproblem

STSP:

Symmetric traveling salesman problem

TSP:

(Asymmetric) Traveling salesman problem

TU:

Totally unimodular

UFL:

Uncapacitated facility location problem

ULS:

Uncapacitated lot‐sizing problem

V:

Node set

X:

Feasible region of IP, or a problem instance

(a)^+:

The maximum of a and 0

z:

Objective function of main or Master problem

Z^n_+:

The n‐dimensional nonnegative integers

δ^−(S):

All arcs going from a node not in S to a node in S

About the Companion Website

This book is accompanied by a companion website:

www.wiley.com/go/wolsey/integerprogramming2e

The Instructor companion site contains the Solutions to Exercises.

1 Formulations

1.1 Introduction

A wide variety of practical problems can be formulated and solved using integer programming. We start by describing briefly a few such problems.

Train Scheduling. Certain train schedules repeat every hour. For each line, the travel times between stations are known, and the time spent in a station must lie within a given time interval. Two trains traveling on the same line must for obvious reasons be separated by at least a given number of minutes. To make a connection between trains A and B at a particular station, the difference between the arrival time of A and the departure time of B must be sufficiently long to allow passengers to change, but sufficiently short so that the waiting time is not excessive. The problem is to find a feasible schedule.

Airline Crew Scheduling. Given the schedule of flights for a particular aircraft type, one problem is to design weekly schedules for the crews. Each day a crew must be assigned a duty period consisting of a set of one or more linking flights satisfying numerous constraints such as limited total flying time, minimum rests between flights, and so on. Then putting together the duty periods, weekly schedules or pairings are constructed which must satisfy further constraints on overnight rests, flying time, returning the crew to its starting point, and so on. The objective is to minimize the amount paid to the crews, which is a function of flying time, length of the duty periods and pairings, a guaranteed minimum number of flying hours, and so forth.

Production Planning. A multinational company holds a monthly planning meeting in which a three‐month production and shipping plan is drawn up based on their latest estimates of potential sales. The plan covers 200–400 products produced in five different factories with shipments to 50 sales areas. Solutions must be generated on the spot, so only about 15 minutes' computation time is available. For each product, there is a minimum production quantity, and production is in batches – multiples of some fixed amount. The goal is to maximize contribution.

Electricity Generation Planning. A universal problem is the unit commitment problem of developing an hourly schedule spanning a day or a week so as to decide which generators will be producing and at what levels. Constraints to be satisfied include satisfaction of estimated hourly or half‐hourly demand, reserve constraints to ensure that the capacity of the active generators is sufficient should there be a sudden peak in demand, and ramping constraints to ensure that the rate of change of the output of a generator is not excessive. Generators have minimum on‐ and off‐times, and their start‐up costs are a nonlinear function of the time they have been idle.

Telecommunications. A typical problem given the explosion of demand in this area concerns the installation of new capacity so as to satisfy a predicted demand for data/voice/video transmission. Given estimates of the requirements between different centers, the existing capacity and the costs of installing new capacity which is only available in discrete amounts, the problem is to minimize cost taking into account the possibility of failure of a line or a center due to a breakdown or accident.

Radiation Therapy Treatment. In intensity modulated radiation therapy, both the shape of the beam and its intensity need to be controlled by opening and closing multileaf collimators. The goal is to provide the tumor coverage required while maintaining low radiation on critical and normal tissues.

Kidney Exchange Programs. Patients suffering from kidney failures require a transplant. Though a patient may have a donor willing to donate a kidney, an exchange between the donor–patient pair may be impossible for reasons of blood or tissue incompatibility. The problem is to organize a "best possible" set of exchanges between a number of such pairs in which a patient from one pair receives a compatible kidney from a donor in another pair.

Cutting Problems. Whether cutting lengths of paper from rolls, plastic from large rectangular sheets, or patterns to make clothes, the problem is in each case to follow precisely determined cutting rules, satisfy demand, and minimize waste.

Other recent application areas include problems in molecular biology, statistics, VLSI, and machine learning.

This book tries to provide some of the understanding and tools necessary for tackling such problems.

1.2 What Is an Integer Program?

Suppose that we have a linear program

max{cx : Ax ≤ b, x ≥ 0},

where A is an m × n matrix, c an n‐dimensional row vector, b an m‐dimensional column vector, and x an n‐dimensional column vector of variables or unknowns. Now, we add in the restriction that certain variables must take integer values.

If some but not all variables are integer, we have a

(Linear) Mixed Integer Program, written as

max{cx + hy : Ax + Gy ≤ b, x ≥ 0 and integer, y ≥ 0},

where A is again m × n, G is m × p, c is an n row‐vector, h is a p row‐vector, x is an n column‐vector of integer variables, and y is a p column‐vector of real variables.

If all variables are integer, we have a

(Linear) Integer Program, written as

max{cx : Ax ≤ b, x ≥ 0 and integer},

and if all variables are restricted to 0–1 values, we have a

0–1 or Binary Integer Program

max{cx : Ax ≤ b, x ∈ {0, 1}^n}.

Throughout the text, we will normally use the convention that if there is only one set of variables, they are denoted by x. If there are both real and integer variables, x and possibly z will denote integer variables, and y and w real variables. Note that this differs from the choice of x and y in the 1st edition.

Another type of problem that we wish to study is a "combinatorial optimization problem." Here typically we are given a finite set N = {1, …, n}, weights c_j for each j ∈ N, and a set F of feasible subsets of N. The problem of finding a minimum weight feasible subset can be expressed as follows:

Combinatorial Optimization Problem

min { Σ_{j ∈ S} c_j : S ∈ F, S ⊆ N }.

In Section 1.3, we will see various examples of integer programs (IPs) and combinatorial optimization problems (COPs), and also see that often a COP can be formulated as an IP or a 0–1 IP.

Figure 1.1 LP and IP solutions.

Given that integer programs look very much like linear programs, it is not surprising that linear programming theory and practice is fundamental in understanding and solving integer programs. However, the first idea that springs to mind, namely “rounding,” is often insufficient, as the following example shows:

Example 1.1

Consider the integer program:

max x_1 + 0.64 x_2
50 x_1 + 31 x_2 ≤ 250
3 x_1 − 2 x_2 ≥ −4
x_1, x_2 ≥ 0 and integer.

As we see from Figure 1.1, the linear programming solution (376/193, 950/193) is a long way from the optimal integer solution (5, 0).
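To see the gap concretely, the instance can be handed to any LP/MIP solver. The sketch below is one illustrative way to do so, assuming the open-source PuLP package and its bundled CBC solver; any other modeling tool would serve equally well.

# Minimal sketch (assumes the PuLP package is installed; it ships with the CBC solver).
from pulp import LpProblem, LpVariable, LpMaximize, LpContinuous, LpInteger, value

def solve(cat):
    # Build and solve the small model of Example 1.1 with the given variable category.
    prob = LpProblem("example_1_1", LpMaximize)
    x1 = LpVariable("x1", lowBound=0, cat=cat)
    x2 = LpVariable("x2", lowBound=0, cat=cat)
    prob += x1 + 0.64 * x2             # objective
    prob += 50 * x1 + 31 * x2 <= 250   # first constraint
    prob += 3 * x1 - 2 * x2 >= -4      # second constraint
    prob.solve()
    return value(x1), value(x2)

print("LP relaxation :", solve(LpContinuous))   # approximately (1.95, 4.92)
print("Integer optimum:", solve(LpInteger))     # (5.0, 0.0)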

For 0–1 IPs, the situation is often even worse. The linear programming solution may well be x_j = 1/2 for every variable, giving no information whatsoever. What is more, it is typically very difficult just to answer the question whether there exists a feasible 0–1 solution.

1.3 Formulating IPs and BIPs

As in linear programming, translating a problem description into a formulation should be done systematically. A clear distinction should be made between the data of the problem instance and the variables (or unknowns) used in the model.

Define what appear to be the necessary variables.

Use these variables to define a set of constraints so that the feasible points correspond to the feasible solutions of the problem.

Use these variables to define the objective function.

If difficulties arise, define an additional or alternative set of variables and iterate.

Defining variables and constraints may not always be as easy as in linear programming. Especially for COPs, we are often interested in choosing a subset S ⊆ N. For this, we typically make use of the incidence vector of S, which is the n‐dimensional 0–1 vector x^S such that x^S_j = 1 if j ∈ S, and x^S_j = 0 otherwise.
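For example, with N = {1, 2, 3, 4, 5} and S = {1, 3, 4}, the incidence vector of S is x^S = (1, 0, 1, 1, 0).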

Below we formulate four well‐known integer programming problems.

The Assignment Problem

There are n people available to carry out n jobs. Each person is assigned to carry out exactly one job. Some individuals are better suited to particular jobs than others, so there is an estimated cost c_ij if person i is assigned to job j. The problem is to find a minimum cost assignment.

Definition of the variables.

x_ij = 1 if person i does job j, and x_ij = 0 otherwise.

Definition of the constraints.

Each person does one job:

Σ_{j=1}^n x_ij = 1 for i = 1, …, n.

Each job is done by one person:

Σ_{i=1}^n x_ij = 1 for j = 1, …, n.

The variables are 0–1:

x_ij ∈ {0, 1} for i = 1, …, n, j = 1, …, n.

Definition of the objective function.

The cost of the assignment is minimized:

min Σ_{i=1}^n Σ_{j=1}^n c_ij x_ij.
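As an illustration of how such a model is built in practice, the following sketch constructs the assignment IP with the PuLP modeling package (an assumption; the cost data shown are arbitrary).

# Minimal sketch of the assignment IP, assuming the PuLP package.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

def assignment_model(cost):
    n = len(cost)
    prob = LpProblem("assignment", LpMinimize)
    x = [[LpVariable(f"x_{i}_{j}", cat=LpBinary) for j in range(n)] for i in range(n)]
    prob += lpSum(cost[i][j] * x[i][j] for i in range(n) for j in range(n))
    for i in range(n):
        prob += lpSum(x[i][j] for j in range(n)) == 1   # person i does one job
    for j in range(n):
        prob += lpSum(x[i][j] for i in range(n)) == 1   # job j is done by one person
    return prob, x

prob, x = assignment_model([[9, 2, 7], [6, 4, 3], [5, 8, 1]])  # arbitrary 3x3 costs
prob.solve()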

The 0–1 Knapsack Problem

There is a budget b available for investment in projects during the coming year and n projects are under consideration, where a_j is the outlay for project j and c_j is its expected return. The goal is to choose a set of projects so that the budget is not exceeded and the expected return is maximized.

Definition of the variables.

x_j = 1 if project j is selected, and x_j = 0 otherwise.

Definition of the constraints.

The budget cannot be exceeded:

Σ_{j=1}^n a_j x_j ≤ b.

The variables are 0–1:

x_j ∈ {0, 1} for j = 1, …, n.

Definition of the objective function.

The expected return is maximized:

max Σ_{j=1}^n c_j x_j.

The Set Covering Problem

Given a certain number of regions, the problem is to decide where to install a set of emergency service centers. For each possible center, the cost of installing a service center and which regions it can service are known. For instance, if the centers are fire stations, a station can service those regions for which a fire engine is guaranteed to arrive on the scene of a fire within eight minutes. The goal is to choose a minimum cost set of service centers so that each region is covered.

First, we can formulate it as a more abstract COP. Let M = {1, …, m} be the set of regions, and N = {1, …, n} the set of potential centers. Let S_j ⊆ M be the set of regions that can be serviced by a center at j ∈ N, and c_j its installation cost. We obtain the problem:

min { Σ_{j ∈ T} c_j : ∪_{j ∈ T} S_j = M, T ⊆ N }.

Now, we formulate it as a 0–1 IP. To facilitate the description, we first construct a 0–1 incidence matrix A such that a_ij = 1 if i ∈ S_j, and a_ij = 0 otherwise. Note that this is nothing but processing of the data.

Definition of the variables.

x_j = 1 if center j is selected, and x_j = 0 otherwise.

Definition of the constraints.

At least one center must service region i:

Σ_{j=1}^n a_ij x_j ≥ 1 for i = 1, …, m.

The variables are 0–1:

x_j ∈ {0, 1} for j = 1, …, n.

Definition of the objective function.

The total cost is minimized:

min Σ_{j=1}^n c_j x_j.

The Traveling Salesman Problem (TSP)

This is perhaps the most notorious problem in Operations Research because it is so easy to explain, and so tempting to try and solve. A salesman must visit each of n cities exactly once and then return to his starting point. The time taken to travel from city i to city j is c_ij. Find the order in which he should make his tour so as to finish as quickly as possible.

This problem arises in a multitude of forms: a truck driver has a list of clients he must visit on a given day, or a machine must place modules on printed circuit boards, or a stacker crane must pick up and deposit crates. Now, we formulate it as a 0–1 IP.

Definition of the variables.

x_ij = 1 if the salesman goes directly from town i to town j, and x_ij = 0 otherwise. (x_ij is not defined for i = j.)

Definition of the constraints.

He leaves town i exactly once:

Σ_{j : j ≠ i} x_ij = 1 for i = 1, …, n.

He arrives at town j exactly once:

Σ_{i : i ≠ j} x_ij = 1 for j = 1, …, n.

So far these are precisely the constraints of the assignment problem. A solution to the assignment problem might give a solution of the form shown in Figure 1.2 (i.e. a set of disconnected subtours). To eliminate these solutions, we need more constraints that guarantee connectivity by imposing that the salesman must pass from one set of cities to another, so‐called cut‐set constraints:

Σ_{i ∈ S} Σ_{j ∉ S} x_ij ≥ 1 for every S ⊂ N = {1, …, n} with S ≠ ∅.

An alternative is to replace these constraints by subtour elimination constraints:

Σ_{i ∈ S} Σ_{j ∈ S, j ≠ i} x_ij ≤ |S| − 1 for every S ⊂ N with 2 ≤ |S| ≤ n − 1.

Figure 1.2 Subtours.

The variables are 0–1:

x_ij ∈ {0, 1} for i, j = 1, …, n, i ≠ j.

Definition of the objective function.

The total travel time is minimized:

min Σ_i Σ_{j ≠ i} c_ij x_ij.
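Note that the number of cut‐set (and subtour elimination) constraints grows exponentially with n, so they are rarely all written down explicitly. The small Python sketch below simply enumerates them for a tiny instance to make the count visible.

# Enumerate the cut-set constraints of the TSP formulation for a tiny n.
from itertools import combinations

def cutset_constraints(n):
    cities = range(n)
    for size in range(1, n):                       # proper, nonempty subsets S
        for S in combinations(cities, size):
            outside = [j for j in cities if j not in S]
            # arcs leaving S; the constraint requires at least one of them to be used
            yield [(i, j) for i in S for j in outside]

print(len(list(cutset_constraints(5))))    # 2^5 - 2 = 30 constraints already for 5 cities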

1.4 The Combinatorial Explosion

The four problems we have looked at so far are all combinatorial in the sense that the optimal solution is some subset of a finite set. Thus, in principle, these problems can be solved by enumeration. To see for what size of problem instances this is a feasible approach, we need to count the number of possible solutions.

The Assignment Problem. There is a one‐to‐one correspondence between assignments and permutations of {1, …, n}. Thus, there are n! solutions to compare.

The Knapsack and Covering Problems. In both cases, the number of subsets is 2^n. For the knapsack problem with Σ_{j=1}^n a_j ≤ 2b, at least half of the subsets are feasible and thus there are at least 2^{n−1} feasible subsets.

The Traveling Salesman Problem. Starting at city 1, the salesman has n − 1 choices. For the next choice, n − 2 cities are possible, and so on. Thus, there are (n − 1)! feasible tours.

In Table 1.1, we show how rapidly certain functions grow. Thus, a traveling salesman problem (TSP) with 101 cities has approximately 100! ≈ 10^158 feasible tours.

Table 1.1 Some typical functions.

   n    log n    n^{1/2}
  10     3.32       3.16
 100     6.64      10.00
1000     9.97      31.62
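The counts derived above (2^n subsets, n! assignments) explode far faster than the functions in the table; a few lines of Python make this concrete.

# Growth of some typical functions (compare with Table 1.1).
import math

for n in (10, 20, 30):
    print(n, round(math.log2(n), 2), round(math.sqrt(n), 2), n**2, 2**n, math.factorial(n))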

The conclusion to be drawn is that using complete enumeration we can only hope to solve such problems for very small values of n. Therefore, we have to devise some more intelligent algorithms; otherwise, the reader can throw this book out of the window.

1.5 Mixed Integer Formulations

Modeling Fixed Costs

Suppose we wish to model a typical nonlinear fixed charge cost function:

h(y) = 0 if y = 0, and h(y) = f + py if 0 < y ≤ C,

with f > 0 and p > 0 (see Figure 1.3).

Definition of an additional variable.

x = 1 if y > 0, and x = 0 otherwise.

Definition of the constraints and objective function.

We replace h(y) by fx + py, and add the constraints y ≤ Cx, x ∈ {0, 1}.

Note that this is not a completely satisfactory formulation, because although the costs are correct when y > 0, it is possible to have the solution y = 0, x = 1. However, as the objective is minimization, this will typically not arise in an optimal solution.

Figure 1.3 Fixed cost function.
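In a modeling language the same trick takes only a few lines. The sketch below is a minimal illustration using the PuLP package (an assumption); the data f = 10, p = 2, C = 100 are purely illustrative.

# Fixed-charge modeling: cost f*x + p*y with the linking constraint y <= C*x.
from pulp import LpProblem, LpVariable, LpMinimize, LpBinary

f, p, C = 10.0, 2.0, 100.0                  # illustrative data
prob = LpProblem("fixed_charge", LpMinimize)
y = LpVariable("y", lowBound=0, upBound=C)  # continuous quantity
x = LpVariable("x", cat=LpBinary)           # indicator: x = 1 if y > 0
prob += f * x + p * y                       # replaces the nonlinear cost h(y)
prob += y <= C * x                          # forces x = 1 whenever y > 0
# ... the remaining constraints of the particular model involving y go here ...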

Uncapacitated Facility Location (UFL)

Given a set N = {1, …, n} of potential depots and a set M = {1, …, m} of clients, suppose there is a fixed cost f_j associated with the use of depot j, and a transportation cost c_ij if all of client i's order is delivered from depot j. The problem is to decide which depots to open and which depot serves each client so as to minimize the sum of the fixed and transportation costs. Note that this problem is similar to the covering problem, except for the addition of the variable transportation costs.

Definition of the variables.

We introduce a fixed cost or depot opening variable x_j = 1 if depot j is used, and x_j = 0 otherwise.

y_ij is the fraction of the demand of client i satisfied from depot j.

Definition of the constraints.

Satisfaction of the demand of client i:

Σ_{j ∈ N} y_ij = 1 for i = 1, …, m.

To represent the link between the y and the x variables, we note that Σ_{i ∈ M} y_ij ≤ m, and use the fixed cost formulation above to obtain:

Σ_{i ∈ M} y_ij ≤ m x_j and x_j ∈ {0, 1} for j ∈ N.

Definition of the objective function.

The objective is the sum of the transportation costs and the fixed costs, where the fixed cost f_j is incurred only if x_j = 1, so we obtain:

min Σ_{i ∈ M} Σ_{j ∈ N} c_ij y_ij + Σ_{j ∈ N} f_j x_j.

Uncapacitated Lot‐Sizing (ULS)

The problem is to decide on a production plan for a single product over an n‐period horizon. The basic model can be viewed as having data:

f_t is the fixed cost of producing in period t.

p_t is the unit production cost in period t.

h_t is the unit storage cost in period t.

d_t is the demand in period t.

We use the natural (or obvious) variables:

y_t is the amount produced in period t.

s_t is the stock at the end of period t.

x_t = 1 if production occurs in period t, and x_t = 0 otherwise.

To handle the fixed costs, we observe that a priori no upper bound is given on y_t. Thus, we either must use a very large value M, or calculate an upper bound based on the problem data.

For constraints and objective, we obtain:

min Σ_{t=1}^n p_t y_t + Σ_{t=1}^n h_t s_t + Σ_{t=1}^n f_t x_t

s_{t−1} + y_t = d_t + s_t for t = 1, …, n,
y_t ≤ M x_t for t = 1, …, n,
s_0 = 0, s_t ≥ 0, y_t ≥ 0, x_t ∈ {0, 1} for t = 1, …, n.

If we impose that s_n = 0, then we can tighten the variable upper bound constraints to y_t ≤ (d_t + d_{t+1} + ⋯ + d_n) x_t. Note also that by substituting s_t = Σ_{i=1}^t y_i − Σ_{i=1}^t d_i, the objective function can be rewritten as Σ_{t=1}^n c_t y_t + Σ_{t=1}^n f_t x_t + K, where c_t = p_t + h_t + h_{t+1} + ⋯ + h_n and the constant K = −Σ_{t=1}^n h_t (d_1 + ⋯ + d_t).
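As a quick check of this rewriting (a short derivation in the notation above), eliminate the stock variables using the balance constraints and s_0 = 0:

s_t = Σ_{i=1}^t y_i − Σ_{i=1}^t d_i,

so that

Σ_{t=1}^n h_t s_t = Σ_{t=1}^n h_t Σ_{i=1}^t (y_i − d_i) = Σ_{i=1}^n (h_i + h_{i+1} + ⋯ + h_n) y_i − Σ_{t=1}^n h_t (d_1 + ⋯ + d_t).

Adding Σ_t p_t y_t + Σ_t f_t x_t then gives exactly Σ_t c_t y_t + Σ_t f_t x_t + K with c_t and K as above.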

Discrete Alternatives or Disjunctions

Suppose the vector y satisfies 0 ≤ y ≤ u, and either a^1 y ≤ b_1 or a^2 y ≤ b_2 (see Figure 1.4 in which the feasible region is shaded). We introduce binary variables x_i for i = 1, 2. Then if M is chosen large enough that a^i y ≤ b_i + M for i = 1, 2 whenever 0 ≤ y ≤ u, we take as constraints:

a^i y ≤ b_i + M(1 − x_i) for i = 1, 2,
x_1 + x_2 = 1, x_i ∈ {0, 1} for i = 1, 2.

Now if x_1 = 1, y satisfies a^1 y ≤ b_1, whereas the constraint a^2 y ≤ b_2 + M is inactive, and conversely if x_2 = 1.

Figure 1.4 Either/or constraints.

Such disjunctions arise naturally in scheduling problems. Suppose that two jobs must be processed on the same machine and cannot be processed simultaneously. If p_1, p_2 are the processing times and the variables t_1, t_2 the start times for jobs 1 and 2, then either job 1 precedes job 2 and so t_2 ≥ t_1 + p_1, or job 2 comes first and t_1 ≥ t_2 + p_2.
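Concretely, introducing a binary variable x (x = 1 if job 1 precedes job 2) and a sufficiently large constant M, this disjunction can be written as

t_2 ≥ t_1 + p_1 − M(1 − x),
t_1 ≥ t_2 + p_2 − Mx,
x ∈ {0, 1},

so that exactly one of the two precedence constraints is enforced, depending on the value of x.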

1.6 Alternative Formulations

In Sections 1.3 and 1.5, we have formulated a small number of integer programs for which it is not too difficult to verify that the formulations are correct. Here and in Section 1.7, we examine alternative formulations and try to understand why some might be better than others. First, we make precise what we mean by a formulation.

Definition 1.1

A subset of R^n described by a finite set of linear constraints P = {x ∈ R^n : Ax ≤ b} is a polyhedron.

Definition 1.2

A polyhedron P ⊆ R^{n+p} is a formulation for a set X ⊆ Z^n × R^p if and only if X = P ∩ (Z^n × R^p).

Note that this definition is restrictive. For example, we saw in modeling fixed costs in Section 1.5 that we did not model the set {(0, 0)} ∪ {(y, 1) : 0 < y ≤ C}, but the set {(y, x) : y ≤ Cx, 0 ≤ y ≤ C, x ∈ {0, 1}}, which contains the additional point (y, x) = (0, 1).

Example 1.2

In Figure 1.5, we show two different formulations for the same set of integer points.

Figure 1.5 Two different formulations of an IP.

Note that it would not be easy to write a formulation for the set without introducing many more variables; see Exercise 2.

Equivalent Formulations for a 0–1 Knapsack Set

Consider a set of points X defined by a single 0–1 knapsack constraint, and check that three apparently quite different polyhedra can all be formulations for X.

An Equivalent Formulation for UFL

For fixed j ∈ N, consider the constraints:

Σ_{i ∈ M} y_ij ≤ m x_j, x_j ∈ {0, 1}.

Logically, these constraints express the condition: if y_ij > 0 for one or more i, then x_j = 1, or stated a little differently: for each i, if y_ij > 0, then x_j = 1. This immediately suggests a different set of constraints:

y_ij ≤ x_j for i ∈ M, j ∈ N.

This leads to the alternative formulation:

min Σ_{i ∈ M} Σ_{j ∈ N} c_ij y_ij + Σ_{j ∈ N} f_j x_j
Σ_{j ∈ N} y_ij = 1 for i ∈ M,
y_ij ≤ x_j for i ∈ M, j ∈ N,
y_ij ≥ 0 for i ∈ M, j ∈ N, x_j ∈ {0, 1} for j ∈ N.
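Why prefer one set of linking constraints over the other? One quick experiment is to compare the linear programming relaxations of the two formulations on a small instance; the disaggregated constraints y_ij ≤ x_j always give a bound at least as strong. The sketch below does this with the PuLP package (an assumption) on purely illustrative data.

# Compare the LP relaxation bounds of the two UFL formulations (illustrative data).
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, value

f = [21, 16, 30]                                    # fixed cost of each depot
c = [[6, 2, 3], [1, 9, 4], [15, 2, 6], [9, 11, 4]]  # c[i][j]: cost of serving client i from depot j
m, n = len(c), len(f)

def ufl_lp_bound(aggregated):
    prob = LpProblem("ufl_lp", LpMinimize)
    x = [LpVariable(f"x{j}", 0, 1) for j in range(n)]                          # depot open (relaxed)
    y = [[LpVariable(f"y{i}_{j}", 0, 1) for j in range(n)] for i in range(m)]  # fraction served
    prob += lpSum(f[j] * x[j] for j in range(n)) + lpSum(
        c[i][j] * y[i][j] for i in range(m) for j in range(n))
    for i in range(m):
        prob += lpSum(y[i][j] for j in range(n)) == 1
    if aggregated:
        for j in range(n):
            prob += lpSum(y[i][j] for i in range(m)) <= m * x[j]
    else:
        for i in range(m):
            for j in range(n):
                prob += y[i][j] <= x[j]
    prob.solve()
    return value(prob.objective)

print(ufl_lp_bound(True), ufl_lp_bound(False))   # the second value is never smaller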

In the two examples we have just examined, the variables have been the same in each formulation, and we have essentially modified or added constraints. Another possible approach is to add or choose different variables, in which case we talk of extended formulations.

An Extended Formulation for ULS

What variables, other than production and stock levels, might be useful in describing an optimal solution of the lot‐sizing problem? Suppose we would like to know when the items being produced now will actually be used to satisfy demand. Following the same steps as before, this leads to:

Definition of the variables.

w_it is the amount produced in period i to satisfy demand in period t (for t ≥ i).

x_i = 1 if production occurs in period i (as above), and x_i = 0 otherwise.

Definition of the constraints.

Satisfaction of demand in period t:

Σ_{i=1}^t w_it = d_t for t = 1, …, n.

Variable upper‐bound constraints:

w_it ≤ d_t x_i for 1 ≤ i ≤ t ≤ n.

The variables are mixed:

w_it ≥ 0 for 1 ≤ i ≤ t ≤ n, x_i ∈ {0, 1} for i = 1, …, n.

Definition of the objective function.

min Σ_{i=1}^n Σ_{t=i}^n c_i w_it + Σ_{i=1}^n f_i x_i.

Note that here we are again using modified costs with c_i the production cost in period i and zero storage costs. It is optional whether we choose to define the old variables in terms of the new by adding defining equalities:

y_i = Σ_{t=i}^n w_it for i = 1, …, n, and s_t = Σ_{i=1}^t Σ_{τ=t+1}^n w_iτ for t = 1, …, n.

1.7 Good and Ideal Formulations

In Figure 1.5, we show two different formulations of the same problem. We have also seen two possible formulations for the uncapacitated facility location problem. Geometrically, we can see that there must be an infinite number of formulations, so how can we choose between them?

The geometry again helps us to find an answer. Look at Figure 1.6 in which we have repeated the two formulations shown in Figure 1.5 and added a third one, P_3. Formulation P_3 is ideal, because now if we solve a linear program over P_3, the optimal solution is at a vertex (extreme point). In this ideal case, each vertex is integer and so the IP is solved.

Figure 1.6 The ideal formulation.

We can now formalize this idea.

Definition 1.3

Given a set X ⊆ R^n, the convex hull of X, denoted conv(X), is defined as follows: conv(X) = {x : x = Σ_{i=1}^t λ_i x^i, Σ_{i=1}^t λ_i = 1, λ_i ≥ 0 for i = 1, …, t, over all finite subsets {x^1, …, x^t} of X}.