Deterministic Operations Research - David J. Rader - E-Book

Description

Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations research: modeling real-world problems as linear optimization problems; designing the necessary algorithms to solve these problems; and using mathematical theory to justify algorithmic development. Treating real-world examples as mathematical problems, the author begins with an introduction to operations research and optimization modeling that includes applications from sports scheduling and the airline industry. Subsequent chapters discuss algorithm design for continuous linear optimization problems, covering topics such as convexity, Farkas' Lemma, and the study of polyhedra before culminating in a discussion of the Simplex Method. The book also addresses linear programming duality theory and its use in algorithm design, as well as the Dual Simplex Method, Dantzig-Wolfe decomposition, and a primal-dual interior point algorithm. The final chapters present network optimization and integer programming problems, highlighting various specialized topics including label-correcting algorithms for the shortest path problem, preprocessing and probing in integer programming, lifting of valid inequalities, and branch-and-cut algorithms. Concepts and approaches are introduced by outlining examples that demonstrate and motivate theoretical concepts.
The accessible presentation of advanced ideas makes core aspects easy to understand and encourages readers to understand how to think about the problem, not just what to think. Relevant historical summaries can be found throughout the book, and each chapter is designed as the continuation of the "story" of how to both model and solve optimization problems by using the specific problems (linear and integer programs) as guides. The book's various examples are accompanied by the appropriate models and calculations, and a related Web site features these models along with Maple(TM) and MATLAB® content for the discussed calculations. Thoroughly class-tested to ensure a straightforward, hands-on approach, Deterministic Operations Research is an excellent book for operations research and linear optimization courses at the upper-undergraduate and graduate levels. It also serves as an insightful reference for individuals working in the fields of mathematics, engineering, computer science, and operations research who use and design algorithms to solve problems in their everyday work.

Page count: 807

Publication year: 2013




CONTENTS

PREFACE

1 INTRODUCTION TO OPERATIONS RESEARCH

1.1 WHAT IS DETERMINISTIC OPERATIONS RESEARCH?

1.2 INTRODUCTION TO OPTIMIZATION MODELING

1.3 COMMON CLASSES OF MATHEMATICAL PROGRAMS

1.4 ABOUT THIS BOOK

EXERCISES

2 LINEAR PROGRAMMING MODELING

2.1 RESOURCE ALLOCATION MODELS

2.2 WORK SCHEDULING MODELS

2.3 MODELS AND DATA

2.4 BLENDING MODELS

2.5 PRODUCTION PROCESS MODELS

2.6 MULTIPERIOD MODELS: WORK SCHEDULING AND INVENTORY

2.7 LINEARIZATION OF SPECIAL NONLINEAR MODELS

2.8 VARIOUS FORMS OF LINEAR PROGRAMS

2.9 NETWORK MODELS

EXERCISES

3 INTEGER AND COMBINATORIAL MODELS

3.1 FIXED-CHARGE MODELS

3.2 SET COVERING MODELS

3.3 MODELS USING LOGICAL CONSTRAINTS

3.4 COMBINATORIAL MODELS

3.5 SPORTS SCHEDULING AND AN INTRODUCTION TO IP SOLUTION TECHNIQUES

EXERCISES

4 REAL-WORLD OPERATIONS RESEARCH APPLICATIONS: AN INTRODUCTION

4.1 VEHICLE ROUTING PROBLEMS

4.2 FACILITY LOCATION AND NETWORK DESIGN MODELS

4.3 APPLICATIONS IN THE AIRLINE INDUSTRY

EXERCISES

5 INTRODUCTION TO ALGORITHM DESIGN

5.1 EXACT AND HEURISTIC ALGORITHMS

5.2 WHAT TO ASK WHEN DESIGNING ALGORITHMS?

5.3 CONSTRUCTIVE VERSUS LOCAL SEARCH ALGORITHMS

5.4 HOW GOOD IS OUR HEURISTIC SOLUTION?

5.5 EXAMPLES OF CONSTRUCTIVE METHODS

5.6 EXAMPLE OF A LOCAL SEARCH METHOD

5.7 OTHER HEURISTIC METHODS

5.8 DESIGNING EXACT METHODS: OPTIMALITY CONDITIONS

EXERCISES

6 IMPROVING SEARCH ALGORITHMS AND CONVEXITY

6.1 IMPROVING SEARCH AND OPTIMAL SOLUTIONS

6.2 FINDING BETTER SOLUTIONS

6.3 CONVEXITY: WHEN DOES IMPROVING SEARCH IMPLY GLOBAL OPTIMALITY?

6.4 FARKAS’ LEMMA: WHEN CAN NO IMPROVING FEASIBLE DIRECTION BE FOUND?

EXERCISES

7 GEOMETRY AND ALGEBRA OF LINEAR PROGRAMS

7.1 GEOMETRY AND ALGEBRA OF “CORNER POINTS”

7.2 FUNDAMENTAL THEOREM OF LINEAR PROGRAMMING

7.3 LINEAR PROGRAMS IN CANONICAL FORM

EXERCISES

8 SOLVING LINEAR PROGRAMS: SIMPLEX METHOD

8.1 SIMPLEX METHOD

8.2 MAKING THE SIMPLEX METHOD MORE EFFICIENT

8.3 CONVERGENCE, DEGENERACY, AND THE SIMPLEX METHOD

8.4 FINDING AN INITIAL SOLUTION: TWO-PHASE METHOD

8.5 BOUNDED SIMPLEX METHOD

8.6 COMPUTATIONAL ISSUES

EXERCISES

9 LINEAR PROGRAMMING DUALITY

9.1 MOTIVATION: GENERATING BOUNDS

9.2 DUAL LINEAR PROGRAM

9.3 DUALITY THEOREMS

9.4 ANOTHER INTERPRETATION OF THE SIMPLEX METHOD

9.5 FARKAS’ LEMMA REVISITED

9.6 ECONOMIC INTERPRETATION OF THE DUAL

9.7 ANOTHER DUALITY APPROACH: LAGRANGIAN DUALITY

EXERCISES

10 SENSITIVITY ANALYSIS OF LINEAR PROGRAMS

10.1 GRAPHICAL SENSITIVITY ANALYSIS

10.2 SENSITIVITY ANALYSIS CALCULATIONS

10.3 USE OF SENSITIVITY ANALYSIS

10.4 PARAMETRIC PROGRAMMING

EXERCISES

11 ALGORITHMIC APPLICATIONS OF DUALITY

11.1 DUAL SIMPLEX METHOD

11.2 TRANSPORTATION PROBLEM

11.3 COLUMN GENERATION

11.4 DANTZIG–WOLFE DECOMPOSITION

11.5 PRIMAL–DUAL INTERIOR POINT METHOD

EXERCISES

12 NETWORK OPTIMIZATION ALGORITHMS

12.1 INTRODUCTION TO NETWORK OPTIMIZATION

12.2 SHORTEST PATH PROBLEMS

12.3 MAXIMUM FLOW PROBLEMS

12.4 MINIMUM COST NETWORK FLOW PROBLEMS

EXERCISES

13 INTRODUCTION TO INTEGER PROGRAMMING

13.1 BASIC DEFINITIONS AND FORMULATIONS

13.2 RELAXATIONS AND BOUNDS

13.3 PREPROCESSING AND PROBING

13.4 WHEN ARE INTEGER PROGRAMS “EASY?”

EXERCISES

14 SOLVING INTEGER PROGRAMS: EXACT METHODS

14.1 COMPLETE ENUMERATION

14.2 BRANCH-AND-BOUND METHODS

14.3 VALID INEQUALITIES AND CUTTING PLANES

14.4 GOMORY’S CUTTING PLANE ALGORITHM

14.5 VALID INEQUALITIES FOR 0–1 KNAPSACK CONSTRAINTS

14.6 BRANCH-AND-CUT ALGORITHMS

14.7 COMPUTATIONAL ISSUES

EXERCISES

15 SOLVING INTEGER PROGRAMS: MODERN HEURISTIC TECHNIQUES

15.1 REVIEW OF LOCAL SEARCH METHODS: PROS AND CONS

15.2 SIMULATED ANNEALING

15.3 TABU SEARCH

15.4 GENETIC ALGORITHMS

15.5 GRASP ALGORITHMS

EXERCISES

APPENDIX A BACKGROUND REVIEW

A.1 BASIC NOTATION

A.2 GRAPH THEORY

A.3 LINEAR ALGEBRA

REFERENCES

INDEX

Copyright © 2010 by John Wiley & Sons, Inc. All rights reserved

Published by John Wiley & Sons, Inc., Hoboken, New Jersey

Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Rader, David J., 1970-

Deterministic operations research : models and methods in linear

optimization / David J. Rader, Jr.

p. cm.

Includes bibliographical references and index.

ISBN 978-0-470-48451-7 (cloth)

1. Operations research–Mathematics. 2. Mathematical optimization. 3.

Linear programming. I. Title.

T57.6.R245 2010

519.7’2–dc22

2009054232

10 9 8 7 6 5 4 3 2 1

To Cetta, Megan, and Abby

PREFACE

This book represents the deterministic operations research course that I have taught to mathematics, computer science, and engineering students over the past 10 years. It deals with the study of linear optimization (both continuous and discrete), emphasizing the modeling of real problems as linear optimization problems and the designing of algorithms to solve them, covering topics in linear programming, network optimization, and integer programming.

In this book, I emphasize three aspects of deterministic operations research:

1. Modeling real-world problems as linear optimization problems.

2. Designing algorithms (both heuristic and exact methods) to solve these problems.

3. Using mathematical theory to improve our understanding of the problem, to improve existing algorithms, and to design new algorithms.

Although these aspects are important for both researchers and practitioners of operations research, they are not always in the forefront of operations research textbooks. It is true that many books highlight optimization modeling and algorithms to solve these problems; however, very few, if any, explicitly discuss the algorithm design process used to solve problems. This book is intended to fill that gap.

My intended audience for this book is a junior/senior mathematics course in deterministic operations research. It will serve as both a comprehensive text for such a course and a reference book for future exploration of the subjects. The book can also be used for operations research courses in engineering departments and for first-year graduate courses in linear optimization. It is not meant to be a reference book for researchers, but to be a book for people wanting to learn about linear optimization problems and techniques. A student using this book should have been exposed to (multivariable) calculus and, whenever possible, linear algebra.

I introduce concepts and approaches by mixing in many examples to both demonstrate and motivate theoretical concepts and then proving these results. Advanced ideas are described in a common-sense style that illustrates the core aspects of the idea in ways that are easy to understand, motivates the reader to understand how to think about the problem, not just what to think, and illustrates this using concrete examples. Each chapter of the book is designed to be the continuation of the “story” of how to both model and solve optimization problems by using the specific problems (linear and integer programs) as guides. This enables the reader (and instructors) to see how solution methods can be derived instead of just seeing the final product (the algorithms themselves).

I begin this book with an introduction to linear optimization modeling, incorporating many common modeling constructs through examples. In class I typically take about 20–25% of the term to discuss various models and the modeling process, so it is no surprise that the modeling chapters encompass that much of the book. I feel it is important for students to be able to translate a real-world situation into a mathematical language (here as an optimization model), be able to solve it using an optimization package, and be able to interpret the results. It is also important for students to see that there is a difference between the model and the data, and I try to emphasize this by discussing the general form of various constructs. I end my discussion of modeling by illustrating some common operations research problems and how they are solved. In addition, the exercises discuss various applications that have appeared in the research literature over the past 20 years.

Once the importance and usefulness of linear and integer programming models have been illustrated, I take a different approach to solution methods than most books. Typically, at an introductory level, solution methods for optimization problems are presented in the following style: (a) the problem is defined, (b) a solution technique is described and illustrated, and (c) the solution method is proven correct (it gives the correct answer) and various properties are studied. I illustrate how to design methods to solve these problems by first introducing the algorithm design paradigm through the use of heuristic methods for common integer programming models and then discussing general optimization approaches and showing how to use the properties of linear programs to specialize these general methods. This illustrates the algorithm design process.

I have found that the algorithm design process (which is key to deterministic operations research) can best be introduced through heuristic methods. This allows us to concentrate on the design itself without having to worry about finding the optimal solution. Once we have designed a few simple algorithms we are ready to search for optimal solutions by first learning through an example what properties an optimal solution must have and then designing (and improving) an algorithm to satisfy them.

One challenge facing instructors is the lack of a formal background in the subject, especially for mathematics professors. Many may not be familiar with how algorithms are designed, the importance of modeling and proper model formulation, and how models and the algorithms that solve them interact. This book will help such instructors understand these ideas and present them to the students. The book emphasizes the thought process involved with modeling optimization problems and both the design and the improvement of algorithms to solve these problems. Each chapter builds upon the previous ones to illustrate these different aspects. For example, by the time the reader is formally introduced to the simplex method (the primary algorithm for solving linear programs), we will have discussed how this algorithm could have been developed, giving the reader “ownership” of the algorithm.

This book is organized around the three aspects of deterministic operations research that I want to emphasize. The first four chapters discuss an introduction to operations research and optimization modeling. Chapters 5 through 8 discuss algorithm design for (continuous linear) optimization problems, culminating in the formal discussion of the simplex method. This includes an introduction to continuous optimization algorithms, convexity, Farkas’ lemma, and the algebraic and geometric study of polyhedra. All of this culminates in the simplex method. Many will find it unusual to see the simplex method appear as late as Chapter 8 in this book. This is by design; I want to emphasize the importance of modeling first (and hence there are four chapters on models) and the algorithm design process next. This means that the simplex method, the end result of the design process, is important but not the sole focus of the book. In fact, because it is central to much of what we discuss in the book, it is symbolic that the simplex method is found in the middle chapter.

Chapters 9 through 11 discuss the duality theory of linear programs and the algorithmic and modeling ramifications of the theory. I also discuss Lagrangian duality and briefly mention optimality conditions for general nonlinear programs. In Chapters 10 and 11, I cover uses of duality, including sensitivity analysis, parametric programming, and such algorithmic uses as dual simplex, column generation, Dantzig–Wolfe decomposition, and the primal–dual interior point algorithm. Each of these topics highlights how theoretical results can be used to improve how we solve problems computationally.

Chapters 12 through 15 continue our algorithmic design theme by discussing network optimization and integer programming methods. I have incorporated some topics not typically given in introductory books, such as various label-correcting algorithms for the shortest path problem (as opposed to just Dijkstra’s method), the importance of preprocessing and probing in integer programming, cover inequalities for 0–1 knapsack constraints, lifting of valid inequalities, and branch-and-cut algorithms. In fact, the integer programming material in Chapter 14 introduces methods that are being implemented in software packages for integer programs. I finish the book with a discussion of modern metaheuristic techniques for discrete optimization problems. These methods form the backbone of many heuristic approaches to hard discrete problems.

Supplemental material, which will be placed on a dedicated web site for the book at http://www.rose-hulman.edu/∼rader/DOR, will be key to the course. This includes the appropriate models (in various modeling languages) for each example in Chapters 2, 3, and 4, as well as Maple and Matlab content for many calculations used in the book. This will enable students and instructors to learn the concepts being introduced without needing to worry about software implementations. A collection of solutions to exercises is available to students; in addition, a full solutions manual will be made available for instructors.

The material from this book can be covered entirely over a two-semester sequence of courses. The core material of this book is contained in Chapters 5–9. Any course that includes the Simplex method (Chapter 8) needs Chapters 5–7 as background material. When I teach a course during one term using the material from this book, I have covered the following chapters over 40 class meetings (50 minutes each): Chapters 1, 2, 3, 5, 6, 7, 8, 9, 11.1–11.3, 13, and 14.1–14.5. In this course, I spend 8–10 lectures on material from Chapters 2 and 3 (optimization modeling); this is due to my personal bias toward the importance of optimization modeling, and users of this book would not need to spend as much time on these topics. I have included a more detailed topics list on the web site.

I have added various topics to this book that are optional materials to cover in a class, but provide the reader with more advanced topics for future study. These include (but are not limited to) sections on Farkas’ lemma (Sections 6.3 and 9.5), Lagrangian duality (Section 9.7), parametric programming (Section 10.4), Dantzig–Wolfe decomposition (Section 11.4), the primal–dual interior point algorithm (Section 11.5), network optimization algorithms (Chapter 12), and valid inequalities for 0–1 linear constraints and branch-and-cut algorithms (Sections 14.5 and 14.6).

It is impossible to write a book without any help or guidance, and that is especially true of this book. I want to thank John Wiley & Sons for publishing this book and to acknowledge in particular my editors Susanne Steitz-Filler and Jacqueline Palmieri for their assistance and guidance throughout this process.

I also want to acknowledge the comments I have received from colleagues Susan Martonosi and Kevin Hutson and other anonymous reviewers regarding the topics presented and their order and from the 10 years’ worth of students at Rose-Hulman Institute of Technology who have used early versions of this manuscript as their primary material and have pointed out errors and omissions. Any errors that remain are of course my own.

I especially want to acknowledge Diane Evans, Richard Forrester, Allen Holder, and Mike Spivey, who have used this book in their classes. They have each provided me with numerous suggestions and comments on how to improve the presentation of material. Our conversations regarding the material have been invaluable, and I cannot thank them enough for their contributions.

My colleagues in the Mathematics Department at Rose-Hulman Institute of Technology have been very supportive during the time I wrote this book. I particularly want to thank S. Allen Broughton, Ralph P. Grimaldi, and Jeffery J. Leader for their support and advice.

I could not even begin to work on such a project without the lifelong support of my parents Mary and David Rader. They sacrificed plenty to allow me the education that led to this project, and words cannot express my gratitude.

Last, but certainly not least, I want to thank my wife Concetta for her help with this project. Her love and support, as always, is greatly appreciated. I also want to thank our children Megan and Abigail for being patient with me when I needed to work instead of play, and for knowing when I needed to play instead of work.

DAVID J. RADER, JR.

Terre Haute, Indiana
April 2010

CHAPTER 1

INTRODUCTION TO OPERATIONS RESEARCH

1.1   WHAT IS DETERMINISTIC OPERATIONS RESEARCH?

Today, many “operations” problems are routinely solved throughout business and industry. These include determining work shifts for large departments, designing production facilities to maximize throughput, allocating scarce resources (such as raw materials and labor) to meet demands at a minimum cost, or determining in which investments to place available funds. Such problems are very different from the traditional problems that students of mathematics, science, and engineering often face, because there are no physical laws that can be used; for example, knowing how to use Newton’s second law or Ohm’s law will not help solve these industrial problems. To handle such problems, different mathematical models must be used.

Mathematical Model A mathematical model is a collection of variables and the relationships needed to describe important features of a given problem.

Until World War II, most businesses and industries did not worry about such operations problems. This was partly because no formal mathematical discipline directly handled these problems, and partly because there was no urgent need for them. During World War II, military planners began working with scientists and mathematicians in order to apply a scientific approach to the management of the war effort, in which they began to devise mathematical models to deal with such issues. After the war, others began to look at these techniques for industry, which brought about the beginning of Operations Research.

Operations Research Operations research (OR) is the study of how to form mathematical models of complex science, engineering, industrial, and management problems and how to analyze them using mathematical techniques.

Operations research is the name most often used to describe the mathematical study of such problems. In business, it is also known as Management Science or Operations Management. Typically, OR is divided into two areas: deterministic operations research, where all parameters are fixed, and stochastic operations research, where we assume some of the problem parameters are random. This book deals exclusively with deterministic operations research.

There have been many successful and interesting applications of operations research techniques since the 1940s. In fact, operations research has been used to solve many diverse problems.

1. Scheduling ACC Basketball. Nemhauser and Trick [67] discussed how they used OR techniques to determine successful schedules for the Atlantic Coast Conference basketball season, which consists of a round-robin schedule among 10 teams stretching from Maryland to Florida.

2. Designing Communication Networks. Balakrishnan, Magnanti, and Mirchandani [4] look at the problem of designing survivable networks with high bandwidth in order to minimize costs. These problems have great importance due to the explosion of phone, cable, and video services.

3. Scheduled Aircraft Maintenance. Gopalan and Talluri [51] studied how to schedule aircraft maintenance when there are many operating restrictions in place, such as making sure the aircraft is at the correct maintenance location within a certain amount of time.

4. Truck Dispatching for Oil Pickup. Bixby and Lee [16] look at the vehicle routing and scheduling problems that arise from an oil company’s desire to better utilize truck fleets and drivers.

5. Component Assignment in Printed Circuit Board Manufacturing. Hillier and Brandeau [58] consider the problem of determining the minimum cost assignment of various components to insertion machines that are used in the automated assembly of printed circuit boards.

OR is much more than the formation of mathematical models of problems; it also deals with the solution of these models and the evaluation of their solutions with regard to the real-world problem. In deterministic OR models, we typically formulate the problem as an optimization problem, where we want to minimize or maximize a function (such as costs, profits, time, etc.) given that the solution must satisfy certain restrictions and requirements (demand met, scarce resources, etc.). Such optimization problems are often referred to as Mathematical Programs.

Mathematical Program A mathematical program or Optimization Model is a mathematical structure where problem choices are represented by decision variables. Using these decision variables, mathematical programs look to optimize some objective function of the decision variables subject to certain constraints that limit the possible choices (values of decision variables). It can be written in the most general form (assuming a maximization problem)

maximize f(x) subject to x ∈ S,

where x is a vector of decision variables, f(x) is the objective function, and the set S is the set of values for the decision variables satisfying all of the constraints.

The three highlighted terms in the above definition, decision variables, objective function, and constraints, represent the fundamental concerns of any OR model. In fact, it is the objective function that makes a model an optimization model. This was fairly revolutionary in 1947, when George Dantzig used an objective function to determine which decisions were better than others. Before that, there was not much thought given in modeling toward objective functions; instead, basic ground rules given by those in authority were used to guide the search for the “best answer.”1

But how do we create a mathematical model? That is the real question in any study of operations research. The whole modeling process involves a four-stage cycle:

1. Problem Definition and Data Gathering. Here is where the real work begins. Actually knowing what problem we wish to model is more important than most people realize. If we have no idea what we want to determine, how do we know if we have a reasonable solution or not? Next, a modeler must observe the system and collect data to estimate any parameters of the model.

2. Creating the Mathematical Model. Once the problem is defined and data have been collected, it is now time to develop a model of the situation. This involves determining decision variables, constraints, and objectives that are needed in the problem. Sometimes it involves only determining which formula we need to use. Models can be either deterministic, as studied in this book, or stochastic, in which case our data are more important because we need them to verify assumptions on probability distributions, means, and so on.

3. Solving the Mathematical Model. This can have many different forms. If we have a deterministic optimization model, we try to find the values of the decision variables that give the best objective function value. Sometimes, however, this is not tractable. For example, in the infamous traveling salesperson problem, a person has to travel to many different locations and wants to determine the shortest route to visit them all before returning home. For many instances, this problem cannot be solved to optimality. In that case, when a problem is that difficult or intractable, operations researchers often try to develop heuristics that find a reasonable solution in a short amount of time. For stochastic problems, a “solution” is not always easy to find. In many instances, simulation is used to approximate a good solution. Here, we simulate the system many times by generating random numbers for each random variable and determine from these simulations which alternative is best.

4. Using the Mathematical Model to Make Real-World Predictions. Once we have a solution to a mathematical model, we must use that answer to solve the real-world problem. While it is often the case that the values of our decision variables are what we will use, that is not always the case. For example, if we solve a mathematical program and get fractional values, and we wanted an integral solution, we may want to “fudge” the values a little, perhaps by rounding up or down, to get integers. However, this is not always a good idea, because we may end up with a solution that does not satisfy all the constraints.
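The traveling salesperson problem mentioned in step 3 is a natural place to see what a heuristic looks like in practice. The sketch below implements the nearest-neighbor rule, a simple constructive heuristic: start at a city, repeatedly move to the closest unvisited city, and return home. This is a hypothetical illustration on made-up distances, and it is not guaranteed to find the shortest tour.

```python
# Nearest-neighbor construction heuristic for a small traveling
# salesperson instance (a quick heuristic, not an exact method).

def nearest_neighbor_tour(dist, start=0):
    """Greedily visit the closest unvisited city, then return home."""
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        last = tour[-1]
        # Pick the unvisited city closest to where we currently are.
        nearest = min(unvisited, key=lambda city: dist[last][city])
        tour.append(nearest)
        unvisited.remove(nearest)
    tour.append(start)  # close the tour by returning to the start
    return tour

def tour_length(dist, tour):
    """Total distance traveled along the tour."""
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

# Four cities with made-up symmetric distances.
d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
tour = nearest_neighbor_tour(d)
print(tour, tour_length(d, tour))
```

The heuristic runs in a fraction of a second even for large instances, which is exactly the trade-off described above: a reasonable solution quickly, with no guarantee of optimality.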

Once we’ve gone through these four steps, we are done, right? Not quite. Often the model gives answers that we know cannot work in the real problem, which means the model is not quite right. In this case, we may have to start all over again. Mathematical modeling is a very cyclical process, as illustrated in Figure 1.1.

Even when the model seems correct, we are often interested in scenarios where our parameters change. These “What If” scenarios are important, because often our data do not generate the absolutely correct parameters. Therefore, we want to know what small changes to our parameters do to our solutions.

Sensitivity Analysis Sensitivity analysis is the determination of the effect of small changes of parameter values on the solutions to a mathematical model.

FIGURE 1.1 Cyclical nature of mathematical modeling.

Sometimes this is very easy to do, and sometimes it is not. For one important model we’ll study, sensitivity analysis is both straightforward and informative.
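For a concrete, if hypothetical, "what if" calculation (not one of the book's models), consider the two-variable linear program maximize 2x + 3y subject to x + y ≤ 4 and x + 2y ≤ 6 with x, y ≥ 0. The sketch below, using SciPy's linprog as an assumed solver, re-solves the model after raising the first right-hand side by one unit; the change in the optimal value is that constraint's per-unit worth (its shadow price).

```python
# Re-solve a tiny LP after perturbing one right-hand side to see
# how sensitive the optimal value is to that parameter.
from scipy.optimize import linprog

def optimal_value(b_ub):
    # linprog minimizes, so negate the objective to maximize 2x + 3y.
    res = linprog([-2, -3], A_ub=[[1, 1], [1, 2]], b_ub=b_ub,
                  bounds=[(0, None), (0, None)], method="highs")
    return -res.fun

base = optimal_value([4, 6])    # optimal value with b = (4, 6)
bumped = optimal_value([5, 6])  # one more unit of the first resource
print(base, bumped - base)      # the difference is the shadow price
```

Here the optimal value rises by exactly the shadow price of the first constraint, the kind of conclusion sensitivity analysis lets us draw without rebuilding the model.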

In this book we deal with three important parts of operations research: finding a mathematical model, solving it, and analyzing the results obtained and the solution technique itself. To do this, we first spend time on obtaining models, including some common models found in OR applications. This includes doing sensitivity analysis on certain models and learning what we can from the output of optimization software packages. We then begin a study of how to solve certain models by exploring approaches to algorithms for various mathematical problems. By learning how to derive such algorithms, we will be able to handle problems where a solution technique is not apparent.

1.2   INTRODUCTION TO OPTIMIZATION MODELING

In order to understand the modeling process, we start with a simple example and show how some of the steps can be done. Since we will focus on optimization models, we’ll begin with a two-variable problem, which enables us to solve and analyze the model graphically.

Model Formulation

Consider the following example.

   EXAMPLE 1.1

Farmer Jones decides to supplement his income by baking and selling two types of cakes, chocolate and vanilla. Each chocolate cake sold gives a profit of $3, and the profit on each vanilla cake sold is $5. Each chocolate cake requires 20 minutes of baking time and uses 4 eggs and 4 pounds of flour, while each vanilla cake requires 40 minutes of baking time and uses 2 eggs and 5 pounds of flour. If Farmer Jones has available only 260 minutes of baking time, 32 eggs, and 40 pounds of flour, how many of each type of cake should be baked in order to maximize Farmer Jones’ profit?

This is a classic problem type in operations research. Here, we want to choose the “best” solution (how many cakes to make) given a limited amount of resources (time, eggs, and flour). To model this problem, the first (and most important) step is to identify the decision variables. These are the variables that represent the decisions to be made. In Example 1.1, the decision variables correspond to determining how many chocolate cakes and vanilla cakes to bake. Here, we can define the following variables (of course, the names do not matter):

C = the number of chocolate cakes to bake
V = the number of vanilla cakes to bake
In this example, items such as baking time, amount of eggs and flour, and the restrictions on their use are all given to us; these quantities are often referred to as input parameters. These are fixed in any mathematical model we create.

Once we know the decision variables, we need to determine what restrictions and/or requirements govern their possible values. These formulations are known as the constraints of the problem and can be classified into two groups: (1) variable bounds and (2) general constraints. Variable bounds specify the values for which the decision variables have meaning. In Example 1.1, the variables C and V don’t mean anything if their values are negative; hence, these variables are subjected to the most common variable bound, that of nonnegativity. Here, we would specify that

C ≥ 0, V ≥ 0    (1.1)

in any solution to our model. Of course, in other models, the variables could also be either nonpositive (≤0) or unrestricted in sign (positive, negative, or zero). Note that these variables can (technically) have any value greater than or equal to zero.

Continuous Variable A variable that can have any fractional value is known as a continuous variable.
Discrete Variable If a variable is restricted to have only integer values, this variable would then be a discrete or integer variable.

General constraints specify all remaining restrictions, requirements, and other interactions that could limit the values of the decision variables. In Example 1.1, we are told that there are a limited number of eggs, pounds of flour, and baking time. Each one of these restrictions constitutes a constraint on the model. For the eggs, for example, the constraint would be of the form

(number of eggs used) ≤ 32
Note that this is an inequality since we do not necessarily need to use every available egg; we just cannot exceed our limit. We need to now determine how to find the number of eggs needed using the decision variables. Since we know that every chocolate cake requires 4 eggs and every vanilla cake uses 2 eggs, we find that the number of eggs used to make the cakes is 4 times the number of chocolate cakes made + 2 times the number of vanilla cakes made = 4C + 2V . The constraint restricting the number of eggs used would then be

4C + 2V ≤ 32    (1.2)

If we look at the amount of flour, we’d get a constraint of the form

(amount of flour used) ≤ 40
Since each chocolate cake requires 4 pounds of flour and each vanilla cake requires 5 pounds of flour, the total amount of flour used is 4C + 5V, giving the constraint

4C + 5V ≤ 40    (1.3)

By this time, you can probably see that the constraint for the amount of baking time is

20C + 40V ≤ 260    (1.4)

The last part of any mathematical program is the objective function, or the function of the decision variables that we wish to maximize or minimize. In Example 1.1, we are interested in maximizing the profit. Similar to our construction of the constraints above, we need to come up with a function relating the decision variables to the profit. Since each chocolate cake brings in $3, and each vanilla cake brings in $5, our objective is to maximize

f(C, V) = 3C + 5V    (1.5)

Example 1.1 can be written in the general form (1.7) by combining variable bounds (1.1), constraints (1.2), (1.3), and (1.4), and objective function (1.5), giving the following model:

max  3C + 5V
s.t. 4C + 2V ≤ 32
     4C + 5V ≤ 40
     20C + 40V ≤ 260
     C, V ≥ 0          (1.6)

This particular problem is an example of a linear program, because the objective function is linear and every constraint is a linear inequality (or equation).

Any mathematical program is a combination of the objective function, general constraints, and variable bounds. It is written in the following general form:

max (or min)  f(x1, x2, ..., xn)
s.t.          gi(x1, x2, ..., xn) ≤ bi (or ≥ bi, or = bi),  i = 1, ..., m
              variable bounds on x1, x2, ..., xn          (1.7)

where “s.t.” stands for “subject to.”

Before we continue with our example, we should introduce some basic definitions.

Solution A solution to an optimization problem is a collection of values of the decision variables.
Feasible Solution A solution is feasible if it satisfies all constraints and bounds.
Feasible Region The feasible region of an optimization problem is the set of feasible solutions to the problem.
Value The value of a feasible solution is its objective function value.
Optimal Solution An optimal solution to an optimization problem is a feasible solution whose value is at least as good as that of every other feasible solution. If our optimization problem is a maximization problem, then the optimal solution has value at least as large as all other feasible solutions. If it is a minimization problem, then the optimal solution has value no bigger than the value of any other feasible solution. The value of an optimal solution is known as the optimal value.
Unbounded Problem An optimization problem is unbounded if, for any feasible solution x, there exists another feasible solution y whose value is better than the value of x. If the optimization problem is a maximization problem, then the value of y is greater than the value of x; if it is a minimization problem, the value of y is less than the value of x. We often say that an unbounded optimization problem has no finite optimal solution.
Infeasible Problem A mathematical program is infeasible if there are no feasible solutions, that is, there does not exist any solution that satisfies all constraints and bounds.

Solving the Model

Now that we have a model for our problem, we need to solve it. If we were to plot the feasible region of problem (1.6), we’d have the shaded region given in Figure 1.2. We are interested in finding the solution within this region that gives the largest value of f (C, V ) = 3C + 5V . How do we find such a solution, if it even exists? It should be apparent that, because the feasible region is bounded and the boundary is well defined (because of the ≤ constraints), there should be an optimal solution. One way to find this solution is to use an idea from calculus, namely contour plots, or curves of f (C, V ) = k for certain values of k. For example, Figure 1.3 shows the feasible region, along with the lines 3C + 5V = 10 (dots) and 3C + 5V = 20 (dashes); the arrows indicate the direction in which to move the line 3C + 5V = k if we want to increase k. Note that each line intersects the feasible region, and hence there exists some solution in the feasible region that has an objective function value of 10, and another that has an objective function value of 20. The process should now be obvious: Find the largest value of k such that the line 3C + 5V = k intersects the feasible region.

Of course, we could just start guessing values of k until we find the right one, but shouldn’t there be a better way? We can use the fact that all contours 3C + 5V = k are parallel. So, let a straightedge (like a ruler) represent the line 3C + 5V = k for some value of k such that the line intersects the feasible region. Here, perhaps an easy line to use would be 3C + 5V = 15. Next, determine which direction to move the line so as to increase the value of k, and move the straightedge in that direction until our straightedge no longer intersects the feasible region. A solution that is both on the line and in the feasible region is then optimal. Note that, if we want to minimize our objective function, we’d move our straightedge in the opposite direction, but the same thinking would be involved.

FIGURE 1.2 Feasible region of problem (1.6).

FIGURE 1.3 Contour lines for problem (1.6).

FIGURE 1.4 Optimal solution for problem (1.6) is at (5, 4).

In problem (1.6), we first notice that increasing the value of k means moving our straightedge in the direction of the arrows (which are the normal vectors of each line 3C + 5V = k); Figure 1.4 shows the line for the value k = 35, which passes through the feasible solution (5, 4), and any increase in k yields a line that no longer intersects the feasible region. Likewise, for any k < 0 the line does not intersect the feasible region. Hence, every solution in the feasible region lies on a line 3C + 5V = k for some 0 ≤ k ≤ 35. Since, by sweeping from k = 0 to k = 35, we have in essence looked at all the solutions in the feasible region, we can conclude that C = 5, V = 4 is an optimal solution to our mathematical program, with value 3(5) + 5(4) = 35.
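The straightedge argument can be mimicked numerically by checking every intersection of two constraint boundaries, since the contour line last touches the region at such a point. The following is only a sketch in plain Python, and it assumes the optimum occurs at a “corner point” (whether that is always the case is a question we return to later):

```python
from itertools import combinations

# Each constraint of model (1.6) written as a1*C + a2*V <= b,
# including the nonnegativity bounds in the same form.
cons = [
    (4, 2, 32),     # eggs
    (4, 5, 40),     # flour
    (20, 40, 260),  # baking time
    (-1, 0, 0),     # C >= 0
    (0, -1, 0),     # V >= 0
]

def feasible(C, V, eps=1e-9):
    return all(a1 * C + a2 * V <= b + eps for a1, a2, b in cons)

best = None
for (a1, a2, b), (c1, c2, d) in combinations(cons, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue  # parallel boundaries never intersect
    C = (b * c2 - a2 * d) / det  # Cramer's rule for the 2x2 system
    V = (a1 * d - b * c1) / det
    if feasible(C, V):
        val = 3 * C + 5 * V
        if best is None or val > best[0]:
            best = (val, C, V)

print(best)  # -> (35.0, 5.0, 4.0)
```

Only the corner point (5, 4) attains the value 35, in agreement with Figure 1.4.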

This first example actually brings up a few questions that we may want to pay attention to, even though we won’t be answering them right now. Our optimal solution is at a “corner point,” or at the intersection of the two constraints 4C + 5V = 40 and 20C + 40V = 260. Will this always happen, or are there examples where no corner point is an optimal solution? Also, we found this optimal solution by increasing the value of k until our objective function contour line was no longer touching the feasible region. Is it possible for this contour line to intersect the feasible region for some k1, not intersect the feasible region for another k2 > k1, and then intersect the feasible region again for some k3 > k2? Why is this important, you may ask? If this can happen, then our method, if done by a computer, would stop at a solution that may be “locally” optimal but not “globally.” These are issues that we want to keep in the back of our minds as we look at more examples.

What would happen if our profit margin for selling vanilla cakes changed from $5 per unit to $8? To examine this, plot the contours for f(C, V) = 3C + 8V. It’s easy to see from Figure 1.5 that the solution (5, 4) is no longer optimal—(0, 6.5) is. Can we really make 6.5 vanilla cakes? Probably not, so we may need to alter our model to specify that both V and C have integer values. This is an example of an integer program, which is a linear program where some (or all) of the variables are restricted to integer values. However, restricting these variables to be integers alters our feasible region; only solutions with integer coordinates that satisfy the constraints are feasible, as shown in Figure 1.6. How can we find the optimal solution now? The same way as before, by changing the contour lines. Doing so gives the optimal solution (1, 6). Please note some major differences from the first problem: (1) the optimal solution is not at the intersection of two constraints anymore and (2) simply rounding the first solution (without the integer restriction) did not give the new optimal solution. This seems to suggest that solving problems with integer-restricted variables is in general more difficult than solving problems where variables can take any value.
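Because this model is tiny, the integer-restricted problem can be checked by brute force: enumerate every integer point satisfying the constraints and keep the best value of 3C + 8V. This is a quick Python verification of the claim, not a practical integer-programming method:

```python
# Enumerate integer feasible points of the Farmer Jones model and
# maximize the new objective 3C + 8V.
best = max(
    (3 * C + 8 * V, C, V)
    for C in range(9)   # eggs: 4C <= 32 implies C <= 8
    for V in range(7)   # time: 40V <= 260 implies V <= 6
    if 20 * C + 40 * V <= 260 and 4 * C + 2 * V <= 32 and 4 * C + 5 * V <= 40
)
print(best)  # -> (51, 1, 6)
```

The integer optimum (1, 6) has value 51, while rounding (0, 6.5) down to (0, 6) gives only 48.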

FIGURE 1.5 For an objective z = 3C + 8V, (0, 6.5) is now optimal.

Sensitivity Analysis

As noted previously, it is useful to know how changes to the model parameters affect or do not affect the solution to our current model. Such sensitivity analysis provides insight into the nature of our solution and model, and is an important component of modeling. Let’s see one example of this analysis for the Farmer Jones problem.

FIGURE 1.6 Feasible solutions (in dots) for integer-restricted problem.

FIGURE 1.7 Sensitivity analysis for objective coefficient at optimal solution (5, 4).

We’ve already seen that if the profit margin for vanilla cakes changes from $5 to $8, our optimal solution changes. A natural question that Farmer Jones may ask is “For what profit margins on vanilla cakes will the current solution remain optimal?” Why is this a natural question? Farmer Jones may not want to solve the linear program each time a value changes, so knowing what changes do not affect the optimal solution saves him time. To answer this question, refer back to how we solved the problem. In Figure 1.4, we saw that the optimal solution occurs at the solution (C, V ) = (5, 4) because the contour 3C + 5V = k last intersects the feasible region here. If we were to change the profit margin of vanilla cakes from 5 to some value cV , we are changing the slope of this line. In Figure 1.7, we zoom in on the optimal solution (5, 4) and show the current objective contour 3C + 5V = 35 as the dashed line. It should be apparent that as long as the slope of our objective line stays between the slopes of the two constraints 4C + 5V = 40 and 20C + 40V = 260 (which intersect at (5, 4)), we have answered the question. Specifically, the solution (C, V ) = (5, 4) will remain optimal to our model as long as our profit margin cV for vanilla cakes satisfies the following equation:

-4/5 ≤ -3/cV ≤ -1/2

where -4/5 is the slope of the line 4C + 5V = 40, -1/2 is the slope of the line 20C + 40V = 260, and -3/cV is the slope of the objective function. Solving this inequality for cV yields

15/4 ≤ cV ≤ 6

Thus, if our profit margin for vanilla cakes increases by at most $1 or decreases by no more than $1.25, it is best to make 5 chocolate cakes and 4 vanilla cakes.
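The endpoint computation can be sketched directly from the slopes (plain Python; with C on the horizontal axis, a line aC + bV = k has slope -a/b):

```python
# Slopes of the two constraints that intersect at the optimum (5, 4):
slope_flour = -4 / 5    # 4C + 5V = 40
slope_time = -20 / 40   # 20C + 40V = 260

# (5, 4) stays optimal while slope_flour <= -3/cV <= slope_time;
# solving each bound for cV gives the allowed range of profit margins.
cV_low = -3 / slope_flour
cV_high = -3 / slope_time
print(cV_low, cV_high)  # -> 3.75 6.0
```

Any vanilla profit margin strictly inside this range leaves (5, 4) optimal; at the endpoints, an alternative corner point becomes equally good.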

1.3   COMMON CLASSES OF MATHEMATICAL PROGRAMS

In the Farmer Jones example we explored the concept of modeling using mathematical programs, including objective function and constraint generation. In doing so, we introduced two important classes of mathematical programs, linear programs and integer programs. In this section we will formally introduce these problems as well as introduce, through an example, another common class.

Linear Programs

When the objective function is a linear function, and each constraint is either a linear inequality or linear equation, we say that the mathematical program (1.7) is a Linear Programming Problem, or a Linear Program. The general form of a linear program is

max (or min)  c1x1 + c2x2 + ... + cnxn
s.t.          ai1x1 + ai2x2 + ... + ainxn ≤ bi (or ≥ bi, or = bi),  i = 1, ..., m
              xj ≥ 0 (or ≤ 0, or unrestricted in sign),  j = 1, ..., n
For example, the optimization model (1.6) is a linear program. Linear programs are perhaps the most commonly used mathematical optimization models, having applications in many diverse areas such as production, inventory, finance, science, medicine, engineering, and pretty much any other discipline you can think of.

Of course, modeling a real-world problem as a linear program has a few implications. For example, having all constraints and the objective function as linear functions means that the contribution from each variable to the constraints and objective function must be proportional to the value of the variable. For example, in Example 1.1, we know that the profit (objective function contribution) from making 3 chocolate cakes ($9) is exactly 3 times the profit of making 1 chocolate cake ($3). This is also true for each constraint; in terms of eggs used, the number of eggs used in making 3 chocolate cakes (12) is 3 times the number of eggs used in making 1 chocolate cake (4). This idea is often referred to as the Proportionality Assumption of Linear Programming. Any model that violates this assumption cannot be modeled as a linear program. For example, if we can make 3 chocolate cakes using only 10 eggs while we need 4 eggs for one cake, we cannot model this problem as a linear program.

Another assumption required in order to use linear programming to model situations addresses the total contribution all the variables make toward the objective function and the left-hand side of each constraint. We will make the assumption that these contributions are independent of all other variables. For example, in Example 1.1, the profit contribution for making C chocolate cakes is independent of how many vanilla cakes V are made; also, in the eggs constraint, the number of eggs used in making chocolate cakes is independent of how many vanilla cakes are made. Finally, the total contribution from all variables can be found by summing the contributions made from each variable. This assumption is often referred to as the Additivity Assumption of Linear Programming. If a mathematical model violates this assumption, it can still be formulated as a mathematical optimization problem, but this problem would be nonlinear, which is not as easy to solve as linear programs.

There are two other assumptions needed to model a real-world situation as a linear program. First, we assume that each variable can be infinitely divisible and still have meaning. For example, in Example 1.1, it makes sense to talk about 3.2162 chocolate cakes, if we are modeling average production over an extended period of time. This assumption is known as the Divisibility Assumption. Of course, not all models satisfy this assumption. For example, if our cake model is only for one week, we would want to have integer solutions. In this case, people will often ignore the assumption, get potentially fractional values for the variables, and then round the variables to the nearest integer. Unfortunately, this will not always yield a feasible solution.
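The rounding caveat is visible in the Farmer Jones model with the $8 vanilla profit from Section 1.2: the fractional optimum (0, 6.5) rounds up to an infeasible point and down to a suboptimal one. A quick feasibility check in Python:

```python
def feasible(C, V):
    """Check all constraints and bounds of the Farmer Jones model."""
    return (20 * C + 40 * V <= 260 and 4 * C + 2 * V <= 32
            and 4 * C + 5 * V <= 40 and C >= 0 and V >= 0)

print(feasible(0, 7))  # -> False (rounding up violates the baking-time limit)
print(feasible(0, 6))  # -> True  (feasible, but worse than the integer optimum (1, 6))
```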

The final assumption, and perhaps the most unrealistic, is the Certainty Assumption. Here we assume that every parameter (objective function coefficients, constraint coefficients, and right-hand side values) is known with certainty. While this rarely happens in the real world, we can handle this assumption by studying how changes in parameters affect (or do not affect) the optimal solution of the linear program. In this case, we’ll see when it is acceptable to assume that parameters are fixed when they really are not.

Nonlinear Programs

One of the strengths of linear programs is that they are well understood mathematically and hence easy to solve. In fact, we show that it is possible to derive both algorithms for solving linear programs and criteria to easily identify when the current solution obtained is in fact optimal. Unfortunately, not all mathematical programs can be so easily solved. In many cases, either we cannot guarantee that the final solution is an optimal solution or, if we can get such a guarantee, the time to actually solve even moderately sized problems can be too long (say thousands of years!). However, one drawback of linear programs is that they can be too simple—in general, the world is not linear! Thus, other classes of models also often occur in practice.

One such model is a nonlinear program. In these problems, the additivity and proportionality assumptions of linear programming are often violated. In fact, you have probably been exposed to such problems early in your mathematical career, as the following example illustrates.

   EXAMPLE 1.2

Suppose that we wish to enclose a rectangular yard using at most 100 feet of wire fencing. What is the largest area that can be formed?

You probably recall this example from calculus. If we let our decision variables be

l = the length of the yard (in feet)
w = the width of the yard (in feet)
then we can write our problem, in the form of (1.7), as

max  lw
s.t. 2l + 2w ≤ 100
     l, w ≥ 0
Note that the objective violates the additivity assumption of linear programming, since the objective function contribution of each variable l and w depends on the other variable.
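As a sanity check, note that an optimal fence uses all 100 feet (otherwise the yard could be enlarged), so w = 50 - l and the area is l(50 - l). A coarse grid search in Python recovers the calculus answer l = w = 25:

```python
# Search l in steps of 0.01 over [0, 50]; area(l) = l * (50 - l).
best = max((l * (50 - l), l) for l in (i / 100 for i in range(5001)))
print(best)  # -> (625.0, 25.0)
```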

In general, we can write nonlinear programs in the form

max (or min)  f(x1, ..., xn)
s.t.          gk(x1, ..., xn) ≤ bk (or ≥ bk, or = bk),  k = 1, ..., m
where f and each gk are linear or nonlinear functions of the variables x1, ..., xn. Their strength occurs in the fact that they are much more general than linear programs, and hence can model more complicated interactions between variables. However, nonlinear programs are more difficult to solve to optimality than linear programs, primarily because of the presence of local optimal solutions. Recall from calculus that these solutions are the “best” solution among those “nearby” feasible solutions, but may not be the optimal solution overall. Algorithms for nonlinear programs often stop when they have found such a local optimal solution, but they cannot comment on its global optimality.
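A toy illustration of this pitfall: the one-variable function below (an arbitrary choice for illustration, not from the text) has two local minima, and a naive descent method stops at whichever one lies near its starting point:

```python
def local_search(f, x, step=0.01, iters=10000):
    """Naive descent: move left or right while the move improves f."""
    for _ in range(iters):
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            break  # no improving neighbor: a local minimum (on this grid)
    return round(x, 2)

f = lambda x: x**4 - 3 * x**2 + x

a = local_search(f, 2.0)    # ends at the worse local minimum
b = local_search(f, -2.0)   # ends at the global minimum
print(a, b, f(a) > f(b))    # the two starts reach different minima
```

The search itself cannot tell which of its two answers is globally best; that is exactly the difficulty nonlinear algorithms face.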

Integer Programs

Perhaps the most common mathematical program used in business and industry is the integer program. Typically, these are linear programs where some (or all) of the variables are required to have integer values; this assumption violates the divisibility assumption of linear programming. Integer programs are often written in the form

max (or min)  c1x1 + c2x2 + ... + cnxn
s.t.          ai1x1 + ai2x2 + ... + ainxn ≤ bi (or ≥ bi, or = bi),  i = 1, ..., m
              xj ≥ 0,  j = 1, ..., n
              xj integer for some (or all) j
A relatively straightforward example of an integer program is given below.

   EXAMPLE 1.3

Senior design students are trying to determine which person will take primary responsibility for the remaining tasks the team must complete. Below is a table of the amount of time each student would need to complete the corresponding task, in hours.

If each student must do at least one but no more than two tasks, how should the tasks be divided so as to minimize the total amount of time required to finish all the tasks?

For this model, we want to identify which students get which tasks. One way to model this is to define our variables as follows:

xij = 1 if student i takes primary responsibility for task j, and 0 otherwise
Each of these variables can have only the values 0 or 1, which automatically violates the divisibility assumption of linear programming. With these variables, we have the constraints that (1) each student can be assigned to either one or two jobs, and (2) each job must be assigned to at least one student. Constraint (1) implies that

1 ≤ xi1 + xi2 + ... + xi6 ≤ 2  for each student i ∈ {1, 2, 3, 4}
Constraint (2) is formed similarly:

x1j + x2j + x3j + x4j ≥ 1  for each job j ∈ {1, 2, ..., 6}
The objective function is to minimize the total time Σi Σj cij xij, where cij is the time for student i to do job j. Our integer program is

min  Σi Σj cij xij
s.t. 1 ≤ xi1 + xi2 + ... + xi6 ≤ 2,  i = 1, ..., 4
     x1j + x2j + x3j + x4j ≥ 1,  j = 1, ..., 6
     xij ∈ {0, 1} for all i, j
If we can assign students to portions of tasks, then we could change the variable bounds xij ∈ {0, 1} to

0 ≤ xij ≤ 1
for each i ∈ {1, 2, ..., 4} and j ∈ {1, 2, ..., 6} and thus have a linear program.

Integer programs are often used to model yes-or-no decisions, as the above example illustrates. It uses binary variables (those that can take only values 0 or 1) to do this, primarily because the logic of the model is easier to control. For example, a more natural approach in the above example is perhaps to let xk be the task person k is assigned, so that xk ∈ {1, 2, 3, 4}. However, by defining variables this way, it is difficult to write a constraint that says “no task can be assigned to more than one student.” Problems where all the variables are binary are often called discrete or combinatorial optimization problems.
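Since the time table for Example 1.3 is not reproduced here, the sketch below uses hypothetical times (an invented 4 × 6 table). For a minimization it suffices to give each task to exactly one student, so the 4^6 possible assignments can be enumerated directly:

```python
from itertools import product

# Hypothetical data: times[i][j] = hours for student i to do task j.
times = [[3, 5, 2, 6, 4, 7],
         [4, 2, 5, 3, 6, 4],
         [6, 4, 3, 2, 5, 3],
         [5, 6, 4, 4, 2, 5]]

# assign[j] = student who takes task j; keep assignments in which every
# student has at least one and at most two tasks.
best = min(
    (sum(times[s][j] for j, s in enumerate(assign)), assign)
    for assign in product(range(4), repeat=6)
    if all(1 <= assign.count(s) <= 2 for s in range(4))
)
print(best)  # -> (14, (0, 1, 0, 2, 3, 2))
```

With real data one would solve the integer program itself; the enumeration here only verifies the model’s logic on a small invented instance.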

Unfortunately, integer programs are in general very difficult to solve. Algorithms exist that, at least theoretically, will output the optimal solution when they finish; however, the amount of time needed for such an algorithm to finish may be more than the current age of the universe (independent of the speed of your computer)! Heuristic algorithms are often used for integer programs instead of exact algorithms. Heuristics typically stop in a rather short amount of time, but are not guaranteed to find the global optimal solution. Such drawbacks usually are expected when working with integer programs. For combinatorial optimization problems, heuristics are often the best way to generate good solutions, since such problems are often computationally expensive to solve to optimality.

1.4   ABOUT THIS BOOK

To study and utilize the models an OR analyst would consider, it is best to look at three interconnected components. First, we must understand the logic and importance of optimization modeling. In fact, when presented with a real problem, we are not usually given a mathematical description of it. This is where the use of modeling techniques comes into play, and it is important for us to be comfortable with this part of the solution process. We study many such techniques in Chapters 2 and 3. We also explore in Chapter 4 some general classes of models that regularly appear in practice today.

Once we’ve developed a model for our problem, we have to know how to solve it. While many models we use are standard, and hence software has been developed to solve them, it is still important to know how to design algorithms, both exact and heuristic, that will solve our optimization model. To do so, in Chapter 5 we begin developing heuristic algorithms for common combinatorial optimization problems because they best illustrate the common aspects of algorithm design. After this study is done, in Chapters 6–8 we develop from first principles an algorithm that solves linear programs. We examine the notions of how to determine the structure of an optimal solution and how best to exploit this structure to develop our algorithm.

Once we have an algorithm that solves a linear program, in Chapter 9 we begin a study of the theory of linear programs that enables us to better understand and improve our algorithm and help design new algorithms. In Chapters 10 and 11, we exploit these new approaches to develop algorithms for special cases of linear programs.

Finally, in Chapters 12–15, we explore solution techniques for network-based optimization models and integer programs. These problems present unique challenges to us, both theoretically and algorithmically. During our study, we examine how the theory of linear programming plays a major role in improving the basic algorithms for integer programs.

When we are done, we will have been introduced to the three main ideas of optimization modeling: model formulation, algorithm design, and algorithm improvement using the theory of the problem. We will have seen that all three areas are interconnected and really drive the development of new and improved models, algorithms, and theoretical results. And, we will have seen why deterministic operations research is such an important, useful, and exciting area to study. Let’s begin!

EXERCISES

1.1 Solve the following linear programs by either identifying an optimal solution, indicating that the problem is unbounded, or showing the problem is infeasible.

(a)

(b)

(c)

(d)

(e)

1.2 For each problem in Exercise 1.1, determine the range of values for each objective coefficient so that, if that coefficient is the only parameter changed, the optimal solution originally found is still optimal.

1.3 Solve the integer program

1.4 Solve the integer program

1. Dantzig reminisces about this in Ref. [26].

CHAPTER 2

LINEAR PROGRAMMING MODELING

The importance of deterministic operations research in business and industry, among other places, is due to the enormous number of real-world applications that can be modeled as mathematical programs, and in particular as linear programs, integer programs, or network-based linear programs. In this chapter we explore some of the basic linear programming and network models used in deterministic operations research, and in the process, gain some experience in modeling general situations. The next chapter examines integer and combinatorial models. Finally, in Chapter 4 we explore some operations research models that commonly appear in practice and in the research literature.

2.1 RESOURCE ALLOCATION MODELS
