Engineering Optimization

R. Russell Rhinehart
Description

An Application-Oriented Introduction to Essential Optimization Concepts and Best Practices

Optimization is an inherent human tendency that gained new life after the advent of calculus; now, as the world grows increasingly reliant on complex systems, optimization has become both more important and more challenging than ever before. Engineering Optimization provides a practically-focused introduction to modern engineering optimization best practices, covering fundamental analytical and numerical techniques throughout each stage of the optimization process.

Although essential algorithms are explained in detail, the focus lies more on the human function: how to create an appropriate objective function, choose decision variables, identify and incorporate constraints, define convergence, and address the other critical issues that define the success or failure of an optimization project.

Examples, exercises, and homework throughout reinforce the author’s “do, not study” approach to learning, underscoring the application-oriented discussion that provides a deep, generic understanding of the optimization process that can be applied to any field.

Providing an excellent reference for students or professionals, Engineering Optimization:

  • Describes and develops a variety of algorithms, including gradient-based (such as Newton’s and Levenberg-Marquardt) and direct search (such as Hooke-Jeeves, Leapfrogging, and Particle Swarm) methods, along with surrogate functions for surface characterization
  • Provides guidance on optimizer choice by application, and explains how to determine appropriate optimizer parameter values
  • Details current best practices for critical stages of specifying an optimization procedure, including choosing decision variables, defining constraints, and modeling relationships
  • Provides access to software and Visual Basic macros for Excel on the companion website, along with solutions to examples presented in the book

Clear explanations, explicit equation derivations, and practical examples make this book ideal for use as part of a class or self-study, assuming a basic understanding of statistics, calculus, computer programming, and engineering models. Anyone seeking best practices for “making the best choices” will find value in this introductory resource.


Page count: 1353

Publication year: 2018




Table of Contents

Cover

Title Page

Preface

Introduction

Key Points

Book Aspirations

Organization

Rationale for the Book

Target Audience

Presentation Style

Acknowledgments

Nomenclature

About the Companion Website

Section 1: Introductory Concepts

1 Optimization

1.1 Optimization and Terminology

1.2 Optimization Concepts and Definitions

1.3 Examples

1.4 Terminology Continued

1.5 Optimization Procedure

1.6 Issues That Shape Optimization Procedures

1.7 Opposing Trends

1.8 Uncertainty

1.9 Over‐ and Under‐specification in Linear Equations

1.10 Over‐ and Under‐specification in Optimization

1.11 Test Functions

1.12 Significant Dates in Optimization

1.13 Iterative Procedures

1.14 Takeaway

1.15 Exercises

2 Optimization Application Diversity and Complexity

2.1 Optimization

2.2 Nonlinearity

2.3 Min, Max, Min–Max, Max–Min, …

2.4 Integers and Other Discretization

2.5 Conditionals and Discontinuities: Cliffs, Ridges/Valleys

2.6 Procedures, Not Equations

2.7 Static and Dynamic Models

2.8 Path Integrals

2.9 Economic Optimization and Other Nonadditive Cost Functions

2.10 Reliability

2.11 Regression

2.12 Deterministic and Stochastic

2.13 Experimental w.r.t. Modeled OF

2.14 Single and Multiple Optima

2.15 Saddle Points

2.16 Inflections

2.17 Continuum and Discontinuous DVs

2.18 Continuum and Discontinuous Models

2.19 Constraints and Penalty Functions

2.20 Ranks and Categorization: Discontinuous OFs

2.21 Underspecified OFs

2.22 Takeaway

2.23 Exercises

3 Validation

3.1 Introduction

3.2 Validation

3.3 Advice on Becoming Proficient

3.4 Takeaway

3.5 Exercises

Section 2: Univariate Search Techniques

4 Univariate (Single DV) Search Techniques

4.1 Univariate (Single DV)

4.2 Analytical Method of Optimization

4.3 Numerical Iterative Procedures

4.4 Direct Search Approaches

4.5 Perspectives on Univariate Search Methods

4.6 Evaluating Optimizers

4.7 Summary of Techniques

4.8 Takeaway

4.9 Exercises

5 Path Analysis

5.1 Introduction

5.2 Path Examples

5.3 Perspective About Variables

5.4 Path Distance Integral

5.5 Accumulation along a Path

5.6 Slope along a Path

5.7 Parametric Path Notation

5.8 Takeaway

5.9 Exercises

6 Stopping and Convergence Criteria: 1‐D Applications

6.1 Stopping versus Convergence Criteria

6.2 Determining Convergence

6.3 Combinations of Convergence Criteria

6.4 Choosing Convergence Threshold Values

6.5 Precision

6.6 Other Convergence Criteria

6.7 Stopping Criteria to End a Futile Search

6.8 Choices!

6.9 Takeaway

6.10 Exercises

Section 3: Multivariate Search Techniques

7 Multidimension Application Introduction and the Gradient

7.1 Introduction

7.2 Illustration of Surface and Terms

7.3 Some Surface Analysis

7.4 Parametric Notation

7.5 Extension to Higher Dimension

7.6 Takeaway

7.7 Exercises

8 Elementary Gradient‐Based Optimizers

8.1 Introduction

8.2 Cauchy’s Sequential Line Search

8.3 Incremental Steepest Descent

8.4 Takeaway

8.5 Exercises

9 Second‐Order Model‐Based Optimizers

9.1 Introduction

9.2 Successive Quadratic

9.3 Newton–Raphson

9.4 Perspective on CSLS, ISD, SQ, and NR

9.5 Choosing Step Size for Numerical Estimate of Derivatives

9.6 Takeaway

9.7 Exercises

10 Gradient‐Based Optimizer Solutions

10.1 Introduction

10.2 Levenberg–Marquardt (LM)

10.3 Scaled Variables

10.4 Conjugate Gradient (CG)

10.5 Broyden–Fletcher–Goldfarb–Shanno (BFGS)

10.6 Generalized Reduced Gradient (GRG)

10.7 Takeaway

10.8 Exercises

11 Direct Search Techniques

11.1 Introduction

11.2 Cyclic Heuristic Direct (CHD) Search

11.3 Hooke–Jeeves (HJ)

11.4 Compare and Contrast CHD and HJ Features: A Summary

11.5 Nelder–Mead (NM) Simplex: Spendley, Hext, and Himsworth

11.6 Multiplayer Direct Search Algorithms

11.7 Leapfrogging

11.8 Particle Swarm Optimization

11.9 Complex Method (CM)

11.10 A Brief Comparison

11.11 Takeaway

11.12 Exercises

12 Linear Programming

12.1 Introduction

12.2 Visual Representation and Concepts

12.3 Basic LP Procedure

12.4 Canonical LP Statement

12.5 LP Algorithm

12.6 Simplex Tableau

12.7 Takeaway

12.8 Exercises

13 Dynamic Programming

13.1 Introduction

13.2 Conditions

13.3 DP Concept

13.4 Some Calculation Tips

13.5 Takeaway

13.6 Exercises

14 Genetic Algorithms and Evolutionary Computation

14.1 Introduction

14.2 GA Procedures

14.3 Fitness of Selection

14.4 Takeaway

14.5 Exercises

15 Intuitive Optimization

15.1 Introduction

15.2 Levels

15.3 Takeaway

15.4 Exercises

16 Surface Analysis II

16.1 Introduction

16.2 Maximize Is Equivalent to Minimize the Negative

16.3 Scaling by a Positive Number Does Not Change DV

16.4 Scaled and Translated OFs Do Not Change DV

16.5 Monotonic Function Transformation Does Not Change DV

16.6 Impact on Search Path or NOFE

16.7 Inequality Constraints

16.8 Transforming DVs

16.9 Takeaway

16.10 Exercises

17 Convergence Criteria 2

17.1 Introduction

17.2 Defining an Iteration

17.3 Criteria for Single TS Deterministic Procedures

17.4 Criteria for Multiplayer Deterministic Procedures

17.5 Stochastic Applications

17.6 Miscellaneous Observations

17.7 Takeaway

17.8 Exercises

18 Enhancements to Optimizers

18.1 Introduction

18.2 Criteria for Replicate Trials

18.3 Quasi‐Newton

18.4 Coarse–Fine Sequence

18.5 Number of Players

18.6 Search Range Adjustment

18.7 Adjustment of Optimizer Coefficient Values or Options in Process

18.8 Initialization Range

18.9 OF and DV Transformations

18.10 Takeaway

18.11 Exercises

Section 4: Developing Your Application Statements

19 Scaled Variables and Dimensional Consistency

19.1 Introduction

19.2 A Scaled Variable Approach

19.3 Sampling of Issues with Primitive Variables

19.4 Linear Scaling Options

19.5 Nonlinear Scaling

19.6 Takeaway

19.7 Exercises

20 Economic Optimization

20.1 Introduction

20.2 Annual Cash Flow

20.3 Including Risk as an Annual Expense

20.4 Capital

20.5 Combining Capital and Nominal Annual Cash Flow

20.6 Combining Time Value and Schedule of Capital and Annual Cash Flow

20.7 Present Value

20.8 Including Uncertainty

20.9 Takeaway

20.10 Exercises

21 Multiple OF and Constraint Applications

21.1 Introduction

21.2 Solution 1: Additive Combinations of the Functions

21.3 Solution 2: Nonadditive OF Combinations

21.4 Solution 3: Pareto Optimal

21.5 Takeaway

21.6 Exercises

22 Constraints

22.1 Introduction

22.2 Equality Constraints

22.3 Inequality Constraints

22.4 Constraints: Pass/Fail Categories

22.5 Hard Constraints Can Block Progress

22.6 Advice

22.7 Constraint‐Equivalent Features

22.8 Takeaway

22.9 Exercises

23 Multiple Optima

23.1 Introduction

23.2 Solution: Multiple Starts

23.3 Other Options

23.4 Takeaway

23.5 Exercises

24 Stochastic Objective Functions

24.1 Introduction

24.2 Method Summary for Optimizing Stochastic Functions

24.3 What Value to Report?

24.4 Application Examples

24.5 Takeaway

24.6 Exercises

25 Effects of Uncertainty

25.1 Introduction

25.2 Sources of Error and Uncertainty

25.3 Significant Digits

25.4 Estimating Uncertainty on Values

25.5 Propagating Uncertainty on DV Values

25.6 Implicit Relations

25.7 Estimating Uncertainty in DV and OF

25.8 Takeaway

25.9 Exercises

26 Optimization of Probable Outcomes and Distribution Characteristics

26.1 Introduction

26.2 The Concept of Modeling Uncertainty

26.3 Stochastic Approach

26.4 Takeaway

26.5 Exercises

27 Discrete and Integer Variables

27.1 Introduction

27.2 Optimization Solutions

27.3 Convergence

27.4 Takeaway

27.5 Exercises

28 Class Variables

28.1 Introduction

28.2 The Random Keys Method: Sequence

28.3 The Random Keys Method: Dichotomous Variables

28.4 Comments

28.5 Takeaway

28.6 Exercises

29 Regression

29.1 Introduction

29.2 Perspective

29.3 Least Squares Regression: Traditional View on Linear Model Parameters

29.4 Models Nonlinear in DV

29.5 Maximum Likelihood

29.6 Convergence Criterion

29.7 Model Order or Complexity

29.8 Bootstrapping to Reveal Model Uncertainty

29.9 Perspective

29.10 Takeaway

29.11 Exercises

Section 5: Perspective on Many Topics

30 Perspective

30.1 Introduction

30.2 Classifications

30.3 Elements Associated with Optimization

30.4 Root Finding Is Not Optimization

30.5 Desired Engineering Attributes

30.6 Overview of Optimizers and Attributes

30.7 Choices

30.8 Variable Classifications

30.9 Constraints

30.10 Takeaway

30.11 Exercises

31 Response Surface Aberrations

31.1 Introduction

31.2 Cliffs (Vertical Walls)

31.3 Sharp Valleys (or Ridges)

31.4 Striations

31.5 Level Spots (Functions 1, 27, 73, 84)

31.6 Hard‐to‐Find Optimum

31.7 Infeasible Calculations

31.8 Uniform Minimum

31.9 Noise: Stochastic Response

31.10 Multiple Optima

31.11 Takeaway

31.12 Exercises

32 Identifying the Models, OF, DV, Convergence Criteria, and Constraints

32.1 Introduction

32.2 Evaluate the Results

32.3 Takeaway

32.4 Exercises

33 Evaluating Optimizers

33.1 Introduction

33.2 Challenges to Optimizers

33.3 Stakeholders

33.4 Metrics of Optimizer Performance

33.5 Designing an Experimental Test

33.6 Takeaway

33.7 Exercises

34 Troubleshooting Optimizers

34.1 Introduction

34.2 DV Values Do Not Change

34.3 Multiple DV Values for the Same OF Value

34.4 EXE Error

34.5 Extreme Values

34.6 DV Is Dependent on Convergence Threshold

34.7 OF Is Irreproducible

34.8 Concern over Results

34.9 CDF Features

34.10 Parameter Correlation

34.11 Multiple Equivalent Solutions

34.12 Takeaway

34.13 Exercises

Section 6: Analysis of Leapfrogging Optimization

35 Analysis of Leapfrogging

35.1 Introduction

35.2 Balance in an Optimizer

35.3 Number of Initializations to be Confident That the Best Will Draw All Others to the Global Optimum

35.4 Leap‐To Window Amplification Analysis

35.5 Analysis of α and M to Prevent Convergence on the Side of a Hill

35.6 Analysis of α and M to Minimize NOFE

35.7 Probability Distribution of Leap‐Overs

35.8 Takeaway

35.9 Exercises

Section 7: Case Studies

36 Case Study 1

36.1 Process and Analysis

36.2 Exercises

37 Case Study 2

37.1 The Process and Analysis

37.2 Exercises

38 Case Study 3

38.1 The Process and Analysis

38.2 Exercises

39 Case Study 4

39.1 The Process and Analysis

39.2 Pre‐Assignment Note

39.3 Exercises

40 Case Study 5

40.1 The Process and Analysis

40.2 Exercises

41 Case Study 6

41.1 Description and Analysis

41.2 Exercises

42 Case Study 7

42.1 Concepts and Analysis

42.2 Exercises

43 Case Study 8

43.1 Description and Analysis

43.2 Exercises

44 Case Study 9

44.1 Description and Analysis

44.2 Exercises

45 Case Study 10

45.1 Description and Analysis

45.2 Exercises

Section 8: Appendices

Appendix A: Mathematical Concepts and Procedures

A.1 Representation of Relations

A.2 Taylor Series Expansion (Single Variable)

A.3 Taylor Series Expansion (Multiple Variable)

A.4 Evaluating First Derivatives at x0

A.5 Partial Derivatives: First Order

A.6 Partial Derivatives: Second Order

A.7 Linear Algebra Notations

A.8 Algebra and Assignment Statements

Appendix B: Root Finding

B.1 Introduction

B.2 Interval Halving: Bisection Method

B.3 Newton’s: Secant Version

B.4 Which to Choose?

Appendix C: Gaussian Elimination

C.1 Linear Equation Sets

C.2 Gaussian Elimination

C.3 Pivoting

C.4 Code in VBA

Appendix D: Steady‐State Identification in Noisy Signals

D.1 Introduction

D.2 Conceptual Model

D.3 Method Equations

D.4 Coefficient, Threshold, and Sample Frequency Values

D.5 Type‐I Error

D.6 Type‐II Error

D.7 Alternate Type‐I Error

D.8 Alternate Array Method

D.9 SSID and TSID VBA Code

Appendix E: Optimization Challenge Problems (2‐D and Single OF)

E.1 Introduction

E.2 Challenges for Optimizers

E.3 Test Functions

E.4 Other Test Function Sources

Appendix F: Brief on VBA Programming

F.1 Introduction

F.2 To Start

F.3 General

F.4 I/O to Excel Cells

F.5 Variable Types and Declarations

F.6 Operations

F.7 Loops

F.8 Conditionals

F.9 Debugging

F.10 Run Buttons (Commands)

F.11 Objects and Properties

F.12 Keystroke Macros

F.13 External File I/O

F.14 Solver Add‐In

F.15 Calling Solver from VBA

Section 9: References and Index

References and Additional Resources

Books on Optimization

Books on Probability and Statistics

Books on Simulation

Specific Techniques

Selected Landmark Papers

Selected Websites Resources

Index

End User License Agreement

List of Tables

Chapter 03

Table 3.1 Bloom’s taxonomy of cognitive skill.

Chapter 11

Table 11.1 Optimizer results on Function 11.

Chapter 14

Table 14.1 Example of two chromosomes.

Table 14.2 Gene sequencing in two individuals.

Table 14.3 Random selection of parent genes makes the child chromosome.

Table 14.4 Illustration of several types of mutations.

Chapter 19

Table 19.1 Illustration of disparate ranges.

Chapter 20

Table 20.1 Risk of failure.

Table 20.2 Present value example.

Chapter 21

Table 21.1 A Pareto optimal comparison of optimizers.

Chapter 23

Table 23.1 Probability of at least one success in N trials.

Table 23.2 Possible conditional probabilities.

Table 23.3 Snyman and Fatti p values w.r.t. r and n.

Table 23.4 Probability as a ratio of r to n.

Table 23.5 p global using n and f and best‐of‐N formula.

Chapter 24

Table 24.1 Experimental comparison—rms values of cluster scaled DV range/average number of function evaluations to converge.

Chapter 29

Table 29.1 Data for Exercise 16.

Table 29.2 Data for Exercise 22.

Chapter 32

Table 32.1 Data for Exercise 10.

Table 32.2 A confusion matrix for Exercise 11.

Chapter 33

Table 33.1 PNOFE data for the optimizer comparison.

Chapter 35

Table 35.1 Number of initial players for test functions—from Equation (35.9).

Table 35.2 Comparison of PNOFE of original and improved initialization algorithms.

Chapter 36

Table 36.1 Cash flow table for PV calculation.

Table 36.2 How various economic indices trend with choice of which to minimize.

Chapter 43

Table 43.1 Value of givens for the liquid–vapor tank separator application.

List of Illustrations

Chapter 01

Figure 1.1 Illustrating a function with a maximum.

Figure 1.2 Regression illustration.

Figure 1.3 Equivalency of maximizing and minimizing the negative: (a) seeking the maximum and (b) seeking the minimum of the negative of the function.

Figure 1.4 Analysis of a rectangle.

Figure 1.5 A function with one minimum.

Figure 1.6 Mental construct of a battery in a resistance circuit.

Figure 1.7 Illustration of a function with a single maximum.

Figure 1.8 Illustrating a monotonic OF response to a DV.

Figure 1.9 Illustrating a dual OF response to a DV—multiplicative functionality.

Figure 1.10 Illustrating a dual OF response to a DV—additive functionality.

Figure 1.11 Illustrating a monotonic OF response to a DV—additive functionality.

Figure 1.12 Illustrating an under‐specified application.

Figure 1.13 Illustrating a balanced application.

Figure 1.14 Illustrating an over‐specified application.

Chapter 02

Figure 2.1 A box.

Figure 2.2 Illustrating striations.

Figure 2.3 Illustration of sharp valleys.

Figure 2.4 Illustration of a stochastic function.

Figure 2.5 Illustrations of multiple optima: (a) discontinuous, (b) continuum.

Figure 2.6 Illustration of a saddle point.

Figure 2.7 (a) A function and (b) its derivative.

Figure 2.8 Illustration of a penalty function, a soft constraint.

Chapter 03

Figure 3.1 Illustration of two functions to be added.

Chapter 04

Figure 4.1 (a) A continuous function of a single variable and (b) its derivative.

Figure 4.2 A secant method for root finding on a derivative.

Figure 4.3 Illustration of the quadratic surrogate model.

Figure 4.4 Bisection search stage 1—marching.

Figure 4.5 Illustration of the bisection method.

Figure 4.6 Golden Section method.

Figure 4.7 Golden Section apportionment.

Figure 4.8 Squaring a rectangle.

Figure 4.9 Illustration of the beginning of a heuristic direct search.

Figure 4.10 Illustration of a heuristic direct search: (a) unconstrained and (b) constrained.

Figure 4.11 Representation of the quandary over flat spots in the OF.

Figure 4.12 Illustration of leapfrogging: (a) first leap and (b) second and third leaps.

Figure 4.13 Illustration of an overstep when the second derivative has a low value.

Figure 4.14 Illustration of an understep when the derivative asymptotically approaches zero.

Figure 4.15 Illustration of ill‐behaved OF responses: (a) stochastic and (b) pinhole.

Figure 4.16 Illustration of (a) function discontinuity and (b) slope discontinuity.

Figure 4.17 Two difficult optimization cases: (a) minimum overstepped and (b) causes divergence in second‐order methods.

Figure 4.18 Illustration for Exercise 7.

Figure 4.19 Illustration for Exercises 8 and 9.

Chapter 05

Figure 5.1 Getting from A to B.

Figure 5.2 Coast into space game.

Figure 5.3 Best speed profile from A to B.

Figure 5.4 Path distance: (a) overall, (b) Pythagorean, and (c) tangent.

Figure 5.5 Illustrating slope along a path.

Chapter 06

Figure 6.1 Illustration of convergence criterion on OF threshold.

Figure 6.2 Illustration of issues with convergence on Δf.

Figure 6.3 Illustration of issues with convergence on Δx.

Figure 6.4 Illustration of steady‐state convergence criterion.

Figure 6.5 Illustration of convergence precision.

Chapter 07

Figure 7.1 3‐D surface illustration.

Figure 7.2 Contours.

Figure 7.3 Contour map near Stillwater, OK.

Figure 7.4 (a) Net Lines and (b) contour lines.

Figure 7.5 (a) 3‐D view and (b) contour map.

Figure 7.6 Illustration of Equation (7.16).

Figure 7.7 Illustration of a function trend w.r.t. a DV line of steepest descent.

Figure 7.8 Illustrating a line in the tangent plane through the point of contact.

Figure 7.9 Illustration for Exercise 1.

Figure 7.10 Illustration for Exercise 9.

Chapter 08

Figure 8.1 Illustrating Cauchy’s sequential line search.

Figure 8.2 CSLS example.

Figure 8.3 Water following an incremental steepest descent path.

Figure 8.4 Illustrating an ISD approach to the optimum.

Figure 8.5 Zigzag approach to optimum.

Figure 8.6 First‐order filtered gradient (a conjugate gradient approach).

Figure 8.7 For Exercises 6 and 7.

Figure 8.8 For Exercises 9 and 10.

Figure 8.9 For Exercise 20.

Chapter 09

Figure 9.1 Illustrating a 2‐D placement of points to evaluate the surrogate quadratic surface.

Figure 9.2 Illustrating a 3‐D placement of points to evaluate the surrogate quadratic surface.

Figure 9.3 SQ diversion when outside of the inflection point.

Figure 9.4 Makes Newton’s oscillate.

Figure 9.5 An NR path on Function 32.

Figure 9.6 (a) A function and (b) its derivative.

Figure 9.7 Combinations of derivative shapes near the root: (a) A–A, (b) B–B, (c) B–A, and (d) A–B.

Figure 9.8 An extreme A shape.

Figure 9.9 An extreme B shape.

Figure 9.10 NR or SQ encountering a constraint.

Figure 9.11 Illustrating too large and too small increments for numerical derivative evaluation.

Figure 9.12 Illustrating the central difference numerical approximation to the local slope: (a) near the optimum and (b) away from the optimum.

Chapter 10

Figure 10.1 Illustrating zigzag behavior of CSLS.

Chapter 11

Figure 11.1 Illustration of CHD in 2‐D.

Figure 11.2 CHD on a 2‐D Function.

Figure 11.3 Pattern move in HJ.

Figure 11.4 Illustration of HJ possible exploration points in 2‐D.

Figure 11.5 Illustration of a HJ search in 2‐D.

Figure 11.6 Illustration of a HJ pattern move in 2‐D.

Figure 11.7 Illustration of a pattern move change if the pattern point is not better.

Figure 11.8 Illustration of exploration fails in HJ.

Figure 11.9 Illustration of an alternate pattern exploration layout in HJ.

Figure 11.10 HJ path on Function 13.

Figure 11.11 A simplex in a 2‐DV case.

Figure 11.12 An NM or SHH simplex move.

Figure 11.13 The NM‐SHH simplex moving toward the optimum.

Figure 11.14 Options for placement of the smaller simplex: (a) on center, (b) on the best corner, and (c) on COG of the other points.

Figure 11.15 LF explanation: (1) Initialization.

Figure 11.16 LF explanation: (2) Choosing leap‐to area.

Figure 11.17 LF explanation: (3) The leap‐over.

Figure 11.18 LF explanation: (4) Test for convergence.

Figure 11.19 LF explanation: (5) Next leap‐over.

Figure 11.20 LF explanation: (6) Test for convergence.

Figure 11.21 LF explanation: (7) Players are converging.

Figure 11.22 3‐D view of OF response to DVs in the rank application of Function 84.

Figure 11.23 LF converged results from 100 trials on Function 84.

Figure 11.24 Path of a particle in 2‐D space.

Figure 11.25 Illustration of influences on an individual particle.

Figure 11.26 Illustration of an individual particle trend toward the minimum.

Figure 11.27 Radial basis function scaling of draw.

Figure 11.28 Illustration of a steady‐state convergence criterion on the OF value of the worst particle.

Figure 11.29 Particle positions at convergence.

Figure 11.30 Surface of Function 11.

Figure 11.31 Illustration for Exercises 1 and 2.

Figure 11.32 Illustration for Exercises 3 and 4.

Figure 11.33 Illustration for Exercise 6.

Figure 11.34 Illustration for Exercise 8.

Figure 11.35 Illustration for Exercise 10.

Chapter 12

Figure 12.1 Illustration of LP concepts.

Chapter 13

Figure 13.1 (a) Time as the stage; (b) distance as the stage.

Figure 13.2 Process heater sketch.

Figure 13.3 Possible trend of temperature w.r.t. time returning to the set point.

Figure 13.4 State and stage grid for DP.

Figure 13.5 Illustration of transitions in the final stage.

Figure 13.6 Illustration of transitions in the next to last stage.

Figure 13.7 Illustration of the second to last stage transition.

Chapter 14

Figure 14.1 Illustration of selection of the fittest.

Figure 14.2 The fittest survive.

Figure 14.3 Showing (a) rank only dominated by individuals in the lower rank, (b) rank is based on the number of individuals with better fitness.

Figure 14.4 Illustration of process response and influences over time.

Figure 14.5 Illustration of membership functions for x1 in the categories of low, medium, or high.

Chapter 16

Figure 16.1 A strictly positive monotonic function.

Figure 16.2 Illustration of a caution on using what seems to be a strictly positive monotonic function over a limited range: (a) the original OF, (b) the OF squared, and (c) a reveal of the limit of arguments to make squaring a variable a strictly increasing transformation.

Figure 16.3 Illustration of some strictly negative monotonic functions: (a) reciprocal, (b) negative exponential, (c) negative of the log or log of the reciprocal, and (d) negative.

Figure 16.4 Common strictly increasing functions: (a) squared, (b) square root, (c) log, and (d) scaled and translated.

Chapter 19

Figure 19.1 Illustration of a scaled variable.

Figure 19.2 Scaling on a –1 to +1 basis.

Figure 19.3 Scaling on a –0.8 to +0.8 basis.

Figure 19.4 Scaling on a 0.2–0.8 basis.

Chapter 20

Figure 20.1 Reaction single DV example.

Figure 20.2 Reaction single DV example—impact of coefficient uncertainty.

Figure 20.3 Optimization of nominal and 90% worst case.

Figure 20.4 Comparison of independent and autocorrelated perturbations.

Chapter 21

Figure 21.1 Illustration of how to assign equal concern values.

Figure 21.2 Initial concept for multi‐objective treatment of two desirability metrics.

Figure 21.3 Proper structure of axes variables.

Figure 21.4 Multi‐objective Pareto chart.

Figure 21.5 (a) Sleep benefit and (b) late penalty w.r.t. wake‐up time.

Chapter 22

Figure 22.1 Sketch of the United States in an x–y frame.

Figure 22.2 Illustration of a fixed penalty; (a) large enough, (b) not large enough.

Figure 22.3 Illustration of a quadratic penalty added to the OF.

Figure 22.4 Illustration of a quadratic soft penalty shifted by ɛ.

Figure 22.5 Optimization results of 100 randomly initialized trials on Equation (22.25).

Figure 22.6 Results when optimizing Equation (22.28) a slack variable transformation.

Figure 22.7 RLM solution to Equation (22.29).

Figure 22.8 Results of 100 optimizer trials from randomized initializations for Equation (22.30).

Figure 22.9 Hard constraints can block an optimizer.

Chapter 23

Figure 23.1 Multiple optima: (a) discontinuous, (b) continuum values.

Figure 23.2 A contour of multiple optima with 2 DVs.

Figure 23.3 A univariate function with three minima and region of attraction labeled.

Figure 23.4 Histogram of optima with randomized initializations in the OF of Figure 23.3.

Figure 23.5 Detail of histogram.

Figure 23.6 The pdf(OF*) solutions associated with Figure 23.3.

Figure 23.7 Diverse pdf(OF*) functions with the best 10% area shaded.

Figure 23.8 Illustration of a CDF(OF*).

Figure 23.9 Detail of the initial portion of Figure 23.8.

Figure 23.10 Illustration for Exercise 4.

Chapter 24

Figure 24.1 (a) Contour plot of the deterministic background. (b) One realization of the stochastic contours. (c) The surface in 3‐D.

Figure 24.2 Algae farm OF surface. One realization.

Figure 24.3 Reservoir. One realization.

Figure 24.4 Conventional optimization results on the GMC problem: (a) Levenberg–Marquardt, (b) particle swarm, and (c) leapfrogging.

Figure 24.5 Method for stochastic optimization.

Chapter 25

Figure 25.1 The function (solid line) and with perturbed coefficient values (dashed line).

Figure 25.2 Shift in the derivative due to uncertainty in model coefficient values.

Figure 25.3 CDF of DV* values in Equation (25.18).

Figure 25.4 CDF of OF* values in Equation (25.18).

Chapter 26

Figure 26.1 The profitability index distribution of two designs.

Figure 26.2 Illustration of the concept of modeling.

Figure 26.3 (a) Continuous or analytic and (b) experimental or discrete.

Chapter 27

Figure 27.1 Illustration of point values that are feasible.

Figure 27.2 Illustration of rounding the DV values to a feasible discretization.

Figure 27.3 An OF response to discretized DVs.

Figure 27.4 A motorcycle headlight configuration.

Figure 27.5 An exhaust fan configuration for a building.

Figure 27.6 The OF response to Function 56.

Figure 27.7 Representation of a branch‐and‐bound strategy.

Chapter 28

Figure 28.1 Illustration of the TSP.

Figure 28.2 A best solution to the Realtor Tour TSP example.

Chapter 29

Figure 29.1 Noise perturbing an experimental measurement.

Figure 29.2 Uncertainty on the experimental inputs—concept 1.

Figure 29.3 Uncertainty on the experimental inputs—concept 2.

Figure 29.4 Pdf of x and y overlaid on the experimentally obtained data point.

Figure 29.5 Likelihood contours surrounding the experimentally obtained data point.

Figure 29.6 The point on the curve with maximum likelihood of generating the experimental data.

Figure 29.7 Maximum likelihood with scaled variables.

Figure 29.8 The point of maximum likelihood is tangent to the highest likelihood contour: (a) the curve crosses one contour to a better contour, and (b) the curve does not touch the better contour.

Figure 29.9 Points of maximum likelihood are normal to the curve when variables are scaled by their standard deviation.

Figure 29.10 Model coefficient values to maximize the product of each likelihood in scaled variables.

Figure 29.11 Akaho’s method to approximate likelihood.

Figure 29.12 If variability is large, Akaho’s approximation loses validity.

Figure 29.13 Alternate models with equivalent validity to the data.

Figure 29.14 How SSD or rms might relax to a minimum value with progressive iterations.

Figure 29.15 How SSD or rms of a random data sample relaxes to a noisy steady state with iteration.

Figure 29.16 A not‐at‐steady‐state trend.

Figure 29.17 A steady‐state trend.

Figure 29.18 The distributions of the r‐statistic for three events.

Figure 29.19 Observation of rms from a random data sampling as optimizer iterations progress.

Figure 29.20 Overfitting.

Figure 29.21 A wrong model may appear to provide a reasonable fit to data.

Figure 29.22 Use of bootstrapping to reveal model uncertainty due to data variation.

Figure 29.23 Instrument calibration of salt concentration w.r.t. conductivity.

Figure 29.24 Instrument calibration of index of refraction w.r.t. mole fraction methanol in water.

Figure 29.25 Illustration for Exercise 3.

Figure 29.26 (a) Data, (b) data with uncertainty indicated by likelihood contours.

Chapter 31

Figure 31.1 3‐D view of a surface with cliffs (level discontinuities).

Figure 31.2 Contour map of a surface with level discontinuities (cliffs).

Figure 31.3 3‐D view of a surface with slope discontinuities (sharp valleys).

Figure 31.4 Contour map of a surface with slope discontinuities (sharp valleys).

Figure 31.5 A surface with a valley with steep side walls and a gentle bottom slope.

Figure 31.6 A closeup 3‐D view of a surface with striations due to numerical discretization.

Figure 31.7 A contour map of the surface section of Figure 31.6.

Figure 31.8 Results of an optimizer guided by striations, even though not visible in the large overview.

Figure 31.9 A 3‐D view of a surface with level spots.

Figure 31.10 A contour map of a surface with level spots.

Figure 31.11 3‐D view of the search for a mountain path, revealing a hard to find minimum.

Figure 31.12 2‐D contour map of the function revealing converged solutions.

Figure 31.13 Illustration of the impact of an infeasible operation encountered within the OF calculation.

Figure 31.14 Consequences of an under‐specified application.

Figure 31.15 3‐D illustration of a stochastic response.

Figure 31.16 Optimization results on the function of Figure 31.15.

Figure 31.17 The popular peaks function for testing optimizers.

Figure 31.18 Optimization on the peaks function with the LM method.

Figure 31.19 Optimization on the peaks function with leapfrogging.

Chapter 32

Figure 32.1 Illustration of an RC dc circuit.

Figure 32.2 The trend of Equation (32.10) leads to an extreme DV value.

Figure 32.3 The impact of filter coefficient on time lag of the filtered response.

Figure 32.4 Illustration for Exercise 11.

Chapter 33

Figure 33.1 Pareto comparison of two contrasting performance criteria on diverse optimizers.

Figure 33.2 Illustration for Exercise 1.

Chapter 35

Figure 35.1 Cumulative distribution function.

Figure 35.2 Global attractor area.

Figure 35.3 Comparison of data to Equation (35.22).

Figure 35.4 Comparison of data and Equation (35.26).

Figure 35.5 Optimization results from Equation (35.42).

Figure 35.6 Illustration of the pdf of leap‐to position for the first leap.

Figure 35.7 Illustration of the pdf of leap‐to position for the second leap from two locations.

Figure 35.8 Illustration of the pdf of leap‐to position for the second leap from all leap‐from locations.

Figure 35.9 Numerical results of using Equation (35.48) to determine the cumulative probability of converging after N leap‐overs.

Figure 35.10 Probability distributions of convergence after N leap‐overs. Note the semi‐log scale.

Figure 35.11 Pdf of leap‐to position after (a) one leap‐over, (b) after two, (c) after three, and (d) after four.

Chapter 36

Figure 36.1 Process illustration. Determine the optimum pipe diameter.

Figure 36.2 Impact of diameter on (a) pipe cost, (b) in‐pipe inventory, (c) pump cost, and (d) pump energy expenses.

Chapter 38

Figure 38.1 Lifetime joy as a response to retirement year and investment portion.

Figure 38.2 Optimizer results using LM.

Figure 38.3 Close‐up view of the surface revealing striations.

Figure 38.4 One realization of the surface when considering uncertainty in the givens.

Chapter 39

Figure 39.1 The concept.

Figure 39.2 An example of scheduling thrust with elevation way points.

Chapter 40

Figure 40.1 One realization of the reservoir OF.

Figure 40.2 Contour plot of one realization of the reservoir.

Chapter 43

Figure 43.1 Illustration of a vapor–liquid separator, a horizontal right circular tank.

Figure 43.2 Geometry of a segment of a circle.

Figure 43.3 3‐D view of OF with continuum variables, revealing the sharp valleys.

Figure 43.4 3‐D view of the OF with discretized D and L/D values.

Figure 43.5 The impact of the sharp valleys on a gradient‐based optimizer.

Figure 43.6 The impact of discretization on a single TS optimizer.

Chapter 44

Figure 44.1 One aspect of the optimization if n could be a continuum.

Figure 44.2 The OF response to n1.

Figure 44.3 Response to n2.

Figure 44.4 The stochastic response w.r.t. n1 and n2.

Chapter 45

Figure 45.1 Illustration of a case of redundant measurements.

Figure 45.2 Combined flows.

Figure 45.3 Blending flows.

Appendix A

Figure A.1 Illustration of local linear approximating models.

Figure A.2 (a) Illustrating a forward finite difference and (b) a central difference.

Figure A.3 Illustration of the meaning of a partial derivative.

Figure A.4 Illustrating second derivative as the difference between sequential derivatives.

Appendix B

Figure B.1 Illustrations of the function w.r.t. the dependent variable: (a) linear and (b) nonlinear.

Figure B.2 A relation with multiple roots.

Figure B.3 Illustration of interval halving stage 1—the marching method.

Figure B.4 Illustration of interval halving stage 2—bisection method.

Figure B.5 Illustration of the secant version of Newton’s method of root finding.

Figure B.6 Illustrating a misdirection that a Newton type of algorithm can produce.

Figure B.7 Illustration of a pole in a function that could misdirect interval halving.

Figure B.8 The marching method may miss a root.

Appendix D

Figure D.1 Illustration of noisy measurements (markers) and filtered data (solid line).

Figure D.2 Data showing autocorrelation.

Figure D.3 Data showing no autocorrelation when the interval is five samplings.

Figure D.4 Example application of SSID and TSID.

Figure D.5 R‐Statistic distribution at steady state.

Figure D.6 High chance of not being at steady state.

Figure D.7 Critical value for steady‐state identification.

Appendix E

Figure E.1 Peaks surface.

Figure E.2 Shortest‐time path surface.

Figure E.3 Hose winder device.

Figure E.4 Hose winder surface.

Figure E.5 Boot print contour.

Figure E.6 Boot print with pinhole surface.

Figure E.7 Stochastic boot print surface.

Figure E.8 Stochastic boot print contours: (a) one realization and (b) another realization.

Figure E.9 Reservoir surface.

Figure E.10 Chong Vu’s normal regression.

Figure E.11 Windblown surface.

Figure E.12 An integer problem surface.

Figure E.13 Reliability surface.

Figure E.14 Frog surface.

Figure E.15 Hot and cold mixing.

Figure E.16 Hot and cold mixing: (a) deterministic contour, (b) one contour realization with uncertainty, and (c) one surface realization with uncertainty.

Figure E.17 Curved sharp valleys surface.

Figure E.18 Parallel pumps surface.

Figure E.19 Underspecified surface.

Figure E.20 Parameter correlation surface.

Figure E.21 Robot soccer field.

Figure E.22 Robot soccer stochastic surface.

Figure E.23 ARMA (2, 1) regression surface.

Figure E.24 Algae pond contour.

Figure E.25 Algae pond: (a) stochastic contour and (b) stochastic surface.

Figure E.26 Classroom lecture pattern.

Figure E.27 Peaks contour.

Figure E.28 Mountain path of contour w.r.t. path coefficients c (horizontal) and d (vertical).

Figure E.29 Chemical reaction with phase‐equilibrium surface.

Figure E.30 Cork board sketch.

Figure E.31 Cork board surface.

Figure E.32 Three springs surface.

Figure E.33 Retirement (deterministic surface).

Figure E.34 Detail of retirement surface.

Figure E.35 Retirement stochastic surface.

Figure E.36 Solving an ODE.

Figure E.37 Space slingshot illustration.

Figure E.38 Space slingshot surface.

Figure E.39 (a) A best path and (b) a nearly as good path.

Figure E.40 A Goddard problem surface.

Figure E.41 Rank model surface.

Figure E.42 Liquid–vapor separator surfaces: (a) continuum size variables and (b) discretized size.

Figure E.43 Liquid–vapor separator issues for optimizers: (a) effect of ridges and (b) effect of flat spots.



WILEY-ASME PRESS SERIES LIST

Engineering Optimization Applications, Methods and Analysis (Rhinehart, March 2018)

Robust Adaptive Control for Fractional-Order Systems with Disturbance and Saturation (Chen, November 2017)

Robot Manipulator Redundancy Resolution (Zhang, October 2017)

Combined Cooling, Heating, and Power Systems: Modeling, Optimization, and Operation (Shi, August 2017)

Applications of Mathematical Heat Transfer and Fluid Flow Models in Engineering and Medicine (Dorfman, February 2017)

Bioprocessing Piping and Equipment Design: A Companion Guide for the ASME BPE Standard (Huitt, December 2016)

Nonlinear Regression Modeling for Engineering Applications (Rhinehart, September 2016)

Fundamentals of Mechanical Vibrations (Cai, May 2016)

Introduction to Dynamics and Control of Mechanical Engineering Systems (To, March 2016)

Engineering Optimization

Applications, Methods, and Analysis

 

R. Russell Rhinehart

This Work is a co‐publication between ASME Press and John Wiley & Sons Ltd.

 

This edition first published 2018
© 2018 R. Russell Rhinehart
This Work is a co‐publication between ASME Press and John Wiley & Sons Ltd

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Robert Russell Rhinehart, II to be identified as the author of this work has been asserted in accordance with law.

Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging‐in‐Publication Data

Names: Rhinehart, R. Russell, 1946– author.
Title: Engineering optimization : applications, methods and analysis / by R. Russell Rhinehart.
Description: First edition. | Hoboken, NJ : John Wiley & Sons, 2018. | Includes index.
Identifiers: LCCN 2017052555 (print) | LCCN 2017058922 (ebook) | ISBN 9781118936313 (pdf) | ISBN 9781118936320 (epub) | ISBN 9781118936337 (cloth)
Subjects: LCSH: Engineering–Mathematical models. | Mathematical optimization.
Classification: LCC TA342 (ebook) | LCC TA342 .R494 2018 (print) | DDC 620.001/5196–dc23
LC record available at https://lccn.loc.gov/2017052555

Cover design: Wiley
Cover image: © John Wiley & Sons

Preface

Introduction

Optimization means seeking the best outcome or solution. It is an essential component of all human activities. Whether personal or professional, we seek best designs, best choices, best operation, more bang for the buck, and continuous improvement.

Here are some professional examples: Minimize work events that lead to injury while remaining economically competitive. Structure workflow to maximize return on investment. Design an antenna that maximizes signal clarity for a given power. Define a rocket thrust sequence to maximize height. Determine the number of parallel devices to minimize initial cost plus future risk.

Here are some personal examples: Seek the best vacation experience for the lowest cost. Minimize grocery bill, but meet desires for nourishment and joy of eating. Set the family structure for raising children that leads to well‐adjusted, happy, productive outcomes, but keep within the limits of personal resources. Create a workout regime that leads to fastest and most attractive muscle development, with no injury, and in balance with other desires in quality of life.

Optimization is not just an intellectual exercise, although solving the challenge is often as rewarding as completing a Sudoku puzzle. We implement the optimized decision. Accordingly, within any application it is essential to completely and appropriately assess the metrics that quantify “best.” If the description of what you want to achieve is not quite right, then the answer will also be wrong, which the implementation will reveal in retrospect. You want to get it right prior to implementation. So, part of this book is about development of the optimization objective.

After the objective is stated, we desire an efficient search logic to find the best solution, with precision and with minimal computational and experimental effort. So, other parts of this book are about the optimizer—the search logic, or algorithm.

Both aspects are essential, and I find that most books on optimization focus on the intellectually stimulating mathematics of the algorithms. So, I offer this book to balance the algorithm essentials with the application essentials that guide user choices in structuring the objective, defining constraints, choosing convergence criteria, choosing initialization, and so on. Some will be disappointed that this book is not a compendium of every optimization algorithm conceived by mankind. However, others will value the application perspective.

Also, I find that most people using optimization as a tool did not have a course on it while in school. So, I have written this book in a style that I hope facilitates self‐study by those who need to understand optimization applications while keeping it fit for use as a graduate‐school course textbook.

Key Points

Here are a few essential aspects of optimization:

Point 1: Although optimization offers the joys of solving an intellectual puzzle, it is not just a stimulating mathematical game. Optimization applications are complicated, and the major challenges are the clear and complete statement of:

The objective function (OF—the outcome you wish to minimize or maximize)

Constraints (what cannot or should not be violated, or exceeded)

The decision variables (DV—what you are free to change to seek a minimum)

The model (how DVs relate to OF and constraints)

The convergence criterion (the indicator of whether the algorithm has found a close enough proximity to the minimum or maximum and can stop or needs to continue)

The DV initialization values

The number of starts from randomized locations to be confident that the global optimum has been found

The appropriate optimization algorithm (for the function aberrations, for utility, for precision, for efficiency)

Computer implementation in code

Oh yes, the mathematics of the optimization algorithm (understanding this is also important)

This book seeks to address all 10 aspects, not just the 10th.
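
To make these elements tangible, here is a minimal sketch in the VBA style of the book’s companion programs, for a hypothetical single‐DV problem: minimize f(x) = (x − 3)^2 subject to x ≥ 1. The function, penalty weight, initialization value, and convergence threshold are all illustrative assumptions, not values from the book, and the search logic is only a toy step‐halving heuristic of the kind developed properly in Chapters 4 and 11.

    ' Illustrative sketch, not the book's companion code.
    ' OF, model, and soft constraint for: minimize (x - 3)^2 subject to x >= 1
    Function ObjectiveFunction(x As Double) As Double
        ObjectiveFunction = (x - 3#) ^ 2              ' model relating the DV to the OF
        If x < 1# Then ObjectiveFunction = ObjectiveFunction + 100# * (1# - x) ^ 2  ' assumed penalty weight
    End Function

    Sub SimpleSearch()
        Dim x As Double, stepSize As Double, fBest As Double, fTrial As Double
        x = 10#                               ' DV initialization value (assumed)
        stepSize = 1#                         ' initial step size (assumed)
        fBest = ObjectiveFunction(x)
        Do While stepSize > 0.0001            ' convergence criterion on step size
            fTrial = ObjectiveFunction(x + stepSize)
            If fTrial < fBest Then
                x = x + stepSize: fBest = fTrial          ' accept the improving move
            Else
                fTrial = ObjectiveFunction(x - stepSize)
                If fTrial < fBest Then
                    x = x - stepSize: fBest = fTrial      ' move in the other direction
                Else
                    stepSize = stepSize / 2#              ' contract: neither direction improved
                End If
            End If
        Loop
        MsgBox "DV* = " & x & ", OF* = " & fBest
    End Sub

Even in this toy, most of the ten aspects appear explicitly: the OF, the constraint handling, the DV, its initialization, the convergence criterion, and the search logic.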

Point 2: Do not study. Learning is most effective if you integrate the techniques into your daily life. You will forget the material that you memorized in order to pass a test. Since this book provides skills that are essential for both personal and career life, I want you to take the techniques with you. I want this book to be useful in your future. Although memorization and high‐level mathematical analysis are both elements of the book, understanding the examples and doing the exercises are more important. To maximize the impact of this material, you need to integrate it into your daily life. You need to practice it.

Oh, I see I omitted a comma in the first sentence of the paragraph above. It should have been “Do, not study.” Learn by doing. After you read a section and think you understand it, see if you can implement it. Of course, the comma “error” above was intentional to wake up curiosity about the message.

Point 3: Optimization is universal to all engineering, business, science, computer science, and technology disciplines. Although primarily written for engineering applications, this introductory book is designed to be useful for all those seeking to apply optimization in all fields.

Point 4: The implementation of optimization requires computer programming, which for many is an aggravation. To help the reader, I currently have, and plan to support, a website that offers optimization software and examples to any visitor. Visit www.r3eda.com. The “r3” in the address is my initials, and the appended “eda” means “enabling data analysis.” Seeking to maximize ease of use and accessibility, the programs are written as VBA macros for MS Excel. VBA is not the fastest computing environment, nor does it have the best scientific data processing functions. However, it has been adequately functional for all of my applications, and if you need something better, the code can be translated. This book provides a VBA primer (Appendix F) for those needing help in accessing and modifying the code. The programs on the r3eda site solve many of the examples in this book.
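
For readers new to the environment, the kind of spreadsheet interaction the primer covers looks roughly like the following sketch, which reads decision‐variable values from worksheet cells, evaluates a placeholder objective function, and writes the result back. The sheet name, cell addresses, and function are assumptions for illustration, not the layout used by the r3eda.com programs.

    ' Illustrative sketch of Excel/VBA I/O (cell layout is an assumption)
    Sub EvaluateOFFromSheet()
        Dim x1 As Double, x2 As Double, objValue As Double
        x1 = Worksheets("Sheet1").Cells(2, 1).Value       ' DV 1 read from cell A2
        x2 = Worksheets("Sheet1").Cells(2, 2).Value       ' DV 2 read from cell B2
        objValue = (x1 - 1#) ^ 2 + (x2 - 2#) ^ 2          ' placeholder objective function
        Worksheets("Sheet1").Cells(2, 3).Value = objValue ' OF written to cell C2
    End Sub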

Book Aspirations

Readers should be pleased with their ability to:

Understand and use the fundamental mathematical techniques associated with optimization

Define objective functions, decision variables, models, and constraints for a variety of optimization applications

Develop, modify, and program simplified versions of the more common optimization algorithms

Understand and choose appropriate methods for:

– Constrained optimization

– Global optimization

– Convergence criteria

– Surface aberrations

– Stochastic applications

Understand diverse issues related to optimizer desirability

Explore, contrast, and evaluate the performance of optimization algorithms and user choices of convergence criteria, numerical derivative estimation, threshold, constraint handling method, parameter values, etc. with respect to precision, user convenience, and other measures of optimizer desirability

Apply optimization algorithms to case studies relevant to the reader’s career

Continue learning optimization methods from texts, reports, Internet postings, and refereed journal articles

Optimization is the name of the procedure for finding the best choices. “Procedure,” “best,” and “choices” are separate aspects, and the user must understand each to be able to appropriately define the application. And each aspect has a large range of options.

Procedure

This relates to the method used to find the optimum:

In process or device design, for example, the choices could be the equipment specifications (type, materials, size), and the evaluation of best in the design could be to minimize capital cost with a constraint on reliability. With mixed continuous, discrete, and class variables as the choices, a direct search algorithm might be the best optimizer.

Alternately, in scheduling a rocket thrust to reach a desired height, the stage choices might be height, best might be evaluated as minimizing either time or fuel use, and the appropriate algorithm might be dynamic programming.

Another example is characterized as the traveling salesman problem in which the objective is to determine a sequence of locations to visit to minimize travel distance. Here the choice is the sequence, and the best sequence might be impacted by a priority of visits, expenses, wasted time, etc. The procedure might use the random keys method to convert a sorted list of rational numbers into the sequence.
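
As a rough illustration of the random keys idea (the key values and the number of locations below are assumed for the example, not taken from the book), each location receives a continuous key, and sorting the keys yields the visit order, so a continuous‐variable optimizer can propose sequences simply by adjusting the keys.

    ' Illustrative sketch of the random keys decoding step (assumed key values)
    Sub RandomKeysDemo()
        Dim keys(1 To 5) As Double, order(1 To 5) As Integer
        Dim i As Integer, j As Integer, tmp As Integer
        ' Keys as an optimizer might propose them
        keys(1) = 0.62: keys(2) = 0.11: keys(3) = 0.85: keys(4) = 0.4: keys(5) = 0.27
        For i = 1 To 5: order(i) = i: Next i
        ' Sort location indices by ascending key value (simple insertion sort)
        For i = 2 To 5
            For j = i To 2 Step -1
                If keys(order(j)) < keys(order(j - 1)) Then
                    tmp = order(j): order(j) = order(j - 1): order(j - 1) = tmp
                End If
            Next j
        Next i
        ' order() now holds the visit sequence: 2, 5, 4, 1, 3 for these keys
        Debug.Print order(1); order(2); order(3); order(4); order(5)
    End Sub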

As a final contrasting example, in model‐predictive control, the objective might be to minimize time to move a response to a set point while penalizing excessive manipulated variable moves while avoiding constraints; and the choices might be the future sequence of manipulations. If the penalties are quadratic, the appropriate algorithm might be a gradient‐based procedure.
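
A minimal sketch of such a quadratic control objective is given below; the first‐order process model, the horizon implied by the array bounds, and the move‐penalty weight are all assumptions for illustration, not a model‐predictive control implementation from the book.

    ' Illustrative quadratic control OF over a horizon of manipulated-variable moves
    Function ControlOF(u() As Double, setpoint As Double, y0 As Double) As Double
        Dim y As Double, total As Double, k As Integer
        Const moveWeight As Double = 0.1            ' assumed penalty weight on moves
        y = y0
        For k = LBound(u) To UBound(u)
            y = 0.8 * y + 0.2 * u(k)                ' assumed first-order process model
            total = total + (setpoint - y) ^ 2      ' penalize deviation from the set point
            If k > LBound(u) Then total = total + moveWeight * (u(k) - u(k - 1)) ^ 2  ' penalize moves
        Next k
        ControlOF = total
    End Function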

Best

Within optimization terminology, the definition of best for a specific application (and the method for calculating a value to quantify best) is variously termed the cost function or the objective function (OF). It is the function that returns a value representing an assessment of goodness. Best usually means minimize undesirable aspects and/or maximize desirable aspects, and the OF can represent a wide range of metrics related to economics, safety, time, resource conservation, quality, deviation, probability, etc. But best might mean to minimize a worst‐case feature (min the max, or min–max), such as finding a path through mountains that minimizes the steepest ascent or finding a process design that minimizes the worst‐case outcome (risk).

Defining the appropriate OF is situation specific, and often it is the key challenge in an optimization application. The user needs to clearly understand the complex situation and realize that a first statement of the OF usually embodies a superficial understanding. Subsequent analysis of the results will lead to an evolution of the OF. For example, a challenge might be to choose the best pipe diameter in a process design. A smaller diameter means a less expensive pipe and lower in‐pipe inventory cost, but it means a larger pump. An initial OF choice might be to minimize capital. However, reconsideration from a business investment view might reveal that operating costs associated with pumping power and maintenance are also important issues, and perhaps net present value (NPV) is a right way to combine initial capital with future expenses. Then, reconsideration might bring understanding of the sensitivity of the optimum solution to uncertainty in the “givens,” which will lead to a refinement of the OF to represent the 95% worst case of the NPV in a Monte Carlo analysis, making it a stochastic function. Risk might then be perceived as an additional issue, and the OF might be split into a multi‐objective version (risk and NPV) that provides a non‐dominated set of solutions for a user to select a best for the particular situation. Finally, the user might realize that pipe comes in discrete diameter values and that the pipe diameter is not a continuous‐valued number. This application might have evolved from an initial simple deterministic (textbook example) case to a complicated application, classified as mixed integer, stochastic, and multi‐objective.
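
As a hedged sketch of how such an OF might evolve in code, the following function folds an assumed capital cost and a discounted annual pumping cost into a present‐value objective to be minimized over diameter; every coefficient, exponent, the discount rate, and the project life are illustrative assumptions, not the book’s case‐study values.

    ' Illustrative pipe-diameter OF: present value of capital plus operating cost
    Function PipeOF(D As Double) As Double
        Dim capital As Double, annualPumping As Double, pv As Double, yr As Integer
        Const rate As Double = 0.1              ' assumed discount rate
        Const life As Integer = 10              ' assumed project life, years
        capital = 5000# * D ^ 1.5               ' assumed installed pipe cost, rises with diameter
        annualPumping = 20000# / D ^ 5          ' assumed pumping cost, falls steeply with diameter
        pv = capital                            ' the first OF version was capital alone
        For yr = 1 To life                      ' the refined OF adds discounted operating cost
            pv = pv + annualPumping / (1# + rate) ^ yr
        Next yr
        PipeOF = pv                             ' minimize the present value of total cost
    End Function

The further refinements described above, such as sampling uncertain givens in a Monte Carlo loop, reporting a worst‐case percentile, or restricting D to commercially available diameters, would wrap around this same function rather than replace it.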

This book will address how to develop the OF and will show examples from a wide range of applications.

Choices

The choices a user has (you may call these inputs, decisions, degrees of freedom, or independent variables) to change things toward the best outcome are termed decision variables (DVs).