This book is designed for engineers, mathematicians, computer scientists, financial analysts, and anyone interested in using numerical linear algebra, matrix theory, and game theory to solve applied problems efficiently. It emphasizes solving linear programming problems with software like MS-Excel, Mathematica, MATLAB, WinQSB, and LINDO, while providing the necessary definitions and theorems for mastering theoretical aspects.
The journey begins with basics of linear algebra using MS-Excel, followed by an introduction to linear programming problems and the graphical method. It then delves into the simplex method, duality, and sensitivity analysis. The book goes on to cover transportation, transshipment, and assignment problems, and concludes with game theory. Each chapter builds on the previous one, ensuring a comprehensive understanding of the topics.
Understanding these concepts is crucial for solving complex applied problems. This book transitions readers from basic to advanced techniques in numerical linear algebra and linear programming, combining theoretical knowledge with practical applications. It is an essential resource for mastering these topics and maximizing efficiency in problem-solving.
Page count: 366
Year of publication: 2024
OPTIMIZATION USING LINEAR PROGRAMMING
An Introduction
A.J. Meitei, PhD
Veena Jain, PhD
MERCURY LEARNING AND INFORMATION
Dulles, Virginia
Boston, Massachusetts
New Delhi
Reprint & Revision Copyright © 2019 by MERCURY LEARNING AND INFORMATION. All rights reserved.
Original Copyright © 2018 by NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS.
This publication, portions of it, or any accompanying software may not be reproduced in any way, stored in a retrieval system of any type, or transmitted by any means, media, electronic display or mechanical display, including, but not limited to, photocopy, recording, Internet postings, or scanning, without prior permission in writing from the publisher.
Publisher: David Pallai
MERCURY LEARNING AND INFORMATION
22841 Quicksilver Drive
Dulles, VA
[email protected]
(800) 232-0223
A.J. Meitei and Veena Jain. Optimization Using Linear Programming.
ISBN: 978-1-68392-347-3
The publisher recognizes and respects all marks used by companies, manufacturers, and developers as a means to distinguish their products. All brand names and product names mentioned in this book are trademarks or service marks of their respective companies. Any omission or misuse (of any kind) of service marks or trademarks, etc. is not an attempt to infringe on the property of others.
Library of Congress Control Number: 2018964994
19 20 21    3 2 1
Printed on acid-free paper in the United States of America.
Our titles are available for adoption, license, or bulk purchase by institutions, corporations, etc. For additional information, please contact the Customer Service Dept. at (800) 232-0223 (toll free). Digital versions of our titles are available at: www.academiccourseware.com and other electronic vendors.
The sole obligation of MERCURY LEARNING AND INFORMATION to the purchaser is to replace the book and/or disc, based on defective materials or faulty workmanship, but not based on the operation or functionality of the product.
Preface
1. BASICS OF LINEAR ALGEBRA USING MS-EXCEL
1.1. Vectors
1.2. Linear Independence and Dependence of Vectors
1.3. Solution to a System of Simultaneous Linear Equations
1.4. The Gauss-Jordan Method for Solving Systems of Linear Equations
1.5. Convex Sets
Exercises
2. INTRODUCTION TO LPPs AND THE GRAPHICAL METHOD
2.1. Introduction
2.2. Assumptions in a Linear Programming Problem
2.3. Theorems on Extreme Points
2.4. Areas of Application of LPPs
2.5. Formulation of Linear Programming Models
2.6. Graphical Method
2.6.1. Extreme Point Approach
2.6.2. ISO-Profit (Cost) Function Line Approach
2.7. Solution of LPPs by the Graphical Method Using MS-Excel
2.8. Special Cases
2.8.1. The Problem with Multiple Solutions
2.8.2. The Problem with Unbounded Solutions
2.8.3. The Problem with Inconsistent Constraints
2.8.4. The Problem with Redundant Constraint Equations
Exercises
3. SIMPLEX METHOD-I
3.1. Standard and Canonical Form of the General Linear Programming Problem
3.2. Slack and Surplus Variables
3.3. Algebraic Simplex Method
3.4. Relationship between the Simplex and Graphical Methods
3.5. Simplex Method in Tabular Form
3.6. Use of Solver in MS-Excel for Solving a Linear Programming Problem
3.7. Use of Jensen Add-Ins for a Linear Programming Problem
Exercises
4. SIMPLEX METHOD-II
4.1. Introduction
4.2. Big M Method (Penalty Method)
4.3. Two-Phase Method
4.4. Degeneracy in Linear Programming Problems
4.4.1. Perturbation Method for the Resolution of Degeneracy Problems in LPPs
4.5. Solving a System of Linear Equations Using the Simplex Method
4.6. Solution of a System of Linear Equations by Using Solver
4.7. Inverse of a Matrix Using the Simplex Method
4.8. Special Cases
4.8.1. The Problem with Alternative or Multiple Solutions
4.8.2. Unbounded Solutions
4.8.3. The Problem with Inconsistent Constraint Equations
Exercises
5. DUALITY
5.1. Introduction
5.2. Rules for Finding the Dual of a Given Linear Programming Problem
5.3. Finding the Optimal Dual Solution from the Optimal Table of the Primal Problem
5.4. Use of the Graphical Method for Finding the Optimal Dual Solution
5.5. Construction of a Simplex Table
5.6. Duality Theorems
5.7. Economic Interpretation of Duality
5.8. Dual Simplex Method
Exercises
6. SENSITIVITY ANALYSIS
6.1. Introduction
6.2. Changes in the RHS Vector bi
6.2.1. Range of bi's
6.2.2. Simultaneous Changes in bi's
6.3. Addition of a New Constraint
6.3.1. When the Current Solution Satisfies the New Constraint
6.3.2. When the Current Solution Fails to Satisfy the New Constraint
6.4. Adding a New Activity or a Variable
6.5. Changes in the Objective Function Coefficients
6.5.1. Changes in the Objective Function Coefficients of Non-Basic Variables
6.5.2. Changes in Objective Function Coefficients of Basic Variables
6.5.3. 100% Rule for Making Simultaneous Changes in the Objective Function Coefficients
6.6. Changes in the aij Coefficients
6.6.1. Changes in aij Coefficients of Non-Basic Variables
6.6.2. Changes in aij Coefficients of Basic Variables
6.7. Deletion of a Variable
6.8. Deletion of a Constraint
6.9. Sensitivity by Using Excel Solver
Exercises
7. TRANSPORTATION AND TRANSSHIPMENT PROBLEMS
7.1. Introduction
7.2. Formulation of a Transportation Problem
7.3. Various Methods for Finding the Initial Basic Feasible Solution
7.3.1. North-West (N-W) Corner Method
7.3.2. Row Minima Method
7.3.3. Column Minima Method
7.3.4. Least-Cost or Matrix Minima Method
7.3.5. Vogel's Approximation Method (VAM)
7.4. Closed Path or Loop in Transportation Problems
7.5. Moving Toward the Optimal Solution
7.5.1. Stepping-Stone Method
7.5.2. The Modified Distribution (MODI) or u-v Method
7.6. Solution of Transportation Problems in Excel
7.7. Some Special Cases in Transportation Problems
7.7.1. Unbalanced Transportation Problems
7.7.2. Restricted Entry
7.7.3. Maximization Problems
7.7.4. Multiple Solutions in Transportation Problems
7.7.5. Degeneracy in Transportation Problems
7.8. Transshipment Problems
Exercises
8. ASSIGNMENT PROBLEMS
8.1. Introduction
8.2. Mathematical Formulation
8.3. Assignment Problems as a Special Case of Transportation Problems
8.4. Hungarian Method
8.5. Special Cases in Assignment Methods
8.5.1. Maximization Problems
8.5.2. Restricted Entry
8.6. Solution of Assignment Problems Using Excel Solver
Exercises
9. GAME THEORY
9.1. Introduction
9.2. Zero-Sum Games
9.3. Maximin and Minimax Principle
9.4. Game with a Saddle Point
9.5. Game without a Saddle Point
9.6. Graphical Solution of 2 × n and m × 2 Games
9.7. Method of Dominance
9.8. Solution of a Game Using the Simplex Method
9.9. Solution of a Game Using Gambit
Exercises
Appendix: Use of MATHEMATICA, MATLAB, LINDO, and WinQSB to Solve Linear Programming Models
References
Index
This book on linear programming and game theory has been jointly written by Dr. A. J. Meitei and Dr. Veena Jain with an aim to meet the needs of the students of mathematics, commerce, economics, management studies, and other allied disciplines or courses. The explanation and presentation of every topic in the book have been made as simple and user-friendly as possible. Complex mathematics involved in various theorems and procedures has been avoided, and all explanations are given in simplified and systematic forms so that even non-mathematical students or those who know only basic mathematics can easily and conveniently read the book. The main emphasis is on the solution of various types of linear programming problems by using different kinds of software. Use of software in solving mathematical problems has become an integral part of syllabi these days. Keeping this in mind, the solution of problems using the MS-Excel Solver add-in and the external Jensen add-in has been discussed in all chapters of this book. We explain step by step how the add-ins can be used to solve linear programming problems. In addition to MS-Excel, solutions of LPPs by Mathematica, MATLAB, WinQSB, and LINDO have also been explained in the Appendix.
Exercises are given at the end of each chapter so that students can practice a variety of problems. In order to make it easy for students to follow along, all of the materials related to various topics are arranged in a systematic way. All the definitions, theorems, and procedures for solving problems and all cases related to the various topics are discussed clearly in simple language.
The book is divided into nine chapters. At the beginning, Chapter 1 discusses the basic concepts of algebra that include vectors, matrices, operations on matrices, and other related methods like the Gauss-Jordan method, solutions of simultaneous linear equations, convex sets, and so forth. The use of MS-Excel in algebraic computations is also explained with relevant examples. All of these concepts are used in developing and understanding the solution procedure for solving a Linear Programming Problem (LPP), so it was essential to incorporate them in the book as a separate chapter. Chapter 2 explains each definition along with the formulation and graphical method for the solution of a linear programming problem. Some important definitions and theorems related to the solution of linear programming problems have also been incorporated. Also, the use of MS-Excel for plotting graphs and finding the solution of an LPP is thoroughly explained with examples. Chapter 3 focuses on solving linear programming problems by the simplex method with the help of its canonical form in a slightly different manner, which has been explained by very few authors. In Chapter 4, the Big M (Charnes) method and the two-phase method are included, in which the manual solution procedure and the solution by using the Excel Solver and the Jensen add-in are discussed in detail. In addition, a detailed discussion of various special LPP cases using both Excel Solver and simplex tables is included in the chapter. The concept of duality with its related theorems and importance is the main topic explained in detail in Chapter 5. In Chapter 6, a sensitivity analysis is carried out in a linear programming problem by considering all possible changes in the parameters and structure of the LPP. Chapters 7 and 8 are on transportation, transshipment, and assignment problems. In these chapters the definitions and procedures for solving these types of problems are discussed at length.
Chapter 9 is on game theory, where the solution of game problems using different techniques is explained and the use of Gambit Software for finding solutions is discussed as well. Suggestions for further enhancement are welcome.
Dr. A. J. Meitei
Dr. Veena Jain
An arrangement of elements either in a row or in a column is called a vector and is usually denoted by lowercase bold letters like a, b, c, and so on.
Dot or Inner Product of Vectors: The inner or dot product of two vectors is defined only if the vectors have the same number of components. Let a and b be any two real vectors from an n-dimensional real space. Then the inner or dot product of a and b is given by a · b = a1b1 + a2b2 + … + anbn, that is, the sum of the products of the corresponding components.
Fig. 1.1
Fig. 1.2
In Excel the SUMPRODUCT function can be used to find the dot product of any two vectors of the same dimension.
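For readers who want to reproduce the SUMPRODUCT result outside a spreadsheet, here is a minimal Python sketch of the same computation (the sample vectors are illustrative, not taken from the book's figures):

```python
# Dot product of two vectors, mirroring Excel's SUMPRODUCT.
def dot(a, b):
    if len(a) != len(b):
        raise ValueError("vectors must have the same number of components")
    return sum(x * y for x, y in zip(a, b))

a1 = [1, 2, 3]
a2 = [4, 5, 6]
print(dot(a1, a2))  # 1*4 + 2*5 + 3*6 = 32
```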
Zero Vector: A vector whose elements are all zero is called a zero vector, and it is usually denoted by 0. This vector is also referred to as the origin. In the XY-plane, (0, 0) is the zero vector with two components.
Euclidean Space: This space, sometimes called Cartesian space or simply n space, is the space of all n-tuples of real numbers (x1, x2, ..., xn) and is generally denoted by Rn or En.
Matrix: A rectangular arrangement of numbers into rows and columns is called a matrix and is always enclosed in either brackets [] or parentheses (). If the matrix has m rows and n columns, it is called an m × n matrix (read as “m” by “n”). m × n is called the dimension of the matrix. It is usually denoted by capital boldface letters, such as A, B, C, and so forth. A matrix has no numerical value, and the numbers in the matrix are called elements of the matrix. A double subscript is used to denote the location of the element in the matrix, where the first subscript indicates the row number and the second subscript indicates the column number. For example:
is a 2 × 2 matrix or 2 by 2 matrix, and aij is the element in the ith row and jth column of the given matrix where i =1, 2 and j =1, 2.
Square Matrix: A matrix whose number of rows is equal to the number of columns is called a square matrix. For example, is a 2 × 2 square matrix.
Zero Matrix: If each element in a matrix is zero, then the matrix is said to be a zero or null matrix; is a 2 × 3 zero matrix. A null matrix need not be a square matrix.
Determinant: The determinant is a number associated with every square matrix. The determinant of the nth-order matrix A, denoted by |A|, is computed as follows:
where the sum is taken over all permutations of the second subscript. A plus sign is assigned to even permutations and a minus sign to odd permutations.
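The permutation expansion just described can be implemented directly. The sketch below is illustrative, not the book's code (for practical work a library routine such as numpy.linalg.det is preferable); it assigns each permutation its sign by counting inversions:

```python
# Determinant by the permutation expansion: each term is
# sign(p) * a[0][p(0)] * ... * a[n-1][p(n-1)], summed over all permutations p.
from itertools import permutations
from math import prod

def sign(p):
    # Parity via inversion count: even permutations get +1, odd get -1.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(a):
    n = len(a)
    return sum(sign(p) * prod(a[i][p[i]] for i in range(n)) for p in permutations(range(n)))

print(det([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2
```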
Consider a third-order matrix
In Excel, we can use the MDETERM function to find the determinant of any square matrix as follows:
Fig. 1.3
Singular Matrix: A square matrix B is said to be a singular matrix if its determinant is zero; otherwise, it is non-singular. For example:
Triangular Matrix: Any square matrix is said to be an upper triangular matrix if all the entries below the main diagonal are zeros. Similarly, any square matrix is called a lower triangular matrix if entries above the main diagonal of the matrix are zeros.
For example, is an upper triangular matrix, and is a lower triangular matrix.
Scalar Multiplication: Multiplying a matrix A by a scalar k multiplies every element of A by k, producing a matrix B = kA of the same dimension. To perform this calculation in Excel, select the output range for B, then enter the formula multiplying the matrix A by k as follows, and finally press Ctrl, Shift, and Enter simultaneously.
Fig. 1.4
Fig. 1.5
The previous figure is a screenshot of the same calculation in Excel. Select the dimension of C, then press Ctrl, Shift, and Enter simultaneously, and we will have the required value of C.
Transpose of a Matrix: It is obtained by interchanging the rows and columns of the matrix; for example, the transpose of an m × n matrix C is a new matrix of dimension n × m whose rows are the columns of C and vice versa, generally denoted by C′ or CT.
Let
Then
In Excel, we can use the TRANSPOSE function to find the transpose of a given matrix.
Fig. 1.6
Matrix Multiplication: The multiplication of any two matrices is defined only if the number of columns of the first matrix is equal to the number of rows of the second matrix. Let A be an m × n and B be an n × p matrix. Then their product is another matrix C (= AB) of order m × p with:
Example: and
Then
In Excel the MMULT function can be used for matrix multiplication.
Step 1. Select the dimension of the matrix C in the output space.
Step 2. Type the command MMULT.
Step 3. Select the two matrices as shown in the following figure.
Step 4. Finally, press Ctrl, Shift, and Enter simultaneously.
Fig. 1.7
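The computation that MMULT performs in the steps above can be sketched in Python as a loop over the entry formula c[i][j] = Σk a[i][k]·b[k][j] (the sample matrices are illustrative):

```python
# Matrix multiplication: c[i][j] = sum over k of a[i][k] * b[k][j].
def mmult(a, b):
    m, n, p = len(a), len(b), len(b[0])
    assert all(len(row) == n for row in a), "columns of A must equal rows of B"
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mmult(A, B))  # [[19, 22], [43, 50]]
```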
Remark: For doing any matrix operation in Excel, one should always press Ctrl, Shift, and Enter simultaneously after the necessary inputs.
Vector Space: A vector space is a space consisting of a collection of vectors which are closed under the operation of addition and multiplication by a scalar; that is, if vectors a, b are in a collection, then a + b and ka will also be in the collection, where k is a scalar quantity.
Rank: The rank of any matrix A, written as r(A), is the maximum number of linearly independent columns in A, or it is the order of the largest non-vanishing minor (determinant of a square sub-matrix) in A. The rank of a matrix is always unique, since the row rank is always equal to the column rank; that is, the maximum number of linearly independent columns in a matrix is always equal to the maximum number of linearly independent rows.
Note: The rank of a matrix A will be equal to the dimension of the largest square sub-matrix of A which is non-singular.
Example 1.1. Show that the rank of is zero.
Solution: We cannot identify any sub-matrix of the given matrix that is non-singular; hence the rank of the matrix is zero.
Example 1.2. Show that the rank of is 3.
Solution: The largest sub-matrix of the given matrix whose determinant is different from zero is the matrix itself. Hence the rank of the given matrix is 3.
Fig. 1.8
Example 1.3. Show that the rank of is 2.
Solution: The largest sub-matrix of the given matrix whose determinant is different from zero is of dimension 2 × 2. Hence the rank of the given matrix is 2.
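Rank computations like those in Examples 1.1 to 1.3 can also be checked programmatically. A minimal sketch (illustrative, not the book's code) using Gaussian elimination with exact rational arithmetic, which matches the "order of the largest non-vanishing minor" characterization:

```python
# Rank by Gaussian elimination with exact Fraction arithmetic.
from fractions import Fraction

def rank(a):
    m = [[Fraction(x) for x in row] for row in a]
    rows, cols, r = len(m), len(m[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # bring the pivot row up
        for i in range(rows):
            if i != r and m[i][c] != 0:   # eliminate the column elsewhere
                f = m[i][c] / m[r][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    return r

print(rank([[1, 2], [2, 4]]))  # 1 (the second row is twice the first)
```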
Inverse of a Matrix: For any non-singular square matrix A, the inverse A−1 is the unique matrix satisfying AA−1 = A−1A = I, where I is the identity matrix. In Excel we can use the MINVERSE function to find the inverse of any square matrix.
Fig. 1.9
Example 1.4. Use the MINVERSE function to find the inverse of the following matrix:
Solution: The following is the screenshot of the Excel calculation of the inverse of matrix A.
Linear combination of vectors: Let a1, a2, a3, …, ak be a set of k vectors from Rn and λ1, λ2, λ3, …, λk be any k scalars, and then the vector
is known as a linear combination of vectors a1,a2,a3, …,ak.
Linearly dependent vectors: A set of vectors a1, a2, a3, …, ak from Rn is said to be linearly dependent if there exist scalars λ1, λ2, λ3, …, λk, not all zero, such that λ1a1 + λ2a2 + … + λkak = 0. If this equation holds only when λ1 = λ2 = … = λk = 0, the set is said to be linearly independent. For the unit vectors e1, e2, …, en, the combination λ1e1 + λ2e2 + … + λnen equals (λ1, λ2, …, λn), which is the zero vector only when every λi is zero. Hence, the set of unit vectors is always linearly independent.
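As a quick computational check, n vectors in Rn are linearly dependent exactly when the determinant of the matrix formed from them is zero. A small sketch (the vectors chosen here are illustrative assumptions, not the book's examples):

```python
# Dependence test for n vectors in R^n via the determinant criterion.
from itertools import permutations
from math import prod

def det(a):
    n = len(a)
    def sign(p):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])
        return -1 if inv % 2 else 1
    return sum(sign(p) * prod(a[i][p[i]] for i in range(n)) for p in permutations(range(n)))

unit = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # unit vectors: independent
dep  = [[1, 2, 3], [2, 4, 6], [0, 1, 1]]   # second vector = 2 * first: dependent
print(det(unit), det(dep))  # 1 0
```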
Notes:
(i) A null vector is not linearly independent of any other vector or set of vectors.
(ii) If a set of vectors is linearly independent, then any subset of these vectors is also linearly independent.
(iii) If any set of vectors is linearly dependent, then any larger set of vectors containing these vectors is also linearly dependent.
(iv) Any vector x is said to be linearly dependent on a set of vectors x1, x2, ..., xk if x can be written as a linear combination of the set of vectors.
(v) If x1, x2, ..., xk is a given set of vectors from Rn and there exists at least one subset of r < k vectors which are linearly independent, but no subset containing (r + 1) vectors is linearly independent, then r is the maximum number of linearly independent vectors in the given set. Given this subset of r linearly independent vectors in the set, any other vector in the set can be written as a linear combination of these r vectors.
(vi) A set of vectors b1, b2, ..., bk from Rn, where k ≥ 2, is linearly dependent if and only if one of these vectors can be written as a linear combination of the others.
Spanning Set: A set of vectors a1, a2, …, ak (k ≥ 2) from Rn is said to span or generate Rn if every vector in Rn can be written as a linear combination of the given set of vectors. The vectors in a spanning set need not be linearly independent.
Basis: A basis for Rn is a subset of linearly independent vectors from Rn which spans the entire space.
Notes:
(i) There exist an infinite number of bases in Rn.
(ii) A set of unit vectors will always form a basis, since it is linearly independent and any vector in the space can be written as a linear combination of unit vectors.
(iii) The basis formed by the set of unit vectors is called a standard basis.
Theorem 1.1. The set of unit vectors forms a basis.
Proof: Let the λi's be n scalars, and then we have
Consider a system of m simultaneous linear equations in n unknowns of the form
For understanding linear programming we need to understand the properties of solutions to the linear system of equations. Keeping this in mind, we will devote some effort to studying such systems. The matrix representation of the set of equations (1.1) can be written as,
Basic Solution: Given a system of m simultaneous linear equations in n unknowns (m < n),
Let Bm × m be any m × m non-singular sub-matrix of Am × n. Then, the solution obtained by setting the (n − m) variables not associated with the columns of Bm × m equal to zero is called a basic solution to the given system of equations.
Let the set of m variables associated with the columns of Bm × m be denoted by xB and the remaining (n − m) variables by xNB (= 0), and then
is the basic solution for the given system of equations.
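The definition above can be turned into a short enumeration: choose every set of m columns, skip singular choices, and solve for the basic variables. The sketch below uses a hypothetical 2 × 3 system (its coefficients are illustrative and not taken from the book's Example 1.7):

```python
# Enumerate all basic solutions of m = 2 equations in n = 3 unknowns:
# set the n - m non-basic variables to zero and solve for the rest
# whenever the chosen pair of columns is non-singular.
from fractions import Fraction
from itertools import combinations

# Hypothetical system:  x1 + x2 + x3 = 6,  2x1 + x2 = 5
A = [[1, 1, 1], [2, 1, 0]]
b = [6, 5]

def solve2(c0, c1, rhs):
    # Cramer's rule for the 2x2 system [c0 c1][x y]^T = rhs; None if singular.
    d = c0[0] * c1[1] - c1[0] * c0[1]
    if d == 0:
        return None
    x = Fraction(rhs[0] * c1[1] - c1[0] * rhs[1], d)
    y = Fraction(c0[0] * rhs[1] - rhs[0] * c0[1], d)
    return x, y

for i, j in combinations(range(3), 2):
    sol = solve2([A[0][i], A[1][i]], [A[0][j], A[1][j]], b)
    if sol is not None:
        xs = [Fraction(0)] * 3
        xs[i], xs[j] = sol
        print(f"basic variables x{i + 1}, x{j + 1}:", xs)
```

Any basic solution whose components are all non-negative is a basic feasible solution; in this illustrative system, two of the three basic solutions are feasible.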
Notes:
(i) If xB ≥ 0, then the basic solution is called a basic feasible solution. If one or more variables in the basic feasible solution have a zero value, then it is called a degenerate basic feasible solution; otherwise, it is called a non-degenerate basic feasible solution.
(ii) The maximum number of basic solutions of m linear equations in n unknowns (where m < n) is the number of ways of choosing m columns from n, that is, n!/[m!(n − m)!]. To get all these basic solutions, every set of m columns must be linearly independent.
Example 1.7. Find all the possible basic solutions of the following simultaneous linear equations:
Solution: The matrix representation of the given system of equations is
Here the rank of the coefficient matrix A is 2. The following are our 2 × 2 non-singular sub-matrices from the coefficient matrix.
and .
The sub-matrix will not be considered, as it is a singular matrix.
When , we have
Here we shall discuss a very efficient method (the Gauss-Jordan method) for solving a system of linear equations. Gauss-Jordan elimination involves creating an augmented matrix from both sides of our equations and changing this matrix into reduced row echelon form (a form in which the first non-zero number in each row is 1, each leading 1 lies to the right of the leading 1 in the row above, and every other entry in a column containing a leading 1 is 0), then finishing up the problem to find our solution. This method can lead us to one of the following three cases:
(i) The system has no solution.
(ii) The system has a unique solution.
(iii) The system has an infinite number of solutions.
The elementary row operation that we apply in this method is important in the sense that a similar type of elimination method will be used in the simplex method for solving a given linear programming problem (LPP).
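A compact Gauss-Jordan routine that reduces the augmented matrix to reduced row echelon form and reports which of the three cases occurs might look as follows (a sketch in exact rational arithmetic, not the book's code):

```python
# Gauss-Jordan elimination: returns the unique solution as a list,
# or the string naming the case that occurred.
from fractions import Fraction

def gauss_jordan(A, b):
    m = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols - 1):
        pivot = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # make the leading entry 1
        for i in range(rows):
            if i != r and m[i][c] != 0:             # clear the rest of the column
                f = m[i][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[r])]
        r += 1
    if any(all(x == 0 for x in row[:-1]) and row[-1] != 0 for row in m):
        return "no solution"                        # row like [0 ... 0 | c], c != 0
    if r < cols - 1:
        return "infinitely many solutions"          # fewer pivots than unknowns
    return [row[-1] for row in m[:r]]
```

For example, gauss_jordan([[2, 1], [1, 3]], [5, 10]) returns the unique solution x1 = 1, x2 = 3.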
Example 1.8. (Problem with no solution).
The augmented matrix representation of the previous system is:
(Divide R1 by 2; then multiply the new R1 by 10 and subtract it from R2.)
It can be easily seen that matrix A cannot be converted to an identity matrix. This implies:
Whatever the values of x1 and x2 are, the second equation can never be satisfied. Hence, the given system of equations has no solution.
Example 1.9. (Problem with a unique solution). Use the Gauss-Jordan method to solve the following system of simultaneous linear equations:
The augmented matrix representation of the previous system is:
Example 1.10. (Problem with infinitely many solutions). Use the Gauss-Jordan method to solve the following system of simultaneous linear equations:
The augmented matrix representation of the previous system is:
The linear system corresponding to A | b is
Remark: In the Gauss-Jordan method the following points can be noted:
Example 1.11. Use Gauss-Jordan elementary row operations to find the inverse of the matrix given in Example 1.4.
Solution: To find the inverse of A using the Gauss-Jordan method, form the augmented matrix (A | I). Now we reduce the matrix A to an identity matrix by elementary row operations:
Divide the first row by 2 and subtract the second and third rows from the new row.
Divide the second row by 6 and subtract the new row from the third row, and also multiply the new row by 2 and subtract from the first row.
Divide the third row by – 2, multiply the new row by 3, and subtract it from the first row.
So the inverse of the matrix A is
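The row operations of Example 1.11 can be automated: augment A with the identity matrix, reduce A to I, and read A−1 off the right half. A minimal sketch with illustrative matrices and exact Fraction arithmetic (not the book's code):

```python
# Matrix inversion by Gauss-Jordan on the augmented matrix (A | I).
from fractions import Fraction

def inverse(A):
    n = len(A)
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        pivot = next((i for i in range(c, n) if m[i][c] != 0), None)
        if pivot is None:
            raise ValueError("matrix is singular")
        m[c], m[pivot] = m[pivot], m[c]
        m[c] = [x / m[c][c] for x in m[c]]          # leading 1 on the diagonal
        for i in range(n):
            if i != c and m[i][c] != 0:             # clear the rest of the column
                f = m[i][c]
                m[i] = [x - f * y for x, y in zip(m[i], m[c])]
    return [row[n:] for row in m]                   # right half is A^{-1}

print(inverse([[2, 1], [1, 1]]))  # [[1, -1], [-1, 2]]
```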
Line Segment: The line segment joining any two points x and y from Rn is a collection of points u, where
Here the points x and y are called the endpoints of the line segment. It is usually denoted by [x : y].
The open line segment joining x and y is a collection of points, u, where
It is usually denoted by (x : y).
Convex Sets: A set S is said to be a convex set, if for any two points belonging to the set, the line segment joining these two points also belongs to the set itself.
For example, for any two points x1 and x2 in S, the line segment joining these two points satisfies λx1 + (1 − λ)x2 ϵ S for each λ ϵ [0, 1].
The line segment λx1 + (1 – λ)x2 for λϵ [0, 1] is also called a convex combination of x1 and x2.
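The convex-combination condition is easy to check numerically. The sketch below uses an assumed half-plane S = {(x1, x2) : x1 + x2 ≤ 4} (an illustrative set, not one of the book's examples) and verifies that sampled convex combinations of two points of S stay in S:

```python
# Numeric spot-check of convexity for the half-plane x1 + x2 <= 4.
def in_S(p):
    return p[0] + p[1] <= 4

def convex_combo(x, y, lam):
    # lam*x + (1 - lam)*y, componentwise, for 0 <= lam <= 1.
    return tuple(lam * a + (1 - lam) * b for a, b in zip(x, y))

x, y = (0, 1), (3, 1)                               # both points lie in S
points = [convex_combo(x, y, k / 10) for k in range(11)]
print(all(in_S(p) for p in points))                 # True
```

Sampling, of course, only illustrates the definition; the proofs below establish convexity for every λ in [0, 1].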
Fig. 1.10 Convex sets
Fig. 1.11 Non-Convex sets
Multiplying (1.2) and (1.3),
Now,
Hence the set S is a convex set.
Also since we have
Hence, the set S is a convex set.
Example 1.14. Show that a line segment [x : y] joining any two points x, y ϵ Rn is a convex set.
Proof. The line segment joining the two points x, y ϵ Rn is given by,
Let u, v ϵ [x : y], then
Also let w denote a point on the line segment joining the two points u and v, then
From (1.13), (1.14), and (1.15), we have
Putting,
since 0 ≤ β, λ′ and λ″ ≤ 1, we have 0 ≤ α ≤ 1.
Therefore, (1.16) can be rewritten as:
⇒ w ϵ [x : y], hence the line segment is a convex set.
Hyperplane: A hyperplane in Rn is the set of all points x = (x1, x2, ..., xn) satisfying a single linear equation c1x1 + c2x2 + ... + cnxn = z, where the coefficients ci are not all zero. A straight line in a 2-dimensional space and a plane in a 3-dimensional space are examples of hyperplanes.
Let w denote a point on the line segment joining the two points u and v, then
Theorem 1.2. A half-space is a convex set.
Let u, v ϵ S, such that
Let w be a point on the line segment joining the two points u and v, then
Theorem 1.3. The intersection of any two convex sets is also a convex set.
Proof. Let T = S1 ∩ S2. If x, y ϵ T, then x, y ϵ S1 and x, y ϵ S2. Let x′ represent a point on the line segment joining x and y; then
Since S1 and S2 are convex sets, x′ ϵ S1 and S2, which implies x′ ϵ T also.
Remark:The intersection of any finite number of convex sets is again a convex set.
Polyhedron. It is the intersection of a finite number of half-spaces.
Theorem 1.4. The sum and difference of any two convex sets is again a convex set.
Proof. Let A and B be any two convex sets in Rn, and then we have to show that A ± B is also a convex set.
Let u and v be any two points of the sets A ± B so that
Since A is a convex set and x1, x2 ϵ A, we have
Similarly, for y1, y2 ϵ B,
This implies,
Thus, for any two points u and v from the set A ± B, the line segment joining these two points is also in A ± B. Hence, the sum and difference of any two convex sets is again a convex set.
Convex Hull. A convex hull of a set C of n points from Rn, denoted by H(C), is the smallest perimeter fence in Rn enclosing these n points.
Fig. 1.12
Hence, a convex hull can also be defined as:
the smallest convex set containing all the points
the smallest area convex “polygon” enclosing the points
a convex “polygon” enclosing the points, whose vertices are points in the set
Convex polyhedron. The set of all the convex combination of a finite number of vectors in Rn is called a convex polyhedron or a polytope spanned by these vectors. In other words, a polytope is a bounded polyhedron and always forms a convex set.
Simplex. A simplex in k dimensions is a polytope having exactly (k + 1) vertices. A simplex in a 1-dimensional space is a line segment, in two dimensions it is a triangle, and so on.
Fig. 1.13 Polytope (a bounded Polyhedron)
Extreme Point: Let S be a convex set. A point r ϵ S is called an extreme point if, for any two points u, v ϵ S (where u ≠ v), r cannot be written as a convex combination of the points u and v. An extreme point will always be a boundary point, but all boundary points will not be extreme points.
Fig. 1.14
For example, a circular disk is a convex set, and all the points on its circumference are boundary as well as extreme points of the set, whereas in the case of a triangle, only the three corner points are extreme points of the set.
1. Define (i) basic solution, (ii) convex set, (iii) convex polyhedron, and (iv) extreme points of a convex set.
2. Define a line segment and show that a line segment is a convex set.
3. Let
Find using MS-Excel: (i) A + B, (ii) the transpose of A, (iii) the dot product of A and B, (iv) the cross product of A and B, (v) the cross product of C and B, (vi) |A|, and (vii) A−1 and B−1.
4. Show that the set of unit vectors will always form a basis.
5. Show that any two bases in Rn will contain the same number of vectors.
6. Check whether the following vectors are linearly independent:
7. Check if the following vectors form a basis:
8. Check with the help of the rank method whether the following system of equations is consistent, and solve it by using the Gauss-Jordan method:
9. Find the inverses of the matrices A, B, and C given in Question 3 by the Gauss-Jordan method.
10. Determine how many basic solutions exist for the following system of equations and find all basic feasible solutions:
11. Check if the following sets are convex:
12. Show that the intersection of a finite number of convex sets is also a convex set.
13. Show that a convex polyhedron always forms a convex set.