
Linear Algebra E-Book

L. Shen

Description

This book introduces the fundamental concepts of linear algebra and applies the theorems in computation-oriented applications. It is suitable for a one-semester course and combines definitions and proofs with a focus on computational applications. Examples illustrate the use of software packages such as Mathematica, Maple, and Sage.
The journey begins with vector spaces and progresses through linear transformations and operators. It then covers orthogonal bases and matrix decomposition. The material includes a brief introduction to aspects of abstract algebra related to linear algebra, such as groups, rings, modules, fields, and polynomials over fields.
Understanding these concepts is crucial for solving complex problems in various fields. This book transitions readers from basic definitions to advanced computational applications, blending theoretical knowledge with practical skills. It is an essential resource for mastering linear algebra and its applications.

The e-book can be read in Legimi apps or in any app that supports the following formats:

EPUB
MOBI

Page count: 240

Publication year: 2024




LINEAR ALGEBRA

LICENSE, DISCLAIMER OF LIABILITY, AND LIMITED WARRANTY

By purchasing or using this book (the “Work”), you agree that this license grants permission to use the contents contained herein, but does not give you the right of ownership to any of the textual content in the book or ownership to any of the information or products contained in it. This license does not permit uploading of the Work onto the Internet or on a network (of any kind) without the written consent of the Publisher. Duplication or dissemination of any text, code, simulations, images, etc. contained herein is limited to and subject to licensing terms for the respective products, and permission must be obtained from the Publisher or the owner of the content, etc., in order to reproduce or network any portion of the textual material (in any media) that is contained in the Work.

MERCURY LEARNING AND INFORMATION (“MLI” or “the Publisher”) and anyone involved in the creation, writing, or production of the companion disc, accompanying algorithms, code, or computer programs (“the software”), and any accompanying Web site or software of the Work, cannot and do not warrant the performance or results that might be obtained by using the contents of the Work. The author, developers, and the Publisher have used their best efforts to ensure the accuracy and functionality of the textual material and/or programs contained in this package; we, however, make no warranty of any kind, express or implied, regarding the performance of these contents or programs. The Work is sold “as is” without warranty (except for defective materials used in manufacturing the book or due to faulty workmanship).

The author, developers, and the publisher of any accompanying content, and anyone involved in the composition, production, and manufacturing of this work will not be liable for damages of any kind arising out of the use of (or the inability to use) the algorithms, source code, computer programs, or textual material contained in this publication. This includes, but is not limited to, loss of revenue or profit, or other incidental, physical, or consequential damages arising out of the use of this Work.

The sole remedy in the event of a claim of any kind is expressly limited to replacement of the book, and only at the discretion of the Publisher. The use of “implied warranty” and certain “exclusions” varies from state to state, and might not apply to the purchaser of this product.

LINEAR ALGEBRA

Li-Yong Shen, PhD

(University of the Chinese Academy of Sciences)

Haohao Wang, PhD

(Southeast Missouri State University)

Jerzy Wojdylo, PhD

(Southeast Missouri State University)

Dulles, Virginia · Boston, Massachusetts · New Delhi

Copyright ©2019 by MERCURY LEARNING AND INFORMATION LLC. All rights reserved.

Original Title and Copyright: Linear Algebra. Copyright ©2018 by Overseas Press India Private Limited. All rights reserved.

This publication, portions of it, or any accompanying software may not be reproduced in any way, stored in a retrieval system of any type, or transmitted by any means, media, electronic display or mechanical display, including, but not limited to, photocopy, recording, Internet postings, or scanning, without prior permission in writing from the publisher.

Publisher: David Pallai

MERCURY LEARNING AND INFORMATION

22841 Quicksilver Drive

Dulles, VA 20166

info@merclearning.com

www.merclearning.com

(800) 232-0223

Li-Yong Shen, H. Wang, J. Wojdylo. Linear Algebra.

ISBN: 978-1-683923-76-3

The publisher recognizes and respects all marks used by companies, manufacturers, and developers as a means to distinguish their products. All brand names and product names mentioned in this book are trademarks or service marks of their respective companies. Any omission or misuse (of any kind) of service marks or trademarks, etc. is not an attempt to infringe on the property of others.

Library of Congress Control Number: 2018962730

18 19 20 3 2 1    This book is printed on acid-free paper in the USA.

Our titles are available for adoption, license, or bulk purchase by institutions, corporations, etc.

For additional information, please contact the Customer Service Dept. at 800-232-0223 (toll free).

All of our titles are available in digital format at authorcloudware.com and other digital vendors. The sole obligation of MERCURY LEARNING AND INFORMATION to the purchaser is to replace the book, based on defective materials or faulty workmanship, but not based on the operation or functionality of the product.

CONTENTS

Preface

List of Figures

Chapter 1 Vector Spaces

1.1 Vector Spaces

1.2 Linear Span and Linear Independence

1.3 Bases for Vector Spaces

1.4 Linear System of Equations

1.5 A First Look at Determinants

1.6 Using Computer Algebra Systems to Perform Computations

1.7 Applications of Vector Spaces

1.7.1 Code

1.7.2 Hamming Code

1.7.3 Linear Code

1.7.4 Golay Code

1.8 Exercises

Chapter 2 Linear Transformations

2.1 Linear Transformations

2.2 Rank and Nullity of a Linear Transformation

2.3 Representation of Linear Transformations by Matrices

2.4 The Algebra of Linear Transformations

2.5 Applications of Linear Transformations

2.5.1 Affine Transformations

2.5.2 Projective Transformations

2.6 Exercises

Chapter 3 Linear Operators

3.1 Characteristic and Minimal Polynomials

3.2 Eigenspaces and Diagonalizability

3.3 Jordan Canonical Form

3.4 Trace

3.5 Determinants

3.6 Generalized Companion Matrix

3.7 Solution of Linear System of Equations

3.8 Some Applications of Matrices

3.8.1 Linear Programming

3.8.2 Geometric Transformation

3.8.3 Matrices in Graph Theory

3.8.4 Positional Relationship of Two Ellipsoids

3.9 Exercises

Chapter 4 Orthogonal Bases

4.1 Inner Product Spaces

4.2 Gram-Schmidt Process to Produce Orthonormal Basis

4.3 Orthogonal Complements and Projections

4.4 Orthogonal Projections and Reflections

4.4.1 Orthogonal Projection Matrix

4.4.2 Reflection Matrix

4.5 Properties of Symmetric Matrices

4.6 QR Factorization

4.7 Singular Value Decomposition

4.8 Applications of SVD and QR Decompositions

4.8.1 Image Processing

4.8.2 QR Decomposition and GCD

4.9 Exercises

Chapter 5 Matrix Decomposition

5.1 Decomposition over ℝ or ℂ

5.1.1 Decompositions to Solve Linear Systems

5.1.2 Decompositions Based on Eigenvalues

5.1.3 Examples

5.2 Iterative Methods to Solve Linear Systems Numerically

5.2.1 Jacobi Iterative Method

5.2.2 Gauss-Seidel Iterative Method

5.3 Matrix over Principal Ideal Domain

5.3.1 Matrix Decomposition over PID

5.4 Exercises

Bibliography

Index

PREFACE

Linear algebra is one of the most widely used mathematical theories and has applications in virtually every area of mathematics, including multivariate calculus, differential equations, and probability theory. The purpose of this book is to bridge the gap between the abstract theoretical aspects and the computational applications of linear algebra. The aim of this book is two-fold: to introduce the fundamental concepts of linear algebra and to apply the theorems in computation-oriented applications. There are many good introductory texts on linear algebra, and the intention of this book is to be a supplement to those texts, or to serve as a text for senior undergraduate students or first-year graduate students whose interests lie in computational mathematics, science, engineering, and computer science. The presentation of the material combines definitions and proofs with an emphasis on computational applications. We provide examples that illustrate the use of software packages such as Mathematica, Maple, and Sage.

This book has evolved from our experience over several years of teaching linear algebra to mixed audiences of upper division mathematics majors, beginning graduate students, and students from the fields of science and engineering that rely heavily on mathematical methods. Our goal in writing this book has been to develop a text that addresses the exceptional diversity of the audience and introduces some of the most essential topics of linear algebra to these groups. To accomplish our goal, we have selected material that covers both the theory and applications, while emphasizing topics useful in other disciplines.

Throughout the text, we present a brief introduction to some aspects of abstract algebra that relate directly to linear algebra, such as groups, rings, modules, fields, and polynomials over fields. In particular, the last section of this book is dedicated to matrix decomposition over principal ideal domains, because this structure theorem is a generalization of the fundamental theorem of finitely generated abelian groups, and this result provides a simple framework for understanding various canonical form results for square matrices over fields.

We use the material from the book to teach our own elective linear algebra course, and some of the solutions to the exercises are provided by our students. It is our hope that this book will help a general reader appreciate the abstract mathematics behind the real applications.

By design, each chapter consists of two parts: the theoretical background and the applications, which make the book suitable for a one-semester course in linear algebra that can be used in a variety of contexts. For an audience composed primarily of mathematics undergraduate majors, the material on the theories of abstract vector spaces, linear transformations, linear operators, orthogonal bases, and decomposition over rings can be covered in depth. For an applied mathematics course with students from the fields of science and engineering that rely heavily on mathematical methods, the material on applications of these areas, such as linear codes, affine or projective transformations, geometry of transformations, matrices in graph theory, image processing, and QR decomposition, can be treated with more emphasis. In the applications, we allow ourselves to present a number of results from a wide range of sources, sometimes without detailed proofs. The applications portion of each chapter is suitable for a reader who knows some linear algebra and a particular related area such as coding theory, geometric modeling, or graph theory. Some of the applications can serve as a guide to interesting research topics.

The prerequisite for this book is a standard first year undergraduate course in linear algebra. In Chapter 1 and Chapter 2 we start with a quick review of the fundamental concepts of vector spaces and linear transformations. To better understand the behavior of a linear transformation, we discuss eigenvectors in Chapter 3, where the eigenvectors act as the “axes” along which linear transformations behave simply as stretching, compressing, or flipping, which hopefully makes linear transformations easier to understand. Because one can perform some operations on vectors by performing the same operations on the basis, we study orthogonal bases in Chapter 4. In particular, we study linear transformations relative to orthonormal bases that faithfully preserve the linear properties and the metric properties. Finally, in Chapter 5, we focus on matrix decomposition over the real or complex numbers and over principal ideal domains.

This book should be thought of as an introduction to more advanced texts and research topics. The novelty of this book, we hope, is that the material presented here is a unique combination of the essential theory of linear algebra and computational methods in a variety of applications.

LIST OF FIGURES

3.1: A simple graph

4.1: Image compression

4.2: Image compression (cont.)

CHAPTER 1

VECTOR SPACES

1.1 Vector Spaces

Linear algebra is an area of mathematics which deals with vector spaces and linear mappings between vector spaces. The fundamental objects studied in linear algebra include lines, planes, and subspaces. For example, the condition under which a set of n hyperplanes intersects in a single point can be investigated by finding the condition for a system of linear equations to have a unique solution. Such a system of linear equations can be represented by vectors and matrices. In this section, we will focus on vector spaces.

We will first recall the notion of a set. A set is a collection of distinct objects; the objects are called elements of the set. For instance, all integers form the set ℤ = {…, −2, −1, 0, 1, 2, …}.

A set X is called an empty set, denoted by ∅, if X has no elements. We write x ∈ X if an element x is included in a set X, and x ∉ X otherwise. For two sets X and Y, we call X a subset of Y (or Y a superset of X), denoted by X ⊆ Y (or Y ⊇ X), if all the elements of X are included in Y. Note that the empty set is a subset of any set. We say X is a strict (alternatively, proper in some references) subset of Y (or Y is a strict superset of X), denoted by X ⊂ Y (or Y ⊃ X), if X ⊆ Y and there is an element y such that y ∈ Y and y ∉ X.

We give three basic operations for two sets:

1. Union of two sets X and Y is the set of elements which are in X, in Y, or in both X and Y, denoted by X ∪ Y = {a | a ∈ X or a ∈ Y};

2. Intersection of two sets X and Y is the set of elements which are in both X and Y, denoted by X ∩ Y = {a | a ∈ X and a ∈ Y};

3. Difference of two sets X and Y is the set of elements in X but not in Y, denoted by X \ Y = {a | a ∈ X and a ∉ Y}.
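These operations can be tried directly on a computer. The short sketch below uses plain Python (our illustrative choice here; the book's computations can equally be done in Mathematica, Maple, or Sage) and its built-in set type to mirror the three definitions:

X = {-2, -1, 0, 1, 2}
Y = {0, 1, 2, 3}

print(X | Y)   # union X ∪ Y: {-2, -1, 0, 1, 2, 3}
print(X & Y)   # intersection X ∩ Y: {0, 1, 2}
print(X - Y)   # difference X \ Y: {-2, -1}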

Definition 1.1.1

A ring R consists of a set R with two operations (+, *) : R × R → R with the properties:

1. Addition is commutative and associative.

2. There is a unique element 0 ∈ R such that x + 0 = x for all x ∈ R.

3. To each x ∈ R, there corresponds a unique element −x ∈ R such that x + (−x) = 0.

4. Multiplication is associative.

5. Multiplication is distributive over addition on the left and on the right.

If the multiplication is commutative, then R is called a commutative ring. If there is a unique non-zero element 1 ∈ R such that x * 1 = 1 * x = x for all x ∈ R, then the ring is called a ring with identity (or with unity). If, in addition, for each non-zero element x ∈ R there corresponds a unique element y ∈ R such that x * y = y * x = 1, then we say R is a division ring.

For a ring R, the first three conditions say that (R, +) is a commutative or abelian additive group. When R is a commutative division ring, the non-zero elements form a commutative multiplicative group (R \ {0}, *).

Basically, a field is an algebraic structure in which every linear equation in a single variable can be solved.

Definition 1.1.2

Let 𝔽 be a set with two operations (+, *) : 𝔽 × 𝔽 → 𝔽 with the following properties; then 𝔽 is a field.

1. Addition is commutative and associative.

2. There is a unique element 0 ∈ 𝔽 such that x + 0 = x for all x ∈ 𝔽.

3. To each x ∈ 𝔽, there corresponds a unique element −x ∈ 𝔽 such that x + (−x) = 0.

4. Multiplication is commutative and associative.

5. There is a unique non-zero element 1 ∈ 𝔽 such that x * 1 = x for all x ∈ 𝔽.

6. To each non-zero x ∈ 𝔽, there corresponds a unique element x⁻¹ ∈ 𝔽 such that x * x⁻¹ = 1.

7. Multiplication is distributive over addition.

We recall that the set of rational numbers ℚ, the set of real numbers ℝ, and the set of complex numbers ℂ are all fields with the usual addition and multiplication, and ℚ ⊂ ℝ ⊂ ℂ.

The difference between rings and fields is that fields are commutative division rings: one can divide by any non-zero element in a field, but in general one cannot always divide in a ring. For instance, the set of integers ℤ is a commutative ring; if we include all the multiplicative inverses of non-zero integers, then we get a field, the field of rational numbers ℚ. A polynomial ring is a commutative ring formed from the set of polynomials in one or more indeterminates, or variables, with coefficients in another ring. For example, ℝ[x] is a polynomial ring in one variable with coefficients in ℝ. If we include all the multiplicative inverses of non-zero polynomials, then we get a field, the field of rational functions ℝ(x).
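The ring/field distinction is easy to see computationally. The following sketch, a minimal illustration in Python with SymPy (an assumption of tooling on our part; any of the computer algebra systems named in the preface works equally well), shows that division in ℝ[x] generally leaves a remainder, while every non-zero element of ℝ(x) has an inverse:

import sympy as sp

x = sp.symbols('x')

# In the polynomial ring R[x], division generally leaves a remainder:
q, r = sp.div(x**2 + 1, x - 1, x)   # x^2 + 1 = q*(x - 1) + r
print(q, r)                         # x + 1 and 2, so (x - 1) does not divide x^2 + 1

# In the field of rational functions R(x), every non-zero element is invertible:
f = (x**2 + 1) / (x - 1)
print(sp.cancel(f * (1 / f)))       # 1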

One can verify that the set of n × n matrices over ℝ is a ring, but not a commutative ring. This ring has an identity, namely the n × n identity matrix, so it is a ring with identity; it is not, however, a division ring, since a non-zero matrix need not have a multiplicative inverse.

A vector space, sometimes called a linear space, is a collection of objects called vectors that satisfies certain properties.

Definition 1.1.3

A vector space or linear space V over a field 𝔽 consists of a nonempty set V together with two operations + : V × V → V and * : 𝔽 × V → V which satisfy the following:

1. A rule (or operation) +, called vector addition, which associates with each pair of vectors a, b ∈ V a vector a + b ∈ V, called the sum of a and b, in such a way that

(a) addition is commutative;

(b) addition is associative;

(c) there is a zero vector 0 which is an additive identity;

(d) for each vector a ∈ V, there is −a ∈ V such that a + (−a) = 0.

2. A rule (or operation) *, called scalar multiplication, which associates with each c ∈ 𝔽 and each vector a ∈ V a vector c * a ∈ V, called the product, such that for all a, b ∈ V and c1, c2 ∈ 𝔽:

(a) 1 * a = a;

(b) (c1c2) * a = c1 * (c2 * a);

(c) c * (a + b) = c * a + c * b;

(d) (c1 + c2) * a = c1 * a + c2 * a.
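Although the axioms must be proved symbolically, a numerical spot-check is a useful sanity test. Below is a minimal sketch in Python with NumPy (our choice for illustration), checking several of the axioms for V = ℝ³ over 𝔽 = ℝ on random vectors:

import numpy as np

rng = np.random.default_rng(0)
a, b = rng.random(3), rng.random(3)   # vectors in R^3
c1, c2 = rng.random(), rng.random()   # scalars in R

assert np.allclose(a + b, b + a)                    # 1(a) commutativity
assert np.allclose(a + np.zeros(3), a)              # 1(c) zero vector
assert np.allclose(1.0 * a, a)                      # 2(a)
assert np.allclose((c1 * c2) * a, c1 * (c2 * a))    # 2(b)
assert np.allclose(c1 * (a + b), c1 * a + c1 * b)   # 2(c)
assert np.allclose((c1 + c2) * a, c1 * a + c2 * a)  # 2(d)
print("all sampled axioms hold up to floating-point tolerance")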

Definition 1.1.4

A subset W of a vector space V is called a subspace of V if W itself is a vector space under the addition and scalar multiplication inherited from V.

EXAMPLE 1.1.1

Let ℝ[x] be the collection of all polynomials in one variable x, i.e., the polynomial ring in one variable with coefficients in the field ℝ. Then ℝ[x] with the usual polynomial addition and multiplication by constants is a vector space over ℝ. A polynomial of degree n is completely determined by the coefficient of xᵏ for k = 0, …, n. To check that ℝ[x] is a vector space, one needs to know how addition and scalar multiplication by elements of ℝ are defined.

A matrix is an important object in linear algebra. A matrix is a rectangular array of numbers or expressions, called entries of the matrix, arranged in rows and columns.

EXAMPLE 1.1.2

Let Mn×n(ℝ) be the set of n × n matrices with entries in ℝ. Then Mn×n(ℝ) is a vector space over ℝ with the usual addition and scalar multiplication. Moreover, if we let Dn(ℝ) and Un(ℝ) be the sets of diagonal matrices and upper triangular matrices in Mn×n(ℝ), then Dn(ℝ) and Un(ℝ) are subspaces of Mn×n(ℝ).

EXAMPLE 1.1.3

An n × n matrix A = (ajk), where ajk denotes the entry in the j-th row and k-th column, over the field of complex numbers ℂ is Hermitian or self-adjoint if

ajk = ākj

for each j, k; the bar denotes complex conjugation.

The set of all Hermitian matrices is not a subspace of the space of all n × n matrices over ℂ: if A ≠ 0 is Hermitian, then iA is skew-Hermitian rather than Hermitian, so the set is not closed under scalar multiplication by ℂ.
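A quick NumPy check of this failure of closure (the matrix H below is just an illustrative choice):

import numpy as np

H = np.array([[2.0, 1 + 1j],
              [1 - 1j, 3.0]])

print(np.allclose(H, H.conj().T))              # True: H is Hermitian
print(np.allclose(1j * H, (1j * H).conj().T))  # False: i*H is not Hermitian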

EXAMPLE 1.1.4

Let Mm×n(ℝ[x]) be the set of m × n matrices with entries in ℝ[x]. Then Mm×n(ℝ[x]) is a module over ℝ[x] (and a vector space over ℝ) with the usual addition and scalar multiplication. In fact, Mm×n(ℝ[x]) is isomorphic to (Mm×n(ℝ))[x]: a matrix of polynomials can be regrouped as a polynomial with matrix coefficients. For instance, a 3 × 3 polynomial matrix of degree 2 can be written as A(x) = A2x² + A1x + A0 with A0, A1, A2 ∈ M3×3(ℝ).

Definition 1.1.5

Suppose that R is a ring and 1 is its multiplicative identity. A left R-module M is a set together with an operation + : M × M → M making (M, +) an abelian group, and an operation * : R × M → M such that for all r, s ∈ R and x, y ∈ M, we have:

1. r * (x + y) = r * x + r * y;

2. (r + s) * x = r * x + s * x;

3. (rs) * x = r * (s * x);

4. 1 * x = x.

The operation * of the ring on M is called scalar multiplication. The notation RM indicates a left R-module M. A right R-module M or MR is defined similarly, except that the ring acts on the right; i.e., scalar multiplication takes the form * : M × R → M, and the above axioms are written with scalars r and s on the right of x and y.

If a ring is non-commutative there is a difference between a left and right action. A given left action does not in general define a right action. Over a commutative ring there is no difference between a left and right module.

A module is abstractly similar to a vector space, but it uses a ring to supply the coefficients instead of the field used for vector spaces. In fact, a vector space is a module over a field, but in general, modules have coefficients in much more general algebraic objects. As before, it is easy to see that the set of Hermitian matrices is not a module over ℂ.

The following are a few theorems that can be verified directly from the definitions.

Theorem 1.1.1

A non-empty set W ⊂ V is a subspace of a vector space V if and only if the following hold:

1. For all x, y ∈ W, x + y ∈ W;

2. For every x ∈ W and every scalar α ∈ 𝔽, the vector αx ∈ W.

Definition 1.1.6

If U, W are subspaces of the vector space V, then we define the sum of U and W to be

U + W = {u + w | u ∈ U, w ∈ W}.

Similarly, if U1, …, Uk are subspaces of V, then their sum is

U1 + … + Uk = {u1 + … + uk | ui ∈ Ui}.

Remark: We note that the union of two subspaces is, in general, not a subspace. For example, in ℝ² the union of the two coordinate axes is not closed under addition: (1, 0) + (0, 1) = (1, 1) lies on neither axis.

Theorem 1.1.2

Suppose U, W are subspaces of the vector space V. Then U ∩ W and U + W are subspaces of V.

Definition 1.1.7

Let U1, …, Uk be subspaces of a vector space V. We say V is a direct sum of U1, …, Uk and write V = U1 ⊕ U2 ⊕ … ⊕ Uk if

1. Every vector x ∈ V can be written as x = y1 + y2 + … + yk with yi ∈ Ui;

2. If yi, wi ∈ Ui, and y1 + … + yk = w1 + … + wk, then yi = wi for all i = 1, …, k.

There is a similar definition, namely, the direct product Πi∈I Vi of a family of vector spaces Vi, which consists of the set of all tuples (xi)i∈I with xi ∈ Vi, where addition and scalar multiplication are performed componentwise and I is either a finite or an infinite index set. For two spaces, the direct product is the same as the Cartesian product. If X and Y are two sets, then X × Y, the Cartesian product of X and Y, is the set made up of all ordered pairs of elements of X and Y, i.e., if x ∈ X and y ∈ Y, then (x, y) ∈ X × Y.

EXAMPLE 1.1.5

Take V = ℝ³. We can describe all of its subspaces: {0}, ℝ³, and all lines and planes through the origin.

Given any two distinct lines U1 and U2 through the origin, we can take the plane U1 + U2 that they span.

Given a plane U through the origin and a line W through the origin not in that plane, we have U + W = ℝ³. Furthermore, every vector v ∈ ℝ³ is a unique sum of a vector in U and a vector in W; that is, ℝ³ = U ⊕ W.
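The unique decomposition can be computed by solving a 3 × 3 linear system. Below is a minimal NumPy sketch, where the plane U = Span{u1, u2} and the line W = Span{w} are illustrative choices:

import numpy as np

u1, u2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])  # a basis of the plane U
w = np.array([1.0, 1.0, 1.0])                                  # a direction not in U
v = np.array([2.0, 3.0, 5.0])

# Solve [u1 u2 w] c = v; the matrix is invertible exactly because R^3 = U ⊕ W.
c = np.linalg.solve(np.column_stack([u1, u2, w]), v)

u_part = c[0] * u1 + c[1] * u2   # the component in U: [-3. -2.  0.]
w_part = c[2] * w                # the component in W: [ 5.  5.  5.]
print(np.allclose(u_part + w_part, v))   # True, and the decomposition is unique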

Using the definition of direct sum, one can verify the following theorem:

Theorem 1.1.3

Let U1, …, Uk be subspaces of a vector space V. Let Wi = Σj≠i Uj, i.e., the sum of all Uj for j ≠ i. Then V = U1 ⊕ … ⊕ Uk if and only if

1. V = U1 + … + Uk;

2. Ui ∩ Wi = {0}, where 0 ∈ V, for each i.

EXAMPLE 1.1.6

Let 𝒲 = {Wi | i ∈ I} be a collection of subspaces of V, where the index set I can be finite or infinite. It is clear that ∩i∈I Wi is a subspace of V. If I is a finite set, then the set of all sums of vectors from the Wi is also a subspace of V, denoted Σi∈I Wi. If I is an infinite set, let

Σi∈I Wi = {Σi∈I wi | wi ∈ Wi},

where Σi∈I wi means that all wi = 0 except possibly for finitely many i ∈ I. Therefore, regardless of whether I is a finite or infinite set, with the above notation Σi∈I Wi is a subspace of V.

Theorem 1.1.4

Let 𝔽 be an infinite field and let V be a vector space over 𝔽. Then V cannot be the union of a finite number of proper subspaces (subspaces that are strictly contained in V).

Proof: We will prove the claim by contradiction. Let Wi ⊊ V, i = 1, …, n, be proper subspaces of V, and suppose V = W1 ∪ W2 ∪ … ∪ Wn. Without loss of generality, we may assume that no Wi is contained in the union of the remaining subspaces (otherwise we simply discard it); in particular, there exists w ∈ W1 such that w ∉ Wk for all k ≠ 1.

Let w ∈ W1 be such an element and let v ∈ V \ W1. Consider the set U = {w + av | a ∈ 𝔽}. Since 𝔽 is infinite, U is infinite. Because U ⊆ V = W1 ∪ … ∪ Wn, the intersection U ∩ Wk is infinite for some index k ∈ {1, 2, …, n}.

Suppose k ≠ 1. If (w + av), (w + bv) ∈ U ∩ Wk for some distinct a, b ∈ 𝔽, then b(w + av) − a(w + bv) = (b − a)w ∈ Wk, and hence w ∈ Wk, contradicting the choice of w. Thus k = 1. But this is also impossible: since Wk = W1 is a subspace, (w + av) − (w + bv) = (a − b)v ∈ W1, which yields v ∈ W1, contradicting v ∈ V \ W1. Therefore, we have proved the original claim.

Notice that the above theorem may not be true if 𝔽 is a finite field. For example, let 𝔽 = 𝔽2 = {0, 1}, and let V = 𝔽2², the vector space of pairs of elements of the finite field 𝔽2. Then V = {(0, 0), (1, 0), (0, 1), (1, 1)}. Let V1 = {(0, 0), (1, 0)}, V2 = {(0, 0), (0, 1)}, and V3 = {(0, 0), (1, 1)}; then V = V1 ∪ V2 ∪ V3.
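Because everything here is finite, the counterexample can be verified exhaustively; the sketch below (plain Python, with arithmetic taken mod 2) enumerates V = 𝔽2² and its three proper subspaces:

from itertools import product

V = set(product((0, 1), repeat=2))   # all four vectors of F_2 x F_2

def span(g):
    # Scalar multiples of the generator g over F_2 = {0, 1}
    return {tuple((c * t) % 2 for t in g) for c in (0, 1)}

V1, V2, V3 = span((1, 0)), span((0, 1)), span((1, 1))
print(V == V1 | V2 | V3)   # True: V is a finite union of proper subspaces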

1.2 Linear Span and Linear Independence

Definition 1.2.1

A vector b ∈ V is said to be a linear combination of the vectors a1, …, an in V if there exist scalars c1, …, cn in 𝔽 such that b = c1a1 + … + cnan.

The set of all linear combinations of a1,…,an is called the span of a1,…,an, and it is denoted by Span (a1,…,an).

If a vector space V = Span(a1, …, an), then we say that (a1, …, an) spans V, and (a1, …, an) is a spanning sequence for V.

A vector space V is finitely generated if it is possible to find a finite sequence of vectors (a1, …,an) such that V = Span(a1, …,an).

EXAMPLE 1.2.1

The real vector space ℝ³ has {(2, 0, 0), (0, 1, 0), (0, 0, 1)} as a spanning set. {(1, 2, 3), (0, 1, 2), (−1, 0.5, 3), (1, 1, 1)} is also a spanning set for ℝ³. But the set {(1, 0, 0), (0, 1, 0), (1, 1, 0)} is not a spanning set of ℝ³, since (0, 0, 1) ∈ ℝ³ is not in the span of this set. The span of this set is the subspace of all vectors in ℝ³ whose last component is zero.
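A convenient computational test: a finite set of vectors spans ℝ³ exactly when the matrix whose rows are the given vectors has rank 3. A NumPy sketch of the three checks above:

import numpy as np

def spans_R3(vectors):
    # The vectors span R^3 iff the matrix with these rows has rank 3.
    return np.linalg.matrix_rank(np.array(vectors, dtype=float)) == 3

print(spans_R3([(2, 0, 0), (0, 1, 0), (0, 0, 1)]))                # True
print(spans_R3([(1, 2, 3), (0, 1, 2), (-1, 0.5, 3), (1, 1, 1)]))  # True
print(spans_R3([(1, 0, 0), (0, 1, 0), (1, 1, 0)]))                # False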

EXAMPLE 1.2.2

Let A = (a1, …, am) and B = (b1, …, bn) be two spanning sequences of a vector space V. Define the union of the two sequences A and B by A ∪ B = (a1, …, am, b1, …, bn). It is easy to check that Span(A) ∪ Span(B) ⊆ Span(A ∪ B).

Theorem 1.2.1

Let A = (a1,…,am) be a sequence such that ai ∈ V. Then

1. Span(A) is a subspace of V;

2. If W is a subspace of V, and A ⊆ W, then Span(A) ⊆ W.

Proof: One can check the claims by using the definition of span. The second claim says that Span(A) is the “smallest” subspace of V which contains A: if W contains A and W ⊆ Span(A), then W = Span(A).

Theorem 1.2.2

Let A = (a1, …, am) be a sequence of distinct vectors ai ∈ V. If there exists a vector ai for some i such that ai ∈ Span(A \ {ai}), then Span(A) = Span(A \ {ai}).

Proof: It is easy to see that Span(A) ⊇ Span(A \ {ai}). To see Span(A) ⊆ Span(A \ {ai}), let b ∈ Span(A); then b = c1a1 + … + cmam. Since ai ∈ Span(A \ {ai}), we may write ai = Σj≠i djaj, and substituting this expression for ai shows that b ∈ Span(A \ {ai}).

Thus, Span(A) = Span(A \ {ai}).

So far, we have seen that any finite subset A of a vector space V determines a subspace Span(A) of V. If we let 𝒜 denote the set of all subsets of V and 𝒮 denote the set of all subspaces of V, then there is a function Span : 𝒜 → 𝒮 which sends a subset A ∈ 𝒜 to Span(A) ∈ 𝒮. One can show that Span is a function with the following properties:

1. If A1 ⊆ A2 ∈ 𝒜, then Span(A1) ⊆ Span(A2) ∈ 𝒮.

2. If w ∈ Span(A), then there exists a finite subset A′ ⊆ A such that w ∈ Span(A′).

3. A ⊆ Span(A) for all A ∈ 𝒜.

4. For every A ∈ 𝒜, Span(Span(A)) = Span(A).

5. Let v, w ∈ V; if v ∈ Span(A ∪ {w}) \ Span(A), then w ∈ Span(A ∪ {v}).

We observe that if a certain element of a spanning set can be spanned by the other elements of the spanning set, then one can reduce the number of elements in the spanning set. This introduces the following concept.

Definition 1.2.2

A finite sequence of vectors a1, …, an from a vector space V is linearly dependent if there are scalars c1, …, cn, not all zero, such that c1a1 + … + cnan = 0. The sequence is said to be linearly independent if it is not linearly dependent; equivalently, if c1a1 + … + cnan = 0 implies c1 = … = cn = 0.

An infinite set of vectors is linearly dependent if it contains a finite subset that is linearly dependent. Otherwise, this infinite set of vectors is called linearly independent.

It is easy to check that if a spanning set contains repeated vectors, or if one of the vectors is a linear combination of the other vectors, then the spanning set is linearly dependent.

Theorem 1.2.3

Let A = (a1, …, an) be a linearly independent sequence of vectors of a vector space V. Then

1. If b ∉ Span(A), then A ∪ {b} = {a1, …, an, b} is a linearly independent set.

2. If x ∈ Span(A), then x can be uniquely expressed as x = c1a1 + … + cnan with ci ∈ 𝔽.

Proof: We will prove the linear independence claim by contradiction: suppose there exist c, c1, …, cn, not all zero, such that cb + c1a1 + … + cnan = 0. Since A is a linearly independent sequence, we must have c ≠ 0, and hence b = −(c1/c)a1 − … − (cn/c)an ∈ Span(A).

This is impossible since b ∉ Span(A).

To show the second claim, suppose x = c1a1 + … + cnan = d1a1 + … + dnan. Then (c1 − d1)a1 + … + (cn − dn)an = 0.

Since A is a linearly independent set, ci = di for i = 1, …, n.

Hence, the expression is unique.

EXAMPLE 1.2.3

Let V = Mm×n(ℝ). For any 1 ≤ i ≤ m and 1 ≤ j ≤ n, let eij be the m × n matrix whose (i, j)-th entry is 1 and all other entries are zero. It is easy to see that the eij for 1 ≤ i ≤ m and 1 ≤ j ≤ n are linearly independent, since

Σi Σj cij eij = (cij) = 0 implies cij = 0 for all i, j,

where 0 is the m × n matrix with all entries zero. Moreover, Mm×n(ℝ) is spanned by the eij for 1 ≤ i ≤ m and 1 ≤ j ≤ n.
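Flattening each matrix into a vector of length mn turns both claims, independence and spanning, into a single rank computation. A NumPy sketch for the small case m = 2, n = 3 (an illustrative choice):

import numpy as np

m, n = 2, 3
basis = []
for i in range(m):
    for j in range(n):
        E = np.zeros((m, n))
        E[i, j] = 1.0             # the matrix e_ij
        basis.append(E.ravel())   # flatten to a vector of length mn

B = np.array(basis)                        # an (mn) x (mn) matrix
print(np.linalg.matrix_rank(B) == m * n)   # True: the e_ij are independent and span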

1.3 Bases for Vector Spaces

Definition 1.3.1

A finite subset ℬ = {v1, …, vn} of a vector space V is called a finite basis for V provided

1. v1, …, vn are linearly independent, and

2. v1, …, vn span V.

Consequently, if v1, …, vn is a list of vectors in V, then these vectors form a basis if and only if every v ∈ V can be uniquely written as v = c1v1 + … + cnvn with ci ∈ 𝔽.

If there is an infinite set of linearly independent and spanning elements, then this set is called an infinite basis; such a basis is called a Hamel basis.

EXAMPLE 1.3.1

We saw in Example 1.2.3 that the matrices eij for 1 ≤ i ≤ m and 1 ≤ j ≤ n form a basis for the vector space V = Mm×n(ℝ).

EXAMPLE 1.3.2

Let V = ℝ[x]. Then ℬ = {1 = x⁰, x, x², …} forms a basis of V.

Theorem 1.3.1

If V is generated by n vectors v1,…,vn, then any sequence of vectors with more than n vectors is linearly dependent.

Proof: Let (a1, …, ak) with k > n be a sequence of vectors of V. Since a1 ∈ V = Span(v1, …, vn), the vectors a1, v1, …, vn are linearly dependent, so one of the vi, say vn after reindexing, lies in the span of the others; hence V = Span(a1, v1, …, vn−1). Repeating this exchange, replacing one vj by one ai at each step, we obtain V = Span(an, an−1, …, a1). Then an+1 ∈ V = Span(an, …, a1), so an+1, an, an−1, …, a1 are linearly dependent. Thus, we conclude that any sequence of vectors with more than n vectors is linearly dependent.

Theorem 1.3.2

Let V = Span(v1,…,vn) be a finitely generated vector space. Then V has a basis with at most n elements.

Proof: Let v1, …, vn generate V, that is, V = Span(v1, …, vn). If v1, …, vn are linearly independent, then ℬ = {v1, …, vn} is a basis for V. If v1, …, vn are linearly dependent, then there exists a vector, say vn, such that vn ∈ Span(v1, …, vn−1), and by Theorem 1.2.2, V = Span(v1, …, vn−1). Repeating this process until the remaining vectors are linearly independent, we obtain a basis for V with at most n elements.