Essentials of Signals and Systems

Emiliano R. Martins
Description

Novel approach to the theory of signals and systems in an introductory, accessible textbook.

Signals and systems has the reputation of being a difficult subject. Essentials of Signals and Systems is a standalone textbook aiming to change this reputation with a novel approach to this subject, teaching the essential concepts of signals and systems in a clear, friendly, intuitive, and accessible way. The overall vision of the book is that traditional approaches to signals and systems are unnecessarily convoluted, and that students' learning experiences are much improved by making a clear connection between the theory of representation of signals and systems and the theory of representation of vectors and matrices in linear algebra. The author begins by reviewing the theory of representation in linear algebra, emphasizing that vectors are represented by different coordinates when the basis is changed, and that the basis of eigenvectors is special because it diagonalizes the operator. Thus, in each step of the theory of representation of signals and systems, the author shows the analogous step in linear algebra. With such an approach, students can easily understand that signals are analogous to vectors, that systems are analogous to matrices, and that Fourier transforms are a change to the basis that diagonalizes LTI operators.

The text emphasizes the key concepts in the analysis of linear and time invariant systems, demonstrating both the algebraic and physical meaning of Fourier transforms. The text carefully connects the most important transforms (Fourier series, Discrete Time Fourier Transform, Discrete Fourier Transform, Laplace and z-transforms), emphasizing their relationships and motivations. The continuous and discrete time domains are neatly connected, and the students are shown step by step how to use the fft function, using simple examples. Incorporating learning objectives and problems, and supported with simple MATLAB codes to illustrate concepts, the text presents to students the foundations that allow the reader to pursue more advanced topics in later courses.

Developed from lecture notes already tested with more than 600 students over six years, Essentials of Signals and Systems covers sample topics such as:

* Basic concepts of linear algebra that are pertinent to signals and systems.
* Theory of representation of signals, with an emphasis on the notion of Fourier transforms as a change of basis, and on their physical meaning.
* Theory of representation of linear and time invariant systems, emphasizing the role of Fourier transforms as a change to the basis of eigenvectors, and the physical meaning of the impulse and frequency responses.
* What signals and systems have to do with phasors and impedances, and the basics of filter design.
* The Laplace transform as an extension of Fourier transforms.
* Discrete signals and systems, the sampling theorem, the Discrete Time Fourier Transform (DTFT), the Discrete Fourier Transform (DFT), and how to use the fast Fourier transform (fft).
* The z-transform as an extension of the Discrete Time Fourier Transform.

Essentials of Signals and Systems is an immensely helpful textbook on the subject for undergraduate students of electrical and computer engineering. The information contained within is also pertinent to those in physics and related fields involved in the understanding of signals and systems processing, including those working on related practical applications.



Table of Contents

Cover

Title Page

Copyright

Preface

About the Author

Acknowledgments

About the Companion Website

1 Review of Linear Algebra

1.1 Introduction

1.2 Vectors, Scalars, and Bases

1.3 Vector Representation in Different Bases

1.4 Linear Operators

1.5 Representation of Linear Operators

1.6 Eigenvectors and Eigenvalues

1.7 General Method of Solution of a Matrix Equation

1.8 The Closure Relation

1.9 Representation of Linear Operators in Terms of Eigenvectors and Eigenvalues

1.10 The Dirac Notation

1.11 Exercises

Interlude: Signals and Systems: What is it About?

2 Representation of Signals

2.1 Introduction

2.2 The Convolution

2.3 The Impulse Function, or Dirac Delta

2.4 Convolutions with Impulse Functions

2.5 Impulse Functions as a Basis: The Time Domain Representation of Signals

2.6 The Scalar Product

2.7 Orthonormality of the Basis of Impulse Functions

2.8 Exponentials as a Basis: The Frequency Domain Representation of Signals

2.9 The Fourier Transform

2.10 The Algebraic Meaning of Fourier Transforms

2.11 The Physical Meaning of Fourier Transforms

2.12 Properties of Fourier Transforms

2.13 The Fourier Series

2.14 Exercises

3 Representation of Systems

3.1 Introduction and Properties

3.2 Operators Representing Linear and Time Invariant Systems

3.3 Linear Systems as Matrices

3.4 Operators in Dirac Notation

3.5 Statement of the Problem

3.6 Eigenvectors and Eigenvalues of LTI Operators

3.7 General Method of Solution

3.8 The Physical Meaning of Eigenvalues: The Impulse and Frequency Responses

3.9 Frequency Conservation in LTI Systems

3.10 Frequency Conservation in Other Fields

3.11 Exercises

4 Electric Circuits as LTI Systems

4.1 Electric Circuits as LTI Systems

4.2 Phasors, Impedances, and the Frequency Response

4.3 Exercises

5 Filters

5.1 Ideal Filters

5.2 Example of a Low‐pass Filter

5.3 Example of a High‐pass Filter

5.4 Example of a Band‐pass Filter

5.5 Exercises

6 Introduction to the Laplace Transform

6.1 Motivation: Stability of LTI Systems

6.2 The Laplace Transform as a Generalization of the Fourier Transform

6.3 Properties of Laplace Transforms

6.4 Region of Convergence

6.5 Inverse Laplace Transform by Inspection

6.6 Zeros and Poles

6.7 The Unilateral Laplace Transform

6.8 Exercises

Interlude: Discrete Signals and Systems: Why do we Need Them?

7 The Sampling Theorem and the Discrete Time Fourier Transform (DTFT)

7.1 Discrete Signals

7.2 Fourier Transforms of Discrete Signals and the Sampling Theorem

7.3 The Discrete Time Fourier Transform (DTFT)

7.4 The Inverse DTFT

7.5 Properties of the DTFT

7.6 Concluding Remarks

7.7 Exercises

8 The Discrete Fourier Transform (DFT)

8.1 Discretizing the Frequency Domain

8.2 The DFT and the Fast Fourier Transform (fft)

8.3 The Circular Time Shift

8.4 The Circular Convolution

8.5 Relationship Between Circular and Linear Convolutions

8.6 Parseval's Theorem for the DFT

8.7 Exercises

Note

9 Discrete Systems

9.1 Introduction and Properties

9.2 Linear and Time Invariant Discrete Systems

9.3 Digital Filters

9.4 Exercises

10 Introduction to the z‐transform

10.1 Motivation: Stability of LTI Systems

10.2 The z‐transform as a Generalization of the DTFT

10.3 Relationship Between the z‐transform and the Laplace Transform

10.4 Properties of the z‐transform

10.5 The Transfer Function of Discrete LTI Systems

10.6 The Unilateral z‐transform

10.7 Exercises

References with Comments

Appendix A: Laplace Transform Property of Product in the Time Domain

Appendix B: List of Properties of Laplace Transforms

Index

End User License Agreement

List of Illustrations

Chapter 2

Figure 2.1 Illustration of a single‐valued function (a) and non‐single‐valued fu...

Figure 2.2 Illustration of how the convolution builds up a new function by addin...

Figure 2.3 Illustration of how to evaluate the integral of convolution.

Figure 2.4 Illustration of modifications in the quadratic and rectangular functi...

Figure 2.5 Illustration of the four domains of integration involved in the convo...

Figure 2.6 The convolution of a rectangular function with itself results in a tr...

Figure 2.7 Examples of normalized rectangular functions for different values of...

Figure 2.8 Graphical representation of the impulse function as an upwards pointi...

Figure 2.9 Representation of the sifting property of the impulse function. (a) T...

Figure 2.10 Graphical representation of the terms of the Riemann sum in Equation...

Figure 2.11 Graphical representation of the function .

Figure 2.12 Illustration of cosines and sines with different frequencies.

Figure 2.13 Illustration of the recipe of Equation (2.105). The illustration is ...

Figure 2.14 Schematic illustration of the difference between a high‐pitched song...

Figure 2.15 Illustration of a rectangular function and its Fourier transform. ...

Figure 2.16 Comparison between a slow signal (red) and a fast signal (blue). ...

Figure 2.17 Example of the symmetry between time and frequency. The Fourier tran...

Figure 2.18 Illustration of AM modulation. Spectrum before modulation (a) and sp...

Figure 2.19 Illustration of relationship between a function (a) and its integral...

Figure 2.20 Illustration of the separation of into a signal where the value at ...

Figure 2.21 Illustration of the effect of chopping a sinusoidal wave. (a) Ideal ...

Figure 2.22 Illustration of an impulse train in the time domain (a) and in the f...

Figure 2.23 Representation of the creation of a periodic function through the co...

Figure 2.24 Illustration of the reason why the Fourier series involve only multi...

Figure 2.25 A square wave.

Figure 2.26 Illustration of the relationship between the Fourier transform of a ...

Chapter 3

Figure 3.1 Representation of the action of a system.

Figure 3.2 Illustration of the concept of time invariance.

Figure 3.3 Illustration of a harmonic oscillator with mass , spring constant , a...

Figure 3.4 (a)–(c) Magnitude of the frequency response of a harmonic oscillator;...

Figure 3.5 Example of transient response of a harmonic oscillator. The input (to...

Figure 3.6 (a) RC circuit showing input and output. (b) At time , a voltage sour...

Figure 3.7 Relationship between input and output of an RC circuit. The discharge...

Figure 3.8 Illustration of the concept of frequency conservation in LTI systems....

Figure 3.9 Illustration of an optical ray incident on the boundary between two m...

Chapter 4

Figure 4.1 If the voltage and currents have the same time dependence, then Kirch...

Figure 4.2 An RLC circuit.

Chapter 5

Figure 5.1 Example of an ideal low‐pass filter. (a) Illustration of a signal con...

Figure 5.2 Illustration of the magnitude of the frequency response of (a) ideal ...

Figure 5.3 (a) Plot of as defined in Equation (5.1). (b) Plot of frequency resp...

Figure 5.4 Example of Bode plot of an RL high pass filter assuming a cut‐off fre...

Figure 5.5 A band‐pass filter based on a tank circuit.

Chapter 6

Figure 6.1 (a) Example of a stable system. (b) Example of an unstable system. ...

Figure 6.2 Examples of graphical representation of ROCs of Laplace transforms. T...

Chapter 7

Figure 7.1 Illustration of the relationship between a continuous time signal , t...

Figure 7.2 (a) Illustration of with highest frequency ; for simplicity is assu...

Figure 7.3 (a) Illustration of a signal and its samples . (b) Illustration of t...

Figure 7.4 Illustration of the relationship between the Fourier transform of the...

Figure 7.5 Plot of the DTFT of the Gaussian function.

Figure 7.6 (a) Plot of the Fourier transform. (b) Plot of the DTFT, including on...

Chapter 8

Figure 8.1 Illustration of the effect of discretizing the DTFT. (a) An example s...

Figure 8.2 Illustration of the consequence of choosing . With this choice, only ...

Figure 8.3 Illustration of the samples used in the centralized DFT (Equation (8....

Figure 8.4 Illustration of samples used in the DFT calculated by the fft (Equati...

Figure 8.5 Illustration of the command fftshift. (a) An example signal with 16 s...

Figure 8.6 Plot of the centralized Gaussian, generated by the command above (fig...

Figure 8.7 Plot of the Gaussian to be used in the fft, generated by the command ...

Figure 8.8 Plot of DFT calculated by the fft, generated by the command above (fi...

Figure 8.9 Plot of the centralized DFT, generated by the command above (figure 4...

Figure 8.10 Plot of Fourier transform of the Gaussian function of the previous e...

Figure 8.11 Inverse Fourier transform obtained with the ifft, generated by the c...

Figure 8.12 Illustration of circular time shift: only the samples highlighted in...

Figure 8.13 Illustration of a circular convolution that coincides with the linea...

Figure 8.14 Example of a circular convolution that does not coincide with the li...

Chapter 9

Figure 9.1 Plot obtained using the command above (figure 1).

Chapter 10

Figure 10.1 Illustration of region of convergence of a right‐sided function: the...

Figure 10.2 Illustration of the difference between the ‘location’ of the DTFT i...

Guide

Cover

Table of Contents

Title Page

Copyright

Preface

About the Author

Acknowledgments

About the Companion Website

Begin Reading

References with Comments

Appendix A: Laplace Transform Property of Product in the Time Domain

Appendix B: List of Properties of Laplace Transforms

Index

End User License Agreement


Essentials of Signals and Systems

 

EMILIANO R. MARTINS

Department of Electrical and Computer Engineering
University of São Paulo, Brazil

 

 

 

 

This edition first published 2023
© 2023 John Wiley & Sons Ltd

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Emiliano R. Martins to be identified as the author of this work has been asserted in accordance with law.

Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

Editorial Office
The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print‐on‐demand. Some content that appears in standard print versions of this book may not be available in other formats.

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging‐in‐Publication Data

Names: Martins, Emiliano R., author. | John Wiley & Sons, publisher.
Title: Essentials of signals and systems / Emiliano R. Martins.
Description: Hoboken, NJ : Wiley, 2023. | Includes bibliographical references and index.
Identifiers: LCCN 2022049576 (print) | LCCN 2022049577 (ebook) | ISBN 9781119909217 (paperback) | ISBN 9781119909224 (adobe pdf) | ISBN 9781119909231 (epub)
Subjects: LCSH: Signal theory (Telecommunication). | Signals and signaling. | Signal processing–Data processing.
Classification: LCC TK5102.5 .M29225 2023 (print) | LCC TK5102.5 (ebook) | DDC 621.382/23–dc23/eng/20221107
LC record available at https://lccn.loc.gov/2022049576
LC ebook record available at https://lccn.loc.gov/2022049577

Cover Design: Wiley
Cover Image: © Andriy Onufriyenko/Getty Images

Preface

Like Achilles played by Brad Pitt, signals and systems is a powerful and beautiful theory. Have you ever wondered why it is that, if any waveform passes through an electric circuit, the circuit modifies the waveform, except if it is a sinusoidal wave? For example, if the circuit input is a square wave, then the output is not a square wave. If the input is triangular, then the output is not triangular. But if the input is sinusoidal, then the output is also sinusoidal, only with a different amplitude and phase. Why is that? And why does the same thing happen in so many different systems? For instance, if you apply a square wave force to the mass of a spring–mass system, the mass does not move like a square wave. But, if the force is sinusoidal, and only if it is sinusoidal, then the mass movement is also sinusoidal. What is so special about sinusoidal waves that their form is maintained in so many different systems?

As you will learn in this textbook, this is a consequence of conservation of frequency in linear and time invariant systems. This is an extremely useful concept, with applications in a wide range of topics in engineering and physics. For example, by changing time for space, the same theory explains the laws of reflection and refraction (Snell's law), and many other conservation properties of electromagnetic, optical, and quantum systems. And this is just a taste of the power of this theory.

However, signals and systems enjoys an unfair reputation of being a tough nut to crack. It is not. In fact, if you already know elementary notions of linear algebra – like vectors, change of basis, eigenvectors, and eigenvalues – then you already know the most important bits of the theory of signals and systems. It is only a matter of spelling them out. So, as Socrates, in history's most cringeworthy epistemological simile, put it: you are already pregnant with signals and systems; I am only the midwife. But if you have never taken a linear algebra course, there is no need to worry: we will introduce the main ideas from scratch.

So you, reader, are in for a treat. With a little bit of effort, you will learn one of the most useful mathematical theories in engineering and physics. And, yes, this is a mathematical theory, but I will make sure to show you the physical meaning of the crucial concepts, like Fourier transforms and the frequency response. You will learn not only their meaning, but also how these mathematical objects can be measured in real life.

Now let me tell you a bit about the lay of the land. We begin by stretching and warming up in Chapter 1, where we review the basic concepts of linear algebra that are pertinent to signals and systems. Though I assume that you have already taken at least one course in linear algebra, the review introduces all ideas from scratch, so that a reader unfamiliar with linear algebra can follow it without difficulty. The only caveat is that the review is not in the rigorous format of a traditional exposition of linear algebra. Indeed, it takes some liberties that are only acceptable in a book that is not about linear algebra per se, as is the case here. In short: if you have never taken a course in linear algebra, you should have no difficulties in following the review, but you should not forget that this is only a review, not a rigorous exposition of the theory. Finally, I will introduce a notation that may be new even to students familiar with linear algebra. It is the famous Dirac notation, after Paul Dirac, one of the greats of the twentieth century. The main motivation to introduce the Dirac notation is that it helps to highlight a crucial aspect of the theory of signals and systems, namely, the difference between the representation of a signal and the signal itself.

The course per se begins in Chapter 2, which is a foundational chapter, where we will learn the theory of representation of signals. We will see that the same ideas of linear algebra can also be applied to signals. In linear algebra, the same vector can be represented in different bases. In signals and systems, signals play the role of vectors, and as such can also be represented in different bases. This notion leads to two central representations: the time domain representation and the frequency domain representation. The latter is the representation involving Fourier transforms, so a big chunk of Chapter 2 is about Fourier transforms. We will learn what their algebraic and physical meanings are, and that a Fourier transform is a scalar product.

In Chapter 3, we deal with the theory of representation of systems. We are mainly interested in a particular but quite common class of systems, the so‐called linear and time invariant systems. We will learn that they play the same role that matrices play in linear algebra: matrices are objects that transform one vector into another; likewise, systems are objects that transform one signal (so, the vector) into another. Then, we will learn that the time representation of a system is a differential equation. You will probably agree that differential equations are not the most straightforward things to solve in the universe. But here we will learn that they are not straightforward because they are like matrices that are not diagonal. In algebra, however, a general matrix can be made diagonal if a basis of eigenvectors is used. Likewise, differential equations can also be made diagonal by choosing a basis of eigenvectors. That leads to a simple representation of systems: the frequency domain representation, which is a diagonal representation. So, along the way, we will also learn the easiest way to solve differential equations. In this chapter, we will also learn why frequency is conserved in linear and time invariant systems, and so we will finally find out what is so special about sinusoidal waves. Then we will catch a glimpse of applications of this theory in other fields, like optics and quantum mechanics.

In Chapters 4 and 5, we will reap what we have sown and apply the freshly learned concepts to some examples, namely, analysis of electric circuits and filter design. We will learn what signals and systems have to do with phasors and impedances (they are intimately connected) and the basics of filter design.

In Chapter 6, we will learn that sometimes systems go bananas and require a more general representation. That will lead to the Laplace transform, which is a generalization of the Fourier transform.

We change gears in Chapters 7–10, which deal with discrete signals and systems. In Chapters 7 and 8, we will learn how to obtain Fourier transforms using computers. Maybe you have already heard about the fast Fourier transform (fft), which is a widely used algorithm in engineering and physics. We will learn that this algorithm implements a Discrete Fourier Transform (DFT), which is the topic of Chapter 8. So, we will learn what the fft is about and how to use it in practice.

Chapter 9 presents the basic ideas of the representation of discrete systems. The theories of discrete and continuous systems (the latter is the subject of Chapter 3) are quite similar, so we focus attention on their differences. Finally, in Chapter 10, we will learn the z‐transform, which is a version of the Laplace transform for discrete signals, and it is widely used in control engineering.

Now it is time to brew the coffee and play the focusing/meditation playlist. I hope you will enjoy the ride.

First note to instructors: in my experience, I found that students tend to prefer an exposition of the continuous time theory in one block, without interweaving it with the discrete time theory. Such a preference guided the organization of content in this textbook. If, however, you prefer to teach continuous and discrete time in parallel, then I suggest the following sequence of exposition: Chapters 1, 2, 7, 8, 3, 4, 5, 9, 6, and 10.

Second note to instructors: to bring out the difference between a signal and its representation, I introduce the Dirac notation for vectors and scalar products in Section 1.10 and use it in a few places in Chapters 2 and 3. But I keep it simple, and do not discuss the notion of ‘duals’, since it is not necessary for our purposes. The Dirac notation is used to represent scalar products between signals, and to distinguish between a signal itself and its representation.

About the Author

Emiliano R. Martins majored in electrical engineering at the University of São Paulo (Brazil), obtained a master's degree in electrical engineering from the same university, another master's degree in photonics from the Erasmus Mundus Master in Photonics (European consortium), and a PhD in physics from the University of St. Andrews (UK). He has been teaching signals and systems in the Department of Electrical and Computer Engineering of the University of São Paulo (Brazil) since 2016. He is also the author of Essentials of Semiconductor Device Physics.

Acknowledgments

The raison d'être of all my endeavours lies within three persons: my wife Andrea, and my two children Tom and Manu. Without you, no book would be written, no passion would be meaningful.

I am grateful to my parents Elcio and Ana for teaching me what really matters, and to my sister Barbara for showing me the easiest way to learn what really matters.

I also thank all my students who contributed with invaluable feedback over all these years. You were the provers guiding the recipe of the pudding. If I have lived up to your expectations, then I have done my job well.

I am grateful to the Wiley team who brought the manuscript to life.

Last, but certainly not least, I thank my editors Martin Preuss and Jenny Cossham for believing in this project.

About the Companion Website

This book is accompanied by a companion website:

www.wiley.com/go/martins/essentialsofsignalsandsystems

This website includes:

• Solutions to the exercises

1 Review of Linear Algebra

Learning Objectives

We review the concepts of linear algebra that are relevant to signals and systems. There are two key ideas of particular importance to subsequent chapters: (i) the idea that the coordinate representation of a vector depends on the choice of basis vectors and (ii) the idea that a basis of eigenvectors simplifies a matrix equation. In Chapters 2 and 3, these two key ideas will be extended to a new algebraic space: the function space.

1.1 Introduction

In this chapter, we review concepts of linear algebra pertinent to signals and systems. I recommend that you read it even if you are already familiar and comfortable with linear algebra because, besides defining the notation and terminology, we will also make some important connections between linear algebra and signals and systems. This is certainly not a rigorous exposition of linear algebra, but we build it roughly from scratch, so that a reader without any previous knowledge of algebra can follow it.

The theory of signals and systems is essentially a generalization of linear algebra. Thus, the core ideas in the exposition of signals and systems will be illustrated by drawing analogies with a Euclidean space. In Chapter 2, we will learn that some intimidating integrals are only linear combinations of vectors, and some others are only scalar products.

Linear algebra deals with two mathematical objects: vectors and operators. Operators are objects that act on vectors, turning them into other vectors. The set of all vectors upon which the operators can act is called a space, and if an operator acts on a vector of a space, then the resulting vector also belongs to the space. Furthermore, if we add two vectors from a given space, the result is also a vector from the same space. The most common space, which we will deal with in this chapter, is the Euclidean space.

We will use two types of notations for vectors. In the beginning, we will use the more standard notation, denoting a vector by placing a little arrow on the top of a letter, like this: . In Section 1.10, we will introduce a more elegant notation, namely, the Dirac notation. For now, though, we stick to the little arrows.

We will also use two types of notations for operators. First, we will denote an operator by a capital letter followed by curly brackets, like this: . Thus, we denote the action of the operator on a vector resulting in another vector , by:

(1.1)

In Section 1.10, we will also introduce the Dirac notation for operators.

Pay attention to a subtlety of the terminology: even though we can define many kinds of operations involving vectors, we reserve the term ‘operator’ for an object that turns a vector into another vector. But not all operations turn a vector into another. For example, in Section 1.2, we will define an operation that maps two vectors into a scalar (scalars are defined in Section 1.2), so we cannot call the object that does this operation an ‘operator’, because it does not turn a vector into another vector. Keep in mind this distinction between ‘operation’ and ‘operator’.

1.2 Vectors, Scalars, and Bases

In linear algebra, we are allowed to multiply vectors by numbers, and the result is another vector. These numbers are also called ‘scalars’. A more precise definition of a scalar involves the notion of invariance, but we do not need to go there, so we will make do with the notion of a scalar as being just a number.

Now, let us define an operation (mind you: I said an ‘operation’, not an ‘operator’) that maps two vectors into a scalar. Think of this operation as a kind of mathematical machine, in which there are two input slots and one output slot. So, you put one vector in one of the input slots, another vector in the other input slot, and out comes a scalar in the output slot. To denote this operation, we will use the symbol , and the rule is: if we put one vector on each side of , out comes a scalar, like this:

DEFINITION 1

(1.2)

In Definition 1, is a number (a scalar). So, Definition 1 is only specifying that the object is a scalar: it is the output of the mathematical machine. Now, one feature that may be novel even to students familiar with linear algebra is that, here, we will allow to be a complex number. So, in Definition 1, may be a complex number.

Let us define two other properties of our mapping. The first one is that the mapping is linear, and as such obeys Definition 2:

DEFINITION 2

(1.3)

As implied in Definition 2, the object is itself a vector. We say that this vector was formed by a ‘linear combination’ of the vectors and .

Notice that we have defined linearity in terms of linear combinations on the right‐hand side of the , but we have said nothing about linear combinations on the left‐hand side, such as . To check what happens with operations involving linear combinations on the left‐hand side, we need our third definition:

DEFINITION 3

(1.4)

where is the complex conjugate of .

We call the operation obeying Definitions 1–3 a ‘scalar product’. The terminology reflects the fact that the operation associates two vectors with a scalar.

Worked Exercise: Linear Combinations on the Left‐hand Side of the Scalar Product

As an exercise in logic, let us check how the linear combination in parenthesis in the expression can be extracted from the parenthesis.

Solution:

First, recall that is itself a vector. Call it :

Thus:

(1.5)

But, according to Definition 3:

(1.6)

Recall that . Moreover, according to Definition 2:

(1.7)

Substituting Equation (1.7) into Equation (1.6):

(1.8)

But, according to Definition 3:

(1.9)

Substituting Equation (1.9) into Equation (1.8):

Thus, with the help of Equation (1.5), we conclude that:

CONCLUSION

(1.10)

Pay close attention to Equation (1.10), and compare it to Equation (1.3): in the way we defined the scalar product, to extract the scalars inside the parenthesis on the left‐hand side of the , we have to complex conjugate them (as in Equation (1.10)). But, if the parenthesis is on the right‐hand side of the , then we do not need to conjugate them (as in Equation (1.3)). Some textbooks define the scalar product in the opposite way (not conjugate if it is on the left part, but conjugate if it is on the right part), so we need to pay attention to the way it is defined.
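
If you want to see this rule in action numerically, the following MATLAB snippet is a minimal sketch (the vectors and scalars are arbitrary illustrative choices, not taken from the book). It relies on the fact that MATLAB's dot function conjugates its first argument, which matches the convention adopted here (left-hand side conjugated):

% Numerical check of the conjugation rule of Equation (1.10)
a = [1+2i; 3-1i];  b = [2-1i; 1+1i];  c = [0+1i; 4];  % arbitrary complex vectors
alpha = 2+3i;  beta = 1-2i;                           % arbitrary complex scalars
lhs = dot(alpha*b + beta*c, a);                       % linear combination placed on the left of the scalar product
rhs = conj(alpha)*dot(b, a) + conj(beta)*dot(c, a);   % scalars extracted with a complex conjugate
disp(abs(lhs - rhs))                                  % prints (numerically) zero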

We also need to define some terminology. If I take the scalar product of a vector with itself, then the result is obviously a scalar. It is also a real scalar, as you will prove in the exercise list. We call the square root of this scalar ‘the magnitude of the vector’. So, the magnitude is a real number associated with a vector. Using the symbol to denote the magnitude of the vector , then, by definition:

(1.11)

A vector with magnitude one is called a ‘unit vector’. To specify that a vector is a unit vector, we will replace the arrow with a funny hat, like this: . Thus, again by definition, a vector only deserves a hat if its magnitude is one, that is:

(1.12)

One key notion in linear algebra is the notion of orthogonality. By definition, two vectors are orthogonal if the scalar product between them is zero.

Now, suppose that two unit vectors and are orthogonal (that is, suppose that ). Now suppose that a vector is formed by a linear combination of and :

If we apply the scalar product between and , then, with the help of Definition 2, we find:

But, by definition, is a unit vector (that is, ) and orthogonal to (that is: ). Therefore:

Likewise:

The fact that and when entails that a vector can be formed by a linear combination of two vectors and only if is not orthogonal to both and . After all, if was orthogonal to and , then and would be zero. Of course, this conclusion also holds for linear combinations involving more than two vectors.

Now let us do a mathematical thought experiment. Imagine you have a bag full of vectors. This bag is your vector space. Pick a vector from the bag. Next, pick a second vector, but this time you must choose one that is orthogonal to the first. Now you have two orthogonal vectors. But suppose you want a third one, and it needs to be orthogonal to both vectors you already picked. If you cannot find a third one that is orthogonal to both vectors you already picked, then your bag is a two‐dimensional Euclidean space. If, on the other hand, you can pick a third one, that means your bag is a higher dimensional space. But how much higher? Well, that depends. If, for example, you can find three vectors that are orthogonal to each other (each one must be orthogonal to the other two), but you cannot find a fourth one that is orthogonal to all the other three, that means your bag is a three‐dimensional Euclidean space. And so on.

So, the largest possible group of orthogonal vectors defines the dimension of the space. We call this group a ‘basis of the space’. Thus, any pair of orthogonal vectors forms a basis of a two‐dimensional Euclidean space. Any three orthogonal vectors form a basis of a three‐dimensional Euclidean space. And so on. If, furthermore, the basis is formed only by unit vectors, then we say that it is an orthonormal basis. Unless explicitly stated otherwise, from now on, when I use the term ‘basis’, I mean an orthonormal basis.

Suppose you have a space of a certain dimension, for example, a three‐dimensional Euclidean space. We have just seen that, to assert that the space is three‐dimensional is tantamount to asserting that a basis has three vectors, and that it is not possible to find a fourth vector that is orthogonal to all these three basis vectors. So, that means any vector of this space can be formed by a linear combination of these basis vectors. If it could not, then it would have to be orthogonal to all these three basis vectors, which would entail that your space is not three‐dimensional after all, but at least four‐dimensional. Therefore, any vector can be formed by a linear combination of the basis vectors. This is one of the foundational concepts in our course in signals and systems: we will be constantly expressing signals in terms of linear combinations of other signals, the latter forming a basis of the space where the signals live (I defer more information about this new space until Chapter 2).

For the sake of simplicity, from now on we assume a two‐dimensional Euclidean space, but the concepts we will be reviewing can be straightforwardly extended to higher dimensional Euclidean spaces.

Let us suppose again that vectors and are orthogonal. If they are orthogonal, then necessarily they form a basis of the two‐dimensional Euclidean space. Thus, any vector of this space can be formed by a linear combination of and . For example, a vector can be expressed as:

(1.13)

The numbers and are called the ‘coordinates’ of vector with respect to the basis and . Once a basis has been specified, then a vector is uniquely specified by giving its coordinates. Thus, we can specify the vector by specifying its coordinates and .

But, if the coordinates uniquely specify a vector, then any information about the vector can be expressed in terms of its coordinates. In particular, the scalar product between two vectors can be expressed in terms of their coordinates. Thus, suppose we have a vector , as specified in Equation (1.13), and another vector , specified as:

(1.14)

Let us find an expression for the scalar product in terms of their coordinates. We begin by expressing in terms of a linear combination of basis vectors:

Now we use Definition 2:

Now we express in terms of a linear combination of basis vectors:

And now we use the Conclusion (Equation (1.10)) to extract the scalars from the parenthesis. Thus:

But if and form a basis, then . Furthermore, the basis vectors are unit vectors, that is: . Thus, we conclude that:

(1.15)

Notice that the coordinates of the vector are conjugated in the scalar product. This happened because is on the left‐hand side of the . So, do not forget that the coordinates of the left‐hand side vector are conjugated in the scalar product.

We can also express the magnitude of a vector in terms of its coordinates. According to Equation (1.11):

Using Equation (1.15) in Equation (1.11):

(1.16)

The square root of the product of a complex number with its conjugate is called the ‘magnitude of the complex number’. Thus, denotes the magnitude of the complex number :

(1.17)

Notice that both and are called magnitudes, but they are not the magnitudes of the same object ( is the magnitude of a vector and is the magnitude of a complex number). Using Equation (1.17) in Equation (1.16), we find:

(1.18)
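
As a quick numerical sanity check of Equations (1.11)–(1.18) (the coordinates below are an arbitrary illustrative choice, not taken from the book), the three MATLAB expressions compute the same magnitude:

v = [3+4i; 1-2i];                   % coordinates of a vector in some orthonormal basis
mag1 = sqrt(real(dot(v, v)));       % square root of the scalar product of the vector with itself (a real number)
mag2 = sqrt(sum(abs(v).^2));        % the same magnitude written in terms of the coordinate magnitudes
mag3 = norm(v);                     % MATLAB's built-in Euclidean norm
disp([mag1, mag2, mag3])            % the three values coincide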

Are you tired? If you are, please rest a little bit before reading Section 1.3. It is not difficult, but it is crucial that you understand it well.

1.3 Vector Representation in Different Bases

In Section 1.2, I mentioned that the idea of expressing a vector as a linear combination of basis vectors is a foundational idea in our course. Students usually have no problem with this notion. What tends to confuse them is the fact that there are lots of bases of a space (in fact, an infinite number of bases), and hence lots of possible representations (again, an infinite number of them). Each group of coordinates of a vector is a representation of this vector in a given basis. Thus, the very same vector can be represented with different coordinates, using different bases. The confusion arises when we do not take care to distinguish between the vector itself and its representation in a given basis. In other words, beware not to confuse a vector with its coordinates. As I have just said, the coordinates uniquely specify a vector once a basis is chosen. But, if we change the basis, then the very same vector is specified by different coordinates. Let us spell out this difference. Suppose that we picked the basis and , and specified a vector by specifying its coordinates and , like in Equation (1.13), which I repeat below for convenience.

Equation (1.13)

Now, suppose that we specify a unit vector, which I call . As always, to specify a vector, we must specify its coordinates. Thus, say that we chose the coordinates and to specify :

(1.19)

Notice that, if, by construction, we want to be a unit vector, then according to Equation (1.18) its coordinates must satisfy the condition:

(1.20)

Ok, so let us say that we picked a pair , that satisfies Equation (1.20). Now let us specify another unit vector with the coordinates and :

(1.21)

Of course, if we want to be a unit vector, then we must guarantee that:

(1.22)

But, besides being unit vectors, now we also require to be orthogonal to :

(1.23)

In other words, besides satisfying Equation (1.22), the coordinates of must also satisfy:

(1.24)

To obtain Equation (1.24), I used Equation (1.15) in Equation (1.23).

Ok, so let us suppose we have done this job: we picked a pair of coordinates , that satisfies Equation (1.20), and then we found a new pair , that satisfies Equation (1.22) and Equation (1.24). So, we have two new unit vectors that are orthogonal to each other. In other words, we have a new basis: the vectors and also form a basis. Thus, any vector in the space can be expressed as a linear combination of and . In particular, the very same of Equation (1.13) can also be expressed in the basis formed by and . But the coordinates will be different. We denoted the coordinates of with respect to the basis formed by and by and , but in the new basis we have new coordinates. Let us call the coordinates with respect to the new basis and . Then:

(1.25)

But, since this is the very same of Equation (1.13), then:

(1.26)

Suppose you chose to specify in the basis and . That means you specified and . Now, suppose you also chose a new basis and . Recall that you specified these vectors and by specifying their coordinates in the basis and . So, at this moment you have and specified, and you also have , specified and , specified. Thus, these six numbers have been given. Now, I ask you: if these six numbers have been specified, then are you free to pick whatever and you want? Of course not. If and specify the vector , and this vector has already been specified by the coordinates and , then and have also been already specified. Does that mean that we can express and solely in terms of and ? Of course not, because we also need information about the new basis. In other words, we must be able to express and in terms of , , , , , and .

Let us do it! Let us express: and in terms of , , , , , and . It is not difficult at all. All we need to do is to recall that, to extract a coordinate, we need to take the scalar product between the vector and the unit vector, with the unit vector on the left‐hand side of the scalar product. For example:

where Definition 2 was used. But and form an orthonormal basis, so and . Therefore:

(1.27)

Likewise:

(1.28)

Thus, to find , all we need to do is to find . But we know the coordinates of both vectors in the basis formed by and . Thus, with the help of Equation (1.15):

(1.29)

In the same spirit, we have:

(1.30)

So, if we have specified in one basis, its coordinates in the new basis are uniquely specified in terms of Equation (1.29) and Equation (1.30).

In Equation (1.29) and Equation (1.30), we chose to calculate the scalar product using the coordinates in the basis and . We chose to use this basis because, by assumption, we already knew the coordinates with respect to this basis. But we could have chosen to calculate it in the new basis (or, for that matter, in any basis we want). For example, say that we have the coordinates of and in the basis formed by and :

and:

(1.31)

Then, following the same procedure that led to Equation (1.15), we find that:

(1.32)

Recall that the scalar product is a mapping of two vectors into a scalar. When we change the basis, we change only the representation, that is, only the coordinates, but we do not change the vectors themselves. Thus, the scalar product does not depend on the representation: it does not matter which basis we use; the scalar product must be the same. For example, the scalar product of Equation (1.32) is the same as that of Equation (1.15). In other words:

(1.33)

In the exercise list, you will be asked to check that the last equality in Equation (1.33) is indeed true.
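
To make the change of basis concrete, here is a short MATLAB sketch (the vector and the new basis are arbitrary illustrative choices, not taken from the book). The new coordinates are computed exactly as in Equations (1.29) and (1.30), with the basis vector on the left of the scalar product, and the squared magnitude comes out the same in both bases, in line with Equation (1.33):

v  = [2+1i; -1+3i];               % coordinates of a vector in the canonical basis
e1 = [1;  1i]/sqrt(2);            % first vector of a new orthonormal basis (given in canonical coordinates)
e2 = [1; -1i]/sqrt(2);            % second vector of the new basis, orthogonal to the first
c1 = dot(e1, v);                  % new coordinate 1: basis vector on the left, hence conjugated
c2 = dot(e2, v);                  % new coordinate 2
disp(norm(c1*e1 + c2*e2 - v))                    % reconstruction error: (numerically) zero
disp(sum(abs(v).^2) - (abs(c1)^2 + abs(c2)^2))   % the squared magnitude is the same in both bases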

Often, the basis formed by the vectors and is called a ‘canonical basis’. Here, we will also adopt this terminology, because it will come in quite handy when we compare vectors with signals. But I need to warn you that this terminology is a bit inadequate, because the term ‘canonical’ has an overtone of ‘special’. For example, the Christian texts that made it into the Bible are called ‘canonical’, with obvious connotations of legitimacy and uniqueness. But, here, I want you to forget about the religious connotation, and think of the term ‘canonical basis’ as just a name for this basis. We could just as well have called it ‘the apocryphal basis’, and it would not have made any difference. Indeed, the canonical basis is as significant and as meaningful as any other basis.

For example, in Equation (1.29) and Equation (1.30), we expressed the coordinates with respect to the new basis in terms of the coordinates with respect to the canonical basis. But we did not do that because the canonical basis is special. We only did that because we had assumed that the vectors had been specified in the canonical basis. But we could have specified the vector in the new basis. If we had done that, then we would have needed to find and in terms of and and the new basis, following the same logic as before.

To emphasize that all bases are on an equal footing, let us express and in terms of and and the new basis. Thus, recalling that:

(1.34)

we need to compute . By assumption, now we have the coordinates of with respect to the new basis, so this is the natural choice to compute the scalar product. We also need the coordinates of with respect to the new basis. Calling them and :

(1.35)

It then follows that:

But Equation (1.19) entails that . Furthermore, according to Equation (1.4), entails that . Therefore:

(1.36)

Likewise:

(1.37)

Thus:

(1.38)

In the same spirit:

(1.39)

So, it does not matter which basis you use to specify your vector.

To close this section, one last bit of notation. Often, the coordinates of a vector are specified in a column, like this:

We need to be careful with this notation, as it often leads to confusion. The problem is that it is easy to think that by itself specifies a vector. It does not. To be specified, a vector needs both the coordinates AND the basis. Furthermore, completely different coordinates can specify the same vector. So, for example, the same vector can be specified by the coordinates with respect to the canonical basis, and, also, by the coordinates with respect to the new basis. If I only give you a pair of coordinates, but do not tell you the basis, then you cannot know what the vector is. You need to know the basis!

Now, there is one sense in which a canonical basis is indeed special, but it is a bit silly. It is the sense that, if a pair of coordinates is specified in a column, and there is no mention whatsoever of what basis has been used, then it is implied that it is the canonical basis. But we really need to keep in mind that the same vector can be specified by completely different coordinates, like and . I will highlight this issue again and again, and it will become more evident when we consider the representation of operators, which is the subject of Section 1.4.

1.4 Linear Operators

In signals and systems, operators are the mathematical objects describing systems. Recall that an operator is an object that transforms one vector into another vector. We are interested in studying a particular class of operators, namely, linear operators. By definition, a linear operator obeys the following property:

DEFINITION OF LINEARITY

An operator is linear if, and only if, the following equality is true:

(1.40)

Let us pause to reflect on the meaning of a linear operation. Recall from Equation (1.1) that the vector inside the curly brackets is the vector on which the operator is acting. So, in a relationship of the form , is the input of the operator and is the output. Thus, the object is itself the output of the operator when the input is . According to Equation (1.40), to know how a linear operator acts on a vector formed by a linear combination of other vectors (left‐hand side of Equation (1.40)), all we need to know is how the operator acts on each vector of the linear combination separately (right‐hand side of Equation (1.40)). Using a more formal language, if the output of the linear combination (left‐hand side of Equation (1.40)) is identical to the linear combination of the outputs (right‐hand side of Equation (1.40)), then the operator is linear.

Since this is a crucial property (and we will soon spell out why), to gain a solid intuition about what it means, let us see two examples of systems, one described by a linear operator, and the other by a nonlinear operator.

I have already mentioned that in Chapter 2 we will learn that signals are vectors in a different space. Suppose that you have a certain song, and that it is represented by the signal . So, the song is a vector in this example. Now, suppose you want to modify your song. Thus, you pass it through a system (recall that the system is described by an operator) and out comes a new signal, call it . The system could be, for example, an equalizer. So, is your input, the equalizer is the system (the operator), and is the output. Now, imagine a second situation, where you have one part of the same song stored in another signal, call it , and the other part stored in yet another signal, call it . For example, could be the vocal and could be the instruments. If you play and together, then you get your song, that is, . In this new situation, instead of passing through the equalizer, you decided to pass and separately. Call the output of and the output of . So, first you passed , and obtained . Then you passed and obtained . If your equalizer is linear, then, if you combine the output with the output , the result will be , that is, . The experiment where you obtained directly, by plugging in the input, is akin to the left‐hand side of Equation (1.40), while the experiment where you first obtained , and then , and then combined them, is akin to the right‐hand side of Equation (1.40). If both ways give the same result, then the system is linear.

The equalizer is an example of a linear system. Can you think of an example of a nonlinear system? I like the example of an oven: it is not very algebraic, but it is quite intuitive. Think of the ingredients of a cake as being the vectors, and the oven as being the system. If you get all the ingredients, mix them up like granny taught you, and then put them all together in the oven, out comes a cake. This procedure is akin to the left‐hand side of Equation (1.40): you put all the ingredients (the vectors) together in the input of the operator (the operator is the oven), and the output is your cake. Now, to test if the oven is linear, you need to perform a second experiment, this time following the recipe of the right‐hand side of Equation (1.40). According to the right‐hand side, you need to find out what happens if you put each ingredient separately in the oven, bake them, and then mix the already baked ingredients. So, first, in go the eggs, bake them, take them out. Then, in goes the flour, bake it, take it out. Then the sugar, and so on. After all the ingredients have been baked, and only then, you mix them up. Of course, the result is not a cake, so, in this example, the right‐hand side of Equation (1.40) is not the same as the left‐hand side of Equation (1.40): the left‐hand side is a cake, but the right‐hand side is a disgrace to your coffee break. Conclusion: an oven is a nonlinear system.

Now let us come back to the question of why linearity is so important.

A key idea in engineering and physics is intimately related with the linearity of systems. The idea is that, since a vector (or signal) can be described as a linear combination of basis vectors (or basis signals), then we know how a linear operator acts on any vector (or any signal) if we know how it acts on the basis vectors (or basis signals). As we will see in Chapter 3, the laws of physics can be treated as operators, so you can imagine how powerful and general this idea is. We will come back to this notion again and again in this course, but for now I want to give you at least one example of how it works.

Suppose we have an operator and three vectors , , and , specified in the canonical basis:

Suppose we need to find the vectors resulting from the action of on , , . If is nonlinear, then we need to compute the action of on each of these vectors separately, that is, we need to find , , and . So, we have three jobs to do. But, if is linear, then all we need to know is its action on and . Indeed, if is linear, then:

Notice that the property of linearity was used in the last equality of each equation. So, when is linear, if we know its action on the basis vectors, that is, if we know and , then we know its action on , , and . Thus, we reduced three jobs (the computation of , , and ) to two jobs (the computation of and ). But, of course, and specify the action not only on , , and , but on any vector whatsoever of the vector space. For example, consider the case of a vector field. A vector field associates each point of space with a vector. Since there are an infinite number of points of space, a vector field is a collection of infinitely many vectors. So, if we want to know the action of a nonlinear operator on a vector field, we need to calculate the action on each vector individually. But we have infinitely many vectors, so, like Sisyphus, we have one darn long job to do: it will take nothing short of eternity to finish this job. If the operator is linear, however, our eternal job is reduced to two jobs (or three if the space is three‐dimensional): all we need to know is the action of the operator on the basis vectors. A theory that reduces an eternal job to two or three jobs must be a powerful theory indeed. Notice that vector fields appear all the time in engineering and physics (for example, an electric field is a vector field). As I mentioned earlier, this idea of analyzing a system in terms of its action on basis vectors is central to signals and systems, in particular, and to engineering and physics, in general.
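
Here is a minimal MATLAB sketch of this idea in a two-dimensional space (the operator, written as an arbitrary 2 × 2 matrix, and the coordinates are illustrative choices, not taken from the book). Once the action of the operator on the two basis vectors is known, its action on any other vector follows from linearity:

O  = [2 1; -1 3];          % a linear operator represented by an arbitrary 2x2 matrix
x  = [1; 0];  y = [0; 1];  % canonical basis vectors
Ox = O*x;  Oy = O*y;       % the two 'jobs': action of the operator on each basis vector
a = 4;  b = -2;            % coordinates of an arbitrary vector in the canonical basis
v = a*x + b*y;
direct   = O*v;            % left-hand side of Equation (1.40): act on the linear combination
combined = a*Ox + b*Oy;    % right-hand side: linear combination of the outputs
disp([direct, combined])   % the two columns are identical, as linearity requires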

1.5 Representation of Linear Operators

In Section 1.3 (Vector Representation in Different Bases), we saw that the representation of vectors, that is, their coordinates, depends on the basis. In this section, we will follow a similar line of reasoning to obtain the representation of linear operators.

An operator is an object that transforms one vector into another. Thus, the representation of an operator must be closely connected with how it acts on vectors. So, suppose again that we have a linear operator , and like in Equation (1.1) – which I repeat below for convenience – let us suppose that it acts on a vector (the input) and the result is a vector (the output), that is:

Equation (1.1)

A representation of vectors is nothing else than their coordinates with respect to a given basis. Furthermore, a vector is fully specified when its coordinates are specified (again assuming that a basis has been chosen). Since an operator is an object that transforms a vector into another vector, and a vector is fully specified by its coordinates, then an operator must be an object that transforms one set of coordinates into another set of coordinates. Thus, the representation of an operator must be a mathematical object that specifies how the operator transforms the coordinates of the input vector into the coordinates of the output vector. To find what object this is, let us expand the vectors and of Equation (1.1) into a linear combination of canonical basis vectors. With this expansion, Equation (1.1) reads:

(1.41)

Since we are dealing with linear operators, then, according to