Revolutionize the calculation of mixed derivatives with this groundbreaking text
Transform and inverse transform techniques, such as the Fourier transform and the Laplace transform, enable scientists and engineers to conduct research and design in transformed domains where the work is simpler, after which the results can be converted back into the real domain where they can be applied or actualized. This latter stage in the process, the inverse transform, ordinarily poses significant challenges. New transform/inverse transform techniques carry extraordinary potential to produce revolutionary new science and engineering solutions.
Discrete Taylor Transform and Inverse Transform presents the groundbreaking discovery of a new transform technique. Placing a novel emphasis on the “position variable” and “derivative operator” as main actors, the Discrete Taylor Transform and Inverse Transform (D-TTIT) will facilitate the calculation of mixed derivatives of multivariate functions to any desired order. The result promises to create new applications not only in its allied fields of quantum physics and quantum engineering, but potentially much more widely.
Readers will also find:
Discrete Taylor Transform and Inverse Transform is ideal for any scientific or engineering professional looking to understand a cutting-edge research and design tool.
Page count: 460
Year of publication: 2024
Cover
Table of Contents
Title Page
Copyright
Dedication
Dedication
About the Author
Preface
Acknowledgments
Introduction
I.1 Notation and Elementary Notions
I.2 Orthonormal Bases and Their Corresponding Dual Bases
I.3 Fourier Transform and Inverse Transform and the Associated Resolution of Identity
Note
1 Toy Model I-1:
1.1 Introduction
1.2 Frames and Dual Frames Induced by the Monomials 1, , and
Notes
2 Toy Model I-2:
2.1 Introduction
2.2 Frames and Dual Frames Induced by Monomials 1, , and
Notes
3 Toy Model I-3:
3.1 Introduction
3.2 Frames and Dual Frames
Notes
4 Toy Model I‐4:
4.1 Overcompleteness
4.2 Frames and Dual Frames
Note
5 Toy Model I-5:
5.1 Introduction
5.2 Difference Operators
5.3 Frames and Dual Frames
Note
6 Toy Model I-7:
6.1 Introduction
6.2 Difference Operators
6.3 Frame Vectors
6.4 Frame Operator
6.5 Inverse Frame Operator
6.6 Constructing Skeleton Matrices for
6.7 Practical Implementation
6.8 Dual Vectors
6.9 Dual-Frame Operator
6.10 Conclusions
7 Self-consistent Expressions for
7.1 The Interval
7.2 The Interval
7.3 The Interval
Notes
8 Toy Model I-3:
8.1 A Guide Through the Chapter
8.2 Univariate Functions on Three Nonuniformly Distributed Lattice Points: Derivatives at an Inner Cluster Point
8.3 Setting Up the System of Equations for the Determination of ()
8.4 Matrix Multiplication Expressed in Terms of Exterior Products
8.5 Solving the System of Equations in (8.7) by Successive Elimination (Method 1)
8.6 Exterior Products and the Resolution of Identity (Property 1)
8.7 Inner Products (Property 2)
8.8 Calculation of the Derivative Operators Based on the Inverse of the -Matrix (Method 2)
8.9 Calculating the Derivative Operators Based on the Frame Operator (Method 3)
8.10 Construction of the Derivative Operators in Terms of Rational Polynomials (Method 4)
8.11 Construction of the Derivative Operators Simply-by-Inspection of Indices (Method 5)
8.12 Uniform Lattices
8.13 Conclusions
9 Toy Model I-5:
9.1 The Resolution of Identity
9.2 Setting Up the System of Equations
9.3 Solving the System of Equation in (9.18) by Successive Elimination
9.4 Obtaining the Expressions of the Universal Difference Operators Defined by
9.5 Simplifying the Expressions of the Difference Operators
9.6 Exterior Products of the Position Kets and their Dual Difference Kets
9.7 Uniform Lattices
9.8 The Frame Operator
9.9 The Relationship Between the Resolution of Identity and Biorthogonality
9.10 The Construction of the Derivative Operators by Calculating Residues
10 Toy Model I-6:
10.1 Generating Formulas for the Difference Operators by Residue Method
10.2 Summary of the Relevant Formulas for the Calculation of
11 Toy Model I-7:
11.1 A Guide Through the Chapter
11.2 Univariate Functions on 7 Nonuniformly Distributed Lattice Points
11.3 Setting Up the System of Equations
11.4 Generating Formulas for the Derivative Operators Simply-by-Inspection
11.5 Differential and Position Coordinate Bras
11.6 Differential Bras
11.7 Position Coordinate Bras
11.8 Differential and Position Kets: Uniformly Distributed Lattice Points
11.9 The Biorthogonality and the Resolution of Identity Conditions
11.10 Conclusions: A Brief Philosophical Detour
12 Toy Model II:
12.1 Introduction
12.2 Determination of the Expansion Coefficients
12.3 The Biorthonormality Property
12.4 The Resolution of Identity
13 Toy Model III:
13.1 Discrete Taylor Transform and Inverse Transform of Trivariate Functions
13.2 Determination of the Expansion Coefficients
13.3 Explicit Expressions for the Difference Bras and the Associated Position Kets ,
13.4 Orthogonality and Completeness
Notes
14 Solidification and Further Refinements
14.1 A Guide Through the Chapter
14.2 Introducing Matrix Tensor Product in D-TTIT
14.3 Frame Operator and Matrix Tensor Product
14.4 Enumerating Grid Points
14.5 Error Analysis
14.6 The Impact of the Numbering Scheme of the Grid Points in a Cluster
14.7 Formula for Low-order Difference Operators with Increased Accuracy
14.8 Interpretation and Consequences of the Established Theorems
14.9 Universal Formulas for Mixed Derivatives of Arbitrary Order in 3D
14.10 Building Derivatives of and Their Generalized Forms Simply-by-Inspection
Appendix A: The Canonical Matrix C3×3 and Its Inverse
Appendix B: The Canonical Matrix C3×3 and Its Inverse Revisited
B.1 The Case , ,
B.2 The Case , ,
B.3 The Case , ,
B.4 The Canonical Inverse Matrix
Appendix C: The Canonical Matrix C4×4 and Its Inverse
The Determinant of
C.1 The Case , , ,
C.2 The Case , , ,
C.3 The Case , , ,
C.4 The Case , , ,
Appendix D: The Canonical Matrix C5×5
D.1 The Case , , , ,
D.2 The Case , , , ,
Appendix E: The Canonical Matrix C7×7
Index
End User License Agreement
IEEE Press
445 Hoes Lane
Piscataway, NJ 08854
IEEE Press Editorial Board
Sarah Spurgeon, Editor-in-Chief
Moeness Amin
Jón Atli Benediktsson
Adam Drobot
James Duncan
Ekram Hossain
Brian Johnson
Hai Li
James Lyke
Joydeep Mitra
Desineni Subbaram Naidu
Tony Q. S. Quek
Behzad Razavi
Thomas Robertazzi
Diomidis Spinellis
Alireza Baghai-Wadji
University of Cape Town
Western Cape
South Africa
Copyright © 2025 by Institute of Electrical and Electronics Engineers, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Trademarks Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.
Limit of Liability/Disclaimer of Warranty While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data Applied for:
Hardback ISBN: 9781394240074
Cover Design: Wiley
Cover Image: © oxygen/Getty Images
I dedicate this book to my wife Elisabeth: Thank you for your deep friendship, unwavering support, and encouraging companionship.
My wife, Elisabeth
Alireza Baghai-Wadji is a professor emeritus at the University of Cape Town. He earned a PhD and DSc from Vienna University of Technology, Austria. He has been privileged to occupy academic, executive, and resident principal engineering consultant positions on five continents. His contributions to mathematical physics include diagonalizing linear PDEs, taming infinities in hypersingular dyadic Green’s functions, zooming in the nearfields, constructing Green’s function-induced wavelets (in collaboration with Gilbert Walter), and designing dyadic Green’s function-inspired Dirac delta function-like distributions.
The position variable and the derivative operator play eminent roles in quantum physics. They do not commute; i.e., applying the composite operators and to a test function does not give the same result (the order matters in profound ways). Rather, it is the case that , as a simple calculation reveals. This motivates considering the commutator , which stands for , as a realization of the identity operator. The same is true for the time variable and the derivative operator . The application of the identity matrix to vectors and of the Dirac delta function to functions likewise reproduces the object onto which they act, leaving it unaltered.
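The commutation relation alluded to above can be spelled out explicitly. The following display is a standard quantum-mechanics identity, supplied here for concreteness; it shows why the commutator of the derivative operator and the position variable acts as the identity:

```latex
\left[\frac{d}{dx},\, x\right] f(x)
  = \frac{d}{dx}\bigl(x\, f(x)\bigr) - x\,\frac{d f(x)}{dx}
  = f(x) + x\, f'(x) - x\, f'(x)
  = f(x),
\qquad\text{hence}\qquad
\left[\frac{d}{dx},\, x\right] = \mathbb{1}.
```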
However, the position variable and the derivative operator , respectively the time variable and the derivative operator , and generalizations thereof, are not the exclusive preoccupations of quantum physicists. Their roles in mathematical physics and computational engineering are pervasive and wide-ranging. Associating with the wave number and with the radian frequency is embodied in the planewave propagating along the -axis. Planewaves are (computationally nonideal) examples of bases (functional building blocks), and they can be used to synthesize complicated patterns of fields and waves. Their complex-conjugate counterparts can be employed to analyze fields and waves. The Fourier transform and inverse transform (FTIT) and their discrete (D-FTIT) and fast realizations (FD-FTIT) are, respectively, manifestations of the analysis and synthesis processes via planewaves.
The Taylor series expansion also plays a significant role throughout mathematical physics and computational engineering, very often directly, and more often indirectly and in subtle ways. The Taylor series expansion, too, makes explicit use of the monomials and derivative operators , built from the ingredients and . It requires high-order derivatives of a function, evaluated at a single point, say , and uses the set of monomials to synthesize the function on an interval which contains as an interior or a boundary point. The Taylor series expansion is a fascinating and enabling tool. It reveals that while the high-order derivatives are required to be calculated at one point only (here, ), the very existence of the high-order derivatives (before being evaluated at that point) encodes the information necessary and sufficient for synthesizing (expressing) the function on the entire interval. The information gathering and encoding are of a peculiar type, though: when applying to the test function and evaluating the result at , the result, denoted by , is a measure of the relevance of the building block in reconstructing the entire on the entire interval.
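The synthesis just described, derivatives at a single point reconstructing the function on an interval, can be sketched in a few lines of Python. This is a truncated sum under illustrative assumptions (function names and the choice of f = exp are not the book's notation):

```python
import math

def taylor_synthesize(derivs_at_x0, x0, x):
    """Truncated Taylor synthesis: rebuild f(x) from the derivatives
    f^(0)(x0), f^(1)(x0), ... evaluated at the single point x0."""
    return sum(d * (x - x0) ** n / math.factorial(n)
               for n, d in enumerate(derivs_at_x0))

# For f = exp, every derivative at x0 = 0 equals 1; fifteen terms already
# reconstruct exp(0.5) far beyond engineering accuracy.
approx = taylor_synthesize([1.0] * 15, 0.0, 0.5)
```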
Seen from this perspective, this is in essence what the FTIT, or, for that matter, every transform and its associated inverse transform, accomplishes. The Fourier transform of a function analyzes the function and determines the amount of the ingredient basis function that is required to synthesize the function. The Taylor transform of a function analyzes the function and determines the amount of the basis function that is required to synthesize the function.
Engineers and scientists appreciate that transforming functions is a big deal. Functions can be scrutinized and manipulated much more easily and pointedly in the transformed domain, which is the main reason for transforming them in the first place. Every bona fide transform, however, must admit a well-behaved associated inverse transform. Since the transform of a function (analysis) followed by the corresponding back transform (synthesis) must recover the originating function, it is required that , with being the identity operator (the unit matrix in discrete cases). All known transform and inverse transform techniques in mathematical physics follow this scheme. It should emphatically be pointed out that the relationship does not involve any test function . The specifics of a particular transform technique are incorporated into or by design. Consequently, or , respectively, must be determined by resorting to the relationship governing the transform and its back transform.
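The requirement that analysis followed by synthesis recovers the originating function can be illustrated with the discrete Fourier pair, a concrete realization of the scheme above. This is a plain-Python sketch; `dft` and `idft` are illustrative names, not the book's notation:

```python
import cmath

def dft(samples):
    """Analysis: expansion coefficients with respect to the discrete
    planewave basis exp(-2*pi*i*k*n/N)."""
    N = len(samples)
    return [sum(samples[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

def idft(coeffs):
    """Synthesis: reconstruct the samples; idft(dft(f)) realizes the
    identity requirement on the transform/inverse-transform pair."""
    N = len(coeffs)
    return [sum(coeffs[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N
            for n in range(N)]

f = [1.0, 2.0, 0.5, -1.0]
f_rec = idft(dft(f))  # recovers f up to round-off
```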
In the 1980s, mathematicians, physicists, engineers, and signal processing professionals jointly came up with original ideas to weaken centuries-old conditions imposed upon basis functions, e.g., locality, orthonormality, and completeness requirements. These efforts led to the development of the theory of wavelets and dual wavelets and, in particular, the powerful concept of frames and dual frames. In the following decade, many researchers, including this author, expected utterly new and impact-rich opportunities in computational engineering. However, the anticipated pervasive spread of wavelets and frames in computational engineering did not materialize, despite the fact that wavelets and frames seemed to enable the design of problem-specific analysis and synthesis tools. The concept of developing problem-tailored computational tools is not only fascinating in its own right but also guides and conditions the intuition in a variety of ways. This author (together with Gilbert Walter) has successfully engaged in constructing wavelets based on Green’s functions. This author has also demonstrated that Dirac delta functions can be constructed from specific dyadic Green’s functions.
Computational electromagnetics professionals are well aware that the utilization of basis functions and weighting functions in method of moments (MoM) applications aims at synthesizing a desired function with a priori unknown coefficients and at analyzing (determining) those coefficients. The basis and weighting functions involved are, in general, biorthogonal.
The author’s fascination with ‘‘’’ and ‘‘,’’ which ultimately led to the development of the Discrete Taylor Transform and Inverse Transform (D-TTIT), has its genesis not only in quantum physics but reaches all the way back to the works of Newton, Leibniz, and Cauchy. The limit process built into the conceptualization of has inspired the author to search for alternative differential and integral formulations, various summability and integrability methods, and in particular generalized functions, a prominent example of which is Dirac’s delta function. The set of monomials , being the archetypical basis, is the foundation of a myriad of orthonormal basis (ONB) functions in mathematical physics. Each classical orthonormal basis has the potential to give rise to a transform and the corresponding inverse transform. The theories of wavelets and frames extend this concept to biorthogonal general systems of functions. Since a correspondence can be established between the archetypical basis and the canonical ONBs of mathematical physics, extensions of D-TTIT can be envisioned.
This book has been in the making since 2019. It introduces D-TTIT and authentically shares its historical development with the reader; the reader is asked to overlook the occasional resulting inconsistencies in notation. With each passing year, new aspects of the theory revealed themselves. With each proofreading session of a chapter, new connections with group theory, number theory, and other branches of mathematics were recognized, which had to be investigated, examined, interpreted, refined, or discarded. The work is by no means complete. However, it has reached a fair level of maturity, can be used in engineering applications, and will hopefully inspire other researchers to examine any of the salient features of the theory which deserve further investigation. The storytelling in the book is based on the simplest toy models and variations thereof. The toy models involving univariate functions (e.g., ) are comparatively simple. At times the calculations are lengthy and laborious. All relevant details have been provided, without any reservation. The motivation behind this has been to enable readers to compare their calculations against the solutions, should they wish to do so. The interested reader may also benefit from the detailed ideation processes.
The contents of the book can be divided into three parts, without explicitly having introduced the notion of parts. Part I deals with univariate functions, Part II with bivariate functions, and Part III with trivariate functions. The introduction briefly and casually introduces the notions of orthonormal bases, nonorthonormal bases, frames, frame operator, inverse frame operator, and dual frames. The notion of the identity operator has been introduced as well. Dirac’s notion of bras (row vectors) and kets (column vectors), along with the inner product and the exterior product, have been discussed. The Fourier transform and inverse transform have been reviewed within the concept of the resolution of identity.
Chapters 1–11 concern toy models involving univariate functions. Thereby, the early chapters invite the reader to search for patterns, identify them, and utilize them. The latter chapters provide recipes for calculating difference operators of arbitrary complexity.
Chapter 12 analyses bivariate functions (e.g., ). It deals with the simplest possible toy model in this category.
Chapter 13, discussing trivariate functions (e.g., ), is by far the longest chapter, comprising nearly one-third of the book, despite the fact that it deals with the simplest possible toy model in this category. Every conceivable detail has been provided to encourage further research.
Chapter 14 refines the material from previous chapters and consolidates major ideas. It prominently discusses the notion of a matrix tensor product and its importance in calculating mixed difference formulas.
One interesting byproduct of D-TTIT is that it provides a means for ‘‘constructing’’ inverses of matrices of a certain type, simply-by-inspection. Thereby, the dimension of the originating matrix can be arbitrary. The exhaustive discussion of one category of such matrices has been relegated to the appendices.
To keep the size of the book at an acceptable level, alternative or larger toy models dealing with bivariate and trivariate functions have not been included. Further work is necessary to enhance the intuition for identifying possible hidden patterns when dealing with multivariate functions. Above all, constructing mixed derivatives on curved surfaces is of particular interest. The ability to work with indices within the D-TTIT framework is expected to facilitate the laborious manipulations and calculations and to avoid complicated mappings such as conformal mapping. Number theory, lattices on tori and manifolds, Galois fields, Lie groups and algebras, graph theory, and other powerful techniques in abstract algebra promise utility in extending the scope of D-TTIT. Already at this stage the reader may appreciate the emergence of simply-by-inspection formulas similar to those identified in dealing with univariate functions.
Throughout the book, details of calculations have been carried out and presented painstakingly to allow the reader to gain an understanding of how they were conceived in the first place. After fully grasping the solution strategy, the reader will witness the simplicity and highly structured form of the final results, which are, above all, generalizable to models with an arbitrary number of ordered lattice points. The direct calculation of these models, based on solving a simultaneous inhomogeneous system of equations, would be a daunting undertaking. The fact that the final results exhibit astonishingly structured and simple patterns, possibly pregnant with further symmetries, suggests that the D-TTIT formulation relies on deeper, foundational relationships yet to be identified. This author’s current research is focused on identifying these relationships. Certain facts and properties can be identified already at this stage of the development of D-TTIT. Number theory, various number systems, group theory, ring theory, category theory, polynomial and various other algebras, combinatorics, and continued fractions seem to provide guidance. It is undoubtedly very rewarding to examine the relationship between D-TTIT and these most powerful theories, which continue to have immense currency in nearly all advanced theories in physics.
As will become clear in the course of studying the toy models, the columns of the originating matrices are sampled values of the basis on uniform or nonuniform lattices. This finding promises to be of theoretical and practical merit in its own right. It requires further analysis and will hopefully inspire further discoveries.
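This observation can be made concrete with a small sketch: build the matrix whose entries are monomials sampled on a lattice, and invert the resulting system to obtain difference-operator weights at a chosen point. The function name, the three-point lattice, and the normalization by factorials are illustrative assumptions, not the book's toy models:

```python
import math

def derivative_weights(nodes, xc, order):
    """Weights w_i such that sum_i w_i * f(x_i) approximates f^(order)(xc).
    Row m of the system matrix holds the sampled monomials
    (x_i - xc)^m / m!; the right-hand side selects the m = order row."""
    n = len(nodes)
    A = [[(x - xc) ** m / math.factorial(m) for x in nodes] for m in range(n)]
    b = [1.0 if m == order else 0.0 for m in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
            b[r] -= factor * b[col]
    # Back substitution.
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

# First-derivative weights at the inner point of a nonuniform 3-point lattice;
# by construction they are exact for polynomials up to degree 2.
w = derivative_weights([0.0, 0.3, 1.0], 0.3, 1)
```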
The reader who starts with Chapter 1 and progresses through the chapters as they are arranged in the book will presumably benefit the most from the developed ideas and will most likely draw inspiration for research work of their own. The reader who wishes to acquire a general understanding of transform and inverse transform techniques should read the Introduction and browse through any of Chapters 1–11. The reader who is eager to learn the techniques and experiment with the formulas should read Chapters 8–11. Going through Parts II and III (the chapters dealing with bivariate and trivariate functions, respectively) will inspire the reader to develop alternative toy models corresponding to those scrutinized in Part I (the chapters discussing univariate functions).
This book has been written concurrently with MQPET, Volume 1: Fundamentals, and MQPET, Volume 2: Governing Equations, both with SciTech Publishing, an imprint of the IET. Thus, it should come as no surprise that the position variable and the derivative operator feature prominently both in Mathematical Quantum Physics for Engineers and Technologists (MQPET) and in this book. Upcoming volumes of MQPET promise to provide an amalgamation of physics, applied mathematics, and signal processing, and thus to continue to feed and enrich the development of D-TTIT.
I thank Professor Douglas H. Werner, Series Editor, for including my book in The IEEE Press Series on Electromagnetic Wave Theory and Applications.
I am grateful to the anonymous reviewers for their time and encouraging reviews. Originally, I had intended to publish the Introduction and Chapter 14 as part of a comprehensive follow-up treatise. In response to the recommendations of the reviewers, I decided to include the Introduction and Chapter 14 in this book.
I express my gratitude to Ms. Margaret Cummins, Editorial Director, Technology & Engineering Careers Wiley Education Publishing; Ms. Mary Hatcher, Editor, Wiley-IEEE Press Acquisitions; Ms. Aileen Storry, Executive Publisher, Electrical and Computer Engineering at Wiley; and Ms. Victoria Bradshaw, Senior Editorial Assistant, Electrical Engineering Technology & Engineering Careers, for their utmost professionalism, enthusiasm, support, and encouragement.
I express my gratitude to Ms. Kavipriya Ramachandran, Managing Editor, Academic Professional Learning, Chennai, India, for her professionalism and patience, and the entire production team for their support and splendid work.
I thank Mr. Govindanagaraj Deenadayalu, Content Refinement Specialist, Wiley, Chennai, India, and his team for their meticulous work, patience, and support.
It has been an enriching and rewarding experience working with the Wiley-IEEE Press team and its associates.
Vienna, Austria 2024
Alireza Baghai-Wadji
The concept of the Taylor series expansion (synthesis, inverse transform) forms the foundation of numerous theoretical developments and applications in mathematical physics and engineering in general, and in computational electromagnetics in particular. It occupies a prominent position in the impressive toolbox of theoreticians and practitioners alike. The monomials in the near field and the inverse monomials in the far field serve, respectively, as “microscopes” and “telescopes” for unraveling functional behavior and estimating the accuracy and adequacy of algorithms. It may be argued that established series and integrals, most notably the Fourier, Laplace, and allied transforms and inverse transforms, possess similar and presumably superior properties. However, the Taylor transform (analysis, the determination of the expansion coefficients), being fundamentally different from known transforms, deserves adequate scrutiny: (i) the building blocks of the widely known Fourier transform, e.g., sine, cosine, and exponential functions, are themselves conveniently expressible in terms of monomials; (ii) standard series (integrals) are introduced in the context of analysis (transform) and synthesis (inverse transform). The theory introduced in this book places the Taylor series (synthesis, inverse transform) on an equal footing with established techniques, and consequently introduces the associated Taylor transform in a novel way. The theory-based systematic procedure for achieving this objective results in optimal construction rules for the calculation of difference operators of any desired order and, more importantly, simply-by-inspection. Thereby, simple algorithms for estimating errors in the constructed difference formulas manifest themselves.
The genesis of the presented theory can be found in the author’s efforts in near- and far-field analysis, the development of algebraic and exponential regularization techniques, his preoccupation with noncommutative operators in quantum physics, and the theory of generalized functions, the Dirac delta function being the most prominent example. The exposition is greatly simplified by employing Dirac’s compact bracket notation and by utilizing the powerful theory of frames and dual frames. This chapter briefly touches upon the theory of orthonormal bases and their dual bases, non-orthonormal bases and their dual bases, and frames and dual frames, by considering the simplest possible examples, i.e., vectors with two components. The notions of inner and exterior products and the resolution of identity are introduced. The Fourier transform and inverse transform are reviewed in light of the concept of the resolution of identity.
:
The set of positive integers, .
:
The set of non-negative integers, .
:
The set of integers, .
:
The set of real numbers.
:
The set of positive real numbers.
:
The two-dimensional (2D) Euclidean space.
:
The three-dimensional (3D) Euclidean space.
:
The set of complex numbers.
The Kronecker delta symbol $\delta_{mn}$,
$$\delta_{mn} = \begin{cases} 1, & m = n, \\ 0, & m \neq n. \end{cases}$$
The Dirac delta function $\delta(x - x_0)$ is defined in terms of the sifting property,
$$\int_{-\infty}^{\infty} f(x)\, \delta(x - x_0)\, dx = f(x_0),$$
subject to the “normalization” condition,
$$\int_{-\infty}^{\infty} \delta(x - x_0)\, dx = 1.$$
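The sifting and normalization properties can be illustrated numerically by replacing the Dirac delta with a narrow normalized Gaussian; in the following sketch the test function, the width, and the sampling point are illustrative choices, not taken from the text.

```python
import numpy as np

# Kronecker delta: 1 when the indices coincide, 0 otherwise.
def kronecker(m, n):
    return 1 if m == n else 0

# Replace the Dirac delta by a narrow normalized Gaussian delta_eps and
# check the sifting and normalization properties by Riemann summation.
eps = 1e-3                                  # illustrative width
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
x0 = 0.3                                    # illustrative sampling point
delta_eps = np.exp(-(x - x0) ** 2 / (2 * eps ** 2)) / (eps * np.sqrt(2 * np.pi))
f = np.cos(x)                               # illustrative test function

sift = np.sum(f * delta_eps) * dx           # approximates f(x0)
norm = np.sum(delta_eps) * dx               # approximates 1
```

As the width shrinks (with the grid refined accordingly), `sift` tends to `f(x0)` and `norm` to unity.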
Many representations of the Dirac delta function are available in mathematical physics. Additionally, this author has shown that scalar or dyadic Green’s functions associated with partial differential equations, as they arise in boundary value problems, can be utilized to generate problem-specific Dirac delta functions. The details are, however, irrelevant for the present discussion and will not be dealt with in this book.
Consider the column vector with components . It is instructive to introduce the “ket-” vector and denote it by ,
Unless explicitly mentioned, .
The row vector with real-valued components will be referred to as the “bra-” vector and denoted by ,
The superscript denotes transposition. For complex-valued , the bra vector stands for the Hermitian conjugate ,
with denoting the complex conjugate of .
The inner-product of the ket-vectors and is denoted by . Employing Dirac’s bracket notation,
The exterior-product of the ket-vectors and is denoted by ,
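In a concrete two-component setting, the bra, inner-product, and exterior-product operations can be sketched with NumPy; the vector components below are arbitrary illustrative values.

```python
import numpy as np

# Illustrative two-component kets (column vectors) with complex entries.
u = np.array([[1.0 + 2.0j], [3.0 - 1.0j]])   # |u>
v = np.array([[0.5 + 0.0j], [1.0 + 1.0j]])   # |v>

bra_u = u.conj().T         # <u|: the Hermitian conjugate, a row vector

inner = (bra_u @ v)[0, 0]  # inner-product <u|v>, a complex scalar
outer = u @ v.conj().T     # exterior-product |u><v|, a 2x2 matrix
```

For real-valued components the Hermitian conjugate reduces to the plain transpose, matching the definition above.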
Given the matrix ,
with , the inverse of denoted by is,
The determinant of the matrix is denoted by .
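For the $2 \times 2$ case the inverse and determinant can be written out explicitly; the helper `inv2` below is a hypothetical name for the adjugate-based formula.

```python
import numpy as np

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c          # determinant: ad - bc
    if det == 0:
        raise ValueError("matrix is singular")
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[2.0, 1.0], [1.0, 3.0]])   # illustrative matrix
A_inv = inv2(A)
```

Multiplying `A_inv @ A` reproduces the identity matrix, confirming the formula.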
The set of monomials , also written as , constitutes a complete non-orthogonal basis on any finite interval with and . Obviously, the set of monomials can be generated by successively multiplying the preceding element by the first-order monomial. This property is the genesis of the integer algebra in the present theory. The myriad implications of this property, which carries over to two, three, and higher dimensions, will not be discussed further in this text.
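The generation of monomial samples by successive multiplication can be sketched as follows; the grid and the highest order are illustrative choices.

```python
import numpy as np

# Starting from the constant monomial, each subsequent monomial sample
# is obtained by one further multiplication with the grid values.
x = np.linspace(-1.0, 1.0, 5)        # illustrative grid
monomials = [np.ones_like(x)]        # zeroth-order monomial
for _ in range(3):
    monomials.append(monomials[-1] * x)   # next order, by multiplication
```

No explicit powers need to be computed; this is the successive-multiplication property referred to above.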
The derivative of the function with respect to its argument is denoted by or .
The derivative of the function with respect to its argument and evaluated at the point is denoted by , or .
The -factorial is denoted by with .
The formal Taylor series expansion of the function is denoted by any of the following representations,
with being the -derivative of , evaluated at , and divided by . The qualifier “formal,” as customarily and routinely employed in the quantum physics literature, alludes to the convention that whether or not the above series converges, and in which sense it equals (represents) , are not the primary concerns in this text.
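As a concrete illustration of truncated Taylor synthesis, the exponential function and the truncation orders below are illustrative choices, not taken from the text.

```python
import math

# Truncated Taylor synthesis of the exponential function about 0:
# keep terms up to order N and compare with the exact value.
def taylor_exp(x, N):
    return sum(x ** n / math.factorial(n) for n in range(N + 1))

approx = taylor_exp(0.5, 10)
exact = math.exp(0.5)
```

Increasing the truncation order `N` drives the partial sum toward the exact value, here to well within double precision at order 10.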
Several symmetric closed intervals of the form , with and are considered in this text. Each interval is associated with a set of lattice (grid) points in the corresponding lattice. Examples are,
Asymmetric intervals are also considered along with their associated sets of lattice (grid) points in the corresponding lattices. Examples are,
with . In Chapters 7–11, a powerful index-based notation will be introduced.
Apart from associating a lattice with an interval, a toy model is also assigned to a selected interval. Consider the symmetric interval along with its associated set of equidistant lattice points . Let the function , being defined on , possess derivatives to any order desired. Let , , , , and denote the function values , , , , and , respectively. Let the ket vector denote the column vector with the components , , , , ,
This defines the -point toy model. The Discrete Taylor Transform and Inverse Transform (D-TTIT) introduced here concerns the analysis and synthesis of functions in terms of various low-order toy models. The constructive (inductive) approach to the development of the theory enables generalization in a self-evident manner. It is demonstrated that the solutions to the posed problems can be found in closed form and simply by inspection. These attributes should be viewed as the hallmark of the developed theory.
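A minimal sketch of such a toy model, assuming a 5-point equidistant lattice on a symmetric interval; the half-width and the sample function are illustrative assumptions.

```python
import numpy as np

# Hypothetical 5-point toy model: sample a smooth function on an
# equidistant lattice of the symmetric interval [-a, a] and collect
# the samples in a ket (column) vector.
a = 1.0                                  # illustrative half-width
lattice = np.linspace(-a, a, 5)          # assumed equidistant lattice
f = np.cos                               # illustrative smooth function
ket_f = f(lattice).reshape(-1, 1)        # column vector of samples
```

The ket vector of samples is the object the D-TTIT analysis and synthesis operate on.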
Consider the orthonormal vectors and in the Euclidean space, satisfying,
Dirac’s bracket notation has been employed, which involves the kets and , and the corresponding bras and , respectively. The bracket notation stands for the inner-product. The superscript signifies transposition.
The addition of the exterior (outer) products and results in the identity matrix ,
The resulting equation,
and similar expressions in finite or infinite dimensions will be referred to as the “resolution of identity.” The ability to “design” relationships of the type (I.17) is the quintessential step underlying bona fide transforms (the analysis step) and the associated inverse transforms (the synthesis step), as will be elaborated in this text. The simple examples in this chapter are meant to enhance the reader’s intuition.
Consider an arbitrary ket vector in . Applying both sides of (I.17) onto ,
Denote the projections of onto and by and , respectively, i.e.,
The determination of the coefficients and is referred to as the analysis step!
The synthesis step Substituting and back into (I.18),
comprises the synthesis step!
Summary Equations (I.19) constitute the “analysis step,” while (I.20) is referred to as the “synthesis step.” The ket-vectors and are called “basis vectors.” The associated bra-vectors and , respectively, are here simply the transposes of the ket-vectors and . In general, the ket- and bra-vectors which constitute the expression for the resolution of identity, (I.17), are referred to as the “basis vectors” and their corresponding “dual basis vectors,” respectively. In the present case, where the simplest possible example of a canonical orthonormal basis in is investigated, the bra-vectors and are themselves the dual basis vectors. The next two entries will reveal that the relationship between a basis and its dual basis is in general slightly more delicate, yet straightforward. Occasionally, it is instructive to cast the abstract form in (I.20) in the following alternative standard form, in the position space,
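The analysis and synthesis steps for the canonical orthonormal basis in $\mathbb{R}^2$ can be verified numerically; the arbitrary ket below is an illustrative choice.

```python
import numpy as np

# Canonical orthonormal basis kets |e1>, |e2> in R^2.
e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])

# Resolution of identity: |e1><e1| + |e2><e2| = I.
I = e1 @ e1.T + e2 @ e2.T
assert np.allclose(I, np.eye(2))

u = np.array([[3.0], [-2.0]])   # an arbitrary ket (illustrative)

# Analysis step: project |u> onto the basis vectors.
c1 = (e1.T @ u).item()
c2 = (e2.T @ u).item()

# Synthesis step: rebuild |u> from the expansion coefficients.
u_rec = c1 * e1 + c2 * e2
assert np.allclose(u_rec, u)
```

Since the basis is orthonormal, the bras used in the analysis step are simply the transposes of the kets, exactly as stated in the summary.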
In the preceding case the basis vectors and were taken to be orthonormal. The condition of normality is not essential: (nonzero) non-normal vectors can always be rendered normal by dividing them by their respective magnitudes (norms). The orthogonality condition is, however, crucial. In this entry the consequences of relaxing the orthogonality condition are investigated. As an example, consider the (non-collinear) non-orthogonal basis vectors and ,
Furthermore, consider a general vector in the Euclidean space .
The resolution of the identity Adding the exterior products and ,
Denoting the resulting matrix by ,
Thus, relaxing the orthonormality condition of the basis vectors and has resulted in the sum of the exterior products of the basis vectors with themselves . Thereby, . As long as the chosen basis vectors and are not collinear, the inverse exists. In the current case,
Multiplying (I.24) from the left by
where the “dual basis vectors” and manifest themselves, as indicated,
Or, more explicitly,
Thus, given the non-orthonormal basis vectors and , for resolving the identity matrix , the dual basis vectors and must be constructed first. Substituting (I.28) into (I.26),
Explicitly carrying out the exterior products and adding is revealing,
It is illuminating to observe the above intricate interplay of terms, despite the fact that the dual vectors and were constructed (designed) to yield this result in the first place. The underlying ideas should be reinforced: the ability to establish the relationship (I.29) is tantamount to having established a transform and the corresponding inverse transform, as is readily demonstrated next.
The analysis step Applying both sides of (I.29) to ,
The determination of the coefficients and , via the inner-products,
is referred to as the analysis step!
The synthesis step Substituting (I.32) into (I.31),
comprises the synthesis step. The fact that this is an identity can be readily examined:
Summary Equations (I.32) constitute the “analysis step,” while (I.33) is referred to as the “synthesis step.” The ket-vectors and are called the “basis vectors,” and their associated “dual basis vectors” and needed to be constructed. The dual vectors are not simply the bra vectors and , the transposed counterparts of and , respectively. Alternatively, writing (I.33) in the standard form,
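The construction of the dual basis and the resulting analysis/synthesis round trip can be sketched numerically; the particular non-orthogonal basis vectors below are an illustrative choice.

```python
import numpy as np

# Non-orthogonal, non-collinear basis kets in R^2 (illustrative values).
b1 = np.array([[1.0], [0.0]])
b2 = np.array([[1.0], [1.0]])

# Sum of the exterior products of the basis vectors with themselves.
S = b1 @ b1.T + b2 @ b2.T
S_inv = np.linalg.inv(S)        # exists since b1, b2 are not collinear

# Dual basis vectors, obtained by applying the inverse of S.
d1 = S_inv @ b1
d2 = S_inv @ b2

# Resolution of identity from the basis / dual-basis pairs.
assert np.allclose(b1 @ d1.T + b2 @ d2.T, np.eye(2))

# Analysis (inner products with the duals) and synthesis (round trip).
u = np.array([[2.0], [5.0]])    # an arbitrary ket (illustrative)
c1 = (d1.T @ u).item()
c2 = (d2.T @ u).item()
u_rec = c1 * b1 + c2 * b2
```

Note that the duals differ from the plain transposes of the basis kets, in line with the summary above.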
A further relaxation is allowing the basis vectors to be linearly dependent. Consider the ket vectors , , and ,
along with a general vector in the Euclidean space. The lengths of the vectors in (I.36) are not unity, the vectors are not orthogonal, and they are not linearly independent. They are referred to as frame vectors. The task is the determination of the dual frame vectors, a task which largely parallels the procedure in the preceding entry. The presented scheme is not restricted to three frame vectors; it holds valid for any finite number of frame vectors .
The resolution of the identity Add the three exterior products , , and ,
Denote the resulting matrix by ,
The matrix is called the frame operator. The inverse of the frame operator exists,
Multiplying (I.38) from the left by
where the “dual-frame vectors” , , and manifest themselves, as indicated in (I.40),
Or, more explicitly,
To resolve the identity matrix , (I.40),
The verification of this result is immediate,
It is reiterated that the ability to establish the relationship (I.43) is equivalent to having established a transform and the corresponding inverse transform, as is readily demonstrated next.
The analysis and synthesis steps To see the analysis- and synthesis steps in action, consider,
Summary As shown in (I.45), the evaluation of the inner-products , , and amounts to the “analysis step.” Subsequently, multiplying the inner-products by the corresponding ket-vectors , , and accomplishes the “synthesis step.” The ket-vectors , , and are called the “frame vectors,” and their associated “dual frame vectors” , , and needed to be constructed. Here, too, the dual frame vectors are not simply the bra vectors , , and (the transposed counterparts of , , and , respectively). Alternatively, writing (I.45) in the standard form,
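The frame-operator construction carries over to code essentially verbatim; the three frame vectors below are an illustrative choice of more vectors than dimensions.

```python
import numpy as np

# Three frame vectors in R^2: more vectors than dimensions, hence
# necessarily linearly dependent (an illustrative choice).
frame = [np.array([[1.0], [0.0]]),
         np.array([[0.0], [1.0]]),
         np.array([[1.0], [1.0]])]

# Frame operator: the sum of the exterior products of the frame vectors.
S = sum(f @ f.T for f in frame)
S_inv = np.linalg.inv(S)

# Dual frame vectors, obtained by applying the inverse frame operator.
dual = [S_inv @ f for f in frame]

# Resolution of identity from the frame / dual-frame pairs.
assert np.allclose(sum(f @ d.T for f, d in zip(frame, dual)), np.eye(2))

# Analysis step (inner products with the duals), then synthesis step.
u = np.array([[4.0], [-1.0]])   # an arbitrary ket (illustrative)
coeffs = [(d.T @ u).item() for d in dual]
u_rec = sum(c * f for c, f in zip(coeffs, frame))
```

The same code works unchanged for any finite number of frame vectors, as long as the frame operator remains invertible.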
(The reader might skip this section on a first reading without any loss.) The Fourier transform and inverse transform can be viewed as an archetypal example for elucidating the intricacies of the underlying ideas, the elegance and power of the Dirac notation, and the eminent role of the resolution of identity. The following derivations are formal, in the sense that the operations carried out are assumed to be admissible.
Given the function , its formal Fourier transform is ordinarily defined as,
Since the integration variable is a dummy variable, for “bookkeeping” purposes in the following discussion, it is instructive to express the integral in (I.47) in terms of a variable different from , e.g., ,
Conversely, given the function , the original function can be recovered employing the inverse Fourier transform formula,
This equation states that the information content in , suffices to reconstruct the original function for arbitrary . Substituting (I.48) into (I.49), splitting into , and rearranging,
Introducing the orthonormal basis functions,
equation (I.50c) can be cast into,
Inspired by Dirac’s formalism in quantum physics, and introducing the abstract Hilbert space , the function is taken to be represented by the ket-vector . The relationship between and is established as follows,
Interpretation: Projecting onto the coordinate basis ket vector gives . Thereby, is the eigenvector of the position operator ,
with the normalization condition of the eigenvectors,
REMARK Note that , exclusively in this section, should be understood in terms of the definition (I.54) and the orthonormality property (I.55). The ket in this section should not be confused with the monomial sampled at the lattice points.
□
As will be demonstrated momentarily, the “completeness” property (the resolution of identity property) of the eigenfunctions , i.e.,
serves well in the manipulations presented here, both in terms of clarity and elegance. The second equation in (I.56) expresses the fact that the dummy integration variable can be chosen freely.1
The introduction of the abstract ket vector , inhabiting the abstract Hilbert space , is a convenient tool which is employed here, and does not require the knowledge of any of the mathematical intricacies in quantum physics. For the purposes in the present discussion, projecting onto, e.g., , simply amounts to sampling at , and thus yielding . In countably infinite dimensional Hilbert space , the preceding statement enables one to have the following associations,
In finite dimensions, which are the concern in this work, sampling at the points on the symmetric interval ,
Or, more generally, sampling at the points on the asymmetric interval ,
with and .
In view of (I.52), Dirac’s notation suggests writing,
Substituting the above expressions into (I.52),
Noting that the inner-product is an abbreviation for , transferring between and in , and also viewing as , results in,
The integral sandwiched between the brackets can be recognized as the identity operator , as indicated. Thus, with
Finally, transferring into the inner-product , which is an abbreviation for , leads to,
which is tantamount to the completeness of the basis functions , and thus,
The above equation conveys the following information: take the -basis ket vector , build the exterior product , and add over all possible instances (integrate over ), to obtain the identity operator in the abstract Hilbert space . The content in (I.65) can be “translated” into the language of the “ordinary” Hilbert function space of quadratically integrable functions, by multiplying the L.H.S. and the R.H.S. of (I.65) by the coordinate ket vectors and , respectively,
Thus, with ,
Finally, with , and combining the exponential functions, the more familiar integral representation for the Dirac delta function offers itself,
In the above discussion, the starting point was the Fourier transform and inverse transform (FTIT) formulas. It was established that FTIT essentially amounts to the existence of an associated resolution of identity.
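A discrete counterpart of this round trip is the DFT followed by the inverse DFT, which recovers the original samples up to floating-point error; a minimal check with NumPy (sample length and data are illustrative):

```python
import numpy as np

# Forward DFT (analysis) followed by inverse DFT (synthesis) recovers
# the original real samples, mirroring the resolution of identity.
rng = np.random.default_rng(0)
f = rng.standard_normal(64)      # illustrative real samples

F = np.fft.fft(f)                # analysis step
f_rec = np.fft.ifft(F)           # synthesis step
```

The reconstruction `f_rec` is real up to rounding, and equals the input samples.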
PROBLEM Prove that is an integral expression for the identity operator .
Multiply
on the arbitrary ket vector ,
With , and ,
Project both sides onto the arbitrary ket vector ,
With and ,
The fact that the choices of the ket vector and the projecting coordinate basis ket vector were arbitrary proves the claim in the problem.
PROBLEM Show that applying onto and projecting the result onto quintessentially amount to the Fourier and inverse Fourier transform.
Apply onto ,
With , and inserting into ,
Rearranging,
With and ,
Bringing outside the ξ-integral,
The under-braced integral is equal to the Fourier Transform of the function of , thus,
Projecting both sides onto the coordinate ket vector ,
Considering and ,
a relationship which establishes the inverse Fourier transform.
PROBLEM To further appreciate the power of the resolution of identity relationship, consider the following. Given the ket vectors and in , translate into the language of the coordinate space.
The following steps are self-explanatory,
The last integral is the interpretation of in the coordinate space. In particular, with ,
where refers to the norm of .
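The discrete counterpart of this norm identity is the Parseval relation for the DFT, which can be checked directly; with NumPy's unnormalized forward convention, the transform-domain sum carries a $1/N$ factor (sample length and data below are illustrative).

```python
import numpy as np

# Parseval relation for the DFT: the squared norm computed in the
# coordinate domain equals the transform-domain sum divided by N.
rng = np.random.default_rng(1)
f = rng.standard_normal(128)     # illustrative real samples
F = np.fft.fft(f)

norm_x = np.sum(np.abs(f) ** 2)            # coordinate space
norm_k = np.sum(np.abs(F) ** 2) / len(f)   # transform domain
```

The two norms agree to floating-point precision, the discrete analogue of evaluating the inner product in either domain.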
1
Any freedom guaranteed by the formulation should, in principle, be taken advantage of. This process ensures casting the resulting expressions in their most general forms.
The following toy model serves to fix the notation. Consider the symmetric interval with arbitrary . Let the function , defined on , be smooth to any order required. Consider the Taylor series expansion of at , neglecting terms of order or higher,
The coefficients , refer to the -derivative of evaluated at , with .
Consider the discrete set corresponding to the interval . Sample successively at the points in the set ,
The replacement should not compromise the soundness of the arguments or of the conclusions drawn from them.
Equations (1.2) express the sampling values and