DIGITAL SIGNAL PROCESSING

Understand the future of signal processing with the latest edition of this groundbreaking text.

Signal processing is a key aspect of virtually all engineering fields. Digital techniques enormously expand the possible applications of signal processing, forming a part of not only conventional engineering projects but also data analysis and artificial intelligence. There are considerable challenges raised by these techniques, however, as the gulf between theory and practice can be wide; the successful integration of digital signal processing techniques requires engineers capable of bridging this gulf. For years, Digital Signal Processing has met this need with a comprehensive guide that consistently connects abstract theory with practical applications. Now fully updated to reflect the most recent developments in this crucial field, the tenth* edition of this seminal text promises to foster a broader understanding of signal processing among a new generation of engineers and researchers.

Readers of the new edition of Digital Signal Processing will also find:

* Exercises at the end of each chapter to reinforce key concepts
* A new chapter covering digital signal processing for neural networks
* A handy structure beginning with undergraduate-level material before moving to more advanced concepts in the second half

Digital Signal Processing is a must-own for students, researchers, and industry professionals in any of the hundreds of fields and subfields that make use of signal processing algorithms.

*This is the English language translation of the French original Traitement Numérique du Signal, 10th edition, by Maurice Bellanger (Dunod, 2022), and is the 4th edition in English.




Digital Signal Processing

Theory and Practice

 

Tenth Edition

Maurice Bellanger

CNAM, Paris

France

Translated by

Benjamin A. Engel

 

 

 

 

This edition first published 2024
Copyright © 2024 by John Wiley & Sons Ltd. All rights reserved.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by law. Advice on how to obtain permission to reuse material from this title is available at http://www.wiley.com/go/permissions.

The right of Maurice Bellanger to be identified as the author of this work has been asserted in accordance with law.

Originally published in France as: Traitement numérique du signal, 10th edition, by Maurice BELLANGER © Dunod 2022, Malakoff.

Registered Offices
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA
John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, UK

For details of our global editorial offices, customer services, and more information about Wiley products visit us at www.wiley.com.

Wiley also publishes its books in a variety of electronic formats and by print-on-demand. Some content that appears in standard print versions of this book may not be available in other formats.

Trademarks: Wiley and the Wiley logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates in the United States and other countries and may not be used without written permission. All other trademarks are the property of their respective owners. John Wiley & Sons, Inc. is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty
While the publisher and authors have used their best efforts in preparing this work, they make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives, written sales materials or promotional statements for this work. This work is sold with the understanding that the publisher is not engaged in rendering professional services. The advice and strategies contained herein may not be suitable for your situation. You should consult with a specialist where appropriate. The fact that an organization, website, or product is referred to in this work as a citation and/or potential source of further information does not mean that the publisher and authors endorse the information or services the organization, website, or product may provide or recommendations it may make. Further, readers should be aware that websites listed in this work may have changed or disappeared between when this work was written and when it is read. Neither the publisher nor authors shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

Library of Congress Cataloging-in-Publication Data

Names: Bellanger, Maurice, author. | Engel, Benjamin A., translator.

Title: Digital signal processing : theory and practice / Maurice Bellanger, CNAM; translated by Benjamin A. Engel.

Other titles: Traitement numérique du signal. English

Description: Tenth edition. | Hoboken, NJ, USA : Wiley, 2024. | Translation of: Traitement numérique du signal.

Identifiers: LCCN 2023049303 (print) | LCCN 2023049304 (ebook) | ISBN 9781394182664 (cloth) | ISBN 9781394182671 (adobe pdf) | ISBN 9781394182688 (epub)

Subjects: LCSH: Signal processing–Digital techniques.

Classification: LCC TK5102.9.B4513 2024 (print) | LCC TK5102.9 (ebook) | DDC 621.382/2–dc23/eng/20240110

LC record available at https://lccn.loc.gov/2023049303

LC ebook record available at https://lccn.loc.gov/2023049304

Cover Design: Wiley
Cover Image: © MR.Cole_Photographer/Getty Images

Foreword (Historical Perspective)

The most important and most impactful technical revolutions are not always those that are most evident to a product’s end user. Modern digital signal processing methods fall into the category of impactful technical revolutions whose consequences are not immediately perceptible, and which do not make the front page.

It is interesting to reflect, for a moment, on the way in which such techniques emerge. Digital computation, applied to a signal in the broadest sense, is certainly not a new idea in itself. When Kepler derived the laws of motion of the planets from the series of observations made by his father-in-law Tycho Brahe, his was a truly numerical computation of the signal – in this case, the signal being Brahe’s observations of the planets’ positions over time. In recent decades, though, digital signal processing has become a discipline in its own right. What has changed is the way it can now process electrical signals in real time, using digital technologies.

This leap forward is the cumulative result of technical progress in numerous fields – starting, of course, with the capability of recording the data we wish to process in the form of an electrical signal. This has been contingent on the gradual development of what are known as information sensors, which can range in complexity from a simple strain gauge (which, in itself, took a great deal of research in solid mechanics to make possible) to a radar system.

In addition, with the marvelous progress in micro-electronics came the necessary technological tools, capable, at the extremely fast rates required for real-time processing, of performing the arithmetical operations that the earliest computers (the ENIAC was built in 1945, not long ago in the grand scheme of things) took hours to do, often being interrupted by repeated breakdowns. Today, these operations can be carried out by microprocessors weighing only a few grams and consuming only a few milliwatts of power, capable of functioning for over a decade without breakdown.

Finally, we have had to wait for progress in programming techniques – i.e. the optimal use of these technologies – because though the computational capacities of modern microprocessors are vast, it is unwise to waste those capacities on performing unnecessary operations. The invention of the fast Fourier transform algorithms is one of the most striking examples of the importance of programming methods. This convergence of technical progress in fields ranging from physics to electronics to mathematics has not been unintentional. To a certain extent, every step forward has created a new problem, which was then solved by new progress in a different field. It would undoubtedly be helpful, from the standpoint of the history and epistemology of science and technology, to have an in-depth study of this lengthy and complicated process.

Indeed, the consequences are already considerable. Indisputably, analog processing of electrical signals came before digital processing, and analog processing will surely continue to have an important role to play in certain applications, but the benefits of digital processing can be expressed in two words: accuracy and reliability. Certain applications have only been made possible by the accuracy and reliability offered by digital technologies, which go far beyond the sectors of electronics and telecommunications in which these techniques first emerged. As one example among many, in X-ray tomodensitography, scanners are based on the application of a theorem developed by Johann Radon in 1917. Only the developments mentioned above have enabled the practical implementation of this new medical diagnostic tool. It is a safe bet that, in tomorrow’s world, digital signal processing techniques will be used in increasingly varied products, including consumer electronics. However, it is an equally safe bet that the general public, while benefitting from the lower prices and higher performance and reliability offered by these techniques, will remain blissfully unaware of the phenomenal and complex combination of research, technology, and invention represented by this progress. This shift has already begun in the case of television receivers.

However, when these technical revolutions take place, another problem almost inevitably arises. We need to train users to get to grips not just with a new tool, but often, an entirely new way of thinking. If we are not careful, such training can easily become a bottleneck, delaying the introduction of new techniques. Therefore, this book is a particularly important addition to the field. Its author, Maurice Bellanger, has been teaching for many years at the École Nationale Supérieure des Télécommunications and the Institut Supérieur d’Électronique de Paris. It is a highly didactic book, containing relevant exercises as well as in-depth explanations and multiple programs, which certain people will often be able to make use of exactly as they are. Without a doubt, it will help open the door to desirable and necessary evolution.

P. Aigrain, 1981

Preface

In signal processing, digital techniques offer a fantastic range of possibilities: rigorous system design, flexibility, reproducibility of equipment, stability of operating features, and ease of supervision and monitoring. However, there is a certain amount of abstractness in these techniques, and, in order to apply them to real-world cases, we need a set of theoretical knowledge, which may represent an obstacle to their use. This book aims to break down these barriers and make digital techniques accessible to readers by drawing the connection between theory and practice and putting the most widely used results in the domain at readers’ fingertips.

The foundation upon which this book is built is the author’s teaching at engineering schools – first the École nationale supérieure des télécommunications and the Institut supérieur d’électronique de Paris, and later, Supélec and CNAM. The book offers a clear and concise presentation of the main techniques used in digital processing, comparing them on their merits and giving the most useful results in a form that is directly usable, both for the design and for the concrete implementation of systems. Theoretical explanations have been condensed to what is absolutely necessary for a thorough understanding and a correct application of the results. Bibliographic references are provided, where interested readers will find further information about the topics discussed herein. At the end of each chapter are a few exercises, often drawn from real-world examples, to allow readers to test their absorption of the material in the chapter and familiarize themselves with its application. Answers to these exercises and guidelines are given at the end of the book.

Compared with previous editions, this new edition offers additional information, simplifications, and also a new chapter about one of the most important tools in the field of artificial intelligence – neural networks, as they relate to adaptive systems.

As with the previous editions, this one owes a great deal to the author’s students and colleagues. Thanks to them all for their contributions and assistance.

Introduction

A signal is the medium carrying information, transmitted by a source to a receiver. In other words, a signal is the vehicle of intelligence in systems. It transports commands in control and remote-control equipment; it carries data such as information, spoken words, or images across networks. It is particularly fragile and needs to be handled with a great deal of care. Signal processing is applied in order to extract information, alter the message being carried, or adapt the signal to the transmission techniques being used. It is here that digital techniques come into play. Indeed, if we imagine substituting the signal with a set of numbers, representing its value or amplitude at carefully chosen times, then its processing, even in the most elaborate of forms, boils down to a sequence of logical and arithmetical operations on that set of numbers, committing the results to memory.

A continuous analog signal is converted into a digital signal by sensors which act on readings, or directly in the devices producing or receiving the signal. The operations taking place in the wake of that conversion are carried out by digital computers, tasked or programmed to perform the sequence of operations by which the desired processing is defined.

Before introducing the content of each chapter of this book, it is wise to precisely define the processing of which we speak here.

Digital signal processing refers to the set of operations, arithmetic calculations, and number manipulations, which are applied to a signal to be processed, represented by a series or a set of numbers, to produce another series or set of numbers, which represent the processed signal. In this way, an immense variety of functions can be performed, such as spectral analysis, linear or nonlinear filtering, transcoding, modulation, detection, estimation, and parameter extraction. The machines used are digital computers.

The systems corresponding to this processing obey the laws of discrete systems. In certain cases, the numbers to which the processing is applied may be derived from a discrete process. However, they often represent the amplitude of samples taken from a continuous signal, and, in that case, the computer must be downstream of an analog-to-digital converter and possibly upstream of a digital-to-analog converter. In designing such systems, and in studying how they work, signal digitization is fundamentally important, and the operations of sampling and encoding must be analyzed in terms of their principles and their consequences. The theory of distributions is a concise, simple, and effective means of such analysis. Following the presentation of certain fundamental aspects concerning Fourier analysis, distributions, and signal representation, Chapter 1 contains the most important and most useful results for sampling and encoding of a signal.

The advent of digital processing dates from the discovery of fast computation algorithms for the discrete Fourier transform. Indeed, this transform is the basis for the study of discrete systems. In digital processing, it is the equivalent of the Fourier transform in analog processing, enabling us to transition from the discrete-time domain to the discrete-frequency domain. It lends itself very well to spectral analysis, with a frequency step that divides the sampling frequency of the signals being analyzed.

Fast computation algorithms offer gains, as they enable operations to be performed in real time, provided certain elementary conditions are met. Thus, the discrete Fourier transform is not only a fundamental tool in determining the processing characteristics and in the study of the impacts of those characteristics on the signal, but it is also used in the production of popular devices, such as mobile radio and digital television. Chapters 2 and 3 are dedicated to these algorithms. To begin with, they present the elementary properties and the mechanism of fast computation algorithms and their applications before moving on to a set of variants associated with practical situations.
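The frequency step of one Nth of the sampling frequency can be illustrated with a direct evaluation of the discrete Fourier transform. The Python sketch below is illustrative only (the helper name is ours, and it uses the direct O(N²) computation rather than a fast algorithm): a cosine with 3 cycles per frame of N samples lands exactly in bin 3, i.e. at frequency 3·fs/N.

```python
import cmath
import math

def dft(x):
    """Direct discrete Fourier transform: X[k] = sum_n x[n] * exp(-j*2*pi*k*n/N).

    Bin k corresponds to the frequency k*fs/N, so the analysis step is fs/N.
    """
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 32
# A cosine with exactly 3 cycles per frame of N samples.
x = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
X = dft(x)
peak = max(range(N // 2 + 1), key=lambda k: abs(X[k]))
print(peak)  # -> 3: the component falls in bin 3, at frequency 3*fs/N
```

The fast algorithms of Chapters 2 and 3 compute exactly the same X[k], only with far fewer operations.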

A significant portion of this book is devoted to the study of one-dimensional invariant linear time-discrete systems, which are easily accessible and highly useful. Multi-dimensional systems, and, in particular, two- and three-dimensional systems, are experiencing significant development. For example, they are applied to images. However, their properties are generally deduced from those of one-dimensional systems, of which they are often merely simplified extensions. Nonlinear or time-variable systems either contain a significant subset, retaining the properties of linearity and time-invariance, or can be analyzed with the same techniques as systems that have those properties.

Linearity and time-invariance lead to the existence of a convolution relation, which governs the operation of the system or filter having those properties. This convolution relation is defined on the basis of the system’s response to the elementary signal which represents a pulse – the impulse response – by an integral in the case of analog signals. Thus, if x(t) denotes the signal to be filtered, and h(t) is the filter impulse response, the filtered signal y(t) is given by the equation:

y(t) = \int_{-\infty}^{+\infty} x(\tau) \, h(t - \tau) \, d\tau

In these conditions, such a relation, which directly expresses the filter’s real operation, offers limited practical interest. To begin with, it is not particularly easy to determine the impulse response on the basis of criteria that define the filter’s intended operation. In addition, an equation that contains an integral cannot easily be used to recognize and check the filter’s behavior. Design is much easier to address in the frequency domain, because the Laplace transform or Fourier transform can be used to move to a transformed plane where the convolution relations of the amplitude–time plane become simple products of functions. The Fourier transform associates the system’s frequency response with its impulse response, and filtering then amounts to multiplying that frequency response by the Fourier transform, or spectrum, of the signal to be filtered.

In discrete digital systems, the convolution is expressed by a sum. The filter is defined by a series of numbers, representing its impulse response. Thus, if the series to be filtered is written as x(n), the filtered series y(n) is expressed by the following sum, where n and m are integers:

y(n) = \sum_{m} h(m) \, x(n - m)

Two scenarios then arise. Firstly, the sum may pertain to a finite number of terms – i.e. the h(m) values are zero, except for a finite number of values of the integer variable m. The filter is known as a finite impulse-response filter. In reference to its realization, it is also referred to as non-recursive, because it does not require a feedback loop from output to input in its implementation. It occupies finite memory space because it only retains the memory of an elementary signal – an impulse, for example – for a limited time. The numbers h(m) are called the coefficients of the filter, which they define completely. They can be calculated directly, in a very simple way – for instance, by means of a Fourier series development of the frequency response. This type of filter exhibits highly interesting original features (for example, the possibility of a rigorously linear phase response – i.e., a constant group delay); the signals whose components are within the filter’s passband are not deformed as they pass through the filter. This possibility is exploited in data transmission systems, or spectral analysis, for example.
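A minimal Python sketch of this finite convolution (illustrative code, not taken from the book): the symmetric coefficient set h(m) = h(M − 1 − m) is what yields the linear phase property mentioned above, and an impulse fed through the filter simply reproduces the coefficients, delayed.

```python
def fir_filter(x, h):
    """Finite impulse response filtering: y[n] = sum_m h[m] * x[n-m]."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for m, hm in enumerate(h):
            if 0 <= n - m < len(x):
                acc += hm * x[n - m]
        y.append(acc)
    return y

h = [0.25, 0.5, 0.25]          # symmetric coefficients: rigorously linear phase
x = [0.0, 0.0, 1.0, 0.0, 0.0]  # a delayed unit impulse
print(fir_filter(x, h))        # -> [0.0, 0.0, 0.25, 0.5, 0.25]
```

No feedback loop appears anywhere: the output depends only on a finite window of past inputs, which is exactly the non-recursive realization described above.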

Alternatively, the sum may pertain to an infinite number of terms, and the h(m) may have an infinite number of nonzero values; the filter is called an infinite impulse-response filter, or recursive, because its memory must be set up as a feedback loop from output to input. Its operation is governed by an equation whereby an element in the output series y(n) is calculated by the weighted sum of a number of elements of the input series x(n), and a certain number of elements of the previous output series. For example, if L and K are integers, the filter’s operation may be defined by the following equation:

y(n) = \sum_{l=0}^{L} a_l \, x(n - l) - \sum_{k=1}^{K} b_k \, y(n - k)

The a_l (l = 0, 1, …, L) and b_k (k = 1, 2, …, K) are the coefficients. As is the case with analog filters, this type of filter generally cannot easily be studied directly; it is necessary to go through a transformed plane. The Laplace or Fourier transforms could be used for this purpose. However, there is a transform that is much more suitable – the Z transform, which is the equivalent for discrete systems. A filter is characterized by its Z-transfer function, generally written as H(Z), which involves the coefficients in the following equation:

H(Z) = \frac{\sum_{l=0}^{L} a_l Z^{-l}}{1 + \sum_{k=1}^{K} b_k Z^{-k}}

To obtain the filter’s frequency response, in H(Z), we simply need to replace the variable Z with the following expression, where f denotes the frequency variable, and T the time step between the signal samples:

Z = e^{j 2\pi f T}

In this operation, the imaginary axis in the Laplacian plane corresponds to the circle with unit radius, centered at the origin in the plane of the variable Z. It is plain that the frequency response of the filter defined by H(Z) is a periodic function whose period is the sampling frequency. Another representation of the function H(Z), which is useful in the design of filters and the study of a number of properties, explicitly includes the roots of the numerator, also known as the zeroes of the filter, Z_l (l = 1, 2, …, L), and the roots of the denominator, also known as the poles, P_k (k = 1, 2, …, K):

H(Z) = a_0 \, \frac{\prod_{l=1}^{L} (1 - Z_l Z^{-1})}{\prod_{k=1}^{K} (1 - P_k Z^{-1})}

The term a0 is a scaling factor which defines the gain of the filter. The filter stability condition is expressed very simply by the following constraint: all the poles must be within the unit circle. The position of the poles and zeroes with respect to the unit circle offers a very simple way of determining the characteristics of the filter; this technique is very widely used in practice.
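The recursion and the stability rule can be sketched in a few lines of Python (illustrative helper names; the sign convention follows the difference equation described above, where past outputs enter with the coefficients b_k subtracted). A single real pole at Z = 0.5 lies inside the unit circle, so the impulse response decays geometrically.

```python
def iir_filter(x, a, b):
    """Recursive filtering: y[n] = sum_l a[l]*x[n-l] - sum_k b[k]*y[n-1-k].

    The list b holds b_1 ... b_K, the coefficients weighting the past outputs.
    """
    y = []
    for n in range(len(x)):
        acc = sum(al * x[n - l] for l, al in enumerate(a) if n - l >= 0)
        acc -= sum(bk * y[n - 1 - k] for k, bk in enumerate(b) if n - 1 - k >= 0)
        y.append(acc)
    return y

def is_stable(poles):
    """Stability condition: every pole strictly inside the unit circle."""
    return all(abs(p) < 1.0 for p in poles)

# H(Z) = 1 / (1 - 0.5 Z^-1): one real pole at Z = 0.5, hence stable.
impulse = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(iir_filter(impulse, a=[1.0], b=[-0.5]))  # -> [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
print(is_stable([0.5]))                        # -> True
```

Moving the pole outside the unit circle (|P| > 1) would make the same recursion diverge, which is the stability constraint stated above.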

Four chapters are devoted to the study of the characteristics of these digital filters. Chapter 4 presents the properties of time-invariant discrete linear systems, recaps the main properties of the Z-transform, and lays down the fundamental groundwork necessary for the study of filters. Chapter 5 discusses finite impulse-response filters – their properties are studied, the techniques for calculating the coefficients are described, and the structures of real-world filters are examined. Infinite impulse-response filters are generally produced by cascading first- and second-order elementary cells, or sections, so Chapter 6 describes these sections and their properties. To begin with, this makes the study of this type of system considerably easier; in addition, the chapter provides a set of results that are highly useful in practice. Chapter 7 outlines the methods for calculating the coefficients for infinite impulse-response filters and discusses the problems posed by their real-world implementation, with the limitations that are encountered and the consequences of those limitations – in particular, computational noise.

As the properties of infinite impulse-response filters are comparable to those of continuous analog filters, it is natural to envisage similar structures for these filters to those generally employed in analog filtering. This is the subject of Chapter 8, which presents ladder structures. We then take a diversion to look at switched-capacitor filters, which are not digital in the strictest sense of the word, but which are sampled, and are highly useful additions to digital filters. To guide users, a summary of the respective merits of the structures described is given at the end of the chapter.

Certain devices – for example, in instrumentation or telecommunications – work on signals represented by a series of complex numbers. Out of all signals of this type, one category is of particular practical interest: analytic signals. Their properties are studied in Chapter 9, as is the design of devices apt for the generation or processing of such signals. Additional concepts relating to filtering are also explained in this chapter, which, in a unified manner, presents the main interpolation techniques. Signal restoration is also discussed.

Digital processing machines, when operating in real time, operate at a rate that is closely linked to the signal sampling frequency. Their complexity depends on the volume of operations being carried out, and the length of time available in which to perform this processing. The signal sampling frequency is generally imposed either at system input or at output, but within the system itself, it is possible to vary this rate in order to adapt it to the characteristics of the signal and the processing, and thereby reduce the volume of operations and the computation rate. The machines may be simplified – potentially very significantly – if, over the course of the processing, the sampling frequency is adapted to suit the usable bandwidth of the signal; this is multirate filtering, which is presented in Chapter 10. The impacts on the processing characteristics are described, along with realization methods. Rules are provided on usage and assessment. This technique produces particularly interesting results for narrow passband filters or the implementation of sets known as filter banks. In this case, the system associates, with a set of phase-shifting circuits, a discrete Fourier transform calculator.

Filter banks for the breakdown and reconstruction of signals have become a fundamental tool for compression. The way in which they work is described in Chapters 11 and 12 with design methods and realization structures.

The filters can be determined on the basis of time-domain specifications; such is the case, for example, with the modeling of a system, as described in Chapter 13. If the characteristics vary, it may be interesting to adapt the coefficients as a function of changes occurring in the system. This adaptation may depend on an approximation criterion and take place at a rate that may come to equal the system’s sampling rate; then, the filter is said to be adaptive. Chapter 14 is devoted to adaptive filtering, in the simplest of cases, but also the most common and the most useful – where the approximation criterion chosen is the minimization of the mean squared error, and where the coefficients vary depending on the gradient algorithm. After recapping details of random signals and their properties in Chapter 13 – in particular, the autocorrelation function and matrix, whose eigenvalues play an important role – the gradient algorithm is presented in Chapter 14, and its convergence conditions are studied. Then, the two main adaptation parameters, the time constant and the residual error, are analyzed along with the arithmetic complexity. Different structures are proposed for concrete implementation.
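The gradient (LMS) algorithm in its simplest form can be sketched as follows; this is an illustrative example with invented signal and step-size values, not the book's program. A two-tap adaptive filter identifies an unknown two-coefficient system by minimizing the mean squared error.

```python
import random

def lms_identify(x, d, num_taps, mu):
    """Gradient (LMS) algorithm: w <- w + mu * e[n] * x_vec.

    Adapts the coefficients w to minimize the mean squared error
    e[n] = d[n] - y[n] between the desired signal d and the filter output y.
    """
    w = [0.0] * num_taps
    for n in range(num_taps - 1, len(x)):
        xv = [x[n - m] for m in range(num_taps)]    # current input vector
        y = sum(wi * xi for wi, xi in zip(w, xv))   # adaptive filter output
        e = d[n] - y                                # error signal
        w = [wi + mu * e * xi for wi, xi in zip(w, xv)]
    return w

random.seed(0)
h_true = [0.8, -0.3]   # hypothetical unknown system to be identified
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
d = [sum(h_true[m] * x[n - m] for m in range(len(h_true)) if n - m >= 0)
     for n in range(len(x))]
w = lms_identify(x, d, num_taps=2, mu=0.1)
print([round(wi, 3) for wi in w])   # converges close to [0.8, -0.3]
```

The step size mu controls the trade-off analyzed in Chapter 14: a larger mu shortens the time constant but raises the residual error.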

Chapter 15 can be viewed as an extension of Chapters 13 and 14 to the domain of neural networks in artificial intelligence. These devices are characterized by the systematic use of nonlinear circuits for the functions of modeling, classification, or shape recognition. Adaptive techniques are used during the learning phases.

Chapter 16 discusses a very specific application: error-correction coding. Indeed, information processing and transmission systems include error-correction coding techniques, which are generally introduced by a mathematical approach, though some of the most widely used types of coding are actually direct applications of the fundamental signal processing techniques. Thus, the chapter puts forward a signal-processing vision of certain types of coding, to facilitate readers’ access to and use of these techniques.

Finally, Chapter 17 briefly describes some applications, showing how the fundamental methods and techniques are put to use.

1 Signal Digitizing – Sampling and Coding

The conversion of an analog signal to digital form involves a twofold approximation. Firstly, in the time domain, the signal function s(t) is replaced by its values at integer multiples of a time increment T and is thus converted to s(nT). This process is called sampling. Secondly, in the amplitude domain, each value of s(nT) is approximated by a whole multiple of an elementary quantity. This process is called quantization. The approximate value thus obtained is then associated with a number. This process is called coding – a term often used to describe the whole process by which the value of s(nT) is transformed into the number representing it.
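The twofold approximation can be sketched in a few lines of Python (the helper name is ours, and q plays the role of the elementary quantization step): sampling picks the values s(nT), coding maps each one to an integer, and the represented amplitude differs from the true sample by at most q/2.

```python
import math

def sample_and_quantize(s, T, n_samples, q):
    """Sample s(t) at t = nT, then round each sample to the nearest multiple of q."""
    samples = [s(n * T) for n in range(n_samples)]
    codes = [round(v / q) for v in samples]   # coding: each value becomes a number
    quantized = [c * q for c in codes]        # amplitudes actually represented
    return samples, codes, quantized

# A 1 Hz sine sampled every 125 ms, quantized with step q = 0.25.
s = lambda t: math.sin(2.0 * math.pi * t)
samples, codes, quantized = sample_and_quantize(s, T=0.125, n_samples=8, q=0.25)
max_err = max(abs(a - b) for a, b in zip(samples, quantized))
print(max_err <= 0.25 / 2)   # quantization error never exceeds q/2 -> True
```

The consequences of each approximation – spectral repetition for sampling, quantization noise for coding – are the subject of the rest of this chapter.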

The effect of these two approximations on the signal will be analyzed in this chapter. To achieve this, two basic tools will be used: Fourier analysis and distribution theory.

1.1 Fourier Analysis

Fourier analysis is a method of decomposing a signal into a sum of individual components which can easily be produced and observed. The importance of this decomposition is that a system’s response to the signal can be deduced from these individual components using the superposition principle. These elementary component signals are periodic and complex, so both the amplitude and phase of the systems can be studied. They are represented by a function se(t) such that:

s_e(t) = e^{j 2\pi f t} \quad (1.1)

where f is the inverse of the period – that is, the frequency of the elementary signal.

Since the elementary signals are periodic, clearly, the analysis is simplified when the signal itself is periodic. This case will be examined first, although it is not the most interesting, since a periodic signal is completely determinate and carries practically no information.

1.1.1 Fourier Series Expansion of a Periodic Function

Let s(t) be a periodic function of the variable t with period T – that is, satisfying the relation:

s(t + T) = s(t) \quad (1.2)

Under certain conditions, this function can be expanded in a Fourier series as:

s(t) = \sum_{n=-\infty}^{+\infty} C_n \, e^{j 2\pi n t / T} \quad (1.3)

Figure 1.1 Impulse train.

The index n is an integer, and the C_n, called the Fourier coefficients, are defined by:

C_n = \frac{1}{T} \int_{-T/2}^{+T/2} s(t) \, e^{-j 2\pi n t / T} \, dt \quad (1.4)

In fact, the Fourier coefficients minimize the square of the difference between the function s(t) and the series (1.3). Expression (1.4) is obtained by taking the derivative, with respect to the coefficient of index n, of the quantity:

E = \frac{1}{T} \int_{-T/2}^{+T/2} \left| s(t) - \sum_{n=-\infty}^{+\infty} C_n \, e^{j 2\pi n t / T} \right|^2 dt

and setting that derivative to zero.

Example: Figure 1.1 shows an example of a Fourier expansion of a function ip(t) composed of a train of impulses, each of width τ and amplitude a, occurring at time intervals T. The time origin is taken as being at the center of an impulse.

The coefficients Cn are given by:

(1.5)  Cn = (aτ/T) · sin(πnτ/T) / (πnτ/T)

and the Fourier expansion is:

(1.6)  ip(t) = (aτ/T) Σ[n=−∞..+∞] [sin(πnτ/T) / (πnτ/T)] e^(j2πnt/T)

The importance of this example for the study of sampled systems is readily apparent.
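As a numerical cross-check, the coefficients given by (1.5) can be recomputed by direct integration of (1.4). The values of a, τ, and T below are illustrative choices, not taken from the text:

```python
import numpy as np

# Numerical check of (1.5): the Fourier coefficients of a centred
# impulse train of width tau, amplitude a and period T are
# Cn = (a*tau/T) * sin(pi*n*tau/T) / (pi*n*tau/T).
T, tau, a = 1.0, 0.25, 2.0       # illustrative values

t = np.linspace(-T / 2, T / 2, 200_000, endpoint=False)
dt = t[1] - t[0]
s = np.where(np.abs(t) < tau / 2, a, 0.0)     # one period, pulse centred at 0

def coeff(n):
    """Cn = (1/T) * integral over one period of s(t) * exp(-j*2*pi*n*t/T)."""
    return np.sum(s * np.exp(-2j * np.pi * n * t / T)) * dt / T

for n in range(-4, 5):
    expected = a * tau / T * np.sinc(n * tau / T)   # np.sinc(x) = sin(pi*x)/(pi*x)
    assert abs(coeff(n) - expected) < 1e-3
print("coefficients match (a*tau/T) * sinc(n*tau/T)")
```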

The properties of Fourier series expansions are given in Ref. [1]. One important property, expressed by the Bessel–Parseval equation, is that power is conserved in the expansion of the signal:

(1.7)  (1/T) ∫[−T/2, T/2] |s(t)|² dt = Σ[n=−∞..+∞] |Cn|²

The constituent elements resulting from the expansion of a periodic signal have frequencies which are integer multiples of 1/T (the inverse of the period). They form a discrete set in the space of all frequencies. In contrast, if the signal is not periodic, the Fourier components form a continuous domain in the frequency space.
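The Bessel–Parseval relation (1.7) can be verified numerically on a simple test signal. The signal 1 + cos(2πt/T) used below is an illustrative choice whose only nonzero coefficients are C0 = 1 and C±1 = 1/2:

```python
import numpy as np

# Numerical check of the Bessel-Parseval relation (1.7):
# (1/T) * integral over one period of |s(t)|^2 = sum over n of |Cn|^2.
T = 2.0                                           # illustrative period
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = t[1] - t[0]
s = 1.0 + np.cos(2 * np.pi * t / T)               # C0 = 1, C(+-1) = 1/2

power_time = np.sum(np.abs(s) ** 2) * dt / T      # left-hand side of (1.7)
coeffs = [np.sum(s * np.exp(-2j * np.pi * n * t / T)) * dt / T
          for n in range(-3, 4)]                  # C_-3 .. C_3
power_freq = sum(abs(c) ** 2 for c in coeffs)     # right-hand side of (1.7)

assert abs(power_time - 1.5) < 1e-6               # 1 + 1/4 + 1/4
assert abs(power_freq - 1.5) < 1e-6
print("power in time:", power_time, "= sum |Cn|^2:", power_freq)
```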

1.1.2 Fourier Transform of a Function

Let s(t) be a function of t. Under certain conditions, one can write:

(1.8)  s(t) = ∫[−∞, +∞] S(f) e^(j2πft) df

where

(1.9)  S(f) = ∫[−∞, +∞] s(t) e^(−j2πft) dt

The function S(f) is the Fourier transform of s(t). More commonly, S(f) is called the spectrum of the signal s(t).

Example: Let us calculate the Fourier transform I(f) of an isolated pulse i(t) of width τ and amplitude a, centered on the time origin (Figure 1.2):

(1.10)  I(f) = ∫[−τ/2, τ/2] a e^(−j2πft) dt = aτ · sin(πfτ) / (πfτ)

Figure 1.2 Isolated impulse.

Figure 1.3 represents the function I(f). This function will be used frequently in this book. It is important to note that it is zero at the nonzero frequencies which are whole multiples of the inverse of the impulse width. A table of this function is given in Appendix 1.
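A short numerical check of the pulse spectrum: direct evaluation of the Fourier integral reproduces aτ·sin(πfτ)/(πfτ) and its zeros at the nonzero multiples of 1/τ (the values of τ and a are illustrative):

```python
import numpy as np

# Numerical check of (1.10): the spectrum of an isolated pulse of width
# tau and amplitude a, centred on the origin, is I(f) = a*tau*sinc(f*tau)
# with sinc(x) = sin(pi*x)/(pi*x); it vanishes at f = n/tau, n != 0.
tau, a = 0.5, 3.0                     # illustrative values

t = np.linspace(-tau / 2, tau / 2, 200_000, endpoint=False)
dt = t[1] - t[0]

def I(f):
    """Fourier integral of the pulse at frequency f (Riemann sum)."""
    return np.sum(a * np.exp(-2j * np.pi * f * t)) * dt

assert abs(I(0.0) - a * tau) < 1e-4                    # area of the pulse
assert abs(I(1.3) - a * tau * np.sinc(1.3 * tau)) < 1e-4
for n in (1, 2, 3):                                    # zeros at n/tau
    assert abs(I(n / tau)) < 1e-3
print("I(f) matches a*tau*sinc(f*tau)")
```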

This example clearly shows the correspondence between the Fourier coefficients and the spectrum. In effect, by comparing equations (1.6) and (1.10), it can be verified that, apart from the factor 1/T, the coefficients of the Fourier series expansion of an impulse train correspond to the values of the spectrum of the isolated impulse at frequencies which are whole multiples of the inverse of the period of the impulses.

In the case of a nonperiodic function, there is an expression similar to the Bessel–Parseval relation, but this time the energy in the signal is conserved, instead of the power:

(1.11)  ∫[−∞, +∞] |s(t)|² dt = ∫[−∞, +∞] |S(f)|² df

Let s′(t) be the derivative of the function s(t); its Fourier transform Sd(f) is given by:

(1.12)  Sd(f) = j2πf · S(f)

Thus, taking the derivative of a signal leads to multiplying its spectrum by j2πf.
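This differentiation property can be checked numerically. The sketch below compares the transform of the derivative of a Gaussian pulse (an illustrative choice, picked because it decays fast enough for the FFT to approximate the Fourier integral) with j2πf times the transform of the pulse:

```python
import numpy as np

# Numerical check of (1.12): differentiating a signal in time multiplies
# its spectrum by j*2*pi*f.  The Gaussian pulse and the grid parameters
# are illustrative choices, not from the text.
N, dt = 4096, 0.01
t = (np.arange(N) - N // 2) * dt
s = np.exp(-t ** 2)                         # Gaussian pulse
s_prime = -2 * t * np.exp(-t ** 2)          # its exact derivative

f = np.fft.fftfreq(N, dt)
# ifftshift places t = 0 at index 0 so FFT phases match the centred signal
S = np.fft.fft(np.fft.ifftshift(s)) * dt          # approximates S(f)
S_d = np.fft.fft(np.fft.ifftshift(s_prime)) * dt  # approximates Sd(f)

assert np.max(np.abs(S_d - 2j * np.pi * f * S)) < 1e-6
print("Sd(f) = j*2*pi*f * S(f) verified on a Gaussian pulse")
```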

One essential property of the Fourier transform (in fact, the main reason for its use) is that it transforms a convolution into a simple product. Consider two time functions, x(t) and h(t), with Fourier transforms X(f) and H(f), respectively. The convolution y(t) is defined by:

(1.13)  y(t) = ∫[−∞, +∞] x(u) h(t − u) du

Figure 1.3 Spectrum of an isolated impulse.

The Fourier transform of this convolution product is:

Y(f) = X(f) · H(f)

Conversely, it can be shown that the Fourier transform of a simple product is a convolution product.
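The exchange of convolution and multiplication can be illustrated in discrete form: with sufficient zero-padding, the FFT of a linear convolution equals the product of the FFTs (the random test sequences are illustrative):

```python
import numpy as np

# Discrete check that the Fourier transform maps convolution to
# multiplication.  Zero-padding to N >= len(x) + len(h) - 1 makes the
# circular convolution implied by the FFT coincide with the linear one.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)                  # illustrative data
h = rng.standard_normal(64)

y = np.convolve(x, h)                        # linear convolution, length 127
N = 128                                      # enough padding to avoid wrap-around
Y = np.fft.fft(y, N)
XH = np.fft.fft(x, N) * np.fft.fft(h, N)

assert np.allclose(Y, XH)
print("FFT of the convolution equals the product of the FFTs")
```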

An interesting result can be derived from the abovementioned properties. Let us consider the Fourier transform II(f) of the function i²(t). Because of equations (1.10) and (1.13), we obtain:

(1.14)  II(f) = a²τ · sin(πfτ)/(πfτ) = ∫[−∞, +∞] I(g) I(f − g) dg

and therefore,

∫[−∞, +∞] [sin(πgτ)/(πgτ)] · [sin(π(f − g)τ)/(π(f − g)τ)] dg = (1/τ) · sin(πfτ)/(πfτ)

Taking f = n/τ, for any integer n, and setting x = gτ,

(1.15)  ∫[−∞, +∞] [sin(πx)/(πx)] · [sin(π(x − n))/(π(x − n))] dx = 1 if n = 0, and 0 otherwise

Thus, the functions sin(π(x − n))/(π(x − n)), with n an integer, form a set of orthogonal functions.
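This orthogonality can be checked numerically by truncating the integral to a long but finite interval (the interval and step below are illustrative choices):

```python
import numpy as np

# Numerical check of the orthonormality of the shifted functions
# sinc(x - n), with sinc(x) = sin(pi*x)/(pi*x):
# the integral of sinc(x - m)*sinc(x - n) is 1 if m == n, else 0.
x = np.linspace(-1000.0, 1000.0, 2_000_001)   # illustrative truncation
dx = x[1] - x[0]

def inner(m, n):
    """Truncated integral of sinc(x - m) * sinc(x - n)."""
    return np.sum(np.sinc(x - m) * np.sinc(x - n)) * dx

assert abs(inner(0, 0) - 1.0) < 1e-3          # unit norm
assert abs(inner(0, 1)) < 1e-3                # orthogonal shifts
assert abs(inner(2, 5)) < 1e-3
print("sinc(x - n) functions are orthonormal")
```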

The definition and properties of the Fourier transform can be extended to multivariate functions. Let s(x1, x2, …, xn) be a function of n real variables. Its Fourier transform is a function S(λ1, λ2, …, λn) defined by:

(1.16)  S(λ1, λ2, …, λn) = ∫…∫ s(x1, x2, …, xn) e^(−j2π(λ1x1 + λ2x2 + … + λnxn)) dx1 dx2 … dxn

If the function s(x1, x2, …, xn) is separable – that is, if:

s(x1, x2, …, xn) = s1(x1) s2(x2) … sn(xn)

then:

S(λ1, λ2, …, λn) = S1(λ1) S2(λ2) … Sn(λn)

The variables xi (1 ⩽ i ⩽ n) often represent distances (for example, in the two-dimensional case), and the λi are then called spatial frequencies.
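The factorization of the transform of a separable function has an exact discrete counterpart: the 2-D FFT of an outer product of two sequences is the outer product of their 1-D FFTs. A small sketch with illustrative random data:

```python
import numpy as np

# Discrete counterpart of the separability remark: if
# s(x1, x2) = s1(x1) * s2(x2), the 2-D transform factorizes as
# S1(lambda1) * S2(lambda2).  The random sequences are illustrative.
rng = np.random.default_rng(1)
s1 = rng.standard_normal(32)
s2 = rng.standard_normal(32)
s = np.outer(s1, s2)                 # separable two-dimensional signal

S = np.fft.fft2(s)                   # direct 2-D transform
S_sep = np.outer(np.fft.fft(s1), np.fft.fft(s2))   # product of 1-D transforms

assert np.allclose(S, S_sep)
print("transform of a separable signal factorizes")
```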

1.2 Distributions

Mathematical distributions constitute a formal mathematical representation of the physical distributions found in experiment [1].

1.2.1 Definition

A distribution D is defined as a continuous linear functional on the vector space 𝒟 of functions defined on ℝn which are indefinitely differentiable and have bounded support.

With each function ϕ belonging to 𝒟, the distribution D associates a complex number D(ϕ), which will also be denoted by 〈D, ϕ〉, with the properties:

D(ϕ1 + ϕ2) = D(ϕ1) + D(ϕ2).

D(λϕ) = λD(ϕ), where λ is a scalar.

If ϕj converges to ϕ when j tends toward infinity, the sequence D(ϕj) converges to D(ϕ).

Examples:

If f(t) is a function which is summable over any bounded set, it defines a distribution Df by:

(1.17)  〈Df, ϕ〉 = ∫[−∞, +∞] f(t) ϕ(t) dt

If ϕ′ denotes the derivative of ϕ, the functional:

(1.18)  ϕ → ∫[−∞, +∞] f(t) ϕ′(t) dt

is also a distribution.

The Dirac distribution δ is defined by:

(1.19)  〈δ, ϕ〉 = ϕ(0)

The Dirac distribution δx at a real point x is defined by:

(1.20)  〈δx, ϕ〉 = ϕ(x)

This distribution is said to represent a mass of +1 at the point x.

Consider a pulse i(t) of duration τ, with amplitude a = 1/τ, centered on the origin. It defines a distribution Di:

〈Di, ϕ〉 = (1/τ) ∫[−τ/2, τ/2] ϕ(t) dt

For very small values of τ, this becomes:

〈Di, ϕ〉 ≈ ϕ(0)

that is, the Dirac distribution can be regarded as the limit of the distribution Di when τ tends toward 0.
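This limiting behavior can be illustrated numerically: the mean value of a smooth test function ϕ over an interval of width τ tends to ϕ(0) as τ shrinks (ϕ below is an illustrative choice):

```python
import numpy as np

# Numerical illustration that <Di, phi> = (1/tau) * integral of phi over
# [-tau/2, tau/2] approaches phi(0) as tau -> 0.
phi = lambda t: np.exp(-t ** 2) * np.cos(t)    # smooth test function, phi(0) = 1

def pairing(tau, n=100_001):
    """<Di, phi>: mean value of phi over an interval of width tau."""
    t = np.linspace(-tau / 2, tau / 2, n)
    return np.mean(phi(t))

# The error shrinks roughly like tau**2 for a smooth phi
for tau, tol in [(1.0, 0.2), (0.1, 2e-3), (0.01, 2e-5)]:
    assert abs(pairing(tau) - phi(0.0)) < tol
print("<Di, phi> -> phi(0) as tau -> 0")
```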

1.2.2 Differentiation of Distributions

The derivative ∂D/∂t of a distribution D is defined by the relation:

(1.21)  〈∂D/∂t, ϕ〉 = −〈D, ∂ϕ/∂t〉

To illustrate this, consider the Heaviside function Y, or single-step function, which is zero when t < 0 and +1 if t ⩾ 0:

(1.22)  〈Y′, ϕ〉 = −〈Y, ϕ′〉 = −∫[0, +∞] ϕ′(t) dt = ϕ(0) = 〈δ, ϕ〉

That is, Y′ = δ.

As a result, the discontinuity in Y appears in the derivative as a point of unit mass.

This example illustrates the considerable practical interest of the notion of a distribution, which means that a number of the concepts and properties of continuous functions can be extended to discontinuous functions.

1.2.2.1 The Fourier Transform of a Distribution

By definition, the Fourier transform of a distribution D is a distribution denoted by FD such that:

(1.23)  〈FD, ϕ〉 = 〈D, Fϕ〉

By applying this definition to distributions with a point support, we obtain:

(1.24)  〈Fδ, ϕ〉 = 〈δ, Fϕ〉 = (Fϕ)(0) = ∫[−∞, +∞] ϕ(t) dt

Consequently, Fδ = 1. Similarly, F[δ(t − a)] = e^(−j2πfa).

A case which is fundamental to the study of sampling is that of the distribution u formed by the set of Dirac distributions shifted by whole multiples of T:

(1.25)  u(t) = Σ[n=−∞..+∞] δ(t − nT)

This set is a distribution of unit mass points separated on the abscissa by whole multiples of T. Its Fourier transform is:

(1.26)  Fu(f) = Σ[n=−∞..+∞] e^(−j2πnfT)

and it can be shown that this sum is a point distribution.

A simple demonstration can be obtained from the Fourier series development of the function ip(t), formed by pulses of width τ and amplitude 1/τ, repeated with period T and centered on the time origin.

One can consider u(t) as the limit of ip(t) as τ tends toward zero. In relation (1.6), the amplitude a = 1/τ gives aτ/T = 1/T, and sin(πnτ/T)/(πnτ/T) tends toward 1 for every n, so we find that:

u(t) = (1/T) Σ[n=−∞..+∞] e^(j2πnt/T)

The following fundamental property is demonstrated in Ref. [2].

The Fourier transform of the time distribution, represented by unit mass points separated by whole multiples of T is a frequency distribution of points of mass 1/T separated by whole multiples of 1/T.

That is:

(1.27)  Fu(f) = Σ[n=−∞..+∞] e^(−j2πnfT) = (1/T) Σ[n=−∞..+∞] δ(f − n/T)

This result will be used when studying signal sampling. The property of the Fourier transform whereby it exchanges convolution and multiplication applies equally to distributions.
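A discrete analogue of this transform pair can be observed with the FFT: an impulse train with period P samples transforms into an impulse train with period N/P frequency bins, each carrying mass N/P (N and P below are illustrative, with P dividing N):

```python
import numpy as np

# Discrete analogue of (1.27): the FFT of an impulse train with period
# P samples is again an impulse train, with period N/P bins and mass N/P.
N, P = 240, 8                        # illustrative choices, P divides N
u = np.zeros(N)
u[::P] = 1.0                         # unit impulses every P samples

U = np.fft.fft(u)
step = N // P                        # spacing of the spectral impulses
for k in range(N):
    if k % step == 0:
        assert abs(U[k] - step) < 1e-9   # mass N/P at multiples of N/P
    else:
        assert abs(U[k]) < 1e-9
print("FFT of an impulse train is again an impulse train")
```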

Before considering the influence of the sampling and quantizing operations on the signal, it is useful to discuss the characteristics of those signals which are most often studied.

1.3 Some Commonly Studied Signals

A signal is defined as a function of time s(t). When this function is given by an analytic expression or as the solution of a differential equation, the signal is said to be deterministic.

1.3.1 Deterministic Signals

Sine waves are the most frequently used signals of this type. For example,

s(t) = A cos(ωt + α)

where A is the amplitude, ω = 2πf is the angular frequency, and α is the phase of the signal.

Signals of this type are easy to reproduce and recognize at different points of a system. They allow the various characteristics to be visualized in a simple way. Moreover, as mentioned above, they serve as the basis for the decomposition of any deterministic signal through the Fourier transform.

If the system is linear and invariant in time, it can be characterized by its frequency response H(ω). For each value of the frequency, H(ω) is a complex number whose modulus is the amplitude of the response. By convention, the function ϕ(ω) such that:

(1.28)  H(ω) = |H(ω)| e^(−jϕ(ω))

is defined as the phase. This convention allows the group delay τ(ω), a positive function for real systems, to be expressed as:

(1.29)  τ(ω) = dϕ/dω

The group delay refers to transmission lines on which different frequencies of the signal propagate at different speeds, which leads to dispersion of the signal energy in time. As an illustration of the concept, let us consider two close frequencies ω ± Δω and the corresponding phases per unit length ϕ ± Δϕ. At a distance x, the sum signal is expressed by:

s(t, x) = cos[(ω − Δω)t − (ϕ − Δϕ)x] + cos[(ω + Δω)t − (ϕ + Δϕ)x]

or

s(t, x) = 2 cos(Δωt − Δϕx) cos(ωt − ϕx)

This is a modulated signal, and there is no dispersion if the two factors in the above expression undergo the same delay per unit length – that is, if Δϕ/Δω is constant. Thus, the group delay characterizes the dispersion imposed on the signal energy by a transmission line or any equivalent system.

If the sinusoidal signal s(t) is applied to the system, then an output signal sr(t) is obtained such that:

(1.30)  sr(t) = A |H(ω)| cos(ωt + α − ϕ(ω))

Once again, this is a sinusoidal signal, and comparison with the applied signal reveals the response of the system. The importance of this procedure (for example, for test operations) can readily be appreciated.
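This measurement procedure can be simulated: the sketch below applies a sine wave to an illustrative two-tap moving-average system and verifies that the output amplitude and phase follow the frequency response:

```python
import numpy as np

# A sine applied to a linear time-invariant system emerges as a sine
# whose amplitude is scaled by |H(omega)| and whose phase is shifted by
# the phase response.  The 2-tap moving average h = [0.5, 0.5] and the
# frequencies below are illustrative choices, not from the text.
h = np.array([0.5, 0.5])
f0, fs, A = 50.0, 1000.0, 2.0

n = np.arange(2000)
x = A * np.cos(2 * np.pi * f0 / fs * n)        # input sine wave
y = np.convolve(x, h)[: len(n)]                # output (transient at n = 0 only)

# Frequency response at f0: H = 0.5 + 0.5*exp(-j*omega)
omega = 2 * np.pi * f0 / fs
H = 0.5 + 0.5 * np.exp(-1j * omega)
expected = A * np.abs(H) * np.cos(omega * n + np.angle(H))

assert np.max(np.abs(y[1:] - expected[1:])) < 1e-9
print("output amplitude scaled by |H|, phase shifted by arg H")
```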