This book presents a synthesis of electronics through keynotes, substantiated across three volumes. The first comprises four chapters devoted to elementary devices, i.e. diodes, bipolar transistors and related devices, field effect transistors and amplifiers. In each of them, device physics, nonlinear and linearized models, and applications are studied. The second volume is devoted to systems in the continuous-time regime and contains two chapters: one describes different approaches to the transfer function concept and its applications, and the other deals with quadripole properties, filtering and filter synthesis. The third volume presents the various aspects of sampled systems and quantized-level systems in its two chapters.
Number of pages: 300
Year of publication: 2018
Cover
Title
Copyright
Preface
Introduction
1 Continuous-time Systems: General Properties, Feedback, Stability, Oscillators
1.1. Representation of continuous time signals
1.2. Representations of linear and stationary systems and circuits built with localized elements
1.3. Negative feedback
1.4. Study of system stability
1.5. State space form
1.6. Oscillators and unstable systems
1.7. Exercises
2 Continuous-time Linear Systems: Quadripoles, Filtering and Filter Synthesis
2.1. Quadripoles or two-port networks
2.2. Analog filters
2.3. Synthesis of analog active filters using operational amplifiers
2.4. Non-dissipative filters synthesis methods
2.5. Exercises
Appendix: Notions of Distribution and Operating Properties
A.1. Dirac distribution or Dirac impulse δa or δ(x − a)
A.2. Derivation of a distribution and derivation of discontinuous functions
A.3. Laplace transform of distributions
A.4. Distribution in principal value p.v. following Cauchy’s definition
A.5. Solving equations with discontinuous functions derivatives
Bibliography
Index
End User License Agreement
1 Continuous-time Systems: General Properties, Feedback, Stability, Oscillators
Figure 1.1. Representation of a sinusoidal signal on the complex plane
Figure 1.2. Spectrum of a sinusoidal signal (amplitude solid line, phase dotted)
Figure 1.3. Spectrum of a periodic signal of repetition frequency f1 (modules in bold and arguments in dotted lines)
Figure 1.4. Triangular signals (left) and their spectrum (FT) (right). For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 1.5. Bode and Nyquist diagrams of a first-order low-pass filter [H1(u)]^−1 in full lines and H1(u) as a dotted line in the complex plane. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 1.6. Bode and Nyquist diagrams of a second-order low-pass filter with transmittance [H2(u)]^−1 (from the highest to the lowest curve, the ζ values, displayed as z inside the figure, are 0.05, 0.5, 0.707, 5 in the Bode diagrams at left and from the lowest to the highest 0.2, 0.5, 5 in the Nyquist diagram at right). H2(u) is plotted in the case ζ = 0.5 as a dotted line in the complex plane. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 1.7. Bode and Nyquist diagrams of a second-order band-pass filter (from the highest to the lowest curves, the ζ values are 0.06, 0.6, 0.707, 6 in the Bode diagram at left and 0.2, 0.6, 6 in the Nyquist diagram at right). For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 1.8. Bode and Nyquist diagrams of a second-order high-pass circuit (from the highest to the lowest curves, the ζ values are 0.06, 0.6, 0.707, 6 in the Bode diagram at left and 0.2, 0.6, 6 in the Nyquist diagram at right). For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 1.9. Map of transmittance poles
Figure 1.10. Block diagram of a negative feedback system
Figure 1.11. Inverting amplifier (left) and non-inverting amplifier (right) circuits
Figure 1.12. Divider (left) and square root function (right), with V0 as a constant voltage
Figure 1.13. “No threshold” rectification
Figure 1.14. Asymptotic Bode diagram of operational amplifier circuit gains
Figure 1.15. Impedance converter circuit
Figure 1.16. Nyquist diagrams of the complex variable s = σ + jω and of the transfer function H(s)
Figure 1.17. Bode diagrams of the gain modulus and the reverse of the feedback coefficient, and loop gain argument, for a closed-loop system stable in open loop (full line, system stable in closed loop since |1/B| > |A| for f = f2; dotted line, system unstable in closed loop since |1/B| < |A| for f = f1; the frequencies corresponding to Arg{AB} = π in each case)
Figure 1.18. Block diagram of a system in state representation
Figure 1.19. Sinusoidal Wien bridge oscillator
Figure 1.20. Sinusoidal oscillator using an inverter circuit based on an operational amplifier
Figure 1.21. Colpitts oscillator
Figure 1.22. Symbol and equivalent circuit of quartz resonator
Figure 1.23. Hartley oscillator
Figure 1.24. Clapp oscillator
Figure 1.25. Quartz oscillator operating with an inverting logic gate
Figure 1.26. Amplitude oscillator stabilization mechanism
Figure 1.27. Dynamic circuit with negative conductance −Ga and circuit with series (at left) or parallel (at right) resonant network, damped by positive conductance Gp
Figure 1.28. Nonlinear dipole N characteristics to the left and S to the right, with negative dynamic resistances and conductance (1/Rd or Gd = −Ga), and load lines suited to relaxation oscillator operation
Figure 1.29. Relaxation oscillator circuits with N dipole to the left and S to the right
Figure 1.30. Start cycle and limiting cycle of a quasi-sinusoidal oscillator obtained from simulation results
Figure 1.31. Chua oscillator
Figure 1.32. Characteristic i = g(u) of dipole D used in the Chua oscillator and load lines corresponding to R equal to (i) R1, (ii) R2 and (iii) R3
Figure 1.33. Eigenvalues of A in case (i) for the operating point at the origin. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 1.34. Eigenvalues of A in case (ii) for operating point “0” around the origin. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 1.35. Eigenvalues of A in case (ii) for operating points P±1. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 1.36. Cycles completed by the operating point u2(i3) in various conditions
Figure 1.37. FFT of the Chua oscillator signal for β = 21, γ = 0. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 1.38. FFT of the Chua oscillator signal for β = 24, γ = 0. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 1.39. FFT of Chua oscillator signals in deterministic chaotic conditions obtained with β = 30, γ = 0 in red (higher spectrum) and with β = 24, γ = 0.24 in blue (lower spectrum). For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
2 Continuous-time Linear Systems: Quadripoles, Filtering and Filter Synthesis
Figure 2.1. Input and output receptor conventions in a quadripole or two-port network
Figure 2.2. Association of quadripoles in series (on the left) and in parallel (on the right), using the appropriate models
Figure 2.3. Quadripole with receptor convention at left (input) and generator convention at right (output)
Figure 2.4. Current–voltage feedback on the left and voltage–current feedback on the right
Figure 2.5. Current–current feedback on the left and voltage–voltage feedback on the right
Figure 2.6. Representation of a type I quadripole with a single linked source
Figure 2.7. Series Foster′s synthesis
Figure 2.8. Parallel Foster’s synthesis
Figure 2.9. Ladder network obtained through Cauer’s synthesis, the order of the last two elements L5 and C6 being arbitrary
Figure 2.10. Ladder network obtained through Cauer synthesis up to L3, then Foster synthesis in series for the element C7 and finally using Cauer synthesis for the last two
Figure 2.11. Representations of a quadripole, including currents, voltages and complex parameters on the left and using incident and reflected waves, and s-parameters on the right
Figure 2.12. Chain of two quadripoles described by their chain parameters, deduced from s-parameters
Figure 2.13. Generator with current source and internal admittance Yg loaded by an admittance Yu
Figure 2.14. Quadripole matched to the input generator and the output load
Figure 2.15. Active quadripole described by its parameters Y, inserted between a generator and a load
Figure 2.16. Quadripole inserted between a generator and a load for the description by the s-parameters
Figure 2.17. Quadripole described by its s-parameters with terminations of any kind
Figure 2.18. Quadripole in a situation of image-matching, where Ze and Zs are also, respectively, the input and output impedance of the quadripole
Figure 2.19. Quadripole with hybrid parameters (type II) between a generator and a load
Figure 2.20. Block diagram representing the quadripole and termination elements
Figure 2.21. Block diagram giving V2 from Eg for the quadripole and termination elements
Figure 2.22. Ladder admittance and impedance network
Figure 2.23. Block diagram of an unspecified portion of the ladder network of Figure 2.22
Figure 2.24. Sinc function
Figure 2.25. Integration contour for the application of the Cauchy theorem to the function H(jω) − H(∞) divided by s − jω1
Figure 2.26. Transmittance and attenuation template of a low-pass filter
Figure 2.27. Transmittance modulus and group delay for fourth-order low-pass filters (Bessel in red broken line, Butterworth in blue dash-dot line, Chebyshev in full light green line, elliptical in dark green dash). For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 2.28. Active filters with a single operational amplifier
Figure 2.29. Second-order active filter using three operational amplifiers
Figure 2.30. Delyannis–Friend second-order active filter
Figure 2.31. Multiple feedback loop cell with forward transmittances Tn, Tn−1, … T2, T1 and feedback transmittances R1, R2, … Rn−1, Rn
Figure 2.32. Ladder sixth-order low-pass filter
Figure 2.33. Network of the normalized Chebyshev filter obtained after the low-pass to band-pass transformation with a normalized bandwidth Δω = 0.3 and a ripple of 0.1 dB within the bandwidth
Figure 2.34. Transmittance modulus of the eighth-order Chebyshev band-pass filter with normalized bandwidth equal to 0.3 and a ripple of 0.1 dB within the bandwidth
Figure 2.35. Sixth-order low-pass elliptic filter
Figure 2.36. Fourth-order elliptic filter synthesis based on its normalized admittance y22
Figure 2.37. Low-pass filter synthesis based on its normalized impedance z11 = Z11/R1
Figure 2.38. Low-pass filter synthesis based on its normalized admittance y22 = Y22 R2
Figure 2.39. Full filter
Figure 2.40. Low-pass filter synthesized from its normalized impedance z11
Figure 2.41. Z11 computation network
Figure 2.42. Y22 computation network
Figure 2.43. Final network of the sixth-order Chebyshev low-pass filter
Figure 2.44. Four-cell low-pass filter, two of each type, with the same image impedance at each port, given by the expression of Zi1 in the previous table, provided that m2 = m1
Figure 2.45. Modulus of image impedances normalized by the characteristic impedance R0 for the low-pass (a) and (b) cells (real if ω < 1, imaginary if ω > 1): Zi1(b) in solid lines for m = 0.5; Zi1(a) in dotted line and Zi2(a) = Zi2(b) in dash-dotted line. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 2.46. Modulus of the image impedances normalized by the characteristic impedance R0 for high-pass cells (a′) and (c) (imaginary for ω < 1, real for ω > 1): Zi2(c) in solid lines for m′ = 0.5; Zi1(c) = Zi1(a′) in dash-dotted line and Zi2(a′) in dotted line. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 2.47. Ripple envelope in the bandwidth of low-pass filters of parameter m1 = 0.5 characterizing the end cells for three values of the parameter μ = 1.02; 1.07; 1.12 in ascending order of the values at ω = 0 (respectively, in dotted line, solid line and dash-dotted line). For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 2.48. Schematic of a low-pass filter, symmetrical and composed of two doubled cells of type (b)-(bi) and two simple terminal (b) and (bi) cells. Terminations are not illustrated
Figure 2.49. Image-transmittance in red and actual transmittance in blue for the low-pass filter synthesized in this section. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 2.50. Deviation of the actual attenuation in the bandwidth with respect to 6 dB for the low-pass filter synthesized in this section
Figure 2.51. Elementary type-(d) band-pass cell, with an attenuation pole in the upper lateral stop band
Figure 2.52. Elementary type (e) band-pass cell, with an attenuation pole in the lower lateral stop band
Figure 2.53. Complementary cells of band-pass cells
Figure 2.54. Image-matching filter, composed of cascading cells represented by rectangles, with their termination resistances
Figure 2.55. Image attenuation in red and effective attenuation in blue, obtained by simulation, for the filter whose template is defined in the example above. For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
Figure 2.56. Equivalent circuit and symbol of the quartz resonator, where the damping resistance (Joule losses) is neglected
Figure 2.57. Another equivalent circuit of the quartz resonator
Figure 2.58. Lattice filter
Pierre Muret
First published 2018 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2018
The rights of Pierre Muret to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2017961003
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-78630-182-6
Today, we can consider electronics to be a subject derived from both the theoretical advances achieved during the 20th century in areas comprising the modeling and design of components, circuits, signals and systems, and the tremendous development attained in integrated circuit technology. However, such development has led to something of a knowledge diaspora that this work attempts to counteract by collecting both the general principles at the center of all electronic systems and components, and the synthesis and analysis methods required to describe and understand these components and subcomponents. The work is divided into three volumes. Each volume follows one guiding principle from which various concepts flow. Accordingly, Volume 1 addresses the physics of semiconductor components and its consequences, that is, the relations between component properties and electrical models. Volume 2 addresses continuous-time systems, initially adopting a general approach in Chapter 1, followed by a review of the highly involved subject of quadripoles in Chapter 2. Volume 3 is devoted to discrete-time and/or quantized-level systems. The former, also known as sampled systems, which can be either analog or digital, are studied in Chapter 1, while the latter, conversion systems, are addressed in Chapter 2. The chapter headings are indicated in the following general outline.
Each chapter is paired with exercises and detailed corrections, with two objectives. First, these exercises help illustrate the general principles addressed in the course, proposing new application layouts and showing how theory can be implemented to assess their properties. Second, the exercises act as extensions of the course, illustrating circuits that may have been described briefly, but whose properties have not been studied in detail. The first volume should be accessible to students with a scientific literacy corresponding to the first 2 years of university education, allowing them to acquire the level of understanding required for the third year of their electronics degree. The level of comprehension required for the following two volumes is that of students on a master’s degree program or enrolled in engineering school.
In summary, electronics, as presented in this book, is an engineering science that concerns the modeling of components and systems from their physical properties to their established function, allowing for the transformation of electrical signals and information processing. Here, the various items are summarized along with their properties to help readers follow the broader direction of their organization and thereby avoid fragmentation and overlap. The representation of signals is treated in a balanced manner, which means that the spectral aspect is given its proper place; to do otherwise would have been outmoded and against the grain of modern electronics, since a wide range of problems are now initially addressed according to criteria concerning frequency response, bandwidth and signal spectrum modification. This should by no means overshadow the application of electrokinetic laws, which remains a necessary first step since electronics remains fundamentally concerned with electric circuits. Concepts related to radio-frequency circuits are not given special treatment here, but can be found in several chapters. Since a full account of logical circuits involves digital electronics and industrial computing, the part treated here is limited to logical functions that may be useful in binary number computation and elementary sequencing. The author hopes that this work contributes to a broad foundation for the analysis, modeling and synthesis of most active and passive circuits in electronics, giving readers a good start to begin the development and simulation of integrated circuits.
1) Volume 1: Electronic Components and Elementary Functions [MUR 17].
i) Diodes and Applications
ii) Bipolar Transistors and Applications
iii) Field Effect Transistor and Applications
iv) Amplifiers, Comparators and Other Analog Circuits
2) Volume 2: Continuous-time Signals and Systems.
i) Continuous-time Stationary Systems: General Properties, Feedback, Stability, Oscillators
ii) Continuous-time Linear and Stationary Systems: Two-port Networks, Filtering and Analog Filter Synthesis
3) Volume 3: Discrete-time Signals and Systems and Conversion Systems [MUR 18].
i) Discrete-time Signals: Sampling, Filtering, Phase Control and Frequency Control Circuits
ii) Quantized Level Systems: Digital-to-analog and Analog-to-digital Conversions
Pierre MURET
November 2017
This volume is dedicated to the study of linear and stationary systems in which time is considered as a continuous variable, together with certain extensions to nonlinear systems. It is mainly centered on single-input and single-output systems, but a method capable of generalizing such studies to linear or nonlinear multi-input and multi-output systems is also addressed. Generally, in order to highlight the properties of these systems, one must rely on the analysis of the electrical signals that characterize either their response to an excitation signal or their natural (or proper) response. The former output signal depends on the input signal and is called the forced response, whereas the natural response is independent of the excitation signal applied to the input. Therefore, it is essential to begin with the representations of signals, by forming a close correlation between the time domain and the frequency domain, which are connected by the Fourier transform or by decomposition into Fourier series. It is then natural to particularize the study to the case of stationary systems, for which the forced response is invariant under time translation of the signal applied to the input, and which, in addition, follow the principle of causality. The unilateral Laplace transform then proves to be useful, and it leads us to the notion of transfer function or transmittance, together with the Fourier transform in the case of finite energy signals. The properties of these two types of transforms and their application to electronic systems are covered in the first part of Chapter 1, while the consequences of causality are addressed in Chapter 2.
The second part of Chapter 1 is dedicated to the study of feedback and its applications, then to the different methods for studying the stability of systems, or the means of controlling their instability, as is the case for oscillators. A system is stable if, after an excitation of finite duration, it finally returns to its previous idle state, namely without any variation of the electrical quantities; it is unstable otherwise. In the early stages of electronics, feedback was paramount, and it led to much progress and the development of a multitude of applications, which are reviewed here. The mathematical tools constituted by the time–frequency transforms mentioned earlier, or by representations in the complex plane, are then used to address problems of system stability, including the case of systems that incorporate a feedback loop, known as looped systems. The extension to state variables and the state representation, based on the decomposition of the response of a system into a set of first-order differential equations, is then addressed. These concepts finally make it possible to detail the different ways of analyzing the operation of oscillators, which initially can be considered as linear systems at the limit of stability, but which in practice are always subject to a limitation of amplitude that requires nonlinearity to be taken into account. The transition from predictable operation to a chaotic regime is presented in the case of a model system.
In Chapter 2, the properties of stable electronic systems are particularized to the case of networks, and especially quadripoles. The different representations of networks in the form of quadripoles are discussed, as well as all the notions of impedance or admittance deriving therefrom. Some are measurable, thus experimentally accessible, while others are fictitious, such as image impedances, but open a highly fruitful scope of application, which is the subject of the last section of this chapter. The concepts of matching, whether power or impedance matching, are detailed, as well as their consequences and the rules to apply in practice in order to optimize the operation of electronic assemblies and to take best advantage of the components they include.
The last part of Chapter 2 is devoted to stable systems that can be analyzed as analog filters, namely systems satisfying the principle of causality, whose general consequences are presented. These are either circuits incorporating one or more active devices, such as operational amplifiers, or passive circuits, limited here to the non-dissipative case. The synthesis of these analog filters is treated thoroughly, and can be used to determine the value of all the components of a filter based on imposed criteria, most often a template in the frequency domain. Two topics are presented: active filters on the one hand, and non-dissipative passive filters on the other. In the second case, the method using effective parameters is an exact method but does not cover all applications, while the method of image parameters is suitable for most requirements, with a deviation from the template that can be minimized. The ways to make adjustments and all the circuits necessary for the practical implementation of the filters are detailed. Examples are given for each important case, based on transfer functions calculated by means of software programs (here, MATLAB). The different possible choices for the computational functions are presented in relation to the criteria to be verified. In the case of synthesis based on image parameters, formulas allowing the calculation of all elements are demonstrated. Although the case of systems with distributed elements, essential when the wavelength becomes comparable to the dimensions of the circuit, is not explicitly addressed, the description of quadripoles using s-parameters, as detailed in Chapter 2, adapts to it easily.
The linear and stationary systems that concern us here deliver an output signal y(t), solution of a real and linear ordinary differential equation, when an input signal x(t) is applied to them, where t represents the time variable:
which can also be seen as a linear application:
Function exp(αt), with α real or complex, is of special importance since it is an eigenfunction of the system’s differential equation: if x(t) = exp(αt), the output signal is also proportional to exp(αt). It is this fundamental property that warrants the approaches discussed in sections 1.1 and 1.2 below. Another method, based on the state-space form and also applicable to nonlinear systems, is presented in sections 1.4.5 and 1.5.
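This eigenfunction property can be checked numerically. The following Python/NumPy sketch (an illustration, not taken from the book; the first-order system dy/dt + y = x is an assumed example) verifies that an exponential input forces an output of the same exponential shape, with a constant ratio:

```python
import numpy as np

# Assumed example system (not from the book): dy/dt + y = x.
# For x(t) = exp(alpha*t), the forced response is y(t) = exp(alpha*t)/(1 + alpha)
# (valid for alpha != -1), i.e. proportional to the input exponential.
alpha = -0.3
t = np.linspace(0.0, 5.0, 2001)
x = np.exp(alpha * t)
y = x / (1.0 + alpha)            # candidate forced response, same shape as x

# verify that y satisfies the differential equation dy/dt + y = x
dydt = np.gradient(y, t)
residual = np.max(np.abs(dydt + y - x))
ratio = y / x                    # constant ratio: exp(alpha*t) is an eigenfunction

assert residual < 1e-3
assert np.allclose(ratio, 1.0 / (1.0 + alpha))
```

The constant ratio y/x is precisely the value the transfer function takes at s = α, anticipating the transmittance concept of section 1.2.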
These signals are real electrical quantities and thus measurable functions of time variable t, which itself is a continuous variable. They are also referred to as analog signals. An additional representation is formed by the frequency spectrum.
In general, any real sinusoidal signal of angular frequency ω1 and frequency f1 (ω1 = 2πf1) is written as y(t) = A cos(ω1t + φ1), once a time and phase origin has been selected. In complex notation, this can also be written as:
Both exponential terms with imaginary exponents carry the same coefficient and are complex conjugates of each other, two conditions required for y(t) to be real. The two vectors corresponding to their images on the complex plane rotate in opposite directions; thus, frequency −f1 is always present at the same time as frequency f1.
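The decomposition into two conjugate exponentials is easy to verify numerically. This Python/NumPy sketch (illustrative only; the values of A, f1 and φ1 are arbitrary) checks that the sum of the two rotating vectors is purely real and reproduces A cos(ω1t + φ1):

```python
import numpy as np

# arbitrary amplitude, frequency and phase for the illustration
A, f1, phi1 = 2.0, 50.0, np.pi / 6
w1 = 2 * np.pi * f1
t = np.linspace(0.0, 0.1, 1000)

y_real = A * np.cos(w1 * t + phi1)
# sum of the two conjugate exponentials, each with coefficient A/2
y_cplx = (A / 2) * (np.exp(1j * (w1 * t + phi1)) + np.exp(-1j * (w1 * t + phi1)))

assert np.allclose(y_cplx.imag, 0.0)   # conjugate pair: imaginary parts cancel
assert np.allclose(y_cplx.real, y_real)
```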
Figure 1.1. Representation of a sinusoidal signal on the complex plane
The spectral or frequency representation is thus formed simply by two lines of amplitude A/2 at frequencies f1 and −f1, and phase lines φ1 and −φ1 at these same frequencies.
Figure 1.2. Spectrum of a sinusoidal signal (amplitude solid line, phase dotted)
Indeed, sinusoidal signals of the same frequency form a two-dimensional vector space for which a basis is provided by exp[jω1t] and exp[−jω1t] (cos[ω1t] and sin[ω1t] form another basis). Thus, we can write:
with
and
where c1 and c−1 are complex conjugates.
However, here only the first of these terms will be considered, with the second obtained by complex conjugation. This leads to the rotating vector or Fresnel representation: concerning the instantaneous values, only A exp[j(ω1t + φ1)] is used in the complex plane (or rather A/√2 · exp[j(ω1t + φ1)] if these values are considered to be root mean square (rms) quantities for power calculations). Again, y(t) is found in the first case by projection on the real axis, that is, by taking the real part of the symbolic representation, to within a coefficient of 2.
From sinusoidal signals, we can generalize to periodic signals yT(t) of period T equal to 1/f1 by performing a development as a Fourier series. Periodic signals of period T also constitute a vector space, but of dimension 2N if signal reconstitution requires N sinusoidal signals of harmonic frequencies f1, 2f1, 3f1, 4f1, … Nf1. The series’ convergence to yT(t) is ensured if N approaches infinity:
where the coefficients are calculated by Fourier series decomposition:
Since yT(t) is real, cn and c−n are complex conjugates (same modulus and opposite phase). Hence the even and odd symmetries, respectively, for the modulus spectrum |cn| and for the argument spectrum Arg{cn}.
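The coefficients cn and their conjugate symmetry can be illustrated numerically. In this Python/NumPy sketch (not from the book; the example periodic signal and grid size are arbitrary choices), each cn is approximated by a Riemann sum over one period:

```python
import numpy as np

# c_n = (1/T) * integral over one period of y(t) * exp(-2j*pi*n*t/T),
# approximated here by a mean over a fine sampling grid.
T = 1.0
t = np.linspace(0.0, T, 20000, endpoint=False)
# arbitrary real periodic signal: square wave plus a shifted second harmonic
y = np.sign(np.sin(2 * np.pi * t / T)) + 0.5 * np.cos(4 * np.pi * t / T + 0.3)

def c(n):
    return np.mean(y * np.exp(-2j * np.pi * n * t / T))

# since y(t) is real, c_{-n} = conj(c_n): same modulus, opposite argument
for n in range(1, 6):
    assert np.isclose(c(-n), np.conj(c(n)))
    assert np.isclose(abs(c(-n)), abs(c(n)))
```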
Figure 1.3. Spectrum of a periodic signal of repetition frequency f1 (modules in bold and arguments in dotted lines)
By merging the conjugated terms, the real series can be written as:
Power (average energy over time) is calculated by Parseval’s rule, which shows that this energy is independent of time or frequency representation (to within the factor R or 1/R) and is obtained by a scalar product of the signal by itself:
No cn·cn′ term with n ≠ n′ appears since the basis of the vector space is orthogonal (scalar products of all basis vectors are zero unless n = n′). It should be noted that, in the frequent event where power is calculated from a complex voltage U or a current I, the sum runs over |Un|²/R or alternatively over R|In|², where Un and In represent, respectively, the complex Fourier series decomposition coefficients of u(t) and i(t).
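Parseval's rule for a periodic signal has a direct discrete analogue, where the cn are obtained from the FFT of one sampled period. This Python/NumPy sketch (not from the book; the three-harmonic test signal is arbitrary) checks that the average power computed in time equals the sum of |cn|²:

```python
import numpy as np

# one period (T = 1) of a band-limited periodic signal, sampled at N points
N = 1024
t = np.arange(N) / N
y = 1.0 + 2.0 * np.cos(2 * np.pi * 3 * t + 0.4) + 0.5 * np.sin(2 * np.pi * 7 * t)

c = np.fft.fft(y) / N                 # Fourier series coefficients c_n
power_time = np.mean(y ** 2)          # average power in the time domain
power_freq = np.sum(np.abs(c) ** 2)   # sum of |c_n|^2 over all harmonics

# both equal 1 + 2**2/2 + 0.5**2/2 = 3.125 for this signal
assert np.isclose(power_time, power_freq)
```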
If the signals are non-periodic, one can let the period T of the signals approach infinity, on condition that the integral of |y(t)| converges (the signals have to approach zero for t → ±∞), replacing the discrete variable n/T by the continuous variable f (frequency) and thus defining the Fourier transform Y(f) of y(t):
The symmetry properties are the same as for cn since y(t) is assumed to be a real function:
By changing variable t to −t, only the sine term changes sign thus providing:
y(t) is obtained by means of the inverse FT, calculated from the Fourier series by approaching the limit: replacing cnT by Y(f), n/T by f, 1/T by df and the sum by an integral:
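In the discrete setting, the FFT plays the role of the FT, and the properties just stated for a real signal (Hermitian symmetry, recovery by the inverse transform) can be observed directly. A Python/NumPy sketch (illustrative only; the random test signal is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(256)       # an arbitrary real "time" signal
Y = np.fft.fft(y)

# Hermitian symmetry of a real signal: Y(-f) = conj(Y(f)),
# i.e. Y[N-k] = conj(Y[k]) for the DFT bins
assert np.allclose(Y[1:][::-1], np.conj(Y[1:]))

# the inverse transform recovers the time signal
assert np.allclose(np.fft.ifft(Y).real, y)
```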
Other properties of the FT are as follows:
– The FT and the inverse transform are linear applications:
– Derivation and integration of y(t): if Y(f) = FT[y(t)], then FT[dy/dt] = 2jπf·Y(f) and FT[∫y(t)dt] = Y(f)/(2jπf) (by integration by parts of the definition where y(t) is replaced by dy/dt).
– Delay theorem: FT[y(t − t0)] = exp(−2jπf t0)·Y(f). The phase alone is modified (a phase lag if t0 > 0, that is, a time delay), not the transform modulus.
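The delay theorem also has a discrete counterpart that can be verified at once: delaying a signal by m samples (circularly) multiplies its FFT by a pure phase term and leaves the modulus untouched. A Python/NumPy sketch (illustrative, arbitrary signal and delay):

```python
import numpy as np

N, m = 128, 10
rng = np.random.default_rng(1)
y = rng.standard_normal(N)
k = np.arange(N)

Y = np.fft.fft(y)
Y_delayed = np.fft.fft(np.roll(y, m))   # y delayed (circularly) by m samples

# delay theorem, discrete form: multiplication by exp(-2j*pi*k*m/N)
assert np.allclose(Y_delayed, Y * np.exp(-2j * np.pi * k * m / N))
# the modulus of the transform is unchanged
assert np.allclose(np.abs(Y_delayed), np.abs(Y))
```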
Figure 1.4. Triangular signals (left) and their spectrum (FT) (right). For a color version of this figure, see www.iste.co.uk/muret/electronics2.zip
– Similarity and dilatation/contraction of time/frequency scales: FT[y(αt)] = (1/|α|)·Y(f/α) (obtained by changing variable t′ = αt in the definition, with α real), as illustrated in Figure 1.4.
– Ordinary product of two functions and convolution product: if Y(f) = FT[y(t)] and X(f) = FT[x(t)], then FT[x(t)·y(t)] = (X ∗ Y)(f) and FT[(x ∗ y)(t)] = X(f)·Y(f), where ∗ denotes the convolution product.
– Wiener–Khinchine and Parseval theorems: these involve the autocorrelation function of y(t), obtained as the integral over time of the product y(t)·y(t + τ) (after reversing the names of variables t and τ).
The autocorrelation function measures the degree of resemblance between a function and its delayed version. Unlike the convolution product, the integration variable appears with the same sign in both factors under the integral sign.
This equality is the Wiener–Khinchine theorem, stating that the FT of the autocorrelation function of y(t) is equal to the squared modulus of the FT of y(t). This autocorrelation function may be calculated not only for known (or deterministic) signals but also for random signals such as noise, defined only by a probability density.
For τ = 0, this relation is rewritten simply as:
This is Parseval’s theorem, which allows us to perform the energy calculation both in the time domain from y(t) and in the frequency domain from Y(f) (clearly visible in the second member, to within a coefficient R or 1/R if y is, respectively, a current or a voltage). Thus, the integral of |Y(f)|² over all frequencies is an energy, with |Y(f)|² a spectral energy density in J/Hz, and |Y(f)| a spectral density of current or voltage, to within a factor R or 1/R, in A/Hz^1/2 or V/Hz^1/2.
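Both theorems can be verified in their discrete form, where the circular autocorrelation and the FFT replace the continuous autocorrelation and the FT. A Python/NumPy sketch (illustrative only; the random signal is arbitrary):

```python
import numpy as np

N = 64
rng = np.random.default_rng(2)
y = rng.standard_normal(N)
Y = np.fft.fft(y)

# circular autocorrelation r[tau] = sum_n y[n] * y[(n + tau) mod N]
r = np.array([np.sum(y * np.roll(y, -tau)) for tau in range(N)])

# Wiener-Khinchine, discrete form: FFT of the autocorrelation = |FFT(y)|^2
assert np.allclose(np.fft.fft(r).real, np.abs(Y) ** 2)
# Parseval, discrete form: energy in time = energy in frequency (factor 1/N)
assert np.allclose(np.sum(y ** 2), np.sum(np.abs(Y) ** 2) / N)
```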
Electric or electronic circuits built with localized elements are those featuring elements in which instantaneous currents (and voltages) are the same irrespective of the location considered in a conductor. Accordingly, it can be assumed that the wavelength of these currents, voltages and associated fields is very large relative to the dimensions of these circuits (approximation applicable up to approximately 1 GHz, corresponding to a vacuum wavelength of 30 cm). Furthermore, the only operational elements here are the sources of current and voltage, together with the linear passive elements: resistance, capacitance, self-inductance and mutual inductance. In electronics, this generally results from an approximation of linearization, which is applicable over a voltage or current range that must be defined.
The result of these two hypotheses is that these systems are also stationary, which is to say that their response is unchanging irrespective of the instant chosen as the time origin, and that they can be described mathematically by one or several linear ordinary differential equations.
The laws for linear electrical circuits (although this also applies in mechanical engineering for forces or torques, velocities and movements) are those of electrokinetics, valid for resistances (u = Ri), capacitances (i = C du/dt) and inductances (u = L di/dt), where the coefficients R, C and L are assumed to be constant if the system is linear, together with the loop law and the node law, which yield a system of linear equations (see the Appendix in Volume 1 [MUR 17]). Any system in which the value y(t) depends on the circuit elements and on an excitation x(t) can thus be described by one (or several) ordinary differential equations of the form:
All of these linear equations can also be constructed by means of the superposition theorem. Solutions are always given by the total of the equation’s general solution without the second member and a special solution to the whole equation, the first corresponding to the system’s free regime and the second to the regime forced by x(t).
If the system is stable, the solution of the equation without the second member corresponds to a transient response that ceases after a certain duration, while the forced regime continues. One may assume that the forced regime began at t → −∞, since the system is stationary and its responses are independent of the time origin. Under permanent conditions, if x(t) = exp(αt), then y(t) is also proportional to exp(αt). As shown previously, real signals that can be expressed as linear