Communication Systems Principles Using MATLAB

John W. Leis

Description

Discover basic telecommunications systems principles in an accessible, learn-by-doing format

Communication Systems Principles Using MATLAB covers a variety of telecommunications systems principles in an accessible format, without requiring the reader to first master a large body of theory. The text focuses on topics such as radio and wireless modulation, reception and transmission, wired networks, and fiber-optic communications. The book also explores packet networks and TCP/IP, as well as digital source and channel coding and the fundamentals of data encryption.

Since MATLAB® is widely used by telecommunications engineers, it was chosen as the vehicle to demonstrate many of the basic ideas, with code examples presented in every chapter. The text addresses digital communications with coverage of packet-switched networks. Many fundamental concepts such as routing via shortest-path are introduced with simple and concrete examples. The treatment of advanced telecommunications topics extends to OFDM for wireless modulation, and public-key exchange algorithms for data encryption. Throughout the book, the author puts the emphasis on understanding rather than memorization. The text also:

  • Includes many useful take-home skills that can be honed while studying each aspect of telecommunications
  • Offers a coding and experimentation approach with many real-world examples provided
  • Gives information on the underlying theory in order to better understand conceptual developments
  • Suggests a valuable learn-by-doing approach to the topic

Written for students of telecommunications engineering, Communication Systems Principles Using MATLAB® is the hands-on resource for mastering the basic concepts of telecommunications in a learn-by-doing format.


Page count: 783

Publication year: 2018




Table of Contents

Cover

Preface

Acknowledgments

Introduction

About the Companion Website

1 Signals and Systems

1.1 Chapter Objectives

1.2 Introduction

1.3 Signals and Phase Shift

1.4 System Building Blocks

1.5 Integration and Differentiation of a Waveform

1.6 Generating Signals

1.7 Measuring and Transferring Power

1.8 System Noise

1.9 Chapter Summary

Problems

2 Wired, Wireless, and Optical Systems

2.1 Chapter Objectives

2.2 Introduction

2.3 Useful Preliminaries

2.4 Wired Communications

2.5 Radio and Wireless

2.6 Optical Transmission

2.7 Chapter Summary

Problems

3 Modulation and Demodulation

3.1 Chapter Objectives

3.2 Introduction

3.3 Useful Preliminaries

3.4 The Need for Modulation

3.5 Amplitude Modulation

3.6 Frequency and Phase Modulation

3.7 Phase Tracking and Synchronization

3.8 Demodulation Using I/Q Methods

3.9 Modulation for Digital Transmission

3.10 Chapter Summary

Problems

4 Internet Protocols and Packet Delivery Algorithms

4.1 Chapter Objectives

4.2 Introduction

4.3 Useful Preliminaries

4.4 Packets, Protocol Layers, and the Protocol Stack

4.5 Local Area Networks

4.6 Device Packet Delivery: Internet Protocol

4.7 Network Access Configuration

4.8 Application Packet Delivery: TCP and UDP

4.9 TCP: Reliable Delivery and Network Fairness

4.10 Packet Routing

4.11 Chapter Summary

Problems

5 Quantization and Coding

5.1 Chapter Objectives

5.2 Introduction

5.3 Useful Preliminaries

5.4 Digital Channel Capacity

5.5 Quantization

5.6 Source Coding

5.7 Image Coding

5.8 Speech and Audio Coding

5.9 Chapter Summary

Problems

6 Data Transmission and Integrity

6.1 Chapter Objectives

6.2 Introduction

6.3 Useful Preliminaries

6.4 Bit Errors in Digital Systems

6.5 Approaches to Block Error Detection

6.6 Encryption and Security

6.7 Chapter Summary

Problems

References

Index

End User License Agreement

List of Tables

Chapter 02

Table 2.1 Radio‐frequency (RF) band designations.

Table 2.2 Microwave band designations.

Table 2.3 Transmission medium broad comparison.

Chapter 03

Table 3.1 Summary of useful trigonometric formulas.

Table 3.2 A Bessel table for determining sideband amplitudes in frequency modulation.

Table 3.3 Comparing the atan and atan2 functions. The latter gives a true four‐quadrant result.

Chapter 04

Table 4.1 The truth table for standard Boolean logic operations.

Table 4.2 Place‐value representation for binary numbers.

Table 4.3 The initial routing tables for routers R1 and R2.

Table 4.4 The routing tables for R1 and R2 after the Network 1 connection breaks.

Chapter 06

Table 6.1 The XOR function truth table.

Table 6.2 Examples of computation of an even parity bit.

Table 6.3 A naïve repetition code.

Table 6.4 Calculating the required number of check bits to satisfy the Hamming condition (for m message bits and k check bits, 2^k ≥ m + k + 1). Only selected values are tabulated for Hamming codes.

Table 6.5 State table for the simple illustrative example of convolutional code operation.

Table 6.6 The digital XOR function truth table.

List of Illustrations

Chapter 01

Figure 1.1 Sine and cosine, phase advance, and phase retard. Each plot shows amplitude versus time.

Figure 1.2 Basic building blocks: generic input/output, signal source, adder, and multiplier.

Figure 1.3 Cascading blocks in series (left) and adding them in parallel (right).

Figure 1.4 Phase shifting blocks. Note the input and output equations.

Figure 1.5 The process of mapping an input (horizontal axis) to an output (vertical), when the block has a linear characteristic. The constant or DC offset may be zero, or nonzero as illustrated.

Figure 1.6 Example of mapping an input (horizontal axis) to an output (vertical), when the block has a nonlinear characteristic. Other types of nonlinearity are possible, of course.

Figure 1.7 Some important filter blocks and indicative time responses. The waveforms and crossed‐out waveforms in the boxes, arranged high to low in order, represent high to low frequencies. Input/output waveform pairs represent low, medium, and high frequencies, and the amplitude of each waveform at the output is shown accordingly.

Figure 1.8 Primary filter types: lowpass, highpass, bandpass, and bandstop, with a low‐order filter shown on the left and higher‐order on the right. Ideally, the passband has a fixed and finite signal gain, whereas the stopband has zero gain.

Figure 1.9 Calculating the area over a small time increment using a rectangle, and the slope of the curve using a triangle.

Figure 1.10 A function, the calculation of its cumulative area up to each of two points, and the area between those two points. Note the negative portions of the “area” where the curve falls below the zero line.

Figure 1.11 Calculating area using a succession of small strips of incremental width.

Figure 1.12 The area under a curve, where the curve happens to be the derivative of another function.

Figure 1.13 The cumulative area under a function. Each point on the cumulative curve represents the area up to the right‐hand side of the shaded portion at some value of the independent variable. Note that when the function becomes negative, the area reduces.

Figure 1.14 The derivative of a function. It may be approximated by the slopes of the lines as indicated, though the spacing is exaggerated for the purpose of illustration.

Figure 1.15 Generating a sinusoid using an index into a table. The value at each index specifies the required amplitude at that instant.

Figure 1.16 Using a lookup table to generate a waveform. Successive digital (binary‐valued) steps are used to index the table. The digital‐to‐analog (D/A) converter transforms the sample value into a voltage.

Figure 1.17 A Direct Digital Synthesizer (DDS) using a reduced lookup table. Samples are produced at a fixed rate, and for each new sample a phase step is added to the current index to locate the next sample value in the Lookup Table (LUT).

Figure 1.18 A lookup table (top) with a power‐of‐two number of entries, requiring a corresponding number of index bits. One possible waveform, generated by stepping through the table at a fixed increment, is shown below for a given phase accumulator width.

Figure 1.19 The frequency spectrum of the waveform, showing the magnitude of each signal component. Ideally, only one component should be present, but the stepping approach means that other unwanted components with smaller magnitudes are also produced. Note that the vertical amplitude scale is logarithmic, not linear.

Figure 1.20 Graphical illustration of the calculation of RMS value. Squaring the waveform at the top results in the lower waveform.

Figure 1.21 Imagining RMS calculation as a series of bars, with each bar equal to the height of the waveform at that point. The period between samples is T, with sample index n; the substitution required is then t = nT.
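
To experiment with the RMS idea behind Figures 1.20 and 1.21, a minimal MATLAB sketch follows; the signal frequency, amplitude, and sample rate are illustrative assumptions, not values from the book.

    fs = 10000;                          % assumed sample rate (Hz)
    t = 0:1/fs:0.1;                      % 100 ms of samples
    x = 2.5*sin(2*pi*50*t);              % assumed 50 Hz sine, peak 2.5
    xrms = sqrt(mean(x.^2));             % square, take the mean, square root
    fprintf('RMS = %.4f (theory %.4f)\n', xrms, 2.5/sqrt(2));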

Figure 1.22 Transferring power from a source to a load. The source resistance is generally quite small, and is inherent in the power source itself. We can adjust the load resistance to maximize the power transferred.

Figure 1.23 The power transferred to a load as the load resistance is varied. There is a point where the maximum amount of power is transferred, and this occurs when the load resistance exactly matches the source resistance.
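
The maximum power transfer result in Figures 1.22 and 1.23 is easy to verify numerically. The following MATLAB sketch sweeps the load resistance for an assumed source voltage and source resistance; the peak should occur where the load resistance equals the source resistance.

    Vs = 10; Rs = 50;                    % assumed source voltage and resistance
    RL = linspace(1, 500, 1000);         % range of load resistances
    PL = (Vs ./ (Rs + RL)).^2 .* RL;     % power delivered to the load
    [Pmax, k] = max(PL);
    fprintf('Peak %.3f W at RL = %.1f ohm\n', Pmax, RL(k));
    plot(RL, PL); xlabel('Load resistance'); ylabel('Load power');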

Figure 1.24 Modeling the noise transfer of a system. The noise at the input of the first block serves as a “noise reference” when subsequent blocks are added after the first; a further quantity represents the excess noise added by each stage.

Figure 1.25 Analysis of two systems in cascade. The indicated values refer to the hypothetical noise added if referred back to the input of the first stage, whose own noise is the reference.

Figure 1.26 Waveform parameter problem.

Chapter 02

Figure 2.1 A square pulse waveform. One fundamental cycle is shown, and after that the same wave shape repeats forever.

Figure 2.2 Approximating a square waveform with a Fourier series. The Fourier series approximation to the true waveform is shown; it has a limited number of components, but is not a perfect approximation.
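
A Fourier series approximation like that of Figure 2.2 can be generated in a few lines of MATLAB. The sketch below sums the first few odd harmonics of a unit square wave; the period and number of terms are assumptions for illustration.

    f0 = 1;                              % assumed fundamental (Hz)
    t = linspace(0, 2/f0, 1000);         % two periods
    x = zeros(size(t));
    for n = 1:2:19                       % odd harmonics only, amplitudes ~ 1/n
        x = x + (4/pi)*(1/n)*sin(2*pi*n*f0*t);
    end
    plot(t, x);                          % ripple (Gibbs effect) remains visible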

Figure 2.3 Using the Fourier transform to calculate the frequency magnitude of a signal. The use of a window to taper the signal provides a smoother picture, but less resolution.

Figure 2.4 Using the Fourier transform to calculate the frequency magnitude where two underlying sinusoidal signals are present. It is able to resolve the presence of the two components.

Figure 2.5 The principle of operation of a spectrum analyzer. The resolution bandwidth (RBW) filter is swept over the desired range and is implemented as a mixer (multiplier and lowpass filter). The video bandwidth (VBW) filter serves to smooth out the resulting display.

Figure 2.6 Spectrum analysis stages with a wide window as it progresses (top) and a narrow window (bottom). Progressively, we see the input signal and RBW bandpass filter (a), the bandpass filtered signal (b), the accumulated bandpass filtered signal as the sweep progresses (c), and the final result after VBW lowpass filtering (d). The two close peaks in the input are able to be resolved with the narrower filter on the right.

Figure 2.7 The measured spectrum of a sine wave, as both VBW and RBW are adjusted. A narrower RBW gives better signal resolution and lower noise floor, but takes more time to sweep across the band of interest. A lower VBW smooths the resulting display, but leaves the noise floor unchanged.

Figure 2.8 Top: measured spectrum of a “pure” sine wave. Note the spurious peaks at the first and second harmonics, due to imperfections in the waveform generation. Bottom: the spectrum of a square wave. Note that the frequencies of the harmonics are integer multiples of the fundamental and that their amplitudes decay successively as 1/n if the decibel scale is converted to a ratio.

Figure 2.9 Illustrating some approaches to signal cabling. The use of a twisted pair helps to impart some noise immunity and is widely used in practice for Ethernet data cables. Coaxial cable is used for high‐frequency applications such as antenna connections. It should not be confused with shielded or screened cable, which is composed of two or more wires with a separate outer shield conductor, which is not part of the circuit.

Figure 2.10 Differential or balanced signals are often used for transmission. Illustrated here is a sequence of digital pulses, affected by a short noise spike. Differential voltage driving is most effective where the noise is approximately equal in both wires, such as with a twisted pair.

Figure 2.11 Transmitted pulse sequence and the corresponding received signal. Cable impairments and external interference combine to reduce the quality of signaling.

Figure 2.12 Using an eye diagram for ascertaining the timing and amplitude characteristics of a channel.

Figure 2.13 Ideal pulses (top) and their shapes when received (bottom). Smaller pulse spacing may mean that any given pulse waveform interferes with a later pulse.

Figure 2.14 The sinc function, centered about zero.

Figure 2.15 The frequency response of a raised cosine pulse (left) and the corresponding pulse shape (right).

Figure 2.16 Calculating the frequency response of a raised cosine pulse in order to determine the required shape in the time domain.

Figure 2.17 Illustrating the changing of parameters for the raised cosine pulse in the time domain (top) and corresponding frequency rolloff (bottom).

Figure 2.18 The effect of sampling a waveform early or late. Incorrect timing at the receiver results in sampling the waveform's amplitude at the wrong time with respect to the transmitter, and hence the resulting sample value may be incorrect.

Figure 2.19 Some representative line code waveforms. A coding method must balance the requirements for receiver synchronization with minimal bandwidth. Note that NRZ‐I is shown for invert on zero convention (as used in USB).

Figure 2.20 Spectra of some common line codes, derived from encoding a very long string of random binary data. The alternating 1/0 spectrum is shown for reference: It has a primary component at half the bit rate, with discrete harmonics at successively lower power levels.

Figure 2.21 Two captured portions of Ethernet waveforms at 10 Mbps (Manchester) and 100 Mbps (4B5B/MLT). Note the differing scales for each time axis.

Figure 2.22 A scrambler using only 4 bits. The operation of each block is defined in Figure 2.23. The exclusive OR (XOR) operator (shown as ⊕) produces a 1 output if either of the inputs (but not both) is 1. In practice, many more bits than shown would be employed.

Figure 2.23 Binary operations required to implement the scrambler. Note the mathematical operators used for various cases.

Figure 2.24 Step‐by‐step operation of the feedback register.

Figure 2.25 A slightly longer scrambler based on a feedback shift register. Interestingly, the descrambler is exactly the same.
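
The scrambler idea of Figures 2.22–2.25 can be sketched in MATLAB as follows. The register length, feedback taps, and seed here are assumptions; the point is that running the identical structure a second time recovers the original bits.

    data = randi([0 1], 1, 20);          % random input bits
    seed = [1 0 1 1 0];                  % assumed initial register state
    reg = seed; scr = zeros(size(data));
    for k = 1:length(data)
        fb = xor(reg(5), reg(3));        % feedback from assumed taps 5 and 3
        scr(k) = xor(data(k), fb);       % scrambled bit = data XOR feedback
        reg = [fb reg(1:4)];             % shift the register
    end
    reg = seed; dsc = zeros(size(scr));  % descramble: identical structure, same seed
    for k = 1:length(scr)
        fb = xor(reg(5), reg(3));
        dsc(k) = xor(scr(k), fb);
        reg = [fb reg(1:4)];
    end
    isequal(dsc, data)                   % prints logical 1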

Figure 2.26 Scrambler sequencing, with an initial seed error (left) and a run of errors (right).

Figure 2.27 A self‐synchronizing scrambler. The essential change is to move the input of the shift register so that it comes from the output bit stream.

Figure 2.28 A self‐synchronizing descrambler, which follows a similar arrangement to the self‐synchronizing scrambler.

Figure 2.29 Self‐synchronizing scrambler errors, showing a seed error (left) and transmission burst error (right).

Figure 2.30 Experimental setup for reflection tests on a transmission line.

Figure 2.31 Pulse reflection with short‐circuit (left) and open‐circuit (right) termination. The cable length is 30 m. The reflection coefficient determines the relative amount of reflection, as a proportion of the incoming wave at the end of the cable.

Figure 2.32 Waveforms at the instant a switch is thrown, driving a long transmission line.

Figure 2.33 Pulse reflection with 25 Ω and a second resistive termination.

Figure 2.34 Experiments for reflection in a transmission line: pulse input with various termination impedances.

Figure 2.35 An electrical model of a short section of wire. It consists of series inductance and resistance, as well as parallel capacitance and resistance.

Figure 2.36 Simplified case of cable impedance, neglecting the series resistance (effectively zero) and parallel resistance (effectively infinite).

Figure 2.37 Lumping several small segments in series using the inductance/capacitance model for each segment separately.

Figure 2.38 A hypothetical lossless line with the parallel capacitance neglected (top) and the series inductance neglected (lower). If we imagine that there is no capacitance in parallel and only a coil, then the series inductance adds cumulatively. If we imagine that there is no inductance in series and only parallel capacitance, then the capacitance adds cumulatively.

Figure 2.39 A wave traveling along a wire, effectively being delayed over time.

Figure 2.40 A traveling wave, shown along the length of the line at one time instant.

Figure 2.41 A wave described by a second traveling‐wave expression, shown along the length of the line at a given time instant.

Figure 2.42 A simple wave traveling left to right and its reflection that travels back in the opposite direction. The net waveform that is observed at any point along the transmission line is the sum of the two.

Figure 2.43 Formation of a standing wave, when the reflection has gain of unity and phase shift of zero. On the left, we see the forward wave (top), the reflected wave (middle), and net sum of these (bottom). On the right, we see a snapshot of what happens over time with a few waves traveling (top) and the upper and lower envelopes that result over a period of time (bottom right).

Figure 2.44 Formation of a standing wave, when the reflection has gain of unity and a phase shift of 180°.

Figure 2.45 A traveling wave occurs for cases when the reflection is incomplete. Here we illustrate two combinations of reflection coefficient magnitude and phase, the second with a magnitude of 0.2 (right). From the envelope of all waves thus generated, the standing wave ratio may be determined.

Figure 2.46 Calculating the magnitude of reflection at a given point.

Figure 2.47 VSWR calculated envelope magnitudes, in‐phase reflection case. Note that the scale is reversed by convention, showing the distance back from the load.

Figure 2.48 VSWR calculated envelope magnitudes, out‐of‐phase reflection case. Once again, the distance scale shows the distance back from the load.

Figure 2.49 The portion of the electromagnetic spectrum important for telecommunications. Radio, wireless, and satellite systems use the frequency ranges shown. At extremely high frequencies, infrared (IR) is used in fiber optics. Still higher in frequency is the visible light spectrum.

Figure 2.50 Radio horizon calculations for a spherical Earth with no surface features. The maximum transmission distance is determined by the height of the transmitter and the mean radius of the Earth (diagram not to scale).

Figure 2.51 A simplified model for radio reflection calculations. The direct path differs from the path via reflection, so that if the signal is not attenuated upon reflection, the strength of the resulting signal at the receiver may be altered.

Figure 2.52 A wave that meets a barrier with an opening. Each point where the wave passes through may be thought of as a new source of wavefronts, which interfere with each other. This produces the phenomenon of diffraction.

Figure 2.53 Considering just two points, the physical path difference results in a wave that reaches the observer, which appears to be from one point. However, the waves interfere according to their phase relationship. The wavelength relative to the aperture is clearly important.

Figure 2.54 Diffraction at an aperture. Image (a) is shown assuming no diffraction; (b), (c), and (d) illustrate the situation as the aperture gradually increases.

Figure 2.55 Illustrating knife‐edge diffraction in line‐of‐sight transmission and the resulting Fresnel zone.

Figure 2.56 Doppler cases 1,2 (top) and 3,4 (lower).

Figure 2.57 Doppler left/right conventions.

Figure 2.58 A basic half‐wave dipole. The support beam is electrically insulated from each arm of the dipole. Note that the angle toward a receiver is measured from the dipole arms, and thus the direction of maximum intensity or sensitivity is perpendicular to the dipole arms (90°). The total length of the dipole is half a wavelength, and as a result each arm is a quarter wavelength.

Figure 2.59 The normalized dipole pattern. From this, we may determine the relative field strength of a transmission at a given angle or, alternatively, when used as a receiving antenna, the sensitivity when aligned with respect to a transmitter. (a) Linear scale. (b) Decibel (logarithmic) scale.

Figure 2.60 An elemental or Hertzian dipole (a) consists of a hypothetical current‐carrying element. It is used as the basis for modeling more complex antenna types. The electric field vectors are decomposed into orthogonal (perpendicular) components (b).

Figure 2.61 A half‐wave dipole may be considered as a multitude of elemental dipoles. The resulting field is the summation of all the individual small dipole contributions.

Figure 2.62 Elemental or Hertzian dipole – snapshot at an instant in time. A surface plot (a) shows the intensity as the height, while the image visualization (b) shows a false‐color representation.

Figure 2.63 A half‐wave dipole using the same method of calculating the field. Only the conductor current profile has changed compared with the elemental dipole. A surface plot (a) shows the intensity as the height, while the image visualization (b) shows a false‐color representation.

Figure 2.64 A half‐wave dipole, when combined with one or more directors and a reflector, forms a Yagi antenna.

Figure 2.65 Experimental measurements of antennas at 2.4 GHz. (a) Dipole antenna. (b) Yagi antenna.

Figure 2.66 A log‐periodic antenna formed by multiple half‐wave dipoles. Note the reversal of the interconnections between successive dipoles, which effects a phase reversal.

Figure 2.67 Illustrating the focus of parallel waves encountering a parabolic reflector. The tangent at the point of incidence results in equal angles, so the focus is always at the same point, irrespective of which horizontal waves we consider.

Figure 2.68 An antenna array composed of two ideal sources. The receiver is at some angle to the line connecting the sources.

Figure 2.69 Changing the relative phases produces the two patterns illustrated in (a) and (b). Note that the element axis is along the horizontal, corresponding to the axis of the radiator elements, whose relative positions are indicated.

Figure 2.70 Receiving a radio signal means selecting a particular RF band and translating it back to the baseband. Sending a signal is the reverse – translating from the baseband up to RF. The actual bandwidth taken up in the RF area is invariably greater than the bandwidth of the baseband signal.

Figure 2.71 The Tuned Radio Frequency (TRF) receiver is essentially just a bandpass filter followed by a detector. The very weak received RF signal is first amplified, and the particular band of interest is selected using a filter. The information signal must be selected from that, and originally this was just a “detector” before more sophisticated modulation methods were devised.

Figure 2.72 The general principle of heterodyning in a receiver. The Radio Frequency (RF) is mixed down using the Local Oscillator (LO) to produce an Intermediate Frequency (IF), which is then demodulated according to the modulation method used at the transmitter. The final stage shown is the Audio Frequency (AF) output. The Automatic Gain Control (AGC) feedback loop is used to adjust the output amplitude to maintain a constant output irrespective of the presence of strong or weak radio signals.

Figure 2.73 Downconversion from RF to an intermediate frequency, with low‐side injection (LO less than RF). Sum and difference frequencies are generated as a result. Note that the LO must be tuned to be below the desired RF by an amount equal to the IF.

Figure 2.74 A signal mixer for downconversion consists of an oscillator and a signal multiplier, followed by a lowpass filter. The difference frequency will always be lower, and hence it is removed by an appropriately designed lowpass filter.

Figure 2.75 Converting a signal by multiplication. The sum and difference frequencies are produced.

Figure 2.76 An ideal downconversion mixer example. The multiplication gives sum and difference frequencies, and the lowpass filter passes only the lower (difference) component.

Figure 2.77 Mixer example with an image frequency present.

Figure 2.78 Illustrating how image signals may be generated in the frequency domain. The spacing between the local oscillator (LO) and desired radio frequency (RF) determines the region where an image frequency will interfere, if one is present.

Figure 2.79 The Hartley image rejection approach. It relies not on filtering to reject the image, but on generating waveforms with a precise phase relationship (not difficult) as well as phase‐shifting another waveform (usually more difficult to achieve).

Figure 2.80 Direct downconversion with quadrature signals: I is the cosine component and Q is the sine component.

Figure 2.81 Illustrating the effect of a nonlinearity in the amplifier system, resulting in intermodulation terms. The example shown uses two input tones. The frequency scale is arbitrary. Note that the amplitude scales are not equal.

Figure 2.82 Wireless 2.4 GHz channel usage and interference. Two separate WiFi networks may or may not interfere with each other, background interference may exist for short or long periods of time, and background noise is always present.

Figure 2.83 Illustrating an optical emitter and detector response overlap. Precise matching is almost never possible. This leads to a smaller electrical signal at the detector output, as well as additional noise due to the wider detector bandwidth.

Figure 2.84 Illustrating the basic laser principle. External energy is supplied by the junctions at the top and bottom, stimulating the emission of cascades of photons. The lasing medium has a high gain over a defined optical wavelength. The stimulated radiation thus emitted bounces back and forward within the cavity to form a standing wave, with a fraction released to provide the laser output.

Figure 2.85 Illustration of the optical emission of an LED (top) and Fabry–Pérot (FP) laser diode (bottom). Note the different wavelength scales. The region of 1300 nm shown lies in the infrared spectrum and is not visible to the eye.

Figure 2.86 Multimode step‐index fiber cross section (a), with typical sizes shown. More examples are given at the Fiber Optic Association (n.d.). The image in (b) shows a single‐mode optical fiber alongside a human hair, under high magnification.

Figure 2.87 Motivating the derivation of Snell's law. The plane wave enters at the top and moves into the medium with a higher refractive index at the boundary.

Figure 2.88 Principle of refraction at an interface (left) and total internal reflection (right). This shows that light emanating from a point may be kept inside the material with the higher refractive index, provided the outside material has a lower refractive index, and the angle is shallow enough with respect to the axis of the core.

Figure 2.89 Illustrating the light entry angle and numerical aperture for multimode step‐index fibers.

Figure 2.90 Illustrating the calculation of fiber loss over four segments. The numerical gains, which are less than unity, are multiplied. An equivalent method, arguably easier in practice, is to add the dB figures. The dB figures are understood to be negative (less than 0 dB), since they represent a loss.
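
As a quick check of the equivalence noted in Figure 2.90, the sketch below (with assumed per‐segment losses) shows that adding dB figures gives the same answer as multiplying the linear gains.

    loss_dB = [-0.4 -0.1 -0.5 -0.2];     % assumed segment losses in dB
    total_dB = sum(loss_dB)              % add the dB figures
    gains = 10.^(loss_dB/10);            % equivalent linear gains (< 1)
    10*log10(prod(gains))                % multiply gains, convert: same result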

Figure 2.91 Using the decibel scale: top, for a gain >1 (positive dB) and bottom, losses (gain <1, or negative dB). Note the position of 0 dB in each case, as well as 3 dB and −3 dB points.

Figure 2.92 Optical time‐domain reflectometry test with a long cable and one join. The cable join is visible at around 2 km, and the fiber loss may be calculated from the slope of the overall trend line. Note also the differing loss characteristics for different wavelengths.

Figure 2.93 Optical time‐domain reflectometry test at a single wavelength with sharp fiber bends introduced. The additional loss thus introduced should ideally be avoided in practice.

Figure 2.94 Transmission line with a square pulse input.

Figure 2.95 Experimental waveforms for investigating reflection on a 4.2 m transmission line.

Chapter 03

Figure 3.1 Lengths and angles for trigonometry. The angle is shown to be less than 90°, but this need not be the case, and the concept can be generalized to any angle.

Figure 3.2 A point in the complex plane defines the cosine magnitude (real part) and the sine magnitude (the imaginary part). So e^(jθ) is equivalent to cos θ + j sin θ.

Figure 3.3 Multiplying a point in the complex plane by another complex number, with three successive products shown from left to right.

Figure 3.4 Converting a lower‐frequency signal up in frequency using multiplication by a much higher‐frequency signal, and back down again, also via multiplication. The final result (lower panel) may be filtered to remove the high‐frequency component, effectively leaving just the envelope, which is essentially the original waveform.

Figure 3.5 Frequency domain representation of signal conversion. If we imagine a negative frequency to match the given positive frequency, then the conversion is just a translation of both components by the carrier frequency.

Figure 3.6 Generating an AM waveform using multiplication and addition. (a) Generating an AM signal. (b) The waveforms at each stage.

Figure 3.7 AM modulation parameter calculation, showing the AM waveform with its envelope superimposed.

Figure 3.8 AM modulation showing time waveforms (left) and corresponding frequency spectra (right).

Figure 3.9 AM signal bandwidth and its effect on adjacent channels.

Figure 3.10 A diode detector, rectifying the AM signal followed by a very simple lowpass filter.

Figure 3.11 AM demodulation via squaring and first‐order filtering.

Figure 3.12 AM demodulation as squaring of the input; the envelope of the peaks is shown superimposed.

Figure 3.13 Double‐sideband modulation or DSB.

Figure 3.14 The phase reversal of the modulated DSB waveform produces cancelation of the carrier.

Figure 3.15 Synchronous DSB demodulation. Matching of the local oscillator phase to the received signal phase is critical.

Figure 3.16 SSB modulation using bandpass filtering.

Figure 3.17 SSB modulation using phasing, also called a Hartley modulator.

Figure 3.18 SSB modulation using Weaver's method.

Figure 3.19 SSB demodulation using the Hartley phasing approach.

Figure 3.20 The waveforms and spectra of AM modulation variants – AM, DSB, SSB, and VSB.

Figure 3.21 Generating a time waveform viewed as stepping through a phase angle.

Figure 3.22 Comparison of frequency modulation and phase modulation for a sinusoidal modulation signal. The cosine modulating signal covers a range of amplitudes from positive to negative. Note the phase difference between FM and PM.

Figure 3.23 Comparison of frequency modulation and phase modulation for a sawtooth modulation signal. The sawtooth (ramp) modulating signal starts at zero and ramps up to a maximum value, then falls back to zero. Note the gradual frequency increase in FM, and the abrupt phase change in PM.

Figure 3.24 Phase modulation conceptual diagram (top). The phase angle is determined from the “prototype” sine wave, with the specific point (or phase) being determined by the current oscillator position added to the scaled modulation signal. Frequency modulation (bottom) is similar, but with the phase angle determined not by the instantaneous value but the cumulative value of the input.

Figure 3.25 Showing how FM may be produced from a phase modulator and how PM may be produced from a frequency modulator.

Figure 3.26 FM and PM modulation waveform comparison. In going from ramp to the step (left to right), we differentiate the modulation; in going from right to left, we integrate the modulation. Phase modulation of the ramp is identical to the frequency modulation of the step waveform.

Figure 3.27 Frequency modulation showing time waveforms (left) and corresponding frequency spectra (right).

Figure 3.28 Measured spectrum for FM, for one value of the modulation index.

Figure 3.29 Measured spectrum for FM, for a second value of the modulation index.

Figure 3.30 Harmonic multiplications for deriving the FM spectrum. The upper panel shows two different frequencies multiplied, with an average of zero. The lower panel shows two identical frequencies multiplied, with an average of one‐half.

Figure 3.31 The expansion of the FM equation (top) yields two terms: term 1 and term 2. By symmetry, it may be observed that term 1 has an average of zero, whereas term 2 does not.

Figure 3.32 Differentiating an FM signal reveals another signal that is amplitude modulated. The timescale is arbitrary, depending on the frequencies of the waveforms concerned. The signals are xc (carrier), xm (modulation), xfm (modulated), and finally the rate‐of‐change dxfm.

Figure 3.33 Asynchronous FM demodulation. The dotted part is effectively an AM demodulator. A preceding section (not shown) would limit the amplitude of the incoming signal, so as to reduce any spurious noise amplitude spikes.

Figure 3.34 Determining the correct time to sample a waveform is critical. In this example, a higher value is interpreted as a binary 1, and a lower value as a binary 0. As illustrated, incorrect timing could lead to the wrong decision and hence an incorrect binary value.

Figure 3.35 A phase‐locked loop, which may be considered as a type of control system. The phase comparator determines how close the waveforms are and guides the oscillator via the controller to either increase or decrease its frequency so as to more closely align the timing (or phase) with the incoming waveform. (a) The phase‐locked loop (b) A generic control system.

Figure 3.36 Waveforms with a phase difference (a) and determining the phase difference by averaging over a few cycles the product of the input and local oscillator (b).

Figure 3.37 To derive the amplitude at the next step, and thus the overall waveform, the amplitude must be selected according to the fixed step plus or minus a small difference. Accordingly, this yields a faster or slower waveform. (a) Selecting the next amplitude at each step. (b) Next step amplitude from phase advance/retard.

Figure 3.38 The PLL is comprised of phase detector (multiplier plus averaging filter), tunable controller, and numerically controlled oscillator, in a feedback‐loop configuration.

Figure 3.39 The Costas loop extends the basic PLL approach to employ quadrature signals in two separate branches, utilizing the combined phase error of each to drive the oscillator.

Figure 3.40 PLL response to change in phase. The phase error is shown, together with the control signal derived from it. The waveforms show the input sinusoid and the PLL oscillator sinusoid at the indicated time instants – before (A), during (B), and after (C) the phase change.

Figure 3.41 PLL response to change in frequency. This should be compared with the previous figure. Note that the phase increment is permanently increased, so as to track the increased input frequency. At time B, the frequency of the input waveform is greater than the oscillator shown below it; however, the PLL action restores the frequency (and phase) match at C.

Figure 3.42 Illustrating quadrature signals: time domain (left) and I–Q plane (right). The magnitude and phase are represented using the cosine component as I on the horizontal axis, and the sine component as Q on the vertical axis.

Figure 3.43 Demodulation with quadrature signals: I is the cosine component, and Q is the sine component.

Figure 3.44 Waveforms for I/Q demodulation of AM. Lowpass filtering of the output waveform (lower panel) would remove the double carrier frequency component. Removal of the constant offset is also required. The final output waveform should then correspond to the modulating input (top panel).

Figure 3.45 Waveforms for I/Q demodulation of PM. Further lowpass filtering of the output waveform (lower panel) would smooth the demodulated signal. Note the correspondence to the input modulating signal (top).

Figure 3.46 Waveforms for I/Q demodulation of FM. Filtering is required for the output waveform (lower panel), followed by differentiation – at which point it should correspond to the input modulating wave (top panel).

Figure 3.47 Amplitude shift keying in theory, with an alternating 1/0 input signal (left) and PRBS or pseudo‐random binary sequence (right) to represent a more realistic transmission scenario.

Figure 3.48 Frequency shift keying in theory, with an alternating 1/0 input signal (left) and PRBS (right) to represent a more realistic transmission scenario.

Figure 3.49 Phase shift keying in theory, with an alternating 1/0 input signal (left) and PRBS (right) to represent a more realistic transmission scenario.

Figure 3.50 Measured spectra for ASK and FSK (top) and PSK (lower). Each shows the spectrum for a 1/0 alternating input sequence. The PSK case shows in addition the spectrum resulting from a pseudorandom binary sequence (PRBS) bitstream. Note that the power is measured in dBm.

Figure 3.51 The “clean” version of a digital pulse signal (top), additive white Gaussian noise (middle), and the received signal (bottom).

Figure 3.52 Multiplying the incoming wave and integrating the sum over one symbol period.

Figure 3.53 Waveforms obtained by the multiply‐integrate structure. The stars indicate the sampling point at the end of each symbol interval. After this interval, the multiply‐integrate operation is restarted.

Figure 3.54 Moving from the correlate–integrate concept (left) to the matched filter (right). The correlate–integrate approach is a pointwise multiplication and summation over one symbol period. The matched filter is best thought of as reversing the time waveform according to the order we would “see” the waveform, and multiplying by the impulse response.

Figure 3.55 Matched filtering using a time‐reversed channel impulse response. Imagine the input waveform as shown being reversed, since that is the order the filter “sees” it.

Figure 3.56 Waveforms obtained by the matched filter structure. The stars indicate the sampling point for each symbol. The output is not reset for each symbol, but rather calculated continuously using convolution.

Figure 3.57 Illustrating orthogonal and nonorthogonal signals. The net area under the product of orthogonal signals is zero.

Figure 3.58 Illustrating sine and cosine signals on an I–Q plane for quadrature modulation. (a) A single point with a given magnitude and phase. (b) Four points with the same magnitude and a 90° phase difference. (c) Eight points with the same magnitude but a 45° phase difference. (d) Sixteen points with differing magnitude and phase.

Figure 3.59 QAM modulation diagram. The input bit combination (4 bits here) selects one of 16 sine and cosine amplitude pairs within the constellation.

Figure 3.60 QAM demodulation. Multiplication by the sine and cosine carrier separately, followed by integration over one or more cycles, determines the amplitude and hence position in the constellation. The original bit pattern may then be looked up directly.
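
The multiply‐and‐integrate recovery described for Figure 3.60 can be demonstrated directly. In this MATLAB sketch the carrier frequency, sample rate, and amplitude pair are assumptions; correlating against cosine and sine over a whole number of carrier cycles returns the I and Q amplitudes.

    fs = 8000; fc = 1000; T = 0.01;      % assumed rates and symbol period
    t = 0:1/fs:T-1/fs;                   % exactly 10 carrier cycles
    I = 3; Q = -1;                       % assumed transmitted amplitude pair
    s = I*cos(2*pi*fc*t) + Q*sin(2*pi*fc*t);
    Ihat = 2*mean(s .* cos(2*pi*fc*t))   % recovers 3
    Qhat = 2*mean(s .* sin(2*pi*fc*t))   % recovers -1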

Figure 3.61 Frequency division multiplexing for multiplexing multiple channels on the one physical carrier. A separate subcarrier frequency is assigned to each channel.

Figure 3.62 FDM may be visualized as multiple signals evolving in time but separated in frequency.

Figure 3.63 Using FDM to multiplex a bit stream. The amplitude of each subcarrier takes on different values depending on whether the bit is 0 or 1. Typically, these would be equal in magnitude but opposite in sign.

Figure 3.64 Using OFDM to multiplex a bit stream. As well as multiple subcarrier frequencies, quadrature signals are used on each subchannel.

Figure 3.65 A point defined by a sine and cosine amplitude (left) is equivalent to a sine with a certain magnitude and phase. Multiple points may be represented in this way (right). The 16 points shown are then able to represent a 4‐bit quantity.

Figure 3.66 Multiplying the incoming wave by sine (or cosine) and integrating results in a scaled estimate of the amplitude of that particular component. The integration (or accumulation) is assumed to be performed over one symbol time, after which the integrator is reset to zero.

Figure 3.67 Fourier analysis of an input waveform determines the magnitude of each of the sine and cosine components at the various frequencies.

Figure 3.68 A point on the complex plane defines the cosine magnitude (real part) and the sine magnitude (negative imaginary part).

Figure 3.69 The DFT of a cosine wave corresponds to a single real value at the component's frequency index and its symmetrical counterpart at N minus that index. Note that the MATLAB indexes displayed start at 1, not 0 as in the equations.
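
The indexing point in Figures 3.69 and 3.70 is easy to see at the MATLAB prompt. With an assumed length of 64 and component index 5, the real peaks of the DFT of a cosine land at MATLAB indexes 6 and 60, corresponding to the mathematical indexes 5 and 64 − 5 = 59.

    N = 64; k = 5;                       % assumed DFT length and bin
    n = 0:N-1;
    X = fft(cos(2*pi*k*n/N));
    stem(0:N-1, real(X));                % real peaks of N/2 at k and N-k
    real(X(k+1))                         % MATLAB index k+1 holds X[k]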

Figure 3.70 The DFT of a sine wave corresponds to a single imaginary value at the component's frequency index and its symmetrical counterpart at N minus that index. Note that the MATLAB indexes displayed start at 1, not 0 as in the equations.

Figure 3.71 Some example time‐domain signals for OFDM, using the DFT.

Figure 3.72 Transmission process for OFDM using the IFFT. The data is formed into blocks and used to define the constellation pattern, which is converted into the correct waveform to be transmitted using the inverse FFT.

Figure 3.73 Reception of OFDM signals using the FFT. The received signal is converted back into the constellation pattern using the FFT, and the constellation points thus defined determine the bit pattern that was originally sent.
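
Figures 3.72 and 3.73 describe an IFFT/FFT round trip, which can be sketched in a few MATLAB lines. The subcarrier count and the simple QPSK mapping here are assumptions, and the channel is assumed ideal.

    N = 16;                              % assumed number of subcarriers
    bits = randi([0 1], 2, N);           % two bits per subcarrier
    sym = (1-2*bits(1,:)) + 1j*(1-2*bits(2,:));   % QPSK constellation points
    tx = ifft(sym);                      % time-domain OFDM symbol
    rx = fft(tx);                        % receiver converts back (no channel)
    rxbits = [real(rx) < 0; imag(rx) < 0];        % decisions back to bits
    isequal(rxbits, logical(bits))       % prints logical 1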

Figure 3.74 Spread spectrum frequency hopping. The center frequency for each transmission time is pseudorandom but synchronized between sender and receiver. Usually a number of bits are transmitted for each hop, making the hop rate slower than the bit rate.

Figure 3.75 Spread spectrum direct sequence. Each bit is subdivided into several chips for transmission, using a pseudorandom binary sequence that is synchronized between sender and receiver. Thus, several chips make up each bit.

Figure 3.76 Waveforms associated with a direct‐sequence spread spectrum design. The carrier itself is phase modulated according to the input bit stream and the chip stream. In this example, the bit stream is used in conjunction with the chip stream to determine the carrier phase, and only one cycle of carrier is shown per chip for clarity.

Figure 3.77 Barker codes and their delayed versions. The reference code starts at 1 and ends at 12, after which it is shown as zero. The delayed versions are moved to the right, with zero values moved in from the left.

Figure 3.78 All‐1s correlation (top) compared to the Barker code correlation (bottom).

Figure 3.79 Generation of pseudorandom sequences. The Pseudo‐Random Binary Sequence (PRBS), consisting of only 1s and 0s, may be generated, or the Pseudo‐Noise (PN) sequence that is composed of discrete values chosen from a total range of possible values.

Figure 3.80 Modulation types. (a) Sine signal to be modulated. (b) Triangular ramp signal to be modulated.

Figure 3.81 AM example spectra.

Figure 3.82 Spectrum of an AM waveform, as shown on a spectrum analyzer.

Figure 3.83 Spectrum for FM modulation question.

Figure 3.84 Single‐sideband (SSB) generation.

Figure 3.85 FM example spectra.

Chapter 04

Figure 4.1 Factors affecting packet delivery: the length of each data packet, the time gap between packets, the routing of packets from one place to another, the possible loss of one or more data packets, and errors within a packet that has reached its destination.

Figure 4.2 Routing from source to destination. Note the variable routing paths (defined by hops between nodes) and differing topologies (physical layout/interconnection) at the destination networks.

Figure 4.3 Connection between two devices, with intermediate or forwarding hops via forwarding devices Router 1 and Router 2.

Figure 4.4 The TCP/IP protocol stack. The actual data transfer is downwards within Device A using internal memory, across the physical link, then again via memory “upwards” to the application running on Device B. Each layer performs a specific function, which allows the layers to operate independently.

Figure 4.5 Protocol encapsulation or how layers are physically implemented. Each layer adds its own header data for communicating with its corresponding peer layer at the other end of the communications link. The diagram is not to scale, and the application data is usually much, much larger than the protocol headers.

Figure 4.6 An Ethernet bus topology. Each device is connected to a common “bus,” which simplifies wiring, but also means that only one device can transmit at a time.

Figure 4.7 An Ethernet switch, which forms a star topology. This reduces the media contention problem, at the expense of wiring complexity, since there is a need for direct point‐to‐point wiring links to each device. The switch itself must have some intelligence, in order to route data packets to each device.

Figure 4.8 An Ethernet frame, as transmitted across a physical link. This is the lowest level of data encapsulation in the protocol stack. The numbers refer to the size in bytes of each field.

Figure 4.9 The composition of an IPv4 datagram. Note the source and destination addresses. The Time‐To‐Live field is denoted by TTL and is decremented each time the datagram is forwarded on; for this reason, it is often called a hop count.

Figure 4.10 The composition of an IPv6 datagram. As well as larger address space, the simplified layout permits faster packet forwarding.

Figure 4.11 One possible method of mapping 48‐bit MAC addresses into the 64‐bit host portion of the 128‐bit IPv6 address according to RFC 4291 Appendix A. In the case illustrated, the seventh bit from the left is set and two bytes inserted as shown.

Figure 4.12 An example IP header, as captured on a data link. This should be compared with the IPv4 header layout of Figure 4.9. The header checksum is C7 1F hexadecimal.

Figure 4.13 Calculating the checksum, using big‐endian machine architecture (left) and little‐endian architecture (right). The end result must be the correct packet data ordering, independent of the machine byte ordering.

Figure 4.14 The arrangement of the original IP address classes. The leading (leftmost) bits determine the address class, then the next block of bits determines the network, and finally the rightmost bits determine the device or host within that network. This turned out to be a very inefficient way to allocate address space.

Figure 4.15 Subnet example 1. The subnet identifier is 8 bits, and the device identifier is also 8 bits.

Figure 4.16 Subnet example 2. This is the same IP address as the previous example but a larger subnetwork size as defined by the subnet mask.

Figure 4.17 The principle of NAT using address and port translation. Port 80 is reserved for web services, but port 49186 (in this example) is allocated on a per‐connection basis. The combination of 32‐bit IP address and 16‐bit TCP port is termed a socket.

Figure 4.18 The composition of a UDP datagram. Source and destination addresses are necessary, as is the length of the datagram. The checksum checks the header, but not the contents, of the segment.

Figure 4.19 The composition of a TCP segment. In addition to port fields, sequence and acknowledgment fields are used to sequence data segments. In tandem with this, binary‐valued bit‐field flags are used to signal the state of the transfer, and the window size is used to maximize the data flow rate.

Figure 4.20 A socket pair consisting of an IP:Port combination uniquely defines the endpoints for a data transfer. Routers in the Internet use the IP address, but not the port. End devices use the port number to ensure the data reaches the correct application.

Figure 4.21 Ethernet frame encapsulation of IP and TCP.

Figure 4.22 Acknowledging data packets, indicated by the ack lines. The sliding‐window approach of acknowledging more than one packet at a time gives superior throughput, at the expense of problems in the event of errors or lost packets. (a) Acknowledging each packet as it comes. (b) Acknowledging two packets at once.

Figure 4.23 Sequence of TCP segments when setting up, sending data, and tearing down a connection. The sequence shown above is typical of an HTTP (web) request.

Figure 4.24 Visualizing TCP data flow as a pipe of various dimensions, corresponding to the bandwidth and delay of different sections of the network that a given exchange of data packets must traverse.

Figure 4.25 Data segments and acknowledgments on a connection. At any time, several data packets may be in‐flight, with acknowledgments on their way back to the sender.

Figure 4.26 Slow‐start, exponential window growth, and cumulative acknowledgments. The dotted acknowledgments are not actually sent, but inferred by a subsequent cumulative acknowledgment.

Figure 4.27 Illustrating the principle of TCP congestion avoidance. Section A is the multiplicative increase, B is the linear increase until an error occurs at C, and the threshold is halved.
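
A toy simulation of the window behavior in Figures 4.26 and 4.27 is sketched below. The initial threshold, the loss instant, and a window measured in whole segments are all assumptions made for illustration.

    cwnd = 1; ssthresh = 16; w = zeros(1, 40);
    for rtt = 1:40
        w(rtt) = cwnd;
        if rtt == 25                     % assume a single loss event here
            ssthresh = cwnd/2;           % halve the threshold
            cwnd = 1;                    % restart from slow start
        elseif cwnd < ssthresh
            cwnd = 2*cwnd;               % exponential (slow start) growth
        else
            cwnd = cwnd + 1;             % linear congestion avoidance
        end
    end
    plot(1:40, w); xlabel('RTT'); ylabel('congestion window');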

Figure 4.28 Determining whether two addresses are on the same subnet.
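
The subnet test of Figure 4.28 amounts to masking both addresses and comparing. A minimal MATLAB sketch, with assumed addresses and a /24 mask, might read:

    ip1  = uint32([192 168 1 10]);       % assumed first address
    ip2  = uint32([192 168 1 200]);      % assumed second address
    mask = uint32([255 255 255 0]);      % /24 subnet mask
    same = isequal(bitand(ip1, mask), bitand(ip2, mask))  % logical 1 here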

Figure 4.29 Diagram showing a route forwarding table example. There are five routing table entries and three physical link interfaces.

Figure 4.30 Routing table showing IP addresses and netmasks, corresponding to the diagram of Figure 4.29. The network prefix bits are shaded. The routing lookup goal is to select the specific route that maximizes the number of matching bits.

Figure 4.31 Diagram showing a routing loop caused by incorrect forwarding in an aggregated network. The incoming packet destined for the network 192.168.9.0 arrives at 192.168.16.1 for forwarding. For clarity, only the third byte from the left is shown in binary for each route table entry.

Figure 4.32 Binary tree example using a 4‐bit key and node values A, B, C, and D.

Figure 4.33 Binary tree construction for 6‐bit key values.

Figure 4.34 Patricia trie example 1, with the trie fully constructed. The left/right decision is based on the 0/1 value at the boxed bit position. The entire node key is only checked once, at the end of the search, which is when the pointer points back upwards. The search path for 00 0111 as described in the text is also indicated.

Figure 4.35 Patricia trie example 2. The search path for 01 1000 as described in the text is indicated.

Figure 4.36 Successive steps in the construction of a Patricia trie.

Figure 4.37 Example network routing layout. Two routes are possible from Network 1 to Network 4.

Figure 4.38 Routing topology diagram, with routing tables for each router. Rather than just a simple hop count, a metric or cost for each hop is preferable.

Figure 4.39 An example routing path. The goal is to find the least‐cost path from the source node to the destination node.

Figure 4.40 A routing example with two “islands.” In trying to find the best path to the destination, we have to avoid getting stuck in the lower branches, where there is no path to the destination (except back where we came from).

Figure 4.41 Routing path with unconnected nodes labeled with a cost of infinity.

Figure 4.42 Enumerating the possible paths from the source to destination. Paths that contain an infinite cost on one or more links have not been considered, since they could not constitute a lowest‐cost path.

Figure 4.43 Nodes reachable in one hop from each node in turn.

Figure 4.44 Determining the new path cost at each stage of the Dijkstra algorithm, either directly or via an intermediate node. The cost via the intermediate node may be more, or it may be less.

Figure 4.45 A more convoluted path results when the hop costs change as indicated. Dijkstra's algorithm still works successfully in this case.
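
For experimentation with the shortest‐path ideas of Figures 4.39–4.45, a compact Dijkstra sketch in MATLAB follows. The cost matrix is an assumed example, with inf marking node pairs that have no direct link.

    C = [0   2   5   inf;
         2   0   2   6;
         5   2   0   2;
         inf 6   2   0];                 % assumed symmetric link costs
    n = size(C,1); src = 1;
    dist = inf(1,n); dist(src) = 0;      % best known cost to each node
    visited = false(1,n);
    for step = 1:n
        d = dist; d(visited) = inf;      % ignore settled nodes
        [~, u] = min(d);                 % closest unsettled node
        visited(u) = true;
        for v = 1:n                      % relax each neighbor of u
            if ~visited(v) && dist(u) + C(u,v) < dist(v)
                dist(v) = dist(u) + C(u,v);
            end
        end
    end
    disp(dist)                           % least costs from node 1: 0 2 4 6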

Figure 4.46 Routing problem 1.

Figure 4.47 Routing problem 2.

Chapter 05

Figure 5.1 A histogram showing discrete value ranges and corresponding counts. The total count of values in the two shaded bin ranges is just the sum of the counts of the shaded bars.

Figure 5.2 A probability density curve over a continuous sample range. The total probability over the range between two given values is the area of the shaded region.

Figure 5.3 Difference equation impulse input (top) and output (lower).

Figure 5.4 Quantizing a peak‐to‐peak sine wave using a 3‐bit, mid‐rise quantizer.

Figure 5.5 Quantization input–output mapping for a 3‐bit, mid‐rise quantization characteristic. The horizontal axis is the analog input; the vertical axis represents the quantized values.

Figure 5.6 Representative comparison of μ‐law and A‐law companding. The μ and A values have been chosen to highlight the fact that A‐law is a piecewise characteristic.

Figure 5.7 The performance of a companding quantizer as compared with linear quantization. The tradeoff inherent in companding is evident: better performance at low signal levels, with inferior performance at higher signal levels.
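
The companding curve compared in Figures 5.6 and 5.7 is nearly a one‐liner in MATLAB. The sketch below plots the standard μ‐law characteristic (μ = 255) against a linear characteristic over a normalized input range.

    mu = 255;                            % standard mu-law parameter
    x = linspace(-1, 1, 401);            % normalized input
    y = sign(x) .* log(1 + mu*abs(x)) / log(1 + mu);
    plot(x, y, x, x, '--');              % compander versus linear
    xlabel('input'); ylabel('output');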

Figure 5.8 Lloyd–Max PDF‐optimized quantizer, for one choice of the number of levels.

Figure 5.9 Lloyd–Max PDF‐optimized quantizer, for a second choice of the number of levels.

Figure 5.10 The layout of a VQ codebook. Each of the codevectors has the same fixed dimension.

Figure 5.11 Encoding and decoding with a vector quantizer. The encoder must perform a search for the best matching vector, whereas the decoder simply looks up the vector corresponding to the index transmitted over the communications channel.

Figure 5.12 The VQ training process. More than one training vector may map into any given codebook vector.

Figure 5.13 VQ training iterations and convergence. The small dots are training data, while the circles are the centroids.

Figure 5.14 Huffman code generation. The convention applied here when combining two nodes is to assign a 1 bit to the higher probability leaf. When the probabilities at each step are combined in the way shown, the resulting average length is 2.25 bits/symbol.

Figure 5.15 Huffman code encoding. Starting at the leaf node corresponding to the symbol to be encoded, the node joins are followed until the root node is reached. The branch from which the path entered at each join determines the bit value and is recorded.

Figure 5.16 Huffman code decoding. Starting at the root node, each successive bit received determines which branch to take at each node, until a leaf node is reached. This corresponds to the symbol to be decoded.

Figure 5.17 Huffman code generation, using an alternative grouping. Notice that at each stage, the two lowest probabilities are combined into a new interior node.

Figure 5.18 Huffman code generation, when nodes of the lowest probability at each stage (sibling nodes) are not joined in order. The average codeword length is 2.40 bits/symbol.
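
The average codeword lengths quoted for Figures 5.14–5.18 are computed as a probability‐weighted sum. The sketch below uses assumed probabilities and lengths (not necessarily the book's example) and compares the result with the entropy lower bound.

    p   = [0.5 0.25 0.125 0.125];        % assumed symbol probabilities
    len = [1 2 3 3];                     % assumed codeword lengths (bits)
    Lavg = sum(p .* len)                 % average bits per symbol: 1.75
    H = -sum(p .* log2(p))               % entropy bound: also 1.75 here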

Figure 5.19 Lempel–Ziv window‐style compression. To encode “banana now” we need the index 6, which is the starting offset in the previously encoded window (ignoring spaces for clarity). The length also happens to be 6. The next byte is “n” (again, ignoring the whitespace).

Figure 5.20 Lempel–Ziv dictionary‐style compression. The longest match in the dictionary illustrated is “ban,” followed by “an.” The encoder and decoder could then add the phrase “bana” to their respective dictionaries. Future encodings of “banana” will then be more efficient, since the phrases “banan,” then “banana” will be built up each time “banana” is encountered in the input stream.

Figure 5.21 A simplified differential encoder, without quantization.

Figure 5.22 A differential decoder. The output is based on the prediction formed at the decoder, added to the difference (prediction error) values received over the channel.

Figure 5.23 Prediction sequence, with and without quantization.

Figure 5.24 A DPCM encoder using quantization in the prediction loop. It is best if the prediction is based on what the decoder knows about, not what the encoder can see.

Figure 5.25 Linear prediction, showing the actual signal, the predicted signal, and the error. Calculating the autocorrelations over a larger block of samples will give a better prediction on average.

Figure 5.26 Adaptive linear prediction, illustrating how one predictor parameter converges.

Figure 5.27 Adaptive linear prediction, showing the convergence of the predictor coefficients for a given step size parameter.

Figure 5.28 BTC example image. On the left is the original, then using the block mean only, and finally the BTC coded image for a small square subblock. Note the blockiness evident in the mean‐only image, although of course the average number of bits per pixel is quite small. With a better algorithm and transmitting more parameters, a substantially better image quality results.

Figure 5.29 Histograms of the DCT coefficients in the upper‐left portion of a subblock. Note that for a full subblock, there would be an equivalent number of coefficient histograms; only the upper‐left coefficient histograms are shown here.

Figure 5.30 DCT basis images for a square transform. Each basis image is a block of pixels, and there is one basis image for each transform coefficient.

Figure 5.31 Quadtree decomposition. The recursive decomposition from left to right shows how some subblocks are subdivided further, while others are not.

Figure 5.32 Example quadtree decomposition of a grayscale image. The block boundaries are made visible in this illustration in order to show the variable block sizes and how they correspond to the local activity of the image.

Figure 5.33 Chrominance subsampling of a block of pixels. Each pixel starts out as RGB, then is converted to luminance Y plus color differences Cr and Cb. The color may be subsampled as shown, with little visual impact on the image itself.

Figure 5.34 Linear predictive coder with switched pulse or noise excitation.

Figure 5.35 The essential arrangement of a code‐excited linear predictive coder. The excitation is selected according to the match between the synthesized speech and the original.

Figure 5.36 The poles of a linear predictor, and the corresponding frequency response. The resonances model the vocal tract, so as to produce “synthetic” speech.

Figure 5.37 The poles of a linear predictor (×), and the corresponding noise‐weighted poles (+) for a frame of speech. For the indicated value of the weighting factor, the poles move inwards and flatten the spectrum shown in Figure 5.36. The resulting noise‐weighting filter response is shown on the right.

Figure 5.38 Audio encoding using sub‐band coding (filterbanks), an overlapped transform, perceptual weighting, and finally, entropy coding.

Figure 5.39 Overlapping blocks for the Modified DCT, with numerical values shown for a small block size in order to illustrate the overlap and addition of successive blocks to yield perfect reconstruction.

Chapter 06

Figure 6.1 Additive noise is characterized by random values with a certain mean and variance.

Figure 6.2 The probability density function is used to tell the likelihood of a signal amplitude falling between two levels. In the case illustrated, this is between a given level and any higher value.

Figure 6.3 Comparing the error function, complementary error function, and the Q function.

Figure 6.4 Errors in cascading systems.

Figure 6.5 A binary 1/0 sequence with noise added.

Figure 6.6 Two polar values, positive and negative, with additive noise. The probability density shows the likelihood for each level at the receiver.

Figure 6.7 Extending the concept of received points to two orthogonal axes. Two bits are transmitted at a time, with the decision boundary being the axes themselves.

Figure 6.8 A small section of received data, together with the resulting bit stream. The bit errors are indicated; in some of these cases, the received amplitude is only just on the wrong side of the decision boundary, but incorrect nevertheless.

Figure 6.9 Two possible levels sent, and received as PDFs centered on those levels. The shaded area indicates the probability of one level being sent, yet with a decision made in favor of the other at the receiver. Both the signal amplitudes and the statistical distribution of the noise influence whether the correct decision is made for each bit.

Figure 6.10 The theoretical and simulated bit error performance curves. At higher values of SNR per bit, increasing the signal power (or reducing the noise) results in a much greater reduction of the BER.
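
Curves like those in Figure 6.10 can be reproduced with a short Monte Carlo loop. In this MATLAB sketch the bit count and the mapping of bits to ±1 levels are assumptions; the theoretical curve uses the standard result 0.5·erfc(√(Eb/N0)) for binary antipodal signaling.

    EbN0dB = 0:8; nbits = 1e5;           % assumed SNR range and trial size
    ber = zeros(size(EbN0dB));
    for k = 1:length(EbN0dB)
        EbN0 = 10^(EbN0dB(k)/10);
        bits = randi([0 1], 1, nbits);
        x = 1 - 2*bits;                  % map 0 -> +1, 1 -> -1
        r = x + sqrt(1/(2*EbN0))*randn(1, nbits);
        ber(k) = mean((r < 0) ~= bits);  % count decision errors
    end
    theory = 0.5*erfc(sqrt(10.^(EbN0dB/10)));
    semilogy(EbN0dB, ber, 'o', EbN0dB, theory, '-');
    xlabel('E_b/N_0 (dB)'); ylabel('BER');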

Figure 6.11 Computation of two‐dimensional parity. The data or message bits form the main block, with parity bits in the final row and column.

Figure 6.12 Block interleaving of single‐bit correcting codes. All the data blocks are buffered in memory, and a Hamming code is computed for each horizontal block. The transmission is then ordered vertically, taking one bit from each block in turn, in the order indicated.

Figure 6.13 A portion of a captured data packet for checksum evaluation.

Figure 6.14 Checksum computation with the low‐order byte of a 16‐bit quantity first (little‐endian ordering).

Figure 6.15 Checksum computation with the high‐order byte of a 16‐bit quantity first (big‐endian ordering).

Figure 6.16 Division as a precursor to the CRC calculation.

Figure 6.17 All the steps involved in the CRC calculation at the sender, with the final result shown.

Figure 6.18 The steps involved in CRC calculation at the receiver, assuming no errors have occurred in transit.

Figure 6.19 The steps involved in CRC calculation at the receiver, when an error has occurred in transit. The error is detected in this case.

Figure 6.20 The steps involved in CRC calculation at the receiver, when an error has occurred in transit. The error is not detected in this case.
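
The sender‐side division of Figures 6.16 and 6.17 reduces to repeated XOR of a generator pattern. The message and generator bits below are assumed examples, not the ones used in the figures.

    msg = [1 0 1 1 0 1 1];               % assumed message bits
    gen = [1 0 1 1];                     % assumed generator polynomial bits
    r = length(gen) - 1;                 % number of CRC check bits
    dividend = [msg zeros(1, r)];        % append r zero bits
    for k = 1:length(msg)                % modulo-2 long division
        if dividend(k) == 1
            dividend(k:k+r) = xor(dividend(k:k+r), gen);
        end
    end
    crc = dividend(end-r+1:end)          % remainder = check bits
    codeword = [msg crc];                % transmitted codeword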

Figure 6.21 Illustrating a hypothetical convolutional code implementation. One‐bit delay elements are represented as unit‐delay blocks, with the “convoluted” channel codeword produced by XORing a combination of the input and delayed inputs. The dotted lines are not connected in this example design. Of course, such a structure is not unique, and many permutations of this type of layout are possible.