The communication chain consists of a source and a recipient separated by a transmission channel, which may be a section of cable, an optical fiber, a radio channel or a satellite link. Whatever the channel, the processing blocks implemented in the communication chain rest on the same foundations. This book aims to detail them. In this first volume, after presenting the basics of information theory, we study lossless and lossy source coding techniques. We then analyze the error-correcting codes used in current systems: block codes, convolutional codes and concatenated codes.
Page count: 382
Publication year: 2015
Cover
Title
Copyright
Preface
List of Acronyms
Notations
Introduction
1: Introduction to Information Theory
1.1. Introduction
1.2. Review of probabilities
1.3. Entropy and mutual information
1.4. Lossless source coding theorems
1.5. Theorem for lossy source coding
1.6. Transmission channel models
1.7. Capacity of a transmission channel
1.8. Exercises
2: Source Coding
2.1. Introduction
2.2. Algorithms for lossless source coding
2.3. Sampling and quantization
2.4. Coding techniques for analog sources with memory
2.5. Application to image and sound compression
2.6. Exercises
3: Linear Block Codes
3.1. Introduction
3.2. Finite fields
3.3. Linear block codes
3.4. Decoding of binary linear block codes
3.5. Performance of linear block codes
3.6. Cyclic codes
3.7. Applications
3.8. Exercises
4: Convolutional Codes
4.1. Introduction
4.2. Mathematical representations and hardware structures
4.3. Graphical representation of convolutional codes
4.4. Free distance and transfer function of convolutional codes
4.5. Viterbi’s algorithm for the decoding of convolutional codes
4.6. Punctured convolutional codes
4.7. Applications
4.8. Exercises
5: Concatenated Codes and Iterative Decoding
5.1. Introduction
5.2. Soft input soft output decoding
5.3. LDPC codes
5.4. Parallel concatenated convolutional codes or turbo codes
5.5. Other classes of concatenated codes
5.6. Exercises
Appendix A: Proof of the Channel Capacity of the Additive White Gaussian Noise Channel
Appendix B: Calculation of the Weight Enumerator Function IRWEF of a Systematic Recursive Convolutional Encoder
Bibliography
Index
End User License Agreement
Series Editor: Pierre-Noël Favennec
Didier Le Ruyet
Mylène Pischella
First published 2015 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
www.iste.co.uk

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.wiley.com
© ISTE Ltd 2015
The rights of Didier Le Ruyet and Mylène Pischella to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.
Library of Congress Control Number: 2015946705
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-845-1
Humans have always used communication systems: in the past, Native Americans used clouds of smoke, then Chappe invented his telegraph and Bell the telephone, which has deeply changed our lifestyle. Nowadays, smartphones enable us to make calls, watch videos and communicate on social networks. The future will see the emergence of the connected man and wider applications of smart objects. All current and future communication systems rely on a digital communication chain that consists of a source and a destination separated by a transmission channel, which may be a portion of a cable, an optical fiber, a wireless mobile or satellite channel. Whatever the channel, the processing blocks implemented in the communication chain have the same basis. This book aims to detail them, across two volumes:
– the first volume deals with source coding and channel coding. After a presentation of the fundamental results of information theory, the different lossless and lossy source coding techniques are studied. Then, error-correcting codes (block codes, convolutional codes and concatenated codes) are detailed theoretically and their applications presented;
– the second volume concerns the blocks located after channel coding in the communication chain. It first presents baseband and sine waveform transmissions. Then, the different steps required at the receiver to perform detection, namely synchronization and channel estimation, are studied. Finally, two variants of these blocks used in current and future systems, multicarrier modulations and coded modulations, are detailed.
This book arises from the long experience of its authors in both the business and academic sectors. The authors are in charge of several diploma and higher-education teaching modules at Conservatoire national des arts et métiers (CNAM) concerning digital communication, information theory and wireless mobile communications.
The different notions in this book are presented with an educational objective. The authors have tried to make the fundamental notions of digital communications as understandable and didactic as possible. Nevertheless, some more advanced techniques that are currently active research topics but are not yet deployed are also presented.
Digital Communications may interest students in the fields of electronics, telecommunications, signal processing, etc., as well as engineering and corporate executives working in the same domains and wishing to update or complete their knowledge on the subject.
The authors thank their colleagues from CNAM, and especially from the EASY department.
Didier Le Ruyet would like to thank his parents and his wife Christine for their support, patience and encouragement during the writing of this book.
Mylène Pischella would like to thank her daughter Charlotte and husband Benjamin for their presence, affection and support.
Didier LE RUYET
Mylène PISCHELLA
Paris, France
August 2015
ACK: Acknowledgment
AEP: Asymptotic equipartition principle
APP: A posteriori probability
APRI: A priori probability
ARQ: Automatic repeat request
BER: Bit error rate
BP: Belief propagation
CC: Chase combining
CELP: Code excited linear predictive
CRC: Cyclic redundancy check
CVSD: Continuously variable slope delta
DCT: Discrete cosine transform
DFT: Discrete Fourier transform
DPCM: Differential pulse coded modulation
EXIT: Extrinsic information transfer
EXTR: Extrinsic probability
IR: Incremental redundancy
IRWEF: Input redundancy weight enumerator function
LDPC: Low density parity check
LLR: Log likelihood ratio
LPC: Linear predictive coder
LSP: Line spectrum pairs
LTE: Long term evolution
MAP: Maximum a posteriori
MDS: Maximum distance separable
ML: Maximum likelihood
MLSE: Maximum likelihood sequence estimator
MMSE: Minimum mean square error
MRC: Maximum ratio combining
NACK: Negative acknowledgment
NRSC: Non recursive systematic convolutional
NRZ: Non return to zero
PCA: Principal components analysis
PCC codes: Parallel concatenated convolutional codes
PCM: Pulse coded modulation
PEP: Pairwise error probability
PSK: Phase shift keying
QAM: Quadrature amplitude modulation
QPP: Quadratic polynomial permutation
RA: Repeat accumulate
RLC: Run length coding
RSC: Recursive systematic convolutional
RZ: Return to zero
SER: Symbol error rate
SNR: Signal to noise ratio
WEF: Weight enumerator function
WER: Word error rate
𝒳: alphabet associated with variable X
A: transformation matrix
A(D): weight enumerator function (WEF)
A_d: number of codewords with weight d
A(W, Z): input redundancy weight enumerator function (IRWEF)
A_{w,z}: number of codewords with weight w + z
B: bandwidth
B: inverse transformation matrix
c: codeword
c(p): polynomial associated with a codeword
C: capacity in Sh/dimension
C′: capacity in Sh/s
D: variable associated with the weight, delay or distortion
D(R): distortion rate function
D_B: binary rate
D_N: average distortion per dimension
D_S: symbol rate
d: Hamming distance or weight
d_min: minimum distance
e: correction capability
e: error vector
E[x]: expectation of the random variable x
E_b: energy per bit
E_s: energy per symbol
e_d: detection capability
𝔽_q: Galois field with q elements
g(p): generator polynomial
G: prototype filter
G: generator matrix
γ_xx(f): power spectral density of the random process x
H: parity check matrix
H(X): entropy of X
H_D(X): differential entropy of X
I(X; Y): average mutual information between variables X and Y
k: number of bits per information word (convolutional code)
K: number of symbols per information word (block code)
n_i: noise sample at time i, or length of the i-th message
N: noise power, or number of symbols per codeword
n: number of bits per codeword (convolutional code)
N_0: unilateral noise power spectral density
P: signal power
P_e: symbol error probability
p: transition probability of the binary symmetric channel
p(x): probability density
Q: alphabet size
R: rate
R_ss(t): autocorrelation function of the random process s
R(D): rate-distortion function
s: error syndrome
T: symbol duration
T_b: bit duration
s(p): polynomial associated with the syndrome
u: information word
u(p): polynomial associated with the information word
w: weight of the information word
W: variable associated with the weight of the information sequence
X: variable associated with the channel input signal
x: transmitted word
y: received word after matched filtering and sampling
Y: variable associated with the channel output signal
z: weight of the redundancy sequence
Z: variable associated with the weight of the redundancy sequence
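As a brief illustration of how some of these notations are used in Chapter 1, here is a minimal Python sketch (ours, for illustration only; the function names entropy and bsc_capacity are not from the book). It evaluates the entropy H(X) of a discrete source from its probabilities p(x), and the capacity C of a binary symmetric channel with transition probability p, both expressed in Sh:

import math

def entropy(probs):
    # H(X) = -sum over x of p(x) * log2 p(x), in Sh per symbol
    return -sum(p * math.log2(p) for p in probs if p > 0)

def bsc_capacity(p):
    # Capacity of the binary symmetric channel: C = 1 - H2(p), in Sh/dimension
    return 1.0 - entropy([p, 1.0 - p])

print(entropy([0.9, 0.1]))   # ~0.469 Sh/symbol for a biased binary source
print(bsc_capacity(0.11))    # ~0.5 Sh/dimension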