
Trellis and Turbo Coding E-Book

Christian B. Schlegel

Description

This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original.

  • Convolutional, turbo, and low-density parity-check (LDPC) coding and polar codes in a unified framework
  • Advanced research-related developments such as spatial coupling
  • A focus on algorithmic and implementation aspects of error control coding


Page count: 789

Year of publication: 2015




TRELLIS AND TURBO CODING

Iterative and Graph-Based Error Control Coding

by Christian B. Schlegel and Lance C. Pérez

Wiley/IEEE Press, 2015

Contents

Chapter 1 Introduction

1.1 Modern Digital Communications

1.2 The Rise of Digital Communications

1.3 Communication Systems

1.4 Error Control Coding

1.5 Bandwidth, Power, and Complexity

1.6 A Brief History–The Drive Towards Capacity

Bibliography

Notes

Chapter 2 Communications Basics

2.1 The Probabilistic Viewpoint

2.2 Vector Communication Channels

2.3 Optimum Receivers

2.4 Matched Filters

2.5 Message Sequences

2.6 The Complex Equivalent Baseband Model

2.7 Spectral Behavior

2.8 Advanced Modulation Methods

2.9 A Communications System Case Study

2.10 Appendix 2.A

Bibliography

Notes

Chapter 3 Trellis-Coded Modulation

3.1 An Introductory Example

3.2 Construction of Codes

3.3 Lattices

3.4 Lattice Formulation of Trellis Codes

3.5 Rotational Invariance

3.6 V.fast

3.7 The IEEE 802.3an Standard

3.8 Historical Notes

Bibliography

Notes

Chapter 4 Trellis Representations

4.1 Preliminaries

4.2 The Parity-Check Matrix

4.3 Parity-Check Trellis Representations

4.4 Convolutional Codes and Their Trellis

4.5 Minimal Trellises

4.6 Minimum-Span Generator Matrices

4.7 Systematic Construction of the PC-Trellis

4.8 Tail-Biting Trellises

4.9 The Minimal Trellis of Convolutional Codes

4.10 Fundamental Theorems from Basic Algebra

4.11 Systematic Encoders

4.12 Maximum Free-Distance Convolutional Codes

4.13 The Squaring Construction and the Trellis of Lattices

4.14 The Construction of Reed–Muller Codes

4.15 A Decoding Example

4.16 Polar Codes and Their Relationship to RM Codes

Appendix 4.A

Bibliography

Notes

Chapter 5 Trellis and Tree Decoding

5.1 Background and Introduction

5.2 Tree Decoders

5.3 The Stack Algorithm

5.4 The Fano Algorithm

5.5 The M-Algorithm

5.6 Maximum Likelihood Decoding

5.7 A Posteriori Probability Symbol Decoding

5.8 Log-APP and Approximations

5.9 Error Analysis and Distance Spectrum

5.10 Random Coding Analysis of Optimal Decoding

5.11 Random Coding Analysis of Sequential Decoding

5.12 Some Final Remarks

Appendix 5.A

Appendix 5.B

Appendix 5.C

Bibliography

Notes

Chapter 6 Low-Density Parity-Check Codes

6.1 Introduction

6.2 LDPC Codes and Graphs

6.3 LDPC Decoding via Message Passing

6.4 Analysis Techniques

6.5 Code Families and Construction

6.6 Encoding of LDPC Codes

Appendix 6.A: Factor Graphs

Bibliography

Note

Chapter 7 Error Floors

7.1 The Error Floor Problem

7.2 Dynamics of the Absorption Sets

7.3 Code Design for Low Error Floors

7.4 Impact of the Decoding Algorithm

7.5 Importance Sampling (IS)

7.6 Computing Error Rates via Importance Sampling

Bibliography

Note

Chapter 8 Turbo Coding: Basic Principles

8.1 Introduction

8.2 Parallel Concatenated Convolutional Codes

8.3 Distance Spectrum Analysis of Turbo Codes

8.4 The Free Distance of a Turbo Code

8.5 Weight Enumerator Analysis of Turbo Codes

8.6 Iterative Decoding of Turbo Codes

8.7 EXIT Analysis

8.8 Serial Concatenation

8.9 Cascaded Convolutional Codes

8.10 Weight Enumerator Analysis of SCCCs

8.11 Iterative Decoding and Performance of SCCCs

8.12 EXIT Analysis of Serially Concatenated Codes

8.13 Viewpoint

8.14 Turbo-Trellis-Coded Modulation

8.15 Serial Concatenation

8.16 EXIT Analysis of Serial TTCM

8.17 Differential-Coded Modulation

8.18 Concatenated Space–Time Coding

8.19 Bit-Interleaved Coded and Generalized Modulation

Bibliography

Notes

Chapter 9 Turbo Coding: Applications

9.1 Interleavers

9.2 Turbo Codes in Telecommunication Standards

9.3 Product Codes and Block Turbo Decoding

9.4 Approximate APP Decoding

9.5 Product Codes with High-Order Modulations

9.6 The IEEE 802.16 Standard

9.7 Decoding of Polar Codes

9.8 Polar Code Performance and Outlook

Bibliography

Chapter 10 Convolutional LDPC Codes and Spatial Coupling

10.1 Capacity: The Ultimate Limit

10.2 Low-Density Parity-Check Convolutional Codes

10.4 Spatial Coupling: Convergence Analysis

Bibliography

Note

EULA

List of Tables

Chapter 3

Table 3.1

Table 3.2

Table 3.3

Table 3.4

Table 3.5

Table 3.6

Chapter 4

Table 4.1

Table 4.2

Table 4.3

Table 4.4

Table 4.5

Table 4.6

Table 4.7

Chapter 6

Table 6.1

Table 6.2

Table 6.3

Table 6.4

Chapter 7

Table 7.1

Chapter 8

Table 8.1

Table 8.2

Table 8.3

Chapter 9

Table 9.1


Chapter 1 Introduction

1.1 Modern Digital Communications

With the advent of high-speed logic circuits and very large scale integration (VLSI), data processing and storage equipment has inexorably moved towards employing digital techniques. In digital systems, data is encoded into strings of zeros and ones, corresponding to the on and off states of semiconductor switches. This has brought about fundamental changes in how information is processed. While real-world data is primarily in “analog form” of one type or another, the revolution in digital processing means that this analog information needs to be encoded into a digital representation, e.g., into a string of ones and zeros. The conversions from analog to digital and back have become ubiquitous processes. Examples are the digital encoding of speech, the encoding and rendering of pictures and video, and the large variety of ways of capturing and representing data encountered in our modern internet-based lifestyles.

The migration from the analog communications of the first half of the 20th century to the now ubiquitous digital forms of communications was enabled primarily by the fast-paced advances in high-density device integration. This has been the engine behind much of the technological progress over the last half century, initiated by the creation of the first integrated circuit (IC) by Kilby at Texas Instruments in 1958. Following Moore’s informal law, device sizes, primarily CMOS (complementary metal-oxide-semiconductor), shrink by a factor of two every two years, and computational power doubles accordingly. An impression of this exponential growth in computing capability can be gained from Figure 1.1, which shows the number of transistors integrated in a single circuit and the minimum device size for progressive fabrication processes, known as implementation nodes.

While straightforward miniaturization of CMOS devices is becoming increasingly difficult, transistor designers have been very creative in modifying the designs to stay on the Moore trajectory. As of 2015 we see the introduction of 3-dimensional transistor structures such as fin FETs, double-gated FETs, and tunnel FETs, and it is expected that carbon nanotube devices may continue miniaturization well into the sub-10 nm range. In any case, the future for highly complex computational devices is bright.

Figure 1.1: Moore’s law is driving progress in electronic devices. Top left: A basic CMOS switching structure. Bottom left: Moore observed his “doubling law” in 1965 and predicted that it would continue “at least another 10 years.”

One such computational challenge is data communications, in particular data integrity, as discussed in this book. The migration from analog to digital information processing has opened the door for many sophisticated algorithmic methods. Digital information is treated differently in communications than analog information. Signal estimation becomes signal detection; that is, a communications receiver need not look for an analog signal and make a “best” estimate; it only needs to make a decision between a finite number of discrete signals, say a one or a zero in the most basic case. Digital signals are more reliable in a noisy communications environment; they can usually be detected perfectly as long as the noise levels are below a certain threshold. This allows us to restore digital data and, through error correcting techniques, correct errors made during transmission. Digital data can easily be encoded in such a way as to introduce dependencies among a large number of symbols, thus enabling a receiver to make a more accurate detection of the symbols. This is the essence of error control coding.

Finally, there are also strong theoretical reasons behind the migration to digital processing. Nyquist’s sampling theorem, discussed in Section 1.3, tells us that, fundamentally, it is sufficient to know an analog signal at a number of discrete points in time. This opens the door for the discrete-time treatment of signals. Shannon’s fundamental channel coding theorem then states that the values of these discrete-time samples can themselves contain only a finite amount of information. Therefore, only a finite number of discrete levels is required to capture the full information content of a signal.

The digitization of data is convenient for a number of other reasons too. The design of signal processing algorithms for digital data is much easier than the design of analog signal processing algorithms, albeit not typically less complex. However, the abundance of such digital algorithms, including the error control and correction techniques discussed in this book, combined with their ease of implementation in very large scale integrated (VLSI) circuits, has led to the plethora of successful applications of error control coding we see in practice today.

Error control coding was first applied in deep-space communications, where we are confronted with low-power communications channels with virtually unlimited bandwidth. On these data links, convolutional codes (Chapter 4) are used with sequential and Viterbi decoding (Chapter 5), and the future will see the application of turbo coding. The next successful application of error control coding was to storage devices, most notably the compact disc player, which employs powerful Reed-Solomon codes [21] to handle the raw error probability of the optical readout device, which is too large for high-fidelity sound reproduction without error correction. Another hurdle overcome was the successful application of error control to bandwidth-limited telephone channels, where trellis-coded modulation (Chapter 3) was used to produce impressive improvements and push transmission rates towards the theoretical limit of the channel. Nowadays, coding is routinely applied to satellite communications [41, 49], teletext broadcasting, computer storage devices, logic circuits, semiconductor memory systems, magnetic recording systems, audio-video, and WiFi systems. Modern mobile communications systems like the pan-European TDMA digital telephony standard GSM [35], IS-95 [47], CDMA2000, IMT-2000, and the new 4th-generation LTE and LTE-A standards [63, 64] all use error control coding.

1.2 The Rise of Digital Communications

Modern digital communication theory started in 1928 with Nyquist’s seminal work on telegraph transmission theory [36]. The message of Nyquist’s theory is that finite bandwidth implies discrete time. That is, a signal whose bandwidth is limited can always be represented by sample values taken at discrete time intervals. The sampling theorem of this theory then asserts that the band-limited signal can always be reconstructed exactly from these discrete-time samples.1 Only these discrete samples need to be processed by a receiver, since they contain all the necessary information about the entire waveform.
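For reference, a standard statement of the reconstruction implied by the sampling theorem (added here for concreteness; the symbols x(t), W, and T are generic notation, not taken from this chapter): a signal x(t) band-limited to W Hz can be recovered exactly from its samples spaced T = 1/(2W) seconds apart via

x(t) = \sum_{n=-\infty}^{\infty} x(nT)\,\operatorname{sinc}\!\left(\frac{t-nT}{T}\right), \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.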

The second pillar establishing the supremacy of digital information processing came from Shannon’s 1948 theory. Shannon’s theory essentially establishes that the discrete-time samples which are used to represent a bandlimited signal can be adequately described by a finite number of amplitude levels, the number of which depends on the level of the channel noise. These two theories combined state that a finite number of levels taken at discrete time intervals is completely sufficient to characterize any bandlimited signal in the presence of noise, that is, in any communication system.

With these results, technology has moved towards a complete digitization of communications systems, with error control coding being the key to realizing the sufficiency of discrete amplitude levels. We will study Shannon’s theorem in more detail in Section 1.5.

1.3 Communication Systems

Figure 1.2 shows the basic configuration of a point-to-point digital communications link. The data to be transmitted over this link can either come from an analog source, in which case it must first be converted into digital format (digitized), or it can come from a digital information source. If this data is a speech signal, for example, the digitizer is a speech codec [22]. Usually the digital data is source encoded to remove unnecessary redundancy from the data, i.e., the source data is compressed [14]. Source encoding has the effect that the digital data which enters the encoder has statistics which resemble those of a random symbol source with maximum entropy, i.e., all the different digital symbols occur with equal likelihood and are statistically independent. The channel encoder operates on this compressed data and introduces controlled redundancy for transmission over the channel. The modulator converts the discrete channel symbols into waveforms which are transmitted through the waveform channel. The demodulator reconverts the waveforms back into a discrete sequence of received symbols, and the decoder produces an estimate of the compressed input data sequence, which is subsequently reconverted into the original signal or data sequence.

Figure 1.2: System diagram of a complete point-to-point communication system for digital data. The forward error control (FEC) block is the topic of this book.
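To make the roles of the blocks in Figure 1.2 concrete, the following toy simulation, which is not taken from the book, chains a trivial channel encoder (a rate-1/3 repetition code standing in for the FEC block), a BPSK modulator, an AWGN waveform channel, a hard-decision demodulator, and a majority-vote decoder; source encoding and synchronization are omitted, and all function names and parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    # --- Transmitter ---
    def channel_encode(bits, n_rep=3):
        # Toy FEC: rate-1/3 repetition code (each bit is sent three times).
        return np.repeat(bits, n_rep)

    def modulate(code_bits):
        # BPSK mapping: bit 0 -> +1, bit 1 -> -1.
        return 1.0 - 2.0 * code_bits

    # --- Waveform channel ---
    def awgn(symbols, snr_db):
        # Additive white Gaussian noise at the given symbol SNR.
        snr = 10 ** (snr_db / 10.0)
        noise_std = np.sqrt(1.0 / (2.0 * snr))
        return symbols + noise_std * rng.standard_normal(symbols.shape)

    # --- Receiver ---
    def demodulate(received):
        # Hard decisions: negative sample -> bit 1.
        return (received < 0).astype(int)

    def channel_decode(code_bits, n_rep=3):
        # Majority vote over each group of repeated bits.
        groups = code_bits.reshape(-1, n_rep)
        return (groups.sum(axis=1) > n_rep // 2).astype(int)

    # --- End-to-end run ---
    data = rng.integers(0, 2, size=100_000)
    received = awgn(modulate(channel_encode(data)), snr_db=4.0)
    decoded = channel_decode(demodulate(received))
    print("bit error rate:", np.mean(decoded != data))

Replacing the repetition code with the convolutional, turbo, or LDPC codes of the later chapters changes only the encoder and decoder blocks; the rest of the chain stays the same.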

An important ancillary function at the receiver is the synchronization process. We usually need to acquire carrier frequency and phase synchronization, as well as symbol timing synchronization, in order for the receiver to be able to operate. Synchronization is not a topic of this book, and we will assume in most of our discussion that synchronization has been established (with the exception of phase synchronization in the case of rotationally invariant codes in Chapters 4 and 8). References [32] and [33] treat synchronization issues in detail. Since synchronization is a relatively slow estimation process, and data detection is a fast process, these two operations are usually separated in real receiver implementations, as indicated in Figure 1.2. However, we would like to note at this point that novel, iterative receiver designs are increasingly integrating these auxiliary functions; for an example of phase synchronization see Chapter 8.

Another important feature in some communication systems is automatic repeat request (ARQ). In systems with ARQ the receiver also performs error detection and, through a return channel, requests retransmission of erroneous data blocks, or of data blocks which cannot be reconstructed with sufficient confidence [25]. ARQ can usually improve the data transmission quality substantially, but the return channel needed for ARQ is not always available, or may be impractical. For a deep-space probe on its way to the outer rim of our solar system, ARQ is infeasible since the return path takes too long (several hours!). For speech-encoded signals ARQ is usually impossible for an analogous reason, since only a maximum speech delay of about 200 ms is acceptable. In broadcast systems, ARQ is ruled out for obvious reasons. Error control coding without ARQ is termed forward error correction or control (FEC). FEC is more difficult to perform than simple error detection and ARQ, but dispenses with the return channel. Oftentimes, FEC and ARQ are both integrated into hybrid error control systems [25, 24] for data communications.
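A minimal sketch of the ARQ idea, added here for illustration only: the error-detection step is abstracted to a random frame-corruption event rather than an actual CRC check, and all names and probabilities are assumptions rather than anything specified in the book.

    import random

    random.seed(0)

    def corrupted(frame, p_err=0.3):
        # Channel model: the whole frame is damaged with probability p_err.
        return random.random() < p_err

    def send_with_arq(frame, max_tries=10):
        # Stop-and-wait ARQ: retransmit until the receiver's error
        # detection (modeled abstractly above) accepts the frame.
        for attempt in range(1, max_tries + 1):
            if not corrupted(frame):
                return attempt      # receiver returns an ACK over the return channel
            # otherwise the receiver returns a NAK and the frame is resent
        raise RuntimeError("frame could not be delivered")

    tries = [send_with_arq(f"frame {i}") for i in range(5)]
    print("transmissions needed per frame:", tries)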

This book deals primarily with FEC, the dashed block in Figure 1.2. The reason why we can do this relatively easily is the different functionalities of the various blocks just discussed, and the fact that they operate largely autonomously from each other. Each of them represents a separate entity with its own optimization strategy, and data is simply passed between the different blocks, sometimes with little extra mutual interaction. A notable point in Figure 1.2 is that the encoder/modulator and the demodulator/decoder are combined operations. This is done to reflect the fact that error protection and modulation, in the sense of choosing the discrete signal points that represent the digital data, is a process best addressed jointly. This view of joint encoding/modulation was first proposed by Wozencraft and Kennedy [61] and Massey [30] and then realized with stunning results by Ungerböck [50, 51, 52] in the methods of trellis-coded modulation of the 1980s.

Since we will assume the encoder input data to be a sequence of independent, identically and uniformly distributed symbols (courtesy of the source compression), the single most important parameter to optimize for the FEC block is arguably the bit and/or symbol error rate, and we will adopt this as our criterion for the quality of an FEC system. Note that this is not necessarily the most meaningful measure in all cases. Consider, for example, pulse-code-modulated (PCM) speech, where an error in the most significant bit is significantly more detrimental than an error in the least significant bit. Researchers have looked at schemes with unequal error protection for such applications (e.g., [18]). However, such methods are usually a variation of the basic theme of obtaining a minimum error rate. Occasionally we may need the frame or block error rate (FER), which describes the probability that an entire block of data is incorrectly received. Most communications protocols operate on the basis of frame errors, and frame error control is exercised at the upper layers of a communications protocol. While of course tightly connected, the frame and bit error rates do sometimes follow different tendencies. This will become important when we discuss error floors for very low error rate applications, such as those using cable modem or optical fiber communications.
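As a rough illustration of how the two measures relate (a side remark added here, not a formula from the book): if bit errors within a frame of N bits were independent, each occurring with probability P_b, the frame error rate would be

P_f = 1 - (1 - P_b)^N \approx N P_b \quad \text{for } N P_b \ll 1,

so the two rates would be roughly proportional. In practice decoded bit errors tend to be clustered within the erroneous frames, so this independence assumption rarely holds exactly, which is one reason the frame and bit error rates can follow different tendencies.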

1.4 Error Control Coding

The modern approach to error control in digital communications started with the groundbreaking work of Shannon [45], Hamming [19], and Golay [16]. While Shannon presented a theory which explained the fundamental limits on the efficiency of communications systems, Hamming and Golay were the first to develop practical error control schemes. A new paradigm was born, one in which errors are not synonymous with data that is irretrievably lost; by clever design, errors could be corrected, or avoided altogether. This new thinking was revolutionary. Even though Shannon’s theory promised that large improvements in the performance of communication systems could be achieved, practical improvements had to be excavated by laborious work over half a century of intensive research. One reason for this lies in a fundamental shortcoming of Shannon’s theory. While it clearly states theoretical limits on communication efficiency, its methodology provides no insight on how to actually achieve these limits, since it is based on sophisticated averaging arguments which eliminate all detailed system structure. Coding theory, on the other hand, evolved from Hamming and Golay’s work into a flourishing branch of applied mathematics [27].

Let us see where it all started. The most famous formula from Shannon’s work is arguably the channel capacity of an ideal band-limited Gaussian channel,2 which is given by

C = W \log_2\!\left(1 + \frac{S}{N}\right). \qquad (1.1)

In this formula, C is the channel capacity, that is, the maximum number of bits which can be transmitted through this channel per unit time (second), W is the bandwidth of the channel, and S/N is the signal-to-noise power ratio at the receiver. Shannon’s main theorem, which accompanies (1.1), asserts that error probabilities as small as desired can be achieved as long as the transmission rate R through the channel (in bits/second) is smaller than the channel capacity C. This can be achieved by using an appropriate encoding and decoding operation. However, Shannon’s theory is silent about the structure of these encoders and decoders.
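A small numerical illustration of (1.1), using example values that are not taken from the book (a telephone-grade channel of roughly 3 kHz bandwidth with a 30 dB signal-to-noise ratio):

    import math

    def shannon_capacity(bandwidth_hz, snr_linear):
        # Equation (1.1): C = W * log2(1 + S/N), in bits per second.
        return bandwidth_hz * math.log2(1.0 + snr_linear)

    W = 3000.0                      # illustrative bandwidth in Hz
    snr = 10 ** (30 / 10)           # 30 dB -> a factor of 1000
    print(f"C = {shannon_capacity(W, snr):.0f} bit/s")   # roughly 29.9 kbit/s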

This new view was in marked contrast to early practices, which embodied the opinion that in order to reduce error probabilities the signal energy had to be increased, i.e., the S/N had to be improved. Figure 1.3 shows the error performance of QPSK, a popular modulation method for satellite channels (see Chapter 2) which allows data transmission at rates of up to 2 bits/symbol. The bit error probability (BER) of QPSK is shown as a function of the signal-to-noise ratio S/N per dimension normalized per bit (see Section 1.3), henceforth called SNR. It is evident that an increased SNR provides a gradual decrease in error probability. This contrasts markedly with Shannon’s theory, which promises zero(!) error probability at a spectral efficiency of 2 bits/s/Hz, the maximum that QPSK can achieve, as long as SNR > 1.5 (1.76 dB), shattering conventional wisdom. The limit on SNR is calculated using (1.1); see Section 1.5.
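The 1.76 dB figure can be checked from (1.1) as follows (a short derivation added here; the notation η and E_b/N_0 is standard and not yet introduced in the text, and the SNR quoted above is identified with E_b/N_0). With spectral efficiency η = R/W bits/s/Hz and S/N = η E_b/N_0, reliable transmission at R ≤ C requires

\eta \le \log_2\!\left(1 + \eta\,\frac{E_b}{N_0}\right) \quad\Longrightarrow\quad \frac{E_b}{N_0} \ge \frac{2^{\eta}-1}{\eta},

which for η = 2 bits/s/Hz gives E_b/N_0 ≥ (4 - 1)/2 = 1.5, i.e., 10 log10(1.5) ≈ 1.76 dB, matching the limit quoted above.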

Figure 1.3: Bit error probability of quadrature phase-shift keying (QPSK) and selected 8-PSK trellis-coded modulation (TCM), trellis-turbo-coded (TTCM), and block-turbo-coded (BTC) systems as a function of the normalized signal-to-noise ratio.

Also shown in Figure 1.3 is the performance of several trellis-coded modulation (TCM) and trellis-turbo-coded (TTCM) schemes using 8-ary phase-shift keying (8-PSK) (Chapter 4), and the improvement made possible by coding becomes evident. The difference in SNR, at a given target bit error rate, between a coded system and an uncoded system is termed the coding gain. Note that the coding schemes shown in Figure 1.3