The latest edition of this classic is updated with new problem sets and material. The Second Edition of this fundamental textbook maintains the book's tradition of clear, thought-provoking instruction. Readers are provided once again with an instructive mix of mathematics, physics, statistics, and information theory. All the essential topics in information theory are covered in detail, including entropy, data compression, channel capacity, rate distortion, network information theory, and hypothesis testing. The authors provide readers with a solid understanding of the underlying theory and applications. Problem sets and a telegraphic summary at the end of each chapter further assist readers, and the historical notes that follow each chapter recap the main points. The Second Edition features:
* Chapters reorganized to improve teaching
* 200 new problems
* New material on source coding, portfolio theory, and feedback capacity
* Updated references
Now current and enhanced, the Second Edition of Elements of Information Theory remains the ideal textbook for upper-level undergraduate and graduate courses in electrical engineering, statistics, and telecommunications.
Page count: 1021
Publication year: 2012
Contents
Cover
Half Title page
Title page
Copyright page
Preface to the Second Edition
Preface to the First Edition
Acknowledgments for the Second Edition
Acknowledgments for the First Edition
Chapter 1: Introduction and Preview
1.1 Preview of the Book
Chapter 2: Entropy, Relative Entropy, and Mutual Information
2.1 Entropy
2.2 Joint Entropy and Conditional Entropy
2.3 Relative Entropy and Mutual Information
2.4 Relationship Between Entropy and Mutual Information
2.5 Chain Rules for Entropy, Relative Entropy, and Mutual Information
2.6 Jensen’s Inequality and Its Consequences
2.7 Log Sum Inequality and Its Applications
2.8 Data-Processing Inequality
2.9 Sufficient Statistics
2.10 Fano’s Inequality
Summary
Problems
Historical Notes
Chapter 3: Asymptotic Equipartition Property
3.1 Asymptotic Equipartition Property Theorem
3.2 Consequences of the AEP: Data Compression
3.3 High-Probability Sets and the Typical Set
Summary
Problems
Historical Notes
Chapter 4: Entropy Rates of a Stochastic Process
4.1 Markov Chains
4.2 Entropy Rate
4.3 Example: Entropy Rate of a Random Walk on a Weighted Graph
4.4 Second Law of Thermodynamics
4.5 Functions of Markov Chains
Summary
Problems
Historical Notes
Chapter 5: Data Compression
5.1 Examples of Codes
5.2 Kraft Inequality
5.3 Optimal Codes
5.4 Bounds on the Optimal Code Length
5.5 Kraft Inequality for Uniquely Decodable Codes
5.6 Huffman Codes
5.7 Some Comments on Huffman Codes
5.8 Optimality of Huffman Codes
5.9 Shannon–Fano–Elias Coding
5.10 Competitive Optimality of the Shannon Code
5.11 Generation of Discrete Distributions from Fair Coins
Summary
Problems
Historical Notes
Chapter 6: Gambling and Data Compression
6.1 The Horse Race
6.2 Gambling and Side Information
6.3 Dependent Horse Races and Entropy Rate
6.4 The Entropy of English
6.5 Data Compression and Gambling
6.6 Gambling Estimate of the Entropy of English
Summary
Problems
Historical Notes
Chapter 7: Channel Capacity
7.1 Examples of Channel Capacity
7.2 Symmetric Channels
7.3 Properties of Channel Capacity
7.4 Preview of the Channel Coding Theorem
7.5 Definitions
7.6 Jointly Typical Sequences
7.7 Channel Coding Theorem
7.8 Zero-Error Codes
7.9 Fano’s Inequality and the Converse to the Coding Theorem
7.10 Equality in the Converse to the Channel Coding Theorem
7.11 Hamming Codes
7.12 Feedback Capacity
7.13 Source–Channel Separation Theorem
Summary
Problems
Historical Notes
Chapter 8: Differential Entropy
8.1 Definitions
8.2 AEP for Continuous Random Variables
8.3 Relation of Differential Entropy to Discrete Entropy
8.4 Joint and Conditional Differential Entropy
8.5 Relative Entropy and Mutual Information
8.6 Properties of Differential Entropy, Relative Entropy, and Mutual Information
Summary
Problems
Historical Notes
Chapter 9: Gaussian Channel
9.1 Gaussian Channel: Definitions
9.2 Converse to the Coding Theorem for Gaussian Channels
9.3 Bandlimited Channels
9.4 Parallel Gaussian Channels
9.5 Channels with Colored Gaussian Noise
9.6 Gaussian Channels with Feedback
Summary
Problems
Historical Notes
Chapter 10: Rate Distortion Theory
10.1 Quantization
10.2 Definitions
10.3 Calculation of the Rate Distortion Function
10.4 Converse to the Rate Distortion Theorem
10.5 Achievability of the Rate Distortion Function
10.6 Strongly Typical Sequences and Rate Distortion
10.7 Characterization of the Rate Distortion Function
10.8 Computation of Channel Capacity and the Rate Distortion Function
Summary
Problems
Historical Notes
Chapter 11: Information Theory and Statistics
11.1 Method of Types
11.2 Law of Large Numbers
11.3 Universal Source Coding
11.4 Large Deviation Theory
11.5 Examples of Sanov’s Theorem
11.6 Conditional Limit Theorem
11.7 Hypothesis Testing
11.8 Chernoff–Stein Lemma
11.9 Chernoff Information
11.10 Fisher Information and the Cramér–Rao Inequality
Summary
Problems
Historical Notes
Chapter 12: Maximum Entropy
12.1 Maximum Entropy Distributions
12.2 Examples
12.3 Anomalous Maximum Entropy Problem
12.4 Spectrum Estimation
12.5 Entropy Rates of a Gaussian Process
12.6 Burg’s Maximum Entropy Theorem
Summary
Problems
Historical Notes
Chapter 13: Universal Source Coding
13.1 Universal Codes and Channel Capacity
13.2 Universal Coding for Binary Sequences
13.3 Arithmetic Coding
13.4 Lempel–Ziv Coding
13.5 Optimality of Lempel–Ziv Algorithms
Summary
Problems
Historical Notes
Chapter 14: Kolmogorov Complexity
14.1 Models of Computation
14.2 Kolmogorov Complexity: Definitions and Examples
14.3 Kolmogorov Complexity and Entropy
14.4 Kolmogorov Complexity of Integers
14.5 Algorithmically Random and Incompressible Sequences
14.6 Universal Probability
14.7 The Halting Problem and the Noncomputability of Kolmogorov Complexity
14.8 Ω
14.9 Universal Gambling
14.10 Occam’s Razor
14.11 Kolmogorov Complexity and Universal Probability
14.12 Kolmogorov Sufficient Statistic
14.13 Minimum Description Length Principle
Summary
Problems
Historical Notes
Chapter 15: Network Information Theory
15.1 Gaussian Multiple-User Channels
15.2 Jointly Typical Sequences
15.3 Multiple-Access Channel
15.4 Encoding of Correlated Sources
15.5 Duality Between Slepian–Wolf Encoding and Multiple-Access Channels
15.6 Broadcast Channel
15.7 Relay Channel
15.8 Source Coding with Side Information
15.9 Rate Distortion with Side Information
15.10 General Multiterminal Networks
Summary
Problems
Historical Notes
Chapter 16: Information Theory and Portfolio Theory
16.1 The Stock Market: Some Definitions
16.2 Kuhn–Tucker Characterization of the Log-Optimal Portfolio
16.3 Asymptotic Optimality of the Log-Optimal Portfolio
16.4 Side Information and the Growth Rate
16.5 Investment in Stationary Markets
16.6 Competitive Optimality of the Log-Optimal Portfolio
16.7 Universal Portfolios
16.8 Shannon–McMillan–Breiman Theorem (General AEP)
Summary
Problems
Historical Notes
Chapter 17: Inequalities in Information Theory
17.1 Basic Inequalities of Information Theory
17.2 Differential Entropy
17.3 Bounds on Entropy and Relative Entropy
17.4 Inequalities for Types
17.5 Combinatorial Bounds on Entropy
17.6 Entropy Rates of Subsets
17.7 Entropy and Fisher Information
17.8 Entropy Power Inequality and Brunn–Minkowski Inequality
17.9 Inequalities for Determinants
17.10 Inequalities for Ratios of Determinants
Summary
Problems
Historical Notes
Bibliography
List of Symbols
Index
ELEMENTS OF INFORMATION THEORY
Copyright © 2006 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.
For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Cover, T. M., 1938–
Elements of information theory / by Thomas M. Cover, Joy A. Thomas. – 2nd ed.
p. cm.
“A Wiley-Interscience publication.”
Includes bibliographical references and index.
ISBN-13 978-0-471-24195-9
ISBN-10 0-471-24195-4
1. Information theory. I. Thomas, Joy A. II. Title.
Q360.C68 2005
003′.54–dc22
2005047799
PREFACE TO THE SECOND EDITION
In the years since the publication of the first edition, there were many aspects of the book that we wished to improve, to rearrange, or to expand, but the constraints of reprinting would not allow us to make those changes between printings. In the new edition, we now get a chance to make some of these changes, to add problems, and to discuss some topics that we had omitted from the first edition.
The key changes include a reorganization of the chapters to make the book easier to teach, and the addition of more than two hundred new problems. We have added material on universal portfolios, universal source coding, Gaussian feedback capacity, network information theory, and developed the duality of data compression and channel capacity. A new chapter has been added and many proofs have been simplified. We have also updated the references and historical notes.
The material in this book can be taught in a two-quarter sequence. The first quarter might cover Chapters 1 to 9, which includes the asymptotic equipartition property, data compression, and channel capacity, culminating in the capacity of the Gaussian channel. The second quarter could cover the remaining chapters, including rate distortion, the method of types, Kolmogorov complexity, network information theory, universal source coding, and portfolio theory. If only one semester is available, we would add rate distortion and a single lecture each on Kolmogorov complexity and network information theory to the first semester. A web site, http://www.elementsofinformationtheory.com, provides links to additional material and solutions to selected problems.
In the years since the first edition of the book, information theory celebrated its 50th birthday (the 50th anniversary of Shannon’s original paper that started the field), and ideas from information theory have been applied to many problems of science and technology, including bioinformatics, web search, wireless communication, video compression, and others. The list of applications is endless, but it is the elegance of the fundamental mathematics that is still the key attraction of this area. We hope that this book will give some insight into why we believe that this is one of the most interesting areas at the intersection of mathematics, physics, statistics, and engineering.
TOM COVER JOY THOMAS
Palo Alto, California
January 2006
PREFACE TO THE FIRST EDITION
This is intended to be a simple and accessible book on information theory. As Einstein said, “Everything should be made as simple as possible, but no simpler.” Although we have not verified the quote (first found in a fortune cookie), this point of view drives our development throughout the book. There are a few key ideas and techniques that, when mastered, make the subject appear simple and provide great intuition on new questions.
This book has arisen from over ten years of lectures in a two-quarter sequence of a senior and first-year graduate-level course in information theory, and is intended as an introduction to information theory for students of communication theory, computer science, and statistics.
There are two points to be made about the simplicities inherent in information theory. First, certain quantities like entropy and mutual information arise as the answers to fundamental questions. For example, entropy is the minimum descriptive complexity of a random variable, and mutual information is the communication rate in the presence of noise. Also, as we shall point out, mutual information corresponds to the increase in the doubling rate of wealth given side information. Second, the answers to information theoretic questions have a natural algebraic structure. For example, there is a chain rule for entropies, and entropy and mutual information are related. Thus the answers to problems in data compression and communication admit extensive interpretation. We all know the feeling that follows when one investigates a problem, goes through a large amount of algebra, and finally investigates the answer to find that the entire problem is illuminated not by the analysis but by the inspection of the answer. Perhaps the outstanding examples of this in physics are Newton’s laws and Schrödinger’s wave equation. Who could have foreseen the awesome philosophical interpretations of Schrödinger’s wave equation?
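The algebraic structure mentioned above can be seen concretely in the chain rule for entropies, H(X, Y) = H(X) + H(Y|X). The following Python sketch (our illustration, not from the text; the joint distribution is hypothetical) verifies the rule numerically:

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a probability vector (0 log 0 = 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A hypothetical joint distribution p(x, y) over two binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

h_xy = entropy(joint.values())

# Marginal p(x) and conditional entropy H(Y|X) = sum_x p(x) H(Y|X=x).
px = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}
h_x = entropy(px.values())
h_y_given_x = sum(
    px[x] * entropy([joint[(x, y)] / px[x] for y in (0, 1)]) for x in (0, 1)
)

# Chain rule: H(X, Y) = H(X) + H(Y|X).
assert abs(h_xy - (h_x + h_y_given_x)) < 1e-12
```

The same decomposition holds for any joint distribution, which is exactly the kind of clean algebraic answer the preface describes.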
In the text we often investigate properties of the answer before we look at the question. For example, in Chapter 2, we define entropy, relative entropy, and mutual information and study the relationships and a few interpretations of them, showing how the answers fit together in various ways. Along the way we speculate on the meaning of the second law of thermodynamics. Does entropy always increase? The answer is yes and no. This is the sort of result that should please experts in the area but might be overlooked as standard by the novice.
In fact, that brings up a point that often occurs in teaching. It is fun to find new proofs or slightly new results that no one else knows. When one presents these ideas along with the established material in class, the response is “sure, sure, sure.” But the excitement of teaching the material is greatly enhanced. Thus we have derived great pleasure from investigating a number of new ideas in this textbook.
Examples of some of the new material in this text include the chapter on the relationship of information theory to gambling, the work on the universality of the second law of thermodynamics in the context of Markov chains, the joint typicality proofs of the channel capacity theorem, the competitive optimality of Huffman codes, and the proof of Burg’s theorem on maximum entropy spectral density estimation. Also, the chapter on Kolmogorov complexity has no counterpart in other information theory texts. We have also taken delight in relating Fisher information, mutual information, the central limit theorem, and the Brunn–Minkowski and entropy power inequalities. To our surprise, many of the classical results on determinant inequalities are most easily proved using information theoretic inequalities.
Even though the field of information theory has grown considerably since Shannon’s original paper, we have strived to emphasize its coherence. While it is clear that Shannon was motivated by problems in communication theory when he developed information theory, we treat information theory as a field of its own with applications to communication theory and statistics. We were drawn to the field of information theory from backgrounds in communication theory, probability theory, and statistics, because of the apparent impossibility of capturing the intangible concept of information.
Since most of the results in the book are given as theorems and proofs, we expect the elegance of the results to speak for themselves. In many cases we actually describe the properties of the solutions before the problems. Again, the properties are interesting in themselves and provide a natural rhythm for the proofs that follow.
One innovation in the presentation is our use of long chains of inequalities with no intervening text followed immediately by the explanations. By the time the reader comes to many of these proofs, we expect that he or she will be able to follow most of these steps without any explanation and will be able to pick out the needed explanations. These chains of inequalities serve as pop quizzes in which the reader can be reassured of having the knowledge needed to prove some important theorems. The natural flow of these proofs is so compelling that it prompted us to flout one of the cardinal rules of technical writing; and the absence of verbiage makes the logical necessity of the ideas evident and the key ideas perspicuous. We hope that by the end of the book the reader will share our appreciation of the elegance, simplicity, and naturalness of information theory.
Throughout the book we use the method of weakly typical sequences, which has its origins in Shannon’s original 1948 work but was formally developed in the early 1970s. The key idea here is the asymptotic equipartition property, which can be roughly paraphrased as “Almost everything is almost equally probable.”
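The slogan "almost everything is almost equally probable" can be made numerical: for an i.i.d. source, a typical length-n sequence has probability close to 2^(-nH). A small Python check (ours, for illustration) for a Bernoulli(0.3) source:

```python
import math

def h2(p):
    """Binary entropy H(p) in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# i.i.d. Bernoulli(0.3) source; a "typical" length-n sequence has about
# n*p ones, and its probability is p^(np) * (1-p)^(n(1-p)) = 2^(-n H(p)).
p, n = 0.3, 1000
k = int(n * p)                 # number of ones in a typical sequence
log_prob = k * math.log2(p) + (n - k) * math.log2(1 - p)
per_symbol = -log_prob / n     # -(1/n) log2 p(x^n)

assert abs(per_symbol - h2(p)) < 1e-9   # ≈ H(0.3) ≈ 0.881 bits/symbol
```

Every typical sequence thus carries essentially the same probability, 2^(-nH), which is why the typical set behaves like a uniform set of about 2^(nH) elements.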
Chapter 2 includes the basic algebraic relationships of entropy, relative entropy, and mutual information. The asymptotic equipartition property (AEP) is given central prominence in Chapter 3. This leads us to discuss the entropy rates of stochastic processes and data compression in Chapters 4 and 5. A gambling sojourn is taken in Chapter 6, where the duality of data compression and the growth rate of wealth is developed.
The sensational success of Kolmogorov complexity as an intellectual foundation for information theory is explored in Chapter 14. Here we replace the goal of finding a description that is good on the average with the goal of finding the universally shortest description. There is indeed a universal notion of the descriptive complexity of an object. Here also the wonderful number Ω is investigated. This number, which is the binary expansion of the probability that a Turing machine will halt, reveals many of the secrets of mathematics.
Channel capacity is established in Chapter 7. The necessary material on differential entropy is developed in Chapter 8, laying the groundwork for the extension of previous capacity theorems to continuous noise channels. The capacity of the fundamental Gaussian channel is investigated in Chapter 9.
The relationship between information theory and statistics, first studied by Kullback in the early 1950s and relatively neglected since, is developed in Chapter 11. Rate distortion theory requires a little more background than its noiseless data compression counterpart, which accounts for its placement as late as Chapter 10 in the text.
The huge subject of network information theory, which is the study of the simultaneously achievable flows of information in the presence of noise and interference, is developed in Chapter 15. Many new ideas come into play in network information theory. The primary new ingredients are interference and feedback. Chapter 16 considers the stock market, which is the generalization of the gambling processes considered in Chapter 6, and shows again the close correspondence of information theory and gambling.
Chapter 17, on inequalities in information theory, gives us a chance to recapitulate the interesting inequalities strewn throughout the book, put them in a new framework, and then add some interesting new inequalities on the entropy rates of randomly drawn subsets. The beautiful relationship of the Brunn–Minkowski inequality for volumes of set sums, the entropy power inequality for the effective variance of the sum of independent random variables, and the Fisher information inequalities are made explicit here.
We have made an attempt to keep the theory at a consistent level. The mathematical level is a reasonably high one, probably the senior or first-year graduate level, with a background of at least one good semester course in probability and a solid background in mathematics. We have, however, been able to avoid the use of measure theory. Measure theory comes up only briefly in the proof of the AEP for ergodic processes in Chapter 16. This fits in with our belief that the fundamentals of information theory are orthogonal to the techniques required to bring them to their full generalization.
The essential vitamins are contained in Chapters 2, 3, 4, 5, 7, 8, 9, 11, 10, and 15. This subset of chapters can be read without essential reference to the others and makes a good core of understanding. In our opinion, Chapter 14 on Kolmogorov complexity is also essential for a deep understanding of information theory. The rest, ranging from gambling to inequalities, is part of the terrain illuminated by this coherent and beautiful subject.
Every course has its first lecture, in which a sneak preview and overview of ideas is presented. Chapter 1 plays this role.
TOM COVER JOY THOMAS
Palo Alto, California
June 1990
ACKNOWLEDGMENTS FOR THE SECOND EDITION
Since the appearance of the first edition, we have been fortunate to receive feedback, suggestions, and corrections from a large number of readers. It would be impossible to thank everyone who has helped us in our efforts, but we would like to list some of them. In particular, we would like to thank all the faculty who taught courses based on this book and the students who took those courses; it is through them that we learned to look at the same material from a different perspective.
In particular, we would like to thank Andrew Barron, Alon Orlitsky, T. S. Han, Raymond Yeung, Nam Phamdo, Franz Willems, and Marty Cohn for their comments and suggestions. Over the years, students at Stanford have provided ideas and inspirations for the changes—these include George Gemelos, Navid Hassanpour, Young-Han Kim, Charles Mathis, Styrmir Sigurjonsson, Jon Yard, Michael Baer, Mung Chiang, Suhas Diggavi, Elza Erkip, Paul Fahn, Garud Iyengar, David Julian, Yiannis Kontoyiannis, Amos Lapidoth, Erik Ordentlich, Sandeep Pombra, Jim Roche, Arak Sutivong, Joshua Sweetkind-Singer, and Assaf Zeevi. Denise Murphy provided much support and help during the preparation of the second edition.
Joy Thomas would like to acknowledge the support of colleagues at IBM and Stratify who provided valuable comments and suggestions. Particular thanks are due Peter Franaszek, C. S. Chang, Randy Nelson, Ramesh Gopinath, Pandurang Nayak, John Lamping, Vineet Gupta, and Ramana Venkata. In particular, many hours of discussion with Brandon Roy helped refine some of the arguments in the book. Above all, Joy would like to acknowledge that the second edition would not have been possible without the support and encouragement of his wife, Priya, who makes all things worthwhile.
Tom Cover would like to thank his students and his wife, Karen.
ACKNOWLEDGMENTS FOR THE FIRST EDITION
We wish to thank everyone who helped make this book what it is. In particular, Aaron Wyner, Toby Berger, Masoud Salehi, Alon Orlitsky, Jim Mazo, and Andrew Barron have made detailed comments on various drafts of the book which guided us in our final choice of content. We would like to thank Bob Gallager for an initial reading of the manuscript and his encouragement to publish it. Aaron Wyner donated his new proof with Ziv on the convergence of the Lempel–Ziv algorithm. We would also like to thank Norman Abramson, Ed van der Meulen, Jack Salz, and Raymond Yeung for their suggested revisions.
Certain key visitors and research associates contributed as well, including Amir Dembo, Paul Algoet, Hirosuke Yamamoto, Ben Kawabata, M. Shimizu and Yoichiro Watanabe. We benefited from the advice of John Gill when he used this text in his class. Abbas El Gamal made invaluable contributions, and helped begin this book years ago when we planned to write a research monograph on multiple user information theory. We would also like to thank the Ph.D. students in information theory as this book was being written: Laura Ekroot, Will Equitz, Don Kimber, Mitchell Trott, Andrew Nobel, Jim Roche, Erik Ordentlich, Elza Erkip and Vittorio Castelli. Also Mitchell Oslick, Chien-Wen Tseng and Michael Morrell were among the most active students in contributing questions and suggestions to the text. Marc Goldberg and Anil Kaul helped us produce some of the figures. Finally we would like to thank Kirsten Goodell and Kathy Adams for their support and help in some of the aspects of the preparation of the manuscript.
Joy Thomas would also like to thank Peter Franaszek, Steve Lavenberg, Fred Jelinek, David Nahamoo, and Lalit Bahl for their encouragement and support during the final stages of production of this book.
Information theory answers two fundamental questions in communication theory: What is the ultimate data compression (answer: the entropy H), and what is the ultimate transmission rate of communication (answer: the channel capacity C). For this reason some consider information theory to be a subset of communication theory. We argue that it is much more. Indeed, it has fundamental contributions to make in statistical physics (thermodynamics), computer science (Kolmogorov complexity or algorithmic complexity), statistical inference (Occam’s Razor: “The simplest explanation is best”), and to probability and statistics (error exponents for optimal hypothesis testing and estimation).
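The first of these answers, the entropy H, is a simple functional of the source distribution. As a concrete (illustrative, not from the text) calculation:

```python
import math

def entropy_bits(probs):
    """Entropy H = -sum p log2 p, the compression limit in bits/symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A four-letter source: no lossless code can average fewer than H bits/symbol.
assert abs(entropy_bits([0.5, 0.25, 0.125, 0.125]) - 1.75) < 1e-12
assert abs(entropy_bits([0.25] * 4) - 2.0) < 1e-12   # uniform: log2(4) = 2
```

The skewed source above can be compressed to 1.75 bits per symbol on average, strictly below the 2 bits a naive fixed-length code would use.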
This “first lecture” chapter goes backward and forward through information theory and its naturally related ideas. The full definitions and study of the subject begin in Chapter 2. Figure 1.1 illustrates the relationship of information theory to other fields. As the figure suggests, information theory intersects physics (statistical mechanics), mathematics (probability theory), electrical engineering (communication theory), and computer science (algorithmic complexity). We now describe the areas of intersection in greater detail.
FIGURE 1.1. Relationship of information theory to other fields.
Electrical Engineering (Communication Theory). In the early 1940s it was thought to be impossible to send information at a positive rate with negligible probability of error. Shannon surprised the communication theory community by proving that the probability of error could be made nearly zero for all communication rates below channel capacity. The capacity can be computed simply from the noise characteristics of the channel. Shannon further argued that random processes such as music and speech have an irreducible complexity below which the signal cannot be compressed. This he named the entropy, in deference to the parallel use of this word in thermodynamics, and argued that if the entropy of the source is less than the capacity of the channel, asymptotically error-free communication can be achieved.
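For the simplest noisy channel, the binary symmetric channel, the capacity is C = 1 − H(p), where p is the crossover probability. The following Python sketch (our illustration; the formula is derived in Chapter 7) computes it:

```python
import math

def h2(p):
    """Binary entropy in bits, with H(0) = H(1) = 0."""
    return 0.0 if p in (0, 1) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(crossover):
    """Capacity of a binary symmetric channel: C = 1 - H(p) bits per use."""
    return 1.0 - h2(crossover)

# A channel that flips each bit with probability 0.11 still supports
# reliable communication at any rate below about 0.5 bits per channel use.
assert abs(bsc_capacity(0.11) - 0.5) < 0.01
assert bsc_capacity(0.5) == 0.0   # pure noise: zero capacity
```

This illustrates Shannon's claim above: the capacity is computed directly from the noise characteristics of the channel.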
Information theory today represents the extreme points of the set of all possible communication schemes, as shown in the fanciful Figure 1.2. The data compression minimum I(X; X̂) lies at one extreme of the set of communication ideas. All data compression schemes require description rates at least equal to this minimum. At the other extreme is the data transmission maximum I(X; Y), known as the channel capacity. Thus, all modulation schemes and data compression schemes lie between these limits.
FIGURE 1.2. Information theory as the extreme points of communication theory.
Information theory also suggests means of achieving these ultimate limits of communication. However, these theoretically optimal communication schemes, beautiful as they are, may turn out to be computationally impractical. It is only because of the computational feasibility of simple modulation and demodulation schemes that we use them rather than the random coding and nearest-neighbor decoding rule suggested by Shannon’s proof of the channel capacity theorem. Progress in integrated circuits and code design has enabled us to reap some of the gains suggested by Shannon’s theory. Computational practicality was finally achieved by the advent of turbo codes. A good example of an application of the ideas of information theory is the use of error-correcting codes on compact discs and DVDs.
Recent work on the communication aspects of information theory has concentrated on network information theory: the theory of the simultaneous rates of communication from many senders to many receivers in the presence of interference and noise. Some of the trade-offs of rates between senders and receivers are unexpected, and all have a certain mathematical simplicity. A unifying theory, however, remains to be found.
Computer Science (Kolmogorov Complexity). Kolmogorov, Chaitin, and Solomonoff put forth the idea that the complexity of a string of data can be defined by the length of the shortest binary computer program for computing the string. Thus, the complexity is the minimal description length. This definition of complexity turns out to be universal, that is, computer independent, and is of fundamental importance. Thus, Kolmogorov complexity lays the foundation for the theory of descriptive complexity. Gratifyingly, the Kolmogorov complexity K is approximately equal to the Shannon entropy H if the sequence is drawn at random from a distribution that has entropy H. So the tie-in between information theory and Kolmogorov complexity is perfect. Indeed, we consider Kolmogorov complexity to be more fundamental than Shannon entropy. It is the ultimate data compression and leads to a logically consistent procedure for inference.
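Kolmogorov complexity itself is noncomputable (Chapter 14), but an ordinary compressor gives a crude, computable upper bound on description length, which makes the idea tangible. A small Python experiment (our illustration, not from the text):

```python
import random
import zlib

# A string produced by a trivial rule compresses to a few dozen bytes,
# while "random-looking" data stays near its raw length -- mirroring the
# gap between low and high Kolmogorov complexity.
regular = b"01" * 5000                 # 10,000 bytes from a tiny program

random.seed(0)
scrambled = bytes(random.getrandbits(8) for _ in range(10000))

short = len(zlib.compress(regular))    # small: the rule is simple
long_ = len(zlib.compress(scrambled))  # close to the raw 10,000 bytes

assert short < 100 and long_ > 9000
```

The compressed length is only an upper bound on the true complexity, but the contrast between the two strings is exactly the phenomenon the definition captures.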
There is a pleasing complementary relationship between algorithmic complexity and computational complexity. One can think about computational complexity (time complexity) and Kolmogorov complexity (program length or descriptive complexity) as two axes corresponding to program running time and program length. Kolmogorov complexity focuses on minimizing along the second axis, and computational complexity focuses on minimizing along the first axis. Little work has been done on the simultaneous minimization of the two.
Physics (Thermodynamics). Statistical mechanics is the birthplace of entropy and the second law of thermodynamics. Entropy always increases. Among other things, the second law allows one to dismiss any claims to perpetual motion machines. We discuss the second law briefly in Chapter 4.
Mathematics (Probability Theory and Statistics). The fundamental quantities of information theory—entropy, relative entropy, and mutual information—are defined as functionals of probability distributions. In turn, they characterize the behavior of long sequences of random variables and allow us to estimate the probabilities of rare events (large deviation theory) and to find the best error exponent in hypothesis tests.
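The functional behind those error exponents is the relative entropy D(p||q) (the Chernoff–Stein lemma, Chapter 11). A minimal Python sketch (ours, for illustration) of the quantity:

```python
import math

def kl_bits(p, q):
    """Relative entropy D(p||q) = sum_i p_i log2(p_i/q_i) in bits; it governs
    the exponential decay rate of error probabilities in hypothesis testing."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Distinguishing a fair coin from a 0.9-biased one: the type II error
# probability decays roughly like 2^(-n D), with D ≈ 0.737 bits.
d = kl_bits([0.5, 0.5], [0.9, 0.1])

assert abs(d - 0.737) < 0.001
assert kl_bits([0.5, 0.5], [0.5, 0.5]) == 0.0   # D(p||p) = 0
```

Note that D(p||q) is nonnegative and zero only when p = q, which is why it serves as a measure of distinguishability between the two hypotheses.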
Philosophy of Science (Occam’s Razor). William of Occam said “Causes shall not be multiplied beyond necessity,” or to paraphrase it, “The simplest explanation is best.” Solomonoff and Chaitin argued persuasively that one gets a universally good prediction procedure if one takes a weighted combination of all programs that explain the data and observes what they print next. Moreover, this inference will work in many problems not handled by statistics. For example, this procedure will eventually predict the subsequent digits of π. When this procedure is applied to coin flips that come up heads with probability 0.7, this too will be inferred. When applied to the stock market, the procedure should essentially find all the “laws” of the stock market and extrapolate them optimally. In principle, such a procedure would have found Newton’s laws of physics. Of course, such inference is highly impractical, because weeding out all computer programs that fail to generate existing data will take impossibly long. We would predict what happens tomorrow a hundred years from now.
Economics (Investment). Repeated investment in a stationary stock market results in an exponential growth of wealth. The growth rate of the wealth is a dual of the entropy rate of the stock market. The parallels between the theory of optimal investment in the stock market and information theory are striking. We develop the theory of investment to explore this duality.
Computation vs. Communication. As we build larger computers out of smaller components, we encounter both a computation limit and a communication limit. Computation is communication limited and communication is computation limited. These become intertwined, and thus all of the developments in communication theory via information theory should have a direct impact on the theory of computation.
The initial questions treated by information theory lay in the areas of data compression and transmission. The answers are quantities such as entropy and mutual information, which are functions of the probability distributions that underlie the process of communication. A few definitions will aid the initial discussion. We repeat these definitions in Chapter 2.
The entropy of a random variable X with a probability mass function p(x) is defined by
H(X) = -\sum_{x} p(x) \log_2 p(x).    (1.1)
We use logarithms to base 2. The entropy will then be measured in bits. The entropy is a measure of the average uncertainty in the random variable. It is the number of bits on average required to describe the random variable.
Example 1.1.1 Consider a random variable that has a uniform distribution over 32 outcomes. To identify an outcome, we need a label that takes on 32 different values. Thus, 5-bit strings suffice as labels.
The entropy of this random variable is
H(X) = -\sum_{i=1}^{32} p(i) \log p(i) = -\sum_{i=1}^{32} \frac{1}{32} \log \frac{1}{32} = \log 32 = 5 \text{ bits,}    (1.2)
which agrees with the number of bits needed to describe X. In this case, all the outcomes have representations of the same length.
Now consider an example with nonuniform distribution.
Example 1.1.2 Suppose that we have a horse race with eight horses taking part. Assume that the probabilities of winning for the eight horses are (1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64). We can calculate the entropy of the horse race as

H(X) = -\frac{1}{2} \log \frac{1}{2} - \frac{1}{4} \log \frac{1}{4} - \frac{1}{8} \log \frac{1}{8} - \frac{1}{16} \log \frac{1}{16} - 4 \cdot \frac{1}{64} \log \frac{1}{64} = 2 \text{ bits.}    (1.3)
Suppose that we wish to send a message indicating which horse won the race. One alternative is to send the index of the winning horse. This description requires 3 bits for any of the horses. But the win probabilities are not uniform. It therefore makes sense to use shorter descriptions for the more probable horses and longer descriptions for the less probable ones, so that we achieve a lower average description length. For example, we could use the following set of bit strings to represent the eight horses: 0, 10, 110, 1110, 111100, 111101, 111110, 111111. The average description length in this case is 2 bits, as opposed to 3 bits for the uniform code. Notice that the average description length in this case is equal to the entropy. In Chapter 5 we show that the entropy of a random variable is a lower bound on the average number of bits required to represent the random variable and also on the average number of questions needed to identify the variable in a game of “20 questions.” We also show how to construct representations that have an average length within 1 bit of the entropy.
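The calculation above can be checked in a few lines of Python (a minimal sketch; the win probabilities used here are the ones this example assumes, consistent with the code lengths 1, 2, 3, 4, 6, 6, 6, 6 of the bit strings listed above):

```python
from math import log2

# Assumed win probabilities for the eight horses, and the lengths of the
# code words 0, 10, 110, 1110, 111100, 111101, 111110, 111111.
probs = [1/2, 1/4, 1/8, 1/16, 1/64, 1/64, 1/64, 1/64]
lengths = [1, 2, 3, 4, 6, 6, 6, 6]

def entropy(p):
    """H(X) = -sum p(x) log2 p(x), in bits."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

H = entropy(probs)
avg_len = sum(pi * li for pi, li in zip(probs, lengths))
print(H, avg_len)  # both are 2.0 bits
```

The average description length of this code exactly matches the entropy, illustrating the lower bound discussed in Chapter 5.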
The concept of entropy in information theory is related to the concept of entropy in statistical mechanics. If we draw a sequence of n independent and identically distributed (i.i.d.) random variables, we will show that the probability of a “typical” sequence is about 2^{-nH(X)} and that there are about 2^{nH(X)} such typical sequences. This property [known as the asymptotic equipartition property (AEP)] is the basis of many of the proofs in information theory. We later present other problems for which entropy arises as a natural answer (e.g., the number of fair coin flips needed to generate a random variable).
The notion of descriptive complexity of a random variable can be extended to define the descriptive complexity of a single string. The Kolmogorov complexity of a binary string is defined as the length of the shortest computer program that prints out the string. It will turn out that if the string is indeed random, the Kolmogorov complexity is close to the entropy. Kolmogorov complexity is a natural framework in which to consider problems of statistical inference and modeling and leads to a clearer understanding of Occam’s Razor: “The simplest explanation is best.” We describe some simple properties of Kolmogorov complexity in Chapter 14.
Entropy is the uncertainty of a single random variable. We can define conditional entropy H(X|Y), which is the entropy of a random variable conditional on the knowledge of another random variable. The reduction in uncertainty due to another random variable is called the mutual information. For two random variables X and Y this reduction is the mutual information
I(X; Y) = H(X) - H(X|Y) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x) p(y)}.    (1.4)
The mutual information I (X; Y) is a measure of the dependence between the two random variables. It is symmetric in X and Y and always nonnegative and is equal to zero if and only if X and Y are independent.
A communication channel is a system in which the output depends probabilistically on its input. It is characterized by a probability transition matrix p(y|x) that determines the conditional distribution of the output given the input. For a communication channel with input X and output Y, we can define the capacity C by
C = \max_{p(x)} I(X; Y).    (1.5)
Later we show that the capacity is the maximum rate at which we can send information over the channel and recover the information at the output with a vanishingly low probability of error. We illustrate this with a few examples.
Example 1.1.4 Consider the noisy channel of Figure 1.4. This channel has four input symbols, each of which is received either as itself or as the next symbol (cyclically), with probability 1/2 each. If we use only inputs 1 and 3, the two sets of possible outputs do not overlap, so the receiver can tell without error which of the two inputs was sent.

FIGURE 1.4. Noisy channel.
In general, communication channels do not have the simple structure of this example, so we cannot always identify a subset of the inputs to send information without error. But if we consider a sequence of transmissions, all channels look like this example and we can then identify a subset of the input sequences (the codewords) that can be used to transmit information over the channel in such a way that the sets of possible output sequences associated with each of the codewords are approximately disjoint. We can then look at the output sequence and identify the input sequence with a vanishingly low probability of error.
Example 1.1.5 (Binary symmetric channel) This is the basic example of a noisy communication system. Each transmitted bit is received correctly with probability 1 − p and complemented (flipped) with probability p. The channel is illustrated in Figure 1.5.
FIGURE 1.5. Binary symmetric channel.
The ultimate limit on the rate of communication of information over a channel is given by the channel capacity. The channel coding theorem shows that this limit can be achieved by using codes with a long block length. In practical communication systems, there are limitations on the complexity of the codes that we can use, and therefore we may not be able to achieve capacity.
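For the binary symmetric channel, the capacity works out to C = 1 − H(p) bits per transmission, a result derived in the chapter on channel capacity. A short Python sketch of the endpoint cases:

```python
from math import log2

def binary_entropy(p):
    """H(p) = -p log2 p - (1 - p) log2 (1 - p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """C = 1 - H(p) for the binary symmetric channel with crossover p."""
    return 1.0 - binary_entropy(p)

print(bsc_capacity(0.0))   # 1.0: a noiseless channel carries 1 bit per use
print(bsc_capacity(0.5))   # 0.0: pure noise carries no information
print(bsc_capacity(0.11))  # intermediate noise: roughly half a bit per use
```

Capacity is largest when the channel is deterministic (p = 0 or p = 1) and vanishes when the output is independent of the input (p = 1/2).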
Mutual information turns out to be a special case of a more general quantity called relative entropy D(p||q), which is a measure of the “distance” between two probability mass functions p and q. It is defined as

D(p||q) = \sum_{x} p(x) \log \frac{p(x)}{q(x)}.    (1.6)
There are a number of parallels between information theory and the theory of investment in a stock market. A stock market is defined by a random vector X whose elements are nonnegative numbers equal to the ratio of the price of a stock at the end of a day to the price at the beginning of the day. For a stock market with distribution F(x), we can define the doubling rate W as
W = \max_{\mathbf{b}} \int \log \mathbf{b}^{T} \mathbf{x} \, dF(\mathbf{x}),    (1.7)

where the maximum is over all portfolio vectors \mathbf{b} with nonnegative components summing to 1.
The doubling rate is the maximum asymptotic exponent in the growth of wealth. The doubling rate has a number of properties that parallel the properties of entropy. We explore some of these properties in Chapter 16.
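To make the doubling rate concrete, here is a minimal Python sketch for a hypothetical two-asset market (the assets and probabilities are invented for illustration); it finds the constant-rebalanced portfolio fraction b that maximizes W = E log2(b^T X) by grid search:

```python
from math import log2

# Hypothetical market: asset 1 doubles or halves with equal probability
# each day; asset 2 is cash (price ratio always 1).
outcomes = [((2.0, 1.0), 0.5), ((0.5, 1.0), 0.5)]

def doubling_rate(b):
    """W(b) = E[log2(b*x1 + (1 - b)*x2)] for fraction b held in asset 1."""
    return sum(p * log2(b * x1 + (1 - b) * x2) for (x1, x2), p in outcomes)

# Grid search for the best constant-rebalanced portfolio.
best_b = max((i / 1000 for i in range(1001)), key=doubling_rate)
print(best_b, doubling_rate(best_b))
```

Here betting everything on the risky asset (b = 1) or nothing (b = 0) yields zero growth, while the optimal split b = 1/2 grows wealth at about 0.085 bits (roughly 6%) per day — the log-optimal portfolio idea developed in Chapter 16.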
The quantities H, I, C, D, K, and W introduced above arise naturally in the areas surveyed in this preview: data compression, data transmission, statistics, complexity, and investment.
Information-theoretic quantities such as entropy and relative entropy arise again and again as the answers to the fundamental questions in communication and statistics. Before studying these questions, we shall study some of the properties of the answers. We begin in Chapter 2 with the definitions and basic properties of entropy, relative entropy, and mutual information.
In this chapter we introduce most of the basic definitions required for subsequent development of the theory. It is irresistible to play with their relationships and interpretations, taking faith in their later utility. After defining entropy and mutual information, we establish chain rules, the nonnegativity of mutual information, the data-processing inequality, and illustrate these definitions by examining sufficient statistics and Fano’s inequality.
The concept of information is too broad to be captured completely by a single definition. However, for any probability distribution, we define a quantity called the entropy, which has many properties that agree with the intuitive notion of what a measure of information should be. This notion is extended to define mutual information, which is a measure of the amount of information one random variable contains about another. Entropy then becomes the self-information of a random variable. Mutual information is a special case of a more general quantity called relative entropy, which is a measure of the distance between two probability distributions. All these quantities are closely related and share a number of simple properties, some of which we derive in this chapter.
In later chapters we show how these quantities arise as natural answers to a number of questions in communication, statistics, complexity, and gambling. That will be the ultimate test of the value of these definitions.
Definition The entropy H(X) of a discrete random variable X is defined by
H(X) = -\sum_{x \in \mathcal{X}} p(x) \log p(x).    (2.1)
If the base of the logarithm is b, we denote the entropy as Hb(X). If the base of the logarithm is e, the entropy is measured in nats. Unless otherwise specified, we will take all logarithms to base 2, and hence all the entropies will be measured in bits. Note that entropy is a functional of the distribution of X. It does not depend on the actual values taken by the random variable X, but only on the probabilities.
We denote expectation by E. Thus, if X ~ p(x), the expected value of the random variable g(X) is written
E_p g(X) = \sum_{x \in \mathcal{X}} g(x) p(x).    (2.2)
Remark The entropy of X can also be interpreted as the expected value of the random variable \log \frac{1}{p(X)}, where X is drawn according to probability mass function p(x). Thus,

H(X) = E_p \log \frac{1}{p(X)}.    (2.3)
This definition of entropy is related to the definition of entropy in thermodynamics; some of the connections are explored later. It is possible to derive the definition of entropy axiomatically by defining certain properties that the entropy of a random variable must satisfy. This approach is illustrated in Problem 2.46. We do not use the axiomatic approach to justify the definition of entropy; instead, we show that it arises as the answer to a number of natural questions, such as “What is the average length of the shortest description of the random variable?” First, we derive some immediate consequences of the definition.
Lemma 2.1.1 H(X) ≥ 0.

Proof: 0 ≤ p(x) ≤ 1 implies that \log \frac{1}{p(x)} ≥ 0.
The second property of entropy enables us to change the base of the logarithm in the definition. Entropy can be changed from one base to another by multiplying by the appropriate factor: H_b(X) = (\log_b a) H_a(X), since \log_b p = (\log_b a) \log_a p.
Example 2.1.1 Let

X = 1 with probability p, and X = 0 with probability 1 − p.    (2.4)

Then

H(X) = -p \log p - (1 - p) \log (1 - p),    (2.5)

which we also denote by H(p), the binary entropy function.
FIGURE 2.1. H(p) vs. p.
Example 2.1.2 Let

X = a with probability 1/2, b with probability 1/4, c with probability 1/8, and d with probability 1/8.    (2.6)

The entropy of X is

H(X) = -\frac{1}{2} \log \frac{1}{2} - \frac{1}{4} \log \frac{1}{4} - \frac{1}{8} \log \frac{1}{8} - \frac{1}{8} \log \frac{1}{8} = \frac{7}{4} \text{ bits.}    (2.7)
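A quick check of this computation in Python (the four probabilities are those of the example):

```python
from math import log2

# Distribution of this example: P(a)=1/2, P(b)=1/4, P(c)=P(d)=1/8.
p = {"a": 1/2, "b": 1/4, "c": 1/8, "d": 1/8}

# H(X) = -sum p(x) log2 p(x)
H = -sum(px * log2(px) for px in p.values())
print(H)  # 1.75 bits
```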
We defined the entropy of a single random variable in Section 2.1. We now extend the definition to a pair of random variables. There is nothing really new in this definition because (X, Y) can be considered to be a single vector-valued random variable.
Definition The joint entropy H(X, Y) of a pair of discrete random variables (X, Y) with a joint distribution p(x, y) is defined as
H(X, Y) = -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(x, y),    (2.8)
which can also be expressed as
H(X, Y) = -E \log p(X, Y).    (2.9)
We also define the conditional entropy of a random variable given another as the expected value of the entropies of the conditional distributions, averaged over the conditioning random variable.
Definition If (X, Y) ~ p(x, y), the conditional entropy H(Y |X) is defined as
H(Y|X) = \sum_{x \in \mathcal{X}} p(x) H(Y|X = x)    (2.10)
= -\sum_{x \in \mathcal{X}} p(x) \sum_{y \in \mathcal{Y}} p(y|x) \log p(y|x)    (2.11)
= -\sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log p(y|x)    (2.12)
= -E \log p(Y|X).    (2.13)
The naturalness of the definition of joint entropy and conditional entropy is exhibited by the fact that the entropy of a pair of random variables is the entropy of one plus the conditional entropy of the other. This is proved in the following theorem.
Theorem 2.2.1 (Chain rule)
H(X, Y) = H(X) + H(Y|X).    (2.14)
Proof
H(X, Y) = -\sum_{x} \sum_{y} p(x, y) \log p(x, y)    (2.15)
= -\sum_{x} \sum_{y} p(x, y) \log [p(x) p(y|x)]    (2.16)
= -\sum_{x} \sum_{y} p(x, y) \log p(x) - \sum_{x} \sum_{y} p(x, y) \log p(y|x)    (2.17)
= -\sum_{x} p(x) \log p(x) - \sum_{x} \sum_{y} p(x, y) \log p(y|x)    (2.18)
= H(X) + H(Y|X).    (2.19)
Equivalently, we can write
\log p(X, Y) = \log p(X) + \log p(Y|X)    (2.20)
and take the expectation of both sides of the equation to obtain the theorem.
Corollary
H(X, Y|Z) = H(X|Z) + H(Y|X, Z).    (2.21)
Proof: The proof follows along the same lines as the theorem.
Example 2.2.1 Let (X, Y) have the following joint distribution p(x, y), with x indexing the columns and y the rows:

p(x, y):   x = 1    x = 2    x = 3    x = 4
y = 1:     1/8      1/16     1/32     1/32
y = 2:     1/16     1/8      1/32     1/32
y = 3:     1/16     1/16     1/16     1/16
y = 4:     1/4      0        0        0

The marginal distribution of X is (1/2, 1/4, 1/8, 1/8) and the marginal distribution of Y is (1/4, 1/4, 1/4, 1/4), so H(X) = 7/4 bits and H(Y) = 2 bits. Then

H(X|Y) = \sum_{i=1}^{4} p(Y = i) H(X|Y = i)    (2.22)
= \frac{1}{4} H\left(\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{8}\right) + \frac{1}{4} H\left(\frac{1}{4}, \frac{1}{2}, \frac{1}{8}, \frac{1}{8}\right) + \frac{1}{4} H\left(\frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4}\right) + \frac{1}{4} H(1, 0, 0, 0)    (2.23)
= \frac{1}{4} \cdot \frac{7}{4} + \frac{1}{4} \cdot \frac{7}{4} + \frac{1}{4} \cdot 2 + \frac{1}{4} \cdot 0    (2.24)
= \frac{11}{8} \text{ bits.}    (2.25)

Similarly, H(Y|X) = 13/8 bits and H(X, Y) = 27/8 bits. Note that H(Y|X) ≠ H(X|Y); however, H(X) − H(X|Y) = H(Y) − H(Y|X), a property we exploit later.
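The arithmetic of this example is easy to verify in Python (the joint table in the code is the one assumed for this example):

```python
from math import log2

# Joint distribution p(x, y): rows are y = 1..4, columns are x = 1..4.
joint = [
    [1/8,  1/16, 1/32, 1/32],
    [1/16, 1/8,  1/32, 1/32],
    [1/16, 1/16, 1/16, 1/16],
    [1/4,  0,    0,    0],
]

def H(dist):
    """Entropy in bits of an iterable of probabilities (zeros skipped)."""
    return -sum(p * log2(p) for p in dist if p > 0)

px = [sum(row[i] for row in joint) for i in range(4)]  # marginal of X
py = [sum(row) for row in joint]                       # marginal of Y
Hxy = H(p for row in joint for p in row)               # joint entropy

print(H(px), H(py), Hxy)  # 1.75, 2.0, 3.375
# Chain rule: H(X, Y) = H(X) + H(Y|X) = H(Y) + H(X|Y)
print(Hxy - H(px))  # H(Y|X) = 1.625 = 13/8 bits
print(Hxy - H(py))  # H(X|Y) = 1.375 = 11/8 bits
```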
The entropy of a random variable is a measure of the uncertainty of the random variable; it is a measure of the amount of information required on the average to describe the random variable. In this section we introduce two related concepts: relative entropy and mutual information.
The relative entropy is a measure of the distance between two distributions. In statistics, it arises as an expected logarithm of the likelihood ratio. The relative entropy D(p||q) is a measure of the inefficiency of assuming that the distribution is q when the true distribution is p. For example, if we knew the true distribution p of the random variable, we could construct a code with average description length H(p). If, instead, we used the code for a distribution q, we would need H(p) + D(p||q) bits on the average to describe the random variable.
Definition The relative entropy or Kullback–Leibler distance between two probability mass functions p(x) and q(x) is defined as
D(p||q) = \sum_{x \in \mathcal{X}} p(x) \log \frac{p(x)}{q(x)}    (2.26)
= E_p \log \frac{p(X)}{q(X)}.    (2.27)

In the definition above we use the convention that 0 \log \frac{0}{0} = 0 and the conventions (based on continuity arguments) that 0 \log \frac{0}{q} = 0 and p \log \frac{p}{0} = \infty.
We now introduce mutual information, which is a measure of the amount of information that one random variable contains about another random variable. It is the reduction in the uncertainty of one random variable due to the knowledge of the other.
Definition Consider two random variables X and Y with a joint probability mass function p(x, y) and marginal probability mass functions p(x) and p(y). The mutual information I (X; Y) is the relative entropy between the joint distribution and the product distribution p(x) p(y):
I(X; Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x, y) \log \frac{p(x, y)}{p(x) p(y)}    (2.28)
= D(p(x, y) || p(x) p(y))    (2.29)
= E_{p(x, y)} \log \frac{p(X, Y)}{p(X) p(Y)}.    (2.30)
In Chapter 8 we generalize this definition to continuous random variables, and in (8.54) to general random variables that could be a mixture of discrete and continuous random variables.
Example 2.3.1 Let \mathcal{X} = \{0, 1\} and consider two distributions p and q on \mathcal{X}. Let p(0) = 1 − r, p(1) = r, and let q(0) = 1 − s, q(1) = s. Then

D(p||q) = (1 - r) \log \frac{1 - r}{1 - s} + r \log \frac{r}{s}    (2.31)

and

D(q||p) = (1 - s) \log \frac{1 - s}{1 - r} + s \log \frac{s}{r}.    (2.32)

If r = s, then D(p||q) = D(q||p) = 0. If r = 1/2 and s = 1/4, we can calculate

D(p||q) = \frac{1}{2} \log \frac{1/2}{3/4} + \frac{1}{2} \log \frac{1/2}{1/4} = 1 - \frac{1}{2} \log 3 = 0.2075 \text{ bits,}    (2.33)

whereas

D(q||p) = \frac{3}{4} \log \frac{3/4}{1/2} + \frac{1}{4} \log \frac{1/4}{1/2} = \frac{3}{4} \log 3 - 1 = 0.1887 \text{ bits.}    (2.34)
Note that D(p||q) ≠ D(q||p) in general.
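The asymmetry is easy to see numerically. A minimal Python sketch comparing two Bernoulli distributions (the values r = 1/2 and s = 1/4 are chosen for illustration):

```python
from math import log2

def kl_bernoulli(r, s):
    """D(p||q) in bits for p = Bernoulli(r) and q = Bernoulli(s)."""
    return r * log2(r / s) + (1 - r) * log2((1 - r) / (1 - s))

d_pq = kl_bernoulli(1/2, 1/4)
d_qp = kl_bernoulli(1/4, 1/2)
print(d_pq, d_qp)  # about 0.2075 and 0.1887: not symmetric
```

Swapping the arguments changes the value, so D(p||q) is not a true metric, although it is nonnegative and zero only when p = q.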
We can rewrite the definition of mutual information I (X; Y) as
I(X; Y) = \sum_{x, y} p(x, y) \log \frac{p(x, y)}{p(x) p(y)}    (2.35)
= \sum_{x, y} p(x, y) \log \frac{p(x|y)}{p(x)}    (2.36)
= -\sum_{x, y} p(x, y) \log p(x) + \sum_{x, y} p(x, y) \log p(x|y)    (2.37)
= -\sum_{x} p(x) \log p(x) - \left( -\sum_{x, y} p(x, y) \log p(x|y) \right)    (2.38)
= H(X) - H(X|Y).    (2.39)
Thus, the mutual information I (X; Y) is the reduction in the uncertainty of X due to the knowledge of Y.
By symmetry, it also follows that
I(X; Y) = H(Y) - H(Y|X).    (2.40)
Thus, X says as much about Y as Y says about X.
Since H(X, Y) = H(X) + H(Y|X), we also have

I(X; Y) = H(X) + H(Y) - H(X, Y).    (2.41)
Finally, we note that
I(X; X) = H(X) - H(X|X) = H(X).    (2.42)
Thus, the mutual information of a random variable with itself is the entropy of the random variable. This is the reason that entropy is sometimes referred to as self-information.
Collecting these results, we have the following theorem.
Theorem 2.4.1 (Mutual information and entropy)
I(X; Y) = H(X) - H(X|Y)    (2.43)
I(X; Y) = H(Y) - H(Y|X)    (2.44)
I(X; Y) = H(X) + H(Y) - H(X, Y)    (2.45)
I(X; Y) = I(Y; X)    (2.46)
I(X; X) = H(X)    (2.47)
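These identities can be spot-checked numerically. A Python sketch using a small joint pmf (the particular numbers are invented for illustration; any valid joint distribution would do):

```python
from math import log2

# An arbitrary small joint pmf p(x, y) on {0,1} x {0,1}.
joint = {(0, 0): 1/8, (0, 1): 3/8, (1, 0): 3/8, (1, 1): 1/8}

def H(probs):
    """Entropy in bits of an iterable of probabilities (zeros skipped)."""
    return -sum(p * log2(p) for p in probs if p > 0)

px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

# Mutual information from the definition: I(X;Y) = D(p(x,y) || p(x)p(y)).
I = sum(p * log2(p / (px[x] * py[y])) for (x, y), p in joint.items() if p > 0)

Hx, Hy, Hxy = H(px.values()), H(py.values()), H(joint.values())
print(I)
print(Hx + Hy - Hxy)    # same value: I = H(X) + H(Y) - H(X,Y)
print(Hx - (Hxy - Hy))  # same value: I = H(X) - H(X|Y)
```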
The relationship between H (X), H(Y), H (X, Y), H(X|Y), H(Y|X), and I(X; Y) is expressed in a Venn diagram (Figure 2.2). Notice that the mutual information I (X; Y) corresponds to the intersection of the information in X with the information in Y.
FIGURE 2.2. Relationship between entropy and mutual information.
We now show that the entropy of a collection of random variables is the sum of the conditional entropies.
Theorem 2.5.1 (Chain rule for entropy) Let X_1, X_2, \ldots, X_n be drawn according to p(x_1, x_2, \ldots, x_n). Then

H(X_1, X_2, \ldots, X_n) = \sum_{i=1}^{n} H(X_i | X_{i-1}, \ldots, X_1).    (2.48)
Proof: By repeated application of the two-variable expansion rule for entropies, we have
H(X_1, X_2) = H(X_1) + H(X_2|X_1),    (2.49)
H(X_1, X_2, X_3) = H(X_1) + H(X_2, X_3|X_1)    (2.50)
= H(X_1) + H(X_2|X_1) + H(X_3|X_2, X_1),    (2.51)

and, continuing in this way,

H(X_1, X_2, \ldots, X_n) = H(X_1) + H(X_2|X_1) + \cdots + H(X_n|X_{n-1}, \ldots, X_1)    (2.52)
= \sum_{i=1}^{n} H(X_i | X_{i-1}, \ldots, X_1).    (2.53)
Alternative Proof: We write p(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} p(x_i | x_{i-1}, \ldots, x_1) and evaluate

H(X_1, X_2, \ldots, X_n) = -\sum_{x_1, \ldots, x_n} p(x_1, \ldots, x_n) \log p(x_1, \ldots, x_n)    (2.54)
= -\sum_{x_1, \ldots, x_n} p(x_1, \ldots, x_n) \log \prod_{i=1}^{n} p(x_i | x_{i-1}, \ldots, x_1)    (2.55)
= -\sum_{x_1, \ldots, x_n} \sum_{i=1}^{n} p(x_1, \ldots, x_n) \log p(x_i | x_{i-1}, \ldots, x_1)    (2.56)
= -\sum_{i=1}^{n} \sum_{x_1, \ldots, x_n} p(x_1, \ldots, x_n) \log p(x_i | x_{i-1}, \ldots, x_1)    (2.57)
= -\sum_{i=1}^{n} \sum_{x_1, \ldots, x_i} p(x_1, \ldots, x_i) \log p(x_i | x_{i-1}, \ldots, x_1)    (2.58)
= \sum_{i=1}^{n} H(X_i | X_{i-1}, \ldots, X_1).    (2.59)
We now define the conditional mutual information as the reduction in the uncertainty of X due to knowledge of Y when Z is given.
Definition The conditional mutual information of random variables X and Y given Z is defined by
I(X; Y|Z) = H(X|Z) - H(X|Y, Z)    (2.60)
= E_{p(x, y, z)} \log \frac{p(X, Y|Z)}{p(X|Z) p(Y|Z)}.    (2.61)
Mutual information also satisfies a chain rule.
Theorem 2.5.2 (Chain rule for information)
I(X_1, X_2, \ldots, X_n; Y) = \sum_{i=1}^{n} I(X_i; Y | X_{i-1}, X_{i-2}, \ldots, X_1).    (2.62)
Proof
I(X_1, X_2, \ldots, X_n; Y) = H(X_1, X_2, \ldots, X_n) - H(X_1, X_2, \ldots, X_n | Y)    (2.63)
= \sum_{i=1}^{n} H(X_i | X_{i-1}, \ldots, X_1) - \sum_{i=1}^{n} H(X_i | X_{i-1}, \ldots, X_1, Y) = \sum_{i=1}^{n} I(X_i; Y | X_1, X_2, \ldots, X_{i-1}).    (2.64)
We define a conditional version of the relative entropy.
Definition For joint probability mass functions p(x, y) and q(x, y), the conditional relative entropy D(p(y|x)||q(y|x)) is the average of the relative entropies between the conditional probability mass functions p(y|x) and q(y|x) averaged over the probability mass function p(x). More precisely,
D(p(y|x) || q(y|x)) = \sum_{x} p(x) \sum_{y} p(y|x) \log \frac{p(y|x)}{q(y|x)}    (2.65)
= E_{p(x, y)} \log \frac{p(Y|X)}{q(Y|X)}.    (2.66)
The notation for conditional relative entropy is not explicit since it omits mention of the distribution p(x) of the conditioning random variable. However, it is normally understood from the context.
The relative entropy between two joint distributions on a pair of random variables can be expanded as the sum of a relative entropy and a conditional relative entropy. The chain rule for relative entropy is used in Section 4.4 to prove a version of the second law of thermodynamics.
Theorem 2.5.3 (Chain rule for relative entropy)
D(p(x, y) || q(x, y)) = D(p(x) || q(x)) + D(p(y|x) || q(y|x)).    (2.67)
Proof
D(p(x, y) || q(x, y)) = \sum_{x} \sum_{y} p(x, y) \log \frac{p(x, y)}{q(x, y)}    (2.68)
= \sum_{x} \sum_{y} p(x, y) \log \frac{p(x) p(y|x)}{q(x) q(y|x)}    (2.69)
= \sum_{x} \sum_{y} p(x, y) \log \frac{p(x)}{q(x)} + \sum_{x} \sum_{y} p(x, y) \log \frac{p(y|x)}{q(y|x)}    (2.70)
= D(p(x) || q(x)) + D(p(y|x) || q(y|x)).    (2.71)
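A numerical spot-check of the chain rule for relative entropy, with two invented joint pmfs on a pair of binary variables (the numbers are arbitrary, chosen only to keep all probabilities positive):

```python
from math import log2

# Two hypothetical joint pmfs on {0,1} x {0,1}.
p = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3}
q = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

def marginal(j, x):
    """Marginal probability of the first coordinate being x."""
    return j[(x, 0)] + j[(x, 1)]

# Left side: relative entropy between the joints.
D_joint = sum(p[k] * log2(p[k] / q[k]) for k in p)

# Right side: D(p(x)||q(x)) + D(p(y|x)||q(y|x)).
D_marg = sum(marginal(p, x) * log2(marginal(p, x) / marginal(q, x))
             for x in (0, 1))
D_cond = sum(p[(x, y)] * log2((p[(x, y)] / marginal(p, x))
                              / (q[(x, y)] / marginal(q, x)))
             for x in (0, 1) for y in (0, 1))
print(D_joint, D_marg + D_cond)  # equal, as the chain rule asserts
```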
In this section we prove some simple properties of the quantities defined earlier. We begin with the properties of convex functions.
Definition A function f(x) is said to be convex over an interval (a, b) if for every x_1, x_2 \in (a, b) and 0 ≤ λ ≤ 1,

f(\lambda x_1 + (1 - \lambda) x_2) \le \lambda f(x_1) + (1 - \lambda) f(x_2).    (2.72)

A function f is said to be strictly convex if equality holds only if λ = 0 or λ = 1.
Definition A function f is concave if –f is convex. A function is convex if it always lies below any chord. A function is concave if it always lies above any chord.
Examples of convex functions include x^2, |x|, e^x, x \log x (for x ≥ 0), and so on. Examples of concave functions include \log x and \sqrt{x} for x ≥ 0. Figure 2.3 shows some examples of convex and concave functions. Note that linear functions ax + b are both convex and concave. Convexity underlies many of the basic properties of information-theoretic quantities such as entropy and mutual information. Before we prove some of these properties, we derive some simple results for convex functions.
FIGURE 2.3. Examples of (a) convex and (b) concave functions.
Theorem 2.6.1 If the function f has a second derivative that is nonnegative (positive) over an interval, the function is convex (strictly convex) over that interval.
Proof: We use the Taylor series expansion of the function around x0:
f(x) = f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x^*)}{2} (x - x_0)^2,    (2.73)
where x^* lies between x_0 and x. By hypothesis, f''(x^*) ≥ 0, and thus the last term is nonnegative for all x. Let x_0 = \lambda x_1 + (1 - \lambda) x_2 and take x = x_1, to obtain

f(x_1) \ge f(x_0) + f'(x_0) [(1 - \lambda)(x_1 - x_2)].    (2.74)

Similarly, taking x = x_2, we obtain

f(x_2) \ge f(x_0) + f'(x_0) [\lambda (x_2 - x_1)].    (2.75)
Multiplying (2.74) by λ and (2.75) by 1 – λ and adding, we obtain (2.72). The proof for strict convexity proceeds along the same lines.
Theorem 2.6.1 allows us immediately to verify the strict convexity of x^2, e^x, and x \log x for x ≥ 0, and the strict concavity of \log x and \sqrt{x} for x ≥ 0.
The next inequality is one of the most widely used in mathematics and one that underlies many of the basic results in information theory.
Theorem 2.6.2 (Jensen’s inequality) If f is a convex function and X is a random variable,
E f(X) \ge f(E X).    (2.76)
Proof: We prove this for discrete distributions by induction on the number of mass points. The proof of conditions for equality when f is strictly convex is left to the reader.
For a two-mass-point distribution, the inequality becomes

p_1 f(x_1) + p_2 f(x_2) \ge f(p_1 x_1 + p_2 x_2),    (2.77)

which follows directly from the definition of convex functions. Suppose that the theorem is true for distributions with k − 1 mass points. Then writing p'_i = p_i / (1 - p_k) for i = 1, 2, \ldots, k − 1, we have

\sum_{i=1}^{k} p_i f(x_i) = p_k f(x_k) + (1 - p_k) \sum_{i=1}^{k-1} p'_i f(x_i)    (2.78)
\ge p_k f(x_k) + (1 - p_k) f\left( \sum_{i=1}^{k-1} p'_i x_i \right)    (2.79)
\ge f\left( p_k x_k + (1 - p_k) \sum_{i=1}^{k-1} p'_i x_i \right)    (2.80)
= f\left( \sum_{i=1}^{k} p_i x_i \right),    (2.81)

where the first inequality follows from the induction hypothesis and the second follows from the definition of convexity. The proof can be extended to continuous distributions by continuity arguments.
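Jensen's inequality is easy to check empirically. A short Python sketch that samples a random variable and compares E f(X) with f(EX) for a few convex functions (applied to the empirical distribution of the sample, the inequality holds exactly):

```python
import random
from math import exp

random.seed(0)

# Draw a sample and compute its mean.
xs = [random.uniform(-2, 2) for _ in range(10000)]
mean = sum(xs) / len(xs)

# For each convex f, the sample version of Jensen's inequality
# E f(X) >= f(E X) must hold.
for f in (lambda t: t * t, abs, exp):
    Ef = sum(f(x) for x in xs) / len(xs)
    assert Ef >= f(mean)
    print(f(mean), "<=", Ef)
```

For strictly convex functions such as x^2 and e^x, the gap is strict unless the sample is constant, mirroring the equality condition of the theorem.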
