Basic Information
- Original title: Elements of Information Theory
- Original publisher: John Wiley & Sons, Inc.
- Authors: Thomas M. Cover, Joy A. Thomas (USA)
- Series: 国际知名大学原版教材 (Original Textbooks from Internationally Renowned Universities)
- Publisher: Tsinghua University Press
- ISBN: 730207285X
- Listed: 2003-12-05
- Publication date: November 2003
- Format: 16开 (16mo)
- Pages: 545
- Edition/printing: 1-1
- Category: Mathematics > Cybernetics, Information Theory; Textbooks > Graduate/Undergraduate/Vocational Textbooks > Science > Mathematics
Description
Features of this book:
This book systematically introduces the basic principles of information theory and their applications in communication theory, statistics, computer science, probability theory, and investment theory. Proceeding step by step, the authors present the basic definitions of information measures, relative entropy, and mutual information, and show how these quantities arise naturally in solving problems of data compression, channel capacity, rate distortion, statistical hypothesis testing, and network information flow. In addition, the book explores many topics rarely covered in other textbooks, such as:
·The connection between the second law of thermodynamics and Markov chains
·The optimality of Huffman coding
·The duality of data compression
·Lempel-Ziv coding
·Kolmogorov complexity
·Portfolio theory
·Inequalities in information theory and their mathematical consequences
This book can serve as a textbook or reference for senior undergraduate and graduate students majoring in communications, electronics, computer science, automatic control, statistics, economics, and related disciplines, and as a reference for researchers and engineers working in these areas.
Table of Contents
1 Introduction and Preview
1.1 Preview of the book
2 Entropy, Relative Entropy and Mutual Information
2.1 Entropy
2.2 Joint entropy and conditional entropy
2.3 Relative entropy and mutual information
2.4 Relationship between entropy and mutual information
2.5 Chain rules for entropy, relative entropy and mutual information
2.6 Jensen's inequality and its consequences
2.7 The log sum inequality and its applications
2.8 Data processing inequality
2.9 The second law of thermodynamics
2.10 Sufficient statistics
2.11 Fano's inequality
Summary of Chapter 2
Problems for Chapter 2
Historical notes
Preface
This book has arisen from over ten years of lectures in a two-quarter sequence of a senior and first-year graduate level course in information theory, and is intended as an introduction to information theory for students of communication theory, computer science and statistics.
There are two points to be made about the simplicities inherent in information theory. First, certain quantities like entropy and mutual information arise as the answers to fundamental questions. For example, entropy is the minimum descriptive complexity of a random variable, and mutual information is the communication rate in the presence of noise. Also, as we shall point out, mutual information corresponds to the increase in the doubling rate of wealth given side information. Second, the answers to information theoretic questions have a natural algebraic structure. For example, there is a chain rule for entropies, and entropy and mutual information are related. Thus the answers to problems in data compression and communication admit extensive interpretation. We all know the feeling that follows when one investigates a problem, goes through a large amount of algebra and finally investigates the answer to find that the entire problem is illuminated, not by the analysis, but by the inspection of the answer. Perhaps the outstanding examples of this in physics are Newton's laws and Schrödinger's wave equation. Who could have foreseen the awesome philosophical interpretations of Schrödinger's wave equation?
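For concreteness, the algebraic structure alluded to here can be sketched in standard notation (all logarithms base 2):

    H(X) = -\sum_x p(x) \log p(x)                                  (entropy)
    I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X)                         (mutual information)
    H(X,Y) = H(X) + H(Y|X)                                         (chain rule)
    H(X_1,\ldots,X_n) = \sum_{i=1}^{n} H(X_i \mid X_{i-1},\ldots,X_1)   (general chain rule)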
In the text we often investigate properties of the answer before we look at the question. For example, in Chapter 2, we define entropy, relative entropy and mutual information and study the relationships and a few interpretations of them, showing how the answers fit together in various ways. Along the way we speculate on the meaning of the second law of thermodynamics. Does entropy always increase? The answer is yes and no. This is the sort of result that should please experts in the area but might be overlooked as standard by the novice.
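In the notation of Chapter 2, the quantity behind this "yes and no" is relative entropy: for a Markov chain, the relative entropy between any two state distributions evolving under the same transition law never increases, while the entropy of the state distribution is guaranteed to increase only when the stationary distribution is uniform. A sketch:

    D(p\|q) = \sum_x p(x) \log \frac{p(x)}{q(x)}                   (relative entropy)
    D(\mu_n \| \mu'_n) \ge D(\mu_{n+1} \| \mu'_{n+1})              (monotone decrease under a common Markov kernel)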
In fact, that brings up a point that often occurs in teaching. It is fun to find new proofs or slightly new results that no one else knows. When one presents these ideas along with the established material in class, the response is "sure, sure, sure." But the excitement of teaching the material is greatly enhanced. Thus we have derived great pleasure from investigating a number of new ideas in this textbook.
Examples of some of the new material in this text include the chapter on the relationship of information theory to gambling, the work on the universality of the second law of thermodynamics in the context of Markov chains, the joint typicality proofs of the channel capacity theorem, the competitive optimality of Huffman codes and the proof of Burg's theorem on maximum entropy spectral density estimation. Also the chapter on Kolmogorov complexity has no counterpart in other information theory texts. We have also taken delight in relating Fisher information, mutual information, and the Brunn-Minkowski and entropy power inequalities. To our surprise, many of the classical results on determinant inequalities are most easily proved using information theory.
Even though the field of information theory has grown considerably since Shannon's original paper, we have strived to emphasize its coherence. While it is clear that Shannon was motivated by problems in communication theory when he developed information theory, we treat information theory as a field of its own with applications to communication theory and statistics.
We were drawn to the field of information theory from backgrounds in communication theory, probability theory and statistics, because of the apparent impossibility of capturing the intangible concept of information.
Since most of the results in the book are given as theorems and proofs, we expect the elegance of the results to speak for themselves. In many cases we actually describe the properties of the solutions before introducing the problems. Again, the properties are interesting in themselves and provide a natural rhythm for the proofs that follow.
One innovation in the presentation is our use of long chains of inequalities, with no intervening text, followed immediately by the explanations. By the time the reader comes to many of these proofs, we expect that he or she will be able to follow most of these steps without any explanation and will be able to pick out the needed explanations. These chains of inequalities serve as pop quizzes in which the reader can be reassured of having the knowledge needed to prove some important theorems. The natural flow of these proofs is so compelling that it prompted us to flout one of the cardinal rules of technical writing. And the absence of verbiage makes the logical necessity of the ideas evident and the key ideas perspicuous. We hope that by the end of the book the reader will share our appreciation of the elegance, simplicity and naturalness of information theory.
Throughout the book we use the method of weakly typical sequences, which has its origins in Shannon's original 1948 work but was formally developed in the early 1970s. The key idea here is the so-called asymptotic equipartition property, which can be roughly paraphrased as "Almost everything is almost equally probable."
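In symbols, for an i.i.d. source X_1, X_2, ... with entropy H(X), the asymptotic equipartition property says that

    -\frac{1}{n} \log p(X_1, X_2, \ldots, X_n) \to H(X)   in probability,

so that roughly 2^{nH(X)} "typical" sequences carry almost all the probability, each with probability close to 2^{-nH(X)}.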
Chapter 2, which is the true first chapter of the subject, includes the basic algebraic relationships of entropy, relative entropy and mutual information as well as a discussion of the second law of thermodynamics and sufficient statistics. The asymptotic equipartition property (AEP) is given central prominence in Chapter 3. This leads us to discuss the entropy rates of stochastic processes and data compression in Chapters 4 and 5. A gambling sojourn is taken in Chapter 6, where the duality of data compression and the growth rate of wealth is developed.
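One way to state that duality, under the simplifying assumption of a horse race with m horses, win probabilities p and fair m-for-1 odds, is that the optimal doubling rate W* and the entropy H(p) are complementary:

    W^*(p) + H(p) = \log m

so every bit by which the entropy of the race falls below log m is a bit gained in the exponential growth rate of wealth.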
The fundamental idea of Kolmogorov complexity as an intellectual foundation for information theory is explored in Chapter 7. Here we replace the goal of finding a description that is good on the average with the goal of finding the universally shortest description. There is indeed a universal notion of the descriptive complexity of an object. Here also the wonderful number Ω is investigated. This number, which is the binary expansion of the probability that a Turing machine will halt, reveals many of the secrets of mathematics.
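In one common formulation, with U a universal prefix-free computer and \ell(p) the length in bits of a program p, the descriptive complexity of a string x and the halting probability read

    K(x) = \min \{ \ell(p) : U(p) = x \}                           (length of the shortest program that prints x)
    \Omega = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-\ell(p)}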
The channel capacity theorem, the fundamental theorem of information theory, is established in Chapter 8. The necessary material on differential entropy is developed in Chapter 9, laying the groundwork for the extension of previous capacity theorems to continuous noise channels. The capacity of the fundamental Gaussian channel is investigated in Chapter 10.
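For reference, the two capacity formulas this part of the book builds toward can be sketched as follows: for a discrete memoryless channel with transition law p(y|x), and for the Gaussian channel with power constraint P and noise variance N,

    C = \max_{p(x)} I(X;Y)
    C = \frac{1}{2} \log \left( 1 + \frac{P}{N} \right)            (Gaussian channel; bits per transmission with base-2 logs)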
The relationship between information theory and statistics, first studied by Kullback in the early 1950s, and relatively neglected since, is developed in Chapter 12. Rate distortion theory requires a little more background than its noiseless data compression counterpart, which accounts for its placement as late as Chapter 13 in the text.
The huge subject of network information theory, which is the study of the simultaneously achievable flows of information in the presence of noise and interference, is developed in Chapter 14. Many new ideas come into play in network information theory. The primary new ingredients are interference and feedback. Chapter 15 considers the stock market, which is the generalization of the gambling processes considered in Chapter 6, and shows again the close correspondence of information theory and gambling.
Chapter 16, on inequalities in information theory, gives us a chance to recapitulate the interesting inequalities strewn throughout the book, put them in a new framework and then add some interesting new inequalities on the entropy rates of randomly drawn subsets. The beautiful relationship of the Brunn-Minkowski inequality for volumes of set sums, the entropy power inequality for the effective variance of the sum of independent random variables and the Fisher information inequalities are made explicit here.
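The three inequalities mentioned here can be sketched in one common form: for independent random vectors X and Y in R^n (with h the differential entropy in nats and, in the last line, X and Y real-valued with Fisher information J), and for sets A and B in R^n with |·| denoting volume,

    e^{2h(X+Y)/n} \ge e^{2h(X)/n} + e^{2h(Y)/n}                    (entropy power inequality)
    |A + B|^{1/n} \ge |A|^{1/n} + |B|^{1/n}                        (Brunn-Minkowski inequality)
    \frac{1}{J(X+Y)} \ge \frac{1}{J(X)} + \frac{1}{J(Y)}           (Fisher information inequality)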