Basic Information
- Original title: The Design and Analysis of Computer Algorithms
- Original publisher: Addison Wesley/Pearson

Editorial Review
This book is a classic work that presents the unifying principles and fundamental concepts of computer algorithm design.
About the Authors
John E. Hopcroft received his Ph.D. from Stanford University. He is a professor of computer science at Cornell University and a member of the U.S. National Academy of Engineering, and has served as a consultant to Bell Laboratories.
Jeffrey D. Ullman received his Ph.D. from Princeton University. He is a professor of electrical engineering at Stanford University and a member of the U.S. National Academy of Engineering.
Table of Contents
1 Models of Computation
1.1 Algorithms and their complexity
1.2 Random access machines
1.3 Computational complexity of RAM programs
1.4 A stored program model
1.5 Abstractions of the RAM
1.6 A primitive model of computation: the Turing machine
1.7 Relationship between the Turing machine and RAM models
1.8 Pidgin ALGOL: a high-level language
2 Design of Efficient Algorithms
2.1 Data structures: lists, queues, and stacks
2.2 Set representations
2.3 Graphs
2.4 Trees
2.5 Recursion
2.6 Divide-and-conquer
2.7 Balancing
2.8 Dynamic programming
2.9 Epilogue
3 Sorting and Order Statistics
Preface
THE SCOPE OF THE BOOK
To analyze the performance of an algorithm some model of a computer is necessary. Our book begins by formulating several computer models which are simple enough to establish analytical results but which at the same time accurately reflect the salient features of real machines. These models include the random access register machine, the random access stored program machine, and some specialized variants of these. The Turing machine is introduced in order to prove the exponential lower bounds on efficiency in Chapters 10 and 11. Since the trend in program design is away from machine language, a high-level language called Pidgin ALGOL is introduced as the main vehicle for describing algorithms. The complexity of a Pidgin ALGOL program is related to the machine models.
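The fetch-execute behavior of a random access machine can be sketched in a few lines. The following is a minimal illustration, not the book's formal model: the instruction names and operand conventions here are assumptions chosen for brevity, with register 0 serving as an accumulator.

```python
def run_ram(program, inp):
    """Interpret a tiny RAM program; r[0] acts as the accumulator.
    Hypothetical instruction set: CONST, LOAD, STORE, ADD, READ, JGTZ."""
    r = {}           # registers, all initially holding 0
    inp = list(inp)  # input tape, consumed left to right
    pc = 0           # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "CONST":     r[0] = arg                    # load a literal
        elif op == "LOAD":    r[0] = r.get(arg, 0)          # r0 <- r[arg]
        elif op == "STORE":   r[arg] = r.get(0, 0)          # r[arg] <- r0
        elif op == "ADD":     r[0] = r.get(0, 0) + r.get(arg, 0)
        elif op == "READ":    r[arg] = inp.pop(0)           # read input tape
        elif op == "JGTZ":                                  # jump if r0 > 0
            if r.get(0, 0) > 0:
                pc = arg
                continue
        pc += 1
    return r.get(0, 0)

# Sum two input numbers: read them into r1 and r2, then add.
prog = [("READ", 1), ("READ", 2), ("LOAD", 1), ("ADD", 2)]
```

Under the uniform cost criterion, charging one unit per executed instruction makes the running time of such a program exactly the number of loop iterations, which is the kind of cost accounting the RAM model supports.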
The second chapter introduces basic data structures and programming techniques often used in efficient algorithms. It covers the use of lists, pushdown stores, queues, trees, and graphs. Detailed explanations of recursion, divide-and-conquer, and dynamic programming are given, along with examples of their use.
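Two of the techniques named above can be sketched briefly. These are standard textbook illustrations in Python rather than the book's Pidgin ALGOL: mergesort for divide-and-conquer, and memoized Fibonacci for the solve-each-subproblem-once idea behind dynamic programming.

```python
from functools import lru_cache

def merge_sort(a):
    """Divide-and-conquer: split, recurse on halves, merge in linear time."""
    if len(a) <= 1:                    # base case of the recursion
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge step
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]         # append the leftover run

@lru_cache(maxsize=None)
def fib(n):
    """Dynamic programming via memoization: each subproblem solved once,
    turning an exponential recursion into a linear number of calls."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

The merge step does O(n) work per level over O(log n) levels, giving the familiar O(n log n) bound that the recurrence-solving techniques of Chapter 2 establish.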
Chapters 3 to 9 provide a sampling of the diverse areas to which the fundamental techniques of Chapter 2 can be applied. Our emphasis in these chapters is on developing algorithms that are asymptotically the most efficient known. Because of this emphasis, some of the algorithms presented are suitable only for inputs whose size is much larger than what is currently encountered in practice. This is particularly true of some of the matrix multiplication algorithms in Chapter 6, the Schönhage-Strassen integer-multiplication algorithm of Chapter 7, and some of the polynomial and integer algorithms of Chapter 8.
On the other hand, most of the sorting algorithms of Chapter 3, the searching algorithms of Chapter 4, the graph algorithms of Chapter 5, the fast Fourier transform of Chapter 7, and the string-matching algorithms of Chapter 9 are widely used, since the sizes of inputs for which these algorithms are efficient are sufficiently small to be encountered in many practical situations.
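As a taste of the practical string-matching material, here is a sketch of the Knuth-Morris-Pratt algorithm, one of the linear-time pattern matchers of the kind Chapter 9 develops (this Python rendering is illustrative, not the book's presentation):

```python
def kmp_search(text, pat):
    """Return the index of the first occurrence of pat in text, or -1.
    Runs in O(len(text) + len(pat)) time."""
    if not pat:
        return 0
    # Failure function: fail[i] = length of the longest proper border
    # (prefix that is also a suffix) of pat[:i+1].
    fail = [0] * len(pat)
    k = 0
    for i in range(1, len(pat)):
        while k and pat[i] != pat[k]:
            k = fail[k - 1]            # fall back along borders
        if pat[i] == pat[k]:
            k += 1
        fail[i] = k
    # Scan the text, never moving backward in it.
    k = 0
    for i, c in enumerate(text):
        while k and c != pat[k]:
            k = fail[k - 1]
        if c == pat[k]:
            k += 1
        if k == len(pat):
            return i - k + 1           # full match ends at position i
    return -1
```

The failure function is what lets the scan avoid re-examining text characters, which is precisely why the total work is linear rather than the O(nm) of the naive method.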
Chapters 10 through 12 discuss lower bounds on computational complexity. The inherent computational difficulty of a problem is of universal interest, both to program design and to an understanding of the nature of computation. In Chapter 10 an important class of problems, the NP-complete problems, is studied. All problems in this class are equivalent in computational difficulty, in that if one problem in the class has an efficient (polynomial time-bounded) solution, then all problems in the class have efficient solutions. Since this class of problems contains many practically important and well-studied problems, such as the integer-programming problem and the traveling salesman problem, there is good reason to suspect that no problem in this class can be solved efficiently. Thus, if a program designer knows that the problem for which he is trying to find an efficient algorithm is in this class, then he may very well be content to try heuristic approaches to the problem. In spite of the overwhelming empirical evidence to the contrary, it is still an open question whether NP-complete problems admit of efficient solutions.
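The cost of exact solutions to such problems can be made concrete with a brute-force traveling-salesman solver. This sketch (the distance matrix is made up for illustration) examines all (n-1)! tours, which is exactly the exponential blowup that pushes designers toward heuristics:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive search for the cheapest tour starting and ending at
    city 0. Examines (n-1)! permutations, so it is hopeless beyond
    small n; no known algorithm avoids super-polynomial worst-case
    cost unless P = NP."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, cost)
    return best

# 4-city example with symmetric, invented distances.
d = [[0, 1, 4, 2],
     [1, 0, 3, 5],
     [4, 3, 0, 1],
     [2, 5, 1, 0]]
```

Doubling n here does not double the running time; it multiplies it by a factor that itself grows with n, which is the practical meaning of the intractability studied in Chapter 10.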
In Chapter 11 certain problems are defined for which we can actually prove that no efficient algorithms exist. The material in Chapters 10 and 11 draws heavily on the concept of Turing machines introduced in Sections 1.6 and 1.7.
In the final chapter of the book we relate concepts of computational difficulty to notions of linear independence in vector spaces. The material in this chapter provides techniques for proving lower bounds for much simpler problems than those considered in Chapters 10 and 11.
THE USE OF THE BOOK
This book is intended as a first course in the design and analysis of algorithms. The emphasis is on ideas and ease of understanding rather than implementation details or programming tricks. Informal, intuitive explanations are often used in place of long tedious proofs. The book is self-contained and assumes no specific background in mathematics or programming languages. However, a certain amount of maturity in being able to handle mathematical concepts is desirable, as is some exposure to a higher-level programming language such as FORTRAN or ALGOL. Some knowledge of linear algebra is needed for a full understanding of Chapters 6, 7, 8, and 12.
This book has been used in graduate and undergraduate courses in algorithm design. In a one-semester course most of Chapters 1-5 and 9-10 were covered, along with a smattering of topics from the remaining chapters. In introductory courses the emphasis was on material from Chapters 1-5, but Sections 1.6, 1.7, 4.13, 5.11, and Theorem 4.5 were generally not covered. The book can also be used in more advanced courses emphasizing the theory of algorithms. Chapters 6-12 could serve as the foundation for such a course.
Numerous exercises have been provided at the end of each chapter to provide an instructor with a wide range of homework problems. The exercises are graded according to difficulty. Exercises with no stars are suitable for introductory courses, singly starred exercises for more advanced courses, and doubly starred exercises for advanced graduate courses. The bibliographic notes at the end of every chapter attempt to point to a published source for each of the algorithms and results contained in the text and the exercises.
ACKNOWLEDGMENTS
The material in this book has been derived from class notes for courses taught by the authors at Cornell, Princeton, and Stevens Institute of Technology. The authors would like to thank the many people who have critically read various portions of the manuscript and offered many helpful suggestions. In particular we would like to thank Kellogg Booth, Stan Brown, Steve Chen, Allen Cypher, Arch Davis, Mike Fischer, Hania Gajewska, Mike Garey, Udai Gupta, Mike Harrison, Matt Hecht, Harry Hunt, Dave Johnson, Marc Kaplan, Don Johnson, Steve Johnson, Brian Kernighan, Don Knuth, Richard Ladner, Anita LaSalle, Doug McIlroy, Albert Meyer, Christos Papadimitriou, Bill Plauger, John Savage, Howard Siegel, Ken Steiglitz, Larry Stockmeyer, Tom Szymanski, and Theodore Yen.
Special thanks go to Gemma Carnevale, Pauline Cameron, Hannah Kresse, Edith Purser, and Ruth Suzuki for their cheerful and careful typing of the manuscript.
The authors are also grateful to Bell Laboratories and Cornell, Princeton, and the University of California at Berkeley for providing facilities for the preparation of the manuscript.
June 1974
A. V. A.
J. E. H.
J. D. U.