- Original title: Algebraic Complexity Theory
- Original publisher: Springer
1.2 Open Problems
Part I. Fundamental Algorithms
Chapter 2. Efficient Polynomial Arithmetic
2.1 Multiplication of Polynomials I
2.2* Multiplication of Polynomials II
2.3* Multiplication of Several Polynomials
2.4 Multiplication and Inversion of Power Series
2.5* Composition of Power Series
2.7 Open Problems
Chapter 3. Efficient Algorithms with Branching
3.1 Polynomial Greatest Common Divisors
3.2* Local Analysis of the Knuth-Schönhage Algorithm
3.3 Evaluation and Interpolation
3.4* Fast Point Location in Arrangements of Hyperplanes
3.5* Vapnik-Chervonenkis Dimension and Epsilon-Nets
In the 1930s, a number of quite different concepts for this purpose were proposed, such as Turing machines, WHILE-programs, recursive functions, Markov algorithms, and Thue systems. All these concepts turned out to be equivalent, a fact summarized in Church's thesis, which says that the resulting definitions form an adequate formalization of the intuitive notion of computability. This had and continues to have an enormous effect. First of all, with these notions it has been possible to prove that various problems are algorithmically unsolvable. Among these undecidable problems are the halting problem, the word problem of group theory, the Post correspondence problem, and Hilbert's tenth problem. Secondly, concepts like Turing machines and WHILE-programs had a strong influence on the development of the first computers and programming languages.
In the era of digital computers, the question of finding efficient solutions to algorithmically solvable problems has become increasingly important. In addition, the fact that some problems can be solved very efficiently, while others seem to defy all attempts to find an efficient solution, has called for a deeper understanding of the intrinsic computational difficulty of problems. This has resulted in the development of complexity theory. Complexity theory has since become a very diversified area of research. Each branch uses specific models of computation, like Turing machines, random access machines, Boolean circuits, straight-line programs, computation trees, or VLSI-models. Every computation in such a model induces costs, such as the number of computation steps, the amount of memory required, the number of gates of a circuit, the number of instructions, or the chip area. Accordingly, studies in computational complexity are generally based on some model of computation together with a complexity measure. For an overview, we refer the interested reader to the Handbook of Theoretical Computer Science, which contains several surveys of various branches of complexity theory.
In this book we focus on Algebraic Complexity Theory, the study of the intrinsic algorithmic difficulty of algebraic problems within an algebraic model of computation. Motivated by questions of numerical and symbolic computation, this branch of research originated in 1954 when Ostrowski inquired about the optimality of Horner's rule. Algebraic complexity theory grew rapidly and has since become a well-established area of research. (See the surveys of von zur Gathen, Grigoriev, Heintz, Schönhage, and Strassen [506, 510].) However, with the exception of the now classic monograph by Borodin and Munro, published in 1975, a systematic treatment of this theory is not available.
This book is intended to be a comprehensive text which presents both traditional material and recent research in algebraic complexity theory in a coherent way. Requiring only some basic algebra and offering over 350 exercises, it should be well-suited as a textbook for beginners at the graduate level. With its extensive bibliographic notes covering nearly 600 research papers, it might also serve as a reference book.
The text provides a uniform treatment of algebraic complexity theory on the basis of the straight-line program and the computation tree models, with special emphasis on lower complexity bounds. This also means that this is not a book on Computer Algebra, whose main theme is the design and implementation of efficient algorithms for algebraic problems.
Nonetheless, our book contains numerous algorithms, typically those that are essentially optimal within the specified computation model. Our main goal is to develop methods for proving the optimality of such algorithms.
To emphasize the logical development of the subject, we have divided the book into five parts, with 21 chapters in total. The first chapter consists of an informal introduction to algebraic complexity theory.
The next two chapters form PART I: FUNDAMENTAL ALGORITHMS. Chapter 2 is concerned with efficient algorithms for the symbolic manipulation of polynomials and power series, such as the Schönhage-Strassen algorithm for polynomial multiplication, the Sieveking-Kung algorithm for the inversion of power series, or the Brent-Kung algorithm for the composition of power series. It is followed by a chapter in which the emphasis lies on efficient algorithms within the branching model. In particular, we present the fast Knuth-Schönhage algorithm for computing the greatest common divisor (GCD) of univariate polynomials. This algorithm combined with Huffman coding then yields efficient solutions of algorithmic problems associated with Chinese remaindering. Furthermore, the VC-dimension and the theory of epsilon-nets are used to show that certain NP-complete problems, like the knapsack or the traveling salesman problem, may be solved by "nonuniform polynomial time algorithms" in the computation tree model over the reals. This surprising and important result, due to Meyer auf der Heide, demonstrates that it is not possible to prove exponential lower bounds for the above problems in the model of computation trees. Moreover, it stresses the role of uniformity in the definition of the language class NP and, at the same time, puts emphasis on the quality of several lower bounds derived later in Chapter 11.
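To give a flavor of the divide-and-conquer ideas behind fast polynomial multiplication, here is a sketch (our illustration, not code from the book) of Karatsuba's method, a simpler sub-quadratic precursor of the Schönhage-Strassen algorithm: it replaces the four half-size products of the schoolbook scheme by three, giving O(n^1.59) coefficient operations. The function names `poly_mul`, `add`, and `sub` are ours.

```python
def poly_mul(f, g):
    # Karatsuba multiplication of polynomials given as coefficient
    # lists, constant term first.  Three recursive half-size products
    # replace the four of the schoolbook method.
    if len(f) < 2 or len(g) < 2:
        out = [0] * (len(f) + len(g) - 1)
        for i, a in enumerate(f):
            for j, b in enumerate(g):
                out[i + j] += a * b
        return out
    m = min(len(f), len(g)) // 2
    f0, f1 = f[:m], f[m:]              # f = f0 + x^m * f1
    g0, g1 = g[:m], g[m:]
    low = poly_mul(f0, g0)             # f0 * g0
    high = poly_mul(f1, g1)            # f1 * g1
    mid = poly_mul(add(f0, f1), add(g0, g1))
    mid = sub(sub(mid, low), high)     # = f0*g1 + f1*g0
    out = [0] * (len(f) + len(g) - 1)
    for i, c in enumerate(low):
        out[i] += c
    for i, c in enumerate(mid):
        out[i + m] += c
    for i, c in enumerate(high):
        out[i + 2 * m] += c
    return out

def add(f, g):  # coefficient-wise sum, padding the shorter list
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) + (g[i] if i < len(g) else 0)
            for i in range(n)]

def sub(f, g):  # coefficient-wise difference
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0)
            for i in range(n)]

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
print(poly_mul([1, 2], [3, 4]))  # → [3, 10, 8]
```

The Schönhage-Strassen algorithm of Chapter 2 pushes the same recursive idea much further, down to O(n log n log log n) operations.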
While the first three chapters rely on the reader's intuitive notion of algorithm, the remaining parts of the book, directed towards lower bounds, call for an exact specification of computation models and complexity measures.
Therefore, in PART II: ELEMENTARY LOWER BOUNDS (Chapters 4-7), we first introduce the models of straight-line programs and computation trees, which we use throughout the rest of the book. We then describe several elementary lower bound techniques. Chapter 5 contains transcendence degree arguments, including results of Motzkin and Belaga as well as the Baur-Rabin theorem. Chapter 6 discusses a unified approach to Pan's substitution method and its extensions. The methods of Chapters 5 and 6 yield lower bounds which are at most linear in the number of input variables. Nonetheless, the methods are strong enough to show the optimality of some basic algorithms, the most prominent being Horner's rule. In Chapter 7 we introduce two fundamental program transformation techniques. The first is Strassen's technique of "avoiding divisions." The second is a method for transforming a program for the computation of a multivariate rational function into one which computes the given function and all its first-order partial derivatives. The results of Chapter 7 are of importance in Chapters 8, 14, and 16.
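Horner's rule itself is quickly stated: a degree-n polynomial is evaluated with exactly n multiplications and n additions by nesting, a_n x^n + ... + a_0 = ((a_n x + a_{n-1}) x + ...) x + a_0. A minimal sketch (the function name `horner` is ours):

```python
def horner(coeffs, x):
    # Evaluate a_n x^n + ... + a_1 x + a_0 at x, coefficients given
    # from the leading term down: exactly n multiplications and
    # n additions for a degree-n polynomial.
    result = 0
    for a in coeffs:
        result = result * x + a
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3:
print(horner([2, -6, 2, -1], 3))  # → 5
```

It is the optimality of this scheme, with respect to the number of arithmetic operations, that Ostrowski's 1954 question raised and that the methods of Chapters 5 and 6 settle.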
PART III: HIGH DEGREE (Chapters 8-12) shows that concepts from algebraic geometry and algebraic topology, like the degree or Betti numbers, can be applied to prove nonlinear lower complexity bounds. Chapter 8 studies Strassen's degree bound, one of the central tools for obtaining almost sharp lower complexity bounds for a number of problems of high degree, like the computation of the coefficients of a univariate polynomial from its roots. Chapter 9 is devoted to the investigation of specific polynomials that are hard to compute. It may be considered as a counterpart to Chapters 5 and 6, where we study generic polynomials. In Chapter 10 the degree bound is adapted to the computation tree model. With this tool it turns out that the Knuth-Schönhage algorithm is essentially optimal for computing the Euclidean representation. In Chapter 11 Ben-Or's lower complexity bound for semi-algebraic membership problems is deduced from the Milnor-Thom bound. This is applied to several problems of computational geometry. In Chapter 12 the Grigoriev-Risler lower bound for the additive complexity of univariate real polynomials is derived from Khovanskii's theorem on the number of real roots of sparse systems of polynomial equations.
PART IV: LOW DEGREE (Chapters 13-20) is concerned with the problem of computing a finite set of multivariate polynomials of degree at most two. In Chapter 13 we discuss upper and lower complexity bounds for computing a finite set of linear polynomials, which is simply the task of multiplying a generic input vector by a specific matrix. This problem is of great practical interest, as the notable examples of the discrete Fourier transform (DFT), Toeplitz, Hankel and Vandermonde matrices indicate.
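The DFT is the standard example of a structured matrix whose product with a vector can be computed far below the generic O(n^2) bound: the radix-2 Cooley-Tukey recursion uses O(n log n) arithmetic operations. A sketch under the assumption that n is a power of two (our illustration; Chapter 13 treats such linear complexity questions abstractly):

```python
import cmath

def fft(a):
    # Radix-2 Cooley-Tukey FFT: applies the n x n DFT matrix to the
    # vector a with O(n log n) operations; n must be a power of two.
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# The DFT of the all-ones vector concentrates in the 0th component:
# fft([1, 1, 1, 1]) ≈ [4, 0, 0, 0]
```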
The theory of bilinear complexity is concerned with the problem of computing a finite set of bilinear polynomials. Chapters 14-20 contain a thorough treatment of this theory and can be regarded as a book within a book. Chapter 14 introduces the framework of bilinear complexity theory and is meant as a prerequisite for Chapters 15-20. The language introduced in Chapter 14 allows a concise discussion of the matrix multiplication methods in Chapter 15, such as Strassen's original algorithm and the notion of rank, Bini-Capovani-Lotti-Romani's concept of border rank, Schönhage's τ-theorem, as well as Strassen's laser method, and its tricky extension by Coppersmith and Winograd. Chapter 16 shows that several problems in computational linear algebra are about as hard as matrix multiplication, thereby emphasizing the key role of the matrix multiplication problem. Chapter 17 discusses Lafon and Winograd's lower bound for the complexity of matrix multiplication, and its generalization by Alder and Strassen. Moreover, in Chapter 18 we study a relationship, observed by Brockett and Dobkin, between the complexity of bilinear maps over finite fields and a well-known problem of coding theory. Partial solutions to the latter lead to interesting lower bounds, some of which are not known to be valid over infinite fields. This chapter also discusses the Chudnovsky-Chudnovsky interpolation algorithm on algebraic curves which yields a linear upper complexity bound for the multiplication in finite fields.
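The starting point of this theory is Strassen's observation that two 2×2 matrices can be multiplied with 7 essential multiplications instead of the classical 8; applied recursively to block matrices, this yields an O(n^{log₂ 7}) matrix multiplication algorithm. A sketch of the base step (our illustration; the function name `strassen_2x2` is ours):

```python
def strassen_2x2(A, B):
    # Multiply two 2x2 matrices with 7 multiplications m1..m7
    # instead of the classical 8 (Strassen, 1969).
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# → [[19, 22], [43, 50]]
```

Since the seven products use no commutativity of the entries, the scheme applies verbatim when the entries are themselves matrix blocks, which is what drives the recursion.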
The bilinear complexity or rank of bilinear problems can be reformulated in terms of tensors, resulting in a generalization of the usual matrix rank. In Chapter 19 tensorial rank is investigated for special classes of tensors, while Chapter 20 is devoted to the study of the rank of "generic" tensors. In the language of algebraic geometry this problem is closely related to computing the dimension of higher secant varieties to Segre varieties.
PART V: COMPLETE PROBLEMS (Chapter 21) presents Valiant's nonuniform algebraic analogue of the P versus NP problem. It builds a bridge both to the theory of NP- and #P-completeness as well as to that part of algebraic complexity theory which is based on the parallel computation model.
A number of topics are not covered in this book; this is due to limitations of time and space, the lack of reasonable lower complexity bounds, as well as the fact that certain problems do not fit into the straight-line program or computation tree model. More specifically, our book treats neither computational number theory nor computational group and representation theory (cf. Cohen [117], Lenstra and Lenstra, Sims, Atkinson (ed.), Lux and Pahlings, Finkelstein and Kantor (eds.)). Also, we have not included a discussion of topics in computational commutative algebra like factorization and Gröbner bases, nor do we speak about the complexity of first-order algebraic theories (cf. Becker and Weispfenning, Fitchas et al., Heintz et al., and Kaltofen [284, 286]). We have also omitted a treatment of parallel and randomized algorithms (cf. von zur Gathen, Ja'Ja). However, many of these topics have already been discussed in other books or surveys, as the given references indicate.
Clearly, much is left to be done. We hope that our book will serve as a foundation for advanced research and as a starting point for further monographs on algebraic complexity theory.
Zürich, Bonn, and Berkeley P. Bürgisser · M. Clausen · M.A. Shokrollahi