### Basic Information

- Original title: Algebraic Complexity Theory
- Original publisher: Springer
- Authors: Peter Bürgisser, Michael Clausen, M. Amin Shokrollahi
- Series: 国外数学名著系列 (Series of Foreign Mathematics Classics)
- Publisher: Science Press (科学出版社)
- ISBN: 9787030182999
- Listing date: 2007-02-12
- Publication date: January 2007
- Format: 16mo
- Pages: 618
- Edition/printing: 1-1
- Category: Mathematics > Algebra, Number Theory, and Combinatorics > General

### About the Book


In this book we focus on Algebraic Complexity Theory, the study of the intrinsic algorithmic difficulty of algebraic problems within an algebraic model of computation. Motivated by questions of numerical and symbolic computation, this branch of research originated in 1954 when Ostrowski [403] inquired about the optimality of Horner's rule. Algebraic complexity theory grew rapidly and has since become a well-established area of research. However, with the exception of the now classic monograph by Borodin and Munro, published in 1975, a systematic treatment of this theory is not available.

This book is intended to be a comprehensive text which presents both traditional material and recent research in algebraic complexity theory in a coherent way. Requiring only some basic algebra and offering over 350 exercises, it should be well-suited as a textbook for beginners at the graduate level. With its extensive bibliographic notes covering nearly 600 research papers, it might also serve as a reference book.

### Table of Contents

Chapter 1. Introduction

1.1 Exercises

1.2 Open Problems

1.3 Notes

Part I. Fundamental Algorithms

Chapter 2. Efficient Polynomial Arithmetic

2.1 Multiplication of Polynomials I

2.2* Multiplication of Polynomials II

2.3* Multiplication of Several Polynomials

2.4 Multiplication and Inversion of Power Series

2.5* Composition of Power Series

2.6 Exercises

2.7 Open Problems

2.8 Notes

Chapter 3. Efficient Algorithms with Branching

3.1 Polynomial Greatest Common Divisors

3.2* Local Analysis of the Knuth-Schönhage Algorithm

3.3 Evaluation and Interpolation

3.4* Fast Point Location in Arrangements of Hyperplanes

3.5* Vapnik-Chervonenkis Dimension and Epsilon-Nets

### Preface

In the 1930s, a number of quite different concepts for this purpose were proposed, such as Turing machines, WHILE-programs, recursive functions, Markov algorithms, and Thue systems. All these concepts turned out to be equivalent, a fact summarized in Church's thesis, which says that the resulting definitions form an adequate formalization of the intuitive notion of computability. This had and continues to have an enormous effect. First of all, with these notions it has been possible to prove that various problems are algorithmically unsolvable. Among these undecidable problems are the halting problem, the word problem of group theory, the Post correspondence problem, and Hilbert's tenth problem. Secondly, concepts like Turing machines and WHILE-programs had a strong influence on the development of the first computers and programming languages.

In the era of digital computers, the question of finding efficient solutions to algorithmically solvable problems has become increasingly important. In addition, the fact that some problems can be solved very efficiently, while others seem to defy all attempts to find an efficient solution, has called for a deeper understanding of the intrinsic computational difficulty of problems. This has resulted in the development of complexity theory. Complexity theory has since become a very diversified area of research. Each branch uses specific models of computation, like Turing machines, random access machines, Boolean circuits, straight-line programs, computation trees, or VLSI-models. Every computation in such a model induces costs, such as the number of computation steps, the amount of memory required, the number of gates of a circuit, the number of instructions, or the chip area. Accordingly, studies in computational complexity are generally based on some model of computation together with a complexity measure. For an overview, we refer the interested reader to the Handbook of Theoretical Computer Science [321], which contains several surveys of various branches of complexity theory.

In this book we focus on Algebraic Complexity Theory, the study of the intrinsic algorithmic difficulty of algebraic problems within an algebraic model of computation. Motivated by questions of numerical and symbolic computation, this branch of research originated in 1954 when Ostrowski [403] inquired about the optimality of Horner's rule. Algebraic complexity theory grew rapidly and has since become a well-established area of research. (See the surveys of von zur Gathen [189], Grigoriev [210], Heintz [241], Schönhage [462], and Strassen [506, 510].) However, with the exception of the now classic monograph by Borodin and Munro [65], published in 1975, a systematic treatment of this theory is not available.

This book is intended to be a comprehensive text which presents both traditional material and recent research in algebraic complexity theory in a coherent way. Requiring only some basic algebra and offering over 350 exercises, it should be well-suited as a textbook for beginners at the graduate level. With its extensive bibliographic notes covering nearly 600 research papers, it might also serve as a reference book.

The text provides a uniform treatment of algebraic complexity theory on the basis of the straight-line program and the computation tree models, with special emphasis on lower complexity bounds. This also means that this is not a book on Computer Algebra, whose main theme is the design and implementation of efficient algorithms for algebraic problems.

Nonetheless, our book contains numerous algorithms, typically those that are essentially optimal within the specified computation model. Our main goal is to develop methods for proving the optimality of such algorithms.

To emphasize the logical development of the subject, we have divided the book into five parts, with 21 chapters in total. The first chapter consists of an informal introduction to algebraic complexity theory.

The next two chapters form PART I: FUNDAMENTAL ALGORITHMS. Chapter 2 is concerned with efficient algorithms for the symbolic manipulation of polynomials and power series, such as the Schönhage-Strassen algorithm for polynomial multiplication, the Sieveking-Kung algorithm for the inversion of power series, or the Brent-Kung algorithm for the composition of power series. It is followed by a chapter in which the emphasis lies on efficient algorithms within the branching model. In particular, we present the fast Knuth-Schönhage algorithm for computing the greatest common divisor (GCD) of univariate polynomials. This algorithm combined with Huffman coding then yields efficient solutions of algorithmic problems associated with Chinese remaindering. Furthermore the VC-dimension and the theory of epsilon nets are used to show that certain NP-complete problems, like the knapsack or the traveling salesman problem, may be solved by "nonuniform polynomial time algorithms" in the computation tree model over the reals. This surprising and important result, due to Meyer auf der Heide, demonstrates that it is not possible to prove exponential lower bounds for the above problems in the model of computation trees. Moreover, it stresses the role of uniformity in the definition of the language class NP and, at the same time, puts emphasis on the quality of several lower bounds derived later in Chapter 11.
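
The fast polynomial multiplication treated in Chapter 2 is the FFT-based Schönhage-Strassen algorithm. As a simpler illustration of how divide-and-conquer already beats the classical O(n²) count of coefficient multiplications, here is a Karatsuba-style sketch (the function names and code are ours, not the book's):

```python
def poly_mul(f, g):
    """Karatsuba-style multiplication of polynomials given as coefficient
    lists (constant term first). Three recursive products replace the four
    of the classical method, giving O(n^1.59) coefficient multiplications."""
    if not f or not g:
        return []
    if len(f) == 1:
        return [f[0] * c for c in g]
    if len(g) == 1:
        return [c * g[0] for c in f]

    def padd(a, b):  # coefficient-wise sum of two lists
        if len(a) < len(b):
            a, b = b, a
        return [x + (b[i] if i < len(b) else 0) for i, x in enumerate(a)]

    m = max(len(f), len(g)) // 2          # split point: f = f0 + x^m * f1
    f0, f1 = f[:m], f[m:]
    g0, g1 = g[:m], g[m:]
    p0 = poly_mul(f0, g0)                 # low * low
    p2 = poly_mul(f1, g1)                 # high * high
    p1 = poly_mul(padd(f0, f1), padd(g0, g1))
    # middle term f0*g1 + f1*g0, recovered without a fourth product
    mid = [x - (p0[i] if i < len(p0) else 0) - (p2[i] if i < len(p2) else 0)
           for i, x in enumerate(p1)]
    res = [0] * (len(f) + len(g) - 1)
    for i, c in enumerate(p0):
        res[i] += c
    for i, c in enumerate(mid):
        res[i + m] += c
    for i, c in enumerate(p2):
        res[i + 2 * m] += c
    return res
```

For example, `poly_mul([1, 2], [3, 4])` computes (1 + 2x)(3 + 4x) = 3 + 10x + 8x², returned as `[3, 10, 8]`.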

While the first three chapters rely on the reader's intuitive notion of algorithm, the remaining parts of the book, directed towards lower bounds, call for an exact specification of computation models and complexity measures.

Therefore, in PART II: ELEMENTARY LOWER BOUNDS (Chapters 4-7), we first introduce the models of straight-line programs and computation trees, which we use throughout the rest of the book. We then describe several elementary lower bound techniques. Chapter 5 contains transcendence degree arguments, including results of Motzkin and Belaga as well as the Baur-Rabin theorem. Chapter 6 discusses a unified approach to Pan's substitution method and its extensions. The methods of Chapters 5 and 6 yield lower bounds which are at most linear in the number of input variables. Nonetheless, the methods are strong enough to show the optimality of some basic algorithms, the most prominent being Horner's rule. In Chapter 7 we introduce two fundamental program transformation techniques. The first is Strassen's technique of "avoiding divisions." The second is a method for transforming a program for the computation of a multivariate rational function into one which computes the given function and all its first-order partial derivatives. The results of Chapter 7 are of importance in Chapters 8, 14, and 16.
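
Horner's rule, whose optimality is the subject of Ostrowski's 1954 question, evaluates a generic degree-n polynomial with n multiplications and n additions. A minimal sketch (ours, for illustration only):

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x^n by Horner's rule,
    using n multiplications and n additions.
    coeffs = [a_0, a_1, ..., a_n], constant term first."""
    acc = 0
    for a in reversed(coeffs):
        acc = acc * x + a   # ((...(a_n*x + a_{n-1})*x + ...)*x + a_0)
    return acc
```

For instance, `horner([1, 2, 3], 2)` evaluates 1 + 2x + 3x² at x = 2, giving 17. The lower bound methods of Chapters 5 and 6 show that no straight-line program can do with fewer operations for a generic polynomial.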

PART III: HIGH DEGREE (Chapters 8-12) shows that concepts from algebraic geometry and algebraic topology, like the degree or Betti numbers, can be applied to prove nonlinear lower complexity bounds. Chapter 8 studies Strassen's degree bound, one of the central tools for obtaining almost sharp lower complexity bounds for a number of problems of high degree, like the computation of the coefficients of a univariate polynomial from its roots. Chapter 9 is devoted to the investigation of specific polynomials that are hard to compute. It may be considered as a counterpart to Chapters 5 and 6 where we study generic polynomials. In Chapter 10 the degree bound is adapted to the computation tree model. With this tool it turns out that the Knuth-Schönhage algorithm is essentially optimal for computing the Euclidean representation. In Chapter 11 Ben-Or's lower complexity bound for semi-algebraic membership problems is deduced from the Milnor-Thom bound. This is applied to several problems of computational geometry. In Chapter 12 the Grigoriev-Risler lower bound for the additive complexity of univariate real polynomials is derived from Khovanskii's theorem on the number of real roots of sparse systems of polynomial equations.

PART IV: LOW DEGREE (Chapters 13-20) is concerned with the problem of computing a finite set of multivariate polynomials of degree at most two. In Chapter 13 we discuss upper and lower complexity bounds for computing a finite set of linear polynomials, which is simply the task of multiplying a generic input vector by a specific matrix. This problem is of great practical interest, as the notable examples of the discrete Fourier transform (DFT), Toeplitz, Hankel, and Vandermonde matrices indicate.

The theory of bilinear complexity is concerned with the problem of computing a finite set of bilinear polynomials. Chapters 14-20 contain a thorough treatment of this theory and can be regarded as a book within a book. Chapter 14 introduces the framework of bilinear complexity theory and is meant as a prerequisite for Chapters 15-20. The language introduced in Chapter 14 allows a concise discussion of the matrix multiplication methods in Chapter 15, such as Strassen's original algorithm and the notion of rank, Bini-Capovani-Lotti-Romani's concept of border rank, Schönhage's τ-theorem, as well as Strassen's laser method, and its tricky extension by Coppersmith and Winograd. Chapter 16 shows that several problems in computational linear algebra are about as hard as matrix multiplication, thereby emphasizing the key role of the matrix multiplication problem. Chapter 17 discusses Lafon and Winograd's lower bound for the complexity of matrix multiplication, and its generalization by Alder and Strassen. Moreover, in Chapter 18 we study a relationship, observed by Brockett and Dobkin, between the complexity of bilinear maps over finite fields and a well-known problem of coding theory. Partial solutions to the latter lead to interesting lower bounds, some of which are not known to be valid over infinite fields. This chapter also discusses the Chudnovsky-Chudnovsky interpolation algorithm on algebraic curves which yields a linear upper complexity bound for the multiplication in finite fields.
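
Strassen's original algorithm rests on the observation that two 2×2 matrices can be multiplied with 7 rather than 8 multiplications; applied recursively to block matrices, this yields an O(n^{log₂ 7}) ≈ O(n^{2.81}) algorithm. A sketch of the base step (the variable names are ours):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices (tuples of row tuples) with 7
    multiplications instead of 8, at the price of extra additions.
    Applied recursively to blocks, this gives the O(n^2.81) bound."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))
```

Since the seven products never use commutativity of the entries, the same identities apply when a, ..., h are themselves matrix blocks, which is what makes the recursion work.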

The bilinear complexity or rank of bilinear problems can be reformulated in terms of tensors, resulting in a generalization of the usual matrix rank. In Chapter 19 tensorial rank is investigated for special classes of tensors, while Chapter 20 is devoted to the study of the rank of "generic" tensors. In the language of algebraic geometry this problem is closely related to computing the dimension of higher secant varieties to Segre varieties.

PART V: COMPLETE PROBLEMS (Chapter 21) presents Valiant's nonuniform algebraic analogue of the P versus NP problem. It builds a bridge both to the theory of NP- and #P-completeness as well as to that part of algebraic complexity theory which is based on the parallel computation model.

A number of topics are not covered in this book; this is due to limitations of time and space, the lack of reasonable lower complexity bounds, as well as the fact that certain problems do not fit into the straight-line program or computation tree model. More specifically, our book treats neither computational number theory nor computational group and representation theory (cf. Cohen [117], Lenstra and Lenstra [326], Sims [484], Atkinson (ed.) [13], Lux and Pahlings [344], Finkelstein and Kantor (eds.) [172]). Also, we have not included a discussion of topics in computational commutative algebra like factorization and Gröbner bases, nor do we speak about the complexity of first-order algebraic theories (cf. Becker and Weispfenning [34], Fitchas et al. [174], Heintz et al. [245], and Kaltofen [284, 286]). We have also omitted a treatment of parallel and randomized algorithms (cf. von zur Gathen [186], Ja'Ja [268]). However, many of these topics have already been discussed in other books or surveys, as the given references indicate.

Clearly, much is left to be done. We hope that our book will serve as a foundation for advanced research and as a starting point for further monographs on algebraic complexity theory.

Zürich, Bonn, and Berkeley P. Bürgisser · M. Clausen · M.A. Shokrollahi

June 1996

### Foreword

From a publishing standpoint, besides publishing our own results well and quickly, importing outstanding foreign publications is undoubtedly important and indispensable. In mathematics, Springer remains to this day the most authoritative publisher in the world. By reprinting a selection of their good new books, Science Press enables the broad community of Chinese mathematicians to buy them at a lower price, and in particular allows mathematicians working in remote regions to see these books widely; this is undoubtedly of great benefit to mathematical research and teaching in our country.

This time Science Press has purchased the rights and reprinted, in one batch, 23 mathematics books published by Springer. This is a good deed, and one worth continuing. Roughly classified, these 23 books comprise 5 in pure mathematics, 6 in applied mathematics, and 12 in computational mathematics, some of them of an interdisciplinary nature. All are quite recent: the great majority, 16 in all, were published after 2000, and the rest after 1990. These books allow readers to become quickly acquainted with the frontiers of certain areas of mathematics. For example, the three pure-mathematics volumes on number theory, algebra, and topology are all installments of an encyclopedia of mathematics written by leading mathematicians in those fields, and they are very helpful to researchers who wish to survey the frontier and overall picture of those areas. In keeping with the character of each discipline, the pure-mathematics titles are mainly "classics," while the applied and computational titles are mainly "frontier" works. Most of the authors are internationally renowned mathematicians; for instance, the author of the volume on topology, Novikov, is a member of the Russian Academy of Sciences and a recipient of the Fields Medal and the Wolf Prize in Mathematics. The works of such eminent mathematicians will undoubtedly serve as excellent guides for our researchers.

Of course, 23 books can cover only a part of mathematics, so this work should be continued. Furthermore, some good books with a broader readership should also be translated into Chinese and published, so that they reach an even larger audience.

In short, I warmly support Science Press's initiative of reprinting selected mathematical works published by Springer, and I hope this undertaking achieves still greater success.

Wang Yuan (王元)

December 3, 2005