Applied Numerical Linear Algebra (English Reprint Edition)
Preface
1 Introduction
1.1 Basic Notation
1.2 Standard Problems of Numerical Linear Algebra
1.3 General Techniques
1.3.1 Matrix Factorizations
1.3.2 Perturbation Theory and Condition Numbers
1.3.3 Effects of Roundoff Error on Algorithms
1.3.4 Analyzing the Speed of Algorithms
1.3.5 Engineering Numerical Software
1.4 Example: Polynomial Evaluation
1.5 Floating Point Arithmetic
1.5.1 Further Details
1.6 Polynomial Evaluation Revisited
1.7 Vector and Matrix Norms
1.8 References and Other Topics for Chapter 1
1.9 Questions for Chapter 1
2 Linear Equation Solving
2.1 Introduction
2.2 Perturbation Theory
2.2.1 Relative Perturbation Theory
2.3 Gaussian Elimination
2.4 Error Analysis
2.4.1 The Need for Pivoting
2.4.2 Formal Error Analysis of Gaussian Elimination
2.4.3 Estimating Condition Numbers
2.4.4 Practical Error Bounds
2.5 Improving the Accuracy of a Solution
2.5.1 Single Precision Iterative Refinement
2.5.2 Equilibration
2.6 Blocking Algorithms for Higher Performance
2.6.1 Basic Linear Algebra Subroutines (BLAS)
2.6.2 How to Optimize Matrix Multiplication
2.6.3 Reorganizing Gaussian Elimination to Use Level 3 BLAS
2.6.4 More About Parallelism and Other Performance Issues
2.7 Special Linear Systems
2.7.1 Real Symmetric Positive Definite Matrices
2.7.2 Symmetric Indefinite Matrices
2.7.3 Band Matrices
2.7.4 General Sparse Matrices
2.7.5 Dense Matrices Depending on Fewer Than O(n²) Parameters
2.8 References and Other Topics for Chapter 2
2.9 Questions for Chapter 2
3 Linear Least Squares Problems
3.1 Introduction
3.2 Matrix Factorizations That Solve the Linear Least Squares Problem
3.2.1 Normal Equations
3.2.2 QR Decomposition
3.2.3 Singular Value Decomposition
3.3 Perturbation Theory for the Least Squares Problem
3.4 Orthogonal Matrices
3.4.1 Householder Transformations
3.4.2 Givens Rotations
3.4.3 Roundoff Error Analysis for Orthogonal Matrices
3.4.4 Why Orthogonal Matrices?
3.5 Rank-Deficient Least Squares Problems
3.5.1 Solving Rank-Deficient Least Squares Problems Using the SVD
3.5.2 Solving Rank-Deficient Least Squares Problems Using QR with Pivoting
3.6 Performance Comparison of Methods for Solving Least Squares Problems
3.7 References and Other Topics for Chapter 3
3.8 Questions for Chapter 3
4 Nonsymmetric Eigenvalue Problems
4.1 Introduction
4.2 Canonical Forms
4.2.1 Computing Eigenvectors from the Schur Form
4.3 Perturbation Theory
4.4 Algorithms for the Nonsymmetric Eigenproblem
4.4.1 Power Method
4.4.2 Inverse Iteration
4.4.3 Orthogonal Iteration
4.4.4 QR Iteration
4.4.5 Making QR Iteration Practical
4.4.6 Hessenberg Reduction
4.4.7 Tridiagonal and Bidiagonal Reduction
4.4.8 QR Iteration with Implicit Shifts
4.5 Other Nonsymmetric Eigenvalue Problems
4.5.1 Regular Matrix Pencils and Weierstrass Canonical Form
4.5.2 Singular Matrix Pencils and the Kronecker Canonical Form
4.5.3 Nonlinear Eigenvalue Problems
4.6 Summary
4.7 References and Other Topics for Chapter 4
4.8 Questions for Chapter 4
5 The Symmetric Eigenproblem and Singular Value Decomposition
5.1 Introduction
5.2 Perturbation Theory
5.2.1 Relative Perturbation Theory
5.3 Algorithms for the Symmetric Eigenproblem
5.3.1 Tridiagonal QR Iteration
5.3.2 Rayleigh Quotient Iteration
5.3.3 Divide-and-Conquer
5.3.4 Bisection and Inverse Iteration
5.3.5 Jacobi's Method
5.3.6 Performance Comparison
5.4 Algorithms for the Singular Value Decomposition
5.4.1 QR Iteration and Its Variations for the Bidiagonal SVD
5.4.2 Computing the Bidiagonal SVD to High Relative Accuracy
5.4.3 Jacobi's Method for the SVD
5.5 Differential Equations and Eigenvalue Problems
5.5.1 The Toda Lattice
5.5.2 The Connection to Partial Differential Equations
5.6 References and Other Topics for Chapter 5
5.7 Questions for Chapter 5
6 Iterative Methods for Linear Systems
6.1 Introduction
6.2 On-line Help for Iterative Methods
6.3 Poisson's Equation
6.3.1 Poisson's Equation in One Dimension
6.3.2 Poisson's Equation in Two Dimensions
6.3.3 Expressing Poisson's Equation with Kronecker Products
6.4 Summary of Methods for Solving Poisson's Equation
6.5 Basic Iterative Methods
6.5.1 Jacobi's Method
6.5.2 Gauss-Seidel Method
6.5.3 Successive Overrelaxation
6.5.4 Convergence of Jacobi's, Gauss-Seidel, and SOR(ω) Methods on the Model Problem
6.5.5 Detailed Convergence Criteria for Jacobi's, Gauss-Seidel, and SOR(ω) Methods
6.5.6 Chebyshev Acceleration and Symmetric SOR (SSOR)
6.6 Krylov Subspace Methods
6.6.1 Extracting Information about A via Matrix-Vector Multiplication
6.6.2 Solving Ax = b Using the Krylov Subspace Kk
6.6.3 Conjugate Gradient Method
6.6.4 Convergence Analysis of the Conjugate Gradient Method
6.6.5 Preconditioning
6.6.6 Other Krylov Subspace Algorithms for Solving Ax = b
6.7 Fast Fourier Transform
6.7.1 The Discrete Fourier Transform
6.7.2 Solving the Continuous Model Problem Using Fourier Series
6.7.3 Convolutions
6.7.4 Computing the Fast Fourier Transform
6.8 Block Cyclic Reduction
6.9 Multigrid
6.9.1 Overview of Multigrid on the Two-Dimensional Poisson's Equation
6.9.2 Detailed Description of Multigrid on the One-Dimensional Poisson's Equation
6.10 Domain Decomposition
6.10.1 Nonoverlapping Methods
6.10.2 Overlapping Methods
6.11 References and Other Topics for Chapter 6
6.12 Questions for Chapter 6
7 Iterative Methods for Eigenvalue Problems
7.1 Introduction
7.2 The Rayleigh-Ritz Method
7.3 The Lanczos Algorithm in Exact Arithmetic
7.4 The Lanczos Algorithm in Floating Point Arithmetic
7.5 The Lanczos Algorithm with Selective Orthogonalization
7.6 Beyond Selective Orthogonalization
7.7 Iterative Algorithms for the Nonsymmetric Eigenproblem
7.8 References and Other Topics for Chapter 7
7.9 Questions for Chapter 7
Bibliography
Index