Basic Information
- Original title: An Introduction to Parallel Programming
- Original publisher: Morgan Kaufmann

Editorial Recommendation
Step by step, this book shows how to develop efficient parallel programs with MPI, Pthreads, and OpenMP. It teaches readers how to write and debug programs for both distributed-memory and shared-memory systems, and how to evaluate program performance.
Book Description
About the Author
Peter Pacheco's research is in parallel scientific computing. He has worked on the development of parallel software for circuit simulation, speech recognition, and the simulation of large networks of biologically accurate neurons. Peter has been teaching parallel computing at both the undergraduate and graduate levels for nearly twenty years. He is the author of Parallel Programming with MPI, published by Morgan Kaufmann Publishers.
Table of Contents
CHAPTER 1 Why Parallel Computing?
1.1 Why We Need Ever-Increasing Performance
1.2 Why We’re Building Parallel Systems
1.3 Why We Need to Write Parallel Programs
1.4 How Do We Write Parallel Programs?
1.5 What We’ll Be Doing
1.6 Concurrent, Parallel, Distributed
1.7 The Rest of the Book
1.8 A Word of Warning
1.9 Typographical Conventions
1.10 Summary
1.11 Exercises
CHAPTER 2 Parallel Hardware and Parallel Software
2.1 Some Background
2.1.1 The von Neumann architecture
2.1.2 Processes, multitasking, and threads
2.2 Modifications to the von Neumann Model
2.2.1 The basics of caching
2.2.2 Cache mappings
Preface
In spite of the ubiquity of parallel hardware, most computer science majors graduate with little or no experience in parallel programming. Many colleges and universities offer upper-division elective courses in parallel computing, but since most computer science majors have to take numerous required courses, many graduate without ever writing a multithreaded or multiprocess program.
It seems clear that this state of affairs needs to change. Although many programs can obtain satisfactory performance on a single core, computer scientists should be made aware of the potentially vast performance improvements that can be obtained with parallelism, and they should be able to exploit this potential when the need arises.
An Introduction to Parallel Programming was written to partially address this problem. It provides an introduction to writing parallel programs using MPI, Pthreads, and OpenMP—three of the most widely used application programming interfaces (APIs) for parallel programming. The intended audience is students and professionals who need to write parallel programs. The prerequisites are minimal: a college-level course in mathematics and the ability to write serial programs in C. They are minimal because we believe that students should be able to start programming parallel systems as early as possible.
At the University of San Francisco, computer science students can fulfill a requirement for the major by taking the course, on which this text is based, immediately after taking the “Introduction to Computer Science I” course that most majors take in the first semester of their freshman year. We’ve been offering this course in parallel computing for six years now, and it has been our experience that there really is no reason for students to defer writing parallel programs until their junior or senior year. To the contrary, the course is popular, and students have found that using concurrency in other courses is much easier after having taken the Introduction course.
If second-semester freshmen can learn to write parallel programs by taking a class, then motivated computing professionals should be able to learn to write parallel programs through self-study. We hope this book will prove to be a useful resource for them.
About This Book
As we noted earlier, the main purpose of the book is to teach parallel programming in MPI, Pthreads, and OpenMP to an audience with a limited background in computer science and no previous experience with parallelism. We also wanted to make it as flexible as possible so that readers who have no interest in learning one or two of the APIs can still read the remaining material with little effort. Thus, the chapters on the three APIs are largely independent of each other: they can be read in any order, and one or two of these chapters can be bypassed. This independence has a cost: It was necessary to repeat some of the material in these chapters. Of course, repeated material can be simply scanned or skipped.
Readers with no prior experience with parallel computing should read Chapter 1 first. It attempts to provide a relatively nontechnical explanation of why parallel systems have come to dominate the computer landscape. The chapter also provides a short introduction to parallel systems and parallel programming.
Chapter 2 provides some technical background in computer hardware and software. Much of the material on hardware can be scanned before proceeding to the API chapters. Chapters 3, 4, and 5 are the introductions to programming with MPI, Pthreads, and OpenMP, respectively.
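To give a feel for the kind of program these chapters begin with, here is a minimal MPI "hello, world" in C. It is our own sketch in the spirit of Chapter 3, not code reproduced from the book; the variable names my_rank and comm_sz are illustrative choices.

    /* Sketch of a minimal MPI program: each process prints a greeting.
     * Compile with mpicc, run with, e.g., mpiexec -n 4 ./a.out */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char* argv[]) {
        int my_rank, comm_sz;

        MPI_Init(&argc, &argv);                   /* start up MPI          */
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);  /* this process's rank   */
        MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);  /* number of processes   */
        printf("Hello from process %d of %d\n", my_rank, comm_sz);
        MPI_Finalize();                           /* shut down MPI         */
        return 0;
    }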
In Chapter 6 we develop two longer programs: a parallel n-body solver and a parallel tree search. Both programs are developed using all three APIs. Chapter 7 provides a brief list of pointers to additional information on various aspects of parallel computing.
We use the C programming language for developing our programs because all three APIs have C-language interfaces, and, since C is such a small language, it is a relatively easy language to learn—especially for C++ and Java programmers, since they are already familiar with C’s control structures.
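For instance, OpenMP's C interface consists of compiler pragmas plus a small runtime library. The sketch below is our illustration of that style, not an excerpt from the text; it prints a greeting from each thread.

    /* Sketch of a minimal OpenMP program.
     * Compile with an OpenMP flag, e.g., gcc -fopenmp hello.c */
    #include <stdio.h>
    #include <omp.h>    /* OpenMP runtime library */

    int main(void) {
        /* The pragma asks the compiler to run the block on a team of threads. */
        #pragma omp parallel
        {
            int my_rank = omp_get_thread_num();        /* this thread's id */
            int thread_count = omp_get_num_threads();  /* size of the team */
            printf("Hello from thread %d of %d\n", my_rank, thread_count);
        }
        return 0;
    }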
Classroom Use
This text grew out of a lower-division undergraduate course at the University of San Francisco. The course fulfills a requirement for the computer science major, and it also fulfills a prerequisite for the undergraduate operating systems course. The only prerequisites for the course are either a grade of "B" or better in a one-semester introduction to computer science or a "C" or better in a two-semester introduction to computer science. The course begins with a four-week introduction to C programming. Since most students have already written Java programs, the bulk of what is covered is devoted to the use of pointers in C.[1] The remainder of the course provides introductions to programming in MPI, Pthreads, and OpenMP.
We cover most of the material in Chapters 1, 3, 4, and 5, and parts of the material in Chapters 2 and 6. The background in Chapter 2 is introduced as the need arises. For example, before discussing cache coherence issues in OpenMP (Chapter 5), we cover the material on caches in Chapter 2.
The coursework consists of weekly homework assignments, five programming assignments, a couple of midterms, and a final exam. The homework usually involves writing a very short program or making a small modification to an existing program. Their purpose is to ensure that students stay current with the coursework and to give them hands-on experience with the ideas introduced in class. It seems likely that the existence of the assignments has been one of the principal reasons for the course's success. Most of the exercises in the text are suitable for these brief assignments. The programming assignments are larger than the programs written for homework, but we typically give students a good deal of guidance: We'll frequently include pseudocode in the assignment and discuss some of the more difficult aspects in class. This extra guidance is often crucial: It's not difficult to give programming assignments that will take far too long for the students to complete. The results of the midterms and finals, and the enthusiastic reports of the professor who teaches operating systems, suggest that the course is actually very successful in teaching students how to write parallel programs.

[1] Interestingly, a number of students have said that they found the use of C pointers more difficult than MPI programming.
For more advanced courses in parallel computing, the text and its online support materials can serve as a supplement so that much of the information on the syntax and semantics of the three APIs can be assigned as outside reading. The text can also be used as a supplement for project-based courses and courses outside of computer science that make use of parallel computation.
Support Materials
Media Reviews
——Duncan Buell, Department of Computer Science and Engineering, University of South Carolina
This book covers two increasingly important areas: shared-memory programming with Pthreads and OpenMP, and distributed-memory programming with MPI. More importantly, it stresses the importance of good programming practice by pointing out potential performance pitfalls. It presents these topics in the context of a variety of disciplines, including computer science, physics, and mathematics, and its chapters include programming exercises of varying difficulty. For students or professionals who want to learn parallel programming techniques and broaden their knowledge, this is an ideal reference.
——Leigh Little, Department of Computer Science, The College at Brockport, State University of New York
This is a well-written, comprehensive introduction to parallel computing; students as well as practitioners in related fields will benefit from its up-to-date coverage. The author's accessible writing style, combined with a variety of interesting examples, keeps the reader engaged. In the fast-moving, ever-evolving field of parallel computing, this book provides an approachable yet thorough treatment of all aspects of parallel software and hardware.
——Kathy J. Liszka, Department of Computer Science, University of Akron