- Original title: Simulation, Third Edition
- Original publisher: Elsevier (Singapore)
2 Elements of Probability 5
  2.1 Sample Space and Events 5
  2.2 Axioms of Probability 6
  2.3 Conditional Probability and Independence 7
  2.4 Random Variables 8
  2.5 Expectation 10
  2.6 Variance 13
  2.7 Chebyshev's Inequality and the Laws of Large Numbers 15
  2.8 Some Discrete Random Variables 17
    Binomial Random Variables 17
    Poisson Random Variables 18
    Geometric Random Variables 20
    The Negative Binomial Random Variable 20
    Hypergeometric Random Variables 21
  2.9 Continuous Random Variables 22
    Uniformly Distributed Random Variables 22
    Normal Random Variables 23
    Exponential Random Variables 25
In formulating a stochastic model to describe a real phenomenon, it used to be that one compromised between choosing a model that is a realistic replica of the actual situation and choosing one whose mathematical analysis is tractable. That is, there did not seem to be any payoff in choosing a model that faithfully conformed to the phenomenon under study if it were not possible to mathematically analyze that model. Similar considerations have led to the concentration on asymptotic or steady-state results as opposed to the more useful ones on transient time. However, the relatively recent advent of fast and inexpensive computational power has opened up another approach -- namely, to try to model the phenomenon as faithfully as possible and then to rely on a simulation study to analyze it.
In this text we show how to analyze a model by use of a simulation study. In particular, we first show how a computer can be utilized to generate random (more precisely, pseudorandom) numbers, and then how these random numbers can be used to generate the values of random variables from arbitrary distributions. Using the concept of discrete events we show how to use random variables to generate the behavior of a stochastic model over time. By continually generating the behavior of the system we show how to obtain estimators of desired quantities of interest. The statistical questions of when to stop a simulation and what confidence to place in the resulting estimators are considered. A variety of ways in which one can improve on the usual simulation estimators are presented. In addition, we show how to use simulation to determine whether the stochastic model chosen is consistent with a set of actual data.
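As a small, hedged illustration of that pipeline (the code and names below are our own sketch, not the book's), the inverse transform method turns a single pseudorandom Uniform(0, 1) number into an exponential random variable:

```python
import math
import random

def exponential(rate, rng):
    """Inverse transform method: if U ~ Uniform(0,1), then
    -ln(U)/rate has the Exponential(rate) distribution."""
    return -math.log(rng.random()) / rate

# Crude sanity check: the sample mean should be near 1/rate = 0.5.
rng = random.Random(12345)
samples = [exponential(2.0, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
```

Chapters 4 and 5 develop this and related generation methods systematically; the sketch above shows only the basic uniform-to-arbitrary-distribution step.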
New to this Edition
- Expositional and notational changes throughout the text.
- New exercises in almost all chapters.
- A new section (4.6) on generating random vectors, with an example illustrating how to generate a multinomial vector.
- A new section (6.6) on using discrete events to simulate an insurance risk model.
- A new section (8.7) on using simulation to efficiently evaluate the expected price of an exotic option. Various variance reduction methods are combined to obtain an efficient procedure for evaluating these options, which are of importance in finance and insurance.
- A new section (11.4) on using simulation to estimate first passage time distributions of a Markov chain, a problem with a variety of applications in almost all areas of applied probability. The method utilized is also applied to deriving the tail probabilities of joint distributions, such as the bivariate normal distribution.
- A new section (11.5) on coupling from the past, a technique that allows one to simulate a random variable whose distribution is that of the stationary distribution of a Markov chain.
- A new example (8m) on using simulation to estimate tail probabilities of compound random variables. Such random variables are of great importance in insurance.
- New material in Section 8.5 on using importance sampling to estimate tail probabilities.
- Section 10.3 on the Gibbs sampler has been rewritten. A new example concerned with generating a multinomial vector conditional on the event that all outcomes occur at least once is presented.
The successive chapters in this text are as follows. Chapter 1 is an introductory chapter which presents a typical phenomenon that is of interest to study. Chapter 2 is a review of probability. Whereas this chapter is self-contained and does not assume the reader is familiar with probability, we imagine that it will indeed be a review for most readers. Chapter 3 deals with random numbers and how a variant of them (the so-called pseudorandom numbers) can be generated on a computer. The use of random numbers to generate discrete and then continuous random variables is considered in Chapters 4 and 5.
Chapter 6 presents the discrete event approach to track an arbitrary system as it evolves over time. A variety of examples -- relating to both single and multiple server queueing systems, to an insurance risk model, to an inventory system, to a machine repair model, and to the exercising of a stock option -- are presented. Chapter 7 introduces the subject matter of statistics. Assuming that our average reader has not previously studied this subject, the chapter starts with very basic concepts and ends by introducing the bootstrap statistical method, which is quite useful in analyzing the results of a simulation.
Chapter 8 deals with the important subject of variance reduction. This is an attempt to improve on the usual simulation estimators by finding ones having the same mean and smaller variances. The chapter begins by introducing the technique of using antithetic variables. We note (with a proof deferred to the chapter's appendix) that this always results in a variance reduction along with a computational savings when we are trying to estimate the expected value of a function that is monotone in each of its variables. We then introduce control variables and illustrate their usefulness in variance reduction. For instance, we show how control variables can be effectively utilized in analyzing queueing systems, reliability systems, a list reordering problem, and blackjack. We also indicate how to use regression packages to facilitate the resulting computations when using control variables. Variance reduction by use of conditional expectations is then considered. Its use is indicated in examples dealing with estimating π, and in analyzing finite capacity queueing systems. Also, in conjunction with a control variate, conditional expectation is used to estimate the expected number of events of a renewal process by some fixed time. The use of stratified sampling as a variance reduction tool is indicated in examples dealing with queues with varying arrival rates and evaluating integrals. The relationship between the variance reduction techniques of conditional expectation and stratified sampling is explained and illustrated in the estimation of the expected return in video poker. The technique of importance sampling is next considered. We indicate and explain how this can be an extremely powerful variance reduction technique when estimating small probabilities. In doing so, we introduce the concept of tilted distributions and show how they can be utilized in an importance sampling estimation of a small convolution tail probability.
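As a hedged sketch of the first of these techniques (our own minimal example, not one from the text): when estimating E[e^U] = e - 1 for U ~ Uniform(0,1), antithetic variables pair each uniform U with 1 - U; since e^x is monotone, the pairing is guaranteed to lower the variance.

```python
import math
import random

rng = random.Random(2023)
n = 100_000

# Raw estimator of theta = E[e^U] = e - 1, using 2n independent uniforms.
raw = [math.exp(rng.random()) for _ in range(2 * n)]

# Antithetic estimator: n pairs (U, 1-U), so the same 2n evaluations.
anti = []
for _ in range(n):
    u = rng.random()
    anti.append((math.exp(u) + math.exp(1.0 - u)) / 2.0)

def mean_var(xs):
    m = sum(xs) / len(xs)
    return m, sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

raw_mean, raw_var = mean_var(raw)
anti_mean, anti_var = mean_var(anti)
# Fair comparison: anti_var (per pair) against raw_var / 2 (per
# independent pair); the antithetic pairing should be well below it.
```

Both estimators are unbiased for e - 1 ≈ 1.718; the antithetic per-pair variance is far smaller than that of an independent pair.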
Applications of importance sampling to queueing, random walks, and random permutations, and to computing conditional expectations when one is conditioning on a rare event are presented. The final variance reduction technique of Chapter 8 relates to the use of a common stream of random numbers. An application to valuing an exotic stock option that utilizes a combination of variance reduction techniques is presented in Section 8.7.
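The power of importance sampling for small probabilities can be sketched as follows (a minimal illustration with our own choice of target, not an example from the text): to estimate the normal tail probability P(Z > 4) ≈ 3.17e-5, sample from a normal shifted into the tail and reweight by the likelihood ratio.

```python
import math
import random

rng = random.Random(7)
a, t, n = 4.0, 4.0, 200_000

# Sample X ~ N(t, 1) and weight by the likelihood ratio
# phi(x) / phi_t(x) = exp(-t*x + t**2 / 2), where phi_t is the
# shifted (exponentially tilted) normal density.
total = 0.0
for _ in range(n):
    x = rng.gauss(t, 1.0)
    if x > a:
        total += math.exp(-t * x + t * t / 2.0)
estimate = total / n
```

With naive simulation, roughly 1 in 30,000 samples would land in the event of interest; under the tilted density about half do, and the weights restore unbiasedness.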
Chapter 9 is concerned with statistical validation techniques, which are statistical procedures that can be used to validate the stochastic model when some real data are available. Goodness of fit tests such as the chi-square test and the Kolmogorov-Smirnov test are presented. Other sections in this chapter deal with the two-sample and the n-sample problems and with ways of statistically testing the hypothesis that a given process is a Poisson process.
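As a toy sketch of the chi-square idea (the fair-die setup is our own illustration, not an example from the book): compare observed category counts against expected counts via T = Σ (N_i - n p_i)² / (n p_i), which under the null hypothesis is approximately chi-square with k - 1 degrees of freedom.

```python
import random

# Check whether a simulated six-sided die looks fair.
rng = random.Random(11)
n, k = 60_000, 6
counts = [0] * k
for _ in range(n):
    counts[rng.randrange(k)] += 1

expected = n / k
t_stat = sum((c - expected) ** 2 / expected for c in counts)
# Under the null, t_stat is approximately chi-square with 5 degrees
# of freedom (mean 5); a large value would cast doubt on fairness.
```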
Chapter 10 is concerned with Markov chain Monte Carlo methods. These are techniques that have greatly expanded the use of simulation in recent years. The standard simulation paradigm for estimating θ = E[h(X)], where X is a random vector, is to simulate independent and identically distributed copies of X and then use the average value of h(X) as the estimator. This is the so-called "raw" simulation estimator, which can then possibly be improved upon by using one or more of the variance reduction ideas of Chapter 8. However, in order to employ this approach it is necessary both that the distribution of X be specified and also that we be able to simulate from this distribution. Yet, as we see in Chapter 10, there are many examples where the distribution of X is known but we are not able to directly simulate the random vector X, and other examples where the distribution is not completely known but is only specified up to a multiplicative constant. Thus, in either case, the usual approach to estimating θ is not available. However, a new approach, based on generating a Markov chain whose limiting distribution is the distribution of X, and estimating θ by the average of the values of the function h evaluated at the successive states of this chain, has become widely used in recent years. These Markov chain Monte Carlo methods are explored in Chapter 10. We start, in Section 10.2, by introducing and presenting some of the properties of Markov chains. A general technique for generating a Markov chain having a limiting distribution that is specified up to a multiplicative constant, known as the Hastings-Metropolis algorithm, is presented in Section 10.3, and an application to generating a random element of a large "combinatorial" set is given. The most widely used version of the Hastings-Metropolis algorithm is known as the Gibbs sampler, and this is presented in Section 10.4.
Examples are discussed relating to such problems as generating random points in a region subject to a constraint that no pair of points are within a fixed distance of each other, to analyzing product form queueing networks, to analyzing a hierarchical Bayesian statistical model for predicting the numbers of home runs that will be hit by certain baseball players, and to simulating a multinomial vector conditional on the event that all outcomes occur at least once. An application of the methods of this chapter to deterministic optimization problems, called simulated annealing, is presented in Section 10.5, and an example concerning the traveling salesman problem is presented. The final section of Chapter 10 deals with the sampling importance resampling algorithm, which is a generalization of the acceptance-rejection technique of Chapters 4 and 5. The use of this algorithm in Bayesian statistics is indicated.
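A minimal sketch of the Hastings-Metropolis idea (our own toy target, not an example from the book): the target below is specified only up to a multiplicative constant, yet the chain's long-run state frequencies still converge to it, because the unknown normalizer cancels in the acceptance ratio.

```python
import random

rng = random.Random(99)
K = 10  # states 1..K; target pi(i) proportional to i*i

def weight(i):
    # Unnormalized target; the normalizing constant is never needed.
    return float(i * i)

state = 1
counts = [0] * (K + 1)
burn_in, n_steps = 10_000, 500_000
for step in range(burn_in + n_steps):
    # Symmetric random-walk proposal; moves off {1,...,K} are rejected,
    # and accepted moves use probability min(1, pi(prop)/pi(state)).
    proposal = state + rng.choice((-1, 1))
    if 1 <= proposal <= K and rng.random() < weight(proposal) / weight(state):
        state = proposal
    if step >= burn_in:
        counts[state] += 1

freq_10 = counts[K] / n_steps  # should approach 100/385, about 0.26
```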
Chapter 11 deals with some additional topics in simulation. In Section 11.1 we learn of the alias method which, at the cost of some setup time, is a very efficient way to generate discrete random variables. Section 11.2 is concerned with simulating a two-dimensional Poisson process. In Section 11.3 we present an identity concerning the covariance of the sum of dependent Bernoulli random variables and show how its use can result in estimators of small probabilities having very low variances. Applications relating to estimating the reliability of a system, which appears to be more efficient than any other known estimator of a small system reliability, and to estimating the probability that a specified pattern occurs by some fixed time, are given. Section 11.4 presents an efficient technique to employ simulation to estimate first passage time means and distributions of a Markov chain. An application to computing the tail probabilities of a bivariate normal random variable is given. Section 11.5 presents the coupling from the past approach to simulating a random variable whose distribution is that of the stationary distribution of a specified Markov chain.
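The alias method's trade-off can be sketched as follows (our own implementation of the standard table set-up in Vose's formulation, not code from the text): an O(n) preprocessing pass builds two tables, after which each sample costs one uniform index plus one comparison.

```python
import random

def build_alias(probs):
    """O(n) alias-table set-up for a discrete distribution."""
    n = len(probs)
    scaled = [p * n for p in probs]
    alias, cutoff = [0] * n, [0.0] * n
    small = [i for i, s in enumerate(scaled) if s < 1.0]
    large = [i for i, s in enumerate(scaled) if s >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        cutoff[s], alias[s] = scaled[s], l
        scaled[l] -= 1.0 - cutoff[s]  # donate mass from the large entry
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:  # leftovers have probability 1 up to rounding
        cutoff[i] = 1.0
    return cutoff, alias

def alias_sample(cutoff, alias, rng):
    """One sample: pick a column uniformly, then take it or its alias."""
    i = rng.randrange(len(cutoff))
    return i if rng.random() < cutoff[i] else alias[i]

probs = [0.1, 0.2, 0.3, 0.4]
cutoff, alias = build_alias(probs)
rng = random.Random(5)
n = 200_000
counts = [0] * len(probs)
for _ in range(n):
    counts[alias_sample(cutoff, alias, rng)] += 1
freqs = [c / n for c in counts]  # should be close to probs
```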