### Basic Information

- Original title: Introductory Econometrics
- Original publisher: South-Western College Pub

### About the Book

### Author and Translators

### Table of Contents

Part 1: Regression Analysis with Cross-Sectional Data

Chapter 2: The Simple Regression Model

Chapter 3: Multiple Regression Analysis: Estimation

Chapter 4: Multiple Regression Analysis: Inference

Chapter 5: Multiple Regression Analysis: OLS Asymptotics

Chapter 6: Multiple Regression Analysis: Further Issues

Chapter 7: Multiple Regression Analysis with Qualitative Information: Binary (or Dummy) Variables

Chapter 8: Heteroskedasticity

Chapter 9: More on Specification and Data Problems

Part 2: Regression Analysis with Time Series Data

Chapter 10: Basic Regression Analysis with Time Series Data

Chapter 11: Further Issues in Using OLS with Time Series Data

Chapter 12: Serial Correlation and Heteroskedasticity in Time Series Regressions

Part 3: Advanced Topics

Chapter 13: Pooling Cross Sections Across Time: Simple Panel Data Methods

Chapter 14: Advanced Panel Data Methods

Chapter 15: Instrumental Variables Estimation and Two Stage Least Squares

Chapter 16: Simultaneous Equations Models

Chapter 17: Limited Dependent Variable Models and Sample Selection Corrections

### Preface

Based on the positive reactions to the first two editions, it appears that my hunch was correct. A growing number of instructors, with a variety of backgrounds and interests, and teaching students with different levels of preparation, have embraced the modern approach to econometrics espoused in this text. Consequently, the structure of the third edition is much like the second, although I describe some notable changes below. The emphasis is still on applying econometrics to real-world problems. Each econometric method is motivated by a particular issue facing researchers analyzing nonexperimental data. The focus in the main text is on understanding and interpreting the assumptions in light of actual empirical applications; the mathematics required is no more than college algebra and basic probability and statistics.

Organized for Today's Econometrics Instructor

The third edition preserves the overall organization of the second edition. The most noticeable feature that distinguishes this text from most others is the separation of topics by the kind of data being analyzed. This is a clear departure from the traditional approach, which presents a linear model, lists all assumptions that may be needed at some future point in the analysis, and then proves or asserts results without clearly connecting them to the assumptions. My approach is to first treat, in Part One, multiple regression analysis with cross-sectional data, under the assumption of random sampling. This setting is natural to students because they are familiar with random sampling from a population from their introductory statistics courses. Importantly, it allows us to distinguish assumptions made about the underlying population regression model--assumptions that can be given economic or behavioral content--from assumptions about how the data were sampled. Discussions about the consequences of nonrandom sampling can be treated in an intuitive fashion after the students have a good grasp of the multiple regression model estimated using random samples.

An important feature of a modern approach is that the explanatory variables--along with the dependent variable--are treated as outcomes of random variables. For the social sciences, allowing random explanatory variables is much more realistic than the traditional assumption of nonrandom explanatory variables. As a nontrivial benefit, the population model/random sampling approach reduces the number of assumptions that students must absorb and understand. Ironically, the classical approach to regression analysis, which treats the explanatory variables as fixed in repeated samples and is pervasive in introductory texts, literally applies to data collected in an experimental setting. In addition, the contortions required to state and explain assumptions can be confusing to students.

My focus on the population model emphasizes that the fundamental assumptions underlying regression analysis, such as the zero mean assumption on the unobservables, are properly stated conditional on the explanatory variables. This leads to a clear understanding of the kinds of problems, such as heteroskedasticity (nonconstant variance), that can invalidate standard inference procedures. Plus, I am able to dispel several misconceptions that arise in econometrics texts at all levels. For example, I explain why the usual R-squared is still valid as a goodness-of-fit measure in the presence of heteroskedasticity (Chapter 8) or serially correlated errors (Chapter 12); I demonstrate that tests for functional form should not be viewed as general tests of omitted variables (Chapter 9); and I explain why one should always include in a regression model extra control variables that are uncorrelated with the explanatory variable of interest, such as a policy variable (Chapter 6).

Because the assumptions for cross-sectional analysis are relatively straightforward yet realistic, students can get involved early with serious cross-sectional applications without having to worry about the thorny issues of trends, seasonality, serial correlation, high persistence, and spurious regression that are ubiquitous in time series regression models. Initially, I figured that my treatment of regression with cross-sectional data followed by regression with time series data would find favor with instructors whose own research interests are in applied microeconomics, and that appears to be the case. It has been gratifying that adopters of the text with an applied time series bent have been equally enthusiastic about the structure of the text. By postponing the econometric analysis of time series data, I am able to put proper focus on the potential pitfalls in analyzing time series data that do not arise with cross-sectional data. In effect, time series econometrics finally gets the serious treatment it deserves in an introductory text.

As in the earlier editions, I have consciously chosen topics that are important for reading journal articles and for conducting basic empirical research. Within each topic, I have deliberately omitted many tests and estimation procedures that, while traditionally included in textbooks, have not withstood the empirical test of time. Likewise, I have emphasized more recent topics that have clearly demonstrated their usefulness, such as obtaining test statistics that are robust to heteroskedasticity (or serial correlation) of unknown form, using multiple years of data for policy analysis, or solving the omitted variable problem by instrumental variables methods. I appear to have made sound choices, as I have received only a few suggestions for adding or deleting material. Like the second edition, the third edition contains an introductory treatment of least absolute deviations estimation (LAD) in Chapter 9. LAD is becoming more and more popular in empirical work, especially when the conditional distribution of the dependent variable is asymmetric or has fat tails. Students reading empirical research in labor economics, public economics, and other fields are more and more likely to run across linear models estimated by LAD.
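One of the "robust to heteroskedasticity of unknown form" tools the preface alludes to is the White (HC0) sandwich estimator of the OLS covariance matrix. The following NumPy sketch is illustrative only--the simulated data and parameter values are my own assumptions, not an example from the book:

```python
import numpy as np

# Simulate a simple model y = 1 + 2*x + u where Var(u | x) depends on x,
# i.e. the errors are heteroskedastic of unknown form.
rng = np.random.default_rng(1)
n = 500
x = rng.uniform(0, 2, size=n)
u = rng.normal(scale=0.5 + x, size=n)      # error variance grows with x
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + u

# OLS coefficients via the normal equations
b = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ b

# White (HC0) sandwich estimator:
#   Var(b) = (X'X)^{-1} X' diag(resid^2) X (X'X)^{-1}
XtX_inv = np.linalg.inv(X.T @ X)
meat = X.T @ (X * resid[:, None] ** 2)
robust_cov = XtX_inv @ meat @ XtX_inv
robust_se = np.sqrt(np.diag(robust_cov))

print("coefficients:", b)                  # close to [1.0, 2.0]
print("robust SEs:  ", robust_se)
```

The point of the sandwich form is that it stays consistent without specifying how the error variance depends on the regressors, which is exactly why such statistics have "withstood the empirical test of time."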

In rewriting segments of the text, I have tried to further improve on the systematic approach of the second edition. By systematic, I mean that each topic is presented by building on the previous material in a logical fashion, and assumptions are introduced only as they are needed to obtain a conclusion. For example, professional users of econometrics understand that not all of the Gauss-Markov assumptions are needed to show that the ordinary least squares (OLS) estimators are unbiased. Yet, the vast majority of econometrics texts introduce the full set of assumptions (many of which are redundant or, in some cases, even logically conflicting) before proving unbiasedness of OLS. Similarly, the normality assumption is often included among the assumptions that are needed for the Gauss-Markov Theorem, even though it is fairly well known that normality plays no role in showing that the OLS estimators are the best linear unbiased estimators.

My systematic approach carries over to studying large sample properties, where assumptions for consistency are introduced only as needed. This makes it relatively easy to cover more advanced topics, such as using pooled cross sections, exploiting panel data structures, and applying instrumental variables methods. I have worked to provide a unified view of econometrics, by which I mean that all estimators and test statistics are obtained using just a few, intuitively reasonable principles of estimation and testing (which, of course, also have rigorous justification). For example, regression-based tests for heteroskedasticity and serial correlation are easy for students to grasp because they already have a solid understanding of regression. This is in contrast to treatments that give a set of disjointed recipes for outdated econometric procedures.

Throughout the text, I emphasize ceteris paribus relationships, which is why, after one chapter on the simple regression model, I move to multiple regression analysis. This motivates students to think about serious applications early. I also give much more prominence to policy analysis with all kinds of data structures. Practical topics, such as using proxy variables to obtain ceteris paribus effects and obtaining standard errors for partial effects in models with interaction terms, are covered in a simple fashion.

New to This Edition

I have made changes in the third edition that are meant to make the text more user-friendly. First, in the earlier editions, some empirical examples could not be replicated (because I did not make the data available) or confirmed by reading a journal article. In the third edition, all empirical results either can be replicated using the included data sets or can be found in a published article. Because replication is more helpful to students, I have changed a few examples so that the numbers can be obtained using a new data set. A notable example is Example 7.7, which studies the effect of "beauty" on wages.

Based on several requests, I have added summaries of assumptions at the end of the relevant chapters (Chapters 3, 4, 10, and 11). Consequently, students now have a quick reference for the assumptions, as well as brief descriptions of how each is used.

An important difference from earlier editions, especially for instructors who have written lecture notes from the first or second edition, is that I have slightly reordered the assumptions for simple and multiple regression (as well as for panel data analysis and instrumental variables estimation in the more advanced part). In particular, I have reversed Assumptions SLR.3 and SLR.4 in Chapter 2 and, likewise, Assumptions MLR.3 and MLR.4 in Chapter 3, as noted below. (Similar changes are made in Chapters 5, 10, and 11.) Pedagogically, the new ordering is more natural, and I give credit to Angelo Melino at the University of Toronto for convincing me to make this change. Especially when reviewing assumptions of multiple regression (which I do often throughout my own course), the new ordering is appealing. In this edition, all assumptions about how the conditional distribution of the unobserved error depends on the observed explanatory variables are grouped together, as MLR.4, MLR.5, and MLR.6. The result is a natural progression in briefly summarizing the importance of each assumption:

MLR.1: Introduce the population model and interpret the parameters (which we hope to estimate).

MLR.2: Introduce random sampling, which also serves to describe the data that we need to estimate the population parameters.

MLR.3: Add the assumption that allows us to compute the estimates from our data sample; this is the so-called "no perfect collinearity" assumption.

MLR.4: Assume that the mean of the unobservable does not depend on the values of the explanatory variables; this is the "zero conditional mean" assumption, and it is the key assumption that delivers unbiasedness of OLS.

After introducing Assumptions MLR.1 to MLR.3, one can discuss the algebraic properties of ordinary least squares--that is, the properties of OLS for a particular set of data. By adding Assumption MLR.4, we can turn to unbiasedness. As in earlier editions, Assumption MLR.5 (homoskedasticity) is added for the Gauss-Markov theorem, and MLR.6 (normality) is added to round out the classical linear model assumptions (for exact statistical inference).
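The progression from MLR.1 to MLR.4 can be made concrete with a small simulation. The sketch below is my own illustration (the parameter values and sample sizes are arbitrary assumptions, not an example from the text): it specifies a population model (MLR.1), draws repeated random samples (MLR.2), computes OLS under no perfect collinearity (MLR.3), and shows that, with a zero conditional mean error (MLR.4), the estimates average out to the true parameters--unbiasedness:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([1.0, 0.5, -2.0])   # true (beta0, beta1, beta2); MLR.1

def ols(X, y):
    # Solve the normal equations; X'X must be invertible, which is
    # exactly the "no perfect collinearity" requirement (MLR.3).
    return np.linalg.solve(X.T @ X, X.T @ y)

n, reps = 200, 2000
estimates = np.empty((reps, 3))
for r in range(reps):
    # Random sampling from the population (MLR.2)
    x1 = rng.normal(size=n)
    x2 = rng.normal(size=n)
    u = rng.normal(size=n)          # E(u | x1, x2) = 0 (MLR.4)
    X = np.column_stack([np.ones(n), x1, x2])
    y = X @ beta + u
    estimates[r] = ols(X, y)

# Averaging over many random samples recovers the true parameters,
# illustrating unbiasedness under MLR.1-MLR.4.
print(estimates.mean(axis=0))       # close to [1.0, 0.5, -2.0]
```

Note that neither homoskedasticity (MLR.5) nor normality (MLR.6) is needed for this result, which is the preface's point about introducing assumptions only as each conclusion requires them.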

Other specific changes include those in Chapter 6, where I expand on how it is possible to get "good" parameter estimates even with poor fit. I appeal to an example with experimental data, where we know we can get unbiased, and even fairly precise, estimators of slope coefficients--even though the R-squared is very small. Also in this chapter I have expanded the discussion of how it is possible to include too many control variables in a multiple regression.