11.8 A Caution about Overreacting to Heteroscedasticity
Reverting to the R&D example discussed in the previous section, we saw that when we used the square root transformation to correct for heteroscedasticity in the original model (11.7.3), the standard error of the slope coefficient decreased and its t value increased. Is this change so significant that one should worry about it in practice? To put the matter differently, when should we really worry about the heteroscedasticity problem? As one author contends, “heteroscedasticity has never been a reason to throw out an otherwise good model.”⁴³
Here it may be useful to bear in mind the caution sounded by John Fox:

. . . unequal error variance is worth correcting only when the problem is severe. The impact of nonconstant error variance on the efficiency of the ordinary least-squares estimator and on the validity of least-squares inference depends on several factors, including the sample size, the degree of variation in the σᵢ², the configuration of the X [i.e., regressor] values, and the relationship between the error variance and the X’s. It is therefore not possible to develop wholly general conclusions concerning the harm produced by heteroscedasticity.⁴⁴
Returning to the model (11.3.1), we saw earlier that the variance of the slope estimator, var(β̂₂), is given by the usual formula shown in (11.2.3). Under GLS the variance of the slope estimator, var(β̂₂*), is given by (11.3.9). We know that the latter is more efficient than the former. But how large does the former (i.e., OLS) variance have to be in relation to the GLS variance before one should really worry about it? As a rule of thumb, Fox suggests that we worry about this problem “. . . when the largest error variance is more than about 10 times the smallest.”⁴⁵
Thus, returning to the Monte Carlo simulation results of Davidson and MacKinnon presented in Section 11.4, consider the value of α = 2. The variance of the estimated β₂ is 0.04 under OLS and 0.012 under GLS, the ratio of the former to the latter thus being about 3.33.⁴⁶ According to the Fox rule, the severity of heteroscedasticity in this case may not be large enough to worry about.
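As a minimal sketch of this back-of-the-envelope check, the variance ratio from the Davidson–MacKinnon case can be computed directly and compared against the 10-fold rule of thumb (the threshold variable name and the mechanical comparison are our own illustration, not part of the text):

```python
# Variances of the estimated beta_2 from the Davidson-MacKinnon
# Monte Carlo results quoted above (alpha = 2).
var_ols = 0.04    # variance under OLS
var_gls = 0.012   # variance under GLS

# Ratio of the OLS variance to the (more efficient) GLS variance.
ratio = var_ols / var_gls
print(f"OLS/GLS variance ratio: {ratio:.2f}")  # about 3.33

# Fox's rule of thumb: worry "when the largest error variance is
# more than about 10 times the smallest."
FOX_THRESHOLD = 10
print("Severe enough to worry?", ratio > FOX_THRESHOLD)  # False
```

Since 3.33 is well below 10, the mechanical check agrees with the text’s conclusion that the problem here may not be severe enough to act on.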
Also remember that, despite heteroscedasticity, OLS estimators are linear and unbiased and are (under general conditions) asymptotically (i.e., in large samples) normally distributed.
As we will see when we discuss other violations of the assumptions of the classical
linear regression model, the caution sounded in this section is appropriate as a general rule.
Otherwise, one can go overboard.
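The unbiasedness point above can be illustrated with a small simulation (all numbers here are our own illustrative choices, not from the text): even when the error variance grows with the regressor, the average of the OLS slope estimates across many samples stays close to the true slope.

```python
# Sketch: OLS remains unbiased under heteroscedasticity,
# though it is no longer efficient. Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
true_beta = 2.0
n, reps = 50, 2000
x = np.linspace(1, 10, n)

estimates = []
for _ in range(reps):
    # Error standard deviation grows with x: a classic
    # heteroscedastic pattern.
    u = rng.normal(0.0, 0.5 * x)
    y = 1.0 + true_beta * x + u
    # OLS slope: cov(x, y) / var(x).
    b2 = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    estimates.append(b2)

print("mean of OLS slope estimates:", np.mean(estimates))  # close to 2.0
```

The mean of the estimates hovers near the true value of 2.0, consistent with the claim that heteroscedasticity leaves OLS unbiased; what it damages is efficiency and the validity of the usual standard errors.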