“lag correlation between two different series.”³
Thus, correlation between two time series such as $u_1, u_2, \ldots, u_{10}$ and $u_2, u_3, \ldots, u_{11}$, where the former is the latter series lagged by one time period, is autocorrelation, whereas correlation between time series such as $u_1, u_2, \ldots, u_{10}$ and $v_2, v_3, \ldots, v_{11}$, where $u$ and $v$ are two different time series, is called serial correlation. Although the distinction between the two terms may be useful, in this book we shall treat them synonymously.
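To make the distinction concrete, here is a minimal sketch in plain NumPy. The series $u$ and $v$ and their length are hypothetical stand-ins for disturbance series; the sketch simply computes the correlation of a series with its own lagged values (autocorrelation) and the correlation of one series with the lagged values of a different series (serial correlation in the strict sense above).

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative disturbance series of length 11 (simulated, hypothetical data).
u = rng.normal(size=11)
v = rng.normal(size=11)

# Autocorrelation: u_1,...,u_10 against u_2,...,u_11 (the series lagged by one period).
auto_corr = np.corrcoef(u[:-1], u[1:])[0, 1]

# Serial correlation (strict sense): u_1,...,u_10 against v_2,...,v_11.
serial_corr = np.corrcoef(u[:-1], v[1:])[0, 1]

print(f"lag-1 autocorrelation of u:      {auto_corr:.3f}")
print(f"lag correlation between u and v: {serial_corr:.3f}")
```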
Let us visualize some of the plausible patterns of auto- and nonautocorrelation, which are given in Figure 12.1. Figures 12.1a to d show that there is a discernible pattern among the $u$'s. Figure 12.1a shows a cyclical pattern; Figures 12.1b and c suggest an upward or downward linear trend in the disturbances; whereas Figure 12.1d indicates that both linear and quadratic trend terms are present in the disturbances. Only Figure 12.1e indicates no systematic pattern, supporting the nonautocorrelation assumption of the classical linear regression model.
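The sketch below is not a reproduction of the textbook's Figure 12.1; it merely simulates the five kinds of disturbance patterns just described (cyclical, upward trend, downward trend, linear-plus-quadratic trend, and no systematic pattern). The functional forms, sample size, and noise level are assumptions chosen only for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(100)
rng = np.random.default_rng(1)
noise = lambda: rng.normal(scale=0.3, size=t.size)

# Simulated disturbance patterns in the spirit of Figure 12.1 (hypothetical data).
patterns = {
    "(a) cyclical": np.sin(2 * np.pi * t / 25) + noise(),
    "(b) upward linear trend": 0.03 * t + noise(),
    "(c) downward linear trend": -0.03 * t + noise(),
    "(d) linear plus quadratic trend": 0.04 * t - 0.0006 * t**2 + noise(),
    "(e) no systematic pattern": noise(),
}

fig, axes = plt.subplots(len(patterns), 1, figsize=(6, 10), sharex=True)
for ax, (title, u) in zip(axes, patterns.items()):
    ax.plot(t, u)
    ax.axhline(0, linewidth=0.5)
    ax.set_title(title, fontsize=9)
plt.tight_layout()
plt.show()
```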
The natural question is: Why does serial correlation occur? There are several reasons,
some of which are as follows:
Inertia
A salient feature of most economic time series is inertia, or sluggishness. As is well known,
time series such as GNP, price indexes, production, employment, and unemployment exhibit
(business) cycles. Starting at the bottom of the recession, when economic recovery starts,
most of these series start moving upward. In this upswing, the value of a series at one point
in time is greater than its previous value. Thus there is a “momentum’’ built into them, and
it continues until something happens (e.g., increase in interest rate or taxes or both) to slow
them down. Therefore, in regressions involving time series data, successive observations are
likely to be interdependent.
Specification Bias: Excluded Variables Case
In empirical analysis the researcher often starts with a plausible regression model that may not be the most “perfect” one. After the regression analysis, the researcher does the postmortem to find out whether the results accord with a priori expectations. If not, surgery is begun. For example, the researcher may plot the residuals $\hat{u}_i$ obtained from the fitted regression and may observe patterns such as those shown in Figure 12.1a to d. These residuals (which are proxies for $u_i$) may suggest that some variables that were originally candidates but were not included in the model for a variety of reasons should be included.
This is the case of excluded variable specification bias.
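A minimal simulation sketch of this point follows; all coefficients and series are hypothetical. The true model contains a trending regressor x2, the fitted model omits it, and the omitted trend reappears in the residuals, which then look “autocorrelated” even though the true disturbances are white noise.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
t = np.arange(n)
x1 = rng.normal(size=n)
x2 = 0.05 * t                                   # an omitted, trending variable
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(scale=0.5, size=n)

# Misspecified fit: regress y on x1 only (with an intercept), excluding x2.
X = np.column_stack([np.ones(n), x1])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# The omitted trend is absorbed by the residuals, producing a systematic pattern.
lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(f"lag-1 correlation of residuals from the misspecified model: {lag1:.3f}")
```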