Therefore,
$$\bar{Y} = \beta_1 + \beta_2 \bar{X} + \bar{u} \tag{10}$$
Subtracting Equation (10) from Equation (9) gives
$$y_i = \beta_2 x_i + (u_i - \bar{u}) \tag{11}$$
Also recall that
$$\hat{u}_i = y_i - \hat{\beta}_2 x_i \tag{12}$$
Therefore, substituting Equation (11) into Equation (12) yields
$$\hat{u}_i = \beta_2 x_i + (u_i - \bar{u}) - \hat{\beta}_2 x_i \tag{13}$$
Collecting terms, squaring, and summing on both sides, we obtain
$$\sum \hat{u}_i^2 = (\hat{\beta}_2 - \beta_2)^2 \sum x_i^2 + \sum (u_i - \bar{u})^2 - 2(\hat{\beta}_2 - \beta_2) \sum x_i (u_i - \bar{u}) \tag{14}$$
Taking expectations on both sides gives
$$
\begin{aligned}
E\Bigl(\sum \hat{u}_i^2\Bigr) &= \sum x_i^2\, E(\hat{\beta}_2 - \beta_2)^2 + E\sum (u_i - \bar{u})^2 - 2E\Bigl[(\hat{\beta}_2 - \beta_2)\sum x_i (u_i - \bar{u})\Bigr] \\
&= \sum x_i^2 \operatorname{var}(\hat{\beta}_2) + (n - 1)\operatorname{var}(u_i) - 2E\Bigl[\sum k_i u_i \Bigl(\sum x_i u_i\Bigr)\Bigr] \\
&= \sigma^2 + (n - 1)\sigma^2 - 2E\Bigl(\sum k_i x_i u_i^2\Bigr) \\
&= \sigma^2 + (n - 1)\sigma^2 - 2\sigma^2 \\
&= (n - 2)\sigma^2
\end{aligned} \tag{15}
$$
where, in the last but one step, use is made of the definition of $k_i$ given in Eq. (3) and the relation given in Eq. (4). Also note that
$$
\begin{aligned}
E\sum (u_i - \bar{u})^2 &= E\Bigl(\sum u_i^2 - n\bar{u}^2\Bigr) = E\sum u_i^2 - nE\Bigl(\frac{\sum u_i}{n}\Bigr)^2 \\
&= E\sum u_i^2 - \frac{1}{n}E\Bigl(\sum u_i\Bigr)^2 = n\sigma^2 - \frac{n}{n}\sigma^2 = (n - 1)\sigma^2
\end{aligned}
$$
where use is made of the fact that the $u_i$ are uncorrelated and the variance of each $u_i$ is $\sigma^2$.
Thus, we obtain
$$E\Bigl(\sum \hat{u}_i^2\Bigr) = (n - 2)\sigma^2 \tag{16}$$
Therefore, if we define
$$\hat{\sigma}^2 = \frac{\sum \hat{u}_i^2}{n - 2} \tag{17}$$
its expected value is
$$E(\hat{\sigma}^2) = \frac{1}{n - 2}\, E\Bigl(\sum \hat{u}_i^2\Bigr) = \sigma^2 \qquad \text{using Equation (16)} \tag{18}$$
which shows that $\hat{\sigma}^2$ is an unbiased estimator of the true $\sigma^2$.
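The result in Eq. (18) is easy to check numerically. Below is a minimal Monte Carlo sketch (assuming NumPy, normally distributed errors, and illustrative parameter values $\beta_1 = 2$, $\beta_2 = 0.5$, $\sigma^2 = 4$, none of which come from the text): across many simulated samples, the average of $\hat{\sigma}^2$ computed with the $n - 2$ divisor settles near the true $\sigma^2$.

```python
import numpy as np

# Monte Carlo check of Eq. (18): E(sigma_hat^2) = sigma^2.
# beta1, beta2, sigma2, and the X grid are illustrative assumptions.
rng = np.random.default_rng(0)
beta1, beta2, sigma2, n, reps = 2.0, 0.5, 4.0, 20, 20_000
X = np.linspace(1.0, 10.0, n)           # fixed (nonstochastic) regressors

estimates = np.empty(reps)
for r in range(reps):
    u = rng.normal(0.0, np.sqrt(sigma2), n)
    Y = beta1 + beta2 * X + u
    x, y = X - X.mean(), Y - Y.mean()   # deviations from sample means
    b2 = (x * y).sum() / (x * x).sum()  # OLS slope
    b1 = Y.mean() - b2 * X.mean()       # OLS intercept
    resid = Y - b1 - b2 * X
    estimates[r] = (resid**2).sum() / (n - 2)   # sigma_hat^2, Eq. (17)

print(estimates.mean())   # ~4.0: dividing by n - 2 makes the estimator unbiased
```

Replacing the divisor $n - 2$ with $n$ in the last line of the loop biases the average downward, which is precisely what Eq. (16) predicts.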
3A.6 Minimum-Variance Property of Least-Squares Estimators
It was shown in Appendix 3A, Section 3A.2, that the least-squares estimator $\hat{\beta}_2$ is linear as well as unbiased (this holds true of $\hat{\beta}_1$ too). To show that these estimators are also minimum variance in the class of all linear unbiased estimators, consider the least-squares estimator $\hat{\beta}_2$:
$$\hat{\beta}_2 = \sum k_i Y_i$$
where
$$k_i = \frac{X_i - \bar{X}}{\sum (X_i - \bar{X})^2} = \frac{x_i}{\sum x_i^2} \qquad \text{(see Appendix 3A.2)} \tag{19}$$
which shows that $\hat{\beta}_2$ is a weighted average of the $Y$'s, with $k_i$ serving as the weights.
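As a quick numerical illustration of Eq. (19) (a sketch assuming NumPy and arbitrary illustrative data, not part of the original text), computing the weights $k_i$ directly and forming the weighted average $\sum k_i Y_i$ recovers the usual OLS slope:

```python
import numpy as np

# Illustrative data; the point is only the algebraic identity in Eq. (19).
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 10.0, 30)
Y = 2.0 + 0.5 * X + rng.normal(0.0, 1.0, 30)

x = X - X.mean()
k = x / (x * x).sum()                 # least-squares weights k_i
b2_weighted = (k * Y).sum()           # beta_2 hat as a weighted average of the Y's
b2_ols = (x * (Y - Y.mean())).sum() / (x * x).sum()
print(np.isclose(b2_weighted, b2_ols))   # True: the two computations agree
```

Note that $\sum k_i = 0$, which is why weighting the raw $Y_i$ and the deviations $y_i$ gives the same answer.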
Let us define an alternative linear estimator of $\beta_2$ as follows:
$$\beta_2^* = \sum w_i Y_i \tag{20}$$
where $w_i$ are also weights, not necessarily equal to $k_i$.
Now
$$
\begin{aligned}
E(\beta_2^*) &= \sum w_i E(Y_i) = \sum w_i (\beta_1 + \beta_2 X_i) \\
&= \beta_1 \sum w_i + \beta_2 \sum w_i X_i
\end{aligned} \tag{21}
$$
Therefore, for $\beta_2^*$ to be unbiased, we must have
$$\sum w_i = 0 \tag{22}$$
and
$$\sum w_i X_i = 1 \tag{23}$$
Also, we may write
$$
\begin{aligned}
\operatorname{var}(\beta_2^*) &= \operatorname{var}\Bigl(\sum w_i Y_i\Bigr) \\
&= \sum w_i^2 \operatorname{var}(Y_i) \qquad [\textit{Note:}\ \operatorname{var} Y_i = \operatorname{var} u_i = \sigma^2] \\
&= \sigma^2 \sum w_i^2 \qquad\qquad [\textit{Note:}\ \operatorname{cov}(Y_i, Y_j) = 0 \ (i \ne j)] \\
&= \sigma^2 \sum \Bigl(w_i - \frac{x_i}{\sum x_i^2} + \frac{x_i}{\sum x_i^2}\Bigr)^2 \qquad \text{(note the mathematical trick)} \\
&= \sigma^2 \sum \Bigl(w_i - \frac{x_i}{\sum x_i^2}\Bigr)^2 + \sigma^2 \frac{\sum x_i^2}{\bigl(\sum x_i^2\bigr)^2} + 2\sigma^2 \sum \Bigl(w_i - \frac{x_i}{\sum x_i^2}\Bigr)\Bigl(\frac{x_i}{\sum x_i^2}\Bigr) \\
&= \sigma^2 \sum \Bigl(w_i - \frac{x_i}{\sum x_i^2}\Bigr)^2 + \sigma^2 \frac{1}{\sum x_i^2}
\end{aligned} \tag{24}
$$
because the last term in the next to the last step drops out. (Why?)
Since the last term in Equation (24) is constant, the variance of $\beta_2^*$ can be minimized only by manipulating the first term. If we let
$$w_i = \frac{x_i}{\sum x_i^2}$$
Eq. (24) reduces to
$$\operatorname{var}(\beta_2^*) = \frac{\sigma^2}{\sum x_i^2} = \operatorname{var}(\hat{\beta}_2) \tag{25}$$
In words, with weights $w_i = k_i$, which are the least-squares weights, the variance of the linear estimator $\beta_2^*$ is equal to the variance of the least-squares estimator $\hat{\beta}_2$; otherwise $\operatorname{var}(\beta_2^*) > \operatorname{var}(\hat{\beta}_2)$. To put it differently, if there is a minimum-variance linear unbiased estimator of $\beta_2$, it must be the least-squares estimator. Similarly it can be shown that $\hat{\beta}_1$ is a minimum-variance linear unbiased estimator of $\beta_1$.
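The strict inequality $\operatorname{var}(\beta_2^*) > \operatorname{var}(\hat{\beta}_2)$ for any $w_i \ne k_i$ can also be seen in simulation. The sketch below (a hypothetical illustration assuming NumPy, normal errors, and illustrative parameters) perturbs the least-squares weights by a component $d$ chosen so that the unbiasedness conditions (22) and (23) still hold; the alternative estimator remains unbiased but has a strictly larger variance, exactly as Eq. (24) predicts:

```python
import numpy as np

# Illustrative parameters; normal errors are an assumption for the simulation.
rng = np.random.default_rng(2)
beta1, beta2, sigma, n, reps = 2.0, 0.5, 2.0, 15, 50_000
X = np.linspace(1.0, 15.0, n)
x = X - X.mean()
k = x / (x * x).sum()                     # least-squares weights

# Perturb k while preserving conditions (22)-(23): add a component d with
# sum(d) = 0 and sum(d * x) = 0, so that sum(w) = 0 and sum(w * X) = 1.
v = rng.normal(size=n)
d = v - v.mean() - ((v * x).sum() / (x * x).sum()) * x
w = k + 0.5 * d                           # an alternative set of unbiased weights

b2_ols, b2_alt = np.empty(reps), np.empty(reps)
for r in range(reps):
    Y = beta1 + beta2 * X + rng.normal(0.0, sigma, n)
    b2_ols[r] = (k * Y).sum()
    b2_alt[r] = (w * Y).sum()

print(b2_ols.mean(), b2_alt.mean())   # both ~0.5: both estimators are unbiased
print(b2_ols.var(), b2_alt.var())     # the OLS variance is strictly smaller
```

The extra variance of the alternative estimator is $\sigma^2 \sum (w_i - k_i)^2$, the first term of Eq. (24), which vanishes only when $w_i = k_i$.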
3A.7 Consistency of Least-Squares Estimators
We have shown that, in the framework of the classical linear regression model, the least-squares estimators are unbiased (and efficient) in any sample size, small or large. But sometimes, as discussed in Appendix A, an estimator may not satisfy one or more desirable statistical properties in small samples. But as the sample size increases indefinitely, the estimators possess several desirable statistical properties. These properties are known as the large sample, or asymptotic, properties. In this appendix, we will discuss one large sample property, namely, the property of consistency.
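As a forward-looking illustration of that property (a hypothetical sketch assuming NumPy and illustrative parameters, not taken from the original text), the OLS slope computed on progressively larger samples collapses toward the true value $\beta_2 = 0.5$:

```python
import numpy as np

# Illustrative parameters; the true slope is beta2 = 0.5.
rng = np.random.default_rng(3)
beta1, beta2 = 2.0, 0.5
for n in (10, 100, 1_000, 10_000):
    X = rng.uniform(0.0, 10.0, n)
    Y = beta1 + beta2 * X + rng.normal(0.0, 2.0, n)
    x, y = X - X.mean(), Y - Y.mean()
    print(n, (x * y).sum() / (x * x).sum())   # the estimate tightens around 0.5
```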