(K)

$$\frac{\partial \ln L}{\partial \sigma^{2}} = -\frac{n}{2\sigma^{2}} + \frac{1}{2\sigma^{4}} \sum \bigl(Y_i - \beta_1 - \beta_2 X_{2i} - \cdots - \beta_k X_{ki}\bigr)^{2} \qquad (K+1)$$
Setting these equations equal to zero (the first-order condition for optimization) and letting $\tilde{\beta}_1, \tilde{\beta}_2, \ldots, \tilde{\beta}_k$ and $\tilde{\sigma}^2$ denote the ML estimators, we obtain, after simple algebraic manipulations,
$$\sum Y_i = n\tilde{\beta}_1 + \tilde{\beta}_2 \sum X_{2i} + \cdots + \tilde{\beta}_k \sum X_{ki}$$
$$\sum Y_i X_{2i} = \tilde{\beta}_1 \sum X_{2i} + \tilde{\beta}_2 \sum X_{2i}^{2} + \cdots + \tilde{\beta}_k \sum X_{2i} X_{ki}$$
$$\vdots$$
$$\sum Y_i X_{ki} = \tilde{\beta}_1 \sum X_{ki} + \tilde{\beta}_2 \sum X_{2i} X_{ki} + \cdots + \tilde{\beta}_k \sum X_{ki}^{2}$$
which are precisely the normal equations of the least-squares theory, as can be seen from Appendix 7A, Section 7A.1. Therefore, the ML estimators, the $\tilde{\beta}$'s, are the same as the OLS estimators, the $\hat{\beta}$'s, given previously. But as noted in Chapter 4, Appendix 4A, this equality is not accidental.
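As an aside, a minimal numerical sketch (not from the text; the data here are hypothetical and simulated) of the fact that solving these normal equations reproduces the least-squares solution:

```python
# Sketch: the normal equations (X'X) b = X'y give the same coefficients as OLS.
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3                        # n observations, k coefficients (incl. intercept)
X = np.column_stack([np.ones(n),    # column of 1s for the intercept beta_1
                     rng.normal(size=(n, k - 1))])
y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=0.2, size=n)

beta_normal_eq = np.linalg.solve(X.T @ X, X.T @ y)   # solve the normal equations
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares solution

print(np.allclose(beta_normal_eq, beta_ols))         # True: ML (= normal equations) = OLS
```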
Substituting the ML ($=$ OLS) estimators into the $(K+1)$st equation just given, we obtain, after simplification, the ML estimator of $\sigma^2$ as

$$\tilde{\sigma}^2 = \frac{1}{n} \sum \bigl(Y_i - \tilde{\beta}_1 - \tilde{\beta}_2 X_{2i} - \cdots - \tilde{\beta}_k X_{ki}\bigr)^{2} = \frac{1}{n} \sum \hat{u}_i^{2}$$
As noted in the text, this estimator differs from the OLS estimator $\hat{\sigma}^2 = \sum \hat{u}_i^{2}/(n-k)$. And since the latter is an unbiased estimator of $\sigma^2$, this conclusion implies that the ML estimator $\tilde{\sigma}^2$ is a biased estimator. But, as can be readily verified, asymptotically, $\tilde{\sigma}^2$ is unbiased too.
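A small Monte Carlo sketch (again with hypothetical simulated data, not from the text) of this small-sample bias of the ML variance estimator relative to the OLS estimator:

```python
# Sketch: ML estimator of sigma^2 divides by n (biased downward in small samples),
# the OLS estimator divides by n - k (unbiased).
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma2_true = 40, 3, 0.25
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

ml_vals, ols_vals = [], []
for _ in range(5000):
    y = X @ np.array([1.0, 0.5, -0.3]) + rng.normal(scale=sigma2_true**0.5, size=n)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta_hat) ** 2)
    ml_vals.append(rss / n)          # ML estimator
    ols_vals.append(rss / (n - k))   # OLS estimator

print(np.mean(ml_vals))    # roughly ((n - k)/n) * 0.25 = 0.231  (biased downward)
print(np.mean(ols_vals))   # roughly 0.25                        (unbiased)
```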
7A.5  EViews Output of the Cobb–Douglas Production Function in Equation (7.9.4)
Dependent Variable: Y1
Method: Least Squares
Included observations: 51

                      Coefficient    Std. Error    t-Statistic    Prob.
C                       3.887600      0.396228       9.811514     0.0000
Y2                      0.468332      0.098926       4.734170     0.0000
Y3                      0.521279      0.096887       5.380274     0.0000

R-squared               0.964175    Mean dependent var.         16.94139
Adjusted R-squared      0.962683    S.D. dependent var.          1.380870
S.E. of regression      0.266752    Akaike info criterion        0.252028
Sum squared resid.      3.415520    Schwarz criterion            0.365665
Log likelihood         −3.426721    Hannan-Quinn criterion       0.295452
F-statistic           645.9311      Durbin-Watson stat.          1.946387
Prob. (F-statistic)     0.000000
Covariance of Estimates

            C           Y2           Y3
C       0.156997     0.010364    −0.020014
Y2      0.010364     0.009786    −0.009205
Y3     −0.020014    −0.009205     0.009387
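A minimal sketch (not part of the original output) of how this EViews regression could be reproduced in Python with statsmodels, assuming the 51 observations listed in the data table below have been saved to a CSV file with columns Y, X2, X3; the file name is hypothetical:

```python
# Sketch: replicate the Cobb-Douglas regression of Y1 = ln Y on Y2 = ln X2 and
# Y3 = ln X3 with an intercept (assumed data file).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("table_7_3.csv", thousands=",")   # hypothetical file with Y, X2, X3
y1 = np.log(df["Y"])                               # Y1 = ln Y
X = sm.add_constant(np.log(df[["X2", "X3"]]))      # C, Y2 = ln X2, Y3 = ln X3

res = sm.OLS(y1, X).fit()
print(res.summary())       # coefficients, std. errors, t-statistics, R-squared, etc.
print(res.cov_params())    # covariance matrix of the estimates
# Fitted values and residuals correspond to the Y1HAT and Y1RESID columns below.
print(res.fittedvalues, res.resid)
```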
      Y             X2           X3          Y1        Y2        Y3      Y1HAT    Y1RESID

 38,372,840       424,471    2,689,076    17.4629   12.9586   14.8047   17.6739   −0.2110
  1,805,427        19,895       57,997    14.4063    9.8982   10.9681   14.2407    0.1656
 23,736,129       206,893    2,308,272    16.9825   12.2400   14.6520   17.2577   −0.2752
 26,981,983       304,055    1,376,235    17.1107   12.6250   14.1349   17.1685   −0.0578
217,546,032     1,809,756   13,554,116    19.1979   14.4087   16.4222   19.1962    0.0017
 19,462,751       180,366    1,790,751    16.7840   12.1027   14.3981   17.0612   −0.2771
 28,972,772       224,267    1,210,229    17.1819   12.3206   14.0063   16.9589    0.2229
 14,313,157        54,455      421,064    16.4767   10.9051   12.9505   15.7457    0.7310
    159,921         2,029        7,188    11.9824    7.6153    8.8802   12.0831   −0.1007
 47,289,846       471,211    2,761,281    17.6718   13.0631   14.8312   17.7366   −0.0648
 63,015,125       659,379    3,540,475    17.9589   13.3991   15.0798   18.0236   −0.0647
  1,809,052        17,528      146,371    14.4083    9.7716   11.8939   14.6640   −0.2557
 10,511,786        75,414      848,220    16.1680   11.2307   13.6509   16.2632   −0.0952
105,324,866       963,156    5,870,409    18.4726   13.7780   15.5854   18.4646    0.0079
 90,120,459       835,083    5,832,503    18.3167   13.6353   15.5790   18.3944   −0.0778
 39,079,550       336,159    1,795,976    17.4811   12.7253   14.4011   17.3543    0.1269
 22,826,760       246,144    1,595,118    16.9434   12.4137   14.2825   17.1465   −0.2030
 38,686,340       384,484    2,503,693    17.4710   12.8597   14.7333   17.5903   −0.1193
 69,910,555       216,149    4,726,625    18.0627   12.2837   15.3687   17.6519    0.4109
  7,856,947        82,021      415,131    15.8769   11.3147   12.9363   15.9301   −0.0532
 21,352,966       174,855    1,729,116    16.8767   12.0717   14.3631   17.0284   −0.1517
 46,044,292       355,701    2,706,065    17.6451   12.7818   14.8110   17.5944    0.0507
 92,335,528       943,298    5,294,356    18.3409   13.7571   15.4822   18.4010   −0.0601
 48,304,274       456,553    2,833,525    17.6930   13.0315   14.8570   17.7353   −0.0423
 17,207,903       267,806    1,212,281    16.6609   12.4980   14.0080   17.0429   −0.3820
 47,340,157       439,427    2,404,122    17.6729   12.9932   14.6927   17.6317    0.0411
  2,644,567        24,167      334,008    14.7880   10.0927   12.7189   15.2445   −0.4564
 14,650,080       163,637      627,806    16.5000   12.0054   13.3500   16.4692    0.0308
  7,290,360        59,737      522,335    15.8021   10.9977   13.1661   15.9014   −0.0993
  9,188,322        96,106      507,488    16.0334   11.4732   13.1372   16.1090   −0.0756
 51,298,516       407,076    3,295,056    17.7532   12.9168   15.0079   17.7603   −0.0071
 20,401,410        43,079      404,749    16.8311   10.6708   12.9110   15.6153    1.2158
 87,756,129       727,177    4,260,353    18.2901   13.4969   15.2649   18.1659    0.1242
101,268,432       820,013    4,086,558    18.4333   13.6171   15.2232   18.2005    0.2328
  3,556,025        34,723      184,700    15.0842   10.4552   12.1265   15.1054   −0.0212
124,986,166     1,174,540    6,301,421    18.6437   13.9764   15.6563   18.5945    0.0492
 20,451,196       201,284    1,327,353    16.8336   12.2125   14.0987   16.9564   −0.1229
 34,808,109       257,820    1,456,683    17.3654   12.4600   14.1917   17.1208    0.2445
104,858,322       944,998    5,896,392    18.4681   13.7589   15.5899   18.4580    0.0101
  6,541,356        68,987      297,618    15.6937   11.1417   12.6036   15.6756    0.0181
 37,668,126       400,317    2,500,071    17.4443   12.9000   14.7318   17.6085   −0.1642
  4,988,905        56,524      311,251    15.4227   10.9424   12.6484   15.6056   −0.1829
 62,828,100       582,241    4,126,465    17.9559   13.2746   15.2329   18.0451   −0.0892
172,960,157     1,120,382   11,588,283    18.9686   13.9292   16.2655   18.8899    0.0786
 15,702,637       150,030      762,671    16.5693   11.9186   13.5446   16.5300    0.0394
  5,418,786        48,134      276,293    15.5054   10.7817   12.5292   15.4683    0.0371
 49,166,991       425,346    2,731,669    17.7107   12.9607   14.8204   17.6831    0.0277
 46,164,427       313,279    1,945,860    17.6477   12.6548   14.4812   17.3630    0.2847
  9,185,967        89,639      685,587    16.0332   11.4035   13.4380   16.2332   −0.2000
 66,964,978       694,628    3,902,823    18.0197   13.4511   15.1772   18.0988   −0.0791
  2,979,475        15,221      361,536    14.9073    9.6304   12.7981   15.0692   −0.1620
Notes: Y1 = ln Y; Y2 = ln X2; Y3 = ln X3.
The eigenvalues are 3.7861 and 187.5269, which will be used in Chapter 10.
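The note does not say which matrix these two eigenvalues belong to. Assuming, as in the condition-index diagnostics of Chapter 10, that eigenvalues of a regressor cross-product matrix are meant, a hypothetical sketch of the computation follows (the data file name is again assumed, and the exact matrix/scaling behind the two quoted values is not reproduced here):

```python
# Sketch: eigenvalues of the log-regressors' cross-product matrix and the
# resulting condition number/index, as used in Chapter 10.
import numpy as np
import pandas as pd

df = pd.read_csv("table_7_3.csv", thousands=",")   # assumed file with Y, X2, X3
Z = np.log(df[["X2", "X3"]]).to_numpy()            # Y2 = ln X2, Y3 = ln X3

eigvals = np.linalg.eigvalsh(Z.T @ Z)              # eigenvalues of Z'Z (symmetric)
cond_number = eigvals.max() / eigvals.min()        # k = lambda_max / lambda_min
print(eigvals, cond_number, np.sqrt(cond_number))  # condition index = sqrt(k)
```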
Chapter 8
Multiple Regression Analysis: The Problem of Inference
This chapter, a continuation of Chapter 5, extends the ideas of interval estimation and hypothesis testing developed there to models involving three or more variables. Although in many ways the concepts developed in Chapter 5 can be applied straightforwardly to the multiple regression model, a few additional features are unique to such models, and it is these features that will receive more attention in this chapter.
8.1  The Normality Assumption Once Again
We know by now that if our sole objective is point estimation of the parameters of the regression models, the method of ordinary least squares (OLS), which does not make any assumption about the probability distribution of the disturbances $u_i$, will suffice. But if our objective is estimation as well as inference, then, as argued in Chapters 4 and 5, we need to assume that the $u_i$ follow some probability distribution.

For reasons already clearly spelled out, we assumed that the $u_i$ follow the normal distribution with zero mean and constant variance $\sigma^2$. We continue to make the same assumption for multiple regression models. With the normality assumption and following the discussion of Chapters 4 and 7, we find that the OLS estimators of the partial regression coefficients, which are identical with the maximum likelihood (ML) estimators, are best linear unbiased estimators (BLUE).¹ Moreover, the estimators $\hat{\beta}_2$, $\hat{\beta}_3$, and $\hat{\beta}_1$ are themselves normally distributed with means equal to true $\beta_2$, $\beta_3$, and $\beta_1$ and the variances given in Chapter 7. Furthermore, $(n-3)\hat{\sigma}^2/\sigma^2$ follows the $\chi^2$ distribution with $n-3$ df, and the three OLS estimators are distributed independently of $\hat{\sigma}^2$. The proofs follow the two-variable case discussed in Appendix 3A, Section 3A. As a result and following Chapter 5,
¹ With the normality assumption, the OLS estimators $\hat{\beta}_2$, $\hat{\beta}_3$, and $\hat{\beta}_1$ are minimum-variance estimators in the entire class of unbiased estimators, whether linear or not. In short, they are BUE (best unbiased estimators). See C. R. Rao, Linear Statistical Inference and Its Applications, John Wiley & Sons, New York, 1965, p. 258.
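A minimal Monte Carlo sketch (not from the text; all numbers here are illustrative simulated values) of the distributional claims stated in Section 8.1: with normal errors and two regressors plus an intercept, $(n-3)\hat{\sigma}^2/\sigma^2$ should behave like a $\chi^2$ variable with $n-3$ df, and the slope estimators should be centered on their true values with an approximately normal distribution:

```python
# Sketch: simulate repeated samples from a 3-coefficient normal regression model
# and check the chi-square behaviour of (n - 3) * sigma_hat^2 / sigma^2.
import numpy as np

rng = np.random.default_rng(42)
n, sigma = 30, 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])  # const, X2, X3
beta_true = np.array([1.0, 0.5, -0.3])

beta2_draws, q_draws = [], []
for _ in range(20000):
    y = X @ beta_true + rng.normal(scale=sigma, size=n)
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    sigma2_hat = resid @ resid / (n - 3)              # unbiased OLS estimator
    beta2_draws.append(b[1])
    q_draws.append((n - 3) * sigma2_hat / sigma**2)

print(np.mean(q_draws), np.var(q_draws))   # ~ n-3 = 27 and ~ 2(n-3) = 54 (chi-square moments)
print(np.mean(beta2_draws))                # ~ 0.5; a histogram of the draws looks normal
```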