3.0 MAIN CONTENT
3.1 The Method of Ordinary Least Squares (OLS)
The method of ordinary least squares is attributed to Carl Friedrich Gauss, a German mathematician. Under certain assumptions, the method of least squares has some very attractive statistical properties that have made it one of the most powerful and popular methods of regression analysis. To understand it, we first explain the least squares principle.
Recall the two-variable model:

$$Y_i = \beta_0 + \beta_1 X_i + u_i \quad (1)$$

where $Y_i$ is called the dependent variable and $X_i$ the independent or explanatory variable. Equation (1) is not directly observable. However, we can gather data and obtain estimates of $\beta_0$ and $\beta_1$ from a sample of the population. This gives us the following relationship, a fitted straight line with intercept $\hat{\beta}_0$ and slope $\hat{\beta}_1$:

$$\hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_i \quad (2)$$
Equation (2) can be referred to as the sample regression equation. Here $\hat{\beta}_0$ and $\hat{\beta}_1$ are sample estimates of the population parameters $\beta_0$ and $\beta_1$, and $\hat{Y}_i$ denotes the predicted value of $Y_i$. Once we have the estimated sample regression equation, we can easily predict $Y$ for various values of $X$.
When we fit a sample regression line to a scatter of points, it is obviously desirable to select the line in such a manner that it is as close as possible to the actual $Y$ or, in other words, that it yields the smallest possible residuals. To do this we adopt the following criterion: choose the sample regression function in such a way that the sum of the squared residuals is as small as possible (that is, minimized).
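To make the criterion concrete, the following minimal Python sketch (the sample data and the candidate lines are assumptions, chosen purely for illustration) computes the residual sum of squares for a few candidate lines and shows that a least-squares fit attains a smaller value than any of them.

```python
import numpy as np

# Hypothetical sample data, assumed purely for illustration
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

def rss(b0, b1):
    """Residual sum of squares for the candidate line Y_hat = b0 + b1*X."""
    residuals = Y - (b0 + b1 * X)
    return np.sum(residuals ** 2)

# A few arbitrary candidate lines
for b0, b1 in [(0.0, 1.5), (1.0, 1.8), (0.5, 2.0)]:
    print(f"b0={b0}, b1={b1}: RSS={rss(b0, b1):.3f}")

# np.polyfit performs a least-squares fit (returns slope first for degree 1)
b1_ols, b0_ols = np.polyfit(X, Y, 1)
print(f"OLS: b0={b0_ols:.3f}, b1={b1_ols:.3f}, RSS={rss(b0_ols, b1_ols):.3f}")
```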
3.2 Properties of OLS
This method of estimation has some desirable properties that make it the most popular
technique in uncomplicated applications of regression analysis, namely:
1. By using the squared residuals we eliminate the effect of the sign of the residuals, so it is not possible for a positive and a negative residual to offset each other. For example, we could minimize the raw sum of the residuals by setting the forecast of $Y_i$ arbitrarily high ($\hat{Y}_i \to \infty$), driving the sum towards minus infinity, but this would not be a very well-fitting line at all. So clearly we want a transformation that gives all the residuals the same sign before making them as small as possible.
2. By squaring the residuals, we give more weight to the larger residuals and so, in effect, we work harder to reduce the very large errors.
3. The OLS method chooses $\hat{\beta}_0$ and $\hat{\beta}_1$ estimates that have certain desirable numerical and statistical properties (such as unbiasedness and efficiency).

Let us now see how to derive the OLS estimators. Denoting by RSS the residual sum of squares, we have:

$$RSS = \sum \hat{u}_i^2 \quad (3)$$
However, we know that:

$$\hat{u}_i = Y_i - \hat{Y}_i = Y_i - (\hat{\beta}_0 + \hat{\beta}_1 X_i) \quad (4)$$
and therefore:

$$\sum \hat{u}_i^2 = \sum (Y_i - \hat{Y}_i)^2 = \sum (Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_i)^2 \quad (5)$$
To minimize equation (5), the first-order condition is to take the partial derivatives of RSS with respect to $\hat{\beta}_0$ and $\hat{\beta}_1$ and set them equal to zero.
Thus, we have:

$$\frac{\partial RSS}{\partial \hat{\beta}_0} = -2 \sum (Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_i) = 0 \quad (6)$$

and

$$\frac{\partial RSS}{\partial \hat{\beta}_1} = -2 \sum X_i (Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_i) = 0 \quad (7)$$
The second-order partial derivatives are:

$$\frac{\partial^2 RSS}{\partial \hat{\beta}_0^2} = 2n \quad (8)$$

$$\frac{\partial^2 RSS}{\partial \hat{\beta}_1^2} = 2 \sum X_i^2 \quad (9)$$

Both are positive, so it is easy to verify that the second-order conditions for a minimum are met.
Since

$$\sum \hat{u}_i = \sum (Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_i) = 0 \quad (10)$$

(for simplicity of notation we omit the upper and lower limits of the summation symbol), we can, by using this and rearranging, rewrite equations (6) and (7) as follows:

$$\sum Y_i = n \hat{\beta}_0 + \hat{\beta}_1 \sum X_i \quad (11)$$

$$\sum X_i Y_i = \hat{\beta}_0 \sum X_i + \hat{\beta}_1 \sum X_i^2 \quad (12)$$
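Viewed in matrix form, the normal equations (11) and (12) are a 2x2 linear system in the two estimates, so they can also be solved numerically. A minimal sketch, reusing the sample data assumed in the earlier example:

```python
import numpy as np

# Hypothetical sample data (same as in the earlier sketch)
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n = len(X)

# Normal equations (11) and (12) written as A @ [b0, b1] = c
A = np.array([[n,       X.sum()],
              [X.sum(), (X ** 2).sum()]])
c = np.array([Y.sum(), (X * Y).sum()])

b0_hat, b1_hat = np.linalg.solve(A, c)
print(f"b0_hat={b0_hat:.4f}, b1_hat={b1_hat:.4f}")
```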
The only unknowns in these two equations are $\hat{\beta}_0$ and $\hat{\beta}_1$. Therefore, we can solve this system of two equations in two unknowns to obtain $\hat{\beta}_0$ and $\hat{\beta}_1$. First, we divide both sides of equation (11) by $n$ to get:

$$\frac{\sum Y_i}{n} = \hat{\beta}_0 + \hat{\beta}_1 \frac{\sum X_i}{n} \quad (13)$$
Denoting $\bar{Y} = \sum Y_i / n$ and $\bar{X} = \sum X_i / n$ and rearranging, we obtain:

$$\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X} \quad (14)$$
Substituting equation (14) into equation (12), we get:

$$\sum X_i Y_i = (\bar{Y} - \hat{\beta}_1 \bar{X}) \sum X_i + \hat{\beta}_1 \sum X_i^2 \quad (15)$$

or

$$\sum X_i Y_i = \bar{Y} \sum X_i - \hat{\beta}_1 \bar{X} \sum X_i + \hat{\beta}_1 \sum X_i^2$$

Factorizing the $\hat{\beta}_1$ terms, we have:

$$\sum X_i Y_i - \bar{Y} \sum X_i = \hat{\beta}_1 \left[ \sum X_i^2 - \bar{X} \sum X_i \right] \quad (16)$$

and finally, solving for $\hat{\beta}_1$ (noting that $\bar{Y} \sum X_i = n \bar{X} \bar{Y}$ and $\bar{X} \sum X_i = n \bar{X}^2$):

$$\hat{\beta}_1 = \frac{\sum X_i Y_i - n \bar{X} \bar{Y}}{\sum X_i^2 - n \bar{X}^2} \quad (17)$$

Given $\hat{\beta}_1$, we can then use equation (14) to obtain $\hat{\beta}_0$.
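A minimal Python sketch of these closed-form estimators (the sample data are assumed, as before) computes the slope from equation (17) and the intercept from equation (14), and cross-checks the result against numpy's own least-squares fit.

```python
import numpy as np

# Hypothetical sample data, assumed purely for illustration
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
n = len(X)

x_bar, y_bar = X.mean(), Y.mean()

# Equation (17): slope estimate
b1_hat = ((X * Y).sum() - n * x_bar * y_bar) / ((X ** 2).sum() - n * x_bar ** 2)
# Equation (14): intercept estimate
b0_hat = y_bar - b1_hat * x_bar

# Cross-check against numpy's least-squares fit (slope is returned first)
b1_np, b0_np = np.polyfit(X, Y, 1)
assert np.isclose(b1_hat, b1_np) and np.isclose(b0_hat, b0_np)
print(f"b0_hat={b0_hat:.4f}, b1_hat={b1_hat:.4f}")
```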