SUR-1
3. SEEMINGLY UNRELATED REGRESSIONS (SUR)
[1] Examples
• Demand for some commodities:
yNike,t = xNike,t′βNike + εNike,t
yReebok,t = xReebok,t′βReebok + εReebok,t;
where yNike,t is the quantity demanded for Nike sneakers, xNike,t is a kNike×1
vector of regressors (such as the unit price of Nike sneakers, prices of other
sneakers, income, ...), and t indexes time.
• Household consumption expenditures
on housing, clothing, and food.
• Grunfeld’s investment data:
• I: gross investment ($million)
• F: market value of firm at the end of previous year.
• C: value of firm’s capital at the end of previous year.
• Iit = β1i + β2iFit + β3iCit + εit, where i = GM, CH (Chrysler), GE, etc.
• Notice that although the same regressors are used for each i, the values of
the regressors differ across i. (A two-step FGLS sketch for a system like this
follows the examples below.)
• CAPM (Capital Asset Pricing Model)
• rit - rft: excess return on security i over a risk-free security.
• rmt-rft: excess market return.
• rit-rft = αi + βi(rmt-rft) + εit.
• Notice that the values of regressors are the same for every security.
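In all of these examples the equations are linked only through contemporaneously correlated errors, which is exactly what the SUR GLS/FGLS estimator exploits. A minimal numpy sketch of the standard two-step FGLS estimator for such a system (the function and argument names are ours, not from the notes):

```python
import numpy as np

def sur_fgls(ys, Xs):
    """Two-step FGLS for a SUR system.
    ys: list of n dependent vectors, each of length T.
    Xs: list of n regressor matrices, each T x k_i."""
    T = len(ys[0])
    # Step 1: equation-by-equation OLS, saving the residuals.
    resids = [y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
              for y, X in zip(ys, Xs)]
    E = np.column_stack(resids)                # T x n residual matrix
    Sigma = (E.T @ E) / T                      # s_ij = e_i'e_j / T
    # Step 2: GLS on the stacked system, with Omega = Sigma kron I_T.
    Xstar = np.zeros((len(ys) * T, sum(X.shape[1] for X in Xs)))
    col = 0
    for i, X in enumerate(Xs):
        Xstar[i * T:(i + 1) * T, col:col + X.shape[1]] = X
        col += X.shape[1]
    ystar = np.concatenate(ys)
    Oinv = np.kron(np.linalg.inv(Sigma), np.eye(T))
    XtOX = Xstar.T @ Oinv @ Xstar
    beta = np.linalg.solve(XtOX, Xstar.T @ Oinv @ ystar)
    return beta, np.linalg.inv(XtOX)           # coefficients, Cov estimate
```

For the Grunfeld system, ys would hold the five investment series and Xs the five (1, F, C) regressor matrices.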
• Let $\hat\theta_R$ be the restricted MLE which maximizes $l_T$ subject to $w(\theta) = 0$.
• Let $s_T(\theta) = \dfrac{\partial l_T(\theta)}{\partial \theta}$ and $I_T(\theta) = -E\!\left(\dfrac{\partial^2 l_T(\theta)}{\partial\theta\,\partial\theta'}\right)$.
• Then, $LM_T = s_T(\hat\theta_R)'\,[I_T(\hat\theta_R)]^{-1}\,s_T(\hat\theta_R)$.
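Numerically the statistic is a one-liner once score and information routines are available; a generic sketch (the function names are ours):

```python
import numpy as np

def lm_stat(score, info, theta_r):
    """LM = s(theta_R)' [I(theta_R)]^{-1} s(theta_R), theta_R the restricted MLE.
    score(theta) returns the k-vector s_T; info(theta) the k x k matrix I_T."""
    s = score(theta_r)
    return float(s @ np.linalg.solve(info(theta_r), s))
```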
End of Digression
• Note that under $H_o$: $\Sigma$ is diagonal, the restricted MLE of the $\beta_i$'s = OLS of the $\beta_i$'s; the restricted MLE of $\sigma_{ii}$ is $s_{ii} = (y_i - X_i\hat\beta_i)'(y_i - X_i\hat\beta_i)/T$; and the restricted MLE of $\sigma_{ij}$ ($i \neq j$) is 0.
• Breusch and Pagan (1980, ReStud):
• Let rij = sij/(siisjj)^{1/2} (the estimated correlation coefficient between ei and ej).
• The LM statistic for Ho is LMT = T Σi Σj<i rij² →d χ²[n(n−1)/2].
• No need to compute the unrestricted MLE.
• This statistic is obtained under the assumption of normal errors.
• Question: Is this statistic still chi-squared even if the errors are not normal?
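A sketch of the Breusch-Pagan statistic computed from the T×n matrix of equation-by-equation OLS residuals (the function name is ours; scipy supplies the p-value):

```python
import numpy as np
from scipy import stats

def breusch_pagan_lm(E):
    """LM test of Ho: Sigma diagonal; E is the T x n OLS residual matrix."""
    T, n = E.shape
    S = (E.T @ E) / T                       # s_ij = e_i'e_j / T
    d = np.sqrt(np.diag(S))
    R = S / np.outer(d, d)                  # r_ij = s_ij / (s_ii s_jj)^{1/2}
    lm = T * np.sum(np.tril(R, k=-1) ** 2)  # T * sum over pairs j < i of r_ij^2
    df = n * (n - 1) // 2
    return lm, stats.chi2.sf(lm, df)        # statistic, asymptotic p-value
```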
SUR-26
[6] When Initial Assumptions Are Violated
Initial Assumptions:
1) All the variables in X1, … , Xn are weakly exogenous to ε1, … , εn:
E(εit|x11, … ,x1t, x21, … , x2t, …, xn1,…,xnt) = 0 for all i and t.
2) No autocorrelations:
• E(εitεjs|x11,…,x1s,…,xn1,…xns) = 0 for all i, j, and t > s.
3) No time heteroskedasticity:
• E(εitεjt|x11,…,x1t,…,xn1,…,xnt) = E(εitεjt) ≡ σij (which may or may not be 0) for all t, i, and j.
4) The εit are normally distributed. (For simplicity. Not required)
(1) What if E(εjt|xit) ≠ 0 for i ≠ j?
• This could happen if equation i is misspecified.
• GLS is inconsistent, while equation-by-equation OLS remains consistent; use OLS.
(2) What if the errors are heteroskedastic over time?
• E(εitεjt|x11,…,x1t,…,xn1,…,xnt) ≡ σij,t changes over time.
• Both GLS and OLS are consistent; it cannot be determined which estimator is more
efficient.
SUR-27
• The conventional covariance estimates are no longer valid, however; use the robust (sandwich) forms below. Define

$$X_{*,t} = \begin{pmatrix} x_{1t}' & 0 & \cdots & 0 \\ 0 & x_{2t}' & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & x_{nt}' \end{pmatrix}; \qquad
\Sigma_t = \begin{pmatrix} \sigma_{11,t} & \sigma_{12,t} & \cdots & \sigma_{1n,t} \\ \sigma_{21,t} & \sigma_{22,t} & \cdots & \sigma_{2n,t} \\ \vdots & \vdots & & \vdots \\ \sigma_{n1,t} & \sigma_{n2,t} & \cdots & \sigma_{nn,t} \end{pmatrix}.$$

• $Cov(\hat\beta_o) = (X_*'X_*)^{-1}\,\Delta_o\,(X_*'X_*)^{-1}$, where $\Delta_o = \sum_{t=1}^{T} X_{*,t}'\,\Sigma_t\,X_{*,t}$, estimated by $\hat\Delta_o = \sum_{t=1}^{T} X_{*,t}'\,e_t e_t'\,X_{*,t}$, where $e_t = (e_{1t}, e_{2t}, \ldots, e_{nt})'$.

• $Cov(\hat\beta_g) = (X_*'\Omega^{-1}X_*)^{-1}\,\Delta_g\,(X_*'\Omega^{-1}X_*)^{-1}$, where $\Delta_g = \sum_{t=1}^{T} X_{*,t}'\,\Sigma^{-1}\Sigma_t\Sigma^{-1}\,X_{*,t}$, estimated by $\hat\Delta_g = \sum_{t=1}^{T} X_{*,t}'\,\hat\Sigma^{-1} e_t e_t'\,\hat\Sigma^{-1}\,X_{*,t}$.
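A minimal sketch of the OLS sandwich above, taking the per-period blocks $X_{*,t}$ and residual vectors $e_t$ (the function and argument names are ours):

```python
import numpy as np

def robust_ols_cov(Xstar_t, e_t):
    """(X_*'X_*)^{-1} Delta-hat_o (X_*'X_*)^{-1} under time heteroskedasticity.
    Xstar_t: length-T list of n x k matrices X_{*,t};
    e_t:     length-T list of length-n residual vectors."""
    XX = sum(X.T @ X for X in Xstar_t)      # X_*'X_*
    Delta = sum(X.T @ np.outer(e, e) @ X    # Delta-hat_o
                for X, e in zip(Xstar_t, e_t))
    XX_inv = np.linalg.inv(XX)
    return XX_inv @ Delta @ XX_inv
```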
(3) What if the errors are autocorrelated?
• Both GLS and OLS are consistent; it cannot be determined which estimator is more
efficient.
• $Cov(\hat\beta_o) = (X_*'X_*)^{-1}\,(T\,\Delta_{o,a})\,(X_*'X_*)^{-1}$, where $\Delta_{o,a} = Cov\!\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T} X_{*,t}'\,\varepsilon_t\right)$.
• $\Delta_{o,a}$ can be estimated by GMM.
• $Cov(\hat\beta_g) = (X_*'\Omega^{-1}X_*)^{-1}\,(T\,\Delta_{g,a})\,(X_*'\Omega^{-1}X_*)^{-1}$, where $\Delta_{g,a} = Cov\!\left(\frac{1}{\sqrt{T}}\sum_{t=1}^{T} X_{*,t}'\,\Sigma^{-1}\varepsilon_t\right)$.
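The notes say only "GMM"; a common concrete choice is a Newey-West (Bartlett-kernel) estimate of $\Delta_{o,a}$ built from the moment series $g_t = X_{*,t}'e_t$. A sketch under that assumption, with a user-chosen lag length L (both the kernel and L are our choices):

```python
import numpy as np

def hac_delta(Xstar_t, e_t, L):
    """Newey-West estimate of Delta_{o,a} = Cov(T^{-1/2} sum_t X_{*,t}' eps_t)."""
    g = np.stack([X.T @ e for X, e in zip(Xstar_t, e_t)])  # T x k moment series
    T = g.shape[0]
    Delta = g.T @ g / T                                    # lag-0 term
    for l in range(1, L + 1):
        w = 1 - l / (L + 1)                                # Bartlett weight
        Gamma = g[l:].T @ g[:-l] / T                       # lag-l autocovariance
        Delta += w * (Gamma + Gamma.T)
    return Delta
```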
SUR-28
[7] Application
• Use Table F13-1.wf1 (EVIEWS data set).
• Grunfeld's investment data:
• I: gross investment ($million)
• F: market value of firm at the end of previous year.
• CS: value of firm's capital at the end of previous year.
• Iit = β1i + β2iFit + β3iCSit + εit,
where i = GM (1), CH (Chrysler, 2), GE (3), WE (Westinghouse, 4), and US (U.S. Steel, 5).
• GLS
• Read the work file using EVIEWS.
• Go to \objects\New Objects...
• Choose System and click on the OK button.
• An empty window will then pop up.
• Type the following in the window:
i1 = c(1)+c(2)*f1+c(3)*cs1
i2 = c(4)+c(5)*f2+c(6)*cs2
i3 = c(7)+c(8)*f3+c(9)*cs3
i4 = c(10)+c(11)*f4+c(12)*cs4
i5 = c(13)+c(14)*f5+c(15)*cs5
• Click on proc\Estimate.
• You will then see the menu for estimation of systems of equations. Choose Seemingly Unrelated Regression.
• For Two-Step GLS, choose Iterate Coefs.
• For Iterative GLS, choose Sequential.
• Do not use "One-Step Coefs" or "Simultaneous".
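The same two-step and iterative GLS fits can also be reproduced outside EVIEWS. A sketch using Python's linearmodels package (the SUR class exists there, but the exact method/iterate/cov_type options below are our reading of its interface and should be verified against its docs; the data file name is hypothetical):

```python
import pandas as pd
from linearmodels.system import SUR

# Hypothetical file holding the series i1..i5, f1..f5, cs1..cs5.
df = pd.read_csv('grunfeld.csv')

# One formula per firm equation, mirroring the EVIEWS system window.
equations = {f'eq{j}': f'i{j} ~ 1 + f{j} + cs{j}' for j in range(1, 6)}
mod = SUR.from_formula(equations, df)

two_step = mod.fit(method='gls', iterate=False, cov_type='unadjusted')
iterated = mod.fit(method='gls', iterate=True, cov_type='unadjusted')
print(two_step.summary)
```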
SUR-29
<Two-Step GLS>
• Estimation Results:
System: SUR
Estimation Method: Seemingly Unrelated Regression (Marquardt)
Sample: 1935 1954
Included observations: 20
Total system (balanced) observations: 100
Linear estimation after one-step weighting matrix
Equation: I1 = C(1)+C(2)*F1+C(3)*CS1
Observations: 20
R-squared            0.920742   Mean dependent var   608.0200
Adjusted R-squared   0.911417   S.D. dependent var   309.5746
S.E. of regression   92.13828   Sum squared resid    144320.9
Durbin-Watson stat   0.936490

Equation: I2 = C(4)+C(5)*F2+C(6)*CS2
Observations: 20
R-squared            0.911862   Mean dependent var   86.12350
Adjusted R-squared   0.901493   S.D. dependent var   42.72556
S.E. of regression   13.40980   Sum squared resid    3056.985
Durbin-Watson stat   1.917509
SUR-30
Equation: I3 = C(7)+C(8)*F3+C(9)*CS3
Observations: 20
R-squared            0.687636   Mean dependent var   102.2900
Adjusted R-squared   0.650887   S.D. dependent var   48.58450
S.E. of regression   28.70654   Sum squared resid    14009.12
Durbin-Watson stat   0.962757

Equation: I4 = C(10)+C(11)*F4+C(12)*CS4
Observations: 20
R-squared            0.726429   Mean dependent var   42.89150
Adjusted R-squared   0.694244   S.D. dependent var   19.11019
S.E. of regression   10.56701   Sum squared resid    1898.249
Durbin-Watson stat   1.259005

Equation: I5 = C(13)+C(14)*F5+C(15)*CS5
Observations: 20
R-squared            0.421959   Mean dependent var   405.4600
Adjusted R-squared   0.353954   S.D. dependent var   129.3519
S.E. of regression   103.9692   Sum squared resid    183763.0
Durbin-Watson stat   1.017982
<Iterative GLS>

Equation: I1 = C(1)+C(2)*F1+C(3)*CS1
Observations: 20
R-squared            0.919702   Mean dependent var   608.0200
Adjusted R-squared   0.910255   S.D. dependent var   309.5746
S.E. of regression   92.74073   Sum squared resid    146214.3
Durbin-Watson stat   0.936717

Equation: I2 = C(4)+C(5)*F2+C(6)*CS2
Observations: 20
R-squared            0.910565   Mean dependent var   86.12350
Adjusted R-squared   0.900043   S.D. dependent var   42.72556
S.E. of regression   13.50807   Sum squared resid    3101.956
Durbin-Watson stat   1.885111
SUR-34
Equation: I3 = C(7)+C(8)*F3+C(9)*CS3
Observations: 20
R-squared            0.669021   Mean dependent var   102.2900
Adjusted R-squared   0.630083   S.D. dependent var   48.58450
S.E. of regression   29.54949   Sum squared resid    14843.93
Durbin-Watson stat   0.898029

Equation: I4 = C(10)+C(11)*F4+C(12)*CS4
Observations: 20
R-squared            0.701750   Mean dependent var   42.89150
Adjusted R-squared   0.666661   S.D. dependent var   19.11019
S.E. of regression   11.03336   Sum squared resid    2069.496
Durbin-Watson stat   1.124739

Equation: I5 = C(13)+C(14)*F5+C(15)*CS5
Observations: 20
R-squared            0.390335   Mean dependent var   405.4600
Adjusted R-squared   0.318610   S.D. dependent var   129.3519
S.E. of regression   106.7753   Sum squared resid    193816.3
Durbin-Watson stat   0.967353
Consider the following three-equation system:
y1 = β1eT + β2x1 + β3x2 + ε1;
y2 = β1eT + γ2x3 + γ3x4 + ε2;
y3 = β1eT + δ2x5 + δ3x6 + ε3,
where eT is a T×1 vector of ones, and all of the y's, x's, and ε's are T×1 vectors. Note that the intercept β1 is common to all three equations (a cross-equation restriction).
Estimation Procedure:
Step 1: Estimate each equation by OLS, and get Σ using the OLS residuals.
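Step 1 in code, a minimal sketch in the same conventions as the earlier ones (names are ours):

```python
import numpy as np

def step1_sigma(ys, Xs):
    """Step 1: equation-by-equation OLS; build Sigma-hat from the residuals."""
    E = np.column_stack([
        y - X @ np.linalg.lstsq(X, y, rcond=None)[0]  # OLS residuals e_i
        for y, X in zip(ys, Xs)
    ])
    return (E.T @ E) / E.shape[0]                     # s_ij = e_i'e_j / T
```

Here each X_i would include eT and the equation's own x's; presumably the cross-equation restriction on β1 is imposed in the subsequent GLS step, not in Step 1.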