GENERATING GENERAL DUMMIES
GENERATING TIME DUMMIES
TIME-SERIES ANALYSES
  1. ASSUMPTIONS OF THE OLS ESTIMATOR
  2. CHECK THE INTERNAL AND EXTERNAL VALIDITY
     A. THREATS TO INTERNAL VALIDITY
     B. THREATS TO EXTERNAL VALIDITY
  3. THE LINEAR REGRESSION MODEL
  4. LINEAR REGRESSION WITH MULTIPLE REGRESSORS
     ASSUMPTIONS OF THE OLS ESTIMATOR
  5. NONLINEAR REGRESSION FUNCTIONS
     A. EXAMPLES OF NONLINEAR REGRESSIONS
        1) POLYNOMIAL REGRESSION MODEL OF A SINGLE INDEPENDENT VARIABLE
        2) LOGARITHMS
     B. INTERACTIONS BETWEEN TWO BINARY VARIABLES
     C. INTERACTIONS BETWEEN A CONTINUOUS AND A BINARY VARIABLE
     D. INTERACTIONS BETWEEN TWO CONTINUOUS VARIABLES
RUNNING TIME-SERIES ANALYSES IN STATA
  A TIME SERIES REGRESSION
  REGRESSION DIAGNOSTICS: NON-NORMALITY
We are going to allocate 10 megabytes to the dataset. You do not want to allocate
too much memory to the dataset, because the more memory you allocate to it, the
less memory will be available to perform the commands. Too large an allocation
could slow Stata down or even make it crash.
set mem 10m
We can also decide whether or not to display the "more" separation line on the
screen when the software displays results:
set more on
set more off
Setting up a panel
Now, we have to instruct Stata that we have a panel dataset. We do it with the
command tsset, or iis and tis
iis idcode
tis year
or
tsset idcode year
In the previous command, idcode is the variable that identifies individuals in our
dataset. Year is the variable that identifies time periods. This is always the rule.
The commands referring to panel data in Stata almost always start with the prefix
xt. You can check for these commands by calling the help file for xt.
You should describe and summarize the dataset as usual before you perform estimations. Stata has specific commands for describing and summarizing panel
datasets.
xtdes
xtsum
xtdes permits you to observe the pattern of the data, such as the number of individuals
with different patterns of observations across time periods. In our case, we have
an unbalanced panel, because not all individuals have observations for all years.
The xtsum command gives you general descriptive statistics of the variables in the
dataset, considering the overall, the between and the within variations. Overall refers to the whole dataset.
Between refers to the variation of the means for each individual (computed across
time periods). Within refers to the variation of the deviations from the respective
individual means.
You may be interested in applying the panel data tabulate command to a variable. For instance, to the variable south, in order to obtain a one-way table.
xttab south
As in the previous commands, Stata will report the tabulation for the overall, between, and within variation.
Let's generate the dummy variable black, which is not in our dataset.
gen black=1 if race==2
replace black=0 if black==.
Suppose you want to generate a new variable called tenure1 that is equal to the variable tenure lagged one period. Then you would use the time-series lag operator (l.).
First, you would need to sort the dataset according to idcode and year, and then generate the new variable with the "by" prefix on the variable idcode.
sort idcode year
by idcode: gen tenure1=l.tenure
If you were interested in generating a new variable tenure3 equal to the first
difference of the variable tenure, you would use the time-series difference operator (d.).
by idcode: gen tenure3=d.tenure
If you would like to generate a new variable tenure4 equal to two lags of the
variable tenure, you would type:
by idcode: gen tenure4=l2.tenure
The same principle would apply to the operator d.
Let's just save our data file with the changes that we made to it.
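For example (the filename here is our choice, not from the original):

save nlswork_panel, replace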
Another way would be to use the xi command. It takes the values (strings of letters,
for instance) of a designated variable (category, for instance) and creates a dummy
variable for each value. You may need to change the omitted base category first:

char _dta[omit] "prevalent"
xi i.category
tabulate category
Generating time dummies
In order to do this, let's first generate our time dummies. We use the "tabulate" command with the option "gen" in order to generate time dummies for each year
of our dataset.
We will name the time dummies as "y",
• and we will get a first time dummy called "y1" which takes the value 1 if year=1980, 0 otherwise,
• a second time dummy "y2" which assumes the value 1 if year=1982, 0
otherwise, and similarly for the remaining years. You could give any other name to your time dummies.
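Assuming the time variable is named year, as above, the command would be:

tabulate year, gen(y)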
First, let's look at a scatterplot of all variables. There are a few observations that could be outliers, but there is nothing seriously wrong in this scatterplot.
. graph y x1 x2 x3, matrix
Let's use the regress command to run a regression predicting y from x1 x2 and
x3.
. regress y x1 x2 x3
      Source |       SS       df       MS            Number of obs =     100
-------------+------------------------------        F(  3,    96) =   21.69
We can use the rvfplot command to display the residuals by the fitted (predicted) values. This can be useful in checking for outliers, checking for non-
normality, checking for non-linearity, and checking for heteroscedasticity. The
distribution of points seems fairly even and random, so no serious problems seem evident from this plot.
. rvfplot
We can use the vif command to check for multicollinearity. As a rule of thumb,
VIF values in excess of 20, or 1/VIF values (tolerances) lower than 0.05 may
merit further investigation. These values all seem to be fine.
. vif
    Variable |      VIF       1/VIF
-------------+----------------------
          x1 |     1.23    0.812781
          x2 |     1.16    0.864654
          x3 |     1.12    0.892234
-------------+----------------------
    Mean VIF |     1.17
We can use the predict command (with the rstudent option) to make studentized residuals, and then use the summarize command to check the distribution of the
residuals. The residuals do not seem to be seriously skewed (although they do have a higher than expected kurtosis). The largest studentized residual (in
absolute value) is -2.88, which is somewhat large but not extremely large.
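The commands would look like this (the residual variable name, rstud, is our choice):

. predict rstud, rstudent
. summarize rstud, detail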
Regression diagnostics: Non-normality

We use the kdensity command below to show the distribution of y (in yellow) and a normal overlay (in red). We can see that y is positively skewed, i.e., it has a long tail to the right.
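The command for this plot would be:

. kdensity y, normal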
The rvfplot command gives us a graph of the residual value by the fitted
(predicted) value. We are looking for a nice even distribution of the residuals
across the levels of the fitted value. We see that the points are more densely
packed together at the bottom part of the graph (for the negative residuals),
indicating that there could be a problem of non-normally distributed residuals.
. rvfplot
Below we use the avplots command to produce added variable plots. These plots
show the relationship between each predictor, and the dependent variable after adjusting for all other predictors. For example, the plot in the top left shows the
relationship between x1 and y, after y has been adjusted for x2 and x3. The plot
in the top right shows x2 on the bottom axis, and the plot in the bottom left shows x3 on the bottom axis. We would expect the points to be normally distributed
around these regression lines. Looking at these plots shows that the data points seem
to be more densely packed below the regression line, another possible indicator of non-normally distributed residuals.
Below we create studentized residuals using the predict command, creating a variable called rstud containing the studentized residuals. Stata knew we wanted
studentized residuals because we used the rstudent option after the comma. We
then use the summarize command to examine the residuals for normality. We see that the residuals are positively skewed, and that the 5 smallest values go as
low as -1.78, while the five highest values go from 2.44 to 3.42, another indicator of positive skewness.
Let us try using a square root transformation on y, creating sqy, and examine the distribution of sqy. The transformation has considerably reduced (but not totally eliminated) the skewness.
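A sketch of the steps, since the original commands are not shown (we create sqy, examine it, and refit the regression on the transformed outcome):

. generate sqy = sqrt(y)
. kdensity sqy, normal
. regress sqy x1 x2 x3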
The avplots below also look improved (not perfect, but much improved).
. avplots
We create studentized residuals (called rstud2) and look at their distribution using summarize. The skewness is much better (.27), and the 5 smallest and 5 largest values are far less extreme than before.
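Following the same pattern as before:

. predict rstud2, rstudent
. summarize rstud2, detail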
The distribution of the residuals below looks nearly normal.
. kdensity rstud2, normal
The boxplot of the residuals looks symmetrical, and there are no outliers in the plot.
. graph rstud2, box
In this case, a square root transformation of the dependent variable addressed both problems in skewness in the residuals, and outliers in the residuals. Had we tried
to address these problems via dealing with the outliers, the problem of the skewness of the residuals would have remained. When there are outliers in the
residuals, it can be useful to assess whether the residuals are skewed. If so,
addressing the skewness may also solve the outliers at the same time.
Regression diagnostics: Non-linearity
use http://www.ats.ucla.edu/stat/stata/modules/reg/nonlin, clear
. regress y x1 x2 x3
      Source |       SS       df       MS            Number of obs =     100
-------------+------------------------------        F(  3,    96) =    2.21
The ovtest command with the rhs option tests whether higher order trend effects (e.g. squared, cubed) are present but omitted from the regression model. The null
hypothesis, as shown below, is that there are no omitted variables (no significant
higher order trends). Because the test is significant, this suggests there are higher order trends in the data that we have overlooked.
. ovtest, rhs

Ramsey RESET test using powers of the independent variables
  Ho: model has no omitted variables
        F(9, 87)  =    67.86
        Prob > F  =   0.0000
A scatterplot matrix is used below to look for higher order trends. We can see that there is a very clear curvilinear relationship between x2 and y.
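The command for the matrix would be:

. graph matrix y x1 x2 x3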
We can likewise use avplots to look for non-linear trend patterns. Consistent with the scatterplot, the avplot for x2 (top right) exhibits a distinct curved pattern.
. avplots
Below we create x2sq and add it to the regression equation to account for the
curvilinear relationship between x2 and y.
. generate x2sq = x2*x2
. regress y x1 x2 x2sq x3

      Source |       SS       df       MS            Number of obs =     100
-------------+------------------------------        F(  4,    95) =  171.00
       Model |  76669.3763     4  19167.3441        Prob > F      =  0.0000
    Residual |  10648.6237    95  112.090776        R-squared     =  0.8780
We use the ovtest again below, however the results are misleading. Stata gave us
a note saying that x2sq was dropped due to collinearity. In testing for higher
order trends, Stata created x2sq^2 which duplicated x2sq, and then x2sq was discarded since it was the same as the term Stata created. The resulting ovtest
misleads us into thinking there may be higher order trends, but it has discarded
the higher trend we just included.
. ovtest, rhs
(note: x2sq dropped due to collinearity)
(note: x2sq^2 dropped due to collinearity)

Ramsey RESET test using powers of the independent variables
  Ho: model has no omitted variables
        F(11, 85) =    54.57
        Prob > F  =   0.0000
There is another minor problem. We use the vif command below to look for
problems of multicollinearity. A general rule of thumb is that a VIF in excess of 20 (or a 1/VIF or tolerance of less than 0.05) may merit further investigation. We
see that the VIF for x2 and x2sq are over 32. The reason for this is that x2 and
x2sq are, by construction, very highly correlated.
We can solve both of these problems with one solution. If we "center" x2 (i.e. subtract its mean) before squaring it, the results of the ovtest will no longer be
misleading, and the VIF values for x2 and x2sq will get much better. Below, we
center x2 (called x2cent) and then square that value (creating x2centsq).
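The centering commands are not shown in the original; a minimal sketch (summarize leaves the mean behind in r(mean)):

. summarize x2
. generate x2cent = x2 - r(mean)
. generate x2centsq = x2cent*x2cent
. regress y x1 x2cent x2centsq x3
. ovtest, rhs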
        Prob > F  =   0.9178

We create avplots below and no longer see any substantial non-linear trends in
the data.
. avplots
Note that if we examine the avplot for x2cent, it shows no curvilinear trend. This
is because the avplot adjusts for all other terms in the model, so after adjusting for the other terms (including x2centsq) there is no longer any curved trend between
x2cent and the adjusted value of y.
. avplot x2cent
Had we simply run the regression and reported the initial results, we would have
ignored the significant curvilinear component between x2 and y.
Regression diagnostics: Heteroscedasticity

We can use the hettest command to test for heteroscedasticity. The test indicates
that the regression results are indeed heteroscedastic, so we need to further
understand this problem and try to address it.
. hettest

Cook-Weisberg test for heteroscedasticity using fitted values of y
  Ho: Constant variance
        chi2(1)      =    21.30
        Prob > chi2  =   0.0000
Looking at the rvfplot below that shows the residual by fitted (predicted) value,
we can clearly see evidence for heteroscedasticity. The variability of the residuals at
the left side of the graph is much smaller than the variability of the residuals at the right side.
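The transformation step is not shown in the original; presumably something like:

. generate sqy = sqrt(y)
. regress sqy x1 x2 x3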
Using the hettest again, the chi-square value is somewhat reduced, but the test for
heteroscedasticity is still quite significant. The square root transformation was
not successful.
. hettest
Cook-Weisberg test for heteroscedasticity using fitted values of sqy
The results for the hettest are the same as before: whether we choose a log to the
base e or a log to the base 10, the effect in reducing heteroscedasticity (as
measured by the Cook-Weisberg test) is the same.
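The transformed runs are not shown in the original; a sketch for the base-10 version:

. generate log10y = log10(y)
. regress log10y x1 x2 x3
. hettest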
Cook-Weisberg test for heteroscedasticity using fitted values of log10y
Ho: Constant variance
        chi2(1)      =     5.60
        Prob > chi2  =   0.0179
While these results are not perfect, we will be content for now that this has
substantially reduced the heteroscedasticity as compared to the original data.
Regression Diagnostics: Outliers
use http://www.ats.ucla.edu/stat/stata/modules/reg/outlier.dta, clear

Below we run an ordinary least squares (OLS) regression predicting y from x1, x2, and x3. The results suggest that x2 and x3 are significant, but x1 is not
significant.
. regress y x1 x2 x3
      Source |       SS       df       MS            Number of obs =     100
-------------+------------------------------        F(  3,    96) =   14.12
       Model |  6358.64512     3  2119.54837        Prob > F      =  0.0000
Let's start an examination for outliers by looking at a scatterplot matrix showing
scatterplots among y, x1, x2, and x3. Although we cannot see a great deal of
detail in these plots (especially since we have reduced their size for faster web
access), we can see that there is a single point that stands out from the rest. This
point deserves a closer look.
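The command for this first matrix would be:

. graph matrix y x1 x2 x3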
We repeat the scatterplot matrix below, using the symbol([case]) option that indicates to make the symbols the value of the variable case. The variable case is
the case id of the observation, ranging from 1 to 100. It is difficult to see below,
but the case for the outlier is 100. If you run this in Stata yourself, the case numbers will be much easier to see.
. graph matrix y x1 x2 x3, symbol([case])
We can use the lvr2plot command to obtain a plot of leverage against normalized
residual squared. The most problematic outliers would be in the top right of
the plot, indicating both high leverage and a large residual. This plot shows us that case 100 has a very large residual (compared to the others) but does not have
especially high leverage.
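The command would be:

. lvr2plot, symbol([case])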
The rvfplot command shows us residuals by fitted (predicted) values, and also indicates that case 100 has the largest residual.
. rvfplot, symbol([case])
The avplots command gives added variable plots (sometimes called partial regression plots). The plot in the top left shows x1 on the horizontal axis, and
the residual value of y after using x2 and x3 as predictors. Likewise, the top right
plot shows x2 on the horizontal axis, and the residual value of y after using x1 and x3 as predictors, and the bottom plot shows x3 on the horizontal axis, and the
residual value of y after using x1 and x2 as predictors.
Returning to the top left plot, this shows us the relationship between x1 and y, after adjusting y for x2 and x3. The line plotted has the slope of the coefficient
for x1 and is the least squares regression line for the data in the scatterplot. In
short, these plots allow you to view each of the scatterplots much like you would look at a scatterplot from a simple regression analysis with one predictor. In
looking at these plots, we see that case 100 appears to be an outlier in each plot.
Beyond noting that case 100 is an outlier, we can see the type of influence that it has on
each of the regression lines. In the plot for x1, the outlier seems to be tugging the line up
at the left, giving the line a smaller slope. By contrast, the outlier in the plot for x2 seems to
be tugging the line up at the right, giving the line a greater slope. Finally, for x3 the outlier is right in the center and seems to have no influence on the slope (but
would pull the entire line up influencing the intercept).
. avplots, symbol([case])
Below we repeat the avplot just for the variable x2, showing that you can obtain
an avplot for a single variable at a time. Also, we can better see the influence of observation 100 tugging the regression line up at the right, possibly increasing the
overall slope for x2.
. avplot x2, symbol([case])
We use the predict command below to create a variable containing the
studentized residuals called rstu. Stata knew we wanted studentized residuals because we used the rstudent option after the comma. We can then use the
graph command to make a boxplot looking at the studentized residuals, looking for outliers. As we would expect, observation 100 stands out as an outlier.
. predict rstu, rstudent
. graph box rstu, symbol([case])
Below we use the predict command to create a variable called l that will contain
the leverage for each observation. Stata knew we wanted leverages because we
used the leverage option after the comma. The boxplot shows some observations that might be outliers based on their leverage. Note that observation 100 is not
among them. This is consistent with the lvr2plot (see above) that showed us that
observation 100 had a high residual, but not exceptionally high leverage.
. predict l, leverage
. graph box l, symbol([case])
Below we use the predict command to compute Cook's D for each observation.
We make a boxplot of it below, and observation 100 shows the highest value for Cook's D.
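The commands, not shown in the original, would be along these lines:

. predict d, cooksd
. graph box d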
We can make a plot that shows us the studentized residual, leverage, and cooks D
all in one plot. The graph command below puts the studentized residual (rstu) on the vertical axis, leverage (l) on the horizontal axis, and the size of the bubble
reflects the size of Cook's D (d). The [w=d] tells Stata to weight the size of the
symbol by the variable d so the higher the value of Cook's D, the larger the
symbol will be. As we would expect, the plot below shows an observation that has a very large residual, a very large value of Cook's D, but does not have a very
large leverage.
. graph rstu l [w=d]
We repeat the graph above, except using the symbol([case]) option to show us the variable case as the symbol, which shows us that the observation we identified is case 100.
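Following the pattern of the earlier plots:

. graph rstu l [w=d], symbol([case])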
These measures give us an overall idea of how influential an observation is. From
our examination of the avplots above, it appeared that the outlier for case 100
influences x1 and x2 much more than it influences x3. We can use the dfbeta
command to generate DFbeta values for each observation and for each predictor. The
dfbeta value shows the degree the coefficient will change when that single
observation is omitted. This allows you to see, for a given predictor, how influential a single observation can be. The output below shows that three
variables were created, DFx1, DFx2, and DFx3.
. dfbeta
DFx1: DFbeta(x1)
DFx2: DFbeta(x2)
DFx3: DFbeta(x3)

Below we make a graph of the studentized residual by DFx1. We see that
observation 100 has a very high residual value and a large negative
DFbeta. This indicates that the presence of observation 100 decreases the value of
the coefficient for x1; if it were removed, the coefficient for x1 would get larger.
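The command for this plot would be:

. graph rstu DFx1, symbol([case])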
Below we make a graph of the studentized residual by the value of DFx2. Like
above, we see that observation 100 has a very large residual, but this time DFx2 is a
large positive value. This suggests that the presence of observation 100 enhances the
coefficient for x2 and its removal would lower the coefficient for x2.
. graph rstu DFx2, symbol([case])
Finally, we make a plot showing the studentized residual and DFx3. This shows
that observation 100 has a large residual, but it has a small DFbeta (small DFx3). This suggests that the exclusion of observation 100 would have little impact on the coefficient for x3.
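The command would be:

. graph rstu DFx3, symbol([case])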
The results of looking at the DFbeta values are consistent with our observations looking at the avplots. It looks like observation 100 diminishes the coefficient for
x1, enhances the coefficient for x2, and has little impact on the coefficient for x3.
We can see that the information provided by the avplots and the values provided by dfbeta are related. Instead of looking at this information separately, we could
look at the DFbeta values right in the added variable plots.
Below we take the DFbeta value for x2 (DFx2) and round it to 2 decimal places, creating rDFx2. We then include rDFx2 as a symbol in the added variable plot
below. We can see that the outlier at the top right has the largest DFbeta value
and that observation enhances the coefficient (.576) and if this value were
omitted, that coefficient would get smaller. In fact, the value of the DFbeta tells us exactly how much smaller, it indicates that the coefficient will be .98 standard
errors smaller, or .98 * .1578 =.154. Removing this observation will make the
coefficient for x2 go from .576 to .422. As a rule of thumb, a DFbeta value of 1 or larger is considered worthy of attention.
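The rounding and plotting commands are not shown in the original; a sketch:

. generate rDFx2 = round(DFx2, .01)
. avplot x2, symbol([rDFx2])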
The plot below is the same as above, but shows us the case numbers (the variable
case) as the symbol, allowing us to see that observation 100 is the outlying case.
. avplot x2, symbol([case])
Below, we look at the data for observation 100 and see that y has a value of 110. We checked the original data, and found that this was a data entry error. The
value really should have been 11.
. list in 100

Observation 100

  case    100         x1      -5          x2       8
  x3        0         y      110          rstu     7.829672
  l        .0298236   d       .2893599    DFx1    -.8180526
  DFx2     .9819155   DFx3    .0784904    rDFx2    .98
We change the value of y to be 11, the correct value.
. replace y = 11 if (case == 100) (1 real change made)
Having fixed the value of y, we run the regression again. The coefficients change
just as the regression diagnostics indicated: the coefficient for x1 increased
(from .19 to .325), and the coefficient for x2 decreased by the exact amount the
DFbeta indicated (from .57 to .42). Note that for x1 the coefficient went from
being non-significant to being significant. As we expected, the coefficient for x3
changed very little.
We repeat the regression diagnostic plots below. We need to look carefully at the scale of these plots since the axes will be rescaled with the omission of the large
outlier. The lvr2plot shows a point with a larger residual than most but low
leverage, and 3 points with larger leverage than most, but a small residual.
We recompute Cook's D for the corrected regression (the earlier diagnostic variables would need to be dropped first):

. predict d, cooksd

We then make the plot that shows the residual, leverage and Cook's D all in one
graph. None of the points really jump out as having an exceptionally high
residual, leverage and Cook's D value.
. graph rstu l [w=d]
Although we could scrutinize the data a bit more closely, we can tentatively state
that these revised results are good. Had we skipped checking the residuals, we
would have used the original results which would have underestimated the impact of x1 and overstated the impact of x2.
Regression Diagnostics: Multicollinearity
use http://www.ats.ucla.edu/stat/stata/modules/reg/multico, clear

Below we run a regression predicting y from x1 x2 x3 and x4. If we were to
report these results without any further checking, we would conclude that none of these predictors are significant predictors of the dependent variable, y. If we look
more carefully, we note that the test of all four predictors is significant (F = 16.37,
p = 0.0000) and these predictors account for 40% of the variance in y (R-squared = 0.40). It seems like a contradiction that the combination of these 4 predictors
should be so strongly related to y, yet none of them are significant. Let us
investigate further.
. regress y x1 x2 x3 x4
      Source |       SS       df       MS            Number of obs =     100
-------------+------------------------------        F(  4,    95) =   16.37
       Model |  5995.66253     4  1498.91563        Prob > F      =  0.0000
We use the vif command to examine the VIF values (and 1/VIF values, also
called tolerances). A general rule of thumb is that a VIF in excess of 20, or a tolerance of 0.05 or less may be worthy of further investigation. A tolerance
(1/VIF) can be described in this way, using x1 as an example. Use x1 as a
dependent variable, and use x2 x3 and x4 as predictors and compute the R-squared (the proportion of variance that x2 x3 and x4 explain in x1) and then take
1-Rsquared. In this example, 1-Rsquared equals 0.010964 (the value of 1/VIF for
x1). This means that only about 1% of the variance in x1 is not explained by the
other predictors. If we look at x4, we see that less than .2% of the variance in x4
is not explained by x1 x2 and x3. You can see that these results indicate that there
is a serious multicollinearity problem.
We might conclude that x4 is redundant and is really not needed in the model, so
we try removing x4 from the regression equation. Note that the variance
explained is about the same (still about 40%), but the predictors x1 x2 and x3
are now significant. If you compare the standard errors in the table above with
the standard errors below, we see that the standard errors in the table above were
much larger. This makes sense, because when a variable has a low tolerance, its
standard error will be increased.
. regress y x1 x2 x3
      Source |       SS       df       MS            Number of obs =     100
-------------+------------------------------        F(  3,    96) =   21.69
       Model |  5936.21931     3  1978.73977        Prob > F      =  0.0000
Below we look at the VIF and tolerances and see that they are very good, and
much better than the prior results. With these improved tolerances, the standard
errors of the coefficients are much smaller.
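The command is the same as before:

. vif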
We should emphasize that dropping variables is not the only solution to problems of multicollinearity. The solutions are often driven by the nature of your study
and the nature of your variables. You may decide to combine variables that are very highly correlated because you realize that the measures are really tapping the
exact same thing. You might decide to use principal component analysis or factor
analysis to study the structure of your variables, and decide how you might combine the variables. Or, you might choose to generate factor scores from a
principal component analysis or factor analysis, and use the factor scores as
predictors.
Had we not investigated further, we might have concluded that none of these
predictors were related to the dependent variable. After dropping x4, the results
were dramatically different showing x1 x2 and x3 all significantly related to the
dependent variable.
Regression Diagnostics: Non-Independence
use http://www.ats.ucla.edu/stat/stata/modules/reg/nonind, clear

Below we run a regression predicting y from x1 x2 and x3. These results suggest
that none of the predictors are related to y.
. regress y x1 x2 x3
      Source |       SS       df       MS            Number of obs =     100
-------------+------------------------------        F(  3,    96) =    0.41
       Model |  11.2753664     3  3.75845547        Prob > F      =  0.7431
Let's create and examine the residuals for this analysis, showing the residuals over
time. Below we see the residuals are clearly not distributed evenly across time,
suggesting the results are not independent over time.
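The commands are not shown in the original; a sketch, assuming the dataset's time index is named time:

. predict r, resid
. graph r time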
Time-Series Cross-Section Analyses (TSCS) or Panel data models
A balanced panel has all its observations; that is, the variables are observed for
each entity and each time period. A panel that has some missing data for at least
one time period for at least one entity is called an unbalanced panel.
A. The fixed effects regression model
$Y_{it} = \beta_0 + \beta_1 X_{it} + \beta_2 Z_i + u_{it}$   (12)

where $Z_i$ is an unobserved variable that varies from one state to the next but does
not change over time. We can rewrite equation (12) as:

$Y_{it} = \beta_1 X_{it} + \alpha_i + u_{it}$   (13)

where $\alpha_i = \beta_0 + \beta_2 Z_i$.
Assumptions of the OLS estimator
1. The conditional distribution of $u_i$ given $X_{1i}, X_{2i}, \ldots, X_{ki}$ has a mean of zero.
This means that the other factors captured in the error term are unrelated
to $X_{1i}, X_{2i}, \ldots, X_{ki}$; the correlation between $X_{1i}, X_{2i}, \ldots, X_{ki}$ and $u_i$ should
be nil. This is the most important assumption in practice. If this
assumption does not hold, it is likely because there is an omitted
variable bias. One should test for omitted variables using Ramsey's (1969) RESET test.
2. Related to the first assumption: if the variance of the conditional
distribution of $u_i$ given $X_{1i}, X_{2i}, \ldots, X_{ki}$ does not depend on
$X_{1i}, X_{2i}, \ldots, X_{ki}$, then the errors are
said to be homoskedastic. The error term $u_i$ is homoskedastic if the
variance of this conditional distribution is constant.
In the time fixed effects model, $Y_{it} = \beta_0 + \beta_1 X_{it} + \beta_3 S_t + u_{it}$, where $S_t$ is unobserved
and the subscript $t$ emphasizes that the variable $S$ changes over time but is constant across states.
Running Pooled OLS regressions in Stata
The simplest estimator for panel data is pooled OLS. In most cases this is unlikely
to be adequate.
The fixed and random effects models
The fixed and random effects models have in common that they decompose the
unitary pooled error term $u_{it}$ into a unit-specific, time-invariant
component $\alpha_i$ and an observation-specific error $\varepsilon_{it}$:

$u_{it} = \alpha_i + \varepsilon_{it}$

In the fixed effects model, the $\alpha_i$ are treated as
fixed parameters (in effect, unit-specific y-intercepts), which are to be estimated. This can be done by including a dummy variable for each cross-sectional unit
(and suppressing the global constant). This is sometimes called the Least Squares
Dummy Variables (LSDV) method or the "de-meaned" variables method.
In the random effects model, the analogous decomposition is $u_{it} = v_i + \varepsilon_{it}$, but in
contrast to the fixed effects model the $v_i$ are not treated as fixed parameters;
they are random drawings from a given probability distribution.
The celebrated Gauss–Markov theorem, according to which OLS is the best linear
unbiased estimator (BLUE), depends on the assumption that the error term is
independently and identically distributed (IID). If these assumptions are not met — and they are unlikely to be met in the context of panel data — OLS is not the
most efficient estimator. Greater efficiency may be gained using generalized least
squares (GLS), taking into account the covariance structure of the error term.
However, GLS estimation is equivalent to OLS using “quasi-demeaned”
variables; that is, variables from which we subtract a fraction of their average. This means that if all the variance is attributable to the individual effects, then the
fixed effects estimator is optimal; if, on the other hand, individual effects are
negligible, then pooled OLS turns out, unsurprisingly, to be the optimal estimator.
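To make the tradeoff concrete, here is the textbook quasi-demeaning transformation (not given in the original text): GLS amounts to OLS on

$\tilde{y}_{it} = y_{it} - \theta \bar{y}_{i}, \qquad \theta = 1 - \sqrt{\frac{\sigma^2_{\varepsilon}}{\sigma^2_{\varepsilon} + T\sigma^2_v}}$

so that when the individual effects dominate ($\sigma^2_v$ large relative to $\sigma^2_{\varepsilon}$), $\theta \to 1$ and GLS approaches the fully demeaned fixed effects estimator, while as $\sigma^2_v \to 0$, $\theta \to 0$ and GLS collapses to pooled OLS.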
Which panel method should one use, fixed effects or random effects? One way of answering this question is in relation to the nature of the data set. If
the panel comprises observations on a fixed and relatively small set of units of
interest (say, the member states of the European Union), there is a presumption in favor of fixed effects. If it comprises observations on a large number of randomly
selected individuals (as in many epidemiological and other longitudinal studies),
there is a presumption in favor of random effects.
Besides this general heuristic, however, various statistical issues must be taken
into account.
1. Some panel data sets contain variables whose values are specific to the cross-sectional unit but which do not vary over time. If you want to include such
variables in the model, the fixed effects option is simply not available. When the
fixed effects approach is implemented using dummy variables, the problem is that the time-invariant variables are perfectly collinear with the per-unit dummies.
When using the approach of subtracting the group means, the issue is that after
de-meaning these variables are nothing but zeros.
2. A somewhat analogous prohibition applies to the random effects estimator.
This estimator is in effect a matrix-weighted average of pooled OLS and the
“between” estimator. Suppose we have observations on n units or individuals and there are k independent variables of interest. If k > n, the “between” estimator is
undefined — since we have only n effective observations — and hence so is the
random effects estimator. If one does not fall foul of one or other of the prohibitions mentioned above, the choice between fixed effects and random
effects may be expressed in terms of the two econometric desiderata, efficiency
and consistency. From a purely statistical viewpoint, we could say that there is a tradeoff between robustness and efficiency. In the fixed effects approach, we do
not make any hypotheses on the “group effects” (that is, the time-invariant
differences in mean between the groups) beyond the fact that they exist — and
that can be tested; see below. As a consequence, once these effects are swept out by taking deviations from the group means, the remaining parameters can be
estimated.
On the other hand, the random effects approach attempts to model the group effects as drawings from a probability distribution instead of removing them. This
requires that individual effects are representable as a legitimate part of the
disturbance term, that is, zero-mean random variables, uncorrelated with the regressors.
As a consequence, the fixed-effects estimator “always works”, but at the cost of not being able to estimate the effect of time-invariant regressors. The richer
hypothesis set of the random-effects estimator ensures that parameters for time-
invariant regressors can be estimated, and that estimation of the parameters for time-varying regressors is carried out more efficiently. These advantages, though,
are tied to the validity of the additional hypotheses. If, for example, there is
reason to think that individual effects may be correlated with some of the
explanatory variables, then the random-effects estimator would be inconsistent,
while fixed-effects estimates would still be valid.
It is precisely on this principle that the Hausman test is built: if the fixed- and
random effects estimates agree, to within the usual statistical margin of error, there is no reason to think the additional hypotheses invalid, and as a
consequence, no reason not to use the more efficient RE estimator.
Testing panel models
Panel models carry certain complications that make it difficult to implement all of
the tests one expects to see for models estimated on straight time-series or cross-sectional data.
When you estimate a model using fixed effects, you automatically get an F-test for
the null hypothesis that the cross-sectional units all have a common intercept. When you estimate using random effects, the Breusch–Pagan and Hausman tests
are presented automatically.
The Breusch–Pagan test is the counterpart to the F-test mentioned above. The null
hypothesis is that the variance of $v_i$ equals zero; if this hypothesis is not rejected, then again we conclude that the simple pooled model is adequate.
The Hausman test probes the consistency of the GLS estimates. The null
hypothesis is that these estimates are consistent — that is, that the requirement of orthogonality of the vi and the Xi is satisfied. The test is based on a measure, H, of
the “distance” between the fixed-effects and random-effects estimates,
constructed such that under the null it follows the $\chi^2$ distribution with degrees of
freedom equal to the number of time-varying regressors in the matrix X. If the
value of H is “large” this suggests that the random effects estimator is not
consistent and the fixed-effects model is preferable.
Robust standard errors
For most estimators, Stata offers the option of computing an estimate of the
covariance matrix that is robust with respect to heteroskedasticity and/or
autocorrelation (and hence also robust standard errors). In the case of panel data, robust covariance matrix estimators are available for the pooled and fixed effects
model but not currently for random effects.
Let's now turn to estimation commands for panel data.
The first type of regression that you may run is a pooled OLS regression, which is
simply an OLS regression applied to the whole dataset. This regression does not
take into account that you have different individuals across time periods; that is, it
does not account for the panel nature of the dataset.
reg ln_wage grade age ttl_exp tenure black not_smsa south
In commands like this one, you do not need to type related variables such as age1
and age2 one by one: typing age* instructs Stata to include all the variables whose
names start with age in the regression.
Suppose you want to observe the internal results saved in Stata associated with
the last estimation. This is valid for any regression that you perform. In order to
list them, type:
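ereturn list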
In empirical work with panel data, you are always concerned with choosing between
two alternative regressions. This choice is between fixed effects (or within, or least
squares dummy variables, LSDV) estimation and random effects (or feasible
generalized least squares, FGLS) estimation.
In panel data, in the two-way model, the error term can be the result of the sum of
three components:
1. a specific individual effect,
2. a specific time effect,
3. and an additional idiosyncratic term.
In the one-way model, the error term includes only one of these effects besides the
idiosyncratic term:
1. a specific individual effect.
It is absolutely fundamental that the error term is not correlated with the independent variables.
• If you have no correlation, then the random effects model should be used
because it is a weighted average of between and within estimations.
• But, if there is correlation between the individual and/or time effects and the independent variables, then the individual and time effects (fixed
effects model) must be estimated as dummy variables in order to solve for
the endogeneity problem.
The fixed effects (or within) regression is an OLS regression of the form:

$(y_{it} - \bar{y}_{i.} - \bar{y}_{.t} + \bar{y}_{..}) = (x_{it} - \bar{x}_{i.} - \bar{x}_{.t} + \bar{x}_{..})\beta + (v_{it} - \bar{v}_{i.} - \bar{v}_{.t} + \bar{v}_{..})$

where $\bar{y}_{i.}$, $\bar{x}_{i.}$ and $\bar{v}_{i.}$ are the means of the respective
variables (and the error) within each individual across time; $\bar{y}_{.t}$, $\bar{x}_{.t}$ and $\bar{v}_{.t}$ are the means of the respective
variables (and the error) within each time period across individuals; and $\bar{y}_{..}$, $\bar{x}_{..}$
and $\bar{v}_{..}$ are the overall means of the respective variables (and the error).
Choosing between Fixed effects and Random effects? The Hausman test
The generally accepted way of choosing between fixed and random effects is
running a Hausman test.
Statistically, fixed effects are always a reasonable thing to do with panel data
(they always give consistent results) but they may not be the most efficient model to run. Random effects will give you better P-values as they are a more efficient
estimator, so you should run random effects if it is statistically justifiable to do so.
The Hausman test checks a more efficient model against a less efficient but consistent model to make sure that the more efficient model also gives consistent
results.
To run a Hausman test comparing fixed with random effects in Stata, you need to
first estimate the fixed effects model, save the coefficients so that you can
compare them with the results of the next model, estimate the random effects
model, and then do the comparison.
1. xtreg dependentvar independentvar1 independentvar2 ..., fe
2. estimates store fixed
3. xtreg dependentvar independentvar1 independentvar2 ..., re
4. estimates store random
5. hausman fixed random
The hausman command tests the null hypothesis that the coefficients estimated by the
efficient random effects estimator are the same as the ones estimated by the
consistent fixed effects estimator. If the differences are insignificant (P-value, Prob>chi2,
larger than .05), then it is safe to use random effects. If you get a significant P-value, however, you should use fixed effects.
If you want a fixed effects model with robust standard errors, you can use the
following command:
areg ln_wage grade age ttl_exp tenure black not_smsa south, absorb(idcode) robust
You may be interested in running a maximum likelihood estimation in panel data.
You would type:
xtreg ln_wage grade age ttl_exp tenure black not_smsa south, mle
If you qualify for a fixed effects model, should you include time effects?
Another important question when you are doing empirical work in panel data is
whether or not to include time effects (time dummies) in your fixed effects model.
In order to perform the test for the inclusion of time dummies in our fixed effects regression:
1. First, we run fixed effects including the time dummies. In the next fixed effects regression the time dummies are included via the wildcard y* (see "Generating time dummies"), but you could type them all if you prefer.
xtreg ln_wage grade age ttl_exp tenure black not_smsa south y*, fe
2. Second, we apply the "testparm" command. It is the test for time dummies, which assumes the null hypothesis that the time dummies are not jointly significant.
testparm y*
3. We reject the null hypothesis that the time dummies are not jointly
significant if the p-value is smaller than 10%; as a consequence, our fixed
effects regression should include time effects.
Fixed effects or random effects when time dummies are involved: a test
What about if the inclusion of time dummies in our regression would permit us to
use a random effects model in the individual effects?
[This question is not usually considered in typical empirical work; the purpose
here is to show you an additional test for random effects in panel data.]
1. First, we will run a random effects regression including our time dummies,
xtreg ln_wage grade age ttl_exp tenure black not_smsa south y*, re
2. and then we will apply the "xttest0" command to test for random effects in this case, which assumes the null hypothesis of random effects.
xttest0
3. The null hypothesis of random effects is again rejected if the p-value is smaller than 10%, and thus we should use a fixed effects model with time effects.
Special problems arise when a lag of the dependent variable is included among
the regressors in a panel model.
First, if the error $u_{it}$ includes a group effect, $v_i$, then $y_{i,t-1}$ is bound to be
correlated with the error, since the value of $v_i$ affects $y_i$ at all $t$. That means
that OLS will be inconsistent as well as inefficient. The fixed-effects model sweeps
out the group effects and so overcomes this particular problem, but a subtler issue
remains, which applies to both fixed and random effects estimation.
Estimators which ignore this correlation will be consistent only as $T \to \infty$ (in which
case the marginal effect of $\varepsilon_{it}$ on the group mean of $y$ tends to vanish).
One strategy for handling this problem, and producing consistent estimates of $\beta$
and $\rho$, was proposed by Anderson and Hsiao (1981). Instead of de-meaning the
data, they suggest taking the first difference, an alternative tactic for sweeping out
the group effects. For a dynamic model of the form $y_{it} = \rho y_{i,t-1} + \beta x_{it} + v_i + \varepsilon_{it}$, differencing yields

$\Delta y_{it} = \rho \Delta y_{i,t-1} + \beta \Delta x_{it} + \eta_{it}$, where $\eta_{it} = \Delta\varepsilon_{it}$.

Although the Anderson–Hsiao estimator is consistent, it is not most efficient: it
does not make the fullest use of the available instruments, nor does it take into
account the differenced structure of the error $\eta_{it}$. It is improved upon by the
methods of Arellano and Bond (1991) and Blundell and Bond (1998).
Stata natively implements the Arellano–Bond estimator. The rationale behind it is,
strictly speaking, that of a GMM estimator. This procedure has the double effect
of handling heteroskedasticity and/or serial correlation, plus producing estimators
that are asymptotically efficient.
One-step estimators have sometimes been preferred on the grounds that they are more robust.
Moreover, computing the covariance matrix of the 2-step estimator via the
standard GMM formulae has been shown to produce grossly biased results in finite samples. However, implementing the finite-sample correction devised by
Windmeijer (2005) leads to standard errors for the 2-step estimator that can be
considered relatively accurate.
Two additional commands that are very useful in empirical work are the Arellano
and Bond estimator (GMM estimator) and the Arellano and Bover estimator
(system GMM estimator). Both commands permit you to deal with dynamic panels
(where you want to use lags of the dependent variable as independent variables)
as well as with problems of endogeneity.
You may want to have a look at them. The commands are, respectively, "xtabond"
and "xtabond2". "xtabond" is a built-in command in Stata, so in order to check
how it works, just type:
help xtabond
"xtabond2" is not a built in command in Stata. If you want to look at it, previously, you must get it from the net (this is another feature of Stata- you can
always get additional commands from the net). You type the following:
findit xtabond2
The next steps to install the command should be obvious.
How does it work?
The xtabond2 command allows you to estimate dynamic models with either the
difference GMM estimator or the system GMM estimator.
• Its small option replaces the z-statistics with t-test results.
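A minimal system-GMM sketch (this specification is illustrative, not from the original text), treating the lagged dependent variable as the GMM-style instrumented regressor and the remaining regressors as standard instruments:

xtabond2 ln_wage l.ln_wage grade age ttl_exp, gmm(l.ln_wage) iv(grade age ttl_exp) robust small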
TESTS
In need of a causality test?
The first thing to do is to use the command summarize, detail or other functions
presented in the previous tutorials, to obtain a description of the data. Once again,
it is required that you show explicitly what the NULL and ALTERNATIVE
hypotheses of this test are, and the regression equations you are going to run. The
results in Thurman and Fisher's (1988) Table 1 can easily be replicated using
OLS regressions and the time series commands introduced in the previous
tutorials.
A simple example in Stata:
* Causality direction A: Do chickens Granger-cause eggs?
For example, one regresses eggs on its own lags and on lags of chickens and tests
the joint significance of the chicken lags, as in the sketch below.
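A minimal sketch (the variable names egg and chic, the annual index year, and the four-lag choice are our assumptions):

tsset year
regress egg L(1/4).egg L(1/4).chic
test L.chic L2.chic L3.chic L4.chic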
Probit and logit regressions are models designed for binary dependent variables. Because a regression with a binary dependent variable Y models the probability
that Y=1, it makes sense to adopt a nonlinear formulation that forces the predicted values to be between zero and one.
Probit regression uses the standard normal cumulative probability distribution
function. Logit regression uses the logistic cumulative probability distribution function.
Probit regression
$\Pr(Y = 1 \mid X_1, \ldots, X_k) = \Phi(\beta_0 + \beta_1 X_1 + \cdots + \beta_k X_k)$   (15)

where $\Phi$ is the cumulative standard normal distribution function.
Logit regression
$\Pr(Y = 1 \mid X_1, \ldots, X_k) = F(\beta_0 + \beta_1 X_1 + \cdots + \beta_k X_k) = \dfrac{1}{1 + e^{-(\beta_0 + \beta_1 X_1 + \cdots + \beta_k X_k)}}$   (16)
Logit regression is similar to probit regression except that the cumulative distribution function is different.
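In Stata the corresponding commands are probit and logit; a minimal sketch with a hypothetical 0/1 dependent variable union:

probit union age grade south
logit union age grade south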
The rvfplot shows a real fan-spread pattern, where the variability of the residuals
grows across the fitted values.
. rvfplot
The hettest command confirms there is a problem of heteroscedasticity.
. hettest
Cook-Weisberg test for heteroscedasticity using fitted values of timedrs
  Ho: Constant variance
chi2(1) = 148.83
Prob > chi2 = 0.0000
Let's address the problems of non-normality and heteroscedasticity. Tabachnick
and Fidell recommend a log (to the base 10) transformation for timedrs and
phyheal and a square root transformation for stress. We make these
transformations below.
. generate ltimedrs = log10(timedrs+1)
. generate lphyheal = log10(phyheal+1)
. generate sstress = sqrt(stress)

Let's examine the distributions of these new variables. These transformations
have made the distributions much closer to normal.
The distribution of the residuals looks better. There is still a flat portion in the
bottom left of the plot, and there is a residual in the top left.
. rvfplot
The hettest command is no longer significant, suggesting that the residuals are
homoscedastic.
. hettest
Cook-Weisberg test for heteroscedasticity using fitted values of ltimedrs
  Ho: Constant variance
chi2(1) = 0.86
        Prob > chi2  =   0.3529

We use the ovtest command to test for omitted variables from the equation. The
results suggest no omitted variables.
. ovtest

Ramsey RESET test using powers of the fitted values of ltimedrs
  Ho: model has no omitted variables
        F(3, 458)  =     0.60
        Prob > F   =    0.6134
We use the ovtest with the rhs option to test for omitted higher order trends (e.g.
quadratic, cubic trends). The results suggest there are no omitted higher order trends.
. ovtest, rhs

Ramsey RESET test using powers of the independent variables
  Ho: model has no omitted variables
        F(9, 452)  =     0.87
        Prob > F   =    0.5525
Examination of the added variable plots below show no dramatic problems.
. avplots
Let's create leverage, studentized residuals, and Cook's D, and plot these. These
results look mostly OK. There is one observation in the middle top-right section
that has a large Cook's D (large bubble) and a fairly large residual, but not a very large
leverage.
. predict l, leverage
. predict rstud, rstudent
. predict d, cooksd
. graph rstud l [w=d]
Below we show the same plot showing the subject number, and see that
observation 548 is the observation we identified in the plot above.
The residuals look like they are OK. Let's try running the regression using robust
standard errors and see if we get the same results. Indeed, the results below (using robust standard errors) are virtually the same as the prior results.
. regress ltimedrs lphyheal menheal sstress, robust

Regression with robust standard errors                 Number of obs =     465
Since the dependent variable was a count variable, we could have tried analyzing
the data using poisson regression. We try analyzing the original variables using
negative binomial regression below.
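A sketch of the negative binomial fit on the untransformed variables (the variable list mirrors the earlier regression):

. nbreg timedrs phyheal menheal stress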
Likelihood ratio test of alpha=0:  chi2(1) = 2075.77   Prob > chi2 = 0.0000
The test of overdispersion (test of alpha=0) is significant, indicating that the
negative binomial model would be preferred over the poisson model.
This module illustrated some of the diagnostic techniques and remedies that can
be used in regression analysis. The main problems shown here were problems of
non-normality and heteroscedasticity that could be mended using log and square
root transformations.
The Crosstabs procedure forms two-way and multiway tables and provides a
variety of tests and measures of association for two-way tables. The structure of the table and whether categories are ordered determine what test or measure to
use.
Crosstabs’ statistics and measures of association are computed for two-way tables
only. If you specify a row, a column, and a layer factor (control variable), the
Crosstabs procedure forms one panel of associated statistics and measures for
each value of the layer factor (or a combination of values for two or more control
variables). For example, if GENDER is a layer factor for a table of MARRIED (yes, no) against LIFE (is life exciting, routine, or dull), the results for a two-way
table for the females are computed separately from those for the males and printed
as panels following one another.
Example. Are customers from small companies more likely to be profitable in
sales of services (for example, training and consulting) than those from larger companies? From a crosstabulation, you might learn that the majority of small
companies (fewer than 500 employees) yield high service profits, while the
majority of large companies (more than 2500 employees) yield low service
profits.

Statistics and measures of association. Pearson chi-square, likelihood-ratio chi-square,
linear-by-linear association test, Fisher's exact test, Yates' corrected chi-square,
Pearson's r, Spearman's rho, contingency coefficient, phi, Cramér's V,
symmetric and asymmetric lambdas, Goodman and Kruskal's tau, uncertainty
coefficient, gamma, Somers' d, Kendall's tau-b, Kendall's tau-c, eta coefficient,
Cohen's kappa, relative risk, the odds ratio, the McNemar test, and Cochran's and
Mantel-Haenszel statistics.
Chi-square. For tables with two rows and two columns, select Chi-square to
calculate the Pearson chi-square, the likelihood-ratio chi-square, Fisher's exact
test, and Yates' corrected chi-square (continuity correction). For 2 × 2 tables,
Fisher's exact test is computed when a table that does not result from missing
rows or columns in a larger table has a cell with an expected frequency of less
than 5. Yates' corrected chi-square is computed for all other 2 × 2 tables. For
tables with any number of rows and columns, select Chi-square to calculate the
Pearson chi-square and the likelihood-ratio chi-square. When both table variables
are quantitative, Chi-square yields the linear-by-linear association test.
Correlations. For tables in which both rows and columns contain ordered values, Correlations yields Spearman’s correlation coefficient, rho (numeric data only).
Spearman’s rho is a measure of association between rank orders. When both table
variables (factors) are quantitative, Correlations yields the Pearson correlation coefficient, r, a measure of linear association between the variables.
Nominal. For nominal data (no intrinsic order, such as Catholic, Protestant, and
Jewish), you can select Phi (coefficient) and Cramér’s V, Contingency
coefficient, Lambda (symmetric and asymmetric lambdas and Goodman and
Kruskal’s tau), and Uncertainty coefficient.
Ordinal. For tables in which both rows and columns contain ordered values, select
Gamma (zero-order for 2-way tables and conditional for 3-way to 10-way tables),
Kendall's tau-b, and Kendall's tau-c. For predicting column categories from row
categories, select Somers' d.
Nominal by Interval. When one variable is categorical and the other is
quantitative, select Eta. The categorical variable must be coded numerically.
Kappa. For tables that have the same categories in the columns as in the rows (for
example, measuring agreement between two raters), select Cohen's kappa.
Risk. For tables with two rows and two columns, select Risk for relative risk estimates and the odds ratio.
McNemar. The McNemar test is a nonparametric test for two related dichotomous
variables. It tests for changes in responses using the chi-square distribution. It is useful for detecting changes in responses due to experimental intervention in
"before and after" designs.
Cochran’s and Mantel-Haenszel. Cochran’s and Mantel-Haenszel statistics can
be used to test for independence between a dichotomous factor variable and a
dichotomous response variable, conditional upon covariate patterns defined by one or more layer (control) variables. The Mantel-Haenszel common odds ratio is
also computed, along with Breslow-Day and Tarone's statistics for testing the
homogeneity of the common odds ratio.