Divyansh Aggarwal 100002971
Sabeena Vasandani 100016442
Asaad Rafi 090002252
Roshni Sahni 100000098
CASS BUSINESS SCHOOL
Financial Econometrics
INTRODUCTION
This paper assesses the validity of the model Rt = α + βRmt + ut for characterising the behaviour of asset returns. The company analysed in this study is The Goldman Sachs Group, Inc. (GS:US), using its stock from January 2001 to September 2011. Goldman Sachs has often been referred to as "the money-making machine", particularly after its performance even amidst a financial crisis. The stock is listed on the NYSE and is a constituent of the S&P 500. For the purpose of this report, our focus remains on the S&P 500 (SPX), as it is a free-float capitalisation-weighted index and one of the most commonly used indices after the Dow Jones. The US 3-month bond yields are used as the short-term risk-free asset (USTBILL), while the US 10+ year bond yields are used as the long-term risk-free asset (LY). These variables are used to run the capital asset pricing model (CAPM). The macroeconomic model takes into account the US effective exchange rate (EER), GDP and CPI to analyse the correlation between the Goldman Sachs stock and macroeconomic conditions in the US. Our analysis also includes the Goldman Sachs dividend yield (DIVY), return on investment (ROI) and earnings index (EI) in order to run a regression on the company-specific model. These three models are assessed in order to identify the most valid model for the return on the Goldman Sachs stock. The data for this investigation were obtained from Bloomberg.
CONTENTS

INTRODUCTION
THE MODEL AND THE FINDINGS
DIAGNOSTIC TESTS
REMEDY
WALD TEST
NON-LINEARITY TEST
CHOW TEST
SEASONALITY EFFECTS
THE BEST MODEL
EVALUATION
REFERENCES
THE MODEL AND THE FINDINGS
The graph on the left compares the returns on the US Treasury bill (USTBILL), the return on the Standard and Poor's index (RSPX) and the returns on Goldman Sachs (RGS). The RGS series is clearly the most volatile: the fluctuations in Goldman Sachs returns are greater than those of RSPX and USTBILL. Even SPX returns are volatile compared with USTBILL, which is expected to be stable as it is considered a risk-free asset. We can observe that RGS and RSPX are positively correlated: when RSPX falls, RGS follows the same trend, and vice versa. However, as RGS is more volatile, the extent of the correlation is limited. The correlation can be seen in the scatter graph below.
The following statistics were obtained using Eviews. The average excess return of Goldman Sachs is negative, as is that of RSPX; this is shown by the mean values in the table. The maximum, minimum and standard deviation values also show that RGS is more volatile than RSPX, whereas USTBILL does not fluctuate greatly. The kurtosis is greater than 3 for RSPX and RGS, indicating a greater possibility of outliers, unlike USTBILL. The negative skewness of RSPX and RGS indicates that the left tail is longer than the right
and the majority of the values lie to the right of the mean. The Capital Asset Pricing Model (CAPM) is based on the principle that investors must be compensated for the time value of money and for bearing risk. We have used Ordinary Least Squares (OLS) throughout this project, as it is the most common method of fitting a line to data. We will use OLS to estimate our CAPM as a simple regression model of the form: Rt = α + βRmt + ut
Rt is the dependent variable, which will be RGS.
α is the intercept: the excess return earned even when beta is 0.
β is the coefficient (multiplier) on the independent variable.
Rmt is the independent variable, which will be RSPX.
The regressed model was obtained in Eviews by inputting the data from January 2001 to September 2011 and creating a new equation, CAPM, from New Object. The equation specification was: RGS C RSPX. The resulting estimated equation was:
RGS = c + 1.382273*RSPX
where c is the estimated intercept.
A general regression model is given by Y = α + β1X + u. Our regression model takes the form: portfolio return = α + β1 × market return + error. R-squared measures how well a regression function fits the data and can take any value between 0 and 1; the closer R-squared is to 1, the better the fit. In this case, the R-squared value is 0.518236, which suggests a decent model. Adjusted R-squared measures the proportion of the variation in the dependent variable accounted for by the explanatory variables, adjusted for the number of regressors; it is therefore useful for comparing models. Like R-squared, it ranges between 0 and 1, and the closer to 1 the better. For the model we have generated, adjusted R-squared is 0.514412, suggesting it can be improved with some corrections. The standard error is the standard deviation of the sampling distribution; the lower the standard error, the better the model. Our standard error is 0.063540, which, as noted, can be lowered with some corrections. The sum of squared residuals (RSS) measures the divergence between the data and the estimated model; a small RSS is a good sign, as it means the model fits the data closely. Our RSS value is 0.508698, which is not very high and again suggests a decent model. The F-statistic tests whether the slope coefficients are jointly statistically significant. The test is represented as follows:
H0: β1 = 0 (null hypothesis)
H1: β1 ≠ 0 (alternative hypothesis)
We computed the F-statistic to be 135.5387, and the corresponding probability (Prob(F-statistic)) is zero, which means we reject the null hypothesis and conclude that at least one of the slope coefficients (β) is significant. Since the F-statistic showed that the β values are jointly significant, we can now use the t-statistic to establish whether each individual coefficient is statistically significant. The t-statistic is the ratio of an estimated parameter's divergence from its hypothesised value to its standard error. A variable is significant only if its probability is less than 5%; RSPX is therefore significant, as its p-value is zero. The significance of the other variables will be checked in the same way throughout this project. The coefficient on RSPX is 1.382273, meaning that a 1% return on SPX is associated with a return of about 1.38% on a GS share. This indicates a positive relationship, which was already visible in the earlier graph.
DIAGNOSTIC TESTS
The five assumptions underlying the Classical Linear Regression Model (CLRM) are:
1. E(ut) = 0
2. var(ut) = σ²
3. cov(ui, uj) = 0 for i ≠ j
4. cov(ut, xt) = 0
5. ut ~ N(0, σ²)
The diagnostic tests below have been carried out with the intention of testing whether these assumptions hold. Our interest lies in determining whether the β values obtained by OLS are the Best Linear Unbiased Estimators (BLUE); this is only possible if the assumptions above hold.
White Heteroskedasticity Test
White test: a statistical test that establishes whether the variance of the error terms in a regression model is constant.
Heteroskedasticity arises when the errors do not have constant variance, contradicting the assumption of homoscedasticity. If our estimators are to be BLUE, the assumption var(ut) = σ² must hold. A graphical representation can be used to inspect the variability of the residuals; however, such a method does not reveal the form or cause of heteroskedasticity, and patterns in a residual graph may lead to the wrong conclusions. A formal test such as the White test is therefore more useful, as it yields conclusions that can be justified. We define our hypotheses as follows:
H0: the disturbances are homoscedastic (constant variance σ²)
H1: the disturbances are heteroscedastic
The p value for the F-‐test is 0.0035 (<5%), thus we reject the null hypothesis. This indicates that there is systematically changing variability over the sample, and that the coefficient estimates are no longer BLUE (Best Linear Unbiased Estimators).
Sources of misspecification: stock prices typically go through alternating periods of high and low volatility as markets react to exogenous factors, so a constant error variance over the sample is unlikely.
Autocorrelation Test
Another assumption of the CLRM states that cov(ui, uj) = 0 for i ≠ j; that is, the model assumes there is no autocorrelation in the residuals (no pattern in the disturbances).
H0: no autocorrelation
H1: autocorrelation exists
Durbin-Watson
The simplest test for autocorrelation in the residuals is the Durbin-Watson (DW) test. In our regression, the DW statistic is 1.754542. Values below 2 point towards positive first-order autocorrelation, although a firm conclusion requires comparison with the critical values dL and dU. The test has two limitations. Firstly, it tests only for first-order autocorrelation, i.e. the relationship between an error and the value immediately preceding it. Secondly, it assumes non-stochastic regressors: if the regression were repeated with a new sample, the regressor values would remain unchanged and only the dependent variable would change, as the new sample would comprise new values of the disturbance term. The DW test is therefore of limited reliability, and a more general test such as the Breusch-Godfrey Serial Correlation LM Test is used to examine the existence of higher-order autocorrelation amongst the residuals.
Breusch-Godfrey Serial Correlation LM Test
The BG test is free of these constraints and is therefore considered more reliable. It is carried out with 12 lags, reflecting the monthly frequency of the data. The p-value is 0.6173, which is greater than the 5% significance level, so the null hypothesis cannot be rejected: there is no evidence of autocorrelation in the residuals. The BG result is therefore acknowledged and the DW statistic disregarded.
Residuals Normality Test
The residuals normality test examines the condition of normality, which requires the errors to have a mean of 0: E(ut) = 0. Using Eviews, the histogram normality test is carried out. A normal bell
shaped distribution has skewness = 0 and kurtosis = 3. The skewness of our residuals is 0.508741 and the kurtosis is 5.364892; non-normality is therefore evident.
Ramsey RESET Test
A fundamental assumption of the CLRM is that the functional form of the model is linear. The Ramsey Regression Equation Specification Error Test (RESET) is used to detect misspecification of the functional form. Here, the number of fitted terms was set to 1, so that only the square of the fitted values is included. This introduces non-linearity into the specification, allowing us to detect non-linearities in the functional form that a linear specification would miss, and to judge whether the model would benefit from non-linear terms.
We define our hypotheses:
H0: the correct specification is linear
H1: the correct specification is non-linear
According to the test results, the p-‐value for both the F and t-‐statistics is 0.4714, which is more than 5%. We accept the null hypothesis that all regression coefficients of the non-‐linear terms are 0 and we conclude that there is no apparent non-‐linearity in the regression equation. The linear model is thus appropriate.
In addition, the results above show no significant effect from including squared variables in our model. It is thus best to omit them: they would not yield a better-fitting model, and their inclusion merely shifts the coefficients. We would in any case not want a non-linear model, as the CLRM assumes a linear functional form.
Jarque-Bera Test
The fifth assumption of the CLRM is that the disturbances are normally distributed: ut ~ N(0, σ²). This assumption is needed so that inferences about the model parameters can be made. The Jarque-Bera (JB) test is the most common normality measure and it
should have a value less than 6 or 7 and a p-value greater than 0.05 for the null hypothesis to be accepted.
The hypotheses may be defined as:
H0: distribution is normal H1: distribution is not normal
The Jarque-Bera value is 35.34923 (>7) and the p-value is 0 (<5%). We thus reject the null hypothesis at the 5% significance level and conclude that the residuals do not follow a normal distribution. The cause may be a breakpoint in the regression residuals, or an outlier. A normally distributed random variable has no skewness, yet our model displays a fairly large positive skewness coefficient (0.508741). The kurtosis coefficient is 5.364892 (>3), which indicates fat tails; extreme events are therefore more likely than under normality.
Accurate inferences thus cannot be made on the regression parameters due to violation of the fifth assumption.
Sources of misspecification: on the right-hand side of the histogram there are some outliers resulting from the 2007-2009 financial crisis, during which Goldman Sachs, along with other banks, had to restructure and adapt to the credit crunch and to policy responses such as quantitative easing.
REMEDY
Some models contain a few extreme observations. These outliers arise from exogenous events that affect the returns on the portfolio and cause a structural break. To address this problem, dummy variables can be added, which helps restore normality to the residuals.
Below is the residual graph for the model, with outliers at 2005M05, 2008M08, 2009M02 and 2010M04, for which dummy variables were generated in Eviews.
In April 2010, the residual is negative, reflecting that market returns were lower than expected: Goldman Sachs was charged with a $1bn fraud over toxic sub-prime securities, leading to a fall in the market return for the stock.
In February 2009, Goldman Sachs became joint lead manager, bookrunner and underwriter for an A$750 million share sale, eventually boosting investor confidence.
In August 2008, Goldman Sachs reached a settlement with state regulators, paying a penalty and repurchasing $1 billion of auction rate securities.
In May 2005, Goldman Sachs JBWere appointed Andrew Smith as Senior Investment Manager, which may have contributed to a drop in investor confidence.
After rerunning the CAPM regression with the four dummy variables, the estimation output and residual graph for the correction were as follows:
RGS = 0.00297319544894 + 1.53231238734*RSPX - 0.137902886751*DM05M05 - 0.136251010166*DM08M08 + 0.296211721062*DM09M02 - 0.186740520686*DM10M04
As we can observe, the dummy variables are highly significant for this model and raise the R-squared by 0.145407, which is substantial. The p-value for RSPX is also 0.0000; there are thus no insignificant variables in the model. By adding the dummy variables, we have removed the impact of the large outliers and thereby made the model more efficient.
In addition, the other problematic issues that we have found (non-‐normality and heteroskedasticity) have unfavourable effects on the validity of our parameter estimators, because they are no longer BLUE. We continue to solve these problems below –
Non-normality Correction:
With respect to the new model above (that has been created by introducing dummies), we have conducted another normality test with the inclusion of the dummy variables. We now arrive at
a Jarque-Bera value of 2.096821 (<7) and a probability of 35.0494% (>5%). We thus accept our null hypothesis that the distribution of the disturbance terms is normal. The kurtosis coefficient is 2.929959, close to 3, which is consistent with normality.
Our new regression therefore confirms that the inclusion of the four dummy variables is very highly significant. Furthermore, the advantage of this is that our standard errors are now accurate – dummy variables have been included to represent events that have only occurred once, thus they ‘knock-‐out’ the effects of the exogenous shocks.
Heteroskedasticity Correction:
We have run White's test again after the inclusion of the dummy variables to establish whether the new regression model exhibits heteroskedasticity. The p-values for the F-statistic and the Obs*R-squared (nR²) statistic are 0.4341 and 0.5037 respectively. Both exceed our 5% significance level, so we accept the null hypothesis that the disturbances are homoscedastic. As a result, our new model satisfies the assumption var(ut) = σ².
We also re-estimated the model with White heteroskedasticity-consistent standard errors. Comparing the original regression with the new one, we observe that the standard errors of the coefficients have increased relative to the original OLS standard errors, which lowers the t-statistics (t = β/s.e.). This indicates that the original standard errors were underestimated; with smaller t-statistics, we are less likely to reject the null hypothesis.
Two other models can also be regressed and compared with the CAPM: the macroeconomic model (MACRO) and the company-specific model (COMP). The models are represented as follows:
We can observe that the dummy-variable correction has already been applied. MACRO has a slightly higher R-squared value (0.668202) than the corrected CAPM model (0.663643), so it is only marginally better. The company-specific model, however, has a very high R-squared of 0.979931. COMP is thus markedly better than CAPM, with lower standard error and RSS values; it also contains more significant independent variables, which makes the model more precise.
WALD TEST
H0: α = 0 and β = 1
H1: α ≠ 0 and/or β ≠ 1
It is important to investigate whether the beta coefficient is statistically different from 1, and whether the coefficient (c(1)) is statistically different from 0. The Wald Test examines restrictions on parameters and is used to test the null hypothesis at a 5% significance level. The p value is 0.0069, which is significantly lower than 0.05 therefore the null hypothesis has to be rejected.
The restrictions α = 0 and β = 1 were imposed on the equation. Since the Wald test rejects the joint restriction, and β is estimated to differ from 1, we conclude that Goldman Sachs returns on average fluctuate more than market returns as a whole. It also follows that, with α ≠ 0, Goldman Sachs would continue to generate returns even if the market showed a zero return.
The estimated MACRO and COMP models (with the dummy-variable corrections applied) are:

MACRO: RGS = 0.000371011241589 + 1.45717441113*RSPX + 0.888045824634*LGDP - 3.2965019748*CPI + 0.798033168986*LCPI - 0.445025862943*LEER - 0.132440827324*DM05M05 - 0.119168922959*DM08M08 + 0.299225377047*DM09M02 - 0.187001508452*DM10M04

COMP: RGS = 0.00179303049176 + 0.123403015048*RSPX - 0.911798231838*LDIVY + 0.064354649494*LROI + 0.0133060319989*LEI + 0.211154801388*DM03M08 + 0.163828493908*DM03M11 + 0.13871075955*DM04M02 + 0.113480480509*DM04M05 + 0.0230190834149*DM09M02 - 0.0649459471681*DM09M12

NON-LINEARITY TEST
A simple regression has a linear form, which implies the CAPM should plot as a straight line. To validate this, a Ramsey RESET test needs to be carried out. We augment the model by adding the square of the market return (rspx2); squaring ensures that positive and negative deviations do not cancel each other out when summed, and it penalises large deviations more heavily than small ones.
We regress the added variable along with the model and test its significance for the portfolio return. The hypotheses are:
H0 : rspx2=0 (no relationship between rspx2 and rgs)
H1 : rspx2≠0 (relationship between rspx2 and rgs)
After rerunning the regression, the estimated output is as follows:
The overall F-statistic of the augmented regression is 67.77302, and the p-value on the added term is 0.4714. We thus fail to reject our null hypothesis, as 0.4714 > 0.05. Hence, there is no relationship between the portfolio return and the square of the market return.
There is also no significant change in the R-squared (just 0.002002), showing that rspx2 has no substantial impact on the model.
Now, to test the linearity of this model, we run a Ramsey RESET test. The hypotheses for the test are:
H0 : correct functional form
H1 : incorrect functional form
The test statistics were as follows:
The test shows a p-value of 0.4232, which is greater than 0.05, so at the 5% significance level we accept the null hypothesis and conclude that the model is linear. Augmenting the model with the square of the market return does not bring it substantially closer to a correct functional form; hence rspx2 does not affect the regressed model.
CHOW TEST
The Chow test is used to analyse whether a structural break has occurred in the performance of a financial portfolio following events in the wider world, such as a stock market crash. We will use this test to determine whether our portfolio's return has been significantly altered by such 'structural breaks'.
The test was conducted with two breakpoints: 2003M03 and 2007M11. In 2003M03, Goldman Sachs merged with JBWere, an Australian financial institution, taking a 45% stake in the newly formed venture; 2007M11 was chosen because the ongoing recession began in that period. If the coefficients change across the sub-periods, the relationship between the dependent and independent variables has changed, indicating a structural break.
H0: there is no structural break (the coefficients are stable across the sub-periods)
H1: a structural break exists
In Eviews, we select Stability Diagnostics in the View window, choose the Chow Breakpoint test and input: 2003M03 2007M11. We get the following result:
As the p-value of the F-test is higher than 0.05 (p = 0.5304), we cannot reject the null hypothesis and conclude that the two dates had no effect on the return of the portfolio. The coefficients are thus stable over the period and there are no structural breaks.
SEASONALITY EFFECTS
Besides exogenous factors, seasonal elements may also affect the returns on the portfolio. To test for this, a dummy variable called jandum was generated by entering jandum=0 and then editing all the January values to 1. January was chosen because tax-sensitive investors who hold small stocks sell them for tax reasons at the end of the year and re-invest in January, pushing stock prices up and causing small stocks to outperform large stocks; this is the most common calendar anomaly across market stocks.
H0: the coefficient on jandum = 0 (no January effect)
H1: the coefficient on jandum ≠ 0 (a January effect exists)
The CAPM was re-estimated with jandum as an additional independent variable. The result was as follows:
RGS = 0.000271884850416 + 1.38941577154*RSPX + 0.0124085411764*JANDUM
As we can observe from the table above, the p-value for jandum is 0.5118, reflecting that the January phenomenon does not significantly affect the returns on the stock.
THE BEST MODEL
The only relevant models that can represent the company returns, and that will be compared, are:
• Capital Asset Pricing Model (corrected for heteroskedasticity and exogenous factors)
• Company Specific Model (corrected for heteroskedasticity and exogenous factors)
Models   Schwarz Criterion   R-Squared   Adjusted R-Squared   Standard Error   Sum Squared Residual
CAPM     -2.821896           0.663643    0.649857             0.053955         0.355162
COMP     -5.451362           0.979931    0.978216             0.013458         0.021191

Models   Normality (JB)   Ramsey RESET   Autocorrelation   White    Akaike
CAPM     2.096821         0.7548         0.8473            0.4341   -2.955585
COMP     1852.804         0.8690         0.0000            0.0000   -5.696459
As shown in the tables above, most of the criteria indicate that the Company Specific Model is better suited to represent the GS stock returns: the COMP model has a higher R-squared value, a lower standard error, and lower Schwarz and Akaike criteria. Note, however, that COMP's p-values for the autocorrelation and White tests (0.0000) and its large Jarque-Bera statistic indicate that serial correlation, heteroskedasticity and non-normality remain in its residuals.
Following is the representation and the graph of the model:
RGS = 0.00179303049176 + 0.123403015048*RSPX - 0.911798231838*LDIVY + 0.064354649494*LROI + 0.0133060319989*LEI + 0.211154801388*DM03M08 + 0.163828493908*DM03M11 + 0.13871075955*DM04M02 + 0.113480480509*DM04M05 + 0.0230190834149*DM09M02 - 0.0649459471681*DM09M12
We can observe that the fitted line tracks the actual line closely, which supports the conclusion that this model is the best of those compared.
EVALUATION
The most prominent feature of this model is how closely it fits the data; an investor can therefore make an informed choice in line with their risk appetite and investment decisions. It is evident from our findings that the returns on the GS stock are highly volatile, with a negative mean resulting in lower-than-expected average returns. An investor looking for a long-term investment would be better off in risk-free government bonds, which offer a secure but relatively lower return. Our model fits well and can give a good indication of the returns on GS, helping the investor to earn greater returns.
In order to produce a more coherent model, we could examine the effect of adding further variables such as the price-earnings ratio, the company's financials (profit and loss), competitors' performance and other indices such as the NASDAQ Composite and NYSE Composite. These would extend the model and help investors forecast future returns on their investment, leading to a more proficient investment model.
REFERENCES

Websites:

Guardian, April 2010. Goldman Sachs charged with $1bn fraud over toxic sub-prime securities. [Online] Available at: <http://www.guardian.co.uk/business/2010/apr/16/goldman-sachs-fraud-charges> [Accessed 15 November 2011]

Money to Metal, August 2011. Goldman Sachs Group. [Online] Available at: <http://moneytometal.org/index.php/Goldman_Sachs_Group> [Accessed 15 November 2011]

Goldman Sachs, August 2008. Goldman Sachs Settles with State Regulators and Offers to Repurchase Auction Rate Securities Sold to its Private Clients. [Online] Available at: <http://www2.goldmansachs.com/media-relations/press-releases/archived/2008/repurchase-auction-rate-securities.html> [Accessed 15 November 2011]

Goldman Sachs, March 2003. Goldman Sachs and JBWere Agree on Australian / NZ Merger. [Online] Available at: <http://www2.goldmansachs.com/media-relations/press-releases/archived/2003/2003-03-26.html> [Accessed 15 November 2011]

Goldman Sachs, May 2005. Goldman Sachs and JBWere Asset Management expands property team. [Online] Available at: <http://www.gs.com.au/documents/About/MediaRoom/Smith_May05.pdf> [Accessed 15 November 2011]

Investopedia, 2011. Capital Asset Pricing Model – CAPM. [Online] Available at: <http://www.investopedia.com/terms/c/capm.asp#axzz1dnZAGMJV> [Accessed 15 November 2011]

Books:

Cameron, G., Trinity Term 1999. Lecture VI: Stochastic Regressors and Measurement Errors, Econometric Theory. Nuffield College, Oxford, unpublished.