Systemic Risk, Financial Market Developments and Market and Regulatory Performance:
What is Known from Finance Research, and What Remains to be Learnt
A benchmarking report prepared by
Terry S. Walter1
for
The Centre for International Finance and Regulation
August 2012
1 Terry Walter is a Visiting Fellow in the Finance Discipline Group at the University of Technology, Sydney, an
Adjunct Professor in the UWA Business School, The University of Western Australia and Chief Research Officer, Sirca Limited.
Abstract and summary
This report has been commissioned by the Centre for International Finance and Regulation (CIFR). It
provides a high level literature review of finance research in research areas of key interest to CIFR,
broadly Systemic Risk, Financial Market Developments and Market and Regulatory Performance. It
seeks to answer the question “What do we know from prior studies about these key research
areas?” In conducting this analysis an objective is to identify gaps in knowledge, and thus provide a
list of potential projects for possible CIFR funding. It thus poses a second question “What do we not
know from prior studies about these key research areas?” While the papers surveyed are not based
entirely on evidence from U.S. financial markets, these feature prominently in most survey papers,
and some use U.S. data exclusively. The report identifies major opportunities for additional research
in the following areas:
Systemic risk
There is currently considerable ongoing theoretical and empirical research and there is a lively
debate between academics, policy makers, regulators and practitioners on how to regulate systemic
risk. Better access to banking data would enable additional insights via empirical analysis.
Asset price bubbles
Potential research topics include (i) whether short sale restrictions should be liberalised, (ii) whether the
asymmetric protection of limited liability should be reconsidered, (iii) whether governments should
intervene in markets when “warning bells start ringing”, and (iv) whether financial education might
play a role in reducing investor irrationality.
Financial Crises
More research is required to allow us to gain a better understanding of the market failures that lead
to financial crises. This work is needed in order to design policies to prevent such crises and
ameliorate their effects. The three main market failures that warrant additional research are (i)
provision of liquidity, (ii) limits to arbitrage and the mispricing of assets and (iii) contagion.
Additional macroeconomic research is also warranted to give a better understanding of the
relationship between monetary policy, credit and asset prices.
Measuring systemic risk
Existing research uses U.S. data to investigate statistical patterns in monthly index returns for hedge
funds, banks, brokers and insurance companies in order to develop various measures of systemic
risk. It is apparent that similar analysis in other nations would be desirable. Further, the analysis
could be extended to measures other than returns, in particular higher moments of the return
distributions, as well as volumes of trading, spreads and informed trading measures. It would also be
worth investigating individual firm returns, rather than index returns.
Risk management
In spite of several quantitative advances, risk management is still largely an art form. Risk
architecture needs to consider the costs and timeliness of risk signals relative to the benefits. There
is considerable scope for both theoretical development and model calibration. Empirical
investigation would be greatly enhanced by more granular databases becoming available, though
commercial-in-confidence issues abound.
Modelling financial crises and sovereign risks
Several new directions for future research are identified in the report in order to improve
measurement, analysis and management of financial and sovereign risks, and to reduce the severity
of financial crises. A key to such analysis is that comprehensive risk exposure data must be collected.
Mortgage-backed securities
Despite the considerable effort devoted to valuation models by academics and practitioners,
“development of better models is undoubtedly an area ripe for research”. Again, empirical
investigations would be facilitated by having reliable past data. Much of the work on MBS has
stemmed from the U.S., creating the opportunity for innovative research in nations where
mortgage lending arrangements differ from those in the U.S. Australia is a case in point.
Managing financial risk
There is wide disparity in the extent to which corporations hedge their risks, but questions as to why
a firm hedges and what instruments should be employed remain unresolved, prompting a call for
further research. There has been scant research effort associated with a firm pre-committing to
hedging strategies. Firm characteristics that are associated with hedging have received some
attention, but there are substantial areas where further research is necessary. There is considerable
scope for research on implementation of risk management strategies.
Securitisation
Securitisation is “largely unregulated and it is not well understood”. There has been little research on
the issue. Basic questions remain unresolved. Why did securitisation arise? Was there innovation?
What are the sources of value? How and why does innovation occur? Will securitisation regain its
pre-financial crisis prominence in capital markets? How does securitisation affect incentives to
monitor borrower behaviour? These are quite fundamental questions and illustrate that relatively
little empirical research has been undertaken on this topic. It is also clear that answering many of
these questions will require much more data collection than has happened to date. The data to
address these questions are not currently obtainable at low cost.
Financial innovation – counterfactual research
There are several research areas, such as (i) counterfactual analysis of major financial innovations
that to date have not been investigated other than in the U.S., where differing institutional
arrangements might shed light on the U.S. findings (ii) the analysis of settings where there are
constraints or barriers to financial innovation – a specific example being Islamic finance, (iii) the
greater use of experimental techniques and (iv) the use of structural estimation models, albeit with
the caveat of an inability to assess externalities.
Volatility derivatives
There are several issues that need further theoretical exposition and development. No doubt there
will be some CIFR researchers who are interested in the mathematical finance aspects of volatility
derivatives and who wish to take up such development. There is ample scope for empirical
application to investigate which (how, why) firms use volatility derivatives and the impacts of these
instruments on firm value.
Credit risk models
Additional model development is required in relation to (i) implementation of realistic contagion
models (ii) stochastic recovery rates, and (iii) the inclusion of liquidity risk in credit risk models.
Credit default swaps
There is considerable scope for further research in CDSs, however such endeavour is hampered by a
lack of a comprehensive and reliable set of data.
Mutual funds
Relatively little finance research has been conducted on the marketing activities of mutual funds.
U.S. evidence shows that when one fund in a family performs well, there is a spill-over effect of
flows into the other funds within the family. While the empirical evidence shows that fund
performance declines as a particular fund in a family gets larger, performance increases with the size
of the rest of the family, consistent with economies of scale. Similar research outside the U.S. is
warranted.
Life-cycle finance and pension funds
It is noted that future research will be enhanced by access to more granular data on individual
portfolio holdings. Current empirical research, which relies almost exclusively on linear regression
analysis, is unable to identify the nonlinearities that are embedded in horizon effects, wealth effects,
cohort effects, and behavioural distortions in preferences.
Performance of mutual funds
Additional research in several areas is warranted. In particular (i) a more balanced treatment of fund
costs, (ii) the resolution of theoretical ambiguities associated with the interpretation of alpha, and
(iii) the development of client-specific performance measures. More granular data on fund trading
activities would open up a huge range of possible topics, including, but by no means limited to,
optimal strategies for fund trading activities to reduce market impact costs, investigation of the
investment skill of fund analysts, investigation of buy versus sell recommendation performance
asymmetries, and the way in which fund performance changes as a result of algorithmic trading.
Collateralised debt obligations
The problem facing future research is not a lack of appropriate mathematical models, but rather a
lack of appropriate data for back-testing those models.
Mutual funds
Research on macroeconomic predictive models has shown great promise in the hedge fund
universe, and it would be interesting to see future research on other asset classes, such as bond
funds, private equity funds, and emerging market funds. Further, one glaring omission from the
literature is an estimate of the trading costs of individual funds. Securities lending is a sideline
business that is profitable for many managed funds. It would be of great interest to study the
interface between the holdings of managed funds, securities lending activity, and the effects of short
selling on markets. Further research advances should be made for non-U.S. asset managers. Finally,
the recently passed Dodd-Frank Act might provide further data for researchers, which could benefit
the public in many ways.
Corporate governance and the board of directors
Although there is a growing empirical literature on the role of boards in setting strategy, this is an
area in which much work remains to be done. We still lack a good understanding of committee
structure and the relation between committees and the full board. Progress would be enabled by a
dataset containing all committees. Open questions are numerous and frequently quite fundamental.
Researchers should look for natural experiments, in particular, changes in regulation as a potential
way of dealing with joint-endogeneity issues. Clearly, a database containing changes in global
regulations would be of great value to researchers and regulators alike. Several specific topics seen
as profitable for future research are: How are potential outside directors identified? How are inside
directors identified? What is the role of social networks in director selection? Are busy directors bad
for the firm? How and why does director expertise matter? Typically, the board is modelled as a
single decision maker, but what are the dynamics within boards? Committees need further research.
Much of the literature is on Anglo-American firms, yet there are potential insights and lessons to be
learned from understanding how others deal with corporate governance. Finally, behavioural
corporate finance identifies several human decision-making biases. Do boards suffer from these?
Market microstructure
This is one area where data abounds. Some possible future research issues include:
optimal trading strategies for typical trading problems;
how information is impounded into prices;
how we might enhance the information aggregation process;
how we avoid market failures;
what sorts of trading mechanisms maximise efficiency;
what the trade-off is between fairness and efficiency;
how market structure is related to the valuation of securities;
what market / trading data can tell us about the informational environment of the firm;
what market / trading data can tell us about long-term risk; and
how fair are capital markets. Much has been done on efficiency, but fairness has received
relatively little attention.
Finance and inequality
Future research would benefit enormously from constructing more precise measures of financial
development, access to financial services, and inequality on a global basis. There is a shortage of
theoretical and empirical research on the potentially enormous impact of formal financial sector
policies, such as bank regulations and securities law, on persistent inequality. Furthermore, we lack a
conceptual framework for considering the joint and endogenous evolution of finance, inequality,
and economic growth.
Learning in financial markets
This is a fascinating research area with many potential topics. In most existing learning models,
agents learn by observing cash flows or asset returns, but they could also learn from the prices of
derivative securities. Another promising direction is to separate systematic and idiosyncratic
uncertainty. Future work can also analyse strategic information generation. More generally, we need
more dynamic learning models in corporate finance.
Particular challenges in conducting this research
Academics invest heavily in developing their research skills to gain expertise and reputation in
particular areas. These skills are not always easily transferred to other research topics; consequently
many academics maintain whole-of-career associations with particular topics where they have
sound knowledge of the literature and research methods. The CIFR key research issues are in areas
that are not necessarily those in which academics have developed their expertise, hence creating the
incentives for researchers to take on new areas of investigation is a vexing issue. CIFR wants
researchers to break out of their comfort zones and take on new (and thus seen as risky) projects.
A recurring theme in this report is that the availability of reliable datasets is a precursor to empirical
research. Accordingly, to the extent that CIFR’s key research interests require data that are currently
not available, a challenge for CIFR is to invest (or co-invest) in providing such data. When CIFR
receives research proposals that involve the creation of datasets that will both address key CIFR
research questions, and facilitate research for subsequent investigations, it seems that such
proposals warrant special consideration.
Introduction
This report has been commissioned by the Centre for International Finance and Regulation (CIFR). It
provides a high level (i.e., it is based on a reading of survey papers rather than the original research
contributions) literature review of finance research in research areas of key interest to CIFR, broadly
Systemic Risk, Financial Market Developments and Market and Regulatory Performance (see below
for further sub-category details). It answers the question “What do we know from prior studies
about these key research areas?” In conducting this analysis, an objective is to identify gaps in
knowledge, and thus provide a list of potential projects for possible CIFR funding. It thus answers the
second question “What do we not know from prior studies about these key research areas?”
What we know about finance and what remains to be learned is the subject of the final chapter of
Brealey, Myers and Allen (2011), one of the most widely used textbooks in corporate finance. This
chapter contains a list of seven things that we know about finance, as well as 10 unsolved problems
in finance. The seven most important ideas in finance are:
1. Net Present Value. This rule, which states that firms should accept projects with positive net
present values, allows managers of firms to make decisions that shareholders will agree
with, irrespective of their wealth and attitude to risk.
2. The Capital Asset Pricing Model (CAPM). This model, which was developed from Markowitz’s
Portfolio Theory, gives firms a manageable way of estimating the required rate of return on
a risky investment. The model has proved empirically intractable; however, the practical
implication that the risk of an asset should be thought about in terms of its non-diversifiable
component (alternatively called systematic, market or beta risk) is of monumental
significance.
3. Efficient Capital Markets (ECM). This idea states that security prices accurately and rapidly
reflect available information. Were this not true then the use of market signals (price,
volume, volatility, order book depth, spread etc.) in empirical research would cause invalid
conclusions.
4. Value Additivity and the Law of the Conservation of Value. This rule states that the value of
the whole (say firm) is equal to the summation of the individual investments that the firm
has made.
5. Capital Structure Theory. If the Law of Conservation of Value works for adding project cash
flows, it must also work when you subtract cash flows, i.e., for financing decisions. This is the
Modigliani and Miller (MM) perfect capital market proposition that “the total size of a pie
does not depend on how it is sliced”. Proof of the MM capital structure irrelevance
propositions was developed through the exploitation of arbitrage opportunities that would
arise if, in perfect capital markets, levered and unlevered firms were valued differently.
Some would argue that arbitrage, rather than capital structure, is the key lesson from this
analysis.
6. Option Theory. The relevant attributes to price an option, i.e., the exercise price, the
exercise date, the spot price, the volatility of the underlying asset and the risk-free rate of
interest, are known as a result of the work done by Fischer Black and Myron Scholes.
7. Agency Theory. Participants in firms have incentives to act in their own interest, and this
creates conflicts of interest. These conflicts need to be monitored and managed.
The ten unsolved problems in finance, according to Brealey et al. (2011), are:
1. How can a financial manager assess a project beta?
2. What is missing from the CAPM, i.e., are factors other than beta priced?
3. How important are the exceptions to ECM?
4. Why do closed-end funds sell at a discount?
5. Why has there been such a huge proliferation in financial markets and instruments?
6. What is the optimal payout policy for a firm?
7. What risks should a firm hedge?
8. What is the value of liquidity?
9. How can we explain merger waves?
10. Why are financial systems seemingly so prone to crisis and what factors cause these crises?
Brealey, Myers and Allen (2011, p. 875) state “Understanding financial crises will occupy
economists and financial regulators for many years to come. Let’s hope they figure out the
last one before the next one knocks on the door”. CIFR would no doubt concur.
In developing the papers that are analysed below I adopted the following approach. I used library,
SSRN and web searches to identify major finance survey papers that had been published in the last
four years. The principal sources for these papers were the electronic journal “Annual Review of
Financial Economics”, Eckbo ed. (2008) and SSRN. This search discovered 66 survey papers. The
Abstract for each survey paper was then read and used to classify the paper as being of high,
moderate or low relevance to CIFR’s key research agenda. This resulted in 23 finance papers being
classified as high relevance. This list and the identified papers were then sent to CIFR for comment,
and this resulted in a further four finance papers being added to the high relevance category. Thus
this review is a summary of 27 recent finance survey papers (which refer to a total of 2,873 original
published articles, books and working papers) that the author and CIFR have jointly agreed to be
highly relevant to the future key research focus of the Centre.
Each of the 27 reviewed papers follows a consistent format and is set out in three sections. First, the
abstract from each paper is quoted verbatim. Second, a summary of the references used in the
paper is provided so the reader can understand the breadth and quality of the papers used in the
survey. Third, a discussion of the key messages from the paper and the areas where future research
is warranted, is provided.
CIFR Key Areas of Interest
(A) Systemic Risk
Understanding and managing systemic risk
Specific systemic risks
Transmission of systemic risk through the global financial system.
Reviewed papers
Paper 1
Michiel Bijlsma, Jeroen Klomp and Sijmen Duineveld, 2010, “Systemic risk in the financial sector: A
review and synthesis”, CPB Netherlands Bureau for Economic Policy Analysis Paper 210.
Abstract – quoted directly from Bijlsma et al. (2010)
“The financial crisis has put systemic risk firmly on the policy agenda. In such a crisis, an initial shock
gets amplified while it propagates to other financial intermediaries, ultimately disrupting the
financial sector. We review the literature on such amplification mechanisms, which create
externalities from risk taking. We distinguish between two classes of mechanisms: contagion within
the financial sector and pro-cyclical connection between the financial sector and the real economy.
Regulation can diminish systemic risk by reducing these externalities. However, regulation of
systemic risk faces several problems. First, systemic risk and its costs are difficult to quantify.
Second, banks have strong incentives to evade regulation meant to reduce systemic risk. Third,
regulators are prone to forbearance. Finally, the inability of governments to commit not to bail out
systemic institutions creates moral hazard and reduces the market’s incentive to price systemic risk.
Strengthening market discipline can play an important role in addressing these problems, because it
reduces the scope for regulatory forbearance, does not rely on complex information requirements,
and is difficult to manipulate.”
Reference Count – Overall ratio of A* and A journals to total references 0.418
Journal Count
Journal of Finance 10
Econometrica 1
American Economic Review 11
Journal of Financial Economics 1
Review of Financial Studies 3
Other A-ranked journals 70
Other references 134
Total 230
Discussion
This 98-page report, containing 230 references, is Paper 210 issued by the CPB Netherlands Bureau
for Economic Policy Analysis. The three authors of this report are policy analysts within the Bureau
for Economic Policy Analysis. The paper would perhaps fit better into a banking benchmark review,
rather than a finance review.
Systemic risk – definition and costs of systemic crises
Definition
The report notes that there is disagreement about the definition of systemic risk. The authors define
systemic risk as “the probability that a systemic crisis arises” and the essence of this lies in the
“negative effects that one bank’s problems have on other banks”. The report also notes that the
“mechanisms that lie behind these effects … are ill understood, empirically as well as theoretically”.2
Cost of systemic crises
Empirical evidence shows that the cost of providing government support to bail out a weak banking
system (via a recapitalisation, which occurs in 85 per cent of global banking crises, or a
nationalisation, which occurs in 57 per cent of global banking crises) is “about 14 per cent of GDP the
first five years after the start of the crisis”. Output is also disrupted, with empirical estimates of the
permanent output loss due to a crisis in the range of 4 to 16 per cent of GDP. When a crisis involves
both a credit crunch and a house price bust the median loss is 6.7 per cent of GDP. According to the
International Monetary Fund (IMF) governments spent about 30 per cent of global GDP on rescue
operations following the recent global financial crisis.
There are three elements to a financial crisis. First, there is an initial shock. Second, this shock is
propagated and amplified. Third, the financial system is disrupted. Systemic risk can be reduced by
reducing the probability of a shock, by damping the amplification or by isolating crucial parts of the
financial system. The report focuses on the propagation and amplification mechanisms which can
occur through two different channels, namely (i) contagion within the financial sector and (ii) pro-
cyclical connection between the financial sector and the real economy. Each of these channels is
analysed in detail in a major chapter in the report. A discussion of the various approaches to the
regulation of systemic risk also occupies a major chapter.
Contagion
Contagion is defined as “the propagation of shocks experienced by one bank to other banks through
mechanisms within the financial sector”. The literature identifies three ways in which this
propagation can occur, namely (i) through direct and indirect interconnections between the banks
created by an intricate web of financial contracts (i.e., direct exposures are due to credit lines,
derivative counter-party exposures, loans in the inter-bank market, while indirect exposures arise
due to a common exposure to borrowers or lenders) (ii) one bank’s problems might negatively affect
the ability of other banks to obtain funding in the face of unexpected liquidity needs (resulting in fire
sales of assets, liquidity hoarding and adverse selection) and (iii) information spill-overs (causing
bank runs). While these propagation mechanisms are understood conceptually, and have been
subject to extensive economic modelling, a precise understanding of the relative contribution of
each propagation mechanism remains both controversial and elusive.
2 Clearly, if researchers had access to better data from the banks and the central bank, these transmission mechanisms could be investigated.
Researchers have turned to
empirical investigations, but these frequently suffer from an inability to access sufficient detail on
the complex financial contracting between banks, necessitating the use of assumptions that might
be invalid. Both micro-level and market data studies have been conducted. Micro-level studies
typically employ simulation of balance sheet connections or payment system connections. These
studies generally conclude that the likelihood of large scale banking defaults being caused by
contagion is relatively small. However there are several methodological problems in these empirical
investigations. Another group of studies employ market data such as share prices and credit-default
swap spreads, though there is considerable difference in the way they define systemic risk. Most
market data studies find some evidence of contagion, but again experimental design problems
abound.
Clearly there is ample opportunity for additional empirical research, particularly if richer datasets
become available.
Pro-cyclicality
Pro-cyclicality refers to the feedback loop between the financial sector and the real economy. If the
literature to date on contagion, as discussed above, does not find strong support for contagion as a
dominant factor in the propagation of systemic risk, then it follows that pro-cyclicality (assuming
other propagation mechanisms do not exist) is likely to play a strong role. Pro-cyclicality is induced in
a financial system in several ways. First, regulation of bank capital adequacy ratios can play a role.
Regulation requires banks to have higher capital adequacy in downturns (reflecting the increased
potential credit losses in their portfolios) than in periods of economic expansion and banks might
respond by restricting credit, leading to a reduction in investment and consumption. Second, the use
of Fair Value Accounting (FVA) might require banks to write down their assets in periods when
economic activity is slowing. This writing down reduces banks’ capital, resulting in a curtailment of
their lending activities. Third, pro-cyclicality can be caused by financial acceleration, a term from the
macroeconomic literature developed around the idea that financial market imperfections can
amplify the business cycle through the availability of bank credit. Fourth, behavioural biases, herding
and bubbles might be at play.
The discussion in the paper on pro-cyclicality is almost entirely conceptual. The discussion points out
that, empirically, loan loss provisions are higher when GDP growth is lower, that the quality of loans
is positively associated with the business cycle, and that there has been some empirical work in
relation to FVA, while herding and asset price bubbles have been more extensively researched.
Regulation
The benefits of creating systemic risk accrue privately, but the costs are imposed across the whole
society. This disparity is argued to make regulation of the financial system necessary. Three broad
policy measures that are intended to reduce systemic risk are incentive regulation (with elements of
taxation, capital requirements, insurance and market discipline), structural regulation (with
elements of quantity regulation, portfolio restrictions, transparency and standardisation, fair value
accounting and credit rating agencies) and ex post crisis intervention (with elements of lender of
last resort, deposit insurance, Prompt Corrective Action, living will and bank-specific bankruptcy
law). The paper proposes that regulation has a role to play in reducing systemic risk, but that there
are four generic problems. These are summarised in the Abstract above. It should however be noted
that it is both impossible and undesirable to fully eliminate systemic risk via regulation.
Clearly there is currently considerable ongoing theoretical and empirical research and a lively debate
between academics, policy makers, regulators and practitioners on how to regulate systemic risk.
Better access to banking data would enable additional insights via empirical analysis. Current
research is somewhat limited by the inability of researchers to access detailed banking records that
would allow a deeper understanding through access to detailed global banking records.
Paper 2
Anna Scherbina and Bernd Schlusche (2011), “Asset Price Bubbles: A Survey”
Abstract - quoted directly from Scherbina and Schlusche (2011)
“Why do asset price bubbles continue to appear in various markets? What types of events give rise
to bubbles and why do arbitrage forces fail to quickly burst them? Do bubbles have real economic
consequences and should policy makers do more to prevent them? This paper provides an overview
of recent literature on bubbles, with significant attention given to behavioural models and rational
models with frictions. The latest U.S. real estate bubble is described in the context of this literature.”
Reference Count – Overall ratio of A* and A journals to total references 0.671
Journal Count
Journal of Finance 12
Econometrica 3
American Economic Review 9
Journal of Financial Economics 8
Review of Financial Studies 2
Other A-ranked journals 23
Other references 28
Total 85
Discussion
A straightforward definition of a bubble is “a (persistent) deviation of the market price from the
asset’s fundamental value”. Because trading against an overvalued asset involves additional costs of
maintaining a short position, it can be expected that persistent overvaluation is more common than
persistent undervaluation. The paper focuses on positive price bubbles. The paper provides a
chronological overview of famous bubbles, including the 1634 – 1637 tulip price bubble, the South
Sea Company market price bubble of 1720, the Mississippi Company bubble of 1719 – 1721, and the
collapse of both stock and real estate prices in the U.S. in October 1929 leading to the Great
Depression. In the 1980s Japanese real estate and equity prices increased dramatically, but fell
dramatically in the early 1990s. The dot-com bubble began in 1995 and began to deflate in March
2000, reaching a low in August 2002. These and the many other examples3 provided in the paper
indicate that bubbles sometimes burst abruptly and sometimes deflate rather more
gradually. Even so, the time it takes asset prices to deflate is generally shorter than the time they
take to inflate. Bubbles are, in short, reasonably common.
The paper (Section 3) then reviews the theoretical and empirical work on rational models of bubbles,
i.e., bubbles that arise when all agents are perfectly rational. The models reviewed in this section
help explain the seemingly puzzling empirical facts that stock prices are more volatile than dividends
and that stock prices overreact to dividend changes.
Section 4 reviews behavioural models, where agents are not necessarily perfectly rational. These
models generally draw on evidence of human behaviour from the psychology literature. Several
behavioural categories exist, namely (i) models based on asymmetric information in the presence of
short sale constraints, (ii) feedback trading leading to price paths for assets akin to a Ponzi scheme,
(iii) models where agents take more notice of signals that confirm their beliefs than signals that
contradict their priors, and (iv) models based on the representativeness heuristic where agents put
too much weight on recent observations relative to more distant outcomes.
Historically, most bubbles have a somewhat sensible explanation: the dot-com bubble, for example,
fed on expectations that new technology would improve productivity, and real estate bubbles can arise because land
supply is fixed while population expands, particularly if securitisation allows risk mitigation. Bubbles
are frequently associated with abnormally high trading volumes. The literature points to a number of
reasons why bubbles are not arbitraged away, including the risk to a short-seller that prices will
continue to rise and that the cost of trading is higher when information asymmetries are greater
(which is typical of conditions in a bubble). During the dot-com bubble, hedge funds rode the bubble
(i.e., maintained long positions) rather than attack it via sale and short-sale strategies.
Newer generation bubble models are based on (i) herding of agents – here the role of the media in
herding has not been analysed, (ii) limited liability which can offer downside protection, (iii)
(sometimes) perverse incentives of important market participants including analysts, rating agencies
and auditors.
Bubbles have been shown to develop in a large number of experimental settings. Experimental
market bubbles retain many of the features of bubbles in actual markets, namely high transaction
volumes, large swings in price relative to fundamentals and sustained trading at prices different
from the fundamental value.
The deflation of some bubbles has dramatic effects on the economy (subprime mortgage collapse of
2007), while deflation of others has a minimal effect (e.g. the dot-com bubble). The literature
regards real estate price bubble collapses as especially likely to be rapidly transmitted to, and
disruptive of, the financial system and the wider economy. Some argue that central banks should
aim to deflate bubbles. However the authors do not agree because (i) bubbles are not easily
identifiable, (ii) intervention to pierce a bubble will harm those with a long position in the asset, (iii)
3 Perhaps the best example of an Australian bubble is the share price of Poseidon NL following its announcement of a
promising nickel discovery in September 1969. In early September 1969 Poseidon’s share price was around 80 cents; the price peaked at $280 per share in intraday trading on 5 February 1970, and then fell steadily to around $50 per share by November 1970 as the bubble deflated.
a bubble might be positive for the economy for extended periods, (iv) some bubbles might result
from inefficiencies in the market, and regulation to improve market quality might be more
productive, and (v) central banks might not have tools that produce net economic gains to the
economy.
The origins of the subprime mortgage bubble are traced to the low-interest-rate environment that
was introduced to help the U.S. economy recover from the dot-com bubble’s collapse. In addition, a
number of financial innovations (securitisation of MBSs, and credit default swaps that were
incorrectly priced because AAA ratings were given to portfolios of CCC-quality mortgages) and policies
made housing investment seem attractive relative to other asset classes. The U.S. Financial Crisis
Inquiry Commission (2011) recently determined that the crisis could have been avoided if the
government had paid more attention to the warning signs. It is unlikely that an explosion in
subprime lending, rapid home price increases, unscrupulous lending, and steep increases in
homeowners’ leverage will be allowed to continue unabated in the future. In short, closer
monitoring of economic activity will prevail. Potential research topics include (i) whether short sale
restrictions should be liberalised, (ii) whether the asymmetric protection of limited liability should be
reconsidered, (iii) whether governments should intervene in markets when “warning bells start
ringing”, and (iv) what role financial education can play in reducing investor irrationality.
Paper 3
Franklin Allen, Ana Babus, and Elena Carletti, 2009, “Financial Crises: Theory and Evidence”
Abstract – quoted directly from Allen et al. (2009)
“Financial crises have occurred for many centuries. They are often preceded by a credit boom and a
rise in real estate and other asset prices, as in the current crisis. They are also often associated with
severe disruption in the real economy. This paper surveys the theoretical and empirical literature on
crises. The first explanation of banking crises is that they are a panic. The second is that they are part
of the business cycle. Modelling crises as a global game allows the two to be unified. With all the
liquidity problems in interbank markets that have occurred during the current crisis, there is a
growing literature on this topic. Perhaps the most serious market failure associated with crises is
contagion, and there are many papers on this important topic. The relationship between asset price
bubbles, particularly in real estate, and crises is discussed at length.”
Reference Count – Overall ratio of A* and A journals to total references 0.542
Journal Count
Journal of Finance 5
Econometrica 5
American Economic Review 10
Journal of Financial Economics 3
Review of Financial Studies 4
Other A-ranked journals 38
Other references 55
Total 120
Discussion
Financial crises result, on average, in severe economic effects. For example, real housing prices fall
35 per cent over six years, equity prices fall 55 per cent over 3.5 years, output falls nine per cent over two
years, unemployment rises seven percentage points over four years, and central government debt is typically 86
per cent higher than its pre-crisis level.
The paper gives a brief overview of theoretical models of (a) banking crises (the literature is divided
into two sections; one set of models is built on bank runs being caused by random deposit
withdrawals that are unrelated to changes in the real economy, while the second set describes
banking crises as a natural outgrowth of the business cycle), (b) liquidity and interbank markets, (c)
contagion (where models examine either direct linkages or indirect balance-sheet linkages), and (d)
bubbles and crises. The discussion in these four sections is almost entirely a non-technical
description of the main theoretical models. Some empirical studies are discussed in relation to
indirect interbank balance-sheet linkages and asset bubbles.
The final section of the paper provides directions for future research. While the current crisis has
spawned a large literature, Allen et al. (2009) call for more research that will allow us to gain a better
understanding of the market failures that lead to financial crises. Such work is needed in order to
design policies to prevent such crises and ameliorate their effects. The three main market failures
that warrant additional research are (i) provision of liquidity, (ii) limits to arbitrage and the
mispricing of assets, and (iii) contagion. The paper also calls for additional macroeconomic research
to give a better understanding of the relationship between monetary policy, credit and asset prices.
Allen et al. (2009) state (p.112) that the “financial services industry is perhaps the most regulated
(industry) in the world”. Yet this regulation did not prevent the crisis. “The failure … indicates that a
complete overhaul is needed.” (p. 112) In particular they argue that the Basel arrangements are not
based on a coherent intellectual framework. Current policies of supporting failed institutions create
bad incentives for large institutions if they start to rely on being saved in future crises. The best form
of intervention is not understood, and perhaps temporary nationalisation and subsequent orderly
privatisation would be preferred.
Paper 4
Monica Billio, Mila Getmansky, Andrew W. Lo, and Loriana Pelizzon, 2010, “Measuring Systemic Risk in the Finance and Insurance Sectors”
Abstract – quoted directly from Billio et al. (2010)
“A significant contributing factor to the Financial Crisis of 2007–2009 was the apparent
interconnectedness among hedge funds, banks, brokers, and insurance companies, which amplified
shocks into systemic events. In this paper, we propose five measures of systemic risk based on
statistical relations among the market returns of these four types of financial institutions. Using
correlations, cross-autocorrelations, principal components analysis, regime-switching models, and
Granger causality tests, we find that all four sectors have become highly interrelated and less liquid
over the past decade, increasing the level of systemic risk in the finance and insurance industries.
These measures can also identify and quantify financial crisis periods. Our results suggest that while
hedge funds can provide early indications of market dislocation, their contributions to systemic risk
may not be as significant as those of banks, insurance companies, and brokers who take on risks
more appropriate for hedge funds.”
Reference Count – Overall ratio of A* and A journals to total references 0.512
Journal Count
Journal of Finance 13
Econometrica 1
American Economic Review 2
Journal of Financial Economics 6
Review of Financial Studies 10
Other A-ranked journals 33
Other references 62
Total 127
Discussion
This paper uses the statistical patterns in monthly index returns for hedge funds, banks, brokers and
insurance companies to develop various measures of systemic risk. These measures capture the
liquidity, leverage, linkages and losses of the four types of financial institutions. As the abstract
states, the paper uses correlations, cross-autocorrelations, principal components analysis, regime-
switching models, and Granger causality tests as alternative ways of empirically estimating the level
of systemic risk in the U.S. economy. Data for the 180 months of January 1994 to December 2008
are used. It is immediately apparent that similar analysis in other nations would be desirable.
Further, the analysis could be extended to measures other than returns, in particular higher
moments of the return distributions, as well as volumes of trading, spreads, and informed trading
measures. It would also be worth investigating individual firm returns, rather than index returns.
The data reveal a number of interesting findings, which are summarised in four sections below.
1. Findings – Illiquidity and Correlation
The correlations vary considerably over time. All types of financial institutions have high
illiquidity during the recent crisis. The evidence suggests that hedge funds suffer from,
rather than cause, contagion from other financial institutions.
2. Findings – Principal Components Analysis
The findings suggest that banks, brokers and insurance companies are heavily exposed to
the first and second principal components in returns, while hedge funds have high exposure
to the third and fourth components. Hedge funds seem to be quite independent of other
financial institutions. Hedge funds have, however, become more interconnected with each
other over time, thus increasing systemic risk.
3. Findings – Regime Switching Models
Volatility in the high-volatility regime is typically twice as large as it is in the low-volatility
regime. A natural measure would be to classify periods when all four types of financial
institutions are in high-volatility regimes as being a period of systemic risk. An alternative
measure would be to calculate the average probability of being in a high-volatility regime for
the four institutions. Both measures allow periods when systemic risk is on the rise to be
identified, and to assess the influence of each industry.
4. Findings – Granger Causality Tests
During the first part of the sample (1994-2000), banks, brokers and insurers uni-directionally
affected hedge funds. However, shocks to hedge funds did not propagate to other financial
institutions. In the second part of the sample all financial institutions became highly linked.
Banks are found to be the most contagious of the four types, i.e., their shocks propagate to
the others while shocks to the others do not affect banks.4
Over the recent period, the empirical results suggest that the banking sector is a more important
source of systemic risk than other financial sectors. The best method of avoiding some of the most
disruptive consequences of crises is to develop methods for measuring, monitoring and anticipating
such events.
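The Granger-causality measure used in finding 4 can be sketched in code. The single-lag OLS F-test below is a simplified version of the paper's procedure, and the two synthetic return series (a hypothetical bank index whose shocks feed hedge-fund returns one month later, but not vice versa) are illustrative assumptions, not the authors' data.

```python
import numpy as np
from scipy import stats

def granger_f(y, x, lag=1):
    """Single-lag Granger test: does lagged x help predict y beyond lagged y?
    Compares restricted (lagged y only) and unrestricted (plus lagged x) OLS
    fits with an F-test; a large F suggests x 'Granger-causes' y."""
    y_t, y_lag, x_lag = y[lag:], y[:-lag], x[:-lag]
    n = len(y_t)
    Xr = np.column_stack([np.ones(n), y_lag])          # restricted model
    br = np.linalg.lstsq(Xr, y_t, rcond=None)[0]
    rss_r = np.sum((y_t - Xr @ br) ** 2)
    Xu = np.column_stack([np.ones(n), y_lag, x_lag])   # unrestricted model
    bu = np.linalg.lstsq(Xu, y_t, rcond=None)[0]
    rss_u = np.sum((y_t - Xu @ bu) ** 2)
    df = n - Xu.shape[1]
    F = (rss_r - rss_u) / (rss_u / df)
    return F, 1 - stats.f.cdf(F, 1, df)

# Synthetic monthly "index returns" (180 months, as in the paper's sample):
# bank shocks propagate to hedge funds with a one-month lag, not the reverse.
rng = np.random.default_rng(0)
banks = rng.normal(0.0, 0.04, 180)
hedge = np.empty(180)
hedge[0] = rng.normal(0.0, 0.02)
hedge[1:] = 0.6 * banks[:-1] + rng.normal(0.0, 0.02, 179)

F_bh, p_bh = granger_f(hedge, banks)   # banks -> hedge funds (strong)
F_hb, p_hb = granger_f(banks, hedge)   # hedge funds -> banks (weak)
print(f"banks->hedge: F={F_bh:.1f}, p={p_bh:.4f}")
print(f"hedge->banks: F={F_hb:.1f}, p={p_hb:.4f}")
```

In this stylised setting the test recovers the built-in asymmetry: the banks-to-hedge-funds direction is highly significant while the reverse is not, mirroring the uni-directional linkages reported for 1994-2000.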
Paper 5
Philippe Jorion, 2010, “Risk Management”
Abstract – quoted directly from Jorion (2010)
“Modern risk management systems were developed in the early 1990s to provide centralized risk
measures at the top level of financial institutions. These are based on a century of theoretical
developments in risk measures. In particular, value at risk (VAR) has become widely used as a
statistical measure of market risk based on current positions. This methodology has been extended
to credit risk and operational risk. This article reviews the benefits and limitations of these models.
In spite of all these advances, risk methods are poorly adapted to measure liquidity risk and systemic
risk.”
Reference Count – Overall ratio of A* and A journals to total references 0.640
Journal Count
Journal of Finance 4
Econometrica 2
American Economic Review -
Journal of Financial Economics 1
Review of Financial Studies -
4 It is important to note that the empirical tests are based on index returns, and what is true for the index is
not necessarily true for the individual firms that are part of the index.
Other A-ranked journals 9
Other references 9
Total 25
Discussion
A chronology of the main developments in risk management is provided in Table 1 of Jorion (2010, p.
348), and this is reproduced below.
This chronology introduces risk concepts of duration (1938), stock beta (1963), and option “Greeks”
(especially option delta and vega) (1973). Markowitz shows that the risk of the portfolio of
exposures / assets that a firm holds depends, not on the individual risks of the exposures / assets
(which can be eliminated in a diversified portfolio), but on the way the individual elements
contribute to the risk of the whole. This is a covariance risk, not a variance risk.5 However, if the
assets in a portfolio are highly correlated, as they were in the case of Long-Term Capital
Management, and as they are in the case of the highly levered portfolios of banks, particularly in
periods of stress, then portfolio risk does not benefit from diversification. Following Sharpe’s single-factor model,6 subsequent innovation in
instruments such as complex options, forwards, futures and swaps most frequently occurred among
participants in financial markets and institutions, rather than as a result of academic research.
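Footnote 5's beta formula and Markowitz's covariance point can be illustrated with a small simulation; the factor loadings, volatilities and 240-month sample below are illustrative assumptions, not estimates from any data set.

```python
import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(0.01, 0.05, 240)                 # 240 months of market returns
# Two stocks: a common market exposure plus independent (diversifiable) noise
stock_a = 1.2 * market + rng.normal(0, 0.06, 240)
stock_b = 0.8 * market + rng.normal(0, 0.06, 240)

# Beta: covariance with the market return scaled by the market's variance
beta_a = np.cov(stock_a, market)[0, 1] / np.var(market, ddof=1)

# An equally weighted portfolio diversifies away the independent noise but
# keeps the common (covariance) component, so its variance sits well below
# the average variance of its components.
port = 0.5 * stock_a + 0.5 * stock_b
avg_var = 0.5 * (np.var(stock_a, ddof=1) + np.var(stock_b, ddof=1))
print(f"beta_a = {beta_a:.2f}")
print(f"portfolio var {np.var(port, ddof=1):.5f} vs average component var {avg_var:.5f}")
```

The gap between the portfolio variance and the average component variance is exactly the diversifiable risk; the residual is the covariance risk that Markowitz identified, and it is what disappears when assets become highly correlated in periods of stress.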
Global derivatives markets have experienced dramatic growth7, as illustrated in Table 2 from Jorion
(2010, p. 353) reproduced below:
5 A stock’s beta is its covariance with the market return scaled by the variance of the market return.
6 Several versions of the CAPM were developed, including those by James Tobin, Jack Treynor, John Lintner and Jan Mossin, but William Sharpe was “singled out” by being jointly awarded the 1990 Nobel Prize in Economics.
7 To put these numbers in context, it is noted that the global equity market capitalisation for listed securities stood at approximately $US50 trillion in 2008, approximately the same value as outstanding currency swaps. Interest rate swaps are larger than global equity market capitalisation by a factor of eight. It is clear that if a major link in this chain of counterparty exposures in interest rate swaps were to fail, the repercussions are
As a result of some spectacular losses (for example Barings lost $1.3 billion from unauthorised
speculation in stock index futures), the financial industry devised more comprehensive measures of
risk. Value at Risk (VaR) was introduced in 1993 at J.P. Morgan and led to the creation of its
RiskMetrics in 1994.8 VaR has several limitations. It ignores losses in a portfolio that fall outside a
pre-determined confidence interval, it assumes orderly liquidation of assets, and it views a financial
institution as a price-taker, thus ignoring market impact costs. Recognition of these limitations led to the
development of Conditional VaR (CVaR), which has some theoretical advantages over VaR but still
ignores market impact costs. Finally, it needs to be noted that in any risk management system the
use of historical data as inputs for models poses problems: the past does not necessarily repeat, and
the structural breaks that matter may not be present in past data. Methods to model time variation in risk (ARCH)
were developed in 1982 by Robert Engle.9 Hence risk management requires specific modelling of
regime switches and the use of long periods of historical data in model calibration.
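A minimal historical-simulation sketch of VaR and CVaR follows, assuming a synthetic fat-tailed P&L series and a 99 per cent confidence level (both illustrative choices); it also shows why CVaR, unlike VaR, accounts for losses beyond the cutoff.

```python
import numpy as np

def historical_var_cvar(returns, level=0.99):
    """Historical-simulation VaR and CVaR (expected shortfall), as positive
    loss numbers. VaR is the loss quantile at the given level; CVaR averages
    the losses beyond VaR, addressing VaR's blindness to the tail."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, level)
    cvar = losses[losses >= var].mean()
    return var, cvar

rng = np.random.default_rng(42)
# Fat-tailed daily P&L: Student-t (4 d.o.f.) scaled to ~1% daily volatility
returns = 0.01 * rng.standard_t(df=4, size=5000) / np.sqrt(2.0)

var99, cvar99 = historical_var_cvar(returns, 0.99)
print(f"99% VaR  = {var99:.4f}")
print(f"99% CVaR = {cvar99:.4f}")
```

By construction CVaR is at least as large as VaR, and for fat-tailed distributions the gap between the two is material; the sketch also makes the historical-data caveat above concrete, since both numbers are only as good as the sample fed to them.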
VaR methods and their subsequent refinements were introduced to measure market risk. These
approaches have been extended to consider credit risk (counterparty risk) and operational risk.
Credit risk models, which build on the Merton model and the Jarrow and Turnbull model, were
developed in the late 1990s. These allowed financial institutions to measure their total credit risk for
the first time – and this spurred the development of credit default swaps. As is the case with market
risk measures, credit risk models also have limitations. In particular, it is difficult to incorporate
counterparty risk, because this depends on the counterparties that the counterparty has, and
financial contracting is a complex web. Operational risk models (risk of losses arising from
inadequate or failed internal processes, from people and systems or from external events) have also
been developed, but data to estimate these are scarce and the methods remain controversial.
potentially dramatic. Hence the famous “too big to fail” quote popularised by U.S. Congressman Stewart McKinney in a 1984 Congressional hearing discussing the Federal Deposit Insurance Corporation’s intervention in Continental Illinois. Alan Greenspan, who served as Chairman of the Federal Reserve of the United States from 1987 to 2006, believes that such large organisations should be deliberately broken up: “If they’re too big to fail, they’re too big.”
8 J.P. Morgan made RiskMetrics freely available on the internet.
9 Engle was awarded the 2003 Nobel Prize in Economics.
“We need to ask ourselves whether we could have done a better job of understanding the processes at work. The difficulty we had was that we all look at one bit of the clichéd elephant in the room. Monetary economists looked at the monetary policy. Financial economists looked at risk management. International macroeconomists looked at global imbalances. Central bankers focused on inflation. Regulators looked at Basel capital ratios and then only inside the banking system. Politicians enjoyed the good times and did not ask too many questions ... One big lesson of this experience is that economics is too compartmentalized and so … are official institutions. To get a full sense of the risks we need to combine the worst scenarios of each set of experts. Only then would we have had some sense of how the global imbalances, inflation targeting, the impact of China, asset price bubbles, financial innovation, deregulation and risk management systems might interact.”
The very clear message that emerges is that information from several different perspectives needs
to be analysed simultaneously to get a clear picture of systemic risk. Wolf identifies monetary
economists, financial economists, international macroeconomists, central bankers, regulators and
politicians as having a role.
Gray (2009) then argues that what was missing were inputs from macro financial risk experts (who
work for central banks, ministries of finance, regulatory bodies and international institutions) who
can provide timely assessments of
(i) Risk exposures and risk-adjusted balance sheets aggregated for the financial sector
and sovereign level, including off-balance sheet risks,
(ii) The integration of financial sector risks with monetary policy, and
(iii) Financial contagion and interconnections.
Conventional models and tools used by central banks and governments are ill-suited for analysing
risk exposures: macroeconomic risk assessment is almost entirely flow- or balance sheet-based,
default risk is ignored in macroeconomic models, and off-balance sheet exposures to opaque credit
derivatives are not adequately incorporated. So what can be done?
Gray suggests (p. 121) the “development of a sovereign contingent claims option theoretic model
through the application of modern finance theory (and in particular the lessons from the Merton
model where equity is priced as a call option with an exercise price equal to the value of the debt)
and risk-adjusted balance sheets, using contingent claims analysis (CCA), to all the key sectors of the
economy. These finance and balance sheet models are then integrated with macroeconomic
monetary policy models, dynamic stochastic general equilibrium models, and other macro models.”
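The Merton-model idea Gray invokes, equity priced as a call option on the assets with an exercise price equal to the face value of debt, can be sketched with the Black-Scholes formula; the balance-sheet numbers below are hypothetical, chosen only to show how leverage moves default risk.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_equity(V, D, r, sigma, T):
    """Merton view: equity is a European call on assets V with strike D (the
    face value of debt maturing at T). Also returns the risk-neutral default
    probability N(-d2)."""
    d1 = (log(V / D) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    equity = V * norm_cdf(d1) - D * exp(-r * T) * norm_cdf(d2)
    return equity, norm_cdf(-d2)

# Hypothetical balance sheet: assets 120, debt 100 due in one year,
# risk-free rate 3%, asset volatility 25%
E, pd = merton_equity(V=120.0, D=100.0, r=0.03, sigma=0.25, T=1.0)
print(f"equity value = {E:.2f}, risk-neutral default probability = {pd:.3f}")

# The same firm with thinner asset cover is markedly riskier
E2, pd2 = merton_equity(V=105.0, D=100.0, r=0.03, sigma=0.25, T=1.0)
print(f"equity value = {E2:.2f}, risk-neutral default probability = {pd2:.3f}")
```

This is the building block of the contingent claims analysis (CCA) Gray proposes: applying such risk-adjusted option valuations sector by sector, and then aggregating, yields the sovereign and financial-sector risk exposures that flow-based macroeconomic accounts miss.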
Gray (2009) identifies six new directions for future research to improve measurement, analysis and
management of financial and sovereign risk, and to reduce the severity of financial crises. These are
as follows:
1. Developing a unified macrofinance framework aggregated across the
household and corporate sectors incorporating financial stability policies,
monetary policies and fiscal and debt policies, as well as global market claims;
2. Integrating financial sector and sovereign risk into monetary policy models;
3. Developing new models of early warning, financial contagion and
interconnectedness;
4. Developing new tools and techniques to mitigate, control and transfer risk;
5. Introducing new approaches to the regulation of financial sector risk taking.
This involves several aspects including fully understanding linkages, rethinking
capital adequacy, cross-border banking regulation, and understanding the
unintended consequences of inappropriate regulations. As a result of the GFC
the U.S. banking system is far more concentrated, and the top 15 institutions
now control more than half the nonfinancial debt. These are “really too big to
fail and may be too big to rescue”. Institutions that are adjudged to be
contributing substantially to systemic risk should be required to purchase
insurance or pay a surcharge. Comprehensive risk exposure data must be
collected. (bold emphasis added)
6. Monitoring and managing sovereign risk through the application of CCA and
VaR analysis at the sovereign level.
(B) Financial Market Developments
Market quality measures
Market innovation, complexity and integrity (products, services, technologies,
This paper describes the evolution of the U.S. market for mortgage-backed-securities (MBS) and
investigates whether these developments have lowered the cost of housing finance, and whether
they have expanded the pool of funds available for house finance. The paper also investigates the
role that MBS and collateralised mortgage obligations (CMO) played in the GFC, and describes the
evolution of models developed to price these complex assets.10
Government-Sponsored Mortgage Enterprises (GSE); Agency and Non-Agency (Private) MBS
The MBS market in the U.S. can be divided into two sectors, agency and non-agency (i.e., private)
MBS. Loans must meet certain criteria to be acceptable for inclusion in the agency market. Such
loans are called conforming loans. All other loans are a part of the non-agency market (subprime and
nonprime loans, i.e., nonconforming loans). The agency market covers government-sponsored
mortgage enterprises (GSE) which had their genesis in 1933 and 1944 when the U.S. government
established mortgage insurance programs11 to provide guarantees for mortgage investors, thereby
making house finance more readily available following the Great Depression and the Second World
War, respectively. In 1938 the Federal National Mortgage Association (FNMA) was established, and
this was privatised in 1968. When this privatisation took place, the U.S. government established the
Government National Mortgage Association (GNMA) which, at its inception, was chartered to issue
MBS supported by FHA and VA mortgage loans.12 In 1996, total MBS issuances were less than $500
billion, and private label offerings were only 10 per cent of this total. Total MBS reached an all-time
high of $3.2 trillion in 2003, and private label issuances peaked at $0.9 trillion in 2006. Private
issuances shrank to $45 billion during 2008 (1/20th of the peak) and to $30 billion in 2009 (1/30th of
the peak).
Collateralised Mortgage Obligations
The first multiclass MBSs (or CMOs) were issued in 1983 and 1985 by the Federal Home Loan
Mortgage Corporation (FHLMC) and FNMA respectively. Investors in MBS face two risks, interest rate
risk and credit risk. Because the earliest MBS was essentially government-guaranteed, credit risk
played a relatively small role in early research on the MBS market and the valuation of MBSs and
CMOs.
Cost of Mortgage Credit
The creation of GSEs and the establishment of their MBS programs had the national goal of assisting
individuals and families in buying homes. Did these programs cause borrowing rates to fall? While
the evidence is not unanimous, the several studies that have been conducted suggest the answer is
yes, with estimates of lower borrowing costs in the range of nine to 30 basis points. Did these
programs expand the pool of mortgage credit? Again the tentative answer is yes, but the main driver
of this expanded pool is attributed to the expansion of subprime MBS lending.
10 The U.S. market for house finance differs from the Australian market. U.S. mortgages are typically fixed-rate loans. A U.S. mortgagor has the option to repay a loan, without penalty, if interest rates fall. A U.S. mortgagor also has an option to walk away from a home if the house price falls. Thus U.S. borrowers have both a call option and a put option. Whether the subprime defaults that occurred in the U.S. would have been as dramatic if lending had followed Australian practice is an interesting issue.
11 The Federal Housing Administration (FHA) and Veterans Administration (VA) programs, respectively.
12 The first ever financial futures contract, initiated in 1974, had as its underlying asset GNMA MBS.
Credit Rating Agencies (CRAs) and the GFC
The early MBS market had no role for CRAs because of implicit or explicit government guarantees.
But as the volume of private label CMOs expanded, CRAs were called upon to rate the credit quality
of CMO tranches. The empirical evidence suggests that the rating standards of the CRAs declined in
the lead-up to the GFC. But were investors duped? The evidence suggests not, because the yields on
lower quality tranches were higher, indicating that investors were able to discern and price this
greater credit risk. That aside, McConnell and Buser (2011, p. 175) suggest that “sorting out the
factors that led to the crash of the MBS market during 2006-2009 is likely to require further
investigation with an emphasis on policy implications”, in particular, what to do with the GSEs that
were taken back into government conservatorship in the bail-out mounted by the Federal Reserve.
Valuation models for MBS and CMO
MBSs are complex securities, and CMOs are even more so. Two approaches to the valuation of these
instruments exist in the literature. One approach is via structural models in which it is typically
assumed that borrowers optimize their loan decisions (i.e., they choose to make a monthly payment,
pay off the loan or default) resulting in a stream of cash flows to the MBS investors. Several
developments, primarily by academics, have occurred in the structural model approaches to valuing
MBSs. Reduced form models were, in contrast, primarily developed by Wall Street, because
structural models were not suited to the valuation of CMOs. Even so, reduced form models suffer
because they require significant time series data to estimate the cash flows to investors, and if the
economic conditions prevailing in the estimation period are structurally different to those in the
valuation period, considerable error can enter into the valuations. Despite the considerable effort
devoted to valuation models by academics and practitioners, McConnell and Buser (2011, p. 175)
state that “development of better models is undoubtedly an area ripe for research”.
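To make the valuation discussion concrete, a stylised pass-through valuation under a constant prepayment rate (CPR) assumption is sketched below. The pool size, rates and CPR are illustrative, and actual reduced-form models estimate prepayment and default behaviour from data rather than assuming them.

```python
# Stylised MBS pass-through valuation: a level-payment pool amortises, a
# constant fraction of the surviving balance prepays each month, and all
# cash flows are discounted at a single required yield.

def mbs_value(balance, wac, yield_rate, years, cpr):
    """Present value of monthly pool cash flows (interest + scheduled
    principal + prepayments) under a constant annual prepayment rate."""
    n = years * 12
    c = wac / 12                       # monthly mortgage rate earned by the pool
    y = yield_rate / 12                # monthly discount rate required by investors
    smm = 1 - (1 - cpr) ** (1 / 12)    # single monthly mortality from annual CPR
    pv = 0.0
    for t in range(1, n + 1):
        remaining = n - t + 1
        sched_pmt = balance * c / (1 - (1 + c) ** -remaining)
        interest = balance * c
        sched_prin = sched_pmt - interest
        prepay = (balance - sched_prin) * smm
        pv += (interest + sched_prin + prepay) / (1 + y) ** t
        balance -= sched_prin + prepay
    return pv

# A $100m pool with a 6% weighted-average coupon over 30 years, 6% CPR,
# valued at a 5% required yield; the pool is worth more than par because
# the coupon exceeds the discount rate.
v = mbs_value(100e6, 0.06, 0.05, 30, 0.06)
print(f"pool value = ${v/1e6:.1f}m")
```

Even this toy version shows where the difficulties lie: the valuation is acutely sensitive to the prepayment assumption, and if the CPR (or default behaviour) estimated from one period does not hold in the next, considerable error enters the valuation, which is precisely the structural-break problem noted above.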
Paper 8
Clifford W. Smith, “Managing Financial Risk”, Chapter 18 of Eckbo (2008)
Abstract – quoted directly from Smith (2008)
“Recent developments in corporate risk management include an expansion of the available
instruments, a material reduction in the costs of risk management products, and a more
sophisticated understanding of the benefits. This chapter examines the underlying theory of how risk
management increases firm value, and it summarises the evidence on the use of risk management
instruments.”
Reference Count – Overall ratio of A* and A journals to total references 0.875
Journal Count
Journal of Finance 6
Econometrica -
American Economic Review 1
Journal of Financial Economics 5
Review of Financial Studies 1
Other A-ranked journals 15
Other references 4
Total 32
Discussion
Cliff Smith is a highly regarded academic with an impressive publication record. This paper sets out a
series of conclusions drawn from a distillation of the conceptual and empirical literature on
corporate risk management. Many of these propositions are well understood and generally accepted
as theoretically valid, though the empirical evidence is frequently less consistent. The key messages
of the paper are:
Risk management has expanded substantially during the last 20 years.
Our understanding of the costs (which have dramatically decreased) and benefits of risk
management has improved.
Much of the literature on risk management focuses on the use of derivatives, especially
forwards, futures, swaps and options in hedging exposures to interest rates, foreign
exchange rates and commodity prices. However the available risk management instruments
are much broader than these derivatives.
There is disagreement about whether cash flow risk, earnings risk or firm value risk should
be the focus of risk management approaches, and thus additional research is warranted.
The literature identifies wide disparity in the extent to which corporations hedge their risks,
but the questions of why a firm hedges and what instruments should be employed remain
unresolved, again prompting a call for further research.
Some risks are firm specific and some are market-wide or systemic.
For a corporation held by investors who hold diversified portfolios, portfolio theory tells us
that the required rate of return depends on systematic risk, and not on total risk (i.e.,
systematic risk plus diversifiable or firm specific risk). Thus a hedging instrument that
reduces diversifiable risk for such a firm does not reduce the required rate of return for the
firm.
Even if risk management affects systematic risk, as long as the instrument is appropriately
priced, risk management will not affect firm value.
Risk management can, however, increase the value of a widely held firm by increasing the
firm’s expected net cash flows – not by reducing its required rate of return. This can occur
because risk management might affect investment decisions, taxes or contracting costs.
The impact of risk management cannot be confined to shareholders only. Other claimants on
the firm’s cash flows (i.e., bondholders, employees, managers, suppliers and customers)
have a vested interest in the firm’s success.
There has been scant research effort associated with a firm pre-committing to hedging
strategies.
The cost of risk management techniques like swaps has fallen dramatically since the 1980s
when spreads of 100 basis points were common; whereas many swap spreads by 2006 were
as low as two basis points. Swap markets are much more liquid, yet they remain relatively
un-researched. The costs of hedging are negatively related to the liquidity of the instrument,
and positively related to asset volatility and specificity.
Almost all firms use derivatives for hedging rather than speculation according to the survey
evidence. Additional research is warranted.
Firm characteristics that are associated with hedging have received some attention, but
there are substantial areas where further research is necessary.
Large firms tend to hedge less than small firms.
Closely held firms’ use of hedging remains largely unexplored.
While hedging involves four steps (exposure identification, instrument design, net benefit
assessment and strategy implementation), the academic literature has concentrated on the
first three steps, leaving considerable scope for research on implementation strategies. Yet
this “may be the single most important [step] in terms of creating firm value”.
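The portfolio theory proposition above can be illustrated with a minimal sketch (hypothetical inputs, not drawn from Smith (2008)): under the CAPM, the required rate of return depends only on beta, so a hedge that removes purely diversifiable risk leaves it unchanged.

```python
def required_return(rf, beta, market_premium):
    """CAPM: E[r] = rf + beta * (E[r_m] - rf); only systematic risk matters."""
    return rf + beta * market_premium

rf, premium = 0.04, 0.06
beta = 1.2   # hypothetical systematic risk of the firm

# A hedge that removes purely firm-specific (diversifiable) risk lowers
# total volatility but leaves beta, and hence the required return, unchanged.
before = required_return(rf, beta, premium)
after = required_return(rf, beta, premium)   # beta is unchanged by the hedge
```

Only if the hedge changed beta, or changed expected net cash flows, could it affect firm value, which is exactly the distinction Smith draws.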
Paper 9
Gary Gorton and Andrew Metrick, 2011, “Securitization”
Abstract – quoted directly from Gorton and Metrick (2011)
“We survey the literature on securitization and lay out a research program for its open questions.
Securitization is the process by which loans, previously held to maturity on the balance sheets of
financial intermediaries, are sold in capital markets. Securitization has grown from a small amount
in 1990 to a pre-crisis issuance amount that makes it one of the largest capital markets. In 2005 the
amount of non-mortgage asset-backed securities issues in U.S. capital markets exceeded the amount
of U.S. corporate debt issued, and these securitized bonds – even those unrelated to sub-prime
mortgages – were at the centre of the recent financial crisis. Nevertheless, despite the
transformative effect of securitization on financial intermediation, the literature is still relatively
small and many fundamental questions remain open.”
Reference Count – Overall ratio of A* and A journals to total references 0.331
Journal Count
Journal of Finance 8
Econometrica 1
American Economic Review 3
Journal of Financial Economics 4
Review of Financial Studies 4
Other A-ranked journals 27
Other references 95
Total 142
Discussion
Securitisation played a central role in the global financial crisis. Prior to the 2007-2008 financial
crisis, securitisation was a very large part of the U.S. capital markets.13 Yet securitisation is “largely
unregulated and it is not well understood”. There has been little research on the issue. The paper,
after providing institutional and descriptive statistics on securitisation, presents a simple model of
the securitisation decision. The origins of securitisation are then summarised. The theoretical
implications for the cost of capital are then discussed, and this is followed by the empirical evidence
on the effect of securitisation on the cost of capital. The question posed is whether securitised loans
have lower rates than loans held on the balance sheet. “Although the perfect experiment … is yet to
be run”, the evidence that does exist suggests that the cost of securitised loans is indeed lower by
around 10-17 basis points.
The final section of this paper is particularly relevant as it sets out a raft of unanswered questions.
These are summarised below.
Why did securitisation arise? Was there innovation? What are the sources of value? How
and why does innovation occur? Will securitisation regain its pre-financial crisis prominence
in capital markets? How does securitisation affect incentives to monitor borrower
behaviour? These are quite fundamental questions and illustrate that relatively little
empirical research has been undertaken on this topic.
The tranching of pools sold to special purpose vehicles (SPVs) remains a puzzle, as, too, does
the choice of loans to pool and sell to the SPV. Why, for example, has securitisation been
confined to specific pools of obligations rather than building portfolios of securitised assets
that combine, for example, credit card receivables, automobile loans, consumer loans,
mortgages and leases?
Explaining the structure of securitisation appears to have a long way to go.
The contractual nature of securitisation also needs more attention.
The structure of securitisation, that is, the internal workings of the SPV, has received little
attention.
Bankruptcy remoteness has received much more attention (particularly in the legal
literature) than liquidation-efficiency, that is, the living will aspect of the SPV. There are a
few examples where SPVs have been liquidated following the contractual rules, but there
are no studies of the contractual rules, or of their evolution.
The market pricing of asset-backed securities (ABS), both primary and secondary, is very
under-studied. For such an important asset class, research on the asset pricing of ABS is woefully lacking.
The role of rating agencies in the collapse of securitised loans is under-researched.
The legal form of the special purpose vehicle, typically a trust, has not been studied. Many
countries other than the U.S. had to pass special legislation to create tax neutral legal
vehicles. But, this international dimension has also not been studied.
Little is known about the international cross section of securitisation, other than that it has
grown. Whether there is important national variation is not known.
13 In 1990 straight corporate debt and securitisation in the U.S. both stood at around $100 billion. By 2005 securitisation had grown to $1,700 billion (compound growth of 20.8 per cent per annum) and corporate debt was $1,000 billion (growth rate 16.6 per cent). By 2010 the securitisation market had collapsed to around $120 billion, while corporate debt, after a fall to around $700 billion in 2007, recovered to around $1,080 billion.
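The growth rates quoted in footnote 13 follow from the standard compound annual growth rate formula; a quick arithmetic check using the approximate figures from the footnote:

```python
def cagr(start, end, years):
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

# Footnote 13: both markets at ~$100bn in 1990; by 2005 securitisation
# reached ~$1,700bn and corporate debt ~$1,000bn (15 years of growth).
securitisation_growth = cagr(100, 1700, 15)   # ~20.8% per annum
corporate_debt_growth = cagr(100, 1000, 15)   # ~16.6% per annum
```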
Nothing is known about servicers of securitised portfolios or about the contractual
arrangements with servicers.
Could securitisation evolve further?
An important asset class securitised in Europe is the “whole businesses” – whole business
securitisation. What are the limits of securitisation?
What are the implications of securitisation for monetary policy?
Asset-backed securities were at the core of the financial crisis. Their value as collateral was
questioned, causing runs on repo and asset-backed commercial paper (ABCP). What is the
social calculus as to costs and benefits of securitisation? Can a new regulatory design retain
the benefits while minimising the costs?
The paper concludes (p. 62) as follows: “Answering these questions will, in large part, require an
interdisciplinary approach involving scholars of accounting, finance, and law. The institutional and
legal details are very important. It is also clear that answering many of these questions will require
much more data collection than has happened to date. The data to address these questions is (sic)
not obtainable at low cost. It can’t simply be downloaded.”
Paper 10
Josh Lerner and Peter Tufano, 2011, “The Consequences of Financial Innovation: A Counterfactual
Research Agenda”
Abstract – quoted directly from Lerner and Tufano (2011)
“Financial innovation has been both praised as the engine of growth of society and castigated for
being the source of the weakness of the economy. In this article, we review the literature on
financial innovation and highlight the similarities and differences between financial innovation and
other forms of innovation. We also propose a research agenda to systematically address the social
welfare implications of financial innovation. To complement existing empirical and theoretical
methods, we propose that scholars examine case studies of systemic (widely adopted) innovations,
explicitly considering counterfactual histories had the innovations never been invented or adopted.
The significance of financial innovation has been widely touted. Many leading scholars, including
Miller (1986) and Merton (1992), highlight the importance of new products and services in the
financial arena, sometimes characterizing these innovations as an engine of economic growth.
But at the same time, claims of the beneficial impacts of financial innovations must be approached
with caution. One reason is that despite the acknowledged economic importance of financial
innovation, the sources of such innovation remain poorly understood, particularly empirically. In a
recent review, Frame & White (2004) are able to identify only 39 empirical studies of financial
innovation. Moreover, this literature concentrates largely on the back end of the innovation process,
focusing on the diffusion of these innovations, the characteristics of adopters, and the consequences
of innovation for firm profitability and social welfare. Frame & White identify only two papers on the
origins of innovation, namely, Ben-Horim & Silber (1977) and Lerner (2002).
There are many different research approaches to understanding financial innovation, including
empirical studies, theoretical models, and traditional historical descriptions. Each has advantages
and disadvantages, which we discuss below. In this review, our goal is to lay out a complementary
research agenda, which we hope will encourage subsequent scholars. After we review the definition
of financial innovation, we turn to three general observations about how financial innovation is
similar to and different from other forms of innovation—and which inform the limitations of
standard research methods. We then consider three case studies of particular innovations and
highlight what is both known and unknown about their consequences.”
Reference Count – Overall ratio of A* and A journals to total references 0.387
Journal Count
Journal of Finance 8
Econometrica 1
American Economic Review 3
Journal of Financial Economics 2
Review of Financial Studies 3
Other A-ranked journals 36
Other references 84
Total 137
Discussion
As noted above, Brealey et al. (2011) is one of the most widely used textbooks in corporate finance.
Much of this book is devoted to developing the net present value (NPV) model as the appropriate
investment decision-making approach. The NPV rule states that firms should accept projects with a
positive NPV. Brealey et al. (2011) argue that finding positive NPV projects for real assets (buying a
new machine, manufacturing a new product, starting a mining project etc.) is relatively easy, but
that investment in financial assets will be a zero NPV project if the asset is efficiently priced. Only if a
new financial product can be developed, for which there is an existing or potential unsatisfied
clientele, will the innovation have a positive NPV. The clear message to many thousands of corporate
finance students is “financial innovation can be a source of value creation”. The proliferation of
financial products over the last 20 years, especially products that are used in risk management,
shows that innovation in financial products is a growth industry.
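A minimal sketch of the NPV rule described above, with hypothetical cash flows (not an example from Brealey et al. (2011)):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# A real-asset project: outlay of 100 followed by three inflows of 45.
project = [-100, 45, 45, 45]
accept = npv(0.10, project) > 0     # positive NPV: accept the project

# An efficiently priced financial asset: its price equals the present
# value of its cash flows, so buying it is a zero-NPV transaction.
price = npv(0.10, [0, 45, 45, 45])
asset = [-price, 45, 45, 45]        # pay fair price today, receive the flows
```

Only a genuinely new product facing an unsatisfied clientele escapes the zero-NPV logic of the second case, which is the textbook sense in which financial innovation creates value.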
A good understanding of the main arguments in the Lerner and Tufano (2011) paper can be gleaned
from the rather extensive abstract to their paper that is quoted above. Financial innovation is
controversial, with some papers arguing that it adds to the fragility of financial markets (i.e.,
increased systemic risk), yet others suggesting that financial innovation is “an engine for economic
growth”. Much of the paper concentrates on three case studies of financial innovations, namely (i)
venture capital (VC) and private equity (PE), (ii) mutual funds and exchange traded funds (ETFs) and
(iii) securitisation. The paper describes the innovation in detail and then explores the counterfactual
histories.
Discussion of a counterfactual history, i.e., what would the world look like if the innovation had not
been invented and popularised, is, of course, judgmental and somewhat subjective14. However,
Lerner and Tufano (2011) argue that this approach allows a researcher to identify the direct costs
and benefits of the innovation, as well as the positive and negative externalities of the innovation. In
relation to VC and PE, Lerner and Tufano examine whether three alternatives to VC and PE financing,
namely angel investors, government funding and banks, might have filled the need for equity
funding of high-risk early-stage companies. They suggest these alternatives could not have replaced
VC and PE firms. When analysing mutual funds and ETFs, the principal counterfactuals are fractional
shares and bonds, direct holding of portfolios by investors through intermediaries such as insurance
companies, and alternative forms of intermediation such as insurance-wrapped investments. They
suggest mutual funds and ETFs “were likely beneficial for investors, relative to other reasonable
counterfactuals”. The counterfactuals to securitisation analysed15 by Lerner and Tufano (2011) are (i)
sale of portfolios of mortgage loans to other deposit taking institutions, (ii) the creation of pass-
through MBSs, and (iii) the creation of multi-class structures. They suggest that securitisation is so
complex that understanding (p. 78) “the precise details … will probably thwart any definitive
scientific study of the phenomenon”.
In drawing areas for future research Lerner and Tufano (2011) recommend (i) counterfactual analysis
of other major financial innovations, (ii) the analysis of settings where there are constraints or
barriers to financial innovation – a specific example being Islamic finance, (iii) the greater use of
experimental techniques, and (iv) the use of structural estimation models, albeit with the caveat of
an inability to assess externalities.
Paper 11
Peter Carr and Roger Lee, 2009, “Volatility Derivatives”
Abstract – quoted directly from Carr and Lee (2009)
“Volatility derivatives are a class of derivative securities where the payoff explicitly depends on some
measure of the volatility of an underlying asset. Prominent examples of these derivatives include
variance swaps and VIX futures and options. We provide an overview of the current market for these
derivatives. We also survey the early literature on the subject. Finally, we provide relatively simple
proofs of some fundamental results related to variance swaps and volatility swaps.”
Reference Count – Overall ratio of A* and A journals to total references 0.618
Journal Count
Journal of Finance 2
14 Perhaps the best-known counterfactual analysis is Fogel’s analysis of the importance of the railroad industry in promoting U.S. economic growth. Fogel received the Nobel Prize in 1993 for this work.
15 The authors acknowledge that there are a “nearly unlimited number of counterfactuals”. One such example is what would have happened if only prime mortgages (i.e., rather than sub-prime) had been securitised. Another is what would have happened if rating agencies had rated certain highly structured transactions more poorly, or even refused to rate such arrangements.
The volatility of returns on an asset is the term most frequently used in finance to encapsulate its
risk.16 Since the mid-1990s, new derivative securities have arisen that are called volatility derivatives,
in particular, variance swaps and futures and options on a volatility index called the VIX.
History of volatility derivative markets
The paper provides a history of the evolution of the volatility derivatives market. The first variance
swap was written in 1993. In a variance swap the payoff is a function of the realised volatility over
the life of the swap. A volatility swap differs with its payoff being a function of implied volatility. The
CBOE introduced the first volatility index (the VIX) in 1993. The VIX was constructed from implied
volatilities of short-term near-the-money puts and calls on the S&P 100 index. The CBOE
did not start trading derivatives on the VIX for more than a decade after its launch, but European
exchanges did – the Swedish exchange OM in 1997, and the Deutsche Terminbörse in 1998. In 2003
the CBOE introduced the Chicago Futures Exchange (CFE) whose sole purpose was to provide
exchange-traded volatility derivatives. At this time the VIX composition was changed in three ways:
first, the S&P 500 replaced the S&P 100; second, annualisation was based on business days rather than
calendar days; and third, the new VIX no longer relied on the Black-Scholes model. Rather, a more
robust theory of pricing continuously monitored variance swaps was introduced to determine
implied volatilities.
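The variance swap payoff described above can be sketched as follows (illustrative prices and contract terms, not taken from Carr and Lee (2009)); the floating leg is the annualised realised variance computed from squared log price relatives:

```python
import math

def realised_variance(prices, periods_per_year=252):
    """Annualised realised variance from squared log price relatives."""
    log_returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    return periods_per_year * sum(r * r for r in log_returns) / len(log_returns)

def variance_swap_payoff(prices, strike_vol, notional_per_var_unit):
    """Floating (realised variance) leg minus fixed (strike variance) leg."""
    return notional_per_var_unit * (realised_variance(prices) - strike_vol ** 2)

# Hypothetical daily closes over the life of the swap.
prices = [100.0, 101.0, 99.5, 100.2, 98.8, 99.9]
payoff = variance_swap_payoff(prices, strike_vol=0.20,
                              notional_per_var_unit=1_000_000)
```

A volatility swap differs only in that its floating leg is the square root of the realised variance rather than the variance itself.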
The cataclysm that hit financial markets in 2008 had a profound effect on volatility derivatives.
Historically profitable strategies suffered huge losses, and the markets in many instruments dried
up.
Early literature on volatility derivatives
Prior to the introduction of the VIX, a small but prescient set of papers advocated the development
of volatility indices and financial products with payoffs tied to these. The literature that is surveyed
in Carr and Lee (2009) is entirely conceptual, and it is not reviewed here.
Other aspects of Carr and Lee (2009)
Carr and Lee (2009) then provide derivations and proofs for the following:
16 If that asset is held in a well-diversified portfolio its individual volatility is less important; what matters is how it adds to the risk of the portfolio.
Why variance swap payoffs involve squaring log price relatives, rather than discretely
compounded returns, and why the standard methodology under-prices actual variance
swaps when the risk-neutral expectation of cubed returns is negative.
An alternative measure of the money-ness of an option.
A simple approximation for market volatility swap rates and an economic justification for its
use in forecasts of subsequently realised volatility.
Carr and Lee (2009) point to several issues that need further theoretical exposition and
development. The paper, however, provides no suggestions for empirical work. No doubt there will
be some CIFR researchers who are interested in the mathematical finance aspects of volatility
derivatives. They would benefit from carefully reading the references provided.
Paper 12
Mark J. Flannery, Paul Glasserman, David K.A. Mordecai, Cliff Rossi, 2012, “Forging Best Practices in Risk Management”
Abstract – quoted directly from Flannery et al. (2012)
“This paper approaches risk management from three perspectives: firm-level risk measurement,
governance and incentives, and systemic concerns. These are three essential dimensions of best
practices in risk management; although we discuss each dimension separately, they are interrelated.
The paper begins with a brief review of salient changes and unmet challenges in risk measurement in
the wake of the financial crisis. It proceeds with a discussion of the interplay between volatility
regimes and the potential for risk amplification at a system-wide level through simultaneous risk
mitigation at the individual firm level. Quantitative risk measurement cannot be effective without a
sound corporate risk culture, so the paper then develops a model of governance that recognizes
cognitive biases in managers. The model allows a comparison of the incentive effects of
compensation contracts and leads to recommendations for improving risk management through
improved contract design. The last section takes a systemic perspective on risk management. Risk
managers must recognize important ways in which market dynamics deviate from simple, idealized
models of hedging an individual firm’s exposures. Firms’ collective hedging, funding, and collateral
arrangements can channel through the financial system in ways that amplify shocks. Understanding
these effects requires an appreciation for the organization of trading operations within firms. The
article concludes with a summary and recommendations.”
Reference Count – Overall ratio of A* and A journals to total references 0.338
Journal Count
Journal of Finance 1
Econometrica 2
American Economic Review 1
Journal of Financial Economics -
Review of Financial Studies 1
Other A-ranked journals 20
Other references 49
Total 74
Discussion
This report is published by the U.S. Office of Financial Research (OFR), which was established under
the Dodd-Frank Act of 2010 in the aftermath of the GFC. The OFR’s purpose is to improve the quality
of financial data available to policymakers and to facilitate more robust and sophisticated analysis of
the financial system and the management of financial risk. The report is in four parts: (i) an
introduction written by Mark Flannery, (ii) firm-level issues in risk management written by Paul
Glasserman, (iii) risk governance, incentives and cognitive bias written by Clifford Rossi, and (iv)
systemic issues in risk management written by David Mordecai.
Introduction
The GFC revealed major gaps in risk management in some of the largest and most complex firms.
This paper’s main purpose is to understand the macro-prudential implications of individual firms’
risk management practices, a perspective introduced by the Dodd-Frank Act.
Firm-level issues in risk management
The main propositions advanced in this section are:
Modern risk management began around 1992 when VaR was introduced. VaR has many
shortcomings (discussed briefly above in Jorion (2010)), but one positive aspect of
introducing VaR is that firms must collect data on every position in their total portfolio, and
such data are essential inputs into risk management.
Volatility regimes need to be explicitly incorporated into risk management. This requires a
historical perspective of 20+ years, not just 2-4 years.
There needs to be increased recognition of the interactions between micro (firm-level) risk
management and macro (economy-wide or systemic) risk; this analysis reveals that the risk
management of firms can create increased systemic risk when multiple firms use similar risk
management approaches.
A priority for the future is to carefully analyse the pro-cyclicality aspects of regulation.
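The firm-level risk measurement discussed above can be illustrated with a minimal historical-simulation VaR sketch (hypothetical P&L history; the paper itself prescribes no particular implementation):

```python
def historical_var(pnl_history, confidence=0.99):
    """Loss threshold exceeded on roughly (1 - confidence) of historical days."""
    losses = sorted(-pnl for pnl in pnl_history)   # losses as positive numbers
    index = int(confidence * len(losses))
    return losses[min(index, len(losses) - 1)]

# 100 hypothetical daily P&L observations: mostly small gains, a few large losses.
pnl = [1.0] * 90 + [-5.0] * 6 + [-20.0] * 4
var_95 = historical_var(pnl, confidence=0.95)
var_99 = historical_var(pnl, confidence=0.99)
```

The sketch also makes the volatility-regime point concrete: a 2-4 year window that happens to contain only the benign observations would report a far lower VaR than a longer history that includes the stressed days.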
Risk governance, incentives and cognitive bias
Risk management involves both quantitative approaches and human judgments. Whenever humans
are involved there is a possibility of cognitive biases causing breakdown in risk management
processes. Poorly designed executive compensation systems can lead managers to marginalise risk
management. Defects in the quality of the data and analytical processes can add to defects in the
governance of risk management systems. The paper then develops a bank risk management model
that shows that, when cognitive biases exist in the presence of weak governance, there is a tendency for
management to underestimate risk due to the behavioural heuristics of confirmation bias, a
house-money effect, a herd effect and an ambiguity bias. Policy solutions are proposed to address these
biases. Specifically, (i) financial incentives for more rigorous assessments of risk management at
financial institutions by the providers of directors and officers liability insurance, rating agencies and
regulatory agencies with direct links to policy premiums, rating premiums and supervisory
outcomes, (ii) developing robust risk data warehouse capabilities, (iii) formalising the reporting of
risk management with the senior risk officer having direct access to the Risk Committee of the
board, and (iv) building an organisational understanding of the stature of risk management.
Systemic issues in risk management
Financial institutions may share exposure to systematic market conditions and common market
forces which can become amplified by collective actions that would otherwise be benign at the firm-
specific level. The two main topics in this section are (i) how delta-neutral pricing and hedging
policies can generate obligations that cause similar risks to accumulate among traders, and (ii) how
the computational complexity of system-wide borrowing and lending (and settlement mechanisms)
can produce contingent forward obligations that result in unpriced residual risk factors and
increased systemic risk. During regime changes, acute and unanticipated increases in risk can occur
when idiosyncratic effects no longer cancel in a manner that is expected.
Paper 13
Robert A. Jarrow, 2009, “Credit Risk Models”
Abstract – quoted directly from Jarrow (2009)
“This paper reviews the literature on credit risk models. Topics included are structural and reduced
form models, incomplete information, credit derivatives, and default contagion. It is argued that
reduced form models and not structural models are appropriate for the pricing and hedging of
credit-risky securities. Directions for future research are discussed.”
Reference Count – Overall ratio of A* and A journals to total references 0.697
Journal Count
Journal of Finance 10
Econometrica 2
American Economic Review 1
Journal of Financial Economics 2
Review of Financial Studies 5
Other A-ranked journals 26
Other references 20
Total 66
Discussion
Credit risk arises whenever two counterparties engage in borrowing and lending. Credit derivatives,
which began trading in the early 1990s, are financial contracts where the payoffs depend on
whether a credit entity defaults on its debt. By 2007 the outstanding notional value of credit
derivatives was estimated at over $US 62 trillion, compared to the more mature equity derivatives
market at $US 10 trillion.
This paper provides an historical overview of the developments in credit risk models. It starts with
the Merton model of 1970 and 1974, and several extensions in subsequent years. A major
development came in 1992, when Heath, Jarrow and Morton (HJM) modelled the term structure of interest
rates using option pricing technology. Extensions to the HJM model to include credit (or default) risk,
credit risk migration, default correlations for pricing of baskets of credit derivatives, default
contagion, incomplete information and models of recovery rates and stochastic recovery rates have
subsequently been introduced.
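A minimal sketch of the Merton model summarised above (illustrative inputs, not from Jarrow (2009)): equity is a call option on the firm's assets, and the risk-neutral probability of default over horizon T is N(-d2).

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_default_probability(V, D, r, sigma, T):
    """Risk-neutral P(asset value V_T < face value of debt D) under Merton."""
    d2 = (math.log(V / D) + (r - 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(-d2)

# Hypothetical firm: assets of 120, zero-coupon debt of 100 maturing in 1 year.
p_default = merton_default_probability(V=120, D=100, r=0.03, sigma=0.25, T=1.0)
```

Raising asset volatility raises the default probability, which is the basic structural-model intuition that the reduced form literature Jarrow favours then replaces with an exogenous default intensity.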
Jarrow (2009) provides an extensive and technical discussion of the Merton model and the
subsequent developments that are summarised briefly above. This exposition is highly mathematical
and, as such, is not reviewed here. Researchers in the mathematical finance of credit risk, default
probability and recovery rates, the valuation of risky debt, credit risk spreads, default contagion, risk
management, loss distributions using Copulas, and the extensions to stochastic interest rates,
general submartingale processes, more complex liability structures, and recovery rates, will no doubt
find this survey of high relevance. Jarrow (2009) then provides a summary of developments in (i)
reduced form models, (ii) incomplete information, and (iii) credit derivatives. Again these discussions
are highly mathematical and are not reviewed here.
The final section of Jarrow (2009) provides directions for future research. These all involve additional
model development in relation to:
the implementation of realistic contagion models – because existing models are static in
nature and do not capture the dynamic structure of credit risk,
stochastic recovery rate models – because standard recovery rate models assume constant
recovery rates that are independent of the health of the economy, and
the inclusion of liquidity risk – when credit risk is high many financial contracts default and
markets become skittish; when markets are skittish liquidity dries up, prices for credit risky
securities fall, and lending rates soar. These elements and dependencies are missing from
existing credit risk models.
Paper 14
Robert A. Jarrow, 2011, “The Economics of Credit Default Swaps”
Abstract – quoted directly from Jarrow (2011)
“Credit default swaps (CDSs) are term insurance contracts written on traded bonds. This review
studies the economics of CDSs using the economics of insurance literature as a basis for analysis. It is
alleged that trading in CDSs caused the 2007 credit crisis, and therefore trading CDSs is an evil that
needs to be eliminated or controlled. In contrast, I argue that the trading of CDSs is welfare
increasing because it facilitates a more optimal allocation of risks in the economy. To perform this
function, however, the risk of the CDS seller's failure needs to be minimised. In this regard,
government regulation imposing stricter collateral requirements and higher equity capital for CDS
traders needs to be introduced.”
Reference Count – Overall ratio of A* and A journals to total references 0.431
Journal Count
Journal of Finance 6
Econometrica -
American Economic Review 1
Journal of Financial Economics 1
Review of Financial Studies 2
Other A-ranked journals 21
Other references 41
Total 72
Discussion
In some senses, the abstract above says it all. Jarrow (2011) argues that the risk of failure of a CDS
seller needs to be minimised, and that this is best achieved by tighter regulation imposing higher
collateral requirements and higher equity capital, i.e., lower leverage. CDSs, per se, did not cause the
GFC.
CDS trading has grown dramatically in the U.S. since CDSs were first introduced in 1995. Table 1 from
Jarrow (2011) is reproduced below. CDSs trade in the over-the-counter markets, which are relatively
loosely regulated.
To put these statistics in context, I note that the global equity market capitalisation at the end of
2009 was approximately $US 50 trillion. Interest rate and currency swap exposures are more than
eight times as large, at around $US 427 trillion. The CDS market is approximately $US 30 trillion. The
dramatic reduction in CDS notional value from 2007 is not due to decreased trading of CDSs, but
rather, to a change in regulation, whereby net positions are now reported. While some have argued
that CDS trading was a major contributor to the GFC (but note that the CDS market, even at its peak,
is only 16 per cent of the size of the interest rate and currency swap markets), Jarrow (2011) argues
that before these assertions can be properly evaluated, an understanding of the economics of CDS
(and in particular the economics of insurance because CDS contracts are insurance contracts - they
only pay a benefit if the bond or bonds on which they are written default) is necessary. Jarrow’s
(2011) key conclusions from this analysis are as follows (p. 237 – 238, bold emphasis added):
“1. The trading of CDSs increases the welfare of the traders in financial markets via the optimal allocation of risks, thereby lowering debt costs.17
2. The trading of CDSs reduces market imperfections in the trading of debt, especially by enabling the taking of short positions. This reduction in market imperfections facilitates the access to more debt capital, thereby lowering debt costs.
3. The possibility of CDS seller default, analogous to insurance company failure, reduces the welfare-increasing role of trading CDSs.18 Government regulation of the CDS collateral requirements and CDS seller equity capital is needed to maintain the benefits of trading CDSs.
4. CDS defaults have a systemic risk component, which in the aggregate might lead to the failure of financial markets. This negative externality is not currently priced into the contracts. If it exists, regulation is needed to correct for this negative externality in the trading of CDSs.
5. CDS spreads can be decomposed into (a) the expected loss, plus (b) a default risk premium, plus (c) asymmetric information-monitoring costs, plus (d) a liquidity risk premium due to a quantity impact of trades on the price.
6. The valuation of CDS must take into account counterparty risk in the execution of the contracts. This depends on the collateral requirements and creditworthiness of the CDS seller.
7. For OTC CDSs, posting 100% of the notional in collateral using riskless securities will completely remove counterparty risk and the negative externality due to systemic risk and the failure of financial markets. Due to the cost of posting 100% collateral, this suggestion has not been made in either the financial press or in the regulatory arena. An alternative clearing mechanism is a central clearing counterparty (CCP). A CCP will reduce, but not eliminate, the risk of financial market failure.
8. Exchange-traded CDSs will also reduce, but not eliminate, the risk of financial market failure. This is true unless the exchange-traded CDSs become futures contracts, with daily settlement of gains and losses. Exchange trading has the additional benefits that it should reduce trading/liquidity costs and increase transparency in both pricing and trading activity.
9. A centralized collateral authority that monitors and regulates (as necessary) collateral positions across CDS market participants can both reduce counterparty risk in customized CDS contracts and reduce the risk of financial market collapse.
10. Rating agency error in evaluating correlated default risk generated two distortions in the computation of equity capital, which was a key factor in creating the financial crisis. One, it generated the perception of nearly riskless securities (AAA rated) with high yields, against which little equity capital was required. Two, it enabled highly rated firms to sell CDSs with little or no collateral and insufficient equity capital (e.g., AIG).”
17 The economics of insurance in essence finds that these contracts can be Pareto Optimal, or at least welfare increasing.
18 In general the equilibrium models developed in the economics literature assume no default risk for the insurance providers. If this possibility is included a role for regulation arises.
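Jarrow's decomposition of the CDS spread (point 5 of the conclusions quoted above) can be illustrated with a minimal numerical sketch. All figures below are hypothetical, chosen only to show how the four components add up; none are taken from the paper.

```python
# Hypothetical decomposition of a CDS spread into Jarrow's four components
# (point 5 above). All numbers are illustrative, in basis points per annum.

default_prob = 0.02          # assumed one-year default probability
loss_given_default = 0.40    # assumed loss given default

expected_loss = round(default_prob * loss_given_default * 10_000)  # (a), in bps
default_risk_premium = 40    # (b) premium for bearing non-diversifiable default risk
info_monitoring_cost = 15    # (c) asymmetric information / monitoring costs
liquidity_premium = 10       # (d) quantity impact of trades on price

cds_spread = expected_loss + default_risk_premium + info_monitoring_cost + liquidity_premium
print(f"expected loss component: {expected_loss} bps")
print(f"hypothetical CDS spread: {cds_spread} bps")
```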
(C) Market and Regulatory Performance
Informed regulation
Regulatory system architecture
Regulatory performance and reform (e.g. assessment of performance,
enforcement experience, prudential and accounting standards)
Superannuation / managed funds
Effective measures of financial consumer/investor protection
Development of new markets (e.g. corporate bonds, carbon emission and other
environmentally motivated markets)
Reviewed Papers
Paper 15
David K. Musto, 2011, “The Economics of Mutual Funds”
Abstract – quoted directly from Musto (2011)
“This review surveys the literature on the economics of mutual funds in general, and open-end
mutual funds in particular. This mutual fund design has been very successful, though it carries risks
that have recently been realized at large scales. It also frustrates the analysis of performance in ways
only recently appreciated. Among the topics reviewed are tax efficiency, transactions costs, risk
shifting, window dressing, governance, marketing, price setting, and concerns that arise at the family
level.”
Reference Count – Overall ratio of A* and A journals to total references 0.742
Journal Count
Journal of Finance 23
Econometrica -
American Economic Review 1
Journal of Financial Economics 15
Review of Financial Studies 6
Other A-ranked journals 27
Other references 25
Total 97
Discussion
Mutual funds were invented in 1924. Investments in mutual funds now comprise 43 per cent of U.S.
Musto (2011) does not provide a thorough review of the fund performance literature because this is
done in Wermers (2011) (reviewed below). Musto sets out the basic economic arguments of open-
end funds as follows19. Suppose fund A has expected performance that exceeds fund B. Investors
would withdraw money from B and direct this to A, and they would continue to do so until scale
diseconomies improve B’s prospects to the point where they equal A’s. If this happens in all funds in
the universe then the cross-section of performance would be unpredictable, whether from past
performance or from anything else, regardless of manager skill.20 It also follows that flows would not
have predictive power, even though these flows are rational and informed. Thus a key difference
between a mutual fund and equity investments is that the expected returns for mutual funds
equilibrate through flows, whereas the expected returns on equities equilibrate through price. This
analysis of the economics of mutual funds is apposite because it clarifies what we can and cannot
learn from the empirical literature.
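The flow-equilibration logic just described can be sketched as a toy simulation in which each fund's expected alpha declines linearly with assets under management (a linear scale-diseconomy assumption of this sketch, not a result from Musto); money moves from the fund with the lower expected alpha to the fund with the higher expected alpha until the two are equal. All parameter values are hypothetical.

```python
# Toy model: expected performance equilibrates through flows, not price.
# Assumes alpha_i = a_i - b * q_i (linear scale diseconomy, illustrative only).

a = {"A": 0.05, "B": 0.02}   # skill: gross expected alpha at zero scale
b = 0.002                    # alpha lost per $1m of assets under management
q = {"A": 10.0, "B": 10.0}   # assets under management, $m

def alpha(fund):
    return a[fund] - b * q[fund]

step = 0.01                  # $m moved per step
while alpha("A") > alpha("B") + 1e-9 and q["B"] > step:
    q["B"] -= step           # investors withdraw from the weaker fund...
    q["A"] += step           # ...and direct the money to the stronger fund

print(round(alpha("A"), 4), round(alpha("B"), 4))  # expected alphas equalise
print(round(q["A"], 1), round(q["B"], 1))          # the skilled manager runs more money
```

In equilibrium the skilled manager simply runs more money at the same expected alpha, which is why the cross-section of expected performance is unpredictable even when skill differs.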
Money funds, bank runs and Lehman
Money market mutual funds arose in the 1970s. At this time short-term interest rates in the U.S.
were well above the rates that banks could pay; this fostered a demand for an investment vehicle
that resembled a bank account but paid closer to the market interest rate. Money funds met this
demand and proliferated, and by 1981, they accounted for 77 per cent of all mutual fund assets. As a
result of Lehman’s collapse the U.S. government provided deposit insurance (for a small premium)
for money funds21; otherwise the commercial paper market in the U.S. (of which money funds hold about
40 per cent) would have been in turmoil. Money funds have gross returns that are very similar
(because their investments are so tightly regulated) and so they compete on fees.
Operation of mutual funds
U.S. mutual funds operate as companies, this being mandated by the Investment Company Act of
1940. When a mutual fund that is part of a family contracts for investment advice with its family
advisor, it legally must act in the interests of its shareholders, not in the interests of the family.
However, the family normally dominates the selection of directors, and these directors almost
never fire the family investment advisor.
Fees and expenses
The U.S. equally-weighted mutual fund expense ratio in 2009 averaged 1.50 per cent; the value-weighted ratio was 0.86 per cent. Clearly, larger funds have lower expense ratios. Empirically, higher-expense funds (i.e., smaller funds) have lower net-of-expense returns. The fee structure in a mutual fund is usually a flat percentage, but one variant (called a fulcrum fee structure) is for fees to be higher if the fund beats its benchmark, but the fees must then decrease at the same rate if the fund falls short of its benchmark.22
19 This raises the issue of “Why are there not more closed-end funds?”
20 It should be noted that the empirical evidence suggests that investors do not punish poor performance with cash withdrawals, though money does flow to funds with good performance. However, much of the empirical literature does show that past performance is not a good predictor of future performance for mutual funds, in contrast to the evidence on private equity (PE) and venture capital (VC) funds, where past performance is persistent. It is of course much harder to withdraw funds from a PE or VC fund, which typically has a 10-year investment commitment.
21 Money funds do not normally have, or indeed need, lender of last resort protection, because their investments are tightly regulated in highly liquid short-term deposits with the highest credit ratings. The insurance arrangement operated for one year. New regulations were then introduced that allowed money funds to suspend convertibility, which they could not do prior to the GFC.
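The fulcrum fee described above can be sketched as a base fee plus a symmetric performance adjustment, capped in both directions. The function name and all parameter values below are hypothetical, chosen only to illustrate the symmetry requirement.

```python
# Sketch of a fulcrum fee: the performance adjustment must be symmetric
# around the benchmark. All rates are illustrative annual fee rates.

def fulcrum_fee(fund_return, benchmark_return,
                base_fee=0.0075, participation=0.10,
                min_fee=0.0025, max_fee=0.0125):
    """Annual fee rate: base fee +/- a fixed share of out/under-performance."""
    adjustment = participation * (fund_return - benchmark_return)
    return min(max_fee, max(min_fee, base_fee + adjustment))

# Beat the benchmark by 2%: the fee rises by 0.10 * 0.02 = 20 bps.
print(round(fulcrum_fee(0.12, 0.10), 4))
# Lag the benchmark by 2%: the fee must fall at the same rate, by 20 bps.
print(round(fulcrum_fee(0.08, 0.10), 4))
```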
Marketing of mutual funds
Relatively little finance research has been conducted on this topic. However the evidence suggests
that advertising works. It has also been found that price matters, i.e., that higher front-end fees
result in lower net flows. The industry has evolved such that no-load funds now hold the bulk of the
assets and get the bulk of the new investments.
Family funds
The evidence shows that when one fund in a family performs well, there is a spill-over of flows
into the other funds within the family. While the empirical evidence shows that fund
performance declines as a particular fund in a family gets larger, performance increases with the size
of the rest of the family, consistent with economies of scale.
Paper 16
Zvi Bodie, Jérôme Detemple, and Marcel Rindisbacher, 2009, “Life-Cycle Finance and the Design of Pension Plans”
Abstract – quoted directly from Bodie et al. (2009)
“This article reviews recent scientific literature on consumer financial decisions over the life cycle,
outlining its implications for the design of pension plans. It begins with a review of advances in the
theory of rational financial planning and wealth management. It then summarizes the recent
empirical literature on the actual behavior of households regarding saving, investing, and insuring
their consumption in old age. Finally, it briefly comments on the practical implications of the theory
for the design of pension systems and outlines areas of future research.”
Reference Count – Overall ratio of A* and A journals to total references 0.761
Journal Count
Journal of Finance 18
Econometrica 4
American Economic Review 12
Journal of Financial Economics 2
Review of Financial Studies 12
Other A-ranked journals 60
Other references 34
22 The fee structure where fees increased faster if the fund met its benchmark than they fell if the benchmark was not reached was outlawed in the Investment Company Act of 1970. An interesting issue is “Why are there not more funds with fulcrum fee structures?”
expected growth rate of wages, wage rate volatility) and on market factors (interest rate, market
price of risk, return volatilities). Optimal hedging of the risks faced by individuals requires a huge
number of securities, and hence individualised solutions are prohibitively costly. Rather, innovations
such as target rate funds or “safe” funds, akin to defined benefit funds, offer payoff structures that
match the consumption patterns of individuals. The extent to which individuals are responsible for
their own retirement expenditures, as opposed to government pension systems, varies over time
and across nations.
The paper concludes with suggestions for additional empirical and theoretical research. These are
reproduced verbatim (p. 279) “To offer optimal retirement vehicles, plans need to be fully
customized so as to service the needs of each individual household. In practice, various
imperfections hinder implementation of optimal strategies. Difficulties that need to be addressed
include the following:
1. Full customization is extremely costly: Offering fully tailored plans involves information collection
costs, database creation and maintenance costs, and individual account management costs. Fully
disaggregated solutions may prove difficult to achieve.
2. Implementation requires complete information about individual characteristics: Whereas some
characteristics such as age and education are easy to identify, others such as risk aversion and tastes
are difficult to elicit. Even individuals who are well acquainted with notions of risk and return often
find it difficult to articulate precise risk preferences.
3. Markets are not complete: Severe shocks can introduce discontinuities in prices. Other shocks
correspond to risks that are simply not traded. In both cases, hedging proves challenging, if not
infeasible. For financial intermediaries selling and managing pension plans, the constitution of
adequate capital reserves may be the only solution to absorb unhedgeable risks.
4. Continuous trading is not feasible: Transaction costs have shrunk significantly during the past 30
years, but they still remain. This precludes instantaneous rebalancing.
5. Implementation also requires complete information about the model parameters: Statistical and
econometric techniques are not powerful enough to achieve that goal. Even if the true structure of
the model is known, parameter estimates are subject to errors.”
Finally, it is noted that future research will be enhanced by access to more granular data on
individual portfolio holdings. Current empirical research, which relies almost exclusively on linear
regression analysis, is unable to identify the nonlinearities that are embedded in horizon effects,
wealth effects, cohort effects, and behavioural distortions in preferences.
Paper 17
Wayne E. Ferson, 2010, “Investment Performance Evaluation”
Abstract – quoted directly from Ferson (2010)
“This article provides a review of the rapidly developing literature on investment performance
evaluation. The goals are to summarize the significant forces and contributions that have brought
this field of research to its current state of knowledge and to suggest directions for future research.
This review is written for a reader who is familiar with financial economics but not the specific
literature and who wishes to become familiar with the current state of the art. Suggestions for
future research include refinements to portfolio holdings-based performance measures, a more
balanced treatment of costs, and clientele-specific measures of investment performance.”
Reference Count – Overall ratio of A* and A journals to total references 0.853
Journal Count
Journal of Finance 33
Econometrica 4
American Economic Review 3
Journal of Financial Economics 22
Review of Financial Studies 9
Other A-ranked journals 51
Other references 21
Total 143
Discussion
In the last 10 years research on fund performance has seen explosive growth, primarily due to
databases becoming more available at reasonable cost, reductions in the cost of computing, and
increased demand from investors. There has also been dramatic growth in alternative investment
vehicles, in particular hedge funds and private equity vehicles.
Mutual fund performance can be measured in a number of ways, including alpha from the CAPM,
stochastic discount factor (SDF) alphas, and multifactor asset pricing model alphas (in particular style
adjustments for growth, value and momentum). These performance measures are also applied to
the individual securities held by a fund. The empirical evidence suggests that alphas are typically
negative, often approximating the fund’s expense ratio. Alphas display little persistence over time,
except among poorly performing funds. Conditional performance evaluation methods have also
been developed, where the most common conditioning variables are measures of macroeconomic
conditions. The conditional models tend to paint a more optimistic picture of fund performance. A
range of models that test managers’ timing ability has been developed; the evidence suggests that
market timing ability is confined to subsamples of funds of particular styles, and to certain economic
conditions. Performance ratios have also been developed: the Sharpe ratio, widely used in
performance evaluation, scales portfolio return by portfolio volatility, while the Treynor ratio
scales portfolio return by portfolio beta. Once data for the constituent stocks in a mutual fund are
available, it is possible to evaluate performance using the weight a stock has in the portfolio relative
to its market weight. The literature also shows an asymmetric performance-flow relationship,
whereby funds with good performance attract strong inflows of new money, while funds with poor
performance do not suffer withdrawals of a similar magnitude; however, recent evidence suggests
this relationship is diminishing. Other empirical work has investigated “skill versus luck” explanations
of performance, the liquidity of funds, and of course the “newer” fund types, i.e., hedge funds,
bond funds, and VC and PE funds.
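The basic return-based measures mentioned above can be computed in a few lines. The monthly excess-return series below are simulated, purely for illustration; nothing here describes a real fund.

```python
# Illustrative computation of CAPM alpha, the Sharpe ratio and the Treynor
# ratio from 120 months of simulated excess returns.
import numpy as np

rng = np.random.default_rng(0)
market_excess = rng.normal(0.006, 0.04, 120)               # market excess returns
fund_excess = 0.001 + 0.9 * market_excess + rng.normal(0, 0.01, 120)

beta, alpha = np.polyfit(market_excess, fund_excess, 1)    # CAPM regression
sharpe = fund_excess.mean() / fund_excess.std(ddof=1)      # return per unit of volatility
treynor = fund_excess.mean() / beta                        # return per unit of beta

print(f"alpha={alpha:.4f}  beta={beta:.3f}  sharpe={sharpe:.3f}  treynor={treynor:.4f}")
```

The Sharpe ratio penalises total volatility; the Treynor ratio penalises only systematic (beta) risk, which is why the two can rank funds differently.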
Ferson (2010) calls for additional research in several areas. In particular he suggests (i) a more
balanced treatment of fund costs, (ii) resolving the theoretical ambiguities associated with the
interpretation of alpha, and (iii) the development of client-specific performance measures. Again
more granular data on fund trading activities would open up a huge range of possible topics,
including, but by no means limited to, optimal strategies for fund trading activities to reduce market
impact costs, investigation of the investment skill of fund analysts, investigation of buy versus sell
recommendation performance asymmetries, and investigating the way in which fund performance
has changed as a result of algorithmic trading.23
Paper 18
Christian Bluhm and Christoph Wagner, 2011, “Valuation and Risk Management of Collateralized Debt Obligations and Related Securities”
23 It is interesting to note that three Australian fund managers (Perpetual Investments, Schroders Australia and Antares Capital) have recently called for greater regulation of high-frequency trading, which they warn is resulting in market manipulation and insider trading at the expense of retail investors (see Australian Financial Review, 15 August 2012, pp. 1, 15, 33 and 55).
performance methods, (i) models that incorporate stochastic discount factors (SDFs), and finally (j)
models that consider whether false discovery rates (FDRs) exist in the examined group of funds –
allowing a researcher to estimate whether an economically significant number of skilled managers
exist and whether these managers can be identified ex ante. Many of these involve innovations in
research methodology that have been introduced since 2000.
Wermers discusses the following holdings-based approaches: (a) returns imputed from the individual
stocks held in a portfolio (which are gross of fees), (b) the well-known Daniel, Grinblatt, Titman and
Wermers (DGTW) characteristic-based holdings measure (where the characteristics are the market
index and size, book-to-market and momentum portfolios), and (c) conditional holdings-based
performance measures.
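The DGTW characteristic-selectivity idea can be sketched as follows: each holding's return is compared with the return of a portfolio of stocks matched on size, book-to-market and momentum, and the differences are aggregated using the fund's portfolio weights. The holdings, returns and benchmark assignments below are hypothetical.

```python
# Sketch of the DGTW characteristic-selectivity (CS) measure:
#   CS = sum_i w_i * (r_i - r_bench(i))
# where r_bench(i) is the return on the characteristic-matched portfolio
# for stock i. All figures below are hypothetical.

holdings = [
    # (portfolio weight, stock return, matched characteristic-portfolio return)
    (0.40, 0.060, 0.050),
    (0.35, 0.020, 0.030),
    (0.25, 0.045, 0.040),
]

cs = sum(w * (r - r_bench) for w, r, r_bench in holdings)
print(f"characteristic selectivity: {cs:.4%}")
```

A positive CS indicates that, holding characteristics fixed, the manager's stock picks beat their matched benchmarks.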
What have we learned from the returns-based mutual fund analysis? The recent literature finds that
at least some active managers demonstrate skill, in contrast to many of the older investigations.
Using daily data it has been shown that active managers have timing ability.
Bayesian methods combined with conditional models are useful in selecting skilled
managers who produce alpha of two to six per cent per year.
The SDF approach has produced some evidence of positive performance.
Macroeconomic information is useful in identifying business conditions under which active
managers are more likely to outperform, with alphas in the range of four to 12 per cent
being documented.
What have we learned from the holdings based mutual fund analysis?
Several papers have applied the DGTW methods in international settings, including the U.K.,
Japan, Australia, Finland and Thailand.
There have been some interesting recent papers that investigate other variables that are correlated
with mutual fund performance. Such covariates are as follows:
Managers taking larger bets (i.e., higher weights than in the benchmark) outperform.
Managers investing in companies where they have social connections outperform.
Managers from colleges with higher SAT scores outperform.
The quality of a manager’s MBA program and performance are positively correlated, but
having a PhD or a CFA qualification does not matter.
Experience matters in managing large funds, but experience leads to underperformance in
smaller funds.
High IQ managers run funds that have better performance.
The returns-based hedge fund empirical work shows:
The hedge fund FUM grew rapidly from 1994 to 2008, to a peak of $US 2.5 trillion in July
2008. Given the typical leverage of hedge funds, the assets controlled by them are many
times larger.
Research is hampered by self-reporting of performance and a likely upward bias in returns
for those funds reporting their data.
Although the average fund of funds delivers alpha in relatively short periods, a sub-set of
funds consistently delivers alpha. These funds receive far greater cash inflows and this tends
to attenuate performance.
Hedge funds have performance persistence for up to three years.
Funds with lower R-squareds against the seven-factor model have better future
performance.
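The R-squared screen in the last point can be sketched with any multifactor model: regress a fund's returns on the factors and record the regression R-squared, a low value indicating returns the factors do not explain. The factor matrix and fund returns below are simulated stand-ins, not the actual seven Fung-Hsieh hedge fund factors.

```python
# Sketch: a fund's R-squared against a seven-factor model, with simulated data.
import numpy as np

rng = np.random.default_rng(1)
T, K = 120, 7                                  # 120 months, 7 factors
factors = rng.normal(0.0, 0.02, (T, K))        # simulated factor returns
loadings = rng.normal(0.5, 0.2, K)             # simulated factor exposures
fund = factors @ loadings + rng.normal(0.0, 0.03, T)

X = np.column_stack([np.ones(T), factors])     # add an intercept
coef, *_ = np.linalg.lstsq(X, fund, rcond=None)
resid = fund - X @ coef
r_squared = 1.0 - resid.var() / fund.var()

print(f"R-squared against the factor model: {r_squared:.3f}")
```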
Institutional portfolio management research is hampered by lack of disclosure, because these are
generally privately managed funds for endowments, trusts, insurance companies and institutional
pension funds. Yet there has been some research, summarised as follows:
The evidence suggests that U.K. institutional pension funds do not have timing ability.
There is no strong evidence of performance persistence for U.S. funds.
Decentralisation of the management of U.K. funds results in improved performance.
Finally Wermers (2011) offers some suggestions for future research. Specifically, he suggests:
The macroeconomic predictive model has shown great promise in the hedge fund universe,
and it would be interesting to see future research on other asset classes, such as bond funds,
private equity funds, and emerging market funds.
One glaring omission from the literature is an estimate of the trading costs of individual
funds.
Securities lending is a sideline business that is profitable for many managed funds. It would
be of great interest to study the interface between the holdings of managed funds,
securities lending activity, and the effects of short selling on markets.
Further research advances should be made for non-U.S. asset managers.
The recently passed Dodd-Frank Act might provide further data for researchers, which could
benefit the public in many ways.
(D) Other Broad Research Areas
Reviewed papers
Paper 20
Renee Adams, Benjamin Hermalin and Michael Weisbach, 2010, “The Role of Boards of Directors in
Corporate Governance: A Conceptual Framework & Survey”
Abstract – quoted directly from Adams et al. (2010)
“This paper is a survey of the literature on boards of directors, with an emphasis on research done
subsequent to the Hermalin and Weisbach (2003) survey. The two questions most asked about
boards are what determines their makeup and what determines their actions? These questions are
fundamentally intertwined, which complicates the study of boards because makeup and actions are
jointly endogenous. A focus of this survey is how the literature, theoretical as well as empirical,
deals – or on occasions fails to deal – with this complication. We suggest that many studies of boards
can best be interpreted as joint statements about both the director-selection process and the effect
of board composition on board actions and firm performance.”
Reference Count – Overall ratio of A* and A journals to total references 0.673
Journal Count
Journal of Finance 15
Econometrica 4
American Economic Review 2
Journal of Financial Economics 26
Review of Financial Studies 9
Other A-ranked journals 59
Other references 56
Total 171
Discussion
It is difficult to observe the activities of boards, because much of what they do is conducted privately
and confidentially. However, when things go wrong, the actions (or inactions) of boards are
highlighted. Corporate scandals such as Enron, Worldcom and Parmalat have put the role of boards
in corporate governance at the centre of policy debate. This paper reviews what we have and have
not learned from research on corporate boards. The paper does not attempt to survey all board
research. Indeed, the focus is on more recent research; it excludes much of the event-study
evidence; it excludes the literature on the monitoring role of the board; it does not cover director
turnover, the boards of financial institutions and the boards of non-profits and central banks; and it
does not cover comparisons of international governance approaches.
A problem that pervades research on corporate boards is the issue of endogeneity, and this is a
central theme in Adams et al. (2010). Governance structures do not arise randomly; they arise
endogenously because firms choose them in response to the governance issues they face. Further,
all corporations have boards, and hence we cannot observe cases where a corporation chooses not
to have a board, or the reasons behind such a choice.
What do directors do?
The actions of individual members of a board cannot generally be observed24. Rather, the research
effort tends to classify boards in a particular manner (whether the board is comprised of mainly
internal or external directors – a proxy for board independence, whether the CEO is also the
Chairman, the size of the board etc.) and to then relate that classification to firm characteristics.
Boards have roles in the hiring, firing and assessment of management and in the setting of corporate
strategy. Considerable research effort has been devoted to these issues; both theoretical and
empirical. The evidence generally supports the following propositions:
Survey evidence suggests that members of boards frequently (75-80 per cent) regard
“setting strategy” as one of their responsibilities, while being a “watchdog for shareholders”
and “hiring/firing the CEO” were listed around 25 per cent of the time. The evidence also
suggests that boards have become less passive over time.
Successful CEOs are able to achieve boards that are less independent; but this evidence is
cross-sectional. Additional study of this within firms (i.e., in time series) is warranted.
The literature on the board’s role in project selection and CEO tenure is at this stage
theoretical. One avenue for future research might be to investigate major investment
projects and subsequent CEO tenure. It is known from the M&A literature that CEO
remuneration generally goes up after a merger, even if shareholder wealth is destroyed.
The incentives associated with the CEO selection process have been modelled, and U.S.
evidence suggests that increased pressure from institutional shareholders, greater threats of
litigation, new exchange requirements and changes to governmental regulations have led to
more independent and diligent boards.
Both anecdotal and statistical evidence shows that boards can and do dismiss poorly
performing CEOs.
The evidence on board size and firm performance is mixed, with different sub-sets displaying
inconsistent results. The earlier evidence suggested that large boards result in decreased
performance, but a positive relationship has been found for complex firms (highly diversified
firms and those with high debt). More research is warranted – and this endeavour would be
enabled by broad panel data becoming available.
The evidence from directors’ trades shows that both internal and external directors earn
abnormal profits, though internal directors have access to inside information that makes
their trades more profitable.
24 The identity of directors, and their age and qualifications, are disclosed. Also, multiple directorships can be traced. The attendance at meetings for individual directors is often available, but being at a meeting does not necessarily mean active participation in the agenda.
Although there is a growing empirical literature on the role of boards in setting strategy (and
mergers are a strategic area with potential) “it is safe to say that this is an area in which much
work remains to be done.” (p. 25)
How are boards structured?
Since 2000 there have been significant changes in the structure of U.S. boards as a result of
Sarbanes-Oxley. It would be of considerable help to future research endeavours if a
systematic, accurate and global record of such regulatory changes were developed,
particularly where such events are exogenous. Such events create “natural experiments”.
Adams et al. state (p. 27) “policy makers should be wary of calls for prohibiting the CEO from
serving as chairman”. In general, the empirical evidence on staggered boards (where some
portion of the board is elected annually – generally one-third) indicates that this
arrangement is not in the shareholders’ interests.
The role of specific types of directors (bankers, venture capitalists, politically-connected
directors, CEOs from other firms and employee representatives) has been investigated,
though this literature is small. The evidence suggests that CEO interlocks are associated with
poor performance. Political connections can help. VCs with high reputation are associated
with stronger boards. Whether having a banker on the board is advantageous is unresolved,
as too is the consequence of having employee representation.
How does the board work?
What happens inside the boardroom is necessarily private. Interviews and case studies are
one approach; though in general the research has relied on publicly available data that sheds
light on the inner workings.
The role and contributions of “busy directors” have been investigated. Busy directors are
those who hold multiple directorships. While the empirical evidence is somewhat muddy,
Adams et al. (2010) conclude that, overall, having busy directors is not in the firm’s
interests. They do, however, call for more research on this topic.
Boards perform many of their functions through committees, and data on committee
structure is frequently publicly available (though costly to collect). Papers have looked at the
functioning of the nominating, finance and investment, audit and remuneration
committees. However, Adams et al. (2010) conclude “we still lack a good understanding …
(of) committee structure … and the relation between committees and the full board.”
Progress would be enabled by a dataset containing all committees. It is worth noting that a
project to collect such data for all listed Australian firms is currently underway.25
What motivates directors?
Directors’ compensation has been modelled theoretically and empirically investigated in
several papers. The empirical work in this area is subject to “omitted variables” and
endogeneity problems and further innovation in research design is warranted.
25 This work is being done jointly between Sirca and the Accounting Discipline at UTS. The database will be available to researchers at all of Sirca’s member institutions.
Reputational issues have been investigated, again in both theoretical and empirical work.
Reputation is difficult to investigate because some CEOs favour the appointment of directors
with lower reputation (i.e., they don’t rock the boat) while having a strong reputation has
been shown to lead to more board positions. The corporate governance database being
developed at present in Australia provides the opportunity for further work on reputational
issues.
Adams et al. (2010) conclude that “understanding the role of boards is vital both for our
understanding of corporate behaviour and with respect to setting policy to regulate corporate
activities”. Yet study of boards is difficult, in particular because of vexing endogeneity issues and
because boards are only one aspect of governance. Open questions are numerous and frequently
quite fundamental. Adams et al. (2010) encourage researchers to look for natural experiments, in
particular changes in regulation as a potential way of dealing with joint-endogeneity issues. Clearly, a
database containing changes in global regulations would be of great value to researchers and
regulators alike. Several specific topics seen as profitable for future research are listed below:
(i) How are potential outside directors identified? How are inside directors identified?
(ii) What is the role of social networks in director selection? Are busy directors bad for
the firm?
(iii) How and why does director expertise matter? After all, management consultants
can be hired. Perhaps it is harder to ignore the opinions of directors?
(iv) Typically the board is modelled as a single decision maker, but what are the
dynamics within boards?
(v) Committees need further research.
(vi) Much of the literature is on Anglo-American firms, yet there are potential insights
and lessons to be learned from understanding how others deal with corporate
governance.
(vii) Finally, behavioural corporate finance identifies several human decision-making
biases. Do boards suffer from these?
Paper 21
Ananth Madhavan, 2000, “Market Microstructure: A Survey”.
Abstract – quoted directly from Madhavan (2000).
“Market microstructure is the area of finance that studies the process by which investors’ latent
demands are ultimately translated into prices and volumes. This paper reviews the theoretical,
empirical and experimental literature on market microstructure with a special focus on
informational issues relating to: (1) Price formation and price discovery, including both static issues
such as the determinants of trading costs and dynamic issues such as the process by which prices
come to impound information over time, (2) Market structure and design, including the relation
between price formation and trading protocols, (3) Information and disclosure, especially the topic
of market transparency, i.e., the ability of market participants to observe information about the
trading process, and (4) Interface of market microstructure with other areas of finance including
asset pricing, international finance, and corporate finance. I discuss the implications of recent
research for academics, investors, policy makers, and regulators.”
Reference Count – Overall ratio of A* and A journals to total references 0.763
Journal Count
Journal of Finance 59
Econometrica 3
American Economic Review -
Journal of Financial Economics 31
Review of Financial Studies 22
Other A-ranked journals 30
Other references 45
Total 190
Discussion
Two features common among the responsibilities of global stock market regulators are affirmative
obligations to ensure that markets are (i) fair, and (ii) efficient. A huge amount of work
has been done on the second requirement, while the first has been largely overlooked.26 Innovative
research that investigates the fairness of global equity and derivative markets (and their linkages) is
an area of considerable relevance and interest to CIFR. Our ability to engage in such research is
greatly facilitated through all 32 Sirca-member universities having access to the Thomson Reuters
Tick History (TRTH) database. This database provides millisecond-time-stamped tick data going back
to January 1996, and covers 45 million active OTC and exchange-traded instruments worldwide. The
database currently updates at a rate of 1 million messages per second and is around 3 Petabytes
uncompressed. Market microstructure research is thus not hampered by a lack of access to relevant data,
though handling such massive amounts of data clearly raises a different set of challenges.
Madhavan’s survey, as the abstract above indicates, concentrates on efficiency issues with major
sections being devoted to price formation and price discovery, market structure and design and the
links to price formation, market transparency (which is an aspect of fairness) and the way
microstructure research relates to other major areas in finance, namely international finance,
corporate finance and asset pricing.
Madhavan’s survey is somewhat dated, being written in 2000, a period prior to much of the change
in financial markets that has resulted from the rapid escalation of algorithmic trading. Some areas
that are highlighted as needing further investigation are:
26
The Australian-domiciled Capital Markets Cooperative Research Centre (CMCRC) has a series of projects that address fairness, including projects that investigate front-running, closing price manipulation and potential insider trading (identified via price run-ups prior to price sensitive announcements). Madhavan’s review does not include these projects, in some cases for the obvious reason that they are confidential.
What is the nature of price discovery in a multi-asset or multi-market setting?
How does structure affect return dynamics, in particular, the speed of price discovery?
There is broad agreement that transparency does matter in that it affects the
informativeness of the order flow and hence the process of price discovery. Greater
transparency is generally associated with more informative prices. However, complete
transparency is not always “beneficial” to the operation of the market. Indeed, many studies
demonstrate that too much transparency can actually reduce liquidity because traders are
unwilling to reveal their intentions to trade. Accordingly, further research is needed.
Should insider trading continue to be considered illegal?
Microstructure techniques can be used to decompose the bid-ask spread (or price impacts)
into transitory and information based components, and thus a researcher might be able to
make a more precise determination of the market perceptions regarding insider trading.
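One simple and widely cited example of such a decomposition is Roll’s (1984) estimator, which recovers the transitory (bid-ask bounce) component of the spread from the negative first-order serial covariance it induces in observed price changes. The simulation below is an illustrative sketch, not drawn from Madhavan’s survey itself; all parameter values (spread, volatility) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
half_spread = 0.05                          # hypothetical half-spread
m = np.cumsum(rng.normal(0.0, 0.01, n))     # efficient (information) price
q = rng.choice([-1, 1], size=n)             # trade direction (+1 buy, -1 sell)
p = m + half_spread * q                     # observed transaction prices
dp = np.diff(p)

# Bid-ask bounce induces negative first-order serial covariance in price
# changes; Roll (1984) inverts this to recover the transitory spread component
serial_cov = np.cov(dp[1:], dp[:-1])[0, 1]
est_spread = 2.0 * np.sqrt(-serial_cov)     # should be close to 0.10
```

Information-based price moves (the random walk in `m`) leave the serial covariance unaffected, which is what allows the transitory component to be isolated.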
A more recent and comprehensive survey of the empirical literature in market microstructure is
provided in Hasbrouck (2007). Indeed, this textbook sets out a series of research questions that the
author considers warrant further research. Specifically, Hasbrouck (2007) identifies these issues:
What are the optimal trading strategies for typical trading problems?
How, exactly, is information impounded into prices?
How do we enhance the information aggregation process?
How do we avoid market failures?
What sorts of trading mechanisms maximise efficiency?
What is the trade-off between fairness and efficiency?
How is market structure related to the valuation of securities?
What can market / trading data tell us about the informational environment of the firm?
What can market / trading data tell us about long-term risk?
Can “flash crashes” be predicted?
In short, much microstructure research remains to be done.
Paper 22
Kai Li and Nagpurnanand Prabhala, 2008, “Self-Selection Models in Corporate Finance”, Chapter 2 of
Eckbo (2008)
Abstract – quoted directly from Li and Prabhala (2008)
“Corporate finance decisions are not made at random, but are usually deliberate decisions by firms
or their managers to self-select into their preferred choices. This chapter reviews econometric
models of self-selection. The review is organised into two parts. The first part reviews econometric
models of self-selection, focusing on the key assumptions of different models and the types of
applications they may be best suited for. Part two reviews empirical applications of selection models
in the areas of corporate investment, financing, and financial intermediation. We find that self-selection is a rapidly growing area in corporate finance, partly reflecting its recognition as a
pervasive feature of corporate finance decisions, but more importantly, the increasing recognition of
selection models as unique tools for understanding, modelling, and testing the role of private
information in corporate finance.”
Reference Count – Overall ratio of A* and A journals to total references 0.821
Journal Count
Journal of Finance 17
Econometrica 5
American Economic Review 1
Journal of Financial Economics 14
Review of Financial Studies 6
Other A-ranked journals 26
Other references 15
Total 84
Discussion
Part one of Li and Prabhala (2008) discusses the econometric models of self-selection as applied in
corporate finance research. Corporate financial decisions or outcomes can often be observed by a
researcher, but the private information used by managers in making these choices cannot. Models of
self-selection represent one way of incorporating and controlling for unobservable private
information. Almost all corporate finance research deals with subsets of a population and analyses
firms that make particular choices. In such circumstances the estimated relationship for the subset is
no longer a consistent estimator of the population sensitivity. Several models exist, each having its
own substantive assumptions. The paper, first, presents an analysis of the econometrics of the
baseline Heckman selection model, and then reviews two extensions of the baseline model, namely
switching regressions and structural selection models. The details of this discussion are beyond the
scope of this report; however the fundamental point - that appropriate econometric corrections are
required when researchers study a subset of a population of firms that make a particular choice or
choices – remains an essential element of appropriate research design. An alternative approach to
dealing with self-selection is to use matching models, and these are becoming increasingly common
in applied work. Matching models are however based on fundamentally different assumptions; in
particular it is assumed that unobserved private information is irrelevant. In contrast, unobserved
private information is the raison d'être of self-selection models. Other approaches, namely the use
of fixed effects in panel data and Bayesian self-selection models are also canvassed.
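To make the baseline selection problem concrete, the following simulation is a minimal sketch (not taken from Li and Prabhala) of the Heckman two-step logic: OLS on a self-selected subsample is inconsistent when the outcome error is correlated with the selection error, and adding the inverse Mills ratio of the selection index restores consistency. All numbers are hypothetical, and the selection index is treated as known for simplicity; in practice the first stage would be an estimated probit.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 20_000
z = rng.normal(size=n)                               # selection instrument
u, e = rng.multivariate_normal(
    [0, 0], [[1.0, 0.7], [0.7, 1.0]], size=n).T      # correlated errors
selected = (0.5 * z + u) > 0                         # firms self-select
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + e                                # outcome equation

# Naive OLS on the selected subsample: the intercept absorbs E[e | selected]
X = np.column_stack([np.ones(selected.sum()), x[selected]])
naive = np.linalg.lstsq(X, y[selected], rcond=None)[0]

# Heckman correction: append the inverse Mills ratio of the selection index
idx = 0.5 * z[selected]
mills = norm.pdf(idx) / norm.cdf(idx)
Xh = np.column_stack([X, mills])
heckman = np.linalg.lstsq(Xh, y[selected], rcond=None)[0]
# heckman[0] recovers the true intercept (1.0); naive[0] is biased upward
```

The coefficient on the Mills ratio estimates the covariance between the selection and outcome errors (0.7 here), which is exactly the “unobserved private information” the matching approaches discussed above assume away.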
The second part of the paper deals with empirical applications of self-selection models in corporate
finance. Application to event studies (section 7 of the paper), the pricing of public debt offerings
(section 8), other investment banking applications (section 9), the diversification discount (section
10) and other applications (section 11) are discussed. The key points that emerge are:
Event Studies
Self-selection entered the event study literature relatively recently.
Self-selection models can be used to estimate the separate effects of two announcements
made by a firm on the same day. In particular stock splits are frequently announced jointly
with dividends. This procedure has interesting research implications in potentially separating
the earnings announcement effect from simultaneous announcements (which might include
capital expenditure, compensation and security issuance announcements).
When gains to bidders are analysed with self-selection considered (i.e., using a non-linear
model) and with self-selection ignored (i.e., using OLS), self-selection corrections cause the
empirical findings to differ. Several merger applications are canvassed, including pre-
commitments to not pay greenmail, non-linear relationships between managerial ownership
and merger announcement effects and antitrust challenges to takeovers.
Pricing of public debt offerings
The choice of an underwriter has been extensively analysed using standard Heckman self-
selection models.
The structure of the underwriting syndicate has been modelled using switching regression
techniques.
Studies of underwriter reputation effects on the spreads and yields of debt instruments have
also applied switching regression methods.
The debt covenants selected are likely to differ between underwriters. The decision to
include covenants influences and is influenced by the expected pricing benefits of doing so,
thus giving rise to a simultaneity that can be modelled using the structural self-selection
model.
Debt offerings involve many choices, including maturity, collateral, callability and the timing
of the issue. All involve self-selection and the thorny issue of which particular self-selection
issue should be controlled for has no easy answer. Additional research is warranted.
Other investment banking applications include:
Underwriter compensation and the use of warrants in IPOs, where the empirical evidence
suggests that the use of warrants reduces underpricing costs.
Underwriter selection and analyst coverage.
Diversification discount
Self-selection models have been applied to explain the so-called “diversification discount”
and in so doing have shown that this discount is, after correction, a premium.
Other applications of selection models
Other issues examined include accounting choices for research and development
expenditures, bankruptcy costs, and family ownership and firm value.
Other applications of matching methods
Issues investigated include the use of bank debt versus bonds and matching as a technique
for determining long-run share market performance.
By way of summary, self-selection is a growth area in empirical corporate finance, spurred by an
increasing recognition of the potential biases in empirical investigations that do not adequately
model the issue. A range of models has been developed, each with specific settings where it is
particularly appropriate. It is becoming increasingly unlikely that research will be published in
quality journals where self-selection issues are not specifically recognised and addressed in the
empirical design. Research proposals that come to CIFR should include a discussion of the way the
researcher intends to deal with self-selection issues.
Paper 23
Denis Gromb and Dimitri Vayanos, 2010, “Limits of Arbitrage”
Abstract – quoted directly from Gromb and Vayanos (2010)
“We survey theoretical developments in the literature on the limits of arbitrage. This literature
investigates how costs faced by arbitrageurs can prevent them from eliminating mispricings and
providing liquidity to other investors. Research in this area is currently evolving into a broader
agenda, emphasizing the role of financial institutions and agency frictions for asset prices. This
research has the potential to explain so-called market anomalies and inform welfare and policy
debates about asset markets. We begin with examples of demand shocks that generate mispricings,
arguing that they can stem from behavioral or from institutional considerations. We next survey, and
nest within a simple model, the following costs faced by arbitrageurs: (a) risk, both fundamental and
nonfundamental; (b) short-selling costs; (c) leverage and margin constraints; and (d) constraints on
equity capital. We finally discuss implications for welfare and policy and suggest directions for future
research.”
Reference Count – Overall ratio of A* and A journals to total references 0.596
Journal Count
Journal of Finance 28
Econometrica 3
American Economic Review 6
Journal of Financial Economics 16
Review of Financial Studies 9
Other A-ranked journals 22
Other references 57
Total 141
Discussion
Several anomalies appear in the empirical literature, including momentum, the value effect, the
volatility of asset returns relative to the volatility of dividend payments, the post-earnings
announcement drift and long-run reversals in performance. Empirical tests generally assume a
standard (CAPM or consumption-based CAPM) model based on participants being able to transact in
markets without cost. Understanding why anomalies arise and persist requires a careful study of the
process of arbitrage - in particular what are the constraints that are faced by arbitrageurs, i.e., what
are the limits to arbitrage. The review focuses on theoretical developments on the limits to
arbitrage, and in particular on (a) risk, both fundamental and non-fundamental, (b) costs of short
selling, (c) leverage and margin constraints, and (d) constraints on equity capital. In addition, four
interesting specific examples of apparent arbitrage opportunities that existed in markets for
extended periods of time are discussed, namely (i) the Palm spin-off from 3Com, (ii) effects arising
from a stock’s inclusion in, or deletion from, a prominent market index, (iii) a V-shaped return pattern in
stocks that are sold by mutual funds that experience large cash outflows, and (iv) the effects of U.K.
pension fund reforms on the term structure of interest rates. The paper sets out a research agenda
for additional theoretical developments which (p. 270) have “the potential to inform debate on
systemic risk, macro-prudential regulation, and lending of last resort, topics that are highly relevant
in the context of financial crises”.
Paper 24
Asli Demirgüç-Kunt and Ross Levine, 2009, “Finance and Inequality: Theory and Evidence”
Abstract – quoted directly from Demirgüç-Kunt and Levine (2009)
“In this paper, we critically review the literature on finance and inequality, highlighting substantive
gaps in the literature. Finance plays a crucial role in the preponderance of theories of persistent
inequality. Unsurprisingly, therefore, economic theory provides a rich set of predictions concerning
both the impact of finance on inequality and about the relevant mechanisms. While subject to ample
qualifications, the bulk of empirical research suggests that improvements in financial contracts,
markets, and intermediaries expand economic opportunities and reduce inequality. Yet, there is a
shortage of theoretical and empirical research on the potentially enormous impact of formal
financial sector policies, such as bank regulations and securities law, on persistent inequality.
Furthermore, we lack a conceptual framework for considering the joint and endogenous evolution of
finance, inequality, and economic growth.”
Reference Count – Overall ratio of A* and A journals to total references 0.648
Abstract – quoted directly from Markowitz (2010)
“This essay summarizes my views on (a) the foundations of portfolio theory and its applications to
current issues, such as the choice of criteria for practical risk-return analysis, and whether some
form of risk-return analysis should be used in fact; (b) hypotheses about actual financial behavior, as
opposed to idealized rational behavior, including two proofs of the fact that expected-utility
maximisers would never prefer a multiple-prize lottery to all single-prize lotteries, as asserted in one
of my 1952 papers; and (c) a simple proof of the theorem (which was initially greeted with some
scepticism, especially by referees) that investors in capital asset pricing models do not get paid for
bearing risk.”
Reference Count – Overall ratio of A* and A journals to total references 0.771
Journal Count
Journal of Finance 5
Econometrica 4
American Economic Review 2
Journal of Financial Economics -
Review of Financial Studies 1
Other A-ranked journals 25
Other references 11
Total 48
Discussion
Most financial economists consider that modern finance started with Markowitz’s development of
portfolio theory.
Markowitz introduces this paper by dispelling two myths in relation to his development of portfolio
theory, namely that (i) he did not assume that return distributions are Gaussian, and (ii) he did not
assume that the investor’s utility function is quadratic.
The paper then summarises the development of portfolio theory as presented in his 1959 book, and
contrasts this development to his ideas in his 1952 paper. Much of this discussion concerns (a) the
range of portfolio returns in which an investor’s utility function (and other feasible alternatives) can
be approximated sufficiently well by a quadratic, and still allow mean-variance choice that
“almost” maximizes expected utility, and (b) alternative measures of risk27. An important point is
introduced, namely that the mean-variance approximation will break down if too much leverage is
used, which is the case of the CAPM without non-negativity constraints. This point is further
developed below.
27 The risk measures discussed are semi-variance, expected loss, expected absolute deviation, probability of loss, maximum loss, VaR and CVaR. All of these risk measures, with the exception of VaR and CVaR, were discussed in Markowitz’s 1959 book. He states in relation to VaR and CVaR (2010, p. 10) “I still accept the expected-utility maximum. I therefore count the discontinuous nature of VaR and CVaR as a “black mark” against these proposed measures of risk”.
The paper then discusses the utility of wealth, and in particular his views, versus those proposed in
Friedman and Savage (1948), to explain why investors buy both lottery tickets and insurance. His
utility of wealth diagram (Figure 2, p. 14, Markowitz (2010)) differs from
that proposed in Kahneman and Tversky’s prospect theory in the following
manner “the Kahneman and Tversky utility function is convex to the left of the origin and concave to
the right, whereas the Markowitz (1952b) utility function is concave to the left and convex to the
right”. Markowitz then provides two proofs, which were not given in his original 1952 paper, of why
an expected-utility-maximizing agent would never prefer a multiple-prize lottery to all single-prize
lotteries.
The final section of Markowitz (2010) deals with his gripes about aspects of empirical work on the
CAPM. He argues that if the sum of weights invested in each security in a portfolio equals unity “is
the only constraint on the choice of (a) portfolio, then negative weights are not an accurate - in fact,
they are an absurd - representation of real-world short positions. However, if … (this) is not assumed
as the only constraint, nor is it assumed that all investors can borrow all they want at the risk-free
rate, then the market (index) is typically not an efficient portfolio. Also, a linear relationship is not
typically found between expected returns and betas. Thus, the many hundreds of hours spent by our
colleagues trying to establish whether a linear relationship exists between expected return and beta,
in effect, seeks to determine empirically whether, e.g., the reader can borrow the same amount as
can the U.S. Treasury and for the same interest rate.” Only someone of Markowitz’s standing can
command such pithy and elegant statements.
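Markowitz’s complaint about negative weights can be seen in a simple two-asset example: with the budget constraint as the only constraint, the minimum-variance portfolio is proportional to the inverse covariance matrix applied to a vector of ones, and it implies a short position whenever the covariance exceeds the smaller of the two variances. The numbers below are purely hypothetical.

```python
import numpy as np

# Two hypothetical assets: a high-vol asset highly correlated with a low-vol one
sigma = np.array([0.30, 0.15])
corr = 0.8
cov = np.array([[sigma[0]**2, corr * sigma[0] * sigma[1]],
                [corr * sigma[0] * sigma[1], sigma[1]**2]])

# Unconstrained minimum-variance weights: w proportional to inv(cov) @ 1
w = np.linalg.solve(cov, np.ones(2))
w /= w.sum()          # exactly [-1/3, 4/3]: a large short position in asset 1
```

Imposing a non-negativity constraint instead puts the whole portfolio in the low-volatility asset, which is Markowitz’s point: real-world short positions are not well represented by the unconstrained budget condition alone.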
Paper 27
Lubos Pastor and Pietro Veronesi, 2009, “Learning in Financial Markets”
Abstract – quoted directly from Pastor and Veronesi (2009)
“We survey the recent literature on learning in financial markets. Our main theme is that many
financial market phenomena that appear puzzling at first sight are easier to understand once we
recognize that parameters in financial models are uncertain and subject to learning. We discuss
phenomena related to the volatility and predictability of asset returns, stock price bubbles, portfolio
choice, mutual fund flows, trading volume, and firm profitability, among others.”
Reference Count – Overall ratio of A* and A journals to total references 0.810
Journal Count
Journal of Finance 28
Econometrica 2
American Economic Review 3
Journal of Financial Economics 17
Review of Financial Studies 26
Other A-ranked journals 26
Other references 24
Total 126
Discussion
Financial markets produce vast quantities of data28, and analysis of these data facilitates learning by
participants in these markets, i.e., it allows uncertainty about key empirical relationships or
parameters to be reduced. This article reviews recent work in learning in finance. Many facts that
appear baffling at first sight become less puzzling when it is recalled that parameters are uncertain
and participants are subject to learning. The paper addresses the following interesting issues, many
of which are of fundamental importance to financial economists (p. 362)
“Why are stock returns so volatile? Why are they predictable? Why do investors trade so much? Why do stocks of young firms exhibit high valuations and high volatility? Why are technological revolutions accompanied by stock price “bubbles”? Why do fund flows react strongly to (good) fund performance? Why do firms become less profitable after they go public? We show that learning helps us answer all of these questions, as well as many others.”
The cornerstone of learning is Bayes’ rule, which describes how rational agents update their beliefs
when they receive new information. Pastor and Veronesi (2009) then apply the lessons from
Bayesian updating in the presence of learning (allowing the growth rate (g) of dividends to be
unknown) to the well-known Gordon dividend valuation formula29. Their model predicts that the
28
As mentioned above in the review of Madhavan (2000) the TRTH database held by Sirca Limited updates at about 1 million transactions per second. While there are vast quantities of data on financial markets it is also true that there is a large amount of randomness pervading transactions in financial markets. 29
The Gordon model shows that P = D / (r-g), where P is the stock price, D is the dividend amount, r is the discount rate and g is the dividend growth rate.
market-to-book (M/B) ratio of a firm will decline over a firm’s lifetime, a prediction that is borne out
in U.S. data30.
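The mechanism can be illustrated with the Gordon formula itself: because P = D / (r − g) is convex in g, averaging prices over an uncertain g yields a higher valuation than plugging in the mean of g, so uncertainty about growth raises prices (and M/B) early in a firm’s life, before learning resolves it. The numbers below are hypothetical.

```python
# Gordon model P = D / (r - g) with the growth rate g uncertain
D, r = 1.0, 0.08
g_scenarios = [0.01, 0.05]                  # equally likely beliefs about g
price_at_mean_g = D / (r - 0.03)            # plug in E[g] = 3%
expected_price = sum(D / (r - g) for g in g_scenarios) / len(g_scenarios)
# Convexity of 1/(r - g) implies expected_price > price_at_mean_g
```

As the posterior over g concentrates with age, this convexity premium shrinks, consistent with the declining M/B ratios reported in footnote 30.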
Stock Price Bubbles
This model is then extended to match observed stock valuations at the peak of the NASDAQ
“bubble”. They argue that stocks were not necessarily over-valued because there was huge
uncertainty about the value of g. While many attribute the existence of bubbles as being due to
market irrationality, Pastor and Veronesi (2009) argue that this is also consistent with a rational
general equilibrium model of learning. The model also predicts that the market beta of innovative
firms – a measure of their systematic risk – should increase during technological revolutions. Indeed,
the beta of the technology-laden NASDAQ index doubled between 1997 and 2002. Arguments that
bubbles are due to irrationality do not make such predictions, suggesting that rational learning is
useful in explaining bubble-like patterns in stock prices.
Stock Return Volatility
Pastor and Veronesi (2009) show that uncertainty about g (assuming the discount rate r is constant)
can lead to return volatility that is much higher than the volatility of dividend growth.31 It follows
that the volatility of younger firms’ price returns should be higher than that of more mature firms, and
indeed this is the case empirically. Stock return volatility is also known to vary dramatically over
time, and to be persistent. Learning helps explain this variation, particularly in the case
where g is subject to regime shifts.
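The amplification mechanism can be sketched with a minimal simulation: dividends grow at an unknown rate g, investors update a normal prior on g each period, and the price is the Gordon value at the posterior mean. This is an illustrative sketch under assumed parameter values, not Pastor and Veronesi’s calibration; because the price-dividend ratio moves with beliefs, returns end up more volatile than dividend growth.

```python
import numpy as np

rng = np.random.default_rng(2)
r, sigma_d, true_g = 0.08, 0.05, 0.03     # hypothetical parameters
m, v = 0.03, 0.02**2                      # prior mean/variance on unknown g
D = 1.0
prices, growths = [], []
for _ in range(40):
    growth = true_g + sigma_d * rng.standard_normal()
    D *= 1 + growth
    precision = 1 / v + 1 / sigma_d**2    # normal-normal Bayesian update
    m = (m / v + growth / sigma_d**2) / precision
    v = 1 / precision
    prices.append(D / (r - m))            # Gordon price at the posterior mean
    growths.append(growth)

returns = np.diff(np.log(prices))
# A positive growth surprise raises both the dividend and the posterior mean
# of g, so the price-dividend ratio amplifies the shock
```

Early in the sample, when the posterior is diffuse, each surprise moves beliefs (and hence the price-dividend ratio) a lot; as v shrinks, return volatility declines toward dividend-growth volatility, which is the persistence-and-decay pattern learning explains.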
Return Predictability
Stock returns are somewhat predictable, for example when the average P/D is low, future stock
returns tend to be high. Learning about g can explain this empirical regularity. In a similar vein,
learning can help explain the value effect (i.e., that stocks with low M/B ratios outperform growth
stocks, i.e., those with high M/B), why analysts’ forecasts are negatively related to returns, and
credit spreads on corporate bonds.
The Equity Premium
The equity premium puzzle (i.e., that stock returns are much higher than bond returns) is also shown
to depend on g, but here uncertainty about g can lead to both increases and decreases in the equity
premium, depending on the representative agent’s elasticity of inter-temporal substitution (EIS)32.
Portfolio Choice
30
Specifically, 1-year old firms have an average M/B ratio of 2.25, and this falls to 1.25 for 10-year old firms. The model also predicts that the effect of age should be stronger for younger firms and non-dividend paying firms, and that M/B should decrease with expected return and increase with both the level and the volatility of profitability. These predictions are also confirmed empirically. 31
They show in an example that if the volatility of dividend growth is 5%, the volatility of returns is approximately 20% when there is uncertainty about g, i.e., when g has a mean of 3% and a standard deviation of 2%. For the post-war period the volatility of market returns was about 17%, while the volatility of dividend growth was about 5%. 32
An EIS less than one causes the equity premium to fall, while an EIS greater than one causes the equity premium to increase.
Investors appear to invest too little in stocks. Investors also display a home bias, and anomalies exist
with respect to size and value, but investors are uncertain as to whether these anomalies are
genuine. Recent contributions to the learning literature emphasise informative priors motivated by
economic theory. Others study the portfolio choice of ambiguity-averse investors.
Investor Behaviour - Mutual fund flows
Investors’ allocation of capital to mutual funds seems puzzling, because they chase past
performance, even though there is little evidence of performance persistence, yet they do not
withdraw to the same extent from funds that underperform.33 It has been shown that these flow
patterns are consistent with rational learning.
Investor trading behaviour
Individuals trade too frequently, even though they lose money from trading. Their trade intensity
depends on their past performance. These attributes have been shown to be consistent with rational
learning. As long as the value of what they learn from trading exceeds the short-term losses they
incur from trading, it is rational to trade. Behavioural explanations such as overconfidence and risk-
seeking seem to not fit the data as well as learning explanations. Learning combined with
information asymmetry can shed light on why trading volume is correlated with volatility.
Entrepreneurial Finance
Firm profitability increases before an IPO but decreases thereafter. Frequently advanced
explanations include irrationality and asymmetric information. Again, it has been shown that such
behaviour is consistent with a rational symmetric-information model of learning. Another empirical
market regularity is that the performance of VC funds (with the same general partner, or GP) is
persistent. Why do successful GPs not raise their fees or fund size to capture this advantage? The
learning models predict that there should be persistence in the composition of limited partners (LP)
and this is borne out in empirical tests.
Future Issues
1. Much work on the role of learning in finance still lies ahead. In most existing learning models, agents learn by observing cash flows or asset returns, but they could also learn from the prices of derivative securities.
2. Another promising direction is to separate systematic and idiosyncratic uncertainty.
3. Future work can also analyse strategic information generation. More generally, we need
more dynamic learning models in corporate finance.
4. New learning models should be held to high standards. For each model, one should identify testable predictions that are unique to learning so the model can be empirically distinguished from alternatives. It is also important to assess the magnitude of the learning-induced effects, either by calibration or by structural estimation.
33
Note however the arguments in Musto (2011) discussed above.
Conclusion
This report provides a high level literature review of finance research in research areas of key
interest to CIFR, broadly Systemic Risk, Financial Market Developments and Market and Regulatory
Performance. It thus answers the question “What do we know from prior studies about these key
research areas?” In conducting this analysis a further objective is to identify gaps in knowledge, and
thus provide a list of potential projects for possible CIFR funding. It thus answers a second question
“What do we not know from prior studies about these key research areas?” The report identifies
many opportunities to develop our understanding of research questions that are central to CIFR’s
mission.
Two particular challenges in achieving these outcomes are identified.
First, academics invest heavily in developing their research skills to gain expertise and reputation in
particular areas. These skills are not always easily transferred to other research topics; consequently
many academics maintain whole-of-career associations with particular topics where they have
sound knowledge of the literature and research methods. The CIFR key research issues are in areas
that are not necessarily those in which academics have developed their expertise. Creating the
incentives for researchers to take on new areas of investigation is a vexing issue. CIFR’s research
agenda invites researchers to break out of their comfort zones and take on new (and thus risky)
projects, though some doubt exists as to how many are prepared to take up these opportunities.
Second, a recurring theme in this report is that the availability of reliable datasets is a prerequisite for
empirical research. Accordingly, to the extent that CIFR’s key research interests require data that are
currently not available, a challenge for CIFR is to invest (or co-invest) in providing such data. Research proposals that involve the creation of datasets that both address key CIFR research questions and facilitate subsequent investigations therefore warrant special consideration.
References
Adams, R. B., Hermalin, B. E. and M. S. Weisbach, 2010, The Role of Boards of Directors in Corporate
Governance: A Conceptual Framework & Survey, Journal of Economic Literature, 48(1): 58–
107.
Allen, F., Babus, A. and E. Carletti, 2009, Financial Crises: Theory and Evidence, Annual Review of
Financial Economics, 1: 97–116.
Ben-Horim, M. and W. Silber, 1977, Financial Innovation: A Linear Programming Approach, Journal of Banking and Finance, 1: 277–296.
Bijlsma, M., Klomp, J. and S. Duineveld, 2010, Systemic risk in the financial sector: A review and
synthesis, Policy Paper 210, CPB Netherlands Bureau for Economic Policy Analysis.
Billio, M., Getmansky, M., Lo, A. W. and L. Pelizzon, 2010, Measuring Systemic Risk in the Finance
and Insurance Sectors, MIT Sloan School of Management, Working Paper 4774-10.
Bluhm, C. and C. Wagner, 2011, Valuation and Risk Management of Collateralized Debt Obligations
and Related Securities, Annual Review of Financial Economics, 3: 193–222.
Bodie, Z., Detemple, J. and M. Rindisbacher, 2009, Life-Cycle Finance and the Design of Pension Plans, Annual Review of Financial Economics, 1: 249–286.
Brealey, R. A., Myers, S. C. and F. Allen, 2011, Principles of Corporate Finance, McGraw-Hill
Companies Inc., New York, N. Y.
Carr, P. and R. Lee, 2009, Volatility Derivatives, Annual Review of Financial Economics, 1: 319–339.
Demirgüç-Kunt, A. and R. Levine, 2009, Finance and Inequality: Theory and Evidence, Annual Review
of Financial Economics, 1: 287–318.
Ferson, W. E., 2010, Investment Performance Evaluation, Annual Review of Financial Economics, 2:
207–234.
Flannery, M. J., Glasserman, P., Mordecai, D. K. A. and C. Rossi, 2012, Forging Best Practices in Risk
Management, Office of Financial Research, Working paper #0002.
Gorton, G. and A. Metrick, 2011, Securitization, Handbook of the Economics of Finance, edited by G.
Constantinides, M. Harris, and R. Stulz, available at http://ssrn.com/abstract=1909887.
Gray, D. F., 2009, Modeling Financial Crises and Sovereign Risks, Annual Review of Financial
Economics, 1: 117–144.
Gromb, D. and D. Vayanos, 2010, Limits of Arbitrage, Annual Review of Financial Economics, 2: 251–
275.
Hasbrouck, J., 2007, Empirical Market Microstructure: The Institutions, Economics, and Econometrics