Page 1: Value at risk

Value at Risk

By A V Vedpuriswar

February 8, 2009

Page 2: Value at risk


VAR summarizes the worst loss over a target horizon that will not be exceeded at a given level of confidence.

For example, “under normal market conditions, the most the portfolio can lose over a month is about $3.6 billion at the 99% confidence level.”

Page 3: Value at risk


The main idea behind VAR is to consider the total portfolio risk at the highest level of the institution.

Initially applied to market risk, it is now used to measure credit risk, operational risk and enterprise-wide risk.

Many banks can now use their own VAR models as the basis for their required capital for market risk.

Page 4: Value at risk


VAR can be calculated using two broad approaches:

Non-parametric method: the most general method, which makes no assumption about the shape of the distribution of returns.

Parametric method: VAR computation becomes much easier if a distribution, such as the normal, is assumed.

Page 5: Value at risk


Illustration

- Average revenue = $5.1 million per day

- Total no. of observations = 254

- Std dev = $9.2 million

- Confidence level = 95%

- No. of observations below –$10 million = 11

- No. of observations below –$9 million = 15

Page 6: Value at risk

Find the point such that the number of observations to the left = (254)(0.05) = 12.7

(12.7 – 11) / (15 – 11) = 1.7 / 4 ≈ 0.4

So the required point = –(10 – 0.4) = –$9.6 million

VAR = E(W) – (–9.6) = 5.1 – (–9.6) = $14.7 million

If we assume a normal distribution,

Z at the 95% confidence level, one-tailed, = 1.645

VAR = (1.645)(9.2) = $15.2 million
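
The two calculations above can be reproduced with a short Python sketch; the figures and the interpolation step are taken directly from the illustration, and the variable names are ours:

    # Non-parametric VAR: interpolate the 5% cut-off between the observed points.
    n_obs = 254
    tail_count = n_obs * 0.05                  # 12.7 observations in the left tail
    fraction = (tail_count - 11) / (15 - 11)   # 11 obs below -$10m, 15 below -$9m
    cutoff = -(10 - fraction)                  # about -$9.6 million
    mean_revenue = 5.1
    var_nonparametric = mean_revenue - cutoff  # about $14.7 million

    # Parametric VAR: assume normally distributed daily revenues.
    z_95 = 1.645                               # one-tailed 95% quantile
    std_dev = 9.2
    var_parametric = z_95 * std_dev            # about $15.2 million

    print(var_nonparametric, var_parametric)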

Page 7: Value at risk


VAR can be used as a company-wide yardstick to compare risks across different markets.

VAR can also be used to understand whether risk has increased over time.

VAR can be used to drill down into risk reports to understand whether the higher risk is due to increased volatility or bigger bets.

VAR as a benchmark measure

Page 8: Value at risk


VAR can also give a broad idea of the worst loss an institution can incur.

The choice of time horizon must correspond to the time required for corrective action as losses start to develop.

Corrective action may include reducing the risk profile of the institution or raising new capital.

Banks may use daily VAR because of the liquidity and rapid turnover in their portfolios.

In contrast, pension funds generally invest in less liquid portfolios and adjust their risk exposures only slowly.

So a one-month horizon makes more sense.

VAR as a potential loss measure

Page 9: Value at risk


The VAR measure should adequately capture all the risks facing the institution.

So the risk measure must encompass market risk, credit risk, operational risk and other risks.

The higher the degree of risk aversion of the company, the higher the confidence level chosen.

If the bank determines its risk profile by targeting a particular credit rating, the expected default rate can be converted directly into a confidence level. For example, targeting a rating associated with an annual default rate of 0.03% implies a 99.97% confidence level.

Higher credit ratings should therefore lead to a higher confidence level.

VAR as equity capital

Page 10: Value at risk


In linear models, daily VAR is adjusted to other horizons by scaling by a square-root-of-time factor.

This adjustment assumes that the position is fixed and that daily returns are independent and identically distributed.

This adjustment is not appropriate for options, because the option delta changes dynamically over time.

The delta-gamma method provides an analytical second-order correction to the delta-normal VAR.

Delta Gamma Method
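
A minimal Python sketch of the two ideas on this slide; the function names and the numerical inputs are illustrative assumptions, not figures from the text:

    import math

    def scale_var(daily_var, horizon_days):
        # Square-root-of-time scaling; valid only for a fixed position with
        # i.i.d. daily returns, as noted above.
        return daily_var * math.sqrt(horizon_days)

    def delta_gamma_loss(delta, gamma, spot, daily_vol, z=2.33):
        # Second-order (delta-gamma) Taylor approximation of an option
        # position's loss for an adverse move of z daily standard deviations;
        # a delta-only estimate would drop the gamma term.
        ds = -z * daily_vol * spot
        dv = delta * ds + 0.5 * gamma * ds ** 2
        return -dv

    print(scale_var(15.2, 10))                       # 10-day VAR from a daily VAR
    print(delta_gamma_loss(0.6, 0.05, 100.0, 0.02))  # compare with delta alone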

Page 11: Value at risk


The historical simulation method consists of going back in time and applying current weights to a time series of historical asset returns.

This method makes no specific assumption about return distribution, other than relying on historical data.

This is an improvement over the normal distribution because historical data typically contain fat tails.

The main drawback of this method is its reliance on a short historical moving window to infer movements in market prices.

Historical simulation method
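
A minimal sketch of the method in Python; the return series and weights are made-up inputs, and the VAR is read straight off the empirical loss distribution:

    import numpy as np

    def historical_var(weights, historical_returns, confidence=0.99):
        # Apply today's portfolio weights to each day's vector of historical
        # asset returns, then read the VAR off the empirical distribution.
        portfolio_returns = historical_returns @ weights
        return -np.percentile(portfolio_returns, 100 * (1 - confidence))

    rng = np.random.default_rng(0)
    returns = rng.standard_t(df=5, size=(500, 3)) * 0.01   # 500 days, 3 assets
    weights = np.array([0.5, 0.3, 0.2])
    print(historical_var(weights, returns))                # VAR as a fraction of value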

Page 12: Value at risk


The sampling variation of historical simulation VAR is greater than for a parametric method.

Longer sample paths are required to obtain meaningful quantities.

The dilemma is that this may involve observations that are no longer relevant.

Banks use periods between 250 and 750 days.

This is taken as a reasonable trade-off between precision and non-stationarity.

Many institutions are now using historical simulation over a window of 1-4 years, duly supplemented by stress tests.

Page 13: Value at risk


The Monte Carlo simulation method is similar to historical simulation, except that the movements in risk factors are generated by drawing from some pre-specified distribution.

The risk manager samples pseudo-random numbers from this distribution and then generates pseudo-dollar returns as before.

Finally, the returns are sorted to produce the desired VAR.

This method uses computer simulations to generate random price paths.

Monte Carlo Simulation Method
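
A single-asset sketch of the procedure described above; the normal distribution, drift, volatility and position size are all assumptions made for the example:

    import numpy as np

    def monte_carlo_var(position_value, mu, sigma, horizon_days,
                        confidence=0.99, n_draws=100_000):
        # Draw pseudo-random returns from the pre-specified distribution,
        # turn them into dollar P&L, and read off the desired quantile.
        rng = np.random.default_rng(42)
        dt = horizon_days / 252.0
        returns = rng.normal(mu * dt, sigma * np.sqrt(dt), n_draws)
        pnl = position_value * returns
        return -np.quantile(pnl, 1 - confidence)

    print(monte_carlo_var(100e6, mu=0.05, sigma=0.25, horizon_days=10))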

Page 14: Value at risk


Monte Carlo methods are by far the most powerful approach to computing VAR.

They can account for a wide range of risks, including price risk, volatility risk, fat tails, extreme scenarios and complex interactions.

Non linear exposures and complex pricing patterns can also be handled.

Monte Carlo analysis can deal with the time decay of options, daily settlements and associated cash flows, and the effect of pre-specified trading or hedging strategies.

Page 15: Value at risk


The Monte Carlo approach requires users to make assumptions about the stochastic process and to understand the sensitivity of the results to these assumptions.

Different random numbers will lead to different results.

A large number of iterations may be needed to converge to a stable VAR measure.

When all the risk factors have a normal distribution and exposures are linear, the method should converge to the VAR produced by the delta-normal method.

Page 16: Value at risk


The Monte Carlo approach is computationally quite demanding.

It requires marking to market the whole portfolio over a large number of realisations of underlying random variables.

To speed up the process, methods have been devised to break the link between the number of Monte Carlo draws and the number of times the portfolio is repriced.

In the grid Monte Carlo approach, the portfolio is exactly valued over a limited number of grid points.

For each simulation, the portfolio is valued using a linear interpolation from the exact values at adjoining grid points.
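
A sketch of the grid idea in Python; exact_value stands in for an expensive full-valuation pricing routine and is purely hypothetical:

    import numpy as np

    def exact_value(spot):
        # Placeholder for a costly full-repricing routine (hypothetical payoff).
        return np.maximum(spot - 100.0, 0.0) + 0.05 * spot

    # Exact valuation only at a limited number of grid points...
    grid_spots = np.linspace(50.0, 150.0, 11)
    grid_values = exact_value(grid_spots)

    # ...then every simulated scenario is valued by linear interpolation,
    # breaking the link between the number of draws and the number of repricings.
    rng = np.random.default_rng(1)
    simulated_spots = rng.normal(100.0, 10.0, 50_000)
    simulated_values = np.interp(simulated_spots, grid_spots, grid_values)
    pnl = simulated_values - np.interp(100.0, grid_spots, grid_values)
    print(-np.quantile(pnl, 0.01))    # 99% VAR of the interpolated P&L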

Page 17: Value at risk


Backtesting is done to check the accuracy of the model.

It should be done in such a way that the likelihood of catching biases in VAR forecasts is maximized.

A longer horizon reduces the number of independent observations and thus the power of the tests.

Too high a confidence level reduces the expected number of observations in the tail and thus the power of the tests.

For the internal models approach, the Basel Committee recommends a 99% confidence level over a 10-business-day horizon.

The resulting VAR is multiplied by a safety factor of 3 to arrive at the minimum regulatory capital.

Backtesting
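
A sketch of a simple exception-count backtest; the P&L series is simulated here, and the binomial check is one standard way (not necessarily the author's) of judging whether the number of exceptions is too high:

    import numpy as np
    from scipy.stats import binom

    def backtest(daily_pnl, daily_var, confidence=0.99):
        # An "exception" is a day whose loss exceeds the VAR forecast.
        exceptions = int(np.sum(daily_pnl < -daily_var))
        n = len(daily_pnl)
        expected = n * (1 - confidence)        # about 2.5 for 250 days at 99%
        # Probability of at least this many exceptions if the model is correct.
        p_value = 1 - binom.cdf(exceptions - 1, n, 1 - confidence)
        return exceptions, expected, p_value

    rng = np.random.default_rng(7)
    pnl = rng.normal(0.0, 1.0, 250)            # assumed daily P&L series
    print(backtest(pnl, daily_var=2.33))       # 2.33 is the 99% normal quantile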

Page 18: Value at risk


As the confidence level increases, the number of occurrences below VAR shrinks, leading to poor measures of high quantiles.

There is no simple way to estimate a 99.99% VAR from the sample because it has too few observations.

Shorter time intervals create more data points and facilitate more effective backtesting.

Page 19: Value at risk


Simulation methods are quite flexible.

They can either postulate a stochastic process or resample from historical data.

They allow full valuation on the target date.

But they are prone to model risk and sampling variation.

Greater precision can be achieved by increasing the number of replications but this may slow the process down.

Choosing the method

Page 20: Value at risk


For large portfolios where optionality is not a dominant factor, the delta-normal method provides a fast and efficient way of measuring VAR.

For fast approximations of option values, the delta-gamma method is efficient.

For portfolios with substantial option components, or longer horizons, a full valuation method may be required.

Page 21: Value at risk


If the stochastic process chosen for the price is unrealistic, so will be the estimate of VAR.

For example, the geometric Brownian motion model adequately describes the behaviour of stock prices and exchange rates but not that of fixed income securities.

In Brownian motion models, price shocks are never reversed and prices move as a random walk.

This cannot be the price process for default-free bond prices, which must converge to their face value at maturity.
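
A short sketch of the geometric Brownian motion process referred to above; the parameters are illustrative:

    import numpy as np

    def gbm_path(s0, mu, sigma, n_days, rng):
        # Shocks accumulate and are never pulled back, so the log-price follows
        # a random walk -- unlike a default-free bond, whose price must converge
        # to face value at maturity.
        dt = 1.0 / 252.0
        shocks = rng.normal((mu - 0.5 * sigma ** 2) * dt, sigma * np.sqrt(dt), n_days)
        return s0 * np.exp(np.cumsum(shocks))

    rng = np.random.default_rng(3)
    print(gbm_path(100.0, mu=0.05, sigma=0.2, n_days=252, rng=rng)[-1])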

Page 22: Value at risk


VAR Applications

- Passive (reporting risk): disclosure to shareholders, management reports, regulatory requirements.

- Defensive (controlling risks): setting risk limits.

- Active (allocating risk): performance evaluation, capital allocation, strategic business decisions.

Page 23: Value at risk


VAR methods represent the culmination of a trend towards centralized risk management.

Many institutions have started to measure market risk on a global basis because the sources of risk have multiplied and volatility has increased.

A portfolio approach gives a better picture of risk than looking at different instruments in isolation.

Page 24: Value at risk


Centralization makes sense for credit risk management too.

A financial institution may have myriad transactions with the same counterparty, coming from various desks such as currencies, fixed income, commodities and so on.

Even though each desk may have a reasonable exposure when considered on an individual basis, these exposures may add up to an unacceptable risk.

Also, with netting agreements, the total exposure depends on the net current value of the contracts covered by the agreements.

None of these steps is possible in the absence of a global measurement system.

Page 25: Value at risk


Institutions which will benefit most from a global risk management system are those which are exposed to:

- diverse risks

- active position taking / proprietary trading

- complex instruments

Page 26: Value at risk


VAR is a useful information reporting tool.

Banks can disclose their aggregated risk without revealing their individual positions.

Ideally, institutions should provide summary VAR figures on a daily, weekly or monthly basis.

Disclosure of information is an effective means of market discipline.

Page 27: Value at risk


VAR is also a useful risk control tool.

Position limits alone do not give a complete picture.

The same position limit may be more risky on a 30-year Treasury than on a 5-year Treasury.

VAR limits can supplement position limits.

In volatile environments, VAR can be used as the basis for scaling down positions.

VAR acts as a common denominator for comparing various risky activities.

Page 28: Value at risk


VAR can be viewed as a measure of risk capital, or economic capital, required to support a financial activity.

The economic capital is the aggregate capital required as a cushion against unexpected losses.

VAR helps in measuring risk adjusted return.

Without controlling for risk, traders may become reckless.

If the trader makes a large profit, he receives a large bonus.

If he makes a loss, the worst that can happen is that he will get fired.

Page 29: Value at risk


The application of VAR in performance measurement depends on its intended purposes.

Internal performance measurement aims at rewarding people for actions they have full control over.

The individual/undiversified VAR seems the appropriate choice.

External performance measurement aims at allocating existing or new capital to existing or new business units.

Such decisions should be based on marginal and diversified VAR measures.

Page 30: Value at risk


VAR can also be used at the strategic level to identify where shareholder value is being added throughout the corporation.

VAR can help management take decisions about which business lines to expand, maintain or reduce, as well as about the appropriate level of capital to hold.

Page 31: Value at risk


A strong capital allocation process produces substantial benefits.

The process almost always leads to improvements.

Finance executives are forced to examine prospects for revenues, costs and risks in all their business activities.

Managers start to learn things about their business they did not know.

Page 32: Value at risk


EVT extends the central limit theorem, which deals with the distribution of the average of independent, identically distributed variables drawn from an unknown distribution, to the distribution of their tails.

The EVT approach is useful for estimating tail probabilities of extreme events.

For very high confidence levels (>99%), the normal distribution generally underestimates potential losses.

Extreme Value Theory (EVT)

Page 33: Value at risk


Empirical distributions suffer from a lack of data in the tails.

This makes it difficult to estimate VAR reliably.

EVT helps us to draw smooth curves through the extreme tails of the distribution based on powerful statistical theory.

In many cases the t distribution with 4-6 degrees of freedom is adequate to describe the tails of financial data.
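
To see why the normal distribution underestimates extreme losses, compare its 99.9% quantile with that of a Student-t with 5 degrees of freedom (a value inside the 4-6 range mentioned above), rescaled to unit variance; the comparison is our illustration, not the author's:

    from scipy.stats import norm, t

    confidence = 0.999
    df = 5
    z_normal = norm.ppf(confidence)
    # Rescale the t quantile so both distributions have unit variance.
    z_t = t.ppf(confidence, df) * ((df - 2) / df) ** 0.5
    print(z_normal, z_t)   # the fat-tailed t implies a noticeably larger VAR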

Page 34: Value at risk


- EVT applies to the tails; it is not appropriate for the centre of the distribution.

- It is also called a semi-parametric approach.

- The EVT theorem was proved by Gnedenko in 1943.

- EVT helps us to draw smooth curves through the tails of the distribution.

Page 35: Value at risk


EVT Theorem

F(y) = 1 – (1 + ξy)^(–1/ξ),  ξ ≠ 0

F(y) = 1 – e^(–y),  ξ = 0

where y = (x – µ)/β and β > 0.

The normal distribution corresponds to ξ = 0, in which case the tails disappear at exponential speed.
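
A direct evaluation of the limiting distribution above shows how ξ > 0 produces the fatter tail (a small sketch; the specific numbers are arbitrary):

    import math

    def evt_cdf(y, xi):
        # F(y) = 1 - (1 + xi*y)**(-1/xi) for xi != 0, and 1 - exp(-y) for xi = 0.
        if xi == 0:
            return 1.0 - math.exp(-y)
        return 1.0 - (1.0 + xi * y) ** (-1.0 / xi)

    for xi in (0.0, 0.2):
        # Tail probability 1 - F(y); the xi > 0 case decays much more slowly.
        print(xi, 1.0 - evt_cdf(5.0, xi))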

Page 36: Value at risk


EVT Estimators

[Figure: the fitted EVT tail versus the normal tail, for cumulative tail probabilities between 0% and 2%.]

Page 37: Value at risk


Fitting EVT functions to recent historical data is fraught with the same pitfalls as VAR.

Once in a lifetime events cannot be taken into account even by powerful statistical tools.

So they need to be complemented by stress testing.

The goal of stress testing is to identify unusual scenarios that would not occur under standard VAR models.

Stress tests can simulate shocks that have never occurred or have been considered highly unlikely.

Stress tests can also simulate shocks that reflect permanent structural breaks or temporarily changed statistical patterns.

Page 38: Value at risk


Stress testing should be enforced, but the problem is that the stresses need to be pertinent to the type of risk the institution has.

It would be difficult to enforce a limited number of relevant stress tests.

The complex portfolio models banks generally employ give the illusion of accurate simulation at the expense of substance.

Page 39: Value at risk


The tendency of risk managers and other executives to describe events in terms of ‘sigma’ tells us a lot.

Whenever there is talk about sigma, it implies a normal distribution.

Real life distributions have fat tails.

Goldman Sachs’ chief financial officer David Viniar once described the credit crunch as “a 25-sigma event”.

How effective are VAR models? VAR and subprime

Page 40: Value at risk


The credit crisis of late 2007 was largely a failure of risk management.

Risk models of many banks were unable to predict the likelihood, speed or severity of the crisis.

Attention turned particularly to the use of value-at-risk as a measure of the risk involved in a portfolio.

While a few VAR exceptions are expected – at the 99% level, a properly working model would still produce two to three exceptions a year – the existence of clusters of exceptions indicates that something is wrong.

Page 41: Value at risk


Credit Suisse reported 11 exceptions at the 99% confidence level in the third quarter, Lehman Brothers three at 95%, Goldman Sachs five at 95%, Morgan Stanley six at 95%, Bear Stearns 10 at 99% and UBS 16 at 99%.

Clearly, VAR is a tool for normal markets and it is not designed for stress situations.


Page 42: Value at risk


What window?

It would have been difficult for VAR models to have captured all the recent market events, especially as the environment was emerging from a period of relatively benign volatility.

A two-year window won’t capture the extremes, so the VAR it produces will be too low.

A longer window is a partial solution at best.

It will improve matters a little, but it also swamps recent events.

Page 43: Value at risk


Is a shorter window better?

A longer observation period may pick up a wider variety of market conditions, but it would not necessarily allow VAR models to react quickly to an extreme event.

If the problem is that models are not reacting fast enough, some believe the answer would in fact be to use shorter windows.

These models would be surprised by the first outbreak of volatility, but would rapidly adapt.

Page 44: Value at risk


What models work best?

The best VAR models are those that are quicker to react to a step-change in volatility.

With the benefit of hindsight, the type of VAR model that would actually have worked best in the second half of 2007 would most likely have been one driven by a frequently updated short data history, or one that weights more recent observations more heavily than more distant observations.
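
One standard way to weight recent observations more heavily is an exponentially weighted moving average of squared returns; the RiskMetrics-style λ = 0.94 used here is our choice of example, not the author's recommendation:

    import numpy as np

    def ewma_volatility(returns, lam=0.94):
        # Recent squared returns get weight (1 - lam); older ones decay
        # geometrically, so the estimate reacts quickly to a volatility spike.
        var = returns[0] ** 2
        for r in returns[1:]:
            var = lam * var + (1 - lam) * r ** 2
        return np.sqrt(var)

    rng = np.random.default_rng(11)
    calm = rng.normal(0, 0.005, 200)      # benign period
    stress = rng.normal(0, 0.03, 20)      # sudden step-change in volatility
    print(ewma_volatility(np.concatenate([calm, stress])))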

Page 45: Value at risk


In an environment like the third quarter of 2007, a long data series will include an extensive period of low volatility, which will mute the model’s reaction to a sudden increase in volatility.

Although it will include episodes of volatility from several years ago, these will be outweighed by the intervening period of calm.

Page 46: Value at risk


The importance of updating

In the wake of the recent credit crisis, an unarguable improvement seems to be increasing the frequency of updating.

Monthly or even quarterly updating of the data series is the norm.

Shifting to weekly or even daily updating would improve the responsiveness of the model to a sudden change of conditions.