Libor Market Models: the reasons behind the success
A focus on calibration
Introduction
Market models have become a standard in the banking industry.
This success is attested by the number of publications on the subject (not to mention the conferences…)
The repeated efforts to transpose this methodology to other underlyings (credit, inflation) are another remarkable sign.
Standard arguments cannot explain this phenomenon. The ability of these models to capture rate curve dynamics is more than questionable. Their implementation is demanding and their computational cost is high (even though optimization techniques have been developed).
Exploring the reasons behind this success is very enlightening. As we will see, tractability, readability, and flexibility are the keywords for understanding the popularity of the LMM framework. But some reasons may not be as bright as one could expect… Success also has a lot to do with calibration, but new classes of derivatives raise new challenges which may prove difficult to address in this framework.
Objectives
Our intention in this brief presentation is twofold.
Firstly, clarify the reasons why this modeling framework has achieved such success in the industry despite its strong limitations.
Naturally, there are many solutions to circumvent the limitations, but exposing these techniques will not be our intention here.
Instead, we would like to detail in an honest and practical way the reasons for this remarkable success - including the most questionable reasons.
Secondly, explore calibration as a key to this success.
Again, exploring advanced numerical techniques will not be our purpose.
Instead, we would like to expose some practical issues on calibration, with an evocation of the new challenges LMM are now confronted with.
The underlying objective is to provide some insight on how a model is used in the derivatives department of a bank.
Outline
1. LMM: the reasons behind the success
Libor Market Models…
… have a tremendous success…
… the reasons of which need to be explained in detail.
2. Calibration: practical issues and new challenges
The calibration process raises many delicate questions…
… and LMM offer good control…
… but this flexibility may reach its limits with new-generation products.
The reasons behind the success
1. The libor market model framework
2. The marks of success
3. Exploring the reasons…
The framework: what are we talking about?
LMM dynamics
LMM parameters
The model is entirely characterized by volatility functions:
It is sometimes represented with scalar notations:
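For reference, the standard lognormal LMM (BGM) dynamics, in both vector and scalar notation (the indexing convention below is ours):

```latex
% Vector notation: L_i(t) = L(t; T_i, T_{i+1}) with volatility function sigma_i
dL_i(t) = L_i(t)\,\mu_i(t)\,dt + L_i(t)\,\boldsymbol{\sigma}_i(t)\cdot dW(t)

% Scalar notation with instantaneous correlations
\frac{dL_i(t)}{L_i(t)} = \mu_i(t)\,dt + \sigma_i(t)\,dW_i(t),
\qquad d\langle W_i, W_j \rangle_t = \rho_{ij}\,dt

% Under its own forward measure Q^{T_{i+1}}, each Libor is driftless:
dL_i(t) = L_i(t)\,\sigma_i(t)\,dW_i^{\,Q^{T_{i+1}}}(t)
```

The model is thus entirely characterized by the volatility functions σᵢ(t) and the correlations ρᵢⱼ.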
What are we talking about? (continued)
In preparation for the forthcoming sections, we propose an “HJM-biased” presentation of BGM:
This is a trivial observation, but it will be useful to understand what is new and what is not when switching from classical models to market models.
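To make the observation explicit: writing the zero-coupon dynamics as dB(t,T)/B(t,T) = r dt + Γ(t,T)·dW(t) and using L_i(t) = (1/δ_i)(B(t,T_i)/B(t,T_{i+1}) − 1), Itô's lemma gives:

```latex
L_i(t)\,\boldsymbol{\sigma}_i(t)
  = \frac{1 + \delta_i L_i(t)}{\delta_i}\,
    \bigl(\Gamma(t,T_i) - \Gamma(t,T_{i+1})\bigr)
```

So requiring a deterministic Libor volatility σᵢ is just a particular, state-dependent specification of the HJM bond volatilities.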
This framework has become a standard.
These models are commonly used for pricing most exotic interest rate derivatives.
And there are more interesting signs of popularity…
First sign: when dealing with Bermudan options, the industry has preferred to explore new numerical techniques rather than change the model
Bermudan MC techniques (estimation of the continuation value)
Markovian approximations for PDE implementation (estimation of the drift)
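The continuation-value estimation can be sketched on a toy example: a Bermudan put on a single lognormal asset (not a full LMM), using the Longstaff-Schwartz regression approach with a quadratic polynomial basis. All names and parameter values here are illustrative:

```python
import numpy as np

def bermudan_put_ls(S0, K, r, sigma, T, n_ex, n_paths, seed=0):
    """Longstaff-Schwartz lower bound for a Bermudan put on one lognormal
    asset: the continuation value is estimated by regressing discounted
    future cashflows on a quadratic polynomial of the spot."""
    rng = np.random.default_rng(seed)
    dt = T / n_ex
    # Simulate paths at the n_ex exercise dates dt, 2*dt, ..., T.
    z = rng.standard_normal((n_paths, n_ex))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)  # exercise value at the last date
    for t in range(n_ex - 2, -1, -1):
        cash *= np.exp(-r * dt)           # discount back one exercise date
        itm = K - S[:, t] > 0.0           # regress on in-the-money paths only
        if itm.sum() > 10:
            x = S[itm, t]
            cont = np.polyval(np.polyfit(x, cash[itm], 2), x)
            exercise = (K - x) > cont
            cash[itm] = np.where(exercise, K - x, cash[itm])
    return np.exp(-r * dt) * cash.mean()  # discount from the first date to 0
```

The same regression idea carries over to LMM Monte Carlo, with the regressors chosen among the simulated rates.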
Second sign: when facing the limitations of the model, the industry has preferred to extend the model rather than totally change the framework.
Stochastic-volatility and local-volatility extensions
Multiple-currency extension
A natural question arises: what is the rationale behind this success story?
The marks of success
What the reasons could be… but are not
Does LMM properly capture interest rate dynamics?
Nobody sincerely believes in the lognormal dynamics of the forward curve.
Statistical observations suggest the presence of jumps, regimes, etc.
Importantly, implied volatilities exhibit non-trivial smiles.
More globally, the deterministic volatility/correlation assumption is very restrictive.
This assumption yields little control on joint moves of the parts of the curve.
Finally, using Brownian motions is questionable.
Does the model allow an easy implementation?
LMM are not, strictly speaking, Markovian.
Even though satisfying Markovian approximations are attainable, the natural tool for implementing this model is a rather heavy Monte Carlo simulation.
A naïve implementation is consequently very costly.
A naïve implementation precludes backward valuation (PDE schemes).
Markovian approximations require the estimation of intricate conditional expectations.
What the reasons should be… and are indeed
Tractability: an overemphasized argument?
It is the most frequent argument and it is a strong one indeed: dealing with market quantities is very convenient.
But in our opinion, this argument is a bit overemphasized: most traditional models can be rewritten in a more convenient form.
Readability : the “Gaussian process view”
A model can now be characterized through the forward covariance cube (with T₋₁ = 0):
Linear combinations of such elements can be interpreted as swaption variances or caplet variances, spot or forward. For example:
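A sketch of this “Gaussian process view” (notation ours: σᵢ the volatility of Lᵢ, ρᵢⱼ the instantaneous correlation):

```latex
% Forward covariance cube (with T_{-1} = 0)
C_{i,j,k} = \int_{T_{k-1}}^{T_k} \sigma_i(t)\,\sigma_j(t)\,\rho_{ij}(t)\,dt

% Spot caplet variance for fixing date T_i:
v_i^2\,T_i = \sum_{k \le i} C_{i,i,k}

% Approximate (frozen-weights) swaption variance, exercise date T_{k_0}:
V \approx \sum_{i,j} \hat{w}_i\,\hat{w}_j \sum_{k \le k_0} C_{i,j,k},
\qquad \hat{w}_i = \frac{w_i(0)\,L_i(0)}{S(0)}
```

Forward variances are obtained by summing over k between two exercise dates instead of starting at 0.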
What the reasons should be… and are indeed (continued)
Simplicity: the flexibility of HJM with an intuitive parameterization
Naturally, LMM are a particular case of HJM (hopefully).
HJM is based on a full volatility surface; BGM works with a vector of volatilities.
In this perspective, market models fill the conceptual gap between:
The classical, simple models - too restrictive
The HJM framework - too general
Note that it is often stated that the major break is the log-normality of rates.
Log-normality is definitely an essential feature of (the first version of) LMM.
But it is useful to understand that the major change attached to LMM is a reduction in the complexity of parameterization.
In this perspective, it is crucial to see the “continuity” with classical models.
For example, the Hull-White model can be presented as a (displaced) BGM:
Displacement: 1/δᵢ
Volatility: deterministic, of dimension 1
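A sketch of the mapping, under the assumption of Hull-White bond volatilities Γ(t,T) = −(σ/a)(1 − e^{−a(T−t)}): the shifted rate Lᵢ + 1/δᵢ is then lognormal with a deterministic, one-dimensional volatility,

```latex
\frac{d\bigl(L_i(t) + 1/\delta_i\bigr)}{L_i(t) + 1/\delta_i}
  = \frac{\sigma}{a}\,\bigl(e^{-a(T_i - t)} - e^{-a(T_{i+1} - t)}\bigr)\,
    dW^{Q^{T_{i+1}}}(t)
```

i.e. a one-factor displaced BGM with displacement 1/δᵢ.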
What the reasons should not be… and are anyway
Familiarity with Black-Scholes
The LMM framework allows one to think of the curve as a (highly correlated) basket.
Each Libor follows a BS-type diffusion under its martingale measure.
Familiarity with BS in terms of possible extensions, robustness, etc, can thus be transposed to the interest rates world.
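This transposition can be made concrete: under its forward measure, the caplet on Lᵢ is priced by the Black-76 formula. A minimal sketch (parameter values in the usage below are illustrative):

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_caplet(L0, K, sigma, T, delta, df_pay):
    """Black-76 caplet: L0 = forward Libor, K = strike, sigma = Black vol,
    T = fixing date, delta = accrual fraction, df_pay = discount factor
    to the payment date T + delta."""
    if sigma <= 0.0 or T <= 0.0:
        return df_pay * delta * max(L0 - K, 0.0)
    sd = sigma * sqrt(T)
    d1 = (log(L0 / K) + 0.5 * sd * sd) / sd
    d2 = d1 - sd
    return df_pay * delta * (L0 * norm_cdf(d1) - K * norm_cdf(d2))
```

Everything a trader knows about Black-Scholes greeks and robustness then applies caplet by caplet.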
Familiarity with Gaussian Calculus
Correlation as a characterization of interdependence is poor but convenient.
Same observation for the variance as a characterization of dispersion.
Simplicity of MC schemes
LMM are more naturally suited to MC schemes (even though it is not compulsory)
The industry is very prone to implement generic solutions and such solutions are more rapidly attained with a simulation approach (it can be delegated to non-specialists).
To some extent, it is the simplest choice (from an organizational point of view).
Conclusion (of part 1)
Does the success of LMM result from an educational bias in the quants/traders community?
To some extent, the answer is yes.
But it is not shocking: pricing models are meant to serve as decision-making tools and should be adapted to their users.
And there is more to it than that…
To fully appreciate this success, one has to understand the very role of a model in a trading room.
Actually, it is a rather modest role.
Interpolate available information (pricing)
Connect risks from different sources (hedging)
But for this role, calibration is critical.
The calibration set can be thought of as a choice of interpolation points.
The model and its parameterization can be thought of as a choice of interpolation method.
This “interpolation” analogy is not very convincing, but it helps one understand why one should not expect too much from a model.
Calibration
1. The questions behind calibration
2. LMM and calibration: the perfect match?
3. New products, new challenges...
Calibration in practice
The steps for calibration
Model parameterization
Determination of constraints (target instruments)
Choice of calibration mode (cascade vs. global)
Numerical methods (inversion / minimization)
These steps express specific views on risk management issues
Curve and volatility dynamics
Product risk factor analysis
Risk diversification of trading portfolios
Computation time capacity
Structure of the market in terms of products & risks
Again, our intention is to expose the beliefs hidden in the calibration process before exploring the virtues of LMM.
Parameterization
It is an expression of a view (or an intention) on curve dynamics
How does one expect the curve and, as importantly, its dispersion structure to evolve?
Using the interpolation analogy, this question reads: what information should one retrieve from the interpolated points?
A quick review on volatility
Notations: t = time of observation; T = fixing date of the underlying rate
Function of t: time-dependent structures (like short-rate models)
Function of T: underlying-dependent structures (equity-like models)
Function of T-t: stationary structures (not low-dimensional Markovian)
Mixtures of such forms are commonly used (example: stationary with scaling)
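As an illustration of the stationary case, a common parametric choice (Rebonato's “abcd” form) writes the instantaneous volatility as a function of the time-to-fixing; here is a minimal sketch, with purely illustrative parameter values, including the “stationary with scaling” variant:

```python
import math

def abcd_vol(tau, a=0.05, b=0.09, c=0.44, d=0.11):
    """Stationary instantaneous volatility as a function of the
    time-to-fixing tau = T - t.  Parameter values are illustrative."""
    return (a + b * tau) * math.exp(-c * tau) + d

def scaled_vol(i, tau, phis, **params):
    """'Stationary with scaling': sigma_i(t) = phi_i * g(T_i - t),
    one scaling factor phi_i per Libor."""
    return phis[i] * abcd_vol(tau, **params)
```

The hump near the short end and the long-run level d are the features one typically wants to control.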
Example: callable products
Forward volatility is one of the key risk factors.
Consequently, using non-stationary structures is dangerous.
But it may be a choice when one knows the bias of the model.
Parameterization (continued)
Naturally the same kind of taxonomy holds for correlation.
Choosing the rank of the model is the first delicate question
Empirical evidence suggests not exceeding rank three, but some nice parametric forms impose full-rank correlation matrices.
Then all the questions regarding stationarity, time-dependence, etc. need to be addressed.
This choice should be dictated by the volatility structure (in practice, only the covariance really matters, so mixing different choices is dangerous).
Extensions of LMM require additional parameterization
Calibration targets
It is an expression of a view on products’ risk factors.
What points in the market should be considered relevant for the pricing of the structured product?
Using the interpolation analogy, this question reads: which points should we interpolate from (apart from the curve itself)?
Example: Bermudan swaption
It is natural to calibrate on underlying swaptions.
However, the main exotic risk factor is forward volatility.
Some may think that the spread between cap and swaption volatilities says something about forward volatility (under correlation assumptions).
In this perspective, using caplets makes sense for calibration.
In more sophisticated products, the choice is highly non-trivial. For instance, callable CMS-spread products have at least three obvious risks. This sometimes pushes for a more global approach.
Calibration mode: global vs. cascade calibration
It is an expression of organizational choices
It is very closely related to the determination of the calibration set.
But it may also be very related to the structure of the business and to the level of sophistication of the persons in charge of quotation.
Principles
Global calibration consists in using an arbitrary set of vanilla instruments as calibration targets (typically a whole set of caps and swaptions)
Cascade calibration consists in solving a series of one-dimensional problems (based on a specific parameterization of the model)
Implications
A global calibration is well suited to an organization where a high level of accuracy is not required for each price but where a large number of quotations are addressed.
In this case, once calibrated, the model may be shared for distinct quotations
A local calibration is typically more adapted when transparent risk reports and high accuracy are mandatory.
In this case, the model will typically be recalibrated at each quotation.
Calibration mode: global vs. cascade calibration (continued)
Pros and cons of global calibration
It avoids the complex questions regarding risk factor analysis (is it a good thing?)
It allows using a unique model for a wide range of products, ensuring some consistency in risk analysis reports
But it is computationally costly (global minimization schemes)
It is sometimes numerically unstable (due to the existence of local minima)
Risk reports (deltas, vegas, etc) may prove difficult to decipher.
Pros and cons of cascade calibration
It is easy, fast and robust (because of dimension one)
It often implies that models are product-dependent
It requires a thorough analysis of product risk factors
Calibration algorithm: numerical choices
It is an expression of skills, but of the environment as well.
Obviously, numerical methods depend on the quantitative talent inside the institution.
But in many situations, they also reflect the structure of the market.
And they are naturally related to technological constraints inside the institution.
Naturally, algorithmic choices depend on the previous steps (parameterization, constraints, calibration mode, etc.):
Valuation formulae for the target instruments
Root-finding or minimization methods
But the market (and technological) environment may be determining: in a market where strong risk diversification is allowed, an institution may prefer to resort to a global approach, using a global model with a heavy calibration procedure (where a unique model can be shared across many quotations).
However, structured products markets are often one-way markets (clients always trade the same side for a given exotic risk), which rather pushes for product-adapted (“on the fly”) models. In this case, fast and unbiased formulae are required.
LMM and calibration
We rapidly exposed the successive steps in the calibration process:
Parameterization
Selection of constraints
Selection of a methodology (cascade or global)
Selection of formulae and numerical schemes
Now, it is interesting to explore why the LMM outmatches other models in this process.
What particularities does LMM bring into the process?
What makes LMM so easy to use?
For this purpose, we briefly review each step in the process.
LMM and calibration: parameterization
LMM offer a clear view on volatility
The model directly characterizes the volatility functions of the Libors, which are the direct underlyings of vanilla caplets.
Consequently, volatility forms have a clear interpretation in terms of the evolution of caplet implied volatilities.
LMM offer a clear distinction between volatility and correlation risks
Most calibration constraints can be thought of as basket option problems.
Even spread options are easy to handle in this framework
In particular, the distinction between caps and swaptions can be thought of as the combination of a question of time distribution of volatility and a question of correlation.
In practice…
Simple is beautiful: piecewise-constant functions are OK.
Readability is critical: starting with something reasonably stationary is wise
LMM and calibration: constraints
Naturally, the model has little to do with this stage
Ideally, calibration constraints are a question of product risks, not model properties
It is indeed dangerous to have preconceptions regarding the model.
LMM do not impose as many limitations as classical models do
Their flexibility allows considering many constraints without losing too much accuracy.
Besides, the built-in calibration of caplets is remarkable (as long as structured Libor swap legs are involved, this is a very nice feature).
In practice…
If a global minimization scheme is used, the whole caps/swaptions matrix is used
Otherwise, depending on the product, it might be a column of caplets, a column of swaptions, a diagonal of swaptions, etc (typically a combination of these).
LMM and calibration: calibration mode
Whether global or local, LMM calibration proves very adaptable
Global calibration can be expressed in simple terms
Swaptions and caps impose constraints on the covariance cube.
Using standard approximations, these constraints have a quadratic form
In the end, the problem can be expressed in terms of semi-definite programming, for which abundant literature can be found
But cascade calibration is more interesting…
A simple example is caplet column calibration with stationary volatility
Assume stationarity
Write the constraints:
Then solving the problem is a trivial bootstrapping.
This is exactly where LMM are strong: a fully stationary volatility column can be obtained without effort (and regardless of constraints on correlation parameters).
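A sketch of this bootstrap, assuming piecewise-constant stationary volatility and a single column of caplet implied vols (function and variable names are ours):

```python
import math

def bootstrap_stationary_vols(caplet_vols, fixings):
    """Bootstrap piecewise-constant stationary volatilities from a column
    of caplet implied vols.  Caplet i has total variance v_i^2 * T_i equal
    to the sum of the segment variances up to T_i, so each segment is read
    off recursively (the total variance must be non-decreasing)."""
    seg_vols = []
    prev_var, prev_T = 0.0, 0.0
    for v, T in zip(caplet_vols, fixings):
        total_var = v * v * T
        seg_var = (total_var - prev_var) / (T - prev_T)
        if seg_var < 0.0:
            raise ValueError("decreasing total variance: no real solution")
        seg_vols.append(math.sqrt(seg_var))
        prev_var, prev_T = total_var, T
    return seg_vols
```

A flat caplet column reproduces a flat segment structure exactly, which is the readability check one typically runs first.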
An example of cascade calibration
LMM allow full calibration of the vanilla matrix. Here we consider the problem:
Targets: the full vanilla matrix (forget about the smile here)
Calibrated parameters: volatilities (in the strict sense)
Fixed parameters: correlations
It is useful to have a nice representation of forward volatility structure:
Armed with this representation, the calibration process is straightforward.
The first element is given by the caplet of exercise date T0.
Then the other elements in the first column are recursively calculated from swaptions of exercise date T0 and maturity Ti for all i ≥ 2.
This entirely defines the first column of the matrix.
Then one can proceed recursively in the same way for the other columns.
LMM and calibration: numerical issues
The calibration of LMM requires using simple and efficient formulae
The standard market formula for swaptions consists in three steps:
Write the swap rate as a function of libors
Write the dynamics of the swap rate
Simplify the expression assuming deterministic weights (freeze expression at forward rates) and log-normality
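The three steps above lead to the well-known frozen-weights (Rebonato-style) approximation for the Black swaption volatility:

```latex
\sigma_{\mathrm{Black}}^2\,T_{\mathrm{ex}}
  \approx \sum_{i,j} \hat{w}_i\,\hat{w}_j\,
    \int_0^{T_{\mathrm{ex}}} \sigma_i(t)\,\sigma_j(t)\,\rho_{ij}\,dt,
\qquad \hat{w}_i = \frac{w_i(0)\,L_i(0)}{S(0)}
```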
This is a very practical and intuitive approach, but
It has a limited scope: swaptions only
It does not allow computing convexity adjustments (for CMS options calibration)
A slightly more general approach may thus prove useful for a more ambitious calibration...
An alternative formula for calibration
New products, new challenges
We have explored the advantages of LMM for calibration
Intuitive parameters, especially in terms of caplet implied volatility.
Flexible parameterization, with a quadratic expression of constraints
Feasibility of powerful cascade calibration
Existence of simple and accurate formulae
This positive image would be deceptive if we ignored the challenges imposed by new classes of products.
Structured swaps with multiple underlyings (Libors, CMS)
Popularity of products with an exposure to the slope of the curve (CMS-spreads)
We will use an example: “lock-up on CMS spread”
New products, new challenges (continued)
Product description
The product is a structured leg (embedded in a structured swap).
Each quarter, the client receives the spread CMS10y – CMS2y (with a leverage and a shift), floored at some strike: MAX(ADDITOR + LEVERAGE * CMS-SPREAD, STRIKE).
When the CMS-SPREAD exceeds some LIMIT, the coupon becomes fixed at a predetermined level until maturity.
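A sketch of one coupon of this payoff. The parameter values are illustrative, and we assume the lock applies from the coupon following the trigger (the term sheet's exact convention may differ):

```python
def lockup_coupon(spread, locked, additor=0.001, leverage=4.0,
                  strike=0.0, limit=0.03, lock_level=0.05):
    """One coupon of the structured leg.  Returns (coupon, locked).
    Hypothetical convention: the trigger coupon itself is paid at the
    formula level, and the lock applies from the next coupon onwards."""
    if locked:
        return lock_level, True
    coupon = max(additor + leverage * spread, strike)
    return coupon, spread > limit
```

Iterating this function along a simulated CMS-spread path makes the inter-temporal dependence explicit: whether the lock fires depends on the joint law of successive spreads.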
Risk factor analysis
Naturally, the implied volatility of the CMS-spread is essential.
But the trigger mechanism implies a binary risk (end of the structured leg), triggered by a spread.
The magnitude of the binary risk is determined by the mark-to-market of the residual leg.
Consequently, the exotic risks are the forward volatility and correlation, and the inter-temporal correlation between CMS-spreads.
Challenges in terms of parameterization
Provide a parameterization with some control on this “second-order” correlation.
Once this has been achieved, understand how classical model extensions (to account for smile) affect or do not affect the conclusions.
Conclusion
LMM are not perfect, but who needs a perfect model?
Models are important (heavy decisions at stake)…
… but not that important (to some extent, models work as interpolation tools)
Above all, there is no such thing as the “perfect” model (believing in one is dangerous)
LMM are not trivial, but who needs a trivial model?
Products are sophisticated and sophisticated models are required to price and hedge them
Computation cost is not as critical as it used to be.
Traders have a high level of sophistication (most are ex-quants)
So what do we need exactly?
A model that is naturally adapted to the human, organizational, and technical environment
A model that allows flexible calibration (flexible enough to keep up the pace of the evolution of the market in terms of payoff sophistication and risk complexity)
Conclusion (continued)
This is where the value of LMM lies: they are well adapted…
…to the people (educational bias on BS, taste for simplicity)
…to the market (in terms of information to calibrate to and in terms of products to price)
…to the technology (computational capacity increases rapidly)
What is critical is calibration, and LMM do more than well on this side
LMM bring the sophistication of HJM within the reach of non-specialists.
They allow flexible, accurate and rich calibration while keeping everything intuitive and simple.
Intuitive enough? As far as new generation products are concerned, the heralded “interpretability” of LMM may soon reach its limits…
Questions & Answers